\section{Introduction} Here I consider the problem of establishing the identifiability of the standard parametrisation of a generalized linear mixed model (GLMM). The concept of identifiability that I use here is the following (see Bickel and Doksum, 1977, page 60, Lehmann, 1983, and J{\o}rgensen and Labouriau, 2012). Consider a statistical model \begin{eqnarray} \label{Eq00} \mathcal{P} = \left \{ P_\theta : \theta \in \Theta \right \} \end{eqnarray} parametrised by $\theta $, {\it i.e.} $\mathcal{P}$ is a family of probability distributions defined on the same measurable space and indexed by the parameter $\theta$. The parametrisation used in (\ref{Eq00}) is {\it identifiable} when the mapping $\theta \mapsto P_\theta$ is one-to-one, {\it i.e.} \begin{eqnarray} \nonumber \theta_1 \ne \theta_2 \Longrightarrow P_{\theta_1} \ne P_{\theta_2} \, , \end{eqnarray} for each $ \theta_1$ and $ \theta_2$ in $\Theta$. Without loss of generality we consider a GLMM with a single random component, say $U$, and one fixed effect $x$. Denote by $Y$ the random variable representing the response for one observation. I assume that, conditionally on $U$, $Y$ is distributed according to an exponential dispersion model and that \begin{eqnarray} \label{Eq001} E \left ( Y \vert U=u \right ) = h \left ( x\beta + u \right ) \,\, , \end{eqnarray} where $\beta $ is a parameter and $h $ is the response function ({\it i.e.} the inverse of the link function), which is assumed to be monotone ({\it i.e.} increasing or decreasing) and smooth. The random component is assumed to be continuously distributed with expectation $0$ and variance $\sigma^2$ (typically assumed to be normally distributed) and to have a probability density of the form $\phi (\, \cdot \, / \sigma)$. 
Furthermore, I assume that \begin{eqnarray} \label{Eq002} Var \left ( Y \vert U=u \right ) = \xi \, V \left [ h \left ( x\beta + u \right ) \right ] \,\, , \end{eqnarray} where $\xi$ is the dispersion parameter and $V$ is the variance function, which maps the mean to the variance (see J{\o}rgensen {\it et al}, 1996, Breslow and Clayton, 1993). The GLMM referred to above is then parametrised by $\theta = (\beta, \sigma^2, \xi )$. I will show that under mild regularity conditions this parametrisation is identifiable. That is, denoting the distribution of $Y$ when the parameter takes the value $\theta = (\beta, \sigma^2, \xi )$ by $P_\theta = P_{\beta, \sigma^2, \xi}$, we will show that, if $\theta_1 = (\beta_1, \sigma^2_1, \xi_1 ) \ne \theta_2 = (\beta_2, \sigma^2_2, \xi_2 ) $, then $P_{\theta_1} \ne P_{\theta_2}$. The proof will be established using two propositions. First, I will show in Proposition 1 that if $\beta_1 \ne \beta_2$ or $\sigma^2_1 \ne \sigma^2_2$ (no matter the values of $\xi_1$ and $\xi_2$), then $P_{\theta_1} \ne P_{\theta_2}$. This proof will use the property (\ref{Eq001}) and the fact that if two distributions have different expectations, then they are not equal. Next, I will show in Proposition 2 that if $\xi_1 \ne \xi_2$, then $P_{\beta, \sigma^2, \xi_1} \ne P_{\beta, \sigma^2, \xi_2}$ for any values of $\beta$ and $\sigma$, which will complete the proof. This second part of the proof will use the relation (\ref{Eq002}) and the fact that if the variances of two distributions are different, then the distributions are different. \section{Preparatory basic calculations} Before embarking on the proof, I calculate the expectation and the variance of $Y$. These calculations will implicitly use the following two regularity conditions. 
For all $k\in\mathbb{R}$ and each $\sigma\in\mathbb{R}_+$, \begin{description} \item {i)} The integral $\int \left [ h ( k + \sigma z )\right ]^2 \phi ( z ) dz$ is finite; \item {ii)} The integral $\int V \left [ h ( k + \sigma z )\right ] \phi ( z ) dz$ is finite. \end{description} Here $\phi $ is the density of the distribution of the random component (typically assumed to be normally distributed) and the integration is over the support of the distribution of the random component, which will be assumed to be $\mathbb{R}$. Under condition i) the expectation of $Y$ is given by \begin{eqnarray} \nonumber E(Y) = E \left [ E(Y \vert U ) \right ] & = & \int_\mathbb{R} h ( x\beta + u) \, \phi (u/\sigma ) \, du \\ \nonumber & & \mbox{(making a change of variable)} \\ \label{Eq0003} & = & 1/\sigma \int_\mathbb{R} h ( x\beta + \sigma z ) \, \phi (z ) \, dz = \mu_{\beta, \sigma} . \end{eqnarray} On the other hand, the variance of $Y$ is given by \begin{eqnarray} \label{eq01} Var (Y) = Var \left [ E(Y\vert U) \right ] + E \left [ Var(Y\vert U) \right ] \, . \end{eqnarray} Now, \begin{eqnarray} \nonumber Var\left [ E(Y\vert U) \right ] & = & Var \left [ h(x\beta + U ) \right ] \\ \nonumber & = & \int_\mathbb{R} h^2 ( x\beta + u ) \phi ( u/\sigma ) du - \left [ \int_\mathbb{R} h( x\beta + u ) \phi ( u/\sigma ) du \right ]^2 \\ \label{eq02} & = & \zeta_{\beta, \sigma} - \mu_{\beta, \sigma}^2 \, , \end{eqnarray} where $\zeta_{\beta, \sigma} = \int_\mathbb{R} h^2 ( x\beta + u ) \phi ( u/\sigma ) du $. Furthermore, \begin{eqnarray} \label{eq03} E \left [ Var (Y\vert U) \right ] & = & \xi \int_\mathbb{R} \, V \left [ h \left ( x\beta + u \right ) \right ]\phi ( u/\sigma ) du = \xi \upsilon_{\beta , \sigma} \, , \end{eqnarray} where $ \upsilon_{\beta , \sigma} = \int_\mathbb{R} \, V \left [ h \left ( x\beta + u \right ) \right ]\phi ( u/\sigma ) du $. 
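As a numerical sanity check (not part of the original argument), the conditional-moment decomposition in (\ref{eq01}) can be verified by simulation in a concrete special case: a Poisson GLMM with logarithmic link, so that $h = \exp$, $V(\mu) = \mu$ and $\xi = 1$, with a normally distributed random component. The parameter values below are illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, sigma, x = 0.5, 0.8, 1.0   # illustrative values, not from the paper

# U ~ N(0, sigma^2);  Y | U ~ Poisson(h(x*beta + U)) with h = exp
u = rng.normal(0.0, sigma, size=2_000_000)
mean_cond = np.exp(x * beta + u)      # E(Y | U) = h(x*beta + U)
y = rng.poisson(mean_cond)

# Law of total variance: Var(Y) = Var[E(Y|U)] + E[Var(Y|U)],
# where Var(Y|U) = xi * V(E(Y|U)) = E(Y|U) for the Poisson case (xi = 1)
lhs = y.var()
rhs = mean_cond.var() + mean_cond.mean()
assert abs(lhs - rhs) / rhs < 0.02
```

The two sides agree up to Monte Carlo error, illustrating the decomposition that the proof of Proposition 2 relies on.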
Inserting (\ref{eq02}) and (\ref{eq03}) in (\ref{eq01}) yields \begin{eqnarray} \label{eq04} Var (Y) = \zeta_{\beta, \sigma} - \mu_{\beta, \sigma}^2 + \xi \upsilon_{\beta , \sigma} \, . \end{eqnarray} \section{Proof of the identifiability} \subsection{Identifiability of the fixed and random effect parameters} \begin{proposition} For each $\theta_1 = (\beta_1, \sigma^2_1, \xi_1)$ and $\theta_2 = (\beta_2, \sigma^2_2, \xi_2)$ such that $\beta_1 \ne \beta_2$ or $\sigma^2_1 \ne \sigma^2_2$, $P_{\theta_1} \ne P_{\theta_2}$, provided the following two conditions are fulfilled: \begin{description} \item{1)} The integral $\int h ( k + \sigma z ) \phi ( z ) dz$ is finite; \item{2)} The explanatory variable $x$ takes values in the set $\mathcal{X}$ and the equation \begin{eqnarray} \label{Eqq09} \frac{h(x\beta_1+\sigma_1z)}{\sigma_1} = \frac{h(x\beta_2+\sigma_2z)}{\sigma_2} \,\, \mbox{ for all } x \mbox{ in } \mathcal{X} \mbox{ and all } z \mbox{ in } \mathbb{R} \end{eqnarray} has no solution with $(\beta_1, \sigma_1) \ne (\beta_2, \sigma_2)$. \end{description} \end{proposition} \noindent Note that in the proposition above nothing is said about the values of $\xi_1$ and $\xi_2$. \begin{proof} Suppose, for the sake of contradiction, that there exist $\beta_1, \beta_2$, and $\sigma^2_1, \sigma^2_2$ with $(\beta_1, \sigma^2_1) \ne (\beta_2, \sigma^2_2)$ such that $P_{\beta_1, \sigma^2_1, \xi_1} = P_{\beta_2, \sigma^2_2, \xi_2}$. In particular, the expectation of $Y$ is the same under the distributions indexed by the two parameters, so that $\mu_{\beta_1, \sigma_1} = \mu_{\beta_2, \sigma_2}$. That is, \begin{eqnarray} \nonumber \int_\mathbb{R} h ( x\beta_1 + \sigma_1 z )/\sigma_1 \, \phi (z ) \, dz = \int_\mathbb{R} h ( x\beta_2 + \sigma_2 z ) /\sigma_2 \, \phi (z ) \, dz \, , \mbox{ for all } x \mbox{ in } \mathcal{X} \, . 
\end{eqnarray} Moreover, the equality above also holds when the integration is restricted to any interval $I$ in $\mathbb{R}$, \begin{eqnarray} \label{EqQQQ} \int_I h ( x\beta_1 + \sigma_1 z )/\sigma_1 \, \phi (z ) \, dz = \int_I h ( x\beta_2 + \sigma_2 z ) /\sigma_2 \, \phi (z ) \, dz \, , \mbox{ for all } x \mbox{ in } \mathcal{X} \, , \end{eqnarray} which corresponds to saying that the two distributions have the same conditional expectations given that the standardized random component lies in the interval $I$. Since the interval $I$ is arbitrary and $h$ and $\phi$ are continuous, the technical lemma \ref{TLemma} implies that \begin{eqnarray} \nonumber \frac{h(x\beta_1+\sigma_1z)}{\sigma_1} = \frac{h(x\beta_2+\sigma_2z)}{\sigma_2} \,\, \mbox{ for all } x \mbox{ in } \mathcal{X} \mbox{ and all } z \mbox{ in } \mathbb{R} \, , \end{eqnarray} which contradicts condition 2). \end{proof} \subsection{Identifiability of the dispersion parameter} \begin{proposition} Under the regularity conditions i) and ii), $P_{\theta_1} \ne P_{\theta_2}$, for each $\theta_1 = (\beta, \sigma^2, \xi_1)$ and $\theta_2 = (\beta, \sigma^2, \xi_2)$ such that $\xi_1 \ne \xi_2$. \end{proposition} \begin{proof} Suppose, for the sake of contradiction, that there exist $\beta, \sigma^2, \xi_1$ and $\xi_2$ with $\xi_1\ne \xi_2$ such that $P_{\beta, \sigma^2, \xi_1} = P_{\beta, \sigma^2, \xi_2}$. Then, $Var(Y)$ under $P_{\beta, \sigma^2, \xi_1}$ must be equal to $Var(Y)$ under $P_{\beta, \sigma^2, \xi_2}$, which implies by (\ref{eq04}) that \begin{eqnarray} \nonumber \zeta_{\beta, \sigma} - \mu_{\beta, \sigma}^2 + \xi_1 \upsilon_{\beta , \sigma} = \zeta_{\beta, \sigma} - \mu_{\beta, \sigma}^2 + \xi_2 \upsilon_{\beta , \sigma} \, , \end{eqnarray} which is equivalent to \begin{eqnarray} \nonumber \xi_1 \upsilon_{\beta , \sigma} = \xi_2 \upsilon_{\beta , \sigma} \, , \end{eqnarray} which implies that $\xi_1 = \xi_2$ since $ \upsilon_{\beta , \sigma} \ne 0$. This contradicts the assumption that $\xi_1 \ne \xi_2$, and the proof is concluded. 
\end{proof} \section{Closing remarks} Although the proof above is general, there are two particular examples that are of interest in many practical situations: the binomial and the Poisson GLMM. In those cases the dispersion parameter $\xi$ represents the parameter used for modelling under- and over-dispersion via quasi-likelihood. When the dispersion parameter $\xi$ is equal to one, the GLMM corresponds to the classical binomial and Poisson standard models; in this case, the identifiability follows from Proposition 1. For the sake of illustration, here is the verification of the identifiability of the standard parametrisation of a Poisson model with the logarithmic link, {\it i.e.} the response function is $h(\, \cdot \, ) = \exp (\, \cdot \, )$. Equation (\ref{Eqq09}) in this particular example is \begin{eqnarray} \nonumber \frac{\exp(x\beta_1 + \sigma_1 z)}{\sigma_1} = \frac{\exp(x\beta_2 + \sigma_2 z)}{\sigma_2} \,\, \mbox{, for all } x \mbox{ in } \mathcal{X} \mbox{ and all } z \mbox{ in } \mathbb{R} \,\, , \end{eqnarray} which is equivalent to (taking logarithms on both sides) \begin{eqnarray} \nonumber x\beta_1 + \sigma_1 z - \log \sigma_1 = x\beta_2 + \sigma_2 z - \log \sigma_2 \,\, \mbox{, for all } x \mbox{ in } \mathcal{X} \mbox{ and all } z \mbox{ in } \mathbb{R} \,\, . \end{eqnarray} But the equation above has no solution with $(\beta_1, \sigma_1) \ne (\beta_2, \sigma_2)$ if $\mathcal{X}$ has more than one element. Analogous arguments yield the identifiability of a binomial model with the classic logistic link or the probit link. The techniques for establishing identifiability used here can be easily applied, after a straightforward adaptation, to a context of survival analysis for the piecewise constant hazard model or the discrete time proportional hazard model, both with frailties and dispersion parameter (see Maia {\it et al}, 2014 for details on those models). \section*{Acknowledgements} This work was partially financed by the {\it Funda\c c\~ao Apolodoro Plaus\^onio}. \newpage
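The elimination step in the Poisson example can also be replayed symbolically. The sketch below (an illustration only; symbol names are ad hoc) takes the logarithm of both sides of the identity $\exp(x\beta+\sigma z)/\sigma$ and checks that requiring the difference to vanish identically in $x$ and $z$ forces $\beta_1 = \beta_2$ and $\sigma_1 = \sigma_2$.

```python
import sympy as sp

b1, b2, s1, s2 = sp.symbols('beta1 beta2 sigma1 sigma2', positive=True)
x, z = sp.symbols('x z', real=True)

# Difference of the logarithms of the two sides of the identifiability
# equation exp(x*beta + sigma*z)/sigma; it must vanish for all x and z.
expr = (x*b1 + s1*z - sp.log(s1)) - (x*b2 + s2*z - sp.log(s2))

# Holding identically in x forces equal fixed-effect coefficients,
# and holding identically in z forces equal random-component scales:
assert sp.diff(expr, x) == b1 - b2
assert sp.diff(expr, z) == s1 - s2
# With beta1 = beta2 and sigma1 = sigma2 the difference vanishes identically:
assert expr.subs([(b2, b1), (s2, s1)]) == 0
```

This mirrors the argument in the text: with at least two distinct values of $x$, the coefficient of $x$ pins down $\beta$, and the remaining identity in $z$ pins down $\sigma$.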
\section{Introduction} It has long been known that on-shell gauge invariance can be utilized to obtain universal soft behaviours of scattering amplitudes for photons and gravitons. Gauge invariance dictates that the amplitude must vanish when one of its polarization vectors/tensors is replaced by the corresponding momentum. Taking one of the momenta of an $n$-point amplitude ($M_{n}$) to be soft, the gauge invariance of the soft leg then relates the finite part of the amplitude to the singular diagrams, which are given by the product of a three-point vertex and the $n{-}1$-point amplitude. The latter is then amenable to the form of a ``soft operator" acting upon the $n{-}1$-point amplitude. Thus one schematically has: \begin{equation} M_{n}|_{q\rightarrow 0}=\sum_{i=-1}^a\left(\mathcal{S}_{i}\right)M_{n{-}1}+\mathcal{O}(q^{a+1}) \end{equation} where $q$ is the soft momentum, and $\mathcal{S}_{i}$ are the soft operators, with the subscript indicating the degree in the $q$ expansion at which each is defined. For photons $a=0$, while for gravitons $a=1$~\cite{LowFourPt, LowTheorem, Weinberg, OtherSoftPhotons}. The reason why the soft theorem always terminates at a finite order is that, when using gauge invariance, one can only determine the finite part of the amplitude up to a homogeneous solution, denoted as $R^\mu$, satisfying $q\cdot R=0$, over which one has no control. From general principles of locality and Lorentz symmetry, one can only determine the minimum order in $q$ that this term must contain, which sets $a$. A few years ago, Strominger and collaborators~\cite{Strominger,CachazoStrominger} demonstrated that the soft-theorems for gravitons can alternatively be interpreted as a consequence of extended Bondi, van der Burg, Metzner and Sachs (BMS) symmetry~\cite{BMS, ExtendedBMS}. This generated new interest in soft-theorems of amplitudes and their relationship with underlying symmetries. 
As the new interpretation only relies on the structure of space-time at asymptotic infinity, it can be viewed as a direct constraint on any theory of quantum gravity which admits asymptotically flat solutions. However, given that the resulting soft theorems can be derived via ordinary gauge symmetry, it is natural to ask, in the context of amplitudes, what precisely does the new interpretation buy us? This is especially intriguing given that the soft theorems are modified at loop-level~\cite{BernLoop, Yutin} as well as by higher-dimensional operators~\cite{Yutin2, elvang}, which are tied to the details of the interaction. Recently, an interesting opportunity presented itself in the form of an infinite order soft-theorem derived from the Ward identity of large gauge transformations \footnote{See also \cite{Schwab} for the derivation of a Ward identity for residual gauge symmetry.} by Hamada and Shiu~\cite{gary}. An interesting feature of the newly derived soft-theorem is that it only gives the soft-limit of the projected piece of the amplitude. For example, for photons one has: \begin{equation} \left.\Omega_{\mu\alpha_{1}\cdots\alpha_{l}} \partial_{q^{\alpha_{1}}}\cdots\partial_{q^{\alpha_{l}}} M_{n}^{\mu}\right |_{q\rightarrow 0}=\sum_{i=-1}^\infty\left(\mathcal{S}^{\mu}_{i,\nu}\right)M^\nu_{n{-}1} \end{equation} where $M_{n}^{\mu}$ is the amplitude with one of the polarisation vectors $\epsilon^\mu$ stripped off and $\Omega_{\mu\alpha_{1}\cdots\alpha_{l}}$ is a symmetric tensor.\footnote{\label{note:traceless} In~\cite{gary}, the tensor is symmetric traceless. However the trace piece automatically vanishes upon contracting with the polarisation vectors, as we will discuss shortly, and thus does not make a difference.} In this letter, we will show that the above can again be derived by ordinary on-shell gauge invariance. Recall that the derivation based on gauge symmetry yields soft theorems at finite order due to the potential ambiguities, i.e. the aforementioned $R^\mu$. 
We will demonstrate that such terms vanish upon the projection. In other words, the infinite order soft-theorem derived in~\cite{gary} is precisely the part of the amplitude that is completely determined by ordinary gauge symmetry. We will demonstrate this for photons and gravitons. Furthermore, we will use explicit examples to demonstrate that while $R^\mu$ can be projected out, it is nonetheless not zero. Finally, for completeness we will discuss the modification of this infinite soft theorem by the presence of higher dimensional operators. \section{Soft Theorem from Ward Identity} We follow \cite{bern,plefka} to investigate infinite order soft limits of photon and graviton amplitudes using ordinary on-shell gauge invariance. Beyond the usual (sub)subleading orders, the soft theorems can only be fixed up to a homogeneous term. However, if we restrict our attention to certain projected pieces of the amplitude, such a term does not contribute, and soft theorems can be obtained up to infinite order. For photons, we reproduce the result from large gauge transformations \cite{gary}. For gravitons, our result is more general, in that it gives the soft limit of a broader piece of the amplitude. That is, the soft theorems here leave fewer undetermined pieces than the result in \cite{gary}. \subsection{Photon Soft Theorem} Consider a scattering amplitude \eqal{ M_{n+m+1}\left(q;p_{1},\cdots,p_{m},k_{1},\cdots k_{n}\right) } involving one soft photon, $n$ hard photons, and $m$ matter scalars, with momenta $q$, $k_{1},\cdots k_{n}$, and $p_{1},\cdots,p_{m}$, respectively. Since the amplitude is a linear function of the polarization vectors, it can be expressed as \begin{align} M_{n+m+1}= & \epsilon_{q,\mu}M_{n+m+1}^{\mu} \,, \label{eq:Amp_eps} \end{align} where $\epsilon_q$ is the polarization vector for the soft photon. In the following we discuss the partial amplitude $M_{n+m+1}^{\mu}$ without the polarization vector. 
The scattering amplitude contains contributions with a pole in the soft momentum $q$ and contributions with no pole, as in Fig.\ref{fig:photons}, \begin{figure}[h] \begin{subfigure}[b]{0.4\linewidth} \includegraphics[height=0.6\textwidth]{photons_pole} \caption{} \label{fig:photons_pole} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[height=0.62\textwidth]{photons_gut} \caption{} \label{fig:photons_gut} \end{subfigure} \caption{Contributions (a) with pole, (b) without pole. } \label{fig:photons} \end{figure} \begin{align} & M_{n+m+1}^{\mu}\left(q;p_{1},\cdots,p_{m},k_{1},\cdots k_{n}\right)\nonumber \\ = & \sum_{i=1}^{m}e_{i}\frac{p_{i}^{\mu}}{p_{i}\cdot q}M_{n+m}\left(p_{1},\cdots,p_{i}+q,\cdots,p_{m},k_{1},\cdots k_{n}\right)\nonumber \\ & +N^{\mu}\left(q;p_{1},\cdots,p_{m},k_{1},\cdots k_{n}\right)\,, \label{eq:M_pole} \end{align} where $e_{i}$ are the charges of the scalars, $N^{\mu}$ denotes the terms without a pole, and $M_{n+m}$ denotes the lower-point amplitude without the soft photon. The pole terms can only arise from the three-point vertex involving the soft photon and an external scalar, since there are no self-interactions for photons. At leading order, there is no contribution from $N^{\mu}$, giving the leading soft theorem \eqal{ M_{n+m+1}^{\mu}\big|_{q\rightarrow 0} =&\sum_{i=1}^{m}e_{i}\frac{p_{i}^{\mu}}{p_{i}\cdot q}M_{n+m}+\mathcal{O}\left(q^{0}\right) \,. } Beyond this order, $N^{\mu}$ must be considered. On-shell gauge invariance relates $N^{\mu}$ to the lower-point amplitude $M_{n+m}$ by dictating \begin{align} 0= & q_{\mu}M_{n+m+1}^{\mu}=\sum_{i=1}^{m}e_{i}M_{n+m}+q_{\mu}N^{\mu}. \label{eq:Ward_N} \end{align} At zeroth order, the constraint gives charge conservation, \begin{align} \sum_{i=1}^{m}e_{i}= & 0 \, . 
\label{eq:photon_leading} \end{align} Beyond zeroth order, we may expand $N^{\mu}$ as \begin{align} N^{\mu}= & \sum_{l}q_{\alpha_{1}}\cdots q_{\alpha_{l}}N_{l}^{\mu,\alpha_{1}\cdots\alpha_{l}}\,, \label{eq:N_expand} \end{align} since it is polynomial in $q$ at tree level. Then, order by order we have \begin{align} q_{\mu} & q_{\alpha_{1}}\cdots q_{\alpha_{l}} \times \nonumber \\ & \left( \sum_{i=1}^{m} \frac{e_{i}}{(l+1)!}\partial{}_{i}^{\mu}\partial_{i}^{\alpha_{1}}\cdots\partial_{i}^{\alpha_{l}}M_{n+m}+N_{l}^{\mu,\alpha_{1}\cdots\alpha_{l}}\right)=0 \,, \end{align} so that $N_{l}^{\mu}$ can be expressed in terms of $M_{n+m}$ up to a homogeneous term $R_{l}$, \begin{align} N_{l}^{\mu,\alpha_{1}\cdots\alpha_{l}}= & -\sum_{i=1}^{m} \frac{e_{i}}{(l+1)!}\partial_{i}^{\mu}\partial_{i}^{\alpha_{1}}\cdots\partial_{i}^{\alpha_{l}}M_{n+m}\nonumber \\ & +R_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} \,, \label{eq:Ward_N_l} \end{align} where $R_{l}$ satisfies the Ward identity by itself, \begin{align} q_{\mu}q_{\alpha_{1}}\cdots q_{\alpha_{l}}R_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} & =0 \,, \label{eq:Ward_homo} \end{align} and thus represents an ambiguity. Generally, $R_{l}$ can be separated into three pieces, \eqal{ R_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} &= T_{l}^{\mu\alpha_{1}\cdots\alpha_{l}}+O_{l}^{\mu\alpha_{1}\cdots\alpha_{l}}+A_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} \,, } where $T$ is the trace part, \eqal{ T_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} &= \eta^{(\mu\alpha_{1}}B_{l}^{\alpha_{2}\cdots\alpha_{l})} \,, } $O$ is the symmetric traceless part satisfying \eqal{ \eta_{\mu\alpha_{i}}O_{l}^{\mu\alpha_{1}\cdots\alpha_{l}}=\eta_{\alpha_{i}\alpha_{j}}O_{l}^{\mu\alpha_{1}\cdots\alpha_{l}}=0,\text{ for any }i,j \,, } and $A$ contains the remaining terms, which are antisymmetric in any two indices among $\mu$ and the $\alpha$'s. Since arbitrary $A$ and $T$ automatically satisfy Eq.\,\eqref{eq:Ward_homo}, the symmetric traceless part $O$ must satisfy Eq.\,\eqref{eq:Ward_homo} by itself. 
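As a minimal illustration of the zeroth-order statement, note that $q_\mu p_i^\mu/(p_i\cdot q) = 1$, so contracting the pole part of Eq.\,\eqref{eq:M_pole} with $q_\mu$ collapses each pole, and the Ward identity at leading order reduces to charge conservation. A toy symbolic check with two matter scalars, the lower-point amplitude treated as a constant $M$, and ad hoc symbol names:

```python
import sympy as sp

# Minkowski dot product (the signature choice is immaterial for this check)
eta = sp.diag(1, -1, -1, -1)
def dot(a, b):
    return (a.T * eta * b)[0]

q  = sp.Matrix(sp.symbols('q0:4'))
p1 = sp.Matrix(sp.symbols('p10:14'))   # illustrative hard momenta
p2 = sp.Matrix(sp.symbols('p20:24'))
e1, e2, M = sp.symbols('e1 e2 M')

# q_mu contracted with the pole part of M^mu: each p_i.q/(p_i.q) collapses to 1,
# leaving (e1 + e2) * M, which must be cancelled by q_mu N^mu or vanish.
contracted = e1 * dot(q, p1) / dot(p1, q) * M + e2 * dot(q, p2) / dot(p2, q) * M
assert sp.simplify(contracted - (e1 + e2) * M) == 0
```

With a regular $N^\mu$, the only way the contraction can vanish as $q\rightarrow 0$ is $\sum_i e_i = 0$, as in Eq.\,\eqref{eq:photon_leading}.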
It is then straightforward to show that $O$ must vanish \footnote{For arbitrary $p$, we have the following separations: $p_\mu= \sum_{i=1}^3 c_i q_{i\mu}$ and $p_\mu p_\nu = c_0 \eta_{\mu \nu} + \sum_{i=1}^3 c_i q_{i\mu} q_{i\nu}$, where $q_i^2 = 0$.}. The trace part $T$ can also be discarded, since the contribution of $R_{l}$ to $N^{\mu}$ is of the form \begin{align} q_{\alpha_{1}}\cdots q_{\alpha_{l}}R_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} \,, \end{align} so that $T$ either produces terms proportional to $q^{2}=0$ for massless $q$, or to $q^{\mu}$, which vanish after putting back the polarization vector of the soft photon, as in Eq.\,\eqref{eq:Amp_eps}. Therefore, only the antisymmetric part needs to be considered, giving us \begin{align} N_{l}^{\mu,\alpha_{1}\cdots\alpha_{l}}= & -\sum_{i=1}^{m} \frac{e_{i}}{(l+1)!}\partial_{i}^{\mu}\partial_{i}^{\alpha_{1}}\cdots\partial_{i}^{\alpha_{l}}M_{n+m}\nonumber \\ & +A_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} \,. \label{eq:Ward_N_l-1} \end{align} Plugging this into the expression for the full amplitude, Eq.\,\eqref{eq:M_pole}, we get an incomplete soft theorem at all orders, fixed up to the antisymmetric homogeneous term $A$, \eqal{ &M_{n+m+1,\left(l\right)}^{\mu}\nonumber\\ =&\sum_{i=1}^{m}\frac{1}{(l+1)!}\frac{e_{i}}{p_{i}\cdot q}q_{\nu}J_{i}^{\mu\nu}\left(q\cdot\partial_{i}\right)^{l}M_{n+m}\nonumber\\ &+q_{\alpha_{1}}\cdots q_{\alpha_{l}}A_{l}^{\mu\alpha_{1}\cdots\alpha_{l}} \,, } where \begin{align} J_{i}^{\mu\nu}= & p_{i}^{\mu}\frac{\partial}{\partial p_{i\nu}}-p_{i}^{\nu}\frac{\partial}{\partial p_{i\mu}} \,. \end{align} The case $l=0$ contains no homogeneous term, giving us the well-known subleading soft theorem. At higher orders $A$ can be non-zero, but we may single out the piece totally symmetric in the $\alpha_{i}$ and $\mu$ by contracting with a totally symmetric tensor $\Omega_{\mu\alpha_{1}\cdots\alpha_{l}}$. 
$A$ is then removed, giving a partial soft theorem valid to all orders in $q$, \begin{align} &\Omega_{\mu\alpha_{1}\cdots\alpha_{l}} \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}} M_{n+m+1}^{\mu} \bigg|_{q\rightarrow 0} \nonumber\\ =&\Omega_{\mu\alpha_{1}\cdots\alpha_{l}}\partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}\nonumber\\&\ \left[\sum_{i=1}^{m}\frac{1}{(l+1)!}\frac{e_{i}}{p_{i}\cdot q}q_{\nu}J_{i}^{\mu\nu}\left(q\cdot\partial_{i}\right)^{l}M_{n+m}\right] \bigg|_{q\rightarrow 0} \end{align} where we adopt the short-hand notation $\partial^{\alpha_{j}}=\partial/\partial q^{\alpha_{j}}$ and $q\cdot\partial_{i}=q\cdot\partial/\partial p_{i}$. These are exactly the infinite order soft theorems in \cite{gary}. \subsection{Graviton} The derivation for soft theorems of gravitons is similar, except that the Ward identity can be applied twice, pushing the usual soft theorem to subsubleading order, and placing more stringent constraints on the homogeneous terms at higher orders. In principle, we should consider a general amplitude involving one soft graviton, $n$ hard gravitons, and $m$ matter scalars, \eqal{ M_{n+m+1} \left(q,p_1,\cdots ,p_m, k_1, \cdots, k_n \right), } with momenta $q$, $k_{1},\cdots k_{n}$, and $p_{1},\cdots,p_{m}$, respectively. The pole contribution could then come from both the scalar-graviton vertex and the three-point self-interaction of gravitons. Though the derivation procedure is unchanged, this complicates the calculation of the soft factors. For clarity, we separately consider two cases: one involving a single soft graviton with scalars only, and one involving multiple gravitons without scalars. The most general soft theorem can be obtained simply by combining the results of the two. 
We first discuss the amplitude involving a single soft graviton and $m$ scalars, with momenta $q$ and $p_1, \cdots, p_m$, respectively, \eqal{ M_{m+1} \left( q, p_1, \cdots, p_m \right) \,. } The scattering amplitude again contains contributions with and without a pole in the soft momentum $q$, \begin{align} & M_{m+1}^{\mu\nu}=\sum_{i=1}^{m}\frac{p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q}M_{m}+N^{\mu\nu} \label{eq:1grav_polegut} \end{align} with $N^{\mu \nu}$ denoting the terms without a pole and $M_{m}$ the lower-point amplitude without the soft graviton. Expanding in powers of the soft momentum $q$, only the pole diagrams contribute to the leading piece \begin{align} M_{(-1)}^{\mu\nu}=\sum_{i=1}^{m} \frac{p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q} M_{m} \,. \end{align} However, the higher-order pieces contain both pole and non-pole diagrams, and, by the Ward identity, parts of the non-pole diagrams are related to the pole ones, \begin{align} q_{\mu}\left( \sum_{i=1}^{m} \frac{p_{i}^{\mu}p_{i}^{\nu}}{p_{i}\cdot q}M_{m}(p_{i}+q)+N^{\mu\nu}\right)=0 \,. \end{align} Expanding $N^{\mu\nu}$ around $q\rightarrow0$, \begin{align} N^{\mu\nu}=\sum_{l}q^{\alpha_{1}}\cdots q^{\alpha_{l}}N_{l}^{\mu\nu,\alpha_{1}\cdots\alpha_{l}} \,, \end{align} we similarly obtain $N_{l}$ up to a homogeneous term $R_{l}$, \begin{align} N_{l}^{\mu\nu,\alpha_{1}\cdots\alpha_{l}}= & -\sum_{i=1}^{m} \frac{p_{i}^{\nu}}{(l+1)!}\partial_{i}^{\mu}\partial_{i}^{\alpha_{1}}\cdots\partial_{i}^{\alpha_{l}}M_{m}\nonumber \\ & +R_{l}^{\mu\nu\alpha_{1}\cdots\alpha_{l}} \,, \end{align} where \begin{align} q_{\mu}q_{\alpha_{1}}\cdots q_{\alpha_{l}}R_{l}^{\mu\nu\alpha_{1}\cdots\alpha_{l}} & =0 \,. 
\end{align} Again, $R_{l}$ can be separated into three pieces, \begin{align} R_{l}^{\mu\nu\alpha_{1}\cdots\alpha_{l}} & =T_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}}+O_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}}+A_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}} \,, \end{align} where $T$ is the trace part, \begin{align} T_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}} & =\eta^{(\mu\alpha_{1}}B_{l}^{\alpha_{2}\cdots\alpha_{l})\nu} \,, \end{align} $O$ is the symmetric traceless part satisfying \begin{align} \eta_{\mu\alpha_{i}}O_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}}=\eta_{\alpha_{i}\alpha_{j}}O_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}}= & 0,\text{ for any }i,j \,, \end{align} and $A$ comprises all terms antisymmetric in any two indices among $\mu$ and the $\alpha$'s. For the same reasons as in the photon case, only $A$ survives. Thus, we can rewrite our amplitude at the $l$'th order as\footnote{Here we have used $q^{2}=0$ and dropped terms proportional to $q^{\mu}$, which do not contribute to the gauge-invariant amplitude.} \begin{align} &M_{m+1,(l)}^{\mu\nu} \nonumber \\ = & \sum_{i=1}^{m} \frac{1}{(l+1)!}\frac{p_{i}^{\nu}}{p_{i}\cdot q}\left[p_{i}^{\mu}\left(q\cdot\partial_{i}\right)-\left(p_{i}\cdot q\right)\partial_{i}^{\mu}\right]\left(q\cdot\partial_{i}\right)^{l}M_{m}\nonumber \\ & +q_{\alpha_{1}}\cdots q_{\alpha_{l}}A_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}} \,. \end{align} For $l=0$, we get \begin{align} M_{m+1,(0)}^{\mu\nu}=\sum_{i=1}^{m} \frac{p_{i}^{\nu}}{p_{i}\cdot q}q_{\alpha}J_{i}^{\mu\alpha}M_{m} \,, \end{align} but for $l>0$, we need to impose the gauge invariance condition again, \begin{align} \hspace{-0.1in} q_{\nu}\left(\sum_{i=1}^{m} \frac{1}{(l+1)!}\frac{p_{i}^{\nu}}{p_{i}\cdot q}\left[p_{i}^{\mu}\left(q\cdot\partial_{i}\right)-\left(p_{i}\cdot q\right)\partial_{i}^{\mu}\right]\right. 
\left(q\cdot\partial_{i}\right)^{l} & M_{m} \nonumber \\ \left.+q_{\alpha_{1}}\cdots q_{\alpha_{l}}A_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}}\right) & =0 \,. \end{align} Thus, we get \begin{align} A_{l}^{\underline{\mu}\nu\underline{\alpha_{1}\cdots\alpha_{l}}} & =\nonumber \\ &\hspace{-0.3in} -\sum_{i}\frac{1}{(l+1)!}\left(p_{i}^{\mu}\partial_{i}^{\alpha_{1}}-p_{i}^{\alpha_{1}}\partial_{i}^{\mu}\right)\partial_{i}^{\alpha_{2}}\cdots\partial_{i}^{\alpha_{l}}\partial_{i}^{\nu}M_{m}\nonumber \\ &\hspace{-0.3in} +C_{l}^{\mu\nu\alpha_{1}\cdots\alpha_{l}} \,. \end{align} For similar reasons, $C$ also contains only trace parts and parts antisymmetric in any two indices among $\nu$ and the $\alpha$'s. Defining $L^{\mu\nu}$ as an antisymmetric tensor in $\mu$ and $\nu$, we can write \begin{align} C_{l}^{\mu\nu\alpha_{1}\cdots\alpha_{l}}= & \sum_{i,j}L^{\mu\alpha_{i}}\left(L^{\nu\alpha_{j}}+\eta^{\nu\alpha_{j}}\right)D_{l}^{\alpha_{1}\cdots\tilde{\alpha_{i}}\cdots\tilde{\alpha_{j}}\cdots\alpha_{l}}\nonumber \\ & +L^{\alpha_{i}\alpha_{j}}E_{l}^{\mu\nu\alpha_{1}\cdots\tilde{\alpha_{i}}\cdots\tilde{\alpha_{j}}\cdots\alpha_{l}} \,, \end{align} so the amplitude becomes\footnote{The terms proportional to $q^{\nu}$ or $\eta^{\mu\nu}$ are dropped in the gauge-invariant amplitude.} \footnote{$\tilde{\alpha_{i}}$ means the entry $\alpha_{i}$ is removed.} \begin{align} M_{m+1,(l)}^{\mu\nu} & = \sum_{i=1}^{m} \frac{1}{(l+1)!}\frac{q_{\alpha}q_{\beta}}{p_{i}\cdot q}J_{i}^{\mu\alpha}J_{i}^{\nu\beta}\left(q\cdot\partial_{i}\right)^{l-1}M_{m}\nonumber \\ + & q_{\alpha_{1}}\cdots q_{\alpha_{l}}\sum_{i,j}L^{\mu\alpha_{i}}L^{\nu\alpha_{j}}D_{l}^{\alpha_{1}\cdots\tilde{\alpha_{i}}\cdots\tilde{\alpha_{j}}\cdots\alpha_{l}} \,. \end{align} For $l=1$, we do not have the $D$ terms, so we now get the sub-sub-leading graviton soft theorem \begin{align} M_{(1)}^{\mu\nu}= \sum_{i=1}^{m} \frac{q_{\alpha}q_{\beta}}{2p_{i}\cdot q}J_{i}^{\mu\alpha}J_{i}^{\nu\beta} M_{m} \,. 
\end{align} However, for $l>1$, there are $D$ terms not fixed by the Ward identity, \begin{align} &\partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}M_{m+1,(l)}^{\mu\nu}\nonumber\\&\ \ =\partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}\nonumber\\&\ \ \ \ \ \ \left[\sum_{i=1}^{m}\frac{1}{(l+1)!}\frac{q_{\alpha}q_{\beta}}{p_{i}\cdot q}J_{i}^{\mu\alpha}J_{i}^{\nu\beta}\left(q\cdot\partial_{i}\right)^{l-1}\right]M_{m}\nonumber\\&\ \ \ \ +L^{\mu\alpha_{i}}L^{\nu\alpha_{j}}D_{l}^{\alpha_{1}\cdots\tilde{\alpha_{i}}\cdots\tilde{\alpha_{j}}\cdots\alpha_{l}}. \label{eq:1grav_inf} \end{align} Therefore, we can obtain, for example, either the pieces symmetric in $\mu$ and the $\alpha_i$, or those symmetric in $\nu$ and the $\alpha_i$, \eqal{ & \Omega_{\mu\alpha_{1}\cdots\alpha_{l}} \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}M_{m+1,(l)}^{\mu\nu} \bigg|_{q\rightarrow 0} \nonumber \\ & \ \ =\Omega_{\mu\alpha_{1}\cdots\alpha_{l}} \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}\nonumber \\ & \ \ \ \ \ \ \left[ \sum_{i=1}^{m} \frac{1}{(l+1)!}\frac{q_{\alpha}q_{\beta}}{p_{i}\cdot q}J_{i}^{\mu\alpha}J_{i}^{\nu\beta}\left(q\cdot\partial_{i}\right)^{l-1}\right]M_{m} \bigg|_{q\rightarrow 0}\nonumber \\ & \Omega_{\nu\alpha_{1}\cdots\alpha_{l}} \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}M_{m+1,(l)}^{\mu\nu} \bigg|_{q\rightarrow 0} \nonumber \\ & \ \ =\Omega_{\nu\alpha_{1}\cdots\alpha_{l}} \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}\nonumber \\ & \ \ \ \ \ \ \left[ \sum_{i=1}^{m} \frac{1}{(l+1)!}\frac{q_{\alpha}q_{\beta}}{p_{i}\cdot q}J_{i}^{\mu\alpha}J_{i}^{\nu\beta}\left(q\cdot\partial_{i}\right)^{l-1}\right]M_{m} \bigg|_{q\rightarrow 0}. } where $\Omega^{\rho\alpha_{1}\cdots\alpha_{l}}$ is a totally symmetric tensor. 
The soft theorems from large gauge transformations \cite{gary}, however, only consider a more restrictive piece, \begin{align} & \left[ \Omega_{\mu \left( \nu\alpha_{1}\cdots\alpha_{l} \right)} + \Omega_{\nu \left( \mu\alpha_{1}\cdots\alpha_{l} \right)} \right] \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}M_{m+1,(l)}^{\mu\nu} \bigg|_{q\rightarrow 0} \nonumber \\ & \ \ =\left[ \Omega_{\mu \left( \nu\alpha_{1}\cdots\alpha_{l} \right)} + \Omega_{\nu \left( \mu\alpha_{1}\cdots\alpha_{l} \right)} \right] \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}\nonumber \\ & \ \ \ \ \ \ \left[ \sum_{i=1}^{m} \frac{1}{(l+1)!}\frac{q_{\alpha}q_{\beta}}{p_{i}\cdot q}J_{i}^{\mu\alpha}J_{i}^{\nu\beta}\left(q\cdot\partial_{i}\right)^{l-1}\right]M_{m} \bigg|_{q\rightarrow 0}, \end{align} where $\Omega_{\mu \left( \nu\alpha_{1}\cdots\alpha_{l} \right)}$ is totally symmetric in $\nu,\alpha_{1},\cdots,\alpha_{l}$, and traceless in all the indices \footnote{See footnote \ref{note:traceless} in the introduction.}. This follows from our result, but does not represent the most general derivable soft theorem. To consider an amplitude involving $n+1$ gravitons, \eqal{ M_{n+1} \left( q, k_1, \cdots, k_n \right) } we only have to replace the three-point vertex with the graviton self-interaction. The remaining steps are exactly the same. Taking one graviton soft, \eqal{ \hspace{-0.2in}M_{n+1}^{\mu\nu}=&\prod_{j=1}^{n}\epsilon_{j,\mu_{j}}\epsilon_{j,\nu_{j}}\sum_{i=1}^{n}\frac{V^{\mu\nu\mu_{i}\nu_{i}\alpha\beta}}{k_{i}\cdot q}M_{n,\alpha\beta}^{\mu_{1}\nu_{1}\cdots\tilde{\mu}_{i}\tilde{\nu}_{i}\cdots\mu_{n}\nu_{n}}\nonumber\\&+N^{\mu\nu}, \label{ngrav_polegut} } where $V$ is the graviton self-interaction vertex \eqal{ V^{\mu\nu\mu_{i}\nu_{i}\alpha\beta}=&\left(k_{i}^{\mu}\eta^{\alpha\mu_{i}}+q_{\rho}\Sigma^{\rho\mu\alpha\mu_{i}}\right)\left(k_{i}^{\nu}\eta^{\beta\nu_{i}}+q_{\tau}\Sigma^{\tau\nu\beta\nu_{i}}\right) \nonumber \\ \Sigma^{abcd}=&\eta^{ac}\eta^{bd}-\eta^{ad}\eta^{bc}. 
} and $M_{n}$ is the amplitude involving the remaining $n$ gravitons. Expanding $N$ in $q$ and applying the Ward identity as before, we obtain, at $q^0$ order \footnote{The terms antisymmetric in $\mu$ and $\nu$ are dropped.}, the subleading soft theorem, \begin{equation} M_{n+1,(0)}^{\mu\nu}=\sum_{i=1}^{n} {k_i^\nu \over k_i\cdot q} q_\alpha J_i^{'\mu \alpha} M_n \,, \end{equation} where \begin{equation} J_i^{'\mu\nu}=k_i^\mu \partial_i^\nu - k_i^\nu \partial_i^\mu + \epsilon_i^\mu {\partial \over \partial \epsilon_{i,\nu}} - \epsilon_i^\nu {\partial \over \partial \epsilon_{i,\mu}} \,. \end{equation} As for the $l$-th order, where $l\ge 1$, the expansion of Eq.\,\eqref{ngrav_polegut} is \eqal{ &\hspace{-0.2in}M_{n+1,(l)}^{\mu\nu} \nonumber \\ =&\prod_{j=1}^{n}\epsilon_{j,\mu_{j}\nu_{j}} \sum_{i=1}^{n} \Bigg[\frac{k_{i}^{\mu}\eta^{\alpha\mu_{i}}k_{i}^{\nu}\eta^{\beta\nu_{i}}}{k_{i}\cdot q}\frac{\left(q\cdot\partial_{i}\right)^{l+1}}{(l+1)!}\nonumber \\ &+\frac{k_{i}^{\mu}\eta^{\alpha\mu_{i}}q_{\tau}\Sigma^{\tau\nu\beta\nu_{i}}+k_{i}^{\nu}\eta^{\beta\nu_{i}}q_{\rho}\Sigma^{\rho\mu\alpha\mu_{i}}}{k_{i}\cdot q}\frac{\left(q\cdot\partial_{i}\right)^{l}}{l!}\nonumber\\ &+\frac{q_{\rho}\Sigma^{\rho\mu\alpha\mu_{i}}q_{\tau}\Sigma^{\tau\nu\beta\nu_{i}}}{k_{i}\cdot q}\frac{\left(q\cdot\partial_{i}\right)^{l-1}}{(l-1)!}\Bigg]M_{n,\alpha\beta}^{\mu_{1}\nu_{1}\cdots\tilde{\mu}_{i}\tilde{\nu}_{i}\cdots\mu_{n}\nu_{n}}\nonumber \\ &+q_{\alpha_{1}}\cdots q_{\alpha_{l}}N_{l}^{\mu\nu\alpha_{1}\cdots\alpha_{l}} \,. 
} Applying the Ward identity as before, the soft theorem is \eqal{ &\hspace{-0.2in}M_{n+1,(l)}^{\mu\nu}\nonumber\\ =&\sum_{i=1}^{n}\frac{q_{\alpha}q_{\beta}}{k_{i}\cdot q}\left[\frac{J_{i}^{\mu\alpha}J_{i}^{\nu\beta}}{(l+1)!}+\frac{1}{2}\frac{J_{i}^{\mu\alpha}U_{i}^{\nu\beta}+U_{i}^{\mu\alpha}J_{i}^{\nu\beta}}{l!}\right.\nonumber\\ &\hspace{1.2in}\left.+\frac{1}{2}\frac{U_{i}^{\mu\alpha}U_{i}^{\nu\beta}}{(l-1)!}\right]\left(q\cdot\partial_{i}\right)^{l-1}M_{n}\nonumber\\ &+q_{\alpha_{1}}\cdots q_{\alpha_{l}}\sum_{i,j}L^{\mu\alpha_{i}}L^{\nu\alpha_{j}}D_{l}^{\alpha_{1}\cdots\tilde{\alpha_{i}}\cdots\tilde{\alpha_{j}}\cdots\alpha_{l}} \,, } where \begin{equation} U_i^{\mu \nu} = \epsilon_i^\mu{\partial \over \partial \epsilon_{i,\nu}} - \epsilon_i^\nu{\partial \over \partial \epsilon_{i,\mu}} \,. \end{equation} In particular, the sub-sub-leading piece is \begin{equation} M_{n+1, (1)}^{\mu \nu} = \sum_{i=1}^{n} {q_\alpha q_\beta \over 2 k_i\cdot q} J_i^{'\mu\alpha} J_i^{'\nu\beta} \end{equation} without ambiguity. For $l>1$, we again have a partially fixed soft theorem up to infinite order. \section{Example of Homogeneous Terms} Here we show that the anti-symmetric piece of $N_{l}^{\mu,\alpha_{1}\cdots\alpha_{l}}$ that was projected out is in fact non-zero, which means that the projected soft theorem is indeed a ``partial soft theorem''. We use an explicit scalar QED five-point amplitude to demonstrate this. The diagrams that contribute to $N$ come from the soft photon coupling to an internal leg, as shown in Fig.~\ref{fig:QED_5}. 
The contribution from Fig.~\ref{fig:QED_5}(a) is \begin{figure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\textwidth]{qed1} \caption{} \label{figa} \end{subfigure} \begin{subfigure}[b]{0.4\linewidth} \includegraphics[width=\textwidth]{qed2} \caption{} \label{figb} \end{subfigure} \caption{No-pole diagrams of the scalar QED five-point amplitude} \label{fig:QED_5} \end{figure} \begin{align} N&=(ie)\frac{(p_{1}+p)\cdot\epsilon_{2}}{(p_{1}+p)^{2}-m^{2}} \left[(ie)(p+p+q)\cdot\epsilon_{q}\right] \nonumber \\&(ie)\frac{(p+p_{4}+q)\cdot\epsilon_{3}}{(p+q)^{2}-m^{2}} +(2\leftrightarrow3,1\leftrightarrow4) \nonumber \\ & =(ie)^{3}\frac{(p_{1}+p)\cdot\epsilon_{2}}{(p_{1}+p)^{2}-m^{2}}(2p\cdot\epsilon_{q})\frac{(p+p_{4}+q)\cdot\epsilon_{3}}{(p+q)^{2}-m^{2}} \nonumber \\&+(2\leftrightarrow3,1\leftrightarrow4) =\epsilon_{q,\mu}N^{\mu} \end{align} where $p=p_{1}+p_{2}$, and $(2\leftrightarrow3,1\leftrightarrow4)$ means we sum over the $(2,3)$ and $(1,4)$ exchanges. We then differentiate $N^{\mu}$ with respect to $q$ and anti-symmetrize in the $(\mu,\alpha_{1})$ indices, \begin{align} N^{\mu, \alpha_{1}}&= \frac{\partial}{\partial q_{\alpha_{1}}} N^{\mu} \nonumber \\&=(ie)^{3}\frac{(p_{1}+p)\cdot\epsilon_{2}}{(p_{1}+p)^{2}-m^{2}}\frac{\partial}{\partial q_{\alpha_{1}}}\left[(2p)^{\mu}\frac{(p+p_{4}+q)\cdot\epsilon_{3}}{(p+q)^{2}-m^{2}}\right]+(2\leftrightarrow3,1\leftrightarrow4) \nonumber \\ & =(ie)^{3}\frac{(p_{1}+p)\cdot\epsilon_{2}}{(p_{1}+p)^{2}-m^{2}}\left[\frac{2p^{\mu}\epsilon_{3}^{\alpha_{1}}}{(p+q)^{2}-m^{2}} \right. \nonumber \\& \left. - \frac{4p^{\mu}(p+q)^{\alpha_{1}}\,(p+p_{4}+q)\cdot\epsilon_{3}}{\left[(p+q)^{2}-m^{2}\right]^{2}}\right]+(2\leftrightarrow3,1\leftrightarrow4)=S^{\mu\alpha_{1}}+A^{\mu \alpha_{1}} \end{align} The anti-symmetric part $A^{\mu \alpha_{1}}$ is not zero. 
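This non-vanishing can also be seen numerically. The following sketch (not part of the paper's derivation; it uses generic, not necessarily on-shell, momenta and polarizations, with the coupling factors stripped since they do not affect the symmetry structure) evaluates $\partial N^{\mu}/\partial q_{\alpha_{1}}$ for the Fig.~\ref{fig:QED_5}(a) topology and checks that the antisymmetric part in $(\mu,\alpha_{1})$ is non-zero:

```python
import numpy as np

# Minkowski metric, signature (+,-,-,-); all vectors carry upper indices
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

def n_mu_alpha(p1, p2, p4, e2, e3, q, m2):
    """d N^mu / d q_alpha for one Fig.(a)-type term, couplings stripped;
    both indices are returned raised."""
    p = p1 + p2
    C = dot(p1 + p, e2) / (dot(p1 + p, p1 + p) - m2)
    den = dot(p + q, p + q) - m2
    num = dot(p + p4 + q, e3)
    # d/dq^beta of num/den (lower index), then raise with the metric
    grad_low = eta @ e3 / den - num * 2.0 * (eta @ (p + q)) / den**2
    grad_up = eta @ grad_low
    return 2.0 * C * np.outer(p, grad_up)        # N^{mu alpha_1}

rng = np.random.default_rng(1)
p1, p2, p3, p4, e2, e3, q = rng.standard_normal((7, 4))

# include the (2<->3, 1<->4) exchange term, as in the amplitude
N = (n_mu_alpha(p1, p2, p4, e2, e3, q, m2=1.0)
     + n_mu_alpha(p4, p3, p1, e3, e2, q, m2=1.0))

A = 0.5 * (N - N.T)              # antisymmetric part in (mu, alpha_1)
assert np.linalg.norm(A) > 1e-6  # generically non-vanishing
```

For generic kinematics the antisymmetric part is of the same order as the symmetric one, which is precisely the piece dropped by the projection.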
The contribution from Fig.~\ref{fig:QED_5}(b) is \begin{align} N & =(ie)\frac{(p_{1}+p)\cdot\epsilon_{2}}{(p_{1}+p)^{2}-m^{2}}(-2ie^{2})(\epsilon_{2}\cdot\epsilon_{q})+(q\leftrightarrow3,1\leftrightarrow4) \end{align} However, this term does not contribute to $N^{\mu, \alpha_{1}}_{1}$ since it does not involve $q$. After considering Figs.~\ref{fig:QED_5}(a) and~\ref{fig:QED_5}(b), we have shown that the anti-symmetric part of $N^{\mu, \alpha_{1}}_{1}$ is non-zero, but at the end we drop this term to obtain the partial soft theorem. \section{Effect of Higher Dimensional Operators} Now we consider the soft photon theorem in effective field theory \cite{elvang}, where the sub-leading ($q^{0}$) soft photon theorem is modified in the presence of the effective operator. The effective operator starts to contribute at $q^{0}$ order and continues to affect higher orders, so we will explicitly show its modification to the infinite order soft theorem. The modification to the sub-leading soft photon theorem is \begin{align} & M_{n+m+1}|_{q\rightarrow 0}=(q^{-1}\mathcal{S}^{(-1)}+q^{0}\mathcal{S}^{(0)})M_{n+m}\nonumber \\&+q^{0}\widetilde{\mathcal{S}}^{(0)}\widetilde{M}_{n+m}+\mathcal{O}(q^{1}) \end{align} where the tilde on the $n$-point amplitudes indicates that the particle type of the $k$th leg of $\widetilde{M}_{n+m}$ may differ from that in $\widetilde{M}_{n+m+1}$. We choose a specific effective operator, $\varphi F^{\mu\nu}F_{\mu\nu}$ ($\varphi$ is a real scalar field), to exhibit the explicit form of the modification. When one of the external legs is taken soft, the internal $\varphi$ propagator goes on-shell and the amplitude factorizes as shown in Fig.~\ref{fig_qk}. 
\begin{figure}[h] \includegraphics[height=1.6cm]{qk} \caption{} \label{fig_qk} \end{figure} \noindent Its contribution is \begin{align} g\left[2(k_{j} \cdot q)(\epsilon_{k_{j}}\cdot\epsilon_{q})-2(k_{j} \cdot\epsilon_{q})(q\cdot\epsilon_{k_{j}})\right]\frac{1}{2(k_{j}\cdot q)}\widetilde{M}_{n+m} \end{align} with $g$ the coupling constant for the three-point vertex. We have shown that the effective operator starts contributing at sub-leading ($q^{0}$) order, and we now discuss how it modifies the infinite soft theorem. Again we separate the pole diagrams from the no-pole ones, and examine the constraints the Ward identity imposes at each order in $q$. \begin{align} 0 & =q_{\mu}M_{n+m+1}^{\mu}|_{q\rightarrow 0}\nonumber \\ & =\sum_{i=1}^{m} e_{i}M_{n+m}\nonumber \\&+\sum_{j=1}^{n}g\left[(k_{j}\cdot q)(\epsilon_{k_{j}}\cdot q)-(k_{j}\cdot q)(q\cdot\epsilon_{k_{j}})\right]\frac{1}{(k_{j}\cdot q)}\widetilde{M}_{n+m} \nonumber \\ &+q_{\mu} N^{\mu} \label{eq:three_term} \\ & =\sum_{i=1}^{m} e_{i}M_{n+m}+q_{\mu}N^{\mu} \label{eq:two_term} \end{align} The first term in Eq.\,\eqref{eq:three_term} is the original pole diagram from the photon-matter coupling, the second is the pole diagram from the effective operator, and the third is the no-pole diagrams. We find that the pole diagram from the effective operator does not constrain the no-pole diagrams, since it is gauge invariant by itself. So Eq.\,\eqref{eq:two_term} is basically the same as Eq.\,\eqref{eq:Ward_N}. 
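The gauge invariance of the effective-operator pole term can be checked directly: replacing $\epsilon_{q}\rightarrow q$ in its contribution makes it vanish identically. A small numerical sketch (illustrative only; generic vectors and $g=1$ are assumptions, not data from the paper):

```python
import numpy as np

# Minkowski metric, signature (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
dot = lambda a, b: a @ eta @ b

def pole_term(k, eps_k, q, eps_q, g=1.0):
    """Effective-operator pole contribution (coefficient of M-tilde)."""
    return g * (2.0 * dot(k, q) * dot(eps_k, eps_q)
                - 2.0 * dot(k, eps_q) * dot(q, eps_k)) / (2.0 * dot(k, q))

rng = np.random.default_rng(2)
k, eps_k, q, eps_q = rng.standard_normal((4, 4))

# generic polarization: the pole term is non-zero ...
assert abs(pole_term(k, eps_k, q, eps_q)) > 1e-12
# ... but substituting eps_q -> q gives zero: the term is gauge
# invariant by itself, so it imposes no constraint on the no-pole part
assert abs(pole_term(k, eps_k, q, q)) < 1e-12
```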
The effective operator does not constrain the form of $N$, but it still modifies the infinite soft theorem to be \begin{align} & \Omega_{\mu\alpha_{1}\cdots\alpha_{l}} \partial^{\alpha_{1}}\cdots\partial^{\alpha_{l}}M_{n+m+1}^{\mu}|_{q\rightarrow 0}\nonumber \\ & = \Omega_{\mu\alpha_{1}\cdots\alpha_{l}} \Bigg\{ \sum_{i=1}^{m}\frac{1}{(l+1)!}\frac{e_{i}}{p_{i}\cdot q}q_{\nu}J_{i}^{\mu\nu}\partial_{i}^{\alpha_{1}}\cdots\partial_{i}^{\alpha_{l}}M_{n+m} \nonumber \\ & +\sum^{n}_{j=1}g\left[(k_{j}\cdot q)\epsilon_{k_{j}}^{\mu}-(q\cdot\epsilon_{k_{j}})k_{j}^{\mu} \right]\frac{1}{(k_{j}\cdot q)} \partial_{j}^{\alpha_{1}}\cdots\partial_{j}^{\alpha_{l}} \widetilde{M}_{n+m} \Bigg\}. \end{align} \section{Conclusion and Discussion} In this letter, we demonstrate how on-shell gauge invariance can fix the higher-order soft limits of photons and gravitons up to an undetermined homogeneous term $R^{\mu}$. This leads to infinite order soft theorems on certain projected pieces of the amplitude, to which the homogeneous term does not contribute. We explicitly worked out the appropriate projection to obtain such pieces, and showed that the infinite order soft theorems derived from large gauge transformations can be completely reproduced here. For the case of gravitons, the theorems derived here are actually more complete, leaving fewer undetermined pieces in the amplitude. We use explicit examples to demonstrate that the homogeneous term in $R^{\mu}$ can be projected out but can be non-zero, which means we indeed drop non-vanishing contributions to obtain the infinite order soft theorem. Finally, we consider the effect of adding a higher dimensional operator, which starts to modify the photon soft theorem at sub-leading order. Moreover, its modification to the infinite order soft theorem can also be obtained. 
The fact that the soft theorems derived from residual gauge symmetries can, so far, all be reproduced by ordinary on-shell gauge symmetry leaves us asking what the relevance of this new symmetry is for a physical observable like the S-matrix. A pessimist may say that the evidence so far is that there is no relevance beyond that implied by ordinary gauge symmetry, which in a sense is not surprising given that one projects the correlation function to obtain the S-matrix and thus certain information might be projected out. Alternatively, one might say that the symmetry is in fact telling us that we are using the wrong asymptotic states for the S-matrix and are thus ignorant of its features. We choose single-particle states for the S-matrix because they form irreducible representations of the Poincar\'e group. This statement makes no distinction between massless and massive kinematics. However, for massless kinematics, it is well known that single-particle states are ill-defined, as there are no quantum numbers available for us to differentiate collinear multi-particle states; this manifests itself in the IR divergences of massless scattering amplitudes. Thus perhaps the infinite residual gauge symmetry is telling us that the correct asymptotic states for massless kinematics should form representations of this infinite group. Indeed, recent analysis along this line for QED has demonstrated that this appears to be the case~\cite{Kapec}, although a similar analysis for gravity is still lacking. It will be interesting to understand this in full generality and to illustrate how modifications of the three-point interaction via higher dimension operators change the conclusion. Besides the single soft theorems discussed here, one may apply the method in \cite{us} to consider double soft theorems, which involve two, instead of one, soft gauge bosons. It would be interesting to see whether such theorems can be similarly pushed to infinite order by considering a projected piece of the amplitude. 
\section{Acknowledgement} We thank Yu-tin Huang for suggesting the problem and helping with the draft. Zhi-Zhong Li, Hung-Hwa Lin and Shun-Qing Zhang are supported by MoST grant 106-2628-M-002-012-MY3.
\section{Introduction}\label{intro} \indent From the first attempts in the nineteenth century to account for the effects of randomness in nature, fluctuations have been recognized as a fundamental component in the description of several physical, chemical and biological systems \cite{Gardiner, Nelson, VanKampen}. A stochastic description can be formulated in continuous or discrete time for systems whose phase space is continuous, or embedded in a lattice. For many of these systems, Brownian motion and diffusion models have proven to be key conceptualizations, mathematically accessible and useful for characterizing their statistical behavior \cite{Gardiner, Nelson, VanKampen, Risken, Ricciardi, HanggiMarchesoni2005}. \indent Random walks and diffusion processes evolve in certain regions of their phase space according to the equations or rules governing their dynamics. Complementary to the dynamics, boundary conditions shape the probability distribution of the system being in a given state at a certain time. In this sense, different scenarios are prescribed by different combinations of boundary conditions \cite{Gardiner, VanKampen, Risken}. Among these scenarios, first-passage-time (FPT) problems (also referred to as exit or escape times) represent a wide class of situations \cite{Redner}, where the random variable of interest is the time at which certain state conditions are first met. The statistics of this and other related variables, such as the survival probability, are relevant in many fields and contexts \cite{Redner, Metzler_etal, Grebenkov2015, Ryabov_etal2015}. 
Examples abound in different disciplines: diffusion-influenced reactions \cite{Tachiya1979, SanoTachiya1979, Szabo_etal1980, Hanggi_etal1990, Krapivsky_etal1994, Krapivsky2012}, the channel-assisted membrane transport of metabolites, ions, or polymers \cite{Eisenberg_etal1995, Berezhkovskii_etal2002, GoychukHanggi2002, Cohen_etal2012} and, in general, the variety of processes orchestrating intracellular transport \cite{BressloffNewby2013}, force-induced unbinding processes and the reconstruction of potential functions in experiments of single-molecule force spectroscopy \cite{Balsera_etal1997, HummerSzabo2003, Dudko_etal2006, Dudko_etal2008, FokChou2010}, the random search and first encounters of mobile or immobile targets by a variety of agents, from foraging animals to proteins \cite{Benichou_etal2011}, including transport-limited reactions in active media \cite{Loverdo_etal2008}, the absorption of particles in restricted geometries \cite{MeersonRedner2014, RednerMeerson2014} and, generically, geometry-controlled kinetics \cite{BenichouVoituriez2014}, escape from confined domains through narrow pores \cite{BenichouVoituriez2008, Pillay_etal2010, Cheviakov_etal2010} as well as through large windows \cite{Rupprecht_etal2015}, the granular segregation of binary mixtures \cite{FarkasFulop2001}, the formation of loops in nucleic acids and the cyclization of polymers \cite{Sokolov2003, Guerin_etal2012}, and finally, the nucleation and stochastic self-assembly of monomers \cite{Yvinec_etal2012}, to name a few. \indent A particularly interesting FPT problem arises in the context of spiking neurons. In general, a neuron accumulates input currents up to a point at which its dynamics become independent of the inputs, and then generates a large excursion in its phase space in a relatively short period of time, producing a kind of stereotyped event called an {\it action potential} or a {\it spike} \cite{Koch, GerstnerKistler}. 
To a large extent, these spikes form the basis of neuronal communication. Simplified models of spike generation take advantage of the large reproducibility of action potentials by splitting the dynamics in two phases: the phase where inputs and internal processes drive the neuron, i.e. the regime of sub-threshold integration, and the phase where the spike itself is manifested, which is not represented in detail but prescribed by a fire-and-reset rule \cite{Koch, GerstnerKistler}. These {\it integrate-and-fire} (IF) models have different versions according to the processes included during the sub-threshold integration \cite{Koch, GerstnerKistler, Fourcaud-Trocme2003}. However, all of them produce spikes by declaration once the voltage reaches a certain threshold for the first time. Then, the voltage is reset to a lower potential, which is the initial condition for the sub-threshold integration of the following spike. Because of how spikes are defined in these models, the relationship to FPT problems is straightforward. On the other hand, the reproducibility of {\it spike times} is not as precise as expected from deterministic systems \cite{Koch, GerstnerKistler}. This randomness is accounted for by different stochastic phenomena (ionic gating, neurotransmitter release, etc.), which are normally incorporated in these models by adding noise (in general, Gaussian white or colored noise). Overall, the stochastic dynamics of the sub-threshold integration of IF models is equivalent to the motion of a Brownian particle (where different versions correspond to different potentials), whereas the presence of the threshold for declaring spikes sets the definition of the FPT \cite{Ricciardi, Koch, GerstnerKistler, Gerstein1964, Tuckwell, Burkitt2006, SacerdoteGiraudo2013}. \indent One of the processes that greatly affects spike production is the presence of adaptation currents. 
They are putatively responsible for the widely observed phenomenon of spike frequency adaptation, where a neuron's discharge rate decreases in response to a step stimulus \cite{Koch, GerstnerKistler}. The simplest version of adaptive IF models incorporates an additive current that decreases exponentially in time during the sub-threshold evolution, and enables history-dependent behavior in the spike-and-reset rule \cite{Treves1993, BendaHerz2003, Urdapilleta2011b, SchwalgerLindner2013}. Therefore, the first step towards developing a full understanding of the effect of spike-triggered adaptation currents on interspike interval statistics is to analyze the FPT problem of a Brownian motion (given by the associated IF model under analysis) with a superimposed exponentially decaying additive drift. This problem was partially covered by Lindner in 2004, who derived general expressions for the corrections to the moments of the FPT distribution in a general time-dependent case, and assessed some particular explicit solutions for the first-order moments in the case of exponential temporal driving \cite{Lindner2004}. To address the full FPT statistics of the temporally inhomogeneous problems that an exponential time-dependent drift poses, we previously analyzed the survival probability and the FPT distribution based on the backward Fokker-Planck (FP) equation, and inductively solved the system of infinite recurrence equations that results from the proposal of a series solution. This approach was derived for the simple case of a Wiener process, corresponding to the {\it perfect} IF model in the spiking neurons framework \cite{Urdapilleta2011a, Urdapilleta2012}. Here, we aim to extend these results to a system where the main driving force derives from an arbitrary potential. Furthermore, we obtain the explicit solution for the Ornstein-Uhlenbeck process, which corresponds to the {\it leaky} IF model in the context of spiking neurons. 
Traditionally, this is considered to be the minimal biologically realistic IF model \cite{Tuckwell}, obtained as a diffusion approximation of the Stein model \cite{Stein1965}. \section{First-passage-time in an arbitrary potential: Theoretical framework} \subsection{The homogeneous system}\label{unperturbed} \indent First, the methodology we use later to study the first-passage-time problem in a time-inhomogeneous setup is reviewed in a classical context: a particle driven exclusively by an arbitrary unidimensional potential \cite{Risken, Gardiner}. Although different approaches can be taken, we focus on the analysis of the survival probability via the backward Fokker-Planck equation. Under this framework, the subsequent analysis of a superimposed exponential time-dependent drift has proven to be mathematically tractable \cite{Urdapilleta2011a, Urdapilleta2012}.\\ \indent We consider a continuous-time random walk representing the movement of an overdamped particle, whose position $x(t)$ evolves according to the following equation: \begin{equation}\label{eq1} \frac{\rmd x}{\rmd t} = -\frac{\rmd U(x)}{\rmd x} + \sqrt{2D}~\xi(t). \end{equation} \indent In Eq.~(\ref{eq1}), the conservative force driving the particle is written as the negative of the derivative of a generic position-dependent potential $U(x)$, whereas random forces are represented by an additive Gaussian white noise $\xi(t)$, defined by $\langle \xi(t) \rangle = 0$ and $\langle \xi(t) \xi(t') \rangle = \delta(t-t')$. 
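As a numerical aside (not part of the analytical development), Eq.~(\ref{eq1}) can be integrated with the Euler--Maruyama scheme, declaring a first-passage event when $x$ reaches a threshold $x_{\rm thr}$. For the linear potential $U(x) = -\mu x$ (constant drift $\mu>0$), the mean FPT is the standard result $(x_{\rm thr}-x_0)/\mu$, which the simulation reproduces; all parameter values below are illustrative:

```python
import numpy as np

def first_passage_time(U_prime, x0, x_thr, D, dt=1e-3, t_max=1e3, rng=None):
    """Euler-Maruyama integration of dx/dt = -U'(x) + sqrt(2D) xi(t);
    returns the first time x reaches x_thr (np.inf if never reached)."""
    rng = rng or np.random.default_rng()
    x, t = x0, 0.0
    amp = np.sqrt(2.0 * D * dt)   # noise amplitude per step
    while t < t_max:
        x += -U_prime(x) * dt + amp * rng.standard_normal()
        t += dt
        if x >= x_thr:
            return t
    return np.inf

# linear potential U(x) = -mu*x: mean FPT = (x_thr - x0)/mu
mu, D, x0, x_thr = 1.0, 0.1, 0.0, 1.0
rng = np.random.default_rng(3)
fpts = [first_passage_time(lambda x: -mu, x0, x_thr, D, rng=rng)
        for _ in range(500)]
mean_fpt = np.mean(fpts)   # close to (x_thr - x0)/mu = 1
```

Replacing `U_prime` by the derivative of any other potential gives the corresponding FPT samples, against which the analytical results below can be benchmarked.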
The particle is initially set at position $x_0$ and the first time it reaches the level $x_{\rm thr} > x_0$, the dynamics are no longer analyzed (or reset) and a first-passage-time (FPT) event is declared.\\ \indent For this system, the evolution of the transition probability density function of a particle being at position $x$ at time $t$ (given that it will be at position $x'$ at a later time $t'$, $t' > t$) is governed by the backward Fokker-Planck (FP) equation \cite{Risken, Gardiner, Urdapilleta2011a}, which reads \begin{equation}\label{eq2} \frac{\partial P(x',t'|x,t)}{\partial t} = U'(x)~\frac{\partial P(x',t'|x,t)}{\partial x} - D \frac{\partial^2 P(x',t'|x,t)} {\partial x^2}, \end{equation} \noindent where $U'(x)$ indicates the derivative of the potential with respect to $x$. Supplementing Eq.~(\ref{eq2}), initial and boundary conditions are set in accordance with the survival domain, $x'<x_{\rm thr}$. Explicitly, \begin{eqnarray} \label{eq3} P(x',t'|x,t=t') &=& \cases{1 ~~{\rm for}~x<x_{\rm thr},\\ 0~~{\rm for}~x\geq x_{\rm thr},}\\ \label{eq4} P(x',t'|x=x_{\rm thr},t) &=& 0. \end{eqnarray} \indent Within the backward FP formalism, the survival probability at time $t'$ for a particle released at position $x$ at time $t$, \begin{equation}\label{eq5} F(t'|x,t) = \int_{-\infty}^{x_{\rm thr}} P(x',t'|x,t) ~{\rmd x'}, \end{equation} \noindent is obtained simply by integration of both sides of Eq.~(\ref{eq2}) in $x'$ from $-\infty$ to $x_{\rm thr}$. In terms of the auxiliary variable $\tau = t'-t$, the corresponding equation reads \begin{equation}\label{eq6} \frac{\partial F(x,\tau)}{\partial \tau} = - U'(x)~\frac{\partial F(x,\tau)}{\partial x} + D \frac{\partial^2 F(x,\tau)} {\partial x^2}, \end{equation} \noindent whereas the initial and boundary conditions read \begin{eqnarray} \label{eq7} F(x,\tau=0) &=& \cases{1 ~~{\rm for}~x<x_{\rm thr},\\ 0~~{\rm for}~x\geq x_{\rm thr},}\\ \label{eq8} F(x=x_{\rm thr},\tau) &=& 0. 
\end{eqnarray} \indent Once the survival probability, whose evolution is given by Eqs.~(\ref{eq6})-(\ref{eq8}), is known, the FPT density function can be immediately evaluated. By construction, $F(x,\tau)$ is the probability that (given that the particle was released at position $x$) it remains alive, or not absorbed, at time $\tau$ within the survival domain. Equivalently, $F(x,\tau)$ represents the probability that the FPT is posterior to $\tau$: ${\rm Prob}(T>\tau) = F(x,\tau)$, where $T$ is the FPT random variable. Since we are focusing on potentials that guarantee level crossing (which can be ensured, for example, by a small positive drift), $F(x,\tau)$ can be directly related to the cumulative distribution function of the FPT, $\Phi(\tau)$, through the relationship: $\Phi(\tau) = {\rm Prob}(T \leq \tau) = 1 - F(x,\tau)$. Therefore, the density function of the FPT, $\phi(\tau)$, is given by \begin{equation}\label{eq9} \phi(\tau) = \frac{\rmd \Phi(\tau)}{\rmd \tau} = - \frac{\partial F(x,\tau)}{\partial \tau}, \end{equation} \noindent where it should be noted that once the system defined by Eqs.~(\ref{eq6})-(\ref{eq8}) has been solved, the initial backward position of the particle $x$ remains as a parameter and can be removed from the notation. \subsection{An additive time-dependent exponential drift} \indent In this work, we study the influence on the survival probability and FPT statistics of a superimposed exponential time-dependent drift in a system driven by the potential $U(x)$. 
This system is described by the Langevin equation \begin{equation}\label{eq10} \frac{\rmd x}{\rmd t} = - U'(x) + \frac{\epsilon} {\tau_{\rmd}}~\rme^{-(t-t_0)/\tau_{\rmd}} + \sqrt{2D}~\xi(t), \end{equation} \noindent where $\epsilon$ and $\tau_{\rmd}$ characterize the strength and time constant of the exponential driving, and $t_{0}$ refers to the initial time of the experimental setting.\\ \indent The backward FP equation is similar to Eq.~(\ref{eq2}), except for an additional term in the drift coefficient \begin{equation}\label{eq11} \fl \hspace{0.5cm} \frac{\partial P(x',t'|x,t)}{\partial t} = \left[ U'(x) - \frac{\epsilon}{\tau_{\rmd}}~\rme^{-(t-t_0)/\tau_{\rmd}} \right]~ ~\frac{\partial P(x',t'|x,t)}{\partial x} - D \frac{\partial^2 P(x',t'|x,t)}{\partial x^2}. \end{equation} \indent Proceeding as in the previous subsection, the survival probability $F(x,\tau; t')$, in terms of the auxiliary variable $\tau = t'-t$, reads \begin{equation}\label{eq12} \fl \hspace{0.5cm} \frac{\partial F(x,\tau;t')}{\partial \tau} = \left[ - U'(x) + \frac{\epsilon}{\tau_{\rmd}}~\rme^{-(t'-t_0)/\tau_{\rmd}}~\rme^{\tau/\tau_{\rmd}} \right]~\frac{\partial F(x,\tau;t')}{\partial x} + D \frac{\partial^2 F(x,\tau;t')} {\partial x^2}, \end{equation} \noindent whereas the initial and boundary conditions are similar to those given by Eqs.~(\ref{eq7}) and (\ref{eq8}). Note that here the term ``initial'' refers to the situation $\tau = 0$, and not to the initial time of the experimental setting, $t_0$.\\ \indent Following our previous studies \cite{Urdapilleta2011a, Urdapilleta2012}, we propose a series expansion in powers of $\epsilon$ for $F(x,\tau;t')$, \begin{equation}\label{eq13} \fl \hspace{0.5cm} F(x,\tau;t') = F_{0}(x,\tau;t') + \epsilon~F_{1}(x,\tau;t') + \epsilon^{2}~F_{2}(x,\tau;t') + \dots = \sum_{n=0}^{\infty}\epsilon^{n}~F_{n}(x,\tau;t'). 
\end{equation} \indent With this assumption, Eq.~(\ref{eq12}) reads \begin{eqnarray}\label{eq14} \fl \hspace{1.5cm} \left[ \frac{\partial F_{0}}{\partial \tau} + U'(x) \frac{\partial F_{0}}{\partial x} - D \frac{\partial^{2} F_{0}}{\partial x^{2}} \right] \nonumber\\ \fl \hspace{1.5cm} + \sum_{n=1}^{\infty} \epsilon^{n}~\left[\frac{\partial F_{n}} {\partial \tau} + U'(x) \frac{\partial F_{n}}{\partial x} - \frac{1}{\tau_{\rmd}}~\rme^{-(t'-t_{0})/\tau_{\rmd}}~\rme^{\tau/\tau_{\rmd}} ~\frac{\partial F_{n-1}}{\partial x} - D \frac{\partial^{2} F_{n}}{\partial x^{2}}\right] = 0. \end{eqnarray} \indent Given the arbitrariness of $\epsilon$, each term between brackets should be identically $0$, splitting Eq.~(\ref{eq14}) into an infinite system of coupled equations with a recursive structure, \begin{eqnarray} \label{eq15} \fl \hspace{1.5cm} \frac{\partial F_{0}}{\partial \tau} + U'(x)~\frac{\partial F_{0}} {\partial x} - D~\frac{\partial^{2} F_{0}}{\partial x^{2}} &=& 0, \\ \label{eq16} \fl \hspace{1.5cm} \frac{\partial F_{n}}{\partial \tau} + U'(x)~\frac{\partial F_{n}} {\partial x} - D~\frac{\partial^{2} F_{n}}{\partial x^{2}} &=& \frac{1}{\tau_{\rmd}}~\rme^{-(t'-t_{0})/\tau_{\rmd}}~\rme^{\tau/\tau_{\rmd}} ~\frac{\partial F_{n-1}}{\partial x}, ~~{\rm for}~~n \geq 1. \end{eqnarray} \indent For the same reason, the non-homogeneous initial condition, $F(x,\tau=0;t') = 1$ for $x < x_{\rm thr}$, has to be imposed on the zeroth-order function $F_{0}(x,\tau=0;t')$, yielding \begin{eqnarray}\label{eq17} F_{0}(x,\tau=0;t') &=& \cases{1 ~~{\rm if}~ x<x_{\rm thr},\\ 0 ~~{\rm if}~x\geq x_{\rm thr},}\\ \label{eq18} F_{n}(x,\tau=0;t') &=& 0 ~~{\rm for}~ n\geq 1. 
\end{eqnarray} \indent Irrespective of the order, the boundary condition reads $F_{n}(x=x_{\rm thr},\tau;t') = 0$ for all $n$.\\ \indent The preceding system of recursive \textit{partial} equations, Eqs.~(\ref{eq15}) and~(\ref{eq16}), can be Laplace-transformed to an equivalent system of \textit{ordinary} equations, \begin{eqnarray} \label{eq19} \fl \hspace{1.cm} s~\tilde{F}_{0}^{L}(x) + U'(x)~\frac{\rmd\tilde{F}_{0}^{L}(x)}{\rmd x} - D~\frac{\rmd^{2}\tilde{F}_{0}^{L}(x)}{\rmd x^2} &=& 1, \\ \label{eq20} \fl \hspace{1.cm} s ~\tilde{F}_{n}^{L}(x) + U'(x)~\frac{\rmd \tilde{F}_{n}^{L}(x)}{\rmd x} - D~\frac{\rmd^{2}\tilde{F}_{n}^{L}(x)}{\rmd x^{2}} &=& \frac{1}{\tau_{\rmd}}~\rme^{-(t'-t_{0})/\tau_{\rmd}}~\frac{\rmd}{\rmd x}\left[ \tilde{F}_{n-1}^{L}(x)\Big\rfloor_{s-1/\tau_{\rmd}}\right]. \end{eqnarray} \indent The Laplace transform of each term of the survival probability, $\mathcal{L} [F_{n}(x,\tau;t')]$, is represented by $\tilde{F}_{n}^{L}(x)$, omitting the parametric dependence on $s$ and, for the moment, on $t'$. In Eqs.~(\ref{eq19}) and~(\ref{eq20}), the initial condition required by the Laplace transform of the temporal derivative, $\mathcal{L}[\partial F_{n}(x,\tau;t') / \partial \tau] = s ~\tilde{F}_{n}^{L}(x) - F_{n}(x,\tau=0;t')$, has been assigned to an internal point, $x<x_{\rm thr}$, according to Eqs.~(\ref{eq17}) and~(\ref{eq18}). This system of recursive equations has to be solved together with the Laplace-transformed boundary condition, $\tilde{F}_{n}^{L}(x=x_{\rm thr}) = 0$ for all $n$.\\ \indent Up to now, we have addressed the evolution of the survival probability from the backward state, $x$ at time $t$, to the current position, $x'$ at time $t'$. Since the experimental setup (and in particular the temporally inhomogeneous exponential time-dependent drift) refers to the time $t_0$, when the particle is at position $x_0$, we need to link this survival probability to the real initial state. 
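The zeroth-order equation, Eq.~(\ref{eq19}), can be checked symbolically in the simplest case. For the Wiener process, $U(x) = -\mu x$, the Laplace transform of the survival probability is known in closed form (the standard drifted-Brownian-motion result, quoted here rather than derived); the following sympy sketch verifies that it satisfies Eq.~(\ref{eq19}) together with the absorbing boundary condition:

```python
import sympy as sp

x, s, mu, D, x_thr = sp.symbols('x s mu D x_thr', positive=True)

# closed-form Laplace transform of the survival probability for the
# Wiener process, U(x) = -mu*x (standard drifted-Brownian-motion result)
k = (sp.sqrt(mu**2 + 4*D*s) - mu) / (2*D)
F0 = (1 - sp.exp(-(x_thr - x)*k)) / s

# left-hand side of Eq. (19): s*F0 + U'(x)*F0' - D*F0'', with U'(x) = -mu
lhs = s*F0 + (-mu)*sp.diff(F0, x) - D*sp.diff(F0, x, 2)

assert sp.simplify(lhs - 1) == 0   # satisfies the ODE with unit source
assert F0.subs(x, x_thr) == 0      # absorbing boundary condition
```

The same closed form seeds the recursion of Eq.~(\ref{eq20}): each right-hand side is built from the previous order evaluated at the shifted Laplace argument.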
According to the inverse Laplace transform, the zeroth-order function of this probability \begin{eqnarray}\label{eq21} F_{0}(x,\tau)= \frac{1}{2\pi {\rm j}} \int_{\sigma-{\rm j}\infty}^{\sigma+{\rm j}\infty}\rme^{s\tau}~ \tilde{F}_{0}^{L}(x;s) ~\rmd s, \end{eqnarray} \noindent can be directly evaluated at the initial setting, $x = x_0$ at time difference $\tau = t'-t_0$. In virtue of Eq.~(\ref{eq19}), its Laplace transform satisfies \begin{equation} \label{eq22} \frac{\rmd^{2}\tilde{F}_{0}^{L}(x_0)}{\rmd x_0^2} - \frac{U'(x_0)}{D}~\frac{\rmd\tilde{F}_{0}^{L}(x_0)}{\rmd x_0} - \frac{s}{D}~\tilde{F}_{0}^{L}(x_0) = -\frac{1}{D} \end{equation} \noindent and $\tilde{F}_{0}^{L}(x_0 = x_{\rm thr}) = 0$.\\ \indent From Eq.~(\ref{eq20}), it is easy to show that higher order functions can be written as \begin{equation} \label{eq23} \tilde{F}_{n}^{L}(x) = \rme^{-n(t'-t_{0})/\tau_{\rmd}}~ \tilde{\mathbb{F}}_{n}^{L}(x), \end{equation} \noindent where the time-inhomogeneous part of the solution is restricted to the exponential factor. The time-homogeneous function $\tilde{\mathbb{F}}_{n}^{L}(x)$ satisfies \begin{equation} \label{eq24} \frac{\rmd^{2}\tilde{\mathbb{F}}_{n}^{L}(x)}{\rmd x^2} - \frac{U'(x)}{D}~\frac{\rmd\tilde{\mathbb{F}}_{n}^{L}(x)}{\rmd x} - \frac{s}{D}~\tilde{\mathbb{F}}_{n}^{L}(x) = -\frac{1}{\tau_{\rmd}D} \frac{\rmd}{\rmd x}\tilde{\mathbb{F}}_{n-1}^{L}(x)\rfloor_{s-1/\tau_{\rmd}}. \end{equation} \indent To impose the real initial condition we need to obtain each term in the temporal domain. According to the inverse Laplace transform and Eq.~(\ref{eq23}), these functions read \begin{eqnarray}\label{eq25} F_{n}(x,\tau;t')= \rme^{-n(t'-t_{0})/\tau_{\rmd}}~\frac{1}{2\pi {\rm j}} \int_{\sigma-{\rm j}\infty}^{\sigma+{\rm j}\infty}\rme^{s\tau}~ \tilde{\mathbb{F}}_{n}^{L}(x;s) ~\rmd s. \end{eqnarray} \indent However, this integral cannot be done directly and certain considerations have to be taken into account. 
Since we focus on systems in which there are appropriate solutions to the FPT problem in the unperturbed situation, the region of convergence of $\tilde{F}_{0}^{L}(x)$, bounded by the path of integration in Eq.~(\ref{eq21}), is assumed to be defined by $\sigma > 0$. Consequently, because of the shift in $s$ in the forcing term, the equation for the first-order function $\tilde{\mathbb{F}}_{1}^{L}(x)$, Eq.~(\ref{eq24}) for $n=1$, is valid in the region $\sigma > 1/\tau_{\rmd}$. The argument is recursive and, therefore, the region of convergence of the $n$th-order function is $\sigma > n/\tau_{\rmd}$. Accordingly, to evaluate the integral in Eq.~(\ref{eq25}) we make the substitution $z = s-n/\tau_{\rmd}$, \begin{eqnarray}\label{eq26} F_{n}(x,\tau;t')= \rme^{-n(t'-t_{0})/\tau_{\rmd}}~\rme^{n\tau/\tau_{\rmd}}~ \frac{1}{2\pi {\rm j}}\int_{\sigma_{z}-{\rm j}\infty}^{\sigma_{z}+{\rm j}\infty} \rme^{z\tau}~\tilde{\mathbb{F}}_{n}^{L}(x;z+n/\tau_{\rmd}) ~\rmd z, \end{eqnarray} \noindent where now $\sigma_{z} > 0$. Once we evaluate the initial setting, $x_0$ at time $t = t_0$ (or, equivalently, $\tau = t'-t_0$), the exponential time-dependent factors cancel out and the remaining contribution to the survival probability does not depend on $t'$, \begin{eqnarray}\label{eq27} F_{n}(x_0,\tau)= \frac{1}{2\pi {\rm j}}\int_{\sigma_{z}-{\rm j}\infty} ^{\sigma_{z}+{\rm j}\infty} \rme^{z\tau}~\tilde{\mathbb{F}}_{n}^{L} (x_0;z+n/\tau_{\rmd}) ~\rmd z. \end{eqnarray} \indent Moreover, the region of convergence of its Laplace transform is the semi-plane defined by ${\rm Re}(s) > 0$. 
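The repeated shift of the Laplace argument, $s \rightarrow s - n/\tau_{\rmd}$, used above is the standard frequency-shift property $\mathcal{L}[\rme^{a\tau}f(\tau)](s) = \tilde{f}(s-a)$, valid to the right of the shifted abscissa of convergence. A quick symbolic check on an arbitrary test function (the choice $f(\tau)=\rme^{-3\tau}$ is illustrative only):

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
f = sp.exp(-3*t)   # any Laplace-transformable test function

# L[e^{a t} f(t)](s) versus the plain transform with shifted argument
shifted = sp.laplace_transform(sp.exp(a*t) * f, t, s, noconds=True)
plain = sp.laplace_transform(f, t, s, noconds=True)

assert sp.simplify(shifted - plain.subs(s, s - a)) == 0
```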
To obtain $\tilde{\mathbb{F}}_{n}^{L} (x_0;z+n/\tau_{\rmd})$ we take the inverse Laplace transform on both sides of Eq.~(\ref{eq24}), \begin{eqnarray} \label{eq28} \fl \hspace{1.cm} \frac{1}{2\pi {\rm j}} \int_{\sigma-{\rm j}\infty}^{\sigma+{\rm j}\infty} \rme^{s\tau}~ \left[ \frac{\rmd^{2}\tilde{\mathbb{F}}_{n}^{L}(x_0;s)}{\rmd x_0^2} - \frac{U'(x_0)}{D}~\frac{\rmd\tilde{\mathbb{F}}_{n}^{L}(x_0;s)}{\rmd x_0} - \frac{s}{D}~\tilde{\mathbb{F}}_{n}^{L}(x_0;s) \right] ~ \rmd s \nonumber\\ = -\frac{1}{\tau_{\rmd}D} ~\frac{1}{2\pi {\rm j}} \int_{\sigma-{\rm j}\infty} ^{\sigma+{\rm j}\infty} \rme^{s\tau}~\frac{\rmd}{\rmd x_0}\tilde{\mathbb{F}} _{n-1}^{L}(x_0;s-1/\tau_{\rmd}) ~ \rmd s, \end{eqnarray} \noindent and apply the same substitution as before, $z = s-n/\tau_{\rmd}$, \begin{eqnarray} \label{eq29} \fl \hspace{1.cm} \frac{1}{2\pi {\rm j}} \int_{\sigma_z-{\rm j}\infty}^{\sigma_z+{\rm j}\infty} \rme^{z\tau}~ \left[ \frac{\rmd^{2}\tilde{\mathbb{F}}_{n}^{L}}{\rmd x_0^2} - \frac{U'(x_0)}{D}~\frac{\rmd\tilde{\mathbb{F}}_{n}^{L}}{\rmd x_0} - \frac{(z+n/\tau_{\rmd})}{D}~\tilde{\mathbb{F}}_{n}^{L} \right] ~ \rmd z \nonumber\\ = -\frac{1}{\tau_{\rmd}D} ~\frac{1}{2\pi {\rm j}} \int_{\sigma_z-{\rm j}\infty} ^{\sigma_z+{\rm j}\infty} \rme^{z\tau}~ \frac{\rmd}{\rmd x_0}\tilde{\mathbb{F}}_{n-1}^{L} ~ \rmd z, \end{eqnarray} \noindent where we have simplified the notation to $\tilde{\mathbb{F}}_{n}^{L} = \tilde{\mathbb{F}}_{n}^{L} (x_0;z+n/\tau_{\rmd})$. 
From Eq.~(\ref{eq29}), it is straightforward to show that each function $\tilde{\mathbb{F}} _{n}^{L}(x_0;z+n/\tau_{\rmd})$ appearing in the integrand of Eq.~(\ref{eq27}) satisfies \begin{equation} \label{eq30} \fl \hspace{1.0cm} \frac{\rmd^{2}\tilde{\mathbb{F}}_{n}^{L}(x_0)}{\rmd x_0^2} - \frac{U'(x_0)}{D}~\frac{\rmd\tilde{\mathbb{F}}_{n}^{L}(x_0)}{\rmd x_0} - \frac{(s+n/\tau_{\rmd})}{D}~\tilde{\mathbb{F}}_{n}^{L}(x_0) = -\frac{1}{\tau_{\rmd}D} ~ \frac{\rmd}{\rmd x_0} \tilde{\mathbb{F}}_{n-1}^{L}(x_0), \end{equation} \noindent with $\tilde{\mathbb{F}}_{n}^{L}(x_0 = x_{\rm thr}) = 0$.\\ \indent To summarize, the survival probability from the initial state $(x_0,t_0)$ to the current state $(x,t_0 + \tau)$ is given by \begin{equation}\label{eq31} \fl \hspace{0.5cm} F(x_0,\tau) = F_{0}(x_0,\tau) + \epsilon~F_{1}(x_0,\tau) + \epsilon^{2}~F_{2}(x_0,\tau) + \dots = \sum_{n=0}^{\infty}\epsilon^{n}~F_{n}(x_0,\tau), \end{equation} \noindent where all functions are obtained from the corresponding inverse Laplace transforms, either given by Eq.~(\ref{eq21}) or Eq.~(\ref{eq27}). In turn, the Laplace transform of the unperturbed system, $\tilde{F}_{0}^{L}(x_0)$, satisfies Eq.~(\ref{eq22}), whereas all time-homogeneous higher order terms, $\tilde{\mathbb{F}}_{n}^{L}(x_0)$, are recursively obtained from Eq.~(\ref{eq30}). For any order, the boundary condition is zero, $\tilde{F}_{0}^{L}(x_{\rm thr}) = \tilde{\mathbb{F}}_{n} ^{L}(x_{\rm thr}) = 0$.\\ \indent As shown in Section \ref{unperturbed}, FPT density function can be directly obtained once the survival probability from the initial state to the current state is known, see Eq.~(\ref{eq9}). 
Therefore, the series structure in Eq.~(\ref{eq31}) is inherited by the FPT density function \cite{Urdapilleta2011a, Urdapilleta2012}, \begin{equation}\label{eq32} \phi(x_0,\tau) = \sum_{n=0}^{\infty}\epsilon^{n}~\phi_{n}(x_0,\tau), \end{equation} \noindent where each order function, $\phi_{n}(x_0,\tau)$, satisfies \begin{equation}\label{eq33} \phi_{n}(x_0,\tau) = - \frac{\partial F_{n}(x_0,\tau)}{\partial \tau}. \end{equation} \indent In terms of the Laplace transform, this set of equations reads \begin{eqnarray}\label{eq34} \tilde{\phi}_{0}^{L}(x_0;s) = 1 - s ~ \tilde{F}_{0}^{L}(x_0;s) \\ \label{eq35} \tilde{\phi}_{n}^{L}(x_0;s) = - s ~ \tilde{\mathbb{F}}_{n}^{L}(x_0;s), ~~~~n \geq 1, \end{eqnarray} \noindent where we have taken into account the conditions given by Eqs.~(\ref{eq17}) and (\ref{eq18}) at the real initial state. With these relationships, and based on Eqs.~(\ref{eq22}) and (\ref{eq30}), it is straightforward to derive the system of equations governing each term in the series solution for the FPT density function, Eq.~(\ref{eq32}), \begin{eqnarray}\label{eq36} \fl \hspace{1.75cm} \frac{\rmd^{2}\tilde{\phi}_{0}^{L}(x_0)}{\rmd x_0^2} - \frac{U'(x_0)}{D}~\frac{\rmd\tilde{\phi}_{0}^{L}(x_0)}{\rmd x_0} - \frac{s}{D}~\tilde{\phi}_{0}^{L}(x_0) &=& 0, \\ \label{eq37} \fl \hspace{0.25cm} \frac{\rmd^{2}\tilde{\phi}_{n}^{L}(x_0)}{\rmd x_0^2} - \frac{U'(x_0)}{D}~\frac{\rmd\tilde{\phi}_{n}^{L}(x_0)}{\rmd x_0} - \frac{(s+n/\tau_{\rmd})}{D}~\tilde{\phi}_{n}^{L}(x_0) &=& -\frac{1}{\tau_{\rmd}D} ~ \frac{\rmd}{\rmd x_0} \tilde{\phi}_{n-1}^{L}(x_0), ~n \geq 1, \end{eqnarray} \noindent with boundary conditions $\tilde{\phi}_{0}^{L}(x_{\rm thr}) = 1$ and $\tilde{\phi}_{n}^{L}(x_{\rm thr}) = 0$ for $n\geq 1$. In all cases, a second boundary condition is necessary for these second order ordinary differential equations; as usual, bounded solutions at $x_0 \rightarrow -\infty$ are required. 
\section{Explicit solutions} \subsection{FPT statistics for the Wiener process}\label{wiener} \indent The Wiener process is defined by the potential $U(x) = -\mu~x$, where $\mu > 0$ guarantees threshold crossing. The associated Langevin equation with additive time-dependent exponential drift is \begin{equation}\label{eq38} \frac{\rmd x}{\rmd t} = \mu + \frac{\epsilon} {\tau_{\rmd}}~\rme^{-(t-t_0)/\tau_{\rmd}} + \sqrt{2D}~\xi(t). \end{equation} \indent This is the simplest of the diffusion processes we are considering, and it has already been solved in \cite{Urdapilleta2011a, Urdapilleta2012}. The formulation of the FPT problem in these works is equivalent to the derivation presented here, although in this article we have managed to extend it to a general context, without explicit knowledge of the survival probability.\\ \indent The solutions to Eqs.~(\ref{eq36}) and (\ref{eq37}) for this particular potential read \begin{eqnarray} \label{eq39} \fl \hspace{0.5cm} \tilde{\phi}_{0}^{L}(x_0;s) = \exp\left\{\frac{(x_{\rm thr}-x_{0})} {2D}[\mu-\sqrt{\mu^{2}+4Ds}]\right\},\\ \label{eq40} \fl \hspace{0.5cm} \tilde{\phi}_{n}^{L}(x_0;s) = - \frac{[\mu-\sqrt{\mu^{2}+4Ds}]}{2D}\nonumber\\ \fl \hspace{1.75cm} \times~\sum_{k=0}^{n}b_{n,k}(s)~\exp\left\{ \frac{(x_{\rm thr}-x_{0})}{2D}[\mu-\sqrt{\mu^{2}+4D(s+k/\tau_{\rm d})}]\right\},~~{\rm for}~n\geq 1, \end{eqnarray} \noindent where the coefficients are given by the recursive scheme, \begin{eqnarray}\label{eq41} \fl \hspace{0.5cm} b_{n,k}(s) &=& -\frac{b_{n-1,k}(s)}{n-k}~\frac{[\mu-\sqrt{\mu^{2}+4D(s+k/\tau_{\rm d})}]}{2D},~~{\rm for}~k=0,\dots,n-1,\\ \fl \hspace{0.5cm} \label{eq42} b_{n,n}(s) &=& -\sum_{k=0}^{n-1} b_{n,k}(s), \end{eqnarray} \noindent starting from $b_{1,0}(s)=1$ and $b_{1,1}(s)=-1$. As shown in \cite{Urdapilleta2012}, these expressions can be proved by mathematical induction to satisfy Eqs.~(\ref{eq36}) and (\ref{eq37}), for $U'(x_0) = -\mu$.
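For illustration, the recursive scheme of Eqs.~(\ref{eq41}) and (\ref{eq42}) is straightforward to implement numerically; the following sketch builds the coefficients $b_{n,k}(s)$ for real $s$, with illustrative values of $\mu$, $D$ and $\tau_{\rm d}$ that are assumptions chosen for the example, not taken from the text:

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the text)
mu, D, tau_d = 0.1, 0.005, 100.0

def wiener_coeffs(n_max, s):
    """Coefficients b_{n,k}(s) of Eqs. (41)-(42), for real s > 0."""
    # bracket [mu - sqrt(mu^2 + 4D(s + k/tau_d))]/(2D) appearing in Eq. (41)
    lam = lambda k: (mu - np.sqrt(mu**2 + 4 * D * (s + k / tau_d))) / (2 * D)
    b = {(1, 0): 1.0, (1, 1): -1.0}          # starting values
    for n in range(2, n_max + 1):
        for k in range(n):                    # Eq. (41)
            b[(n, k)] = -b[(n - 1, k)] / (n - k) * lam(k)
        b[(n, n)] = -sum(b[(n, k)] for k in range(n))   # Eq. (42)
    return b

b = wiener_coeffs(4, s=0.2)
# By construction, every row sums to zero (Eq. (42))
assert all(abs(sum(b[(n, k)] for k in range(n + 1))) < 1e-12
           for n in range(1, 5))
```

The row-sum property enforced by Eq.~(\ref{eq42}) guarantees that each $\tilde{\phi}_{n}^{L}$ vanishes at the threshold, as required.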
\subsection{FPT statistics for the Ornstein-Uhlenbeck process}\label{ou} \indent The potential for the Ornstein-Uhlenbeck process is given by $U(x) = -\mu~x + \frac{1}{2} ~ x^2 / \tau_{\rm m}$, where the time constant $\tau_{\rm m}$ characterizes the relaxation process towards the equilibrium point, $x_{\rm eq} = \mu ~ \tau_{\rm m}$, for noise-free dynamics without a threshold. Alternatively, $\tau_{\rm m}$ sets the time constant of the exponential autocorrelation function of this process. The presence of the threshold divides the behavior of the system according to the driving force $\mu$. In the \textit{supra-threshold} regime (also denoted \textit{stimulus-driven}), $\mu > x_{\rm thr} / \tau_{\rm m}$, whenever the particle is released at $x_{0} < x_{\rm thr}$, it reaches the threshold in a finite time. In contrast, in the \textit{sub-threshold} regime (or \textit{noise-driven}), $\mu < x_{\rm thr} / \tau_{\rm m}$, the particle cannot reach the threshold unless assisted by noise. In the limiting case $\mu = x_{\rm thr} / \tau_{\rm m}$, the noise-free dynamics approach the threshold only asymptotically, reaching it in infinite time. When supplemented by the time-dependent exponential drift, the associated Langevin equation reads \begin{equation}\label{eq43} \frac{\rmd x}{\rmd t} = \mu - \frac{x}{\tau_{\rm m}} + \frac{\epsilon} {\tau_{\rmd}}~\rme^{-(t-t_0)/\tau_{\rmd}} + \sqrt{2D}~\xi(t). \end{equation} \subsubsection{Zeroth-order density function} \indent The density function of the FPT for the unperturbed system, $\phi_{0}(x_0;\tau)$, can be obtained from the inverse Laplace transform of the solution to Eq.~(\ref{eq36}) for this particular potential; in detail, \begin{equation}\label{eq44} \frac{\rmd^{2}\tilde{\phi}_{0}^{L}}{\rmd x_0^2} + \left( \frac{\mu}{D} - \frac{x_0}{\tau_{\rm m} D} \right) ~ \frac{\rmd\tilde{\phi}_{0}^{L}}{\rmd x_0} - \frac{s}{D}~\tilde{\phi}_{0}^{L} = 0.
\end{equation} \indent It can be shown that, with the change of variable \begin{equation}\label{eq45} z = \sqrt{\frac{\tau_{\rm m}}{D}}\left( \mu - \frac{x_0}{\tau_{\rm m}} \right), \end{equation} \noindent the solution can be expressed as \cite{Darling1953, Roy1969, Capocelli1971} \begin{equation}\label{eq46} \tilde{\phi}_{0}^{L}(x_0;s) = \rme^{z^2 / 4}~u_0(z;s), \end{equation} \noindent where $u_0(z;s)$ satisfies \begin{equation}\label{eq47} \frac{\rmd^{2}u_0}{\rmd z^2} + \left( -\tau_{\rm m}~s + \frac{1}{2} - \frac{1}{4} z^2 \right) ~u_0 = 0. \end{equation} \indent The general solution to this homogeneous equation reads \begin{equation}\label{eq48} u_0(z;s) = c_1 ~ \mathcal{D}_{-\tau_{\rm m}s}(z) + c_2 ~ \mathcal{D}_{-\tau_{\rm m}s}(-z), \end{equation} \noindent where $\mathcal{D}_\nu(z)$ are the parabolic cylinder functions according to Whittaker's notation \cite{Handbook}. Given that $\mathcal{D}_{-\tau_{\rm m}s}(-z)$ diverges at $z \rightarrow \infty$, well-behaved solutions require that $c_2 = 0$. Note that because of the transformation, the new domain is from $z_{\rm thr}$ to $\infty$, where $z_{\rm thr} = \sqrt{\tau_{\rm m}/D}~( \mu - x_{\rm thr}/\tau_{\rm m})$. In virtue of the boundary condition at the threshold, $\tilde{\phi}_{0}^{L} (x_{\rm thr}) = 1$, it is straightforward to show that \begin{equation}\label{eq49} u_0(z;s) = \frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_ {-\tau_{\rm m}s}(z_{\rm thr})}~\mathcal{D}_{-\tau_{\rm m}s}(z). \end{equation} \indent Equivalently, \begin{equation}\label{eq50} \tilde{\phi}_{0}^{L}(x_0;s) = {\rme}^{-\frac{\tau_{\rm m}}{4D}\left[ \left(\mu-\frac{x_{\rm thr}}{\tau_{\rm m}}\right)^2 - \left(\mu-\frac{x_0}{\tau_{\rm m}}\right)^2 \right]}~\frac{\mathcal{D}_ {-\tau_{\rm m}s}\left[ \sqrt{\frac{\tau_{\rm m}}{D}}\left( \mu - \frac{x_0}{\tau_{\rm m}} \right) \right]}{\mathcal{D}_ {-\tau_{\rm m}s}\left[ \sqrt{\frac{\tau_{\rm m}}{D}}\left( \mu - \frac{x_{\rm thr}}{\tau_{\rm m}} \right) \right]}. 
\end{equation} \subsubsection{Higher-order density functions} \indent According to Eq.~(\ref{eq37}), the Laplace transform of higher-order density functions, $\tilde{\phi}_{n}^{L}(x_0;s)$, for the Ornstein-Uhlenbeck process satisfies \begin{equation}\label{eq51} \fl \hspace{1.cm} \frac{\rmd^{2}\tilde{\phi}_{n}^{L}}{\rmd x_0^2} + \left( \frac{\mu}{D} - \frac{x_0}{\tau_{\rm m} D} \right) ~ \frac{\rmd\tilde{\phi}_{n}^{L}}{\rmd x_0} - \frac{(s+n/\tau_{\rmd})}{D}~\tilde{\phi}_{n}^{L} = -\frac{1}{\tau_{\rmd}D} ~ \frac{\rmd \tilde{\phi}_{n-1}^{L}} {\rmd x_0}, ~n \geq 1. \end{equation} \indent In terms of the variable $z$, defined by Eq.~(\ref{eq45}), it is easy to show that all functions can be written as \begin{equation}\label{eq52} \tilde{\phi}_{n}^{L}(x_0;s) = \rme^{z^2 / 4}~u_{n}(z;s), \end{equation} \noindent where $u_{n}(z;s)$, for $n \geq 1$, is governed by \begin{equation}\label{eq53} \fl \hspace{1.cm} \frac{\rmd^{2}u_{n}}{\rmd z^2} + \left[ -\tau_{\rm m}\left( s + \frac{n}{\tau_{\rmd}} \right) + \frac{1}{2} - \frac{1}{4} z^2 \right]~u_{n} = \frac{1} {\tau_{\rmd}}~\sqrt{\frac{\tau_{\rm m}}{D}}~\left( \frac{1}{2}~z~u_{n-1} + \frac{\rmd u_{n-1}}{\rmd z} \right), \end{equation} \noindent and the boundary condition at the transformed threshold is $u_{n}(z_{\rm thr};s) = 0$.\\ \indent Next, we prove by mathematical induction that for $\tau_{\rm m} \neq \tau_{\rmd}$, the solution to Eq.~(\ref{eq53}) reads \begin{equation}\label{eq54} u_{n}(z;s) = \sum_{k=0}^{n} b_{n,k}(s)~\mathcal{D}_{-\tau_{\rm m} [s+(n-k)/\tau_{\rm m}+k/\tau_{\rm d}]}(z), \end{equation} \noindent where the coefficients are given by the recursive structure, \begin{eqnarray} \label{eq55} \fl b_{n,k}(s) &=& \frac{[s+(n-1-k)/\tau_{\rm m} + k/\tau_{\rmd}]} {(n-k)}~\frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})} ~b_{n-1,k}(s),~~{\rm for}~k=0, \dots, n-1,\\ \label{eq56} \fl b_{n,n}(s) &=& - \frac{1}{\mathcal{D}_{-\tau_{\rm m}(s+n/\tau_{\rmd})} (z_{\rm thr})} \sum_{k=0}^{n-1} 
b_{n,k}(s)~\mathcal{D}_ {-\tau_{\rm m}[s+(n-k)/\tau_{\rm m}+k/\tau_{\rmd}]}(z_{\rm thr}), \end{eqnarray} \noindent starting from \begin{eqnarray} \label{eq57} b_{1,0}(s) &=& \frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})}~ \frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_{-\tau_{\rm m}s} (z_{\rm thr})}~s,\\ \label{eq58} b_{1,1}(s) &=& - \frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})}~ \frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_{-\tau_{\rm m}s} (z_{\rm thr})}~\frac{\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm m})}(z_{\rm thr})} {\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rmd})}(z_{\rm thr})}~s. \end{eqnarray} \indent Assuming that the $n$th-order function is given by Eq.~(\ref{eq54}), according to Eq.~(\ref{eq53}) the $(n+1)$th-order function should satisfy \begin{eqnarray} \fl \hspace{0.5cm} \frac{\rmd^{2}u_{n+1}}{\rmd z^2} + \left\{ -\tau_{\rm m}\left[ s + \frac{(n+1)}{\tau_{\rmd}} \right] + \frac{1}{2} - \frac{1}{4} z^2 \right\}~u_{n+1} = \frac{1}{\tau_{\rmd}} ~ \sqrt{\frac{\tau_{\rm m}}{D}}\nonumber\\ \label{eq59} \fl \hspace{1.5cm} \times \sum_{k=0}^{n} b_{n,k}(s)~ \left\{ \frac{1}{2}~z~\mathcal{D}_{-\tau_{\rm m} [s+(n-k)/\tau_{\rm m}+k/\tau_{\rm d}]}(z) + \frac{\rmd \mathcal{D}_{-\tau_{\rm m}[s+(n-k)/ \tau_{\rm m}+k/\tau_{\rm d}]}(z)}{\rmd z} \right\}. 
\end{eqnarray} \indent From the recurrence relationships that Weber parabolic cylinder functions satisfy \cite{Handbook}, it is easy to show that \begin{equation}\label{eq60} \frac{\rmd \mathcal{D}_{\nu}(z)}{\rmd z} + \frac{1}{2}~z~ \mathcal{D}_{\nu}(z) = \nu~\mathcal{D}_{\nu-1}(z), \end{equation} \noindent and, therefore, the forcing term in Eq.~(\ref{eq59}) simplifies to \begin{eqnarray} \fl \hspace{0.5cm} \frac{\rmd^{2}u_{n+1}}{\rmd z^2} + \left\{ -\tau_{\rm m}\left[ s + \frac{(n+1)}{\tau_{\rmd}} \right] + \frac{1}{2} - \frac{1}{4} z^2 \right\}~u_{n+1} = -\frac{\tau_{\rm m}} {\tau_{\rmd}} ~ \sqrt{\frac{\tau_{\rm m}}{D}} \nonumber\\ \label{eq61} \fl \hspace{1.5cm} \times \sum_{k=0}^{n} b_{n,k}(s)~\left[ s + \frac{(n-k)} {\tau_{\rm m}} + \frac{k}{\tau_{\rm d}} \right]~ \mathcal{D}_{-\tau_{\rm m}[s+(n+1-k)/\tau_{\rm m}+ k/\tau_{\rm d}]}(z). \end{eqnarray} \indent The solution to the homogeneous part of this equation is given by an analogous expression to Eq.~(\ref{eq48}), \begin{equation}\label{eq62} \fl \hspace{0.75cm} u_{n+1}^{\rm hom.}(z;s) = c_1 ~ \mathcal{D}_{-\tau_{\rm m}[s+(n+1)/\tau_{\rm d}]}(z) + c_2 ~ \mathcal{D}_{-\tau_{\rm m}[s+(n+1)/\tau_{\rm d}]}(-z), \end{equation} \noindent whereas a particular solution is \begin{eqnarray} \fl \hspace{0.75cm} u_{n+1}^{\rm part.}(z;s) = \frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})} \sum_{k=0}^{n} \frac{[s+(n-k)/\tau_{\rm m}+k/\tau_{\rm d}]}{(n+1-k)} ~b_{n,k}(s)~ \nonumber\\ \label{eq63} \fl \hspace{8.5cm} \times \mathcal{D}_{-\tau_{\rm m}[s+(n+1-k)/\tau_{\rm m}+ k/\tau_{\rm d}]}(z). \end{eqnarray} \indent The recursive structure of the coefficients is implicitly defined in Eq.~(\ref{eq63}) and agrees to Eq.~(\ref{eq55}). 
Therefore, the general solution $u_{n+1}(z;s) = u_{n+1}^{\rm hom.}(z;s) + u_{n+1}^{\rm part.}(z;s)$ can be expressed as \begin{eqnarray} \fl \hspace{0.75cm} u_{n+1}(z;s) = \sum_{k=0}^{n} b_{n+1,k}(s)~ \mathcal{D}_{-\tau_{\rm m} [s+(n+1-k)/\tau_{\rm m}+k/\tau_{\rm d}]}(z) + c_1 ~ \mathcal{D}_{-\tau_{\rm m} [s+(n+1)/\tau_{\rm d}]}(z) \nonumber\\ \label{eq64} \fl \hspace{7.cm} + ~c_2 ~ \mathcal{D}_{-\tau_{\rm m} [s+(n+1)/\tau_{\rm d}]}(-z). \end{eqnarray} \indent As before, boundedness as $x_0 \rightarrow -\infty$ (that is, as $z \rightarrow \infty$) implies that $c_2 = 0$, and the evaluation of the boundary condition, $u_{n+1}(z_{\rm thr};s) = 0$, determines $c_1$. Taking into account the definition given in Eq.~(\ref{eq56}), the $(n+1)$th-order function reads \begin{eqnarray} \label{eq65} \fl \hspace{0.75cm} u_{n+1}(z;s) = \sum_{k=0}^{n} b_{n+1,k}(s)~ \mathcal{D}_{-\tau_{\rm m} [s+(n+1-k)/\tau_{\rm m}+k/\tau_{\rm d}]}(z) \nonumber\\ \fl \hspace{7.cm} + ~b_{n+1,n+1}(s)~\mathcal{D}_{-\tau_{\rm m}[s+(n+1)/\tau_{\rm d}]}(z), \nonumber\\ \fl \hspace{2.6cm} = \sum_{k=0}^{n+1} b_{n+1,k}(s)~ \mathcal{D}_{-\tau_{\rm m} [s+(n+1-k)/\tau_{\rm m}+k/\tau_{\rm d}]}(z), \end{eqnarray} \noindent which clearly satisfies Eq.~(\ref{eq54}) for $(n+1)$. Therefore, as long as Eq.~(\ref{eq54}) holds for the $n$th-order function, it also holds for the next order, with coefficients related by Eqs.~(\ref{eq55}) and (\ref{eq56}).\\ \indent The proof is completed by observing that the first-order function $u_{1}(z;s)$ belongs to the family described by Eq.~(\ref{eq54}).
According to Eq.~(\ref{eq53}) and taking into account the solution for the zeroth-order function, Eq.~(\ref{eq49}), this function satisfies \begin{equation}\label{eq66} \fl \frac{\rmd^{2}u_{1}}{\rmd z^2} + \left[ -\tau_{\rm m} \left(s + \frac{1}{\tau_{\rmd}} \right) + \frac{1}{2} - \frac{1}{4} z^2 \right]~u_{1} = -\frac{\tau_{\rm m}} {\tau_{\rmd}} ~ \sqrt{\frac{\tau_{\rm m}}{D}}~\frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_{-\tau_{\rm m}s}(z_{\rm thr})}~s~\mathcal{D}_ {-\tau_{\rm m}(s+1/\tau_{\rm m})}(z), \end{equation} \noindent where we have used the recurrence relationship given by Eq.~(\ref{eq60}). The general solution to this equation is \begin{eqnarray} \fl \hspace{0.75cm} u_1(z;s) = c_1~\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm d})}(z) + c_2~\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm d})}(-z) \nonumber\\ \label{eq67} \fl \hspace{5.0cm} +\frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})}~ \frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_{-\tau_{\rm m}s} (z_{\rm thr})}~s~\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm m})}(z). \end{eqnarray} \indent Since $c_2 = 0$ for bounded solutions and $c_1$ is determined from the evaluation of the boundary condition, $u_{1}(z_{\rm thr};s) = 0$, the first-order function is explicitly given by \begin{eqnarray} \fl \hspace{0.75cm} u_1(z;s) = \frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})}~ \frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_{-\tau_{\rm m}s} (z_{\rm thr})}~s~\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm m})}(z) \nonumber\\ \label{eq68} \fl \hspace{2.5cm} - \frac{\sqrt{\tau_{\rm m}/D}}{(1-\tau_{\rmd}/\tau_{\rm m})}~ \frac{\rme^{-z_{\rm thr}^2 / 4}}{\mathcal{D}_{-\tau_{\rm m}s} (z_{\rm thr})}~\frac{\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm m})}(z_{\rm thr})} {\mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rmd})}(z_{\rm thr})}~s~ \mathcal{D}_{-\tau_{\rm m}(s+1/\tau_{\rm d})}(z), \end{eqnarray} \noindent which can be expressed according to Eq.~(\ref{eq54}), with coefficients given by Eqs.~(\ref{eq57}) and (\ref{eq58}). 
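The closed-form zeroth-order transform lends itself to a direct numerical spot check. The sketch below evaluates Eq.~(\ref{eq50}) with SciPy's parabolic cylinder routine (\texttt{scipy.special.pbdv}, which returns $\mathcal{D}_{\nu}$ and its derivative) and verifies the boundary condition as well as the ODE, Eq.~(\ref{eq44}), by finite differences; all parameter values are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.special import pbdv  # parabolic cylinder function D_nu

# Illustrative parameter values (assumptions, not from the text)
mu, D, tau_m, x_thr = 0.1333, 0.005, 10.0, 1.0
s = 0.25

def phi0_L(x0):
    """Eq. (50): Laplace transform of the unperturbed FPT density."""
    z     = np.sqrt(tau_m / D) * (mu - x0 / tau_m)
    z_thr = np.sqrt(tau_m / D) * (mu - x_thr / tau_m)
    pref  = np.exp(-(tau_m / (4 * D))
                   * ((mu - x_thr / tau_m) ** 2 - (mu - x0 / tau_m) ** 2))
    return pref * pbdv(-tau_m * s, z)[0] / pbdv(-tau_m * s, z_thr)[0]

# Boundary condition at the threshold
assert abs(phi0_L(x_thr) - 1.0) < 1e-12

# Finite-difference residual of the ODE, Eq. (44)
h, x0 = 1e-4, 0.0
d1 = (phi0_L(x0 + h) - phi0_L(x0 - h)) / (2 * h)
d2 = (phi0_L(x0 + h) - 2 * phi0_L(x0) + phi0_L(x0 - h)) / h ** 2
res = d2 + (mu / D - x0 / (tau_m * D)) * d1 - (s / D) * phi0_L(x0)
assert abs(res) < 1e-3   # small compared with the individual terms
```

The residual is several orders of magnitude below the individual terms of the equation, confirming the closed form at this parameter point.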
The verification of this first-order function completes the demonstration by mathematical induction of the series solution to the FPT density function, given by Eqs.~(\ref{eq52}) and (\ref{eq54}) in the transformed variable $z$, with coefficients defined by Eqs.~(\ref{eq55})-(\ref{eq58}).\\ \indent For completeness, the recursive solution for the special case of the Ornstein-Uhlenbeck process with $\tau_{\rm m} = \tau_{\rmd}$ is given in the Appendix. \subsection{Other potentials} \indent The survival probability and FPT statistics for a system driven by an exponential time-dependent drift in an arbitrary potential can be \textit{formally} obtained in a recursive scheme, whenever the Green's function of Eq.~(\ref{eq37}) exists. This function is practically the same as that of the unperturbed system, Eq.~(\ref{eq36}), so essentially the FPT problem for the time-inhomogeneous setup is as analytically tractable as for the unperturbed case. The procedure is exemplified in the Appendix, for the case of the Ornstein-Uhlenbeck process with $\tau_{\rm m} = \tau_{\rmd}$. \section{Comparison to numerical simulations} \indent As the solution we have found for the present FPT problem is given in terms of a series, we need to confirm its convergence or resort to numerical simulations to test its validity. We have not reached general conclusions about the domain of convergence in the parameter space, and so here we exemplify the usefulness of our approach by taking several cases spanning different regimes, for those systems for which we have found explicit solutions. For the Wiener process, we refer the reader to \cite{Urdapilleta2012}, where we have shown that the series solution is valid in the prototypical case corresponding to the supra-threshold regime, as it provides an excellent description far beyond the linear perturbative scenario. For the Ornstein-Uhlenbeck process, several parameters are available: $\tau_{\rm m}$, $x_{\rm thr}$, $x_0$, $\mu$, $D$, $\tau_{\rm d}$, and $\epsilon$.
However, the exact values of $x_{\rm thr}$ and $x_0$ do not add any complexity to the model, and $\tau_{\rm m}$ sets the timescale of the dynamics. Therefore, there are three main parameters to explore, $\mu$, $D$, and $\tau_{\rm d}$, whereas $\epsilon$ controls the intensity of the superimposed time-dependent drift. \begin{figure}[t!] \begin{center} \includegraphics[scale=1.0]{Figure1} \caption{\label{fig1} Comparison between the series solution for the FPT density function and numerical results for different cases of the Ornstein-Uhlenbeck process. (a) Low noise sub-threshold regime. (b) Low noise supra-threshold regime. (c) High noise sub-threshold regime. (d) High noise supra-threshold regime. In all cases, $x_0 = 0$, $x_{\rm thr} = 1$, $\tau_{\rm m} = 10~{\rm ms}$, and $\tau_{\rm d} = 100~{\rm ms}$. Different regimes are defined by $\mu = 0.075~{\rm ms}^{-1}$ (sub-threshold), $\mu = 0.1333~{\rm ms}^{-1}$ (supra-threshold), $D = 0.0025~{\rm ms}^{-1}$ (low noise), and $D = 0.01~{\rm ms}^{-1}$ (high noise). FPT distributions obtained from the numerical simulation of Eq.~(\ref{eq43}) for different intensities of the time-dependent exponential drift are shown by thick stair-like colored lines, labeled in the upper-right hand part of each panel: blue, light blue, black, light red, and red correspond to $\epsilon = +2.0, +0.5, 0, -0.5, -2.0$, respectively. The analytical expression, Eq.~(\ref{eq32}), obtained from the numerical inverse Laplace transform of the explicit solution, Eq.~(\ref{eq50}) for the unperturbed density, and Eqs.~(\ref{eq52}) and (\ref{eq54}) for higher order terms, up to the order $N$ indicated for each intensity, $\mathcal{O}(N)$, is represented as a thin yellow line. 
In all cases, the analytical results excellently describe the FPT statistics obtained from numerical simulations.} \end{center} \end{figure} \indent Given our interest in neuronal adaptation (see Section \ref{intro}), we focus on the correspondence between the Ornstein-Uhlenbeck process and the \textit{leaky integrate-and-fire} neuron model to define the typical parameters. In this model, $x$ is the transmembrane voltage, the input derived from the potential, $-U'(x)$, represents external driving as well as currents flowing through specific (leaky) membrane channels, and the superimposed exponential temporal drift corresponds to an adaptation current. Without stimulation, the voltage decays to the resting potential, $x_0$, whereas when inputs drive the neuron across the threshold, $x_{\rm thr}$, a spike is declared and the voltage is reset to a starting point, here assumed to be equal to $x_0$. The specific problem of neuronal adaptation also includes a history-dependent process (see Section \ref{discussion} for further discussion), not included here. By redefining $(x-x_0)/x_{\rm thr} \rightarrow x$, the new dimensionless voltage $x$ starts at $x_0 = 0$ and the FPT problem corresponds to reaching the threshold $x_{\rm thr} = 1$. On the other hand, the temporal scale is set by $\tau_{\rm m}$, here assumed to be $\tau_{\rm m} = 10$ time units (ms), in agreement with experimental values \cite{Koch, GerstnerKistler}. All parameter values explored here have to be compared to this particular $\tau_{\rm m}$; otherwise, a nondimensionalization procedure can be used. The remaining parameters are used to explore the convergence of the series solution in different cases. As mentioned in Section \ref{ou}, different dynamical regimes can be defined according to the intensity of the constant drift, $\mu$. In turn, the intensity of the noise, $D$, essentially modulates the dispersion of the FPT distribution.
Once the set of parameters to examine has been defined, numerical FPT distributions are obtained from first-hitting times of the system evolving according to the Langevin equation that governs its dynamics, Eq.~(\ref{eq43}). \indent Explicit analytical results are given in the Laplace domain: the zeroth order by Eq.~(\ref{eq50}) and higher order terms by Eqs.~(\ref{eq52}) and (\ref{eq54}); numerical inversion is then required to transform them into the temporal domain. In detail, we performed a standard numerical integration (function NIntegrate in the Mathematica package) between proper limits on the imaginary axis, according to the definition of the inverse Laplace transform with a real integration variable, \begin{equation}\label{eq69} f(\tau) = \frac{1}{2\pi} ~ \int_{-\Omega}^{+\Omega} \tilde{f}^{L}({\rm j}\omega) ~ \rme^{{\rm j}\omega\tau}~ \rmd \omega, \end{equation} \noindent with the cutoff $\Omega$ chosen large enough to approximate the infinite integration limits. Depending on the integration parameters and on the function being inverted, numerical instabilities not associated with the validity of the series method itself may arise. However, in all the cases presented here, the analytical results in the temporal domain were consistent for different large cutoffs $\Omega$ and different numerical parameters. In figure \ref{fig1}, the FPT statistics in the sub/supra-threshold regimes, with low/high noise intensities, and different strengths of the time-dependent drift are shown. In all cases, the time constant of the exponential drift $\tau_{\rm d}$ is set to $100$ time units (ms). As observed, the series solution up to a certain order $N$, represented by the thin yellow lines, properly describes the FPT distributions obtained from the numerical simulations (stair-like colored histograms). However, different parameter combinations require different $N$ to converge.
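This inversion procedure can be validated on a case with a known closed-form inverse: for the unperturbed Wiener process, Eq.~(\ref{eq39}) inverts to the inverse Gaussian density. A minimal sketch of the truncated integration along the imaginary axis, with illustrative parameters and a NumPy trapezoidal rule standing in for \texttt{NIntegrate}, might read:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text)
mu, D, dx = 0.1, 0.005, 1.0          # dx plays the role of x_thr - x0

def F(s):
    """Eq. (39): transform of the unperturbed Wiener FPT density."""
    return np.exp(dx * (mu - np.sqrt(mu**2 + 4*D*s + 0j)) / (2*D))

def invert(tau, omega_max=20.0, n=200001):
    """Truncated Eq. (69), using the symmetry F(-jw) = conj(F(jw))."""
    w = np.linspace(0.0, omega_max, n)
    g = (F(1j * w) * np.exp(1j * w * tau)).real
    dw = w[1] - w[0]
    return dw * (g.sum() - 0.5 * (g[0] + g[-1])) / np.pi  # trapezoidal rule

tau = 10.0
# Known closed-form inverse: the inverse Gaussian density
exact = dx / np.sqrt(4*np.pi*D*tau**3) * np.exp(-(dx - mu*tau)**2 / (4*D*tau))
assert abs(invert(tau) - exact) < 1e-3
```

Because the transform decays like $\exp(-c\sqrt{\omega})$, a modest cutoff already reproduces the exact density to high accuracy in this example.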
In general, when the noise intensity is reduced, the FPT distribution becomes more sharply peaked (or distorted) in comparison to the unperturbed case, and the order of convergence increases concomitantly. This can be noticed in figures \ref{fig1}{\it(a)} and \ref{fig1}{\it(c)}, where the noise intensity has been manipulated in a generic sub-threshold regime, or in figures \ref{fig1}{\it(b)} and \ref{fig1}{\it(d)} in a general supra-threshold condition. Naturally, larger amplitudes of the time-dependent drift require more terms to be considered in the series expansion to converge to a final distribution, provided the parameters lie within the radius of convergence. This increase in the number of terms may affect convergence in the temporal domain (regardless of the radius of convergence of the series itself) because of the amplification of numerical errors and/or instabilities when a specific procedure with a given set of numerical parameters is used for the numerical inversion of the Laplace transforms. \begin{figure}[t!] \begin{center} \includegraphics[scale=1.0]{Figure2} \caption{\label{fig2} Comparison between the analytical and numerical results of the FPT statistics for different timescales of the time-dependent drift. Relatively moderate positive, $\epsilon = +0.5$, and negative, $\epsilon = -0.5$, intensities are considered in (a) and (b), respectively. FPT distributions for different time constants $\tau_{\rm d}$, obtained from numerical simulations, are shown by different stair-like colored lines. Analytical results up to the order $N$ indicated in the upper-right hand part of each panel, $\mathcal{O}(N)$, are represented by thin yellow lines. As observed, the agreement between the numerical results and theoretical expressions is excellent in all cases, but the order of convergence may differ substantially.
Parameters governing the dynamics are $x_0 = 0$, $x_{\rm thr} = 1$, $\tau_{\rm m} = 10~{\rm ms}$, $\mu = 0.100~{\rm ms}^{-1}$, and $D = 0.005~{\rm ms}^{-1}$.} \end{center} \end{figure} \indent As the time constant of the time-dependent exponential drift decreases, the series solution requires additional higher-order terms to remain accurate. This general behavior is depicted in figure \ref{fig2}, where the theoretical series solution (thin yellow lines) correctly represents FPT distributions obtained from numerical simulations (stair-like colored histograms), for positive as well as negative intensities $\epsilon$ (figures \ref{fig2}{\it(a)} and \ref{fig2}{\it(b)}, respectively) and for different time constants $\tau_{\rm d}$. According to the general expression for the higher order terms in the series solution, Eqs.~(\ref{eq52}) and (\ref{eq54}), the coefficients weighting individual contributions for a given term, Eqs.~(\ref{eq55}) and (\ref{eq56}), are extremely sensitive to the ratio between the extrinsic and intrinsic timescales, $\tau_{\rm d}/\tau_{\rm m}$. Given this dependence (and assuming that convergence exists), the order of convergence of the series solution should increase as $\tau_{\rm d}/\tau_{\rm m} \rightarrow 1$. In the examples analyzed here, convergent behavior was observed for consecutive terms in the series solution as this limit was approached. The special case $\tau_{\rm m} = \tau_{\rm d}$ cannot be described by the previous formulae, and a Green's function approach has been taken (see Appendix). On the other hand, as the time constant approaches zero, $\tau_{\rm d} \rightarrow 0$, the superimposed time-dependent exponential drift diverges and the system effectively corresponds to an unperturbed case with a shift in the initial condition \cite{Urdapilleta2011a}. In this limit, depending on the value of $\epsilon$, the FPT problem is well-posed only when this shifted initial condition is below the threshold.
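The numerical FPT distributions used throughout this section come from direct integration of the Langevin dynamics. A rough Euler-Maruyama sketch of Eq.~(\ref{eq43}), with illustrative supra-threshold parameters, a coarse time step and a small ensemble (a toy check under stated assumptions, not the production simulation), could be:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative supra-threshold parameters (assumptions, not from the text)
mu, tau_m, D = 0.1333, 10.0, 0.01
eps, tau_d   = 0.5, 100.0
x0, x_thr    = 0.0, 1.0
dt, t_max    = 0.01, 1000.0      # time step and safety cap (ms)

def first_passage_time():
    """One Euler-Maruyama realization of Eq. (43) up to threshold crossing."""
    x, t = x0, 0.0
    while x < x_thr and t < t_max:
        drift = mu - x / tau_m + (eps / tau_d) * np.exp(-t / tau_d)
        x += drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
        t += dt
    return t

fpts = np.array([first_passage_time() for _ in range(500)])
print(fpts.mean())   # sample mean first-passage time (ms)
```

Binning these hitting times yields the stair-like histograms against which the series solution is compared.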
\indent The series solution for the FPT density function, Eq.~(\ref{eq32}), is explicitly given in terms of its Laplace transform: the zeroth order by Eq.~(\ref{eq50}), and higher order terms by Eqs.~(\ref{eq52}) and (\ref{eq54}). Without performing any inverse transformation (and, therefore, avoiding all numerical artifacts of numerical implementation), this result can be used to obtain important properties of the FPT distribution. In particular, its moments are given by \cite{Urdapilleta2011a, Urdapilleta2012} \begin{equation}\label{eq70} \langle \tau^{k} \rangle = \int_{0}^{\infty} \phi(\tau)~\tau^{k}~{\rm d}\tau = (-1)^{k}~\frac{{\rm d}^{k} \tilde{\phi}^{L}(s)}{{\rm d}s^{k}}\Big\rfloor_{s=0}. \end{equation} \indent Since the series solution for $\phi(\tau)$ is a linear combination of different functions, the preceding relationship can be written as \begin{equation}\label{eq71} \langle \tau^{k} \rangle = \sum_{n=0}^{\infty} \epsilon^{n}~\langle \tau^{k} \rangle_{\phi_{n}}, \end{equation} \noindent where \begin{equation}\label{eq72} \langle \tau^{k} \rangle_{\phi_{n}} = (-1)^{k}~\frac{{\rm d}^{k} \tilde{\phi}_{n}^{L}(s)}{{\rm d}s^{k}}\Big\rfloor_{s=0}. \end{equation} \begin{figure}[t!] \begin{center} \includegraphics[scale=1.0]{Figure3} \caption{\label{fig3} Comparison between analytical and numerical results for (a) the first and (b) the second moment of the FPT distribution, as a function of the intensity of time-dependent drift $\epsilon$. The symbols represent the average of the numerical results, whereas the lines correspond to the series expression for the $k$-th moment truncated at the order $N$ indicated in the inset, $\langle \tau^{k} \rangle = \sum_{n = 0}^{N} \epsilon^{n} ~ \langle \tau^{k} \rangle_{\phi_{n}} $. 
The remaining parameters are $x_0 = 0$, $x_{\rm thr} = 1$, $\tau_{\rm m} = 10~{\rm ms}$, $\tau_{\rm d} = 100~{\rm ms}$, $\mu = 0.100~{\rm ms}^{-1}$, and $D = 0.005~{\rm ms}^{-1}$.} \end{center} \end{figure} \indent According to Eqs.~(\ref{eq71}) and (\ref{eq72}), the evaluation of the moments is straightforward in the Laplace domain. In figure \ref{fig3} we show the behavior of the first two moments for the intermediate case analyzed in figure \ref{fig2}, over a wide range of the intensity $\epsilon$. As observed, as the absolute value of $\epsilon$ increases, the order $N$ considered in the series has to increase as well in order to reproduce the numerical results. \section{Final remarks and discussion}\label{discussion} \indent In this work, we have studied the survival probability and the FPT statistics of a Brownian particle whose dynamics are governed by a generic unidimensional potential and a superimposed exponential time-dependent drift, Eq.~(\ref{eq10}), with a fixed threshold setting the limit where the FPT is defined. Based on the backward FP description, we first derived the diffusion equation of the survival probability from the backward state, Eq.~(\ref{eq12}), and then proposed a series solution in powers of the intensity of the time-dependent drift contribution, Eq.~(\ref{eq13}). With this procedure, the preceding diffusion equation translates into an infinite system of simpler equations, each of which defines the behavior of one term in the proposed series in a recursive scheme, Eqs.~(\ref{eq15}) and (\ref{eq16}). The particular mathematical structure of the forcing term in these equations (defined by the time-dependent exponential drift) enables a simpler representation in the Laplace domain, Eqs.~(\ref{eq22}) and (\ref{eq30}). From the survival probability, the FPT statistics are readily obtained, and they naturally inherit the series structure, Eq.~(\ref{eq32}), where each term is governed by Eqs.~(\ref{eq36}) or (\ref{eq37}). 
The general derivation of this series solution agrees with the explicit solution we found previously for the Wiener process \cite{Urdapilleta2012}. However, since the present approach is applicable to any unidimensional potential, we explicitly worked out the series solution for the FPT statistics of an Ornstein-Uhlenbeck process (with the superimposed exponential time-dependent field), which is mathematically much more challenging than the Wiener process. Given that the convergence properties of the proposed series solution remained unknown, several cases were defined to numerically test the usefulness of the approach and the solution found. In all cases, the analytical and numerical results were in good agreement as long as the number of terms included in the series was large enough. \indent As discussed in Section \ref{intro}, different neuron models have a direct correspondence to different drift-diffusion Brownian motions. In particular, the Wiener and the Ornstein-Uhlenbeck processes correspond to the perfect and leaky IF neuron models, respectively. In all IF models, the variable $x$ is the transmembrane voltage of a spiking neuron, and the drift derived from the potential, $-U'(x)$, corresponds to the external as well as the internal sub-threshold signals integrated by the Langevin dynamics, Eq.~(\ref{eq1}), which model the capacitance properties of the cellular membrane. Additive Gaussian white noise is included in order to model randomness arising from different sources (random channel opening and closing, stochastic synaptic transmission, etcetera), with minimal mathematical complexity. The FPT problem results from the procedure used to declare a spike. Whenever the voltage reaches the threshold $x_{\rm thr}$, the voltage dynamics are no longer governed by the simple Langevin equation, Eq.~(\ref{eq1}), and a largely stereotyped waveform is generated by other mechanisms (not included in IF models). 
Therefore, to a large extent, the exact time at which this event was produced \textit{for the first time} is the only meaningful information to be transmitted to downstream neurons. Once this event happens, there are mechanisms that restore the membrane potential to a reset potential $x_0$ (here also coincident with the resting potential), and a new sub-threshold integration cycle is launched. The superimposed time-dependent exponential drift considered here mimics the temporal evolution of a separate process of neuronal adaptation, which naturally also influences spike time statistics. This adaptation current not only modifies the FPT statistics of the homogeneous case (for example, the pure Wiener or Ornstein-Uhlenbeck processes or, equivalently, the adaptation-free perfect and leaky IF models), but also couples subsequent events, creating negative correlations \cite{Urdapilleta2011b, SchwalgerLindner2013}. The analysis of successive interspike intervals and the emergence of these correlations can be described by a hidden Markov model \cite{Urdapilleta2011b}, in which correlations arise from the fire-and-reset rule coupling the subsequent initial state of the adaptation current with the preceding interspike interval. In this description, the FPT statistics of the temporally inhomogeneous process considered here provide the relationship between the hidden variable (the initial state of the adaptation current) and the observable (the interspike intervals). \indent Since both Wiener and Ornstein-Uhlenbeck processes as well as generic drift-diffusion models are ubiquitous in describing different phenomena, the results obtained here on the survival probability and the FPT distribution will be of interest in other settings where an additive state-independent temporal relaxation process unfolds as the particle diffuses. \section{Acknowledgments} This work was supported by the Consejo de Investigaciones Cient\'ificas y T\'ecnicas de la Rep\'ublica Argentina. 
\section*{Appendix} \subsection*{FPT statistics for the Ornstein-Uhlenbeck process, with $\tau_{\rm m} = \tau_{\rm d}$} \indent Naturally, the equation governing the behavior of the unperturbed system, Eq.~(\ref{eq36}), does not depend on the timescale of the exponential time-dependent drift, $\tau_{\rm d}$. On the other hand, when $\tau_{\rm m} = \tau_{\rm d}$, Eq.~(\ref{eq37}) applied to the Ornstein-Uhlenbeck case, Eq.~(\ref{eq51}), reads \begin{equation}\label{ap1} \fl \hspace{1.cm} \frac{\rmd^{2}\tilde{\phi}_{n}^{L}}{\rmd x_0^2} + \left( \frac{\mu}{D} - \frac{x_0}{\tau_{\rm m} D} \right) ~ \frac{\rmd\tilde{\phi}_{n}^{L}}{\rmd x_0} - \frac{(s+n/\tau_{\rm m})}{D}~\tilde{\phi}_{n}^{L} = -\frac{1}{\tau_{\rm m}D} ~ \frac{\rmd \tilde{\phi}_{n-1}^{L}} {\rmd x_0}, ~n \geq 1. \end{equation} \indent Again, with the substitution $z = \sqrt{\tau_{\rm m}/D}~( \mu - x_0/\tau_{\rm m})$ and proposing the functional structure given by Eq.~(\ref{eq52}), the preceding equation transforms to \begin{equation}\label{ap2} \fl \hspace{1.cm} \frac{\rmd^{2}u_{n}}{\rmd z^2} + \left[ -\tau_{\rm m}\left( s + \frac{n}{\tau_{\rm m}} \right) + \frac{1}{2} - \frac{1}{4} z^2 \right]~u_{n} = \frac{1} {\sqrt{\tau_{\rm m}D}}~\left( \frac{1}{2}~z~u_{n-1} + \frac{\rmd u_{n-1}}{\rmd z} \right). \end{equation} \indent As before, boundary conditions are $u_{n}(z_{\rm thr}) = 0$ and $u_{n}(z \rightarrow \infty)$ bounded, where the dependence on the parameter $s$ has been omitted for simplicity. 
The solution to this infinite set of equations can be recursively found as \begin{equation}\label{ap3} u_{n}(z) = \int_{z_{\rm thr}}^{\infty} g_{n}(z,z')~\frac{1} {\sqrt{\tau_{\rm m}D}}~\left[ \frac{1}{2}~z'~u_{n-1}(z') + \frac{\rmd u_{n-1}(z')}{\rmd z'} \right]~{\rmd}z', \end{equation} \noindent where $g_{n}(z,z')$ is the Green's function of Eq.~(\ref{ap2}), which is obtained as the solution to \begin{equation}\label{ap4} \frac{\partial^{2}g_{n}(z,z')}{\partial z^2} + \left[ -\tau_{\rm m}\left( s + \frac{n}{\tau_{\rm m}} \right) + \frac{1}{2} - \frac{1}{4} z^2 \right]~g_{n}(z,z') = \delta(z-z'), \end{equation} \noindent with the boundary conditions defined above. Explicitly, this function reads \begin{equation}\label{ap5} \fl \hspace{1.cm} g_{n}(z,z') = \frac{1}{{\rm Den}(z')} \left\{ \eqalign{ \Big[ \mathcal{D}_{\nu}(z_{\rm thr})~\mathcal{D}_{\nu}(-z) & - \mathcal{D}_{\nu}(-z_{\rm thr})~\mathcal{D}_{\nu}(z) \Big] ~\mathcal{D}_{\nu}(z') ~,~{\rm for}~~ z<z', \cr \Big[ \mathcal{D}_{\nu}(z_{\rm thr})~\mathcal{D}_{\nu}(-z') & - \mathcal{D}_{\nu}(-z_{\rm thr})~\mathcal{D}_{\nu}(z') \Big] ~\mathcal{D}_{\nu}(z) ~,~{\rm for}~~ z>z', } \right. \end{equation} \noindent where \begin{equation}\label{ap6} \fl \hspace{1.cm} {\rm Den}(z') = \frac{1}{\nu~\mathcal{D}_{\nu}(z_{\rm thr})~ \Big[ \mathcal{D}_{\nu}(-z')~\mathcal{D}_{\nu-1}(z') + \mathcal{D}_{\nu}(z')~\mathcal{D}_{\nu-1}(-z') \Big]}, \end{equation} \noindent and the order $n$ of the function $g_n(z,z')$ enters only through the index $\nu = -\tau_{\rm m} \left(s+n/\tau_{\rm m}\right)$. \section*{References}
\section{Introduction} In Gideon and Rothan (2011), the use of correlation coefficients (CC) was related to existing optimal linear estimation of variation or scale on ordered data. Another paper, Gideon (2012), gives Correlation Estimation System (CES) examples in many areas: simple linear regression, scale equations, multiple linear regression, CES minimization to estimate $\sigma_{res}/\sigma_y$, nonlinear regression, and estimation of parameters of univariate distributions. CES is a general system of estimation using CCs as the starting point. In Gideon and Rothan (2011), Pearson's CC was related to linear estimates of variation based on order statistics. This paper uses Pearson's CC on order statistics in the simple linear regression model $E(Y|x)=\beta x$ to see how it compares to classical least squares. By simulation it is found that this estimation technique almost duplicates classical least squares regression results. More importantly, any CC, such as those in Gideon and Hollister (1987) and Gideon (2007), could be used (even those based on ranks) to do simple linear regression (SLR); the CES minimization technique gives a very general way of allowing any correlation coefficient to tackle a wide variety of regression problems. \section{The Minimum Slope Method} The CES minimization technique is developed in this section for a random sample $(\underline{x},\underline{y})$. Assume without loss of generality the intercept is zero so as to better visualize the regression of $(\underline{y}-b\underline{x})^0$ on $\underline{x}^0$ for a selected $b$. The slope of this regression is denoted $s$. The superscript $0$ indicates that the elements of the vector are ordered from least to greatest. It is critical to observe that the elements of the residual vector are not paired with specific components of $\underline{x}$. After selecting a value of $b$, plot $(\underline{y}-b\underline{x})^0$ versus $\underline{x}^0$. 
If $b$ is such that $b\underline{x}$ produces residuals with wide variation, the $(\underline{y}-b\underline{x})^0$ versus $\underline{x}^0$ plot is steep and the regression has a large slope $s$. As $b$ approaches a more reasonable value, the residuals, $\underline{y}-b\underline{x}$, become closer in value and the $(\underline{y}-b\underline{x})^0$ versus $\underline{x}^0$ plot is less steep and so $s$ is smaller, but always nonnegative due to ordering the residuals. If, of course, $\underline{y}=b\underline{x}$ the vector $\underline{0}$ regressed on $\underline{x}^0$ gives $s=0$. By choosing the $b$ that minimizes slope $s$ (the Minimum Slope method or MS) the residuals $\underline{y}-b\underline{x}$ are as uniform as possible, i.e. they increase as little as possible from least to greatest. So MS regression is a sort of minimum totality criterion. The CES-MS method can use any CC to find $s$ -- each may give a different value -- but with ``good'' data they are all close to each other. Recall that in SLR, $r_p(\mbox{independent variable},\; \mbox{residuals})=0$, where $r_p$ is Pearson's correlation coefficient. Analogously, let the independent variable be $\underline{x}^0$ and let the residuals be $(\underline{y}-b\underline{x})^0-s\underline{x}^0$ to obtain the equation \begin{equation} r_p(\underline{x}^0,(\underline{y}-b\underline{x})^0-s\underline{x}^0)=0. \end{equation} In this equation $r_p$ is used, but any CC could be employed instead. To obtain the estimation, select the value of $b$ which minimizes $s$. This equation is set up in R code in Section 4 and is generally solved by an iterative technique. For Pearson's CC the solution is, for a selected $b$, $s=\dfrac{\Sigma (x_{(i)}-\bar{x})res_{(i)}}{\Sigma( x_{(i)}-\bar{x})^2 }$, where $res_{(i)}$ is the $i^{th}$ smallest element of $\underline{y}-b\underline{x}$; $s$ estimates $\sigma_\epsilon/\sigma_x$. The intercept is chosen by using a location statistic on $(\underline{y}-b\underline{x})$. 
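The Minimum Slope procedure just described can be summarized in a few lines of code. The following Python transcription is illustrative only (the paper's own implementation, in R, is given in Section 4) and uses a simple grid search in place of R's \textit{optimize}:

```python
import numpy as np

def ms_slope(x, y, b_grid=None):
    """Minimum Slope estimate with Pearson's CC: pick the b minimizing the
    slope s of the regression of the ordered residuals (y - b x)^0 on x^0."""
    xs = np.sort(x)
    xc = xs - xs.mean()
    denom = (xc ** 2).sum()

    def s_of_b(b):
        res = np.sort(y - b * x)         # ordered residuals, unpaired with x
        return (xc * res).sum() / denom  # closed-form Pearson slope

    if b_grid is None:
        b_grid = np.linspace(-5.0, 5.0, 2001)  # crude stand-in for optimize()
    s_vals = np.array([s_of_b(b) for b in b_grid])
    i = int(np.argmin(s_vals))
    # returns (slope estimate, estimate of sigma_eps / sigma_x)
    return b_grid[i], s_vals[i]
```

On well-behaved data the returned slope is nearly indistinguishable from the least squares estimate, in line with the simulation results reported below.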
In CES SLR analysis, estimates of $\sigma_y$ and $\sigma_x$ are needed. To estimate these scale quantities solve for the slope, $s$, in the equation \begin{equation} r_p(\underline{e},\underline{x}^0-s\underline{e})=0 \end{equation} where $\underline{e}$ is the vector with components $\Phi^{-1}(i/(n+1)),\; i= 1,2,\dots,n$. $\Phi$, the distribution function of a standard normal random variable, is used because the simulations are from the normal distribution. Preferably $\underline{e}$ should be the expected values of the order statistics, but they are not available for all sample sizes, and so are replaced by estimates that converge to them. Simulations were run using the random number generator of R to generate random samples from the bivariate normal distribution. The five parameters of the bivariate normal were chosen with the means always zero, and then random samples were drawn. The conditional distribution of $Y$ given $x$ was studied. The slope parameter, $\beta$, and the standard error, $\sigma_\epsilon$, were calculated for the random samples. These CES methods have been investigated in more complex situations, all with excellent results: Gideon (2012), some unpublished research (including a paper on the logistic distribution), and Sheng (2002) on time series. For each random sample the parameters of the bivariate normal distribution and the regression parameters of the conditional distribution of Y given x were estimated by two methods: (a) least squares or classical normal theory and (b) the Correlation Estimation System, CES, using the minimization equation (1). The most surprising result in comparing method (a) to method (b) is that (b) is as good as classical normal theory when Pearson's correlation coefficient is used. 
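With Pearson's CC, equation (2) amounts to a regression of the ordered sample on standard normal quantiles, whose slope estimates $\sigma_x$. A minimal Python sketch (illustrative; the paper's computations use R, and normal data are assumed):

```python
import numpy as np
from statistics import NormalDist

def ces_scale(x):
    """Solve equation (2) for s with Pearson's CC: regress the ordered sample
    x^0 on the quantiles e_i = Phi^{-1}(i/(n+1)); the slope estimates sigma_x."""
    n = len(x)
    e = np.array([NormalDist().inv_cdf(i / (n + 1)) for i in range(1, n + 1)])
    xs = np.sort(np.asarray(x, dtype=float))
    ec = e - e.mean()                  # centering; e is already nearly symmetric
    return (ec * xs).sum() / (ec ** 2).sum()
```

The estimator is exactly scale equivariant, and, as noted in the discussion of Table 1, it is biased upward for small samples relative to the usual standard deviation, with the bias shrinking as $n$ grows.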
The main point of this paper is to show the apparent equivalence, in the sense of equality in distribution (Randles and Wolfe, 1979), in simple linear regression between LS (least squares minimization) and CES using Pearson's correlation coefficient. This technique, when used with robust CCs, provides a very simple way to judge the regression without having to evaluate and perhaps delete data points. When the three robust CCs, Greatest Deviation, Absolute Value, or Median Absolute Deviation (these are defined in Gideon (2007) and are used in Table 2), are used over time in simple linear regression and compared to the LS line, it quickly becomes apparent what sort of data causes the robust lines to be different from the LS line, even though the data may not have ``real'' outliers. The example in Table 2 illustrates this. There are also some data sets in which certain points are considered outliers when they are actually not. Using the robust CCs allows these points to have a role in the analysis without overwhelming it. To compare the two systems, each process is run on the same data. There are two primary comparisons: (1) $\hat{\sigma}_\epsilon(LS )$ and $\hat{\sigma}_\epsilon(CES)$, the standard deviations of the LS and MS residuals (the sum of squares of deviations from the mean divided by $n-1$), are computed and compared. (2) Let $\hat{\sigma}_{ratio}$ represent the MS estimate of $\sigma_\epsilon/\sigma_x$. Now the ordered residuals from least squares can be regressed against $\underline{x}^0$, using equation (1) with the LS estimate of slope $\beta$ to find $s$. Doing this shows how well LS minimizes the slope so it can be compared to the MS result. Let $\hat{\sigma}_{LSratio}$ represent this estimate of $\sigma_\epsilon/\sigma_x$. (2a) Let $\hat{\sigma}_x(CES)$ denote the estimate of $\sigma_x$ obtained by using equation (2). Now multiply this result by $\hat{\sigma}_{ratio}$ to obtain $\hat{\sigma}_\epsilon(MS)$, an estimate of $\sigma_\epsilon$. 
$\hat{\sigma}_{LSratio}$ is multiplied by $\hat{\sigma}_x(CES)$ to obtain $\hat{\sigma}_\epsilon(LS2)$, another estimate of $\sigma_\epsilon$. Some insight for the estimator of $\sigma_{ratio}$ comes from Gideon and Rothan (2011). Let the random variable Z be N(0,1), U be N(0, $\sigma_u^2$), and T be N(0, $\sigma_t^2$). Using order statistics notation, let $u_i=E(U_{(i)})$, $t_i=E(T_{(i)})$, and $k_i=E(Z_{(i)}), i= 1,2,\dots,n$. Now let $\underline{u}$ and $\underline{t}$ be the vectors of the expected values of these order statistics. The equation $r_p(\underline{u},\underline{t}-s\underline{u})=0$ is the same as (2) but with limiting values substituted for the data. Now $u_i=E(U_{(i)})=\sigma_u k_i$ and $t_i=E(T_{(i)})=\sigma_t k_i$ and so the solution to the equation is $s=\dfrac{\Sigma u_it_i}{\Sigma u_i^2 }$ $=\dfrac{\Sigma (\sigma_u k_i)(\sigma_t k_i)}{\Sigma (\sigma_u k_i)^2}=\dfrac{\sigma_t}{\sigma_u} $. The fact that this estimation concept works on data is illustrated in Table 1. For the least squares residuals, solving for $s$ in $r_p(\underline{e},\underline{res}^0-s\underline{e})=0$ estimates $\sigma_\epsilon$ because $\sigma_z=1$. Since $\underline{e}=\Phi^{-1}(\underline{p})$, where \\ $\underline{p}= (1/(n+1), 2/(n+1),\dots, n/(n+1))'$, the elements of $\underline{e}$ approach the expected values of the order statistics, that is, $\Phi^{-1}(p_i)$ approximates $E(Z_{(i)})$. This can be seen in Table 1; as the sample size increases, $\hat{\sigma}_x(CES)$ approaches $\sigma_x$. As already explained, solving for $s$ in $r_p(\underline{x}^0,\underline{res}^0-s\underline{x}^0)=0$ estimates $\sigma_\epsilon/\sigma_x$, i.e. $\hat{\sigma}_{LSratio}$, and the solution to equation (1) for $r_p$ is \\ $s=\dfrac{\Sigma (x_{(i)}-\bar{x})res_{(i)}}{\Sigma( x_{(i)}-\bar{x})^2 }$. The reasonableness of this process is shown by replacing data with theoretical counterparts. 
Thus, the $x_{(i)}-\bar{x}$ terms are replaced by $E(X_{(i)}-\bar{X}) =E(X_{(i)})-\mu_x=\sigma_x k_i$. The term $res_{(i)}$ is $Y_j-bx_j$ for some $ j$, $1\le j \le n.$ The conditional distribution of $Y_j|x_j$ is $N(\frac{\sigma_y}{\sigma_x}\rho x_j,\; \sigma^2_y(1-\rho^2))=N(\beta x_j,\; \sigma^2_\epsilon)$, or $Y_j|x_j-\beta x_j$ is $N(0,\; \sigma^2_\epsilon)$. Thus, for each $i$ there is a $j$ such that $res_{(i)}=(Y_j-bx_j)_{(i)}$ and $(Y_j-bx_j)_{(i)}$ is replaced by $E(Y_j|x_j-\beta x_j)_{(i)}=\sigma_\epsilon k_i$. So $s$ now becomes $\dfrac{\Sigma (\sigma_x k_i)(\sigma_\epsilon k_i)}{\Sigma (\sigma_x k_i)^2}=\dfrac{\sigma_\epsilon}{\sigma_x} $, as expected. \section{Results} The Tables are representative examples of the many simulations used to study the MS technique. Table 1 results are averages; an individual sample analysis helps put the table in perspective and also helps give meaning to the notation. Start with one sample of size 25 with the same parameters as the third set in Table 1, namely: $$ \rho =0.5727, \beta =0.8008, \sigma_\epsilon =1.7659, \sigma_x =1.5403, \sigma_y =2.1541.$$ The two estimated slopes were found to be $\hat{\beta}(LS)=0.6785$ and $\hat{\beta}(MS)=0.6051$; typically, the estimated slopes are close, but in this example they are somewhat different. The intercept used for the CES method comes from an unpublished paper deriving a location estimator from the Greatest Deviation CC; it is essentially the average of the first and third quartiles of the sample. The intercepts are $int(LS)=-0.3001$ and $int(MS)=-0.3571$. The two sets of residuals are compared using the standard deviation formula: $\hat{\sigma}_\epsilon(LS)=1.4736$ and $\hat{\sigma}_\epsilon(CES)=1.4796$. Note that the denominator is $n-1$ rather than the usual $n-2$, and that the LS quantity is just barely smaller than the CES-MS value. Now use the LS residuals in the MS method to see how it compares to the MS minimum of $\hat{\sigma}_{ratio}=0.7822$. 
The LS residuals produce a value of $\hat{\sigma}_{LSratio}=0.7836$. Here LS has the slightly higher value. These results were consistent over many samples; there was very little variation between these two quantities within a sample, and almost always $\hat{\sigma}_\epsilon(LS)$ was barely smaller than $\hat{\sigma}_\epsilon(CES)$. Likewise, $\hat{\sigma}_{ratio}$ was just barely smaller than $\hat{\sigma}_{LSratio}$. Now these last two values are multiplied by $\hat{\sigma}_x(CES)=1.9440$ to obtain $\hat{\sigma}_{\epsilon}(LS2)=1.5234$ and $\hat{\sigma}_{\epsilon}(MS)=1.5207$. Both values are very close to each other but somewhat different from the LS estimate of $\sigma_\epsilon$, 1.4736. Finally, $\hat{\sigma}_x(LS )=1.8180$. For small sample sizes the bias in the CES method for $\sigma_x$ makes it larger than the classical estimate. It was also true that the two estimates of $\sigma_\epsilon$ within each method, both (1) and (2), were always close together, while the differences between methods were larger. Each set of residuals had 13 negative values and 12 positive values. In Table 1 it is unclear which method gives averages closest to $\sigma_\epsilon$. The following observations come from Table 1. As the sample size increases both methods get more accurate without one being better than the other. For all sample sizes and parameter values, $\hat{\sigma}_\epsilon(LS )$ is always better than $\hat{\sigma}_\epsilon(CES )$, but only by a very small amount. On the other hand, $\hat{\sigma}_{ratio}$ is generally better than $\hat{\sigma}_{LSratio}$, again by a very small amount. The few cases in which $\hat{\sigma}_{ratio}$ is not better have high correlations. However, in these cases both systems are essentially indistinguishable. The general conclusion is that fundamentally MS and LS give essentially the same minima for each sample; however, the values of the residuals can be slightly different. 
The average values of the MS and LS minima also show a very small difference, one that usually affects at most the value of the least significant digit of the data. The whole point so far has been showing how the CES method with $r_p$ using MS is essentially equivalent to classical normal theory. The MS method is now used with other CCs. First, the R code given in Section 4 uses $rfcn$ as a generic symbol for whichever CC is to be employed; there are no changes in the R code except to define the CC via $rfcn$. So far only the assignment $rfcn = Pfcn$ was used, selecting Pearson's CC for the MS technique. In Table 2, $rfcn$ was assigned to be, in order, the Greatest Deviation CC, $GDfcn$; Kendall's $\tau$, $Kenfcn$; Gini's CC, $Ginfcn$; the Absolute Value CC, $absfcn$; the Median Absolute Deviation CC, $madfcn$. The program $Cordef\&reg.R$ on the website has all of these functions. This R program includes a tied value procedure for rank based CCs. The first row of Table 2 shows outcomes as in Table 1, that is, LS compared to CES with Pearson's CC. This is a different run than the earlier one sample example, but the results are very similar; the slope estimates are very close, and the two minimizations give comparable results as before. The other five CCs give good estimates of $\beta$. In column two are the results of the LS idea of sum of squares of the residuals. This column contains the square root of the sum of squares divided by 24, \textit{i.e.} $n-1$. If this is changed to the unbiased quantity by multiplying by 24/23, results closer to $\sigma_{\epsilon} $ are obtained. Finally, the CES-MS results are in column three and in all cases the CES method gives a lower minimum than the LS2 method directly above. Two of the robust CCs, GDCC and the Absolute Value CC, give slope estimates much closer to $\beta$ than LS does; in column 3 the $\hat{\sigma}_\epsilon(MS)$ values of these two CCs are the smallest. 
Several $x$-points (near the maximum and minimum $x$-values) have $y$-values that are high or low enough to unduly influence the LS method to increase the slope estimate. For real problems with unknown parameters observational experience on fitting CES lines and comparing to LS results soon leads one to recognize when LS may not be "best." In Table 2 notice that all CES CCs have $\hat{\sigma}_\epsilon(CES )$ closer to $\hat{\sigma}_\epsilon(LS)$ than $\hat{\sigma}_\epsilon(LS2)$ is to $\hat{\sigma}_\epsilon(MS)$. \begin{table} \caption*{Table 1: Comparison of Two Minimization Processes} \vspace*{-.3cm} \centerline{All Tabular Entries are Means} \vspace*{.5cm} \begin{center} \begin{tabular}{|l|l|c|c|c|} \hline \multicolumn{5} {|c|} { $\rho=0.9216$ $\beta=0.3800$ $\sigma_{\epsilon}=0.8000$ } \\ \multicolumn{5} {|c|} {$\sigma_x=5.000$ $\sigma_y=2.0615$ }\\ \hline & nsim=100 & n=20& n=50 & n=100 \\ \hline slopes & $\hat{\beta}(LS)$& 0.3839& 0.3740& 0.3812 \\ &$\hat{\beta}(MS)$& 0.3833& 0.3742& 0.3808 \\ \hline minima& $\hat{\sigma}_\epsilon(LS )$ &0.7624&0.7811&0.7893 \\ (LS Method)& $\hat{\sigma}_\epsilon(CES )$ &0.7643&0.7815&0.7895 \\ \hline minima& $\hat{\sigma}_{LSratio}$ &0.1552 &0.1533 &0.1568 \\ (CES Method)&$\hat{\sigma}_{ratio}$ &0.1548 &0.1533 &0.1567 \\ \hline 2a Method& $\hat{\sigma}_{\epsilon}(LS2)$ &0.8009 &0.7991 &0.8026 \\ &$\hat{\sigma}_{\epsilon}(MS)$ &0.7990 &0.7988 &0.8024 \\ \hline standard & $\hat{\sigma}_x(LS )$ & 4.8885 & 5.0252 & 4.9894 \\ deviations & $\hat{\sigma}_x(CES)$ & 5.3632 & 5.2682 & 5.1372 \\ \hline \hline \multicolumn{5} {|c|} { $\rho=0.0000$ $\beta=0.0000$ $\sigma_{\epsilon}=1.9000$ } \\ \multicolumn{5} {|c|} {$\sigma_x=1.5000$ $\sigma_y=1.9000$ }\\ \hline & nsim=100 & n=20& n=50 & n=100 \\ \hline slopes &$\hat{\beta}(LS)$& -0.0341& -0.0026& -0.0073 \\ &$\hat{\beta}(MS)$& -0.0498& 0.0016 & -0.0093 \\ \hline minima& $\hat{\sigma}_\epsilon(LS )$ &1.8342&1.8437&1.8676\\ (LS Method)& $\hat{\sigma}_\epsilon(CES )$ &1.8385&1.8445& 
1.8678 \\ \hline minima& $\hat{\sigma}_{LSratio}$ &1.2130 &1.2152 &1.2327 \\ (CES Method)&$\hat{\sigma}_{ratio}$ &1.2095 &1.2147 &1.2326 \\ \hline 2a Method& $\hat{\sigma}_{\epsilon}(LS2)$ &1.9083 &1.8880 &1.8974 \\ &$\hat{\sigma}_{\epsilon}(MS)$ &1.9029 &1.8872&1.8972 \\ \hline standard & $\hat{\sigma}_x(LS )$ & 1.4768 & 1.4924 & 1.5036 \\ deviations & $\hat{\sigma}_x(CES)$ & 1.6151 & 1.5656 & 1.5473 \\ \hline \hline \multicolumn{5} {|c|} { $\rho=0.5727$ $\beta=0.8008$ $\sigma_{\epsilon}=1.7659$ } \\ \multicolumn{5} {|c|} {$\sigma_x=1.5403$ $\sigma_y=2.1541$ }\\ \hline & nsim=100 & n=20& n=50 & n=100 \\ \hline slopes &$\hat{\beta}(LS)$& 0.8156& 0.7890& 0.7881 \\ &$\hat{\beta}(MS)$& 0.8197& 0.7853 & 0.7911 \\ \hline minima& $\hat{\sigma}_\epsilon(LS )$ &1.6520 &1.7236 & 1.7692\\ (LS Method)& $\hat{\sigma}_\epsilon(CES )$ &1.6566 & 1.7251& 1.7695 \\ \hline minima& $\hat{\sigma}_{LSratio}$ &1.0296 &1.1276 &1.1376 \\ (CES Method)&$\hat{\sigma}_{ratio}$ &1.0266 &1.1268 &1.1375 \\ \hline 2a Method& $\hat{\sigma}_{\epsilon}(LS2)$ &1.7315 &1.7614 &1.7974 \\ &$\hat{\sigma}_{\epsilon}(MS)$ &1.7266 &1.7602 & 1.7972 \\ \hline standard & $\hat{\sigma}_x(LS )$ & 1.5720 & 1.5078 & 1.5417 \\ deviations & $\hat{\sigma}_x(CES)$ & 1.7254 & 1.5796 & 1.5859 \\ \hline \end{tabular} \end{center} \end{table} \section{R Code } The R code for the functions needed to let any reader easily reproduce the analysis and extend the ideas to other correlation coefficients is presented here. The R function \textit{Pfcn} specifies how the variable $b$ is to be estimated using Pearson's CC. General use by other CCs is done by defining \textit{rfcn} to be the CC used in \textit{rtest} which sets up regression equation (1) and its solution using \textit{uniroot}. So the CC choice is done by setting \textit{rfcn} to be \textit{Pfcn} when Pearson is desired. 
Then \textit{rtest} gives the objective function for \textit{optimize}, called by \textit{outces}, which defines the data and does the iterations to minimize $s$ in \textit{rtest}. $\quad Pfcn = function(b,x,y) \{cor(x,y-b*x) \}$ $ \quad rfcn = Pfcn$ $\quad rtest = function(b,x,y) \quad \{y1 = sort(y - b*x)$ $\quad s = uniroot(rfcn,c(-4,4),x=xsr, y=y1)\$root, return(s)\} $ Quantity $xsr = sort(x)$, the sorted $x$ values, is used in \textit{uniroot} within \textit{rtest}. $\quad outces = optimize(rtest, c(-5,5), x=x, y=y)$ \noindent $outces\$min$ is the slope estimate, $\hat{\beta}$, for the regression and $outces\$obj$ is the CES minimum, the MS result, $\hat{\sigma}_{ratio}$. If $cres$ equals the vector of MS residuals, $\hat{\sigma}_{\epsilon}(CES)=sqrt(var(cres))$. $\hat{\sigma}_{\epsilon}(LS)$ is the estimate of $\sigma_{\epsilon}$ using the linear model R routine, $lm$. If $lres$ equals the vector of LS residuals, $\hat{\sigma}_{\epsilon}(LS)=sqrt(var(lres))$. Let $ysr$ be the sorted values of the LS regression residuals, from the $lm$ function. Then $\quad \hat{\sigma}_{LSratio} = uniroot(rfcn,c(0,12),x=xsr,y=ysr)\$root$. $\quad p3 = (1:n)/(n+1)$; $q3 = qnorm(p3)$ $\quad \hat{\sigma}_x(CES) = uniroot(rfcn,c(0,15),x=q3,y=xsr)\$root $. This gives the CES estimate of $\sigma_x$. $\quad \hat{\sigma}_{\epsilon}(LS2)=\hat{\sigma}_{LSratio}*\hat{\sigma}_x(CES)$ and $\hat{\sigma}_{\epsilon}(MS)=\hat{\sigma}_{ratio}*\hat{\sigma}_x(CES)$. $\quad \hat{\sigma}_x(LS) $ is $sqrt(var(x))$. Some possible CCs are listed previously. As an example, $GDfcn$ is defined like $Pfcn$ but with $cor$ replaced by $GDave$, the R-routine for GDCC as found in \textit{Cordef\&reg.R}. 
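To emphasize that any CC can replace $r_p$, the role of \textit{uniroot} inside \textit{rtest} can be mimicked by a simple bisection on the defining equation (1). The Python sketch below is illustrative only (it is not a translation of \textit{Cordef\&reg.R}); a rank-based CC (Spearman) is included to show how a different CC is slotted in:

```python
import numpy as np

def pearson(u, v):
    uc, vc = u - u.mean(), v - v.mean()
    return (uc * vc).sum() / np.sqrt((uc ** 2).sum() * (vc ** 2).sum())

def spearman(u, v):
    # Pearson's CC computed on ranks (no ties expected for continuous data)
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(u), rank(v))

def rtest(b, x, y, cc=pearson, lo=-4.0, hi=4.0, tol=1e-10):
    """Analogue of the R function rtest: solve cc(x^0, (y-bx)^0 - s*x^0) = 0
    for s by bisection, so that any correlation coefficient cc can be used."""
    xs, res = np.sort(x), np.sort(y - b * x)
    f = lambda s: cc(xs, res - s * xs)
    a, c, fa = lo, hi, cc(xs, res - lo * xs)   # cc decreases from +1 to -1
    for _ in range(200):
        m = 0.5 * (a + c)
        fm = f(m)
        if abs(fm) < tol:
            return m
        if fa * fm > 0:       # root lies to the right of the midpoint
            a, fa = m, fm
        else:
            c = m
    return 0.5 * (a + c)
```

For Pearson's CC the bisection reproduces the closed-form slope quoted in Section 2; swapping in `spearman` (or any other CC) requires no other change, which is the point of the CES setup.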
\begin{table} \caption*{Table 2: Comparison of Seven Minimization Processes} \vspace*{-.3cm} \centerline{All from One Sample} \vspace*{.5cm} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{4} {|c|} { $\rho=0.5727$ $\beta=0.8008$ $\sigma_{\epsilon}=1.7659$ } \\ \multicolumn{4} {|c|} {n = 25 $\sigma_x=1.5403$ $\sigma_y=2.1541$ }\\ \hline & $\hat{\beta}(LS)$ & $\hat{\sigma}_\epsilon(LS )$ & $\hat{\sigma}_{\epsilon}(LS2)$ \\ & $\hat{\beta}(CES)$ & $\hat{\sigma}_\epsilon(CES )$ & $\hat{\sigma}_{\epsilon}(MS)$ \\ \hline LS & 1.0086& 1.6799& 1.7592 \\ Pearson& 1.0333& 1.6804& 1.7591 \\ \hline LS2& & & 1.9529\\ GDCC& 0.8228 &1.7058& 1.6699 \\ \hline LS2& & & 1.8013 \\ Kendall&1.1598 & 1.6971& 1.7441 \\\hline LS2& & & 1.8546 \\ Gini & 1.2021&1.7080 & 1.7916 \\\hline LS2& & & 1.7565 \\ Absolute &0.8617 & 1.6961& 1.7222\\ \hline LS2& & & 1.8975\\ MAD & 0.6862 & 1.7568 & 1.7255 \\ \hline \end{tabular} \end{table} \section{Conclusion} In LS, minimization and zero correlation of $x$ with the residuals imply each other. This is not true for CES. The zero method is shown in Gideon (2012) or in Gideon and Rummel (1992). To change equation (1) to include multiple linear regression, add additional linear terms. For example, a second regression variable is added using $b_2\underline{x}_2$. Now, however, the term $(\underline{y}-b_1\underline{x}_1-b_2\underline{x}_2)^0$ needs to be regressed against $\underline{y}^0$, which can be accomplished by varying $b_1$ and $b_2$ to minimize $s$. This $s$ estimates $\sigma_\epsilon/\sigma_y$. Thus CES maximizes $1-{\sigma}_{\epsilon}^{2}/{\sigma}_{y}^{2}$, the squared multiple correlation coefficient. Gideon (2012) contains this extension and others, focusing on the Absolute Value and Greatest Deviation correlation coefficients. The author's conjecture is that using Pearson's $r_p$ to find the CES minimum is equivalent to the usual least squares method. This conjecture has not been studied theoretically. Is there a proof? 
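The multiple-regression extension sketched in the Conclusion can be illustrated directly: vary $(b_1, b_2)$ and minimize the slope of the ordered residuals regressed on $\underline{y}^0$. The following Python fragment is an illustrative toy (grid search, Pearson's CC only), not the author's procedure:

```python
import numpy as np
from itertools import product

def ms_slope_two(x1, x2, y, grid):
    """Toy two-predictor Minimum Slope: the minimized s estimates
    sigma_eps / sigma_y, since the ordered residuals are regressed on y^0."""
    ys = np.sort(y)
    yc = ys - ys.mean()
    denom = (yc ** 2).sum()
    best = (np.nan, np.nan, np.inf)
    for b1, b2 in product(grid, grid):
        res = np.sort(y - b1 * x1 - b2 * x2)   # ordered residuals
        s = (yc * res).sum() / denom            # Pearson slope of res^0 on y^0
        if s < best[2]:
            best = (b1, b2, s)
    return best   # (b1_hat, b2_hat, estimate of sigma_eps / sigma_y)
```

Minimizing $s$ here is the same as maximizing $1-s^2$, in line with the interpretation of the minimized ratio given above.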
There are three main reasons for this paper: first, to show that the Minimum Slope criterion of CES using Pearson's correlation coefficient is apparently as good as the least squares criterion in simple linear regression. Second, to show the R commands that allow the use of any CC in place of $r_p$ so as to offer a very general estimation system, the Correlation Estimation System. Third, because of the first two, the question for model building becomes not just which is the ``best'' model but also which is the ``best'' criterion to select the model. All the models fit by CES with other correlation coefficients in Gideon (2012) were outstanding. Some CES distribution theory was given in Gideon (2010). The generality of CES makes it easy to implement in a wide variety of regression situations. One only needs the R program \textit{Cordef\&reg.R} (or existing R routines for Pearson, Spearman, and Kendall) to set up the correlation coefficients and the regression sequence. One may find that the classical fit is not the ``best''. \section{References} \setlength{\parindent}{0in} \setlength{\parskip}{.05in} Author website: hs.umt.edu/math/people/default.php?s=Gideon Gideon, R.A. (2007). The Correlation Coefficients, \textit{Journal of Modern Applied Statistical Methods}, \textbf{6}, no. 2, 517--529. Gideon, R.A. (2010). The Relationship between a Correlation Coefficient and its Associated Slope Estimates in Multiple Linear Regression, \textit{Sankhya}, \textbf{72}, Series B, Part 1, 96--106. Gideon, R.A. (2012). Obtaining Estimators from Correlation Coefficients: The Correlation Estimation System and R, \textit{Journal of Data Science}, \textbf{10}, no. 4, 597--617. Gideon, R.A. and Hollister, R.A. (1987). A Rank Correlation Coefficient Resistant to Outliers, \textit{Journal of the American Statistical Association}, \textbf{82}, no. 398, 656--666. Gideon, R.A., Prentice, M.J., and Pyke, R. (1989). The Limiting Distribution of the Rank Correlation Coefficient $r_{gd}$. 
In: Contributions to Probability and Statistics (Essays in Honor of Ingram Olkin), ed. Gleser, L.J., Perlman, M.D., Press, S.J., and Sampson, A.R. Springer-Verlag, N.Y., 217--226. Gideon, R.A. and Rothan, A.M., CSJ (2011). Location and Scale Estimation with Correlation Coefficients, \textit{Communications in Statistics-Theory and Methods}, \textbf{40}, Issue 9, 1561--1572. Gideon, R.A. and Rummel, S.E. (1992). Correlation in Simple Linear Regression, unpublished paper (http://www.math.umt.edu/gideon/ \\ CORR-N-SPACE-REG.pdf), University of Montana, Dept. of Mathematical Sciences. Gini, C. (1914). L'Ammontare e la Composizione della Ricchezza delle Nazioni, \textit{Bocca}, Torino. Randles, R. H. and Wolfe, D. A. (1979). Introduction to the Theory of Nonparametric Statistics, Wiley \& Sons, New York. Rummel, Steven E. (1991). A Procedure for Obtaining a Robust Regression Employing the Greatest Deviation Correlation Coefficient, Unpublished Ph.D. Dissertation, University of Montana, Missoula, MT 59812, full text accessible through UMI ProQuest Digital Dissertations. Sheng, HuaiQing (2002). Estimation in Generalized Linear Models and Time Series Models with Nonparametric Correlation Coefficients, Unpublished Ph.D. Dissertation, University of Montana, Missoula, MT 59812, full text accessible through\\ http://wwwlib.umi.com/dissertations/fullcit/3041406. \end{document}
\section{Introduction}\label{sec:1} A standard way to describe a $d$-dimensional field theory at {the} classical level is through an action which is the integral of a Lagrangian over a $d$-dimensional manifold ${\cal M}$. For physical theories, ${\cal M}$ is typically a Lorentzian manifold which may have a codimension one boundary $\partial {\cal M}$. {The} variational principle can then be used to derive equations of motion which are typically second order differential equations over ${\cal M}$. These equations are well-posed if their solutions can be fully determined by specifying the boundary data, which can e.g. be the Cauchy data over a constant time slice in ${\cal M}$. Gravity theories in generic $d$ dimensions have two important features: (1) Background independence, meaning that the metric on ${\cal M}$ is a solution to the same theory and not a priori fixed or given. In particular, to completely specify it we need to provide the boundary data. (2) Diffeomorphism invariance, meaning that the metric can only be determined up to generic coordinate transformations. This implies that among the $d(d+1)/2$ components of the $d$-dimensional metric, $d$ of them can be removed by an appropriate choice of coordinates. Of the remaining $d(d-1)/2$, only $d(d-3)/2$ correspond to propagating gravitons\footnote{Here we are implicitly assuming an Einstein gravity theory which has only a massless spin 2 state as its propagating degree of freedom.} and $d$ remain free. The latter will be fixed through the boundary data, which are functions over codimension one surfaces, i.e. we need $d$ functions of $d-1$ variables to specify the boundary data \cite{Grumiller:2020vvv}. 
The discussion of the previous two paragraphs raises a crucial point to be addressed and understood: $\partial{\cal M}$ is a $d-1$ dimensional surface and a generic $d$-dimensional diffeomorphism may act non-trivially on the boundary data and hence move us within the space of solutions determined by the boundary data. This means that the part of the general local coordinate transformations which acts non-trivially on the boundary data can become ``physical'', as it changes a solution to another physically distinguishable solution. We hence need to refine the equivalence principle, which leads to diffeomorphism invariance, in {the} presence of boundaries \cite{Sheikh-Jabbari:2016lzm}. This argument already tells us that these (potentially) ``physical'' diffeos should be a measure zero subset of the $d$-dimensional diffeos which act on the {codimension} one boundary surface. That is, they can at most be $d$ functions of the $d-1$ variables parametrising the boundary \cite{Grumiller:2020vvv}. These physical diffeos indeed label the boundary degrees of freedom (b.d.o.f.). In {the} presence of boundaries, especially if they are timelike or null, the system has, besides the $d(d-3)/2$ bulk gravitons, a maximum of $d$ b.d.o.f. They can interact among themselves and also with the bulk d.o.f. The details of these interactions are of course determined once we fix the boundary conditions. {Our general goal is to develop a systematic treatment of the b.d.o.f. and their interactions with the bulk d.o.f.} The key questions toward this goal are \begin{enumerate} \item Whether and how these maximal physical diffeos and/or b.d.o.f. can be realised; \item What are the possible choices of boundary conditions and what is the guiding principle behind them; \item How do these choices fix the interactions mentioned above. \end{enumerate} These questions have of course been under intense study since the seminal work of BMS \cite{Bondi:1962, Sachs:1962} and have gained a boost in the last decade, e.g. 
see \cite{strominger:2017zoo} and references therein. {In this work we focus on the first question. The two other questions will be briefly discussed in the conclusion section and will get a full treatment in upcoming publications.} We can systematically analyse this question by fixing/choosing a codimension one boundary surface using $d$ diffeomorphisms. We then remain with $d$ ``residual'' diffeos which only act on the chosen $d-1$ dimensional ``boundary''. To specify which of these diffeos are trivial or not, we can employ one of the standard methods of associating (conserved) charges to the diffeos, e.g. the covariant phase space method \cite{Lee:1990nz,Iyer:1994ys,Compere:2018aar}; if the charge is finite and non-zero, the diffeo is nontrivial {or physical}. This method, however, provides us with {the} variation of the charges and not the charges themselves. There is then another non-trivial step: to check whether the charge is integrable over the space of solutions discussed above. If integrable, one can then define the charge. The same covariant phase space formulation also implies that the algebra of these integrable charges is the same as the Lie algebra of the symmetry generating diffeos, up to possible central charges, e.g. see \cite{Compere:2018aar}. These charges, if well-defined, can be used to label the b.d.o.f., i.e. b.d.o.f. fall into representations (more precisely, coadjoint orbits) of the charge algebra, see e.g. \cite{Oblak:2016eij}. Classic examples are four- or three-dimensional asymptotically flat spacetimes where the boundary is the future null infinity $\boldsymbol{\mathscr{I}^+}$. In these cases, the residual diffeos are, respectively, three and two functions over codimension two surfaces. The charges turn out to be integrable in the absence of the Bondi news \cite{Bondi:1962, Wald:1999wa, Barnich:2011mi} and the algebra is, respectively, BMS$_4$ \cite{Bondi:1962, Sachs:1962} or BMS$_3$ \cite{Barnich:2010eb}. 
In both cases, the maximal number of b.d.o.f. is not reached and, moreover, the symmetry generators are only functions over codimension two surfaces (rather than codimension one). These considerations can also be applied to timelike (causal) boundaries, as in the seminal case of asymptotically AdS$_3$ \cite{Brown:1986nw}. The Brown-Henneaux analysis yields integrable charges for only a subsector of the maximal boundary data, which has two functions of a single variable (rather than three functions of two variables). The algebra is two copies of the Virasoro algebra at the Brown-Henneaux central charge \cite{Brown:1986nw}. While it is common to explore these charge analyses over the asymptotic boundaries of spacetimes, one can take these boundaries to be any codimension one surface. In particular, for the case of black holes, a natural and physically relevant choice is to take the horizon, which is a null surface, to be the boundary over which the boundary data and b.d.o.f. are stored. This choice is physically relevant because the horizon is indeed the boundary of the timelike curves (paths of physical observers) outside the horizon. Moreover, one can model what is inside a black hole horizon by replacing the inside region with a membrane placed at the stretched horizon, as is done in the membrane paradigm \cite{Thorne:1986iy, Parikh:1998mg}. {Furthermore}, near horizon degrees of freedom and symmetries are the essential ingredients of {the} formulation of the soft hair proposal \cite{Hawking:2016msc} and have been explored in several different publications \cite{Donnay:2015abr, Donnay:2016ejv, Afshar:2016kjj, Afshar:2016wfy, Afshar:2016uax, Mao:2016pwq, Grumiller:2016kcp, Grumiller:2018scv, Ammon:2017vwt, Chandrasekaran:2018aop, Chandrasekaran:2019ewn, Grumiller:2019fmp, Adami:2020amw}. Nonetheless, none of these realizes the maximal set of b.d.o.f. discussed above. 
With the above motivations, in this work we focus on boundary charges over generic null surfaces in two-dimensional dilaton gravity and three-dimensional Einstein gravity theories. These theories do not have propagating d.o.f. and {thus} they provide a more controlled setup to ask, formulate, and address questions about boundary charges in full generality. As we will see, this setup is still very rich and provides us with results and insights which could be directly generalized to higher-dimensional cases. In our surface charge analysis, {we realize} the largest set of boundary charges by keeping the boundary conditions free and unfixed. {That is, unlike almost all previous analyses, we do not fix any boundary conditions; fixing boundary conditions generically amounts to a reduction of the phase space governing the b.d.o.f.} In the $2d$ case these are two sets of charges which are functions of the single variable parametrising the null surface. In the $3d$ case these are three sets of charges which are functions of the two variables spanning the null surface, which is a null cylinder. {The symmetry generators (non-trivial diffeos) can be chosen to be field/state-dependent, meaning that they can depend on the functions defining the phase space of solutions.} One of our main results is that in the $2d$ and $3d$ cases there always exists a basis where the charges are integrable. The algebra of these charges, however, depends on the basis used for the symmetry generators: a state-dependent symmetry generator will change the algebra of charges. We discuss some different charge bases and the corresponding algebras. The organisation of this paper is as follows. In sections \ref{sec:2dgrav} and \ref{sec:3dgrav}, we consider dilaton gravity in $2d$ and pure gravity in $3d$, respectively. Based on the most general expansion around a null hypersurface, we compute the null boundary symmetries (NBS) and their charges. 
We show, by construction, that there exists a family of reparametrisations of the symmetries yielding integrable charges. {For the $2d$ case we obtain a Heisenberg algebra and in $3d$ the Heisenberg algebra in semidirect sum with Diff$(S^1)$, as the algebra of charges.} In section \ref{sec:changebasis}, we discuss generic reparametrisations/changes of basis and the various algebras one can reach for both the $2d$ and $3d$ cases. The last section is devoted to concluding remarks and outlook. In particular, we discuss the ``fundamental null boundary symmetry'' algebra, Heisenberg $\oplus$ Diff$(d-2)$, which can always be reached in generic dimensions (including $d>3$) in the absence of Bondi news through the null surface. In some appendices, we have gathered useful formulas regarding the $2d$ and $3d$ gravity theories and their general solutions, and discuss the modified bracket method \cite{Barnich:2011mi} to deal with non-integrable charges. \section{Null Boundary Symmetry (NBS) Algebra, \emph{2d} Dilaton Gravity Case}\label{sec:2dgrav} Two dimensions is the lowest spacetime dimension in which one can consider gravity. However, the Einstein-Hilbert action is purely topological in $2d$. One simple way to get a bulk action, and therefore equations of motion, is to add a scalar field, the dilaton \cite{Jackiw:1984, Teitelboim:1984, Callan:1992rs}. This set-up has no propagating degrees of freedom but nevertheless exhibits very interesting features, e.g. see \cite{Brown:1988am, Grumiller:2002nm} and references therein. In this work, we focus on the computation of the charges on a null hypersurface; see \cite{Grumiller:2015vaa, Grumiller:2017qao} for earlier asymptotic symmetry analyses on AdS$_2$. The dilaton-gravity action is \cite{Jackiw:1984, Teitelboim:1984} \begin{equation} S_G=\frac{1}{16\pi G}\int \textrm{d} ^2 x \sqrt{-g}\left(\phi^2 R-\lambda(\partial\phi)^2-V(\phi)\right)\,. 
\end{equation} However, the kinetic term of the dilaton can be absorbed by a field redefinition \begin{equation} g\to \phi^{-\frac{\lambda}{2}}g,\quad\Rightarrow \quad \lambda\to 0, \quad V\to \phi^{-\frac{\lambda}{2}} V\,. \end{equation} So we consider the following action \cite{Grumiller:2002nm} \footnote{We do not consider any extra matter fields in the system, nor do we discuss boundary terms.} \begin{equation} \label{action2d} S=\frac{1}{16\pi G}\int \textrm{d} ^2 x \sqrt{-g}\left(\Phi R-U(\Phi)\right)\,. \end{equation} The above theory with different potentials $U(\Phi)$ has been considered in the literature; e.g. $U(\Phi)=\Phi^k$ type potentials may arise from various reductions of higher-dimensional theories to two dimensions. See \cite{Grumiller:2002nm, Grumiller:2017qao} for more discussions and references. For our analysis below, where we consider an expansion around a null surface, the explicit form of the potential does not matter as long as it is not a constant. The most general solution to the above $2d$ gravity theory is discussed in appendix \ref{appen:2d-solutions}. There are two classes of families, constant $\Phi$ and non-constant $\Phi$ solutions. The former, however, always has vanishing charges. So, hereafter we only consider {a} non-constant dilaton family of solutions. 
For the charge analysis, we only need the leading behaviour of the fields near the $r=0$ surface, which we take to be null, \begin{equation}\label{2d-NH-metric} \begin{split} \d s^2 &= 2\eta(v) \d r \d v- 2r \eta(v) F_0(v)\d v^2 +{\cal O}(r^2)\,\\ \Phi &= \Phi_0 (v)+ {\cal O}(r) \end{split} \end{equation} The field equations to first order in $r$ relate the three functions $\eta, F_0,\Phi_0$, \begin{equation} - \Phi_0'' + \Phi_0' \left( F_0 + \frac{\eta'}{\eta} \right)=0 \, . \end{equation} {For our charge analysis, we only focus on the non-constant $\Phi_0$, $\Phi_0'\neq0$, case.\footnote{{In the special case that $\Phi_0'=0$, $F_0, \eta$ are the two arbitrary functions describing the solution near the null boundary $r=0$.}} The system is then described by $\eta, \Phi_0$, which have arbitrary $v$ dependence, as $F_0$ is given by} \begin{equation}\label{F0} F_0= \frac{\Phi''_0}{\Phi'_0}-\frac{\eta'}{\eta} \,. \end{equation} As discussed in appendix \ref{appen:2d-solutions}, the solutions admit \begin{equation} \zeta= \epsilon^{\mu \nu} \partial_\mu \Phi \partial_\nu \end{equation} as a Killing vector, where $\epsilon^{\mu \nu}$ is the Levi-Civita tensor in 2$d$ \cite{Gegenberg:1994pv}. This vector is normal to the charge computing surface, which is along $\partial_\mu\Phi$. Since $|\zeta|^2=|\d\Phi|^2$, the Killing vector field $\zeta$ is null on $r=0$ when $\partial_v\Phi_0\partial_r\Phi|_{r=0}=0$. \subsection{NBS generating vector fields} The vector fields that preserve the form of solution \eqref{2d-NH-metric} and keep $r=0$ a null surface are \begin{equation}\label{2d-NHKV-flat} \xi=T\partial_{v}-r(W-T')\partial_{r}+ \mathcal{O}(r^2) \, , \end{equation} where $T,W$ are arbitrary functions of $v$. 
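Since the generators \eqref{2d-NHKV-flat} and the field variations they induce are central to everything that follows, a quick computer-algebra cross-check may be helpful. The sketch below (our own setup, assuming sympy; function names such as \texttt{lie\_metric} are not part of the paper) verifies that the Lie derivative of the leading-order metric \eqref{2d-NH-metric} along \eqref{2d-NHKV-flat} preserves the form of the ansatz (in particular $g_{rr}=0$ and the nullity of $r=0$) and reproduces the variations $\delta_\xi\eta$ and $\delta_\xi\Phi_0$ quoted in \eqref{xi-trans} below.

```python
import sympy as sp

v, r = sp.symbols('v r')
T, W, eta, Phi0, F0, Phi1 = (sp.Function(n)(v)
                             for n in ('T', 'W', 'eta', 'Phi0', 'F0', 'Phi1'))
x = (v, r)

# leading-order metric (2d-NH-metric) and generator (2d-NHKV-flat)
g = sp.Matrix([[-2*r*eta*F0, eta],
               [eta, 0]])
xi = sp.Matrix([T, -r*(W - sp.diff(T, v))])
Phi = Phi0 + r*Phi1          # dilaton; Phi1 is a placeholder O(r) coefficient

def lie_metric(xi, g):
    # (L_xi g)_{mn} = xi^a d_a g_{mn} + g_{an} d_m xi^a + g_{ma} d_n xi^a
    return sp.Matrix(2, 2, lambda m, n:
        sum(xi[a]*sp.diff(g[m, n], x[a])
            + g[a, n]*sp.diff(xi[a], x[m])
            + g[m, a]*sp.diff(xi[a], x[n]) for a in range(2)))

dg = lie_metric(xi, g).applyfunc(sp.simplify)
d = lambda f: sp.diff(f, v)

assert dg[1, 1] == 0                          # g_{rr} stays zero
assert sp.simplify(dg[0, 0].subs(r, 0)) == 0  # g_{vv} stays O(r): r=0 stays null
assert sp.simplify(dg[0, 1].subs(r, 0)
                   - (d(eta)*T + 2*eta*d(T) - eta*W)) == 0   # delta eta
dPhi = (xi.T * sp.Matrix([sp.diff(Phi, v), sp.diff(Phi, r)]))[0]
assert sp.simplify(dPhi.subs(r, 0) - d(Phi0)*T) == 0         # delta Phi_0
```

All four assertions pass, confirming the quoted transformation rules at leading order in $r$.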
The Lie bracket of two vectors of the form \eqref{2d-NHKV-flat} is \begin{equation}\label{2d-algebra} \begin{split} [\xi(T_{1},W_{1}),\xi(T_{2},W_{2})]&=\xi(T_{12},W_{12})\cr T_{12}=(T_{1}T_{2}'-T_{2}T_{1}')\,&,\qquad W_{12}= (T_{1}W_{2}'-T_{2}W_{1}')\,. \end{split} \end{equation} If we assume that $T,W$ are meromorphic functions of $v$, i.e. if they admit a Laurent expansion,\footnote{{Assuming smooth diffeos, one should consider a Taylor expansion, i.e. the sums in \eqref{T-W-Laurent} should be limited to non-negative integers $n=0,1,2,\cdots$. Allowing for negative $n$ amounts to considering diffeos which have poles in $v$. This is somehow like the superrotation charges of BMS$_4$ [12, 38]}} \begin{equation}\label{T-W-Laurent} T=-\sum_{n\in Z} \tau_n v^{n+1},\qquad W= \sum_{n\in Z} \omega_n v^{n+1}, \end{equation} with the corresponding generators \begin{equation} T_n=-v^{n+1} \partial_v - (n+1)\, r\, v^n \partial_r,\qquad W_n=- r\, v^{n+1}\partial_r, \end{equation} then $T_n, W_n$ satisfy a BMS$_3$ algebra \cite{Barnich:2006av,Barnich:2010eb}, \begin{equation}\label{BMS3-2d-KVA} [T_m, T_n]= (m-n)\,T_{m+n}\, , \quad [T_m, W_n]= (m-n)\,W_{m+n},\quad [W_m, W_n]=0. \end{equation} It is notable that \eqref{2d-NHKV-flat} generates a one-dimensional Diff $\oplus$ Weyl algebra, in which the Weyl scaling corresponds to the BMS$_3$ supertranslations. Under a transformation generated by \eqref{2d-NHKV-flat}, the metric and the dilaton become \begin{equation} g_{\mu\nu}[\eta, \Phi_0] \to g_{\mu\nu}[\eta+\delta_\xi \eta, \Phi_0+\delta_\xi \Phi_0] \end{equation} where \begin{equation}\label{xi-trans} \delta_\xi \eta= \eta' \, T + 2 \,\eta \, T'-\eta\ W,\qquad \delta_\xi \Phi_0= \Phi_0'\ T\,. \end{equation} \subsection{Surface charges}\label{sec:charge-2d-case} Using the Iyer-Wald presymplectic form \cite{Iyer:1994ys}, one can compute the charge variation associated with the above transformations. 
Straightforward analysis yields \begin{equation} \slashed{\delta} Q_\xi = - \frac{\sqrt{-g}}{8 \pi G} \left( \delta{\Phi} \, \nabla^{[v} \xi^{r]} - \,\xi^{[v} h^{r]}_{\lambda} \nabla^{\lambda} {\Phi} +2 \xi^{[v} \nabla^{r]} \delta{\Phi} \right), \end{equation} where $h_{\mu\nu}=\delta g_{\mu\nu}$ denotes the metric variations and $\slashed{\delta}$ indicates that the charge is not necessarily integrable. We note that the charge variation comes from the part proportional to $\delta\Phi$; the part not involving the variation of $\Phi$ does not contribute to the charge in the $2d$ case for our background metrics \eqref{2d-NH-metric}. The charges associated with the near null hypersurface Killing vectors \eqref{2d-NHKV-flat} for the metric \eqref{2d-NH-metric}, computed at $r=0$, are \begin{eqnarray}\label{Charge-variation-01} 16\pi G \slashed{\delta} Q_{\xi}&=& W\delta\Phi_0+T\left(-\Gamma \delta\Phi_0 -2\delta \Phi_0'+\frac{\Phi_0'\delta\eta}{\eta}\right) \end{eqnarray} where \begin{equation}\label{P-G-01} \Gamma:= - \frac{2 \, \Phi_0''}{\Phi_0'}+\frac{\eta'}{\eta} \,, \qquad \delta_\xi \Gamma = \left(\Gamma \, T \right)' - W' \, . \end{equation} If the symmetry generators are state-independent, $\delta T =\delta W=0$, the {charges} are not integrable on the phase space. Moreover, they are functions of $v$ and their evolution is not constrained by the equations of motion. Non-integrability of the charges for non-constant dilaton cases was also reported in \cite{Grumiller:2015vaa, Grumiller:2017qao}, where asymptotic symmetries on AdS$_2$ were analysed. One method to deal with the non-integrable charges is to use the Barnich-Troessaert modified bracket \cite{Barnich:2011ct, Barnich:2011mi} (see also \cite{Adami:2020amw} for more discussions) to extract the integrable part of the non-integrable charges. That is, to write $\slashed{\delta} Q_{\xi}=\delta Q^I_\xi+{\cal F}$ where $Q^I_\xi$ is the integrable charge and ${\cal F}$ is a non-zero ``flux''. 
As we show in appendix \ref{BT-MB-Appendix}, the algebra of the $Q^I$ is a centerless BMS$_3$ algebra, the same algebra as that of the symmetry generators \eqref{BMS3-2d-KVA}. In the next part, however, we explore a different line and find a basis in which the charges are integrable. \subsection{Integrable basis for the charges}\label{sec:integrabe-2d-basis} As discussed in \cite{Grumiller:2019fmp}, we may choose which combinations of the symmetry generators are assigned to have vanishing variations. That is, one may consider a ``change of basis'' by taking linear combinations of the symmetry generators $W,T$, with possibly field-dependent coefficients, to define a new basis. In particular, let us consider \begin{equation}\label{2dInt-basis-generators} \hat W=W-\Gamma\, T\,,\quad \hat T=\left(\frac{\eta}{(\Phi_0')^2}\right)^{s}{\Phi_0'}\ {T}\, \end{equation} where $\Gamma$ is defined in \eqref{P-G-01} and $s$ is an arbitrary real number. In the new basis we take $\hat{W}$ and $\hat{T}$ to be two field-independent functions and to have zero variations over the phase space. {The field-dependence of the symmetry generators has to be taken into account when one computes the symmetry algebra.} This is done systematically through the adjusted Lie bracket, defined as \cite{Barnich:2010eb, Compere:2015knw} \begin{equation} [\xi_1,\xi_2]_{_{\text{adj. bracket}}}=[\xi_1, \xi_2]-\hat\delta_{\xi_1}\xi_2+\hat\delta_{\xi_2}\xi_1\end{equation} where $\hat\delta$ comes from the variation of the fields in $T, W$ of $\xi_1,\xi_2$. Their algebra is \begin{equation}\label{NHKV-algebra-2d} [\xi(\hat W_1, \hat T_1), \xi(\hat W_2, \hat T_2)]_{_{\text{adj. bracket}}}=\xi\left(0,s(\hat T_1\hat W_2-\hat T_2\hat W_1)\right)\,. 
\end{equation} The charge variation \eqref{Charge-variation-01} in this new basis takes the form \begin{equation}\label{charge-vartiation-02} \delta Q_\xi=\frac1{16\pi G}\left( \hat W \, \delta \Phi_0+\hat T\, \delta \mathcal{P}^{(s)}\right) \end{equation} where \begin{equation}\label{E-01} \mathcal{P}^{(s)}=\left\{\begin{array}{cc}-\frac1s\left( \frac{(\Phi'_0)^2}{\eta}\right)^{s} \,, &\text{ for } s\neq0\\ \ \ & \ \\ - \ln \left( \frac{(\Phi'_0)^2}{\eta}\right)\,, &\text{ for } s=0\end{array}\right. \,. \end{equation} The charges $\mathcal{P}^{(s)}, \Phi_0$ are clearly integrable once we take $ \hat W, \hat T$ to be field-independent in the new basis. This is not a surprise: in the absence of propagating degrees of freedom in $2d$, one expects to be able to find a new basis in which the flux vanishes. In section \ref{sec:changebasis} we provide a general discussion on the existence of integrable bases. The transformation law of the fields is given by \begin{equation} \begin{cases} \delta_\xi\mathcal{P}^{(s)}= s\,\hat{W}\,\mathcal{P}^{(s)} ,\qquad \delta_\xi \Phi_0=-s\mathcal{P}^{(s)}\hat{ T}\, & \qquad s\neq0\\ \delta_\xi\mathcal{P}^{(0)}= -\hat{W}, \hspace{2 cm} \delta_{\xi}\Phi_{0}=\hat{T} & \qquad s=0\,,\end{cases} \end{equation} yielding the charge algebra, \begin{equation}\label{2d-Phi-Xi-algebra} \begin{split} i \{ \Phi_0 (v), \mathcal{P}^{(s)} (v) \} = &{16\pi G}i\left(-s\,\mathcal{P}^{(s)}(v)+ \,\delta_{s,0} \right)\, \\ i\{ \Phi_0 (v), \Phi_0(v) \} = &\ i\{ \mathcal{P}^{(r)} (v), \mathcal{P}^{(s)} (v) \}=0 \, . \end{split} \end{equation} Let us start by considering the case $s=0$. Performing a ``quantisation'' of the algebra by replacing the Poisson brackets with commutators, $i\{ , \}\to [ , ]$, one has \begin{equation}\label{C-R-01} \begin{split} &[\Phi_{0}(v),\mathcal{P}^{(0)}(v)]=16\pi G\ {i} \\ &[\Phi_{0}(v),\Phi_{0}(v)]=[\mathcal{P}^{(0)}(v),\mathcal{P}^{(0)}(v)]=0\,, \end{split} \end{equation} which is the Heisenberg algebra. 
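The quoted transformation rules follow directly from \eqref{xi-trans}. The sketch below (our own construction, assuming sympy; the jet-coordinate setup is not part of the paper) implements $\delta_\xi$ as a derivation on $\eta$, $\Phi_0'$ and their $v$-derivatives, and checks $\delta_\xi\Gamma$ of \eqref{P-G-01} together with $\delta_\xi\mathcal{P}^{(s)}=s\,\hat W\,\mathcal{P}^{(s)}$ and $\delta_\xi\Phi_0=-s\,\mathcal{P}^{(s)}\hat T$ for $s\neq0$.

```python
import sympy as sp

s = sp.symbols('s', positive=True)
# jet coordinates: et ~ eta, p1 ~ Phi0', and their v-derivatives; T, W likewise
et, p1 = sp.symbols('et p1', positive=True)
et1, et2, p2, p3 = sp.symbols('et1 et2 p2 p3')
T, T1, T2, W, W1 = sp.symbols('T T1 T2 W W1')

prolong = {et: et1, et1: et2, p1: p2, p2: p3, T: T1, T1: T2, W: W1}

def D(f):  # total v-derivative on the (truncated) jet space
    return sum(sp.diff(f, q)*dq for q, dq in prolong.items())

# delta_xi eta and delta_xi Phi0' from (xi-trans); delta Phi0' = (Phi0' T)'
d_et = et1*T + 2*et*T1 - et*W
d_p1 = D(p1*T)
var = {et: d_et, et1: D(d_et), p1: d_p1, p2: D(d_p1)}

def delta(f):  # induced variation of any function of the jet coordinates
    return sum(sp.diff(f, q)*dq for q, dq in var.items())

Gamma = -2*p2/p1 + et1/et
X = p1**2/et
What, That = W - Gamma*T, X**(-s)*p1*T     # hatted generators of the new basis
P = -X**s/s                                # P^(s) for s != 0

assert sp.simplify(delta(Gamma) - (D(Gamma*T) - W1)) == 0  # delta Gamma, (P-G-01)
assert sp.simplify(delta(P) - s*What*P) == 0               # delta P = s What P
assert sp.simplify(p1*T + s*P*That) == 0                   # delta Phi0 = -s P That
```

All three identities hold exactly, without using the equations of motion, as expected for the kinematical statements above.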
Therefore in our setup, we have a change of basis where we can reach the Heisenberg algebra \eqref{C-R-01}, with $v$ dependent charges, as the NBS algebra in {the} $2d$ case. In particular, we would like to emphasise two points: \begin{itemize} \item [1] The null boundary is a one-dimensional line parameterised by $v$ and the boundary phase space is labelled by the b.d.o.f. $\Phi_{0}(v),\mathcal{P}^{(0)}(v)$. That is, the b.d.o.f. in this case are those of a one-dimensional particle. \item [2] The $v$ dependence of these charges is not fixed by our charge analysis because we did not fix any boundary condition on the null surface $r=0$. Therefore, the boundary Hamiltonian (the generator of translations in $v$) which governs the dynamics over the boundary phase space is not determined through our analysis. We shall discuss this point further in section \ref{sec:conclusion}. \end{itemize} To close the section, we note that the generic $s\neq 0$ case of \eqref{2d-Phi-Xi-algebra} can be obtained from the Heisenberg algebra through \begin{equation}\label{Es-E-1} \mathcal{P}^{(s)}=-\frac1s e^{-s\mathcal{P}^{(0)}}\,. \end{equation} $\mathcal{P}^{(s)}, \Phi_0$ provide us with another basis in which the charges are integrable. In section \ref{sec:changebasis} we discuss that there are in fact infinitely many such integrable bases, with different charge algebras. \section{Null Boundary Symmetry (NBS) Algebra, \emph{3d} Gravity Case}\label{sec:3dgrav} As the next case, we consider three-dimensional Einstein gravity, described by the action and field equations \begin{equation}\label{action-EoM-3d} S=\frac{1}{16\pi G}\int \textrm{d} ^3 x\ \sqrt{-g}\left( R-2\Lambda\right), \qquad \mathcal{E}_{\mu \nu}:= R_{\mu \nu} - 2\Lambda g_{\mu \nu}=0. \end{equation} Depending on the sign of $\Lambda$ ($\Lambda<0$, $\Lambda=0$, $\Lambda>0$), we respectively have AdS$_3$, flat or dS$_3$ gravity. All solutions of the respective theories are locally AdS$_3$, flat or dS$_3$. 
The AdS$_3$ case admits BTZ black hole solutions \cite{Banados:1992gq, Banados:1992wn} once we identify one of the spatial directions on a circle. Similarly, in flat space there are the so-called flat space cosmologies \cite{Cornalba:2002fi, Cornalba:2003kd, Bagchi:2012xr}, which are solutions with a cosmological horizon. The dS$_3$ solution itself also has a cosmological horizon, e.g. see \cite{Spradlin:2001pw}. We will adopt the $r, v, \phi$ coordinates and take $\phi$ to be $2\pi$ periodic, $\phi\simeq \phi+2\pi$. Since we will be focusing on {the} behaviour of solutions near a null surface, the value of $\Lambda$ will not be of relevance and we leave it free. {A general family of solutions to the AdS$_3$ theory with $r=0$ as a null surface is specified by three functions of $v,\phi$, see appendix \ref{appen:3d-solutions}. This constitutes the maximal configuration of the $3d$ phase space, as it is labelled by three codimension one functions. We do not use this family of solutions for our charge analysis because it is written in a coordinate system which makes the charge computations rather cumbersome. However, we still present this solution to convey the idea that the three-function family of near null surface geometries constructed below can be extended to a full solution away from the $r=0$ surface. } Let us consider a codimension one null hypersurface $\mathcal{N}$ at $r=0$ and adopt a Gaussian-null-like coordinate system as follows. Let $v$ be the `advanced time' coordinate along $\mathcal{N}$ such that a null surface is defined by \begin{equation} g^{\mu \nu} \partial_\mu v \partial_\nu v =0\, . \end{equation} A ray can be defined as the vector tangent to this surface, $k^\mu = \eta\, g^{\mu \nu} \partial_\nu v$, where $\eta$ is an arbitrary non-zero function. Let $r$ be the affine parameter of the generator $k^\mu$ such that $k^{\mu} = \frac{\d x^{\mu}}{\d r}=\delta^\mu_r$. 
The last coordinate $\phi \sim \phi + 2 \pi$ is chosen as a parameter constant along each ray, $k^\mu \partial_\mu \phi =0$. With this choice of coordinates, the metric components are restricted as \begin{equation} g^{vv}=0 \, , \qquad g^{v r}=\frac{1}{\eta} \, , \qquad g^{v \phi} =0\, , \end{equation} or \begin{equation} g_{ r r}=0 \, , \qquad g_{v r}=\eta \, , \qquad g_{ r \phi} =0\, , \end{equation} where $\eta = \eta(v, \phi)$.\footnote{$k\cdot \nabla k =0$ leads to $\partial_r \eta =0$. {One can perform the analysis without this condition, considering an $r$-dependent $\eta$. In this case new functions will appear in the symmetry generators which are trivial in the sense that there are no surface charges associated with them. One can use these trivial diffeomorphisms to remove the sub-leading terms in the $g_{vr}$ component of the metric, or equivalently in $\eta$, and then we are left with only an $r$ independent $\eta$. This point, in a very similar $4d$ setup, was discussed in some detail in \cite{Adami:2020amw}. Since the analysis is essentially the same we do not repeat it here.}} Therefore, the most general line-element in the coordinate system constructed above is of the form \begin{equation}\label{3d-NH-metric} \d s^2= - F \d v^2 + 2 \,\eta \d v \d r +2\, f\, \d v \d \phi +h \d \phi^2 \,, \end{equation} where $F$, $f$ and $h$ are some functions of $x^{\mu}$. We will be interested in the near null surface ($r=0$) expansion of the above general metric, for which \begin{subequations}\label{r-expansion-3d} \begin{align} F(v,r,\phi) =& F_0(v,\phi) + r F_1(v,\phi) +\mathcal{O}(r^2) \\ f(v,r,\phi) =& f_0(v,\phi) + r f_1(v,\phi) +\mathcal{O}(r^2) \\ h(v,r,\phi) =& \Omega(v,\phi)^2 + r h_1(v,\phi) +\mathcal{O}(r^2) \end{align} \end{subequations} Since $r=0$ is a null hypersurface, we must have $g^{rr}|_{r=0}=0$. Hence, \begin{equation}\label{Null-condition} F_0= - \left(\frac{f_0}{\Omega} \right)^2 \, . 
\end{equation} The near $r=0$ metric involves seven functions of $v,\phi$, namely $\eta, \Omega, h_1, f_0, f_1, F_0, F_1$. Besides \eqref{Null-condition} there are three other relations among them resulting from the EoM \eqref{action-EoM-3d}, which may be imposed order by order in $r$. At zeroth order, $\mathcal{E}_{v v}=0$, $\mathcal{E}_{v \phi}=0$ and $\mathcal{E}_{\phi \phi}=0$ yield \begin{equation}\label{v-v} \partial_v^2 \Omega + \frac{1}{2}\left(\Gamma-\frac{\partial_{v}\eta}{\eta} + \frac{f_0 \partial_\phi \eta }{ \Omega^2 \eta} \right) \chi - \partial_\phi \left( \frac{\frac{1}{2}\partial_\phi F_0+\partial_v f_0}{\Omega}\right) =0 \end{equation} \begin{equation}\label{v-phi} \partial_v\Upsilon + \Omega \partial_\phi \Gamma + \frac{\chi\,\partial_\phi \eta }{\eta} +f_0 \partial_\phi \left( \frac{\Upsilon}{\Omega^2}\right)-2 \Omega \partial_{\phi} \left( \frac{f_0 \Upsilon}{\Omega^3}\right) =0 \end{equation} \begin{equation}\label{phi-phi} \partial_{\phi}{\left(\frac{f_{1}-\partial_{\phi}\eta}{\Omega}\right)}-\partial_{v}{\left(\frac{h_{1}}{\Omega}\right)}+\frac{(\partial_{\phi}{\eta})^2}{2\eta\Omega}-\Lambda{\eta\Omega}-\frac{f_{1}^2+F_{1}h_{1}}{2\eta\Omega}=0 \end{equation} with \begin{subequations} \begin{align} \Upsilon:=&\frac{ f_{0}h_{1}}{\eta\Omega}-\frac{f_{1}\Omega}{\eta} \, \label{Upsilon-3d} \\ \Gamma :=& -\frac{F_{1}}{\eta}-\frac{\partial_{v}\eta}{\eta}- \frac{h_1 f_0^2}{\eta \Omega^4}+\frac{f_0 \partial_\phi \eta}{\eta \Omega^2}+ \frac{2f_0 \Upsilon}{\Omega^3} \label{Gamma-3d}\\ \chi:=&\partial_v \Omega - \partial_\phi \left( \frac{f_0}{\Omega}\right) \end{align} \end{subequations} The rest of the equations of motion determine the higher orders in the metric expansion in terms of the lower ones. 
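The gauge conditions on the inverse metric and the null condition \eqref{Null-condition} can be cross-checked by inverting \eqref{3d-NH-metric} directly. The following sketch (ours, assuming sympy; the truncation at ${\cal O}(r)$ is for concreteness and the exact statements hold for any $F,f,h$) verifies $g^{vv}=0$, $g^{v\phi}=0$ and $g^{vr}=1/\eta$, and that $g^{rr}|_{r=0}=0$ is precisely $F_0=-(f_0/\Omega)^2$.

```python
import sympy as sp

v, r, phi = sp.symbols('v r phi')
eta = sp.Function('eta')(v, phi)
F0, F1, f0, f1, h1 = (sp.Function(n)(v, phi)
                      for n in ('F0', 'F1', 'f0', 'f1', 'h1'))
Om = sp.Function('Omega')(v, phi)

# line-element (3d-NH-metric) with the expansions (r-expansion-3d), kept to O(r)
F, f, h = F0 + r*F1, f0 + r*f1, Om**2 + r*h1
g = sp.Matrix([[-F, eta, f],
               [eta, 0, 0],
               [f,  0, h]])           # coordinates ordered (v, r, phi)
ginv = g.inv()

# exact gauge conditions of the Gaussian-null-like coordinate system
assert sp.simplify(ginv[0, 0]) == 0            # g^{vv} = 0
assert sp.simplify(ginv[0, 2]) == 0            # g^{v phi} = 0
assert sp.simplify(ginv[0, 1] - 1/eta) == 0    # g^{vr} = 1/eta

# g^{rr}|_{r=0} = 0  <=>  the null condition F0 = -(f0/Omega)^2
grr0 = sp.simplify(ginv[1, 1].subs(r, 0))
assert sp.simplify(grr0 - (F0 + (f0/Om)**2)/eta**2) == 0
```

In particular, $g^{rr}|_{r=0}=\big(F_0+(f_0/\Omega)^2\big)/\eta^2$, so demanding that $r=0$ be null reproduces \eqref{Null-condition}.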
{Eq.~\eqref{v-v} may be viewed as an algebraic equation for $\Gamma$, and \eqref{v-phi}, \eqref{phi-phi} as first order differential equations for $\Upsilon$ and $f_1$.\footnote{The $v$ dependent integration functions will not play any role in our charge analysis and hence we do not consider them.} } \subsection{NBS generating vector fields} The vector field \begin{equation}\label{KV-3D} \begin{split} \xi^v =& T\\ \xi^r =& r (\partial_v T- W) + \frac{r^2 \partial_\phi T}{2 \Omega^2} \left( f_1 + \partial_\phi \eta -\frac{f_0 h_1}{\Omega^2}\right) +\mathcal{O}(r^3)\\ \xi^\phi =& Y - \frac{r\eta \partial_\phi T}{\Omega^2}+ \frac{r^2\eta h_1\partial_\phi T}{2 \Omega^4} +\mathcal{O}(r^3) \end{split} \end{equation} preserves the line-element \eqref{3d-NH-metric}, where $T$, $Y$ and $W$ are some functions of $v$ and $\phi$. The symmetry generating vector field \eqref{KV-3D} preserves the location of the null surface $r=0$, which we take to be the boundary of our spacetime. To see this explicitly, we note that \begin{equation} \begin{split} \mathcal{L}_{\xi}g^{rr}=&\xi^{\mu}\partial_{\mu}g^{rr}-2g^{r\mu}\partial_{\mu}\xi^{r}\\ =&\xi^{r}\partial_{r}g^{rr}+\xi^{v}\partial_{v}g^{rr}+\xi^{\phi}\partial_{\phi}g^{rr}-2g^{rr}\partial_{r}\xi^{r}-2g^{rv}\partial_{v}\xi^{r}-2g^{r\phi}\partial_{\phi}\xi^{r} \end{split} \end{equation} Each term on the right-hand side vanishes at $r=0$: $g^{rr}$ vanishes identically along the $r=0$ surface, so its $v$ and $\phi$ derivatives vanish there, while $\xi^{r}=\mathcal{O}(r)$ and hence $\partial_v\xi^r, \partial_\phi\xi^r$ also vanish at $r=0$. Therefore, $\delta_\xi g^{rr}|_{r=0}=0$. One may compute the algebra of NBS generating vector fields \eqref{KV-3D} using the adjusted bracket, yielding \begin{equation}\label{3d-NBS-KV-algebra} [\xi( W_1, T_1, Y_1), \xi( W_2, T_2, Y_2)]_{_{\text{adj. 
bracket}}}=\xi( W_{12}, T_{12}, Y_{12}) \end{equation} where \begin{subequations}\label{W12-T12-Y12} \begin{align} & T_{12}=T_1 \partial_v T_2 - T_2 \partial_v T_1 +Y_1 \partial_\phi T_2 - Y_2 \partial_\phi T_1\\ & W_{12}= T_1 \partial_v W_2 - T_2 \partial_v W_1 +Y_1\partial_\phi W_2- Y_2\partial_\phi W_1+ \partial_v Y_1 \partial_\phi T_2 -\partial_v Y_2 \partial_\phi T_1 \\ & Y_{12}= Y_1\partial_\phi Y_2- Y_2\partial_\phi Y_1+T_1 \partial_v Y_2 - T_2 \partial_v Y_1 \, , \end{align} \end{subequations} This is Diff$(C_2)\oplus$ Weyl$(C_2)$, where $C_2$ stands for the null cylinder parameterised by $v,\phi$ and Weyl$(C_2)$ is the Weyl scaling on this cylinder; $T, Y$ are generators of Diff$(C_2)$ and $W$ that of Weyl$(C_2)$.\footnote{As our analysis here indicates, if we repeat the null boundary symmetry analysis for general dimension $d$, we will find Diff$(C_{d-1})\oplus$ Weyl$(C_{d-1})$, where $C_{d-1}$ is the $d-1$ dimensional null ``cylinder'' one would find at $r=0$ and Weyl$(C_{d-1})$ denotes Weyl scaling on $C_{d-1}$. 
It is clear that generators of this algebra consist of $d=(d-1)+1$ functions over $C_{d-1}$.} Under transformations generated by these vector fields, fields transform as \begin{subequations} \begin{align} \delta_\xi \eta =& T \partial_v \eta + 2 \eta \partial_v T - \eta W +Y \partial_\phi \eta - \frac{f_0 \eta }{\Omega^2} \partial_\phi T\\ \delta_\xi \Omega = & T \partial_v \Omega + \partial_\phi \left( Y \Omega\right)+\frac{f_0 }{\Omega} \partial_\phi T \\ \delta_\xi f_0 = & \partial_v \left( T f_0\right) +\partial_\phi \left( Y f_0\right)+ \Omega^2 \partial_v Y -F_0 \partial_\phi T \end{align} \end{subequations} \begin{subequations} \begin{align} \hspace*{-5mm}\delta_{\xi}{\Upsilon}&=T\partial_{v}\Upsilon+ Y\partial_{\phi}\Upsilon +2\Upsilon\partial_{\phi}Y +\Omega\partial_{\phi}\left( W -\frac{f_{0}\partial_{\phi}T}{\Omega^2}\right)-\Omega \left(\Gamma -\frac{2f_{0}\Upsilon}{\Omega^3}+\frac{2\chi}{\Omega}\right)\partial_{\phi}T\\ \hspace*{-5mm}\delta_{\xi}\Gamma&=\partial_{v}{(T\Gamma)}+Y\partial_{\phi}\Gamma-\partial_{v}W+\partial_{v}{(\frac{f_{0}\partial_{\phi}T}{\Omega^2})} +\frac{f_{0}\partial_{\phi}W}{\Omega^2}-\frac{f_{0}}{\Omega^2}\left(\Gamma\partial_{\phi}T+\partial_{\phi}{(\frac{f_{0}\partial_{\phi}T}{\Omega^2})}\right)\\ \hspace*{-5mm}\delta_\xi \chi&= \partial_v (T \chi) +\partial_\phi (Y \chi) \,. 
\end{align} \end{subequations} \subsection{Surface charges} The Iyer-Wald surface charge \cite{Iyer:1994ys} is given by \begin{equation}\label{3dnonintch} \slashed{\delta} Q_{\xi} := \oint_{\partial \Sigma} \mathcal{Q}^{\mu \nu}_\xi[g ; \delta g] \d x_{\mu \nu} \end{equation} with \begin{equation}\label{charge-var-non-int} \mathcal{Q}^{\mu \nu}_\xi =\frac{\sqrt{-g}}{8 \pi G}\, \Big( h^{\lambda [ \mu} \nabla _{\lambda} \xi^{\nu]} - \xi^{\lambda} \nabla^{[\mu} h^{\nu]}_{\lambda} - \frac{1}{2} h \nabla ^{[\mu} \xi^{\nu]} + \xi^{[\mu} \nabla _{\lambda} h^{\nu] \lambda} - \xi^{[\mu} \nabla^{\nu]}h \Big), \end{equation} where $h_{\mu \nu}= \delta g_{\mu \nu}$ is a metric perturbation and $h= g^{\mu \nu}h_{\mu \nu}$. We take $\Sigma$ to be a constant $v$ hypersurface and $\partial \Sigma$ to be its cross-section with {the hypersurface} $r=0$. The surface charge on the given co-dimension two surface $\partial \Sigma$ is then \begin{equation}\label{charge-variation''} \slashed{\delta} Q_{\xi} = \frac{1}{16 \pi G}\int_{0}^{2 \pi} \d \phi \left[ W \delta \Omega + Y \delta \Upsilon + T \slashed{\delta} \mathcal{A}\right] \, , \end{equation} with \begin{equation} \slashed{\delta} \mathcal{A}=-2 \delta \chi - \Gamma \delta \Omega + \frac{ f_0}{\Omega^2} \delta\Upsilon {+} \partial_\phi \left( \frac{ f_0 \delta \Omega}{\Omega^2}\right) + \frac{ \chi\delta \eta}{\eta} \, . \end{equation} As we see, in the basis \eqref{KV-3D} the charges are not integrable for state-independent $W,Y,T$. One may use {the} modified bracket method to separate the flux and integrable parts of the charge, as done in Appendix \ref{BT-MB-Appendix}. However, in what follows we show there exists a basis in which the charges are integrable.
\subsection{Integrable basis for the charges}\label{sec:integrabe-3d-basis} Let us consider the state/field-dependent transformations, \begin{equation}\label{hat-generators-3d} \hat{W}=W- T \Gamma {-}\ \frac{f_0}{\Omega^2} \ \partial_\phi T \, , \qquad \hat{Y}=Y+T \frac{f_0 }{\Omega^2} \, , \qquad \hat{T} = \frac{\chi}{\Xi^{(s)}} T \, , \end{equation} or conversely, \begin{equation}\label{Int-basis-generators} \begin{split} T=\frac{\Xi^{(s)}}{\chi}\hat{T}, \hspace{1 cm} Y=\hat{Y}-\frac{f_{0}\Xi^{(s)}}{\chi\Omega^2}\hat{T},\hspace{1 cm} W=\hat{W}+\frac{\Gamma\Xi^{(s)}}{\chi}\hat{T}{+}\frac{f_{0}}{\Omega^2}\partial_{\phi}\left(\frac{\Xi^{(s)}\hat{T}}{\chi}\right)\ , \end{split} \end{equation} where $s$ is an arbitrary real number and \begin{equation}\label{Def-Xi-01} \Xi^{(s)} := \left(\frac{\chi^2}{\eta} \right)^{s}\ . \end{equation} In this basis the charge variation takes the form \begin{equation}\label{3d-integrable-charge-variation} \delta Q_{\xi} = \frac{1}{16 \pi G}\int_{0}^{2 \pi} \d \phi\, \left( \hat{W} \delta \Omega + \hat{Y} \delta \Upsilon +\hat{T} \delta \mathcal{P}^{(s)}\right) \end{equation} with \begin{equation}\label{charges-3d} \mathcal{P}^{(s)}= \begin{cases} - \frac{1}{s}\,\Xi^{(s)} & \text{if } s \neq 0 \\ -\ln{\Xi^{(1)}} & \text{if } s =0 \end{cases} \end{equation} which is clearly integrable if $\hat W, \hat Y, \hat T$ are taken to be field-independent, i.e. $\delta\hat W=\delta\hat Y=\delta\hat T=0$. We note that the equation of motion for $\mathcal P^{(s)}$ is \begin{equation}\label{calEs-EoM} \begin{cases} \frac{\partial_v \mathcal P^{(s)}}{\mathcal P^{(s)}} - \frac{f_0}{\Omega^2}\frac{\partial_\phi \mathcal P^{(s)}}{\mathcal P^{(s)}}+s\Gamma -2s \partial_\phi\left(\frac{f_0}{\Omega^2}\right) &=0,\qquad s\neq 0,\\ & \\ \partial_v \mathcal{P}^{(0)} -\frac{f_0}{\Omega^2}\partial_\phi \mathcal P^{(0)}-\Gamma +2 \partial_\phi\left(\frac{f_0}{\Omega^2}\right) &=0. 
\end{cases}\end{equation} So, among the six fields $\{\Omega, \, \mathcal{P}^{(s)}, \, \Upsilon; f_0, \, f_1 , \, \Gamma \}$ one can solve \eqref{calEs-EoM}, \eqref{v-phi} and \eqref{phi-phi} to obtain $f_0, \, f_1 , \, \Gamma$ in terms of $\Omega, \, \mathcal{P}^{(s)}, \, \Upsilon$, up to two $v$-dependent integration constants. Nonetheless, as one can see explicitly from \eqref{3d-integrable-charge-variation}, only the three arbitrary fields $\Omega, \, \mathcal{P}^{(s)}, \, \Upsilon$, and not the two $v$-dependent functions, appear in the surface charge expressions. Therefore, the solution phase space can be characterized by these three functions on the codimension-one $r=0$ null hypersurface. Since the generators are field-dependent, one should use the adjusted Lie bracket \cite{Barnich:2010eb, Compere:2015knw} when computing the bracket of two symmetry generators. This yields \begin{equation}\label{NHKV-algebra-3d} [\xi(\hat W_1, \hat T_1, \hat Y_1), \xi(\hat W_2, \hat T_2, \hat Y_2)]_{_{\text{adj. bracket}}}=\xi(\hat W_{12}, \hat T_{12}, \hat Y_{12}) \end{equation} where \begin{subequations}\label{NHKV-algebra-3d-01} \begin{align} & \hat T_{12}=s(\hat T_1\hat W_2-\hat T_2\hat W_1)+\hat Y_1\partial_\phi \hat T_2-\hat Y_2\partial_\phi \hat T_1+ (2s-1)(\hat T_1 \partial_\phi \hat Y_2 - \hat T_2 \partial_\phi \hat Y_1 )\\ & \hat W_{12}= \hat{Y}_1\partial_\phi \hat W_2 -\hat{Y}_2\partial_\phi \hat W_1 \\ & \hat Y_{12}=\hat Y_1\partial_\phi \hat Y_2-\hat Y_2\partial_\phi \hat Y_1, \end{align} \end{subequations} for both generic $s$ and $s=0$. As we see, the above algebra does not involve derivatives w.r.t. the $v$ parameter.
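As a consistency check, one can verify symbolically that the composition rules \eqref{NHKV-algebra-3d-01} satisfy the Jacobi identity for arbitrary $s$. The following sympy sketch is our own illustrative code (not part of the original analysis): a generator is represented as a triple $(\hat W,\hat T,\hat Y)$ of functions of $\phi$, and the hypothetical polynomial test generators are generic enough to probe all graded sectors of the algebra.

```python
import sympy as sp

phi, s = sp.symbols('phi s')
d = lambda f: sp.diff(f, phi)

def bracket(g1, g2):
    """Composition law of eq. (NHKV-algebra-3d-01); g = (W, T, Y)."""
    W1, T1, Y1 = g1
    W2, T2, Y2 = g2
    T12 = s*(T1*W2 - T2*W1) + Y1*d(T2) - Y2*d(T1) \
          + (2*s - 1)*(T1*d(Y2) - T2*d(Y1))
    W12 = Y1*d(W2) - Y2*d(W1)
    Y12 = Y1*d(Y2) - Y2*d(Y1)
    return (W12, T12, Y12)

def jacobiator(g1, g2, g3):
    """Sum of the three cyclic double brackets; vanishes identically."""
    terms = [bracket(bracket(g1, g2), g3),
             bracket(bracket(g2, g3), g1),
             bracket(bracket(g3, g1), g2)]
    return tuple(sp.expand(sum(t[i] for t in terms)) for i in range(3))

# hypothetical polynomial generators; s is kept symbolic
g1 = (phi, phi**2, 1 + phi)
g2 = (1 - phi, phi, phi**2)
g3 = (phi**2, 1, phi**3)
print(jacobiator(g1, g2, g3))  # -> (0, 0, 0)
```

The check passes for symbolic $s$, consistent with the claim that the bracket closes for both generic $s$ and $s=0$.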
To read off the algebra of the charges $\Omega, \Upsilon, \mathcal{P}^{(s)}$, we need to know the variations of the charges under diffeomorphisms, which, upon using the equations of motion \eqref{v-v} and \eqref{v-phi}, simplify to \begin{subequations} \begin{align} \delta_\xi \Omega= & -s\mathcal{P}^{(s)} \hat{T} + \partial_\phi ( \hat{Y} \Omega )\, , \\ \delta_\xi \mathcal{P}^{(s)}= & s \hat{W} \, \mathcal{P}^{(s)} +2s \,\mathcal{P}^{(s)} \,\partial_\phi \hat{Y} + \partial_\phi \mathcal{P}^{(s)} \hat{Y} \, ,\\ \delta_{\xi}{\Upsilon}=& (2s-1)\, \partial_\phi \mathcal{P}^{(s)} \hat{T} +2s \mathcal{P}^{(s)} \partial_\phi \hat{T} +\hat{Y} \partial_\phi \Upsilon +2 \Upsilon \partial_\phi \hat{Y}+ \Omega \partial_\phi \hat{W} \, , \end{align} \end{subequations} for $s \neq0$ and \begin{subequations} \begin{align} \delta_\xi \Omega= & \hat{T} + \partial_\phi ( \hat{Y} \Omega )\, , \\ \delta_\xi \mathcal{P}^{(0)}=& -\hat{W} -2\partial_\phi \hat{Y}+\partial_\phi \mathcal{P}^{(0)} \hat{Y} \, , \\ \delta_{\xi}{\Upsilon}=& - \hat{T} \partial_\phi \mathcal{P}^{(0)}- 2 \partial_\phi \hat{T} +\hat{Y} \partial_\phi \Upsilon +2 \Upsilon \partial_\phi \hat{Y}+ \Omega \partial_\phi \hat{W} \, , \end{align} \end{subequations} for $s=0$. The algebra of the charges in the new basis is hence \begin{equation}\label{Charge-algebra} \begin{split} \hspace*{-1cm} \{ Q^{(s)}(\xi_1),Q^{(s)}(\xi_2)\}&= Q^{(s)}(\xi_{12}) \\ &+ \frac{1}{16 \pi G}\, \delta_{s,0} \int_{0}^{2\pi} \d \phi \left[ (\hat{W}_1 \hat{T}_2 - \hat{W}_2 \hat{T}_1 ) -2 ( \hat{Y}_1 \partial_\phi \hat{T}_2 - \hat{Y}_2 \partial_\phi \hat{T}_1) \right]\, , \end{split} \end{equation} which is the algebra of the symmetry generators up to the central terms appearing in {the} $s=0$ case.
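The Jacobi identity of this charge algebra can also be checked at the level of its Fourier-mode structure constants. For generic $s$ (dropping the $s=0$ central term) these read $[\Upsilon_m,\Upsilon_n]=(m-n)\Upsilon_{m+n}$, $[\Upsilon_m,\Omega_n]=-n\,\Omega_{m+n}$, $[\Upsilon_m,\mathcal{P}^{(s)}_n]=\big((2s-1)m-n\big)\mathcal{P}^{(s)}_{m+n}$ and $[\Omega_m,\mathcal{P}^{(s)}_n]=-is\,\mathcal{P}^{(s)}_{m+n}$, with all other brackets vanishing. A minimal numerical sketch (our own illustrative code, with an arbitrary sample value of $s$):

```python
import itertools

s = 1.5  # illustrative generic value; any s gives the same result

def bracket(a, b):
    """[.,.] on basis elements ('O'|'P'|'U', m) ~ (Omega_m, P_m, Upsilon_m).
    Central terms of the s=0 case are dropped; -i is carried as -1j."""
    (x, m), (y, n) = a, b
    if x == 'U' and y == 'U':
        return {('U', m + n): m - n}
    if x == 'U' and y == 'O':
        return {('O', m + n): -n}
    if x == 'U' and y == 'P':
        return {('P', m + n): (2*s - 1)*m - n}
    if x == 'O' and y == 'P':
        return {('P', m + n): -1j*s}
    if y == 'U' or (x, y) == ('P', 'O'):           # antisymmetry
        return {k: -v for k, v in bracket(b, a).items()}
    return {}                                      # [O,O] = [P,P] = 0

def jacobiator(a, b, c):
    """[[a,b],c] + [[b,c],a] + [[c,a],b], collected on basis elements."""
    total = {}
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        for e1, c1 in bracket(p, q).items():
            for e2, c2 in bracket(e1, r).items():
                total[e2] = total.get(e2, 0) + c1*c2
    return {k: v for k, v in total.items() if v != 0}

basis = [(t, m) for t in 'OPU' for m in (-2, 1, 3)]
assert all(not jacobiator(a, b, c)
           for a, b, c in itertools.product(basis, repeat=3))
print("Jacobi identity holds")
```

All $729$ triples built from the sample modes give a vanishing Jacobiator, as expected for a Lie algebra.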
{The charges, $\Omega,\mathcal P^{(s)},\Upsilon$, depend on the coordinates $(v,\phi)$ and their algebra is \begin{subequations}\label{3d-charge-algebra} \begin{align} & \{ \Omega(v,\phi), \Omega (v,\phi') \} =0 \, ,\label{3d,a'}\\ & \{\mathcal{P}^{(r)}(v,\phi), \mathcal{P}^{(s)}(v,\phi') \} =0 \, , \label{3d,b'}\\ & \{\Omega(v,\phi), \mathcal{P}^{(s)}(v,\phi') \} = 16\pi G \left( - s \mathcal{P}^{(s)}(v,\phi)+ \delta_{s,0}\right) \, \delta(\phi-\phi') \, , \label{3d,c'}\\ & \{\Upsilon(v,\phi), \Upsilon(v,\phi') \} = 16\pi G\left(\Upsilon(v,\phi') \partial_\phi -\Upsilon(v,\phi) \partial_{\phi'} \right) \delta(\phi-\phi') \, , \label{3d,d'}\\ & \{\Upsilon(v,\phi), \Omega(v,\phi') \}= -16 \pi G\, \Omega(v,\phi) \,\partial_{\phi'}\, \delta(\phi-\phi') \, ,\label{3d,e'}\\ & \{\Upsilon(v,\phi), \mathcal{P}^{(s)}(v,\phi') \}= 16\pi G\left( - \mathcal{P}^{(s)}(v,\phi)\partial_{\phi'} + (2s-1) \, \mathcal{P}^{(s)}(v,\phi')\,\partial_\phi +2\delta_{s,0}\partial_{\phi'}\right) \delta(\phi-\phi') \, .\label{3d,f'} \end{align} \end{subequations}} If we Fourier expand the charges \begin{equation} \begin{split} \mathcal{P}^{(s)}(v,\phi) :=8 G\,\sum_n \mathcal{P}^{(s)}_n(v) e^{{-}in\phi}&,\qquad \Omega(v,\phi) :=8 G\,\sum_n \Omega_n(v) e^{{-}in\phi},\\ \Upsilon(v,\phi) :=&8 G\,\sum_n \Upsilon_n(v) e^{{-}in\phi}, \end{split} \end{equation} and upon quantisation $i\{,\}\to [ ,]$, we have \begin{subequations}\label{3d-charge-algebra''} \begin{align} & [\Omega_m(v), \Omega_n(v) ] =0 \, , \\ & [\mathcal{P}^{(r)}_{m}(v), \mathcal{P}^{(s)}_{n}(v) ] =0 \, , \\ & [\Omega_{m}(v), \mathcal{P}^{(s)}_{n}(v) ] =-i\,s\, \mathcal{P}^{(s)}_{m+n}(v)+ \frac{i}{8 G}\, \delta_{s,0} \,\delta_{m+n,0} \, , \\ & [\Upsilon_{m}(v), \Upsilon_{n}(v) ] =(m-n)\, \Upsilon_{m+n}(v)\, \, , \\ & [\Upsilon_{m}(v), \Omega_{n}(v) ]= -n \, \Omega_{m+n}(v)\, ,\\ & [\Upsilon_{m}(v), \mathcal{P}^{(s)}_{n}(v)]= \left((2s-1)\, m -n\right)\, \mathcal{P}^{(s)}_{m+n}(v)+ \frac{n}{4 G} \,\delta_{s,0} \,\delta_{m+n,0} \, 
.\label{Y-E-algebra} \end{align} \end{subequations} While the charges are in general $v$ dependent, their algebra is not: the algebra takes the same form for all $v$. We close this part with some comments: \begin{itemize} \item The algebra \eqref{3d-charge-algebra''} consists of a Witt algebra, spanned by $Y$, in semidirect sum with the ``supertranslation'' $T$ (of conformal weight $2s$) and the current $W$ (of conformal weight 1). The two supertranslations commute with each other for $s=0$. \item The form of the metric near the $r=0$ null surface, \eqref{3d-NH-metric} and \eqref{r-expansion-3d}, may be viewed as a near horizon metric in which $\Omega$, the area density of the bifurcation surface (a circle in the $3d$ case), is the ``entropy density''. Note that $\Omega$ is the charge associated with $\hat{W}$ \eqref{3d-integrable-charge-variation}, which in our original geometry involves a scaling in the $r$ direction accompanied by a field-dependent diffeomorphism in the $v$ direction, \emph{cf.} \eqref{KV-3D}, \eqref{hat-generators-3d}. \item Our results for the NBS generating vectors \eqref{hat-generators-3d} and the associated charges $\Omega, \Upsilon, \mathcal{P}^{(s)}$ and their algebra \eqref{charges-3d} are all independent of the cosmological constant. $\Lambda$ appears only in one of our equations of motion, \eqref{phi-phi}, and does not enter our charge analysis in any crucial way. Hence, our results for charges and symmetries are valid for the AdS$_3$, $3d$ flat space and dS$_3$ cases. This is physically expected because we are considering an expansion around {the} null surface $r=0$, and our analysis is local and independent of the asymptotic and global properties of the solution. \item There is an interesting subalgebra of \eqref{3d-charge-algebra} spanned by $\Omega_m$ and $\Upsilon_m$, which is a semidirect sum of the Witt algebra generated by $\Upsilon_m$ with an Abelian current $\Omega_m$.
In the terminology used in \cite{Parsa:2018kys}, it is the $W(0,0)$ algebra. This algebra was found as the near horizon symmetry algebra of $3d$ black holes \cite{Donnay:2015abr} (see also \cite{Grumiller:2019fmp}). But one should note that in our case the charges have arbitrary $v$ dependence, while those of \cite{Donnay:2015abr, Grumiller:2019fmp} are $v$ independent. Moreover, here $\Omega_m$ is associated with the Weyl scaling on the null surface $r=0$, whereas in \cite{Donnay:2015abr} it was associated with supertranslations on the given null surface. However, in both cases, the zero-mode charges $\Omega_0$ and $\Upsilon_0$ are respectively proportional to {the} entropy and angular momentum of the black holes whose horizon is taken to be the null surface we are expanding about. \item The algebra \eqref{3d-charge-algebra} has another subalgebra spanned by $\mathcal{P}^{(s)}_{m}(v)$ and $\Upsilon_{m}(v)$. Modulo the arbitrary $v$ dependence of the charges, it coincides with $W(0,-2s-1)$ \cite{gao2011low,Parsa:2018kys}. This algebra is an algebraic deformation of BMS$_3$ \cite{Parsa:2018kys}, where $\mathcal{P}^{(s)}_{m}(v)$ describes spin-$2s$ supertranslations. \item For the $s=0$ case, as \eqref{3d,a'}, \eqref{3d,b'} and \eqref{3d,c'} show, $\Omega(v,\phi)$ and $\mathcal{P}^{(0)}(v,\phi)$ form a Heisenberg algebra and one can treat $\mathcal{P}^{(0)}(v,\phi)$ as the conjugate momentum of the field $\Omega(v,\phi)$; these commutation relations, upon the usual replacement of Poisson brackets by commutators, $i\,\{\,, \, \} \to [\, , \,]$, may be viewed as the ``equal time'' (equal $v$) canonical commutation relations of a two-dimensional field theory defined on {the $r=0$ surface}. \item The algebra for the generic $s$ case can be constructed using the $s=0$ algebra through \begin{equation}\label{G-s-01} \mathcal{P}^{(s)}= -\frac{1}{s} \exp{\left(-s \,\mathcal{P}^{(0)}\right)}\,.
\end{equation} \item The structure constants of the algebra \eqref{3d-charge-algebra''} and its $2d$ counterpart \eqref{2d-Phi-Xi-algebra} are both $v$ independent, while the charges are $v$ dependent. We expect this to be the case for all integrable Lie-algebras related to these algebras by a change of basis (see the discussion in the next section). One should, however, note that this is not the case for the algebra of the original symmetry generators (before coming to the integrable basis), \eqref{3d-NBS-KV-algebra} and \eqref{W12-T12-Y12} for the $3d$ case and \eqref{2d-algebra} for the $2d$ case; these involve derivatives w.r.t. $v$. Therefore, the algebra of the integrable part of the charges obtained through the modified bracket method (see appendix \ref{BT-MB-Appendix}) would also have $v$ derivatives. \end{itemize} \section{Change of Basis for Integrable Charges }\label{sec:changebasis} In the previous sections, we discussed a specific {field-dependent} change of basis {that} rendered non-integrable charges integrable, yielding a Heisenberg-type algebra among the integrable charges {in} the new basis. {Given} these specific $2d$ and $3d$ examples, one is led to two questions: \begin{enumerate} \item Is it possible to always make a set of given non-integrable charges integrable by a {field-dependent} change of basis? If not, when does the integrable basis exist? \item Once in an integrable basis, can we still make {a} further change of basis and remain with integrable charges? \end{enumerate} In this work, we mainly restrict ourselves to the $2d$ and $3d$ setups we have explored in the previous two sections, for which (by construction) the answer to the first question is affirmative. While the full analysis of the first question is postponed to an upcoming publication \cite{progress-2}, we will briefly discuss it in the last section \ref{sec:conclusion}. In this section, we focus on the second question.
Assuming we have a basis in which the charges are integrable, in subsection \ref{sec:4.1} we present a general formulation of change of basis among integrable charges. Then in the next two subsections, we apply this general formulation to the $2d$ and $3d$ cases. \subsection{Change of basis, general formulation}\label{sec:4.1} Suppose that we have a set of integrable surface charges generated by {the} symmetry generators $\mu^i$, \begin{equation}\label{charge-variation-generic} \delta Q=\int \mu^i \delta Q_i\, \d{}^{d-2}x\, , \qquad i=1,2,\cdots, N. \end{equation} Integrability of $Q_i$ means that $\mu^i$ are field-independent functions,\footnote{As our $2d, 3d$ examples show (see also \cite{Adami:2020amw} for $4d$ examples), while the integral \eqref{charge-variation-generic} is over a codimension two surface, constant $v$ slices on $r=0$ in our setup, $\mu^i$ are in general functions over a codimension one surface.} that is, $\delta \mu^i / \delta Q_j=0$. We adopt a thermodynamical terminology in which $\mu^i$ are referred to as chemical potentials. Thus the charge variation is integrable on phase space, \begin{equation} Q=\int \mu^i Q_i\, \d{}^{d-2}x \, . \end{equation} Let us now consider a generic change of basis of the form \begin{equation}\label{change-of-basis} \tilde Q_i=\tilde Q_i[Q_j, \partial^n Q_k] \, , \end{equation} which is a functional of $Q_{i}$ and their derivatives. {The functional $\tilde Q_i$ may be restricted by some physical conditions like being well-defined and real, e.g. excluding $\tilde Q_i=\frac1{Q_i}$ or $\tilde Q_i=\sqrt{Q_i}$ when charges can be zero or negative, but is otherwise generic. It even need not be one-to-one and may reduce the number of charges, restricting us to a subspace in the space of solutions over which the charges are defined. However, here for simplicity we assume $ \tilde Q_i$ to be one-to-one.
Of course one can make further restrictions to desired subspaces after the change of basis.} We can now impose the condition that the charges are integrable in the new basis, as we have already implicitly assumed in \eqref{change-of-basis}. In the new basis, the charge variation can be written as \begin{equation}\label{charge-variation-new} \delta Q=\int \tilde{\mu}^i \delta \tilde{Q}_i\, \d{}^{d-2}x\, , \end{equation} where $\tilde\mu^i$ are the chemical potentials in the new basis and $\delta \tilde{\mu}^i / \delta \tilde{Q}_j=0$. Equating \eqref{charge-variation-generic} and \eqref{charge-variation-new}, one can read the relation between the chemical potentials in the two bases, \begin{equation}\label{chemical-pot-mapping} \tilde \mu^i=\frac{\delta Q_j}{\delta \tilde Q_i}\ \mu^j \, , \end{equation} where $\frac{\delta Q_j}{\delta \tilde Q_i}$ denotes {the} variation of $Q_j$ w.r.t. $\tilde Q_i$ and we assume this variation exists and is well-defined. We will show below that $\frac{\delta Q_j}{\delta \tilde Q_i}$ can in general be an operator involving spatial derivatives along the $d-2$ directions, acting on $\mu^j$. In essence, one may start with an arbitrary $\tilde Q_i=\tilde Q_i[Q_j]$ and the integrability condition fixes $\mu^i$ in terms of $\tilde{\mu}^i$. As we see, the two sets of chemical potentials are related linearly with field-dependent coefficients. Note also that the charge variation $\delta Q$ is the same in the two bases (imposed through our integrability requirement). The integrated charge $Q$ is however \emph{not} necessarily the same in the two bases; they are only equal iff $\tilde Q_i$ are linear combinations of $Q_i$. The above conditions are a bit abstract and formal.
One can explore them further by introducing \begin{equation}\label{Ch-den-var} \delta \tilde Q_i := \Pi_{i}{}^{j} \delta Q_j \, , \end{equation} with \begin{equation} \Pi_{i}{}^{j}= \sum_{p=0} (\Pi_{i}{}^{j}){}^{A_1\cdots A_p} [Q_i] \, \partial_{A_1\cdots A_p} \, , \qquad (\Pi_{i}{}^{j}){}^{A_1\cdots A_p} [Q_i]:= \frac{\partial \tilde{Q}_i}{\partial (\partial_{A_1\cdots A_p} Q_j)} \, , \end{equation} where $A_l=1,\cdots, d-2$ run over the directions along the codimension two spacelike surface the charges are integrated over, in our case constant $v$ slices on the $r=0$ null surface.\footnote{Note that the charges $Q_i$ and $(\Pi_{i}{}^{j}){}^{A_1\cdots A_p}$ can have general $v$ dependence.} Substituting \eqref{Ch-den-var} into \eqref{charge-variation-new} and integrating by parts, one can make \eqref{chemical-pot-mapping} more explicit: \begin{equation}\label{mu-tilde-mu} \mu^i = \sum_{p=0} (-1)^p \partial_{A_1\cdots A_p} \left[ (\Pi_{j}{}^{i}){}^{A_1\cdots A_p} \tilde{\mu}^j\right]. \end{equation} \paragraph{Algebra of charges under change of basis.} Let us suppose that the Poisson bracket of the charges in the original basis is of the form of a Lie-algebra \begin{equation}\label{Q-Lie-algebra} \{ Q_i (v,{x}^A), Q_j(v,{y}^B)\}= f_{ij}{}^k({x}^A, {y}^B;v) Q_k (v,{x}^C) -f_{ji}{}^k({y}^B, {x}^A;v) Q_k (v,{y}^C)+ c_{ij}({x}^A, {y}^B;v), \end{equation} where $c_{ji}({y}^B, {x}^A;v)=-c_{ij}({x}^A, {y}^B;v)$ and $f_{ij}{}^k$ are $Q_i$ independent, while both can have $v,x^A$ {dependence} and can involve derivatives w.r.t. $x^A$ or $y^B$. To ease the notation, whenever there is no confusion we drop $v, x^A$ or the derivative dependence. We assume the charge bracket \eqref{Q-Lie-algebra} satisfies the Jacobi identity, $\{ \{ Q_i, Q_j\}, Q_k\}+\text{cyclic permutation}=0$.
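To see the mechanics of \eqref{mu-tilde-mu} in a minimal setting, consider a single charge with the toy derivative-dependent redefinition $\tilde Q = Q + \partial_\phi Q$. Then \eqref{mu-tilde-mu} gives $\mu = \tilde\mu - \partial_\phi\tilde\mu$, and the two integrands differ by a total $\phi$-derivative, which drops out upon integration over the circle. A quick sympy check (our own illustrative sketch; the function names are hypothetical):

```python
import sympy as sp

phi = sp.symbols('phi')
dQ = sp.Function('dQ')(phi)      # the variation delta Q of the charge density
mu_t = sp.Function('mu_t')(phi)  # chemical potential tilde(mu) in the new basis

dQ_t = dQ + sp.diff(dQ, phi)     # delta Qtilde for Qtilde = Q + dQ/dphi
mu = mu_t - sp.diff(mu_t, phi)   # eq. (mu-tilde-mu) for Pi = 1 + d/dphi

# the difference of the two integrands is a total derivative
diff_of_integrands = sp.expand(mu_t*dQ_t - mu*dQ)
print(sp.simplify(diff_of_integrands - sp.diff(mu_t*dQ, phi)))  # -> 0
```

The same integration-by-parts pattern, with one sign per derivative, produces the alternating $(-1)^p$ in \eqref{mu-tilde-mu}.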
After a change of basis to $\tilde Q_i$, the algebra of the new charges is hence \begin{equation}\label{tilde-Q-algebra} \{ \tilde Q_i, \tilde Q_j\}= \Pi_i{}^k\Pi_j{}^l\ \{ Q_k, Q_l\} \, . \end{equation} The right-hand side, as we see, is not necessarily of the form $\tilde f_{ij}{}^k \tilde Q_k+ c_{ij}$ for some $\tilde Q$ independent structure constants $\tilde f_{ij}{}^k$ and central charges $c_{ij}$. Remember that $\Pi_i{}^j$ are in general operators involving spacetime derivatives acting on $Q_m$, as well as being matrices. One may ask if $\tilde Q_i$ form an algebra, i.e. if their bracket satisfies the Jacobi identity. The answer is not always affirmative because \begin{equation}\label{Jacobi-tilde-Q} \begin{split} \{ \{ \tilde Q_i, \tilde Q_j\}, \tilde Q_k\}+\text{cyclic perm.} &= \Pi_i{}^l\Pi_j{}^m \Pi_k{}^n\ \left(\{\{ Q_l, Q_m\}, Q_n\}+\text{cyclic perm.}\right)\\ &+ \Pi_k{}^n \{\Pi_i{}^l\Pi_j{}^m, Q_n\}\{Q_l, Q_m\}+\text{cyclic perm.} \end{split} \end{equation} While the first line on the right-hand side vanishes, the second line does not in general. This is of course expected, as the general change of basis takes us from the algebra to the enveloping algebra of the charges, which does not close onto an algebra with the same dimension. Closure of the algebra in the new basis therefore severely restricts the possible changes of basis. Given the above discussions, three questions are then in order: \begin{itemize} \item[I.] Is it possible to find an integrable basis where {the} charge algebra is a Lie algebra, possibly up to a central extension? In the $2d$ and $3d$ examples, we have shown the answer is affirmative: the Heisenberg type algebras are indeed centrally extended Lie-algebras. We will argue in the next section that in general there always exists an integrable Lie-algebra basis whenever the charges are integrable. \item[II.] Are there other possible changes of bases that take an integrable charge algebra to other algebras?
That is, are there possible $\Pi_i{}^k$ for which the second line of \eqref{Jacobi-tilde-Q} vanishes? \item[III.] If there are other integrable charge algebras, i.e. if the answer to the previous question is affirmative, are there Lie-algebras among them? Is the integrable Lie-algebra basis unique? The answer is no. See for example explicit constructions in \cite{Parsa:2018kys, Safari:2019zmc, Grumiller:2019fmp}. One can argue, based on the experience and analysis of these references, that if the integrable Lie-algebra we start with is rigid/stable then this basis is unique. Otherwise, if the algebra is not stable and admits non-trivial deformations, all the algebraic deformations of the algebra in the integrable Lie-algebra basis will also be integrable Lie-algebras. \end{itemize} In what follows we do not intend to explore the above questions in full generality: while our analysis for the $2d$ case is exhaustive, for the $3d$ case we analyse some examples of charges and algebras. \subsection{\emph{2d} gravity -- generic case} We have already established in section \ref{sec:2dgrav} that there exist different integrable bases in which the charge algebra is a Lie-algebra. These bases are labeled by a parameter $s$ \eqref{2d-Phi-Xi-algebra}. So, we already see that for this example there exists more than one integrable Lie-algebra basis. One may, however, still ask if our $s$-family of algebras is unique or whether there are other integrable Lie-algebras which could be reached through the general change of basis discussed above. To this end, let us take the $s=0$ case as the starting point. This case is special in our $s$ family because the charge algebra is just {the} Heisenberg algebra \begin{equation}\label{2d-Heisenberg} \{ Q_i(v), Q_j(v) \}= 16\pi G\,\varepsilon_{ij},\qquad \forall v, \end{equation} with charges $Q_i=\{ \Phi_0, \mathcal{P}^{(0)} \}$ and symmetry generators $\mu^i =\{ \hat{W},\hat{T} \}$.
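Before turning to the general analysis, it is instructive to check a sample change of basis directly: keeping $\tilde Q_1=Q_1$ and taking $\tilde Q_2=-\frac{1}{s}e^{-s Q_2}$, as in \eqref{G-s-01}, the Heisenberg bracket \eqref{2d-Heisenberg} yields $\{\tilde Q_1,\tilde Q_2\}=-16\pi G\, s\,\tilde Q_2$, i.e. the $s$-family bracket with no central term. A quick sympy sketch (our own illustrative code):

```python
import sympy as sp

q1, q2, s, G = sp.symbols('q1 q2 s G', positive=True)
kappa = 16*sp.pi*G               # {Q_1, Q_2} = 16 pi G, eq. (2d-Heisenberg)

def pb(f, g):
    """Poisson bracket induced by the Heisenberg pair {q1, q2} = kappa."""
    return kappa*(sp.diff(f, q1)*sp.diff(g, q2)
                  - sp.diff(f, q2)*sp.diff(g, q1))

Qt1 = q1                         # keep the first charge
Qt2 = -sp.exp(-s*q2)/s           # cf. P^(s) = -(1/s) exp(-s P^(0))

# closes onto -16 pi G s Qt2: the s-family algebra with gamma = 0
print(sp.simplify(pb(Qt1, Qt2) + kappa*s*Qt2))  # -> 0
```

Here $\det\Pi = e^{-s q_2} = -s\,\tilde Q_2$, which is linear in the new charge, in line with the general restriction derived below.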
Suppose that the new basis $\tilde{Q}_i$ are non-singular functions of the original ones. Using properties of Poisson brackets, one can write the Poisson bracket of the charges in the new basis as \begin{equation}\label{new-basis-no-derivative-2d'} \{ \tilde{Q}_i(v), \tilde{Q}_j(v) \}= 16\pi G\, \Pi_i{}^{k} \,\varepsilon_{kl} \,\Pi_j{}^{l} \, , \end{equation} where $\Pi_i{}^{j}{} =\frac{\partial \tilde{Q}_i}{\partial Q_j}$. One can treat $\Pi_i{}^j$ as a $2\times 2$ matrix and hence $\Pi_i{}^{k} \,\varepsilon_{kl} \,\Pi_j{}^{l}= \varepsilon_{ij} \det{(\Pi_k{}^l)}$. Note that in the $2d$ case $\Pi_i{}^j$ do not involve derivatives and are just matrices (rather than being differential operator-valued matrices). One can readily see that the closure of the algebra \eqref{new-basis-no-derivative-2d'} (the Jacobi identity) does not impose any condition on $\Pi_i{}^j$. However, {the} condition of having a Lie-algebra restricts it as \begin{equation}\label{Pi-01'} \det{(\Pi_i{}^j)}= \gamma +\beta^i \tilde{Q}_i\, , \end{equation} where $\gamma$ and $\beta^i$ are some constants. The charge algebra \eqref{new-basis-no-derivative-2d'} with \eqref{Pi-01'} can also be written as \begin{equation}\label{2d-changed-basis} \{ \tilde{Q}_i(v), \tilde{Q}_j(v) \}= 16\pi G\, (\varepsilon_{ij} \gamma + \tilde{Q}_i \alpha_j-\tilde{Q}_j \alpha_i) \, , \end{equation} where $\alpha_i=\varepsilon_{ij} \beta^j$. The special case of $\alpha_i=0$ corresponds to generic canonical transformations on the two-dimensional phase space of $\Phi_0, \mathcal{P}^{(0)}$. So, let us focus on the less trivial cases where $\alpha_i\neq 0$. Without loss of generality one can always rotate the basis in the $2d$ $\Phi_0, \mathcal{P}^{(0)}$ plane such that $\alpha_2=0, \alpha_1:= s\neq 0$. Then, \eqref{2d-changed-basis} takes the form \begin{equation}\label{2d-changed-basis-2} i \{ \tilde{Q}_1(v), \tilde{Q}_2(v) \}= 16\pi G\, {i} ( \gamma - s\tilde{Q}_2 ) \, .
\end{equation} For $\gamma=0$ this algebra reduces to \eqref{2d-Phi-Xi-algebra}. So, as we see, in the $2d$ case our earlier construction already covers the most general integrable basis up to a central $\gamma$ term. That is, our $s$ family of algebras \eqref{2d-Phi-Xi-algebra}, up to the central term $\gamma$, exhausts the Lie-algebras one can achieve by a change of basis. \subsection{\emph{3d} gravity -- general discussion and fundamental basis for NBS algebra}\label{Change-basis-3d} In $3d$ gravity, in our integrable basis we have three charges $\Omega(v,\phi), \mathcal{P}^{(s)} (v,\phi), \Upsilon(v,\phi)$, which we may collectively call $Q_i$ and which satisfy the algebra \eqref{3d-charge-algebra}. The most general change of basis may then be parameterised through $\Pi_{i}{}^{j}$ \eqref{Ch-den-var}, which is a 3$\times$3 matrix operator which admits a series expansion in derivatives w.r.t. the $\phi$ direction, \begin{equation} \Pi_{i}{}^{j}= \sum_{p=0} (\Pi_{i}{}^{j})_p [Q_k] \, \partial_\phi^p \, , \qquad (\Pi_{i}{}^{j})_p [Q_k]= \frac{\partial \tilde{Q}_i}{\partial (\partial_\phi^p Q_j)} \, , \end{equation} where $(\Pi_{i}{}^{j})_p [Q_k]$ are generic $v,\phi$ dependent coefficients. Eq.\eqref{mu-tilde-mu} can hence be written as \begin{equation} \mu^i = \sum_{p=0} (- \partial_\phi )^p \left( (\Pi_{j}{}^{i})_p\ \tilde{\mu}^j\right) \, . \end{equation} The analysis here is more involved compared to the $2d$ case because besides the matrices we are also dealing with derivatives in $\phi$; moreover, the structure constants of the $3d$ charge algebra \eqref{3d-charge-algebra} are not as simple as those of a Heisenberg algebra. To make the analysis of change of basis tractable, we focus on the simplest member of the one-parameter family of the algebras \eqref{3d-charge-algebra}, the $s=0$ case. In addition, we make another change of basis to bring the algebra to the form of {the} direct sum of two Lie-algebras, which we dub the ``fundamental NBS algebra''.
\subsubsection{\emph{3d} NBS algebra in ``fundamental basis''} Let us start with the $s=0$ algebra with generators $\Omega, \mathcal{P}^{(0)}, \Upsilon$, which form a Heisenberg algebra {in} semidirect sum with the Witt algebra. Consider the following change of basis: keep $\Omega, \mathcal{P}^{(0)}$ and redefine $\Upsilon$ as \begin{equation}\label{Bar-Upsilon} \Upsilon=-2\partial_{\phi}\Omega-\Omega\partial_{\phi}\mathcal{P}^{(0)}+16\pi G\,{\cal S}\, . \end{equation} If we view $\Upsilon$ as the generator of ``superrotations'' (as we have in the BMS$_3$ algebra), one can decompose it into an ``orbital'' part (the external part) and the ``spin'' part ${\cal S}$ (the internal part). To keep the basis integrable, one should then transform the symmetry generators, {i.e.} the chemical potentials, as \begin{equation} \begin{split} \hat{W}= & \tilde{W} -2 \partial_\phi \tilde{Y} +\tilde{Y} \partial_\phi \mathcal{P}^{(0)} \, , \\ \hat{Y}= & \tilde{Y}\, ,\\ \hat{T}= & \tilde{T} - \partial_\phi (\Omega \tilde{Y})\, , \end{split} \end{equation} and assume $\tilde{W}, \tilde{Y}, \tilde{T}$ to be independent of the charges in the new basis {$\{\Omega, \mathcal{P}^{(0)}, {\cal S}\}$.} The transformation laws are \begin{equation}\label{Tr-Low-new} \delta_\xi \Omega= \tilde{T} \, , \qquad \delta_\xi \mathcal{P}^{(0)}= -\tilde{W} \, , \qquad \delta_\xi {\cal S} = \tilde{Y} \partial_\phi {\cal S}+2 {\cal S} \partial_\phi \tilde{Y} \, . \end{equation} Therefore, in this case, the Poisson brackets \eqref{3d,d'}, \eqref{3d,e'} and \eqref{3d,f'} can be replaced by \begin{equation}\begin{split} \{{\cal S}(v,\phi), \Omega(v,\phi')\}&=\{{\cal S}(v,\phi), \mathcal{P}^{(0)}(v,\phi')\}=0\\ \{{\cal S}(v,\phi), {\cal S}(v,\phi')\}&= \left({\cal S}(v,\phi')\,\partial _\phi-{\cal S}(v,\phi)\,\partial_{\phi'}\right) \delta (\phi-\phi')\,. \end{split} \end{equation} {In this basis, the charge algebra is Heisenberg $\oplus$ Diff($S^1)$, or Heisenberg $\oplus$ Witt.
Due to its very simple form and the fundamental role this algebra plays in the analysis of change of basis, we call it the fundamental NBS algebra. In terms of Fourier modes and after ``quantization'' $i\,\{\,, \, \} \to [\, , \,]$, it takes the form \begin{subequations} \begin{align} & [\Omega_m(v), \Omega_n(v) ] = [\mathcal{P}^{(0)}_{m}(v), \mathcal{P}^{(0)}_{n}(v) ] =0 \, , \label{s=0,b}\\ & [\Omega_{m}(v), \mathcal{P}^{(0)}_{n}(v) ] = \frac{i}{8 G}\,\delta_{m+n,0}\, , \label{s=-1,c}\\ & [{\cal S}_{m}(v), {\cal S}_{n}(v) ] =(m-n)\, {\cal S}_{m+n}(v)\, , \label{s=-1,d}\\ & [{\cal S}_{m}(v), \Omega_{n}(v) ]= [{\cal S}_{m}(v), \mathcal{P}^{(0)}_{n}(v) ]= 0 \, .\label{s=-1,f} \end{align} \end{subequations} The charges are explicitly integrable in the new basis and their algebra is clearly a Lie-algebra. \subsubsection{Further integrable Lie-algebras} \label{sec:4.2.2} The fundamental basis provides a suitable ground for exploring other changes of basis. Noting the direct sum structure of this algebra, we readily observe that one may obtain the two-parameter $(\gamma,s)$-family of algebras in which the algebra becomes a direct sum of \eqref{2d-changed-basis-2} and the Witt algebra. While one can try to exhaustively explore the changes of basis from the fundamental algebra which keep the algebra an integrable Lie-algebra, we find it more useful and illuminating to discuss two interesting examples. We will only comment on the issue of exhaustiveness at the end of the section. To facilitate the analysis we first introduce the two currents \begin{equation}\label{def-j+-} J^\pm := \frac{1}{16\pi G} \left( \Omega \mp 2 G k \, \partial_\phi \mathcal{P}^{(0)} \right), \end{equation} with some constant $k$. The relation among the chemical potentials is \begin{equation} \tilde{W}= \epsilon^+ + \epsilon^- \, , \qquad \tilde{T}= 2 G k\, \partial_\phi \left( \epsilon^+ - \epsilon^- \right) \, .
\end{equation} Therefore, in this basis the charge variation is \begin{equation} \delta Q_\xi = \int_0^{2\pi} \d \phi \left( \epsilon^+ \delta J^+ + \epsilon^- \delta J^- +\tilde Y \delta \mathcal{S} \right) \, . \end{equation} The transformation laws \eqref{Tr-Low-new} now yield \begin{equation} \delta_\xi J^\pm = \pm \frac{k}{4 \pi} \, \partial_\phi \epsilon^\pm \,, \end{equation} and hence the charge algebra is given by \begin{equation}\label{Currents+Witt} \begin{split} \{ J^\pm (v,\phi), J^\pm (v,\phi') \}&= \pm \frac{k}{4\pi} \, \partial_{\phi} \delta (\phi-\phi'), \qquad \{ J^\pm (v,\phi), J^\mp (v,\phi') \}=0. \end{split} \end{equation} This current algebra is very similar to the one obtained as near horizon symmetries of BTZ black holes or $3d$ flat space cosmologies \cite{Afshar:2016kjj, Afshar:2016wfy}. There are, however, two crucial differences: \begin{itemize} \item[1] Our charges, besides their $\phi$ dependence, are also $v$ dependent. The algebras there may be viewed as a constant-$v$ slice of ours. \item[2] Besides $J^\pm$, our NBS algebra has the ${\cal S}$ part, generating a Witt algebra, which is absent in those analyses. \end{itemize} The first example is the Vir $\oplus$ Vir $\oplus$ Witt algebra. Given the $J^\pm, {\cal S}$ charges one can define a new basis as \begin{equation}\label{L+-} L^\pm (v,\phi):= \frac{2\pi}{k} \left[ J^\pm( v,\pm \phi)\right]^2 +\beta_\pm \partial_\phi J^\pm(v,\pm \phi) \end{equation} for which we have \begin{equation} \epsilon^\pm(v,\phi) = \left(\frac{4\pi}{k} J^\pm(v,\phi) \mp \beta_\pm \partial_\phi \right) \chi^\pm( v,\pm \phi) \, , \end{equation} where $\chi^\pm$ and $\tilde{Y}$ are the chemical potentials in the new basis. The charge variation is \begin{equation} \delta Q_{\xi}= \int_0^{2\pi} \d \phi \, (\chi^{+} \delta L^+ +\chi^{-} \delta L^- +\tilde Y \delta \mathcal{S})\, .
\end{equation} The variations of the new generators $L^\pm$ are hence \begin{equation} \delta_\xi L^\pm = \chi^\pm \partial_\phi L^\pm +2 L^\pm \partial_\phi \chi^\pm - \frac{k}{4\pi} \beta_\pm^2\partial_\phi^3 \chi^\pm \, . \end{equation} The complete charge algebra in this basis is \begin{subequations} \begin{align} \{L^\pm (v,\phi), L^\pm (v,\phi')\}&= \left(L^\pm (v,\phi')\,\partial _\phi-L^\pm (v,\phi)\,\partial_{\phi'} + \frac{k}{4\pi} \beta_\pm^2 \partial_{\phi'}^3 \right) \delta (\phi-\phi')\, , \label{LpmLpm}\\ \{L^\pm (v,\phi), L^\mp (v,\phi')\}&=0 \, ,\qquad \{L^\pm (v,\phi), {\cal S} (v,\phi')\}= 0 \, , \label{Lpm-S}\\ \{ {\cal S}(v,\phi), {\cal S}(v,\phi')\}&= \left( {\cal S}(v,\phi')\,\partial _\phi- {\cal S}(v,\phi)\,\partial_{\phi'}\right) \delta (\phi-\phi')\, .\label{SS-3d} \end{align} \end{subequations} This is a direct sum of three algebras, and the central charges of the two Virasoro algebras, $c_\pm=6 k \beta_\pm^2$, are arbitrary undetermined numbers. The second example is the BMS$_3$ $\oplus$ Witt algebra. Consider the two currents in the off-diagonal basis, \begin{equation}\label{def-JK} J:= J^+ + J^-,\qquad K:= J^+- J^- \end{equation} and define \begin{equation}\label{L&M} L= \frac{2\pi}{k} J K + \beta\partial_\phi K+ \alpha\partial_\phi J\, , \hspace{0.7 cm} M= \frac{\pi}{k } J^2 +\beta\partial_\phi J \, . \end{equation} The charge variation is then \begin{equation} \delta Q_{\xi}= \int_0^{2\pi} \d \phi \, (\epsilon_{\text{\tiny{L}}} \delta L +\epsilon_{\text{\tiny{M}}} \delta M +\tilde Y \delta \mathcal{S})\, , \end{equation} with \begin{equation} \epsilon^\pm = \frac{2\pi}{k} \left[ (K \pm J ) \epsilon_{\text{\tiny{L}}} + \epsilon_{\text{\tiny{M}}} J\right] - \beta \partial_\phi (\epsilon_{\text{\tiny{M}}} \pm \epsilon_{\text{\tiny{L}}})-\alpha \partial_{\phi}\epsilon_{\text{\tiny{L}}}\, .
\end{equation} The fields transform as \begin{equation} \begin{split} \delta_\xi L=&\partial_\phi L \, \epsilon_{\text{\tiny{L}}} + 2 L \partial_\phi \epsilon_{\text{\tiny{L}}} +\partial_\phi M \, \epsilon_{\text{\tiny{M}}} + 2 M \partial_\phi \epsilon_{\text{\tiny{M}}} - \frac{k}{2\pi} \beta^2 \partial_\phi^3 \epsilon_{\text{\tiny{M}}} - \frac{k}{\pi} \alpha \beta \partial_\phi^3 \epsilon_{\text{\tiny{L}}}\, , \\ \delta_\xi M=& \partial_\phi M \, \epsilon_{\text{\tiny{L}}} + 2 M \partial_\phi \epsilon_{\text{\tiny{L}}} - \frac{k}{2\pi} \beta^2 \partial_\phi^3 \epsilon_{\text{\tiny{L}}} \, , \end{split} \end{equation} and the algebra is \begin{equation} \begin{split} \{M (v,\phi), M (v,\phi')\}=&0 \, , \\ \{L (v,\phi), L (v,\phi')\}=& \left(L (v,\phi')\,\partial _\phi-L (v,\phi)\,\partial_{\phi'}+ \frac{k}{\pi} \alpha\beta \partial_{\phi'}^3 \right) \delta (\phi-\phi')\, , \\ \{L (v,\phi), M (v,\phi')\}=& \left(M (v,\phi')\,\partial _\phi-M (v,\phi)\,\partial_{\phi'} + \frac{k}{2\pi} \beta^2 \partial_{\phi'}^3 \right) \delta (\phi-\phi')\, , \\ \end{split} \end{equation} which is a centrally extended BMS$_3$ algebra with central charges $c_{\text{\tiny{LM}}}=12k \beta^2, c_{\text{\tiny{LL}}}=24k \alpha\beta$ \cite{Barnich:2006av, Barnich:2011ct}. It is notable that the asymptotic symmetry algebra of $3d$ flat space over null infinity only realizes the $c_{\text{\tiny{LM}}}$ central charge \cite{Barnich:2006av, Barnich:2011ct}. In our construction this BMS$_3$ comes in direct sum with the Witt part generated by ${\cal S}$. These two examples are actually the only types of integrable Lie algebras that can be obtained from the Heisenberg sector of the fundamental basis without mixing the Heisenberg and Witt parts.\footnote{A possibility of mixing the two sectors is given in \ref{sec:integrabe-3d-basis} where the algebra is a semidirect sum of Heisenberg and the Witt algebra generated by $\Upsilon (v,\phi)$.} To see this, let us start with the two currents $J^\pm$ or $J, K$.
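Before turning to the closure argument, we record for reference the Fourier-mode form of the two example algebras, in our normalisation and up to the usual zero-mode shift $L_0 \to L_0 + c/24$ which turns $m^3$ into $m(m^2-1)$:
\begin{equation}
[L^\pm_m, L^\pm_n] = (m-n)\, L^\pm_{m+n} + \frac{c_\pm}{12}\, m^3\, \delta_{m+n,0}\, , \qquad c_\pm = 6 k \beta_\pm^2\, ,
\end{equation}
for the Vir $\oplus$ Vir example, and
\begin{equation}
\begin{split}
[L_m, L_n] &= (m-n)\, L_{m+n} + \frac{c_{\text{\tiny{LL}}}}{12}\, m^3\, \delta_{m+n,0}\, , \\
[L_m, M_n] &= (m-n)\, M_{m+n} + \frac{c_{\text{\tiny{LM}}}}{12}\, m^3\, \delta_{m+n,0}\, , \qquad [M_m, M_n]=0\, ,
\end{split}
\end{equation}
for the BMS$_3$ example; the Witt brackets \eqref{s=-1,d} of ${\cal S}$ are unchanged in both cases.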
If the charge redefinitions involve powers of the currents higher than two, the algebra does not close; instead we get something like $W_\infty$ algebras. Moreover, one can show that redefinitions involving more than one derivative of the currents do not close either. Therefore, expressions like \eqref{L+-} or \eqref{L&M}, which are of twisted-Sugawara type\footnote{We note that within the class of twisted-Sugawara constructions, of course, one can obtain a bigger family of algebras than Vir $\oplus$ Vir or BMS$_3$: one can get the whole family of centrally extended $W(a,b)$ algebras and, in general, all algebraic deformations of centrally extended BMS$_3$; see \cite{Parsa:2018kys} for more discussions.} \cite{Afshar:2016wfy}, are the only options. On the other hand, the Witt algebra of the fundamental basis cannot be deformed since it is stable/rigid \cite{Parsa:2018kys}. \section{Discussion and Outlook}\label{sec:conclusion} In this work, we studied ``asymptotic symmetries'' near a null boundary in two and three-dimensional gravity theories. This work is motivated by questions regarding black holes and is a continuation of a similar analysis in four-dimensional vacuum Einstein theory \cite{Adami:2020amw}. We analysed the problem in full generality and, in particular, did not impose any specific boundary conditions. We were hence able to realize the maximal boundary degrees of freedom (b.d.o.f.), i.e. two charges which are arbitrary functions of the light-cone time variable $v$ in the $2d$ case, and three charges with arbitrary dependence on the $v,\phi$ coordinates in the $3d$ case. Not imposing any boundary condition means that the theory which governs the b.d.o.f. is not specified and their $v$ dependence remains arbitrary. This is how our work differs from all previous works in the literature, except \cite{Adami:2020amw}.
In the literature, boundary conditions are typically specified by (1) the falloff behaviour of the fields (here, metric components in a specific coordinate system) and (2) a choice of the field dependence of the symmetry generators. As our analysis shows, these two are conceptually independent and should be discussed separately; the second part is often not duly emphasised in the literature. To achieve the ``maximal b.d.o.f.'' analysis here, we were forced not to impose restrictions on the falloff behaviour around the $r=0$ null surface; we only assumed smoothness. We analysed the role of field dependence through the change of basis discussed in section \ref{sec:changebasis}. As our analysis explicitly demonstrates, the field dependence does not necessarily influence the number of b.d.o.f. As our other important result, we established that in the $2d$ and $3d$ cases there exists a basis in which the charges are integrable. In other words, the non-integrability of the charges in the $2d$ and $3d$ cases may be removed by working in particular state/field-dependent bases. We discussed that the integrable basis is not unique. We wrote the charges in different bases and computed their algebra. In the $2d$ case we exhausted all such integrable Lie-algebra bases, and in the $3d$ case we presented arguments for what these exhaustive bases are expected to be. It would, of course, be interesting to provide a rigorous proof of the latter. Moreover, expanding upon the discussions and examples in \cite{Grumiller:2019fmp, Grumiller:2019ygj}, we elaborated on the general state/field-dependent change of basis which moves us within the family of integrable charges and also changes their algebra. We gave examples of how such a change of basis can lead to non-zero central charges in the charge algebra. That is, the existence of a central charge in our maximal b.d.o.f. analysis can be an artifact of the basis used to present the charges.
In the course of this paper, in the introduction and in section \ref{sec:changebasis}, we have posed some questions but discussed and analysed only a few of them. Here we would like to expand further on those and discuss future projects and new directions. \paragraph{Change of basis and field dependence of symmetry generators.} In our examples we explored two classes of change of basis: those which render a non-integrable charge integrable (\emph{cf.} discussions in sections \ref{sec:2dgrav} and \ref{sec:3dgrav}) and those which move us within the class of integrable charges, \emph{cf.} discussions in section \ref{sec:changebasis}. In both of these classes the changes of basis are generically field-dependent. We mentioned above that the choice of the field dependence of the symmetry generators is an essential part of fixing the boundary conditions, and in both of these classes a change of basis should be viewed as choosing different boundary conditions. As the thermodynamical terminology we have adopted indicates, a change of basis is conceptually analogous to a change of ensemble in thermodynamical systems. Each ensemble is specified by which quantities are held fixed while the associated conjugate thermodynamical charges are allowed to vary. {In a different viewpoint, our solution space may be viewed as a phase space in which the charges label the points and the symmetry generators move us around. Explicitly, given the set of charges $Q_{\xi_i}$ associated with the symmetry generators $\xi_i$, we have $\delta_{\xi_i} Q_{\xi_j}=Q_{[\xi_i,\xi_j]}+ \text{central terms}$, where $Q_{[\xi_i,\xi_j]}$ labels a different point in the phase space. A change of basis is then like a general coordinate transformation on this phase space (labelled by the charges).
This change of basis in general specifies how we move on the phase space; it changes the charge algebra.\footnote{{As a clarifying example, let us consider the phase space of a given mechanical system in the Hamiltonian formulation. Symplectomorphisms (``canonical transformations'') are a subset of possible changes of basis which keep some given structures, the canonical Poisson brackets, intact. One may consider more general coordinate transformations on this phase space which also change the basic Poisson brackets and take us to a non-Darboux basis.}}} We also remark that in both classes of change of basis discussed above, in this paper we restricted our analysis to changes of basis within the maximal b.d.o.f. setting. It is, however, possible to impose further restrictions on these maximal charges ($d$ functions on a $d-1$ surface, in the $d$ dimensional gravity setting), e.g. restricting them to $n$ ($n<d$) charges on $d-p$ ($p>1$) surfaces, together with our change of basis. We did not explore such possibilities in this work, but we expect to be able to impose all possible boundary conditions (in the sense discussed in the second paragraph of this section) starting from our maximal b.d.o.f. setting. \paragraph{Boundary conditions and the variational principle.} In our analysis, we only fixed a null surface to be sitting at $r=0$ and considered the most general expansion of the metric around this surface which is allowed by the field equations. We did not impose any boundary condition on the metric near the null surface. This is in contrast to the usual surface charge analysis where physically motivated boundary conditions are imposed. Imposing a variational principle, which may require the addition of appropriate boundary terms, has direct implications for the boundary conditions. In particular, it defines the boundary dynamics of the b.d.o.f. residing on the boundary. The latter would restrict the $v$ (time) dependence of our b.d.o.f.
This procedure, with specific boundary conditions imposed, was carried out in $2d$ JT gravity \cite{Grumiller:2015vaa} and in the AdS$_3$ \cite{Grumiller:2016pqb} and $3d$ flat \cite{Grumiller:2017sjh} cases. This hence relates our ``most general'' surface charges to those in these papers. As our analysis and discussions implicitly imply, the choice of boundary conditions and the charge basis are closely related to each other, and the information about both is encoded in the chemical potentials. In analogy with usual thermodynamics, the choice of a boundary condition and a basis is like the choice of an ensemble, which manifests itself through the chemical potentials, the variables conjugate to the (conserved) charges appearing in the first law. Determining the appropriate boundary theory at a null surface may give a better handle on the formulation of the membrane paradigm for black holes \cite{Thorne:1986iy, Parikh:1998mg}. It would be interesting to study the membrane paradigm with the addition of the surface degrees of freedom, and to examine how they may help with the identification of black hole microstates and the resolution of the information puzzle. See \cite{Grumiller:2018scv} for preliminary discussions of this idea. \paragraph{Modified bracket vs. integrable basis.} Non-integrable surface charges can generally appear in the covariant phase space analysis, and in the literature so far two methods have been proposed to deal with them. (1) The Wald-Zoupas method \cite{Wald:1999wa}, which prescribes adding a boundary term to the action to absorb the non-integrable (flux) part of the charge variation through the boundary. In this method a ``reference point'' in the phase space, for which we expect a vanishing flux, is needed, and the method may not be applicable to generic cases, in particular to the NBS analysis; see \cite{Adami:2020amw} for more discussions.
(2) The Barnich-Troessaert modified bracket method \cite{Barnich:2011mi}, which prescribes separating the charge variation into an integrable part and a flux. The ambiguity in this separation is then fixed by introducing a modified bracket such that the algebra of the integrable part of the charges matches the algebra of the symmetry generators, possibly up to a central extension; see appendix \ref{BT-MB-Appendix} for an analysis of the modified bracket method in the $2d$ and $3d$ cases. In this work, we made the observation that non-integrability could be a result of the choice of basis for the charges and the corresponding chemical potentials. If an integrable basis exists, we proved in section \ref{sec:changebasis} that such integrable bases are not unique. One may then ask whether such an integrable basis always exists. Our analysis \cite{progress-2} shows that the answer, in general, is no. In the particular case of $4d$ gravity one can show that, when there is a flux of propagating bulk d.o.f. (gravitons) through the boundary, i.e. Bondi news, it introduces ``genuine'' non-integrabilities which cannot be removed by a change of basis. This is of course expected: there is a close relation between genuine non-integrability and non-conservation of the charges, and the presence of non-trivial interactions between the boundary and bulk d.o.f. will render the surface charges non-conserved and non-integrable. Making these statements more rigorous and robust, and also connecting the ``change of basis method'' to the Wald-Zoupas or Barnich-Troessaert modified bracket methods, is postponed to our upcoming publication.
\paragraph{Higher-dimensional cases.} While we chose the lower dimensional gravity framework to establish our three main ideas, namely formulating the surface charge analysis in full generality near a null boundary, showing the existence of an integrable basis, and the general change of basis among integrable charge Lie-algebras, our main goal is to apply this to $4d$ cases (black holes in the real world). This will enable us to extend the analysis in \cite{Adami:2020amw} to the most general case in which all the charges have arbitrary time dependence ($x^+$-dependence in the notation of that work) and in which the Bondi news through the null surface (horizon) is also allowed/turned on. This analysis will be presented elsewhere \cite{progress-1}. Here we discuss a partial result: in the absence of Bondi news and ``genuine'' flux, the charges can be made integrable. In this case, for generic dimension $d$, there is a ``fundamental NBS'' basis in which the algebra is Heisenberg $\oplus$ Diff$({\cal N}^{d-2})$, where ${\cal N}^{d-2}$ is the codimension two spacelike manifold at constant $r,v$. In the presence of genuine flux, one can get the same algebra among the integrable parts of the charges, using the modified bracket method to separate the flux. Based on this algebra, for $d\geq 3$ one can construct BMS$_3\oplus {\cal D}$ where ${\cal D}$ is a subalgebra of Diff$({\cal N}^{d-2})$; see the analysis in section \ref{sec:4.2.2} for the $d=3$ case. This construction makes a closer connection to the near horizon BMS$_3$ algebras uncovered by Carlip \cite{Carlip:2017xne, Carlip:2019dbu}. It would be interesting to explore this line further and examine which boundary theories can be imposed on the system, and how one can include more dynamical features like Bondi news, modeling matter falling into or coming out of the black hole horizon as Hawking radiation.
\section*{Acknowledgement} We would like to especially thank Daniel Grumiller for his contribution in the earlier stages of this work and for many fruitful discussions in recent years which led to this project and its development. We are grateful to Hamid Afshar and Hamid Safari for their comments and discussions. VT would like to thank Mohammad Hassan Vahidinia for useful discussions. MMShJ thanks the hospitality of ICTP HECAP where a part of this research was carried out. HY acknowledges Yau Mathematical Sciences Center for hospitality and support. HA acknowledges Saramadan grant No. ISEF/M/98204. MMShJ acknowledges the support by INSF grant No 950124 and Saramadan grant No. ISEF/M/98204. The work of HY is supported in part by the National Natural Science Foundation of China, Project 11675244. CZ was supported by the Austrian Science Fund (FWF), projects P 30822 and M 2665.
\section{Introduction} Much of the evolution of galaxies takes place in the group environment. Despite the small number of galaxies a typical group contains compared to a cluster, the role of galaxy groups in the construction of large-scale structures is fundamental. With approximately 70\% of galaxies located in groups \citep{Tully87}, they are the most common environment in the local Universe \citep{GellerHuchra83,Eke05}. In hierarchical structure formation, galaxy clusters are built up through the merger of groups \citep[e.g.][]{vandenBosch14,Haines17}, and the early evolution of rich cluster galaxies \citep[e.g.][]{Bekki99,MossWhittle00} is thus closely connected with the evolution of galaxies in groups. The study of groups is therefore essential in order to acquire a complete understanding of the evolution of galaxies along with the environmental processes involved \citep[e.g.][]{Forbes06}. Galaxy groups are an ideal laboratory to study the most efficient process in the morphological transformation of galaxies: merging \citep[e.g.][]{ToomreToomre72}. \citet{Mamon00} showed that galaxies in groups merge roughly two orders of magnitude more often than in rich clusters. Although groups represent shallower potential wells, gravity plays an important role due to the low velocity dispersion of these systems and the close proximity of members. The high rate of mergers and tidal interactions makes the group environment a locus of galaxy evolution and enhanced star formation \citep[e.g.,][]{MulchaeyZabludoff98,HashimotoOemler00}. Many galaxy groups also possess hot gaseous haloes which, given the ubiquity of groups, make up a significant fraction of the baryonic component of the Universe \citep{Fukugita98}. 
However, whereas in virialised clusters the hot gas is always the dominant baryonic component, in groups the gas content varies, with numerous examples of systems in which the gas and stellar components are equal \citep{Lagana11} or in which the stellar component dominates \citep[e.g.][]{Giodini09}. Hence, groups are the natural environment in which to study the origin and nature of this important mass component and its close relationship with galaxies and their evolution \citep{Forbes06,Liang16}. One aspect of this relationship which is of particular interest is the thermal regulation of the intra-group medium (IGM) via star formation or active galactic nuclei (AGN) fuelled by cooling gas (generally referred to as `feedback'). The most common observational evidence for the AGN mechanism is provided by the radio bubbles and X-ray cavities that radio galaxies create in the hot IGM or the gaseous haloes of their host galaxies \citep{McNamara00,Fabian06}. While many studies have focused on the most powerful AGN in massive clusters of galaxies \citep[e.g.,][]{McNamara05}, groups of galaxies are an important environment, where feedback may have the greatest impact on galaxy formation and evolution. Like the brightest cluster galaxies (BCGs), brightest group-dominant early-type galaxies (BGEs) are ideal targets for the study of the evolution of groups and massive galaxies. They are typically highly luminous, old elliptical galaxies, located near the centres of the IGM and dark matter halo \citep[e.g.,][]{Linden07,Stott10}. Their stellar kinematic properties cover the full range from field elliptical-like to BCG-like \citep{Loubser18}, and their nuclei host super-massive black holes (e.g. \citealt{Rafferty06}), with many of the BGEs revealing the activity of their nuclei through radio emission, and even through radio jets that deposit their energy back into the IGM (see \citealt{McNu}).
Earlier studies have shown that $\sim30$\% of the most massive galaxies exhibit radio continuum emission \citep[e.g.,][]{Best05,Shabala08}, with group/cluster studies of central BGEs/BCGs in the local Universe suggesting a high detection rate (80-90\%) in the radio \citep[e.g.,][]{Magliocchetti07,Dunn10}. It is also known that radio AGN in early-type galaxies are more common in higher density environments (groups or clusters; \citealt{Lilly09,Bardelli10,Malavasi15}). In such environments, the radio activity in cluster- and group-central AGN is most effectively enhanced by cluster/group merging, or in the case of lower density environments by `inter-group' galaxy-galaxy interactions and mergers \citep{Miles04,TaylorBabul05}. It appears that mergers and interactions direct gas to the AGN in early-type galaxies, resulting in radio emission and the launching of jets, whereas in late-type galaxies the corresponding processes would trigger or increase radio emission from star formation (SF) \citep[e.g.,][]{Vollmer01}. Galaxy clusters may also host diffuse structures associated with the ICM \citep[see][for a general classification of cluster radio sources]{Kempner04}. These include \textit{mini-halos}, typically found at the centre of cool-core clusters around powerful radio galaxies, and thought to be powered by turbulence in the cooling region or perhaps by minor mergers \citep{Ferrari08,Feretti12,Brunetti14,Gitti15,Giacintucci17}; \textit{radio relics}, narrow, linear arcs of radio emission found far from the cluster core, associated with large-scale shocks and steep spectral indices; and \textit{radio halos}, which are thought to be produced through turbulent re-acceleration of electrons by a cluster merger event. In galaxy groups, the diffuse radio sources seen are mainly associated with the central galaxy rather than the IGM, but their production mechanism is still uncertain.
In this paper we present a study of the radio properties of the dominant galaxies of the 26-group CLoGS high-richness sub-sample, including new Giant Metrewave Radio Telescope (GMRT) 235 and 610~MHz observations of 21 systems. We present the properties of the central radio sources, examine their environment and provide a qualitative comparison between the GMRT radio data and the X-ray data for each group. The CLoGS sample and the X-ray properties of the groups are described in more detail in \citet[hereafter Paper~I]{OSullivan17}. The paper is organized as follows: in Section 2 we present the sample of galaxy groups, while in Section 3 we describe the GMRT and X-ray observations along with the approach followed for the radio data reduction. In Section 4 we present the radio detection statistics of the BGEs, and in Section 5 their radio properties (morphology and power), along with information on their spectral indices and the possible contribution of star formation to their radio emission. Section 6 contains the discussion of our results, focusing on detection statistics, the environmental properties of the radio sources and the energetics of jet systems with cavities. The summary and conclusions are given in Section 7. The radio images and information on the central galaxies of this sample are presented in Appendix~A, the star formation rates (SFR$_{FUV}$) and expected radio power due to star formation for the relevant BGEs are given in Appendix~B, and information on the flux density distribution of each system is given in Appendix~C. Throughout the paper we adopt the $\Lambda$CDM cosmology with $H_o=71$ \kmsmpc, $\Omega_m$ = 0.27, and $\Omega_\Lambda$ = 0.73. The radio spectral index $\alpha$ is defined as $S_\nu \propto \nu^{-\alpha}$, where $S_\nu$ is the flux density at frequency $\nu$.
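Given the convention $S_\nu \propto \nu^{-\alpha}$ and the local-volume distances involved ($z \ll 1$, so the k-correction is negligible), a two-point spectral index between the GMRT bands and a rough monochromatic radio power follow directly from the flux densities. The sketch below uses made-up illustrative values, not measurements from this paper:

```python
import math

def spectral_index(s1_mjy, nu1_mhz, s2_mjy, nu2_mhz):
    """Two-point spectral index alpha with the convention S_nu ~ nu^-alpha."""
    return -math.log(s1_mjy / s2_mjy) / math.log(nu1_mhz / nu2_mhz)

def radio_power(s_mjy, dist_mpc):
    """Monochromatic radio power P = 4 pi D^2 S_nu in W/Hz.

    For local-volume groups (D <~ 80 Mpc) redshifts are small,
    so the k-correction is neglected here.
    """
    mpc_m = 3.0857e22            # metres per Mpc
    s_si = s_mjy * 1e-29         # 1 mJy = 1e-29 W m^-2 Hz^-1
    return 4.0 * math.pi * (dist_mpc * mpc_m) ** 2 * s_si

# Hypothetical source: 20 mJy at 235 MHz, 10 mJy at 610 MHz, D = 50 Mpc
alpha = spectral_index(20.0, 235.0, 10.0, 610.0)
print(f"alpha(235-610 MHz) = {alpha:.2f}")    # steep spectrum, ~0.73
print(f"P_610 = {radio_power(10.0, 50.0):.2e} W/Hz")
```

A positive $\alpha$ of order 0.7-0.8 is the classic signature of optically thin synchrotron emission; flatter values suggest a self-absorbed core.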
\section{The Complete Local-Volume Groups Sample} We present in this section only a short general description of the Complete Local-Volume Groups Sample (CLoGS). CLoGS is an optically-selected, statistically-complete sample of groups in the nearby universe, chosen to facilitate studies in the radio, X-ray and optical bands, with the detection of an X-ray luminous IGM providing confirmation that groups are gravitationally bound systems. The sample is intended to facilitate investigations of a number of scientific questions, including the role of AGN in maintaining the thermal balance of the IGM. A detailed description of the CLoGS sample selection criteria and the X-ray properties of the high--richness sub-sample can be found in Paper~I. CLoGS is an optically selected sample of 53 groups in the local Universe ($\leq$80 Mpc), drawn from the shallow, all-sky Lyon Galaxy Group catalog sample (LGG; \citealt{Garcia93}). Groups were selected to have a minimum of 4 members and at least one luminous early-type galaxy (L$_B$ $>$ 3$\times$10$^{10}$ L$_\odot$). Declination was required to be $>-30^\circ$ to ensure visibility from the GMRT and Very Large Array (VLA). Group membership was expanded and refined using the HyperLEDA catalog \citep{Paturel03}, and group mean velocity dispersion and richness $R$ were estimated from the revised membership, where richness is defined as the number of member galaxies with log L$_B$ $\geq$ 10.2. Systems with $R>10$ correspond to known galaxy clusters and were excluded, as were six groups with $R$ = 1, since these are not rich enough to give a trustworthy determination of their physical parameters. This selection process produced a 53--group statistically complete sample, which we then divided into two sub-samples: i) the 26 high-richness groups with $R$ = 4$-$8 and ii) the 27 low-richness groups with $R$ = 2$-$3. The BGE was assumed to lie at or near the centre of the group, and was used as the target for observations.
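The selection cuts above can be summarised in a few lines. The following is an illustrative sketch of the richness filter with hypothetical member luminosities, not the actual LGG/HyperLEDA data:

```python
# Sketch of the CLoGS richness selection described above.
# Richness R = number of member galaxies with log L_B >= 10.2;
# R > 10 systems (known clusters) and R = 1 systems are excluded,
# and the survivors split into high-richness (R = 4-8) and
# low-richness (R = 2-3) sub-samples.

def richness(log_lb_members):
    """Count members above the log L_B = 10.2 luminosity threshold."""
    return sum(1 for log_lb in log_lb_members if log_lb >= 10.2)

def classify(r):
    """Place a group of richness r into a CLoGS sub-sample."""
    if 4 <= r <= 8:
        return "high-richness"
    if 2 <= r <= 3:
        return "low-richness"
    # R = 1 too poor for reliable parameters; R > 10 a known cluster
    # (R = 9-10 does not occur in the final 53-group sample)
    return "excluded"

# A hypothetical group with four members above the luminosity cut:
members_log_lb = [10.5, 10.3, 10.4, 10.6, 9.8]
print(classify(richness(members_log_lb)))   # high-richness
```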
In this paper we examine the radio properties of the high--richness sub-sample. \begin{table*} \begin{minipage}{\linewidth} \caption{Details of our GMRT observations analysed here, along with information on data used in the previous study of \citet{Simona11} and \citet{David09}. For each source the first line displays the details for the 610~MHz and the second line for the 235~MHz. The columns give the LGG (Lyon Groups of Galaxies) number for each group, the BGE name, observation date, frequency, time on source, beam parameters and the rms noise in the resulting images.} \centering \label{GMRTtable} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline Group Name & BGE & Observation & Frequency & On source & Beam, P.A. & rms \\ LGG & & Date & (MHz) & Time (minutes) & (Full array, $''\times'',{}^{\circ}$) & mJy beam$^{-1}$ \\ \hline 18 &NGC 410 & 2011 Jul & 610 & 218 & $6.97\times4.09$, 71.48 & 0.05 \\ & & 2011 Jul & 235 & 218 & $17.87\times10.95$, 50.40 & 0.40 \\ 27 &NGC 584 & 2011 Jul & 610 & 200 & $6.82\times3.22$, 59.80 & 0.20 \\ & & 2011 Jul & 235 & 200 & $17.87\times10.95$, 50.40 & 1.20 \\ 31 &NGC 677 & 2013 Oct & 610 & 200 & $5.26\times4.34$, 68.59 & 0.04 \\ & & 2013 Oct & 235 & 200 & $12.94\times10.76$, 66.90 & 1.20 \\ 42 &NGC 777 & 2010 Dec & 610 & 338 & $5.71\times4.76$, -70.53 & 0.15 \\ & & 2010 Dec & 235 & 338 & $12.96\times19.94$, 65.78 & 0.40 \\ 58 &NGC 940 & 2010 Dec & 610 & 360 & $5.81\times4.76$, -75.11 & 0.06 \\ & & 2010 Dec & 235 & 360 & $13.27\times11.16$, 66.88 & 0.30 \\ 61 & NGC 924 & 2011 Jul & 610 & 125 & $5.52\times3.74$, 56.25 & 0.05 \\ & & 2011 Jul & 235 & 125 & $12.40\times10.43$, 59.19 & 0.30 \\ 66 &NGC 978 & 2010 Dec & 610 & 341 & $5.81\times4.65$, -81.63 & 0.06 \\ & & 2010 Dec & 235 & 341 & $14.13\times11.69$, 76.74 & 0.40 \\ 72 &NGC 1060 & 2010 Dec & 610 & 286 & $3.97\times3.54$, -86.98 & 0.09 \\ & & 2010 Dec & 235 & 286 & $13.97\times11.87$, 81.36 & 0.50 \\ 80 &NGC 1167 & 2011 Jul & 610 & 139 & $7.41\times4.04$, 68.27 & 0.06 \\ & & 2011 Jul 
& 235 & 139 & $16.08\times10.40$, 66.80 & 0.6 \\ 103 &NGC 1453 & 2011 Nov & 610 & 187 & $7.99\times4.60$, 48.98 & 0.06 \\ & & 2013 Oct & 235 & 258 & $16.06\times11.34$, 58.47 & 0.60 \\ 158 &NGC 2563 & 2010 Dec & 610 & 344 & $5.10\times4.60$, -57.85 & 0.07 \\ & & 2010 Dec & 235 & 344 & $11.88\times9.93$, 53.00 & 0.30 \\ 185 &NGC 3078 & 2010 Dec & 610 & 369 & $6.61\times4.71$, 2.38 & 0.20 \\ & & 2010 Dec & 235 & 369 & $16.02\times10.99$, 20.72 & 0.50 \\ 262 &NGC 4008 & 2011 Apr & 610 & 168 & $5.76\times4.52$, 78.93 & 0.05 \\ & & 2011 Apr & 235 & 168 & $14.38\times11.78$, 89.31 & 1.30 \\ 276 &NGC 4169 & 2011 Apr & 610 & 168 & $5.47\times4.43$, 61.96 & 0.08 \\ & & 2011 Apr & 235 & 168 & $13.94\times10.78$, 72.62 & 1.20 \\ 278 &NGC 4261 & 2009 Feb & 610 & 270 & $7.32\times4.77$, 76.62 & 1.00 \\ & & 2009 Feb & 235 & 270 & $15.30\times11.00$, 67.17 & 1.60 \\ 310 &ESO 507-25 & 2011 Apr & 610 & 167 & $9.40\times8.43$, 2.59 & 0.10 \\ & & 2011 Apr & 235 & 167 & $15.59\times12.22$, 7.69 & 0.50 \\ 345 &NGC 5084 & 2011 Jul & 610 & 150 & $5.91\times5.06$, 29.68 & 0.09 \\ & & 2011 Jul & 235 & 150 & $15.71\times12.62$, -4.35 & 0.65 \\ 351 &NGC 5153 & 2011 Apr & 610 & 167 & $7.63\times4.81$, 26.32 & 0.06 \\ & & 2011 Apr & 235 & 167 & $16.01\times11.25$, 13.68 & 0.30 \\ 363 &NGC 5353 & 2011 Apr & 610 & 207 & $5.92\times4.44$, -76.02 & 0.06 \\ & & 2011 Apr & 235 & 207 & $16.91\times12.00$, -79.87 & 0.60 \\ 393 &NGC 5846 & 2012 Mar & 235 & 145 & $14.02\times11.27$, 85.70 & 0.50 \\ 402 &NGC 5982 & 2011 Apr & 610 & 237 & $7.99\times6.47$, -81.04 & 0.09 \\ & & 2011 Apr & 235 & 237 & $19.42\times14.43$, -63.66 & 0.40 \\ 421 &NGC 6658 & 2011 Apr & 610 & 236 & $5.03\times4.35$, 55.03 & 0.05 \\ & & 2011 Apr & 235 & 236 & $12.82\times10.30$, 69.71 & 0.60 \\ \hline Previous work & & & & & & \\ \hline 9 &NGC 193\footnote{\cite{Simona11}} & 2007 Aug & 610 & 110 & $7.0\times6.0$, 0 & 0.08 \\ & & 2008 Aug & 235 & 120 & $13.5\times12.9$, -55 & 0.80 \\ 117& NGC 1587$^a$ & 2006 Aug & 610 & 200 & 
$5.7\times4.7$, 67 & 0.05 \\ & & 2008 Aug & 235 & 120 & $17.2\times11.0$, 46 & 1.00 \\ 338& NGC 5044\footnote{\cite{David09}} & 2008 Feb & 610 & 130 & $6.2\times4.4$, 41.9 & 0.10 \\ & & 2008 Feb & 235 & 140 & $15.9\times11.3$, -4.6 & 0.80 \\ 393 & NGC 5846$^a$ & 2006 Aug & 610 & 140 & $6.0\times5.5$, -84 & 0.06 \\ 473 & NGC 7619$^{a}$ & 2007 Aug & 610 & 100 & $6.1\times4.6$, 39 & 0.10 \\ & & 2008 Aug & 235 & 120 & $34.6\times11.3$, -41.4 & 1.20 \\ \hline \end{tabular} \end{minipage} \end{table*} \section{OBSERVATIONS AND DATA ANALYSIS} \subsection{GMRT radio observations and data analysis} With the exception of systems for which suitable archival data were already available, the CLoGS galaxy groups were observed using the GMRT in dual-frequency 235/610 MHz mode during 2010 December, 2011 April, July and November, 2012 March and 2013 October. The average total time spent on each source was approximately 4 hours. The data for both frequencies were recorded using the upper side band correlator (USB), providing a total observing bandwidth of 32 MHz at both 610 and 235~MHz, and were collected with 512/256 channels at a spectral resolution of 65.1/130.2 kHz per channel. The data were analysed and reduced using the NRAO Astronomical Image Processing System (\textsc{aips}) package. In Table~\ref{GMRTtable} we summarize the details of the observations, reporting the observing date, frequency, total time on source, the half-power beamwidth (HPBW) and the rms noise level (1$\sigma$) at full resolution. The GMRT data analysis followed the same procedure as described in \citet{Kolokythas14}. The data were edited to remove radio-frequency interference (RFI), first by flagging data with extreme phase differences on the phase and flux calibrators, and then on the source. The task \textsc{setjy} was used and the flux density scale was set to ``best VLA values (2010)\footnote{http://www.vla.nrao.edu/astro/calib/manual/baars.html}''.
The data were then calibrated using the task \textsc{calib} with uniform weighting, and the flux density scale was defined using the amplitude calibrators observed at the beginning and end of each of our observations. A calibrated, RFI-free channel was then used for bandpass calibration, in order to characterize the amplitude variation across the channels. Using task \textsc{splat}, the channels were averaged in order to increase signal-to-noise while minimizing the effects of bandwidth smearing. For imaging purposes, the field of view of the GMRT was split into multiple facets, each approximating a plane surface in the imaging field. The typical number of facets created at 610 MHz was 56 (with cellsize 1.5$''$) and 85 at 235 MHz (with cellsize 3$''$). The final images give a field of view of $\sim 1.2^{\circ}\times1.2^{\circ}$ at 610 MHz and $\sim 3^{\circ}\times3^{\circ}$ at 235 MHz. Task \textsc{imagr} was then used repeatedly, applying phase-only self-calibration before every iteration. The majority of the noise remaining in our images arises from calibration uncertainties, with phase errors at the lowest frequencies originating from rapidly varying ionospheric delays. The presence of bright sources in the field can also affect the noise level in our final images, reducing the dynamic range. The final images were corrected for the primary beam pattern of the GMRT using the task \textsc{pbcor}. The mean sensitivity (1$\sigma$ noise level) achieved in the final full-resolution images is $\sim$0.08 mJy at 610 MHz and $\sim$0.6 mJy at 235~MHz (see Table~\ref{GMRTtable}). The theoretical noise values calculated for our observations are 29 $\mu$Jy for 610 MHz and 80 $\mu$Jy for 235 MHz\footnote{Calculated using the rms noise sensitivity equation in \S 2.1.6 from the GMRT Observer's Manual: http://gmrt.ncra.tifr.res.in/gmrt$\_$hpage/Users/doc/manual/Manual$\_$2013/manual$\_$20Sep2013.pdf}.
The mean sensitivity achieved at 610 MHz is comparable to the theoretical noise level, while at 235~MHz the mean sensitivity is considerably higher than the theoretical level. We note that the full resolution of the GMRT is $\sim13''$ at 235~MHz and $\sim6''$ at 610 MHz, with the \textit{u-v} range being $\sim0.05 - 21$ k$\lambda$ at 235~MHz and $\sim0.1 - 50$ k$\lambda$ at 610~MHz. The uncertainty associated with the flux density measurements is dominated by residual amplitude calibration errors, typically $\sim$5\% at 610~MHz and $\sim$8\% at 235~MHz \citep[e.g.,][]{Chandra04}. Out of the 26 BGEs in the CLoGS high-richness sub-sample, 20 were analyzed in this study at both GMRT 610/235~MHz and one (NGC~5846) at 235~MHz only. The GMRT observations and images for NGC~193, NGC~7619 and NGC~1587 at both 235/610 MHz and NGC~5846 at 610~MHz are drawn from the earlier study of \citet{Simona11}, those for NGC~5044 from \citet{David09, David11} and those for NGC~4261 from \citet{Kolokythas14}. Analysis of those observations is described in detail in those works. For sources with resolved structure at both GMRT frequencies (see Table~\ref{corespix}), we created spectral index maps using the task \textsc{comb} in \textsc{aips}, with images having the same \textit{uv} range, resolution and cellsize. 1400 MHz data were drawn primarily from the NRAO VLA Sky Survey (NVSS, \citealt{Condon98}) and the Faint Images of the Radio Sky at Twenty centimeters Survey (FIRST, \citealt{Becker95}). For some sources, flux densities were drawn from \citet{Brown11}, who used measurements from the NVSS, Green Bank Telescope, and Parkes Radio Telescope, as well as from the studies of \citet{Kuhr81} and \citet{Condon02} for NGC~4261 and NGC~193, respectively. \subsection{X-ray observations} \label{sec:Xray} Paper~I describes the X-ray properties of the high-richness group sample, based on observations from the \textit{Chandra} and \textit{XMM-Newton} observatories.
We summarize here only the main results of the X-ray analysis, as our focus is the comparison between the radio and X-ray structures. Of the 26 groups of the high-richness sub-sample, 19 were observed by \textit{XMM-Newton}, 13 by \textit{Chandra}, and 8 have data from both X-ray observatories. An IGM is detected in 14/26 systems ($\sim$54\%) with a further 3/26 systems ($\sim$12\%) containing galaxy-scale haloes of X-ray emitting gas (extent $<65$~kpc, luminosity $<10^{41}$~erg~s$^{-1}$) associated with the dominant early-type galaxy. The remaining nine groups show only point-like X-ray emission ($\sim$35\%) from AGN and stellar sources in member galaxies. In the groups with detected gas haloes, the typical halo temperatures are found to be in the range $\sim0.4-1.4$ keV, corresponding to masses in the range $M_{500}\sim0.5-5\times 10^{13} M_{\odot}$. The X-ray luminosities of the systems were estimated in the $0.5-7$ keV band to be between $L_{X,R500}\sim2-200\times10^{41}$ erg s$^{-1}$. Paper~I classes systems that have a central decline in their X-ray temperature profile (at greater than 3$\sigma$ significance) as cool core. By this definition, roughly one third (5/14) of the X-ray confirmed groups are non-cool-core systems, while the remaining two thirds (9/14) have a cool core. Cavities in the IGM, associated with the activity of central radio jet sources, have been identified in four systems (see e.g., \citealt{David09,David11,David17} for NGC~5044, \citealt{Allen06}, \citealt{Dong10} and \citealt{Machacek11} for NGC~5846, \citealt{Bogdan14} for NGC~193, \citealt{ewan4261} for NGC~4261). In this paper, we use cavity properties to examine the power output of AGN jets. Where necessary, cavity power estimates are made using methods similar to those described in \citet{OSullivan11}. 
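For context, cavity powers of this kind are conventionally estimated by dividing the cavity enthalpy ($4pV$ for a cavity filled with relativistic plasma) by an age estimate such as the sonic (sound-crossing) time. The Python sketch below illustrates the generic technique only; it is not the exact procedure used in the works cited above, and all input values are hypothetical:

```python
import math

# Physical constants (SI)
KEV = 1.602e-16        # 1 keV in joules
M_P = 1.673e-27        # proton mass, kg
KPC = 3.086e19         # 1 kpc in metres
MU = 0.62              # assumed mean molecular weight of the IGM

def sound_speed(kT_keV):
    """Adiabatic sound speed of the IGM in m/s (gamma = 5/3)."""
    return math.sqrt(5.0 / 3.0 * kT_keV * KEV / (MU * M_P))

def cavity_power(n_e_cm3, kT_keV, r_cav_kpc, d_kpc):
    """Generic enthalpy-based cavity power, P_cav = 4 p V / t_sonic.

    n_e_cm3   : electron density at the cavity location (cm^-3)
    kT_keV    : IGM temperature (keV)
    r_cav_kpc : cavity radius, assumed spherical (kpc)
    d_kpc     : distance of cavity centre from the nucleus (kpc)
    """
    p = 1.92 * (n_e_cm3 * 1e6) * kT_keV * KEV       # total pressure, Pa (n_tot ~ 1.92 n_e)
    V = 4.0 / 3.0 * math.pi * (r_cav_kpc * KPC)**3  # cavity volume, m^3
    t = d_kpc * KPC / sound_speed(kT_keV)           # sound-crossing time, s
    return 4.0 * p * V / t                          # W; 1 W = 1e7 erg/s

# Illustrative (hypothetical) numbers for a ~1 keV group:
print(cavity_power(0.01, 1.0, 5.0, 10.0))  # ~3e35 W, i.e. ~3e42 erg/s
```

For group-scale haloes ($kT\sim1$~keV) this yields powers of order $10^{42}$~erg~s$^{-1}$, comparable to the P$_{\mathrm{cav}}$ values tabulated later in this paper.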
The cooling luminosity (L$_{cool}$), defined as the luminosity of the gas in the volume within which the cooling time is $\leq$3 Gyr, was calculated in the 0.5-7 keV band, based on the luminosity and cooling time profiles measured in Paper~I, using a PROJCT*WABS*APEC model in \textsc{Xspec}. \section{Radio detections in brightest group galaxies} Using the 235 and 610~MHz GMRT images, and the available NVSS and FIRST catalog data, we identified sources in the BGE of each high-richness group, determined their morphology and measured their flux densities and extent. Measured flux densities are shown in Table~\ref{Sourcetable}. GMRT images of the groups and more details of these sources are presented in Appendix~\ref{AppA}. We find a high radio detection rate of 92\% (24 of 26 BGEs), with only two galaxies (NGC~5153 and NGC~6658) being undetected at 235, 610 and 1400 MHz. Considering only the GMRT data, 23 of the 26 BGEs are detected at 235 and/or 610~MHz (89\%), with NGC~584 being the only galaxy detected at 1.4~GHz alone. In Figure~\ref{235Flux}, we present the flux density distribution at 610~MHz and 235~MHz. The majority of the BGEs have flux densities in the range $10-100$~mJy, with the distribution at 610~MHz exhibiting a narrower, sharper peak than at 235~MHz, where the spread in flux densities is larger. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{FLUXdensitydistributionofBGEs} \caption{Flux density distribution of the central brightest group early-type galaxies at 610~MHz (black dashed line) and 235~MHz (cyan columns) in the CLoGS high-richness sample.} \label{235Flux} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth]{LradiovsNumberBGEshistmatch} \caption{Radio power of the detected BGEs at 610~MHz (black dashed line) and 235~MHz (pink columns) in the CLoGS high-richness sample.
} \label{Lradio} \end{figure} We estimate the limiting sensitivity of our sample based on the typical noise level of our observations and the maximum distance for the observed groups, 78~Mpc. We define radio sources as detected in our GMRT data if they reach a 5$\sigma$ level of significance above the noise. Using the mean r.m.s. from our GMRT observations for each frequency ($\sim$80 $\mu$Jy beam$^{-1}$ at 610~MHz and $\sim$600 $\mu$Jy beam$^{-1}$ at 235~MHz), we find that we should be sensitive to any source with power $>2.9\times10^{20}$ W Hz$^{-1}$ at 610~MHz or $>2.2\times10^{21}$ W Hz$^{-1}$ at 235~MHz. For comparison, the equivalent limit for NVSS 1400~MHz power sensitivity at this distance and level of significance is $>1.7\times10^{21}$ W Hz$^{-1}$. We note that sources in nearby groups may be detected at $>$5$\sigma$ significance with lower powers; these limits reflect the sensitivity of the sample as a whole. \begin{table*} \caption{Radio flux densities and spectral indices for our targets. The columns list the BGE name, redshift, flux density of each source at 235 and 610~MHz, the 235$-$610~MHz spectral index, the flux density at 1.4~GHz (drawn from the literature), the 235$-$1400~MHz spectral index, and the radio power at 235 and 610~MHz. All upper limits shown here from our analysis are 5 $\times$ r.m.s. Three galaxies show no radio emission detected from our GMRT observations. The sensitivity of the GMRT observations is on average $\sim$80 $\mu$Jy beam$^{-1}$ at 610~MHz and $\sim$600 $\mu$Jy beam$^{-1}$ at 235~MHz. The references for the 1.4~GHz flux densities and the GMRT measurements from previous works are listed at the bottom of the table. 
\label{Sourcetable}} \begin{center} \begin{tabular}{lcccccccc} \hline Source& Redshift&S$_{235 MHz}$ & S$_{610 MHz}$ & $\alpha_{235 MHz}^{610 MHz}$ & S$_{1.4GHz}$ & $\alpha_{235 MHz}^{1400 MHz}$ & P$_{235 MHz}$ & P$_{610 MHz}$ \\ & ($z$) & $\pm8\%$ (mJy) & $\pm5\%$ (mJy) & ($\pm$0.04) & (mJy) & & (10$^{23}$ W Hz$^{-1}$) & (10$^{23}$ W Hz$^{-1}$) \\ \hline NGC 410 & 0.017659 & 28.5 & 13.6 & 0.78 & 6.3$\pm0.6^a$ & 0.85$\pm$0.05 & 0.201 & 0.096 \\ NGC 584 & 0.006011 & $\leq$6.0 & $\leq$1.0 & - & 0.6$\pm0.5^b$ & - & - & - \\ NGC 677 & 0.017012 & 99.2 & 45.6 & 0.81 & 20.6$\pm1.6^a$ & 0.88$\pm$0.05 & 0.720 & 0.331 \\ NGC 777 & 0.016728 & 20.9 & 10.2 & 0.75 & 7.0$\pm0.5^b$ & 0.61$\pm$0.05 & 0.133 & 0.065 \\ NGC 940 & 0.017075 & 3.3 & 4.3 & -0.28 & - & - & 0.021 & 0.028 \\ NGC 924 & 0.014880 & $\leq$1.5 & 1.7 & - & - & - & - & 0.008 \\ NGC 978 & 0.015794 & $\leq$2.0 & 1.3 & - & - & - & - & 0.007 \\ NGC 1060 & 0.017312 & 28.5 & 12.4 & 0.87 & 9.2$\pm0.5^b$ & 0.63$\pm$0.04 & 0.197 & 0.086 \\ NGC 1167 & 0.016495 & 4018.4 & 2295.3 & 0.59 & 1700$\pm100^b$ & 0.48$\pm$0.04 & 24.751 & 14.138 \\ NGC 1453 & 0.012962 & 47.1 & 40.4 & 0.16 & 28.0$\pm1^b$ & 0.29$\pm$0.04 & 0.221 & 0.190 \\ NGC 2563 & 0.014944 & $\leq$1.5 & 1.3 & - & 0.3$\pm0.5^b$ & - & - & 0.007 \\ NGC 3078 & 0.008606 & 582.8 & 384.5 & 0.44 & 310$\pm10^b$ & 0.35$\pm$0.04 & 0.802 & 0.529 \\ NGC 4008 & 0.012075 & 24.8 & 16.4 & 0.43 & 10.9$\pm0.5^b$ & 0.46$\pm$0.04 & 0.086 & 0.057 \\ NGC 4169 & 0.012622 & $\leq$6.0 & 3.0 & - & 1.07$\pm0.15^c$ & - & - & 0.007 \\ NGC 4261$^d$ & 0.007465 & 48500 & 29600 & 0.52 & 19510$\pm410^e$ & 0.51$\pm$0.04 & 59.193 & 36.126 \\ ESO 507-25 & 0.010788 & 55.2 & 46.6 & 0.18 & 24.0$\pm2^b$ & 0.47$\pm$0.05 & 0.133 & 0.112 \\ NGC 5084 & 0.005741 & 53.9 & 36.2 & 0.42 & 46.6$\pm1.8^a$ & 0.08$\pm$0.04 & 0.034 & 0.023 \\ NGC 5153 & 0.014413 & $\leq$1.5 & $\leq$0.3 & - & - & - & - & - \\ NGC 5353 & 0.007755 & 65.8 & 45.8 & 0.38 & 41.0$\pm1.3^a$ & 0.27$\pm$0.04 & 0.096 & 0.067 \\ NGC 5846 & 0.005717 & 
58.5 & - & 0.52 & 21.0$\pm1^b$ & 0.57$\pm$0.04 & 0.047 & - \\ NGC 5982 & 0.010064 & 4.5 & 1.9 & 0.90 & 0.5$\pm0.5^b$ & 1.23$\pm$0.04 & 0.010 & 0.004 \\ NGC 6658 & 0.014243 & $\leq$3.0 & $\leq$0.3 & - & - & - & - & - \\ \hline \multicolumn{9}{l}{Previous work}\\ \hline NGC 193$^f$ & 0.014723 & 5260 & 3184 & 0.53 & 1710$\pm102^g$ & 0.62$\pm$0.04 & 34.217 & 20.710 \\ NGC 1587$^f$ & 0.01230 & 655 & 222 & $>$1.13 & 131$\pm5^b$ & 0.90$\pm$0.04 & 2.042 & 0.692 \\ NGC 5044$^h$ & 0.009280 & 229 & 38.0 & 1.88 & 35$\pm1^b$ & 1.05$\pm$0.04 & 0.399 & 0.066 \\ NGC 5846$^f$ & 0.005717 & - & 36.0 & - & - & - & - & 0.029 \\ NGC 7619$^f$ & 0.012549 & 56.3 & 32.3 & 0.58 & 20$\pm1^b$ & 0.58$\pm$0.04 & 0.195 & 0.112 \\ \hline \end{tabular} \end{center} $^a$ \citet{Condon98}, $^b$ \citet{Brown11}, $^c$ \citet{Becker95}, $^d$ \citet{Kolokythas14} $^e$ \citet{Kuhr81}, $^f$ \citet{Simona11}, $^g$ \citet{Condon02}, $^h$ \citet{David09} \end{table*} \section{Radio properties of the brightest group galaxies} \subsection{Radio morphology} The radio-detected BGEs of the CLoGS high-richness sample exhibit a rich variety of structures that differ dramatically in size, radio power and morphology. The physical scale of their radio emission ranges from a few kpc (point sources; galactic scale) to several tens of kpc (large jets; group scale). The radio structures observed in central galaxies consist of the normal double-lobed radio galaxies, small-scale jets, point sources and irregular-diffuse structures with no clear jet/lobe structure. Our morphology classification is based on our dual-frequency GMRT analysis, the NVSS and FIRST data surveys and where needed, on the literature. 
Including the BGEs that are non-radio emitting at any frequency, we classify the radio emission seen from the BGEs in the following categories: i) \textit{point-like} - unresolved radio point source, ii) \textit{diffuse} emission - extended but amorphous with no preference in orientation, iii) \textit{small-scale jets} - confined within the stellar body of the host galaxy, with extent $<20$~kpc, and iv) \textit{large-scale jets} - extending $>20$~kpc, beyond the host galaxy and into the intra-group medium. In this class of radio sources we also include the subclass of \textit{remnant jet} systems. These are systems which previous studies have shown to be the products of past periods of jet activity, and which are now passively ageing, without significant input from the AGN. Lastly, the systems that have no radio source detected are classed as category v) \textit{absence of radio emission}. Table~\ref{t:listtable2} lists the morphology of each source. We note that some of our systems can be placed in more than one of the radio morphology classes described above. For example, the radio source in NGC~1167 appears as a \textit{small-scale jet} in our 610~MHz image, at 235~MHz the emission is \textit{point-like}, but deeper 1.4~GHz observations have revealed the presence of a pair of old radio lobes extending $>100$~kpc \citep{Shulevski12,Brienza16}. Since our interest is in the interaction between radio sources and their environment, in cases where the source can be placed in more than one category, we list the most extended class. We find that 4 BGEs host large-scale jets (of which 2 are remnant jets), 2 host small-scale jets, 4 possess diffuse emission, while the remaining 14 radio-detected BGEs host point-like radio sources. The two currently-active large-scale jet systems are NGC~193 and NGC~4261. 
Both are Fanaroff-Riley type I (FR I; Fanaroff \& Riley 1974) radio galaxies (4C+03.01 and 3C~270, respectively) with their jet/lobe components extending several tens of kpc away from the host galaxy. The two sources present similar morphologies, with roughly symmetrical double-lobed structures and bright, straight jets. Of the remnant sources, NGC~1167 (4C+34.09) is also roughly symmetrical \citep{Shulevski12}, while NGC~5044 has a single-sided bent jet and detached lobe. Two BGEs, NGC~5153 and NGC~6658, lack radio emission at any of the frequencies we examined. A third galaxy, NGC~584, is undetected in our GMRT data, but \citet{Brown11} find a marginal detection of a faint point source at 1.4~GHz ($0.6\pm0.5$~mJy). The noise levels of the GMRT observations ($\leq1$~mJy at 610 and $\leq6$ mJy at 235~MHz; 5$\sigma$ significance) are consistent with the non-detection of such a faint source. \subsection{Radio power in CLoGS BGEs} Using the measured radio flux densities, we calculated the radio power of each BGE at each frequency via: \begin{equation} \label{Power610} P_{\nu}=4 \pi D^2 (1+z)^{(\alpha-1)}S_{\nu}, \end{equation} where $D$ is the distance to the source, $\alpha$ is the spectral index, $z$ the redshift and S$_{\nu}$ is the flux density of the source at frequency $\nu$. In the case where no spectral index is known for the source (i.e., it is only detected at one frequency), $\alpha$ is by default set to $0.8$ \citep{Condon92}, which is the typical value for extragalactic radio sources. In Figure~\ref{Lradio}, we show the radio power distribution of the BGEs in our sample at 235 and 610~MHz. The majority of the galaxies have low powers, in the range 10$^{21}$ $-$ 10$^{23}$ W Hz$^{-1}$. Only one of our BGEs exhibits power in the range $10^{23}-10^{24}$ W Hz$^{-1}$ (NGC 1587), but the sample contains three high-power sources with P$_{235MHz}>10^{24}$ W Hz$^{-1}$. All three are bright jet-hosting radio galaxies (NGC~4261, NGC~193 and NGC~1167).
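For illustration, equation~(\ref{Power610}) is straightforward to evaluate numerically. The short Python sketch below (an illustration only, not part of our analysis pipeline) reproduces the 5$\sigma$ sensitivity limits quoted above for the mean rms noise levels at the 78~Mpc sample distance limit:

```python
import math

MPC = 3.0857e22   # metres per megaparsec
MJY = 1.0e-29     # W m^-2 Hz^-1 per mJy

def radio_power(S_mJy, D_Mpc, z, alpha=0.8):
    """Monochromatic radio power: P = 4*pi*D^2 * (1+z)**(alpha-1) * S."""
    D_m = D_Mpc * MPC
    return 4.0 * math.pi * D_m**2 * (1.0 + z)**(alpha - 1.0) * S_mJy * MJY

# 5 sigma on the mean rms noise (0.08 mJy at 610 MHz, 0.6 mJy at 235 MHz)
# at the 78 Mpc distance limit of the sample (z ~ 0.018):
print(f"{radio_power(5 * 0.08, 78.0, 0.018):.1e}")  # ~2.9e+20 W/Hz at 610 MHz
print(f"{radio_power(5 * 0.6, 78.0, 0.018):.1e}")   # ~2.2e+21 W/Hz at 235 MHz
```

The same function applied to the measured flux densities, with each source's own distance and spectral index, gives the powers listed in Table~\ref{Sourcetable}.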
We note that typical values of radio power at 235 and 610 MHz in BCGs range between $\sim10^{23}- 5\times10^{26}$ W Hz$^{-1}$ \citep[see e.g. Table 2,][]{Yuan16}. \subsubsection{Radio-loudness in CLoGS BGEs} Only three of our BGEs can be considered radio-loud. We follow \citet{Best05} in defining radio-loud systems as having P$_{1.4GHz}>10^{23}$~W~Hz$^{-1}$, and note that the NVSS and FIRST sensitivity limits mean that those of our systems which are undetected at 1.4~GHz must have radio powers below this limit. The three radio-loud systems are shown in Table~\ref{radioloud23}. \citet{LinMohr07}, using the same threshold of radio-loudness, found that while in high-mass clusters (M$_{200}>10^{14.2}$~M$_\odot$) $\sim$36\% of the BCGs have P$_{1.4GHz}>10^{23}$ W Hz$^{-1}$, this fraction drops to only $\sim$13\% in low-mass clusters and groups (10$^{13}<$M$_{200}<10^{14.2}$~M$_\odot$). The latter percentage is very similar to that observed in our sample, $\sim$12\% (3/26). \subsection{Contributions from star formation} While AGN are clearly the origin of the radio jet sources in our sample, we must consider whether star formation might contribute to the radio emission of the other sources, particularly those with low radio luminosities. We discuss the diffuse sources in detail in \S6.5, but note here that their luminosity and morphology make star formation unlikely to be their dominant source of radio emission. For the point-like sources, the beam sizes limit the emission to regions a few kiloparsecs across, but except in the most luminous systems, it is possible that we could mistake a compact central star-forming region for AGN emission. Our CO survey of CLoGS BGEs confirms that the sample includes galaxies whose radio emission would be consistent with the star formation expected from their molecular gas reservoirs, if they were forming stars at rates similar to those in spiral galaxies (O'Sullivan et al., in prep.).
We will investigate star formation in these galaxies more thoroughly in a later paper (Kolokythas et al., in prep.), but for now we need only determine whether it is likely to be a significant source of radio emission. To do this, we use Far-Ultraviolet (FUV) fluxes from the Galaxy Evolution Explorer (\textit{GALEX}; \citealt{Martin05}) GR6 catalog\footnote{http://galex.stsci.edu/GR6/?page=mastform} to estimate the star formation rate (SFR) using the calibration from \citet{Salim07}. We then calculate the expected 610~MHz radio emission from star formation at that rate using the relation of \citet{Garn09}. A table of SFRs and expected radio powers for the BGEs with point-like and diffuse radio emission is provided in Appendix~B. We find that in most cases, the detected radio emission is 1$-$2 orders of magnitude more luminous than would be expected from the FUV SFR. For three galaxies (NGC~924, NGC~2563 and NGC~5982) SF may contribute 20$-$40\% of the radio emission, and in one (NGC~584) it may be the dominant source of radio emission. There is also one galaxy (NGC~4169) for which no FUV flux is available. We therefore consider that SF dominates at most 2 of the 14 BGEs with point-like emission, though it could have some impact on our measurements of radio properties in 1$-$3 more. \begin{table*} \caption{Morphological properties of CLoGS high-richness groups and their central radio sources. For each group we note the LGG number, the BGE name, the angular scale, the largest linear size (LLS) of the radio source, measured from the 235 MHz radio images unless stated otherwise, the radio and X-ray morphology class (X-ray class drawn from Paper~I), the energy output of any radio jets, estimated from 235~MHz power or from the X-ray cavities (P$_{\mathrm{cav}}$; see also Paper~I), and lastly the relevant cooling X-ray luminosity L$_{\mathrm{cool}}$.
} \label{t:listtable2} \begin{center} \begin{tabular}{llccccccc} \hline Group & BGE & Scale & LLS & Radio morphology & X-ray morphology & Energy output (radio) & P$_{\mathrm{cav}}$ & L$_{\mathrm{cool}}$ \\ & & (kpc/$''$) & (kpc) & & & (10$^{42}$ erg s$^{-1}$) & (10$^{42}$ erg s$^{-1}$) & (10$^{40}$ erg s$^{-1}$) \\ \hline LGG 9 & NGC 193 & 0.359 & 80$^a$ & large-scale jet & Group & 43.57$^{+22.18}_{-14.70}$ & 4.66$^{+3.55}_{-3.63}$ & 2.35$^{+0.13}_{-0.08}$ \\ LGG 18 & NGC 410 & 0.373 & $\leq$11 & point & Group & - & -& - \\ LGG 27 & NGC 584 & 0.121 & $\leq$3$^b$ & point & Galaxy & - & - & -\\ LGG 31 & NGC 677 & 0.378 & 30 & diffuse & Group & - & - & -\\ LGG 42 & NGC 777 & 0.354 & $\leq$8 & point & Group & - & - & -\\ LGG 58 & NGC 940 & 0.359 & $\leq$6 & point & Point & - & - & -\\ LGG 61 & NGC 924 & 0.310 & $\leq$4$^c$ & point & Point & - & - & -\\ LGG 66 & NGC 978 & 0.334 & $\leq$3$^c$ & point & Galaxy & - & - & -\\ LGG 72 & NGC 1060 & 0.368 & 14 & small-scale jet & Group & 1.13$^{+0.16}_{-0.19}$ & 0.18$^{+0.31}_{-0.03}$ & 45.70$^{+1.40}_{-1.30}$\\ LGG 80 & NGC 1167 & 0.349 & 240$^d$ & remnant jet & Point & 34.68$^{+15.84}_{-10.87}$ & -& - \\ LGG 103 & NGC 1453 & 0.305 & $\leq$11 & point & Group & - & - & -\\ LGG 117 & NGC 1587 & 0.252 & 22$^a$ & diffuse & Group & - & -& - \\% 3.5 \\ LGG 158 & NGC 2563 & 0.315 & $\leq$3 & point & Group & - & -& - \\ LGG 185 & NGC 3078 & 0.165 & 26 & diffuse & Galaxy & - & -& - \\ LGG 262 & NGC 4008 & 0.262 & $\leq$7 & point & Galaxy & - & - & -\\ LGG 276 & NGC 4169 & 0.218 & $\leq$3$^c$ & point & Point & - & - & -\\ LGG 278 & NGC 4261 & 0.155 & 80 & large-scale jet & Group & 64.32$^{+38.79}_{-24.20}$ & 21.45$^{+4.37}_{-3.04}$ & 8.00$^{+0.40}_{-0.01}$\\ LGG 310 & ESO 507-25 & 0.218 & 11 & diffuse & Galaxy & - & -& - \\ LGG 338 & NGC 5044 & 0.184 & 63$^a$ & remnant jet & Group & 1.85$^{+0.14}_{-0.15}$ & 2.00$^{+1.74}_{-0.33}$ & 310.00$^{+4.00}_{-9.00}$ \\ LGG 345 & NGC 5084 & 0.112 & $\leq$4 & point & Galaxy & - & -& - \\ LGG 351 & 
NGC 5153 & 0.291 & - & - & Galaxy & - & -& - \\ LGG 363 & NGC 5353 & 0.170 & $\leq$10 & point & Group & - & - & -\\ LGG 393 & NGC 5846 & 0.126 & 12$^a$ & small-scale jet & Group & 0.41$^{+0.11}_{-0.15}$ & 1.45$^{+0.42}_{-0.38}$ & 34.50$^{+0.70}_{-0.80}$ \\ LGG 402 & NGC 5982 & 0.213 & $\leq$3 & point & Group & - & - & -\\ LGG 421 & NGC 6658 & 0.305 & - & - & Point & - & -& - \\ LGG 473 & NGC 7619 & 0.262 & $\leq$6$^a$ & point & Group & - & -& - \\ \hline \end{tabular} \end{center} $^a$ \citet{Simona11}, $^b$ Measured from the 1.4~GHz image, $^c$ Measured from the 610~MHz image, $^d$ \citet{Shulevski12} (1.4~GHz)\\ \end{table*} \subsection{235~MHz radio power and largest linear size} Figure~\ref{LLS235Power} shows the 235~MHz power of our sources plotted against their largest linear size (LLS) at any radio frequency, with data points from the study of \citet{Simona11} for comparison. For all resolved radio sources (i.e., excluding point-like sources) the largest linear size was measured across the maximum extent of the detected radio emission, at the frequency at which the emission is most extended. The resolved radio sources of the CLoGS central galaxies cover a wide range of spatial scales, from $\sim$12~kpc (small-scale jets; NGC~5846) to $>200$~kpc (large-scale jets; NGC~1167), with 235~MHz radio powers in the range $\sim5\times10^{21}$ W Hz$^{-1}$ to $\sim5\times10^{24}$ W Hz$^{-1}$. Our CLoGS radio sources seem to follow the same linear correlation between size and power noted by \citet{Ledlow02} and \citet{Simona11}. Although the sample in \citet{Simona11} included only cavity systems in groups, and the work of \citet{Ledlow02} included radio sources that resided in rich clusters, we see that this linear correlation holds over 3 orders of magnitude at 235~MHz and extends to low-power radio sources.
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{LLSvs235MHzPowerVSsimonaFnopoint2018} \caption{Radio power at 235 MHz plotted against the largest linear size of radio sources associated with the central brightest group ellipticals of the CLoGS high-richness sample, with data points from \citet{Simona11} for comparison. Different symbols indicate the radio morphology of the CLoGS sources.} \label{LLS235Power} \end{figure} \begin{table} \centering \caption{Radio-loud AGN (P$_{1.4GHz}>$10$^{23}$ W Hz$^{-1}$) in CLoGS group dominant early-type galaxies. Columns show the LGG number, the BGE name and the 1.4 GHz power.} \begin{tabular}{lcc} \hline Group & BGE & P$_{1.4GHz}$ \\ & & (10$^{23}$ W Hz$^{-1}$) \\ \hline LGG 9 & NGC 193 & 11.1$\pm$0.6 \\ LGG 80 & NGC 1167 & 10.5$\pm$0.6 \\ LGG 278 & NGC 4261 & 23.2$\pm$0.5 \\ \hline \end{tabular} \label{radioloud23} \end{table} \subsection{Spectral index} The spectral index of synchrotron emission corresponds to the index of the energy distribution of the relativistic electrons in the source, which, in the absence of energy injection, will steepen over time owing to synchrotron and inverse Compton losses \citep[e.g.,][]{deZotti10}. In our sample, 19/26 galaxies are detected at both 610 and 235~MHz. Considering only these two frequencies, we find that 2/19 sources have steep radio spectra with $\alpha_{235}^{610}>$1 (NGC~1587 and NGC~5044), while the majority of the sources range from very flat values of $\alpha_{235}^{610}\sim$0.2 to typical radio synchrotron spectra of $\alpha_{235}^{610}\sim$0.9. Only NGC~940 presents a flux density greater at 610~MHz than at 235~MHz, giving an inverted spectral index of $\alpha_{235}^{610}=-0.28\pm0.04$. Excluding the two steep-spectrum outliers, we find that the mean spectral index for the remaining 17/19 radio sources residing in the central galaxies of our group sample is $\alpha_{235}^{610}=0.53\pm0.16$.
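As a consistency check on the tabulated values, the two-point spectral index (with the convention $S_\nu \propto \nu^{-\alpha}$ used throughout this paper) can be computed directly from the measured flux densities:

```python
import math

def spectral_index(S1, nu1, S2, nu2):
    """Two-point spectral index alpha, with the convention S_nu ∝ nu^-alpha."""
    return math.log(S1 / S2) / math.log(nu2 / nu1)

# Flux densities (mJy) from Table~2 of this paper:
print(round(spectral_index(28.5, 235.0, 13.6, 610.0), 2))  # NGC 410  → 0.78
print(round(spectral_index(3.3, 235.0, 4.3, 610.0), 2))    # NGC 940  → -0.28 (inverted)
```

A negative index, as for NGC~940, simply means the flux density rises with frequency over this band.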
We can also measure the spectral index between 235 and 1400~MHz for 18/26 BGEs. The plots of the spectral distribution of the BGEs detected at both GMRT 235/610~MHz and 1.4~GHz from NVSS are shown in Appendix~C, from which we see that in most cases a simple power law provides a reasonable representation of the spectrum as a whole for each central galaxy. Only small deviations from a power law are observed in some systems, attributable mainly to small flux density offsets at specific frequencies, arising either from a difference in morphology between the two GMRT frequencies (e.g., NGC~5044, ESO~507-25, NGC~1060) or from the lower sensitivity at 235~MHz. The mean value of $\alpha_{235}^{1400}$ calculated for the 18/26 BGEs with both 235 and 1400 MHz data is $0.60\pm0.16$. In the case of the two steep-spectrum systems (NGC~1587 and NGC~5044), we observe a deviation from a simple power law. This is caused by differences in morphology between frequencies, with both the remnant jet in NGC~5044 and the diffuse emission around NGC~1587 being visible only at 235~MHz. The steep spectra of these sources are an indicator that we are observing radio emission from different outbursts \citep[see also][]{Simona11}. Table~\ref{radiospix} shows the mean values of the spectral indices for each radio morphology class. Naturally, the steepest mean indices are found for the remnant jets. The mean indices for the other classes are all comparable within the uncertainties, although there is a hint that, as expected, steeper spectra are found in diffuse sources and small jet systems, and flatter spectra in the point-like sources. For the four sources where extended emission was observed at both 235 and 610~MHz, we also calculated separate spectral indices for the core and the extended components by creating images with matched resolution. Table~\ref{corespix} lists these spectral index values.
In three systems (NGC~193, NGC~677, NGC~3078) we find that the spectral index does not differ between the two components. Only NGC~4261 exhibits a flatter core index, probably as a result of the `cosmic conspiracy' (see \citealt{Cotton80}), or the presence of free-free emission \citep[see][for a more in-depth discussion]{Kolokythas14}. \begin{table} \centering \caption{Mean spectral indices $\alpha_{235}^{610}$ and $\alpha_{235}^{1400}$ for the different radio morphologies of our sources.} \begin{tabular}{lcc} \hline Radio Morphology & Mean $\alpha_{235}^{610}$ & Mean $\alpha_{235}^{1400}$ \\ \hline Point-like & 0.46$\pm$0.11 & 0.54$\pm$0.11 \\ Small-scale jet & 0.70$\pm$0.06 & 0.60$\pm$0.06 \\ Remnant jet & 1.24$\pm$0.06 & 0.77$\pm$0.06 \\ Large-scale jet & 0.53$\pm$0.06 & 0.57$\pm$0.06 \\ Diffuse emission & 0.64$\pm$0.07 & 0.65$\pm$0.09 \\ \hline \end{tabular} \label{radiospix} \end{table} \begin{table} \centering \caption{235-610~MHz spectral indices for the cores and extended emission of those radio sources in our sample with their morphology resolved at both GMRT frequencies. Columns show the LGG number, the BGE name, and the spectral index $\alpha_{235}^{610}$ of the core and the surrounding emission of each source.} \begin{tabular}{llcc} \hline Group & BGE & Core $\alpha_{235}^{610}$ & Surrounding $\alpha_{235}^{610}$ \\ \hline LGG 9 & NGC 193 & 0.54$\pm$0.04 & 0.53$\pm$0.04 \\ LGG 31 & NGC 677 & 0.79$\pm$0.04 & 0.82$\pm$0.04 \\ LGG 185 & NGC 3078 & 0.43$\pm$0.04 & 0.45$\pm$0.04 \\ LGG 278 & NGC 4261 & 0.20$\pm$0.04 & 0.52$\pm$0.04 \\ \hline \end{tabular} \label{corespix} \end{table} \section{DISCUSSION} \subsection{Detection statistics and comparisons} Comparison between group and cluster samples in the radio requires caution, since the most relevant studies have not used exactly the same selection criteria (e.g., distance, radio power, etc.).
Several studies have investigated the radio detection fraction of brightest cluster galaxies, group-dominant galaxies and large ellipticals in the local Universe. \citet{Dunn10}, using a sample of nearby bright ellipticals, most of which reside in galaxy groups or clusters, found that $\sim$81\% (34/42) are detected using the NVSS and Sydney University Molonglo Sky Survey (SUMSS, 843~MHz), rising to $\sim$94\% (17/18) for those systems with \textit{ROSAT} All-Sky Survey $0.1-2.4$~keV X-ray fluxes $>3\times10^{-12}$~erg~s$^{-1}$~cm$^{-2}$ (equivalent to L$_{0.1-2.4keV}\sim2.3\times10^{42}$~erg~s$^{-1}$ at the distance limit of CLoGS). For comparison, with CLoGS we detect 21/26 BGEs from NVSS ($\sim$81\%), whereas all of our groups above the equivalent X-ray flux limit from the \citet{Dunn10} sample are detected (5/5; 100\%). Using the X-ray selected groups sample of \citet{Eckmiller11}, \citet{Bharadwaj14} found that 20/26 BGEs (77\%) are detected in radio using the NVSS, SUMSS and VLA Low frequency Sky Survey (VLSS, 74~MHz) radio catalogs. In the X-ray selected local, low-mass cluster sample of \citet{Magliocchetti07}, the radio detection rate in central BCGs is 20\% when using a radio luminosity limit of L$_{1.4GHz}>10^{22}$ W Hz$^{-1}$ (our detection rate with CLoGS above this limit is 7/26, $\sim$27\%), rising to 92\% (11/12 of BCGs) when this limit is lowered to L$_{1.4GHz}>10^{20}$ W Hz$^{-1}$, comparable to the sensitivity of our sample. These results agree with our own in suggesting that, with sufficiently deep observations, almost all group or cluster dominant galaxies will be found to host a central radio source. Comparison with more massive clusters is difficult, since they tend to be much more distant than our groups, with radio surveys usually identifying only the brightest central radio sources.
We note that numerous earlier X-ray selected cluster samples suggest a radio detection rate for BCGs of $\sim$50\% (e.g., the B55 sample, \citealt{Edge90,Peres98}; the Highest X-ray Flux Galaxy Cluster Sample, HIFLUGCS, \citealt{Reiprich02,Mittal09}), for sources of sufficient power to be detected in volumes extending out to $z=0.3$. A similar radio detection rate for BCGs was also found by \citet{Ma13} in a combined sample of X-ray selected galaxy clusters matched with NVSS ($\sim$52\% for radio sources $>3$~mJy) and by \citet{Kale15} ($\sim$48\%; matched with the NVSS and FIRST catalogs) in a combined sample of 59 X-ray selected BCGs extracted from the Extended Giant Metrewave Radio Telescope (GMRT) Radio Halo Survey (EGRHS; \citealt{Venturi07, Venturi08}). However, \citet{LinMohr07}, using a cluster sample from two large X-ray cluster catalogs drawn from the ROSAT All-Sky Survey (RASS), NORAS \citep{Bohringer00} and REFLEX \citep{Bohringer04}, detected 122 BCGs in 342 clusters (36\%) that have a radio component with flux density greater than 10~mJy at 1.4~GHz. The selection criterion of a cut-off at 10~mJy for the NVSS catalog matching is most probably the reason for this lower radio detection rate. Another recent study by \citet{Hogan15}, using a combination of X-ray selected BCG samples matched with the NVSS, SUMSS and FIRST radio catalogs, finds a detection rate of $61.1\pm5.5\%$ for the extended Brightest Cluster Sample (eBCS; \citealt{Ebeling00}), $62.6\pm5.5\%$ for the REFLEX-NVSS sample (ROSAT-ESO Flux Limited X-ray survey; \citealt{Bohringer04}) and $60.3\pm7.7\%$ for the REFLEX-SUMSS sample. These detection percentages are slightly higher than previous radio-BCG detection rates found in X-ray selected clusters, but are in good agreement within the errors. Comparing the maxBCG sample with FIRST by visual inspection, \citet{MeiLin10} identify 552 double-lobed central radio galaxies in the $\sim$13000 cataloged clusters ($\sim$4\%).
In our sample, only NGC~193 and NGC~4261 are large and bright enough to be comparable, though NGC~1167 probably was, and NGC~5044 may have been, in the recent past. Within the large uncertainties, this suggests a comparable fraction of large, double-lobed sources between groups and clusters, though clearly larger samples of groups are needed for a more accurate comparison. \subsection{The X-ray environment of CLoGS central radio galaxies} While in cluster environments an X-ray emitting intra-cluster medium is almost always present (e.g., 72\%, \citealt{Balogh11}; 89\%, \citealt{Andreon11}), in our optically-selected group sample we find that only $\sim$55\% of systems exhibit extended X-ray emission with luminosities and temperatures typical of group-scale haloes (see Paper~I). These X-ray bright groups exhibit a variety of dynamical states, including group-group mergers (NGC~1060 and NGC~7619) and two recently tidally-disturbed `sloshing' systems (NGC~5846 and NGC~5044, \citealt{Gastaldello13}). In other cases, disturbances in the X-ray-emitting gas are caused by a central radio source (e.g., NGC~193). The radio jets seen in these systems can potentially heat the IGM \citep[through a variety of mechanisms, see e.g.,][]{Fabian12}, balancing radiative energy losses and helping to maintain the long-term thermal equilibrium of the IGM. In examining interactions between the central radio galaxy and the IGM, we divided our groups into X-ray bright and X-ray faint subsets; X-ray bright groups are those which in Paper~I we find to have group-scale haloes (extending $>65$~kpc with L$_{X,R500}>10^{41}$~erg~s$^{-1}$), while X-ray faint groups are those that have either galaxy-scale diffuse or point-like X-ray emission. Using this classification, 14/26 of our groups are X-ray bright. Table~\ref{t:listtable2} lists the radio and X-ray morphology of the central BGEs in the high-richness sub-sample.
\begin{table} \centering \caption{Fraction of radio sources with a given radio morphology detected in X-ray bright and X-ray faint CLoGS high-richness groups.} \begin{tabular}{lcc} \hline Radio Morphology & X-ray Bright & X-ray Faint \\ \hline Point-like & 7/14 & 7/12 \\ Small/Large scale jet & 5/14 & 1/12 \\ Diffuse emission & 2/14 & 2/12 \\ No detection & 0 & 2/12 \\ \hline \end{tabular} \label{radiomorphxrays} \end{table} Table~\ref{radiomorphxrays} shows the fraction of X-ray bright and faint groups which host each class of central radio source. There is a clear environmental difference for the jet radio sources between X-ray bright and faint groups. All but one of the jet sources are found in X-ray bright systems, confirming that X-ray bright groups or clusters are the preferred environment for radio jets \citep[as found by, e.g.,][]{Magliocchetti07,McNu}, even down to the relatively low mass range covered by our sample. This suggests that the presence of a group-scale IGM provides a richer gas supply which is more likely to fuel an outburst by the central AGN. On the other hand, all galaxies that lack radio emission are found in X-ray faint systems. With only two non-detections, this is not a statistically significant result, but it hints at an environmental trend worth further investigation with larger samples. These X-ray faint groups may have shallow gravitational potentials that are unable to heat the available gas to temperatures where X-ray emission becomes detectable (though this is perhaps unlikely given the sensitivity of our X-ray data and the velocity dispersions of the groups), or they may be deficient in diffuse intra-group gas and hence not bright enough, as has been suggested for other X-ray faint optically-selected groups \citep[e.g.,][]{Rasmussen06}.
All radio jet systems in X-ray bright groups (5/5) reside in cool cores \citep[in agreement with the study of][]{Sun09}, suggesting that a cool core is a necessary requirement for jet activity in such systems. NGC~1167, the only jet source identified in an X-ray faint group, has an extensive cold gas reservoir, which has likely fueled the AGN outburst. Of the five X-ray bright jet-hosting systems, four show clear correlations between the GMRT low-frequency radio and X-ray structures, in the form of cavities and/or rims of enhanced X-ray emission surrounding the radio lobes. This provides direct evidence of interactions between the central radio galaxy and the IGM. Of these four systems, the two large-scale, currently active radio jet systems (NGC~193 and NGC~4261) have large cavities or cocoon-like structures excavated and filled by the radio lobes. NGC~5846 and NGC~5044 possess smaller-scale cavities commensurate with their current radio activity, but there is evidence from abundance mapping of a larger cavity correlated with the old, detached radio lobe of NGC~5044 \citep{OSullivan14}. In the fifth system, NGC~1060, the jets are too small for cavities to be resolved in the \textit{XMM-Newton} observation described in Paper~I. However, the predominant radio morphology across all our groups is point-like, found in $\sim$50\% of X-ray bright and $\sim$58\% of X-ray faint groups. In X-ray bright systems, point-like radio sources are common in both cool-core and non-cool-core groups. Considering an AGN as the dominant source of radio activity (as is most likely the case in almost all of our point-like systems; see \S 5.3), this suggests that not only the environment, but also the efficiency of the processes involved, plays a significant role in determining the morphology of a radio source.
While in X-ray faint groups the lack of a significant quantity of gas to fuel the central engine may explain the presence of only low-power activity, in X-ray bright cool-core systems we must invoke either inefficiencies in fueling, or duty-cycle effects (e.g., the need to build up a reservoir of cold, dense gas via cooling from the IGM after a previous round of feedback heating) to explain the lack of ongoing jet activity. Another possible explanation for low-power point-like radio sources in both environments is a contribution from the stellar population. We also note that diffuse radio sources appear equally common in both X-ray bright and X-ray faint groups. We discuss these sources further in \S~6.5. \subsection{Environment and spectral index} In dense environments, such as the centres of galaxy clusters, radio galaxies are known to usually present a steep synchrotron spectral slope compared to galaxies that reside in the field \citep[e.g.,][]{Govoni01,Bornancini10}. As the radio morphology of a galaxy depends on the environment in which it is embedded, we examine the radio spectral index distribution of the central galaxies in our sample in order to reveal any relation that may exist between spectral index and environment, comparing also with what is known for BCGs. The typical range of spectral index values $\alpha$ of radio galaxies at the centres of cool-core clusters is $1-2$ \citep{Simona11}. The simplest interpretation for such steep radio spectra is the confinement of the relativistic radio plasma in the central regions of the cluster by the high-pressure, dense external medium, which may slow the expansion and consequent fading of the radio source \citep[e.g.,][]{Fanti95,Murgia99}. \citet{Simona11} observed 15 X-ray bright galaxy groups hosting extended radio sources with the GMRT at 235 and 610~MHz, and found that 9/15 had a steep spectral index of $\alpha_{235}^{610}>1$.
This is the opposite of what we find for our CLoGS sample, where only 2/26 BGEs host a steep-spectrum radio source. Clearly our inclusion of X-ray faint groups and point-like central radio sources, some of which also have a contribution from star formation in their radio emission, has a significant impact on the spectral index distribution. Our sources are more comparable to those found in the general population of galaxy clusters, both cool-core and non-cool-core. \citet{Bornancini10}, using a sample of maxBCG clusters, calculated the mean value of the spectral index for BCGs between 325~MHz and 1.4~GHz to be $\alpha_{325}^{1400}=0.65$. This is similar to our mean spectral index ($\alpha_{235}^{1400}=0.60\pm0.16$) over almost the same frequency range. The mean spectral index in cool-core groups alone is $\alpha_{235}^{1400}=0.72\pm0.13$, with the equivalent mean value in the non-cool-core ones being $\alpha_{235}^{1400}=0.60\pm0.16$. Cool-core groups therefore present a steeper mean spectral index, but given the uncertainties in the values the difference is not significant. Figure~\ref{LXspix2351400} shows the spectral index $\alpha_{235}^{1400}$ plotted against L$_{X R_{500}}$, the X-ray luminosity of the group within the fiducial radius R$_{500}$. We find that radio sources with spectral indices steeper than the average value of $\alpha_{235}^{1400}=0.60$ reside in X-ray bright groups, regardless of their radio morphology. These steeper-than-average sources include not only jets and known remnants of past outbursts, but also systems with point-like or diffuse radio morphology. This supports the interpretation suggested above, that radio sources remain visible for longer periods in environments with a dense IGM, while those in X-ray faint systems expand and fade relatively rapidly.
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{LXvsa2351400Xraymorphnewf2018b} \caption{Spectral index distribution $\alpha_{235}^{1400}$ of BGEs compared to the group X-ray luminosity within R$_{500}$. The solid horizontal line indicates the average spectral index value of 0.60 at this frequency range, with the associated error bars shown as dot-dashed lines. The red dashed vertical line shows the threshold in X-ray luminosity (L$_{X R_{500}}$=10$^{41}$ erg s$^{-1}$) above which we consider groups as X-ray bright.} \label{LXspix2351400} \end{figure} \subsection{Radio emission and group spiral fraction} The environment in which a galaxy resides is known to play an important role in its properties. In galaxy groups, the fraction of early-type galaxies ranges from $\sim$25\% (as in the field) to $\sim$55\% \citep[as in rich clusters,][]{MulchaeyZabludoff98}. A high fraction of spiral galaxies can be an indicator of a group's dynamical youth. Systems whose collapse and virialisation occurred further in the past are expected to have higher fractions of early-type galaxies, since there has been a longer history of interactions and mergers. Studies of compact group samples have suggested a correlation between extended X-ray emission and low spiral fraction \citep[e.g.,][]{Pildis95}, indicating that dynamical age is linked to the ability to build and retain a hot IGM. We therefore explore whether any correlation exists between the close environment of a BGE and its radio morphology or the group's X-ray emission, by examining the spiral galaxy content of the member galaxies in our groups. Table~\ref{t:listtable3} shows the spiral fractions in each group. We took galaxy morphologies from HyperLEDA\footnote{http://leda.univ-lyon1.fr/}, classifying them as either early-type (morphological T-type $<$0), late-type (T-type $\geq$ 0) or unknown.
As the objects with unknown morphology are typically faint and small, we consider them as dwarf galaxies and include them with the late-type galaxies, defining the spiral fraction F$_{sp}$ as the number of late-type or unknown-morphology galaxies over the total number of group members. We follow \citet{Bitsakis10} in considering groups with F$_{sp}>0.75$ as spiral-rich, and potentially dynamically young. We find that 11/26 (42\%) of our groups are classified as spiral-rich and 15/26 (58\%) as spiral-poor. \citet{Bitsakis14}, considering a sample of 28 Hickson compact groups \citep[HCGs,][]{Hickson92} which are likely to have a higher rate of galaxy interaction than most of our groups \citep{Hickson97}, finds a similar fraction to ours, with 46\% (13/28) being spiral-rich and 54\% (15/28) being spiral-poor. Fig.~\ref{Fspunknown} shows the relation between F$_{sp}$ and the X-ray luminosity calculated at R$_{500}$, L$_{X R_{500}}$. Point-like radio sources appear to be equally divided between dynamically young and old groups, therefore showing no preference for the group environment in which they reside. We further find that in high spiral fraction groups the radio point-like systems are almost evenly distributed between X-ray bright (3/7) and X-ray faint groups (4/7). In \citet{Ponman96} a mild trend between spiral fraction and diffuse X-ray luminosity is suggested, using ROSAT observations of a sample of HCGs. In our sample, however, we see that spiral-rich systems have no particular preference in X-ray environment, since 6/11 spiral-rich groups are X-ray faint and 5/11 X-ray bright. \begin{table} \begin{minipage}{\linewidth} \caption{Spiral fraction for the high-richness sub-sample CLoGS groups.
We note the brightest group early-type galaxy and the spiral fraction F$_{sp}$, which is the ratio of the number of late-type plus unknown galaxies over the total number of galaxies.} \centering \label{t:listtable3} \begin{tabular}{|c|c|} \hline BGE & Spiral fraction \\ & F$_{sp}$ \\ \hline NGC 193 & 0.67 \\ NGC 410 & 0.52 \\ NGC 584 & 0.64 \\ NGC 677 & 0.91 \\ NGC 777 & 0.56 \\ NGC 940 & 0.78 \\ NGC 924 & 0.71 \\ NGC 978 & 0.60 \\ NGC 1060 & 0.80 \\ NGC 1167 & 0.75 \\ NGC 1453 & 0.76 \\ NGC 1587 & 0.62 \\ NGC 2563 & 0.72 \\ NGC 3078 & 0.74 \\ NGC 4008 & 0.88 \\ NGC 4169 & 0.88 \\ NGC 4261 & 0.71 \\ ESO 507-25 & 0.57 \\ NGC 5044 & 0.53 \\ NGC 5084 & 0.82 \\ NGC 5153 & 0.79 \\ NGC 5353 & 0.91 \\ NGC 5846 & 0.48 \\ NGC 5982 & 0.86 \\ NGC 6658 & 0.60 \\ NGC 7619 & 0.62 \\ \hline \end{tabular} \end{minipage} \end{table} For Fig.~\ref{Fspunknown} a Spearman correlation coefficient of $\rho=-0.17$ with $P=0.42$ is found, showing that the association between F$_{sp}$ and L$_{XR500}$ for our groups cannot be considered statistically significant. However, we point out that the majority of spiral-rich systems with L$_{XR500}<10^{42}$~erg~s$^{-1}$, 75\% (6/8), appear to exhibit point-like radio emission, and that 67\% (4/6) of the jet systems reside in dynamically old groups. Fig.~\ref{Fspunknown2} shows the relation between F$_{sp}$ and the radio power at 235~MHz (P$_{235 MHz}$) for the systems with radio emission detected at 235~MHz. We find that the majority of radio point-like systems (6/9) detected at 235~MHz reside in groups with higher spiral fractions (dynamically young), with a mean spiral fraction of 0.73$\pm$0.12. Comparing with the mean value for jet sources (0.66$\pm$0.21), we find that point-like radio sources have a marginally higher mean F$_{sp}$ but, given the large errors, this result cannot be regarded as statistically significant.
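The spiral fraction used in this section is a simple ratio and can be computed directly from catalogued morphological T-types. A minimal sketch follows, assuming the HyperLEDA convention (T-type $<0$ for early types) and a purely hypothetical membership list, not one of our groups:

```python
def spiral_fraction(t_types):
    """F_sp = (late-type + unknown-morphology members) / all members.

    t_types: iterable of morphological T-types; None marks an unknown
    morphology, which is counted with the late types as in the text.
    """
    late_or_unknown = sum(1 for t in t_types if t is None or t >= 0)
    return late_or_unknown / len(t_types)


# Hypothetical group of eight members: three early types (T < 0),
# four late types (T >= 0) and one of unknown morphology.
toy_members = [-5, -2, 1, 3, None, 4, -3, 0]
f_sp = spiral_fraction(toy_members)  # 5/8 = 0.625 -> spiral-poor (< 0.75)
```

A group with F$_{sp}>0.75$ would be flagged as spiral-rich under the \citet{Bitsakis10} threshold adopted above.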
While the suggestion that BGEs with point-like radio emission may be more common in X-ray faint, dynamically young groups is interesting, the lack of a clear trend is perhaps more telling. Point-like emission might be expected from the low accretion rates of AGN in hot-gas-poor systems, and in some cases star formation fueled by cold gas (perhaps more common in X-ray faint groups) may make a contribution, but low radio luminosities and point-like emission would also be expected in the interludes between outbursts of jet activity in BGEs at the centres of cooling flows (group-relevant timescales $\sim$10$^7$ $-$ 10$^8$ yr; see \citealt{David11}, \citealt{Machacek11}, \citealt{Randall11}, \citealt{Rafferty13}). The presence of jets in dynamically young systems also indicates the diverse factors which can affect nuclear activity, with our sample including at least one example of powerful remnant jets likely fueled by interactions with a cold-gas-rich neighbor. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{FspunknownXrayLxr500lineFsp75out5a2018b} \caption{Spiral fraction F$_{sp}$, i.e. the number of late-type plus unknown galaxies over the total number of galaxies in each group, for different radio morphologies in relation to the X-ray luminosity at R$_{500}$ (L$_{XR_{500}}$). X-ray bright groups are marked with red squares, different radio morphologies are shown with different symbols, and the red line indicates a spiral fraction of 0.75, following \citet{Bitsakis10}.} \label{Fspunknown} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{FspunknownXrayP235lineFsp75out5a2018b} \caption{Spiral fraction F$_{sp}$ as in Fig.~\ref{Fspunknown}, in relation to the radio power at 235~MHz (P$_{235 MHz}$).
The radio morphology of each group can also be seen here in different symbols, with the X-ray bright groups marked with red squares.} \label{Fspunknown2} \end{figure} \subsection{Diffuse radio sources in group central galaxies} \label{sec:diffuse} The origin of the diffuse radio sources that we find in our groups is somewhat mysterious. They have some similarities to the radio mini-halos seen in cool-core clusters \citep[e.g.,][]{MazzottaGiacintucci08,Doria12,Kale13}. However, the scale of the diffuse emission in our groups is small, only a few tens of kpc, whereas mini-halos in clusters can extend up to a few hundred kpc in radius. Turbulence in groups is considerably weaker than in clusters, and is generally thought not to be capable of providing the energy necessary to accelerate the electron population and produce radio emission. It is also notable that the groups in which we see evidence of mergers and tidal disturbances do not host diffuse radio sources. It therefore seems unlikely that our sources are radio halos or mini-halos. Of our four diffuse radio sources, in three cases the diffuse emission is only observed at 610~MHz. This is a result of the significantly superior sensitivity achieved at 610~MHz, $5-30$ times better than that at 235~MHz in these systems. We find no clear correlation between X-ray and radio structure for the diffuse sources; only two of the four reside in X-ray bright groups, and in those two (NGC~677 and NGC~1587) no cavities or shock fronts are observed. The spectral indices of these two sources are relatively steep, and the radio emission is more extended than the stellar population of the galaxies, ruling out star formation as the dominant source of emission. The spectral indices of the two diffuse sources in X-ray faint groups are relatively flat. They are therefore unlikely to be radio phoenixes or relic radio galaxies. Of the two X-ray faint systems, NGC~3078 presents the more interesting morphology.
Located in a galaxy-scale X-ray halo, at both 235 and 610~MHz it shows a clear, relatively symmetric east-west extension, roughly matching the minor axis of its host galaxy and confined within the stellar body of the galaxy (at least in projection). This morphology could indicate a galactic wind or an outflow associated with a failed or decollimated jet source. A second possibility is star formation. However, using the relation of \citet{Garn09} we find the star-formation rate expected from the diffuse 610~MHz emission alone (excluding the central point source) to be $\sim$2.5~M$_{\odot}$ yr$^{-1}$, which is an order of magnitude higher than the rate expected from the mid-infrared \textit{Wide-field Infrared Survey Explorer} (\textit{WISE}) W3 luminosity ($\sim$0.2 M$_{\odot}$ yr$^{-1}$). ESO~507-25, located in another galaxy-scale X-ray halo, is less structured, with diffuse emission extending in all directions, again on scales smaller than the stellar extent. As in NGC~3078, star formation can be ruled out as the main origin of the radio emission in this system, as the predicted rate ($\sim$1.20 M$_{\odot}$ yr$^{-1}$) is again almost an order of magnitude greater than the rate implied by the WISE L$_{W3}$ luminosity ($\sim$0.17 M$_{\odot}$ yr$^{-1}$). In addition, for both of these systems the SFR$_{FUV}$ rates are also too low (see Appendix~B). The flat spectral indices of these sources ($\alpha^{1400}_{235}=0.35\pm0.14$ and $0.47\pm0.05$) suggest ongoing activity. In general, it seems that while star formation may make a contribution to some of our diffuse radio sources, it cannot be the dominant source of emission in any of them. We are left to conclude that there is no clear single origin for these sources, and that we may be observing sources with a similar appearance but different formation mechanisms, depending on their environments.
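The comparison between radio-implied and independently estimated star-formation rates can be sketched numerically. The linear calibration constant below is an illustrative placeholder of roughly the right order of magnitude, not the exact \citet{Garn09} coefficient, and the flux density and distance are hypothetical values rather than measurements of our sources:

```python
import math

MPC_IN_M = 3.0857e22   # metres per megaparsec
CAL_W_HZ = 1.8e21      # assumed W/Hz per (Msun/yr); placeholder normalisation

def radio_luminosity_w_hz(flux_mjy, dist_mpc):
    """Monochromatic luminosity L = 4 pi D^2 S (k-correction neglected)."""
    s_si = flux_mjy * 1e-29          # 1 mJy = 1e-29 W m^-2 Hz^-1
    d_m = dist_mpc * MPC_IN_M
    return 4.0 * math.pi * d_m**2 * s_si

def sfr_from_610mhz(flux_mjy, dist_mpc):
    """SFR (Msun/yr) implied by the assumed linear radio-SFR calibration."""
    return radio_luminosity_w_hz(flux_mjy, dist_mpc) / CAL_W_HZ

# Hypothetical diffuse component: 10 mJy at 35 Mpc -> SFR of order 1 Msun/yr
sfr_est = sfr_from_610mhz(10.0, 35.0)
```

If the radio-implied rate greatly exceeds independent estimates (e.g., from \textit{WISE} or FUV photometry), star formation cannot be the dominant source of the emission, which is the logic of the comparison above.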
A group-scale IGM can plausibly confine old radio lobes in the group core and, over time, IGM gas motions may alter their morphology until the lobe structure is lost. In NGC~1587, an ongoing encounter with a companion galaxy, NGC~1588, raises the possibility of shock-driven re-acceleration, but the lack of a clear correlation between the radio and X-ray morphologies (i.e., no clear shock fronts or correlated surface brightness edges) is a problem for this hypothesis. In the X-ray faint groups, the flat spectral indices make relic radio lobes an unlikely explanation, but a disrupted jet may provide an explanation for the emission in NGC~3078. \subsection{Power output of jet systems} For the BGEs that host jet sources, we estimate the mechanical power output of the jets using an approach similar to that described in \citet{OSullivan11}. For NGC~193, NGC~4261, NGC~5044 and NGC~5846, cavity sizes were estimated from the X-ray images. For NGC~1060, the spatial resolution of the available \textit{XMM-Newton} data was insufficient to resolve cavities on the scale of the small-scale jet seen only at 610~MHz. We therefore estimated the cavity size from the 610~MHz extent. We defined the mechanical power output estimated from the cavities, P$_\mathrm{cav}$, as: \begin{equation} \label{PcavX} {\mathrm{P}}_{\mathrm{cav}}=\frac{4p_{th}V}{t_{cav}}, \end{equation} where $p_{th}$ is the value of the azimuthally-averaged pressure profile at the radius of the cavity centre, $V$ is the cavity volume, and $t_{cav}$ is an estimate of the age of the cavity. We chose to use the sonic timescale (i.e., the time required for the cavity to rise to its current position at the local sound speed in the IGM) so as to facilitate comparison with estimates from the literature.
The sonic timescale was calculated as \begin{equation} \label{tcav} {\mathrm{t}}_{\mathrm{cav}}=\frac{r_{cav}}{\sqrt{\frac{\gamma kT}{m}}}, \end{equation} where $r_{cav}$ is the mean radius of the cavity from the nucleus, $\gamma$ is the adiabatic index of the IGM (taken to be 5/3), $k$ is Boltzmann's constant, $T$ is the IGM temperature, and $m$ is the mean particle mass in the IGM. We also note that for NGC~5044 we exclude the detached radio lobe from consideration, since its filling factor and age cannot be accurately estimated, and only include the cavities identified in \citet{David17}. Our estimates of P$_\mathrm{cav}$ are listed in Table~\ref{t:listtable2}. We find mechanical power values in the range $\sim10^{41}-10^{43}$ erg s$^{-1}$, typical for galaxy groups. Fig.~\ref{PcavXP235} shows the relation between the mechanical power output (P$_\mathrm{cav}$) and the radio power at 235~MHz (P$_{235}$) for the five jet systems in the CLoGS high-richness sub-sample. Our galaxy groups fall at the lower end of the range covered by the 24 groups and clusters described by \citet{Birzan08} and are in good agreement with the 7 groups of \citet{OSullivan11}. This is unsurprising given the overlap between the samples (three systems in common: NGC~193, NGC~5044 and NGC~5846). NGC~1060 has one of the lowest observed cavity powers, suggesting either that the cavities are larger than the 610~MHz emission, or that its small lobes are still young and as yet over-pressured with respect to the IGM. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{PcavP235otherstudiesplusHLnocommon2018f} \caption{Cavity power P$_\mathrm{cav}$ calculated from X-rays vs. radio power at 235~MHz. Systems from our group sample are marked by pink circles with errors, members from \citet{Hlavacek2012} are shown by cyan circles, members from \citet{OSullivan11} by green triangles and members from \citet{Birzan08} by black triangles.
Note that for the three systems in common between CLoGS and \citet{OSullivan11} (NGC~193, NGC~5044 and NGC~5846) we use the CLoGS measurements. The black dotted line indicates the relation found by B\^{i}rzan et al., and the green dashed line that of O'Sullivan et al.} \label{PcavXP235} \end{figure} One goal of measuring the relationship between radio power and cavity power is to allow estimates of feedback in systems where high-quality X-ray data are not available. The surveys performed by the eROSITA mission are expected to detect tens of thousands of groups and poor clusters \citep[]{Merloni12,Pillepich12}, but are unlikely to provide reliable estimates of cavity size. The use of radio data as a proxy estimator of AGN power output will therefore be important when studying this population. We can consider the likely effectiveness of such an approach by applying it to our data, using the P$_\mathrm{cav}$ $-$ P$_{235}$ relation measured by \citet{OSullivan11} to estimate a cavity power based on the radio power. The 235~MHz radio power is used since low frequencies are least likely to be affected by spectral aging. One immediate benefit is that we can estimate a mechanical power output for NGC~1167, where no X-ray luminous IGM is detected. As expected for such an extended source, the old radio lobes of NGC~1167 imply that during the period of activity the jets in this system had a large mechanical power output, $\sim3.5\times10^{43}$~erg~s$^{-1}$. Table~\ref{t:listtable2} lists the radio-estimated jet powers for the six groups. We find that these estimates are in agreement with the X-ray estimate of P$_\mathrm{cav}$ for NGC~5044, but disagree for the small-scale jet sources and the large-scale jet systems with high P$_{235MHz}$. For the small jets, we find both under- and over-estimates, probably arising from the relatively large scatter in the relation.
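The cavity power and sonic timescale expressions above can be combined into a short numerical sketch. The gas density, temperature and cavity geometry used here are illustrative placeholders rather than measurements of any CLoGS system, and the thermal pressure is approximated as $p_{th}\simeq 2 n_e kT$:

```python
import math

KEV_ERG = 1.6022e-9    # erg per keV
M_P = 1.6726e-24       # proton mass (g)
KPC_CM = 3.0857e21     # cm per kpc
YR_S = 3.156e7         # seconds per year

def sonic_timescale_s(r_cav_kpc, kT_keV, mu=0.62, gamma=5.0 / 3.0):
    """Time for the cavity to rise to radius r_cav at the IGM sound speed."""
    c_s = math.sqrt(gamma * kT_keV * KEV_ERG / (mu * M_P))   # cm/s
    return r_cav_kpc * KPC_CM / c_s

def cavity_power_erg_s(n_e, kT_keV, r_bubble_kpc, r_cav_kpc):
    """P_cav = 4 p_th V / t_cav for a spherical bubble of radius r_bubble."""
    p_th = 2.0 * n_e * kT_keV * KEV_ERG                      # erg/cm^3 (assumed)
    volume = (4.0 / 3.0) * math.pi * (r_bubble_kpc * KPC_CM) ** 3
    return 4.0 * p_th * volume / sonic_timescale_s(r_cav_kpc, kT_keV)

# Placeholder inputs: n_e = 0.01 cm^-3, kT = 1 keV, a 5 kpc bubble at 10 kpc.
p_cav = cavity_power_erg_s(0.01, 1.0, 5.0, 10.0)   # ~3e42 erg/s
t_cav_yr = sonic_timescale_s(10.0, 1.0) / YR_S     # ~2e7 yr
```

For these illustrative inputs the result lands within the $\sim10^{41}-10^{43}$~erg~s$^{-1}$ range quoted above as typical for galaxy groups.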
We note that this scatter appears to be a product of the intrinsic uncertainties in cavity measurements, and is not significantly improved at other frequencies \citep[see, e.g.,][]{Birzan08,Kokotanekov17}. For the large-scale jets, the radio estimate of cavity power is greater than the X-ray measurement, though the large uncertainties on the radio estimate make the differences only $\sim$2$\sigma$ significant. Both NGC~4261 and NGC~193 have shells of X-ray emission surrounding their radio lobes, indicating that the lobes are still over-pressured and expanding trans-sonically. It is therefore possible that for these systems the X-ray cavity power is actually an underestimate of the true jet power. We might expect the radio and X-ray estimates to come into agreement as these sources age, expand, and reach equilibrium with the surrounding IGM. \subsubsection{Heating vs Cooling} We can also examine the balance between heating by the central AGN and radiative cooling of the IGM core. Fig.~\ref{PcavLxbolvsPanagoulia} shows P$_\mathrm{cav}$ compared with the cooling luminosity L$_{cool}$ (see \S~\ref{sec:Xray}), over-plotted on data from the sample of \citet{Panagoulia14}. \begin{figure} \centering{ \includegraphics[width=0.48\textwidth]{PcavLx057keVvsPanagouliamultiplecav} } \caption{Cavity power from the X-rays in relation to the cooling luminosity, L$_{cool}$, calculated in the 0.5$-$7 keV band for our CLoGS groups (colored triangles; see Table 4), in comparison to the groups and clusters used in the sample study of \citet{Panagoulia14}.
For the data from this study, in order from left to right, cyan and pink triangles represent the large-scale jet systems (NGC~193 and NGC~4261), whereas red, green and blue triangles represent the small-scale jet ones (NGC~5846, NGC~5044 and NGC~1060).} \label{PcavLxbolvsPanagoulia} \end{figure} Comparing our results to \citet{Panagoulia14}, we find that the small-scale jet systems NGC~1060 and NGC~5846, along with the remnant jet in NGC~5044, lie within the scatter about the relation, indicating approximate thermal balance. The large-scale jet systems NGC~193 and NGC~4261 fall about two orders of magnitude above the relation, suggesting that they are significantly over-powered. It is notable that in both systems the size of the jets and lobes greatly exceeds that of the cooling region. NGC~4261 has a small cool core, only $\sim$10~kpc in radius \citep{ewan4261}, which corresponds to the cooling region as defined here. Its jets extend $\sim$40~kpc on each side (see also \citealt{Kolokythas14} for a detailed radio study), and seem to primarily impact the surrounding IGM, having excavated wedge-shaped channels to exit the core. The radio lobes of NGC~193 seem to have formed a cocoon or series of cavities with compressed gas rims, leaving little cool gas in the galaxy core. We note that the exceptional power of the outburst is not dependent on our assumption of a single cavity; using the jet powers estimated by \citet{Bogdan14} for a pair of cavities produces similar results. This may indicate that in both systems, cooling and heating have become detached, with the jets heating the IGM without significantly impacting the material which is fueling the AGN. In this case, absent some disturbance, we would expect these outbursts to continue until they exhaust the ability of the cool core to provide fuel.
\subsection{AGN feedback in CLoGS groups} Previous observational studies have suggested that feedback in most galaxy groups and clusters operates in a near-continuous `bubbling' mode \citep[e.g.,][]{Birzan08,Birzan12,Panagoulia14}, in which feedback heating is relatively gentle. This model has been supported by theoretical studies \citep[e.g.,][]{Gaspari11}. Our small-scale and remnant jet systems seem to fit in with this picture. However, there are counter-examples. \citet{Nulsenetal07,Nulsenetal09} found that the cavity power of a number of giant ellipticals (some of which are the dominant galaxies of groups and clusters) exceeds their cooling luminosity by up to an order of magnitude. \citet{Rafferty06} give the example of the exceptionally powerful outburst in the cluster MS~0735+7421, which again greatly exceeds the cooling luminosity, even when using a longer cooling time threshold and considering the bolometric X-ray luminosity. Investigating the effect of heating in the same cluster, \citet{Gitti07} found that powerful outbursts are likely occurring $\sim$10\% of the time in most cool core clusters. NGC~193 and NGC~4261 seem to fall into this category, with cavity powers greatly exceeding the cooling luminosity. Such systems may imply an intermittent feedback mechanism, with strong outbursts heating their surroundings enough that long periods of cooling will be required before a new fuel supply can be built up. Modeling suggests that this type of feedback can be produced by more chaotic accretion processes \citep[e.g.,][and references therein]{Prasad15, Prasad17}. It is interesting to note that both our large-scale, high-power jet systems are found in groups whose X-ray characteristics present difficulties for analysis.
NGC~4261 has an X-ray bright core, but its IGM has a relatively low surface brightness, its cavities extend beyond the \textit{Chandra} ACIS-S3 field of view, and its cavity rims were only recognised in \textit{XMM-Newton} observations \citep{ewan4261,Croston05}. The outburst in NGC~193 has severely disturbed its inner X-ray halo, making the usual radial analysis difficult and somewhat uncertain. Neither system is included in the majority of X-ray selected samples of groups used to study group properties and AGN feedback. This raises the question of whether the biases implicit in X-ray selection \citep[e.g., toward centrally-concentrated, relaxed cool-core systems,][]{Eckert11} may tend to exclude the more extreme cases of AGN feedback. NGC~4261 and NGC~193 are by no means the most extreme radio galaxies hosted by groups and poor clusters. In the nearby universe, NGC~383 and NGC~315 host the giant radio sources 3C~31 and B2~0055+30, both of which have jets extending hundreds of kiloparsecs \citep[see, e.g.,][]{Simona11}. The impact of such sources on the cooling cycle is not well understood, but they suggest that caution should be exercised before concluding that feedback in groups and clusters is always a gentle process with only mild effects on IGM properties. \section{Conclusions} \label{conclusions} In this paper we have presented, for the first time, GMRT radio images of the brightest group early-type galaxies from the high-richness CLoGS sub-sample at 235 and 610~MHz. A high radio detection rate of 92\% (24 of 26 BGEs) is found at 235, 610 or 1400~MHz, with the majority of the BGEs exhibiting low radio powers between 10$^{21}$ $-$ 10$^{23}$ W Hz$^{-1}$, and only three radio sources in the sample being characterized as radio loud (P$_{235~MHz}$ $>$10$^{24}$ W Hz$^{-1}$: NGC~4261, NGC~193 and NGC~5044).
In agreement with previous studies \citep[e.g.,][]{Magliocchetti07,Dunn10} we confirm the trend that nearly all dominant galaxies in groups or clusters host a central radio source. The extended radio sources in our sample have spatial scales spanning the range $\sim10-240$~kpc with a variety of morphologies, extending over 3 orders of magnitude in power, from $\sim$10$^{21}$ W Hz$^{-1}$ (NGC~5982) to $\sim$6 $\times$ 10$^{24}$ W Hz$^{-1}$ (NGC~4261). The majority of our systems (14/26, $\sim$53\%) exhibit a point-like radio morphology, while 6/26 groups ($\sim$23\%) host currently or recently active small/large-scale jets, and 4/26 groups (15\%) host diffuse radio sources. We find that the unresolved point-like radio sources are mainly AGN dominated, with the stellar population most probably contributing $20-40\%$ of the radio emission in 3 systems (NGC~940, NGC~2563, NGC~5982) and most likely dominating it in another (NGC~584). Comparing the X-ray environment in which the BGEs reside with their radio emission, we find a distinction between central radio sources in X-ray bright and faint groups. All but one of the jet sources are found in X-ray bright cool-core systems, confirming that X-ray bright groups and clusters are the preferred environment for radio jets to appear, even down to the lower mass range covered by our sample. On the other hand, all galaxies that lack radio emission are found in X-ray faint systems. Low power radio point-like sources are found to be common in both environments. We find that 19/26 galaxies are detected at both 610 and 235~MHz, with 2/19 sources exhibiting steep radio spectra with $\alpha_{235}^{610}>1$ (NGC~1587 and NGC~5044). The spectral indices of the remaining 17/19 sources cover the range $\alpha_{235}^{610}\approx0.2-0.9$ with a mean of $\alpha_{235}^{610}=0.53\pm0.16$.
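The two-point spectral indices quoted here follow the $S_\nu \propto \nu^{-\alpha}$ convention, so $\alpha$ can be recovered directly from a pair of flux density measurements. A minimal sketch (function name and inputs are our own, purely illustrative):

```python
import math

def spectral_index(s_low, s_high, nu_low, nu_high):
    """Two-point spectral index alpha for the S_nu ~ nu^(-alpha)
    convention: alpha > 0 means the source is brighter at the
    lower frequency, i.e. a 'steep' spectrum."""
    return -math.log(s_high / s_low) / math.log(nu_high / nu_low)
```

For example, a source whose 235~MHz flux density is twice its 610~MHz value has $\alpha_{235}^{610} \approx 0.73$, comfortably within the $0.2-0.9$ range quoted above.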
The equivalent mean 235$-$1400~MHz spectral index for the 18/26 BGEs that exhibit emission at both frequencies is measured to be $\alpha_{235}^{1400}=0.60\pm0.16$. In X-ray bright groups, the radio sources are found to have steeper spectral indices than the mean $\alpha_{235}^{1400}$. The mean spiral fraction (F$_{sp}$) per radio morphology shows that radio point-like emission appears preferentially in group systems where the majority of galaxies are spirals, with this mild trend being stronger towards lower X-ray luminosity systems. Considering the origin of the radio emission in our diffuse sources, their range of spectral indices, morphologies and scales means that no single mechanism seems able to explain all the sources. We rule out star formation as the dominant factor, but it could, along with disrupted jets, aged and distorted radio lobes from past outbursts, and material shocked or compressed by galaxy interactions, potentially make a contribution. Considering the energetics of the radio jet sources, we find mechanical power values (P$_\mathrm{cav}$) typical for galaxy groups, in the range $\sim10^{41}-10^{43}$~erg~s$^{-1}$, with our systems being in good agreement with other group-central sources \citep[e.g.,][]{Birzan08,OSullivan11}. Lastly, we find that small-scale jet systems are able to balance cooling in the central region of the group provided that the AGN continuously injects energy, in agreement with \citet{Panagoulia14}, whereas the mechanical power output of the two large-scale systems in our sample (NGC~193 and NGC~4261) appears to exceed the cooling X-ray luminosity of their environment by a factor of $\sim$100. This suggests that while in some groups thermal regulation can be achieved by a relatively gentle, `bubbling' feedback mode, considerably more violent AGN outbursts can also take place, which may effectively shut down the central engine for long periods.
\section{Acknowledgments} The GMRT staff are gratefully acknowledged for their assistance during data acquisition for this project. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. E. O'Sullivan acknowledges support for this work from the National Aeronautics and Space Administration through Chandra Awards GO6-17121X and GO6-17122X, issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060, and through the Astrophysical Data Analysis programme, award NNX13AE71G. S. Giacintucci acknowledges support for basic research in radio astronomy at the Naval Research Laboratory by 6.1 Base funding. M. Gitti acknowledges partial support from PRIN-INAF 2014. A. Babul acknowledges support by NSERC (Canada) through the discovery grant program. Some of this research was supported by the EU/FP7 Marie Curie award of the IRSES grant CAFEGROUPS (247653). We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
\section{Introduction} Extensive spectropolarimetric surveys have revealed that $\sim$7\% of massive OB stars host an observable surface magnetic field \citep{morel2015,fossati2015,wade2016,grunhut2017,shultz2018}. These fields are strong (on the order of kG), mostly dipolar, stable on long time-scales, and are understood to be of fossil origin \citep{donati2009}. Surface magnetic fields exhibit a complex interaction with the stellar winds of hot stars, channelling the outflow and confining a significant fraction of the wind in a closed, co-rotating magnetosphere \citep{babel1997,ud2002,townsend2005,petit2013,bard2016,owocki2016}. Two key `surface' phenomena resulting from this interaction and important in the evolutionary context have been identified: \textit{magnetic braking} (increasing the angular momentum loss of the star) and \textit{mass-loss quenching} (decreasing the mass-loss rate of the star). Simple prescriptions, taking into account magnetic, wind, and rotational properties, have been developed and can be exploited to quantify these phenomena \citep{ud2008,ud2009}. First steps towards exploring the evolutionary consequences of surface fossil magnetic fields of massive stars have previously been taken by a few investigations. \cite{mahes1994} studied the evolution of rotation and surface fossil magnetic fields in 60 M$_{\odot}$ models using a kinematic approach and placed constraints on the Be and Wolf-Rayet phases. However, the stellar evolution model was not modified to account for rotation or magnetic fields. \cite{meynet2011} studied magnetic braking in massive star models. The key results of their study are that i) magnetic braking can greatly deplete the core angular momentum reservoir on the main sequence, and ii) a surface magnetic field can significantly modify the chemical enrichment of the star.
However, a basic limitation of the study was the assumption that the surface magnetic field strength remained constant during stellar evolution (thus the magnetic flux increased) and the fact that mass-loss quenching by the magnetic field was not considered. Recently, \cite{petit2017} studied mass-loss quenching alone in non-rotating models with evolving surface field strength using the \textsc{mesa} code \citep{paxton2013}, and \cite{georgy2017} presented first results including both mass-loss quenching and magnetic braking in the Geneva code \citep{eggenberger2008}. \cite{petit2017} and \cite{georgy2017} showed that mass-loss quenching allows the star to maintain a higher mass during its evolution, respectively in the context of `heavy' stellar-mass black holes such as those whose merger was reported by aLIGO (\citealt{abbott2016,abbott2017a,abbott2017b}), and in the context of pair-instability supernovae. Hence, these studies have shown that magnetic massive stars even at solar metallicity could serve as progenitors in both cases. Since both the studies by \cite{petit2017} and \cite{georgy2017} focused primarily on mass-loss quenching, massive and very massive stars (40 - 80 M$_\odot$ and 200 M$_\odot$, respectively) were modelled - objects that are comparatively rare in nature and representative of only a small fraction of known magnetic hot stars. \cite{potter2012b} studied magnetic braking for a population of stars using the \textsc{rose} code \citep{potter2012a}. They implemented an $\alpha$-$\Omega$ radiative dynamo mechanism to induce and maintain an internal large-scale magnetic field, while magnetic braking was attributed to an external surface field. A key result of their study is that they obtain a population of slowly rotating, nitrogen-enriched stars with surface magnetic field strengths that could potentially be detected by current instrumentation; however, this modelled population is strictly limited to a mass range close to 10 M$_\odot$.
Furthermore, \cite{quentin2018} studied the evolution of a typical intermediate-mass star, considering a large-scale, dynamo-generated internal magnetic field. The magnetic field equations are coupled to stellar rotation, and the magneto-rotational instability is considered (see also \citealt{wheeler2015}). While currently the fossil field hypothesis is favoured to explain the origin of observed magnetism at the surface of hot stars, these approaches - in addition to studies that investigated the Spruit-Tayler \citep{spruit2002,tayler73} dynamo mechanism \citep{maeder2003,maeder2004,maeder2005,heger2005} - may also prove instructive for future studies, especially to account for angular momentum redistribution by fossil fields. Considering a different approach, \cite{petermann2015} computed stellar evolution models implementing a reduction in the convective core size caused by a strong fossil magnetic field. While this reduction is accomplished by an arbitrary parameter, the physical nature of magnetic fields suppressing convection has been studied in a variety of contexts, e.g., subsurface convection zones of massive stars \citep{sundqvist2013}, Ap stars \citep{michaud1970}, late-type main-sequence stars \citep{cox1981,chabrier2007} and the atmospheres of white dwarfs \citep{tremblay2015}. In addition to these studies of single-star evolution, recent studies by \cite{chen2016} and \cite{song2018} also consider surface magnetic fields, however in the context of intermediate-mass and massive binary evolution. Apart from these studies, surface fossil magnetic fields are usually not considered in massive star evolution models \citep{brott2011,ekstroem2012,paxton2013}, although it has been speculated that surface magnetism may play a significant role in resolving several problems. 
For instance, some anomalous trends on the Hunter diagram \citep{hunter2008} showing nitrogen abundance as a function of projected rotational velocity have been speculated to be caused by stellar magnetism \citep{hunter2009,brott2011b}. We will discuss this in detail in Section~\ref{sec:hunt}. The important evolutionary consequences of magnetic fields, established by previous studies, suggest that the application of non-magnetic stellar evolution models to magnetic hot stars may be inappropriate, and may result in erroneous conclusions when deriving stellar parameters from non-magnetic models. The present study seeks to address two important questions related to this uncertainty: \begin{itemize} \item When comparing known magnetic stars to the predictions of stellar evolution models, can non-magnetic models yield reasonable stellar parameters? \item To what extent can the newly-incorporated prescription of an evolving surface magnetic field inform and potentially account for debated problems in stellar astrophysics, for instance, the surface nitrogen enrichment of massive stars? \end{itemize} The scope and motivation of this study is to assess the cumulative influence of magnetic mass-loss quenching, magnetic braking and magnetic field evolution, each of these components having heretofore been considered separately. Moreover, by predicting the evolution of surface field strengths beyond the main sequence, this study provides benchmarks for the detection of magnetic fields on hot, evolved stars. To this end, we perform model calculations with the Geneva code for a typical massive star model with initial mass of $M_{\star,\rm ini} = 15$ M$_{\odot}$ at solar ($Z_{\rm ini} = 0.014$, \citealt{asplund2005,ekstroem2012}) metallicity, and investigate how the fossil field changes with time, assuming that the surface magnetic flux is conserved, and no other significant flux decay or enhancement processes occur.
This work is structured as follows: In Section 2 we describe the model including the evolution of a surface magnetic field. In Section 3, we compare the evolution of non-magnetic and magnetic models. In Section 4 we discuss the rotational and angular momentum evolution, the nitrogen abundance vs the rotational velocity, and the evolution of the surface magnetic field and related magnetospheric parameters. In Section 5 we draw conclusions regarding surface magnetism in massive stars and relevant points for future surveys. \section{Methods}\label{sec:met} \subsection{Surface fossil magnetic field prescription} We use the Geneva stellar evolution code (\textsc{genec}, \citealt{eggenberger2008}) for our model calculations. We consider that the magnetic flux is frozen into the plasma (Alfv\'en's theorem, \citealt{alfven1942}) thereby leading to the local conservation of magnetic flux over time. Thus we assume that the change in surface magnetic field strength is related to the change in the stellar radius as: \begin{equation}\label{eq:field} B_p = B_{\rm p, ini} \left( \frac{R_{\rm \star, ini}}{R_\star } \right)^{2} \, , \end{equation} where $B_p$ is the polar magnetic field strength at the stellar surface (photosphere), $B_{\rm p, ini}$ is the initial, zero age main sequence (ZAMS) surface polar magnetic field strength, $R_\star $ is the stellar radius, and $R_{\rm \star, ini}$ is the ZAMS radius\footnote{There is an important difference between the spatial and the time dependence of the magnetic field. While flux conservation yields a $B_p (t) \propto R_{\star}^{-2}(t)$ time dependence, the spatial variation in the radial direction for a dipole field configuration is described by $B (r) \propto r^{-3}$.}. $B_p$ and $R_{\star}$ evolve in concert with the star, that is $B_p = B_p (t)$ and $R_{\star} = R_{\star} (t)$, however for simplicity we will not denote explicitly the time dependence since it is understood for all but the initial quantities. 
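The field evolution of Equation~\ref{eq:field} is straightforward to evaluate; the following sketch (our own illustrative code, not part of \textsc{genec}) makes the scaling explicit:

```python
def surface_field(bp_ini, r_ini, r_now):
    """Polar surface field strength under surface magnetic-flux
    conservation: B_p = B_p,ini * (R_ini / R)^2.  Radii in any
    consistent unit; the field is returned in the units of bp_ini."""
    return bp_ini * (r_ini / r_now) ** 2
```

For instance, a star born with a 4~kG field that expands to twice its ZAMS radius retains a 1~kG surface field, and at ten times its ZAMS radius only 40~G remain, with direct consequences for field detectability in evolved stars.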
We account for mass-loss quenching and magnetic braking due to a fossil magnetic field as previously included in stellar evolution model calculations by \citet{meynet2011}, \citet{keszthelyi2017b}, \citet{petit2017}, \citet{georgy2017}, and \citet{song2018}. We employ the prescriptions and formalism based on the results of \citet{ud2002}, \citet{ud2008,ud2009}, and \citet{petit2013}. Mass-loss quenching results from stellar wind plasma being trapped by closed magnetic field lines near the stellar magnetic equator, causing cooled gas to return to the stellar surface \citep{ud2002}. It leads to a fractional reduction $f_B$ in the overall mass-loss rate, also defined as the escaping wind fraction, \begin{equation}\label{eq:fb} f_B \approx 1 - \sqrt{ 1 - \frac{R_\star}{R_c} } \, , \end{equation} where $R_c$ is the closure radius \citep{ud2008} that defines the distance of the last closed magnetic loop from the stellar surface, and can be expressed in terms of the Alfv\'en radius $R_A$ (see below) as: \begin{equation} R_c \approx R_\star + 0.7 ( R_A - R_\star ) \, . \end{equation} Thus the effective mass-loss rate becomes: \begin{equation}\label{eq:quench} \dot{M} = f_B \cdot \dot{M}_{B=0} \, , \end{equation} where $\dot{M}_{B = 0}$ is the mass-loss rate the star would have in the absence of a magnetic field. It should be noted that Equation~\ref{eq:fb} does not take into account the latitudinal variation in the mass flux due to the tilt of the magnetic field with respect to the normal to the surface, and that this approximation is particularly good for large values of $R_c$, but somewhat less robust otherwise. There are, however, leakage mechanisms that may allow for mass to escape from closed loops if the star is rotating rapidly, but this is typically only important for the early evolution as the star quickly spins down \citep{owocki2018}. Thus, Equation \ref{eq:fb} is indeed an approximation for the reduction in mass-loss rates on average. 
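The quenching prescription of Equations~\ref{eq:fb}-\ref{eq:quench} can be sketched as a stand-alone illustration (radii in any consistent unit; function names are ours, and this is not the actual \textsc{genec} routine):

```python
import math

def escaping_wind_fraction(r_star, r_alfven):
    """Escaping wind fraction f_B: the closure radius is
    R_c = R_star + 0.7 * (R_A - R_star), and
    f_B ~ 1 - sqrt(1 - R_star / R_c)."""
    r_c = r_star + 0.7 * (r_alfven - r_star)
    return 1.0 - math.sqrt(1.0 - r_star / r_c)

def quenched_mdot(mdot_b0, r_star, r_alfven):
    """Effective (quenched) mass-loss rate: Mdot = f_B * Mdot_{B=0}."""
    return escaping_wind_fraction(r_star, r_alfven) * mdot_b0
```

In the limit $R_A \to R_\star$ (no confinement), $f_B \to 1$ and the wind is unquenched, while a moderately confined wind with $R_A = 5\,R_\star$ gives $f_B \approx 0.14$, i.e. roughly an order-of-magnitude reduction of the mass-loss rate.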
Furthermore, it should be noted that when the stellar radius is smaller, that is the star is more compact, the escaping wind fraction becomes larger. This formulation in \textsc{genec} is coupled to the stellar wind calculations, namely any adopted mass-loss scheme can be applied to obtain $\dot{M}_{B=0}$, and then this value is scaled by $f_B$ to account for the reduction by the magnetic field. Magnetic braking accounts for the removal of an additional amount of angular momentum, over and above that naturally removed by (non-magnetic) mass loss, due principally to Maxwell stresses: \begin{equation}\label{eq:br} \dot{J} = \frac{2}{3} \dot{M}_{B = 0} \Omega_\star R_\star^2 \, R_A^2 = \frac{2}{3} \dot{M}_{B = 0} \Omega_\star R_\star^2 \, \left[ (\eta_\star + 0.25)^{0.25} + 0.29 \right]^2 \, , \end{equation} where $\Omega_\star$ is the surface angular velocity, $R_A$ is the Alfv\'en radius for a dipole configuration (expressed here in units of $R_\star$), and $\eta_\star$ is the equatorial wind magnetic confinement parameter \citep{ud2009}, defined as \begin{equation}\label{eq:eta} \eta_\star = \frac{B_p^2R_\star^2}{4 \dot{M}_{B=0} v_{\infty}} \, , \end{equation} where $v_{\infty}$ is the terminal wind velocity. For consistency, we also updated the Geneva code to systematically calculate $v_\infty$ from the escape velocity $v_{\rm esc}$ (following \citealt{kudritzki2000} and \citealt{vink2001}) as $v_\infty = 2.6\,v_{\rm esc}$ for $T_{\rm eff} > 20$~kK, $v_\infty = 1.3\,v_{\rm esc}$ for $20\,\mathrm{kK} > T_{\rm eff} > 10\,\mathrm{kK}$, and $v_\infty = 0.7\,v_{\rm esc}$ for $T_{\rm eff} < 10$~kK. Since magnetic braking modifies the angular velocity of the stellar surface, the Geneva code implements Equation \ref{eq:br} as a boundary condition of the internal angular momentum transport equation at the stellar surface, and modifies the total angular momentum content of the star.
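The braking and confinement prescriptions above (Equations~\ref{eq:br} and \ref{eq:eta}), together with the adopted $v_\infty (v_{\rm esc}, T_{\rm eff})$ scaling, can be collected into a few lines. This is an illustrative CGS sketch of the formulae only (function names are ours), not the \textsc{genec} implementation:

```python
def confinement(bp, r_star, mdot, v_inf):
    """Equatorial wind magnetic confinement parameter
    eta_star = B_p^2 R_star^2 / (4 Mdot v_inf), in CGS units
    (G, cm, g/s, cm/s)."""
    return bp**2 * r_star**2 / (4.0 * mdot * v_inf)

def alfven_radius(eta_star):
    """Dipole Alfven radius in units of R_star:
    R_A / R_star = 0.29 + (eta_star + 0.25)**0.25."""
    return 0.29 + (eta_star + 0.25) ** 0.25

def braking_torque(mdot_b0, omega, r_star, eta_star):
    """Angular momentum loss rate
    dJ/dt = (2/3) Mdot_{B=0} Omega R_star^2 (R_A/R_star)^2, CGS."""
    return (2.0 / 3.0) * mdot_b0 * omega * r_star**2 * alfven_radius(eta_star) ** 2

def v_inf_from_vesc(v_esc, t_eff):
    """Terminal wind speed from the escape speed, using the stepwise
    scaling adopted in the text (t_eff in K)."""
    if t_eff > 2.0e4:
        return 2.6 * v_esc
    if t_eff > 1.0e4:
        return 1.3 * v_esc
    return 0.7 * v_esc
```

Note that in the field-free limit $\eta_\star \to 0$ the bracket recovers $R_A \approx R_\star$, so Equation~\ref{eq:br} reduces to the ordinary angular momentum loss carried by a spherically symmetric wind.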
\subsection{Internal angular momentum transport} Possibly one of the most important questions regarding stellar rotation is whether the internal layers of a star could be considered to achieve solid-body rotation or radial differential rotation. In the former case, some mechanism must couple the stellar core and envelope. This question divides our model calculations into two branches to account for this uncertainty. Although solid-body rotation could be enforced artificially, for main sequence models it is also a known natural consequence of adopting the viscosity of a Spruit-Tayler (ST, \citealt{spruit2002,tayler73}) dynamo mechanism in the radiative envelope of the star to account for the angular momentum transport \citep{maeder2003,maeder2004,maeder2005,heger2005,petrovic2005,brott2011,keszthelyi2017a,song2016}. Therefore, two scenarios are considered regarding the angular momentum transport, which, following \cite{zahn92} and \cite{chaboyer1992}, can be written as: \begin{equation}\label{eq:gam} \rho \frac{ \partial (r^2 \Omega) }{ \partial t} = \frac{1}{5 r^2} \frac{\partial}{\partial r } \left( \rho r^4 \Omega U(r) \right) + \frac{1}{r^2} \frac{\partial}{\partial r} \left( \rho r^4 \, D_{\rm AM} \, \frac{\partial \Omega}{\partial r} \right) \, , \end{equation} where $\rho$ is the density, $r$ is the radius, $t$ is time, $\Omega$ is the angular velocity in a given layer of the star, $U(r)$ is the amplitude of the vertical component of the meridional circulation velocity, and $D_{\rm AM}$ is the sum of diffusion coefficients that contribute to angular momentum transport. The first term on the r.h.s. of Equation~\ref{eq:gam} is the advective term, while the second term is the diffusive term. We consider two approaches to construct $D_{\rm AM}$ in radiative zones. 
In the differentially rotating case, when the transport is less efficient, \begin{equation} D_{\rm AM,1} = D_{\rm shear} \, , \end{equation} where $D_{\rm shear}$ is the diffusion coefficient due to shears. In this case, both advective and diffusive terms are used in Equation~\ref{eq:gam} and the meridional currents (\citealt{eddington1925,sweet1950}) are thus accounted for via advection (see \citealt{zahn92,maeder1998,meynet2013}). In the solid-body rotating case, when the transport is more efficient, \begin{equation} D_{\rm AM,2} = D_{\rm shear} + D_{\rm circ,H} + D_{\rm ST} \, , \end{equation} where $D_{\rm circ,H}$ is a diffusion coefficient accounting for meridional circulation and $D_{\rm ST}$ is a diffusion coefficient resulting from the viscosity of the ST dynamo mechanism (Equation~15 of \citealt{maeder2005}). In this case, the advective term in Equation \ref{eq:gam} can be neglected, justifying a purely diffusive treatment \citep{song2016}. In the following, by \textit{magnetic models} we always refer to models with surface fossil magnetic fields. We stress here that the ST dynamo is only introduced in order to achieve a flat angular velocity profile on the main sequence since it results in a large diffusion coefficient for angular momentum transport\footnote{Strictly speaking, the inclusion of $D_{\rm ST}$ does not achieve completely perfect solid-body rotation since a minimal shear is required to operate the ST mechanism, which in turn significantly flattens the $\Omega$ profile and enhances meridional circulation \citep{maeder2005}. Moreover, this mechanism is expected to weaken somewhat after the main sequence, although still maintaining a nearly flat $\Omega$ profile.}. Including or neglecting the viscosity of the hypothetical ST dynamo in the equation of internal angular momentum transport serves only to establish two fundamentally different rotational configurations which are of interest to study. 
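To make the role of $D_{\rm AM}$ in Equation~\ref{eq:gam} concrete, the diffusive term alone can be sketched with a simple explicit finite-difference update on a fixed, uniform radial grid. This is purely illustrative (our own simplification with constant density and crude zero-gradient boundaries, not the implicit Lagrangian solver used in \textsc{genec}); it shows how a large diffusion coefficient, such as that supplied by the ST dynamo, drives the angular velocity profile towards near-solid-body rotation:

```python
import numpy as np

def diffuse_omega(omega, r, d_am, dt, rho=1.0):
    """One explicit time step of the diffusive term,
    rho * d(r^2 Omega)/dt = (1/r^2) d/dr (rho r^4 D_AM dOmega/dr),
    on a uniform radial grid with approximate zero-flux boundaries."""
    dr = r[1] - r[0]
    r_face = 0.5 * (r[1:] + r[:-1])            # cell-face radii
    flux = rho * r_face**4 * d_am * np.diff(omega) / dr
    new = omega.copy()
    new[1:-1] += dt * np.diff(flux) / (dr * rho * r[1:-1] ** 2)
    new[0], new[-1] = new[1], new[-2]          # zero-gradient ends
    return new
```

Iterating this update on an initially differentially rotating profile flattens $\Omega(r)$, mimicking the state enforced by a large $D_{\rm ST}$; with $D_{\rm AM} = D_{\rm shear}$ alone the same equation operates, but the much smaller coefficient leaves substantial differential rotation.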
\begin{table} \caption{Geneva stellar evolution models computed in this study. In the brackets we denote rotation `R', diffusion coefficient from Spruit-Tayler dynamo `S', and surface magnetic fields `B', for clarity.} \begin{tabular}{lccc} \hline Model & $v_{\rm rot, ini}$ & $D_{\rm ST}$ & $B_{\rm p,ini}$ (surface) \\ & [km s$^{-1}$] & & [kG] \\ \hline \hline M1 (- - -)& 0 & - & 0 \\ M2 (- - B) & 0 & - & 4 \\ M3 (R - -) & 200 & no &0 \\ M4 (R - B) & 200 & no &4 \\ M5 (R S -) & 200 & yes &0 \\ M6 (R S B) & 200 & yes &4 \\ \hline \end{tabular} \label{tab:tm} \end{table} \subsection{General model parameters} The models follow the conventional computation of a Geneva stellar evolution model \citep{ekstroem2012}. The most important initial parameters of the models are outlined in Table \ref{tab:tm}. An initial mass of $M_{\star,\rm ini} = 15$ M$_{\odot}$ and solar metallicity ($Z_{\rm ini} =~ 0.014$) are adopted. Models are computed from the zero age main sequence until they cross the Hertzsprung-Russell diagram (HRD) from the `blue' to the `red' part and start to deplete core helium. The initial elemental abundances follow the \cite{asplund2005} ratios, except for neon, which is adopted from \cite{cunha2006}. We compute rotating models with $v_{\rm rot, ini} = ~200 \, \mathrm{km \, s^{-1}}$ ($ v_{\rm rot, ini}/v_{\rm crit, ini} \sim 0.3$), as well as non-rotating models. The implementation of the rotating models follows the theory of \cite{zahn92} and \cite{maeder1998}. Core overshooting is calculated using the step method, extending the core by 10\% of the local pressure scale height. The core boundary is determined by the Schwarzschild criterion. Due to the initial mass of the model, we use the prescription of \cite{dej1988} to obtain the (non-magnetic) mass-loss rates. 
We adopt an initial surface magnetic field strength of 4 kG, considering that it is typical for magnetic massive OB stars based on the observed distribution of field strengths in young magnetic stars \citep{petit2013,shultz2018}. For further details regarding \textsc{genec} and the details of the model calculations, see \cite{hirschi2004,ekstroem2012} and \cite{georgy2013}. Some of the models (those without surface magnetic fields) have already been computed with a previous version of the Geneva code. However, for consistency we recomputed these models. This means that from Table \ref{tab:tm}, models M1 (- - -) and M3 (R- -) resemble the non-rotating and rotating \citet{ekstroem2012} models, respectively, while M5 (RS-), including $D_{\rm ST}$ is similar to models presented by \cite{maeder2005}. In addition, \cite{meynet2011} computed models with surface magnetic fields (see Introduction), but those models considered magnetic braking under a constant magnetic field strength, and did not account for mass-loss quenching. In the models including surface magnetic fields M2 (- -B), M4 (R-B), and M6 (RSB) presented in this study, mass-loss quenching and magnetic braking are considered under self-consistently evolving surface magnetic parameters, obeying the magnetic field evolution model described by Equation \ref{eq:field}. \section{Results} In this section we present the results of our model calculations, and study the influence of individual model ingredients. We first compare a model without a surface magnetic field to a model with a surface magnetic field to quantify the impact of surface magnetism in stellar evolution models. This means that the discussion of the 6 computed models is divided into 3 subsections: 2 models without rotation, 2 models with rotation and without $D_{\rm ST}$, and 2 models with rotation and adopting $D_{\rm ST}$. 
Along these lines, Figures \ref{fig:nrm1}-\ref{fig:nrm3} display the model pairs on three panels: the HRD (left panels), the nitrogen abundance (middle panel), and mass-loss rates (right panels) vs the effective temperature. \begin{figure*} \includegraphics[width=6.5cm]{212hrd.png}\includegraphics[width=6.5cm]{212nit.png}\includegraphics[width=5.5cm]{212Mdot.png} \caption{Comparison between the non-magnetic model M1 (- - -) and magnetic model M2 (- -B) without rotation on the HRD (left panel) colour-coded with the log of the surface gravity, their nitrogen enrichment (middle panel), and mass-loss history (right panel).} \label{fig:nrm1} \end{figure*} \begin{figure*} \includegraphics[width=6.5cm]{234hrd.png}\includegraphics[width=6.5cm]{234nit.png}\includegraphics[width=5.5cm]{234Mdot.png} \caption{Same as Fig. \ref{fig:nrm1} but for models M3 (R- -) and M4 (R-B) with rotation and without D$_{\rm ST}$.} \label{fig:nrm2} \end{figure*} \begin{figure*} \includegraphics[width=6.5cm]{256hrd.png}\includegraphics[width=6.5cm]{256nit.png}\includegraphics[width=5.5cm]{256Mdot.png} \caption{Same as Fig. \ref{fig:nrm1} but for models M5 (RS-) and M6 (RSB) with rotation and with D$_{\rm ST}$.} \label{fig:nrm3} \end{figure*} \subsection{Non-magnetic vs magnetic models without rotation: model M1 (- - -) and M2 (- -B)} Figure \ref{fig:nrm1} compares the predictions of non-magnetic and magnetic models without rotation, M1 (- - -) and M2 (- -B), respectively. In the simplified case when rotation is neglected, the surface magnetic fields are only considered to decrease the overall mass-loss rate via mass-loss quenching. Such models were presented previously by \cite{petit2017}, but with significantly higher initial masses, until the models reached the terminal age main sequence (TAMS). The mass loss by stellar winds in the case of 15 M$_\odot$ models at solar metallicity is modest and thus only weakly affects the evolution of the star. 
It is expected that the reduction in mass-loss rate due to magnetic mass-loss quenching will be modest. This is indeed the case: for the non-rotating 15 M$_\odot$ models, the evolutionary tracks on the HRD are barely influenced by the magnetic mass-loss quenching (Figure \ref{fig:nrm1}, left panel); however, it should be noted that this does not mean that all stellar parameters are unaffected. For instance, small differences in stellar mass, age, and effective temperature are present throughout the main sequence (see Appendix Table \ref{tab:t1}). For example, the TAMS mass of model M1 (- - -) is 14.81 M$_\odot$, while for model M2 (- -B), it is 14.97 M$_\odot$. For comparison, in the case of the initially 80 M$_\odot$ non-rotating models computed by \cite{petit2017}, the TAMS masses of their non-magnetic and magnetic model with similar initial surface magnetic field strength are 52 M$_\odot$ and 62 M$_\odot$, respectively (see their Figure 7). Moreover, the magnetic model has a slightly shorter main sequence lifetime than the non-magnetic model (at the TAMS 11.20 Myr vs 11.26 Myr, respectively), which is also in agreement with the results of \cite{petit2017}. In order to reliably compare these predictions to observations, the following point should be considered: the rotational history of the star would need to be known to establish whether a non-rotating model is applicable. If, for instance, the star had undergone significant magnetic braking in the past, then a non-rotating model may not be a reasonable choice for comparison with a currently slowly rotating star, since the previous rotational history may have changed the stellar properties. For example, \cite{meynet2011} pointed out that the core properties and the surface enrichment would be different. 
Since in a non-rotating case the assumption is that rotation-induced instabilities do not operate, surface nitrogen enrichment (middle panel, Figure \ref{fig:nrm1}) is only expected when the star reaches the red supergiant stage. Whether or not a surface magnetic field is present has no impact on these results in the present model so long as only mass-loss quenching is relevant. Nevertheless, the nitrogen abundance could help to observationally evaluate the applicability of non-rotating models. The right panel of Figure \ref{fig:nrm1} shows the mass-loss history of models M1 (- - -) and M2 (- -B). Since mass-loss quenching reduces the effective mass-loss rate, the magnetic model initially experiences an order-of-magnitude lower mass-loss rate than the non-magnetic model. However, as the star evolves, so do the magnetic parameters (see Section \ref{sec:43}). As a consequence, the magnetic confinement weakens systematically as the star crosses the HRD. Ultimately, the magnetic field weakens to such a degree that mass-loss quenching becomes negligible. These results, although quantitatively dependent on the adopted mass-loss scheme (other applicable schemes for hot stars are derived by, e.g., \citealt{kudritzki1989,vink2001,muijres2012,krticka2014}), demonstrate the same qualitative behaviour. The initial reduction in the mass-loss rate of model M2 (- -B) results in a systematically higher stellar mass, and hence a higher stellar luminosity is expected on the main sequence when compared to model M1 (- - -) due to mass-loss quenching (left panel of Figure \ref{fig:nrm1}, see also Figure 4 of \citealt{petit2017}). However, this expectation is limited, to a degree, by an interesting feedback effect. Since $\dot{M} \propto L_\star^x$, where $x$ is some positive power depending on the adopted scheme\footnote{For the scheme derived by \cite{dej1988} and used in this study, $x \sim 1.8$. 
For comparison, the \cite{vink2001} scheme yields $x \sim 2.2$.}, the mass-loss rate of the magnetic model will increase if it becomes more luminous. Therefore, one may conclude that mass-loss quenching alone results in the following loop of consequences: \begin{enumerate} \item Mass-loss quenching $\rightarrow$ initially lower mass-loss rate \item Initially lower mass-loss rate $\rightarrow$ higher stellar mass \item Higher stellar mass $\rightarrow$ higher stellar luminosity \item Higher stellar luminosity $\rightarrow$ higher mass-loss rate \end{enumerate} In the case of the 15 M$_\odot$ models, the effect of mass-loss quenching alone is modest. However, \cite{petit2017} demonstrated that for higher-mass models, following the above reasoning, the consideration of surface fossil magnetic fields indeed leads to a notable increase in the stellar luminosity. Although this results in a consequent increase in the mass-loss rate of the magnetic model, its mass-loss rate still remains lower than that of a non-magnetic model with the same initial mass. This is because the power with which the luminosity depends on the stellar mass is higher than the power with which the mass-loss rate depends on the luminosity. \subsection{Non-magnetic vs magnetic models with rotation without \texorpdfstring{$D_{\rm ST}$}{DST}: models M3 (R- -) and M4 (R-B)} Figure \ref{fig:nrm2} shows the predictions of models M3 (R- -) and M4 (R-B). Both of these models are initially rotating at $v_{\rm rot, ini} =$~200~km~s$^{-1}$ and are characterized by differential rotation. They do not include the viscosity of an ST dynamo for the internal angular momentum transport. Model M4 (R-B) includes mass-loss quenching and magnetic braking. Whether such a configuration, i.e. radial differential rotation, is possible for a star with large-scale surface fossil magnetic fields extending to the stellar interiors has not yet been established (however, see \citealt{duez2010}). 
It is likely that a strong magnetic field would flatten the $\Omega$ profile and result in near solid-body rotation, but perhaps not for the entire star. We further discuss this point in the last paragraph of Section \ref{sec:unc}. The left panel of Figure \ref{fig:nrm2} shows models M3 (R- -) and M4 (R-B) on the HRD. It may seem intuitive that the above outlined mass-loss quenching loop yields a higher luminosity for model M4 (R-B) compared to model M3 (R- -). For reference, the TAMS masses of model M3 (R- -) and model M4 (R-B) are 14.71 M$_\odot$ and 14.94 M$_\odot$, respectively (see Appendix Table \ref{tab:t2}). However, in this particular model setup the large increase of the stellar luminosity of model M4 (R-B) compared to model M3 (R- -) cannot be attributed to mass-loss quenching alone, even though the mass-loss rate of model M4 (R-B) with surface magnetic field is still lower than that of model M3 (R- -) until both models' effective temperatures drop below 10 kK (Figure \ref{fig:nrm2}, right panel). At 10 kK the surface magnetic field is so weak that mass-loss quenching becomes negligible. Therefore, the feedback effects need a more careful interpretation in rotating models. We mention here that the rotating models do contain an additional scaling factor, which takes into account the rotational enhancement of the mass-loss rates, following the work of \cite{maeder2000}, namely Equation 4.29 in their work. As shown by other studies as well (e.g., recently \citealt{keszthelyi2017a}, their Figures 10 and 11), this enhancement factor on the mass-loss rates due to rotation is generally a few percent at most, but becomes important when the surface rotational velocity approaches the critical velocity. 
Hence we checked whether this factor would play a role and found that in these model calculations the rotational velocities remain far from their critical values, so the rotational enhancement of the mass-loss rates (reaching a maximum of 3\%) is practically negligible in all models throughout their evolution. In contrast to the case of non-rotating models, rotating models that include surface magnetic fields account for magnetic braking. This mechanism leads to changes in the $\Omega$ profile near the stellar surface, which influences chemical mixing (Figure \ref{fig:nrm2}, middle panel) and hence changes the mean molecular weight ($L_\star \propto\mu^x$, where $x$ is typically a high power, $x \sim 4$). When differential rotation is considered, magnetic braking induces significant chemical mixing via strong shears and therefore increases the average value of $\mu$, which in turn results in a higher luminosity. This can indeed be traced by the nitrogen enrichment of the model. As a consequence, the effects of magnetic braking alone could be described in the following manner for differential rotation: \begin{enumerate} \item Magnetic braking $\rightarrow$ strong shear mixing \item Strong shear mixing $\rightarrow$ increase of average $\mu$ and notable surface enrichment \item Increase of average $\mu$ $\rightarrow$ higher stellar luminosity \item Higher stellar luminosity $\rightarrow$ higher mass-loss rate \end{enumerate} Thus we confirm the findings of \cite{meynet2011} regarding the surface nitrogen enrichment. Although we changed the magnetic field evolution model ($B_p \propto R_\star^{-2}$, hence the magnetic field strength weakens with time), we observe the same qualitative behaviour: surface magnetic fields and radial differential rotation result in a very rapid enhancement of the surface nitrogen due to the large shears (Figure \ref{fig:nrm2}, middle panel). 
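The magnitude of this loop can be gauged by chaining the two power-law scalings quoted in the text, $L_\star \propto \mu^{4}$ and $\dot{M} \propto L_\star^{1.8}$ (for the \cite{dej1988} scheme). The short sketch below is only a back-of-the-envelope illustration; the assumed 5\% increase in the average mean molecular weight is a toy value, not an output of the models.

```python
# Toy evaluation of the magnetic-braking feedback chain (steps i-iv above).
# Scalings quoted in the text: L ~ mu^4 and, for the de Jager scheme, Mdot ~ L^1.8.
# The 5% rise in the average mean molecular weight is an assumed toy value.

MU_EXPONENT = 4.0    # L proportional to mu^4
MDOT_EXPONENT = 1.8  # Mdot proportional to L^1.8 (x ~ 2.2 for the Vink scheme)

mu_ratio = 1.05                          # shear mixing raises average mu by 5%
lum_ratio = mu_ratio ** MU_EXPONENT      # ~1.22: a ~22% luminosity increase
mdot_ratio = lum_ratio ** MDOT_EXPONENT  # ~1.42: a ~42% mass-loss increase

print(f"L ratio:    {lum_ratio:.2f}")
print(f"Mdot ratio: {mdot_ratio:.2f}")
```

Even a modest shear-driven increase in $\mu$ thus propagates into a substantially higher mass-loss rate, which is consistent with the mass-loss histories of models M3 (R- -) and M4 (R-B) differing less than in the non-rotating case.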
As a consequence of the induced mixing in the magnetic model, the ages of the models differ significantly. Even in the early phases, model M4 (R-B) takes a longer time to deplete core hydrogen, and this difference propagates to the TAMS age. The TAMS ages of models M3 (R- -) and M4 (R-B) are 13.45 Myr and 15.29 Myr, respectively (Appendix Table~\ref{tab:t2}). Therefore, while mass-loss quenching alone resulted in a slightly shorter main sequence lifetime for the magnetic model, magnetic braking in differentially rotating models actually yields longer main sequence lifetimes. To conclude, the combination of magnetic braking and differential rotation can explain the notably higher luminosity of model M4 (R-B) compared to M3 (R- -). Model M4 (R-B) becomes more luminous than model M3 (R- -) because the shears induced by magnetic braking increase the mean molecular weight on average inside the star. Additionally, mass-loss quenching also modestly increases the luminosity of model M4 (R-B) in comparison to model M3 (R- -). In fact, because magnetic braking efficiently leads to a higher luminosity of model M4 (R-B) compared to model M3 (R- -), the notably higher luminosity also scales the effective mass-loss rate of the stellar model. This is why the mass-loss histories of the two models (Figure \ref{fig:nrm2}, right panel) differ less than in a non-rotating case, even though mass-loss quenching reduces the effective mass-loss rate of model M4 (R-B). \subsection{Non-magnetic vs magnetic models with rotation adopting \texorpdfstring{$D_{\rm ST}$}{DST}: model M5 (RS-) and M6 (RSB)} Figure \ref{fig:nrm3} shows the predictions of models M5 (RS-) and M6~(RSB). Both models adopt an internal ST dynamo mechanism for the chemical element and angular momentum transport to ensure a flat angular velocity profile. Similar to model M4 (R-B), model M6 (RSB) includes mass-loss quenching and magnetic braking. 
The differences between these two models on the HRD are practically negligible; however, other parameters show more noticeable differences. Throughout its evolution, model M6 (RSB) maintains a mass-loss rate which is significantly lower than that of model M5 (RS-) (Figure \ref{fig:nrm3}, right panel). However, the nitrogen enrichment is notably lower in model M6 (RSB) than in model M5 (RS-). In particular, the post-main sequence plateau remains 0.2 dex smaller compared to model M5 (RS-) when crossing the HRD (Figure \ref{fig:nrm3}, middle panel). These results agree qualitatively\footnote{However, see Section \ref{sec:hunt} for the quantitative difference.} with the findings of \cite{meynet2011}, namely that when a flat angular velocity profile is assumed, and surface magnetic fields are considered, the mixing is much less efficient than in the case of differential rotation (cf. also model M4 (R-B) and model M6 (RSB) in the middle panels of Figure \ref{fig:nrm2} and \ref{fig:nrm3} with solid lines). Without magnetic braking, the solid-body rotating model results in higher surface nitrogen enrichment compared to a differentially rotating model (cf. also model M3 (R- -) and model M5 (RS-) in the middle panels of Figure \ref{fig:nrm2} and \ref{fig:nrm3} with dashed lines). When magnetic braking is considered, the surface angular velocity decreases rapidly, thus in solid-body rotating models the rotation of the entire star brakes. The systematically and uniformly decreasing value of angular velocity throughout the star leads to weaker chemical element transport (predominantly weaker meridional currents). 
This can be summarized in the case of solid-body rotation as follows: \begin{enumerate} \item Magnetic braking $\rightarrow$ decrease of surface $\Omega$ \item Decrease of surface $\Omega$ $\rightarrow$ decrease of $\Omega$ throughout the star \item Decrease of $\Omega$ throughout the star $\rightarrow$ weaker meridional currents \item Weaker meridional currents $\rightarrow$ less chemical enrichment at the surface \end{enumerate} Hence magnetic braking results in lower meridional circulation velocity compared to a corresponding model without surface magnetic fields, and this is why the surface nitrogen enrichment of model M6 (RSB) is less than that of model M5 (RS-). These two differences between the models (model M6 (RSB) is less enriched but evolves with a slightly higher mass than model M5 (RS-)) have opposite effects on the stellar luminosity, which explains the nearly identical evolutionary tracks on the HRD. This is also why the stellar ages are quite similar in this case: the TAMS ages are 11.46 Myr and 11.36 Myr, respectively. For reference, the TAMS masses of model M5 (RS-) and model M6 (RSB) are 14.65 M$_\odot$ and 14.95 M$_\odot$ (see Appendix Table \ref{tab:t3}). \begin{figure*} \includegraphics[width=9cm]{vrot.png}\includegraphics[width=9cm]{jtot.png} \caption{\textit{Left panel}: The time evolution of the surface equatorial rotational velocities from the ZAMS until the models start to deplete core helium. \textit{Right panel}: The evolution of the total angular momentum over time on the main sequence. 
The TAMS values of log $J_{\rm tot}$ in units of g~cm$^2$ s$^{-1}$ are indicated next to the tracks.}\label{fig:rot1} \end{figure*} \section{Discussion} \subsection{Rotational velocity and angular momentum evolution} The left panel of Figure \ref{fig:rot1} shows the surface rotational evolution predicted by the 4 models with rotation and reveals that the rotational history of models with and without surface magnetic fields is very different. In the case of models M3~(R-~-) and M5 (RS-) (green and cyan lines), we observe the following: With the adopted initial mass and initial rotational velocity, differential rotation leads to a relatively modest decrease of the surface rotational velocity on the main sequence. However, this strongly depends on the initial mass and rotational velocity of the model, namely the surface rotation would brake much more rapidly if the model was more massive and/or had a higher initial rotational velocity \citep[e.g.,][]{georgy2013}. When solid-body rotation is imposed by means of the viscosity of the ST dynamo, the rotational velocity remains nearly constant during (almost) the entire main sequence \citep{maeder2003,keszthelyi2017a}. The surface rotational velocity decreases abruptly after the TAMS since the radius of the star undergoes a dramatic increase. In the case of models M4 (R-B) and M6 (RSB) (blue and red lines), we interpret their rotational history as follows: When surface magnetic fields, and hence magnetic braking, are accounted for, the surface rotation of the star already brakes efficiently on the main sequence, as expected from theoretical works \citep{ud2009,meynet2011}. As a consequence, the surface rotational velocity rapidly approaches zero. However, the time-scale of magnetic braking depends on the assumption of how angular momentum is distributed inside the star, and how efficiently the surface angular momentum reservoir can be replenished by driving angular momentum from the core. 
The problem of internal angular momentum distribution was already addressed by \cite{pin1989}, who concluded that it may significantly influence the evolution of the surface angular momentum reservoir. Model M6 (RSB) undergoes a somewhat more rapid initial spin-down than model M4 (R-B), although both curves remain qualitatively similar. The time-scale required to slow down the surface rotation from the ZAMS (v$_{\rm rot, \rm ini}$ = 200 km~s$^{-1}$) to reach a surface rotational velocity below 50~km s$^{-1}$ is 3.3~Myr and 2.8 Myr for models M4 (R-B) and M6 (RSB), respectively. We will call this the `effective spin-down time', i.e., the time after the ZAMS required to become a slow rotator ($<$~50 km s$^{-1}$), in order to identify the time after which surface rotation has a small or negligible impact on the structure and evolution of the model. The right panel of Figure \ref{fig:rot1} shows the evolution of the total angular momentum of the star models on the main sequence. Models M3 (R- -) and M5 (RS-) lose a small amount of total angular momentum on the main sequence. This loss is due to mass loss, which reduces the angular momentum content of the star \citep{langer1998, vink2010,keszthelyi2017a}. When surface magnetic fields are accounted for, they strongly impact the angular momentum evolution of the star since magnetic braking is a very efficient mechanism to remove angular momentum \citep[see also][]{song2018}. Although on the main sequence model M6 (RSB) has an order of magnitude lower mass-loss rate than model M5 (RS-) (see Figure \ref{fig:nrm3}, right panel), the former model loses almost 0.4 dex more of its total angular momentum. This is because magnetic braking is significantly more efficient at removing angular momentum from the star than mass loss by the stellar winds \citep{ud2009}. 
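For orientation, the order of magnitude of these spin-down times can be recovered from the angular momentum loss rate of \cite{ud2009}, $\mathrm{d}J/\mathrm{d}t \simeq \frac{2}{3} \dot{M} \Omega R_{\rm A}^{2}$, with $R_{\rm A}$ the Alfv\'en radius. The sketch below is a rough, rotation-rate-independent estimate; the Alfv\'en radius, mass-loss rate, and gyration constant are assumed placeholder values, not quantities taken from the \textsc{genec} models.

```python
# Rough spin-down-time estimate: tau ~ J / (dJ/dt), with J = k2 * M * R^2 * Omega
# and the ud-Doula et al. (2009) torque dJ/dt ~ (2/3) * Mdot * Omega * R_A^2.
# Omega and R cancel, so tau depends only on M, Mdot, R_A/R_* and k2.
# All parameter values below are illustrative placeholders.

MSUN_G = 1.989e33   # solar mass in g
YR_S = 3.156e7      # year in s

def spindown_time_yr(mass_msun, mdot_msun_yr, ra_over_r, k2=0.06):
    m = mass_msun * MSUN_G
    mdot = mdot_msun_yr * MSUN_G / YR_S          # g/s
    tau_s = k2 * m / ((2.0 / 3.0) * mdot * ra_over_r ** 2)
    return tau_s / YR_S

# 15 Msun star, Alfven radius ~5 R_*, Mdot ~ 1e-7 Msun/yr (assumed values):
print(f"{spindown_time_yr(15.0, 1e-7, 5.0):.1e} yr")  # ~5e5 yr
```

With these assumed numbers the estimate gives a few times $10^{5}$ yr, consistent to order of magnitude with the few-Myr effective spin-down times quoted above.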
It may not be immediately intuitive why model M4 (R-B) loses more of its total angular momentum on the main sequence than model M6 (RSB), in particular because one may expect that solid-body rotation brakes the rotation of the entire star (i.e., reduces the total angular momentum reservoir), while differential rotation only allows for effectively braking the surface (i.e., exhausting only the surface angular momentum reservoir). Hence, if the only difference between the two models was their rotational configuration, model M4 (R-B) would be expected to maintain more of its total angular momentum. However, this is not the case: model M4 (R-B) does lose more angular momentum than does model M6 (RSB). This is because model M4 (R-B) evolves with a higher luminosity than all the other models due to its increased mean molecular weight. As a consequence, its mass-loss rate is higher compared to model M6 (RSB). Therefore, while magnetic braking is indeed the main driver of angular momentum loss, model M6 (RSB) loses $\sim$ 0.06 dex less of its total angular momentum at the TAMS compared to model M4 (R-B) because its mass-loss rate is lower. \begin{figure*} \includegraphics[width=19cm]{hunt.png} \caption{Hunter diagram of the main sequence phase of the rotating models. $v$ sin $i$ is obtained by scaling the surface rotational velocities by $\pi / 4$. Group 2 stars are denoted with the hatched box on the diagram and the polar magnetic field strength is colour-coded in the case of the two magnetic models. The evolution of the models begins at the lower right corner of the diagram.}\label{fig:hun} \end{figure*} \subsection{Surface nitrogen enrichment}\label{sec:hunt} The surface nitrogen abundance and the projected rotational velocity (plotted on the Hunter diagram) are two very practical quantities to trace and to study rotational mixing in massive stars since rotational mixing brings core-processed CNO elements to the stellar surface \citep{maeder2014a}. 
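Throughout this section (see the caption of Figure \ref{fig:hun}), $v \sin i$ is obtained from the surface equatorial rotational velocity via the factor $\pi/4$, which is the expectation value of $\sin i$ for randomly oriented rotation axes: with $P(i)\,\mathrm{d}i = \sin i\,\mathrm{d}i$, one has $\langle \sin i \rangle = \int_0^{\pi/2} \sin^2 i\,\mathrm{d}i \big/ \int_0^{\pi/2} \sin i\,\mathrm{d}i = \pi/4$. A quick numerical check of this standard result:

```python
# Numerical check that <sin i> = pi/4 for randomly oriented rotation axes,
# where the inclination distribution is P(i) di = sin(i) di on [0, pi/2].
import math

N = 100_000
di = (math.pi / 2) / N
num = sum(math.sin((k + 0.5) * di) ** 2 * di for k in range(N))  # integral of sin^2
den = sum(math.sin((k + 0.5) * di) * di for k in range(N))       # integral of sin
print(num / den, math.pi / 4)  # both ~0.7853981...
```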
While in Section 3 we focused on the predictions of the models and the inferred impact of a surface magnetic field, now we turn our attention to the relevance of surface magnetic fields in the context of the Hunter diagram \citep{hunter2008}. The VLT-FLAMES Surveys \citep{evans2004,evans2005,evans2008} obtained a significant collection of spectroscopic data of hot, massive stars in the Large Magellanic Cloud (LMC), Small Magellanic Cloud (SMC), and our Galaxy. From these observations several studies have identified anomalies in the Hunter diagram that cannot be explained with `standard rotational mixing' \citep{hunter2008,hunter2009,brott2009,brott2011b,maeder2014a,grin2017,dufton2018}. In particular, two groups of observed stars (see Figure 1 of \citealt{hunter2009}) are challenging to explain with standard stellar evolution models (see also \citealt{aerts2014}). Group 1 stars exhibit fast rotation but show negligible evidence of surface chemical enrichment. Group 2 stars are inferred to be slow rotators but have notable surface enrichment. The stars in both groups have surface gravities that are generally consistent with them being main sequence stars. The existence of both groups is somewhat counterintuitive since typically the faster the star rotates, the more efficient is the chemical mixing, thus the larger is the surface chemical enrichment. Therefore, standard stellar evolution models, such as models M3 (R- -) and M5 (RS-), do not seem to be able to explain such a behaviour in main sequence stars. While comprehensive observational data supports the existence of these groups in the LMC \citep[e.g.,][]{hunter2008,brott2011b,rivero2012a,rivero2012b,grin2017,dufton2018}, the same anomalous behaviour, notwithstanding statistically smaller samples, is also evidenced in our own Galaxy \citep{crowther2006,morel2008,hunter2009,przybilla2010,briquet2012,aerts2014,martins2015,cazorla2017a,markova2018}. 
Surface magnetic fields have been invoked to account for the phenomenon \citep{morel2008,hunter2009,brott2011b,martins2012,aerts2014,martins2015,grin2017,dufton2018} and, in particular, the mechanism of magnetic braking has been proposed to explain Group 2 stars \citep{meynet2011,potter2012b}, although binarity may also provide an alternative explanation \citep[e.g., most recently,][]{song2018b}. Here we aim to focus only on Galactic Group 2 stars, and the general behaviour of the computed models as shown in Figure \ref{fig:hun}. In contrast to models M3 (R- -) and M5 (RS-), both models M4 (R-B) and M6 (RSB) exhibit rapid decrease of their surface rotation on the main sequence. We recall that the effective spin-down time for these models to become slow rotators is of the order of 3 Myr. For model M6 (RSB) the time after the ZAMS that is required to start increasing the surface nitrogen abundance is 2.0 Myr (when $v \sin i \sim 50$~km~s$^{-1}$). In the case of model M4 (R-B) the enrichment takes place on a shorter time-scale, the surface nitrogen abundance increases after 1.1 Myr (when $v \sin i \sim 80$ km s$^{-1}$). As is common, the surface equatorial rotational velocity is converted to projected rotational velocity by accounting for an averaged inclination, i.e., $v_{\rm rot} \cdot \pi /4 \sim v \sin i$. Not only does model M4 (R-B) mix more rapidly than model M6 (RSB), but the overall surface nitrogen abundance is higher too (see also middle panels of Figure \ref{fig:nrm2} and Figure~\ref{fig:nrm3}). With [N/H] we denote the number fraction of nitrogen relative to hydrogen. At the TAMS, log [N/H] + 12 = 8.33 in the case of model M4 (R-B) and log [N/H] + 12 = 8.11 in the case of model M6 (RSB), thus there is an overall 0.2 dex difference which originates from the assumptions regarding the internal rotation profile of the star. Interestingly, this difference is also observable in models without surface magnetic fields, however, in the opposite direction. 
Model M5 (RS-) reaches log~[N/H]~+~12~=~8.27 at the TAMS, while model M3 (R-~-) produces log~[N/H]~+~12~=~8.02. This is because when surface magnetic fields are not considered, $\Omega$ is maintained nearly uniform throughout the star in model M5 (RS-). As a consequence, meridional currents remain very efficient in transporting core-processed chemical elements to the stellar surface. Therefore, comparing models with different rotational configurations, we can conclude the following: magnetic braking greatly enhances mixing if radial differential rotation is allowed for in the model. On the other hand, the inclusion of surface magnetic fields yields a lower enrichment in the case of near solid-body rotation. This result reinforces the point that although magnetic braking does provide a qualitative explanation for Group~2 stars, the surface nitrogen enrichment depends strongly on the internal mixing processes. This could be the reason why previous studies by \cite{morel2008}, \cite{aerts2014} and \cite{martins2015} did not find a direct correlation between magnetic field strength and measured nitrogen abundance. This might imply that magnetic massive stars may have different angular momentum transport processes at work, or that the strength of the transport varies significantly from one star to another. Our results, for the case of differential rotation, are in complete agreement with the findings of \cite{meynet2011}, despite the magnetic field evolution and mass-loss quenching additionally considered in this study. However, \cite{meynet2011} obtained a different conclusion considering magnetic braking and solid-body rotation jointly. Namely, without mass-loss quenching and magnetic field evolution, their solid-body rotating models undergoing magnetic braking did not produce notable surface nitrogen enrichment, and thus did not appear as Group 2 stars on the Hunter diagram. In this study, however, model M6 (RSB) does yield an observable surface nitrogen enrichment. 
This may have partially been a result of the slight difference in the adopted initial mass of the models. \cite{meynet2011} considered $M_{\star, \rm ini} = 10$ M$_{\odot}$, while in this work we considered $M_{\star, \rm ini} = 15$ M$_{\odot}$. Therefore, we computed a model with the exact same configuration as model M6 (RSB) but with $M_{\star, \rm ini} = 10$ M$_{\odot}$ and found that it overlaps with model M6 (RSB) on the Hunter diagram. Thus, this difference is only due to the inclusion of mass-loss quenching and magnetic field evolution, which lead to a weakening of magnetic braking over time. Since Group 2 stars might provide indirect evidence for the presence of magnetic stars in extragalactic environments, further spectropolarimetric measurements of \textit{Galactic} Group 2 stars would be valuable to establish the robustness of this potential proxy. An important open question therefore remains the incidence rate of surface magnetism in Galactic Group~2 stars. In particular, stars with known nitrogen enrichment and relatively modest rotational velocities, e.g., from the sample of Galactic O stars by \cite{markova2018}, and from the samples of Galactic B stars by \cite{crowther2006} and \cite{fraser2010}, should be investigated. We also mention that modest rotational velocities are an advantage when searching for surface magnetic fields through spectropolarimetric observations, since the reduced rotational line broadening means that a shorter exposure time is required to detect a magnetic field of a given strength. Since we accounted for the evolution of the surface magnetic field (colour-coded in Figure \ref{fig:hun}), we can place some constraints regarding this matter. Both models with surface magnetic fields are assumed to have an initial polar magnetic field strength of 4 kG. During the early evolution, this field should remain strong if magnetic flux conservation is valid. The field starts weakening once the stellar radius increases. 
Since typically the change in stellar radius is modest on the main sequence (a factor of 2 - 3), one would expect that the magnetic field strength also weakens modestly (a factor of 4 - 9). Indeed, both models M4 (R-B) and M6 (RSB) approach Group 2 on the Hunter diagram with $B_{\rm p} \sim$ 3.5 and 3.0~kG, respectively, and at the TAMS their magnetic field strengths are $B_{\rm p} \sim$ 870 G and 740 G, respectively. Since the adopted ZAMS field strength is typical of known magnetic O and B stars, these values provide some guidance as to the expected observable field strength in Group 2 stars. The spectropolarimetric measurements must be sufficiently sensitive to detect a polar field strength of $B_{\rm p} \sim$ 800~G (if the star is close to the TAMS), that is, a peak longitudinal field of $B_z \sim$~250~G, assuming that the choice of $B_{\rm p,ini} =$ 4 kG at the ZAMS and the model of magnetic flux conservation are appropriate. Detecting such a field at $3\sigma$ would require a longitudinal field uncertainty (i.e., 1$\sigma$ error bar) of about 80~G. Weaker initial magnetic fields would naturally require better precision (e.g., 40~G for an initial polar field of 2~kG). These values are comparable with those obtained for the O and B samples of the MiMeS survey (respectively 50~G and 30~G; \citealt{grunhut2017} and Wade et al., in prep.). In addition to obtaining spectropolarimetric measurements of known Galactic Group 2 stars, it may be worthwhile to identify stars in the MiMeS survey located in Group~2 of the Hunter diagram (e.g., exploiting abundances and $v\sin i$ such as those reported by \citealt{martins2015} for the MiMeS O-type stars) and evaluate whether the precision with which they were individually observed is sufficient to detect their expected weaker magnetic fields (and re-observe them with better precision if necessary). 
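The numbers above follow from magnetic flux conservation ($B_{\rm p} \propto R_\star^{-2}$) combined with the fact that the peak longitudinal field of a dipole is roughly a third of the polar field strength (the 800~G $\rightarrow$ 250~G example above implies a similar factor). In the sketch below, the radius growth factor and the $B_z \approx 0.3\,B_{\rm p}$ conversion are illustrative assumptions:

```python
# Field evolution under magnetic flux conservation (B_p ~ R^-2) and the
# implied spectropolarimetric detection requirement. The radius growth
# factor and the B_z ~ 0.3 B_p dipole factor are illustrative assumptions.

def bp_flux_conserved(bp_ini_g, radius_growth):
    """Polar field after the stellar radius grows by the given factor."""
    return bp_ini_g / radius_growth ** 2

bp_ini = 4000.0                            # G, adopted ZAMS polar field
bp_tams = bp_flux_conserved(bp_ini, 2.2)   # radius growth of ~2.2 on the MS
bz_peak = 0.3 * bp_tams                    # peak longitudinal field (dipole)
sigma_3 = bz_peak / 3.0                    # 1-sigma error for a 3-sigma detection

print(f"B_p(TAMS)      ~ {bp_tams:.0f} G")   # ~825 G
print(f"B_z(peak)      ~ {bz_peak:.0f} G")   # ~250 G
print(f"required sigma ~ {sigma_3:.0f} G")   # ~80 G
```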
Ultimately, magnetic fields and binarity are not necessarily mutually exclusive to explain Group 2 stars on the Hunter diagram. While other studies (e.g., \citealt{song2018b}) have shown that binarity can explain Group 2 stars, we have shown here that surface magnetism in single stars can also explain them. The question that should be answered by observations is: \textit{What fraction of those stars are in binary systems and what fraction of them possess detectable surface magnetic fields?} \begin{figure*} \includegraphics[width=9cm]{mBpfb.png}\includegraphics[width=9cm]{meta.png} \caption{Evolution of the surface magnetic parameters. \textit{Left panel:} log of the polar magnetic field strength (left ordinate, solid lines) and the escaping wind fraction (right ordinate, dashed lines) against the effective temperature. \textit{Right panel:} evolution of the equatorial magnetic confinement parameter. }\label{fig:mag} \end{figure*} \subsection{Evolution of the surface magnetic field strength and magnetospheric parameters}\label{sec:43} As both the magnetic field strength and mass-loss rate evolve with time, the nature of the interaction between the surface magnetic field and the stellar wind is expected to change over evolutionary time-scales. Figure \ref{fig:mag} shows the evolution of the magnetospheric parameters, $B_p$, $f_B$, and $\eta_\star$ (see Equations \ref{eq:quench} and \ref{eq:eta}). The left panel reveals that the main sequence weakening of the polar magnetic field strength (left ordinate and solid lines) is relatively modest, under the assumption that the magnetic flux is conserved during the evolution. However, once the stellar radius increases drastically after the TAMS, the magnetic field strength rapidly weakens from the order of kG to hundreds of G at $\sim$ 25 - 15 kK, to the order of 100 G to 10 G between 15 and 10 kK, and to the order of 10 G to 1 G between 10 and 5 kK. 
Since the stellar radius governs the field evolution, models M2 (- -B), M4 (R-B), and M6 (RSB) show similar characteristics because the evolution of the stellar radius is similar in these models. The left panel of Figure \ref{fig:mag} also shows the escaping wind fraction $f_B$ (right abscissa and dashed lines). On the main sequence, the escaping wind fraction is 10 - 20\%, which means a drastic reduction in the mass-loss rates for stars with surface magnetic fields. Until the late B star regime ($\sim$ 10 kK) mass-loss quenching is still very efficient as $f_B \approx$ 20 - 50\%. Therefore during the O-star regime in our models the returning wind fraction (that is, 1~-~$f_B$) is of the order of 80 - 90\%, which means that most of the line-driven mass loss returns to the stellar surface due to channeling by the magnetic field through closed field lines, while for B stars this fraction is at least 50\%. Evidently, this ratio depends on the assumption of the initial magnetic field strength. In extreme cases, the returning mass fraction can go up to 96\% as in the case of NGC 1624-2 (the escaping wind fraction is $f_B = 4\%$, \citealt{petit2017}), the most strongly magnetized known O-type star, which is believed to host a giant magnetosphere \citep{wade2012,petit2015,erba2017,daviduraz2018}. The right panel of Figure \ref{fig:mag} shows how the equatorial magnetic confinement parameter evolves over time as a function of effective temperature. This means that during the early evolution of the star the magnetic confinement is expected to be strong ($\eta_\star > 10$) for a typical initial surface magnetic field strength. As shown previously by \cite{keszthelyi2017b}, $\eta_\star$ can be very sensitive to changes in $\dot{M}$ on the main sequence. This is especially the case when sudden changes in the mass-loss rates are considered due to the bi-stability mechanism \citep{pauldrach1990,vink1999,petrov2016,keszthelyi2017a,sander2018,vink2018}. 
However, if the mass-loss rates are not assumed to show abrupt changes (see the right panels of Figures \ref{fig:nrm1}-\ref{fig:nrm3}), then the evolution of the magnetic confinement parameter is characterized by a systematic decrease in $B_p$ along with a systematic increase in $\dot{M}$. According to our calculations of a typical massive star model, at $T_{\rm eff} \sim 20 - 8 \, \mathrm{kK}$ a moderate magnetic confinement ($\eta_\star \approx 0.1 - 10$) is still retained. Ultimately, the stellar wind dominates over the magnetic field, which weakens as the star becomes a red supergiant, and as a consequence, the initially strong magnetic confinement ($\eta_\star \approx 10^3$) disappears in these models when they cross the HRD. Most observed magnetic hot stars are believed to be on the main sequence, however these results can be strongly influenced by the particular evolutionary models chosen for comparison, especially since most determinations rely on non-magnetic stellar evolution models. The actual position of the TAMS for these stars thus remains to be determined. In stellar evolution models, along with the efficiency of rotationally-induced instabilities and thus chemical mixing in the envelope, a key parameter that influences the position of the TAMS on the HRD is the value of core overshooting \citep{bressan1981,stothers1985,langer1986,maeder1991,higgins2019}. For instance, \cite{castro2014} remark that, in general, core overshooting may require a different calibration depending on the initial mass of the star, while grids of stellar evolution models typically use one calibrated value for an entire mass range (\citealt{schaller1992,brott2011};\break \citealt{ekstroem2012}, but see also \citealt{vandenberg2006}). Moreover, \cite{briquet2012} argued that fossil fields could suppress core overshooting, and \cite{petermann2015} included an \textit{ad hoc} reduction in the convective core size to explain observed properties of B supergiants. 
The choice of overshoot parameter determines the width of the main sequence, which in turn may change the derived ages and evolutionary status of observed magnetic stars. To observationally constrain the fossil field evolution scenario, further spectropolarimetric measurements are required of stars that are presumably evolved. Non-chemically peculiar B and A supergiants (in the range of $T_{\rm eff} \sim 25 - 8$~kK) are of interest in this respect, since, if magnetic flux conservation is valid, a fraction of those stars should have observable fields consistent with their progenitors in the O and early B phase. Indeed, a few of these objects have already been discovered; however, examples remain scarce. \cite{fossati2015b} report the detection of surface magnetic fields on the order of 10-100 G in the early B stars $\beta$~CMa (B1II, $T_{\rm eff} = 24.7$~kK, $M_\star = 12$ M$_\odot$, $B_p = 100$~G) and $\epsilon$~CMa (B1.5II, $T_{\rm eff} = 22.5$~kK, $M_\star = 13$ M$_\odot$, $B_p > 13$~G). Three non-chemically peculiar A-type supergiants are known to possess weak surface magnetic fields, namely $\iota$ Car (A7Ib, $T_{\rm eff} = 7.5$~kK, $M_\star = 6.9 - 11.0$ M$_\odot$, $B_p = 3$~G, \citealt{neiner2017}), HR 3890 (A7Ib, $T_{\rm eff} = 7.5$~kK, $M_\star = 10.9-14.7$ M$_\odot$, $B_p = 6$ G, \citealt{neiner2017}), and 19 Aur (A5Ib, $T_{\rm eff} = 8.5$~kK, $M_\star = 6.9 - 9.7$ M$_\odot$, $B_p = 3$~G, \citealt{martin2018}). It is currently unclear whether these fields in A supergiants are organized fossil fields or generated by a near-surface dynamo mechanism. These stars might belong to the group that exhibits Vega-like magnetism as detected in intermediate-mass stars \citep{lignieres2009}. Since the mass determination of these objects is highly uncertain (mostly because their rotational histories are not known and partially because non-magnetic stellar evolution models were used for comparison), it is possible that these are intermediate-mass stars. 
Nevertheless, a class of evolved hot massive stars with weak surface magnetic fields should be observable, quite similar to these detections, but even in the $>$~15~M$_\odot$ initial mass range. \subsection{Magneto-rotational evolution}\label{sec:mre} \begin{figure*} \includegraphics[width=19cm]{mrark.png} \caption{Evolution of a magnetic model M6 (RSB) on the R$_K$-R$_A$ plane, colour-coded with log g. As the star spins down, it crosses the dashed magenta line which represents the transition from the CM regime to the DM regime. The Alfv\'en radius decreases rapidly after the TAMS. The two cartoons are adopted from \citet{petit2013}.}\label{fig:mag2} \end{figure*} Magnetospheres of hot massive stars are classified by \cite{petit2013} into dynamical magnetospheres (DM) and centrifugal magnetospheres (CM), depending on the relative sizes of the Alfv\'en radius and Kepler co-rotation radius. The Alfv\'en radius $R_A$ is approximated by \cite{ud2002} as \begin{equation} \frac{R_A}{R_\star} = 0.29 + (\eta_\star + 0.25)^{0.25} \, , \end{equation} and the Kepler co-rotation radius is expressed as \begin{equation} \frac{R_K}{R_\star} = \left( \frac{v_{\rm rot}}{\sqrt{G M_\star / R_\star}} \right)^{-2/3} \, . \end{equation} A magnetosphere is classified as a DM when $R_A < R_K$, and as a CM when $R_K < R_A$. A DM contains wind material, driven from the surface and confined by closed magnetic loops, which co-rotate with the stellar surface, and then shocks near the magnetic equator before falling back onto the star, creating dynamically complex flows (e.g., \citealt{babel1997,ud2002,owocki2016}). In the case of rapidly-rotating stars, rotation plays a significant dynamical role as the additional centrifugal support past the Kepler co-rotation radius can lead to the accumulation of wind material trapped at the apex of the closed loops, forming a CM \citep{townsend2005}. Rotationally-modulated variations in a number of observational diagnostics (Balmer lines, UV, X-rays, etc.) 
have indeed been linked to the presence of such magnetospheres (e.g., \citealt{landstreet1978,shore1990,gagne2005,petit2013}). Figure \ref{fig:mag2} shows the evolution of model M6 (RSB) on the $R_K$-$R_A$ plane, the confinement-rotation diagram (see also Figure 3 of \citealt{petit2013}). Model M4 (R-B) follows a nearly identical evolution on this diagram with a somewhat larger noise close to the TAMS, thus for clarity only one model is shown. The star begins its main sequence evolution in the CM regime given the assumed initial surface magnetic field strength and rotational velocity. However, as the surface rotational velocity decreases, the Kepler radius increases. In the case of model M6 (RSB) the transition from the centrifugally supported magnetosphere regime to the dynamical magnetosphere regime occurs after the star becomes a slow rotator ($v_{\rm rot} <$ 50 km~s$^{-1}$) at 3.4 Myr after the ZAMS. At that time the Alfv\'en and Kepler radii are both 6.7 R$_\star$ (or roughly 34~R$_\odot$). It is presently unknown how this transition from the CM to DM regime could be properly characterized, and whether the wind material trapped in the centrifugally supported region would mostly fall back onto the star, would escape, or perhaps may be forming a disk as the transition occurs. New magnetohydrodynamic simulations will study the transition in more detail (ud-Doula, priv. comm.). The star then spends the remainder of its evolution in the DM regime. An abrupt decrease in $R_K$ occurs after the TAMS, due to the sudden increase of the stellar radius. Nevertheless, $R_K$ systematically increases with time, while $R_A$ systematically decreases. The former is mainly governed by the evolution of the rotational velocity on the main sequence as long as changes in $M_\star$ and $R_\star$ are modest. After the TAMS, the increase of $R_K$ is attributed to the decreasing stellar mass. 
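The two radii that define the confinement-rotation diagram can be evaluated directly from the expressions given above. The sketch below uses cgs constants and illustrative stellar parameters (not values taken from our model grid):

```python
import math

G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
RSUN = 6.957e10    # solar radius [cm]
MSUN = 1.989e33    # solar mass [g]

def alfven_radius(eta_star):
    """R_A / R_star from the ud-Doula & Owocki (2002) approximation
    quoted in the text: R_A/R* = 0.29 + (eta* + 0.25)^0.25."""
    return 0.29 + (eta_star + 0.25) ** 0.25

def kepler_radius(v_rot_kms, m_star_msun, r_star_rsun):
    """R_K / R_star = (v_rot / v_orb)^(-2/3), with v_orb the
    near-surface orbital speed sqrt(G M_star / R_star)."""
    v_orb_kms = math.sqrt(
        G_CGS * m_star_msun * MSUN / (r_star_rsun * RSUN)) / 1e5
    return (v_rot_kms / v_orb_kms) ** (-2.0 / 3.0)

def regime(eta_star, v_rot_kms, m_star_msun, r_star_rsun):
    """DM if R_A < R_K, CM if R_K < R_A."""
    ra = alfven_radius(eta_star)
    rk = kepler_radius(v_rot_kms, m_star_msun, r_star_rsun)
    return ("CM" if rk < ra else "DM"), ra, rk
```

For example, a strongly confined ($\eta_\star \sim 10^3$), rapidly rotating ($\sim$300~km~s$^{-1}$) 15 M$_\odot$ star near the ZAMS sits in the CM regime; spinning it down by an order of magnitude moves it into the DM regime, mirroring the evolution in the figure.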
The evolution of $R_A$ depends on the evolution of the magnetic confinement parameter $\eta_\star$ (see Figure \ref{fig:mag}). There is a stable decrease on the main sequence from $R_A =$ 7.7 R$_\star$ at the ZAMS to $R_A =$ 3.3 R$_\star$ at the TAMS. After the star crosses the HRD, the initially large-scale magnetosphere has minimal or no effect on the stellar atmosphere. A key take-away of the CM-to-DM evolution is that, although the details depend quantitatively on the initial mass, rotation rate, field strength, etc., identifying a centrifugally-supported magnetosphere allows constraints, in principle, to be placed on the age of the star. Since the surface spin-down of more massive stars is more rapid, magnetic O stars in the CM regime must be very young, close to their ZAMS\footnote{Or alternatively those stars may have been spun-up by, e.g., mass transfer in a binary system; see \citealt{grunhut2013}.}. On the other hand, in the case of observed DM stars with $R_K$ / $R_A$ $>$ 5, there may be a strong indication that the star is evolved and is possibly on the post-main sequence. This is consistent with the fact that those stars that fulfill this criterion typically have lower values of observed log $g$ than those that have smaller $R_K$ / $R_A$ ratios \citep[see Table 1 and Figure 3 of][]{petit2013}. Additionally, we mention that this method may be worth comparing with cluster ages when available (see also \citealt{schneider2016}). It may also prove to be a robust tool to estimate the age and evolutionary status of runaway magnetic stars, e.g., HD 57682 (see \citealt{comeron1998} and \citealt{grunhut2009}). \subsection{Uncertainties of fossil field evolution and limitations of the current models}\label{sec:unc} The main sequence is the longest and most stable amongst all burning stages in a star's lifetime. In single stars, it is expected that fossil fields would remain stable, but their strength generally weakens on the main sequence. 
Observations show evidence that these large-scale fossil fields are stable over decades \citep[e.g.,][]{oksala2012,silvester2014,shultz2016,sikora2018}. The adopted model of surface magnetic field evolution follows empirical constraints: the fields are strong on the main sequence ($B_p \approx 10^2 - 10^3 \, \mathrm{G}$), but they show evidence of weakening as stars age \citep{landstreet2007,landsreet2008,blazere2015,grunhut2017}. However, it remains debated whether the model of magnetic flux conservation is valid. Theoretical works \citep[e.g.,][]{braithwaite2017} point out that Ohmic decay is expected to lead to a more rapid decrease of $B_p$ than would result from magnetic flux conservation. However, the magnetic flux of compact remnants, in particular of neutron stars, is consistent with a constant magnetic flux evolution of their progenitor OB stars \citep{ferrario2008}. There are observational studies \citep{landstreet2007,fossati2016} that infer a field evolution model consistent with magnetic flux decay; other observational studies, on the other hand, argue for the validity of magnetic flux conservation \citep{grunhut2017,neiner2017}. Recently, \cite{shultz2018} proposed that the magnetic flux may depend on the stellar mass. To a large extent, this debate is a consequence of the lack of comprehensive observational data of evolved massive stars with sufficient precision to reliably test the different models (Petit et al., in prep.). During the post-main sequence phases two key phenomena occur that should determine the fate of a surface fossil field: i) the development of convective zones in the stellar envelope, and ii) the rapid change in the stellar radius. A major complexity is to consider the interplay between convective zones and a fossil magnetic field. Since this interaction is not well understood, we did not include the suppression of developing convective zones. 
This phenomenon is expected for strong magnetic fields; however, during the post-MS phase, the weaker fossil fields may not suppress these zones. Additionally, convective regions can give rise to dynamo activity, generating small-scale magnetic fields, which can interact with the fossil field (see, e.g., \citealt{feat2009}). The current challenges can be listed as follows: \begin{itemize} \item Parametric, one-dimensional prescriptions are not established to account for how a fossil field and convection would interact in massive stars. \item One-dimensional hydrodynamical codes treat convective layers in a simplified manner. Usually, the mixing-length theory and solid-body rotation are adopted in convective zones. \item Convection-driven, dynamo-generated magnetic fields are not yet ubiquitously implemented in stellar evolution codes. However, recent studies of the cores of evolved massive stars (e.g., \citealt{maeder2014b,cantiello2016,kissin2018}) have begun to explore such effects. It is, nevertheless, expected that B-type stars also maintain convective core dynamos \citep{aug2016}. \end{itemize} Therefore, while the weakening of the polar field strength is accounted for in our models via magnetic flux conservation, it is not evident how these fields transform as the star crosses the HRD. Our model calculations predict that near-surface convective zones appear after the TAMS, which may give rise to dynamo activity. Such dynamo activity is indeed observed in cool massive stars \citep{grunhut2010}. It remains a puzzling question to what depth fossil fields, which form a large-scale magnetosphere above the stellar surface, penetrate into the stellar interior (see, e.g., \citealt{mahes1988,braithwaite2006}). To more appropriately model magnetic braking, the depth to which the magnetic torque is exerted in the stellar interior should be established. 
The layers with strong magnetic torque are indeed expected to maintain a nearly flat angular velocity profile, however if there exist adjacent layers where the torque has ceased, then this could introduce a jump in the angular velocity profile leading to the development of strong shears. On the other hand, the strong shears may increase the efficiency of angular momentum transport, hence mitigating the break in the angular velocity profile. This is why the assumptions regarding internal angular momentum and chemical element transport are critical, because two fundamentally different scenarios yield different model responses to surface magnetic braking. Nonetheless, it is clear that magnetic fields in the stellar interior need to be considered jointly with stellar rotation \citep[e.g.,][]{mestel1987b,maeder2003,heger2005}. Multi-dimensional approaches have been invoked to study the interaction between stellar rotation and internal magnetic fields \citep[e.g.,][]{mathis2005,duez2010,mathis2011,mathis2012}, and recent studies have begun to focus on this interaction, primarily via the magneto-rotational instability in massive and intermediate-mass stellar evolution models \citep{wheeler2015,quentin2018}. \section{Conclusions} We have discussed the combined effects of mass-loss quenching and magnetic braking, considering an evolving dipolar surface fossil magnetic field, in the Geneva stellar evolution code, and in this work we have shown how 15 M$_{\odot}$ single star models evolve with and without surface magnetic fields. We computed non-rotating models, rotating models neglecting $D_{\rm ST}$ in the angular momentum transport equation to allow for differential rotation, and rotating models including $D_{\rm ST}$ in the angular momentum transport equation to achieve a flattened $\Omega$ profile. 
The key results of this study are the following: \begin{itemize} \item We showed that in the case of 15 M$_\odot$ stellar models, mass-loss quenching due to a fossil field is modest since mass loss over the lifetime of the star is modest compared to its initial mass. However, the rotational evolution of the star models is strikingly different, even if the HRD tracks are nearly identical as is the case for M5 (RS-) and M6 (RSB). \item We identified that even for 15 M$_{\odot}$ models the evolutionary tracks are notably different between magnetic and non-magnetic models if differential rotation is considered (i.e., the models evolve at higher luminosity if the star hosts a magnetic field). This is primarily a consequence of the rapid and strong shear mixing induced by magnetic braking. In non-rotating but initially higher-mass models ($>$ 40 M$_\odot$)\break \cite{petit2017} found that the increase of stellar luminosity results from magnetic mass-loss quenching alone. \item We found that main sequence rotating models which include a surface magnetic field evolve to a region of the Hunter diagram where the anomalous Group 2 objects are located. In accord with the results of \cite{meynet2011}, magnetic braking enhances the chemical enrichment if the star undergoes radial differential rotation, however the enrichment is reduced if the star rotates as a solid body. We found that including magnetic field evolution results in weakening magnetic braking, hence our magnetic models with solid-body rotation do evolve into Group 2 unlike the models by \cite{meynet2011}. \item Accounting for the evolution of the surface field, we placed constraints on the observable magnetic field strength for future spectropolarimetric observations and we studied the time evolution of the magnetospheric parameters. We found that the surface polar field strength weakens at maximum by an order of magnitude on the main sequence, however as the star crosses the HRD the decrease becomes more rapid. 
Consequently, the initially strong magnetic confinement results in an escaping wind fraction of only 10-20\% during the early evolution of the star. As the rotation of the star evolves, the Kepler co-rotation radius increases systematically, whereas the Alfv\'en radius decreases. In its early evolution, the star transitions from the centrifugally-supported magnetosphere regime to the dynamical magnetosphere regime of the confinement-rotation diagram. \end{itemize} To gain further insights towards understanding massive stars with surface fossil magnetic fields, the following points should be considered: \begin{itemize} \item The effects on the stellar surface by fossil magnetic fields can be incorporated into stellar evolution model calculations by means of scaling relations. However, it remains largely uncertain how the surface fields behave in the subsurface layers and throughout the radiative envelope of the star. Therefore, developing a formalism to account for that would be advantageous to constrain the angular momentum transport mechanisms used in stellar evolution model calculations. \item It is evident that due to the model dependence on stellar rotation, mass loss, initial mass, metallicity, and other parameters, a more comprehensive (parameter) study is required. To this extent, we are computing a large grid of models that will explore the above-mentioned parameter space in more detail, in order to study the consequences of individual parameters on model predictions (Keszthelyi et al., in prep.). \item Based on our model calculations, it would be worthwhile to invest in undertaking two large-scale observational surveys. i) There is interest in understanding how fossil fields evolve with time (the LIFE collaboration, \citealt{martin2018}), however data from stars at late evolutionary phases remain scarce. 
It would be valuable to know if evolved massive stars with late B and A spectral types retain weak, organized surface magnetic fields consistent with the characteristics and incidence of surface magnetism in the OB phase. ii) It has been proposed that Group 2 stars on the Hunter diagram may have surface magnetic fields, however no systematic survey of magnetism in these objects has been carried out. In our solar metallicity models the polar magnetic field strength weakens from a few kG to $\sim$ 800 G in the Group~2 star regime of the Hunter diagram. It would be worthwhile to investigate the fractional incidence of surface magnetism in known Group 2 stars. \end{itemize} \section*{Acknowledgements} We appreciate great discussions with R. Townsend,\break A. ud-Doula, J. Puls, and M. Shultz. We thank the anonymous referee for helpful comments on the manuscript. Z.K. greatly acknowledges the warm hospitality at the Geneva Observatory, where part of this research was carried out. G.M. and C.G. acknowledge support from the Swiss National Science Foundation (project number 200020-172505). G.A.W. acknowledges support in the form of a Discovery Grant from the Natural Science and Engineering Research Council (NSERC) of Canada. V.P. acknowledges support provided by (i) the National Aeronautics and Space Administration through Chandra Award Number GO3-14017A issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060, and (ii) program HST-GO-13734.011-A that was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. A.D.U. acknowledges support from NSERC. \bibliographystyle{mnras}
\section{Introduction} The field of reinforcement learning (RL) involves an agent interacting with an environment, maximizing a cumulative reward \citep{puterman2014markov}. As RL becomes more instrumental in real-world applications \citep{lazic2018data,kiran2021deep,mandhane2022muzero}, exogenous inputs beyond the prespecified reward pose a new challenge. Particularly, an external authority (e.g., a human operator) may decide to terminate the agent's operation when it detects undesirable behavior. In this work, we generalize the basic RL framework to accommodate such external feedback. We propose a generalization of the standard Markov Decision Process (MDP), in which external termination can occur due to a non-Markovian observer. When terminated, the agent stops interacting with the environment and cannot collect additional rewards. This setup describes various real-world scenarios, including: passengers in autonomous vehicles \citep{le2015autonomous,zhu2020safe}, users in recommender systems \citep{wang2009recommender}, employees terminating their contracts (churn management) \citep{sisodia2017evaluation}, and operators in factories; particularly, datacenter cooling systems, or other safety-critical systems, which require constant monitoring and rare, though critical, human takeovers \citep{modares2015optimized}. In these tasks, human preferences, incentives, and constraints play a central role, and designing a reward function to capture them may be highly complex. Instead, we propose to let the agent itself learn these latent human utilities by leveraging the termination events. We introduce the Termination Markov Decision Process (TerMDP), depicted in \Cref{fig: termination diagram}. We consider a terminator, observing the agent, which aggregates penalties w.r.t. a predetermined, state-dependent, yet \emph{unknown}, cost function. 
As the agent progresses, unfavorable states accumulate costs that gradually increase the terminator's inclination to stop the agent and end the current episode. Receiving merely the sparse termination signals, the agent must learn to behave in the environment, adhering to the terminator's preferences while maximizing reward. Our contributions are as follows. \textbf{(1)} We introduce a novel history-dependent termination model, a natural extension of the MDP framework which incorporates non-trivial termination (\Cref{sec: perliminaries}). \textbf{(2)} We learn the unknown costs from the implicit termination feedback (\Cref{sec: theory}), and provide local guarantees w.r.t. every visited state. We leverage our results to construct a tractable algorithm and provide regret guarantees. \textbf{(3)} Building upon our theoretical results, we devise a practical approach that combines optimism with a cost-dependent discount factor, which we test on MinAtar \citep{young19minatar} and a new driving benchmark. \textbf{(4)} We demonstrate the efficiency of our method on these benchmarks as well as on human-collected termination data (\Cref{sec: experiments}). Our results show significant improvement over other candidate solutions, which involve direct termination penalties and history-dependent approaches. We also introduce a new task for RL -- a driving simulation game which can be easily deployed on mobile phones, consoles, and PC \footnote{Code for Backseat Driver and our method, TermPG, can be found at \href{https://github.com/guytenn/Terminator}{https://github.com/guytenn/Terminator}.}. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{imgs/termination.pdf} \caption{ A block diagram of the TerMDP framework. An agent interacts with an environment while an exogenous observer (i.e., terminator) can choose to terminate the agent based on previous interactions. 
If the agent is terminated, it transitions to a sink state where a reward of $0$ is given until the end of the episode.} \label{fig: termination diagram} \end{figure} \section{Termination Markov Decision Process} \label{sec: perliminaries} We begin by presenting the termination framework and the notation used throughout the paper. Informally, we model the termination problem using a logistic model of past ``bad behaviors''. We use an unobserved state-dependent cost function to capture these external preferences. As the overall cost increases over time, so does the probability of termination. For a positive integer $n$, we denote $[n] = \brk[c]*{1, \hdots, n}$. We define the Termination Markov Decision Process (TerMDP) by the tuple $\terMDP=(\sset,\aset, P,R,H,c)$, where $\sset$ and $\aset$ are state and action spaces with cardinality $S$ and $A$, respectively, and $H\in\N$ is the maximal horizon. We consider the following protocol, which proceeds in discrete episodes $k=1, 2, \hdots, K$. At the beginning of each episode~$k$, an agent is initialized at state $s_1^k \in \sset$. At every time step $h$ of episode~$k$, the agent is at state $s_h^k \in \sset$, takes an action $a_h^k \in \aset$ and receives a random reward $R_h^k\in[0,1]$ generated from a fixed distribution with mean $r_h(s_h^k,a_h^k)$. A terminator overseeing the agent utilizes a cost function $c: \brk[s]{H}\times \s \times \A \mapsto \R$ that is unobserved and \emph{unknown to the agent}. At time step $h$, the episode terminates with probability \begin{align*} \rho_h^k(c) = \rho\brk*{\sum_{t=1}^h c_t(s_t^k,a_t^k) - b}, \end{align*} where $\rho(x)=\brk*{1+\exp(-x)}^{-1}$ is the logistic function and $b \in \R$ is a bias term which determines the termination probability when no costs are aggregated. Upon termination, the agent transitions to a terminal state $\terminalstate$ which yields no reward, i.e., $r_h(\terminalstate,a)=0$ for all $h\in\brk[s]*{H}, a\in\aset$. 
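To illustrate the protocol, the following sketch samples termination step by step with the logistic probability $\rho_h^k(c)$ defined above. It is a simplified illustration (not the paper's implementation): the stand-in cost function depends only on the time step, whereas the true costs are state- and action-dependent.

```python
import math
import random

def termination_prob(accumulated_costs, b):
    """rho_h^k(c) = sigmoid(sum_{t<=h} c_t(s_t^k, a_t^k) - b)."""
    return 1.0 / (1.0 + math.exp(-(sum(accumulated_costs) - b)))

def rollout(cost_fn, H, b, rng=random.random):
    """Run one episode of length at most H; return the termination
    step t* (or H if the terminator never intervenes). cost_fn(h)
    stands in for the unknown state-action cost c_h(s_h, a_h)."""
    costs = []
    for h in range(1, H + 1):
        costs.append(cost_fn(h))
        if rng() < termination_prob(costs, b):
            return h  # the agent moves to the terminal sink state
    return H

# With c = 0 the per-step termination probability is sigmoid(-b),
# so a large bias b makes spontaneous termination rare.
```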
If no termination occurs, the agent transitions to a next state $s_{h+1}^k$ with probability $P_h(s_{h+1}^k | s_h^k,a_h^k)$. Let $t_k^*=\min\brk[c]*{h:s_h^k=\terminalstate}-1$ be the time step when the $k^{\text{th}}$ episode was terminated. Notice that the termination probability is non-Markovian, as it depends on the entire trajectory history. We also note that, when $c \equiv 0$, the TerMDP reduces to a finite horizon MDP with discount factor $\gamma = \rho\brk*{-b}$. Finally, we note that our model allows for negative costs. Indeed, these may capture satisfactory behavior, diminishing the effect of previous mistakes, and decreasing the probability of termination. We define a stochastic, history dependent policy $\pi_h(s_h, \tau_{1:h})$ which maps trajectories $\tau_{1:h} = (s_1, a_1, \hdots, s_{h-1}, a_{h-1})$ up to time step $h$ (excluding) and the $h^{\text{th}}$ states $s_{h}$ to probability distributions over $\aset$. Its value is defined by ${ V_h^{\pi}(s,\tau) \!=\! \E{\sum_{t=h}^H r_t(s_t, a_t) | s_h=s, \tau_{1:h} = \tau, a_t \sim \pi_t(s_t,\tau_{1:t})}. }$ With slight abuse of notation, we denote the value at the initial time step by $V_1^\pi(s)$. An optimal policy $\pi^*$ maximizes the value for all states and histories simultaneously \footnote{Such a policy always exists; we can always augment the state space with the history, which would make the environment Markovian and imply the existence of an optimal history-dependent policy \citep{puterman2014markov}.}; we denote its value function by $V^*$. We measure the performance of an agent by its \emph{regret}; namely, the difference between the cumulative value it achieves and the value of an optimal policy, ${ \Reg{K} = \sum_{k=1}^K V_1^*(s_1^k) - V_1^{\pi^k}(s_1^k). }$ \textbf{Notations. } We denote the Euclidean norm by $\norm{\cdot}_2$ and the Mahalanobis norm induced by the positive definite matrix $A\succ 0$ by $\norm{x}_A=\sqrt{x^TAx}$. 
We denote by $n_h^k(s,a)$ the number of times that a state-action pair $(s,a)$ was visited at the $h^{\text{th}}$ time step before the $k^{\text{th}}$ episode. Similarly, we denote by $\hat{X}_h^k(s,a)$ the empirical average of a random variable $X$ (e.g., reward and transition kernel) at $(s,a)$ in the $h^{\text{th}}$ time step, based on all samples before the $k^{\text{th}}$ episode. We assume there exists a known constant $L$ that bounds the norm of the costs; namely, $\sqrt{\sum_{h=1}^{H} \sum_{s, a} c^2_h(s,a)} \leq L$. We also denote the maximal reciprocal derivative of the logistic function by ${ \kappa = \max_{h\in\brk[s]{H}}\enspace\max_{\brk[c]*{(s_t,a_t)}_{t=1}^h\in(\sset\times\aset)^h} \brk*{\dot{\rho}\brk*{\sum_{t=1}^hc_t(s_t,a_t) - b}}^{-1}. }$ This factor will be evident in our theoretical analysis in the next section, as estimating the costs in regions of saturation of the sigmoid is more difficult when the derivative nears zero. Finally, we use $\mathcal{O}(x)$ to refer to a quantity that depends on $x$ up to a poly-log expression in $S, A, K, H, L, \kappa$ and $\log\brk*{\frac{1}{\delta}}$. \section{An Optimistic Approach to Overcoming Termination} \label{sec: theory} Unlike the standard MDP setup, in the TerMDP model, the agent can potentially be terminated at any time step. Consider the TerMDP model for which the costs are \emph{known}. We can define a Markov policy $\pi_h$ mapping augmented states $\sset \times \R$ to a probability distribution over actions, where here, the state space is augmented by the accumulated costs $\sum_{t=1}^{h-1} c_t(s_t, a_t)$. There exists a policy that uses no historical information besides the accumulated costs, yet achieves the value of the optimal history-dependent policy (see \Cref{appendix: known costs}). Therefore, when solving for an optimal policy (e.g., by planning), one can use the current accumulated cost instead of the full trajectory history. 
This suggests a plausible approach for solving the TerMDP -- first learn the cost function, and then solve the state-augmented MDP for which the costs are known. This, in turn, leads to the following question: \textbf{can we learn the costs $c$ from the termination signals?} In what follows, we answer this question affirmatively. We show that by using the termination structure, one can efficiently converge to the true cost function \emph{locally} -- for every state and action. We provide uncertainty estimates for the state-wise costs, which allow us to construct an efficient optimistic algorithm for solving the problem. \textbf{Learning the Costs. } To learn the costs, we show that the agent can effectively gain information about costs even in time steps where no termination occurs. Recall that at any time step $h\in[H-1]$, the agent acquires a sample from a Bernoulli random variable with parameter $p = \rho_h^k(c) = \rho\brk*{\sum_{t=1}^h c_t(s_t^k,a_t^k) - b}$. Notably, a lack of termination, which occurs with probability $1-\rho_h^k(c)$, is also an informative signal of the unknown costs. We propose to leverage this information by recognizing the costs $c$ as parameters of a probabilistic model, maximizing their likelihood. We use the regularized cross-entropy, defined for some $\lambda > 0$ by \begin{align} \label{eq: cost likelihood} \mathcal{L}^k_\lambda(c) = \sum_{k'=1}^{k} \sum_{h=1}^{H-1} \left[ \indicator{h< t_{k'}^*}\log\brk*{1- \rho_h^{k'}(c)} + \indicator{h=t_{k'}^*}\log\brk*{\rho_h^{k'}(c) }\right] - \lambda\norm{c}_2^2. \end{align} By maximizing the cost likelihood in \Cref{eq: cost likelihood}, global guarantees on the cost estimates can be obtained, similar to previous work on logistic bandits \citep{zhang2016online,abeille2021instance}.
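As an illustration, the (negated) objective of \Cref{eq: cost likelihood} can be evaluated directly, assuming linear costs $c^\top x$ over per-step feature vectors; the function and data layout below are illustrative. The problem is concave in $c$, so any standard smooth optimizer can maximize it.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_log_likelihood(c, episodes, b, lam):
    """Negated, regularized termination cross-entropy (for use with a minimizer).

    c: cost weight vector (list of floats); episodes: list of
    (features, terminated) pairs, where features[h] is the feature vector of
    step h. Linear costs and all names are illustrative assumptions.
    """
    nll = lam * sum(ci * ci for ci in c)
    for feats, terminated in episodes:
        csum = 0.0
        for h, x in enumerate(feats):
            csum += sum(ci * xi for ci, xi in zip(c, x))
            p = sigmoid(csum - b)                # termination probability at step h
            if terminated and h == len(feats) - 1:
                nll -= math.log(p)               # termination observed at t*
            else:
                nll -= math.log(1.0 - p)         # survived step h
    return nll
```

A non-terminated episode contributes a "survived" term at every step, which is exactly why non-terminations carry information about $c$.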
Particularly, denoting by $\hat{c}^k \in\arg\max \mathcal{L}^k_\lambda(c)$ the maximum likelihood estimates of the costs, it can be shown that for any history, a global upper bound on $\norm{\hat{c}^k-c}_{\Sigma_k}$ can be obtained, where the history-dependent design matrix $\Sigma_k$ captures the empirical correlations of visitation frequencies (see \Cref{supp: cost concentration} for details). Unfortunately, an algorithm based on $\norm{\hat{c}^k-c}_{\Sigma_k}$ is computationally intractable \citep{chatterji2021theory}, and thus undesirable. Instead, as terminations are sampled on \emph{every time step} (i.e., non-terminations are informative signals as well), we show that we can obtain a \emph{local} bound on the cost function $c$. Specifically, we show that the error $\abs{\hat c_h^k(s,a) - c_h(s,a)}$ diminishes with $n_h^k(s,a)$. The following result is a main contribution of our work, and the crux of our regret guarantees later on (see \Cref{supp: cost concentration} for proof). \begin{theorem}[Local Cost Estimation Confidence Bound] \label{thm: local cost confidence front} Let $\hat{c}^k \in\arg\max \mathcal{L}^k_\lambda(c)$ be the maximum likelihood estimate of the costs. Then, for any $\delta>0$, with probability at least $1-\delta$, for all episodes $k\in [K]$, timesteps $h\in[H-1]$ and state-actions $(s,a)\in\sset\times\aset$, it holds that \begin{align*} \abs{\hat c_h^k(s,a) - c_h(s,a) } \leq \Ob\brk*{ \brk*{n^k_h(s,a)}^{-0.5} \sqrt{\kappa SAHL^{3}} \log\brk*{\frac{1}{\delta}\brk*{1+\frac{kL}{S^2A^2H}}} }. \end{align*} \end{theorem} We note the presence of $\kappa$ in our upper bound, a common factor \citep{chatterji2021theory}, which is fundamental to our analysis, capturing the complexity of estimating the costs. Trajectories that saturate the logistic function lead to more difficult credit assignment.
Specifically, when the accumulated costs are high, any additional penalty would only marginally change the termination probability, making its estimation harder. A similar argument can be made when the termination probability is low. \begin{algorithm}[t!] \caption{TermCRL: Termination Confidence Reinforcement Learning} \label{alg: Termination CRL} \begin{algorithmic}[1] \STATE{ \textbf{require:} $\lambda >0$} \FOR{$k=1, \hdots, K$} \FOR{$(h,s,a) \in [H]\times \sset \times \aset$} \STATE $\bar r_h^{k}(s,a) = \hat r_h^{k}(s,a) + b^r_{k}(h,s,a) + b^p_{k}(h,s,a)$ \STATE $\bar c_h^{k}(s,a) = \hat c_h^{k}(s,a) - b^{c}_{k}(h,s,a)$ \hfill {\color{gray}// \Cref{appendix: optimism}} \ENDFOR \STATE $\pi^k \gets $ TerMDP-Plan$\brk*{\terMDP \brk1{\sset,\aset,H,\bar{r}^{k},\hat{P}^{k},\bar{c}^{k}}}$ \hfill {\color{gray}// \Cref{appendix: planning}} \STATE Roll out a trajectory by acting according to $\pi^k$ \STATE $\hat c^{k+1} \in \arg\max_c \mathcal{L}^k_\lambda(c)$ \hfill {\color{gray}// \Cref{eq: cost likelihood}} \STATE Update $\hat{P}^{k+1}(s,a), \hat{r}^{k+1}(s,a), n^{k+1}(s,a)$ over the rollout trajectory \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Termination Confidence Reinforcement Learning} We are now ready to present our method for solving TerMDPs with unknown costs. Our proposed approach, which we call Termination Confidence Reinforcement Learning (TermCRL), is shown in \Cref{alg: Termination CRL}. Leveraging the local convergence guarantees of \Cref{thm: local cost confidence front}, we estimate the costs by maximizing the likelihood in \Cref{eq: cost likelihood}. We compensate for uncertainty in the reward, transitions, and costs by incorporating optimism.
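In code, the optimistic adjustments in lines 4--5 of \Cref{alg: Termination CRL} amount to count-based bonuses. The sketch below mirrors the general $\sqrt{1/n}$ shape of the bonus definitions; the constants are placeholders, and all names are illustrative.

```python
import numpy as np

def optimistic_tables(r_hat, c_hat, n, delta=0.05, L=1.0, kappa=1.0):
    """Lines 4-5 of TermCRL: optimism via count-based bonuses.

    r_hat, c_hat, n: (H, S, A) arrays of empirical rewards, estimated costs,
    and visit counts. Bonus constants here are placeholders; see the explicit
    definitions in the text.
    """
    H, S, A = r_hat.shape
    n_safe = np.maximum(n, 1.0)                       # n ∨ 1
    b_r = np.sqrt(np.log(1 / delta) / n_safe)                         # reward bonus
    b_p = np.sqrt(S * H**2 * np.log(1 / delta) / n_safe)              # transition bonus
    b_c = np.sqrt(kappa * S * A * H * L**3 / n_safe) * np.log(1 / delta)  # cost bonus
    r_bar = r_hat + b_r + b_p                         # optimistic reward
    c_bar = c_hat - b_c                               # optimistic (lower) cost
    return r_bar, c_bar
```

As visit counts grow, all three bonuses shrink at rate $1/\sqrt{n}$, and the optimistic tables converge to the empirical estimates.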
We define bonuses for the reward, transition, and cost function by $b_k^r(h,s,a) = \Ob\brk*{\sqrt{\frac{\log\brk*{1/\delta}}{n_h^k(s,a)\vee 1}}}, b_k^p(h,s,a)=\Ob\brk*{\sqrt{\frac{SH^2\log\brk*{1/\delta}}{n_h^k(s,a)\vee 1}}}$, and $b_k^c(h,s,a)=\Ob\brk*{\sqrt{\frac{\kappa SAHL^3}{n_h^k(s,a)\vee 1}}\log\brk*{\frac{1}{\delta}}}$ for some $\delta > 0$ (see \Cref{appendix: optimism} for explicit definitions). We add the reward and transition bonuses to the estimated reward (line 4), while the optimistic cost bonus is applied directly to the estimated costs (line 5). Then, a planner (line 7) solves the optimistic MDP for which the costs are known and are given by their optimistic counterparts. We refer the reader to \Cref{appendix: planning} for further discussion on planning in TerMDPs. The following theorem provides regret guarantees for \Cref{alg: Termination CRL}. Its proof is given in \Cref{appendix: regret analysis} and relies on \Cref{thm: local cost confidence front} and the analysis of UCRL \citep{auer2008near,efroni2019tight}. \begin{restatable}{theorem}{MainResult}[Regret of TermCRL] \label{theorem: main} With probability at least $1-\delta$, the regret of \Cref{alg: Termination CRL} is \begin{align*} \Reg{K} \leq \Ob\brk*{\sqrt{\kappa S^2A^2H^7L^3 K\log^2\brk*{ \frac{SAHK}{\delta}}}}. \end{align*} \end{restatable} Compared to the standard regret of UCRL \citep{auer2008near}, an additional $\sqrt{\kappa AH^4L^3}$ multiplicative factor is evident in our result, which is due to the convergence rates of the costs in \Cref{thm: local cost confidence front}. Motivated by our theoretical results, in what follows we propose a practical approach, inspired by \Cref{alg: Termination CRL}, which utilizes local cost confidence intervals in a deep RL framework. \begin{algorithm}[t!] 
\caption{TermPG} \label{alg: Termination PG} \begin{algorithmic}[1] \STATE{ \textbf{require:} window $w$, number of ensembles $M$, number of rollouts $N$, number of iterations $K$, policy gradient algorithm \texttt{ALG-PG}} \STATE{ \textbf{initialize:} $\B_{\text{pos}} \gets \emptyset, \B_{\text{neg}} \gets \emptyset, \pi_{\theta} \gets $ random initialization } \FOR{$k=1,\hdots,K$} \STATE Roll out $N$ trajectories using $\pi_{\theta}$, $\mathcal{R} = \brk[c]*{s^i_1, a^i_1, r^i_1, \hdots, s^i_{t^*_i}, a^i_{t^*_i}, r^i_{t^*_i}}_{i=1}^N$. \FOR{$i=1, \hdots, N$} \STATE Add $t^*_i - 1$ negative examples $\brk2{s_{\max\brk[c]*{1, l-w+1}}, a_{\max\brk[c]*{1, l-w+1}}, \hdots, s_l, a_l}_{l=1}^{t^*-1}$ to $\B_{\text{neg}}$. \STATE Add one positive example $\brk2{s_{\max\brk[c]*{1, t^*-w+1}}, a_{\max\brk[c]*{1, t^*-w+1}}, \hdots, s_{t^*}, a_{t^*}}$ to $\B_{\text{pos}}$. \ENDFOR \STATE Train bootstrap ensemble $\brk[c]*{c_{\phi_m}}_{m=1}^M$ using binary cross entropy over data $\B_{\text{neg}}, \B_{\text{pos}}$. \STATE Augment states in $\mathcal{R}$ by $s_l^i \gets s_l^i \cup \sum_{j=1}^{\min\brk[c]*{w, l}} \min_{m} c_{\phi_m}(s^i_{l-j}, a^i_{l-j})$. \STATE Update policy $\pi_{\theta} \gets \texttt{ALG-PG}\brk*{\mathcal{R}}$ with dynamic discount (see \Cref{sec: discount factor}). \ENDFOR \end{algorithmic} \end{algorithm} \section{Termination Policy Gradient} \label{sec: termpg} Following the theoretical analysis in the previous section, we propose a practical approach for solving TerMDPs. Particularly, in this section, we devise a policy gradient method that accounts for the unknown costs leading to termination. We assume a stationary setup for which the transitions, rewards, costs, and policy are time-homogeneous. Our approach consists of three key elements: learning the costs, leveraging uncertainty estimates over costs, and constructing efficient value estimates through a dynamic cost-dependent discount factor.
\Cref{alg: Termination PG} describes the Termination Policy Gradient (TermPG) method, which trains an ensemble of cost networks (to estimate the costs and uncertainty) over rollouts in a policy gradient framework. We represent our policy and cost networks using neural networks with parameters $\theta, \brk[c]{\phi_m}_{m=1}^M$. At every iteration, the agent rolls out $N$ trajectories in the environment using a parametric policy,~$\pi_\theta$. The rollouts are split into subtrajectories which are labeled w.r.t. the termination signal, where positive labels are used for examples that end with termination. Particularly, we split the rollouts into ``windows'' (i.e., subtrajectories of length $w$), where a rollout of length $t^*$, which ends with termination, is split into $t^*-1$ negative examples $\brk*{s_{\max\brk[c]*{1, l-w+1}}, a_{\max\brk[c]*{1, l-w+1}}, \hdots, s_l, a_l}_{l=1}^{t^*-1}$, and one positive example $\brk*{s_{\max\brk[c]*{1, t^*-w+1}}, a_{\max\brk[c]*{1, t^*-w+1}}, \hdots, s_{t^*}, a_{t^*}}$. Similarly, a rollout of length $H$ which does not end with termination contains $H$ negative examples. We note that by taking finite windows, we assume the terminator ``forgets'' accumulated costs that are not recent -- a generalization of the theoretical TerMDP model in \Cref{sec: perliminaries}, for which $w=H$. In \Cref{sec: experiments}, we provide experiments with misspecification of the true underlying window width, where this model assumption does not hold. \subsection{Learning the Costs} Having collected a dataset of positive and negative examples, we train a logistic regression model consisting of an ensemble of $M$ cost networks $\brk[c]*{c_{\phi_m}}_{m=1}^M$, shared across timesteps, as depicted in \Cref{fig: cost diagram}.
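The window-splitting step can be sketched as follows; the helper name and data layout are illustrative assumptions.

```python
def split_rollout(states, actions, terminated, w):
    """Split one rollout into windowed examples labeled by termination.

    states/actions are aligned sequences of length t (t = t* if terminated,
    else t = H). Returns (negatives, positives) as lists of subtrajectories,
    each a list of (state, action) pairs of length at most w.
    """
    t = len(states)
    negatives, positives = [], []
    for l in range(1, t + 1):
        lo = max(1, l - w + 1)  # 1-indexed math mapped to 0-indexed slices
        window = list(zip(states[lo - 1:l], actions[lo - 1:l]))
        if terminated and l == t:
            positives.append(window)   # the final window ends in termination
        else:
            negatives.append(window)
    return negatives, positives
```

A terminated rollout of length $t^*$ thus yields $t^*-1$ negatives and one positive, while a full-horizon rollout yields only negatives, matching the labeling described above.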
Specifically, for an example $\brk*{s_{\max\brk[c]*{1, l-w+1}}, a_{\max\brk[c]*{1, l-w+1}}, \hdots, s_l, a_l}$ we estimate the termination probability by $\rho\brk*{\sum_{j=1}^{\min\brk[c]*{w,l}} c_{\phi_m}\brk*{s_{l-j+1}, a_{l-j+1}} - b_m}$, where $\brk[c]*{b_m}_{m=1}^M$ are learnable bias parameters. The parameters are then learned end-to-end using the cross-entropy loss. We use the bootstrap method \citep{bickel1981some,chua2018deep} over the ensemble of cost networks. This ensemble is later used in \Cref{alg: Termination PG} to produce optimistic estimates of the costs. Particularly, the agent policy $\pi_\theta$ uses the current state augmented by the optimistic cumulative predicted cost, i.e., $s_l^{\text{aug}} = \brk*{s_l, C_{\text{optimistic}}}$, where $C_{\text{optimistic}} = \sum_{j=1}^{\min\brk[c]*{w, l}} \min_{m} c_{\phi_m}(s_{l-j+1}, a_{l-j+1})$. Finally, the agent is trained with the augmented states using a policy gradient algorithm \texttt{ALG-PG} (e.g., PPO \citep{schulman2017proximal}, IMPALA \citep{espeholt2018impala}). \begin{figure}[t!] \includegraphics[width=1\linewidth]{imgs/costs.pdf} \caption{ Block diagram of the cost training procedure. Rollouts are split into subtrajectories, labeled according to whether they end in termination. Given the dataset of labeled subtrajectories, an ensemble of $M$ cost networks is trained end-to-end using cross-entropy with bootstrap samples (all time steps share the same ensemble).} \label{fig: cost diagram} \end{figure} \subsection{Optimistic Dynamic Discount Factor} \label{sec: discount factor} While augmenting the state with the optimistic accumulated costs is sufficient for obtaining optimality, we propose to further leverage these estimates more explicitly -- noticing that the finite horizon objective we are solving can be cast to a discounted problem.
Particularly, it is well known that the discount factor $\gamma \in (0,1)$ can be equivalently formulated as the probability of ``staying alive'' (see the discounted MDP framework, \citet{puterman2014markov}). Similarly, by augmenting the state $s$ with the accumulated cost $C_h=\sum_{t=1}^h c(s_t, a_t)$, we view the probability $1-\rho(C_h)$ as a state-dependent discount factor, capturing the probability that an agent in a TerMDP is not terminated. We define a dynamic, cost-dependent discount factor for value estimation. We use the state-action value function $Q(s,a,C)$ over the augmented states, defined for any~$s, a, C$ by $ Q^\pi(s,a,C) = \expect*{\pi}{\sum_{t=1}^H \brk*{\prod_{h=1}^t \gamma_h } r(s_t, a_t) | s_1 = s, a_1=a, C_1 = C}, $ where ${\gamma_h = 1-\rho\brk*{C + \sum_{i=2}^{h-1} c(s_i, a_i) - b}}$. This yields the Termination Bellman Equations $ Q^\pi(s,a,C) = r(s,a) + \brk*{1-\rho(C)}\expect*{s' \sim P(\cdot | s,a), a' \sim \pi(s')}{Q^\pi(s', a', C + c(s', a'))} $ (see \Cref{appendix: termination bellman equations} for derivation). To incorporate uncertainty in the estimated costs, we use the optimistic accumulated costs $C_{\text{optimistic}} = \sum_{j=1}^{\min\brk[c]*{w, l}} \min_{m} c_{\phi_m}(s_{l-j+1}, a_{l-j+1})$. Then, the discount factor becomes ${\gamma\brk*{C_{\text{optimistic}}} = 1-\rho\brk*{C_{\text{optimistic}} - b}}$. Assuming that, w.h.p., optimistic costs are smaller than the true costs, the discount factor decreases as the agent exploits previously visited states. The dynamic discount factor allows us to obtain a more accurate value estimator. In particular, we leverage the optimistic cost-dependent discount factor $\gamma\brk*{C_{\text{optimistic}}}$ in our value estimation procedure, using Generalized Advantage Estimation (GAE, \citet{schulman2015high}). As we will show in the next section, using the optimistic discount factor significantly improves overall performance.
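A minimal sketch of the resulting return computation, with each reward weighted by the product of survival probabilities accumulated so far; the helper name and trajectory layout are illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discounted_returns(rewards, costs, b, C0=0.0):
    """Monte-Carlo returns under the dynamic, cost-dependent discount.

    gamma_h = 1 - rho(C_h - b), where C_h accumulates the (optimistic)
    per-step costs along one trajectory. A minimal sketch with
    illustrative names; costs may come from an ensemble minimum.
    """
    T = len(rewards)
    G = []
    for t0 in range(T):
        C, surv, g = C0, 1.0, 0.0
        for t in range(t0, T):
            g += surv * rewards[t]           # reward weighted by survival so far
            C += costs[t]
            surv *= 1.0 - sigmoid(C - b)     # dynamic discount gamma_t
        G.append(g)
    return G
```

With negligible costs and a large bias the discount is near one and the plain undiscounted return is recovered; with large accumulated costs, rewards beyond the first step are discounted away almost entirely.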
\section{Experiments} \label{sec: experiments} In this section, we evaluate the strength of our approach, comparing it to several baselines, including: \textbf{(1) PG (naive):} The standard policy gradient without additional assumptions, which ignores termination. \textbf{(2) Recurrent PG:} The standard policy gradient with a history-dependent recurrent policy (without cost estimation or dynamic discount factor). As the history is a sufficient statistic of the costs, the optimal policy is realizable. \textbf{(3) PG with Reward Shaping (RS):} We penalize the reward upon termination by a constant value, i.e., $r(s,a) - p\indicator{\terminalstate}$, for some $p > 0$. This approach can be applied to any variant of \Cref{alg: Termination PG} or the methods listed above. \textbf{(4) TermPG:} Described in \Cref{alg: Termination PG}. We additionally implemented two variants of TermPG, including: \textbf{(5) TermPG with Reward Shaping:} We penalize the reward with a constant value upon termination. \textbf{(6) TermPG with Cost Penalty:} We penalize the reward at every time step by the optimistic cost estimator, i.e., $r - \alpha C_{\text{optimistic}}$ for some $\alpha > 0$. All TermPG variants used an ensemble of three cost networks, and a dynamic cost-dependent discount factor, as described in \Cref{sec: discount factor}. We report mean and std. of the total reward (without penalties) for all our experiments. \begin{wrapfigure}[12]{R}{0pt} \centering \includegraphics[width=0.4\linewidth]{imgs/backseatdriver.png} \caption{ Backseat Driver} \label{fig: backseat driver ingame} \end{wrapfigure} \textbf{Backseat Driver (BDr). } We simulated a driving application, using MLAgents \citep{juliani2018unity}, by developing a new driving benchmark, ``Backseat Driver'' (depicted in \Cref{fig: backseat driver ingame}), where we tested both synthetic and human terminations. The game consists of a five-lane never-ending road, with randomly instantiating vehicles and coins.
The agent can switch lanes and is rewarded for overtaking vehicles. In our experiments, states were represented as top view images containing the position of the agent, nearby cars, and coins with four stacked frames. We used a finite window of length $120$ for termination ($30$ agent decision steps), mimicking a passenger forgetting mistakes of the past. \textbf{BDr Experiment 1: Coin Avoidance.} In the first experiment of Backseat Driver, coins are considered as objects the driver must avoid. The coins signify unknown preferences of the passenger, which are not explicitly provided to the agent. As the agent collects coins, a penalty is accumulated, and the agent is terminated probabilistically according to the logistic cost model in \Cref{sec: perliminaries}. We emphasize that, while the coins are visible to the agent (i.e., part of the agent's state), the agent only receives feedback from collecting coins through implicit terminations. \begin{figure}[t!] \centering \includegraphics[width=0.4\linewidth]{imgs/backseat_driver_learn_vs_lstm.png} \includegraphics[width=0.4\linewidth]{imgs/backseat_driver_human.png} \includegraphics[width=0.18\linewidth]{imgs/legend.png} \caption{ Mean reward with std. over five seeds of ``Backseat Driver''. Left: coin avoidance; right: human termination. Variants with reward shaping (RS, orange and brown) penalize the agent with a constant value upon termination. The recurrent PG variant (green) uses a history-dependent policy without learning costs. The TermPG+Penalty variant (purple) penalizes the reward at every time step using the estimated costs.} \label{fig: backseat driver} \end{figure} Results for Backseat Driver with coin-avoidance termination are depicted in \Cref{fig: backseat driver}. We compared TermPG (pink) and its two variants (brown, purple) to the PG (blue), recurrent PG (green), and reward shaping (orange) methods described above. 
Our results demonstrate that TermPG (pink) significantly outperforms the history-based and penalty-based baselines, doubling the reward of the best PG variant. All TermPG variants converged quickly to a good solution, suggesting fast convergence of the cost estimates (see \Cref{appendix: additional results}). \textbf{BDr Experiment 2: Human Termination.} To complement our results, we evaluated human termination on Backseat Driver. For this, we generated data of termination sequences from agents of varying quality (ranging from random to expert performance). We asked five human supervisors to label subsequences of this data by terminating the agent in situations of ``continual discomfort''. This guideline was kept ambiguous to allow for diverse termination signals. The final dataset consisted of 512 termination examples. We then trained a model to predict human termination and implemented it into Backseat Driver to simulate termination. We refer the reader to \Cref{appendix: implementation details} for specific implementation details. \Cref{fig: backseat driver} shows results for human termination in Backseat Driver. As before, a significant performance increase was evident in our experiments. Additionally, we found that using a cost penalty (purple) or termination penalty (brown) for TermPG did not greatly affect performance. \begin{figure}[t!] \centering \includegraphics[width=0.45\linewidth]{imgs/spaceinvaders.png} \includegraphics[width=0.45\linewidth]{imgs/seaquest.png} \includegraphics[width=0.45\linewidth]{imgs/breakout.png} \includegraphics[width=0.45\linewidth]{imgs/asterix.png} \caption{ Results for MinAtar benchmarks. All runs were averaged over five seeds. Comparison of best performing TermPG variant to best performing PG variant (relative improvement percentage): $80\%$ in Space Invaders, $150\%$ in Seaquest, $410\%$ in Breakout, and $90\%$ in Asterix.} \label{fig: minatar} \end{figure} \begin{table*}[t!]
\caption{\label{table: results} Summary of results (top) and ablations for TermPG (bottom). Standard deviation optimism did not have a significant impact on performance. Removing optimism or the dynamic discount factor had a negative impact on performance. TermPG was found to be robust to model misspecifications of the accumulated cost window.} \centering \hspace*{-0.8cm} \begin{scriptsize} \begin{tabular}{|c|cc|cccc|} \hline \multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\bf {\small Backseat Driver} } & \multicolumn{4}{c|}{\bf {\small MinAtar}} \\ \hline \hline \bf {\small Experiment} & \bf {\small Coin Avoid.} & \bf{\small Human} & \bf {\small Space Inv.} & \bf {\small Seaquest} & \bf {\small Breakout} & \bf {\small Asterix} \\ \hline PG & $5.3 \pm 0.8$ & $4.9 \pm 1.5$ & $5.2 \pm 1.8$ & $0.6 \pm 0.4$ & $1.4 \pm 2.8$ & $0.8 \pm 0.3$ \\ \hline Recurrent PG & $3.4 \pm 0.21$ & $5 \pm 1.8$ & $2.8 \pm 0.05$ & $0.1 \pm 0.3$ & $0.7 \pm 0.6$ & $0.7 \pm 0.2$ \\ \hline PG + RS & $5.9 \pm 1.4$ & $7.4 \pm 1.7$ & $7.6 \pm 2.3$ & $0.4 \pm 0.2$ & $0.5 \pm 0.03$ & $0.9 \pm 0.2$ \\ \hline \rowcolor{Gray} TermPG (ours) & $\mathbf{8.7 \pm 1.4}$ & $8.3 \pm 1.3$ & $\mathbf{9.7 \pm 1.1}$ & $\mathbf{1.4 \pm 0.8}$ & $\mathbf{8.2 \pm 0.3}$ & $1 \pm 0.2$ \\ \hline \rowcolor{Gray} TermPG + RS (ours) & $\mathbf{8.4 \pm 1.3}$ & $7.7 \pm 0.3$ & $\mathbf{11.8 \pm 0.8}$ & $0.3 \pm 0.6$ & $5.1 \pm 1$ & $0.8 \pm 0.1$ \\ \hline \rowcolor{Gray} TermPG + Penalty (ours) & $6 \pm 0.8$ & $\mathbf{11.8 \pm 1.5}$ & $7.7 \pm 1.4$ & $\mathbf{2.4 \pm 1}$ & $2.3 \pm 2.3$ & $\mathbf{1.7 \pm 0.1}$ \\ \hline\hline \bf {\small Ablation Test} & \bf {\small Coin Avoid.} & \bf {\small Human} & \bf {\small Space Inv.} & \bf {\small Seaquest} & \bf {\small Breakout} & \bf {\small Asterix} \\ \hline Optimism with Ensemble Std.
& $7.6 \pm 2.1$ & $7.5 \pm 1.1$ & $2.8 \pm 0.02$ & $0.9 \pm 0.6$ & $10.9 \pm 1$ & $1 \pm 0.1$ \\ \hline No Optimism & $7.8 \pm 1.3$ & $8.8 \pm 1.2$ & $5.2 \pm 1.6$ & $0.7 \pm 0.3$ & $1.3 \pm 0.7$ & $1 \pm 0.1$ \\ \hline No Dynamic Discount & $6.9 \pm 0.6$ & $5.9 \pm 0.7$ & $4.4 \pm 1.8$ & $0.4 \pm 0.1$ & $0.5 \pm 0.02$ & $0.8 \pm 0.1$ \\ \hline $\times 0.5$ Window Misspecification & $7.2 \pm 1.1$ & $7.2 \pm 0.5$ & $9.7 \pm 3.1$ & $2.4 \pm 0.4$ & $7.9 \pm 0.8$ & $0.8 \pm 0.1$ \\ \hline $\times 2$ Window Misspecification & $8.3 \pm 0.1$ & $8.4 \pm 0.2$ & $11.1 \pm 3$ & $2.2 \pm 0.2$ & $10.3 \pm 0.8$ & $1 \pm 0.1$ \\ \hline \end{tabular} \end{scriptsize} \end{table*}% \textbf{MinAtar. } We further compared our method to the PG, recurrent PG, and reward shaping methods, on MinAtar \citep{young19minatar}. For each environment, we defined cost functions that do not necessarily align with the pre-specified reward, to mimic uncanny behavior that humans are expected to dislike. For example, in Breakout, the agent was penalized whenever the paddle remained in specific regions (e.g., sides of the screen), whereas in Space Invaders, the agent was penalized for ``near misses'' of enemy bullets. We refer the reader to \Cref{appendix: implementation details} for specific details of the different termination cost functions. \Cref{fig: minatar} depicts results on MinAtar. As with Backseat Driver, TermPG led to significant improvements, often achieving an order of magnitude more reward than Recurrent PG. We found that adding a termination penalty and cost penalty produced mixed results, sometimes useful (e.g., Space Invaders, Seaquest, Asterix), yet other times harmful to performance (e.g., Breakout). Therefore, we propose to fine-tune these penalties in \Cref{alg: Termination PG}. Finally, we note that training TermPG was, on average, $67\%$ slower than PG, on the same machine.
Nevertheless, while TermPG was somewhat more computationally expensive, it showed a significant increase in overall performance. A summary of all of our results is presented in \Cref{table: results} (top). \textbf{Ablation Studies.} We present various ablations for TermPG in \Cref{table: results} (bottom). First, we tested the effect of replacing the type of cost optimism in TermPG. In \Cref{sec: termpg}, cost optimism was defined using the minimum of the cost ensemble, i.e., $\text{min}\brk[c]*{c_{\phi_m}}$. Instead, we replaced the cost optimism with $C_{\text{optimistic}} = \text{mean}\brk[c]*{c_{\phi_m}} - \alpha \text{std}\brk[c]*{c_{\phi_m}}$, testing different values of $\alpha$. Surprisingly, this change mostly decreased performance, except for Breakout, where it performed significantly better. Other ablations included removing optimism altogether (i.e., only using the mean of the ensemble), and removing the dynamic discount factor. In both cases we found a significant decrease in performance, suggesting that both elements are essential for TermPG to work properly and utilize the estimator of the unknown costs. Finally, we tested misspecifications of our model by learning with windows that were different from the environment's real cost accumulation window. In both cases, TermPG was surprisingly robust to window misspecification, as performance remained almost unaffected by it. \section{Related Work} Our setup can be linked to various fields, as listed below. \textbf{Constrained MDPs. } Perhaps the most straightforward motivation for external termination stems from constraint violation \citep{chow2018lyapunov,efroni2020exploration,hasanzadezonuzy2020learning}, where strict or soft constraints are introduced to the agent, who must learn to satisfy them. In these setups, which are often motivated by safety \citep{garcia2015comprehensive}, the constraints are usually known.
In contrast, in this work, the costs are \emph{unknown} and only implicit termination is provided. \textbf{Reward Design. } Engineering a good reward function is a hard task, for which frequent design choices may drastically affect performance \citep{oh2021creating}. Moreover, for tasks where humans are involved, it is rarely clear how to engineer a reward, as human preferences are not necessarily known, and humans are non-Markovian by nature \citep{clarke2013human,christiano2017deep}. Termination can thus be viewed as an efficient mechanism to elicit human input, allowing us to implicitly interpret human preferences and utility more robustly than trying to specify a reward. \textbf{Global Feedback in RL. } Recent work considered once-per-trajectory reward feedback in RL, observing either the cumulative rewards at the end of an episode \citep{efroni2020reinforcement,cohen2021online} or a logistic function of trajectory-based features \cite{chatterji2021theory}. While these works are based on a similar solution mechanism, our work concentrates on a new framework, which accounts for non-Markovian termination. Additionally, we provide per-state concentration guarantees of the unknown cost function, compared to global concentration bounds in previous work \citep{abbasi2011improved,zhang2016online,qi2018bandit,abeille2021instance}. Using our local guarantees, we are able to construct a scalable policy gradient solution, with significant improvement over recurrent and reward shaping based approaches. \textbf{Preference-based RL. } In contrast to traditional reinforcement learning, preference-based reinforcement learning (PbRL) relies on subjective opinions rather than numerical rewards. In PbRL, preferences are captured through probabilistic rankings of trajectories \citep{wirth2016model,wirth2017survey,xu2020preference}. Similar to our work, \citet{christiano2017deep} use a regression model to learn a reward function that could account for the preference feedback. 
Our work considers a different setting in which human feedback is provided through termination, where termination and reward may not align. \section{Limitations and Negative Societal Impact} A primary limitation of our work involves the linear dependence of the logistic termination model. In some settings, it might be hard to capture true human preferences and behaviors using a linear model. Nevertheless, when measured across the full trajectory, our empirical findings show that this model is highly expressive, as we demonstrated on real human termination data (\Cref{sec: experiments}). Additionally, we note that work in inverse RL \citep{arora2021survey} also assumes such linear dependence of human decisions w.r.t. reward. Future work can consider more involved hypothesis classes, building upon our work to identify the optimal tradeoff between expressivity and convergence rate. Finally, we note a possible negative societal impact of our work. Termination is strongly motivated by humans interacting with the agent. This may be harmful if not carefully controlled: learning incorrect or biased preferences may result in unfavorable consequences, and humans may engage in adversarial behavior to mislead the agent. Our work discusses initial research in this domain. We encourage caution in real-world applications, carefully considering the possible effects of model errors, particularly in applications that affect humans. \newpage \section{Introduction} The field of reinforcement learning (RL) involves an agent interacting with an environment, maximizing a cumulative reward \citep{puterman2014markov}. As RL becomes more instrumental in real-world applications \citep{lazic2018data,kiran2021deep,mandhane2022muzero}, exogenous inputs beyond the prespecified reward pose a new challenge. Particularly, an external authority (e.g., a human operator) may decide to terminate the agent's operation when it detects undesirable behavior.
In this work, we generalize the basic RL framework to accommodate such external feedback. We propose a generalization of the standard Markov Decision Process (MDP), in which external termination can occur due to a non-Markovian observer. When terminated, the agent stops interacting with the environment and cannot collect additional rewards. This setup describes various real-world scenarios, including: passengers in autonomous vehicles \citep{le2015autonomous,zhu2020safe}, users in recommender systems \citep{wang2009recommender}, employees terminating their contracts (churn management) \citep{sisodia2017evaluation}, and operators in factories; particularly, datacenter cooling systems, or other safety-critical systems, which require constant monitoring and rare, though critical, human takeovers \citep{modares2015optimized}. In these tasks, human preferences, incentives, and constraints play a central role, and designing a reward function to capture them may be highly complex. Instead, we propose to let the agent itself learn these latent human utilities by leveraging the termination events. We introduce the Termination Markov Decision Process (TerMDP), depicted in \Cref{fig: termination diagram}. We consider a terminator, observing the agent, which aggregates penalties w.r.t. a predetermined, state-dependent, yet \emph{unknown}, cost function. As the agent progresses, unfavorable states accumulate costs that gradually increase the terminator's inclination to stop the agent and end the current episode. Receiving merely the sparse termination signals, the agent must learn to behave in the environment, adhering to the terminator's preferences while maximizing reward. Our contributions are as follows. \textbf{(1)} We introduce a novel history-dependent termination model, a natural extension of the MDP framework which incorporates non-trivial termination (\Cref{sec: perliminaries}). 
\textbf{(2)} We learn the unknown costs from the implicit termination feedback (\Cref{sec: theory}), and provide local guarantees w.r.t. every visited state. We leverage our results to construct a tractable algorithm and provide regret guarantees. \textbf{(3)} Building upon our theoretical results, we devise a practical approach that combines optimism with a cost-dependent discount factor, which we test on MinAtar \citep{young19minatar} and a new driving benchmark. \textbf{(4)} We demonstrate the efficiency of our method on these benchmarks as well as on human-collected termination data (\Cref{sec: experiments}). Our results show significant improvement over other candidate solutions, which involve direct termination penalties and history-dependent approaches. We also introduce a new task for RL -- a driving simulation game which can be easily deployed on mobile phones, consoles, and PC \footnote{Code for Backseat Driver and our method, TermPG, can be found at \href{https://github.com/guytenn/Terminator}{https://github.com/guytenn/Terminator}.}. \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{imgs/termination.pdf} \caption{ A block diagram of the TerMDP framework. An agent interacts with an environment while an exogenous observer (i.e., terminator) can choose to terminate the agent based on previous interactions. If the agent is terminated, it transitions to a sink state where a reward of $0$ is given until the end of the episode.} \label{fig: termination diagram} \end{figure} \section{Termination Markov Decision Process} \label{sec: perliminaries} We begin by presenting the termination framework and the notation used throughout the paper. Informally, we model the termination problem using a logistic model of past ``bad behaviors". We use an unobserved state-dependent cost function to capture these external preferences. As the overall cost increases throughout time, so does the probability of termination. 
For a positive integer $n$, we denote $[n] = \brk[c]*{1, \hdots, n}$. We define the Termination Markov Decision Process (TerMDP) by the tuple $\terMDP=(\sset,\aset, P,R,H,c)$, where $\sset$ and $\aset$ are state and action spaces with cardinality $S$ and $A$, respectively, and $H\in\N$ is the maximal horizon. We consider the following protocol, which proceeds in discrete episodes $k=1, 2, \hdots, K$. At the beginning of each episode~$k$, an agent is initialized at state $s_1^k \in \sset$. At every time step $h$ of episode~$k$, the agent is at state $s_h^k \in \sset$, takes an action $a_h^k \in \aset$ and receives a random reward $R_h^k\in[0,1]$ generated from a fixed distribution with mean $r_h(s_h^k,a_h^k)$. A terminator overseeing the agent utilizes a cost function $c: \brk[s]{H}\times \sset \times \aset \to \R$ that is unobserved and \emph{unknown to the agent}. At time step $h$, the episode terminates with probability \begin{align*} \rho_h^k(c) = \rho\brk*{\sum_{t=1}^h c_t(s_t^k,a_t^k) - b}, \end{align*} where $\rho(x)=\brk*{1+\exp(-x)}^{-1}$ is the logistic function and $b \in \R$ is a bias term which determines the termination probability when no costs are aggregated. Upon termination, the agent transitions to a terminal state $\terminalstate$ which yields no reward, i.e., $r_h(\terminalstate,a)=0$ for all $h\in\brk[s]*{H}, a\in\aset$. If no termination occurs, the agent transitions to a next state $s_{h+1}^k$ with probability $P_h(s_{h+1}^k | s_h^k,a_h^k)$. Let $t_k^*=\min\brk[c]*{h:s_h^k=\terminalstate}-1$ be the time step when the $k^{\text{th}}$ episode was terminated. Notice that the termination probability is non-Markovian, as it depends on the entire trajectory history. We also note that, when $c \equiv 0$, the TerMDP reduces to a finite horizon MDP with discount factor $\gamma = 1-\rho\brk*{-b}$. Finally, we note that our model allows for negative costs.
Indeed, these may capture satisfactory behavior, diminishing the effect of previous mistakes, and decreasing the probability of termination. We define a stochastic, history-dependent policy $\pi_h(s_h, \tau_{1:h})$ which maps trajectories $\tau_{1:h} = (s_1, a_1, \hdots, s_{h-1}, a_{h-1})$ up to time step $h$ (excluding) and the $h^{\text{th}}$ state $s_{h}$ to probability distributions over $\aset$. Its value is defined by ${ V_h^{\pi}(s,\tau) \!=\! \E{\sum_{t=h}^H r_t(s_t, a_t) | s_h=s, \tau_{1:h} = \tau, a_t \sim \pi_t(s_t,\tau_{1:t})}. }$ With slight abuse of notation, we denote the value at the initial time step by $V_1^\pi(s)$. An optimal policy $\pi^*$ maximizes the value for all states and histories simultaneously\footnote{Such a policy always exists; we can always augment the state space with the history, which would make the environment Markovian and imply the existence of an optimal history-dependent policy \citep{puterman2014markov}.}; we denote its value function by $V^*$. We measure the performance of an agent by its \emph{regret}; namely, the difference between the cumulative value it achieves and the value of an optimal policy, ${ \Reg{K} = \sum_{k=1}^K V_1^*(s_1^k) - V_1^{\pi^k}(s_1^k). }$ \textbf{Notations. } We denote the Euclidean norm by $\norm{\cdot}_2$ and the Mahalanobis norm induced by the positive definite matrix $A\succ 0$ by $\norm{x}_A=\sqrt{x^TAx}$. We denote by $n_h^k(s,a)$ the number of times that a state action pair $(s,a)$ was visited at the $h^{\text{th}}$ time step before the $k^{\text{th}}$ episode. Similarly, we denote by $\hat{X}_h^k(s,a)$ the empirical average of a random variable $X$ (e.g., reward and transition kernel) at $(s,a)$ in the $h^{\text{th}}$ time step, based on all samples before the $k^{\text{th}}$ episode. We assume there exists a known constant $L$ that bounds the norm of the costs; namely, $\sqrt{\sum_{s,a} \sum_{t=1}^H c^2_t(s,a)} \leq L$.
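To make the termination protocol concrete, it can be simulated in a few lines. This is a minimal sketch; the helper names are illustrative and not part of the paper's implementation:

```python
import numpy as np

def rho(x):
    """Logistic function rho(x) = (1 + exp(-x))^{-1}."""
    return 1.0 / (1.0 + np.exp(-x))

def termination_prob(accumulated_cost, b):
    """Termination probability given the cost accumulated so far."""
    return rho(accumulated_cost - b)

def rollout_with_terminator(costs, b, rng):
    """Step through per-step costs c_1..c_H and return the (1-indexed)
    step at which the terminator stops the episode, or None if it survives."""
    acc = 0.0
    for h, c in enumerate(costs, start=1):
        acc += c
        if rng.random() < termination_prob(acc, b):
            return h
    return None
```

Note that negative costs lower the accumulated sum and hence the termination probability, matching the discussion above.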
We also denote the maximal reciprocal derivative of the logistic function by ${ \kappa = \max_{h\in\brk[s]{H}}\enspace\max_{\brk[c]*{(s_t,a_t)}_{t=1}^h\in(\sset\times\aset)^h} \brk*{\dot{\rho}\brk*{\sum_{t=1}^hc_t(s_t,a_t) - b}}^{-1}. }$ This factor will be evident in our theoretical analysis in the next section, as estimating the costs in regions of saturation of the sigmoid is more difficult when the derivative nears zero. Finally, we use $\mathcal{O}(x)$ to refer to a quantity that depends on $x$ up to a poly-log expression in $S, A, K, H, L, \kappa$ and $\log\brk*{\frac{1}{\delta}}$. \section{An Optimistic Approach to Overcoming Termination} \label{sec: theory} Unlike the standard MDP setup, in the TerMDP model, the agent can potentially be terminated at any time step. Consider the TerMDP model for which the costs are \emph{known}. We can define a Markov policy $\pi_h$ mapping augmented states $\sset \times \R$ to a probability distribution over actions, where here, the state space is augmented by the accumulated costs $\sum_{t=1}^{h-1} c_t(s_t, a_t)$ . There exists a policy, which does not use historical information, besides the accumulated costs, and achieves the value of the optimal history-dependent policy (see \Cref{appendix: known costs}). Therefore, when solving for an optimal policy (e.g., by planning), one can use the current accumulated cost instead of the full trajectory history. This suggests a plausible approach for solving the TerMDP -- first learn the cost function, and then solve the state-augmented MDP for which the costs are known. This, in turn, leads to the following question: \textbf{can we learn the costs $c$ from the termination signals?} In what follows, we answer this question affirmatively. We show that by using the termination structure, one can efficiently converge to the true cost function \emph{locally} -- for every state and action. 
We provide uncertainty estimates for the state-wise costs, which allow us to construct an efficient optimistic algorithm for solving the problem. \textbf{Learning the Costs. } To learn the costs, we show that the agent can effectively gain information about costs even in time steps where no termination occurs. Recall that at any time step $h\in[H-1]$, the agent acquires a sample from a Bernoulli random variable with parameter $p = \rho_h^k(c) = \rho\brk*{\sum_{t=1}^h c_t(s_t^k,a_t^k) - b}$. Notably, a lack of termination, which occurs with probability $1-\rho_h^k(c)$, is also an informative signal of the unknown costs. We propose to leverage this information by treating the costs $c$ as parameters of a probabilistic model and maximizing their likelihood. We use the regularized cross-entropy, defined for some $\lambda > 0$ by \begin{align} \label{eq: cost likelihood} \mathcal{L}^k_\lambda(c) = \sum_{k'=1}^{k} \sum_{h=1}^{H-1} \left[ \indicator{h< t_{k'}^*}\log\brk*{1- \rho_h^{k'}(c)} + \indicator{h=t_{k'}^*}\log\brk*{\rho_h^{k'}(c) }\right] - \lambda\norm{c}_2^2. \end{align} By maximizing the cost likelihood in \Cref{eq: cost likelihood}, global guarantees of the cost can be achieved, similar to previous work on logistic bandits \citep{zhang2016online,abeille2021instance}. Particularly, denoting by $\hat{c}^k \in\arg\max \mathcal{L}^k_\lambda(c)$ the maximum likelihood estimate of the costs, it can be shown that for any history, a global upper bound on $\norm{\hat{c}^k-c}_{\Sigma_k}$ can be obtained, where the history-dependent design matrix $\Sigma_k$ captures the empirical correlations of visitation frequencies (see \Cref{supp: cost concentration} for details). Unfortunately, using $\norm{\hat{c}^k-c}_{\Sigma_k}$ leads to an intractable algorithm \citep{chatterji2021theory}, and is thus undesirable.
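A direct tabular evaluation of this objective can be sketched as follows. The sketch uses time-homogeneous costs and illustrative names for brevity, whereas the costs above are time-dependent:

```python
import numpy as np

def rho(x):
    return 1.0 / (1.0 + np.exp(-x))

def cost_log_likelihood(c, trajectories, b, lam):
    """Regularized cross-entropy of a candidate cost vector `c` (one entry
    per state-action index; time-homogeneous for brevity).

    `trajectories` is a list of (sa_indices, terminated) pairs: the visited
    state-action indices and whether the episode ended in termination."""
    ll = 0.0
    for sa_indices, terminated in trajectories:
        acc = 0.0
        for h, sa in enumerate(sa_indices):
            acc += c[sa]
            p = rho(acc - b)
            if terminated and h == len(sa_indices) - 1:
                ll += np.log(p)        # the observed termination event
            else:
                ll += np.log(1.0 - p)  # surviving the step is informative too
    return ll - lam * float(np.dot(c, c))
```

Maximizing this objective with any gradient-based optimizer yields an estimate of the costs in this simplified setting.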
Instead, as terminations are sampled on \emph{every time step} (i.e., non-terminations are informative signals as well), we show we can obtain a \emph{local} bound on the cost function $c$. Specifically, we show that the error $\abs{\hat c_h^k(s,a) - c_h(s,a)}$ diminishes with $n_h^k(s,a)$. The following result is a main contribution of our work, and the crux of our regret guarantees later on (see \Cref{supp: cost concentration} for proof). \begin{theorem}[Local Cost Estimation Confidence Bound] \label{thm: local cost confidence front} Let $\hat{c}^k \in\arg\max \mathcal{L}^k_\lambda(c)$ be the maximum likelihood estimate of the costs. Then, for any $\delta>0$, with probability of at least $1-\delta$, for all episodes $k\in [K]$, timesteps $h\in[H-1]$ and state-actions $(s,a)\in\sset\times\aset$, it holds that \begin{align*} \abs{\hat c_h^k(s,a) - c_h(s,a) } \leq \Ob\brk*{ \brk*{n^k_h(s,a)}^{-0.5} \sqrt{\kappa SAHL^{3}} \log\brk*{\frac{1}{\delta}\brk*{1+\frac{kL}{S^2A^2H}}} }. \end{align*} \end{theorem} We note the presence of $\kappa$ in our upper bound, a common factor \citep{chatterji2021theory}, which is fundamental to our analysis, capturing the complexity of estimating the costs. Trajectories that saturate the logistic function lead to more difficult credit assignment. Specifically, when the accumulated costs are high, any additional penalty would only marginally change the termination probability, making its estimation harder. A similar argument can be made when the termination probability is low. \begin{algorithm}[t!] 
\caption{TermCRL: Termination Confidence Reinforcement Learning} \label{alg: Termination CRL} \begin{algorithmic}[1] \STATE{ \textbf{require:} $\lambda >0$} \FOR{$k=1, \hdots, K$} \FOR{$(h,s,a) \in [H]\times \sset \times \aset$} \STATE $\bar r_h^{k}(s,a) = \hat r_h^{k}(s,a) + b^r_{k}(h,s,a) + b^p_{k}(h,s,a)$ \STATE $\bar c_h^{k}(s,a) = \hat c_h^{k}(s,a) - b^{c}_{k}(h,s,a)$ \hfill {\color{gray}// \Cref{appendix: optimism}} \ENDFOR \STATE $\pi^k \gets $ TerMDP-Plan$\brk*{\terMDP \brk1{\sset,\aset,H,\bar{r}^{k},\hat{P}^{k},\bar{c}^{k}}}$ \hfill {\color{gray}// \Cref{appendix: planning}} \STATE Rollout a trajectory by acting $\pi^k$ \STATE $\hat c^{k+1} \in \arg\max_c \mathcal{L}^k_\lambda(c)$ \hfill {\color{gray}// \Cref{eq: cost likelihood}} \STATE Update $\hat{P}^{k+1}(s,a), \hat{r}^{k+1}(s,a), n^{k+1}(s,a)$ over rollout trajectory \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Termination Confidence Reinforcement Learning} We are now ready to present our method for solving TerMDPs with unknown costs. Our proposed approach, which we call Termination Confidence Reinforcement Learning (TermCRL), is shown in \Cref{alg: Termination CRL}. Leveraging the local convergence guarantees of \Cref{thm: local cost confidence front}, we estimate the costs by maximizing the likelihood in \Cref{eq: cost likelihood}. We compensate for uncertainty in the reward, transitions, and costs by incorporating optimism. We define bonuses for the reward, transition, and cost function by $b_k^r(h,s,a) = \Ob\brk*{\sqrt{\frac{\log\brk*{1/\delta}}{n_h^k(s,a)\vee 1}}}, b_k^p(h,s,a)=\Ob\brk*{\sqrt{\frac{SH^2\log\brk*{1/\delta}}{n_h^k(s,a)\vee 1}}}$, and $b_k^c(h,s,a)=\Ob\brk*{\sqrt{\frac{\kappa SAHL^3}{n_h^k(s,a)\vee 1}}\log\brk*{\frac{1}{\delta}}}$ for some $\delta > 0$ (see \Cref{appendix: optimism} for explicit definitions). We add the reward and transition bonuses to the estimated reward (line 4), while the optimistic cost bonus is applied directly to the estimated costs (line 5). 
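Up to the constants hidden by the $\Ob$-notation, the three bonuses are simple count-based terms. A sketch follows; the constants `c_r`, `c_p`, `c_c` stand in for the hidden factors and are hypothetical:

```python
import numpy as np

def exploration_bonuses(n, S, A, H, L, kappa, delta, c_r=1.0, c_p=1.0, c_c=1.0):
    """Count-based bonuses for reward, transition, and cost uncertainty,
    up to the constants c_r, c_p, c_c hidden by the O-notation."""
    n = max(n, 1)                       # n_h^k(s,a) v 1
    log_term = np.log(1.0 / delta)
    b_r = c_r * np.sqrt(log_term / n)
    b_p = c_p * np.sqrt(S * H ** 2 * log_term / n)
    b_c = c_c * np.sqrt(kappa * S * A * H * L ** 3 / n) * log_term
    return b_r, b_p, b_c
```

All three bonuses shrink at the $1/\sqrt{n}$ rate as a state-action pair is visited more often.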
Then, a planner (line 7) solves the optimistic MDP for which the costs are known and are given by their optimistic counterparts. We refer the reader to \Cref{appendix: planning} for further discussion on planning in TerMDPs. The following theorem provides regret guarantees for \Cref{alg: Termination CRL}. Its proof is given in \Cref{appendix: regret analysis} and relies on \Cref{thm: local cost confidence front} and the analysis of UCRL \citep{auer2008near,efroni2019tight}. \begin{restatable}{theorem}{MainResult}[Regret of TermCRL] \label{theorem: main} With probability at least $1-\delta$, the regret of \Cref{alg: Termination CRL} is \begin{align*} \Reg{K} \leq \Ob\brk*{\sqrt{\kappa S^2A^2H^7L^3 K\log^2\brk*{ \frac{SAHK}{\delta}}}}. \end{align*} \end{restatable} Compared to the standard regret of UCRL \citep{auer2008near}, an additional $\sqrt{\kappa AH^4L^3}$ multiplicative factor is evident in our result, which is due to the convergence rates of the costs in \Cref{thm: local cost confidence front}. Motivated by our theoretical results, in what follows we propose a practical approach, inspired by \Cref{alg: Termination CRL}, which utilizes local cost confidence intervals in a deep RL framework. \begin{algorithm}[t!] \caption{TermPG} \label{alg: Termination PG} \begin{algorithmic}[1] \STATE{ \textbf{require:} window $w$, number of ensembles $M$, number of rollouts $N$, number of iterations $K$, policy gradient algorithm \texttt{ALG-PG}} \STATE{ \textbf{initialize:} $\B_{\text{pos}} \gets \emptyset, \B_{\text{neg}} \gets \emptyset, \pi_{\theta} \gets $ random initialization } \FOR{$k=1,\hdots,K$} \STATE Rollout $N$ trajectories using $\pi_{\theta}$, $\mathcal{R} = \brk[c]*{s^i_1, a^i_1, r^i_1, \hdots, s^i_{t^*_i}, a^i_{t^*_i}, r^i_{t^*_i}}_{i=1}^N$. \FOR{$i=1, \hdots, N$} \STATE Add $t^*_i - 1$ negative examples $\brk2{s_{\max\brk[c]*{1, l-w+1}}, a_{\max\brk[c]*{1, l-w+1}}, \hdots, s_l, a_l}_{l=1}^{t^*-1}$ to $\B_{\text{neg}}$. 
\STATE Add one positive example $\brk2{s_{\max\brk[c]*{1, t^*-w+1}}, a_{\max\brk[c]*{1, t^*-w+1}}, \hdots, s_{t^*}, a_{t^*}}$ to $\B_{\text{pos}}$. \ENDFOR \STATE Train bootstrap ensemble $\brk[c]*{c_{\phi_m}}_{m=1}^M$ using binary cross entropy over data $\B_{\text{neg}}, \B_{\text{pos}}$. \STATE Augment states in $\mathcal{R}$ by $s_l^i \gets s_l^i \cup \sum_{j=1}^{\min\brk[c]*{w, l}} \min_{m} c_{\phi_m}(s^i_{l-j}, a^i_{l-j})$. \STATE Update policy $\pi_{\theta} \gets \texttt{ALG-PG}\brk*{\mathcal{R}}$ with dynamic discount (see \Cref{sec: discount factor}). \ENDFOR \end{algorithmic} \end{algorithm} \section{Termination Policy Gradient} \label{sec: termpg} Following the theoretical analysis in the previous section, we propose a practical approach for solving TerMDPs. Particularly, in this section, we devise a policy gradient method that accounts for the unknown costs leading to termination. We assume a stationary setup for which the transitions, rewards, costs, and policy are time-homogeneous. Our approach consists of three key elements: learning the costs, leveraging uncertainty estimates over costs, and constructing efficient value estimates through a dynamic cost-dependent discount factor. \Cref{alg: Termination PG} describes the Termination Policy Gradient (TermPG) method, which trains an ensemble of cost networks (to estimate the costs and uncertainty) over rollouts in a policy gradient framework. We represent our policy and cost networks using neural networks with parameters $\theta, \brk[c]{\phi_m}_{m=1}^M$. At every iteration, the agent rolls out $N$ trajectories in the environment using a parametric policy,~$\pi_\theta$. The rollouts are split into subtrajectories, which are labeled w.r.t. the termination signal, where positive labels are used for examples that end with termination.
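This labeling step can be sketched as follows (the function name is illustrative; windows follow the splitting rule stated in the text):

```python
def label_rollout(pairs, terminated, w):
    """Split one rollout (a list of (state, action) pairs) into windowed
    subtrajectories: every window ending before the final step is a negative
    example; the window ending at the final step is positive iff the rollout
    ended in termination."""
    negatives, positives = [], []
    T = len(pairs)
    for l in range(1, T + 1):
        window = pairs[max(0, l - w):l]
        if terminated and l == T:
            positives.append(window)
        else:
            negatives.append(window)
    return negatives, positives
```

A terminated rollout of length $t^*$ thus yields $t^*-1$ negative examples and one positive example, while a surviving rollout yields only negatives.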
Particularly, we split the rollouts into ``windows'' (i.e., subtrajectories of length $w$), where a rollout of length $t^*$, which ends with termination, is split into $t^*-1$ negative examples $\brk*{s_{\max\brk[c]*{1, l-w+1}}, a_{\max\brk[c]*{1, l-w+1}}, \hdots, s_l, a_l}_{l=1}^{t^*-1}$, and one positive example $\brk*{s_{\max\brk[c]*{1, t^*-w+1}}, a_{\max\brk[c]*{1, t^*-w+1}}, \hdots, s_{t^*}, a_{t^*}}$. Similarly, a rollout of length $H$ which does not end with termination contains $H$ negative examples. We note that by taking finite windows, we assume the terminator ``forgets'' accumulated costs that are not recent -- a generalization of the theoretical TerMDP model in \Cref{sec: perliminaries}, for which $w=H$. In \Cref{sec: experiments}, we provide experiments with misspecification of the true underlying window width, where this model assumption does not hold. \subsection{Learning the Costs} Having collected a dataset of positive and negative examples, we train a logistic regression model consisting of an ensemble of $M$ cost networks $\brk[c]*{c_{\phi_m}}_{m=1}^M$, shared across timesteps, as depicted in \Cref{fig: cost diagram}. Specifically, for an example $\brk*{s_{\max\brk[c]*{1, l-w+1}}, a_{\max\brk[c]*{1, l-w+1}}, \hdots, s_l, a_l}$ we estimate the termination probability by $\rho\brk*{\sum_{j=1}^{\min\brk[c]*{w,l}} c_{\phi_m}\brk*{s_{l-j+1}, a_{l-j+1}} - b_m}$, where $\brk[c]*{b_m}_{m=1}^M$ are learnable bias parameters. The parameters are then learned end-to-end using the cross-entropy loss. We use the bootstrap method \citep{bickel1981some,chua2018deep} over the ensemble of cost networks. This ensemble is later used in \Cref{alg: Termination PG} to produce optimistic estimates of the costs.
Particularly, the agent policy $\pi_\theta$ uses the current state augmented by the optimistic cumulative predicted cost, i.e., $s_l^{\text{aug}} = \brk*{s_l, C_{\text{optimistic}}}$, where $C_{\text{optimistic}} = \sum_{j=1}^{\min\brk[c]*{w, l}} \min_{m} c_{\phi_m}(s_{l-j+1}, a_{l-j+1})$. Finally, the agent is trained with the augmented states using a policy gradient algorithm \texttt{ALG-PG} (e.g., PPO \citep{schulman2017proximal}, IMPALA \citep{espeholt2018impala}). \begin{figure}[t!] \includegraphics[width=1\linewidth]{imgs/costs.pdf} \caption{ Block diagram of the cost training procedure. Rollouts are split into subtrajectories, labeled according to whether they end in termination. Given the dataset of labeled subtrajectories, an ensemble of $M$ cost networks is trained end-to-end using cross-entropy with bootstrap samples (all time steps share the same ensemble).} \label{fig: cost diagram} \end{figure} \subsection{Optimistic Dynamic Discount Factor} \label{sec: discount factor} While augmenting the state with the optimistic accumulated costs is sufficient for obtaining optimality, we propose to further leverage these estimates more explicitly -- noticing that the finite horizon objective we are solving can be cast to a discounted problem. Particularly, it is well known that the discount factor $\gamma \in (0,1)$ can be equivalently formulated as the probability of ``staying alive'' (see the discounted MDP framework, \citet{puterman2014markov}). Similarly, by augmenting the state $s$ with the accumulated cost $C_h=\sum_{t=1}^h c(s_t, a_t)$, we view the probability $1-\rho(C_h - b)$ as a state-dependent discount factor, capturing the probability that an agent in a TerMDP is not terminated. We define a dynamic, cost-dependent discount factor for value estimation.
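The ensemble-min optimism and the discount it induces can be sketched as follows (illustrative names, assuming the cost networks are callables over state-action pairs):

```python
import numpy as np

def rho(x):
    return 1.0 / (1.0 + np.exp(-x))

def optimistic_cost(window, ensemble):
    """Ensemble-min (optimistic) accumulated cost over the recent window
    of (state, action) pairs."""
    return sum(min(c(s, a) for c in ensemble) for s, a in window)

def dynamic_discount(c_optimistic, b):
    """Cost-dependent discount: the probability of surviving the next step
    given the optimistic accumulated cost."""
    return 1.0 - rho(c_optimistic - b)
```

As the accumulated cost grows, the discount shrinks, down-weighting returns along trajectories that are likely to be terminated.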
We use the state-action value function $Q(s,a,C)$ over the augmented states, defined for any~$s, a, C$ by $ Q^\pi(s,a,C) = \expect*{\pi}{\sum_{t=1}^H \brk*{\prod_{h=1}^t \gamma_h } r(s_t, a_t) | s_1 = s, a_1=a, C_1 = C}, $ where ${\gamma_h = 1-\rho\brk*{C + \sum_{i=2}^{h-1} c(s_i, a_i) - b}}$. This yields the Termination Bellman Equations $ Q^\pi(s,a,C) = r(s,a) + \brk*{1-\rho(C)}\expect*{s' \sim P(\cdot | s,a), a' \sim \pi(s')}{Q^\pi(s', a', C + c(s', a'))} $ (see \Cref{appendix: termination bellman equations} for derivation). To incorporate uncertainty in the estimated costs, we use the optimistic accumulated costs $C_{\text{optimistic}} = \sum_{j=1}^{\min\brk[c]*{w, l}} \min_{m} c_{\phi_m}(s_{l-j+1}, a_{l-j+1})$. Then, the discount factor becomes ${\gamma\brk*{C_{\text{optimistic}}} = 1-\rho\brk*{C_{\text{optimistic}} - b}}$. Assuming that, w.h.p., optimistic costs are smaller than the true costs, the discount factor decreases as the agent exploits previously visited states. The dynamic discount factor allows us to obtain a more accurate value estimator. In particular, we leverage the optimistic cost-dependent discount factor $\gamma\brk*{C_{\text{optimistic}}}$ in our value estimation procedure, using Generalized Advantage Estimation (GAE, \citet{schulman2015high}). As we will show in the next section, using the optimistic discount factor significantly improves overall performance. \section{Experiments} \label{sec: experiments} In this section we evaluate the strength of our approach, comparing it to several baselines, including: \textbf{(1) PG (naive):} The standard policy gradient without additional assumptions, which ignores termination. \textbf{(2) Recurrent PG:} The standard policy gradient with a history-dependent recurrent policy (without cost estimation or dynamic discount factor). As the history is a sufficient statistic of the costs, the optimal policy is realizable. 
\textbf{(3) PG with Reward Shaping (RS):} We penalize the reward upon termination by a constant value, i.e., $r(s,a) - p\indicator{\terminalstate}$, for some $p > 0$. This approach can be applied to any variant of \Cref{alg: Termination PG} or the methods listed above. \textbf{(4) TermPG:} Described in \Cref{alg: Termination PG}. We additionally implemented two variants of TermPG, including: \textbf{(5) TermPG with Reward Shaping:} We penalize the reward with a constant value upon termination. \textbf{(6) TermPG with Cost Penalty:} We penalize the reward at every time step by the optimistic cost estimator, i.e., $r - \alpha C_{\text{optimistic}}$ for some $\alpha > 0$. All TermPG variants used an ensemble of three cost networks, and a dynamic cost-dependent discount factor, as described in \Cref{sec: discount factor}. We report the mean and std. of the total reward (without penalties) for all our experiments. \begin{wrapfigure}[12]{R}{0pt} \centering \includegraphics[width=0.4\linewidth]{imgs/backseatdriver.png} \caption{ Backseat Driver} \label{fig: backseat driver ingame} \end{wrapfigure} \textbf{Backseat Driver (BDr). } Using MLAgents \citep{juliani2018unity}, we developed a new driving benchmark that simulates a driving application, ``Backseat Driver'' (depicted in \Cref{fig: backseat driver ingame}), in which we tested both synthetic and human terminations. The game consists of a five-lane, never-ending road with randomly instantiated vehicles and coins. The agent can switch lanes and is rewarded for overtaking vehicles. In our experiments, states were represented as top-view images containing the position of the agent, nearby cars, and coins, with four stacked frames. We used a finite window of length $120$ for termination ($30$ agent decision steps), mimicking a passenger forgetting mistakes of the past. \textbf{BDr Experiment 1: Coin Avoidance.} In the first experiment of Backseat Driver, coins are considered as objects the driver must avoid.
The coins signify unknown preferences of the passenger, which are not explicitly provided to the agent. As the agent collects coins, a penalty is accumulated, and the agent is terminated probabilistically according to the logistic cost model in \Cref{sec: perliminaries}. We emphasize that, while the coins are visible to the agent (i.e., part of the agent's state), the agent only receives feedback from collecting coins through implicit terminations. \begin{figure}[t!] \centering \includegraphics[width=0.4\linewidth]{imgs/backseat_driver_learn_vs_lstm.png} \includegraphics[width=0.4\linewidth]{imgs/backseat_driver_human.png} \includegraphics[width=0.18\linewidth]{imgs/legend.png} \caption{ Mean reward with std. over five seeds of ``Backseat Driver''. Left: coin avoidance; right: human termination. Variants with reward shaping (RS, orange and brown) penalize the agent with a constant value upon termination. The recurrent PG variant (green) uses a history-dependent policy without learning costs. The TermPG+Penalty variant (purple) penalizes the reward at every time step using the estimated costs.} \label{fig: backseat driver} \end{figure} Results for Backseat Driver with coin-avoidance termination are depicted in \Cref{fig: backseat driver}. We compared TermPG (pink) and its two variants (brown, purple) to the PG (blue), recurrent PG (green), and reward shaping (orange) methods described above. Our results demonstrate that TermPG (pink) significantly outperforms the history-based and penalty-based baselines, doubling the reward of the best PG variant. All TermPG variants converged quickly to a good solution, suggesting fast convergence of the costs (see \Cref{appendix: additional results}). \textbf{BDr Experiment 2: Human Termination.} To complement our results, we evaluated human termination on Backseat Driver.
For this, we generated data of termination sequences from agents of varying quality (ranging from random to expert performance). We asked five human supervisors to label subsequences of this data by terminating the agent in situations of ``continual discomfort''. This guideline was kept ambiguous to allow for diverse termination signals. The final dataset consisted of 512 termination examples. We then trained a model to predict human termination and incorporated it into Backseat Driver to simulate termination. We refer the reader to \Cref{appendix: implementation details} for specific implementation details. \Cref{fig: backseat driver} shows results for human termination in Backseat Driver. As before, a significant performance increase was evident in our experiments. Additionally, we found that using a cost penalty (purple) or termination penalty (brown) for TermPG did not greatly affect performance. \begin{figure}[t!] \centering \includegraphics[width=0.45\linewidth]{imgs/spaceinvaders.png} \includegraphics[width=0.45\linewidth]{imgs/seaquest.png} \includegraphics[width=0.45\linewidth]{imgs/breakout.png} \includegraphics[width=0.45\linewidth]{imgs/asterix.png} \caption{ Results for MinAtar benchmarks. All runs were averaged over five seeds. Comparison of best-performing TermPG variant to best-performing PG variant (relative improvement percentage): $80\%$ in Space Invaders, $150\%$ in Seaquest, $410\%$ in Breakout, and $90\%$ in Asterix.} \label{fig: minatar} \end{figure} \begin{table*}[t!] \caption{\label{table: results} Summary of results (top) and ablations for TermPG (bottom). Standard deviation optimism did not have a significant impact on performance. Removing optimism or the dynamic discount factor had a negative impact on performance.
TermPG was found to be robust to model misspecifications of the accumulated cost window.} \centering \hspace*{-0.8cm} \begin{scriptsize} \begin{tabular}{|c|cc|cccc|} \hline \multicolumn{1}{|c|}{} & \multicolumn{2}{c|}{\bf {\small Backseat Driver} } & \multicolumn{4}{c|}{\bf {\small MinAtar}} \\ \hline \hline \bf {\small Experiment} & \bf {\small Coin Avoid.} & \bf{\small Human} & \bf {\small Space Inv.} & \bf {\small Seaquest} & \bf {\small Breakout} & \bf {\small Asterix} \\ \hline PG & $5.3 \pm 0.8$ & $4.9 \pm 1.5$ & $5.2 \pm 1.8$ & $0.6 \pm 0.4$ & $1.4 \pm 2.8$ & $0.8 \pm 0.3$ \\ \hline Recurrent PG & $3.4 \pm 0.21$ & $5 \pm 1.8$ & $2.8 \pm 0.05$ & $0.1 \pm 0.3$ & $0.7 \pm 0.6$ & $0.7 \pm 0.2$ \\ \hline PG + RS & $5.9 \pm 1.4$ & $7.4 \pm 1.7$ & $7.6 \pm 2.3$ & $0.4 \pm 0.2$ & $0.5 \pm 0.03$ & $0.9 \pm 0.2$ \\ \hline \rowcolor{Gray} TermPG (ours) & $\mathbf{8.7 \pm 1.4}$ & $8.3 \pm 1.3$ & $\mathbf{9.7 \pm 1.1}$ & $\mathbf{1.4 \pm 0.8}$ & $\mathbf{8.2 \pm 0.3}$ & $1 \pm 0.2$ \\ \hline \rowcolor{Gray} TermPG + RS (ours) & $\mathbf{8.4 \pm 1.3}$ & $7.7 \pm 0.3$ & $\mathbf{11.8 \pm 0.8}$ & $0.3 \pm 0.6$ & $5.1 \pm 1$ & $0.8 \pm 0.1$ \\ \hline \rowcolor{Gray} TermPG + Penalty (ours) & $6 \pm 0.8$ & $\mathbf{11.8 \pm 1.5}$ & $7.7 \pm 1.4$ & $\mathbf{2.4 \pm 1}$ & $2.3 \pm 2.3$ & $\mathbf{1.7 \pm 0.1}$ \\ \hline\hline \bf {\small Ablation Test} & \bf {\small Coin Avoid.} & \bf {\small Human} & \bf {\small Space Inv.} & \bf {\small Seaquest} & \bf {\small Breakout} & \bf {\small Asterix} \\ \hline Optimism with Ensemble Std.
& $7.6 \pm 2.1$ & $7.5 \pm 1.1$ & $2.8 \pm 0.02$ & $0.9 \pm 0.6$ & $10.9 \pm 1$ & $1 \pm 0.1$ \\ \hline No Optimism & $7.8 \pm 1.3$ & $8.8 \pm 1.2$ & $5.2 \pm 1.6$ & $0.7 \pm 0.3$ & $1.3 \pm 0.7$ & $1 \pm 0.1$ \\ \hline No Dynamic Discount & $6.9 \pm 0.6$ & $5.9 \pm 0.7$ & $4.4 \pm 1.8$ & $0.4 \pm 0.1$ & $0.5 \pm 0.02$ & $0.8 \pm 0.1$ \\ \hline $\times 0.5$ Window Misspecification & $7.2 \pm 1.1$ & $7.2 \pm 0.5$ & $9.7 \pm 3.1$ & $2.4 \pm 0.4$ & $7.9 \pm 0.8$ & $0.8 \pm 0.1$ \\ \hline $\times 2$ Window Misspecification & $8.3 \pm 0.1$ & $8.4 \pm 0.2$ & $11.1 \pm 3$ & $2.2 \pm 0.2$ & $10.3 \pm 0.8$ & $1 \pm 0.1$ \\ \hline \end{tabular} \end{scriptsize} \end{table*}% \textbf{MinAtar. } We further compared our method to the PG, recurrent PG, and reward shaping methods, on MinAtar \citep{young19minatar}. For each environment, we defined cost functions that do not necessarily align with the pre-specified reward, to mimic uncanny behavior that humans are expected to dislike. For example, in Breakout, the agent was penalized whenever the paddle remained in specific regions (e.g., sides of the screen), whereas in Space Invaders, the agent was penalized for ``near misses'' of enemy bullets. We refer the reader to \Cref{appendix: implementation details} for specific details of the different termination cost functions. \Cref{fig: minatar} depicts results on MinAtar. As with Backseat Driver, TermPG led to significant improvements, often achieving an order of magnitude more reward than Recurrent PG. We found that adding a termination penalty and cost penalty produced mixed results, sometimes useful (e.g., Space Invaders, Seaquest, Asterix), yet other times harmful to performance (e.g., Breakout). Therefore, we propose to fine-tune these penalties in \Cref{alg: Termination PG}. Finally, we note that training TermPG was, on average, $67\%$ slower than PG, on the same machine.
Nevertheless, though TermPG was somewhat more computationally expensive, it showed a significant increase in overall performance. A summary of all of our results is presented in \Cref{table: results} (top). \textbf{Ablation Studies.} We present various ablations for TermPG in \Cref{table: results} (bottom). First, we tested the effect of replacing the type of cost optimism in TermPG. In \Cref{sec: termpg}, cost optimism was defined using the minimum of the cost ensemble, i.e., $\text{min}\brk[c]*{c_{\phi_m}}$. Instead, we replaced the cost optimism with $C_{\text{optimistic}} = \text{mean}\brk[c]*{c_{\phi_m}} - \alpha \text{std}\brk[c]*{c_{\phi_m}}$, testing different values of $\alpha$. Surprisingly, this change mostly decreased performance, except for Breakout, where it performed significantly better. Other ablations included removing optimism altogether (i.e., only using the mean of the ensemble), and removing the dynamic discount factor. In both cases we found a significant decrease in performance, suggesting that both elements are essential for TermPG to work properly and utilize the estimator of the unknown costs. Finally, we tested misspecifications of our model by learning with windows that were different from the environment's real cost accumulation window. In both cases, TermPG was surprisingly robust to window misspecification, as performance remained almost unaffected by it. \section{Related Work} Our setup can be linked to various fields, as listed below. \textbf{Constrained MDPs. } Perhaps the most straightforward motivation for external termination stems from constraint violation \citep{chow2018lyapunov,efroni2020exploration,hasanzadezonuzy2020learning}, where strict or soft constraints are introduced to the agent, who must learn to satisfy them. In these setups, which are often motivated by safety \citep{garcia2015comprehensive}, the constraints are usually known.
In contrast, in this work, the costs are \emph{unknown} and only implicit termination is provided. \textbf{Reward Design. } Engineering a good reward function is a hard task, where frequent design choices may drastically affect performance \citep{oh2021creating}. Moreover, for tasks where humans are involved, it is rarely clear how to engineer a reward, as human preferences are not necessarily known, and humans are non-Markovian by nature \citep{clarke2013human,christiano2017deep}. Termination can thus be viewed as an efficient mechanism to elicit human input, allowing us to implicitly interpret human preferences and utility more robustly than trying to specify a reward. \textbf{Global Feedback in RL. } Recent work considered once-per-trajectory reward feedback in RL, observing either the cumulative rewards at the end of an episode \citep{efroni2020reinforcement,cohen2021online} or a logistic function of trajectory-based features \citep{chatterji2021theory}. While these works are based on a similar solution mechanism, our work concentrates on a new framework, which accounts for non-Markovian termination. Additionally, we provide per-state concentration guarantees of the unknown cost function, compared to global concentration bounds in previous work \citep{abbasi2011improved,zhang2016online,qi2018bandit,abeille2021instance}. Using our local guarantees, we are able to construct a scalable policy gradient solution, with significant improvement over recurrent and reward-shaping-based approaches. \textbf{Preference-based RL. } In contrast to traditional reinforcement learning, preference-based reinforcement learning (PbRL) relies on subjective opinions rather than numerical rewards. In PbRL, preferences are captured through probabilistic rankings of trajectories \citep{wirth2016model,wirth2017survey,xu2020preference}. Similar to our work, \citet{christiano2017deep} use a regression model to learn a reward function that could account for the preference feedback.
Our work considers a different setting in which human feedback is provided through termination, where termination and reward may not align. \section{Limitations and Negative Societal Impact} A primary limitation of our work involves the linear dependence of the logistic termination model. In some settings, it might be hard to capture true human preferences and behaviors using a linear model. Nevertheless, when measured across the full trajectory, our empirical findings show that this model is highly expressive, as we demonstrated on real human termination data (\Cref{sec: experiments}). Additionally, we note that work in inverse RL \citep{arora2021survey} also assumes such linear dependence of human decisions w.r.t. reward. Future work can consider more involved hypothesis classes, building upon our work to identify the optimal tradeoff between expressivity and convergence rate. Finally, we note a possible negative societal impact of our work. Termination is strongly motivated by humans interacting with the agent. This may be harmful if not carefully controlled: learning incorrect or biased preferences may result in unfavorable consequences, and humans may engage in adversarial behavior in order to mislead an agent. Our work discusses initial research in this domain. We encourage caution in real-world applications, carefully considering the possible effects of model errors, particularly in applications that affect humans. \newpage
\section{\label{sec:introduction}Introduction} We present a measurement of the cross section $\sigma(p\bar{p} \to Z/\gamma^{*}\mbox{ }Z/\gamma^{*})$ at $\sqrt{s} = 1.96$~TeV, using events where each $Z/\gamma^{*}$ results in two charged leptons. Because the branching fraction of the $Z$ boson to charged leptons is smaller than that to quarks or neutrinos, this process is relatively rare, but has the advantage of being an extremely pure final state. The largest fraction of the background results from events in which one or more jets have been misidentified as leptons, since few other processes in the standard model (SM) produce four isolated leptons. We also unfold our measurement to determine the $\sigma(p\bar{p} \to ZZ)$ cross section. After measuring the $t$-channel $Z/\gamma^{*}\mbox{ }Z/\gamma^{*}$ cross section, we reinterpret the analysis as a search for the Higgs boson in the four lepton final state, predicted in the SM as a result of electroweak symmetry breaking. Both the ATLAS and CMS experiments at the CERN LHC $pp$ collider have observed a four lepton resonance at a mass of $\sim$125 GeV \cite{atlas_higgs, cms_higgs} which, when combined with other decay channels, is consistent with the SM Higgs boson. $Z$ boson pair production was studied at the CERN LEP2 collider by the ALEPH \cite{Barate:1999jj}, DELPHI \cite{Abdallah:2003dv}, L3 \cite{Acciarri:1999ug}, and OPAL \cite{Abbiendi:2003va} collaborations in multiple final states, including $e^+e^- \rightarrow \ell^{+} \ell^{-} \ell^{'+} \ell^{'-}$, where $\ell$ represents an electron or a muon. The LEP experiments also set limits on anomalous $ZZZ$ and $ZZ\gamma$ couplings \cite{Alcaraz:2006mx}. The Fermilab Tevatron experiments have also searched for and measured the pair production of $Z$ bosons. 
The D0 collaboration's analysis of $ZZ \rightarrow \ell^+ \ell^- \ell^{'+} \ell^{'-}$ production with 1.1 fb$^{-1}$ of $p \bar{p}$ data yielded an upper limit of 4.4 pb on the $ZZ$ production cross section at 95\% C.L. Additionally, limits on anomalous $ZZZ$ and $ZZ\gamma$ couplings were determined~\cite{:2007hm}. The D0 collaboration was the first to observe $ZZ$ production in $p \bar{p}$ collisions in the $\ell^+ \ell^- \ell^{'+} \ell^{'-}$ final state with 2.7 fb$^{-1}$ of data \cite{iananalysis}. The D0 collaboration has also measured the $ZZ$ cross section in the $\ell^+\ell^- \nu \bar{\nu}$ final state, first with 2.2 fb$^{-1}$ \cite{d0_zz_2l2nu1} and later with 8.6 fb$^{-1}$ of integrated luminosity, yielding a final measurement of $1.64 \pm 0.44 \thinspace \mbox{(stat)}^{+0.13}_{-0.15}\thinspace \mbox{(syst)}$ pb~\cite{d0_zz_2l2nu}. The CDF collaboration has analyzed data from 1.9~fb$^{-1}$ of integrated luminosity to study $ZZ$ production, measuring, when combining $\ell^+ \ell^- \ell^{'+} \ell^{'-}$ and $\ell^+\ell^- \nu \bar{\nu}$ channels, a cross section of $\sigma(ZZ) = 1.4^{+0.7}_{-0.6}~\mathrm{(stat+syst)}$~pb~\cite{cdf_zz}. The ATLAS collaboration has observed $pp \to ZZ$ production in the four charged lepton final state in 1.0 fb$^{-1}$ of data at $\sqrt{s} = 7$ TeV \cite{atlas_zz}. The CMS collaboration has measured $\sigma(pp \to ZZ)$ in 5.0 fb$^{-1}$ of data at $\sqrt{s} = 7$ TeV \cite{cms_zz}, and has observed the rare decay $Z \to \ell^+ \ell^- \ell^{'+} \ell^{'-}$ with a branching fraction in agreement with the SM prediction. This article is an update of the D0 collaboration's prior $ZZ$ to four charged lepton analysis that measured a cross section of $\sigma(p\bar{p} \to ZZ)=1.26^{+0.47}_{-0.37}~\mathrm{(stat)} \pm 0.11~\mathrm{(syst)} \pm 0.08~\mathrm{(lumi)}$~pb using 6.4~fb$^{-1}$ of integrated luminosity \cite{d0_zz_4lep}. 
The result presented here uses 9.6~fb$^{-1}$ to 9.8~fb$^{-1}$ of integrated luminosity, and expands electron acceptance in the $eeee$ final state. \section{\label{sec:detector}Detector} The D0 detector is described in detail elsewhere~\cite{thedetector,thedetector2,thedetector3,thedetector4}. The main components are the central tracking system, the calorimeter system, and the muon detectors. The central-tracking system is located within a 2~T solenoidal field and consists of two different trackers. Located closest to the interaction point is the silicon microstrip tracker (SMT) and surrounding that is the central fiber tracker (CFT). The SMT is an assembly of barrel silicon detectors in the central region, along with large-diameter disks in the forward regions for tracking at high pseudorapidity ($\eta$) \cite{etaref}. The CFT consists of eight concentric coaxial barrels each carrying two doublet layers of scintillating fibers. The liquid-argon calorimeter system is housed in three cryostats. The central calorimeter (CC) covers up to $|\eta|=1$, and two end calorimeters (EC) are located in the forward regions, extending coverage to $|\eta|=4$. In the intercryostat region (ICR) between the CC and EC cryostats, there is a scintillating intercryostat detector (ICD) between $1.1 <|\eta| <1.4$ that recovers some energy from particles passing through the ICR. Closest to the collisions are the electromagnetic (EM) regions of the calorimeter followed by hadronic layers of fine and coarse segmentation. A muon detection system \cite{themuons1} is located beyond the calorimeters and consists of a layer of tracking detectors and scintillation trigger counters before 1.8 T toroid magnets, followed by two similar layers after the toroids. There is a three-level trigger system consisting of a collection of specialized hardware elements, microprocessors, and decision-making algorithms to selectively record the events of most interest. 
\section{Monte Carlo} We use the {\sc pythia}~\cite{Pythia} Monte Carlo (MC) program to determine the $Z/\gamma^{*} \mbox{ } Z/\gamma^{*} \to \ell^+ \ell^- \ell^{'+} \ell^{'-}$ signal acceptance and to simulate the migration background. The signal is defined to consist of $Z/\gamma^{*} \mbox{ }Z/\gamma^{*}$ pairs where each $Z/\gamma^{*}$ boson has a mass greater than 30 GeV. The migration background consists of $Z/\gamma^{*}\mbox{ } Z/\gamma^{*}$ events where at least one of the two $Z/\gamma^{*}$ bosons has an invariant mass of less than 30 GeV; it enters the signal sample either due to mismeasurement or by mis-assigning the lepton pairs in the $eeee$ and $\mu \mu \mu \mu$ channels. We include $Z/\gamma^{*} \mbox{ } Z/\gamma^{*} \to \ell^+ \ell^- \tau^{+} \tau^{-}$ events where the taus decay into electrons or muons as appropriate to match the final four-lepton signature in the signal acceptance. Contributions from $ZZ \to \tau^{+} \tau^{-} \tau^{+} \tau^{-}$ with subsequent decays into muons and electrons are also examined, but found to be negligible. The $ZZ$ transverse momentum ($p_T$) spectrum is also estimated using {\sc sherpa} MC \cite{sherpa}, and the difference between the $p_T$ spectra from {\sc pythia} and {\sc sherpa} is used as a systematic uncertainty. The dominant tree-level diagrams for $p\bar{p} \rightarrow Z/\gamma^* \mbox{ }Z/\gamma^* \rightarrow \ell^+ \ell^- \ell^{'+} \ell^{'-}$ are shown in Fig.~\ref{fig:decayfd}. The singly resonant $Z$ boson diagram contributes at low mass, and we expect a negligible contribution to the signal yields from this diagram in our analysis. 
\begin{figure}[!htb] \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.5in]{fig01a.eps} \label{fig:decay} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \centering \includegraphics[width=1.5in]{fig01b.eps} \label{fig:singledecay} \end{minipage} \caption{Feynman diagrams for (a) the $t$-channel tree-level process $q\bar{q} \rightarrow ZZ \to \ell^+ \ell^- \ell^{'+} \ell^{'-}$ and (b) the singly resonant process.} \label{fig:decayfd} \end{figure} To estimate the migration background, we generate $Z/\gamma^{*}$ pairs where at least one of the bosons has a mass between 5 and 30 GeV, and estimate the cross section of these events using next-to-leading-order (NLO) MC from {\sc mcfm}~\cite{MCFM} with the CTEQ61M PDF set \cite{cteq6m}. The $t \bar t$ background is estimated using {\sc alpgen}~\cite{Alpgen} with a top quark mass of 172~GeV and is normalized to an approximate NNLO cross section calculation~\cite{Moch}. Other backgrounds where photons or jets must be misidentified for the event to enter our sample, such as events containing a $Z$ plus jets, are estimated from data as described in Sec.~\ref{sec:InstrumBkg}. For the Higgs boson search, we generate SM Higgs boson events with masses between 115 and 200 GeV in 5 GeV increments. We simulate the gluon fusion ($gg \to H$) and $ZH$ associated production ($q\bar{q} \to ZH$) processes using {\sc pythia}. The expected $gg \to H$ cross section is corrected to next-to-NLO (NNLO) with next-to-next-to-leading-log resummation of soft gluons ~\cite{higgsxsec1}. The associated $ZH$ production cross section is corrected to NNLO \cite{higgsxec2}. The expected branching fractions for the Higgs boson decay are determined using {\sc hdecay} \cite{hdecay}. All of the MC samples are passed through a {\sc geant} \cite{geant} simulation of the D0 detector. 
To account for detector noise and additional $p \bar{p}$ interactions, data from random beam crossings are overlaid onto all MC events to match the instantaneous luminosity distribution of the selected data. The same algorithms used to reconstruct real data events are run on these simulated events. \section{Object Identification \label{sec:objid}} Muon candidates are reconstructed either as a track built from hits in both the wire chambers and scintillators in at least one layer of the muon system, or as a narrow energy deposit in the calorimeter system, consistent with that expected from a muon passing through the calorimeter, that is not associated with tracks in the muon system. Each muon candidate must be matched to a track in the central tracker with $p_T > 15$ GeV, and the track $p_T$ is taken as the $p_T$ of the muon, $p_T^{\mu}$. This track must have an impact parameter consistent with the muon coming from the interaction point. We consider two muon isolation variables: $E_{T}^{\rm trkcone}$, the scalar sum of the track $p_T$ within a cone of $\Delta R \leq 0.5$~\cite{delRdef} about the muon track; and $E_{T}^{\rm halo}$, the sum of the calorimeter energy in an annulus $ 0.1 <\Delta R \leq 0.4$ centered on the muon track. If the muon is reconstructed in the muon system, then we impose the requirement that $E_{T}^{\rm trkcone} / p_T^{\mu} < 0.25$ and $E_{T}^{\rm halo}/p_T^{\mu}<0.4$. Otherwise, each variable divided by $p_T^{\mu}$ must be less than $0.1$. Different selection requirements apply for electrons identified in the CC ($|\eta_{d}|<1.1$), EC ($1.5 < | \eta_{d}| < 3.2$), and ICR ($1.1 < | \eta_{d}| < 1.5$), where $\eta_{d}$ is the pseudorapidity calculated with respect to the center of the detector. In the CC and EC, electrons must have at least 90\% of their energy found in the EM calorimeter, have $p_T > 15$ GeV, and pass a calorimeter isolation requirement. 
The $p_T$ estimate for the CC and EC electrons is based on the energy deposited in the calorimeter. For electrons in the CC, the sum of transverse momenta of the charged central tracks in an annulus of $ 0.05 <\Delta R \leq 0.4$ about the electron, $I4$, must be less than 4.0 GeV. There must either be a track in the central tracker associated with the calorimeter cluster, or hits in the central tracker consistent with a track along the extrapolation of the calorimeter cluster to the interaction point. Finally, the electron must pass a neural net (NN) discriminant trained to separate electrons from jets in the CC using seven shower shape and isolation variables as input. In the EC only, we require that the track isolation $I4$ be less than $(7.0 -2.5 \times |\eta_{d}|)$ GeV or $0.01$ GeV, whichever is larger. The electron must pass a NN discriminant trained to separate electrons from jets in the EC using three shower shape and isolation variables as input and an additional chi-square-based shower shape requirement designed to distinguish electrons from jets. Within the ICR, there is incomplete EM calorimeter coverage, so the electron must pass a minimum EM + ICD energy fraction requirement that varies with $|\eta_{d}|$. The candidate must be matched to a central track with $p_T > 15$ GeV and have a $p_T > 10$ GeV measured in the calorimeter. Additionally, the ICR electron must satisfy two multivariate discriminants designed to reject jet background. Due to the limited energy resolution in the ICR, we use the $p_T$ of the track associated with the ICR electron to estimate the ICR electron energy. Jets are used in the estimation of the instrumental background, as discussed in Sec.~\ref{sec:InstrumBkg}. In this analysis, we use jets reconstructed from energy deposits in the CC, EC, and ICD detectors using the Run II midpoint cone algorithm \cite{jccbjets} with a cone size of $\Delta R = 0.5$. The jets must have $p_T > 15$ GeV and $|\eta_{d}| < 3.2$. 
We apply the standard jet energy scale (JES) corrections \cite{jes} to jets in both data and MC. The missing transverse energy, \met, is calculated using a vector sum of the transverse components of calorimeter energy depositions, with appropriate JES corrections \cite{jes}. In the $ee\mu \mu$ and $\mu\mu\mu\mu$ final states, the \met~is corrected for identified muons. \section{\label{sec:selection} Event Selection} To maximize the acceptance, we consider all events that pass the event selection requirements below without requiring a specific trigger. The majority of our acceptance comes from events collected by single lepton and di-lepton triggers. As there are four high-$p_{T}$ leptons in this final state, we estimate that the trigger efficiency for the signal is greater than 99.5\% in all channels. \subsection{\label{sec:electrons}$\boldsymbol{ eeee}$ final state} All electron candidates have to satisfy the requirements in Sec.~\ref{sec:objid}. We require at least four electron candidates. If there are four CC/EC electron candidates, no ICR electron candidates are considered, and if there are more than four CC/EC electron candidates, the highest-$p_T$ candidates are used. At least two of the electrons must be in the CC, and if an event has more than one ICR electron, only the leading ICR electron is considered as a lepton candidate. All possible pairings of the selected electrons are considered with no charge requirement imposed, and we require that one of the pairings has di-electron mass $M_{ee}>30$ GeV for both di-electrons. Additionally, there must be $\Delta R > 0.5$ between any ICR electron and any CC and EC electrons, or the ICR electron is not considered. 
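The pairing requirement above (at least one way of grouping the four selected electrons into two di-electrons, with both masses above 30 GeV) can be sketched as follows; the four-vector format and helper names are illustrative, not taken from the analysis code:

```python
import math

def pair_mass(p1, p2):
    """Invariant mass (GeV) of two leptons given as (E, px, py, pz) four-vectors."""
    E = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    m2 = E * E - px * px - py * py - pz * pz
    return math.sqrt(max(m2, 0.0))

def passes_pairing(electrons, m_min=30.0):
    """True if at least one of the three possible pairings of four electrons
    into two di-electrons gives both masses above m_min.  No charge
    requirement is imposed, matching the selection in the text."""
    pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
    return any(
        pair_mass(electrons[a], electrons[b]) > m_min
        and pair_mass(electrons[c], electrons[d]) > m_min
        for (a, b), (c, d) in pairings
    )
```

Four electrons admit exactly three splittings into two pairs, so the selection reduces to checking three mass combinations per event.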
Because the instrumental background contamination is expected to vary significantly depending on the number of central electrons, the $eeee$ channel is then divided into four sub-channels that depend on the number of electrons in the CC, $N_{CC}$, the EC, $N_{EC}$, and in the ICR, $N_{ICR}$: $N_{CC}=4$, $N_{CC}=2$ with $N_{EC}=2$, $N_{CC}=3$ with $N_{EC}=1$, and $N_{CC} \ge 2$ with $N_{ICR}=1$. Since we do not use the muon system in $eeee$ event reconstruction, we include events where the muon system was not fully operational. This leads to a slightly higher integrated luminosity in the $eeee$ final state compared to the $ee \mu \mu$ and $\mu \mu \mu \mu$ final states. \subsection{\label{sec:emu}$\boldsymbol{ ee\mu\mu}$ channel} The $ee\mu\mu$ channel is divided into three sub-channels that depend on the number of electrons in the CC: $N_{CC}=2$, $N_{CC}=1$, and $N_{CC} = 0$. No ICR electrons are used in this channel. As in the $eeee$ final state, we apply this splitting because the instrumental background contamination varies significantly depending on the number of central electrons. We require at least two electrons and two muons; if there are more leptons in the event, only the highest-$p_T$ leptons of each type are used. To reject cosmic ray background, the cosine of the angle between the muons must satisfy $\cos\alpha < 0.96$, and the acoplanarity \cite{cit:acoplanarity} between the two muons must be greater than 0.05 radians. We further require $|\Delta z_{DCA}| < 3.0$ cm between the muon tracks, where $z_{DCA}$ refers to the location along the beam axis where the track has its distance of closest approach to the beamline. Also, we impose the requirement that $\Delta R > 0.2$ between all possible electron-muon pairings. Both the muon pair and electron pair invariant masses must exceed 30 GeV. There is no opposite charge requirement placed on the lepton pairs in order to maximize acceptance. 
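The cosmic-ray rejection described above combines three requirements on the muon pair. A sketch of the combined cut (a hypothetical helper; we take $\alpha$ as the angle between one muon and the reversed direction of the other, so a single cosmic ray crossing the detector gives $\cos\alpha \approx 1$ — this angle convention is our assumption):

```python
import math

def passes_cosmic_rejection(mu1, mu2, dz_dca_cm, acoplanarity_rad):
    """Reject muon pairs consistent with a single back-to-back cosmic ray.
    mu1, mu2 are (px, py, pz) momentum vectors; requirements from the text:
    cos(alpha) < 0.96, acoplanarity > 0.05 rad, |dz_DCA| < 3.0 cm."""
    dot = sum(a * b for a, b in zip(mu1, mu2))
    norm = (math.sqrt(sum(a * a for a in mu1))
            * math.sqrt(sum(a * a for a in mu2)))
    # Angle between mu1 and the reversed mu2: a cosmic-ray pair is
    # collinear, so cos_alpha is close to +1 and the event is rejected.
    cos_alpha = -dot / norm
    return (cos_alpha < 0.96
            and acoplanarity_rad > 0.05
            and abs(dz_dca_cm) < 3.0)
```

A genuine $Z\to\mu\mu$ decay with some transverse boost fails none of the three cuts, while a perfectly back-to-back pair fails the $\cos\alpha$ requirement.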
\subsection{\label{sec:muons}$\boldsymbol{\mu\mu\mu\mu}$ final state} In the four-muon final state, there must be at least four muon candidates satisfying the requirements in Sec.~\ref{sec:objid}, and at least two of the muons must be matched to tracks found in the muon system. The four-muon system must be charge neutral ($\sum_{i=1}^{4}q_{i} = 0$), and only oppositely charged pairs are considered as $Z$ boson candidates. If more than four muons are reconstructed in the event, we consider only the four highest-$p_T$ muons. We further require $|\Delta z_{DCA}| < 3.0$ cm between all muons. We also require that one of the two possible sets of dimuons has a dimuon mass $M_{\mu\mu}>30$ GeV for both dimuons. \section{Instrumental Background \label{sec:InstrumBkg}} The instrumental background primarily arises from $Z(\to \ell \ell)$ + jets and $Z(\to \ell \ell) + \gamma$ + jets production (with smaller contributions from $WZ$ + jets, $WW$ + jets, $W$ + jets, and multijet production with $\ge$4 jets). These events contaminate the four-lepton channels when a jet is falsely reconstructed as an isolated lepton. $Z(\to \ell \ell) + \gamma$ + jets production, where a photon and a jet are each mis-identified as electrons, contaminates the $eeee$ and $ee\mu\mu$ channels. We estimate the instrumental background using the data. We first find the probability for a jet to be mis-identified as a lepton, $P_{j\ell}$. A tag-and-probe method applied to di-jet events with jet $p_T > 15$~GeV is used to determine $P_{j\ell}$. The tagged jet must be associated with a jet that fired a single jet trigger and be the highest-$p_T$ jet in the event. We then look for a probe jet with $|\Delta \phi| > 3.0$ with respect to the tag jet, where $\phi$ is the azimuthal angle. To suppress contamination from $W$+jet events, we require $\mbox{\met}< 20$ GeV in the tag and probe sample. The probe jets form the denominator of the $P_{j\ell}$ calculation. 
To calculate the numerator of the $P_{je}$ estimate, we first find all good electrons in the event with $p_T > 15$ GeV. We then select those electrons that satisfy the same criteria imposed on the probe jets, noted above. The $P_{je}$ estimate is parametrized as a function of the jet $p_T$ and $\eta_{d}$. The $P_{j\mu}$ estimate is determined using a similar method. The tagged jet is defined as was done for electron events, but in the numerator, rather than an electron, we use any muon that has $|\Delta \phi| > 3.0$ from the tag jet, and $P_{j\mu}$ is taken as the number of muons divided by the number of probe jets in the sample. The $P_{j\mu}$ estimate is parametrized in terms of $p_{T}$ and $\eta$. The $P_{j\ell}$ estimates for both electrons and muons are on the order of 10$^{-3}$. To estimate the instrumental background for the $eeee$ final state, $P_{je}$ is applied to events with three reconstructed electrons and one or more jets. The jet kinematics are used to model the electron kinematics in the event. This method accounts for events where either a photon or a jet is misreconstructed as one electron and a jet is misreconstructed as the other. This method overestimates the background from events with two real electrons and two jets misreconstructed as electrons. To determine the rate, we look at events with two reconstructed electrons and two or more reconstructed jets and apply $P_{je}$ to both jets. The number of $ee$ plus two jet events after $P_{je}$ is applied to both jets is found to be negligible, so only $eee$+jet events are used to model the instrumental background distributions in the $eeee$ final state. The instrumental background in the $ee\mu\mu$ channel is calculated from two different contributions. The first contribution is from events with $e \mu \mu$ plus one or more jets, where we apply $P_{je}$ to the jet. 
This method gives an estimate of a background due to $Z(\to\mu\mu)$ + jets and $Z(\to\mu\mu)+ \gamma$ + jets where a jet has been reconstructed as an electron. We also consider the $ee$ plus two jet or more case, where we apply $P_{j\mu}$ to the jets. This method gives an estimate of the background due to $Z(\to ee)$ + jets where the jets can contain muons. The $P_{j\mu}$ is applied to jets in $\mu\mu$ plus two or more jets data to determine the instrumental background for the $\mu\mu\mu\mu$ channel. Background estimates derived from the above method can be found in Tables~\ref{tab:embkgd}--\ref{tab:mubkgd} in each final state. \begin{table*}[htb] \caption{\label{tab:embkgd}Contributions from non-negligible backgrounds in the $eeee$ subchannels, plus expected $t$-channel $ZZ$ and Higgs boson signals and number of observed events. Uncertainties are statistical followed by systematic.} \begin{tabular}{l|c|c|c|c} \hline \hline & 2 CC & 3 CC & 4 CC & $\geq$ 2 CC \\ & 2 EC & 1 EC & & 1 ICR \\ \hline & & & & \\[-2mm] Instrumental backg. 
& $0.15 \pm 0.01 \pm 0.03$ & $0.12 \pm 0.01 \pm 0.02$ & $0.05 \pm 0.01 \pm 0.01$ & $0.29 \pm 0.04~ ^{+0.03}_{-0.12}$ \\[1mm] Migration & $0.014 \pm 0.001 \pm 0.002$ & $0.023 \pm 0.001 \pm 0.004$ & $0.025 \pm 0.001 \pm 0.004$ & $0.024 \pm 0.001 \pm 0.003$ \\[1mm] \hline & & & & \\[-2mm] Total non-$ZZ$ & $0.17 \pm 0.01 \pm 0.03 $ & $0.14 \pm 0.01 \pm 0.02 $ & $0.08 \pm 0.01 \pm 0.01 $ & $0.32 \pm 0.04~ ^{+0.03}_{-0.12} $ \\ background & & & &\\[1mm] \hline & & & & \\[-2mm] Expected & $0.48 \pm 0.01 \pm 0.07 $ & $1.14 \pm 0.01 \pm 0.17 $ & $1.03 \pm 0.01 \pm 0.15 $ & $1.47 \pm 0.01 \pm 0.19 $ \\ $t$-channel $Z/\gamma^{*}\mbox{ }Z/\gamma^{*}$ & & & &\\[1mm] \hline & & & & \\[-2mm] Expected $gg \to H$ & $<0.001$ & 0.001 & 0.004 & 0.002 \\ $M_H = $ 125 GeV&& & &\\[1mm] Expected $ZH$ & 0.003 & 0.006 & 0.010 & 0.008 \\ $M_H =$ 125 GeV&& & &\\[1mm] \hline & & & & \\[-2mm] Total Higgs boson & 0.003 & 0.007 & 0.014 & 0.010 \\ $M_H =$ 125 GeV & & & &\\[1mm] \hline & & & & \\[-2mm] Observed & 0 & 1 & 2 & 2\\ Events& & & &\\ \hline \hline \end{tabular} \end{table*} \begin{table*}[!htb] \caption{\label{tab:emubkgd}Contributions from non-negligible backgrounds in the $ee\mu\mu$ subchannels, plus expected signal and number of observed events. Uncertainties are statistical followed by systematic.} \begin{tabular}{l|c|c|c} \hline \hline & 0 CC & 1 CC & 2 CC \\ \hline & & & \\[-2mm] Instrumental backg. 
& $0.11 \pm 0.01 \pm 0.03$ & $0.21 \pm 0.01 \pm 0.04 $ & $0.27 \pm 0.01 \pm 0.04 $ \\[1mm] $t\bar{t}$ & $(0.2~^{+0.3}_{-0.1}\pm 0.6)\e{-2}$ & $(1.0~^{+0.5}_{-0.3}\pm 0.2)\e{-2}$ & $(0.3~^{+0.2}_{-0.1}\pm 0.3)\e{-2}$ \\[1mm] Migration & $(2.1~^{+0.9}_{-0.7}~^{+0.3}_{-1.0})\e{-3}$ & $(5.0 \pm 0.8 ~^{+0.6}_{-1.4})\e{-3}$ & $(4.8~^{+0.6}_{-0.5}~\pm 1.0)\e{-3}$ \\[1mm] Cosmic rays & $<0.001$ & $<0.003$ & $<0.006$ \\[1mm] \hline & & & \\[-2mm] Total non-$ZZ$ & $0.12 \pm 0.01 \pm 0.03 $ & $0.23 \pm 0.01 \pm 0.04 $ & $0.27 \pm 0.01 \pm 0.04 $ \\ background & & & \\[1mm] \hline & & & \\[-2mm] Expected & $0.43 \pm 0.01 \pm 0.06 $ & $2.37 \pm 0.02 \pm 0.28$ & $4.13 \pm 0.03 \pm 0.49$ \\ $t$-channel $Z/\gamma^{*}\mbox{ }Z/\gamma^{*}$ & & & \\[1mm] \hline & & & \\[-2mm] Expected $gg \to H$ & $<0.001$ & 0.002 & 0.007 \\ $M_H =$ 125 GeV& & & \\[1mm] & & & \\[-2mm] Expected $ZH$ & 0.001 & 0.015 & 0.036 \\ $M_H =$ 125 GeV& & &\\[1mm] \hline & & & \\[-2mm] Total Higgs boson & 0.002 & 0.017 & 0.043 \\ $M_H =$ 125 GeV&& &\\[1mm] \hline Observed Events & 2 & 1 & 2 \\ \hline \hline \end{tabular} \end{table*} \section{\label{sec:syst}Systematic Uncertainties} The following factors contribute to the systematic uncertainty on this measurement. We assess a 1\% trigger efficiency uncertainty. Lepton identification uncertainties are calculated by studying $Z \to \ell \ell$ events; lepton identification uncertainties of 3.7\% per CC and EC electron, 6\% per ICR electron, and 3.2\% per muon are used. There is a 10\%--50\% systematic uncertainty on the instrumental background expectation in the various final states that is due to observed variations in $P_{j \ell}$ when changing selection requirements for the di-jet sample as well as limited statistics in the data samples used. We assign 20\% uncertainty to the $t\bar{t}$ background. 
This covers uncertainty on the theoretical production rate of 7\% for $m_{\rm top}=172$~GeV~\cite{Moch}, plus variation in the cross section due to uncertainty on the top quark mass, and also that on the rate at which the $b$ quark from top quark decays is misidentified as an isolated lepton. We estimate a PDF uncertainty of 2.5\% on all MC samples. We assign a 7.1\% uncertainty on the $ZZ$ cross section used to estimate the migration background and the $ZZ$ background to the Higgs boson search. A systematic uncertainty of 6.1\% is assessed on the luminosity measurement \cite{newlumi}. We assess a systematic uncertainty on the $ZZ$ $p_T$ distribution by reweighting the {\sc pythia} $ZZ$ $p_T$ to match a distribution derived from {\sc sherpa} MC \cite{sherpa}. The $ZZ$ $p_T$ systematic is between 1\% and 7\% for signal $t$-channel $ZZ$ events, but has up to a 40\% effect on the migration background. We also assess systematic uncertainties on the muon and electron energy resolution \cite{mumom}, which lead to an uncertainty on the cross section measurements and Higgs boson production limits of less than 2\%. For the Higgs boson search, we assess a theoretical uncertainty on the expected gluon fusion and $ZH$ associated cross sections of 10.9\% and 6.2\%, respectively \cite{higgsxsec1,higgsxec2}. \begin{table}[!htb] \begin{center} \caption{\label{tab:mubkgd}Contributions from non-negligible backgrounds in the $\mu\mu\mu\mu$ channel, plus expected $t$-channel $ZZ$ and Higgs boson signal and number of observed events. Uncertainties are statistical followed by systematic.} \begin{tabular}{l|c} \hline \hline & Number of Events\\ \hline & \\[-2mm] Instrumental backg. 
& $0.12 \pm 0.01~ ^{+0.07}_{-0.05}$ \\[1mm] & \\[-2mm] Migration & $(0.34 \pm 0.02~ ^{+0.07}_{-0.04})\e{-1}$ \\[1mm] & \\[-2mm] Cosmic rays & $<$0.01 \\[1mm] \hline & \\[-2mm] Total non-$ZZ$ & $0.15 \pm 0.01~ ^{+0.07}_{-0.05}$ \\ background & \\[1mm] \hline & \\[-2mm] Expected & $4.26 \pm 0.02 \pm 0.43$ \\ $t$-channel $Z/\gamma^{*}\mbox{ }Z/\gamma^{*}$ &\\[1mm] \hline & \\[-2mm] Expected $gg \to H$ & 0.007 \\ $M_H =$ 125 GeV&\\[1mm] & \\[-2mm] Expected $ZH$ & 0.033 \\ $M_H =$ 125 GeV&\\[1mm] \hline & \\[-2mm] Total Higgs boson & 0.040 \\ $M_H =$ 125 GeV&\\[1mm] \hline Observed Events & 3 \\ \hline \hline \end{tabular} \end{center} \end{table} \newpage \section{\label{sec:results} Cross Section Measurement} The data are used to measure the production cross section $p\bar{p} \to ZZ$ at $\sqrt{s} = 1.96$~TeV. The integrated luminosities analyzed for the three channels are 9.8, 9.6, and 9.6 fb$^{-1}$ for the $eeee$, $ee\mu\mu$, and $\mu \mu \mu \mu$ channels, respectively. A summary of the signal and background event expectations is included in Tables~\ref{tab:embkgd}--\ref{tab:mubkgd} for the three channels. We observe five $eeee$ candidate events, five $ee \mu\mu$ candidate events, and three $\mu\mu\mu\mu$ candidate events, for 13 data events total, with a total of $16.8 \pm 1.9 \thinspace \mbox{(stat+syst+lumi)}$ expected events. \begin{table*}[!htb] \caption{\label{tab:acceptance_4e} Acceptance $\times$ efficiency for the $eeee$ subchannels, for $ZZ \to ee ee$ and $ZZ \to ee \tau \tau$ decays. 
Uncertainties are statistical followed by systematic.} \begin{tabular}{l|c|c} \hline \hline Channel & $eeee$ & $ee\tau\tau$ \\ \hline 2 CC, 2 EC & $0.025 \pm 0.001 \pm 0.004$ & $0.0002 \pm 0.0001 \pm 0.0001$ \\ 3 CC, 1 EC & $0.059 \pm 0.001 \pm 0.011$ & $0.0006 \pm 0.0001 \pm 0.0001$ \\ 4 CC & $0.053 \pm 0.001 \pm 0.009$ & $0.0007 \pm 0.0001 \pm 0.0001$ \\ $\geq$ 2 CC, 1 ICR & $0.076 \pm 0.001 \pm 0.012$ & $0.0007 \pm 0.0001 \pm 0.0001$ \\ \hline \hline \end{tabular} \end{table*} \begin{table*}[!htb] \caption{\label{tab:acceptance_2e2mu} Acceptance $\times$ efficiency for the $ee\mu\mu$ subchannels, for $ZZ \to ee \mu \mu$, $ZZ \to ee \tau \tau$, and $ZZ \to \mu \mu \tau \tau$ decays. Uncertainties are statistical followed by systematic.} \begin{tabular}{l|c|c|c} \hline \hline Channel & $ee\mu\mu$ & $ee\tau\tau$ & $\mu \mu \tau\tau$ \\ \hline 0 CC & $0.011 \pm 0.001 \pm 0.001$ & $0.0001 \pm 0.0001 \pm 0.0001$ & $0.0002 \pm 0.0001 \pm 0.0001$ \\ 1 CC & $0.063 \pm 0.001 \pm 0.007$ & $0.0007 \pm 0.0001 \pm 0.0001$ & $0.0007 \pm 0.0001 \pm 0.0001$ \\ 2 CC & $0.110 \pm 0.001 \pm 0.012$ & $0.0014 \pm 0.0001 \pm 0.0002$ & $0.0019 \pm 0.0001 \pm 0.0002$ \\ \hline \hline \end{tabular} \end{table*} \begin{table*}[!htb] \caption{\label{tab:acceptance_4mu} Acceptance $\times$ efficiency for the $\mu \mu \mu \mu$ channel, for $ZZ \to \mu \mu \mu \mu$ and $ZZ \to \mu \mu \tau \tau$ decays. Uncertainties are statistical followed by systematic.} \begin{tabular}{c|c} \hline \hline $\mu \mu \mu\mu$ & $\mu \mu \tau\tau$ \\ \hline $0.224 \pm 0.002 \pm 0.022$ & $0.0032 \pm 0.0002 \pm 0.0003$ \\ \hline \hline \end{tabular} \end{table*} A negative log-likelihood function is constructed by taking as input the expected signal acceptance, the number of expected background events, and the number of observed events in each of the subchannels. The signal acceptance times efficiency for each channel are shown in Tables~\ref{tab:acceptance_4e}--\ref{tab:acceptance_4mu}. 
The branching ratio for each channel is calculated using the relevant $Z$ boson branching ratios from Ref.~\cite{pdg}. The cross section, $\sigma$, is varied to minimize the negative log-likelihood, which gives $\sigma(p\bar{p} \to Z/\gamma^*Z/\gamma^*)= 1.26 ^{+0.44}_{-0.36} \thinspace \mbox{(stat)} ^{+0.17}_{-0.15} \thinspace \mbox{(syst)} \pm 0.08 \thinspace \mbox{(lumi)}$~pb for $M(Z/\gamma^{*}) > 30$~GeV. We then calculate the ratio of $\sigma(p\bar{p} \to Z/\gamma^*Z/\gamma^*)$ to $\sigma(p\bar{p} \to ZZ)$ for this mass region using {\sc mcfm} \cite{MCFM}, and from this correction determine the $p\bar{p} \to ZZ$ cross section to be $1.05 ^{+0.37}_{-0.30} \thinspace \mbox{(stat)} ^{+0.14}_{-0.12} \thinspace \mbox{(syst)} \pm 0.06\thinspace \mbox{(lumi)}$~pb. We combine this measurement with the $p\bar{p} \to ZZ$ cross section measured in the $\ell^{+}\ell^{-}\nu\bar{\nu}$ final state using data from the D0 detector \cite{d0_zz_2l2nu}, giving a total combined $p\bar{p} \to ZZ$ cross section of $1.32 ^{+0.29}_{-0.25} \thinspace \mbox{(stat)} \pm 0.12 \thinspace \mbox{(syst)} \pm 0.04\thinspace \mbox{(lumi)}$~pb. The measured cross section values are consistent with the SM expectation of $1.43 \pm 0.10$ pb \cite{MCFM}. \section{Higgs Boson Production Limits \label{higgsSection}} The main Higgs boson production mechanisms that can result in four final state charged leptons are gluon fusion and $ZH$ associated production. For Higgs boson events produced through gluon fusion, final states with four charged leptons arise from the decay $H \to ZZ$, where both $Z$ bosons then decay leptonically. As all of the decay products of the Higgs boson in this decay are well measured, the best discriminating variable between the gluon fusion Higgs boson signal and the backgrounds is the four-lepton invariant mass. 
In the case of associated $ZH$ production, two of the leptons in each event can come from the decay of the associated $Z$ boson, so Higgs decay modes with two or more final state leptons will contribute to our signal. The majority of the $ZH$ signal arises from $H \to \tau^{+}\tau^{-}$, $H \to WW$, and $H \to ZZ$ decays. We expect large \met~in these events, due to the neutrinos from the $\tau$ and $W$ boson decays, as well as in events where one $Z$ boson from the $H \to ZZ$ decays to neutrinos. We therefore set limits on SM Higgs boson production using the four-lepton invariant mass and the \met. The four-lepton mass and \met~are shown in Fig.~\ref{fig:fourmass}, with the expected Higgs boson signal distributions for a Higgs boson mass, $M_H$, of 125 GeV. Additional differential distributions are provided in Appendix~\ref{sec:kinematics}. The expected yields for each production and decay mode for each Higgs boson mass considered are shown in Table~\ref{tab:decaytable}. For events with $\mbox{\met}< 30$ GeV, the four-lepton mass is used to discriminate the Higgs boson signal from all backgrounds; in events with $\mbox{\met} \geq 30$ GeV, the \met~is used. For the Higgs boson search, the $t$-channel $Z/\gamma^*\mbox{ }Z/\gamma^*$ background is fixed to the SM expectation. \begin{table*}[!htb] \caption{\label{tab:decaytable} Expected numbers of Higgs boson events for each mass point for the given production and decay mode. 
The $H \to \gamma \gamma$, $H \to \mu \mu$, and $H \to Z\gamma$ contributions are summed together in the $H \to \mbox{other}$ decays column.} \begin{tabular}{c|c|cccc|c} \hline \hline $M_H$ & $gg \to H$ & \multicolumn{4}{c|}{$q\bar{q} \to ZH$} & Total \\ (GeV) & $H \to ZZ$ & $H \to WW$ & $H \to ZZ$ & $H \to \tau \tau$ & $H \to \mbox{other}$ & \\ \hline 115 & 0.009 & 0.016 & 0.013 & 0.060 & 0.008 & 0.106 \\ 120 & 0.013 & 0.026 & 0.017 & 0.052 & 0.006 & 0.113 \\ 125 & 0.024 & 0.040 & 0.024 & 0.043 & 0.005 & 0.137 \\ 130 & 0.049 & 0.058 & 0.039 & 0.035 & 0.004 & 0.184 \\ 135 & 0.090 & 0.066 & 0.047 & 0.025 & 0.003 & 0.232 \\ 140 & 0.138 & 0.077 & 0.055 & 0.018 & 0.003 & 0.291 \\ 145 & 0.185 & 0.088 & 0.061 & 0.013 & 0.002 & 0.348 \\ 150 & 0.210 & 0.092 & 0.059 & 0.008 & 0.001 & 0.371 \\ 155 & 0.196 & 0.099 & 0.049 & 0.004 & 0.001 & 0.348 \\ 160 & 0.112 & 0.100 & 0.026 & 0.002 & 0.000 & 0.240 \\ 165 & 0.059 & 0.097 & 0.012 & 0.001 & 0.000 & 0.169 \\ 170 & 0.062 & 0.088 & 0.012 & 0.000 & 0.000 & 0.162 \\ 175 & 0.082 & 0.086 & 0.015 & 0.000 & 0.000 & 0.183 \\ 180 & 0.148 & 0.078 & 0.027 & 0.000 & 0.000 & 0.254 \\ 185 & 0.348 & 0.068 & 0.063 & 0.000 & 0.000 & 0.478 \\ 190 & 0.440 & 0.058 & 0.077 & 0.000 & 0.000 & 0.575 \\ 195 & 0.467 & 0.051 & 0.082 & 0.000 & 0.000 & 0.600 \\ 200 & 0.468 & 0.046 & 0.083 & 0.000 & 0.000 & 0.597 \\ \hline \hline \end{tabular} \end{table*} \begin{figure}[!htb] \centering \includegraphics[width=\columnwidth]{fig02a.eps} \includegraphics[width=\columnwidth]{fig02b.eps} \caption{ \label{fig:fourmass} Distributions of (a) the four lepton invariant mass and (b) the \met~ in data, and of expected signal and background. The Higgs boson signal for $M_H$ of 125 GeV is shown scaled by a factor of 40.} \end{figure} We find no evidence of SM Higgs boson production and proceed to set limits. We consider potential $M_H$ values between 115 and 200 GeV, in 5 GeV increments. 
We calculate limits on the SM Higgs boson production cross section using a modified frequentist approach~\cite{bib-wade1,bib-wade2,bib-wade3}. A log-likelihood ratio (LLR) test statistic is formed using the Poisson probabilities for estimated background yields, the expected signal acceptance, and the number of observed events for each considered Higgs boson mass hypothesis. The confidence levels are derived by integrating the LLR distribution in pseudo-experiments using both the signal-plus-background hypothesis ({\it CL}$_{s+b}$) and the background-only hypothesis ({\it CL}$_b$). The excluded production cross section is taken to be the cross section for which the confidence level for signal, {\it CL}$_s={\mbox{\it CL}}_{s+b}/{\mbox{\it CL}}_b$, is less than or equal to $0.05$. The calculated limits are listed in Table~\ref{tab:limits}. At $M_H=$ 125 GeV, we expect to set a limit of 42.8 times the SM cross section at the 95\% C.L., and observe a limit of 42.3 times the SM cross section. The limits vs. $M_H$ are shown in Fig.~\ref{fig:limit}, along with the associated LLR distribution. \begin{figure}[b] \includegraphics[width=0.45\textwidth]{fig03a.eps} \includegraphics[width=0.45\textwidth]{fig03b.eps} \caption{ The (a) expected and observed 95\% C.L. upper limits on the SM Higgs boson production cross section relative to the value expected in the SM, and the (b) log-likelihood ratio for all four lepton channels combined.} \label{fig:limit} \end{figure} \begin{table}[b] \caption{\label{tab:limits} Expected and observed 95\% C.L. 
upper limits on the SM Higgs boson production cross section relative to the value expected in the SM.} \begin{tabular}{c|c|c} \hline \hline $M_{H}$ (GeV) & Expected & Observed \\ \hline 115 & 57.3 & 78.9 \\ 120 & 54.9 & 60.6 \\ 125 & 42.8 & 42.3 \\ 130 & 30.6 & 33.5 \\ 135 & 21.5 & 21.0 \\ 140 & 16.2 & 18.2 \\ 145 & 13.4 & 13.9 \\ 150 & 12.4 & 12.1 \\ 155 & 13.4 & 14.2 \\ 160 & 20.8 & 20.6 \\ 165 & 29.6 & 28.3 \\ 170 & 32.3 & 39.0 \\ 175 & 30.4 & 28.4 \\ 180 & 22.9 & 19.6 \\ 185 & 13.3 & 9.7 \\ 190 & 11.8 & 8.6 \\ 195 & 11.8 & 9.5 \\ 200 & 12.4 & 9.9 \\ \hline \hline \end{tabular} \end{table} \section{Conclusions} We have measured the production cross section for $p\bar{p} \to Z/\gamma^{*} \mbox{ } Z/\gamma^{*}$ with $M(Z/\gamma^{*}) > 30$ GeV to be $1.26 ^{+0.44}_{-0.36} \thinspace \mbox{(stat)} ^{+0.17}_{-0.15} \thinspace \mbox{(syst)} \pm 0.08 \thinspace \mbox{(lumi)}$ pb. We correct this measurement by the expected ratio of $\sigma(p\bar{p} \to Z/\gamma^*Z/\gamma^*)$ to $\sigma(p\bar{p} \to ZZ)$ for this mass region and obtain a $p\bar{p} \to ZZ$ cross section of $1.05 ^{+0.37}_{-0.30} \thinspace \mbox{(stat)} ^{+0.14}_{-0.12} \thinspace \mbox{(syst)} \pm 0.06\thinspace \mbox{(lumi)}$~pb. We also searched for the Higgs boson in the four lepton final state, assuming that the $t$-channel $ZZ$ pair is produced with the cross section predicted by the SM. At $M_{H} = 125$ GeV, we expect a limit of 42.8 times the SM cross section, and set a limit of 42.3 times the SM cross section at the 95\% C.L. \input{acknowledgement}
\section{Introduction} \IEEEPARstart{T}{he} concept of {\em linear complementary dual} (LCD) codes was introduced by Massey in 1992 \cite{M}, and such codes have interesting applications in communication systems, cryptography, and data storage. In particular, Carlet et al. \cite{BCCGM, CG2,CG1} found a new application of binary LCD codes in implementations against side-channel and fault injection attacks. Since then, LCD codes have attracted great attention from researchers in the coding community. Yang and Massey \cite{YM94} gave a necessary and sufficient condition for a cyclic code to be LCD. The authors of \cite{LDL2017,LLD2017} constructed two families of codes that are both LCD and BCH. A {\em maximum distance separable} (MDS) code has the greatest error-correcting capability once its length and dimension are fixed. MDS codes are extensively used in communications (for example, Reed-Solomon codes are all MDS codes), and they have good applications in minimum storage codes and quantum codes. There are many known constructions of MDS codes: for instance, {\em Generalized Reed-Solomon} (GRS) codes \cite{RS}, constructions based on the equivalent problem of finding $n$-arcs in projective geometry \cite{MS}, circulant matrices \cite{RL}, Hankel matrices \cite{RS2}, or extensions of GRS codes. Both LCD codes and MDS codes have good algebraic structures, and they have interesting practical applications as mentioned above. Until now, most constructions of LCD MDS codes have been achieved by using generalized Reed-Solomon (GRS) codes, since GRS codes are all MDS. We discuss recent research progress on LCD MDS codes as follows. (1) Jin \cite{J} and Shi et al. \cite{SYY} constructed several LCD MDS codes using generalized Reed-Solomon codes with some additional conditions on special polynomials. (2) Chen and Liu \cite{CL} took a different approach to obtaining LCD MDS codes from generalized Reed-Solomon codes, and they extended some results of Jin \cite{J}.
Afterwards, Luo et al. \cite{LCC} and Fang et al. \cite{FFLZ} further extended the results of Chen and Liu and investigated Euclidean and Hermitian hulls of MDS codes and their applications in quantum codes. (3) Beelen and Jin \cite{BJ} found an explicit construction of several LCD MDS codes in the odd characteristic case using the theory of algebraic function fields. (4) Sari and Koroglu \cite{SK} constructed MDS negacyclic LCD codes of some special lengths, and they provided lower bounds on the minimum distance of these codes. (5) Carlet et al. \cite{CMTQ} obtained many parameters of Euclidean and Hermitian LCD MDS codes by using some linear codes with small dimension or codimension, self-orthogonal codes and generalized Reed-Solomon codes. (6) Carlet et al. \cite{CMTQP} introduced a general construction of LCD codes from arbitrary linear codes. More precisely, if there is an $[n, k, d]$ linear code over $\mathbb{F}_q$ $(q >3)$ (respectively, over $\mathbb{F}_{q^2}$ $(q>2)$), then there exists an $[n, k, d]$ Euclidean (respectively, Hermitian) LCD code over $\mathbb{F}_q$ (respectively, over $\mathbb{F}_{q^2}$). \vskip 0.3cm In this paper, we construct some classes of {\it new} Euclidean LCD MDS codes and Hermitian LCD MDS codes which are not monomially equivalent to Reed-Solomon codes, called LCD MDS codes {\it of non-Reed-Solomon type}. To the best of our knowledge, this is the first paper on the construction of LCD MDS codes of non-Reed-Solomon type. In coding theory, it is an important problem to find all inequivalent codes with the same parameters. We point out that any LCD MDS code of non-Reed-Solomon type constructed by our method is not monomially equivalent to any LCD code constructed by the method of Carlet et al.~\cite{CMTQP}. In particular, we construct some {\it twisted} Reed-Solomon codes or {\it Roth-Lempel} codes which are also LCD MDS codes of non-Reed-Solomon type; these codes cannot be constructed by the method in \cite{CMTQP}.
We also present some examples of non-Reed-Solomon LCD MDS codes, which are obtained by using our results and a Magma implementation. Our method is based on the constructions of Beelen et al. \cite{BPR} and Roth and Lempel \cite{RL}. To construct LCD MDS codes of non-Reed-Solomon type, we first use some special matrices to form a generator matrix such that the product of the generator matrix and its (conjugate) transpose is as simple as possible. Second, we impose conditions ensuring that this product is nonsingular. Finally, we use a lifting of the finite field so that the resulting codes also have the MDS property. \vskip 0.3cm This paper is organized as follows. In Section 2, we recall some basic concepts on Euclidean and Hermitian LCD MDS codes and two constructions of these codes. In Section 3, we find some new Euclidean and Hermitian LCD MDS codes, which are not monomially equivalent to generalized Reed-Solomon codes. We finish this paper with a conclusion in Section 4. \section{Preliminaries} Let $\Bbb F_q$ be the finite field of order $q$, where $q$ is a power of an odd prime. An ${[n, k]_q}$ linear code $\mathcal{C}$ over $\Bbb F_q$ is a $k$-dimensional subspace of $\Bbb F_q^n$. The minimum distance $d$ of a linear code $\mathcal{C}$ is bounded above by the so-called {\em Singleton bound}, that is, $d\le n-k+1$. If $d= n-k+1$, then the code $\mathcal{C}$ is called a {\em maximum distance separable} (MDS) code. For $x \in \Bbb F_{q^2}$, the conjugate of $x$ is denoted by $\overline x = x^q$. For a matrix $A$, we denote by $A^T$ the transpose of $A$, and by $\overline A$ the matrix of conjugates of $A$. For a set $B=\{x_1,x_2,\ldots, x_l\}\subseteq \Bbb F_{q^2}$, we define $\overline{B}=\{x_1^q, x_2^q, \ldots, x_l^q\}$. \subsection{Equivalence of codes} We recall some equivalence notions of codes over the finite field $\Bbb F_q$ (see \cite[Sections 1.6 and 1.7]{HP}).
\begin{definition}{\rm Let $\mathcal C_1$ and $\mathcal C_2$ be two linear codes of the same length over $\Bbb F_q$. Two linear codes $\mathcal C_1$ and $\mathcal C_2$ are {\it permutation equivalent} if there is a permutation matrix $P$ such that $G_1$ is a generator matrix of $\mathcal C_1$ if and only if $G_1P$ is a generator matrix of $\mathcal C_2$. } \end{definition} Recall that a \emph{monomial matrix} is a square matrix which has exactly one nonzero entry in each row and each column. A monomial matrix $M$ can be written either in the form of $DP$ or the form of $PD'$, where $D$ and $D'$ are diagonal matrices and $P$ is a permutation matrix. \begin{definition}{\rm Let $\mathcal C_1$ and $\mathcal C_2$ be two linear codes of the same length over $\Bbb F_q$, and let $G_1$ be a generator matrix of $\mathcal C_1$. Then $\mathcal C_1$ and $\mathcal C_2$ are {\em monomially equivalent} if there is a monomial matrix $M$ such that $G_1M$ is a generator matrix of $\mathcal C_2$. } \end{definition} \subsection{Euclidean and Hermitian LCD codes} Given a linear code $\mathcal C$ of length $n$ over $\Bbb F_q$ (resp. $\Bbb F_{q^2}$ ), the Euclidean dual code and the Hermitian dual code of $\mathcal C$ are defined by $$\resizebox{9cm}{!}{ $\mathcal C^{\bot_ E}=\bigg\{(x_0, \ldots, x_{n-1})={\bf x}\in \Bbb F_q^{n} : \begin{array}{l} \langle {\bf x},{\bf y}\rangle_{E}=\sum_{i=0}^{n-1}x_iy_i=0 \\ ~ \forall~ {\bf y}= (y_0, \ldots,y_{n-1})\in \mathcal C\end{array}\bigg\} $} $$ and $$\resizebox{9cm}{!}{ $\mathcal C^{\bot_ H}=\bigg\{(x_0, \ldots, x_{n-1})={\bf x}\in \Bbb F_{q^2}^{n} : \begin{array}{l} \langle {\bf x},{\bf y}\rangle_{H}=\sum_{i=0}^{n-1}x_iy_i^q=0 \\ ~ \forall~ {\bf y}= (y_0, \ldots,y_{n-1})\in \mathcal C\end{array}\bigg\},$} $$ respectively. 
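The dual codes just defined can be computed mechanically from a generator matrix: the Euclidean dual of $\mathcal C$ is the null space of $G$ over $\Bbb F_q$. A minimal sketch for a prime field (pure Python; the $[4,2]$ code over $\Bbb F_5$ is an arbitrary illustrative choice, not one of the codes of this paper):

```python
def rref_mod(rows, p):
    """Reduced row echelon form over F_p (p prime); returns (R, pivot cols)."""
    M = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        pr = next((i for i in range(r, len(M)) if M[i][c] % p), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        inv = pow(M[r][c], p - 2, p)          # Fermat inverse, p prime
        M[r] = [x * inv % p for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M[:r], pivots

def dual_basis(G, p):
    """Basis of the Euclidean dual: the null space of the generator matrix."""
    R, piv = rref_mod(G, p)
    n = len(G[0])
    basis = []
    for f in (c for c in range(n) if c not in piv):
        v = [0] * n
        v[f] = 1                               # one dual vector per free column
        for i, c in enumerate(piv):
            v[c] = (-R[i][f]) % p
        basis.append(v)
    return basis

# An arbitrary illustrative [4,2] code over F_5.
G = [[1, 0, 1, 2], [0, 1, 3, 4]]
D = dual_basis(G, 5)
print(D)   # n - k = 2 vectors, each orthogonal to both rows of G
```

As a sanity check, $\dim \mathcal C + \dim \mathcal C^{\bot_E} = n$ and every inner product $\langle g, d\rangle_E$ vanishes modulo $5$.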
A linear code $\mathcal C$ over $\Bbb F_q$ is called a {\em Euclidean Linear Complementary Dual (Euclidean LCD) code} if $\mathcal C \cap \mathcal C^{\bot_ E} = \{0\}$, and it is called a {\em Hermitian Linear Complementary Dual (Hermitian LCD) code} if $\mathcal C \cap \mathcal C^{\bot_ H} = \{0\}$. \begin{lemma}{\rm \label{lem1} \cite[Proposition 2]{CMTQ} If $G$ is a generator matrix of an $[n, k]_q$ linear code $\mathcal C$, then $\mathcal C$ is a Euclidean (resp. Hermitian) LCD code if and only if the $k\times k$ matrix $GG^T$ (resp. $G\overline{G}^T$) is nonsingular.} \end{lemma} \subsection{Constructions of MDS codes} In this subsection, we recall some developments on constructions of MDS codes, which include generalized Reed-Solomon codes, twisted Reed-Solomon codes, and Roth-Lempel codes. Hereafter, we denote by $G_1$ and $G_2$ the generator matrices of a twisted Reed-Solomon code and a Roth-Lempel code, respectively.\\ We begin with the well-known generalized Reed-Solomon codes. \begin{definition}{\rm \label{def1} Let $\alpha_{1},\ldots,\alpha_{n}$ be distinct elements in $\Bbb F_{{q}} \cup \{\infty\}$ and $v_{1},\ldots,v_{n}$ be nonzero elements in $\Bbb F_{{q}}$. For $1\leq k \leq n$, the corresponding {\em generalized Reed-Solomon $(GRS)$ code} over $\Bbb F_{{q}}$ is defined by $$\resizebox{9cm}{!}{ $GR{S_k}({\boldsymbol{ \alpha}},{\bf v}): = \left\{({v_1}f({\alpha_1}), \ldots ,{v_{n }}f({\alpha_{n }})) \mid f(x) \in {\Bbb F_{q}}[x],\phantom{.} \deg(f(x)) < k \right\},$}$$ where $\boldsymbol{ \alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\in (\Bbb F_{{q}}\cup \{\infty\})^{n}$ and ${\bf v}=(v_{1},v_{2},\ldots,v_{n})\in {(\Bbb F_{{q}}^*)}^{n}$, and the quantity $f(\infty)$ is defined as the coefficient of $x^{k-1}$ in the polynomial $f$.
} \end{definition} If $v_i=1$ for every $i=1, \ldots, n$, then $GR{S_k}({\boldsymbol{ \alpha}},{\bf v})$ is called a {\it Reed-Solomon $(RS)$ code.} It is well known that a generalized Reed-Solomon code $GRS_k(\boldsymbol{\alpha},{\bf v})$ is an $ {\left[ {n,k,n-k+1} \right] }$ MDS code. In fact, $GRS_k(\boldsymbol{\alpha},{\bf v})$ has a generator matrix as follows: $$\resizebox{9cm}{!}{ $\left(\begin{array}{cclc} v_1 &v_2 &\ldots &v_n\\ v_1\alpha_1 &v_2\alpha_2 &\ldots & v_n\alpha_{n}\\ \vdots &\vdots &\ddots&\vdots \\ v_1\alpha_1^{k-1} &v_2\alpha_2^{k-1} &\ldots & v_n\alpha^{k-1}_{n}\end{array}\right)=\left(\begin{array}{cclc} 1 &1 &\ldots &1\\ \alpha_1 &\alpha_2 &\ldots &\alpha_{n}\\ \vdots &\vdots &\ddots&\vdots \\ \alpha_1^{k-1} &\alpha_2^{k-1} &\ldots & \alpha^{k-1}_{n}\end{array}\right) \left(\begin{array}{cclc} v_1 &0 &\ldots &0\\ 0 &v_2 &\ldots &0\\ \vdots &\vdots &\ddots&\vdots \\ 0 &0 &\ldots & v_{n}\end{array}\right).$}$$ In 2017, Beelen {\em et al.} \cite{BPR} presented a generalization of Reed-Solomon codes, the so-called {\it twisted Reed-Solomon codes}. \begin{definition} \label{def2} Let $\eta$ be a nonzero element in the finite field $\Bbb F_q$. Let $k,t$ and $h$ be nonnegative integers such that $0\le h<k\le q$, $k<n$, and $0<t\le n-k$. Let $\alpha_{1},\ldots,\alpha_{n}$ be distinct elements in $\Bbb F_{{q}} \cup \{\infty\}$, and we write $\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})$.
Then the corresponding {\em twisted Reed-Solomon code} over $\Bbb F_q$ of length $n$ and dimension $k$ is given by \begin{eqnarray*}\resizebox{9cm}{!}{ $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)= \{(f(\alpha_1), \cdots, f(\alpha_{n})): f(x)=\sum_{i=0}^{k-1}a_ix^i+\eta a_hx^{k-1+t}\in \Bbb F_q[x]\}$}.\end{eqnarray*} \end{definition} In fact, \begin{equation}\label{eq1}\resizebox{8cm}{!}{ $G_1=\left(\begin{array}{cclcc} 1 &1 &\ldots &1\\ \alpha_1 &\alpha_2 &\ldots & \alpha_{n}\\ \vdots &\vdots &\ddots&\vdots \\ \alpha_1^{h-1} &\alpha_2^{h-1} &\ldots & \alpha^{h-1}_{n}\\ \alpha_1^h+\eta \alpha_1^{k-1+t} &\alpha_2^h+\eta \alpha_2^{k-1+t} &\ldots & \alpha_{n}^h+\eta \alpha_n^{k-1+t}\\ \alpha_1^{h+1} &\alpha_2^{h+1} &\ldots & \alpha^{h+1}_{n}\\ \vdots &\vdots &\ddots&\vdots \\ \alpha_1^{k-1} &\alpha_2^{k-1} &\ldots & \alpha^{k-1}_{n}\end{array}\right)$ } \end{equation} is a generator matrix of the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$. Note that, in general, twisted Reed-Solomon codes are not MDS. Beelen {\em et al.} \cite{BPR} obtained some results on twisted Reed-Solomon codes as follows: \begin{lemma}\label{lem2} \cite[Theorem 17]{BPR} {\rm Let $\Bbb F_s \subset \Bbb F_q$ be a proper subfield and $\alpha_{1},\ldots,\alpha_{n}\in \Bbb F_s$. If $\eta \in\Bbb F_q \backslash \Bbb F_s $, then the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is MDS.} \end{lemma} As indicated in \cite[Remark 8]{BPR2}, there is a mistake in the proof of \cite[Theorem 18]{BPR}. Here we present a precise statement of \cite[Theorem 18]{BPR}. To do so, recall from \cite[Theorem 1]{BPR} that any MDS code having a generator matrix of the form $[I_k \mid \mathbf{ A}]$ is a GRS code if and only if all $3\times 3$ minors of $\widetilde{\mathbf{ A}}$ are zero, where $\mathbf{ A}=(A_{ij})$ and $\widetilde{A}_{ij}=A_{ij}^{-1}$.
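The criterion just quoted is easy to run in practice: invert the entries of $\mathbf{A}$ and scan all $3\times 3$ minors of $\widetilde{\mathbf{A}}$. A small sketch over a prime field (pure Python). The rank-$2$ test matrix with entries $x_i-y_j$ is an illustration chosen here, not a matrix derived from a specific code; it mimics the shape that $\widetilde{\mathbf{A}}$ takes when $\mathbf{A}$ is a Cauchy-type matrix, which is the GRS case, so every $3\times 3$ minor vanishes:

```python
from itertools import combinations

def det3_mod(M, p):
    """3x3 determinant mod p via cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

def has_nonzero_3x3_minor(A, p):
    """Scan every 3x3 minor of A over F_p (the Beelen et al. GRS test)."""
    for ri in combinations(range(len(A)), 3):
        for ci in combinations(range(len(A[0])), 3):
            if det3_mod([[A[r][c] for c in ci] for r in ri], p):
                return True
    return False

p = 13
# Rank-2 matrix of the shape (x_i - y_j): all 3x3 minors vanish.
grs_like = [[(x - y) % p for y in (5, 6, 7, 8)] for x in (1, 2, 3, 4)]
print(has_nonzero_3x3_minor(grs_like, p))   # False

# Breaking the rank-2 structure in a single entry exposes a nonzero minor.
perturbed = [row[:] for row in grs_like]
perturbed[0][0] = (perturbed[0][0] + 1) % p
print(has_nonzero_3x3_minor(perturbed, p))  # True
```

The scan is brute force, which is adequate for the code lengths considered in this paper.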
Let $\eta$ be in $\Bbb F_q^*$ such that the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is MDS. Let $[I_k \mid \mathbf{ A}]$ be a generator matrix of $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$. Let $M_i=\frac{p_i(\eta)}{q_i(\eta)}$ for $i=1, 2, \ldots, l$ be all $3\times 3$ minors of $\widetilde{\mathbf{ A}}$, where $\mathbf{ A}=(A_{ij})$ and $\widetilde{A}_{ij}=A_{ij}^{-1}$. Here $p_i, q_i$ are polynomials over $\Bbb F_q$, and $q_i(\eta)\neq 0$. \begin{lemma}\label{lem3} \cite[Theorem 18]{BPR} Let $\alpha_{1},\ldots,\alpha_{n}\in \Bbb F_{{q}} $ and $2<k<n-2$. Let $H=\{\eta\in\Bbb F_q^*: \mbox{the twisted Reed-Solomon code } \mathcal C_k(\boldsymbol{\alpha}, t,h,\eta) \mbox{ is MDS} \}.$ Assume that a certain $3\times3$ minor $M_i=\frac{p_i(\eta)}{q_i(\eta)}$ of $\widetilde{\mathbf{ A}}$ defined above is nonzero for some $\eta \in H$. Then $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ for such $\eta \in H$ is a non-Reed-Solomon code. \end{lemma} \begin{remark} (1) In Lemma \ref{lem3}, if $p_i(\eta)=0$ for every $3\times 3$ minor of $\widetilde{\mathbf{ A}}$, then $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ for $\eta\in H$ is monomially equivalent to an RS code. By \cite[Corollary 2]{BPR}, an $[n,k,n-k+1]$ MDS code with $k<3$ or $n-k<3$ is monomially equivalent to an RS code. On the other hand, it was proved in \cite[Corollary 9.2]{B} that a linear MDS code over $\Bbb F_{p^n}$ with parameters $[p^n+1, k(\le p), p^n-k+2]$ is an RS code. (2) To find a twisted MDS code of non-Reed-Solomon type, we need to check the minor assumption of Lemma \ref{lem3}. $\blacksquare$ \end{remark} Combining Lemma \ref{lem2} and Lemma \ref{lem3}, we have the following. \begin{lemma} \cite[Corollary 20]{BPR} Let $\Bbb F_s \subset \Bbb F_q$ and $\alpha_{1},\ldots,\alpha_{n}\in \Bbb F_s$. Let $2<k<n-2$ and $n\le s$. Assume that the minor condition for $\eta\in\Bbb F_q \backslash \Bbb F_s$ of Lemma \ref{lem3} holds.
Then $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is MDS but not monomially equivalent to an RS code. \end{lemma} \begin{remark} In the next section, to obtain LCD MDS codes, we first study the LCD property of twisted Reed-Solomon codes, and then we use a suitable vector $\boldsymbol{\alpha}$. Note that Lemma 2.6 shows the existence of MDS twisted Reed-Solomon codes, and, in general, it is hard to find an element $\eta\in \Bbb F_q$ such that $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is a non-Reed-Solomon LCD MDS code, even for small lengths. \end{remark} Roth and Lempel \cite{RL} found a new construction of MDS codes of non-Reed-Solomon type. For a given element $\delta\in \Bbb F_q$, a set $S\subseteq \Bbb F_q$ of size $m$ is called an $(m,t, \delta)$-set in $\Bbb F_q$ if no $t$ elements of $S$ sum to $\delta$. We note that if $S$ belongs to some subfield $\Bbb F_s$ of $\Bbb F_q$, then the set $S$ is an $(m,t, \delta)$-set in $\Bbb F_q$ for each $\delta\in \Bbb F_q \backslash \Bbb F_s$. \begin{definition} {\rm \cite{RL} Let $n$ and $k$ be two integers such that $k\ge 3$ and $k+1\le n\le q$. Let $\alpha_1, \ldots, \alpha_{n}$ be distinct elements of $\Bbb F_q$, $\delta\in \Bbb F_q$, and $\boldsymbol{\alpha}=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})$. Then an $[n+2,k]$ {\em Roth-Lempel code} $RL({\boldsymbol{ \alpha}},k,n+2)$ over $\Bbb F_q$ is generated by the matrix \begin{equation}\label{eq2} G_2= \left( \begin{array}{cccccc} 1 & 1& \ldots & 1 & 0 &0\\ \alpha_1& \alpha_2& \ldots & \alpha_{n} & 0 &0\\ \vdots& \vdots& \ldots & \vdots & \vdots &\vdots\\ \alpha_1^{k-2}& \alpha_2^{k-2}& \ldots & \alpha_{n}^{k-2} & 0 &1\\ \alpha_1^{k-1}& \alpha_2^{k-1}& \ldots & \alpha_{n}^{k-1} & 1 &\delta\\ \end{array} \right). \end{equation} } \end{definition} \begin{lemma}{\rm \cite{RL} An $[n+2,k]$ Roth-Lempel code with $k\ge 3$ and $k+1\le n\le q$ over $\Bbb F_q$ is a non-Reed-Solomon code.
Moreover, the Roth-Lempel code is MDS if and only if the set $\{\alpha_1, \ldots, \alpha_{n}\}$ is an $(n, k-1, \delta)$-set in $\Bbb F_q$. } \end{lemma} \begin{remark} \label{lem6} {\rm In the next section, in order to find LCD MDS codes, we first study the LCD property of Roth-Lempel codes. Then we use some set $\{\alpha_1, \ldots, \alpha_{n}\}$ which is contained in some subfield of $\Bbb F_q$. } \end{remark} Roth and Lempel \cite{RL} proved that the generator matrix $G_2$ in (2.2) cannot generate a GRS code by considering a certain form of codewords. That is, $G_2P$ cannot generate a GRS code for any permutation matrix $P$. By Definition 2.2, GRS codes are naturally monomially equivalent to RS codes. We claim that Roth-Lempel codes are not monomially equivalent to RS codes. Suppose that an $RL({\boldsymbol{ \alpha}},k,n)$ code is monomially equivalent to an RS code. This means that there exists a monomial matrix $M$ such that $G_2M=G'$, where $G'$ is a generator matrix of an RS code. Recall that a monomial matrix $M$ can be written either in the form $DP$ or the form $PD'$, where $D$ and $D'$ are diagonal matrices and $P$ is a permutation matrix. (1) If $M=DP$, then $G_2M=G_2DP=G'$. Hence, $G_2=G'P^{-1}D^{-1}$. Note that $G'P^{-1}$ generates an RS code (with permuted evaluation points); that is, $G_2$ generates a GRS code, which is a contradiction. (2) If $M=PD'$, then $G_2M=G_2PD'=G'$. Hence, $G_2=G'D'^{-1}P^{-1}$. We also see that $G_2$ generates a GRS code; this leads to a contradiction. In conclusion, Roth-Lempel codes are not monomially equivalent to RS codes. Moreover, throughout this paper, if a code is not monomially equivalent to an RS code, then we call it a code of {\em non-Reed-Solomon type} or a {\em non-Reed-Solomon code}. \section{New LCD MDS codes} In this section, we use the constructions in Section 2 to obtain new Euclidean and Hermitian LCD MDS codes. Subsections 3.1 and 3.2 deal with the Euclidean and Hermitian LCD MDS codes, respectively.
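The LCD criterion of Lemma \ref{lem1} also translates directly into a determinant computation, which is convenient for small experiments. A minimal sketch for a prime field (pure Python; the $2\times 4$ matrix over $\Bbb F_5$ is the one that reappears in the $q=5$ remark of Subsection 3.1, and the single self-orthogonal row is an illustrative counterexample):

```python
def det_mod(M, p):
    """Determinant of a square matrix over F_p (p prime), by elimination."""
    M = [row[:] for row in M]
    n, det = len(M), 1
    for c in range(n):
        pr = next((r for r in range(c, n) if M[r][c] % p), None)
        if pr is None:
            return 0                      # singular: no pivot in this column
        if pr != c:
            M[c], M[pr] = M[pr], M[c]
            det = -det                    # row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            M[r] = [(x - f * y) % p for x, y in zip(M[r], M[c])]
    return det % p

def is_euclidean_lcd(G, p):
    """Lemma criterion: C is Euclidean LCD iff det(G G^T) != 0 over F_p."""
    k = len(G)
    GGt = [[sum(a * b for a, b in zip(G[i], G[j])) % p for j in range(k)]
           for i in range(k)]
    return det_mod(GGt, p) != 0

print(is_euclidean_lcd([[1, 1, 1, 1], [2, 1, 2, 0]], 5))  # True
# A self-orthogonal generator row, e.g. (1, 2) over F_5, gives G G^T = (0).
print(is_euclidean_lcd([[1, 2]], 5))                      # False
```

For the Hermitian case over $\Bbb F_{q^2}$ one would instead form $G\overline{G}^T$ by raising the entries of the second factor to the $q$th power.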
\subsection{Euclidean LCD MDS codes } Let $\gamma$ be a primitive element of $\Bbb F_q$ and $k\mid (q-1)$. Then $\gamma^{\frac{q-1}{k}}$ generates a subgroup of $\Bbb F_q^*$ of order $k$. Let $\alpha_i=\gamma^{\frac{q-1}{k}i}$ for $1\le i\le k$. One can easily check that \begin{equation}\label{eq3} \theta_f=\alpha_1^f+\cdots+\alpha_{k}^f=\left\{ \begin{array}{ll} k & \mbox{if}\ f\equiv0\pmod {k},\\ 0 & \mbox{otherwise}. \end{array} \right. \end{equation} Let \begin{eqnarray}\label{eq4} \resizebox{8.1cm}{!}{ $ A_{\beta}=\left( \begin{array}{cccccc} 1 & 1& \ldots & 1 &1\\ \beta\alpha_1& \beta\alpha_2& \ldots & \beta\alpha_{k-1} & \beta\alpha_k \\ \vdots& \vdots& \ldots & \vdots &\vdots\\ (\beta\alpha_1)^{k-1}& (\beta\alpha_2)^{k-1} &\cdots & (\beta\alpha_{k-1})^{k-1} &(\beta\alpha_k)^{k-1} \\ \end{array} \right).$} \end{eqnarray} By Equation (\ref{eq3}), we have \begin{eqnarray} \label{eq5}A_{\beta}A_{\beta}^T=\left( \begin{array}{ccccccc} k &0&0& \ldots & 0 &0\\ 0& 0&0& \ldots & 0 & \beta^k k \\ \vdots&\vdots& \vdots& \ldots & \vdots &\vdots\\ 0&0&\beta^k k&\cdots & 0&0 \\ 0&\beta^k k&0&\cdots & 0&0 \\ \end{array} \right). \end{eqnarray} Let $C_{\beta}=A_{\beta}+B_{\beta}$, where $B_{\beta}$ is given by \begin{eqnarray}\label{eq6}\resizebox{8.5cm}{!}{$ \left( \begin{array}{cccccc} 0 & 0& \ldots & 0 &0\\ \vdots& \vdots& \ldots & \vdots &\vdots\\ \eta(\beta\alpha_1)^{k-1+t}& \eta(\beta\alpha_2)^{k-1+t}& \ldots & \eta(\beta\alpha_{k-1})^{k-1+t} &\eta(\beta\alpha_k)^{k-1+t} \\ \vdots& \vdots& \ldots & \vdots &\vdots\\ 0 & 0& \ldots & 0 &0\\ \end{array} \right)\begin{array}{l} \\ \\\leftarrow(h+1)th\\ \\ \\ \end{array}$}.\end{eqnarray} The following lemma plays an important role in proving our main results of this Subsection 3.1. First, we should find some conditions under which a twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_q$ becomes a Euclidean LCD code. \begin{lemma}\label{lem7}{\rm Let $q$ be a power of an odd prime. 
If $k$ is a positive integer with $k\mid (q-1)$, $k<\frac{q-1}{2}$ and $h>0$, then there exists a $[2k,k]_q$ Euclidean LCD twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_q$ for $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k,\gamma\alpha_1,\ldots,\gamma\alpha_k)$, where $\gamma$ is a primitive element of $\Bbb F_q$ and $\alpha_i=\gamma^{\frac{q-1}{k}i}$ for $1\le i\le k$. } \end{lemma} {\em Proof:} By Definition \ref{def2}, to make sure that $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is a twisted Reed-Solomon code, we need $k\neq q-1$ and that the entries of $\boldsymbol{\alpha}$ be all distinct, both of which clearly hold. From Equation (\ref{eq1}), we recall that $G_1$ is a generator matrix of the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_q$. It follows from Lemma \ref{lem1} that $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_q$ in Definition \ref{def2} is Euclidean LCD if and only if $G_1G_1^T$ is nonsingular. Let $\theta_j=\sum_{i=1}^{n}\alpha_i^j$ and $l=k-1+t$. Then we compute $G_1G_1^T$ in Equation (\ref{eq7}) at the top of the next page.
\newcounter{mytempeqncnt} \begin{figure*}[!h] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{6} \begin{equation} \label{eq7} \resizebox{16cm}{!}{$G_1G_1^T = \left( \begin{array}{cccccccccccccccccccccccccccc} n & \theta_{1}& \ldots &\theta_{h-1}&\theta_{h}+\eta\theta_{l} & \theta_{h+1} &\ldots& \theta_{k-2} &\theta_{k-1}\\ \theta_{1} & \theta_{2}& \ldots &\theta_{h}&\theta_{h+1}+\eta\theta_{l+1} & \theta_{h+2} &\ldots& \theta_{k-1} &\theta_{k}\\ \vdots & \vdots& \ldots &\vdots&\vdots & \vdots& \ldots &\vdots&\vdots\\ \theta_{h-1} & \theta_h& \ldots &\theta_{2h-2}&\theta_{2h-1}+\eta\theta_{l+h-1}& \theta_{2h} &\ldots& \theta_{k+h-3} &\theta_{k+h-2}\\ \theta_{h} +\eta\theta_{l}& \theta_{h+1} +\eta\theta_{l+1}& \ldots &\theta_{2h-1} +\eta\theta_{l+h-1}&\theta_{2h}+2\eta\theta_{l+h}+\eta^2\theta_{2l}& \theta_{2h+1}+\eta\theta_{l+h+1}& \ldots& \theta_{h+k-2}+\eta\theta_{l+k-2}&\theta_{k+h-1}+\eta\theta_{l+k-1}\\ \theta_{h+1} & \theta_{h+2}& \ldots &\theta_{2h}&\theta_{2h+1}+\eta\theta_{l+h+1} & \theta_{2h+2} &\ldots& \theta_{k+h-1} &\theta_{k+h}\\ \vdots & \vdots& \ldots &\vdots&\vdots& \vdots& \ldots &\vdots&\vdots\\ \theta_{k-1} & \theta_{k}& \ldots &\theta_{k+h-2}&\theta_{k+h-1}+\eta\theta_{l+k-1}& \theta_{k+h} &\ldots&\theta_{2k-3} &\theta_{2k-2}\\ \end{array}\right)$}\\ \end{equation} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{4pt} \end{figure*} \setcounter{equation}{2} Let $C_{\beta}=A_{\beta}+B_{\beta}$, where $A_{\beta}, B_{\beta}$ are given in Equation (\ref{eq4}) and Equation (\ref{eq6}), respectively. By Equation (\ref{eq5}), we have $C_{\beta}C_{\beta}^T$ in Equation (\ref{eq8}) at the top of the next page.
\begin{figure*}[!h] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{7} \begin{equation} \label{eq8} \resizebox{16cm}{!}{$ C_{\beta}C_{\beta}^T = \begin{pmatrix} k &0& \ldots & 0 &0\\ 0& 0& \ldots & 0 & \beta^k k \\ \vdots& \vdots& \ldots & \vdots &\vdots\\ 0&\beta^k k&\cdots & 0&0 \\ \end{pmatrix} + \begin{pmatrix} 0 & 0& \ldots &0& \eta\beta^l\theta_{l}&0 &\ldots&0\\ 0 & 0& \ldots &0& \eta\beta^{l+1}\theta_{l+1}&0 &\ldots&0\\ \vdots & \vdots& \ldots &\vdots& \vdots & \vdots& \ldots &\vdots\\ 0 & 0& \ldots &0& \eta\beta^{l+h-1}\theta_{l+h-1}&0 &\ldots& 0\\ \eta\beta^l\theta_{l}& \eta\beta^{l+1}\theta_{l+1}& \ldots &\eta\beta^{l+h-1}\theta_{l+h-1}& 2\eta\beta^{l+h}\theta_{l+h}+\eta^2\beta^{2l}\theta_{2l}&\eta\beta^{l+h+1}\theta_{l+h+1}&\ldots&\eta\beta^{l+k-1}\theta_{l+k-1}\\ 0 &0& \ldots &0& \eta\beta^{l+h+1}\theta_{l+h+1}&0 &\ldots&0\\ \vdots & \vdots& \ldots &\vdots & \vdots & \vdots& \ldots &\vdots\\ 0 & 0& \ldots &0 & \eta\beta^{l+k-1}\theta_{l+k-1}&0 &\ldots&0 \end{pmatrix}$} \end{equation} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{4pt} \end{figure*} \setcounter{equation}{3} Since every $\theta_t$ for $l\leq t\leq l+k-1$ is zero except exactly one $\theta_{t'}$, we can rewrite Equation (\ref{eq8}) as \begin{eqnarray*}&&C_{\beta}C_{\beta}^T=\left( \begin{array}{cccccc} k &0& \ldots & 0 &0\\ 0& 0& \ldots & 0 & \beta^k k \\ \vdots& \vdots& \ldots & \vdots &\vdots\\ 0&\beta^k k&\cdots & 0&0 \\ \end{array} \right)\\ &+&\left( \begin{array}{cccccccc} 0 &\ldots&0& \ldots & 0 &\ldots&0\\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & *_{\beta} & \ldots&0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &*_{\beta}& \ldots & \Delta_{\beta} &\ldots& 0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots&0&\cdots & 0&\ldots&0 \\ \end{array} \right),\end{eqnarray*} where $\ast_{\beta}$ and $\Delta_{\beta}$ are elements of $\Bbb F_q$; the $\ast_{\beta}$ and $\Delta_{\beta}$ are respectively entries located in
the $(i+1,h+1)$th and $(h+1,i+1)$th positions (for the unique index $i$ with $\theta_{l+i}\neq 0$) and the $(h+1,h+1)$th position, and all other entries are zero. Let $G_1=[C_1: C_{\gamma}]$. Then \begin{eqnarray*}&&G_1G_1^T=C_1C_1^T+C_{\gamma}C_{\gamma}^T\\ &=&\left( \begin{array}{cccccc} 2k &0& \ldots & 0 &0\\ 0& 0& \ldots & 0 & (1+\gamma^k) k \\ \vdots& \vdots& \ldots & \vdots &\vdots\\ 0&(1+\gamma^k) k&\cdots & 0&0 \\ \end{array} \right)\\ &+&\left( \begin{array}{cccccccc} 0 &\ldots&0& \ldots & 0 &\ldots&0\\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & *_{1} + *_{\gamma}& \ldots&0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &*_{1} + *_{\gamma}& \ldots & \Delta_1+\Delta_{\gamma} &\ldots& 0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots&0&\cdots & 0&\ldots&0 \\ \end{array} \right).\end{eqnarray*} By the assumption on $k$ and $h>0$, we have $(\gamma ^k+1)k\neq0$. Hence, we can eliminate the entry $*_{1} + *_{\gamma}$ by performing the same elementary row and column operations simultaneously. Namely, we can find an elementary matrix $P$ such that \\ \begin{eqnarray*}&&PG_1G_1^TP^T = \left( \begin{array}{cccccc} 2k &0& \ldots & 0 &0\\ 0& 0& \ldots & 0 & (1+\gamma^k) k \\ \vdots& \vdots& \ldots & \vdots &\vdots\\ 0&(1+\gamma^k) k&\cdots & 0&0 \\ \end{array} \right)\\ &+&\left( \begin{array}{cccccccc} 0 &\ldots&0& \ldots & 0 &\ldots&0\\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & 0& \ldots&0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & \Delta_1+\Delta_{\gamma} &\ldots& 0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots&0&\cdots & 0&\ldots&0 \\ \end{array} \right).\end{eqnarray*} Since det$(P)=1$ and the element $ \Delta_1+\Delta_{\gamma}$ is the entry in the $(h+1)$th row and $(h+1)$th column, we have det$(G_1G_1^T)=2(1+\gamma^k)^{k-1}k^k\neq 0$ by expanding the determinant. Since the matrix $G_1G_1^T$ is nonsingular, the code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is a Euclidean LCD code. This completes the proof.
$\blacksquare$ \begin{remark} {\rm For $G_1=[C_1: C_{\gamma}]$, it is not guaranteed that $G_1G_1^T$ is nonsingular when $k=\frac{q-1}{2}$; if $\boldsymbol{\alpha}$ takes all non-zero elements of $\Bbb F_q$ and $1+\gamma^k=0$, then $G_1G_1^T$ is singular. However, if $q=5$ then we have $k=\frac{q-1}{2}=2$ and $G_1=\left( \begin{array}{cccccc} 1&1 & 1&1\\ 2& 1& 2 & 0\\ \end{array} \right)$. Then it is easy to verify that $\mathcal C_2(\boldsymbol{\alpha}, 1,1,1)$ is a Euclidean LCD code for $\boldsymbol{\alpha}=(1,2,-2,-1)$. } \end{remark} \begin{example} {\rm Let $q=3^4=81$, $k=4$, and $\gamma$ be a primitive element of $\Bbb F_{q}$. Consider a twisted Reed-Solomon code $\mathcal C_4(\boldsymbol{\alpha}, 1,3,\eta)$, where $\boldsymbol{\alpha}=(1,\gamma^{20}, \gamma^{40}, \gamma^{60}, \gamma,\gamma\gamma^{20}, \gamma\gamma^{40}, \gamma\gamma^{60})$ and $\eta=\gamma^i\in \Bbb F_{81}$. Then its generator matrix $G_1$ is given at the top of the next page. \begin{figure*}[!h] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{4} \begin{equation*} \resizebox{16cm}{!}{$ G_1=\left(\begin{array}{cccccccccc} 1 &1 &1 &1& 1 &1 &1 &1\\ 1&\gamma^{20}& \gamma^{40}&\gamma^{60} & \gamma &\gamma\gamma^{20}& \gamma\gamma^{40}& \gamma\gamma^{60}\\ 1&(\gamma^{20})^2& (\gamma^{40})^2&(\gamma^{60})^2 &(\gamma)^2&(\gamma\gamma^{20})^2& (\gamma\gamma^{40})^2& (\gamma\gamma^{60})^2\\ 1+\gamma^i&(\gamma^{20})^3+\gamma^i(\gamma^{20})^4& (\gamma^{40})^3+\gamma^i(\gamma^{40})^4&(\gamma^{60})^3+\gamma^i(\gamma^{60})^4 & (\gamma)^3+\gamma^i(\gamma)^4&(\gamma\gamma^{20})^3+\gamma^i(\gamma\gamma^{20})^4& (\gamma\gamma^{40})^3+\gamma^i(\gamma\gamma^{40})^4& (\gamma\gamma^{60})^3+\gamma^i(\gamma\gamma^{60})^4 \end{array}\right)$} \end{equation*} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{4pt} \end{figure*} \setcounter{equation}{3} By Lemma \ref{lem7}, $\mathcal C_4(\boldsymbol{\alpha}, 1,3,\gamma^i)$ is Euclidean LCD for all $i$.
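Before the Magma computations, the LCD criterion itself is easy to sanity-check numerically. The following sketch is ours and not part of the paper: it uses a small prime field (so that plain modular arithmetic suffices, unlike $\Bbb F_{81}$ above), namely $q=13$, $k=3$, $t=1$, $h=1$, $\eta=1$, and verifies that $\det(G_1G_1^T)\neq0$ over $\Bbb F_{13}$.

```python
# Sanity check (ours) of the LCD criterion det(G1 * G1^T) != 0 over a
# small PRIME field, where plain modular arithmetic suffices.
# The parameters satisfy the hypotheses: k | q-1, 2 < k < (q-1)/2, h > 0.
q, k, t, h, eta = 13, 3, 1, 1, 1
gamma = 2                                 # primitive element of F_13
alpha = [pow(gamma, (q - 1) // k * i, q) for i in range(1, k + 1)]
points = alpha + [gamma * a % q for a in alpha]   # n = 2k evaluation points
l = k - 1 + t                             # degree of the twist monomial

def row(i):
    """Evaluations of the i-th basis polynomial at all points."""
    if i == h:                            # twisted element x^h + eta*x^l
        return [(pow(x, h, q) + eta * pow(x, l, q)) % q for x in points]
    return [pow(x, i, q) for x in points]

G = [row(i) for i in range(k)]            # generator matrix of C_k
M = [[sum(a * b for a, b in zip(G[i], G[j])) % q for j in range(k)]
     for i in range(k)]                   # M = G1 * G1^T over F_13

def det_mod(A, p):
    """Determinant mod p via Gaussian elimination (p prime)."""
    A = [r[:] for r in A]
    n, d = len(A), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        inv = pow(A[c][c], -1, p)         # modular inverse (Python 3.8+)
        d = d * A[c][c] % p
        for r in range(c + 1, n):
            f = A[r][c] * inv % p
            A[r] = [(A[r][j] - f * A[c][j]) % p for j in range(n)]
    return d % p

print(det_mod(M, q))  # prints 7, nonzero, so this code is Euclidean LCD
```

Replacing $\eta=1$ by other elements of $\Bbb F_{13}^{*}$, or $q=13$ by another prime with $k\mid q-1$, gives further quick checks of the same criterion.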
By Magma, it follows that the codes $\mathcal C_4(\boldsymbol{\alpha}, 1,3,\eta)$ are MDS with parameters $[8,4]_{81}$ if and only if $\eta$ belongs to $\{\gamma^j: j=0,1,5,6,7,11,15,16,17,19,20,21,25,26,27,31,35,36,37,\\39,40,41,45,46,47,51, 55,56,57,59,60,61,65,66,67,71,\\75,76,77,79\}$. By Magma, the code $\mathcal C_4(\boldsymbol{\alpha}, 1,3,1)$ has a generator matrix of the form $[I_4 \mid \mathbf{ A}]$, where \begin{eqnarray*} \mathbf{ A}=\left( \begin{array}{cccccc} \gamma^7 & \gamma^{32}&\gamma^{56} & \gamma^{78} \\ \gamma^{31} & \gamma^{21}&\gamma^{64} & \gamma^{44} \\ \gamma^{12}& \gamma^{9}&\gamma^{74} & \gamma^{77} \\ \gamma^{60} & \gamma^{49}&\gamma^{52} & \gamma^{79} \\ \end{array} \right). \end{eqnarray*} Then it is easy to check that the $3\times 3$ minor of the first three rows and columns of $\widetilde{\mathbf{ A}}$ is equal to $\gamma^{26}$, which is confirmed by Magma. By Lemma \ref{lem3}, $\mathcal C_4(\boldsymbol{\alpha}, 1,3,1)$ is an $[8,4]_{81}$ Euclidean LCD MDS non-Reed-Solomon code. $\blacksquare$} \end{example} An effective method for constructing twisted Reed-Solomon codes with the MDS property is to use the lifting of the finite field (refer to \cite{BPR}). We note that the Euclidean LCD property of a given code is preserved under the lifting of the finite field. Hence, we obtain the following theorem. \begin{theorem} Let $q$ be a power of an odd prime and $\Bbb F_s \subset \Bbb F_q$. Let $k$ be a positive integer with $k\mid (s-1)$ and $2<k< (s-1)/2$. Let $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k,\gamma\alpha_1,\ldots,\gamma\alpha_k)$, where $\gamma$ is a primitive element of $\Bbb F_s$ and $\alpha_i=\gamma^{\frac{s-1}{k}i}$ for $1\le i\le k$. Assume that the minor condition for $\eta\in\Bbb F_q \backslash \Bbb F_s$ of Lemma \ref{lem3} holds. Then $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is a $[2k, k]_q$ Euclidean LCD MDS non-Reed-Solomon code.
\end{theorem} {\em Proof:} As in the proof of Lemma \ref{lem7}, the conditions imply that the code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is Euclidean LCD. By Lemma \ref{lem2} and $\eta \in\Bbb F_q \backslash \Bbb F_s $, the code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is an MDS twisted Reed-Solomon code. Then the result follows from Lemma \ref{lem3}. $\blacksquare$ In the following, we consider the construction of Roth and Lempel \cite{RL}. \begin{lemma} \label{lem8} {\rm Let $q$ be a power of an odd prime, and let $k\mid (q-1)$ and $k\geq 3$. Let $\gamma$ be a primitive element of $\Bbb F_q$ and $\alpha_i=\gamma^{\frac{q-1}{k}i}$, $1\le i\le k$. Then there exists a Euclidean LCD Roth-Lempel code $RL({\boldsymbol{ \alpha}},k,n)$ with one of the following parameters: (1) $[k+2,k]_q$ if $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k)$; (2) $[k+3,k]_q$ if $\gcd(k+1,q)=1$ and $\boldsymbol{\alpha}=(0,\alpha_1,\ldots,\alpha_k)$; (3) $[2k+2,k]_q$ if $k< \frac{q-1}{2}$ and $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k,\gamma\alpha_1,\ldots,\gamma\alpha_k)$. } \end{lemma} {\em Proof:} By Lemma \ref{lem1}, the code $RL({\boldsymbol{ \alpha}},k,n)$ over $\Bbb F_q$ is Euclidean LCD if and only if $G_2G_2^T$ is nonsingular. Let $$D=\left( \begin{array}{cccccc} 0 &0\\ 0 &0\\ \vdots &\vdots\\ 0 &0\\ 0 &1\\ 1 &\delta\\ \end{array} \right).$$ (1) Let $G_2=[A_{1}: D]$, where $A_{1}$ is given in Equation (\ref{eq4}). Then \begin{eqnarray*}G_2G_2^T=A_{1}A_{1}^T+DD^T=\left( \begin{array}{ccccccc} k & 0& 0&\ldots & 0 &0\\ 0&0&0& \ldots & 0 &k\\ \vdots&\vdots& \vdots& \ldots & \vdots &\vdots\\ 0 & 0&k&\cdots & 1 &\delta\\ 0& k& 0&\ldots &\delta &1+\delta^2\\ \end{array} \right). \end{eqnarray*} Then the matrix $G_2G_2^T$ is nonsingular; so, $RL({\boldsymbol{ \alpha}},k,k+2)$ is a Euclidean LCD code. (2) Let $G_2=[e_1:A_{1}: D]$, where $e_1=(1,0,\ldots, 0)^T$.
Then \begin{eqnarray*}&&G_2G_2^T=e_1e_1^T+A_{1}A_{1}^T+DD^T\\&=&\left( \begin{array}{ccccccc} k+1 & 0& 0&\ldots & 0 &0\\ 0&0&0& \ldots & 0 &k\\ \vdots&\vdots& \vdots& \ldots & \vdots &\vdots\\ 0 & 0&k&\cdots & 1 &\delta\\ 0& k& 0&\ldots &\delta &1+\delta^2\\ \end{array} \right). \end{eqnarray*} Then the matrix $G_2G_2^T$ is nonsingular; hence, $RL({\boldsymbol{ \alpha}},k,k+3)$ is a Euclidean LCD code. (3) Note that $k\neq \frac{q-1}{2}$. Let $G_2=[A_{1}: A_{\gamma}: D]$. Then \begin{eqnarray*}&&G_2G_2^T=A_{1}A_{1}^T+A_{\gamma}A_{\gamma}^T+DD^T\\ &=&\left( \begin{array}{ccccccc} 2k & 0& 0&\ldots & 0 &0\\ 0&0&0& \ldots & 0 &(1+\gamma^k) k\\ \vdots&\vdots& \vdots& \ldots & \vdots &\vdots\\ 0 & 0&(1+\gamma^k) k&\cdots & 1 &\delta\\ 0& (1+\gamma^k) k& 0&\ldots &\delta &1+\delta^2\\ \end{array} \right). \end{eqnarray*} By the proof of Lemma \ref{lem7}, the matrix $G_2G_2^T$ is nonsingular; therefore, $RL({\boldsymbol{ \alpha}},k,2k+2)$ is a Euclidean LCD code. This completes the proof. $\blacksquare$ \begin{theorem} {\rm Let $q$ be a power of an odd prime and $\Bbb F_s \subset \Bbb F_q$. Let $k\geq3$ be an integer with $k\mid (s-1)$. (1) If $\gcd(k+1,s)=1$, then there exists a $[k+3, k]_q$ Euclidean LCD MDS non-Reed-Solomon code. (2) If $k< \frac{s-1}{2}$, then there exists a $[2k+2, k]_q$ Euclidean LCD MDS non-Reed-Solomon code.} \end{theorem} {\em Proof:} By the proof of Lemma \ref{lem8}, we can construct a Euclidean LCD Roth-Lempel code over $\Bbb F_q$. By Lemma 2.12, a Roth-Lempel code is a non-Reed-Solomon code, and it is an MDS code if and only if the set $S=\{\alpha_1, \ldots, \alpha_n\}$ forms an $(n,k-1,\delta)$-set in $\Bbb F_q$; that is, there exists an element $\delta\in \Bbb F_q$ such that no $k-1$ elements of $S$ sum to $\delta$. Note that we can require that $\alpha_i\in \Bbb F_s$ for all $i$.
Hence, $S \subseteq \Bbb F_s$, and we can find some $\delta\in \Bbb F_q \backslash \Bbb F_s$ such that $S$ is an $(n,k-1,\delta)$-set in $\Bbb F_q$. As in Lemma \ref{lem8}, the conditions allow us to find a vector $\boldsymbol{\alpha}\in \Bbb F_s^n$ such that the code $RL({\boldsymbol{ \alpha}},k,n)$ is Euclidean LCD, and the result follows. $\blacksquare$ The following are examples of Theorem 3.6. \begin{example}{\rm (1) Let $q=3^2=9$ and $k=4$. Let $\gamma$ be a primitive element of $\Bbb F_{9}$. We choose $\boldsymbol{\alpha}=(0,1, \gamma^{2}, \gamma^{4},\gamma^{6} )$ and $\delta=\gamma^i$ for some integer $i$ with $0\le i\le 7$. Then the generator matrix of the code $\mathcal{C}_2$ is given as follows: \begin{eqnarray*} \left(\begin{array}{ccccccccc} 1 &1 &1 &1&1&0&0\\ 0 &1 &\gamma^{2} &\gamma^{4}&\gamma^{6}&0&0\\ 0 &1 &(\gamma^{2})^2 &(\gamma^{4})^2&(\gamma^{6})^2&0&1\\ 0 &1 &(\gamma^{2})^3 &(\gamma^{4})^3&(\gamma^{6})^3&1&\gamma^i\\ \end{array}\right) \end{eqnarray*} By Magma, there is no $i$ such that $\mathcal{C}_2$ is a Euclidean LCD MDS non-Reed-Solomon Roth-Lempel code with parameters $[7,4]_9$. However, we can make these LCD codes have the MDS property by lifting the finite field $\Bbb F_{9}$, as shown in the following. (2) Let $q=3^4$ and $k=4$. Let $w$ be a primitive element of $\Bbb F_{81}$ and $\gamma$ a primitive element of $\Bbb F_{9}$ with $\gamma=w^{10}$. Choose $\boldsymbol{\alpha}=(0,1, \gamma^{2}, \gamma^{4},\gamma^{6} )$ and $\delta=w^i\in \Bbb F_{81}$. Then the generator matrix of $RL({\boldsymbol{ \alpha}},k,n)$ is given as follows: \begin{eqnarray*} \left(\begin{array}{ccccccccc} 1 &1 &1 &1&1&0&0\\ 0 &1 &\gamma^{2} &\gamma^{4}&\gamma^{6}&0&0\\ 0 &1 &(\gamma^{2})^2 &(\gamma^{4})^2&(\gamma^{6})^2&0&1\\ 0 &1 &(\gamma^{2})^3 &(\gamma^{4})^3&(\gamma^{6})^3&1&w^i\\ \end{array}\right).
\end{eqnarray*} By Theorem 3.6 $(1)$, $RL({\boldsymbol{ \alpha}},k,k+3)$ is a Euclidean LCD MDS non-Reed-Solomon code with parameters $[7,4]_{81}$ when $i$ is not divisible by 10. } \end{example} \begin{remark} {\rm We emphasize that any Euclidean LCD MDS code of non-Reed-Solomon type constructed in Theorems 3.4 and 3.6 is not monomially equivalent to any Euclidean LCD code constructed by the method of Carlet et al.~\cite{CMTQP}. Now, we briefly justify why they are not monomially equivalent in the case of Theorem 3.4; the case of Theorem 3.6 can be justified similarly. According to the result of Carlet et al.~\cite[Theorem 5.1]{CMTQP}, assume that there is a $[2k,k]$ linear MDS code over $\mathbb{F}_q$ with generator matrix $[I_k~ A]$ satisfying the conditions of~\cite[Theorem 5.1]{CMTQP}. Then there is a monomial matrix $M$ such that $[I_k~ A]M$ generates an LCD MDS code $\mathcal C$ by~\cite[Theorem 5.1]{CMTQP}. Now, if we suppose that the code $\mathcal C$ is monomially equivalent to our LCD MDS code $\mathcal{C}_k(\boldsymbol{\alpha},t,h,\eta)$ of non-Reed-Solomon type with generator matrix $G_1$, then there should exist a monomial matrix $M'$ such that $G_1=[I_k~ A]MM'$ or $G_1(M')^{-1}M^{-1}=[I_k~A]$. Recall that a monomial matrix is a square matrix which has exactly one nonzero entry in each row and each column. It follows that $MM'=PDP'$, where $P$ and $P'$ are permutation matrices and $D$ is a diagonal matrix. Therefore, every entry of the first row of $G_1(M')^{-1}M^{-1}$ is nonzero, whereas the product of all the entries of the first row of $[I_k~A]$ is zero; this is impossible. } \end{remark} \subsection{Hermitian LCD MDS codes} In this subsection, we consider Hermitian LCD MDS codes over $\Bbb F_{q^2}$. Let $\gamma$ be a primitive element of $\Bbb F_{q^2}$ and $k\mid (q^2-1)$. Then $\gamma^{\frac{q^2-1}{k}}$ generates a subgroup of order $k$ in $\Bbb F_{q^2}^*$.
Let $\alpha_i=\gamma^{\frac{q^2-1}{k}i}$ with $1\le i\le k$. For $i,j\in \{0,\ldots, k-1\}$, assume that $a_{\beta}(i,j)$ is the entry in the $(i+1)$-th row and $(j+1)$-th column of the matrix $A_{\beta}\overline{A}_{\beta}^T$, where $A_{\beta}$ is given in Equation (\ref{eq4}). Then \begin{eqnarray*} &&a_{\beta}(i,j)=(\beta\alpha_1)^i(\overline{\beta\alpha_1})^j+\cdots+(\beta\alpha_k)^i(\overline{\beta\alpha_k})^j\\ &=&\left\{ \begin{array}{ll} \beta^{i+jq}k & \mbox{if}\ i+jq\equiv0\pmod {k},\\ 0 & \mbox{otherwise}. \end{array} \right. \end{eqnarray*} Every row of the matrix $A_{\beta}\overline{A}_{\beta}^T$ has exactly one nonzero element, and every column of the matrix $A_{\beta}\overline{A}_{\beta}^T$ has exactly one nonzero element. Hence, the matrix $A_{\beta}\overline{A}_{\beta}^T$ is nonsingular over $\Bbb F_{q^2}$. The following lemma plays an important role in proving our main results of this Subsection 3.2. First, we investigate the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_{q^2}$. \begin{lemma}{\rm Let $q$ be a power of an odd prime and $k$ be a positive integer with $k\mid (q^2-1)$. If there exists an odd prime number $p$ such that $v_p(k)<v_p(q^2-1)$ and $h>0$, then there exists a $[2k,k]_{q^2}$ Hermitian LCD twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_{q^2}$ for $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k,\gamma^r\alpha_1,\ldots,\gamma^r\alpha_k)$, where $\gamma$ is a primitive element of $\Bbb F_{q^2}$, $\alpha_i=\gamma^{\frac{q^2-1}{k}i}$, $1\le i\le k$, and $r=2^{v_2(q^2-1)}$. } \end{lemma} {\em Proof:} The generator matrix $G_1$ of the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_{q^2}$ is shown in Equation (\ref{eq1}). By Lemma \ref{lem1}, $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is Hermitian LCD if and only if $G_1\overline{G}_1^T$ is nonsingular. 
Let $E=\theta_{h+hq}+\eta^q\theta_{h+lq}+\eta\theta_{l+hq}+\eta^{1+q}\theta_{l+lq}$, where $\theta_j=\sum_{i=1}^{n}\alpha_i^j$ and $l=k-1+t$. Then we compute $G_1\overline{G}_1^T$ in Equation (9) at the top of the next page. \begin{figure*}[!h] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{8} \begin{equation} \label{eq9} \resizebox{16cm}{!}{$G_1\overline{G}_1^T=\left( \begin{array}{cccccccccccccc} n & \theta_{q}& \ldots &\theta_{(h-1)q}&\theta_{hq}+\eta^q\theta_{lq} & \theta_{(h+1)q} &\ldots& \theta_{(k-2)q} &\theta_{(k-1)q}\\ \theta_{1} & \theta_{1+q}& \ldots &\theta_{1+(h-1)q}&\theta_{1+hq}+\eta^q\theta_{1+lq} & \theta_{1+(h+1)q} &\ldots& \theta_{1+(k-2)q} &\theta_{1+(k-1)q}\\ \vdots & \vdots& \ldots &\vdots&\vdots & \vdots& \ldots &\vdots&\vdots\\ \theta_{h-1} & \theta_{h-1+q}& \ldots &\theta_{h-1+(h-1)q}&\theta_{h-1+hq}+\eta^q\theta_{h-1+lq} & \theta_{h-1+(h+1)q} &\ldots& \theta_{h-1+(k-2)q} &\theta_{h-1+(k-1)q}\\ \theta_{h} +\eta\theta_{l}& \theta_{h+q} +\eta\theta_{l+q}& \ldots &\theta_{h+(h-1)q} +\eta\theta_{l+(h-1)q}&E & \theta_{h+(h+1)q}+\eta\theta_{l+(h+1)q}& \ldots& \theta_{h+(k-2)q}+\eta\theta_{l+(k-2)q}&\theta_{h+(k-1)q}+\eta\theta_{l+(k-1)q}\\ \theta_{h+1} & \theta_{h+1+q}& \ldots &\theta_{h+1+(h-1)q}&\theta_{h+1+hq}+\eta^q\theta_{h+1+lq} & \theta_{h+1+(h+1)q} &\ldots& \theta_{h+1+(k-2)q} &\theta_{h+1+(k-1)q}\\ \vdots & \vdots& \ldots &\vdots&\vdots & \vdots& \ldots &\vdots&\vdots\\ \theta_{k-1} & \theta_{k-1+q}& \ldots &\theta_{k-1+(h-1)q}&\theta_{k-1+hq}+\eta^q\theta_{k-1+lq} & \theta_{k-1+(h+1)q} &\ldots&\theta_{k-1+(k-2)q} &\theta_{k-1+(k-1)q}\\ \end{array} \right),$} \end{equation} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{4pt} \end{figure*} \setcounter{equation}{3} Let $C_{\beta}=A_{\beta}+B_{\beta}$, where $A_{\beta}$ is given in Equation (\ref{eq4}) and $B_{\beta}$ is given in Equation (\ref{eq7}). By Equation (\ref{eq9}), we compute $C_{\beta}\overline{C}_{\beta}^T$ in Equation (10) at the top of the next page.
\begin{figure*}[!h] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{9} \begin{equation} \label{eq10} \resizebox{16cm}{!}{$C_{\beta}\overline{C}_{\beta}^T=A_{\beta}\overline{A}_{\beta}^T+\left( \begin{array}{ccccccccc} 0 & 0& \ldots &0 & \eta^q\beta^{lq}\theta_{lq}&0 &\ldots&0\\ 0 & 0& \ldots &0 & \eta^q\beta^{1+lq}\theta_{1+lq}&0 &\ldots&0\\ \vdots & \vdots& \ldots &\vdots & \vdots & \vdots& \ldots &\vdots\\ 0 & 0& \ldots &0 & \eta^q\beta^{h-1+lq}\theta_{h-1+lq}&0 &\ldots& 0\\ \eta\beta^l\theta_{l}& \eta\beta^{l+q}\theta_{l+q}& \ldots &\eta\beta^{l+(h-1)q}\theta_{l+(h-1)q} & \eta\beta^{l+hq}\theta_{l+hq}+\eta^{1+q}\beta^{l+lq}\theta_{l+lq}+\eta^q\beta^{h+lq}\theta_{h+lq}&\eta\beta^{l+(h+1)q}\theta_{l+(h+1)q}&\ldots&\eta\beta^{l+(k-1)q}\theta_{l+(k-1)q}\\ 0 &0& \ldots &0 & \eta^q\beta^{h+1+lq}\theta_{h+1+lq}&0 &\ldots&0\\ \vdots & \vdots& \ldots &\vdots & \vdots & \vdots& \ldots &\vdots\\ 0 & 0& \ldots &0 & \eta^q\beta^{k-1+lq}\theta_{k-1+lq}&0 &\ldots&0 \end{array} \right).$} \end{equation} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{4pt} \end{figure*} \setcounter{equation}{3} Note that $\alpha _i^{q^2}=\alpha_i$. Then $\overline{\{\theta_l, \theta_{l+q},\ldots, \theta_{l+(k-1)q}\}}=\{\theta_{lq}, \theta_{1+lq},\ldots, \theta_{(k-1)+lq}\}$. Exactly one element in the set $\{\theta_l, \theta_{l+q},\ldots, \theta_{l+(k-1)q}\}$ has value $k$.
Hence, \begin{eqnarray*}C_{\beta}\overline{C}_{\beta}^T=A_{\beta}\overline{A}_{\beta}^T+\left( \begin{array}{cccccccc} 0 &\ldots&0& \ldots & 0 &\ldots&0\\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & \overline{*_{\beta} } & \ldots&0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &*_{\beta}& \ldots & \Delta_{\beta} &\ldots& 0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots&0&\cdots & 0&\ldots&0 \\ \end{array} \right),\end{eqnarray*} where $\ast_{\beta}$ and $\Delta_{\beta}$ are elements of $\Bbb F_{q^2}$; the entries $\overline{*_{\beta}}$, $\ast_{\beta}$ and $\Delta_{\beta}$ are placed in the $(i+1,h+1)$th, $(h+1,i+1)$th and $(h+1,h+1)$th positions, respectively, and all other entries are zero. Let $h>0$ and $G_1=[C_1: C_{\gamma^r}]$, where $r=2^{v_2(q^2-1)}$. By the condition that there exists an odd prime number $p$ such that $v_p(k)<v_p(q^2-1)$, no two columns of $G_1$ are the same. Then \begin{eqnarray*}&&G_1\overline{G}_1^T=C_1\overline{C}_1^T+C_{\gamma^r}\overline{C}_{\gamma^r}^T\\ &=&A_{1}\overline{A}_{1}^T+A_{\gamma^r}\overline{A}_{\gamma^r}^T\\ &+&\left( \begin{array}{cccccccc} 0 &\ldots&0& \ldots & 0 &\ldots&0\\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & \overline{*_{1}} + \overline{*_{\gamma^r}}& \ldots&0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &*_{1} + *_{\gamma^r}& \ldots & \Delta_1+\Delta_{\gamma^r} &\ldots& 0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots&0&\cdots & 0&\ldots&0 \\ \end{array} \right).\end{eqnarray*} Let $b(i,j)$ be the entry in the $i$-th row and $j$-th column of the matrix $A_{1}\overline{A}_{1}^T+A_{\gamma^r}\overline{A}_{\gamma^r}^T$. Then \begin{equation*} b(i,j)=\left\{ \begin{array}{ll} (1+ \gamma^{r(i+jq)})k & \mbox{if}\ i+jq\equiv0\pmod {k},\\ 0 & \mbox{otherwise}. \end{array} \right. \end{equation*} Suppose, for a contradiction, that $b(i,j)=0$ for some $i,j$ with $i+jq\equiv0\pmod {k}$.
We have $\gamma^{r(i+jq)}=-1=\gamma^{\frac{q^2-1}{2}}$ and $r(i+jq)\equiv \frac{q^2-1}{2}\pmod{q^2-1}$. Since $r=2^{v_2(q^2-1)}$ and $v_2({\frac{q^2-1}{2}})=v_2(q^2-1)-1$, we get a contradiction. So, we have $b(i,j)\neq0$ when $i+jq\equiv0\pmod {k}$. Therefore, every row of the matrix $A_{1}\overline{A}_{1}^T+A_{\gamma^r}\overline{A}_{\gamma^r}^T$ has exactly one nonzero element and every column of the matrix $A_{1}\overline{A}_{1}^T+A_{\gamma^r}\overline{A}_{\gamma^r}^T$ has exactly one nonzero element. Since the matrix $G_1\overline{G}_1^T$ is conjugate symmetric, we can eliminate the entries $*_{1} + *_{\gamma^r}$ and $\overline{*_{1}} + \overline{*_{\gamma^r}} $ by simultaneous elementary row and column operations. Therefore, we can find an elementary matrix $P$ such that \begin{eqnarray*}&&PG_1\overline{G}_1^T\overline{P}^T=PA_{1}\overline{A}_{1}^T\overline{P}^T+PA_{\gamma^r}\overline{A}_{\gamma^r}^T\overline{P}^T\\ &+&\left( \begin{array}{cccccccc} 0 &\ldots&0& \ldots & 0 &\ldots&0\\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & 0& \ldots&0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots &0& \ldots & \Delta_1+\Delta_{\gamma^r} &\ldots& 0 \\ \vdots&& \vdots& \ldots & \vdots &&\vdots\\ 0&\ldots&0&\cdots & 0&\ldots&0 \\ \end{array} \right).\end{eqnarray*} Hence, $G_1\overline{G}_1^T$ is nonsingular; thus, the code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_{q^2}$ is a Hermitian LCD code. This completes the proof. $\blacksquare$ \begin{example}{\rm Let $q=11^2=121$, $k=5$, and $\gamma$ be a primitive element of the finite field $\Bbb F_{121}$. Consider a twisted Reed-Solomon code $\mathcal C_5(\boldsymbol{\alpha}, 1,3,\eta)$ with $\boldsymbol{\alpha}=(1,\gamma^{24}, \gamma^{48}, \gamma^{72}, \gamma^{96} ,\gamma^8,\gamma^{8}\gamma^{24}, \gamma^8\gamma^{48}, \gamma^8\gamma^{72}, \gamma^8\gamma^{96})$ and $\eta=\gamma^i\in \Bbb F_{121}$. Then its generator matrix $G_2$ is given at the top of the next page.
By Lemma 3.8, $\mathcal C_5(\boldsymbol{\alpha}, 1,3,\gamma^i)$ is Hermitian LCD for all $i$. By Magma, the codes $\mathcal C_5(\boldsymbol{\alpha}, 1,3,\gamma^i)$ are MDS with parameters $[10,5]_{121}$ when $\eta\in \{\gamma^j: j=0,5,7,10,13,22,23\}$. By Magma, the code $\mathcal C_5(\boldsymbol{\alpha}, 1,3,\gamma^{23})$ has a generator matrix of the form $[I_5 \mid \mathbf{ A}]$, where \begin{eqnarray*} \mathbf{ A}=\left( \begin{array}{cccccc} \gamma^{111} & \gamma^{115}&\gamma^{6} & \gamma^{45}&10 \\ \gamma^{73} & \gamma^{19}&\gamma^{5} & \gamma^{54}&\gamma^{81} \\ \gamma^{91} & \gamma^{10}&\gamma^{22} & \gamma^{55}&\gamma^{81} \\ \gamma^{40} & \gamma^{94}&\gamma^{7} & \gamma^{43}&\gamma^{62} \\ \gamma^{38} & \gamma^{38}&\gamma^{116} & \gamma^{104}&\gamma^{56} \\ \end{array} \right). \end{eqnarray*} Then it is easy to check that the $3\times 3$ minor of the first three rows and columns of $\widetilde{\mathbf{ A}}$ is equal to $8$, which is confirmed by Magma. By Lemma \ref{lem3}, $\mathcal C_5(\boldsymbol{\alpha}, 1,3,\gamma^{23})$ is a $[10,5]_{121}$ Hermitian LCD MDS non-Reed-Solomon code.
$\blacksquare$ \begin{figure*}[!h] \normalsize \setcounter{mytempeqncnt}{\value{equation}} \setcounter{equation}{6} \begin{equation*} \resizebox{18cm}{!}{$G_2=\left(\begin{array}{cccccccccccccc} 1 &1 &1 &1&1 & 1 &1 &1 &1&1\\ 1&\gamma^{24}&\gamma^{48}&\gamma^{72}&\gamma^{96} & \gamma^8&\gamma^8\gamma^{24}&\gamma^8\gamma^{48}&\gamma^8\gamma^{72}&\gamma^8\gamma^{96}\\ 1&(\gamma^{24})^2&(\gamma^{48})^2&(\gamma^{72})^2&(\gamma^{96})^2 & (\gamma^8)^2&(\gamma^8\gamma^{24})^2&(\gamma^8\gamma^{48})^2&(\gamma^8\gamma^{72})^2&(\gamma^8\gamma^{96})^2\\ 1+\gamma^i&(\gamma^{24})^3+\gamma^i(\gamma^{24})^6&(\gamma^{48})^3+\gamma^i(\gamma^{48})^6&(\gamma^{72})^3+\gamma^i(\gamma^{72})^6&(\gamma^{96})^3+\gamma^i(\gamma^{96})^6 & (\gamma^8)^3+\gamma^i(\gamma^8)^6&(\gamma^8\gamma^{24})^3+\gamma^i(\gamma^8\gamma^{24})^6&(\gamma^8\gamma^{48})^3+\gamma^i(\gamma^8\gamma^{48})^6&(\gamma^8\gamma^{72})^3+\gamma^i(\gamma^8\gamma^{72})^6&(\gamma^8\gamma^{96})^3+\gamma^i(\gamma^8\gamma^{96})^6\\ 1&(\gamma^{24})^4&(\gamma^{48})^4&(\gamma^{72})^4&(\gamma^{96})^4 & (\gamma^8)^4&(\gamma^8\gamma^{24})^4&(\gamma^8\gamma^{48})^4&(\gamma^8\gamma^{72})^4&(\gamma^8\gamma^{96})^4\\ \end{array}\right).$} \end{equation*} \setcounter{equation}{\value{mytempeqncnt}} \hrulefill \vspace*{4pt} \end{figure*} \setcounter{equation}{3} } \end{example} In a similar way as in Theorem 3.4, we obtain the following result. \begin{theorem} {\rm Let $q$ be a power of an odd prime and $\Bbb F_s \subset \Bbb F_{q^2}$. Let $k$ be a positive integer such that $k\mid (s-1)$ and $2<k< (s-1)/2$. Assume that there exists an odd prime number $p$ such that $v_p(k)<v_p(s-1)$. Let $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k,\gamma^r\alpha_1,\ldots,\gamma^r\alpha_k)$, where $\gamma$ is a primitive element of $\Bbb F_{s}$, $\alpha_i=\gamma^{\frac{s-1}{k}i}$, $1\le i\le k$, and $r=2^{v_2(s-1)}$. Assume that the minor condition for $\eta\in\Bbb F_{q^2} \backslash \Bbb F_s$ of Lemma \ref{lem3} holds.
Then $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ is a $[2k, k]_{q^2}$ Hermitian LCD MDS non-Reed-Solomon code. } \end{theorem} {\em Proof:} By Lemma 3.9, $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_{q^2}$ is Hermitian LCD. By Lemma \ref{lem2} and $\eta \in\Bbb F_{q^2} \backslash \Bbb F_s $, the twisted Reed-Solomon code $\mathcal C_k(\boldsymbol{\alpha}, t,h,\eta)$ over $\Bbb F_{q^2}$ is MDS. Then the result follows from Lemma \ref{lem3}. $\blacksquare$ In the following, we consider Roth-Lempel codes. \begin{lemma} {\rm Let $q$ be a power of an odd prime, and let $k\mid (q^2-1)$ and $k\geq 3$. Let $\gamma$ be a primitive element of $\Bbb F_{q^2}$ and $\alpha_i=\gamma^{\frac{q^2-1}{k}i}$ for $1\le i\le k$. Then there exists a Hermitian LCD Roth-Lempel code $RL({\boldsymbol{ \alpha}},k,n)$ over $\Bbb F_{q^2}$ with one of the following parameters: (1) $[k+2,k]_{q^2}$ if $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k)$; (2) $[k+3,k]_{q^2}$ if $\gcd(k+1,q)=1$ and $\boldsymbol{\alpha}=(0,\alpha_1,\ldots,\alpha_k)$; (3) $[2k+2,k]_{q^2}$ if there exists an odd prime number $p$ such that $v_p(k)<v_p(q^2-1)$, $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_k,\gamma^r\alpha_1,\ldots,\gamma^r\alpha_k)$, and $r=2^{v_2(q^2-1)}$. } \end{lemma} {\em Proof:} By Lemma \ref{lem1}, the Roth-Lempel code over $\Bbb F_{q^2}$ in Definition 2.11 is Hermitian LCD if and only if $G_2\overline{G}_2^T$ is nonsingular. (1) Let $G_2=[A_{1}: D]$, where $D$ is given in the proof of Lemma 3.5. Then \begin{eqnarray*}&&G_2\overline{G}_2^T=A_{1}\overline{A}_{1}^T+D\overline{D}^T\\ &=&A_{1}\overline{A}_{1}^T+\left( \begin{array}{ccccccc} 0 & 0& 0&\ldots & 0 &0\\ \vdots&\vdots& \vdots& \ldots & \vdots &\vdots\\ 0 & 0&0&\cdots & 1 &\delta^q\\ 0& 0& 0&\ldots &\delta &1+\delta^{1+q}\\ \end{array} \right). \end{eqnarray*} Since the matrix $A_{1}\overline{A}_{1}^T$ is nonsingular, the matrix $G_2\overline{G}_2^T$ is nonsingular and $RL({\boldsymbol{ \alpha}},k,k+2)$ is a Hermitian LCD code.
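The nonsingularity of $A_{1}\overline{A}_{1}^T$, which drives part (1), can be checked directly in a toy case. The following sketch is ours and is not from the paper: it models $\Bbb F_{9}=\Bbb F_{3}[u]/(u^2+1)$ with $q=3$ and $k=4$, takes the $\alpha_i$ to be the fourth roots of unity, and confirms the entry formula $a_{\beta}(i,j)$ stated at the beginning of this subsection for $\beta=1$; here $A_1\overline{A}_1^T$ even comes out as a scalar matrix.

```python
# Toy verification (ours) of the entry formula a_beta(i,j) and of the
# nonsingularity of A_1 * conj(A_1)^T, modelling F_9 = F_3[u]/(u^2 + 1).
# Elements are pairs (a, b) ~ a + b*u with u^2 = -1; conjugation is the
# Frobenius map x -> x^q with q = 3.
p, q, k = 3, 3, 4                         # k divides q^2 - 1 = 8

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(x, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, x)
    return r

conj = lambda x: power(x, q)              # Frobenius x -> x^3
add = lambda x, y: ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

gamma = (1, 1)                            # 1 + u is primitive in F_9
alpha = [power(gamma, (q * q - 1) // k * i) for i in range(1, k + 1)]

# A_1 has rows (alpha_1^i, ..., alpha_k^i); form M = A_1 * conj(A_1)^T.
M = [[(0, 0)] * k for _ in range(k)]
for i in range(k):
    for j in range(k):
        for a in alpha:
            M[i][j] = add(M[i][j], mul(power(a, i), conj(power(a, j))))

# Predicted entry: k (= 4 = 1 in F_3) if i + j*q = 0 mod k, else 0, so
# each row and each column carries exactly one nonzero entry.
for i in range(k):
    for j in range(k):
        expect = (k % p, 0) if (i + j * q) % k == 0 else (0, 0)
        assert M[i][j] == expect
print(M[0][0], M[0][1])  # (1, 0) (0, 0)
```

For $q=3$ the congruence $i+3j\equiv0\pmod 4$ forces $i=j$, so $M$ is the identity; for other $q$ and $k$ one gets a monomial (but still nonsingular) matrix.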
(2) Let $G_2=[e_1:A_{1}: D]$, where $e_1=(1,0,\ldots, 0)^T$. Then $G_2\overline{G}_2^T=e_1e_1^T+A_{1}\overline{A}_{1}^T+D\overline{D}^T$, and the entry in the $(1,1)$ position of $A_{1}\overline{A}_{1}^T$ is $k$, which is nonzero. Then the entry in the $(1,1)$ position of the matrix $G_2\overline{G}_2^T$ is $k+1$, and so the matrix $G_2\overline{G}_2^T$ is nonsingular. Therefore, the $RL({\boldsymbol{ \alpha}},k,k+3)$ code is a Hermitian LCD code. (3) By the proof of Lemma 3.5, let $G_2=[A_1: A_{\gamma^r}: D]$. By the proof of Lemma 3.1, we can take some element $\delta\in \Bbb F_{q^2}$ such that the matrix $G_2\overline{G}_2^T$ is nonsingular, and hence the $RL({\boldsymbol{ \alpha}},k,2k+2)$ code is a Hermitian LCD code. This completes the proof. $\blacksquare$ In a similar way as in Theorem 3.6, we have the following theorem on Hermitian LCD codes over $\Bbb F_{q^2}$. \begin{theorem}{\rm Let $q$ be a power of an odd prime and $ \Bbb F_{q^2}$ be the finite field of order $q^2$. Let $k$ be an integer such that $k\mid (q-1)$ and $k\ge 3$. (1) If $\gcd(k+1,q)=1$, then there exists a $[k+3, k]_{q^2}$ Hermitian LCD MDS non-Reed-Solomon code over $\Bbb F_{q^2}$. (2) If there exists an odd prime number $p$ such that $v_p(k)<v_p(q-1)$, then there exists a $[2k+2, k]_{q^2}$ Hermitian LCD MDS non-Reed-Solomon code over $\Bbb F_{q^2}$. } \end{theorem} We give the following example. \begin{example}{\rm (1) Let $q=5^2$ and $k=6$. Let $\gamma$ be a primitive element of the finite field $\Bbb F_{25}$, $\boldsymbol{\alpha}=(0,1,\gamma^{4}, \gamma^{8}, \gamma^{12},\gamma^{16}, \gamma^{20} )$, and $\delta=\gamma^i$ for some integer $i$ with $0\le i\le 23$.
Then the generator matrix of the code $\mathcal{C}_1$ is given as follows: \begin{eqnarray*}\resizebox{8.3cm}{!}{ $ \left(\begin{array}{cccccccccc} 1 &1 &1 &1&1&1&1&0&0\\ 0 &1&\gamma^{4} &\gamma^{8} &\gamma^{12}&\gamma^{16}&\gamma^{20}&0&0\\ 0 &1&(\gamma^{4})^2 &(\gamma^{8})^2 &(\gamma^{12})^2&(\gamma^{16})^2&(\gamma^{20})^2&0&0\\ 0 &1&(\gamma^{4})^3 &(\gamma^{8})^3 &(\gamma^{12})^3&(\gamma^{16})^3&(\gamma^{20})^3&0&0\\ 0 &1&(\gamma^{4})^4 &(\gamma^{8})^4 &(\gamma^{12})^4&(\gamma^{16})^4&(\gamma^{20})^4&0&1\\ 0 &1&(\gamma^{4})^5 &(\gamma^{8})^5 &(\gamma^{12})^5&(\gamma^{16})^5&(\gamma^{20})^5&1&\gamma^i\\ \end{array}\right).$} \end{eqnarray*} By Magma, $\mathcal{C}_1$ is a Hermitian LCD MDS non-Reed-Solomon code over $\Bbb F_{25}$ with parameters $[9,6]_{25}$ when $i=1,2,5,6,9,10,13,14,17,18,21,22$. (2) Let $q=7^2$ and $k=8$. Let $\gamma$ be a primitive element of the finite field $\Bbb F_{49}$ and $\boldsymbol{\alpha}=(0,1,\gamma^{6}, \gamma^{12}, \gamma^{18},\gamma^{24}, \gamma^{30},\gamma^{36},\gamma^{42} )$, and $\delta=\gamma^i$ for some integer $i$ with $0\le i\le 47$.
Then the generator matrix of the code $\mathcal{C}_2$ is given as follows: \begin{equation*}\resizebox{8.3cm}{!}{ $\left(\begin{array}{cccccccccccc} 1 &1 &1 &1&1&1&1&1&1&0&0\\ 0 &1&\gamma^{6} &\gamma^{12} &\gamma^{18}&\gamma^{24}&\gamma^{30}&\gamma^{36}&\gamma^{42}&0&0\\ 0 &1&(\gamma^{6})^2 &(\gamma^{12} )^2&(\gamma^{18})^2&(\gamma^{24})^2&(\gamma^{30})^2&(\gamma^{36})^2&(\gamma^{42})^2&0&0\\ 0 &1&(\gamma^{6})^3 &(\gamma^{12} )^3&(\gamma^{18})^3&(\gamma^{24})^3&(\gamma^{30})^3&(\gamma^{36})^3&(\gamma^{42})^3&0&0\\ 0 &1&(\gamma^{6})^4 &(\gamma^{12} )^4&(\gamma^{18})^4&(\gamma^{24})^4&(\gamma^{30})^4&(\gamma^{36})^4&(\gamma^{42})^4&0&0\\ 0 &1&(\gamma^{6})^5 &(\gamma^{12} )^5&(\gamma^{18})^5&(\gamma^{24})^5&(\gamma^{30})^5&(\gamma^{36})^5&(\gamma^{42})^5&0&0\\ 0 &1&(\gamma^{6})^6 &(\gamma^{12} )^6&(\gamma^{18})^6&(\gamma^{24})^6&(\gamma^{30})^6&(\gamma^{36})^6&(\gamma^{42})^6&0&1\\ 0 &1&(\gamma^{6})^7 &(\gamma^{12} )^7&(\gamma^{18})^7&(\gamma^{24})^7&(\gamma^{30})^7&(\gamma^{36})^7&(\gamma^{42})^7&1&\gamma^i\\ \end{array}\right)$ }. \end{equation*} By Magma, $\mathcal{C}_2$ is a Hermitian LCD MDS non-Reed-Solomon code over $\Bbb F_{49}$ with parameters $[11,8]_{49}$ when $i=4,16,22,28,29,34,40,46$. $\blacksquare$ } \end{example} \begin{remark} {\rm We point out that any {\it Hermitian} LCD MDS code of non-Reed-Solomon type constructed in Theorems 3.11 and 3.13 is not monomially equivalent to any {\it Hermitian} LCD code constructed by the method of Carlet et al.~\cite{CMTQP}. This can be justified in a similar way as Remark 3.8 (for the Euclidean case).} \end{remark} \section{Concluding remarks} The main contributions of this paper are constructions of some new Euclidean and Hermitian LCD MDS codes of non-Reed-Solomon type. According to the results of Carlet et al.~\cite{CMTQP}, all parameters of Euclidean LCD codes $(q > 3)$ and Hermitian LCD codes $(q > 2)$ have been completely determined, including the LCD MDS codes.
However, in coding theory, it is an important issue to find all {\it inequivalent} codes of the same parameters. We emphasize that any Euclidean (or Hermitian) LCD MDS code of non-Reed-Solomon type constructed by our method is not monomially equivalent to any Euclidean (or Hermitian) LCD code constructed by the method of Carlet et al. in \cite{CMTQP}; this is justified in Remarks 3.8 and 3.15. Finally, we provided some examples of non-Reed-Solomon LCD MDS codes. \bigskip \section*{Acknowledgments} The authors are very grateful to the reviewers and the Associate Editor for their valuable comments and suggestions to improve the quality of this paper. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction and overview} \label{sec:intro} A multivariate normal distribution is determined by its covariance matrix and its mean vector. So for a fixed $n\geq 1$, the family $\domain{N}$ of $n$-variate normal distributions is a differentiable manifold which can be identified with the product of the space of positive definite symmetric $n\times n$-matrices by the vector space $\mathbbm{R}^n$. For various statistical purposes, it is desirable to have a measure of distance between the elements of $\domain{N}$. Such a distance measure is provided by the \emph{Fisher metric} on $\domain{N}$, which is a Riemannian metric that appears naturally in a certain statistical framework. We briefly review some properties of Fisher metric on the normal distributions in Section \ref{sec:background}. Computing the distances on $\domain{N}$, however, turns out to be a non-trivial task. Even though explicit forms for the geodesics of the Fisher metric on $\domain{N}$ are known (due to Calvo and Oller \cite{CO2}), these only yield explicit formulas for the distance in particular cases. So Lovri\v{c}, Min-Oo and Ruh \cite{LMR} proposed the use of a different metric in which distances are easier to compute. They map $\domain{N}$ diffeomorphically onto the Riemannian symmetric space $\group{SL}(n+1,\mathbbm{R})/\group{SO}(n+1)$. This map is not an isometry between the Fisher metric and the metric of the symmetric space, which we call the \emph{Killing metric}, but nevertheless, the two metrics are quite similar in appearance. So it is reasonable to ask how different they really are. In Section \ref{sec:normal} we describe the geometry of $\domain{N}$ as a Riemannian homogeneous but non-symmetric space with the Fisher metric. In Theorem \ref{mthm:normal_fisher} we show that $\domain{N}$ is a bundle whose base is the cone $\mathrm{Pos}(n,\mathbbm{R})$ of symmetric positive definite $n\times n$-matrices and whose fiber is $\mathbbm{R}^n$. 
This also gives rise to two pointwise mutually orthogonal foliations, one with leaves isometric to $\mathrm{Pos}(n,\mathbbm{R})$, the other with leaves isometric to $\mathbbm{R}^n$. To make a case for using the Killing metric as a sensible approximation for the Fisher metric, we compare the geometry of the Fisher metric and the geometry of the Killing metric in Section \ref{sec:symmetric}. We find that the Levi-Civita connection for the Fisher metric on the leaves $\mathrm{Pos}(n,\mathbbm{R})$ is affinely equivalent to the Levi-Civita connection of the Killing metric. So unparameterized geodesics in these leaves are the same for the two metrics. In Theorem \ref{mthm:asymptotic}, we show that Killing geodesics orthogonal to a leaf $\mathrm{Pos}(n,\mathbbm{R})$ at some point are \emph{asymptotically geodesic} in the Fisher metric, that is, their defect from being a Fisher geodesic tends to zero as their curve parameter tends to infinity. So we find that for two important classes of unparameterized geodesics, the Killing geodesics approximate or are identical to the corresponding Fisher geodesics. Though this is not an exhaustive comparison, it provides some justification to consider the easier-to-compute Killing metric as a good approximation for the Fisher metric. \subsection*{Notations and conventions} Throughout, we will assume matrices to be real-valued. For a matrix $X\in\mathbbm{R}^{n\times n}$, we let $X^\top$ denote its transpose. We also write $X^{-\top}=(X^\top)^{-1}$. The identity matrix is denoted by $I$ or $I_n$. By $E_{ij}$ we denote the elementary matrix whose entry in row $i$, column $j$ is $1$, and all other entries are $0$. Its symmetrization is $S_{ij}=\frac{2-\delta_{ij}}{2}(E_{ij}+E_{ji})$. The canonical basis vectors of $\mathbbm{R}^n$ are denoted by $e_1,\ldots,e_n$. 
As usual, \begin{align*} \group{GL}(n,\mathbbm{R}) &= \{A\in\mathbbm{R}^{n\times n}\mid \det(A)\neq0\}, \\ \group{SL}(n,\mathbbm{R}) &= \{A\in\group{GL}(n,\mathbbm{R})\mid\det(A)=1\}, \\ \O(n) &= \{A\in\group{GL}(n,\mathbbm{R})\mid A^\top = A^{-1}\}, \\ \group{SO}(n) &= \{A\in\O(n)\mid \det(A)=1\} \end{align*} denote the general linear, special linear, and (special) orthogonal groups, respectively. The subgroup of $\group{GL}(n,\mathbbm{R})$ of matrices with positive determinant is denoted by $\group{GL}^+(n,\mathbbm{R})$. The affine group is the semidirect product \[ \group{Aff}(n,\mathbbm{R})=\group{GL}(n,\mathbbm{R})\ltimes\mathbbm{R}^n, \] where the semidirect product is given by $(A_1,b_1)(A_2,b_2)=(A_1 A_2,b_1+A_1 b_2)$ for $(A_i,b_i)\in\group{Aff}(n,\mathbbm{R})$. We also write $\group{Aff}^+(n,\mathbbm{R})=\group{GL}^+(n,\mathbbm{R})\ltimes\mathbbm{R}^n$. By $\mathrm{Sym}(n,\mathbbm{R})$ we denote the set of symmetric $n\times n$-matrices, \[ \mathrm{Sym}(n,\mathbbm{R}) = \{S\in\mathbbm{R}^{n\times n}\mid S=S^\top\}. \] We write $\mathrm{Sym}_0(n,\mathbbm{R})$ for the corresponding subspace of elements with trace $0$. The subset of diagonal matrices in $\mathrm{Sym}(n,\mathbbm{R})$ is denoted by $\mathrm{Diag}(n,\mathbbm{R})$. The set of positive definite symmetric matrices in $\mathrm{Sym}(n,\mathbbm{R})$ is denoted by $\mathrm{Pos}(n,\mathbbm{R})$, \[ \mathrm{Pos}(n,\mathbbm{R}) = \{S\in\mathrm{Sym}(n,\mathbbm{R})\mid x^\top S x > 0 \text{ for all non-zero } x\in\mathbbm{R}^n\}. \] Its subset of unimodular elements is \[ \mathrm{Pos}_1(n,\mathbbm{R}) = \{S\in\mathrm{Pos}(n,\mathbbm{R})\mid \det(S)=1\}. \] Recall that $\mathrm{Pos}(n,\mathbbm{R})=\group{GL}(n,\mathbbm{R})/\O(n)$ and $\mathrm{Pos}_1(n,\mathbbm{R})=\group{SL}(n,\mathbbm{R})/\group{SO}(n)$. \section{Some background on information geometry} \label{sec:background} In this section we briefly review the concepts from information geometry that we use in the following. 
We mainly follow Amari and Nagaoka's \cite{AN} presentation. \subsection{The Fisher metric and dual connections} Information geometry provides a framework to study a class of probability distributions $p(x;\theta)$ defined on a sample space $\Omega$ and determined by finitely many parameters $\theta=(\theta_1,\ldots,\theta_n)$, where we assume for simplicity that $p$ depends smoothly on $x$ and $\theta$. For example, the set of univariate normal distributions is parametrized by the mean $\theta_1=\mu$ and the variance $\theta_2=\sigma^2$. In general, the set $M$ of admissible values for $\theta$ can be viewed as an $n$-dimensional differentiable manifold, and we can define a positive semidefinite bilinear tensor $\tensor{g}=(\tensor{g}_{ij})$ on $M$ via \begin{equation} \tensor{g}_{ij}(\theta) = -\int_\Omega \frac{\partial^2 \log(p(x;\theta))}{\partial\theta_i\partial\theta_j} p(x;\theta)\ \d x. \label{eq:fisher} \end{equation} In the following we assume that $\tensor{g}$ is positive definite everywhere, so that $(M,\tensor{g})$ is a Riemannian manifold. Then $\tensor{g}$ is called the \emph{Fisher metric} on $M$, and $(M,\tensor{g})$ is called a \emph{statistical manifold}. In addition to the Fisher metric, there are two particular torsion-free affine connections defined on $M$, denoted by $\nabla^{({\rm e})}$ and $\nabla^{({\rm m})}$. These connections are \emph{dual} to each other with respect to $\tensor{g}$, which means that for all vector fields $X,Y,Z$ on $M$, \begin{equation} Z \tensor{g}(X,Y) = \tensor{g}(\nabla^{({\rm e})}_Z X,Y) + \tensor{g}(X,\nabla^{({\rm m})}_Z Y). \label{eq:dual} \end{equation} Moreover, the affine combination \[ \nabla = \frac{1}{2}\nabla^{({\rm e})} + \frac{1}{2}\nabla^{({\rm m})} \] yields the Levi-Civita connection $\nabla$ of the Fisher metric $\tensor{g}$. 
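As a simple illustration of \eqref{eq:fisher}, consider the univariate normal distributions with parameters $\theta=(\mu,\sigma^2)$, for which
\[
\log(p(x;\mu,\sigma^2)) = -\frac{(x-\mu)^2}{2\sigma^2} - \frac{1}{2}\log(2\pi\sigma^2).
\]
Differentiating twice with respect to $\mu$ and $\sigma^2$ and taking expectations (using $\int_\Omega (x-\mu)^2\, p(x;\theta)\ \d x = \sigma^2$) yields
\[
\tensor{g}(\mu,\sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix},
\]
which is positive definite for all $\sigma^2>0$.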
The letters ``e'' and ``m'' stand for ``exponential'' and ``mixture'', respectively, referring to two families of probability distributions in which these connections appear naturally. More generally, there is a whole family of affine connections $\nabla^{(\alpha)}$ with $\alpha\in[-1,1]$ associated to $\tensor{g}$, and $\nabla^{({\rm e})}=\nabla^{(1)}$, $\nabla^{({\rm m})}=\nabla^{(-1)}$. However, we are not concerned with values $\alpha\neq\pm 1$ here. \subsection{Exponential families} An \emph{exponential family} is a statistical manifold $M$ that consists of probability distributions of the form \[ p(x;\theta) = \exp(c(x)+\theta_1 f_1(x) + \ldots + \theta_n f_n(x) - \psi(\theta)) \] for given functions $c,f_1,\ldots,f_n:\Omega\to\mathbbm{R}$ and $\psi:M\to\mathbbm{R}$. The normalization of $p(x;\theta)$ implies \begin{equation} \psi(\theta) = \log\left(\int_\Omega \exp(c(x) + \theta_1 f_1(x) + \dots + \theta_n f_n(x)) \d x\right). \label{eq:psi} \end{equation} The connections $\nabla^{({\rm e})}$ and $\nabla^{({\rm m})}$ are distinguished on an exponential family (see Amari and Nagaoka \cite[Sections 2.3 and 3.3]{AN}). \begin{thm}\label{thm:flat} Let $M$ be an exponential family. Then $\nabla^{({\rm e})}$ and $\nabla^{({\rm m})}$ are flat torsion-free affine connections on $M$. \end{thm} In fact, the $\theta_1,\ldots,\theta_n$ form a \emph{flat coordinate system} in the sense that $\nabla^{({\rm e})}_{\partial_i}\partial_j=0$, $i,j=1,\ldots,n$, for the coordinate vector fields $\partial_i=\frac{\partial}{\partial\theta_i}$. The flat coordinate system $\eta_1,\ldots,\eta_n$ for $\nabla^{({\rm m})}$ is obtained via a Legendre transform of $\theta_1,\ldots,\theta_n$, \[ \frac{\partial \psi}{\partial \theta_i} = \eta_i, \quad i=1,\ldots,n. 
\] In the flat $\theta$-coordinates, the Fisher metric for an exponential family is given as a Hessian metric $\tensor{g}=\nabla^{({\rm e})}\d\psi$, or equivalently \begin{equation} \tensor{g}_{ij}(\theta) = \frac{\partial^2 \psi(\theta)}{\partial\theta_i\partial\theta_j}. \label{eq:hessian} \end{equation} We call $\psi$ the \emph{potential} of the Fisher metric. The \emph{dual potential} $\psi^*$ is given by $\psi^*=\theta^\top\eta-\psi$, and in the flat $\eta$-coordinates, the inverse $\tensor{g}^{ij}$ is given as a Hessian metric \begin{equation} \tensor{g}^{ij}(\eta) = \frac{\partial^2 \psi^*(\eta)}{\partial\eta_i\partial\eta_j}. \label{eq:hessian_dual} \end{equation} Another important property of exponential families is the following (see Amari and Nagaoka \cite[Theorem 2.5]{AN}). \begin{thm}\label{thm:totgeod} A submanifold $N$ of an exponential family $M$ is totally geodesic in $M$ with respect to $\nabla^{({\rm e})}$ if and only if $N$ is an exponential family itself. \end{thm} \subsection{Normal distributions} \label{subsec:normal} The most important exponential family is formed by the normal distributions. An $n$-variate normal distribution is determined by its covariance matrix $\Sigma\in\mathrm{Pos}(n,\mathbbm{R})$ and its mean $\mu\in\mathbbm{R}^n$ by the following formula \[ p(x; \Sigma, \mu) = \frac{1}{\sqrt{(2\pi)^n\det(\Sigma)}} \exp\left(-\frac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\right) \] so the manifold we are considering is the space $\domain{N}=\mathrm{Pos}(n,\mathbbm{R})\times\mathbbm{R}^n$. The flat coordinates for the connection $\nabla^{({\rm m})}$ are $(\Xi,\xi)$, where \[ \xi=\mu\in\mathbbm{R}^n, \quad \Xi=\Sigma+\mu\mu^\top\in\mathrm{Pos}(n,\mathbbm{R}), \] and the flat coordinates for the connection $\nabla^{({\rm e})}$ are $(\Theta,\theta)$, where \[ \theta=\Sigma^{-1}\mu\in\mathbbm{R}^n, \quad \Theta=-\frac{1}{2}\Sigma^{-1}\in-\mathrm{Pos}(n,\mathbbm{R}). 
\] The potential $\psi$ in these coordinate systems is (compare \eqref{eq:psi}) \begin{align*} \psi(\Sigma,\mu) &= \frac{1}{2}\mu^\top \Sigma^{-1} \mu + \frac{1}{2}\log(\det(2\pi\Sigma)), \\ \psi(\Xi,\xi) &= \frac{1}{2}\xi^\top(\Xi-\xi\xi^\top)^{-1}\xi +\frac{1}{2}\log(\det(2\pi(\Xi-\xi\xi^\top))), \\ \psi(\Theta,\theta) &= -\frac{1}{4}\theta^\top\Theta^{-1}\theta-\frac{1}{2}\log(\det(-\pi^{-1}\Theta)). \end{align*} \section{Geometry of the family of normal distributions} \label{sec:normal} In this section we take a closer look at the information geometry of the manifold $\domain{N}=\mathrm{Pos}(n,\mathbbm{R})\times\mathbbm{R}^n$. Note that $\mathrm{Pos}(n,\mathbbm{R})=\mathbbm{R}\times\mathrm{Pos}_1(n,\mathbbm{R})$ as a product of manifolds. \subsection{Basic geometric properties of $\domain{N}$} Here, we state the explicit form of the Fisher metric, its Levi-Civita connection and its curvature tensor in the $(\Sigma,\mu)$-coordinates. These were originally computed by Skovgaard \cite{skovgaard0,skovgaard}. If $\tensor{g}$ is the Fisher metric on $\domain{N}$, $X,Y$ are two coordinate vector fields in the $\Sigma$-directions, and $v,w$ are two coordinate vector fields in the $\mu$-directions, then the metric tensor is \begin{equation} \tensor{g}_{(\Sigma,\mu)}\bigl( (X,v),(Y,w) \bigr) =v^\top\Sigma^{-1} w + \frac{1}{2}\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y), \label{eq:Fisher_metric} \end{equation} and the Levi-Civita connection is determined by \begin{equation} \begin{split} \nabla_XY &= \nabla_YX = -\frac{1}{2}(X\Sigma^{-1}Y+Y\Sigma^{-1}X), \\ \nabla_vw &= \nabla_wv = \frac{1}{2}(vw^\top+wv^\top), \\ \nabla_Xv &= \nabla_vX = -\frac{1}{2}X\Sigma^{-1}v. \end{split} \label{eq:LeviCivita} \end{equation} Note that the symmetry in these equations is due to the fact that we are looking at coordinate vector fields. 
If $X_1,X_2,X_3,X_4$ and $v_1,v_2,v_3,v_4$ are coordinate vector fields in the $\Sigma$- and $\mu$-directions, respectively, then the curvature of the Fisher metric is determined by \begin{equation} \begin{split} \tensor{R}(v_1,v_2,v_3,v_4) =&\ \frac{1}{4}\bigl((v_2^\top\Sigma^{-1}v_3)(v_1^\top\Sigma^{-1}v_4)-(v_1^\top\Sigma^{-1}v_3)(v_2^\top\Sigma^{-1}v_4)\bigr), \\ \tensor{R}(X_1,X_2,X_3,X_4) =&\ \frac{1}{4}\bigl(\mathrm{tr}(X_2\Sigma^{-1}X_1\Sigma^{-1}X_3\Sigma^{-1}X_4\Sigma^{-1})\\ &\ -\mathrm{tr}(X_1\Sigma^{-1}X_2\Sigma^{-1}X_3\Sigma^{-1}X_4\Sigma^{-1})\bigr), \\ \tensor{R}(v_1,v_2,X_1,X_2) =&\ \frac{1}{4}(v_1^\top\Sigma^{-1}X_1\Sigma^{-1}X_2\Sigma^{-1}v_2-v_1^\top\Sigma^{-1}X_2\Sigma^{-1}X_1\Sigma^{-1}v_2), \\ \tensor{R}(v_1,X_1,v_2,X_2) =&\ \frac{1}{4}v_1^\top\Sigma^{-1}X_1\Sigma^{-1}X_2\Sigma^{-1}v_2. \end{split} \label{eq:curvature} \end{equation} We now consider the two foliations of $\domain{N}$ into submanifolds of fixed $\Sigma_0$ or $\mu_0$, respectively. For fixed $\Sigma_0\in\mathrm{Pos}(n,\mathbbm{R})$, $\mu_0\in\mathbbm{R}^n$ we will write \begin{align*} \domain{N}(\cdot,\mu_0) &= \{(\Sigma,\mu_0)\mid\Sigma\in\mathrm{Pos}(n,\mathbbm{R})\}, \\ \domain{N}(\Sigma_0,\cdot) &= \{(\Sigma_0,\mu)\mid\mu\in\mathbbm{R}^n\}. \end{align*} It follows from \eqref{eq:Fisher_metric} that the two foliations determined by these submanifolds are orthogonal. Recall that the second fundamental form $B$ of a submanifold $N$ of $M$ is the normal component of $\nabla_X Y$ in $\mathrm{T} M$ for two vector fields $X,Y$ tangent to $N$. We let $\partial_{(ij)}$ denote the coordinate vector field in direction $S_{ij}$, and we let $\partial_m$ denote the coordinate vector field in direction $e_m$. We denote by $J_\Sigma=\{(i,j)\mid 1\leq i\leq j\leq n\}$ the set enumerating the coordinates of $\mathrm{Sym}(n,\mathbbm{R})$ and by $J_\mu=\{i=1,\ldots,n\}$ the set enumerating the coordinates of $\mathbbm{R}^n$, and set $J=J_\Sigma\cup J_\mu$. 
When we refer to an index $p\in J$, it may mean either a single index from $J_\mu$ or an index pair from $J_{\Sigma}$. Then the Christoffel symbols for the Levi-Civita connection $\nabla$ are denoted by $\Gamma_{pq}^r$ with $p,q,r\in J$. \begin{prop}\label{prop:Nmu0_tot_geod} For any $\mu_0 \in \mathbbm{R}^n$ and with respect to the Fisher metric of $\domain{N}$, the submanifold $\domain{N}(\cdot,\mu_0)$ is totally geodesic. \end{prop} \begin{proof} By \eqref{eq:LeviCivita}, $\nabla_{\partial_{(ij)}}\partial_{(kl)}$ is tangent to $\domain{N}(\cdot,\mu_0)$ for all $i,j,k,l$. An arbitrary tangent vector field $X$ to $\domain{N}(\cdot,\mu_0)$ can be written as $X=\sum_{(i,j)\in J_\Sigma} w_{ij} \partial_{(ij)}$, with $w_{ij}\in\mathrm{C}^\infty(\domain{N})$. Then \begin{align*} \nabla_{\partial_{(ij)}}X &= \sum_{p\in J} (\partial_{(ij)} w_p + \sum_{q\in J} \Gamma_{(ij)q}^p w_q) \partial_p \\ &= \sum_{p\in J} (\partial_{(ij)} w_p + \sum_{(k,l)\in J_\Sigma}\Gamma_{(ij)(kl)}^p w_{kl}) \partial_p & (w_m=0\text{ for }m\in J_\mu) \\ &= \sum_{(r,s)\in J_\Sigma} (\partial_{(ij)} w_{rs} + \sum_{(k,l)\in J_\Sigma}\Gamma_{(ij)(kl)}^{(rs)} w_{kl}) \partial_{(rs)} & (\Gamma_{(ij)(kl)}^m=0=w_m\text{ for }m\in J_\mu). \end{align*} This last expression is the induced covariant derivative on the submanifold $\domain{N}(\cdot,\mu_0)$, since the $\mu$- and $\Sigma$-directions are orthogonal everywhere. Hence the second fundamental form of $\domain{N}(\cdot,\mu_0)$ vanishes, which means $\domain{N}(\cdot,\mu_0)$ is totally geodesic. \end{proof} \begin{prop}\label{prop:NSigma0_parallel} For any $\Sigma_0 \in \mathrm{Pos}(n,\mathbbm{R})$ and with respect to the Fisher metric of $\domain{N}$, the submanifold $\domain{N}(\Sigma_0,\cdot)$ is parallel. Also, the second fundamental form $B$ of $\domain{N}(\Sigma_0,\cdot)$ satisfies \[ B(e_i,e_j) = \frac{1}{2}(E_{ij} + E_{ji}) \] for all $i,j = 1, \dots, n$. 
\end{prop} \begin{proof} The second fundamental form of $\domain{N}(\Sigma_0,\cdot)$ is given by \[ B(\partial_i,\partial_j) = \sum_{(k,l)\in J_\Sigma} \Gamma_{ij}^{(kl)} \partial_{(kl)}, \] where $i,j \in J_\mu$. Denote by $\nabla^\perp$ and $\bar{\nabla}$ the normal and induced connection for $\domain{N}(\Sigma_0,\cdot)$, respectively. By \eqref{eq:LeviCivita}, $\bar{\nabla}$ is a flat connection on $\domain{N}(\Sigma_0,\cdot)$. Then the covariant derivative of $B$ is given by ($i,j,m \in J_\mu$) \[ (\nabla_{\partial_m} B)(\partial_i, \partial_j) = \nabla^\perp_{\partial_m}(B(\partial_i,\partial_j)) - B(\bar{\nabla}_{\partial_m} \partial_i, \partial_j) - B(\partial_i, \bar{\nabla}_{\partial_m} \partial_j) = \nabla^\perp_{\partial_m}(B(\partial_i,\partial_j)), \] where the last identity holds since $\bar{\nabla}$ is flat and the $\partial_i$ come from affine coordinates. Hence, we have for all $i,j,m \in J_\mu$ \begin{align*} (\nabla_{\partial_m} B)(\partial_i, \partial_j) &= \nabla^\perp_{\partial_m}(B(\partial_i,\partial_j)) \\ &= \nabla^\perp_{\partial_m}\Bigl(\sum_{(k,l)\in J_\Sigma} \Gamma_{ij}^{(kl)} \partial_{(kl)}\Bigr) \\ &= \sum_{(k,l) \in J_\Sigma} (\partial_m \Gamma_{ij}^{(kl)}) \partial_{(kl)} + \sum_{(k,l),(r,s) \in J_\Sigma} \Gamma_{ij}^{(kl)} \Gamma_{(kl)m}^{(rs)} \partial_{(rs)}. \end{align*} In this expression, $\partial_m \Gamma_{ij}^{(kl)}=0$ and $\Gamma_{(kl)m}^{(rs)}=0$ due to the equations in \eqref{eq:LeviCivita}. These computations imply that $\nabla^\perp B = 0$, in other words that $\domain{N}(\Sigma_0,\cdot)$ is parallel. On the other hand, to compute $B$ we use \eqref{eq:LeviCivita}, \begin{align*} B(e_i,e_j) =\frac{1}{2}(e_i e_j^\top + e_j e_i^\top) =\frac{1}{2}(E_{ij}+E_{ji}), \end{align*} where we have used the identification of basis vectors with their corresponding partial differential operators. \end{proof} By the previous result, the submanifold $\domain{N}(\Sigma_0,\cdot)$ is not totally geodesic. 
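The failure of $\domain{N}(\Sigma_0,\cdot)$ to be totally geodesic can be seen concretely for $n=1$: by \eqref{eq:LeviCivita}, the curve $\gamma(t)=(\sigma_0^2,t)$ of normal distributions with fixed variance has acceleration $\nabla_{\dot\gamma}\dot\gamma = \dot\gamma\dot\gamma^\top = 1 \neq 0$ in the $\Sigma$-direction, so this straight line in the parameter space is not a geodesic of the Fisher metric.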
Hence, $\domain{N}$ is not the Riemannian product of $\domain{N}(\cdot,\mu_0)$ and $\domain{N}(\Sigma_0,\cdot)$ even though they are mutually orthogonal. \subsection{$\boldsymbol{\domain{N}}$ as a homogeneous space} It is well-known that the affine group $\group{Aff}(n,\mathbbm{R})$ acts transitively on $\domain{N}$ by \begin{equation} (A,b)\cdot(\Sigma,\mu) = (A\Sigma A^\top, A\mu+b), \label{eq:action} \end{equation} where $A\in\group{GL}(n,\mathbbm{R})$, $b\in\mathbbm{R}^n$, $(\Sigma,\mu)\in\domain{N}$. Furthermore, the action remains transitive when restricted to $\group{Aff}^+(n,\mathbbm{R})$. The tangent space $\mathrm{T}_{(I,0)}\domain{N}$ can be identified with the vector space $\mathrm{Sym}(n,\mathbbm{R})\times\mathbbm{R}^n$. Given $(\Sigma,\mu)\in\domain{N}$ and $(X,v)\in\mathrm{T}_{(\Sigma,\mu)}\domain{N}$, the tangent action of $(A,b)\in\group{Aff}(n,\mathbbm{R})$ is \begin{equation} (A,b)\cdot(X,v) = (AXA^\top, Av). \label{eq:action_tangent} \end{equation} Thus we can identify \[ \mathrm{T}_{(\Sigma,\mu)}\domain{N} \ \cong\ (A\cdot\mathrm{Sym}(n,\mathbbm{R})\cdot A^\top)\oplus\mathbbm{R}^n, \] where $AA^\top=\Sigma$. \begin{lem}\label{lem:Aff_isometry} The affine group $\group{Aff}(n,\mathbbm{R})$ acts transitively and isometrically on $\domain{N}$ by \eqref{eq:action}. Moreover, if $R\subset\group{GL}(n,\mathbbm{R})$ denotes the subgroup of lower triangular matrices with positive diagonal entries, then the subgroup $R\ltimes\mathbbm{R}^n$ acts simply transitively on $\domain{N}$. \end{lem} \begin{proof} The transitivity is a well-known fact. It remains to check that \eqref{eq:action} is isometric. 
The tangent action of $(A,b)\in\group{Aff}(n,\mathbbm{R})$ is \eqref{eq:action_tangent}, hence \begin{align*} &\tensor{g}_{(A\Sigma A^\top,A\mu+b)}( (AXA^\top,Av),(AXA^\top,Av) ) \\ &= (Av)^\top (A\Sigma A^\top)^{-1} (Av) + \frac{1}{2}\mathrm{tr}((A\Sigma A^\top)^{-1} AXA^\top (A\Sigma A^\top)^{-1} AXA^\top) \\ &= v^\top \Sigma^{-1} v + \frac{1}{2}\mathrm{tr}(A^{-\top}\Sigma^{-1} X \Sigma^{-1} X A^\top) = v^\top \Sigma^{-1} v + \frac{1}{2}\mathrm{tr}(\Sigma^{-1} X \Sigma^{-1} X) \\ &=\tensor{g}_{(\Sigma,\mu)}((X,v),(X,v)). \end{align*} This shows that the action is isometric. Note that $(A,b)\cdot(I,0)=(I,0)$ is equivalent to $A\in\O(n)$, $b=0$. So the stabilizer of $\group{Aff}(n,\mathbbm{R})$ at $(I,0)$ is $\O(n)$. From the Iwasawa decomposition $\group{GL}(n,\mathbbm{R})=\O(n) R$ it follows that $R\ltimes\mathbbm{R}^n$ acts simply transitively. \end{proof} \subsection[Geometry of $\mathrm{Pos}(n,\mathbbm{R})$]{Geometry of $\boldsymbol{\mathrm{Pos}(n,\mathbbm{R})}$} As a consequence of Proposition \ref{prop:Nmu0_tot_geod} and Theorem \ref{thm:totgeod}, the Fisher metric of the family $\domain{N}(\cdot,\mu_0)$ of normal distributions with mean $\mu_0$ coincides with the restriction of the Fisher metric of $\domain{N}$ to $\domain{N}(\cdot,\mu_0)$. Since all of these submanifolds are isometric, we may take $\mu_0=0$ for convenience. In the following, we will make explicit how $\domain{N}(\cdot,0)$ with its Fisher metric is isometric to a symmetric space $\mathrm{Pos}(n,\mathbbm{R})=\group{GL}(n,\mathbbm{R})/\O(n)$ with a suitably scaled Killing metric. 
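For $n=1$, the isometry we are after is transparent: $\mathrm{Pos}(1,\mathbbm{R})=\mathbbm{R}_{>0}$ carries the Fisher metric $\frac{\d s^2}{2s^2}$, the factor $\mathrm{Pos}_1(1,\mathbbm{R})=\{1\}$ is trivial, and $s\mapsto\log(s)$ is an isometry onto $\mathbbm{R}$ with the metric $\frac{1}{2}\d\alpha^2$. The construction below extends this to arbitrary $n$.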
Consider the product of irreducible Riemannian symmetric spaces \[ M = \mathbbm{R}\times\mathrm{Pos}_1(n,\mathbbm{R}), \] where its Riemannian metric $\tensor{g}_M=\tensor{g}_1\times\tensor{g}_2$ is the product of the metric $\tensor{g}_1$, which is $\frac{1}{2n}$ times the multiplication on $\mathbbm{R}$, and the metric $\tensor{g}_2$ on $\mathrm{Pos}_1(n,\mathbbm{R})$ given by $\tensor{g}_{2,\Sigma}( X,Y ) = \frac{1}{2}\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y)$. Let $\group{GL}(n,\mathbbm{R})$ act on $M$ via \[ A\cdot(\alpha,\Sigma) = (\alpha+2\log(\det(A)),\ \det(A)^{-2/n}A\Sigma A^\top). \] \begin{lem}\label{lem:RxPos1_isometry} The $\group{GL}(n,\mathbbm{R})$-action on $M$ given above is by isometries. \end{lem} \begin{proof} The tangent action of $A\in\group{GL}(n,\mathbbm{R})$ at $(\alpha,\Sigma)$ on $(t,X)\in\mathrm{T}_{(\alpha,\Sigma)}M$ is \[ \d A_{(\alpha,\Sigma)} (t,X) =(t,\ \det(A)^{-2/n} A X A^\top). \] Hence \begin{align*} &\tensor{g}_{M,A\cdot(\alpha,\Sigma)}(\d A_{(\alpha,\Sigma)}(t_1,X_1),\d A_{(\alpha,\Sigma)}(t_2,X_2)) \\ &= \tensor{g}_{M,A\cdot(\alpha,\Sigma)}((t_1,\det(A)^{-2/n}AX_1A^\top),(t_2,\det(A)^{-2/n}AX_2A^\top)) \\ &=\tensor{g}_{1,\alpha+2\log(\det(A))}(t_1,t_2) + \tensor{g}_{2,\det(A)^{-2/n}A\Sigma A^\top}(\det(A)^{-2/n}AX_1A^\top,\det(A)^{-2/n}AX_2A^\top) \\ &= \frac{1}{2n}t_1 t_2 + \frac{1}{2}\mathrm{tr}((\det(A)^{-2/n}A\Sigma A^\top)^{-1}\det(A)^{-2/n}AX_1A^\top (\det(A)^{-2/n}A\Sigma A^\top)^{-1} \det(A)^{-2/n}AX_2A^\top) \\ &= \frac{1}{2n}t_1 t_2 + \frac{1}{2}\mathrm{tr}(\Sigma^{-1} X_1 \Sigma^{-1} X_2) =\tensor{g}_{M,(\alpha,\Sigma)}((t_1,X_1),(t_2,X_2)). \end{align*} Hence the action of $A\in\group{GL}(n,\mathbbm{R})$ is isometric. \end{proof} Now define a map \begin{equation} \Psi:\mathrm{Pos}(n,\mathbbm{R})\to \mathbbm{R}\times\mathrm{Pos}_1(n,\mathbbm{R}), \quad \Sigma\mapsto (\log(\det(\Sigma)),\ \det(\Sigma)^{-1/n}\Sigma). 
\label{eq:symmetric_isometry} \end{equation} Note that for $A\in\group{GL}(n,\mathbbm{R})$, \begin{align*} \Psi(A\cdot\Sigma) &= (\log(\det(A\Sigma A^\top)),\ \det(A\Sigma A^\top)^{-1/n}A\Sigma A^\top) \\ &= (\log(\det(\Sigma))+2\log(\det(A)),\ \det(A)^{-2/n} \det(\Sigma)^{-1/n} A\Sigma A^\top) \\ &= A\cdot(\log(\det(\Sigma)),\ \det(\Sigma)^{-1/n}\Sigma) \\ &=A\cdot\Psi(\Sigma). \end{align*} So the map $\Psi$ is $\group{GL}(n,\mathbbm{R})$-equivariant. We equip the manifold $\mathrm{Pos}(n,\mathbbm{R})$ with the restriction of the Fisher metric \eqref{eq:Fisher_metric} of $\domain{N}$ to $\domain{N}(\cdot,0)$, which is the Fisher metric $\tensor{g}$ of $\domain{N}(\cdot,0)$ by Proposition \ref{prop:Nmu0_tot_geod}. Then $\group{GL}(n,\mathbbm{R})$ acts isometrically on $\mathrm{Pos}(n,\mathbbm{R})$ by Lemma \ref{lem:Aff_isometry}. \begin{prop}\label{prop:symmetric_product} The Riemannian manifold $(\mathrm{Pos}(n,\mathbbm{R}),\tensor{g})$ is isometric to the product $(\mathbbm{R}\times\mathrm{Pos}_1(n,\mathbbm{R}),\tensor{g}_1\times\tensor{g}_2)$ of the irreducible Riemannian symmetric spaces $(\mathbbm{R},\tensor{g}_1)$ and $(\mathrm{Pos}_1(n,\mathbbm{R}),\tensor{g}_2)$. In particular, $(\mathrm{Pos}(n,\mathbbm{R}),\tensor{g})$ is a Riemannian symmetric space. \end{prop} \begin{proof} The map $\Psi$ defined in \eqref{eq:symmetric_isometry} is the desired isometry. In fact, $\Psi$ is $\group{GL}(n,\mathbbm{R})$-equivariant with respect to the isometric $\group{GL}(n,\mathbbm{R})$-actions on $\mathrm{Pos}(n,\mathbbm{R})$ and $M$, and since $\Psi(\Sigma)=\Psi(A\cdot I)=A\cdot\Psi(I)$ (where $\Sigma=AA^\top$), it is enough to show that $\Psi$ is an isometry at $I\in\mathrm{Pos}(n,\mathbbm{R})$. So let $X,Y\in\mathrm{T}_I\mathrm{Pos}(n,\mathbbm{R})\cong\mathrm{Sym}(n,\mathbbm{R})$. 
The differential of $\Psi$ at $I$ is \begin{align*} \d\Psi_I X &= \left.\frac{\d}{\d t}\right|_{t=0} (\log(\det(I+tX)),\ \det(I+tX)^{-1/n}(I+tX)) \\ &= \left(\det(I+tX)^{-1}\frac{\d}{\d t}\det(I+tX),\ \det(I+tX)^{-\frac{1}{n}}\frac{\d}{\d t}(I+tX) - \frac{1}{n}\det(I+tX)^{-\frac{1}{n}-1}\Bigl(\frac{\d}{\d t}\det(I+tX)\Bigr)(I+tX)\right)\biggr|_{t=0} \\ &= \Bigl(\mathrm{tr}(X),\ X-\frac{1}{n}\mathrm{tr}(X)I\Bigr). \end{align*} Then \begin{align*} \tensor{g}_{M,\Psi(I)}(\d\Psi_I X,\d\Psi_I Y) &= \tensor{g}_{M,(0,I)}((\mathrm{tr}(X),X-\tfrac{1}{n}\mathrm{tr}(X)I),(\mathrm{tr}(Y),Y-\tfrac{1}{n}\mathrm{tr}(Y)I)) \\ &= \frac{1}{2n}\mathrm{tr}(X)\mathrm{tr}(Y) + \frac{1}{2}\mathrm{tr}((X-\tfrac{1}{n}\mathrm{tr}(X)I)(Y-\tfrac{1}{n}\mathrm{tr}(Y)I)) \\ &= \frac{1}{2n}\mathrm{tr}(X)\mathrm{tr}(Y) + \frac{1}{2}\mathrm{tr}(XY-\tfrac{1}{n}\mathrm{tr}(X)Y-\tfrac{1}{n}\mathrm{tr}(Y)X)+\frac{1}{2n}\mathrm{tr}(X)\mathrm{tr}(Y) \\ &= \frac{1}{n}\mathrm{tr}(X)\mathrm{tr}(Y) + \frac{1}{2}\mathrm{tr}(XY) -\frac{1}{2n}\mathrm{tr}(\mathrm{tr}(X)Y)-\frac{1}{2n}\mathrm{tr}(\mathrm{tr}(Y)X) \\ &= \frac{1}{2}\mathrm{tr}(XY) = \tensor{g}_I(X,Y). \end{align*} This shows that $\Psi$ is an isometry and concludes the proof of the proposition. \end{proof} \begin{cor}\label{cor:Pos_isometries} $\group{Iso}(\mathrm{Pos}(n,\mathbbm{R}),\tensor{g})^\circ=\group{GL}^+(n,\mathbbm{R})$. \end{cor} \begin{proof} Let $G=\group{Iso}(\mathrm{Pos}(n,\mathbbm{R}),\tensor{g})$ and let $K$ be a subgroup of $G$ such that $G/K=\mathrm{Pos}(n,\mathbbm{R})$. Let $\mathfrak{g}$, $\mathfrak{k}$ denote the respective Lie algebras of $G$, $K$, and $\sigma$ the Cartan involution. 
Since $G/K$ is a product of symmetric spaces by Proposition \ref{prop:symmetric_product}, $\mathfrak{g}$ and $\mathfrak{k}$ split as products $\mathfrak{g}=\mathfrak{g}_1\times\mathfrak{g}_2$ and $\mathfrak{k}=\mathfrak{k}_1\times \mathfrak{k}_2$, $\mathfrak{k}_i\subset\mathfrak{g}_i$, such that $(\mathfrak{g}_1,\sigma)$ and $(\mathfrak{g}_2,\sigma)$ are the symmetric Lie algebras associated to $\mathbbm{R}$ and $\mathrm{Pos}_1(n,\mathbbm{R})$, respectively (cf.~Kobayashi \& Nomizu \cite[Section XI.5]{KN}). Since $\mathrm{Pos}_1(n,\mathbbm{R})=\group{SL}(n,\mathbbm{R})/\group{SO}(n)$ and $\group{SL}(n,\mathbbm{R})$ is simple, $\mathfrak{g}_2=\sl(n,\mathbbm{R})$ by Helgason \cite[Theorem V.4.1]{helgason}. Hence \[ \dim G=\dim\group{SL}(n,\mathbbm{R})+\dim\group{Iso}(\mathbbm{R},\tensor{g}_\mathbbm{R})=(n^2-1)+1=\dim\group{GL}(n,\mathbbm{R}) \] and clearly $\group{GL}(n,\mathbbm{R})\subseteq G$, so that $G^\circ=\group{GL}(n,\mathbbm{R})^\circ=\group{GL}^+(n,\mathbbm{R})$. \end{proof} \subsection{Bundle geometry and foliations on $\domain{N}$} Let $\tensor{g}$ denote the Fisher metric on $\domain{N}$. We can now describe the geometry of $(\domain{N},\tensor{g})$ in terms of Riemannian symmetric spaces. \begin{mthm}\label{mthm:normal_fisher} Consider the family of $n$-variate normal distributions $\domain{N}$ equipped with the Fisher metric $\tensor{g}$, given by \eqref{eq:Fisher_metric}. The following hold: \begin{enumerate} \item $(\domain{N},\tensor{g})$ is a vector bundle \[ \mathbbm{R}^n\longrightarrow \domain{N} \longrightarrow \mathrm{Pos}(n,\mathbbm{R}), \] where the base $\mathrm{Pos}(n,\mathbbm{R})$ is equipped with the Fisher metric and the fiber over $\Sigma$ is $\mathbbm{R}^n$ with scalar product determined by $\Sigma^{-1}$. 
\item The base $\mathrm{Pos}(n,\mathbbm{R})$ can be identified with the totally geodesic submanifold $\domain{N}(\cdot,\mu_0)$ for any $\mu_0\in\mathbbm{R}^n$, and it is isometric to a product of irreducible Riemannian symmetric spaces \[ \mathrm{Pos}(n,\mathbbm{R}) = \mathbbm{R}\times\mathrm{Pos}_1(n,\mathbbm{R}) \] with the metrics on the factors given in Proposition \ref{prop:symmetric_product}. \item The fiber $\mathbbm{R}^n$ over $\Sigma_0$ can be embedded as a parallel submanifold $\domain{N}(\Sigma_0,\cdot)$ for any fixed $\Sigma_0\in\mathrm{Pos}(n,\mathbbm{R})$, and as such it is orthogonal at $(\Sigma_0,\mu_0)\in\domain{N}$ to the embedding of the base as $\domain{N}(\cdot,\mu_0)$. \item The submanifolds $\domain{N}(\cdot,\mu)$ for all $\mu\in\mathbbm{R}^n$ and the submanifolds $\domain{N}(\Sigma,\cdot)$ for all $\Sigma\in\mathrm{Pos}(n,\mathbbm{R})$ form two foliations of $\domain{N}$, the leaves of which are pointwise orthogonal to one another. \end{enumerate} \end{mthm} \begin{proof} $\domain{N}=\mathrm{Pos}(n,\mathbbm{R})\times\mathbbm{R}^n$ is a product of differentiable manifolds, though not of Riemannian manifolds. As such, $\domain{N}$ is trivially a vector bundle with base $\mathrm{Pos}(n,\mathbbm{R})$ and fiber $\mathbbm{R}^n$. By Propositions~\ref{prop:Nmu0_tot_geod} and \ref{prop:NSigma0_parallel}, the submanifold $\domain{N}(\cdot,\mu_0)$ is totally geodesic and the submanifold $\domain{N}(\Sigma_0,\cdot)$ is parallel, and they are orthogonal to each other at $(\Sigma_0,\mu_0)$. Also, the base $\domain{N}(\cdot,\mu_0)$ is isometric to $\mathrm{Pos}(n,\mathbbm{R})$ for every $\mu_0\in\mathbbm{R}^n$. The metrics on base and fiber are clear from \eqref{eq:Fisher_metric}. This proves parts (1) and (3). Part (2) is Proposition \ref{prop:symmetric_product}. For part (4), it is clear that $\domain{N}$ is a union of either of these families of submanifolds, and their pointwise orthogonality is clear from \eqref{eq:Fisher_metric}. 
\end{proof} \section{The symmetric space of normal distributions} \label{sec:symmetric} Due to the difficulty of explicitly computing distances in the Fisher metric on $\domain{N}$, Lovri\v{c}, Min-Oo and Ruh \cite{LMR} suggested replacing the Fisher metric of $\domain{N}$ by the Killing metric of the symmetric space $\mathrm{Pos}_1(n+1,\mathbbm{R})$. For any homogeneous manifold that admits a Riemannian metric $\kappa$ turning it into a symmetric space, we will call $\kappa$ the \emph{Killing metric}. Although the Fisher and the Killing metric are not isometric for $n>1$, they are still quite similar, and distances in the Killing metric can be computed rather easily by exploiting the geometry of the symmetric space, as explained in \cite{LMR}. In this section, we briefly recall the approach by Lovri\v{c} et al.~\cite{LMR} and compare the Killing metric on $\mathrm{Pos}_1(n+1,\mathbbm{R})$ to the Fisher metric on $\domain{N}$. We will find that for many geodesics in the Fisher metric on $\domain{N}$, the geodesics in the Killing metric are good approximations at long distances. \subsection{On the symmetric space $\boldsymbol{\mathrm{Pos}_1(n+1,\mathbbm{R})}$} A diffeomorphism from $\domain{N}$ to $\mathrm{Pos}_1(n+1,\mathbbm{R})$ is given by \begin{equation} \Phi:\domain{N}\to\mathrm{Pos}_1(n+1,\mathbbm{R}),\quad (\Sigma,\mu)\mapsto\frac{1}{\sqrt[n+1]{\det(\Sigma)}}\begin{pmatrix} \Sigma+\mu\mu^\top & \mu\\ \mu^\top & 1 \end{pmatrix}, \label{eq:Phi} \end{equation} in particular, $\dim\domain{N}=\dim\mathrm{Pos}_1(n+1,\mathbbm{R})$. However, $\Phi$ is not an isometry. The tangent space of $\domain{N}$ at $(\Sigma,\mu)=(I_n,0)$ can be identified with $\mathrm{Sym}(n,\mathbbm{R})\oplus\mathbbm{R}^n$. 
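To see that $\Phi$ indeed takes values in $\mathrm{Pos}_1(n+1,\mathbbm{R})$, observe the congruence
\[
\begin{pmatrix} \Sigma+\mu\mu^\top & \mu\\ \mu^\top & 1 \end{pmatrix}
=
\begin{pmatrix} I_n & \mu\\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \Sigma & 0\\ 0 & 1 \end{pmatrix}
\begin{pmatrix} I_n & 0\\ \mu^\top & 1 \end{pmatrix},
\]
which shows that this matrix is positive definite with determinant $\det(\Sigma)$; the normalization by $\det(\Sigma)^{-\frac{1}{n+1}}$ then gives $\det(\Phi(\Sigma,\mu))=1$.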
The differential of $\Phi$ at $(I_n,0)$ is given by \begin{equation} (X,v)\mapsto\d\Phi_{(I_n,0)}(X,v) = \begin{pmatrix} X-\frac{\mathrm{tr}(X)}{n+1}I_n & v\\ v^\top & -\frac{\mathrm{tr}(X)}{n+1} \end{pmatrix}, \label{eq:dPhi} \end{equation} where $X\in\mathrm{Sym}(n,\mathbbm{R})$ and $v\in\mathbbm{R}^n$. \begin{remark} $\Phi$ is not an isometry. For example, $\|(\lambda I_n,0)\|^{\tensor{g}}_{(I_n,0)}=\frac{n}{2}\lambda^2$, but $\|\d\Phi_{(I_n,0)}(\lambda I_n,0)\|^{\kappa}_{\Phi(I_n,0)}=\frac{1}{2}\frac{n}{n+1}\lambda^2$. In fact, the spaces $\domain{N}$ and $\mathrm{Pos}_1(n+1,\mathbbm{R})$ are isometric precisely for $n=1$. \end{remark} The map $\Phi$ allows us to identify the spaces $\domain{N}$ and $\mathrm{Pos}_1(n+1,\mathbbm{R})$, and use the global coordinate system $(\Sigma,\mu)$ to describe the elements of $\mathrm{Pos}_1(n+1,\mathbbm{R})$ as well. In the following we will do so, while suppressing the dependence on $\Phi$ in the notation. \begin{remark} Note that we use a different coordinate system to the one in \cite{LMR}. There, instead of $(\Sigma,\mu)$, the authors use coordinates $(A,\mu)$, where $A$ is the symmetric square root of $\Sigma$, that is $\Sigma=AA^\top$. This explains the absence of certain scalar factors in our formulas \eqref{eq:Phi} and \eqref{eq:dPhi}. It also affects the appearance of the metric \eqref{eq:Killing_metric} below, where in addition we use the different scaling factor $\frac{1}{2}$ rather than $\frac{1}{4}$ for the trace to obtain a symmetric metric from the Killing form of $\sl(n+1,\mathbbm{R})$. \end{remark} For $n\geq 1$, the isometry group of $\mathrm{Pos}_1(n+1,\mathbbm{R})$ is $\group{SL}(n+1,\mathbbm{R})$, which acts on $P\in\mathrm{Pos}_1(n+1,\mathbbm{R})$ by \begin{equation} P\mapsto SPS^\top. 
\label{eq:symmetric_action} \end{equation} The affine group $\group{Aff}^+(n,\mathbbm{R})$ also acts isometrically on $\mathrm{Pos}_1(n+1,\mathbbm{R})$ via the homomorphic embedding \begin{equation} \group{Aff}^+(n,\mathbbm{R})\hookrightarrow\group{SL}(n+1,\mathbbm{R}),\quad (A,b)\mapsto\frac{1}{\sqrt[n+1]{\det(A)}}\begin{pmatrix} A & b\\ 0 & 1 \end{pmatrix}. \label{eq:Aff_embedding} \end{equation} \begin{lem}\label{lem:Phi_equivariant} The diffeomorphism $\Phi$ is equivariant for the $\group{Aff}^+(n,\mathbbm{R})$-actions on $\domain{N}$ and $\mathrm{Pos}_1(n+1,\mathbbm{R})$. In particular, $\group{Aff}^+(n,\mathbbm{R})$ acts transitively on $\mathrm{Pos}_1(n+1,\mathbbm{R})$. \end{lem} \begin{proof} For any $(A,b)\in\group{Aff}^+(n,\mathbbm{R})$ and $(\Sigma,\mu)\in\domain{N}$, with \eqref{eq:action}, \begin{align*} &\Phi((A,b)\cdot(\Sigma,\mu)) = \Phi(A\Sigma A^\top,A\mu+b) \\ &=\frac{1}{\sqrt[n+1]{\det(A)^2\det(\Sigma)}}\begin{pmatrix} A\Sigma A^\top+(A\mu+b)(A\mu+b)^\top & A\mu+b\\ \mu^\top A^\top+b^\top & 1 \end{pmatrix}\\ &=\frac{1}{\sqrt[n+1]{\det(A)^2\det(\Sigma)}} \begin{pmatrix} A\Sigma A^\top + A\mu\mu^\top A^\top+b\mu^\top A^\top+A\mu b^\top+bb^\top & A\mu+b\\ \mu^\top A^\top+b^\top & 1 \end{pmatrix}, \end{align*} and with \eqref{eq:symmetric_action} and \eqref{eq:Aff_embedding}, \begin{align*} &(A,b)\cdot\Phi(\Sigma,\mu) = \frac{1}{\sqrt[n+1]{\det(A)^2\det(\Sigma)}} \begin{pmatrix} A & b\\ 0 & 1 \end{pmatrix} \begin{pmatrix} \Sigma+\mu\mu^\top & \mu\\ \mu^\top & 1 \end{pmatrix} \begin{pmatrix} A^\top & 0\\ b^\top & 1 \end{pmatrix}\\ &=\frac{1}{\sqrt[n+1]{\det(A)^2\det(\Sigma)}} \begin{pmatrix} A\Sigma+A\mu\mu^\top+b\mu^\top & A\mu+b\\ \mu^\top & 1 \end{pmatrix} \begin{pmatrix} A^\top & 0\\ b^\top & 1 \end{pmatrix}\\ &=\frac{1}{\sqrt[n+1]{\det(A)^2\det(\Sigma)}} \begin{pmatrix} A\Sigma A^\top+A\mu\mu^\top A^\top+b\mu^\top A^\top + A\mu b^\top + bb^\top & A\mu+b\\ \mu^\top A^\top + b^\top & 1 \end{pmatrix}. 
\end{align*} Hence $\Phi$ is $\group{Aff}^+(n,\mathbbm{R})$-equivariant. Since $\group{Aff}^+(n,\mathbbm{R})$ acts transitively on $\domain{N}$, it acts transitively on $\mathrm{Pos}_1(n+1,\mathbbm{R})$ as well. \end{proof} The symmetric space $\mathrm{Pos}_1(n+1,\mathbbm{R})$ is irreducible, which means that its Killing metric is, up to a positive multiple, determined by the Killing form of the Lie algebra $\sl(n+1,\mathbbm{R})$. The diffeomorphism $\Phi$ allows us to identify the Killing metric $\kappa$ with its pullback to $\domain{N}$, and thus express it in the $(\Sigma,\mu)$-coordinates of $\domain{N}$. We can choose $\kappa$ suitably scaled such that in the $(\Sigma,\mu)$-coordinates on $\domain{N}$, it is given at $(\Sigma,\mu)=(I_n,0)$ by \begin{equation} \begin{split} \kappa_{(I_n,0)}\bigl((X,v),(Y,w)\bigr) &= \frac{1}{2}\mathrm{tr}(\d\Phi_{(I_n,0)}(X,v)\d\Phi_{(I_n,0)}(Y,w)) \\ &= v^\top w + \frac{1}{2}\mathrm{tr}(XY) - \frac{1}{2(n+1)}\mathrm{tr}(X)\mathrm{tr}(Y). \end{split} \label{eq:Killing_metric0} \end{equation} Here we used \eqref{eq:dPhi} for the differentials. Then at any point $(\Sigma,\mu)\in\domain{N}$, the Killing metric is given by transporting \eqref{eq:Killing_metric0} by the action of the affine group. We obtain \begin{equation} \begin{split} &\kappa_{(\Sigma,\mu)}\bigl((X,v),(Y,w)\bigr)\\ &= v^\top\Sigma^{-1}w+\frac{1}{2}\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y)-\frac{1}{2(n+1)}\mathrm{tr}(\Sigma^{-1}X)\mathrm{tr}(\Sigma^{-1}Y). \label{eq:Killing_metric} \end{split} \end{equation} Note that we use a scaling of the Killing metric $\kappa$ different from the one in \cite{LMR}, to make it resemble the Fisher metric on $\domain{N}$ more closely. Namely, up to the term $-\frac{1}{2(n+1)}\mathrm{tr}(\Sigma^{-1}X)\mathrm{tr}(\Sigma^{-1}Y)$, \eqref{eq:Killing_metric} resembles the Fisher metric \eqref{eq:Fisher_metric} on $\domain{N}$. The similarity becomes more apparent in the following paragraph. 
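The discrepancy recorded in the remark earlier in this subsection can be checked numerically. In the sketch below the Fisher metric is taken in the form $v^\top\Sigma^{-1}w+\frac{1}{2}\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y)$, an assumption consistent with the norm computations in the next subsection; the Killing metric is \eqref{eq:Killing_metric}.

```python
import numpy as np

def fisher(Sigma, X, v, Y, w):
    # Assumed form of the Fisher metric: v^T S^{-1} w + (1/2) tr(S^{-1} X S^{-1} Y)
    Si = np.linalg.inv(Sigma)
    return v @ Si @ w + 0.5 * np.trace(Si @ X @ Si @ Y)

def killing(Sigma, X, v, Y, w):
    # Killing metric (eq:Killing_metric), with the extra trace-product term
    Si = np.linalg.inv(Sigma)
    return (v @ Si @ w + 0.5 * np.trace(Si @ X @ Si @ Y)
            - np.trace(Si @ X) * np.trace(Si @ Y) / (2 * (n + 1)))

n, lam = 4, 0.7
I = np.eye(n)
X, v = lam * I, np.zeros(n)
print(fisher(I, X, v, X, v))   # n/2 * lam^2, as in the earlier remark
print(killing(I, X, v, X, v))  # (1/2) * n/(n+1) * lam^2
```

For the tangent vector $(\lambda I_n,0)$ the two values differ by the factor $\frac{n+1}{n}$, matching the remark.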
\subsection{Killing geodesics and Fisher geodesics in $\boldsymbol{\mathrm{Pos}(n,\mathbbm{R})}$} We will simply speak of \emph{Fisher geodesics} and \emph{Killing geodesics} when referring to geodesics of the Fisher metric $\tensor{g}$ and the Killing metric $\kappa$, respectively. Even though $(\domain{N},\tensor{g})$ and $(\mathrm{Pos}_1(n+1,\mathbbm{R}),\kappa)$ are not isometric, we will see that the corresponding embeddings of the symmetric cone $\mathrm{Pos}(n,\mathbbm{R})$ in both spaces are affinely equivalent. For any fixed $\mu_0\in\mathbbm{R}^n$, define the submanifold \[ P_n(\mu_0)= \Bigl\{\frac{1}{\sqrt[n+1]{\det(\Sigma)}}\begin{pmatrix} \Sigma+\mu_0\mu_0^\top & \mu_0\\ \mu_0^\top & 1 \end{pmatrix}\,\Bigl|\,\Sigma\in\mathrm{Pos}(n,\mathbbm{R}) \Bigr\} \] of the symmetric space $\mathrm{Pos}_1(n+1,\mathbbm{R})$ with Killing metric \eqref{eq:Killing_metric}. Clearly, $P_n(\mu_0)$, just like $\domain{N}(\cdot,\mu_0)$, is diffeomorphic to $\mathrm{Pos}(n,\mathbbm{R})$. \begin{prop}\label{prop:tot_geod2} Consider $P_n(\mu_0)$ for any fixed $\mu_0\in\mathbbm{R}^n$. \begin{enumerate} \item The affine transformation $(I,\mu_0)$ maps $P_n(0)$ isometrically to $P_n(\mu_0)$. In particular, the $P_n(\mu_0)$ are isometric to each other for all $\mu_0$. \item $\Phi(\domain{N}(\cdot,\mu_0))=P_n(\mu_0)$. \item $P_n(\mu_0)$ is a totally geodesic submanifold of $\mathrm{Pos}_1(n+1,\mathbbm{R})$. \item Let $X,Y$ be coordinate vector fields in the $\Sigma$-coordinates on $\mathrm{Pos}_1(n+1,\mathbbm{R})$. Then their covariant derivative with respect to the Killing metric at the point $(\Sigma,\mu_0)$ is \begin{equation} \nabla^{\kappa}_X Y = -\frac{1}{2}(X\Sigma^{-1}Y + Y\Sigma^{-1}X). \label{eq:Killing_LC} \end{equation} \item $\Phi|_{\domain{N}(\cdot,\mu_0)}:\domain{N}(\cdot,\mu_0)\to P_n(\mu_0)$ is an affine equivalence. \end{enumerate} \end{prop} In the proof of this proposition, we use the following formulas by Skovgaard \cite[Lemma 2.3 and its proof]{skovgaard0}. 
Let $X,Y,Z\in\mathrm{Sym}(n,\mathbbm{R})$ and $\Sigma\in\mathrm{Pos}(n,\mathbbm{R})$, and let $\partial_X$ denote the directional derivative in the direction of $X$. Then \begin{equation} \begin{split} \partial_X\mathrm{tr}(Y\Sigma^{-1}) &= -\mathrm{tr}(Y\Sigma^{-1}X\Sigma^{-1}), \\ \partial_X\mathrm{tr}(Y\Sigma^{-1}Z\Sigma^{-1}) &=-\bigl(\mathrm{tr}(Y\Sigma^{-1}X\Sigma^{-1}Z\Sigma^{-1})+\mathrm{tr}(Y\Sigma^{-1}Z\Sigma^{-1}X\Sigma^{-1})\bigr). \end{split} \label{eq:skovgaard2.3} \end{equation} \begin{proof}[Proof of Proposition \ref{prop:tot_geod2}] Part (1) is straightforward to verify using \eqref{eq:symmetric_action}, \eqref{eq:Aff_embedding} and the definition of $P_n(\mu_0)$. Part (2) is straightforward from \eqref{eq:Phi}. If we use the relations for the Levi-Civita connection of $\kappa$ given in \cite[(3.8)]{LMR}, the computation for part (3) is identical to the proof of Proposition \ref{prop:Nmu0_tot_geod}. For part (4), let $X,Y,Z$ be coordinate vector fields in the $\Sigma$-coordinates. We interpret them as tangent vector fields of the totally geodesic submanifold $P_n(\mu_0)$. Define a covariant derivative $\tilde{\nabla}_XY$ on $P_n(\mu_0)$ by \eqref{eq:Killing_LC}.
Use \eqref{eq:skovgaard2.3} together with \eqref{eq:Killing_metric} to find \begin{align*} X\kappa(Y,Z) =\ & \frac{1}{2}\partial_X\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}Z) \\ &-\frac{1}{2(n+1)}\bigl((\partial_X\mathrm{tr}(\Sigma^{-1}Y))\mathrm{tr}(\Sigma^{-1}Z) +\mathrm{tr}(\Sigma^{-1}Y)(\partial_X\mathrm{tr}(\Sigma^{-1} Z))\bigr) \\ =\ & -\frac{1}{2}\bigl(\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}X\Sigma^{-1}Z)+\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}Z\Sigma^{-1}X)\bigr) \\ &+\frac{1}{2(n+1)}\bigl(\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}X)\mathrm{tr}(\Sigma^{-1}Z)+\mathrm{tr}(\Sigma^{-1}Y)\mathrm{tr}(\Sigma^{-1}Z\Sigma^{-1}X)\bigr) \\ \end{align*} and \begin{align*} \kappa(\tilde{\nabla}_X Y,Z) =\ & -\frac{1}{2}\kappa(X\Sigma^{-1}Y+Y\Sigma^{-1}X,Z) \\ =\ &-\frac{1}{4}\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y\Sigma^{-1}Z+\Sigma^{-1}Y\Sigma^{-1}X\Sigma^{-1}Z) \\ &+\frac{1}{4(n+1)}\bigl(\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y)\mathrm{tr}(\Sigma^{-1}Z) +\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}X)\mathrm{tr}(\Sigma^{-1}Z)\bigr), \\ =\ &-\frac{1}{4}\bigl(\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y\Sigma^{-1}Z)+\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}X\Sigma^{-1}Z)\bigr) \\ &+\frac{1}{2(n+1)}\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Y)\mathrm{tr}(\Sigma^{-1}Z), \\ \kappa(Y,\tilde{\nabla}_X Z) =\ &-\frac{1}{4}\bigl(\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}X\Sigma^{-1}Z)+\mathrm{tr}(\Sigma^{-1}Y\Sigma^{-1}Z\Sigma^{-1}X)\bigr) \\ &+\frac{1}{2(n+1)}\mathrm{tr}(\Sigma^{-1}Y)\mathrm{tr}(\Sigma^{-1}X\Sigma^{-1}Z). \end{align*} After applying some identities for the trace and collecting terms, we find that indeed \[ X\kappa(Y,Z)=\kappa(\tilde{\nabla}_X Y,Z)+\kappa(Y,\tilde{\nabla}_X Z). \] By evaluating $\tilde{\nabla}_X Y$ on coordinate vector fields, we readily find that the torsion vanishes. 
Hence $\tilde{\nabla}$ is the Levi-Civita connection of the restriction of $\kappa$ to $P_n(\mu_0)$, and since $P_n(\mu_0)$ is a totally geodesic submanifold, $\tilde{\nabla}$ is the restriction of the Levi-Civita connection $\nabla^{\kappa}$ of $(\mathrm{Pos}_1(n+1,\mathbbm{R}),\kappa)$ to $P_n(\mu_0)$. For part (5), it is evident from comparing \eqref{eq:LeviCivita} and \eqref{eq:Killing_LC} that $\Phi|_{\domain{N}(\cdot,\mu_0)}$ is indeed an affine equivalence from $\domain{N}(\cdot,\mu_0)$ to $P_n(\mu_0)$. \end{proof} \subsection{Distances in $\boldsymbol{\mathrm{Pos}(n,\mathbbm{R})}$} Distances between points contained in $\domain{N}(\cdot,\mu_0)$ for fixed $\mu_0\in\mathbbm{R}^n$ are readily computed using the fact that $\domain{N}(\cdot,\mu_0)$ is a totally geodesic submanifold of $\domain{N}$, and also a Riemannian symmetric space isometric to $\mathrm{Pos}(n,\mathbbm{R})$. The distances in this symmetric space are easy to compute, since they can be reduced to computations in a flat totally geodesic submanifold. \begin{lem}\label{lem:dist_diag} Let $\Delta=\diag(\delta_1,\ldots,\delta_n)\in\mathrm{Diag}(n,\mathbbm{R})\cap\mathrm{Pos}(n,\mathbbm{R})$. The Fisher distance from the identity matrix $I_n$ to $\Delta$ is \[ \dist_{\tensor{g}}(I_n,\Delta) = \sqrt{\frac{1}{2}\sum_{i=1}^n\log(\delta_i)^2}. \] \end{lem} \begin{proof} It is well-known that $\mathrm{Diag}(n,\mathbbm{R})\cap\mathrm{Pos}(n,\mathbbm{R})$ is a maximal flat totally geodesic subspace in $\mathrm{Pos}(n,\mathbbm{R})$. Thus the geodesic $\gamma$ from $I_n$ to $\Delta$ is \[ \gamma(t) = \exp(t\Lambda), \] where $\Lambda=\log(\Delta)$ (this is well-defined since all eigenvalues of $\Delta$ are positive). Then by \eqref{eq:Fisher_metric} for all $t$, \[ \|\gamma'(t)\|_{\gamma(t)}^{\tensor{g}} = \|\Lambda\gamma(t)\|_{\gamma(t)}^{\tensor{g}} = \sqrt{\frac{\mathrm{tr}(\Lambda^2)}{2}}.
\] The distance from $I_n$ to $\Delta$ is then \[ \dist_{\tensor{g}}(I_n,\Delta)=\int_0^1\|\gamma'(t)\|^{\tensor{g}}_{\gamma(t)}\d t =\sqrt{\frac{\mathrm{tr}(\Lambda^2)}{2}} =\sqrt{\frac{1}{2}\sum_{i=1}^n\log(\delta_i)^2} \] with $\Lambda=\diag(\log(\delta_1),\ldots,\log(\delta_n))$. \end{proof} Similar to the procedure described by Lovri\v{c} et al.~\cite[pp.~42--43]{LMR} for $\mathrm{Pos}_1(n+1,\mathbbm{R})$ with the Killing metric, we describe the procedure to derive the Fisher distance formula for elements $S_1=(\Sigma_1,\mu_0)$, $S_2=(\Sigma_2,\mu_0)$ in $\domain{N}(\cdot,\mu_0)$: \begin{enumerate} \item By applying the isometry $(I_n,-\mu_0)\in\group{Aff}^+(n,\mathbbm{R})$, we may assume that $S_1,S_2\in\domain{N}(\cdot,0)$. Under the identification of this submanifold with $\mathrm{Pos}(n,\mathbbm{R})$, we identify $S_i$ with $\Sigma_i$. \item We can write $\Sigma_1=A_1 A_1^\top$ for some $A_1\in\group{GL}^+(n,\mathbbm{R})$. \item Applying the isometry $(A_1^{-1},0)\in\group{Aff}^+(n,\mathbbm{R})$, we have \[ \dist_{\tensor{g}}(\Sigma_1,\Sigma_2)=\dist_{\tensor{g}}(I_n,A_1^{-1}\Sigma_2 A_1^{-\top}). \] We may thus assume that $\Sigma_1=I_n$. Note that this may change the eigenvalues of $\Sigma_2$. \item By applying an isometry $(T,0)$ for some $T\in\O(n)$, we may assume that $A_1^{-1}\Sigma_2 A_1^{-\top}=\Delta$ is a diagonal matrix in $\mathrm{Diag}(n,\mathbbm{R})\cap\mathrm{Pos}(n,\mathbbm{R})$. Now Lemma \ref{lem:dist_diag} applies, and we obtain \begin{equation} \dist_{\tensor{g}}(S_1,S_2) =\dist_{\tensor{g}}(\Sigma_1,\Sigma_2) =\sqrt{\frac{1}{2}\sum_{i=1}^n\log(\lambda_i)^2} \end{equation} for the eigenvalues $\lambda_1,\ldots,\lambda_n$ of the matrix $A_1^{-1}\Sigma_2 A_1^{-\top}$. \end{enumerate} Up to a factor $\frac{1}{\sqrt{2}}$, this coincides with the distance formula for elements in $P_n(\mu_0)$ as computed in \cite{LMR}.
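The steps above translate directly into code. A minimal sketch (function name hypothetical, assuming NumPy): the eigenvalues of $A_1^{-1}\Sigma_2A_1^{-\top}$ do not depend on the choice of factor $A_1$ with $\Sigma_1=A_1A_1^\top$, so a Cholesky factor may be used.

```python
import numpy as np

def fisher_dist_fixed_mu(Sigma1, Sigma2):
    """Fisher distance between (Sigma1, mu0) and (Sigma2, mu0), following steps 1-4."""
    A1 = np.linalg.cholesky(Sigma1)   # Sigma1 = A1 A1^T
    A1inv = np.linalg.inv(A1)
    M = A1inv @ Sigma2 @ A1inv.T      # congruence sending Sigma1 to I_n
    lam = np.linalg.eigvalsh(M)       # eigenvalues lambda_1, ..., lambda_n
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))

# Lemma (lem:dist_diag): distance from I_n to diag(delta_i) is sqrt((1/2) sum log(delta_i)^2)
Delta = np.diag([np.e ** 2, 1.0, np.e ** -1])
print(fisher_dist_fixed_mu(np.eye(3), Delta))  # sqrt(0.5*(4 + 0 + 1)) = sqrt(2.5)
```

Since the $\lambda_i$ are the generalized eigenvalues of the pair $(\Sigma_2,\Sigma_1)$, swapping the arguments replaces each $\lambda_i$ by $1/\lambda_i$ and leaves the distance unchanged, as it must.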
\subsection{Asymptotic geodesics orthogonal to $\boldsymbol{\mathrm{Pos}(n,\mathbbm{R})}$} As we just saw, the lengths of geodesics in $\domain{N}$ tangent to the symmetric submanifolds $\domain{N}(\cdot,\mu_0)$ are rela\-tively easy to compute. Unfortunately, the same cannot be said for geodesics transversal to $\domain{N}(\cdot,\mu_0)$. Although explicit solutions for the Fisher metric's geodesic equation have been found by Calvo and Oller \cite[Section 3]{CO2}, they only yield explicit formulas for the distance between two points in some special cases. In this paragraph, we want to argue that Killing geodesics provide reasonable approximations whose lengths are easy to compute. We introduce some terminology. Let $c:\mathbbm{R}\to\domain{N}$ be a differentiable curve and define the \emph{geodesic defect} of $c$ to be \[ \delta(c) = \lim_{t\to\infty}\frac{1}{t}\int_0^t\|\nabla_{c'(s)}c'(s)\|^{\tensor{g}}_{c(s)}\d s. \] If $\delta(c)=0$, then we call $c$ an \emph{asymptotic geodesic} in the Fisher metric on $\domain{N}$. Note that by this definition, $\delta(c)$ is invariant under isometries of $\domain{N}$. We restrict ourselves to curves with domain of definition $\mathbbm{R}$ here, since below we will only study Killing geodesics $c$, which are complete. Our goal in this paragraph is to compare the behaviour of such Killing geodesics with that of Fisher geodesics, and eventually we will show: \begin{mthm}\label{mthm:asymptotic} Consider the family of $n$-variate normal distributions $\domain{N}$ equipped with the Fisher metric $\tensor{g}$, given by \eqref{eq:Fisher_metric}. Let $c:\mathbbm{R}\to\domain{N}$ be a geodesic for the Killing metric $\kappa$ on $\domain{N}$, given by \eqref{eq:Killing_metric}. Assume that $c(0)=(\Sigma_0,\mu_0)$ and $c'(0)\perp\domain{N}(\cdot,\mu_0)$. Then $c$ is an asymptotic geodesic for the Fisher metric. \end{mthm} The proof requires some preparations. For simplicity, we will assume that \[ c(0)=(I_n,0), \quad c'(0)=(0,e_1). 
\] In the proof of Theorem \ref{mthm:asymptotic} below we see that it is sufficient to treat this case. At the point $(I_n,0)$, the tangent subspace orthogonal to $\mathrm{T}_{(I_n,0)}\domain{N}(\cdot,0)$ is mapped by $\d\Phi$ to \[ V= \Bigl\{ \begin{pmatrix} 0 & v\\ v^\top & 0 \end{pmatrix}\ \Bigl|\ v\in\mathbbm{R}^n \Bigr\}. \] Incidentally, $V$ is also the orthogonal space to $\mathrm{T}_{I_{n+1}}P_n(0)$ for the Killing metric on $\mathrm{Pos}_1(n+1,\mathbbm{R})$. Moreover, $V$ lies in $\mathrm{Sym}_0(n+1,\mathbbm{R})$, the complement of the maximal subalgebra of compact type in the Cartan decomposition of $\sl(n+1,\mathbbm{R})$. \begin{remark} Recall that in any Rie\-mannian symmetric space $M=G/K$, the geodesics through a point $p\in M$ are given as the orbits of one-parameter subgroups \begin{equation} \gamma(t)=\exp(tX)p \quad \text{ for some } X\in\frm, \label{eq:geodesic_opg} \end{equation} where $\mathfrak{g}=\mathfrak{k}\oplus\frm$ is a Cartan decomposition of the Lie algebra of $G$ (cf.~Kobayashi \& Nomizu \cite[Corollary X.2.5]{KN}). In particular, for $M=\mathrm{Pos}(n,\mathbbm{R})$ and $G=\group{GL}(n,\mathbbm{R})$, the subspace $\frm$ is $\mathrm{Sym}(n,\mathbbm{R})$, and for $M=\mathrm{Pos}_1(n,\mathbbm{R})$ and $G=\group{SL}(n,\mathbbm{R})$, the subspace $\frm$ is $\mathrm{Sym}_0(n,\mathbbm{R})$. \end{remark} By this remark, the Killing geodesics tangent to $V$ at $I_{n+1}$ in $\mathrm{Pos}_1(n+1,\mathbbm{R})$ are given by the action \eqref{eq:symmetric_action} of the one-parameter subgroups \[ \exp\begin{pmatrix} 0 & tv\\ tv^\top & 0 \end{pmatrix}. \] \begin{lem}\label{lem:exponential_geodesic} The Killing geodesic $\tilde{c}$ with \[ \tilde{c}(0)=I_{n+1}, \quad \tilde{c}'(0)= \begin{pmatrix} 0 & e_1\\ e_1^\top & 0 \end{pmatrix} \] is given by \begin{equation} \tilde{c}(t) = \begin{pmatrix} \cosh(2t) & 0 & \sinh(2t) \\ 0 & I_{n-1} & 0 \\ \sinh(2t) & 0 & \cosh(2t) \end{pmatrix}.
\label{eq:exponential_geodesic} \end{equation} Its preimage in $\domain{N}$ under the diffeomorphism $\Phi$ is \begin{equation} c(t)= (\Phi^{-1}\circ\tilde{c})(t) =\Bigl(\begin{pmatrix} \cosh(2t)^{-2} & 0\\ 0 & \cosh(2t)^{-1}I_{n-1} \end{pmatrix},\tanh(2t)e_1\Bigr). \label{eq:exponential_geodesic2} \end{equation} \end{lem} \begin{proof} Write $X=\tilde{c}'(0)$. By induction, we find that the even and odd powers of $X$ are \[ X^{2k} = \begin{pmatrix} e_1 e_1^\top & 0\\ 0 & 1 \end{pmatrix}, k \geq 1, \quad X^{2k+1}= \begin{pmatrix} 0 & e_1\\ e_1^\top & 0 \end{pmatrix}, k \geq 0. \] Since $e_1 e_1^\top = E_{11}$, we have \begin{align*} \exp(tX) &= \sum_{k=0}^\infty\frac{t^{2k+1}}{(2k+1)!}X^{2k+1} +\sum_{k=0}^\infty\frac{t^{2k}}{(2k)!}X^{2k} \\ &= \begin{pmatrix} 0 & 0 & \sinh(t)\\ 0 & 0 & 0 \\ \sinh(t) & 0 & 0 \end{pmatrix} +\begin{pmatrix} \cosh(t)& 0 & 0 \\ 0 & I_{n-1} & 0 \\ 0 & 0 & \cosh(t) \end{pmatrix} \\ &= \begin{pmatrix} \cosh(t) & 0 & \sinh(t) \\ 0 & I_{n-1} & 0 \\ \sinh(t) & 0 & \cosh(t) \end{pmatrix}. \end{align*} This one-parameter subgroup acts on $I_{n+1}$ by \[ \exp(tX)I_{n+1}\exp(tX)^\top = \exp(tX)^2 = \exp(2tX) = \begin{pmatrix} \cosh(2t) & 0 & \sinh(2t) \\ 0 & I_{n-1} & 0 \\ \sinh(2t) & 0 & \cosh(2t) \end{pmatrix}, \] which is the desired expression \eqref{eq:exponential_geodesic} for the geodesic $\tilde{c}$. To obtain the expression for $c$, we need the $(\Sigma,\mu)$-coordinates of $\tilde{c}$. By \eqref{eq:Phi}, \[ \frac{1}{\sqrt[n+1]{\det(\Sigma)}}=\cosh(2t), \quad \mu=\tanh(2t)e_1, \] and thus \[ \Sigma=\begin{pmatrix} 1 &0\\ 0& \cosh(2t)^{-1}I_{n-1} \end{pmatrix} -\tanh(2t)^2 e_1e_1^\top = \begin{pmatrix} \cosh(2t)^{-2}&0\\ 0&\cosh(2t)^{-1}I_{n-1} \end{pmatrix}. \] This yields the expression \eqref{eq:exponential_geodesic2} for $c(t)$.
\end{proof} After applying some identities for the hyperbolic functions, we find: \begin{lem}\label{lem:nasty_derivatives} The first and second derivatives of the Killing geodesic $c$ are \begin{align} c'(t) &=\Bigl(\begin{pmatrix} -\frac{4\sinh(2t)}{\cosh(2t)^3} & 0\\ 0 & -\frac{2\sinh(2t)}{\cosh(2t)^2}I_{n-1} \end{pmatrix},\frac{2}{\cosh(2t)^2}e_1\Bigr), \label{eq:dcdt} \\ c''(t) &=\Bigl(\begin{pmatrix} \frac{-8+16\sinh(2t)^2}{\cosh(2t)^4} & 0\\ 0 & \frac{-4+4\sinh(2t)^2}{\cosh(2t)^3}I_{n-1} \end{pmatrix},-\frac{8\sinh(2t)}{\cosh(2t)^3}e_1\Bigr). \label{eq:d2cdt2} \end{align} \end{lem} Using Lemma \ref{lem:nasty_derivatives} and \eqref{eq:LeviCivita}, we can now compute the second covariant derivative of $c(t)=(\Sigma(t),\mu(t))$, \begin{equation} \nabla_{c'(t)}{c'(t)} =c''(t)-\bigl(\Sigma'(t)\Sigma^{-1}(t)\Sigma'(t)-\mu'(t)\mu'(t)^\top,\ \Sigma'(t)\Sigma^{-1}(t)\mu'(t)\bigr), \label{eq:covar_c_0} \end{equation} with \begin{align*} &\Sigma'(t)\Sigma^{-1}(t)\Sigma'(t)\\ &=\begin{pmatrix} -\frac{4\sinh(2t)}{\cosh(2t)^3} & 0\\ 0 & -\frac{2\sinh(2t)}{\cosh(2t)^2}I_{n-1} \end{pmatrix} \begin{pmatrix} \cosh(2t)^{2} & 0\\ 0 & \cosh(2t)I_{n-1} \end{pmatrix} \begin{pmatrix} -\frac{4\sinh(2t)}{\cosh(2t)^3} & 0\\ 0 & -\frac{2\sinh(2t)}{\cosh(2t)^2}I_{n-1} \end{pmatrix}\\ &= \begin{pmatrix} \frac{16\sinh(2t)^2}{\cosh(2t)^4}&0\\ 0&\frac{4\sinh(2t)^2}{\cosh(2t)^3}I_{n-1} \end{pmatrix},\\ &\mu'(t)\mu'(t)^\top = \frac{4}{\cosh(2t)^4}e_1e_1^\top = \begin{pmatrix} \frac{4}{\cosh(2t)^4}&0\\ 0&0 \end{pmatrix}, \\ &\Sigma'(t)\Sigma^{-1}(t)\mu'(t) \\ &=\frac{2}{\cosh(2t)^2}\begin{pmatrix} -\frac{4\sinh(2t)}{\cosh(2t)^3} & 0\\ 0 & -\frac{2\sinh(2t)}{\cosh(2t)^2}I_{n-1} \end{pmatrix} \begin{pmatrix} \cosh(2t)^{2} & 0\\ 0 & \cosh(2t)I_{n-1} \end{pmatrix} e_1 \\ &= -\frac{8\sinh(2t)}{\cosh(2t)^3}e_1. 
\end{align*} We substitute these expressions and \eqref{eq:d2cdt2} in \eqref{eq:covar_c_0} to obtain: \begin{lem}\label{lem:covar_c} The second covariant derivative $\nabla_{c'(t)}c'(t)$ for the Fisher metric in $\domain{N}$ of the Killing geodesic $c$ is \begin{equation} \nabla_{c'(t)}c'(t) = \Bigl( \begin{pmatrix} -\frac{4}{\cosh(2t)^4} & 0 \\ 0 & -\frac{4}{\cosh(2t)^3} I_{n-1} \end{pmatrix} , 0\Bigr). \label{eq:covar_c} \end{equation} In particular, $c$ is not a Fisher geodesic. \end{lem} With this lemma, we can prove Theorem \ref{mthm:asymptotic}. \begin{proof}[Proof of Theorem \ref{mthm:asymptotic}] Let $c$ be a Killing geodesic beginning at a point $c(0)=(\Sigma_0,\mu_0)$ whose initial direction $c'(0)$ is orthogonal to $\domain{N}(\cdot,\mu_0)$. That is, $c'(0)=(0,v)$ for some $v\in\mathbbm{R}^n$. \begin{enumerate} \item We may reparameterize $c$ by rescaling the parameter $t$ such that $\|v\|^{\tensor{g}}_{(\Sigma_0,\mu_0)}=1$. This affects the second covariant derivative of $c$ only by a constant factor. \item By applying an isometry $(A,b)\in\group{Aff}^+(n,\mathbbm{R})$ with $A^\top A=\Sigma_0^{-1}$ and $b=-A\mu_0$, we may assume that $c(0)=(I_n,0)$ and $c'(0)$ is orthogonal to $\domain{N}(\cdot,0)$. \item Then we may apply another isometry $(T,0)\in\group{Aff}(n,\mathbbm{R})$ with $T\in\O(n)$, so that we may assume $c'(0)=(0,e_1)$, while $c(0)=(I_n,0)$ still holds. \end{enumerate} Since the affine group acts isometrically for both the Fisher metric and the Killing metric, the resulting curve $c$ is still a Killing geodesic. By Lemma \ref{lem:covar_c}, the $\mu$-component of $\nabla_{c'(t)}c'(t)$ vanishes, so by \eqref{eq:Fisher_metric} its Fisher norm only involves the $\Sigma$-component, which we denote by $N(t)$: \begin{align*} \|\nabla_{c'(t)}c'(t)\|^{\tensor{g}}_{c(t)} &=\sqrt{\frac{1}{2}\mathrm{tr}\bigl(\Sigma(t)^{-1} N(t) \Sigma(t)^{-1} N(t)\bigr)} \\ &=\sqrt{\frac{1}{2}\mathrm{tr}\begin{pmatrix} \frac{16}{\cosh(2t)^4} & 0 \\ 0 & \frac{16}{\cosh(2t)^4}I_{n-1} \end{pmatrix}} \\ &=\frac{2\sqrt{2n}}{\cosh(2t)^2}.
\end{align*} Now \[ \int_0^t \|\nabla_{c'(s)}c'(s)\|^{\tensor{g}}_{c(s)} \d s = 2\sqrt{2n}\int_0^t\frac{1}{\cosh(2s)^2} \d s = \sqrt{2n}\tanh(2t). \] It follows that \[ \delta(c) = \lim_{t\to\infty}\frac{1}{t}\int_0^t \|\nabla_{c'(s)}c'(s)\|^{\tensor{g}}_{c(s)} \d s = \lim_{t\to\infty}\frac{\sqrt{2n}\tanh(2t)}{t} =0. \] Hence the Killing geodesic $c$ is an asymptotic Fisher geodesic. As the geodesic defect is invariant under isometries of the Fisher metric, this is true for any Killing geodesic with $c'(0)$ orthogonal to $\domain{N}(\cdot,\mu_0)$. \end{proof}
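As a numerical cross-check of this proof (an illustration only), the sketch below recomputes the Fisher norm of $\nabla_{c'(t)}c'(t)$ from the matrix expressions in \eqref{eq:covar_c} and confirms both the closed form $2\sqrt{2n}/\cosh(2t)^2$ and the decay of the geodesic defect.

```python
import numpy as np

n = 3  # sample dimension

def cov_norm(t):
    """Fisher norm of the second covariant derivative; its mu-component is zero."""
    ch = np.cosh(2 * t)
    Sigma_inv = np.diag([ch ** 2] + [ch] * (n - 1))          # inverse of Sigma(t)
    N = np.diag([-4 / ch ** 4] + [-4 / ch ** 3] * (n - 1))   # Sigma-component of (eq:covar_c)
    return np.sqrt(0.5 * np.trace(Sigma_inv @ N @ Sigma_inv @ N))

# matches the closed form 2*sqrt(2n)/cosh(2t)^2 from the proof
print(cov_norm(0.8), 2 * np.sqrt(2 * n) / np.cosh(1.6) ** 2)

# geodesic defect estimate (1/T) * sqrt(2n) * tanh(2T): tends to 0 as T grows
T = np.array([1.0, 10.0, 100.0])
print(np.sqrt(2 * n) * np.tanh(2 * T) / T)
```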
\section{Introduction} Path integrals are widely used in various fields of physics \cite{{Feynman},{Schulman},{Kleinert},{Swanson}}. They are thought to be particularly suited to semi-classical evaluation of quantum mechanical transition amplitudes (or partition functions), since apparently they can incorporate a classical picture much more easily than the operator formalism can. In particular many works on spin systems make use of the spin-coherent-state (i.e., the SU(2)-coherent-state) path integral. To list just a few of the notable applications: precession of a single spin under a constant magnetic field \cite{Kuratsuji}, one-dimensional anti-ferromagnets \cite{Fradkin}, tunneling of a giant spin in a mesoscopic magnet \cite{Chudnovsky}, tunneling of a magnetic domain wall \cite{Braun-Loss}, and so on. The standard starting point, in the case of a single spin of magnitude $S$, is the following expression for the transition amplitude: \begin{mathletters} \beqa && \< {\bf n}_F| e^{-i {\hat H} T / \hbar} |{\bf n}_I \> = \int_{SCS} {\cal D}\theta {\cal D}\phi \exp \left( \frac{i}{\hbar}\tilde{S}_{SCS}[\theta,\phi] \right), \label{ctscspi}\\ && \tilde{S}_{SCS}[\theta,\phi]:=\int_{0}^{T}dt \left\{ \hbar S ( \cos \theta(t) - 1 )\dot{\phi}(t) - H( \theta(t),\phi(t) ) \right\}. \label{ctscspi-action} \eeqa \label{ctscs}\end{mathletters}(Throughout the paper, the equation A:=B denotes that A is defined by B.) Here, $|{\bf n}_{\alpha}\>$ with ${\bf n}_{\alpha}$ being a unit vector ($\alpha =$ {\it I} or {\it F} ) is the spin coherent state in which the spin may be visualized as oriented along ${\bf n}_{\alpha}$. In the integral, $\theta$ and $\phi$ denote polar and azimuthal angles of the spin orientation at intermediate times, and $H(\theta,\phi)$ is the Hamiltonian in the spin-coherent-state representation. (The precise definition of various symbols is given in Sec. II.)
Time $t$ is treated as a continuous parameter in the above expression, which therefore may be called the {\em continuous-time spin-coherent-state path integral}, to be abbreviated as CTSCSPI. As we will see, the expression leads to a grave difficulty. If one tries to evaluate it in the spirit of semi-classical approximation, one fails already at the ``classical level''. If one ignores this failure and proceeds to integrate over fluctuations, one obtains a meaningless result. In order to appreciate the difficulty, it is worthwhile to recall the coherent-state (i.e., boson-coherent-state) path integral, which is in a sense a linear version of the spin-coherent-state path integral. In the case of a single particle governed by a Hamiltonian $H(p,q)$, the relevant transition amplitude is often expressed as \begin{mathletters} \beqa && \< p_F, q_F| e^{-i {\hat H} T / \hbar} |p_I ,q_I \> = \int_{CS} {\cal D}p {\cal D}q \exp \left( \frac{i}{\hbar} \tilde{S}_{CS}[p,q] \right), \label{ctcspi} \\ && \tilde{S}_{CS}[p,q]:=\int_{0}^{T}dt \left\{ \frac{1}{2}( p(t)\dot{q}(t) - \dot{p} (t)q(t) ) - H( p(t),q(t) ) \right\}, \label{ctcspi-action} \eeqa \label{ctcs}\end{mathletters}where $| p_{\alpha} ,q_{\alpha} \>$ ($\alpha =$ {\it I} or {\it F} ) is the coherent state labeled by the complex number $q_{\alpha} + i p_{\alpha}$. This expression may be called the {\em continuous-time coherent-state path integral}, to be abbreviated as CTCSPI. It will be seen that this innocent-looking expression also contains difficulties. In order to appreciate them, it is in turn worthwhile to compare (\ref{ctcs}) with the phase-space path integral expression for a Feynman kernel.
In the case of the single particle, it is often expressed as \begin{mathletters} \beqa && \< q_{F}|e^{-i\hat{H}T/\hbar}| q_{I} \> = \int_{PS} {\cal D}p {\cal D}q \exp \left( \frac{i}{\hbar} S_{PS}[p,q] \right), \label{ctpspi} \\ && S_{PS}[p,q]:= \int_{0}^{T}dt \{ p(t)\dot{q}(t) - H( p(t),q(t) ) \}, \label{ctps-action} \eeqa \label{ctps}\end{mathletters}where $|q_{\alpha}\>$ ($\alpha =$ {\it I} or {\it F} ) is a position eigenket. This expression may be called the {\em continuous-time phase-space path integral}, to be abbreviated as CTPSPI. For the understanding of the announced difficulties associated with (\ref{ctscs}) and (\ref{ctcs}), it is of vital importance to appreciate the difference between (\ref{ctcs}) and (\ref{ctps}) in spite of their apparent similarity. Therefore, we will begin by reviewing the semiclassical evaluation of (\ref{ctps}), which will be followed by that of (\ref{ctcs}) and (\ref{ctscs}). In the latter two cases, in contrast to the case of CTPSPI, one cannot in general find a ``classical path''. Even if one somehow circumvents this difficulty, one obtains a wrong value for the ``classical action''. Fluctuation integrals lead to further nonsensical results. In Sec. IV, we critically review what is to be called Klauder's $\epsilon$-prescription \cite{Klauder}, and point out that ambiguities arise in dealing with fluctuations especially in the case of the spin-coherent-state path integral. Section V is devoted to a thorough re-examination of the whole issue in the proper discrete-time formalism. First, it is shown that the difficulties concerning the ``classical path'' and ``classical action'' disappear; they are shown to be illusions caused by the ambiguity of the continuous-time formalism. Second, it is clarified how the $\epsilon$-prescription is related to the discrete-time formalism.
We present these discussions in a concrete form by explicitly working out the examples of a harmonic oscillator for the coherent-state case and a spin under a constant magnetic field for the spin-coherent-state case, both in the continuous-time and discrete-time formalisms. Our conclusion, then, is that any serious work with (spin-)coherent-state path integrals should be based on their discrete-time form. Part of the difficulties associated with CTSCSPI was previously noted by Funahashi et al.~\cite{{Funahashi1},{Funahashi2}} and by Schilling \cite{Schilling}. The discrete-time formalism was employed by Solari \cite{Solari}, who developed a general method of evaluating the fluctuation integral, and by Funahashi et al., who evaluated the partition function for a single spin under a constant magnetic field \cite{fn1}. However, the nature of the difficulties associated with CTSCSPI seems to have been left unscrutinized; many workers still use CTSCSPI or its Klauder-augmented version (to be called KCTSCSPI) because of their apparent simplicity. We hope that the present paper serves to warn the users of the spin-coherent-state path integral against uncritical use of CTSCSPI or KCTSCSPI. \section{Notation} In the case of the phase-space and coherent-state path integrals, we consider a single-particle system. The position and momentum of the particle are denoted by $q$ and $p$, which are measured in units such that both of them have dimension of $\hbar^{1/2}$. The corresponding operators are marked by a caret. Accordingly \begin{equation} [\hat{q},\hat{p} ] = i \hbar. \eeq We introduce \begin{equation} \hat{a} := \sqrt{\frac{1}{2\hbar}}( \hat{q} + i\hat{p} ),\qquad \hat{a}^{\dagger} := \sqrt{\frac{1}{2\hbar}}( \hat{q} - i\hat{p} ), \label{a-q-p} \eeq and their c-number counterparts: \begin{equation} \xi := \sqrt{\frac{1}{2\hbar}}( q + ip ) ,\qquad \xi^* := \sqrt{\frac{1}{2\hbar}}( q - ip ) .
\label{xi-q-p} \eeq It is to be understood that $\xi_\alpha$ and $ \xi^*_\alpha$ are related to $q_{\alpha}$ and $p_{\alpha}$ in the above fashion for any subscript $\alpha$. The coherent state is defined in the standard way as \begin{equation} |p,q \> \equiv |\xi \> := \exp \left[ \xi \hat{a}^{\dagger} - \xi^{*} \hat{a}\right] | 0 \> = \exp \left[ \frac{i}{\hbar} ( p \hat{q} - q \hat{p} ) \right] | 0 \>, \label{coherent-state} \eeq with \begin{equation} \hat{a} |0\> = 0, \qquad \<0|0\> = 1. \eeq Hence \begin{mathletters} \beqa && |\xi\> = e^{-|\xi|^2/2} e^{\xi \hat{a}^\dagger} |0\>, \qquad \hat{a} | \xi\> = \xi |\xi\>, \\ && \<\xi | \xi'\> = \exp \left[ -\frac{1}{2}( |\xi|^{2} + |\xi'|^{2} ) + \xi^{*}\xi' \right]. \label{cs-innerp} \eeqa \end{mathletters}For an illustration we treat the harmonic oscillator governed by the Hamiltonian \begin{equation} \hat{H} \equiv H (\hat{p}, \hat{q}) := \frac{1}{2}(\hat{p}^2 + \hat{q}^2 - \hbar) = \hbar \hat{a}^{\dagger} \hat{a}. \label{cs-ho-hamiltonian} \eeq (By convention $H$ has the dimension of $\hbar$, and time is dimensionless.) Under this Hamiltonian the coherent state evolves in time as \begin{equation} e^{-i\hat{H}T/\hbar} | \xi \> = | \xi e^{-iT} \>. \eeq It follows that \beqa \<p_F,q_F| e^{-i {\hat H} T / \hbar} |p_I, q_I\> &=& \<\xi_F|e^{-i {\hat H} T / \hbar}|\xi_I\> \nonumber \\ &=& \< \xi_{F} | \xi_{I} e^{-iT} \> = \exp \left[ -\frac{1}{2} ( | \xi_{F} |^{2} + | \xi_{I} |^{2} ) + \xi^{*}_{F} \xi_{I} e^{-iT} \right]. \label{exact-cs-ta} \eeqa The matrix element of ${\hat H}$ in the coherent-state representation is given by \begin{equation} {\cal H}(\xi^*, \xi') := \frac{\<\xi|{\hat H} |\xi'\>}{\<\xi| \xi'\>} = \hbar \xi^* \xi'. \label{cs-hamiltonian} \eeq This is a function of $\xi^*$ and $\xi'$ alone and involves neither $\xi$ nor $(\xi')^*$. (This property holds not only for a harmonic oscillator but also for any system.)
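The closed-form transition amplitude for the harmonic oscillator above can be cross-checked against a direct sum in the Fock basis, where $e^{-i\hat{H}T/\hbar}|k\rangle=e^{-ikT}|k\rangle$. The sketch below (illustrative values, hypothetical function names) does exactly this.

```python
import numpy as np
from math import factorial

def amplitude_fock(xi_F, xi_I, T, nmax=60):
    """<xi_F| exp(-iHT/hbar) |xi_I> for H = hbar a^dag a, summed in a truncated Fock basis."""
    pref = np.exp(-0.5 * (abs(xi_F) ** 2 + abs(xi_I) ** 2))
    s = sum((np.conj(xi_F) * xi_I * np.exp(-1j * T)) ** k / factorial(k)
            for k in range(nmax))
    return pref * s

def amplitude_closed(xi_F, xi_I, T):
    # exp[-(|xi_F|^2 + |xi_I|^2)/2 + xi_F^* xi_I exp(-iT)], the exact result above
    return np.exp(-0.5 * (abs(xi_F) ** 2 + abs(xi_I) ** 2)
                  + np.conj(xi_F) * xi_I * np.exp(-1j * T))

xi_I, xi_F, T = 0.8 + 0.3j, -0.5 + 1.1j, 2.7
print(amplitude_fock(xi_F, xi_I, T))
print(amplitude_closed(xi_F, xi_I, T))  # the two agree to machine precision
```

The agreement is exact because the Fock sum is just the Taylor series of $\exp\bigl(\xi_F^*\xi_I e^{-iT}\bigr)$, truncated far beyond convergence.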
In the case of the spin-coherent-state path integral, we consider a system of a single spin of magnitude $S$. The dimensionless spin operator is denoted by ${\hat {\bf S}}$, whose components obey \begin{equation} [\hat{S}_x, \hat{S}_y ] = i \hat{S}_z, \qquad {\rm and ~cyclic}. \eeq We introduce an auxiliary unit vector ${\bf n}$ whose polar and azimuthal angles are $\theta$ and $\phi$, respectively, and also a complex number $\xi$ corresponding to the Riemann projection of ${\bf n}$: \begin{equation} \xi := e^{i \phi} \tan \frac{\theta}{2}, \qquad \xi^* := e^{-i \phi} \tan \frac{\theta}{2} . \label{xi-phi-theta} \eeq It is to be understood that $\xi_\alpha$ and $ \xi^*_\alpha$ are related to $\phi_\alpha$ and $\theta_\alpha$ in the above fashion for any subscript $\alpha$. The spin coherent state \cite{Radcliffe} is defined as \beqa |{\bf n}\> \equiv |\xi\> &:=& \exp \left[- \zeta^{*} \hat{S}_{+} + \zeta \hat{S}_{-} \right] | S \> \nonumber \\ &=& ( 1 + |\xi|^2 )^{-S} \sum_{M=-S}^{S}\left(\frac{(2S)!}{(S-M)!(S+M)!}\right)^{1/2} \xi^{S-M} | M \>, \qquad \zeta := e^{i\phi}\frac{\theta}{2}, \label{spin-coherent-state} \eeqa with \begin{equation} \hat{S}_\pm = \hat{S}_x \pm i\hat{S}_y , \qquad \hat{S}_z |S\> = S |S\> , \qquad \<S| S\> = 1. \eeq Hence \begin{mathletters}\beqa && {\bf n} \cdot {\hat {\bf S}} |\xi\> = S |\xi\>, \\ && \<\xi|\xi'\> = \frac{(1 + \xi^{*}\xi')^{2S}}{(1 + |\xi|^{2})^{S}( 1 + |\xi'|^{2})^{S}}. \label{scs-innerp} \eeqa \end{mathletters}For an illustration we treat the system governed by the Hamiltonian \begin{equation} \hat{H} := - \hbar \hat{S}_z \label{scs-constmag-hamiltonian} \eeq which represents a spin under a constant magnetic field. It is useful to note that \begin{equation} e^{i \hat{S}_z T} \hat{S}_\pm e^{- i \hat{S}_{z} T} = e^{ \pm i T} \hat{S}_\pm.\eeq Therefore, under the Hamiltonian (\ref{scs-constmag-hamiltonian}), the spin coherent state evolves in time as \begin{equation} e^{-i \hat{H} T} |\xi\> = \left.
|\xi\> \right|_{\zeta \rightarrow \zeta e^{- i T}} = e^{iST}|\xi e^{-iT} \>. \eeq It follows that \beqa \<{\bf n}_F| e^{-i {\hat H} T / \hbar} |{\bf n}_I\> &=& \<\xi_F| e^{-i {\hat H} T / \hbar} |\xi_I\> \nonumber \\ &=& \frac{( 1 + \xi^{*}_{F}\xi_{I}e^{-iT})^{2S}}{(1 + |\xi_{F}|^{2})^{S}(1 + |\xi_{I}|^{2})^{S}}e^{iST}. \label{exact-scs-ta} \eeqa The matrix element of ${\hat H}$ in the spin-coherent-state representation is given by \begin{equation} {\cal H}(\xi^*, \xi') := \frac{\<\xi|{\hat H} |\xi'\>}{\<\xi| \xi'\>} = - \hbar S \frac{1 - \xi^* \xi'}{1 + \xi^* \xi'}. \label{scs-hamiltonian} \eeq The remark made on Eq.~(\ref{cs-hamiltonian}) applies to this equation as well. \section{Continuous-Time Formalism} In this section we discuss stationary-action approximations for continuous-time path integrals. \subsection{Continuous-Time Phase-Space Path Integral} One would expect that CTPSPI (\ref{ctps}) is dominated by the stationary-action path $(p^{cl}(t),q^{cl}(t))$ at which the action $S_{PS}[p,q]$ is stationary. The stationary-action path is determined by \begin{mathletters} \beqa && 0 = \left. \frac{\delta S_{PS}[p,q]}{\delta p(t)} \right|_{cl} = \dot{q}^{cl}(t) - \left. \frac{\partial H(p,q)}{\partial p} \right|_{cl}, \label{eqmo-ctps-sap1} \\ && 0 = \left. \frac{\delta S_{PS}[p,q]}{\delta q(t)} \right|_{cl} = -\dot{p}^{cl}(t) - \left. \frac{\partial H(p,q)}{\partial q} \right|_{cl}, \label{eqmo-ctps-sap2} \eeqa\label{eqmo-ctps-sap}\end{mathletters}where the symbol $|_{cl}$ indicates the replacement $(p,q) \rightarrow (p^{cl}(t),q^{cl}(t))$ after differentiation. On inspection of the left-hand side of (\ref{ctpspi}), one would argue that the space of paths to be integrated over is defined by the "boundary condition" \begin{equation} q(T) = q_{F}, \qquad q(0) = q_{I}, \label{bc-ctps1} \eeq and accordingly that the same condition should be imposed on the purported dominant path: \begin{equation} q^{cl}(T) = q_{F}, \qquad q^{cl}(0)=q_{I}.
\label{bc-ctps2} \eeq Being a couple of first-order differential equations, the above set of equations has a solution under this boundary condition \cite{fn2}. Note that no boundary condition is imposed on $p^{cl}$; the values of $p^{cl}(T)$ and $p^{cl}(0)$ are determined {\em a posteriori}. Obviously the stationary-action path is a solution of the Hamilton equation of motion and deserves the name "classical path"; hence the superscript $cl$. Once a classical path is found, one may decompose $(p(t),q(t))$ at intermediate times into a sum of $(p^{cl}(t),q^{cl}(t))$ and fluctuations around it, thereby proceeding to integration over the fluctuations. The rest is a well-known story and need not be repeated here. The result so obtained is known to be correct. (One might as well note a subtle point which is often ignored; the claim that the phase-space path integral is dominated by the classical path as defined above is not correct. See Sec. V.) Since $S_{PS}[p,q]$ remains real throughout the calculation, the above procedure may also be called a stationary-phase approximation. \subsection{Continuous-Time Coherent-State Path Integral} Let us review CTCSPI in parallel with the previous subsection. Let the stationary-action path be $(p^S(t),q^S(t))$, which is determined by \begin{mathletters} \beqa && 0 = \left. \frac{\delta \tilde{S}_{CS}[p,q]}{\delta p(t)} \right|_S = \dot{q}^{S}(t) - \left. \frac{\partial H(p,q)}{\partial p} \right|_{S}, \label{eqmo-ctcs-sap1} \\ && 0 = \left. \frac{\delta \tilde{S}_{CS}[p,q]}{\delta q(t)} \right|_S = -\dot{p}^{S}(t) - \left. \frac{\partial H(p,q)}{\partial q} \right|_{S}, \label{eqmo-ctcs-sap2} \eeqa\label{eqmo-ctcs-sap}\end{mathletters}where the symbol $|_S$ indicates the replacement $(p,q) \rightarrow (p^S(t),q^S(t))$ after differentiation. This set of equations is also identical with the Hamilton equation of motion.
On inspection of the left-hand side of (\ref{ctcspi}), one would now think that the space of paths in the present case is defined not by the boundary condition (\ref{bc-ctps1}) but by \begin{equation} p(T) = p_{F},\qquad q(T) = q_{F}, \qquad p(0) = p_{I}, \qquad q(0) = q_{I}, \label{bc-ctcs-qp} \eeq and accordingly require \begin{equation} p^{S}(T) = p_{F},\qquad q^{S}(T) = q_{F}, \qquad p^{S}(0) = p_{I}, \qquad q^{S}(0) = q_{I}. \label{bc-ctcs-qp2} \eeq However a couple of first-order differential equations cannot in general accommodate a set of four conditions. This is the first difficulty \cite{Schul}. A way to evade this difficulty would be to note that the above boundary condition is motivated by the notation $\<p_F, q_F|$ and $|p_I, q_I\>$, which is rather misleading. Unlike the ket $|q\>$ corresponding to the definite position $q$, the state $|p,q\>$ does not correspond to a definite "momentum and position". (In the latter state, both momentum and position have indeterminacy of ${\cal O}(\hbar^{1/2})$.) It is more appropriate to label the state by a single complex number $\xi$ which is related to $(p, q)$ via Eq.~(\ref{xi-q-p}). Accordingly, one would re-write Eqs.~(\ref{eqmo-ctcs-sap}) in terms of \begin{equation} \xi^S(t) := \sqrt{\frac{1}{2\hbar}}( q^S(t) + i p^S(t)) , \qquad {\rm and} \qquad {\bar \xi}^S(t) := \sqrt{\frac{1}{2\hbar}}( q^S(t) - i p^S(t)), \label{sap-xi-q-p} \eeq where the symbol ${\bar \xi}^S(t)$ is used instead of $\{ \xi^S(t) \}^*$ for the reason to be explained shortly.
Equivalently one may re-express the action in terms of $\xi$ and $\xi^*$ as \beqa S_{CS}[\xi^*, \xi] &:=& \tilde{S}_{CS}\left[ \sqrt{\frac{\hbar}{2}} \frac{1}{i}(\xi - \xi^*), \sqrt{\frac{\hbar}{2}} (\xi + \xi^*)\right]\nonumber\\ &=& \int_0^T dt \left\{\frac{i\hbar}{2}( \xi^*(t) \dot{\xi}(t) - \dot{\xi}^{*}(t) \xi(t) ) - {\cal H}(\xi^{*}(t) , \xi(t) ) \right\}, \label{action-ctcs-xi} \eeqa and vary it with respect to $\xi$ and $\xi^*$ formally regarding them as mutually independent. One would then find \begin{equation} \dot{\xi}^{S}(t) = -\left.\frac{i}{\hbar}\frac{\partial {\cal H}(\xi^{*},\xi)}{\partial \xi^{*}}\right|_{S} = -i \xi^{S}(t), \qquad \dot{\bar{\xi}^{S}}(t) = \left.\frac{i}{\hbar}\frac{\partial {\cal H}(\xi^{*},\xi)}{\partial \xi}\right|_{S} = i \bar{\xi}^{S}(t), \label{eqmo-ho-ctcs-xi} \eeq where the symbol $|_{S}$ indicates the replacement $(\xi^{*},\xi) \to (\bar{\xi}^{S}(t),\xi^{S}(t))$ after differentiation, and the last equalities hold for the harmonic oscillator. Now one would make a crucial observation; since the normalization factor can be taken care of separately, the state $|\xi_\alpha \>$ ($\alpha = {\it I}$ or ${\it F}$) may be replaced by \begin{equation} |\xi_\alpha) := e^{|\xi_\alpha|^2/2} |\xi_\alpha \> = e^{\xi_\alpha {\hat a}^\dagger}|0 \>. \label{nonnorm-cs} \eeq Then the amplitude $(\xi_F| e^{-i\hat{H}T/\hbar}|\xi_I)$ does not depend on $\xi_F$ or on $\xi_I^*$ but depends only on $\xi_F^*$ and $\xi_I$. One could thus argue that the relevant space of paths is defined by the boundary condition \cite{Itzykson} \begin{equation} \xi^{*}(T) = \xi^{*}_{F},\qquad \xi(0) = \xi_{I}, \label{bc-ctcs-xi2} \eeq and that the boundary condition to be imposed on the set of Eqs.~(\ref{eqmo-ho-ctcs-xi}) is \begin{equation} \bar{\xi}^{S}(T) = \xi^{*}_{F}, \qquad \xi^{S}(0) = \xi_{I}.
\label{bc-ctcs-xi} \eeq With this boundary condition, the equations can be solved to yield \begin{equation} \xi^{S}(t) = \xi_{I} e^{-it}, \qquad \bar{\xi}^{S}(t) = \xi^{*}_{F} e^{ i( t - T )}. \label{sol-ho-ctcs} \eeq The price to be paid is that ${\bar \xi}^S(t)$ is in general different from the complex conjugate of $\xi^S(t)$. This is the reason for the notation used. As a result, $(p^S(t),q^S(t))$ related to $(\xi^S(t), {\bar \xi}^S(t))$ via (\ref{sap-xi-q-p}) are not in general real. However, they are real if and only if \begin{equation} \xi_F = \xi_I e^{-iT},\label{special-bc} \eeq in which case they describe a classical path, that is, a real solution of the Hamilton equation of motion. The appearance of a complex stationary-action path does not by itself cause any difficulty; contours of integration over each $p(t)$ and $q(t)$ at intermediate times, which are originally defined to be along the real axis, may be distorted into the respective complex plane so that they as a whole constitute a steepest-descent contour through the saddle point $(p^S,q^S)$. The steepest-descent method of course entails the decomposition of $p(t)$ and $q(t)$ into a sum of the stationary-action path and fluctuations around it. In this procedure, the action $\tilde{S}_{CS}[p,q]$ does not remain real but becomes complex in general. Hence it is inappropriate to call the procedure a stationary-phase approximation. By the same token it is misleading to call the stationary-action path a classical path. (In passing, note that Eqs.~(\ref{eqmo-ho-ctcs-xi}) with the boundary condition (\ref{bc-ctcs-xi}) have a solution for arbitrary $T$ in contrast to the case of Eqs.~(\ref{eqmo-ctps-sap}) with the boundary condition (\ref{bc-ctps2}).) One might thus hope that the CTCSPI could be worked out by imposing the boundary condition (\ref{bc-ctcs-xi}).
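A minimal numerical sketch (an addition, assuming only NumPy) confirms these statements: the solution (\ref{sol-ho-ctcs}) satisfies the boundary condition (\ref{bc-ctcs-xi}), but $\bar{\xi}^S(t)$ coincides with $\{\xi^S(t)\}^*$ only under the special condition (\ref{special-bc}).

```python
import numpy as np

def stationary_path(xi_I, xi_F, T, t):
    # Eq. (sol-ho-ctcs): xi^S(t) = xi_I e^{-it},  bar{xi}^S(t) = xi_F^* e^{i(t-T)}
    return xi_I*np.exp(-1j*t), np.conj(xi_F)*np.exp(1j*(t - T))

T = 2.0
t = np.linspace(0.0, T, 201)

# generic endpoints: bar{xi}^S(t) is NOT the complex conjugate of xi^S(t)
xiS, barS = stationary_path(0.8+0.3j, -0.2+0.6j, T, t)
mismatch = np.max(np.abs(barS - np.conj(xiS)))

# special endpoints xi_F = xi_I e^{-iT} (special-bc): the path is real-classical
xiS2, barS2 = stationary_path(0.8+0.3j, (0.8+0.3j)*np.exp(-1j*T), T, t)
mismatch2 = np.max(np.abs(barS2 - np.conj(xiS2)))
print(mismatch, mismatch2)
```

The first mismatch is of order unity for generic endpoints, while the second vanishes to rounding error, in line with the discussion above.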
Unfortunately the action $S_{CS}^{SAP}$ associated with the stationary-action path \beqa S_{CS}^{SAP} &:=& S_{CS}[\bar{\xi}^{S},\xi^{S}]\nonumber\\ &=& \int_{0}^{T} \left\{ \frac{1}{2} \left(\bar{\xi}^{S}(t)\left.\frac{\partial {\cal H}(\xi^{*},\xi)}{\partial \xi^{*}}\right|_{S} + \left.\frac{\partial {\cal H}(\xi^{*},\xi)}{\partial \xi}\right|_{S}\xi^{S}(t) \right) - {\cal H}(\bar{\xi}^{S}(t),\xi^{S}(t) ) \right\} \eeqa vanishes in the case of the harmonic oscillator for which ${\cal H}$ is bilinear in $\xi^{*}$ and $\xi$. This is the second difficulty, since one would have hoped that $S_{CS}^{SAP}$ agrees with the exponent of (\ref{exact-cs-ta}); in the quasi-classical situation where $p_\alpha$ and $q_\alpha$ are regarded as of ${\cal O}(\hbar^0)$, the exponent is of ${\cal O}(\hbar^{-1})$. Suppose one disregarded this difficulty and proceeded to make the following replacement in (\ref{ctcs}): \begin{equation} \xi(t) = \xi^{S}(t) + \eta(t) , \qquad \xi^{*}(t) = \bar{\xi}^{S}(t) + \eta^{*}(t) ,\label{expand-xiseta} \eeq where $\eta^*(t)$ is the complex conjugate of $\eta(t)$ \cite{independent}. Since the boundary condition has been taken care of by the stationary-action path, one would restrict $\eta(t)$ so that \begin{equation} \eta^{*}(T) = 0, \qquad \eta(0) = 0. \label{bc-ctcs-fluc-eta} \eeq By definition of the stationary-action path, the action does not contain terms linear in the fluctuations $\eta(t)$. It takes the form \begin{mathletters} \beqa &&S_{CS}[ \bar{\xi}^{S} + \eta^{*}, \xi^{S} + \eta ] \simeq S_{CS}^{SAP} + S^{(2)}_{CS}[\eta^{*} , \eta] , \label{expand-sapfluc}\\ && S^{(2)}_{CS}[ \eta^{*} , \eta ] := i\hbar \int_{0}^{T} dt \eta^{*}(t)\left(\frac{d}{dt} + i \right) \eta(t).
\eeqa \end{mathletters}If one formally integrated over the fluctuations, one would obtain \begin{equation} \int {\cal D}\eta {\cal D}\eta^{*} \exp\left( \frac{i}{\hbar} S^{(2)}_{CS}[ \eta^{*} , \eta] \right) \propto \left( \det\left( \frac{d}{dt} + i \right) \right)^{-1} , \label{int-ctcs-fluc} \eeq where the formal determinant denotes the product of the eigenvalues of the differential operator $ d/dt + i$, which is supposed to act on the space of functions satisfying the Dirichlet boundary condition (\ref{bc-ctcs-fluc-eta}). Obviously, this differential operator does not possess an eigenfunction. Hence the formal determinant does not exist. This is the third difficulty. \subsection{Continuous-Time Spin-Coherent-State Path Integral} It is easy to see that the situation with CTSCSPI is largely the same as in the case of CTCSPI. Thus, a stationary-action path $(\theta^S(t), \phi^S(t))$ satisfying the boundary condition \begin{equation} \theta^{S}(T) = \theta_{F},\qquad \phi^{S}(T) = \phi_{F}, \qquad \theta^{S}(0) = \theta_{I},\qquad \phi^{S}(0) =\phi_{I} \label{bc-ctscs-phitheta} \eeq does not exist. Again, the spin coherent state $|\xi\>$, apart from the normalization factor $(1 + |\xi|^2)^{-S}$, does not depend on $\xi^*$ but depends only on $\xi$. Hence one would proceed as follows. One would re-express the action as \beqa S_{SCS}[\xi^*, \xi] &&:= \tilde{S}_{SCS}\left[2 \tan^{-1}(|\xi|), \frac{1}{2i}\ln \left(\frac{\xi}{\xi^{*}}\right)\right] \nonumber \\ &&= \int_{0}^{T}dt \left[\frac{i\hbar S}{ 1 + |\xi(t)|^{2}} \left( \xi^{*}(t)\dot{\xi}(t) - \dot{\xi}^{*}(t)\xi(t) \right) - {\cal H} ( \xi^{*}(t) , \xi(t) ) \right] \label{action-ctscs} \eeqa and vary it with respect to $\xi$ and $\xi^*$, formally regarding them as mutually independent. One would then find \begin{mathletters}\beqa && \dot{\xi}^{S}(t) = - \left.
\frac{i( 1 + \bar{\xi}^{S}(t)\xi^{S}(t) )^{2}}{2\hbar S} \frac{\partial {\cal H}(\xi^{*} , \xi) }{\partial \xi^{*}}\right|_{S} = -i \xi^{S}(t), \\ && \dot{\bar{\xi}^{S}}(t) = \left. \frac{i( 1 + \bar{\xi}^{S}(t)\xi^{S}(t) )^{2}}{2\hbar S} \frac{\partial {\cal H}(\xi^{*} , \xi) }{\partial \xi}\right|_{S} = i \bar{\xi}^{S}(t), \label{eqmo-constmag-ctscs-xi} \eeqa \end{mathletters}where the last equalities hold for the spin under a constant magnetic field. Arguing that the boundary condition to be imposed is (\ref{bc-ctcs-xi}), one would obtain the solution which is formally identical to (\ref{sol-ho-ctcs}). The action $S_{SCS}^{SAP}$ associated with this stationary-action path would then be found as \beqa S_{SCS}^{SAP} &:=& S_{SCS}[ \bar{\xi}^{S}, \xi^{S} ] \nonumber \\ &=& \int_{0}^{T} dt \left\{ \frac{1}{2}( 1 + \bar{\xi}^{S}(t)\xi^{S}(t)) \left( \bar{\xi}^{S}(t)\left.\frac{\partial {\cal H}(\xi^{*} , \xi) }{\partial \xi^{*}}\right|_{S} + \left.\frac{\partial {\cal H}(\xi^{*} , \xi) }{\partial \xi}\right|_{S} \xi^{S}(t) \right) - {\cal H} ( \bar{\xi}^{S}(t) , \xi^{S}(t) ) \right\} \nonumber \\ &=& \hbar ST, \label{specialcasevalue} \eeqa which does not lead to the desired result (\ref{exact-scs-ta}) except for the special case of (\ref{special-bc}). Integration over the fluctuations also leads to the same sort of difficulty as in CTCSPI. \section{Klauder's $\epsilon$-Prescription} Klauder \cite{Klauder} insisted on having a stationary-action path satisfying the boundary condition (\ref{bc-ctcs-qp2}). He augmented the action by what is to be called Klauder's $\epsilon$-term. According to him, it is motivated by the metric of the relevant phase space. \subsection{Klauder's Continuous-Time Coherent-State Path Integral} In the case of the coherent state, the phase space is a plane whose metric is $(dp)^2 + (dq)^2$ or equivalently $2\hbar |d\xi|^{2}$.
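The two metric expressions quoted here for the plane (and, in the next subsection, for the sphere of the spin case) can be verified with finite differences. The sketch below (an addition, assuming only NumPy and $\hbar = 1$) checks $(dp)^2 + (dq)^2 = 2\hbar|d\xi|^2$ for the map (\ref{xi-q-p}) and $(d\theta)^2 + \sin^2\theta\,(d\phi)^2 = 4(1+|\xi|^2)^{-2}|d\xi|^2$ for the Riemann projection (\ref{xi-phi-theta}).

```python
import numpy as np

hbar = 1.0

def xi_plane(p, q):
    # Eq. (xi-q-p): xi = (q + i p)/sqrt(2 hbar)
    return (q + 1j*p)/np.sqrt(2*hbar)

def xi_sphere(theta, phi):
    # Eq. (xi-phi-theta): Riemann projection of the unit vector n
    return np.exp(1j*phi)*np.tan(theta/2)

# plane: (dp)^2 + (dq)^2 equals 2 hbar |dxi|^2 (the map is linear, so exactly)
p, q, dp, dq = 0.3, -1.1, 1e-6, 2e-6
dxi = xi_plane(p+dp, q+dq) - xi_plane(p, q)
plane_err = abs((dp**2 + dq**2) - 2*hbar*abs(dxi)**2)/(dp**2 + dq**2)

# sphere: (dtheta)^2 + sin^2(theta)(dphi)^2 vs 4(1+|xi|^2)^{-2}|dxi|^2
th, ph, dth, dph = 0.8, 0.5, 1e-6, 2e-6
dxi = xi_sphere(th+dth, ph+dph) - xi_sphere(th, ph)
ds2 = dth**2 + np.sin(th)**2*dph**2
ds2_xi = 4*abs(dxi)**2/(1 + abs(xi_sphere(th, ph))**2)**2
sphere_err = abs(ds2 - ds2_xi)/ds2
print(plane_err, sphere_err)
```

The plane check holds to rounding error; the sphere check agrees up to the ${\cal O}(d\theta)$ error of the finite-difference approximation.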
Klauder's augmented action, to be denoted by $S_{KCS}[\xi^{*},\xi]$, reads \begin{mathletters} \beqa S_{KCS}[\xi^{*},\xi] := S_{CS-\epsilon}[\xi^{*},\xi] + S_{CS}[\xi^{*},\xi] , \label{action-kctcs} \eeqa where \beqa S_{CS-\epsilon}[\xi^{*},\xi] := \int_0^T dt \frac{i\hbar}{2}\epsilon|\dot{\xi}(t)|^{2} \label{action-eterm-kctcs} \eeqa \label{kaluder-action}\end{mathletters}with $\epsilon$ being an infinitesimal positive number. Accordingly Klauder's stationary-action path, to be denoted by $(\bar{\xi}^{KS}(t), \xi^{KS}(t))$ and specialized to the harmonic oscillator, obeys the following set of equations: \begin{mathletters} \beqa && \frac{\epsilon}{2}\ddot{\bar{\xi}}~^{KS}(t) + \dot{\bar{\xi}}~^{KS}(t) - i\bar{\xi}^{KS}(t) = 0 , \label{eqmo-kctcs-sap1} \\ && \frac{\epsilon}{2}\ddot{\xi}^{KS}(t) - \dot{\xi}^{KS}(t) - i\xi^{KS}(t) = 0 . \label{eqmo-kctcs-sap2} \eeqa \label{eqmo-kctcs-sap}\end{mathletters} This set of equations, being a couple of second-order differential equations, can accommodate the boundary condition (\ref{bc-ctcs-qp2}), namely \beqa \bar{\xi}^{KS}(T) = \xi^{*}_{F},\qquad \xi^{KS}(T)=\xi_{F},\qquad \bar{\xi}^{KS}(0) = \xi^{*}_{I},\qquad \xi^{KS}(0) =\xi_{I}. \eeqa Although these equations can be solved for arbitrary $\epsilon$, we may proceed as follows for an infinitesimal $\epsilon$. In Eq. (\ref{eqmo-kctcs-sap1}), the first term is effective only for the initial interval $0 < t \stackrel{<}{\sim} \epsilon$, where it forces $\bar{\xi}^{KS}(t)$ to change sharply. Thus it is convenient to put \beqa \bar{\xi}^{KS}(t) = \bar{\chi}(t)\bar{\xi}^{S}(t) \label{chixis1} \eeqa with $\bar{\xi}^{S}(t)$ given by (\ref{sol-ho-ctcs}). Here $\bar{\chi}(t)$ is {\em essentially} unity (i.e., approximately equal to unity with corrections only of ${\cal O}(\exp(-T/\epsilon))$) except for the initial interval, where it changes sharply so as to ensure the condition $\bar{\xi}^{KS}(0) = \xi^{*}_{I}$.
Hence $\bar{\chi}(t), \dot{\bar{\chi}}(t)$ and $\ddot{\bar{\chi}}(t)$ are at most of ${\cal O}(\epsilon ^{0}), {\cal O}(\epsilon ^{-1})$ and ${\cal O}(\epsilon ^{-2})$, respectively. Similar consideration applies to Eq. (\ref{eqmo-kctcs-sap2}). Thus, if we put \beqa \xi^{KS}(t) = \chi(t)\xi^{S}(t) \label{chixis2} \eeqa with $\xi^{S}(t)$ given by (\ref{sol-ho-ctcs}), then $\chi(t)$ is essentially unity except for the final interval $T-\epsilon \stackrel{<}{\sim} t < T$, where it changes sharply. Up to ${\cal O}(\epsilon^{0})$, Eq. (\ref{eqmo-kctcs-sap}) takes the form \beqa &&\frac{\epsilon}{2}\ddot{\bar{\chi}}(t) + ( 1 + i\epsilon)\dot{\bar{\chi}}(t) = 0 , \\ &&\frac{\epsilon}{2}\ddot{\chi}(t) - ( 1 + i\epsilon)\dot{\chi}(t) = 0 . \eeqa This may be solved to yield \beqa &&\bar{\chi}(t) = 1 + (\bar{\chi}(0)-1)e^{- \lambda t} + {\cal O}(\epsilon) , \\ &&\chi(t) = 1 + (\chi(T) - 1)e^{- \lambda(T-t)} + {\cal O}(\epsilon), \eeqa where \beqa \bar{\chi}(0) = \xi^{*}_{I}/\bar{\xi}^{S}(0), \qquad \chi(T) = \xi_{F}/\xi^{S}(T), \qquad \lambda = 2 \left(\frac{1}{\epsilon} + i \right). \eeqa This solution reproduces Eq. (15) of \cite{Klauder} with the exponent $2/\epsilon$ there replaced by $ (2/\epsilon + i)$. (This correction is needed for the solution to satisfy Eq. (\ref{eqmo-kctcs-sap}) up to ${\cal O}(\epsilon^{0})$, although it does not affect the ensuing discussion.) Klauder's stationary-action path is depicted in Fig. 1 in terms of $(p^{KS}(t),q^{KS}(t))$. Remarkably the corresponding value of the action gives the desired exponent of (\ref{exact-cs-ta}): \beqa \frac{i}{\hbar} S_{KCS}^{SAP} := \frac{i}{\hbar} S_{KCS}[\bar{\xi}^{KS}, \xi^{KS}] = -\frac{1}{2} ( |\xi_{F}|^{2} + | \xi_{I} |^{2} ) + \xi^{*}_{F} \xi_{I} e^{-iT}+ {\cal O}(\epsilon). \label{sa-kctcs} \eeqa Moreover it can be shown that \beqa S_{CS-\epsilon}[\bar{\xi}^{KS}, \xi^{KS}] = {\cal O}(\epsilon).
\label{sa-eterm} \eeqa This follows from the property that the potentially dangerous integrand $\dot{\bar{\chi}}(t)\dot{\chi}(t)$ is proportional to $\epsilon^{-2}\exp(-2T/\epsilon)$, which is exponentially small. Hence, although Klauder's $\epsilon$-term plays an important role in ensuring the existence of, and determining, Klauder's stationary-action path, it does not contribute to the stationary value of the action. In view of this circumstance, one might be tempted to follow what might be called the "semi-$\epsilon$ prescription" \cite{fn4}: \begin{quote} (i) Adopt Eq.~(\ref{eqmo-kctcs-sap}) and the boundary condition (\ref{bc-ctcs-qp2}) to determine $(\bar{\xi}^{KS}(t), \xi^{KS}(t))$. \\ (ii) Once this has been done, discard $S_{CS-\epsilon}[\xi^{*}, \xi]$ altogether, and work with $S_{CS}[\xi^{*}, \xi]$ alone to evaluate fluctuation integrals as well as the stationary value of the action. \end{quote} Unfortunately, as shown in the next section, this "semi-$\epsilon$ prescription" fails; integration over fluctuations leads to a non-sensical result. The correct result \cite{fn5} may be obtained only if Klauder's $\epsilon$-term is {\em properly discretized} and kept. There is no unique scheme of discretization, and the {\em proper discretization} amounts to working with the discrete-time formalism throughout (see the end of Sec. V.B.). \subsection{Klauder's Continuous-Time Spin-Coherent-State Path Integral} In the case of the spin-coherent state, the relevant phase space is a sphere whose metric is $(d {\bf n})^2 (=(d\theta)^{2} + (\sin \theta)^{2}(d\phi)^{2}) $, or equivalently $4(1+|\xi|^{2})^{-2}|d\xi|^{2}$. Klauder's augmented action, to be denoted by $S_{KSCS}[\xi^{*},\xi]$, now reads \begin{mathletters} \beqa && S_{KSCS}[\xi^{*},\xi] := S_{SCS-\epsilon}[\xi^{*},\xi] + S_{SCS}[\xi^{*},\xi], \label{action-kctscs}\\ && S_{SCS-\epsilon}[\xi^{*},\xi] := \int_{0}^{T} dt \frac{i\hbar S \epsilon |\dot{\xi}(t)|^{2}}{(1+|\xi(t)|^{2})^{2}}.
\label{action-eterm-kctscs} \eeqa \label{klauder-scs-action}\end{mathletters}Klauder's stationary-action path specialized to the spin under the constant magnetic field obeys \begin{mathletters} \beqa &&\frac{\epsilon}{2}\left\{ \ddot{\bar{\xi}}~^{KS}(t) - \frac{2\xi^{KS}(t)(\dot{\bar{\xi}}~^{KS}(t))^{2}}{1 + \bar{\xi}^{KS}(t)\xi^{KS}(t) } \right\} + \dot{\bar{\xi}}~^{KS}(t) - i\bar{\xi}^{KS}(t) = 0 , \label{keqmo-scspi1}\\ &&\frac{\epsilon}{2}\left\{ \ddot{\xi}^{KS}(t) - \frac{2 \bar{\xi}^{KS}(t)(\dot{\xi}^{KS}(t))^{2}}{1 + \bar{\xi}^{KS}(t)\xi^{KS}(t) } \right\} - \dot{\xi}^{KS}(t) - i\xi^{KS}(t) = 0 . \label{keqmo-scspi2} \eeqa \label{keqmo-scspi}\end{mathletters}As in the previous section we employ the substitution (\ref{chixis1}), (\ref{chixis2}) to obtain the following equation which is correct up to ${\cal O}(\epsilon)$: \begin{mathletters} \beqa && \frac{\epsilon}{2} \ddot{\bar{\chi}}(t) + (1+i\epsilon)\dot{\bar{\chi}}(t) - \frac{\epsilon R^{S}\chi(t)}{1+R^{S}\bar{\chi}(t)\chi(t)}(\dot{\bar{\chi}}(t) + 2i\bar{\chi}(t))\dot{\bar{\chi}}(t) =0, \label{chieqmo-scspi1}\\ && \frac{\epsilon}{2} \ddot{\chi}(t) - (1+i\epsilon)\dot{\chi}(t) - \frac{\epsilon R^{S}\bar{\chi}(t)}{1+R^{S}\bar{\chi}(t)\chi(t)}(\dot{\chi}(t) - 2i\chi(t))\dot{\chi}(t) = 0 \label{chieqmo-scspi2} \eeqa \label{chieqmo-scspi}\end{mathletters}where \beqa R^{S} := \bar{\xi}^{S}(t)\xi^{S}(t) = \xi^{*}_{F}\xi_{I}e^{-iT} \eeqa is a constant. The nonlinearity of the spin coherent state manifests itself in the last terms proportional to $R^{S}$. Since the one in Eq. (\ref{chieqmo-scspi1}) is multiplied by $\dot{\bar{\chi}}(t)$, which vanishes except for the initial interval, we may replace $\chi(t)$ there by unity. (Recall that $\chi(t)$ should be essentially unity except for the final interval.) Similarly, $\bar{\chi}(t)$ in Eq. (\ref{chieqmo-scspi2}) may be replaced by unity.
We can straightforwardly solve the resulting equations to find \begin{mathletters} \beqa &&\bar{\chi}(t) = \frac{1 + R^{S}\bar{\chi}(0) + (\bar{\chi}(0) -1)e^{-\mu t}}{1 + R^{S} \bar{\chi}(0) -(\bar{\chi}(0)-1)R^{S}e^{-\mu t}} + {\cal O}(\epsilon), \label{solchi-scspi1} \\ &&\chi(t) = \frac{1+R^{S}\chi(T)+(\chi(T)-1)e^{-\mu (T-t)}}{1 + R^{S}\chi(T)-(\chi(T)-1)R^{S}e^{-\mu (T-t)}} +{\cal O}(\epsilon) , \label{solchi-scspi2} \eeqa\label{solchi-scspi}\end{mathletters}where \beqa \bar{\chi}(0) = \xi^{*}_{I}/\bar{\xi}^{S}(0) ,\qquad \chi(T) = \xi_{F}/\xi^{S}(T),\qquad \mu = 2\left( \frac{1}{\epsilon} + i \frac{1-R^{S}}{1+R^{S}} \right). \eeqa This solution reproduces Eqs. (51-52) of \cite{Klauder} with the exponent $2/\epsilon$ there replaced by $\mu$. The same mechanism as in the coherent-state case gives the result \beqa S_{SCS-\epsilon}[\bar{\xi}^{KS},\xi^{KS}] = {\cal O}(\epsilon). \eeqa Hence \beqa S^{SAP}_{KSCS} &:=& S_{KSCS}[\bar{\xi}^{KS},\xi^{KS}] \nonumber \\ &=& i\hbar S \int_{0}^{T}dt \frac{R^{S}}{1+R^{S}\bar{\chi}(t)\chi(t)}(\bar{\chi}(t)\dot{\chi}(t)-\dot{\bar{\chi}}(t)\chi(t)) \nonumber \\ &+& \int_{0}^{T}dt \left\{i\hbar S \frac{\bar{\chi}(t)\chi(t)}{1+R^{S}\bar{\chi}(t)\chi(t)}(\bar{\xi}^{S}(t)\dot{\xi}^{S}(t) - \dot{\bar{\xi}}~^{S}(t)\xi^{S}(t))-{\cal H}(\bar{\xi}^{S}(t),\xi^{S}(t)) \right\}. \eeqa The first integral is essentially equal to \beqa i\hbar S \int_{0}^{T}dt \left(\frac{R^{S}\dot{\chi}(t)}{1+R^{S}\chi(t)}-\frac{R^{S}\dot{\bar{\chi}}(t) }{1+R^{S}{\bar{\chi}}(t)} \right) = i\hbar S \ln \frac{(1+|\xi_{F}|^{2})(1+|\xi_{I}|^{2})}{(1+R^{S})^{2}}, \eeqa while the second integral is the same as (\ref{specialcasevalue}) apart from a correction of ${\cal O}(\epsilon)$ since $\bar{\chi}(t)\chi(t)$ is essentially unity except for the initial and final intervals. The correct answer (\ref{exact-scs-ta}) is thus reproduced. However one encounters a difficulty when it comes to evaluating the fluctuation integral (see the end of Sec. V.C.).
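For the harmonic oscillator, Eq.~(\ref{eqmo-kctcs-sap2}) with the two-point condition $\xi^{KS}(0)=\xi_I$, $\xi^{KS}(T)=\xi_F$ is a linear boundary-value problem and can be solved exactly from its characteristic roots. The sketch below (an addition, assuming only NumPy) confirms the boundary-layer picture of Klauder's path: both endpoint conditions hold, while away from the final interval the path follows $\xi_I e^{-it}$ up to ${\cal O}(\epsilon)$.

```python
import numpy as np

def klauder_path(xi_I, xi_F, T, eps, t):
    """Exact solution of (eps/2) xi'' - xi' - i xi = 0  [Eq. (eqmo-kctcs-sap2)]
    with the two-point conditions xi(0) = xi_I, xi(T) = xi_F."""
    # characteristic roots of (eps/2) r^2 - r - i = 0
    r1, r2 = np.roots([eps/2, -1, -1j])
    r_fast, r_slow = (r1, r2) if r1.real > r2.real else (r2, r1)
    # write xi(t) = A e^{r_slow t} + B e^{r_fast (t-T)} to avoid overflow
    M = np.array([[1.0, np.exp(-r_fast*T)],
                  [np.exp(r_slow*T), 1.0]])
    A, B = np.linalg.solve(M, np.array([xi_I, xi_F]))
    return A*np.exp(r_slow*t) + B*np.exp(r_fast*(t - T))

T, eps = 2.0, 1e-3
xi_I, xi_F = 0.8+0.3j, -0.2+0.6j
t = np.linspace(0.0, T, 2001)
xi = klauder_path(xi_I, xi_F, T, eps, t)

# both endpoint conditions are met ...
end_err = max(abs(xi[0] - xi_I), abs(xi[-1] - xi_F))
# ... while at mid-interval the path follows xi^S(t) = xi_I e^{-it}
mid_err = abs(xi[1000] - xi_I*np.exp(-1j*t[1000]))
print(end_err, mid_err)
```

The fast root $r_{fast} \approx 2/\epsilon + i$ produces exactly the sharp final-interval behaviour of $\chi(t)$ described above, with the exponent $\lambda = 2(1/\epsilon + i)$ of the text.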
\section{Discrete-Time Formalism} In order to resolve various difficulties encountered in the continuous-time formalism, we go back to the basic definition of the amplitudes which the path integrals in question are supposed to represent. Again it is instructive to begin by reviewing the familiar case of the Feynman kernel. \subsection{Discrete-Time Phase-Space Path Integral} By a repeated use of the resolution of unity \beqa \int dp |p\>\<p| = 1, \qquad \int dq |q\>\<q| = 1, \eeqa the Feynman kernel is expressed as \begin{mathletters} \beqa && \< q_{F} | e^{ -i\hat{H}T/\hbar } | q_{I} \> = \lim_{N \to \infty} \int \prod_{n=1}^{N-1} dq_{n} \prod_{n=1}^{N} \frac{dp_{n}}{2\pi\hbar} \exp\left( \frac{i}{\hbar} {\cal S}_{PS}[ \{p\} , \{q\} ] \right) , \label{f-kernel-disps}\\ && {\cal S}_{PS}[\{p\},\{q\}] := \sum_{n=1}^{N} \left\{( q_{n} - q_{n-1} )p_n - \varepsilon H ( p_{n} , q_{n} ) \right\} \label{action-disps} \eeqa \label{discrete-cs-action}\end{mathletters}with \beqa q_{N} \equiv q_{F}, \qquad q_{0} \equiv q_{I}, \qquad \varepsilon \equiv T/N, \eeqa and $\{p\}$ and $\{q\}$ standing for the sets $\{p_1, p_2, \cdots, p_N \}$ and $\{q_1, q_2, \cdots, q_{N-1} \}$, respectively. (To be precise ${\cal S}_{PS}$ should carry index $N$, which is omitted for brevity.) This is what we call the {\it discrete-time phase-space path integral} (DTPSPI). It is true that the time becomes effectively continuous in the limit $N \rightarrow \infty$. But this should not blur the distinction from the continuous-time formalism. What counts is that $N$ is kept finite until the very end of calculation. For large $N$, one might be tempted to re-write the first term of (\ref{action-disps}) as \beqa \sum_{n=1}^{N} ( q_n - q_{n-1} ) p_n = \sum_{n=1}^{N} \varepsilon \frac{ q_n - q_{n-1} }{\varepsilon} p_n \sim \int_0^T dt \dot{q}(t)p(t), \eeqa thereby claiming to have reduced it to the first term of (\ref{ctps-action}).
But this argument is not warranted, because at this stage the integrand does not contain any factor which would ensure that $q_n - q_{n-1}$ is "small"; CTPSPI does {\em not} automatically follow from DTPSPI. The multiple integral over the $2N -1$ variables may be evaluated by the stationary-phase method. Let $( \{p^{cl}\}, \{q^{cl}\} )$ be the stationary point of the action ${\cal S}_{PS}[\{p\}, \{q\}]$: \begin{mathletters} \beqa && 0 = \left. \frac{\partial {\cal S}_{PS}[\{p\},\{q\}]}{\partial p_1} \right|_{cl} = q_1^{cl} - q_I - \left. \varepsilon \frac{\partial H(\{p\},\{q\})}{\partial p_1} \right|_{cl} , \\ && 0 = \left. \frac{\partial {\cal S}_{PS}[\{p\},\{q\}]}{\partial p_n} \right|_{cl} = q_n^{cl} - q_{n-1}^{cl} - \left. \varepsilon \frac{\partial H(\{p\},\{q\})}{\partial p_n} \right|_{cl} : 2 \leq n \leq N-1, \\ && 0 = \left. \frac{\partial {\cal S}_{PS}[\{p\},\{q\}]}{\partial p_N} \right|_{cl} = q_F - q_{N-1}^{cl} - \left. \varepsilon \frac{\partial H(\{p\},\{q\})}{\partial p_N} \right|_{cl} , \\ && 0 = \left. \frac{\partial {\cal S}_{PS}[\{p\},\{q\}]}{\partial q_n} \right|_{cl} = p_n^{cl} - p_{n+1}^{cl} - \left. \varepsilon \frac{\partial H(\{p\},\{q\})}{\partial q_n} \right|_{cl} : 1 \leq n \leq N-1. \eeqa \end{mathletters}These constitute a set of $2N-1$ equations for the same number of unknowns. There is no room for a "boundary condition"; such a notion does not exist in the context of this set of equations. On inspection one finds it convenient to {\em define} \beqa q_N^{cl} := q_F, \qquad q_0^{cl} := q_I. \label{bc-disps} \eeqa (Remember that $q^{cl}_N$ and $q^{cl}_0$ did not exist among the variables.) With this definition, the above set of equations takes the compact form: \begin{mathletters} \beqa && q_n^{cl} - q_{n-1}^{cl} = \left. ~~~\varepsilon \frac{\partial H(\{p\},\{q\})}{\partial p_n} \right|_{cl} \qquad : 1 \leq n \leq N, \label{eqmo-disps-sap1}\\ && p_{n+1}^{cl} - p_{n}^{cl} = - \left.
\varepsilon \frac{\partial H(\{p\},\{q\})}{\partial q_n} \right|_{cl}\qquad : 1 \leq n \leq N-1. \label{eqmo-disps-sap2} \eeqa \label{eqmo-disps-sap}\end{mathletters}One may regard this as a set of $2N-1$ equations for the $2N+1$ variables $\{p^{cl}_1, p^{cl}_2, \cdots, p^{cl}_N; q^{cl}_0, q^{cl}_1, \cdots, q^{cl}_N \}$, and regard Eq.~(\ref{bc-disps}) as the boundary condition to be imposed. In this way the notion of the boundary condition can be introduced, if desired, merely as a matter of convenience \cite{fn1-1}. The factor $\varepsilon$ on the right-hand side ensures that $q_n^{cl} - q_{n-1}^{cl}$ and $p_{n+1}^{cl} - p_{n}^{cl}$ are of ${\cal O} (\varepsilon)$. Hence, in place of the above difference equation, one may solve the differential equations (\ref{eqmo-ctps-sap}) with the boundary condition (\ref{bc-ctps2}). The solution to the difference equation is then obtained as \beqa q_n^{cl} = q^{cl}(t_n) + {\cal O} (\varepsilon), \qquad p_n^{cl} = p^{cl}(t_n) + {\cal O} (\varepsilon), \label{sol-sap-ctps} \eeqa where $t_n := n \varepsilon = (n/N)T$. This is the rationale for Eqs.~(\ref{eqmo-ctps-sap}) and (\ref{bc-ctps2}) encountered in CTPSPI. Although we mentioned in subsection III.A that the evaluation of the fluctuation integral in CTPSPI is a routine matter, it can be handled only after some "discretization", which however is not unique. By contrast, in DTPSPI, such a notion as "discretization" does not appear; the fundamental formula is discrete by definition. Hence the fluctuation integral can be performed without ambiguity. Decomposing the integration variables as \beqa p_n = p_n^{cl} + {\sl p}_n, \qquad q_n = q_n^{cl} + {\sl q}_n, \label{separated-path-disps} \eeqa one finds \begin{equation} {\cal S}_{PS}[\{p\},\{q\}] \simeq S_{PS}^{cl} + {\cal O}(\varepsilon) + {\cal S}^{(2)}_{PS}[\{{\sl p}\},\{{\sl q}\}] \label{separated-action-disps} \eeq where $\simeq$ indicates that only terms up to the second order in fluctuations are kept.
We have made use of \beqa {\cal S}_{PS}[\{p^{cl}\},\{q^{cl}\}] = S_{PS}^{cl} + {\cal O}(\varepsilon) \label{classical-action-disps} \eeqa with $S_{PS}^{cl} := S_{PS}[p^{cl},q^{cl}]$. The Feynman kernel specialized to the harmonic oscillator now becomes \beqa \<q_F|e^{-i\hat{H}T/\hbar}|q_I\>&=& \exp\left( \frac{i}{\hbar}S_{PS}^{cl} \right)\lim_{N\to \infty} \int \prod_{n=1}^{N-1} d{\sl q}_{n} \prod_{n=1}^{N} \frac{d{\sl p}_{n}}{2\pi\hbar}\nonumber \\ &&\times\exp \left[\frac{i}{\hbar}\sum_{n=1}^{N}\left\{ ({\sl q}_{n} - {\sl q}_{n-1}){\sl p}_{n} - \varepsilon \left(\frac{{\sl p}^{2}_{n}}{2} + \frac{{\sl q}^{2}_{n}}{2} \right) \right\} \right] \eeqa which involves Gaussian integrals. However, each integration variable ${\sl p}_n$ appears in the exponent in the form $i\varepsilon {\sl p}_n^2/\hbar$. Consequently it ranges effectively over the region $|{\sl p}_n| \stackrel{<}{\sim} (\hbar/\varepsilon)^{1/2}$, which covers the entire real axis as $\varepsilon$ tends to 0 in the limit $N \rightarrow \infty$ . Thus, ${\sl p}_n$'s cannot be regarded as constituting small fluctuations at all; the picture that the phase-space path integral is dominated by the classical path $(\{p^{cl}\},\{q^{cl}\})$ is erroneous. Fortunately, in the case of the harmonic oscillator (or of a non-relativistic particle in general), integrations over ${\sl p}_n$'s can be carried out exactly. The result is a configuration-space path integral, in which ${\sl q}_n$'s can indeed be regarded as constituting small fluctuations since they appear in the exponent in the form $i{\sl q}_n^2/(\hbar\varepsilon)$. \subsection{Discrete-Time Coherent-State Path Integral} In dealing with the case of the coherent-state path integral, it is more convenient to work with $\xi$ related to $(p,q)$ via Eq.~(\ref{xi-q-p}).
By a repeated use of the resolution of unity \begin{equation} \int \frac{d\xi d\xi^{*}}{2\pi i}|\xi\>\<\xi| = 1 ,\qquad \frac{d\xi d\xi^{*}}{2\pi i} := \frac{d(\Re \xi)d(\Im \xi)}{\pi} = \frac{dpdq}{2\pi\hbar}, \eeq the amplitude in question is expressed as \begin{mathletters} \beqa && \< \xi_{F} | e^{-i\hat{H}T/\hbar} | \xi_{I} \> = \lim_{N \to \infty}\int \prod_{n=1}^{N-1} \frac{d\xi_{n}d\xi^{*}_{n}}{2 \pi i} \exp\left( \frac{i}{\hbar}{\cal S}_{CS}[ \{\xi^{*}\} ,\{ \xi \} ] \right) , \label{ta-discs} \\ && \frac{i}{\hbar}{\cal S}_{CS}[ \{\xi^{*}\} ,\{ \xi \} ] = \sum_{n=1}^{N} \left[ -\frac{1}{2}( |\xi_{n}|^{2} + |\xi_{n-1}|^{2} ) + \xi^{*}_{n} \xi_{n-1} -\frac{i}{\hbar}\varepsilon {\cal H}( \xi^{*}_{n} , \xi_{n-1} )\right] , \label{action-discs} \eeqa \end{mathletters}where the convention in the previous subsection is followed, that is, \begin{equation} \xi_{N} \equiv \xi_{F} ,\qquad \xi_{0} \equiv \xi_{I}, \qquad \varepsilon \equiv T/N, \eeq and $\{\xi\}$ stands for the set $\{\xi_1, \xi_2, \cdots, \xi_{N-1} \}$ of $N-1$ complex variables. Since each $\xi_n^*$ is the complex conjugate of $\xi_n$, the notation ${\cal S}_{CS}[\{ \xi^* \}, \{ \xi \}]$ is redundant at this stage, but will be found useful later. \subsubsection{stationary-action path} Let $( \{\bar{\xi}^S \}, \{\xi^S \} )$ be the stationary point of the action ${\cal S}_{CS}[\{\xi^* \}, \{\xi \}]$: \begin{mathletters} \beqa && 0= \left. \frac{i}{\hbar}\frac{\delta {\cal S}_{CS} }{\delta \xi^{*}_{n} } \right|_{S} = \left. \left( - \xi_{n} + \xi_{n-1} - \frac{i}{\hbar}\varepsilon\frac{\partial {\cal H}}{\partial \xi^{*}_{n}} \right)\right|_{S} \qquad :2 \le n \le N-1. \label{sabun-discs1}\\ && 0= \left.\frac{i}{\hbar} \frac{\delta {\cal S}_{CS} }{\delta \xi^{*}_{1} } \right|_{S} = \left. \left( - \xi_{1} + \xi_{I} - \frac{i}{\hbar}\varepsilon\frac{\partial {\cal H}}{\partial \xi^{*}_{1}} \right)\right|_{S} . \label{sabun-discs2}\\ && 0= \left.
\frac{i}{\hbar}\frac{\delta {\cal S}_{CS} }{\delta \xi_{n} } \right|_{S} = \left. \left( - \xi^{*}_{n} + \xi^{*}_{n+1} - \frac{i}{\hbar}\varepsilon\frac{\partial {\cal H}}{\partial \xi_{n}} \right)\right|_{S} \qquad :1 \le n \le N-2 . \label{sabun-discs3}\\ && 0= \left. \frac{i}{\hbar}\frac{\delta {\cal S}_{CS} }{\delta \xi_{N-1} } \right|_{S} = \left. \left( - \xi^{*}_{N-1} + \xi^{*}_{F} - \frac{i}{\hbar}\varepsilon\frac{\partial {\cal H}}{\partial \xi_{N-1}} \right)\right|_{S} . \label{sabun-discs4} \eeqa \label{sabun-discs10}\end{mathletters}These constitute a set of $2(N-1)$ equations for the same number of unknowns. Again, there is no room for a "boundary condition". On inspection one finds it convenient to {\em define} \begin{equation} \bar{\xi}^{S}_{N} := \xi^{*}_{F}, \qquad \xi^{S}_{0} := \xi_{I}. \label{bc-discs} \eeq (Remember that $\bar{\xi}_N^S$ and $\xi_0^S$ did not exist among the unknowns.) With this definition, the above set of equations takes the compact form: \begin{mathletters} \beqa \xi^{S}_{n} - \xi^{S}_{n-1} &=& -\frac{i}{\hbar}\varepsilon \left.\frac{\partial {\cal H}}{\partial \xi^{*}_{n}} \right|_{S} \qquad :1 \le n \le N-1 , \label{sabun-discs5}\\ \bar{ \xi } ^{S}_{n+1} - \bar{ \xi } ^{S}_{n} &=& ~~~ \frac{i}{\hbar}\varepsilon \left.\frac{\partial {\cal H}}{\partial \xi_{n}} \right|_{S} \qquad :1 \le n \le N-1 . \label{sabun-discs6} \eeqa \label{sabun-discs20}\end{mathletters}One may regard this as a set of $2N-2$ equations for the $2N$ unknowns $\{ \bar{\xi}_1^S, \bar{\xi}_2^S, \cdots, \bar{\xi}_{N}^S ; \xi_0^S, \xi_1^S, \cdots, \xi_{N-1}^S \}$, and regard Eq.~(\ref{bc-discs}) as the boundary condition to be imposed. The factor $\varepsilon$ on the right-hand side ensures that $\xi_{n}^S - \xi_{n-1}^S$ and $\bar{\xi}_{n+1}^S - \bar{\xi}_{n}^S$ are of ${\cal O} (\varepsilon)$.
Hence, in place of the above difference equations, one may solve the differential equations (\ref{eqmo-ho-ctcs-xi}) with the boundary condition (\ref{bc-ctcs-xi}). The solution to the difference equations is then obtained as \begin{mathletters} \beqa \xi^{S}_{n} &=& \xi^{S}(t_{n}) + {\cal O}(\varepsilon) \qquad :0 \le n \le N-1 , \label{sol-diffeq-cs1}\\ \bar{\xi}^{S}_{n} &=& \bar{\xi}^{S}( t_{n} ) + {\cal O}(\varepsilon) \qquad :1 \le n \le N. \label{sol-diffeq-cs2} \eeqa \label{sol-diffeq-cs}\end{mathletters}This is the rationale for Eqs.~(\ref{eqmo-ho-ctcs-xi}) and (\ref{bc-ctcs-xi}) encountered in CTCSPI. In what follows the stationary point shall be called the stationary-action path. However, it is important to note that $\xi_F$ and $\xi_I^*$ do not occur in Eqs.~(\ref{sabun-discs20}) and that these equations do not involve such unknowns as $\xi_{N}^S$ or $\bar{\xi}_0^S $. Hence neither $\xi_F - \xi_{N-1}^S$ nor $\bar{\xi}_1^S - \xi_I^*$ can be said to be of ${\cal O}(\varepsilon)$. Consequently, even in the limit $N \rightarrow \infty$, the stationary-action path need not be continuous at the initial and final times. Let us illustrate this point with the harmonic oscillator, for which Eqs.~(\ref{sabun-discs20}) reduce to \beqa \xi^{S}_{n} - \xi^{S}_{n-1} = -i\varepsilon \xi^{S}_{n-1} ,\qquad \bar{ \xi } ^{S}_{n+1} - \bar{ \xi } ^{S}_{n} = i\varepsilon \bar{\xi}^{S}_{n+1} \qquad :1 \le n \le N-1, \eeqa and the solution is immediately found as \beqa \xi^{S}_{n} = (1 - i\varepsilon)^{n} \xi_{I} = \xi_{I} e^{-it_{n}} + {\cal O}(\varepsilon), \qquad \bar{\xi}^{S}_{n} = ( 1-i\varepsilon)^{N-n}\xi^{*}_{F} = \xi^{*}_{F}e^{ i(t_{n} - T )}+ {\cal O}(\varepsilon) . \label{sap-discs} \eeqa The last approximate expression of course agrees with the result obtained via the procedure (\ref{sol-diffeq-cs}).
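The distinction between the ${\cal O}(\varepsilon)$ interior increments and the ${\cal O}(1)$ endpoint discontinuities is easy to confirm numerically. The following Python sketch (with hypothetical boundary data $\xi_I$, $\xi_F$, in units with $\hbar = \omega = 1$; not part of the original text) checks that the geometric solution just found satisfies the difference equations exactly, converges to $\xi_I e^{-it_n}$, and nevertheless keeps finite jumps at the endpoints:

```python
import cmath

# Hypothetical boundary data in units with hbar = omega = 1 (illustration only).
xi_I, xi_F = 0.7 + 0.2j, -0.3 + 0.5j
T, N = 1.0, 100_000
eps = T / N                      # time step, T = N * eps

# Stationary-action path of the discrete harmonic-oscillator equations:
#   xi^S_n = (1 - i eps)^n xi_I,   bar(xi)^S_n = (1 - i eps)^(N-n) xi_F*
def xi_S(n):
    return (1 - 1j * eps) ** n * xi_I

def xib_S(n):
    return (1 - 1j * eps) ** (N - n) * xi_F.conjugate()

n = N // 3
# (i) the difference equation xi_n - xi_{n-1} = -i eps xi_{n-1} holds exactly
assert abs((xi_S(n) - xi_S(n - 1)) + 1j * eps * xi_S(n - 1)) < 1e-12

# (ii) the path tends to xi_I e^{-i t_n} as eps -> 0
t_n = n * eps
assert abs(xi_S(n) - xi_I * cmath.exp(-1j * t_n)) < 10 * eps

# (iii) the endpoint discontinuities survive the limit N -> infinity
gap_final = abs(xi_F - xi_S(N - 1))              # xi_F - xi^S_{N-1} = O(1)
gap_initial = abs(xib_S(1) - xi_I.conjugate())   # bar(xi)^S_1 - xi_I* = O(1)
print(gap_final, gap_initial)                    # both stay finite
```

Shrinking $\varepsilon$ leaves the two gaps unchanged while the interior increments vanish, which is precisely the discontinuity phenomenon described above.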
In terms of $ ( \{p^S\}, \{q^S\} )$ related to $( \{\xi^S\}, \{ {\bar \xi}^S \} )$ via (\ref{sap-xi-q-p}), the stationary-action path is given by \begin{mathletters} \beqa q^{S}_{n} &=& ~~~~\sqrt{\frac{\hbar}{2}} ( \xi^{S}_{n} + \bar{\xi}^{S}_{n} ) =~~~ \sqrt{\frac{\hbar}{2}} ( \xi_{I} e^{-i t_{n} } + \xi^{*}_{F} e^{ i ( t_{n} - T) } )\qquad :1 \le n \le N-1,\\ p^{S}_{n} &=& -i\sqrt{\frac{\hbar }{2}} ( \xi^{S}_{n} - \bar{\xi}^{S}_{n} ) = -i\sqrt{\frac{\hbar }{2}}( \xi_{I} e^{-i t_{n} } - \xi^{*}_{F} e^{ i ( t_{n} - T) } )\qquad :1 \le n \le N-1. \eeqa \end{mathletters}(Note that neither $ ( p_N^S, q_N^S )$ nor $ ( p_0^S, q_0^S )$ is defined.) This is depicted in Fig.~2. === Fig. 2 === The stationary-action path is not in general real as observed in Sec.~III.B. Its real and imaginary parts trace circles of radii \beqa \left[\frac{\hbar}{2}\{|\xi_{F}|^{2}+|\xi_{I}|^{2} \pm 2 \Re (\xi^{*}_{F}\xi_{I}e^{-iT})\}\right]^{1/2}, \eeqa respectively. Neither of them coincides with the classical path connecting $(p_I, q_I)$ and $(p_F, q_F)$ in time $T$; such a classical path would exist only in the special case (\ref{special-bc}). Comparison of Figs.~1 and 2 shows that Klauder's $\epsilon$-prescription is a device to interpolate smoothly between $(p_I, q_I)$ and $ ( p_1^S, q_1^S )$ as well as between $(p_{N-1}^S, q_{N-1}^S)$ and $ ( p_F, q_F )$.
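The radii quoted above can be checked directly. A small Python sketch (hypothetical $\xi_I$, $\xi_F$, with $\hbar = 1$; the path used is the continuum limit of the expressions just given) verifies that the real and imaginary parts of $(q^S, p^S)$ stay on their respective circles:

```python
import cmath
import math

hbar = 1.0
xi_I, xi_F = 0.7 + 0.2j, -0.3 + 0.5j   # hypothetical boundary data
T = 1.0

def sap(t):
    """Continuum stationary-action path (q^S(t), p^S(t)); generally complex."""
    a = xi_I * cmath.exp(-1j * t)
    b = xi_F.conjugate() * cmath.exp(1j * (t - T))
    return (math.sqrt(hbar / 2) * (a + b),        # q^S(t)
            -1j * math.sqrt(hbar / 2) * (a - b))  # p^S(t)

cross = (xi_F.conjugate() * xi_I * cmath.exp(-1j * T)).real
r_plus = math.sqrt(hbar / 2 * (abs(xi_F) ** 2 + abs(xi_I) ** 2 + 2 * cross))
r_minus = math.sqrt(hbar / 2 * (abs(xi_F) ** 2 + abs(xi_I) ** 2 - 2 * cross))

for k in range(11):
    q, p = sap(T * k / 10)
    # real parts lie on the circle with the + sign, imaginary parts on the - one
    assert abs(math.hypot(q.real, p.real) - r_plus) < 1e-12
    assert abs(math.hypot(q.imag, p.imag) - r_minus) < 1e-12
print("radii:", r_plus, r_minus)
```

The key point the check makes visible is that the cross term $\Re(\xi_F^*\xi_I e^{-iT})$ is time independent, so each projection moves on a circle centered at the origin.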
\subsubsection{$\varepsilon$-term of the action} The action can be re-written as \begin{mathletters} \beqa && {\cal S}_{CS}[\{\xi^* \}, \{\xi \}] = {\cal S}_{CS-\varepsilon}[\{\xi^* \}, \{\xi \}] + {\cal S}_{CS-c}[\{\xi^* \}, \{\xi \}] +{\cal S}_{CS-d}[\{\xi^* \}, \{\xi \}] , \label{separated-action-discs} \\ && \frac{i}{\hbar}{\cal S}_{CS-\varepsilon}[\{\xi^* \}, \{\xi \}] := -\frac{1}{2}\left( |\xi_{F}-\xi_{N-1}|^{2} + \sum_{n=2}^{N-1}|\xi_{n}-\xi_{n-1}|^{2}+|\xi_{1}-\xi_{I}|^{2} \right), \label{eterm-action-discs} \\ && \frac{i}{\hbar}{\cal S}_{CS-c}[\{\xi^* \}, \{\xi \}] := -\frac{1}{2}\bigg( \xi^{*}_{F}(\xi_{F}-\xi_{N-1})-(\xi^{*}_{F}-\xi^{*}_{N-1})\xi_{F} \nonumber \\ && + \sum_{n=2}^{N-1}\{ \xi^{*}_{n}(\xi_{n}-\xi_{n-1})-(\xi^{*}_{n}-\xi^{*}_{n-1})\xi_{n}\} + \xi^{*}_{1}(\xi_{1}-\xi_{I})-(\xi^{*}_{1} - \xi^{*}_{I})\xi_{1} \bigg), \label{cterm-action-discs} \\ && {\cal S}_{CS-d}[\{\xi^* \}, \{\xi \}]:= -\varepsilon \sum_{n=1}^{N}{\cal H}(\xi^{*}_{n},\xi_{n-1}). \label{dterm-action-discs} \eeqa \end{mathletters}The first term would resemble Klauder's $\epsilon$-term if the following manipulation were correct: \beqa {\cal S}_{CS-\varepsilon}[\{\xi^* \}, \{\xi \}] = \frac{i\hbar}{2}\sum_{n=1}^{N}\varepsilon^{2} \frac{|\xi_{n} - \xi_{n-1}|^{2}}{\varepsilon^{2}} \sim \frac{i\hbar}{2} \varepsilon \int_{0}^{T} dt \dot{\xi}^{*}(t)\dot{\xi}(t). \label{keterm-action} \eeqa It is of course strange to keep $\varepsilon$ partially while converting a sum into an integral under the supposition that everything behaves smoothly in the limit $\varepsilon \rightarrow 0$. Nevertheless, this is the only clue to identifying what would correspond to Klauder's $\epsilon$-term. Therefore we call ${\cal S}_{CS-\varepsilon}[\{\xi^* \}, \{\xi \}]$ the $\varepsilon$-term, which is the reason for the notation adopted. Likewise the second term would resemble the first term of (\ref{action-ctcs-xi}). Hence it is to be called the canonical term.
The last term represents the contribution of the Hamiltonian and is to be called the dynamical term. \subsubsection{stationary action} We now evaluate the contributions of the three terms of (\ref{separated-action-discs}) separately to the stationary action. The contribution of the $\varepsilon$-term may be further separated into three parts. That from the intermediate times (the second term of (\ref{eterm-action-discs})) is obviously of ${\cal O}(\varepsilon)$ in view of (\ref{sol-diffeq-cs}): \begin{mathletters} \beqa \sum_{n=2}^{N-1}(\bar{\xi}^{S}_{n}-\bar{\xi}^{S}_{n-1})(\xi^{S}_{n}-\xi^{S}_{n-1}) = \varepsilon \int_{0}^{T} dt \dot{\bar{\xi}}^{S}(t)\dot{\xi}^{S}(t) = {\cal O}(\varepsilon). \eeqa The "initial discontinuity" (the last term of (\ref{eterm-action-discs})) contributes \beqa (\bar{\xi}^{S}_{1}-\xi^{*}_{I})(\xi^{S}_{1} - \xi_{I}) = (\xi^{*}_{F}e^{i(\varepsilon - T)}-\xi^{*}_{I})(e^{-i\varepsilon}-1)\xi_{I} = {\cal O}(\varepsilon). \eeqa \end{mathletters}A similar result is found for the contribution of the "final discontinuity" (the first term of (\ref{eterm-action-discs})). Hence \beqa {\cal S}_{CS-\varepsilon}^{SAP} := {\cal S}_{CS-\varepsilon}[\{ \bar{\xi}^S \}, \{ \xi^S \}] = {\cal O}(\varepsilon). \eeqa It is thus concluded that the $\varepsilon$-term does not contribute to the stationary action, which is in accord with the result of Klauder that his $\epsilon$-term does not contribute to the stationary action. One should not be misled by the expression (\ref{eterm-action-discs}), from which one might guess that the discontinuities shown in Fig.~2 would make a contribution of ${\cal O}(\varepsilon^{0})$. The reason why they do not is that $\bar{\xi}^{S}_{n}$ and $\xi^{S}_{n}$ are not necessarily complex conjugates of each other. The contribution of the canonical term may be evaluated in a similar fashion.
Thus, the intermediate times contribute \begin{mathletters}\beqa -\frac{1}{2}\sum_{n=2}^{N-1}\{\bar{\xi}^{S}_{n}(\xi^{S}_{n}-\xi^{S}_{n-1})-(\bar{\xi}^{S}_{n} -\bar{\xi}^{S}_{n-1})\xi^{S}_{n} \} = iT \xi^{*}_{F}\xi_{I}e^{-iT}+{\cal O}(\varepsilon),\label{canonical-inter-discs} \eeqa while the final and initial discontinuities respectively contribute \beqa && -\frac{1}{2}\{ \xi^{*}_{F}(\xi_{F}- \xi^{S}_{N-1}) -(\xi^{*}_{F}-\bar{\xi}^{S}_{N-1})\xi_{F}\} = \frac{1}{2}\xi^{*}_{F}\xi_{I}e^{-iT} - \frac{1}{2}|\xi_{F}|^{2} + {\cal O}(\varepsilon), \label{final-tran-discs} \\ && -\frac{1}{2} \{ \bar{\xi}^{S}_{1} (\xi^{S}_{1} - \xi_{I}) -(\bar{\xi}^{S}_{1} - \xi^{*}_{I} ) \xi^{S}_{1} \} = \frac{1}{2} \xi^{*}_{F} \xi_{I} e^{-iT} - \frac{1}{2}|\xi_{I}|^{2} + {\cal O}(\varepsilon). \label{initial-tran-discs} \eeqa \end{mathletters}Finally, the contribution of the dynamical term is found as \beqa -i\varepsilon\sum_{n=1}^{N}\bar{\xi}^{S}_{n}\xi^{S}_{n-1} = -iT\xi^{*}_{F}\xi_{I}e^{-iT} + {\cal O}(\varepsilon), \eeqa which cancels the intermediate-time contribution (\ref{canonical-inter-discs}) of the canonical term. Hence the stationary action, to be denoted by ${\cal S}^{SAP}_{CS}$, is determined entirely by the discontinuity parts of the canonical term as \beqa \frac{i}{\hbar}{\cal S}^{SAP}_{CS} &:=& \frac{i}{\hbar}{\cal S}_{CS}[\{ \bar{\xi}^{S} \},\{ \xi^{S} \}] \\ &=& -\frac{1}{2}(|\xi_{F}|^{2}+|\xi_{I}|^{2}) + \xi_{F}^{*} \xi_{I} e^{-iT} + {\cal O}(\varepsilon), \eeqa which reproduces the exponent of (\ref{exact-cs-ta}). \bigskip Comment: One might start from the following action \cite{Itzykson} \beqa &&\frac{i\hbar}{2}\{ \xi^{*}_{F}(\xi_{F}-\xi_{N-1})-(\xi^{*}_{1}-\xi^{*}_{I})\xi_{I}\}\nonumber \\ &&+ \int_{0}^{T}dt\left\{\frac{i\hbar}{2}( \xi^{*}(t)\dot{\xi}(t) - \dot{\xi}^{*}(t)\xi(t) ) - {\cal H}(\xi^{*}(t),\xi(t)) \right\}, \eeqa which is a mixture of discrete and continuous forms.
Adopting (\ref{sol-ho-ctcs}) as the stationary-action path, one would obtain the correct value for the stationary action. However, re-writing the action (\ref{separated-action-discs}) in such a mixed form is not a unique procedure. Success would not be guaranteed unless one knew the answer beforehand. At any rate one would fail if one proceeded to integration over fluctuations. \subsubsection{fluctuations} Let us turn to the evaluation of fluctuation integrals, again keeping track of the roles played by each of the terms of (\ref{separated-action-discs}). Separating the integration variables as \beqa \xi_{n} = \xi^{S}_{n} + \eta_{n} ,\qquad \xi^{*}_{n} = \bar{\xi}^{S}_{n} + \eta^{*}_{n} \qquad :1 \le n \le N-1 , \label{decomposed-xi-disscs} \eeqa one finds \begin{mathletters}\beqa && {\cal S}_{CS}[\{\xi^* \}, \{\xi \}] = {\cal S}_{CS}^{SAP} + {\cal O}(\varepsilon) + {\cal S}^{(2)}_{CS}[\{\eta^* \}, \{\eta \}], \label{separared-sap-action-discs}\\ && {\cal S}^{(2)}_{CS}[\{\eta^* \}, \{\eta \}] := {\cal S}^{(2)}_{CS-\varepsilon}[\{\eta^* \}, \{\eta \}] + {\cal S}^{(2)}_{CS-cd}[\{\eta^* \}, \{\eta \}], \label{separated-fluc-action-discs} \eeqa \end{mathletters}where \begin{mathletters}\beqa && \frac{i}{\hbar}{\cal S}^{(2)}_{CS-\varepsilon}[\{\eta^* \}, \{\eta \}] := -\sum_{n=1}^{N-1} \eta^{*}_{n}\eta_{n} + \frac{1}{2}\sum_{n=2}^{N-1}( \eta^{*}_{n}\eta_{n-1} + \eta^{*}_{n-1}\eta_{n} ) , \label{eterm-fluc-action-discs}\\ && \frac{i}{\hbar}{\cal S}^{(2)}_{CS-cd}[\{\eta^* \}, \{\eta \}] := \frac{1}{2}\sum_{n=2}^{N-1}( \eta^{*}_{n}\eta_{n-1} - \eta^{*}_{n-1}\eta_{n} ) - i\varepsilon \sum_{n=2}^{N-1}\eta^{*}_{n}\eta_{n-1}.
\label{usual-term-fluc-action-discs} \eeqa \end{mathletters}Accordingly \begin{mathletters} \beqa && \<\xi_{F}|e^{-i\hat{H}T/\hbar}|\xi_{I}\> = \exp\left(\frac{i}{\hbar} {\cal S}_{CS}^{SAP} \right){\cal K}^{(2)}_{CS}(T), \\ && {\cal K}^{(2)}_{CS}(T) := \lim_{N \to \infty} \int \prod_{n=1}^{N-1} \frac{d \eta_{n} d \eta^{*}_{n} }{2\pi i} \exp \left( \frac{i}{\hbar}{\cal S}^{(2)}_{CS}[\{\eta\},\{\eta^{*}\}] \right). \eeqa \end{mathletters}At this stage $\eta^*$ is the complex conjugate of $\eta$, and \beqa \frac{d\eta_{n}d\eta^{*}_{n}}{2\pi i} := \frac{d{\sl p}_{n}d{\sl q}_{n}}{2\pi\hbar}, \eeqa with $({\sl p}_{n},{\sl q}_{n})$ related to $(\eta^{*}_{n},\eta_{n})$ via Eq.~(\ref{xi-q-p}). Noting that \beqa \frac{i}{\hbar}{\cal S}^{(2)}_{CS}[\{\eta^* \}, \{\eta \}] = - \sum_{n=1}^{N-1} \eta^{*}_{n} \left\{ \eta_{n} - \alpha \eta_{n-1} \right\}, \qquad \alpha := 1 -i\varepsilon, \eeqa where we have defined $\eta_0 = 0$ \cite{fn6}, we make a further change of the integration variables as \cite{1} \begin{equation} \eta^{'*}_{n} := \eta^{*}_{n} ,\qquad \eta^{'}_{n} := \eta_{n} - \alpha \eta_{n-1} \label{variable-change} \eeq to find \beqa {\cal K}^{(2)}_{CS}(T) = \lim_{N \to \infty} \int \prod_{n=1}^{N-1} \frac{d \eta^{'}_{n} d \eta^{'*}_{n} }{2\pi i} \exp\left( -\sum_{n=1}^{N-1}\eta^{'*}_{n}\eta^{'}_{n}\right) = 1. \label{flucint-disscspi} \eeqa The complete amplitude (\ref{exact-cs-ta}) is thus recovered by treating DTCSPI in the stationary-action approximation, which should be exact for the harmonic oscillator. Note that $\eta_{n}$ appears in the exponent of the integral in the form $-|\eta_{n}|^{2} ( = ( {\sl p}_{n}^{2} + {\sl q}_{n}^{2} )/2\hbar )$. Hence the effective range of integration over ${\sl p}_{n}$ and ${\sl q}_{n}$ is of ${\cal O}(\hbar^{1/2})$. Thus, contrary to the case of DTPSPI, the $\eta_{n}$'s constitute a small fluctuation in the quasi-classical situation.
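Since the fluctuation integral contributes exactly unity, the amplitude is carried entirely by the stationary action, and this can be verified by brute force. The Python sketch below (hypothetical $\xi_I$, $\xi_F$, and the harmonic-oscillator Hamiltonian ${\cal H} = \hbar\,\xi^*\xi$ in units $\hbar = \omega = 1$; an illustration, not part of the original text) sums the discrete action term by term along the stationary-action path and compares it with the closed form $-\frac{1}{2}(|\xi_F|^2+|\xi_I|^2) + \xi_F^*\xi_I e^{-iT}$ obtained above:

```python
import cmath

# Hypothetical boundary data; H = hbar * xi* xi with hbar = omega = 1.
xi_I, xi_F = 0.4 - 0.3j, 0.6 + 0.1j
T, N = 1.0, 50_000
eps = T / N

# Stationary-action path plus the boundary definitions bar(xi)_N = xi_F*, xi_0 = xi_I.
xi = [(1 - 1j * eps) ** n * xi_I for n in range(N)] + [xi_F]
xib = [xi_I.conjugate()] + [(1 - 1j * eps) ** (N - n) * xi_F.conjugate()
                            for n in range(1, N + 1)]

# (i/hbar) S_CS summed term by term; on the (complex) stationary path,
# |xi_n|^2 stands for bar(xi)_n xi_n, which need not be a real number.
action = sum(-0.5 * (xib[n] * xi[n] + xib[n - 1] * xi[n - 1])
             + (1 - 1j * eps) * xib[n] * xi[n - 1]
             for n in range(1, N + 1))

closed_form = (-0.5 * (abs(xi_F) ** 2 + abs(xi_I) ** 2)
               + xi_F.conjugate() * xi_I * cmath.exp(-1j * T))
err = abs(action - closed_form)
print(err)   # O(eps)
```

The interior terms cancel pairwise, so the whole sum collapses onto the two discontinuity contributions, exactly as the analytic evaluation found.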
What would happen if the $\varepsilon$-term were discarded in integrating over the fluctuations, in the spirit of the "semi-$\epsilon$-prescription" mentioned in Sec.~IV.A? The integral (\ref{flucint-disscspi}) would then be replaced by \beqa {\cal K}^{(2)}_{CS-cd}(T) := \int \prod _{n=1}^{N-1} \frac{d\eta_{n}d\eta^{*}_{n}}{2\pi i} \exp \left( \frac{i}{\hbar}{\cal S}^{(2)}_{CS-cd}[\{\eta^{*}\},\{\eta\}]\right). \eeqa In order to evaluate this integral, we may write ${\cal S}^{(2)}_{CS-cd}$ in the following matrix expression: \beqa \frac{i}{\hbar}{\cal S}^{(2)}_{CS-cd} = -\frac{1}{2}~ ^{t}{\bf \eta}^{*} {\cal M} {\bf \eta} \eeqa where \beqa && ^{t}{\bf \eta}^{*} := \left( \begin{array}{cccc} \eta^{*}_{1} & \eta^{*}_{2} & \ldots & \eta^{*}_{N-1} \end{array}\right), \qquad {\cal M} := \left( \begin{array}{cccc} 0 & 1 & & \\ -a & \ddots & \ddots & \\ & \ddots & \ddots & 1 \\ & & -a & 0 \end{array}\right), \qquad {\bf \eta} := \left( \begin{array}{c} \eta_{1} \\ \eta_{2} \\ \vdots \\ \eta_{N-1} \end{array}\right), \label{matrix-exp} \eeqa with $ a := 1 -2i\varepsilon \simeq e^{-2i\varepsilon}$. The determinant of ${\cal M}$ is given as \beqa \det {\cal M} = \left\{ \begin{array}{cc} a^{\frac{N-1}{2}} & :{\rm if}~ N ~{\rm is ~odd} \\ 0 & ~:{\rm if}~ N ~{\rm is ~even} \end{array}\right. . \eeqa Hence, $N$ must be chosen to be an odd integer in order for the integral to make sense. With this choice, \beqa {\cal K}^{(2)}_{CS-cd}(T) &=& \lim_{N \to \infty} 2^{N-1}( \det {\cal M} )^{-1} \nonumber \\ &=& e^{-iT}\lim_{N \to \infty}2^{N-1}, \eeqa which does not even tend to a finite value in the limit $N \rightarrow \infty$. This is the non-sensical result announced in the introduction. If one started from Klauder's action (\ref{action-kctscs}) and made the formal expansion as Eq.~(\ref{expand-xiseta}), then one could find Eq.
(\ref{expand-sapfluc}) with the subscript $CS$ replaced by $KCS$, where \beqa S^{(2)}_{KCS}[\eta^{*},\eta] := -i\hbar \int_{0}^{T}dt \left\{-\frac{1}{2}\epsilon \dot{\eta}^{*}(t)\dot{\eta}(t)-\frac{1}{2}(\eta^{*}(t)\dot{\eta}(t)-\dot{\eta}^{*}(t)\eta(t))-i\eta^{*}(t)\eta(t)\right\}. \eeqa One might adopt the following discretization \beqa S^{(2)}_{KCS}[\eta^{*},\eta] &\to& -i\hbar \sum_{n=1}^{N} \left[ -\frac{1}{2} (\eta^{*}_{n}-\eta^{*}_{n-1})(\eta_{n}-\eta_{n-1}) \right. \nonumber \\ &&-\left.\frac{1}{2} \left\{\eta^{*}_{n}(\eta_{n}-\eta_{n-1}) -(\eta^{*}_{n}-\eta^{*}_{n-1}) \eta_{n} \right\} -i\varepsilon\eta^{*}_{n}\eta_{n-1}\right], \\ {\cal D}\eta {\cal D}\eta^{*} &\to& \prod_{n=1}^{N-1}\frac{d\eta_{n}d\eta^{*}_{n}}{2\pi i}. \label{discretization} \eeqa The fluctuation integral would then give unity. Clearly, adoption of the above discretization scheme is equivalent to working with DTCSPI. However there is no compelling reason why we should adopt this particular discretization. If we adopted (with $\eta^{*}_{N+1}= \eta_{-1} = 0$) \beqa S^{(2)}_{KCS}[\eta^{*},\eta] &\to& -i\hbar\sum_{n=1}^{N}\bigg[ -\frac{1}{2} (\eta^{*}_{n+1}-\eta^{*}_{n})(\eta_{n}-\eta_{n-1}) \nonumber \\ &&- \frac{1}{2} \left\{\eta^{*}_{n}(\eta_{n}-\eta_{n-1}) -(\eta^{*}_{n}-\eta^{*}_{n-1}) \eta_{n-1} \right\} -i\varepsilon\eta^{*}_{n}\eta_{n-1}\bigg] \nonumber \\ &=& \frac{i\hbar}{2}\sum_{n=1}^{N-1}\eta_{n}^{*}( \eta_{n} + 2i\varepsilon\eta_{n-1} - \eta_{n-2} ) \eeqa we would obtain a non-sensical result: \beqa \int {\cal D}\eta{\cal D}\eta^{*} \exp\left(\frac{i}{\hbar}S^{(2)}_{KCS}[\eta^{*},\eta] \right) \to \lim_{N\to\infty}2^{N-1}. \eeqa We conclude that the fluctuation integral can be unambiguously evaluated only in DTCSPI. \subsection{Discrete-Time Spin-Coherent-State Path Integral} The case of the spin-coherent-state path integral may be discussed in parallel with the coherent-state case. Thus, we work with $\xi$ related to $\theta$ and $\phi$ via Eq.~(\ref{xi-phi-theta}).
By a repeated use of the resolution of unity \begin{equation} \int \frac{2S+1}{2\pi i}\frac{d \xi d\xi^{*}}{( 1 + |\xi|^{2} )^{2}}|\xi\>\<\xi| = 1 , \eeq the amplitude in question is expressed as \begin{mathletters}\beqa &&\<\xi_{F}|e^{-i\hat{H}T/\hbar}|\xi_{I}\> = \lim_{N \to \infty}\int \prod_{n=1}^{N-1} \frac{2S + 1}{2\pi i} \frac{d\xi_{n}d\xi^{*}_{n}}{ ( 1 + |\xi_{n}|^{2})^{2} } \exp\left( \frac{i}{\hbar}{\cal S}_{SCS}[ \{\xi^{*}\} ,\{ \xi\} ] \right), \label{ta-disscs}\\ && \frac{i}{\hbar}{\cal S}_{SCS}[ \{\xi^{*}\} ,\{ \xi \} ] = \sum_{n=1}^{N} \left[ S \ln \frac{ ( 1 + \xi^{*}_{n}\xi_{n-1} )^{2} }{ ( 1+ |\xi_{n}|^{2} ) ( 1+ |\xi_{n-1}|^{2} ) } - \frac{i}{\hbar}\varepsilon {\cal H}( \xi^{*}_{n} , \xi_{n-1} ) \right],\label{action-disscs} \eeqa \end{mathletters}where the convention in the previous subsection is followed. The remark on the notation ${\cal S}_{CS}[\{ \xi^* \}, \{ \xi \}]$ applies here as well. \subsubsection{stationary-action path} As in the case of DTCSPI, we denote the stationary-action path (i.e., the stationary point of the action) by $( \{\xi^S \}, \{\bar{\xi}^S\} )$. It obeys a set of equations whose basic structure is the same as that of Eqs.~(\ref{sabun-discs10}). In particular, there is no room for a "boundary condition". Adopting the {\em definition} (\ref{bc-discs}), we can cast the set of equations into the following form: \begin{mathletters}\beqa &&2S \frac{\xi^{S}_{n} - \xi^{S}_{n-1}}{(1+\bar{\xi}^{S}_{n}\xi^{S}_{n-1})(1+\bar{\xi}^{S}_{n}\xi^{S}_{n})} ~=~ - \left.\frac{i}{\hbar}\varepsilon\frac{\partial {\cal H}}{\partial \xi^{*}_{n}} \right|_{S} \qquad :1 \le n \le N-1, \label{sabun-disscs5}\\ &&2S \frac{\bar{\xi}^{S}_{n+1} - \bar{\xi}^{S}_{n}}{(1+\bar{\xi}^{S}_{n+1}\xi^{S}_{n})(1+\bar{\xi}^{S}_{n}\xi^{S}_{n})} = \left.\frac{i}{\hbar}\varepsilon\frac{\partial {\cal H}}{\partial \xi_{n}} \right|_{S} \qquad :1 \le n \le N-1.
\label{sabun-disscs6} \eeqa \end{mathletters}Again, the factor $\varepsilon$ on the right-hand side ensures that $\xi_{n}^S - \xi_{n-1}^S$ and $\bar{\xi}_{n+1}^S - \bar{\xi}_{n}^S$ are of ${\cal O} (\varepsilon)$, but neither $\xi_F - \xi_{N-1}^S$ nor $\bar{\xi}_1^S - \xi_I^*$ can be said to be of ${\cal O}(\varepsilon)$. For the spin under a constant magnetic field described by the Hamiltonian (\ref{scs-constmag-hamiltonian}), the above equations reduce to \begin{mathletters}\beqa && \frac{ \xi^{S}_{n} - \xi^{S}_{n-1} } { 1 + \bar{\xi}^{S}_{n}\xi^{S}_{n} } = ~-i\varepsilon \frac{ \xi^{S}_{n-1} } { 1 + \bar{\xi}^{S}_{n} \xi^{S}_{n-1} } \qquad :1 \le n \le N-1, \label{sabun-disscs7} \\ && \frac{ \bar{\xi}^{S}_{n} - \bar{\xi}^{S}_{n-1} } { 1 + \bar{\xi}^{S}_{n-1} \xi^{S}_{n-1} } = i\varepsilon \frac{ \bar{\xi}^{S}_{n} } { 1 + \bar{\xi}^{S}_{n} \xi^{S}_{n-1} } \qquad :2 \le n \le N. \label{sabun-disscs8} \eeqa \label{sabun-disscs101}\end{mathletters}This set of non-linear difference equations may be solved by identifying conserved quantities. Let \begin{equation} P_{n} := \bar{\xi}^{S}_{n}\xi^{S}_{n-1},\qquad R_{n} := \bar{\xi}^{S}_{n}\xi^{S}_{n}, \label{def-pr} \eeq then it follows from Eqs.~(\ref{sabun-disscs101}) that \begin{mathletters}\beqa && \frac { R_{n} - P_{n} }{ 1 + R_{n} } =-i \varepsilon \frac{P_{n}}{ 1 + P_{n} } \qquad :1 \le n \le N-1, \label{const-eq1}\\ && \frac { P_{n} - R_{n-1} }{ 1 + R_{n-1} } =i \varepsilon \frac{P_{n}}{ 1 + P_{n} } \qquad :2 \le n \le N . \label{const-eq2} \eeqa \end{mathletters}Putting these equations together, we find \beqa \frac{ 1 + P_{n} }{ 1 + R_{n-1} } = \frac{ 1 + P_{n} }{ 1 + R_{n} } \qquad :2 \le n \le N-1 . \eeqa If $1 + P_n$ vanished, the contribution of the stationary-action path to the amplitude in question would vanish because of the factor $\ln (1 + P_n)$ in Eq.~(\ref{action-disscs}). Hence we can assume that $1 + P_n \neq 0$.
Consequently $R_n$ is a conserved quantity, whose value is to be denoted by $R$:\beqa R_{n} = R \qquad :1 \le n \le N-1. \label{constR} \eeqa Combining this with Eq.~(\ref{const-eq1}), we find that $P_n$ is also conserved and denote its value by $P$: \beqa P_n = P := (1+i\varepsilon)R + {\cal O}(\varepsilon^2), \qquad 2 \le n \le N-1. \label{constP} \eeqa (We can disregard the other root, which is equal to $-1 + {\cal O}(\varepsilon)$.) With these results, the set of equations (\ref{sabun-disscs101}) reduces to \begin{mathletters}\beqa && \xi^{S}_{n} - \xi^{S}_{n-1} = -i\varepsilon \xi^{S}_{n-1} + {\cal O}(\varepsilon^2) \qquad :1\le n \le N-1, \label{sabun-disscs9}\\ && \bar{\xi}^{S}_{n} - \bar{\xi}^{S}_{n-1} = ~~i \varepsilon \bar{\xi}^{S}_{n}~~~ + {\cal O}(\varepsilon^2)\qquad :2 \le n \le N. \label{sabun-disscs10} \eeqa\end{mathletters}Thus, to ${\cal O}(\varepsilon)$, the stationary-action path expressed by $\xi^S$ and $\bar{\xi}^S$ is identical with that for the harmonic oscillator given by (\ref{sap-discs}). Accordingly \beqa R = \xi^{*}_{F}\xi_{I}e^{-iT} + {\cal O}(\varepsilon) ,\qquad P = \xi^{*}_{F}\xi_{I}e^{-iT+i\varepsilon} + {\cal O}(\varepsilon^{2}). \label{extra} \eeqa (We have illustrated how the stationary-action path may be found in the fully discrete form. The result to the lowest order in $\varepsilon$ can also be found by going over to a differential equation at the stage of Eqs.~(\ref{sabun-disscs101}).) The result may be converted into $(\{\theta^S\}, \{\phi^S\})$ via \begin{equation} \xi^{S} = e^{i\phi^{S}}\tan \frac{\theta^{S}}{2},\qquad \bar{\xi}^{S} = e^{-i\phi^{S}}\tan \frac{\theta^{S}}{2}, \eeq or equivalently into ${\bf n}^S$. It is depicted in Fig.~3. === Fig. 3 === Again, the stationary-action path is not in general real. Neither its real nor its imaginary part coincides with the classical path connecting ${\bf n}_I$ and ${\bf n}_F$ in time $T$; such a classical path would exist only in the special case (\ref{special-bc}).
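The statement that $P = (1+i\varepsilon)R + {\cal O}(\varepsilon^2)$, with a discarded second root $-1 + {\cal O}(\varepsilon)$, follows from the quadratic $P^2 + P\,(1 - R - i\varepsilon(1+R)) - R = 0$ obtained by cross-multiplying the first constancy equation. A quick numerical sketch (the value of $R$ is hypothetical; illustration only):

```python
import cmath

R = 0.3 + 0.2j                      # hypothetical conserved value of R_n
for eps in (1e-2, 1e-3, 1e-4):
    # (R - P)/(1 + R) = -i eps P/(1 + P)  <=>  P^2 + b P - R = 0
    b = 1 - R - 1j * eps * (1 + R)
    disc = cmath.sqrt(b * b + 4 * R)
    roots = ((-b + disc) / 2, (-b - disc) / 2)
    P = min(roots, key=lambda r: abs(r - R))       # the physical root, near R
    other = max(roots, key=lambda r: abs(r - R))   # the discarded root, near -1
    assert abs(P - (1 + 1j * eps) * R) < 5 * eps ** 2
    assert abs(other + 1) < 5 * eps
print("P = (1 + i eps) R + O(eps^2); other root = -1 + O(eps)")
```

At $\varepsilon = 0$ the quadratic factorizes as $(P-R)(P+1) = 0$, which is why the two roots sit near $R$ and $-1$.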
\subsubsection{$\varepsilon$-term of the action} If one regarded all the differences $|\xi_n - \xi_{n-1}|$ ($n = 1,2,\cdots,N$ with the convention $\xi_N := \xi_F$ and $\xi_0 := \xi_I$) as small in some sense and expanded the action to the second order in them, one would obtain (the equality so found is to be denoted by $\sim$) \begin{mathletters}\beqa && {\cal S}_{SCS}[\{\xi^* \}, \{\xi \}] \sim {\cal S}_{SCS-\varepsilon 1}+{\cal S}_{SCS-\varepsilon 2} + {\cal S}_{SCS-c} +{\cal S}_{SCS-d} , \label{separated-action-disscs}\\ && \frac{i}{\hbar}{\cal S}_{SCS-\varepsilon 1} := -S\sum_{n=1}^{N}\frac{1}{( 1 + |\xi_{n}|^{2} )^{2}}(\xi^{*}_{n}-\xi^{*}_{n-1})( \xi_{n} - \xi_{n-1} ) , \label{1eterm-action-disscs}\\ && \frac{i}{\hbar}{\cal S}_{SCS-\varepsilon 2} := \frac{S}{2}\sum_{n=1}^{N}\frac{1}{( 1 + |\xi_{n}|^{2} )^{2}}[ \{\xi_{n}(\xi^{*}_{n}-\xi^{*}_{n-1})\}^{2} - \{ \xi^{*}_{n}( \xi_{n} - \xi_{n-1} ) \}^{2}] , \label{2eterm-action-disscs}\\ && \frac{i}{\hbar}{\cal S}_{SCS-c} := S \sum_{n=1}^{N}\frac{1}{ 1 + |\xi_{n}|^{2} }\{ \xi_{n}( \xi^{*}_{n} - \xi^{*}_{n-1}) - \xi^{*}_{n}(\xi_{n} - \xi_{n-1} )\}, \label{cterm-action-disscs} \\ && {\cal S}_{SCS-d} := -\varepsilon\sum_{n=1}^{N} {\cal H}( \xi^{*}_{n} , \xi_{n-1} ) . \label{dterm-action-disscs} \eeqa \label{actions-disscs}\end{mathletters}The four terms (\ref{actions-disscs}b-e) are to be called the "$\varepsilon 1$-term", the "$\varepsilon 2$-term", the canonical term, and the dynamical term, respectively.
In terms of $(\theta_{n},\phi_{n})$, they take the following forms: \begin{mathletters}\beqa \frac{i}{\hbar}{\cal S}_{SCS-\varepsilon 1} &\sim& \frac{i}{\hbar}\tilde{{\cal S}}_{SCS-\varepsilon 1}[\{\theta\},\{\phi\}]\nonumber\\ &:=& -\frac{S}{4}\sum_{n=1}^{N}\{ (\theta_{n} - \theta_{n-1})^{2} + (\phi_{n}-\phi_{n-1})^{2}(\sin\theta_{n})^{2} \} , \label{e1term-action-tp} \\ \frac{i}{\hbar}{\cal S}_{SCS-\varepsilon 2} &\sim& \frac{i}{\hbar}\tilde{{\cal S}}_{SCS-\varepsilon 2}[\{\theta\},\{\phi\}]\nonumber\\ &:=& -iS\sum_{n=1}^{N} (\theta_{n}-\theta_{n-1})(\phi_{n}-\phi_{n-1}) \tan \frac{\theta_{n}}{2}\left(\sin\frac{\theta_{n}}{2}\right)^{2} , \label{e2term-action-tp}\\ \frac{i}{\hbar}{\cal S}_{SCS-c} &\sim& \frac{i}{\hbar}\tilde{{\cal S}}_{SCS-c}[\{\theta\},\{\phi\}]\nonumber\\ &:=& iS \sum_{n=1}^{N} \left\{ (\phi_{n}-\phi_{n-1})(\cos \theta_{n}-1) + (\theta_{n}-\theta_{n-1})(\phi_{n}-\phi_{n-1}) \tan \frac{\theta_{n}}{2} \right\} .\label{cterm-action-tp} \eeqa\label{actions-disscs-tp}\end{mathletters}Thus ${\cal S}_{SCS-\varepsilon 1}$ would resemble Klauder's $\epsilon$-term if the manipulation analogous to (\ref{keterm-action}) were correct. However, there is no reason to neglect ${\cal S}_{SCS-\varepsilon 2}$, which is also of the second order in $(\xi_{n}-\xi_{n-1})$. Eq.~(\ref{e2term-action-tp}) shows that it would give rise to \beqa -iS \varepsilon \int_{0}^{T} dt \dot{\theta}(t)\dot{\phi}(t) \tan \frac{\theta(t)}{2}\left(\sin \frac{\theta(t)}{2}\right)^{2} \eeqa in addition to Klauder's $\epsilon$-term. It is seen from Eq.~(\ref{cterm-action-tp}) that a term of the same form emerges also from the canonical term, which is linear in $|\xi_{n}-\xi_{n-1}|$. This is due to the nonlinearity of the transformation (\ref{xi-phi-theta}). \subsubsection{stationary action} We now evaluate the contributions of each term of (\ref{actions-disscs}) separately to the stationary action.
By an argument similar to that for the coherent-state case, it is shown that the $\varepsilon 1$-term does not contribute to the stationary action: \beqa {\cal S}_{SCS-\varepsilon 1}[\{\bar{\xi}^{S}\},\{\xi^{S}\}] = {\cal O}(\varepsilon).\label{contri-e1term} \eeqa In the same way, if the $\varepsilon 2$-term is separated into three parts, the contribution from the intermediate times is obviously of ${\cal O}(\varepsilon)$, while the initial and final discontinuities respectively contribute \beqa \frac{S}{2(1+R)^{2}}(R-|\xi_{I}|^{2})^{2} + {\cal O}(\varepsilon),\qquad -\frac{S}{2(1+|\xi_{F}|^{2})^{2}}(R-|\xi_{F}|^{2})^{2}+ {\cal O}(\varepsilon). \label{contribution-initial-final} \eeqa In contrast to the case of the $\varepsilon 1$-term, neither of these vanishes. The contribution of the canonical term can be separated into three parts as well. The contribution from the intermediate times gives \beqa iST\frac{2R}{1+R} + {\cal O}(\varepsilon),\label{contribution-canonical-inter} \eeqa while the initial and final discontinuities respectively contribute \beqa \frac{S}{1+R}(R-|\xi_{I}|^{2}),\qquad \frac{S}{1+|\xi_{F}|^{2}}(R-|\xi_{F}|^{2}). \label{contribution-canonical-initial-final} \eeqa Finally, the contribution of the dynamical term, which contains the factor $\varepsilon$, comes from the intermediate times alone and gives \beqa iST\frac{1-R}{1+R} + {\cal O}(\varepsilon).\label{contribution-dynamical} \eeqa The sum of (\ref{contribution-canonical-inter}) and (\ref{contribution-dynamical}) is equal to $iST$, which would reproduce the correct result (\ref{exact-scs-ta}) only in the special case (\ref{special-bc}); in this special case, the discontinuity terms (\ref{contribution-initial-final}) and (\ref{contribution-canonical-initial-final}) vanish as well. However, in a generic case, the contribution of the discontinuity terms does not vanish, and the correct result is not reproduced even if one neglects the $\varepsilon 2$-term.
The fluctuation integral cannot remedy the result, either. If one evaluates the stationary action by use of (\ref{actions-disscs-tp}), the discontinuity contributions from the $\varepsilon 1$-term do not vanish: \beqa \tilde{{\cal S}}_{SCS-\varepsilon 1}[\{ \theta^{S}\},\{\phi^{S}\}] &=& -\frac{S}{4}\Bigg[ \theta_{F}^{2}+\theta_{I}^{2} -4 \tan^{-1}\sqrt{R} \left( \theta_{F}+\theta_{I} -2\tan^{-1}\sqrt{R} \right) \nonumber \\ && -(\sin\theta_{F})^{2}\left(\ln \frac{\sqrt{R}}{\tan \frac{\theta_{F}}{2}}\right)^{2} - \frac{4R}{(1+R)^{2}}\left(\ln \frac{\sqrt{R}}{\tan \frac{\theta_{I}}{2}}\right)^{2} \Bigg]. \eeqa The apparent contradiction between this and (\ref{contri-e1term}) is caused by the interplay of the nonlinearity of (\ref{xi-phi-theta}) and the unwarranted neglect of the discontinuities at the initial and final times in writing (\ref{actions-disscs}) and (\ref{actions-disscs-tp}); ${\cal S}_{SCS-\varepsilon 1}[\{\bar{\xi}^{S}\},\{\xi^{S}\}]$ is not equivalent to $\tilde{{\cal S}}_{SCS-\varepsilon 1}[\{\theta^{S}\},\{\phi^{S}\}]$. A closely related ambiguity exists in the expression (\ref{2eterm-action-disscs}). If $|\xi_{n}-\xi_{n-1}|$ is small, the factor $ 1 + |\xi_{n}|^{2}$ could be replaced by $1+|\xi_{n-1}|^{2}$, for instance. The corresponding factor $1 + |\xi_{F}|^{2}$ in (\ref{contribution-initial-final}) would then be replaced by $1+R$, and so on. Hence the expansion in $|\xi_{n}-\xi_{n-1}|$ is not unique due to the nonlinearity and discontinuity. In any case, neither (\ref{actions-disscs}b-d) nor (\ref{actions-disscs-tp}) reproduces the correct result. Therefore, we must go back to the original action (\ref{action-disscs}).
Substituting (\ref{def-pr}) into it, we find \beqa \frac{i}{\hbar} {\cal S}_{SCS}^{SAP} &:=& \frac{i}{\hbar} {\cal S}_{SCS} [ \{\bar{\xi}^{S}\} , \{\xi^{S}\}] \nonumber\\ &=& S\ln \frac{(1+\xi^{*}_{F}\xi^{S}_{N-1})^{2}}{1+|\xi_{F}|^{2}} +S \sum_{n=2}^{N-1}\left(2\ln \frac{1+P}{1+R} + i\varepsilon \frac{1-P}{1+P}\right) \nonumber \\ &&-2S\ln(1+R) +S\ln \frac{(1+\bar{\xi}^{S}_{1}\xi_{I})^{2}}{1+|\xi_{I}|^{2}} + {\cal O}(\varepsilon). \label{stationary-action-disscs} \eeqa The second term, where the value of $P$ to be used should be that of Eq.~(\ref{constP}), correct up to ${\cal O}(\varepsilon)$, gives $ iST + {\cal O}(\varepsilon)$, while the final discontinuity (the first term) and the initial discontinuity (the last term) give \beqa S\ln \frac{(1+R)^{4} }{( 1+ |\xi_{F}|^{2} )(1 + | \xi_{I} |^{2}) }+ {\cal O}(\varepsilon) ,\label{exaxt-sa-disscs} \eeqa thereby reproducing the complete amplitude (\ref{exact-scs-ta}). Conclusion: If the action is expanded to the second order in the differences $|\xi_n - \xi_{n-1}|$ and those terms which would correspond to Klauder's $\epsilon$-term are kept, the resulting action does not reproduce the correct result (\ref{exact-scs-ta}). It is essential to respect the "discontinuities at the initial and final times". \subsubsection{fluctuations} Let us evaluate the fluctuation integral and show that it reduces to unity. We start from the proper discrete action (\ref{action-disscs}). (If one started from (\ref{separated-action-disscs}), one would arrive at a non-sensical result as in the previous subsection.)
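The spin stationary action just assembled can also be checked by brute-force summation of the original discrete action along the stationary-action path. In the Python sketch below everything specific is a hypothetical illustration: the boundary data $\xi_I$, $\xi_F$, the units $\hbar = 1$, $S = 1$, and the constant-field Hamiltonian, which is taken as ${\cal H} = -\hbar S\,(1-\xi^*\xi)/(1+\xi^*\xi)$, a form consistent with the difference equations used above:

```python
import cmath

S = 1.0
xi_I, xi_F = 0.3 + 0.1j, 0.2 - 0.4j   # hypothetical boundary data
T, N = 1.0, 20_000
eps = T / N

# O(eps)-accurate stationary-action path (same form as for the oscillator),
# supplemented by the boundary definitions bar(xi)_N = xi_F*, xi_0 = xi_I.
xi = [xi_I * cmath.exp(-1j * n * eps) for n in range(N)] + [xi_F]
xib = [xi_I.conjugate()] + [xi_F.conjugate() * cmath.exp(1j * (n * eps - T))
                            for n in range(1, N + 1)]

def H(xb, x):
    """Assumed constant-field Hamiltonian divided by hbar (illustration)."""
    return -S * (1 - xb * x) / (1 + xb * x)

action = 0.0 + 0.0j                    # accumulates (i/hbar) S_SCS on the path
for n in range(1, N + 1):
    P_n = xib[n] * xi[n - 1]
    R_n, R_m = xib[n] * xi[n], xib[n - 1] * xi[n - 1]
    action += S * cmath.log((1 + P_n) ** 2 / ((1 + R_n) * (1 + R_m)))
    action += -1j * eps * H(xib[n], xi[n - 1])

R = xi_F.conjugate() * xi_I * cmath.exp(-1j * T)
closed_form = (1j * S * T + 2 * S * cmath.log(1 + R)
               - S * cmath.log(1 + abs(xi_F) ** 2)
               - S * cmath.log(1 + abs(xi_I) ** 2))
err = abs(action - closed_form)
print(err)   # O(eps)
```

Each intermediate term contributes $i\varepsilon S + {\cal O}(\varepsilon^2)$, while the two boundary terms supply the discontinuity logarithms, so the sum reproduces $iST$ plus the discontinuity contributions identified in the text.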
Decomposing the integration variables as (\ref{decomposed-xi-disscs}), we find \begin{mathletters}\beqa &&{\cal S}_{SCS}[\{\xi^* \}, \{\xi \}] \simeq {\cal S}_{SCS}^{SAP} + {\cal O}(\varepsilon) + {\cal S}^{(2)}_{SCS }[\{\eta^* \}, \{\eta \}], \label{sa-fluc2-disscs}\\ &&{\cal S}^{(2)}_{SCS}[\{\eta^* \}, \{\eta \}] := \sum_{n=1}^{N-1}\frac{i\hbar S}{(1+ \bar{\xi}^{S}_{n}\xi^{S}_{n})^{2}}\eta^{*}_{n}\eta_{n} + \sum_{n=2}^{N-1}\left( \frac{-i\hbar S}{(1+ \bar{\xi}^{S}_{n}\xi^{S}_{n-1})^{2}} - \left.\frac{\partial^{2}{\cal H}(\xi^{*}_{n},\xi_{n-1})}{\partial \xi^{*}_{n}\partial\xi_{n-1}}\right|_{S}\right) \eta^{*}_{n}\eta_{n-1} \nonumber \\ &-&\varepsilon\sum_{n=1}^{N-1}\frac{S}{(1+ \bar{\xi}^{S}_{n}\xi^{S}_{n-1})^{2}}\left.\left\{\frac{\partial}{\partial \xi^{*}_{n}}\left( ( 1 + \xi^{*}_{n}\xi_{n-1})^{2} \frac{\partial {\cal H}(\xi^{*}_{n},\xi_{n-1})}{\partial \xi^{*}_{n}} \right)\right\}\right|_{S} (\eta^{*}_{n})^{2} \nonumber \\ &-&\varepsilon\sum_{n=2}^{N}\frac{S}{(1+ \bar{\xi}^{S}_{n}\xi^{S}_{n-1})^{2}}\left.\left\{\frac{\partial}{\partial \xi_{n-1}}\left(( 1 + \xi^{*}_{n}\xi_{n-1})^{2} \frac{\partial {\cal H}(\xi^{*}_{n},\xi_{n-1})}{\partial \xi_{n-1}} \right)\right\}\right|_{S} (\eta_{n-1})^{2} , \label{fluc-action-disscs} \eeqa \end{mathletters}with the symbol $\simeq$ indicating equality up to the second order in the $\eta_{n}$'s. Accordingly the amplitude under consideration may be approximated as \begin{mathletters}\beqa \< \xi_{F} | e^{ -i\hat{H}T/\hbar } | \xi_{I} \> &\simeq& \exp \left( \frac{i}{\hbar} {\cal S}_{SCS}^{SAP} \right) {\cal K}^{(2)}_{SCS}(T) , \\ {\cal K}^{(2)}_{SCS} (T) &=& \lim_{ N \to \infty} \int\prod_{n=1}^{N-1} \frac{2S+1}{2\pi i} \frac{ d \eta_{n} d \eta^{*}_{n} }{\{1+(\bar{\xi}^{S}_{n}+\eta^{*}_{n})(\xi^{S}_{n}+\eta_{n})\}^{2}} \nonumber\\ &&\times\exp \left( \frac{i}{\hbar} {\cal S}^{(2)}_{SCS} [\{\eta^{*}\},\{\eta\}] \right).
\label{integral-fluc-disscs} \eeqa \end{mathletters}Since the integrand contains the Gaussian factor with exponent proportional to $-S\eta^{*}_{n}\eta_{n}$, the effective range of integration over $\eta_n$ is $|\eta_n| \stackrel{<}{\sim} S^{-1/2}$. Thus, the $\eta_n$'s can legitimately be said to constitute a small fluctuation provided that $S \gg 1$. Accordingly the present stationary-action approximation is in fact an expansion with respect to $1/S$, in agreement with the intuition that a spin should behave ``semi-classically'' for large $S$. In the case of the spin under a constant magnetic field, the coefficients of the third and fourth terms of (\ref{fluc-action-disscs}) vanish; ${\cal S}^{(2)}_{SCS}$ reduces to \begin{mathletters}\beqa && \frac{i}{\hbar}{\cal S}^{(2)}_{SCS}[\{\eta^* \}, \{\eta \}] = -\frac{2S}{(1+R)^{2}} \sum_{n=1}^{N-1} \eta^{*}_{n}( \eta_{n} - \alpha \eta_{n-1}) , \\ && \alpha := (1-i\varepsilon)\left(\frac{1+R}{1+P}\right)^{2}, \qquad \eta_{0}:=0. \eeqa \end{mathletters}Hence, we make formally the same change of integration variables as in (\ref{variable-change}) to find \beqa {\cal K}^{(2)}_{SCS}(T) &=& \lim_{N \to \infty}\int\prod_{n=1}^{N-1} \frac{2S}{2\pi i}\frac{ d \eta^{'}_{n} d \eta^{'*}_{n} }{( 1 + R ) ^{2} } \exp \left( -\frac{2S}{ ( 1 + R )^{2} } \eta^{'*}_{n} \eta^{'}_{n} \right)\left\{1+{\cal O}\left(\frac{1}{S}\right)\right\} \nonumber \\ &=& 1 + {\cal O}\left(\frac{1}{S}\right). \eeqa The complete amplitude (\ref{exact-scs-ta}) is thus recovered by treating the DTSCSPI in the stationary-action approximation provided that $S \gg 1$ \cite{fn7}. What would one find if one evaluated the fluctuation integral starting from Klauder's action? A formal expansion as in Eq. (\ref{expand-xiseta}) would give Eq.
(\ref{expand-sapfluc}) with the subscript $CS$ replaced by $KSCS$, where \beqa S^{(2)}_{KSCS}[\eta^{*},\eta] &:=& \frac{i\hbar S}{(1+R)^{2}}\int_{0}^{T}dt\bigg[ \epsilon\bigg\{ \dot{\eta}^{*}(t)\dot{\eta}(t) - \frac{2R(1-2R)}{(1+R)^{2}}\eta^{*}(t)\eta(t)\nonumber\\ &&-\frac{2iR}{1+R}(\eta^{*}(t)\dot{\eta}(t)-\dot{\eta}^{*}(t)\eta(t))\bigg\}+\frac{4iR}{1+R}\eta^{*}(t)\eta(t) \nonumber\\ &&+ \eta^{*}(t)\dot{\eta}(t)-\dot{\eta}^{*}(t)\eta(t) + 2i\frac{1-R}{1+R} \eta^{*}(t)\eta(t) \bigg]. \eeqa (Note that $R^{S}=R$.) One might adopt the discretization \beqa S^{(2)}_{KSCS}[\eta^{*},\eta] &\to& {\cal S}^{(2)}_{KSCS}[\{\eta^{*}\},\{\eta\}]\nonumber\\ &:=& \frac{i\hbar S}{(1+R)^{2}}\sum_{n=1}^{N} \bigg[ ( \eta^{*}_{n} - \eta^{*}_{n-1})(\eta_{n} - \eta_{n-1}) -\varepsilon^{2}\frac{2R(1-2R)}{(1+R)^{2}}\eta^{*}_{n}\eta_{n} \nonumber \\ &-& \varepsilon \frac{2iR}{(1+R)^{2}}\{ \eta^{*}_{n}(\eta_{n}-\eta_{n-1})-(\eta^{*}_{n}-\eta^{*}_{n-1})\eta_{n} \}+ \varepsilon\frac{4iR}{1+R}\eta^{*}_{n}\eta_{n}\nonumber\\ && + \{ \eta^{*}_{n}(\eta_{n}-\eta_{n-1})-(\eta^{*}_{n}-\eta^{*}_{n-1})\eta_{n}\} + 2i\varepsilon \frac{1-R}{1+R} \eta^{*}_{n}\eta_{n-1} \bigg], \label{flucint-app1} \eeqa supplemented with \beqa {\cal D}\eta{\cal D}\eta^{*} \to \prod_{n=1}^{N-1}\frac{2S}{2\pi i}\frac{d\eta_{n}d\eta^{*}_{n}}{(1+R)^{2}}. \eeqa This would lead to a wrong result (see Appendix A): \beqa \int {\cal D}\eta{\cal D}\eta^{*} \exp\left(\frac{i}{\hbar}S^{(2)}_{KSCS}[\eta^{*},\eta] \right) &\to& \lim_{N\to\infty}\int\prod_{n=1}^{N-1}\frac{2S}{2\pi i} \frac{d\eta_{n}d\eta^{*}_{n}}{(1+R)^{2}}\exp\left(\frac{i}{\hbar}{\cal S}^{(2)}_{KSCS}[\{\eta^{*}\},\{\eta\}]\right)\nonumber\\&& = \exp\left\{-i\frac{(1+2R)R}{(1+R)^{2}}T\right\}. \label{flucint-app2} \eeqa If one neglected the terms proportional to $\varepsilon$ except for the Hamiltonian term, the fluctuation integral would give unity; however, this neglect is {\em ad~hoc}. Otherwise, the fluctuation integral does not give unity.
Other discretization schemes do not work either. This is the difficulty encountered by Klauder's $\epsilon$-prescription in the case of the spin-coherent-state path integral. \acknowledgments The authors are also grateful to T. Kashiwa for a valuable discussion. This work has been supported by Grant-in-Aid for Scientific Research C No.08640471 and Grant-in-Aid for Scientific Research on Priority Areas No.271, Japan Ministry of Education and Culture. \begin{appendix} \section{calculation of fluctuation integral}\label{appenA} In order to calculate the fluctuation integral (\ref{flucint-app2}), we may write ${\cal S}^{(2)}_{KSCS}$ defined by (\ref{flucint-app1}) in the following matrix expression: \beqa \frac{i}{\hbar}{\cal S}^{(2)}_{KSCS} = -\frac{2S}{(1+R)^{2}}~ ^{t}{\bf \eta}^{*} {\cal M} {\bf \eta} \eeqa where the notation $ ^{t}{\bf \eta}^{*}$ and ${\bf \eta}$ is the same as in Eq.~(\ref{matrix-exp}), and \beqa {\cal M} := \left( \begin{array}{cccc} a & -c & & \\ -b & \ddots & \ddots & \\ & \ddots & \ddots & -c \\ & & -b & a \end{array}\right), \eeqa with \begin{mathletters} \beqa && a:= 1 + i\varepsilon \frac{2R}{1+R} +{\cal O}(\varepsilon^{2}), \\ && b:= 1 - i\varepsilon \left( \frac{R}{(1+R)^{2}} + \frac{1-R}{1+R} \right), \\&& c:= i\varepsilon \frac{R}{(1+R)^{2}}. \eeqa \end{mathletters}The determinant of ${\cal M}$ is obtained as \beqa \det {\cal M} = \frac{\lambda_{+}^{N-2}(a^{2}-bc-\lambda_{-}a)-\lambda_{-}^{N-2}(a^{2}-bc-\lambda_{+}a)}{\lambda_{+}-\lambda_{-}}, \eeqa where \beqa && \lambda_{+} := \frac{a+\sqrt{a^{2}-4bc}}{2} = 1 + i\varepsilon \frac{(1+2R)R}{(1+R)^{2}} + {\cal O}(\varepsilon^{2}), \\ && \lambda_{-} := \frac{a-\sqrt{a^{2}-4bc}}{2} = i\varepsilon \frac{R}{(1+R)^{2}} + {\cal O}(\varepsilon^{2}).
\eeqa Since $\lambda_{-}$ is ${\cal O}({\varepsilon})$, we can reduce $\det{\cal M}$ in the limit $N \to \infty$ to \beqa \det{\cal M} = \lambda^{N}_{+}\{ 1 +{\cal O}(\varepsilon) \} = \exp \left\{ i\frac{(1+2R)R}{(1+R)^{2}}T \right\} + {\cal O}(\varepsilon). \eeqa Hence, we obtain the result (\ref{flucint-app2}). \end{appendix}
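The limiting behavior above is easy to check numerically. The sketch below (an illustrative check, not part of the derivation; the values of $R$, $T$ and the matrix size $N-1$ are arbitrary choices) evaluates $\det{\cal M}$ for the tridiagonal matrix through the standard three-term recurrence and compares it with the exponential:

```python
import numpy as np

def det_M(R, T, N):
    """Determinant of the (N-1)x(N-1) tridiagonal matrix with diagonal a,
    sub-diagonal -b, super-diagonal -c, via D_k = a*D_{k-1} - b*c*D_{k-2}."""
    eps = T / N
    a = 1 + 1j * eps * 2 * R / (1 + R)
    b = 1 - 1j * eps * (R / (1 + R) ** 2 + (1 - R) / (1 + R))
    c = 1j * eps * R / (1 + R) ** 2
    d_prev, d = 1.0 + 0j, a          # D_0 = 1, D_1 = a
    for _ in range(N - 2):           # recurse up to D_{N-1}
        d_prev, d = d, a * d - b * c * d_prev
    return d

R, T = 1.0, 1.0
approx = det_M(R, T, N=20000)
exact = np.exp(1j * (1 + 2 * R) * R / (1 + R) ** 2 * T)
print(abs(approx - exact))           # O(T/N): shrinks as N grows
```

The recurrence reproduces the closed form $(\lambda_{+}^{N}-\lambda_{-}^{N})/(\lambda_{+}-\lambda_{-})$, and the residual difference is indeed ${\cal O}(\varepsilon)$ with $\varepsilon = T/N$.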
\section{Introduction} Modeling in computer vision has long been dominated by convolutional neural networks (CNNs). Recently, transformer models from the field of natural language processing (NLP) \cite{DBLP:journals/corr/abs-1810-04805,NIPS2017-3f5ee243,10.1145/3437963.3441667} have attracted great interest among computer vision (CV) researchers. The Vision Transformer (ViT) \cite{DBLP:journals/corr/abs-2010-11929} model and its variants have achieved state-of-the-art results on many core vision tasks \cite{Zhao2020CVPR,pmlr-v139-touvron21a}. The original ViT, inherited from NLP, first splits an input image into patches and appends a trainable class (CLS) token to the input patch tokens. The patches are then treated in the same way as tokens in NLP applications: self-attention layers provide global information communication, and the output CLS token is finally used for prediction. Recent work \cite{DBLP:journals/corr/abs-2010-11929,Liu-2021-ICCV} shows that ViT outperforms state-of-the-art convolutional networks \cite{Huang-2018-CVPR} on large-scale datasets. However, when trained on smaller datasets, ViT usually underperforms its counterparts based on convolutional layers. The original ViT lacks inductive biases such as locality and translation equivariance, which leads to overfitting and data inefficiency of ViT models. To address this data inefficiency, numerous subsequent efforts have studied how to introduce the locality of CNN models into ViT models to improve their scalability \cite{NEURIPS2021-4e0928de,DBLP:journals/corr/abs-2107-00641}. These methods typically re-introduce hierarchical architectures to compensate for the loss of non-locality, such as the Swin Transformer \cite{Liu-2021-ICCV}. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{DifferentAttentionMechanisms.pdf} \caption{Illustration of different self-attention mechanisms in Transformer backbones. Our AEWin differs in two aspects.
First, we split the multi-heads into three groups and perform self-attention in the local window and along the horizontal and vertical axes simultaneously. Second, we set different token lengths for window attention and axial attention to achieve fine-grained local and coarse-grained global interactions, which yields a better trade-off between computation cost and capability.} \label{DifferentAttentionMechanisms-flabel} \end{figure} Local self-attention with hierarchical architecture (LSAH-ViT) has been demonstrated to address data inefficiency and alleviate model overfitting. However, LSAH-ViT uses window-based attention at shallow layers, losing the non-locality of the original ViT; this limits model capacity and hence scales unfavorably on larger data regimes such as ImageNet-21K \cite{NEURIPS2021-20568692}. To bridge the connection between windows, previous LSAH-ViT works propose specialized designs such as the “haloing operation” \cite{Vaswani-2021-CVPR} and the “shifted window” \cite{Liu-2021-ICCV}. These approaches often require complex architectural designs, and the receptive field is enlarged so slowly that a great number of blocks must be stacked to achieve global self-attention. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{SplitGroup.pdf} \caption{Illustration of the parallel implementation of AEWin. It is worth noting that the token length of axial attention is only half of that of window attention, so as to set different granularities for local and global interactions.} \label{SplitGroup-flabel} \end{figure} When observing a scene, humans usually focus on a local region while attending to non-attentional regions at coarse granularity. Based on this observation, we present the Axially Expanded Window (AEWin) self-attention, which is illustrated in Figure \ref{DifferentAttentionMechanisms-flabel} and compared with existing self-attention mechanisms.
Considering that the visual dependencies between nearby regions are usually stronger than those between far-away ones, we perform fine-grained self-attention within the local window and coarse-grained attention on the horizontal and vertical axes. We split the multi-heads into three parallel groups, where the number of heads in each of the first two groups is half of that in the final group; the first two groups are used for self-attention on the horizontal and vertical axes respectively, and the final group is used for self-attention within the local window. It is worth noting that with the AEWin self-attention mechanism, self-attention in the local window, the horizontal axis, and the vertical axis is calculated in parallel, and this parallel strategy introduces no extra computation cost. As shown in Figure \ref{SplitGroup-flabel}, the feature map focuses on its closest surroundings with long tokens and on the surroundings along its horizontal and vertical axes with short tokens to capture coarse-grained visual dependencies. Therefore, it can capture both short- and long-range visual dependencies efficiently. Benefiting from the fine-grained window self-attention and coarse-grained axial self-attention, our AEWin self-attention achieves a better balance between performance and computational cost than the existing local self-attention mechanisms shown in Figure \ref{DifferentAttentionMechanisms-flabel}. Based on the proposed AEWin self-attention, we design a general vision transformer backbone with a hierarchical architecture, named AEWin Transformer. Our tiny variant AEWin-T achieves 83.6\% Top-1 accuracy on ImageNet-1K without any extra training data or labels. \section{Related Work} Transformers were proposed by Vaswani et al. \cite{NIPS2017-3f5ee243} for machine translation, and have since become the state-of-the-art method in many NLP tasks.
Recently, the pioneering work ViT \cite{DBLP:journals/corr/abs-2010-11929} demonstrated that pure Transformer-based architectures can also achieve very competitive results. One challenge for vision-transformer-based models is data efficiency. Although ViT \cite{DBLP:journals/corr/abs-2010-11929} can perform better than convolutional networks when hundreds of millions of images are available for pre-training, such a data requirement is not always practical. To improve data efficiency, many recent works have focused on introducing the locality and hierarchical structure of convolutional neural networks into ViT, proposing a series of local and hierarchical ViTs. The Swin Transformer \cite{Liu-2021-ICCV} focuses attention on shifted windows in a hierarchical architecture. Nested ViT \cite{zhang2022nested} proposes a block aggregation module, which more easily achieves cross-block non-local information communication. Focal ViT \cite{DBLP:journals/corr/abs-2107-00641} presents focal self-attention, in which each token attends to its closest surrounding tokens at fine granularity and to tokens far away at coarse granularity, effectively capturing both short- and long-range visual dependencies. Based on the local window, a series of local self-attentions with different shapes were proposed in subsequent work. Axial self-attention \cite{DBLP:journals/corr/abs-1912-12180} and criss-cross attention \cite{Huang-2019-ICCV} achieve longer-range dependencies in the horizontal and vertical directions by performing self-attention within each single row or column of the feature map. CSWin \cite{Dong-2022-CVPR} proposed a cross-shaped window self-attention region comprising multiple rows and columns. Pale Transformer \cite{DBLP:journals/corr/abs-2112-14000} proposes Pale-Shaped self-Attention, which performs self-attention within a pale-shaped region to capture richer contextual information.
The above attention mechanisms are either limited by the restricted window size or incur a high computation cost, and thus cannot achieve a good trade-off between computation cost and global-local interaction. In this paper, we propose a new hierarchical vision Transformer backbone by introducing axially expanded window self-attention. Focal ViT \cite{DBLP:journals/corr/abs-2107-00641} and CSWin \cite{Dong-2022-CVPR} are the works most closely related to our AEWin, which allows a better trade-off between computation cost and global-local interaction than both. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{OverallArchitecture.pdf} \caption{(a) The overall architecture of our AEWin Transformer. (b) The composition of each block. } \label{OverallArchitecture-flabel} \end{figure*} \section{Method} \subsection{Overall Architecture} An overview of the AEWin-ViT architecture is presented in Figure \ref{OverallArchitecture-flabel} (a), which illustrates the tiny version. AEWin-ViT consists of four hierarchical stages; we follow the popular design of Swin-ViT \cite{Liu-2021-ICCV} to build a hierarchical architecture that captures multi-scale features, alternately using shifted windows. Each stage contains a patch merging layer and multiple AEWin Transformer blocks. As the network gets deeper, the input features are spatially downsampled by a certain ratio through the patch merging layer, and the channel dimension is doubled to produce a hierarchical image representation. Specifically, the spatial downsampling ratio is set to 4 in the first stage and 2 in the last three stages, using the same patch merging layer as Swin-ViT. The outputs of the patch merging layer are fed into the subsequent AEWin Transformer blocks, and the number of tokens is kept constant. Finally, we apply global average pooling on the output of the last block to obtain the image representation vector for the final prediction.
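The stage-wise feature hierarchy just described can be traced in a few lines. The sketch below assumes a $224\times 224$ input and a stage-1 channel dimension of 96 (the Swin-T value, used here only for illustration, since the AEWin channel widths are not listed in this section):

```python
def stage_shapes(H=224, W=224, C0=96):
    # stage 1: patch merging with ratio 4; stages 2-4: ratio 2, channels doubled
    shapes = [(H // 4, W // 4, C0)]
    for _ in range(3):
        h, w, c = shapes[-1]
        shapes.append((h // 2, w // 2, 2 * c))
    return shapes

print(stage_shapes())  # [(56, 56, 96), (28, 28, 192), (14, 14, 384), (7, 7, 768)]
```

Note that the token count shrinks by a factor of 4 per stage while the channel width only doubles, which is what makes the deeper stages cheap.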
\subsection{Axially Expanded Window Self-Attention} LSAH-ViT uses window-based attention at shallow layers, losing the non-locality of the original ViT; this limits model capacity and hence scales unfavorably on larger data regimes. Existing works propose specialized designs, such as the “haloing operation” \cite{Vaswani-2021-CVPR} and the “shifted window” \cite{Liu-2021-ICCV}, to communicate information between windows. These approaches often require complex architectural designs, and the receptive field is enlarged so slowly that a great number of blocks must be stacked to achieve global self-attention. To capture dependencies varying from short-range to long-range, inspired by how humans observe scenes, we propose Axially Expanded Window Self-Attention (AEWin-Attention), which performs fine-grained self-attention within the local window and coarse-grained self-attention on the horizontal and vertical axes. \noindent \textbf{Axially Expanded Windows.} Following the multi-head self-attention mechanism, the input feature $X\in {{R}^{(H\times W)\times C}}$ is first linearly projected to $K$ heads, and then each head performs local self-attention within the window, the horizontal axis, or the vertical axis. For horizontal axial self-attention, $X$ is evenly split into non-overlapping horizontal stripes $[{{X}^{1}},\cdots ,{{X}^{H}}]$, each containing $1\times W$ tokens.
Formally, suppose the projected queries, keys and values of the ${{k}^{th}}$ head all have dimension ${{d}_{k}}$, then the output of the ${{k}^{th}}$ head's horizontal axis self-attention is defined as: \begin{equation} \begin{aligned} & X=[{{X}^{1}},{{X}^{2}},\cdots ,{{X}^{H}}], \\ & Y_{k}^{i}=\text{MSA}({{X}^{i}}W_{k}^{Q},{{X}^{i}}W_{k}^{K},{{X}^{i}}W_{k}^{V}), \\ & \text{H-MS}{{\text{A}}_{k}}(X)=[Y_{k}^{1},Y_{k}^{2},\cdots ,Y_{k}^{H}] \\ \end{aligned} \label{horizontalAttention-glabel} \end{equation} where ${{X}^{i}}\in {{R}^{(1\times W)\times C}}$, $i\in \left\{ 1,2,\cdots H \right\}$, and $\text{MSA}$ indicates the Multi-head Self-Attention. $W_{k}^{Q}\in {{R}^{C\times {{d}_{k}}}}$, $W_{k}^{K}\in {{R}^{C\times {{d}_{k}}}}$, $W_{k}^{V}\in {{R}^{C\times {{d}_{k}}}}$ represent the projection matrices of queries, keys and values for the ${{k}^{th}}$ head respectively, and ${{d}_{k}}=C/K$. The vertical axial self-attention can be similarly derived, and its output for ${{k}^{th}}$ head is denoted as $\text{V-MS}{{\text{A}}_{k}}(X)$. For windowed self-attention, $X$ is evenly split into non-overlapping local windows $[X_{m}^{1},\cdots ,X_{m}^{N}]$ with height and width equal to $M$, and each window contains $M\times M$ tokens. Based on the above analysis, the output of the windowed self-attention for ${{k}^{th}}$ head is defined as: \begin{equation} \begin{aligned} & {{X}_{m}}=[X_{m}^{1},X_{m}^{2},\cdots ,X_{m}^{N}], \\ & Y_{k}^{i}=\text{MSA}(X_{m}^{i}W_{k}^{Q},X_{m}^{i}W_{k}^{K},X_{m}^{i}W_{k}^{V}), \\ & \text{W-MS}{{\text{A}}_{k}}(X)=[Y_{k}^{1},Y_{k}^{2},\cdots ,Y_{k}^{N}] \\ \end{aligned} \label{windowAttention-glabel} \end{equation} where $N=(H\times W)/(M\times M)$, $M$ defaults to 7. 
\noindent \textbf{Parallel implementation of different granularities.} We split the $K$ heads into three parallel groups, with $K/4$ heads in each of the first two groups and $K/2$ heads in the last group, thus building different granularities for local and global attention, as shown in Figure \ref{SplitGroup-flabel}. The first group of heads performs horizontal-axis self-attention, the second group performs vertical-axis self-attention, and the third group performs local window self-attention. Finally, the outputs of these three parallel groups are concatenated back together. \begin{equation} \text{hea}{{\text{d}}_{k}}=\left\{ \begin{matrix} \text{H-MS}{{\text{A}}_{k}}\text{(}X\text{) } \\ \text{V-MS}{{\text{A}}_{k}}\text{(}X\text{) } \\ \text{W-MS}{{\text{A}}_{k}}\text{(}X\text{) } \\ \end{matrix}\begin{array}{*{35}{l}} k=1,\cdots ,K/4 \\ k=K/4+1,\cdots ,K/2 \\ k=K/2+1,\cdots ,K \\ \end{array} \right. \label{mergingtoken-glabel} \end{equation} \begin{equation} \text{AEWin(}X\text{)=Concat(hea}{{\text{d}}_{1}}\text{,}\cdots \text{,hea}{{\text{d}}_{K}}\text{)}{{W}^{O}} \label{finalOutMlp-glabel} \end{equation} where ${{W}^{O}}\in {{R}^{C\times C}}$ is the commonly used projection matrix that integrates the output tokens of the three groups. Compared to a step-by-step implementation of axial and windowed self-attention separately, such a parallel mechanism has lower computation complexity and can realize different granularities by carefully designing the number of heads in each group.
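The grouped computation in Eqs. (\ref{mergingtoken-glabel}) and (\ref{finalOutMlp-glabel}) can be sketched in NumPy as follows. This is a hypothetical minimal forward pass, not the paper's implementation: single-head attention per group member with random projections, no relative position bias, and tiny illustrative shapes ($M=2$, $K=8$):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msa(x, Wq, Wk, Wv):
    # single-head scaled dot-product attention over the second-to-last axis
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def aewin(X, K=8, M=2, seed=0):
    # X: (H, W, C) with H, W divisible by M and C divisible by K
    rng = np.random.default_rng(seed)
    H, W, C = X.shape
    d = C // K
    heads = []
    for k in range(K):
        Wq, Wk_, Wv = [rng.standard_normal((C, d)) / np.sqrt(C) for _ in range(3)]
        if k < K // 4:            # H-MSA: each row is a stripe of W tokens
            y = msa(X, Wq, Wk_, Wv)
        elif k < K // 2:          # V-MSA: each column is a stripe of H tokens
            y = msa(X.transpose(1, 0, 2), Wq, Wk_, Wv).transpose(1, 0, 2)
        else:                     # W-MSA: non-overlapping MxM local windows
            xw = X.reshape(H // M, M, W // M, M, C).transpose(0, 2, 1, 3, 4)
            y = msa(xw.reshape(H // M, W // M, M * M, C), Wq, Wk_, Wv)
            y = y.reshape(H // M, W // M, M, M, d).transpose(0, 2, 1, 3, 4)
            y = y.reshape(H, W, d)
        heads.append(y)
    Wo = rng.standard_normal((C, C)) / np.sqrt(C)   # the W^O projection
    return np.concatenate(heads, axis=-1) @ Wo      # (H, W, C)

out = aewin(np.random.default_rng(1).standard_normal((4, 4, 8)))
print(out.shape)                                    # (4, 4, 8)
```

Because every head only reshapes its own view of $X$, the three groups are independent and can run concurrently, which is the point of the parallel design.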
\noindent \textbf{Complexity Analysis.} Given an input feature of size $H\times W\times C$ and window size $(M,M)$, where $M$ is set to 7 by default, the standard global self-attention has a computational complexity of \begin{equation} \Omega (\text{Global})=4HW{{C}^{2}}+2{{(HW)}^{2}}C \label{GlobalComplexity-glabel} \end{equation} whereas our proposed AEWin-Attention under the parallel implementation has a computational complexity of \begin{equation} \Omega (\text{AEWin})=4HW{{C}^{2}}+HWC\left(\frac{1}{2}H+\frac{1}{2}W+{{M}^{2}}\right) \label{AEWinComplexity-glabel} \end{equation} which clearly alleviates the computation and memory burden compared with the global one, since $2HW\gg \frac{1}{2}H+\frac{1}{2}W+{{M}^{2}}$ holds for typical feature-map sizes. \subsection{AEWin Transformer Block} Equipped with the above self-attention mechanism, the AEWin Transformer block is formally defined as: \begin{equation} \begin{aligned} & \overset{\wedge }{\mathop{{{X}^{l}}}}\,=\text{AEWin-Attention}(\text{LN}({{X}^{l-1}}))+{{X}^{l-1}}, \\ & {{X}^{l}}=\text{MLP}(\text{LN}(\overset{\wedge }{\mathop{{{X}^{l}}}}\,))+\overset{\wedge }{\mathop{{{X}^{l}}}}\, \\ \end{aligned} \label{AEWinBlock-glabel} \end{equation} where $\overset{\wedge }{\mathop{{{X}^{l}}}}\,$ and ${{X}^{l}}$ denote the output features of the $\mathsf{AEWin}$ module and the $\mathsf{MLP}$ module for block $l$, respectively. In computing self-attention, we follow Swin-ViT by adding a relative position bias $B$ to each head when computing similarity. \section{Experiments} We first compare our AEWin Transformer with state-of-the-art Transformer backbones on ImageNet-1K \cite{5206848} for image classification. We then compare the performance of AEWin and state-of-the-art Transformer backbones on the small datasets Caltech-256 \cite{griffin2007caltech} and Mini-ImageNet \cite{krizhevsky2012imagenet}. Finally, we perform comprehensive ablation studies to analyze each component of AEWin Transformer.
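Before turning to the experiments, the two complexity formulas (\ref{GlobalComplexity-glabel}) and (\ref{AEWinComplexity-glabel}) can be compared numerically. The sketch below plugs in an illustrative stage-1 feature map of $56\times 56$ with $C=96$ (these particular numbers are an assumption for illustration):

```python
def flops_global(H, W, C):
    # Omega(Global) = 4 H W C^2 + 2 (H W)^2 C
    return 4 * H * W * C**2 + 2 * (H * W) ** 2 * C

def flops_aewin(H, W, C, M=7):
    # Omega(AEWin) = 4 H W C^2 + H W C (H/2 + W/2 + M^2)
    return 4 * H * W * C**2 + H * W * C * (H / 2 + W / 2 + M**2)

ratio = flops_aewin(56, 56, 96) / flops_global(56, 56, 96)
print(f"{ratio:.3f}")   # AEWin costs only a small fraction of global attention
```

The attention term dominates the global cost at high resolution, so the saving is largest in the early stages.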
\subsection{Experiment Settings} \noindent \textbf{Dataset}. For image classification, we benchmark the proposed AEWin Transformer on ImageNet-1K, which contains 1.28M training images and 50K validation images from 1,000 classes. To explore the performance of AEWin Transformer on small datasets, we also conduct experiments on Caltech-256 and Mini-ImageNet. Caltech-256 has 257 classes with more than 80 images in each class. Mini-ImageNet contains a total of 60,000 images from 100 classes. \noindent \textbf{Implementation details}. Our setting mostly follows \cite{Liu-2021-ICCV}. We use the PyTorch toolbox \cite{paszke2019pytorch} to implement all our experiments. We employ the AdamW \cite{kingma2014adam} optimizer for 300 epochs, using a cosine decay learning-rate scheduler with 20 epochs of linear warm-up. A batch size of 256, an initial learning rate of 0.001, and a weight decay of 0.05 are used. ViT-B/16 uses an image size of 384 and the others use 224. We include most of the augmentation and regularization strategies of \cite{Liu-2021-ICCV} in training. \subsection{Image Classification on ImageNet-1K} \begin{table}[h] \centering \caption{Comparison of different models on ImageNet-1K.} \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccc|c} \hline Method & Image Size & Param. & FLOPs & Top-1 acc.
\\ \hline ViT-B \cite{DBLP:journals/corr/abs-2010-11929} & ${{384}^{2}}$ & 86M & 55.4G &77.9 \\ \hline Swin-T \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 29M & 4.5G &81.3 \\ Swin-B \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 88M & 15.4G &83.3 \\ \hline Pale-T \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 22M & 4.2G &83.4 \\ Pale-B \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 85M & 15.6G &84.9 \\ \hline CSWin-T \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 23M & 4.3G &82.7 \\ CSWin-B \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 78M & 15.0G &84.2 \\ \hline AEWin-T (ours) & ${{224}^{2}}$ & 23M & 4.0G & \textbf{83.6} \\ AEWin-B (ours) & ${{224}^{2}}$ & 78M & 14.6G &\textbf{85.0} \\ \hline \end{tabular} } \label{ImageNet-Top1} \end{table} Table \ref{ImageNet-Top1} compares the performance of our AEWin Transformer with state-of-the-art Vision Transformer backbones on ImageNet-1K. Compared to ViT-B, our AEWin-T model is +5.7\% better with much lower computation complexity. Meanwhile, our AEWin Transformer variants outperform the state-of-the-art Transformer-based backbones; AEWin-T is +0.9\% higher than the most closely related CSWin-T. AEWin Transformer has the lowest computation complexity of all models in Table \ref{ImageNet-Top1}. For example, AEWin-T achieves 83.6\% Top-1 accuracy with only 4.0G FLOPs. For the base model setting, our AEWin-B also achieves the best performance. \begin{table}[h] \centering \caption{Comparison of different models on Caltech-256.} \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccc|c} \hline Method & Image Size & Param. & FLOPs & Top-1 acc.
\\ \hline ViT-B \cite{DBLP:journals/corr/abs-2010-11929} & ${{384}^{2}}$ & 86M & 55.4G &37.6 \\ \hline Swin-T \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 29M & 4.5G &43.3 \\ Swin-B \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 88M & 15.4G &46.7 \\ \hline Pale-T \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 22M & 4.2G &45.2 \\ Pale-B \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 85M & 15.6G &47.1 \\ \hline CSWin-T \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 23M & 4.3G &47.7 \\ CSWin-B \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 78M & 15.0G &48.5 \\ \hline AEWin-T (ours) & ${{224}^{2}}$ & 23M & 4.0G & \textbf{48.6} \\ AEWin-B (ours) & ${{224}^{2}}$ & 78M & 14.6G &\textbf{49.3} \\ \hline \end{tabular} } \label{Caltech-256-Top1} \end{table} \subsection{Image Classification on Caltech-256 and Mini-ImageNet} We show the performance of ViTs on small datasets in Table \ref{Caltech-256-Top1} and Table \ref{Mini-ImageNet-Top1}. It is known that ViTs usually perform poorly on such tasks, as they typically require large datasets to be trained on. Models that perform well on large-scale ImageNet do not necessarily perform well on the small-scale Mini-ImageNet and Caltech-256; e.g., ViT-B has a top-1 accuracy of 58.3\% and Swin-B of 67.4\% on Mini-ImageNet, which suggests that ViTs are more challenging to train with less data. Our proposed AEWin can significantly improve data efficiency and performs well on small datasets such as Caltech-256 and Mini-ImageNet. Compared with CSWin-B, AEWin-B improves by 0.8\% and 0.7\% on the two datasets, respectively. \begin{table}[h] \centering \caption{Comparison of different models on Mini-ImageNet.} \resizebox{\linewidth}{!}{ \begin{tabular}{l|ccc|c} \hline Method & Image Size & Param. & FLOPs & Top-1 acc.
\\ \hline ViT-B \cite{DBLP:journals/corr/abs-2010-11929} & ${{384}^{2}}$ & 86M & 55.4G &58.3 \\ \hline Swin-T \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 29M & 4.5G &66.3 \\ Swin-B \cite{Liu-2021-ICCV} & ${{224}^{2}}$ & 88M & 15.4G &67.4 \\ \hline Pale-T \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 22M & 4.2G &67.4 \\ Pale-B \cite{DBLP:journals/corr/abs-2112-14000} & ${{224}^{2}}$ & 85M & 15.6G &68.5 \\ \hline CSWin-T \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 23M & 4.3G &66.8 \\ CSWin-B \cite{Dong-2022-CVPR} & ${{224}^{2}}$ & 78M & 15.0G &68.4 \\ \hline AEWin-T (ours) & ${{224}^{2}}$ & 23M & 4.0G & \textbf{68.2} \\ AEWin-B (ours) & ${{224}^{2}}$ & 78M & 14.6G &\textbf{69.1} \\ \hline \end{tabular} } \label{Mini-ImageNet-Top1} \end{table} \subsection{Ablation Study} In this section, we compare AEWin with existing self-attention mechanisms. For a fair comparison, we use Swin-T as the backbone and only change the self-attention mechanism. As shown in Table \ref{differentMechanisms}, our AEWin self-attention mechanism performs better than the existing self-attention mechanisms. \begin{table}[h] \centering \caption{Comparison of different self-attention mechanisms.} \resizebox{\linewidth}{!}{ \begin{tabular}{l|c} \hline Attention mode & ImageNet-1K Top-1 acc. \\ \hline Shifted Window \cite{Liu-2021-ICCV} & 81.3 \\ Sequential Axial \cite{DBLP:journals/corr/abs-1912-12180} &81.5 \\ Criss-Cross \cite{Huang-2019-ICCV} & 81.7 \\ Pale \cite{DBLP:journals/corr/abs-2112-14000} & 82.5 \\ Cross-shaped window \cite{Dong-2022-CVPR} & 82.2 \\ Axially expanded window & \textbf{83.1} \\ \hline \end{tabular}} \label{differentMechanisms} \end{table} \section{Conclusions} This work proposes a new efficient self-attention mechanism, called axially expanded window attention (AEWin-Attention).
Compared with previous local self-attention mechanisms, AEWin-Attention mimics the way humans observe a scene by performing fine-grained attention locally and coarse-grained attention in non-attentional regions. Different granularities are obtained by assigning different numbers of heads to the groups, and the parallel computation of the three groups further improves the efficiency of AEWin. Based on the proposed AEWin-Attention, we develop a Vision Transformer backbone, called AEWin Transformer, which achieves state-of-the-art performance on ImageNet-1K for image classification. \bibliographystyle{named}
\section{Introduction} Initializing Deep Neural Networks (DNNs) correctly is crucial for trainability and convergence. In recent years, there has been remarkable progress in tackling the problem of exploding and vanishing gradients. One line of work utilizes the convergence of DNNs to Gaussian Processes in the limit of infinite width \citep{neal1996priors, lee2018deep, matthews2018gaussian, novak2018bayesian, garriga2018deep, hron2020infinite, yang2019tensor}. The infinite width analysis is then used to determine critical initialization for the hyperparameters of the network \citep{he2015delving, poole2016exponential, schoenholz2016deep, lee2018deep, roberts2021principles, doshi2021critical}. It has further been shown that dynamical isometry can improve the performance of DNNs \citep{pennington2018spectral, xiao2018dynamical}. Exploding and vanishing gradients can also be regulated with special activation functions such as SELU \citep{klambauer2017self-normalizing} and GPN \citep{lu2020bidirectionally}. Deep Kernel Shaping \citep{martens2021deep, zhang2022deep} improves trainability of deep networks by systematically controlling $Q$ and $C$ maps. Normalization layers such as LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch} and GroupNorm \citep{wu2018group} facilitate training of DNNs by significantly enhancing the critical regime \citep{doshi2021critical}. There have also been algorithmic attempts at regulating the forward pass, such as LSUV \citep{mishkin2015lsuv}. Another line of work sets networks with residual connections to criticality by suppressing the contribution from the residual branches at initialization. In Highway Networks \citep{srivastava2015training}, this is achieved by initializing the network to have a small “transform gate”. \citet{goyal2017accurate} achieve this in ResNets by initializing the scaling coefficient for the residual block's last BatchNorm at 0.
In Fixup \citep{zhang2019fixup} and T-Fixup \citep{huang2020improving}, careful weight-initialization schemes ensure suppression of residual branches in deep networks. Techniques such as SkipInit \citep{de2020batch}, LayerScale \citep{touvron2021cait} and ReZero \citep{bachlechner2021rezero} multiply the residual branches by a trainable parameter, initialized to a small value or to 0. Despite this progress, the aforementioned techniques are limited by either the availability of analytical solutions, specific use of normalization layers, or the use of residual connections. One needs to manually decide on the techniques to be employed on a case-by-case basis. In this work, we propose a simple algorithm, which we term $\texttt{AutoInit}$, that automatically initializes a DNN to criticality. Notably, the algorithm can be applied to any feedforward DNN, irrespective of architectural details, the large width assumption, or the existence of an analytic treatment. We expect that $\texttt{AutoInit}$ will be an essential tool in architecture search tasks because it will always ensure that a never-before-seen architecture is initialized well. \subsection{Criticality in Deep Neural Networks} In the following, we employ the definition of criticality using \emph{Partial Jacobian} \citep{doshi2021critical}. Consider a DNN made up of a sequence of blocks. Each block consists of Fully Connected layers, Lipschitz activation functions, Convolutional layers, Residual Connections, LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch}, AffineNorm \citep{touvron2021resmlp}, LayerScale \citep{touvron2021cait}, or any combination thereof. We consider a batched input to the network, where each input tensor $x \in \mathbb{R}^{n^0_1} \otimes \mathbb{R}^{n^0_2} \otimes \cdots$ is taken from the batch $B$ of size $\lvert B \rvert$. The output tensor of the $l^{th}$ block is denoted by $h^l (x) \in \mathbb{R}^{n^l_1} \otimes \mathbb{R}^{n^l_2} \otimes \cdots$. 
$h^{l+1}(x)$ depends on $h^l(x)$ through a layer-dependent function $\mathcal{F}^{l}$, denoting the operations of the aforementioned layers. This function, in turn, depends on the parameters of the various layers within the block, denoted collectively by $\theta^{l+1}$. The explicit layer dependence of the function $\mathcal{F}^{l}$ highlights that we do not require the network to have self-repeating layers (blocks). We note that $h^{l+1} (x)$ can, in general, depend on $h^{l} (x')$ for all $x'$ in the batch $B$, which will indeed be the case when we employ BatchNorm. The recurrence relation for such a network can be written as \begin{align}\label{eq:DNNrecursion} h^{l+1} (x) = \mathcal{F}^{l+1}_{\theta^{l+1}} \left( \{h^l(x') \;|\; \forall x' \in B \} \right) \,, \end{align} where we have suppressed all the indices for clarity. Each parameter matrix $\theta^{l+1}$ is sampled from a zero-mean distribution. We will assume that some $2+\delta$ moments of $|\theta^{l+1}|$ are finite such that the Central Limit Theorem holds. Then the variances of $\theta^{l+1}$ can be viewed as hyperparameters and will be denoted by $\sigma^{l+1}_{\theta}$ for each $\theta^{l+1}$. We define the $\texttt{Flatten}$ operation, which reshapes the output $h^l(x)$ by merging all its dimensions. \begin{align} \bar h^l(x) = \texttt{Flatten}\left( h^l(x) \right) \in \mathbb{R}^{N^l} \,, \end{align} where $N^l \equiv n^l_1 n^l_2 \cdots$. \begin{definition}[Average Partial Jacobian Norm (APJN)] \label{def:APJN} For a DNN given by \eqref{eq:DNNrecursion}, APJN is defined as \begin{align} \mathcal J^{l_0, l} \equiv \mathbb E_{\theta} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l}} \sum_{i=1}^{N_{l_0}} \sum_{x, x' \in B} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \right] \,, \end{align} where $\mathbb E_\theta[\cdot]$ denotes the average over parameter initializations. 
\end{definition} \begin{remark} For DNNs without BatchNorm and with normalized inputs, the definition of APJN for $|B|>1$ is equivalent to the one for the $|B|=1$ case. \end{remark} We use APJN as the empirical diagnostic of criticality. \begin{definition}[Critical Initialization] \label{def:critical} A DNN given by \eqref{eq:DNNrecursion}, consisting of $L+2$ blocks, including input and output layers, is critically initialized if all block-to-block APJNs are equal to $1$, i.e. \begin{align} \mathcal J^{l,l+1} = 1 \,, \quad \forall \quad 1 \leq l \leq L \,. \end{align} \end{definition} Critical initialization as defined by \Cref{def:critical} is essential, as it prevents the gradients from exploding or vanishing at $t=0$. One can readily see this by calculating the gradient for any flattened parameter matrix $\theta$ at initialization: \begin{align}\label{eq:grad} \frac{1}{|\theta^l|}\|\nabla_{\theta^l} \mathcal L \|^2_2 =& \frac{1}{|\theta^l|} \left\|\sum_{\mathrm{all}} \frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \frac{\partial \bar{h}^{L+1}_i}{\partial \bar{h}^L_j}\cdots \frac{\partial \bar{h}^{l+1}_k}{\partial \bar{h}^l_m} \frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_2 \nonumber \\ \sim &\, O \left( \frac{1}{|\theta^l|} \left\|\frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \right\|^2_2 \cdot \mathcal J^{L, L+1} \cdots \mathcal J^{l, l+1} \cdot \left \|\frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_F \right)\,, \end{align} where $\| \cdot \|_F$ denotes the Frobenius norm. In the second line, we utilized the factorization property of APJN \begin{align}\label{eq:factor} \mathcal J^{l_0,l} = \prod_{l'=l_0}^{l-1} \mathcal J^{l', l'+1} \,, \end{align} which holds in the infinite width limit provided there is no weight sharing across the blocks. One may further require $\left\| \partial \mathcal L / \partial \bar{h}^{L+1}_i \right\|_2 \sim O(1)$. 
However, in practice we observe that this requirement is less important once the condition in \Cref{def:critical} is met. \subsection{Automatic Critical Initialization} For general architectures, analytically calculating APJN is often difficult or even impossible. This poses a challenge in determining the correct parameter initializations to ensure criticality, especially in networks without self-similar layers. Moreover, finite network width is known to have nontrivial corrections to the criticality condition \citep{roberts2021principles}. This calls for an algorithmic method to find critical initialization. To that end, we propose \Cref{alg:j_general}, which we call $\texttt{AutoInit}$, for critically initializing deep neural networks \emph{automatically}, without the need for analytic solutions for signal propagation or a mean-field approximation. The algorithm works for general feedforward DNNs, as defined in \eqref{eq:DNNrecursion}. Moreover, it naturally takes into account all finite width corrections to criticality because it works directly with an instance of a network. We do tacitly assume the existence of a critical initialization. If the network cannot be initialized critically, the algorithm will return a network that can propagate gradients well because the APJNs will be pushed as close to $1$ as possible. The central idea behind the algorithm is to choose the hyperparameters for all layers such that the condition in \Cref{def:critical} is met. This is achieved by optimizing a few auxiliary scalar parameters $a^l_{\theta}$ of a twin network with parameters $a^l_{\theta} \theta^{l}$ while freezing the parameters $\theta^{l}$. The loss function attains its minimum when the condition in \Cref{def:critical} is met. 
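As a concrete illustration of \Cref{def:APJN}, the block-to-block APJN of a single fully connected block can be estimated directly by Monte Carlo over initializations. The sketch below is our own illustration (function names are hypothetical); it assumes a ReLU activation and $|B|=1$, and recovers the well-known fact that He initialization, $\sigma_w^2 = 2$, sits at criticality for ReLU:

```python
import numpy as np

def apjn_relu_block(sigma_w, sigma_b, width=400, n_init=20, seed=0):
    """Monte Carlo estimate of the block-to-block APJN J^{l,l+1}
    for one fully connected ReLU block (single input, |B| = 1)."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_init):
        x = rng.standard_normal(width)
        W1 = rng.normal(0.0, sigma_w / np.sqrt(width), (width, width))
        W2 = rng.normal(0.0, sigma_w / np.sqrt(width), (width, width))
        b1 = rng.normal(0.0, sigma_b, width)
        h1 = W1 @ x + b1                        # preactivations at layer l
        # Jacobian dh^{l+1}/dh^l = W2 diag(relu'(h1)); its squared Frobenius
        # norm, normalized by the width, gives one sample of the APJN.
        jac = W2 * (h1 > 0)[None, :]
        vals.append((jac ** 2).sum() / width)
    return float(np.mean(vals))

# He initialization sigma_w^2 = 2 sits at criticality: J^{l,l+1} ~ 1,
# while larger sigma_w is chaotic (J > 1) and smaller is ordered (J < 1).
j_critical = apjn_relu_block(sigma_w=np.sqrt(2.0), sigma_b=0.0)
```

In the infinite width limit this estimate approaches $\sigma_w^2/2$ for ReLU, matching the closed form used later in the text.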
\begin{algorithm}[h] \caption{\texttt{AutoInit} (SGD)} \label{alg:j_general} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma^l_\theta;\, a^l_{\theta}(t) \; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$. \State \textbf{Set} $t=0$ and $\{a_\theta^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l_\theta(t+1) = a^l_\theta(t) - \eta \nabla_{a^l_\theta} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{\sigma^l_\theta = \sigma^l_{\theta} a^l_{\theta}(t) ;\, 1\; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$ \end{algorithmic} \end{algorithm} In practice, for speed and memory reasons we use an unbiased estimator \citep{hoffman2019robust} of APJN in \Cref{alg:j_general}, defined as \begin{align}\label{eq:j_est} \hat {\mathcal{J}}^{l, l+1} \equiv \frac{1}{N_v} \sum_{\mu=1}^{N_v} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l+1}} \sum_{k=1}^{N_{l+1}} \sum_{i=1}^{N_{l}} \sum_{x, x' \in B} \frac{\partial (v_{\mu j} \bar{h}^{l+1}_j(x'))}{\partial \bar{h}^{l}_i(x)} \frac{\partial (v_{\mu k} \bar{h}^{l+1}_k(x'))}{\partial \bar{h}^{l}_i(x)} \right] \,, \end{align} where each $v_{\mu i}$ is a unit Gaussian random vector for a given $\mu$. The Jacobian-Vector Product (JVP) structure in the estimator speeds up the computation by a factor of $N_{l+1} / N_v$ and consumes less memory at the cost of introducing some noise. In \Cref{sec:auto} we analyze $\texttt{AutoInit}$ for multi-layer perceptron (MLP) networks. Then we discuss the problem of exploding and vanishing gradients of the tuning itself; and derive bounds on the learning rate for ReLU or linear MLPs. In \Cref{sec:bn} we extend the discussion to BatchNorm and provide a strategy for using $\texttt{AutoInit}$ for a general network architecture. 
In \Cref{sec:exp} we provide experimental results for more complex architectures: VGG19\_BN and ResMLP-S12. \section{AutoInit for MLP networks} \label{sec:auto} MLPs are described by the following recurrence relation for preactivations \begin{align}\label{eq:mlp_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} W^{l+1}_{ij} \phi(h^l_j(x)) + b^{l+1}_i \,. \end{align} Here $x$ is an input vector, weights $W^{l+1}_{ij} \sim \mathcal N(0, \sigma_w^2/N_l)$ and biases $b^{l+1}_i \sim \mathcal N(0, \sigma_b^2)$ are collectively denoted as $\theta^{l+1}$. We assume $\phi$ is a Lipschitz activation function throughout this paper. For a network with $L$ hidden layers, in the infinite width limit $N_l \rightarrow \infty$, preactivations \{$h^l_i(x) \,|\, 1 \leq l \leq L, \forall i \in N_l\}$ are Gaussian Processes (GPs). The distribution of preactivations is then determined by the Neural Network Gaussian Process (NNGP) kernel \begin{align} \mathcal K^{l}(x, x') = \mathbb E_{\theta} \left[ h^l_i(x) h^l_i(x') \right] \,, \end{align} whose value is independent of the neuron index $i$. The NNGP kernel can be calculated recursively via \begin{align} \mathcal K^{l+1}(x, x') = \sigma_w^2 \mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} \left[\phi\left(h_i^l(x)\right) \phi\left(h_i^l(x')\right) \right] + \sigma_b^2 \,. \end{align} Note that we have replaced the average over parameter initializations $\mathbb{E}_\theta[\cdot]$ with an average over preactivation-distributions $\mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} [\cdot]$, which are interchangeable in the infinite width limit \citep{lee2018deep, roberts2021principles}. Critical initialization of such a network is defined according to \Cref{def:critical}. 
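For ReLU, the Gaussian expectation on the diagonal of the recursion is available in closed form, $\mathbb E[\phi(h)^2] = \mathcal K^l(x,x)/2$ for $h \sim \mathcal N(0, \mathcal K^l(x,x))$, so $\mathcal K^l(x,x)$ can be iterated directly. A minimal sketch (our own illustration, not part of the original text):

```python
import numpy as np

def relu_nngp_diagonal(K0, sigma_w, sigma_b, depth):
    """Iterate K^{l+1}(x,x) = sigma_w^2 E[phi(h)^2] + sigma_b^2 for ReLU,
    where E[phi(h)^2] = K^l(x,x) / 2 when h ~ N(0, K^l(x,x))."""
    K = K0
    for _ in range(depth):
        K = sigma_w ** 2 * K / 2.0 + sigma_b ** 2
    return K

# sigma_w^2 = 2, sigma_b = 0 preserves the kernel at any depth (criticality);
# away from this point the forward pass grows or decays exponentially.
K_crit = relu_nngp_diagonal(1.0, np.sqrt(2.0), 0.0, depth=100)
```

At $\sigma_w^2 > 2$ the kernel explodes as $(\sigma_w^2/2)^l$; at $\sigma_w^2 < 2$ with $\sigma_b = 0$ it vanishes at the same rate.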
In practice, we define a twin network with extra parameters; for MLP networks the twin preactivations can be written as \begin{align}\label{eq:twin_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} a_W^{l+1} W^{l+1}_{ij} \phi(h^l_j(x)) + a_b^{l+1} b^{l+1}_i \,, \end{align} where $a_{\theta}^{l+1} \equiv \{a^{l+1}_W, a^{l+1}_b\}$ are auxiliary parameters that will be tuned by \Cref{alg:j_train}. \begin{algorithm}[h] \caption{\texttt{AutoInit} for MLP (SGD)} \label{alg:j_train} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$. \State \textbf{Set} $t=0$, $\{a_W^l(0)=1\}$ and $\{a_b^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l(t+1) = a^l(t) - \eta \nabla_{a^l} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{a^l_W(t) \sigma_w, \, a^l_b(t) \sigma_b, 1,\, 1 \;| \; \forall 1 \leq l \leq L \})$ \end{algorithmic} \end{algorithm} In \Cref{alg:j_train}, one may also return $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, while freezing all $a^l_{\theta}$. However, this leads to different training dynamics while updating weights and biases. Alternatively, one can leave the auxiliary parameters trainable, but in practice this leads to unstable training dynamics. \paragraph{Loss function} The choice of loss function $\mathcal L$ is important. We will use the following loss \begin{align}\label{eq_loss_sq_J} \mathcal L_{\log} = \frac{1}{2} \sum_{l=1}^L \left[\log(\mathcal J^{l, l+1})\right]^2 \,. \end{align} We will refer to \eqref{eq_loss_sq_J} as Jacobian Log Loss (JLL). This definition is inspired by the factorization property \eqref{eq:factor}, which allows one to optimize each of the partial Jacobian norms independently. 
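For intuition, the tuning loop of \Cref{alg:j_train} with the loss \eqref{eq_loss_sq_J} can be simulated in closed form for a ReLU MLP, where each block-to-block APJN equals $(a^{l+1}_W \sigma_w)^2/2$ in the infinite width limit. The sketch below is our own illustration; a real implementation would instead estimate the APJNs by automatic differentiation:

```python
import numpy as np

def autoinit_relu_sketch(a_init, sigma_w, eta=0.05, T=2000, eps=1e-8):
    """SGD on the Jacobian Log Loss L = 1/2 sum_l (log J^{l,l+1})^2,
    using the infinite-width ReLU value J^{l,l+1} = (a_W^l sigma_w)^2 / 2."""
    a = np.array(a_init, dtype=float)
    for _ in range(T):
        J = (a * sigma_w) ** 2 / 2.0           # per-layer APJN, closed form
        loss = 0.5 * np.sum(np.log(J) ** 2)    # JLL
        if loss <= eps:
            break
        # d(log J^l)/da^l = 2 / a^l, so the JLL gradient is (2/a) log J
        a -= eta * (2.0 / a) * np.log(J)
    return a

# Starting from a = 1 at sigma_w = 1 (ordered phase, J = 1/2), the scalars
# converge to a = sqrt(2), i.e. an effective He initialization with J = 1.
a_tuned = autoinit_relu_sketch([1.0, 1.0, 1.0], sigma_w=1.0)
```

Folding the tuned scalars back into the weights, as in the last step of the algorithm, then yields a critical network.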
Thus the tuning dynamics is less sensitive to the depth. One could naively use $\log(\mathcal J^{0, L+1})^2$ as a loss function; however, optimization would encounter the same level of exploding or vanishing gradients as \eqref{eq:grad}. One may worry that the factorization property will be violated for $t>0$, due to the possible correlation across all $\{a^l(t)\}$. It turns out that the correlation introduced by \Cref{alg:j_train} does not change the fact that all weights and biases are iid, ensuring that \eqref{eq:factor} holds for any $t \geq 0$. Another choice for the loss is Jacobian Square Loss (JSL), defined as $\mathcal L_2 = \frac{1}{2} \sum_{l=1}^L \left(\mathcal J^{l, l+1} - 1 \right)^2$. However, JSL has poor convergence properties when $\mathcal J^{l, l+1} \gg 1$. One may further restrict the forward pass by adding terms that penalize the difference between $\mathcal K^l(x, x)$ and $\mathcal K^{l+1}(x,x)$. For brevity, we leave these discussions for the appendix. \paragraph{Exploding and Vanishing Gradients} While the objective of \Cref{alg:j_train} is to solve the exploding and vanishing gradients problem, \Cref{alg:j_train} itself has the same problem, although not as severe. Consider optimizing MLP networks using $\mathcal L_{\log}$, where the forward pass is defined by \eqref{eq:twin_preact}. Assuming the input data $x$ is normalized, the SGD update (omitting $x$) of $a^{l}_{\theta}$ at time $t$ can be written as \begin{align}\label{eq:a_update} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) \,. \end{align} For a deep neural network, i.e. when $|L - l| \gg 1$ holds for some $l$, the depth-dependent terms of \eqref{eq:a_update} can lead to exploding or vanishing gradients. We will show next that this is not the familiar exploding or vanishing gradients problem. 
First, we explain the vanishing gradients problem for $a_W^{l+1}$. We rewrite the right-hand side of \eqref{eq:a_update} as \begin{align}\label{eq:iso} - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{W}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) = - \eta \frac{2}{a_W^{l+1}(t)} \log \mathcal J^{l, l+1}(t) + (l' > l\; \mathrm{terms}) \,. \end{align} Vanishing gradients can only occur if the isolated term is exactly canceled by the other terms for all $t \geq 0$, which does not happen in practice. To discuss the exploding gradients problem for $a_W^{l+1}$ we consider the update of $a_W^{l+1}$ (omitting $t$). The depth-dependent terms can be written as \begin{align}\label{eq:tauto_aw} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} \log \mathcal J^{l', l'+1} = \sum_{l'>l}^L \left(\frac{4\chi^{l'}_{\Delta}}{a_W^{l+1} \mathcal J^{l', l'+1}} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x, x)\right) \log \mathcal J^{l', l'+1} \,, \end{align} where we have defined two new quantities $\chi^{l'}_{\Delta} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l'}_i) \phi''(h^{l'}_i) + \phi'(h^{l'}_i) \phi'''(h^{l'}_i) \right]$ and $\chi^{l'}_{\mathcal K} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi'(h^{l'}_i) \phi'(h^{l'}_i) + \phi(h^{l'}_i) \phi''(h^{l'}_i) \right]$. We note that the exploding gradients problem for $a_W^{l+1}$ in $\texttt{AutoInit}$ is not severe for commonly used activation functions: \begin{itemize} \item $\tanh$-like bounded odd activation functions: $\chi_{\mathcal K}^{l'} \leq 1$ holds and $\mathcal K^l(x,x)$ saturates to a constant for large $l$. Thus the divergence problem of \eqref{eq:tauto_aw} is less severe than the one of \eqref{eq:grad} when $\mathcal J^{l', l'+1} > 1$. \item $\mathrm{ReLU}$: $\chi^{l'}_{\Delta}=0$. 
\item $\mathrm{GELU}$: The sum in \eqref{eq:tauto_aw} scales like $O(L \prod_{\ell=1}^L \chi^{\ell}_{\mathcal K})$ for large $L$, which may lead to worse exploding gradients than \eqref{eq:grad} for a reasonable $L$. Fortunately, for $\chi^{l'}_{\mathcal K} > 1$ cases, $\chi_{\Delta}^{l'}$ is close to zero. As a result, we find numerically that the contribution from \eqref{eq:tauto_aw} is very small. \end{itemize} For $a_b^{l+1}$, there is no isolated term like the one in \eqref{eq:iso}. Then the update of $a_b^{l+1}$ is proportional to \begin{align}\label{eq:tauto_ab} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_b^{l+1}} \log \mathcal J^{l, l+1} = \sum_{l'>l}^L \left(\frac{4 a_b^{l+1}}{\mathcal J^{l', l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \sigma_b^2 \right) \log \mathcal J^{l, l+1} \,. \end{align} Comparing \eqref{eq:tauto_ab} and \eqref{eq:tauto_aw}, it is clear that the exploding gradients problem for $a_b^{l+1}$ is the same as that for $a_W^{l+1}$, hence not severe for common activation functions. The vanishing gradients problem is seemingly more serious, especially for $\sigma_b=0$. However, the vanishing gradients for $a_b^{l+1}$ does not prevent \texttt{AutoInit} from reaching a critical initialization: \begin{itemize} \item For $\sigma_b > 0$, as $a_W^{l+1}$ gets updated, the update in \eqref{eq:tauto_ab} gets larger with time t. \item For $\sigma_b=0$ the phase boundary is at $\sigma_w \geq 0$, which can be reached by $a_W^{l+1}$ updates. \end{itemize} \subsection{Linear and ReLU networks} In general, it is hard to predict a good learning rate $\eta$ for the \Cref{alg:j_train}. However, for ReLU (and linear) networks, we can estimate the optimal learning rates. We will discuss ReLU in detail. Since $a_b^l$ can not receive updates in this case, we only discuss updates for $a_W^l$. 
The different APJNs $\{\mathcal J^{l,l+1}\}$ for ReLU networks evolve in time independently according to \begin{align}\label{eq:relu_jupdate} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \frac{\sigma_w^2}{\sqrt{\mathcal J^{l,l+1}(t)}} \log \mathcal J^{l,l+1}(t) \,. \end{align} Then one can show that, for any time $t$, \begin{align}\label{eq:eta_t} \eta_t < & \min_{1 \leq l \leq L} \left\{\frac{2\left( \sqrt{\mathcal J^{l,l+1}(t)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(t)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(t) } \right\} \end{align} guarantees convergence. In this case, the value of $\mathcal J^{l, l+1}(t)$ can be used to create a scheduler for \Cref{alg:j_train}. Moreover, one can solve \eqref{eq:relu_jupdate} and find a learning rate that allows \Cref{alg:j_train} to converge in one step: \begin{align}\label{eq:1step_lr} \eta^l_{\mathrm{1-step}} = \frac{\left( \sqrt{\mathcal J^{l,l+1}(0)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(0)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(0) } \,. \end{align} Next we study the dynamics of the optimization while using a single learning rate $\eta$. We estimate the allowed maximum learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align}\label{eq:eta_0_jl} \eta_0 = \frac{\left(a_W^{l+1}\sigma_w - \sqrt{2}\right) a_W^{l+1}}{\sigma_w \left(\log \left[(a_W^{l+1}\sigma_w)^2 \right] - \log 2\right)} \,. \end{align} In \Cref{fig:relu_jac}, we checked our results with \cref{alg:j_train}. All $\mathcal J^{l,l+1}(t)$ values plotted in the figure agree with the values we obtained by iterating \eqref{eq:relu_jupdate} for $t$ steps. The gap between $\eta_0$ and trainable regions can be explained by analyzing \eqref{eq:eta_t}. Assume that at time $t$, $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 < \eta < \eta_t$, there is still a chance that \Cref{alg:j_train} can converge. 
For $\mathcal J^{l,l+1} > 1$, if $\eta_0 > \eta > \eta_t$ holds, \Cref{alg:j_train} may diverge at a later time. A similar analysis for JSL is performed in the appendix. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$. From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JLL; 2) we scan the $\eta$-$\sigma_w$ plane using $\sigma_b=0$ networks and tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JLL for $980$ steps. Only $0.8< \mathcal J^{l,l+1} <1.25$ points are plotted. All networks are trained with the normalized CIFAR-10 dataset, $|B|=256$.} \label{fig:relu_jac} \end{figure} \section{BatchNorm, Residual Connections and General Strategy} \label{sec:bn} \paragraph{BatchNorm and Residual Connections} For MLP networks, the APJN value is only a function of $t$ and it is independent of $|B|$. This property holds except when there is a BatchNorm (BN). We consider a Pre-BN MLP network with residual connections. The preactivations are given by \begin{align}\label{eq:bnmlp_preact} h^{l+1}_{x; i} = \sum_{j=1}^N a_W^{l+1} W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + a_b^{l+1} b^{l+1}_i + \mu h^l_{x;i} \,, \end{align} where we label different inputs with indices $x,x^\prime, \cdots$ and $\mu$ quantifies the strength of the residual connections (a common choice is $\mu=1$). At initialization, the normalized preactivations are defined as \begin{align} \tilde h^l_{x; i} = \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \,. \end{align} The change in batch statistics leads to non-trivial $\mathcal J^{l,l+1}$ values, which can be approximated using \Cref{conj:bn}. 
\begin{conjecture}[APJN with BatchNorm]\label{conj:bn} In the infinite width limit and at large depth $l$, the APJN of Pre-BN MLPs \eqref{eq:bnmlp_preact} converges to a deterministic value determined by the NNGP kernel as $|B| \rightarrow \infty$: \begin{align} \mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} & (a_W^{l+1} \sigma_w)^2 \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, 1)} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where the actual value of indices $x'$ and $x$ is not important, as long as $x \neq x'$. \end{conjecture} \begin{remark} Under the conditions of \Cref{conj:bn}, $\mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} 1 + O(l^{-1})$ if $\mu=1$. The finite $|B|$ correction is further suppressed by $l^{-1}$. \end{remark} In \Cref{fig:bn_relu} we show numerical results that support our conjecture; empirically, the finite $|B|$ corrections are negligible for $|B| \geq 128$. Analytical details are in the appendix. Similar results without residual connections have been obtained for finite $|B|$ by \citet{yang2018mean}. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{Figures/bn_relu_mu1.pdf} \caption{$\mathcal J^{l,l+1}(0)$ phase diagrams for $|B|=256$ in the $\sigma_b$-$\sigma_w$ plane ($\mu=0$ and $\mu=1$); $\mathcal J^{l, l+1}$-$|B|$ plot. From left to right: 1) Pre-BN MLP networks with $\mu=0$ are everywhere chaotic; 2) Pre-BN MLP networks with $\mu=1$ are critical everywhere; 3) For $|B|\geq 128$, the finite $|B|$ corrections are negligible. In all plots we use $L=30$, $N_l=500$, $a_W^l(0)=1$, averaged over 50 initializations.} \label{fig:bn_relu} \end{figure} \paragraph{General Strategy} For general network architectures, we propose the following strategy for using \Cref{alg:j_general} with normalized inputs: \begin{itemize} \item If the network does not have BatchNorm, use the algorithm with $|B|=1$. 
\item If the network has BatchNorm, and the user has enough resources, use the algorithm with the $|B|$ that will be used for training. When $|B|$ is large, one should make $\mathcal J^{l,l+1}$ vs. $|B|$ plots like the one in \Cref{fig:bn_relu}, then choose a $|B|$ that needs less computation. \item When resources are limited, one can use a non-overlapping set $\{\mathcal{J}^{l, l+k}\}$ with $k>1$ to cover the whole network. \end{itemize} The computational cost of the algorithm depends on $k$ and $|B|$. \section{Experiments} \label{sec:exp} In this section, we use a modified version of $\mathcal L_{\log}$, where we further penalize the ratio between NNGP kernels from adjacent layers. The Jacobian-Kernel Loss (JKL) is defined as: \begin{align}\label{eq:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=0}^{L+1} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=0}^{L+1} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,, \end{align} where we introduced an extra hyperparameter $\lambda$ to control the penalization strength. We also included input and output layers. Both APJNs and NNGP kernels will be calculated using flattened preactivations. \subsection{ResMLP} ResMLP \citep{touvron2021resmlp} is an architecture for image recognition built entirely on MLPs. It offers competitive performance in both image recognition and machine translation tasks. The architecture consists of cross-channel and cross-patch MLP layers, combined with residual connections. The presence of residual connections and the absence of normalization techniques such as LayerNorm \citep{ba2016layer} or BatchNorm \citep{ioffe2015batch} cause ResMLP to be initialized off criticality. To mitigate this issue, the ResMLP architecture utilizes LayerScale \citep{touvron2021cait}, which multiplies the output residual branch with a trainable matrix, initialized with small diagonal entries. 
\paragraph{CIFAR-10} Here we obtain a critical initialization for ResMLP-S12 using \Cref{alg:j_train} with loss \eqref{eq:jkle}, with $a^l_{\theta}$ introduced for all layers. In our initialization, the ``smallness'' is distributed across all parameters of the residual block, including those of linear, affine normalization and LayerScale layers. As we show in \Cref{fig:resmlp}, Kaiming initialization is far from criticality. $\texttt{AutoInit}$ finds an initialization with almost identical $\{\mathcal J^{l, l+1} \}$ and similar $\{\mathcal K^{l}(x, x)\}$ compared to the prescription proposed by \citet{touvron2021resmlp}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/resmlp_comparison.pdf} \caption{From left to right: 1) and 2) Comparing $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ for ResMLP-S12 for Kaiming, original and $\texttt{AutoInit}$ initializations. Depth $l$ is equal to the number of residual connections. The network function in the $\texttt{AutoInit}$ case is very close to the identity at initialization. 3) Training and validation accuracy. Both the original and \texttt{AutoInit} models are trained on the CIFAR-10 dataset for 600 epochs using the \texttt{LAMB} optimizer \citep{You2020Large} with $|B|=256$. The learning rate is decreased by a factor of 0.1 at 450 and 550 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:resmlp} \end{figure} \paragraph{ImageNet \citep{liILSVRC15}} We report $74.0\%$ top-1 accuracy for ResMLP-S12 initialized using \texttt{AutoInit}, whereas the top-1 accuracy reported in \citep{touvron2021resmlp} for the same architecture is $76.6\%$. The model has $15$ million parameters. We used a setup similar to the one in the original paper, which is based on the timm library \citep{rw2019timm} under the Apache-2.0 license \citep{apachev2}. 
However, we made the following modifications in our training: 1) We use learning rate $\eta=0.001$ and $|B|=1024$. 2) We use mixed precision. 3) We do not use \texttt{ExponentialMovingAverage}. The training was performed on two NVIDIA RTX 3090 GPUs and took around $3.5$ days to converge (400 epochs). The auto-initialized model is obtained by tuning the Kaiming initialization using \Cref{alg:j_train} with $\mathcal L_{\mathcal J \mathcal K\log}(\lambda=0.5)$, $\eta=0.03$ and $|B|=32$ for 500 steps. \subsection{VGG} VGG \citep{simonyan2014very} is a formerly state-of-the-art architecture, which was notoriously difficult to train before Kaiming initialization was invented. The BatchNorm variant $\mathrm{VGG19\_BN}$ further improves training speed and performance compared to the original version. The PyTorch version of VGG \citep{NEURIPS2019_9015} is initialized with $\mathrm{fan\_out}$ Kaiming initialization \citep{he2015delving}. In \Cref{fig:bn_relu} we show that BatchNorm makes Kaiming-initialized ReLU networks chaotic. We obtain a close-to-critical initialization using \Cref{alg:j_train} for $\mathrm{VGG19\_BN}$, where we introduce the auxiliary parameters $a^l_{\theta}$ for all BatchNorm layers. $\mathcal J^{l, l+1}$ is measured by the number of composite (Conv2d-BatchNorm-ReLU) blocks or MaxPool2d layers. We compare $\mathcal J^{l, l+1}$, $\mathcal K^l(x,x)$ and accuracies on the CIFAR-10 dataset \citep{krizhevsky2009learning} between the auto-initialized model and the PyTorch one; see \Cref{fig:vgg}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/vgg_comparison.pdf} \caption{From left to right: 1) and 2) comparing $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ between the PyTorch version of $\mathrm{VGG19\_BN}$ and the \texttt{AutoInit} version, where we ensure $\mathcal J^{l, l+1}=1$ with a high priority ($\lambda=0.05$); 3) training and validation accuracy. 
We train both models on the CIFAR-10 dataset using SGD with $\mathrm{momentum}=0.9$ and $|B|=256$ for 300 epochs, where we decrease the learning rate by a factor of 0.1 at 150 and 225 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:vgg} \end{figure} \section{Conclusions} \label{sec:conclu} In this work we have introduced an algorithm, \texttt{AutoInit}, that allows one to initialize an arbitrary feed-forward deep neural network to criticality. \texttt{AutoInit} is an unsupervised learning algorithm that drives all nearby partial Jacobian norms to unity by minimizing the loss function \eqref{eq_loss_sq_J}. A slight variation of \texttt{AutoInit} also tunes the forward pass to ensure that gradients in all layers of a DNN are well-behaved. To gain some intuition about the algorithm we have solved the training dynamics for MLPs with ReLU activation and discussed the choice of hyperparameters for the tuning procedure that ensures its convergence. Then we have evaluated the performance of \texttt{AutoInit}-initialized networks against initialization schemes used in the literature. We considered two examples: the ResMLP architecture and VGG. The latter was notoriously difficult to train at the time it was introduced. \texttt{AutoInit} finds a good initialization (somewhat close to Kaiming) and ensures training. ResMLP uses a variation of the ReZero initialization scheme that puts it close to the dynamical isometry condition. \texttt{AutoInit} finds a good initialization that appears very different from the original; however, the network function is also very close to the identity map at initialization. In both cases the performance of the \texttt{AutoInit}-initialized networks is competitive with the original models. We emphasize that \texttt{AutoInit} removes the necessity for trial-and-error search for a working initialization. 
We expect that \texttt{AutoInit} will be useful in automatic neural architecture search tasks as well as for general exploration of new architectures. \begin{ack} T.H., D.D. and A.G. were supported, in part, by the NSF CAREER Award DMR-2045181 and by the Salomon Award. \end{ack} \bibliographystyle{plainnat} \section{Experimental Details} \Cref{fig:relu_jac}: The second panel is made of 1200 points; each point takes around $1.5$ minutes on a single NVIDIA RTX 3090 GPU. \Cref{fig:bn_relu}: We scanned over $400$ points for each phase diagram, which overall takes around $5$ hours on a single NVIDIA RTX 3090 GPU. \Cref{fig:resmlp}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset. We use SGD with $\eta=0.03$ and $N_v=2$ for $392$ steps, $|B|=256$. The training curves we report correspond to the best combination of the following hyperparameters: $\mathrm{lr}=\{0.005, 0.01\}$, $\mathrm{weight\; decay}=\{10^{-5}, 10^{-4}\}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flip, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated-augmentation \citep{hoffer2020aug}. All of these results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{fig:vgg}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset for $392$ steps with $\eta=0.01$, $|B|=128$ and $N_v=3$. The training curves we report correspond to the best combination of the following hyperparameters: $\mathrm{lr}=\{0.001, 0.002, 0.005, 0.01, 0.02\}$, $\mathrm{weight\; decay}=\{0.0005, 0.001, 0.002, 0.005 \}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flip, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated-augmentation \citep{hoffer2020aug}. We froze the auxiliary parameters instead of scaling the weights. All of these results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{figapp:relu_jac}: Exactly the same as \Cref{fig:relu_jac}, except we used JSL.
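To make the tuning procedure concrete, the following is a minimal NumPy sketch (our own simplification, not the code used in the experiments) of the idea behind \Cref{alg:j_train} for a bias-free ReLU MLP. For ReLU, the APJN can be evaluated directly as $\mathcal J^{l,l+1} = (a^{l}_W \sigma_w)^2\, \mathbb E[\Theta(h^{l+1})]$, and the JL loss $\frac12 \sum_l [\log \mathcal J^{l,l+1}]^2$ is minimized by gradient descent on the auxiliary scales:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, sigma_w = 6, 256, 2.5           # depth, width, chaotic weight scale
W = [rng.standard_normal((N, N)) * sigma_w / np.sqrt(N) for _ in range(L)]
a = np.ones(L)                         # auxiliary scales a_W^l, one per layer
x = rng.standard_normal((64, N))       # a batch of inputs

def layer_apjns():
    """Empirical J^{l,l+1} for ReLU: (a_l sigma_w)^2 * E[Theta(preactivation)]."""
    h, js = x, []
    for l in range(L):
        z = a[l] * h @ W[l].T
        js.append(a[l] ** 2 * sigma_w ** 2 * np.mean(z > 0))
        h = np.maximum(z, 0.0)
    return np.array(js)

eta = 0.05
for _ in range(200):                   # gradient descent on 1/2 sum_l (log J^{l,l+1})^2
    J = layer_apjns()
    a -= eta * 2.0 * np.log(J) / a     # d(log J^{l,l+1}) / d a_l = 2 / a_l for ReLU

print(layer_apjns())                   # every entry is driven close to 1
```

Because ReLU is scale invariant, the signs of all preactivations are unchanged by the positive per-layer rescalings, so each $\mathcal J^{l,l+1}$ depends only on its own $a^l_W$ and the updates converge layer by layer.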
\section{Theoretical Details} \subsection{Factorization of APJN} We prove the factorization property using MLP networks in the infinite width limit. The proof works for any iid $\theta^l$ such that $|\theta^l|$ has a finite $(2+\delta)$ moment for some $\delta > 0$. We start from the definition of the partial Jacobian, setting $a^l_{\theta}=1$ for simplicity. \begin{align} \nonumber \mathcal{J}^{l,l+2} &\equiv \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_l} \sum_{k,m=1}^{N_{l+1}} \left(W^{l+2}_{ik} \phi'(h^{l+1}_k) \right) \left(W^{l+2}_{im} \phi'(h^{l+1}_m) \right) \left(\frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_m}{\partial h^{l}_j}\right) \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{\theta} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{W^{l+1}, b^{l+1}} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) W^{l+1}_{kj} W^{l+1}_{kj} \right] \mathbb E_{\theta} \left[\phi'(h^l_j) \phi'(h^l_j) \right ] \\ &= \mathcal J^{l,l+1} \mathcal J^{l+1, l+2} + O\left(\frac{\chi^l_{\Delta}}{N_l} \right) \,, \end{align} where the $1/N_l$ correction vanishes in the infinite width limit. We used the fact that in the infinite width limit $h^{l+1}_k$ is independent of $h^l_j$, and calculated the first expectation value in the fourth line using integration by parts. Recall that for a single input (omitting $x$) \begin{align} \chi^{l}_{\Delta} \equiv (a_W^{l+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l}_i) \phi''(h^{l}_i) + \phi'(h^{l}_i) \phi'''(h^{l}_i) \right] \,.
\end{align} \subsection{Exploding and Vanishing Gradients} We show the details of the derivation of \eqref{eq:tauto_aw} for MLP networks, assuming $l'>l$: \begin{align} \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial \mathbb E_{h^{l'}_i \sim \mathcal N(0, \mathcal K^{l'}(x,x))} \left[(a_W^{l'+1} \sigma_w)^2 \phi'(h^{l'}_i) \phi'(h^{l'}_i) \right]}{\partial a_W^{l+1}} \nonumber \\ =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial}{\partial \mathcal K^{l'}(x,x)} \left(\frac{{(a_W^{l'+1} \sigma_w)^2}}{\sqrt{2\pi \mathcal K^{l'}(x,x)}} \int \phi'(h^{l'}_i) \phi'(h^{l'}_i) e^{-\frac{h^{l'}_i h^{l'}_i}{2\mathcal K^{l'}(x,x)}} dh^{l'}_i \right) \frac{\partial \mathcal K^{l'}(x,x)}{\partial a_W^{l+1}}\nonumber \\ =& \frac{2}{\mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \frac{\partial \mathcal K^{l'}(x, x)}{\partial \mathcal K^{l'-1}(x,x)} \cdots \frac{\partial \mathcal K^{l+1}(x,x)}{\partial a_W^{l+1}} \nonumber \\ =& \frac{4}{a_W^{l+1} \mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x,x) \,, \end{align} where we calculated the derivative with respect to $\mathcal K^{l'}(x,x)$ and then used integration by parts to get the third line. The derivation for \eqref{eq:tauto_ab} is similar. \subsection{ReLU Details} \paragraph{Learning rate $\eta$} The learning rate bound \eqref{eq:eta_t} is obtained by requiring $|\sqrt{\mathcal J^{l,l+1}(t)} -1 |$ to decrease monotonically with time $t$. \paragraph{\texorpdfstring{Derivation for $\chi_{\Delta}^l=0$}{}} This is straightforward to show by direct calculation in the infinite width limit. We set $a_W^l=1$ and ignore the neuron index $i$ for simplicity.
\begin{align} \chi^{l}_{\Delta} =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left[\phi''(h^{l}) \phi''(h^{l}) + \phi'(h^{l}) \phi'''(h^{l}) \right] \nonumber \\ =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \phi'(h^l) \phi''(h^l) \right) \right ] \nonumber \\ =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{h^l}{\mathcal K^l(x,x)} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ =& 0 \,, \end{align} where $\Theta(h^l)$ is the Heaviside step function and $\delta(h^l)$ is the Dirac delta function. To get the last line we used $h^l \delta(h^l)=0$. \subsection{\texorpdfstring{\Cref{conj:bn}}{}} Here we offer a non-rigorous explanation of the conjecture in the infinite $|B|$ and infinite width limit. We use an MLP model with $a_{\theta}^l=1$ as an example. We consider \begin{align} h^{l+1}_{x; i} = \sum_{j=1}^N W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + b^{l+1}_i + \mu h^l_{x; i}\,, \end{align} where \begin{align}\label{eq:BN} \tilde h^l_{x; i} =& \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \nonumber \\ =& \frac{\sqrt{|B|} \sum_{x' \in B} P_{x x'} h^l_{x'; i}}{\sqrt{ \sum_{x \in B} \left(\sum_{x' \in B} P_{x x'} h^l_{x'; i}\right)^2 }} \,, \end{align} where $P_{xx'} \equiv \delta_{xx'} - 1/ |B|$. It is a projector in the sense that $\sum_{x' \in B} P_{xx'} P_{x'x''} = P_{xx''}$.
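As a quick numerical sanity check (our own illustration, not part of the derivation), the projector property of $P$ and the zero batch mean and unit batch variance of $\tilde h^l_{x;i}$ in \eqref{eq:BN} can be verified directly:

```python
import numpy as np

B = 8                                  # batch size |B|
P = np.eye(B) - np.ones((B, B)) / B    # P_{xx'} = delta_{xx'} - 1/|B|

# projector property: sum_{x'} P_{xx'} P_{x'x''} = P_{xx''}
assert np.allclose(P @ P, P)

rng = np.random.default_rng(0)
h = rng.standard_normal(B)             # preactivations h^l_{x;i} for one neuron i
# batch-normalized preactivation, second line of the display above
h_tilde = np.sqrt(B) * (P @ h) / np.sqrt(np.sum((P @ h) ** 2))

print(h_tilde.mean(), (h_tilde ** 2).mean())  # 0 and 1 (batch mean and second moment)
```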
Derivative of the normalized preactivation: \begin{align} \frac{\partial \tilde h^l_{x; i}}{\partial h^l_{x';j}} = \sqrt{|B|} \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x'';i}\right)^2}} - \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; i} \sum_{x'' \in B} P_{x'x''} h^l_{x''; i}}{\left( \sqrt{\sum_{x\in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x'';i}\right)^2 } \right)^3} \right)\delta_{ij} \,. \end{align} Then the one layer APJN: \begin{align}\label{eqapp:bn_apjn} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \sum_{x,x' \in B} \sum_{j=1}^{N_l} \mathbb E_{\theta} \left[\left(\phi'(\tilde h^l_{x; j}) \right)^2 \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x''; j}\right)^2}} \right. \right. \nonumber \\ &\left. \left.- \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; j} \sum_{x'' \in B} P_{x'x''} h^l_{x''; j}}{\left( \sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x''; j}\right)^2 } \right)^3} \right)^2 \right] + \mu^2 \,. 
\end{align} In the infinite $|B|$ limit, only one term can contribute: \begin{align} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{x,x' \in B} \sum_{j=1}^{N_l} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx'} P_{xx'}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2}\right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[ \sum_{j=1}^{N_l} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{j=1}^{N_l} \left( \left[\frac{1}{|B|} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \right] \frac{|B|-1}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right) \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{\sigma_w^2}{N_l} \sum_{j=1}^{N_l} \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, \delta_{xx'} )} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where $x'$ is a dummy index, used only to label the off-diagonal term. We used \cref{conjecture:proj} and \cref{conjecture:tilde_h} to get the result. \begin{conjecture}[Projected Norm]\label{conjecture:proj} In the infinite width limit, for large depth $l$, $\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2$ converges to the deterministic value $\frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right)$ as the batch size $|B| \rightarrow \infty$. \end{conjecture} \begin{proof}[Non-rigorous ``proof''] In the infinite width limit $h^l_{x; j}$ is sampled from a Gaussian distribution $\mathcal N(0, \mathcal K^l_{xx'})$, where the value $\mathcal K^l_{xx'}$ only depends on whether $x$ and $x'$ coincide or not.
We first simplify the formula: \begin{align}\label{eq:proj} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ =& \frac{1}{|B|} \sum_{x', x'' \in B} P_{x' x''} h^l_{x'; j} h^l_{x''; j} \nonumber \\ =& \frac{1}{|B|} \left(\sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x', x'' \in B} h^l_{x';j} h^l_{x'';j} \right) \nonumber \\ =& \frac{1}{|B|} \left(\frac{|B|-1}{|B|} \sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x' \neq x''}^B h^l_{x';j} h^l_{x'';j} \right) \,. \end{align} In the infinite $|B|$ limit the average over $x'$ and $x''$ can be replaced by integration over their distribution (this is the non-rigorous step; for a completely rigorous treatment see \citet{yang2018mean}): \begin{align} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{|B|-1}{|B|} \left(\mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x'}) } \left[(h^l_{x'; j})^2 \right] - \mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x''}) } \left[h^l_{x';j} h^l_{x'';j} \right] \right) \nonumber \\ = & \frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right) \,. \end{align} \end{proof} Next, we need to show how to calculate $\mathcal K^{l}_{xx'}$. Before that, we first find the distribution of $\tilde h^l_{x; i}$ in the infinite $|B|$ limit. \begin{conjecture}[$\tilde h^l_{x;i}$ distribution]\label{conjecture:tilde_h} In the infinite $|B|$ limit and the infinite width limit, assume that for large depth $\mathcal K^l_{xx}$ reaches a fixed point.
Then $\tilde h^l_{x;i}$ can be seen as sampled from a Gaussian distribution with the covariance matrix \begin{align} \lim_{|B| \rightarrow \infty} \mathbb E_{\theta} \left[\tilde h^l_{x;i} \tilde h^l_{y;j}\right] = & \mathbb E_{\theta} \left[ \frac{\sum_{x', x'' \in B} P_{x x'} P_{y x''} h^l_{x';i} h^l_{x'';j}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \right] \nonumber \\ =& \frac{\sum_{x', x'' \in B} P_{xx'} P_{yx''} \mathcal K^l_{x'x''}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =& \frac{\mathcal K^l_{xy} - \frac{1}{|B|} \mathcal K^l_{\hat{x}\hat{x}} - \frac{|B|-1}{|B|} \mathcal K^l_{x\hat{x}}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =&\delta_{xy} \delta_{ij} \,, \end{align} where we used \cref{conjecture:proj} in the first line. \end{conjecture} For ReLU: \begin{align} \mathcal K^{l+1}_{xx'} = \begin{cases} \frac{\sigma_w^2}{2} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx} & \text{if $x=x'$} \\ \frac{\sigma_w^2}{2\pi} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx'} & \text{if $x \neq x'$} \,. \end{cases} \end{align} Then for $\mu=0$ the APJN is independent of $\sigma_w^2$ and $\sigma_b^2$ in the infinite $|B|$ limit: \begin{align} \mathcal J^{l, l+1} = \frac{\pi}{\pi - 1} \,, \end{align} and for $\mu=1$: \begin{align} \mathcal J^{l, l+1} = 1 + O \left(\frac{1}{l} \right) \,. \end{align} This is also intuitively clear: the denominator of \eqref{eqapp:bn_apjn} grows with $l$ when $\mu=1$, thus the finite $|B|$ corrections are further suppressed. We checked our results in \Cref{fig:bn_relu}. \section{JSL and JKL} \subsection{JSL} Since we already discussed our results for JL, we show details for JSL in this section. The derivation for JL is almost identical.
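A quick Monte Carlo check of the $\mu=0$ result (our own illustration): sampling $\tilde h \sim \mathcal N(0,1)$ as in \cref{conjecture:tilde_h} and inserting the ReLU kernel entries into the $|B| \rightarrow \infty$ expression above reproduces $\pi/(\pi-1) \approx 1.467$ for any $\sigma_w$ and $\sigma_b$:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def bn_relu_apjn(sigma_w, sigma_b=0.3, n=1_000_000):
    # tilde h ~ N(0, 1) in the infinite |B| limit (tilde h distribution conjecture)
    h_tilde = rng.standard_normal(n)
    e_phi_prime_sq = np.mean((h_tilde > 0).astype(float))  # E[phi'(h)^2] = 1/2 for ReLU
    # ReLU kernel entries at mu = 0: K_xx = sw^2/2 + sb^2, K_xx' = sw^2/(2 pi) + sb^2
    k_diag = sigma_w ** 2 / 2 + sigma_b ** 2
    k_off = sigma_w ** 2 / (2 * math.pi) + sigma_b ** 2
    return sigma_w ** 2 * e_phi_prime_sq / (k_diag - k_off)

for sw in (0.5, 1.0, 2.0):
    print(sw, bn_relu_apjn(sw))        # all close to pi / (pi - 1) ~ 1.467
```

Note that $\sigma_b^2$ cancels in the difference $\mathcal K^l_{xx} - \mathcal K^l_{xx'}$, which is why the result is independent of both parameters.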
Using JSL, the SGD update of $a^l_{\theta}$ at time $t$ is \begin{align}\label{eqapp:a_update_jsl} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \left(\mathcal J^{l', l'+1}(t) - 1 \right) \,. \end{align} We focus on ReLU networks to demonstrate the difference between JL and JSL. For ReLU networks, we can rewrite \eqref{eqapp:a_update_jsl} as \begin{align}\label{eqapp:j_update_jsl} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(\mathcal J^{l,l+1}(t) - 1\right) \,. \end{align} \paragraph{Learning Rate $\eta$} The learning rate limit $\eta_t$ is obtained by requiring $|\sqrt{\mathcal J^{l,l+1}(t)} - 1|$ to decrease monotonically with time $t$ for any $l$, which gives \begin{align}\label{eqapp:eta_t_jsl} \eta_t < \min_{1 \leq l \leq L} \left\{\frac{2}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(1 + \sqrt{\mathcal J^{l, l+1}(t)}\right)} \right \} \,. \end{align} Alternatively, by solving \eqref{eqapp:j_update_jsl} with $\mathcal J^{l,l+1}(1)=1$: \begin{align} \eta_{\mathrm{1-step}} = \frac{1}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(0)} \left(1 + \sqrt{\mathcal J^{l, l+1}(0)}\right)} \,. \end{align} For the dynamics of the optimization with a single learning rate $\eta$, we again estimate the maximum allowed learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align} \eta_0 = \frac{4}{\sigma_w^3 a_W^l \left(\sqrt{2} + a_W^l \sigma_w \right)} \,. \end{align} Compared to \eqref{eq:eta_0_jl}, which scales as $1/\log \sigma_w$ for large $\sigma_w$, $\eta_0$ for JSL scales as $\sigma_w^{-4}$ for large $\sigma_w$. This makes JSL a much worse choice than JL when $\mathcal J^{l,l+1} \gg 1$. In \Cref{figapp:relu_jac}, we checked our results with \cref{alg:j_train} using JSL. All other details are the same as in \Cref{fig:relu_jac}.
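The one-step learning rate can be checked by iterating \eqref{eqapp:j_update_jsl} once (a small self-contained verification of ours): with $\eta = \eta_{\mathrm{1-step}}$ the factorization $\mathcal J - 1 = (\sqrt{\mathcal J}-1)(\sqrt{\mathcal J}+1)$ cancels the deviation exactly.

```python
import math

def jsl_step(j, eta, sigma_w):
    # sqrt(J(t+1)) = sqrt(J(t)) - eta * sigma_w^2 * sqrt(J(t)) * (J(t) - 1)
    s = math.sqrt(j)
    return (s - eta * sigma_w ** 2 * s * (j - 1)) ** 2

j0, sigma_w = 4.0, 1.5                 # made-up starting APJN and weight scale
eta_one_step = 1.0 / (sigma_w ** 2 * math.sqrt(j0) * (1.0 + math.sqrt(j0)))
print(jsl_step(j0, eta_one_step, sigma_w))   # 1.0 up to rounding: J(1) = 1 in one step
```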
The gap between $\eta_0$ and the trainable regions can again be explained by analyzing \eqref{eqapp:eta_t_jsl}. Assume that at time $t$ the inequality $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 > \eta > \eta_t$, there is still a chance that \Cref{alg:j_train} diverges for some $t>0$. For $\mathcal J^{l,l+1} > 1$, if $\eta_0 < \eta < \eta_t$ holds, \Cref{alg:j_train} may still converge for some $t>0$. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jsl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$. From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JSL; 2) we scan the $\eta$--$\sigma_w$ plane using $\sigma_b=0$ networks and tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JSL for $980$ steps. Only points with $0.8< \mathcal J^{l,l+1} <1.25$ are plotted. All networks are trained with the normalized CIFAR-10 dataset, $|B|=256$.} \label{figapp:relu_jac} \end{figure} \subsection{JKL} We mentioned the following loss function in the main text. \begin{align}\label{eqapp:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} There are other possible choices for controlling the forward pass; we discuss this one briefly. First we calculate the derivative coming from the kernel terms.
We omit the $x$ and $t$ dependence and introduce $r^{l+1, l} = \mathcal K^{l+1} / \mathcal K^l$ for clarity: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l'\geq l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{2\lambda}{a_W^{l+1}} \log r^{l+1,l} + \frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l' > l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \,, \end{align} which has a structure similar to the APJN terms. Next we pick a term with $l' > l$ in the parentheses: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{\lambda}{\mathcal K^{l'+1} \mathcal K^{l'}} \left(\mathcal K^{l'} \frac{\partial \mathcal K^{l'+1}}{\partial a_W^{l+1}} - \mathcal K^{l'+1} \frac{\partial \mathcal K^{l'}}{\partial a_W^{l+1}}\right)\log{r^{l'+1,l'}} \,, \end{align} which is independent of depth for $\sigma_b=0$, and is always finite for $\sigma_b > 0$. We find that the update of $a_b^{l+1}$ coming from the forward pass term is subtle. For $\sigma_b=0$, similar to the discussion of the APJN terms, the update of $a_b^{l+1}$ is zero. For $\sigma_b > 0$, there are two possibilities: \begin{itemize} \item Unbounded activation functions, when $\chi_{\mathcal K}^l > 1$: $\mathcal K^l \rightarrow \infty$ as $l\rightarrow \infty$, thus the updates of $a_b^{l+1}$ from the forward pass term vanish. \item Bounded activation functions, or unbounded activation functions with $\chi_{\mathcal K}^l < 1$: $\mathcal K^l \rightarrow \mathcal K^{\star}$, thus the contribution from the forward pass term is always $O(1)$. \end{itemize} Summarizing the discussion above, there is no exploding or vanishing gradient problem originating from the term we introduced to tune the forward pass. The forward pass term simply speeds up the updates of $a_W^{l+1}$ and $a_b^{l+1}$ in most cases. \paragraph{ReLU} Again we use a ReLU MLP network as an example.
For $\sigma_b=0$, $\mathcal L_{\mathcal J \mathcal K \log}$ is equivalent to $(1+\lambda)\mathcal L_{\log}$ due to the scale invariance of the ReLU activation function, which can be checked by using $\mathcal K^{l+1}(x,x) = \sigma_w^2 \mathcal K^l(x,x) / 2$. For finite $\sigma_b$, we use $\mathcal K^{l+1}(x,x) = \mathcal J^{l,l+1} \mathcal K^l(x,x) + \sigma_b^2$: \begin{align}\label{eqapp:relu_jkl} \mathcal L_{\mathcal J \mathcal K \log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\mathcal J^{l,l+1} + \frac{\sigma_b^2}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} In the ordered phase $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 < 1$, and one can prove that $\mathcal K^l(x,x) \rightarrow \sigma_b^2 /(1 - (a_W^{l+1}(t)\sigma_w)^2/2)$ as $l \rightarrow \infty$; thus \eqref{eqapp:relu_jkl} is equivalent to JL at large depth. In the chaotic phase $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 > 1$ and $\mathcal K^l(x,x) \rightarrow \infty$, so \eqref{eqapp:relu_jkl} is equivalent to JL with an extra overall factor $1+\lambda$ at large depth. \section{ResMLP} \subsection{Network Recursion Relation} \paragraph{Input} The input image is chopped into an $N \times N$ grid of patches of size $P\times P$ pixels (often $16 \times 16$). The patches are fed into (the same) Linear layer to form a set of $N^2$ $d$-dimensional embeddings, referred to as channels. The resulting input to the ResMLP blocks is $h^0_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. Here and in what follows, Greek letters ($\mu,\nu$ etc.) index the patches, while Latin letters ($i,j$ etc.) index the channels. Note that in practice, the above two operations are combined into a Convolutional layer with the filter size coinciding with the patch resolution ($P \times P \times C$) and the stride equal to $P$, so as to avoid overlap between patches. Here, $C$ is the number of channels in the original image.
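The equivalence of patch-chopping followed by a shared Linear layer to a strided convolution can be illustrated as follows (a NumPy sketch with made-up toy sizes; the reshapes play the role of the convolution with kernel size and stride equal to $P$):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W, P, d = 3, 32, 32, 16, 8       # channels, height, width, patch size, embedding dim
N = H // P                             # N x N grid of patches

image = rng.standard_normal((C, H, W))
# chop into an N x N grid of P x P patches, flatten each patch to length C*P*P
patches = (image.reshape(C, N, P, N, P)
                .transpose(1, 3, 0, 2, 4)   # -> (N, N, C, P, P)
                .reshape(N * N, C * P * P))
w = rng.standard_normal((C * P * P, d))     # shared Linear layer weights
h0 = patches @ w                            # (N^2, d) input embeddings h^0_{mu i}

# the same map as a stride-P convolution with a (d, C, P, P) filter bank
w_conv = w.T.reshape(d, C, P, P)
conv_out = np.einsum('dcpq,cnpmq->nmd', w_conv,
                     image.reshape(C, N, P, N, P)).reshape(N * N, d)
print(h0.shape, np.allclose(h0, conv_out))  # (4, 8) True
```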
\paragraph{ResMLP block} The input embedding $h^0_{\mu i}$ is passed through a series of ($L$) self-similar ResMLP blocks, which output $h^L_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. In the following, we use the notation $1^{2,3}_{4}$ for the parameters, where 1 denotes the parameter, 2 denotes the block index, 3 denotes the specific action within the block, and 4 denotes the neural indices. A ResMLP block consists of the following operations. \begin{align}\label{appeq:resmlp_rec_full} &\texttt{AffineNorm1:} & a^{l+1}_{\mu i} &= \left( \alpha^{{l+1},a}_i h^{l}_{\mu i} + \beta^{{l+1},a}_i \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear1:} & b^{l+1}_{\mu i} &= \sum^{N^2}_{\nu=1} W^{{l+1},b}_{\mu\nu} a^{l+1}_{\nu i} + B^{{l+1},b}_{\mu} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual1:} & c^{l+1}_{\mu i} &= \mathcal E^{{l+1},c}_i b^{l+1}_{\mu i} + \mu_1 a^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ & \texttt{AffineNorm2:} & d^{l+1}_{\mu i} &= \left( \alpha^{{l+1},d}_i c^{l+1}_{\mu i} + \beta^{{l+1},d}_i \right) \,, &&\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear2:} & e^{l+1}_{\mu i} &= \sum^d_{j=1} W^{{l+1},e}_{ij} d^{l+1}_{\mu j} + B^{{l+1},e}_{i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{activation:} & f^{l+1}_{\mu i} &= \phi\left(e^{l+1}_{\mu i} \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{linear3:} & g^{l+1}_{\mu i} &= \sum^{4d}_{j=1} W^{{l+1},g}_{ij} f^{l+1}_{\mu j} + B^{{l+1},g}_{i}\,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual2:} & h^{l+1}_{\mu i} &= \mathcal E^{{l+1},h}_i g^{l+1}_{\mu i} + \mu_2 c^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \end{align} where the brackets on the right contain the dimensions of the output of the layers.
We consider linear layers with weights and biases initialized with standard fan\_in. \texttt{linear1} acts on the patches, with parameters initialized as $W^{l+1,b}_{\mu\nu} \sim \mathcal N(0, \sigma_w^2/N^2)\,; B^{l+1,b}_{\mu} \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear2} acts on the channels, with parameters initialized as $W^{l+1,e}_{ij} \sim \mathcal N(0, \sigma_w^2/d)\,; B^{l+1,e}_i \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear3} also acts on the channels, with parameters initialized as $W^{l+1,g}_{ij} \sim \mathcal N(0, \sigma_w^2/(4d))\,; B^{l+1,g}_i \sim \mathcal N(0, \sigma_b^2)$. GELU is used as the activation function $\phi$. \texttt{AffineNorm1} and \texttt{AffineNorm2} perform an element-wise multiplication with a trainable vector of weights $\alpha^{l+1,a}_i, \alpha^{l+1,d}_i \in \mathbb R^d$ and an addition of a trainable bias vector $\beta^{l+1,a}_i, \beta^{l+1,d}_i \in \mathbb R^d$. Residual branches are scaled by a trainable vector $\mathcal E^{l+1,c}_i, \mathcal E^{l+1,h}_i \in \mathbb R^{d}$ (\texttt{LayerScale}), whereas the skip connections are scaled by scalar strengths $\mu_1$ and $\mu_2$. \paragraph{Output} The action of the blocks is followed by an Average-Pooling layer that converts the output to a $d$-dimensional vector. This vector is fed into a linear classifier that gives the output of the network $h^{L+1}_i$. \subsection{NNGP Kernel Recursion Relation} At initialization, $\alpha^{l+1,a}_i = \alpha^{l+1,d}_i = \mathbf 1_d$ and $\beta^{l+1,a}_i = \beta^{l+1,d}_i = \mathbf 0_d$. Thus, AffineNorm layers perform identity operations at initialization. \texttt{LayerScale} is initialized as $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$, where $\mathcal E$ is chosen to be a small scalar. (For example, $\mathcal E$ is taken to be 0.1 for a 12-block ResMLP and $10^{-5}$ for a 24-block ResMLP network.) Additionally, we also take $\mu_1 = \mu_2 = \mu$.
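To illustrate these initialization choices, here is a minimal NumPy forward pass of one ResMLP block at initialization (our own sketch; biases are omitted and the sizes are made up). With a small \texttt{LayerScale} $\mathcal E$ and $\mu=1$ the block output stays close to the identity map, consistent with the near-identity behavior discussed in the conclusions:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
Np, d = 4, 16                          # N^2 patches, d channels (toy sizes)
E, mu, sw = 0.1, 1.0, 1.0              # LayerScale, skip strength, weight scale

gelu = np.vectorize(lambda x: x * 0.5 * (1 + math.erf(x / math.sqrt(2))))

def resmlp_block(h):
    # AffineNorm is the identity at initialization; fan_in-initialized weights
    Wb = rng.standard_normal((Np, Np)) * sw / np.sqrt(Np)       # linear1 (patches)
    We = rng.standard_normal((4 * d, d)) * sw / np.sqrt(d)      # linear2 (channels)
    Wg = rng.standard_normal((d, 4 * d)) * sw / np.sqrt(4 * d)  # linear3
    c = E * (Wb @ h) + mu * h          # residual1
    g = gelu(c @ We.T) @ Wg.T          # linear2 -> GELU -> linear3
    return E * g + mu * c              # residual2

h = rng.standard_normal((Np, d))
out = resmlp_block(h)
print(np.linalg.norm(out - mu ** 2 * h) / np.linalg.norm(h))   # small deviation from identity
```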
With these simplifications, we can obtain the recursion relation for the diagonal part of the Neural Network Gaussian Process (NNGP) kernel for the ResMLP block outputs. We note that the full NNGP kernel $\mathcal K^l_{\mu\nu;ij}$ is a tensor in $\mathbb R^{N^2} \otimes \mathbb R^{N^2} \otimes \mathbb R^{d} \otimes \mathbb R^{d}$. Here, we focus on its diagonal part $\mathcal K^l_{\mu\mu;ii}$. For clarity, we remove the subscripts ($\mu\mu;ii$). The diagonal part of the NNGP kernel for a block output $h^l_{\mu i}$ is defined as \begin{align}\label{appeq:remlp_nngpk} \mathcal K^l \equiv \mathbb E_\theta \left[ h^l_{\mu i} h^l_{\mu i} \right] \,, \end{align} which is independent of its patch and channel indices, $\mu$ and $i$, in the infinite width limit. The recursion relation can be obtained by propagating the NNGP kernel through a block. For clarity, we define the NNGP kernel for the intermediate outputs within the blocks; for example, $\mathcal K^{l+1}_a \equiv \mathbb{E}_{\theta} \left[ a^{l+1}_{\mu i} a^{l+1}_{\mu i} \right]$, $\mathcal K^{l+1}_b \equiv \mathbb{E}_{\theta} \left[ b^{l+1}_{\mu i} b^{l+1}_{\mu i} \right]$, etc.
\begin{align}\label{appeq:resmlp_nngpk_rec} &\texttt{AffineNorm1:} & \mathcal K^{l+1}_a &= \mathcal K^l \,, \\ \nonumber &\texttt{linear1:} & \mathcal K^{l+1}_b &= \sigma_w^2 \mathcal K^{l+1}_a + \sigma_b^2 \\ &&&= \sigma_w^2 \mathcal K^l + \sigma_b^2 \,, \\ \nonumber &\texttt{residual1:} & \mathcal K^{l+1}_c &= \mathcal E^2 \mathcal K^{l+1}_b + \mu^2 \mathcal K^l \\ \nonumber &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \\ &&&= \left( \mu^2 + \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \,, \\ \nonumber &\texttt{AffineNorm2:} & \mathcal K^{l+1}_d &= \mathcal K^{l+1}_c \\ &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \,, \\ \nonumber &\texttt{linear2:} & \mathcal K^{l+1}_e &= \sigma_w^2 \mathcal K^{l+1}_d + \sigma_b^2 \\ &&&= \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \,, \\ \nonumber &\texttt{activation:} & \mathcal K^{l+1}_f &= \frac{\mathcal K^{l+1}_e}{4} + \frac{\mathcal K^{l+1}_e}{2\pi} \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\left( \mathcal K^{l+1}_e \right)^2}{\pi \left( 1 + \mathcal K^{l+1}_e \right) \sqrt{1 + 2\mathcal K^{l+1}_e}} \\ \nonumber &&&\equiv \mathcal G \left[ \mathcal K^{l+1}_e \right] \\ &&&= \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \\ \nonumber &\texttt{linear3:} & \mathcal K^{l+1}_g &= \sigma_w^2 \mathcal K^{l+1}_f + \sigma_b^2 \\ &&&= \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \,, \\ \nonumber &\texttt{residual2:} & \mathcal K^{l+1} &= \mathcal E^2 \mathcal K^{l+1}_g + \mu^2 \mathcal K^{l+1}_c \\ \nonumber &&&= \mu^2 \left( \left( \mu^2 + \mathcal E^2
\sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \right) + \\ \nonumber &&& \quad + \mathcal E^2 \, \left\{ \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \right\} \\ \nonumber &&&= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \\ &&&\quad + \mathcal E^2 \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \end{align} where we have defined \begin{equation} \mathcal G [z] = \frac{z}{4} + \frac{z}{2\pi} \arcsin{\left( \frac{z}{1 + z} \right)} + \frac{z^2}{\pi \left( 1 + z \right) \sqrt{1 + 2z}} \,. \end{equation} Thus, we have a recursion relation representing $\mathcal K^{l+1}$ in terms of $\mathcal K^l$. As a side note, if we replace the \texttt{GELU} activation function with \texttt{ReLU}, the relation simplifies greatly, offering us intuition. Specifically, $\mathcal G[z]$ gets replaced by $z/2$ in this case. This gives us the following recursion relation for ResMLP with \texttt{ReLU}. \begin{align} \nonumber \mathcal K^{l+1} &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^2 \sigma_w^2 \left( \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right) \\ &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 + \frac{1}{2} \mu^2 \mathcal E^2 \sigma_w^4 + \frac{1}{2} \mathcal E^4 \sigma_w^6 \right) \mathcal K^l + \left(1 + \mu^2 + \frac{1}{2}\sigma_w^2\right) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^4 \sigma_w^4 \sigma_b^2 \,. \end{align} \subsection{Jacobian Recursion Relation} Next, we calculate the APJN for ResMLP between two consecutive blocks.
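Before moving on, the closed form $\mathcal G[z]$ can be spot-checked against a direct Monte Carlo estimate of $\mathbb E_{h \sim \mathcal N(0,z)}\left[\mathrm{GELU}(h)^2\right]$ (our own numerical check, using $\mathrm{GELU}(h) = h\,\Phi(h)$ with $\Phi$ the standard normal CDF):

```python
import math
import numpy as np

def G(z):
    # kernel map for GELU, as defined above
    return (z / 4
            + z / (2 * math.pi) * math.asin(z / (1 + z))
            + z ** 2 / (math.pi * (1 + z) * math.sqrt(1 + 2 * z)))

rng = np.random.default_rng(0)
Phi = np.vectorize(lambda h: 0.5 * (1.0 + math.erf(h / math.sqrt(2.0))))

for z in (0.25, 1.0, 4.0):
    h = math.sqrt(z) * rng.standard_normal(400_000)
    mc = np.mean((h * Phi(h)) ** 2)    # Monte Carlo estimate of E[GELU(h)^2]
    print(z, mc, G(z))                 # the two columns agree within MC error
```

For large $z$, $\mathcal G[z] \rightarrow z/2$, recovering the ReLU simplification used above.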
For clarity, we first derive the expression for the partial derivative of the $(l+1)^{\mathrm{th}}$ block output $h^{l+1}_{\mu i}$ with respect to the $l^{\mathrm{th}}$ block output $h^l_{\nu j}$. \begin{align}\label{appeq:resmlp_derivative} \nonumber \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} &= \mathcal E^{l+1,h}_i \frac{\partial g^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \frac{\partial f^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) \frac{\partial e^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial d^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial c^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m \frac{\partial b^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \frac{\partial a^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \mathcal E^{l+1,c}_i \frac{\partial b^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \frac{\partial a^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d \sum_{\lambda=1}^{N^2} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m W^{l+1,b}_{\mu\lambda}
\frac{\partial a^{l+1}_{\lambda m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \delta_{\mu\nu} \delta_{mj} + \\ \nonumber &\quad + \mu_2 \mathcal E^{l+1,c}_i \sum_{\lambda=1}^{N^2} W^{l+1,b}_{\mu\lambda} \frac{\partial a^{l+1}_{\lambda i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^{l+1,h}_i \mathcal E^{l+1,c}_j \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \\ \nonumber &\quad + \mu_1 \mathcal E^{l+1,h}_i \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \mu_2 \mathcal E^{l+1,c}_i \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^2 \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \mu \mathcal E \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \\ &\quad + \mu\mathcal E \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu^2 \delta_{\mu\nu} \delta_{ij} \,, \end{align} where in the last step, we have used the initial values of the parameters : $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$ and $\mu_1 = \mu_2 = \mu$. Next, we calculate the APJN using \eqref{appeq:resmlp_derivative}. We will perform the calculation in the limit of large $N^2$ and $d$; dropping all the corrections of order $\frac{1}{N^2}$ and $\frac{1}{d}$. 
\begin{align}\label{appeq:resmlp_apjn} \nonumber \mathcal J^{l,l+1} &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \mathcal E^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} \delta_{\mu\nu} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{ij} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \mu^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{\mu\nu} \delta_{ij} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \mathcal E^4 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \delta_{\mu\nu} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} + \right. \\ \nonumber &\qquad\qquad\quad + \left. \mu^2\mathcal E^2 \sigma_w^2 N^2 d + \mu^4 N^2 d \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \frac{1}{4} \mathcal E^4 \sigma_w^6 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \mu^2\mathcal E^2 \sigma_w^4 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \right. \\ \nonumber &\qquad\qquad\quad \left. 
+ (\mathcal E^2 \sigma_w^2 + \mu^2) \mu^2 N^2 d \right] \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \mathbb E_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \right) \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H_e [\mathcal K^{l+1}_e] \right) \\ &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H [\mathcal K^l] \right) \,, \end{align} where we have defined \begin{align} \nonumber \mathcal H_e [\mathcal K^{l+1}_e] &\equiv \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \\ &= \frac{1}{4} + \frac{1}{2\pi} \left( \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\mathcal K^{l+1}_e (3 + 5 \mathcal K^{l+1}_e)}{(1 + \mathcal K^{l+1}_e) (1 + 2\mathcal K^{l+1}_e)^{3/2}} \right) \,. \end{align} We also write $\mathcal K^{l+1}_e$ in terms of $\mathcal K^l$ and define \begin{align} \nonumber \mathcal H [\mathcal K^l] &= \mathcal H_e [\mathcal K^{l+1}_e] \\ &= \mathcal H_e \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 (\sigma_w^2 \mathcal K^l + \sigma_b^2) + \sigma_b^2 \right) \right] \,. \end{align} It is clear from \eqref{appeq:resmlp_apjn} that for $\mu=1$, $\mathcal J^{l,l+1} > 1$, rendering the network off criticality. However, $\mathcal J^{l,l+1}$ can be tuned arbitrarily close to criticality by taking $\mathcal E$ to be small at $t=0$. This explains the necessity for \texttt{LayerScale} with small initial value in the ResMLP architecture. We note that the results in \eqref{appeq:resmlp_apjn} greatly simplify on using \texttt{ReLU} instead of \texttt{GELU} as $\phi$. We mention them here to provide intuition. $\mathcal H_e [\mathcal K^{l+1}_e] = \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] = \frac{1}{2}$ in this case. 
This gives us the simple result \begin{equation} \mathcal J^{l,l+1} = (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \frac{1}{2}\mathcal E^2 \sigma_w^4 \right) \end{equation} for the \texttt{ReLU} activation function. \section{Experimental Details} \Cref{fig:relu_jac}: The second panel is made of 1200 points; each point takes around $1.5$ minutes running on a single NVIDIA RTX 3090 GPU. \Cref{fig:bn_relu}: We scanned over $400$ points for each phase diagram, which overall takes around $5$ hours on a single NVIDIA RTX 3090 GPU. \Cref{fig:resmlp}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset. We use SGD with $\eta=0.03$ and $N_v=2$ for $392$ steps, $|B|=256$. The training curves we report are selected as the best combination of the following hyperparameters: $\mathrm{lr}=\{0.005, 0.01\}$, $\mathrm{weight\; decay}=\{10^{-5}, 10^{-4}\}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flip, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated-augmentation \citep{hoffer2020aug}. All of our results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{fig:vgg}: We use \cref{alg:j_train} to tune our model on the CIFAR-10 dataset for $392$ steps with $\eta=0.01$, $|B|=128$ and $N_v=3$. The training curves we report are selected as the best combination of the following hyperparameters: $\mathrm{lr}=\{0.001, 0.002, 0.005, 0.01, 0.02\}$, $\mathrm{weight\; decay}=\{0.0005, 0.001, 0.002, 0.005 \}$. We used RandAugment \citep{cubuk2020RandAug}, horizontal flip, Mixup with $\alpha=0.8$ \citep{zhang2018mixup} and Repeated-augmentation \citep{hoffer2020aug}. We froze the auxiliary parameters instead of scaling the weights. All of our results are obtained using a single NVIDIA RTX 3090 GPU. \Cref{figapp:relu_jac}: Exactly the same as \Cref{fig:relu_jac}, except we used JSL. \section{Theoretical Details} \subsection{Factorization of APJN} We prove the factorization property using MLP networks in the infinite width limit.
This proof works for any iid $\theta^l$ where $|\theta^l|$ has finite $2+\delta$ moments. We start from the definition of the partial Jacobian and set $a^l_{\theta}=1$ for simplicity. \begin{align} \nonumber \mathcal{J}^{l,l+2} &\equiv \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \frac{\partial h^{l+2}_i}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{1}{N_{l+2}} \mathbb E_{\theta} \left[ \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_l} \sum_{k,m=1}^{N_{l+1}} \left(W^{l+2}_{ik} \phi'(h^{l+1}_k) \right) \left(W^{l+2}_{im} \phi'(h^{l+1}_m) \right) \left(\frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_m}{\partial h^{l}_j}\right) \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{\theta} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \frac{\partial h^{l+1}_k}{\partial h^{l}_j} \right ] \\ \nonumber &= \frac{\sigma_w^2}{N_{l+2} N_{l+1}} \sum_{i=1}^{N_{l+2}}\sum_{j=1}^{N_{l}} \sum_{k=1}^{N_{l+1}} \mathbb E_{W^{l+1}, b^{l+1}} \left[ \phi'(h^{l+1}_k) \phi'(h^{l+1}_k) W^{l+1}_{kj} W^{l+1}_{kj} \right] \mathbb E_{\theta} \left[\phi'(h^l_j) \phi'(h^l_j) \right ] \\ &= \mathcal J^{l,l+1} \mathcal J^{l+1, l+2} + O\left(\frac{\chi^l_{\Delta}}{N_l} \right) \,, \end{align} where the $1/N_l$ correction is zero in the infinite width limit. We used the fact that in the infinite width limit $h^{l+1}_k$ is independent of $h^l_j$, and calculated the first expectation value of the fourth line using integration by parts. Recall that for a single input (omitting $x$) \begin{align} \chi^{l}_{\Delta} \equiv (a_W^{l+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l}_i) \phi''(h^{l}_i) + \phi'(h^{l}_i) \phi'''(h^{l}_i) \right] \,.
\end{align} \subsection{Exploding and Vanishing Gradients} We show details for deriving \eqref{eq:tauto_aw} for MLP networks, assuming $l'>l$: \begin{align} \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial \mathbb E_{h^{l'}_i \sim \mathcal N(0, \mathcal K^{l'}(x,x))} \left[(a_W^{l'+1} \sigma_w)^2 \phi'(h^{l'}_i) \phi'(h^{l'}_i) \right]}{\partial a_W^{l+1}} \nonumber \\ =& \frac{1}{\mathcal J^{l',l'+1}}\frac{\partial}{\partial \mathcal K^{l'}(x,x)} \left(\frac{{(a_W^{l'+1} \sigma_w)^2}}{\sqrt{2\pi \mathcal K^{l'}(x,x)}} \int \phi'(h^{l'}_i) \phi'(h^{l'}_i) e^{-\frac{h^{l'}_i h^{l'}_i}{2\mathcal K^{l'}(x,x)}} dh^{l'}_i \right) \frac{\partial \mathcal K^{l'}(x,x)}{\partial a_W^{l+1}}\nonumber \\ =& \frac{2}{\mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \frac{\partial \mathcal K^{l'}(x, x)}{\partial \mathcal K^{l'-1}(x,x)} \cdots \frac{\partial \mathcal K^{l+1}(x,x)}{\partial a_W^{l+1}} \nonumber \\ =& \frac{4}{a_W^{l+1} \mathcal J^{l',l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x,x) \,, \end{align} where we calculated the derivative with respect to $\mathcal K^{l'}(x,x)$, then used integration by parts to get the third line. The derivation for \eqref{eq:tauto_ab} is similar. \subsection{ReLU Details} \paragraph{Learning rate $\eta$} The learning rate bound \eqref{eq:eta_t} is obtained by requiring $|\sqrt{\mathcal J^{l,l+1}(t)} -1 |$ to decrease monotonically with time $t$. \paragraph{\texorpdfstring{Derivation for $\chi_{\Delta}^l=0$}{}} This is straightforward to show by direct calculation in the infinite width limit. We set $a_W^l=1$ and ignore the neuron index $i$ for simplicity.
\begin{align} \chi^{l}_{\Delta} =& \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left[\phi''(h^{l}) \phi''(h^{l}) + \phi'(h^{l}) \phi'''(h^{l}) \right] \nonumber \\ &= \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \phi'(h^l) \phi''(h^l) \right) \right ] \nonumber \\ &= \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{d}{dh^{l}} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ &= \sigma_w^2 \mathbb E_{h^l \sim \mathcal N(0, \mathcal K^l(x,x))} \left [ \frac{h^l}{\mathcal K^l(x,x)} \left( \Theta(h^l) \delta(h^l) \right) \right ] \nonumber \\ &= 0 \,, \end{align} where $\Theta(h^l)$ is the Heaviside step function and $\delta(h^l)$ is the Dirac delta function. To get the last line we used $h^l \delta(h^l)=0$. \subsection{\texorpdfstring{\Cref{conj:bn}}{}} Here we offer a non-rigorous explanation for the conjecture in the infinite $|B|$ and the infinite width limit. We use an MLP model with $a_{\theta}^l=1$ as an example. We consider \begin{align} h^{l+1}_{x; i} = \sum_{j=1}^N W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + b^{l+1}_i + \mu h^l_{x; i}\,, \end{align} where \begin{align}\label{eq:BN} \tilde h^l_{x; i} =& \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \nonumber \\ =& \frac{\sqrt{|B|} \sum_{x' \in B} P_{x x'} h^l_{x'; i}}{\sqrt{ \sum_{x \in B} \left(\sum_{x' \in B} P_{x x'} h^l_{x'; i}\right)^2 }} \,, \end{align} where $P_{xx'} \equiv \delta_{xx'} - 1/ |B|$. It is a projector in the sense that $\sum_{x' \in B} P_{xx'} P_{x'x''} = P_{xx''}$.
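As a quick numerical sanity check (a minimal NumPy sketch, not part of the derivation), one can verify both the projector identity and the rewriting of the batch normalization in \eqref{eq:BN}:

```python
import numpy as np

rng = np.random.default_rng(0)
B = 8                                   # batch size |B|
P = np.eye(B) - np.ones((B, B)) / B     # P_{xx'} = delta_{xx'} - 1/|B|
assert np.allclose(P @ P, P)            # projector: sum_{x'} P_{xx'} P_{x'x''} = P_{xx''}

# BatchNorm rewrite: (h - mean) / std == sqrt(|B|) * P h / ||P h||
h = rng.normal(size=B)                  # one neuron, |B| samples
lhs = (h - h.mean()) / h.std()          # population std, matching the 1/|B| normalization
rhs = np.sqrt(B) * (P @ h) / np.linalg.norm(P @ h)
assert np.allclose(lhs, rhs)
```

Note that the population standard deviation (normalized by $1/|B|$, not $1/(|B|-1)$) is the one that matches \eqref{eq:BN}.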
Derivative of the normalized preactivation: \begin{align} \frac{\partial \tilde h^l_{x; i}}{\partial h^l_{x';j}} = \sqrt{|B|} \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x'';i}\right)^2}} - \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; i} \sum_{x'' \in B} P_{x'x''} h^l_{x''; i}}{\left( \sqrt{\sum_{x\in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x'';i}\right)^2 } \right)^3} \right)\delta_{ij} \,. \end{align} Then the one layer APJN: \begin{align}\label{eqapp:bn_apjn} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \sum_{x,x' \in B} \sum_{j=1}^{N_l} \mathbb E_{\theta} \left[\left(\phi'(\tilde h^l_{x; j}) \right)^2 \left(\frac{P_{xx'}}{\sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x''; j}\right)^2}} \right. \right. \nonumber \\ &\left. \left.- \frac{\sum_{x'' \in B} P_{xx''} h^l_{x''; j} \sum_{x'' \in B} P_{x'x''} h^l_{x''; j}}{\left( \sqrt{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x,x''} h^l_{x''; j}\right)^2 } \right)^3} \right)^2 \right] + \mu^2 \,. 
\end{align} In the infinite $|B|$ limit, only one term can contribute: \begin{align} \mathcal J^{l, l+1} =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{x,x' \in B} \sum_{j=1}^{N_l} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx'} P_{xx'}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2}\right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[ \sum_{j=1}^{N_l} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \frac{P_{xx}}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ =& \frac{\sigma_w^2}{N_l} \mathbb E_{\theta} \left[\sum_{j=1}^{N_l} \left( \left[\frac{1}{|B|} \sum_{x \in B} \left(\phi'(\tilde h^l_{x; j}) \right)^2 \right] \frac{|B|-1}{\sum_{x \in B} \left(\sum_{x'' \in B} P_{x x''} h^l_{x''; j}\right)^2} \right) \right] + \mu^2 + O\left(\frac{1}{|B|}\right) \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{\sigma_w^2}{N_l} \sum_{j=1}^{N_l} \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, \delta_{xx'} )} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where $x'$ is a dummy index, just to label the off-diagonal term. We used \cref{conjecture:proj} and \cref{conjecture:tilde_h} to get the result. \begin{conjecture}[Projected Norm]\label{conjecture:proj} In the infinite width limit, for large depth $l$, $\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2$ converges to the deterministic value $\frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right)$ as the batch size $|B| \rightarrow \infty$. \end{conjecture} \begin{proof}[Non-rigorous ``proof''] In the infinite width limit $h^l_{x; j}$ is sampled from a Gaussian distribution $\mathcal N(0, \mathcal K^l_{xx'})$, where the value $\mathcal K^l_{xx'}$ only depends on whether $x$ equals $x'$ or not.
We first simplify the formula: \begin{align}\label{eq:proj} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ =& \frac{1}{|B|} \sum_{x', x'' \in B} P_{x' x''} h^l_{x'; j} h^l_{x''; j} \nonumber \\ =& \frac{1}{|B|} \left(\sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x', x'' \in B} h^l_{x';j} h^l_{x'';j} \right) \nonumber \\ =& \frac{1}{|B|} \left(\frac{|B|-1}{|B|} \sum_{x' \in B} (h^l_{x'; j})^2 - \frac{1}{|B|} \sum_{x' \neq x''}^B h^l_{x';j} h^l_{x'';j} \right) \,. \end{align} The average over $x'$ and $x''$ in the infinite $|B|$ limit can be replaced by integration over their distribution (this is the non-rigorous step; for a completely rigorous proof see \citet{yang2018mean}): \begin{align} &\frac{1}{|B|} \sum_{\hat{x} \in B} \left(\sum_{x' \in B} P_{\hat{x}x'} h^l_{x';j}\right)^2 \nonumber \\ \xrightarrow{|B| \rightarrow \infty} & \frac{|B|-1}{|B|} \left(\mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x'}) } \left[(h^l_{x'; j})^2 \right] - \mathbb E_{h^l_{x';j} \sim \mathcal N(0, \mathcal K^l_{x'x''}) } \left[h^l_{x';j} h^l_{x'';j} \right] \right) \nonumber \\ = & \frac{|B|-1}{|B|} \left(\mathcal K^l_{x'x'} - \mathcal K^l_{x' x''} \right) \,. \end{align} \end{proof} Next, we need to show how to calculate $\mathcal K^{l}_{xx'}$. Before that, we first find the distribution of $\tilde h^l_{x; i}$ in the infinite $|B|$ limit. \begin{conjecture}[$\tilde h^l_{x;i}$ distribution]\label{conjecture:tilde_h} In the infinite $|B|$ limit and the infinite width limit, assume that for large depth $\mathcal K^l_{xx}$ reaches a fixed point.
Then $\tilde h^l_{x;i}$ can be seen as sampled from a Gaussian distribution with the covariance matrix \begin{align} \lim_{|B| \rightarrow \infty} \mathbb E_{\theta} \left[\tilde h^l_{x;i} \tilde h^l_{y;j}\right] = & \mathbb E_{\theta} \left[ \frac{\sum_{x', x'' \in B} P_{x x'} P_{y x''} h^l_{x';i} h^l_{x'';j}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \right] \nonumber \\ =& \frac{\sum_{x', x'' \in B} P_{xx'} P_{yx''} \mathcal K^l_{x'x''}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =& \frac{\mathcal K^l_{xy} - \frac{1}{|B|} \mathcal K^l_{\hat{x}\hat{x}} - \frac{|B|-1}{|B|} \mathcal K^l_{x\hat{x}}}{\frac{|B|-1}{|B|} \left(\mathcal K^l_{xx} - \mathcal K^l_{x \hat{x}}\right)} \delta_{ij} \nonumber \\ =&\delta_{xy} \delta_{ij} \,, \end{align} where we used \cref{conjecture:proj} in the first line. \end{conjecture} For ReLU: \begin{align} \mathcal K^{l+1}_{xx'} = \begin{cases} \frac{\sigma_w^2}{2} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx} & \text{if $x=x'$} \\ \frac{\sigma_w^2}{2\pi} + \sigma_b^2 + \mu^2 \mathcal K^l_{xx'} & \text{if $x \neq x'$} \,. \end{cases} \end{align} Then for $\mu=0$ the APJN is independent of $\sigma_w^2$ and $\sigma_b^2$ in the infinite $|B|$ limit: \begin{align} \mathcal J^{l, l+1} = \frac{\pi}{\pi - 1} \,, \end{align} and for $\mu=1$: \begin{align} \mathcal J^{l, l+1} = 1 + O \left(\frac{1}{l} \right) \,. \end{align} Intuitively, this is because the denominator of \eqref{eqapp:bn_apjn} grows with $l$ when $\mu=1$; the finite $|B|$ corrections are thus further suppressed. We checked our results in \Cref{fig:bn_relu}. \section{JSL and JKL} \subsection{JSL} Since we already discussed our results for JL, we show details for JSL in this section. The derivation for JL is almost identical.
Using JSL, the SGD update of $a^l_{\theta}$ at time $t$ is \begin{align}\label{eqapp:a_update_jsl} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \left(\mathcal J^{l', l'+1}(t) - 1 \right) \,. \end{align} We focus on ReLU networks to demonstrate the difference between JL and JSL. For ReLU networks, we can rewrite \eqref{eqapp:a_update_jsl} as \begin{align}\label{eqapp:j_update_jsl} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(\mathcal J^{l,l+1}(t) - 1\right) \,. \end{align} \paragraph{Learning Rate $\eta$} The learning rate limit $\eta_t$ is obtained by requiring $|\sqrt{\mathcal J^{l,l+1}(t)} - 1|$ to decrease monotonically with time $t$, for any $l$; then we have \begin{align}\label{eqapp:eta_t_jsl} \eta_t < \min_{1 \leq l \leq L} \left\{\frac{2}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(t)} \left(1 + \sqrt{\mathcal J^{l, l+1}(t)}\right)} \right \} \,. \end{align} Or, by solving \eqref{eqapp:j_update_jsl} with $\mathcal J^{l,l+1}(1)=1$: \begin{align} \eta_{\mathrm{1-step}} = \frac{1}{\sigma_w^2 \sqrt{\mathcal J^{l,l+1}(0)} \left(1 + \sqrt{\mathcal J^{l, l+1}(0)}\right)} \,. \end{align} For the dynamics of the optimization with a single learning rate $\eta$, we again estimate the allowed maximum learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align} \eta_0 = \frac{4}{\sigma_w^3 a_W^l \left(\sqrt{2} + a_W^l \sigma_w \right)} \,. \end{align} Compared to \eqref{eq:eta_0_jl}, which scales as $1/\log \sigma_w$ for large $\sigma_w$, $\eta_0$ for JSL scales as $\sigma_w^{-4}$ for large $\sigma_w$. This makes JSL a far worse choice than JL when $\mathcal J^{l,l+1} \gg 1$. In \Cref{figapp:relu_jac}, we checked our results with \cref{alg:j_train} using JSL. All other details are the same as in \Cref{fig:relu_jac}.
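The behavior implied by \eqref{eqapp:j_update_jsl} and the role of $\eta_{\mathrm{1-step}}$ can be illustrated by iterating the update directly; a minimal sketch, with an arbitrary illustrative choice $\sigma_w^2 = 4$ and $a_W^{l+1} = 1$:

```python
import numpy as np

sigma_w2 = 4.0                # illustrative sigma_w^2; with a_W = 1, J(0) = sigma_w^2 / 2
J = sigma_w2 / 2.0            # J(0) = 2, i.e. off criticality
eta = 1.0 / (sigma_w2 * np.sqrt(J) * (1.0 + np.sqrt(J)))   # eta_{1-step}

# JSL update, eq. (eqapp:j_update_jsl): sqrt(J) <- sqrt(J) - eta*sigma_w^2*sqrt(J)*(J - 1)
for t in range(10):
    J = (np.sqrt(J) - eta * sigma_w2 * np.sqrt(J) * (J - 1.0)) ** 2

print(J)                      # ~ 1.0: criticality is reached after the first update
```

Any fixed $\eta$ below the bound \eqref{eqapp:eta_t_jsl} yields monotone convergence of $|\sqrt{\mathcal J}-1|$; the particular choice $\eta_{\mathrm{1-step}}$ reaches $\mathcal J = 1$ in a single step, as in the derivation above.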
The gap between $\eta_0$ and the trainable regions can again be explained by analyzing \eqref{eqapp:eta_t_jsl}. Assume that $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds at time $t$. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 > \eta > \eta_t$, there is still a chance that \Cref{alg:j_train} diverges for some $t>0$. For $\mathcal J^{l,l+1} > 1$, if $\eta_0 < \eta < \eta_t$ holds, \Cref{alg:j_train} may still converge for some $t>0$. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jsl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$. From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JSL; 2) we scan the $\eta$-$\sigma_w$ plane using $\sigma_b=0$ networks and tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JSL for $980$ steps. Only points with $0.8< \mathcal J^{l,l+1} <1.25$ are plotted. All networks are trained with the normalized CIFAR-10 dataset, $|B|=256$.} \label{figapp:relu_jac} \end{figure} \subsection{JKL} We mentioned the following loss function in the main text. \begin{align}\label{eqapp:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} There are other possible choices for controlling the forward pass; we will discuss this one briefly. First, we calculate the derivative from the kernel terms.
We omit the $x$ and $t$ dependence and introduce $r^{l+1, l} = \mathcal K^{l+1} / \mathcal K^l$ for clarity: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l'\geq l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{2\lambda}{a_W^{l+1}} \log r^{l+1,l} + \frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \sum_{l' > l}^{L} \left[\log{r^{l'+1,l'}}\right]^2 \right) \,, \end{align} which has a structure similar to the APJN terms. Next we pick a term with $l' > l$ in the parentheses: \begin{align} &\frac{\partial}{\partial a_W^{l+1}}\left( \frac{\lambda}{2} \left[\log{r^{l'+1,l'}}\right]^2 \right) \nonumber \\ =& \frac{\lambda}{\mathcal K^{l'+1} \mathcal K^{l'}} \left(\mathcal K^{l'} \frac{\partial \mathcal K^{l'+1}}{\partial a_W^{l+1}} - \mathcal K^{l'+1} \frac{\partial \mathcal K^{l'}}{\partial a_W^{l+1}}\right)\log{r^{l'+1,l'}} \,, \end{align} which is independent of depth for $\sigma_b=0$, and is always finite for $\sigma_b > 0$. We find that the update of $a_b^{l+1}$ from the forward pass term is subtle. For $\sigma_b=0$, similar to the discussion of the APJN terms, the update of $a_b^{l+1}$ is zero. For $\sigma_b > 0$, there are two possibilities: \begin{itemize} \item Unbounded activation functions, when $\chi_{\mathcal K}^l > 1$: $\mathcal K^l \rightarrow \infty$ as $l\rightarrow \infty$, thus the updates of $a_b^{l+1}$ from the forward pass term vanish. \item Bounded activation functions, or unbounded activation functions with $\chi_{\mathcal K}^l < 1$: $\mathcal K^l \rightarrow \mathcal K^{\star}$, thus the contribution from the forward pass term is always $O(1)$. \end{itemize} Summarizing the discussion above, we do not have an exploding or vanishing gradients problem originating from the term we introduced to tune the forward pass. The forward pass term simply speeds up the updates of $a_W^{l+1}$ and $a_b^{l+1}$ in most cases. \paragraph{ReLU} Again, we use a ReLU MLP network as an example.
For $\sigma_b=0$, $\mathcal L_{\mathcal J \mathcal K \log}$ is equivalent to $(1+\lambda)\mathcal L_{\log}$ due to the scale invariance property of the ReLU activation function, which can be checked by using $\mathcal K^{l+1}(x,x) = \sigma_w^2 \mathcal K^l(x,x) / 2$. For finite $\sigma_b$, we use $\mathcal K^{l+1}(x,x) = \mathcal J^{l,l+1} \mathcal K^l(x,x) + \sigma_b^2$: \begin{align}\label{eqapp:relu_jkl} \mathcal L_{\mathcal J \mathcal K \log} = \frac{1}{2} \sum_{l=1}^{L} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=1}^{L} \left[\log\left (\mathcal J^{l,l+1} + \frac{\sigma_b^2}{\mathcal K^l(x,x)} \right)\right]^2 \,. \end{align} In the ordered phase $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 < 1$, one can prove that $\mathcal K^l(x,x) \rightarrow \sigma_b^2 /(1 - (a_W^{l+1}(t)\sigma_w)^2/2)$ as $l \rightarrow \infty$; thus \eqref{eqapp:relu_jkl} is equivalent to JL at large depth. In the chaotic phase $\mathcal J^{l,l+1} = (a_W^{l+1}(t) \sigma_w)^2 / 2 > 1$ and $\mathcal K^l(x,x) \rightarrow \infty$; \eqref{eqapp:relu_jkl} is equivalent to JL with an extra overall factor $1+\lambda$ at large depth. \section{ResMLP} \subsection{Network Recursion Relation} \paragraph{Input} The input image is chopped into an $N \times N$ grid of patches of size $P\times P$ pixels (often $16 \times 16$). Each patch is fed into the same linear layer to form a set of $N^2$ $d$-dimensional embeddings, referred to as channels. The resulting input to the ResMLP blocks is $h^0_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. Here and in what follows, Greek letters ($\mu,\nu$ etc.) index the patches, while Latin letters ($i,j$ etc.) index the channels. Note that in practice, the above two operations are combined into a convolutional layer with the filter size coinciding with the patch resolution ($P \times P \times C$) and the stride equal to $P$, so as to avoid overlap between patches. Here, $C$ is the number of channels in the original image.
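The patch embedding described above can be sketched in a few lines of NumPy (the sizes are illustrative and correspond to $C=3$, $P=16$, $N=14$, $d=384$; the weight matrix is a hypothetical stand-in for the shared linear layer):

```python
import numpy as np

rng = np.random.default_rng(0)
C, P, N, d = 3, 16, 14, 384               # image channels, patch size, grid size, embedding dim
img = rng.normal(size=(C, N * P, N * P))  # a 224 x 224 input image
W = rng.normal(size=(P * P * C, d))       # the (shared) linear layer applied to every patch

# chop into an N x N grid of P x P patches, flatten each patch, then embed
patches = img.reshape(C, N, P, N, P).transpose(1, 3, 0, 2, 4).reshape(N * N, -1)
h0 = patches @ W                          # h^0 lives in R^{N^2} (x) R^d
print(h0.shape)                           # (196, 384)
```

Applying the same weight matrix to every flattened patch is exactly what the strided convolution mentioned above computes.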
\paragraph{ResMLP block} The input embedding $h^0_{\mu i}$ is passed through a series of ($L$) self-similar ResMLP blocks, which output $h^L_{\mu i} \in \mathbb R^{N^2} \otimes \mathbb R^{d}$. In the following, we use the notation $1^{2,3}_{4}$ for the parameters, where 1 denotes the parameter, 2 denotes the block-index, 3 denotes the specific action within the block, and 4 denotes the neural indices. A ResMLP block consists of the following operations. \begin{align}\label{appeq:resmlp_rec_full} &\texttt{AffineNorm1:} & a^{l+1}_{\mu i} &= \left( \alpha^{{l+1},a}_i h^{l}_{\mu i} + \beta^{{l+1},a}_i \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear1:} & b^{l+1}_{\mu i} &= \sum^{N^2}_{\nu=1} W^{{l+1},b}_{\mu\nu} a^{l+1}_{\nu i} + B^{{l+1},b}_{\mu} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual1:} & c^{l+1}_{\mu i} &= \mathcal E^{{l+1},c}_i b^{l+1}_{\mu i} + \mu_1 a^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ & \texttt{AffineNorm2:} & d^{l+1}_{\mu i} &= \left( \alpha^{{l+1},d}_i c^{l+1}_{\mu i} + \beta^{{l+1},d}_i \right) \,, &&\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{linear2:} & e^{l+1}_{\mu i} &= \sum^d_{j=1} W^{{l+1},e}_{ij} d^{l+1}_{\mu j} + B^{{l+1},e}_{i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{activation:} & f^{l+1}_{\mu i} &= \phi\left(e^{l+1}_{\mu i} \right) \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^{4d} \right) \\ &\texttt{linear3:} & g^{l+1}_{\mu i} &= \sum^{4d}_{j=1} W^{{l+1},g}_{ij} f^{l+1}_{\mu j} + B^{{l+1},g}_{i}\,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \\ &\texttt{residual2:} & h^{l+1}_{\mu i} &= \mathcal E^{{l+1},h}_i g^{l+1}_{\mu i} + \mu_2 c^{l+1}_{\mu i} \,, & &\left( \mathbb R^{N^2} \otimes \mathbb R^d \right) \end{align} where the brackets on the right contain the dimensions of the output of the layers.
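A minimal NumPy sketch of one block at initialization may help fix the notation (biases and AffineNorm parameters are at their initial values, so AffineNorm acts as the identity; $\sigma_w = 1$, $\sigma_b = 0$, and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N2, d, mu, eps = 196, 64, 1.0, 0.1        # patches N^2, channels d, skip mu, LayerScale eps
gelu = lambda x: 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def resmlp_block(h, Wb, We, Wg):
    a = h                                 # AffineNorm1: identity at initialization
    b = Wb @ a                            # linear1: mixes patches
    c = eps * b + mu * a                  # residual1
    dd = c                                # AffineNorm2: identity at initialization
    e = dd @ We.T                         # linear2: d -> 4d channels
    f = gelu(e)                           # activation
    g = f @ Wg.T                          # linear3: 4d -> d channels
    return eps * g + mu * c               # residual2

Wb = rng.normal(size=(N2, N2)) / np.sqrt(N2)      # fan_in: Var = 1/N^2 (sigma_w = 1)
We = rng.normal(size=(4 * d, d)) / np.sqrt(d)     # fan_in: Var = 1/d
Wg = rng.normal(size=(d, 4 * d)) / np.sqrt(4 * d) # fan_in: Var = 1/(4d)
h = rng.normal(size=(N2, d))
out = resmlp_block(h, Wb, We, Wg)
print(out.shape)                          # (196, 64)
```

The tanh approximation of GELU above is a standard stand-in for the exact $\phi$ and is immaterial to the shapes being demonstrated.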
We consider linear layers with weights and biases initialized with standard fan\_in. \texttt{linear1} acts on the patches, with parameters initialized as $W^{l+1,b}_{\mu\nu} \sim \mathcal N(0, \sigma_w^2/N^2)\,; B^{l+1,b}_{\mu} \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear2} acts on the channels, with parameters initialized as $W^{l+1,e}_{ij} \sim \mathcal N(0, \sigma_w^2/d)\,; B^{l+1,e}_i \sim \mathcal N(0, \sigma_b^2)$. \texttt{linear3} also acts on the channels, with parameters initialized as $W^{l+1,g}_{ij} \sim \mathcal N(0, \sigma_w^2/(4d))\,; B^{l+1,g}_i \sim \mathcal N(0, \sigma_b^2)$. GELU is used as the activation function $\phi$. \texttt{AffineNorm1} and \texttt{AffineNorm2} perform an element-wise multiplication with a trainable vector of weights $\alpha^{l+1,a}_i, \alpha^{l+1,d}_i \in \mathbb R^d$ and an addition of a trainable bias vector $\beta^{l+1,a}_i, \beta^{l+1,d}_i \in \mathbb R^d$. Residual branches are scaled by a trainable vector $\mathcal E^{l+1,c}_i, \mathcal E^{l+1,h}_i \in \mathbb R^{d}$ (\texttt{LayerScale}), whereas the skip connections are scaled by scalar strengths $\mu_1$ and $\mu_2$. \paragraph{Output} The action of the blocks is followed by an Average-Pooling layer, to convert the output to a $d$-dimensional vector. This vector is fed into a linear classifier that gives the output of the network $h^{L+1}_i$. \subsection{NNGP Kernel Recursion Relation} At initialization, $\alpha^{l+1,a}_i = \alpha^{l+1,d}_i = \mathbf 1_d$ and $\beta^{l+1,a}_i = \beta^{l+1,d}_i = \mathbf 0_d$. Thus, AffineNorm layers perform identity operations at initialization. \texttt{LayerScale} is initialized as $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$, where $\mathcal E$ is chosen to be a small scalar. (For example, $\mathcal E$ is taken to be $0.1$ for a 12-block ResMLP and $10^{-5}$ for a 24-block ResMLP network.) Additionally, we also take $\mu_1 = \mu_2 = \mu$.
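The fan\_in initialization is what makes each linear layer propagate the diagonal kernel as $\mathcal K_{\text{out}} = \sigma_w^2 \mathcal K_{\text{in}} + \sigma_b^2$, a fact used repeatedly in the recursion that follows; a quick Monte Carlo sanity check (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, sw2, sb2, trials = 512, 2.0, 0.5, 20000
Ka = 1.3                                          # input kernel K_in (variance per component)

x = rng.normal(scale=np.sqrt(Ka), size=(trials, n_in))
W = rng.normal(scale=np.sqrt(sw2 / n_in), size=(trials, n_in))  # fan_in: Var(W) = sw2/n_in
b = rng.normal(scale=np.sqrt(sb2), size=trials)

pre = (W * x).sum(axis=1) + b                     # one output unit per trial
print(pre.var())                                  # ~ sw2 * Ka + sb2 = 3.1
```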
With these simplifications, we can obtain the recursion relation for the diagonal part of the Neural Network Gaussian Process (NNGP) kernel for the ResMLP block-outputs. We note that the full NNGP kernel $\mathcal K^l_{\mu\nu;ij}$ is a tensor in $\mathbb R^{N^2} \otimes \mathbb R^{N^2} \otimes \mathbb R^{d} \otimes \mathbb R^{d}$. Here, we focus on its diagonal part $\mathcal K^l_{\mu\mu;ii}$. For clarity, we remove the subscripts ($\mu\mu;ii$). The diagonal part of the NNGP kernel for a block output $h^l_{\mu i}$ is defined as \begin{align}\label{appeq:remlp_nngpk} \mathcal K^l \equiv \mathbb E_\theta \left[ h^l_{\mu i} h^l_{\mu i} \right] \,, \end{align} which is independent of its patch and channel indices, $\mu$ and $i$, in the infinite width limit. The recursion relation can be obtained by propagating the NNGP kernel through a block. For clarity, we define NNGP kernels for the intermediate outputs within the blocks. For example, $\mathcal K^{l+1}_a \equiv \mathbb{E}_{\theta} \left[ a^{l+1}_{\mu i} a^{l+1}_{\mu i} \right]$, $\mathcal K^{l+1}_b \equiv \mathbb{E}_{\theta} \left[ b^{l+1}_{\mu i} b^{l+1}_{\mu i} \right]$, etc.
\begin{align}\label{appeq:resmlp_nngpk_rec} &\texttt{AffineNorm1:} & \mathcal K^{l+1}_a &= \mathcal K^l \,, \\ \nonumber &\texttt{linear1:} & \mathcal K^{l+1}_b &= \sigma_w^2 \mathcal K^{l+1}_a + \sigma_b^2 \\ &&&= \sigma_w^2 \mathcal K^l + \sigma_b^2 \,, \\ \nonumber &\texttt{residual1:} & \mathcal K^{l+1}_c &= \mathcal E^2 \mathcal K^{l+1}_b + \mu^2 \mathcal K^l \\ \nonumber &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \\ &&&= \left( \mu^2 + \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \,, \\ \nonumber &\texttt{AffineNorm2:} & \mathcal K^{l+1}_d &= \mathcal K^{l+1}_c \\ &&&= \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \,, \\ \nonumber &\texttt{linear2:} & \mathcal K^{l+1}_e &= \sigma_w^2 \mathcal K^{l+1}_d + \sigma_b^2 \\ &&&= \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \,, \\ \nonumber &\texttt{activation:} & \mathcal K^{l+1}_f &= \frac{\mathcal K^{l+1}_e}{4} + \frac{\mathcal K^{l+1}_e}{2\pi} \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\left( \mathcal K^{l+1}_e \right)^2}{\pi \left( 1 + \mathcal K^{l+1}_e \right) \sqrt{1 + 2\mathcal K^{l+1}_e}} \\ \nonumber &&&\equiv \mathcal G \left[ \mathcal K^{l+1}_e \right] \\ &&&= \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \\ \nonumber &\texttt{linear3:} & \mathcal K^{l+1}_g &= \sigma_w^2 \mathcal K^{l+1}_f + \sigma_b^2 \\ &&&= \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \,, \\ \nonumber &\texttt{residual2:} & \mathcal K^{l+1} &= \mathcal E^2 \mathcal K^{l+1}_g + \mu^2 \mathcal K^{l+1}_c \\ \nonumber &&&= \mu^2 \left( \left( \mu^2 + \mathcal E^2
\sigma_w^2 \right) \mathcal K^l + \mathcal E^2 \sigma_b^2 \right) + \\ \nonumber &&& \quad + \mathcal E^2 \, \left\{ \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] + \sigma_b^2 \right\} \\ \nonumber &&&= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \\ &&&\quad + \mathcal E^2 \sigma_w^2 \, \mathcal G \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right] \,, \end{align} where we have defined \begin{equation} \mathcal G [z] = \frac{z}{4} + \frac{z}{2\pi} \arcsin{\left( \frac{z}{1 + z} \right)} + \frac{z^2}{\pi \left( 1 + z \right) \sqrt{1 + 2z}} \,. \end{equation} Thus, we have a recursion relation, representing $\mathcal K^{l+1}$ in terms of $\mathcal K^l$. As a side note, if we replace the \texttt{GELU} activation function with \texttt{ReLU}, the relation simplifies greatly and offers some intuition. Specifically, $\mathcal G[z]$ gets replaced by $z/2$ in this case. This gives us the following recursion relation for ResMLP with \texttt{ReLU}. \begin{align} \nonumber \mathcal K^{l+1} &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 \right) \mathcal K^l + (1 + \mu^2) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^2 \sigma_w^2 \left( \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 \left( \sigma_w^2 \mathcal K^l + \sigma_b^2 \right) \right) + \sigma_b^2 \right) \\ &= \left( \mu^4 + \mu^2 \mathcal E^2 \sigma_w^2 + \frac{1}{2} \mu^2 \mathcal E^2 \sigma_w^4 + \frac{1}{2} \mathcal E^4 \sigma_w^6 \right) \mathcal K^l + (1 + \mu^2 + \frac{1}{2}\sigma_w^2) \mathcal E^2 \sigma_b^2 + \frac{1}{2} \mathcal E^4 \sigma_w^4 \sigma_b^2 \,. \end{align} \subsection{Jacobian Recursion Relation} Next, we calculate the APJN for ResMLP between two consecutive blocks.
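Before doing so, we note that the recursion \eqref{appeq:resmlp_nngpk_rec} is easy to iterate numerically. A minimal numpy sketch (the helper names are ours; the \texttt{ReLU} variant is obtained by swapping $\mathcal G[z] \to z/2$):

```python
import numpy as np

def G(z):
    # GELU kernel map G[z] defined above
    return (z / 4
            + z / (2 * np.pi) * np.arcsin(z / (1 + z))
            + z**2 / (np.pi * (1 + z) * np.sqrt(1 + 2 * z)))

def kernel_step(K, mu, eps, sw2, sb2, g=G):
    """One ResMLP block of the NNGP recursion: K^{l+1} as a function of K^l.
    mu, eps: residual / LayerScale strengths; sw2, sb2: weight / bias variances."""
    Ke = sw2 * (mu**2 * K + eps**2 * (sw2 * K + sb2)) + sb2   # linear2 kernel
    return ((mu**4 + mu**2 * eps**2 * sw2) * K
            + (1 + mu**2) * eps**2 * sb2
            + eps**2 * sw2 * g(Ke))

def relu_kernel_step(K, mu, eps, sw2, sb2):
    # ReLU variant: G[z] -> z / 2
    return kernel_step(K, mu, eps, sw2, sb2, g=lambda z: z / 2)
```

Iterating `kernel_step` from $\mathcal K^0$ gives the depth profile of the kernel; for small $\mathcal E$ the map is dominated by the $\mu^4 \mathcal K^l$ term.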
For clarity, we first derive the expression for the partial derivative of ${l+1}^{th}$ block output $h^{l+1}_{\mu i}$ with respect to $l^{th}$ block output $h^l_{\nu j}$. \begin{align}\label{appeq:resmlp_derivative} \nonumber \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} &= \mathcal E^{l+1,h}_i \frac{\partial g^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \frac{\partial f^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) \frac{\partial e^{l+1}_{\mu k}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial d^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \frac{\partial c^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \frac{\partial c^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m \frac{\partial b^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \frac{\partial a^{l+1}_{\mu m}}{\partial h^l_{\nu j}} + \mu_2 \mathcal E^{l+1,c}_i \frac{\partial b^{l+1}_{\mu i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \frac{\partial a^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \\ \nonumber &= \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d \sum_{\lambda=1}^{N^2} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mathcal E^{l+1,c}_m W^{l+1,b}_{\mu\lambda} 
\frac{\partial a^{l+1}_{\lambda m}}{\partial h^l_{\nu j}} + \\ \nonumber &\quad + \mathcal E^{l+1,h}_i \sum_{k=1}^{4d} \sum_{m=1}^d W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{km} \mu_1 \delta_{\mu\nu} \delta_{mj} + \\ \nonumber &\quad + \mu_2 \mathcal E^{l+1,c}_i \sum_{\lambda=1}^{N^2} W^{l+1,b}_{\mu\lambda} \frac{\partial a^{l+1}_{\lambda i}}{\partial h^l_{\nu j}} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^{l+1,h}_i \mathcal E^{l+1,c}_j \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \\ \nonumber &\quad + \mu_1 \mathcal E^{l+1,h}_i \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \mu_2 \mathcal E^{l+1,c}_i \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu_2\mu_1 \delta_{\mu\nu} \delta_{ij} \\ \nonumber &= \mathcal E^2 \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} + \mu \mathcal E \delta_{\mu\nu} \sum_{k=1}^{4d} W^{l+1,g}_{ik} \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} + \\ &\quad + \mu\mathcal E \delta_{ij} W^{l+1,b}_{\mu\nu} + \mu^2 \delta_{\mu\nu} \delta_{ij} \,, \end{align} where in the last step, we have used the initial values of the parameters : $\mathcal E^{l+1,c}_i = \mathcal E^{l+1,h}_i = \mathcal E \, \mathbf 1_d$ and $\mu_1 = \mu_2 = \mu$. Next, we calculate the APJN using \eqref{appeq:resmlp_derivative}. We will perform the calculation in the limit of large $N^2$ and $d$; dropping all the corrections of order $\frac{1}{N^2}$ and $\frac{1}{d}$. 
\begin{align}\label{appeq:resmlp_apjn} \nonumber \mathcal J^{l,l+1} &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \frac{\partial h^{l+1}_{\mu i}}{\partial h^l_{\nu j}} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \mathcal E^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \sum_{k,m}^{4d} \delta_{\mu\nu} W^{l+1,g}_{ik} W^{l+1,g}_{im} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu m}) W^{l+1,e}_{kj} W^{l+1,e}_{mj} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{ij} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \mu^4 \sum_{\mu,\nu}^{N^2} \sum_{i,j}^d \delta_{\mu\nu} \delta_{ij} \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \mathcal E^4 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} W^{l+1,b}_{\mu\nu} W^{l+1,b}_{\mu\nu} + \right. \\ \nonumber &\qquad\qquad\quad \left. + \mu^2\mathcal E^2 \sigma_w^2 \sum_{\mu,\nu}^{N^2} \sum_j^d \sum_k^{4d} \delta_{\mu\nu} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) W^{l+1,e}_{kj} W^{l+1,e}_{kj} + \right. \\ \nonumber &\qquad\qquad\quad + \left. \mu^2\mathcal E^2 \sigma_w^2 N^2 d + \mu^4 N^2 d \right] \\ \nonumber &= \frac{1}{N^2 d} \mathbb E_\theta \left[ \frac{1}{4} \mathcal E^4 \sigma_w^6 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \mu^2\mathcal E^2 \sigma_w^4 \sum_\mu^{N^2} \sum_k^{4d} \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) + \right. \\ \nonumber &\qquad\qquad\quad \left. 
+ (\mathcal E^2 \sigma_w^2 + \mu^2) \mu^2 N^2 d \right] \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \mathbb E_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \right) \\ \nonumber &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H_e [\mathcal K^{l+1}_e] \right) \\ &= (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \mathcal E^2 \sigma_w^4 \, \mathcal H [\mathcal K^l] \right) \,, \end{align} where we have defined \begin{align} \nonumber \mathcal H_e [\mathcal K^{l+1}_e] &\equiv \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] \\ &= \frac{1}{4} + \frac{1}{2\pi} \left( \arcsin{\left( \frac{\mathcal K^{l+1}_e}{1 + \mathcal K^{l+1}_e} \right)} + \frac{\mathcal K^{l+1}_e (3 + 5 \mathcal K^{l+1}_e)}{(1 + \mathcal K^{l+1}_e) (1 + 2\mathcal K^{l+1}_e)^{3/2}} \right) \,. \end{align} We also write $\mathcal K^{l+1}_e$ in terms of $\mathcal K^l$ and define \begin{align} \nonumber \mathcal H [\mathcal K^l] &= \mathcal H_e [\mathcal K^{l+1}_e] \\ &= \mathcal H_e \left[ \sigma_w^2 \left( \mu^2 \mathcal K^l + \mathcal E^2 (\sigma_w^2 \mathcal K^l + \sigma_b^2) + \sigma_b^2 \right) \right] \,. \end{align} It is clear from \eqref{appeq:resmlp_apjn} that for $\mu=1$, $\mathcal J^{l,l+1} > 1$, rendering the network off criticality. However, $\mathcal J^{l,l+1}$ can be tuned arbitrarily close to criticality by taking $\mathcal E$ to be small at $t=0$. This explains the necessity for \texttt{LayerScale} with small initial value in the ResMLP architecture. We note that the results in \eqref{appeq:resmlp_apjn} greatly simplify on using \texttt{ReLU} instead of \texttt{GELU} as $\phi$. We mention them here to provide intuition. $\mathcal H_e [\mathcal K^{l+1}_e] = \mathbb{E}_\theta \left[ \phi'(e^{l+1}_{\mu k}) \phi'(e^{l+1}_{\mu k}) \right] = \frac{1}{2}$ in this case. 
This gives us the simple result \begin{equation} \mathcal J^{l,l+1} = (\mu^2 + \mathcal E^2 \sigma_w^2) \left( \mu^2 + \frac{1}{2}\mathcal E^2 \sigma_w^4 \right) \end{equation} for the \texttt{ReLU} activation function. \section{Introduction} Initializing Deep Neural Networks (DNNs) correctly is crucial for trainability and convergence. In recent years, there has been remarkable progress in tackling the problem of exploding and vanishing gradients. One line of work utilizes the convergence of DNNs to Gaussian Processes in the limit of infinite width \citep{neal1996priors, lee2018deep, matthews2018gaussian, novak2018bayesian, garriga2018deep, hron2020infinite, yang2019tensor}. The infinite width analysis is then used to determine critical initialization for the hyperparameters of the network \citep{he2015delving, poole2016exponential, schoenholz2016deep, lee2018deep, roberts2021principles, doshi2021critical}. It has further been shown that dynamical isometry can improve the performance of DNNs \citep{pennington2018spectral, xiao2018dynamical}. Exploding and vanishing gradients can also be regulated with special activation functions such as SELU \citep{klambauer2017self-normalizing} and GPN \citep{lu2020bidirectionally}. Deep Kernel Shaping \citep{martens2021deep, zhang2022deep} improves trainability of deep networks by systematically controlling $Q$ and $C$ maps. Normalization layers such as LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch} and GroupNorm \citep{wu2018group} facilitate training of DNNs by significantly enhancing the critical regime \citep{doshi2021critical}. There have also been algorithmic attempts at regulating the forward pass, such as LSUV \citep{mishkin2015lsuv}. Another line of work sets networks with residual connections to criticality by suppressing the contribution from the residual branches at initialization.
In Highway Networks \citep{srivastava2015training}, this is achieved by initializing the network to have a small ``transform gate''. \citet{goyal2017accurate} achieve this in ResNets by initializing the scaling coefficient for the residual block's last BatchNorm at 0. In Fixup \citep{zhang2019fixup} and T-Fixup \citep{huang2020improving}, careful weight-initialization schemes ensure suppression of residual branches in deep networks. Techniques such as SkipInit \citep{de2020batch}, LayerScale \citep{touvron2021cait} and ReZero \citep{bachlechner2021rezero} multiply the residual branches by a trainable parameter, initialized to a small value or to 0. Despite this progress, the aforementioned techniques are limited by either the availability of analytical solutions, the specific use of normalization layers, or the use of residual connections. One needs to manually decide on the techniques to be employed on a case-by-case basis. In this work, we propose a simple algorithm, which we term $\texttt{AutoInit}$, that automatically initializes a DNN to criticality. Notably, the algorithm can be applied to any feedforward DNN, irrespective of architectural details, the large width assumption, or the existence of an analytic treatment. We expect that $\texttt{AutoInit}$ will be an essential tool in architecture search tasks because it will always ensure that a never-before-seen architecture is initialized well. \subsection{Criticality in Deep Neural Networks} In the following, we employ the definition of criticality using \emph{Partial Jacobian} \citep{doshi2021critical}. Consider a DNN made up of a sequence of blocks. Each block consists of Fully Connected layers, Lipschitz activation functions, Convolutional layers, Residual Connections, LayerNorm \citep{ba2016layer}, BatchNorm \citep{ioffe2015batch}, AffineNorm \citep{touvron2021resmlp}, LayerScale \citep{touvron2021cait}, or any combination thereof.
We consider a batched input to the network, where each input tensor $x \in \mathbb{R}^{n^0_1} \otimes \mathbb{R}^{n^0_2} \otimes \cdots$ is taken from the batch $B$ of size $\lvert B \rvert$. The output tensor of the $l^{th}$ block is denoted by $h^l (x) \in \mathbb{R}^{n^l_1} \otimes \mathbb{R}^{n^l_2} \otimes \cdots$. $h^{l+1}(x)$ depends on $h^l(x)$ through a layer-dependent function $\mathcal{F}^{l+1}$, denoting the operations of the aforementioned layers. This function, in turn, depends on the parameters of the various layers within the block, denoted collectively by $\theta^{l+1}$. The explicit layer dependence of the function $\mathcal{F}^{l+1}$ highlights that we do not require the network to have self-repeating layers (blocks). We note that $h^{l+1} (x)$ can, in general, depend on $h^{l} (x')$ for all $x'$ in the batch $B$, which will indeed be the case when we employ BatchNorm. The recurrence relation for such a network can be written as \begin{align}\label{eq:DNNrecursion} h^{l+1} (x) = \mathcal{F}^{l+1}_{\theta^{l+1}} \left( \{h^l(x') \;|\; \forall x' \in B \} \right) \,, \end{align} where we have suppressed all the indices for clarity. Each parameter matrix $\theta^{l+1}$ is sampled from a zero-mean distribution. We will assume that the $2+\delta$ moments of $|\theta^{l+1}|$ are finite for some $\delta > 0$, such that the Central Limit Theorem holds. Then the variances of $\theta^{l+1}$ can be viewed as hyperparameters and will be denoted by $\sigma^{l+1}_{\theta}$ for each $\theta^{l+1}$. We define the $\texttt{Flatten}$ operation, which reshapes the output $h^l(x)$ by merging all its dimensions. \begin{align} \bar h^l(x) = \texttt{Flatten}\left( h^l(x) \right) \in \mathbb{R}^{N^l} \,, \end{align} where $N^l \equiv n^l_1 n^l_2 \cdots$.
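In code, $\texttt{Flatten}$ is nothing but a reshape that merges all dimensions of one block output; a trivial numpy illustration:

```python
import numpy as np

def flatten(h):
    # merge all dimensions of a single block output h^l(x) into a vector
    return np.reshape(h, -1)

h = np.zeros((3, 4, 5))   # e.g. n^l_1 = 3, n^l_2 = 4, n^l_3 = 5
N_l = 3 * 4 * 5           # N^l = n^l_1 n^l_2 n^l_3
```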
\begin{definition}[Average Partial Jacobian Norm (APJN)] \label{def:APJN} For a DNN given by \eqref{eq:DNNrecursion}, APJN is defined as \begin{align} \mathcal J^{l_0, l} \equiv \mathbb E_{\theta} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l}} \sum_{i=1}^{N_{l_0}} \sum_{x, x' \in B} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \frac{\partial \bar{h}^{l}_j(x')}{\partial \bar{h}^{l_0}_i(x)} \right] \,, \end{align} where $\mathbb E_\theta[\cdot]$ denotes the average over parameter initializations. \end{definition} \begin{remark} For DNNs without BatchNorm and with normalized inputs, the definition of APJN for $|B|>1$ is equivalent to the one for the $|B|=1$ case. \end{remark} We use APJN as the empirical diagnostic of criticality. \begin{definition}[Critical Initialization] \label{def:critical} A DNN given by \eqref{eq:DNNrecursion}, consisting of $L+2$ blocks, including input and output layers, is critically initialized if all block-to-block APJNs are equal to $1$, i.e. \begin{align} \mathcal J^{l,l+1} = 1 \,, \quad \forall \quad 1 \leq l \leq L \,. \end{align} \end{definition} Critical initialization as defined by \Cref{def:critical} is essential, as it prevents the gradients from exploding or vanishing at $t=0$.
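For a single ReLU layer with $|B|=1$, the APJN of \Cref{def:APJN} is easy to estimate by Monte Carlo: the block Jacobian of $h^{l+1} = W \phi(h^l)$ is $W \, \mathrm{diag}(\phi'(h^l))$, and its normalized squared Frobenius norm averages to $\sigma_w^2 / 2$. A minimal numpy sketch (the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def apjn_relu_block(sigma_w, n_in, n_out, n_samples=200):
    """Monte-Carlo estimate of J^{l,l+1} for a single ReLU layer
    h^{l+1} = W phi(h^l), with W_ij ~ N(0, sigma_w^2 / n_in) and |B| = 1."""
    vals = []
    for _ in range(n_samples):
        h = rng.standard_normal(n_in)                         # preactivations h^l
        W = sigma_w / np.sqrt(n_in) * rng.standard_normal((n_out, n_in))
        jac = W * (h > 0)                                     # W diag(phi'(h^l))
        vals.append(np.sum(jac**2) / n_out)                   # normalized Frobenius^2
    return float(np.mean(vals))
```

With $\sigma_w^2 = 2$ (Kaiming initialization) the estimate is close to $1$, i.e. the layer is critical.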
One can readily see this by calculating the gradient for any flattened parameter matrix $\theta$ at initialization: \begin{align}\label{eq:grad} \frac{1}{|\theta^l|}\|\nabla_{\theta^l} \mathcal L \|^2_2 =& \frac{1}{|\theta^l|} \left\|\sum_{\mathrm{all}} \frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \frac{\partial \bar{h}^{L+1}_i}{\partial \bar{h}^L_j}\cdots \frac{\partial \bar{h}^{l+1}_k}{\partial \bar{h}^l_m} \frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_2 \nonumber \\ \sim &\, O \left( \frac{1}{|\theta^l|} \left\|\frac{\partial \mathcal L}{\partial \bar{h}^{L+1}_i} \right\|^2_2 \cdot \mathcal J^{L, L+1} \cdots \mathcal J^{l, l+1} \cdot \left \|\frac{\partial \bar{h}^l_m}{\partial \theta^{l}_{n}} \right\|^2_F \right)\,, \end{align} where $\| \cdot \|_F$ denotes the Frobenius norm. In the second line, we utilized the factorization property of APJN \begin{align}\label{eq:factor} \mathcal J^{l_0,l} = \prod_{l'=l_0}^{l-1} \mathcal J^{l', l'+1} \,, \end{align} which holds in the infinite width limit given there is no weight sharing across the blocks. One may further require $\left\| \partial \mathcal L / \partial \bar{h}^{L+1}_i \right\|_2 \sim O(1)$. However, in practice we observe that this requirement is less important once the condition in \Cref{def:critical} is met. \subsection{Automatic Critical Initialization} For general architectures, analytically calculating APJN is often difficult or even impossible. This poses a challenge in determining the correct parameter initializations to ensure criticality; especially in networks without self-similar layers. Moreover, finite network width is known to have nontrivial corrections to the criticality condition \citep{roberts2021principles}. This calls for an algorithmic method to find critical initialization. 
To that end, we propose \Cref{alg:j_general}, which we call $\texttt{AutoInit}$, for critically initializing deep neural networks \emph{automatically}, without the need for analytic solutions of the signal propagation or of the mean-field approximation. The algorithm works for general feedforward DNNs, as defined in \eqref{eq:DNNrecursion}. Moreover, it naturally takes into account all finite width corrections to criticality because it works directly with an instance of a network. We do tacitly assume the existence of a critical initialization. If the network cannot be initialized critically, the algorithm will return a network that can propagate gradients well because the APJNs will be pushed as close to $1$ as possible. The central idea behind the algorithm is to choose the hyperparameters for all layers such that the condition in \Cref{def:critical} is met. This is achieved by optimizing a few auxiliary scalar parameters $a^l_{\theta}$ of a twin network with parameters $a^l_{\theta} \theta^{l}$ while freezing the parameters $\theta^{l}$. The loss function is minimized when the condition in \Cref{def:critical} is satisfied. \begin{algorithm}[h] \caption{\texttt{AutoInit} (SGD)} \label{alg:j_general} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma^l_\theta;\, a^l_{\theta}(t) \; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$.
\State \textbf{Set} $t=0$ and $\{a_\theta^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l_\theta(t+1) = a^l_\theta(t) - \eta \nabla_{a^l_\theta} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{\sigma^l_\theta = \sigma^l_{\theta} a^l_{\theta}(t) ;\, 1\; | \; \forall \; 1 \leq l \leq L\,, \forall \theta^l \})$ \end{algorithmic} \end{algorithm} In practice, for speed and memory reasons we use an unbiased estimator \citep{hoffman2019robust} of APJN in \Cref{alg:j_general}, defined as \begin{align}\label{eq:j_est} \hat {\mathcal{J}}^{l, l+1} \equiv \frac{1}{N_v} \sum_{\mu=1}^{N_v} \left[\frac{1}{|B| N_l} \sum_{j=1}^{N_{l+1}} \sum_{k=1}^{N_{l+1}} \sum_{i=1}^{N_{l}} \sum_{x, x' \in B} \frac{\partial (v_{\mu j} \bar{h}^{l+1}_j(x'))}{\partial \bar{h}^{l}_i(x)} \frac{\partial (v_{\mu k} \bar{h}^{l+1}_k(x'))}{\partial \bar{h}^{l}_i(x)} \right] \,, \end{align} where, for each $\mu$, $v_{\mu}$ is a random vector with unit Gaussian components $v_{\mu i}$. The Jacobian-Vector Product (JVP) structure in the estimator speeds up the computation by a factor of $N_{l+1} / N_v$ and consumes less memory, at the cost of introducing some noise. In \Cref{sec:auto} we analyze $\texttt{AutoInit}$ for multi-layer perceptron (MLP) networks. Then we discuss the problem of exploding and vanishing gradients of the tuning itself, and derive bounds on the learning rate for ReLU or linear MLPs. In \Cref{sec:bn} we extend the discussion to BatchNorm and provide a strategy for using $\texttt{AutoInit}$ for a general network architecture. In \Cref{sec:exp} we provide experimental results for more complex architectures: VGG19\_BN and ResMLP-S12. \section{AutoInit for MLP networks} \label{sec:auto} MLPs are described by the following recurrence relation for preactivations \begin{align}\label{eq:mlp_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} W^{l+1}_{ij} \phi(h^l_j(x)) + b^{l+1}_i \,.
\end{align} Here $x$ is an input vector, weights $W^{l+1}_{ij} \sim \mathcal N(0, \sigma_w^2/N_l)$ and biases $b^{l+1}_i \sim \mathcal N(0, \sigma_b^2)$ are collectively denoted as $\theta^{l+1}$. We assume $\phi$ is a Lipschitz activation function throughout this paper. For a network with $L$ hidden layers, in the infinite width limit $N_l \rightarrow \infty$, the preactivations $\{h^l_i(x) \,|\, 1 \leq l \leq L,\, 1 \leq i \leq N_l\}$ are Gaussian Processes (GPs). The distribution of preactivations is then determined by the Neural Network Gaussian Process (NNGP) kernel \begin{align} \mathcal K^{l}(x, x') = \mathbb E_{\theta} \left[ h^l_i(x) h^l_i(x') \right] \,, \end{align} whose value is independent of the neuron index $i$. The NNGP kernel can be calculated recursively via \begin{align} \mathcal K^{l+1}(x, x') = \sigma_w^2 \mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} \left[\phi\left(h_i^l(x)\right) \phi\left(h_i^l(x')\right) \right] + \sigma_b^2 \,. \end{align} Note that we have replaced the average over parameter initializations $\mathbb{E}_\theta[\cdot]$ with an average over preactivation-distributions $\mathbb E_{h_i^l(x), h_i^l(x') \sim \mathcal N(0, \mathcal K^l(x, x'))} [\cdot]$; the two are interchangeable in the infinite width limit \citep{lee2018deep, roberts2021principles}. Critical initialization of such a network is defined according to \Cref{def:critical}. In practice, we define a twin network with extra parameters; for MLP networks the twin preactivations can be written as \begin{align}\label{eq:twin_preact} h^{l+1}_i(x) = \sum_{j=1}^{N_l} a_W^{l+1} W^{l+1}_{ij} \phi(h^l_j(x)) + a_b^{l+1} b^{l+1}_i \,, \end{align} where $a_{\theta}^{l+1} \equiv \{a^{l+1}_W, a^{l+1}_b\}$ are auxiliary parameters that will be tuned by \Cref{alg:j_train}.
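A twin forward pass differs from the ordinary one only by the per-layer scalars $a_W^l$, $a_b^l$ multiplying the frozen weights and biases. A minimal numpy sketch of \eqref{eq:twin_preact} (we feed the raw input to the first layer, which is a choice on our part):

```python
import numpy as np

rng = np.random.default_rng(2)

def twin_forward(x, Ws, bs, a_W, a_b, phi=lambda z: np.maximum(z, 0.0)):
    """Twin-network preactivations: frozen (W, b), tunable scalars (a_W, a_b)."""
    h = x
    for l, (W, b) in enumerate(zip(Ws, bs)):
        h = a_W[l] * W @ (phi(h) if l > 0 else h) + a_b[l] * b
    return h

# toy two-layer example
x = rng.standard_normal(4)
Ws = [0.5 * rng.standard_normal((4, 4)) for _ in range(2)]
bs = [np.zeros(4) for _ in range(2)]
```

With all $a_W^l = a_b^l = 1$ this reduces to the ordinary forward pass; rescaling $a_W^l$ rescales the corresponding layer's effective $\sigma_w$.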
\begin{algorithm}[h] \caption{\texttt{AutoInit} for MLP (SGD)} \label{alg:j_train} \begin{algorithmic} \State {\textbf{Input:}} Model $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, Loss function $\mathcal L(\{\mathcal J^{l,l+1}\}_{l=1}^{L})$, $T$, $\epsilon$, and $\eta$. \State \textbf{Set} $t=0$, $\{a_W^l(0)=1\}$ and $\{a_b^l(0)=1\}$ \State \textbf{Evaluate} $\mathcal L(0)$ \While {$0 \leq t < T$ and $\mathcal L(t) > \epsilon$} \State $a^l(t+1) = a^l(t) - \eta \nabla_{a^l} \mathcal L(t)$ \State \textbf{Evaluate} $\mathcal L(t+1)$ \EndWhile \State \textbf{Return} $\mathcal M(\{a^l_W(t) \sigma_w, \, a^l_b(t) \sigma_b, 1,\, 1 \;| \; \forall 1 \leq l \leq L \})$ \end{algorithmic} \end{algorithm} In \Cref{alg:j_train}, one may also return $\mathcal M(\{\sigma_w, \sigma_b, a_W^l(t), a_b^l(t) \;| \; \forall 1 \leq l \leq L \})$, while freezing all $a^l_{\theta}$. However, this leads to different training dynamics while updating weights and biases. Alternatively, one can leave the auxiliary parameters trainable, but in practice this leads to unstable training dynamics. \paragraph{Loss function} The choice of loss function $\mathcal L$ is important. We will use the following loss \begin{align}\label{eq_loss_sq_J} \mathcal L_{\log} = \frac{1}{2} \sum_{l=1}^L \left[\log(\mathcal J^{l, l+1})\right]^2 \,, \end{align} We will refer to \eqref{eq_loss_sq_J} as Jacobian Log Loss (JLL). This definition is inspired by the factorization property \eqref{eq:factor}, which allows one to optimize each of the partial Jacobian norms independently. Thus the tuning dynamics is less sensitive to the depth. One could naively use $\log(\mathcal J^{0, L+1})^2$ as a loss function, however optimization will encounter the same level of exploding or vanishing gradients problem as \eqref{eq:grad}. One may worry that the factorization property will be violated for $t>0$, due to the possible correlation across all $\{a^l(t)\}$. 
It turns out that the correlation introduced by \Cref{alg:j_train} does not change the fact that all weights and biases are iid, ensuring that \eqref{eq:factor} holds for any $t \geq 0$. Another choice for the loss is Jacobian Square Loss (JSL), defined as $\mathcal L_2 = \frac{1}{2} \sum_{l=1}^L \left(\mathcal J^{l, l+1} - 1 \right)^2$. However JSL has poor convergence properties when $\mathcal J^{l, l+1} \gg 1$. One may further restrict the forward pass by adding terms that penalize the difference between $\mathcal K^l(x, x)$ and $\mathcal K^{l+1}(x,x)$. For brevity, we leave these discussions for the appendix. \paragraph{Exploding and Vanishing Gradients} While the objective of \Cref{alg:j_train} is to solve the exploding and vanishing gradients problem, the \Cref{alg:j_train} itself has the same problem, although not as severe. Consider optimizing MLP networks using $\mathcal L_{\log}$, where the forward pass is defined by \eqref{eq:twin_preact}. Assuming the input data $x$ is normalized, the SGD update (omit x) of $a^{l}_{\theta}$ at time $t$ can be written as \begin{align}\label{eq:a_update} a_{\theta}^{l+1}(t+1) - a_{\theta}^{l+1}(t) = - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{\theta}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) \end{align} For a deep neural network, i.e. $|L - l| \gg 1$ holds for some $l$, the depth dependent term of \eqref{eq:a_update} can lead to exploding or vanishing gradients problems. We will show next that this is not the familiar exploding or vanishing gradients problem. First we would like to explain the vanishing gradients problem for $a_W^{l+1}$. Rewrite the right hand side of \eqref{eq:a_update} as \begin{align}\label{eq:iso} - \eta \sum_{l' \geq l}^L \frac{\partial \log \mathcal J^{l', l'+1}(t)}{\partial a_{W}^{l+1}(t)} \log \mathcal J^{l', l'+1}(t) = - \eta \frac{2}{a_W^{l+1}(t)} \log \mathcal J^{l, l+1}(t) + (l' > l\; \mathrm{terms}) \,. 
\end{align} Vanishing gradients can only occur if the isolated term is exactly canceled by the other terms for all $t \geq 0$, which does not happen in practice. To discuss the exploding gradients problem for $a_W^{l+1}$ we consider the update of $a_W^{l+1}$ (omit t). Depth dependent terms can be written as \begin{align}\label{eq:tauto_aw} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_W^{l+1}} \log \mathcal J^{l', l'+1} = \sum_{l'>l}^L \left(\frac{4\chi^{l'}_{\Delta}}{a_W^{l+1} \mathcal J^{l', l'+1}} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \mathcal K^{l+1}(x, x)\right) \log \mathcal J^{l', l'+1} \,, \end{align} where we have defined two new quantities $\chi^{l'}_{\Delta} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi''(h^{l'}_i) \phi''(h^{l'}_i) + \phi'(h^{l'}_i) \phi'''(h^{l'}_i) \right]$ and $\chi^{l'}_{\mathcal K} \equiv (a_W^{l'+1} \sigma_w)^2 \mathbb E_{\theta} \left[\phi'(h^{l'}_i) \phi'(h^{l'}_i) + \phi(h^{l'}_i) \phi''(h^{l'}_i) \right]$. We note that the exploding gradients problem for $a_W^{l+1}$ in $\texttt{AutoInit}$ is not severe for commonly used activation functions: \begin{itemize} \item $\tanh$-like bounded odd activation functions: $\chi_{\mathcal K}^{l'} \leq 1$ holds and $\mathcal K^l(x,x)$ saturates to a constant for large $l$. Thus the divergence problem of \eqref{eq:tauto_aw} is less severe than the one of \eqref{eq:grad} when $\mathcal J^{l', l'+1} > 1$. \item $\mathrm{ReLU}$: $\chi^{l'}_{\Delta}=0$. \item $\mathrm{GELU}$: The sum in \eqref{eq:tauto_aw} scales like $O(L \prod_{\ell=1}^L \chi^{\ell}_{\mathcal K})$ for large $L$, which may lead to worse exploding gradients than \eqref{eq:grad} for a reasonable $L$. Fortunately, for $\chi^{l'}_{\mathcal K} > 1$ cases, $\chi_{\Delta}^{l'}$ is close to zero. As a result, we find numerically that the contribution from \eqref{eq:tauto_aw} is very small. \end{itemize} For $a_b^{l+1}$, there is no isolated term like the one in \eqref{eq:iso}. 
Then the update of $a_b^{l+1}$ is proportional to \begin{align}\label{eq:tauto_ab} \sum_{l'>l}^L \frac{\partial \log \mathcal J^{l',l'+1}}{\partial a_b^{l+1}} \log \mathcal J^{l, l+1} = \sum_{l'>l}^L \left(\frac{4 a_b^{l+1}}{\mathcal J^{l', l'+1}} \chi^{l'}_{\Delta} \chi^{l'-1}_{\mathcal K} \cdots \chi^{l+1}_{\mathcal K} \sigma_b^2 \right) \log \mathcal J^{l, l+1} \,. \end{align} Comparing \eqref{eq:tauto_ab} and \eqref{eq:tauto_aw}, it is clear that the exploding gradients problem for $a_b^{l+1}$ is the same as that for $a_W^{l+1}$, hence not severe for common activation functions. The vanishing gradients problem is seemingly more serious, especially for $\sigma_b=0$. However, the vanishing gradients for $a_b^{l+1}$ do not prevent \texttt{AutoInit} from reaching a critical initialization: \begin{itemize} \item For $\sigma_b > 0$, as $a_W^{l+1}$ gets updated, the update in \eqref{eq:tauto_ab} gets larger with time $t$. \item For $\sigma_b=0$ the phase boundary is at $\sigma_w \geq 0$, which can be reached by $a_W^{l+1}$ updates. \end{itemize} \subsection{Linear and ReLU networks} In general, it is hard to predict a good learning rate $\eta$ for \Cref{alg:j_train}. However, for ReLU (and linear) networks, we can estimate the optimal learning rates. We will discuss ReLU in detail. Since $a_b^l$ cannot receive updates in this case, we only discuss updates for $a_W^l$. Different APJNs $\{\mathcal J^{l,l+1}\}$ for ReLU networks evolve in time independently according to \begin{align}\label{eq:relu_jupdate} \sqrt{\mathcal J^{l,l+1}(t+1)} - \sqrt{\mathcal J^{l,l+1}(t)} = -\eta \frac{\sigma_w^2}{\sqrt{\mathcal J^{l,l+1}(t)}} \log \mathcal J^{l,l+1}(t) \,. \end{align} Then one can show that for any time $t$: \begin{align}\label{eq:eta_t} \eta_t < & \min_{1 \leq l \leq L} \left\{\frac{2\left( \sqrt{\mathcal J^{l,l+1}(t)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(t)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(t) } \right\} \end{align} guarantees convergence.
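Since \eqref{eq:relu_jupdate} is a scalar map, it can be iterated directly to observe the convergence; the sketch below (our own helper names) also evaluates the per-layer bound in \eqref{eq:eta_t}. We deliberately pick $\eta$ well inside the bound, as values near the upper end can make the iteration oscillate:

```python
import numpy as np

def relu_apjn_update(J, sigma_w2, eta):
    # one SGD step of the sqrt(J^{l,l+1}) update for a ReLU MLP with JLL
    s = np.sqrt(J) - eta * sigma_w2 / np.sqrt(J) * np.log(J)
    return s**2

def eta_bound(J, sigma_w2):
    # per-layer learning-rate bound guaranteeing convergence
    return 2 * (np.sqrt(J) - 1) * np.sqrt(J) / (sigma_w2 * np.log(J))

J, sigma_w2, eta = 4.0, 2.0, 0.2   # eta chosen well below the bound
for _ in range(100):
    J = relu_apjn_update(J, sigma_w2, eta)
```

After a hundred steps $\mathcal J^{l,l+1}$ has converged to $1$, i.e. the layer has been driven to criticality.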
In this case, the value of $\mathcal J^{l, l+1}(t)$ can be used to create a scheduler for \Cref{alg:j_train}. Moreover, one can solve \eqref{eq:relu_jupdate} and find a learning rate that allows \Cref{alg:j_train} to converge in one step: \begin{align}\label{eq:1step_lr} \eta^l_{\mathrm{1-step}} = \frac{\left( \sqrt{\mathcal J^{l,l+1}(0)} - 1 \right) \sqrt{\mathcal J^{l,l+1}(0)} }{\sigma_w^2 \log \mathcal J^{l,l+1}(0) } \,. \end{align} Next, we study the dynamics of the optimization while using a single learning rate $\eta$. We estimate the allowed maximum learning rate $\eta_0$ at $t=0$ using $\mathcal J^{l, l+1}(0) = (a_W^{l+1} \sigma_w)^2 / 2$: \begin{align}\label{eq:eta_0_jl} \eta_0 = \frac{\left(a_W^{l+1}\sigma_w - \sqrt{2}\right) a_W^{l+1}}{\sigma_w \left(\log \left[(a_W^{l+1}\sigma_w)^2 \right] - \log 2\right)} \,. \end{align} In \Cref{fig:relu_jac}, we check our results against \Cref{alg:j_train}. All $\mathcal J^{l,l+1}(t)$ values plotted in the figure agree with the values we obtained by iterating \eqref{eq:relu_jupdate} for $t$ steps. The gap between $\eta_0$ and the trainable regions can be explained by analyzing \eqref{eq:eta_t}. Assume that at time $t$, $|\mathcal J^{l,l+1}(t) - 1| < |\mathcal J^{l,l+1}(0) - 1|$ holds. For $\mathcal J^{l,l+1} < 1$, if we use a learning rate $\eta$ that satisfies $\eta_0 < \eta < \eta_t$, there is still a chance that \Cref{alg:j_train} can converge. For $\mathcal J^{l,l+1} > 1$, if $\eta_0 > \eta > \eta_t$ holds, \Cref{alg:j_train} may diverge at a later time. A similar analysis for JSL is performed in the appendix. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{Figures/relu_jacbackprop_jl.pdf} \caption{$\mathcal J^{l, l+1}(t)$ plot for $L=10$, $N_l=500$ ReLU MLP networks, initialized with $a_W^l=1$.
From left to right: 1) $\mathcal{J}^{l, l+1}(t=1)$ values are obtained by tuning with \Cref{alg:j_train} using $\eta_{\mathrm{1-step}}$ with JLL; 2) we scan the $\eta$-$\sigma_w$ plane using $\sigma_b=0$ networks and tune $\mathcal J^{l, l+1}(t)$ using \Cref{alg:j_train} with JLL for $980$ steps. Only points with $0.8< \mathcal J^{l,l+1} <1.25$ are plotted. All networks are trained with the normalized CIFAR-10 dataset, $|B|=256$.} \label{fig:relu_jac} \end{figure} \section{BatchNorm, Residual Connections and General Strategy} \label{sec:bn} \paragraph{BatchNorm and Residual Connections} For MLP networks, the APJN value is only a function of $t$ and is independent of $|B|$. This property holds except when there is a BatchNorm (BN). We consider a Pre-BN MLP network with residual connections. The preactivations are given by \begin{align}\label{eq:bnmlp_preact} h^{l+1}_{x; i} = \sum_{j=1}^N a_W^{l+1} W^{l+1}_{ij} \phi(\tilde h^l_{x; j}) + a_b^{l+1} b^{l+1}_i + \mu h^l_{x;i} \,, \end{align} where we label different inputs with indices $x,x^\prime, \cdots$, and $\mu$ quantifies the strength of the residual connections (a common choice is $\mu=1$). At initialization, the normalized preactivations are defined as \begin{align} \tilde h^l_{x; i} = \frac{h^l_{x; i} - \frac{1}{|B|}\sum_{x' \in B} h^l_{x'; i} }{\sqrt{\frac{1}{|B|} \sum_{x' \in B} \left( h^l_{x'; i} \right)^2 - \left(\frac{1}{|B|} \sum_{x' \in B} h^l_{x'; i}\right)^2 }} \,. \end{align} The change in batch statistics leads to non-trivial $\mathcal J^{l,l+1}$ values, which can be approximated using \Cref{conj:bn}.
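For intuition, the normalization in the definition above acts per neuron across the batch: subtract the batch mean and divide by the batch standard deviation. A minimal pure-Python sketch (with hypothetical preactivation values, not from the experiments):

```python
def batch_normalize(h):
    """Normalize preactivations h^l_{x;i} of one neuron i across a batch,
    as in the definition of tilde h^l_{x;i}: subtract the batch mean and
    divide by the batch standard deviation."""
    B = len(h)
    mean = sum(h) / B
    second_moment = sum(v * v for v in h) / B
    std = (second_moment - mean * mean) ** 0.5
    return [(v - mean) / std for v in h]

h = [0.3, -1.2, 2.5, 0.0]   # hypothetical preactivations, |B| = 4
th = batch_normalize(h)
B = len(th)
print(abs(sum(th) / B) < 1e-12)                       # zero batch mean -> True
print(abs(sum(v * v for v in th) / B - 1.0) < 1e-12)  # unit batch variance -> True
```

By construction the normalized preactivations have zero mean and unit variance over the batch, which is the change in batch statistics referred to above.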
\begin{conjecture}[APJN with BatchNorm]\label{conj:bn} In the infinite width limit and at large depth $l$, the APJN of Pre-BN MLPs \eqref{eq:bnmlp_preact} converges to a deterministic value determined by the NNGP kernel as $|B| \rightarrow \infty$: \begin{align} \mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} & (a_W^{l+1} \sigma_w)^2 \mathbb E_{\tilde h^l_{x;j} \sim \mathcal N(0, 1)} \left[\phi'(\tilde h^l_{x;j}) \phi'(\tilde h^l_{x;j}) \right] \frac{1}{\mathcal K^l_{xx} - \mathcal K^l_{xx'}} + \mu^2 \,, \end{align} where the actual value of indices $x'$ and $x$ is not important, as long as $x \neq x'$. \end{conjecture} \begin{remark} Under the conditions of \Cref{conj:bn}, $\mathcal J^{l, l+1} \xrightarrow{|B| \rightarrow \infty} 1 + O(l^{-1})$ if $\mu=1$. The finite $|B|$ correction is further suppressed by $l^{-1}$. \end{remark} In \Cref{fig:bn_relu} we show numerical results that support our conjecture; empirically, the finite $|B|$ corrections are negligible for $|B| \geq 128$. Analytical details are in the appendix. Similar results without residual connections have been obtained for finite $|B|$ by \citet{yang2018mean}. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{Figures/bn_relu_mu1.pdf} \caption{$\mathcal J^{l,l+1}(0)$ phase diagrams for $|B|=256$ in the $\sigma_b$-$\sigma_w$ plane ($\mu=0$ and $\mu=1$); $\mathcal J^{l, l+1}$-$|B|$ plot. From left to right: 1) Pre-BN MLP networks with $\mu=0$ are everywhere chaotic; 2) Pre-BN MLP networks with $\mu=1$ are critical everywhere; 3) For $|B|\geq 128$, the finite $|B|$ corrections are negligible. In all plots we use $L=30$, $N_l=500$, $a_W^l(0)=1$ and average over 50 initializations.} \label{fig:bn_relu} \end{figure} \paragraph{General Strategy} For general network architectures, we propose the following strategy for using \Cref{alg:j_general} with normalized inputs: \begin{itemize} \item If the network does not have BatchNorm, use the algorithm with $|B|=1$.
\item If the network has BatchNorm, and the user has enough resources, use the algorithm with the $|B|$ that will be used for training. When $|B|$ is large, one should make $\mathcal J^{l,l+1}$ vs. $|B|$ plots like the one in \Cref{fig:bn_relu} and then choose a $|B|$ that requires less computation. \item When resources are limited, one can use a non-overlapping set $\{\mathcal{J}^{l, l+k}\}$ with $k>1$ to cover the whole network. \end{itemize} The computational cost of the algorithm depends on $k$ and $|B|$. \section{Experiments} \label{sec:exp} In this section, we use a modified version of $\mathcal L_{\log}$, where we further penalize the ratio between NNGP kernels from adjacent layers. The Jacobian-Kernel Loss (JKL) is defined as: \begin{align}\label{eq:jkle} \mathcal L_{\mathcal J \mathcal K\log} = \frac{1}{2} \sum_{l=0}^{L+1} \left[\log(\mathcal J^{l, l+1})\right]^2 + \frac{\lambda}{2} \sum_{l=0}^{L+1} \left[\log\left (\frac{\mathcal K^{l+1}(x, x)}{\mathcal K^l(x,x)} \right)\right]^2 \,, \end{align} where we introduced an extra hyperparameter $\lambda$ to control the penalization strength. We also included the input and output layers. Both APJNs and NNGP kernels are calculated using flattened preactivations. \subsection{ResMLP} ResMLP \citep{touvron2021resmlp} is an architecture for image recognition built entirely on MLPs. It offers competitive performance in both image recognition and machine translation tasks. The architecture consists of cross-channel and cross-patch MLP layers, combined with residual connections. The presence of residual connections and the absence of normalization techniques such as LayerNorm \citep{ba2016layer} or BatchNorm \citep{ioffe2015batch} leave ResMLP initialized off criticality. To mitigate this issue, the ResMLP architecture utilizes LayerScale \citep{touvron2021cait}, which multiplies the output of the residual branch by a trainable matrix, initialized with small diagonal entries.
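Before turning to the individual experiments, a minimal sketch (plain Python, with made-up values rather than measured ones) of how the loss \eqref{eq:jkle} is assembled from lists of measured APJNs and NNGP kernel diagonals:

```python
import math

def jkl_loss(J, K, lam):
    # eq. (jkle): 0.5 * sum_l [log J^{l,l+1}]^2
    #           + (lam / 2) * sum_l [log(K^{l+1}(x,x) / K^l(x,x))]^2
    jac_term = 0.5 * sum(math.log(j) ** 2 for j in J)
    ker_term = 0.5 * lam * sum(
        math.log(K[l + 1] / K[l]) ** 2 for l in range(len(K) - 1)
    )
    return jac_term + ker_term

# at criticality all APJNs equal 1 and the kernel is constant: zero loss
print(jkl_loss([1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0], lam=0.5))  # -> 0.0

# off criticality the loss is positive
print(jkl_loss([math.e, 1.0], [1.0, 1.0, 1.0], lam=0.5))  # -> 0.5
```

The loss vanishes exactly at the critical point and penalizes both exploding/vanishing Jacobians and a drifting forward-pass kernel, with $\lambda$ trading off the two terms.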
\paragraph{CIFAR-10} Here we obtain a critical initialization for ResMLP-S12 using \Cref{alg:j_train} with loss \eqref{eq:jkle}, with $a^l_{\theta}$ introduced for all layers. In our initialization, the ``smallness'' is distributed across all parameters of the residual block, including those of the linear, affine normalization and LayerScale layers. As we show in \Cref{fig:resmlp}, Kaiming initialization is far from criticality. \texttt{AutoInit} finds an initialization with almost identical $\{\mathcal J^{l, l+1} \}$ and similar $\{\mathcal K^{l, l+1}(x, x)\}$ compared to the prescription proposed by \citet{touvron2021resmlp}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/resmlp_comparison.pdf} \caption{From left to right: 1) and 2) Comparing $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ for ResMLP-S12 for Kaiming, original and \texttt{AutoInit} initializations. Depth $l$ is equal to the number of residual connections. The network function in the \texttt{AutoInit} case is very close to identity at initialization. 3) Training and validation accuracy. Both the original and \texttt{AutoInit} models are trained on the CIFAR-10 dataset for 600 epochs using the \texttt{LAMB} optimizer \citep{You2020Large} with $|B|=256$. The learning rate is decreased by a factor of 0.1 at 450 and 550 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:resmlp} \end{figure} \paragraph{ImageNet \citep{liILSVRC15}} We report $74.0\%$ top-1 accuracy for ResMLP-S12 initialized using \texttt{AutoInit}, whereas the top-1 accuracy reported in \citep{touvron2021resmlp} for the same architecture is $76.6\%$. The model has $15$ million parameters. We used a setup similar to the one in the original paper, which is based on the timm library \citep{rw2019timm} under the Apache-2.0 license \citep{apachev2}.
However, we made the following modifications in our training: 1) We use learning rate $\eta=0.001$ and $|B|=1024$. 2) We use mixed precision. 3) We do not use \texttt{ExponentialMovingAverage}. The training was performed on two NVIDIA RTX 3090 GPUs and took around $3.5$ days to converge (400 epochs). The auto-initialized model is obtained by tuning the Kaiming initialization using \Cref{alg:j_train} with $\mathcal L_{\mathcal J \mathcal K\log}(\lambda=0.5)$, $\eta=0.03$ and $|B|=32$ for 500 steps. \subsection{VGG} VGG \citep{simonyan2014very} is an early SOTA architecture, which was notoriously difficult to train before Kaiming initialization was invented. The BatchNorm variant $\mathrm{VGG19\_BN}$ further improves training speed and performance compared to the original version. The PyTorch version of VGG \citep{NEURIPS2019_9015} is initialized with $\mathrm{fan\_out}$ Kaiming initialization \citep{he2015delving}. In \Cref{fig:bn_relu} we show that BatchNorm makes Kaiming-initialized ReLU networks chaotic. We obtain a close-to-critical initialization using \Cref{alg:j_train} for $\mathrm{VGG19\_BN}$, where we introduce the auxiliary parameters $a^l_{\theta}$ for all BatchNorm layers. $\mathcal J^{l, l+1}$ is measured by the number of composite (Conv2d-BatchNorm-ReLU) blocks or MaxPool2d layers. We compare $\mathcal J^{l, l+1}$, $\mathcal K^l(x,x)$ and accuracies on the CIFAR-10 dataset \citep{krizhevsky2009learning} between the auto-initialized model and the PyTorch one; see \Cref{fig:vgg}. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{Figures/vgg_comparison.pdf} \caption{From left to right: 1) and 2) comparing $\mathcal J^{l, l+1}$ and $\mathcal K^l(x,x)$ between the PyTorch version of $\mathrm{VGG19\_BN}$ and the \texttt{AutoInit} version, where we ensure $\mathcal J^{l, l+1}=1$ with a high priority ($\lambda=0.05$); 3) training and validation accuracy.
We train both models on the CIFAR-10 dataset using SGD with $\mathrm{momentum}=0.9$ and $|B|=256$ for 300 epochs, where we decrease the learning rate by a factor of 0.1 at 150 and 225 epochs. Training accuracy is measured on training samples with Mixup $\alpha=0.8$. Both models interpolate the original training set.} \label{fig:vgg} \end{figure} \section{Conclusions} \label{sec:conclu} In this work we have introduced an algorithm, \texttt{AutoInit}, that allows one to initialize an arbitrary feed-forward deep neural network to criticality. \texttt{AutoInit} is an unsupervised learning algorithm that forces all nearby partial Jacobians to have unit norm by minimizing the loss function \eqref{eq_loss_sq_J}. A slight variation of \texttt{AutoInit} also tunes the forward pass to ensure that gradients in all layers of a DNN are well-behaved. To gain some intuition about the algorithm, we have solved the training dynamics for MLPs with ReLU activation and discussed the choice of hyperparameters for the tuning procedure that ensures its convergence. Then we have evaluated the performance of \texttt{AutoInit}-initialized networks against initialization schemes used in the literature. We considered two examples: the ResMLP architecture and VGG. The latter was notoriously difficult to train at the time it was introduced. \texttt{AutoInit} finds a good initialization (somewhat close to Kaiming) and ensures trainability. ResMLP uses a variation of the ReZero initialization scheme that puts it close to the dynamical isometry condition. \texttt{AutoInit} finds a good initialization that appears very different from the original; however, the network function is again very close to the identity map at initialization. In both cases the performance of the \texttt{AutoInit}-initialized networks is competitive with the original models. We emphasize that \texttt{AutoInit} removes the necessity for trial-and-error search for a working initialization.
We expect that \texttt{AutoInit} will be useful in automatic neural architecture search tasks as well as for general exploration of new architectures. \begin{ack} T.H., D.D. and A.G. were supported, in part, by the NSF CAREER Award DMR-2045181 and by the Salomon Award. \end{ack} \bibliographystyle{plainnat}
\section{Introduction} \par Jackiw--Teitelboim (JT) gravity \cite{Teitelboim83, Jackiw84} is a two-dimensional (2D) topological quantum gravitational theory that is considered on Riemann surfaces, possibly with boundaries. When the Riemann surfaces have asymptotic boundaries, the quantum fluctuations or ``wiggles'' along the asymptotic boundaries are controlled by the Schwarzian theory \cite{Kitaevtalk, MS2016, Kitaev2017}. As the one-dimensional Schwarzian theory describes the low-energy limit of the Sachdev--Ye--Kitaev (SYK) model \cite{Sachdev92, Kitaevtalk, Kitaev2017}, it relates JT gravity to SYK models. Moreover, JT gravity has been studied in the context of AdS/CFT correspondence \cite{Almheiri2014}. \par Recently, the duality of JT gravity and the matrix integral has been discussed in \cite{SSS2019}. Mirzakhani's recursion relation \cite{Mirzakhani2007} that computes the volume of the moduli of hyperbolic Riemann surfaces on the JT gravity side corresponds \cite{Eynard2007} to the Eynard--Orantin topological recursion \cite{Eynard2007tp} \footnote{The genus expansion of a Hermitian-matrix integral \cite{Eynard2004} is obtained via the topological recursion of Eynard and Orantin.} on the matrix-integral side; this correspondence is important in the discussion \cite{SSS2019} of the duality of the two theories. Recent studies of JT gravity include \cite{Blommaert2019, SSS2019, Cotler2019, Moitra2019, StanfordWitten2019, Okuyama2019, Almheiri2019, Marolf2020, OkuyamaSakai2020multi, Maxfield202006, Kimura2008, Momeni202009, Momeni2020, Alishahiha2020, Narayan2020, Moitra2021, Griguolo2021, Griguolo202106}. 
\vspace{5mm} \par The path integral in the JT gravity on Riemann surfaces with asymptotic boundaries is obtained by computing the path integral over the wiggles along the asymptotic boundaries of the surfaces \cite{Jensen2016, MSY2016, EMV2016}, together with the path integral over the moduli of the Riemann surfaces with the geodesic boundaries and the path integral over the ``trumpets'' connecting an asymptotic boundary (with which a boundary wiggle is associated) and a geodesic boundary \cite{SSS2019}. The volume of the moduli of hyperbolic Riemann surfaces with geodesic boundaries is known as the ``Weil--Petersson volume.'' \par The path integral over a ``trumpet'' connecting an asymptotic boundary and a geodesic boundary of a Riemann surface was computed in \cite{SSS2019}. Thus, the correlation function in JT gravity is obtained when the Weil--Petersson volumes are successfully computed. Although the computation of the Weil--Petersson volumes for {\it any} genus ($g$) and for any number ($n$) of the geodesic boundaries is possible in principle, it is not simple, as we discuss briefly. \par One of the goals in this study is to compute the higher-genus correlation functions with any number of the boundaries in JT gravity. Utilizing this result, we determine the higher-genus contributions to the resolvents on the matrix-integral side. We also compute the intersection numbers on the moduli of the hyperbolic Riemann surfaces of the large genera ($g$), with any number of boundaries. \par The computation results of the resolvents in this study might be a useful tool for confirming the duality of JT gravity and the matrix integral as discussed in \cite{SSS2019}. \par The genus expansion of the correlation function in JT gravity is as follows \cite{SSS2019}: \begin{equation} \label{genus expansion corr in intro} <Z(\beta_1) \ldots Z(\beta_n)>_c \simeq \sum^\infty _{g=0} Z_{g,n}(\beta_1, \ldots, \beta_n) \cdot e^{(-2g+2-n)S_0}. 
\end{equation} Here, $Z_{g,n}(\beta_1, \ldots, \beta_n)$ denotes the JT path integral for the Riemann surfaces with a topology of genus $g$ with $n$ asymptotic boundaries. The genus-$g$ partition function with $n$ boundaries, $Z_{g,n}(\beta_1, \ldots, \beta_n)$, can be expressed as follows \cite{SSS2019} \footnote{There is a normalization constant, $\alpha$, in the genus-$g$ partition functions \cite{SSS2019}. We adopt the convention to set $\alpha=1$ in this study.} : \begin{equation} Z_{g,n}(\beta_1, \ldots, \beta_n) = \prod^n_{i=1} \int^\infty_0 b_idb_i \, V_{g,n}(b_1, \ldots, b_n) \prod^n_{j=1} Z_{\rm Sch}^{\rm trumpet}(\beta_j, b_j), \end{equation} where $Z_{\rm Sch}^{\rm trumpet}$ denotes the contribution from the path integral over the trumpet connecting an asymptotic boundary and a geodesic boundary. $V_{g,n}(b_1, \ldots, b_n)$ denotes the Weil--Petersson volume of the moduli of the Riemann surfaces of genus $g$ with $n$ geodesic boundaries. $b_1, \ldots, b_n$ denote the lengths of the geodesic boundaries. The contribution from the trumpet, $Z_{\rm Sch}^{\rm trumpet}$, was computed in \cite{SSS2019}. As stated previously, one can compute the correlation function with $n$ boundaries by evaluating the values of the Weil--Petersson volumes, $V_{g,n}(b_1, \ldots, b_n)$. However, computing the higher-genus contributions of the correlation function by directly evaluating the Weil--Petersson volumes for large genera $g$ is difficult. \par Mirzakhani discovered a method to compute the Weil--Petersson volumes, $V_{g,n}$, for any genus, $g$, recursively \cite{Mirzakhani2007}. However, as genus $g$ becomes large, performing the actual computation becomes difficult. 
Although the evaluation of the Weil--Petersson volume is, in principle, possible for any genus $g$ by utilizing Mirzakhani's recursion formula \cite{Mirzakhani2007} iteratively, an expression of the Weil--Petersson volume, $V_{g,n}(b_1, \ldots, b_n)$, cannot thereby be obtained as a function of genus $g$ and the number $n$ of geodesic boundaries. Obtaining the expression of the Weil--Petersson volume as a function of $g$ and $n$ is generally not considered a straightforward problem \footnote{Several efforts have been made to obtain the asymptotic expressions \cite{Penner92, Zograf2008, Kimura2008} of the Weil--Petersson volumes, $V_{g,n}$, as functions of $g$ when $g$ is large.}. \par The asymptotic expressions for the Weil--Petersson volumes, $V_{g,n}(b_1, \ldots, b_n)$, as functions of $g$ and $n$ were obtained for large $g$ in \cite{Kimura2008} by utilizing partial differential equations \cite{DoNorbury, Do2008} that hold for the moduli of the hyperbolic Riemann surfaces. The use of the expressions of the Weil--Petersson volumes in \cite{Kimura2008} enables the direct evaluation of the higher-genus contributions to the correlation functions in JT gravity, avoiding the iterative steps of recursively computing genus-$g$ partition functions, which become difficult once $g$ becomes large. \par We compute the higher-genus contributions to the correlation function in JT gravity with any number of asymptotic boundaries by utilizing the volume evaluated in \cite{Kimura2008}. We also evaluate the intersection numbers of the line bundles on the moduli of the Riemann surfaces of a large genus. The results obtained in this study determine the expressions of the correlation functions and the intersection numbers at large genus as explicit functions of the genus $g$ of the surface and the number of boundaries, $n$; this may provide a clear outlook for studying their behavior.
\par The intersection numbers of the line bundles on the moduli of the Riemann surfaces were discussed in the context of topological gravity in \cite{Witten1990, Dijkgraaf2018}. The Witten conjecture \cite{Witten1990} asserted the equivalence of two models of 2D quantum gravitational theory, and the statement of the conjecture involved the intersection numbers of the line bundles on the moduli of the Riemann surfaces. A proof of the Witten conjecture was given in \cite{Kontsevich1992}. Proofs of this conjecture can also be found in \cite{Okounkov2000, Mirzakhani2007int, Kazarian2007}. \par We also estimate the higher-genus contributions to the resolvent on the matrix-integral side utilizing the large-$g$ asymptotics of the Weil--Petersson volumes in \cite{Kimura2008}, as stated earlier. \par The intersection numbers with one boundary were explicitly computed when the genus is large in \cite{Okuyama2019}, and correlation functions with two boundaries were explicitly computed in the low-temperature limit in \cite{OkuyamaSakai2020multi}. The authors in \cite{Okuyama2019, OkuyamaSakai2020multi} used the Korteweg--de Vries (KdV) hierarchy approach to calculate these; this is considerably different from the approach used in this study. \vspace{5mm} \par The remainder of this note is structured as follows. In section \ref{sec2}, we compute the higher-genus contributions to the correlation function with any number of the boundaries in JT gravity and discuss the physical consequences. We also evaluate the resolvent on the matrix-integral side, which may provide a tool for confirming the duality \cite{SSS2019} of JT gravity and the matrix integral. \par In section \ref{sec3}, we evaluate the intersection numbers of the moduli of the Riemann surfaces of large genera with any number of boundaries larger than or equal to two. In section \ref{sec4}, we present our concluding remarks and the unresolved problems.
\section{Correlation functions with multiple boundaries in JT gravity and resolvent in the dual matrix integral} \label{sec2} \subsection{Correlation functions with $n$ number of boundaries in JT gravity} \label{subsec2.1} We compute the correlation functions in JT gravity with any number of boundaries. To facilitate the analysis, we limit the discussion to the higher-genus contributions to the correlation functions. An advantage of imposing the constraint is as follows: The correlation functions in JT gravity can be computed by evaluating the Weil--Petersson volume, which is the volume of the moduli of the hyperbolic Riemann surfaces, as discussed in \cite{SSS2019}. Mirzakhani established a method to recursively compute the Weil--Petersson volumes for any genus, $g$, of the Riemann surfaces \cite{Mirzakhani2007}; however, the difficulty of the computation using Mirzakhani's recursive method \cite{Mirzakhani2007} increases as genus $g$ becomes large. Thus, precisely expressing the Weil--Petersson volume, $V_{g,n}$, as a function of genus $g$ and the number of geodesic boundaries ($n$) is presently considerably difficult. However, when the discussion is limited to the region where genus $g$ of the Riemann surface is large, the Weil--Petersson volume, $V_{g,n}$, can be expressed as a function of $g$ and $n$, as obtained in \cite{Kimura2008}. The result in \cite{Kimura2008} can be utilized to compute the higher-genus contributions to the correlation function in JT gravity for any number of asymptotic boundaries.
\par Under conditions $g>>b_i$, $i=1, \ldots, n$, and $g>>1$, where $b_i$s denote the lengths of the geodesic boundaries of a Riemann surface, the Weil--Petersson volume, $V_{g,n}(b_1, \ldots, b_n)$, of the moduli of the Riemann surface with genus $g$ and $n$ geodesic boundaries was evaluated to have the following expression to the leading order \cite{Kimura2008} \footnote{For $n=1$, the expression of the Weil--Petersson volume, $V_{g,1}(b)$, was obtained in \cite{SSS2019} under conditions $g >> b$ and $g >> 1$.} : \begin{equation} \label{asymptoticVgn in 2.1} V_{g,n}(b_1, \ldots, b_n) \sim \sqrt{\frac{2}{\pi}} 2^n \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \cdot \prod_{i=1}^n \frac{{\rm sinh}(\frac{b_i}{2})}{b_i}. \end{equation} Applying expression (\ref{asymptoticVgn in 2.1}), we compute the higher-genus correlation functions in JT gravity with any number of boundaries. \vspace{5mm} \par The connected correlation function of JT gravity on a Riemann surface of genus $g$ with $n$ asymptotic boundaries has genus expansion (\ref{genus expansion corr in intro}) \cite{SSS2019}, as noted in the introduction. As previously mentioned, $Z_{\rm Sch}^{\rm trumpet}(\beta, b)$ denotes the contribution from the path integral over the ``trumpet'' connecting an asymptotic wiggly boundary and a geodesic boundary of length $b$. The contribution, $Z_{\rm Sch}^{\rm trumpet}(\beta, b)$, was computed in \cite{SSS2019} as $\sqrt{\frac{\gamma}{2\pi \beta}}e^{-\frac{\gamma b^2}{2\beta}}$. \par First, we calculate the higher-genus correlation for $n=2$ to illustrate the proposed method for computing the higher-genus correlations in JT gravity for any number $n$ of boundaries.
\par For $n=2$, the genus expansion of the correlation function becomes as follows \cite{SSS2019} : \begin{equation} <Z(\beta_1) Z(\beta_2)>_c \simeq \sum^\infty _{g=0} Z_{g,2}(\beta_1, \beta_2) \cdot e^{-2g\,S_0}, \end{equation} where the genus-$g$ path integral, $Z_{g,2}$, is given as \cite{SSS2019} \begin{equation} \label{path int Zg2 in 2.1} Z_{g,2}(\beta_1, \beta_2) = \int^\infty_0 b_1db_1 \int^\infty_0 b_2db_2 \, V_{g,2}(b_1, b_2) Z_{\rm Sch}^{\rm trumpet}(\beta_1, b_1) Z_{\rm Sch}^{\rm trumpet}(\beta_2, b_2). \end{equation} We apply the expression for the Weil--Petersson volume, $V_{g,2}(b_1, b_2)$, deduced in \cite{Kimura2008}, which holds when genus $g$ is large, to path integral (\ref{path int Zg2 in 2.1}) to obtain the higher-genus contributions to the correlation function with two boundaries in JT gravity. \par The Weil--Petersson volume, $V_{g,2}(b_1, b_2)$, is given as follows under conditions $g>>b_1, b_2$ and $g>>1$ \cite{Kimura2008} : \begin{equation} \label{asymptotic Vg2 in 2.1} V_{g,2}(b_1, b_2) \sim \sqrt{\frac{2}{\pi}} 4 \, (4\pi^2)^{2g-1} \, \Gamma(2g-\frac{1}{2})\, \cdot \frac{{\rm sinh}(\frac{b_1}{2})}{b_1}\frac{{\rm sinh}(\frac{b_2}{2})}{b_2}. \end{equation} \par We substitute expression (\ref{asymptotic Vg2 in 2.1}) into (\ref{path int Zg2 in 2.1}) to compute the genus-$g$ partition function, $Z_{g,2}$, for a large genus, $g$ ($g>>1$), as follows \footnote{Owing to the presence of factors $Z_{\rm Sch}^{\rm trumpet}(\beta_1, b_1)$ and $Z_{\rm Sch}^{\rm trumpet}(\beta_2, b_2)$ in the integrand of (\ref{path int Zg2 in 2.1}), we expect that the integral over the region where $g>>b_1, b_2$ makes the dominant contribution.
Given this observation, we expect that the use of expression (\ref{asymptotic Vg2 in 2.1}) for $V_{g,2}(b_1, b_2)$ over the entire region in integral (\ref{path int Zg2 in 2.1}) approximates the exact result with high precision.} : \begin{equation} \label{Zg2 computation in 2.1} Z_{g,2}(\beta_1, \beta_2) = \sqrt{\frac{2}{\pi}} 4 \, (4\pi^2)^{2g-1} \, \Gamma(2g-\frac{1}{2})\, \sqrt{\frac{\gamma}{2\pi \beta_1}} \sqrt{\frac{\gamma}{2\pi \beta_2}} (\int^\infty_0 db_1 {\rm sinh}(\frac{b_1}{2})e^{-\frac{\gamma b_1^2}{2\beta_1}}) (\int^\infty_0 db_2 {\rm sinh}(\frac{b_2}{2})e^{-\frac{\gamma b_2^2}{2\beta_2}}). \end{equation} \par Factors $\int^\infty_0 db_i {\rm sinh}(\frac{b_i}{2})e^{-\frac{\gamma b_i^2}{2\beta_i}}$, $i=1,2$, in the computed genus-$g$ partition function, $Z_{g,2}(\beta_1, \beta_2)$, for $g>>1$ can be written in terms of the error function by completing the square in the exponent: $\int^\infty_0 db\, {\rm sinh}(\frac{b}{2})e^{-\frac{\gamma b^2}{2\beta}}=\sqrt{\frac{\pi \beta}{2\gamma}}\, e^{\frac{\beta}{8\gamma}}\, {\rm erf}\Big(\sqrt{\frac{\beta}{8\gamma}}\Big)$. For $\gamma=\beta$, this reduces to $\int^\infty_0 db {\rm sinh}(\frac{b}{2})e^{-\frac{ b^2}{2}}=e^\frac{1}{8}\sqrt{\frac{\pi}{2}}\, {\rm erf}(\frac{1}{2\sqrt{2}})$. These factors do not depend on genus $g$. \par Thus, we deduce that the higher-genus contributions to the correlation function with two boundaries can be expressed as follows: \begin{equation} 4\sqrt{\frac{2}{\pi}} \, \sqrt{\frac{\gamma}{2\pi \beta_1}} \sqrt{\frac{\gamma}{2\pi \beta_2}} (\int^\infty_0 db_1 {\rm sinh}(\frac{b_1}{2})e^{-\frac{\gamma b_1^2}{2\beta_1}}) (\int^\infty_0 db_2 {\rm sinh}(\frac{b_2}{2})e^{-\frac{\gamma b_2^2}{2\beta_2}}) \sum_{g>>1} (4\pi^2)^{2g-1} \, \Gamma(2g-\frac{1}{2})\cdot e^{-2g\,S_0}. \end{equation} \par The computation of the genus-$g$ partition function with general $n$ boundaries is analogous to the computation of the genus-$g$ partition function with two boundaries that we discussed earlier.
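The error-function representation of these boundary integrals is straightforward to verify numerically. The sketch below compares a direct quadrature of $\int^\infty_0 db\, {\rm sinh}(\frac{b}{2})e^{-\gamma b^2/2\beta}$ with the closed form $\sqrt{\pi\beta/2\gamma}\, e^{\beta/8\gamma}\, {\rm erf}(\sqrt{\beta/8\gamma})$ that follows from completing the square (the sample values of $\beta$ and $\gamma$ are arbitrary); for $\gamma=\beta$ it also reproduces the expression $e^{1/8}\sqrt{\pi/2}\,{\rm erf}(\frac{1}{2\sqrt 2})$ quoted in the text.

```python
import math

def closed_form(beta, gamma):
    # sqrt(pi*beta/(2*gamma)) * exp(beta/(8*gamma)) * erf(sqrt(beta/(8*gamma)))
    a = beta / (8.0 * gamma)
    return math.sqrt(math.pi * beta / (2.0 * gamma)) * math.exp(a) * math.erf(math.sqrt(a))

def quadrature(beta, gamma, bmax=40.0, n=400000):
    # simple midpoint rule; the integrand decays like a Gaussian
    db = bmax / n
    return sum(
        math.sinh(0.5 * (k + 0.5) * db)
        * math.exp(-gamma * ((k + 0.5) * db) ** 2 / (2.0 * beta))
        for k in range(n)
    ) * db

for beta, gamma in [(1.0, 1.0), (2.0, 0.7)]:
    assert abs(quadrature(beta, gamma) - closed_form(beta, gamma)) < 1e-6

# gamma = beta reproduces e^{1/8} sqrt(pi/2) erf(1/(2 sqrt 2))
print(abs(closed_form(1.0, 1.0)
          - math.exp(0.125) * math.sqrt(math.pi / 2.0)
          * math.erf(1.0 / (2.0 * math.sqrt(2.0)))) < 1e-12)
# -> True
```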
Utilizing expression (\ref{asymptoticVgn in 2.1}), we find that the genus-$g$ partition function with $n$ boundaries can be derived as follows when $g>>1$: \begin{equation} Z_{g,n}(\beta_1, \ldots, \beta_n) = \sqrt{\frac{2}{\pi}} 2^n \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \prod_{i=1}^n\sqrt{\frac{\gamma}{2\pi \beta_i}} \, \prod_{j=1}^n \int^\infty_0 db_j {\rm sinh}(\frac{b_j}{2})e^{-\frac{\gamma b_j^2}{2\beta_j}}. \end{equation} Similar to the case with two boundaries, the resulting function is the product of a coefficient that depends on genus $g$ and the variants of the error function. \par We deduce that the higher-genus contributions to the correlation function with $n$ boundaries are given as follows: \begin{equation} 2^n \sqrt{\frac{2}{\pi}} \, \prod_{i=1}^n\sqrt{\frac{\gamma}{2\pi \beta_i}} \, \big(\prod_{j=1}^n \int^\infty_0 db_j {\rm sinh}(\frac{b_j}{2})e^{-\frac{\gamma b_j^2}{2\beta_j}} \big) \cdot \sum_{g>>1} (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\cdot e^{(-2g+2-n)S_0}. \end{equation} \par From the computed correlation functions, we learn that the nonperturbative correction takes the form of $e^{-\frac{e^{S_0}}{4\pi^2}}$. This agrees with the results of the correction and the effect of the ZZ brane \cite{Zamolodchikov2001}, as discussed in \cite{Okuyama2019}. \par The proposed method applies to cases with two or more boundaries, i.e. $n\ge 2$. The higher-genus contributions to the correlation function with one boundary can also be deduced in a similar manner using the large-$g$ asymptotic of the Weil--Petersson volume with one boundary, $V_{g,1}(b)$, as deduced in \cite{SSS2019}. \par The lower-genus contributions to the correlation functions can be obtained from the Weil--Petersson volumes of the moduli of the Riemann surfaces of lower genera, which can be computed by directly applying Mirzakhani's recursion relation. 
For higher-genus contributions, the direct computation by applying Mirzakhani's recursion relation is challenging; therefore, the proposed method for computing the higher-genus contributions is important. \subsection{Resolvents on the matrix-integral side} \label{subsec2.2} The correlation functions of the resolvents in the double-scaled matrix integral as the dual of JT gravity have the following genus expansion \cite{SSS2019}: \begin{equation} \label{resolv genus exp in 2.2} <R(E_1)\ldots R(E_n)>_c = \sum^{\infty}_{g=0} R_{g,n}(E_1, \ldots, E_n)\cdot e^{(-2g-n+2)S_0}. \end{equation} Genus expansion (\ref{resolv genus exp in 2.2}) defines the multi-resolvent correlators, $R_{g,n}$, as discussed in \cite{SSS2019}. We compute the multi-resolvent correlators, $R_{g,n}$, when genus $g$ is large. This yields the large-genus contributions to the correlation functions of the resolvents, $<R(E_1)\ldots R(E_n)>_c$. \par Functions $W_{g,n}$ are defined as follows, as discussed in \cite{Eynard2014, SSS2019} : \begin{equation} \label{def W in 2.2} W_{g,n}(z_1, \ldots, z_n) = (-2)^n\cdot \prod^n_{i=1} z_i \cdot R_{g,n}(-z_1^2, \ldots, -z_n^2). \end{equation} As proved in \cite{Eynard2007}, function $W_{g,n}$ is given in terms of the Weil--Petersson volume, $V_{g,n}$, as follows: \begin{equation} \label{rel W and WP vol in 2.2} W_{g,n}(z_1, \ldots, z_n) = \prod^n_{i=1} \int_0^{\infty} b_i\, db_i \, e^{-b_i z_i} \, \cdot V_{g,n}(b_1, \ldots, b_n). \end{equation} For the region where $g>>1$, using equation (\ref{rel W and WP vol in 2.2}), we can compute $W_{g,n}$ by utilizing the large-$g$ asymptotic of the Weil--Petersson volume, $V_{g,n}$, deduced in \cite{Kimura2008}. Based on the definition of $W_{g,n}$ (\ref{def W in 2.2}), the multi-resolvent correlator, $R_{g,n}$, is deduced from the computed $W_{g,n}$.
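For the large-$g$ volume (\ref{asymptoticVgn in 2.1}), the factor $\frac{1}{b_i}$ cancels one power of $b_i$ in the measure of (\ref{rel W and WP vol in 2.2}), so the transform reduces to elementary integrals: for $z>\frac{1}{2}$, $\int^\infty_0 db\, {\rm sinh}(\frac{b}{2})\, e^{-bz} = \frac{2}{4z^2-1}$. A quick numerical check of this identity (sample values of $z$ chosen arbitrarily):

```python
import math

def lhs(z, bmax=80.0, n=800000):
    # midpoint quadrature of int_0^infty sinh(b/2) e^{-b z} db, valid for z > 1/2
    db = bmax / n
    return sum(
        math.sinh(0.5 * (k + 0.5) * db) * math.exp(-(k + 0.5) * db * z)
        for k in range(n)
    ) * db

def rhs(z):
    # closed form 2 / (4 z^2 - 1), i.e. the Laplace transform of sinh(b/2)
    return 2.0 / (4.0 * z * z - 1.0)

for z in [0.75, 1.0, 2.0]:
    assert abs(lhs(z) - rhs(z)) < 1e-6
print("closed form confirmed")
```

Setting $z_i$ and continuing $E_i=-z_i^2$ term by term then yields the pole structure $\prod_i (4E_i+1)^{-1}$ appearing in the multi-resolvent correlators.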
\par Substituting the large-$g$ asymptotic Weil--Petersson volume (\ref{asymptoticVgn in 2.1}) into (\ref{rel W and WP vol in 2.2}), we obtain $W_{g,n}$ as follows: \begin{eqnarray} \label{int W in 2.2} W_{g,n}(z_1, \ldots, z_n) & \sim \sqrt{\frac{2}{\pi}} 2^n \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \prod^n_{i=1} \int^{\infty}_0 db_i {\rm sinh}(\frac{b_i}{2})\, e^{-b_i z_i} \\ \nonumber & = 2^n \sqrt{\frac{2}{\pi}} \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \frac{(-2)^n}{\prod^n_{i=1}\big( 4(-z_i^2)+1 \big)}. \end{eqnarray} \par According to this result, in the region $g>>1$, the multi-resolvent correlator, $R_{g,n}$, is determined as follows: \begin{equation} R_{g,n}(E_1, \ldots, E_n) \sim 2^n \sqrt{\frac{2}{\pi}} \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \frac{1}{\prod^n_{i=1}(4E_i+1)} \frac{1}{\sqrt{(-1)^n \prod^n_{i=1} E_i}}. \end{equation} Therefore, we deduce that the higher-genus contributions to the correlation functions of the resolvents of the double-scaled matrix integral with $n$ boundaries can be given as follows: \begin{equation} \sum_{g>>1} 2^n \sqrt{\frac{2}{\pi}} \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \frac{1}{\prod^n_{i=1}(4E_i+1)} \frac{1}{\sqrt{(-1)^n \prod^n_{i=1} E_i}} \cdot e^{(-2g-n+2)S_0}. \end{equation} \par Similar to the statement presented in section \ref{subsec2.1}, the proposed method applies to cases with two or more boundaries. Using the large-$g$ asymptotic of the Weil--Petersson volume with one boundary, $V_{g,1}(b)$, as deduced in \cite{SSS2019}, we can obtain the higher-genus contributions to the resolvent with one boundary. \section{Intersection numbers with multiple boundaries} \label{sec3} \par 2D topological gravity is an intersection theory of line bundles on the moduli space, and the correlation functions are determined by the intersection theory.
The Weil--Petersson volume contains the information of the correlation functions \cite{Dijkgraaf2018}, and the intersection numbers of the cohomology classes associated with the first Chern classes of line bundles and the Weil--Petersson symplectic form can also be extracted from the Weil--Petersson volume. \par We determine the intersection numbers of the cohomology classes for the moduli of the Riemann surfaces with large genera and $n$ boundaries, where $n\ge 2$. \par As discussed in \cite{Dijkgraaf2018}, the following equality holds owing to a result in \cite{Mirzakhani2007int}: \begin{eqnarray} \label{rel volume and int number in 3} V_{g,n}(b_1, \ldots, b_n) = & \int_{\overline{\mathcal{M}}_{g,n}} {\rm exp}(2\pi^2\kappa_1+\frac{1}{2}\sum_{i=1}^n b_i^2 \psi_i) \\ \nonumber = & \sum_{3g-3+n\ge \sum_{i=1}^n m_i\ge 0} \frac{(2\pi^2)^{3g-3+n-\sum_{i=1}^n m_i}}{(3g-3+n-\sum_{i=1}^n m_i)!\prod_{i=1}^n m_i!}\, \prod_{i=1}^n (\frac{b_i^2}{2})^{m_i}\, <\kappa_1^{3g-3+n-\sum_{i=1}^n m_i} \, \prod_{i=1}^n \psi_i^{m_i}>. \end{eqnarray} \footnote{$\kappa_1$ denotes the first Miller--Morita--Mumford class, which is cohomologous to the Weil--Petersson symplectic form $\omega$ times $\frac{1}{2\pi^2}$, that is, $\kappa_1=\frac{\omega}{2\pi^2}$ \cite{Wolpert1983, Wolpert1986}.} We use $\mathcal{L}_i$ to denote the cotangent space at the $i$-th marked point of a closed Riemann surface of genus $g$. We use $\psi_i$ to denote the first Chern class of the cotangent space, $\mathcal{L}_i$, namely $\psi_i=c_1(\mathcal{L}_i)$. The $m_i$s are non-negative integers that satisfy the condition $3g-3+n\ge \sum_{i=1}^n m_i\ge 0$. 
Utilizing equality (\ref{rel volume and int number in 3}) and the large $g$ asymptotics of the Weil--Petersson volumes deduced in \cite{Kimura2008}, we compute the intersection numbers of the cohomology classes associated with the line bundles on the moduli of the Riemann surfaces with boundaries, under the conditions $g>>b_i$, $i=1, \ldots, n$, and $g>>1$. This is the regime in which our argument applies in this section. \par Substituting expression (\ref{asymptoticVgn in 2.1}) for the large $g$ asymptotic of the Weil--Petersson volume obtained in \cite{Kimura2008} into equality (\ref{rel volume and int number in 3}), we obtain the following relation \footnote{Relation (\ref{equality int num in 3}) holds only under $g>>b_i$, $i=1, \ldots, n$, and $g>>1$. The right-hand side contains finitely many terms in $b_i$, whereas the left-hand side (LHS) contains infinitely many terms in $b_i$; the relation does not hold when $b_i>>g$.}: \begin{eqnarray} \label{equality int num in 3} \sqrt{\frac{2}{\pi}} 2^n \, (4\pi^2)^{2g+n-3} \, \Gamma(2g+n-\frac{5}{2})\, \prod^n_{i=1} \frac{{\rm sinh}(\frac{b_i}{2})}{b_i} \sim & \\ \nonumber \sum_{3g-3+n\ge \sum_{i=1}^n m_i\ge 0} \frac{(2\pi^2)^{3g-3+n-\sum_{i=1}^n m_i}}{(3g-3+n-\sum_{i=1}^n m_i)!\prod_{i=1}^n m_i!}\, \prod_{i=1}^n (\frac{b_i^2}{2})^{m_i}\, <\kappa_1^{3g-3+n-\sum_{i=1}^n m_i} \, \prod_{i=1}^n \psi_i^{m_i}>. & \end{eqnarray} Because $\frac{{\rm sinh}(\frac{b}{2})}{b}$ has the following Taylor expansion: \begin{equation} \label{Taylor exp in 3} \frac{{\rm sinh}(\frac{b}{2})}{b} = \sum_{k=0}^\infty \frac{b^{2k}}{2^{2k+1}\cdot (2k+1)!}, \end{equation} the coefficient of $\prod_{i=1}^n b_i^{2m_i}$ on the LHS of (\ref{equality int num in 3}) can be determined from (\ref{Taylor exp in 3}). 
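The expansion coefficients in (\ref{Taylor exp in 3}) can be verified in a few lines of sympy (an added illustration, not in the original):

```python
import sympy as sp

b = sp.symbols('b')

# Taylor expansion of sinh(b/2)/b about b = 0, up to order b^6
expansion = sp.series(sp.sinh(b / 2) / b, b, 0, 8).removeO()

# Claimed coefficients: b^{2k} / (2^{2k+1} (2k+1)!)
for k in range(4):
    claimed = sp.Rational(1, 2**(2 * k + 1)) / sp.factorial(2 * k + 1)
    assert expansion.coeff(b, 2 * k) == claimed
```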
By comparing the coefficients of $\prod_{i=1}^n b_i^{2m_i}$ on both sides of (\ref{equality int num in 3}), we determine the intersection number as follows under $g>>b_i$, $i=1, \ldots, n$, and $g>>1$: \begin{equation} \label{computed int num in 3} <\kappa_1^{3g-3+n-\sum_{i=1}^n m_i} \, \prod_{i=1}^n \psi_i^{m_i}> \sim \sqrt{\frac{2}{\pi}} 2^{g+n-3} \, \pi^{2\sum_{i=1}^n m_i-2g} \, \cdot \Gamma(2g+n-\frac{5}{2})\, \frac{(3g-3+n-\sum_{i=1}^n m_i)! \prod_{i=1}^n m_i!}{\prod_{i=1}^n (2m_i+1)!}. \end{equation} \par Thus, we obtain the intersection numbers with $n$ boundaries ($n\ge 2$) under the conditions $g>>b_i$, $i=1, \ldots, n$, and $g>>1$. We expect that these results provide useful information complementary to previously known ones. \section{Concluding remarks and unresolved problems} \label{sec4} In this study, we discussed the physical applications of the large $g$ asymptotics of Weil--Petersson volumes to JT gravity. We computed the higher-genus contributions to the correlation functions with multiple boundaries and deduced the resolvent on the matrix integral side. We obtained results for any number of boundaries of the Riemann surfaces. We also evaluated the intersection numbers of line bundles on the moduli of the Riemann surfaces of large genera with a general number of boundaries ($n\ge 2$). \par As the resolvent is a fundamental quantity in the matrix integral, our results may provide information that can be utilized to confirm the duality of JT gravity and the matrix integral, as discussed in \cite{SSS2019}. \par In general, the determination of the asymptotic expressions of the Weil--Petersson volumes is not simple. The large $g$ asymptotics of the Weil--Petersson volumes \cite{Kimura2008} that were used in this study to compute the contributions to the correlation functions, resolvent, and intersection numbers involved the leading terms in $g$. 
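Equation (\ref{computed int num in 3}) follows from matching the coefficient of $\prod_i b_i^{2m_i}$ on the two sides of (\ref{equality int num in 3}). The following numerical sketch (Python; an added check, not part of the original text) confirms the matching for sample values of $g$ and $m_i$:

```python
import math

def lhs_coeff(g, ms):
    """Coefficient of prod_i b_i^{2 m_i} on the left-hand side,
    using sinh(b/2)/b = sum_k b^{2k} / (2^{2k+1} (2k+1)!)."""
    n = len(ms)
    c = math.sqrt(2 / math.pi) * 2**n * (4 * math.pi**2)**(2 * g + n - 3) \
        * math.gamma(2 * g + n - 2.5)
    for m in ms:
        c /= 2**(2 * m + 1) * math.factorial(2 * m + 1)
    return c

def rhs_coeff(g, ms):
    """Coefficient of prod_i b_i^{2 m_i} on the right-hand side, with
    the claimed intersection number inserted."""
    n, M = len(ms), sum(ms)
    D = 3 * g - 3 + n - M            # exponent of kappa_1
    intersection = math.sqrt(2 / math.pi) * 2**(g + n - 3) \
        * math.pi**(2 * M - 2 * g) * math.gamma(2 * g + n - 2.5) \
        * math.factorial(D)
    for m in ms:
        intersection *= math.factorial(m) / math.factorial(2 * m + 1)
    c = (2 * math.pi**2)**D / math.factorial(D)
    for m in ms:
        c *= 0.5**m / math.factorial(m)
    return c * intersection

for g, ms in [(5, [0, 1]), (8, [2, 1, 3]), (12, [0, 0])]:
    assert math.isclose(lhs_coeff(g, ms), rhs_coeff(g, ms), rel_tol=1e-9)
```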
A future study could focus on calculating the subleading terms and investigating the effects of their inclusion. \par The nonperturbative effects of the matrix integral might be observed from the higher-genus contributions that were computed in this study. This can be another potential direction of future research. \par According to a result in \cite{Mirzakhani2007}, the Weil--Petersson volume, $V_{g,n}(b_1, \ldots, b_n)$, is a polynomial in the $b_i$. In the region where $b_i>>g$ \footnote{The authors in \cite{Maxfield202006} evaluated the asymptotic Weil--Petersson volume for any genus with one geodesic boundary length becoming large.}, the terms $\prod_{i=1}^n b_i^{2m_i}$ with $\sum_{i=1}^n m_i=3g-3+n$ make the dominant contribution. In the region where $b_i>>g$, the expressions of the intersection numbers should be significantly different from those deduced in section \ref{sec3}. It might be interesting to deduce the expressions for the intersection numbers in the region where $b_i>>g$. \section*{Acknowledgments} We would like to thank Shun'ya Mizoguchi and Kazuhiro Sakai for discussions.
\section{INTRODUCTION} $N=2$ susy gauge theories possess an important holomorphicity property. The low-energy effective action of the $SU(2)$ theory, for example, is characterized by a holomorphic function ${\cal F}(a)$, or actually by two such functions $a(u)$ and $a_D(u)={\cal F}'(a(u))$. Knowledge of these two functions completely determines the low-energy effective action, as well as the masses of all BPS states at any point $u$ of the moduli space. Holomorphicity is of course a very strong mathematical condition. It allowed Seiberg and Witten \cite{SW}, given some physical one-loop input, to completely determine these two functions, thus obtaining also all non-perturbative multi-instanton contributions. One might expect that holomorphicity implies that all physical quantities vary smoothly over the moduli space. This is certainly true for the masses. There is however one important exception that concerns the stability of BPS states. As first observed in a two-dimensional context \cite{CV} and in the present context by Seiberg and Witten \cite{SW}, there exists a curve on moduli space, called the curve of marginal stability, where otherwise stable BPS states become degenerate with other BPS states and can decay. This curve is simply determined by the ratio $(a_D/a)(u)$ being real, and hence has a priori nothing to do with the holomorphicity properties of $a_D(u)$ and $a(u)$. In \cite{FB,BF} we studied the properties of these curves, which, together with a certain discrete global symmetry acting on the (Coulomb branch of) moduli space, allowed us to exactly determine which BPS states can and do exist on either side of the curves, i.e. the weak-coupling (semi-classical) and strong-coupling spectra. These spectra are highly discontinuous across the curves, with almost all weak-coupling states decaying into very few strong-coupling states. 
Here, after giving a very brief account of the work of Seiberg and Witten for gauge group $SU(2)$, we discuss the curves of marginal stability together with the discrete global symmetries and show how the different spectra are obtained. We first concentrate in more detail on the pure $SU(2)$ Yang-Mills case ($N_f=0$), and then present the results for the cases with quark hypermultiplets, $N_f=1,2,3$, which are the only asymptotically free possibilities for $SU(2)$ (restricting ourselves to vanishing bare masses). \section{REVIEW OF SEIBERG-WITTEN THEORY} Let us start with a flash review of \cite{SW} where the low-energy effective action and BPS mass formula on the moduli space of pure $N=2$ susy $SU(2)$ Yang-Mills theory was studied. The relevant susy multiplet is a $N=2$ vector multiplet containing a vector $A_\mu\equiv \sum_{b=1}^3 A_\mu^b\, {1\over 2} \sigma_b$, Weyl spinors $\lambda,\ \psi$ and a complex scalar $\phi$, all in the adjoint representation of the gauge group $SU(2)$. The action is such that a scalar potential $V(\phi)={1\over 2} {\rm tr \, } ([\phi, \phi^+])^2$ is present. Unbroken susy implies $[\phi, \phi^+]=0$, still leaving the possibility of a non-vanishing vacuum expectation value of $\phi$ which one may take as $\phi={1\over 2} a\, \sigma_3$ with $a\in {\bf C}$. Gauge inequivalent vacua then are parametrized by $a^2$ or rather by $u=\langle {\rm tr \, } \phi^2\rangle$ which is a good coordinate on the moduli space ${\cal M} ={\bf C} \cup \{ \infty \}$. Since $\phi$ has a non-vanishing expectation value, by the Higgs mechanism, all components $b=1,2$ of the vector multiplet acquire a mass $m=\sqrt{2} \vert a \vert$ while the $b=3$ components remain massless. This breaks $SU(2)$ to $U(1)$. Integrating out the massive fields, one can then in principle compute the low-energy effective action of the massless fields. 
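The flat direction of the scalar potential can be made concrete with a short numerical sketch (Python with numpy; an added illustration using an arbitrary value of $a$, not part of the original text):

```python
import numpy as np

sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
a = 0.7 + 1.3j                       # arbitrary illustrative value of a
phi = 0.5 * a * sigma3               # phi = (a/2) sigma_3

# Unbroken susy: [phi, phi^+] = 0, so V(phi) = tr([phi, phi^+])^2 / 2 = 0
commutator = phi @ phi.conj().T - phi.conj().T @ phi
assert np.allclose(commutator, 0)

# The gauge-invariant coordinate u = tr(phi^2) equals a^2/2 for this phi
assert np.isclose(np.trace(phi @ phi), a**2 / 2)
```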
Restricting oneself to two-derivative terms, the form of this action is constrained by $N=2$ susy to depend only on a single holomorphic function ${\cal F}(a)$. Combining the knowledge of the one-loop computation of ${\cal F}$ (the only perturbative contribution) and arguments about monodromies and the expected number of singularities on moduli space, Seiberg and Witten were able to determine $a(u)$ and $a_D(u)\equiv (\partial {\cal F} /\partial a)(u)$ explicitly in terms of integrals of the holomorphic one-form on a certain genus one elliptic curve over the cycles of the homology basis. From there, one can reconstruct ${\cal F}(a)$ and the low-energy effective action. The $a(u)$ and $a_D(u)$ however directly give the masses of the BPS states as \begin{equation} m=\sqrt{2}\vert n_e a(u) - n_m a_D(u)\vert \label{i} \end{equation} where $n_e$ and $n_m$ are the integer electric and magnetic charge quantum numbers of the state. The massive perturbative states (W bosons and superpartners) have $n_e=\pm 1,\ n_m=0$ so that $m=\sqrt{2} \vert a \vert$ as noted above. Magnetic monopoles that do exist as non-perturbative solitonic states have $n_e=0, \ n_m=\pm 1$ and mass $m=\sqrt{2} \vert a_D \vert$. States with masses strictly larger than the r.h.s. of (\ref{i}) are non BPS states and must be in a different type of susy multiplet, a so-called long multiplet containing necessarily spins larger than one. All known states turn out to be in short multiplets of spins less or equal to one and are BPS states satisfying (\ref{i}). The functions $a(u)$ and $a_D(u)$ are explicitly given by \begin{eqnarray} a(u)&=& \left({u+1\over 2}\right)^{\scriptstyle 1\over \scriptstyle 2}\, F\left(-{1\over 2}\raise 2pt\hbox{,}{1\over 2}\raise 2pt\hbox{,}1;{2\over u+1}\right) \ , \cr a_D(u)&=& i\, {u-1\over 2}\, F\left({1\over 2}\raise 2pt\hbox{,}{1\over 2}\raise 2pt\hbox{,}2;{1-u\over 2}\right) \ . \label{ii} \end{eqnarray} They have cuts and branch points. 
The latter are the singular points on the moduli space at $u=1,\, -1$ and $\infty$. Around these points the section $(a_D,a)$ has non-trivial monodromies in $SL(2,{\bf Z})\equiv Sp(2,{\bf Z})$. The full $Sp(2,{\bf Z})$ is the group of duality symmetries of the low-energy effective action and it includes electric-magnetic duality. Note that the spectrum of (massive) BPS states will not be invariant under the full duality group, since the latter is not a symmetry of the non-abelian $N=2$ susy Yang-Mills theory, only of the abelian low-energy effective action. The singular points on moduli space have an important physical interpretation. There, an otherwise massive BPS state becomes massless. While integrating out massive states is a sensible operation, integrating out a massless field leads to divergencies. So when a massive state that had been integrated out becomes massless somewhere on moduli space, one expects and indeed gets a singularity at this point. The singularity at $u=+1$ corresponds to a magnetic monopole (as well as anti-monopole) $(n_e,n_m)=\pm (0,1)$ becoming massless and the singularity at $u=-1$ to a dyon $(\pm 1,1)$ (and antidyon $-(\pm 1,1)$) becoming massless (cf. eqs. (\ref{i}) and (\ref{ii})). Note already that the two singularities are related by a ${\bf Z}_2$ symmetry acting on the moduli space as $u\to -u$. Note also that as $u\to \infty$, $a(u) \sim \sqrt{2u} \to \infty$. Since $a$ sets the mass scale, the $SU(2)$ Yang-Mills theory becomes asymptotically free in this limit, which hence is the semi-classical limit. On the other hand, $u=\pm 1$ are in a region of strong coupling and the associated singularities are genuine non-perturbative effects. 
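The explicit solution (\ref{ii}) can be evaluated numerically with mpmath's hypergeometric function (an added sketch, not part of the original text): one checks that $a_D$ vanishes at the monopole point $u=1$, while $a_D/a$ has a nonvanishing imaginary part at a generic point of moduli space:

```python
import mpmath as mp

def a(u):
    # a(u) = sqrt((u+1)/2) F(-1/2, 1/2, 1; 2/(u+1)), as quoted in the text
    return mp.sqrt((u + 1) / 2) * mp.hyp2f1(-0.5, 0.5, 1, 2 / (u + 1))

def a_D(u):
    # a_D(u) = i (u-1)/2 F(1/2, 1/2, 2; (1-u)/2), as quoted in the text
    return 1j * (u - 1) / 2 * mp.hyp2f1(0.5, 0.5, 2, (1 - u) / 2)

# Monopole (0,1) massless at u = 1: m = sqrt(2)|a_D(1)| = 0
assert abs(a_D(mp.mpf(1))) < 1e-12

# At a generic point, a_D/a is not real (illustrative value u = 2 + 2i)
ratio = a_D(mp.mpc(2, 2)) / a(mp.mpc(2, 2))
assert abs(mp.im(ratio)) > 1e-3
```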
\section{BPS STATES, CURVE OF MAR-\break GINAL STABILITY AND EXAMPLES OF DECAYS} Equation (\ref{i}) gives the mass of a BPS state in terms of the corresponding central charge \begin{equation} Z=n_e a(u) - n_m a_D(u)\ , \quad n_e,\ n_m \in {\bf Z} \label{iii} \end{equation} appearing in the $N=2$ susy algebra. Note that $Z$ is given by the standard symplectic invariant $\eta(p,\Omega)$ of $p=(n_e, n_m)$ and $\Omega=(a_D,a)$ which is such that $\eta(G p,G \Omega)=\eta(p,\Omega)$ for any $G\in Sp(2,{\bf Z})$. Generically, for a given point $u$ in moduli space, $a$ and $a_D$ are two complex numbers such that $a_D/a\notin {\bf R}$ and all possible central charges form a lattice in the complex plane, see Fig. 1. \begin{figure}[htb] \vspace{9pt} \centerline{ \fig{4.5cm}{4.5cm}{lattice.eps} } \caption{The lattice of central charges for generic $a_D$ and $a$} \label{lattice} \end{figure} Each lattice point corresponds to an a priori possible BPS state $(n_e, n_m)$ whose mass is simply its euclidean distance from the origin. Consider e.g. the dyon state $(n_e,n_m)=(1,1)$. By charge conservation alone, it could decay into the W boson $(1,0)$ and the magnetic monopole $(0,1)$ but, by the triangle inequality of elementary geometry, the sum of the masses of the decay products would be larger than the mass of the $(1,1)$ dyon. Hence the latter is stable. The same argument applies to all BPS states $(n_e, n_m)$ such that $(n_e, n_m)\ne q(n,m)$ with $n,m,q\in{\bf Z}$, $q\ne \pm 1$: states with $n_e$ and $n_m$ relatively prime are stable. The preceding argument fails if $(a_D/a)(u)\in {\bf R}$, since then the lattice collapses onto a single line and decays of otherwise stable BPS states become possible. It is thus of interest to determine the set of all such $u$, i.e. ${\cal C}=\{u \in {\bf C}\ \vert\ (a_D(u)/a(u))\in {\bf R}\}$, which is called the curve of marginal stability \cite{ARG,MAT}. 
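The triangle-inequality argument can be illustrated numerically (Python; an added sketch with arbitrary values of $a$ and $a_D$ such that $a_D/a\notin{\bf R}$):

```python
import math

# Generic point: a_D/a not real, so the central charges Z = n_e a - n_m a_D
# span a two-dimensional lattice in the complex plane (illustrative values)
a, a_D = complex(1.0, 0.0), complex(0.3, 0.7)

def mass(ne, nm):
    # BPS mass formula m = sqrt(2) |n_e a - n_m a_D|
    return math.sqrt(2) * abs(ne * a - nm * a_D)

# Charge conservation allows (1,1) -> (1,0) + (0,1), but the decay is
# energetically forbidden by the strict triangle inequality:
assert mass(1, 1) < mass(1, 0) + mass(0, 1)

# The same holds for other charge-conserving splittings of a state with
# n_e and n_m relatively prime:
for (n1, m1), (n2, m2) in [((1, 0), (0, 1)), ((2, -1), (-1, 2))]:
    assert mass(n1 + n2, m1 + m2) < mass(n1, m1) + mass(n2, m2)
```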
Given the explicit form of $a_D(u)$ and $a(u)$, it is straightforward to determine {$\cal C$}\ numerically \cite{FB}, see Fig. 2, although it can also be done analytically \cite{MAT}. \begin{figure}[htb] \vspace{9pt} \centerline{ \fig{6cm}{4cm}{curve.eps} } \caption{In the $u$ plane, we show the curve {$\cal C$}\ of marginal stability which is almost an ellipse centered at the origin (thick line), the cuts of $a(u)$ and $a_D(u)$ (dotted and dashed lines), as well as the definitions of the weak-coupling region ${\cal R}_W$ and the strong-coupling region (${\cal R}_{S+} \cup {\cal R}_{S-}$).} \label{curve} \end{figure} The precise form of the curve, however, is irrelevant for our purposes. What is important is that as $u$ varies along the curve, $a_D/a$ takes all values in $[-1,1]$. More precisely, if we call ${\cal C}^\pm$ the parts of {$\cal C$}\ in the upper and lower half $u$ plane, then \begin{eqnarray} {a_D\over a}(u) &\in& [-1,0] \quad {\rm for}\ u\in {\cal C}^+ \ , \cr &{}& \cr {a_D\over a}(u) &\in& [0,1] \quad {\rm for}\ u\in {\cal C}^- \ , \label{iv} \end{eqnarray} with the value being discontinuous at $u=-1$ due to the cuts of $a_D$ and $a$ running along the real axis from $-\infty$ to $+1$. The curve {$\cal C$}\ separates the moduli space into two distinct regions: inside the curve and outside the curve, see Fig. 2. If two points $u$ and $u'$ are in the same region, i.e. if they can be joined by a path not crossing {$\cal C$}\ then the spectrum of BPS states (by which we mean the set of quantum numbers $(n_e, n_m)$ that do exist) is necessarily the same at $u$ and $u'$. Indeed, start with a given stable BPS state at $u$. Then imagine deforming the theory adiabatically so that the scalar field $\phi$ slowly changes its vacuum expectation value and $\langle {\rm tr \, }\phi^2\rangle$ moves from $u$ to $u'$. In doing so, the BPS state will remain stable and it cannot decay at any point on the path. Hence it will also exist at $u'$. 
If, however, $u$ and $\tilde u$ are in different regions so that the path joining them must cross the curve {$\cal C$}\ somewhere, then the initial BPS state will no longer be stable as one crosses the curve and it can decay. Hence the spectrum at $u$ and $\tilde u$ need not be the same. As an example, consider the possible decay of the W boson $(1,0)$ when crossing the curve on ${\cal C}^+$ at a point where $a_D/a =r$ with $r$ any real number between $-1$ and $0$. Charge conservation alone allows for the reaction \begin{equation} (1,0)\to (1,-1) + (0,1) \ . \label{v} \end{equation} On ${\cal C}^+$, and only on ${\cal C}^+$, we also have the equality of masses, thanks to \begin{eqnarray} \vert a+a_D\vert + \vert a_D\vert &=&\vert a \vert \left( \vert 1+r\vert +\vert r \vert \right) \cr &=& \vert a \vert \left( 1+r -r\right) = \vert a \vert \ . \label{vi} \end{eqnarray} Had one crossed the curve in the lower half plane instead, $r$ would have been between $0$ and $+1$ and the dyon $(1,-1)$ would have been described as $(1,1)$ (see below), and eq. (\ref{vi}) would have worked out correspondingly. Since the region of moduli space outside the curve contains the semi-classical domain $u\to\infty$, we refer to this region as the semi-classical or weak-coupling region ${\cal R}_W$ and to the region inside the curve as the strong-coupling region ${\cal R}_S$. We call the corresponding spectra also weak and strong-coupling spectra ${\cal S}_W$ and ${\cal S}_S$. This terminology is used due to the above-explained continuity of the spectra throughout each of the two regions. Nevertheless, the physics close to the curve is always strongly coupled even in the so-called weak-coupling region. 
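The mass balance (\ref{vi}) for the decay of the W boson on ${\cal C}^+$ is easily checked numerically (Python; an added sketch with an arbitrary nonzero value of $a$):

```python
import math

a = complex(0.8, 0.3)                 # arbitrary nonzero a (illustrative)

# On C^+ the ratio a_D/a = r is real, with r in [-1, 0]; there the decay
# (1,0) -> (1,-1) + (0,1) exactly conserves mass:
for r in [-1.0, -0.7, -0.25, 0.0]:
    a_D = r * a
    m_W    = math.sqrt(2) * abs(a)          # W boson (1, 0)
    m_dyon = math.sqrt(2) * abs(a + a_D)    # dyon (1, -1): Z = a + a_D
    m_mono = math.sqrt(2) * abs(a_D)        # monopole (0, 1): Z = -a_D
    assert math.isclose(m_W, m_dyon + m_mono)
```

Away from the curve, $|a+a_D|+|a_D|$ strictly exceeds $|a|$ and the decay is forbidden, as in the lattice picture above.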
\section{THE MAIN ARGUMENT AND THE WEAK-COUPLING SPECTRUM} The important property of the curve {$\cal C$}\ of marginal stability is \noindent {\bf P1 : Massless states can only occur on the curve {$\cal C$}.} \noindent The proof is trivial: If we have a massless state at some point $u$, it necessarily is a BPS state, hence $m(u) = 0$ implies $n_e a(u) - n_m a_D(u) = 0$ which can be rewritten as $(a_D/a)(u) = n_e/n_m$. But $n_e/n_m$ is a real number, hence $(a_D/a)(u)$ is real, and thus $u\in$ {$\cal C$}. Indeed the points $u=\pm 1$ where the magnetic monopole and the dyon $(\pm 1,1)$ become massless are on the curve. The converse statement obviously also is true: \noindent {\bf P2 : A BPS state $(n_e, n_m)$ with $n_e/n_m \in [-1,1]$ becomes massless somewhere on the curve {$\cal C$}.} \noindent Of course, it will become massless precisely at the point $u\in$ {$\cal C$}\ where $(a_D/a)(u) = n_e/n_m$. Strictly speaking, in its simple form, this only applies to BPS states in the weak-coupling region, since the description of BPS states in the strong-coupling region is slightly more involved as shown below. Let me now state the main hypothesis. \noindent {\bf H : A state becoming massless always leads to a singularity of the low-energy effective action, and hence of $a_D(u),\ a(u)$. The Seiberg-Witten solution (\ref{ii}) for $a_D(u),\ a(u)$ is correct and there are only two singularities at finite $u$, namely $u=\pm 1$.} \noindent Then the argument we will repeatedly use goes like this: If a certain state would become massless at some point $u$ on moduli space, it would lead to an extra singularity which we know cannot exist. Hence this state either is the magnetic monopole $\pm (0,1)$ or the $\pm (\pm 1,1)$ dyon and $u=\pm 1$, or this state cannot exist. As an immediate consequence we can show that the weak-coupling spectrum cannot contain BPS states with $\vert n_m\vert > \vert n_e\vert >0$. 
Indeed, for such a state, $n_e/n_m \in [-1,1]$ and it would be massless at the point $u$ on {$\cal C$}\ where $(a_D/a)(u) = n_e/n_m$. Since $\vert n_m\vert > \vert n_e\vert >0$ it is neither the monopole ($n_e=0$) nor the $(\pm 1,1)$ dyon, hence it cannot exist. To determine which states are in ${\cal S}_W$ one uses a global symmetry. Taking $u\to e^{2\pi i} u$ along a path outside {$\cal C$}\ does not change the theory since one comes back to the same point of moduli space, and hence must leave ${\cal S}_W$ invariant. But it induces a monodromy transformation \begin{equation} \pmatrix{ n_e\cr n_m\cr} \, \to \, M_\infty \pmatrix{ n_e\cr n_m\cr} \, , \, M_\infty=\pmatrix{ -1&\hfill 2\cr \hfill 0 & -1\cr} . \label{vii} \end{equation} In other words, $M_\infty {\cal S}_W = {\cal S}_W$. Now, we know that ${\cal S}_W$ contains at least the two states that are responsible for the singularities, namely $(0,1)$ and $(1,1)$ together with their antiparticles $(0,-1)$ and $(-1,-1)$. Applying $M_\infty^{\pm 1}$ on these two states generates all dyons $(n,\pm 1),\ n\in {\bf Z}$. This was already clear from \cite{SW}. But now we can just as easily show that there are no other dyons in the weak-coupling spectrum. If there were such a state $\pm (k,m)$ with $\vert m\vert \ge 2$, then applying $M_\infty^n,\ n\in {\bf Z}$, there would also be all states $\pm (k-2n m,m)$. The latter would become massless somewhere on {$\cal C$}\ if $(k-2n m)/m=(k/m)-2n \in [-1,1]$. Since there is always such an $n\in {\bf Z}$, this state, and hence $\pm (k,m)$, cannot exist in ${\cal S}_W$. Finally, the W boson, which is part of the perturbative spectrum, is left invariant by $M_\infty$: $M_\infty (1,0) = - (1,0)$, where the minus sign simply corresponds to the antiparticle. Hence we conclude \begin{equation} {\cal S}_W = \left\{ \pm (1,0),\ \pm (n,1),\ n\in {\bf Z} \right\} \ . 
\label{viii} \end{equation} This result was already known from semi-classical considerations on the moduli space of multi-monopole configurations \cite{SEN,STERN}, but it is nice to rederive it in this particularly simple way. Now let us turn to the new results of \cite{FB} concerning the strong-coupling spectrum. \section{THE ${\bf Z}_2$ SYMMETRY} The classical susy $SU(2)$ Yang-Mills theory has a $U(1)_R$ $R$-symmetry acting on the scalar $\phi$ as $\phi\to e^{2 i \alpha}\phi$ so that $\phi$ has charge two. In the quantum theory this global symmetry is anomalous, and it is easy to see from the explicit form of the one-loop and instanton contributions to the low-energy effective action (i.e. to ${\cal F}$) that only a discrete subgroup ${\bf Z}_8$ survives, corresponding to phases $\alpha={2\pi \over 8} k,\ k\in{\bf Z}$. Hence under this ${\bf Z}_8$ one has $\phi^2\to (-)^k \phi^2$. This ${\bf Z}_8$ is a symmetry of the quantum action and of the Hamiltonian, but a given vacuum with $u=\langle {\rm tr \, }\phi^2\rangle \ne 0$ is invariant only under the ${\bf Z}_4$ subgroup corresponding to even $k$. The quotient (odd $k$) is a ${\bf Z}_2$ acting as $u\to -u$. Although a given vacuum breaks the full ${\bf Z}_8$ symmetry, the broken symmetry (the ${\bf Z}_2$) relates physically equivalent but distinct vacua. In particular, the mass spectra at $u$ and at $-u$ must be the same. This means that for every BPS state $(n_e, n_m)$ that exists at $u$ there must be some BPS state $(\tilde n_e, \tilde n_m)$ at $-u$ having the same mass: \begin{equation} \vert \tilde n_e a(-u) - \tilde n_m a_D(-u) \vert = \vert n_e a(u) - n_m a_D(u) \vert \ . 
\label{ix} \end{equation} This equality shows that there must exist a matrix $G\in Sp(2,{\bf Z})$ such that \begin{eqnarray} \pmatrix{ \tilde n_e \cr \tilde n_m \cr} &=&\pm G \pmatrix{ n_e\cr n_m\cr} \ , \cr &{}& \cr \pmatrix{ a_D\cr a\cr} (-u) &=& e^{i\omega}\, G \pmatrix{ a_D\cr a\cr} (u) \label{x} \end{eqnarray} where $e^{i\omega}$ is some phase. Indeed, from the explicit expressions (\ref{ii}) of $a_D$ and $a$ one finds, using standard relations between hypergeometric functions, that \begin{equation} G= G_{W,\epsilon} \equiv \pmatrix{ 1&\epsilon\cr 0& 1\cr}\, ,\ e^{i\omega}= e^{-i\pi\epsilon/2} \label{xi} \end{equation} where $\epsilon =\pm 1$ according to whether $u$ is in the upper or lower half plane. The subscript $W$ indicates that this is the matrix to be used in the weak-coupling region, while for the strong-coupling region there is a slight subtlety to be discussed soon. We have just shown that for any BPS state $(n_e, n_m)$ existing at $u$ (in the weak-coupling region) with mass $m$ there exists another BPS state $(\tilde n_e, \tilde n_m)=\pm G_{W,\epsilon} (n_e, n_m)$ at $-u$ with the same mass $m$. Now, since both $u$ and $-u$ are outside the curve {$\cal C$}, they can be joined by a path never crossing {$\cal C$}, and hence the BPS state $(\tilde n_e, \tilde n_m)$ must also exist at $u$, although with a different mass $\tilde m$. So we have been able to use the broken symmetry to infer the existence of the state $(\tilde n_e, \tilde n_m)$ at $u$ from the existence of $(n_e, n_m)$ at the {\it same} point $u$ of moduli space. Starting from the magnetic monopole $(0,1)$ at $u$ in the upper half plane (outside {$\cal C$}) one deduces the existence of all dyons $(n,1)$ with $n\ge 0$. Taking similarly $u$ in the lower half plane (again outside {$\cal C$}) one gets all dyons $(n,1)$ with $n\le 0$. The W boson $(1,0)$ is invariant under $G_{W,\epsilon}$. 
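Both the closure of ${\cal S}_W$ under $M_\infty$ and the $G_{W,\epsilon}$ orbits described above can be checked with a short sketch (Python; an added illustration, not part of the original text):

```python
# Monodromy at infinity, M_inf = [[-1, 2], [0, -1]], and the broken-Z_2
# generator G_{W,eps} = [[1, eps], [0, 1]], both acting on (n_e, n_m).
def m_inf(state, inverse=False):
    ne, nm = state
    # M_inf^{-1} = [[-1, -2], [0, -1]]
    return (-ne - 2 * nm, -nm) if inverse else (-ne + 2 * nm, -nm)

def g_w(state, eps):
    ne, nm = state
    return (ne + eps * nm, nm)

# Orbit of the singularity states (0,1) and (1,1) under M_inf^{+-1},
# together with antiparticles: only dyons (n, +-1) are generated.
orbit = set()
for seed in [(0, 1), (1, 1)]:
    for inv in (False, True):
        s = seed
        for _ in range(4):
            orbit.add(s)
            orbit.add((-s[0], -s[1]))
            s = m_inf(s, inverse=inv)
assert all(abs(nm) == 1 for (ne, nm) in orbit)

# G_{W,+} generates the dyons (n,1), n >= 0, from the monopole;
# G_{W,-} generates those with n <= 0.
s = (0, 1)
ups = []
for _ in range(4):
    ups.append(s)
    s = g_w(s, +1)
assert ups == [(0, 1), (1, 1), (2, 1), (3, 1)]

# The W boson is inert (up to its antiparticle):
assert m_inf((1, 0)) == (-1, 0) and g_w((1, 0), +1) == (1, 0)
```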
Once again, one generates exactly the weak-coupling spectrum ${\cal S}_W$ of (\ref{viii}), and clearly $G_{W,\epsilon} {\cal S}_W = {\cal S}_W$. \section{THE STRONG-COUPLING SPECTRUM} It is in the strong-coupling region that this ${\bf Z}_2$ symmetry will show its full power. Here $M_\infty$ no longer is a symmetry, since a monodromy circuit around infinity can be deformed all through the weak-coupling region but it cannot cross {$\cal C$}\ into the strong-coupling region since the state that is taken along this circuit may well decay upon crossing the curve {$\cal C$}. The relations (\ref{x},\ref{xi}) expressing $a_D(-u), a(-u)$ in terms of $a_D(u), a(u)$ nevertheless remain true. What needs to be reexamined is the relation between $\tilde n_e, \tilde n_m$ and $n_e, n_m$. This is due to the fact that there is a cut of the function $a(u)$ running between $-1$ and $1$, separating the strong-coupling region ${\cal R}_S$ into two parts, ${\cal R}_{S+}$ and ${\cal R}_{S-}$, as shown in Fig. 2. As a consequence, the same BPS state is described by two different sets of integers in ${\cal R}_{S+}$ and ${\cal R}_{S-}$. If we call the corresponding spectra ${\cal S}_{S+}$ and ${\cal S}_{S-}$ then we have \begin{eqnarray} {\cal S}_{S-}&=&M_1^{-1} {\cal S}_{S+}\ ,\ \pmatrix{n_e'\cr n_m'\cr} = M_1^{-1} \pmatrix{n_e\cr n_m\cr} \ ,\cr M_1^{-1}&=&\pmatrix{1&0\cr 2&1\cr} \ . \label{xia} \end{eqnarray} This change of description is easily explained: take a BPS state $(n_e, n_m)\in {\cal S}_{S+}$ at a point $u\in {\cal R}_{S+}$ and transport it to a point $u'\in {\cal R}_{S-}$. In doing so, its mass varies continuously and nothing dramatic can happen since one does not cross the curve {$\cal C$}. Hence, as one crosses from ${\cal R}_{S+}$ into ${\cal R}_{S-}$, the functions $a_D$ and $a$ must also vary smoothly, which means that at $u'\in{\cal R}_{S-}$ one has the analytic continuation of $a_D(u)$ and $a(u)$. But this is not what one calls $a_D$ and $a$ in ${\cal R}_{S-}$. 
Rather, these analytic continuations $\tilde a_D(u')$ and $\tilde a(u')$ are related to $a_D(u')$ and $a(u')$ by the monodromy matrix around $u=1$ which is $M_1$ as \begin{equation} \pmatrix{\widetilde {a_D}(u')\cr \tilde a(u')\cr} = M_1 \pmatrix{ a_D(u')\cr a(u')\cr} \ . \label{xib} \end{equation} Hence the mass of the BPS state at $u'$ is $\sqrt{2} \vert n_e \tilde a(u') - n_m \tilde a_D(u')\vert$ $= \sqrt{2}\vert n_e' a(u') - n_m' a_D(u') \vert$ where $n_e',\, n_m'$ are given by eq. (\ref{xia}). As a consequence of the two different descriptions of the same BPS state, the $G$-matrix implementing the ${\bf Z}_2$ transformation on the spectrum has to be modified. As before, from the existence of $(n_e, n_m)$ at $u\in {\cal R}_{S+}$ one concludes the existence of a state $G_{W,+} (n_e, n_m)$ at $-u\in {\cal R}_{S-}$. This same state must then also exist at $u$ but is described as $M_1 G_{W,+} (n_e, n_m)$. Had one started with a $u\in {\cal R}_{S-}$ the relevant matrix would have been $M_1^{-1} G_{W,-}$. Hence, in the strong-coupling region $G_{W,\pm}$ is replaced by \begin{equation} G_{S,\epsilon}=(M_1)^\epsilon G_{W,\epsilon} = \pmatrix{\hfill 1&\hfill\epsilon\cr -2\epsilon&-1\cr}\ , \label{xii} \end{equation} and again one concludes that the existence of a BPS state $(n_e, n_m)$ at $u\in {\cal R}_{S,\epsilon}$ implies the existence of another BPS state $G_{S,\epsilon} (n_e, n_m)$ at the {\it same} point $u$. The important difference now is that $G_{S,\epsilon}^2=-{\bf 1}$, so that applying this argument twice just gives back $(-n_e, -n_m)$. But this is the antiparticle of $(n_e, n_m)$ and always exists together with $(n_e, n_m)$. As far as the determination of the spectrum is concerned we do not really need to distinguish particles and antiparticles. In this sense, applying $G_{S,\epsilon}$ twice gives back the same BPS state. 
Hence in the strong-coupling region, all BPS states come in pairs, or ${\bf Z}_2$ doublets (or quartets if one counts particles and antiparticles separately): \newpage \begin{eqnarray} \pm \pmatrix{n_e\cr n_m\cr}\in {\cal S}_{S+} \, &\Leftrightarrow& \, \pm\, G_{S,+} \pmatrix{n_e\cr n_m\cr} \cr &=&\pm \pmatrix{ n_e+n_m\cr -2n_e-n_m\cr} \in {\cal S}_{S+} \cr &{}& \label{xiii} \end{eqnarray} and similarly for ${\cal S}_{S-}$. An example of such a doublet is the magnetic monopole $(0,1)$ and the dyon $(1,-1)=-(-1,1)$ which are the two states becoming massless at the ${\bf Z}_2$-related points $u=1$ and $u=-1$. Note that in ${\cal S}_{S-}$ the monopole is still described as $(0,1)$ while the same dyon is described as $(1,1)$. It is now easy to show that this is the only doublet one can have in the strong-coupling spectrum. Indeed, one readily sees that either $n_e/n_m \equiv r$ is in $[-1,0]$ or $(n_e+n_m)/(-2 n_e-n_m)= - (r+1)/(2r+1)$ is in $[-1,0]$. This means that one or the other member of the ${\bf Z}_2$ doublet (\ref{xiii}) becomes massless somewhere on ${\cal C}^+$, the part of the curve {$\cal C$}\ that can be reached from ${\cal R}_{S+}$. But as already repeatedly argued, the only states ever becoming massless are the magnetic monopole $(0,1)$ and the dyon $(1,-1)$. Hence no other ${\bf Z}_2$ doublet can exist in the strong-coupling spectrum and we conclude that \begin{eqnarray} {\cal S}_{S+} &=& \left\{ \pm (0,1), \pm (-1,1) \right\} \cr &\Leftrightarrow& \cr {\cal S}_{S-} &=& \left\{ \pm (0,1), \pm (1,1) \right\} \ . \label{xiv} \end{eqnarray} {\bf P3 : The strong-coupling spectrum consists of only those BPS states that are responsible for the singularities. All other weak-coupling, i.e. semi-classical BPS states must and do decay consistently into them when crossing the curve {$\cal C$}.} \noindent We have shown above the example of the decay of the W boson, cf. eq. (\ref{v}), but it is just as simple to show consistency of the other decays \cite{FB}. 
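The algebra behind the strong-coupling doublets can be verified directly (Python; an added sketch, not part of the original text):

```python
def mat_mul(A, B):
    # 2x2 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def act(M, state):
    ne, nm = state
    return (M[0][0] * ne + M[0][1] * nm, M[1][0] * ne + M[1][1] * nm)

# G_{S,eps} = M_1^eps G_{W,eps} squares to -1, so states pair into doublets
for eps in (+1, -1):
    G_S = [[1, eps], [-2 * eps, -1]]
    assert mat_mul(G_S, G_S) == [[-1, 0], [0, -1]]

# In R_{S+}, the monopole (0,1) pairs with the dyon (1,-1) = -(-1,1):
assert act([[1, 1], [-2, -1]], (0, 1)) == (1, -1)

# Changing description from S_{S+} to S_{S-} with M_1^{-1} = [[1,0],[2,1]]:
M1_inv = [[1, 0], [2, 1]]
assert act(M1_inv, (0, 1)) == (0, 1)      # the monopole is still (0,1)
assert act(M1_inv, (-1, 1)) == (-1, -1)   # the dyon is described as -(1,1)
```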
When adding massless quark hypermultiplets next, we will see that the details of the spectrum change, however, the conclusion P3 will remain the same. \section{GENERALISATION TO $N=2$ SUSY QCD INCLUDING $N_f=1,2,3$ MASSLESS QUARK HYPERMULTIPLETS} We will continue to consider only the gauge group SU(2) as studied in \cite{SWII}. We will also restrict ourselves to the case of vanishing bare masses of the quark hypermultiplets. Here, we will be very qualitative and describe only the results, referring the reader to \cite{BF} for details. The main difference with respect to the previous case of pure Yang-Mills theory is that now the BPS states carry representations of the flavour group which is the covering group of $SO(2N_f)$, namely $SO(2)$ for one flavour, $Spin(4)=SU(2)\times SU(2)$ for two flavours, and $Spin(6)=SU(4)$ for three flavours. We will present each of the three cases separately. \subsection{$N_f=1$} According to Seiberg and Witten \cite{SWII} there are 3 singularities at finite points of the Coulomb branch of the moduli space. They are related by a global discrete ${\bf Z}_3$ symmetry. This ${\bf Z}_3$ is the analogue of the ${\bf Z}_2$ symmetry discussed previously. Its origin is slightly more complicated, however, since the original ${\bf Z}_{12}$ is due to a combination of a ${\bf Z}_6$ coming from the anomalous $U(1)_R$ symmetry and of the anomalous flavour-parity of the $O(2N_f)$ flavour group. In any case, the global discrete symmetry of the quantum theory is ${\bf Z}_{12}$. The vacuum with non-vanishing value of $u=\langle {\rm tr\, }\phi^2 \rangle$ breaks this to ${\bf Z}_4$. The quotient ${\bf Z}_3$ acting as $u\to e^{\pm 2\pi i/3} u$ then is a symmetry relating different but physically equivalent vacua. The three singular points are due to a massless monopole $(0,1)$, a massless dyon $(-1,1)$ and another massless dyon $(-2,1)$. 
Again there is a curve of marginal stability that was obtained from the explicit expressions for $a_D(u)$ and $a(u)$ \cite{BF}. It is almost a circle, and of course, it goes through the three singular points, see Fig. 3, \begin{figure}[htb] \vspace{9pt} \centerline{ \fig{5.5cm}{4.5cm}{matter.eps} } \caption{The curve of marginal stability and the three different portions of the strong-coupling region separated by the cuts, for $N_f=1$} \label{matter} \end{figure} where we also indicated the various cuts and correspondingly different portions ${\cal R}_{S+}, {\cal R}_{S-}, {\cal R}_{S0}$ of the strong-coupling region ${\cal R}_S$. So here one needs to introduce three different descriptions of the same strong-coupling BPS state. The corresponding spectra are denoted ${\cal S}_{S+}$, ${\cal S}_{S0}$ and ${\cal S}_{S-}$. The ratio $a_D/a$ increases monotonically from $-2$ to $+1$ as one goes along the curve in a clockwise sense, starting at the point where $(-2,1)$ is massless. Then using exactly the same type of arguments as we did before, one obtains the weak and strong-coupling spectra. All states in the latter now belong to a single ${\bf Z}_3$ triplet, containing precisely the three states responsible for the singularities. Denoting a BPS state by $(n_e, n_m)_S$ where $S$ is the $SO(2)$ flavour charge, and denoting its antiparticle $(-n_e, -n_m)_{-S}$ simply by $-(n_e, n_m)_S$, one finds \cite{BF} \begin{eqnarray} {\cal S}_W &=& \left\{ \pm (2,0)_0 , \ \pm (1,0)_1 ,\ \pm (2n,1)_{1/2} , \right.\cr &{}& \phantom{\{ } \left. \pm (2n+1,1)_{-1/2} ,\ n\in {\bf Z} \right\} \cr &{}& \cr {\cal S}_{S0} &=& \left\{ \pm (0,1)_{1/2} , \ \pm (-1,1)_{-1/2} , \ \pm (1,0)_{1/2} \right\} \cr &{}& \label{xv} \end{eqnarray} with states in ${\cal S}_{S+}$ or ${\cal S}_{S-}$ related to the description in ${\cal S}_{S0}$ by the appropriate monodromy matrices: ${\cal S}_{S+} = \pmatrix{\hfill 2&1\cr -1 & 0\cr} {\cal S}_{S0}$, ${\cal S}_{S-} = \pmatrix{1&0\cr 1&1\cr} {\cal S}_{S0}$.
One sees that the state $(1,0)_{1/2}$ in ${\cal S}_{S0}$ corresponds to $(2,-1)_{1/2}$ in ${\cal S}_{S+}$ or to $(1,1)_{1/2}$ in ${\cal S}_{S-}$ and is the one responsible for the third singularity. Also note that following \cite{SWII,BF} we changed the normalisation of the electric charge by a factor of 2, so that now $(2,0)_0$ is the W boson and $(1,0)_1$ is the quark. All decays across the curve {$\cal C$}\ are consistent with conservation of the mass and of all quantum numbers, i.e. electric and magnetic charges, as well as the $SO(2)$ flavour charge. For example, when crossing {$\cal C$}\ into ${\cal R}_{S0}$, the quark decays as $(1,0)_1 \to (0,1)_{1/2}+(1,-1)_{1/2}$. \subsection{$N_f=2$} This case is very similar to the pure Yang-Mills case. The global discrete symmetry acting on the Coulomb branch of moduli space is again ${\bf Z}_2$ and the curve of marginal stability is exactly the same, cf. Fig. 2, with the singularities again due to a massless magnetic monopole $(0,1)$ and a massless dyon $(1,1)$. Note however, that this is in the new normalisation where the W boson is $(2,0)$. So this dyon has half the electric charge of the W, contrary to what happened for $N_f=0$. With the present normalisation one finds the weak and strong-coupling spectra as \begin{eqnarray} {\cal S}_W &=& \left\{ \pm (2,0) , \ \pm (1,0) ,\ \pm (n,1) ,\ n\in {\bf Z} \right\} \cr &{}& \cr {\cal S}_{S+} &=& \left\{ \pm (0,1) , \ \pm (-1,1) \right\} \label{xvi} \end{eqnarray} and all decays across {$\cal C$}\ are again consistent with all quantum numbers. For the quark one has e.g. $(1,0)\to (0,1)+(1,-1)$ with the flavour representations of $SU(2)\times SU(2)$ working out as $({\bf 2}, {\bf 2}) = ({\bf 2}, {\bf 1})\otimes ({\bf 1}, {\bf 2})$. \subsection{$N_f=3$} In this case the global symmetry of the action is ${\bf Z}_4$ and a given vacuum is invariant under the full ${\bf Z}_4$. Consequently, there is no global discrete symmetry acting on the Coulomb branch of the moduli space. 
There are two singularities \cite{SWII}, one due to a massless monopole, the other due to a massless dyon $(-1,2)$ of {\it magnetic} charge 2. The existence of magnetic charges larger than 1 is a novelty of $N_f=3$. The curve of marginal stability again goes through the two singular points. It is a shifted and rescaled version of the corresponding curve for $N_f=0$, see Fig. 2. Due to the cuts, again we need to introduce two different descriptions of the same strong-coupling BPS state. The variation of $a_D/a$ along the curve {$\cal C$}\ is from $-1$ to $-1/2$ on ${\cal C}^+$ and from $-1/2$ to $0$ on ${\cal C}^-$. Luckily, this is such that we do not need any global symmetry to determine the strong-coupling spectrum. For the weak-coupling spectrum, one uses the $M_\infty$ symmetry. One finds \begin{eqnarray} {\cal S}_W &=& \left\{ \pm (2,0) , \ \pm (1,0) ,\ \pm (n,1) ,\right.\cr &{}& \phantom{ \{ } \left. \pm (2n+1,2) , \ n\in {\bf Z} \right\} \cr &{}& \cr {\cal S}_{S+} &=& \left\{ \pm (1,-1) , \ \pm (-1,2) \right\} \label{xvii} \end{eqnarray} with $(1,-1)\in {\cal S}_{S+}$ corresponding to $(0,1)\in {\cal S}_{S-}$, so this is really the magnetic monopole. The flavour symmetry group is $SU(4)$, and the quark $(1,0)$ is in the representation ${\bf 6}$, the W boson $(2,0)$ and the dyons of magnetic charge two are singlets, while the dyons $(n,1)$ of magnetic charge one are in the representation ${\bf 4}$ if $n$ is even and in ${\overline {\bf 4}}$ if $n$ is odd. Antiparticles are in the complex conjugate representations of $SU(4)$. Again, all decays across the curve {$\cal C$}\ are consistent with all quantum numbers, and in particular with the $SU(4)$ Clebsch-Gordan series. As an example, consider again the decay of the quark, this time as $(1,0)\to 2\times (0,1) + (1,-2)$. The representations on the l.h.s. and r.h.s. are ${\bf 6}$ and ${\bf 4}\otimes {\bf 4}\otimes {\bf 1}$. Since ${\bf 4}\otimes {\bf 4}={\bf 6} \oplus {\bf 10}$ this decay is indeed consistent. 
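The example decays quoted above can be checked directly against conservation of electric and magnetic charge, and the relabelling of the same BPS state in the different strong-coupling descriptions can be checked against the monodromy matrices given for $N_f=1$. A small numerical sketch (numpy, purely for illustration):

```python
import numpy as np

# Electric/magnetic charge conservation in the example decays quoted above
def conserves_charge(parent, products):
    return parent == tuple(map(sum, zip(*products)))

# N_f = 1, 2: quark (1,0) -> monopole (0,1) + dyon (1,-1)
assert conserves_charge((1, 0), [(0, 1), (1, -1)])
# N_f = 3: quark (1,0) -> 2 x (0,1) + (1,-2)
assert conserves_charge((1, 0), [(0, 1), (0, 1), (1, -2)])

# N_f = 1 relabelling between descriptions of the same BPS state:
# (1,0) in S_S0 appears as (2,-1) in S_S+ and as (1,1) in S_S-
M_plus = np.array([[2, 1], [-1, 0]])
M_minus = np.array([[1, 0], [1, 1]])
assert tuple(M_plus @ (1, 0)) == (2, -1)
assert tuple(M_minus @ (1, 0)) == (1, 1)
```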
All other examples can be found in \cite{BF}. \section{CONCLUSIONS} The conclusions have already been listed in the abstract, and there is no need to repeat them here again. \vskip 2.mm \noindent{\bf Acknowledgements} It is a pleasure to thank the organizing committee of the SUSY'96 conference at College Park, MD, and in particular Jim Gates, for the invitation to present the material covered in these notes.
\title{Learning Safe, Generalizable Perception-based Hybrid Control with Certificates} \author{Charles Dawson$^{1}$, Bethany Lowenkamp$^{2}$, Dylan Goff$^1$, and Chuchu Fan$^{1}$% \thanks{Manuscript received: September 9, 2021; Revised November 23, 2021; Accepted December 30, 2021.} \thanks{This paper was recommended for publication by Editor Jens Kober upon evaluation of the Associate Editor and Reviewers' comments. This work was supported by the NASA University Leadership Initiative (grant \#80NSSC20M0163) and the Defense Science and Technology Agency in Singapore, but this article solely reflects the opinions and conclusions of its authors and not any NASA entity, DSTA Singapore, or the Singapore Government. CD is supported by the NSF GRFP under Grant No. 1745302.} \thanks{$^{1} $CD, DG, and CF are with the Dept. of Aeronautics and Astronautics, MIT, Cambridge MA {\tt\footnotesize \{cbd, dgoff, chuchu\}@mit.edu}}% \thanks{$^{2} $BL is with the Dept. of Mechanical Engineering, MIT, Cambridge MA {\tt\footnotesize [email protected]}}% \thanks{$^{*} $BL and DG contributed equally.}% \thanks{Digital Object Identifier (DOI): see top of this page.} } \maketitle \begin{abstract} Many robotic tasks require high-dimensional sensors such as cameras and Lidar to navigate complex environments, but developing certifiably safe feedback controllers around these sensors remains a challenging open problem, particularly when learning is involved. Previous works have proved the safety of perception-feedback controllers by separating the perception and control subsystems and making strong assumptions on the abilities of the perception subsystem. In this work, we introduce a novel learning-enabled perception-feedback hybrid controller, where we use Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) to show the safety and liveness of a full-stack perception-feedback controller.
We use neural networks to learn a CBF and CLF for the full-stack system directly in the observation space of the robot, without the need to assume a separate perception-based state estimator. Our hybrid controller, called LOCUS (Learning-enabled Observation-feedback Control Using Switching), can safely navigate unknown environments, consistently reach its goal, and generalizes safely to environments outside of the training dataset. We demonstrate LOCUS in experiments both in simulation and in hardware, where it successfully navigates a changing environment using feedback from a Lidar sensor. \end{abstract} \begin{IEEEkeywords} perception-based control, safe control, control barrier functions, certificate learning \end{IEEEkeywords} \section{Introduction} \IEEEPARstart{U}{sing} visual input to control autonomous systems without compromising safety or soundness is a challenging problem in robotics. Traditional methods from control theory provide powerful tools for safety and performance analysis but lack the expressiveness to deal with rich sensing models such as vision or Lidar. On the other hand, learning-based methods have been used successfully on visual-feedback control tasks including autonomous driving~\cite{Pan2017AgileOA} and aircraft taxiing~\cite{taxinet}, but ensuring the safety of these controllers remains an open question. Most existing work analyzing the safety of learned vision-based controllers has been done \textit{post hoc}, where the controller is synthesized and then independently verified. Works such as~\cite{julian2020} assess safety through adversarial testing, searching for an input sequence that causes a learned vision-based controller to fail. Others such as~\cite{Katz2021dasc} learn a generative model to predict observations from states, then conduct a reachability analysis to check the safety of the concatenated generator-controller network. 
In both cases, controller safety and controller synthesis are treated as two separate issues, rather than using safety considerations to inform the synthesis process. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/headline.png} \caption{Our controller combines a learned Control Barrier Function (based on Lidar sensor data, green) and a learned Control Lyapunov Function (based on range and bearing relative to the goal, blue) with a hybrid control architecture. The robot switches between goal-seeking and exploratory modes, but is guaranteed to maintain safety (using the barrier function) and eventually reach the goal if a safe path exists (using the Lyapunov function).} \label{fig:headline} \end{figure} Integrating safety into the control synthesis process has borne impressive results in the state-feedback regime, in the form of certificate learning for control. These approaches draw on control-theoretic certificates such as Lyapunov functions~\cite{Giesl2015,Dai2020}, barrier functions~\cite{ames_cbf,Peruffo2020}, and contraction metrics~\cite{Sun2020,Singh2020} that prove properties such as stability, forward invariance, and differential tracking stability of dynamical systems. Recent works have demonstrated that learning these certificates alongside a state-feedback control policy can enable provably safe controllers for walking~\cite{Castaneda2020}, stable flight in turbulent conditions~\cite{Sun2020}, and decentralized multi-agent control~\cite{Qin2021}. Several works have attempted to extend these approaches to observation-feedback control synthesis~\cite{Dean2020,Dean2020a,anonymous2021robust}, but these works make a number of \textcolor{black}{strong} assumptions.
For instance, \cite{Dean2020a} and~\cite{anonymous2021robust} assume that the robot's state can be inferred with bounded error from a single observation, limiting them to ``third-person'' camera perspectives where each observation completely captures the state of the robot and its environment. As a result, these approaches do not readily apply when the vision system (either a camera or Lidar scanner) is mounted to the robot and cannot observe the entire environment. Further, these approaches assume that the environment (including the locations of any obstacles) is known prior to deployment, limiting the ability of these controllers to generalize to new environments. \textcolor{black}{We relax these assumptions to permit deployment on a robot in previously-unseen environments, although we retain some assumptions on the sensor model (see~\ref{assumptions})}. In this work, our main contribution is to combine techniques from machine learning and hybrid control theory to learn safe observation-feedback controllers that generalize to previously-unseen environments. In particular, we combine learned observation-based barrier and Lyapunov functions with a novel certificate-based hybrid control architecture. We provide proofs that our hybrid controller will remain safe and reach its goal without deadlock when provided with valid certificate functions, \textcolor{black}{and we use neural networks to learn certificates that are valid with high probability.} To our knowledge, this represents the first attempt to provide safety and goal-reaching guarantees for a perception-feedback controller that considers the full autonomous navigation stack (rather than sand-boxing the perception system with assumptions on error rates and considering the controller separately). 
We validate our approach, called LOCUS (Learning-enabled Observation-feedback Control Using Switching) both in simulation and in hardware, where we demonstrate a mobile robot navigating an unseen environment with Lidar measurements using our learned hybrid controller. \textcolor{black}{Experiments show that our learned controller can be run 6 times faster than MPC, reaches the goal almost twice as quickly as end-to-end reinforcement learning policies, and maintains a perfect safety rate in previously-unseen environments.} \section{Problem Statement} Our goal is to synthesize an observation-feedback controller for a robot with nonlinear dynamics navigating an unknown environment. The robot perceives its environment using a Lidar sensor, as in Fig.~\ref{fig:headline}, and it must navigate to a goal point while avoiding collisions. In contrast with higher-level approaches that combine SLAM with a global path planner, or robust planning approaches like~\cite{ChenFaSTrack}, we restrict our focus to real-time controllers with feedback from local observations. Our observation-feedback controller can be combined with SLAM and planning modules to track waypoints along a path, but it can also be used without those components, or when those components fail, without compromising safety. Formally, we consider a robot with nonlinear discrete-time dynamics $x_{t+1} = f(x_t, u_t)$, where $x \in \mathcal{X} \subseteq \R^n$ and $u \in \mathcal{U} \subseteq \R^m$, and observation model $o_t = o(x_t) \in \mathcal{O} \subseteq \R^p$ (denote by $o_t^i$ the $i$-th element of this observation). In particular, we focus on a Lidar observation model that measures the locations where $n_{rays}$ evenly-spaced rays originating at the robot first make contact with an object in the environment. In this model, $\mathcal{O} = \R^{2n_{rays}}$, and the points are expressed in robot-centric coordinates. 
Given a goal location $x_g$, we seek a control policy $u_t = \pi(o_t)$ that drives $x_t \to x_g$ as $t\to\infty$. Moreover, we require that $u_t$ avoids colliding with obstacles in the environment, defined as occurring when $\min_{i=0,\ldots,n_{rays}} ||o_t^i|| \leq d_c$, where $d_c > 0$ is a user-defined margin (i.e. collision occurs when the robot gets too close to any obstacle in the environment). We make a number of assumptions about the robot's dynamics and capabilities, as well as on the structure of the environment. Some of these assumptions are standard, while others are particular to our approach, as outlined in Section~\ref{overview}. \subsection{Assumptions}\label{assumptions} \subsubsection{Dynamics \& State Estimation}\label{assumptions_dynamics} Our approach is applicable in two regimes. If a (potentially noisy) state estimate is available, we make no assumptions on the dynamics other than controllability (a standard assumption). If no state estimate is available (other than a measurement of range $\rho$ and bearing $\phi$ to the goal point), then we assume that the robot's dynamics are controllable and \textit{approximately local}, by which we mean the state update has the form $x_{t+1} - x_{t} \approx f_\Delta(u_t)$. This means that the change in state (expressed in a local frame) can be approximated using only the control input (i.e. without a state estimate). This second case is motivated by our choice of hardware platform (a mobile robot with Dubins car dynamics, which satisfy this assumption and thus do not require any state estimation), but we include both cases for completeness. \subsubsection{Observation Model} We assume that the robot observes its environment using a 2D, finite-resolution, finite-range Lidar sensor. This sensor measures the $xy$ location in the robot frame of the first point of contact with an obstacle along each of $n_{rays}$ evenly-spaced rays. 
If no contact point is detected within maximum range $d_o$, the measurement saturates at that range. This model distinguishes us from less-realistic overhead camera models in \cite{anonymous2021robust}, \textcolor{black}{but it is important to note that we do not explicitly model sensor noise (we hope to extend our theoretical analysis to cover sensor noise in a future work).} \subsubsection{Environment} We assume only that there are no ``hedgehog'' obstacles with spikes thinner than the gap between adjacent Lidar beams and that all obstacles are bounded. \section{Overview}\label{overview} To solve this safe navigation problem, we propose the controller architecture in Fig.~\ref{fig:architecture}. The core of this controller is a Control Barrier Function (CBF) defined on the space of observations, which enables safe observation-based control by guaranteeing that the robot will not collide with any obstacles. Unlike other CBF-based approaches, which define the CBF in terms of the state-space and require accurate state estimates and knowledge of the environment \cite{ames_cbf}, our approach defines the CBF directly in observation-space. We also use a Control Lyapunov Function (CLF) to guide the robot towards its goal. In an offline phase, we use neural networks to learn the observation-space CBF and CLF. In an online phase, we use the learned CBF and CLF in a hybrid controller that switches between goal-seeking and exploratory behavior to safely navigate an \textit{a priori} unknown environment. To ensure that the CBF and CLF constraints are respected during the online phase, the controller uses its knowledge of system dynamics to approximate future observations with a one-step horizon, using this ``approximate lookahead'' to select optimal control actions given the CBF/CLF constraints. 
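As a concrete illustration of the sensor model assumed above (evenly-spaced rays, robot-frame returns, saturation at $d_o$), the following sketch simulates a 2D Lidar against circular obstacles. The circular-obstacle parametrisation and all function names are our own illustrative assumptions, not part of the paper's implementation.

```python
import numpy as np

# Toy 2D Lidar matching the stated assumptions: n_rays evenly-spaced rays,
# returns expressed in robot-centric coordinates, saturation at range d_o.
# Obstacles are modelled as circles (cx, cy, r) purely for illustration.
def lidar_scan(pose, obstacles, n_rays=32, d_o=5.0):
    x, y, theta = pose
    angles = theta + np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    scan = np.empty((n_rays, 2))
    for i, a in enumerate(angles):
        d = d_o  # saturate if no contact point is found within range
        for cx, cy, r in obstacles:
            ox, oy = cx - x, cy - y
            proj = ox * np.cos(a) + oy * np.sin(a)   # along-ray distance
            perp2 = ox**2 + oy**2 - proj**2          # squared miss distance
            if proj > 0 and perp2 < r**2:
                d = min(d, proj - np.sqrt(r**2 - perp2))
        # first contact point, expressed in the robot frame
        scan[i] = d * np.cos(a - theta), d * np.sin(a - theta)
    return scan
```

A ray pointed straight at a circle of radius 0.5 centred two units ahead returns a contact point at distance 1.5, while rays that miss every obstacle saturate at $d_o$.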
Section~\ref{certificates} describes an extension of CBF theory to include control from observations, and Section~\ref{hybrid_controller} describes the hybrid goal-seeking/exploratory controller, including the approximate lookahead strategy. Section~\ref{learning} describes our approach to learning observation-space CBFs and CLFs using neural networks. Section~\ref{experiments} presents experiments, both in simulation and in hardware, demonstrating the performance of our approach. \begin{figure}[b] \centering \includegraphics[width=0.8\linewidth]{figs/block_diagram.png} \caption{Block diagram of controller architecture.} \label{fig:architecture} \end{figure} \section{Ensuring Safety and Convergence with Observation-space Certificates}\label{certificates} As shown in Fig.~\ref{fig:architecture}, at the foundation of our approach is a pair of certificate functions: a Control Barrier Function (CBF) that allows the robot to detect and avoid obstacles, and a Control Lyapunov Function (CLF) that guides the robot towards its goal. Existing approaches to designing CBF/CLF-based controllers have defined these certificates as functions of the robot's state \cite{ames_cbf,Choi2020,Qin2021,Dean2020a}; however, this dependence on state means that state-based certificates, particularly CBFs, have difficulty generalizing to new environments. For example, a state-based CBF might encode regions of the state space corresponding to obstacles, but if the obstacles move then the entire CBF is invalidated. A more flexible implementation is to define these certificates as functions of observations (e.g. Lidar returns and the range and bearing to the goal). Instead of encoding which regions of the state space are ``inside'' obstacles, an observation-based CBF can encode which observations indicate that the robot is unsafe (e.g. when the minimum distance measured via Lidar is less than some threshold). 
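The minimum-distance example just mentioned can be written down directly as an observation-space barrier candidate. The sketch below (our illustration, not the learned function) encodes only the sign convention, with the decrease condition along trajectories left to learning:

```python
import numpy as np

# Hand-written observation-space barrier candidate: negative (safe) when
# every Lidar return is farther than the margin d_c, positive (unsafe)
# otherwise.  Only the boundary conditions are captured here.
def h_lidar(scan, d_c=0.3):
    """scan: (n_rays, 2) array of contact points in the robot frame."""
    return d_c - np.min(np.linalg.norm(scan, axis=1))

safe_scan = np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, 0.0]])
unsafe_scan = np.array([[0.2, 0.0], [0.0, 2.0], [-1.5, 0.0]])
assert h_lidar(safe_scan) < 0 and h_lidar(unsafe_scan) > 0
```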
In this paper, we extend traditional state-based CBFs to observation-based CBFs (oCBFs) defined as functions of observations. Formally, oCBFs are scalar functions $h: \mathcal{O} \mapsto \R$ such that for some $0 \leq \alpha_h < 1$ \begin{align} o\in\mathcal{O}_{safe} \ \implies \quad &h(o) \leq 0 \label{ocbf_safe}\\ o\in\mathcal{O}_{unsafe} \ \implies \quad &h(o) \geq 0 \label{ocbf_unsafe} \\ \forall x\in\mathcal{X}\ \exists u\in\mathcal{U}\ \text{s.t.}\quad &h\left[o(f(x_t, u))\right] - \alpha_h h[o(x_t)] \leq 0 \label{ocbf_dynamics} \end{align} Similarly, an observation-based CLF (oCLF) can be defined as a scalar function $V: \mathcal{O} \mapsto \R$ such that for some $0 \leq \alpha_V < 1$ \begin{align} V(o) \geq 0;\quad o\in\mathcal{O}_{goal}\iff V(o) = 0 \label{oclf_pd}\\ \forall x\in\mathcal{X}\ \exists u\in\mathcal{U}\ \text{s.t.}\ V\left[o(f(x_t, u))\right] - \alpha_V V[o(x_t)] \leq 0 \label{oclf_dynamics} \end{align} Of course, these definitions require that the space of observations $\mathcal{O}$ is rich enough to both differentiate between safe and unsafe sets $\mathcal{O}_{safe}$ and $\mathcal{O}_{unsafe}$ (e.g. via Lidar measurements) and detect when the robot has reached the goal in $\mathcal{O}_{goal}$ (e.g. by measuring range to the goal). In some ways, this extension is trivial, since every oCBF $h: \mathcal{O} \mapsto \R$ (resp. oCLF $V: \mathcal{O} \mapsto \R$) defines a state-space CBF $h \circ o: \mathcal{X} \mapsto \R$ (resp. CLF $V \circ o: \mathcal{X} \mapsto \R$), and so oCBFs and oCLFs inherit the safety and convergence guarantees of their state-space equivalents. In particular, as long as a policy selects actions from the set $\mathcal{K}_{CBF} = \set{u\ :\ h\left[o(f(x, u))\right] - \alpha_h h[o(x)] \leq 0}$, the system will remain safe (or, if started in the unsafe region, move to the safe region).
Similarly, as long as a policy selects actions from the set $\mathcal{K}_{CLF} = \set{u\ :\ V\left[o(f(x, u))\right] - \alpha_V V[o(x)] \leq 0}$, the system will converge to the goal. In addition to inheriting these safety and convergence guarantees, oCBFs and oCLFs also inherit the two main drawbacks of CBFs and CLFs. First, it is very difficult to construct $h$ and $V$ by hand to satisfy conditions~\eqref{ocbf_dynamics} and~\eqref{oclf_dynamics}. Second, if the intersection $\mathcal{K}_{CBF} \cap \mathcal{K}_{CLF}$ is empty, then traditional CBF/CLF controllers have to choose between preserving safety or convergence. As such, the novelty of our approach is not in the straightforward extension to observation-based certificates, but in our solutions to these two drawbacks. To address the first drawback, we use neural networks to learn an oCBF and oCLF, allowing our approach to generalize to previously-unseen environments; to solve the second issue, we use a novel hybrid control architecture to prove the safety and convergence of the resulting learning-enabled controller. We discuss each of these advances in the next two sections; we begin by discussing the hybrid control architecture, then describe how to learn oCBFs and oCLFs for use with that controller. \section{Hybrid CBF/CLF Control Policy}\label{hybrid_controller} oCBF and oCLF certificates guarantee safety and stability by defining the sets of allowable control inputs $\mathcal{K}_{CBF}$ and $\mathcal{K}_{CLF}$. As long as the input lies within $\mathcal{K}_{CBF}$ the robot will remain safe, and as long as it remains within $\mathcal{K}_{CLF}$ it will converge to its goal. However, there is no guarantee that these sets will have a non-empty intersection. We can easily construct a ``bug-trap'' environment like that in Fig.~\ref{fig:two-modes-operation} where following the oCLF would lead the robot into an obstacle, violating the oCBF.
In these situations, it is common to relax the oCLF condition, allowing the robot to remain safe even if it can no longer reach the goal \cite{Castaneda2020}. Although this relaxation preserves safety, it can lead to \textit{deadlock} when the robot becomes stuck and unable to reach its goal. To avoid deadlock (i.e. ensure \textit{liveness}), we propose a hybrid controller with two modes, \textbf{G}oal-seeking and \textbf{E}xploratory, as in Fig.~\ref{fig:two-modes-operation}. We will first intuitively explain the operation of these two modes, then provide a formal description of each mode and a proof of the safety and liveness of this controller. Both modes enforce the oCBF condition $u \in \mathcal{K}_{CBF}$ at all times, but they differ in their use of the oCLF. The goal-seeking mode enforces the oCLF condition along with the oCBF condition, $u \in \mathcal{K}_{CBF} \cap \mathcal{K}_{CLF}$, maintaining safety while approaching the goal. When it is no longer feasible to satisfy both the oCLF and oCBF conditions, the robot switches to the exploratory mode, where it ignores the oCLF condition and executes a random walk around a level set of the oCBF (this can be seen as randomly exploring the region near the surface of the obstacle that caused it to become stuck, inspired by classic ``right-hand rules'' for maze navigation but adapted for a nonlinear dynamical system). The robot switches back to the goal-seeking mode when it encounters a region where the oCLF has decreased below the value where the robot became stuck (indicating that the robot is now closer to the goal than it was when beginning its exploration). This process is visualized in Fig.~\ref{fig:two-modes-operation}.
Note that this controller is substantially different from the vision-based hybrid controller in \cite{anonymous2021robust}; ours switches between seeking the goal and exploring an unknown environment, while \cite{anonymous2021robust} switches between different controllers hand-designed for navigating a known environment. \begin{figure}[th] \centering \includegraphics[width=0.9\linewidth]{figs/controller_modes_combined.png} \caption{The hybrid control scheme allows the robot to escape potential deadlocks, as in this bug-trap environment.} \label{fig:two-modes-operation} \end{figure} Formally, the robot's behavior is given by solving an optimal control problem with a one-step horizon (in the goal-seeking mode) or executing a constrained stochastic policy (in the exploratory mode). As opposed to a traditional MPC policy, which must consider a multi-step horizon, the oCBF and oCLF encode all the information needed to ensure long-term safety and convergence into a single step. Crucially, the oCBF and oCLF rely only on local information: Lidar observations and range and bearing relative to the goal. In the goal-seeking mode, the robot solves the optimization: \vspace{-1em} \begin{subequations} \begin{align} u = \argmin_u & \quad \lambda_1 ||u|| \label{eq:g_controller} \\ \rm{s.t.} & \quad V_{t+1} - \alpha_V V_t \leq 0 \label{eq:g_clf}\\ & \quad h_{t+1} - \alpha_h h_t \leq 0 \label{eq:g_cbf} \end{align} \end{subequations} Constraint~\eqref{eq:g_clf} ensures that the controller progresses towards the goal, while constraint~\eqref{eq:g_cbf} ensures that the controller remains safe. In practice, \textcolor{black}{we solve an unconstrained analogue of this problem using a penalty method}, incorporating the oCLF and oCBF constraints into the objective with penalty coefficients $\lambda_2$ and $\lambda_3$, where $\lambda_1 = 0.01$, $\lambda_2 = 1.0$, and $\lambda_3 = 10^3$ such that $\lambda_3 \gg \lambda_2 \gg \lambda_1$.
\textcolor{black}{We find that this choice of penalty coefficients results in good performance in practice by effectively filtering out any solutions that do not satisfy the constraints~\eqref{eq:g_clf} and~\eqref{eq:g_cbf}.} We also note that even if this approach fails to find a feasible solution, the controller can immediately detect this fault, raise an error, and switch to a fail-safe mode. The robot transitions from the goal-seeking mode to the exploratory mode when the oCLF constraint in \eqref{eq:g_controller} becomes infeasible, indicating that the robot is stuck and must explore its environment to find a new path to the goal. Once in the exploratory mode, the robot follows a stochastic policy by discretizing the input space and sampling from the (appropriately normalized) distribution \begin{equation} \begin{aligned} \rm{Pr}(u) \propto \begin{cases} 0 \qquad h_{t+1} - (1-\alpha_h)h_t \geq 0 \vee\ |h_{t+1} - h_0| \geq \epsilon_h \\ 1 / \rm{exp}\left( \lambda_1[h_{t+1} - (1-\alpha_h)h_t]_+\right. \qquad \text{otherwise}\\ \quad + \left . \lambda_2 [|h_{t+1} - h_0| - \epsilon_h]_+ + \lambda_3 v^2 \right) \end{cases} \end{aligned}\label{eq:e_controller} \end{equation} where $h_0$ is the value of the oCBF recorded upon entering the exploratory mode, $\epsilon_h$ is the width of the region around the obstacle that the policy should explore, $v$ is the forward velocity implied by control input $u$ (included to encourage exploration), $[x]_+ = \max(x, 0.001 x)$ is the leaky rectified linear unit, and $\lambda_i$ are coefficients specifying the relative importance of each term. In our experiments, $\lambda_1 = \lambda_2 = 10^3$, and $\lambda_3 = -0.1$ to encourage faster exploration. The robot switches from the exploratory to the goal-seeking mode when it reaches a state where the oCLF value decreases below $\alpha_V V_0$, where $V_0$ is the value of the oCLF observed upon most recently switching to the exploratory mode. 
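To make the two modes concrete, here is a minimal sketch of both policies over a discretised input set. The one-step predictions \texttt{h\_next(u)} and \texttt{V\_next(u)} stand in for the approximate lookahead; all function names and the toy numbers are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of the two control modes over a discretised input set U (scalar
# inputs for this toy 1D example).  h_next(u), V_next(u) stand in for the
# approximate one-step lookahead through the dynamics and sensor model.
def goal_seeking(U, h_t, V_t, h_next, V_next, alpha_h=0.9, alpha_V=0.99,
                 lam1=0.01, lam2=1.0, lam3=1e3):
    # penalty-method analogue of the constrained one-step problem
    def cost(u):
        return (lam1 * abs(u)
                + lam2 * max(V_next(u) - alpha_V * V_t, 0.0)
                + lam3 * max(h_next(u) - alpha_h * h_t, 0.0))
    return min(U, key=cost)

def exploratory_probs(U, h_t, h_0, h_next, v_of_u, alpha_h=0.9, eps_h=0.5,
                      lam1=1e3, lam2=1e3, lam3=-0.1):
    leaky = lambda z: max(z, 0.001 * z)  # the [z]_+ of the text
    p = np.zeros(len(U))
    for i, u in enumerate(U):
        hn = h_next(u)
        if hn - (1 - alpha_h) * h_t >= 0 or abs(hn - h_0) >= eps_h:
            continue  # probability zero: unsafe or leaves the explored band
        p[i] = np.exp(-(lam1 * leaky(hn - (1 - alpha_h) * h_t)
                        + lam2 * leaky(abs(hn - h_0) - eps_h)
                        + lam3 * v_of_u(u) ** 2))
    # assumes at least one admissible input, so the sum is positive
    return p / p.sum()

# toy 1D example: the CLF decreases for positive u, the CBF is always safe
U = [-1.0, 0.0, 1.0]
u_star = goal_seeking(U, h_t=-1.0, V_t=2.0,
                      h_next=lambda u: -1.0, V_next=lambda u: 2.0 - u)
```

In the toy example the selected input is $u=1$, the candidate that makes the most progress on the oCLF while satisfying the oCBF penalty.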
Both the optimization problem \eqref{eq:g_controller} and the stochastic policy \eqref{eq:e_controller} are evaluated by discretizing the input space and using the approximate one-step lookahead strategy discussed in Section~\ref{lookahead}; \textcolor{black}{in practice, both policies can be evaluated quickly by parallelizing the search over the input space.} Since the oCBF constraint is enforced in both the goal-seeking and exploratory modes, the hybrid controller will be safe regardless of which mode it is in. As a remark, the discrete-time formulation of oCBFs allows the robot to tolerate changes in its environment, both continuous (moving an obstacle), and discontinuous (adding an obstacle). As long as the change does not flip the sign of $h$ (in essence, as long as the new obstacle position is far enough away that the robot does not instantly become unsafe), the oCBF controller will recover and avoid the obstacle. If the new obstacle makes the robot unsafe, the oCBF can immediately detect that fault. It remains to prove that this controller will eventually reach the goal. We will do this by combining the following lemmas. \begin{lemma}\label{lemma_1} While in the goal-seeking mode, the robot converges to the goal. \end{lemma} \begin{proof} This property will hold as long as constraint~\eqref{eq:g_clf} holds for the optimal solution of \eqref{eq:g_controller}. Since the robot transitions out of the goal-seeking mode when this constraint is violated, this condition must hold whenever the robot is in this mode. \end{proof} \vspace{-0.8em} \begin{lemma}\label{lemma_2} As long as there exists a path from the robot's current state to the goal such that the oCBF condition holds along that path (i.e. as long as the goal is reachable), the robot will eventually exit the exploratory mode, at which point the value of the oCLF will have decreased by a factor of $\alpha_V$ relative to its value entering the exploratory mode. 
\end{lemma} \begin{proof} Let $x_0$ denote the point where the system enters the exploratory mode, and let $h_0$ (resp. $V_0$) denote the value of the oCBF (resp. oCLF) observed at $x_0$. In the absence of any oCBF, the robot would be able to continue from $x_0$ along a trajectory towards the goal $\tilde{x} = \tilde{x}_1, \tilde{x}_2, \ldots$ such that $V_{t+1} \leq \alpha_V V_t$ at each point. There are two cases: either the goal lies inside the $h_0$-superlevel set tangent to $x_0$ (the goal lies inside an obstacle represented by the oCBF) or it does not. In the former case, no safe path exists to the goal (violating the assumption of this lemma). In the latter case, since we assume contours of the oCBF to be closed, $\tilde{x}$ will eventually leave the $h_0$-superlevel set bordered by $x_0$. As a result, there exists a point along this hypothetical oCLF-only trajectory $\tilde{x}_T$ that lies near the $h_0$-level set of the oCBF and $V(\tilde{x}_T) \leq \alpha_V V(x_0)$. Since the system is assumed to be controllable and contours of the oCBF closed, the stochastic policy in \eqref{eq:e_controller} will eventually explore all states in the region around the oCBF $h_0$-level set. Thus, the stochastic exploration policy will eventually approach $\tilde{x}_T$ (after circling around the $h_0$-level set), at which point the hybrid controller will switch to the goal-seeking mode. \end{proof} \begin{theorem} The hybrid goal-seeking/exploration controller in Fig.~\ref{fig:two-modes-operation} will eventually reach the goal. \end{theorem} \begin{proof} Denote by $[t_i, T_i]$ the period of time the robot spends in the goal-reaching mode on the $i$-th time it enters that mode, and consider the mode-switching behavior of the robot over an infinitely long horizon. Since Lemma~\ref{lemma_2} shows that the system will always exit the exploratory mode, either there are infinitely many episodes in which the robot is in the goal-seeking mode (i.e.
$i = 0, 1, \ldots, \infty$), or the robot eventually reaches the goal-seeking mode and stays there (i.e. $i = 0, 1, \ldots, N_g$ and $T_{N_g} = \infty$). We present the proof for the first case; the second case follows similarly. Consider the trace of oCLF values \begin{equation} V({t_0}), \ldots, V({T_0}), V({t_1}), \ldots, V({T_i}), \ldots, V({t_{i+1}}), \ldots \label{eq:V_seq} \end{equation} and let $t'$ denote an index into this sequence (i.e. the cumulative time spent in the goal-reaching mode). We will show that this sequence is upper-bounded by the geometric sequence $\alpha_V^{t'} V({t_0})$. Within any episode $[t_i, T_i]$, the sequence $V({t_i}), \ldots, V({T_i})$ is upper-bounded by $V({t}) \leq \alpha_V^{(t - t_i)} V({t_i})$ by the properties of the oCLF. Further, by Lemma~\ref{lemma_2} we have the relation $V(t_{i+1}) \leq \alpha_V V(T_{i})$. As a result, we see that each term of the concatenated sequence~\eqref{eq:V_seq} obeys $V(t' + 1) \leq \alpha_V V(t')$. Consequently, as $t' \to \infty$, $V(t') \to 0$. By the properties of the oCLF, this implies that the robot's state approaches the goal as the cumulative time spent in the goal-reaching state increases (regardless of intervening mode switches). Since Lemma~\ref{lemma_2} implies that cumulative goal-reaching time goes to infinity with cumulative overall time (and our system operates in discrete time), this completes the proof that our hybrid controller converges to the goal. \end{proof} \subsection{Approximate one-step lookahead}\label{lookahead} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figs/approximate_lookahead.png} \caption{Future observations can be approximated by translating and rotating past observations.
By considering multiple possible futures, the robot can select an action according to one-step lookahead policies like~\eqref{eq:g_controller} and \eqref{eq:e_controller}.} \label{fig:lidar-lookahead} \end{figure} The control policies~\eqref{eq:g_controller} and \eqref{eq:e_controller} require a prediction of $h_{t+1}$ and $V_{t+1}$ after executing a possible control action $u$. In turn, predicting $h_{t+1}$ and $V_{t+1}$ requires predicting the future observations $o_{t+1}$ and future range and bearing $(\rho_{t+1}, \phi_{t+1})$ to the goal. To generate these predictions, we use prior knowledge of the robot's dynamics to predict\footnote{For robots with local dynamics (as in Section~\ref{assumptions_dynamics}), this can be done without a state estimate; otherwise, a state estimate is required.} $\Delta x = x_{t+1} - x_t$. This change in state implies a change in the robot's position and orientation in the 2D workspace, defining a transformation $T_u \in SE(2)$, describing the future position in the robot's current local frame. This transformation can be used to predict future observations, subject to errors introduced by discontinuities in the environment: \begin{align} o^i_{t+1} &= T_u^{-1} o^i_{t} &\textit{(Lidar update)}\label{lidar_update}\\ g_{t+1} &= T_u^{-1} \rho_{t} \mat{\cos\phi & \sin\phi}^T &\textit{(Goal update)}\label{goal_update}\\ \rho_{t+1} &= ||g_{t+1}||;\ \phi_{t+1} = \angle g_{t+1} &\textit{(Range and bearing)}\label{range_update} \end{align} The predicted $o_{t+1}$, $\rho_{t+1}$, and $\phi_{t+1}$ can be used to evaluate $h_{t+1}$ and $V_{t+1}$ in the goal-seeking and exploratory control policies. Of course, the approximate nature of this update means that the safety and convergence properties of the oCBF and oCLF are no longer guaranteed to apply, but this issue can be mitigated in practice by adding a margin to the oCBF/oCLF conditions (e.g. 
$V_{t+1} - (1-\alpha_V) V_t \leq -\gamma_V$ in~\eqref{eq:g_controller} with $\gamma_V \geq 0$, as in \cite{Dean2020a}). Following our discussion of how to learn an oCBF and oCLF in the next section, we will use empirical results from simulation and hardware to demonstrate that this hybrid lookahead controller exhibits safe and convergent behavior in a range of environments, despite its use of approximations. \section{Learning Observation-space CBFs}\label{learning} Recent works have successfully applied neural networks to learn state-space CBFs \cite{Qin2021,Peruffo2020} and CLFs \cite{Richards2018,Abate2020,Chang2019,Chang2021}. These approaches sample points from the state space, label those points as either safe or unsafe, and then train a neural network to minimize the violation of the state-dependent equivalents of conditions~\eqref{ocbf_safe}--\eqref{ocbf_dynamics} (for CBFs) and~\eqref{oclf_pd}--\eqref{oclf_dynamics} (for CLFs) at all points in the training set. To evaluate conditions~\eqref{ocbf_dynamics} and~\eqref{oclf_dynamics}, these approaches either use a fixed control policy~\cite{Richards2018,Abate2020,Peruffo2020} or learn a single control policy~\cite{Qin2021,Chang2019,Chang2021}. These approaches are useful when the safe and unsafe regions of the state space are well-defined and static, but they essentially memorize which regions of the state space are safe and which are unsafe. As a result, they do not generalize when the safe/unsafe regions change, for example when moving between environments with different arrangements of obstacles. There are two key insights in our approach to learning oCBFs and oCLFs, as contrasted with prior approaches to learning CBFs and CLFs. First, defining these certificates in terms of observations allows the learned certificates -- especially the oCBF -- to generalize much more easily to new environments.
Second, we train our oCBF and oCLF not simply to satisfy conditions~\eqref{ocbf_dynamics} and~\eqref{oclf_dynamics} for a single example controller but to ensure the feasibility of the goal-seeking control policy~\eqref{eq:g_controller}. This task-specific learning approach allows us to learn an oCLF and oCBF that are directly relevant to the task at hand, as well as improve generalization by not committing to a single control policy. In our approach, the oCLF is defined as $V(\rho, \sin\phi, \cos\phi) = V_\omega(\rho, \sin\phi, \cos\phi) + \rho^2 + (1-\cos\phi)/2$, where $V_\omega$ is a neural network with parameters $\omega$ (two hidden layers, 48 ReLU units each, one output dimension), $\rho$ and $\phi$ are the range and heading to the goal in the local frame, respectively, and the last term encodes the prior belief that the oCLF should correlate with distance from the goal. Because the Lidar observations $o$ are symmetric under permutation, we must adopt a different structure for learning an oCBF that reflects this symmetry. Taking inspiration from \cite{Qin2021}, we define a permutation-invariant encoder $e(o) = \max_i e_\theta(o^i)$, which passes every point in the Lidar observation through a neural network $e_\theta$ with parameters $\theta$ (two hidden layers, 48 ReLU units each, 48 output dimensions). $e_\theta$ lifts each two-dimensional point to a 48-dimensional space, and we then take the element-wise maximum among all observed points. We then construct the oCBF as $h(o) = h_\sigma(e(o)) - \min_i ||o^i|| + d_c$, where $h_\sigma$ is a neural network with parameters $\sigma$ (two hidden layers, 48 ReLU units each, one output dimension) and the minimum is taken over all points in the Lidar observation $o$. The final two distance terms impose the prior belief that the oCBF should correlate with the distance to the nearest obstacle, and we use an additional learned term to ensure that $h$ satisfies~\eqref{ocbf_dynamics}.
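A minimal numpy sketch of this permutation-invariant architecture is given below; random weights stand in for the trained parameters $\theta$ and $\sigma$, and the margin value $d_c$ is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP layers (stand-ins for trained parameters)."""
    return [(rng.normal(scale=0.3, size=(m, k)), np.zeros(k))
            for m, k in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)      # ReLU on hidden layers
    return x

e_theta = mlp([2, 48, 48, 48])          # lifts each 2-D Lidar point to 48 dims
h_sigma = mlp([48, 48, 48, 1])
d_c = 0.3                               # hypothetical margin constant

def encode(o):
    """Permutation-invariant encoder: element-wise max over all points."""
    return forward(e_theta, o).max(axis=0)

def h(o):
    """oCBF: learned term plus a distance-to-nearest-point prior."""
    return float(forward(h_sigma, encode(o))[0]) \
        - float(np.linalg.norm(o, axis=1).min()) + d_c
```

Because the encoder reduces over points with a maximum, $h$ is invariant to the ordering of the Lidar returns and accepts scans with any number of points.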
To train these neural networks, we sample $N$ points uniformly from the state space, use a simulated Lidar model to compute the observation $o$ at each point, then use stochastic gradient descent to minimize the loss $L = \frac{1}{N}\sum_{i=0}^N (a_1 \text{ReLU}(\epsilon_h + h(o_i))\mathbf{1}_{safe}$ $+ a_2 \text{ReLU}(\epsilon_h - h(o_i))\mathbf{1}_{unsafe} + a_3 L_g )$, where $a_1 = a_2 = 100$, $a_3 = 1$, $\mathbf{1}_{safe}$ and $\mathbf{1}_{unsafe}$ are indicator functions for the safe and unsafe sets, and $L_g(o)$ is the optimal cost of the relaxed goal-seeking control policy~\eqref{eq:g_controller}. We also find that adding a regularization term for the L2-norm of $h_\sigma$ and $V_\omega$ helps improve training performance. Loss curves for the training and validation datasets are shown in Fig.~\ref{fig:loss_curves}, covering 72 training epochs. \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{figs/loss_curves_smol.png} \caption{Loss $L$ on a dataset of 10,000 points ($10\%$ reserved for validation).} \label{fig:loss_curves} \end{figure} \subsection{Verification of learned certificates} \textcolor{black}{ In Section~\ref{hybrid_controller}, we prove that if $h$ and $V$ are valid oCBFs and oCLFs, then our hybrid controller is guaranteed to reach the goal and remain safe. However, because we learn the oCBF and oCLF using neural networks trained on a finite number of sampled points, we do not have an explicit guarantee that the learned $h$ and $V$ will be valid. Related works in certificate learning rely on probabilistic verification methods such as those discussed in \cite{Qin2021} and \cite{Boffi2020}. These methods draw on results from statistical learning theory to show that if a learned certificate is valid (with some margin) on a large set of sample points, then there is a high probability that it will be valid in general. 
Even if a certificate is invalid in a small subset of the state space, almost-Lyapunov theory \cite{Liu2020} suggests that global safety and stability guarantees may still exist.} \textcolor{black}{To check the validity of our learned certificates, we sample 100,000 points from the state space and check whether constraint~\eqref{eq:g_cbf} is feasible at each point; we found that this constraint is feasible at each tested point. However, we acknowledge that this sampling analysis falls short of a formal proof, and factors such as sensor noise and inaccuracies in the approximate lookahead model mean that the formal guarantees from Section~\ref{hybrid_controller} may not hold in practice. To demonstrate empirically that our learned certificates yield a safe, performant controller in practice, the next section presents experimental results showing that our controller remains safe and reaches the goal in a wide range of simulated and hardware environments. In our future work, we hope to extend our theoretical analysis to explain this strong empirical performance. } \section{Experiments}\label{experiments} We validate our hybrid lookahead controller in two ways. First, we compare its performance against model-predictive control (MPC), reinforcement learning (RL), and state-based CBF/CLF baselines in simulation with a Dubins car model, demonstrating that our controller exhibits improved performance and generalizes well beyond the training environment. Second, we deploy our controller in hardware to navigate a mobile robot through a changing environment. \subsection{Generalization beyond training environment} To assess the controllers' ability to generalize beyond the environments in which they were trained, our oCBF/oCLF controller, an \textcolor{black}{end-to-end} RL controller, and a state-based CBF/CLF controller were all trained in a static environment with 8 randomly placed obstacles.
For the RL agent, we use \textcolor{black}{a neural-network policy with the same inputs as our controller, trained in an end-to-end manner using} proximal policy optimization (PPO) as implemented in the OpenAI Safe RL benchmarks~\cite{Ray2019}. The RL agent was trained with a large reward for reaching the goal, a smaller dense reward equal to $-\rho^2 + (1-\cos\phi)/2$, and a large cost for colliding with objects. The state-based CBF/CLF method was trained identically to our own method except with $h(x) = h_\sigma(x)$. The MPC method is model-based and does not need training; it constructs a convex approximation of the locally-observable free space (inspired by \cite{Deits2015}), plans a path towards the goal within that region, and executes the first \SI{0.1}{s} of that path. \textcolor{black}{When it becomes stuck, MPC attempts to navigate around the boundaries of obstacles.} All controllers were run at \SI{10}{Hz}, and numerical simulations of continuous-time Dubins car dynamics occurred at \SI{100}{Hz}. Example trajectories for each controller navigating a Dubins car through a randomized 2D environment are shown in Fig.~\ref{fig:random_environment}, and Fig.~\ref{fig:generalization_bar_chart} shows the collision and goal-reaching rates for each controller across 500 such randomized environments (along with the average time required to reach the goal when successful). These data show that our proposed approach achieves a $100\%$ safety rate while reaching the goal within \SI{10}{s} in $93.2\%$ of trials (less than $100\%$ due to obstacles occluding the goal in some cases). Our method significantly outperforms the other learning-based methods (both of which have lower safety and goal-reaching rates due to difficulty generalizing beyond the training environment). MPC achieves slightly worse safety and goal-reaching rates than our approach: \textcolor{black}{$99.6\%$ and $90.4\%$, respectively}. 
Typical behaviors for each controller are shown in Fig.~\ref{fig:random_environment}: the state-based learned CBF does not adapt to the new environment and is unsafe, while the PPO policy is not precise enough to pass through the gap to reach the goal (although it remains safe). Only our approach and MPC consistently reach the goal safely, \textcolor{black}{although our approach achieves a slightly higher goal-reaching rate and is much less computationally intensive (\SI{18}{ms} of online computational time for our method vs. \SI{102}{ms} for MPC).} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{figs/annotated_trajectories_contingent.png} \caption{Plot of trajectories for observation-space CBF/CLF, state-based CBF/CLF, MPC, and end-to-end RL policies in a random environment.} \label{fig:random_environment} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{figs/success_rates_contingent.png} \caption{Goal-reaching rate, safety rate, and time-to-goal for oCBF/oCLF, state-based CBF/CLF, MPC, and PPO on waypoint tracking, averaged over 500 randomized environments. Trial runs were limited to a maximum of \SI{10}{s}, and time-to-goal was averaged only over trials that reached the goal.} \label{fig:generalization_bar_chart} \end{figure} \subsection{Hardware demonstration}\label{hardware} To validate our controller in the real world, we deployed our controller on a TurtleBot 3 mobile robot. We model this platform as a Dubins car with control inputs for forward and angular velocity. The TurtleBot is equipped with an HLS-LFCD2 2D laser scanner with $1^\circ$ resolution in a $360^\circ$ range, which we downsample to 32 equally-spaced measurements. Lidar scans are preprocessed to convert from polar to local Cartesian coordinates before being passed to the learned encoder. Range and bearing to the goal are estimated using odometry, demonstrating robustness to imperfect state estimation.
The controller is implemented with a zero-order hold at \SI{10}{Hz}, but Lidar data are only available at \SI{5}{Hz} (stale scans were used when new data were not available). \textcolor{black}{Although we do not explicitly model sensor uncertainty, we observed both small-magnitude noise and spurious detections in our Lidar data (these effects can be seen in the supplementary video), demonstrating that our controller can handle the noise resulting from real sensors.} We compare our method against the TurtleBot's built-in SLAM module in combination with an online dynamic window path planner. Fig.~\ref{fig:hw} shows our controller successfully avoiding both fixed obstacles and an obstacle thrown into its path mid-experiment. Video footage of these experiments (and others in different environments) is included in the supplementary materials. Our controller successfully escapes the initial trap and avoids the thrown obstacle to reach the goal, while the combined SLAM/planning system becomes stuck when the new obstacle is added. The planner eventually becomes unstuck (after rotating the robot in place to re-create the map), but this issue demonstrates the difficulty of guaranteeing liveness for an online SLAM and planning system. In contrast, our controller is entirely reactive, allowing it to avoid the new obstacle without pausing, providing a significant advantage compared to model-based online planning methods. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figs/snapshots_condensed.png} \caption{(Left) Our controller navigating around static and dynamic obstacles, shown as a composite image. (Right) A SLAM module paired with an online global planner gets stuck when an unexpected object is added to the scene.
A video of these experiments is included in the supplementary materials.} \label{fig:hw} \end{figure} \section{Conclusions and Future Work} Safely integrating rich sensor models into autonomous control systems is a challenging problem at the intersection of robotics, control theory, and machine learning. In this work, we explore one direction for constructing safe, generalizable perception-based controllers. We use neural networks to learn an observation-based control barrier function, which we combine with a hybrid control architecture that allows us to prove the safety and liveness of the resulting learning-enabled controller. We demonstrate in simulation that this architecture outperforms other learning- and model-based control systems \textcolor{black}{(including end-to-end learning using RL)}, and we successfully deploy our controller in hardware. In addition to these successes, it is important to highlight the drawbacks of our approach, which point to interesting areas of future work. In particular, we rely on an approximate model of the Lidar sensor to predict observations one step into the future. This reliance prevents us from easily extending our approach from Lidar to image data, as it is much more difficult to construct an approximate model for image updates. In future work, we hope to replace this model-based update with a generative model learned from data. Using a learned, generative update might also allow us to relax the assumption of local dynamics, presenting a promising line of future study. \bibliographystyle{IEEEtran}
\section{Introduction} Let $X_1, \dots, X_n$ be independent uniform random variables on $[0,1]^2$. Let $d(x,y) = \Vert x - y \Vert_2$ be the Euclidean distance. Let $L(X_1, \dots, X_n)$ be the length of the optimal Traveling Salesman tour through these points, under distance $d(\cdot, \cdot)$. In seminal work, Beardwood et al. (1959) analyzed the limiting behavior of the optimal Traveling Salesman tour length under the random Euclidean model. \begin{theorem}[\cite{Beardwood1959}] There exists a constant $\beta$ such that \[ \lim_{n \to \infty} \frac{L(X_1, \dots, X_n)}{\sqrt{n}} = \beta \] almost surely. \end{theorem} The authors additionally showed in \cite{Beardwood1959} that \[0.625 \leq \beta \leq \beta_+\] where $\beta_+ = 2 \int_{0}^{\infty} \int_0^{\sqrt{3}} \sqrt{z_1^2 + z_2^2} e^{-\sqrt{3}z_1} \left(1 - \frac{z_2}{\sqrt{3}}\right) dz_2 dz_1.$ This integral is equal to approximately $0.92116$ (\cite{Steinerberger2015}). To date, the only improvement to the upper bound was given in \cite{Steinerberger2015}, showing that $\beta \leq \beta_+ - \epsilon_0$, for an explicit $\epsilon_0 > \frac{9}{16}10^{-6}$. In \cite{Steinerberger2015}, the author also claimed to improve the lower bound; however, we have found a fault in the argument. The rest of this note is structured as follows. In Section \ref{sec:approaches}, we present the proof of $\beta \geq 0.625$ by \cite{Beardwood1959}. We then outline the approach of \cite{Steinerberger2015} to improve the bound. Section \ref{sec:lower-bound} corrects the result in \cite{Steinerberger2015}, giving the lower bound $\beta \geq 0.625 + \frac{19}{10368} \approx 0.6268$. Finally, Section \ref{sec:improvement} tightens the argument of \cite{Steinerberger2015} to derive the improved bound, $\beta \geq 0.6277$. \section{Approaches for the Lower Bound}\label{sec:approaches} By the following lemma, we can equivalently study the limiting behavior of $\frac{\mathbb{E}\left[L(X_1, \dots, X_n)\right]}{\sqrt{n}}$.
\begin{lemma}[\cite{Beardwood1959}]\label{lemma:expectation} It holds that \[\frac{\mathbb{E}\left[L(X_1, \dots, X_n)\right]}{\sqrt{n}} \to \beta.\] \end{lemma} Further, we can switch to a Poisson process with intensity $n$. Let $\mathcal{P}_n$ denote a Poisson process with intensity $n$ on $[0,1]^2$. \begin{lemma}[\cite{Beardwood1959}]\label{lemma:poisson} It holds that \[\frac{\mathbb{E}\left[L(\mathcal{P}_n)\right]}{\sqrt{n}} \to \beta.\] \end{lemma} \cite{Beardwood1959} gave the following lower bound on $\beta$. \begin{theorem}[\cite{Beardwood1959}] The value $\beta$ is lower bounded by $\frac{5}{8}$. \end{theorem} \begin{proof} (Sketch) We outline the proof given by \cite{Beardwood1959}, giving a lower bound on $\mathbb{E}\left[L(\mathcal{P}_n)\right]$. Observe that in a valid traveling salesman tour, every point is connected to exactly two other points. To lower bound, we can connect each point to its two closest points. We can further assume that the Poisson process is over all of $\mathbb{R}^2$, rather than just $[0,1]^2$, in order to remove the boundary effect. The expected distance of a point to its closest neighbor is shown to be $\frac{1}{2\sqrt{n}}$, and the expected distance to the next closest neighbor is shown to be $\frac{3}{4 \sqrt{n}}$. Each point contributes half of the expected lengths to its two closest points. Since the number of points is concentrated around $n$, it holds that $\beta \geq \frac{1}{2} \left( \frac{1}{2} + \frac{3}{4}\right)$. \end{proof} Certainly there is room to improve the lower bound. Observe that short cycles are likely to appear when we connect each point to the two closest other points. In \cite{Steinerberger2015}, the author gave an approach to identify situations in which $3$-cycles appear, and then lower-bounded the contribution of correcting these $3$-cycles. We outline the approach below.
\begin{enumerate} \item For point $a$, let $r_1$ be the distance of $a$ to the closest point, and let $r_2$ be the distance to the next closest point. Let $E_a$ be the event that the third closest point is at a distance of $r_3 \geq r_1 + 2 r_2$. \item The probability that $E_a$ occurs is calculated to be $\frac{7}{324}$ for a given point $a$. Therefore, the expected number of points satisfying this geometric property is $\frac{7}{324}n$, and the number of triples involved is at least $\frac{1}{3} \frac{7}{324}n$ in expectation. \item Using the relationship $r_3 \geq r_1 + 2 r_2$, we can show that if $\{a,b,c,d\}$ satisfy the geometric property with $\Vert a - b\Vert = r_1$, $\Vert a - c \Vert = r_2$, and $\Vert a - d \Vert = r_3$, then the closest two points to $b$ are $a$ and $c$, and the closest two points to $c$ are $a$ and $b$. Therefore, the ``count the closest two distances'' method would create a triangle in this situation. \item To correct for the triangle, subtract the lengths coming from the triangle and add a lower bound on the new lengths. The triangle contribution is calculated to be at most $3(r_1 + r_2)$ and the new lengths are calculated to be at least $2 r_3$. Therefore, whenever the geometric property holds for a triplet of points, the calculated contribution is $2r_3 - 3(r_1 + r_2)$. \item The final adjustment is calculated to be $\frac{19}{5184}$. \end{enumerate} There are two errors in this analysis that are both due to inconsistency with counting edge lengths. If edge lengths are counted from the perspective of vertices, then the right thing to do would be to give each vertex two ``stubs.'' These stubs are connected to other vertices, and may form edges if there are agreements. A stub from vertex $a$ to vertex $b$ contributes $\frac{1}{2} \Vert a - b \Vert$ to the path length. In this way, a triangle comprises 6 stubs, and the contribution to the path length is the sum of the edge lengths. 
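As a sanity check on step 2, the probability $\frac{7}{324} \approx 0.0216$ can be recovered numerically from the joint density of the three nearest-neighbor distances stated in Lemma \ref{lemma:density} below. The script is a sketch assuming SciPy is available; we take $n = 1$ without loss of generality and integrate out $r_3$ in closed form first.

```python
import numpy as np
from scipy import integrate

# P(r3 >= r1 + 2 r2) under the joint density of the three nearest-neighbor
# distances of a planar Poisson process with intensity n = 1:
#   h(r1, r2, r3) = (2 pi)^3 r1 r2 r3 exp(-pi r3^2)  on  r1 < r2 < r3.
# Integrating r3 from r1 + 2 r2 to infinity in closed form gives
# exp(-pi (r1 + 2 r2)^2) / (2 pi), leaving the double integral below.
a = np.pi
P, _ = integrate.dblquad(
    lambda r2, r1: 4 * a**2 * r1 * r2 * np.exp(-a * (r1 + 2 * r2) ** 2),
    0.0, np.inf,            # outer variable: r1
    lambda r1: r1,          # inner variable: r2 from r1
    lambda r1: np.inf)
print(P, 7 / 324)           # both approximately 0.021605
```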
On page 35, the author writes $r_1 + r_2 + 2 \Vert a - c\Vert$ as the contribution of the triangle. This is probably a typo and likely $r_1 + r_2 + 2 \Vert b - c\Vert$ was meant instead. However, it should be $r_1 + r_2 + \Vert b - c\Vert \leq 2 (r_1 + r_2)$. Fixing this error helps the analysis. The next step is to redirect the six stubs, and determine their length contributions. We break edge $(b,c)$, which means we need to redirect two stubs, while the four stubs that comprise the edges $(a,b)$ and $(a,c)$ remain. The redirected stubs contribute $\frac{1}{2} \Vert b - d \Vert + \frac{1}{2} \Vert c - e \Vert$. The six stubs therefore yield an overall contribution of $\Vert a - b\Vert + \Vert a - c \Vert + \frac{1}{2} \Vert b - d \Vert + \frac{1}{2} \Vert c - e \Vert \geq r_1 + r_2 + \frac{1}{2} \left(r_3 - r_1\right) + \frac{1}{2} \left(r_3 - r_2\right) = r_3 + \frac{1}{2} (r_1 + r_2)$. In the analysis above Figure 5 in the paper, the author includes the full lengths $\Vert b-d\Vert$ and $\Vert c - e \Vert$. The effect of this is to give points $d$ and $e$ a third stub each. To summarize, the overall contribution for the triangle scenario, after breaking edges $(b,c)$, is $r_3 + \frac{1}{2} (r_1 + r_2) - 2 (r_1 + r_2) = r_3 - \frac{3}{2} r_1 - \frac{3}{2}r_2$. \section{Derivation of the Lower Bound}\label{sec:lower-bound} In this section we use the approach of \cite{Steinerberger2015} to derive a lower bound on $\beta$. \begin{theorem}\label{thm:first-lower-bound} It holds that $\beta \geq \frac{5}{8} + \frac{19}{10368}$. \end{theorem} The proof of Theorem \ref{thm:first-lower-bound} requires Lemmas \ref{lemma:density} and \ref{lemma:integral}. \begin{lemma}[Lemma 4 in \cite{Steinerberger2015}]\label{lemma:density} Let $\mathcal{P}_n$ be a Poisson point process on $\mathbb{R}^2$ with intensity $n$. 
Then for any fixed point $p \in \mathbb{R}^2$, the joint probability density of the distances between $p$ and the three closest points to $p$ is given by \begin{align*} h(r_1,r_2,r_3) &= \begin{cases} e^{-n \pi r_3^2} (2n\pi)^3 r_1 r_2 r_3 & \text{if } r_1 < r_2 < r_3\\ 0 & \text{otherwise.} \end{cases} \end{align*} \end{lemma} \begin{lemma}\label{lemma:integral} \begin{align*} \int_{r_1 = 0}^{\infty} \int_{r_2 = r_1}^{\infty} \int_{r_3 = r_1 + 2 r_2}^{\infty} \left(r_3 - \frac{3}{2}r_1 - \frac{3}{2}r_2 \right) e^{-n \pi r_3^2} r_1 r_2 r_3 dr_3 dr_2 dr_1&= \frac{19}{27648 \pi^3 n^{\frac{7}{2}}} \end{align*} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:first-lower-bound}] First we verify that the lower bound from breaking edge $(b,c)$ is valid. If edge $(a,b)$ is broken instead, the new stub lengths are $\Vert a - c \Vert + \Vert b - c \Vert + \frac{1}{2} \Vert a - d \Vert + \frac{1}{2} \Vert b - e \Vert$. The difference after subtracting the original stub lengths is then equal to \vspace{-6pt} \begin{align*} \Vert a - c \Vert + \Vert b - c \Vert + \frac{1}{2} \Vert a - d \Vert + \frac{1}{2} \Vert b - e \Vert - \left(\Vert a - c \Vert + \Vert b - c \Vert + \Vert a - b\Vert \right) &= \frac{1}{2} \Vert a - d \Vert + \frac{1}{2} \Vert b - e \Vert - \Vert a - b\Vert \\ &\geq \frac{1}{2} r_3 + \frac{1}{2} \left( \Vert a - e \Vert - \Vert a - b \Vert \right) - r_1\\ &\geq \frac{1}{2} r_3 + \frac{1}{2} \left( r_3 - r_1 \right) - r_1\\ &= r_3 - \frac{3}{2}r_1 \end{align*} Similarly, if edge $(a,c)$ is broken, the contribution is lower bounded by $r_3 - \frac{3}{2} r_2$. Since $r_3 - \frac{3}{2} r_1 - \frac{3}{2} r_2 \leq r_3 - \frac{3}{2} r_2 \leq r_3 - \frac{3}{2} r_1$, we conclude that $r_3 - \frac{3}{2} r_1 - \frac{3}{2} r_2$ from breaking edge $(b,c)$ is a valid lower bound.
Therefore, from the discussion in Section \ref{sec:approaches} and Lemma \ref{lemma:density} we adjust the integral in \cite{Steinerberger2015} to give \begin{align*} \beta \geq \frac{5}{8} + \frac{\sqrt{n}}{3} \int_{r_1 = 0}^{\infty} \int_{r_2 = r_1}^{\infty} \int_{r_3 = r_1 + 2r_2}^{\infty} \left(r_3 - \frac{3}{2}r_1 - \frac{3}{2}r_2 \right) e^{-n \pi r_3^2} (2n\pi)^3 r_1 r_2 r_3 dr_3 dr_2 dr_1 \end{align*} From Lemma \ref{lemma:integral}, \[\beta \geq \frac{5}{8} + \frac{\sqrt{n}}{3} (2n\pi)^3 \frac{19}{27648 \pi^3 n^{\frac{7}{2}}} = \frac{5}{8} + \frac{19}{10368} \approx 0.626833.\] \end{proof} \section{An Improvement}\label{sec:improvement} In this section, we improve upon the bound in Section \ref{sec:lower-bound} by tightening the triangle inequality. \begin{theorem}\label{thm:second-lower-bound} It holds that \begin{align*} \beta &\geq \frac{5}{8} + \frac{1}{2} \left(\frac{19}{10368}\right) + \frac{1}{2}\left(\frac{3072 \sqrt{2} - 4325}{5376}\right)\\ &\geq 0.6277. \end{align*} \end{theorem} \begin{proof} Place a Cartesian grid so that point $a$ is at the origin and point $b$ is at $(r_1, 0)$. Then with probability $\frac{1}{2}$, point $c$ falls into the first or fourth quadrant, and with probability $\frac{1}{2}$, point $c$ falls into the second or third quadrant. Conditioned on point $c$ falling into the first or fourth quadrant, the maximum length of $\Vert b - c\Vert$ is $\sqrt{r_1^2 + r_2^2}$. Conditioned on point $c$ falling into the second or third quadrant, the maximum length of $\Vert b - c\Vert$ is $r_1 + r_2$, which corresponds to the computation in Section \ref{sec:lower-bound}. Conditioned on point $c$ falling into the first or fourth quadrant, the length contribution from breaking edge $(b,c)$ is at least $r_3 + \frac{1}{2} \left( r_1 + r_2\right) - \left(r_1 + r_2 + \sqrt{r_1^2 + r_2^2}\right) = r_3 - \frac{1}{2} r_1 - \frac{1}{2} r_2 - \sqrt{r_1^2 + r_2^2}$.
If edge $(a,b)$ is broken instead, the new stub lengths are $\Vert a - c \Vert + \Vert b - c \Vert + \frac{1}{2} \Vert a - d \Vert + \frac{1}{2} \Vert b - e \Vert$. The difference after subtracting the original stub lengths is then equal to \vspace{-6pt} \begin{align*} \Vert a - c \Vert + \Vert b - c \Vert + \frac{1}{2} \Vert a - d \Vert + \frac{1}{2} \Vert b - e \Vert - \left(\Vert a - c \Vert + \Vert b - c \Vert + \Vert a - b\Vert \right) &= \frac{1}{2} \Vert a - d \Vert + \frac{1}{2} \Vert b - e \Vert - \Vert a - b\Vert \\ &\geq \frac{1}{2} r_3 + \frac{1}{2} \left( \Vert a - e \Vert - \Vert a - b \Vert \right) - r_1\\ &\geq \frac{1}{2} r_3 + \frac{1}{2} \left( r_3 - r_1 \right) - r_1\\ &= r_3 - \frac{3}{2}r_1 \end{align*} \vspace{-6pt} Similarly, if edge $(a,c)$ is broken, the contribution is lower bounded by $r_3 - \frac{3}{2} r_2$. Since $r_3 - \frac{1}{2} r_1 - \frac{1}{2}r_2 - \sqrt{r_1^2 + r_2^2} \leq r_3 - \frac{3}{2} r_2 \leq r_3 - \frac{3}{2} r_1$, we conclude that $r_3 - \frac{1}{2} r_1 - \frac{1}{2}r_2 - \sqrt{r_1^2 + r_2^2}$ from breaking edge $(b,c)$ is a valid lower bound. We therefore break edge $(b,c)$. \begin{proposition} If $r_3 \geq r_2 + \sqrt{r_1^2 + r_2^2}$, then the closest points to each of $a,b,c$ are the other two points in the set $\{a,b,c\}$, whenever point $c$ is in the first or fourth quadrant. \end{proposition} \begin{proof} Point $a$: $d(a,b) = r_1$, $d(a,c) = r_2$, and for any $d \notin\{a,b,c\}$, it holds that $d(a,d) \geq r_3 \geq r_2 + \sqrt{r_1^2 + r_2^2}$. Therefore $d(a,d) \geq d(a,b)$ and $d(a,d) \geq d(a,c)$.\\ Point $b$: $d(a,b) = r_1$, $d(b,c) \leq \sqrt{r_1^2 + r_2^2}$, and for any $d \notin\{a,b,c\}$, it holds that $d(b,d) \geq d(a,d) - d(a,b) \geq r_2 + \sqrt{r_1^2 + r_2^2} - r_1$.
Therefore $d(b,d) \geq d(a,b)$ and $d(b,d) \geq d(b,c)$.\\ Point $c$: $d(a,c) = r_2$, $d(b,c) \leq \sqrt{r_1^2 + r_2^2}$, and for any $d \notin\{a,b,c\}$, it holds that $d(c,d) \geq d(a,d) - d(a,c) \geq r_2 + \sqrt{r_1^2 + r_2^2} - r_2 = \sqrt{r_1^2 + r_2^2}$. Therefore $d(c,d) \geq d(a,c)$ and $d(c,d) \geq d(b,c)$. \end{proof} The lower bound on $\beta$ is therefore \begin{align*} \frac{5}{8} + \frac{\sqrt{n}}{3} \int_{r_1 = 0}^{\infty} \int_{r_2 = r_1}^{\infty} \int_{r_3 = r_2 + \sqrt{r_1^2 + r_2^2}}^{\infty} \left(r_3 - \frac{1}{2}r_1 - \frac{1}{2}r_2 - \sqrt{r_1^2 + r_2^2} \right) e^{-n \pi r_3^2} (2n\pi)^3 r_1 r_2 r_3 dr_3 dr_2 dr_1. \end{align*} \begin{lemma}\label{lemma:second-integral} It holds that \small \begin{align*} &\int_{r_1 = 0}^{\infty} \int_{r_2 = r_1}^{\infty} \int_{r_3 = r_2 + \sqrt{r_1^2 + r_2^2}}^{\infty} \left(r_3 - \frac{1}{2}r_1 - \frac{1}{2}r_2 - \sqrt{r_1^2 + r_2^2} \right) e^{-n \pi r_3^2} r_1 r_2 r_3 dr_3 dr_2 dr_1 \\ &= \left[- \frac{\left(\frac{1}{1+\sqrt{2}} \right)^8}{8 \cdot 48} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^7}{7 \cdot 16} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^6}{6 \cdot 16} + \frac{1}{5}\left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)\left(\frac{1}{1+\sqrt{2}} \right)^5 - \frac{13\left(\frac{1}{1+\sqrt{2}} \right)^4 }{64} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^3}{48} + \frac{\left(\frac{1}{1+\sqrt{2}} \right)^2}{32} \right] \frac{15}{16\pi^3 n^{\frac{7}{2}}}. \end{align*} \normalsize \end{lemma} Multiplying the value of the integral in Lemma \ref{lemma:second-integral} by $\frac{\sqrt{n}(2 n \pi)^3}{3}$, we obtain the following lower bound. 
\begin{align*} &\frac{5}{8} + \frac{5}{2} \left[- \frac{\left(\frac{1}{1+\sqrt{2}} \right)^8}{8 \cdot 48} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^7}{7 \cdot 16} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^6}{6 \cdot 16} + \frac{1}{5}\left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)\left(\frac{1}{1+\sqrt{2}} \right)^5 - \frac{13\left(\frac{1}{1+\sqrt{2}} \right)^4 }{64} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^3}{48} + \frac{\left(\frac{1}{1+\sqrt{2}} \right)^2}{32} \right] \\ &= \frac{5}{8} + \frac{3072 \sqrt{2} - 4325}{5376}\\ &\approx \frac{5}{8} + 0.003621 \end{align*} Finally, conditioning on the quadrant, the overall lower bound is \begin{align*} \beta \geq \frac{5}{8} + \frac{1}{2} \left(\frac{19}{10368}\right) + \frac{1}{2}\left(\frac{3072 \sqrt{2} - 4325}{5376}\right) \geq 0.6277 \end{align*} \end{proof} \section*{Appendix} \begin{proof}[Proof of Lemma \ref{lemma:integral}] We can change the order of integration to compute the integral more easily.
\begin{align*} &\int_{r_1 = 0}^{\infty} \int_{r_2 = r_1}^{\infty} \int_{r_3 = r_1 + 2 r_2}^{\infty} \left(r_3 - \frac{3}{2}r_1 - \frac{3}{2}r_2 \right) e^{-n \pi r_3^2} r_1 r_2 r_3 dr_3 dr_2 dr_1\\ &=\int_{r_3 = 0}^{\infty} \int_{r_1 = 0}^{\frac{r_3}{3}} \int_{r_2 = r_1}^{\frac{r_3 - r_1}{2}} \left(r_3 - \frac{3}{2}r_1 - \frac{3}{2}r_2 \right) e^{-n \pi r_3^2} r_1 r_2 r_3 dr_2 dr_1 dr_3\\ &=\int_{r_3 = 0}^{\infty}r_3 e^{-n \pi r_3^2} \int_{r_1 = 0}^{\frac{r_3}{3}} r_1 \int_{r_2 = r_1}^{\frac{r_3 - r_1}{2}} r_2 \left(r_3 - \frac{3}{2}r_1 - \frac{3}{2}r_2 \right) dr_2 dr_1 dr_3\\ &=\int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{3}}r_1 \left( \frac{r_2^2}{2} \left(r_3 -\frac{3}{2}r_1\right) - \frac{1}{2} r_2^3 \right) \Big |_{r_2=r_1}^{\frac{r_3-r_1}{2}} dr_1 dr_3\\ &=\int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{3}}r_1 \left( \frac{\left(\frac{r_3-r_1}{2}\right)^2 - r_1^2}{2} \left(r_3 -\frac{3}{2}r_1\right) - \frac{1}{2} \left( \left(\frac{r_3-r_1}{2}\right)^3 - r_1^3 \right) \right) dr_1 dr_3\\ &=\int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{3}} \left(\frac{9 r_1^4}{8} - \frac{3r_1^3 r_3}{16} - \frac{r_1^2 r_3^2}{4} + \frac{r_1 r_3^3}{16} \right) dr_1 dr_3\\ &=\int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2} \left(\frac{9 r_1^5}{40} - \frac{3r_1^4 r_3}{64} - \frac{r_1^3 r_3^2}{12} + \frac{r_1^2 r_3^3}{32} \right) \Big |_{r_1 = 0}^{\frac{r_3}{3}} dr_3\\ &=\int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2} \left(\frac{9 \left(\frac{r_3}{3}\right)^5}{40} - \frac{3\left(\frac{r_3}{3}\right)^4 r_3}{64} - \frac{\left(\frac{r_3}{3}\right)^3 r_3^2}{12} + \frac{\left(\frac{r_3}{3}\right)^2 r_3^3}{32} \right) dr_3\\ &= \left(\frac{9 \left(\frac{1}{3}\right)^5}{40} - \frac{3\left(\frac{1}{3}\right)^4}{64} - \frac{\left(\frac{1}{3}\right)^3}{12} + \frac{\left(\frac{1}{3}\right)^2}{32} \right) \int_{r_3 = 0}^{\infty} r_3^6 e^{-n \pi r_3^2} dr_3\\ &= \frac{19}{25920} \int_{r_3 = 0}^{\infty} r_3^6 e^{-n \pi r_3^2} 
dr_3\\ &= \frac{19}{25920} \frac{15}{16 \pi^3 n^{\frac{7}{2}}} \\ &= \frac{19}{27648 \pi^3 n^{\frac{7}{2}}} \end{align*} \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:second-integral}] Again we change the order of integration to compute the integral more easily. Given $r_3$, the upper bound on $r_1$ is derived by setting $r_3 = r_1 + \sqrt{2r_1^2} \iff r_1 = \frac{r_3}{1 + \sqrt{2}}$. Given $r_3$ and $r_1$, set $r_3 = r_2 + \sqrt{r_1^2 + r_2^2}$. We have \begin{align*} &\left(r_3 - r_2\right)^2 = r_1^2 + r_2^2\\ &r_3^2 - 2r_2 r_3 + r_2^2 = r_1^2 + r_2^2\\ &r_3^2 - 2r_2 r_3 = r_1^2\\ &r_2 = \frac{r_3^2 - r_1^2}{2r_3} \end{align*} Therefore, \scriptsize \begin{align*} &\int_{r_1 = 0}^{\infty} \int_{r_2 = r_1}^{\infty} \int_{r_3 = r_2 + \sqrt{r_1^2 + r_2^2}}^{\infty} \left(r_3 - \frac{1}{2}r_1 - \frac{1}{2}r_2 - \sqrt{r_1^2 + r_2^2} \right) e^{-n \pi r_3^2} r_1 r_2 r_3 dr_3 dr_2 dr_1\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{1 + \sqrt{2}}} r_1 \int_{r_2 = r_1}^{\frac{r_3^2 - r_1^2}{2r_3}} r_2\left(r_3 - \frac{1}{2}r_1 - \frac{1}{2}r_2 - \sqrt{r_1^2 + r_2^2} \right) dr_2 dr_1 dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{1 + \sqrt{2}}} r_1 \left[\frac{r_2^2}{2} \left( r_3 - \frac{1}{2} r_1 \right) - \frac{1}{6} r_2^3 - \frac{1}{3} \left(r_1^2 + r_2^2 \right)^{\frac{3}{2}}\right] \Big |_{r_2 = r_1}^{\frac{r_3^2 - r_1^2}{2r_3}} dr_1 dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{1 + \sqrt{2}}} r_1 \left[\frac{\left(\frac{r_3^2 - r_1^2}{2r_3} \right)^2}{2} \left( r_3 - \frac{1}{2} r_1 \right) - \frac{1}{6} \left(\frac{r_3^2 - r_1^2}{2r_3} \right)^3 - \frac{1}{3} \left(r_1^2 + \left(\frac{r_3^2 - r_1^2}{2r_3} \right)^2 \right)^{\frac{3}{2}} - \frac{r_1^2}{2} \left( r_3 - \frac{1}{2} r_1 \right) + \frac{1}{6} r_1^3 + \frac{1}{3} \left(r_1^2 + r_1^2 \right)^{\frac{3}{2}}\right] dr_1 dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{1 + 
\sqrt{2}}} r_1 \left[\frac{\left(\frac{r_3^2 - r_1^2}{2r_3} \right)^2}{2} \left( r_3 - \frac{1}{2} r_1 \right) - \frac{1}{6} \left(\frac{r_3^2 - r_1^2}{2r_3} \right)^3 - \frac{1}{3} \left(\frac{\left(r_1^2 + r_3^2 \right)^2}{4r_3^2} \right)^{\frac{3}{2}} - \frac{r_1^2 r_3}{2} + \left(\frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right) r_1^3\right] dr_1 dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{1 + \sqrt{2}}} r_1 \left[\frac{\left(\frac{r_3^2 - r_1^2}{2r_3} \right)^2}{2} \left( r_3 - \frac{1}{2} r_1 \right) - \frac{1}{6} \left(\frac{r_3^2 - r_1^2}{2r_3} \right)^3 - \frac{1}{3} \left(\frac{r_1^2 + r_3^2 }{2r_3} \right)^3 - \frac{r_1^2 r_3}{2} + \left(\frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right) r_1^3\right] dr_1 dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2}\int_{r_1 = 0}^{\frac{r_3}{1 + \sqrt{2}}} \left[- \frac{r_1^7}{48 r_3^3} - \frac{r_1^6}{16r_3^2} - \frac{r_1^5}{16r_3} + \left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)r_1^4 - \frac{13r_1^3 r_3}{16} - \frac{r_1^2 r_3^2}{16} + \frac{r_1 r_3^3}{16} \right] dr_1 dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2} \left[- \frac{r_1^8}{8 \cdot 48 r_3^3} - \frac{r_1^7}{7 \cdot 16r_3^2} - \frac{r_1^6}{6 \cdot 16r_3} + \frac{1}{5}\left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)r_1^5 - \frac{13r_1^4 r_3}{64} - \frac{r_1^3 r_3^2}{48} + \frac{r_1^2 r_3^3}{32} \right] \Big |_{r_1 = 0}^{\frac{r_3}{1 + \sqrt{2}}} dr_3\\ &= \int_{r_3 = 0}^{\infty} r_3 e^{-n \pi r_3^2} \left[- \frac{\left(\frac{r_3}{1+\sqrt{2}} \right)^8}{8 \cdot 48 r_3^3} - \frac{\left(\frac{r_3}{1+\sqrt{2}} \right)^7}{7 \cdot 16r_3^2} - \frac{\left(\frac{r_3}{1+\sqrt{2}} \right)^6}{6 \cdot 16r_3} + \frac{1}{5}\left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)\left(\frac{r_3}{1+\sqrt{2}} \right)^5 - \frac{13\left(\frac{r_3}{1+\sqrt{2}} \right)^4 r_3}{64} - \frac{\left(\frac{r_3}{1+\sqrt{2}} 
\right)^3 r_3^2}{48} + \frac{\left(\frac{r_3}{1+\sqrt{2}} \right)^2 r_3^3}{32} \right] dr_3\\ &= \left[- \frac{\left(\frac{1}{1+\sqrt{2}} \right)^8}{8 \cdot 48} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^7}{7 \cdot 16} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^6}{6 \cdot 16} + \frac{1}{5}\left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)\left(\frac{1}{1+\sqrt{2}} \right)^5 - \frac{13\left(\frac{1}{1+\sqrt{2}} \right)^4 }{64} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^3}{48} + \frac{\left(\frac{1}{1+\sqrt{2}} \right)^2}{32} \right] \int_{r_3 = 0}^{\infty} r_3^6 e^{-n \pi r_3^2} dr_3 \\ &= \left[- \frac{\left(\frac{1}{1+\sqrt{2}} \right)^8}{8 \cdot 48} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^7}{7 \cdot 16} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^6}{6 \cdot 16} + \frac{1}{5}\left(\frac{1}{8} + \frac{1}{4} + \frac{1}{6} + \frac{2^{\frac{3}{2}}}{3} \right)\left(\frac{1}{1+\sqrt{2}} \right)^5 - \frac{13\left(\frac{1}{1+\sqrt{2}} \right)^4 }{64} - \frac{\left(\frac{1}{1+\sqrt{2}} \right)^3}{48} + \frac{\left(\frac{1}{1+\sqrt{2}} \right)^2}{32} \right] \frac{15}{16\pi^3 n^{\frac{7}{2}}} \end{align*} \end{proof} \normalsize \bibliographystyle{plain}
\section{Introduction}\label{sec:intro} {Active sensing} consists of controlling the sensor state to gather (more) informative observations~\cite{radmard2017active} or to accomplish a task (e.g., find a target within a certain time budget~\cite{MTS_Rinner}). This control framework has been widely used to solve autonomous target search and tracking~\cite{shahidian2017single}, often relying on probabilistic approaches~\cite{detection_tracking_survey}: data from onboard sensors and Recursive Bayesian Estimation (RBE) schemes~\cite{smith2013MonteCarlo} are used to generate a probabilistic map (also known as belief map), encoding the knowledge about potential target locations. The control problem is then cast as the optimization of a suitable objective function built upon the probabilistic map (e.g., time to detection~\cite{MTS_Rinner}, estimate uncertainty~\cite{shahidian2017single}, distance to the target~\cite{hasanzade2018rf}). Stochastic motion and observation models~\cite{radmard2017active} account for the uncertainties in the target dynamics and in the perception process, and make it possible to treat no-detection observations~\cite{negative_information}. For these reasons, probabilistic approaches are suitable for real-life scenarios, which are also characterized by energy costs associated with the movement of the active sensing platform~\cite{liu2018energy}. \noindent \textbf{Related works -} Typical modalities for active sensing include vision, audio and radio~\cite{radmard2017active,haubner2019active,shahidian2017single}. Vision-based tracking provides high accuracy~\cite{aghajan2009multi} and does not require the target to use any emitting device. Occlusions and Field of View (FoV) directionality~\cite{mavrinac2013modeling} limit the range, applicability and success of camera-only platforms~\cite{detection_tracking_survey}, especially for applications where time of detection is critical (e.g., search and rescue missions~\cite{SAR_radio}).
To collect measurements over wider ranges, and to shorten the search phase, other technologies can be used, such as acoustic~\cite{haubner2019active} or radio-frequency (RF)~\cite{shahidian2017single} signals. Despite the high localization accuracy of acoustic signals~\cite{haubner2019active}, sound pollution and extra hardware requirements (e.g., microphone arrays) are critical pitfalls of this technology~\cite{zafari2019survey}. Conversely, RF signals are energy efficient, have large reception ranges ($\sim100 \; [m]$), and low hardware requirements, since platforms need not be equipped with multiple receivers; moreover, the Received Signal Strength Indicator (RSSI) is extracted from standard data packet traffic~\cite{zanella2016best}. For these reasons, RSSI-based localization systems widely appear in the literature and in commercial applications, even though environmental interference (e.g., clutter and multi-path distortions) often limits their accuracy~\cite{zanella2016best}. Multi-modal sensor fusion techniques have been shown to overcome the inadequacies of uni-modal approaches, being more robust and reliable~\cite{DeepRL_gazeControl}. \noindent \textbf{Contributions -} This paper exploits the complementary benefits of radio and visual cues for visually detecting a radio-emitting target with an aerial robot, equipped with a radio receiver and a Pan-Tilt (PT) camera. We formulate the control problem within a probabilistic active sensing framework, where camera measurements refine radio ones within an RBE scheme, used to keep the map updated. The fusion of RF and camera sensor data for target search and tracking is an open problem and the literature addressing this task is sparse. Indeed, to the best of the authors' knowledge, this is the first attempt to combine radio and visual measurements within a probabilistic active sensing framework.
Furthermore, unlike existing solutions operating on limited control spaces (e.g., platform position~\cite{shahidian2017single} or camera orientation~\cite{DeepRL_gazeControl}), we propose a gradient-based optimal control, defined on a continuous space comprising both platform position and camera orientation. Theoretical and numerical analyses are provided to validate the effectiveness of the proposed algorithm. The analyses show that bi-modality increases the target localization accuracy; this, together with the availability of an integrated high-dimensional control space, leads to higher detection success rates, as well as superior time and energy efficiency with respect to radio-only and vision-only counterparts. \section{Problem statement}\label{sec:problem_formulation} Fig. \ref{fig:scenario} shows the main elements of the problem scenario, namely the target and the sensing platform\footnote{Lowercase bold letters indicate (column) vectors, uppercase bold letters matrices. With $\vect{I}_n$ we define the $n$-dimensional identity matrix, while $\vect{0}_n$ is the zero vector of dimension $n$. \\ Regarding the statistical distributions used in this paper, $\chi^2(n)$ denotes the chi-squared distribution with $n$ degrees of freedom, and $\mathcal{N}(x|\mu,\sigma^2)$ is the Gaussian distribution over the random variable $x$ with expectation $\mu$ and variance $\sigma^2$.\\ With the shorthand notation $z_{t_0:t_1}$ we indicate a sequence of measurements from time instant $t_0$ to $t_1$, namely $\left\lbrace z_k \right\rbrace_{k=t_0}^{t_1}$.\\ The Euclidean distance between vectors $\vect{a}, \vect{b} \in \mathbb{R}^n$ is denoted as \begin{equation} d(\vect{a},\vect{b}) = \lVert \vect{a} - \vect{b} \rVert_2 = \left( \sum_{i=1}^n \left( a(i) - b(i)\right)^2 \right)^{1/2} \end{equation} where $a(i)$ is the $i$-th component of $\vect{a}$.
Given a plane $\Pi$ and a vector $\vect{a} \in \mathbb{R}^n$, we denote as $\vect{a}_\Pi$ the orthogonal projection of $\vect{a}$ onto $\Pi$.}. \noindent \textbf{Target - }The radio-emitting target moves on a planar environment \mbox{$\Pi \subset \mathbb{R}^2$}, according to a (possibly) non-linear stochastic Markovian state transition model~\cite{radmard2017active} \begin{equation}\label{eq:target_dyn} \vect{p}_{t+1} = f(\vect{p}_{t},\bm{\eta}_{t}) \end{equation} where $\vect{p}_t \in \Pi$ is the target position at time $t$, referred to the global 3D reference frame $\mathcal{F}_0$; when expressed in $\mathbb{R}^3$, it is written as $\vect{p}_t^+ = [ \, \vect{p}_t^\top \quad 0 \,]^\top$. The uncertainty in the underlying target motion is captured by the distribution of the stochastic process noise $\bm{\eta}_t$. The probabilistic form of \eqref{eq:target_dyn}, namely $p(\vect{p}_{t+1}|\vect{p}_{t})$, is known as process model~\cite{radmard2017active}. \noindent \textbf{Sensing platform - }The sensing platform is an aerial vehicle (UAV), equipped with an omnidirectional radio receiver and a PT camera on gimbal, and endowed with processing capabilities and a real-time target detector~\cite{yolo}. The state of the platform is the camera pose, namely \begin{equation}\label{eq:platform_state} \begin{split} & \vect{s}_t = \begin{bmatrix} \vect{c}_t^\top & \bm{\psi}_t^\top \end{bmatrix}^\top, \\ & \vect{c}_t \in \mathbb{R}^3; \quad \bm{\psi}_t = \begin{bmatrix} \alpha_t & \beta_t \end{bmatrix}^\top \in [-\pi/2+\theta,\pi/2-\theta]^2. \end{split} \end{equation} The UAV position $\vect{c}_t$ is referred to $\mathcal{F}_0$; it is assumed to coincide with the camera focal point, and its altitude is fixed (i.e., non-controllable); $\alpha_t$ (resp. $\beta_t$) is the pan (resp. tilt) angle w.r.t. the camera inertial reference frame $\mathcal{F}_u$ (obtained by a $\vect{c}_t$ translation of $\mathcal{F}_0$); $\theta$ is the half-angle of view.
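The transition kernel $f$ in \eqref{eq:target_dyn} is left generic. As a minimal sketch, one may instantiate it as a Gaussian random walk on $\Pi$ (the noise scale \texttt{sigma\_eta} is a hypothetical choice, not fixed by the paper):

```python
import random

def target_step(p, sigma_eta, rng):
    """One draw from p(p_{t+1} | p_t) for a Gaussian random-walk choice of f:
    the planar target position drifts by zero-mean isotropic noise eta_t."""
    return (p[0] + rng.gauss(0.0, sigma_eta),
            p[1] + rng.gauss(0.0, sigma_eta))

# Simulate a short trajectory on the plane Pi.
rng = random.Random(42)
p = (0.0, 0.0)
trajectory = [p]
for _ in range(50):
    p = target_step(p, 0.5, rng)
    trajectory.append(p)
```

Any Markovian motion model with a known noise distribution can replace this sketch without changing the RBE machinery described later.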
The camera state follows a linear deterministic Markovian transition model~\cite{radmard2017active} \begin{equation}\label{eq:sensor_dynamics} \vect{s}_{t+1} = \vect{s}_t + \vect{u}_t; \; \vect{u}_t = \begin{bmatrix} \vect{u}_{\vect{c},t}^\top \quad \vect{u}_{\bm{\psi},t}^\top \end{bmatrix}^\top \in \mathcal{A} \end{equation} where $\mathcal{A}$ is the control space. It comprises all possible control inputs that can be applied to the platform to regulate position and attitude. In particular, since the UAV altitude is fixed, we focus on a planar control $\vect{u}_{\vect{c}_\Pi,t}$, acting on the projection $\vect{c}_{\Pi,t}$. In line with real-life scenarios, the UAV movements are considered energy-consuming, with a linear dependence on the flying distance~\cite{liu2018energy}, that is \begin{equation}\label{eq:energy_model} \Delta E_t = d(\vect{c}_{t+1},\vect{c}_{t}), \end{equation} where $\Delta E_t$ is the energy used to move the platform from $\vect{c}_{t}$ to $\vect{c}_{t+1}$. The total available energy is denoted as $E_{tot}$. \begin{figure}[t] \centering \includegraphics[scale=0.3, trim={0 0 0 38}, clip]{images/scenario.pdf} \vspace{-0.9cm} \caption{Problem scenario. A target moves in a planar environment and establishes a radio signal communication with a camera-embedded UAV. The objective is to control the camera pose so that the target is visually detected.} \label{fig:scenario} \end{figure} Motivated by the long reception ranges of radio signals~\cite{zanella2016best}, we assume the target to be always within the range of the platform receiver; from received data packets, the RSSI value $r_t\in\mathbb{R}$ is extracted. This is related to the platform-target distance $d(\vect{c}_t,\vect{p}_t^+)$ according to the log-distance path loss model~\cite{goldsmith_2005} \begin{equation}\label{eq:path_loss_model} r_t = \kappa - 10 n\log_{10} \left( d(\vect{c}_t,\vect{p}_t^+) \right).
\end{equation} The parameters $\kappa$ and $n$ are estimated via offline calibration procedures~\cite{zanella2016best} and they represent the RSSI at a reference distance (e.g., $1\;[m]$) and the attenuation gain, respectively. Thus, defining $T_{\text{RF}}$ as the receiver sampling interval, the radio observation model is \begin{equation}\label{eq:obs_Rx} z_{\text{RF},t} = \begin{cases} r_t + v_{\text{RF},t} , & t = M T_{\text{RF}}, \; M \in \mathbb{N}\\ \emptyset, & \text{otherwise} \end{cases} \end{equation} where $v_{\text{RF},t} \sim \mathcal{N}\left(v| 0,\sigma_{\text{RF}}^2 \right)$ is the noise in RSSI measurements, and $\emptyset$ is an empty observation (i.e., a measurement without target information). Camera measurements are modeled through the projection perspective geometry~\cite{aghajan2009multi} \begin{equation}\label{eq:obs_c} \vect{z}_{c,t} = \begin{cases} \vect{P}(\vect{s}_t)\widetilde{\vect{p}}_t + \vect{v}_{c,t}, & D_t =1 \text{ and } t = N T_a, \; N \in \mathbb{N}\\ \emptyset, & \text{otherwise} \end{cases} \end{equation} where $\vect{v}_{c,t} \sim \mathcal{N}\left(\vect{v}| \vect{0}_2,\bm{\Sigma}_{c} \right)$ is the noise of camera observations, $\widetilde{\vect{p}}_t$ is the homogeneous representation of $\vect{p}_t^+$ and \begin{equation} \vect{P}(\vect{s}_t) = \begin{bmatrix} \vect{I}_2 & \vect{0}_2 \end{bmatrix} \vect{K} \begin{bmatrix} \vect{R}(\bm{\psi_t}) & \vect{c}_t \end{bmatrix} \in \mathbb{R}^{2 \times 4} \end{equation} is the camera projection matrix that maps $\widetilde{\vect{p}}_t$ onto the image plane $\mathcal{I}$. $\vect{P}(\vect{s}_t)$ depends on $\vect{K} \in \mathbb{R}^{3 \times 3}$, the matrix of intrinsic parameters, and $\vect{R}(\bm{\psi_t})$, the camera rotation matrix w.r.t. $\mathcal{F}_0$.
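To make the radio observation model concrete, a minimal sketch of \eqref{eq:path_loss_model}--\eqref{eq:obs_Rx} follows; the values of $\kappa$, $n$, $\sigma_{\text{RF}}$ and $T_{\text{RF}}$ are illustrative assumptions, not taken from the paper:

```python
import math
import random

def rssi_observation(dist, t, kappa=-40.0, n=2.0, sigma_rf=2.0,
                     T_rf=5, rng=None):
    """RSSI observation z_{RF,t}: log-distance path loss plus additive
    Gaussian noise, available only on the receiver sampling grid
    t = M * T_rf.  None plays the role of the empty observation."""
    if t % T_rf != 0:
        return None
    r = kappa - 10.0 * n * math.log10(dist)   # noiseless path-loss model
    noise = rng.gauss(0.0, sigma_rf) if rng is not None else 0.0
    return r + noise
```

With \texttt{rng=None} the deterministic part of \eqref{eq:path_loss_model} is recovered (e.g., $\kappa$ at the reference distance), which is convenient for testing.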
The camera frame rate $T_a$ satisfies \begin{equation}\label{eq:sampling_time_relation} T_{\text{RF}} = \nu T_a, \; \nu>1 \end{equation} since radio reception is typically characterized by longer sampling intervals than cameras~\cite{zhuang2016smartphone,vollmer2011high}. Without loss of generality, we will consider a normalized frame rate (i.e., $T_a=1$). Finally, a successful target detection is indicated by the value $1$ of the binary variable $D_t$. \begin{problem} With this formalism, the \emph{visual target detection problem} can be formulated as the control of the camera state $\vect{s}_t$ (through $\vect{u}_t$) to realize the event $D_t=1$. \end{problem} \section{Methodology}\label{sec:method} To solve the problem defined in Sec. \ref{sec:problem_formulation}, a probabilistic bi-modal active sensing approach is proposed. As shown in Fig.~\ref{fig:pipeline}, radio-visual measurements are aggregated to form a bi-modal likelihood function; this is used to keep the target belief map updated through an RBE scheme. Finally, an optimal controller is fed with the probabilistic map and generates the platform control input. \noindent \textbf{Probabilistic map -} Given the observations $\vect{z}_{1:t}$, RBE provides a two-stage procedure to recursively update the target belief state, namely the posterior distribution $p(\vect{p}_t| \vect{z}_{1:t})$. The prediction stage involves using the process model \eqref{eq:target_dyn} to obtain the prior of the target position via the \mbox{Chapman-Kolmogorov} equation~\cite{radmard2017active} \begin{equation}\label{eq:Chapman-Kolmogorov} p(\vect{p}_{t}| \vect{z}_{1:t-1}) = \int p(\vect{p}_{t}| \vect{p}_{t-1}) p(\vect{p}_{t-1}| \vect{z}_{1:t-1}) d \vect{p}_{t-1}.
\end{equation} As a new observation $\vect{z}_{t}$ becomes available, Bayes rule~\cite{smith2013MonteCarlo} updates the target belief state \begin{equation}\label{eq:bayes} p(\vect{p}_{t}| \vect{z}_{1:t}) = \frac{ p(\vect{p}_{t}| \vect{z}_{1:t-1}) p(\vect{z}_{t}| \vect{p}_{t}) }{ \int p(\vect{z}_{t}| \vect{p}_{t}) p(\vect{p}_{t}| \vect{z}_{1:t-1}) d \vect{p}_{t} }, \end{equation} with $p(\vect{p}_0)$ as the initial target belief. Particle filtering~\cite{smith2013MonteCarlo} provides a Monte Carlo approximation of the density $p(\vect{p}_t|\vect{z}_{1:t})$ with a sum of $N_s$ Dirac functions centered at the particles $\{ \vect{p}_t^{(i)} \}_{i=1}^{N_s}$, that is \begin{equation}\label{eq:posterior_PF} p(\vect{p}_t|\vect{z}_{1:t}) \approx \sum_{i=1}^{N_s} w_t^{(i)} \delta \left( \vect{p}_t - \vect{p}_t^{(i)} \right), \end{equation} where $w_t^{(i)}$ is the weight of particle $\vect{p}_t^{(i)}$ and it holds \begin{subequations}\label{eq:PF} \begin{align} & \vect{p}_t^{(i)} = f\left( \vect{p}_{t-1}^{(i)},\bm{\eta}_{t-1} \right) \label{eq:prediction} \\ & w_t^{(i)} \propto w_{t-1}^{(i)}p\left(\vect{z}_t|\vect{p}_t^{(i)}\right) \label{eq:update} \end{align} \end{subequations} for $i=1,\dots,N_s$. The $i$-th weight is therefore proportional to the likelihood function evaluated on the $i$-th particle, namely $p(\vect{z}_t|\vect{p}_t^{(i)})$. Under this scheme, a few particles concentrate most of the weight over time, a phenomenon known as degeneracy. To alleviate this problem, systematic resampling is adopted, which involves resampling from the particle set: particles with higher weights are more likely to be sampled, but the procedure preserves some low-weight particles as well~\cite{smith2013MonteCarlo}. \tikzset{ block/.style = {draw, fill=white, rectangle, minimum height=3em, minimum width=3em}, block_transp/.style = {rectangle, minimum height=3em, minimum width=3em}, sum/.style= {draw, fill=white, circle, node distance=1cm}} \begin{figure}[t!]
\center \begin{tikzpicture}[auto, node distance=3cm,>=latex',scale=0.8, transform shape] \node [block] (g) {$f(\vect{p}_{t-1},\bm{\eta}_{t-1})$}; \node [block_transp, right of = g, xshift=-1cm] (omega) { $\bm{\eta}_{t-1}$}; \node [block, below of = g, yshift=1.5cm] (time_update_g) { $z^{-1}$}; \node [block, below of = g,yshift=-1cm,xshift=-4cm] (sensing) { \nlenv{sensing\\ units}}; \node [block, right of = sensing, xshift=1.5cm] (sensor dynamics) { $\vect{s}_t = \vect{s}_{t-1} + \vect{u}_{t-1}^*$}; \node [block, below of = sensor dynamics, yshift=1.2cm,xshift=1cm] (time_update_sensor) { $z^{-1}$}; \node [block, below of = sensing, xshift=-1.2cm, yshift=-1.2cm] (RF_likelihood) { $p(z_{\text{RF},t}|\vect{p}_t,\vect{s}_t)$}; \node [block, below of = sensing, xshift=1.2cm,yshift=-1.2cm] (visual_likelihood) { $p(\vect{z}_{c,t}|\vect{p}_t,\vect{s}_t)$}; \node [sum, below of = sensing, yshift = -4.5cm] (prod) { $\times$}; \node [block, below of = prod, yshift = 1cm] (likelihood) { $p(\vect{z}_t|\vect{p}_t,\vect{s}_t)$}; \node [block, below of = sensor dynamics, yshift = -4.5cm] (RBE) { RBE}; \node [block, above of = RBE, yshift=0.3cm] (control) { $\argmin{ J(\vect{s}_t + \vect{u}_t) }$}; \draw [->] (omega.west) -- node{} (g.east); \draw [->] (g.west) -- ++ (-2.9,0) -- node[name=output_g]{ $\vect{p}_{t}$} (sensing.north); \draw [->] ($(g.west)+(-0.5,0)$) |- (time_update_g.west) ; \draw [->] (time_update_g.east) -- ++ (1,0) -- node[]{ $\vect{p}_{t-1}$} ++ (0,1.2) -- ($(g.east)-(0,0.3)$); \draw [->] (sensor dynamics.west) -- node[xshift=-0.2cm]{$\vect{s}_{t}$} (sensing.east); \draw [->] ($(sensor dynamics.west)+(-1,0)$) |- (time_update_sensor.west) ; \draw [->] (time_update_sensor.east) -- ++ (0.3,0) -- node[]{ $\vect{s}_{t-1},\vect{u}_{t-1}^*$} ++ (0,1.8) -- ($(sensor dynamics.east)$); \draw [->] ($(time_update_sensor.west) + (-0.5,0)$) -- ($(control.north)$) ; \draw [->] ($(sensing.south) -(0.3,0)$) -- ++ (0,-2) -- ++ (-0.9,0) --node[]{ $z_{\text{RF},t}$} 
(RF_likelihood.north); \draw [->] ($(sensing.south) +(0.3,0)$) -- ++ (0,-2) -- ++ (0.9,0) -- node[xshift=-0.7cm]{ $\vect{z}_{c,t}$} (visual_likelihood.north); \draw [->] (RF_likelihood.south) |- node{} (prod.west); \draw [->] (visual_likelihood.south) |- node{} (prod.east); \draw [->] (prod.south) -- node{} (likelihood.north); \draw [->] (likelihood.east) -- node{} (RBE.west); \draw [->] (RBE.north) -- node[]{ $p(\vect{p}_{t}|\vect{z}_{1:t})$} (control.south); \draw [->] ($(RBE.north)+(0,0.5)$) -- ++ (1,0) -- ++ (0,-1.05) -- (RBE.east); \draw [->] ($(control.north)+(1,0)$) -- node[]{ $\vect{u}_t^*$}(time_update_sensor.south); \begin{scope}[on background layer] \draw [fill=lightgray,dashed] ($(time_update_g) + (-2,-0.7)$) rectangle ($(omega) + (0.5,0.7)$); \draw [fill=lightgray,dashed] ($(RF_likelihood) + (-1.2,0.8)$) rectangle ($(likelihood) + (2.3,-0.7)$); \draw [fill=lightgray,dashed] ($(control.west) + (-0.2,0.8)$) rectangle ($(control.east) + (0.2,-0.8)$); \draw [fill=lightgray,dashed] ($(sensor dynamics.west) + (-1.8,0.7)$) rectangle ($(time_update_sensor.east) + (0.5,-0.7)$); \draw [fill=lightgray,dashed] ($(RBE.west) + (-1,0.7)$) rectangle ($(RBE.east) + (1,-0.7)$); \end{scope} \draw [dashed] ($(RF_likelihood) + (-1.4,5.1)$) rectangle ($(control) + (2.2,-4.6)$); \node [block_transp, above of = g, yshift=-2cm] (target_annotation) {\textit{Target dynamics}}; \node [block_transp, below of = sensor dynamics, xshift=-1.7cm,yshift=0.25cm] (sensor_annotation) { \textit{Platform dynamics}}; \node [block_transp, below of = likelihood, yshift=2cm] (likeliood_annotation) {\textit{Bi-modal likelihood}}; \node [block_transp, below of = control, yshift=2cm,xshift=1cm] (control_annotation) { \textit{Control}}; \node [block_transp, below of = RBE, yshift=2cm] (RBE_annotation) { \textit{Probabilistic map}}; \node [block_transp, above of = sensor dynamics, yshift=-1.8cm] (sensor_annotation) { \textit{Sensing platform}}; \end{tikzpicture} \caption{ Scheme of the proposed 
Probabilistic Radio-Visual Active Sensing algorithm.} \label{fig:pipeline} \end{figure} \textbf{Bi-modal likelihood - } In active sensing frameworks, the configuration of the sensing robot is not constant; hence, the likelihood function $p(\vect{z}_t|\vect{p}_t)$ should also account for the platform state, that is $p(\vect{z}_t|\vect{p}_t,\vect{s}_t)$. Given this premise, and recalling the stochastic characterization of the radio observation model \eqref{eq:obs_Rx}, the \emph{RF likelihood} is \begin{equation}\label{eq:likelihood_RF} p(z_{\text{RF},t}|\vect{p}_t,\vect{s}_t) = \begin{cases} \mathcal{N}\left(z| r_t,\sigma_{\text{RF}}^2 \right), & t=MT_{\text{RF}} \\ 1, & \text{otherwise}. \end{cases} \end{equation} Note that $z_{\text{RF},t}$ updates the belief map only when it carries information on the target position (i.e., \mbox{$z_{\text{RF},t} \neq \emptyset$}, at $t=MT_{\text{RF}}$). To define the visual likelihood from the observation model \eqref{eq:obs_c}, we consider the detection event as a Bernoulli random variable with success probability \begin{equation}\label{eq:POD} p(D_t=1|\vect{p}_t,\vect{s}_t) = \begin{cases} \Upsilon(\vect{p}_t,\vect{s}_t), & \vect{p}_t \in \Phi(\vect{s}_t) \\ 0, & \text{otherwise} \end{cases} \end{equation} with $\Phi(\vect{s}_t)$ the projection of the camera FoV onto $\Pi$ (see Fig. \ref{fig:scenario}), and \begin{equation}\label{eq:Upsilon} \Upsilon(\vect{p}_t,\vect{s}_t) = \left[ 1 + e^{\gamma(d(\vect{c}_t,\vect{p}_t^+)/f - \epsilon)} \right]^{-1} \left( 1 + e^{-\gamma \epsilon} \right). \end{equation} Thus, the target can be detected only if inside the camera FoV and, from \eqref{eq:Upsilon}, the detection probability decreases with the ratio $d(\vect{c}_t,\vect{p}_t^+)/f$, i.e., with the resolution at which the target is observed, where $f$ is the camera focal length; $\epsilon > 0$ is the resolution beyond which the target is no longer well detectable, and $\gamma > 0$ is the rate at which the target detectability decreases.
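The detection model \eqref{eq:POD}--\eqref{eq:Upsilon} can be sketched as follows; the values of $\gamma$, $\epsilon$ and $f$ used below are hypothetical:

```python
import math

def detection_prob(dist, f, gamma, eps, in_fov):
    """Success probability of the Bernoulli detection event D_t:
    a reversed logistic in the observation resolution dist/f, equal to 1
    at dist = 0, and identically 0 outside the camera FoV."""
    if not in_fov:
        return 0.0
    res = dist / f
    return (1.0 + math.exp(-gamma * eps)) / (1.0 + math.exp(gamma * (res - eps)))
```

Note the two limiting behaviours encoded in $\Upsilon$: the probability equals $1$ when the target is observed at zero distance, and decays logistically around the threshold resolution $\epsilon$.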
Then, the \textit{visual likelihood} is \begin{equation}\label{eq:likelihood_V} \begin{split} & p(\vect{z}_{c,t}|\vect{p}_t,\vect{s}_t) = p(\vect{z}_{c,t}|\vect{p}_t,\vect{s}_t,D_t)p(D_t|\vect{p}_t,\vect{s}_t) = \\ & \begin{cases} \mathcal{N}(\vect{z}| \vect{P}(\vect{s}_t)\widetilde{\vect{p}}_t,\bm{\Sigma}_c)\Upsilon(\vect{p}_t,\vect{s}_t), & D_t = 1,\; \vect{p}_t \in \Phi(\vect{s}_t) \\ 0, & D_t = 1,\; \vect{p}_t \not\in \Phi(\vect{s}_t) \\ 1-\Upsilon(\vect{p}_t,\vect{s}_t), & D_t = 0,\; \vect{p}_t \in \Phi(\vect{s}_t) \\ 1, & D_t = 0,\; \vect{p}_t \not\in \Phi(\vect{s}_t) \end{cases} \end{split} \end{equation} By aggregating radio and visual likelihoods, the following \emph{bi-modal likelihood} is obtained \begin{equation}\label{eq:bimodal_likelihood_radioVisual} p(\vect{z}_t |\vect{p}_t , \vect{s}_t ) = p(z_{\text{RF},t} |\vect{p}_t ,\vect{s}_t )p(\vect{z}_{c,t} |\vect{p}_t , \vect{s}_t ). \end{equation} Then, \eqref{eq:bimodal_likelihood_radioVisual} is applied to the update stage \eqref{eq:update} of the particle filter that implements the RBE scheme. \noindent \textbf{Controller -} The platform control input is computed by solving the following optimization program \begin{equation}\label{eq:control_input} \begin{split} \mathcal{C}: \; \vect{u}_t^* = & \argmin_{\vect{u}_t \in \mathcal{A}} J(\vect{s}_t+\vect{u}_t)\\ & \text{s.t. } d \left( \vect{c}_{t+1},\vect{c}_{t} \right) \leq E_t \end{split} \end{equation} where $E_t$ is the residual energy at time $t$, computed as \begin{equation}\label{eq:residual_energy} E_t = E_{tot} - \sum_{k=0}^{t-1} \Delta E_k = E_{tot} - \sum_{k=0}^{t-1} d(\vect{c}_{k+1},\vect{c}_k).
\end{equation} The cost function is chosen as \begin{equation}\label{eq:cost_function} J(\vect{c}_{\Pi,{t+1}}) = \frac{1}{2} d\left(\vect{c}_{\Pi,{t+1}}, \hat{\vect{p}}_t\right)^2, \end{equation} where $\vect{c}_{\Pi,t+1}$ is the projection of $\vect{c}_{t+1}$ onto $\Pi$, and $\hat{\vect{p}}_t$ is the MAP estimate\footnote{An alternative would be the MMSE estimate~\cite{hasanzade2018rf}, namely \mbox{$\hat{\vect{p}}_t = \sum_{i=1}^{N_s} \omega_t^{(i)} \vect{p}_t^{(i)}$.} This choice leads to smoother trajectories, but it is proven to make $\mathcal{C}$ more prone to local minima~\cite{radmard2017active}.} of the target position; formally \begin{equation}\label{eq:MAP} \hat{\vect{p}}_t = \vect{p}_t^{(i^*)}; \; i^* = \argmax_{i \in [1,N_s]}{ w_t^{(i)} }. \end{equation} Note that $J(\cdot)$ is a function of $\vect{s}_t+\vect{u}_t$ (i.e., $\vect{s}_{t+1}$), since $\vect{c}_{\Pi,t+1}$ can be regulated by acting on the camera pose, through the inverse perspective geometry (with known and fixed UAV altitude w.r.t. $\Pi$)~\cite{aghajan2009multi}. Furthermore, $J(\cdot)$ extracts information from the belief map, according to the probabilistic active sensing approach (Fig. \ref{fig:pipeline}). The convexity of $J(\cdot)$ w.r.t. $\vect{c}_{\Pi,t+1}$ allows \eqref{eq:control_input} to be solved with the gradient-based control law \begin{equation} \label{eq:gradient_based_controller} \!\begin{cases} \vect{s}_{\tau+1} = \vect{s}_\tau + \vect{u}_\tau, \; \tau \in [0, \tau_{max}] \\ \begin{split} \\[-4pt] \vect{u}_\tau & = \! \begin{bmatrix} \begin{array}{c} \vect{u}_{\vect{c},t} \\ \hline \vect{u}_{\bm{\psi},t} \\[4pt] \end{array} \end{bmatrix} \! = - \!\begin{bmatrix} \begin{array}{c|c} \vect{G}_{\vect{c}} & \vect{0} \\ \hline \vect{0} & \vect{G}_{\bm{\psi}} \end{array} \end{bmatrix} \!
\begin{bmatrix} \begin{array}{c} \frac{\partial J(\vect{c}_{\Pi,\tau+1})}{\partial \vect{c}_{\Pi}} \\ 0 \\ \hline \\[-4pt] \frac{\partial J(\vect{c}_{\Pi,\tau+1})}{\partial \bm{\psi}} \\[4pt] \end{array}\end{bmatrix} \end{split} \end{cases} \end{equation} where $\tau_{max}$ accounts for the maximum number of iterations in order to accommodate the next incoming measurement at $t+1$. $\vect{G}_{\vect{c}} \in \mathbb{R}^{3 \times 3}$ and $\vect{G}_{\bm{\psi}} \in \mathbb{R}^{2 \times 2}$ are suitable control gain matrices. By choosing small entries for $\vect{G}_{\vect{c}}$, energy is preserved, since \eqref{eq:gradient_based_controller} commands short UAV movements. Conversely, larger $\vect{G}_{\vect{c}}$ and $\vect{G}_{\bm{\psi}}$ lead to a more reactive system, capable of getting close to the setpoint $\hat{\vect{p}}_t$ within a maximum number of iterations $\tau_{max}$. Consequently, if the localization procedure is accurate (i.e., \mbox{$\hat{\vect{p}}_t \approx \vect{p}_t$}), the condition $\vect{p}_t \in \Phi(\vect{s}_t)$ is likely to be satisfied and $d(\vect{c}_t,\vect{p}_t^+)$ is small, which is necessary to have high detection probabilities, according to \eqref{eq:POD}. Finally, it is important to remark that $J(\cdot)$ is \emph{purely-exploitative} and \emph{not energy-aware}: in the controller design problem $\mathcal{C}$, energy appears only in the constraint, and no energy-preservation~\cite{liu2018energy} nor information-seeking (explorative)~\cite{radmard2017active} criteria are included. \section{Theoretical results }\label{sec:theoretical} This Section formally motivates the use of an action space involving the entire camera pose, as in \eqref{eq:sensor_dynamics}, and supports the choice of a combined radio-visual perception system.
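For illustration, the gradient iteration \eqref{eq:gradient_based_controller} applied to the quadratic cost \eqref{eq:cost_function}, restricted to the projected camera centre, can be sketched as follows; the scalar gain, iteration budget, and energy bookkeeping are simplifying assumptions that replace the matrix-gain law above.

```python
import numpy as np

def gradient_controller(c_pi, p_hat, gain=0.3, tau_max=20, energy=float("inf")):
    """Gradient descent on J(c) = 0.5 * ||c - p_hat||^2, whose gradient is
    (c - p_hat): each iteration moves the projected camera centre towards
    the MAP estimate, stopping early if the commanded path length would
    exceed the residual energy budget. Returns (final position, energy spent)."""
    c = np.asarray(c_pi, dtype=float).copy()
    target = np.asarray(p_hat, dtype=float)
    spent = 0.0
    for _ in range(tau_max):
        step = -gain * (c - target)              # u_tau = -G * grad J
        if spent + np.linalg.norm(step) > energy:
            break                                # energy constraint of the program C
        c += step
        spent += np.linalg.norm(step)
    return c, spent
```

With a generous energy budget the iterate converges towards the setpoint; with a tight budget the platform simply stops, mirroring the constraint in $\mathcal{C}$.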
We observe that the particle weight distribution is an indicator of the target localizability: highly-weighted regions allow the position estimate to be focused, while uniform weight patterns suggest ambiguity in the target localization. In this respect, we show that radio-only solutions need the sensing platform to move in order to solve localization ambiguity (Ths.~1-2 that follow), while a radio-visual approach attains this even with a static platform (Th.~3). \begin{theorem}\label{th:axis_symmetric_ambiguity} Let the following hypotheses hold \begin{enumerate} \item the target moves according to an unbiased random walk, i.e., \mbox{$p(\vect{p}_{t+1}|\vect{p}_t) = \mathcal{N}(\vect{p}|\vect{p}_t,\sigma^2 \vect{I}_2)$}; \item the platform is static, i.e. $\vect{c}_t = \vect{c}; \; \forall t$; \item the RBE scheme updates through \eqref{eq:update} exploiting only the RF likelihood \eqref{eq:likelihood_RF}. \end{enumerate} Then, \begin{equation}\label{eq:axisSymmetric_condition} \begin{split} & \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] = \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right], \; t \geq 0 \\ & \forall i,j \in [1,\dots,N_s] \text{ s.t. } d(\vect{p}_0^{(i)},\vect{c}_\Pi) = d(\vect{p}_0^{(j)},\vect{c}_\Pi) \end{split} \end{equation} \end{theorem} \begin{proof} The dynamic model associated with the target's unbiased random walk is \begin{equation} \vect{p}_{t+1} = \vect{p}_t + \bm{\eta}_t, \quad \bm{\eta}_t \sim \mathcal{N}(\bm{\eta}|\vect{0}_2,\sigma^2\vect{I}_2). \end{equation} Equivalently, \begin{equation} \vect{p}_t = \vect{p}_0 + \sum_{k=0}^{t-1} \bm{\eta}_k, \; t>0. \end{equation} Since $\bm{\eta}_t$ and $\bm{\eta}_s$ are i.i.d. for any $s \neq t$, it follows that \begin{equation} \bar{\bm{\eta}}_{t-1} := \sum_{k=0}^{t-1}\bm{\eta}_k \sim \mathcal{N} \left(\bm{\eta} | \vect{0}_2,t\sigma^2\vect{I}_2 \right).
\end{equation} Then, the squared distance $d_t^{(i),2} := d(\vect{p}_t^{(i)},\vect{c}_\Pi)^2$ is \begin{equation} \begin{split} d_t^{(i),2} & = \lVert \vect{p}_t^{(i)} - \vect{c}_\Pi \rVert_2^2 = \lVert \vect{p}_0^{(i)} + \bar{\bm{\eta}}_{t-1} - \vect{c}_\Pi \rVert_2^2 \\ & = \sum_{\ell=1}^2 \left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right)^2 + \sum_{\ell=1}^2 \bar{\eta}_{t-1}(\ell)^2 \\ & + 2\sum_{\ell=1}^2 \left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right) \bar{\eta}_{t-1}(\ell), \; t > 0. \end{split} \end{equation} It holds that \begin{equation} \small 2\left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right)\! \bar{\eta}_{t-1}(\ell) \sim \! \mathcal{N} \left( \! \eta | 0,4t\sigma^2\left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right)^2 \!\right) \end{equation} and, since the components of $\bar{\bm{\eta}}_{t-1}$ are i.i.d. with distribution $\mathcal{N}\left(0,t\sigma^2\right)$, \begin{equation} \sum_{\ell=1}^2 \bar{\eta}_{t-1}(\ell)^2 \sim t\sigma^2 \chi^2(2). \end{equation} Finally, recalling that $\sum_{\ell=1}^2 ( p_0^{(i)}(\ell) - c_\Pi(\ell) )^2 = d_0^{(i),2}$, \begin{equation}\label{eq:distance_distribution_th1} d_t^{(i),2} \sim \mathcal{N}(d^2 |\mu_d, \sigma_d^2 ) + t\sigma^2 \chi^2(2), \; t \geq 0 \end{equation} with \begin{equation}\label{eq:parameters_distance_th1} \mu_d = d_0^{(i),2}, \quad \sigma_d^2 = 4t\sigma^2d_0^{(i),2}. \end{equation} From \eqref{eq:distance_distribution_th1}-\eqref{eq:parameters_distance_th1}, $d_t^{(i),2}$ is identically distributed for all particles with the same initial distance from the platform. Using likelihood \eqref{eq:likelihood_RF} and with $z_{\text{RF},1:t}$ given, $\omega_t^{(i)}$ is a \mbox{(non-linear)} function of $d_t^{(i),2}$, through \eqref{eq:path_loss_model}. Then, condition \eqref{eq:axisSymmetric_condition} follows.
\end{proof} \begin{theorem}\label{th:nonStatic_Rx} Let the following hypotheses hold \begin{enumerate} \item the target moves according to an unbiased random walk, i.e., \mbox{$p(\vect{p}_{t+1}|\vect{p}_t) = \mathcal{N}(\vect{p}|\vect{p}_t,\sigma^2 \vect{I}_2)$}; \item the platform moves at a fixed altitude according to a deterministic linear Markovian motion, i.e., $\vect{c}_{\Pi,t+1} = \vect{c}_{\Pi,t} + \vect{u}_{\vect{c}_\Pi,t}$; \item the RBE scheme updates through \eqref{eq:update} exploiting only the RF likelihood \eqref{eq:likelihood_RF}. \end{enumerate} Then, \begin{equation} \begin{split} & \exists (i,j) \in [1,N_s] \text{ s.t. } d(\vect{p}_0^{(i)},\vect{c}_{\Pi,0}) = d(\vect{p}_0^{(j)},\vect{c}_{\Pi,0}) \text{ and } \\ & \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right],\; t > 0 \end{split} \end{equation} \end{theorem} \begin{proof} The platform planar dynamics can be equivalently written as \begin{equation} \vect{c}_{\Pi,t}= \vect{c}_{\Pi,0} + \bar{\vect{u}}_{t-1}, \quad \bar{\vect{u}}_{t-1} := \sum_{k=0}^{t-1}\vect{u}_{\vect{c}_\Pi,k}, \; t>0. \end{equation} With computations similar to those of \mbox{Th. \ref{th:axis_symmetric_ambiguity}}, it is possible to show that the squared distance \mbox{$d_t^{(i),2} := d(\vect{p}_t^{(i)},\vect{c}_{\Pi,t})^2$} is distributed as \begin{equation}\label{eq:distance_distribution_th2} d_t^{(i),2} \sim \mathcal{N}(d^2 |\mu_d,\sigma_d^2 ) + t\sigma^2 \chi^2(2,\lambda), \; t>0 \end{equation} with \begin{equation}\label{eq:parameters_distance_th2} \begin{split} & \mu_d = d_0^{(i),2} - 2(\vect{p}_0^{(i)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1}, \;\; \sigma_d^2 = 4t\sigma^2 d_0^{(i),2}\\ & \lambda = -\sum_{\ell=1}^2 \bar{u}_{\vect{c}_\Pi,t-1}(\ell)^2 \end{split} \end{equation} where $\chi^2(2,\lambda)$ is a non-central chi-squared distribution, with non-centrality parameter $\lambda$.
From \mbox{\eqref{eq:distance_distribution_th2}-\eqref{eq:parameters_distance_th2}}, the distribution of $d_t^{(i),2}$ depends on both $d_0^{(i),2}$ and \mbox{$(\vect{p}_0^{(i)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1}$}. It is always possible to find $(i,j) \in [1,N_s]$, such that $d_0^{(i)}=d_0^{(j)}$ and \mbox{$(\vect{p}_0^{(i)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1} \neq (\vect{p}_0^{(j)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1}$}. Then, in general, \begin{equation} \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right]. \end{equation} \end{proof} \newpage \begin{theorem}\label{th:biModality_benefit} Let the following hypotheses hold \begin{enumerate} \item the target moves according to an unbiased random walk, i.e., \mbox{$p(\vect{p}_{t+1}|\vect{p}_t) = \mathcal{N}(\vect{p}|\vect{p}_t,\sigma^2\vect{I}_2)$}; \item the platform is static, i.e. $\vect{c}_t = \vect{c}; \; \forall t$; \item the RBE scheme updates through \eqref{eq:update} exploiting the radio-visual likelihood \eqref{eq:bimodal_likelihood_radioVisual}. \end{enumerate} Then, \begin{equation}\label{eq:non_axisSymmetric_condition} \begin{split} & \exists (i,j) \in [1,N_s] \text{ s.t. } d(\vect{p}_0^{(i)},\vect{c}_{\Pi}) = d(\vect{p}_0^{(j)},\vect{c}_{\Pi}) \text{ and } \\ & \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right], \; t>0 \end{split} \end{equation} \end{theorem} \begin{proof} Suppose (w.l.o.g.) that $\vect{p}_0^{(i)}$ and $\vect{p}_0^{(j)}$ satisfy \mbox{$d(\vect{p}_0^{(i)},\vect{c}_{\Pi}) = d(\vect{p}_0^{(j)},\vect{c}_{\Pi})$}; hence, $d_t^{(i),2}$ and $d_t^{(j),2}$ are equally distributed, from Th. \ref{th:axis_symmetric_ambiguity}. Suppose now that \mbox{$\vect{p}_t^{(i)} \in \Phi(\vect{s}_t)$}, but \mbox{$\vect{p}_t^{(j)} \not\in \Phi(\vect{s}_t)$}. 
From \eqref{eq:likelihood_V}-\eqref{eq:bimodal_likelihood_radioVisual}, $\omega_t^{(i)}$ and $\omega_t^{(j)}$ are different functions of two equally distributed random variables (once $z_{\text{RF},1:t}$ are fixed). Thus, in general, $ \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right]. $ \end{proof} \subsection{Discussion}\label{subsec:theory_discussion} \begin{figure*}[h!] \centering \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/axis_symmetry.pdf} \label{fig:axis_symmetry} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/no_axis_symmetry.pdf} \label{fig:no_axis_symmetry} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/loc_error.pdf} \label{fig:convergence} } \caption{Probabilistic map obtained with the uni-modal RF likelihood (a) and the bi-modal radio-visual likelihood (b). (c) Localization error (average and $68\%$ confidence interval over a MC simulation) with radio-only (RF), visual-only (V) and radio-visual (RF+V) likelihood.} \label{fig:distance_set} \vspace{-0.5cm} \end{figure*} Th. \ref{th:axis_symmetric_ambiguity} statistically characterizes the axis-symmetric ambiguity, one of the main issues of RSSI-based localization systems. As \eqref{eq:path_loss_model} suggests, RSSI values bring information only on the target-receiver distance; consequently, the RF likelihood \eqref{eq:likelihood_RF} is axis-symmetric w.r.t. the receiver. Thus, in a particle-based RBE scheme, particles at the same distance from the receiver are equally weighted. When the sensing platform is static, this phenomenon generates severe convergence issues, since the belief map is toroidal with non-unique MAP estimate, as depicted in Fig. \ref{fig:axis_symmetry}. For this reason, most literature solutions propose to use multiple receivers~\cite{canton2017bluetooth}. 
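The ambiguity is easy to reproduce numerically: under a Gaussian RSSI likelihood built on any distance-only path-loss model, two particles on the same circle around a static receiver get exactly the same weight. The path-loss constants below are illustrative stand-ins for those of \eqref{eq:path_loss_model}.

```python
import math

def rssi_mean(dist, p0=-40.0, n=2.0):
    """Illustrative log-distance path-loss model: the expected RSSI
    depends on the target-receiver distance only (assumed constants)."""
    return p0 - 10.0 * n * math.log10(max(dist, 1e-9))

def rf_weight(particle, receiver, z, sigma=2.0):
    """Gaussian RF likelihood of an RSSI sample z for one particle."""
    d = math.dist(particle, receiver)
    return math.exp(-0.5 * ((z - rssi_mean(d)) / sigma) ** 2)

# Equidistant particles are indistinguishable to the RF likelihood:
receiver, z = (0.0, 0.0), -60.0
w_east = rf_weight((10.0, 0.0), receiver, z)
w_north = rf_weight((0.0, 10.0), receiver, z)
```

Here `w_east` and `w_north` coincide, while a particle at a different range gets a different weight: the likelihood is axis-symmetric around the receiver.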
When this is not feasible, the axis-symmetric effect can be mitigated by using a single moving receiver, as proven in Th. \ref{th:nonStatic_Rx}. Therefore, including the platform position in the control space, as in \eqref{eq:sensor_dynamics}, is beneficial for the convergence of RSSI-based probabilistic active tracking strategies. Another viable solution is the adoption of a bi-modal perception system. According to Th. \ref{th:biModality_benefit}, the visual likelihood \eqref{eq:likelihood_V} reduces the ambiguity in the particle weighting process. This is one main advantage of the radio-visual approach w.r.t. the radio-only counterpart. To fully exploit the disambiguation effect of visual observations, camera orientation should be included in the control space, as in \eqref{eq:sensor_dynamics}. In particular, with the control law defined in Sec. \ref{sec:method}, the platform uses camera measurements to discard wrong MAP candidates. In this way, most of the weight will concentrate around the true target position over time, as Fig. \ref{fig:no_axis_symmetry} shows. In conclusion, combining radio-visual cues in the RBE scheme and including the entire camera pose in $\mathcal{A}$ reduces the estimation ambiguities. This makes the localization procedure faster and more accurate (\mbox{Sec. \ref{subsec:proof_concept}}). Accordingly, the overall target visual detection task becomes more robust and more efficient (\mbox{Sec. \ref{sec:numerical}}). \subsection{Proof of concept - target localization}\label{subsec:proof_concept} To support the theoretical results, the control law~\eqref{eq:control_input} is used to localize a static target (i.e., \mbox{$\vect{p}_t = \vect{p}; \; \forall t$}) in a Python-based synthetic environment (see Fig. \ref{fig:distance_set}). For consistency with the hypotheses of Th. \ref{th:axis_symmetric_ambiguity}-\ref{th:biModality_benefit}, the process model is an unbiased random walk.
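For reference, one predict/update cycle of the particle-based RBE scheme used in these experiments can be sketched as below; the two likelihood callables are placeholders for the RF and visual terms of \eqref{eq:bimodal_likelihood_radioVisual}, and the noise level and seeding are illustrative.

```python
import numpy as np

def rbe_update(particles, weights, rf_lik, vis_lik, sigma=0.5, rng=None):
    """One cycle of the particle-based RBE scheme:
    - prediction: propagate every particle with an unbiased Gaussian random walk;
    - update: reweight by the bi-modal likelihood (RF term times visual term)
      and normalize. rf_lik/vis_lik map an (N, 2) array to N likelihood values."""
    rng = rng or np.random.default_rng(0)
    pred = particles + rng.normal(0.0, sigma, size=particles.shape)
    w = weights * rf_lik(pred) * vis_lik(pred)
    return pred, w / w.sum()
```

With uninformative (constant) likelihoods the weights stay uniform; informative radio or visual terms concentrate the weight, as in the theorems above.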
The receiver is kept static (i.e., \mbox{$\vect{G}_\vect{c} = \vect{0}$}), to stress the effects of the axis-symmetric ambiguity, as well as the improvements introduced by visual information. RSSI data are collected with sampling rate \mbox{$T_{\text{RF}} = 2$} ($T_a=1$) and noise level \mbox{$\sigma_{\text{RF}}=2 \; [dBm]$}; the target is placed at distance \mbox{$d(\vect{p}^+,\vect{c})=10 \; [m]$} from the platform. The proposed bi-modal radio-visual approach (RF+V) is compared with a visual-only (V) and a radio-only (RF) one. To capture the performance variability, the evaluation relies on a Monte Carlo (MC) simulation with \mbox{$N_{tests}=150$} tests of $T_W=100$ iterations each. The performance index is the localization error \mbox{$e_t = d(\vect{p},\hat{\vect{p}}_t)$}. Fig. \ref{fig:convergence} depicts the average localization error and its $68\%$ confidence interval over the MC tests. Only RF+V converges to the ground truth; namely, the estimate is unbiased and has zero variability over the MC tests \begin{equation} \begin{split} & \mu_{e_{T_W}} := \mathbb{E}[ e_{T_W} ] = 0 \; [m] \\ & \sigma_{e_{T_W}}^2 := \mathbb{E}[ (e_{T_W} - \mu_{e_{T_W}})^2 ] = 0 \; [m]. \end{split} \end{equation} The effect of the axis-symmetric ambiguity is clear in the RF approach. As proven in Th. \ref{th:axis_symmetric_ambiguity}, the MAP estimate is non-unique; hence, at any time, the estimate $\hat{\vect{p}}_t$ may not be representative of the true target position. This leads RF to converge, on average, to a biased estimate (\mbox{$\mu_{e_{T_W}} \approx 1 \; [m]$}), with remarkable inter-test variability (\mbox{$\sigma_{e_{T_W}} \approx 3 \; [m]$}). The oscillations of $e_t$ are even higher for the visual-only approach.
The main issue here is related to the limited sensing domain of the camera: as \eqref{eq:likelihood_V} suggests, in case of non-detection ($D_t=0$), only particles inside the camera FoV are updated; these might be a small portion of the entire particle set. Consequently, many particles share the same weight for a long time, leading to a stronger ambiguity than the axis-symmetry (especially in the first iterations). Therefore, the localization procedure experiences a very slow and erratic convergence behavior. \begin{figure*}[htp!] \vspace{0.2cm} \centering \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/ECDFscenario2case4.pdf} \label{fig:controlSpace} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/ECDFmodalities_new.pdf} \label{fig:modalities} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/ECDF_energy.pdf} \label{fig:energy} } \caption{MC simulation results. (a) Detection time ECDF with different control spaces: full camera pose (RF+V), no pan-tilt movements (RF+V: $\vect{G}_{\bm{\Psi}}=\vect{0}$), static platform (RF+V: $\vect{G}_{\vect{c}}=\vect{0}$). (b) Detection time ECDF with different type and amount of information employed for the tracking task: radio-visual (RF+V), visual-only (V), radio-only (RF), and radio-only with two receivers ($2$RF). (c) Energy consumption ECDF between RF+V and RF.} \label{fig:numerical_results} \end{figure*} \section{Numerical results}\label{sec:numerical} In this Section the localization problem of Sec.~\ref{subsec:proof_concept} is extended to the active tracking of a moving target. First, we numerically motivate the choice of the entire camera pose as control space (Sec.~\ref{subsec:why_control_space}), in accordance with the theoretical results of Sec.~\ref{sec:theoretical}. Secondly, we show that bi-modality achieves higher robustness and time-efficiency than several uni-modal baselines (\mbox{Sec.~\ref{subsec:why_bimodality}}).
Finally, the proposed algorithm is proven to be even more energy-efficient than a radio-only counterpart \mbox{(Sec.~\ref{subsec:finite energy})}, even though the control law~\eqref{eq:control_input} does not account for any energy-preserving term. \subsection{Setup parameters}\label{subsec:setup} As in Sec. \ref{subsec:proof_concept}, we compare RF+V, RF and V over an MC simulation with $N_{tests}=150$ tests of \mbox{$T_W=100$} iterations each. The belief map is approximated by \mbox{$N_s=500$} particles, initialized according to a uniform distribution over $\Pi$. The underlying target motion is simulated using a linear stochastic Markovian transition planar model \begin{equation} \vect{p}_{t+1} = \vect{p}_{t} + \vect{q}_{t}; \quad \vect{q}_{t} \sim \mathcal{N}(\vect{q}|\bm{\mu}_{\vect{q}},\bm{\Sigma}_{\vect{q}}), \; \bm{\mu}_{\vect{q}} \neq \vect{0}_2. \end{equation} The initial condition $\vect{p}_{0}$ is randomly changed at each MC test, as is the platform's initial planar position $\vect{c}_{\Pi,0}$. The process model is an unbiased random walk, the receiver sampling rate is $T_{\text{RF}}=10$ ($T_a=1$), the total available energy is fixed to $E_{tot}=300$, and the RSSI noise level is $\sigma_{\text{RF}}=3.5 \; [dBm]$. \subsection{Performance assessment}\label{subsec:performance_assessment} The performance assessment of the active tracking task is based on the following indices. \begin{enumerate} \item \textit{Detection success rate}: robustness index that counts the number of successful target visual detections over the MC tests \begin{equation}\label{eq:detection_success_rate} \begin{split} & \varpi = \frac{1}{N_{tests}}\sum_{j=1}^{N_{tests}} \mathds{1}_{D,j} \\ & \mathds{1}_{D,j} = \begin{cases} 1, & \text{if } \exists t_{D} \in [1,T_W] \; \text{s.t.
} D_{t_{D}}=1 \\ 0, & \text{otherwise} \end{cases} \end{split} \end{equation} \item \textit{Detection time ECDF\footnote{ECDF: Empirical Cumulative Distribution Function.}}: time-efficiency index that accounts for the time required to visually detect the target \begin{equation}\label{eq:time_ecdf} \begin{split} & p(t_{D} \leq t) = \frac{1}{N_{tests}}\sum_{j=1}^{N_{tests}} \mathds{1}_{t_{D,j}\leq t} \\ & \mathds{1}_{t_{D,j}\leq t} = \begin{cases} 1, & \text{if } t_{D,j}\leq t \\ 0, & \text{otherwise} \end{cases} \end{split} \end{equation} \item \textit{Energy consumption ECDF}: energy-efficiency index that accounts for the amount of available energy $E_{T_W} \in [0,E_{tot}]$ at the end of the task \begin{equation}\label{eq:energy_ecdf} \begin{split} & p(E_{T_W} \leq E) = \frac{1}{N_{tests}}\sum_{j=1}^{N_{tests}} \mathds{1}_{E_{T_W,j}\leq E} \\ & \mathds{1}_{E_{T_W,j} \leq E} = \begin{cases} 1, & \text{if } E_{T_W,j} \leq E \\ 0, & \text{otherwise}. \end{cases} \end{split} \end{equation} \end{enumerate} \subsection{Impact of the control space}\label{subsec:why_control_space} The controller $\mathcal{C}$, described in Sec. \ref{sec:method}, acts on a high-dimensional space comprising the full camera pose (namely, its position and orientation). This choice has been theoretically motivated in Ths.~\ref{th:nonStatic_Rx}-\ref{th:biModality_benefit} of Sec.~\ref{sec:theoretical} and it is here supported by numerical results. Specifically, the proposed control law (i.e., RF+V) is compared with two variants: the first does not allow pan-tilt movements (i.e., \mbox{$\vect{G}_{\bm{\Psi}}=\vect{0}$}) and the camera is fixed and facing downwards (namely, $\alpha_t=\beta_t=0, \; \forall t$); the second considers a static platform (i.e., $\vect{G}_{\vect{c}}=\vect{0}$). Indeed, without pan-tilt actuation, the target can be detected only when the UAV gets very close to it. 
However, this requirement is difficult to meet for every initial condition $\vect{p}_0$ and $\vect{c}_0$, due to localization errors and to the bias in the target motion. On the other hand, a static platform is not capable of getting closer to the target; hence, according to \eqref{eq:POD}-\eqref{eq:Upsilon}, many detection failures may occur. These considerations are supported by the numerical results reported in Tab.~\ref{tab:detection_rate} (top row): the detection success rate of RF+V is $77 \%$, against $39 \%$ of the case $\vect{G}_{\bm{\Psi}}=\vect{0}$ and $29 \%$ of the case $\vect{G}_{\vect{c}}=\vect{0}$. Moreover, Fig. \ref{fig:controlSpace} shows a remarkable difference in the detection time ECDF between the proposed solution and the other two versions. In conclusion, considering the entire camera pose as control space induces more robustness and more time-efficiency in the target visual detection task. \subsection{Impact of the sensing modalities}\label{subsec:why_bimodality} \begin{table}[t] \centering \caption{Detection success rate of the compared approaches.} \label{tab:detection_rate} \begin{tabular}{c| c|c|c} \hline \rowcolor{lightgray} & RF+V & RF+V: $\vect{G}_{\bm{\Psi}}=\vect{0}$ & RF+V: $\vect{G}_{\vect{c}}=\vect{0}$ \\ $\varpi$ & $77\%$ & $39\%$ & $29\%$ \\ \hline \rowcolor{lightgray} & RF & V & 2RF \\ $\varpi$ & $55\%$ & $26\%$ & $64\%$ \\ \hline \end{tabular} \end{table} As shown in Sec. \ref{subsec:proof_concept}, RF+V achieves a more accurate localization than RF and V. This, combined with the control law $\mathcal{C}$, allows the platform to get closer to the target and to move the camera so that the target is inside the FoV. These two factors induce higher detection probabilities, according to \eqref{eq:POD}-\eqref{eq:Upsilon}. As a consequence, active tracking is more robust (see Tab. \ref{tab:detection_rate}) and more time-efficient (see Fig.
\ref{fig:modalities}) when adopting the proposed bi-modal strategy, rather than the uni-modal counterparts. One may argue that the superior performance of the bi-modal strategy is due to the larger amount of information involved. To this aim, we compared RF+V with $2$RF: instead of combining radio-visual data (heterogeneous sensor fusion), we aggregate observations coming from two receivers (homogeneous radio-only sensor fusion). Thus, the measurements of $2$RF are \begin{equation}\label{eq:2RF_vector} \vect{z}_{2\text{RF},t} = \begin{bmatrix} z_{\text{RF}^{(1)},t} & z_{\text{RF}^{(2)},t} \end{bmatrix}^\top \end{equation} where $z_{\text{RF}^{(\ell)},t}$ identifies the RSSI sample collected by the $\ell$-th receiver at time $t$, according to the model \eqref{eq:obs_Rx}. Supposing independence between the observations of the two receivers, the likelihood of $2$RF becomes \begin{equation} p(\vect{z}_{2\text{RF},t} |\vect{p}_t , \vect{s}_t ) = \prod_{\ell=1}^2 p(z_{\text{RF}^{(\ell)},t}|\vect{p}_t, \vect{s}_t), \end{equation} with $p(z_{\text{RF}^{(\ell)},t}|\vect{p}_t, \vect{s}_t)$ as in \eqref{eq:likelihood_RF}. Tab. \ref{tab:detection_rate} and Fig. \ref{fig:modalities} show that $2$RF is more robust and time-efficient than RF, but less than RF+V. This means that the tracking performance is not only affected by the \emph{quantity}, but also by the \emph{quality}, of the aggregated information. In particular, exploiting complementary cues (e.g. radio and visual ones) is more advantageous than combining homogeneous data extracted from different sources; in fact, one can prove that the axis-symmetric ambiguity is not reduced when applying multiple receivers placed at the same location. 
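The robustness and time-efficiency indices of Sec.~\ref{subsec:performance_assessment} reduce to simple empirical estimates; a sketch follows, where failed tests are encoded with an infinite detection time (a convention of this snippet, not of the paper).

```python
import numpy as np

def ecdf(samples):
    """Empirical CDF of a set of MC outcomes (e.g. detection times t_D):
    returns the sorted values x and the fractions p(sample <= x)."""
    x = np.sort(np.asarray(samples, dtype=float))
    return x, np.arange(1, len(x) + 1) / len(x)

def detection_success_rate(detection_times, t_w):
    """Fraction of MC tests with a detection within the budget T_W (varpi);
    tests with no detection carry t_D = inf and never count as successes."""
    return float(np.mean(np.asarray(detection_times, dtype=float) <= t_w))
```

The same ECDF helper applies unchanged to the residual-energy samples $E_{T_W}$ used in the energy-efficiency comparison.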
\subsection{Energy-efficiency}\label{subsec:finite energy} As mentioned in Sec.~\ref{sec:method}, the proposed control law~\eqref{eq:control_input} does not include any explicit energy-aware term; hence, $\mathcal{C}$ is not expected to generate energy-preserving platform movements (unless $\vect{G}_{\vect{c}}$ is kept small, but this has been proven to be ineffective). Therefore, when the total available energy runs out, that is \begin{equation}\label{eq:out_of_energy_condition} \exists t \in [1,T_W] \text{ s.t. } E_{t}=0, \end{equation} the platform becomes static and the detection capabilities dramatically decrease (see \mbox{Sec. \ref{subsec:why_control_space}}). In the absence of an energy-aware control technique, the only way to avoid energy run-out is by producing accurate target position estimates. In fact, if $\hat{\vect{p}}_t \approx \vect{p}_t$, the controller $\mathcal{C}$ drives the platform towards the target through a smooth and direct trajectory, which is more energy-efficient than irregular patterns, according to \eqref{eq:energy_model}. Given this premise, and the localization superiority of RF+V w.r.t. RF (Secs.~\ref{sec:method} and \ref{subsec:why_bimodality}), we expect to have more energy preservation in RF+V than in RF. Indeed, the two energy consumption ECDFs in Fig. \ref{fig:energy} satisfy \begin{equation} p(E_{T_W}^{\text{RF+V}} \leq E) < p(E_{T_W}^{\text{RF}} \leq E);\; \forall E \in [0,E_{tot}]. \end{equation} This means that RF drains more energy than RF+V. In particular, the out-of-energy condition \eqref{eq:out_of_energy_condition} occurs more often in RF ($76\%$ of the tests) than in RF+V ($68\%$). \section{Conclusion }\label{sec:conclusion} This work discusses the problem of visually detecting an RF-emitting target under an energy constraint. To this aim, we propose a probabilistic active sensing approach, leveraging radio-visual information.
A theoretical discussion highlights the main limitations of radio-only localization, as well as the benefits introduced by the bi-modal strategy. These findings are then validated through numerical results, from which bi-modality is shown to be more robust and more time- and energy-efficient than uni-modal camera and radio-only counterparts. Bi-modality is proven to be even more effective than homogeneous sensor fusion strategies, since the combination of two receivers does not achieve performance comparable to that of the proposed radio-visual integration. Future work will be devoted to the design of specific energy-aware terms to be included in the optimal controller design. In addition, it is worth comparing the proposed exploitative strategy with information-seeking (explorative) ones. Finally, we are planning to carry out laboratory experiments to evaluate the algorithm on a real robotic platform. \bibliographystyle{IEEEtran} \section{Introduction}\label{sec:intro} {Active sensing} consists of controlling the sensor state to gather (more) informative observations~\cite{radmard2017active} or to accomplish a task (e.g., find a target within a certain time budget~\cite{MTS_Rinner}). This control framework has been largely used to solve autonomous target search and tracking~\cite{shahidian2017single}, often relying on probabilistic approaches~\cite{detection_tracking_survey}: data from onboard sensors and Recursive Bayesian Estimation (RBE) schemes~\cite{smith2013MonteCarlo} are used to generate a probabilistic map (also known as belief map), encoding the knowledge about potential target locations. The control problem is then cast as the optimization of a suitable objective function built upon the probabilistic map (e.g., time to detection~\cite{MTS_Rinner}, estimate uncertainty~\cite{shahidian2017single}, distance to the target~\cite{hasanzade2018rf}).
Stochastic motion and observation models~\cite{radmard2017active} account for the uncertainties on target dynamics and on the perception process, and make it possible to treat no-detection observations~\cite{negative_information}. For these reasons, probabilistic approaches are suitable for real-life scenarios, which are also characterized by energy costs associated with the movement of the active sensing platform~\cite{liu2018energy}. \noindent \textbf{Related works -} Typical modalities for active sensing include vision, audio and radio~\cite{radmard2017active,haubner2019active,shahidian2017single}. Visual-based tracking provides high accuracy~\cite{aghajan2009multi} and does not require the target to use any emitting device. Occlusions and Field of View (FoV) directionality~\cite{mavrinac2013modeling} limit the range, applicability and success of camera-only platforms~\cite{detection_tracking_survey}, especially for applications where time of detection is critical (e.g., search and rescue missions~\cite{SAR_radio}). To collect measurements over wider ranges, and reduce the search phase, other technologies can be used, such as acoustic~\cite{haubner2019active} or radio-frequency (RF)~\cite{shahidian2017single} signals. Despite the high localization accuracy of acoustic signals~\cite{haubner2019active}, sound pollution and extra hardware requirements (e.g., microphone arrays) are critical pitfalls of this technology~\cite{zafari2019survey}. Conversely, RF signals are energy efficient, have large reception ranges ($\sim100 \; [m]$), and low hardware requirements, since platforms need not be equipped with multiple receivers; moreover, the Received Signal Strength Indicator (RSSI) is extracted from standard data packet traffic~\cite{zanella2016best}.
For these reasons, RSSI-based localization systems appear widely in the literature and in commercial applications, even though environmental interference (e.g., clutter and multi-path distortions) often limits their accuracy~\cite{zanella2016best}. Multi-modal sensor fusion techniques have been shown to overcome the inadequacies of uni-modal approaches, being more robust and reliable~\cite{DeepRL_gazeControl}. \noindent \textbf{Contributions -} This paper exploits the complementary benefits of radio and visual cues for visually detecting a radio-emitting target with an aerial robot, equipped with a radio receiver and a Pan-Tilt (PT) camera. We formulate the control problem within a probabilistic active sensing framework, where camera measurements refine radio ones within an RBE scheme, used to keep the map updated. The fusion of RF and camera sensor data for target search and tracking is an open problem and the literature addressing this task is sparse. Indeed, to the best of the authors' knowledge, this is the first attempt to combine radio and visual measurements within a probabilistic active sensing framework. Furthermore, unlike existing solutions operating on limited control spaces (e.g., platform position~\cite{shahidian2017single} or camera orientation~\cite{DeepRL_gazeControl}), we propose a gradient-based optimal control, defined on a continuous space comprising both the platform position and the camera orientation. Theoretical and numerical analyses are provided to validate the effectiveness of the proposed algorithm. Bi-modality is shown to increase the target localization accuracy; this, together with the availability of an integrated high-dimensional control space, leads to higher detection success rates, as well as superior time and energy efficiency with respect to the radio-only and vision-only counterparts. \section{Problem statement}\label{sec:problem_formulation} Fig.
\ref{fig:scenario} shows the main elements of the problem scenario, namely the target and the sensing platform\footnote{Bold letters indicate (column) vectors, if lowercase, matrices otherwise. With $\vect{I}_n$ we define the $n$-dimensional identity matrix, while $\vect{0}_n$ is the zero vector of dimension $n$. \\ Regarding the statistical distributions used in this paper, $\chi^2(n)$ denotes the chi-squared distribution with $n$ degrees of freedom, and $\mathcal{N}(x|\mu,\sigma^2)$ is the Gaussian distribution over the random variable $x$ with expectation $\mu$ and variance $\sigma^2$.\\ With the shorthand notation $z_{t_0:t_1}$ we indicate a sequence of measurements from time instant $t_0$ to $t_1$, namely $\left\lbrace z_k \right\rbrace_{k=t_0}^{t_1}$.\\ The Euclidean distance between vectors $\vect{a}, \vect{b} \in \mathbb{R}^n$ is denoted as \begin{equation} d(\vect{a},\vect{b}) = \lVert \vect{a} - \vect{b} \rVert_2 = \left( \sum_{i=1}^n \left( a(i) - b(i)\right)^2 \right)^{1/2} \end{equation} where $a(i)$ is the $i$-th component of $\vect{a}$. Given a plane $\Pi$ and a vector $\vect{a} \in \mathbb{R}^n$, we denote as $\vect{a}_\Pi$ the orthogonal projection of $\vect{a}$ onto $\Pi$.}. \noindent \textbf{Target - }The radio-emitting target moves on a planar environment \mbox{$\Pi \subset \mathbb{R}^2$}, according to a (possibly) non-linear stochastic Markovian state transition model~\cite{radmard2017active} \begin{equation}\label{eq:target_dyn} \vect{p}_{t+1} = f(\vect{p}_{t},\bm{\eta}_{t}) \end{equation} where $\vect{p}_t \in \Pi$ is the target position at time $t$, referred to the global 3D reference frame $\mathcal{F}_0$; when expressed in $\mathbb{R}^3$, it is referred to as $\vect{p}_t^+ = [ \, \vect{p}_t^\top \quad 0 \,]^\top$. The uncertainty on the underlying target movements is captured by the distribution of the stochastic process noise $\bm{\eta}_t$.
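As a point of reference for the later theoretical analysis, a minimal sketch of one common instance of this transition model, the unbiased random walk, is given below; the noise level and horizon are illustrative choices, not values from the paper.

```python
import numpy as np

def random_walk_step(p, sigma, rng):
    # One step of p_{t+1} = p_t + eta_t, with eta_t ~ N(0, sigma^2 * I_2).
    return p + rng.normal(0.0, sigma, size=2)

# Propagate a planar target for 100 steps (illustrative parameters).
rng = np.random.default_rng(0)
p = np.zeros(2)
trajectory = [p.copy()]
for _ in range(100):
    p = random_walk_step(p, sigma=0.5, rng=rng)
    trajectory.append(p.copy())
trajectory = np.asarray(trajectory)
```

Each step adds isotropic Gaussian noise, so the marginal of $\vect{p}_t$ spreads as $t\sigma^2$, the fact exploited in the proofs of Sec.~\ref{sec:theoretical}.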
The probabilistic form of \eqref{eq:target_dyn}, namely $p(\vect{p}_{t+1}|\vect{p}_{t})$, is known as the process model~\cite{radmard2017active}. \noindent \textbf{Sensing platform - }The sensing platform is an aerial vehicle (UAV), equipped with an omnidirectional radio receiver and a PT camera on gimbal, and endowed with processing capabilities and a real-time target detector~\cite{yolo}. The state of the platform is the camera pose, namely \begin{equation}\label{eq:platform_state} \begin{split} & \vect{s}_t = \begin{bmatrix} \vect{c}_t^\top & \bm{\psi}_t^\top \end{bmatrix}^\top, \\ & \vect{c}_t \in \mathbb{R}^3; \quad \bm{\psi}_t = \begin{bmatrix} \alpha_t & \beta_t \end{bmatrix}^\top \in [-\pi/2+\theta,\pi/2-\theta]^2. \end{split} \end{equation} The UAV position $\vect{c}_t$ is referred to $\mathcal{F}_0$; it is supposed to coincide with the camera focal point and its altitude is fixed (i.e., non-controllable); $\alpha_t$ (resp. $\beta_t$) is the pan (resp. tilt) angle w.r.t. the camera inertial reference frame $\mathcal{F}_u$ (obtained by translating $\mathcal{F}_0$ by $\vect{c}_t$); $\theta$ is the half-angle of view. The camera state follows the linear deterministic Markovian transition model~\cite{radmard2017active} \begin{equation}\label{eq:sensor_dynamics} \vect{s}_{t+1} = \vect{s}_t + \vect{u}_t; \; \vect{u}_t = \begin{bmatrix} \vect{u}_{\vect{c},t}^\top \quad \vect{u}_{\bm{\psi},t}^\top \end{bmatrix}^\top \in \mathcal{A} \end{equation} where $\mathcal{A}$ is the control space, comprising all possible control inputs that can be applied to the platform to regulate its position and attitude. In particular, being the UAV altitude fixed, we focus on a planar control $\vect{u}_{\vect{c}_\Pi,t}$, acting on the projection $\vect{c}_{\Pi,t}$.
Following real-life scenarios, the UAV movements are considered energy-consuming, with a linear dependence on the flown distance~\cite{liu2018energy}, that is \begin{equation}\label{eq:energy_model} \Delta E_t = d(\vect{c}_{t+1},\vect{c}_{t}), \end{equation} where $\Delta E_t$ is the energy used to move the platform from $\vect{c}_{t}$ to $\vect{c}_{t+1}$. The total available energy is denoted as $E_{tot}$. \begin{figure}[t] \centering \includegraphics[scale=0.3, trim={0 0 0 38}, clip]{images/scenario.pdf} \vspace{-0.9cm} \caption{Problem scenario. A target moves in a planar environment and establishes a radio signal communication with a camera-embedded UAV. The objective is to control the camera pose so that the target is visually detected.} \label{fig:scenario} \end{figure} Motivated by the long reception ranges of radio signals~\cite{zanella2016best}, we suppose the target to be always within the range of the platform receiver; from received data packets, the RSSI value $r_t\in\mathbb{R}$ is extracted. This is related to the platform-target distance $d(\vect{c}_t,\vect{p}_t^+)$ according to the log-distance path loss model~\cite{goldsmith_2005} \begin{equation}\label{eq:path_loss_model} r_t = \kappa - 10 n\log_{10} \left( d(\vect{c}_t,\vect{p}_t^+) \right). \end{equation} The parameters $\kappa$ and $n$ are estimated via offline calibration procedures~\cite{zanella2016best}; they represent the RSSI at a reference distance (e.g., $1\;[m]$) and the attenuation gain, respectively.
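The log-distance model and its noisy RSSI observation are straightforward to simulate; in the sketch below, the values of $\kappa$, $n$ and the noise level are illustrative placeholders for the calibrated ones.

```python
import numpy as np

def rssi(d, kappa=-40.0, n=2.0):
    # Log-distance path loss: r = kappa - 10 * n * log10(d).
    # kappa is the RSSI at the 1 m reference distance, n the attenuation gain
    # (both illustrative here; in practice they come from offline calibration).
    return kappa - 10.0 * n * np.log10(d)

def rssi_measurement(d, sigma_rf, rng, kappa=-40.0, n=2.0):
    # Noisy radio observation z_RF = r + v, with v ~ N(0, sigma_RF^2).
    return rssi(d, kappa, n) + rng.normal(0.0, sigma_rf)
```

At the reference distance, `rssi(1.0)` returns $\kappa$; every additional decade of distance lowers the predicted RSSI by $10n\;[dBm]$.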
Thus, defining $T_{\text{RF}}$ as the receiver sampling interval, the radio observation model is \begin{equation}\label{eq:obs_Rx} z_{\text{RF},t} = \begin{cases} r_t + v_{\text{RF},t} , & t = M T_{\text{RF}}, \; M \in \mathbb{N}\\ \emptyset, & \text{otherwise} \end{cases} \end{equation} where $v_{\text{RF},t} \sim \mathcal{N}\left(v| 0,\sigma_{\text{RF}}^2 \right)$ is the noise in RSSI measurements, and $\emptyset$ is an empty observation (i.e., a measurement without target information). Camera measurements are modeled through perspective projection geometry~\cite{aghajan2009multi} \begin{equation}\label{eq:obs_c} \vect{z}_{c,t} = \begin{cases} \vect{P}(\vect{s}_t)\widetilde{\vect{p}}_t + \vect{v}_{c,t}, & D_t =1 \text{ and } t = N T_a, \; N \in \mathbb{N}\\ \emptyset, & \text{otherwise} \end{cases} \end{equation} where $\vect{v}_{c,t} \sim \mathcal{N}\left(\vect{v}| \vect{0}_2,\bm{\Sigma}_{c} \right)$ is the noise of camera observations, $\widetilde{\vect{p}}_t$ is the homogeneous representation of $\vect{p}_t^+$ and \begin{equation} \vect{P}(\vect{s}_t) = \begin{bmatrix} \vect{I}_2 & \vect{0}_2 \end{bmatrix} \vect{K} \begin{bmatrix} \vect{R}(\bm{\psi_t}) & \vect{c}_t \end{bmatrix} \in \mathbb{R}^{2 \times 4} \end{equation} is the camera projection matrix that maps $\widetilde{\vect{p}}_t$ onto the image plane $\mathcal{I}$. $\vect{P}(\vect{s}_t)$ depends on $\vect{K} \in \mathbb{R}^{3 \times 3}$, the matrix of intrinsic parameters, and $\vect{R}(\bm{\psi_t})$, the camera rotation matrix w.r.t. $\mathcal{F}_0$. The camera sampling interval $T_a$ satisfies \begin{equation}\label{eq:sampling_time_relation} T_{\text{RF}} = \nu T_a, \; \nu>1 \end{equation} since radio reception is typically characterized by longer sampling intervals than cameras~\cite{zhuang2016smartphone,vollmer2011high}. Without any loss of generality, we will consider a normalized camera sampling interval (i.e., $T_a=1$). Finally, a successful target detection is indicated by the value $1$ of the binary variable $D_t$.
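The $2 \times 4$ map $\vect{P}(\vect{s}_t)$ can be assembled directly from its factors. A minimal sketch follows; note that the pan/tilt rotation convention (pan about $z$, then tilt about $y$) is an assumption, since the axis order is not fixed in the text.

```python
import numpy as np

def rotation(alpha, beta):
    # Pan (alpha, about z) followed by tilt (beta, about y): assumed convention.
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])
    return Rz @ Ry

def projection_matrix(K, c, alpha, beta):
    # P(s) = [I_2 0] K [R(psi) c]: a 2x4 map acting on homogeneous p~.
    Rc = np.hstack([rotation(alpha, beta), c.reshape(3, 1)])  # 3x4 block
    return np.eye(2, 3) @ K @ Rc                              # keep first 2 rows
```

With $\vect{K}=\vect{I}_3$, zero angles and the camera at the origin, the map reduces to extracting the first two homogeneous coordinates.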
\begin{problem} With this formalism, the \emph{visual target detection problem} can be formulated as the control of the camera state $\vect{s}_t$ (through $\vect{u}_t$) to realize the event $D_t$. \end{problem} \section{Methodology}\label{sec:method} To solve the problem defined in Sec. \ref{sec:problem_formulation}, a probabilistic bi-modal active sensing approach is proposed. As shown in Fig.~\ref{fig:pipeline}, radio-visual measurements are aggregated to form a bi-modal likelihood function; this is used to keep the target belief map updated through an RBE scheme. Finally, an optimal controller is fed with the probabilistic map and generates the platform control input. \noindent \textbf{Probabilistic map -} Given the observations $\vect{z}_{1:t}$, RBE provides a two-stage procedure to recursively update the target belief state, namely the posterior distribution $p(\vect{p}_t| \vect{z}_{1:t})$. The prediction stage uses the process model \eqref{eq:target_dyn} to obtain the prior of the target position via the \mbox{Chapman-Kolmogorov} equation~\cite{radmard2017active} \begin{equation}\label{eq:Chapman-Kolmogorov} p(\vect{p}_{t}| \vect{z}_{1:t-1}) = \int p(\vect{p}_{t}| \vect{p}_{t-1}) p(\vect{p}_{t-1}| \vect{z}_{1:t-1}) d \vect{p}_{t-1}. \end{equation} As a new observation $\vect{z}_{t}$ becomes available, Bayes' rule~\cite{smith2013MonteCarlo} updates the target belief state \begin{equation}\label{eq:bayes} p(\vect{p}_{t}| \vect{z}_{1:t}) = \frac{ p(\vect{p}_{t}| \vect{z}_{1:t-1}) p(\vect{z}_{t}| \vect{p}_{t}) }{ \int p(\vect{z}_{t}| \vect{p}_{t}) p(\vect{p}_{t}| \vect{z}_{1:t-1}) d \vect{p}_{t} }, \end{equation} with $p(\vect{p}_0)$ as the initial target state belief.
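Numerically, this recursion is typically approximated with a particle filter. A minimal sketch of one prediction/update cycle with systematic resampling, assuming a random-walk process model and a generic likelihood callable (both assumptions for illustration):

```python
import numpy as np

def pf_step(particles, weights, z, likelihood, sigma, rng):
    # Prediction: propagate each particle through the process model
    # (here, an unbiased random walk with std sigma).
    particles = particles + rng.normal(0.0, sigma, size=particles.shape)
    # Update: reweight by the likelihood of the new observation.
    weights = weights * np.array([likelihood(z, p) for p in particles])
    weights = weights / weights.sum()
    # Systematic resampling, to mitigate weight degeneracy.
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```

After resampling, the weights are reset to $1/N_s$ and the particle cloud concentrates where the likelihood is high.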
Particle filtering~\cite{smith2013MonteCarlo} is a Monte Carlo approximation of the density $p(\vect{p}_t|\vect{z}_{1:t})$ with a sum of $N_s$ Dirac functions centered at the particles $\{ \vect{p}_t^{(i)} \}_{i=1}^{N_s}$, that is \begin{equation}\label{eq:posterior_PF} p(\vect{p}_t|\vect{z}_{1:t}) \approx \sum_{i=1}^{N_s} w_t^{(i)} \delta \left( \vect{p}_t - \vect{p}_t^{(i)} \right), \end{equation} where $w_t^{(i)}$ is the weight of particle $\vect{p}_t^{(i)}$, with \begin{subequations}\label{eq:PF} \begin{align} & \vect{p}_t^{(i)} = f\left( \vect{p}_{t-1}^{(i)},\bm{\eta}_{t-1} \right) \label{eq:prediction} \\ & w_t^{(i)} \propto w_{t-1}^{(i)}p\left(\vect{z}_t|\vect{p}_t^{(i)}\right) \label{eq:update} \end{align} \end{subequations} for $i=1,\dots,N_s$. The $i$-th weight is therefore proportional to the likelihood function evaluated on the $i$-th particle, namely $p(\vect{z}_t|\vect{p}_t^{(i)})$. Under this scheme, a few particles concentrate most of the weight over time, a phenomenon called degeneracy. To alleviate this problem, systematic resampling is adopted, which draws a new particle set from the current one: particles with higher weights are more likely to be sampled, but the procedure preserves some low-weight particles as well~\cite{smith2013MonteCarlo}. \tikzset{ block/.style = {draw, fill=white, rectangle, minimum height=3em, minimum width=3em}, block_transp/.style = {rectangle, minimum height=3em, minimum width=3em}, sum/.style= {draw, fill=white, circle, node distance=1cm}} \begin{figure}[t!]
\center \begin{tikzpicture}[auto, node distance=3cm,>=latex',scale=0.8, transform shape] \node [block] (g) {$f(\vect{p}_{t-1},\bm{\eta}_{t-1})$}; \node [block_transp, right of = g, xshift=-1cm] (omega) { $\bm{\eta}_{t-1}$}; \node [block, below of = g, yshift=1.5cm] (time_update_g) { $z^{-1}$}; \node [block, below of = g,yshift=-1cm,xshift=-4cm] (sensing) { \nlenv{sensing\\ units}}; \node [block, right of = sensing, xshift=1.5cm] (sensor dynamics) { $\vect{s}_t = \vect{s}_{t-1} + \vect{u}_{t-1}^*$}; \node [block, below of = sensor dynamics, yshift=1.2cm,xshift=1cm] (time_update_sensor) { $z^{-1}$}; \node [block, below of = sensing, xshift=-1.2cm, yshift=-1.2cm] (RF_likelihood) { $p(z_{\text{RF},t}|\vect{p}_t,\vect{s}_t)$}; \node [block, below of = sensing, xshift=1.2cm,yshift=-1.2cm] (visual_likelihood) { $p(\vect{z}_{c,t}|\vect{p}_t,\vect{s}_t)$}; \node [sum, below of = sensing, yshift = -4.5cm] (prod) { $\times$}; \node [block, below of = prod, yshift = 1cm] (likelihood) { $p(\vect{z}_t|\vect{p}_t,\vect{s}_t)$}; \node [block, below of = sensor dynamics, yshift = -4.5cm] (RBE) { RBE}; \node [block, above of = RBE, yshift=0.3cm] (control) { $\argmin{ J(\vect{s}_t + \vect{u}_t) }$}; \draw [->] (omega.west) -- node{} (g.east); \draw [->] (g.west) -- ++ (-2.9,0) -- node[name=output_g]{ $\vect{p}_{t}$} (sensing.north); \draw [->] ($(g.west)+(-0.5,0)$) |- (time_update_g.west) ; \draw [->] (time_update_g.east) -- ++ (1,0) -- node[]{ $\vect{p}_{t-1}$} ++ (0,1.2) -- ($(g.east)-(0,0.3)$); \draw [->] (sensor dynamics.west) -- node[xshift=-0.2cm]{$\vect{s}_{t}$} (sensing.east); \draw [->] ($(sensor dynamics.west)+(-1,0)$) |- (time_update_sensor.west) ; \draw [->] (time_update_sensor.east) -- ++ (0.3,0) -- node[]{ $\vect{s}_{t-1},\vect{u}_{t-1}^*$} ++ (0,1.8) -- ($(sensor dynamics.east)$); \draw [->] ($(time_update_sensor.west) + (-0.5,0)$) -- ($(control.north)$) ; \draw [->] ($(sensing.south) -(0.3,0)$) -- ++ (0,-2) -- ++ (-0.9,0) --node[]{ $z_{\text{RF},t}$} 
(RF_likelihood.north); \draw [->] ($(sensing.south) +(0.3,0)$) -- ++ (0,-2) -- ++ (0.9,0) -- node[xshift=-0.7cm]{ $\vect{z}_{c,t}$} (visual_likelihood.north); \draw [->] (RF_likelihood.south) |- node{} (prod.west); \draw [->] (visual_likelihood.south) |- node{} (prod.east); \draw [->] (prod.south) -- node{} (likelihood.north); \draw [->] (likelihood.east) -- node{} (RBE.west); \draw [->] (RBE.north) -- node[]{ $p(\vect{p}_{t}|\vect{z}_{1:t})$} (control.south); \draw [->] ($(RBE.north)+(0,0.5)$) -- ++ (1,0) -- ++ (0,-1.05) -- (RBE.east); \draw [->] ($(control.north)+(1,0)$) -- node[]{ $\vect{u}_t^*$}(time_update_sensor.south); \begin{scope}[on background layer] \draw [fill=lightgray,dashed] ($(time_update_g) + (-2,-0.7)$) rectangle ($(omega) + (0.5,0.7)$); \draw [fill=lightgray,dashed] ($(RF_likelihood) + (-1.2,0.8)$) rectangle ($(likelihood) + (2.3,-0.7)$); \draw [fill=lightgray,dashed] ($(control.west) + (-0.2,0.8)$) rectangle ($(control.east) + (0.2,-0.8)$); \draw [fill=lightgray,dashed] ($(sensor dynamics.west) + (-1.8,0.7)$) rectangle ($(time_update_sensor.east) + (0.5,-0.7)$); \draw [fill=lightgray,dashed] ($(RBE.west) + (-1,0.7)$) rectangle ($(RBE.east) + (1,-0.7)$); \end{scope} \draw [dashed] ($(RF_likelihood) + (-1.4,5.1)$) rectangle ($(control) + (2.2,-4.6)$); \node [block_transp, above of = g, yshift=-2cm] (target_annotation) {\textit{Target dynamics}}; \node [block_transp, below of = sensor dynamics, xshift=-1.7cm,yshift=0.25cm] (sensor_annotation) { \textit{Platform dynamics}}; \node [block_transp, below of = likelihood, yshift=2cm] (likeliood_annotation) {\textit{Bi-modal likelihood}}; \node [block_transp, below of = control, yshift=2cm,xshift=1cm] (control_annotation) { \textit{Control}}; \node [block_transp, below of = RBE, yshift=2cm] (RBE_annotation) { \textit{Probabilistic map}}; \node [block_transp, above of = sensor dynamics, yshift=-1.8cm] (sensor_annotation) { \textit{Sensing platform}}; \end{tikzpicture} \caption{ Scheme of the proposed 
Probabilistic Radio-Visual Active Sensing algorithm.} \label{fig:pipeline} \end{figure} \noindent \textbf{Bi-modal likelihood - } In active sensing frameworks, the configuration of the sensing robot is not constant; hence, the likelihood function $p(\vect{z}_t|\vect{p}_t)$ should also account for the platform state, that is $p(\vect{z}_t|\vect{p}_t,\vect{s}_t)$. Given this premise, and recalling the stochastic characterization of the radio observation model \eqref{eq:obs_Rx}, the \emph{RF likelihood} is \begin{equation}\label{eq:likelihood_RF} p(z_{\text{RF},t}|\vect{p}_t,\vect{s}_t) = \begin{cases} \mathcal{N}\left(z| r_t,\sigma_{\text{RF}}^2 \right), & t=MT_{\text{RF}} \\ 1, & \text{otherwise}. \end{cases} \end{equation} Note that $z_{\text{RF},t}$ updates the belief map only when it carries information on the target position (i.e., \mbox{$z_{\text{RF},t} \neq \emptyset$}, at $t=MT_{\text{RF}}$). To define the visual likelihood from the observation model \eqref{eq:obs_c}, we consider the detection event as a Bernoulli random variable with success probability \begin{equation}\label{eq:POD} p(D_t=1|\vect{p}_t,\vect{s}_t) = \begin{cases} \Upsilon(\vect{p}_t,\vect{s}_t), & \vect{p}_t \in \Phi(\vect{s}_t) \\ 0, & \text{otherwise} \end{cases} \end{equation} with $\Phi(\vect{s}_t)$ the projection of the camera FoV onto $\Pi$ (see Fig. \ref{fig:scenario}), and \begin{equation}\label{eq:Upsilon} \Upsilon(\vect{p}_t,\vect{s}_t) = \left[ 1 + e^{\gamma(d(\vect{c}_t,\vect{p}_t^+)/f - \epsilon)} \right]^{-1} \left( 1 + e^{-\gamma \epsilon} \right). \end{equation} Thus, the target can be detected only if inside the camera FoV and, from \eqref{eq:Upsilon}, the detection probability decreases as the resolution $d(\vect{c}_t,\vect{p}_t^+)/f$ at which the target is observed grows, where $f$ is the camera focal length; $\epsilon > 0$ is the resolution at which the target is no longer well detectable, and $\gamma > 0$ governs the rate at which the target detectability decreases.
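The probability \eqref{eq:Upsilon} is a logistic function of the observation resolution; a small sketch follows, where the values of $f$, $\epsilon$ and $\gamma$ used in the checks are illustrative.

```python
import numpy as np

def detection_probability(d, f, eps, gamma):
    # Upsilon(p, s): logistic decay in the resolution d/f, normalized so
    # that the probability equals 1 when d = 0.
    res = d / f
    return (1.0 + np.exp(-gamma * eps)) / (1.0 + np.exp(gamma * (res - eps)))
```

The function is monotonically decreasing in the platform-target distance, matching the intuition that far targets are harder to detect.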
Then, the \textit{visual likelihood} is \begin{equation}\label{eq:likelihood_V} \begin{split} & p(\vect{z}_{c,t}|\vect{p}_t,\vect{s}_t) = p(\vect{z}_{c,t}|\vect{p}_t,\vect{s}_t,D_t)p(D_t|\vect{p}_t,\vect{s}_t) = \\ & \begin{cases} \mathcal{N}(\vect{z}| \vect{P}(\vect{s}_t)\widetilde{\vect{p}}_t,\bm{\Sigma}_c)\Upsilon(\vect{p}_t,\vect{s}_t), & D_t = 1,\; \vect{p}_t \in \Phi(\vect{s}_t) \\ 0, & D_t = 1,\; \vect{p}_t \not\in \Phi(\vect{s}_t) \\ 1-\Upsilon(\vect{p}_t,\vect{s}_t), & D_t = 0,\; \vect{p}_t \in \Phi(\vect{s}_t) \\ 1, & D_t = 0,\; \vect{p}_t \not\in \Phi(\vect{s}_t) \end{cases} \end{split} \end{equation} By aggregating radio and visual likelihoods, the following \emph{bi-modal likelihood} is obtained \begin{equation}\label{eq:bimodal_likelihood_radioVisual} p(\vect{z}_t |\vect{p}_t , \vect{s}_t ) = p(z_{\text{RF},t} |\vect{p}_t ,\vect{s}_t )p(\vect{z}_{c,t} |\vect{p}_t , \vect{s}_t ). \end{equation} Then, \eqref{eq:bimodal_likelihood_radioVisual} is applied to the update stage \eqref{eq:update} of the particle filter that implements the RBE scheme. \noindent \textbf{Controller -} The platform control input is computed by solving the following optimization program \begin{equation}\label{eq:control_input} \begin{split} \mathcal{C}: \; \vect{u}_t^* = & \argmin_{\vect{u}_t \in \mathcal{A}} J(\vect{s}_t+\vect{u}_t)\\ & \text{s.t. } d \left( \vect{c}_{t+1},\vect{c}_{t} \right) \leq E_t \end{split} \end{equation} where $E_t$ is the residual energy at time $t$, computed as \begin{equation}\label{eq:residual_energy} E_t = E_{tot} - \sum_{k=0}^{t-1} \Delta E_k = E_{tot} - \sum_{k=0}^{t-1} d(\vect{c}_{k+1},\vect{c}_k).
\end{equation} The cost function is chosen as \begin{equation}\label{eq:cost_function} J(\vect{c}_{\Pi,{t+1}}) = \frac{1}{2} d\left(\vect{c}_{\Pi,{t+1}}, \hat{\vect{p}}_t\right)^2, \end{equation} where $\vect{c}_{\Pi,t+1}$ is the projection of $\vect{c}_{t+1}$ onto $\Pi$, and $\hat{\vect{p}}_t$ is the MAP estimate\footnote{An alternative would be the MMSE estimate~\cite{hasanzade2018rf}, namely \mbox{$\hat{\vect{p}}_t = \sum_{i=1}^{N_s} w_t^{(i)} \vect{p}_t^{(i)}$.} This choice leads to smoother trajectories, but it makes $\mathcal{C}$ more prone to local minima~\cite{radmard2017active}.} of the target position; formally \begin{equation}\label{eq:MAP} \hat{\vect{p}}_t = \vect{p}_t^{(i^*)}; \; i^* = \argmax_{i \in [1,N_s]}{ w_t^{(i)} }. \end{equation} Note that $J(\cdot)$ is a function of $\vect{s}_t+\vect{u}_t$ (i.e., $\vect{s}_{t+1}$), since $\vect{c}_{\Pi,t+1}$ can be regulated by acting on the camera pose, through the inverse perspective geometry (with known and fixed UAV altitude w.r.t. $\Pi$)~\cite{aghajan2009multi}. Furthermore, $J(\cdot)$ extracts information from the belief map, according to the probabilistic active sensing approach (Fig. \ref{fig:pipeline}). The convexity of $J(\cdot)$ w.r.t. $\vect{c}_{\Pi,t+1}$ allows solving \eqref{eq:control_input} with the gradient-based control law \begin{equation} \label{eq:gradient_based_controller} \!\begin{cases} \vect{s}_{\tau+1} = \vect{s}_\tau + \vect{u}_\tau, \; \tau \in [0, \tau_{max}] \\ \begin{split} \\[-4pt] \vect{u}_\tau & = \! \begin{bmatrix} \begin{array}{c} \vect{u}_{\vect{c},\tau} \\ \hline \vect{u}_{\bm{\psi},\tau} \\[4pt] \end{array} \end{bmatrix} \! = - \!\begin{bmatrix} \begin{array}{c|c} \vect{G}_{\vect{c}} & \vect{0} \\ \hline \vect{0} & \vect{G}_{\bm{\psi}} \end{array} \end{bmatrix} \!
\begin{bmatrix} \begin{array}{c} \frac{\partial J(\vect{c}_{\Pi,\tau+1})}{\partial \vect{c}_{\Pi}} \\ 0 \\ \hline \\[-4pt] \frac{\partial J(\vect{c}_{\Pi,\tau+1})}{\partial \bm{\psi}} \\[4pt] \end{array}\end{bmatrix} \end{split} \end{cases} \end{equation} where $\tau_{max}$ is the maximum number of gradient iterations allowed before the next incoming measurement at $t+1$. $\vect{G}_{\vect{c}} \in \mathbb{R}^{3 \times 3}$ and $\vect{G}_{\bm{\psi}} \in \mathbb{R}^{2 \times 2}$ are suitable control gain matrices. Choosing small entries for $\vect{G}_{\vect{c}}$ preserves energy, since \eqref{eq:gradient_based_controller} commands short UAV movements. Conversely, larger $\vect{G}_{\vect{c}}$ and $\vect{G}_{\bm{\psi}}$ lead to a more reactive system, capable of getting close to the setpoint $\hat{\vect{p}}_t$ within the maximum number of iterations $\tau_{max}$. Consequently, if the localization procedure is accurate (i.e., \mbox{$\hat{\vect{p}}_t \approx \vect{p}_t$}), the condition $\vect{p}_t \in \Phi(\vect{s}_t)$ is likely to be satisfied and $d(\vect{c}_t,\vect{p}_t^+)$ is small, which is necessary to attain high detection probabilities, according to \eqref{eq:POD}. Finally, it is important to remark that $J(\cdot)$ is \emph{purely exploitative} and \emph{not energy-aware}: in the controller design problem $\mathcal{C}$, energy appears only in the constraint, and neither energy-preservation~\cite{liu2018energy} nor information-seeking (explorative)~\cite{radmard2017active} criteria are included. \section{Theoretical results}\label{sec:theoretical} This section formally motivates the use of an action space involving the entire camera pose, as in \eqref{eq:sensor_dynamics}, and supports the choice of a combined radio-visual perception system.
We observe that the particle weight distribution is an indicator of target localizability: highly-weighted regions allow the position estimate to be focused, while uniform weight patterns indicate ambiguity in the target localization. In this respect, we show that radio-only solutions need the sensing platform to move in order to resolve the localization ambiguity (Ths.~1-2 that follow), whereas a radio-visual approach can resolve it even with a static platform (Th.~3). \begin{theorem}\label{th:axis_symmetric_ambiguity} Let the following hypotheses hold \begin{enumerate} \item the target moves according to an unbiased random walk, i.e., \mbox{$p(\vect{p}_{t+1}|\vect{p}_t) = \mathcal{N}(\vect{p}|\vect{p}_t,\sigma^2 \vect{I}_2)$}; \item the platform is static, i.e., $\vect{c}_t = \vect{c}; \; \forall t$; \item the RBE scheme updates through \eqref{eq:update} exploiting only the RF likelihood \eqref{eq:likelihood_RF}. \end{enumerate} Then, \begin{equation}\label{eq:axisSymmetric_condition} \begin{split} & \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] = \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right], \; t \geq 0 \\ & \forall i,j \in [1,\dots,N_s] \text{ s.t. } d(\vect{p}_0^{(i)},\vect{c}_\Pi) = d(\vect{p}_0^{(j)},\vect{c}_\Pi) \end{split} \end{equation} \end{theorem} \begin{proof} The dynamic model associated with the target unbiased random walk is \begin{equation} \vect{p}_{t+1} = \vect{p}_t + \bm{\eta}_t, \quad \bm{\eta}_t \sim \mathcal{N}(\bm{\eta}|\vect{0}_2,\sigma^2\vect{I}_2). \end{equation} Equivalently, \begin{equation} \vect{p}_t = \vect{p}_0 + \sum_{k=0}^{t-1} \bm{\eta}_k, \; t>0. \end{equation} Since the $\bm{\eta}_t, \; \bm{\eta}_s$ are i.i.d. for any $s \neq t$, it follows that \begin{equation} \bar{\bm{\eta}}_{t-1} := \sum_{k=0}^{t-1}\bm{\eta}_k \sim \mathcal{N} \left(\bm{\eta} | \vect{0}_2,t\sigma^2\vect{I}_2 \right).
\end{equation} Then, the squared distance $d_t^{(i),2} := d(\vect{p}_t^{(i)},\vect{c}_\Pi)^2$ is \begin{equation} \begin{split} d_t^{(i),2} & = \lVert \vect{p}_t^{(i)} - \vect{c}_\Pi \rVert_2^2 = \lVert \vect{p}_0^{(i)} + \bar{\bm{\eta}}_{t-1} - \vect{c}_\Pi \rVert_2^2 \\ & = \sum_{\ell=1}^2 \left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right)^2 + \sum_{\ell=1}^2 \bar{\eta}_{t-1}(\ell)^2 \\ & + 2\sum_{\ell=1}^2 \left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right) \bar{\eta}_{t-1}(\ell), \; t > 0. \end{split} \end{equation} It holds \begin{equation} \small 2\left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right)\! \bar{\eta}_{t-1}(\ell) \sim \! \mathcal{N} \left( \! \eta | 0,4t\sigma^2\left( p_0^{(i)}(\ell) - c_\Pi(\ell) \right)^2 \!\right) \end{equation} and, since the components of $\bar{\bm{\eta}}_{t-1}$ are i.i.d. with distribution $\mathcal{N}\left(0,t\sigma^2\right)$, \begin{equation} \sum_{\ell=1}^2 \bar{\eta}_{t-1}(\ell)^2 \sim t\sigma^2 \chi^2(2). \end{equation} Finally, recalling that $\sum_{\ell=1}^2 ( p_0^{(i)}(\ell) - c_\Pi(\ell) )^2 = d_0^{(i),2}$, \begin{equation}\label{eq:distance_distribution_th1} d_t^{(i),2} \sim \mathcal{N}(d^2 |\mu_d, \sigma_d^2 ) + t\sigma^2 \chi^2(2), \; t \geq 0 \end{equation} with \begin{equation}\label{eq:parameters_distance_th1} \mu_d = d_0^{(i),2}, \quad \sigma_d^2 = 4t\sigma^2d_0^{(i),2}. \end{equation} From \eqref{eq:distance_distribution_th1}-\eqref{eq:parameters_distance_th1}, $d_t^{(i),2}$ is identically distributed for all particles with the same initial distance from the platform. Using the likelihood \eqref{eq:likelihood_RF} and with $z_{\text{RF},1:t}$ given, $\omega_t^{(i)}$ is a \mbox{(non-linear)} function of $d_t^{(i),2}$, through \eqref{eq:path_loss_model}. Then, condition \eqref{eq:axisSymmetric_condition} follows.
\end{proof} \begin{theorem}\label{th:nonStatic_Rx} Let the following hypotheses hold \begin{enumerate} \item the target moves according to an unbiased random walk, i.e., \mbox{$p(\vect{p}_{t+1}|\vect{p}_t) = \mathcal{N}(\vect{p}|\vect{p}_t,\sigma^2 \vect{I}_2)$}; \item the platform moves at a fixed altitude according to a deterministic linear Markovian motion, i.e., $\vect{c}_{\Pi,t+1} = \vect{c}_{\Pi,t} + \vect{u}_{\vect{c}_\Pi,t}$; \item the RBE scheme updates through \eqref{eq:update} exploiting only the RF likelihood \eqref{eq:likelihood_RF}. \end{enumerate} Then, \begin{equation} \begin{split} & \exists (i,j) \in [1,N_s] \text{ s.t. } d(\vect{p}_0^{(i)},\vect{c}_{\Pi,0}) = d(\vect{p}_0^{(j)},\vect{c}_{\Pi,0}) \text{ and } \\ & \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right],\; t > 0 \end{split} \end{equation} \end{theorem} \begin{proof} The platform planar dynamics can be equivalently written as \begin{equation} \vect{c}_{\Pi,t}= \vect{c}_{\Pi,0} + \bar{\vect{u}}_{t-1}, \quad \bar{\vect{u}}_{t-1} := \sum_{k=0}^{t-1}\vect{u}_{\vect{c}_\Pi,k}, \; t>0. \end{equation} With computations similar to those of \mbox{Th. \ref{th:axis_symmetric_ambiguity}}, it is possible to show that the squared distance \mbox{$d_t^{(i),2} := d(\vect{p}_t^{(i)},\vect{c}_{\Pi,t})^2$} is distributed as \begin{equation}\label{eq:distance_distribution_th2} d_t^{(i),2} \sim \mathcal{N}(d^2 |\mu_d,\sigma_d^2 ) + t\sigma^2 \chi^2(2,\lambda), \; t>0 \end{equation} with \begin{equation}\label{eq:parameters_distance_th2} \begin{split} & \mu_d = d_0^{(i),2} - 2(\vect{p}_0^{(i)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1}, \;\; \sigma_d^2 = 4t\sigma^2 d_0^{(i),2}\\ & \lambda = \frac{1}{t\sigma^2}\sum_{\ell=1}^2 \bar{u}_{\vect{c}_\Pi,t-1}(\ell)^2 \end{split} \end{equation} where $\chi^2(2,\lambda)$ is a non-central chi-squared distribution, with non-centrality parameter $\lambda \geq 0$.
From \mbox{\eqref{eq:distance_distribution_th2}-\eqref{eq:parameters_distance_th2}}, the distribution of $d_t^{(i),2}$ depends on both $d_0^{(i),2}$ and \mbox{$(\vect{p}_0^{(i)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1}$}. It is always possible to find $(i,j) \in [1,N_s]$, such that $d_0^{(i)}=d_0^{(j)}$ and \mbox{$(\vect{p}_0^{(i)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1} \neq (\vect{p}_0^{(j)} - \vect{c}_{\Pi,0})^\top \bar{\vect{u}}_{t-1}$}. Then, in general, \begin{equation} \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right]. \end{equation} \end{proof} \newpage \begin{theorem}\label{th:biModality_benefit} Let the following hypotheses hold \begin{enumerate} \item the target moves according to an unbiased random walk, i.e., \mbox{$p(\vect{p}_{t+1}|\vect{p}_t) = \mathcal{N}(\vect{p}|\vect{p}_t,\sigma^2\vect{I}_2)$}; \item the platform is static, i.e. $\vect{c}_t = \vect{c}; \; \forall t$; \item the RBE scheme updates through \eqref{eq:update} exploiting the radio-visual likelihood \eqref{eq:bimodal_likelihood_radioVisual}. \end{enumerate} Then, \begin{equation}\label{eq:non_axisSymmetric_condition} \begin{split} & \exists (i,j) \in [1,N_s] \text{ s.t. } d(\vect{p}_0^{(i)},\vect{c}_{\Pi}) = d(\vect{p}_0^{(j)},\vect{c}_{\Pi}) \text{ and } \\ & \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right], \; t>0 \end{split} \end{equation} \end{theorem} \begin{proof} Suppose (w.l.o.g.) that $\vect{p}_0^{(i)}$ and $\vect{p}_0^{(j)}$ satisfy \mbox{$d(\vect{p}_0^{(i)},\vect{c}_{\Pi}) = d(\vect{p}_0^{(j)},\vect{c}_{\Pi})$}; hence, $d_t^{(i),2}$ and $d_t^{(j),2}$ are equally distributed, from Th. \ref{th:axis_symmetric_ambiguity}. Suppose now that \mbox{$\vect{p}_t^{(i)} \in \Phi(\vect{s}_t)$}, but \mbox{$\vect{p}_t^{(j)} \not\in \Phi(\vect{s}_t)$}. 
From \eqref{eq:likelihood_V}-\eqref{eq:bimodal_likelihood_radioVisual}, $\omega_t^{(i)}$ and $\omega_t^{(j)}$ are different functions of two equally distributed random variables (once $z_{\text{RF},1:t}$ are fixed). Thus, in general, $ \mathbb{E} \left[ \omega_t^{(i)} | z_{\text{RF},1:t}\right] \neq \mathbb{E} \left[ \omega_t^{(j)} | z_{\text{RF},1:t}\right]. $ \end{proof} \subsection{Discussion}\label{subsec:theory_discussion} \begin{figure*}[h!] \centering \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/axis_symmetry.pdf} \label{fig:axis_symmetry} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/no_axis_symmetry.pdf} \label{fig:no_axis_symmetry} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/loc_error.pdf} \label{fig:convergence} } \caption{Probabilistic map obtained with the uni-modal RF likelihood (a) and the bi-modal radio-visual likelihood (b). (c) Localization error (average and $68\%$ confidence interval over a MC simulation) with radio-only (RF), visual-only (V) and radio-visual (RF+V) likelihood.} \label{fig:distance_set} \vspace{-0.5cm} \end{figure*} Th. \ref{th:axis_symmetric_ambiguity} statistically characterizes the axis-symmetric ambiguity, one of the main issues of RSSI-based localization systems. As \eqref{eq:path_loss_model} suggests, RSSI values carry information only about the target-receiver distance; consequently, the RF likelihood \eqref{eq:likelihood_RF} is axis-symmetric w.r.t. the receiver. Thus, in a particle-based RBE scheme, particles at the same distance from the receiver are equally weighted. When the sensing platform is static, this phenomenon generates severe convergence issues, since the belief map is toroidal with a non-unique MAP estimate, as depicted in Fig. \ref{fig:axis_symmetry}. For this reason, most solutions in the literature propose using multiple receivers~\cite{canton2017bluetooth}.
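The ambiguity can also be checked numerically: under the RF likelihood \eqref{eq:likelihood_RF}, particles at the same distance from a static receiver receive identical weights, whatever the RSSI reading. A minimal sketch, with illustrative values for $\kappa$, $n$ and $\sigma_{\text{RF}}$:

```python
import numpy as np

def rf_likelihood(z, p, c, kappa=-40.0, n=2.0, sigma_rf=2.0):
    # Gaussian RF likelihood centered at the path-loss prediction r(d).
    r = kappa - 10.0 * n * np.log10(np.linalg.norm(p - c))
    return np.exp(-0.5 * ((z - r) / sigma_rf) ** 2) / (sigma_rf * np.sqrt(2.0 * np.pi))

c = np.zeros(2)                 # static receiver (planar projection)
p_i = np.array([10.0, 0.0])     # two particles at the same distance...
p_j = np.array([0.0, -10.0])    # ...but at different bearings
z = -58.0                       # an arbitrary RSSI reading
```

Because both particles yield the same predicted RSSI, their likelihoods (and hence their updated weights) coincide, leaving the belief axis-symmetric.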
When this is not feasible, the axis-symmetric effect can be mitigated by using a single moving receiver, as proven in Th. \ref{th:nonStatic_Rx}. Therefore, including the platform position in the control space, as in \eqref{eq:sensor_dynamics}, is beneficial for the convergence of RSSI-based probabilistic active tracking strategies. Another viable solution is the adoption of a bi-modal perception system. According to Th. \ref{th:biModality_benefit}, the visual likelihood \eqref{eq:likelihood_V} reduces the ambiguity in the particle weighting process. This is one main advantage of the radio-visual approach w.r.t. the radio-only counterpart. To fully exploit the disambiguation effect of visual observations, the camera orientation should be included in the control space, as in \eqref{eq:sensor_dynamics}. In particular, with the control law defined in Sec. \ref{sec:method}, the platform uses camera measurements to discard wrong MAP candidates. In this way, most of the weight concentrates around the true target position over time, as Fig. \ref{fig:no_axis_symmetry} shows. In conclusion, combining radio-visual cues in the RBE scheme and including the entire camera pose in $\mathcal{A}$ reduces the estimation ambiguities. This makes the localization procedure faster and more accurate (\mbox{Sec. \ref{subsec:proof_concept}}). Accordingly, the overall target visual detection task becomes more robust and more efficient (\mbox{Sec. \ref{sec:numerical}}). \subsection{Proof of concept - target localization}\label{subsec:proof_concept} To support the theoretical results, the control law~\eqref{eq:control_input} is used to localize a static target (i.e., \mbox{$\vect{p}_t = \vect{p}; \; \forall t$}) in a Python-based synthetic environment (see Fig. \ref{fig:distance_set}). For consistency with the hypotheses of Th. \ref{th:axis_symmetric_ambiguity}-\ref{th:biModality_benefit}, the process model is an unbiased random walk.
The receiver is kept static (i.e., \mbox{$\vect{G}_\vect{c} = \vect{0}$}), to stress the effects of the axis-symmetric ambiguity, as well as the improvements introduced by visual information. RSSI data are collected with sampling rate \mbox{$T_{\text{RF}} = 2$} ($T_a=1$) and noise level \mbox{$\sigma_{\text{RF}}=2 \; [dBm]$}; the target is placed at distance \mbox{$d(\vect{p}^+,\vect{c})=10 \; [m]$} from the platform. The proposed bi-modal radio-visual approach (RF+V) is compared with a visual-only (V) and a radio-only (RF) one. To capture the performance variability, the evaluation is assessed through a Monte Carlo (MC) simulation with \mbox{$N_{tests}=150$} tests of $T_W=100$ iterations each. The performance index is the localization error \mbox{$e_t = d(\vect{p},\hat{\vect{p}}_t)$}. Fig. \ref{fig:convergence} depicts the average localization error and its $68\%$ confidence interval over the MC tests. Only RF+V converges to the groundtruth; namely, the estimate is unbiased, with zero variability over the MC tests \begin{equation} \begin{split} & \mu_{e_{T_W}} := \mathbb{E}[ e_{T_W} ] = 0 \; [m] \\ & \sigma_{e_{T_W}}^2 := \mathbb{E}[ (e_{T_W} - \mu_{e_{T_W}})^2 ] = 0 \; [m^2]. \end{split} \end{equation} The effect of the axis-symmetric ambiguity is clear in the RF approach. As proven in Th. \ref{th:axis_symmetric_ambiguity}, the MAP estimate is non-unique; hence, at any time, the estimate $\hat{\vect{p}}_t$ may not be representative of the true target position. This leads RF to converge, on average, to a biased estimate (\mbox{$\mu_{e_{T_W}} \approx 1 \; [m]$}), with remarkable inter-test variability (\mbox{$\sigma_{e_{T_W}} \approx 3 \; [m]$}). The oscillations of $e_t$ are even higher for the visual-only approach.
The main issue here is related to the limited sensing domain of the camera: as \eqref{eq:likelihood_V} suggests, in case of non-detection ($D_t=0$), only particles inside the camera FoV are updated; these might be a small portion of the entire particle set. Consequently, many particles share the same weight for a long time, leading to a stronger ambiguity than the axis-symmetry (especially in the first iterations). Therefore, the localization procedure experiences a very slow and erratic convergence behavior. \begin{figure*}[htp!] \vspace{0.2cm} \centering \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/ECDFscenario2case4.pdf} \label{fig:controlSpace} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/ECDFmodalities_new.pdf} \label{fig:modalities} } \subfigure[]{ \includegraphics[width=0.3\textwidth]{images/ECDF_energy.pdf} \label{fig:energy} } \caption{MC simulation results. (a) Detection time ECDF with different control spaces: full camera pose (RF+V), no pan-tilt movements (RF+V: $\vect{G}_{\bm{\Psi}}=\vect{0}$), static platform (RF+V: $\vect{G}_{\vect{c}}=\vect{0}$). (b) Detection time ECDF with different type and amount of information employed for the tracking task: radio-visual (RF+V), visual-only (V), radio-only (RF), and radio-only with two receivers ($2$RF). (c) Energy consumption ECDF between RF+V and RF.} \label{fig:numerical_results} \end{figure*} \section{Numerical results}\label{sec:numerical} In this Section, the localization problem of Sec.~\ref{subsec:proof_concept} is extended to the active tracking of a moving target. First, we numerically motivate the choice of the entire camera pose as control space (Sec.~\ref{subsec:why_control_space}), in accordance with the theoretical results of Sec.~\ref{sec:theoretical}. Second, we show that bi-modality achieves higher robustness and time-efficiency than several uni-modal baselines (\mbox{Sec.~\ref{subsec:why_bimodality}}).
Finally, the proposed algorithm is proven to be even more energy-efficient than a radio-only counterpart \mbox{(Sec.~\ref{subsec:finite energy})}, even though the control law~\eqref{eq:control_input} does not account for any energy-preserving term. \subsection{Setup parameters}\label{subsec:setup} As in Sec. \ref{subsec:proof_concept}, we compare RF+V, RF and V over an MC simulation with $N_{tests}=150$ tests of \mbox{$T_W=100$} iterations each. The belief map is approximated by \mbox{$N_s=500$} particles, initialized according to a uniform distribution over $\Pi$. The underlying target motion is simulated using a linear stochastic Markovian transition planar model \begin{equation} \vect{p}_{t+1} = \vect{p}_{t} + \vect{q}_{t}; \quad \vect{q}_{t} \sim \mathcal{N}(\vect{q}|\bm{\mu}_{\vect{q}},\bm{\Sigma}_{\vect{q}}), \; \bm{\mu}_{\vect{q}} \neq \vect{0}_2. \end{equation} The initial condition $\vect{p}_{0}$ is randomly changed at each MC test, as well as the platform initial planar position $\vect{c}_{\Pi,0}$. The process model is an unbiased random walk, the receiver sampling rate is $T_{\text{RF}}=10$ ($T_a=1$), the total available energy is fixed at $E_{tot}=300$, and the RSSI noise level is $\sigma_{\text{RF}}=3.5 \; [dBm]$. \subsection{Performance assessment}\label{subsec:performance_assessment} The performance assessment of the active tracking task is based on the following indices. \begin{enumerate} \item \textit{Detection success rate}: robustness index that counts the number of successful target visual detections over the MC tests \begin{equation}\label{eq:detection_success_rate} \begin{split} & \varpi = \frac{1}{N_{tests}}\sum_{j=1}^{N_{tests}} \mathds{1}_{D,j} \\ & \mathds{1}_{D,j} = \begin{cases} 1, & \text{if } \exists t_{D} \in [1,T_W] \; \text{s.t. 
} D_{t_{D}}=1 \\ 0, & \text{otherwise} \end{cases} \end{split} \end{equation} \item \textit{Detection time ECDF\footnote{ECDF: Empirical Cumulative Distribution Function.}}: time-efficiency index that accounts for the time required to visually detect the target \begin{equation}\label{eq:time_ecdf} \begin{split} & p(t_{D} \leq t) = \frac{1}{N_{tests}}\sum_{j=1}^{N_{tests}} \mathds{1}_{t_{D,j}\leq t} \\ & \mathds{1}_{t_{D,j}\leq t} = \begin{cases} 1, & \text{if } t_{D,j}\leq t \\ 0, & \text{otherwise} \end{cases} \end{split} \end{equation} \item \textit{Energy consumption ECDF}: energy-efficiency index that accounts for the amount of available energy $E_{T_W} \in [0,E_{tot}]$ at the end of the task \begin{equation}\label{eq:energy_ecdf} \begin{split} & p(E_{T_W} \leq E) = \frac{1}{N_{tests}}\sum_{j=1}^{N_{tests}} \mathds{1}_{E_{T_W,j}\leq E} \\ & \mathds{1}_{E_{T_W,j} \leq E} = \begin{cases} 1, & \text{if } E_{T_W,j} \leq E \\ 0, & \text{otherwise}. \end{cases} \end{split} \end{equation} \end{enumerate} \subsection{Impact of the control space}\label{subsec:why_control_space} The controller $\mathcal{C}$, described in Sec. \ref{sec:method}, acts on a high-dimensional space comprising the full camera pose (namely, its position and orientation). This choice has been theoretically motivated in Ths.~\ref{th:nonStatic_Rx}-\ref{th:biModality_benefit} of Sec.~\ref{sec:theoretical} and it is here supported by numerical results. Specifically, the proposed control law (i.e., RF+V) is compared with two variants: the first does not allow pan-tilt movements (i.e., \mbox{$\vect{G}_{\bm{\Psi}}=\vect{0}$}) and the camera is fixed and facing downwards (namely, $\alpha_t=\beta_t=0, \; \forall t$); the second considers a static platform (i.e., $\vect{G}_{\vect{c}}=\vect{0}$). Indeed, without pan-tilt actuation, the target can be detected only when the UAV gets very close to it. 
However, this requirement is difficult to meet for every initial condition $\vect{p}_0$ and $\vect{c}_0$, due to localization errors and to the bias in the target motion. On the other hand, a static platform cannot get closer to the target; hence, according to \eqref{eq:POD}-\eqref{eq:Upsilon}, many detection failures may occur. These considerations are supported by the numerical results reported in Tab.~\ref{tab:detection_rate} (top row): the detection success rate of RF+V is $77 \%$, against $39 \%$ of the case $\vect{G}_{\bm{\Psi}}=\vect{0}$ and $29 \%$ of the case $\vect{G}_{\vect{c}}=\vect{0}$. Moreover, Fig. \ref{fig:controlSpace} shows a remarkable difference in the detection time ECDF between the proposed solution and the other two versions. In conclusion, considering the entire camera pose as control space induces more robustness and more time-efficiency in the target visual detection task. \subsection{Impact of the sensing modalities}\label{subsec:why_bimodality} \begin{table}[t] \centering \caption{Detection success rate of the compared approaches.} \label{tab:detection_rate} \begin{tabular}{c| c|c|c} \hline \rowcolor{lightgray} & RF+V & RF+V: $\vect{G}_{\bm{\Psi}}=\vect{0}$ & RF+V: $\vect{G}_{\vect{c}}=\vect{0}$ \\ $\varpi$ & $77\%$ & $39\%$ & $29\%$ \\ \hline \rowcolor{lightgray} & RF & V & 2RF \\ $\varpi$ & $55\%$ & $26\%$ & $64\%$ \\ \hline \end{tabular} \end{table} As shown in Sec. \ref{subsec:proof_concept}, RF+V achieves a more accurate localization than RF and V. This, combined with the control law $\mathcal{C}$, allows the platform to get closer to the target and to move the camera so that the target is inside the FoV. These two factors induce higher detection probabilities, according to \eqref{eq:POD}-\eqref{eq:Upsilon}. As a consequence, active tracking is more robust (see Tab. \ref{tab:detection_rate}) and more time-efficient (see Fig. 
\ref{fig:modalities}) when adopting the proposed bi-modal strategy, rather than the uni-modal counterparts. One may argue that the superior performance of the bi-modal strategy is due to the larger amount of information involved. To test this, we compare RF+V with $2$RF: instead of combining radio-visual data (heterogeneous sensor fusion), we aggregate observations coming from two receivers (homogeneous radio-only sensor fusion). Thus, the measurements of $2$RF are \begin{equation}\label{eq:2RF_vector} \vect{z}_{2\text{RF},t} = \begin{bmatrix} z_{\text{RF}^{(1)},t} & z_{\text{RF}^{(2)},t} \end{bmatrix}^\top \end{equation} where $z_{\text{RF}^{(\ell)},t}$ identifies the RSSI sample collected by the $\ell$-th receiver at time $t$, according to the model \eqref{eq:obs_Rx}. Supposing independence between the observations of the two receivers, the likelihood of $2$RF becomes \begin{equation} p(\vect{z}_{2\text{RF},t} |\vect{p}_t , \vect{s}_t ) = \prod_{\ell=1}^2 p(z_{\text{RF}^{(\ell)},t}|\vect{p}_t, \vect{s}_t), \end{equation} with $p(z_{\text{RF}^{(\ell)},t}|\vect{p}_t, \vect{s}_t)$ as in \eqref{eq:likelihood_RF}. Tab. \ref{tab:detection_rate} and Fig. \ref{fig:modalities} show that $2$RF is more robust and time-efficient than RF, but less than RF+V. This means that the tracking performance is affected not only by the \emph{quantity}, but also by the \emph{quality}, of the aggregated information. In particular, exploiting complementary cues (e.g. radio and visual ones) is more advantageous than combining homogeneous data extracted from different sources; in fact, one can prove that the axis-symmetric ambiguity is not reduced when applying multiple receivers placed at the same location.
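This last point can be illustrated with a short numerical sketch: under a distance-only RSSI likelihood, two particles equidistant from the receiver get identical weights, and the product likelihood of two co-located receivers preserves the tie. The path-loss and noise parameters below are illustrative assumptions, not the values used in the experiments.

```python
import math

def rssi_mean(p, c, gamma=-40.0, eta=2.0):
    # Illustrative log-distance path-loss model: the mean RSSI
    # depends on the target-receiver distance only.
    d = math.hypot(p[0] - c[0], p[1] - c[1])
    return gamma - 10.0 * eta * math.log10(d)

def likelihood(z, p, c, sigma=3.5):
    # Unnormalized Gaussian RSSI likelihood around the path-loss mean.
    return math.exp(-0.5 * ((z - rssi_mean(p, c)) / sigma) ** 2)

c = (0.0, 0.0)                       # receiver location
p_i, p_j = (10.0, 0.0), (0.0, 10.0)  # two equidistant particles
z = rssi_mean((7.0, 7.0), c)         # some observed RSSI value

# One receiver: equidistant particles get identical weights.
w_i, w_j = likelihood(z, p_i, c), likelihood(z, p_j, c)
# Two receivers at the same location: the product likelihood is
# still a function of distance only, so the tie persists.
w2_i = likelihood(z, p_i, c) * likelihood(z, p_i, c)
w2_j = likelihood(z, p_j, c) * likelihood(z, p_j, c)
print(w_i == w_j, w2_i == w2_j)  # True True
```

Placing the second receiver anywhere else breaks the distance tie, which is the homogeneous-fusion benefit exploited by $2$RF.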
\subsection{Energy-efficiency}\label{subsec:finite energy} As mentioned in Sec.~\ref{sec:method}, the proposed control law~\eqref{eq:control_input} does not include any explicit energy-aware term; hence, $\mathcal{C}$ is not expected to generate energy-preserving platform movements (unless $\vect{G}_{\vect{c}}$ is kept small, but this has been proven to be ineffective). Therefore, when the total available energy runs out, that is \begin{equation}\label{eq:out_of_energy_condition} \exists t \in [1,T_W] \text{ s.t. } E_{t}=0, \end{equation} the platform becomes static and the detection capabilities dramatically decrease (see \mbox{Sec. \ref{subsec:why_control_space}}). In the absence of an energy-aware control technique, the only way to avoid energy run-out is to produce accurate target position estimates. In fact, if $\hat{\vect{p}}_t \approx \vect{p}_t$, the controller $\mathcal{C}$ drives the platform towards the target through a smooth and direct trajectory, which is more energy-efficient than irregular patterns, according to \eqref{eq:energy_model}. Given this premise, and the localization superiority of RF+V w.r.t. RF (Secs.~\ref{sec:method} and \ref{subsec:why_bimodality}), we expect more energy preservation in RF+V than in RF. Indeed, the two energy consumption ECDFs in Fig. \ref{fig:energy} satisfy \begin{equation} p(E_{T_W}^{\text{RF+V}} \leq E) < p(E_{T_W}^{\text{RF}} \leq E);\; \forall E \in [0,E_{tot}]. \end{equation} This means that RF drains more energy than RF+V. In particular, the out-of-energy condition \eqref{eq:out_of_energy_condition} occurs more often in RF ($76\%$ of tests) than in RF+V ($68\%$). \section{Conclusion }\label{sec:conclusion} This work discusses the problem of visually detecting an RF-emitting target under energy constraints. To this aim, we propose a probabilistic active sensing approach, leveraging radio-visual information.
A theoretical discussion highlights the main limitations of radio-only localization, as well as the benefits introduced by the bi-modal strategy. These findings are then validated through numerical results, which show bi-modality to be more robust and more time- and energy-efficient than uni-modal camera-only and radio-only counterparts. Bi-modality proves even more effective than homogeneous sensor fusion strategies, since the combination of two receivers does not achieve performance comparable to the suggested radio-visual integration. Future work will be devoted to the design of specific energy-aware terms to be included in the optimal controller design. In addition, it is worth comparing the proposed exploitative strategy with information-seeking (explorative) ones. Finally, we are planning to carry out laboratory experiments to evaluate the algorithm on a real robotic platform. \bibliographystyle{IEEEtran}
\section{Introduction} Division of labor is a ubiquitous phenomenon in biology. In sufficiently complex multicellular organisms, various tasks necessary for organismal survival (metabolism, nutrient transport, motion, reproduction, information processing, etc.) are performed by distinct parts of the organism \cite{ORGDIV1, ORGDIV2}. Division of labor is even possible in clonal populations of free-living single-celled organisms \cite{BACTDIV}. At longer length scales, it is apparent that division of labor is a strong characteristic of community behavior in various animals \cite{INSECTDIV1, INSECTDIV2}. Human-built modern economies exhibit considerable division of labor (indeed, much research into this phenomenon has been done by economists) \cite{ECONDIV1, ECONDIV2, ECONDIV3, ECONDIV4, ECONDIV5, ECONDIV6, ECONDIV7, ECONDIV8}. Selective pressure for the division of labor in a population of agents (cells, organisms, humans) arises because specialization allows a given agent to optimize its performance of a relatively limited set of tasks. Total system production of a population of differentiated agents can therefore be significantly greater than that of a comparable population of undifferentiated agents. The question that arises, then, is why division of labor is not always observed. For example, while complex multicellular organisms are certainly ubiquitous, approximately $ 80\% $ of the biomass of the planet is in the form of bacteria. While capable of exhibiting cooperative behavior, bacteria are, for the most part, free-living single-celled organisms. Clearly, then, there are regimes where differentiation is not desirable. As a general rule, the more complex the organism, the greater the selective pressure for differentiation of system tasks. This rule is admittedly somewhat circular, since a more complex organism will by definition exhibit more specialization of the component agents.
So, to be more precise, the greater the number of agents comprising a system, the greater the selective pressure for differentiation (even this formulation has some ambiguity, because we can arbitrarily define any group of agents to comprise a system, no matter how weak the inter-agent interactions. Nevertheless, despite this ambiguity, we will proceed with this initial ``working'' rule). The origin of this rule comes from the observation that there is a cost associated with differentiation, namely a time (and energy, though this will be ignored in this paper) cost associated with transporting intermediate products from one part of the system to another. As system size grows, then presumably the density of agents grows (since the number of agents grows, and since we are grouping all the agents into one system, the inter-agent interactions are sufficiently strong, compared to some reference interaction, to warrant this grouping. Note that increasing the agent density is a simple way to do this, though highly interconnected systems may interact fairly strongly over relatively long distances. The internet is a good example of this). As the density of agents grows, the characteristic time associated with transporting intermediates from one part of the system to another decreases, and so the cost of differentiation decreases (in fairness, the idea of transport costs placing a barrier to differentiation is not originally the author's. In the context of firms, this idea has been presented in the economics literature \cite{ECONDIV5}). In this paper, we develop two sets of models that capture the competition between the benefits of differentiation and the time cost associated with differentiation. In the first model, we consider the flow of some resource into a compartment, and its conversion into some final product. In the undifferentiated case, we assume that there is a single agent capable of converting the resource into final product. 
In the differentiated case, we assume that the conversion of resource is accomplished in a two-step process, each step of which is carried out by a distinct agent specialized for its particular task. In the second model, we consider the flow of resource into a region containing replicating agents. We assume that the agents increase their volume so as to maintain a constant, pre-specified population density. In the undifferentiated case, we assume that a given agent can absorb the resource, and process it to produce a new agent. In the differentiated case, we assume a division of labor between replication and metabolism steps. That is, we assume that a fraction of the agents are specialized so that they cannot replicate, but can only process the resource into an intermediate form. This metabolized resource is then processed by the replicators, which produce new agents. These daughter agents then undergo a differentiation step, where they can either become replicators themselves, or metabolizers. In both the compartment and the replicator models, the general feature that emerges is that differentiation is favored when population density is at intermediate levels with respect to resource numbers (when the population density is low, the undifferentiated pathways are favored, while when resources are highly limited, the difference between the undifferentiated and differentiated pathways disappears). In the context of the replicator-metabolism model, we argue that this phenomenon suggests an evolutionary basis for the stem-cell-based tissue architecture in complex vertebrate organisms. This paper is organized as follows: In Section II, we develop and discuss the compartment model, involving the conversion of some resource into a final product. In Section III, we develop and discuss the replication-metabolism model. In Section IV, we conclude with a summary of our results and plans for future research.
\section{Compartment Model} \subsection{Definition of the model} The compartment model is defined as follows: Some resource, denoted $ R $, flows into a compartment of volume $ V $ at a rate $ f_R $. In the undifferentiated case, a single agent, denoted $ E $, processes the resource to produce a final product, denoted $ P $ (the term $ E $ comes from chemistry, since the chemical analogue of an agent is an \underline{E}nzyme catalyst). In the differentiated case, the processing of $ R $ is accomplished by two separate agents, $ E_1 $ and $ E_2 $. The agent $ E_1 $ first converts the resource into an intermediate product $ R^{*} $, and then the agent $ E_2 $ converts $ R^{*} $ into $ P $. It should be apparent that separating the tasks associated with converting $ R $ to $ P $ between two different agents can only increase the total production rate of $ P $ if $ E_1 $ and $ E_2 $ can each perform their individual tasks better than $ E $. Therefore, an implicit assumption here is that, when an agent specializes, its ``native'' ability to perform a given task can be made better than when an agent is unspecialized. To see a simple reason why this is true, let us imagine that $ E $, $ E_1 $, $ E_2 $ are enzymes, i.e. protein catalysts, whose function is pre-determined by some amino acid sequence of length $ L $. If the alphabet is of size $ S $, then there are $ S^L $ distinct sequences that can generate $ E $, $ E_1 $, $ E_2 $. Assuming that $ E_1 $ and $ E_2 $ are optimized for their particular functions, we note that, in the absence of any additional information, the probability that $ E_1 $ and $ E_2 $ are the same is $ 1/S^{L} \rightarrow 0 $ as $ L \rightarrow \infty $. Indeed, the average Hamming distance (number of sites where two sequences differ) between any two sequences in the sequence space is given by $ L (1 - 1/S) \rightarrow \infty $ as $ L \rightarrow \infty $.
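The mean Hamming distance claim is easy to check numerically; the following Monte Carlo sketch (with an illustrative sequence length $L=100$ and a protein-like alphabet of $S=20$ letters) recovers $L(1-1/S)$:

```python
import random

def avg_hamming(L, S, trials=2000, seed=0):
    # Monte Carlo estimate of the mean Hamming distance between two
    # uniformly random length-L sequences over an alphabet of size S.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a = [rng.randrange(S) for _ in range(L)]
        b = [rng.randrange(S) for _ in range(L)]
        total += sum(x != y for x, y in zip(a, b))
    return total / trials

L, S = 100, 20                  # protein-like alphabet of 20 letters
expected = L * (1.0 - 1.0 / S)  # = 95.0
print(avg_hamming(L, S), expected)
```

Each site differs with probability $1 - 1/S$, independently of the others, which is why the estimate concentrates tightly around $L(1-1/S)$.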
Therefore, it is highly likely that $ E $ is not optimized for any of the tasks associated with converting $ R $ to $ P $, but instead performs each task with some intermediate efficiency. \subsection{Undifferentiated model} In order to describe the processes governing the conversion of $ R $ to $ P $, we will adopt the language and notation of chemical reaction kinetics. This formalism is very convenient, and is easily translatable into a system of ordinary differential equations. For the undifferentiated model, we have, \begin{eqnarray} & & E + R \rightarrow E-R \mbox{ second-order rate constant $ k_1 $} \nonumber \\ & & E-R \rightarrow E + P \mbox{ first-order rate constant $ k_2 $} \nonumber \\ & & R \rightarrow \mbox{Decay products (first-order rate constant $ k_D $)} \end{eqnarray} The first reaction refers to agent $ E $ grabbing the resource $ R $ (in chemistry, this is referred to as the binding step). At this point, the agent is denoted $ E-R $, to indicate that it is bound to a resource particle. In the second reaction, the agent processes the resource to form the product $ P $, which it then releases. The last reaction indicates that the resource $ R $ has a finite lifetime inside the compartment, and decays with some first-order rate constant $ k_D $. This assumption ensures that the compartment cannot be filled with resource without limit. The finite lifetime can be due to back-diffusion of resource outside the compartment, or simply to the fact that the resource does not last forever (some analogies include the waiting time of customers at a restaurant before leaving without being served, the characteristic time for a food product to spoil, or the diffusion of solute out of a cell).
If $ n_R $, $ n_E $, $ n_{ER} $ denote the number of particles of resource $ R $, unbound agents $ E $, and agent-resource complexes $ E-R $, respectively, then we have, \begin{eqnarray} & & \frac{d n_R}{dt} = f_R - \frac{k_1}{V} n_E n_R - k_D n_R \nonumber \\ & & \frac{d n_E}{dt} = -\frac{k_1}{V} n_E n_R + k_2 n_{ER} \nonumber \\ & & \frac{d n_{ER}}{dt} = \frac{k_1}{V} n_E n_R - k_2 n_{ER} \end{eqnarray} If we define $ n = n_E + n_{ER} $, then note that $ d n/dt = d n_E/dt + d n_{ER}/dt = 0 $, which implies that $ n $ is constant. After some manipulation, we obtain that the steady-state solution of this model is given by, \begin{equation} n_{R, ss} = \frac{k_2 n_{ER, ss}}{(k_1/V) (n - n_{ER, ss})} \end{equation} where $ n_{ER, ss} $ satisfies, \begin{equation} n_{ER, ss}^2 - (n + \frac{f_R}{k_2} + \frac{k_D}{(k_1/V)}) n_{ER, ss} + \frac{f_R}{k_2} n = 0 \end{equation} so that, \begin{equation} n_{ER, ss} = \frac{1}{2}[(n + \frac{f_R}{k_2} + \frac{k_D}{(k_1/V)}) - \sqrt{(n + \frac{f_R}{k_2} + \frac{k_D}{(k_1/V)})^2 - 4 \frac{f_R}{k_2} n}] \end{equation} We take the ``-'' root because it guarantees that $ n_{ER, ss} \leq n $ for all positive values of $ f_R $. For small $ n $, a Taylor expansion of the quadratic to first order gives, \begin{equation} n_{ER, ss} = \frac{f_R/k_2}{f_R/k_2 + k_D/(k_1/V)} n \mbox{ (small $ n $)} \end{equation} while for large $ n $, Taylor expansion to first order with respect to the remaining terms gives, \begin{equation} n_{ER, ss} = \frac{f_R}{k_2} \mbox{ (large $ n $)} \end{equation} As a rough estimate of where the transition from the small $ n $ to the large $ n $ behavior occurs, we can equate the two expressions and solve for $ n $. 
The result is, \begin{equation} n_{trans, 1} = \frac{f_R}{k_2} + \frac{k_D}{(k_1/V)} \end{equation} \subsection{Differentiated model} The conversion of resource $ R $ into $ P $ via the differentiated pathway occurs via the following sets of chemical reactions: \begin{eqnarray} & & E_1 + R \rightarrow E_1-R \mbox{ second-order rate constant $ k_1' $} \nonumber \\ & & E_1-R \rightarrow E_1 + R^{*} \mbox{ first-order rate constant $ k_2' $} \nonumber \\ & & E_2 + R^{*} \rightarrow E_2-R^{*} \mbox{ second-order rate constant $ k_3' $} \nonumber \\ & & E_2-R^{*} \rightarrow E_2 + P \mbox{ first-order rate constant $ k_4' $} \nonumber \\ & & R \rightarrow \mbox{ Decay products} \mbox{ first-order rate constant $ k_D $} \nonumber \\ & & R^{*} \rightarrow \mbox{ Decay products} \mbox{ first-order rate constant $ k_D^{*} $} \end{eqnarray} Note that the intermediate product, $ R^{*} $, is also capable of decaying. As we will see shortly, it is the finite lifetime of the intermediate products that causes the undifferentiated pathway to outperform the differentiated pathway at low agent numbers, and allows for a transition at higher agent numbers, whereby the differentiated pathway overtakes the undifferentiated pathway. In the context of our model, the direct interpretation of the decay term for $ R^{*} $ is that the intermediate product has a finite lifetime, due either to diffusion out of the compartment or due to decay into other compounds. More generally, though, this term may refer to an aging cost, and therefore this model may be useful in understanding aspects of networked systems, whose function does not necessarily depend on material transfers, but on information transfers. Information is transmitted between various parts of a system in order to effect system behavior, in response to the state of the system at the time of information transfer. 
Therefore, there is a time limit during which the information is relevant (because of the dynamic nature of the system and environment), which may be roughly modelled by assuming that the information is ``lost'' via a first-order decay. Defining particle and agent numbers analogously to the undifferentiated case, we obtain the system of equations, \begin{eqnarray} & & \frac{d n_R}{dt} = f_R - \frac{k_1'}{V} n_{E_1} n_R - k_D n_R \nonumber \\ & & \frac{d n_{R^*}}{dt} = k_2' n_{E_1 R} - \frac{k_3'}{V} n_{E_2} n_{R^*} - k_D^{*} n_{R^*} \nonumber \\ & & \frac{d n_{E_1}}{dt} = -\frac{k_1'}{V} n_{E_1} n_R + k_2' n_{E_1 R} \nonumber \\ & & \frac{d n_{E_1 R}}{dt} = \frac{k_1'}{V} n_{E_1} n_R - k_2' n_{E_1 R} \nonumber \\ & & \frac{d n_{E_2}}{dt} = -\frac{k_3'}{V} n_{E_2} n_{R^{*}} + k_4' n_{E_2 R^{*}} \nonumber \\ & & \frac{d n_{E_2 R^{*}}}{dt} = \frac{k_3'}{V} n_{E_2} n_{R^{*}} - k_4' n_{E_2 R^{*}} \end{eqnarray} If we define $ n_1 = n_{E_1} + n_{E_1 R} $ and $ n_2 = n_{E_2} + n_{E_2 R^{*}} $, then note that $ d n_1/dt = d n_2/dt = 0 $, so that $ n_1 $ and $ n_2 $ are also constant.
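The conservation of $n_1$ and $n_2$, and the build-up of a nonzero intermediate pool $R^{*}$, can be checked by direct numerical integration of these equations; below is a minimal explicit-Euler sketch, with all rate constants and initial agent numbers chosen purely for illustration.

```python
def euler_step(state, params, dt):
    # One explicit-Euler step of the differentiated-pathway ODEs above.
    nR, nRs, nE1, nE1R, nE2, nE2Rs = state
    fR, k1p, k2p, k3p, k4p, kD, kDs, V = params
    bind1 = (k1p / V) * nE1 * nR   # E1 + R  -> E1-R
    rel1 = k2p * nE1R              # E1-R    -> E1 + R*
    bind2 = (k3p / V) * nE2 * nRs  # E2 + R* -> E2-R*
    rel2 = k4p * nE2Rs             # E2-R*   -> E2 + P
    return (nR + dt * (fR - bind1 - kD * nR),
            nRs + dt * (rel1 - bind2 - kDs * nRs),
            nE1 + dt * (rel1 - bind1),
            nE1R + dt * (bind1 - rel1),
            nE2 + dt * (rel2 - bind2),
            nE2Rs + dt * (bind2 - rel2))

# Illustrative rate constants and initial condition (n_1 = n_2 = 10).
params = (5.0, 1.0, 1.0, 1.0, 1.0, 0.1, 0.1, 1.0)
state = (0.0, 0.0, 10.0, 0.0, 10.0, 0.0)
n1_0, n2_0 = state[2] + state[3], state[4] + state[5]
for _ in range(20000):               # integrate to t = 20
    state = euler_step(state, params, dt=1e-3)
# n_1 and n_2 are conserved up to round-off, while R* settles
# to a nonzero steady-state pool.
print(state[2] + state[3], state[4] + state[5], state[1])
```

The pairwise cancellation of the binding and release terms in the agent equations is what makes $n_1$ and $n_2$ exact invariants of the dynamics.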
Proceeding to solve for the steady-state of this model, we obtain, \begin{widetext} \begin{eqnarray} & & n_{E_1 R, ss} = \frac{1}{2} [(n_1 + \frac{f_R}{k_2'} + \frac{k_D}{(k_1'/V)}) - \sqrt{(n_1 + \frac{f_R}{k_2'} + \frac{k_D}{(k_1'/V)})^2 - 4 \frac{f_R}{k_2'} n_1}] \nonumber \\ & & n_{E_2 R^{*}, ss} = \frac{1}{2} [(n_2 + \frac{k_2' n_{E_1 R}}{k_4'} + \frac{k_D^{*}}{(k_3'/V)}) - \sqrt{(n_2 + \frac{k_2' n_{E_1 R}}{k_4'} + \frac{k_D^{*}}{(k_3'/V)})^2 - 4 \frac{k_2' n_{E_1 R}}{k_4'} n_2}] \end{eqnarray} \end{widetext} Now, when $ n_1 $ and $ n_2 $ are small, it may be shown that to lowest non-vanishing order, the steady state population of $ E_2-R^{*} $ is given by, \begin{equation} n_{E_2 R^{*}, ss} = \frac{(k_3'/V)}{k_D^{*}} \frac{k_2'}{k_4'} \frac{f_R/k_2'}{f_R/k_2' + k_D/(k_1'/V)} \alpha (1 - \alpha) n^2 \mbox{ (small $ n $)} \end{equation} where $ n \equiv n_1 + n_2 $, and $ \alpha \equiv n_1/n $, and $ 1 - \alpha = n_2/n $. For large values of $ n $, we obtain that, \begin{equation} n_{E_2 R^{*}, ss} = \frac{f_R}{k_4'} \mbox{ (large $ n $)} \end{equation} As an estimate of where the transition between the small $ n $ and large $ n $ behavior occurs, we can equate the two expressions and solve for $ n $.
We obtain, \begin{equation} n_{trans, 2} = \sqrt{\frac{1}{\alpha (1 - \alpha)} \frac{k_D^{*}}{(k_3'/V)} (\frac{f_R}{k_2'} + \frac{k_D}{(k_1'/V)})} \end{equation} \subsection{Comparison of undifferentiated and differentiated models} The small $ n $ expression for the rate of production of final product in the undifferentiated case is, \begin{equation} k_2 n_{ER} = k_2 \frac{f_R/k_2}{f_R/k_2 + k_D/(k_1/V)} n \end{equation} while the small $ n $ expression for the rate of production of final product in the differentiated case is, \begin{equation} k_4' n_{E_2 R^{*}} = k_2' \frac{(k_3'/V)}{k_D^{*}} \frac{f_R/k_2'}{f_R/k_2' + k_D/(k_1'/V)} \alpha (1 - \alpha) n^2 \end{equation} Note then that for sufficiently small $ n $, the undifferentiated production pathway produces final product more quickly than the differentiated pathway. However, because the rate of production of final product for the undifferentiated pathway initially increases linearly with $ n $, while the rate of production of final product for the differentiated pathway increases quadratically, it is possible that the differentiated pathway eventually overtakes the undifferentiated pathway. The critical $ n $ where this overtaking occurs, denoted $ n_{equal} $, may be estimated by equating the two expressions. The final result is, \begin{equation} n_{equal} = \frac{1}{\alpha (1 - \alpha)} \frac{k_D^{*}}{(k_3'/V)} \frac{f_R/k_2' + k_D/(k_1'/V)}{f_R/k_2 + k_D/(k_1/V)} \end{equation} Now, for $ n_{equal} $ to be meaningful, it must occur in a regime where the rate expressions used to obtain it are valid. Therefore, we want $ n_{equal} < n_{trans, 1}, n_{trans, 2} $. However, we can make an even stronger statement. If $ n_{equal} $ does indeed refer to a point beyond which the differentiated pathway overtakes the undifferentiated pathway, then we should have $ n_{equal} < n_{trans, 2} < n_{trans, 1} $.
It is possible to show that, \begin{widetext} \begin{equation} \frac{n_{trans, 2}}{n_{equal}} = \frac{n_{trans, 1}}{n_{trans, 2}} = \sqrt{\alpha (1 - \alpha) \frac{(k_3'/V)}{k_D^{*}} \frac{1}{f_R/k_2' + k_D/(k_1'/V)}} (\frac{f_R}{k_2} + \frac{k_D}{(k_1/V)}) \end{equation} \end{widetext} and so, our inequality is equivalent to the condition that, \begin{equation} \frac{k_D^{*}}{(k_3'/V)} < \alpha (1 - \alpha) (\frac{f_R}{k_2} + \frac{k_D}{(k_1/V)}) \frac{f_R/k_2 + k_D/(k_1/V)}{f_R/k_2' + k_D/(k_1'/V)} \end{equation} which implies that \begin{equation} n_{equal} < \frac{f_R}{k_2} + \frac{k_D}{(k_1/V)} \end{equation} Figures 4 and 5 (of the version submitted to {\it The Journal of Theoretical Biology}) show comparisons of the production rates of final product for the undifferentiated and differentiated pathways. In Figure 4, the parameters are chosen so that the differentiated pathway eventually overtakes the undifferentiated pathway, while in Figure 5, the parameters are chosen so that this is not the case. Note, however, that even in Figure 4, although the differentiated pathway overtakes the undifferentiated pathway, once $ n $ becomes very large, the undifferentiated pathway again overtakes the differentiated pathway. This behavior can be explained as follows: When $ n $ is very small, the rate at which the intermediate product is ``grabbed'' by $ E_2 $ is small compared to the decay rate, so that much intermediate product is lost. In this regime, the undifferentiated pathway is optimal, for, although $ E $ may be less efficient than either $ E_1 $ or $ E_2 $ at their respective tasks, the overall production rate of $ P $ is not reduced by the loss of intermediates. Now, as $ n $ increases, the rate of loss of intermediates decreases to an extent such that the increased efficiency associated with differentiation causes the differentiated pathway to overtake the undifferentiated pathway. 
However, once $ n $ increases even further, there is a sufficient quantity of agents in both the undifferentiated and differentiated pathways to process all of the incoming resource, with minimal loss due to decay. At this point, because the production rate of $ P $ has become resource limited, the efficiency advantage of the differentiated pathway is considerably reduced, such that the slight cost associated with intermediate decay becomes sufficient to cause the undifferentiated pathway to overtake the differentiated pathway. However, this effect is a small one, since, once $ n $ is very large, both the undifferentiated and differentiated pathways perform similarly. \subsection{When can a differentiated pathway outperform an undifferentiated pathway?} The analysis of the previous section deserves further scrutiny, in order to better understand the circumstances under which differentiation can lead to improved system performance. At low agent numbers, the decay of the product intermediates causes the output of the differentiated pathway to increase only quadratically with agent number, and so the undifferentiated pathway outperforms the differentiated pathway. At some point, however, the number of agents is sufficiently large that the decay of both resource and intermediates is minimal, so that it is possible for the differentiated pathway to overtake the undifferentiated pathway. However, this is only possible if the differentiated pathway is more efficient than the undifferentiated pathway. To quantify this notion, assume that $ n $ is at some intermediate value, such that $ k_D $ and $ k_D^{*} $ may be effectively taken to be $ 0 $. 
In this regime, it is possible to show that, \begin{equation} k_4' n_{E_2 R^{*}, ss} = \min \{k_2' \alpha n, k_4' (1 - \alpha) n\} \end{equation} Essentially, if $ k_2' \alpha n > k_4' (1 - \alpha) n $, then the first set of agents are capable of producing intermediate at a rate greater than the second set of agents are capable of processing it, so that the second reaction step is rate limiting. If $ k_2' \alpha n < k_4' (1 - \alpha) n $, then the first reaction step is rate limiting. Note then that if one of the reactions is rate limiting, we can adjust the agent fractions to increase the rate of the rate limiting reaction, and thereby increase the overall production rate of $ P $. Therefore, the maximal production rate of $ P $ is achieved when $ k_2' \alpha n = k_4' (1 - \alpha) n \Rightarrow \alpha_{optimal} = k_4'/(k_2' + k_4') $, so that the maximal production rate of $ P $ is given by, \begin{equation} (k_4' n_{E_2 R^{*}, ss})_{max} = \frac{k_2' k_4'}{k_2' + k_4'} n \end{equation} For the undifferentiated case, the analogous expression is $ k_2 n $, and so we expect that the differentiated pathway can only overtake the undifferentiated pathway when, \begin{eqnarray} & & \frac{k_2' k_4'}{k_2' + k_4'} > k_2 \nonumber \\ & & \Rightarrow \frac{1}{k_2'} + \frac{1}{k_4'} < \frac{1}{k_2} \end{eqnarray} Intuitively, this condition makes sense, since $ 1/k_2 $ is the characteristic time it takes agent $ E $ to convert $ R $ to $ P $, while $ 1/k_2' $ and $ 1/k_4' $ are the characteristic times for agents $ E_1 $ and $ E_2 $ to perform their respective tasks. Therefore, differentiation can only overtake nondifferentiation if the characteristic time for the completion of a set of tasks is shorter for the differentiated pathway than it is for the undifferentiated pathway. 
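The min-of-two-rates argument is easy to check numerically: scanning over $\alpha$ should place the optimum at $\alpha_{optimal} = k_4'/(k_2' + k_4')$, with maximal rate $k_2' k_4' n /(k_2' + k_4')$. A minimal sketch, with illustrative rate constants (not from the paper) chosen so that $1/k_2' + 1/k_4' < 1/k_2$:

```python
# Illustrative rate constants (not from the paper); decay neglected, i.e. the
# intermediate-n regime where k_D and k_D^* are effectively 0.
k2, k2p, k4p = 1.0, 3.0, 4.0
n = 100.0

def diff_rate(alpha):
    """Production rate of P when the slower reaction step is rate limiting."""
    return min(k2p * alpha * n, k4p * (1.0 - alpha) * n)

alpha_opt = k4p / (k2p + k4p)
rate_max = (k2p * k4p / (k2p + k4p)) * n

# Differentiation can only win at large n if its total task time is shorter:
diff_can_win = (1.0 / k2p + 1.0 / k4p) < 1.0 / k2
```

Scanning a grid of $\alpha$ values confirms that no agent fraction does better than $\alpha_{optimal}$, and with these constants the maximal differentiated rate exceeds the undifferentiated rate $k_2 n$, as the efficiency condition predicts.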
If $ 1/k_2' + 1/k_4' > 1/k_2 $, it is in principle possible for the differentiated pathway to overtake the undifferentiated pathway if $ k_1' $ is sufficiently greater than $ k_1 $, and if $ k_3' $ is sufficiently large compared to $ k_D^{*} $. Basically, the differentiated agents are not more efficient at actually processing the resource, but they are more efficient at grabbing it, which can give the differentiated pathway an advantage. However, in contrast to the case where $ 1/k_2' + 1/k_4' < 1/k_2 $, this advantage is only temporary, because once the agent number becomes sufficiently large, the characteristic time to grab resource becomes very small. As a final note, because the condition for differentiation to outperform nondifferentiation at larger agent numbers is $ 1/k_2' + 1/k_4' < 1/k_2 $, while the agent number $ n_{equal} $ where differentiation overtakes nondifferentiation does not depend on $ k_4' $, it should be apparent that our criterion for $ n_{equal} $ could be inaccurate in actually predicting the location of the cross-over. This is because $ n_{equal} $ is based on the small $ n $ region, where the rate of production of $ P $ for the differentiated pathway increases quadratically. This is the regime where the production rate of $ P $ is limited by intermediate resource decay. However, as $ 1/k_2' + 1/k_4' $ increases, we expect $ n_{equal} $ to become a better predictor of the crossover point, since the decay of the intermediate resource becomes a comparatively greater factor in dictating the performance of the differentiated pathway. In any event, though, the expression for $ n_{equal} $ and the condition for the existence of a cross-over are useful, for they indicate that the larger the value of $ f_R $, the larger the value of $ k_D^{*} $ for which a cross-over can still occur. 
In particular, it suggests that, as long as $ 1/k_2' + 1/k_4' < 1/k_2 $, then by making $ f_R $ sufficiently large for a given $ k_D^{*} $, we will eventually obtain that the differentiated pathway will outperform the undifferentiated pathway at sufficiently high agent numbers. This is indeed what is observed numerically. \section{Replication-Metabolism Model} In this section, we turn our attention to the replication-metabolism model, where a population of agents processes an external resource for the purposes of producing more agents. \subsection{Definition of the model} We consider a population of replicating agents, relying on the supply of some resource, denoted $ R $. We assume that the resource is supplied to the population at a rate of $ f_R $ per unit volume, and that, as the population grows, the volume expands in such a way as to maintain a constant population density $ \rho $. In the undifferentiated model, a single agent, denoted $ E $, processes the resource $ R $ and replicates. In the differentiated model, an agent $ E_1 $ ``metabolizes'' the resource to some intermediate $ R^{*} $, and then another agent, denoted $ E_2 $, processes the intermediate and reproduces. However, the $ E_2 $ agents are responsible for supplying both metabolizers and replicators. Therefore, $ E_2 $ produces a ``blank'' agent, denoted $ E $, which then specializes and becomes either $ E_1 $ or $ E_2 $. 
\subsection{Undifferentiated model} The reactions defining the undifferentiated model are, \begin{eqnarray} & & E + R \rightarrow E-R \mbox{ second-order rate constant $ k_1 $} \nonumber \\ & & E-R \rightarrow E + E \mbox{ first-order rate constant $ k_2 $} \nonumber \\ & & R \rightarrow \mbox{ Decay products (first-order rate constant $ k_D $)} \end{eqnarray} In terms of population numbers, the dynamical equations for $ n_E $, $ n_{ER} $ and $ n_R $ are, \begin{eqnarray} & & \frac{d n_E}{dt} = -\frac{k_1}{V} n_E n_R + 2 k_2 n_{ER} \nonumber \\ & & \frac{d n_{ER}}{dt} = \frac{k_1}{V} n_E n_R - k_2 n_{ER} \nonumber \\ & & \frac{d n_R}{dt} = f_R V - \frac{k_1}{V} n_E n_R - k_D n_R \end{eqnarray} Therefore, defining $ n = n_E + n_{ER} $, we have, \begin{equation} \frac{d n}{dt} = k_2 n_{ER} \end{equation} Since the population density is $ \rho $, this implies that, \begin{equation} \frac{d V}{dt} = \frac{1}{\rho} \frac{d n}{dt} = k_2 V x_{ER} \end{equation} where $ x_E \equiv n_E/n $, $ x_{ER} \equiv n_{ER}/n $. Now, the concentration $ c_R $ of the resource $ R $ is given by the relation $ n_R = c_R V $, which implies that \begin{eqnarray} \frac{d c_R}{dt} & = & \frac{1}{V} (\frac{d n_R}{dt} - c_R \frac{d V}{dt}) \nonumber \\ & = & f_R - (k_1 \rho x_E + k_2 x_{ER} + k_D) c_R \end{eqnarray} Putting everything together, we obtain, finally, the system of equations, \begin{eqnarray} & & \frac{1}{n} \frac{d n}{dt} = k_2 x_{ER} \nonumber \\ & & \frac{d x_E}{dt} = -k_1 c_R x_E + 2 k_2 x_{ER} - k_2 x_{ER} x_E \nonumber \\ & & \frac{d x_{ER}}{dt} = k_1 c_R x_E - k_2 x_{ER} - k_2 x_{ER}^2 \nonumber \\ & & \frac{d c_R}{dt} = f_R - c_R (k_1 \rho x_E + k_2 x_{ER} + k_D) \end{eqnarray} We can determine the steady-state behavior of the model by setting the left-hand-side of the above system of equations to $ 0 $. 
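As a check on the reduced system, a simple forward-Euler integration (a sketch only, with purely illustrative parameter values) relaxes to a steady state, conserves $x_E + x_{ER} = 1$ (since $d(x_E + x_{ER})/dt = k_2 x_{ER}(1 - x_E - x_{ER})$ vanishes on that surface), and, for small $\rho$, lands close to the physical root of the $\rho \rightarrow 0$ cubic derived below.

```python
# Forward-Euler integration of the undifferentiated model's intensive
# equations; parameter values are illustrative only.
k1, k2, kD, fR, rho = 1.0, 1.0, 0.1, 2.0, 0.01

def derivs(xE, xER, cR):
    dxE = -k1 * cR * xE + 2.0 * k2 * xER - k2 * xER * xE
    dxER = k1 * cR * xE - k2 * xER - k2 * xER ** 2
    dcR = fR - cR * (k1 * rho * xE + k2 * xER + kD)
    return dxE, dxER, dcR

xE, xER, cR = 1.0, 0.0, 0.0       # start from bare agents and no resource
dt = 1e-2
for _ in range(300_000):          # integrate to t = 3000, well past transients
    dxE, dxER, dcR = derivs(xE, xER, cR)
    xE, xER, cR = xE + dt * dxE, xER + dt * dxER, cR + dt * dcR

def x_er_cubic():
    """Physical root in [0, 1] of the rho -> 0 cubic for x_{ER,ss}, by bisection."""
    a = k1 * fR / k2 ** 2

    def f(x):
        return x ** 3 + (1.0 + kD / k2) * x ** 2 + (a + kD / k2) * x - a

    lo, hi = 0.0, 1.0             # f(0) = -a < 0 and f(1) = 2 + 2*kD/k2 > 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since explicit Euler shares its fixed points with the underlying flow, the integrated state satisfies the steady-state conditions to machine precision, and at $\rho = 0.01$ the integrated $x_{ER}$ sits within a fraction of a percent of the $\rho \rightarrow 0$ cubic root.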
When $ \rho \rightarrow 0 $, the steady-state solution is characterized by, \begin{equation} c_{R, ss} = \frac{f_R}{k_D + k_2 x_{ER, ss}} \mbox{ ($ \rho \rightarrow 0 $)} \end{equation} where $ x_{ER, ss} $ is the solution to the cubic, \begin{equation} 0 = x^3 + (1 + \frac{k_D}{k_2}) x^2 + (\frac{k_1 f_R}{k_2^2} + \frac{k_D}{k_2}) x - \frac{k_1 f_R}{k_2^2} \mbox{ ($ \rho \rightarrow 0 $)} \end{equation} Note that when $ f_R = 0 $, we obtain $ x_{ER, ss} = 0 $. Therefore, differentiating the cubic with respect to $ x $ gives, \begin{equation} (\frac{d x_{ER, ss}}{d f_R})_{f_R = 0} = \frac{k_1}{k_2 k_D} \end{equation} and so we have, \begin{equation} x_{ER, ss} = \frac{k_1}{k_2 k_D} f_R \mbox{ (small $ f_R $, $ \rho \rightarrow 0 $)} \end{equation} When $ f_R $ is large, we get $ x_{ER, ss} \rightarrow 1 $, so that \begin{equation} x_{ER, ss} = 1 \mbox{ ($ f_R \rightarrow \infty $, $ \rho \rightarrow 0 $)} \end{equation} Equating the small $ f_R $ and large $ f_R $ expressions, we obtain that the transition from small $ f_R $ to large $ f_R $ behavior is approximated by, \begin{equation} f_{R, trans, 1}(\rho = 0) = k_2 \frac{k_D}{k_1} \end{equation} Now, when $ \rho $ is large, then the steady-state expression for $ c_R $ is approximated by, \begin{equation} 0 = \frac{d c_R}{dt} = f_R - c_R k_1 \rho x_E \rightarrow k_1 c_R x_E = \frac{f_R}{\rho} \end{equation} and so, \begin{equation} 0 = \frac{f_R}{\rho} - k_2 x_{ER, ss} - k_2 x_{ER, ss}^2 \end{equation} from which it follows that, \begin{equation} x_{ER, ss} = \frac{1}{2}[-1 + \sqrt{1 + 4 \frac{f_R}{k_2 \rho}}] \end{equation} Since $ \rho $ is large, we will approximate this expression further, by taking the first-order expansion in $ f_R/(k_2 \rho) $, giving, \begin{equation} x_{ER, ss} = \frac{f_R}{k_2 \rho} \end{equation} We can estimate the cross-over from small $ \rho $ to large $ \rho $ behavior by equating the two expressions. We have two estimates, one for small $ f_R $, and one for large $ f_R $. 
We obtain, \begin{eqnarray} & & \rho_{trans, 1}^{-} = \frac{k_D}{k_1} \mbox{ ($ f_R < k_2 \frac{k_D}{k_1} $)} \nonumber \\ & & \rho_{trans, 1}^{+} = \frac{f_R}{k_2} \mbox{ ($ f_R > k_2 \frac{k_D}{k_1} $)} \end{eqnarray} \subsection{Differentiated model} The reactions defining the differentiated model are, \begin{eqnarray} & & E_1 + R \rightarrow E_1-R \mbox{ second-order rate constant $ k_1' $} \nonumber \\ & & E_1-R \rightarrow E_1 + R^{*} \mbox{ first-order rate constant $ k_2' $} \nonumber \\ & & E_2 + R^{*} \rightarrow E_2-R^{*} \mbox{ second-order rate constant $ k_3' $} \nonumber \\ & & E_2-R^{*} \rightarrow E_2 + E \mbox{ first-order rate constant $ k_4' $} \nonumber \\ & & E \rightarrow E_1 \mbox{ first-order rate constant $ k_5' $} \nonumber \\ & & E \rightarrow E_2 \mbox{ first-order rate constant $ k_6' $} \nonumber \\ & & R \rightarrow \mbox{ Decay products (first-order rate constant $ k_D $)} \nonumber \\ & & R^{*} \rightarrow \mbox{ Decay products (first-order rate constant $ k_D^{*} $)} \end{eqnarray} Following a procedure similar to the one carried out for the undifferentiated model, we obtain the system of equations, \begin{eqnarray} & & \frac{1}{n} \frac{d n}{dt} = k_4' x_{E_2 R^{*}} \nonumber \\ & & \frac{d x_{E_1}}{dt} = -k_1' c_R x_{E_1} + k_2' x_{E_1 R} + k_5' x_E - k_4' x_{E_2 R^{*}} x_{E_1} \nonumber \\ & & \frac{d x_{E_1 R}}{dt} = k_1' c_R x_{E_1} - k_2' x_{E_1 R} - k_4' x_{E_2 R^{*}} x_{E_1 R} \nonumber \\ & & \frac{d x_{E_2}}{dt} = -k_3' c_{R^{*}} x_{E_2} + k_4' x_{E_2 R^{*}} + k_6' x_E - k_4' x_{E_2 R^{*}} x_{E_2} \nonumber \\ & & \frac{d x_{E_2 R^{*}}}{dt} = k_3' c_{R^{*}} x_{E_2} - k_4' x_{E_2 R^{*}} - k_4' x_{E_2 R^{*}}^2 \nonumber \\ & & \frac{d x_E}{dt} = k_4' x_{E_2 R^{*}} - (k_5' + k_6') x_E - k_4' x_{E_2 R^{*}} x_E \nonumber \\ & & \frac{d c_R}{dt} = f_R - (k_1' \rho x_{E_1} + k_D) c_R - k_4' x_{E_2 R^{*}} c_R \nonumber \\ & & \frac{d c_{R^{*}}}{dt} = \rho (k_2' x_{E_1 R} - k_3' c_{R^{*}} x_{E_2}) - (k_D^{*} + k_4' x_{E_2 
R^{*}}) c_{R^{*}} \nonumber \\ \end{eqnarray} Now, defining $ \tilde{x}_{E_1} = x_{E_1} + x_{E_1 R} $, and $ \tilde{x}_{E_2} = x_{E_2} + x_{E_2 R^{*}} $, we obtain, \begin{eqnarray} & & \frac{d \tilde{x}_{E_1}}{dt} = k_5' x_E - k_4' x_{E_2 R^{*}} \tilde{x}_{E_1} \nonumber \\ & & \frac{d \tilde{x}_{E_2}}{dt} = k_6' x_E - k_4' x_{E_2 R^{*}} \tilde{x}_{E_2} \end{eqnarray} Therefore, at steady-state we have, \begin{equation} \frac{\tilde{x}_{E_1, ss}}{\tilde{x}_{E_2, ss}} = \frac{k_5'}{k_6'} \end{equation} and so, using the relation $ \tilde{x}_{E_1} + \tilde{x}_{E_2} + x_E = 1 $ we obtain, \begin{eqnarray} & & \tilde{x}_{E_1, ss} = \frac{k_5'}{k_5' + k_6'} (1 - x_{E, ss}) \nonumber \\ & & \tilde{x}_{E_2, ss} = \frac{k_6'}{k_5' + k_6'} (1 - x_{E, ss}) \end{eqnarray} If we let $ k_5', k_6' \rightarrow \infty $ such that $ k_5'/k_6' $ remains constant, then it should be clear that $ x_{E, ss} \rightarrow 0 $. Intuitively, $ E $ differentiates to either $ E_1 $ or $ E_2 $ as soon as it is produced, so it does not build up in the system. The ratio between $ k_5' $ and $ k_6' $ then dictates the fraction of $ E_1 $ and $ E_2 $ in the system (allowing $ k_5', k_6' \rightarrow \infty $ essentially amounts to assuming that the differentiation time is zero. This is of course not true, and future research will need to incorporate positive differentiation times). Defining $ \alpha = k_5'/(k_5' + k_6') $, we then have $ \tilde{x}_{E_1, ss} = \alpha $, and $ \tilde{x}_{E_2, ss} = 1 - \alpha $. Therefore, to characterize the system at steady-state, we need to solve four equations, giving the steady-state conditions for $ x_{E_1 R} $, $ x_{E_2 R^{*}} $, $ c_R $, and $ c_{R^{*}} $, respectively. 
The equations are, \begin{eqnarray} & & 0 = k_1' c_R (\alpha - x_{E_1 R}) - k_2' x_{E_1 R} - k_4' x_{E_2 R^{*}} x_{E_1 R} \nonumber \\ & & 0 = k_3' c_{R^{*}} (1 - \alpha - x_{E_2 R^{*}}) - k_4' x_{E_2 R^{*}} - k_4' x_{E_2 R^{*}}^2 \nonumber \\ & & 0 = f_R - c_R (k_D + k_1' \rho (\alpha - x_{E_1 R})) - k_4' x_{E_2 R^{*}} c_R \nonumber \\ & & 0 = \rho (k_2' x_{E_1 R} - k_3' c_{R^{*}} (1 - \alpha - x_{E_2 R^{*}})) \nonumber \\ & & - k_D^{*} c_{R^{*}} - k_4' c_{R^{*}} x_{E_2 R^{*}} \end{eqnarray} As with the undifferentiated case, we study the behavior of this system of equations in both the small and large $ \rho $ limits. When $ \rho = 0 $, we have $ c_{R^{*}, ss} = 0 \Rightarrow x_{E_2 R^{*}, ss} = 0 \Rightarrow c_{R, ss} = f_R/k_D \Rightarrow x_{E_1 R, ss} = (k_1' f_R \alpha/k_D)/(k_2' + k_1' f_R/k_D) $. Differentiating the steady-state equations with respect to $ \rho $, and evaluating at $ \rho = 0 $, gives, \begin{eqnarray} & & (\frac{d c_{R^{*}, ss}}{d \rho})_{\rho = 0} = \frac{k_2'}{k_D^{*}} (x_{E_1 R})_{\rho = 0} \nonumber \\ & & (\frac{d x_{E_2 R^{*}, ss}}{d \rho})_{\rho = 0} = \frac{k_3'}{k_4'} (1 - \alpha) (\frac{d c_{R^{*}, ss}}{d \rho})_{\rho = 0} \end{eqnarray} and so, for small $ \rho $, we have, \begin{equation} x_{E_2 R^{*}, ss} = \frac{k_2' k_3'}{k_4' k_D^{*}} \frac{\frac{k_1' f_R}{k_D}}{k_2' + \frac{k_1' f_R}{k_D}} \alpha (1 - \alpha) \rho \mbox{ (small $ \rho $)} \end{equation} Now for large $ \rho $, our steady-state equations may be reduced to, \begin{eqnarray} & & 0 = k_1' c_R (\alpha - x_{E_1 R}) - k_2' x_{E_1 R} - k_4' x_{E_2 R^{*}} x_{E_1 R} \nonumber \\ & & 0 = k_3' c_{R^{*}} (1 - \alpha - x_{E_2 R^{*}}) - k_4' x_{E_2 R^{*}} - k_4' x_{E_2 R^{*}}^2 \nonumber \\ & & 0 = f_R - k_1' c_R \rho (\alpha - x_{E_1 R}) \nonumber \\ & & 0 = k_2' x_{E_1 R} - k_3' c_{R^{*}} (1 - \alpha - x_{E_2 R^{*}}) \end{eqnarray} The third equation gives $ k_1' c_R (\alpha - x_{E_1 R}) = f_R/\rho $, which may be substituted into the first equation to give, 
\begin{equation} 0 = \frac{f_R}{\rho} - x_{E_1 R} (k_2' + k_4' x_{E_2 R^{*}}) \end{equation} Solving for $ x_{E_1 R} $ in terms of $ x_{E_2 R^{*}} $, and plugging the resulting expression into the fourth steady-state equation gives, after some manipulation, that $ x_{E_2 R^{*}, ss} $ is the solution of the cubic, \begin{equation} 0 = x_{E_2 R^{*}, ss}^3 + (1 + \frac{k_2'}{k_4'}) x_{E_2 R^{*}, ss}^2 + \frac{k_2'}{k_4'} x_{E_2 R^{*}, ss} - \frac{k_2' f_R}{k_4'^2 \rho} \end{equation} Now, when $ f_R = 0 $, we obtain $ x_{E_2 R^{*}, ss} = 0 $. From this it is possible to show that, \begin{equation} (\frac{d x_{E_2 R^{*}, ss}}{d f_R})_{f_R = 0} = \frac{1}{\rho k_4'} \end{equation} and so, \begin{equation} x_{E_2 R^{*}, ss} = \frac{f_R}{k_4' \rho} \mbox{ (large $ \rho $)} \end{equation} As with the undifferentiated case, the transition from small $ \rho $ to large $ \rho $ behavior may be estimated by equating the two expressions and solving for $ \rho $. The result is, \begin{equation} \rho_{trans, 2} = \sqrt{\frac{1}{\alpha (1 - \alpha)} \frac{k_D k_D^{*}}{k_3'}(\frac{1}{k_1'} + \frac{f_R}{k_2' k_D})} \end{equation} \subsection{Comparison of undifferentiated and differentiated models} As a function of $ f_R $, we wish to determine if, as $ \rho $ increases, the average growth rate of the differentiated population overtakes the growth rate of the undifferentiated population. If this does indeed happen, then there exists a $ \rho $, denoted $ \rho_{equal} $, at which the two growth rates are equal. We first consider the regime $ f_R < k_2 (k_D/k_1) $. This is the small $ f_R $ regime of the undifferentiated population. In this regime, the transition from low $ \rho $ to large $ \rho $ behavior occurs at $ \rho_{trans, 1} = k_D/k_1 $. For the differentiated pathway, we have $ \rho_{trans, 2} = \sqrt{\frac{1}{\alpha (1 - \alpha)} \frac{k_D^{*}}{k_1' k_3'}(\frac{k_1'}{k_2'} f_R + k_D)} $. 
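Since $\rho_{trans, 2}$ is defined by equating the small-$\rho$ and large-$\rho$ expressions for $x_{E_2 R^{*}, ss}$, the algebra can be verified mechanically. A sketch with illustrative constants only (none from the paper; a trailing \texttt{p} marks primes):

```python
import math

# Illustrative constants only; "p" marks primed (differentiated-model) rates.
k1p, k2p, k3p, k4p = 1.0, 2.0, 1.5, 3.0
kD, kDstar, fR, alpha = 0.2, 0.3, 4.0, 0.5

def x_small_rho(rho):
    """Small-rho limit of x_{E2R*,ss}."""
    A = k1p * fR / kD
    return (k2p * k3p / (k4p * kDstar)) * (A / (k2p + A)) \
        * alpha * (1.0 - alpha) * rho

def x_large_rho(rho):
    """Large-rho limit of x_{E2R*,ss}."""
    return fR / (k4p * rho)

rho_trans2 = math.sqrt((1.0 / (alpha * (1.0 - alpha)))
                       * (kD * kDstar / k3p)
                       * (1.0 / k1p + fR / (k2p * kD)))
```

The two asymptotic branches meet at $\rho_{trans, 2}$ up to floating-point rounding, as the closed-form expression requires.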
Now, we have four possibilities: (1) $ \rho_{equal} < \rho_{trans, 1}, \rho_{trans, 2} $. (2) $ \rho_{trans, 2} < \rho_{equal} < \rho_{trans, 1} $. (3) $ \rho_{trans, 1} < \rho_{equal} < \rho_{trans, 2} $. (4) $ \rho_{equal} > \rho_{trans, 1}, \rho_{trans, 2} $. We can immediately eliminate Cases (2), (3), and (4) as possibilities. For Case (4), we get an undifferentiated rate of $ k_2 f_R/(k_2 \rho) = f_R/\rho $, and a differentiated rate of $ k_4' f_R/(k_4' \rho) = f_R/\rho $, and so the two rates are equal. For Case (2), we get an undifferentiated rate of $ k_1 f_R/k_D $, and a differentiated rate of $ f_R/\rho $, so equating gives $ \rho_{equal} = k_D/k_1 = \rho_{trans, 1} $. Therefore, Case (2) is essentially a limiting case of Case (4), and can also be eliminated. For Case (3), we get an undifferentiated rate of $ f_R/\rho $, and a differentiated rate of $ (k_2' k_3'/k_D^{*}) (k_1' f_R/k_D)/(k_2' + k_1' f_R/k_D) \alpha (1 - \alpha) \rho $, so equating gives $ \rho_{equal} = \rho_{trans, 2} $. Therefore, Case (3) is essentially a limiting case of Case (1), and can also be eliminated. For Case (1), we have, \begin{equation} \rho_{equal} = \frac{1}{\alpha (1 - \alpha)} \frac{k_1 k_D^{*}}{k_3'} (\frac{1}{k_1'} + \frac{f_R}{k_2' k_D}) \end{equation} Now, we can show that, \begin{equation} \frac{\rho_{equal}}{\rho_{trans, 2}} = \frac{\rho_{trans, 2}}{\rho_{trans, 1}} = k_1 \sqrt{\frac{1}{\alpha (1 - \alpha)} \frac{k_D^{*}}{k_3' k_D} (\frac{1}{k_1'} + \frac{f_R}{k_2' k_D})} \end{equation} and so, in order for $ \rho_{equal} < \rho_{trans, 1}, \rho_{trans, 2} $, then we must have, \begin{equation} f_R < k_D \frac{k_2}{k_1} \frac{k_2'}{k_2} [\alpha (1 - \alpha) \frac{k_3' k_D}{k_1 k_D^{*}} - \frac{k_1}{k_1'}] \end{equation} We now consider the case where $ f_R > k_D \frac{k_2}{k_1} $. This is the large $ f_R $ regime of the undifferentiated population. 
Following a similar procedure to the one carried out for the small $ f_R $ regime, we can show that the only possible crossover occurs in the small $ \rho $ regimes for both the undifferentiated and differentiated cases. In this regime, we obtain, \begin{equation} \rho_{equal} = \frac{k_2}{f_R} \frac{1}{\alpha (1 - \alpha)} \frac{k_D k_D^{*}}{k_3'} (\frac{1}{k_1'} + \frac{f_R}{k_2' k_D}) \end{equation} We can show that, \begin{equation} \frac{\rho_{equal}}{\rho_{trans, 2}} = \frac{\rho_{trans, 2}}{\rho_{trans, 1}} = \frac{k_2}{f_R} \sqrt{\frac{1}{\alpha (1 - \alpha)} \frac{k_D k_D^{*}}{k_3'} (\frac{1}{k_1'} + \frac{f_R}{k_2' k_D})} \end{equation} and so, in order for $ \rho_{equal} < \rho_{trans, 1}, \rho_{trans, 2} $, we must have, \begin{equation} \frac{k_D^{*}}{k_3'} < \alpha (1 - \alpha) \frac{(f_R/k_2)^2}{k_D/k_1' + f_R/k_2'} \end{equation} In Figure 9, we show a high-$ f_R $ plot where the differentiated growth rate overtakes the undifferentiated growth rate. In Figure 10, we show a high-$ f_R $ plot where the undifferentiated growth rate stays above the differentiated rate at all values of $ \rho $ (these figures are included in the version submitted to {\it The Journal of Theoretical Biology}). \subsection{When can a differentiated population outreplicate an undifferentiated population?} We can subject our replication-metabolism model to a similar analysis to the one applied to the compartment model. First of all, as with the compartment model, we expect that the differentiated pathway can only overtake the undifferentiated pathway, and then maintain a higher replication rate if $ 1/k_2' + 1/k_4' < 1/k_2 $. Again, this condition simply states that the total characteristic time associated with converting resource into a new agent in the differentiated case is less than the total characteristic time in the undifferentiated case. The assumption is that decay costs are negligible, as well as time costs associated with grabbing resource and intermediates. 
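Returning to the case analysis above: the chained ratio identity in the small-$f_R$ regime, and the equivalence between $\rho_{equal} < \rho_{trans, 1}, \rho_{trans, 2}$ and the stated bound on $f_R$, can both be checked numerically. A sketch with made-up constants (none taken from the paper):

```python
import math

# Illustrative constants; small-f_R regime of the undifferentiated population.
k1, k2 = 1.0, 2.0
k1p, k2p, k3p = 1.5, 3.0, 2.0
kD, kDstar, alpha = 0.5, 0.2, 0.5
fR = 0.1 * k2 * kD / k1              # keep f_R well below k_2 k_D / k_1

B = 1.0 / k1p + fR / (k2p * kD)      # common factor 1/k_1' + f_R/(k_2' k_D)
rho_trans1 = kD / k1
rho_trans2 = math.sqrt((1.0 / (alpha * (1.0 - alpha))) * (kD * kDstar / k3p) * B)
rho_equal = (1.0 / (alpha * (1.0 - alpha))) * (k1 * kDstar / k3p) * B

# Bound on f_R for Case (1) to apply (rho_equal below both transition densities)
f_bound = kD * (k2 / k1) * (k2p / k2) \
    * (alpha * (1.0 - alpha) * k3p * kD / (k1 * kDstar) - k1 / k1p)
```

With these values $\rho_{equal} < \rho_{trans, 2} < \rho_{trans, 1}$, and the $f_R$ bound flags the crossover consistently with the direct comparison of densities.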
An interesting behavior that occurs with the replication-metabolism model is the different dependence on $ f_R $ that the transition population density $ \rho_{equal} $ has in the low-$ f_R $ regime and the high-$ f_R $ regime. In the high-$ f_R $ regime, $ \rho_{equal} $ has a weak dependence on $ f_R $, though it does decrease as $ f_R $ increases. This makes sense, for in the high-$ f_R $ regime, the growth rate of the undifferentiated population is limited by the rate at which the complex $ E-R $ produces new agents. As $ f_R $ increases, the cost associated with the decay of the intermediate resource $ R^{*} $ decreases, so that the differentiated pathway overtakes the undifferentiated pathway sooner. In the low-$ f_R $ regime, $ \rho_{equal} $ increases linearly with $ f_R $, so that, as $ f_R $ increases in this regime, the differentiated pathway overtakes the undifferentiated pathway only at higher values of $ \rho $ (if it overtakes at all). The reason for this behavior is that at low $ f_R $, the growth rate of the undifferentiated population is resource limited, so that increasing $ f_R $ actually increases the growth rate. The effect of this is to push to higher values of $ \rho $ the point at which the differentiated agents outreplicate the undifferentiated agents. What is interesting with these patterns of behavior is that they indicate opposite criteria for when a cooperative replicative strategy is favored, depending on the availability of resource: When resources are plentiful, then increasing the resource favors a differentiated replication strategy. However, when resources are limited, then {\it decreasing} the resource favors a differentiated replication strategy. In this vein, it is interesting to note that complex multicellular life is only possible in relatively resource-rich environments. 
On the other hand, organisms such as the cellular slime mold ({\it Dictyostelium discoideum}) transition from a single-celled to a multi-celled life cycle when starved. While we have already postulated one possible reason for this behavior in terms of minimizing overall reproductive costs \cite{MULTICELL}, the behavior indicated in our model may provide another, complementary explanation for the selective advantage for this phenomenon. \section{Conclusions and Future Research} This paper developed two models with which to compare the performance of undifferentiated and differentiated pathways. The first model considered the flow of resource into a compartment filled with a fixed number of agents, whose collective task is to convert the resource into some final product. The second model considered the replication rate of a collection of agents, driven by an externally supplied resource. By assuming that the resource, and even more importantly, that reaction intermediates, have a finite lifetime, we were able to show that undifferentiated pathways are favored at low agent numbers and/or densities, while differentiated pathways are favored at higher agent numbers and/or densities. An equivalent way of stating this is that differentiation is favored when resources are limited, where resource limitation is measured by the ratio of available resource to agents. One interesting result that emerged from our studies was that, although limited resources favor differentiation (as measured by the resource-agent ratio), for a given set of system parameters, differentiation will be more likely to overtake nondifferentiation at higher population size and/or density if the amount of available resource is increased (although the actual cross-over location will increase as well). The central reason for this is that the relative decay costs associated with differentiation are decreased as resource is increased. 
In the context of the replication-metabolism model, we should note that when resources are plentiful, differentiation is favored at lower population densities as the resource flow is increased, while when resources are limited, differentiation is favored at lower population densities as the resource flow is decreased. Regarding the former observation, it should be noted that it has been shown that diversity of replicative strategies is favored at intermediate levels of resources \cite{DIFFWILKE}. In digital life simulations, the authors of \cite{DIFFWILKE} showed that the number of distinct replicating computer programs was maximized at intermediate resource availability, a result consistent with what is observed ecologically. We claim that the results of this paper are consistent with these observations. Regarding the latter observation, we pointed out in the previous section that this behavior is possibly consistent with the behavior of organisms such as the cellular slime mold, which transition from a single-celled to a multi-celled life form when starved. We also posit that the results of the replication-metabolism model suggest a possible evolutionary basis for the stem-cell-based tissue architecture in complex multicellular organisms. Essentially, as population density increases, and therefore as the resource-to-agent ratio decreases, it becomes more efficient for some cells to exclusively focus on replicating and renewing the population, while other cells engage in specialized functions necessary for organismal survival. Of course, our replication-metabolism model is not quite the same as a stem-cell-based tissue architecture. First of all, the stem-cell and tissue cell population does not collectively grow. Rather, the stem cells periodically divide in order to replace dead tissue cells. Therefore, the stem-cell-based tissue architecture is a kind of hybrid between our compartment model and our replication-metabolism model. 
Secondly, our replication-metabolism model assumes that there is a single differentiation step, while in reality a differentiating tissue cell undergoes several divisions and differentiation steps before becoming a mature tissue cell. Finally, our replication-metabolism model assumed that differentiation was instantaneous. In reality, differentiation takes time, and this time cost will affect whether differentiation can overtake non-differentiation, and, if so, will likely delay the critical population density where this happens. Despite these shortcomings, we believe that the models developed here could be used as the basis for more sophisticated models that could produce, via an optimization criterion, the stem-cell-based tissue architecture observed in complex multicellular organisms. This is a subject we leave for future work. \begin{acknowledgments} This research was supported by the Israel Science Foundation (Alon Fellowship). \end{acknowledgments}
2,877,628,088,774
arxiv
\section{Introduction} A \emph{sandwiched surface singularity} is a normal surface singularity that admits a birational morphism to $\mathbb{C}^2$. Sandwiched surface singularities are introduced and classified in Spivakovsky~\cite{Spivakovsky-1990}. Sandwiched surface singularities are rational singularities and they are characterized by their dual resolution graphs, which are referred to as \emph{sandwiched graphs} by Spivakovsky~\cite{Spivakovsky-1990}; see Definition~\ref{definition:sandwiched-graph}. Sandwiched surface singularities include cyclic quotient surface singularities, weighted homogeneous surface singularities with ``big'' central nodes, and rational surface singularities with the reduced fundamental cycles. Deformations of sandwiched surface singularities can be described in two ways: Picture deformations by de Jong and van Straten~\cite{deJong-vanStraten-1998} and P-modifications by Kollár~\cite{Kollar-1991}. de Jong and van Straten~\cite{deJong-vanStraten-1998} showed that any one-parameter deformations of a sandwiched surface singularity $(X,p)$ are obtained from specific one-parameter deformations of a germ of a plane curve singularity $C \subset \mathbb{C}^2$ through the origin $(0,0)$, where each irreducible components $C_i$ of $C$ are decorated by some natural numbers $l_i$. The pair $(C = \cup C_i, l = \cup l_i)$ is called a \emph{decorated curve} of the singularity $(X,p)$. Furthermore, all smoothings of the singularity $(X,p)$ are provided by certain particular one-parameter deformations of the decorated curve $(C,l)$, known as \emph{picture deformations}. See Section~\ref{section:sandwiched} for a summary. On the other hand, Kollár~\cite{Kollar-1991} conjectured that for any rational surface singularity $(X,p)$ there is a one-to-one correspondence between irreducible components of the reduced miniversal deformation space of $(X,p)$ and specific proper modifications of $(X,p)$, called \emph{P-modifications}. 
In particular, the conjecture states that if $\mathcal{X} \to \Delta$ is a one-parameter deformation of $X$ over a small disk $\Delta$ then there exists a P-modification $U \to X$ together with a $\mathbb{Q}$-Gorenstein deformation $\mathcal{U} \to \Delta$ of $U$ such that the deformation $\mathcal{U} \to \Delta$ is blown down (possibly after a base change) to the given deformation $\mathcal{X} \to \Delta$. Kollár conjecture is known to hold in only a few cases; for further information, see Stevens~\cite{Stevens-2003}. For example, Kollár and Shepherd-Barron~\cite{KSB-1988} showed that the conjecture holds for quotient surface singularities and that the corresponding P-modifications are normal surfaces admitting only \emph{T-singularities} as singularities (which are referred to as \emph{P-resolutions}). See Section~\ref{section:P-modifications} for a summary. In this paper we present a method for obtaining the associated picture deformations from the P-resolutions corresponding to a given one-parameter smoothing of a sandwiched surface singularity by applying the semi-stable minimal model program for complex 3-folds. That is, if a one-parameter smoothing $\mathcal{X} \to \Delta$ of $(X,p)$ is induced by a P-resolution $U \to X$, then one can find the picture deformation of a decorated curve $(C,l)$ corresponding to the smoothing $\mathcal{X} \to \Delta$ of $(X,p)$ by applying the semi-stable MMP. More specifically, the procedure is as follows. We first compactify the singularity $(X,p)$ and its decorated curve $(C,l)$ to a projective singular surface $(Y,p)$ and a decorated projective curve $(D,l)$ (called a \emph{compactified decorated curve}), respectively, using the birational morphism $X \to \mathbb{C}^2$. We show that there is no local-to-global obstruction to deforming $(Y,p)$ by showing that $H^2(Y, \mathcal{T}_Y)=0$; Theorem~\ref{theorem:extension-of-deformation}.
Then we show that every one-parameter smoothing of the projective surface $(Y,p)$ induced by a smoothing of the singular point $p \in Y$ is obtained from a picture deformation of the compactified decorated curve $(D,l)$ as before; see Section~\ref{section:compactification} for details. Similarly, in Section~\ref{section:identification}, we show that any P-resolution $U \to X$ of the singularity $X$ can be compactified to a projective surface $Z \to Y$ (called a \emph{compactified P-resolution}). Any deformation of $U$ can be extended to a deformation of $Z$. Then we show that, if a one-parameter smoothing $\mathcal{X} \to \Delta$ of $(X,p)$ is induced by a P-resolution $U \to X$, then one obtains the picture deformation of the compactified decorated curve $(D,l)$ of $(Y,p)$, hence that of the decorated curve $(C,l)$ of $(X,p)$, corresponding to the deformation $\mathcal{X} \to \Delta$ by applying the semi-stable MMP to the $\mathbb{Q}$-Gorenstein smoothing $\mathcal{Z} \to \Delta$ of the compactified P-resolution $Z \to Y$. \begin{maintheorem}[Theorem~\ref{theorem:main}] One can run the semi-stable MMP on a one-parameter deformation $\mathcal{Z} \to \Delta$ of a compactified P-resolution $Z$ of $(X,p)$ until one obtains the corresponding picture deformation $\mathcal{D} \to \Delta$ of the compactified decorated curve $(D,l)$. \end{maintheorem} Using a similar technique, we provide several examples of sandwiched surface singularities for which the corresponding picture deformations may be determined from their normal P-modifications (not necessarily P-resolutions). As an application, we present a general procedure for proving Kollár conjecture using the machinery developed in this paper; Section~\ref{section:How-To-K-conjecture}. Indeed, we verify Kollár conjecture for various weighted homogeneous surface singularities in Section~\ref{section:example-WHSS} and Section~\ref{section:Wpqr}.
\begin{maintheorem}[Theorem~\ref{theorem:Wpqr-K-conjecture}] Kollár conjecture holds for a weighted homogeneous surface singularity $W_{p,q,r}$, which is one of the weighted homogeneous surface singularities that admit rational homology disk smoothings. \end{maintheorem} Indeed, we can prove that Kollár conjecture holds for every weighted homogeneous surface singularity admitting a rational homology disk smoothing. However, we have split those results into a separate paper (H. Park--D. Shin~\cite{PS-2022}) because the proofs are lengthy and involve intricate combinatorial calculations. Also, J. Jeon and D. Shin~\cite{Jeon-Shin-2022} show that Kollár conjecture holds for weighted homogeneous surface singularities with the ``big'' central node using the results in this paper. As another application, we present a correspondence between three theories of deformations of cyclic quotient surface singularities: P-resolutions by Kollár and Shepherd-Barron~\cite{KSB-1988}, picture deformations by de Jong and van Straten~\cite{deJong-vanStraten-1998}, and `equations' by Christophersen~\cite{Christophersen-1991} and Stevens~\cite{Stevens-1991}. For brief summaries of each theory, see Section~\ref{section:P-modifications} for P-resolutions, Section~\ref{section:CQSS-sandwiched} for picture deformations, and Section~\ref{section:CQSS-Equation} for `equations'. We have already presented a correspondence between P-resolutions and picture deformations via the semi-stable MMP in this paper. However, in the case of cyclic quotient surface singularities, we show that one can run the MMP in an explicit and controlled way; Section~\ref{section:CQSS-PtoP}. A correspondence between P-resolutions and `equations' was presented in PPSU~\cite{PPSU-2018} via the semi-stable MMP in a fashion similar to this paper. In particular, they also compactified a given P-resolution, but in a manner different from that used in this paper; see Section~\ref{section:CQSS-PtoE} for a summary.
In contrast, Némethi and Popescu-Pampu~\cite{Nemethi-Poposcu-Pampu-2010} developed a topological method for finding `equations' from picture deformations. Their approach relied on Lisca's classification~\cite{Lisca-2008} of minimal symplectic fillings of cyclic quotient surface singularities. Instead, in order to find a direct way from picture deformations to `equations', we compare the two compactifications used in PPSU~\cite{PPSU-2018} and in this paper in Section~\ref{section:CQSS-PtoE}. Therefore, we have a complete picture of all known deformation theories of cyclic quotient surface singularities through the semi-stable MMP. We organize this paper as follows. We begin by reviewing the basics of sandwiched surface singularities in Section~\ref{section:sandwiched}. We then define their compatible compactifications and prove that there is no local-to-global obstruction to deforming them in Section~\ref{section:compactification}. In Section~\ref{section:P-modifications} and Section~\ref{section:semistable-MMP}, we recall the main tools of this paper: P-modifications and the semi-stable MMP. We also introduce Kollár conjecture. The main part of this paper is Section~\ref{section:identification}. We will show that one can obtain the corresponding picture deformations from given P-resolutions by applying the semi-stable MMP. We discuss briefly how one can prove Kollár conjecture using this machinery in Section~\ref{section:How-To-K-conjecture}. After that, we present various examples analyzing the correspondence between P-resolutions and picture deformations using the semi-stable MMP in Section~\ref{section:illustrations}. In particular, we will prove Kollár conjecture for a certain weighted homogeneous surface singularity in Section~\ref{section:Wpqr} and Section~\ref{section:Wpqr-K-Conjecture}, which demonstrates the usefulness of the machinery developed in this paper.
Finally, we investigate three deformation theories of cyclic quotient surface singularities via the semi-stable MMP in Sections~\ref{section:CQSS-sandwiched}--\ref{section:CQSS-PictoE}. \subsection*{Acknowledgements} Heesang Park was supported by Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education: NRF-2021R1F1A1063959. Dongsoo Shin was supported by the National Research Foundation of Korea grant funded by the Korea government: 2018R1D1A1B07048385 and 2021R1A4A3033098. The authors would like to thank the Korea Institute for Advanced Study for its warm hospitality while they were associate members of KIAS. \section{Sandwiched surface singularities}\label{section:sandwiched} We summarize the basics of sandwiched surface singularities and their deformations. We refer to Spivakovsky~\cite{Spivakovsky-1990} and de Jong--van Straten~\cite{deJong-vanStraten-1998} for details. \begin{definition} A normal surface singularity $(X,p)$ is said to be \emph{sandwiched} if it is analytically isomorphic to a germ of an algebraic surface $X$ that admits a birational map $X \to \mathbb{C}^2$. \end{definition} Sandwiched surface singularities are rational. They are characterized by their dual resolution graphs. \begin{definition}[{Spivakovsky~\cite[Definition~1.9]{Spivakovsky-1990}}]\label{definition:sandwiched-graph} A graph $\Gamma$ is called a \emph{sandwiched graph} if it is the dual resolution graph of a rational surface singularity and it can be blown down to a smooth point after adding new vertices with weight $(-1)$ at the proper places. \end{definition} \begin{proposition}[{Spivakovsky~\cite[Proposition~1.11]{Spivakovsky-1990}}] A normal surface singularity is sandwiched if and only if its dual resolution graph is sandwiched. \end{proposition} It is well known that every cyclic quotient surface singularity is sandwiched; cf. Figure~\ref{figure:sandwiched-structure-for-cyclic}.
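For instance, the self-intersection numbers in the dual graph of the minimal resolution of a cyclic quotient surface singularity of type $\frac{1}{n}(1,q)$ are $-a_1, \dotsc, -a_k$, where $n/q = [a_1, \dotsc, a_k] = a_1 - 1/(a_2 - 1/(\dotsb))$ is the Hirzebruch--Jung continued fraction. As an illustration (the following script and its function name are ours, not taken from the cited references), one can compute the expansion for the singularity of type $\frac{1}{19}(1,7)$ appearing in the example below:

```python
def hj_expansion(n, q):
    """Hirzebruch-Jung continued fraction n/q = [a1, ..., ak],
    meaning n/q = a1 - 1/(a2 - 1/(...)), with 0 < q < n."""
    expansion = []
    while q > 0:
        a = -(-n // q)            # ceiling of n/q via floor division
        expansion.append(a)
        n, q = q, a * q - n       # continue with q/(a*q - n)
    return expansion

print(hj_expansion(19, 7))  # [3, 4, 2]
```

The output $[3, 4, 2]$ matches the weights $-3, -4, -2$ in the dual graph of the minimal resolution used in the example below.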
\begin{example}\label{example:3-4-2} Let $(X,p)$ be the cyclic quotient surface singularity of type $\frac{1}{19}(1,7)$ and let $\Gamma$ be the dual graph of the minimal resolution of $(X,p)$. We construct a new graph $\Gamma^{\ast}$ by connecting vertices with weight $(-1)$ to $\Gamma$ as follows: \begin{equation*} \begin{tikzpicture} \node[bullet] (10) at (1,0) [labelBelow={$-3$}] {}; \node[bullet] (20) at (2,0) [labelBelow={$-4$}] {}; \node[bullet] (30) at (3,0) [labelBelow={$-2$}] {}; \node[bullet] (1-1) at (1,0.5) [labelAbove={$-1$}] {}; \node[bullet] (175-1) at (1.75,0.5) [labelAbove={$-1$}] {}; \node[bullet] (225-1) at (2.25,0.5) [labelAbove={$-1$}] {}; \node[bullet] (3-1) at (3,0.5) [labelAbove={$-1$}] {}; \draw [-] (10)--(20); \draw [-] (20)--(30); \draw [-] (10)--(1-1); \draw [-] (20)--(175-1); \draw [-] (20)--(225-1); \draw [-] (30)--(3-1); \end{tikzpicture} \end{equation*} Since $\Gamma^{\ast}$ can be blown down to a smooth point, $(X,p)$ is sandwiched. \end{example} \subsection{Decorated curves} For a sandwiched surface singularity $(X,p)$, de Jong and van Straten~\cite{deJong-vanStraten-1998} introduce a pair $(C,l)$ of a plane curve singularity $C=\cup_{i=1}^{s} C_i$ and an assignment $l \colon \{C_i \mid i=1,\dotsc,s\} \to \mathbb{N}$ so that the singularity $(X,p)$ is represented by the singularity $X(C,l)$ induced from $(C,l)$. We recall how to get $(X,p)$ from $(C,l)$. For details, see de Jong-van Straten~\cite{deJong-vanStraten-1998}. \begin{definition}[{de Jong-van Straten~\cite[Definition~1.4]{deJong-vanStraten-1998}}] A \emph{decorated germ} is a pair $(C,l)$ consisting of a plane curve singularity $C = \cup_{i=1}^{s} C_i \subset \mathbb{C}^2$ passing through the origin and an assignment $l \colon \{C_i\} \to \mathbb{N}$ such that $l(C_i) \ge m(C_i)$, where $m(C_i)$ is the sum of multiplicities of the branch $C_i$ in the multiplicity sequence of the minimal resolution of $C$.
\end{definition} Let $\widetilde{Z}(C,l) \to \mathbb{C}^2$ be the modification obtained from the minimal embedded resolution of $C$ followed by $l(C_i)-m(C_i)$ consecutive blow-ups at $C_i$. \begin{definition} The analytic space $X(C,l)$ is obtained from $\widetilde{Z}(C,l) \setminus \widetilde{C}$ by blowing down the union of all exceptional divisors not intersecting the strict transform $\widetilde{C} \subset \widetilde{Z}(C,l)$. \end{definition} The analytic space $X(C,l)$ may be smooth or have several singularities. For example, if $C$ is the ordinary cusp $y^2-x^3=0$, then $X(C,2)$ and $X(C,3)$ are smooth. However, de Jong--van Straten~\cite{deJong-vanStraten-1998} prove that if $l(C_i) \ge M(C_i)+1$, where $M(C_i)$ is the sum of multiplicities of the branch $C_i$ in the multiplicity sequence of the minimal good resolution of $C$, then the maximal compact set not intersecting $\widetilde{C}$ is connected. So $X(C,l)$ has only one singularity, which is clearly a sandwiched surface singularity. Conversely, de Jong and van Straten~\cite{deJong-vanStraten-1998} also prove that every sandwiched surface singularity is represented by $X(C,l)$ for some decorated germ $(C,l)$. \begin{definition}\label{definition:decorated-curve} For a sandwiched surface singularity $(X,p)$, a decorated germ $(C,l)$ such that $X=X(C,l)$ is called a \textit{decorated curve} of $(X,p)$. \end{definition} Using a sandwiched graph structure, one may get a decorated curve $(C,l)$ for $(X,p)$. Since the dual graph of the minimal resolution $(V,E)$ of $(X,p)$ is sandwiched, it is blown down to a smooth point after adding some $(-1)$-vertices. On the other hand, $(V,E)$ can be embedded into a blow-up $(\widetilde{\mathbb{C}}^2, F)$ of $\mathbb{C}^2$ over the origin (including its infinitely near points), where $F$ is the set of the exceptional divisors. For each $(-1)$-curve $F_i \in F$, choose a \emph{curvetta} $\widetilde{C}_i$ (i.e., a small piece of a curve) transverse to $F_i$.
We put $\widetilde{C}=\bigcup_{F_i \in F} \widetilde{C}_i$ and $C=\rho(\widetilde{C})=\bigcup_{F_i \in F} C_i$, where $\rho \colon \widetilde{\mathbb{C}}^2 \to \mathbb{C}^2$ is the blow-down map and $C_i=\rho(\widetilde{C}_i)$. Then $C$ may be considered as a germ of plane curves through the origin $0$. We decorate $C_i$ with the number $l_i$ given by the sum of the multiplicities of the blown-up points sitting on the strict transform of $C_i$. \begin{example}[Continued from Example~\ref{example:3-4-2}]\label{example:3-4-2-decorated-curve} A decorated curve $(C,l)$ of the cyclic quotient surface singularity of type $\frac{1}{19}(1,7)$ is given by $C=C_1 \cup C_2 \cup C_3 \cup C_4$ with $l_1=2, l_2=3, l_3=3, l_4=4$. \begin{center} \includegraphics[scale=0.5]{figure-3-4-2-decorated-curve} \end{center} \end{example} Notice that a decorated curve $(C,l)$ of $(X,p)$ is not uniquely determined. \subsection{Picture deformations} Any one-parameter deformation of a sandwiched surface singularity $(X,p)$ is induced from a one-parameter deformation of its decorated curve $(C,l)$. We summarize the deformation theory in de Jong-van Straten~\cite{deJong-vanStraten-1998}. See also Möhring~\cite{Mohring-2004} for a nice survey. One may regard the decoration $l$ of a decorated curve $(C,l)$ as the union over $i$ of the unique subscheme of length $l_i$ supported on the preimage of $0$ in the normalization of $C_i$. \begin{definition}[{de Jong-van Straten~\cite[Definition~4.2]{deJong-vanStraten-1998}}]\label{definition:one-parameter-deformation} Let $(C,l)$ be a decorated curve of a sandwiched surface singularity $(X,p)$.
A \emph{one-parameter deformation} $(\mathcal{C}, \mathcal{L})$ of $(C,l)$ over a small disk $\Delta$ centered at the origin $0$ consists of \begin{enumerate}[(1)] \item a $\delta$-constant deformation $\mathcal{C} \to \Delta$ of $C$, that is, $\delta(C_t)$ is constant for all $t \in \Delta,$ \item a flat deformation $\mathcal{L} \subset \mathcal{C} \times \Delta$ over $\Delta$ of the scheme $l$ such that \item $\mathcal{M} \subset \mathcal{L}$, where the relative total multiplicity scheme $\mathcal{M}$ of $\mathcal{C} \times \Delta \to \mathcal{C}$ is defined as the closure of $\bigcup_{t \in \Delta \setminus 0} m(C_t)$. \end{enumerate} \end{definition} Here the \emph{$\delta$-invariant} of a germ of a plane curve singularity $(C,0) \subset (\mathbb{C}^2, 0)$ is given by the classical formula: \begin{equation*} \delta(C,0) = \sum_{Q} \frac{m(C,Q)(m(C,Q)-1)}{2} \end{equation*} where $Q$ varies among all the points infinitely near $0$ (including $0$ itself) and $m(C,Q)$ denotes the multiplicity of the strict transform of $C$ at $Q$. Each one-parameter deformation of $(C,l)$ induces one of $(X,p)$ and vice versa: \begin{theorem}[{de Jong-van Straten~\cite[Theorem~4.4]{deJong-vanStraten-1998}}] For any one-parameter deformation $(\mathcal{C}, \mathcal{L}) \to \Delta$ of a decorated curve $(C,l)$ of a sandwiched surface singularity $(X,p)$, there exists a flat one-parameter deformation $\mathcal{X} \to \Delta$ of $(X,p)$ such that \begin{enumerate}[(1)] \item $X_0=X$, \item $X_t=X(C_t, l_t)$ for all $t \in \Delta \setminus 0$. \end{enumerate} Moreover, every one-parameter deformation of $(X,p)$ is obtained in this way. \end{theorem} Here $X(C_t,l_t)$ denotes the blow-up of $\mathbb{C}^2$ in the ideals $I(C_{t,p}, l_{t,p}) \subset \mathcal{O}_{\mathbb{C}^2,p}$ where $p$ ranges over all points of $C_t$ where $l_t$ is not zero.
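To illustrate the formula (this worked example is ours, not taken from the cited references), consider the ordinary cusp $C \colon y^2 = x^3$. Its multiplicity sequence is $2, 1, 1, \dotsc$, so the only point contributing to the sum is the origin itself, where $m(C,0)=2$:
\[
  \delta(C,0) = \frac{2 \cdot (2-1)}{2} = 1 .
\]
Similarly, the tacnode $y^2 = x^4$ has multiplicity sequence $2, 2, 1, \dotsc$ and hence $\delta = 1 + 1 = 2$, while a smooth branch has $\delta = 0$.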
In particular, a special kind of deformation of a decorated curve $(C,l)$ induces a smoothing of $(X,p)$: \begin{definition}[{de Jong-van Straten~\cite[Definition~4.6]{deJong-vanStraten-1998}}]\label{definition:picture-deformation} A one-parameter deformation $(\mathcal{C}, \mathcal{L})$ of $(C,l)$ is called a \emph{picture deformation} if for generic $t \in \Delta \setminus 0$ the divisor $l_t$ on $\widetilde{C}_t$ is reduced. \end{definition} \begin{proposition}[{de Jong-van Straten~\cite[Lemma~4.7]{deJong-vanStraten-1998}}] A generic smoothing of $(X,p)$ is realised by a picture deformation of its decorated curve $(C,l)$. \end{proposition} Möhring~\cite{Mohring-2004} describes a geometric way to construct a deformation of $(X,p)$ from that of $(C,l)$ via a blow-up along a special complete ideal. For this, given a coherent ideal sheaf $\mathcal{J} \subset \mathcal{O}_X$, we denote by $\Sigma(\mathcal{J})$ the complex subspace of $X$ defined by $\mathcal{J}$; i.e., $\Sigma(\mathcal{J}) = (V(\mathcal{J}), \mathcal{O}_X/\mathcal{J}|_{V(\mathcal{J})})$. Then: \begin{theorem}[{Möhring~\cite[Theorem~3.5.1]{Mohring-2004}}]\label{theorem:Mohring} Let $(\mathcal{C}, \mathcal{L}) \to \Delta$ be a one-parameter deformation of $(C,l)$ over a small disk $(\Delta,0) \subset (\mathbb{C},0)$ and let $\mathcal{J}$ be a coherent sheaf of ideals on a well-chosen neighborhood $U$ of zero in $\mathbb{C}^2 \times \Delta$ with fibers $I(C_t,l_t)$. Then the blow-up of $\mathbb{C}^2 \times \Delta$ (respectively $U$) in $\Sigma(\mathcal{J})$ is flat over $\Delta$ with fibers $X(C_t,l_t)$ for all $t \in \Delta$. \end{theorem} The Milnor fibre of a smoothing of $(X,p)$ is given as follows. First, we choose a closed Milnor ball $B(0,\epsilon)$ for the germ $(C,0)$. For a sufficiently small $t \neq 0$, a general fiber $C_t$ over $t$ will have a representative in the ball $B(0,\epsilon)$ denoted by $B_t$.
\begin{theorem}[{de Jong-van Straten~\cite[Proposition~5.1]{deJong-vanStraten-1998}}] The Milnor fibre of a smoothing of $(X,p)$ corresponding to a picture deformation $(\mathcal{C},\mathcal{L})$ of its decorated curve $(C,l)$ is diffeomorphic to the complement of the strict transform $\widetilde{C}_t$ of $C_t$ on $B_t$ blown up in the points corresponding to $l_t$. \end{theorem} By the definition of a picture deformation of $(C,l)$, the singularities of a general fiber $C_t$ for $t \neq 0$ are only ordinary multiple points. So it is easy to draw a picture of $C_t$. \begin{example}[Continued from Example~\ref{example:3-4-2-decorated-curve}]\label{example:3-4-2-picture-deformation} There are three picture deformations of the decorated curve $(C,l)$ in Example~\ref{example:3-4-2-decorated-curve}, where the red dots represent the subscheme $l_t$ for $t \neq 0$. \begin{center} \includegraphics[scale=0.5]{figure-3-4-2-picture-deformation} \end{center} The Milnor numbers of the Milnor fibers corresponding to the above picture deformations are $3$, $2$, $1$, respectively. \end{example} \subsection{Incidence matrices} A picture deformation of a sandwiched surface singularity $X(C,l)$ may be encoded as a matrix. Let $(\mathcal{C}, \mathcal{L})$ be a picture deformation of $(C,l)$. Suppose that $C=\cup_{i=1}^{s} C_i$ and $C_t = \cup_{i=1}^{s} C_{i,t}$. Denote by $\{P_1,\dotsc,P_n\}$ the images in $B_t$ of the points in the support of $l_t$. \begin{definition}[{de Jong-van Straten~\cite[p.~483]{deJong-vanStraten-1998}}] The \emph{incidence matrix} of a picture deformation $(\mathcal{C}, \mathcal{L})$ is the matrix $I(\mathcal{C}, \mathcal{L}) \in M_{s,n}(\mathbb{Z})$ whose $(i,j)$ entry is equal to the multiplicity of $P_j$ as a point of $C_{i,t}$.
\end{definition} \begin{remark}\label{remark:incidence-matrix} Since a general fiber $X(C_t,l_t)$ is given by the blow-up along the support of $l_t$, the incidence matrix records how the $(-1)$-curves on $X(C_t,l_t)$ intersect the curves $\widetilde{C}_{i,t}$. \end{remark} \begin{example}[Continued from Example~\ref{example:3-4-2-picture-deformation}]\label{example:3-4-2-incidence-matrices} Let $(X,p)$ be the cyclic quotient surface singularity of type $\frac{1}{19}(1,7)$. The incidence matrices corresponding to the picture deformations are \begin{equation*} \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ \end{bmatrix}, \begin{bmatrix} 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \\ \end{bmatrix} \end{equation*} \end{example} Every incidence matrix satisfies the following necessary conditions (de Jong--van Straten~\cite[p.~484]{deJong-vanStraten-1998}), which are induced from Definition~\ref{definition:one-parameter-deformation} and the fact that $C_t$ ($t \neq 0$) has only ordinary singularities: \begin{equation}\label{equation:incidence-matrix} \begin{aligned} &\text{$\sum_{j=1}^{n} \frac{m_{ij}(m_{ij}-1)}{2} = \delta_i$ for all $i$};\\ &\text{$\sum_{j=1}^{n} m_{ij} m_{kj} = C_i \cdot C_k$ for all $i \neq k$};\\ &\text{$\sum_{j=1}^{n} m_{ij}=l_i$ for all $i$}, \end{aligned} \end{equation} where $\delta_i$ is the $\delta$-invariant of the branch $C_i$ and $C_i \cdot C_k$ is the intersection multiplicity at $0$ of the branches $C_i$ and $C_k$. \section{Compactifications compatible with deformations}\label{section:compactification} We introduce a compactification of a sandwiched surface singularity $(X,p)$ induced from its sandwiched structure, which is ``compatible'' with its deformations given by picture deformations.
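Before proceeding, we record a mechanical check (the script is ours, not from the cited references) of the conditions~\eqref{equation:incidence-matrix} above for the three incidence matrices of Example~\ref{example:3-4-2-incidence-matrices}. As an assumption consistent with all three matrices, we take the four branches to be smooth (so $\delta_i = 0$), the decorations $l = (2,3,3,4)$ from Example~\ref{example:3-4-2-decorated-curve}, and pairwise intersection multiplicities $C_1 \cdot C_j = 1$ and $C_2 \cdot C_3 = C_2 \cdot C_4 = C_3 \cdot C_4 = 2$:

```python
# Check the necessary conditions for an incidence matrix M:
# row i sums to l_i, sum of m_ij(m_ij - 1)/2 equals delta_i, and
# rows i != k have dot product equal to C_i . C_k.
def check_incidence(M, l, delta, inter):
    s = len(M)
    for i in range(s):
        assert sum(m * (m - 1) // 2 for m in M[i]) == delta[i]
        assert sum(M[i]) == l[i]
        for k in range(i + 1, s):
            assert sum(a * b for a, b in zip(M[i], M[k])) == inter[(i, k)]
    return True

# Data for the 1/19(1,7) example (assumed as described in the text above).
l, delta = [2, 3, 3, 4], [0, 0, 0, 0]
inter = {(0, 1): 1, (0, 2): 1, (0, 3): 1, (1, 2): 2, (1, 3): 2, (2, 3): 2}
matrices = [
    [[0,0,0,0,0,1,1], [0,0,0,1,1,0,1], [0,0,1,0,1,0,1], [1,1,0,0,1,0,1]],
    [[0,0,0,0,1,1],   [0,0,1,1,0,1],   [0,1,0,1,0,1],   [1,1,1,0,0,1]],
    [[0,0,0,1,1],     [0,1,1,0,1],     [1,0,1,0,1],     [1,1,1,1,0]],
]
print(all(check_incidence(M, l, delta, inter) for M in matrices))  # True
```

All three matrices pass. Note also that, for these matrices, the number of columns minus the number of branches ($7-4$, $6-4$, $5-4$) reproduces the Milnor numbers $3$, $2$, $1$ listed in Example~\ref{example:3-4-2-picture-deformation}.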
Let us choose and fix a decorated curve $(C,l)$ of $(X,p)$. Let $C=\cup_{i=1}^{s} C_i$. Each member $C_i$ is a small piece of a plane curve in $\mathbb{C}^2$ passing through the origin $(0,0)$. Let us consider $\mathbb{C}^2$ as an affine open subset of $\mathbb{CP}^2$ (say, $z_2 \neq 0$ in $\{[z_0,z_1,z_2] \in \mathbb{CP}^2\}$). Then there is a projective plane curve $D_i \subset \mathbb{CP}^2$ such that $D_i \cap \mathbb{C}^2 = C_i$. Let $D=\cup_{i=1}^{s} D_i$. We construct a projective singular surface $(Y,p)$ from the pair $(D,l)$ in a similar way as we did for $(X,p)$ from $(C,l)$. Explicitly, we first blow up $\mathbb{CP}^2$ at $[0,0,1]$ (including its infinitely near points) so that we get the minimal embedded resolution of $D$ over $[0,0,1]$. We next blow up consecutively $l_i-m_i$ times at each branch $D_i$. Then we get the modification $W \to \mathbb{CP}^2$. Since we assume that $l_i \ge M_i+1$, the union of the exceptional components that do not meet the strict transform of $D$ forms a connected configuration of curves, which is exactly the exceptional divisor $E$ of the minimal resolution of $(X,p)$. The projective singular surface $(Y,p)$ is obtained by contracting the exceptional divisor $E$. To sum up, if $(V,E) \to (X,p)$ is the minimal resolution of $(X,p)$, then we have a diagram \begin{equation*} \begin{tikzcd} (V,E) \arrow[hookrightarrow]{r} \arrow{d} & (W,E) \arrow{d} \\ (X,p) \arrow[hookrightarrow]{r} & (Y,p) \end{tikzcd} \end{equation*} \begin{definition} The projective surface $(Y,p)$ constructed as above is called a \emph{compatible compactification} of $(X,p)$ corresponding to $(C,l)$. The pair $(D,l)$ is called the \emph{compactified decorated curve} of $(Y,p)$.
\end{definition} \subsection{Compatibility with deformations of $(X,p)$} The compactification $(Y,p)$ of $(X,p)$ is compatible with deformations of $(X,p)$: \begin{theorem}\label{theorem:extension-of-deformation} Any deformation of $(X,p)$ can be extended to a deformation of $(Y,p)$ which is locally trivial outside a small neighborhood of $p \in Y$. \end{theorem} The proof relies on the following basic fact. \begin{proposition}[cf.~{Wahl~\cite[Proposition~6.4]{Wahl-1981}}] Let $Y$ be a normal projective surface with a normal surface singularity $p \in Y$. If $H^2(Y, \sheaf{T_Y})=0$, then any deformation of the singularity $p$ may be realized by a deformation of $Y$. \end{proposition} \begin{proof}[Proof of Theorem~\ref{theorem:extension-of-deformation}] Let $E$ be the exceptional divisor of the contraction $\pi \colon (W,E) \to (Y,p)$ and let $E \cup F$ be the exceptional divisor of the sequence of blow-ups $W \to \mathbb{CP}^2$. We are going to prove that $H^2(Y, \sheaf{T_Y})=0$. For this we will show that \begin{equation}\label{equation:H2(Y,T_Y)=H2(W,T_W(-log(E)))} H^2(Y, \sheaf{T_Y})=H^2(W, \sheaf{T_W}(-\log{E})). \end{equation} Assume Equation~\eqref{equation:H2(Y,T_Y)=H2(W,T_W(-log(E)))} for the moment. There is a surjective map \begin{equation*} H^2(W, \sheaf{T_W}(-\log(E+F))) \to H^2(W, \sheaf{T_W}(-\log{E})) \to 0, \end{equation*} which follows from the exact sequence \begin{equation*} 0 \to \sheaf{T_W}(-\log(E+F)) \to \sheaf{T_W}(-\log{E}) \to \oplus_i \sheaf{N}_{F_i/W} \to 0. \end{equation*} But blow-ups and blow-downs do not change $h^2$ of logarithmic tangent sheaves; cf.~Flenner-Zaidenberg~\cite[Lemma~1.5]{Flenner-Zaidenberg-1994} for example. So we have \begin{equation*} h^2(W, \sheaf{T_W}(-\log(E+F)))=h^2(\mathbb{CP}^2, \sheaf{T_{\mathbb{CP}^2}})=0 \end{equation*} which implies that $H^2(W, \sheaf{T_W}(-\log{E}))=H^2(Y, \sheaf{T_Y})=0$, as we asserted. We now prove Equation~\eqref{equation:H2(Y,T_Y)=H2(W,T_W(-log(E)))}.
We first show that \begin{equation}\label{equation:H2(Y,T_Y)=H2(W,pi_*T_W(-log(E)))} H^2(Y, \sheaf{T_Y}) = H^2(Y, \pi_{\ast}{\sheaf{T_W}(-\log{E})}). \end{equation} From the standard exact sequence \begin{equation*} 0 \to \sheaf{T_W}(-\log{E}) \to \sheaf{T_W} \to \oplus_{i} \sheaf{N_{E_i/W}} \to 0 \end{equation*} we have an exact sequence \begin{equation*} 0 \to \pi_{\ast}{\sheaf{T_W}(-\log{E})} \to \pi_{\ast}{\sheaf{T_W}} \to \Delta \end{equation*} where $\Delta$ is supported on the singular point $p$. By restricting to the image, we have a short exact sequence \begin{equation*} 0 \to \pi_{\ast}{\sheaf{T_W}(-\log{E})} \to \pi_{\ast}{\sheaf{T_W}} \to \Delta' \to 0 \end{equation*} where $\Delta'$ is still supported on the singular point $p$. Taking the long exact sequence and using that the higher cohomology of $\Delta'$ vanishes because of its zero-dimensional support, we have \begin{equation*} H^2(Y, \pi_{\ast}{\sheaf{T_W}}) = H^2(Y, \pi_{\ast}{\sheaf{T_W}(-\log{E})}). \end{equation*} On the other hand, there is a natural isomorphism between $\pi_{\ast}{\sheaf{T_W}}$ and $\sheaf{T_Y}$ by Burns-Wahl~\cite[Proposition~1.2]{Burns-Wahl-1974}. Therefore we have Equation~\eqref{equation:H2(Y,T_Y)=H2(W,pi_*T_W(-log(E)))}. We next show that \begin{equation}\label{equation:H2(Y,pi_*T_W(-log(E)))=H2(W,T_W(-log(E)))} H^2(Y, \pi_{\ast}\sheaf{T_W}(-\log{E})) = H^2(W, \sheaf{T_W}(-\log{E})), \end{equation} which, combined with Equation~\eqref{equation:H2(Y,T_Y)=H2(W,pi_*T_W(-log(E)))}, gives Equation~\eqref{equation:H2(Y,T_Y)=H2(W,T_W(-log(E)))} as desired.
From the Leray spectral sequence, there is an exact sequence \begin{equation*} \begin{split} 0 &\to H^1(Y, \pi_{\ast}{\sheaf{T_W}(-\log{E})}) \to H^1(W, \sheaf{T_W}(-\log{E})) \to H^0(Y, R^1\pi_{\ast}{\sheaf{T_W}(-\log{E})}) \\ &\to H^2(Y, \pi_{\ast}\sheaf{T_W}(-\log{E})) \to H^2(W, \sheaf{T_W}(-\log{E})) \to H^0(Y, R^2\pi_{\ast}{\sheaf{T_W}(-\log{E})}) \end{split} \end{equation*} In the above sequence, notice that \begin{equation*} H^0(Y, R^i\pi_{\ast}{\sheaf{T_W}(-\log{E})})=H^i(V,\sheaf{T_V}(-\log{E})) \end{equation*} for $i=1,2$ by Hartshorne~\cite[III~Proposition~8.5]{Hartshorne-1977} for example. Furthermore, since the one-dimensional exceptional divisor $E$ is a deformation retract of $V$, we have \begin{equation*} H^2(V,\sheaf{T_V}(-\log{E}))=0 \end{equation*} So Equation~\eqref{equation:H2(Y,pi_*T_W(-log(E)))=H2(W,T_W(-log(E)))} follows if one can prove: \textbf{Claim A}. $H^1(W, \sheaf{T_W}(-\log{E})) \to H^1(V, \sheaf{T_V}(-\log{E}))$ is surjective. For this, let $L=\{z_2=0\}$ be the line at infinity in $W$. The local cohomology $H^2_L(\sheaf{T_W}(-\log{E}))$ fits into the exact sequence \begin{equation*} H^1(W, \sheaf{T_W}(-\log{E})) \to H^1(W \setminus L, \sheaf{T_W}(-\log{E})) \to H^2_L(\sheaf{T_W}(-\log{E})). \end{equation*} On the other hand, $V \subset W \setminus L$. Furthermore, $V$ is a deformation retract of $W \setminus L$ because $V$ is a regular neighborhood of the exceptional divisor $E$. So $H^1(W \setminus L, \sheaf{T_W}(-\log{E}))=H^1(V, \sheaf{T_V}(-\log{E}))$. Therefore we have an exact sequence \begin{equation*} H^1(W, \sheaf{T_W}(-\log{E})) \to H^1(V, \sheaf{T_V}(-\log{E})) \to H^2_L(\sheaf{T_W}(-\log{E})). \end{equation*} By the excision principle, we have $H^2_L(\sheaf{T_W}(-\log{E}))=H^2_L(\sheaf{T_W})$. So for proving the above Claim~A, it is enough to show: \textbf{Claim B}. $H^2_L(\sheaf{T_W})=0$.
Notice that \begin{equation*} H^2_L(\sheaf{T_W}) = \varinjlim \Ext^2(\sheaf{O_{nL}}, \sheaf{T_W}) \end{equation*} where the limit is taken over all positive integers $n \ge 1$. From the exact sequence \begin{equation*} 0 \to \sheaf{O_W}(-nL) \to \sheaf{O_W} \to \sheaf{O_{nL}} \to 0 \end{equation*} we have \begin{equation*} \Ext^1(\sheaf{O_W}, \sheaf{T_W}) \xrightarrow{\phi} \Ext^1(\sheaf{O_W}(-nL), \sheaf{T_W}) \to \Ext^2(\sheaf{O_{nL}}, \sheaf{T_W}) \to 0 \end{equation*} because $\Ext^2(\sheaf{O_W},\sheaf{T_W})=H^2(W, \sheaf{T_W})=0$. On the other hand, the map $\phi$ fits into the exact sequence \begin{equation*} H^1(W, \sheaf{T_W}) \xrightarrow{\phi} H^1(W, \sheaf{T_W} \otimes \sheaf{O_W}(nL)) \to H^1(W, \sheaf{T_W} \otimes \sheaf{O_{nL}}(nL)). \end{equation*} So if one can show that $H^1(W, \sheaf{T_W} \otimes \sheaf{O_{nL}}(nL))=0$ for all $n \ge 1$, then $\phi$ is surjective for all $n \ge 1$, which implies $\Ext^2(\sheaf{O_{nL}}, \sheaf{T_W})=0$ for all $n \ge 1$. So Claim~B follows. \textbf{Claim C}. $H^1(W, \sheaf{T_W} \otimes \sheaf{O_{nL}}(nL))=0$ for all $n \ge 1$. From the decomposition sequence \begin{equation*} 0 \to \sheaf{O_{nL}}(-L) \to \sheaf{O_{(n+1)L}} \to \sheaf{O_L} \to 0 \end{equation*} tensored with $\sheaf{T_W}((n+1)L)$, we have an exact sequence \begin{equation}\label{equation:surjective-map} H^1(W, \sheaf{T_W} \otimes \sheaf{O_{nL}}(nL)) \to H^1(W, \sheaf{T_W} \otimes \sheaf{O_{(n+1)L}}((n+1)L)) \to H^1(W, \sheaf{T_W} \otimes \sheaf{O_L}((n+1)L)) \end{equation} Here we first show that \begin{equation}\label{equation:H1(W,T_W*O_L(mL))=0} H^1(W, \sheaf{T_W} \otimes \sheaf{O_L}(mL))=0 \end{equation} for all $m \ge 1$: From the tangent-normal sequence \begin{equation*} 0 \to \sheaf{T_L} \to \sheaf{T_W} \otimes \sheaf{O_L} \to \sheaf{N_{L/W}} \to 0 \end{equation*} tensored with $\sheaf{O_L}(mL)$, we have an exact sequence \begin{equation*} H^1(L, \sheaf{T_L}(mL)) \to H^1(W, \sheaf{T_W} \otimes \sheaf{O_L}(mL)) \to H^1(L, \sheaf{O_L}((m+1)L)).
\end{equation*} But $H^1(L, \sheaf{T_L}(mL))=H^1(L, \sheaf{O_L}((m+1)L))=0$ because $L \cong \mathbb{CP}^1$ and $L \cdot L = 1$; indeed $\sheaf{T_L}(mL) \cong \sheaf{O_{\mathbb{CP}^1}}(m+2)$ and $\sheaf{O_L}((m+1)L) \cong \sheaf{O_{\mathbb{CP}^1}}(m+1)$, and line bundles of non-negative degree on $\mathbb{CP}^1$ have no first cohomology. So we have $H^1(W, \sheaf{T_W} \otimes \sheaf{O_L}(mL))=0$ for all $m \ge 1$. Then we always have a surjective map \begin{equation*} H^1(W, \sheaf{T_W} \otimes \sheaf{O_{nL}}(nL)) \to H^1(W, \sheaf{T_W} \otimes \sheaf{O_{(n+1)L}}((n+1)L)) \to 0 \end{equation*} for all $n \ge 1$ from the above exact sequence in \eqref{equation:surjective-map}. On the other hand, for $n=1$, we have $H^1(W, \sheaf{T_W} \otimes \sheaf{O_L}(L))=0$ as proven above in \eqref{equation:H1(W,T_W*O_L(mL))=0}. Therefore we have $H^1(W, \sheaf{T_W} \otimes \sheaf{O_{nL}}(nL))=0$ for all $n \ge 1$ by induction on $n$, as asserted in Claim~C. \end{proof} \subsection{Deformations of compatible compactifications} Let $(X,p)$ be a sandwiched surface singularity and let $(C,l)$ be its decorated curve. Let $(Y,p)$ be a compatible compactification of $(X,p)$ corresponding to $(C,l)$ and let $(D,l)$ be the compactified decorated curve of $(Y,p)$. Just as every one-parameter deformation of $(X,p)$ is induced from that of $(C,l)$ via a blow-up, there is the same relationship between one-parameter deformations of $(Y,p)$ and those of $(D,l)$. First, one can define a one-parameter deformation of $(D,l)$ which is compatible with that of $(C,l)$.
\begin{definition} A \emph{one-parameter deformation} $(\mathcal{D}, \mathcal{L})$ of $(D,l)$ over a small disk $\Delta$ centered at the origin $0$ consists of \begin{enumerate}[(1)] \item a $\delta$-constant deformation $\mathcal{D} \to \Delta$ of $D$ over $[0,0,1]$, that is, $\delta(D_t)$ over $[0,0,1]$ is constant for all $t \in \Delta$, \item a flat deformation $\mathcal{L} \subset \mathcal{D} \times \Delta$ over $\Delta$ of the scheme $l$ such that \item $\mathcal{M} \subset \mathcal{L}$, where the relative total multiplicity scheme $\mathcal{M}$ of $\mathcal{D} \times \Delta \to \mathcal{D}$ is defined as the closure of $\bigcup_{t \in \Delta\setminus\{0\}} m(C_t)$. \end{enumerate} \end{definition} \begin{theorem}\label{theorem:compactified-picture-deformation} Let $\mathcal{Y} \to \Delta$ be a one-parameter deformation of $(Y,0)$. Then there is a one-parameter deformation $(\mathcal{D}, \mathcal{L}) \to \Delta$ of $(D,l)$ such that $\mathcal{Y}$ is equal to the blow-up of $\mathbb{CP}^2 \times \Delta$ along $\Sigma(\mathcal{J})$, where $\mathcal{J}$ is a coherent sheaf of ideals in $\mathbb{CP}^2 \times \Delta$ with fibers $I(D_t,l_t)$. \end{theorem} \begin{proof} Let $\mathcal{X} \to \Delta$ be the deformation of $(X,p)$ that is the blow-down of $\mathcal{Y} \to \Delta$. There is a one-parameter deformation $(\mathcal{C},\mathcal{L})$ of the decorated curve $(C,l)$ corresponding to $\mathcal{X} \to \Delta$ such that $\mathcal{X}$ is equal to the blow-up of $\mathbb{C}^2 \times \Delta$ along $\Sigma(\mathcal{J})$. For each $t \in \Delta$, the fiber $C_t$ consists of plane curves in $\mathbb{C}^2$. So for each $t \in \Delta$ there are projective curves $D_t$ in $\mathbb{CP}^2$ such that $D_t \cap \mathbb{C}^2=C_t$. These fit together into a one-parameter deformation $(\mathcal{D}, \mathcal{L})$ of $(D,l)$ with $D_t \cap \mathbb{C}^2 = C_t$.
Since $I(D_t,l_t)=I(C_t,l_t)$, the deformation $\mathcal{Y}$ is obtained from $\mathbb{CP}^2 \times \Delta$ by blowing up along $\Sigma(\mathcal{J})$. \end{proof} As in Remark~\ref{remark:incidence-matrix}, one can recreate the incidence matrix corresponding to a one-parameter deformation of $(X,p)$ (or that of $(C,l)$) from how the branches of the strict transform $\widetilde{D}_t$ in $Y(D_t,l_t)$ ($t \neq 0$) intersect with the $(-1)$-curves in $Y(D_t,l_t)$. \section{Kollár conjecture, P-modifications, and P-resolutions}\label{section:P-modifications} We recall basics on the Kollár conjecture and related topics. We refer to Kollár~\cite{Kollar-1991}, Kollár-Shepherd-Barron~\cite{KSB-1988}, and Behnke-Christophersen~\cite{Behnke-Christophersen-1994} for details. \subsection{Kollár conjecture} Kollár and Shepherd-Barron~\cite{KSB-1988} described the irreducible components of the reduced miniversal deformation space of a quotient singularity $(X,p)$ in terms of certain partial resolutions of $(X,p)$. Then Kollár~\cite{Kollar-1991} proposed the following conjecture as a generalization. \begin{conjecture}[{Kollár~\cite[6.2.1]{Kollar-1991}}]\label{conjecture:Kollar-conjecture-original} Let $(X,p)$ be a rational surface singularity and let $\mathcal{X}$ be the total space of a one-parameter smoothing of $(X,p)$. Then the canonical algebra \begin{equation*} \sum_{n=0}^{\infty} \mathcal{O}_{\mathcal{X}}(n K_{\mathcal{X}}) \end{equation*} is a finitely generated $\mathcal{O}_{\mathcal{X}}$-algebra. \end{conjecture} If the conjecture is true, then the Proj of the above canonical algebra gives a specific partial modification $f \colon U \to X$. \begin{definition}[{Kollár~\cite[Definition~6.2.10']{Kollar-1991}}] Let $(X,p)$ be a rational surface singularity and let $f \colon U \to X$ be a proper modification.
$U$ is called a \emph{P-modification} if \begin{enumerate}[(i)] \item $R^1f_{\ast}{\mathcal{O}_U}=0$, \item $K_U$ is $f$-ample, \item $U$ has a smoothing which induces a $\mathbb{Q}$-Gorenstein smoothing of each singularity of $U$. \end{enumerate} \end{definition} Then Stevens~\cite{Stevens-2003} reformulated Conjecture~\ref{conjecture:Kollar-conjecture-original} using P-modifications as follows. \begin{conjecture}[Kollár Conjecture; cf.~{Stevens~\cite[p.114]{Stevens-2003}}]\label{conjecture:Kollar-conjecture-Stevens} Let $\pi \colon \mathcal{X} \to S$ be the total family over a smoothing component of the versal deformation of a rational surface singularity $X$. Then there exists a proper modification $\mathcal{U}$ of $\mathcal{X}$ with Stein factorisation $\mathcal{U} \xrightarrow{\pi'} T \xrightarrow{\sigma} S$, such that some multiple of $K_{\mathcal{U}}$ is Cartier, and $K_{\mathcal{U}}$ is $\pi'$-ample; $\pi'$ is flat, and every fibre $\mathcal{U}_t$ over $t \in \sigma^{-1}(0)$ is a P-modification of $X$, and the germ $(T, t)$ is a component of the versal deformation of $\mathcal{U}_t$. \end{conjecture} The conjecture is true in several cases; see Stevens~\cite[\S14]{Stevens-2003}. In particular, Kollár and Shepherd-Barron~\cite{KSB-1988} showed that the conjecture holds for quotient surface singularities. \subsection{T-singularities} In the case of cyclic quotient surface singularities, the corresponding P-modifications are normal and have only \emph{T-singularities} as singularities, which are special cyclic quotient surface singularities. We briefly recall basics on T-singularities. \begin{definition}[T-singularity] A \emph{T-singularity} is a quotient surface singularity that admits a $\mathbb{Q}$-Gorenstein one-parameter smoothing. \end{definition} Kollár and Shepherd-Barron~\cite{KSB-1988} determined all T-singularities.
\begin{proposition}[{KSB~\cite[Proposition~3.10]{KSB-1988}}] A T-singularity is either a rational double point or a cyclic quotient surface singularity $\frac{1}{dn^2}(1, dna-1)$ with $d \ge 1$, $n \ge 2$, $1 \le a < n$, and $(n,a)=1$. \end{proposition} Due essentially to Wahl~\cite{Wahl-1981}, a T-singularity may be recognized from its minimal resolution: \begin{proposition}\hfill \label{proposition:T-algorithm} \begin{enumerate}[(i)] \item The singularities $[4]$ and $[3,2,\dotsc,2,3]$ are of class T. \item If $[b_1,\dotsc,b_r]$ is a T-singularity, then $[b_1+1,b_2,\dotsc,b_r,2]$ and $[2,b_1,\dotsc,b_{r-1},b_r+1]$ are also T-singularities. \item Every singularity of class T that is not a rational double point can be obtained by starting with one of the singularities described in (i) and iterating the steps described in (ii). \end{enumerate} \end{proposition} Among T-singularities, the following play a central role. \begin{definition}[Wahl singularity] A \emph{Wahl singularity} is a cyclic quotient surface singularity $\frac{1}{n^2}(1, na-1)$ with $n \ge 2$, $1 \le a < n$, and $(n,a)=1$. \end{definition} A Wahl singularity is a T-singularity that can be obtained by iterating the steps described in Proposition~\ref{proposition:T-algorithm} (ii) starting from $[4]$. Riemenschneider's dot diagram for the minimal resolution of a Wahl singularity is symmetric about a special dot in the diagram, which is called the \emph{$\delta$-dot}. We briefly explain this phenomenon. Riemenschneider's dot diagram for $[4]$ is \begin{equation*} \begin{tikzpicture}[scale=0.5] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] (00) at (0,0) [] {}; \node[bullet] (10) at (1,0) [labelAbove=$\delta$] {}; \node[bullet] (20) at (2,0) [] {}; \end{tikzpicture} \end{equation*} which is symmetric about the dot decorated by $\delta$.
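The continued-fraction descriptions above are easy to check by machine. The following Python sketch is our own illustration (the helper names \texttt{hj\_expand} and \texttt{hj\_value} are not from the literature): it computes Hirzebruch--Jung continued fractions and verifies that the basic T-singularities of class T and a sample Wahl singularity have the expected expansions.

```python
from fractions import Fraction

def hj_expand(p, q):
    """Hirzebruch-Jung continued fraction of p/q:
    p/q = e_1 - 1/(e_2 - 1/(... - 1/e_s)), all e_j >= 2."""
    chain = []
    while q > 0:
        e = -(-p // q)           # ceiling of p/q
        chain.append(e)
        p, q = q, e * q - p
    return chain

def hj_value(chain):
    """Inverse operation: evaluate [e_1, ..., e_s] as a Fraction."""
    x = Fraction(chain[-1])
    for e in reversed(chain[:-1]):
        x = e - 1 / x
    return x

# Basic T-singularities (n = 2, a = 1): 1/(4d)(1, 2d-1) is [4] for d = 1
# and [3,2,...,2,3] (with d-2 twos) for d >= 2.
assert hj_expand(4, 1) == [4]
for d in range(2, 7):
    assert hj_expand(4 * d, 2 * d - 1) == [3] + [2] * (d - 2) + [3]

# A Wahl singularity (d = 1 case): 1/49(1, 34) has 49/34 = [2, 2, 5, 4].
assert hj_expand(49, 34) == [2, 2, 5, 4]
assert hj_value([2, 2, 5, 4]) == Fraction(49, 34)
```

Note that iterating step (ii) of Proposition~\ref{proposition:T-algorithm} on the output chains stays within the fractions $\frac{1}{dn^2}(1,dna-1)$, in accordance with the proposition.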
The step (ii) in Proposition~\ref{proposition:T-algorithm} above can be described in terms of Riemenschneider's dot diagram as follows: The procedure from $[b_1,\dotsc,b_r]$ to $[b_1+1,b_2,\dotsc,b_r,2]$ amounts to adding a dot to the left of the first dot of the top row and a dot under the last dot of the final row. Also the procedure from $[b_1,\dotsc,b_r]$ to $[2,b_1,\dotsc,b_{r-1},b_r+1]$ can be understood as adding a dot to the right of the last dot of the final row and adding a dot over the first dot of the first row in the given Riemenschneider's dot diagram. Therefore Riemenschneider's dot diagram of a Wahl singularity remains symmetric about the dot decorated by $\delta$. \begin{example} Riemenschneider's dot diagram for the Wahl singularity $\frac{1}{49}(1,34)=[2,2,5,4]$ is symmetric about the dot decorated by $\delta$: \begin{equation*} \begin{tikzpicture}[scale=0.5] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] at (0,3) [labelAbove={}] {}; \node[bullet] at (0,2) [] {}; \node[bullet] at (0,1) [] {}; \node[bullet] at (1,1) [] {}; \node[bullet] at (2,1) [labelAbove={$\delta$}] {}; \node[bullet] at (3,1) [] {}; \node[bullet] at (3,0) [] {}; \node[bullet] at (4,0) [] {}; \node[bullet] at (5,0) [] {}; \draw [-] (0,3)--(0,2)--(0,1)--(1,1); \draw [-] (3,1)--(3,0)--(4,0)--(5,0); \end{tikzpicture} \end{equation*} \end{example} \begin{definition}[Initial curve] The \emph{initial curve} of a Wahl singularity is the exceptional curve of its minimal resolution that has the $\delta$-dot in its Riemenschneider dot diagram. \end{definition} \begin{example} The initial curve of the Wahl singularity $[2,2,5,4]$ is the exceptional curve $C$ with $C \cdot C = -5$.
\end{example} \subsection{P-resolutions and M-resolutions} Kollár and Shepherd-Barron~\cite{KSB-1988} showed that the canonical model $\mathcal{U}$ of a one-parameter smoothing $\mathcal{X}$ of a quotient surface singularity $(X,p)$ gives a small modification whose central fiber $U$ is a specific normal P-modification of $X$ that admits only T-singularities as its singularities, called a \emph{P-resolution}. \begin{definition}[P-resolution; cf.~{Kollár-Mori~\cite{KM-1992}}] Let $(X,p)$ be any rational surface singularity. A \emph{P-resolution} of $(X,p)$ is a proper birational morphism $f \colon U \to X$ such that $U$ is normal with only T-singularities and $K_{U/X}$ is $f$-ample. \end{definition} Every P-resolution of a cyclic quotient surface singularity is dominated by the so-called \emph{maximal resolution} of the singularity (KSB~\cite[Lemma~3.14]{KSB-1988}), whose dual graph is a linear chain. \begin{example}[Continued from Example~\ref{example:3-4-2}]\label{example:3-4-2-P-resolutions} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. There are three P-resolutions of $(X,p)$: \begin{equation*} 3-4-[2], \quad 3-[4]-2, \quad [4]-1-[5,2]. \end{equation*} \end{example} More generally, one can restrict to partial resolutions whose only singularities are Wahl singularities. \begin{definition}[M-resolution; cf.~{Behnke-Christophersen~\cite[p.882]{Behnke-Christophersen-1994}}] \label{definition:M-resolution} An \emph{M-resolution} of a rational surface singularity $(X,p)$ is a proper birational morphism $f \colon U' \to X$ such that \begin{enumerate} \item $U'$ has only Wahl singularities. \item $K_{U'}$ is nef relative to $f$, i.e., $K_{U'} \cdot E \ge 0$ for all $f$-exceptional curves $E$.
\end{enumerate} \end{definition} \begin{proposition}[{Behnke-Christophersen~\cite[3.1.4]{Behnke-Christophersen-1994}}] For each P-resolution $U \to X$ there is an M-resolution $U' \to X$ that factors through a surjective morphism $g \colon U' \to U$ with $K_{U'} = g^{\ast}K_U$. \end{proposition} For a given P-resolution, we call the corresponding M-resolution the \emph{(crepant) M-resolution} of the P-resolution. We briefly recall how to construct the crepant M-resolution of a P-resolution. First, each T-singularity has a special crepant M-resolution. Let $U=\frac{1}{dn^2}(1,dna-1)$ be a T-singularity with $d \ge 2$. The \emph{crepant} M-resolution $U' \to U$ is the partial resolution defined as follows: $U'$ has $d-1$ exceptional components $C_i \cong \mathbb{CP}^1$ ($i=1,\dotsc,d-1$) and $d$ singular points $P_i$ of type $\frac{1}{n^2}(1, na-1)$ as described in the following figure: \begin{center} \includegraphics{figure-M-resolution.pdf} \end{center} Explicitly, the proper transforms of the $C_i$'s in the minimal resolution $\widetilde{U}'$ of $U'$ are $(-1)$-curves and the minimal resolution $\widetilde{U}'$ is given by \begin{equation*} b_1-\dotsb-b_r-1-b_1-\dotsb-b_r-1-\dotsb-1-b_1-\dotsb-b_r \end{equation*} where $[b_1,\dotsc,b_r]=\frac{n^2}{na-1}$. Then the M-resolution of a given P-resolution in the above proposition is obtained by taking the crepant M-resolution of each of its T-singularities. Finally, we introduce the P-resolutions appearing in the next section. \begin{definition}[Extremal P-resolution] A P-resolution $f \colon Z^+ \to Y$ of a cyclic quotient surface singularity $(Y,0)$ is called an \emph{extremal P-resolution} if $f^{-1}(0)$ is a smooth rational curve $C$ and $Z^+$ has only Wahl singularities. \end{definition} Notice that an extremal P-resolution may have at most two Wahl singularities.
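As a consistency check, the chain displayed above for $\widetilde{U}'$ has the same continued-fraction value as the T-singularity $\frac{1}{dn^2}(1,dna-1)$ itself, since blowing down the interior $(-1)$-curves does not change the value of a negative continued fraction. The following Python sketch is our own illustration (the helper names are hypothetical):

```python
from fractions import Fraction

def hj_value(chain):
    """Evaluate the negative continued fraction [c_1, ..., c_k]."""
    x = Fraction(chain[-1])
    for c in reversed(chain[:-1]):
        x = c - 1 / x
    return x

def crepant_chain(wahl_chain, d):
    """d copies of the Wahl chain [b_1,...,b_r], separated by (-1)-curves,
    as in the minimal resolution of U'."""
    out = []
    for i in range(d):
        if i > 0:
            out.append(1)
        out.extend(wahl_chain)
    return out

# [4] = 1/4(1,1), i.e. (n, a) = (2, 1); T-singularity 1/(4d)(1, 2d-1):
for d in range(1, 6):
    assert hj_value(crepant_chain([4], d)) == Fraction(4 * d, 2 * d - 1)

# [5,2] = 1/9(1,2), i.e. (n, a) = (3, 1); T-singularity 1/(9d)(1, 3d-1):
for d in range(1, 6):
    assert hj_value(crepant_chain([5, 2], d)) == Fraction(9 * d, 3 * d - 1)
```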
\section{Semistable minimal model program}\label{section:semistable-MMP} We briefly recall the semistable minimal model program for one-parameter families of surfaces. We refer to Kollár-Mori~\cite{KM-1992} and Hacking-Tevelev-Urzúa~\cite{HTU-2017}. \subsection{Semistable extremal neighborhoods} A three-dimensional \emph{extremal neighborhood} is a birational morphism \begin{equation*} F \colon (C \subset \mathcal{Z}) \to (Q \in \mathcal{Y}) \end{equation*} from a germ of a 3-fold $\mathcal{Z}$ along a proper reduced irreducible curve $C$ to a germ of a 3-fold $\mathcal{Y}$ along $Q \in \mathcal{Y}$ such that $F_{\ast}(\mathcal{O}_{\mathcal{Z}})=\mathcal{O}_{\mathcal{Y}}$, $F^{-1}(Q)=C$ (as sets), and $K_{\mathcal{Z}} \cdot C < 0$. Then $C \cong \mathbb{CP}^1$. Let $E_Z$ be a general member of $\lvert -K_{\mathcal{Z}} \rvert$ and let $E_Y = F(E_Z) \in \lvert - K_{\mathcal{Y}} \rvert$. The extremal neighborhood $F \colon (C \subset \mathcal{Z}) \to (Q \in \mathcal{Y})$ is called \emph{semistable} if $Q \in E_Y$ is a Du Val singularity of type $A$; Kollár-Mori~\cite[p.541]{KM-1992}. Semistable extremal neighborhoods are classified into two types as follows. A semistable extremal neighborhood $F$ is of type \emph{k1A} or \emph{k2A} if the number of singular points of $E_Z$ equals one or two, respectively; Kollár-Mori~\cite[p.542]{KM-1992}. In this paper we are interested in the particular case in which a semistable extremal neighborhood arises as the total space of a certain flat family of singular surfaces. Furthermore, we assume that the second Betti number $b_2(Z_t)$ of a general fiber of $\mathcal{Z} \to \Delta$ is equal to one: \begin{definition}[{cf. HTU~\cite[Proposition~2.1]{HTU-2017}, Urzúa~\cite[Definition~2.5]{Urzua-2016-1}}] Let $(Q \in Y)$ be a two-dimensional germ of a cyclic quotient surface singularity.
Let $f \colon Z \to Y$ be a partial resolution of $Q \in Y$ such that $f^{-1}(Q)=C$ is a smooth rational curve with one (or two) Wahl singularity(ies) of $Z$ on it. Suppose that $K_Z \cdot C < 0$. Let $(Z \subset \mathcal{Z}) \to (0 \in \Delta)$ be a $\mathbb{Q}$-Gorenstein smoothing of $Z$ over a small disk $\Delta$. Let $(Y \subset \mathcal{Y}) \to \Delta$ be the corresponding blow-down deformation of $Y$. The induced birational morphism $(C \subset \mathcal{Z}) \to (Q \in \mathcal{Y})$ is called an \emph{extremal neighborhood of type mk1A (or mk2A)}. \end{definition} An extremal neighborhood is said to be \emph{flipping} if the exceptional set of $F$ is $C$. If it is not flipping, then the exceptional set of $F$ is of dimension $2$. In this case we call it a \emph{divisorial} extremal neighborhood. For any flipping extremal neighborhood there always exists a proper birational modification called the \emph{flip}: \begin{proposition}[Kollár-Mori~{\cite[\S11 and Theorem~13.5]{KM-1992}}] Suppose that $(C \subset \mathcal{Z}) \to (Q \in \mathcal{Y})$ is a flipping extremal neighborhood of type mk1A or mk2A. Let $(C \subset Z) \to (Q \in Y)$ be the contraction of $C$ between the central fibers $Z$ and $Y$. Then there exists an extremal P-resolution $(C^+ \subset Z^+) \to (Q \in Y)$ such that the flip $(C^+ \subset \mathcal{Z}^+) \to (Q \in \mathcal{Y})$ is obtained by the blow-down deformation of a $\mathbb{Q}$-Gorenstein smoothing of $Z^+$.
That is, we have the commutative diagram \begin{equation*} \begin{tikzcd} (C \subset \mathcal{Z}) \arrow[rr, dashed, "\textrm{flip}"] \arrow[dr] \arrow[ddr] & & (C^+ \subset \mathcal{Z}^+) \arrow[dl] \arrow[ddl] \\ & (Q \in \mathcal{Y}) \arrow[d] & \\ & (0 \in \Delta) & \end{tikzcd} \end{equation*} which restricts to the central fibers as follows: \begin{equation*} \begin{tikzcd} (C \subset Z) \arrow[rr, dashed] \arrow[dr] & & (C^+ \subset Z^+) \arrow[dl] \\ & (Q \in Y) & \end{tikzcd} \end{equation*} \end{proposition} On the other hand, for a divisorial mk1A or mk2A, the birational morphism $F \colon \mathcal{Z} \to \mathcal{Y}$ is called a \emph{divisorial contraction}, and $F$ is induced from blow-downs between the smooth fibers. Explicitly: \begin{proposition}[{see e.g. Urzúa~\cite[Proposition~2.8]{Urzua-2016-1}}]\label{proposition:divisorial-contraction} If an mk1A or mk2A is divisorial, then $(Q \in Y)$ is a Wahl singularity. In addition, the divisorial contraction $F \colon \mathcal{Z} \to \mathcal{Y}$ induces the blowing-down of a $(-1)$-curve between the smooth fibers of $\mathcal{Z} \to \mathbb{D}$ and $\mathcal{Y} \to \mathbb{D}$. \end{proposition} \subsection{Numerical data for semistable extremal neighborhoods} We will use the notation of Urzúa~\cite[Subsection 2.4]{Urzua-2016-1} for the extremal neighborhoods mk1A and mk2A. \subsubsection{$Z \to Y$ for mk1A} Let $\widetilde{Z}$ be the minimal resolution of the Wahl singularity in $Z$ with the exceptional curves $E_1, \dotsc, E_s$ such that $E_j^2 = -e_j$ ($j=1,\dotsc,s$), where the Wahl singularity is given by $\frac{m^2}{ma-1}=[e_1, \dotsc, e_s]$. Since $K_Z \cdot C < 0$ and $C \cdot C < 0$, the strict transform of $C$ in $\widetilde{Z}$ is a $(-1)$-curve intersecting only one component $E_i$ transversally at one point. We denote this data by \begin{equation*} [e_1, \dotsc, \overline{e_i}, \dotsc, e_s].
\end{equation*} If $Q \in Y$ is the cyclic quotient surface singularity $\frac{1}{\Delta}(1, \Omega)$, then \begin{equation*} \frac{\Delta}{\Omega} =[e_1, \dotsc, e_{i-1}, e_i-1, e_{i+1}, \dotsc, e_s]. \end{equation*} We can calculate $\Delta$ and $\Omega$ by sequences of integers defined recursively from the continued fraction $[e_1, \dotsc, e_s]$. First, we define the sequence $\{\beta_j\}$ with \begin{equation*} \beta_{s+1}(=0) < \beta_s(=1) < \dotsb < \beta_1(=ma-1) < \beta_0(=m^2) \end{equation*} by $\beta_{j-1}=e_j \beta_j-\beta_{j+1}$, starting from $\beta_{s+1}=0$ and $\beta_s=1$. In this way we have \begin{equation*} \frac{\beta_{j-1}}{\beta_j} = [e_j,\dotsc,e_s]. \end{equation*} On the other hand, we define similarly sequences of integers $\{\alpha_j\}$ and $\{\gamma_j\}$ by $\alpha_{j+1}=e_j\alpha_j - \alpha_{j-1}$ and $\gamma_{j+1}=e_j\gamma_j - \gamma_{j-1}$ starting with $\alpha_0=0$, $\alpha_1=1$ and $\gamma_0=-1$, $\gamma_1=0$. Then we have \begin{equation*} \alpha_0(=0) < \alpha_1(=1) < \dotsb < \alpha_s(=m(m-a)-1) < \alpha_{s+1}(=m^2). \end{equation*} In particular, for $j \ge 2$, \begin{equation*} \frac{\alpha_j}{\gamma_j} = [e_1,\dotsc,e_{j-1}]. \end{equation*} Then \begin{equation*} \text{$\Delta = m^2 - \beta_i\alpha_i$ and $\Omega = ma-1-\gamma_i\beta_i$}. \end{equation*} Furthermore, if we define \begin{equation*} \delta := \frac{\beta_i+\alpha_i}{m} \end{equation*} then we have $K_Z \cdot C = -\delta/m < 0$ and $C \cdot C = -\Delta/m^2 < 0$. \subsubsection{$Z \to Y$ for mk2A} We have two Wahl singularities $m_1^2/(m_1a_1-1)$ and $m_2^2/(m_2a_2-1)$ on $C \subset Z$. Let $m_1^2/(m_1a_1-1)=[e_1, \dotsc, e_{s_1}]$ and $m_2^2/(m_2a_2-1)=[f_1, \dotsc, f_{s_2}]$ and let $E_1, \dotsc, E_{s_1}$ and $F_1, \dotsc, F_{s_2}$ be the corresponding exceptional divisors with $E_i^2=-e_i$ and $F_j^2=-f_j$ for all $i$ and $j$.
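The mk1A recursions and the formulas for $\Delta$, $\Omega$, and $\delta$ can be checked numerically. The following Python sketch is our own illustration (the function names are hypothetical); it verifies the data for the Wahl singularity $25/14=[2,5,3]$ (so $(m,a)=(5,3)$) with the $(-1)$-curve attached to $E_2$.

```python
from fractions import Fraction

def hj_value(chain):
    """Evaluate the negative continued fraction [c_1, ..., c_k]."""
    x = Fraction(chain[-1])
    for c in reversed(chain[:-1]):
        x = c - 1 / x
    return x

def mk1A_data(e, i, m, a):
    """(Delta, Omega, delta) for the mk1A [e_1,...,bar(e_i),...,e_s],
    where m^2/(ma-1) = [e_1,...,e_s] and i is 1-based."""
    s = len(e)
    beta = [0] * (s + 2)            # beta[s+1] = 0, beta[s] = 1
    beta[s] = 1
    for j in range(s, 0, -1):       # beta_{j-1} = e_j beta_j - beta_{j+1}
        beta[j - 1] = e[j - 1] * beta[j] - beta[j + 1]
    alpha, gamma = [0, 1], [-1, 0]  # forward recursions
    for j in range(1, s + 1):
        alpha.append(e[j - 1] * alpha[j] - alpha[j - 1])
        gamma.append(e[j - 1] * gamma[j] - gamma[j - 1])
    Delta = m * m - beta[i] * alpha[i]
    Omega = m * a - 1 - gamma[i] * beta[i]
    delta = Fraction(beta[i] + alpha[i], m)
    return Delta, Omega, delta

# 25/14 = [2,5,3] is the Wahl singularity with (m, a) = (5, 3).
assert hj_value([2, 5, 3]) == Fraction(25, 14)
Delta, Omega, delta = mk1A_data([2, 5, 3], 2, 5, 3)
assert (Delta, Omega, delta) == (19, 11, 1)
# (Q in Y) = 1/Delta(1, Omega): directly, [2, 5-1, 3] = 19/11.
assert hj_value([2, 4, 3]) == Fraction(19, 11)
```

The computed $\delta$ is an integer, and $K_Z \cdot C = -\delta/m = -1/5$ for this example.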
As before, the strict transform of $C$ in the minimal resolution $\widetilde{Z}$ of $Z$ is a $(-1)$-curve intersecting exactly one $E_i$ and one $F_j$, each at one point. These two exceptional curves must be the ends of the respective exceptional chains because the minimal resolution of $Q \in Y$ is a linear chain of $\mathbb{CP}^1$'s. We assume that the $(-1)$-curve intersects $E_1$ and $F_1$. We denote the data for an mk2A by \begin{equation*} [(m_2,a_2)]-1-[(m_1,a_1)] = [f_{s_2}, \dotsc, f_1]-1-[e_1, \dotsc, e_{s_1}]. \end{equation*} Then $(Q \in Y)$ is given by \begin{equation*} \frac{\Delta}{\Omega} = [f_{s_2}, \dotsc, f_1, 1, e_1, \dotsc, e_{s_1}]. \end{equation*} Furthermore, if we define \begin{equation*} \delta := m_1a_2 + m_2a_1 - m_1m_2, \end{equation*} then \begin{equation*} \Delta = m_1^2 + m_2^2 - \delta m_1 m_2, \quad \Omega = (m_2-\delta m_1)(m_2-a_2) + m_1a_1 -1 \end{equation*} and we have $K_Z \cdot C = - \delta/(m_1m_2) < 0$ and $C \cdot C = - \Delta/(m_1^2m_2^2) < 0$. \subsubsection{$Z^+ \to Y$ for an extremal P-resolution} Finally, the numerical description of extremal P-resolutions is given as follows. Let $m_1'^2/(m_1'a_1'-1) = [e_1, \dotsc, e_{s_1}]$ and $m_2'^2/(m_2'a_2'-1)=[f_1, \dotsc, f_{s_2}]$ be two Wahl singularities. Here we allow $m_i'=a_i'=1$; that is, one (or both) of the Wahl singularity points may actually be smooth. Similar to an mk2A, we denote an extremal P-resolution by \begin{equation*} [f_{s_2}, \dotsc, f_1] - c - [e_1, \dotsc, e_{s_1}] \end{equation*} where $-c$ is the self-intersection number of the strict transform of $C^+$ in the minimal resolution of $Z^+$. As before, $(Q \in Y)$ is defined by \begin{equation*} \frac{\Delta}{\Omega} = [f_{s_2}, \dotsc, f_1,c,e_1, \dotsc, e_{s_1}]. \end{equation*} Define \begin{equation*} \delta := cm_1'm_2'-m_1'a_2'-m_2'a_1'.
\end{equation*} Then \begin{equation*} \Delta = m_1'^2 + m_2'^2 + \delta m_1'm_2' \end{equation*} and, when both $m_i' \neq 1$, \begin{equation*} \Omega = -m_1'^2(c-1) + (m_2' + \delta m_1')(m_2' - a_2') + m_1'a_1'-1. \end{equation*} Finally, we have $K_{Z^+} \cdot C^+ = \delta/(m_1'm_2') > 0$ and $C^+ \cdot C^+ = - \Delta/(m_1'^2m_2'^2) < 0$. \subsection{Determining the types of semistable extremal neighborhoods} From the numerical data of a semistable extremal neighborhood of type mk1A or mk2A, one can determine whether it is flipping or divisorial, and one can also calculate the flip if it is of flipping type. We briefly summarize HTU~\cite[Subsection~3.3]{HTU-2017}. See also Urzúa~\cite[Section~2]{Urzua-2016-1}. Let $\mathcal{Z}_1 = [(m_2,a_2)]-1-[(m_1,a_1)]$ be an mk2A with $m_2 > m_1$. Here we also allow the special case $m_1=a_1=1$, which corresponds to an mk1A. We have $\delta=m_2a_1+m_1a_2-m_1m_2$ as before. Since $K_Z \cdot C = -\delta/(m_1m_2) < 0$, we have $\delta \ge 1$. Assume that \begin{equation*} \delta m_1 - m_2 \le 0 \end{equation*} from now on. Such an mk2A is called an \emph{initial} mk2A in HTU~\cite{HTU-2017}. We first define two sequences $d(i)$ and $c(i)$ (called the \emph{Mori recursions}) as follows: for $i \ge 2$, \begin{align*} &d(1)=m_1, \quad d(2)=m_2, \quad d(i+1)=\delta d(i)-d(i-1), \\ &c(1)=a_1, \quad c(2)=m_2-a_2, \quad c(i+1)=\delta c(i)-c(i-1). \end{align*} \begin{definition}[Mori sequence for $\delta > 1$] If $\delta > 1$, then a \emph{Mori sequence} for an initial mk2A $\mathcal{Z}_1$ is a sequence of mk2A's $\mathcal{Z}_i$ with Wahl singularities defined by the pairs $(m_i, a_i)$ and $(m_{i+1}, a_{i+1})$ where \begin{equation*} m_{i+1}=d(i+1), \quad a_{i+1}=d(i+1)-c(i+1), \quad m_i=d(i), \quad a_i=c(i). \end{equation*} Notice that $m_{i+1} > m_i$.
\end{definition} \begin{definition}[Mori sequence for $\delta=1$] If $\delta=1$, then a \emph{Mori sequence} for an initial mk2A $\mathcal{Z}_1$ consists of just one more mk2A $\mathcal{Z}_2$ defined by the pairs $(m_2, a_2)$ and $(m_3, a_3)$, where $m_2=d(2)$, $a_2=c(2)$, and $m_3=d(2)-d(1)$, $a_3=d(2)-d(1)+c(1)-c(2)$. \end{definition} \begin{proposition}[{HTU~\cite[Proposition~3.15, Theorem~3.20]{HTU-2017}}] If $\delta m_1 - m_2 < 0$, then the mk2A's $\mathcal{Z}_i$ ($i \ge 1$) in the Mori sequence of an initial mk2A $\mathcal{Z}_1$ are of flipping type, sharing the same $\delta$, $\Omega$, $\Delta$ associated to $\mathcal{Z}_1$, and, after flipping each $\mathcal{Z}_i$, they have the same extremal P-resolution $Z^+$ admitting two Wahl singularities defined by the pairs $(m_1', a_1')$ and $(m_2', a_2')$ where $m_2'=m_1$, $a_2'=m_1-a_1$, and $m_1'=m_2-\delta m_1$, $a_1' \equiv m_2-a_2 -\delta a_1 \pmod{m_1'}$. Here, in case of $m_1=a_1=1$, we set $a_2'=1$. \end{proposition} \begin{proposition}[{HTU~\cite[Proposition~3.13]{HTU-2017}}] If $\delta m_1 - m_2=0$, then the $\mathcal{Z}_i$'s are of divisorial type with $m_1=\delta$, $m_2=\delta^2$, and $a_2=\delta^2-(\delta a_1 -1)$. By Proposition~\ref{proposition:divisorial-contraction}, the contraction $\mathcal{Z}_i \to \mathcal{Y}$ over $\mathbb{D}$ corresponding to $\mathcal{Z}_i$ induces the blow-down $Z_{i,t} \to Y_t$ over $0 \neq t \in \mathbb{D}$ of a $(-1)$-curve between smooth fibers. \end{proposition} HTU~\cite{HTU-2017} also provides a way to carry out these computations for all extremal neighborhoods of type mk1A. See Urzúa~\cite[Proposition~2.12]{Urzua-2016-1} for a concise summary. \begin{proposition}[{HTU~\cite[\S2.3, \S3.4]{HTU-2017}}] Let $[e_1,\dotsc,\overline{e_i},\dotsc,e_s]$ be an mk1A with $\frac{m^2}{ma-1}=[e_1,\dotsc,e_s]$. Let $\frac{m_2}{m_2-a_2}=[e_1,\dotsc,e_{i-1}]$ and $\frac{m_1}{m_1-a_1}=[e_{i+1},\dotsc,e_s]$ (if possible; that is, for the first $i > 1$ and for the second $i < s$).
Then there are two mk2A's given by \begin{equation*} \text{$[f_{s_2},\dotsc,f_1]-1-[e_1,\dotsc,e_s]$ and $[e_1,\dotsc,e_s]-1-[g_1,\dotsc,g_{s_1}]$} \end{equation*} where $\frac{m_2^2}{m_2a_2-1}=[f_1,\dotsc,f_{s_2}]$ and $\frac{m_1^2}{m_1a_1-1}=[g_1,\dotsc,g_{s_1}]$ such that the corresponding cyclic quotient surface singularity $\frac{1}{\Delta}(1,\Omega)$ and $\delta$ are the same for the mk1A and the mk2A's. Moreover these three extremal neighborhoods are either (1) flipping with the same extremal P-resolution for the flip or (2) divisorial with the same $(Q \in Y)$. \end{proposition} The above propositions give a way to determine the type of a given extremal neighborhood: find the initial extremal neighborhood by running the Mori recursion backwards, and determine its type by computing the value $\delta m_1 - m_2$ of the initial one. \begin{example}[A flipping mk2A with $\delta > 1$] Let $(Q \in Y)=\frac{1}{11}(1,3)$. Let $\mathcal{Z}$ be the mk2A $[(50,37)]-1-[(19,5)]$. Since $\delta=3$, the initial mk2A is $\mathcal{Z}_1=[(7,5)]-1-[(2,1)]$. We have $\delta m_1-m_2 < 0$. So the initial mk2A $\mathcal{Z}_1$ (and hence $\mathcal{Z}=\mathcal{Z}_3$ also) is of flipping type. The flip has $Z^+=[4]-3$ as its central fiber. Notice that the mk1A $[4,7,2,\overline{2},3,2,2]$ is also of flipping type with $Z^+=[4]-3$ as its flip. \end{example} \begin{example}[A flipping mk2A with $\delta=1$] Let $(Q \in Y) = \frac{1}{13}(1,3)$. Let $\mathcal{Z}$ be the mk2A $[2,5]-1-[2,2,6]$. Then $\delta=1$ and the initial mk2A $\mathcal{Z}_1$ is $[2,2,6]-1-\varnothing$. So it is of flipping type and the flip is $[5,2]-2$. \end{example} \begin{example}[A divisorial mk2A] Let $(Q \in Y)$ be the Wahl singularity $\frac{1}{4}(1,1)$. Then the mk2A $[8,2,2,2,2]-1-[6,2,2]$ is of divisorial type. \end{example} \subsection{Degenerations} We briefly recall how degenerations occur under flips, only for the situations that we will encounter often.
For details and full generalities, we refer to Urzúa~\cite[\S4]{Urzua-2016-2}. Let $(E^- \subset Z_0 \subset \mathcal{Z}) \to (Q \in T \subset \mathcal{Y})$ be a flipping mk1A. Let $\Gamma_0$ be an irreducible curve in $Z_0$ such that $\Gamma_0 \cdot E^- = 1$ but $\Gamma_0$ does not pass through any of the singular points of $Z_0$. Let $\Gamma_t$ be the deformation of $\Gamma_0$; that is, there is a divisor $\Gamma$ in $\mathcal{Z}$ such that $\Gamma|_{Z_t} = \Gamma_t$. Then $\Gamma_t$ is isomorphic to $\Gamma_0$. Let $(E^+ \subset Z^+_0 \subset \mathcal{Z}^+) \to (Q \in T \subset \mathcal{Y})$ be the flip of $(E^- \subset Z_0 \subset \mathcal{Z}) \to (Q \in T \subset \mathcal{Y})$ along $E^-$. Let $\Gamma^+ \subset \mathcal{Z}^+$ be the proper transform of $\Gamma$. Then $\Gamma^+_t = \Gamma_t$ for $t \neq 0$. But: \begin{lemma}[{Urzúa~\cite[\S4]{Urzua-2016-2}}]\label{lemma:degeneration} We have \begin{equation*} \Gamma^+_0 = \Gamma_0' + \beta E^+ \end{equation*} for some positive integer $\beta$, where $\Gamma_0'$ is the proper transform of $\Gamma_0$. \end{lemma} Since $\mathcal{Z}^+$ is $\mathbb{Q}$-factorial, we have the equality \begin{equation*} (\Gamma_0' + \beta E^+) \cdot K_{Z^+_0} = \Gamma^+_t \cdot K_{Z^+_t} = \Gamma_t \cdot K_{Z_t}. \end{equation*} So one can calculate $\beta$ from the above equation. \subsection{Flips encountered frequently} The flipping mk1A that appears most frequently is the following. \begin{proposition}[Usual flips; {Urzúa~\cite[Proposition~2.15]{Urzua-2016-1}}]\label{proposition:Usual-flips} The mk1A $\mathcal{Z}=[a_1,\dotsc,a_{s-1},\overline{a_s}]$ is of flipping type. Suppose that $a_i \ge 3$ but $a_{i+1}=\dotsb=a_s=2$ for some $i$. Here if $a_s \ge 3$ then we set $i=s$. Then the image of the exceptional curve $A_1$ (corresponding to $a_1$) in the extremal P-resolution $Z^+$ is the curve $C^+$, and there is a Wahl singularity $[a_2,\dotsc,a_i-1]$ on $C^+$ if $i \ge 2$.
That is, the numerical data for $\mathcal{Z}^+$ is either $a_1-[a_2,\dotsc,a_i-1]$ for $i \ge 2$ or $a_1-1$ for $i=1$. \end{proposition} In this situation, the degeneration occurs in the following manner. \begin{corollary}[{Urzúa~\cite[Proposition~4.1]{Urzua-2016-2}}]\label{Corollary:usual-degeneration} Let $[e_1,\dotsc,e_{s-1},\overline{e_s}]$ be an mk1A. Then $\beta=1$; that is, we have $\Gamma^+_0 = \Gamma_0' + E^+$. \end{corollary} \begin{remark}\label{remark:flip-Riemenschneider} In terms of the Riemenschneider dot diagram, the usual flip removes the last column of the diagram of $a_1-\dotsb-a_s$ and frees its first row. For example, the Riemenschneider dot diagram of $\mathcal{Z}^+=2-[2,5,3]$ for $\mathcal{Z}=[2,2,5,\overline{4}]$ is the following: \begin{equation*} \begin{tikzpicture}[scale=0.5] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] at (0,3) [label=left:{$C^+$}] {}; \node[bullet] at (0,2) [] {}; \node[bullet] at (0,1) [] {}; \node[bullet] at (1,1) [] {}; \node[bullet] at (2,1) [] {}; \node[bullet] at (3,1) [] {}; \node[bullet] at (3,0) [] {}; \node[bullet] at (4,0) [] {}; \node[bullet] at (5,0) [] {}; \draw [-] (5,-0.5)--(5,0.5); \draw [-] (-0.25,2.5)--(-0.5,2.5)--(-0.5,-0.5)--(-0.25,-0.5); \draw [-] (4.25,2.5)--(4.5,2.5)--(4.5,-0.5)--(4.25,-0.5); \end{tikzpicture} \end{equation*} \end{remark} The following situations occur frequently in this paper. \begin{lemma}\label{lemma:usual-situation-1} Let $[b_1,\dotsc,b_r]$ be a Wahl singularity with $B_p$ as its initial curve. Suppose that $(-1)$-curves are attached to the curves $B_i$ as follows: If $p < r$, then there are $(b_i-2)$ $(-1)$-curves attached to $B_i$ for each $i=p,\dotsc,r-1$ and there are $(b_r-1)$ $(-1)$-curves attached to $B_r$ as in Figure~\ref{figure:usual-flip}. If $p=r$, then there are $(b_p-1)$ $(-1)$-curves attached to $B_p$. Let $\mathcal{Z}=[b_1,\dotsc,b_r]$ be the extremal neighborhood with the $(-1)$-curves.
Then we can apply usual flips to $\mathcal{Z}$ successively, starting from the $(-1)$-curves intersecting $B_r$ down to some of the $(-1)$-curves intersecting $B_p$, followed by divisorial contractions along the remaining $(-1)$-curves intersecting $B_p$, until we obtain the linear chain \begin{equation*} b_1-\dotsb-b_{p-1}-1 \end{equation*} with no singularities, where the $(-1)$-curve is the image of the initial curve $B_p$ and $b_1-\dotsb-b_{p-1}$ may be empty (in case $p=1$). \end{lemma} \begin{figure} \centering \begin{tikzpicture}[scale=2] \node[bullet] (00) at (0,0) [label=above:{$B_1$}] {}; \node[empty] (050) at (0.5,0) [] {}; \node[empty] (10) at (1,0) [] {}; \node[bullet] (150) at (1.5,0) [label=above:{$B_p$}] {}; \node[bullet] (150-1) at (1.3,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (1.4,-0.5) [] {}; \node[smallbullet] at (1.5,-0.5) [] {}; \node[smallbullet] at (1.6,-0.5) [] {}; \node[bullet] (150-2) at (1.7,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (1.3,-0.6)--(1.7,-0.6) node[pos=0.5,below=0.1em,black]{$b_p-2$}; \node[bullet] (250) at (2.5,0) [label=above:{$B_{p+1}$}] {}; \node[bullet] (250-1) at (2.3,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (2.4,-0.5) [] {}; \node[smallbullet] at (2.5,-0.5) [] {}; \node[smallbullet] at (2.6,-0.5) [] {}; \node[bullet] (250-2) at (2.7,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (2.3,-0.6)--(2.7,-0.6) node[pos=0.5,below=0.1em,black]{$b_{p+1}-2$}; \node[empty] (30) at (3,0) [] {}; \node[empty] (350) at (3.5,0) [] {}; \node[bullet] (40) at (4,0) [label=above:{$B_{r-1}$}] {}; \node[bullet] (40-1) at (3.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (3.9,-0.5) [] {}; \node[smallbullet] at (4,-0.5) [] {}; \node[smallbullet] at (4.1,-0.5) [] {};
\node[bullet] (40-2) at (4.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (3.8,-0.6)--(4.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_{r-1}-2$}; \node[bullet] (50) at (5,0) [label=above:{$B_r$}] {}; \node[bullet] (50-1) at (4.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (4.9,-0.5) [] {}; \node[smallbullet] at (5,-0.5) [] {}; \node[smallbullet] at (5.1,-0.5) [] {}; \node[bullet] (50-2) at (5.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (4.8,-0.6)--(5.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_r-1$}; \draw[-] (00)--(050); \draw[dotted] (050)--(10); \draw[-] (10)--(150); \draw[-] (150)--(250); \draw[-] (250)--(30); \draw[dotted] (30)--(350); \draw[-] (350)--(40); \draw[-] (40)--(50); \draw[-] (150)--(150-1); \draw[-] (150)--(150-2); \draw[-] (250)--(250-1); \draw[-] (250)--(250-2); \draw[-] (40)--(40-1); \draw[-] (40)--(40-2); \draw[-] (50)--(50-1); \draw[-] (50)--(50-2); \end{tikzpicture} \caption{Situations often encountered} \label{figure:usual-flip} \end{figure} \begin{proof} \textit{Case~1}: $r=1$. Then $\mathcal{Z}=[4]$ with three $(-1)$-curves on it, and after one usual flip followed by two divisorial contractions we have $\mathcal{Z}^+=1$. Suppose that $r > 1$ from now on. \textit{Case~2}: $p=r$. Then \begin{equation*} \mathcal{Z}=[\underbrace{2,\dotsc,2}_{b_p-4},b_p]. \end{equation*} Applying the usual flips $(b_p-4)$ times, we have $\mathcal{Z}'=2-\dotsb-2-[4]$ with three $(-1)$-curves on the $[4]$. So, as in Case~1, we have $\mathcal{Z}^+=2-\dotsb-2-1$ after a further usual flip and divisorial contractions. \textit{Case~3}: $p=1$. Then \begin{equation*} \mathcal{Z}=[b_p,\underbrace{2,\dotsc,2}_{b_p-4}]. \end{equation*} We apply a usual flip once along the $(-1)$-curve on the last $(-2)$-curve.
Then we have $\mathcal{Z}'=b_p-1$ with $(b_p-2)$ $(-1)$-curves on it. So we have $\mathcal{Z}^+=1$ after $(b_p-2)$ divisorial contractions. \textit{Case~4}: $1 < p < r$. Applying the usual flip once along a $(-1)$-curve intersecting $B_r$, we have \begin{equation*} \mathcal{Z}' = b_1-[b_2,\dotsc,b_{i-1},b_i-1] \end{equation*} where $b_i \ge 3$ and $b_j =2$ for $j > i$ (if any). Notice that $p \le i$ (Remark~\ref{remark:flip-Riemenschneider}) and there are $(b_k-2)$ $(-1)$-curves intersecting $B_k$ for each $p \le k \le i$. If $p=2$ or $p=i$, then we are in the situation of Case~3 or Case~2, respectively. If $2 < p < i$, then we can again apply the usual flips, starting from a $(-1)$-curve intersecting $B_i$, until we reach Case~1, Case~2, or Case~3. Therefore we finally have \begin{equation*} \mathcal{Z}^+ = b_1-\dotsb-b_{p-1}-1 \end{equation*} as desired. \end{proof} \begin{lemma}\label{lemma:usual-situation-2} Let $h_1-\dotsb-h_q$ be a linear chain of rational curves $H_i$ with $H_i \cdot H_i = -h_i \le -2$. Suppose that there is a sequence of blowing ups at some point $x \in H_q \setminus H_{q-1}$ (including its infinitely near points) so that we have a linear chain of rational curves \begin{equation*} h_1-\dotsb-h_{q-1}-h_q'-\dotsb-h_s-1-g_t-\dotsb-g_1 \end{equation*} such that $[h_1,\dotsc,h_s]$ is a T-singularity. Let $\mathcal{Z}=[h_1,\dotsc,h_s]-1-g_t-\dotsb-g_1$ be an extremal neighborhood. Then, applying the usual flips successively to $\mathcal{Z}$, we obtain a 3-fold $\mathcal{Z}^+ \to \Delta$ such that the minimal resolution of its central fiber over $0$ is the original linear chain $h_1-\dotsb-h_q$. \end{lemma} \begin{proof} Notice that $[h_q'-h_q,\dotsc,h_s,1,g_t,\dotsc,g_1]=0$. So we keep applying the usual flips until there is no $(-1)$-curve that can be flipped, and we obtain the asserted 3-fold $\mathcal{Z}^+ \to \Delta$.
\end{proof} \section{Identifying picture deformations from P-resolutions}\label{section:identification} Let $(X,p)$ be a sandwiched surface singularity and let $\mathcal{X} \to \Delta$ be a one-parameter smoothing of $(X,p)$ over a small open disk $\Delta \subset \mathbb{C}$ centered at the origin. Assume that there is an M-resolution $U \to X$ that induces the smoothing $\mathcal{X}$. Then there is an M-resolution $Z \to Y$ of the compatible compactification $Y$ of $X$ such that its $\mathbb{Q}$-Gorenstein smoothing $\mathcal{Z} \to \Delta$ blows down to the smoothing $\mathcal{Y} \to \Delta$ of $Y$ that is an extension of $\mathcal{X} \to \Delta$. In this section we will show that one can identify the picture deformation corresponding to the given smoothing $\mathcal{X} \to \Delta$ by running the semistable minimal model program on the $\mathbb{Q}$-Gorenstein smoothing $\mathcal{Z} \to \Delta$. \subsection{Reduction to smooth central fibers} We first reduce the smoothing $\mathcal{Z} \to \Delta$ of the M-resolution $Z$ of $Y$ to a deformation with a smooth central fiber. Urzúa~\cite{Urzua-2016-2} defines a so-called \emph{$W$-surface} as a normal projective surface $S$ together with a proper deformation $\mathcal{S} \to \Delta$ such that (1) $S$ has at most singularities of class $T_0$, (2) $\mathcal{S}$ is a normal complex 3-fold such that the canonical divisor $K_{\mathcal{S}}$ is $\mathbb{Q}$-Cartier, (3) the fiber $S_0$ is reduced and isomorphic to $S$, and (4) the fiber $S_t$ is nonsingular for $t \neq 0$. Then: \begin{proposition}[{Urzúa~\cite[Corollary~3.5]{Urzua-2016-2}}]\label{proposition:Urzua-Corollary-3.5} If $S_0$ is birational to $S_t$ for $t \neq 0$, then the smoothing $\mathcal{S} \to \Delta$ can be reduced to a deformation $\mathcal{S}' \to \Delta$ whose central fiber $S_0'$ is smooth by applying a finite number of the divisorial contractions and the flips described in Section~\ref{section:semistable-MMP}.
\end{proposition} Returning to our situation, notice that $Z=Z_0$ is a $W$-surface. Furthermore, the central fiber $Z_0$ and a general fiber $Z_t$ are both rational surfaces because they contain a $(+1)$-curve that was the line at infinity in $\mathbb{CP}^2$. Therefore it follows by the above Proposition~\ref{proposition:Urzua-Corollary-3.5} that: \begin{proposition}\label{proposition:smooth-central-fiber} By applying the divisorial contractions and the flips described in Section~\ref{section:semistable-MMP} to $(-1)$-curves on the central fiber $Z_0$ of $\mathcal{Z} \to \Delta$, one can run the MMP on $\mathcal{Z} \to \Delta$ until one obtains a deformation $\mathcal{Z}' \to \Delta$ whose central fiber $Z_0'$ is smooth. \end{proposition} Flips never change a general fiber, while a divisorial contraction is just a blow-down on each fiber. So: \begin{corollary} In Proposition~\ref{proposition:smooth-central-fiber}, a general fiber $Z_t$ is obtained by blowing up a general fiber $Z'_t$ of the smoothing $\mathcal{Z}' \to \Delta$ of $Z_0'$ several times. \end{corollary} \begin{theorem}\label{theorem:main} One can run the semistable MMP on $\mathcal{Z} \to \Delta$ until one obtains the corresponding picture deformation $\mathcal{D} \to \Delta$ of the compactified decorated curve $(D,l)$. \end{theorem} \begin{proof} Notice that the minimal resolution of the central fiber $Z_0'$ in the above Proposition~\ref{proposition:smooth-central-fiber} is obtained by blowing up $\mathbb{CP}^2$ at $[0,0,1]$. So after a sequence of divisorial contractions applied to $\mathcal{Z}' \to \Delta$, we have a deformation $\mathcal{Z}'' \to \Delta$ whose central fiber $Z_0''$ is just $\mathbb{CP}^2$. Similarly, the general fiber $Z_t''$ of $\mathcal{Z}'' \to \Delta$ is also $\mathbb{CP}^2$. Therefore the deformation $\mathcal{Z}'' \to \Delta$ is just a deformation of $\mathbb{CP}^2$ to itself.
But notice that the map $Z_0 \to Z_0''(=\mathbb{CP}^2)$ decomposes as $Z_0 \to Y_0 \to \mathbb{CP}^2$, and the map $Y_0 \to \mathbb{CP}^2$ is just the contraction of all decorated $(-1)$-curves on $Z_0$. On the other hand, the map $Z_t \to Z_t''(=\mathbb{CP}^2)$ for $t \neq 0$ is just a composition of blow-downs. Therefore the image of the proper transform $(\widetilde{D},l)$ in $\mathcal{Z}'' \to \Delta$ is the picture deformation of the compactified decorated curve $(D,l)$ of $Y$ by Theorem~\ref{theorem:compactified-picture-deformation}. \end{proof} \subsection{Identifying method} A flip changes only the central fiber. So it is enough to pay attention only to divisorial contractions in order to track how a general fiber changes during the MMP process above. But divisorial contractions are just blow-downs of $(-1)$-curves on a general fiber. Since $\mathcal{Z}' \to \Delta$ is a deformation with a smooth central fiber $Z_0'$, a general fiber $Z_t'$ is diffeomorphic to $Z_0'$. On the other hand, flips induce degenerations: an irreducible curve in a general fiber $Z_t'$ may degenerate into several curves in the central fiber $Z'_0$. By comparing $Z_0'$ and $Z_t'$ via these degenerations, one can read off the positions of the $(-1)$-curves in $Z_t'$. Then, by tracking the blow-downs $Z_t \to Z_t'$, one can obtain the picture deformation of $X$ corresponding to the smoothing $\mathcal{X} \to \Delta$. \section{How to prove Kollár Conjecture}\label{section:How-To-K-conjecture} We may prove Kollár conjecture for some sandwiched surface singularities using the MMP machinery developed in this paper. Let $(X,p)$ be a sandwiched surface singularity and let $(C,l)$ be a decorated curve of $(X,p)$. Let $(Y,p)$ be a compatible compactification of $(X,p)$ and let $(D,l)$ be its compactified decorated curve. Let $\mathcal{X} \to \Delta$ be a one-parameter smoothing of $(X,p)$ and let $\mathcal{C} \to \Delta$ be the corresponding picture deformation of $(C,l)$.
Assume that $\mathcal{C} \to \Delta$ corresponds to an incidence matrix $A$. Let $\mathcal{Y} \to \Delta$ be the one-parameter smoothing of $(Y,p)$ and let $\mathcal{D} \to \Delta$ be the compactified picture deformation of $(D,l)$, which correspond to $\mathcal{X} \to \Delta$ and $\mathcal{C} \to \Delta$, respectively. \begin{proposition} If there is a normal P-modification $U \to X$ such that the compactified P-modification $Z \to Y$ has a $\mathbb{Q}$-Gorenstein smoothing $\mathcal{Z} \to \Delta$ that descends via the MMP to a picture deformation $\mathcal{D}' \to \Delta$ of $(D,l)$ corresponding to the incidence matrix $A$, then Kollár conjecture holds for the singularity $(X,p)$. \end{proposition} \begin{proof} Let $\mathcal{Y}' \to \Delta$ be the blowing up of $\mathcal{D}' \to \Delta$ along the ideal determined by the scheme $l$. Then there is a morphism $\mathcal{Z} \to \mathcal{Y}'$. On the other hand, $\mathcal{D}' \to \Delta$ and $\mathcal{D} \to \Delta$ are isomorphic because they correspond to the same incidence matrix $A$. Therefore the isomorphism extends to an isomorphism between $\mathcal{Y}' \to \Delta$ and $\mathcal{Y} \to \Delta$. Hence we have a morphism $\mathcal{Z} \to \mathcal{Y}' \to \mathcal{Y} \to \Delta$, and we may conclude that the smoothing $\mathcal{X} \to \Delta$ of $(X,p)$ is induced by the P-modification $U \to X$. \end{proof} To verify the hypothesis of this proposition, we must first identify all incidence matrices, which is a difficult combinatorial problem in general. Next, it is necessary to identify the P-modifications corresponding to each incidence matrix. We can reapply the MMP algorithm to determine whether or not the P-modifications we find correspond to the given incidence matrices. Using this procedure, we will prove Kollár conjecture for various weighted homogeneous surface singularities in the next two sections: \S\ref{section:illustrations} and \S\ref{section:Wpqr}.
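The first step of this procedure, identifying all incidence matrices, can at least be mechanized in small cases. The following is a minimal brute-force sketch; the helper name `incidence_matrices` and the exact set of necessary conditions imposed (row sums equal to the decorations $l_i$, the $\delta$-invariants, and the pairwise intersection numbers) are our illustrative assumptions, not notation from the paper.

```python
from itertools import combinations_with_replacement, product

# Brute-force search for incidence matrices: rows = branches of the decorated
# curve, columns = points of a candidate picture deformation.  The necessary
# conditions checked here (an assumption for this sketch) are:
#   sum_k m_ik             = l[i]         (decorations)
#   sum_k m_ik(m_ik - 1)/2 = delta[i]     (delta invariants)
#   sum_k m_ik m_jk        = inter[i][j]  (pairwise intersection numbers)
def incidence_matrices(l, delta, inter, max_entry):
    r = len(l)
    cols = [c for c in product(range(max_entry + 1), repeat=r) if any(c)]
    found = set()
    for n in range(1, sum(l) + 1):  # a nonzero column adds >= 1 to a row sum
        for cho in combinations_with_replacement(cols, n):
            rows = [[c[i] for c in cho] for i in range(r)]
            if all(sum(rows[i]) == l[i] for i in range(r)) \
               and all(sum(m * (m - 1) // 2 for m in rows[i]) == delta[i]
                       for i in range(r)) \
               and all(sum(a * b for a, b in zip(rows[i], rows[j])) == inter[i][j]
                       for i in range(r) for j in range(i + 1, r)):
                found.add(cho)  # columns come out in a canonical sorted order
    return found

# Toy run: three distinct lines through the origin, l_i = 2, delta = 0,
# pairwise intersection number 1.
one = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
sols = incidence_matrices([2, 2, 2], [0, 0, 0], one, max_entry=1)
print(len(sols))  # -> 2
```

Run on this toy input (the $\frac{1}{4}(1,1)$ example of the next section), the search returns exactly two matrices, matching the two picture deformations found there: three free points plus one triple point, or three pairwise double points.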
\section{Illustrations of the correspondences}\label{section:illustrations} We illustrate how to determine the picture deformations corresponding to given normal P-modifications using several examples. \subsection{A quotient surface singularity $\frac{1}{4}(1,1)$}\label{section:[4]} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{4}(1,1)$. It has a sandwiched structure given by \begin{equation*} \begin{tikzpicture}[scale=0.75] \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-20) at (-2,0) [label=left:{$\widetilde{C}_1$}] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-1$}] {}; \node[bullet] (10) at (1,0) [labelAbove={$-1$}] {}; \node[bullet] (20) at (2,0) [label=right:{$\widetilde{C}_2$}] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-1$}] {}; \node[bullet] (0-2) at (0,-2) [label=below:{$\widetilde{C}_3$}] {}; \draw[-] (-20)--(-10)--(00); \draw[-] (00)--(10)--(20); \draw[-] (00)--(0-1)--(0-2); \end{tikzpicture} \end{equation*} The decorated curve $(C,l)$ of $(X,p)$ consists of three different lines $C_1, C_2, C_3$ passing through the origin of $\mathbb{C}^2$ with the decorations $l_1=l_2=l_3=2$. There are two picture deformations of $(C,l)$: \begin{center} \includegraphics[width=0.75\textwidth]{figure-4-picture-deformation} \end{center} On the other hand, we also have two M-resolutions: the minimal resolution and the singularity itself.
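The singularity itself qualifies as an M-resolution because $[4]=\frac{1}{4}(1,1)$ is a Wahl singularity ($4=n^2$ and $1=na-1$ with $n=2$, $a=1$). Chains such as $[5,2]$ or $[2,5,3]$ appearing later can be checked the same way by evaluating Hirzebruch--Jung continued fractions; a small illustrative sketch (the helper names are ours):

```python
from fractions import Fraction
from math import gcd, isqrt

def hj_value(chain):
    """Evaluate the Hirzebruch-Jung continued fraction [b_1,...,b_r] = n/q."""
    val = Fraction(chain[-1])
    for b in reversed(chain[:-1]):
        val = b - 1 / val
    return val

def is_wahl(chain):
    """A chain is Wahl iff its value is n^2/(na-1) with 0 < a < n, gcd(a,n)=1.
    (We assume all entries b_i >= 2, as for a resolution chain.)"""
    v = hj_value(chain)
    n2, q = v.numerator, v.denominator
    n = isqrt(n2)
    if n * n != n2 or (q + 1) % n != 0:
        return False
    a = (q + 1) // n
    return 0 < a < n and gcd(a, n) == 1

print(hj_value([2, 2, 5, 4]))  # -> 49/34, i.e. 1/49(1,34)
print([is_wahl(c) for c in ([4], [5, 2], [2, 5, 3], [3])])
# -> [True, True, True, False]
```

For instance, `hj_value([2, 2, 5, 4])` returns $49/34$, recovering $\mathcal{Z}=[2,2,5,\overline{4}]=\frac{1}{49}(1,34)$ with $34=7\cdot 5-1$, while $[3]$ is correctly rejected.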
So we have two compactified M-resolutions \begin{equation*} \begin{tikzpicture}[scale=0.75] \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-20) at (-2,0) [label=left:{$\widetilde{C}_1$}] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-1$}] {}; \node[bullet] (10) at (1,0) [labelAbove={$-1$}] {}; \node[bullet] (20) at (2,0) [label=right:{$\widetilde{C}_2$}] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-1$}] {}; \node[bullet] (0-2) at (0,-2) [label=below:{$\widetilde{C}_3$}] {}; \draw[-] (-20)--(-10)--(00); \draw[-] (00)--(10)--(20); \draw[-] (00)--(0-1)--(0-2); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.75] \node[rectangle] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-20) at (-2,0) [label=left:{$\widetilde{C}_1$}] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-1$}] {}; \node[bullet] (10) at (1,0) [labelAbove={$-1$}] {}; \node[bullet] (20) at (2,0) [label=right:{$\widetilde{C}_2$}] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-1$}] {}; \node[bullet] (0-2) at (0,-2) [label=below:{$\widetilde{C}_3$}] {}; \draw[-] (-20)--(-10)--(00); \draw[-] (00)--(10)--(20); \draw[-] (00)--(0-1)--(0-2); \end{tikzpicture} \end{equation*} We show the procedure for identifying picture deformations from given M-resolutions. Let's start with the minimal resolution. Let $(Y,p)$ be the compatible compactification of $(X,p)$ and let $Z$ be the M-resolution of $Y$ corresponding to the minimal resolution of $X$. Let $\mathcal{Z} \to \Delta$ be the corresponding $\mathbb{Q}$-Gorenstein smoothing of $Z$. So we have the following picture. \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-1-0} \end{center} The $(-1)$-curve $E_i$ ($i=1,2,3$) in the central fiber deforms to a $(-1)$-curve in a general fiber that intersects only with $C_i$, respectively. Therefore we can add $(-1)$-curves to the above picture.
\begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-1-1} \end{center} We apply the divisorial contraction to $E_1, E_2, E_3$. Then we have a new deformation, where we denote by $P_i$'s the points to which $E_i$'s are contracted. \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-1-2} \end{center} There is a new $(-1)$-curve $E_4$, which deforms to a $(-1)$-curve in a general fiber that intersects with $C_1, C_2, C_3$. \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-1-3} \end{center} We finally apply the divisorial contraction to $E_4$. Then we obtain the first picture deformation \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-1-3} \end{center} Now let's consider the second M-resolution, where three $(-1)$-curves pass through the singularity $[4]$: \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-2-1} \end{center} The curve $E^{-}$ is a flipping curve. So we apply the usual flip to $E^{-}$. Notice that, after the flip, the decorated curve $C_1$ in a general fiber degenerates to the sum of $C_1'$ and the $(-3)$-curve in the central fiber. So we have a new flipped deformation \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-2-2} \end{center} The two $(-1)$-curves $E_1$ and $E_2$ in the central fiber deform to $(-1)$-curves intersecting $C_1, C_2$ and $C_1, C_3$, respectively: \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-2-3} \end{center} We apply the divisorial contractions to $E_1$ and $E_2$. \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-2-4} \end{center} We then have a new $(-1)$-curve $E_3$, whose deformation in a general fiber intersects with $C_2$ and $C_3$.
\begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-2-5} \end{center} Finally, applying the divisorial contraction to $E_3$, we obtain the second picture deformation \begin{center} \includegraphics[width=.5\textwidth]{figure-4-PtoI-2-6} \end{center} \subsection{A cyclic quotient surface singularity $\frac{1}{19}(1,7)$}\label{section:example-cyclic} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. Suppose that a decorated curve of $(X,p)$ is given as in Example~\ref{example:3-4-2-decorated-curve}. \begin{center} \includegraphics[scale=0.5]{figure-3-4-2-decorated-curve} \end{center} It has three picture deformations; see Example~\ref{example:3-4-2-picture-deformation}. \begin{center} \includegraphics[scale=0.5]{figure-3-4-2-picture-deformation} \end{center} On the other hand, there are also three M-resolutions of $(X,p)$: $U_1=3-4-2$, $U_2=3-[4]-2$, $U_3=[4]-1-[5,2]$. We explain the procedure for identifying the picture deformation corresponding to $U_3$ step by step below. For $U_1$ and $U_2$ we present the procedures in Figure~\ref{figure:PtoI-U1-U2}. \begin{figure} \centering \subfloat[$U_1=3-4-2$]{\includegraphics[height=0.5\textheight]{figure-PtoI-example-1}} \qquad \subfloat[{$U_2=3-[4]-2$}]{\includegraphics[height=0.5\textheight]{figure-PtoI-example-2}} \caption{Example in Section~\ref{section:example-cyclic}} \label{figure:PtoI-U1-U2} \end{figure} Let $Z_3$ be the M-resolution of $Y$ corresponding to $U_3$ and let $\mathcal{Z}_3 \to \Delta$ be the smoothing of $Z_3$ induced from $\mathbb{Q}$-Gorenstein smoothings of T-singularities of $Z_3$. Since the compactified decorated curves $C_1, \dotsc, C_4$ do not disappear during the deformation, the central fiber and a general fiber of the deformation $\mathcal{Z}_3 \to \Delta$ look as follows: \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-1} \end{center} We first apply a usual flip along the $(-1)$-curve $E^-$ intersecting $C_4$.
Then we have a new deformation \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-2} \end{center} Here the curve $C_4$ in a general fiber degenerates to the curve $4+C_4'$ in the central fiber by Corollary~\ref{Corollary:usual-degeneration}. We tie them together with a blue curve. There are two $(-1)$-curves $E_1$ and $E_2$ in the central fiber. They do not disappear during the deformation. So there are also two $(-1)$-curves, say again $E_1$ and $E_2$, in a general fiber. In the central fiber, the $(-1)$-curve $E_1$ intersects transversally once with $C_3$ and also with the degeneration $4+C_4'$ of $C_4$. So $E_1$ intersects with $C_3$ and $C_4$ in a general fiber in the same way. Similarly, $E_2$ intersects with $C_2$ and $C_4$ in a general fiber. In short, we can track how $(-1)$-curves in a general fiber intersect with the decorated curves by looking at how they intersect with the (degenerations of) decorated curves in the central fiber. We next apply divisorial contractions along $E_1$ and $E_2$: \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-3} \end{center} Here we painted the contracted curves in orange. Applying a usual flip along the $(-1)$-curve passing through the singular point $[4]$, we get the deformation \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-4} \end{center} The decorated curve $C_4$ degenerates to $3-1-C_4'$ in the central fiber. Two $(-1)$-curves $E_3$ and $E_4$ appear after the flip. Note that $E_3 \cdot C_i=1$ for $i=1,2,3$ and $E_4 \cdot C_j=1$ for $j=1,4$. So we have two $(-1)$-curves $E_3$ and $E_4$ in a general fiber that intersect with the decorated curves in the same way. We apply divisorial contractions along $E_3$ and $E_4$: \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-5} \end{center} Then a new $(-1)$-curve $E_5$ appears so that $E_5 \cdot C_i=1$ for $i=2,3,4$ in a general fiber.
Finally, we apply a divisorial contraction along $E_5$. Then we get the picture deformation corresponding to the P-resolution $U_3$: \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-6} \end{center} which is exactly the same as the following graphical representation of the corresponding picture deformation: \begin{center} \includegraphics[width=0.5\textwidth]{figure-PtoI-example-3-7} \end{center} where $P_i$'s denote the points to which $E_i$'s are contracted. \subsection{A weighted homogeneous surface singularity with a non-T-singularity}\label{section:example-WHSS} We investigate Example~6.3.6 in Kollár~\cite{Kollar-1991}. Let $(X,p)$ be a rational surface singularity with the following dual resolution graph: \begin{equation*} \begin{tikzpicture} \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-3$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{tikzpicture} \end{equation*} It is a sandwiched surface singularity. Let's attach $(-1)$-vertices to the $(-2)$-vertices in the North and the South, respectively. Consider small curvettas, say $C_1$ and $C_2$, passing through the $(-1)$-curves, respectively. Then we have a decorated curve $(C=C_1 \cup C_2, l=\{l_1=8, l_2=8\})$ of $(X,p)$. Indeed the decorated curve can be realized as plane curve singularities, for example, $C_1=y^3-x^4$ and $C_2=y^3-2x^4$. Notice that $\delta(C_i)=3$ for $i=1,2$ and $C_1 \cdot C_2 = 12$.
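For the reader's convenience we record how the numerical data constrain an incidence matrix $(m_{ik})$ here; we assume that Equation~\eqref{equation:incidence-matrix} encodes the usual necessary conditions $\sum_k m_{ik}=l_i$, $\sum_k \binom{m_{ik}}{2}=\delta(C_i)$, and $\sum_k m_{1k}m_{2k}=C_1\cdot C_2$. For instance, for the matrix with rows $(1,0,1,0,1,1,1,3)$ and $(0,1,0,1,1,1,1,3)$ (the matrix $A_1$ below):
\begin{align*}
\sum_k m_{1k} &= 1+1+1+1+1+3 = 8 = l_1, \\
\sum_k \tbinom{m_{1k}}{2} &= \tbinom{3}{2} = 3 = \delta(C_1), \\
\sum_k m_{1k}m_{2k} &= 1+1+1+3\cdot 3 = 12 = C_1 \cdot C_2,
\end{align*}
and symmetrically for the second row.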
Using the necessary conditions on incidence matrices in Equation~\eqref{equation:incidence-matrix}, one can show that there are exactly six picture deformations of $(C,l)$ and the corresponding incidence matrices are given as follows: \begin{align*} A_1 &=\begin{bmatrix} 1 & 0 & 1 & 0 & 1 & 1 & 1 & 3 \\ 0 & 1 & 0 & 1 & 1 & 1 & 1 & 3 \end{bmatrix} & A_2 &=\begin{bmatrix} 1 & 0 & 1 & 0 & 2 & 2 & 2\\ 0 & 1 & 0 & 1 & 2 & 2 & 2 \end{bmatrix} \\ A_3 &=\begin{bmatrix} 1 & 0 & 2 & 1 & 2 & 2 \\ 0 & 1 & 1 & 2 & 2 & 2 \end{bmatrix} & A_4 &=\begin{bmatrix} 0 & 1 & 1 & 2 & 2 & 2 \\ 1 & 1 & 1 & 1 & 1 & 3 \end{bmatrix} \\ A_5 &=\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 3 \\ 0 & 1 & 1 & 2 & 2 & 2 \end{bmatrix} & A_6 &=\begin{bmatrix} 2 & 2 & 2 & 1 & 1\\ 2 & 1 & 1 & 2 & 2 \end{bmatrix} \end{align*} On the other hand, there are five P-resolutions dominated by the minimal resolution, described below; their corresponding M-resolutions are given in Figure~\ref{figure:5-M-resolutions}: \begin{enumerate}[(1)] \item The minimal Du Val resolution. \item Contract all $(-2)$-curves and the $(-4)$-curve. \item Contract any of the three configurations $2-3-4$ and the $(-2)$-curve on the left if necessary.
\end{enumerate} \begin{figure} \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \begin{scope} \node[empty] (-21) at (-2,1) []{$M_1$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-3$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{scope} \begin{scope}[shift={(6,0)}] \node[empty] (-21) at (-2,1) []{$M_2$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-3$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{scope} \begin{scope}[shift={(0,-3)}] \node[empty] (-21) at (-2,1) []{$M_3$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-5$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-1$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \node[rectangle] (30) at (3,0) [labelBelow={$-5$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10)--(20)--(30); \end{scope} \begin{scope}[shift={(6,-3)}] \node[empty] (-21) at (-2,1) []{$M_4$}; \node[rectangle] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-5$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) 
[labelBelow={$-1$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \node[rectangle] (30) at (3,0) [labelBelow={$-5$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10)--(20)--(30); \end{scope} \begin{scope}[shift={(0,-6)}] \node[empty] (-21) at (-2,1) []{$M_5$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-5$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-1$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \node[rectangle] (30) at (3,0) [labelBelow={$-5$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10)--(20)--(30); \end{scope} \end{tikzpicture} \caption{Example~\ref{section:example-WHSS}: Five M-resolutions} \label{figure:5-M-resolutions} \end{figure} There is also a one-parameter family of P-modifications given in Kollár~\cite[Example~6.3.6]{Kollar-1991}. We first blow up any point $x$ on the $(-3)$-curve that is not an intersection point. We have a $(-1)$-curve and the following configuration. \begin{equation}\label{equation:M6-configuration} \begin{aligned} \begin{tikzpicture} \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-4$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{tikzpicture} \end{aligned} \end{equation} \begin{lemma}[{Kollár~\cite[Lemma~6.3.7]{Kollar-1991}}]\label{lemma:M6-equation} Any singularity with dual resolution graph as in Equation~\eqref{equation:M6-configuration} has a $\mathbb{Q}$-Gorenstein component.
\end{lemma} \begin{proof} We briefly summarize the proof in Kollár~\cite[Lemma~6.3.7]{Kollar-1991}. Any singularity with the dual graph \eqref{equation:M6-configuration} is a quotient of the elliptic singularity $\{x^2z + xy^2 + z^5 + ay^2z^2 =0 \} \subset \mathbb{C}^3$ by the $\mathbb{Z}_5$-action with weight $(1,2,3)$. Then the $\mathbb{Q}$-Gorenstein smoothing is given by \begin{equation*} \{x^2z + xy^2 + z^5 + ay^2z^2 + t=0 \} / \mathbb{Z}_5. \qedhere \end{equation*} \end{proof} We contract all the other curves except the $(-1)$-curve, which induces the following P-modification $M_6$. \begin{equation*} \begin{tikzpicture} \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[empty] (-21) at (-2,1) []{$M_6$}; \node[rectangle] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-4$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-4$}] {}; \node[bullet] (0505) at (0.5,0.5) [labelAbove={$-1$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \draw[-] (00)--(0505); \end{tikzpicture} \end{equation*} \subsubsection{The correspondence} Applying the semistable MMP to the above five M-resolutions, we show that each M-resolution $M_i$ corresponds to the incidence matrix $A_i$ for all $i=1,\dotsc,5$. For example, we present the process of the semistable MMP applied to the M-resolution $M_3$ in Figure~\ref{figure:PtoI-WHSS-M3}, where the exceptional curves contracted to T-singularities are painted in light green, while the curves painted in orange or in pink are degenerations of the branches $C_{i,t}$ on a general fiber, respectively.
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=.45\textwidth]{figure-PtoI-example-WHSS-0-1} & \includegraphics[width=.45\textwidth]{figure-PtoI-example-WHSS-0-2} \end{tabular} \caption{The P-resolution $M_3$ in Section~\ref{section:example-WHSS}} \label{figure:PtoI-WHSS-M3} \end{figure} On the other hand, we show that we can also apply the semistable MMP machinery to $M_6$ even though it has a singularity which is not a T-singularity; see Proposition~\ref{proposition:non-T}. \begin{lemma}\label{lemma:mu=2} The Milnor number of the $\mathbb{Q}$-Gorenstein smoothing in Lemma~\ref{lemma:M6-equation} is $3$. \end{lemma} \begin{proof} First, let $H=\{f(x,y,z)=x^2z + xy^2 + z^5 + ay^2z^2 =0 \}$. Since it is a hypersurface singularity, its Milnor number is given by the length of the Jacobian algebra \begin{equation*} \begin{split} J &= \mathbb{C}[[x,y,z]]/\langle \partial f/\partial x, \partial f/\partial y, \partial f/\partial z \rangle \\ &= \mathbb{C}[[x,y,z]]/\langle 2xz+y^2, 2xy+2ayz^2, x^2+5z^4+2ay^2z\rangle. \end{split} \end{equation*} Then one can check that its length is $19$. Let $H_t=\{x^2z + xy^2 + z^5 + ay^2z^2 +t =0 \}$ for a fixed $t \neq 0$. If $M$ is the Milnor fiber of the $\mathbb{Q}$-Gorenstein smoothing in Lemma~\ref{lemma:M6-equation}, then there is an unramified connected covering $H_t \to M$ of degree $5$. Then, since $b_1(M)=b_1(H_t)=0$, we have \begin{equation*} 5e(M)=e(H_t)=1+b_2(H_t)=1+19=20. \end{equation*} Therefore $e(M)=4$, and hence $b_2(M)=e(M)-1=3$.
\end{proof} \begin{lemma}\label{lemma:flip-non-T} Let $(E^{-} \subset \mathcal{Z}) \to (Q \in \mathcal{Y})$ be an extremal neighborhood given by the following dual graph \begin{equation*} \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] (02) at (0,2) [label=left:{$-1$},label=right:{$E^{-}$}] {}; \node[rectangle] (01) at (0,1) [label=left:{$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-4$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (02)--(01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{tikzpicture} \end{equation*} Then there is a flip $(E^+ \subset \mathcal{Z}^+)$ given by the dual graph \begin{equation}\label{equation:M6-flip} \begin{aligned} \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[rectangle] (00) at (0,0) [labelBelow={$-3$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$},labelAbove={$E^+$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{tikzpicture} \end{aligned} \end{equation} \end{lemma} \begin{proof} The singularity $Q \in Y$ has a dual graph \begin{equation*} \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] (00) at (0,0) [labelBelow={$-3$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{tikzpicture} \end{equation*} and the extremal neighborhood $(E^{-} \subset 
\mathcal{Z})$ induces a smoothing $\mathcal{Y} \to \Delta$ of $Q \in Y$ whose Milnor number is $3$ by Lemma~\ref{lemma:mu=2}. Notice that $Q \in Y$ is a sandwiched surface singularity. If we choose its sandwiched $(-1)$-curves to be one attached to the $(-3)$-curve and the other attached to the $(-2)$-curve in the south position, then we have four incidence matrices \begin{align*} A_1' &=\begin{bmatrix} 0 & 1 & 0 & 1 & 1 & 1 & 3 \\ 1 & 0 & 1 & 1 & 1 & 1 & 3 \end{bmatrix} & A_2' &=\begin{bmatrix} 0 & 1 & 0 & 2 & 2 & 2\\ 1 & 0 & 1 & 2 & 2 & 2 \end{bmatrix} \\ A_3' &=\begin{bmatrix} 0 & 2 & 1 & 2 & 2 \\ 1 & 1 & 2 & 2 & 2 \end{bmatrix} & A_5' &=\begin{bmatrix} 1 & 1 & 1 & 1 & 3 \\ 1 & 1 & 2 & 2 & 3 \end{bmatrix} \end{align*} Here $A_3'$ and $A_5'$ correspond to one-parameter smoothings of $Q \in Y$ with Milnor number $3$. But one can check via the MMP machinery developed in this paper that the following P-resolutions $P_3'$ and $P_5'$ correspond to $A_3'$ and $A_5'$, respectively: \begin{equation*} \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \begin{scope}[shift={(0,0)}] \node[empty] (-21) at (-2,0.5) []{$P_3'$}; \node[rectangle] (00) at (0,0) [labelBelow={$-3$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{scope} \begin{scope}[shift={(5,0)}] \node[empty] (-21) at (-2,0.5) []{$P_5'$}; \node[rectangle] (00) at (0,0) [labelBelow={$-3$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-2$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-4$}] {}; \draw[-] (00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10); \end{scope} \end{tikzpicture} \end{equation*} According to
Kollár--Mori~\cite[Theorem~13.5]{KM-1992}, the exceptional curve $E^+$ is irreducible. Therefore the flip of the extremal neighborhood $(E^{-} \subset \mathcal{Z}) \to (Q \in \mathcal{Y})$ is given by Equation~\eqref{equation:M6-flip} as asserted. \end{proof} \begin{lemma}\label{lemma:flip-non-T-degeneration} Let $C$ be an irreducible curve contained in the central fiber of the extremal neighborhood $(E^{-} \subset \mathcal{Z})$ in Lemma~\ref{lemma:flip-non-T} such that $C \cdot E^{-}=1$ and $C$ does not pass through the singular point. Let $C'$ be the proper transform of $C$ after the flip along $E^{-}$. Then the curve $C$ in the general fiber $Z^+_t$ ($t \neq 0$) degenerates to $C'+2E^{+}$ in the central fiber $Z^+_0$. \end{lemma} \begin{proof} Suppose that $C$ in a general fiber $Z^+_t$ degenerates to $C'+\beta E^+$ in the central fiber $Z^+_0$. By Lemma~\ref{lemma:degeneration}, we have \begin{equation*} K_{Z_0} \cdot C = K_{Z_t} \cdot C = K_{Z^+_t} \cdot C = K_{Z^+_0} \cdot (C'+\beta E^+) = K_{Z^+_0} \cdot C' + \beta K_{Z^+_0} \cdot E^+. \end{equation*} Let $\pi \colon \widetilde{Z}_0 \to Z_0$ and $\pi^+ \colon \widetilde{Z}^+_0 \to Z^+_0$ be the minimal resolutions, respectively. Let $\widetilde{C}$ and $\widetilde{C}'$ be the proper transforms of $C$ and $C'$, respectively. Then we have \begin{equation*} K_{Z_0} \cdot C = K_{\widetilde{Z}_0} \cdot \widetilde{C}. \end{equation*} On the other hand, notice that \begin{equation*} (\pi^+)^{\ast} K_{Z^+_0} = K_{\widetilde{Z}^+_0} + \frac{1}{3} E_1 + \frac{2}{3} E_2 + \frac{2}{3} E_3 \end{equation*} where $E_1, E_2, E_3$ are the exceptional divisors of the chain $[2,3,4]$. So we have \begin{equation*} K_{Z^+_0} \cdot C' + \beta K_{Z^+_0} \cdot E^+ = K_{\widetilde{Z}^+_0} \cdot \widetilde{C}' + \frac{2}{3} + \frac{2}{3} \beta. \end{equation*} But $\widetilde{Z}_0 \to \widetilde{Z}^+_0$ is a composition of two blowing-ups.
Hence \begin{equation*} K_{\widetilde{Z}^+_0} \cdot \widetilde{C}' = K_{\widetilde{Z}_0} \cdot \widetilde{C} - 2 = K_{Z_0} \cdot C - 2. \end{equation*} Combining the displayed equations, we obtain $K_{Z_0} \cdot C = K_{Z_0} \cdot C - 2 + \frac{2}{3} + \frac{2}{3}\beta$. Therefore $\beta=2$. \end{proof} \begin{proposition}\label{proposition:non-T} The 6th P-modification $M_6$ corresponds to the 6th incidence matrix $A_6$. \end{proposition} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=.45\textwidth]{figure-PtoI-example-nonT-1} & \includegraphics[width=.45\textwidth]{figure-PtoI-example-nonT-2} \end{tabular} \caption{A P-modification $M_6$ with a non-T-singularity in Section~\ref{section:example-WHSS}} \label{figure:PtoI-WHSS-M6} \end{figure} \begin{proof} We present in Figure~\ref{figure:PtoI-WHSS-M6} the process of the semi-stable minimal model program for identifying the picture deformation corresponding to the P-modification $M_6$. \end{proof} \begin{corollary} Kollár conjecture holds for the singularity considered in Section~\ref{section:example-WHSS}. \end{corollary} \subsection{A weighted homogeneous surface singularity with a nonnormal P-modification} We investigate Example~6.3.3 in Kollár~\cite{Kollar-1991}. Let $(X,p)$ be a rational surface singularity whose dual graph of the minimal resolution is given by \begin{equation*} \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-4$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-3$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-2$}] {}; \node[bullet] (20) at (2,0) [labelBelow={$-2$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-10)--(00)--(10)--(20); \end{tikzpicture} \end{equation*} It is a sandwiched surface singularity and its sandwiched structure is given in Figure~\ref{figure:WHSS-sandwiched-structure}.
\begin{figure} \centering \begin{tikzpicture}[scale=0.75] \tikzset{font=\scriptsize} \node[bullet] (01) at (0,1) [label=left:{$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-4$}] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-3$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-2$}] {}; \node[bullet] (20) at (2,0) [labelBelow={$-2$}] {}; \node[bullet] (30) at (3,0) [labelBelow={$-1$}] {}; \node[bullet] (40) at (4,0) [labelBelow={$C_1$}] {}; \node[bullet] (02) at (0,2) [label=left:{$-1$}] {}; \node[bullet] (03) at (0,3) [label=left:{$C_2$}] {}; \node[bullet] (0-2) at (0,-2) [label=left:{$-1$}] {}; \node[bullet] (0-3) at (0,-3) [label=left:{$C_3$}] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$-1$}] {}; \node[bullet] (-30) at (-3,0) [labelBelow={$C_4$}] {}; \draw[-] (03)--(02)--(01)--(00)--(0-1)--(0-2)--(0-3); \draw[-] (-30)--(-20)--(-10)--(00)--(10)--(20)--(30)--(40); \end{tikzpicture} \caption{A sandwiched structure} \label{figure:WHSS-sandwiched-structure} \end{figure} The decorated curve is given by $(C_1 \cup C_2 \cup C_3 \cup C_4, \{5,4,4,2\})$ and its components satisfy the following incidence relations: $C_i \cdot C_j =2$ for $i \neq j$ with $i, j \le 3$ and $C_k \cdot C_4=1$ for all $k \le 3$: \begin{center} \includegraphics[scale=0.5]{figure-PtoI-example-WHSS-1.pdf} \end{center} There are six picture deformations of $(X,p)$. In Figure~\ref{figure:WHSS-six-picture-deformations}, we present the decorated curves in a general fiber for each picture deformation. On the other hand, there are also six P-modifications: five are P-resolutions, while the sixth is a nonnormal P-modification. We present the dual graphs of the five P-resolutions in Figure~\ref{figure:WHSS-five-P-resolutions}.
\begin{figure} \centering \includegraphics[scale=0.5]{figure-PtoI-example-WHSS-2.pdf} \caption{Six picture deformations} \label{figure:WHSS-six-picture-deformations} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale=0.75] \tikzset{label distance=-0.15em} \tikzset{font=\scriptsize} \begin{scope} \node[empty] (-11) at (-1,1) []{$\fbox{1}$}; \node[rectangle] (01) at (0,1) [labelAbove={$-2$}] {}; \node[bullet] (00) at (0,0) [labelBelow={$-4$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-3$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-2$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-10)--(00)--(10)--(20); \end{scope} \begin{scope}[shift={(4,0)}] \node[empty] (-11) at (-1,1) []{$\fbox{2}$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-4$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-3$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-2$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-10)--(00)--(10)--(20); \end{scope} \begin{scope}[shift={(9,0)}] \node[empty] (-11) at (-1,1) []{$\fbox{3}$}; \node[rectangle] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-5$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-4$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-1$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-2$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10)--(20); \end{scope} \begin{scope}[shift={(1,-4)}] \node[empty] (-21) at (-2,1) []{$\fbox{4}$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-5$}] {}; \node[rectangle] (0-1) at (0,-1) [labelBelow={$-2$}] 
{}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-4$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-1$}] {}; \node[bullet] (10) at (1,0) [labelBelow={$-2$}] {}; \node[rectangle] (20) at (2,0) [labelBelow={$-2$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10)--(20); \end{scope} \begin{scope}[shift={(6,-4)}] \node[empty] (-21) at (-2,1) []{$\fbox{5}$}; \node[bullet] (01) at (0,1) [labelAbove={$-2$}] {}; \node[rectangle] (00) at (0,0) [labelBelow={$-5$}] {}; \node[bullet] (0-1) at (0,-1) [labelBelow={$-2$}] {}; \node[rectangle] (-20) at (-2,0) [labelBelow={$-4$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$-1$}] {}; \node[rectangle] (10) at (1,0) [labelBelow={$-2$}] {}; \node[bullet] (20) at (2,0) [labelBelow={$-2$}] {}; \draw[-] (01)--(00)--(0-1); \draw[-] (-20)--(-10)--(00)--(10)--(20); \end{scope} \end{tikzpicture} \caption{Five P-resolutions} \label{figure:WHSS-five-P-resolutions} \end{figure} It is not difficult to show that the $i$th P-resolution corresponds to the $i$th picture deformation for $i=1,\dotsc,5$ by applying the MMP machinery. \subsubsection{A nonnormal P-modification} Kollár~\cite[Example~6.3.3]{Kollar-1991} also presents a nonnormal P-modification of $(X,p)$. We briefly recall the construction. There are four distinguished points on the central $(-4)$-curve $C$ corresponding to the four intersection points with the other exceptional curves. We denote the intersection points by $N, E, S, W$ according to the directions of the exceptional curves given in the above resolution graph. There is a unique involution $\tau$ on $C$ such that $\tau(N)=S$ and $\tau(E)=W$. We first contract all curves except $C$. Then we have a normal surface $C'' \subset X''$. Then for each $x \in C''$ we identify $x$ and $\tau(x)$ to obtain a nonnormal surface germ $g \colon C' \subset X' \to (0 \in X)$. Along $C'$ we have generically normal crossing points.
There are also two pinch points corresponding to branch points of the involution $\tau$ and two singular points of the form \begin{equation*} \text{$(xy=0) \subset \mathbb{C}^3/\mathbb{Z}_2(1,-1,1)$ and $(xy=0) \subset \mathbb{C}^3/\mathbb{Z}_3(1,-1,1)$}. \end{equation*} Kollár~\cite[6.3.3.4]{Kollar-1991} shows that the singularities on $X'$ have $\mathbb{Q}$-Gorenstein smoothings and that $X'$ is indeed a (nonnormal) P-modification of $X$. \begin{question} The picture deformation corresponding to the above nonnormal P-modification seems to be the picture deformation \#6. But it is not clear whether one can see the correspondence via the semi-stable minimal model program. \end{question} \section{Weighted homogeneous surface singularities admitting QHDS} \label{section:Wpqr} In the early 1980s, J. Wahl constructed many examples of complex surface singularities that admit smoothings with Milnor number zero, that is, smoothings whose Milnor fibre is a rational homology disk. Recently, Bhupal and Stipsicz~\cite{Bhupal-Stipsicz-2011} classified weighted homogeneous surface singularities with rational homology disk smoothings into 13 classes. We show that Kollár conjecture holds for all of the 13 classes in Bhupal--Stipsicz~\cite{Bhupal-Stipsicz-2011}. However, we present the proof only for one class because the whole proof consists of extremely lengthy but similar combinatorial arguments; we will therefore publish the entire proof separately in an upcoming paper, Park--Shin~\cite{PS-2022}. The singularity with which we are concerned in this paper is known as $W_{p,q,r}$ (after J. Wahl), whose resolution graph is given in Figure~\ref{figure:Wpqr}.
\begin{figure} \centering \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-350) at (-3.5,0) [labelBelow={$-(p+3)$}] {}; \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-2.5,-0.15)--(-1,-0.15) node[pos=0.5,below=0.1em,black]{$q$ $(-2)$'s}; \node[bullet] (350) at (3.5,0) [labelBelow={$-(q+3)$}] {}; \node[bullet] (250) at (2.5,0) [labelAbove={$-2$}] {}; \node[empty] (20) at (2,0) [] {}; \node[empty] (150) at (1.5,0) [] {}; \node[bullet] (10) at (1,0) [labelAbove={$-2$}] {}; \draw[-] (350)--(250); \draw[-] (250)--(20); \draw[dotted] (20)--(150); \draw[-] (150)--(10); \draw[-] (10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (1,-0.15)--(2.5,-0.15) node[pos=0.5,below=0.1em,black]{$r$ $(-2)$'s}; \node[bullet] (0-35) at (0,-3.5) [label=right:{$-(r+3)$}] {}; \node[bullet] (0-25) at (0,-2.5) [label=left:{$-2$}] {}; \node[empty] (0-2) at (0,-2) [] {}; \node[empty] (0-15) at (0,-1.5) [] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \draw[-] (0-35)--(0-25); \draw[-] (0-25)--(0-2); \draw[dotted] (0-2)--(0-15); \draw[-] (0-15)--(0-1); \draw[-] (0-1)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (0.15,-2.5)--(0.15,-1) node[pos=0.5,right=0.1em,black]{$p$ $(-2)$'s}; \end{tikzpicture} \caption{$W_{p,q,r}$, where $p, q, r \ge 0$} \label{figure:Wpqr} \end{figure} \subsection{A sandwiched structure of $W_{p,q,r}$} The singularity $W_{p,q,r}$ is sandwiched. We add ($p+2$) $(-1)$-vertices to the $-(p+3)$-vertex, ($q+2$) $(-1)$-vertices to the $-(q+3)$-vertex, and ($r+2$) $(-1)$-vertices to the $-(r+3)$-vertex in the resolution graph, making the resolution graph a sandwiched graph. 
We then consider small curvettas attached to each of the $(-1)$-vertices, which we denote by $\widetilde{A}_1, \dotsc, \widetilde{A}_{p+2}$, $\widetilde{B}_1, \dotsc, \widetilde{B}_{q+2}$, and $\widetilde{C}_1, \dotsc, \widetilde{C}_{r+2}$, respectively; see Figure~\ref{figure:Wpqr-sandwiched}. \begin{figure} \centering \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-505) at (-5,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-5-05) at (-5,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-5,0.25)--(-5,-0.25); \node[bullet] (-4505) at (-4.5,0.5) [labelAbove={$-1$}] {}; \node[bullet] (-45-05) at (-4.5,-0.5) [labelBelow={$-1$}] {}; \draw[dotted] (-4.5,0.25)--(-4.5,-0.25); \draw[-] (-4505)--(-505); \draw[-] (-45-05)--(-5-05); \draw[-] (-4505)--(-350); \draw[-] (-45-05)--(-350); \node[bullet] (-350) at (-3.5,0) [labelBelow={$-(p+3)$}] {}; \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-2.5,-0.15)--(-1,-0.15) node[pos=0.5,below=0.1em,black]{$q$ $(-2)$'s}; \node[bullet] (505) at (5,0.5) [label=right:{$\widetilde{B}_1$}] {}; \node[bullet] (5-05) at (5,-0.5) [label=right:{$\widetilde{B}_{q+2}$}] {}; \draw[dotted] (5,0.25)--(5,-0.25); \node[bullet] (4505) at (4.5,0.5) [labelAbove={$-1$}] {}; \node[bullet] (45-05) at (4.5,-0.5) [labelBelow={$-1$}] {}; \draw[dotted] (4.5,0.25)--(4.5,-0.25); \draw[-] (4505)--(505); \draw[-] (45-05)--(5-05); \draw[-] (4505)--(350); \draw[-] (45-05)--(350); \node[bullet] (350) at (3.5,0) [labelBelow={$-(q+3)$}] {}; \node[bullet] (250) at (2.5,0) [labelAbove={$-2$}] {}; \node[empty] (20) at (2,0) [] {}; \node[empty] (150) at (1.5,0) [] {}; \node[bullet] (10) at (1,0)
[labelAbove={$-2$}] {}; \draw[-] (350)--(250); \draw[-] (250)--(20); \draw[dotted] (20)--(150); \draw[-] (150)--(10); \draw[-] (10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (1,-0.15)--(2.5,-0.15) node[pos=0.5,below=0.1em,black]{$r$ $(-2)$'s}; \node[bullet] (-05-5) at (-0.5,-5) [label=below:{$\widetilde{C}_1$}] {}; \node[bullet] (05-5) at (0.5,-5) [label=below:{$\widetilde{C}_{r+2}$}] {}; \draw[dotted] (-0.25,-5)--(0.25,-5); \node[bullet] (-05-45) at (-0.5,-4.5) [label=left:{$-1$}] {}; \node[bullet] (05-45) at (0.5,-4.5) [label=right:{$-1$}] {}; \draw[dotted] (-0.25,-4.5)--(0.25,-4.5); \draw[-] (-05-5)--(-05-45); \draw[-] (05-5)--(05-45); \draw[-] (-05-45)--(0-35); \draw[-] (05-45)--(0-35); \node[bullet] (0-35) at (0,-3.5) [label=right:{$-(r+3)$}] {}; \node[bullet] (0-25) at (0,-2.5) [label=left:{$-2$}] {}; \node[empty] (0-2) at (0,-2) [] {}; \node[empty] (0-15) at (0,-1.5) [] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \draw[-] (0-35)--(0-25); \draw[-] (0-25)--(0-2); \draw[dotted] (0-2)--(0-15); \draw[-] (0-15)--(0-1); \draw[-] (0-1)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (0.15,-2.5)--(0.15,-1) node[pos=0.5,right=0.1em,black]{$p$ $(-2)$'s}; \end{tikzpicture} \caption{A sandwiched structure for $W_{p,q,r}$} \label{figure:Wpqr-sandwiched} \end{figure} \subsection{The decorated curves of $W_{p,q,r}$ and their incidence relations} We denote by $A_i, B_j, C_k$ the decorated curves of $W_{p,q,r}$ in $\mathbb{CP}^2$ that are induced from the sandwiched structure given in Figure~\ref{figure:Wpqr-sandwiched}. Notice that all curves $A_i, B_j, C_k$ are smooth. Indeed $A_i$'s are plane curves given by $y=a_i x^{q+2}$, and $B_j$'s and $C_k$'s are those given by $y=b_jx^{r+2}$ and $y=c_kx^{p+2}$, respectively. 
So we have \begin{equation}\label{equation:Wpqr-matrix-delta-l} \begin{gathered} \delta(A_i)=\delta(B_j)=\delta(C_k)=1 \\ l(A_i)=q+3, \quad l(B_j)=r+3, \quad l(C_k)=p+3 \end{gathered} \end{equation} for all $i,j,k$. Furthermore, the incidence relations between $A_i, B_j, C_k$ are given as follows. \begin{equation}\label{equation:Wpqr-matrix-intersection} \begin{gathered} A_i \cdot A_{i'}= q+2, \quad B_j \cdot B_{j'} = r+2, \quad C_k \cdot C_{k'} = p+2 \\ A_i \cdot B_j = 1, \quad B_j \cdot C_k = 1, \quad C_k \cdot A_i = 1 \end{gathered} \end{equation} for all $i, i', j, j', k, k'$ with $i \ne i'$, $j \ne j'$, and $k \ne k'$. \subsection{Submatrices of incidence matrices for $W_{p,q,r}$} We would like to find all incidence matrices for the decorated curve of $W_{p,q,r}$. Since all decorated curves are smooth, any matrix whose entries satisfy the constraints given in Equation~\eqref{equation:incidence-matrix} can be realized as the incidence matrix of a certain picture deformation of the decorated curve for $W_{p,q,r}$. First of all, since $\delta(A_i)=\delta(B_j)=\delta(C_k)=1$, all entries of any incidence matrix are either $0$ or $1$. On the other hand, $l(A_i)=q+3$ and $A_i \cdot A_{i'}=q+2$. So the submatrix consisting of the rows corresponding to the $A_i$'s is either \begin{center} \includegraphics{figure-for-arXiv-1} \end{center} Notice that $A(II)$ can occur only if $q+2-p \ge 0$. Furthermore, if $p=0$, then $A(I)=A(II)$. So $A(II)$ is considered only when $p \ge 1$. Similarly, the submatrices corresponding to the $B_j$'s and $C_k$'s are as follows: \begin{center} \includegraphics{figure-for-arXiv-2} \end{center} and \begin{center} \includegraphics{figure-for-arXiv-3} \end{center} where $B(II)$ and $C(II)$ can occur only if $r+2-q \ge 0$, $q \ge 1$, and $p+2-r \ge 0$, $r \ge 1$, respectively. \subsection{Combinations of the submatrices $A$ and $B$} We will explain how the columns of the above submatrices are combined to produce incidence matrices.
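Before analyzing the combinations, the constraints defining the $A$-submatrices can be sanity-checked numerically. The sketch below relies on an explicit reading of the figures that is our assumption, not taken verbatim from them: $A(I)$ is a $(p+2)\times(p+2)$ identity block followed by $q+2$ columns of $1$'s, and $A(II)$ is the complement-of-identity block $J-I$ followed by $q+2-p$ columns of $1$'s; the helper names `a_block` and `check` are hypothetical.

```python
# Hedged reconstruction of the submatrices A(I) and A(II) for W_{p,q,r}:
#   A(I)  = [ I_{p+2} | (q+2) columns of 1's ]
#   A(II) = [ J - I of size p+2 | (q+2-p) columns of 1's ]   (needs q+2-p >= 0)
# Both must satisfy l(A_i) = q+3 (row sums) and A_i . A_{i'} = q+2
# (inner products of distinct rows), as in the displayed equations above.

def a_block(p, q, variant):
    n = p + 2
    if variant == "I":
        left = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        ones = q + 2
    else:  # variant "II"
        left = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
        ones = q + 2 - p
    return [row + [1] * ones for row in left]

def check(p, q, variant):
    M = a_block(p, q, variant)
    n = p + 2
    rows_ok = all(sum(r) == q + 3 for r in M)
    inner_ok = all(sum(a * b for a, b in zip(M[i], M[j])) == q + 2
                   for i in range(n) for j in range(n) if i != j)
    return rows_ok and inner_ok

assert all(check(p, q, v) for p in range(0, 4) for q in range(p, 5)
           for v in ("I", "II"))
```

With these shapes, $A(I)$ and $A(II)$ coincide up to a column permutation when $p=0$, matching the remark above.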
Notice that we should have $A_i \cdot B_j = 1$, $B_j \cdot C_k = 1$, $C_k \cdot A_i = 1$ from Equation~\eqref{equation:Wpqr-matrix-intersection}. That is, the inner product of any pair of the $A_i$-rows ($B_j$-rows, or $C_k$-rows) and the $B_j$-rows ($C_k$-rows, or $A_i$-rows, respectively) is equal to $1$. We begin by analyzing all possible combinations of the columns of the submatrices $A$ and $B$ such that $A_i \cdot B_j = 1$. First, suppose that $A(I)$ appears in an incidence matrix $M$. We divide cases according to which columns of $B(I)$ or $B(II)$ can appear under the first column of $A(I)$. \textit{Case~1}: A column of $B(I)$ or $B(II)$ consisting of only $1$'s under the first column of $A(I)$. The submatrix corresponding to $A$-rows and $B$-rows looks like \begin{center} \includegraphics{figure-for-arXiv-4} \end{center} Then the inner products `$A_i \cdot B_j =1$' occur on the first column of the incidence matrix $M$ for any $i, j$. So there must be no $1$ below the other columns of $A(I)$. Therefore the incidence matrix $M$ contains the following submatrix \begin{center} \includegraphics{figure-for-arXiv-5} \end{center} where $B'$ is the submatrix obtained from $B(I)$ or $B(II)$ by deleting one column consisting of only $1$'s. \textit{Case~2}: A column of $B(I)$ with only one $1$ under the first column of $A(I)$. We may assume that the submatrix corresponding to $A$-rows and $B$-rows looks like \begin{center} \includegraphics{figure-for-arXiv-6} \end{center} Then the inner product `$A_i \cdot B_1=1$' occurs on the first entry $1$ of $B_1$ for all $i$. Therefore there is no $1$ on the $B_1$-row until the last column of $A$. So we have \begin{center} \includegraphics{figure-for-arXiv-7} \end{center} Hence no column of $B(I)$ consisting of only $1$'s can appear under any column of $A(I)$.
So, in order to satisfy the conditions `$A_i \cdot B_j=1$', the submatrix corresponding to $A_i$-rows and $B_j$-rows must be of the form \begin{center} \includegraphics{figure-for-arXiv-8} \end{center} \textit{Case~3}. A column of $B(II)$ with only one $0$ under the first column of $A(I)$. We may assume that the submatrix corresponding to $A$-rows and $B$-rows looks like \begin{center} \includegraphics{figure-for-arXiv-9} \end{center} As before, because of the inner product condition `$A_i \cdot B_j=1$', the entries of the $B_j$-rows for $j=1,\dots,q+1$ must be $0$ until the last column of $A(I)$. So we have \begin{center} \includegraphics{figure-for-arXiv-10} \end{center} But, since $q \ge 1$, there is no way to complete the $B_{q+2}$-row so that the inner product condition $A_i \cdot B_{q+2}=1$ is satisfied for every $i$. Therefore Case~3 cannot occur. \textit{Case~4}. No columns of $B(I)$ or $B(II)$ under the first column of $A(I)$. That is, we may assume that \begin{center} \includegraphics{figure-for-arXiv-11} \end{center} Then there must be columns of $B(I)$ or $B(II)$ consisting of only $1$'s under the $(p+2)$ columns of $A(I)$ in order to satisfy the inner product condition `$A_i \cdot B_j=1$'. That is, we have \begin{center} \includegraphics{figure-for-arXiv-12} \end{center} where $B'$ is the submatrix obtained from $B(I)$ or $B(II)$ by deleting $(p+2)$ columns consisting of only $1$'s. Notice that we must assume that \begin{equation}\label{equation:Wpqr-AB-inequality} \text{$r+2 \ge p+2$ for $B(I)$ or $r+2-q \ge p+2$ for $B(II)$} \end{equation} because we need at most $(p+2)$ columns consisting of only $1$'s in $B(I)$ or $B(II)$. Next, assume that the submatrix $A(II)$ appears in the incidence matrix $M$. As before, we divide the cases as follows. \textit{Case~5}. A column of $B(I)$ or $B(II)$ consisting of only $1$'s under the first column of $A(II)$.
By arguments similar to those in Case~1, we have \begin{center} \includegraphics{figure-for-arXiv-13} \end{center} where $B'$ is the submatrix obtained from $B(I)$ or $B(II)$ by deleting one column consisting of only $1$'s. \textit{Case~6}. A column of $B(I)$ with only one $1$ under the first column of $A(II)$. As in Case~2, the entries of the $B_1$-row except the first one must be $0$. So we have \begin{center} \includegraphics{figure-for-arXiv-14} \end{center} Since we assume that $p \ge 1$, there must be a $B_j$-row whose entries under the $(q+2-p)$ columns of $A(II)$ are all $0$. We may assume that the $B_{q+2}$-row is of this type. Then there must be exactly one $1$ among the entries of the $B_{q+2}$-row under the $(p+2)$ columns of $A(II)$ in order to satisfy the inner product condition `$A_i \cdot B_{q+2}=1$'. However, no matter where the $1$ is placed, there must be a row of $A(II)$ that does not satisfy the inner product condition. So Case~6 cannot occur. \textit{Case~7}. A column of $B(II)$ with only one $0$ under the first column of $A(II)$. For exactly the same reason as in Case~3, this case cannot happen. \textit{Case~8}. No columns of $B(I)$ or $B(II)$ under the first column of $A(II)$. We will show that this last Case~8 cannot occur either. We may assume that all entries under the $(q+2-p)$ columns of $A(II)$ are $0$. That is, \begin{center} \includegraphics{figure-for-arXiv-15} \end{center} Notice that, under the $(p+2)$ columns of $A(II)$, there must be a non-zero column of $B(I)$ or $B(II)$. So we may assume that the first entry in the $B_1$-row under the first column of the $(p+2)$ columns of $A(II)$ is $1$. Then the other entries of the $B_1$-row should be zero because $A_i\cdot B_1 =1$ and $p \ge 1$. Then we have \begin{center} \includegraphics{figure-for-arXiv-16} \end{center} But then $A_{p+2} \cdot B_1 = 0$, which violates the inner product condition. Therefore Case~8 cannot occur.
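The conclusions of Cases~1--8 can be cross-checked by brute force for small $p,q,r$. The following sketch assembles a full candidate matrix of the type $M(1)$ appearing below, built from $A(I), B(I), C(I)$ under our reconstruction (one column of $1$'s shared by all three families, plus an identity block and private columns of $1$'s for each family; this block structure is our assumption, and all helper names are hypothetical), and verifies every constraint of Equations~\eqref{equation:Wpqr-matrix-delta-l} and \eqref{equation:Wpqr-matrix-intersection}:

```python
# Candidate M(1)-type incidence matrix for W_{p,q,r} (our reconstruction):
# one shared all-1's column, then per family an identity block and the
# remaining all-1's columns, zeroed out on the other families' rows.

def family_rows(n_rows, ones_cols):
    """Private part of one family: identity block followed by 1's columns."""
    return [[1 if i == j else 0 for j in range(n_rows)] + [1] * ones_cols
            for i in range(n_rows)]

def build_m1(p, q, r):
    fams = [family_rows(p + 2, q + 1),   # A-family rows
            family_rows(q + 2, r + 1),   # B-family rows
            family_rows(r + 2, p + 1)]   # C-family rows
    widths = [len(f[0]) for f in fams]
    rows = []
    for k, fam in enumerate(fams):
        for row in fam:
            padded = [1]                                  # shared column
            for k2, w in enumerate(widths):
                padded += row if k2 == k else [0] * w     # zeros elsewhere
            rows.append(padded)
    return rows

def verify(p, q, r):
    M = build_m1(p, q, r)
    sizes = [p + 2, q + 2, r + 2]
    sums = [q + 3, r + 3, p + 3]          # l(A_i), l(B_j), l(C_k)
    inner = [q + 2, r + 2, p + 2]         # within-family inner products
    blocks, start = [], 0
    for n in sizes:
        blocks.append(M[start:start + n])
        start += n
    for blk, s, ip in zip(blocks, sums, inner):
        assert all(sum(row) == s for row in blk)
        assert all(sum(a * b for a, b in zip(blk[i], blk[j])) == ip
                   for i in range(len(blk)) for j in range(len(blk)) if i != j)
    for x in range(3):                    # cross-family inner products = 1
        for y in range(3):
            if x != y:
                assert all(sum(a * b for a, b in zip(u, v)) == 1
                           for u in blocks[x] for v in blocks[y])
    return True

assert all(verify(p, q, r)
           for p in range(3) for q in range(3) for r in range(3))
```

In this reconstruction, the cross-family inner products are carried entirely by the shared column, exactly as in Case~1.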
One can analyze the combinations of the submatrices $B, C$ and $C, A$ in a similar way. In summary, \begin{lemma}\label{lemma:Wpqr} The inner products occur either on the columns of the submatrices $A, B, C$ consisting of only $1$'s, or on the identity submatrix of $A(I), B(I), C(I)$ together with the submatrix of $A, B, C$ consisting of only $1$'s, under suitable conditions on the number of columns as in Equation~\eqref{equation:Wpqr-AB-inequality}. \end{lemma} \subsection{All possible incidence matrices} Suppose that the inner products occur only on the columns of the submatrices consisting of only $1$'s. Then there are only two possibilities: Either \begin{center} \includegraphics{figure-for-arXiv-17} \end{center} or \begin{center} \includegraphics{figure-for-arXiv-18} \end{center} where $A', A''$, $B', B''$, $C', C''$ are the submatrices obtained from $A, B, C$ by deleting one or two columns consisting of only $1$'s, respectively. So if any of the submatrices $A(II)$, $B(II)$, or $C(II)$ appears in the above incidence matrices, then $M(1)$ can occur under the conditions \begin{equation}\label{equation:condition-M(1)-with-II} \text{$p \ge 1$, $q+2-p \ge 1$, or $q \ge 1$, $r+2-q \ge 1$, or $r \ge 1$, $p+2-r \ge 1$,} \end{equation} respectively, and $M(2)$ can occur under the conditions \begin{equation}\label{equation:condition-M(2)-with-II} \text{$p \ge 1$, $q+2-p \ge 2$, or $q \ge 1$, $r+2-q \ge 2$, or $r \ge 1$, $p+2-r \ge 2$,} \end{equation} respectively. Suppose that the inner product occurs on the identity submatrix of $A(I)$ and the submatrix of $B$ consisting of only $1$'s as in Case~4. Then the inner products on $A, C$ and $B, C$ must occur on the columns consisting of only $1$'s by Lemma~\ref{lemma:Wpqr}. Therefore we have \begin{center} \includegraphics{figure-for-arXiv-19} \end{center} where $B'$ and $C'$ are the remaining parts of $B$ and $C$, respectively.
Notice that the above configuration can happen under the condition \begin{equation*} l := \begin{cases} r-p \ge 1 & \text{for $B(I)$} \\ r-p-q \ge 1 & \text{for $B(II)$} \end{cases} \end{equation*} We claim that the above configuration can happen only for $C(I)$: The submatrix $C(II)$ must have at least two columns consisting of only $1$'s. So $p+2-r \ge 2$, which, however, contradicts the above condition $l \ge 1$. Therefore we have the following incidence matrix \begin{center} \includegraphics{figure-for-arXiv-20} \end{center} under the condition \begin{equation}\label{equation:condition-M(3)} l := \begin{cases} r-p \ge 1 & \text{if $B=B(I)$} \\ r-p-q \ge 1, q \ge 1 & \text{if $B=B(II)$} \end{cases} \end{equation} Similarly, we have two more incidence matrices \begin{center} \includegraphics{figure-for-arXiv-21} \end{center} under the condition \begin{equation}\label{equation:condition-M(4)} l := \begin{cases} p-q \ge 1 & \text{if $C=C(I)$} \\ p-q-r \ge 1, r \ge 1 & \text{if $C=C(II)$} \end{cases} \end{equation} and \begin{center} \includegraphics{figure-for-arXiv-22} \end{center} under the condition \begin{equation}\label{equation:condition-M(5)} l := \begin{cases} q-r \ge 1 & \text{if $A=A(I)$} \\ q-r-p \ge 1, p \ge 1 & \text{if $A=A(II)$} \end{cases} \end{equation} Finally, suppose that the inner product occurs on the identity submatrix of $A(I)$ and the submatrix of $B$ consisting of only $1$'s as in Case~2.
Then there is only one possible incidence matrix \begin{center} \includegraphics{figure-for-arXiv-23} \end{center} \begin{proposition}\label{proposition:Wpqr-all-incidence-matrices} There are six types of incidence matrices $M(1), \dotsc, M(6)$ for the singularity $W_{p,q,r}$ under the suitable conditions on $p,q,r$ described in Equation~\eqref{equation:condition-M(1)-with-II} for $M(1)$, \eqref{equation:condition-M(2)-with-II} for $M(2)$, \eqref{equation:condition-M(3)} for $M(3)$, \eqref{equation:condition-M(4)} for $M(4)$, and \eqref{equation:condition-M(5)} for $M(5)$. \end{proposition} \section{Kollár conjecture for the singularities $W_{p,q,r}$}\label{section:Wpqr-K-Conjecture} In this section we prove: \begin{theorem}\label{theorem:Wpqr-K-conjecture} Kollár conjecture holds for the singularities $W_{p,q,r}$. \end{theorem} We divide the proof according to the types of the incidence matrices of $W_{p,q,r}$. \subsection{P-modifications for the incidence matrices of type $M(1)$ or $M(2)$} We will show that all P-modifications (indeed, P-resolutions) for the incidence matrices of type $M(1)$ or $M(2)$ are dominated by the minimal resolution of $W_{p,q,r}$. \begin{proposition}\label{proposition:Wpqr-P-modifications-M(1)orM(2)} Each incidence matrix of type $M(1)$ or $M(2)$ corresponds to one of the following M-resolutions of $W_{p,q,r}$. \begin{enumerate}[(1)] \item The minimal resolution itself. It corresponds to the incidence matrix $M(1)$ with $A(I), B(I), C(I)$. \item An M-resolution obtained by contracting the central $(-4)$-curve. It corresponds to the incidence matrix $M(2)$ with $A(I), B(I), C(I)$. \item An M-resolution obtained by contracting the $(-p-3)$-curve and the $(p-1)$ $(-2)$-curves that follow it (that is, the Wahl configuration $[p+3,2,\dotsc,2]$) from the $A$-arm (assuming $p \ge 1$ and $q \ge p-1$). It corresponds to the incidence matrix $M(1)$ with $A(II)$, but $B(I)$ and $C(I)$.
\item An M-resolution obtained by contracting the Wahl configuration $[q+3,2,\dotsc,2]$ from the $B$-arm (assuming $q \ge 1$ and $r \ge q-1$). It corresponds to the incidence matrix $M(1)$ with $A(I), B(II), C(I)$. \item An M-resolution obtained by contracting the Wahl configuration $[r+3,2,\dotsc,2]$ from the $C$-arm (assuming $r \ge 1$ and $p \ge r-1$). It corresponds to the incidence matrix $M(1)$ with $A(I), B(I), C(II)$. \item Any M-resolutions obtained by combining (3), (4), or (5) under the necessary hypotheses on $p, q, r$. They correspond to the incidence matrix $M(1)$ whose submatrices $A, B, C$ are determined by combining as in (3), (4), (5). \item Any M-resolutions obtained by combining (3), (4), or (5) with (2) under the stronger conditions $q \ge p$, $p \ge r$, and $r \ge q$, respectively (in order to ensure that the contractions can be performed independently). They correspond to the incidence matrix $M(2)$ whose submatrices $A, B, C$ are determined by combining as in (3), (4), (5). \end{enumerate} \end{proposition} We now prove the above proposition. \subsubsection*{The M-resolution (1)} Let $Z_0$ be the compactified M-resolution corresponding to the minimal resolution and let $\mathcal{Z} \to \Delta$ be the $\mathbb{Q}$-Gorenstein smoothing of $Z_0$. Let $F$ be the incidence matrix of type $M(1)$ with $A(I), B(I), C(I)$. On the central fiber $Z_0$, each $(-1)$-curve between the decorated curve $\widetilde{A}_i$ and the $(-p-3)$-curve gives rise to a $(-1)$-curve on a general fiber $Z_t$ that intersects $\widetilde{A}_i$ transversally and meets no other decorated curve. Therefore the $(p+2)$ $(-1)$-curves intersecting the $(-p-3)$-curve induce the $(p+2) \times (p+2)$ identity submatrix in $A(I)'$. We then apply divisorial contractions of the $(p+2)$ $(-1)$-curves to $\mathcal{Z} \to \Delta$.
Then we have the following configuration on the central fiber $Z_0'$ of the blown-down deformation $\mathcal{Z}' \to \Delta$: \begin{equation*} \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-4505) at (-4.5,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-45-05) at (-4.5,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-4.5,0.25)--(-4.5,-0.25); \node[bullet] (-350) at (-3.5,0) [labelAbove={$-1$},label=below:{$E_A$}] {}; \draw[-] (-4505)--(-350); \draw[-] (-45-05)--(-350); \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-2.5,-0.15)--(-1,-0.15) node[pos=0.5,below=0.1em,black]{$q$ $(-2)$'s}; \end{tikzpicture} \end{equation*} The $(-1)$-curve $E_A$ in the above configuration deforms to a $(-1)$-curve in a general fiber $Z_t'$ intersecting all of the decorated curves $\widetilde{A}_i$. But $E_A$ does not intersect any of the other decorated curves $\widetilde{B}_j$ and $\widetilde{C}_k$. Therefore $E_A$ induces a column of the submatrix $A(I)'$ consisting of only $1$'s.
We again apply a divisorial contraction of $E_A$ to $\mathcal{Z}' \to \Delta$ so that we have a new deformation $\mathcal{Z}'' \to \Delta$ whose central fiber $Z_0''$ contains the configuration \begin{equation*} \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-4505) at (-4.5,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-45-05) at (-4.5,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-4.5,0.25)--(-4.5,-0.25); \node[bullet] (-350) at (-3.5,0) [labelAbove={$-1$},label=below:{$E_1$}] {}; \draw[-] (-4505)--(-350); \draw[-] (-45-05)--(-350); \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-2.5,-0.15)--(-1,-0.15) node[pos=0.5,below=0.1em,black]{($q-1$) $(-2)$'s}; \end{tikzpicture} \end{equation*} So, for the same reason as before, the $(-1)$-curve $E_1$ induces a column of the submatrix $A(I)'$ consisting of only $1$'s again. Consequently, by repeating this procedure until the $q$th $(-2)$-curve, we obtain the part of the submatrix $A(I)'$ consisting of $(q+1)$ columns containing only $1$'s. In a similar way we obtain the parts $B(I)'$ and $C(I)'$ in the incidence matrix $F$ from the M-resolution $Z_0$ by applying only divisorial contractions.
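As a bookkeeping remark, the columns of $A(I)'$ obtained so far can be displayed in block form; the column ordering below, which follows the order in which the $(-1)$-curves were produced, is our own convention.

```latex
% Columns of A(I)' obtained so far (rows indexed by A_1, ..., A_{p+2}):
% the (p+2) contracted (-1)-curves give an identity block, and the
% curves E_A, E_1, and so on give (q+1) columns consisting of 1's.
\begin{equation*}
\Bigl[\; I_{p+2} \;\Big|\; \underbrace{\mathbf{1} \;\; \cdots \;\; \mathbf{1}}_{q+1} \;\Bigr],
\qquad
\mathbf{1} = (1, \dotsc, 1)^{T} \in \mathbb{Z}^{p+2}.
\end{equation*}
```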
Applying the divisorial contractions to the $A, B, C$-arms as above, we will come to the following situation: \begin{equation*} \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-205) at (-2,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-2-05) at (-2,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-2,0.25)--(-2,-0.25); \node[bullet] (-10) at (-1,0) [labelAbove={$-1$}] {}; \draw[-] (-205)--(-10); \draw[-] (-2-05)--(-10); \node[bullet] (205) at (2,0.5) [label=right:{$\widetilde{B}_1$}] {}; \node[bullet] (2-05) at (2,-0.5) [label=right:{$\widetilde{B}_{q+2}$}] {}; \draw[dotted] (2,0.25)--(2,-0.25); \node[bullet] (10) at (1,0) [labelAbove={$-1$}] {}; \draw[-] (205)--(10); \draw[-] (2-05)--(10); \node[bullet] (-05-2) at (-0.5,-2) [label=below:{$\widetilde{C}_1$}] {}; \node[bullet] (05-2) at (0.5,-2) [label=below:{$\widetilde{C}_{r+2}$}] {}; \draw[dotted] (0.25,-2)--(-0.25,-2); \node[bullet] (0-1) at (0,-1) [labelAbove={$-1$}] {}; \draw[-] (05-2)--(0-1); \draw[-] (-05-2)--(0-1); \draw[-] (-10)--(00); \draw[-] (10)--(00); \draw[-] (0-1)--(00); \end{tikzpicture} \end{equation*} We apply the divisorial contractions to the three $(-1)$-curves.
Then we have \begin{equation*} \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-1$},labelBelow={$E_0$}] {}; \node[bullet] (-105) at (-1,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-1-05) at (-1,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-1,0.25)--(-1,-0.25); \draw[-] (-105)--(00); \draw[-] (-1-05)--(00); \node[bullet] (105) at (1,0.5) [label=right:{$\widetilde{B}_1$}] {}; \node[bullet] (1-05) at (1,-0.5) [label=right:{$\widetilde{B}_{q+2}$}] {}; \draw[dotted] (1,0.25)--(1,-0.25); \draw[-] (105)--(00); \draw[-] (1-05)--(00); \node[bullet] (-05-1) at (-0.5,-1) [label=below:{$\widetilde{C}_1$}] {}; \node[bullet] (05-1) at (0.5,-1) [label=below:{$\widetilde{C}_{r+2}$}] {}; \draw[dotted] (0.25,-1)--(-0.25,-1); \draw[-] (05-1)--(00); \draw[-] (-05-1)--(00); \end{tikzpicture} \end{equation*} Therefore the $(-1)$-curve $E_0$ induces the first column of the incidence matrix $F$ consisting of only $1$'s. Thus the M-resolution given by the minimal resolution corresponds to the incidence matrix $F$, as asserted in (1) of Proposition~\ref{proposition:Wpqr-P-modifications-M(1)orM(2)}. \subsubsection*{The M-resolution (2)} Let $Z_0$ be the compactified M-resolution given in Proposition~\ref{proposition:Wpqr-P-modifications-M(1)orM(2)}(2) and let $\mathcal{Z} \to \Delta$ be the $\mathbb{Q}$-Gorenstein smoothing of $Z_0$. Let $F$ be the incidence matrix of type $M(2)$ with $A(I), B(I), C(I)$. For exactly the same reason as in the case of the M-resolution (1) above, the M-resolution $Z_0$ induces the submatrices $A(I)', B(I)', C(I)'$ in the incidence matrix $F$.
By applying the divisorial contractions to the three $(-1)$-curves in each arm (and the induced $(-1)$-curves after the divisorial contractions) we will come to the following situation: \begin{equation*} \begin{tikzpicture} \node[rectangle] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-205) at (-2,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-2-05) at (-2,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-2,0.25)--(-2,-0.25); \node[bullet] (-10) at (-1,0) [labelAbove={$-1$}] {}; \draw[-] (-205)--(-10); \draw[-] (-2-05)--(-10); \node[bullet] (205) at (2,0.5) [label=right:{$\widetilde{B}_1$}] {}; \node[bullet] (2-05) at (2,-0.5) [label=right:{$\widetilde{B}_{q+2}$}] {}; \draw[dotted] (2,0.25)--(2,-0.25); \node[bullet] (10) at (1,0) [labelAbove={$-1$}] {}; \draw[-] (205)--(10); \draw[-] (2-05)--(10); \node[bullet] (-05-2) at (-0.5,-2) [labelBelow={$\widetilde{C}_1$}] {}; \node[bullet] (05-2) at (0.5,-2) [labelBelow={$\widetilde{C}_{r+2}$}] {}; \draw[dotted] (0.25,-2)--(-0.25,-2); \node[bullet] (0-1) at (0,-1) [label=left:{$-1$}] {}; \draw[-] (05-2)--(0-1); \draw[-] (-05-2)--(0-1); \draw[-] (-10)--(00); \draw[-] (10)--(00); \draw[-] (0-1)--(00); \end{tikzpicture} \end{equation*} Then, as in Section~\ref{section:[4]}, we obtain the first three columns of the incidence matrix $F$ by applying the usual flip followed by two divisorial contractions. \subsubsection*{The M-resolutions (3), (4), (5)} It is enough to consider the M-resolution (3). As before, let $Z_0$ be the compactified M-resolution given in Proposition~\ref{proposition:Wpqr-P-modifications-M(1)orM(2)}(3) and let $\mathcal{Z} \to \Delta$ be the $\mathbb{Q}$-Gorenstein smoothing of $Z_0$. Let $F$ be the incidence matrix $M(1)$ with $A(II)$, but $B(I)$ and $C(I)$. Notice that the same explanation applies to the parts $B(I)$ and $C(I)$ of the incidence matrix $F$ as in the preceding case (1). Therefore, it remains to show how $A(II)$ appears in $F$.
The M-resolution $Z_0$ contains the following configuration: \begin{equation*} \begin{tikzpicture} \node[bullet] (250) at (2.5,0) [labelAbove={$-4$}] {}; \node[bullet] (-505) at (-5,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-5-05) at (-5,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-5,0.25)--(-5,-0.25); \node[bullet] (-4505) at (-4.5,0.5) [labelAbove={$-1$}] {}; \node[bullet] (-45-05) at (-4.5,-0.5) [labelBelow={$-1$}] {}; \draw[dotted] (-4.5,0.25)--(-4.5,-0.25); \node[rectangle] (-350) at (-3.5,0) [labelAbove={$-(p+3)$},labelBelow={$G_0$}] {}; \draw[-] (-4505)--(-505); \draw[-] (-45-05)--(-5-05); \draw[-] (-4505)--(-350); \draw[-] (-45-05)--(-350); \node[rectangle] (-250) at (-2.5,0) [labelAbove={$-2$},label=below:{$G_1$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[rectangle] (-10) at (-1,0) [labelAbove={$-2$},label=below:{$G_{p-1}$}] {}; \node[bullet] (00) at (0,0) [labelAbove={$-2$},label=below:{$G_p$}] {}; \node[empty] (050) at (0.5,0) [] {}; \node[empty] (10) at (1,0) [] {}; \node[bullet] (150) at (1.5,0) [labelAbove={$-2$},label=below:{$G_{q+2}$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw[-] (00)--(050); \draw[dotted] (050)--(10); \draw[-] (10)--(150); \draw[-] (150)--(250); \end{tikzpicture} \end{equation*} Applying the usual flips successively to the $(-1)$-curves attached to $\widetilde{A}_1,\dotsc,\widetilde{A}_p$ in this order, we have the new central fiber $Z_0'$ of the flipped deformation $\mathcal{Z}' \to \Delta$ containing the following configuration \begin{equation*} \begin{tikzpicture} \node[bullet] (250) at (2.5,0) [labelAbove={$-4$}] {}; \node[bullet] (-451) at (-4.5,1) [label=left:{$\widetilde{A}_1'$}] {}; \node[bullet] (-450) at (-4.5,0) [label=left:{$\widetilde{A}_p'$}] {}; \node[bullet] (-55-05) at (-5.5,-0.5) [label=left:{$\widetilde{A}_{p+1}'$}] {}; \node[bullet] (-55-125)
at (-5.5,-1.25) [label=left:{$\widetilde{A}_{p+2}'$}] {}; \draw[dotted] (-4.5,0.75)--(-4.5,0.25); \node[bullet] (-45-05) at (-4.5,-0.5) [labelBelow={$-1$}] {}; \node[bullet] (-45-125) at (-4.5,-1.25) [labelBelow={$-1$}] {}; \node[bullet] (-350) at (-3.5,0) [labelAbove={$-3$},label=below:{$G_0'$}] {}; \draw[-] (-451)--(-350); \draw[-] (-450)--(-350); \draw[-] (-55-05)--(-45-05); \draw[-] (-55-125)--(-45-125); \draw[-] (-45-05)--(-350); \draw[-] (-45-125)--(-350); \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$},label=below:{$G_1$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$},label=below:{$G_{p-1}$}] {}; \node[bullet] (00) at (0,0) [labelAbove={$-2$},label=below:{$G_p$}] {}; \node[empty] (050) at (0.5,0) [] {}; \node[empty] (10) at (1,0) [] {}; \node[bullet] (150) at (1.5,0) [labelAbove={$-2$},label=below:{$G_{q+2}$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw[-] (00)--(050); \draw[dotted] (050)--(10); \draw[-] (10)--(150); \draw[-] (150)--(250); \end{tikzpicture} \end{equation*} Notice that the decorated curves $\widetilde{A}_i$ in a general fiber $Z_t'$ of $\mathcal{Z}' \to \Delta$ are degenerated in the central fiber $Z_0'$ as follows: \begin{align*} \widetilde{A}_1 &= \widetilde{A}_1' + G_0' + \dotsb + G_{p-1} \\ \widetilde{A}_2 &= \widetilde{A}_2' + G_0' + \dotsb + G_{p-2} \\ &\vdots\\ \widetilde{A}_p &= \widetilde{A}_p' + G_0' \\ \widetilde{A}_{p+1} &= \widetilde{A}_{p+1}' \\ \widetilde{A}_{p+2} &= \widetilde{A}_{p+2}' \end{align*} So the $(-1)$-curve connected to $\widetilde{A}_{p+1}'$ in the central fiber $Z_0'$ deforms to a $(-1)$-curve in a general fiber $Z_t'$ that intersects $\widetilde{A}_1, \dotsc, \widetilde{A}_{p+1}$ (but not $\widetilde{A}_{p+2}$), which induces the first column of $A(II)'$ in the incidence matrix $F$ that contains $0$.
Similarly, the $(-1)$-curve connected to $\widetilde{A}_{p+2}'$ induces the second column of $A(II)'$ in $F$ that contains $0$. Applying the divisorial contractions to the two $(-1)$-curves, the new central fiber $Z_0''$ contains the configuration \begin{equation*} \begin{tikzpicture} \node[bullet] (250) at (2.5,0) [labelAbove={$-4$}] {}; \node[bullet] (-451) at (-4.5,1) [label=left:{$\widetilde{A}_1'$}] {}; \node[bullet] (-450) at (-4.5,0) [label=left:{$\widetilde{A}_p'$}] {}; \node[bullet] (-45-05) at (-4.5,-0.5) [label=left:{$\widetilde{A}_{p+1}'$}] {}; \node[bullet] (-45-1) at (-4.5,-1) [label=left:{$\widetilde{A}_{p+2}'$}] {}; \draw[dotted] (-4.5,0.75)--(-4.5,0.25); \node[bullet] (-350) at (-3.5,0) [labelAbove={$-1$},label=below:{$G_0'$}] {}; \draw[-] (-451)--(-350); \draw[-] (-450)--(-350); \draw[-] (-45-05)--(-350); \draw[-] (-45-1)--(-350); \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$},label=below:{$G_1$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$},label=below:{$G_{p-1}$}] {}; \node[bullet] (00) at (0,0) [labelAbove={$-2$},label=below:{$G_p$}] {}; \node[empty] (050) at (0.5,0) [] {}; \node[empty] (10) at (1,0) [] {}; \node[bullet] (150) at (1.5,0) [labelAbove={$-2$},label=below:{$G_{q+2}$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw[-] (00)--(050); \draw[dotted] (050)--(10); \draw[-] (10)--(150); \draw[-] (150)--(250); \end{tikzpicture} \end{equation*} Then the $(-1)$-curve $G_0'$ in the central fiber $Z_0''$ deforms to a $(-1)$-curve in a general fiber $Z_t''$ that intersects $\widetilde{A}_1, \dotsc, \widetilde{A}_{p-1}$ and $\widetilde{A}_{p+1}, \widetilde{A}_{p+2}$, but not $\widetilde{A}_{p}$. Therefore $G_0'$ induces the third column of $A(II)'$ in the incidence matrix $F$ that contains $0$.
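Anticipating the repetition carried out below, one can record the $0$-containing columns of $A(II)'$ explicitly: each such column has exactly one entry $0$, and these zeros occur in pairwise distinct rows. The pattern for the intermediate columns is our extrapolation of the first and third columns computed above; up to reordering the columns, they assemble into

```latex
% The 0-containing columns of A(II)', rows indexed by A_1, ..., A_{p+2};
% after a suitable reordering of the columns the zeros lie on the diagonal.
\begin{equation*}
\mathbf{1}\mathbf{1}^{T} - I_{p+2}
=
\begin{pmatrix}
0 & 1 & \cdots & 1 \\
1 & 0 & \cdots & 1 \\
\vdots & \vdots & \ddots & \vdots \\
1 & 1 & \cdots & 0
\end{pmatrix},
\qquad
\mathbf{1} = (1, \dotsc, 1)^{T} \in \mathbb{Z}^{p+2}.
\end{equation*}
```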
By applying the divisorial contraction to $G_0'$, we obtain a new $(-1)$-curve $G_1'$ from $G_1$. This new $(-1)$-curve $G_1'$ induces the fourth column of $A(II)'$ in $F$ that contains $0$. Repeating this process until $G_{p-1}'$, we have all columns of $A(II)'$ that contain $0$. After that, we have the following configuration in the central fiber $Z_0'''$ \begin{equation*} \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-4$}] {}; \node[bullet] (-4505) at (-4.5,0.5) [label=left:{$\widetilde{A}_1$}] {}; \node[bullet] (-45-05) at (-4.5,-0.5) [label=left:{$\widetilde{A}_{p+2}$}] {}; \draw[dotted] (-4.5,0.25)--(-4.5,-0.25); \node[bullet] (-350) at (-3.5,0) [labelAbove={$-1$},label=below:{$G_p$}] {}; \draw[-] (-4505)--(-350); \draw[-] (-45-05)--(-350); \node[bullet] (-250) at (-2.5,0) [labelAbove={$-2$},label=below:{$G_{p+1}$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-2$},label=below:{$G_{q+2}$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \end{tikzpicture} \end{equation*} Notice that there are no degenerations of the decorated curves $\widetilde{A}_i$ in a general fiber $Z_t'''$ as they are deformed into the central fiber $Z_0'''$. Therefore we obtain the remaining parts of $F$, as in Case~(1). \subsubsection*{The M-resolutions (6) and (7)} Since the MMP can be applied independently to each arm, we obtain the combined submatrices of the incidence matrix $F$ from any combination of (3), (4), or (5), with or without (2). This completes the proof of Proposition~\ref{proposition:Wpqr-P-modifications-M(1)orM(2)}. \subsection{P-resolutions for the incidence matrices of type $M(3)$, $M(4)$, or $M(5)$} If $r-p \ge 1$ and $q \ge 1$, then we can construct a P-resolution of $W_{p,q,r}$ whose dual graph is given in Figure~\ref{figure:Wpqr-P-for-M(3)-I-q>=1}.
It has two T-singularities: \begin{equation*} \text{$[p+3,\underbrace{2,\cdots,2}_{q}, q+5, \underbrace{2,\dotsc,2}_{p+1}]$ and $[p+3,\underbrace{2,\dotsc,2}_{q-1},3,\underbrace{2,\dotsc,2}_{p}]$} \end{equation*} If $r-p \ge 1$ but $q=0$, then we can construct a P-resolution of $W_{p,q,r}$ as in Figure~\ref{figure:Wpqr-P-for-M(3)-I-q=0} which has two T-singularities: \begin{equation*} \text{$[p+4,\underbrace{2,\dotsc,2}_{p}]$ and $[p+5,\underbrace{2,\dotsc,2}_{p+1}]$} \end{equation*} \begin{figure} \centering \scalebox{0.75}{% \begin{tikzpicture} \node[rectangle] (00) at (0,0) [labelAbove={$-q-5$}] {}; \node[rectangle] (-350) at (-3.5,0) [labelAbove={$-(p+3)$}] {}; \node[rectangle] (-250) at (-2.5,0) [labelAbove={$-2$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[rectangle] (-10) at (-1,0) [labelAbove={$-2$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-2.5,-0.15)--(-1,-0.15) node[pos=0.5,below=0.1em,black]{$q$}; \node[rectangle] (10) at (1,0) [labelAbove={$-2$}] {}; \node[empty] (150) at (1.5,0) [] {}; \node[empty] (20) at (2,0) [] {}; \node[rectangle] (250) at (2.5,0) [labelAbove={$-2$}] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-1$}] {}; \node[rectangle] (450) at (4.5,0) [labelAbove={$-p-3$}] {}; \node[rectangle] (550) at (5.5,0) [labelAbove={$-2$}] {}; \node[empty] (60) at (6,0) [] {}; \node[empty] (650) at (6.5,0) [] {}; \node[rectangle] (70) at (7,0) [labelAbove={$-2$}] {}; \node[rectangle] (80) at (8,0) [labelAbove={$-3$}] {}; \node[rectangle] (90) at (9,0) [labelAbove={$-2$}] {}; \node[empty] (950) at (9.5,0) [] {}; \node[empty] (100) at (10,0) [] {}; \node[rectangle] (1050) at (10.5,0) [labelAbove={$-2$}] {}; \node[bullet] (105-1) at (10.5,-1) [label=left:{$-2$}] {}; \node[empty] (105-15) at (10.5,-1.5) [] {}; \node[empty] (105-2) at (10.5,-2) [] {}; \node[bullet] 
(105-25) at (10.5,-2.5) [label=left:{$-2$}] {}; \node[bullet] (105-35) at (10.5,-3.5) [label=left:{$-(q+3)$}] {}; \draw[-] (00)--(10); \draw[-] (10)--(150); \draw[dotted] (150)--(20); \draw[-] (20)--(250); \draw [decorate, decoration = {calligraphic brace,mirror}] (1,-0.15)--(2.5,-0.15) node[pos=0.5,below=0.1em,black]{$p+1$}; \draw[-] (250)--(350); \draw[-] (350)--(450); \draw[-] (450)--(550); \draw[-] (550)--(60); \draw[dotted] (60)--(650); \draw[-] (650)--(70); \draw [decorate, decoration = {calligraphic brace,mirror}] (5.5,-0.15)--(7,-0.15) node[pos=0.5,below=0.1em,black]{$q-1$}; \draw[-] (70)--(80); \draw[-] (80)--(90); \draw[-] (90)--(950); \draw[dotted] (950)--(100); \draw[-] (100)--(1050); \draw [decorate, decoration = {calligraphic brace,mirror}] (9,-0.15)--(10.5,-0.15) node[pos=0.5,below=0.1em,black]{$p$}; \draw[-] (1050)--(105-1); \draw[-] (105-1)--(105-15); \draw[dotted] (105-15)--(105-2); \draw[-] (105-2)--(105-25); \draw [decorate, decoration = {calligraphic brace, mirror}] (10.65,-2.5)--(10.65,-1) node[pos=0.5,right=0.1em,black]{$r-p-1$}; \draw[-] (105-25)--(105-35); \node[bullet] (0-35) at (0,-3.5) [label=left:{$-(r+3)$}] {}; \node[bullet] (0-25) at (0,-2.5) [label=left:{$-2$}] {}; \node[empty] (0-2) at (0,-2) [] {}; \node[empty] (0-15) at (0,-1.5) [] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \draw[-] (0-35)--(0-25); \draw[-] (0-25)--(0-2); \draw[dotted] (0-2)--(0-15); \draw[-] (0-15)--(0-1); \draw[-] (0-1)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (0.15,-2.5)--(0.15,-1) node[pos=0.5,right=0.1em,black]{$p$}; \end{tikzpicture}} \caption{The P-resolution for $M(3)$ with $B(I)$ for $r-p \ge 1$, $q \ge 1$} \label{figure:Wpqr-P-for-M(3)-I-q>=1} \end{figure} \begin{figure} \centering \scalebox{0.75}{% \begin{tikzpicture} \node[rectangle] (00) at (0,0) [labelAbove={$-p-5$}] {}; \node[rectangle] (-450) at (-4.5,0) [labelAbove={$-(p+4)$}] {}; \node[rectangle] (-350) at (-3.5,0) [labelAbove={$-2$}] {}; \node[empty] (-30) 
at (-3,0) [] {}; \node[empty] (-250) at (-2.5,0) [] {}; \node[rectangle] (-20) at (-2,0) [labelAbove={$-2$}] {}; \node[bullet] (-10) at (-1,0) [labelAbove={$-1$}] {}; \draw[-] (-450)--(-350); \draw[-] (-350)--(-30); \draw[dotted] (-30)--(-250); \draw[-] (-250)--(-20); \draw[-] (-20)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-3.5,-0.15)--(-2,-0.15) node[pos=0.5,below=0.1em,black]{$p$}; \node[bullet] (60) at (6,0) [labelAbove={$-3$}] {}; \node[bullet] (50) at (5,0) [labelAbove={$-2$}] {}; \node[empty] (450) at (4.5,0) [] {}; \node[empty] (40) at (4,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-2$}] {}; \node[rectangle] (250) at (2.5,0) [labelAbove={$-2$}] {}; \node[empty] (20) at (2,0) [] {}; \node[empty] (150) at (1.5,0) [] {}; \node[rectangle] (10) at (1,0) [labelAbove={$-2$}] {}; \draw[-] (60)--(50); \draw[-] (50)--(450); \draw[dotted] (450)--(40); \draw[-] (40)--(350); \draw [decorate, decoration = {calligraphic brace,mirror}] (3.5,-0.15)--(5,-0.15) node[pos=0.5,below=0.1em,black]{$r-p-1$}; \draw[-] (350)--(250); \draw[-] (250)--(20); \draw[dotted] (20)--(150); \draw[-] (150)--(10); \draw[-] (10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (1,-0.15)--(2.5,-0.15) node[pos=0.5,below=0.1em,black]{$p+1$}; \node[bullet] (0-35) at (0,-3.5) [label=left:{$-(r+3)$}] {}; \node[bullet] (0-25) at (0,-2.5) [label=left:{$-2$}] {}; \node[empty] (0-2) at (0,-2) [] {}; \node[empty] (0-15) at (0,-1.5) [] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \draw[-] (0-35)--(0-25); \draw[-] (0-25)--(0-2); \draw[dotted] (0-2)--(0-15); \draw[-] (0-15)--(0-1); \draw[-] (0-1)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (0.15,-2.5)--(0.15,-1) node[pos=0.5,right=0.1em,black]{$p$}; \end{tikzpicture}} \caption{The P-resolution for $M(3)$ with $B(I)$ for $r-p \ge 1$, $q=0$} \label{figure:Wpqr-P-for-M(3)-I-q=0} \end{figure} Moreover, if $r-p-q \ge 1$ and $q \ge 1$, we can contract the
Wahl configuration $2-\dotsb-2-(q+3)$ in the $B$-arm to obtain another P-resolution of $W_{p,q,r}$ as shown in Figure~\ref{figure:Wpqr-P-for-M(3)-II-q>=1} which has one more T-singularity in addition to the two singularities: \begin{equation*} [\underbrace{2,\dotsc,2}_{q-1},q+3] \end{equation*} \begin{figure} \centering \scalebox{0.75}{% \begin{tikzpicture} \node[rectangle] (00) at (0,0) [labelAbove={$-q-5$}] {}; \node[rectangle] (-350) at (-3.5,0) [labelAbove={$-(p+3)$}] {}; \node[rectangle] (-250) at (-2.5,0) [labelAbove={$-2$}] {}; \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[rectangle] (-10) at (-1,0) [labelAbove={$-2$}] {}; \draw[-] (-350)--(-250); \draw[-] (-250)--(-20); \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (-2.5,-0.15)--(-1,-0.15) node[pos=0.5,below=0.1em,black]{$q$}; \node[rectangle] (10) at (1,0) [labelAbove={$-2$}] {}; \node[empty] (150) at (1.5,0) [] {}; \node[empty] (20) at (2,0) [] {}; \node[rectangle] (250) at (2.5,0) [labelAbove={$-2$}] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-1$}] {}; \node[rectangle] (450) at (4.5,0) [labelAbove={$-p-3$}] {}; \node[rectangle] (550) at (5.5,0) [labelAbove={$-2$}] {}; \node[empty] (60) at (6,0) [] {}; \node[empty] (650) at (6.5,0) [] {}; \node[rectangle] (70) at (7,0) [labelAbove={$-2$}] {}; \node[rectangle] (80) at (8,0) [labelAbove={$-3$}] {}; \node[rectangle] (90) at (9,0) [labelAbove={$-2$}] {}; \node[empty] (950) at (9.5,0) [] {}; \node[empty] (100) at (10,0) [] {}; \node[rectangle] (1050) at (10.5,0) [labelAbove={$-2$}] {}; \node[bullet] (105-1) at (10.5,-1) [label=left:{$-2$}] {}; \node[empty] (105-15) at (10.5,-1.5) [] {}; \node[empty] (105-2) at (10.5,-2) [] {}; \node[bullet] (105-25) at (10.5,-2.5) [label=left:{$-2$}] {}; \node[rectangle] (105-35) at (10.5,-3.5) [label=left:{$-2$}] {}; \node[empty] (105-4) at (10.5,-4) [] {}; \node[empty] (105-45) at (10.5,-4.5) [] 
{}; \node[rectangle] (105-5) at (10.5,-5) [label=left:{$-2$}] {}; \node[rectangle] (105-6) at (10.5,-6) [label=left:{$-(q+3)$}] {}; \draw[-] (00)--(10); \draw[-] (10)--(150); \draw[dotted] (150)--(20); \draw[-] (20)--(250); \draw [decorate, decoration = {calligraphic brace,mirror}] (1,-0.15)--(2.5,-0.15) node[pos=0.5,below=0.1em,black]{$p+1$}; \draw[-] (250)--(350); \draw[-] (350)--(450); \draw[-] (450)--(550); \draw[-] (550)--(60); \draw[dotted] (60)--(650); \draw[-] (650)--(70); \draw [decorate, decoration = {calligraphic brace,mirror}] (5.5,-0.15)--(7,-0.15) node[pos=0.5,below=0.1em,black]{$q-1$}; \draw[-] (70)--(80); \draw[-] (80)--(90); \draw[-] (90)--(950); \draw[dotted] (950)--(100); \draw[-] (100)--(1050); \draw [decorate, decoration = {calligraphic brace,mirror}] (9,-0.15)--(10.5,-0.15) node[pos=0.5,below=0.1em,black]{$p$}; \draw[-] (1050)--(105-1); \draw[-] (105-1)--(105-15); \draw[dotted] (105-15)--(105-2); \draw[-] (105-2)--(105-25); \draw [decorate, decoration = {calligraphic brace, mirror}] (10.65,-2.5)--(10.65,-1) node[pos=0.5,right=0.1em,black]{$r-p-q$}; \draw[-] (105-25)--(105-35); \draw[-] (105-35)--(105-4); \draw[dotted] (105-4)--(105-45); \draw[-] (105-45)--(105-5); \draw [decorate, decoration = {calligraphic brace}] (10.65,-3.5)--(10.65,-5) node[pos=0.5,right=0.1em,black]{$q-1$}; \draw[-] (105-5)--(105-6); \node[bullet] (0-35) at (0,-3.5) [label=left:{$-(r+3)$}] {}; \node[bullet] (0-25) at (0,-2.5) [label=left:{$-2$}] {}; \node[empty] (0-2) at (0,-2) [] {}; \node[empty] (0-15) at (0,-1.5) [] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \draw[-] (0-35)--(0-25); \draw[-] (0-25)--(0-2); \draw[dotted] (0-2)--(0-15); \draw[-] (0-15)--(0-1); \draw[-] (0-1)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (0.15,-2.5)--(0.15,-1) node[pos=0.5,right=0.1em,black]{$p$}; \end{tikzpicture}} \caption{The P-resolution for $M(3)$ with $B(II)$ for $r-p-q \ge 1$, $q \ge 1$} \label{figure:Wpqr-P-for-M(3)-II-q>=1} \end{figure} 
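For the reader's convenience, the single Wahl chains appearing in the figures above can be verified by the standard Hirzebruch--Jung continued fraction computation; this routine check is not part of the argument. Using $[a_1,\dotsc,a_r] = a_1 - 1/[a_2,\dotsc,a_r]$ and $[\,\underbrace{2,\dotsc,2}_{m}\,] = (m+1)/m$, one finds, for $k \ge 1$,

```latex
% Continued-fraction check that [k+3,2,...,2] (with k-1 twos) is a Wahl
% chain, i.e. the resolution of the cyclic quotient (1/(k+1)^2)(1,k):
\begin{equation*}
[\,k+3,\underbrace{2,\dotsc,2}_{k-1}\,]
  \;=\; (k+3) - \frac{k-1}{k}
  \;=\; \frac{(k+1)^{2}}{k}.
\end{equation*}
```

Taking $k = p+1$, $k = p+2$, and $k = q$ recovers the chains $[p+4,2,\dotsc,2]$, $[p+5,2,\dotsc,2]$, and $[q+3,2,\dotsc,2]$ (the last one written in reverse order above), respectively.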
\begin{proposition}\label{proposition:Wpqr-P-modifications-M(3)M(4)orM(5)} Any incidence matrix of type $M(3)$ corresponds to one of the P-resolutions given in Figures~\ref{figure:Wpqr-P-for-M(3)-I-q>=1}, \ref{figure:Wpqr-P-for-M(3)-I-q=0}, and \ref{figure:Wpqr-P-for-M(3)-II-q>=1} according to the choice of $B$. Similarly, any incidence matrix of type $M(4)$ or $M(5)$ corresponds to the P-resolutions in the same figures; however, their subscripts are modified as follows: $(p,q,r)\rightarrow (q,r,p)$ for $M(4)$ and $(p,q,r) \rightarrow (r,p,q)$ for $M(5)$. \end{proposition} \begin{proof} We will prove the proposition in the case of the incidence matrix $F$ of type $M(3)$ with $B=B(I)$ under the condition $r-p \ge 1$ and $q \ge 1$. The other cases are proven in a similar fashion. Let $Z_0$ be the P-resolution given in Figure~\ref{figure:Wpqr-P-for-M(3)-I-q>=1} and let $\mathcal{Z} \to \Delta$ be a $\mathbb{Q}$-Gorenstein smoothing of the central fiber $Z_0$. At first, each $(-1)$-curve intersecting one of the decorated curves $\widetilde{B}_j$ in the central fiber $Z_0$ deforms to a $(-1)$-curve intersecting the same decorated curve $\widetilde{B}_j$ in a general fiber $Z_t$. So they induce the identity submatrix of $B(I)$ in the incidence matrix $F$. We apply divisorial contractions successively to $\mathcal{Z} \to \Delta$ along the $(-1)$-curves intersecting the $\widetilde{B}_j$'s. Then we have a deformation whose central fiber contains the following configuration \begin{equation*} \dotsb-1-[p+3,\underbrace{2,\dotsc,2}_{q-1},3,\underbrace{2,\dotsc,2}_{p}]-\underbrace{2-\dotsb-2}_{r-p-1}-1-\text{$\widetilde{B}_j$'s} \end{equation*} where all of the $\widetilde{B}_j$'s intersect the $(-1)$-curve (at different points). If $r-p-1 > 0$, then the $(-1)$-curve in the above configuration induces a column of $B(I)'$ consisting of only $1$'s. We then apply the divisorial contraction to the $(-1)$-curve.
Repeating this process, we obtain the $(r-p-1)$ columns of $B(I)'$ consisting of only $1$'s, and, after $(r-p-1)$ divisorial contractions, we have a new deformation that contains the following configuration in the central fiber: \begin{equation*} \dotsb-1-[p+3,\underbrace{2,\dotsc,2}_{q-1},3,\underbrace{2,\dotsc,2}_{p}]-1-\widetilde{B}_j \end{equation*} Consider the crepant M-resolution of the T-singularity \begin{equation*} \dotsb-1-[p+3,\underbrace{2,\dotsc,2}_{q-1},3,\underbrace{2,\dotsc,2}_{p}] \end{equation*} so that we have a deformation with the central fiber containing the configuration \begin{equation*} \dotsb-1-[p+4,\underbrace{2,\dotsc,2}_{p}]-1-\dotsb-[p+4,\underbrace{2,\dotsc,2}_{p}]-1-[p+4,\underbrace{2,\dotsc,2}_{p}]-1-\widetilde{B}_j \end{equation*} where there are $(q+1)$ $(-1)$-curves between the first and the last T-singularities $[p+4,2,\dotsc,2]$. Applying the usual flip to the $(-1)$-curve intersecting the $\widetilde{B}_j$'s, we have a flipped deformation with the following configuration in the central fiber: \begin{equation*} \dotsb-1-[p+4,\underbrace{2,\dotsc,2}_{p}]-1-\dotsb-[p+4,\underbrace{2,\dotsc,2}_{p}]-1-(p+3)-\widetilde{B}_j' \end{equation*} Notice that every decorated curve $\widetilde{B}_j$ in a general fiber is degenerated to the union of the $(-p-3)$-curve and $\widetilde{B}_j'$ in the central fiber. Repeating this process until there are no T-singularities coming from the above crepant M-resolution, we have a deformation whose central fiber contains the configuration \begin{equation*} \dotsb-1-(p+3)-\underbrace{2-\dotsb-2}_{q+1}-\widetilde{B}_j' \end{equation*} Every $\widetilde{B}_j$ in a general fiber is degenerated to \begin{equation*} (p+3) \cup \underbrace{2 \cup \dotsb \cup 2}_{q+1} \cup \widetilde{B}_j' \end{equation*} in the central fiber. Notice that no new $(-1)$-curve is induced in a general fiber during the above flips.
Finally, we apply the usual flips to the $(-1)$-curves intersecting the decorated curves $\widetilde{A}_i$, starting from $\widetilde{A}_1$ and proceeding to $\widetilde{A}_{p+2}$. Combined with the MMP described above along the $B$-arm, we have resolved all T-singularities and we obtain a deformation with the central fiber containing the configuration in Figure~\ref{figure:Wpqr-P-for-M(3)-I-resolved}. \begin{figure} \centering \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-q-4$},label=below:{$G_{p+2}$}] {}; \node[bullet] (-105) at (-1,0.5) [label=left:{$\widetilde{A}_1'$}] {}; \node[bullet] (-1-05) at (-1,-0.5) [label=left:{$\widetilde{A}_{p+2}'$}] {}; \draw[dotted] (-1,0.25)--(-1,-0.25); \draw[-] (-105)--(00); \draw[-] (-1-05)--(00); \node[bullet] (10) at (1,0) [labelAbove={$-2$},label=below:{$G_{p+1}$}] {}; \node[empty] (150) at (1.5,0) [] {}; \node[empty] (20) at (2,0) [] {}; \node[bullet] (250) at (2.5,0) [labelAbove={$-2$},label=below:{$G_1$}] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-1$},label=below:{$E$}] {}; \node[bullet] (450) at (4.5,0) [labelAbove={$-p-3$},label=below:{$H_{q+2}$}] {}; \node[bullet] (550) at (5.5,0) [labelAbove={$-2$},label=below:{$H_{q+1}$}] {}; \node[empty] (60) at (6,0) [] {}; \node[empty] (650) at (6.5,0) [] {}; \node[bullet] (70) at (7,0) [labelAbove={$-2$},label=below:{$H_1$}] {}; \node[bullet] (805) at (8,0.5) [label=right:{$\widetilde{B}_1'$}] {}; \node[bullet] (8-05) at (8,-0.5) [label=right:{$\widetilde{B}_{q+2}'$}] {}; \draw[dotted] (8,0.25)--(8,-0.25); \draw[-] (00)--(10); \draw[-] (10)--(150); \draw[dotted] (150)--(20); \draw[-] (20)--(250); \draw[-] (250)--(350); \draw[-] (350)--(450); \draw[-] (450)--(550); \draw[-] (550)--(60); \draw[dotted] (60)--(650); \draw[-] (650)--(70); \draw[-] (70)--(805); \draw[-] (70)--(8-05); \node[bullet] (-05-55) at (-0.5,-5.5) [label=below:{$\widetilde{C}_1$}] {}; \node[bullet] (05-55) at (0.5,-5.5) [label=below:{$\widetilde{C}_{r+2}$}] {}; \draw[dotted]
(0.25,-5.5)--(-0.25,-5.5); \node[bullet] (-05-45) at (-0.5,-4.5) [label=left:{$-1$}] {}; \node[bullet] (05-45) at (0.5,-4.5) [label=right:{$-1$}] {}; \draw[dotted] (0.25,-4.5)--(-0.25,-4.5); \node[bullet] (0-35) at (0,-3.5) [label=left:{$-(r+3)$}] {}; \node[bullet] (0-25) at (0,-2.5) [label=left:{$-2$}] {}; \node[empty] (0-2) at (0,-2) [] {}; \node[empty] (0-15) at (0,-1.5) [] {}; \node[bullet] (0-1) at (0,-1) [label=left:{$-2$}] {}; \draw[-] (-05-55)--(-05-45); \draw[-] (05-55)--(05-45); \draw[-] (-05-45)--(0-35); \draw[-] (05-45)--(0-35); \draw[-] (0-35)--(0-25); \draw[-] (0-25)--(0-2); \draw[dotted] (0-2)--(0-15); \draw[-] (0-15)--(0-1); \draw[-] (0-1)--(00); \draw [decorate, decoration = {calligraphic brace,mirror}] (0.15,-2.5)--(0.15,-1) node[pos=0.5,right=0.1em,black]{$p$}; \end{tikzpicture} \caption{Final configuration after all T-singularities have been resolved} \label{figure:Wpqr-P-for-M(3)-I-resolved} \end{figure} Notice that the decorated curves $\widetilde{A}_i$'s and $\widetilde{B}_j$'s in a general fiber are degenerated in the central fiber as follows: \begin{align*} &\text{$\widetilde{A}_i \rightsquigarrow \widetilde{A}_i' + G_{p+2} + \dotsb + G_i$ for each $i$}, \\ &\text{$\widetilde{B}_j \rightsquigarrow \widetilde{B}_j' + H_{q+2} + \dotsb + H_1$ for all $j$}. \end{align*} Since there are no singularities, every $(-1)$-curve in the central fiber deforms to a $(-1)$-curve in a general fiber, which induces a column of the incidence matrix $F$. Then we can apply the divisorial contraction to the $(-1)$-curve so that we may have another new $(-1)$-curve, which also leaves a column of $F$. At first, the $(-1)$-curve $E$ deforms to a $(-1)$-curve in a general fiber that intersects only $\widetilde{A}_1$ among the $\widetilde{A}_i$'s, together with all of the $\widetilde{B}_j$'s. We apply the divisorial contraction to the $(-1)$-curve.
Then the curve $G_1$ becomes another $(-1)$-curve and it deforms to a $(-1)$-curve that intersects only with $\widetilde{A}_2$ but with all of the $\widetilde{B}_j$'s again. Repeat this process until $H_{q+2}$ becomes a $(-1)$-curve. On the other hand, we can start from the $(-1)$-curves that intersect with $\widetilde{C}_k$'s until we reach the curve $G_{p+2}$. Then the process induces the following gray columns of the incidence matrix $F$ and we obtain a new deformation whose central fiber is given in Figure~\ref{figure:Wpqr-P-for-M(3)-I-contracted}. \begin{center} \includegraphics{figure-for-arXiv-24} \end{center} \begin{figure} \centering \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-q-2$},label=below:{$G_{p+2}$}] {}; \node[bullet] (-105) at (-1,0.5) [label=left:{$\widetilde{A}_1'$}] {}; \node[bullet] (-1-05) at (-1,-0.5) [label=left:{$\widetilde{A}_{p+2}'$}] {}; \draw[dotted] (-1,0.25)--(-1,-0.25); \draw[-] (-105)--(00); \draw[-] (-1-05)--(00); \node[bullet] (10) at (1,0) [labelAbove={$-1$},label=below:{$H_{q+2}$}] {}; \node[bullet] (20) at (2,0) [labelAbove={$-2$},label=below:{$H_{q+1}$}] {}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-2$},label=below:{$H_1$}] {}; \node[bullet] (4505) at (4.5,0.5) [label=right:{$\widetilde{B}_1'$}] {}; \node[bullet] (45-05) at (4.5,-0.5) [label=right:{$\widetilde{B}_{p+2}'$}] {}; \draw[dotted] (4.5,0.25)--(4.5,-0.25); \draw[-] (00)--(10); \draw[-] (10)--(20); \draw[-] (20)--(250); \draw[dotted] (250)--(30); \draw[-] (30)--(350); \draw[-] (350)--(4505); \draw[-] (350)--(45-05); \node[bullet] (-05-2) at (-0.5,-2) [label=below:{$\widetilde{C}_1$}] {}; \node[bullet] (05-2) at (0.5,-2) [label=below:{$\widetilde{C}_{r+2}$}] {}; \draw[dotted] (0.25,-2)--(-0.25,-2); \node[bullet] (0-1) at (0,-1) [label=left:{$-1$},label=right:{$I$}] {}; \draw[-] (05-2)--(0-1); \draw[-] (-05-2)--(0-1); \draw[-] (0-1)--(00); \end{tikzpicture} \caption{A central 
fiber after being divisorially contracted} \label{figure:Wpqr-P-for-M(3)-I-contracted} \end{figure} Notice that the degenerations are given as follows: \begin{align*} \widetilde{A}_i &\rightsquigarrow \widetilde{A}_i' + G_{p+2} \\ \widetilde{B}_j &\rightsquigarrow \widetilde{B}_j' + H_1 + \dotsb + H_{q+2} \end{align*} for all $i,j$. Therefore the $(-1)$-curve $I$ leaves the first column of $F$. On the other hand, the $(-1)$-curve $H_{q+2}$ induces the second column of $F$. After divisorially contracting $H_{q+2}$, we have a new $(-1)$-curve, denoted again by $H_{q+1}$, which leaves the third column of $F$. Repeat this process until we exhaust all $H_j$'s. Then we obtain the first $(q+2)$ columns of $F$ and we have a new deformation whose central fiber contains the configuration given in Figure~\ref{figure:Wpqr-P-for-M(3)-I-final}. \begin{figure} \centering \begin{tikzpicture} \node[bullet] (00) at (0,0) [labelAbove={$-1$},label=below:{$G_{p+2}$}] {}; \node[bullet] (-105) at (-1,0.5) [label=left:{$\widetilde{A}_1'$}] {}; \node[bullet] (-1-05) at (-1,-0.5) [label=left:{$\widetilde{A}_{p+2}'$}] {}; \draw[dotted] (-1,0.25)--(-1,-0.25); \draw[-] (-105)--(00); \draw[-] (-1-05)--(00); \node[bullet] (105) at (1,0.5) [label=right:{$\widetilde{B}_1'$}] {}; \node[bullet] (1-05) at (1,-0.5) [label=right:{$\widetilde{B}_{p+2}'$}] {}; \draw[dotted] (1,0.25)--(1,-0.25); \draw[-] (105)--(00); \draw[-] (1-05)--(00); \node[bullet] (-05-1) at (-0.5,-1) [label=below:{$\widetilde{C}_1$}] {}; \node[bullet] (05-1) at (0.5,-1) [label=below:{$\widetilde{C}_{r+2}$}] {}; \draw[dotted] (0.25,-1)--(-0.25,-1); \draw[-] (05-1)--(00); \draw[-] (-05-1)--(00); \end{tikzpicture} \caption{A central fiber at the final stage} \label{figure:Wpqr-P-for-M(3)-I-final} \end{figure} Notice that we still have degenerations \begin{equation*} \widetilde{A}_i \rightsquigarrow \widetilde{A}_i' + G_{p+2} \end{equation*} for all $i$ in Figure~\ref{figure:Wpqr-P-for-M(3)-I-final}.
Therefore the $(-1)$-curve $G_{p+2}$ in Figure~\ref{figure:Wpqr-P-for-M(3)-I-final} induces the following gray column of $F$, which completes the proof of Proposition~\ref{proposition:Wpqr-P-modifications-M(3)M(4)orM(5)}. \qedhere \begin{center} \includegraphics{figure-for-arXiv-25} \end{center} \end{proof} \subsection{The P-modification for the incidence matrix $M(6)$} Wahl~\cite[Theorem~3.4]{Wahl-2013} proved that a smoothing of $W_{p,q,r}$ whose Milnor fiber is a rational homology disk occurs on a one-dimensional smoothing component of the deformation space of $W_{p,q,r}$ and that such a smoothing can be chosen to be $\mathbb{Q}$-Gorenstein. \begin{proposition}\label{proposition:Wpqr-P-modifications-M(6)} The P-modification of $W_{p,q,r}$ corresponding to the incidence matrix $M(6)$ is the $\mathbb{Q}$-Gorenstein smoothing of $W_{p,q,r}$ whose Milnor fiber is a rational homology disk. \end{proposition} \begin{proof} Notice that $M(6)$ is the only incidence matrix whose Milnor number is zero. Therefore the $\mathbb{Q}$-Gorenstein smoothing of $W_{p,q,r}$ whose Milnor fiber is a rational homology disk corresponds to the incidence matrix $M(6)$. \end{proof} \section{Sandwiched structures of cyclic quotient surface singularities}\label{section:CQSS-sandwiched} From now on we focus on cyclic quotient surface singularities. We would like to compare three deformation theories for them: Picture deformations by de Jong and van Straten~\cite{deJong-vanStraten-1998}; P-resolutions by Kollár and Shepherd-Barron~\cite{KSB-1988}; and Equations by Christophersen~\cite{Christophersen-1991} and Stevens~\cite{Stevens-1991}. First, we investigate sandwiched structures of cyclic quotient surface singularities. Every cyclic quotient surface singularity is a sandwiched surface singularity. Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{n}(1,a)$ with $(n,a)=1$.
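The dual graphs below are encoded by Hirzebruch--Jung (negative-regular) continued fractions $n/a=[b_1,\dotsc,b_r]$, as in the expansion $19/7=[3,4,2]$ used later. A minimal Python sketch of the expansion (the function name \texttt{hj} is ours):

```python
from math import gcd

def hj(n, a):
    """Hirzebruch-Jung continued fraction of n/a:
    n/a = b_1 - 1/(b_2 - 1/(... - 1/b_r)) with every b_i >= 2."""
    assert 0 < a < n and gcd(n, a) == 1
    expansion = []
    while a > 0:
        b = -(-n // a)           # ceiling of n/a
        expansion.append(b)
        n, a = a, b * a - n      # since n/a = b - 1/(a/(b*a - n))
    return expansion
```

For example, `hj(19, 7)` returns `[3, 4, 2]` and `hj(19, 12)` returns `[2, 3, 2, 3]`, matching the expansions used in the examples of this section.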
Let $(V,E)$ be the minimal resolution of $(X,p)$, where $E=E_1 \cup \dotsb \cup E_r$ is the union of the exceptional divisors $E_i$ whose dual graph is given by \begin{equation}\label{equation:minimal-resolution} \begin{aligned} \begin{tikzpicture} \node[bullet] (10) at (1,0) [labelAbove={$-b_1$},labelBelow={$E_1$}] {}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$},labelBelow={$E_2$}] {}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-1}$},labelBelow={$E_{r-1}$}] {}; \node[bullet] (450) at (4.5,0) [labelAbove={$-b_r$},labelBelow={$E_r$}] {}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted] (20)--(350); \draw [-] (30)--(350); \draw [-] (350)--(450); \end{tikzpicture} \end{aligned} \end{equation} for $n/a = [b_1,\dotsc,b_r]$. Némethi and Popescu-Pampu~\cite[\S7.1]{Nemethi-Poposcu-Pampu-2010} explained how to construct decorated curves for minimal surface singularities, that is, rational surface singularities with reduced fundamental cycles. Cyclic quotient surface singularities can also be considered minimal ones. In Figure~\ref{figure:sandwiched-structure-for-cyclic}, we illustrate a sandwiched structure of $(X,p)$ in this manner. Then, for each $(-1)$-vertex of the graph $\Gamma'$, we choose a `curvetta' $\widetilde{C}_i$. Blowing down $\Gamma'$ to a smooth point, we get a germ of a plane curve singularity $C$ with all the components $C_i$ smooth. The decorations $l_i$ for the components $C_i$ are given by the following lemma. \begin{lemma}[{Némethi--Popescu-Pampu~\cite[Lemma~7.1.1]{Nemethi-Poposcu-Pampu-2010}}] The weight $l_i$ equals the distance from the vertex $E_1$ to the $(-1)$-vertex associated to the curvetta $C_i$.
\end{lemma} \begin{figure} \centering \begin{tikzpicture}[scale=2] \node[bullet] (10) at (1,0) [labelAbove={$-b_1$}] {}; \node[bullet] (10-1) at (0.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (0.9,-0.5) [] {}; \node[smallbullet] at (1,-0.5) [] {}; \node[smallbullet] at (1.1,-0.5) [] {}; \node[bullet] (10-2) at (1.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (0.8,-0.6)--(1.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_1-2$}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$}] {}; \node[bullet] (20-1) at (1.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (1.9,-0.5) [] {}; \node[smallbullet] at (2,-0.5) [] {}; \node[smallbullet] at (2.1,-0.5) [] {}; \node[bullet] (20-2) at (2.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (1.8,-0.6)--(2.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_2-2$}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-1}$}] {}; \node[bullet] (350-1) at (3.3,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (3.4,-0.5) [] {}; \node[smallbullet] at (3.5,-0.5) [] {}; \node[smallbullet] at (3.6,-0.5) [] {}; \node[bullet] (350-2) at (3.7,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (3.3,-0.6)--(3.7,-0.6) node[pos=0.5,below=0.1em,black]{$b_{r-1}-2$}; \node[bullet] (450) at (4.5,0) [labelAbove={$-b_r$}] {}; \node[bullet] (450-1) at (4.3,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (4.4,-0.5) [] {}; \node[smallbullet] at (4.5,-0.5) [] {}; \node[smallbullet] at (4.6,-0.5) [] {}; \node[bullet] (450-2) at (4.7,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic 
brace,mirror}] (4.3,-0.6)--(4.7,-0.6) node[pos=0.5,below=0.1em,black]{$b_r-1$}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted] (20)--(350); \draw [-] (30)--(350); \draw [-] (350)--(450); \draw [-] (10)--(10-1); \draw [-] (10)--(10-2); \draw [-] (20)--(20-1); \draw [-] (20)--(20-2); \draw [-] (350)--(350-1); \draw [-] (350)--(350-2); \draw [-] (450)--(450-1); \draw [-] (450)--(450-2); \end{tikzpicture} \caption{An usual sandwiched structure $\Gamma'$} \label{figure:sandwiched-structure-for-cyclic} \end{figure} \subsection{Topology of the compatible compactification}\label{section:topology-hair} Let $(Y,p)$ be the compatible compactification of $(X,p)$ whose sandwiched structure is given as in Figure~\ref{figure:sandwiched-structure-for-cyclic}. Let $(W,E)$ be the minimal resolution of $(Y,p)$. \begin{figure} \centering \begin{tikzpicture}[scale=2] \begin{scope}[shift={(0,0)}] \node[empty] (00) at (0,0) [] {}; \node[empty] (050) at (0.5,0) [] {}; \draw [dotted] (0,0)--(0.5, 0); \draw [-] (0.5,0)--(1,0); \node[bullet] (10) at (1,0) [labelAbove={$-b_i+1$}] {}; \node[bullet] (10-1) at (0.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (0.9,-0.5) [] {}; \node[smallbullet] at (1,-0.5) [] {}; \node[smallbullet] at (1.1,-0.5) [] {}; \node[bullet] (10-2) at (1.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (0.8,-0.6)--(1.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_i-2$}; \draw [-] (10)--(10-1); \draw [-] (10)--(10-2); \end{scope} \node[empty] (1750) at (1.75,0) [] {}; \node[empty] (2250) at (2.25,0) [] {}; \draw[<-] (1750) -- (2250); \begin{scope}[shift={(2.5,0)}] \node[empty] (00) at (0,0) [] {}; \node[empty] (050) at (0.5,0) [] {}; \draw [dotted] (0,0)--(0.5, 0); \draw [-] (0.5,0)--(1,0); \node[bullet] (10) at (1,0) [labelAbove={$-b_i$}] {}; \node[bullet] (10-1) at (0.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; 
\node[smallbullet] at (0.9,-0.5) [] {}; \node[smallbullet] at (1,-0.5) [] {}; \node[smallbullet] at (1.1,-0.5) [] {}; \node[bullet] (10-2) at (1.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (0.8,-0.6)--(1.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_i-2$}; \draw [-] (10)--(10-1); \draw [-] (10)--(10-2); \node[bullet] (20) at (2,0) [labelAbove={$-1$}] {}; \draw [-] (10) -- (20); \end{scope} \node[empty] (175-15) at (1.75,-1.5) [] {}; \node[empty] (225-15) at (2.25,-1.5) [] {}; \draw[<-] (175-15) -- (225-15); \begin{scope}[shift={(2.5,-1.5)}] \node[empty] (00) at (0,0) [] {}; \node[empty] (050) at (0.5,0) [] {}; \draw [dotted] (0,0)--(0.5, 0); \draw [-] (0.5,0)--(1,0); \node[bullet] (10) at (1,0) [labelAbove={$-b_i$}] {}; \node[bullet] (10-1) at (0.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (0.9,-0.5) [] {}; \node[smallbullet] at (1,-0.5) [] {}; \node[smallbullet] at (1.1,-0.5) [] {}; \node[bullet] (10-2) at (1.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (0.8,-0.6)--(1.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_i-2$}; \draw [-] (10)--(10-1); \draw [-] (10)--(10-2); \node[bullet] (20) at (2,0) [labelAbove={$-b_{i+1}$}] {}; \node[bullet] (20-1) at (1.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (1.9,-0.5) [] {}; \node[smallbullet] at (2,-0.5) [] {}; \node[smallbullet] at (2.1,-0.5) [] {}; \node[bullet] (20-2) at (2.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (1.8,-0.6)--(2.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_{i+1}-2$}; \draw [-] (20)--(20-1); \draw [-] (20)--(20-2); \draw [-] (10) -- (20); \end{scope} \end{tikzpicture} \caption{A sequence of blowing-ups} \label{figure:blowing-ups} \end{figure} Notice that we can construct the surface $W$ by 
blowing up $\mathbb{CP}^2$ at $[0,0,1]$ (including its infinitely near points) as given inductively in Figure~\ref{figure:blowing-ups}. So it is easy to describe generators of $H_2(W;\mathbb{Z})$. For this, we decorate the Riemenschneider dot diagram of Equation~\eqref{equation:minimal-resolution} as follows. We label the dots in the $i$th row of the dot diagram as below. \begin{equation}\label{equation:labeling-row} \begin{aligned} \begin{tikzpicture} \node[smallbullet] at (-0.5,1) []{}; \node[smallbullet] at (-0.75,1) []{}; \node[smallbullet] at (-1,1) []{}; \node[bullet] (01) at (0,1) [labelAbove={$e_{{i-1},b_{i-1}-2}$}] {}; \node[bullet] (00) at (0,0) [labelAbove={$e_{i,0}$}] {}; \node[bullet] (10) at (1,0) [labelAbove={$e_{i,1}$}] {}; \node[smallbullet] at (1.75,0) []{}; \node[smallbullet] at (2,0) []{}; \node[smallbullet] at (2.25,0) []{}; \node[bullet] (30) at (3,0) [labelAbove={$e_{i,b_i-2}$}] {}; \node[bullet] (3-1) at (3,-1) [labelAbove={$e_{i+1,0}$}] {}; \node[smallbullet] at (3.5,-1) []{}; \node[smallbullet] at (3.75,-1) []{}; \node[smallbullet] at (4,-1) []{}; \end{tikzpicture} \end{aligned} \end{equation} Finally, let $e_{r,b_r-1}$ denote the exceptional divisor of the final blow-up for constructing $W$ among the sequence of blowing-ups given in Figure~\ref{figure:blowing-ups}. Then each $e_{i,j}$ represents the homology class of an exceptional $2$-sphere in $H_2(W;\mathbb{Z})$. In particular, the homology classes of the exceptional divisors $E_i$ in $W$ given in Equation~\eqref{equation:minimal-resolution} are given as follows: \begin{equation}\label{equation:homology-row} \begin{aligned} [E_i]&=e_{i,0}-e_{i,1}-\dotsb-e_{i,b_i-2}-e_{i+1,0} \quad (i<r), \\ [E_r]&=e_{r,0}-e_{r,1}-\dotsb-e_{r,b_r-2}-e_{r,b_r-1}. \end{aligned} \end{equation} Let $(D,l)$ be the compactified decorated curve of $(Y,p)$ and let $\widetilde{D}$ be the proper transform of $D$ in $W$.
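The labeling in Equation~\eqref{equation:homology-row} can be checked mechanically. A Python sketch (function names ours) that flattens the dot labels $e_{i,j}$ to $e_1,e_2,\dotsc$ as in Example~\ref{example:3-4-2-homology-hair} below, and lists each $[E_i]$ as a signed list of labels:

```python
def dot_labels(b):
    """Flat labels 1, 2, ... for the dots e_{i,0}, ..., e_{i,b_i-2}
    of the Riemenschneider dot diagram of [b_1, ..., b_r]:
    row i carries b_i - 1 dots, numbered consecutively."""
    rows, k = [], 1
    for bi in b:
        rows.append(list(range(k, k + bi - 1)))
        k += bi - 1
    return rows

def exceptional_classes(b):
    """Each class [E_i] = e_{i,0} - e_{i,1} - ... - e_{i,b_i-2} - e_{i+1,0},
    returned as signed flat labels; the final blow-up e_{r,b_r-1} gets
    the next unused label."""
    rows = dot_labels(b)
    extra = rows[-1][-1] + 1                 # label of e_{r, b_r - 1}
    classes = []
    for i, row in enumerate(rows):
        nxt = rows[i + 1][0] if i + 1 < len(rows) else extra
        classes.append([row[0]] + [-x for x in row[1:]] + [-nxt])
    return classes
```

For $19/7=[3,4,2]$ this reproduces $[E_1]=e_1-e_2-e_3$, $[E_2]=e_3-e_4-e_5-e_6$, $[E_3]=e_6-e_7$.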
The homology classes $e_{i,j}$ for $j \ge 1$ represent the $(-1)$-curves over $E_i$ in $W$ given in Figure~\ref{figure:sandwiched-structure-for-cyclic}. Let $\widetilde{D}_{i,j}$ be the component of $\widetilde{D}$ corresponding to $e_{i,j}$. In order to represent their homology classes, let $l$ be the line class in $H_2(W;\mathbb{Z})$. Then there is a positive integer $n$ such that the homology classes of $\widetilde{D}_{i,j}$ for $j \ge 1$ are given by \begin{equation}\label{equation:homology-D} [\widetilde{D}_{i,j}] = nl-e_{1,0}-e_{2,0}-\dotsb-e_{i,0}-e_{i,j}. \end{equation} \begin{example}\label{example:3-4-2-homology-hair} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. We label the Riemenschneider dot diagram for $19/7=[3,4,2]$ as follows. \begin{equation*} \begin{tikzpicture} \node[bullet] at (0,3) [labelAbove={$e_1$}] {}; \node[bullet] at (1,3) [labelAbove={$e_2$}] {}; \node[bullet] at (1,2) [labelAbove={$e_3$}] {}; \node[bullet] at (2,2) [labelAbove={$e_4$}] {}; \node[bullet] at (3,2) [labelAbove={$e_5$}] {}; \node[bullet] at (3,1) [labelAbove={$e_6$}] {}; \end{tikzpicture} \end{equation*} Let $e_7$ be the homology class of the final exceptional divisor. Then the homology classes of $E_i$ and $\widetilde{D}_j$ are given by \begin{equation*} \begin{aligned} [E_1] &= e_1-e_2-e_3 \\ [E_2] &= e_3-e_4-e_5-e_6 \\ [E_3] &= e_6-e_7 \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} [\widetilde{D}_1] &= nl-e_1-e_2 \\ [\widetilde{D}_2] &= nl-e_1-e_3-e_4 \\ [\widetilde{D}_3] &= nl-e_1-e_3-e_5 \\ [\widetilde{D}_4] &= nl-e_1-e_3-e_6-e_7 \end{aligned} \end{equation*} Here we may take $n=2$. \end{example} \section{From P-resolutions to picture deformations}\label{section:CQSS-PtoP} Let $(X,p)$ be a cyclic quotient surface singularity. Suppose that a sandwiched structure for $(X,p)$ is given as in Figure~\ref{figure:sandwiched-structure-for-cyclic}. Let $\mathcal{X} \to \Delta$ be a one-parameter smoothing of $(X,p)$.
Let $Z \to Y$ be the corresponding M-resolution of the compatible compactification $Y$ of $X$ and let $\mathcal{Z} \to \Delta$ be the $\mathbb{Q}$-Gorenstein smoothing corresponding to $\mathcal{X} \to \Delta$. The following theorem says that one can run the MMP \emph{in an explicit and controlled way} for cyclic quotient surface singularities, which is a particular case of Proposition~\ref{proposition:smooth-central-fiber}. \begin{theorem} By applying only Iitaka-Kodaira divisorial contractions and usual flips, one can run the MMP on $\mathcal{Z} \to \Delta$ until one obtains a deformation $\mathcal{Z}' \to \Delta$ whose central fiber $Z'$ is smooth. \end{theorem} \begin{proof} Let $F_1, \dotsc, F_{b_r-1}$ be the $(-1)$-curves attached to $E_r$. \textit{Case~1}. The curve $E_r$ is not an exceptional curve of a Wahl singularity of the compactified M-resolution $Z$. We apply Iitaka-Kodaira divisorial contractions along $F_1,\dotsc,F_{b_r-1}$. Then the blown-down M-resolution $Z'$ is the compactified M-resolution of the cyclic quotient surface singularity $(X',0)$ whose dual graph is given by \begin{equation*} \begin{tikzpicture} \node[bullet] (10) at (1,0) [labelAbove={$-b_1$},labelBelow={$E_1$}] {}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$},labelBelow={$E_2$}] {}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-1}$},labelBelow={$E_{r-1}$}] {}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted] (20)--(350); \draw [-] (30)--(350); \end{tikzpicture} \end{equation*} Here the sandwiched structure of $(X',0)$ is given as in Figure~\ref{figure:sandwiched-structure-for-X'}. So we can repeat the process with the compactified M-resolution $Z'$ for a cyclic quotient surface singularity $(X',0)$.
\begin{figure} \centering \begin{tikzpicture}[scale=2] \node[bullet] (10) at (1,0) [labelAbove={$-b_1$}] {}; \node[bullet] (10-1) at (0.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (0.9,-0.5) [] {}; \node[smallbullet] at (1,-0.5) [] {}; \node[smallbullet] at (1.1,-0.5) [] {}; \node[bullet] (10-2) at (1.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (0.8,-0.6)--(1.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_1-2$}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$}] {}; \node[bullet] (20-1) at (1.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (1.9,-0.5) [] {}; \node[smallbullet] at (2,-0.5) [] {}; \node[smallbullet] at (2.1,-0.5) [] {}; \node[bullet] (20-2) at (2.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (1.8,-0.6)--(2.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_2-2$}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-1}$}] {}; \node[bullet] (350-1) at (3.3,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (3.4,-0.5) [] {}; \node[smallbullet] at (3.5,-0.5) [] {}; \node[smallbullet] at (3.6,-0.5) [] {}; \node[bullet] (350-2) at (3.7,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (3.3,-0.6)--(3.7,-0.6) node[pos=0.5,below=0.1em,black]{$b_{r-1}-2$}; \node[bullet] (450) at (4.5,0) [labelAbove={$-1$},labelBelow={$E_r$}] {}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted] (20)--(350); \draw [-] (30)--(350); \draw [-] (350)--(450); \draw [-] (10)--(10-1); \draw [-] (10)--(10-2); \draw [-] (20)--(20-1); \draw [-] (20)--(20-2); \draw [-] (350)--(350-1); \draw [-] (350)--(350-2); \end{tikzpicture} \caption{The sandwiched structure of $(X',0)$ in Case 1} 
\label{figure:sandwiched-structure-for-X'} \end{figure} \textit{Case~2}. $E_r$ is a member of a Wahl singularity in $Z$. The dual graph of $Z$ over $0 \in Y$ is of the form \begin{equation*} \begin{tikzpicture} \node[empty] (050) at (0.5,0) [] {}; \node[empty] (10) at (1,0) [] {}; \node[bullet] (150) at (1.5,0) [labelAbove={$-e$},labelBelow={$E$}] {}; \node[rectangle] (250) at (2.5,0) [labelAbove={$-g_1$},labelBelow={$G_1$}] {}; \node[empty] (30) at (3,0) [] {}; \node[empty] (350) at (3.5,0) [] {}; \node[rectangle] (40) at (4,0) [labelAbove={$-g_s$},labelBelow={$G_s$}] {}; \draw [dotted] (050)--(10); \draw [-] (10)--(150); \draw [-] (150)--(250); \draw [-] (250)--(30); \draw [dotted] (30)--(350); \draw [-] (350)--(40); \end{tikzpicture} \end{equation*} with $G_s=E_r$. Note that the left part ``$\cdots - E$'' of $[G_1,\dotsc,G_s]$ may be empty. \textit{Case~2-1}. $E$ is not a $(-1)$-curve or the left part ``$\cdots - E$'' is empty. In this case, $G_s=E_r,\dotsc,G_1=E_{r-s+1},E=E_{r-s}$. Suppose that $g_i \ge 3$ for some $i \ge 1$ and $g_{i+1}=\dotsb=g_s=2$ (if any).
Applying the usual flips along $F_1,\dotsc,F_{b_r-1}$, we have a surface $Z'$ whose dual graph over $0 \in Y$ is given by either \begin{equation*} \begin{tikzpicture} \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[bullet] (050) at (0.5,0) [labelAbove={$-e$},labelBelow={$E$}] {}; \node[bullet] (150) at (1.5,0) [labelAbove={$-g_1$},labelBelow={$G_1$}] {}; \node[rectangle] (250) at (2.5,0) [labelAbove={$-g_2$},labelBelow={$G_2$}] {}; \node[empty] (30) at (3,0) [] {}; \node[empty] (350) at (3.5,0) [] {}; \node[rectangle] (40) at (4,0) [labelAbove={$-g_i+1$},labelBelow={$G_i$}] {}; \draw [dotted] (-050)--(00); \draw [-] (00)--(050); \draw [-] (050)--(150); \draw [-] (150)--(250); \draw [-] (250)--(30); \draw [dotted] (30)--(350); \draw [-] (350)--(40); \end{tikzpicture} \end{equation*} for $i \ge 2$ or \begin{equation*} \begin{tikzpicture} \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[bullet] (050) at (0.5,0) [labelAbove={$-e$},labelBelow={$E$}] {}; \node[bullet] (150) at (1.5,0) [labelAbove={$-g_1+1$},labelBelow={$G_1$}] {}; \draw [dotted] (-050)--(00); \draw [-] (00)--(050); \draw [-] (050)--(150); \end{tikzpicture} \end{equation*} for $i=1$. Then this new surface $Z'$ is again an M-resolution of a cyclic quotient surface singularity $(X',0)$ whose sandwiched structure is given as in Figure~\ref{figure:sandwiched-structure-for-X'-after-usual-flips}. Hence we can repeat again the process with $Z'$ for $(X',0)$. 
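The Wahl chains $[G_1,\dotsc,G_s]$ manipulated in these cases are the resolution chains of the singularities $\frac{1}{n^2}(1,na-1)$, and they can be enumerated by the standard recursion starting from $[4]$. A Python sketch (the function name is ours):

```python
def wahl_chains(max_len):
    """All resolution chains [b_1, ..., b_r] of Wahl singularities
    1/n^2(1, na-1) with r <= max_len, via the standard recursion:
    start from [4]; from [b_1,...,b_r] pass to
    [b_1 + 1, b_2, ..., b_r, 2] and [2, b_1, ..., b_{r-1}, b_r + 1]."""
    chains, frontier = [[4]], [[4]]
    while frontier and len(frontier[0]) < max_len:
        nxt = []
        for c in frontier:
            nxt.append([c[0] + 1] + c[1:] + [2])
            nxt.append([2] + c[:-1] + [c[-1] + 1])
        chains += nxt
        frontier = nxt
    return chains
```

Up to length three this yields $[4]$; $[5,2]$, $[2,5]$; $[6,2,2]$, $[2,5,3]$, $[3,5,2]$, $[2,2,6]$, which are exactly the Hirzebruch--Jung expansions of $n^2/(na-1)$ for $n \le 5$ of those lengths.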
\begin{figure} \centering \begin{tikzpicture}[scale=2] \node[bullet] (10) at (1,0) [labelAbove={$-b_1$}] {}; \node[bullet] (10-1) at (0.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (0.9,-0.5) [] {}; \node[smallbullet] at (1,-0.5) [] {}; \node[smallbullet] at (1.1,-0.5) [] {}; \node[bullet] (10-2) at (1.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (0.8,-0.6)--(1.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_1-2$}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$}] {}; \node[bullet] (20-1) at (1.8,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (1.9,-0.5) [] {}; \node[smallbullet] at (2,-0.5) [] {}; \node[smallbullet] at (2.1,-0.5) [] {}; \node[bullet] (20-2) at (2.2,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (1.8,-0.6)--(2.2,-0.6) node[pos=0.5,below=0.1em,black]{$b_2-2$}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-s+i}+1$}] {}; \node[bullet] (350-1) at (3.3,-0.5) [label={[label distance=-0.35em]above left:{$-1$}}] {}; \node[smallbullet] at (3.4,-0.5) [] {}; \node[smallbullet] at (3.5,-0.5) [] {}; \node[smallbullet] at (3.6,-0.5) [] {}; \node[bullet] (350-2) at (3.7,-0.5) [label={[label distance=-0.35em]above right:{$-1$}}] {}; \draw [decorate, decoration = {calligraphic brace,mirror}] (3.3,-0.6)--(3.7,-0.6) node[pos=0.5,below=0.1em,black]{$b_{r-s+i}-2$}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted] (20)--(350); \draw [-] (30)--(350); \draw [-] (10)--(10-1); \draw [-] (10)--(10-2); \draw [-] (20)--(20-1); \draw [-] (20)--(20-2); \draw [-] (350)--(350-1); \draw [-] (350)--(350-2); \end{tikzpicture} \caption{The sandwiched structure of $(X',0)$ in Case 2-1} \label{figure:sandwiched-structure-for-X'-after-usual-flips} \end{figure} \textit{Case 2-2}. 
$E$ is a $(-1)$-curve. Since $K_Z$ is ample, there is another Wahl singularity $[H_t,\dotsc,H_1]$ on $E$ so that $Z$ looks like \begin{equation*} \begin{tikzpicture} \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$H_t$}] {}; \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[rectangle] (050) at (0.5,0) [labelBelow={$H_1$}] {}; \node[bullet] (150) at (1.5,0) [labelBelow={$E$}] {}; \node[rectangle] (250) at (2.5,0) [labelBelow={$G_1$}] {}; \node[empty] (30) at (3,0) [] {}; \node[empty] (350) at (3.5,0) [] {}; \node[rectangle] (40) at (4,0) [labelBelow={$G_s$}] {}; \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(-050); \draw[dotted] (-050)--(00); \draw[-] (00)--(050); \draw [-] (050)--(150); \draw [-] (150)--(250); \draw [-] (250)--(30); \draw [dotted] (30)--(350); \draw [-] (350)--(40); \end{tikzpicture} \end{equation*} As in Lemma~\ref{lemma:usual-situation-1}, we repeatedly apply the usual flips starting from the sandwiched $(-1)$-curves on $G_s$ until there is no Wahl singularity left from $[G_1,\dotsc,G_s]$.
Then the flipped surface $Z_1$ contains a configuration \begin{equation}\label{equation:Z1-after-two-sequences-of-flips} \begin{tikzpicture}[baseline=(current bounding box.center)] \node[empty] (-20) at (-2,0) [] {}; \node[empty] (-150) at (-1.5,0) [] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$H_t$}] {}; \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[rectangle] (050) at (0.5,0) [labelBelow={$H_1$}] {}; \node[bullet] (150) at (1.5,0) [labelBelow={$E$}] {}; \node[bullet] (250) at (2.5,0) [labelBelow={$G_1$}] {}; \node[empty] (30) at (3,0) [] {}; \node[empty] (350) at (3.5,0) [] {}; \node[bullet] (40) at (4,0) [labelBelow={$G_p$}] {}; \draw[dotted] (-20)--(-150); \draw[-] (-150)--(-10); \draw[-] (-10)--(-050); \draw[dotted] (-050)--(00); \draw[-] (00)--(050); \draw [-] (050)--(150); \draw [-] (150)--(250); \draw [-] (250)--(30); \draw [dotted] (30)--(350); \draw [-] (350)--(40); \end{tikzpicture} \end{equation} where $G_p$ is the initial curve of the Wahl singularity $[G_1,\dotsc,G_s]$. We continue to apply the usual flips, now starting from the $(-1)$-curve $E$, until there are no exceptional curves (of the sequence of blowing-ups creating $E$) left. By Lemma~\ref{lemma:usual-situation-2}, the successive blow-downs of $H_t-\dotsb-H_1-E-G_1-\dotsb-G_s$ starting from $E$ cannot contract both of the initial curves $H_q$ and $G_p$ of the Wahl singularities $[H_t,\dotsc,H_1]$ and $[G_1,\dotsc,G_s]$. So there is a Wahl singularity left from $[H_t,\dotsc,H_1]$ whose minimal resolution contains the initial curve $H_q$ as one of its exceptional curves.
Then the new surface $Z_2$ after the usual flips contains a configuration of the form \begin{equation*} \begin{tikzpicture} \node[empty] (-450) at (-4.5,0) [] {}; \node[empty] (-40) at (-4,0) [] {}; \node[bullet] (-350) at (-3.5,0) [labelBelow={$H_t$}] {}; \node[empty] (-30) at (-3,0) [] {}; \node[empty] (-250) at (-2.5,0) [] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$H_{k+1}$}] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$H_k$}] {}; \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[rectangle] (050) at (0.5,0) [labelBelow={$H_l$}] {}; \node[bullet] (150) at (1.5,0) [labelBelow={$G_m$}] {}; \node[empty] (20) at (2,0) [] {}; \node[empty] (250) at (2.5,0) [] {}; \node[bullet] (30) at (3,0) [labelBelow={$G_p$}] {}; \draw[dotted] (-450)--(-40); \draw[-] (-40)--(-350); \draw[-] (-350)--(-30); \draw[dotted] (-30)--(-250); \draw[-] (-250)--(-20); \draw[-] (-20)--(-10); \draw[-] (-10)--(-050); \draw[dotted] (-050)--(00); \draw[-] (00)--(050); \draw[-] (050)--(150); \draw[-] (150)--(20); \draw[dotted] (20)--(250); \draw[-] (250)--(30); \end{tikzpicture} \end{equation*} Note that the curves $H_q, \dotsc, H_l$ and $G_m, \dotsc, G_p$ are original exceptional curves of the cyclic quotient surface singularity $(X,p)$. So they have sandwiched $(-1)$-curves. The curves $H_j$ ($q+1 \le j \le l$) and $G_i$ ($m \le i \le p-1$) carry $(h_j-2)$ and $(g_i-2)$ sandwiched $(-1)$-curves, respectively. In particular, there are $(g_p-1)$ $(-1)$-curves intersecting $G_p$. We then apply the Iitaka-Kodaira divisorial contractions starting from the $(-1)$-curves intersecting $G_p$ until the linear chain $G_m-\dotsb-G_p$ contracts to a $(-1)$-curve intersecting $H_l$.
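This contraction is pure bookkeeping on self-intersection numbers: contracting an attached $(-1)$-curve raises the self-intersection of each neighbor by one. A toy Python sketch (assuming exactly the hair counts stated above; the function name is ours) checking that the chain indeed contracts to a single $(-1)$-curve:

```python
def contract_chain(g):
    """Contract the chain G_m - ... - G_p with self-intersections -g[i],
    where the last curve carries g[-1] - 1 attached (-1)-curves and
    every other curve carries g[i] - 2 of them.  Returns the
    self-intersection of the single curve that remains."""
    chain = list(g)
    hairs = [x - 2 for x in chain]
    hairs[-1] = chain[-1] - 1      # the last curve has one extra (-1)-curve
    while len(chain) > 1:
        chain[-1] -= hairs[-1]     # blow down the attached (-1)-curves
        assert chain[-1] == 1      # ... the last curve is now a (-1)-curve
        chain.pop(); hairs.pop()   # blow it down as well
        chain[-1] -= 1             # its neighbor's self-intersection rises
    return -(chain[0] - hairs[0])  # blow down the remaining hairs
```

For instance, `contract_chain([3, 4, 2])` returns `-1`: the chain collapses to a $(-1)$-curve, as claimed.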
Hence we have a new surface $Z_3$ with a configuration \begin{equation*} \begin{tikzpicture} \node[empty] (-450) at (-4.5,0) [] {}; \node[empty] (-40) at (-4,0) [] {}; \node[bullet] (-350) at (-3.5,0) [labelBelow={$H_t$}] {}; \node[empty] (-30) at (-3,0) [] {}; \node[empty] (-250) at (-2.5,0) [] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$H_{k+1}$}] {}; \node[rectangle] (-10) at (-1,0) [labelBelow={$H_k$}] {}; \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[rectangle] (050) at (0.5,0) [labelBelow={$H_l$}] {}; \node[bullet] (150) at (1.5,0) [labelBelow={$-1$}] {}; \draw[dotted] (-450)--(-40); \draw[-] (-40)--(-350); \draw[-] (-350)--(-30); \draw[dotted] (-30)--(-250); \draw[-] (-250)--(-20); \draw[-] (-20)--(-10); \draw[-] (-10)--(-050); \draw[dotted] (-050)--(00); \draw[-] (00)--(050); \draw[-] (050)--(150); \end{tikzpicture} \end{equation*} Again by Lemma~\ref{lemma:usual-situation-1}, we can eliminate the Wahl singularity $[H_k,\dotsc,H_l]$ by a sequence of the usual flips starting from $(-1)$-curves intersecting $H_l$.
Finally we have a surface $Z_4$: \begin{equation*} \begin{tikzpicture} \node[empty] (-450) at (-4.5,0) [] {}; \node[empty] (-40) at (-4,0) [] {}; \node[bullet] (-350) at (-3.5,0) [labelBelow={$H_t$}] {}; \node[empty] (-30) at (-3,0) [] {}; \node[empty] (-250) at (-2.5,0) [] {}; \node[bullet] (-20) at (-2,0) [labelBelow={$H_{k+1}$}] {}; \node[bullet] (-10) at (-1,0) [labelBelow={$H_k$}] {}; \node[empty] (-050) at (-0.5,0) [] {}; \node[empty] (00) at (0,0) [] {}; \node[bullet] (050) at (0.5,0) [labelBelow={$H_q$}] {}; \draw[dotted] (-450)--(-40); \draw[-] (-40)--(-350); \draw[-] (-350)--(-30); \draw[dotted] (-30)--(-250); \draw[-] (-250)--(-20); \draw[-] (-20)--(-10); \draw[-] (-10)--(-050); \draw[dotted] (-050)--(00); \draw[-] (00)--(050); \end{tikzpicture} \end{equation*} If there is a $(-1)$-curve touching $H_t$ and another Wahl singularity, we return to the situation of Equation~\eqref{equation:Z1-after-two-sequences-of-flips}. Hence we can repeat the process from there. If not, then the surface $Z_4$ is a new compactified M-resolution of a certain compactified cyclic quotient surface singularity $(Y_4,0)$. So we can repeat the process from the beginning. \end{proof} As in Section~\ref{section:identification}, one can track how a branch $C_{i,t}$ of the decorated curve $C_t$ on a general fiber degenerates to a sum of curves on the central fiber during the flips. Conversely, one can track how $(-1)$-curves on the central fiber intersect with the degenerations. Therefore one gets the intersection data of the $(-1)$-curves with the branches of the decorated curve $C_t$ on a general fiber, which gives us the incidence matrix (hence the picture deformation) corresponding to the given P-resolution. \begin{remark} We have an algorithm for finding the corresponding P-resolutions from a given picture deformation.
But the algorithm is somewhat complicated and involves non-standard sandwiched structures of cyclic quotient surface singularities, which we will not use in the other parts of this paper. Instead, we will introduce a simple but slightly indirect algorithm in the next sections. \end{remark} \section{Equations of cyclic quotient surface singularities}\label{section:CQSS-Equation} Let $(X,p)$ be a cyclic quotient surface singularity of type $\frac{1}{n}(1,a)$ with $(n,a)=1$. Let $\Def(X)$ be the reduced base space of the miniversal deformation of $(X,p)$. Christophersen~\cite{Christophersen-1991} and Stevens~\cite{Stevens-1991} provided systems of equations for the singularity $(X,p)$ and described explicitly their deformations with smooth parameter spaces. According to their results, the components of $\Def(X)$ are parameterized by certain sequences $\underline{k}=(k_1,\dotsc,k_s) \in K_s(n/n-a)$ of positive integers. For a detailed summary, please refer to Némethi--Popescu-Pampu~\cite{Nemethi-Poposcu-Pampu-2010}. We first explain the set $K_s(n/n-a)$ in more detail. For any sequence $\underline{x}=(x_1,\dotsc,x_s)$ of positive integers, we define an $s \times s$ matrix $M(\underline{x})$ by $M_{i,i}=x_i$, $M_{i,j}=-1$ if $\abs{i-j}=1$, and $M_{i,j}=0$ otherwise. \begin{definition}[Orlik--Wagreich~\cite{Orlik-Wagreich-1977}] A sequence $\underline{k}=(k_1,\dotsc,k_s) \in \mathbb{N}^s$ is called \emph{admissible} if the matrix $M(\underline{k})$ is positive semi-definite of rank at least $s-1$. \end{definition} \begin{definition}[Christophersen~\cite{Christophersen-1991}] For $s \ge 1$, we define \begin{equation*} K_s = \{ (k_1,\dotsc,k_s) \in \mathbb{N}^s \mid \text{$(k_1,\dotsc,k_s)$ is admissible and $[k_1,\dotsc,k_s]=0$}\}, \end{equation*} that is, the set of all admissible $s$-tuples which represent a zero HJ continued fraction. \end{definition} Suppose that $n/(n-a)=[a_1,\dotsc,a_s]$.
\begin{definition} We define \begin{equation*} K_s(n/n-a) = \{(k_1,\dotsc,k_s) \in K_s \mid \text{$k_i \le a_i$ for all $i$}\}. \end{equation*} \end{definition} As a toric variety, one can find a system of equations that define the singularity $(X,p)$. The system includes the equations \begin{equation*} z_{i-1}z_{i+1}-z_i^{a_i}=0 \end{equation*} for $i=1,\dotsc,s$. Then Christophersen~\cite{Christophersen-1991} and Stevens~\cite{Stevens-1991} showed that the deformation component parameterized by $\underline{k} \in K_s(n/n-a)$ contains a 1-parameter deformation which has a set of equations that contains \begin{equation*} z_{i-1}z_{i+1}-z_i^{a_i}=tz_i^{k_i} \end{equation*} for all $i=1,\dotsc,s$. \begin{example}\label{example:3-4-2-k} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. Then $19/(19-7)=[2,3,2,3]$. So there are three parameters $\underline{k}$ corresponding to components of $\Def(X)$: \begin{equation*} (1,2,2,1), \quad (1,3,1,2), \quad (2,2,1,3). \end{equation*} \end{example} \section{From P-resolutions to Equations}\label{section:CQSS-PtoE} In PPSU~\cite{PPSU-2018}, we developed a process for determining the parameter $\underline{k}$ of the component of $\Def(X)$ that contains a one-parameter deformation of $(X,p)$ given by a P-resolution. PPSU~\cite{PPSU-2018} used the same technique, the semi-stable MMP, as in this paper, but the singularity $(X,p)$ was compactified in a different way. We review briefly the method in PPSU~\cite{PPSU-2018}. Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{n}(1,a)$ whose dual graph is given as in Equation~\eqref{equation:minimal-resolution}. Suppose that $n/(n-a)=[a_1,\dotsc,a_s]$. 
Since $[b_1,\dotsc,b_r,1,a_s,\dotsc,a_1]=0$, one can blow up $\mathbb{CP}^2$ at a point (including its infinitely near points) so that one has a projective surface $W$ containing rational curves whose dual graph is given by \begin{equation}\label{equation:CQSS-compactification-tail} \begin{aligned} \begin{tikzpicture} \node[bullet] (10) at (1,0) [labelAbove={$-b_1$},labelBelow={$E_1$}] {}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$},labelBelow={$E_2$}] {}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-1}$},labelBelow={$E_{r-1}$}] {}; \node[bullet] (450) at (4.5,0) [labelAbove={$-b_r$},labelBelow={$E_r$}] {}; \node[bullet] (550) at (5.5,0) [labelAbove={$-1$}] {}; \node[bullet] (650) at (6.5,0) [labelAbove={$-a_s$},labelBelow={$A_s$}] {}; \node[empty] (70) at (7,0) [] {}; \node[empty] (750) at (7.5,0) [] {}; \node[bullet] (80) at (8,0) [labelAbove={$-a_2$},labelBelow={$A_2$}] {}; \node[bullet] (90) at (9,0) [labelAbove={$-a_1+1$},labelBelow={$A_1$}] {}; \node[bullet] (100) at (10,0) [labelAbove={$+1$},labelBelow={$L$}] {}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted] (20)--(350); \draw [-] (30)--(350); \draw [-] (350)--(450); \draw [-] (450)--(550); \draw [-] (550)--(650); \draw [-] (650)--(70); \draw [dotted] (70)--(750); \draw [-] (750)--(80); \draw [-] (80)--(90); \draw [-] (90)--(100); \end{tikzpicture} \end{aligned} \end{equation} where $L$ is a line in $\mathbb{CP}^2$ and $A_1$ is the proper transform of a line through the blow-up point. Contracting $E_1, \dotsc, E_r$, we have a singular projective surface $Y$. As in Theorem~\ref{theorem:extension-of-deformation}, PPSU~\cite[Lemma~3.1]{PPSU-2018} showed that there is no local-to-global obstruction to the singular surface $Y$. 
So, if $\mathcal{X} \to \Delta$ is a one-parameter smoothing of $(X,p)$ and if $U \to X$ is the corresponding P-resolution of $X$, then there are an extension $\mathcal{Y} \to \Delta$ of $\mathcal{X} \to \Delta$ and a P-resolution $Z \to Y$ such that the $\mathbb{Q}$-Gorenstein smoothing $\mathcal{Z} \to \Delta$ of $Z$ blows down to $\mathcal{Y} \to \Delta$. Notice that a general fiber $Z_t$ of $\mathcal{Z} \to \Delta$ contains the linear chain of rational curves with the dual graph \begin{equation* \begin{aligned} \begin{tikzpicture} \node[bullet] (650) at (6.5,0) [labelAbove={$-a_s$},labelBelow={$A_s$}] {}; \node[empty] (70) at (7,0) [] {}; \node[empty] (750) at (7.5,0) [] {}; \node[bullet] (80) at (8,0) [labelAbove={$-a_2$},labelBelow={$A_2$}] {}; \node[bullet] (90) at (9,0) [labelAbove={$-a_1+1$},labelBelow={$A_1$}] {}; \node[bullet] (100) at (10,0) [labelAbove={$+1$},labelBelow={$L$}] {}; \draw [-] (650)--(70); \draw [dotted] (70)--(750); \draw [-] (750)--(80); \draw [-] (80)--(90); \draw [-] (90)--(100); \end{tikzpicture} \end{aligned} \end{equation*} PPSU~\cite{PPSU-2018} proved that by applying the semi-stable MMP to $\mathcal{Z} \to \Delta$, one can determine how $(-1)$-curves in a general fiber $Z_t$ intersect the curves $A_1, \dotsc, A_s$. Furthermore, let $d_i$ be the number of $(-1)$-curves in a general fiber $Z_t$ that intersects the curve $A_i$ transversally. Then PPSU~\cite{PPSU-2018} showed that if we define \begin{equation}\label{equation:k} k_i := a_i-d_i \end{equation} for $i=1,\dotsc,s$, then the sequence $(k_1,\dotsc,k_s)$ is contained in $K_s(n/n-a)$ and it is the parameter $\underline{k}$ (introduced in Section~\ref{section:CQSS-Equation}) corresponding to the component of $\Def(X)$ that contains the smoothing $\mathcal{X} \to \Delta$. \begin{example}[Continued from Example~\ref{example:3-4-2-k}]\label{example:3-4-2-PtoE} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. 
There are three P-resolutions (cf.~Example~\ref{example:3-4-2-P-resolutions}): \begin{equation*} 3-4-[2], \quad 3-[4]-2, \quad [4]-1-[5,2] \end{equation*} The corresponding sequences $\underline{k}$ are determined in PPSU~\cite[\S9]{PPSU-2018} as follows: \begin{equation*} \begin{aligned} &\text{$\underline{k}=(1,2,2,1)$ to $3-4-[2]$}\\ &\text{$\underline{k}=(1,3,1,2)$ to $3-[4]-2$}\\ &\text{$\underline{k}=(2,2,1,3)$ to $[4]-1-[5,2]$} \end{aligned} \end{equation*} \end{example} \subsection{Topology of the compactifications}\label{section:topology-tail} We would like to compare two compactifications: one given in Section~\ref{section:CQSS-sandwiched} and the other given in this section. For this, we slightly change the choice of $A_1$ in Equation~\eqref{equation:CQSS-compactification-tail}. That is, we may choose $A_1$ so that $A_1$ is the proper transform of a smooth curve of degree $n$ in $\mathbb{CP}^2$ passing through the blow-up point. Then the surface $W$ contains a linear chain of smooth curves whose dual graph is \begin{equation}\label{equation:CQSS-compactification-tail-modified} \begin{aligned} \begin{tikzpicture} \node[bullet] (10) at (1,0) [labelAbove={$-b_1$},labelBelow={$E_1$}] {}; \node[bullet] (20) at (2,0) [labelAbove={$-b_2$},labelBelow={$E_2$}] {}; \node[empty] (250) at (2.5,0) [] {}; \node[empty] (30) at (3,0) [] {}; \node[bullet] (350) at (3.5,0) [labelAbove={$-b_{r-1}$},labelBelow={$E_{r-1}$}] {}; \node[bullet] (450) at (4.5,0) [labelAbove={$-b_r$},labelBelow={$E_r$}] {}; \node[bullet] (550) at (5.5,0) [labelAbove={$-1$}] {}; \node[bullet] (650) at (6.5,0) [labelAbove={$-a_s$},labelBelow={$A_s$}] {}; \node[empty] (70) at (7,0) [] {}; \node[empty] (750) at (7.5,0) [] {}; \node[bullet] (80) at (8,0) [labelAbove={$-a_2$},labelBelow={$A_2$}] {}; \node[bullet] (90) at (9,0) [labelAbove={$-a_1+n^2$},labelBelow={$A_1$}] {}; \node[bullet] (100) at (10,0) [labelAbove={$+1$},labelBelow={$L$}] {}; \draw [-] (10)--(20); \draw [-] (20)--(250); \draw [dotted]
(20)--(350); \draw [-] (30)--(350); \draw [-] (350)--(450); \draw [-] (450)--(550); \draw [-] (550)--(650); \draw [-] (650)--(70); \draw [dotted] (70)--(750); \draw [-] (750)--(80); \draw [-] (80)--(90); \draw [-] (90)--(100); \end{tikzpicture} \end{aligned} \end{equation} In order to represent the homology classes of the $E_i$'s and $A_j$'s in $H_2(W;\mathbb{Z})$, we use again the Riemenschneider dot diagram for $n/a$ as in Section~\ref{section:topology-hair}. For the $E_i$'s, it is convenient to label each dot as in \eqref{equation:labeling-row}. Then their homology classes are presented exactly as described in Equation~\eqref{equation:homology-row}: \begin{equation}\label{equation:homology-column} \begin{aligned} [E_i]&=e_{i,0}-e_{i,1}-\dotsb-e_{i,b_i-2}-e_{i+1,0} \\ [E_r]&=e_{r,0}-e_{r,1}-\dotsb-e_{r,b_r-2}-e_{r,b_r-1} \end{aligned} \end{equation} But, for the $A_j$'s, it is convenient to decorate each dot in the $j$th column of the dot diagram as below: \begin{equation*} \begin{tikzpicture} \node[smallbullet] at (-1.5,1.25) []{}; \node[smallbullet] at (-1.5,1) []{}; \node[smallbullet] at (-1.5,0.75) []{}; \node[bullet] at (-1.5,0) [labelAbove={$e_{a_{j-1}-2,j-1}$}] {}; \node[bullet] at (0,0) [labelAbove={$e_{0,j}$}] {}; \node[bullet] at (0,-1) [labelAbove={$e_{1,j}$}] {}; \node[smallbullet] at (0,-1.75) []{}; \node[smallbullet] at (0,-2) []{}; \node[smallbullet] at (0,-2.25) []{}; \node[bullet] at (0,-3) [labelAbove={$e_{a_j-2,j}$}] {}; \node[bullet] at (1.5,-3) [labelAbove={$e_{0,j+1}$}] {}; \node[smallbullet] at (1.5,-3.25) []{}; \node[smallbullet] at (1.5,-3.5) []{}; \node[smallbullet] at (1.5,-3.75) []{}; \end{tikzpicture} \end{equation*} And let $e_{a_s-1,s}$ be the homology class of the exceptional divisor of the final blow-up in the sequence of blow-ups for constructing $W$.
Then the homology classes of the $A_j$'s are given as follows: \begin{equation}\label{equation:homology-A} \begin{aligned} [A_1] &= nl-e_{0,1}-\dotsc-e_{a_1-2,1}-e_{0,2},\\ [A_j] &= e_{0,j}-e_{1,j}-\dotsc-e_{a_j-2,j}-e_{0,j+1},\\ [A_s] &= e_{0,s}-e_{1,s}-\dotsc-e_{a_s-2,s}-e_{a_s-1,s}, \end{aligned} \end{equation} where $1 < j < s$. \begin{example}[Continued from Example~\ref{example:3-4-2-homology-hair}]\label{example:3-4-2-homology-tail} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. Using the same labeling as in Example~\ref{example:3-4-2-homology-hair}, the homology classes of the $A_j$'s are given by \begin{equation*} \begin{aligned} [A_1] &= nl-e_1-e_2, \\ [A_2] &= e_2-e_3-e_4, \\ [A_3] &= e_4-e_5, \\ [A_4] &= e_5-e_6-e_7. \end{aligned} \end{equation*} \end{example} \section{From picture deformations to Equations}\label{section:CQSS-PictoE} Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{n}(1,a)$. We would like to determine the parameter $\underline{k}$ of the component of $\Def(X)$ corresponding to a given picture deformation of $(X,p)$. Némethi and Popescu-Pampu~\cite[\S10]{Nemethi-Poposcu-Pampu-2010} already provided a procedure for determining the parameter $\underline{k}$ from a given picture deformation. Their approach is topological. Naturally, we get the same result as in Némethi--Popescu-Pampu~\cite{Nemethi-Poposcu-Pampu-2010}. However, we will use a method different from theirs. We will compare the two compactifications presented in Sections~\ref{section:CQSS-sandwiched} and \ref{section:CQSS-PtoE}, namely the compactification for comparing P-resolutions and picture deformations, and the compactification for determining the parameters $\underline{k}$ from P-resolutions. \subsection{Comparing two compactifications} Let $Y$ be the compactification of $X$ described in Section~\ref{section:CQSS-sandwiched} induced from its sandwiched structure and let $(W,E=\cup_{i=1}^{r} E_i)$ be its minimal resolution.
Similarly, let $Y'$ be the compactification of $X$ in Section~\ref{section:CQSS-PtoE} and let $(W',E'=\cup_{i=1}^{r} E_i')$ be its minimal resolution. Observe that $W$ and $W'$ are diffeomorphic because $W$ and $W'$ are obtained from $\mathbb{CP}^2$ by the same number of blow-ups. Explicitly, as seen in Section~\ref{section:topology-hair} and Section~\ref{section:topology-tail}, the generators of $H_2(W;\mathbb{Z})$ and $H_2(W';\mathbb{Z})$ correspond identically to the dots of the Riemenschneider dot diagram and one additional $(-1)$-sphere. So we may assume that $H_2(W;\mathbb{Z}) = \langle l, e_{i,j} \rangle$ and $H_2(W';\mathbb{Z}) = \langle l, e_{i,j}' \rangle$, where $e_{i,j}$ and $e_{i,j}'$ denote the homology classes of the $(-1)$-curves corresponding to the same dot of the Riemenschneider dot diagram (including the one additional dot). Then there is a diffeomorphism $\phi \colon W \to W'$ such that, if $\phi_{\ast} \colon H_2(W;\mathbb{Z}) \to H_2(W';\mathbb{Z})$ is the induced isomorphism, then $\phi_{\ast}(e_{i,j})=e_{i,j}'$ for all $i,j$. According to Equations~\eqref{equation:homology-row} and \eqref{equation:homology-column}, we have $\phi_{\ast}[E_i]=[E_i']$ for all $i=1,\dotsc,r$. Let $(D=\cup_{j=1}^{s} D_j,l)$ be the compactified decorated curve of $(Y,p)$ and let $\widetilde{D}_j$ be the proper transform of $D_j$ in $W$. On the other hand, let $A_1,\dotsc,A_s$ be the curves on $W'$ whose dual graph is given by Equation~\eqref{equation:CQSS-compactification-tail-modified}. Notice that the number of the $D_j$'s and that of the $A_j$'s are the same. Furthermore, by Equations~\eqref{equation:homology-D} and \eqref{equation:homology-A}, if we relabel the $D_j$'s properly, then we have the following equalities on the homology level: \begin{equation}\label{equation:Dj=A1+...+Aj} \phi_{\ast}[\widetilde{D}_j] = [A_1] + \dotsb + [A_j].
\end{equation} \begin{example}[Continued from Examples~\ref{example:3-4-2-homology-hair} and \ref{example:3-4-2-homology-tail}] One can check that \begin{equation*} \begin{aligned} \phi_{\ast}[\widetilde{D}_1] &= [A_1], \\ \phi_{\ast}[\widetilde{D}_2] &= [A_1]+[A_2], \\ \phi_{\ast}[\widetilde{D}_3] &= [A_1]+[A_2]+[A_3], \\ \phi_{\ast}[\widetilde{D}_4] &= [A_1]+[A_2]+[A_3]+[A_4]. \end{aligned} \end{equation*} \end{example} \subsection{Identification procedure} Let $(\mathcal{C},\mathcal{L}) \to \Delta$ be a picture deformation of $(X,p)$. We may assume that the branches $C_i$ are labeled properly so that Equation~\eqref{equation:Dj=A1+...+Aj} holds. Let $M$ be the incidence matrix corresponding to $(\mathcal{C},\mathcal{L})$. We define a number $d_i$ by \begin{equation}\label{equation:di} d_i = \#\{j \mid \text{$M_{pj}=0$ for $p < i$ and $M_{pj}=1$ for $p \ge i$}\}. \end{equation} Let $k_i=a_i-d_i$ for $i=1,\dotsc,s$, where $n/(n-a)=[a_1,\dotsc,a_s]$. \begin{theorem} The sequence $(k_1,\dotsc,k_s)$ is the parameter corresponding to the component of $\Def(X)$ that contains the deformation of $(X,p)$ induced by the picture deformation $(\mathcal{C},\mathcal{L})$. \end{theorem} \begin{proof} Let $U \to X$ be the P-resolution corresponding to the given picture deformation $(\mathcal{C},\mathcal{L})$. Let $Z \to Y$ and $Z' \to Y'$ be the compactified P-resolutions corresponding to $U \to X$, respectively. Notice that the diffeomorphism $\phi \colon W \to W'$ can be extended to a diffeomorphism between the minimal resolutions $\widetilde{Z}$ and $\widetilde{Z'}$ because $\widetilde{Z}$ and $\widetilde{Z'}$ are obtained from $W$ and $W'$ by the same sequence of blow-ups at points lying on the exceptional divisors $E$ and $E'$, respectively. Let $Z_t$ and $Z_t'$ be general fibers of the $\mathbb{Q}$-Gorenstein smoothings $\mathcal{Z} \to \Delta$ and $\mathcal{Z}' \to \Delta$, respectively.
Topologically, $Z_t$ and $Z_t'$ are obtained from $\widetilde{Z}$ and $\widetilde{Z}'$, respectively, via rational blowdown surgeries along the T-singularities. Therefore there is a diffeomorphism $\phi_t \colon Z_t \to Z_t'$ that is induced from the diffeomorphism $\phi \colon \widetilde{Z} \to \widetilde{Z}'$. Then Equation~\eqref{equation:Dj=A1+...+Aj} still holds for $\phi_t$. That is, we have \begin{equation*} \phi_{t,\ast}[\widetilde{D}_i] = [A_1] + \dotsb + [A_i] \end{equation*} for all $i=1,\dotsc,s$. Therefore, if $d_i > 0$, then there are $d_i$ $(-1)$-curves intersecting only $A_i$ transversally (and not intersecting any $A_j$ for $j \neq i$). Then, as we have seen in Section~\ref{section:CQSS-PtoE} (or by Equation~\eqref{equation:k}), the sequence $(k_1,\dotsc,k_s)$ is the desired parameter corresponding to the P-resolution $U \to X$, hence to the given picture deformation $(\mathcal{C},\mathcal{L})$ of $(X,p)$. \end{proof} \begin{remark} The number $d_i$ defined in Equation~\eqref{equation:di} is the same as the value defined in Némethi--Popescu-Pampu~\cite[Proposition~10.1.10]{Nemethi-Poposcu-Pampu-2010}. \end{remark} \begin{example}[Continued from Examples~\ref{example:3-4-2-incidence-matrices}, \ref{example:3-4-2-k}, \ref{example:3-4-2-PtoE}] Let $(X,p)$ be a cyclic quotient surface singularity $\frac{1}{19}(1,7)$. Then the correspondences are given as follows.
\begin{center} \begin{tabulary}{\textwidth}{p{13em}p{10em}p{10em}} \toprule Picture deformations & P-resolutions & Equations \\ \midrule $\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ \end{bmatrix}$ & $3-4-[2]$ & $(1,2,2,1)$ \\ \midrule $\begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 & 1 \\ \end{bmatrix}$ & $3-[4]-2$ & $(1,3,1,2)$ \\ \midrule $\begin{bmatrix} 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 & 0 \\ \end{bmatrix}$ & $[4]-1-[5,2]$ & $(2,2,1,3)$ \\ \bottomrule \end{tabulary} \end{center} \end{example}
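The identification procedure above is completely mechanical, so the table can be checked by a short computation. The following Python sketch (our illustration; the helper names are ours) computes $d_i$ from Equation~\eqref{equation:di} and $k_i = a_i - d_i$ for the three incidence matrices of $\frac{1}{19}(1,7)$, where $19/(19-7) = [2,3,2,3]$, and also verifies that each resulting sequence represents a zero HJ continued fraction, as required for membership in $K_s$:

```python
from fractions import Fraction

def k_from_incidence(M, a):
    """Compute k_i = a_i - d_i from an incidence matrix M (list of rows),
    where d_i counts columns that are 0 above row i and 1 from row i on,
    as in the definition of d_i in the text."""
    s = len(a)
    cols = list(zip(*M))
    k = []
    for i in range(s):  # i is the 0-based version of the row index
        d_i = sum(1 for c in cols
                  if all(c[p] == 0 for p in range(i))
                  and all(c[p] == 1 for p in range(i, s)))
        k.append(a[i] - d_i)
    return tuple(k)

def hj_cf(ks):
    """Hirzebruch--Jung continued fraction [k_1,...,k_s]."""
    val = Fraction(ks[-1])
    for x in reversed(ks[:-1]):
        val = x - 1 / val
    return val

a = (2, 3, 2, 3)  # 19/12 = [2,3,2,3]
# The three incidence matrices from the table above.
M1 = [[0,0,0,0,0,1,1],[0,0,0,1,1,0,1],[0,0,1,0,1,0,1],[1,1,0,0,1,0,1]]
M2 = [[0,0,0,0,1,1],[0,0,1,1,0,1],[0,1,0,1,0,1],[1,1,1,0,0,1]]
M3 = [[0,0,0,1,1],[0,1,1,0,1],[1,0,1,0,1],[1,1,1,1,0]]
for M in (M1, M2, M3):
    k = k_from_incidence(M, a)
    assert hj_cf(k) == 0  # consistency with the definition of K_s
    print(k)  # (1, 2, 2, 1), (1, 3, 1, 2), (2, 2, 1, 3)
```

The printed sequences agree with the third column of the table.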
\section{Introduction} A polynomial in variables $x_1,\dots,x_n$ is \emph{symmetric} if permuting the variables does not change it. Another way to view symmetric polynomials is as invariant polynomials under the action of the symmetric group. A natural generalization of symmetric polynomials then arises: if $s_{i,j}$ is the operator on polynomials that swaps the variables~$x_i$ and~$x_j$, then we may consider polynomials~$F$ such that $F-s_{i,j}(F)$ vanishes to some order at $x_i=x_j$. Notably, if~$F$ is symmetric in~$x_i$ and $x_j$, then $F-s_{i,j}(F)$ vanishes to infinite order. These polynomials may be viewed as quasi-invariant polynomials of the symmetric group, and were introduced by Chalykh and Veselov \cite{CV} in the study of quantum Calogero--Moser systems. \begin{Definition}Let $k$ be a field, $n$ be a positive integer, and $m$ be a nonnegative integer. We say that a polynomial $F\in k[x_1,\dots,x_n]$ is $m$-quasi-invariant if \[(x_i-x_j)^{2m+1}\,|\, F(x_1,\dots,x_i,\dots,x_j,\dots,x_n)-F(x_1,\dots,x_j,\dots,x_i,\dots,x_n)\] for all $1\le i,j\le n$. Denote by $Q_m(n)$ the set of all $m$-quasi-invariant polynomials over $k$ in $n$ variables. \end{Definition} Here, we use the odd exponent $2m+1$ because if the difference $F-s_{i,j}(F)$ is divisible by $(x_i-x_j)^{2m}$, then it is also divisible by $(x_i-x_j)^{2m+1}$. This follows from the anti-symmetry of $F-s_{i,j}(F)$ in $x_i$ and $x_j$. Note that $Q_m(n)$ is a module over the ring of symmetric polynomials over $k$ in $n$ variables. Also, as $Q_m(n)$ is a space of polynomials, it has a grading by degree. Thus, we may define a Hilbert series and a Hilbert polynomial to encapsulate the structure of $Q_m(n)$. \looseness=-1 The motivation for studying quasi-invariant polynomials arises from their relation with integrable systems.
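To make the definition concrete, the divisibility condition is easy to test by machine. The following Python sketch (our illustration, not part of the formal development) represents a polynomial in two variables $x$, $y$ as a dictionary of exponent pairs and tests $m$-quasi-invariance by substituting $x=s+t$, $y=s$, under which $(x-y)^{2m+1}$ becomes $t^{2m+1}$:

```python
from math import comb
from collections import defaultdict

def is_quasi_invariant(F, m):
    """Check that (x-y)^(2m+1) divides F(x,y) - F(y,x).
    F is a dict {(i, j): coeff} representing sum of coeff * x^i * y^j."""
    # G = F(x,y) - F(y,x)
    G = defaultdict(int)
    for (i, j), c in F.items():
        G[(i, j)] += c
        G[(j, i)] -= c
    # Substitute x = s + t, y = s, so (x - y)^(2m+1) becomes t^(2m+1).
    # Collect the coefficient of t^k as a polynomial in s.
    t_coeffs = defaultdict(lambda: defaultdict(int))
    for (i, j), c in G.items():
        # (s+t)^i * s^j = sum over kk of C(i,kk) * s^(i-kk+j) * t^kk
        for kk in range(i + 1):
            t_coeffs[kk][i - kk + j] += c * comb(i, kk)
    # Divisible by t^(2m+1) iff coefficients of t^0..t^(2m) vanish identically.
    return all(all(v == 0 for v in t_coeffs[kk].values())
               for kk in range(2 * m + 1))

# (x-y)^3 = x^3 - 3x^2 y + 3x y^2 - y^3 is 1-quasi-invariant,
# while x^3 is 0-quasi-invariant but not 1-quasi-invariant.
cube = {(3, 0): 1, (2, 1): -3, (1, 2): 3, (0, 3): -1}
print(is_quasi_invariant(cube, 1))         # True
print(is_quasi_invariant({(3, 0): 1}, 0))  # True
print(is_quasi_invariant({(3, 0): 1}, 1))  # False
```

For example, $x^3-y^3=(x-y)\big(x^2+xy+y^2\big)$ is divisible by $(x-y)$ but not by $(x-y)^3$, matching the last two checks.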
In 1971, Calogero first solved the problem in mathematical physics of determining the energy spectrum of a one-dimensional system of quantum-mechanical particles with inversely quadratic potentials~\cite{calogero1971solution}. Moser later connected the classical variant of this problem with integrable Hamiltonian systems and showed that the classical analogue is indeed integrable~\cite{moser1975three}. These so-called Calogero--Moser systems have been of great interest to mathematicians as they connect many different fields including algebraic geometry, representation theory, deformation theory, homological algebra, and Poisson geometry. See, e.g.,~\cite{etingof2006lectures} and the references therein. Quasi-invariant polynomials are deeply related with solutions of quantum Calogero--Moser systems as well as representations of Cherednik algebras \cite{feigin2002quasi}. As such, the structure of $Q_m(n)$, in particular its freeness as a module, and the corresponding Hilbert series and polynomials have been extensively investigated. The spaces $Q_m(n)$ were introduced by Feigin and Veselov in 2001; their Hilbert series and lowest-degree non-symmetric elements were subsequently computed by Felder and Veselov \cite{felder2003action}. In 2010, Berest and Chalykh generalized the idea to quasi-invariant polynomials over an arbitrary complex reflection group \cite{berest2011quasi}. Recently in 2016, Braverman, Etingof, and Finkelberg proved freeness results and computed the Hilbert series of a generalization of~$Q_m(n)$ twisted by monomial factors \cite{braverman2016cyclotomic}. Our goal is to extend the investigation of~$Q_m(n)$ and its various generalizations. In Section~\ref{sec:p}, we investigate quasi-invariant polynomials over finite fields. In particular, we provide sufficient conditions under which the Hilbert series over characteristic $p$ is greater than over characteristic~0. We conjecture that our sufficient conditions are also necessary.
We also make conjectures about the properties of the Hilbert series over finite fields. In Section~\ref{sec:twist}, we investigate a generalization of the twisted quasi-invariants. In~\cite{braverman2016cyclotomic}, Braverman, Etingof and Finkelberg introduced the space of quasi-invariants twisted by monomial factors, again a module over the ring of symmetric polynomials. They proved freeness results and computed the corresponding Hilbert series. We generalize their work to the space of quasi-invariants twisted by arbitrary smooth functions and determine the Hilbert series in certain cases when there are two variables. In Section~\ref{sec:fut}, we discuss future directions for our research, in particular considering spaces of polynomial differential operators and $q$-deformations. \section[Quasi-invariant polynomials over fields of nonzero characteristic]{Quasi-invariant polynomials over fields\\ of nonzero characteristic}\label{sec:p} Much of the previous research on quasi-invariant polynomials has been done over fields of characteristic zero. The general approach there is to use representations of spherical rational Cherednik algebras \cite{braverman2016cyclotomic}. In the case of fields of positive characteristic, we take a different approach. Let $k$ be $\mathbb F_p$, and $Q_m(n)$ the set of all $m$-quasi-invariant polynomials over $k$ in $n$ variables. To begin, we define the Hilbert series of $Q_m(n)$. \begin{Definition} Let the Hilbert series of $Q_m(n)$ be \[H_m(t)=\sum_{d\ge0}t^d\cdot\dim Q_{m,d}(n),\] where $Q_{m,d}(n)$ is the $k$-vector subspace of $Q_m(n)$ consisting of polynomials of degree $d$. \end{Definition} By the Hilbert basis theorem, $Q_m(n)$ is a finitely generated module over the ring of symmetric polynomials.
Thus, we may write \[H_m(t)=\frac{G_m(t)}{\prod\limits_{i=1}^n(1-t^i)},\] where $G_m(t)$ is the Hilbert polynomial associated with $H_m(t)$ and the terms in the denominator correspond to the elementary symmetric polynomials that generate the ring of symmetric polynomials in $n$ variables. We are mainly concerned with the difference between the Hilbert series of $Q_m(n)$ over characteristic $p$ and characteristic 0. The following proposition states that the Hilbert series of $Q_m(n)$ is at least as large in the former case as in the latter case. \begin{Proposition}\label{prop:ge} $\dim Q_{m,d}(n)$ over $\mathbb F_p$ is at least as large as over $\mathbb C$ for each choice of $m$, $n$, and $d$. \end{Proposition} \begin{proof}Suppose that $F=\sum_{i_1+\cdots+i_n=d}a_{i_1,\dots,i_n}x_1^{i_1}\cdots x_n^{i_n}$ is in $Q_{m,d}(n)$. Then either $d<2m+1$, in which case $F$ must be symmetric (or else $(x_i-x_j)^{2m+1}$ would divide a nonzero polynomial of degree $d$ for some choice of $i$ and $j$, a contradiction); this means that the dimensions are equal over either characteristic. Otherwise, we have that \[F-s_{i,j}F=(x_i-x_j)^{2m+1}\left(\sum_{j_1+\dots+j_n=d-(2m+1)}b_{i,j,j_1,\dots,j_n}x_1^{j_1}\cdots x_n^{j_n}\right)\] for each pair $i$, $j$. These yield a system of linear equations with integer coefficients in the undetermined coefficients of~$F$ and of $\frac{F-s_{i,j}F}{(x_i-x_j)^{2m+1}}$. It then follows from considering the null-space that the dimension of the solution space over a field of characteristic~$p$ is at least the dimension over a field of characteristic~0. \end{proof} However, for each $m$, there are only finitely many primes for which the Hilbert series of~$Q_m(n)$ is strictly greater over ${\mathbb F}_p$ than over $\mathbb C$. \begin{Proposition} For any fixed $m$ and $n$, there are only finitely many primes $p$ for which the Hilbert series of $Q_m(n)$ is greater over $\mathbb F_p$ than over $\mathbb C$.
\end{Proposition} \begin{proof} Let $P=\mathbb Z[x_1,\dots,x_n]$, $Q=\bigoplus_{1\le i<j\le n}P/(x_i-x_j)^{2m+1}P$, and $h$ be the linear map from~$P$ to~$Q$ defined as \[ h(F)=\bigoplus_{1\le i<j\le n}(1-s_{i,j})F.\] Note that $\operatorname{Ker}(h)$ coincides with $Q_m(n)$ by definition. Set $M=\operatorname{Coker}(h)$ as the cokernel of~$h$ in~$Q$ and note that if $Q_m(n)$ over $\mathbb F_p$ has a higher dimension than $Q_m(n)$ over $\mathbb C$ in some degree, then $M$ must have $p$-torsion. To prove that there are only finitely many such primes~$p$, we use the following generic freeness lemma, see, e.g., \cite[Theorem~14.4]{Eisenbud}. \begin{Lemma} For a Noetherian integral domain $A$, a~finitely generated $A$-algebra $B$, and a~finitely generated $B$-module~$M$, there exists a nonzero element~$r$ of~$A$ such that the localization~$M_r$ is a~free $A_r$-module. \end{Lemma} We apply this in the case where $A=\mathbb Z$, $B=\mathbb Z[x_1,\dots,x_n]^{S_n}$ and $M=\operatorname{Coker}(h)$. It is easy to see that these satisfy the conditions for $A$, $B$, and $M$ in the lemma. Thus there exists an integer $r\in\mathbb Z\setminus\{0\}$ such that $M_r$ is free over $\mathbb Z[1/r]$. As~$M$ has no $p$-torsion for any $p\nmid r$, $M$~has no $p$-torsion for all but finitely many primes~$p$, so the Hilbert series over~$\mathbb F_p$ is the same as over~$\mathbb C$ for all such~$p$. \end{proof} We now determine the primes for which the Hilbert series of $Q_m(n)$ is greater. First, we examine the case $n=2$. \begin{Proposition}When $n=2$, the Hilbert series for $Q_m(2)$ over characteristic~$p$ coincides with that of characteristic~$0$. It is $\frac{1+t^{2m+1}}{(1-t)(1-t^2)}$ over all fields. \end{Proposition} \begin{proof} We claim that the dimension of $Q_{m,d}(2)$ over $\mathbb C$ is equal to the dimension of~$Q_{m,d}(2)$ over~$\mathbb F_p$.
By Proposition~\ref{prop:ge}, it suffices to show that for each $m$ and $d$, the dimension of~$Q_{m,d}(2)$ over $\mathbb C$ is at least the dimension of~$Q_{m,d}(2)$ over~$\mathbb F_p$. Consider a basis $f_1,\dots,f_k\in\mathbb F_p[x,y]$ of~$Q_{m,d}(2)$ over~$\mathbb F_p$. We will show the existence of elements $F_1,\dots,F_k\in\mathbb Z[x,y]$ of $Q_{m,d}(2)$ such that $F_i\equiv f_i\pmod p$ for all~$i$. This implies that $F_1,\dots,F_k$ are linearly independent, as otherwise there exist relatively prime integers $n_1,\dots,n_k$ with $n_1F_1+\cdots+ n_kF_k=0$. Taking the equation modulo $p$ yields $n_1f_1+\cdots+ n_kf_k\equiv0\pmod p$, a contradiction with $f_1,\dots,f_k$ forming a basis of $Q_{m,d}(2)$ as not all of $n_1,\dots,n_k$ are divisible by~$p$. To show the existence of such $F_1,\dots,F_k$, let $f=f_i$ for a fixed $i$ and suppose that $f(x,y)-f(y,x)=(x-y)^{2m+1}g(x,y)$ for some symmetric $g(x,y)\in\mathbb F_p[x,y]$. Let us take a symmetric $G(x,y)\in \mathbb Z[x,y]$ such that $G\equiv g\pmod{p}$. Let $f(x,y)=\sum_{i=0}^da_ix^iy^{d-i}$ and suppose that $G(x,y)(x-y)^{2m+1}=\sum_{i=0}^dB_ix^iy^{d-i}$ with $a_i\in\mathbb F_p$ and $B_i\in \mathbb Z$. We have that $a_i-a_{d-i}\equiv B_i\pmod{p}$. Note that $G(x,y)$ is symmetric, so $G(x,y)(x-y)^{2m+1}$ is anti-symmetric, which implies that $B_i+B_{d-i}=0$ for all $i$. Now, define $F(x,y)=\sum_{i=0}^dA_ix^iy^{d-i}$, where $A_i\equiv a_i\pmod{p}$ for \mbox{$i\le\frac d2$} and $A_i=A_{d-i}+B_i$ for \mbox{$i>\frac d2$}. Note that for \mbox{$i>\frac d2$}, we have that $A_i\equiv A_{d-i}+B_i\equiv a_i\pmod{p}$, so this $F$ satisfies $F\equiv f\pmod p$. It remains to check the quasi-invariance condition. However, note that \begin{gather*} F(x,y)-F(y,x)=\sum_{i=0}^d(A_i-A_{d-i})x^iy^{d-i}=\sum_{i=0}^dB_ix^iy^{d-i}=G(x,y)(x-y)^{2m+1} \end{gather*} by definition, so we are done. Hence, the dimension, and thus the series, is independent of $p$.
It is known from~\cite{braverman2016cyclotomic} that the series is $\frac{1+t^{2m+1}}{(1-t)(1-t^2)}$, as desired. \end{proof} When $n>2$, the series differs greatly for many primes. In this case, we have found a sufficient condition for when the Hilbert series over characteristic $p$ is greater. \begin{Theorem}\label{thm:suf} Let $m\ge0$ and $n\ge3$ be integers. Let $p$ be a prime such that there exist integers $a\ge 0$ and $k\ge0$ with \[\frac{mn(n-2)+\binom n2}{n(n-2)k+\binom n2-1}\le p^a\le\frac{mn}{nk+1}.\] Then the Hilbert series of $Q_m(n)$ with $n$ variables over $\mathbb F_p$ is different from the Hilbert series over $\mathbb C$. \end{Theorem} \begin{proof} The following formula, due to \cite{felder2003action}, gives the Hilbert polynomial for $Q_m(n)$ over $\mathbb C$: \[n!t^{m\binom n2}\sum_{\text{Young diagrams}}\prod_{i=1}^nt^{m(\ell_i-a_i)+\ell_i}\frac{1-t^i}{h_i\big(1-t^{h_i}\big)}.\] Here, the sum is over Young diagrams with $n$ boxes, $a_i$ denotes the number of boxes to the right of the $i$th box, $\ell_i$ denotes the number of boxes below the $i$th box, and $h_i=a_i+\ell_i+1$. It is not hard to see that the formula gives that the Hilbert polynomial is of the form $1+(n-1)t^{mn+1}+\cdots$, where the exponents are sorted in ascending order. This is because the two terms of smallest degree are contributed by the Young diagrams corresponding to the partitions $(n)$ and $(n-1,1)$. This implies that all polynomials in $Q_m(n)$ with degree at most $mn$ are symmetric, and~$Q_m(n)$ as a~module over symmetric polynomials has a generator of degree $mn+1$. For any $m$, denote this generator in $Q_m(n)$ by~$P_m$. In the following construction, we will use the generator $P_k$ of $Q_k(n)$ for certain $k<m$.
To show that the Hilbert series is different over $\mathbb F_p$, we consider the following non-symmetric polynomial: \[F=P_k^{p^a}\prod_{1\le i<j\le n}(x_i-x_j)^{2b}.\] Here $b=\frac{2m+1-p^a(2k+1)}2$, and $a$, $k$ are integers such that the above inequalities are satisfied. So \begin{gather*} \deg F=p^a(nk+1)+2b\binom n2=p^a(nk+1)+\binom n2(2m+1-p^a(2k+1))\\ \hphantom{\deg F}{} =\binom n2(2m+1)+p^a\left(1-\binom n2-n(n-2)k\right)\\ \hphantom{\deg F}{} \le\binom n2(2m+1)-\left(mn(n-2)+\binom n2\right) =mn<\deg P_m. \end{gather*} Hence, if we show that $F\in Q_m(n)$, then as $\deg F<\deg P_m$ we obtain a different Hilbert series over $\mathbb F_p$, in particular in the coefficient of $t^{\deg F}$. To do that, note that $b$ is an integer when~$p$ is odd and a half-integer when $p=2$. Either way, $\prod(x_i-x_j)^{2b}$ is a~symmetric polynomial in~$\mathbb F_p$, so we have that $(1-s_{i,j})\big(P_k^{p^a}\prod(x_i-x_j)^{2b}\big)=((1-s_{i,j})P_k)^{p^a}\prod(x_i-x_j)^{2b}$ by the fact that $(u+v)^{p^a}=u^{p^a}+v^{p^a}$ in $\mathbb F_p$. Hence, as $(x_i-x_j)^{2k+1}$ divides $(1-s_{i,j})P_k$ by assumption, we have that $(x_i-x_j)^{p^a(2k+1)+2b}=(x_i-x_j)^{2m+1}$ divides $(1-s_{i,j})F$. Hence, $F$ is in~$Q_m(n)$, so this produces a generator of $Q_m(n)$ of lower degree in $\mathbb F_p$ and thus a different Hilbert series over~$\mathbb F_p$, as desired. \end{proof} \begin{Remark}Let us write the inequalities in Theorem \ref{thm:suf} in the form \[ k+\frac{1}{n}\le m/p^a\le k+\frac{n+1}{2n}-\frac{n-1}{2(n-2)p^a}.\] In this way, we can rewrite it as \[ \frac{1}{n}\le \left\{\frac{m}{p^a}\right\}\le \frac{n+1}{2n}-\frac{n-1}{2(n-2)p^a}, \] where $\{\cdots\}$ denotes fractional part, which eliminates $k$. Also from this form it is clear that~$a$ cannot be zero, i.e., $a\ge 1$. \end{Remark} \begin{Remark} Let $k=0$ in the inequality in Theorem~\ref{thm:suf}. Then, we see that all primes $p$ with a~power $p^a$ between roughly $2m$ and $mn$ satisfy the inequality. 
These primes $p$ satisfy the property that $Q_m(n)$ over $\mathbb F_p$ has a different Hilbert series than over $\mathbb C$. \end{Remark} \begin{Conjecture}\label{conj:mn} The sufficient condition we have given in Theorem~{\rm \ref{thm:suf}} is also necessary. That is if the Hilbert series of $Q_m(n)$ in $\mathbb F_p$ is different from the Hilbert series in $\mathbb C$, then there exist integers $a\ge 0$ and $k\ge0$ such that \[\frac{mn(n-2)+\binom n2}{n(n-2)k+\binom n2-1}\le p^a\le\frac{mn}{nk+1}.\] In particular, if $p>mn$, then the Hilbert series over $\mathbb F_p$ is the same as over $\mathbb C$. \end{Conjecture} This is supported by computer calculations, especially in the case of $n=3,4$. They suggest that the Hilbert series takes a form depending on the smallest non-symmetric element of $Q_m(n)$ which is described by the proof of Theorem~\ref{thm:suf} and hence satisfies the conjecture. The following table summarizes the results of our computer program verification for $n=3$, $m\le15$ and $p\le50$. Each box in which the series is greater over $\mathbb F_p$ than over $\mathbb C$ is labeled with its integers $a$, $k$ that make the inequality hold. 
\begin{table}[!h]\centering\small \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \diagbox{$m$}{$p$}&2&3&5&7&11&13&17&19&23&29&31&37&41&43&47\\ \hline 0&&&&&&&&&&&&&&&\\\hline 1&&$1,0$&&&&&&&&&&&&&\\\hline 2&&&$1,0$&&&&&&&&&&&&\\\hline 3&$3,0$&$2,0$&&$1,0$&&&&&&&&&&&\\\hline 4&$3,0$&$2,0$&&&$1,0$&&&&&&&&&&\\\hline 5&&$2,0$&&&$1,0$&$1,0$&&&&&&&&&\\\hline 6&$4,0$&&&&$1,0$&$1,0$&$1,0$&&&&&&&&\\\hline 7&$4,0$&$1,2$&$1,1$&&&$1,0$&$1,0$&$1,0$&&&&&&&\\\hline 8&$4,0$&&&&&&$1,0$&$1,0$&$1,0$&&&&&&\\\hline 9&$4,0$&$3,0$&$2,0$&&&&$1,0$&$1,0$&$1,0$&&&&&&\\\hline 10&&$3,0$&$2,0$&$1,1$&&&$1,0$&$1,0$&$1,0$&$1,0$&&&&&\\\hline 11&$5,0$&$3,0$&$2,0$&&&&&$1,0$&$1,0$&$1,0$&$1,0$&&&&\\\hline 12&$5,0$&$3,0$&$2,0$&&&&&&$1,0$&$1,0$&$1,0$&&&&\\\hline 13&$5,0$&$3,0$&$2,0$&&&&&&$1,0$&$1,0$&$1,0$&$1,0$&&&\\\hline 14&$5,0$&$3,0$&$2,0$&&&&&&$1,0$&$1,0$&$1,0$&$1,0$&$1,0$&&\\\hline 15&$5,0$&$3,0$&$2,0$&&$1,1$&&&&&$1,0$&$1,0$&$1,0$&$1,0$&$1,0$&\\ \hline\end{tabular} \end{table} Through our programs, we have found that when $n=3$, the Hilbert series takes the form \[\frac{1+2t^d+2t^{6m+3-d}+t^{6m+3}}{(1-t)(1-t^2)(1-t^3)}\] for small $p$, where $d$ is the degree of the smallest non-symmetric generator of $Q_m$ in $\mathbb F_p$. In particular, this smallest non-symmetric polynomial in $Q_m$ is of the form $P_k^{p^a}\prod(x_i-x_j)^{2b}$ where the $P_k$ are as described in Theorem~\ref{thm:suf}. Furthermore, we conjecture that $Q_m$ is a free module over the ring of symmetric polynomials for $n=3$ over any field. In \cite{felder2003action}, the authors prove some properties of Hilbert series and polynomials over $\mathbb C$, specifically their maximal term and symmetry. We believe that similar results still hold over $\mathbb F_p$, and this is supported by our computer calculations for $n=3,4$. \begin{Conjecture}\label{conj:mainc} The largest degree term in the Hilbert polynomial is always $t^{\binom n2(2m+1)}$. 
Furthermore, when $p$ is an odd prime $Q_m$ is a free module over the ring of the symmetric polynomials of rank~$n!$, and the Hilbert polynomial is palindromic. \end{Conjecture} \begin{Remark} The condition that $p$ is odd appears to be necessary. Indeed, a computer calculation shows that when $n=4$, $m=1$ and $p=2$, the Hilbert series is \begin{gather*} 1+t+2t^2+3t^3+8t^4+9t^5+15t^6+23t^7+38t^8+50t^9+71t^{10}+\cdots\\ \qquad {} = \frac{1+3t^4+3t^7+5t^8+3t^9-t^{10}+\cdots}{(1-t)(1-t^2)(1-t^3)(1-t^4)}, \end{gather*} and the negative coefficient implies that the module cannot be free. In particular, computing the polynomial up to~$t^{18}$ also demonstrates that it is not symmetric. \end{Remark} \section{Twisted quasi-invariants}\label{sec:twist} \subsection{A generalization of quasi-invariants} In \cite{braverman2016cyclotomic}, Braverman, Etingof and Finkelberg introduced quasi-invariants twisted by a monomial $x_1^{a_1}\cdots x_n^{a_n}$, where $a_1\dots,a_n\in\mathbb C$. We further generalize this by allowing the twist to be a~product of general functions. To be more precise, let $m$ be a nonnegative integer. Fix one-variable meromorphic functions $f_1,f_2,\dots,f_n$, and denote by $D\subset \mathbb{C}^n$ the domain where the product $f_1(x_1)f_2(x_2)\cdots f_n(x_n)$ and its inverse are smooth. \begin{Definition} We define $Q_m(f_1,\dots,f_n)$ to be the space of polynomials $F\in\mathbb C[x_1,\dots,x_n]$ for which \[\frac{(1-s_{i,j})\left(f_1(x_1)\cdots f_n(x_n)F(x_1,\dots,x_n)\right)}{(x_i-x_j)^{2m+1}}\] is smooth on $D$ for all $1\le i<j\le n$. \end{Definition} In the following, for simplicity when we say smooth functions we always mean smooth on $D$. \begin{Remark} In \cite{braverman2016cyclotomic}, the authors studied the case $f_i(x)=x^{a_i}$ for $a_i\in\mathbb C$, and denoted $Q_m(f_1,\dots,f_n)$ by $Q_m(a_1,\dots,a_n)$. In cases of unambiguous use, we will shorten this to~$Q_m(n)$. 
\end{Remark} Similar to \cite{braverman2016cyclotomic}, we believe that $Q_m(f_1,\dots,f_n)$ is in general a free module. \begin{Conjecture}\label{conj:big} For generic $f_1,\dots,f_n$, in particular when $\frac{f_i}{f_j}$ is not a monomial in $x_1,\dots,x_n$, $Q_m(f_1,\dots,f_n)$ is a free module over the ring of symmetric polynomials in $x_1,\dots,x_n$. \end{Conjecture} \subsection{Rationality of the logarithmic derivative}\label{subsec:dlog} Note that for any $f_1,\dots,f_n$ and $i$, $j$, any $F\in(x_i-x_j)^{2m}\mathbb{C}[x_1,\dots,x_n]$ has the property that $\frac{(1-s_{i,j})(f_1(x_1)\cdots f_n(x_n)F)}{(x_i-x_j)^{2m+1}}$ is smooth. This is a trivial case; thus we want to find out which choices of the $f_i,f_j$ yield polynomials in $Q_m(n)$ that are not in $(x_i-x_j)^{2m}\mathbb{C}[x_1,\dots,x_n]$. \begin{Lemma}\label{prop:squ} If $F$ is divisible by $x_i-x_j$ and $\frac{(1-s_{i,j})(f_1(x_1)\cdots f_n(x_n)F)}{(x_i-x_j)^{2(k+1)+1}}$ is smooth, then $(x_i-x_j)^2\,|\, F$ and $\frac{(1-s_{i,j})\big(f_1(x_1)\cdots f_n(x_n)\frac F{(x_i-x_j)^2}\big)}{(x_i-x_j)^{2k+1}}$ is smooth. \end{Lemma} \begin{proof} Let $F(x_i,x_j)=(x_i-x_j)G(x_i,x_j)$ for some polynomial $G$. Here, we write $F(x_i,x_j)$ for $F$ and $G(x_i,x_j)$ for $G$ to ease the notation, as we will only consider it as a function in the $i$th and $j$th coordinates. Substituting, the condition becomes \[f_i(x_i)f_j(x_j)G(x_i,x_j)+f_j(x_i)f_i(x_j)G(x_j,x_i)=(x_i-x_j)^{2k+2}g(x_i,x_j)\] (here and for the rest of this section, $g$ (with a possible subscript, e.g., $g_a$, $g_b$, $g_c$) denotes a~function smooth on $D$) and setting $x_i=x_j=x$ gives \[2f_i(x)f_j(x)G(x,x)=0,\] so $G(x,x)=0$. This implies that $(x_i-x_j)\,|\, G$, so $(x_i-x_j)^2\,|\, F$, as desired. The second part of the lemma follows by definition as $\frac F{(x_i-x_j)^2}$ is smooth. \end{proof} \begin{Proposition}\label{dlograt} Let $h_{i,j}=\frac{f_i}{f_j}$. 
If $\operatorname{dlog}(h_{i,j})$ is not a rational function, then $Q_m(n)\subset(x_i-x_j)^{2m}\mathbb C[x_1,\dots,x_n]$. Here, $\operatorname{dlog}(f)=\frac{f'}f$ denotes the logarithmic derivative of a function $f$. \end{Proposition} \begin{proof} The proposition is trivial for $m=0$. For $m>0$, note that $F\in Q_m(n)$ if and only if \begin{gather*} f_i(x_i)f_j(x_j)F(x_i,x_j)-f_j(x_i)f_i(x_j)F(x_j,x_i)=(x_i-x_j)^{2m+1}g_a(x_i,x_j),\\ h_{i,j}(x_i)F(x_i,x_j)-h_{i,j}(x_j)F(x_j,x_i)=(x_i-x_j)^{2m+1}g_b(x_i,x_j).\end{gather*} Here, we treat the rest of the functions and variables as constants. Differentiating with respect to $x_i$, we have that \[h_{i,j}(x_i)(\operatorname{dlog}(h_{i,j})(x_i)F(x_i,x_j)+F_1(x_i,x_j))-h_{i,j}(x_j)F_2(x_j,x_i)=(x_i-x_j)^{2m}g_c(x_i,x_j),\] where for a function $F(x,y)$ we define $F_1=\frac{\partial F}{\partial x}$ and $F_2=\frac{\partial F}{\partial y}$. Setting $x_i=x_j=x$, we have that \begin{gather*} h_{i,j}(x)(\operatorname{dlog}(h_{i,j})(x)F(x,x)+F_1(x,x))-h_{i,j}(x)F_2(x,x)=0,\\ \operatorname{dlog}(h_{i,j})(x)F(x,x)=F_2(x,x)-F_1(x,x), \end{gather*} which means that $F(x,x)=0$. Otherwise, we would have \[\operatorname{dlog}(h_{i,j})(x)=\frac{F_2(x,x)-F_1(x,x)}{F(x,x)},\] which is a contradiction as the right hand side is a rational function. Hence, $(x_i-x_j)\,|\, F$. Now, by Lemma~\ref{prop:squ} we have $(x_i-x_j)^2\,|\, F$ and $\frac F{(x_i-x_j)^2}\in Q_{m-1}(n)$, which implies the desired result by a straightforward induction. \end{proof} \subsection[Hilbert series for $n=2$]{Hilbert series for $\boldsymbol{n=2}$}\label{subsec:free} Let $n=2$, $x=x_1$, and $y=x_2$. Note that scaling $f_1$ and $f_2$ by some smooth function does not affect $Q_m(2)$. Hence, we may multiply them both by $\frac1{f_2}$ and let $f=\frac{f_1}{f_2}$. For convenience, we use $Q_m(f)$ to denote the space of quasi-invariants. 
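The quasi-invariance condition for $n=2$ is concrete enough to test by machine. The following sketch (assuming Python with \texttt{sympy}; the sample values $m=2$ and $f(x)=x^{1/2}$ are arbitrary) checks both the Vandermonde evaluation and the order of vanishing for the explicit polynomial that is constructed later in the proof of Lemma~\ref{lem:gen}.

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
m, z = 2, sp.Rational(1, 2)  # sample parameters; the twist is f(x) = x^z

# Numerator of the explicit quasi-invariant from the proof of Lemma lem:gen
P = sum(sp.binomial(m - z, i) * sp.binomial(m + z, m - i) * x**i * y**(m - i)
        for i in range(m + 1))

# Vandermonde's identity: P(x, x) = binom(2m, m) * x^m
assert sp.expand(P.subs(y, x) - sp.binomial(2 * m, m) * x**m) == 0

# Quasi-invariance: x^z P(x, y) - y^z P(y, x) must vanish to order 2m + 1
# at x = y; by homogeneity it suffices to set y = 1 and expand around x = 1.
h = x**z * P.subs(y, 1) - P.subs(x, u).subs(y, x).subs(u, 1)
assert sp.simplify(sp.series(h, x, 1, 2 * m + 1).removeO()) == 0
print("P_2 is quasi-invariant for f(x) = x^(1/2)")
```

Since $x^zP(x,y)-y^zP(y,x)$ is homogeneous, divisibility by $(x-y)^{2m+1}$ is equivalent to vanishing to order $2m+1$ along $x=y$ after setting $y=1$, which is exactly what the series expansion tests.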
Throughout this section, we will let $\operatorname{dlog}(f(x))=\frac{p(x)}{q(x)}$ for relatively prime $p,q\in\mathbb C[x]$, as we have from Section~\ref{subsec:dlog} that either $Q_m=(x-y)^{2m}\mathbb C[x,y]$ or $\operatorname{dlog}(f(x))$ is a rational function. For convenience, we will also set $F_1=\frac{\partial F}{\partial x}$, $F_2=\frac{\partial F}{\partial y}$, and $F_{12}=\frac{\partial^2 F}{\partial x\partial y}$. \begin{Lemma}\label{lem:ind} If $F(x,y)\in Q_m(f)$, then \begin{gather*} p(x)F_2(x,y)+q(x)F_{12}(x,y)\in Q_{m-1}\left(\frac fq\right), \\ -p(y)F_1(x,y)+q(y)F_{12}(x,y)\in Q_{m-1}(fq). \end{gather*} \end{Lemma} \begin{proof} We begin with our quasi-invariant condition, which in our case of $n=2$ is \[f(x)F(x,y)-f(y)F(y,x)=(x-y)^{2m+1}g_a(x,y).\] Differentiating by $x$ and then $y$, we obtain \[f'(x)F_2(x,y)+f(x)F_{12}(x,y)-f'(y)F_2(y,x)-f(y)F_{12}(y,x)=(x-y)^{2m-1}g_b(x,y).\] By the definition of $p$ and $q$, this is equivalent to \begin{gather*} \frac{f(x)}{q(x)}\left(p(x)F_2(x,y)+q(x)F_{12}(x,y)\right)-\frac{f(y)}{q(y)}\left(p(y)F_2(y,x)+q(y)F_{12}(y,x)\right)\\ \qquad {} =(x-y)^{2m-1}g_b(x,y), \end{gather*} which is exactly the quasi-invariant condition that is desired. Dividing our quasi-invariant condition by $f(x)f(y)$ gives \[\frac1{f(x)}F(y,x)-\frac1{f(y)}F(x,y)=(x-y)^{2m+1}g_c(x,y).\] (Note that $g_c$ is a function smooth on $D$, because $\frac{1}{f(x)f(y)}$ is smooth on $D$ by assumption.) Thus, $-p(x)F_1(y,x)+q(x)F_{12}(y,x)\in Q_{m-1}\big(\frac1{fq}\big)$ by the above. Expanding the quasi-invariant condition and multiplying by $f(x)f(y)q(x)q(y)$, we obtain the equivalent statement $-p(y)F_1(x,y)+q(y)F_{12}(x,y)\in Q_{m-1}(fq)$, as desired. \end{proof} Now, we specialize to the case in which $f(x)=\prod_{i=1}^k(x-a_i)^{b_i}$ for arbitrary complex numbers~$a_i$,~$b_i$. Note that in this case $\operatorname{dlog}(f)=\sum_{i=1}^k\frac{b_i}{x-a_i}$. 
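This partial-fraction form of the logarithmic derivative is easy to confirm symbolically; here is a minimal sketch (assuming Python with \texttt{sympy}; the parameters $(a_i,b_i)$, including the non-integer exponent, are arbitrary sample choices).

```python
import sympy as sp

x = sp.symbols('x')
# Arbitrary sample parameters (a_i, b_i); note the non-integer exponent.
params = [(1, 2), (-1, sp.Rational(1, 2)), (3, -1)]
f = sp.prod([(x - a)**b for a, b in params])

# dlog(f) = f'/f should equal the partial-fraction sum of the b_i/(x - a_i)
dlog_f = sp.diff(f, x) / f
partial_fractions = sum(b / (x - a) for a, b in params)
assert sp.simplify(dlog_f - partial_fractions) == 0
print("dlog(f) =", sp.together(partial_fractions))
```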
\begin{Definition} For a nonnegative integer $m$ and a complex number $z$, denote \[d_m(z)=\begin{cases}\min(m,|z|) & \text{if }z\in\mathbb{Z},\\m & \text{otherwise}, \end{cases}\] and \[d_m(f)=\sum_{i=1}^k d_m(b_i),\] where $f(x)=\prod_{i=1}^k(x-a_i)^{b_i}$ for $a_1,\dots,a_k,b_1,\dots,b_k\in\mathbb C$ with $a_1,\dots,a_k$ pairwise distinct. \end{Definition} \begin{Lemma}\label{lem:div} We have that \[\prod_{i=1}^k(x-a_i)^{d_m(b_i)}\,|\, F(x,x)\] for any $F\in Q_m(f)$. \end{Lemma} \begin{proof} We proceed using induction, with the base case of $m=0$ clearly true. Recall that $\operatorname{dlog}(f(x))=\frac{p(x)}{q(x)}$. For $f(x)=\prod_{i=1}^k(x-a_i)^{b_i}$, we have that $p(x)=\sum_{i=1}^kb_i\prod_{j\ne i}(x-a_j)$ and $q(x)=\prod_{i=1}^k(x-a_i)$. It suffices to prove this divisibility for each $(x-a_i)^{d_m(b_i)}$. By the inductive hypothesis and Lemma~\ref{lem:ind}, we have that $(x-a_i)^{d_{m-1}(b_i-1)}\,|\, p(x)F_2(x,x)+q(x)F_{12}(x,x)$ and $(x-a_i)^{d_{m-1}(b_i+1)}\,|\,-p(x)F_1(x,x)+q(x)F_{12}(x,x)$. It is easy to see that $d_m(b_i)-1\le d_{m-1}(b_i),d_{m-1}(b_i-1),d_{m-1}(b_i+1)$. Thus, $(x-a_i)^{d_m(b_i)-1}\,|\, (x-a_i)^{d_{m-1}(b_i)}\,|\, F(x,x)$ as $F$ is also in $Q_{m-1}(f)$. From the other two divisibilities we also obtain $(x-a_i)^{d_m(b_i)-1}\,|\, p(x)F_2(x,x)+q(x)F_{12}(x,x)$, and $-p(x)F_1(x,x)+q(x)F_{12}(x,x)$, so $(x-a_i)^{d_m(b_i)-1}\,|\, p(x)(F_1(x,x)+F_2(x,x))=p(x)\frac{{\rm d}}{{\rm d}x}F(x,x)$. As $p$ and $q$ are relatively prime, we must have $(x-a_i)^{d_m(b_i)-1}\,|\,\frac{{\rm d}}{{\rm d}x}F(x,x)$, which together with $(x-a_i)^{d_m(b_i)-1}\,|\, F(x,x)$ implies $(x-a_i)^{d_m(b_i)}\,|\, F(x,x)$, as desired. \end{proof} In fact, this lemma is sharp in the sense that there exists $F$ such that the divisibility becomes equality. To prove that, we utilize the following lemma: \begin{Lemma}\label{prop:prod} If $F\in Q_m(f)$ and $G\in Q_m(g)$, then $FG\in Q_m(fg)$. 
\end{Lemma} \begin{proof} We have that \begin{gather*} \frac{f(x)g(x)F(x,y)G(x,y)-f(y)g(y)F(y,x)G(y,x)}{(x-y)^{2m+1}}\\ \qquad{} =g(x)G(x,y) \frac{f(x)F(x,y)-f(y)F(y,x)}{(x-y)^{2m+1}} +f(y)F(y,x) \frac{g(x)G(x,y)-g(y)G(y,x)}{(x-y)^{2m+1}} \end{gather*} is smooth. \end{proof} \begin{Lemma}\label{lem:gen} There exists $P_m\in Q_m(f)$ with \[P_m(x,x)=\prod_{i=1}^k(x-a_i)^{d_m(b_i)}.\] \end{Lemma} \begin{proof} Note that by Lemma~\ref{prop:prod} it suffices to show this when $k=1$ as for $k>1$ we can take the product of all such $P_m$ in $Q_m\big((x-a_1)^{b_1}\big),\dots,Q_m\big((x-a_k)^{b_k}\big)$. Shifting, we may also assume that $a_1=0$. Now, note that if $z=b_1$ is an integer with $|z|<m$, we can simply take $P_m=y^{z}$ for $z\ge0$, and $P_m=x^{-z}$ for $z<0$. Otherwise, we claim that we can take \[P_m(x,y)=\frac{\sum\limits_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}x^iy^{m-i}}{\binom{2m}m}.\] Indeed, $P_m(x,x)=x^m$ by Vandermonde's identity, so it suffices to show that $P_m$, or equivalently the numerator of $P_m$, is in $Q_m$. We proceed using induction, with the base case of $m=0$ obvious. For the inductive step, note that we wish to show that \[H(x,y):=\sum_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}\big(x^{i+z}y^{m-i}-x^{m-i}y^{i+z}\big)\] vanishes at $x=y$ to order $2m+1$. It is easy to see that $H(x,y)$ vanishes at $x=y$. Let us first show that $H(x,y)$ vanishes at $x=y$ to order $2$. 
Differentiating with respect to $x$ and setting $x=y$, we would like to show that \[\sum_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}(2i-m+z)=0.\] As we have \[\sum_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}i=\sum_{i=1}^m(m-z)\binom{m-z-1}{i-1}\binom{m+z}{m-i}=(m-z)\binom{2m-1}{m-1}\] and \begin{gather*} \sum_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}(i-m)=-\sum_{i=0}^{m-1}(m+z)\binom{m-z}i\binom{m+z-1}{m-i-1}\\ \hphantom{\sum_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}(i-m)}{} =-(m+z)\binom{2m-1}{m-1} \end{gather*} by Vandermonde's identity, the expression reduces to \[-2z\binom{2m-1}{m-1}+\sum_{i=0}^mz\binom{m-z}i\binom{m+z}{m-i}=-2z\binom{2m-1}{m-1}+z\binom{2m}m=0\] as desired. Secondly, let us show that $\frac{\partial^2 H}{\partial y \partial x}$ vanishes at $x = y$ to order $2m-1$. Differentiating by both~$x$ and~$y$, it suffices to show that \[\sum_{i=0}^m\binom{m-z}i\binom{m+z}{m-i}(i+z)(m-i)\big(x^{i+z-1}y^{m-i-1}-x^{m-i-1}y^{i+z-1}\big)\] vanishes at $x=y$ to order $2m-1$. But note that this expression is \begin{gather*} \sum_{i=0}^{m-1}\binom{m-z}i\binom{m+z-1}{m-1-i}(i+z)(m+z)\big(x^{i+z-1}y^{m-i-1}-x^{m-i-1}y^{i+z-1}\big)\\ \qquad{} =(m+z)\sum_{i=0}^{m-1}\binom{m-z}i\binom{m+z-2}{m-1-i}(m+z-1)\big(x^{i+z-1}y^{m-i-1}-x^{m-i-1}y^{i+z-1}\big)\\ \qquad{} =(m+z)(m+z-1)\sum_{i=0}^{m-1}\binom{m-z}i\binom{m+z-2}{m-1-i}\big(x^{i+z-1}y^{m-i-1}-x^{m-i-1}y^{i+z-1}\big), \end{gather*} which vanishes at $x=y$ to order $2m-1$ by the inductive hypothesis on $Q_{m-1}(x^{z-1})$, as desired. Thus we have seen that $H(x,y)$ vanishes at $x=y$ to order $2$ and \[\frac{\partial^2 H}{\partial y \partial x}=(x-y)^{2m-1}K(x,y)\] for a certain polynomial $K(x,y)$. Since $H(x,y)$ vanishes at $x=y$ to order $2$, the solution of the above equation is unique. We can use integration by parts to get an expression of the polynomial $H$ in terms of $(x-y)^k$, $k\ge 2m+1$, and the derivatives of $K(x,y)$ (the constant of integration is chosen to be zero). 
In this way one checks that $H(x,y)$ vanishes at $x=y$ to order $2m+1$. \end{proof} \begin{Lemma}\label{lem:indu}Let $R$ denote the ring of symmetric polynomials in $x$ and $y$. Then, for all $m>0$ we have that \[Q_m=RP_m+(x-y)^2Q_{m-1}.\] \end{Lemma} \begin{proof} Let $F(x,y)$ be an element of $Q_m$. By Lemma~\ref{lem:div}, $P_m(x,x)\,|\, F(x,x)$, so there exists a~polynomial $g\in\mathbb C[x]$ with $P_m(x,x)g(x)=F(x,x)$. Now, consider the polynomial \[F'(x,y)=F(x,y)-P_m(x,y)g\left(\frac{x+y}2\right),\] which is in $Q_m$ as $F,P_m\in Q_m$ and $g\left(\frac{x+y}2\right)\in R$. But now note that $F'(x,x)=F(x,x)-P_m(x,x)g(x)=0$, so by Lemma~\ref{prop:squ}, $F'\in (x-y)^2Q_{m-1}$, which immediately implies the desi\-red. \end{proof} \begin{Corollary} We have that \[Q_m=RP_m+R(x-y)^2P_{m-1}+\cdots+R(x-y)^{2m-2}P_1+(x-y)^{2m}Q_0\] for all $m$. \end{Corollary} Now, we are finally ready to prove our main result of this section. Recall that $d_m(f)=\sum_{i=1}^kd_m(b_i)$ where $f(x)=\prod_{i=1}^k(x-a_i)^{b_i}$ and $d_m(z)=\min(m,|z|)$ if $z\in\mathbb Z$ and $d_m(z)=m$ otherwise. \begin{Theorem}\label{thm:main} The Hilbert series for $Q_m(f)$ is \[\dfrac{t^{2m}+t^{2m+1}+\sum_{i=1}^m t^{2(m-i)+d_i(f)}-\sum_{i=1}^m t^{2(m-i)+d_i(f)+2}}{(1-t)(1-t^2)}.\] \end{Theorem} \begin{proof} Note that $Q_0=\mathbb C[x,y]$, which is generated by $P_0=1$ and $x-y$ (as an $R$-module). By the corollary of Lemma~\ref{lem:indu}, $Q_m$ is generated by \[P_m, \ (x-y)^2P_{m-1}, \ \dots, \ (x-y)^{2m-2}P_1, \ (x-y)^{2m}, \ (x-y)^{2m+1}.\] Let $g_{m,i}=(x-y)^{2(m-i)}P_i$ and $g_m=(x-y)^{2m+1}$. We claim that $Q_m$ is generated by $g_m, g_{m,0}, \dots, g_{m,m}$ and $m$ independent relations of the form \begin{gather*} (x-y)^2g_{m,m}=r_{m,m-1}g_{m,m-1}+\cdots+r_{m,0}g_{m,0}+r_mg_m,\\ (x-y)^2g_{m,m-1}=r_{m-1,m-2}g_{m,m-2}+\cdots+r_{m-1,0}g_{m,0}+r_{m-1}g_m,\\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\\ (x-y)^2g_{m,1}=r_{1,0}g_{m,0}+r_1g_m \end{gather*} for some $r_i,r_{i,j}\in R$. 
We proceed using induction, noting that $Q_0$ is generated by $1$ and $x-y$ with no relations. For the inductive step, first note that as $g_{m,m}=P_m\in Q_m\subset Q_{m-1}$, there exist $p, p_0, \dots, p_{m-1}\in R$ with $g_{m,m}=pg_{m-1}+p_0g_{m-1,0}+\dots+p_{m-1}g_{m-1,m-1}$. This yields a relation in the form of the first relation above by setting $r_m=p, r_{m,0}=p_0, \dots, r_{m,m-1}=p_{m-1}$. This equation is true as $g_m=(x-y)^2g_{m-1}, g_{m,0}=(x-y)^2g_{m-1,0}, \dots, g_{m,m-1}=(x-y)^2g_{m-1,m-1}$ by definition. Now, suppose that $q, q_0, \dots, q_m\in R$ are such that $qg_m+q_0g_{m,0}+\dots+q_mg_{m,m}=0$. Then, as $(x-y)^2\,|\, g_m,g_{m,0},\dots,g_{m,m-1}$ and $(x-y)^2\nmid g_{m,m}$, we must have that $(x-y)^2\,|\, q_m$. Let $q_m=(x-y)^2q_m'$. Then, subtracting $q_m'$ times the first relation from $qg_m+q_0g_{m,0}+\dots+q_mg_{m,m}=0$, we obtain a relation of the form $q'g_m+q_0'g_{m,0}+\dots+q_{m-1}'g_{m,m-1}=0$ with $q', q_0', \dots, q_{m-1}'\in R$. Note that this relation is uniquely determined by the first generating relation we have, so the first generating relation is independent of the rest of the relations. Furthermore, this relation is $(x-y)^2$ times a relation among the generators of $Q_{m-1}$. By the inductive hypothesis, such a relation is generated by $(x-y)^2$ times the $m-1$ independent generating relations of $Q_{m-1}$, which are by definition the last $m-1$ generating relations on our list. Hence, $Q_m$ is generated by those $m+2$ elements and $m$ independent relations among those elements, as desired. For the Hilbert polynomial, note that the generators have degrees \[d_m(f),\ 2+d_{m-1}(f),\ \dots,\ 2m-2+d_1(f),\ 2m,\ 2m+1\] and that the independent relations have degrees \[2+d_m(f),\ 4+d_{m-1}(f),\ \dots,\ 2m+d_1(f),\] which gives the Hilbert polynomial and series exactly as described in the theorem. 
\end{proof} \section{Future prospects}\label{sec:fut} It would be interesting to study Conjectures~\ref{conj:mn},~\ref{conj:mainc} and~\ref{conj:big}. As with our current results, we expect to make extensive use of computer programs to discover key properties of quasi-invariant polynomials and their Hilbert series. We expect that resolving Conjecture~\ref{conj:mn} will require studying the modular representation theory of~$S_n$. A possible approach to Conjecture~\ref{conj:big} is to adapt the approach of the authors of \cite{braverman2016cyclotomic}, namely to construct a Cherednik-like algebra related to $f_1,\dots,f_n$ and the quasi-invariant polynomials. Along the way, one may also find a formula for the Hilbert series. Finally, it would be interesting to study $q$-deformations of the spaces of twisted quasi-invariant polynomials. In~\cite{braverman2016cyclotomic}, Braverman, Etingof and Finkelberg study $q$-deformations of their special case and show that when $Q_m$ is free, its $q$-deformation is a flat deformation. They conjecture that it is a flat deformation in general even when $Q_m$ is not a free module. Here, $Q_{m,q}(f_1,\dots,f_n)$ is defined as the set of polynomials $F$ for which \[\frac{(1-s_{i,j})(f_1(x_1)\cdots f_n(x_n)F)}{\prod\limits_{k=-m}^m\big(x_i-q^kx_j\big)}\] is a smooth function for all $1\le i<j\le n$. It would be interesting to resolve this in the case that Braverman, Etingof, and Finkelberg consider, as well as the general case we have presented. We believe that $q$-analogues of some of our results hold. For example, the $q$-analogue of Proposition~\ref{dlograt} would be that $Q_{m,q}\subset\prod_{k=-m}^m\big(x_i-q^kx_j\big)\mathbb C[x_1,\dots,x_n]$ if $\frac{h_{i,j}(qx)}{h_{i,j}(x)}$ is not rational. \subsection*{Acknowledgements} We would like to thank MIT PRIMES, specifically Pavel Etingof, for suggesting the project. We would like to thank Eric Rains for very useful discussions. 
We also would like to thank the referees for carefully reading our manuscript and for their valuable comments and suggestions, which substantially helped to improve the readability and quality of the paper. \pdfbookmark[1]{References}{ref}
\section{Introduction} Generalized grey Brownian motion (ggBm) is a two-parameter stochastic process $B_{\alpha,\beta}$, which is in general not Gaussian. Introduced in~\cite{MuMa09,MuPa08}, ggBm has been considered in the physics literature to model anomalous diffusions with non-Gaussian marginals, including both slow (variance grows slower than linearly) and fast diffusive behavior. The process $B_{\alpha,\beta}$ has stationary increments and is self-similar with parameter $H=\alpha/2$ \cite[Proposition~3.2]{MuMa09}. The marginal density of ggBm satisfies a fractional partial integro-differential equation~\cite{MuMa09}. Special cases of ggBm include fractional Brownian motion (fBm; $\beta= 1$), grey Brownian motion (\cite{Sc90}; $\alpha = \beta$), and Brownian motion ($\alpha = \beta = 1$). Our focus is mainly on the case $\beta<1$. In~\cite{MuMa09}, a generalized grey noise space is defined, motivated by white noise space, but with the Gaussian characteristic function replaced by the Mittag-Leffler function. The ggBm is then defined by evaluating generalized grey noise at the test function $1_{[0,t)}$. We do not go into details, because for our purposes, the representation \begin{equation}\label{eq:repr} B_{\alpha,\beta}(t) = \sqrt{L_\beta} B_{\alpha/2}(t),\quad 0<\alpha<2,\ 0<\beta< 1, \end{equation} which was proved in~\cite{MuPa08}, is more convenient. Here, $B_{\alpha/2}$ is a fBm with Hurst parameter $H=\alpha/2$, and~$L_\beta$ is an independent positive random variable whose density is the $M$-Wright function (see below). The representation~\eqref{eq:repr} makes sense also in the limiting case $\beta=1$, but we will not require this. The problem of small ball probabilities, also called small deviations, consists of estimating \begin{equation}\label{eq:sb} \mathbb{P}\Big[\sup_{0\leq t\leq 1}|B_{\alpha,\beta}(t)|\leq \varepsilon\Big],\quad \varepsilon \downarrow 0, \end{equation} asymptotically. 
More generally, we can consider \[ \mathbb{P}\big[\| B_{\alpha,\beta} \|\leq \varepsilon\big],\quad \varepsilon \downarrow 0, \] where $\|\cdot\|$ is a norm on $C_0^{\gamma}[0,1]$, the space of $\gamma$-H\"older continuous functions, with $0<\gamma<H=\alpha/2$. For ggBm with $\beta<1$, our main result (Theorem~\ref{thm:main}) shows that~\eqref{eq:sb} is of order $\varepsilon^2$, and that this also holds for some other norms. For Gaussian processes, such as fBm ($\beta=1$), the small ball problem has been studied extensively~\cite{LiSh01}, and exponential decay is typical. But there are also many works studying small ball probabilities for non-Gaussian processes; see, e.g., \cite{AuLiLi09,AuSi07} and the references therein. We refer to~\cite{Ko16,Na09} for other examples of processes with the small ball rate~$\varepsilon^2$ of ggBm. In Section~\ref{se:small}, we will show that the known exponential small ball estimates for fBm can be used to deduce our quadratic small ball estimate for ggBm. As a byproduct, we show that the uniform norm (sup norm) of ggBm has a smooth, even analytic, pdf. In Section~\ref{se:large}, we provide a large deviations estimate. The decay rate is exponential, but slower than Gaussian, depending on the parameter~$\beta$. Notation: When we write $B_{\alpha,\beta}$, we always mean the process on the time interval $[0,1]$, i.e.\ $B_{\alpha,\beta}=(B_{\alpha,\beta}(t))_{0\leq t\leq 1}$. We write $F_H$ for the cdf of $\| B_H\|$, assuming that the choice of the norm $\| \cdot \|$ is clear from the context. As usual, $\mathbb{R}^+=(0,\infty)$ denotes the positive reals. The letter $C$ denotes various positive constants. \section{Analyticity of the cdf and small ball probability}\label{se:small} The $M$-Wright function, which is the pdf of $L_\beta$ in~\eqref{eq:repr}, is defined by \begin{equation}\label{eq:def M} M_\beta(x)= \sum_{n=0}^\infty \frac{(-x)^n}{n! \Gamma(1-\beta-\beta n)}, \quad x\geq0,\ 0<\beta<1. 
\end{equation} It is not obvious that $M_\beta$ is a pdf; for this, and more information on $M_\beta$ and its generalizations, we refer to~\cite{MaMuPa10}. For later use, we note that it follows from Euler's reflection formula that \begin{equation}\label{eq:refl} \frac{1}{\Gamma(1-\beta-\beta n)} = \frac{\sin\big(\pi(\beta+\beta n)\big)}{\pi} \Gamma(\beta+\beta n), \end{equation} (cf.~\cite[p.~41]{Wr40} and~\cite[(3.8)]{MaMuPa10}), which shows, by Stirling's formula for the gamma function, that the series in~\eqref{eq:def M} defines an entire function. For this, the crude version \begin{equation}\label{eq:stir} \Gamma(x) = x^{x+o(x)},\quad x\uparrow \infty, \end{equation} of Stirling's formula suffices. We will also need the asymptotic behavior of~$M_\beta$ at infinity~\cite[(4.5)]{MaMuPa10}, \begin{equation}\label{eq:M as} M_\beta(x) = \exp\Big( {-\frac{1-\beta}{\beta}}(\beta x)^{\frac{1}{1-\beta}} + O(\log x) \Big),\quad x\uparrow \infty. \end{equation} Our main assumption is that fBm satisfies an exponential small ball estimate w.r.t.\ the chosen norm $\|\cdot\|$. \begin{assumption}\label{ass} For $0<H<1$, there are $\theta,C_1,C_2>0$ such that \begin{equation*} -C_1 \varepsilon^{-\theta} \leq \log\mathbb{P}\big[\| B_H \|\leq \varepsilon\big]\leq -C_2 \varepsilon^{-\theta}, \quad \varepsilon \in (0,1]. \end{equation*} \end{assumption} For the uniform norm, it is known that this holds with $\theta=1/H$, \begin{equation}\label{eq:fbm} -C_1 \varepsilon^{-1/H} \leq \log\mathbb{P}\Big[\sup_{0\leq t\leq 1}|B_H(t)|\leq \varepsilon\Big]\leq -C_2 \varepsilon^{-1/H}. \end{equation} Assumption~\ref{ass} also holds for the $\gamma$-H\"older norm, where $0<\gamma<H$, and for the $L^2$-norm. See~\cite{Br03,LiSh01,Li12} for the corresponding values of~$\theta$, and for much more information on small ball probabilities for fBm and other Gaussian processes. The examples we just mentioned are norms in the classical sense, and so we stick to this terminology in our statements. 
From our proofs, it is clear that it would suffice throughout to assume that $\|\cdot\|$ is a measurable non-negative homogeneous functional. \begin{Proposition}\label{prop:an} Let $0<\alpha<2$, $0<\beta<1$. If the norm $\| \cdot \|$ satisfies Assumption~\ref{ass}, then the cdf of $\| B_{\alpha,\beta} \|$ is an analytic function on~$\mathbb{R}^+$. In particular, this holds for the cdf of $\|B_{\alpha,\beta}\|_\infty=\sup_{0\leq t\leq 1}|B_{\alpha,\beta}(t)|$. \end{Proposition} \begin{proof} Recall that $F_H$ denotes the cdf of $\| B_H \|$. {}From~\eqref{eq:repr} we find \begin{align} \mathbb{P}\big[\| B_{\alpha,\beta} \|\leq \varepsilon\big]&= \int_0^\infty F_H(\varepsilon x^{-1/2})M_{\beta}(x)dx \notag \\ &= 2\varepsilon^2\int_0^\infty F_H(y)M_{\beta}(\varepsilon^2/y^2) y^{-3}dy. \label{eq:for hol} \end{align} As $M_{\beta}$ extends to an entire function (see above), the last integrand clearly is an entire function of~$\varepsilon$ for any fixed $y>0$. The function $M_{\beta}$ is bounded on~$\mathbb{R}^+$, as follows, e.g., from~\eqref{eq:def M} and~\eqref{eq:M as}. Thus, the integrand in~\eqref{eq:for hol} can be bounded by an integrable function of~$y$, independently of~$\varepsilon$. Hence, the conditions of a standard criterion for complex differentiation under the integral sign~\cite[Theorem IV.5.8]{El05} are satisfied, which yields the assertion. \end{proof} Note that fBm, i.e.\ $\beta=1$, is not covered by Proposition~\ref{prop:an}. In~\cite{LaNu03}, it is shown by Malliavin calculus that $\sup_{0\leq t\leq 1}B_H$ (without the absolute value) has a $C^\infty$ density. We now show that, for $\beta<1$, the small ball probability of ggBm is of order~$\varepsilon^2$ as $\varepsilon\downarrow0$. For $2/\theta+\beta<1$ ($\alpha+\beta<1$ for the uniform norm), we express it as a power series, which yields a full asymptotic expansion. 
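The mechanism behind the quadratic rate is elementary: by~\eqref{eq:repr}, conditioning on the fBm factor reduces the small ball probability to the behavior of the distribution of the mixing variable near zero. The following toy Monte Carlo sketch (assuming Python with \texttt{numpy}) illustrates this; here an $\mathrm{Exp}(1)$ variable stands in for $L_\beta$ (its density equals $1$ at the origin, just as $M_\beta(0)=1/\Gamma(1-\beta)$), and $X=1+|N(0,1)|$ stands in for $\|B_H\|$, so the limit of $\mathbb{P}[B\le\varepsilon]/\varepsilon^2$ is $\mathbb{E}\big[X^{-2}\big]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of the quadratic small-ball rate: B = sqrt(L) * X, where
# L ~ Exp(1) stands in for L_beta (its density equals 1 at the origin,
# just as M_beta(0) = 1/Gamma(1-beta)) and X = 1 + |N(0,1)| stands in
# for ||B_H|| (bounded away from 0, so E[X^{-2}] is finite).
n = 10**6
X = 1.0 + np.abs(rng.standard_normal(n))

def small_ball(eps):
    # P[sqrt(L) X <= eps | X] = P[L <= eps^2/X^2 | X] = 1 - exp(-eps^2/X^2)
    return np.mean(1.0 - np.exp(-eps**2 / X**2))

eta2 = np.mean(X**-2.0)  # Monte Carlo estimate of E[X^{-2}]
for eps in [0.5, 0.1, 0.02]:
    print(f"eps={eps:5.2f}  P[B <= eps]/eps^2 = {small_ball(eps)/eps**2:.4f}"
          f"  (limit {eta2:.4f})")
```

The printed ratios increase to the limit as $\varepsilon\downarrow0$, mirroring~\eqref{eq:as e2}.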
We write \[ \eta_k(H):=\mathbb{E}\big[\| B_H \|^{-k}\big], \quad k\in\mathbb N, \] for the negative moments of the norm of fBm, omitting the dependence on the norm $\|\cdot\|$ in the notation $\eta_k(H)$. By integration by parts, it is easy to see that $\eta_k(H)$ is finite under Assumption~\ref{ass}. \begin{theorem}\label{thm:main} Let $0<\alpha<2$, $0<\beta<1$, and define $H=\alpha/2$. Under Assumption~\ref{ass}, the small ball probability of ggBm satisfies \begin{equation}\label{eq:as e2} \mathbb{P}\big[\| B_{\alpha,\beta} \| \leq \varepsilon\big] \sim \frac{\eta_2(H)\varepsilon^2}{\Gamma(1-\beta)}, \quad \varepsilon\downarrow0. \end{equation} If, additionally, $2/\theta+\beta<1$, then it has the convergent series representation \begin{equation}\label{eq:expans} \mathbb{P}\big[\| B_{\alpha,\beta} \|\leq \varepsilon\big] = 2\sum_{n=0}^\infty \frac{(-1)^n \eta_{2n+2}(H)}{(2n+2)n! \Gamma(1-\beta-\beta n)} \varepsilon^{2n+2}, \quad \varepsilon\geq 0. \end{equation} In particular, if $\|\cdot \|=\|\cdot \|_{\infty}$, then~\eqref{eq:expans} holds for $\alpha+\beta<1$. \end{theorem} \begin{proof} By integration by parts, we have \begin{equation}\label{eq:parts} \int_0^\infty\frac{F_H(y)}{y^{2n+3}}dy = \frac{1}{2n+2} \int_0^\infty y^{-2n-2}F_H(dy) =\frac{\eta_{2n+2}(H)}{2n+2}. \end{equation} The assertion~\eqref{eq:as e2} follows from~\eqref{eq:for hol}, \eqref{eq:def M} for $x=0$, \eqref{eq:parts} for $n=0$, and dominated convergence, because~$M_\beta$ is a bounded function. For the next statement, define \[ G_N(\varepsilon,y):=\sum_{n=N+1}^\infty \frac{(-1)^n\varepsilon^{2n-2N+1}}{y^{2n}n! \Gamma(1-\beta-\beta n)},\quad y>0,\ \varepsilon\in[0,1], \] so that~\eqref{eq:for hol} yields, for $N\in \mathbb N$, \begin{multline}\label{eq:for expans} \mathbb{P}\big[\| B_{\alpha,\beta} \| \leq \varepsilon\big]\\ = 2\varepsilon^2\int_0^\infty \frac{F_H(y)}{y^3} \sum_{n=0}^N \frac{(-\varepsilon^2/y^2)^n}{n! 
\Gamma(1-\beta-\beta n)} dy + 2\varepsilon^{2N+1} \int_0^\infty \frac{F_H(y)}{y^3} G_N(\varepsilon,y)dy. \end{multline} For the finite sum, we can use~\eqref{eq:parts} to rewrite the summands as in~\eqref{eq:expans}. We now provide an integrable bound for the last integrand in~\eqref{eq:for expans} that does not depend on $\varepsilon\in[0,1]$. It is clear that \begin{equation}\label{eq:est G} |G_N(\varepsilon,y)| \leq \sum_{n=N+1}^\infty \frac{1}{y^{2n}n! |\Gamma(1-\beta-\beta n)|},\quad y>0,\ \varepsilon\in[0,1]. \end{equation} By~\eqref{eq:refl} and Stirling's formula, \begin{equation}\label{eq:from stir} \frac{1}{n! |\Gamma(1-\beta -\beta n)|} \leq n^{-(1-\beta)n+o(n)} \leq C n^{-(1-\hat{\beta})n}, \quad n\in\mathbb N, \end{equation} for any $\hat{\beta}>\beta$; we will fix~$\hat \beta$ later. {}From~\eqref{eq:est G}, \eqref{eq:from stir}, and Stirling's formula, we conclude \begin{align} |G_N(\varepsilon,y)| &\leq C \sum_{n=N+1}^\infty \frac{1}{y^{2n}\Gamma((1-\hat{\beta})n)} \notag \\ &=C\Big(y^{-2} E_{1-\hat{\beta},1-\hat{\beta}}(y^{-2}) - \sum_{n=1}^N \frac{1}{y^{2n}\Gamma((1-\hat{\beta})n)}\Big),\label{eq:G bd} \end{align} where \[ E_{u,v}(z)=\sum_{n=0}^\infty \frac{z^n}{\Gamma(u n+v)} ,\quad u,v>0,\ z\in\mathbb C, \] denotes the two-parameter Mittag-Leffler function. We now use the uniform bound~\eqref{eq:G bd} in~\eqref{eq:for expans}. Integrability at $\infty$ is obvious, and we now show integrability at zero. By~\cite[Theorem~4.3]{GoKiMaRo14}, \[ E_{1-\hat{\beta},1-\hat{\beta}}(y^{-2})=\exp\Big(y^{-\frac{2}{1-\hat \beta}}\big(1+o(1)\big)\Big), \quad y\downarrow0. \] We see, using Assumption~\ref{ass} for $F_H$, that the last integrand in~\eqref{eq:for expans} satisfies \[ \frac{F_H(y)}{y^3} G_N(\varepsilon,y) \leq \exp\Big({-C_1} y^{-\theta} + y^{-\frac{2}{1-\hat \beta}} +o\big(y^{-\big(\theta \wedge \frac{2}{1-\hat \beta}\big)}\big)\Big), \quad y\downarrow0, \] uniformly w.r.t.\ $\varepsilon\in[0,1]$. 
This is integrable if $\theta>2/(1-\hat \beta)$, i.e., $2/\theta+\hat \beta <1$. Clearly, our assumption that $2/\theta+ \beta <1$ allows us to choose such a $\hat \beta > \beta$. By the following lemma, $\eta_{2n+2}(H)=n^{2n/\theta+o(n)}$. Using~\eqref{eq:from stir}, we can thus take the limit $N\uparrow \infty$ in~\eqref{eq:for expans} for fixed $\varepsilon\in[0,1]$, which proves~\eqref{eq:expans} for these~$\varepsilon$. The extension to any~$\varepsilon\geq0$ follows by analytic continuation, using Proposition~\ref{prop:an}. \end{proof} In the preceding proof, we applied the following estimate for negative moments of the supremum of fBm. Note that moments with \emph{positive} exponent are estimated in~\cite{Pr14}; see also~\cite{NoVa99}. \begin{lemma}\label{le:mom} Under Assumption~\ref{ass}, for $k\uparrow \infty$, we have $\eta_k(H)=k^{k/\theta+o(k)}$. \end{lemma} \begin{proof} We show only the upper estimate, as the lower one can be proven analogously. By~\eqref{eq:fbm}, there is $\varepsilon_0>0$ such that \[ F_H(y)\leq 2 \exp({-C_2}y^{-\theta}),\quad 0<y\leq \varepsilon_0. \] Define $\tilde K := 2 \vee \exp(C_2 \varepsilon_0^{-\theta})$. Then, \[ F_H(y) \leq \tilde K \exp({-C_2}y^{-\theta}), \quad y>0; \] note that the right hand side is $\geq1$ for $y\geq \varepsilon_0$. This implies \begin{align*} \eta_k(H) &= \int_0^\infty y^{-k}F_H(dy) = k \int_0^\infty y^{-k-1}F_H(y)dy \\ &\leq k \tilde{K} \int_0^\infty \exp({-C_2}y^{-\theta}) y^{-k-1}dy\\ &= e^{O(k)} \int_0^\infty e^{-w} w^{k/\theta -1} dw\\ &= e^{O(k)} \Gamma(k/\theta)=k^{k/\theta+o(k)}, \end{align*} by Stirling's formula~\eqref{eq:stir} for the gamma function. \end{proof} If $2/\theta+ \beta >1$, then the series in~\eqref{eq:expans} diverges for any $\varepsilon>0$. Indeed, there is an increasing sequence $(n_j)$ in~$\mathbb N$ such that the lower bound \[ \mathrm{dist}(1-\beta -\beta n_j,\mathbb Z) \geq C >0, \quad j\in\mathbb N, \] holds. 
For rational~$\beta\in(0,1)$, this is clear by periodicity. For irrational~$\beta$, it follows from the classical fact that the sequence of fractional parts $\{n\beta\}$ is dense in $[0,1]$ (Kronecker's approximation theorem). Hence, again by~\eqref{eq:refl} and Stirling's formula, \[ \frac{1}{ |\Gamma(1-\beta -\beta n_j)|} \geq n_j^{\beta n_j + o(n_j)}, \] which, together with Lemma~\ref{le:mom}, shows divergence. We leave it as an open problem whether~\eqref{eq:expans} still holds in the sense of an asymptotic expansion of the small ball probability when $2/\theta+ \beta \geq1$. \section{Large deviations}\label{se:large} For fractional Brownian motion, it is well known that \begin{equation}\label{eq:fbm ld sup} \mathbb{P}\Big[ \sup_{0\leq t\leq 1}|B_H(t)| \geq y\Big] =\exp\big({-\tfrac12}y^2+o(y^2)\big),\quad y\uparrow \infty. \end{equation} Indeed, the upper estimate follows from \[ \mathbb{P}\Big[ \sup_{0\leq t\leq 1}|B_H(t)| \geq y\Big]\leq 2\, \mathbb{P}\Big[ \sup_{0\leq t\leq 1}B_H(t) \geq y\Big] \] and the Borell-TIS inequality~\cite[Theorem~4.2]{No12}, and the lower one is clear from $ \sup_{0\leq t\leq 1}|B_H(t)| \geq B_H(1)$. The following result gives a large deviation estimate for ggBm. For $\beta=1$, the distribution has a Gaussian upper tail, of course. For $0<\beta<1$, the decay is between exponential and Gaussian, which is sometimes called compressed exponential. \begin{theorem}\label{thm:ld} Let $0<\alpha<2$ and $0<\beta\leq 1$, and assume that $\|\cdot\|$ is a norm on the H\"older space $C_0^\gamma[0,1]$, where $0<\gamma<H=\alpha/2$, such that \begin{equation}\label{eq fbm ld} \mathbb{P}\big[ \| B_H\| \geq y\big] =\exp\big({-\kappa}y^2+o(y^2)\big),\quad y\uparrow \infty, \end{equation} for some $\kappa>0$. 
Then there are constants $K_1,K_2>0$ such that \begin{align} \exp\Big({-K_1}y^{\frac{2}{2-\beta}}\big(1+o(1)\big)\Big) &\leq \mathbb{P}\big[ \|B_{\alpha,\beta}\| \geq y\big] \label{eq:ld low} \\ &\leq \exp\Big({-K_2}y^{\frac{2}{2-\beta}}\big(1+o(1)\big)\Big),\quad y\uparrow \infty. \label{eq:ld} \end{align} \end{theorem} \begin{proof} We may assume $\beta<1$, because for $\beta=1$ we have $B_{\alpha,1}=B_H$ and the assumption~\eqref{eq fbm ld} makes the statement trivial. With $\bar{F}_H=1-F_H$ the tail distribution function of $\|B_H\|$, we have, from~\eqref{eq:repr}, \[ \mathbb{P}\big[ \|B_{\alpha,\beta}\| \geq y\big] =\int_0^\infty \bar{F}_H(yx^{-1/2})M_\beta(x)dx. \] If $\kappa=\tfrac12$, then $\bar{F}_H$ satisfies \begin{equation}\label{eq:F b} \bar{F}_H(y)=\exp\big({-\tfrac12}y^2+o(y^2)\big),\quad y\uparrow \infty, \end{equation} by~\eqref{eq fbm ld}. We assume $\kappa=\tfrac12$ for the rest of the proof, as the extension to general $\kappa>0$ is trivial. Let $0<\hat \kappa <\tfrac12$ be arbitrary. Since $M_\beta$ is bounded, we obtain \begin{align*} \int_0^1 \bar{F}_H(yx^{-1/2})M_\beta(x)dx &\leq C \int_0^1 e^{-\hat{\kappa}y^2/x}M_\beta(x)dx\\ &\leq C \int_0^1 e^{-\hat{\kappa}y^2/x}dx\\ &=C\big(e^{-\hat{\kappa}y^2}-\hat{\kappa} y^2 \Gamma(0,\hat{\kappa} y^2)\big), \end{align*} where $\Gamma(a,z)=\int_z^\infty t^{a-1}e^{-t}dt$ is the incomplete gamma function. Using a well-known expansion of that function \cite[\S8.11]{DLMF}, we conclude \begin{equation}\label{eq:int01} \int_0^1 \bar{F}_H(yx^{-1/2})M_\beta(x)dx \leq \exp\big({-\hat{\kappa}}y^2+o(y^2)\big),\quad y\uparrow \infty. \end{equation} As $\beta<1$, this is negligible compared to the decay rate claimed in~\eqref{eq:ld low} and~\eqref{eq:ld}. Now define $h(y):=y^2/(\log y)$. 
Since $\bar{F}_H \leq 1$, and using~\eqref{eq:M as}, we have \begin{align*} \int_{h(y)}^\infty \bar{F}_H(yx^{-1/2})M_\beta(x)dx &\leq \int_{h(y)}^\infty M_\beta(x)dx\\ &\leq \int_{h(y)}^\infty \exp\big({-C}x^{\frac{1}{1-\beta}}\big)dx\\ &\leq \exp\big({-C}h(y)^{\frac{1}{1-\beta}}\big). \end{align*} Since $2/(1-\beta)>2/(2-\beta)$, this is of faster decay than~\eqref{eq:ld}. It remains to show that the integral $\int_1^{h(y)}\bar{F}_H(yx^{-1/2})M_\beta(x)dx$ has the claimed growth order~\eqref{eq:ld}. By dividing the exponent in~\eqref{eq:M as} by~2, which makes the decay slower, we obtain \begin{equation*} M_\beta(x) \leq C\exp\Big( {-\frac{1-\beta}{2\beta}}(\beta x)^{\frac{1}{1-\beta}} \Big),\quad x\geq1. \end{equation*} Similarly, \eqref{eq:F b} implies \[ \bar{F}_H(y) \leq C e^{-y^2/3}, \quad y\geq 1. \] Analogously, we can \emph{increase} the constants in the exponents to find \emph{lower} estimates, for which the following reasoning is analogous, and yields~\eqref{eq:ld low}. Therefore, we only discuss the upper estimate for $\int_1^{h(y)}$. This is a straightforward application of the Laplace method~\cite[Chapter~4]{deB58} to the integral \[ \int_1^{h(y)}\exp\Big({-\frac{y^2}{3x}} -\frac{1-\beta}{2\beta}(\beta x)^{\frac{1}{1-\beta}} \Big)dx, \] which results from the two preceding estimates. The exponent is a strictly concave function of~$x$ with a maximum at \[ x_0(y)=c y^{\frac{2(1-\beta)}{2-\beta}} \in (1,h(y)) \] for some constant $c>0$. As we are not concerned with lower order terms, it suffices to evaluate the integrand at $x_0(y)$ to conclude \[ \int_1^{h(y)} \bar{F}_H(yx^{-1/2})M_\beta(x)dx \leq \exp\Big({-C}y^{\frac{2}{2-\beta}}(1+o(1))\Big). \] This completes the proof. \end{proof} We now comment on applying Theorem~\ref{thm:ld} to other norms than the sup norm, which requires verifying~\eqref{eq fbm ld}. As mentioned above, for the sup norm, this follows from the Borell-TIS inequality. 
For an arbitrary norm $\|\cdot\|$ on H\"older space, we have \[ \mathbb{P}\big[ \| B_H\| \geq y\big] = \mathbb{P}\big[y^{-1}B_H\in \{\| f \| \geq 1\}\big]. \] In principle, this is in the scope of the general LDP (large deviation principle) for Gaussian measures \cite[Theorem 3.4.12]{DeSt89}, but it may not be trivial to verify the assumptions. For $H=\tfrac12$ and the H\"older norm, this was done in~\cite{BaBeKe92}, extending Schilder's theorem. Note that choosing a stronger topology than the uniform one enlarges the dual space of path space, making it harder to verify the defining property of a Gaussian measure. For the H\"older topology, we are on safe grounds, though, by another approach: Using the double sum method for Gaussian fields, Fatalov has shown that~\eqref{eq:fbm ld sup} holds for the $\gamma$-H\"older norm~\cite[Theorem~1.3]{Fa03}, and so Theorem~\ref{thm:ld} is applicable to this norm (with $0<\gamma<H$, of course). \bibliographystyle{siam}
\section{Introduction}\label{sec1} In the U.S. more than $375,000$ deaths are associated with heart diseases each year \cite{american2015heart}. Early diagnosis of heart disease can reduce the mortality rate and provide useful information on proper therapies. Several noninvasive techniques have been used to characterize abnormal blood patterns that contribute to heart failure. Medical imaging techniques including Phase Contrast-MRI (PC-MRI)~\cite{bock20104d,wigstrom1999particle}, 3D echocardiography~(3D-echo)~\cite{sugeng2007real,chaoui2004three}, and cardiovascular magnetic resonance~(CMR)~\cite{rodriguez2013intracardiac} have been widely used to visualize flow patterns in the cardiac system. However, they typically suffer from low spatial/temporal resolution within an acceptable scanning time and are thus insufficient for detailed hemodynamic analysis~\cite{doost2016heart}. Computational Fluid Dynamics~(CFD) in combination with imaging techniques for geometric reconstruction can provide a powerful tool for investigating the flow pattern in more detail in the reconstructed cardiac system~\cite{borazjani2013left}. Flow simulations in complex geometries with large deformations and complicated motions, such as the left ventricle~(LV), depend mainly on the geometry and its motion. The geometric segmentation and reconstruction process from medical images, along with the assumptions for the motions of these geometries~(valves and LV), can greatly influence the LV's flow pattern~\cite{mittal2016computational,votta2013toward,le2013fluid}. Currently, two types of techniques are available to model the motion of the valves and the LV~\cite{doost2016heart}: 1) Fluid-Structure Interaction~(FSI), and 2) prescribed models. In prescribed models, the motions and geometries are prescribed based on in vivo measurements~\cite{saber2003progress,long2008subject,schenkel2009mri} or simplified equations that define the motion~\cite{domenichini2005three,le2013fluid,seo2013effect,azar2018mechanical}. 
Of course, obtaining the motions from medical images is more realistic than prescribing the motion based on simplified equations. The FSI models have been widely used in numerical simulations~\cite{watanabe2004multiphysics,cheng2005fluid,borazjani2008curvilinear,borazjani2013fluid,sharzehee2018fluid,kamensky2015immersogeometric,shahidian2017stress}, especially for capturing the motion of heart valves. However, due to the complexity of the geometry, dynamic shape, and large deformation of the LV and mitral valve~(MV) leaflets, FSI models are computationally expensive~\cite{borazjani2015review,mittal2016computational}, and the lack of in vivo measurements of tissue mechanical properties makes using them quite challenging. In fact, FSI models are typically validated by comparing the motion computed from FSI with in vivo measurements, i.e., FSI methods are considered valid when the computed motion is similar to the measured one. Therefore, it is desirable to use image-based geometry/motion if available. However, the main concern about using medical images in CFD simulations is their accuracy in terms of spatio-temporal resolution. Several imaging techniques are currently available for the LV, e.g., computed tomography~(CT)~\cite{watabe2015enhancement,szafer2019simplified}, magnetic resonance imaging~(MRI)~\cite{lapa2015imaging}, cardiac magnetic resonance~(CMR)~\cite{morris2018magnetic}, and 3D-echo~\cite{monaghan2006role}. However, most of these techniques are either very expensive or suffer from a lack of spatial or temporal resolution within an acceptable acquisition time. Among these, 2D-echo is the most widely used non-invasive imaging technique, due to its low cost and fast acquisition~\cite{chamsi2017handheld}. 2D echo can provide high temporal resolution~(from 250 fps, down to 50 fps for resolving the whole LV) compared to other techniques~\cite{rajan2016automated}. 
However, the acquired data from segmentation of 2D echo are only available on a few standard planes, from which the 3D geometry needs to be reconstructed~\cite{rajan2016automated}. For CFD simulations of the LV, the valves need to be modeled. Due to the highly dynamic motion and geometry of the valve leaflets, the reconstruction of heart valves remains a challenge. In many studies, the valve's leaflets are either ignored or simplified using an on-off approach where the switch between the on and off configurations occurs instantaneously without any intermediate positions~\cite{song2015role,watanabe2008looped,zheng2012computational,domenichini2005three}. Dimasi et al.~\cite{dimasi2012influence} reconstructed the MV and AV from cardiac magnetic resonance~(CMR) images. Their study was limited to systole~(MV always closed). Seo et al.~\cite{seo2014effect} created a prescribed kinematic model for MV motion based on the location of the tip of the valve leaflets from echo images for a cardiac cycle. However, the mitral annulus was assumed to be a perfect circle with a fixed dimension in their study. Chnafa et al.~\cite{chnafa2014image} modeled the MV based on a projection of the mitral annulus geometry with the assumption of an elliptical MV orifice, and the aortic valve~(AV) was approximated by a simple plane. A reduced-degree-of-freedom model for leaflet motion coupled with the left ventricle geometry was introduced by Domenichini et al.~\cite{domenichini2015asymptotic}. However, the mitral annulus was assumed to be a rigid circular orifice in their study. Su et al.~\cite{su2016cardiac} reconstructed the MV using a mathematical model based on only one cross-section from CMR. Here, we reconstruct the mitral and aortic valves using the segmented data from multiple-axis 2D echo images with a temporal resolution of more than $30$~fps. The valve geometry and leaflet motion can have a significant impact on the flow field inside the LV~\cite{seo2014effect}. 
The effect of the AV has been studied thoroughly in terms of hemodynamic performance~\cite{seaman2014steady,borazjani2010high}, platelet activation~\cite{hedayat2019comparison,hedayat2017platelet,alemu2007flow} and thrombus formation~\cite{bluestein2000vortex,yoganathan2005flow}, both experimentally and numerically. Most of these simulations are performed in simplified geometries without an LV. Only a limited number of studies considered the effect of ventricular valves in their LV simulations~\cite{mao2017fully,su2015numerical,le2013fluid,su2014numerical,su2016cardiac,chnafa2014image,seo2014effect,dahl2012fsi,carmody2006approach}. Due to the significant impact of the MV on the performance of the heart, most LV simulations consider only the effect of the mitral valve~(neglecting the AV) and are mainly limited to the diastolic phase \cite{su2015numerical,su2016cardiac,chnafa2014image,seo2014effect,dahl2012fsi,carmody2006approach}. Charonko et al.~\cite{charonko2013vortices} reported that the mitral vortex ring facilitates the filling and enhances flow transfer to the LV apex. Dahl et al.~\cite{dahl2012fsi} studied the effect of the MV in 2D simulations of the LV. Their results show that asymmetric leaflets for the MV as well as an adequate model for the left atrium are essential for resolving important flow features in the LV. Seo et al.~\cite{seo2014effect} showed the MV to significantly affect the vortex ring propagation and flow field inside the LV. They also found that, due to the asymmetry of the MV, a circulatory flow pattern can be generated in the LV, which can enhance apical washout and reduce the risk of thrombus formation. Su et al.~\cite{su2016cardiac} investigated the effect of the MV on vortex formation time~(VFT). Their results showed that VFT is a promising parameter to characterize the performance of the LV. 
However, only a few simulations have studied the flow pattern inside the LV incorporating both ventricular valves~\cite{su2014numerical,su2016cardiac,mao2017fully}, and even fewer have compared the flow pattern in a healthy LV with that in left ventricles with heart failure~\cite{su2016cardiac}. In this study, a generalized method to automate the reconstruction of the 3D LV geometry from standard 2D echo projections~\cite{rajan2016automated} is extended to reconstruct the valves and the superior wall to generate a closed 3D geometry for CFD simulations~(section~\ref{Superior wall reconstruction}). A new method is developed for the reconstruction of the AV and MV based on $1$ and $3$ standard sections from 2D echo, respectively (section~\ref{Valves reconstruction}). This 3D reconstruction method is coupled with our in-house CFD code based on a sharp-interface immersed boundary method \cite{gilmanov2005hybrid,borazjani2008curvilinear,borazjani2013parallel} to simulate the ventricular flows (section~\ref{Hybrid echo-CFD framework}). Different smoothing algorithms, described in section~\ref{Data smoothing}, are tested to dampen the fluctuations of the boundaries obtained from echo data. The effects of these smoothing algorithms on the shape and flux of the LV, as well as on the CFD results, are investigated~(sections \ref{smoothness in geometric} and \ref{numerical simulations smoothing}). This hybrid echo-CFD framework is applied to a healthy LV and an LV with acute myocardial infarction~(AMI), and the results are discussed in section~\ref{Comparing healthy and AMI}. \section{Methods} In this section, the previous process for the reconstruction of the LV from multiple 2D echo cross-sections~\cite{rajan2016automated} is briefly described in subsection~\ref{Previous LV reconstruction method}. Afterwards, the new developments/modifications to this method are discussed in subsections~\ref{Data smoothing} to \ref{Computational setup}. 
The new 3D reconstruction method is applied to a healthy LV as well as one with acute myocardial infarction~(AMI), which was scanned after inducing AMI on the healthy LV by ligating the mid-portion of the left anterior descending coronary artery~(the imaging of both the healthy and AMI cases was approved by the Institutional Animal Care and Use Committee at the Mayo Clinic). \subsection{Previous LV reconstruction method} \label{Previous LV reconstruction method} The following steps are performed by Rajan et al.~\cite{rajan2016automated} to reconstruct the LV from 2D-echo images: \begin{enumerate}[I] \item \textbf{Endocardial border detection}\\ LV endocardial borders are identified using the gradient in the RGB~(Red, Green, Blue) values and extracted from black-and-white echo images in standard long axis~(LA) and short axis~(SA) echo projections of a pig heart for an entire cardiac cycle, with frame rates ranging from $36$ fps to $55$ fps for different sections. \item \textbf{B-spline data smoothing}\\ The segmented boundaries extracted from echo images are not smooth, so B-spline curve fitting is used to smooth the detected endocardial borders. \item \textbf{Temporal interpolation}\\ Since the image frame rate is not the same for the various sections, temporal interpolation is used to obtain the different cross-sections at the same time instants. A cubic spline with natural boundary conditions is used for temporal interpolation~\cite{rajan2016automated}. \item \textbf{Scaling, positioning and orientation of the LV sections}\\ The left ventricular end-diastolic dimension (LVEDD) is chosen as the scaling parameter to scale all the extracted data to the physical dimension. The angular orientations and positions of the LA and SA sections are optimized using an optimization algorithm based on the assumption of a fixed apex. 
\item \textbf{Spatial Interpolation}\\ The available optimized cross-sections are interpolated using a bivariate method (depending on the available sections) to generate surface points at the chosen spatial resolution. \item \textbf{Temporal Smoothing}\\ To make the geometry suitable for CFD applications, the volume flux is smoothed in time using the Fourier curve fitting method, considering the periodic nature of the dependent variable. \item \textbf{Mesh Generation and Output}\\ The final surface points are turned into a triangular mesh as needed by the immersed boundary~(IB) node classification algorithm in our in-house CFD solver~\cite{borazjani2008curvilinear}. \end{enumerate} The following optimizations are performed on the previous reconstruction method~\cite{rajan2016automated} to improve the accuracy as well as to incorporate physiological assumptions: \subsection{Data smoothing} \label{Data smoothing} Rajan et al.~\cite{rajan2016automated} used a B-spline curve-fitting method, which resulted in small non-physical ringing near the basal section in the final reconstructed LV surface. To remove this ringing from the reconstructed geometry, the weighted moving average~(WMA) and, finally, the variable-span weighted moving average~(VSWMA) have been employed. Using the WMA method, the datum at each node is computed as a weighted mean of the data points on both sides as \begin{equation} r(i)=\frac{\sum_{j=-k}^{k} \left(k-|j|+1\right) r(i+j)}{(k+1)^2} \end{equation} where $\Delta \theta=2k + 1$ is the span of the moving-average scheme, the weights $k-|j|+1$ decay linearly with the distance $|j|$ from node $i$ and are normalized by their sum $(k+1)^2$, and $r(i)$ is the radial coordinate of node $i$ in $(r,\theta)$ format~(Fig.~\ref{r-theta}). 
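As an illustration, the weighted moving average can be written in a few lines of code. This is a minimal sketch assuming linearly decaying weights; the reflective padding at the endpoints is one common form of data augmentation, used here only so the scheme is defined at every node:

```python
def wma(r, k):
    """Weighted moving average with span 2k+1.  Weights decay linearly
    with distance from the center node and are normalized by their sum,
    so a constant signal passes through unchanged.  Endpoints are padded
    by reflection (an illustrative choice)."""
    w = [k - abs(j) + 1 for j in range(-k, k + 1)]
    s = float(sum(w))                          # equals (k + 1) ** 2
    padded = r[k:0:-1] + list(r) + r[-2:-k - 2:-1]
    return [sum(wj * padded[i + j] for j, wj in enumerate(w)) / s
            for i in range(len(r))]

# Smoothing preserves length and constants, and (away from the ends,
# by weight symmetry) linear trends:
smoothed = wma([3.0] * 40, 5)
```

A larger span $k$ gives stronger smoothing, which is exactly the trade-off the variable-span variant exploits.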
Two general techniques are available for boundary-point treatment when using moving-average methods: 1) the span is reduced progressively while approaching the boundary points, in a way that the equation holds for all the data points; and 2) data augmentation is performed to generate a new dataset equivalent to the size of the span at the endpoints~(boundary points). Our results show that reducing the span cannot eliminate oscillations near the endpoints of the reconstructed LV. Thus, data augmentation is employed by reflecting data~(the number of data points to be mirrored is equal to the span size) around the boundary point~(Fig.~\ref{smoothing}-a). Different span sizes are tested and the effects of each can be seen in Fig.~\ref{smoothing}. Furthermore, the Variable Span Weighted Moving Average (VSWMA) is employed, which enables us to apply a larger span near the endpoints and a smaller span near the apex. The span of the moving-average scheme can vary based on the radial proximity of the data point to the vertical projection of the apex on the base~(as seen in the LA apical sections, Fig.~\ref{r-theta}) as follows: \begin{equation} k(i)=k_s+(k_l-k_s)\times \frac{r_l-r(i)}{r_l-r_s} \label{span} \end{equation} where $k(i)$, $k_l$, $k_s$, $r_l$, $r_s$ and $r(i)$ are the span value at the $i^{th}$ data point, the larger span, the smaller span, the largest $r$, the smallest $r$ and the radial distance of the $i^{th}$ data point from the mid-point of the endpoints of the dataset, respectively. Equation~\ref{span} interpolates the span so as to have a larger span near the base and a smaller span near the apex. \subsection{Scaling, positioning and orientation of the LV sections} \label{Scaling} The left ventricular end-diastolic dimension (LVEDD) is chosen as the scaling parameter to scale all the segmented data. The algorithm employed for fitting the SA and LA sections in our previous work~\cite{rajan2016automated} can be seen in Fig.~\ref{optimization}. 
The general algorithm has been kept the same, with changes only to the positioning of the SA sections and the apex. Rajan et al.~\cite{rajan2016automated} assumed that the base is fixed while the apex moves in space and, for the reconstruction, the LA sections were fitted to the SA sections such that the ends of the LA Apical sections were fitted to the SA basal sections. However, in-vivo experiments have shown that the longitudinal displacement of the LV's apex is small compared to other parts of the LV during a cardiac cycle~\cite{rogers1991quantification}. In addition, it is more realistic to assume that the SA sections are fixed in space for the 3D reconstruction, because the images are captured by the transducer at fixed standard locations in the experiments. To that end, instead of varying the spatial position of the apex and the relative positions of the SA sections, the following is implemented and depicted illustratively in Fig.~\ref{LV_orientation}. The positions of the SA sections, as well as the apex, are assumed to be fixed during a cardiac cycle. The positions of the SA sections are calculated at end systole, where the length of the left ventricle is minimal, and are assumed to be fixed during a cardiac cycle. Therefore, the distances of the SA sections with respect to one another are relatively constant at all time instances. \begin{comment} \subsubsection{Spatial Interpolation} A predictor-corrector type of interpolation is devised for the 3D spatial interpolation of the LV geometry based on the work by Rajan et al.~\cite{rajan2016automated}. Firstly, the SA data is predicted ($r^*_{SA}$) by interpolating the LA sections. 
An error ratio is calculated between the available sections and the predicted values as follows: \begin{equation} e_{SA}=\frac{r_{SA}(\theta)}{r^*_{SA}(\theta)} \end{equation} To compute $r$ at any point $(\theta; z)$ on the surface, the LA data are interpolated to get the predicted value $(r^*(\theta; z))$ and the error ratio is interpolated to get $e(\theta; z)$. The actual $r(\theta; z)$ is then computed as \begin{equation} r(\theta; z)=e(\theta; z)*r^*(\theta; z) \end{equation} The cubic-spline interpolation method with periodic boundary conditions was employed for the prediction of the SA data and with natural boundary conditions for the error ratio interpolation. \end{comment} \subsection{Valves reconstruction} \label{Valves reconstruction} In the LA Apical three-chambered view, both the mitral and aortic valves can be clearly seen~(Fig~\ref{valve_ident}-a). In the LA Apical two-chambered~(Fig~\ref{valve_ident}-b) and Apical four-chambered views~(Fig~\ref{valve_ident}-c), the mitral leaflets can be observed, and in the SA basal section the mitral orifice is clearly visible~(Fig~\ref{valve_ident}-d). The boundaries for the valve leaflets and orifice on all the frames~(approximately at 30 fps) are manually identified using ImageJ software~\cite{schindelin2015imagej}~(Fig~\ref{valve_ident}). After this color annotation, the following two manipulations are made to the images: 1) Inverting the 'Red' pixel values~($I_R$)~(Fig~\ref{LSF}-b), \begin{equation} I_R=255-I_R \label{Inverting} \end{equation} considering that the images are in $8$-bit RGB format, and 2) Intensity reduction~(Fig~\ref{LSF}-c) \begin{equation} I_R= \left\lbrace \begin{tabular}{cc} $0$ & if~ $I_R-250<0$\\ $40$ & otherwise \end{tabular} \right. \label{Intensity} \end{equation} Since the echo images are in grayscale, all the pixels have approximately the same R, G, and B values. For the colored~(annotated) pixels, however, the R, G, and B intensities are not similar. 
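The two manipulations of Eqns.~\ref{Inverting} and \ref{Intensity} amount to a simple per-pixel filter on the red channel; a minimal sketch (the threshold of $250$ and the output value of $40$ are taken from the text):

```python
def filter_red_channel(pixels, threshold=250, keep_value=40):
    """Invert each 8-bit red value (the inversion step), then zero out
    every pixel whose inverted value is below the threshold (the
    intensity-reduction step).  Blue annotations have an original R
    value near 0, so only they survive the filter."""
    out = []
    for r in pixels:
        inv = 255 - r                                      # inversion
        out.append(keep_value if inv >= threshold else 0)  # reduction
    return out

# Grayscale background pixels are zeroed, while blue leaflet annotations
# (original R close to 0) are kept with value 40:
result = filter_red_channel([200, 128, 3, 0])  # [0, 0, 40, 40]
```

The same filter applied to the green channel would isolate the annotations equally well, as noted in the text.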
The valves' leaflets are identified by the blue color: their $I_R$ from Step~$1$ is close to $255$~($250$ is considered as the threshold for blue values; however, other values close to $255$ can also be used), and the above filter assigns every pixel value in the image to $0$ except for the leaflets~(Fig~\ref{LSF}-d). It is to be noted that the computations shown in Eqns.~\ref{Inverting} and \ref{Intensity} can be done with 'G' instead of 'R' in the RGB pixel values as well, to get a similar result. The next step in the input video development is to use the distance regularized level-set evolution~(DRLSE)~\cite{li2010distance} image segmentation for all the images of the same video~(Fig~\ref{LSF}-e) to segment the valves' leaflets in each frame. The initial level-set contour has also been depicted in Fig.~\ref{LSF}-(e) along with the final result~(Fig.~\ref{LSF}-(f)). Then, a polynomial interpolation is used to interpolate the data set into a fixed number of points for all the frames. For the mitral orifice data, as shown in Fig.~\ref{valve}, a cubic-spline interpolation coupled with a weighted moving average~(span $=20$) was employed for data smoothing. The mitral and aortic orifice data are oriented using the angular orientation value of the SA basal section from the LV optimization process explained in section~\ref{Scaling}. In addition, the scaling of the mitral data, as seen in the three LA Apical views~(Fig~\ref{valve_ident}-a to Fig~\ref{valve_ident}-c), and of the orifice, as observed in the SA Basal section~(Fig~\ref{valve_ident}-d), is performed as follows. First, the mitral and aortic valve data are scaled using the scaling ratios calculated from the LV reconstruction process~(Fig.~\ref{valve_scaling}-a). Then, the segmented mitral leaflets from the LA Apical two-chambered and three-chambered views are fitted to the mitral orifice that is extracted from the SA basal section. 
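The resampling of each segmented boundary to a fixed number of points can be sketched via chord-length parametrization; linear interpolation is used here for brevity, whereas the pipeline itself uses polynomial and cubic-spline fits:

```python
import math

def resample(points, n):
    """Resample a polyline (list of (x, y) tuples) to n points spaced
    uniformly in cumulative chord length, using linear interpolation."""
    s = [0.0]                                  # cumulative chord lengths
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    total, out, j = s[-1], [], 0
    for i in range(n):
        t = total * i / (n - 1)
        while j + 1 < len(s) - 1 and s[j + 1] < t:
            j += 1                             # find segment containing t
        seg = s[j + 1] - s[j]
        a = 0.0 if seg == 0.0 else (t - s[j]) / seg
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

# An unevenly sampled straight segment resampled to uniform spacing
# (x-coordinates approximately 0, 0.25, 0.5, 0.75, 1):
pts = resample([(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (1.0, 0.0)], 5)
```

Resampling every frame to the same point count is what makes the subsequent frame-to-frame fitting and smoothing steps well defined.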
For positioning the data, the three-chambered-view planar data for the valves as well as the LV are matched to obtain the transverse location of the valves. For the longitudinal positioning of the valves, the bases of the valve orifices have been assumed to be $5$~mm above the LV base section at all time instances~(based on the measurements from the LA sections). For the reconstruction of the aortic valve, only one sectional dataset is available, i.e., from the Apical three-chambered view~(Fig.~\ref{valve_scaling}-a). In order to reconstruct the mitral valve, a predictor-corrector type of algorithm has been used in this work as follows. The SA sections of the mitral valve are predicted from the LA leaflet data using curve fitting and, after the predictor step, the correction is computed and applied. The correction at all the LA sections is computed and applied to all the predicted SA sections to obtain the corrected SA sections for the mitral reconstruction \begin{equation} c_r(SA_i,LA_j)=r(SA_i,LA_j) \times \frac{r_p(LA_j)}{r_{int}(LA_j)} \label{pred-corr} \end{equation} where $c_r(SA_i,LA_j)$ is the corrected radial dimension, $r(SA_i,LA_j)$ is the original dimension, $r_p(LA_j)$ is the mitral orifice data, and $r_{int}(LA_j)$ is the interpolated data in the mitral orifice section, i.e., the predicted data. The results of this procedure can be found in Fig.~\ref{MV_rec}. Considering two directions, $\xi$ and $\eta$, shown in Fig.~\ref{MV_rec}, the interpolation is performed in one of the directions (e.g., $\xi$), and the correction to the interpolation is done in the other direction (e.g., $\eta$) using a bi-variate interpolation. However, there is only one cross-section~(Apical three-chambered view) of the echo images in which the AV is clearly visible. 
Therefore, the AV is reconstructed using only this view, assuming that the AV has a circular cross-section, which has been shown to be a reasonable assumption for AV reconstruction~\cite{labrosse2006geometric}. The final reconstruction of the MV and AV during a cardiac cycle is shown in Fig.~\ref{valve_motion}, which shows that the valve motion is well synchronized with the phases of the LV. \subsection{Superior wall reconstruction} \label{Superior wall reconstruction} A closed geometry is essential to perform CFD simulations in the LV. Hence, the superior wall is modeled to connect the valves and the Endocardium wall so as to have a smooth enclosure for the LV geometry. A bivariate strategy is employed for closing the LV geometry with the valves: the interpolation is performed in the $\eta$ direction, and the smoothing is performed in the $\xi$ direction~(Fig.~\ref{lid}), using the following steps: 1) a section of the valve bases is connected to the LV base using the cubic spline interpolation method; 2) the remaining space is covered with linearly interpolated data so as to have a closed geometry; 3) smoothing is performed in the orthogonal direction to have a smooth geometry and to incorporate information from both directions. Steps of this process can be seen in Fig.~\ref{lid}. \subsection{Hybrid echo-CFD framework} \label{Hybrid echo-CFD framework} To handle the complex shape and motion of the LV, a curvilinear immersed boundary (CURVIB) method~\cite{gilmanov2005hybrid,ge2007numerical,borazjani2008curvilinear} is used, which has been extensively validated for a variety of complex flow problems~\cite{borazjani2013parallel,asgharzadeh2017newton} and implemented in various applications such as cardiovascular flows~\cite{Asgharzadeh2019,hedayat2019comparison,hedayat2017platelet}, aquatic motions and vortex dynamics~\cite{asadi2018scaling,daghooghi2016self}, and the rheology of suspensions~\cite{daghooghi2015influence,daghooghi2018effects}.
The 3D surfaces of the LV reconstructed from 2D echo, which are meshed using triangular elements, are given to the flow solver as an input at each time step to classify the background domain's nodes into fluid, boundary, and solid using an efficient ray tracing algorithm~\cite{borazjani2008curvilinear} which can handle thick, closed-surface bodies. To classify the nodes corresponding to the heart valves, which are provided as thin structures, an immersed boundary node classification algorithm for thin bodies is used~\cite{borazjani2013fluid}. After the immersed boundary node classification, the boundary conditions on the solid/fluid interface are reconstructed from the velocity of the 3D reconstructed LV surface using the no-slip condition. Finally, the blood flow is driven from/into the LV using a volume flux equal to the volumetric change of the LV. The simulations are performed for two cardiac cycles to let the flow reach a quasi-steady state. \subsection{Computational setup} \label{Computational setup} To minimize the influence of the inlet/outlet boundary conditions on the flow inside the LV, simplified surfaces~(not from any medical images) are generated to model the left atrium and aorta. These geometries are generated such that the dimensions of the surfaces have realistic values in comparison to the reconstructed LV, based on measurements in previous works \cite{vriz2014normal,otani2016three}. The complete reconstructed geometry used for the CFD simulations can be seen in Fig.~\ref{complete_setup}. It is worth mentioning that only one of the aorta and atrium exists in the geometry at any given time~(the aorta exists only during the systole phase and is removed during diastole, with the opposite pattern for the left atrium). The blood flow flux from the left atrium and aorta to/from the LV chamber is specified based on the volumetric change of the LV~(the volumetric flux of the LV is calculated from the change of LV volume between two consecutive time steps).
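The flux boundary condition just described reduces to a finite difference of the LV volume between consecutive time steps; a minimal sketch with illustrative volumes (not the reconstructed data):

```python
# Sketch of the flux boundary condition: the volume flux driven through
# the atrium/aorta equals the LV volume change between two consecutive
# time steps (first-order finite difference).

def lv_flux(volumes, dt):
    """Q_n = (V_{n+1} - V_n) / dt; positive Q corresponds to LV filling."""
    return [(volumes[n + 1] - volumes[n]) / dt
            for n in range(len(volumes) - 1)]

vols = [100.0, 104.0, 110.0, 108.0]   # ml, hypothetical LV volumes
q = lv_flux(vols, dt=0.01)            # ml/s
print(q)
```

The sign of each entry tells the solver whether the atrium (filling) or the aorta (ejection) is active at that instant.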
In addition, the velocity boundary condition at the inlet of the atrium is assumed to be uniform. The Navier-Stokes equations are non-dimensionalized with the diameter, $D = 16.38~mm$, of the aortic orifice and the bulk velocity, $U = 0.598~m/s$, with a time step $dt = 0.0109~s$ over 2500 time instances during a cardiac cycle. Considering the blood viscosity to be $\nu= 3.3\times 10^{-6}~m^2/s$ leads to a Reynolds number of 2950 for the simulations. The LV geometry is discretized with approximately $50,000$ unstructured triangular mesh points and is immersed in a background grid with dimensions of $5.19D \times 3.33D \times 6.32D$~(where $D$ is the aortic diameter) discretized with $161 \times 121 \times 201$ grid points in the x, y, and z directions, respectively. \section{Results and discussion} Here, the effect of the fixed-apex assumption on the final surface reconstruction and LV flux is studied. In addition, the sensitivity of the results~(reconstruction and CFD simulations) to the smoothing algorithm is investigated. Furthermore, the effect of the ventricular valves, especially the MV, on the flow field in the LV is studied. Finally, the flow hemodynamics of the healthy and AMI LVs are compared with each other. The presented results are for the healthy LV based on the VSWMA smoothing algorithm unless mentioned otherwise. \subsection{Effect of change in positioning and orientation of the LV sections on LV geometry} As previously mentioned, the assumption of a fixed apex is more realistic compared to a fixed SA basal section, since the apex has the least displacement in the LV geometry during a cardiac cycle~\cite{rogers1991quantification}. This assumption affects the final reconstructed geometry and the volumetric flux of the LV during a cardiac cycle. Figure~\ref{orientation-effect} shows the volumetric curve of the reconstructed LV with the assumption of the apex fixed compared to the one with the SA basal section fixed at various time instances during a cardiac cycle.
The difference in LV flux based on these two assumptions can clearly be seen in this figure. Comparing the ejection fraction~($EF$), which is typically used by physicians as an indicator of heart functionality, an increase of $4\%$~($0.46$ for the apex fixed vs. $0.44$ for the SA basal section fixed) in the calculated $EF$ is observed using the apex-fixed assumption. The $4\%$ difference in the $EF$ is not considerable for identifying heart failure with reduced $EF$~(for humans, $EF$ is usually more than $50\%$ for a healthy heart and less than $40\%$ for heart failure). However, the fixed-apex assumption can affect the investigation of flow features inside the LV as well as the LV flux during a cardiac cycle, and consequently the synchrony of valve opening and closing with the LV flux. \subsection{Accuracy vs. smoothness in geometric reconstruction} \label{smoothness in geometric} The segmented data from the echo images are not smooth. Employing different smoothing algorithms can change the results of the final geometric reconstruction of the LV. The ideal smoothing algorithm should preserve the shape of the data~(segmented from echo) while avoiding over-fitting~(ringing) in the final geometry. Fig.~\ref{smoothing}-a shows the final smoothed curves resulting from implementing different smoothing algorithms, and also different spans for the weighted moving average technique, on data extracted from the three-chambered apical LA section. As can be seen, the WMA method with a higher weighting span~($20$) results in a length decrease of about $15\%$ in the LA section, while WMA10 and the b-spline methods result in ringing and instabilities near the SA basal region, which are clearly visible close to the end of systole~(Fig.~\ref{smoothing}-b).
However, using a variable-span moving average with a higher span value near the basal section, unlike the b-spline method, can take care of the instabilities in this region, while using a smaller span near the apex provides less smoothing in the apex area~(which results in better preservation of the shape of the LV). Table~\ref{table_comparison} quantitatively compares the effect of the smoothing algorithm on the surface reconstruction process in terms of preserving the important features of the shape of the LV~(length of the long axis) and the least-square error of the curve fitting. Our results show that the VSWMA method provides the best trade-off between shape preservation and preventing ringing in the final reconstructed geometry. \subsection{Comparison of LV parameters with their corresponding physiological range} Several parameters related to the functionality of the LV, computed from the reconstructed geometry, are compared to their corresponding physiological ranges to show that the reconstruction from echo images is comparable to physiological data. The volume and volumetric flux versus time for both the healthy and AMI left ventricles are presented in Fig.~\ref{healthy-AMI-flux}. The time is non-dimensionalized in this figure so as to have the same time duration for both the healthy and AMI cases. Various parameters including $EF$, the ratio of maximum fluxes during the E-wave and A-wave~($E/A$ ratio), deceleration time~($DT$), stroke volume~($SV$), and cardiac output~($CO$) are calculated based on the curves in Fig.~\ref{healthy-AMI-flux}. Table~\ref{table-parameters} shows the comparison between the calculated parameters and their physiological ranges reported in previous in vivo experiments for a porcine LV~\cite{reiter2016early}. As can be seen in this table, the parameters calculated here for the healthy LV lie within the physiological ranges of the in-vivo experiments.
However, the $E/A$ ratio, $SV$, and $CO$ for the AMI-afflicted subject's LV lie outside the physiological ranges for a normal subject's LV, as can be expected. \subsection{CFD simulation for the reconstructed LV assembly} It is well known that the blood flow pattern in the LV has a direct impact on the heart's performance~\cite{seo2013effect}. However, this flow pattern is directly related to the accuracy of the LV reconstruction and the valves' motion. Hence, in this section, the impact of the ventricular valves, of different smoothing algorithms for the LV reconstruction, and of LV dysfunction~(AMI) on the performance of the LV in terms of energy loss during the whole cardiac cycle is measured using the energy equation for a control volume as follows: \begin{equation} \frac{dE}{dt}=\dot{Q}-\dot{W}=\frac{\partial}{\partial t}\int_{CV}{\rho e~dV}+\int_{CS}{(\rho e+p)\vec{V}\cdot\vec{dA}} \end{equation} where $E$ is the total energy of the system~(Fig.~\ref{control}), $\dot{Q}$ is the rate of heat transfer to the system, $\dot{W}$ is the rate of work done by the system, $CV$ is the control volume, $CS$ is the control surface, $\vec{V}$ is the flow velocity, $p$ is the pressure, $\rho$ is the blood density, and $e$ is the energy per unit mass \begin{equation} e=u+V^2/2+gz \end{equation} where $u$ is the internal energy of the fluid, $V^2/2$ is the kinetic energy, and $gz$ is the gravitational potential energy. Integrating over a cardiac cycle, assuming that the system has reached the quasi-steady state and neglecting the heat transferred to the LV as well as gravity and the difference of internal energy between inlet and outlet, the above equation reduces to \begin{equation} loss= \int_{0}^{T}{\int_{CS}{\rho~(p/{\rho}+V^2/2)~\vec{V}\cdot \vec{dA}}}\,dt \end{equation} where $T$ is the time at the end of the cardiac cycle.
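The loss integral above can be discretized as a sum over control-surface faces and time steps; a minimal sketch with hypothetical face data (the actual solver evaluates this on the control surfaces of Fig.~\ref{control}):

```python
# Discretized form of the loss integral: at each time step, sum
# rho * (p/rho + V^2/2) * (V . dA) over the control-surface faces and
# accumulate over the cycle.  The face tuples below are illustrative.

def cycle_loss(steps, dt, rho=1060.0):
    """steps: per-time-step lists of (p, speed, v_dot_dA) face tuples;
    returns the net outflow of mechanical energy over the cycle."""
    loss = 0.0
    for faces in steps:
        for p, speed, v_dot_dA in faces:
            loss += rho * (p / rho + 0.5 * speed ** 2) * v_dot_dA * dt
    return loss

# one inlet face (v_dot_dA < 0) and one outlet face (v_dot_dA > 0)
# over two hypothetical time steps
steps = [[(100.0, 0.5, -1e-4), (110.0, 0.6, 1e-4)],
         [(100.0, 0.4, -1e-4), (105.0, 0.5, 1e-4)]]
print(cycle_loss(steps, dt=0.01))
```

With more mechanical energy leaving through the outlet than entering through the inlet, the accumulated loss is positive, as expected for a dissipative flow.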
Since the AMI and healthy left ventricles have different stroke volumes and heart rates, to be able to compare the performance of the LV in the different cases, the rate of loss is calculated per $lit$ of blood pumped in each simulation as follows: \begin{equation} \dot{loss}~ (J/lit)= \frac{loss \times HR}{CO} \end{equation} where $CO$ is the cardiac output and $HR$ is the heart rate. Table~\ref{work_load} compares the $\dot{loss}$ of the LV during a cardiac cycle for the different smoothing algorithms, the simulation without heart valves, and the AMI simulation. \subsubsection{Effect of mitral valve} The vortex ring generated during the rapid filling~(E-wave) is one of the key characteristics of intraventricular flows~\cite{gharib2006optimal}. Figure~\ref{healthy-vortex} shows the vortical structures inside the healthy LV with incorporated ventricular valves during the diastole phase, visualized by an iso-surface of the q-criterion \cite{haller2005objective}. For comparison, Fig.~\ref{healthy-vortex-without} shows the same visualization for the LV without valves at the same time instances. As can be observed in Fig.~\ref{healthy-vortex} at time $t/T=0.448$~(t/T=instance time / cardiac cycle length), in the early diastolic phase where the mitral valve leaflets are just beginning to open, the vortex ring starts forming at the tips of the leaflets of the mitral valve. Since the mitral orifice is circular, this vortex ring has a circular shape. The vortex ring starts to pinch off and propagates inside the LV around the peak E-wave~(Fig.~\ref{healthy-vortex}-b). Due to the asymmetric geometry of the mitral leaflets, the ring propagates towards the posterior wall of the LV while starting to disintegrate as it approaches the wall. This ring finally hits the wall and begins to break down into small-scale vortical structures which fill the whole volume of the LV. Another vortex ring is also generated during the A-wave; however, this ring is weaker and dissipates faster without propagating far into the LV.
Comparing with the q-criterion visualization of the LV without the valve, it can be clearly observed that the vortex ring starts forming at the mitral annulus~(Fig.~\ref{healthy-vortex-without}). Due to the absence of the mitral valves, the symmetric ring propagates towards the apex of the LV. However, since the mitral annulus has a larger orifice area compared to the orifice of the MV, the ring is weaker and the core of the vortex has a smaller propagation speed. The peak velocity near the mitral annulus in the LV with the MV is around $1.37~m/s$, which is in agreement with previously published physiological values for a healthy LV~\cite{sotiropoulos2016fluid}, whereas in the simulation without the MV this value is $0.76~m/s$. Therefore, the vortex ring does not travel far in the apical direction inside the LV, and it starts breaking down after propagating about $30\%$ of the LV length. This shallow vortex-ring penetration depth, which occurs in the simulations without the MV, can negatively affect the washout ability of the LV in the apex region. In addition, as previously shown by Seo et al.~\cite{seo2014effect}, the presence of the MV results in a more asymmetric diastolic flow pattern and consequently a counter-clockwise~(CCW) circulation which increases the washout potential of the LV. This is in agreement with our results, which show a CCW circulation of about $63~ cm^2/s$ and $41 ~cm^2/s$ for the LV with and without the MV, respectively, i.e., the circulation without the MV is about $35\%$ lower. As shown in the previous study by Seo et al.~\cite{seo2013effect}, the asymmetric flow pattern during the diastole phase, which is also reflected in the higher CCW circulation, will increase the efficiency of blood ejection towards the aorta during the systole phase. In addition, the effect of the AV and MV on the $\dot{loss}$ is investigated in Table~\ref{work_load}. As can be seen, the presence of the AV increases the $\dot{loss}$. This increase is mainly due to the presence of the AV during systole.
However, the presence of the AV is essential to prevent backflow during the diastolic phase. \subsubsection{Sensitivity of the numerical simulations to smoothing algorithms} \label{numerical simulations smoothing} The qualitative comparison of ring vortex formation and propagation using the q-criterion shows no significant difference in the flow pattern inside the LV for the different smoothing algorithms. In order to make a more quantitative comparison, the $\dot{loss}$ for the different smoothing algorithms is calculated. Again, the results do not show a significant variation in $\dot{loss}$ due to the different smoothing algorithms. Thus it can be concluded that although the choice of smoothing algorithm can have some impact on the accuracy of the reconstruction, and thus on the local flow pattern~(in the regions near the ringing in the final geometry), the main flow patterns are not very sensitive to the smoothing method. \subsubsection{Comparing healthy and AMI reconstructed LV} \label{Comparing healthy and AMI} AMI can significantly affect the performance of the LV during systole. Several studies~\cite{white1987left,st1994quantitative,moller2003prognostic} have investigated the effect of AMI on systolic functionality in terms of $EF$ and the amount of blood flux through the aorta. Figure~\ref{AMI-healthy_velocity} shows the maximum velocity of the flow through the aortic orifice at peak systole. As can be seen, the magnitude of the velocity in the healthy LV is higher than in the AMI case, which is due to the lower $CO$ and $EF$ of the LV with AMI compared to the healthy one. AMI can also affect the diastolic performance of the LV~\cite{moller2006prognostic} in terms of the filling pattern. A bulge or dyskinetic region in the reconstructed LV surface of the AMI-afflicted subject can be clearly seen in Fig.~\ref{AMI-vortex}. The visualization of vortical structures using the q-criterion in this figure shows the formation of a vortex ring from the tips of the mitral leaflets, the same as for the healthy subject.
However, there is a time delay in vortex formation/propagation in the AMI versus the healthy LV, which is due to the delay in the starting time of the diastolic phase in the AMI LV that can be clearly seen in Fig.~\ref{healthy-AMI-flux}. In addition, the vortex ring formed in the AMI subject is weaker and has a lower propagation velocity~($1.12~m/s$ compared to $1.37~m/s$ for the healthy one), and thus it disintegrates and dissipates in the LV sooner. It can also be seen that in the AMI simulation the vortical structures are predominantly found in the region directly beneath the mitral annulus, as compared to a more uniform and looped sweeping of structures in a healthy subject's LV. Comparison of the $\dot{loss}$ in Table~\ref{work_load} for the AMI and healthy LV shows that the AMI left ventricle has a higher $\dot{loss}$ by approximately $20\%$ compared to all the healthy LVs with different smoothing. Our results suggest that $\dot{loss}$ can be used as a promising indicator to measure the performance of the LV. \section{Conclusions} An improved methodology, based on the previous work of Rajan et al.~\cite{rajan2016automated}, for the automated 3D reconstruction of the LV from multi-axis echo of both long-axis and short-axis sections has been implemented. To the best of our knowledge, this is the first attempt to perform the reconstruction of a healthy and an AMI LV along with the mitral and aortic valves using porcine 2D echo~(potentially usable for patient-specific reconstructions) obtained using standard cross-sectional views. The sensitivity of the CFD simulation to the process of reconstruction was investigated.
The results show that the choice of smoothing algorithm for image segmentation can slightly affect the final reconstructed geometry in terms of generating non-physical ringing in the wall of the LV; however, the comparison of energy loss and flow visualizations in CFD simulations with different smoothing algorithms shows that the choice of smoothing does not change the main flow pattern during the cardiac cycle. Furthermore, the change in positioning and orientation of the LV sections has also brought the reconstruction one step closer to a more physical one, with the LV apex fixed in space along with the short-axis sections. Having reconstructed the LV from the multi-axis 2D echo data, the same data were used to extract the mitral and aortic valves. Our results also suggest that the absence of the MV and AV, which is the case in most of the previous LV simulations, can change the flow pattern inside the LV in a non-physiological way in terms of vortex formation/propagation, flow circulation during the diastole phase, and energy loss. Our results show that the vortex ring formed in the healthy LV is stronger and has a faster propagation speed compared to the AMI one, which may suggest a better apex washout for a healthy subject. In addition, our results suggest that the energy $\dot{loss}$ can be a good indicator of the performance of the LV in the numerical simulations. \section*{CONFLICT OF INTEREST STATEMENT} The authors declare that there is no conflict of interest regarding the publication of this article. \section*{ACKNOWLEDGEMENT} This work was supported by American Heart Association grant 13SDG17220022 and the Texas A$\&$M High Performance Research Computing center~(HPRC).
\section{Introduction} The history of the $\kappa$-{\em deformation} began in 1992 with the work of J. Lukierski, A. Nowicki and H. Ruegg \cite{LNR}, where this deformation of the enveloping algebra of the Poincar\'{e} Group appeared for the first time. The next important step was the paper by S. Majid and H. Ruegg in 1994 \cite{Maj-Rueg}, identifying the bicrossproduct structure of the $\kappa$-Poincar\'{e} algebra. Since then, a vast literature on the subject has been produced, with many attempts to apply this deformation to physical problems, but because (almost) all work was done on the level of pure algebra (or formal power series) it was hard to get more than rather formal conclusions. This is not a review paper and we refer to \cite{Luk-review} (also an older one \cite{Bor-Pa}) for a discussion and an extensive bibliography of the subject. This article, however, is not about the $\kappa$-Poincar\'{e} algebra but about the $\kappa$-Poincar\'{e} Group -- a deformation of the algebra of functions on the Poincar\'{e} Group. It appeared in 1994, on a Hopf $*$-algebra level, in the work of S. Zakrzewski \cite{SZ-94}, where it was also shown that it is a quantization of a certain Poisson-Lie structure. Soon, it became clear that this particular Poisson structure is not special to dimension 4 but has analogues in any dimension, is related to certain decompositions of orthogonal Lie algebras \cite{PS-triple}, and is dual to a certain Lie algebroid structure \cite{PS-poisson}. The main result of this work is {\em a topological version} of the $\kappa$-Poincar\'{e} Group. Since \cite{Maj-Rueg} it has been clear that, had it existed, it should have been given by some bicrossproduct construction. The main problem is that the decomposition of a Lie algebra $\gotg=\gota\oplus\gotc$ does not lift to a decomposition of a Lie Group $G=AC$ (of course it lifts to a local decomposition, but the complement of the set of decomposable elements i.e.
$G\setminus(AC\cap CA)$ has a non-empty interior); therefore the construction of S.~Vaes and L.~Vainerman presented in \cite{VV} cannot be directly applied. As has already been shown in \cite{PS-triple}, there is a {\em non-connected} extension of the group $A$, let us denote it by $\tilde{A}$, such that $\tilde{A} C\cap C \tilde{A}$ is open and dense in $G$. {\em Therefore it fits into the framework of \cite{VV} and the $\kappa$-deformation of the Poincar\'{e} Group, or rather of its non-connected extension, exists as a locally compact quantum group.} Here we use an approach different from the one used in \cite{VV}. Although less general, it has some advantages -- it is more geometric and it is easier to see that what we get is really a quantization of a Poisson-Lie structure. It is based on the use of groupoid algebras for differential groupoids naturally related to decompositions of Lie groups. For a global decomposition this construction was described in \cite{PS-DLG}; the result is that given a Lie group $G$ with two closed subgroups $B,C\subset G$ satisfying $G=BC$, one can define two differential groupoid structures on $G$ (over $B$ and $C$; this is described briefly in the second part of this introduction). It turns out that the $C^*$-algebras of these groupoids carry quantum group structures; in fact, sweeping under the rug some universal/reduced algebra problems, one may say that all the main ingredients of the quantum group structure are just $C^*$-liftings of natural groupoid objects. As said above, the situation with $\kappa$-Poincar\'{e} is not so nice, but there is a global decomposition ``nearby'' one can try to use; this framework was described in \cite{PSAx}. Let us explain the construction briefly; geometric details were presented in \cite{PS-poisson}. By the (restricted) Poincar\'{e} Group we mean here the semidirect product of the (restricted) Lorentz Group $A:=SO_0(1,n)$ and the $(n+1)$-dimensional vector Minkowski space.
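As a toy numerical illustration of the semidirect-product structure just recalled (in $1+1$ dimensions with hypothetical rapidities; the text works with $SO_0(1,n)$ in general):

```python
# Toy check of the semidirect-product law for the Poincare group in 1+1
# dimensions: elements (Lambda, a) with
#   (L1, a1)(L2, a2) = (L1 L2, a1 + L1 a2).
# Purely illustrative; not the kappa-deformed structure discussed in the text.
import math

def boost(chi):  # Lorentz boost of rapidity chi in 1+1 dimensions
    c, s = math.cosh(chi), math.sinh(chi)
    return [[c, s], [s, c]]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(x, v):
    return [sum(x[i][k] * v[k] for k in range(2)) for i in range(2)]

def pmul(g1, g2):  # semidirect-product multiplication
    (L1, a1), (L2, a2) = g1, g2
    La2 = matvec(L1, a2)
    return (matmul(L1, L2), [a1[i] + La2[i] for i in range(2)])

def close(u, v, tol=1e-9):  # compare nested lists of floats
    if isinstance(u, list):
        return all(close(a, b, tol) for a, b in zip(u, v))
    return abs(u - v) < tol

g1 = (boost(0.3), [1.0, 2.0])
g2 = (boost(-0.7), [0.5, 0.0])
g3 = (boost(1.1), [-1.0, 0.4])
# associativity of the semidirect product
assert close(list(pmul(pmul(g1, g2), g3)), list(pmul(g1, pmul(g2, g3))))
# boosts compose additively in the rapidity
assert close(matmul(boost(0.3), boost(-0.7)), boost(-0.4))
print("semidirect product law verified")
```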
It turns out that it can be realized as a subgroup $(TA)^0$ of $T^*G$ for $G:=SO_0(1,n+1)$, where $A$ is embedded naturally into $G$ (as the stabilizer of a spacelike vector). On $(TA)^0$ there is a Poisson structure dual to a Lie algebroid structure related to the decomposition of $\gotg$ -- the Lie algebra of $G$ -- into two subalgebras $\gotg=\gota\oplus \gotc$, where $\gota$ is the Lie algebra of $A$. This Lie algebroid is the algebroid of the Lie groupoid $AC\cap CA\subset G$; here $C$ is the Lie subgroup with Lie algebra $\gotc$. As said above, the set $AC\cap CA$ is too small, but one can find $\tilde{A}\subset G$ -- a non-connected extension of $A$ (in fact, this is the normalizer of $A$ in $G$) -- such that $\Gamma:=\tilde{A} C\cap C \tilde{A}$ is open and dense in $G$. The set $\Gamma$ is a differential groupoid (over $\tilde{A}$) and the $C^*$-algebra of this groupoid is the $C^*$-algebra of the $\kappa$-Poincar\'{e} Group. The method used in \cite{PSAx} relies essentially on the fact that the algebra $\gotc$ has a second complementary algebra $\gotb$ and the decomposition $\gotg=\gotc\oplus\gotb$ lifts to {\em a global decomposition} $G=BC$, which is just the Iwasawa decomposition (i.e. $B=SO(n+1)$). This global decomposition defines a groupoid $G_B:G\rightrightarrows B$, and its $C^*$-algebra is the underlying algebra of the quantum group which may be called the ``Quantum $\kappa$-Euclidean Group''. Our groupoid $\Gamma$ embeds into $G_B$ in such a way that it is possible to prove that their $C^*$-algebras are the same, but the comultiplication of $\kappa$-Poincar\'{e} is the comultiplication of $\kappa$-Euclidean twisted by a unitary multiplier.
So one may say that {\em the quantum spaces underlying the $\kappa$-Poincar\'{e} and $\kappa$-Euclidean groups are the same and only the group structures are different.} The embedding $\Gamma\hookrightarrow G_B$ is essentially given by the embedding $\tilde{A} \hookrightarrow SO(n+1)$ as a dense open subset, so it is a kind of compactification of $\tilde{A}$ (which consists of two copies of the (restricted) Lorentz Group $SO_0(1,n)$). This compactification solves the problem that some natural operators, which ``should be'' self-adjoint elements affiliated with $C^*(\Gamma)$, are defined by non-complete vector fields, so they are not essentially self-adjoint on their ``natural'' domains; this embedding just defines the ``correct'' domains. In this work we consider only the $C^*$-algebra with comultiplication, and do not discuss other ingredients like the antipode, Haar weight, etc.; they can be constructed using the methods presented in \cite{PS-DLG}. In the remaining part of the Introduction we recall the basics of groupoid algebras, groupoids related to decompositions of groups, and the results of \cite{PSAx} essential in the following. A short description of the content of each section is given at the end of the Introduction. \subsection{Groupoid algebras} We will use groupoid algebras, so now we recall basic facts and establish the relevant notation. We refer to \cite{PS-DLG,DG} for a detailed exposition and to \cite{PSAx} for the basics of the formalism. All {\em manifolds} are smooth, Hausdorff, second countable and {\em submanifolds} are embedded. For a manifold $M$, by $\omh(M)$ we denote the vector space of smooth, compactly supported, complex \haldens on $M$; it is equipped with the scalar product $\displaystyle (\psi_1\,|\,\psi_2):=\int_M\overline{\psi_1}\psi_2$ and $L^2(M)$ is the completion of $\omh(M)$ in the associated norm.
Clearly, if we choose some $\psi_0$ -- a non-vanishing, real \halden on $M$, there is the equality $\omh(M)=\{f\,\psi_0\,,\,f\in \sD(M)\}$, where $\sD(M)$ stands for smooth, complex and compactly supported functions on $M$. Let $\Gamma\rightrightarrows E$ be a differential groupoid. By $e_R$ ($e_L$) we denote the {\em source (target) projection} and call it the {\em right (left) projection}; the groupoid inverse is denoted by $s$. Let $\Om_L^{1/2}, \Om_R^{1/2}$ denote the bundles of complex \haldens along left and right fibers. {\em A groupoid *-algebra} $\sA(\Gamma)$ is the vector space of compactly supported, smooth sections of $\Om_L^{1/2}\mt\Om_R^{1/2}$ together with a convolution and a $*$-operation. To write explicit formulae let us choose $\lambda_0$ -- a real, non-vanishing, left invariant \halden along left fibers (in fact this means we choose a Haar system on $\Gamma$; however, nothing depends on this choice; details are given in \cite{DG}). Let $\rho_0:=s(\lambda_0)$ be the corresponding right invariant \halden and $\om_0:=\lambda_0\mt \rho_0$. Then any $\om\in\sA(\Gamma)$ can be written as $\om=f\om_0$ for a unique function $f\in\sD(\Gamma)$. With such a choice we write $(f_1\om_0)\,(f_2\om_0)=:(f_1*f_2)\om_0$, $(f\om_0)^*=:(f^*)\om_0$ and: \notka{group-alg-mult} \begin{equation}\label{group-alg-mult} (f_1*f_2)(\gamma):=\int_{F_l(\gamma)} \lambda_0^2(\gamma') f_1(\gamma')f_2(s(\gamma') \gamma)= \int_{F_r(\gamma)} \rho_0^2(\gamma')f_1(\gamma s(\gamma'))f_2(\gamma')\,\,,\,\,f^*(\gamma):=\overline{f(s(\gamma))} \end{equation} $F_l(\gamma)$ and $F_r(\gamma)$ are the left and right fibers passing through $\gamma$, e.g. $F_l(\gamma):=e_L^{-1}(e_L(\gamma))$.
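As a finite toy model of the convolution formula (\ref{group-alg-mult}) (an illustration not taken from the text): for the pair groupoid over a finite set, all integrals become sums, the convolution becomes matrix multiplication, and the $*$-operation becomes the conjugate transpose.

```python
# Finite toy model of the groupoid algebra: for the pair groupoid over
# {0,...,n-1}, the convolution of (group-alg-mult) reduces to matrix
# multiplication and f^* to the conjugate transpose.  This is an
# illustration only, not the groupoid Gamma of the text.

def conv(f1, f2):
    # (f1 * f2)(x, z) = sum_y f1(x, y) f2(y, z)
    n = len(f1)
    return [[sum(f1[x][y] * f2[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

def star(f):
    # f^*(x, y) = conjugate of f(y, x)
    n = len(f)
    return [[f[y][x].conjugate() for y in range(n)] for x in range(n)]

f = [[1 + 1j, 2j], [0j, 3 + 0j]]
g = [[1 + 0j, 1 + 0j], [1j, 0j]]
# the defining *-algebra property: (f * g)^* = g^* * f^*
assert star(conv(f, g)) == conv(star(g), star(f))
print(star(f))
```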
\\ The choice of $\om_0$ defines a norm that makes $\sA(\Gamma)$ a normed $*$-algebra: $$||f\om_0||_0=:||f||_0=\max\left\{\sup_{e\in E}\int_{\el^{-1}(e)}\lambda_0^2(\gamma)|f(\gamma)|,\, \sup_{e\in E}\int_{\er^{-1}(e)}\rho_0^2(\gamma)|f(\gamma)|\right\}$$ There is a faithful representation $\pi_{id}$ of $\sA(\Gamma)$ on $L^2(\Gamma)$ described as follows: choose $\nu_0$ -- a real, non-vanishing \halden on $E$; since $\er$ is a surjective submersion one can define $\psi_0:=\rho_0\mt\nu_0$ -- this is a real, non-vanishing \halden on $\Gamma$. For $\psi=f_2\psi_0,\, f_2\in\sD(\Gamma)$ the representation is given by $\pi_{id}(f_1\om_0) (f_2\psi_0)=:(\pi_{id}(f_1)f_2)\psi_0$ and $\pi_{id}(f_1)f_2=f_1*f_2$ is as in (\ref{group-alg-mult}). The estimate $||\pi_{id}(\om)||\leq||\om||_0$ makes the following definition possible: {\em The reduced $C^*$-algebra of a groupoid} -- $C^*_r(\Gamma)$ is the completion of $\sA(\Gamma)$ in the norm $||\om||:=||\pi_{id}(\om)||$. We will also use the following fact, which is a direct consequence of the definition of the norm $||f||_0$: \begin{lem}\label{lemma-ind-lim}\notka{lemma-ind-lim} Let $U\subset \Gamma$ be an open set with compact closure. There exists $M$ such that $||f||_0\leq M \sup |f(\gamma)|$ for any $f\in\sD(\Gamma)$ with support in $U$. If $f_n\in\sD(\Gamma)$ have supports in a fixed compact set and $f_n$ converges to $f\in\sD(\Gamma)$ uniformly, then $f_n\om_0$ converges to $f\om_0$ in $C^*_r(\Gamma)$. \dowl \end{lem} We will use the {\em Zakrzewski category} of groupoids \cite{SZ1,SZ2}. {\em Morphisms} are not mappings (functors) but relations satisfying certain natural properties. A morphism $h:\Gamma\rel\Gamma'$ of differential groupoids defines a mapping $\hat{h}:\sA(\Gamma)\rightarrow L(\sA(\Gamma'))$ (linear mappings of $\sA(\Gamma')$); this mapping commutes with (right) multiplication in $\sA(\Gamma')$ i.e.
$$\hat{h}(\om)(\om')\om''=\hat{h}(\om)(\om' \om'')\,\,,\,\om\in\sA(\Gamma)\,,\,\om',\om''\in\sA(\Gamma')$$ and we use the notation $\hat{h}(\om)\om'$ (see formula (\ref{app-hat-delta0-form}) in the Appendix for an example of such a mapping); there is also a representation $\pi_h$ of $\sA(\Gamma)$ on $L^2(\Gamma')$; these objects satisfy some obvious compatibility conditions with respect to multiplication and the $*$-operation (see \cite{DG} for details). {\em A bisection} $B$ of $\Gamma\rightrightarrows E$ is a submanifold such that $e_L|_B, e_R|_B :B\rightarrow E$ are diffeomorphisms. Bisections act on $\cred(\Gamma)$ as unitary multipliers and are transported by morphisms: if $B\subset \Gamma$ is a bisection and $h:\Gamma\rel\Gamma'$ is a morphism, then the set $h(B)$ is a bisection of $\Gamma'$. \subsection{Group decompositions, related groupoids and quantum groups} Now we briefly recall some facts about {\em double groups}. Let $G$ be a group and $A,B\subset G$ subgroups such that $A\cap B=\{e\}$. Every element $g$ in the set $\Gamma:=AB\cap BA$ can be written uniquely as $$g=a_L(g) b_R(g)=b_L(g) a_R(g)\,,\,a_L(g), a_R(g)\in A\,,\,b_L(g),b_R(g)\in B.$$ These decompositions define surjections: $a_L,a_R: \Gamma\rightarrow A$ and $b_L,b_R: \Gamma\rightarrow B$ (in fact $a_L, b_R$ are defined on $AB$ and $b_L, a_R$ on $BA$; we will denote these extensions by the same symbols). The formulae: \begin{align*}E &:=A \,, \quad s(g):= b_L(g)^{-1}a_L(g)=a_R(g) b_R(g)^{-1}\,,\,\\ Gr(m) &:= \{(b_1 a b_2; b_1 a, a b_2) :b_1a, ab_2\in\Gamma\} \end{align*} define the structure of the groupoid $\Gamma_A:\Gamma\rightrightarrows A$; the analogous formulae define the groupoid $\Gamma_B: \Gamma\rightrightarrows B$. On the other hand, for a subgroup $B\subset G$ there is a (right) transformation groupoid $(B\setminus G)\rtimes B$. The following lemma \cite{PSAx} explains the relation between these groupoids.
\begin{lem}\label{lemma-embed}\notka{lemma-embed} The map: $$\Gamma_A\ni g\mapsto ([a_L(g)], b_R(g))\in (B\backslash G)\rtimes B$$ is an isomorphism of the groupoid $\Gamma_A$ with the restriction of a (right) transformation groupoid $ (B\backslash G)\rtimes B$ to the set $\{[a] : a\in A\}\subset B\backslash G$. \dowl \end{lem} If $AB=G$ (i.e. $\Gamma=G$) the triple $(G;A,B)$ is called {\em a double group} and in this situation we will denote the groupoids $\Gamma_A, \Gamma_B$ by $G_A, G_B$. It turns out that {\em the transposition of the multiplication relation $m_B$}, i.e. $\delta_0:=m_B^T: G_A\rel G_A\times G_A$, is a coassociative morphism of groupoids. Applying the lemma \ref{lemma-embed} to the groupoid $G_A$ we can identify it with the transformation groupoid $(B\backslash G)\rtimes B$. So $G_A=A\rtimes B$ is a right transformation groupoid for the action $(a,b)\mapsto a_R(ab)$, i.e. the structure is given by: $$E:=\{(a,e): a\in A\}\,,\,s(a,b):=(a_R(ab),b^{-1}),$$ $$m:=\{(a_1,b_1 b_2;a_1, b_1,a_R(a_1 b_1),b_2): a_1\in A\,,\,b_1,b_2\in B\}$$ {\em In the formula above, we identified a relation $m:\Gamma\times \Gamma\rel\Gamma$ with its graph, i.e. a subset of $\Gamma\times \Gamma\times \Gamma$. We will use such notation throughout the paper.} If $G$ is a Lie group, $A,B$ are closed subgroups, $A\cap B=\{e\}\,,\,AB=G$ then $(G;A,B)$ is called {\em a double Lie group}, abbreviated in the following as DLG. It turns out that the mapping $\widehat{\delta_0}$, defined by the morphism $\delta_0$ (compare (\ref{app-hat-delta0-form}) in the Appendix), extends to the coassociative morphism $\Delta$ from $C^*_r(G_A)$ to $C^*_r(G_A\times G_A)=C^*_r(G_A)\mt C^*_r(G_A)$ which satisfies {\em density conditions}: \begin{equation*} cls\{\Delta(a)(I\mt b) :a,b \in C^*_r(G_A)\}=cls\{\Delta(a)(b\mt I) :a,b \in C^*_r(G_A)\}=C^*_r(G_A)\mt C^*_r(G_A), \end{equation*} where {\em cls} denotes the closed linear span.
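As a toy numerical illustration of such a unique two-sided factorization (this example is ours and is not taken from the paper), consider the classical Iwasawa double Lie group $(SL(2,\R);K,AN)$ with $K=SO(2)$ and $AN$ the upper-triangular matrices with positive diagonal: a QR factorization with a fixed sign convention produces exactly the factors $g=k(g)\,b(g)$.

```python
import numpy as np

def iwasawa_sl2(g):
    """Decompose g in SL(2, R) as g = k b with k in SO(2) and b upper
    triangular with positive diagonal (the toy DLG (SL(2,R); SO(2), AN))."""
    k, b = np.linalg.qr(g)
    # fix the sign ambiguity of QR so that diag(b) > 0
    signs = np.sign(np.diag(b))
    return k * signs, signs[:, None] * b

rng = np.random.default_rng(0)
g = rng.normal(size=(2, 2))
g /= np.sqrt(abs(np.linalg.det(g)))      # scale to |det g| = 1
if np.linalg.det(g) < 0:
    g[:, 0] *= -1                        # land in SL(2, R)
k, b = iwasawa_sl2(g)
assert np.allclose(k @ b, g)             # g = k(g) b(g)
assert np.allclose(k.T @ k, np.eye(2))   # k is orthogonal
assert np.allclose(np.tril(b, -1), 0)    # b is upper triangular
assert np.all(np.diag(b) > 0) and np.isclose(np.linalg.det(b), 1)
```

Uniqueness corresponds to $K\cap AN=\{e\}$: two factorizations $k_1b_1=k_2b_2$ force $k_2^{-1}k_1=b_2b_1^{-1}\in K\cap AN$.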
There are other objects that make the pair $(C^*_r(G_A), \Delta)$ a locally compact quantum group; we refer to \cite{PS-DLG} for details. \subsection{Framework for $\kappa$-Poincar\'{e}} The framework for the $\kappa$-Poincar\'{e} group we are going to use was presented in \cite{PSAx}. Let us now recall basic facts established there. Let $G$ be a group and $A,B,C\subset G$ subgroups satisfying the conditions: \notka{rozklad-warunki}\begin{equation*}\label{rozklad-warunki} B\cap C=\{e\}=A\cap C\,,\,B C=G. \end{equation*} i.e. $(G;B,C)$ is a double group. As described above, in this situation, there is the groupoid $G_B$, and the (coassociative) morphism $\delta_0:G_B\rel G_B\times G_B$; explicitly the graph of $\delta_0$ is equal to: \notka{def-delta0}\begin{equation}\label{def-delta0} \delta_0=\{(b_1 c, c b_2;b_1 c b_2) \,:\, b_1,b_2\in B\,,\,c\in C\} \end{equation} Using the lemma \ref{lemma-embed} we see that $G_B$ is a transformation groupoid $(C\backslash G) \rtimes C$ and the isomorphism is $$(C\backslash G) \times C\ni ([g],c)\mapsto b_R(g)c\in G$$ Let $\Gamma:=A C\cap C A$ and consider on $\Gamma$ the groupoid structure $\Gamma_A: \Gamma\rightrightarrows A$ described above, together with a relation (the transposition of the multiplication in $\Gamma_C:\Gamma\rightrightarrows C$): \begin{equation*}\label{def-tmct} \notka{def-tmct} \tilde{m}_C^T:=\{(a_1 c_1, c_1 a_2 ; a_1 c_1 a_2) : a_1 c_1, c_1 a_2\in\Gamma\}\subset \Gamma\times\Gamma\times\Gamma. \end{equation*} The corresponding projections will be denoted by $\tilde{c}_L, \tilde{c}_R $ and $a_R, a_L$. Again, by the lemma \ref{lemma-embed} we identify the groupoid $\Gamma_A$ with the restriction of $(C\backslash G) \rtimes C$ and then with the restriction of $G_B$ to the set $B':=B\cap CA$, i.e. with $b_L^{-1}(B')\cap b_R^{-1}(B')$. This restriction will be denoted by $\Gamma_{B'}$ (instead of the more adequate but rather inconvenient $G_B|_{B'}$).
This isomorphism and its inverse are given by:\notka{wzor-embed} \begin{align}\label{wzor-embed} \Gamma_A\ni & a c\mapsto b_R(a) c\in \Gamma_{B'}\quad\quad,\quad \Gamma_{B'}\ni b c\mapsto a_R(b) c\in \G_A \end{align} The image of $\tilde{m}_C^T$ inside $\Gamma_{B'}\times \Gamma_{B'}\times \Gamma_{B'}$ is equal to: \begin{equation*}\label{mct} \{(b_R(a_1) c_1, b_R(a_2) c_2; b_R(a_1 a_2) c_2): a_1 c_1=\tilde{c}_1 \tilde{a}_1, c_1\tilde{a}_2=a_2 c_2\} \end{equation*} The following object plays the major role in what follows: \notka{def-twist}\begin{equation}\label{def-twist} T:=\{(g,b):c_R(g)b\in A\}=\{(b_1 \tilde{c}_L(b_2)^{-1}, b_2) : b_1\in B,b_2\in B'\}\subset G_B\times G_B. \end{equation} Using the definition (\ref{def-delta0}) of $\delta_0$ one easily computes images of $T$ by relations $id\times\delta_0$ and $\delta_0\times id$: \begin{align*} (id\times \delta_0)& T=\{(g_1,b_2,b_3): b_2 b_3\in B'\,,\,c_R(g_1)=\tilde{c}_L(b_2 b_3)^{-1}\}\\ (\delta_0\times id)T& =\{(g_1,g_2,b_3): c_R(g_1)=c_L(g_2)\,,\,b_3\in B'\,,\,c_R(g_2)=\tilde{c}_L(b_3)^{-1}\} \end{align*} Let us also denote $T_{12}:=T\times B\subset G_B\times G_B\times G_B$ and $T_{23}:=B\times T\subset G_B\times G_B\times G_B$. 
\\ Main properties of $T$ are listed in the following lemma (proven in \cite{PSAx}): \begin{prop} \label{prop-twist}\notka{prop-twist}\begin{enumerate} \item $T$ is a section of left and right projections (in $G_B\times G_B$) over the set $B\times B'$ and a bisection of $G_B\times \Gamma_{B'}$; \item $(id\times \delta_0) T$ is a section of left and right projections (in $G_B\times G_B\times G_B$) over the set $B\times \delta_0(B')=\{(b_1,b_2,b_3) : b_2 b_3\in B'\}$; \item $(\delta_0\times id) T$ is a section of left and right projections over the set $B\times B \times B'$; \item $T_{23}(id\times \delta_0)T= T_{12}(\delta_0\times id) T$ (equality of sets in $G_B\times G_B\times G_B$), moreover this set is a section of the right projection over $B\times (\delta_0(B')\cap(B\times B'))$ and the left projection over $B\times B'\times B'$. \end{enumerate} \end{prop} \noindent Due to this proposition the left multiplication by $T$, which we denote by the same symbol, is a bijection of $G_B \times b_L^{-1}(B')$. Let $Ad_{T}:G_B\times G_B\rel G_B\times G_B$ be a relation defined by:\notka{def-AdT} \begin{equation}\label{def-AdT} (g_1,g_2;g_3,g_4)\in Ad_{T}\iff \exists t_1, t_2\in T : (g_1,g_2)=t_1(g_3,g_4)(s_B\times s_B)(t_2). \end{equation} and let us define the relation $\delta:=Ad_T\cdot \delta_0: G_B\rel G_B\times G_B$ \notka{def-delta}\begin{equation}\label{def-delta} \begin{split} \delta= \{ & (b_R(b_3 b_2^{-1} \tilde{c}_L(b_2))\tilde{c}_L(b_2)^{-1} c_L(b_2 c_2) \tilde{c}_L(b_R(b_2 c_2)), b_2 c_2; b_3 c_2) :\\ & b_3\in B, c_2\in C, b_2, b_R(b_2 c_2)\in B' \}. \end{split} \end{equation} The relation between $\delta$ and $\tilde{m}_C^T$ is explained in the lemma: \cite{PSAx} \begin{lem} $\delta$ is an extension of $\tilde{m}_C^T$ i.e. 
$\tilde{m}_C^T\subset \delta$ \end{lem} \noindent The addition of some differential conditions to this situation makes it possible to use $T$ to twist the comultiplication on $C^*_r(G_B)$: \begin{assumpt}\label{basic-assumpt}\notka{basic-assumpt} \begin{enumerate} \item $G$ is a Lie group and $A,B,C$ are closed Lie subgroups such that \\ $B\cap C=\{e\}=A\cap C\,,\,B C=G$ (i.e. $(G;B,C)$ is a DLG). \item The set $\Gamma:=C A\cap AC$ is open and dense in $G$. \item Let $U:=b_L^{-1}(B')$ and $\sA(U)$ be the linear space of elements from $\sA(G_B)$ supported in $U$. We assume that $\sA(U)$ is dense in $C^*_r(G_B)$. \item For a compact set $K_C\subset C$, open $V\subset B$ and $(b_1, b_2)\in B\times B'$ let us define a set $Z(b_1,b_2,K_C;V):=K_C\cap\{c\in C: b_R(b_1 c) b_2\in V\}$ and a function: $$B\times B' \ni(b_1,b_2)\mapsto \mu(b_1,b_2,K_C;V):=\int_{Z(b_1,b_2,K_C;V)} d_l c.$$ For compact sets $K_1\subset B$ and $K_2\subset B'$ let $\mu(K_1,K_2,K_C;V):=\sup\{\mu(b_1,b_2,K_C;V): b_1\in K_1\,,\,b_2\in K_2\}$. We assume that \begin{equation*} \forall\,\epsilon>0\,\exists\, V-{\rm a\, neighborhood\,of\,}B\setminus B'{\rm\, in\, B}\,:\, \mu(K_1,K_2,K_C;V)\leq\epsilon \end{equation*} \end{enumerate} \end{assumpt} \begin{re}\label{remark-B}\notka{remark-B} \begin{itemize} \item It follows from the first and the second assumptions that $B'$ is open and dense in $B$. \item The second assumption can be replaced by the following two conditions: \begin{itemize} \item[a)] $\gotg=\gota\oplus\gotc$, where $\gotg, \gota,\gotc$ are the Lie algebras of $G,A,C$, respectively (then $AC$ and $CA$ are open); \item[b)] $AC\cap CA$ is dense in $G$. \end{itemize} \item These assumptions are not very pleasant and probably at least some of them are redundant; but they are sufficient to get results in the case of the quantum ``ax+b'' group and $\kappa$-Poincar\'{e}.
This framework, however, does not work for the dual of $\kappa$-Poincar\'{e} and at the moment it is unclear if and how that example may be handled (with this approach). \end{itemize} \end{re} The following proposition was proven in \cite{PSAx}: \begin{prop}\label{prop-Delta} \notka{prop-Delta} Assume the conditions listed in (\ref{basic-assumpt}) are satisfied. Then \begin{itemize} \item[a)] $C^*_r(\Gamma_{B'})=C^*_r(G_B)$ -- for this equality, it is sufficient to satisfy (1),(2),(3) from \ref{basic-assumpt}; \item[b)] The mapping $T:\sA(G_B\times U)\rightarrow\sA(G_B\times U)$ extends to the unitary $\widehat{\sT}\in M(C^*_r(G_B)\mt C^*_r(G_B))$ which satisfies: $$(\widehat{\sT}\mt I)(\Delta_0\mt id)\widehat{\sT}=(I \mt \widehat{\sT})(id\mt \Delta_0)\widehat{\sT}$$ \item[c)] Because of b), the formula $\Delta(a):=\widehat{\sT}\Delta_0(a)\widehat{\sT}^{-1}$ defines a coassociative morphism. For this morphism ({\em cls} again denotes the closed linear span):\notka{density-c} \begin{equation} \label{density-c} cls\{\Delta(a)(I\mt c)\,,\,a,c\in C^*_r(G_B)\}=cls\{\Delta(a)(c\mt I)\,,\,a,c\in C^*_r(G_B)\}= C^*_r(G_B)\mt C^*_r(G_B). \end{equation} \end{itemize} \dow \end{prop} The second section describes the situation for $\kappa$-Poincar\'{e}, i.e.~we define the groups $G,A,B,C$, compute explicit formulae for the decompositions and describe the structure of the groupoid $G_B$. In the third one, we verify Assumptions \ref{basic-assumpt} and, by Prop.~\ref{prop-Delta}, get the $C^*$-algebra and comultiplication for the $\kappa$-Poincar\'{e} Group. The fourth section describes the generators of this $C^*$-algebra, computes the commutation relations and the comultiplication on the generators; also, the twist is described in more detail. In the penultimate section we compare our formulae to the ones in \cite{SZ-94} and in the last one we discuss the ``quantum $\kappa$-Minkowski Space''. Finally, there is an Appendix with some formulae needed here and proven elsewhere.
\section{Decompositions defining $\kappa$-Poincar\'{e} Group }\label{sec:decomp} Let $(V,\eta)$ be an $(n+2)$-dimensional, $n\geq1$, Minkowski vector space (with signature $(+,-,\dots,-)$). Let us choose an orthonormal basis $(e_0, e_1,\dots,e_{n+1})$ and identify the (special) orthogonal group $SO(\eta)$ with the corresponding group of matrices $SO(1,n+1)$. Let $G:=SO_0(1,n+1)\subset SO(1,n+1)$ be the connected component of the identity and $\gotg:=span\{\Mlambda_{\alpha \beta}\,,\,\alpha,\beta=0,\dots, n+1\}$ be its Lie algebra (see Appendix for notation). Consider three closed subgroups $A,B,C\subset G$: {\em The group $A$} is a non-connected extension of $SO_0(1,n)$ inside $SO_0(1,n+1)$ defined by: \notka{def-A} \begin{equation}\label{def-A} A:=\left\{\left(\begin{array}{cc} u & 0\\ 0 & 1 \end{array}\right) \left(\begin{array}{cc} I_n &0\\ 0 & h \end{array}\right)\,:\,\,u\in SO_0(1,n),\,h:=\left(\begin{array}{cc} d &0\\ 0 & d \end{array}\right)\,,\,d=\pm 1 \right\}. \end{equation} In fact, it is not hard to see that $A$ is the normalizer of $SO_0(1,n)$ (embedded into the upper left corner) inside $SO_0(1,n+1)$. The Lie algebra of $A$ is $\gota:=span\{\Mlambda_{0 m}\,,\,m=1,\dots, n\}$.
\noindent We parameterize $SO_0(1,n)$ by: $$\{z\in \R^n: |z|<1\}\times SO(n)\ni (z,U)\mapsto \left(\begin{array}{cc} \frac{1+|z|^2}{1-|z|^2} & \frac{2}{1-|z|^2}z^tU\\ \frac{2 }{1-|z|^2} z & (I+\frac{2}{1-|z|^2}z z^t)U\end{array}\right)\in SO_0(1,n)$$ \begin{re} This is the standard $boost\times rotation$ parametrization of the Lorentz Group; the parameter $z$ is related to a velocity $v$ by: $$z=\frac{v}{1+\sqrt{1-|v|^2}}\,\,,\,\,\,v=\frac{2 z}{1+|z|^2}$$ \notka{jezeli $|v|=\tanh \chi$, to $|z|=\tanh(\frac\chi2)$} \end{re} \noindent In this way we obtain the parametrization of $A$: \notka{def-zUd} \begin{equation}\label{def-zUd} \{z\in \R^n: |z|<1\}\times SO(n)\times\{-1,1\}\ni(z,U,d)\mapsto \left(\begin{array}{ccc} \frac{1+|z|^2}{1-|z|^2} & \frac{2}{1-|z|^2}z^tU D & 0\\ \frac{2 }{1-|z|^2} z & (I+\frac{2}{1-|z|^2}z z^t)U D & 0\\ 0 & 0 & d\end{array}\right), \end{equation} where $D=\left(\begin{array}{cc}I_{n-1} & 0\\0 & d\end{array}\right)$; we will also denote by $(z,U,d)$ the corresponding element of $G$. {\em The group $B$} is $SO(n+1)$ embedded into $G$ by: $SO(n+1)\ni g\mapsto \left(\begin{array}{cc} 1 & 0 \\0 & g\end{array}\right)\in G$; its Lie algebra is $\gotb:=\{\Mlambda_{kl}\,,\,k,l=1,\dots, n+1\}$. Elements of $SO(n+1)$ will be written as: \notka{def-B} \begin{equation}\label{def-B} (\Lambda,u,w,\alpha):=\left(\begin{array}{cc} \Lambda & u\\ w^t & \alpha \end{array}\right),\quad\Lambda\in M_n(\R) , u,w\in\R^n\,,\,\alpha\in[-1,1], \end{equation} and $\Lambda, u, w, \alpha$ satisfy: \notka{param-SOn} \begin{equation}\label{param-SOn}\Lambda \Lambda^t+u u^t=I\,,\,\Lambda w+\alpha u=0\,,\,\Lambda^t \Lambda+ w w^t=I\,,\, \Lambda^t u+\alpha w=0\,,\,|u|^2+\alpha^2=|w|^2+\alpha^2=1; \end{equation} these equations imply that $\alpha=\det(\Lambda)$. Again we will denote by $(\Lambda,u,w,\alpha)$ the corresponding element of $G$. 
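The relations (\ref{param-SOn}) are nothing but the block form of $g^tg=gg^t=I$, and the claim $\alpha=\det(\Lambda)$ follows from the adjugate formula $\mathrm{adj}(g)=g^t$ for $g\in SO(n+1)$. As a quick numerical sanity check (a sketch of ours, not part of the paper; sampling $SO(n+1)$ via QR is just a convenient assumption of the example):

```python
import numpy as np

# sample a random element g of SO(n+1): QR of a Gaussian matrix,
# with the determinant then fixed to +1
rng = np.random.default_rng(1)
n = 3
g, _ = np.linalg.qr(rng.normal(size=(n + 1, n + 1)))
if np.linalg.det(g) < 0:
    g[:, 0] *= -1
# block form (Lambda, u, w, alpha) as in (def-B)
Lam, u, w, alpha = g[:n, :n], g[:n, n], g[n, :n], g[n, n]
# the relations (param-SOn)
assert np.allclose(Lam @ Lam.T + np.outer(u, u), np.eye(n))
assert np.allclose(Lam @ w + alpha * u, np.zeros(n))
assert np.allclose(Lam.T @ Lam + np.outer(w, w), np.eye(n))
assert np.allclose(Lam.T @ u + alpha * w, np.zeros(n))
assert np.isclose(u @ u + alpha**2, 1) and np.isclose(w @ w + alpha**2, 1)
assert np.isclose(alpha, np.linalg.det(Lam))   # alpha = det(Lambda)
```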
{\em The group $C$} is: \notka{def-C} \begin{equation}\label{def-C} C:=\left\{\left(\begin{array}{ccc} \frac{s^2+1+|y|^2}{2 s} & -\frac1s y^t & \frac{s^2-1+|y|^2}{2 s}\\ -y & I & -y\\ \frac{s^2-1-|y|^2}{2 s} & \frac1s y^t & \frac{s^2+1-|y|^2}{2 s} \end{array}\right)\,s\in\R_+, y\in\R^n\right\}\subset G\,; \end{equation} it is isomorphic to the semidirect product of $\R_+$ and $\R^n$ $\,\{(s,y)\in \R_+\times \R^n\}$ with multiplication $\displaystyle (s_1,y_1) (s_2,y_2):=(s_1 s_2, s_2 y_1+y_2)$. As before we will use $(s,y)$ to denote the corresponding element of $G$. The Lie algebra of $C$ is $\gotc:=span\{ \Mlambda_{\beta 0}-\Mlambda_{\beta (n+1)}\,,\,\beta=0,\dots, n\}=span\{ \Mlambda_{k 0}-\Mlambda_{k(n+1)}\,,\,k=1,\dots, n+1\}$. The coordinates $(s,y)$ are related to basis in $\gotc$ as: \notka{def-wsp-sy} \begin{align}\label{def-wsp-sy} (s,y) &=\exp(-(\log s) \Mlambda_{0(n+1)}) \exp(\Mlambda(y))\,,& \Mlambda(y)&:=\sum_{k=1}^n y_k(\Mlambda_{k(n+1)} -\Mlambda_{k 0}) \end{align} \begin{re} More geometric description of data defining groups $A,B,C$ was given in \cite{PS-poisson}; essentially we have to choose two orthogonal vectors in $n+2$ dimensional Minkowski(vector) space, one spacelike and one timelike. \end{re} The Iwasawa decomposition for $G$ is $G=B C=C B$. In the following we will need explicit relation between two forms of this decomposition i.e. solutions of the equation \notka{eq-iwasawa} \begin{align}\label{eq-iwasawa} (\Lambda,u,w,\alpha)(s,y)&=(\tilde{s},\tilde{y})(\tilde{\Lambda}, \tilde{u}, \tilde{w}, \tilde{\alpha}) \end{align} \begin{lem}\label{lemat-iwasawa}\notka{lemat-iwasawa} (a) Let $(s,y)\in C$ and $(\Lambda,u,w,\alpha)\in B$. 
The equation (\ref{eq-iwasawa}) has the (unique) solution $(\tilde{s},\tilde{y})\in C$ and $(\tilde{\Lambda}, \tilde{u}, \tilde{w}, \tilde{\alpha})\in B$ given by formulae: \notka{solution-iwasawa1} \begin{align} \tilde{s} &=M s =-w^ty+\frac{s^2+1+|y|^2}{2 s}+\alpha\frac{s^2-1-|y|^2}{2 s} & \tilde{y} &=\Lambda y-\frac{s^2-1-|y|^2}{2 s} u\nonumber\\ \tilde{u} &=\frac{1}{M s}\left((1-\frac{w^t y}{s}) u -\frac{1-\alpha}{s} \Lambda y\right) & \tilde{w} &=\frac{1}{M s}\left(w-\frac{1-\alpha}{s} y\right)\nonumber \end{align} \begin{align}\label{solution-iwasawa1} \tilde{\Lambda} &=\left[\Lambda-\frac1{\alpha-1}u w^t\right]\left[I-\frac1{M (1-\alpha)}(w-\frac{1-\alpha}{s} y)(w^t-\frac{1-\alpha}{s} y^t)\right] \\ \tilde{\alpha} &=1-\frac{1-\alpha}{M s^2}\,,\,\, {\rm where}\nonumber \end{align}\vspace{-4ex} \begin{align*} \begin{split} M & :=\frac1{2(1-\alpha)} \left(\left(\frac{1-\alpha}{s}\right)^2+ |w-\frac{1-\alpha}{s} y|^2\right)=\\ & = \frac{1}{2}\left(\frac{1}{s^2}+\frac{|y|^2}{s^2}+1\right)-\frac{\alpha}{2}\left(\frac{1}{s^2}+\frac{|y|^2}{s^2}-1\right) -\frac{w^t y}{s} \end{split} \end{align*} (b) Let $(\tilde{s},\tilde{y})\in C$ and $(\tilde{\Lambda}, \tilde{u}, \tilde{w}, \tilde{\alpha})\in B$. 
The equation (\ref{eq-iwasawa}) has the (unique) solution $(s,y)\in C$ and $(\Lambda,u,w,\alpha)\in B$ given by formulae: \notka{solution-iwasawa2} \begin{align}\label{solution-iwasawa2} s&=\frac{\tilde{s}}{\tilde{M}}= \frac{2 \tilde{s} (1-\tilde{\alpha})}{\tilde{s}^2 (1-\tilde{\alpha})^2+|\tilde{u}+(1-\tilde{\alpha})\tilde{y}|^2} & y&=\frac{1}{\tilde{M}}\left(\tilde{\Lambda}^t \tilde{y}-\frac{\tilde{s}^2+|\tilde{y}|^2-1}{2} \tilde{w}\right)\nonumber\\ u&=\frac{\tilde{s}}{\tilde{M}}\left(\tilde{u}+(1-\tilde{\alpha})\tilde{y}\right) & w&=\frac{\tilde{s}}{\tilde{M}}\left((1-\tilde{\alpha})\tilde{\Lambda}^t\tilde{y}+(1+\tilde{u}^t\tilde{y}) \tilde{w}\right) \end{align} \begin{align} \Lambda&=\tilde{\Lambda}-\frac{1}{\tilde{M}}\left[(1+\tilde{u}^t\tilde{y})\tilde{y}\tilde{w}^t- \frac{\tilde{s}^2+|\tilde{y}|^2-1}{2} \tilde{u}\tilde{w}^t+(\tilde{u}+(1-\tilde{\alpha})\tilde{y})\tilde{y}^t\tilde{\Lambda}\right]= \nonumber\\ &=\left[I-\frac{1}{\tilde{M}(1-\tilde{\alpha})}(\tilde{u}+(1-\tilde{\alpha})\tilde{y})(\tilde{u}+(1-\tilde{\alpha})\tilde{y})^t\right] \left[\tilde{\Lambda}+\frac{1}{1-\tilde{\alpha}}\tilde{u}\tilde{w}^t\right]\nonumber\\ \alpha& =1-\frac{\tilde{s}^2(1-\tilde{\alpha})}{\tilde{M}}\,,\,\, {\rm where}\nonumber \end{align}\vspace{-2ex} \begin{align*}\begin{split} \tilde{M}& :=\frac{1}{2(1-\tilde{\alpha})}\left(\tilde{s}^2(1-\tilde{\alpha})^2+ |(1-\tilde{\alpha})\tilde{y}+\tilde{u}|^2\right)=\\ &=\frac{\tilde{s}^2+|\tilde{y}|^2+1}{2}-\tilde{\alpha}\frac{\tilde{s}^2+|\tilde{y}|^2-1}{2}+\tilde{u}^t\tilde{y} \end{split}\end{align*} \end{lem} \noindent {\em Proof:} Direct computation using (\ref{def-B}) and (\ref{def-C}). 
\\ \dowl \noindent By direct computations one also verifies that (\ref{eq-iwasawa}) implies equality \notka{eq-orbit1} \begin{equation*}\label{eq-orbit1} \tilde{s}(\tilde{\alpha}-1) =\frac{\alpha-1}{s} \end{equation*} Clearly, it follows that $\alpha=1\iff\tilde{\alpha}=1$ and then: \begin{equation*} u=w=\tilde{u}=\tilde{w}=0\,,\,\,\Lambda=\tilde{\Lambda}\,,\,\,s=\tilde{s}\,,\,\,\tilde{y}=\Lambda y \end{equation*} For $\alpha\neq1$ (or, equivalently, $\tilde{\alpha}\neq 1$) we have: \notka{eq-orbit2} \begin{equation}\label{eq-orbit2} \Lambda-\frac{1}{\alpha-1}u w^t =\tilde{\Lambda}-\frac{1}{\tilde{\alpha}-1}\tilde{u} \tilde{w}^t \end{equation} Moreover, for two given elements of $B$: $(\Lambda,u,w,\alpha)$ and $(\tilde{\Lambda},\tilde{u},\tilde{w},\tilde{\alpha})$ with $\alpha\neq 1,\tilde{\alpha}\neq 1$ that satisfy (\ref{eq-orbit2}), there exist (not unique) $(s,y)\in C$ and $(\tilde{s}, \tilde{y})\in C$ such that equation (\ref{eq-iwasawa}) is fulfilled; they are given by: \begin{align*} \tilde{s} s &=\frac{\alpha-1}{\tilde{\alpha}-1}\,,\,\,&\, \tilde{y} &=\frac{u}{s(1-\tilde{\alpha})}-\frac{\tilde{u}}{1-\tilde{\alpha}}\,,&\,y &=\frac{\tilde{w}}{\tilde{\alpha}-1}- \frac{s w}{\alpha-1} \end{align*} The decomposition (\ref{eq-iwasawa}) defines the groupoid $G_B: G\rightrightarrows B$. We will write $(\Lambda,u,w,\alpha;s,y)$ for the product $(\Lambda,u,w,\alpha)(s,y)$ and use $(\Lambda,u,w,\alpha;s,y)$ to denote elements of $G_B$. 
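The explicit matrices (\ref{def-B}) and (\ref{def-C}) make the statements above easy to check numerically. The sketch below (the helper \texttt{C\_mat} is ours, not from the paper) builds the matrix of $(s,y)\in C$ and verifies the semidirect-product law $(s_1,y_1)(s_2,y_2)=(s_1s_2,\,s_2y_1+y_2)$ together with the fact that $C$ preserves the Minkowski form.

```python
import numpy as np

def C_mat(s, y):
    """Matrix of the element (s, y) of the group C, per (def-C)."""
    n = len(y)
    m = np.eye(n + 2)
    yy = y @ y
    m[0, 0] = (s * s + 1 + yy) / (2 * s)
    m[0, 1:n + 1] = -y / s
    m[0, n + 1] = (s * s - 1 + yy) / (2 * s)
    m[1:n + 1, 0] = -y
    m[1:n + 1, n + 1] = -y
    m[n + 1, 0] = (s * s - 1 - yy) / (2 * s)
    m[n + 1, 1:n + 1] = y / s
    m[n + 1, n + 1] = (s * s + 1 - yy) / (2 * s)
    return m

n = 3
eta = np.diag([1.0] + [-1.0] * (n + 1))   # Minkowski form (+,-,...,-)
rng = np.random.default_rng(2)
s1, s2 = 2.0, 0.5
y1, y2 = rng.normal(size=n), rng.normal(size=n)
# semidirect-product law: (s1, y1)(s2, y2) = (s1 s2, s2 y1 + y2)
assert np.allclose(C_mat(s1, y1) @ C_mat(s2, y2),
                   C_mat(s1 * s2, s2 * y1 + y2))
# C sits inside SO(1, n+1): C^t eta C = eta
assert np.allclose(C_mat(s1, y1).T @ eta @ C_mat(s1, y1), eta)
```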
The multiplication relation is given by:\notka{GB-multip} \begin{equation*}\label{GB-multip} \begin{split} m_B:=\left\{\left(\Lambda,u,w,\alpha;s s_1,s_1 y+y_1;;\, \Lambda,u,w,\alpha;s,y\,;;\,\tilde{\Lambda},\tilde{u},\tilde{w},\tilde{\alpha};s_1,y_1\right): \right.\\ \left.(\Lambda,u,w,\alpha;s,y)=(\tilde{s},\tilde{y})(\tilde{\Lambda}, \tilde{u}, \tilde{w}, \tilde{\alpha})\right\}\subset G_B\times G_B\times G_B, \end{split} \end{equation*} We will use a detailed structure of this groupoid to verify (3) of Assumptions \ref{basic-assumpt}; the structure is summarized in the following lemma. \begin{lem}\label{lemma-GB-struct}\notka{lemma-GB-struct} \begin{enumerate} \item The isotropy group of $(\Lambda,0,0,1)\in B$ (i.e. of elements of $SO(n)$ embedded in $SO(n+1)$ in the upper left corner) is equal to $\{(\Lambda,0,0,1;s,y): (s,y)\in C)\}\simeq C$ and for $(\Lambda,u,w,\alpha)\in B\,,\,\alpha\neq 1$ is one dimensional $\{(\Lambda,u,w,\alpha;s, \frac{s-1}{1-\alpha} w ):s\in \R_+\}\simeq \R_+$ \item $G_B$ is a disjoint union of an open groupoid $\Gamma_0$ over $SO(n+1)\setminus SO(n)$ and a group bundle $\Gamma_1:=\{(\Lambda,0,0,1;s,y): \Lambda\in SO(n),\,(s,y)\in C \} \simeq SO(n)\times C$; \item The open groupoid $\Gamma_0$ is a product $\Gamma_0=O(n)^-\times \Gamma_{00}$ of a manifold-groupoid $O(n)^-$ and a transitive groupoid $\Gamma_{00}$, where $O(n)^-$ stands for the component of the orthogonal group with a negative determinant; \item The transitive groupoid $\Gamma_{00}$ is a product $\Gamma_{00}=\R_+\times(\R^n\times\R^n)$ of a group and a pair groupoid. \end{enumerate} \end{lem} \noindent {\em Proof:} 1) and 2) are direct consequences of formulae (\ref{solution-iwasawa1}) and (\ref{solution-iwasawa2});\\ 3) For $v\in\R^n$ let $R(v)$ be the orthogonal reflection in $\R^{n+1}$ along the direction of $(v,-1)^t$ i.e. 
$$R(v)=\left(\begin{array}{cc} I-\frac{2 v v^t}{1+|v|^2} & \frac{2 v}{1+|v|^2} \\ \frac{2 v^t}{1+|v|^2} &\frac{|v|^2-1 }{1+|v|^2}\end{array}\right)$$ Let us consider the map: $$\Phi: O(n)^-\times \R^n\ni (K,v)\mapsto \left(\begin{array}{cc} K &0\\0&1\end{array}\right) R(v) \in SO(n+1)$$ Clearly $\Phi(K,v) \in SO(n+1)\setminus SO(n)$. Moreover, using (\ref{param-SOn}), it is easy to see that for $\alpha\neq 1$ the matrix $\Lambda-\frac{1}{\alpha-1}u w^t$ is orthogonal and $$\left(\begin{array}{cc} \Lambda& u\\w^t &\alpha\end{array}\right) \left(\begin{array}{cc} I& 0\\\frac{w^t}{1-\alpha} & 1\end{array}\right) \left(\begin{array}{cc} I& -w\\0 & 1\end{array}\right)= \left(\begin{array}{cc} \Lambda-\frac{1}{\alpha-1}u w^t& 0\\ \frac{w^t}{1-\alpha}&-1\end{array}\right),$$ therefore $\det(\Lambda-\frac{1}{\alpha-1}u w^t)=-1$ and we can define $$\Psi:SO(n+1)\setminus SO(n)\ni \left(\begin{array}{cc} \Lambda& u\\w^t &\alpha\end{array}\right) \mapsto \left( \Lambda-\frac{1}{\alpha-1}u w^t,\frac{w}{1-\alpha}\right)\in O(n)^-\times \R^n.$$ By direct computation one verifies that $\Psi=\Phi^{-1}$. Clearly, both mappings are smooth, so both are diffeomorphisms. By (\ref{solution-iwasawa1}) we obtain: $$\Phi(K,v)\,(s,y)=(\tilde{s},\tilde{y})\,\Phi(K, sv-y)$$ $$\tilde{s}=\frac{1+|s v-y|^2}{s(1+|v|^2)}\,,\,\,\,\, \tilde{y}=K\left(y-\frac{2v}{1+|v|^2}(v^t y +\frac{s^2-|y|^2-1}{2 s})\right)$$ These formulae show that $\Gamma_{00}:=\R^n\times C$ is a (right) transformation groupoid with the action: $\R^n\times C\ni (v ;s,y)\mapsto s v-y\in \R^n$. It is clear that this action is transitive.\\ 4) We will use the following general but simple: \begin{lem}\label{lemma-transitive} \notka{lemma-transitive}Let $\Gamma\rightrightarrows E$ be a transitive groupoid. For $e_0\in E$ let $p: E\ni e \mapsto p(e)\in e_L^{-1}(e_0)$ be a section of the right projection, such that $p(e_0)=e_0$. Let $G$ be an isotropy group of $e_0$. 
The map $G\times E\times E\ni(g,e_1,e_2)\mapsto s(p(e_1)) g p(e_2)\in\Gamma$ is an isomorphism of groupoids. ($G\times E\times E$ is a product of a group and pair groupoid). \end{lem} The application of the lemma (choose $v_0=0$) gives us a groupoid isomorphism: $$\R_+\times\R^n\times \R^n\ni(s;x_1,x_2)\mapsto (x_1; s, s x_1-x_2)\in \R^n\times C,$$ which in our situation clearly is a diffeomorphism.\\ \dowl To find the set $B'$ i.e. $B\cap CA$ we have to solve the equation:\notka{eq-B-prim} \begin{equation}\label{eq-B-prim} (z,U,d)=(s,y)(\Lambda,u,w,\alpha)\,,\,\,(z,U,d)\in A\,,\,(s,y)\in C\,,\,(\Lambda,u,w,\alpha)\in B \end{equation} Using (\ref{def-zUd}, \ref{def-B}) and (\ref{def-C}), by direct computation, one verifies: \begin{lem}\label{lemma-B-prim}\notka{lemma-B-prim} (a) For $(z,U,d)\in A$ solutions $(s,y)\in C\,,\,(\Lambda,u,w,\alpha)\in B$ of (\ref{eq-B-prim}) are given by: \begin{align}\label{solution-B-prim-1} s& =\frac{1+|z|^2}{1-|z|^2}\quad, & y&=\frac{- 2 z}{1-|z|^2}\quad,\nonumber\\ \Lambda&= (I-\frac{2 z z^t}{1+|z|^2})U D\quad, & u&=\frac{- 2 d z }{1+|z|^2}\quad,& w&=\frac{2 D U^t z}{1+|z|^2}\quad, & \alpha&=\frac{d(1-|z|^2)}{1+|z|^2}. \notka{solution-B-prim-1} \end{align} (b) For $(\Lambda,u,w,\alpha)\in B$ with $\alpha\neq 0$, solutions $(s,y)\in C\,,\,(z,U,d)\in A$ of (\ref{eq-B-prim}) are given by: \begin{align}\nonumber\label{solution-B-prim-2} s&=\frac{1}{|\alpha|}\quad,& y& =\frac{u}{\alpha}\quad,\\ z& =-\frac{\sign (\alpha) u}{1+|\alpha|}\quad,& U&=(\Lambda-\frac{\sign(\alpha)}{1+|\alpha|} u w^t)D\quad, & d& =\sign(\alpha). \notka{solution-B-prim-2} \end{align} \end{lem} \dowl \noindent This way we obtain $B'$ and projections $\tilde{c}_L$ and $a_R$ defined by decomposition $CA\ni g=\tilde{c}_L(g)a_R(g)$ (restricted to $B'$). 
\begin{align} B'& =\{(\Lambda,u,w,\alpha)\in B: \alpha\neq 0\}\label{def-B-prim}\notka{def-B-prim}\\ \tilde{c}_L(\Lambda,u,w,\alpha)& =(|\alpha|,-\sign(\alpha) u )\label{def-tcL}\notka{def-tcL}\\ a_R(\Lambda,u,w,\alpha)& =\left(-\frac{\sign (\alpha) u}{1+|\alpha|}, \,(\Lambda-\frac{\sign(\alpha)}{1+|\alpha|} u w^t)D, \,\sign(\alpha)\right)\label{def-aR} \notka{def-aR} \end{align} Finally, using the definition (\ref{def-twist}) and the formulae above we obtain the twist $T$: \begin{equation*}\label{def-twist-kappa} T=\{(\Lambda_1,u_1,w_1,\alpha_1;\frac{1}{|\alpha_2|},\frac{u_2}{\alpha_2};;\Lambda_2,u_2,w_2,\alpha_2;1,0)\in G_B\times G_B\,:\,\alpha_2\neq 0\}\notka{def-twist-kappa} \end{equation*} \section{The $C^*$-algebra} In this section we will use Prop.~\ref{prop-Delta} to get the $C^*$-algebra and the comultiplication. To this end, the conditions listed in Assumptions \ref{basic-assumpt} have to be verified. The first one is clear and the second was proven in \cite{PS-triple}; it remains to check $(3)$ and $(4)$. \noindent {\bf\small Condition \ref{basic-assumpt} (3) \,} Let $U:=b_L^{-1}(B')$ and $\sA(U)$ be the linear space of elements from $\sA(G_B)$ supported in $U$. We will show that $\sA(U)$ is dense in $C^*_r(G_B)$.\\ Let $B'$ be as in (\ref{def-B-prim}) and define $B'':=\{(\Lambda,u,w,\alpha)\in B: \alpha\neq 1\}$; then $B=B'\cup B''$, i.e. $\{B',B''\}$ is an open cover of $B$, and $b_L^{-1}(B'')=\Gamma_0$. Let $\tilde{\chi}',\tilde{\chi}''$ be a partition of unity subordinated to the cover $\{B',B''\}$, $\chi'(\gamma):=\tilde{\chi}'(b_L(\gamma))$ and $\chi''(\gamma):=\tilde{\chi}''(b_L(\gamma))$. For $\om\in\sA(G_B): \om=\chi'\om+\chi''\om$ and $\chi'\om\in\sA(U)\,,\,\chi''\om\in\sA(\Gamma_0)$; so to prove that $\sA(U)$ is dense in $C^*_r(G_B)$ it is sufficient to show that any element in $\sA(\Gamma_0)$ can be approximated by elements from $\sA(U)$, in particular from $\sA(U\cap\Gamma_0)$.
In this way we can transfer the whole problem to $\Gamma_0$ and use its structure described in lemma \ref{lemma-GB-struct}; in this presentation $U\cap\Gamma_0=\{(K,t, x_1,x_2)\in O(n)^-\times \R_+\times \R^n\times \R^n\, :|x_1|\neq 1\}$. Let $dK$ denote a measure on $O(n)^-$ defined by (the square of) some smooth, positive, non vanishing half-density, let $dx_1$ be the Lebesgue measure on $\R^n$ and let $\frac{d s}{s}$ be the Haar measure on $\R_+$. Then $L^2(\Gamma_0)$ can be identified with $L^2\left(O(n)^-\times \R_+\times \R^n\times \R^n; dK \frac{d s}{s} d x_1 d x_2\right)$ and $\sA(\Gamma_0)$ acts on $L^2(\Gamma_0)$ by:\notka{def-rep-id-GB} \begin{equation}\label{def-rep-id-GB} (\pi(f)\Psi)(K,t,x_1,x_2):=(f *\Psi)(K,t,x_1,x_2):=\int \frac{d s}{s}\, d y\, f(K,s,x_1,y)\Psi(K,t/s,y,x_2); \end{equation} note that it is sufficient to consider $\Psi\in\sD(\Gamma_0)$. We will prove the following estimate: \notka{lemma-basic-estimate} \begin{lem}\label{lemma-basic-estimate} Let $f$ be a continuous function supported in a product of compact sets $L\times M\times N\times R \subset O(n)^-\times \R_+\times \R^n\times \R^n$. Then the norm of the operator $\pi(f)$ given by (\ref{def-rep-id-GB}) satisfies: \begin{equation}\label{eq-basic-estimate}||\pi(f)||\leq \sup|f|\, \nu(M)\,\sqrt{\mu(N)\mu(R)}, \end{equation} where $\nu$ denotes the Haar measure on $\R_+:\nu(M):=\int_M \frac{d s}{s}$ and $\mu$ the Lebesgue measure on $\R^n$. \end{lem} For $\epsilon > 0$ let $\sO_\epsilon$ be an open neighborhood of the unit sphere in $\R^n$ with $\mu(\sO_\epsilon)\leq\epsilon$; let $\tilde{\chi}_\epsilon$ be a smooth function supported in $\sO_\epsilon$ such that $0 \leq \tilde{\chi}_\epsilon \leq 1$, $\tilde{\chi}_\epsilon =1$ on the unit sphere, and let the function $\chi_\epsilon$ on $\Gamma_0$ be defined by $\chi_\epsilon(K,t,x_1,x_2):=\tilde{\chi}_\epsilon (x_1)$. Let $f\in\sD(\Gamma_0)$ be supported in $L\times M\times N\times R$.
We have $f=(f-\chi_\epsilon f)+\chi_\epsilon f$ and $(f-\chi_\epsilon f )\in \sD(U\cap\Gamma_0)$. By the lemma \ref{lemma-basic-estimate} $$||\pi(\chi_\epsilon f)||\leq \sup|f| \nu(M) \sqrt{\mu(R)} \sqrt{\mu(\sO_\epsilon\cap N)}\leq \sup|f| \nu(M) \sqrt{\mu(R)} \sqrt{\epsilon} $$ Thus $f$ can indeed be approximated by elements from $\sD(U\cap\Gamma_0)$. It remains to prove lemma \ref{lemma-basic-estimate}. {\em Proof of lemma \ref{lemma-basic-estimate}:} We apply the Schwarz inequality several times. Let $\Psi$ be smooth and compactly supported; by (\ref{def-rep-id-GB}): $$(\Psi|f*\Psi)=\int dK \frac{d t}{t} d x_1 d x_2\, \overline{\Psi(K,t,x_1,x_2)}\int\frac{d s}{s} d y\,f(K,s,x_1,y)\Psi(K,t/s,y,x_2)$$ and \begin{equation}\label{norm-est} |(\Psi|f*\Psi)|\leq \int dK \frac{d t}{t} d x_1 d x_2\,|\Psi|(K,t,x_1,x_2)\int\frac{d s}{s} d y\,|f|(K,s,x_1,y)|\Psi|(K,t/s,y,x_2) \end{equation} We write the integral as an iterated integral: $\int dK d x_2 \int d x_1 \int d y \,\int \frac{d t}{t}\,\int\frac{d s}{s}$. For fixed $(K,y,x_1,x_2)$ let us define the functions: $\displaystyle\Psi_1:\R_+\ni t\mapsto \Psi_1(t):=|\Psi(K,t,x_1,x_2)|\,$ $$f_1:\R_+\ni t\mapsto f_1(t):=|f(K,t,x_1,y)|\,\,;\,\,\Psi_2:\R_+\ni t\mapsto \Psi_2(t):=|\Psi(K,t,y,x_2)|$$ With these definitions we have the estimate: $$\int \frac{d t}{t}\,|\Psi|(K,t,x_1,x_2)\,\int\frac{d s}{s}|f|(K,s,x_1,y)|\Psi|(K,t/s,y,x_2)=|(\Psi_1|f_1 *\Psi_2)| \leq ||\Psi_1||_2||\Psi_2||_2||f_1||_1,$$ where the scalar product is in $L^2(\R_+,\frac{d s}{s})$ and the $||\cdot||_2$ norms refer to this space, $*$ is the convolution in $\R_+$ and $||f_1||_1$ is the $L^1$ norm; these norms are continuous functions of the remaining variables.\\ Let us now define (continuous, compactly supported) functions $ \tilde{\Psi}_1,\,\tilde{\Psi}_2,\, \tilde{f}_1 : O(n)^-\times \R^n\times \R^n\rightarrow \R $: $$\tilde{\Psi}_1(K,x_1,x_2):=||\Psi_1||_2=\left[\int\,\frac{d t}{t} |\Psi(K,t,x_1,x_2)|^2\right]^{1/2},\,\tilde{\Psi}_2(K,x_2,y):=||\Psi_2||_2$$ and
$$\tilde{f}_1(K,x_1,y):=||f_1||_1=\int \frac{d t}{t}\, |f(K,t,x_1,y)|.$$ The right hand side of (\ref{norm-est}) is estimated by: $$\int dK d x_2\int d x_1\,\tilde{\Psi}_1(K,x_1,x_2)\,\int d y \,\tilde{f}_1(K,x_1,y)\tilde{\Psi}_2(K,x_2,y)$$ By the Schwarz inequality for the $y$-integration we get an estimate for the integral above by: \begin{equation}\label{norm-est1} \int dK d x_2 \int d x_1 \tilde{\Psi}_1(K,x_1,x_2) \left[\int d y \,(\tilde{f}_1(K,x_1,y))^2\right]^{1/2}\, \left[\int d y \,(\tilde{\Psi}_2(K,x_2,y))^2\right]^{1/2} \end{equation} Again let us denote $\tilde{f}_2(K,x_1):=\left[\int d y \,(\tilde{f}_1(K,x_1,y))^2\right]^{1/2}$ and $\tilde{\Psi}_3(K,x_2):=\left[\int d y \,(\tilde{\Psi}_2(K,x_2,y))^2\right]^{1/2}$. The integral (\ref{norm-est1}) reads $$\int dK d x_2 \, \tilde{\Psi}_3(K,x_2)\,\int d x_1 \, \tilde{f}_2(K,x_1)\,\tilde{\Psi}_1(K,x_1,x_2).$$ Applying the Schwarz inequality again, we can estimate it by: $$\int dK d x_2 \, \tilde{\Psi}_3(K,x_2)\,\tilde{\Psi}_4(K,x_2)\tilde{f}_3(K),$$ where $\tilde{\Psi}_4(K,x_2):=\left[\int d x_1\,(\tilde{\Psi}_1(K, x_1,x_2))^2\right]^{1/2}$ and $\tilde{f}_3(K):=\left[\int d x_1\,(\tilde{f}_2(K, x_1))^2\right]^{1/2}$.\\ Finally, we estimate this integral by $$\sup|\tilde{f}_3|\,\left[\int dK d x_2 (\tilde{\Psi}_4(K,x_2))^2\right]^{1/2}\, \left[\int dK d x_2 (\tilde{\Psi}_3(K,x_2))^2\right]^{1/2}$$ (again the Schwarz inequality was used). But $$\int dK d x_2 (\tilde{\Psi}_4(K,x_2))^2=\int dK d x_2 \int d x_1\,(\tilde{\Psi}_1(K, x_1,x_2))^2=$$ $$=\int dK d x_2 \int d x_1\,\int \frac{d t}{t}\,|\Psi|^2(K,t,x_1,x_2)=\|\Psi\|^2$$ and, in a similar way, $$\int dK d x_2 (\tilde{\Psi}_3(K,x_2))^2= \|\Psi\|^2.$$ Thus we get the inequality $|(\Psi|f*\Psi)|\leq \sup|\tilde{f}_3| \|\Psi\|^2$, therefore $||\pi(f)||\leq \sup|\tilde{f}_3|$.
$$|\tilde{f}_3(K)|^2=\int d x_1 (\tilde{f}_2(K,x_1))^2=\int d x_1\int d y\left[\int \frac{d s}{s} |f(K,s,x_1,y)|\right]^2\leq$$ $$\leq\int_N d x_1\int_R d y \,(\sup|f|)^2\nu^2(M)=(\sup|f|)^2\nu^2(M) \mu(N) \mu(R)$$ This is the estimate (\ref{eq-basic-estimate}) and the lemma is proven. \\\dowl \noindent This way by Prop. \ref{prop-Delta} (a) we obtain the equality of $C^*$-algebras: \begin{equation}\cred(\Gamma_A)=\cred(\G_{B'})=\cred(G_B).\end{equation} \noindent To get the twist and the comultiplication we have to verify:\\ {\bf\small Condition \ref{basic-assumpt} (4) \,} For $M>1$ and $1>\delta>\epsilon>0$ let us define compact sets $K_M\subset C, K_\delta\subset B'$ and an open neighborhood $V_\epsilon$ of $B\setminus B'$ in $B$: $$K_M:=\{(s,y)\in C: \frac{1}{M}\leq s\leq M\,,\,|y|\leq M\}\,,\,\,K_\delta:=\{(\Lambda,u,w,\alpha)\in B: |\alpha|\geq \delta\}\,,$$ $$V_\epsilon:=\{(\Lambda,u,w,\alpha)\in B : |\alpha| < \epsilon\}.$$ We will show that for any $M>1$ any $0<\delta<1$ and any $0< \epsilon< \delta$ there exists $\epsilon'$ such that $\mu(B,K_\delta,K_M;V_{\epsilon'})<\epsilon$ (notation as in Assumptions \ref{basic-assumpt}.) Since any compact in $B'$ is contained in some $K_\delta$ and any compact in $C$ is contained in some $K_M$, this is sufficient. Let $b=(\Lambda, u, w, \alpha)$ and $b_1=(\Lambda_1,u_1,w_1,\alpha_1)\in K_\delta$; we want to find the set $\displaystyle Z(b,b_1,K_M;V_\epsilon):=K_M\cap \{c\in C: b_R(b c)b_1\in V_\epsilon\}.$ Let $c=(s,y)\,,\,\,b_R(b c)=:(\tilde{\Lambda},\tilde{u},\tilde{w},\tilde{\alpha})$ and $b_R(b c) b_1=:(\Lambda_2,u_2,w_2,\alpha_2)$. \noindent Then $\alpha_2=\tilde{w}^t u_1 +\tilde{\alpha}\alpha_1$ and using solutions of eq. 
(\ref{eq-iwasawa}) we get:\\ $\displaystyle \tilde{\alpha}=\left\{\begin{array}{cc} \frac{|r|^2-1}{|r|^2+1} & \alpha\neq 1\\1 & \alpha=1\end{array}\right.\,$, $\,\displaystyle\tilde{w}=\left\{\begin{array}{cc} \frac{2 r}{1+|r|^2} & \alpha\neq 1\\ 0& \alpha=1\end{array}\right.\,$, $\,\displaystyle\alpha_2=\left\{\begin{array}{cc}\frac{2 r^t u_1+\alpha_1(|r|^2-1)}{1+|r|^2} & \alpha\neq 1\\ \alpha_1 & \alpha=1\end{array}\right.$,\\ \noindent where $r:=\frac{s w}{1-\alpha} -y$. \noindent Now we solve for $(s,y)$ the inequality $|\alpha_2|< \epsilon$ with the additional assumption $0<\epsilon<\delta$; \noindent There is no solution for $\alpha=1$ and for $\alpha\neq 1$ we have: $$-\epsilon(1+|r|^2)<2 r^t u_1+\alpha_1(|r|^2-1)<\epsilon(1+|r|^2)$$ after some manipulation we get for $\alpha_1>0$: $$|r+\frac{u_1}{\alpha_1-\epsilon}|^2< \frac{1-\epsilon^2}{(\alpha_1-\epsilon)^2}\,\,\,{\rm and}\,\,\, |r+\frac{u_1}{\alpha_1+\epsilon}|^2> \frac{1-\epsilon^2}{(\alpha_1+\epsilon)^2}$$ and for $\alpha_1<0$: $$|r+\frac{u_1}{\alpha_1-\epsilon}|^2> \frac{1-\epsilon^2}{(\alpha_1-\epsilon)^2}\,\,\,{\rm and}\,\,\, |r+\frac{u_1}{\alpha_1+\epsilon}|^2< \frac{1-\epsilon^2}{(\alpha_1+\epsilon)^2}.$$ \noindent Both situations can be described uniformly as: $$\left|r+\frac{sgn(\alpha_1)}{|\alpha_1|-\epsilon} u_1 \right|^2<\frac{1-\epsilon^2}{(|\alpha_1|-\epsilon)^2}\,\,\,{\rm and}\,\,\, \left|r+\frac{sgn(\alpha_1)}{|\alpha_1|+\epsilon} u_1\right|^2> \frac{1-\epsilon^2}{(|\alpha_1|+\epsilon)^2}$$ or in terms of $(s,y)$: $$\left|\frac{s w}{1-\alpha} +\frac{sgn(\alpha_1)}{|\alpha_1|-\epsilon} u_1 - y \right|^2<\frac{1-\epsilon^2}{(|\alpha_1|-\epsilon)^2}\,\,\,{\rm and}\,\,\, \left|\frac{s w}{1-\alpha}+\frac{sgn(\alpha_1)}{|\alpha_1|+\epsilon} u_1 - y \right|^2> \frac{1-\epsilon^2}{(|\alpha_1|+\epsilon)^2}$$ For fixed $s$ this is the intersection of the (larger) ball centered at $y_1:=\frac{s w}{1-\alpha} +\frac{sgn(\alpha_1)}{|\alpha_1|-\epsilon} u_1$ with a radius 
$r_1:=\frac{\sqrt{1-\epsilon^2}}{|\alpha_1|-\epsilon}$ with the exterior of the (smaller) ball centered at $y_2:=\frac{s w}{1-\alpha} +\frac{sgn(\alpha_1)}{|\alpha_1|+\epsilon} u_1$ with a radius $r_2:=\frac{\sqrt{1-\epsilon^2}}{|\alpha_1|+\epsilon}$. Because of the inequality $$ |y_1-y_2|=\left(\frac{1}{|\alpha_1|-\epsilon}- \frac{1}{|\alpha_1|+\epsilon}\right) \sqrt{1-\alpha_1^2} < \left(\frac{1}{|\alpha_1|-\epsilon}- \frac{1}{|\alpha_1|+\epsilon}\right)\sqrt{1-\epsilon^2}=r_1-r_2,$$ the smaller ball is contained in the larger one, and the volume of this intersection is equal to: $$F(n)(r_1^n-r_2^n)=F(n)\frac{(1-\epsilon^2)^{n/2}}{(|\alpha_1|-\epsilon)^n} \left(1-\left(1-\frac{2 \epsilon}{|\alpha_1|+\epsilon}\right)^n\right) \leq F(n) \frac{(1-\epsilon^2)^{n/2}}{(|\alpha_1|-\epsilon)^n} \frac{2 n\epsilon}{|\alpha_1|+\epsilon}\leq$$ $$ \leq F(n) \frac{1}{(\delta-\epsilon)^n}\frac{2 n\epsilon}{\delta}\leq 2 n F(n)\frac{\epsilon}{\delta (\delta-\epsilon)^n },$$ where $F(n)r^n$ is the volume of the $n$-dimensional ball of radius $r$. In this way we obtain $$\mu(b,b_1,K_M;V_\epsilon)=\int_{Z(b,b_1,K_M;V_\epsilon)}\frac{ds}{s} dy\leq \epsilon \,\frac{\log M}{\delta} \frac{4 n F(n) }{(\delta-\epsilon)^n}$$ i.e. $$\mu(B,K_\delta,K_M;V_\epsilon)\leq \epsilon \,\frac{\log M}{\delta} \frac{4 n F(n) }{(\delta-\epsilon)^n}$$ The right hand side goes to $0$ as $\epsilon\rightarrow 0$, and the fourth condition of (\ref{basic-assumpt}) is satisfied.\\\dowl Now, by Prop.~\ref{prop-Delta} (b), (c), we get the comultiplication $\Delta$ on $\cred(\G_{B'})=\cred(G_B)$ satisfying the density condition (\ref{density-c}). \section{Generators and relations} In the previous section the $C^*$-algebra of the quantum $\kappa$-Poincar\'{e} Group, together with its comultiplication, was defined. In this section we describe its generators, the commutation relations among them, and take a closer look at the twist.
\subsection{General formulae} Let $ u $ be a bisection of a differential groupoid $\G\rightrightarrows E$ and $f$ a smooth, bounded function on $E$. They define bounded operators on $L^2(\Gamma)$, denoted by $\hat{u}$ and $\hat{f}$, which are multipliers of $C^*_r(\Gamma)$: $\hat{u}$ acts by a push-forward of half-densities and the action of $\hat{f}$ is defined by $(\hat{f} \psi)(\gamma):=f(\el(\gamma)) \psi(\gamma)\,,\,\psi\in\omh(\G)$ \cite{DG}. These operators satisfy: \notka{bis-fun-komut1} \begin{equation}\label{bis-fun-komut1} \hat{u} \hat{f}= \widehat{f_{\scriptsize u}} \hat{u}\,\,,\,\,{\rm where\,\,} f_{\scriptsize u}(e):=f(e_L(u^{-1}e))\,,\quad e\in E \end{equation} In particular, for a one-parameter group of bisections $u_t$, we have \notka{bis-fun-komut2} \begin{equation}\label{bis-fun-komut2} \widehat{u_t} \hat{f} \widehat{u_{-t}} = \widehat{f_t}\,\,,\,\,{\rm where }\,\, f_t(e):=f(e_L(u_{-t} e)) \end{equation} For a DLG $(G;B, C)$ and $c_0\in C$, by $B c_0$ we denote the bisection of $G_B$ defined as $B c_0:=\{b c_0 :b\in B\}$. It acts on $G_B$ by \notka{Bc0-action0} \begin{equation} \label{Bc0-action0} (B c_0)(b c )= b_R(b c_0^{-1}) c_0 c = c_L^{-1}(b c_0^{-1}) b c= c_R(c_0 b^{-1}) b c \end{equation} \noindent $G_B$ is a (right) transformation groupoid $B\rtimes C$ for the action $B\times C \ni (b,c)\mapsto b_R(bc)\in B$. The $C^*$-algebra $C^*_r(G_{B})$ is the reduced crossed product $C_0(B)\rtimes_r C$ (see e.g.~\cite{PSAx}, prop.~5.2, where this identification is described in detail). If $C$ is {\em an amenable group}, which is our case (more generally, this is the case for $C=AN$ coming from an Iwasawa decomposition $G=K(AN)$), then {\em reduced and universal crossed products coincide} (\cite{DW}, Thm 7.13, p.~199).
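A finite toy illustration of the crossed-product picture (it is not part of the $\kappa$-Poincar\'{e} data): for the translation action of $\mathbb{Z}/2$ on itself one has $C(\mathbb{Z}/2)\rtimes \mathbb{Z}/2\cong M_2(\mathbb{C})$, realized on $\ell^2(\mathbb{Z}/2)$ by multiplication operators together with the translation unitary; finite groups are amenable, so the reduced and universal crossed products coincide here as well. A quick numerical check confirms that these operators span all of $M_2(\mathbb{C})$:

```python
import numpy as np
from itertools import product

# Covariant pair for the translation action of Z/2 on itself,
# represented on l^2(Z/2) = C^2 (toy model; names are ad hoc):
U = np.array([[0., 1.], [1., 0.]])     # translation unitary
D0 = np.diag([1., 0.])                 # multiplication by delta_0 in C(Z/2)
D1 = np.diag([0., 1.])                 # multiplication by delta_1

# Linear span of the words f.u with f in {delta_0, delta_1}, u in Z/2:
words = [F @ W for F, W in product([D0, D1], [np.eye(2), U])]
rank = np.linalg.matrix_rank(np.array([w.ravel() for w in words]))
assert rank == 4                       # the span is all of M_2(C)
print("C(Z/2) x| Z/2 spans M_2(C); rank =", rank)
```

Indeed $D_0,\ D_0U,\ D_1U,\ D_1$ are exactly the four matrix units, which is the finite analogue of $C_0(G)\rtimes G\cong \mathcal{K}(\ell^2(G))$.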
Moreover, since the universal $C^*$-algebra of a differential groupoid $C^*(G_{B})$ as defined in \cite{DG} is, for transformation groupoids, ``something between'' universal and reduced crossed products, for amenable groups $C$ we have $C^*(G_{B})=C^*_r(G_{B})$; thus any morphism of differential groupoids $h:G_{B} \rel \Gamma$ defines a $C^*$-morphism of the corresponding reduced and universal algebras. \begin{re} Thus our $C^*(G_B)$ has an $SO(n)$ family of ``classical points'', i.e.\ characters given by the $0$-dimensional orbits of $G_B$ described in lemma \ref{lemma-GB-struct}. \end{re} The canonical morphisms $i_{C}\in Mor(C^*(C), C^*(G_{B}))$ and $i_{B}\in Mor(C_0(B),C^*(G_{B}))$ are given by (extensions of) the following actions on $\sA(G_{B})$: $$ i_{C}(c_0)\omega:= (B c_0) \omega\,,\quad (i_{B}(f)\omega)(g):= f(b_L(g)) \omega(g)\,,\quad\quad c_0\in C, g\in G, f\in C^\infty_0(B),\omega\in \sA(G_{B})$$ If $X_1,\dots, X_m$ are generators of $C^*(C)$, $f_1,\dots, f_k$ are generators of $C_0(B)$ (in the sense of \cite{Wor2}) and $C$ is amenable, then (by the universality of the crossed product) $C^*(G_{B})$ is generated by $i_{C}(X_1),\dots$, $i_{C}(X_m)$, $i_{B}(f_1),\dots$, $i_{B}(f_k)$. In our situation, it is clear that $C_0(B)$ is generated by the matrix elements $(\Lambda,u,w,\alpha)$ in (\ref{def-B}) and, by the results of \cite{Wor2}, $C^*(C)$ is generated by any basis in $\gotc$. Thus we have \begin{prop} Let groups $B$ and $C$ be defined by (\ref{def-B}, \ref{def-C}); let $(X_0,\dots,X_n)$ be a basis in $\gotc$ and $(\Lambda,u,w,\alpha)$ be matrix elements of $B$.
Elements $(i_C(X_0), \dots, i_C(X_n), i_B(\Lambda), i_B(u), i_B(w), i_B(\alpha))$ are generators of $C^*(G_B)$.\\ \dowl \end{prop} For $\kropka{c}\in\gotc$, let $u_{\kropka{c}}$ and $X^r_{\kropka{c}}$ be, respectively, the one-parameter group of bisections and the right invariant vector field on $G_{B}$ defined by \notka{def-u-Xr-ckropka} \begin{equation}\label{def-u-Xr-ckropka} u_{\kropka{c}}(t):=B\exp(t \kropka{c}) \quad,\quad X^r_{\kropka{c}}(b c ):=\left.\frac{d}{d t}\right|_{t=0} u_{\kropka{c}}(t)(b c)=( \adc(b)(\kropka{c})) b c , \end{equation} where $\adc(g)$ is defined in (\ref{adb-adc}) (the last equality follows easily from (\ref{Bc0-action0})). Let $\kot (X^r_{\kropka{c}})$ be the projection of $X^r_{\kropka{c}}$ onto $B$ by $b_L$, i.e. \notka{def-kotwica} \begin{equation} \label{def-kotwica} \kot (X^r_{\kropka{c}})(b):=\left.\frac{d}{d t}\right|_{t=0} b_L(u_{\kropka{c}}(t)(b)) \end{equation} By a straightforward computation we get: \notka{kotwica1} \begin{equation} \label{kotwica1} \kot(X^r_{\kropka{c}})(b)=- (\rzutB Ad(b) \kropka{c})b \end{equation} \begin{prop}\label{prop-generators}\notka{prop-generators} Let $(G;B,C)$ be a DLG and $\kropka{c}\in\gotc$. Let $u_{\kropka{c}}, X^r_{\kropka{c}}$ and $ \kot (X^r_{\kropka{c}})$ be the objects defined in (\ref{def-u-Xr-ckropka}) and (\ref{def-kotwica}). \begin{enumerate} \item The one-parameter group $\widehat{u_{\kropka{c}}}$ is strongly continuous and strictly continuous (as a group of multipliers of $C^*_r(G_{B})$). Let $A_{\kropka{c}}$ be its generator. The action of $A_{\kropka{c}}$ on $\omh(G_{B})$ and on $\sA(G_{B})$ is given by $A_{\kropka{c}}=\iota \sL_{X^r_{\kropka{c}}}$ (the Lie derivative). The linear space $\omh(G_{B})$ is an essential domain for $A_{\kropka{c}}$ and $A_{\kropka{c}}$ is affiliated to $C^*_r(G_{B})$.
\item For $\kropka{c},\kropka{e}\in\gotc$, generators $A_{\kropka{c}}, A_{\kropka{e}}$, as operators on $\omh(G_{B})$ (or $\sA(G_{B})$), satisfy: \begin{equation}\label{komut-A}\notka{komut-A} [A_{\kropka{c}}, A_{\kropka{e}}]=-\iota A_{[\kropka{c},\kropka{e}]} \end{equation} \item Let $\kropka{c}\in\gotc$ and $f$ be a smooth function on $B$. As operators on $\omh(G_{B})$ (or $\sA(G_{B})$), $A_{\kropka{c}}$ and $\hat{f}$ satisfy:\notka{kros-komut} \begin{equation} \label{kros-komut} [A_{\kropka{c}}, \hat{f}]=\iota (\kot(X^r_{\kropka{c}})f)\,\widehat{} \end{equation} \end{enumerate} \end{prop}\normalsize \noindent{\em Proof:} Since $\widehat{u_{\kropka{c}}}$ is a one-parameter group of multipliers in $C^*_r(G_B)$ for its strict continuity it is sufficient to check continuity at $t=0$ of the mapping $\R\ni t\mapsto \widehat{u_{\kropka{c}}}(t) (\omega)\in C^*_r(G_B)$ for $\omega \in \sA(G_B)$. Let us choose $\om_0=\lo\mt\ro$ as in (\ref{def-lo}, \ref{def-ro})\notka{def-lo,def-ro}, then $\om= f\om_0$ for $f\in\sD(G_B)$ and we can write $\widehat{u_{\kropka{c}}}(t)(f\om_0)=:(\widehat{u_{\kropka{c}}}(t) f )\om_0$, where the function $(\widehat{u_{\kropka{c}}}(t) f)$ is given by (compare (\ref{app-Bc0-action})): \notka{Bc0-action-1} \begin{align}\label{Bc0-action-1} (\widehat{u_{\kropka{c}}}(t)f)( b c) & =f(u_{\kropka{c}}(-t) bc) \, j_C(\exp(t \kropka{c}))^{-1/2} \end{align} Since the mapping $\R\times G_B\ni (t, g)\mapsto u_{\kropka{c}}(t) g\in G_B$ is continuous and $f\in \sD(G_B)$, for $\delta>0$ and $|t|<\delta$ supports of all functions $(\widehat{u_{\kropka{c}}}(t)f)$ are contained in a fixed compact set, so by the lemma \ref{lemma-ind-lim}, it is sufficient to prove that $(\widehat{u_{\kropka{c}}}(t)f)$ converges, as $t\rightarrow 0$, uniformly to $f$. But this is clear, since everything happens in a fixed compact set and all functions and mappings appearing in (\ref{Bc0-action-1}) are smooth. 
Recall that the domain of a generator $A$ of a (strongly continuous) one-parameter group of unitaries $U_t$ on a Hilbert space is defined as the set of those vectors $\psi$ for which the limit $\displaystyle\lim_{t\rightarrow 0}(-\iota)\frac{U_t\psi-\psi}{t}$ exists, and $A\psi$ is the value of this limit. Moreover, if a dense linear subspace is contained in the domain of $A$ and is invariant for all the $U_t$'s, then it is a core for $A$. Let us choose $\nu_0$, a real, non-vanishing \halden on $B$, and let $\Psi_0=\rho_0\mt\nu_0$. This is a (real, non-vanishing) \halden on $G_B$ and any $\psi\in \omh(G_B)$ can be written as $\psi=f \Psi_0$ for $f\in \sD(G_B)$. The action of $\widehat{u_{\kropka{c}}}(t)$ can be written as $\widehat{u_{\kropka{c}}}(t)(f\Psi_0 )=:(\widehat{u_{\kropka{c}}}(t)f)\Psi_0$ and $(\widehat{u_{\kropka{c}}}(t)f)$ is given by the formula (\ref{app-Bc0-action}). Using this formula and the formulae (\ref{def-u-Xr-ckropka}, \ref{app-def-modular-functions}) one verifies that: \notka{A-formula-0} \begin{align}\label{A-formula-0} \lim_{t\rightarrow 0}\frac{1}{t}\left((\widehat{u_{\kropka{c}}}(t)f)( b c)- f(b c)\right) &= -\left((X^r_{\kropka{c}} f)( bc) + \frac{1}{2} Tr(ad(\kropka{c})|_{\gotc}) f(bc)\right), \end{align} where $ad(\kropka{c})(\kropka{e}):=[\kropka{c},\kropka{e}]\,,\,\kropka{c},\kropka{e}\in\gotc$. Again, since in the formula above we stay, for a given $f$, in a fixed compact subset and everything is smooth, the limit is in fact uniform and therefore holds also in $L^2(G_B)$.
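The term $\frac{1}{2} Tr(ad(\kropka{c})|_{\gotc})$ in (\ref{A-formula-0}) is the half-density (Jacobian) correction which makes $A_{\kropka{c}}$ symmetric. Its role can be illustrated on the toy flow $x\mapsto e^t x$ on $L^2(\R,dx)$: there $(U_tf)(x)=e^{-t/2}f(e^{-t}x)$ is unitary and the generator acts as $\iota(x\frac{d}{dx}+\frac{1}{2})$, with $\frac{1}{2}$ playing the role of the trace term. A sympy sketch (the test functions are arbitrary):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2)                        # an arbitrary rapidly decaying function
g = (x + 1) * sp.exp(-x**2 / 2)          # another one

# (U_t f)(x) = exp(-t/2) f(exp(-t) x) is unitary on L^2(R, dx):
norm0 = sp.integrate(f**2, (x, -sp.oo, sp.oo))
for tv in [sp.Rational(1, 2), 1, 2]:
    Utf = sp.exp(-sp.S(tv) / 2) * f.subs(x, sp.exp(-tv) * x)
    assert sp.simplify(sp.integrate(Utf**2, (x, -sp.oo, sp.oo)) - norm0) == 0

# D = x d/dx + 1/2 is skew-symmetric (the +1/2 is the half-density
# correction), so A = i*D is symmetric:
D = lambda h: x * sp.diff(h, x) + h / 2
pairing = sp.integrate(D(f) * g + f * D(g), (x, -sp.oo, sp.oo))
assert sp.simplify(pairing) == 0
print("toy half-density correction verified")
```

Without the $\frac{1}{2}$ the operator $x\frac{d}{dx}$ alone is not skew-symmetric, exactly as $X^r_{\kropka{c}}$ alone would fail to give a symmetric $A_{\kropka{c}}$.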
Thus $\omh(G_B)\subset Dom(A_{\kropka{c}})$ and \notka{A-formula-1} \begin{equation}\label{A-formula-1} A_{\kropka{c}}(f\,\Psi_0)=:A_{\kropka{c}}(f) \,\Psi_0 \,,\,\,A_{\kropka{c}}(f):=\iota\left(X^r_{\kropka{c}} f + \frac{1}{2} Tr(ad(\kropka{c})|_{\gotc}) f\right) \end{equation} Clearly $\omh(G_B)$ is dense in $L^2(G_B)$ and invariant under $\widehat{u_{\kropka{c}}}(t)$, so it is a core for $A_{\kropka{c}}$.\\ Since $\widehat{u_{\kropka{c}}}(t)(\psi)$ is a {\em push-forward} by the flow of $X^r_{\kropka{c}}$, the limit $\displaystyle \lim_{t\rightarrow 0}\frac{1}{t}\left((\widehat{u_{\kropka{c}}}(t)\psi)(g) -\psi(g)\right)$ is equal to $- \sL_{X^r_{\kropka{c}}} \psi$ and, consequently, $A_{\kropka{c}}\psi= \iota \sL_{X^r_{\kropka{c}}}\psi$. Let us prove (\ref{komut-A}). The mapping $C\times G_B\ni(c_0,g)\mapsto (B c_0) g\in G_B$ is a left action of $C$. For $\kropka{c}\in \gotc$, the fundamental vector field for this action is defined as $\displaystyle X_{\kropka{c}}(g):= \left.\frac{d}{d t}\right|_{t=0} u_{\kropka{c}}(-t)(g)$ and the map $\kropka{c}\mapsto X_{\kropka{c}}$ is a Lie algebra homomorphism (see e.g.~\cite{Lie-Mar}). Comparing with (\ref{def-u-Xr-ckropka}) we see that $X^r_{\kropka{c}}=(-1) X_{\kropka{c}}$. Since $A_{\kropka{c}}=\iota \sL_{X^r_{\kropka{c}}}$, the formula (\ref{komut-A}) follows. The formula (\ref{kros-komut}) is a direct consequence of (\ref{bis-fun-komut2}) and (\ref{def-kotwica}).\\ \dow \subsection{Commutation relations for $\kappa$-Poincar\'{e}} Let us consider the one-parameter group $\,c(t):=\exp (t \Mlambda_{0(n+1)})$; in $(s,y)$ coordinates $c(t)=(s(t), y(t)):=(e^{-t},0)$. Let $S(t)=B c(t)$ be the corresponding group of bisections i.e. \begin{equation*} S(t):=\{ (\Lambda, u, w,\alpha;e^{-t},0)\,,\, (\Lambda, u, w,\alpha)\in B\,,\,t\in \R\}.
\end{equation*} \noindent Define $\displaystyle (\tilde{\Lambda}(t),\tilde{u}(t),\tilde{w}(t),\tilde{\alpha}(t)):=b_R(\Lambda, u, w,\alpha;e^{t},0)$ and let $\hat{S}$ be the generator of $\widehat{S(t)}$. By (\ref{Bc0-action0},\ref{def-kotwica}) and (\ref{kros-komut}) we get: $$[\hat{S},\hat{Q}](\Lambda,u, w ,\alpha)=\iota \left(\left.\frac{d}{d t}\right|_{t=0} \tilde{Q}(t)\right)^{\widehat{}}\,,\, Q=\Lambda,u, w,\alpha$$ By (\ref{solution-iwasawa1}) we have \begin{align*} \tilde{\Lambda}(t) &= \Lambda -\frac{\sinh t}{\cosh t +\alpha \sinh t} u w^t & \tilde{\alpha}(t) &= \frac{\alpha\cosh t+\sinh t}{\cosh t+\alpha\sinh t} \\ \tilde{u}(t) &= \frac{u}{\cosh t+\alpha\sinh t} & \tilde{w}(t) &= \frac{w}{\cosh t+\alpha\sinh t}, \end{align*} and by differentiation we obtain commutation relations (as operators on $\sA(G_B)$ or $\omh(G_B)$): \notka{relkom-S} \begin{align}\label{relkom-S} [\hat{S},\hat{\Lambda}] &= -\iota \widehat{u w^t} & [\hat{S},\hat{w}] &= -\iota \hat{\alpha} \hat{w} & \,[\hat{S},\hat{u}] &= -\iota \hat{\alpha} \hat{u} & [\hat{S},\hat{\alpha}] &= -\iota (\hat{\alpha}^2 -1) \end{align} or with indices put explicitly:\notka{relkom-S-1} \begin{align}\label{relkom-S-1} [\hat{S},\widehat{\Lambda_{kl}}] &= -\iota \widehat{u_k}\widehat{w_l} & [\hat{S},\widehat{w_k}] &= -\iota \hat{\alpha} \widehat{w_k} & \,[\hat{S},\widehat{u_k}] &= -\iota \hat{\alpha} \widehat{u_k} & [\hat{S},\hat{\alpha}] &= -\iota (\hat{\alpha}^2 -1) \end{align} Now, for $y_0\in \R^n$, let $c(t):=\exp (t \Mlambda(y_0))$, where, as in (\ref{def-wsp-sy}), $\Mlambda(y_0):=\sum_{i=1}^n(y_0)_i (\Mlambda_{i(n+1)}-\Mlambda_{i 0})$); or in $(s,y)$ coordinates: $c(t)=(s(t),y(t))= (1,t y_0)$. Let $ Y_0(t)$ be the corresponding one-parameter group of bisections $B c(t)$; as before we put $\displaystyle (\tilde{\Lambda}(t),\tilde{u}(t),\tilde{w}(t),\tilde{\alpha}(t)):=b_R(\Lambda, u, w, \alpha;1, -ty_0)$. 
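Before computing the action of $Y_0(t)$, let us note that the differentiations behind (\ref{relkom-S}) are mechanical and can be machine-checked: by the formula $[\hat{S},\hat{Q}]=\iota \left(\left.\frac{d}{d t}\right|_{t=0} \tilde{Q}(t)\right)^{\widehat{}}$, the derivatives at $t=0$ of the explicit expressions for $(\tilde{\Lambda}(t),\tilde{u}(t),\tilde{w}(t),\tilde{\alpha}(t))$ determine the commutators. A sympy sketch for $n=2$ (it checks only the differentiation, not the operator statements):

```python
import sympy as sp

t, a = sp.symbols('t alpha', real=True)
u = sp.Matrix(sp.symbols('u0 u1', real=True))     # u as a column vector
w = sp.Matrix(sp.symbols('w0 w1', real=True))
L = sp.Matrix(2, 2, sp.symbols('L00 L01 L10 L11', real=True))

den = sp.cosh(t) + a * sp.sinh(t)
Lt = L - (sp.sinh(t) / den) * (u * w.T)           # Lambda~(t)
at = (a * sp.cosh(t) + sp.sinh(t)) / den          # alpha~(t)
ut = u / den                                      # u~(t)
wt = w / den                                      # w~(t)

d0 = lambda e: sp.diff(e, t).subs(t, 0)           # d/dt at t = 0

# The derivatives must match the right hand sides of (relkom-S) / iota:
assert (d0(Lt) + u * w.T).applyfunc(sp.simplify) == sp.zeros(2, 2)
assert sp.simplify(d0(at) - (1 - a**2)) == 0      # [S,alpha] = -i(alpha^2-1)
assert (d0(ut) + a * u).applyfunc(sp.simplify) == sp.zeros(2, 1)
assert (d0(wt) + a * w).applyfunc(sp.simplify) == sp.zeros(2, 1)
print("derivatives at t=0 reproduce (relkom-S)")
```

The same routine, applied to the formulas for $(\tilde{\Lambda}(t),\tilde{u}(t),\tilde{w}(t),\tilde{\alpha}(t))$ obtained for $Y_0(t)$, reproduces (\ref{relkom-Y}).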
By (\ref{solution-iwasawa1}) we get: \begin{align*} \tilde{\Lambda}(t) &= \Lambda + \frac{t^2|y_0|^2}{2M(t)} u w^t - \frac{t(1+t w^t y_0)}{M(t)} u y_0^t - \frac{t}{M(t)}\Lambda y_0 w^t- \frac{t^2(1-\alpha)}{M(t)} \Lambda y_0 y_0^t \end{align*} \begin{align*} \tilde{\alpha}(t) &= 1-\frac{1-\alpha}{M(t)} & \tilde{u}(t) &= \frac{u+ t( w^ty_0 u+ (1-\alpha)\Lambda y_0)}{M(t)} & \tilde{w} (t) &= \frac{w+ t (1-\alpha)y_0}{M(t)}, \end{align*} $${\rm where}\,\,\,\,\,M(t):=\frac{|y_0|^2}{2}(1-\alpha)t^2 + t w^t y_0+1$$ Denoting by $\widehat{Y_0}$ the generator of $\widehat{Y_0(t)}$ we obtain commutation relations (again as operators on $\sA(G_B)$ or $\omh(G_B)$): \notka{relkom-Y} \begin{align}\label{relkom-Y}\nonumber [\widehat{Y_0},\hat{\Lambda}] &= -\iota \left(u y_0^t+\Lambda y_0 w^t\right)^{\widehat{}} & [\widehat{Y_0},\hat{\alpha}] &= \iota\left ((1- \alpha)w^t y_0\right)^{\widehat{}} \\ [\widehat{Y_0},\hat{w}] &= \iota\left((1-\alpha)y_0 -w^t y_0 w\right)^{\widehat{}} & [\widehat{Y_0},\hat{u}] &= \iota \left((1- \alpha)\Lambda y_0\right)^{\widehat{}} \end{align} or for $y_0:=e_m\in\R^n$ with corresponding $\widehat{Y_m}$ and with indices of matrix elements:\notka{relkom-Y-1} \begin{align}\label{relkom-Y-1}\nonumber [\widehat{Y_m},\widehat{\Lambda_{kl}}] &= -\iota \left(\widehat{u_k} \delta_{ml}+\widehat{\Lambda_{km}}\widehat{w_l}\right) & [\widehat{Y_m},\hat{\alpha}] &= \iota (1- \hat{\alpha})\widehat{w_m} \\ [\widehat{Y_m},\widehat{w_k}] &= \iota\left((1-\hat{\alpha})\delta_{mk} -\widehat{w_m}\widehat{w_k}\right) & [\widehat{Y_m},\widehat{u_k}] &= \iota (1- \hat{\alpha})\widehat{\Lambda_{km}} \end{align} For completeness let us write relations between $\hat{S}$ and $\widehat{Y_0}$. 
Since $\hat{S}$ ($\widehat{Y_0}$) is the generator of the group $\widehat{u_{\kropka{c}}}$ for $\kropka{c}=\Mlambda_{0(n+1)}$ ($\kropka{c}=\Mlambda(y_0)$), by the formulae (\ref{komut-A}) and (\ref{lambda-komut}) we obtain:\notka{relkom-S-Y} \begin{align}\label{relkom-S-Y} [\hat{S}, \widehat{Y_0}] &= - \iota \widehat{Y_0}\,,& [\widehat{Y_1}, \widehat{Y_0}] &=0, \end{align} where $\widehat{Y_1}$ is defined in the same way as $\widehat{Y_0}$ for a vector $y_1\in\R^n$. Our generators $\hat{S}$ and $\hat{Y}_k$ are related, via the mapping $\gotc\ni \kropka{c}\mapsto A_{\kropka{c}}$ used in the previous subsection, to the following basis in $\gotc$: \notka{def-generator-basis} \begin{equation} \label{def-generator-basis} (\hat{S}, \hat{Y}_k)\leftrightsquigarrow (\kropka{c}_0,\kropka{c}_k) :=(\Mlambda_{0(n+1)},\Mlambda_{k(n+1)}-\Mlambda_{k 0})\,,\quad k=1, \dots, n \end{equation} \subsection{Formulae for comultiplication} Now we consider again a general DLG $(G;B,C)$ together with the relation $\delta_0:G_B\rel G_B\times G_B$ defined in (\ref{def-delta0}) i.e. $\displaystyle \delta_0=\{(b_1^{-1}c_L(b_1 b c), b_1 b c; b c): b,b_1\in B, c\in C\}$. It defines the mapping $\hat{\delta}_0$ given by the formula (\ref{app-hat-delta0-form}), which extends to the coassociative $\Delta_0\in Mor(C^*_r(G_B), C^*_r(G_B)\mt C^*_r(G_B))$ \cite{PS-DLG}. For a smooth, bounded function $f$ on $B$, let $\Delta_B(f)$ be the value of the comultiplication of the group $B$ on $f$, i.e. $\Delta_B(f) (b_1,b_2):=f(b_1 b_2)$.
Let $\hat{f}$ be the multiplier of $C^*_r(G_B)$ defined by $f$; the formula (\ref{app-hat-delta0-form}) implies \notka{delta0-function} \begin{align}\label{delta0-function} \Delta_0(\hat{f})&=\widehat{\Delta_B(f)} \end{align} One easily computes the image of a bisection $B c_0$ by $\delta_0$: \begin{equation*} \delta_0(B c_0)=\{(b_1c_L(b c_0), b c_0) : b, b_1\in B\} \end{equation*} and its action on $G_B\times G_B$: \notka{delta0-bis} \begin{equation}\label{delta0-bis} \begin{split} \delta_0(B c_0)(b_1 c_1, b_2 c_2) & =(c_L^{-1}(b_1 c_L(b_2 c_0^{-1})) b_1 c_1, c_L^{-1}(b_2 c_0^{-1}) b_2 c_2)=\\ & = (b_R(b_1 c_L(b_2 c_0^{-1}) )c_L^{-1}(b_2 c_0^{-1}) c_1, b_R(b_2c_0^{-1}) c_0 c_2)=\\ & = (c_R(c_R(c_0b_2^{-1}) b_1^{-1}) b_1 c_1, c_R(c_0b_2^{-1}) b_2 c_2) \end{split} \end{equation} For $\kropka{c}\in\gotc$, let $u_{\kropka{c}}$ be the one-parameter group of bisections defined in (\ref{def-u-Xr-ckropka}) and define $\tilde{u}_{\kropka{c}}(t):=\delta_0(u_{\kropka{c}}(t))$; this is a one-parameter group of bisections of $G_B\times G_B$. Let $\tilde{X}^r_{\kropka{c}}$ be the corresponding right invariant vector field on $G_B\times G_B$ (compare (\ref{def-u-Xr-ckropka})). By (\ref{delta0-bis}): $$\tilde{X}^r_{\kropka{c}}(b_1 c_1, b_2 c_2)=\left.\frac{d}{d t}\right|_{t=0} (\tilde{c}_1(t) b_1 c_1,\tilde{c}_2(t) b_2 c_2),\, {\rm where} $$ $$\tilde{c}_1(t) := c_R( b_1 c_R( b_2 \exp( t\kropka{c}) b_2^{-1}) b_1^{-1}) \,,\,\,\tilde{c}_2(t) =c_R(b_2\exp(t\kropka{c})b_2^{-1}),$$ and \notka{delta0-Xr} \begin{equation}\label{delta0-Xr} \tilde{X}^r_{\kropka{c}}(b_1 c_1, b_2 c_2)=(\adc(b_1)\adc(b_2)(\kropka{c}) b_1 c_1,\,\adc(b_2)(\kropka{c}) b_2 c_2) \end{equation} Let $(X_\alpha)$ be a basis in $\gotc$ , $u_\alpha,\, A_\alpha,\, X^r_\alpha$ be corresponding one-parameter groups, their generators and right-invariant vector fields respectively, and let $\tilde{u}_\alpha:=\delta_0(u_\alpha)$ with corresponding $\tilde{A}_\alpha$ and $\tilde{X}^r_\alpha$. 
The formula (\ref{delta0-Xr}) reads (after the identification $T_{(x,y)}(X\times Y)=T_xX\oplus T_yY$): \notka{delta0-Xr-1} \begin{equation}\label{delta0-Xr-1} \tilde{X}^r_\alpha(b_1 c_1, b_2 c_2)=\sum_\beta \adc_{\beta \alpha}(b_2) X^r_\beta( b_1 c_1) + X^r_\alpha( b_2 c_2), \end{equation} where the functions $ \adc_{\beta \alpha}: B\rightarrow \R$ are the matrix elements of $\adc|_B$ (\ref{adb-adc}), i.e. $\adc(b)(X_\alpha)=:\sum_\beta \adc_{\beta \alpha}(b) X_\beta$. As in Prop.~\ref{prop-generators} the action of $\tilde{A}_\alpha$ on $\omh(G_B\times G_B)$ is given by ($\iota$ times) the Lie derivative with respect to $\tilde{X}^r_\alpha$. Since $\tilde{A}_\alpha=\Delta_0(A_{\alpha})$ we obtain the following equality on $\omh(G_B\times G_B)$ \notka{Delta0-gen} \begin{equation}\label{Delta0-gen} \Delta_0(A_{\alpha})=I\mt A_\alpha+ \sum_\beta A_\beta \mt \adc_{\beta \alpha} \end{equation} \begin{re} Let us comment on the meaning of the equality above. The operator $\tilde{A}_\alpha=\Delta_0(A_{\alpha})$ is essentially self-adjoint on $\omh(G_B\times G_B)$. The operator on the right hand side has an immediate meaning on $\omh(G_B)\mt\omh(G_B)$ and is symmetric there. But since elements of $\,\omh(G_B\times G_B)$ can be approximated by elements of $\omh(G_B)\mt\omh(G_B)$ in the topology of $\omh(G_B\times G_B)$ (i.e. uniformly with all derivatives on compact sets) and the operators $A_\alpha$ are differential operators, the space $\omh(G_B\times G_B)$ is in the domain of the closure of the right hand side (treated as an operator on $\,\omh(G_B)\mt\omh(G_B)$). Therefore this closure is the self-adjoint operator $\Delta_0(A_{\alpha})$. \end{re} \subsection{Comultiplication for $\kappa$-Poincar\'e} For a bisection $B c_0$ let us consider the map: $(b_1 c_1,b_2 c_2)\mapsto \delta(B c_0)(b_1 c_1,b_2 c_2)$, where the relation $\delta$ is given by (\ref{def-delta}).
By the use of that formula one finds the domain of this map -- $\{(b_1 c_1 ,b_2 c_2): b_2, b_R(b_2 c_0^{-1})\in B'\}$ and \notka{delta-bis} \begin{equation}\label{delta-bis} \begin{split} \delta(B c_0) (b_1 c_1,b_2 c_2)=& \left(b_R[b_1 \tilde{c}_L^{-1}(b_2) \tilde{c}_L(b_2 c_0^{-1})] \tilde{c}_L^{-1}(b_2 c_0^{-1})\tilde{c}_L(b_2) c_1,\, b_R(b_2 c_0^{-1})c_0 c_2\right)=\\ =&\left( c_L^{-1}(b_1 \tilde{c}_L^{-1}(b_2) \tilde{c}_L(b_2 c_0^{-1}))b_1 c_1,\,c_L^{-1}(b_2 c_0^{-1})b_2 c_2 \right). \end{split} \end{equation} Note that $b_R(b_2 c_0^{-1})c_0 c_2=(B c_0)(b_2 c_2)$ i.e. the action on the right ``leg'' is the action of the bisection $B c_0$. We will use this expression to get the comultiplication for generators. Let $c_0:=c(t):=\exp(t \kropka{c})$ for $\kropka{c}\in\gotc$. Since $B'$ is open in $B$, for fixed $b_2\in B'$ the right hand side of (\ref{delta-bis}) is well defined for $t$ sufficiently close to $0$; therefore we can define a vector field on $G\times B'C$ which we denote by $\delta(X^r_{\kropka{c}})$. It is given by:\notka{delta-Xr} \begin{align*}\label{delta-Xr} \delta(X^r_{\kropka{c}})(b_1 c_1, b_2 c_2) &:=\left.\frac{d}{d t}\right|_{t=0} (c_1(t) b_1 c_1,c_2(t)b_2 c_2), & \\\nonumber c_1(t)& :=c_L^{-1}(b_1\tilde{c}_L^{-1}(b_2)\tilde{c}_L(b_2\exp(-t \kropka{c})))\,,& c_2(t) &:=c_L^{-1}(b_2 \exp(-t \kropka{c}))\end{align*} Let $\trzutC$ be the projection onto $\gotc$ corresponding to the decomposition $\gotg=\gotc\oplus\gota$ and \notka{def-tadc} \begin{equation}\label{def-tadc} \tadc(g):=\trzutC Ad(g)|_{\gotc}. \end{equation} Computing derivatives we obtain \notka{delta-Xr-1} \begin{equation}\label{delta-Xr-1} \delta(X^r_{\kropka{c}})(b_1 c_1, b_2 c_2)=\left((\adc(b_1)\tadc(a_R(b_2) \kropka{c}) b_1 c_1\,, \, (\adc(b_2)\kropka{c}) b_2 c_2\right). 
\end{equation} For a basis $(X_\alpha)$ in $\gotc$ we obtain (compare (\ref{delta0-Xr-1})): \notka{delta-Xr-2} \begin{equation}\label{delta-Xr-2} \delta(X^r_\alpha)(b_1 c_1, b_2 c_2)=\sum_\beta \tadc_{\beta \alpha}(a_R(b_2)) X^r_\beta( b_1 c_1) + X^r_\alpha( b_2 c_2). \end{equation} We may try to go one step further and write this equation in a form similar to (\ref{Delta0-gen}) as: \notka{Delta-gen} \begin{equation}\label{delta-gen} \Delta(A_\alpha) = I\mt A_\alpha + \sum_\beta A_\beta\mt (\tadc_{\beta \alpha} \cdot a_R) \end{equation} But, contrary to (\ref{Delta0-gen}), where the right hand side has a well defined meaning, here we have a rather formal expression. Certainly we have equality as operators on $\omh(G_B)\mt\omh(B'C)$, but {\em this space is not a core for the left hand side}, so it is not true that the closure of the right hand side is equal to the self-adjoint operator on the left hand side. The precise formula for the comultiplication is given in Prop.~\ref{prop-Delta}. Let us now compute the comultiplication for our generators $(\widehat{S},\widehat{Y}_k)$ (formally, in the sense of (\ref{delta-gen})). Recall that they are related (\ref{def-generator-basis}) to the following basis in $\gotc$: \begin{equation*} (\kropka{c}_\beta):=(\kropka{c}_0,\kropka{c_k}):=(\Mlambda_{0(n+1)},\Mlambda_{k(n+1)}-\Mlambda_{k 0})\,,\quad k=1, \dots, n. \end{equation*} We need to find the matrix elements of the representation $\tadc(a)$, $a\in A$, in the basis $(\kropka{c}_\beta)$. Let us denote this matrix by $\kkad(a)$: $$\tadc(a)\kropka{c}_\beta=\sum_\alpha \kkad_{\alpha\beta}(a) \kropka{c}_\alpha$$ We will use lemma \ref{lema-reps}.
It is easy to see that the orthogonal complement of $\gota$, with respect to the form $k$ defined in (\ref{app-def-k}), is \mbox{$\displaystyle \gota^\perp=span \{\Mlambda_{\beta(n+1)}\,:\,\beta=0,\dots, n\}$} and the bases $(e_\beta)$ and $(\rho_\beta)$ defined as: \notka{def-bases-e-ro} \begin{equation}\label{def-bases-e-ro} e_\beta:=\Mlambda_{\beta(n+1)}\quad,\quad\quad\rho_\beta:=k(e_\beta)\quad,\quad\quad \beta=0,\dots, n \end{equation} are orthonormal bases in $\gota^\perp$ and $\gota^0$, respectively. The projection $\trzutC:\gota^\perp \rightarrow \gotc$ acts as $\trzutC(e_\beta)=\Mlambda_{\beta (n+1)} -\Mlambda_{\beta 0}=\kropka{c}_\beta$. By lemma \ref{lema-reps}, the matrix of $\tadc(a)$ (i.e.\ the matrix $\kkad(a)$) is equal to the matrix of $\kad(a)|_{\gota^0}$ in the basis $(\rho_\alpha)$. For $a=(z,U,d)\in A$, by a direct computation (e.g.\ using the formulae in the Appendix) one gets: \notka{kad-zUd} \begin{align}\label{kad-zUd} \kkad(z,U,d)=\left(\begin{array}{ccc} d \frac{1+|z|^2}{1-|z|^2} & \frac{2}{1-|z|^2}z^tU D_1 \\ d \frac{2}{1-|z|^2} z & (I+\frac{2}{1-|z|^2}z z^t)U D_1 \end{array}\right),\quad {\rm where}\quad D_1=\left(\begin{array}{cc}d I_{n-1} & 0\\0 & 1\end{array}\right).
\end{align} Finally, using (\ref{def-aR}) one gets:\notka{kad-zUd-ar} \begin{equation}\label{kad-zUd-ar} \kkad(a_R(\Lambda,u,w,\alpha))=\left(\begin{array}{ccc} \frac{1}{\alpha} & \frac{w^t}{\alpha} \\ - \frac{u}{|\alpha|} & sgn(\alpha)( \Lambda-\frac{u w^t}{\alpha}) \end{array}\right) \end{equation} The formulae for the comultiplication on the generators are: \notka{Delta-na-generatorach} \begin{align}\label{Delta-na-generatorach} \begin{split} \Delta(\widehat{S}) &= I\mt \widehat{S}+ \widehat{S}\mt \frac1\alpha+\sum_k\widehat{Y}_k\mt \frac{- u_k}{|\alpha|}\\ \Delta(\widehat{Y}_i) &= I\mt\widehat{Y}_i+\widehat{S}\mt \frac{w_i}{\alpha} + \sum_k\widehat{Y}_k\mt sgn(\alpha)(\Lambda_{ki}-\frac{u_k w_i}{\alpha}) \end{split} \end{align} It remains to compute $\Delta$ on the generators $(\Lambda,u,w,\alpha)$. Let $f$ be a smooth and bounded function on $B$. By (\ref{def-delta}), $\Delta(f)$, as a function on $B\times B'$, is given by the formula: \notka{Delta-functions-B} \begin{equation}\label{Delta-functions-B} (\Delta f) (b_1, b_2)=f(b_R(b_1 a_R(b_2)))=f(b_R(b_1(\tilde{c}_L(b_2))^{-1}) b_2)\,,\,\,(b_1, b_2)\in B\times B' \end{equation} Using the formulae (\ref{def-tcL}) and (\ref{solution-iwasawa1}), after some computations, one gets:\notka{Delta-matrix-elem-B} \begin{align}\label{Delta-matrix-elem-B} \Delta(u_k) &= u_k\mt\sign(\alpha)+ P^{-1} \sum_l \alpha\Lambda_{kl}\mt u_l\,,\quad\Delta(w_k) =I\mt w_k+ P^{-1} \sum_l w_l\mt |\alpha| \Lambda_{lk} & \nonumber\\ \Delta(\alpha) &=P^{-1}(\alpha\mt\alpha)\,, \quad \quad\Delta(\Lambda_{kl}) =\sum_j \Lambda_{kj}\mt\Lambda_{jl} + P^{-1}\sum_{mj}\Lambda_{km} w_j\mt \sign(\alpha)u_m\Lambda_{jl}\,, & \end{align} where $\displaystyle P:=1-\sum_k w_k\mt \sign(\alpha)u_k$. Notice that $P$ is invertible on $B\times B'$ and the right hand sides are well defined (as smooth functions on $B\times B'$).
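The invertibility of $P$ can also be made plausible numerically: it would follow from $|w^t u|\leq |w|\,|u|<1$, provided the constraints $|u|^2+\alpha^2=|w|^2+\alpha^2=1$ hold on $B$ and $\alpha\neq 0$ on $B'$. These constraints are an inference from the formulas in the verification of Condition \ref{basic-assumpt} (4) (where $\tilde{\alpha}^2+|\tilde{w}|^2=1$ and $|u_1|=\sqrt{1-\alpha_1^2}$ were used), so the following check is a plausibility sketch only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def sample_point(nonzero_alpha):
    """Sample (alpha, v) with |v|^2 = 1 - alpha^2 (assumed constraint)."""
    a = rng.uniform(-0.999, 0.999)
    while nonzero_alpha and abs(a) < 1e-3:   # alpha != 0 on B'
        a = rng.uniform(-0.999, 0.999)
    v = rng.normal(size=n)
    v *= np.sqrt(1 - a**2) / np.linalg.norm(v)
    return a, v

for _ in range(10_000):
    a1, w1 = sample_point(nonzero_alpha=False)   # w-part of b1 in B
    a2, u2 = sample_point(nonzero_alpha=True)    # u-part of b2 in B'
    P = 1.0 - np.sign(a2) * (w1 @ u2)            # P evaluated at (b1, b2)
    assert P > 0.0                               # P never vanishes
print("P > 0 on all sampled points of B x B'")
```

At such points $P\geq 1-\sqrt{1-\alpha_1^2}\sqrt{1-\alpha_2^2}>0$, which is the scalar content of the invertibility claim.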
\subsection{A closer look at the twist} \newcommand{\cexp}{\exp_C} \newcommand{\clog}{\log_C} \begin{center} {\em In this subsection the groups $G,A,B,C$ fulfill the assumptions (\ref{basic-assumpt}) and $C$ is an exponential Lie group, i.e. the exponential mapping $\cexp: \gotc \rightarrow C$ is a diffeomorphism.} \end{center} Let $\clog:=\exp_C^{-1}$ and for $b\in B'$ let \notka{def-ct} \begin{equation}\label{ct-def} c_t(b):=\cexp(t \clog(\tilde{c}_L(b)^{-1}))=\cexp(- t \clog(\tilde{c}_L(b)))\,,\quad t\in\R \end{equation} be the one-parameter group defined by $-\clog(\tilde{c}_L(b))\in \gotc$ and \notka{def-Tt} \begin{equation} \label{def-Tt} T_t:=\{(b_1 c_t(b_2), b_2):b_1\in B, b_2\in B'\}\subset G_B\times\Gamma_{B'} . \end{equation} Note that $T_1$ is the twist (\ref{def-twist}). \begin{lem} $T_t$ is a one-parameter group of bisections of $G_B\times\Gamma_{B'}$. \end{lem} {\em Proof:} It is easy to verify that $T_t$ is a bisection and a submanifold for any $t\in\R$. For $s,t\in \R$: $$(b_1 c_1, b_2 c_2)\in T_t T_s \iff \exists\, b_3,b_5\in B\,, b_4, b_6\in B': (b_1 c_1, b_2 c_2)=(b_3 c_t(b_4), b_4) (b_5 c_s(b_6), b_6),$$ (on the right hand side there is the multiplication in $G_B\times \Gamma_{B'}$) so $c_2=e$ and $b_2=b_4=b_6\in B'$, and $b_1 c_1= m_B(b_3 c_t(b_6),b_5 c_s(b_6))$. Therefore $b_5=b_R(b_3 c_t(b_6))$ and $b_1 c_1=b_5 c_t(b_6)c_s(b_6)=b_5 c_{t+s}(b_6)$, i.e. $T_t T_s =\{(b_1 c_{t+s}(b_2), b_2): b_1\in B, b_2\in B'\}=T_{t+s}$. \\ \dowl \noindent Let $X_T^r$ be the right invariant vector field on $G_B\times\Gamma_{B'}$ defined by $T_t$ (compare (\ref{def-u-Xr-ckropka})) i.e.
$$X_T^r(b_1 c_1, b_2 c_2):=\left.\frac{d}{d t}\right|_{t=0} T_t(b_1 c_1, b_2 c_2)$$ Using the definition (\ref{def-Tt}) we get for $b_2\in B'$: \begin{align*}\begin{split} T_t(b_1 c_1, b_2 c_2) & = (b_R(b_1c_t(b_2)^{-1})c_t(b_2) c_1, b_2 c_2)=(c_L(b_1 c_t(b_2)^{-1})^{-1} b_1 c_1\,,\,b_2 c_2)=\\ &=( c_R(c_t(b_2)b_1^{-1}) b_1 c_1\,,\,b_2 c_2), \,\, \end{split} \end{align*} and \notka{twist-pole-0} \begin{align}\label{twist-pole-0} X_T^r(b_1 c_1, b_2 c_2) & =\left((-\adc(b_1) \log_C(\tilde{c}_L(b_2))) b_1 c_1, 0_{b_2 c_2}\right) \end{align} For a basis $(X_\alpha)$ of $\gotc$ let $D_\alpha: B'\rightarrow \R$ be the coordinates of $\clog(\tilde{c}_L(b))$:\notka{def-D} \begin{equation}\label{def-D} \clog(\tilde{c}_L(b))=:\sum_\alpha D_\alpha(b) X_\alpha. \end{equation} Using the definition (\ref{def-u-Xr-ckropka}) of $X^r_\alpha$ we can write (\ref{twist-pole-0}) as \notka{twist-pole} \begin{equation}\label{twist-pole} X_T^r(b_1 c_1, b_2 c_2)=-\sum_\alpha D_\alpha (b_2) X^r_\alpha(b_1 c_1) \end{equation} \begin{prop} Let $\widehat{T}_t$ be the one-parameter group of unitaries in $L^2(G_B\times \Gamma_{B'})=L^2(G_B)\mt L^2(\Gamma_{B'})=L^2(G_B)\mt L^2(G_B)$ defined by $T_t$. The action of $\widehat{T}_t$ on $\sA(G_B\times \Gamma_{B'})$ is given by \begin{align}\widehat{T_t}(F(\om_0\mt\om_0))&=:(\widehat{T_t}F)(\om_0\mt\om_0)\,,& (\widehat{T_t}F)(g_1,g_2)& := F(T_{-t}(g_1,g_2))\, j_C(c_{t}(b_L(g_2)))^{-1/2}, \end{align} where $\om_0=\lambda_0\mt\rho_0$ for $\lambda_0$ and $\rho_0$ defined in (\ref{def-lo},\ref{def-ro}) and $F\in \sD(G_B\times \Gamma_{B'})$.\\ Let $\sT$ be the generator of $\widehat{T}_t$.
$\sT$ is essentially self-adjoint on $\omh(G_B\times \Gamma_{B'})$ and for $F(\psi_0\mt\psi_0) \in \omh(G_B\times \Gamma_{B'})$: \notka{twist-generator} \begin{equation}\label{twist-generator} \begin{split} \sT (F(\psi_0\mt\psi_0))& =:(\sT F)(\psi_0\mt\psi_0)\,,\\ (\sT F) (b_1 c_1, b_2 c_2) &=\iota\left( (X_T^r F)(b_1c_1, b_2 c_2)-\frac{1}{2}Tr (ad(\clog(\tilde{c}_L(b_2)))|_{\gotc})F(b_1 c_1, b_2 c_2)\right) \end{split} \end{equation} where $\psi_0=\rho_0\mt\nu_0$ for some real, non-vanishing \halden $\nu_0$ on $B$. Moreover on $\omh(G_B\times \Gamma_{B'})$ \notka{twist-generator-1} \begin{equation}\label{twist-generator-1} \sT=- \sum_\alpha A_\alpha\mt D_\alpha \end{equation} \end{prop} {\em Proof:} As for any bisection, $\widehat{T_t}F$ is given by \cite{DG}: $$(\widehat{T_t}F)(T_t(g_1,g_2))=F(g_1,g_2)\frac{(\rho_0\mt\rho_0) (v g_1\mt w g_2)}{(\rho_0\mt\rho_0)(T_t(v g_1\mt w g_2))}\,, \,v,w\in\Lambda^{max}(T_eC)$$ Let $c(s) \subset C$ be a curve with $c(0)=e$. For $(g_1,g_2)\in G_B\times \Gamma_{B'}$ let $(\tilde{g}_1 , g_2):=T_t(g_1, g_2)$. By (\ref{def-Tt}) we get $$T_t(c(s) g_1, g_2)=(c_1(s) \tilde{g}_1, g_2)\quad\,\quad\quad T_t( g_1, c(s) g_2)=(c_2(s) \tilde{g}_1 , c(s) g_2),$$ for some curves $c_1(s), c_2(s)$. Identifying tangent spaces to right fibers with $T_eC$ at corresponding points, one sees that the map we have to consider has the form $\left(\begin{array}{cc} M_1 & M_2\\0 & I\end{array}\right)$, so its action on densities is determined by $M_1$, i.e. (the derivative of) $c(s)\mapsto c_1(s)$. But this is exactly the action of the bisection $Bc_t(b_L(g_2))$ (compare (\ref{Bc0-action0})) and by (\ref{app-Bc0-action}) we obtain: $$(\widehat{T_t}F)(b_1 c_1 ,b_2 c_2)= F(T_{-t}(b_1 c_1, b_2 c_2))\, j_C(c_{t}(b_2))^{-1/2}$$ Since $c_t(b_2)=\cexp(- t \clog(\tilde{c}_L(b_2)))$ we can proceed exactly as in the proof of Prop.~\ref{prop-generators} and by putting $\kropka{c}=-\clog(\tilde{c}_L(b_2))$ in formula (\ref{A-formula-1}) we obtain (\ref{twist-generator}).
As before, since $\omh(G_B\times \Gamma_{B'})$ is invariant for $\widehat{T}_t$ it is a core for $\sT$. Formula (\ref{twist-generator-1}) is a direct consequence of (\ref{twist-pole}) and (\ref{twist-generator}).\\ \dowl \subsection{Twist for $\kappa$-Poincar\'e} Let $(\kropka{s},\kropka{y})\in \R\times \R^n$ denote an element of $\gotc$. The exponential mapping $\cexp$ and its inverse $\clog$ are given by: \begin{equation*} \cexp: \gotc\ni(\dot{s},\dot{y})\mapsto (\exp(\dot{s}),\dot{y}\frac{\exp(\dot{s})-1}{\dot{s}})\in C\quad,\quad \clog: C\ni (s,y)\mapsto (\log(s),y\frac{\log(s)}{s-1})\in \gotc \end{equation*} With respect to the basis (\ref{def-generator-basis}) the decomposition is: $\displaystyle (\kropka{s}, \kropka{y})=-\kropka{s}\kropka{c}_0 + \sum_k\kropka{y}_k\kropka{c}_k$. Recall from (\ref{def-tcL}) the mapping $\tilde{c}_L: B'\ni (\Lambda,u,w,\alpha) \mapsto (|\alpha|,-sgn(\alpha) u )\in C$. Thus \notka{D-for-kappa} \begin{align}\label{D-for-kappa} \clog(\tilde{c}_L(\Lambda,u,w,\alpha)) &= (\log(|\alpha|), -sgn(\alpha) \frac{\log(|\alpha|)}{|\alpha|-1} u) = - \log(|\alpha|)\kropka{c}_0 - sgn(\alpha) \frac{\log(|\alpha|)}{|\alpha|-1}\sum_k u_k \kropka{c}_k \nonumber \\ {\rm i.e.}\quad\quad & D_0(\Lambda,u,w,\alpha) =-\log(|\alpha|)\,,\,\,\,D_k(\Lambda,u,w,\alpha) = - sgn(\alpha) \frac{\log(|\alpha|)}{|\alpha|-1} u_k \end{align} The bisection $T_t$ equals: $$T_t:=\{(b_1, |\alpha_2|^{-t}, \frac{\sign(\alpha_2)(|\alpha_2|^{-t}-1)}{1-|\alpha_2|} u_2; \Lambda_2,u_2,w_2,\alpha_2,1,0):\alpha_2\neq 0\,,\,b_1\in B\}\subset G_B\times\Gamma_{B'}$$ From (\ref{twist-generator-1}) and (\ref{D-for-kappa}) we get the formula for the twist: $$\hat{T}=\exp(\iota \sT)\,,\quad \sT=\widehat{S}\mt\log|\alpha|+\sum_k\widehat{Y}_k\mt \frac{\sign(\alpha) \log|\alpha|}{|\alpha|-1}u_k$$ \section{Comparison with relations coming from Poisson-Poincar\'{e} Group} The classical Poincar\'{e} Group, of which our $(C^*(G_B),\Delta)$ is a quantization, is $(TA)^0\subset T^*G$.
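The explicit $\exp_C$/$\log_C$ pair and the coordinates $D_\alpha$ from the preceding subsection admit a quick symbolic sanity check (a sketch assuming Python with sympy; one spatial coordinate and the $\alpha>0$ branch, so $|\alpha|=\alpha$ and $sgn(\alpha)=1$):

```python
import sympy as sp

s_dot = sp.symbols('s_dot', positive=True)
y_dot, u = sp.symbols('y_dot u', real=True)
alpha = sp.symbols('alpha', positive=True)  # alpha > 0 branch

def cexp(sd, yd):
    # exp_C(sd, yd) = (exp(sd), yd*(exp(sd)-1)/sd)
    return (sp.exp(sd), yd * (sp.exp(sd) - 1) / sd)

def clog(s, y):
    # log_C(s, y) = (log(s), y*log(s)/(s-1))
    return (sp.log(s), y * sp.log(s) / (s - 1))

# log_C o exp_C = id on the Lie algebra
ls, ly = clog(*cexp(s_dot, y_dot))
assert sp.simplify(ls - s_dot) == 0
assert sp.simplify(ly - y_dot) == 0

# c~_L(Lambda,u,w,alpha) = (|alpha|, -sgn(alpha) u) = (alpha, -u) here;
# its log_C has components (log|alpha|, -sgn(alpha)*log|alpha|/(|alpha|-1)*u).
# With the basis convention (sd, yd) = -sd*c_0 + sum_k yd_k*c_k, the first
# component corresponds to D_0 = -log|alpha|, as in (D-for-kappa).
Ds, Dy = clog(alpha, -u)
assert sp.simplify(Ds - sp.log(alpha)) == 0
assert sp.simplify(Dy + u * sp.log(alpha) / (alpha - 1)) == 0
print("exp_C/log_C are mutually inverse and reproduce the D-coordinates")
```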
After identifying $T^*G$ with the semidirect product $\gotg^*\rtimes G$ (\ref{app-semi-direct}), $(TA)^0$ is identified with $\gota^0\rtimes A$. The invariant bilinear form $k$ (\ref{app-def-k}) on $\gotg$ induces a form on $\gota^0$ which makes it a vector Minkowski space, and the action of $A$ is by orthogonal transformations. The bundle $(TA)^0$ is dual to the Lie algebroid of the groupoid $\Gamma_A$, so objects related to this groupoid have a more direct relation to functions on $\gota^0\rtimes A$ (see \cite{PS-poisson} for a detailed description). The groupoid $G_B$ was used to overcome some functional analytic problems; let us now transport our expressions to the groupoid $\Gamma_A$ to relate them to formulae in \cite{SZ-94, Kosinski-Maslanka}. \subsection{Back to $\Gamma_A$ picture} {\em Right invariant fields $X^r_{\kropka{c}}$.} By (\ref{Bc0-action0}) and (\ref{def-u-Xr-ckropka}) $X^r_{\kropka{c}}(bc)=\left.\frac{d}{d t}\right|_{t=0} b_R(b c_t^{-1})c_tc=\left.\frac{d}{d t}\right|_{t=0} c_L^{-1}(b c_t^{-1})b c$, where $c_t:=\exp(t \kropka{c}).$ If $bc\in\Gamma_{B'}$, i.e. $b, b_R(bc)\in B'$, then it is easy to see that $b_R(bc_t^{-1})\in B'$ for $t$ sufficiently close to $0$ and, consequently, $b_R(b c_t^{-1})c_tc \in \Gamma_{B'}$. So we can push $X^r_{\kropka{c}}(bc)$ to $\Gamma_A$ by the mapping (\ref{wzor-embed}) $bc\mapsto a_R(b) c$: $$b_R(bc_t^{-1})c_t c \mapsto a_R(b_R(bc_t^{-1})) c_t c=a_R(a_R(b)c_t^{-1}) (c_t a_R(b)^{-1}) a_R(b)c= \tilde{c}^{-1}_L(a_R(b)c_t^{-1}) a_R(b)c$$ since $a_R(b_R(bc_t^{-1}))=a_R(bc_t^{-1})=a_R(a_R(b)c_t^{-1})$. Let us denote the resulting (right-invariant) vector field on $\Gamma_A$ by $X^{A,r}_{\kropka{c}}$:\notka{def-Xr-kappa} \begin{align}\label{def-Xr-kappa} X^{A,r}_{\kropka{c}}(ac) &:=\left.\frac{d}{d t}\right|_{t=0} \tilde{c}^{-1}_L(a \exp(-t \kropka{c})) a c=(\widetilde{Ad^{\gotc}}(a)\kropka{c}) ac, \end{align} where $\tadc$ was defined in (\ref{def-tadc}).
{\em Anchor.} Let us transfer formula (\ref{def-kotwica}): \notka{def-kotwica-kappa} \begin{equation*} \label{def-kotwica-kappa} \begin{split} \left.\frac{d}{d t}\right|_{t=0} a_R(b_R(bc_t^{-1}))= \left.\frac{d}{d t}\right|_{t=0} a_R(a_R(b)c_t^{-1}) &=\left.\frac{d}{d t}\right|_{t=0} a_R(a_R(b)c_t^{-1}a^{-1}_R(b))a_R(b)\\ & =(-Ad^{\gota}(a_R(b))\kropka{c})a_R(b) \end{split} \end{equation*} Thus, denoting the anchor of $\Gamma_A$ by $\kot^A$, we obtain: \notka{def-kotwica-kappa-1} \begin{equation} \label{def-kotwica-kappa-1} \kot^A(X^{A,r}_{\kropka{c}})(a):=(-Ad^{\gota}(a)\kropka{c})a. \end{equation} Comparing this formula and (\ref{def-Xr-kappa}) with (\ref{kotwica1}) and (\ref{def-u-Xr-ckropka}) we see that we can immediately write commutation relations analogous to relations (\ref{komut-A}) and (\ref{kros-komut}): \notka{rel-komut-Gamma-A} \begin{align} \label{rel-komut-Gamma-A} [A^A_{\kropka{c}}, A^A_{\kropka{e}}] & =-\iota A^A_{[\kropka{c},\kropka{e}]},\,\kropka{c},\kropka{e}\in\gotc&\, [A^A_{\kropka{c}}, \hat{f}] &=\iota (\kot^A(X^{A,r}_{\kropka{c}})f)\,\widehat{}\,,\,f\in C^\infty(A). \end{align} The operators $A^A_{\kropka{c}}$ are, as before, Lie derivatives along $X^{A,r}_{\kropka{c}}$ multiplied by $i$. These relations hold on $\omh(\Gamma_A)$ (and $\sA(\Gamma_A)$); however, their status is ``weaker'' than that of relations (\ref{komut-A}) and (\ref{kros-komut}). Firstly, the operators $A^A_{\kropka{c}}$ {\em are not essentially self-adjoint} on $\omh(\Gamma_A)$; secondly, even though each smooth function on $A$ defines an element affiliated to $C^*_r(\Gamma_A)=C^*_r(G_B)$ (or even a multiplier, if $f$ is bounded), not every such function defines a smooth (or even continuous) function on $B$, and only for such functions does formula (\ref{kros-komut}) have an immediate meaning.
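The sign in the first relation of (\ref{rel-komut-Gamma-A}) reflects the general fact that $\kropka{c}\mapsto X^r_{\kropka{c}}$ is an {\em anti}-homomorphism of Lie algebras. A minimal illustration on the group $C$ itself, in one spatial dimension, assuming (consistently with the exponential map displayed earlier, though not stated explicitly in the text) the group law $(s,y)(s',y')=(ss',y+sy')$ (sympy sketch):

```python
import sympy as sp

s, y = sp.symbols('s y', positive=True)
f = sp.Function('f')(s, y)

# right-invariant vector field at (s,y) in the direction (sd, yd):
# d/dt|_0  exp_C(t(sd,yd)).(s,y) = sd*s d_s + (yd + sd*y) d_y
def Xr(sd, yd, g):
    return sd * s * sp.diff(g, s) + (yd + sd * y) * sp.diff(g, y)

X0 = lambda g: Xr(1, 0, g)   # direction (1, 0)
X1 = lambda g: Xr(0, 1, g)   # direction (0, 1)

# Lie algebra of C: [(1,0), (0,1)] = (0,1); for the right-invariant fields
# [X0^r, X1^r] = -X^r_{[(1,0),(0,1)]} = -X1^r  (anti-homomorphism)
comm = sp.expand(X0(X1(f)) - X1(X0(f)))
assert sp.simplify(comm + X1(f)) == 0
print("[X0^r, X1^r] = -X1^r")
```

Multiplying by $i$ then reproduces the pattern $[A_{\kropka{c}},A_{\kropka{e}}]=-\iota A_{[\kropka{c},\kropka{e}]}$.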
{\em Comultiplication.} By computations similar to the ones before (\ref{def-Xr-kappa}) we obtain from (\ref{delta-Xr-1}): \begin{equation*} \delta(X^{A,r}_{\kropka{c}})(a_1 c_1, a_2 c_2)=\left((\tadc(a_1)\tadc(a_2) \kropka{c}) a_1 c_1\,, \, (\tadc(a_2)\kropka{c}) a_2 c_2\right). \end{equation*} and, for a basis $(X_\alpha)$ in $\gotc$: \begin{equation} \delta(X^{A,r}_\alpha)(a_1 c_1, a_2 c_2)=\sum_\beta \tadc_{\beta \alpha}(a_2) X^{A,r}_\beta( a_1 c_1) + X^{A,r}_\alpha( a_2 c_2), \end{equation} or, as operators on $\omh(\Gamma_A\times\Gamma_A)$, we can write (compare (\ref{Delta0-gen})): \notka{Delta-gen-Gamma-A} \begin{align}\label{Delta-gen-Gamma-A} \Delta(A^A_\alpha)=I\mt A^A_\alpha+ \sum_\beta A^A_\beta \mt \tadc_{\beta \alpha} \end{align} But again, because of problems with domains, this expression is rather formal. Finally, let us check that if we transfer formula (\ref{Delta-functions-B}) we get what we expect. The formula (\ref{wzor-embed}) gives a diffeomorphism $\phi: A\rightarrow B'$: \notka{dyfeo-A-Bprim} \begin{equation}\label{dyfeo-A-Bprim} \phi(a):= b_R(a)\,,\,\, \phi^{-1}(b')=a_R(b')\,,\,\,a\in A, b'\in B' \end{equation} For $f\in C^\infty(A)$, using (\ref{Delta-functions-B}), let us compute: \begin{equation*}\begin{split} [(\phi^*\mt \phi^*)^{-1}\Delta(\phi^*f)](a_1, a_2) &= [\Delta(\phi^*f)](b_R(a_1), b_R(a_2))=(\phi^*f)(b_R[b_R(a_1)a_R(b_R(a_2))])=\\ &= (\phi^*f)(b_R(b_R(a_1)a_2))=(\phi^*f)(b_R(a_1a_2))=f(a_R(b_R(a_1a_2)))=\\ &= f(a_1 a_2)\end{split} \end{equation*} \subsection{Hopf $*$-algebra of $\kappa$-Poincar\'{e} from quantization of Poisson-Lie structure.} On a Hopf $*$-algebra level, relations for the $\kappa$-Poincar\'{e} Group were given in \cite{SZ-94} (and \cite{Kosinski-Maslanka}).
The $*$-algebra is generated by {\em self-adjoint elements} $\{a_\alpha, L_{\alpha\beta}\,,\alpha,\beta=0,1\dots,n\}$ that satisfy the following commutation relations for $\eta_{\alpha\beta}:=diag(1,-1,\dots,-1)$ and some $h\in\R$ (to compare with \cite{Kosinski-Maslanka} substitute $\kappa:=h^{-1}$): \notka{hopf-rel-pedy} \begin{align}\label{hopf-rel-pedy} [a_0,a_k] &= i h a_k\,,& \, [a_k,a_l] &=0 \,,& \,[L_{\alpha\beta}, L_{\gamma\delta}] &=0\,,& \,\eta=L^t\eta L \end{align}\notka{hopf-rel-kros} \begin{align}\label{hopf-rel-kros} [a_0, L_{00}]& =i h ((L_{00})^2-1)\,,& \, [a_k, L_{00}]& = i h (L_{00}-1)L_{k0}\nonumber\\ [a_0, L_{0m}]& = i h L_{00} L_{0m}\,,& \, [a_k, L_{0m}]& = i h (L_{00}-1)L_{km}\nonumber\\ [a_0, L_{m0}]& = i h L_{00} L_{m0}\,,& \, [a_k, L_{m0}]& = i h (L_{k0} L_{m0}-\delta_{km}(L_{00}-1))\\ [a_0, L_{mn}]& =i h L_{m0} L_{0n}\,,& \, [a_k, L_{mn}]& = i h (L_{m0} L_{kn}-\delta_{km}L_{0n})\nonumber \end{align} together with the coproduct $\Delta$:\notka{hopf-delta} \begin{align}\label{hopf-delta} \Delta(a_\alpha) &=a_{\alpha}\mt I + \sum_\beta L_{\alpha\beta}\mt a_{\beta}\,,& \Delta(L_{\alpha\beta}) &=\sum_\gamma L_{\alpha\gamma}\mt L_{\gamma\beta} \end{align} antipode $\antyp$ and counit $\counit$:\notka{hopf-antyp}\notka{hopf-counit} \begin{align} \antyp(a_\alpha) &=-\sum_\beta \antyp(L_{\alpha\beta}) a_\beta \,,& \antyp(L_{\alpha\beta}) &=(L^{-1})_{\alpha\beta}=(\eta L^t\eta)_{\alpha\beta}\label{hoph-antyp}\\ \counit(a_\alpha)&=0 \,,& \counit(L_{\alpha\beta}) &=\delta_{\alpha\beta}\label{hopf-counit} \end{align} \subsection{Comparison of formulae} Now we want to compare our generators and relations to formulae (\ref{hopf-rel-pedy} - \ref{hopf-delta}). We restrict computations only to commutation relations and comultiplication (and partially to the antipode) and are not going to investigate in detail the remaining parts of the quantum group structure on $(C^*_r(\Gamma_A),\Delta)$ (i.e.
counit, antipode and Haar weight; this can be done with the use of expressions given in \cite{PS-DLG}). Comparing (\ref{kappa-e}) and (\ref{Delta-gen-Gamma-A}) we see that, with respect to the comultiplication, our generators $(\widehat{Y}_\alpha)$ (where we put $\widehat{Y}_0:=\widehat{S}$) behave like $\antyp(a_\alpha)$. Such an identification would not be consistent with self-adjointness, however. Instead we use the unitary part of antipode (see e.g. \cite{SLW-MUQG}), which, for a quantum group defined by DLG, is implemented by the group inverse \cite{PS-DLG}. As said above, we are not going to present all functional analytic details but summarize computations in the following: \begin{lem} \label{lemat-Rgen}\notka{lemat-Rgen} Let $(\sM,\Delta)$ be a $*$-bialgebra generated by self-adjoint elements $(a_k, V_{kl}, V_{kl}^c)\,$, $k,l=1,\dots, n$ satisfying $\displaystyle (1) \,\sum_m V_{km} V^c_{lm} = \sum_m V^c_{mk} V_{ml}=\delta_{kl} I\,$, $\displaystyle (2)\,\,\Delta(V_{kl}) =\sum_m V_{km}\mt V_{ml}\,$ and $\displaystyle (3)\,\,\Delta(a_k) = a_k\mt I +\sum_l V_{kl}\mt a_l.$ Then \begin{itemize} \item[a)] $\displaystyle \sum_m V^c_{lm} V_{km} = \sum_m V_{ml} V^c_{mk} =\delta_{kl} I$ and $\displaystyle \Delta(V_{kl}^c) =\sum_m V_{km}^c\mt V_{ml}^c.$ \item[b)] Elements $\,\displaystyle A_k := -\frac12 \sum_m \left( a_m V^c_{mk}+ V^c_{mk}a_m\right)$ are self-adjoint and satisfy \begin{align} \Delta(A_k) &= I\mt A_k +\sum_m A_m\mt V^c_{mk} \end{align} \item[c)] Moreover, if $\,\displaystyle \sum_{mk}V^c_{mk} [a_m, V_{lk}] = \sum_{mk}[a_m, V_{lk}] V^c_{mk} \,$ then \notka{lemat-Rgen-eq1} \begin{align}\label{lemat-Rgen-eq1} a_k & = -\frac12 \sum_m\left( V_{km}A_m+A_m V_{km}\right) \end{align} and $\sM$ is generated by $(A_k, V_{kl}, V_{kl}^c)$. 
\end{itemize} \end{lem} \noindent {\em Proof:} a) The first equality is just $*$ applied to (1); for the second one, apply $\Delta$ to (1), use (2) and then (1); b) self-adjointness is evident and the formula for $\Delta(A_k)$ is a simple computation;\\ c) Clearly, equality (\ref{lemat-Rgen-eq1}) is sufficient for the statement about generation. Let us rewrite the assumption in (c) as: $$\sum_{mk}V^c_{mk} a_m V_{lk}- \sum_m \left(\sum_{k}V^c_{mk} V_{lk}\right) a_m = \sum_{m} a_m \left(\sum_k V_{lk} V^c_{mk}\right) - \sum_{mk} V_{lk} a_m V^c_{mk}\,$$ by (1) and (a) the expressions in brackets are $\delta_{lm} I$ and we obtain:\notka{lemat-Rgen-eq2} \begin{equation}\label{lemat-Rgen-eq2} 2 a_l = \sum_{mk}\left( V^c_{mk} a_m V_{lk} + V_{lk} a_m V^c_{mk}\right) \end{equation} Let us compute, using the definition of $A_k$ and (1): $$\sum_k V_{sk} A_k=-\frac12 \sum_{mk} V_{sk} a_m V^c_{mk} -\frac12 \sum_{mk} V_{sk} V^c_{mk}a_m=-\frac12 \sum_{mk} V_{sk} a_m V^c_{mk}-\frac12 a_s$$ In a similar way, using (a): $$\sum_k A_k V_{sk}=-\frac12 a_s - \frac12 \sum_{mk} V_{mk}^c a_m V_{sk}$$ Adding these two equalities and using (\ref{lemat-Rgen-eq2}) we get (\ref{lemat-Rgen-eq1}).
\dowl The version of the lemma above, with $a_k$ and $A_k$ interchanged, is proven in the same way:\notka{lemat-Rgen1} \begin{lem}\label{lemat-Rgen1} Let $(\sM,\Delta)$ be a $*$-bialgebra generated by self-adjoint elements $(A_k, V_{kl}^c, V_{kl})\,$, $k,l=1,\dots, n$ satisfying $\displaystyle (1) \,\sum_m V_{lm}^c V_{km} = \sum_m V_{ml} V_{mk}^c=\delta_{kl} I\,$, $\displaystyle (2)\,\Delta(V_{kl}^c) =\sum_m V_{km}^c\mt V_{ml}^c\,$ and $\displaystyle (3)\,\Delta(A_k) = I\mt A_k +\sum_l A_l \mt V_{lk}^c.$ Then \begin{itemize} \item[a)] $\,\displaystyle \sum_m V_{km} V^c_{lm} = \sum_m V^c_{mk} V_{ml}=\delta_{kl} I\,$ and $\displaystyle \,\Delta(V_{kl}) =\sum_m V_{km}\mt V_{ml}\,$; \item[b)] Elements $\,\displaystyle a_k = -\frac12 \sum_m\left( V_{km}A_m+A_m V_{km}\right)$ are self-adjoint and satisfy \begin{align} \Delta(a_k) = a_k\mt I +\sum_l V_{kl}\mt a_l. \end{align} \item[c)] Moreover, if $\,\displaystyle \sum_{mk} [V^c_{ks}, A_m] V_{km} = \sum_{mk}V_{km} [V^c_{ks}, A_m]\,$ then \notka{lemat-Rgen1-eq1} \begin{align}\label{lemat-Rgen1-eq1} A_k &= -\frac12 \sum_m \left( a_m V^c_{mk}+ V^c_{mk}a_m\right) \end{align} and $\sM$ is generated by $(a_k, V_{kl}^c, V_{kl})$. \end{itemize} \end{lem}\dowl Looking at the formulae (\ref{Delta-na-generatorach}) and using the fact that $W$ given by (\ref{kad-zUd}) is a representation of the group $A$, we use Lemma \ref{lemat-Rgen1} with $V^c=W$, where the matrix elements of $W$ are expressed by $(\Lambda,w,u,\alpha)$ as in (\ref{kad-zUd-ar}) (using the identification of $A$ with $B'$ as in (\ref{dyfeo-A-Bprim})).
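As a degenerate sanity check of the reconstruction formulae in the two lemmas above, one can consider the commutative toy case in which all matrix entries are scalars: condition (c) then holds trivially, for a real orthogonal $V$ one may take $V^c=V$, and passing $a\mapsto A\mapsto a$ must be the identity (a numpy sketch; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# scalar-entry (commutative) toy model: condition (c) of the lemma is
# automatic, and for real orthogonal V we may take Vc := V, so that
# sum_m Vc_lm V_km = delta_kl
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
Vc = V

A = rng.normal(size=n)                  # "generators" A_k, here plain numbers
a = -0.5 * (V @ A + V @ A)              # a_k = -1/2 sum_m (V_km A_m + A_m V_km)
A_rec = -0.5 * (Vc.T @ a + Vc.T @ a)    # A_k = -1/2 sum_m (a_m Vc_mk + Vc_mk a_m)

assert np.allclose(V @ Vc.T, np.eye(n))
assert np.allclose(A_rec, A)
print("commutative sanity check of the reconstruction formulas passed")
```

This of course tests only the bookkeeping of indices; the content of the lemmas lies in the noncommutative case.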
So we define:\notka{def-L} \begin{equation}\label{def-L} L:=\left(\begin{array}{cc} \frac{1}{\alpha} & - \frac{w^t}{\alpha} \\ \frac{u}{|\alpha|} & sgn(\alpha)( \Lambda-\frac{u w^t}{\alpha}) \end{array}\right) \end{equation} and \notka{def-a} \begin{equation}\label{def-a} \begin{split} a_\alpha &:= -\frac12 \sum_\beta\left( L_{\alpha\beta}\widehat{Y}_\beta+\widehat{Y}_\beta L_{\alpha\beta}\right)= -\sum_\beta L_{\alpha\beta}\widehat{Y}_\beta -\frac12 \sum_\beta [\widehat{Y}_\beta , L_{\alpha\beta}]=\\ &=: \tilde{a}_\alpha -\frac12 \sum_\beta [\widehat{Y}_\beta , L_{\alpha\beta}] \end{split} \end{equation} In the formula above and in what follows we treat $\widehat{Y}_\alpha$ (recall that $\widehat{Y}_0 :=\widehat{S}$) and the matrix elements of $L$ {\em as operators on $\omh(\Gamma_A)$ (or $\sA(\Gamma_A)$)}. \begin{re} Notice that the matrix elements of $L$, despite being non-continuous functions on $B$, are affiliated to $C^*(G_B)$ (which is equal to $C^*_r(G_B)$) because they are smooth functions on $A$ and $C^*(G_B)=C^*_r(\Gamma_A)$. \end{re} \noindent Since for $f\in C^\infty(A)$ the commutators $[\widehat{Y}_\mu , f]\in C^\infty(A)$ and functions on $A$ commute, we have $$[a_\mu , L_{\beta\gamma}]=[\tilde{a}_\mu , L_{\beta\gamma}]$$ We will need the commutators of $\widehat{Y}_\beta$ with $\frac{1}{\alpha}, sgn(\alpha)$ and $\frac{1}{|\alpha|}$. Since $\alpha\neq 0$ on $A$ and $\widehat{Y}_\beta$ are differential operators, it is clear that \notka{rel-SY-sgn} \begin{equation}\label{rel-SY-sgn} [\widehat{Y}_\beta, sgn(\alpha)]=0.
\end{equation} By this equality we have $\,\displaystyle \left[\widehat{Y}_\beta, \frac{1}{|\alpha|}\,\right]=$ $ \displaystyle \left[\widehat{Y}_\beta , \frac{sgn(\alpha)}{\alpha}\,\right]=$ $ \displaystyle sgn(\alpha) \left[\widehat{Y}_\beta, \frac{1}{\alpha}\,\right]$ and, for the last commutator, $$0=\left[\widehat{Y}_\beta, \frac{1}{\alpha} \alpha\right]=\left[\widehat{Y}_\beta, \frac{1}{\alpha}\right] \alpha+ \frac{1}{\alpha} \left[\widehat{Y}_\beta, \alpha\right] \quad \Rightarrow \quad \left[\widehat{Y}_\beta, \frac{1}{\alpha}\right]=- \frac{1}{\alpha} \left[\widehat{Y}_\beta, \alpha\right] \frac{1}{\alpha}. $$ Now, using (\ref{relkom-S-1}), (\ref{relkom-Y-1}) we obtain: \begin{align*} [\widehat{S}, \frac{1}{\alpha}\,] &= \iota(1-\frac{1}{\alpha^2})\,,&\,\,[\widehat{Y}_m, \frac{1}{\alpha}\,] &= \iota\frac{\alpha-1}{\alpha^2} w_m\,. \end{align*} Having these relations together with (\ref{relkom-S-1}), (\ref{relkom-Y-1}), by direct computation one obtains the commutators of $\widehat{Y}_\gamma$ with the matrix elements of $L$: \notka{relkom-SY-L} \begin{equation}\label{relkom-SY-L} \left[\widehat{Y}_\gamma, L_{\beta\mu}\right]= \iota\left( \delta_{\gamma \mu}(\delta_{\beta 0}-L_{\beta 0})- sgn(\gamma) L_{\beta \gamma}(L_{0\mu}-\delta_{0 \mu})\right), \end{equation} where we use $sgn(\gamma):=\left\{\begin{array}{lcr} 1&\,{\rm for}\, & \gamma=0\\ -1 & \, {\rm for}\, & \gamma>0\end{array}\right..$ Now we get \begin{equation} \begin{split} \left[a_\rho, L_{\beta \mu}\right] &= \left[\tilde{a}_\rho, L_{\beta \mu}\right]=- \sum_\gamma\left[L_{\rho \gamma}\widehat{Y}_\gamma, L_{\beta \mu}\right]= - \sum_\gamma L_{\rho \gamma} \left[\widehat{Y}_\gamma, L_{\beta \mu}\right]= \\ &= \iota\left( L_{\rho \mu} (L_{\beta 0}-\delta_{\beta 0}) +sgn(\rho) \delta_{\rho\beta}(L_{0 \mu} - \delta_{0 \mu})\right). \end{split}\end{equation} and these are relations (\ref{hopf-rel-kros}) (for $h=1$). Finally, let us verify (\ref{hopf-rel-pedy}).
By (\ref{relkom-SY-L}) and (\ref{relkom-S-Y}): \begin{equation*} \sum_\gamma[\widehat{Y}_\gamma, L_{\beta \gamma}] = \iota n (\delta_{\beta 0}- L_{\beta 0}) \end{equation*} \begin{equation*} \begin{split} [\tilde{a}_\mu, \tilde{a}_\nu] &= \sum_{\gamma \delta} \left[ L_{\mu \gamma} \widehat{Y}_\gamma, L_{\nu \delta} \widehat{Y}_\delta \right]= \sum_{\gamma \delta} L_{\mu \gamma} L_{\nu \delta} \left[ \widehat{Y}_\gamma, \widehat{Y}_\delta \right] + L_{\mu \gamma} \left[ \widehat{Y}_\gamma, L_{\nu \delta} \right] \widehat{Y}_\delta - L_{\nu \delta} \left[ \widehat{Y}_\delta , L_{\mu \gamma} \right] \widehat{Y}_\gamma=\\ &= \iota(\delta_{0\mu}\tilde{a}_\nu - \delta_{0\nu}\tilde{a}_\mu). \end{split} \end{equation*} and \begin{equation} \begin{split} [a_\mu, a_\nu] &= \left[\tilde{a}_\mu- \frac{\iota n}{2} (\delta_{\mu 0}- L_{\mu 0}), \tilde{a}_\nu- \frac{\iota n}{2} (\delta_{\nu 0}- L_{\nu 0})\right]= \\ &= \left[\tilde{a}_\mu , \tilde{a}_\nu \right]+ \frac{\iota n}{2}\left( \left[\tilde{a}_\mu , L_{\nu 0}\right]- \left[\tilde{a}_\nu, L_{\mu 0}\right]\right) = \iota(\delta_{0\mu}\tilde{a}_\nu - \delta_{0\nu}\tilde{a}_\mu)+ \frac{\iota n}{2} \iota \left( \delta_{\mu 0} L_{\nu 0} - \delta_{\nu 0} L_{\mu 0} \right) =\\ &= \iota \left( \delta_{\mu 0} (\tilde{a}_\nu + \frac{\iota n}{2} L_{\nu 0}) - \delta_{\nu 0} (\tilde{a}_\mu + \frac{\iota n}{2} L_{\mu 0})\right)= \\ &= \iota \left( \delta_{\mu 0} (\tilde{a}_\nu + \frac{\iota n}{2} (L_{\nu 0} -\delta_{\nu 0})) - \delta_{\nu 0} (\tilde{a}_\mu + \frac{\iota n}{2} (L_{\mu 0}-\delta_{\mu 0})) \right)=\\ &= \iota \left( \delta_{\mu 0} a_{\nu} - \delta_{\nu 0} a_\mu \right) \end{split} \end{equation} as in (\ref{hopf-rel-pedy}). \section{(Some remarks on) Quantum $\kappa$-Minkowski space.} In this last section we are going to make a few remarks on ``quantum $\kappa$-Minkowski space''.
Under this name one usually understands the ``$*$-algebra generated by self-adjoint elements $(x_0, x_k)$ satisfying relations'' (\ref{hopf-rel-pedy}): \begin{align*} [x_0,x_k] &= i h x_k\,,& \, [x_k,x_l] &=0 \quad,\quad k=1,\dots,n \end{align*} As a Hopf algebra it was introduced already in \cite{Maj-Rueg}. This is considered either on a pure algebra level, or as operators on a Hilbert space \cite{Agostini}, or within a star-product formulation of a simplified two-dimensional version in \cite{Sitarz}. But if one wants to consider it on a $C^*$-algebra level, it is clear (in fact since \cite{PS-triple}, \cite{SZ-PP} and certainly since \cite{VV}) that the ``unique candidate'' for this name is $C^*(C)$ with the group $C$ defined in (\ref{def-C}), i.e. $C$ is the $AN$ group from the Iwasawa decomposition $SO_0(1,n)=SO(n) A N$ (this group appears in \cite{Agostini} under the name ``$\kappa$-Minkowski Group''). But to call a space a ``Minkowski space'' it should carry an action of the Poincar\'{e} Group, so in our case we have to show an action of our quantum group $(C^*(\Gamma_A), \Delta)$ on $C^*(C)$. Let us now show how the groupoid framework gives a natural candidate for such an action and postpone analytic details to a future publication. \begin{re} A ``quantum space'' $C^*(C)$ has a family of classical points $\{p_\lambda:\lambda\in\R\}$, i.e. characters. They are given by the 1-dimensional unitary representations of $C$: $p_\lambda(s,y):=s^{\iota\lambda}$. These are representations induced from the whole line of $0$-dimensional symplectic leaves on which the Poisson bivector for the related Poisson Minkowski space (affine) described e.g. in \cite{PS-poisson} vanishes. \end{re} Let us begin with the simpler situation of a global decomposition and let $(G;B,C)$ be a double group with $\delta_0: G_B\rel G_B\times G_B$ defined by (\ref{def-delta0}).
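Before continuing, a quick illustration of the defining relations $[x_0,x_k]=ihx_k$, $[x_k,x_l]=0$ recalled at the beginning of this section: one standard operator realization on smooth functions (an assumption made purely as a sanity check, not the $C^*$-algebraic object discussed here) takes $x_k$ as multiplication operators and $x_0$ as a scaled Euler operator (sympy sketch, $n=2$):

```python
import sympy as sp

h = sp.symbols('h', positive=True)
y1, y2 = sp.symbols('y1 y2', real=True)
f = sp.Function('f')(y1, y2)

# illustrative realization (an assumption):
# x_k = multiplication by y_k,  x_0 = i*h * sum_l y_l d/dy_l
def x0(g):
    return sp.I * h * (y1 * sp.diff(g, y1) + y2 * sp.diff(g, y2))

def xk(k, g):
    return (y1 if k == 1 else y2) * g

for k in (1, 2):
    comm = sp.expand(x0(xk(k, f)) - xk(k, x0(f)))
    assert sp.simplify(comm - sp.I * h * xk(k, f)) == 0   # [x0, xk] = i h xk
assert sp.simplify(xk(1, xk(2, f)) - xk(2, xk(1, f))) == 0  # [x1, x2] = 0
print("[x0, xk] = i h xk and [x1, x2] = 0 in this realization")
```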
Consider relations $\delta_L: C\rel G_B\times C$ and $\delta_R: C\rel C\times G_B$ defined by:\notka{def-deltaLR} \begin{align} \label{def-deltaLR} \delta_L &:= \{(g,c_R(g); c_L(g))\,:\,g\in G\}\quad,\quad \delta_R:=\{(c_L(g), g; c_R(g))\,:\,g\in G\} \end{align} The following lemma can be proven by direct computation: \begin{lem} $\delta_L$ and $\delta_R$ are morphisms of groupoids that satisfy: \begin{align} (\delta_0\times id)\delta_L &= (id\times \delta_L)\delta_L\quad, & \quad ( id \times \delta_0)\delta_R &= (\delta_R\times id )\delta_R \end{align} Let $\sigma:C\times G_B \ni (c,g)\mapsto (g,c)\in G_B\times C$ be the flip and let $R_B, R_C$ denote the inverse in the group $G$ and its restriction to $C$, respectively. Then the following equality holds: \begin{align} \delta_L &= \sigma(R_C\times R_B)\delta_R R_C \end{align} \end{lem} \dowl If $(G;B,C)$ is a DLG, the relations $\delta_L,\delta_R$ are morphisms of differential groupoids and, lifting them, one obtains $\Delta_L,\Delta_R$ -- morphisms of the corresponding $C^*$-algebras, e.g. $\Delta_L\in Mor(C^*(C), C^*(G_B\times C))$. One may expect that in ``nice'' cases $C^*(G_B\times C)=C^*(G_B)\otimes C^*(C)$ and the reduced/universal algebra problem can also be handled. This way one gets actions (left or right) of $(C^*(G_B),\Delta_0)$ on $C^*(C)$. Let us show that $\Delta_L$ should be considered as a ``quantization'' of the canonical affine action of the semidirect product $\gotb^0\rtimes B$ on $\gotb^0$:\notka{affine-action} \begin{equation}\label{affine-action} (\gotb^0\rtimes B)\times \gotb^0\ni (\varphi,b;\psi)\mapsto \varphi+\kad(b)\psi\in \gotb^0 \end{equation} Let $\Gamma_1\rightrightarrows E_1$ and $\Gamma_2\rightrightarrows E_2$ be differential groupoids.
For a (differentiable) relation $h:\Gamma_1\rel\Gamma_2$ by $T^*h: T^*\Gamma_1\rel T^*\Gamma_2$ we denote the relation ({\em cotangent lift of $h$}) defined by \notka{def-Twgiazdkah} \begin{equation} \label{def-Tgwiazdkah} (\varphi_2,\varphi_1)\in T^*h \iff \forall (v_2,v_1) \in Th \quad <\varphi_2,v_2 > = <\varphi_1,v_1 >. \end{equation} If $h$ is a morphism of differential groupoids then $T^*h$ is a morphism of {\em symplectic groupoids} and its base map is a Poisson map $(TE_2)^0\rightarrow (TE_1)^0$ \cite{SZ2}. By (\ref{def-deltaLR}), (\ref{def-Tgwiazdkah}) and (\ref{kontragrad}), for $\Phi$ -- the base map of $T^*\delta_L$ -- we have $\Phi: (TB)^0\times \gotc^*\rightarrow \gotc^*$ and, for $(\phi, \psi)\in (T_bB)^0\times \gotc^*$, $\kropka{c}\in\gotc\subset\gotg$, identifying $\gotc^*$ with $\gotb^0\subset\gotg^*$: \begin{equation*} \begin{split} <\Phi(\phi, \psi), \kropka{c}> &= <(\phi, \psi), (\kropka{c} b, c_R(\kropka{c}b))>=<\phi,\kropka{c}b > + < \psi , \adc(b^{-1})(\kropka{c})>= \\ &= < \phi,\kropka{c}b > + < \kad(b)(\psi) ,\kropka{c}>= <\varphi+\kad(b)(\psi),\kropka{c}>, \end{split} \end{equation*} where, using the right trivialization, we represent $\phi\in (T_bB)^0$ by $(\varphi, b)\in \gotb^0\times B$. This way we get $\quad\displaystyle \Phi(\varphi,b; \psi)=\varphi+\kad(b)(\psi)\quad$ exactly as in (\ref{affine-action}). In the situation of $\kappa$-Poincar\'{e}, relations defined as $\delta_L,\delta_R$ in (\ref{def-deltaLR}), e.g. $$\tilde{\delta}_L: C\rel \Gamma_A \times C \quad,\quad \tilde{\delta}_L:=\{(g,\tilde{c}_R(g);\tilde{c}_L(g)) :g\in\Gamma_A\}$$ {\em are not morphisms} of differential groupoids. It can be directly verified that $\quad\displaystyle m'(\tilde{\delta}_L \times \tilde{\delta}_L)\nsubseteq \tilde{\delta}_L m\,\,$, where $m$ and $m'$ denote the multiplication relations in $C$ and $\Gamma_A \times C$, respectively.
Or one may observe that $T^*\tilde{\delta}_L$ restricted to the sets of units gives the action of the Poisson-Poincar\'{e} group $\gota^0\rtimes A$ on $\gotc^*\simeq \gota^0$ as in (\ref{affine-action}). It is known that this is a Poisson action which is {\em non complete}, so it cannot be the base map of a morphism of symplectic groupoids \cite{SZ2}. Essentially we face a problem similar to the one at the very beginning: some operators that ``should be'' self-adjoint are not essentially self-adjoint on their ``natural'' domains, and we can try to overcome it in a similar way -- passing from $\Gamma_A$ to $G_B$. It is easier to work with $\tilde{\delta}_R:C\rel C\times \Gamma_A$ given by $\displaystyle \,\tilde{\delta}_R:=\{(\tilde{c}_L(g), g; \tilde{c}_R(g))\,:\,g\in \Gamma_A\}\,,\,$ or, after passing to $\Gamma_{B'}$ (and using the same symbol $\tilde{\delta}_R$): \notka{def-tilde-deltaR} \begin{equation}\label{def-tilde-deltaR} \tilde{\delta}_R=\{(\tilde{c}_L(b_L(g))^{-1}\tilde{c}_L(g), g; c_R(g)) : g\in \Gamma_{B'}\}\subset (C\times \Gamma_{B'}) \times C \end{equation} Let us define \notka{def-TC} \begin{align}\label{def-TC} T^C:=\{(\tilde{c}_L(b)^{-1},b):b\in B'\}=(c_R\times id)T\subset C\times G_B\,, \quad T^C_{12}:= T^C\times B\subset C\times G_B\times G_B \end{align} The following lemma may be compared to Prop. \ref{prop-twist}.\notka{lemma-TC} \begin{lem}\label{lemma-TC} \begin{enumerate} \item $T^C$ is a section of left and right projections over $\{e\}\times B'\subset C\times G_B$ ($e$ is the neutral element in $C$) and a bisection of $C\times \Gamma_{B'}$.
\item $(id\times\delta_0) T^C$ is a section of left and right projections (in $C\times G_B\times G_B$) over the set $\{e\}\times \delta_0(B')=\{(e,b_2,b_3) : b_2 b_3\in B'\}$; \item $(\delta_R\times id) T^C$ is a section of left and right projections over the set $\{e\}\times B \times B'$; \item $T^C_{12}(\delta_R\times id) T^C= T_{23}( id \times \delta_0) T^C$ (equality of sets in $C\times G_B\times G_B$), moreover this set is a section of the right projection over $\{e\}\times (\delta_0(B')\cap(B\times B'))$ and the left projection over $\{e\}\times B'\times B'$. \end{enumerate} \end{lem} {\em Proof:} The first statement is clear from the definition of $T^C$. By a straightforward computation: \begin{equation}\begin{split} (id\times\delta_0) T^C & =\{(\tilde{c}_L(b_1 b_2)^{-1}, b_1, b_2): b_1 b_2\in B'\}\subset C\times G_B\times G_B\\ (\delta_R\times id) T^C &= \{(c_L(g), g, b): b\in B', c_R(g)=\tilde{c}_L(b)^{-1}\}=\\ &= \{(c_L(b_2 \tilde{c}_L(b_3)^{-1}), b_2 \tilde{c}_L(b_3)^{-1}, b_3): b_2\in B, b_3\in B'\}\subset C\times G_B\times G_B, \end{split} \end{equation} and the second and third statements follow easily from these expressions. Now, using these expressions, it is easy to compute $T^C_{12}(\delta_R\times id) T^C$ and get: $$T^C_{12}(\delta_R\times id) T^C= \{(\tilde{c}_L(b_2)^{-1} c_L(b_2 \tilde{c}_L(b_3)^{-1}), b_2 \tilde{c}_L(b_3)^{-1}, b_3) : b_2,b_3\in B'\},$$ and the same result for $T_{23}( id \times \delta_0) T^C$. The statement about the left projection is clear; the only thing we need to check is that for $b_2,b_3\in B'$ the product $b_R(b_2 \tilde{c}_L(b_3)^{-1}) b_3$ is again in $B'$. Let $b_2=c_2 a_2$ and $b_3=c_3 a_3 $.
Then $$b_R(b_2 \tilde{c}_L(b_3)^{-1}) b_3=b_R(c_2 a_2 c_3^{-1})b_3= b_R(c_2 a_2 c_3^{-1}b_3)= b_R(c_2 a_2 a_3)=b_R(a_2 a_3)\in B'$$ \dowl \noindent $T^C$ is used to twist $\delta_R$ to $\tilde{\delta}_R$: \begin{lem} Let $\delta_R:C\rel C\times G_B$ be the morphism defined in (\ref{def-deltaLR}) and $\tilde{\delta}_R$ be as in (\ref{def-tilde-deltaR}). They are related by: $\displaystyle \tilde{\delta}_R=Ad_{T^C}\cdot \delta_R$. \end{lem} {\em Proof:} Recall that $Ad_{T^C}:C\times G_B\rel C\times G_B$ is defined by (compare (\ref{def-AdT})): $$(c_1,g_2;c_3,g_4)\in Ad_{T^C}\iff \exists t_1,t_2\in T^C : (c_1,g_2) = t_1 (c_3,g_4) (s_C\times s_B) t_2$$ The multiplication above is in the groupoid $C\times G_B$ and $s_C, s_B$ are the groupoid inverses (i.e. $s_C$ stands for the inverse in the group $C$). Let us compute: $$(c_1,g_2;c_3,g_4)\in Ad_{T^C}\iff \exists b_5,b_6\in B' : (c_1,g_2)= (\tilde{c}_L(b_5)^{-1}, b_5) (c_3, g_4) (\tilde{c}_L(b_6), b_6),$$ i.e. $b_5=b_L(g_4)\in B'\,,\, b_6=b_R(g_4)\in B'$, therefore $g_2=g_4\in \Gamma_{B'}$ and $c_1=\tilde{c}_L(b_5)^{-1} c_3 \tilde{c}_L(b_6);$ so this relation is in fact a bijection $$ Ad_{T^C}: C\times\Gamma_{B'}\ni (c,g)\mapsto (\tilde{c}_L(b_L(g))^{-1} c \tilde{c}_L(b_R(g)),g)\in C\times\Gamma_{B'}$$ defined by the bisection $T^C$ of $C\times\Gamma_{B'}$.
\noindent Now we have: $$(c_1,g_2,c_3)\in Ad_{T^C}\delta_R \iff \exists g\in G : (c_1,g_2; c_L(g), g)\in Ad_{T^C}\,, \,c_R(g)=c_3$$ therefore $g\in \G_{B'}$ and $c_1=\tilde{c}_L(b_L(g))^{-1} c_L(g) \tilde{c}_L(b_R(g))=\tilde{c}_L(b_L(g))^{-1} \tilde{c}_L(g)$ and $$Ad_{T^C}\delta_R=\{(\tilde{c}_L(b_L(g))^{-1} \tilde{c}_L(g), g, c_R(g)) : g\in \Gamma_{B'}\}$$ exactly as $\tilde{\delta}_R$ in (\ref{def-tilde-deltaR}).\dowl\\ $T^C$, as a bisection of $C\times \Gamma_{B'}$, defines a unitary multiplier $\widehat{T^C}$ of $C^*_r(C\times\Gamma_{B'})=C^*_r(C)\otimes C^*_r(\Gamma_{B'})= C^*(C)\otimes C^*_r(G_B)=C^*(C)\otimes C^*(G_B)$, so having $\Delta_R$ -- the action of the quantum group $(C^*(G_B),\Delta_0)$ on the ``quantum space'' represented by $C^*(C)$ (lifted from $\delta_R$) -- we can, due to the properties of $T^C$ described in Lemma \ref{lemma-TC}, {\em define the action} of our quantum group $(C^*(\Gamma_A),\Delta)$ on the same ``space'' $C^*(C)$ by $\widetilde{\Delta}_R(a):=\widehat{T^C}\Delta_R(a)\widehat{T^C}^{-1}$. Whether this construction gives a {\em continuous} action of $\kappa$-Poincar\'{e} on $C^*(C)$ still needs to be verified. \section{Appendix} Here we collect some formulae proven in \cite{PS-DLG} and used in this paper. $(G;B,C)$ is a double Lie group, $\gotg, \gotb,\gotc$ are the corresponding Lie algebras and $\gotg=\gotb\oplus \gotc$ (direct sum of vector spaces). Let $\rzutB, \rzutC $ be the projections in $\gotg$ corresponding to the decomposition $\gotg=\gotb\oplus \gotc$. Let us define: \begin{equation}\label{adb-adc}\notka{adb-adc} \adb(g):=\rzutB Ad(g)|_{\gotb}\,\,,\,\,\adc(g):=\rzutC Ad(g)|_{\gotc} \end{equation} Clearly $\adb$ and $\adc$ are representations when restricted to $B$ or $C$.
\noindent{\em Modular functions.} Let us define: \notka{app-def-modular-functions}\begin{equation}\label{app-def-modular-functions} j_B(g):=|\det(\adb(g))|\,,\,\,j_C(g):=|\det(\adc(g))| \end{equation} \noindent {\em The choice of $\om_0$.} Choose a real half-density $\mu_0\neq 0$ on $T_eC$ and define a left-invariant half-density on $G_B$ by \begin{equation}\label{def-lo}\notka{def-lo} \lambda_0(g)(v):=\mu_0(g^{-1} v)\,,\,v\in \lma T^l_g G_B. \end{equation} The corresponding right-invariant half-density is given by: \begin{equation}\label{def-ro}\notka{def-ro} \rho_0(g)(w):=j_C(b_L(g))^{-1/2}\mu_0(w g^{-1})\,,\,w\in\lma T^r_g G_B. \end{equation} \noindent {\em Multiplication and comultiplication in $\sA(G_B)$.} After the choice of $\om_0$ as above, the multiplication in $\sA(G_B)$ reads: $(f_1\om_0)(f_2\om_0)=:(f_1*f_2)\om_0$ and \notka{mult-ga} \begin{equation}\label{mult-ga} \begin{split} (f_1*f_2)(g) & =\int_C d_lc\, f_1(b_L(g) c)f_2(c_L(b_L(g) c)^{-1}g)=\\ &=\int_C d_rc\, j_C(b_L(c b_R(g)))^{-1} f_1(g c_R(c b_R(g))^{-1}) f_2(c b_R(g)), \end{split} \end{equation} where $d_l c$ and $d_r c$ are left and right Haar measures on $C$ defined by $\mu_0$. The $||\cdot||_l$ defined by this $\om_0$ is given by: \notka{l-norm}\begin{equation}\label{l-norm} ||f||_l=\sup_{b\in B}\int_C d_l c\, |f(b c)| \end{equation} Let $\delta_0:=m_C^T: G_B\rel G_B\times G_B$. The formula for $\hat{\delta}_0$ reads $$\hat{\delta}_0(f\om_0)(F(\om_0\otimes\om_0))=:(\hat{\delta}_0(f) F)(\om_0\otimes \om_0)$$ \notka{app-hat-delta0-form}\begin{equation}\label{app-hat-delta0-form} (\hat{\delta}_0(f) F)(b_1 c_1, b_2 c_2)= \int_C d_lc\, j_C(c_L(b_2 c))^{-1/2}f(b_1 b_2 c)F(c_L(b_1 b_2 c)^{-1} b_1 c_1, b_R(b_2 c) c^{-1} c_2) \end{equation} \noindent{\em Action of bisections on bidensities.} Let us write $\om=f \om_0\,,\,f\in \sD(G_B)$.
The action of a bisection $B c_0$ on $\sA(G_B)$ is given by: \notka{app-Bc0-action} \begin{align}\label{app-Bc0-action} (B c_0)(f\om_0) & =:(B c_0 f)\, \om _0\,, &\quad & (B c_0 f)(g)=f(B(c_0)^{-1} g) j_C(c_0)^{-1/2} \end{align} \noindent {\bf Notation for orthogonal Lie algebras.} Let $(V,\eta)$ be a real, finite dimensional vector space with a bilinear, symmetric and non degenerate form $\eta$; by $\eta$ we denote also the isomorphism $V\rightarrow V^*$ defined by $< \eta(x), y >:=\eta(x,y)$. A basis $(v_\alpha)$ of $V$ is called {\em orthonormal} if $\displaystyle \eta(v_\alpha, v_\beta)=\eta(v_\alpha,v_\alpha)\delta_{\alpha\beta}\,,\,\,\,|\eta(v_\alpha,v_\alpha)|=1$. For a subset $S\subset V$ the symbol $S^\perp$ is used for {\em the orthogonal complement} of $S$, the symbol $S^0\subset V^*$ stands for {\em the annihilator} of $S$. Let us define operators in $End(V)$: \begin{equation}\label{lambdas} \Mlambda_{xy}:=x\mt\eta(y)-y\mt\eta(x)\quad, \quad x,y\in V.\end{equation} For a basis $(v_\alpha)$ in $V$ we write $\Mlambda_{\alpha\beta}$ instead of $\Mlambda_{v_\alpha,v_\beta}$. Operators $\Mlambda_{xy}$ satisfy: \begin{equation}\label{lambda-komut} \notka{lambda-komut} \,[\Mlambda_{xy},\Mlambda_{zt}]=\eta(x,t)\Mlambda_{yz}+\eta(y,z)\Mlambda_{xt}-\eta(x,z)\Mlambda_{yt}-\eta(y,t)\Mlambda_{xz} \end{equation} and $so(\eta)=span\{\Mlambda_{xy}:x,y\in V\}$. Notice that for $g\in O(\eta)$ we have $\ad(g) (\Mlambda_{xy})=\Mlambda_{gx,gy}$. We will use a bilinear, non degenerate form $k:so(\eta)\times so(\eta)\rightarrow \R$ defined by: \notka{app-def-k} \begin{equation} \label{app-def-k} k(\Mlambda_{xy},\Mlambda_{zt}):=\eta(x,t)\eta(y,z)-\eta(x,z)\eta(y,t) \end{equation} It is easy to see that for $g\in O(\eta)$: $\ad(g)\in O(k)$ i.e. $$k(g\Mlambda_{xy}g^{-1},g\Mlambda_{zt}g^{-1})=k(\Mlambda_{xy},\Mlambda_{zt})\,,\,g\in O(\eta)$$ (of course $k$ is proportional to the Killing form on $so(\eta)$). 
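As a consistency check (a direct computation, not among the formulae quoted from \cite{PS-DLG}): acting with the commutator in (\ref{lambda-komut}) on a vector $v\in V$ and using $\Mlambda_{xy}\,v=x\,\eta(y,v)-y\,\eta(x,v)$, one finds
\begin{equation*}
\begin{split}
[\Mlambda_{xy},\Mlambda_{zt}]\,v= &\big(x\,\eta(y,z)-y\,\eta(x,z)\big)\eta(t,v)-\big(x\,\eta(y,t)-y\,\eta(x,t)\big)\eta(z,v)\\
&-\big(z\,\eta(t,x)-t\,\eta(z,x)\big)\eta(y,v)+\big(z\,\eta(t,y)-t\,\eta(z,y)\big)\eta(x,v),
\end{split}
\end{equation*}
and grouping the eight terms according to their vector factor reproduces the right-hand side of (\ref{lambda-komut}) applied to $v$.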
By $\kad$ we denote the coadjoint representation of $O(\eta)$ on $so(\eta)^*$: $\kad(g):=\ad(g^{-1})^*$. If $k$ is the isomorphism $so(\eta)\rightarrow so(\eta)^*$ defined by the form $k$ then $$\kad(g) k(X)=k(\ad(g) X) \,,\quad X\in so(\eta), g\in O(\eta)$$ Let us also define a bilinear form $\tilde{k}$ on $so(\eta)^*$ by: \begin{equation}\label{tildek}\notka{tildek} \tilde{k}(\varphi,\psi):=k(k^{-1}(\varphi),k^{-1}(\psi))\,,\,\varphi,\psi\in so(\eta)^* \end{equation} so $\tilde{k}(\varphi,\psi)=<\varphi, k^{-1}(\psi)>$; again it is clear that if $g\in O(\eta)$ then $\kad(g)\in O(\tilde{k})$, and $$\tilde{k}(\kad(g) k(X),k(Y))= k(\ad(g) X, Y)\,\,,\,X,Y\in so(\eta).$$ \noindent {\bf Adjoint, coadjoint representations and Hopf algebra structure.} For a Lie group $G$ we identify the group $T^*G$, via right translations, with the semidirect product $\gotg^*\rtimes G$ (with coadjoint representation):\notka{app-semi-direct} \begin{equation} \label{app-semi-direct} (\varphi,g)(\psi, h):=(\varphi+\kad(g) \psi, g h)\,,\quad \varphi, \psi\in \gotg^*\,,\,g,h\in G, \end{equation} If $B\subset G$ is a subgroup with a Lie algebra $\gotb\subset \gotg$ then $\gotb^0\times B$ is a subgroup of $\gotg^*\rtimes G$. If $\gotc\subset \gotg$ is any complementary subspace to $\gotb$ i.e. $\gotg=\gotb\oplus\gotc$, then $Ad^{\gotc}(b):=P_{\gotc}Ad(b)|_{\gotc}$ is a representation of $B$ on $\gotc$. The spaces $\gotc$ and $\gotb^0$ are dual to each other and the representation $Ad^{\gotc}$ is contragradient to $\kad|_B$, i.e. 
for $\varphi\in \gotb^0, \kropka{c}\in\gotc$ and $b\in B$: \notka{kontragrad} \begin{equation}\label{kontragrad} \begin{split} <\kad(b)\varphi, \kropka{c}> &= <\varphi, Ad(b^{-1})\kropka{c}>=<\varphi, P_{\gotc} Ad(b^{-1})\kropka{c}>=<\varphi, Ad^{\gotc}(b^{-1})\kropka{c}>=\\ &= <(Ad^{\gotc}(b^{-1}))^*\varphi, \kropka{c}> \end{split} \end{equation} Let $(\rho_k)$ be a basis in $\gotb^0$, $(\kropka{c}_k)$ dual basis in $\gotc$ and $\kad_{lk}, Ad^{\gotc}_{lk} : B\rightarrow\R$ matrix elements of $\displaystyle \kad(b)$ and $Ad^{\gotc}(b)$ in corresponding bases: \begin{equation*} <\rho_l, \kropka{c}_m>=\delta_{lm} \quad,\quad\quad \kad(b)\rho_k=\sum_l \kad_{lk}(b)\rho_l\quad,\quad\quad Ad^{\gotc}(b)\kropka{c}_k=\sum_l Ad^{\gotc}_{lk}(b) \kropka{c}_l \end{equation*} Clearly, the equality (\ref{kontragrad}) implies $\,\displaystyle \kad_{lk}(b^{-1})=Ad^{\gotc}_{kl}(b)$, i.e. $$\sum Ad^{\gotc}_{km}(b)\kad_{kl}(b)=\sum \kad_{lk}(b)Ad^{\gotc}_{mk}(b)=\delta_{lm}.$$ Let us use the same symbols for extensions of functions $\kad_{lk}, Ad^{\gotc}_{lk}, \kropka{c}_k$ to $\gotb^0\rtimes B$ i.e. 
$$\kad_{kl}(\varphi,b):=\kad_{kl}(b)\,,\,\,Ad^{\gotc}_{kl}(\varphi,b):=Ad^{\gotc}_{kl}(b)\,,\,\,\kropka{c}_k(\varphi,b):=<\varphi, \kropka{c}_k>$$ It is straightforward to compute the action of the comultiplication, counit and antipode, defined by the group $\gotb^0\rtimes B$, on the functions $\kropka{c}_k, \kad_{kl}, Ad^{\gotc}_{kl}$:\notka{hopf} \begin{align*} \Delta(\kropka{c}_k) &= \kropka{c}_k\mt I + \sum_m \kad_{km}\mt \kropka{c}_m\,, & \counit(\kropka{c}_k) &= 0\,, & \antyp(\kropka{c}_k) = -\sum_m\antyp(\kad_{km})\,\kropka{c}_m &= -\sum_m Ad^{\gotc}_{mk} \,\kropka{c}_m\,; \end{align*} \begin{align}\label{hopf} \Delta(\kad_{kl}) &= \sum_m \kad_{km}\mt \kad_{ml}\,,& \counit(\kad_{kl}) &= \delta_{kl}\,, & \antyp(\kad_{kl}) &= Ad^{\gotc}_{lk}\,;\\ \Delta(Ad^{\gotc}_{kl}) &=\sum_m Ad^{\gotc}_{km}\mt Ad^{\gotc}_{ml}\,,& \counit(Ad^{\gotc}_{kl}) &= \delta_{kl}\,, &\antyp(Ad^{\gotc}_{kl}) &= \kad_{lk}\,.\nonumber \end{align} Let us define $\displaystyle \tilde{A}_k:=\antyp(\kropka{c}_k)=-\sum_l Ad^{\gotc}_{lk} \kropka{c}_l\,$, then: \notka{kappa-e} \begin{align}\label{kappa-e} \Delta(\tilde{A}_k)& =I\mt \tilde{A}_k + \sum_m \tilde{A}_m\mt Ad^{\gotc}_{mk}\,,&\,\antyp(\tilde{A}_k)&=-\sum_l \tilde{A}_l\, \kad_{kl}\,,& \,\kropka{c}_k&=- \sum_l \kad_{kl}\, \tilde{A}_l \end{align} Let us additionally assume that $\gotg$ is equipped with an invariant, non degenerate, symmetric bilinear form $k$ and that $k|_{\gotb}$ is non degenerate. Thus we have two decompositions \begin{equation*} \gotg=\gotb\oplus \gotb^{\perp}=\gotb\oplus \gotc \end{equation*} Let $P_{\gotb}$ be the projection on $\gotb$ defined by the first decomposition and $P_{\gotc}$ the projection on $\gotc$ defined by the second one. The next lemma is straightforward. \notka{lema-reps} \begin{lem}\label{lema-reps} (a) The restriction of $k$ to $\gotb^{\perp}$ is an isomorphism of $\gotb^{\perp}$ and $\gotb^0$.
For any $\kropka{c}\in\gotc$ and any $\kropka{f} \in \gotb^{\perp}$: $\displaystyle P_{\gotc}(I-P_{\gotb}) \kropka{c}=\kropka{c}\,,\,\,(I-P_{\gotb}) P_{\gotc}\kropka{f}=\kropka{f},$ in other words, the restriction of $P_{\gotc}$ to $\gotb^{\perp}$ is an isomorphism of $\gotb^{\perp}$ and $\gotc$, and the inverse mapping is the restriction of $I-P_{\gotb}$ to $\gotc$.\\ (b) The mapping $\phi:=k \cdot (I-P_{\gotb})|_{\gotc} : \gotc\rightarrow \gotb^0$ is an isomorphism with the inverse $\phi^{-1}=P_{\gotc}\cdot( k^{-1}|_{\gotb^0}) $, moreover for any $b\in B$ $$\phi\cdot Ad^{\gotc}(b)\cdot \phi^{-1} =\kad(b)|_{\gotb^0}$$ (c) Let $(e_k)$ be an o.n. basis in $\gotb^{\perp}$; it defines bases $(k(e_k))$ and $(P_{\gotc}(e_k))$ in $\gotb^0$ and $\gotc$, respectively. Then $\phi(P_{\gotc}(e_k))=k(e_k)$ and, consequently, the matrix elements of $Ad(b)|_{\gotb^{\perp}}$, $\kad(b)$ and $Ad^{\gotc}(b)$ in the bases $(e_k)$, $(k(e_k))$ and $(P_{\gotc}(e_k))$, respectively, are equal. \end{lem} \dowl
\section{Introduction} The mass transport (MT) process in astrophysical environments has relevance for different phenomena in our Universe. It is important for planet formation in proto-planetary discs, it triggers the AGN activity associated to super massive black hole (SMBH) accretion at high redshift and it is responsible for mass accretion from the filamentary structures around dark matter (DM) haloes into the central regions of the first galaxies. A full understanding of this phenomenon is very important in the construction of a galaxy formation theory. In particular, the MT phenomenon has a crucial relevance in models of black hole (BH) formation and their growth in the early stages of our Universe. The observation of very bright quasars at redshift $z\ga6$ with luminosities $L\ga10^{13}[L_\odot]$ implies the existence of BHs with masses of the order $M_{BH}\sim10^{9}[M_\odot]$ when our Universe was only $\sim 1$ Gyr old \citep{Fan+2001}, i.e. SMBHs must have formed very early in the history of our Universe and grown very fast in order to reach such high masses within the first $\sim$ Gyr. To understand such a rapid early evolution is one of the main challenges of current galaxy formation theories. For a more extended discussion on massive BH formation at high redshift see \citet{Volonteri2010} and \citet{Haiman2013}. There are three main scenarios for the formation of SMBH seeds: \begin{itemize} \item Seeds from the first generation of stars and their subsequent growth by accretion to form the SMBH: In this scenario a top heavy initial mass function (IMF) associated with population III (pop III) stars \citep{Abel+2002,BrommLarson2004} can leave BHs with masses of the order of $\sim100$M$_\odot$ \citep{HegerWoosley2002}. We note that more recent studies have shown that pop III stars could be formed in clusters of low mass stars, e.g. \citet{Stacy+2010,Greif+2011,Clark+2011}.
\item Massive seeds formed by the direct collapse of warm ($\ga 10^4$ K) neutral hydrogen inside atomic cooling haloes: In this scenario the initial BH seeds are formed by the direct collapse of warm gas triggered by dynamical instabilities inside DM haloes of mass $\ga5\times 10^7[M_\odot]$ at high redshift, $z\ga10$ \citep[e.g. ][]{OhHaiman2002,LodatoNatarajan2006,Begelman+2006}. If molecular hydrogen is absent, so that fragmentation is avoided \citep[e.g. ][]{Agarwal+2012,Latif+2013,Latif+2014}, and there is efficient outward angular momentum transport \citep{Choi+2015}, such a scenario could favor the formation of a BH seed of $\sim10^4-10^6[M_\odot]$, e.g. \citet{Begelman+2008}. \item BH seeds formed by dynamical effects of dense stellar systems like star clusters or galactic nuclei, e.g. \citet{Schneider+2006}: In this scenario the resulting BH seed can have a mass of $\sim10^2-10^4[M_\odot]$ \citep{DevecchiVolonteri2009}. \end{itemize} Studies related to the initial mass for SMBH formation favor massive $\sim 10^4-10^6$M$_\odot$ seeds inside primordial atomic cooling haloes \citep{Volonteri+2008,TanakaHaiman2009,LodatoNatarajan2006}. Despite that, due to the lack of observational evidence it is not yet clear whether one of these scenarios is preferred by nature or all of them are at work at the same time in different haloes. There is dynamical evidence for the existence of SMBH in the center of nearby galaxies \citep{Ferrarese&Ford2005} with masses in the range $M_{BH}\sim10^6-10^9[M_\odot]$ suggesting that the BHs formed in the first evolutionary stages of our Universe are now living in the galactic centers around us, including our galaxy \citep{Ghez+2005}. Besides their ubiquitous nature there is evidence of scaling relations connecting the BH mass with its host galaxy properties, namely the galactic bulge - BH mass relation \citep[e.g. ][]{Gultekin+2009} and the bulge stellar velocity dispersion - BH mass relation \citep[e.g. ][]{Tremaine+2002,FerrareseMerritt2000}.
Such relations suggest a co-evolution between the BH and its host galaxy. In a cosmological context, motivated by the theoretical study of \citet{PichonBernardeau1999}, the recent simulations of \citet{Pichon+2011} and \citet{Codis+2012} have argued that the galactic spin may be generated entirely in the baryonic component due to the growth of eddies in the turbulence field generated by large-scale ($\ga$ few Mpc) mass in-fall, relating the large scale angular momentum (AM) acquisition with mass transport phenomena inside the virial radius. \citet{Danovich2015} studied the AM acquisition process in galaxies at redshift $z\approx 4-1.5$, identifying four phases of AM acquisition: i) the initial spin is acquired following the Tidal Torque Theory \citep{Peebles1969,Doro70}. ii) both mass and AM are transported to the halo outer region in the virialization process following the filamentary structure around them. iii) In the disc vicinity the gas forms a non-uniform ring which partially suffers the effect of the galactic disc torques, producing an alignment with the inner disc AM. iv) Finally, outflows remove gas with low specific AM, increasing its global value in the central region, and violent dynamical instabilities (VDI) associated to clump-clump interactions and clump-merged DM halo interactions remove AM, allowing a more centrally concentrated gas. At smaller scales ($\sim$ few $100 [kpc]$), \citet{PrietoSpin} studied the MT and AM acquisition process in four DM haloes of similar mass $M\approx10^9[M_\odot]$ and very different spin parameter $\lambda=0.001-0.04-0.06-0.1$ \citep{Bullock2001} at redshift $z=9$. The main result of this work is the anti-correlation between the DM halo spin parameter and the number of filaments converging on it: the larger the number of filaments the lower the spin parameter.
Such a result suggests that DM haloes associated to isolated knots of the cosmic web could favor the formation of SMBH because the inflowing material would have to cross a lower centrifugal barrier to reach the central galactic region. In a non-cosmological context, \citet{Escala2006} and \citet{Escala2007} have shown that the interplay and competition between BH feeding and star formation (SF) can naturally explain the $M_{BH}-\sigma$ relation. Using idealized isolated galaxy evolution simulations \citet{Bournaud+2007} showed that $z\sim 1$ galaxies are able to form massive clumps due to gravitational instabilities \citep{Toomre1964} triggered by their high gas mass fractions. Such clumpy high redshift galaxies evolve due to VDI to form spiral galaxies characterized by a bulge and an exponential disc. Similar results have been found in cosmological contexts by \citet{Mandelker+2014}. They show that the formation of massive clumps is a common feature of $z\sim 3-1$ galaxies. Due to their fast formation and mutual interactions, VDI dominate the disc evolution. A similar clump migration has been observed at higher redshift in a $5\times10^{11}[M_\odot]$ DM halo at $z=6$ in \citet{Dubois+2012} and \citet{Dubois+2013}. In these works the migration was triggered by DM merger induced torques. In contrast to the scenario presented above there are studies supporting the idea that clump interactions in high redshift discs are not the main channel for building up galactic bulges \citep[e.g. ][]{Hopkins+2012,Fiacconi+2015,Tamburello+2015,Behrendt+2016,Oklopcic+2016}. Inspired by the $\alpha$ parametrization in the seminal paper of \citet{SS1973}, \citet{Gammie2001} studied the gravitational stability in cool thin discs. In that work he quantified the rate of angular momentum flux in terms of the Reynolds and gravitational stresses.
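The stability analyses of \citet{Toomre1964} and \citet{Gammie2001} are controlled by the Toomre parameter $Q=c_s\kappa/(\pi G\Sigma)$ of the gas disc. A minimal numerical sketch (illustrative values only, not taken from any simulation in this paper):

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def toomre_q(c_s, kappa, sigma):
    """Toomre parameter Q = c_s * kappa / (pi * G * Sigma) for a gas disc.

    c_s   : sound speed [cm/s]
    kappa : epicyclic frequency [1/s]
    sigma : gas surface density [g/cm^2]
    Q < 1 marks the axisymmetric gravitationally unstable regime.
    """
    return c_s * kappa / (np.pi * G * sigma)

# Illustrative disc: T ~ 1e4 K gas (c_s ~ 10 km/s), flat rotation curve with
# v = 100 km/s at r = 1 kpc (kappa = sqrt(2) v / r), Sigma ~ 100 Msun/pc^2.
kpc = 3.086e21  # [cm]
q = toomre_q(c_s=1e6, kappa=np.sqrt(2.0) * 1e7 / kpc, sigma=0.02)
print(q)  # of order unity: marginally stable
```

Lowering $c_s$ (cooling) or raising $\Sigma$ (gas accumulation) pushes $Q$ below unity, which is the clump-forming regime discussed above.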
In this work we will use a similar $\alpha$-formalism to study the MT process on galactic discs at redshift $z\sim 6$, performing N-body and hydrodynamic numerical simulations from cosmological initial conditions. We will study a halo of $\sim$ few $10^{10}[M_\odot]$, a mass range not previously studied in this context. It is the first time that such an approach has been used to study the MT process on galaxies at high redshift. Furthermore, we will directly compute the torques acting on the simulated structures from $\sim50 [kpc]$ scales associated to the cosmic web around the central DM halo to $\sim$few pc scales associated to the galactic disc. Such an approach will give us clues about the main source of mass transport in these objects and some insight into the SMBH growth mechanisms at high redshift. The paper is organized as follows. Section \S 2 contains the numerical details of our simulations. Here we describe the halo selection procedure, our refinement strategy and the gas physics included in our calculations. In section \S 3 we show our results. Here we present radial profiles of our systems, star formation properties of our galaxies, a gravitational stability analysis and a mass transport analysis based on both the $\alpha$ formalism and direct torque computations on small and large scales. In section \S 4 we discuss our results and present our main conclusions. \section{Methodology and Numerical Simulation Details} \label{Methodology} \subsection{RAMSES code} The simulations presented in this work were performed with the cosmological N-body hydrodynamical code RAMSES \citep{Teyssier2002}. This code has been written to study hydrodynamic and cosmological structure formation with high spatial resolution using the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure.
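To fix ideas about what a second-order Godunov step with piecewise linear reconstruction does, here is a toy sketch for scalar advection with a minmod limiter (our own minimal illustration, not the actual RAMSES Euler solver):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: keeps the reconstruction non-oscillatory."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def godunov_plm_step(q, u, dx, dt):
    """One second-order Godunov (Piecewise Linear Method) step for
    scalar advection q_t + u q_x = 0 with u > 0, periodic boundaries."""
    # limited slope in each cell
    s = minmod(q - np.roll(q, 1), np.roll(q, -1) - q) / dx
    # upwind state at the right interface of each cell, advanced a half step
    q_face = q + 0.5 * dx * s * (1.0 - u * dt / dx)
    flux = u * q_face  # flux through the right interface
    return q - dt / dx * (flux - np.roll(flux, 1))

# advect a Gaussian once around a periodic box; the update is conservative
n, u = 128, 1.0
x = (np.arange(n) + 0.5) / n
q = np.exp(-200 * (x - 0.5) ** 2)
dx = 1.0 / n
dt = 0.5 * dx / u  # CFL = 0.5
q1 = q.copy()
for _ in range(int(round(1.0 / (u * dt)))):
    q1 = godunov_plm_step(q1, u, dx, dt)
print(abs(q1.sum() - q.sum()))  # conserved to round-off
```

The finite-volume flux differencing telescopes over the grid, which is why total mass is conserved to machine precision, the property that makes such schemes suitable for cosmological gas dynamics.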
The code solves the Euler equations with a gravitational term in an expanding universe using the second-order Godunov method (Piecewise Linear Method). \subsection{Cosmological parameters} Cosmological initial conditions were generated with the mpgrafic code \citep{Prunetetal2008} inside a $L=10[cMpc]$ side box. Cosmological parameters were taken from \citet{Planck2013Results}: $\Omega_m=0.3175$, $\Omega_\Lambda=0.6825$, $\Omega_b=0.04899$, $h=0.6711$, $\sigma_8 =0.83$ and $n_s=0.9624$. \subsection{Halo selection} Using the parameters mentioned above, we ran a number of DM-only simulations with $N_p=256^3$ particles starting at $z_{ini}=100$. We selected one DM halo of mass $M_{DM}\approx 3\times 10^{10}[M_\odot]$ at redshift $z=6$. We gave preference to DM haloes without major mergers throughout their evolution in order to have a cleaner, less perturbed system to analyze. After the selection process we re-simulated the halo including gas physics. For these simulations we re-centered the box on the DM halo position at redshift $z=6$. We set a coarse level of $128^3$ (level 7) particles and allowed for further DM particle mass refinements until level 10 inside a variable volume called mask (as we will explain below). In this way we were able to reach a DM resolution equivalent to a $1024^3$ particle grid inside the central region of the box, which corresponds to a particle mass resolution $m_{part}\approx3\times10^4[M_\odot]$; in other words, we resolved the high redshift $\sim10^6[M_\odot]$ halo with $\ga30$ particles and our final halo with $\ga 10^6$ particles.
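The quoted particle mass can be recovered with a back-of-the-envelope check (our own estimate; it assumes $\rho_{c,0}\simeq2.775\times10^{11}\,h^2\,M_\odot\,{\rm Mpc}^{-3}$ and subtracts the baryonic share of $\Omega_m$):

```python
# Dark-matter particle mass for an effective 1024^3 grid in a 10 cMpc box,
# using the Planck 2013 parameters quoted above.
rho_crit0 = 2.775e11           # critical density today [h^2 Msun / Mpc^3]
h, omega_m, omega_b = 0.6711, 0.3175, 0.04899
L, N = 10.0, 1024              # box side [cMpc], effective cells per side

m_part = (omega_m - omega_b) * rho_crit0 * h**2 * (L / N) ** 3
print(f"{m_part:.2e} Msun")    # ~3e4 Msun, as quoted in the text
```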
\subsection{Refinement strategy} In order to resolve all the interesting regions we allowed refinements inside the Lagrangian volume associated to a sphere of radius $R_{ref}=3R_{vir}$\footnote{Here $R_{vir}\equiv R_{200}$, the radius associated to a spherical over-density 200 times higher than $\Omega_m\rho_c$, with $\rho_c$ the critical density at the corresponding redshift.} around the selected DM halo at $z_{end}=6$. Such a Lagrangian volume is tracked back in time until the initial redshift of the simulation, $z_{ini}$=100. In this way we ensure that the simulation is resolving all the interesting volume of matter throughout the experiment, i.e. all the material ending inside the $R_{ref}$ at the end of the simulation\footnote{In order to define such a volume we compute a mask. Such a mask can be computed using the geticref.f90 and the geticmask.f90 routines in the ${\rm /ramses/utils/f90/zoom\_ic}$ folder.}. The Lagrangian volume (the mask) is defined by an additional passive scalar advected by the flow throughout the simulation. At the beginning the passive scalar has a value equal to 1 inside the mask and it is 0 outside. We apply our refinement criteria in regions where this passive scalar is larger than $10^{-3}$. In our simulations a cell is refined if it is in regions where the mask passive scalar is larger than $10^{-3}$ and if one of the following conditions is fulfilled: \begin{itemize} \item it contains more than 8 DM particles, \item its baryonic content is 8 times higher than the average in the whole box, \item the local Jeans length is resolved by less than 4 cells \citep{Trueloveetal1997}, and \item the relative pressure variation between cells is larger than 2 (suitable for shocks associated to the virialization process and SNe explosions).
\end{itemize} Following those criteria the maximum level of refinement was set at $\ell_{max}=18$, corresponding to a co-moving maximum spatial resolution of $\Delta x_{min}\approx38.1 [cpc]$ and a proper spatial resolution of $\Delta x_{min}\approx5.4 [pc]$ at redshift $z=6$. With this resolution we were able to resolve the inner $0.1R_{vir}$ DM region with $\sim200$ computational cells. \subsection{Gas physics} Our simulations include optically thin gas cooling. The gas is able to cool due to H, He and metals following the \citet{SutherlandDopita93} model until it reaches a temperature of $T=10^4 [K]$. Below this temperature the gas can cool down to $T=10[K]$ due to metal line cooling. We note that the cooling functions assume collisional ionization equilibrium. The metals are modeled as passive scalars advected by the gas flow. In order to mimic the effect of H$_2$ cooling in primordial environments all our simulations started with an initial metallicity $Z_{ini}=0.001[Z_\odot]$ \citep{Powelletal2011}. Furthermore, a uniform UV background is activated at $z_{reion}=8.5$, following \citet{HaardtMadau1996}. \subsubsection{Star formation} The numerical experiments include a density threshold Schmidt law for star formation: above a given number density, set to $n_0\approx 30[cm^{-3}]$ in our case, the gas is converted into stars at a rate density, $\dot{\rho}_\star$, given by \citep[e.g. ][]{RaseraTeyssier2006,DuboisTeyssier2008}: \begin{equation} \dot{\rho}_\star=\epsilon_\star \frac{\rho}{t_{ff}(\rho)} \end{equation} where $\rho$ is the local gas density, $\epsilon_\star=0.05$ is the constant star formation efficiency and $t_{ff}(\rho)$ is the density dependent local free fall time of the gas. The number density for star formation, $n_0$, corresponds to the value at which the local Jeans length is resolved by 4 cells with a temperature $T_0=200 [K]$\footnote{In order to avoid numerical fragmentation we added a pressure floor to the hydrodynamical pressure.
The pressure floor is computed as \begin{equation} P_{floor}=\frac{\rho k_{B}}{m_H}\frac{T_{floor}}{\mu}, \end{equation} with \begin{equation} \frac{T_{floor}}{\mu}=T_0\left(\frac{n}{n_0}\right). \end{equation} The pressure floor is activated at $n_0$ for $T_0$ and at the corresponding density for different temperatures. We note that under the Jeans condition $T_{floor}\propto n$ and $P_{floor}\propto n^2$.} When a cell reaches the conditions to form stars we create star particles following a Poisson distribution \begin{equation} P(N)=\frac{\lambda_P^N}{N!}e^{-\lambda_P}, \end{equation} with $N$ the number of formed stars and \begin{equation} \lambda_P=\frac{\rho\Delta x^3}{m_\star}\frac{\Delta t}{t_\star}, \end{equation} where $\Delta x$ is the cell grid side, $m_\star\approx m_H n_0 \Delta x^3$ is the mass of the star particles, $\Delta t$ is the integration time step and $t_\star=t_{ff}(\rho)/\epsilon_\star$ is the star formation time scale. This process ends up with a population of stars inside the corresponding cell. In order to ensure numerical stability we do not allow conversion of more than $50\%$ of the gas into stars inside a cell. \subsubsection{SNe feedback} After 10 Myr the most massive stars explode as SNe. In this process a mass fraction $\eta_{SN}=0.1$ (consistent with a Salpeter initial mass function truncated between $0.1$ and $100[M_\odot]$) of the stellar populations is converted into SNe ejecta: \begin{equation} m_{eject}=\eta_{SN}\times m_\star. \end{equation} In this case $m_\star$ is not the single stellar particle mass but the total stellar mass created inside a cell. Furthermore, each SNe explosion releases a specific energy $E_{SN}=10^{51}[erg]/10 [M_\odot]$ into the gas inside a sphere of $r_{SN}=2\Delta x$: \begin{equation} E_{eject}=\eta_{SN}\times m_\star \times E_{SN}. \end{equation} As mentioned above, metals are included as passive scalars after each SNe explosion and then they are advected by the gas flows.
This means that after each SNe explosion a metallicity \begin{equation} Z_{eject}=0.1[Z_\odot] \end{equation} is included as metals in the gas in the simulation. Such an amount of metals is consistent with the yield of a $10[M_\odot]$ type II SNe from \citet{WoosleyWeaver95}. In this work we used the delayed cooling implementation of the SNe feedback \citep[discussed in][]{Teyssier+2013,Dubois+2015}. This means that in places where SNe explode, if the gas internal energy is above an energy threshold $e_{NT}$, the gas cooling is turned off for a time $t_{diss}$ in order to take into account the unresolved chaotic turbulent energy source of the explosions. As written in \citet{Dubois+2015} the non-thermal energy $e_{NT}$ evolution associated to the SNe explosions can be expressed as \begin{equation} \frac{d e_{NT}}{dt}=\eta_{SN}\dot{\rho}_\star E_{SN}-\frac{e_{NT}}{t_{diss}}=\eta_{SN}\epsilon_\star\rho \frac{E_{SN}}{t_{ff}}-\frac{e_{NT}}{t_{diss}}. \end{equation} In an equilibrium state, $de_{NT}/dt=0$, it is possible to write \begin{equation} \frac{e_{NT}}{\rho}=\eta_{SN}\epsilon_\star E_{SN}\frac{t_{diss}}{t_{ff}}. \end{equation} If we assume that the non-thermal energy is associated to a turbulent motion with a velocity dispersion $\sigma_{NT}$ and that this energy $e_{NT}=\rho\sigma_{NT}^2/2$ will be dissipated in a time scale of the order of the crossing time scale associated to the local Jeans length, $t_{diss}\approx l_J/\sigma_{NT}$, then it is possible to write \begin{equation} t_{diss}=\left(\frac{t_{ff}}{2\eta_{SN}E_{SN}\epsilon_\star}\right)^{1/3}l_J^{2/3}.
\end{equation} Then, expressing the local Jeans length as $l_J=4\Delta x$, with $\Delta x$ the proper cell side at the highest level of refinement, it is possible to write the dissipation time scale as: \begin{eqnarray} t_{diss} & \approx & 0.52[Myr]\left(\frac{0.1}{\eta_{SN}}\right)^{1/3}\left(\frac{0.05}{\epsilon_\star}\right)^{1/3}\times \\ \nonumber & & \left(\frac{\Delta x}{5.4[pc]}\right)^{2/3}\left(\frac{30[cm^{-3}]}{n_0}\right)^{1/6} \end{eqnarray} Given our parameters, we set the non-thermal energy dissipation time scale as $t_{diss}\approx0.5[Myr]$. In this model the gas cooling is switched off when the non-thermal velocity dispersion is higher than a given threshold: \begin{eqnarray} \sigma_{NT}& \approx & 49[km/s]\left(\frac{\eta_{SN}}{0.1}\right)^{1/3}\left(\frac{\epsilon_\star}{0.05}\right)^{1/3}\times \\ \nonumber & & \left(\frac{\Delta x}{5.4[pc]}\right)^{1/3}\left(\frac{n_0}{30[cm^{-3}]}\right)^{1/6} \end{eqnarray} which for us is $\sigma_{NT} \approx 49 [km/s]$. \subsection{Sink particles and black hole accretion} In order to follow the evolution of a black hole (BH) in the simulations we introduced a sink particle \citep{Bleuler&Teyssier2014} at the gas density peak inside a DM halo of $M\approx1.7\times10^{8}[M_\odot]$ at redshift $z=15.7$. The BH seed mass is $10^4[M_\odot]$, roughly following the $M_{BH}-\sigma$ relation of \citet{McConell+2011}. Such a black hole mass is in the range of masses associated to direct collapse of warm gas inside atomic cooling haloes at high redshift \citep[e.g. ][]{OhHaiman2002,LodatoNatarajan2006,Begelman+2006,Begelman+2008,Agarwal+2012,Latif+2013,Latif+2014,Choi+2015}. We did not allow further BH formation after the first one. In order to compute the mass accretion rate onto the BH we use the modified Bondi-Hoyle accretion rate described below.
\subsubsection{Modified Bondi accretion} \citet{Bleuler&Teyssier2014} implemented the modified Bondi mass accretion rate onto sink particles in the RAMSES code. They use the expression presented in \citet{Krumholz+2004} based on the Bondi, Hoyle and Lyttleton theory \citep{Bondi&Lyttleton1939,Bondi1952}. There the Bondi radius \begin{equation} {r_{BHL}}=\frac{G{M}_{BH}}{(c^2_\infty+v^2_\infty)} \end{equation} defines the sphere of influence of the central massive object of mass $M_{BH}$ and its corresponding accretion rate is given by \begin{equation} \dot{M}_{BHL}=4\pi\rho_\infty {r}^2_{BHL}(\lambda^2c^2_\infty+v^2_\infty)^{1/2} \end{equation} where $G$ is the gravitational constant, $c_\infty$ is the average sound speed and $v_\infty$ is the average gas velocity relative to the sink velocity, $\lambda$ is an equation of state dependent variable and it is ${\rm exp}(3/2)/4\approx1.12$ in the isothermal case. The density $\rho_\infty$ is the gas density far from the central mass and it is given by \begin{equation} \rho_\infty=\bar{\rho}/\alpha_{BHL}(\bar{r}/r_{BHL}) \end{equation} where $\alpha_{\rm BHL}(x)$ is the solution for the density profile in the Bondi model \citep{Bondi1952}. The variable $x=\bar{r}/r_{\rm BHL}$ is the dimensionless radius and $\bar{\rho}$ the corresponding density. In our case $\bar{r}=2\Delta x_{min}$, with $\Delta x_{min}$ the minimum cell size in the simulation. The modified Bondi accretion rate is limited by the Eddington accretion rate. In other words, the sink particle can not accrete at a rate larger than the Eddington rate, given by: \begin{equation} \dot{M}_{Edd}=\frac{4\pi G m_p M_{BH}}{\sigma_T c\epsilon_r}, \end{equation} where $m_p$ is the proton mass, $\sigma_{\rm T}$ is the Thomson scattering cross section, $c$ is the speed of light and $\epsilon_r=0.1$ is the fraction of accreted mass converted into energy. 
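The rates above can be sketched in a few lines (plain Python with cgs constants; for simplicity the $\alpha_{\rm BHL}$ density correction is omitted here, i.e. $\rho_\infty$ is taken as given, so this is only an illustrative estimate, not the RAMSES implementation):

```python
import numpy as np

G       = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
c_light = 2.998e10     # speed of light [cm/s]
m_p     = 1.6726e-24   # proton mass [g]
sigma_T = 6.652e-25    # Thomson cross section [cm^2]
Msun    = 1.989e33     # solar mass [g]

def r_bhl(m_bh, c_inf, v_inf):
    """Bondi-Hoyle-Lyttleton radius r = G M / (c^2 + v^2)."""
    return G * m_bh / (c_inf**2 + v_inf**2)

def mdot_bhl(m_bh, rho_inf, c_inf, v_inf, lam=np.exp(1.5) / 4.0):
    """BHL rate 4 pi rho r^2 sqrt(lam^2 c^2 + v^2),
    with lam = exp(3/2)/4 ~ 1.12 in the isothermal case."""
    r = r_bhl(m_bh, c_inf, v_inf)
    return 4.0 * np.pi * rho_inf * r**2 * np.sqrt(lam**2 * c_inf**2 + v_inf**2)

def mdot_edd(m_bh, eps_r=0.1):
    """Eddington rate 4 pi G m_p M / (sigma_T c eps_r)."""
    return 4.0 * np.pi * G * m_p * m_bh / (sigma_T * c_light * eps_r)

# Illustrative numbers: a 1e4 Msun seed in gas with n ~ 10 cm^-3,
# c_inf ~ 10 km/s, moving slowly (v_inf ~ 1 km/s); capped at Eddington.
m_bh = 1e4 * Msun
mdot = min(mdot_bhl(m_bh, rho_inf=10 * m_p, c_inf=1e6, v_inf=1e5),
           mdot_edd(m_bh))
print(mdot / Msun * 3.156e7, "Msun/yr")
```

For these illustrative numbers the BHL rate lies below the Eddington cap, so the seed would accrete at the Bondi-like rate.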
\section{Results} \label{results} In this work we will analyze three simulations: \begin{itemize} \item NoSNe simulation: Includes star formation and modified Bondi-Hoyle-Lyttleton (BHL) accretion rate onto sinks, \item SNe0.5 simulation: Includes star formation, BHL accretion rate onto sinks and SNe feedback with a consistent delayed cooling $t_{diss}=0.5[Myr]$, and \item SNe5.0 simulation: Similar to SNe0.5 but with a longer, non-self-consistent $t_{diss}=5.0[Myr]$. \end{itemize} We are not including AGN feedback in these experiments. This important ingredient has been left for an upcoming publication. Figure \ref{fig:mapsim} shows a gas number density projection of our systems at redshift $z=6$ at two different scales. The top rows show the large scale ($\sim 3\times10^2 [ckpc]$) view of the systems and the bottom rows show a zoom-in of the central ($\sim 10 [ckpc]$) region. In the top panels it is possible to recognize a filamentary structure converging onto the central galaxy position. Such filaments work as pipes channeling cold baryonic matter onto the converging region: the place for galaxy formation, i.e. the knots of the cosmic web. Aside from the accreted low density gas it is possible to recognize a number of over-densities associated with small DM haloes merging with the central dominant halo, a common feature of hierarchical structure formation. Such mini haloes certainly perturb the galactic disc environment, as we will see later. Whereas at large scales the differences between runs are limited to the small scale features associated with the shocks produced by SNe explosions, the bottom panels show a clear difference between simulations at the end of the experiments. The NoSNe run developed a concentrated gas rich spiral galaxy whereas both SNe runs have less concentrated gas and a much more chaotic matter distribution.
Such a difference is certainly a consequence of SNe explosions: the energy injected into the environment is able to spread the gas out of the central region and thus to decrease the average density due to the effect of the expanding SNe bubbles. Such a phenomenon is able to destroy the galactic disc, as can be seen in the central and bottom-right panels where the proto-galaxy is reduced to a number of filaments and gas clumps. Figure \ref{fig:starmaps} shows the rest frame face-on (top panels) and edge-on (bottom panels) stellar populations associated with our systems in three combined filters: i, u and v. The images were made using a simplified version of the STARDUST code \citep{Devriendt+99}\footnote{The STARDUST code computes the observed flux for a single stellar population (ssp). It assumes a Salpeter IMF and for a given stellar track it computes the associated spectral energy distribution of a ssp. This is then convolved with the different filters from SDSS to obtain the observed maps. There is no dust extinction in our case.}. \subsection{Radial profiles} Before computing any radial average from our AMR 3D data we aligned the gas spin vector with the Cartesian $\hat{z}$ direction. After this procedure, we performed a mass weighted average in the $\hat{z}$ direction in order to obtain all the interesting physical quantities associated with the disc surface: \begin{equation} \langle Q(x,y)\rangle_z=\frac{\sum_{z_i}^{z_f} Q(x,y,z) \Delta m}{\sum_{z_i}^{z_f}\Delta m}, \end{equation} with $\Delta m=\rho \Delta x^3$ and $\Delta x$ the grid size.
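The vertical mass-weighted average just defined, together with the analogous azimuthal ring average introduced next, can be sketched on a uniform toy grid as follows (a simplified stand-in for the actual AMR machinery; grid layout and bin edges are illustrative choices):

```python
import numpy as np

# Illustrative sketch of the two-step mass-weighted averaging on a uniform
# Cartesian toy grid (the real data are AMR; this is only a stand-in).
def z_average(Q, rho, dx):
    """Mass-weighted average along z: sum(Q dm)/sum(dm), with dm = rho dx^3."""
    dm = rho * dx**3
    return (Q * dm).sum(axis=2) / dm.sum(axis=2)

def radial_average(Q2d, rho2d, dx, nbins=32):
    """Azimuthal mass-weighted average of a disc-surface map into radial bins."""
    ny, nx = Q2d.shape
    y, x = np.indices((ny, nx))
    r = np.hypot((x - nx / 2 + 0.5) * dx, (y - ny / 2 + 0.5) * dx)
    dm = rho2d * dx**2                      # surface mass element
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.minimum(np.digitize(r.ravel(), edges) - 1, nbins - 1)
    num = np.bincount(idx, weights=(Q2d * dm).ravel(), minlength=nbins)
    den = np.bincount(idx, weights=dm.ravel(), minlength=nbins)
    return np.where(den > 0, num / np.maximum(den, 1e-300), 0.0)
```

For a uniform density field the two averages reduce to plain arithmetic means, which provides a simple sanity check.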
In order to get our radial quantities we have performed a mass weighted average in the cylindrical $\hat{\theta}$ direction of our disc surface data: \begin{equation} \langle Q(r)\rangle_\theta=\frac{\sum_{\theta=0}^{\theta=2\pi} Q(r,\theta) \Delta m}{\sum_{\theta=0}^{\theta=2\pi}\Delta m}, \end{equation} where $r$ and $\theta$ are the radial and azimuthal cylindrical coordinates, with $\Delta m=r \Delta r \Delta\theta$ and $\Delta r=\Delta x$. Figure \ref{fig:surfprof} shows the gas surface density\footnote{In this case we have added the total mass in the $\hat{z}$ direction (not an average) divided by the corresponding area, and then averaged in the cylindrical direction in order to get the mass weighted radial SD for gas, stars and SFR.} (SD) (top panels), the stellar SD (central panels) and the SD of the star formation rate (bottom panels) for different redshifts as a function of radius. (The line style-redshift relation will be kept for the following plots.) All our simulations show a number of peaks in the gas SD profiles at almost all the sampled redshifts, evidence of the irregular and clumpy structure of the gas. The second row of the figure shows clearer differences between our runs. Apart from an almost monotonic increase of the stellar SD in each simulation, it is also possible to see lower central peaks from left to right. Such a trend is a consequence of the different strengths of the feedback, which also increases from left to right. The bottom panels show the SD of the star formation rate (SFR). In order to compute this quantity we took into account all the stars with an age $t_\star< 10 Myr$ formed in the time elapsed between consecutive outputs (which is in the range $\Delta t\sim7-9Myr$). Inside one tenth of the virial radius we compute the height and the radius where $90\%$ of the gas and stars are enclosed.
We averaged these scales for stars and gas to define a cylinder in which we compute the SFR (a similar procedure will be used in the next sub-section to compute the Kennicutt-Schmidt law). We can see that our NoSNe run shows higher SD SFR peaks above $\sim 10 pc$ and that the SNe5.0 run has the lowest SD SFR. This will be confirmed after computing the global SD against the global SD SFR in the next section. Figure \ref{fig:velprof} shows different gas velocities associated with our systems at different redshifts as a function of radius. It plots the radial velocity $v_r\equiv\vec{v}\cdot \hat{r}$ (solid black line), the azimuthal velocity $v_\theta\equiv\vec{v}\cdot \hat{\theta}$ (long-dashed blue line) and the spherical circular velocity $v_{circ}$ of the disc (dot-dashed cyan line) defined as: \begin{equation} v_{circ}=\left(\frac{GM(<r)}{r}\right)^{1/2} \end{equation} where $G$ is Newton's constant and $M(<r)$ is the total (gas, stars and DM) mass inside the radius $r$. In our simulations the radial velocity fluctuates from negative to positive values. Such a feature is evidence of a non-stationary disc where at some radii there is inflowing material whereas at other radii there are gas outflows. Such features can be produced by virialization shocks, DM halo mergers or SNe explosions. It is worth noticing that at $r\ga 1$ kpc, which roughly corresponds to the outer edge of the galactic disc, the gas is inflowing in most of the cases. Such a feature is a remarkable signal of radial gas inflows at a distance $\sim 0.1 R_{vir}$ from the center of the system. This fast radial material comes from larger scales, channeled by the filamentary structure shown in the top panels of figure \ref{fig:mapsim}, and, as mentioned above, it supplies the central DM halo region with cold gas at rates as high as $\sim10[M_\odot/yr]$ as we will see in the following sections.
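As a rough order-of-magnitude check of the circular velocity definition above (the enclosed mass and radius below are made-up but representative values, not measurements from the simulations):

```python
import numpy as np

# Illustrative evaluation of v_circ = (G M(<r)/r)^(1/2) in cgs units.
G, MSUN, KPC = 6.674e-8, 1.989e33, 3.086e21

def v_circ(m_enclosed, r):
    """Spherical circular velocity for total enclosed mass m_enclosed at radius r."""
    return np.sqrt(G * m_enclosed / r)

# 1e10 Msun (gas + stars + DM) inside 1 kpc gives v_circ ~ 200 km/s.
v_kms = v_circ(1e10 * MSUN, 1.0 * KPC) / 1e5
```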
The orbital velocity tends to be roughly similar to the spherical circular velocity at large radii $r\ga 100[pc]$ in most of the cases, but in general the circular velocity does not follow the spherical circular orbit. Such deviations can be explained by the shocked gas inflows, the mergers suffered by the central halo and the SNe explosions which enhance the pressure support against gravity. We emphasize that these kinds of interactions have a gravitational effect due to tidal forces (mergers and clump-clump interaction) on the disc and also have a hydrodynamical effect (shocks). In our SNe runs it is clear that the spherical circular curves are lower than the NoSNe curve. In other words the enclosed mass inside $\sim 0.1 R_{vir}$ is lower in the SNe runs. That is because the SNe explosions spread the gas out of the central region. Actually, from the shocked gas features in the top right panel of figure \ref{fig:mapsim} it is possible to see that the outflows can reach regions at $\sim100 [ckpc]$ from the central galaxy, i.e. $\sim15 [kpc]$ at $z=6$. \subsection{Star formation} Despite the lack of observations of the Kennicutt-Schmidt (KS) law at high redshift ($z\ga 6$) it is interesting to compare the KS law from our simulations with its currently accepted functional form from \citet{Kennicutt98} (hereafter K98). Furthermore, it is also interesting to compare our data with the more recent results of \citet{Daddi+10} (hereafter D10) for normal and star burst galaxies. Figure \ref{fig:ksl} shows the KS law for our runs. Each point marks the SD SFR as a function of the total gas SD. The SD SFR was computed following a similar procedure as in the previous sub-section. There is a correlation between the gas SD and the level of feedback in our results: the higher the feedback the lower the gas SD, which is a natural consequence of the gas heating due to SNe events.
Whereas the NoSNe run shows points covering $\sim 1$ decade at high SD, with a large scatter in SD SFR from below the D10 normal galaxies sequence to above the D10 star burst sequence, the SNe runs cover a larger SD range. Both SNe runs are in agreement with the star burst sequence of D10 and the SNe0.5 simulation shows a lower scatter in the points. Such behavior could be due to the number of mergers suffered by these kinds of haloes at high redshift. Figure \ref{fig:mvirmstar} shows the stellar mass normalized by $f_b M_{vir}$, where $f_b\equiv \Omega_b/\Omega_m$ is the universal baryonic fraction. At the end of the simulation our SNe0.5 galaxy has a stellar metallicity $Z_\star=0.1[Z_\odot]$ and our SNe5.0 galaxy a metallicity of $Z_\star=0.04[Z_\odot]$. It is clear from the figure that the NoSNe run is producing many more stars than our SNe runs and that, due to the extreme feedback, our SNe5.0 simulation forms fewer stars than our SNe0.5 simulation. When we compare our results with those shown in \citet{Kimm+2015} we can see that they are in the range of their MFB and MFBm simulations at a similar mass $\sim10^{10}[M_\odot]$. Despite the uncertainties and the lack of robust observational constraints, such values are not far (a factor of $\sim$ a few for SNe5.0 and just within the order of magnitude for SNe0.5) from the prediction of \citet{Behroozi+13} (hereafter B13), where the stellar to halo mass ratio is of the order of a few $\sim 10^{-2}$ in the same mass range at high redshift. Figure \ref{fig:SFR} shows the SFR for our runs as a function of redshift. When we compare the NoSNe run with our SNe runs the main difference arises in the continuity of the SFR history. Whereas the NoSNe run shows a continuous line, the SNe runs present periods of almost zero SFR. Such periods last a few $\sim 10[Myr]$ and are more frequent in our SNe5.0 run due to the stronger feedback.
Despite the large fluctuations in the SFR data our SNe runs tend to be in the range $\sim1-10[M_\odot/yr]$. Such numbers are in line with those found by \citet{Watson+15} (hereafter W15) and references therein for high redshift galaxies. Taking into account the uncertainties of the predictions, if we compare our SNe run results with B13 they tend to be below or similar to their data for a $\sim 10^{11}[M_\odot]$ halo at $z\ga6$. \begin{figure*} \centering \includegraphics[width=2.0\columnwidth,height=1.0\columnwidth]{multiGasDM2.pdf} \caption{Mass weighted projection of the gas number density for our simulations: NoSNe left column, SNe0.5 central column and SNe5.0 right column. The top row is a large scale ($\sim30$ ckpc square side) view of our systems and the bottom row is a zoom-in of the central region of the system ($\sim 1$ ckpc square side). From the top panels it is possible to identify the filamentary structure converging at the central region of the system: the galaxy position. Such filaments channel and feed the galactic structure. At large scales it is possible to recognize shock waves associated with the SNe explosions of our SNe runs. Besides the low density gas, there are a number of over-densities associated with small DM haloes about to merge with the central structure. The bottom panels show a dramatic difference between our simulations: a compact gas rich spiral galaxy for the NoSNe experiment, a rough spiral galaxy disturbed by SNe feedback in our SNe0.5 run and a group of clumps in our SNe5.0 simulation.} \label{fig:mapsim} \end{figure*} \subsection{Disc stability} High redshift galactic environments have a high gas fraction $f_g\ga 0.5$ \citep[e.g. ][]{Mannucci+2009,Tacconi+2010}. Figure \ref{fig:gasfraction} shows the gas fraction of our systems as a function of redshift. Here we define the gas fraction as the ratio between the galactic gas mass and the mass of the gas plus the stars in the galaxy: $f_g\equiv M_{gas}/(M_{gas}+M_{star})$.
All our systems show a high gas fraction with values fluctuating around $\sim 0.8$. In fact the average values for our simulations are $\langle f_{g}\rangle_{NoSNe}=0.86$, $\langle f_{g}\rangle_{SNe0.5}=0.83$ and $\langle f_{g}\rangle_{SNe5.0}=0.82$ below $z=8$. If we average below redshift $7$ we find $\langle f_{g}\rangle_{NoSNe}=0.87$, $\langle f_{g}\rangle_{SNe0.5}=0.80$ and $\langle f_{g}\rangle_{SNe5.0}=0.85$. It is interesting to compare such numbers with those found by W15. In that work the authors describe the properties of a $z\approx 7.5$ galaxy. The galaxy at this redshift has a gas fraction $f_g=0.55\pm0.25$; in other words, our values of $f_g$ are within the errors associated with their observations, as we can see in figure \ref{fig:gasfraction}, with the SNe runs closer to the observational expectations. The non-stationary and highly dynamic nature of these high gas fraction systems makes them susceptible to gravitational instabilities. In order to analyze the disc stability throughout its evolution we will use the Toomre parameter, $Q_T$, stability criterion \citep{Toomre1964}: \begin{equation} Q_T=\frac{c_s\Omega}{\pi G \Sigma}. \end{equation} A convenient modification of the Toomre parameter to take into account the turbulent velocity dispersion of the fluid has the form $Q_T=v_{rms}\Omega/\pi G \Sigma$. Despite the {\it ad hoc} modification of the parameter, it is not straightforward to interpret the turbulent velocity dispersion of the gas as a source of pressure counteracting gravity \citep{ElmegreenScalo2004}. This comes from the fact that this pressure term could only be defined in the case where the dominant turbulent scale is much smaller than the region under consideration, which is in fact not the case for the ISM. Rigorous analysis indeed shows that turbulence can be represented as a pressure only if the turbulence is produced at scales smaller than the Jeans length \citep[micro-turbulence in ][]{Bonazzola+1992}.
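As an illustration of the magnitudes involved, the thermal Toomre parameter defined above can be evaluated for representative ring values (the inputs below are made-up illustrative numbers, not measurements from the simulations):

```python
import numpy as np

# Illustrative evaluation of the Toomre parameter in cgs units.
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
PC = 3.086e18         # parsec [cm]

def toomre_q(cs, omega, sigma):
    """Q_T = c_s Omega / (pi G Sigma); substitute v_rms for c_s to get the
    turbulent variant discussed in the text."""
    return cs * omega / (np.pi * G * sigma)

# Example ring at r = 100 pc: c_s = 10 km/s, v_circ = 50 km/s,
# Sigma = 0.1 g/cm^2 -> Q_T of order unity, i.e. marginally stable.
q = toomre_q(1.0e6, 5.0e6 / (100 * PC), 0.1)
```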
Given this caveat about turbulent pressure, the gravitational instability analysis is not strictly applicable with a turbulent pressure term that could stabilize and dampen all the substructure below the unstable scale associated with $v_{rms}$. The left column of figure \ref{fig:toomreprof} shows the Toomre parameter for our three runs at different redshifts. The gray dashed horizontal line marks the $Q_T=1$ state. For completeness, the right column of figure \ref{fig:toomreprof} shows the Toomre parameter associated with the turbulent velocity dispersion. Due to the high Mach numbers (see appendix \ref{appendixD}) of these systems it is $\ga$ 1 order of magnitude above the thermal Toomre parameter. Our NoSNe run tends to have lower values with a smaller dispersion compared with our SNe runs. In the case of no feedback the Toomre parameter fluctuates around 1 above $z=7$, showing an unstable disc at high redshift. At $z=6$ the parameter is of order $\sim10^0$ inside $\sim 100 pc$ and above this radius it increases due to the combined effect of low density and higher sound speed (high temperature), stabilizing the system at these radii. Due to the higher temperatures associated with SNe explosions, the Toomre parameter tends to be larger in our SNe runs, showing a more stable system in these cases. Despite this, it is also possible to find regions with $Q_T\approx 1$ in our feedback runs. We have to take into account that after each SNe explosion a given amount of metals is released into the gas. Such a new component allows the gas to reach lower temperatures, creating unstable regions. We applied the clump finder algorithm of \citet{Padoan+2007} to our galactic disc inside a $\sim1-1.5[kpc]$ box. The clump finder algorithm scans regions of density above $5\times 10^2n_{avg}$, with $n_{avg}$ the average density inside the analyzed box, which is of order $\sim 5[cm^{-3}]$. In practice this means that we look for gas clumps at densities above $\sim 10^3[cm^{-3}]$.
The scan is performed increasing the density by a fraction $\delta n/n=0.25$ until the maximum box density is reached. For each step the algorithm selects the over-densities with masses above the Bonnor-Ebert mass in order to define a gravitationally unstable gas clump. This algorithm gave us clump masses in the range $\sim$ a few $10^5-10^8[M_\odot]$. Figure \ref{fig:scalemass} shows the clump mass function found in each of our simulations at different redshifts. In order to complement this analysis we have computed the mass associated with the maximum unstable scale length of a rotating disc \citep{Escala&Larson08} \begin{equation} M_{cl}^{max}=\frac{\pi^4 G^2 \Sigma_{gas}^3}{4\Omega^4}. \end{equation} The vertical lines of figure \ref{fig:scalemass} mark the average $M_{cl}^{max}$ at each sampled redshift. Our NoSNe run formed the biggest gas clumps. In this case, due to the lack of feedback, the most massive objects ($M_{clump}\approx8\times10^7 [M_\odot]$) can survive at different redshifts. Such a mass is of the order of the expected $M_{cl}^{max}\ga10^8[M_\odot]$. The SNe runs could form objects as big as $M_{clump}\approx2\times10^7 [M_\odot]$. These masses are below the $M_{cl}^{max}\ga$ a few $10^8[M_\odot]$. The SNe0.5 simulation forms significantly more massive objects than the SNe5.0 run. Due to the extreme feedback of the SNe5.0 experiment it is not easy for clumps to survive in such a violent environment. This is why the SNe5.0 simulation forms fewer clumps throughout its evolution. All the clumps formed in our simulations have sizes in the range of $\lambda_{clump}\sim$ a few $10^0 [pc]$ to a few $10^1 [pc]$ (note that since this size is associated with all the cells above the threshold $n_{avg}$, it is a minimum size: it could increase if we reduced $n_{avg}$).
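For reference, the maximum clump mass above can be evaluated for representative disc values; the surface density and angular frequency below are illustrative choices (not simulation measurements) that land in the mass regime quoted in the text.

```python
import math

# Illustrative evaluation of M_cl^max in cgs units.
G, MSUN = 6.674e-8, 1.989e33
KMS, KPC = 1.0e5, 3.086e21

def m_cl_max(sigma_gas, omega):
    """M_cl^max = pi^4 G^2 Sigma_gas^3 / (4 Omega^4)."""
    return math.pi**4 * G**2 * sigma_gas**3 / (4.0 * omega**4)

# Sigma_gas = 0.03 g/cm^2 (~140 Msun/pc^2), Omega = (50 km/s)/(1 kpc)
# gives a few 1e8 Msun, of the order quoted above.
m_max_msun = m_cl_max(0.03, 50 * KMS / KPC) / MSUN
```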
These sizes are below the unstable length scale (averaged over the inner $\sim 1[kpc]$) associated with the maximum clump mass: $\lambda_{clump}< \lambda_{cl}^{max}=4\pi^2 G \Sigma_{gas}/\Omega^2\sim$ a few $10^2 [pc]$. \subsection{Mass transport on the disc} It is well known that in a cosmological context the large scale ($\ga R_{\rm vir}$) gas cooling flows associated with DM filaments converging onto DM haloes influence the small scale ($\la R_{\rm vir}$) galactic AM \citep[e.g. ][]{Powelletal2011,PrietoSpin,Danovich2015}. Such an interplay between large and small scales suggests that the mass/AM transport analysis should be performed taking both regimes into account. \begin{figure*} \centering \includegraphics[width=2.0\columnwidth,height=11.5cm]{starmaps2.pdf} \caption{Combined rest frame stellar visualization for our three runs using SDSS $u$, $v$ and $i$ filters in blue, green and red colors, respectively. The images correspond to the end of our simulations and there is no dust extinction. The face-on view of the NoSNe system shows a smoother star distribution compared with our SNe runs, where feedback is able to create a non-homogeneous star distribution characterized by green-blue star clumps around the center of the system.} \label{fig:starmaps} \end{figure*} \subsubsection{Stresses on the disc} The MT on the galactic disc can be studied based on the momentum conservation equation. Written in its conservative form this equation tells us that the local variation of momentum is due to the rate of momentum fluxes: \begin{equation} \frac{\partial (\rho v_i)}{\partial t}+\frac{\partial}{\partial x_k}(R_{ik}+P_{ik}-G_{ik})=0, \label{peq} \end{equation} where $\rho$ is the gas density, $x_i$ are the Cartesian coordinates and $v_i$ are the Cartesian components of the gas velocity. All the terms inside the divergence are related to rates of momentum flux and they can be written as follows: \begin{equation} R_{ik}=\rho v_i v_k.
\end{equation} This is the term associated with the Reynolds (or hydrodynamic) stress. It is the momentum flux term associated with the bulk fluid motion. Rather than being a source of momentum flux, it quantifies the momentum transported due to the combination of different phenomena on the disc, namely gravitational stresses, magnetic stresses, viscous stresses or pressure stresses. \begin{equation} P_{ik}=\delta_{ik}P. \end{equation} This is the pressure term, where $\delta_{ik}$ is the Kronecker delta symbol and $P$ is the gas pressure; its gradient will be a source of torque, as we will show below. \begin{equation} G_{ik}=\frac{1}{4\pi G}\left[\frac{\partial \phi}{\partial x_i}\frac{\partial \phi}{\partial x_k}-\frac{1}{2}(\nabla\phi)^2\delta_{ik}\right]. \end{equation} Here $\phi$ is the gravitational potential and $G$ is Newton's constant. $G_{ik}$ is the term associated with the gravitational stress and it is related to the motion of the fluid due to the gravitational acceleration. This term will also be a source of torques acting on the fluid, as we will show later. Because we are not including magnetic fields we have neglected the associated term. Furthermore, the dissipative-viscous term is negligible in this context and will not be taken into account in the following discussion \citep[e.g. ][]{Balbus2003}. In the disc MT context it is useful to quantify the momentum transport in the $\hat{r}$ direction due to processes in the $\hat{\theta}$ direction, where $\hat{r}$ and $\hat{\theta}$ are the radial and the azimuthal cylindrical coordinates, respectively. If $F_{r\theta}$ is the rate of momentum flux in the $\hat{r}$ direction due to processes in the $\hat{\theta}$ direction associated with any of the stresses mentioned above, in general we can write (see appendix \ref{appendixA}) \begin{equation} F_{r\theta}=\frac{1}{2}(F_{yy}-F_{xx})\sin2\theta+F_{xy}\cos2\theta.
\end{equation} After some algebra it is possible to write the momentum fluxes for each of our sources as follows \citep[e.g. ][ and references therein]{Balbus2003,Fromang+2004}: \begin{equation} R_{r\theta}=\rho v_r v_\theta, \end{equation} \begin{equation} P_{r\theta}=0 \quad {\rm and} \end{equation} \begin{equation} G_{r\theta}=\frac{1}{4\pi G}\nabla_r\phi \nabla_\theta \phi. \end{equation} It is worth noticing that in the case of $\theta$ symmetry the gravitational term vanishes. Conversely, any density perturbation in the $\hat{\theta}$ direction, e.g. an asymmetric density distribution of gas clumps in the disc, will cause a momentum flux in the $\hat{r}$ direction. This will be the term associated with the VDI, as we will show later. The terms associated with the Reynolds and the gravitational stress as defined in the above expressions are averaged in space in order to quantify the radial momentum flux associated with perturbations in the azimuthal direction \citep{Hawley2000}. The Reynolds and the gravitational stress are defined as follows\footnote{An alternative definition of the Reynolds stress from \citet{ Hawley2000} is presented in appendix \ref{appendixF} with similar results.}: \begin{equation} \langle R_{r\theta}\rangle=\langle \rho v_r \delta v_\theta\rangle, \label{reynoldsstress} \end{equation} \begin{equation} \langle G_{r\theta}\rangle=\frac{1}{4\pi G}\langle\nabla_r\phi\nabla_\theta\phi\rangle, \end{equation} where $\delta v_\theta\equiv v_\theta-\langle v_\theta\rangle$, with $\langle v_\theta\rangle$ the average circular velocity of the fluid, and the averages are computed as \begin{equation} \langle f(r,z,\theta)\rangle=\frac{\int\int r d\theta dz f(r,z,\theta)\rho}{\int\int r d\theta dz\rho}. \end{equation} In this context it is useful to define an $\alpha$ parameter for each of our stresses.
For a given rate of momentum flux, following \citet{Gammie2001}, we define: \begin{equation} \alpha_{r\theta}=\alpha_{R,r\theta}+\alpha_{G,r\theta}=\Big\langle\frac{R_{r\theta}+G_{r\theta}}{P}\Big\rangle. \end{equation} Each $\alpha$ parameter is interpreted as the rate of momentum flux associated with a given process normalized by the gas pressure. Because the gas pressure is $P\sim (\rho c_s)\times c_s$, it can be interpreted as a ``thermal momentum'' advected at the sound speed or as a ``thermal rate of momentum flux''. In this sense an $\alpha \ga 1$ is a sign of super-sonic motions in the fluid. This parameter is $\alpha\approx 0.02$ for ionized and magnetized discs \citep{Fromang+2004,NelsenPapaloizou2003}, while observations of proto-stellar accretion discs \citep{Hartmann+1998} and of the optical variability of AGN \citep{Starling+2004} give an alpha parameter $\sim 0.01$. Due to the turbulent (i.e. high Mach number, see appendix \ref{appendixD}) nature of the environments studied here, the alphas will typically be higher than 1. In fact, $\alpha_R\la M^2$, with $M$ the gas Mach number. Figure \ref{fig:stressprof} shows the radial values of $\alpha_{R,r\theta}$ in the top row and $\alpha_{G,r\theta}$ in the bottom row for our three simulations in different columns. The first thing to notice from this figure is that the Reynolds $\alpha$ parameters are constant neither in time nor in space, and furthermore they reach values well above unity. In other words, our high redshift galactic discs are not in a steady state. Such a dynamical condition does not allow the use of the \citet{SS1973} mass accretion rate expression as a function of the computed $\alpha$ parameter (see appendix \ref{appendixB} for a more detailed discussion). Instead, we must compute the mass accretion rate directly from our data.
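A compact sketch of how the density-weighted stress averages and the corresponding $\alpha$ parameters defined above could be evaluated is given below; the ring arrays are toy stand-ins for the simulation fields, and the discrete weighted sum replaces the integral average.

```python
import numpy as np

# Illustrative sketch of the density-weighted averages and alpha parameters
# defined above, on toy ring arrays (theta samples at a fixed radius).
FOUR_PI_G = 4.0 * np.pi * 6.674e-8

def mass_avg(f, rho):
    """Density-weighted ring average standing in for the integral average."""
    return (f * rho).sum() / rho.sum()

def alphas(rho, v_r, v_th, dphi_dr, dphi_dth_over_r, P):
    """Return (alpha_R, alpha_G) for one ring of theta samples."""
    dv_th = v_th - mass_avg(v_th, rho)            # azimuthal velocity fluctuation
    R = rho * v_r * dv_th                         # Reynolds stress
    Gs = dphi_dr * dphi_dth_over_r / FOUR_PI_G    # gravitational stress
    return mass_avg(R / P, rho), mass_avg(Gs / P, rho)
```

By construction, a ring with constant $v_\theta$ has no Reynolds contribution, and an axisymmetric potential ($\nabla_\theta\phi=0$) has no gravitational contribution, mirroring the statements in the text.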
From figure \ref{fig:stressprof} it is clear that the Reynolds stress tends to be much larger than the gravitational stress and thus dominates the MT process in most of the cases (note that the top and bottom panels do not share the same $\alpha$ range). In other words, the rate of momentum flux associated with the gravitational potential gradients is lower than the rate of momentum flux associated with local turbulent motions of the gas in most of the cases. Such high values of $\alpha_R$ are associated with high velocity dispersions, which can be an order of magnitude above the sound speed. We note that our two SNe runs have lower $\alpha_R$ due to the higher sound speed in their environment. Here we emphasize that the Reynolds tensor is not a source of momentum flux: if we start the disc evolution from a spherical circular rotation state, i.e. without a radial velocity component, with null viscosity and one small gravitational potential perturbation in the $\hat{\theta}$ direction, the variation in momentum will be associated with the gravitational stress, and the appearance of the Reynolds stress will be a consequence of this process. It is interesting to note that the $\alpha_{G,r\theta}$ parameter in our NoSNe and SNe0.5 runs has a decreasing trend with the galactic radius at some redshifts: the smaller the radius, the larger the gravitational stresses. If we take into account that the accreted material tends to concentrate in the inner part of the galaxy, then it is reasonable that the larger gravitational stresses act at small radii. In the NoSNe run it is of the order of the pressure at the galactic center at all redshifts, whereas in the SNe0.5 run it is comparable to the pressure at high $z$. Due to the high feedback, the SNe5.0 run is dominated by the Reynolds stress at all the sampled redshifts.
\begin{figure} \centering \includegraphics[width=1.0\columnwidth]{profGasStars.pdf} \caption{Radial profiles of the gas SD (top panels), stellar SD (central panels) and SD SFR (bottom panels) for the NoSNe run in the left column, the SNe0.5 run in the central column and the SNe5.0 run in the right column. All the quantities are plotted for different redshifts (from 6 in blue to 10 in green): $z=6$ (solid line), $z=7$ (long-dashed line), $z=8$ (short-dashed line), $z=9$ (dotted line) and $z=10$ (dot-dashed line). The vertical lines mark $r=0.1R_{vir}$ at each $z$ following the same line style as the profiles. The top and central panels show density fluctuations associated with the non-homogeneous nature of the galactic disc at high redshift. SNe feedback clearly decreases the amount of stars formed in our galaxies.} \label{fig:surfprof} \end{figure} \subsubsection{Torques on the disc} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{profVel.pdf} \caption{The figure shows the gas radial velocity (solid black line), the orbital velocity (long-dashed blue line) and the spherical circular velocity (dot-dashed cyan line) as a function of radius at different redshifts for the NoSNe run (left column), the SNe0.5 run (central column) and the SNe5.0 run (right column). The vertical lines mark the $r=0.1R_{vir}$ position.} \label{fig:velprof} \end{figure} After observing that the Reynolds stress associated with the gas turbulent motions dominates the rate of momentum flux in the disc and that the gravitational $\alpha$ tends to reach its maximum at the central galactic region, it is relevant for the MT study to analyze the torques acting on the disc associated with forces in the $\hat{\theta}$ direction. In order to do that we compute the torques associated with both gravity and the gas pressure for our systems.
We define these two quantities as: \begin{equation} \vec{\tau}_G=\vec{r}\times\nabla\phi, \end{equation} \begin{equation} \vec{\tau}_P=\vec{r}\times\frac{\nabla P}{\rho} \end{equation} which are actually specific torques, i.e. torques per unit gas mass. These two terms will act as sources of AM transport in the galactic disc and will give us some clues about the MT process in high redshift galactic discs. In order to compute them we have defined the radial origin to be at the cell where the sink particle is set. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{KSLdt.pdf} \caption{KS law for each of our snapshots from $z=10$ to $z=6$. Each point marks the KS relation for our galaxies, computed as mentioned in the text, at different redshifts in different colors. From top to bottom: NoSNe, SNe0.5 and SNe5.0. The solid black line marks the K98 fit. The thick long-dashed green line marks the D10 fit for normal galaxies and the short-dashed blue line marks the D10 fit for star burst galaxies. Our SNe0.5 run has the lowest scatter, and most closely follows the D10 sequence of star burst galaxies.} \label{fig:ksl} \end{figure} Figure \ref{fig:profTorque} shows the ratio between $\tau_G$ and $\tau_P$, with $\tau_i\equiv|\vec{\tau}_i|$. The NoSNe run shows a decreasing trend with radius, as in the $\alpha_{G}$ profile. The pressure gradients tend to dominate above $\sim 100 [pc]$ and gravity dominates in the innermost region. As already shown in the alpha profiles, in the SNe0.5 run gravity dominates the central part of the system at high $z$, while at lower redshifts the pressure torques are the source of mass transport. Finally, due to the high feedback, which is able to create strong shocks and destroy gas clumps, the SNe5.0 simulation tends to be dominated by torques associated with pressure gradients, in line with the previous alpha results. As a complement to our findings it is useful to take a look at the torques at large scales.
Figure \ref{fig:multiTorque} shows the ratio of the total torques (not only the $\hat{z}$ component in the disc) $\tau_G/\tau_P$ for our three runs at two different redshifts: $z=10$ in the top row and $z=6$ in the bottom row. The maps take into account the gas with density above $50\times\Omega_b\rho_c$, where $\rho_c$ is the critical density of the Universe. Such a cut in density was set by inspection in order to have a clear view of the filaments around the central DM halo. It is interesting to note that in our three simulations the border of the filaments is clearly dominated by the pressure torque: material from voids falls onto the filamentary DM structure creating large pressure gradients \citep{Pichon+2011,Danovich2015}. There it loses part of its angular momentum and flows onto the DM halo. At high redshift it is possible to see that inside the filaments the gravitational torque is $\la 0.1$ of the pressure torque. The picture changes when we look at the bottom panels: there the shocked filaments have a ratio $\tau_G/\tau_P\la 10^{-2}$. It is possible to find regions with a ratio $\tau_G/\tau_P> 10^{-2}$ around gas over-densities and near the main central halo. All the gas over-densities, in general associated with DM haloes at these scales, have a higher gravitational to pressure torque ratio. In particular, at $z=6$ we can see that the central galactic region of the NoSNe simulation is dominated by the gravitational torque. This is not the case for our SNe runs, where at low redshift the pressure torque dominates the AM re-distribution. Such behavior confirms the radial profile results of figure \ref{fig:profTorque} and the $\alpha$ parameters of figure \ref{fig:stressprof}. At the edge of the galactic disc the pressure torque associated with the in-falling shocked material tends to dominate the AM variations, whereas at the central region the gravitational potential gradient is the main source of torque in the NoSNe simulation.
In the SNe runs the energy injection spreads out the high density material and the potential at the center of the galaxy is flatter, so that there is no clear gravity domination there. Such behavior is more evident in our SNe5.0 run, where it is possible to see a dark region in the center of the system. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{mvirmstar.pdf} \caption{Halo mass - stellar mass (normalized by the halo baryonic content assuming that it has exactly the universal baryonic fraction $f_b$) relation as a function of redshift in different colors for our runs. From top to bottom: NoSNe, SNe0.5 and SNe5.0. The filled circles mark the mass in stars inside the virial radius and the empty triangles mark the mass in stars inside one tenth of the virial radius. Our SNe5.0 run is in closest agreement with B13 at $\sim10^{10}M_\odot$, while for high $z$ galaxies our SNe0.5 experiment is still within the order of magnitude predicted by B13.} \label{fig:mvirmstar} \end{figure} Having clarified that the sources of pressure gradients are the shocks associated with both the filamentary incoming material from the cosmic web and the SNe explosions, it is interesting to elucidate the origin of the gravitational torque acting mainly in the central region of the galactic disc. In order to do that it is useful to study the density distribution in the disc. In particular, it is worth computing the Fourier modes associated with the gas mass surface density: \begin{equation} c_m=\frac{1}{2\pi}\int_{-\pi}^{\pi}d\theta\int_{0}^{\infty}dr\, e^{im\theta}\, r\,\Sigma(r,\theta). \end{equation} Figure \ref{fig:cmode} shows the square of each Fourier mode (from $m=1$ to $m=15$) normalized by $|c_0|^2$ for five different redshifts for our three runs. It is clear from the figure that the $m=1$ mode has the highest power in the spectrum at all the shown redshifts.
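A direct discretisation of $c_m$ on a polar grid can be sketched as follows (illustrative Python with a simple rectangle rule, assuming $\Sigma(r,\theta)$ sampled on shells; the paper's actual measurement may differ):

```python
import numpy as np

def mode_powers(Sigma, r, theta, m_max=15):
    """Return |c_m|^2 / |c_0|^2 for m = 0..m_max.

    Sigma is a 2D array sampled on the polar grid (r[i], theta[j]);
    the double integral is evaluated with a rectangle rule.
    """
    dr = r[1] - r[0]
    dtheta = theta[1] - theta[0]
    R, TH = np.meshgrid(r, theta, indexing='ij')
    power = np.empty(m_max + 1)
    for m in range(m_max + 1):
        c_m = np.sum(np.exp(1j * m * TH) * R * Sigma) * dr * dtheta / (2.0 * np.pi)
        power[m] = np.abs(c_m) ** 2
    return power / power[0]
```

For a pure bar-like perturbation $\Sigma\propto 1+0.5\cos 2\theta$ all odd modes vanish and $|c_2|^2/|c_0|^2=1/16$, so a spectrum with many comparable modes, as found here, signals a genuinely complex azimuthal structure.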
Despite that, it is also possible to see that the difference in power between the first and the second mode is not large at any of the sampled redshifts, i.e. $|c_2|^2/|c_1|^2\ga 0.5$. In this sense it is not possible to say that the first mode is the only contribution to the surface density spectrum, because the second mode (and even the third one) is also important. Furthermore, it is worth noticing that the powers with $m>2$ are also present, with values $\ga 10^{-2}$ below $m\approx 6$. It is interesting to compare our result with the one shown in \citet{Krumholz+2007}, where the source of the torques on a proto-stellar disc was found to be associated with the domination of the $m=1$ mode due to the SLING instability \citep{Adams1989,Shu1990}. In their case the power of the first mode is at least one order of magnitude higher than that of the $m=2$ mode, with the $m=1$ mode increasingly dominating with time. They argue that the $m=1$ spiral mode produces global torques which are able to efficiently transport AM. In our case, the global perturbation is associated with a more complex disc structure. The reason for this difference becomes clear after looking at the surface gas density projections. Figure \ref{fig:sigmagas} reflects the fact that the power spectrum of the gas surface density shows power in several modes $m$. There we can see a complex spiral-clumpy structure defining the galactic disc. Such density field features create an inhomogeneous gravitational potential field which exerts torques on the surrounding medium. In particular, the clumps formed on the disc by gravitational instabilities interact with one another and migrate to the central galactic region: the VDI acts on these high redshift clumpy galactic discs \citep{Bournaud+2007}. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{SFR.pdf} \caption{SFR as a function of redshift for our experiments: NoSNe (black solid line), SNe0.5 (dashed blue line) and SNe5.0 (dot-dashed line).
It is worth noticing that above redshift $z\approx10$ the NoSNe simulation presents a higher SFR than the SNe simulations: the NoSNe experiment can form stars continuously, without feedback. The SNe runs show a ``bursty'' nature, with peaks of SF every $\sim$ few $10[Myrs]$.} \label{fig:SFR} \end{figure} It is worth noticing that due to the SNe energy injection in the SNe runs the disc takes a longer time to appear than in the NoSNe experiment. Whereas the NoSNe simulation develops a disc that is progressively disturbed in time by nothing more than mergers, the SNe runs show a disturbed clumpy environment characteristic of turbulent gas, where the SNe explosions disrupt the galaxy with a strong effect on the central BH accretion rate, as we will see in the next subsection. Furthermore, due to the metal release in the SNe runs the gas can cool more efficiently than in the NoSNe simulation. Such an important difference allows the gas to form more self-gravitating over-densities and produce the clumpy galaxies shown at redshift $z\la8$ in the second and third columns of figure \ref{fig:sigmagas}. \subsubsection{Mass accretion and BH growth} High redshift galaxies are far from isolated systems. In fact, as has been shown above, they are very dynamic, in the sense that they are being built up in environments disturbed by filamentary accretion from the cosmic web, mergers and SNe explosions, all of which affect the transport of AM. In the context of BH evolution at high redshift it is relevant to study and quantify the mass accretion rate in the galactic disc due to the processes described in the previous subsection, and more relevant yet is the quantification of the mass accretion rate onto the central BH and the relation of its mass accretion with the large scale filamentary inflows. Figure \ref{fig:dotMprof} shows the radial gas mass accretion rate on the disc as a function of radius inside $\sim 0.1R_{vir}$ at different redshifts for our three runs.
We defined the mass accretion on the disc as: \begin{equation} \frac{dM_{g}}{dt}=-2\pi r\Sigma_g v_r. \end{equation} The radial coordinate $r$ is defined in the disc plane, and the gas SD $\Sigma_g$ and the radial velocity $v_r$ are cylindrical shell averages in the $\hat{z}$ direction. We note that the NoSNe simulation shows continuous lines at almost all redshifts and radii, but our SNe simulations present discontinuous lines due to gas outflows. Such features are further evidence of the highly dynamic environment where the first galactic discs are formed. The mass accretion rate fluctuates roughly between $\sim10^{-2}$ and $\sim10^{2}[M_\odot/yr]$ in the range of radii shown in the figure. As mentioned above, such a large scatter reflects the fluctuating conditions of the galactic disc environment at high $z$, where due to the continuous gas injection through filamentary accretion the evolution is far from the secular type we see in low redshift galaxies. At $z=6$ the NoSNe run has an accretion rate of $\dot{M}_g\approx7[M_\odot/yr]$ on the disc. This value is higher than the $\dot{M}_g\approx4[M_\odot/yr]$ in the case of SNe0.5 and $\dot{M}_g\approx3[M_\odot/yr]$ for the SNe5.0 simulation. This trend can be related to the SNe feedback strength. As we will see below, the SNe feedback has important effects on the BH growth and on the mass accretion at larger scales as well. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{gasfraction.pdf} \caption{Gas mass fraction as a function of redshift for our three runs: NoSNe (solid black line), SNe0.5 (dashed blue line) and SNe5.0 (dot-dashed cyan line). The dotted thick green line at $f_g=0.55$ marks the W15 observed value. The dotted thin green lines mark the errors associated with that observation. In general our SNe runs are well inside the error bars of W15.
Our SNe0.5 run has $f_g$ closer to the observed value at $z\la 7$ and it is just at the limit of $f_g=0.8$ when we average below $z=8$.} \label{fig:gasfraction} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{profToomre.pdf} \caption{Toomre parameter as a function of radius for our three simulations. The thermal Toomre parameter is in the left column and the turbulent Toomre parameter in the right column. From top to bottom: NoSNe, SNe0.5 and SNe5.0. From the thermal Toomre panels we see that our NoSNe run presents lower values compared with our SNe experiments. Despite that, the SNe runs do have some regions with Toomre parameter $\la 1$ due to the effect of metal line cooling. Our turbulent Toomre parameter shows much higher values. This is due to the high Mach number of our runs. The difference is more dramatic in our NoSNe run due to the low gas temperatures reached without SNe heating. See the text for a discussion of this parameter.} \label{fig:toomreprof} \end{figure} Figure \ref{fig:accrate} shows the ratio of the BH mass accretion rate to the Eddington mass accretion rate, $f_{EDD}\equiv\dot{M}_{BH}/\dot{M}_{EDD}$. The NoSNe run BH accretes matter at the Eddington limit until $z\approx8$. At this redshift the central galactic region undergoes several mergers and loses a lot of gas, leaving the BH with almost no material to consume. After this event the accretion rate fluctuates until the end of the simulation. Our NoSNe run has an average $\langle f_{EDD}\rangle\approx 0.75$ throughout its evolution and $\langle f_{EDD}\rangle\approx 0.5$ below $z=8$. From figure \ref{fig:accrate} the effect of SNe feedback on the BH growth is clear. SNe feedback perturbs the BH accretion rate from the beginning of its evolution, decreasing its value down to $f_{EDD}\la10^{-4}$ in the SNe0.5 run and reaching even lower values in the SNe5.0 simulation.
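The Eddington normalisation entering $f_{EDD}$ can be sketched as below. The paper does not spell out its convention, so the standard expression with an assumed radiative efficiency $\epsilon=0.1$ is our assumption:

```python
import numpy as np

# cgs constants
G = 6.674e-8          # gravitational constant
M_SUN = 1.989e33      # solar mass, g
M_P = 1.673e-24       # proton mass, g
SIGMA_T = 6.652e-25   # Thomson cross section, cm^2
C = 2.998e10          # speed of light, cm/s
YR = 3.156e7          # year, s

def mdot_eddington(m_bh_msun, eps=0.1):
    """Eddington accretion rate in M_sun/yr for a BH mass in M_sun.

    Assumes Mdot_EDD = L_EDD / (eps c^2), an assumption on our part.
    """
    l_edd = 4.0 * np.pi * G * (m_bh_msun * M_SUN) * M_P * C / SIGMA_T
    return l_edd / (eps * C**2) / M_SUN * YR

def f_edd(mdot_bh_msun_yr, m_bh_msun, eps=0.1):
    """Eddington ratio f_EDD = Mdot_BH / Mdot_EDD."""
    return mdot_bh_msun_yr / mdot_eddington(m_bh_msun, eps)
```

With these choices a $10^8 M_\odot$ BH has $\dot M_{EDD}\approx 2.2\,M_\odot/yr$, which sets the scale of the accretion rates quoted below.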
Such a difference in the BH accretion rate is translated into a $\langle f_{EDD}\rangle\approx 0.5$ for SNe0.5 throughout its evolution and a $\langle f_{EDD}\rangle\approx 0.3$ for the SNe5.0 experiment. These values do not change significantly when we average below $z=8$. The mass accretion rate onto the BH at the end of the simulations is $\dot{M}_{BH}\approx 8[M_\odot/yr]$, $\dot{M}_{BH}\approx 0.03[M_\odot/yr]$ and $\dot{M}_{BH}\approx 0.003[M_\odot/yr]$ for the NoSNe, SNe0.5 and SNe5.0 runs, respectively (see appendix \ref{appendixE}). \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{clumpsMF.pdf} \caption{The gas clumps mass function. From top to bottom: NoSNe, SNe0.5 and SNe5.0. The vertical lines mark the average maximum unstable mass for a rotating disc, $M^{max}_{cl}$. This mass scale has values $\ga10^8[M_\odot]$ for all our runs. We divided the mass range into the bins $(5\times 10^5,10^6,5\times 10^6,10^7,5\times 10^7,10^8)[M_\odot]$. Our simulations form clumps in the range $M_{clump}\sim$ few $10^5$ to few $10^7[M_\odot]$. The NoSNe simulation reaches the highest clump mass, with $M_{clump}\approx 8\times 10^7[M_\odot]$, due to its null feedback. Our SNe runs reach a lower maximum clump mass, $M_{clump}\approx 2\times 10^7[M_\odot]$, due to the SNe heating, and our SNe0.5 experiment forms more massive clumps than our SNe5.0 run.} \label{fig:scalemass} \end{figure} Figure \ref{fig:sinkmass} shows the BH mass evolution as a function of redshift. From this figure it is possible to see the effect of the different behaviors in the accretion rate shown in figure \ref{fig:accrate}. Whereas the NoSNe sink has an approximately exponential evolution, ending with a mass $M_{BH}=1.4\times10^9[M_\odot]$, due to the SNe feedback our SNe runs show episodes of no growth at some redshifts. Such a feature is much clearer in our SNe5.0 run (see between $z=14-12$ or $z\approx 9$, for example).
The final mass in these two runs was $M_{BH}=3.6\times10^7[M_\odot]$ for SNe0.5 and $M_{BH}=1.5\times10^6[M_\odot]$ for SNe5.0. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{stressprof.pdf} \caption{The $\alpha_{r\theta}$ parameters in the disc as a function of radius for different redshifts. The gray dashed line marks the $\alpha=1$ position. From left to right: NoSNe, SNe0.5 and SNe5.0. The top row shows the hydrodynamic $\alpha$ and the bottom row the gravitational $\alpha$. The $\alpha_R$ values are much larger than one in most of our cases. Due to the high Mach number of the NoSNe run, it shows higher values. Due to the high temperatures reached after SNe explosions, our SNe runs have lower Reynolds alphas. The gravitational $\alpha$ is much lower than the Reynolds one. It reaches higher values at the central galactic region for NoSNe and SNe0.5 ($z\ga 9$). The SNe5.0 run shows lower values due to the destruction of dense gas features and higher gas temperatures.} \label{fig:stressprof} \end{figure} \subsection{Mass transport on larger scales} At high redshift we cannot study the small-scale galactic phenomena without taking into account the effects of the large scale structure in a cosmological context. Here we study the behavior of the mass accretion rate above $\sim$ kpc scales, i.e. beyond the galactic disc edge. Figure \ref{fig:accrateLS} shows the mass accretion rate out to $\sim 3R_{vir}$. The mass accretion has been computed taking into account all the mass crossing a spherical shell at a given radius centered at the sink cell position: \begin{equation} \frac{dM_{g}}{dt}=-4\pi r^2\rho v_r. \end{equation} The left column of figure \ref{fig:accrateLS} shows the total mass accretion rate for our three simulations. In the right column we have plotted the mass accretion rate associated with gas densities below $\rho_{coll}=18\pi^2\Omega_b\rho_c\approx 200\Omega_b\rho_c$, with $\rho_c$ the critical density of the Universe.
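Both accretion-rate estimators used in the text, the cylindrical one on the disc ($-2\pi r\Sigma_g v_r$) and the spherical one above ($-4\pi r^2\rho v_r$), are one-line expressions; a minimal sketch (our illustration, taking shell-averaged profiles as inputs) is:

```python
import numpy as np

def mdot_disc(r, sigma_g, v_r):
    """Cylindrical estimator dM/dt = -2*pi*r*Sigma_g*v_r.

    Positive for inflow, since infalling gas has v_r < 0.
    """
    return -2.0 * np.pi * np.asarray(r) * np.asarray(sigma_g) * np.asarray(v_r)

def mdot_sphere(r, rho, v_r):
    """Spherical estimator dM/dt = -4*pi*r^2*rho*v_r through a shell."""
    return -4.0 * np.pi * np.asarray(r) ** 2 * np.asarray(rho) * np.asarray(v_r)
```

The sign convention makes inflow ($v_r<0$) positive, matching the accretion rates plotted in the figures; outflows show up as gaps or negative values, which is how the discontinuous SNe-run profiles arise.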
The vertical lines mark the DM virial radius at each sampled redshift. The right column of the figure can be interpreted as the smooth accretion associated with non-collapsed objects. The NoSNe panel shows a smoothly decreasing behavior, almost independent of redshift above $\sim2[kpc]$. The smooth mass accretion rate presents roughly constant values above the virial radius, with accretion rates of the order $\sim10^1[M_\odot /yr]$. Such a value is consistent with the one found by e.g. \citet{Dekel+2009} and \citet{Kimm+2015} for a $\sim10^{10}[M_\odot]$ halo at high redshift \citep[see also ][]{Neistein+2006}. On the other hand, the SNe run panels have a more irregular decreasing behavior, with a notable dependence on redshift due to SNe explosions. In these runs the SNe feedback is able to heat up the gas and create hot low density gas outflows, almost depleting the system of low density gas. In particular, for a number of redshifts it is possible to see that the smooth accretion is practically erased at radii $\la R_{vir}$, a clear signal of low density gas evaporation. In other words, due to SNe explosions only the dense gas is able to flow into the inner $\la1 [kpc]$ region of the galaxy. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{ratioTorque.pdf} \caption{Gravitational torque to pressure gradient torque ratio as a function of radius for different redshifts. From top to bottom: NoSNe, SNe0.5 and SNe5.0. The gray dashed line marks the $\tau_G/\tau_P=1$ state. In the NoSNe run the MT tends to be dominated by the gravitational torque in the inner galactic region, $r\la 100 [pc]$, at all redshifts. Beyond that radius the pressure gradients and gravity work together to re-distribute AM. The SNe0.5 run tends to be dominated by pressure, with a central gravity domination at high $z$, inside $r\la10-100 [pc]$.
The pressure gradient domination is clearer in our SNe5.0 run due to the extreme SNe feedback.} \label{fig:profTorque} \end{figure} \begin{figure*} \centering \includegraphics[width=2.0\columnwidth,height=1.0\columnwidth]{multiTorque2.pdf} \caption{Modulus of the mass weighted gravitational to pressure gradient torque ratio at $z=10$ in the top row and at $z=6$ in the bottom row. From left to right: NoSNe, SNe0.5 and SNe5.0. It is interesting that the pressure torque dominates over the gravitational torque in most of the mapped filamentary dense regions. The gravitational torque increases its influence at the central region of filaments and around gas over-densities. This confirms our previous finding based on the torque ratio radial profiles: the gravitational torque increases its influence in the central galactic region. Such a behavior does not hold in our SNe runs, where the SNe feedback creates a region dominated by pressure gradients at the galactic center.} \label{fig:multiTorque} \end{figure*} \subsection{Gas-stars-DM spin alignment} As a complementary analysis it is interesting to study the alignment between the AM of the different components of the system, namely DM, gas and stars. Figure \ref{fig:spinangle} shows the alignment between the AM of the different components of our systems. The misalignment angle between the gas AM $\vec{l}_{Gas}$ and the component $i$ of the system $\vec{l}_{i}$ was computed as: \begin{equation} \cos(\theta_{Gas-i})=\frac{\vec{l}_{Gas}\cdot\vec{l}_{i}}{|\vec{l}_{Gas}||\vec{l}_i|}. \end{equation} The rotational center used to compute the gas, DM and stellar AM was set at the sink position, $\vec{r}_{sink}$. This point coincides with the gravitational potential minimum cell within $\sim50[pc]$ around the sink particle. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{cmode.pdf} \caption{The Fourier modes associated with the mass surface density spectrum on the disc. From top to bottom: NoSNe, SNe0.5 and SNe5.0.
Although the $m=1$ mode has the highest power, the $m=2$ mode is also important, being a fraction $\ga0.5$ of $|c_1|^2$ at all redshifts. Furthermore, all the other $m>3$ modes have contributions of roughly similar order among them. In other words, the disc has developed a complex azimuthal structure allowing gravitational torques on the galaxy.} \label{fig:cmode} \end{figure} We computed the AM of the different components $i$ as \begin{equation} \vec{l}_{i}=\sum_j \Delta m_{i,j}(\vec{r}_{i,j}-\vec{r}_{c})\times(\vec{v}_{i,j}-\vec{v}_{c}), \end{equation} where the sum is calculated inside $0.1R_{vir}$ for each component\footnote{We have not made any distinction regarding the cell gas temperature or between disc and bulge stars.}. $\vec{r}_{c}$ is the center of the cell where the sink particle is located, $\vec{v}_{c}$ is the average gas velocity of all cells inside a radius of $5\Delta x$ around the sink position and $\Delta m_{i}$ is the mass of our different components: $i=$ gas, stars and DM. From the figure we can see that the gas and the DM spins are far from aligned. The misalignment angle between them fluctuates from a parallel alignment, $\theta_{Gas-DM}\la 10^\circ$, to an almost anti-parallel configuration, $\theta_{Gas-DM}\approx 120^\circ$, in our NoSNe experiment. In our SNe runs the fluctuations are more dramatic due to SNe explosions. Such a non-correlation between the AM vectors of these two components has been studied before in, e.g., \citet{PrietoSpin}. In that work the authors noticed that after the cosmological turn-around the gas can decouple from the DM due to its collisional nature: while the DM feels only gravity, the gas also feels the gas pressure gradients. Such pressure gradients are responsible for an extra torque on the baryonic component, and its AM vector deviates from the DM AM orientation. As already shown in figure \ref{fig:profTorque}, the pressure gradients are not negligible inside the virial radius of our haloes.
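The AM sums and the misalignment angle defined above are straightforward to evaluate; the sketch below (our illustration, assuming particle or cell data stored as plain arrays) computes both:

```python
import numpy as np

def angular_momentum(m, pos, vel, r_c, v_c):
    """l = sum_j m_j (r_j - r_c) x (v_j - v_c) for one component.

    m is an (N,) array of masses; pos and vel are (N, 3) arrays;
    r_c and v_c are the reference position and bulk velocity.
    """
    return np.sum(m[:, None] * np.cross(pos - r_c, vel - v_c), axis=0)

def misalignment_deg(l_gas, l_i):
    """Angle between two AM vectors, in degrees."""
    cosang = np.dot(l_gas, l_i) / (np.linalg.norm(l_gas) * np.linalg.norm(l_i))
    # clip guards against round-off pushing |cos| slightly above 1
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

The clip on the cosine is a small but necessary guard: for nearly parallel vectors floating-point round-off can push the ratio marginally outside $[-1,1]$, where arccos is undefined.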
Such torques are able to change the orientation of the gas AM and thus create a misalignment between the gas and DM AM vectors. \begin{figure} \centering \includegraphics[width=0.95\columnwidth,height=1.7\columnwidth]{sigmagas2.pdf} \caption{Gas surface density projections for our runs at different $z$. From left to right: NoSNe, SNe0.5 and SNe5.0. The evolution of the density maps shows that the galaxy develops a complex spiral-clumpy structure, supporting the existence of high $m$ powers in the Fourier analysis of figure \ref{fig:cmode}. Due to SNe explosions the spiral shape of the object appears only at $z\la8$ in the SNe0.5 run. Below this redshift the galaxy is successively destroyed by SNe and re-built by gravity. In our SNe5.0 run it is almost impossible to see a spiral shape due to the extreme feedback.} \label{fig:sigmagas} \end{figure} The alignment between gas and stars has a different behavior in our runs. For the NoSNe case it is possible to see that at high redshift, between $z\approx15$ and $z\approx13$, the stars and the gas had very different spin orientations. This is because at this stage the galaxy is starting to be built by non-spherical accretion and mergers, conditions which do not ensure an aligned configuration. After $z\approx13$ the gas and stars reach a rather similar spin orientation, with a misalignment angle fluctuating around $\theta_{Gas-Stars}\sim 20^\circ$. At this stage the proto-galaxy cannot be easily perturbed by minor mergers and acquires a defined spiral shape, allowing the gas-star alignment. Such an alignment is perturbed at redshift $z\approx10$. At this redshift the main DM halo suffers a number of minor mergers which can explain the spin angle perturbation. After that, the gas and stars again reach an aligned configuration, which is perturbed by mergers again at lower redshifts. The SNe runs show a much more perturbed gas-stars AM evolution.
In this case, in addition to the merger perturbations, the systems also feel the SNe explosions, which continuously inject energy. The strong shocks associated with this phenomenon are able to decouple the gas AM from the stellar AM, as we can see from the blue solid line. Such perturbations are more common in our SNe5.0 run than in our SNe0.5 run due to the stronger feedback, as we can see from figure \ref{fig:sigmagas}, where the SNe5.0 simulation shows a number of clumps instead of a defined spiral shape. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{dotMprofLR.pdf} \caption{Mass accretion rate radial profiles for our three runs. The vertical lines mark $0.1R_{vir}$ at each redshift. In all simulations the accretion rate shows large fluctuations, between $\sim$ few $10^{-2}$ and $\sim$ few $10^{1}[M_\odot/yr]$, further evidence of the highly dynamic nature of the system. The SNe runs show a less continuous accretion with lower values. In particular, our SNe simulations reach $\sim$ a few $[M_\odot/yr]$ at the end of the simulation.} \label{fig:dotMprof} \end{figure} \section{Discussion and Conclusions} \label{conclusion} By using cosmological hydrodynamic zoom-in simulations we have studied the MT process from $\sim$ few $10 [kpc]$ down to $\sim$ few $[pc]$ scales in a DM halo of $M\approx 3\times 10^{10}[M_\odot]$ at redshift $z=6$. We have studied the evolution of the system without SNe feedback (NoSNe run), with the delayed cooling model for SNe feedback (SNe0.5 run) and with an extreme case of delayed cooling SNe feedback (SNe5.0 run). We found that the SNe0.5 run is the best match to the D10 star burst galaxy sequence. It covers about two decades in SD with the lowest scatter among our simulations. When we look at the stellar mass of the systems, our SNe5.0 run shows a stellar mass close to the expected value from B13, $M_\star/f_b M_{vir}\sim 10^{-2}$.
Looking at this quantity, our SNe0.5 run is still within the order of magnitude predicted by B13 for a $\sim10^{10}[M_\odot]$ DM halo at high redshift. Such an offset can be related to the ``bursty'' nature of high redshift galaxies. In terms of the SFR, due to the extreme feedback our SNe5.0 run has the lowest values, with a SFR $\sim 1[M_\odot/yr]$ at $z\la 8$. In the same $z$ range our SNe0.5 run has a SFR $\sim 10[M_\odot/yr]$, in agreement with results from the W15 high $z$ galaxies. Despite this, both SNe runs present episodes of low ($\la 10^{-1}$) SFR values due to the SNe heating. Our SNe experiments show the lowest gas fractions among our three simulations. They have values $f_g\la0.85$ below $z=8$. If we look at the gas fraction below $z=7$, our SNe0.5 run has the lowest value, $f_g\la0.8$, which is just within the upper limit for the gas fraction of the $z=7.5$ galaxy found by W15. Following \citet{Gammie2001} we have computed the $\alpha$ parameters associated with both the Reynolds and the gravitational stresses. In other words, we have computed both the Reynolds and the gravitational rates of momentum flux on the disc, normalized by the gas pressure. \citet{Gammie2001} showed that the $\alpha$ parameters associated with radial mass transport are of order $\alpha\sim 10^{-2}$, reasonable values for a subsonic stationary accretion disc. In our case the $\alpha$ parameters reach values above unity, meaning that the rate of momentum flux exceeds the gas pressure $P=\rho c_s^2$. Such high values are characteristic of the turbulent super-sonic environment associated with dynamical systems like the ones in our simulations. The highly non-stationary gas behavior is confirmed also by the highly fluctuating values of $\alpha$ at all redshifts. We found that the Reynolds stress dominates over the gravitational one at most of the analyzed redshifts. Here it is worth noting that the Reynolds stress tensor is a measurement of the turbulent motions in the gas.
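As a concrete illustration, a Reynolds $\alpha$ of this kind can be estimated from shell data as the correlated velocity-fluctuation stress normalised by $\rho c_s^2$. The averaging choices below are our assumptions in the spirit of \citet{Gammie2001}, not necessarily the exact prescription used here:

```python
import numpy as np

def alpha_reynolds(rho, v_r, v_theta, c_s):
    """<rho dv_r dv_theta> / <rho c_s^2> over a set of cells.

    dv_r and dv_theta are fluctuations about the mass-weighted mean
    velocities; the result exceeds unity when correlated fluctuations
    are super-sonic.
    """
    w = rho / rho.sum()
    dv_r = v_r - np.sum(w * v_r)
    dv_t = v_theta - np.sum(w * v_theta)
    stress = np.mean(rho * dv_r * dv_t)
    pressure = np.mean(rho * c_s**2)
    return stress / pressure
```

For perfectly correlated fluctuations of Mach number $\mathcal{M}$ this gives $\alpha\approx\mathcal{M}^2$, which is why super-sonic discs like the ones simulated here naturally show $\alpha\gg1$.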
In these systems the gas falls from large scales, channeled by the filaments almost freely onto the DM halo central region, gaining super-sonic velocities. Through the virialization process strong shocks are created, developing a turbulent environment which is further enhanced by SNe explosions. Under such conditions the rate of momentum flux associated with this term, normalized by the gas pressure, will be much higher than 1 if the rms gas velocities in the $\hat{r}$ and $\hat{\theta}$ directions are super-sonic. We emphasize that the Reynolds stress is not a source of mass transport but a measurement of the local rate of momentum transport triggered by other processes, namely pressure gradients, gravitational forces, magnetic fields or viscosity. In this sense, its high value simply tells us that throughout the galaxy evolution there exist processes capable of transporting mass from large scales to small scales very efficiently. In fact, in our systems, gravity triggers the mass flows through the DM filamentary structure around the central halo, and then a combined effect of gravity and pressure gradients allows the MT in the disc. The Reynolds term tends to be higher for our NoSNe run, where Mach numbers are higher due to the absence of SNe heating. The gravitational $\alpha$ parameter has a different behavior in our three experiments. The NoSNe run shows a clear decreasing trend with radius until $r\sim 100 [pc]$ for all sampled redshifts. Beyond that radius $\alpha_G$ fluctuates at values lower than 1. It reaches values $\sim 1$ in the central region. Such a behavior tells us that the gravitational term is more important at the central galactic region, where matter is more concentrated. Our SNe0.5 run has a peak above unity in the central region at high redshift, decreasing until $r\sim100 [pc]$. Beyond that radius it behaves similarly to our NoSNe simulation.
Below redshift $z\sim 8$ the gravitational alpha parameter reduces its value to around $\sim10^{-3}$, with a lot of dispersion but always below $\sim10^{-1}$. In this case the SNe feedback is able to deplete the central galactic region of gas after $z\sim 8$, reducing the stresses associated with the gravitational gradients. The SNe5.0 simulation has no peak at the galactic center. It shows the lowest values in the central regions. Due to the extreme feedback adopted in this simulation it is much more difficult for the gas to create dense structures producing important gravitational forces. Furthermore, in this case the gas maintains higher temperatures, implying higher pressures counteracting the gravitational effect. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{fEDD.pdf} \caption{BH mass accretion rate normalized by the Eddington accretion rate, $f_{EDD}$, as a solid black line for the NoSNe run, a dashed blue line for the SNe0.5 run and a dot-dashed line for the SNe5.0 run. From the figure the increasing feedback perturbation on the BH growth is clear. It has an average value of $\langle f_{EDD}\rangle\approx0.75$ in the NoSNe case and $\langle f_{EDD}\rangle\approx0.5$ in our SNe runs throughout the BH evolution.} \label{fig:accrate} \end{figure} The torques acting in the disc reveal the sources of angular momentum variations triggering the MT process in these galaxies. In our systems the sources of torques are the pressure gradients due to the shocks created through the virialization process and SNe explosions, and the gravitational forces associated with gas inhomogeneities. As in the $\alpha_G$ analysis, when we compute the gravitational to pressure gradient ratio our NoSNe run shows a decreasing trend with radius. Gravity dominates over pressure and has a maximum at the central galactic regions, reaching values $\sim 1$ at radius $\sim 100 [pc]$. Beyond this radius the pressure gradients tend to dominate the AM re-distribution.
Without SNe feedback, the pressure domination at large radius is associated with shocks created by the large scale in-falling material moving towards the central region of the host DM halo. Despite the domination of the pressure gradients in the outer regions, the system shows a number of regions where gravity acts, i.e. a mixed contribution to the MT process. In our SNe runs the domination of gravity at the central regions is not as clear as in the NoSNe run. In the SNe0.5 simulation the gravitational gradients dominate above $z\sim 9$ inside $r\la100 [pc]$. At lower redshifts the pressure gradients clearly dominate the torques at the inner $\sim 100 [pc]$. Beyond that radius it is again possible to see a mixed torque contribution to the MT. A similar scenario is shown by our SNe5.0 simulation. In this case gravity can dominate the very central regions ($r\la$ few $10 [pc]$) at high redshift and the pressure gradients have a clearer domination at larger radii, but there is still a mixed contribution to the AM re-distribution. When we look at the large scales related to the filamentary structure around the central DM halo it is possible to see that pressure torques dominate over gravitational torques in the filaments. The central regions of the filaments show an enhanced gravitational contribution, but it is not enough to be dominant. These results are consistent with the picture in which the material filling the voids falls onto the filamentary over-densities, where it is channeled to the central DM halo region \citep{Pichon+2011,Danovich2015} by gravity. Once the gas reaches the filaments it feels the pressure gradient at the edge of the filaments and loses part of its AM. Then gravity acts and transports the mass almost radially inside the cold filaments to the central region of the DM halo. Such a process allows the gas to reach the galactic edge almost in free-fall.
There the gas pressure acts, reducing the initially high radial velocity and at the same time exerting torques which allow the MT. Throughout this process the gravitational torques also act on the galactic gas, helping the MT process in the disc. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{sinkmass.pdf} \caption{BH mass evolution for our three simulations: NoSNe (solid black line), SNe0.5 (dashed blue line) and SNe5.0 (dot-dashed cyan line). The NoSNe BH reaches a mass of $1.4\times 10^{9}M_\odot$ at the end of the simulation. Such a high mass was reached because most of the time the BH was accreting at the Eddington limit. In the SNe0.5 run the sink particle reaches a final mass of $3.6\times10^7M_\odot$ and our SNe5.0 BH mass reaches $1.5\times10^6M_\odot$ due to the extreme feedback.} \label{fig:sinkmass} \end{figure} A Fourier analysis of the disc gas surface density field for our runs shows that the density power spectrum has a number of excited modes. Although the $m=1$ and $m=2$ modes dominate the power spectrum, the other modes do exist and have roughly comparable amplitudes among them. Such features tell us that the gas SD develops a complex structure throughout its evolution. The information given by the Fourier analysis is confirmed by visual inspection. The galactic discs develop spiral arms and gas clumps which interact with one another through gravity. The gas clumps are formed from the cold gas flowing from the cosmic web onto the central DM halo region. The high gas fraction ($f_g\ga 60\%$) and the cold environment make a perfect place to produce a clumpy galactic disc. The interaction between gas clumps, spiral arms and merged DM haloes exerts gravitational torques which are capable of transporting mass onto the galactic center in times comparable to the dynamical time of the system: this is the so-called VDI \citep{Mandelker+2014,Bournaud+2007}. Due to the process described above, i.e.
large scale gravitational collapse inducing filamentary accretion onto the DM central region and both gravitational and pressure torques acting in the galaxy, the mass can flow through the galactic disc and reach the galactic center. The radial mass accretion rate inside $\sim 0.1R_{vir}$ shows huge fluctuations, with values in the range $\sim(10^{-2}-10^{1})[M_\odot/yr]$ for our SNe runs, clear evidence of a non-stationary and highly dynamic environment. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{accrateLS2.pdf} \caption{Left column: Same as figure \ref{fig:dotMprof} but for larger radii, taking into account material out to $\sim 3R_{vir}$ around the central halo. From top to bottom: NoSNe, SNe0.5 and SNe5.0. Right column: Same as left column but for the smooth accretion, i.e. for gas with a density below the collapse density $\rho_g< 200\Omega_b\rho_c$. Beyond the virial radius the total accretion has a floor similar to the smooth accretion. Inside the virial radius the accretion rate is dominated by dense gas. The SNe explosions have a clear effect on the smooth accretion. At $\sim$ kpc scales the smooth accretion is practically erased due to the SNe heating.} \label{fig:accrateLS} \end{figure} The high mass accretion rate in the high gas fraction disc allows the central BH to grow at the Eddington limit most of the time in the NoSNe run, whereas in the SNe runs the growth is clearly affected by the SNe explosions, showing an intermittent Eddington-limited accretion rate. Despite this, the BH can increase its mass substantially throughout the simulation. The violent events, namely mergers (which can also trigger mass accretion by torquing the gas in the disc) and SNe explosions, are not enough to stop the BH growth. The $10^4[M_\odot]$ BH seed grows to $M_{BH}=1.4\times 10^9[M_\odot]$ in our NoSNe experiment, $M_{BH}=3.6\times10^7[M_\odot]$ in our SNe0.5 run and $M_{BH}=1.5\times10^6[M_\odot]$ in our SNe5.0 run. 
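The near-continuous Eddington-limited growth quoted above can be checked with a back-of-the-envelope estimate. The following sketch is our own consistency check, not part of the simulation pipeline; the radiative efficiency $\epsilon=0.1$ is an assumed illustrative value. It computes the e-folding time of Eddington-limited accretion and the time needed to grow the $10^4 M_\odot$ seed to the NoSNe final mass:

```python
import math

# Physical constants (SI units)
SIGMA_T = 6.6524e-29  # Thomson cross-section [m^2]
C = 2.998e8           # speed of light [m/s]
G = 6.674e-11         # Newton's constant [m^3 kg^-1 s^-2]
M_P = 1.6726e-27      # proton mass [kg]
MYR = 3.156e13        # one Myr in seconds

def efold_time_myr(eps=0.1):
    """e-folding time of Eddington-limited growth,
    dM/dt = (1 - eps)/eps * M / t_Edd, with t_Edd = sigma_T c / (4 pi G m_p).
    eps is the (assumed) radiative efficiency."""
    t_edd = SIGMA_T * C / (4.0 * math.pi * G * M_P)
    return (eps / (1.0 - eps)) * t_edd / MYR

def growth_time_myr(m_seed, m_final, eps=0.1):
    """Time [Myr] to grow from m_seed to m_final at the Eddington limit."""
    return efold_time_myr(eps) * math.log(m_final / m_seed)
```

With $\epsilon=0.1$ the e-folding time is $\sim 50$ Myr, so the $\sim 12$ e-folds needed to reach $1.4\times10^9 M_\odot$ take roughly $0.6$ Gyr, comfortably shorter than the age of the Universe at $z=6$, consistent with the nearly uninterrupted Eddington accretion seen in the NoSNe run.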
When we look at the mass transport beyond the virial radius we find that the large scale $r\ga R_{vir}$ mass accretion rate has a floor of the order of $\la10^1[M_\odot/yr]$, with peaks associated with gas inside DM haloes of $\sim$ few $10^1[M_\odot/yr]$ in all our runs \citep[consistent with ][]{Dekel+2009,Kimm+2015}. Inside the virial radius the smooth accretion decreases monotonically, reaching values of $\sim 10^{-1}-10^{-2}[M_\odot/yr]$ in the galactic outer regions, i.e. $r\sim 0.1R_{vir}$. These values change dramatically when we look at our feedback simulations. In these cases the SNe feedback practically depletes the galactic central region of low density gas. Due to the strong feedback effect only dense gas is able to reach the outer regions of the central galaxy. The mass accretion rate associated with dense gas is of the order of $\sim 10^{0}-10^{1}[M_\odot/yr]$ in our NoSNe system and it is almost devoid of discontinuities. On the other hand, although the SNe runs reach similar accretion rates in the disc, they do show discontinuities, i.e. regions of zero accretion rate, affecting the amount of gas reaching the outer galactic region. At the end of the simulation our NoSNe run shows an accretion rate $\dot{M}_{BH}\approx 8[M_\odot/yr]$, which is similar to the total mass accretion rate in the disc. In contrast, our SNe0.5 run ends with $\dot{M}_{BH}\approx 3\times 10^{-2}[M_\odot/yr]$ and our SNe5.0 run reaches $\dot{M}_{BH}\approx 3\times 10^{-3}[M_\odot/yr]$ at the end of the experiment, showing how important the SNe explosions are for the BH accretion rate. The gas AM vector orientation fluctuates considerably with respect to the DM spin vector throughout the system's evolution. The gas and DM start their evolution with spin vectors roughly aligned, but once the pressure gradients increase due to virialization shocks, mergers \citep[e.g. ][]{PrietoSpin} and SNe explosions, they decouple, reaching an almost anti-parallel orientation at some stages. 
The alignment between these two components is clearer in our NoSNe run, where the angle between them is $\theta\la 60^\circ$ below $z\approx 13$. The picture changes when we look at our SNe simulations, where the effect of SN feedback is capable of changing the alignment from $\sim 0^\circ$ to $\ga100^\circ$ in $\sim$ few $10[Myr]$. Such an effect is stronger in our SNe5.0 run, where large angle fluctuations are present throughout the entire system evolution. The inclusion of AGN feedback in our simulations could certainly change both the galaxy and the BH evolution. The strong energy release in the gas can increase the gas temperature and may suppress star formation, changing the SFR properties of these objects. Furthermore, due to the outflows associated with BH feedback, the gas may not reach the central galactic region as easily as in the simulations presented here. A more detailed study of mass transport in high redshift galaxies with AGN feedback is left for future work. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{spinangle.pdf} \caption{The misalignment angle between the gas AM and stellar AM (solid blue line), and the gas AM and DM (short-dashed cyan line). From top to bottom: NoSNe, SNe0.5 and SNe5.0. The gas and DM show a fluctuating misalignment angle with a high value at the end of the simulation. Due to the collisional nature of the gas it decouples from the DM once the pressure torques start to work on it. For the same reason the SNe simulations have a larger misalignment throughout the simulation.} \label{fig:spinangle} \end{figure} To summarize: In a cosmological context galaxies are formed inside knots of the cosmic web surrounded by filaments. The gas flows from voids to the DM filaments from all directions. There the gas piles up in the filamentary structure and its pressure gradient cancels part of its angular momentum. 
The pressure torques dominate the filamentary structure, whereas the gravitational torques show a non-dominant enhancement at the center of the filaments. Part of the material inside the filaments, formed of dense cold gas, flows onto the central DM halo in almost free fall due to the host DM halo gravitational attraction. The transported cold gas reaches the DM halo with a high radial velocity component, producing strong pressure gradients at the edge of the galactic disc. The constant inflow of cold gas creates a high gas fraction and a cold environment at the inner $\sim 0.1R_{vir}$. Such conditions promote a low Toomre parameter in the disc, so it becomes gravitationally unstable, very efficiently forming gas clumps with masses in the range $\sim10^{5-8}[M_\odot]$ which interact with one another, with the galactic spiral arms and with merged DM haloes. Such a clumpy environment produces regions in the disc dominated by gravity torques; in other words, the gravitational torque due to the VDI acts as a source of MT in high redshift galaxies. The other, dominant, source of torques in our system is the pressure gradients, produced by SNe explosions and virialization shocks, complementing the gravitational MT effect in this $z=6$ galaxy. The mass accretion rate in the disc triggered by pressure gradients and gravity can reach peaks of $\sim$ few $10^1[M_\odot/yr]$ and average values of $\sim$ few $10^0[M_\odot/yr]$, allowing an efficient BH mass growth. \section*{Acknowledgments} J.P. and A.E. acknowledge the anonymous referee for the invaluable comments that improved this work. J.P. acknowledges the support from proyecto anillo de ciencia y tecnologia ACT1101. A.E. acknowledges partial support from the Center of Excellence in Astrophysics and Associated Technologies (PFB06), FONDECYT Regular Grant 1130458. Powered@NLHPC: This research was partially supported by the supercomputing infrastructure of the NLHPC (ECM-02). 
The Geryon cluster at the Centro de AstroIngenieria UC was extensively used for the analysis calculations performed in this paper. The Anillo ACT-86, FONDEQUIP AIC-57 and QUIMAL 130008 provided funding for several improvements to the Geryon cluster. J.P. acknowledges the valuable comments and discussion from Yohan Dubois and Muhammad Latif. J.P. and A.E. acknowledge Marta Volonteri for her enlightening comments on this work.
\section{Introduction} Despite their remarkable successes, both the Standard Model of Particle Physics and the Standard Model of Cosmology face multiple open questions. Examples include the origin and composition of dark matter, the origin of dark energy, the evolution of the Universe during the first minute after the Big Bang (including the inflationary phase as well as possible other phase transitions), the particle physics at the TeV and higher energy scales, the mechanism of electroweak symmetry breaking, and others. Many of these phenomena could have left measurable imprints in the form of gravitational waves (GWs) coming from the early Universe, providing a unique connection between GW physics and fundamental physics. Given the very high energies associated with many of these phenomena, often unachievable in laboratories and accelerators, GWs may provide the only experimental handle to probe these domains for new physics. (Snowmass white paper ``Future Gravitational-Wave Detector Facilities'' \cite{Ballmer:2022uxx} gives an extensive survey of planned and proposed GW observatories. Snowmass white paper ``Spacetime Symmetries and Gravitational Physics''~\cite{Adelberger:2022sve} provides an overview of high-sensitivity small experiments that can be used for GW detections.) The connection between high energy physics, cosmology, and GW physics has been investigated through many facets, and can be illustrated using different perspectives. On one hand, there are open questions that are inevitably linked with the cosmology of our Universe and that may be partially decoupled from the Standard Model of Particle Physics. We dedicate two individual Sections to such questions. Section \ref{sec:earlyphases} is dedicated to the questions about the origin of our Universe. This includes the inflationary and other early phases in the evolution of the Universe that could have generated a stochastic GW background (SGWB). 
(For synergy, see Snowmass white papers on inflation \cite{Achucarro:2022qrl} and high energy physics model-building of the early Universe \cite{Flauger:2022hie,Asadi:2022njl}.) Later, Section \ref{sec:dm} is dedicated to the open problem of dark matter. (Many of the open problems in cosmology are presented in the Snowmass white paper ``Cosmology Intertwined: A Review of the Particle Physics, Astrophysics, and Cosmology Associated with the Cosmological Tensions and Anomalies'' \cite{Abdalla:2022yfr}.) Since there are many possible strategies to address the dark matter problem, the dark matter mechanisms that generate GW signals are necessarily diverse. For instance, a dark matter particle could leave an imprint in GW signals by distorting the binary merger dynamics, while dark matter in the form of primordial black holes would provide a new source of binary merger GW signals. (For synergy, see the Snowmass white paper ``Observational Facilities to Study Dark Matter''~\cite{Chakrabarti:2022cbu}. See also the Snowmass white paper ``Primordial Black Hole Dark Matter'' \cite{Bird:2022wvk} for a detailed discussion of the implications of primordial black holes. There are also significant overlaps with the Snowmass white papers ``Astrophysical and Cosmological Probes of Dark Matter'' \cite{Boddy:2022knd} and ``Cosmic Probes of Fundamental Physics Probing dark matter with small-scale astrophysical observations'' \cite{Snowmass2021:CosmoDM_SmallScale}.) 
These new symmetries may break through phase transitions during the evolution of the Universe, possibly constituting important new sources of a SGWB. GW production mechanisms include both the dynamics of the phase transition if it is first order, as discussed in Section \ref{section_PT}, and topological defects if they are created during the phase transition, as covered in Section \ref{sec:defects}. Finally, there is a great potential for multimessenger complementarity between GW observations and the standard techniques for probing cosmology and particle physics. In Section \ref{sec:gwcollider}, we explore such complementarity between GW observations and the future collider experiments in the context of probing TeV-scale physics. This includes studies of the electroweak symmetry breaking in colliders and associated GWs from phase transitions in the early Universe. It also includes other possible symmetries and associated phase transitions, such as those related to supersymmetry. (For synergy with collider-based probes, see the Snowmass white paper ``Probing the Electroweak Phase Transition with Exotic Higgs Decays'' \cite{Carena:2022yvx}. Further GW probes of fundamental physics are described in the Snowmass white paper ``Fundamental Physics and Beyond the Standard Model'' \cite{Berti:2022wzk}.) In Section \ref{sec:gwem} we explore complementarity between GW observations and traditional electromagnetic observations of the large-scale structure, such as the cosmic microwave background (CMB) or weak gravitational lensing. Directional correlations between these observations, codified in angular cross-correlation spectra, may offer unique information about the early phases in the evolution of the Universe and perhaps shed additional light on the dark matter problem. 
(Snowmass white papers ``Cosmology and Fundamental Physics from the Three-Dimensional Large Scale Structure'' \cite{Ferraro:2022cmj} and ``Cosmic Microwave Background Measurements'' \cite{Chang:2022tzj} provide a deeper study of the connections of large scale structure and CMB measurements to fundamental physics.) We offer concluding remarks in Section \ref{sec:conclusion}. We note that a SGWB could also be generated by astrophysical sources such as mergers of binary black hole and binary neutron star systems. This astrophysical SGWB can act as a foreground to the cosmological (early universe) SGWB. Suppression or removal of the astrophysical stochastic GW foreground is an active area of research, with multiple techniques being explored \cite{PhysRevLett.118.151105,PhysRevD.102.024051,PhysRevD.102.063009,PhysRevLett.125.241101}. These efforts have only had partial success to date and further studies in this direction are needed. We will not discuss this problem any further in this paper. \begin{figure}[!t] \begin{center} \includegraphics[width=1\hsize]{landscape_Snowmass2022.png} \end{center} \caption{Landscape of gravitational wave cosmology. Experimental results include: O1-O3 LIGO-Virgo upper limits~\cite{PhysRevD.104.022004}, indirect limits from big bang nucleosynthesis~\cite{PhysRevX.6.011035}, CMB limits~\cite{PhysRevX.6.011035}, and the NANOGrav pulsar timing measurement~\cite{Arzoumanian_2020}, as well as projected sensitivities of the third generation (3G) terrestrial GW detectors~\cite{EinsteinTelescope,CosmicExplorer} and space-borne LISA~\cite{LISA}, Taiji~\cite{Hu:2017mde}, and Tianqin~\cite{TianQin:2015yph}. 
Theoretical models include examples of slow-roll inflation~\cite{turner}, first-order phase transitions (PT-1~\cite{Ellis:2019oqb}, PT-2~\cite{DelleRose:2019pgi}, and PT-3~\cite{An:2022}), Axion Inflation~\cite{peloso_parviol}, Primordial Black Hole model \cite{Wang:2016ana}, hypothetical stiff equation of state in the early universe~\cite{boylebuonanno}, and foregrounds due to binary black hole/neutron stars~\cite{PhysRevD.104.022004} and galactic binary white dwarfs~\cite{LISA}. } \label{fig:sample} \end{figure} \section{Early Phases in the Evolution of the Universe\label{sec:earlyphases}} A variety of physical processes in the early Universe may generate GWs. As the universe expands and the temperature drops, phase transitions associated with spontaneously broken symmetries may take place. First order phase transitions may generate GWs \cite{PhysRevLett.69.2026} within the frequency range of present \cite{Romero:2021kby} or future \cite{Caprini:2019egz} interferometers, providing a way to test particle physics models beyond the SM. Phase transitions followed by spontaneous symmetry breaking may lead to the production of topological defects as relics of the previous more symmetric phase of the Universe; they are characterised by the homotopy groups of the vacuum manifold. One-dimensional topological defects, called cosmic strings \cite{Vilenkin:2000jqa,Kibble:1976sj,Sakellariadou:2006qs}, were shown \cite{Jeannerot:2003qv} to be generically formed at the end of hybrid inflation within the context of grand unified theories. Cosmic strings, analogues of vortices in condensed matter systems, leave several observational signatures, opening up a new window to fundamental physics at energy scales far above the ones reached by accelerators. 
The production of GWs by cosmic strings \cite{PhysRevD.31.3052,Sakellariadou:1990ne} is one of the most promising observational signatures; they can be accessed by interferometers \cite{LIGOScientific:2021nrg,Auclair:2019wcv,Boileau:2021gbr}, as illustrated in Fig. \ref{fig:sample}. Domain walls \cite{Vilenkin:1981zs,Sakellariadou:1990ne,Sakellariadou:1991sd,Gleiser:1998na,Hiramatsu:2010yz,Dunsky:2021tih}, textures \cite{Fenu:2009qf}, and indirectly monopoles \cite{Dunsky:2021tih} can also source GWs; they are hitherto less studied than cosmic strings. A cosmic string network is mainly characterised by the string tension $G\mu$ ($c = 1$), where $G$ is Newton's constant and $\mu$ the mass per unit length. Observational data constrain the $G\mu$ parameter, which is related to the energy scale of the phase transition leading to cosmic string formation, and therefore it may also be related to the energy scale of inflation. Cosmic superstrings, predicted \cite{Sarangi:2002yt,Jones:2002cv} in superstring inspired inflationary models with spacetime-wrapping D-branes, are coherent macroscopic states of fundamental superstrings and D-branes extended in one macroscopic direction. For cosmic superstrings, there is an additional parameter, namely the intercommutation probability. The dynamics of either a cosmic string or a cosmic superstring network is driven by the formation of loops and the emission of GWs. Following the inflationary paradigm, vacuum fluctuations of the inflaton field may generate a nearly scale-invariant spectrum of GWs, imprinted in the CMB B-mode polarization. Inflationary models generally predict very small values of the ratio of the power spectra of the tensor and scalar modes $(r\ll 1)$. Combining Planck with BICEP2/Keck 2015 data yields an upper limit of $r<0.044$ \cite{Tristram:2020wbi}. 
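To make the smallness of the predicted tensor signal concrete, a short illustration can be given (assuming the standard single-field slow-roll consistency relation; this worked bound is our own addition, not a result of the white paper):

```latex
% Single-field slow-roll consistency relation: n_t = -r/8.
% The quoted bound r < 0.044 then implies
\begin{align}
  n_t = -\frac{r}{8}, \qquad
  r < 0.044 \;\Longrightarrow\; |n_t| < \frac{0.044}{8} = 5.5\times 10^{-3},
\end{align}
% i.e. a nearly scale-invariant, slightly red tensor spectrum.
```

A detectable blue tilt at interferometer frequencies therefore requires a departure from this minimal single-field scenario.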
Inflationary scenarios in which the inflaton couples to a gauge field may predict strongly blue-tilted GW spectra that may be consistent with CMB bounds at long wavelengths but within reach of direct detection experiments at short wavelengths. (See Ref.~\cite{Thorne:2017jft} for examples, or the forthcoming ``Cosmology with the Laser Interferometer Space Antenna,'' by the LISA Cosmology Working Group \cite{Auclair:2022lcg}, for further discussion.) Effective Field Theory inflationary models that do not respect the null-energy condition are also characterized by a blue tensor tilt \cite{Capurri:2020qgz}, as well as inflationary models with violation of slow roll \cite{Wang:2014kqa}, and potentially inflationary models within modified gravity. Finally, there are inflationary models that predict features in the stochastic GW background, for instance as the result of particle production during inflation \cite{Fumagalli:2020nvq}. Despite its success, inflation does face drawbacks \cite{Goldwirth:1989pr,Calzetta:1992gv,Calzetta:1992bp,Borde:2001nh,Ijjas:2014nta,Brandenberger:2000as}, hence various alternatives have been proposed and worked out. Among bouncing cosmologies \cite{Brandenberger:2016vhg}, the pre-Big Bang \cite{Gasperini:1992em} and ekpyrotic \cite{Khoury:2001wf} models, string gas cosmology \cite{Brandenberger:1988aj,Battefeld:2014uga} and the matter bounce scenario \cite{Finelli:2001sr,Brandenberger:2012zb} have long been discussed in the literature. The pre-Big Bang scenario \cite{Gasperini:1992em,Gasperini:2002bn} assumes that gravity and cosmology are based on some particular version of superstring theory. According to this model, the Universe emerges from a highly perturbative initial state preceding the Big Bang. In string theory, the dilaton is an additional massless mode, which, while it may be assumed fixed at late times, is dynamical in the very early Universe. 
Hence, the massless sector of string theory to which the graviton belongs is given by dilaton gravity. This model leads to the production of an amplified quasi-thermal spectrum of gravitons during the dilaton-driven phase. While the slope of the GW spectrum may change for modes crossing the horizon during the subsequent string phase, it remains characterized by an enhanced production of high frequency gravitons, irrespective of the particular value of the spectral index \cite{Brustein:1995ah}. For a wide region of the parameter space of the pre-Big Bang model, one can simultaneously generate a spectrum of scalar metric perturbations in agreement with Planck data and a stochastic background of primordial GWs within reach of the design sensitivity of aLIGO/Virgo and/or LISA \cite{Gasperini:2016gre}. The ekpyrotic model \cite{Khoury:2001wf} is motivated by a specific realization of M-theory in which spacetime is 11-dimensional \cite{Horava:1995qa}. In this model, whereas the equation of motion of the cosmological perturbations depends on the potential of the scalar field, that of the GWs does not. The spectrum of GWs turns out to be very blue, implying that primordial GWs are negligible on scales of cosmological interest today. String gas cosmology is based on degrees of freedom and symmetries of superstring theory which are absent in an effective field theory approach to early Universe cosmology. This model has two fundamental elements. The first is the Hagedorn temperature defined as the maximal temperature which a gas of strings in thermal equilibrium can achieve. The second is the T-duality symmetry of the spectrum of string states. This string gas cosmology model (tested also via numerical experiments \cite{Sakellariadou:1995vk}) leads to an almost scale-invariant spectrum of primordial GWs \cite{Brandenberger:2006vv}. However, whereas inflation typically leads to a red spectrum, string gas cosmology generically yields a slightly blue spectrum. 
The matter bounce model \cite{Finelli:2001sr,Brandenberger:2012zb} is based on the duality between the evolution of the canonical fluctuation variables in an exponentially expanding period and in a contracting phase with vanishing pressure. In this model, the amplitude of the GW spectrum is generically larger than that in inflationary models ($r$ can be close to 1). Having indicated the many possibilities during the expansion phase in the very early Universe, we now turn to the role that GWs play in the subsequent eras. GWs, because they free-stream through the entirety of cosmological history, provide us with a way of probing cosmic history before Big Bang Nucleosynthesis (BBN). $(i)$ Long-lasting GW sources as cosmic witnesses: The vast stretch of energies between the initial expansion phase of the Universe and BBN can accommodate a range of non-standard cosmological histories with equation-of-state parameter $w$ different from $w=1/3$. The GW spectrum of long-lasting sources such as inflation and networks of cosmic strings spans a wide range of frequencies and has been studied as a cosmic witness \cite{DEramo:2019tit, Figueroa:2019paj}, especially to extract information about $w$. Scenarios with $1/3 \leq w \leq 1$ develop a blue tilt for the tensor perturbations at large frequencies and certain portions of the parameter space may be detectable by LISA and/or terrestrial GW detectors \cite{boylebuonanno,Figueroa:2019paj,Giovannini_1,Giovannini_2}, cf. Fig. \ref{fig:sample}. A matter-dominated era with $w=0$ may cause kinks in the spectrum at frequencies corresponding to the onset of matter and radiation domination, which may be observable in the future at BBO and other experiments. If the change in the equation of state is sudden enough, rapidly oscillating scalar perturbations may enhance the primordial GW spectrum from inflation; this effect has been pursued in the context of primordial black holes \cite{Inomata:2020lmk} and Q-balls \cite{White:2021hwi}. 
$(ii)$ Phase transitions as cosmic witnesses: It is challenging to use shorter-lasting sources like phase transitions as a witness to $w$ in the early universe, although some effort has been made in this direction \cite{Guo:2020grp, Hook:2020phx, Barenboim:2016mjm, Cai:2019cdl}. If the phase transition occurs during a phase dominated by an entity with a general equation of state $w$ that is not radiation, expressions for the parameters defining the phase transition should take into account the new dominant contribution to the energy density; if the phase transition is followed by an era dominated by an entity with equation of state $w$, the modified redshifting changes the spectrum. In the deep infrared, super-horizon modes scale as $\propto k^{3+2\frac{3w-1}{3w+1}}$. A special case of phase transitions serving as a probe of inflationary physics is when the transition occurs \textit{during} inflation itself, possibly triggered by inflaton couplings to spectator fields. For such a source, the GW spectrum always contains a unique oscillatory feature, which can be used to identify the GW source~\cite{An:2020fff}. A different avenue is to use GWs from phase transitions as a record of temperature anisotropies coming from a previous era, for example from inflation. Since the phase transition inherits primordial temperature anisotropies, it effectively serves as a copy of the CMB, but in an earlier, more pristine form \cite{Geller:2018mwu}. While this matching is true for adiabatic fluctuations, if the primordial fluctuations carry an isocurvature component then a richer scenario can emerge \cite{Kumar:2021ffi}. $(iii)$ GWs from inflationary reheating/preheating: The large time-dependent field inhomogeneities that are characteristic of rapid particle production through parametric resonances during a preheating phase \cite{Kofman:1997yn, Kofman:1994rk, Amin:2014eta} can be a well-motivated source of GW production. 
Topics that have been explored in this context include gauge preheating after axion inflation \cite{Adshead:2019lbr, Adshead:2018doq}, self-resonance after single-field inflation and oscillon formation \cite{Zhou:2013tsa,Lozanov:2019ylm, Antusch:2016con, Amin:2018xfe, Hiramatsu:2020obh,Kou:2021bij}, as well as tachyonic preheating from a waterfall transition \cite{Garcia-Bellido:2007nns, Garcia-Bellido:2007fiu, Dufaux:2010cf}. The frequency of the resultant GW signal is typically too high, or the amplitude too small, to be detectable with near future GW detectors, although several ideas have been proposed recently \cite{Berlin:2021txa, Domcke:2022rgu}. (See also the Snowmass White Paper~\cite{Berlin:2022hfx}.) The recent work of \cite{Cui:2021are} demonstrates a promising GW detection prospect based on a preheating scenario in the framework of hybrid inflation, where a prolonged waterfall phase allows for an efficient transfer of energy from the scalar sector to an Abelian gauge field. For particular reheating mechanisms, the study of \cite{Haque:2021dha} has investigated the phases of early and late time reheating through imprints on primordial GWs. \section{Phase Transitions} \label{section_PT} GWs from first order phase transitions (FOPTs) in the early Universe offer a unique way of probing particle physics models at energy scales otherwise inaccessible. The GW spectrum, with examples shown in Fig.~\ref{fig:sample}, is sensitive to the shape of the effective potential, which depends on the symmetry breaking pattern and the particle content of the theory. This provides access to regions of parameter space unexplored so far in various extensions of the SM. GWs from a strong FOPT have a plethora of motivations in the early universe. 
For instance, new physics at the electroweak scale can lead to a strongly first order electroweak phase transition \cite{Ramsey-Musolf:2019lsf,Profumo:2007wc,Delaunay:2007wb,Huang:2016cjm,Chala:2018ari,Croon:2020cgk,Grojean:2006bp,Alves:2018jsw,Alves:2020bpi,Vaskonen:2016yiu,Dorsch:2016nrg,Chao:2017vrq,Wang:2019pet,Demidov:2017lzf,Ahriche:2018rao,Huang:2017rzf,Mohamadnejad:2019vzg,Baldes:2018nel,Huang:2018aja,Ellis:2019flb, Alves:2018oct, Alves:2019igs,Cline:2021iff,Chao:2021xqv,Liu:2021mhn,Zhang:2021alu,Cai:2022bcf,Liu:2022vkm} and large lepton asymmetries or different quark masses can make the QCD transition strong \cite{Schwarz:2009ii,Middeldorf-Wygas:2020glx,Caprini:2010xv,vonHarling:2017yew}. Beyond this, a strong transition can occur in multistep phase transitions\footnote{See Refs. \cite{Weinberg:1974hy,Land:1992sm,Patel:2012pi,Patel:2013zla,Blinov:2015sna} for the viability of a multistep phase transition} \cite{Niemi:2018asa,Croon:2018new,Morais:2018uou,Morais:2019fnm,Angelescu:2018dkk,TripletGW2022}, B-L breaking \cite{Jinno:2016knw,Chao:2017ilw,Brdar:2018num,Okada:2018xdh,Marzo:2018nov,Bian:2019szo,Hasegawa:2019amx,Ellis:2019oqb,Okada:2020vvb} (or B/L breaking \cite{Fornal:2020esl}), flavour physics \cite{Greljo:2019xan,Fornal:2020ngq}, axions \cite{Dev:2019njv,VonHarling:2019rgb,DelleRose:2019pgi}, GUT symmetry breaking chains \cite{Hashino:2018zsi,Huang:2017laj,Croon:2018kqn,Brdar:2019fur,Huang:2020bbe}, supersymmetry breaking \cite{Fornal:2021ovz,Craig:2020jfv,Apreda:2001us,Bian:2017wfv}, hidden sector involving scalars \cite{Schwaller:2015tja,Baldes:2018emh,Breitbach:2018ddu,Croon:2018erz,Hall:2019ank,Baldes:2017rcu,Croon:2019rqu,Hall:2019rld,Hall:2019ank,Chao:2020adk,Dent:2022bcd}, neutrino mass models \cite{Li:2020eun,DiBari:2021dri,Zhou:2022mlz} and confinement \cite{Helmboldt:2019pan,Aoki:2019mlt,Helmboldt:2019pan,Croon:2019ugf,Croon:2019iuh,Garcia-Bellido:2021zgu,Huang:2020crf,Halverson:2020xpg,Kang:2021epo}. 
Such phase transitions could occur at nearly any time during or after inflation. For example, the environment in which an electroweak-scale phase transition takes place, where bubbles expand in a plasma of relativistic SM particles, is very different from that prior to reheating. Much of what is discussed in this section relates in particular to what one might call \textsl{thermal phase transitions}, in which the bubbles typically nucleate thermally through the three-dimensional bounce action. Despite the name, not all thermal transitions efficiently transfer kinetic energy to the plasma, as will be discussed below. Nevertheless, as a class of scenarios they represent the most likely source of an observable SGWB due to a FOPT, and have the richest complementarity with other particle physics and cosmological observables. A thermal phase transition begins with the nucleation of bubbles whose walls expand in a plasma of ultrarelativistic particles, and the interactions of the particles with the walls in large part determine the terminal wall velocity of the bubbles. GWs are first sourced by the colliding bubble walls \cite{Kosowsky:1991ua,Kosowsky:1992vn,Huber:2008hg,Jinno:2016vai} and the fluid shock configurations, detonations or deflagrations, accompanying them \cite{Kamionkowski:1993fg}, although for a thermal transition the shear stress in the walls and shocks is unlikely to be a substantial source of GWs \cite{Hindmarsh:2015qta}. For deflagrations, pressure may build up in front of the walls, deforming them and delaying completion of the transition through the formation of hot droplets \cite{Cutting:2019zws}. After the bubbles have merged, a bulk fluid velocity will remain in the plasma. At first the velocity perturbations will typically be longitudinal, unless the bubbles have been deformed during the transition due to hydrodynamic instabilities or deformations of the shape. 
In weak transitions, the fluid perturbation will take the form of longitudinal acoustic waves -- sound waves \cite{Hindmarsh:2013xza}. If the shock formation timescale $\tau_\text{sh} \sim L_*/v_\text{rms}$, where $L_*$ is a typical length scale in the fluid linked to the mean bubble separation, and $v_\text{rms}$ is the root mean square fluid 3-velocity, is smaller than the Hubble time, shocks will form \cite{Caprini:2015zlo}. This is expected to occur for strong transitions, and to lead to turbulence \cite{Pen:2015qta}. Sound waves, acoustic turbulence, and vortical turbulence are all sources of GWs, lasting until the kinetic energy is dissipated by the plasma viscosity \cite{Caprini:2009yp}. These different processes source GWs with different spectral shapes (see e.g.~\cite{Hindmarsh:2017gnf,Hindmarsh:2019phv,Niksa:2018ofa,Caprini:2009yp,RoperPol:2019wvy,Jinno:2020eqg}), which would allow us in principle to reconstruct the conditions in the Universe during and after a sufficiently strong FOPT. The starting point of the calculation is the effective potential $V_{\rm eff}$ of a given model, consisting of three contributions: the tree-level potential, the one-loop Coleman-Weinberg term, and the finite-temperature part. At high temperature $V_{\rm eff}$ initially admits a single vacuum, typically at the origin of field space, and starts to develop another one which becomes more and more energetically preferable as the temperature drops. Provided that the two vacua are separated by a potential barrier, the Universe then undergoes a FOPT to the lower-energy state. This is realized through nucleating bubbles of true vacuum, which then expand and collide with each other, eventually leaving the Universe in the new vacuum state. Four parameters characterizing this picture dictate the resulting GWs: the nucleation temperature $T_*$, the bubble wall velocity $v_w$, the FOPT's strength $\alpha$ and its inverse duration $\beta$. We note the following issues and recent developments regarding their calculation.
{\bf 1. Issues in perturbation theory.} A perturbative treatment of the finite temperature potential is known to break down. The central problem is that the expansion parameter at finite temperature involves a mode occupation which diverges when the mass vanishes \cite{Linde:1980ts}. This can be partially addressed by resumming the most dangerous ``daisy'' diagrams. However, the usual resummation prescriptions have the issues that ($T$ dependent) UV divergences do not cancel at the same order \cite{Laine:2017hdk}, and in any case they do not address the slow convergence of perturbation theory. Including next-to-leading order corrections can change the predictions of the GW amplitude by many orders of magnitude \cite{Croon:2020cgk,Gould:2021oba}. At present, only the technique of dimensional reduction \cite{Kajantie:1995dw,Farakos:1994xh} performed at NLO using an $\hbar$ expansion provides a prescription to calculate thermodynamic parameters at $O(g^4)$ in a gauge independent way \cite{Croon:2020cgk,Gould:2021oba}. This method is challenging to use and has been applied to benchmarks in very few models. There are proposed alternatives to dimensional reduction \cite{Curtin:2016urg,Croon:2021vtc}. However, these are in need of development and testing, and it is not yet obvious how to apply such techniques in a gauge invariant way. Finally, for very weak transitions, the tachyonic mass of the physical Higgs is cancelled by the thermal mass near the origin, leading to an unresolved infrared divergence which is probably reflected in the large differences between the predictions of perturbation theory and Monte-Carlo simulations \cite{Gould:2019qek,Niemi:2020hto}. It is generally assumed that perturbation theory in its most sophisticated form should give accurate results provided a transition is strong enough; however, this needs to be proven by careful comparison with Monte-Carlo simulations. {\bf 2.
Calculation of the bubble nucleation rate.} An accurate evaluation of the nucleation rate $\Gamma_{\mathrm{nuc}}$ and its evolution with temperature is of paramount importance in defining the characteristic time scales of the transition. For sufficiently fast transitions, $T_{\ast}$ and $\beta$ can be obtained by linearizing the rate near $T_{\ast}$. This breaks down for slow transitions, which can be of great phenomenological interest, where the next order corrections must be accounted for~\cite{Ellis:2018mja}. Various components contribute to $\Gamma_{\rm{nuc}}$, each meriting a separate discussion. Firstly, analytical solutions of the bounce EOM exist only for specific single field potentials~\cite{Fubini:1976jm, Coleman:1980aw,Duncan:1992ai, Adams:1993zs}, with progress made in the study of approximate single field potentials for light scalars~\cite{Dutta:2011rc, Aravind:2014aza,Espinosa:2018hue, Guada:2020ihz, Amariti:2020ntv}. Often, however, the underlying theory implies highly nonlinear equations of motion, or the existence of multiple scalar directions, where the bounce solution describes the motion of a soliton along a complicated manifold, requiring the use of numerical tools, such as~\cite{Konstandin:2006nd, Wainwright:2011kj, Camargo-Molina:2013qva, Masoumi:2016wot,Athron:2019nbd, Sato:2019wpo,Guada:2020xnz}. Secondly, the stationary phase approximation used to derive the bounce action holds well for weakly-coupled theories, including radiative corrections~\cite{Langer:1969bc, Weinberg:1992ds, Buchmuller:1992rs, Gleiser:1993hf,Alford:1993br}; this approximation assumes the existence of a hierarchy of scales, with UV and IR modes well separated in energy. This assumption breaks down at strong coupling, where novel methods for generalizing the saddle point treatment are necessary, as discussed in~\cite{Croon:2020cgk, Croon:2021vtc,Dupuis:2020fhh}.
Thirdly, the nucleation prefactor, often taken to be a simple $\mathcal{O}(1)$ coefficient times a mass-dimension-four factor, can have a non-trivial form when more carefully evaluated, and has been shown to substantially alter the nucleation rate in some cases, as its contribution may become exponential. This prefactor can be split into two independent pieces: a dynamical part, related to the inverse timescale of critical bubble growth, depending on the evolution of fluctuations of the bubble radius as well as the thermal bath~\cite{Affleck:1980ac, Linde:1981zj,Arnold:1987mh,Csernai:1992tj,Carrington:1993ng,Moore:2000jw}, and a statistical part, which depends on functional determinants of the second-order fluctuations around the critical bubble and the symmetric phase~\cite{Baacke:1993ne,Guo:2020grp,Brahm:1993bm,Surig:1997ne,Hindmarsh:2017gnf,Croon:2020cgk}, each requiring a different formalism to obtain a complete treatment. {\bf 3. Bubble wall velocity.} Different formalisms have been developed for the calculation of $v_w$, whose applicability depends on the strength of the transition, which determines whether the terminal speed will be only mildly relativistic or, on the other hand, ultrarelativistic. For not-too-fast moving walls, a standard approach is to split the distribution functions for the various particle species in the plasma into an equilibrium part plus a perturbation due to the interaction between the wall and the particles. Recently, progress has been made in characterizing the importance of the equilibrium part of the distribution function, where variation of the plasma temperature, which is a function of the position relative to the wall and $v_w$, plays a role. These variations are tied to hydrodynamic effects in the plasma, which can induce a backreaction force on the wall~\cite{Konstandin:2010dm,BarrosoMancha:2020fay,Balaji:2020yrx,Ai:2021kak,Cline:2021iff}.
For ultrarelativistic bubble walls, with a Lorentz factor $\gamma(v_w)=1/\sqrt{1-v_w^2}\gtrsim10$, equilibration cannot be maintained across the bubble wall. Nevertheless, one can assume that all the particles ahead of the advancing bubble, featuring equilibrium distributions, are absorbed by the new phase, without any reflections. The absorbed particles can then exchange momentum with the wall, which gives rise to friction. The leading effect is caused by the variation in the particles' masses across the wall due to the changing scalar condensate \cite{Bodeker:2009qy}. This gives rise to a friction force that remains independent of $v_w$, and thus cannot in general prevent a runaway behaviour towards $v_w=1$. Additionally, the particles can also emit radiation, mainly in the form of gauge bosons, which leads to a $v_w$-dependent friction effect that grows with $v_w$ and thus can prevent runaways. Single-particle emissions yield a force proportional to $\gamma(v_w)m T^3$ \cite{Bodeker:2017cim}, where $m$ is the mass of emitted gauge bosons inside the bubble, while a resummation of multi-particle emissions leads to an enhanced force proportional to $\gamma(v_w)^2 T^4$~\cite{Hoeche:2020rsg}, or $\gamma(v_w) m T^3$ times a log~\cite{Gouttenoire:2021kjv}. Possible open issues include the difference between these two results and the lack of mass dependence~\cite{Azatov:2020ufh} of the force in \cite{Hoeche:2020rsg}, and the impact of radiated bosons that are reflected~\cite{Gouttenoire:2021kjv}. Nevertheless, independent of the specific form of the friction term, the efficiency factor for the bubble wall motion can be calculated in general~\cite{Cai:2020djd}. With the above transition parameters $T_{\ast}$, $v_w$, $\alpha$ and $\beta$ determined, one can go on to calculate GWs. 
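The pressure balance implied by these friction scalings can be made concrete with a short numerical sketch. Everything below is illustrative: the coefficients `c1`, `c2` and the input pressures are placeholder numbers, not values from the references; the sketch only shows how the two quoted scalings, $\propto \gamma m T^3$ and $\propto \gamma^2 T^4$, lead to very different terminal Lorentz factors for the same driving pressure.

```python
# Illustrative sketch (placeholder numbers, not from any reference): terminal
# Lorentz factor of a bubble wall from pressure balance, using the two friction
# scalings quoted in the text.

def gamma_terminal_single_emission(dV, P_LO, c1, m, T):
    """Balance dV = P_LO + c1*gamma*m*T^3 (single-emission scaling ~ gamma*m*T^3)."""
    excess = dV - P_LO
    if excess <= 0:
        return None  # driving pressure below the gamma-independent friction: no runaway regime
    return excess / (c1 * m * T**3)

def gamma_terminal_resummed(dV, P_LO, c2, T):
    """Balance dV = P_LO + c2*gamma^2*T^4 (resummed multi-emission scaling ~ gamma^2*T^4)."""
    excess = dV - P_LO
    if excess <= 0:
        return None
    return (excess / (c2 * T**4)) ** 0.5

# Placeholder inputs in units where T = 1; dV is the released vacuum energy
# density and P_LO the gamma-independent leading-order friction.
dV, P_LO, m, T = 1.0, 0.2, 1.0, 1.0
g1 = gamma_terminal_single_emission(dV, P_LO, c1=1e-3, m=m, T=T)
g2 = gamma_terminal_resummed(dV, P_LO, c2=1e-4, T=T)
# The two scalings give very different terminal gammas for identical inputs,
# illustrating why the discrepancy between them matters for the energy budget.
```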
For a weak thermal phase transition, the dominant contribution is due to sound waves, with the GW spectrum obtained from large scale numerical simulations~\cite{Hindmarsh:2015qta,Hindmarsh:2017gnf}. The sound shell model~\cite{Hindmarsh:2016lnk,Hindmarsh:2019phv} has been proposed to understand these numerical results, with a generalization to the expanding Universe given in~\cite{Guo:2020grp}. In this model, the velocity field of the perturbed plasma is modelled by a linear superposition of the individual disturbances from each bubble, which in turn can be solved for from a hydrodynamic analysis~\cite{Espinosa:2010hh}. The resulting spectrum agrees reasonably well with that from large scale numerical calculations~\cite{Hindmarsh:2017gnf}. Aside from the spectral shape, which does not agree perfectly with the numerical result, the amplitude is also different for several reasons. Firstly, the amplitude depends on the root mean square fluid velocity $\bar{U}_f$, calculable from the hydrodynamic analysis. However, $\bar{U}_f$ calculated this way is an overestimate, as observed in numerical simulations for strong phase transitions where $\alpha \sim 1$ with small $v_w$~\cite{Cutting:2019zws}. This reduction is more pronounced for increasingly smaller $v_w$ at fixed $\alpha$, presumably due to the formation of droplets ahead of the wall, which then slow the wall down. Secondly, the original widely used GW spectrum (see, e.g., \cite{Caprini:2015zlo}) actually assumes an infinite lifetime, $\tau_{\text{sw}}$, of the sound waves, as found in \cite{Guo:2020grp,Ellis:2018mja}. For a finite $\tau_{\text{sw}}$, an additional multiplicative factor needs to be added to account for the increasingly reduced GW production due to the increasingly diluted energy density as the universe expands.
This factor depends on the expansion rate of the universe during the transition, and for radiation domination it is $(1-1/\sqrt{1+2\,\tau_{\text{sw}}H})$ with $H$ the Hubble rate at $T_{\ast}$~\cite{Guo:2020grp}, which approaches the asymptotic value of 1 as $\tau_{\text{sw}}\rightarrow \infty$, recovering the old result, and reduces to $\tau_{\text{sw}} H$~\cite{Ellis:2018mja,Ellis:2020awk} for short transitions. There remains the question of what exactly the value of $\tau_{\text{sw}}$ is. It is usually chosen to be the time scale corresponding to the onset of turbulence \cite{Pen:2015qta,Hindmarsh:2017gnf}, which needs to be improved based on insights gained from numerical simulations and analytical studies. In addition, there are attempts~\cite{Giese:2020znk,Wang:2020nzm} to go beyond the bag model. We now return to the less-thermal transitions, where the vacuum energy released in phase transitions can far exceed the surrounding radiation energy (see, e.g., ~\cite{Randall:2006py,Espinosa:2008kw}). Here the bubble expansion mode has two possibilities~\cite{Espinosa:2010hh}: strong detonation, where the wall reaches a terminal velocity due to the balance between the outward pressure and the friction, and runaway, where the wall continues to accelerate until it collides. In determining which of the two is relevant, the friction from thermal plasma particles splitting upon impinging on the ultrarelativistic walls plays a crucial role~\cite{Bodeker:2017cim,Hoeche:2020rsg,Gouttenoire:2021kjv}. Since this friction increases as the wall accelerates, runaway is now known to require a stronger transition than previously thought. The main contribution to the energy budget of these transitions comes from a highly relativistic and concentrated fluid around the bubbles in strong detonations, while it comes from the relativistic walls in the runaway case~\cite{Ellis:2019oqb}.
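The finite-lifetime factor quoted above is simple enough to check directly. A minimal sketch, using exactly the formula in the text with $x=\tau_{\rm sw}H$, verifying both limits:

```python
import math

def sound_wave_suppression(tau_sw_H):
    """Suppression factor (1 - 1/sqrt(1 + 2*tau_sw*H)) for the sound-wave
    GW amplitude under radiation domination, with tau_sw*H the sound-wave
    lifetime in Hubble units, as quoted in the text."""
    return 1.0 - 1.0 / math.sqrt(1.0 + 2.0 * tau_sw_H)

# Short-lifetime limit: the factor reduces to ~ tau_sw*H
print(sound_wave_suppression(1e-4))  # close to 1e-4
# Long-lifetime limit: the factor approaches 1, recovering the old
# infinite-lifetime result
print(sound_wave_suppression(1e6))   # close to 1
```

The factor interpolates monotonically between the two regimes, so the suppression is only significant when the sound waves decay well within a Hubble time.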
The GW production for the latter has long been estimated with the so-called envelope approximation, in which the walls are assumed to disappear immediately upon collision~\cite{Kosowsky:1992rz,Kosowsky:1992vn,Huber:2008hg,Jinno:2016vai,Jinno:2017ixd,Zhong:2021hgo,Megevand:2021llq}. However, numerical calculations revealed that the energy accumulated on the bubble surface propagates inside other bubbles even after collision~\cite{Weir:2016tov,Cutting:2018tjt,Jinno:2019bxw,Cutting:2020nla,Lewicki:2020azd}. To incorporate the long lifetime of these walls, a model now called the bulk flow model has been proposed~\cite{Jinno:2017fby,Konstandin:2017sat,Megevand:2021juo}, and it is found that GWs at low frequencies are amplified, reflecting the expanding spherical structure after collision. On the other hand, the GW production in strong detonations leaves much room for study. While the sound shell model~\cite{Hindmarsh:2016lnk,Hindmarsh:2019phv} is expected to describe the GW production if the system relaxes into weak compression waves ($\gamma \lesssim {\cal O} (1)$) at an early stage, it should be noted that strong concentration of the fluid may take some time to get dispersed~\cite{Jinno:2019jhi}, or the system may develop vortical and/or acoustic turbulence at an early stage~\cite{Caprini:2015zlo,RoperPol:2019wvy,Ellis:2020awk,Dahl:2021wyk}. In addition, both purely hydrodynamic and magneto-hydrodynamic (MHD) turbulence are expected to source GWs \cite{Kamionkowski:1993fg}. Past analyses have evaluated the GW production using semi-analytical modelling \cite{Kosowsky:2001xp,Dolgov:2002ra,Caprini:2006jb,Gogoberidze:2007an,Kahniashvili:2008pe,Caprini:2009yp,Niksa:2018ofa}. The results of these analyses have been combined with the prediction of the GW signal from the acoustic phase, leading to a GW spectrum where the acoustic contribution dominates at the peak while turbulence, believed to be sub-leading, dominates at high frequencies (see e.g.~\cite{Caprini:2015zlo}).
Simulations featuring the scalar field evolution coupled to the relativistic fluid dynamics started only recently to explore the non-linear regime, where vorticity and turbulence generation are expected to occur \cite{Cutting:2019zws}. Simulations of MHD turbulence carried out with the Pencil code have improved on previous analytical estimates, but have shown that the initial conditions of the turbulence onset affect the GW spectral shape around the peak region \cite{RoperPol:2019wvy,Kahniashvili:2020jgm,RoperPol:2021xnd}. A ready-to-use prediction of the GW signal from MHD turbulence, validated by numerical simulations but assuming fully-developed turbulence as initial condition, has been provided in \cite{RoperPol:2022iel}. Simulating the onset of turbulence directly from the PT dynamics, and thereby providing a thorough and reliable estimate of the GW power spectrum, remains a key challenge of the next decade. Direct experimental searches for GWs from FOPTs have recently been carried out by several experimental collaborations. A subgroup of the LIGO-Virgo-KAGRA collaboration has performed the first search using its data from the O1, O2 and O3 observing runs \cite{Romero:2021kby}, which is sensitive to FOPTs at energy scales of $\sim$PeV--EeV, and found no evidence for such signals, with upper limits thus placed on the FOPT parameters. Searches have also been performed by the NANOGrav collaboration, based on its 12.5 year data set \cite{NANOGrav:2021flc}, corresponding to the QCD energy scale \cite{Witten:1984rs,Caprini:2010xv}, after the detection of a common noise process possibly due to a SGWB~\cite{NANOGrav:2020bcs}: it concludes that a FOPT signal would be degenerate with that from supermassive black hole binary mergers. A search of Parkes PTA data has also been reported~\cite{Xue:2021gyq}, with no positive detection and with upper limits set.
Continued searches by these efforts will give improved results in the near future, while in the longer term, future third generation ground-based detectors such as Cosmic Explorer~\cite{Reitze:2019iox,Evans:2021gyd} and Einstein Telescope~\cite{Punturo:2010zz,Maggiore:2019uih} will probe much weaker GW signals, and future space interferometers LISA~\cite{LISA:2017pwj,Caprini:2015zlo}, Taiji~\cite{Hu:2017mde,Ruan:2018tsw,Taiji-1} and Tianqin~\cite{TianQin:2015yph,Luo:2020bls,TianQin:2020hid} will operate in a frequency range suitable to test FOPTs at the electroweak scale. Once the GW signal from a FOPT is detected, recent analyses suggest that all four parameters determining its spectral shape can be reconstructed from the power spectrum, within the sound shell model \cite{Gowling:2021gcy}. \section{Topological Defects\label{sec:defects}} \noindent\textbf{Motivation.} Topological defects are generically predicted in field theories with symmetry breaking \cite{Jeannerot:2003qv} as well as superstring theories \cite{Sarangi:2002yt,Jones:2002cv}. When a symmetry is spontaneously broken in the early Universe, the homotopy groups of the resultant vacuum manifold can be non-trivial. Consequently, topological defects of various types can form \cite{Kibble:1976sj,Vilenkin:2000jqa}. Three types of topological defects have been shown to produce SGWB signals: domain walls \cite{Vilenkin:1981zs,Sakellariadou:1990ne,Sakellariadou:1991sd,Gleiser:1998na,Hiramatsu:2010yz,Dunsky:2021tih}, textures \cite{Fenu:2009qf} and cosmic strings \cite{Vachaspati:1984gt,Blanco-Pillado:2017oxo,Blanco-Pillado:2017rnf,Ringeval:2017eww,Vilenkin:1981bx,Hogan:1984is,Siemens:2006yp,DePies:2007bm,Olmez:2010bi,Vachaspati:2015cma}. Monopoles can also indirectly produce GWs, particularly when combined with other defects \cite{Dunsky:2021tih}. What makes such signals particularly interesting is that in all cases the amplitude of the GW signal grows with the symmetry breaking scale.
This makes topological defects an effective probe of high energy physics. The type and detailed properties of the defects which are formed depend on the underlying theory. Domain walls can in principle arise wherever there is reason to expect a discrete symmetry --- from Peccei-Quinn \cite{Harigaya:2018ooc,Craig:2020bnv} and R-parity \cite{Borah:2011qq} to neutrino masses \cite{Ouahid:2018gpg}, to give a non-exhaustive list. Discrete symmetries also appear naturally in the course of symmetry breaking chains --- for example D parity and matter parity \cite{Kibble:1982ae,Kibble:1982dd}. Global strings can originate from axion dark matter models where a U(1) is broken to a vacuum with a discrete symmetry \cite{Chang:2019mza}. Local strings, monopoles and textures are ubiquitous in the symmetry breaking chains that result from SO(10) breaking to the SM \cite{Dunsky:2021tih}. Considering all possible spontaneous symmetry breaking patterns from the GUT down to the SM gauge group, it was shown \cite{Jeannerot:2003qv} that cosmic string formation is unavoidable. The strings which form at the end of inflation have a mass which is proportional to the inflationary scale. Sometimes, a second network of strings forms at a lower scale. In the context of string theory, cosmic superstrings \cite{polchinski} can form as the result of brane interactions. While solitonic cosmic strings are classical objects, cosmic superstrings are quantum ones, hence one expects several differences between the two (see, e.g., \cite{Sakellariadou:2009ev} and references therein). \noindent\textbf{Gravitational wave signals.} The GW spectrum resulting from a topological defect will have different features depending upon whether the broken symmetry is local or global.
For instance, the GW spectrum from local textures has an infrared suppression compared to that from global textures \cite{Dror:2019syi}, the GW signal strength from global strings features a more dramatic scaling with the symmetry breaking scale compared to the linear scaling for local strings \cite{Chang:2019mza}, and local domain walls require destruction via strings in order to be viable, in contrast with global domain walls \cite{Dunsky:2021tih}. In the following we summarize the current status of GW signals from topological defects, with an emphasis on cosmic strings, which are the most studied. \noindent\textit{1. GW from local cosmic strings.} For local strings, and particularly thin strings with no internal structure which can be described by the Nambu-Goto action, once the string network is formed, it is expected to quickly reach the scaling regime \cite{Kibble:1976sj}. The predicted GW spectrum is then contingent on two crucial quantities: the dimensionless power spectrum for a loop of a given length, and the number density of loops. Regarding the power spectrum, one can either motivate an averaged power spectrum (where the average is over different configurations of loops of the same length) using simulation data as input \cite{Blanco-Pillado:2017oxo}, or assume domination by the high frequency modes \cite{DV2,Ringeval:2017eww}. Regarding the number density, a (rather crude) analytic estimate can be made using the `velocity dependent one scale model' that takes only the loop size at formation as an input \cite{Martins:1996jp,Martins:2000cs}. This agrees quite well with some simulation-based models \cite{Blanco-Pillado:2013qja,Blanco-Pillado:2011egf}, but disagrees with others \cite{Ringeval:2005kr,Lorenz:2010sm,Auclair:2019zoz} which predict a greater fraction of energy density in smaller loops. The reasons for these differences are not yet fully understood, but may be related to the effects of gravitational backreaction.
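The `velocity dependent one scale model' mentioned above can be integrated in a few lines. A hedged sketch, assuming the standard Martins-Shellard form of the equations in the radiation era; the loop-chopping parameter $\tilde c = 0.23$ and the analytic momentum parameter $k(v)$ are conventional illustrative choices, not values taken from the references above:

```python
import math

# Sketch of the velocity-dependent one-scale (VOS) model for a string network
# in the radiation era (H = 1/(2t)), in the standard Martins-Shellard form:
#   dL/dt = (1 + v^2) H L + (ctilde/2) v,
#   dv/dt = (1 - v^2) (k(v)/L - 2 H v),
# with L the characteristic network length and v the RMS string velocity.

def k_of_v(v):
    # Conventional analytic ansatz for the momentum parameter
    return (2.0 * math.sqrt(2.0) / math.pi) * (1.0 - 8.0 * v**6) / (1.0 + 8.0 * v**6)

def evolve_vos(t0=1.0, L0=0.1, v0=0.5, t_end=1e4, steps=20000, ctilde=0.23):
    """Euler-integrate the VOS equations in u = ln t; return (L/t, v) at t_end."""
    u, u_end = math.log(t0), math.log(t_end)
    du = (u_end - u) / steps
    L, v = L0, v0
    for _ in range(steps):
        t = math.exp(u)
        H = 1.0 / (2.0 * t)
        dL = (1.0 + v * v) * H * L + 0.5 * ctilde * v
        dv = (1.0 - v * v) * (k_of_v(v) / L - 2.0 * H * v)
        L += t * dL * du   # d/du = t * d/dt
        v += t * dv * du
        u += du
    return L / math.exp(u), v

xi, v = evolve_vos()
# The network relaxes onto the scaling attractor invoked in the text: L grows
# in proportion to t (constant xi = L/t) at a constant RMS velocity, largely
# independently of the initial conditions.
```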
Despite their differences, these string models all predict a roughly constant GW spectrum over many decades of frequency, assuming a standard cosmological history (see e.g.~\cite{Auclair:2019wcv}). This makes it a useful witness to any departures from the standard cosmological picture \cite{Cui:2017ufi,Cui:2018rwi, Gouttenoire:2019kij}. Nambu-Goto dynamics may not apply to all types of cosmic strings; in particular the case of field theory strings is unclear. Some simulation results \cite{Vincent:1996qr,Hindmarsh:2011qj} show that field theory strings decay predominantly into particles rather than gravitational radiation, although the literature has not yet converged \cite{Matsunami:2019fss,Saurabh:2020pqe,Hindmarsh:2021mnl,Auclair:2019jip}. Such discrepancies lead to different observational predictions. Therefore it is important to further investigate this open question and better understand the difference between Nambu-Goto and field theory results. \noindent\textit{2. GW from global/axion cosmic strings/domain walls.} While most of the literature has focused on the evolution of local cosmic strings, motivated by the close connection to axion physics, recent years have seen increasing interest in global or axion strings/topological defects and the GW signature sourced by them \cite{Sakellariadou:1991sd,Martins:2018dqg,Gorghetto:2018myk,Buschmann:2019icd,Figueroa:2020lvo,Gorghetto:2021fsn,Chang:2019mza,Chang:2021afa}. Although the GW signal from global strings is suppressed relative to local strings due to the dominant emission of Goldstone bosons, it has been shown to be detectable with upcoming experiments, and to feature a logarithmically declining spectrum towards high frequencies \cite{Chang:2019mza,Chang:2021afa,Gorghetto:2021fsn}. Clarification of the GW spectrum from global strings will require further investigation in simulation studies, which so far have not converged well. \noindent\textit{3.
GW from superstrings.} The evolution of cosmic superstring networks is a rather involved issue, which has been addressed by numerical \cite{Sakellariadou:2004wq,PhysRevD.71.123513,Hindmarsh:2006qn,Urrestilla:2007yw,Rajantie:2007hp,Sakellariadou:2008ay} as well as analytical \cite{Copeland:2006eh,PhysRevD.75.065024,PhysRevD.77.063521,Avgoustidis:2014rqa} approaches. Cosmic superstrings can also lead to gravitational waves (see, e.g., \cite{2006AAS...209.7413H,LIGOScientific:2017ikf}), hence GW experiments can provide a novel and powerful way to test string theory. \noindent\textbf{Probe for non-standard cosmology.} The state and particle content of our Universe prior to the BBN era remain unknown, the so-called ``primordial dark age'' \cite{Boyle:2005se,Boyle:2007zx}, despite the standard paradigm we often assume. Potential deviations from the standard cosmological scenario are well motivated and have attracted increasing interest in recent years. The GW background spectrum from a cosmic string network typically spans a wide frequency range with detectable amplitude, making it a unique tool for ``cosmic archaeology'' based on a time-frequency correspondence \cite{Cui:2017ufi, Cui:2018rwi}. In the following we review a few representative cases of how the GW signal from strings may be used to probe pre-BBN cosmology. \noindent\textit{1. Probe new equation of state of the early universe.} In standard cosmology, the Universe undergoes a prolonged radiation dominated era from the end of inflation until the transition to matter domination at redshift $\sim 3000$. However, well-motivated theories suggest that the evolution of the Universe's equation of state may deviate from this paradigm, e.g. through the presence of an early matter-domination or kination phase \cite{Moroi:1999zb,Nelson:2018via,Salati:2002md,Chung:2007vz, Poulin:2018dzj}. Such an alternative cosmic history can sharply modify the GW spectrum from cosmic strings via its effect on the Hubble expansion rate.
Specifically, an early period of kination results in a period where the GW frequency spectrum grows as $f^1$, whereas an early period of matter domination results in a spectrum that depletes in the UV, obeying a $f^{-1/3}$ power law \cite{Cui:2018rwi,Cui:2019kkd,Blasi:2020mfx}. A string network can also be ``consumed'' through the nucleation of monopoles \cite{Buchmuller:2021mbb} or domain walls if there is a small hierarchy between symmetry breaking steps, or if not they can be destroyed by a connected domain wall in a later symmetry breaking step \cite{Dunsky:2021tih}. Importantly, all of these signals can be distinguished. \noindent\textit{2. Probe new particle species.} While the high frequency range of the GW spectrum from strings is largely flat (corresponding to GW emission during radiation domination), it is modified by changes in the number of relativistic degrees of freedom, $g_\ast$ \cite{Cui:2017ufi, Auclair:2019wcv}, which modifies the standard Hubble expansion history, and can therefore be used as a probe of high energy degrees of freedom that are beyond the reach of terrestrial colliders or CMB observatories. \noindent\textit{3. Probe (pre-)inflationary universe.} Cosmic defects generally dilute more slowly than radiation. Even if a large number of e-foldings during inflation largely washes out a pre-existing string network, it can regrow back into the horizon and replenish itself to become a non-trivial component of the late Universe energy budget. In particular, replenished strings can leave a unique SGWB spectrum that can be probed by nanoHz detectors, along with GW burst signals \cite{Cui:2019kkd}. This provides a unique example that cosmology before the end of inflation can be probed with GWs from cosmic defects. \noindent\textbf{Probe for new particle physics.} As the amplitude of the GW spectrum produced by a string network grows with the symmetry breaking scale, they provide a unique way of probing high scale physics. 
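The distinguishability claimed above can be illustrated with a toy broken power law built from the slopes quoted for probe 1; this is purely schematic (the normalization, pivot frequency and abrupt break are hypothetical, and only the exponents come from the text):

```python
# Toy illustration (not a fit to any computation): the high-frequency slope of
# the string GW spectrum encodes the expansion era when those modes were
# emitted, using the power laws quoted in the text: flat for radiation
# domination, f^1 for kination, f^(-1/3) for early matter domination.

SLOPE = {"radiation": 0.0, "kination": 1.0, "matter": -1.0 / 3.0}

def toy_string_spectrum(f, f_star, omega_flat, era):
    """Flat plateau below the (hypothetical) pivot f_star; above it the slope
    is set by the era dominating the expansion when those modes were emitted."""
    if f <= f_star:
        return omega_flat
    return omega_flat * (f / f_star) ** SLOPE[era]

# At f = 100*f_star the three expansion histories are cleanly separated:
vals = {era: toy_string_spectrum(100.0, 1.0, 1e-10, era) for era in SLOPE}
```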
For example, since there is typically a hierarchy between the seesaw scale and the GUT scale, it is natural to protect the seesaw scale with a gauge symmetry. Symmetry breaking chains predict strings more often than not, and the entire range of scales relevant to thermal leptogenesis is projected to be testable by future detectors \cite{Dror:2019syi}. More generally, GUT symmetry breaking chains more often than not allow for observable signals from some set of defects \cite{Dunsky:2021tih}.\footnote{See also the Snowmass white paper \cite{Elor:2022hpa}.} Searches for proton decay provide a complementary probe of symmetry breaking chains, since chains that allow for proton decay can be those that do not predict strings \cite{King:2020hyd,Chun:2021brv}. Furthermore, there has been growing interest in GWs from global/axion topological defects due to the close connection to axion or axion-like (ALP) dark matter physics \cite{Chang:2019mza,Gouttenoire:2019kij,Ramberg:2020oct,Chang:2021afa,Gorghetto:2021fsn,Gelmini:2021yzu}: when PQ symmetry breaking occurs after inflation, these topological defects are an indispensable companion of axion particles. While there have been extensive studies on axion particle detection, which strongly depend on whether/how axions couple to SM particles in an observable manner, the GW signature from axion strings/domain walls is universal and could be a highly complementary probe of axion physics. In addition, as mentioned earlier, the GW spectrum from cosmic strings may reveal new particle species via the effect on the Hubble expansion rate due to changes in the relativistic degrees of freedom. \noindent\textbf{Prospect for experimental detection.} Current limits on cosmic strings from GW signals are: $G \mu \lesssim 9.6 \times 10^{-9}$ by LIGO-Virgo \cite{LIGOScientific:2021nrg}, and $G \mu \lesssim 10^{-10}$ by pulsar timing arrays \cite{Ellis:2020ena,Blanco-Pillado:2021ygr}. Fig.
\ref{fig:sample} shows an example of a cosmic string GW spectrum in comparison with existing and future detector sensitivities. Note that considering the expected astrophysical background and a galactic foreground, a cosmic string tension in the range of $G \mu \approx 10^{-16}- 10^{-15}$ or larger could be detectable by LISA, with the galactic foreground affecting this limit more than the astrophysical background \cite{Boileau:2021gbr,Auclair:2019wcv}. Future experiments covering a wide frequency range will further improve the sensitivity to GW signals from cosmic strings, including, e.g., Einstein Telescope, Cosmic Explorer, AEDGE, DECIGO, BBO, $\mu$Ares and Theia \cite{Punturo:2010zz,Yagi:2011wg,AEDGE:2019nxb,Hild:2010id,Sesana:2019vho,Theia:2017xtk}. An exciting possibility is that NANOGrav may have already seen a hint of cosmic strings \cite{NANOGrav:2020bcs}. The suggested hint is consistent with a shallow power law, as one would expect from strings \cite{Ellis:2020ena,Blasi:2020mfx,Datta:2020bht,Chakrabortty:2020otp,Samanta:2020cdk,King:2020hyd,Garcia-Bellido:2021zgu}, though to verify this one way or the other, the Hellings-Downs curve needs to be observed \cite{Hellings:1983fr}. \section{Dark Matter\label{sec:dm}} There are multiple strong and independent lines of evidence for the existence of dark matter (DM). However, its identity remains one of the greatest mysteries in modern physics. For example, the mass of DM can range over almost 100 orders of magnitude. Understanding the identity of DM provides invaluable information about the early Universe as well as about extensions of the Standard Model of particle physics. Decades of effort have been devoted to searches for DM. Motivated by the gauge hierarchy problem, experimental efforts have been focused on the mass window around the electroweak energy scale, i.e. $\sim$100 GeV.
Null results to date have led to strong constraints in this part of the parameter space and have prompted a re-examination of other well-motivated mass windows. Besides conventional DM search methods, GW experiments may provide completely novel opportunities to search for DM. Interestingly, it has been demonstrated that GW experiments can be used to study DM in both the ultraheavy and ultralight mass regimes, for both indirect and direct detection. (For synergy, see the Snowmass white papers ``New Horizons: Scalar and Vector Ultralight Dark Matter''~\cite{Antypas:2022asj} and ``Ultraheavy particle dark matter''~\cite{Carney:2022gse}.) $\bullet$ {\bf Primordial black holes (PBHs):} GW observations~\cite{Abbott:2016blz,TheLIGOScientific:2016pea,Abbott:2016nmj,Abbott:2017vtc,Abbott:2017oio,Abbott:2017gyy,LIGOScientific:2018mvr,LIGOScientific:2020stg,Abbott:2020uma,Abbott:2020khf} have revealed intriguing properties of BH mergers and have rekindled suggestions that PBHs may exist and constitute a fraction of the DM~\cite{Bird:2016dcv,Clesse:2016vqa,Sasaki:2016jop,2016ApJ...823L..25K,Blinnikov:2016bxu}. Advanced LIGO and Virgo, and future ground-based GW observatories, e.g. Cosmic Explorer (CE)~\cite{Evans:2016mbw,Reitze:2019iox} and Einstein Telescope (ET)~\cite{Punturo:2010zza,Hild:2010id,Maggiore:2019uih}, will probe the origin of BHs (stellar or primordial) through different methods and observations: \textit{1. Subsolar black hole mergers.} Detecting a black hole of mass below the Chandrasekhar mass would almost unambiguously point towards a primordial origin. Subsolar searches have been carried out in the first three runs of LIGO/Virgo~\cite{Abbott:2018oah,Authors:2019qbw,LIGOScientific:2021job,Nitz:2020bdb,Phukon:2021cus,Nitz:2021vqh,Nitz:2022ltl}, with a few candidates recently found~\cite{Phukon:2021cus}, while CE and ET will reach the sensitivity to detect such mergers at cosmological distances. \textit{2.
BHs in the NS mass range, low mass gap and pair-instability mass gap.} Multi-messenger astronomy could probe the origin of compact objects in the possible mass gap between NSs and astrophysical BHs \cite{Abbott:2020uma,Abbott:2020khf}, eventually revealing their possibly primordial origin by detecting EM counterparts~\cite{Unal:2020mts}. CE and ET could also distinguish their different merging phases. PBHs in this range are motivated by a boosted formation at the QCD transition~\cite{Byrnes:2018clq,Carr:2019kxo}. Above $60 M_\odot$, pair-instability should prevent BHs from forming, while PBHs could explain recent observations~\cite{Carr:2019kxo,Clesse:2020ghq,DeLuca:2020sae}, though hierarchical mergers remain a more natural explanation~\cite{Gerosa:2021mno}. Accurate spin reconstructions allow distinguishing them from secondary mergers in dense environments~\cite{Farmer:2019jed}. CE will probe intermediate-mass black hole binaries up to $10^4 M_\odot$, which will reveal a possible primordial origin of the seeds of the super-massive BHs at galactic centers~\cite{Clesse:2015wea,Carr:2019kxo}. \textit{3. BH mergers at high redshift.} The third generation of GW detectors like CE and ET will have an astrophysical reach of $20 \lesssim z+1 \lesssim 100$, prior to the formation of stars. Any BH merger detection there would therefore almost certainly point to a primordial origin~\cite{Ding:2020ykt}. \textit{4.
Distinguishing PBH vs stellar BHs with statistical methods.} Bayesian statistical methods and model selection~\cite{LIGOScientific:2018jsj} applied to the rate, mass, spin and redshift distributions will also help to distinguish PBHs from the stellar scenarios~\cite{Kocsis:2017yty,Ali-Haimoud:2017rtz,Clesse:2017bsw,Fernandez:2019kyb,DeLuca:2019buf,Carr:2019kxo,Gow:2019pok,Hall:2020daa,Jedamzik:2020omx,Jedamzik:2020ypm,Bhagwat:2020bzh,DeLuca:2020qqa,DeLuca:2020fpg,DeLuca:2020bjf,Wong:2020yig,Garcia-Bellido:2020pwq,Dolghov:2020hjk,Dolgov:2020xzo,Belotsky:2014kca,Mukherjee:2021ags,Mukherjee:2021itf}. They can be used to set new limits on PBH models and reveal the existence of different black hole populations (PBH binaries with merging rates large enough to be detected may have formed by tidal capture in clusters~\cite{Bird:2016dcv,Clesse:2016vqa,Korol:2019jud,Belotsky:2018wph} and before recombination~\cite{Nakamura:1997sm,Raidal:2017mfl,Ali-Haimoud:2017rtz,Raidal:2018bbj,Young:2019gfc}). \textit{5. GW backgrounds.} If PBHs contribute to a non-negligible fraction of DM, their binaries generate a detectable GW background~\cite{Mandic:2016lcn,Clesse:2016ajp,Raidal:2017mfl,Wang:2016ana,Wang:2019kaf,Bagui:2021dqi,Braglia:2021wwa}, as well as close encounters~\cite{Garcia-Bellido:2021jlq}. Its spectral shape depends on the PBH mass distribution and binary formation channel, with its amplitude comparable or higher than astrophysical sources. The number of sources contributing may also help to identify a SGWB from PBHs~\cite{Braglia:2022icu}. Other SGWBs may come from Hawking radiation~\cite{Arbey:2019mbc,Dong:2015yjs}, from the density fluctuations at the origin of PBH formation~\cite{Ananda:2006af,Baumann:2007zm,Inomata:2016rbd,Nakama:2016gzw,Gong:2017qlj,Clesse:2018ogk,Bartolo:2018evs,Bartolo:2018rku,Inomata:2018epa,Garcia-Bellido:2017aan,Unal:2018yaa,Unal:2020mts,Romero-Rodriguez:2021aws} or from their distribution~\cite{Papanikolaou:2020qtd,Papanikolaou:2021uhe}. 
The density fluctuations also give rise to anisotropies and deformations in SGWBs of other cosmological origins through propagation effects \cite{Alba:2015cms, Contaldi:2016koz, Bartolo:2019oiq, Bartolo:2019zvb, Bartolo:2019yeu, Domcke:2020xmn}. \textit{6. Continuous waves (CWs).} Very light ($\lesssim\mathcal{O}(10^{-10}-10^{-3})\,M_\odot$) PBH binaries would generate long-lived GWs during inspiraling, lasting at least $\mathcal{O}$(hours-days) and potentially up to thousands or even millions of years. A method has been designed to search for these GWs \cite{Miller:2020kmv}, as well as for those from mini-EMRIs~\cite{Guo:2022sdd}, with constraints placed using upper limits from the O3 search for quasi-monochromatic, persistent GWs from planetary- and asteroid-mass PBHs \cite{Miller:2021knj,LIGOScientific:2022pjk}. CE and ET could even detect such binaries in the solar system vicinity. \textit{7. GW bursts.} These may be produced by hyperbolic encounters in dense halos~\cite{Garcia-Bellido:2017qal,Garcia-Bellido:2017knh}. For stellar-mass BHs the signal can lie in the frequency range of ground-based detectors, with a duration of the order of milliseconds. Finally, the absence of GWs from kilonovae may point to neutron stars (NS) destroyed by sublunar PBHs~\cite{Fuller:2017uyd}. \textit{8. Phase transition GWs.} PBHs can form during a first-order phase transition via trapping of fermions in the false vacuum~\cite{Baker:2021nyl,Baker:2021sno,Kawana:2021tde,Huang:2022him,Marfatia:2021hcp}, bubble collisions~\cite{Hawking:1982ga,Jung:2021mku} or postponed vacuum decay~\cite{Liu:2021svg,Hashino:2021qoq}. In such scenarios, there could be correlated signals between PBHs (e.g. merger/evaporation/microlensing) and phase transition GWs. $\bullet$ {\bf Dark photon DM:} If DM is made up of ultralight bosons, it will behave as an oscillating classical wave, with the dark photon (DPDM) being a good candidate.
If DPDM is further charged under the $U(1)_B$ gauge group, the DPDM background field will induce displacements of the GW interferometer's test masses, resulting in a time-dependent variation of the arm length and thus a GW-like signal~\cite{Pierce:2018xmy}. The DPDM signal in the frequency domain is quasi-monochromatic, centered at the mass of the dark photon $m_A$, with a very narrow frequency width $\Delta f/f\sim 10^{-6}$ from the velocity dispersion of the DM halo in the Milky Way. Thus, the search amounts to bump hunting in the frequency domain with Fourier analysis. Searches have been performed at LIGO, which is most sensitive to a DPDM mass of $m_A\sim 4 \times 10^{-13}$ eV at its most sensitive frequency $\sim 100$ Hz, using data from the first observation run (O1) \cite{Guo:2019ker} and the third (O3) \cite{LIGOScientific:2021odm}. The long coherence length of the DPDM means the signal is correlated in multiple interferometers of LIGO and Virgo, which allows cross-correlating the strain channels of data from pairs of interferometers to significantly reduce the noise~\cite{Allen:1997ad}; in the O3 search, a band-sampled-data method is also used~\cite{Miller:2020vsl}. No evidence of a DPDM signal has been found in the O1 and O3 data, and upper limits are placed on the DPDM coupling to baryon number. The O3 upper limit on the squared coupling $\epsilon^2$ is best constrained to be $1.2 (1.31)\times 10^{-47}$ at $5.7(4.2)\times 10^{-13}\text{eV}/c^2$ for the two methods used, an improvement by a factor of $\sim 100$ for $m_A \sim (2-4)\times 10^{-13} \text{eV}/c^2$ compared with the O1 result, with most of the gain in sensitivity coming from taking into account the finite travel time of the light~\cite{Morisaki:2020gui}. The GW data have already probed previously unexplored DPDM parameter space, and direct DPDM searches using GW data have become competitive with other fifth-force experiments in this particular mass region.
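As a schematic illustration of this cross-correlation strategy (a toy sketch, not the actual LIGO--Virgo pipeline; the sample rate, line frequency, segment count and amplitudes below are arbitrary choices for the example), the following recovers a weak quasi-monochromatic line common to two independently noisy data streams:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 1024, 64                      # toy sample rate (Hz) and duration (s)
t = np.arange(fs * T) / fs

# Quasi-monochromatic "DPDM-like" line common to both detectors,
# weak compared to the independent per-detector noise (std = 1).
f0 = 100.0                            # assumed line frequency (Hz)
s = 0.2 * np.cos(2 * np.pi * f0 * t)
d1 = s + rng.standard_normal(t.size)  # detector 1 strain
d2 = s + rng.standard_normal(t.size)  # detector 2 strain

# Average the cross-spectrum over segments: independent noise has
# random relative phase and averages down, the coherent line survives.
n_seg = 64
seg = t.size // n_seg
cross = np.zeros(seg // 2 + 1, dtype=complex)
for k in range(n_seg):
    a = np.fft.rfft(d1[k * seg:(k + 1) * seg])
    b = np.fft.rfft(d2[k * seg:(k + 1) * seg])
    cross += a * np.conj(b)
cross /= n_seg

freqs = np.fft.rfftfreq(seg, d=1 / fs)
f_peak = freqs[np.argmax(cross.real)]
print(f_peak)
```

With these toy numbers the line stands out clearly in the averaged cross-spectrum even though it is invisible in a single detector's time series; a real search would additionally whiten the data and scan a dense grid of narrow frequency bins.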
$\bullet$ {\bf Dilaton:} The dilaton is a promising ultralight dark matter candidate. It is naturally predicted in theories with extra dimensions, and it couples to the SM particles through the trace of the energy momentum tensor. If the dilaton plays the role of DM, its oscillation will lead to time-dependent variations of fundamental constants, such as the electron mass and the fine structure constant. If the GW interferometers are embedded in the background of the dilaton, the oscillation of the dilaton DM would manifest as changes in the length and index of refraction of materials \cite{Stadnik2015a,Stadnik2015b,Stadnik2016}. Similarly to DPDM, dilatons would behave as a classically oscillating field, and would impart a quasi-monochromatic, persistent signal onto the detectors by physically changing the size of the beam splitter, resulting in different travel times for light coming from the x- and y-arms \cite{grote2019novel}. Therefore, the arm length of the interferometers does not matter; instead, it is necessary to have high sensitivity to optical phase differences between the two arms. In GEO 600, squeezed vacuum states of light allow for a larger quantum noise reduction than in LIGO/Virgo/KAGRA, making GEO 600 the most sensitive GW interferometer to dilaton DM. A search for dilaton DM was conducted using about a week of data from GEO 600, resulting in extremely competitive constraints on the couplings of dilatons to electrons and photons \cite{Vermeulen:2021epa}. The analysis was optimally sensitive to each logarithmically-spaced mass of the dilaton, since it employed LPSD to confine the frequency modulation induced by the dilaton on the detector to one frequency bin. This modulation results from the superposition of plane waves that compose a packet of dilatons that interacts with the detector \cite{pierce2019dark}.
$\bullet$ {\bf Axions:} Axions are pseudoscalar particles that generally appear in various extensions of the SM \cite{Peccei:1977hh, Preskill:1982cy}. These hypothetical particles can be constrained in several ways. If axions play the role of DM, constraints can be imposed using techniques sensitive to different interaction channels and appropriate mass ranges \cite{Graham:2015ouw}. Stellar energy-loss arguments can also lead to constraints \cite{Ayala:2014pea}. Axions with weak self-interactions could lead to black hole superradiance, and thus be constrained through black hole spin measurements \cite{PhysRevLett.126.151102,Gruzinov:2016hcq,Davoudiasl:2019nlo,Stott:2020gjj,Ng:2020ruv,Baryakhtar:2020gao}, polarimetric observations \cite{PhysRevLett.124.061102}, and GWs emitted by the superradiance cloud \cite{Arvanitaki:2016qwi,Zhu:2020tht,Brito:2017wnc,Tsukada:2018mbp,Palomba:2019vxe}. Light scalar fields can be sourced by neutron stars due to their coupling to nuclear matter, and affect the dynamics of binary neutron star coalescence, leaving potentially detectable fingerprints in the inspiral waveform. Calculating the first post-Newtonian corrections to the orbital dynamics, radiated power, and gravitational waveform for binary neutron star mergers in the presence of an axion field, it was shown \cite{PhysRevD.99.063013} that Advanced LIGO at design sensitivity can potentially exclude axions with mass $m_{\rm a}\lesssim 10^{-11} $ eV and decay constant $f_{\rm a}\sim (10^{14}-10^{17})$ GeV. Analyzing the GWs from the binary neutron star inspiral GW170817 made it possible to impose constraints on axions with masses below $10^{-11}$ eV by excluding the ones with decay constants ranging from $1.6 \times 10^{16}$ GeV to $10^{18}$ GeV at a 3$\sigma$ confidence level \cite{Zhang:2021mks}. This parameter space, excluded from neutron star inspirals, has not been probed by other existing experiments.
\section{Complementarity between Collider and GW Observations\label{sec:gwcollider}} As discussed in previous sections, GW signals from the early Universe offer a new probe of physics beyond the SM. In particular, a phase transition in the early Universe in the $100$ GeV - $100$ TeV energy range will lead to a GW signal with a peak frequency in the mHz - Hz range, potentially accessible at future GW observatories such as LISA, {Taiji, Tianqin}. This range will also be scrutinized by the LHC (including its High Luminosity upgrade) in combination with proposed future high-energy colliders like ILC, CLIC, {CEPC}, FCC, SppC or a multi-TeV muon smasher. GW observatories and high-energy colliders are thus highly complementary in the search for physics beyond the SM: the discovery of a GW signal from a phase transition in the early Universe could guide searches for new physics at future colliders; conversely, new physics discovered at colliders could provide hints of early Universe phase transitions producing GW signals. Here we discuss this complementarity, highlighting that the respective sensitivities may be very different depending on the specific incarnation of new physics, and each of the two approaches could {in principle} cover the ``blind spots'' of the other. We first focus on the {possibility of an} electroweak phase transition (EWPT) as a prime example leading to potential signatures both at colliders and next-generation GW detectors. Then, we also discuss other phase transitions in the early Universe for which such complementarity could be very important. \subsection{Electroweak Phase Transition} Unravelling the physics behind electroweak symmetry breaking (EWSB) is central to particle physics. It will be a leading aspect of the upcoming LHC and HL-LHC physics runs, as well as a main physics driver for future high-energy colliders.
There are compelling theoretical arguments to expect new physics coupled to the SM Higgs and not far from the TeV energy scale, e.g.~in order to address the origin of the electroweak scale. Among the many questions surrounding EWSB, the {thermal history of EWSB} is of particular interest. In the SM {with a 125 GeV Higgs boson}, {the EWSB transition occurs via} a smooth cross-over {rather than a {\it bona fide} phase transition}~\cite{Kajantie:1996mn}. {This transition takes place at a temperature $T_\mathrm{EW}\sim 140$ GeV.} {Importantly}, new physics coupled to the Higgs could alter the nature of the {EWSB transition}, possibly making it {a first order EWPT}. The existence of such a transition is a necessary ingredient for electroweak baryogenesis (see Ref.~\cite{Morrissey:2012db} and references therein as well as Snowmass white papers \cite{Barrow:2022gsu,Asadi:2022njl}) and could provide a source for observable gravitational radiation. {To significantly alter the SM thermal history,} the new physics {mass scale} cannot lie too far above {$T_\mathrm{EW}$, nor can its interactions with the SM Higgs boson be too feeble~\cite{Ramsey-Musolf:2019lsf}. Thus, } it will be generally possible to measure its effects at the LHC or future high-energy colliders. {Collider probes rely on two classes of signatures.} On the one hand, it should be possible to directly produce and study the new particles at {some of} these facilities, given that their reach in energy spans one to two orders of magnitude beyond the electroweak scale. On the other hand, new physics coupled to the Higgs tends to lead to deviations $\delta_i$ of the Higgs couplings {or other properties} from their SM values, including: \begin{itemize} \item {The Higgs trilinear self-coupling, $\lambda_{3}$.} Although the HL-LHC will only be mildly sensitive to this coupling ($\delta_{\lambda_3} \sim 50 \% $), future colliders could significantly improve on its measurement.
In particular, 100 TeV hadron colliders (e.g.~FCC-hh or SppC) and TeV scale lepton colliders could reach a sensitivity $\delta_{\lambda_3} \sim 5 - 10 \%$. \item {The Higgs-$Z$ boson coupling~\cite{Craig:2013xia,Huang:2016cjm}, which could be measured with exquisite precision (down to $\delta_{Zh} \sim 0.1 \%$) in future Higgs factories like ILC, CEPC or FCC-ee.} \item {Higgs boson signal strengths, which depend on the product of the Higgs production cross section and decay branching ratios. Mixing between the Higgs boson and a new neutral scalar that catalyzes a first order EWPT may lead to deviations accessible at future Higgs factories~\cite{Profumo:2014opa}}. \item {The Higgs-to-diphoton decay rate. If a new neutral scalar is part of an electroweak multiplet, its charged partners will contribute to this loop-induced decay, with a magnitude governed by the scalar mass and the same Higgs portal coupling responsible for a first order EWPT. Order 1-10\% deviations from the SM prediction are possible, yielding potentially observable signatures at next generation colliders. } \item {Possible new or \lq\lq exotic\rq\rq\ Higgs decay modes into new light particles responsible for a first order EWPT.} \end{itemize} {For a generic assessment of the discovery reach for direct and indirect signals associated with a first order EWPT -- along with an extensive set of references to model-specific studies -- see Ref.~\cite{Ramsey-Musolf:2019lsf}.} The two types of collider probes of new physics {that may catalyze a first order EWPT}, direct production of the new particle states {and precision measurements of Higgs properties,} are complementary to each other and to GW probes of the {EWSB thermal history}. In the following, we discuss several concrete examples of such interplay, which illustrate the reach when combining collider searches and GW observations to probe the properties of {a possible first order EWPT}. \noindent {\it{Singlet-driven EWPT scenarios}}.
{The interactions of a SM gauge singlet scalar with the Higgs open up significant possibilities for a first order EWPT. A singlet may be either real (the \lq\lq xSM\,\rq\rq) or complex (the \lq\lq cxSM\,\rq\rq ) and involve adding to the SM one or two new degrees of freedom, respectively. We focus on the EWPT in the xSM (see, {\it e.g.}, \cite{Barger:2007im,Profumo:2007wc,Espinosa:2011ax,Profumo:2014opa,Huang:2016cjm}).} In the absence of a $\mathbb{Z}_2$-symmetry for the singlet scalar field $S$, the Higgs and the singlet will generally mix. {On general grounds, one expects $\vert\sin\theta\vert \gtrsim 0.01$ when a first order EWPT is sufficiently strong as to accommodate electroweak baryogenesis \cite{Ramsey-Musolf:2019lsf}. } The presence of the singlet, both via the mixing angle $\theta$ and via its contribution to the Higgs two-point function at loop level, leads to a universal suppression of Higgs couplings to gauge bosons and fermions w.r.t. their SM values. {Precision studies of Higgs boson properties provide multiple avenues for observing these effects. For example, } it has been shown in~\cite{Huang:2016cjm,Chen:2017qcz} (see also~\cite{Curtin:2014jma}) that the resulting modification of the Higgs coupling to the $Z$ boson would allow one to probe a large fraction of the parameter space region yielding a strongly first-order EWPT at FCC-ee, CEPC or ILC-500. {Measurements of the Higgs boson signal strengths at the LHC or future Higgs factories could provide a similarly powerful probe, as shown in Ref.~\cite{Profumo:2014opa}.} The Higgs self-coupling $\lambda_3$ could be measured at a future 100 TeV hadron collider or a multi-TeV lepton collider (e.g. CLIC or a muon smasher) with $10\%$ precision {or better}, which yields a comparable constraint on the singlet parameter space in the small-mixing limit $\rm{sin}\,\theta \ll 1$~\cite{Profumo:2014opa,Chen:2017qcz}. 
{On the other hand, it is possible that the degree of singlet-Higgs mixing needed for a first order EWPT may not be entirely accessible with future precision Higgs studies. In this case, direct production via the \lq\lq resonant di-Higgs\rq\rq\, process (for $m_s > 2\, m_h$) provides a complementary approach. It was shown in Ref.~\cite{Kotwal:2016tex} that searches for this process in the channels $p p \to s\to hh\to b\bar b \gamma\gamma$ and $p p \to s\to hh\to 4 \tau$ at a 100 TeV hadron collider could cover the entire first order EWPT parameter space, including portions not accessible through precision Higgs studies. (See also Refs.~\cite{Huang:2017jws,Li:2019tfd} for resonant di-Higgs probes of the EWPT in the xSM with the $bbWW$ and $4b$ channels at the HL-LHC.) Additional direct production possibilities include vector boson fusion (VBF) production of the singlet at 3 TeV CLIC~\cite{No:2018fev,Buttazzo:2018qqp} or a multi-TeV muon collider~\cite{Buttazzo:2018qqp,Liu:2021jyc} with its subsequent decay to a Higgs boson pair ($s\to hh\to b\bar b b\bar b$) or $Z$ boson pair ($s\to ZZ\to\ell^+\ell^-\ell^+\ell^-$) (with higher collision energy setups giving higher reach). } Conversely, for small singlet scalar masses $m_s < m_h/2$, exotic Higgs decays $h \to s s$ will also make it possible to probe the corresponding first-order EWPT region~\cite{Carena:2021onl,Kozaczuk:2019pet} at the HL-LHC (in the $b\bar b \tau \tau$ final state) and future lepton colliders (in the $4 b$ final state). When combined with the projected sensitivity to the EWPT via GWs of the future LISA detector, (almost) the entire parameter space yielding detectable GW signals would be probed by future multi-TeV lepton colliders~\cite{Liu:2021jyc} (and also by 100 TeV hadron colliders). Thus, if a stochastic GW signal from a phase transition were to be detected by LISA, these future collider facilities would provide a key cross-check to identify the underlying new physics.
{The complementarity between GW probes with the future LISA detector and new physics searches at colliders (in this case the HL-LHC) is shown explicitly for the xSM in Fig.~\ref{fig:LHC_LISA_complementarity}, in the plane of EWPT strength $\alpha$ and inverse duration (in Hubble units) $\beta/H^{*}$ (see section \ref{section_PT}), using \textsc{PTPlot}~\cite{PTPLOT}.} \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\hsize]{singlet_snr.pdf} \end{center} \caption{ EWPT strength $\alpha$ versus inverse duration (in Hubble units) $\beta/H^{*}$ for xSM benchmark scenarios. The orange benchmarks feature a singlet mixing $\left\vert\rm{sin}\,\theta \right\vert \gtrsim 0.1$, thus within reach of the HL-LHC, while the HL-LHC will not be able to probe the blue points (some of which are within reach of LISA). The red-orange-green curves correspond to the LISA sensitivity with a certain signal-to-noise ratio (indicated in the figure). The black dashed lines correspond to constant values of $(\tau_{\text{sw}} H)^{-1}$ (see section \ref{section_PT}), with $\tau_{\text{sw}} H < 1$ for the grey region. Figure adapted from~\cite{Caprini:2019egz} using \textsc{PTPlot}~\cite{PTPLOT}. } \label{fig:LHC_LISA_complementarity} \end{figure} For the $\mathbb{Z}_2$-symmetric singlet extension of the SM \cite{Profumo:2007wc,Curtin:2014jma} the direct signatures\footnote{Indirect signatures of the singlet through the modifications of Higgs couplings would be similar to the non-$\mathbb{Z}_2$-symmetric case discussed above in the limit of vanishing Higgs-singlet mixing.} at colliders differ from the ones discussed above: the singlet field does not mix with the Higgs, has to be produced in pairs and does not decay to SM particles, escaping the detector as missing transverse energy $E_T^{\rm{miss}}$. 
For $m_s > m_h/2$, the sensitivity of the HL-LHC (in the VBF process $p p \to 2j + s s$ via an off-shell Higgs) will be very limited, and the parameter space yielding a first-order EWPT will only be accessible at a future 100 TeV hadron collider~\cite{Curtin:2014jma,Craig:2014lda} or multi-TeV lepton colliders~\cite{Buttazzo:2018qqp,Chacko:2013lna,Ruhdorfer:2019utl} (also possibly at a high-energy $\gamma\gamma$ collider based on such lepton colliders~\cite{Garcia-Abenza:2020xkk}). A space-based GW observatory like LISA would then have the first chance to probe the parameter space with a first-order EWPT in the $\mathbb{Z}_2$-symmetric scenario (as discussed in section 6.1 of~\cite{Caprini:2019egz}). For other (non-singlet) extensions of the SM yielding a strong first-order EWPT, e.g. with new scalar electroweak multiplets, the non-singlet nature of the new fields helps make them more accessible at high-energy colliders. This strengthens the interplay between LHC studies and the generation of GWs from the EWPT. {Important cases of recent interest include:} \noindent{\it{Two-Higgs-doublet models (2HDM)}}. {In this scenario,} a first-order EWPT favours a sizable mass splitting among the new states $A_0$ and $H_0$ from the second Higgs doublet, and LHC searches for $p p \to A_0 \to H_0\, Z$ yield important constraints on the corresponding EWPT parameter space~\cite{Dorsch:2014qja,Dorsch:2016tab}. The HL-LHC will completely probe the first-order EWPT parameter space in the 2HDM of Type-II (see e.g.~section 9.4 of~\cite{Cepeda:2019klc}), while for Type-I, LISA will be able to explore parameter regions beyond the LHC. \noindent {\it{Extension of the SM by a (real) scalar $SU(2)_L$ triplet}.} {This scenario~\cite{Patel:2012pi,Blinov:2015sna,Niemi:2018asa,Chala:2018opy}, which entails adding three new degrees of freedom to the SM,} allows for a very strong first-order EWPT through a two-step process~\cite{Patel:2012pi} within the reach of LISA.
It also predicts various distinct collider signatures, including the modification of the $h \to \gamma\gamma$ branching fraction and disappearing track signatures~\cite{Chiang:2020rcv} associated with the compressed triplet scalar spectrum. \begin{figure}[!t] \begin{center} \includegraphics[width=1.0\hsize]{Triplet_Comb.pdf} \end{center} \caption{Real triplet extension of the SM. Panel (a) gives the phase diagram in terms of the triplet mass $m_\Sigma$ and Higgs portal coupling $a_2$. The light blue, green, red, and grey areas correspond to a single step crossover transition, single step first order transition, two step thermal history, and unstable electroweak minimum, respectively. The interior of the black dashed contour corresponds to an EWPT that would complete. The thin black band is the allowed region for a hypothetical LISA observation. The dark (light) ellipses give prospective collider allowed regions for scenarios BMA (BMA'): determination of the triplet mass and Higgs diphoton decay rate (adds a measurement of the neutral triplet decay to two $Z$ bosons). The blue bands in panel (b) show the projection of the hypothetical collider allowed parameter space into the plane of GW-relevant inputs. Figures adapted from Ref.~\cite{TripletGW2022}. } \label{fig:GW-Collider-triplet} \end{figure} {Recent work on the triplet model that draws on the use of effective field theory and lattice simulations illustrates how the combination of GW and collider observations could test the model and identify the values of the relevant parameters~\cite{TripletGW2022}. Illustrative results are shown in Fig. \ref{fig:GW-Collider-triplet}. Panel (a) shows the phase diagram (see caption for details), with the thin black band giving the portion of parameter space that a hypothetical GW signal would identify. The dark and light blue ellipses correspond to the results of prospective future collider observations.
An overlap between the collider and GW-allowed regions could enable a precise determination of the model parameters in a way that is unlikely to occur with either GW or collider measurements alone. Panel (b) shows the projection of a hypothetical collider-allowed region onto the plane of GW input parameters. Some portions of that parameter space would lie within the range of currently envisioned GW probes, while complete coverage would require the development of more sensitive probes. } Finally, some scenarios envision that the dark matter actually plays a role in modifying the nature of the EWPT. An example in this class is the so-called lepton portal dark matter model, with a Majorana fermionic dark matter candidate~\cite{Bai:2014osa} which has negligible direct and indirect detection signals. Therefore, the only hope of probing such a WIMP scenario lies in collider searches. The scalar portal interactions between the charged mediator and the SM Higgs were first considered in Ref.~\cite{Liu:2021mhn}, which shows that the EWPT can be modified to a first-order one, providing an extra detection channel, i.e. GWs from the first-order EWPT. It is shown that precision measurements at the future CEPC of exotic Higgs decays and of the Higgs coupling to leptons can be complementary to the GW signals in probing the scalar and lepton portal couplings. \subsection{Theoretical Robustness} When exploring the EWPT sensitivity of future GW and collider probes, it is important to assess the reliability of the theoretical computations. To date, the bulk of such studies have relied on the use of perturbation theory to analyze the EWPT thermodynamics and bubble nucleation dynamics (in the case of a first order EWPT). However, it has long been known, if not widely appreciated in recent times, that exclusive reliance on perturbation theory can yield quantitatively and qualitatively misleading results. The most significant impediment arises from bosonic contributions to thermal loops.
While non-perturbative lattice computations are free from this difficulty, exclusive reliance on the lattice is not practical for a wide survey of BSM scenarios and the relevant parameter space therein. A reasonable middle ground is to pair the lattice with state-of-the-art perturbative computations, using the former to \lq\lq benchmark\rq\rq\, the latter. The latter now relies on the use of the dimensionally-reduced, three-dimensional effective field theory (DR 3DEFT). For recent applications to the singlet, 2HDM, and real triplet models, see Refs.~\cite{Gould:2019qek,Niemi:2021qvp,Kainulainen:2019kyp,Niemi:2018asa,Niemi:2020hto}. A recent application of this benchmarking approach to the GW-collider interplay is given in Ref.~\cite{Gould:2019qek}. In that study, it was observed that if a new scalar is sufficiently heavy to be integrated out of the DR 3DEFT, yielding a high-temperature effective theory that is SM like, then the resulting GW signal is unlikely to be accessible with LISA. Only for a sufficiently light or dynamical new scalar could one expect a signal in the LISA detector. On the other hand, future collider searches could still probe the first order EWPT-viable models that would be inaccessible to LISA. Looking to the future, the use of lattice-EFT methods to investigate the prospects for next generation GW probes is a clear theoretical forefront. \subsection{Other Phase Transitions} Other phase transitions close to the weak scale can also leave both collider and GW signals. One well-motivated scenario is Supersymmetry, which could be probed with GWs, complementing the existing efforts in collider physics. First, the presence of light supersymmetric particles (specifically the stops) coupled to the SM Higgs could affect the properties of the EW symmetry breaking, rendering it strongly first order \cite{Carena:1996wj,Delepine:1996vn}.
However, such a light stop scenario is in tension with LHC data \cite{Menon:2009mz,Cohen:2012zza,Curtin:2012aa,Carena:2012np}. Within the same strategy, one could consider non-minimal SUSY extensions of the SM including extra scalars which further favour a strong FOPT, without being excluded by the LHC (see e.g. \cite{Huang:2014ifa,Kozaczuk:2014kva,Huber:2015znp}). The phenomenology of such models possesses features similar to the singlet extensions of the SM which have been previously discussed. A promising scenario for GW signatures in the MSSM has been investigated recently in \cite{Fornal:2021ovz}. In this realization the MSSM scalars have a non-standard thermal evolution at high temperatures, passing through a phase of symmetry non-restoration. The associated phase transition at high temperature can be a source of GWs, possibly detectable in future interferometers. Furthermore, viable SUSY models should include a sector where SUSY is spontaneously broken, and SUSY breaking is tied to spontaneous R-symmetry breaking \cite{Nelson:1993nf}. In \cite{Craig:2020jfv} the SUSY and R-symmetry phase transitions in simple SUSY breaking hidden sectors have been investigated, and it has been shown under which conditions these can be strong and first order, leading to a SGWB. Constraints from gravitino cosmology set bounds on the SUSY breaking scales, resulting in a GW spectrum in the frequency ranges accessible to current and future interferometers. Moreover, once the SUSY breaking mediation scheme is specified, the peak of the GW spectrum is correlated with the typical scale of the SM superpartners, and a visible GW signal would imply superpartners within reach of future colliders. As a generic remark, we emphasize that SUSY gauge theories typically include large scalar manifolds where phase transitions could have occurred during the evolution of the Universe, opening the possibilities for novel mechanisms to generate GW signatures and to test high-energy SUSY breaking scales.
\section{Correlating GW Background with EM Observations\label{sec:gwem}} As discussed above, a SGWB arises as an incoherent superposition of many GW sources, summed over all sky directions and both polarizations. Numerous SGWB models have been proposed, both cosmological and astrophysical, many of which are accessible to terrestrial and space-borne GW detectors~\cite{maggiore,regimbau_review}. These models often yield predictions for other cosmological observables, such as the CMB, the distribution of galaxies across the sky and redshift, and the distribution of dark matter throughout the universe. It is therefore expected that cross-correlating the spatial structure in the SGWB with spatial structures in other cosmological observables would enable new probes of the underlying physical models and of the earliest phases of the evolution of the universe. The SGWB is typically described in terms of its energy density \cite{allenromano,romanocornish}: \begin{equation} \Omega_{\rm GW}(\hat{e}, f)\equiv\frac{f}{\rho_c} \frac{\rm d^3\rho_{\rm GW}\left(f, \hat{e}\right)}{{\rm d} f{\rm d}^2 \hat{e}} = \frac{\bar{\Omega}_{\rm GW}}{4\pi}+\delta\Omega_{\rm GW}(\hat{e}, f) , \end{equation} where $d \rho_{\rm GW}$ is the energy density of gravitational radiation stored in the frequency band $[f, f+df]$, $\hat{e}$ is the direction on the sky, and $\rho_c$ is the critical energy density needed for a spatially flat Universe. In the second step, we have separated the isotropic and anisotropic components of the SGWB energy density. The anisotropic part can further be decomposed in spherical harmonics $\delta\Omega_{\rm GW}(\hat{e}, f) = \sum_{lm} a_{lm}(f) Y_{lm}(\hat{e})$, from which the angular power spectrum can be computed as $C_{l}(f) \propto \sum_m \langle a_{lm}(f) a^{*}_{lm}(f) \rangle$, under the assumption of statistical isotropy. 
While the isotropic SGWB component is expected to be larger than the possible anisotropy across the sky, there have been significant recent developments in the literature computing the levels of anisotropy for various astrophysical and cosmological SGWB models \cite{contaldi,jenkins_cosmstr,Jenkins:2018a,Jenkins:2018b,Jenkins:2019a,Jenkins:2019b,Jenkins:2019c,Cusin:2017a,Cusin:2017b,Cusin:2018,Cusin:2018_2, Cusin:2019,Cusin:2019b,Alonso:2020,Cusin:2019c,Cusin:2018avf,CanasHerrera,ghostpaper,bartolo,bartolo2020,Adshead:2020bji,dallarmi,Bellomo:2021mer,domcke2020,ricciardone2021,braglia2022}. Some of them have also investigated the possibility of correlating the SGWB anisotropy with the anisotropy observed in electromagnetic (EM) tracers of the large scale structure, such as galaxy counts and weak lensing \cite{Cusin:2017a,Cusin:2017b,Cusin:2018,Cusin:2019,CanasHerrera,Alonso:2020,Cusin:2019c,mukherjee, Alonso:2020mva,PhysRevD.104.063518}, or the CMB \cite{ghostpaper,contaldi,bartolo,bartolo2020,dallarmi,domcke2020,ricciardone2021,braglia2022}. In such cases, one can also expand the EM observable (such as galaxy count distribution) in spherical harmonics (with coefficients $b_{lm}$) and define the angular cross-correlation spectrum $D_{l}(f) \propto \sum_m \langle a_{lm}(f) b^{*}_{lm}(f) \rangle$. These SGWB-EM anisotropy correlations carry unique potential to probe different aspects of high-energy physics, as we outline in the following examples. {\bf SGWB-CMB Correlations:} While most cosmological SGWB models predict isotropic backgrounds ~\cite{grishchuk,barkana,starob,turner,peloso_parviol,seto,eastherlim,boylebuonanno}, recent studies have started to investigate anisotropy in these models. An example is the model of phase transitions (PT) in the early universe, which occurred as the universe cooled and went through symmetry-breaking transitions \cite{witten,hogan,turnerwilczek,kosowsky,kamionkowski,apreda,caprini,binetruy,caprini2}. 
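To make the harmonic-space estimators concrete, the auto-spectrum $C_l$ and the GW-EM cross-spectrum $D_l$ defined above can be evaluated directly from a set of spherical-harmonic coefficients. The sketch below is a minimal illustration, not a production pipeline: the $1/(2l+1)$ normalization is one common convention (the text only fixes the spectra up to proportionality), and storing the coefficients in a dictionary keyed by $(l, m)$ is our simplification.

```python
import numpy as np

def auto_spectrum(alm, lmax):
    """C_l = (1/(2l+1)) * sum_m |a_lm|^2, with alm a dict keyed by (l, m)."""
    return np.array([sum(abs(alm[(l, m)]) ** 2 for m in range(-l, l + 1))
                     / (2 * l + 1) for l in range(lmax + 1)])

def cross_spectrum(alm, blm, lmax):
    """D_l = (1/(2l+1)) * Re sum_m a_lm * conj(b_lm): correlation of the SGWB
    harmonic coefficients with those of an electromagnetic tracer."""
    return np.array([sum((alm[(l, m)] * np.conj(blm[(l, m)])).real
                         for m in range(-l, l + 1)) / (2 * l + 1)
                     for l in range(lmax + 1)])
```

In practice such spectra are computed from pixelized sky maps with dedicated tools, but the averaging over $m$ at each multipole is exactly as above.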
As bubbles of a new vacuum form and expand, collisions of bubble walls, combined with corresponding motion of the plasma and magnetohydrodynamic turbulence, lead to the formation of the SGWB \cite{caprini2}, as discussed in Section 3. A PT is expected to occur at the time of the electroweak symmetry breaking, at the $\sim 1$ TeV scale, resulting in a potentially strong SGWB in both LISA and third-generation (3G) terrestrial detector bands~\cite{binetruy,margotpaper}. Possible PTs at higher temperatures ($\sim 10^3 - 10^6$ TeV) would also be accessible to 3G detectors~\cite{margotpaper}. The PT would have occurred at slightly different redshifts in different causally disconnected regions of the universe, giving rise to anisotropy in the SGWB. The SGWB angular structure would not be affected by interactions with the plasma (i.e. effects such as Silk damping and baryon acoustic oscillations are not relevant for GWs), resulting in a simple angular spectrum: $C_l^{\rm GW} \sim [l (l+1)]^{-1}$ \cite{ghostpaper}. Assuming the PT happened after inflation, the primordial density fluctuations that led to the CMB angular spectrum would also have been present during the PT, imprinting a SGWB anisotropy at least as large as the CMB anisotropy~\cite{ghostpaper}. The degree and nature of correlations between the two backgrounds would provide valuable insight into inflation and the ``dark ages'' of cosmic history. While SGWB anisotropy can be generated at the time of the SGWB production, as in the above phase transition example, it is also possible for the SGWB anisotropy to be generated while GWs propagate through a non-uniform universe. This effect is common for all isotropic early-universe SGWB models: as GWs propagate through large-scale density perturbations that are correlated with the CMB temperature and E-mode polarization, they too become correlated with the CMB \cite{ricciardone2021,contaldi,bartolo,bartolo2020,domcke2020}.
Multiple examples of extensions of the $\Lambda$CDM model have been examined, all featuring new pre-recombination physics. These include models with extra relativistic degrees of freedom, a massless non-minimally coupled scalar field, and an Early Dark Energy component \cite{braglia2022}. SGWB-CMB correlations help constrain these models at various levels of significance, depending on the specific model and on the strength of the SGWB monopole. {\bf Cosmic strings:} As discussed in Section 4, cosmic strings, either as fundamental strings or as topological defects formed during PTs in the early universe, are expected to support cusps \cite{caldwellallen,DV1,DV2,cosmstrpaper} and kinks \cite{olmez1}, which if boosted toward the Earth could result in detectable GW bursts. Integrating contributions of kinks and cusps across the entire string network results in a SGWB. Discovery of the cosmic superstring SGWB would open a unique and rare window into string theory \cite{polchinski}. The amplitude, the frequency spectrum, and the angular spectrum depend on fundamental parameters of cosmic strings (string tension, reconnection probability), and on the network dynamics model \cite{siemens,ringeval,olum}. While the isotropic (monopole) component of this SGWB may be within reach of the advanced or 3G detectors~\cite{O1cosmstr}, the anisotropy amplitudes are found to be $10^{4} - 10^{6}$ times smaller than the isotropic component, depending on the string tension and network dynamics~\cite{olmez2,jenkins_cosmstr}. This level of anisotropy may be within reach of the 3G detectors. Correlating the anisotropy of this SGWB with anisotropy in the CMB or large scale structure may reveal details about the formation and dynamics of the cosmic string network.
{\bf Primordial Black Holes (PBHs):} As discussed in Section 5, PBHs are of high interest as dark matter candidates and have been searched for using different observational approaches, including gravitational lensing, dynamical effects, and accretion effects. While constraints have been placed that disfavour PBHs as a significant fraction of the dark matter, they are far from conclusive due to the variety of assumptions involved, and consequently a window around $10 M_\odot$ is still allowed. Cross-correlating the sky-map of the SGWB due to binary black hole (BBH) signals with the sky-maps of galaxy distribution or dark matter distribution could provide additional insights into the origin of black holes \cite{raccanelli1,raccanelli2,nishikawa,scelfo}. In particular, in more massive halos the typical velocities are relatively high, making it harder for two PBHs to form a binary through GW emission, since the cross section of such a process is inversely proportional to some power of the relative velocity of the progenitors. PBHs are therefore more likely to form binaries in low-mass halos. On the other hand, the merger probability for stellar black holes is higher in more luminous galaxies (or more massive halos). Therefore, if the BBH SGWB anisotropy is found to be correlated with the distribution of luminous galaxies, the BBHs would be of stellar origin, otherwise they would be primordial. While mergers of PBHs would tend to trace the filaments of the large-scale structure, stellar BBH mergers would tend to trace the distribution of galaxies of high stellar mass. The clustering of well-resolved individual GW sources may provide additional constraints \cite{stiskalek2020, payne2020, bera2020}, and efforts to combine well-resolved sources with the SGWB hold promise as well \cite{callister2020}.
{\bf Outlook for GW-EM observations:} Advanced LIGO and Advanced Virgo have produced upper limit measurements of the SGWB anisotropy in the 20-500 Hz band for different frequency spectra, and for both point sources and extended source distributions on the sky \cite{O1stochdir,O2stochdir}. Similar techniques for measuring SGWB anisotropy in the 1 mHz band using LISA are being developed \cite{adamscornish}. The first attempts to correlate SGWB measurements with EM observations are also being developed (for example with the SDSS galaxy survey, resulting in upper limits on the cross-correlation \cite{yang}). Much more remains to be done in order to fully explore the science potential of the SGWB-EM correlation approach. Systematic studies are needed to understand the angular resolution of GW detector networks and to perform optimal SGWB-EM correlation measurements so as to start constraining model parameters---e.g. Bayesian techniques applied to the BBH SGWB are particularly promising \cite{smiththrane,banagiri}. Further development of theoretical models of SGWB-EM anisotropy correlation is critical to enable formulation of suitable statistical formalisms to compare these models to the data. Finally, the study of the astrophysical and cosmological components of the SGWB and their correlations with different EM observations will be further deepened by the upcoming, more sensitive data coming from gravitational wave detectors (LIGO, Virgo, Kagra, Einstein Telescope, Cosmic Explorer, LISA), galaxy and weak lensing surveys (EUCLID, SPHEREx, DESI, SKA, and others), and CMB measurements. \section{Conclusions\label{sec:conclusion}} This white paper highlights the strong scientific potential in using GW observations to probe fundamental particle physics and the physics of the early universe. 
Processes that took place in the Universe within one minute after the Big Bang are often associated with high energies that cannot be reproduced in laboratories, making GW observations unique opportunities to probe the new physics at such energies. In some cases, combining GW observations with accelerator-based experiments or with cosmological observations in the electromagnetic spectrum allows even more powerful probes of the new physics. The standard inflationary paradigm results in a scale-invariant GW spectrum. A novel coupling between the inflaton and gauge fields could result in a strongly blue-tilted spectrum. At the end of inflation, a variety of mechanisms for transferring energy from the inflaton to other particles, including reheating and preheating phases, could result in a boost of the GW spectrum at relatively high frequencies. The presence of additional phases in the Universe, especially if characterized by stiff equations of state, could also result in a significant blue GW spectrum observable by future GW detectors. Alternative cosmologies, such as pre-Big-Bang and ekpyrotic models, could also leave observable blue GW spectra, hence providing new windows into the origins of the universe. As the Universe cools, multiple symmetries are expected to be broken at different energy scales, resulting in phase transitions in the early universe. The electroweak phase transition is of particular interest, but others are also possible, including QCD, supersymmetry, and others. If they are first-order, these phase transitions could result in GW production with a spectrum typically peaked around the frequency corresponding to the energy scale of the phase transition. GW production mechanisms include collisions of bubble walls, sound waves in the plasma, and magnetohydrodynamic turbulence. In the case of electroweak phase transitions, existence of new physics slightly above the electroweak scale could cause the transition to be first-order. 
Such new physics would be within reach of future collider experiments, including the ILC, FCC, CEPC, and others, hence raising the distinct possibility of combining collider experiments with GW observations to probe the physics of electroweak symmetry breaking. Similar collider-GW complementarity could also be used to study other symmetries and corresponding phase transitions, with supersymmetry as a notable example. Furthermore, phase transitions in the early universe could result in topological defects, such as strings or domain walls. Among these, cosmic strings have received much attention as possible GW sources: the dynamics in cosmic string loops is expected to produce a broad GW spectrum, spanning decades in frequency, with the amplitude strongly dependent on the string tension. More recently, axion strings and topological defects have been studied as sources of GWs, likely with a spectrum with logarithmic decline at high frequencies. Cosmic superstrings are also a possible GW source, turning GW experiments into novel ways to test string theory. Dark matter could result in GW production with a broad variety of morphologies. Dark matter in the form of primordial black holes could be detected in individual binary black hole merger events, for example if involving subsolar black holes or if taking place at high redshift ($>20$), or by observing the SGWB due to binary black holes whose spectrum would depend on the fraction of black holes that are of primordial origin. Dark matter in the form of dark photons would induce quasi-monochromatic displacements in the GW detector test masses, at the frequency set by the dark photon mass, and could be searched for using Fourier techniques. Dark matter in the form of a dilaton would cause changes in the length and index of refraction in a GW detector's mirrors, hence inducing phase differences in the detector's two arms. 
Dark matter in the form of axions could generate GW signatures through the black hole superradiance process or by modifying the inspiral signal in neutron star binary mergers. Finally, we note that cross-correlations of the anisotropy in GW energy density and the anisotropy in electromagnetic tracers of the structure in the universe (cosmic microwave and infrared backgrounds, weak lensing, galaxy counts) could also serve as powerful probes of early-universe physics. As an example, if a phase transition happened after inflation, the primordial density fluctuations that led to the CMB angular spectrum would also have been present during the phase transition, imprinting anisotropy in the SGWB at least as large as (and correlated with) the CMB anisotropy. Other applications include cosmic string probes and probes of dark matter in the form of primordial black holes. This tremendous breadth of fundamental particle physics and cosmology phenomena will be accessible to future GW observations, including terrestrial and space-borne detectors, pulsar timing observations, and experiments targeting the B-mode CMB polarization. Realizing this scientific potential requires not only development and completion of the next generation of GW and collider detectors, but also theoretical developments that would define effective probes of the phenomena using the upcoming data. \bmhead{Acknowledgments} RC is supported in part by U.S. Department of Energy Award No. DE-SC0010386. YC is supported in part by the U.S. Department of Energy under award number DE-SC0008541. HG, FY and YZ are supported by U.S. Department of Energy under Award No. DESC0009959. VM is supported by NSF grant PHY2110238. VM and CS are also supported by NSF grant PHY-2011675. AM and AS are supported by the SRP High-Energy Physics and the Research Council of the VUB, AM is also supported by the EOS - be.h project n.30820817, and AS by the FWO project G006119N.
JMN is supported by Ram\'on y Cajal Fellowship contract RYC-2017-22986, and by grant PGC2018-096646-A-I00 from the Spanish Proyectos de I+D de Generaci\'on de Conocimiento. MJRM is supported in part under U.S. Department of Energy contract DE-SC0011095, and was also supported in part under National Natural Science Foundation of China grant No. 19Z103010239. MS is supported in part by the Science and Technology Facility Council (STFC), United Kingdom, under the research grant ST/P000258/1. KS is supported in part by the U.S. Department of Energy under award number DE-SC0009956. LTW is supported by the U.S. Department of Energy grant DE-SC0013642. GW is supported by World Premier International Research Center Initiative (WPI), MEXT, Japan. HA is supported in part by the National Key R\&D Program of China under Grant No. 2021YFC2203100 and 2017YFA0402204, the National Natural Science Foundation under Grant No. 11975134, and the Tsinghua University Initiative Scientific Research Program. LB is supported in part by the National Key Research and Development Program of China Grant No. 2021YFC2203004, the National Natural Science Foundation of China under the grants Nos.12075041, 12047564, and the Fundamental Research Funds for the Central Universities of China (No. 2021CDJQY-011 and No. 2020CDJQY-Z003), and Chongqing Natural Science Foundation (Grants No.cstc2020jcyj-msxmX0814). SC is supported by a Starting Grant from the Belgian Francqui Foundation. JMC and BL are supported by NSERC (Natural Sciences and Engineering Research Council), Canada. GC is funded by Swiss National Science Foundation (Ambizione Grant). RJ is supported by the grants IFT Centro de Excelencia Severo Ochoa SEV-2016-0597, CEX2020-001007-S and by PID2019-110058GB-C22 funded by MCIN/AEI/10.13039/501100011033 and by ERDF. NL is grateful for the support of the Milner Foundation via the Milner Doctoral Fellowship. KFL is partially supported by the U.S. Department of Energy grant DE-SC0022345. 
MM is partially supported by the Spanish MCIN/AEI/10.13039/501100011033 under the grant PID2020-113701GB-I00, which includes ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. ALM is a beneficiary of a FSR Incoming Postdoctoral Fellowship. BS is supported in part by NSF grant PHY-2014075. JS is supported by the National Natural Science Foundation under Grants No. 12025507, No. 12150015, No. 12047503; and is also supported by the Strategic Priority Research Program and Key Research Program of Frontier Science of the Chinese Academy of Sciences under Grants No. XDB21010200, No. XDB23010000, and No. ZDBS-LY-7003 and CAS project for Young Scientists in Basic Research YSBR-006. XS was supported by NSF's NANOGrav Physics Frontier Center (NSF grants PHY-1430284 and PHY-2020265). RS is supported by the NSF grant PHY-1914731 and by the US-Israeli BSF Grant 2018236. CT acknowledges financial support by the DFG through the ORIGINS cluster of excellence. DJW was supported by Academy of Finland grant nos. 324882 and 328958. KPX is supported by the University of Nebraska-Lincoln. SYZ is supported in part by JSPS KAKENHI Grant Number 21F21026. \section*{Data Availability} Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
\section{Introduction} Although deep neural networks have achieved significant success on a variety of challenging machine learning tasks, including state-of-the-art accuracy on large-scale image classification \cite{he2016identity,huang2017densely}, the discovery of adversarial examples \cite{szegedy2013intriguing} has drawn attention and raised concerns. Adversarial examples are carefully perturbed versions of the original data that successfully fool a classifier. In the image domain, for example, adversarial examples are images that have no visual difference from natural images, but that lead to different classification results \cite{goodfellow2014explaining}. A large body of work has been developed on defensive methods to tackle adversarial examples, yet most remain vulnerable to adaptive attacks \cite{szegedy2013intriguing,goodfellow2014explaining,moosavi2016deepfool,papernot2016distillation,kurakin2016adversarial,carlini2017towards,brendel2017decision,athalye2018obfuscated}. A major drawback of many defensive models is that they are heuristic and fail to obtain a theoretically justified guarantee of robustness. On the other hand, many works have focused on providing provable/certified robustness of deep neural networks \cite{hein2017formal,raghunathan2018certified,kolter2017provable,weng2018towards,zhang2018efficient,dvijotham2018dual,wong2018scaling}. Recently, \cite{lecuyer2018certified} provided theoretical insight on certified robust prediction, building a connection between differential privacy and model robustness. It was shown that adding properly chosen noise to the classifier will lead to certified robust prediction. 
Building on ideas in \citep{lecuyer2018certified}, we conduct an analysis of model robustness based on R{\'e}nyi divergence \citep{van2014renyi} between the outputs of models for natural and adversarial examples when random noise is added, and show a higher upper bound on the tolerable size of perturbations compared to \cite{lecuyer2018certified}. In addition, our analysis naturally leads to a connection between adversarial defense and robustness to random noise. Based on this, we introduce a comprehensive framework that incorporates stability training for additive random noise, to improve classification accuracy and hence the certified bound. Our contributions are as follows: \begin{itemize} \item We derive a certified bound for robustness to adversarial attacks, applicable to models with general activation functions and network structures. Specifically, according to \cite{anonymous2020ell}, the derived bound for $\ell_1$ perturbation is tight in the binary case. \item Using our derived bound, we establish a strong connection between robustness to adversarial perturbations and additive random noise. We propose a new training strategy that accounts for this connection. The new training strategy leads to significant improvement on the certified bounds. \item We conduct a comprehensive set of experiments to evaluate both the theoretical and empirical performance of our methods, with results that are competitive with the state of the art. \end{itemize} \section{Background and Related Work} Much research has focused on providing provable/certified robustness for neural networks. One line of such studies considers distributionally robust optimization, which aims to provide robustness to changes of the data-generating distribution. For example, \cite{namkoong2017variance} study robust optimization over a $\phi$-divergence ball of a nominal distribution. 
A robustness with respect to the Wasserstein distance between natural and adversarial distributions was provided in \cite{sinha2017certifiable}. One limitation of distributional robustness is that the divergence between distributions is rarely used as an empirical measure of strength of adversarial attacks. Alternatively, studies have attempted to provide a certified bound of minimum distortion. In \cite{hein2017formal} a certified bound is derived with a small loss in accuracy for robustness to $\ell_2$ perturbations in two-layer networks. A method based on semi-definite relaxation is proposed in \cite{raghunathan2018certified} for calculating a certified bound, yet their analysis cannot be applied to networks with more than one hidden layer. A robust optimization procedure is developed in \cite{kolter2017provable} by considering a convex outer approximation of the set of activation functions reachable through a norm-bounded perturbation. Their analysis, however, is still limited to ReLU networks and pure feedforward networks. Algorithms to efficiently compute a certified bound are considered in \cite{weng2018towards} by utilizing the property of ReLU networks as well. Recently, their idea has been extended by \cite{zhang2018efficient} and relaxed to general activation functions. However, both of their analyses only apply to the multi-layer perceptron (MLP), limiting the application of their results. In general, most analyses for certified bounds rely on the properties of specific activation functions or model structures, and are difficult to scale. Several works aim to generalize their analysis to accommodate flexible model structures and large-scale data. For example, \cite{dvijotham2018dual} formed an optimization problem to obtain an upper bound via Lagrangian relaxation. They successfully obtained the first non-trivial bound for the CIFAR-10 data set. 
The analysis of \cite{kolter2017provable} was improved in \cite{wong2018scaling}, where it was scaled up to large neural networks with general activation functions, obtaining state-of-the-art results on MNIST and CIFAR-10. Certified robustness is obtained by \cite{lecuyer2018certified} by analyzing the connection between adversarial robustness and differential privacy. Similar to \cite{dvijotham2018dual,wong2018scaling}, their certified bound is agnostic to model structure and is scalable, but it is loose and is not comparable to \cite{dvijotham2018dual,wong2018scaling}. Our approach maintains all the advantages of \cite{lecuyer2018certified}, and significantly improves the certified bound with more advanced analysis. The connection between adversarial robustness and robustness to added random noise has been studied in several works. In \cite{fawzi2016robustness} this connection is established by exploring the curvature of the classifier decision boundary. Later, \cite{ford2019adversarial} showed adversarial robustness requires reducing error rates to essentially zero under large additive noise. While most previous works use concentration of measure as their analysis tool, we approach such connection from a different perspective using R{\'e}nyi divergence \citep{van2014renyi}; we illustrate the connection to robustness to random noise in a more direct manner. More importantly, our analysis suggests improving robustness to additive Gaussian noise can directly result in the improvement of the certified bound. \section{Preliminaries} \subsection{Notation} We consider the task of image classification. Natural images are represented by $\mathbf{x}\in \mathcal{X} \triangleq [0,1]^{h\times w\times c}$, where $\mathcal{X}$ represents the image space, with $h, w$ and $c$ denoting the height, width, and number of channels of an image, respectively. An image classifier over $k$ classes is considered as a function $f:\mathcal{X}\rightarrow \{1,\dots,k\}$. 
We only consider classifiers constructed by deep neural networks (DNNs). To present our framework, we define a stochastic classifier, a function $f$ over $\mathbf{x}$ with output $f(\mathbf{x})$ being a multinomial distribution over $\{1,\dots,k\}$, {\it i.e.}, $P\left(f(\mathbf{x})=i\right)=p_i$ for $\sum_i p_i=1$. One can classify $\mathbf{x}$ by picking $\mbox{argmax}_i\ p_i$. Note this distribution is different from the one generated from softmax. \subsection{R{\'e}nyi Divergence} Our theoretical result depends on the R{\'e}nyi divergence, defined in the following \citep{van2014renyi}. \begin{definition}[R{\'e}nyi Divergence] For two probability distributions $P$ and $Q$ over $\mathcal{R}$, the R{\'e}nyi divergence of order $\alpha >1$ is \begin{equation} D_\alpha(P\|Q) = \frac{1}{\alpha-1}\log \mathbb{E}_{x\sim Q}\left(\frac{P(x)}{Q(x)}\right)^{\alpha} \end{equation} \end{definition} \subsection{Adversarial Examples} Given a classifier $f:\mathcal{X}\rightarrow \{1,\dots,k\}$ for an image $\mathbf{x}\in \mathcal{X}$, an adversarial example $\mathbf{x}^\prime$ satisfies $\mathcal{D}(\mathbf{x},\mathbf{x}^\prime)<\epsilon$ for some small $\epsilon>0$, and $f(\mathbf{x})\neq f(\mathbf{x}^\prime)$, where $\mathcal{D}(\cdot, \cdot)$ is some distance metric, {\it i.e.}, $\mathbf{x}^\prime$ is close to $\mathbf{x}$ but yields a different classification result. The distance is often described in terms of an $\ell_p$ metric, and in most of the literature $\ell_2$ and $\ell_\infty$ metrics are considered. In our development, we focus on the $\ell_2$ metric but also provide experimental results for $\ell_\infty$. More general definitions of adversarial examples are considered in some works \cite{unrestricted_advex_2018}, but we only address norm-bounded adversarial examples in this paper. \subsection{Adversarial Defense} Classification models that are robust to adversarial examples are referred to as adversarial defense models.
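For two discrete distributions the definition reduces to $D_\alpha(P\|Q)=\frac{1}{\alpha-1}\log\sum_i p_i^\alpha q_i^{1-\alpha}$, which is straightforward to evaluate numerically. A minimal sketch (the function name is ours):

```python
import numpy as np

def renyi_divergence(P, Q, alpha):
    """D_alpha(P || Q) = 1/(alpha-1) * log sum_i p_i^alpha * q_i^(1-alpha),
    for discrete distributions P, Q and order alpha > 1."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    return float(np.log(np.sum(P ** alpha * Q ** (1.0 - alpha))) / (alpha - 1.0))
```

As expected, the divergence vanishes when $P = Q$ and grows as the two distributions separate, which is the property the certified bound below exploits.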
We introduce the most advanced defense models in two categories. Empirically, the most successful defense model is based on adversarial training \cite{goodfellow2014explaining,madry2017towards}, that is, augmenting adversarial examples during training to help improve model robustness. TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization (TRADES) \cite{zhang2019theoretically} is a variety of adversarial training that introduces a regularization term for adversarial examples: $$\mathcal{L}=\mathcal{L}(f(\mathbf{x}),y)+\gamma\, \mathcal{L}(f(\mathbf{x}),f(\mathbf{x}_{\text{adv}}))$$ where $\mathcal{L}(\cdot,\cdot)$ is the cross-entropy loss. This defense model won 1st place in the NeurIPS 2018 Adversarial Vision Challenge (Robust Model Track) and has shown better performance compared to previous models \cite{madry2017towards}. On the other hand, the state-of-the-art approach for provable robustness is proposed by \cite{wong2018scaling}, where a dual network is considered for computing a bound for adversarial perturbation using linear-programming (LP), as in \cite{kolter2017provable}. They optimize the bound during training to achieve strong provable robustness. Although empirical robustness and provable robustness are often considered as orthogonal research directions, we propose an approach that provides both. In our experiments, presented in Sec. \ref{sec:exp}, we show our approach is competitive with both of the aforementioned methods. \section{Certified Robust Classifier} \label{sec:bound} We propose a framework that yields an upper bound on the tolerable size of attacks, enabling certified robustness on a classifier. Intuitively, our approach adds random noise to pixels of adversarial examples before classification, to eliminate the effects of adversarial perturbations.
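To make the surrogate loss concrete, the following is a minimal, framework-free sketch of a TRADES-style objective computed from the logits of a softmax classifier. It follows the displayed formula with plain cross-entropy for $\mathcal{L}(\cdot,\cdot)$; all function names are ours, and a real implementation would of course operate on batches inside a deep-learning framework.

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_i p_i log q_i for distributions p, q."""
    return float(-np.sum(np.asarray(p) * np.log(np.asarray(q) + eps)))

def trades_style_loss(logits_nat, logits_adv, y_onehot, gamma):
    """L = CE(f(x), y) + gamma * CE(f(x), f(x_adv)), per the formula above."""
    p_nat = softmax(logits_nat)
    p_adv = softmax(logits_adv)
    return cross_entropy(y_onehot, p_nat) + gamma * cross_entropy(p_nat, p_adv)
```

With $\gamma = 0$ this reduces to the standard cross-entropy loss; the second term penalizes disagreement between the classifier's outputs on natural and adversarial inputs.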
\begin{algorithm}[ht] \caption {\label{alg:1}Certified Robust Classifier} \begin{algorithmic}[1] \REQUIRE An input image $\mathbf{x}$; A standard deviation $\sigma>0$; A classifier $f$ over $\{1,\dots,k\}$; Number of iterations $n$ ($n=1$ is sufficient if only the robust classification $c$ is desired). \STATE Set $i=1$. \FOR{$i\in[n]$} \STATE Add i.i.d. Gaussian noise $N(0,\sigma^2)$ to each pixel of $\mathbf{x}$ and apply the classifier $f$ on it. Let the output be $c_i=f(\mathbf{x}+N(\mathbf{0},\sigma^2I))$. \ENDFOR \STATE Estimate the distribution of the output as $p_j=\frac{\#\{c_i=j:i=1,\dots,n\}}{n}$. \STATE Calculate the upper bound: $$L=\sup_{\alpha>1} \left(-\frac{2\sigma^2}{\alpha}\log\left(1-p_{(1)}-p_{(2)}+2\left(\frac{1}{2}\left(p_{(1)}^{1-\alpha}+p_{(2)}^{1-\alpha}\right)\right)^{\frac{1}{1-\alpha}}\right)\right)^{1/2}$$ where $p_{(1)}$ and $p_{(2)}$ are the first and the second largest values in $p_1,\dots,p_k$. \STATE {Return classification result $c=\text{argmax}_i\ p_i$ and the tolerable size of the attack $L$.} \end{algorithmic} \end{algorithm} Our approach is summarized in Algorithm \ref{alg:1}. In the following, we develop theory to prove the certified robustness of the proposed algorithm. Our goal is to show that if the classification of $\mathbf{x}$ in Algorithm~\ref{alg:1} is in class $c$, then for any examples $\mathbf{x}^\prime$ such that $\|\mathbf{x}-\mathbf{x}^\prime\|_2\leq L$, the classification of $\mathbf{x}^\prime$ is also in class $c$. To prove our claim, first recall that a stochastic classifier $f$ over $\{1,\dots,k\}$ has an output $f(\mathbf{x})$ corresponding to a multinomial distribution over $\{1,\dots,k\}$, with probabilities as $(p_1,\dots,p_k)$. 
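Before turning to the proof, the steps of Algorithm \ref{alg:1} can be sketched in plain Python. This is a hedged illustration, not the paper's implementation: the classifier \texttt{f}, the finite grid over $\alpha$ approximating the supremum, and the toy inputs are placeholders, and the empirical $p_{(1)}$, $p_{(2)}$ are used directly without the confidence-interval adjustment discussed later:

```python
import math
import random
from collections import Counter

def certified_classify(x, f, sigma, n=100):
    """Sketch of Algorithm 1: classify x under additive Gaussian pixel noise
    and return (majority class, certified l2 radius L)."""
    # Repeatedly classify noisy copies of x and tally the votes.
    votes = Counter(f([xi + random.gauss(0.0, sigma) for xi in x])
                    for _ in range(n))
    probs = sorted((c / n for c in votes.values()), reverse=True)
    p1 = probs[0]
    p2 = probs[1] if len(probs) > 1 else 0.0
    best = 0.0
    # Approximate the sup over alpha > 1 on a finite grid.
    for a in [1.0 + 0.05 * i for i in range(1, 400)]:
        if p2 > 0.0:
            m = (0.5 * (p1 ** (1 - a) + p2 ** (1 - a))) ** (1.0 / (1 - a))
        else:
            m = 0.0  # limiting value of the generalized mean when p_(2) = 0
        inner = 1.0 - p1 - p2 + 2.0 * m
        if inner <= 0.0:
            inner = 1e-12  # p_(1) ~ 1: the certificate is limited only by the grid
        if inner < 1.0:
            best = max(best, math.sqrt(-(2.0 * sigma ** 2 / a) * math.log(inner)))
    return votes.most_common(1)[0][0], best
```

A toy usage: with a threshold classifier on a two-pixel "image", the noisy majority vote recovers the clean label and returns a strictly positive certified radius.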
In this context, robustness to an adversarial example $\mathbf{x}^\prime$ generated from $\mathbf{x}$ means $\text{argmax}_i\ p_i=\text{argmax}_j\ p^\prime_j$ with $P\left(f(\mathbf{x})=i\right)=p_i$ and $P\left(f(\mathbf{x}^\prime)=j\right)=p_j^\prime$, where $P(\cdot)$ denotes the probability of an event. In the remainder of this section, we show that Algorithm \ref{alg:1} achieves such robustness based on the R\'{e}nyi divergence, starting with the following lemma. \begin{lemma} \label{lemma:1} Let $P=(p_1,\dots,p_k)$ and $Q=(q_1,\dots,q_k)$ be two multinomial distributions over the same index set $\{1,\dots,k\}$. If the indices of the largest probabilities do not match on $P$ and $Q$, that is, $\text{argmax}_i\ p_i \neq \text{argmax}_j\ q_j$, then $$D_\alpha(Q\|P)\geq -\log\left(1-p_{(1)}-p_{(2)}+2\left(\frac{1}{2}\left(p_{(1)}^{1-\alpha}+p_{(2)}^{1-\alpha}\right)\right)^{\frac{1}{1-\alpha}}\right)$$ where $p_{(1)}$ and $p_{(2)}$ are the largest and the second largest probabilities among the set of all $p_i$. \end{lemma} To simplify notation, we define $M_p(x_1,\dots,x_n)=(\frac{1}{n}\sum_{i=1}^n x_i^p)^{1/p}$ as the generalized mean. The right-hand side (RHS) of the bound in Lemma~\ref{lemma:1} then becomes $-\log\left(1-2M_1\left(p_{(1)},p_{(2)}\right)+2M_{1-\alpha}\left(p_{(1)},p_{(2)}\right)\right)$. Lemma \ref{lemma:1} establishes a lower bound on the R{\'e}nyi divergence required to change the index of the maximum of $P$, {\it i.e.}, for any distribution $Q$, if $D_\alpha(Q\|P) < -\log\left(1-2M_1\left(p_{(1)},p_{(2)}\right)+2M_{1-\alpha}\left(p_{(1)},p_{(2)}\right)\right)$, the indices of the maxima of $P$ and $Q$ must be the same. Based on Lemma \ref{lemma:1}, we obtain our main theorem on certified robustness as follows, validating our claim. \begin{theorem} \label{theo:main} Suppose we have $\mathbf{x}\in \mathcal{X}$, and a potential adversarial example $\mathbf{x}^\prime\in \mathcal{X}$ such that $\|\mathbf{x}-\mathbf{x}^\prime\|_2\leq L$.
Given a $k$-classifier $f:\mathcal{X}\rightarrow \{1,\dots,k\}$, let $f(\mathbf{x}+N(\mathbf{0},\sigma^2I))\sim (p_1,\dots,p_k)$ and $f(\mathbf{x}^\prime+N(\mathbf{0},\sigma^2I))\sim (p^\prime_1,\dots,p^\prime_k)$. If the following condition is satisfied, with $p_{(1)}$ and $p_{(2)}$ being the first and second largest probabilities in $\{p_i\}$: \begin{align*} \hspace{-0.2cm}\sup_{\alpha>1} -\frac{2\sigma^2}{\alpha}\log\left(1-2M_1\left(p_{(1)},p_{(2)}\right)+2M_{1-\alpha}\left(p_{(1)},p_{(2)}\right)\right) \geq L^2 \end{align*} then $\text{argmax}_i\ p_i=\text{argmax}_j\ p^\prime_j$. \end{theorem} The conclusion of Theorem \ref{theo:main} can be extended to the $\ell_1$ case by replacing Gaussian with Laplacian noise. Specifically, notice that the R{\'e}nyi divergence between two Laplacian distributions $\Lambda(x,\lambda)$ and $\Lambda(x^\prime, \lambda)$ is $$\frac{1}{\alpha-1}\log\left(\frac{\alpha}{2\alpha-1}\exp\left(\frac{(\alpha-1)\|x-x^\prime\|_1}{\lambda}\right)+\frac{\alpha-1}{2\alpha-1}\exp\left(-\frac{\alpha\|x-x^\prime\|_1}{\lambda}\right)\right)\xrightarrow{\alpha\rightarrow\infty}\frac{\|x-x^\prime\|_1}{\lambda}$$ Meanwhile, $-\log\left(1-2M_1\left(p_{(1)},p_{(2)}\right)+2M_{1-\alpha}\left(p_{(1)},p_{(2)}\right)\right)\xrightarrow{\alpha\rightarrow\infty} -\log(1-p_{(1)}+p_{(2)})$, thus we have the upper bound for the $\ell_1$ perturbation: \begin{theorem} In the same setting as in Theorem \ref{theo:main}, with $\|\mathbf{x}-\mathbf{x}^\prime\|_1\leq L$, let $f(\mathbf{x}+\Lambda(\mathbf{0},\lambda))\sim (p_1,\dots,p_k)$ and $f(\mathbf{x}^\prime+\Lambda(\mathbf{0}, \lambda))\sim (p^\prime_1,\dots,p^\prime_k)$. If $-\lambda\log(1-p_{(1)}+p_{(2)}) \geq L$ is satisfied, then $\text{argmax}_i\ p_i=\text{argmax}_j\ p^\prime_j$. \end{theorem} In the rest of this paper, we focus on the $\ell_2$ norm with Gaussian noise, but most conclusions are also applicable to the $\ell_1$ norm with Laplacian noise.
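The $\alpha\rightarrow\infty$ limit used above is easy to check numerically. The sketch below (with illustrative probabilities $p_{(1)}=0.8$, $p_{(2)}=0.1$) verifies that the generalized-mean expression approaches $-\log(1-p_{(1)}+p_{(2)})$ as $\alpha$ grows:

```python
import math

def gen_mean(p, xs):
    """Generalized mean M_p(x_1, ..., x_n) = ((1/n) sum_i x_i^p)^(1/p)."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

def divergence_bound(p1, p2, alpha):
    """-log(1 - 2 M_1(p1, p2) + 2 M_{1-alpha}(p1, p2)), the RHS of Lemma 1."""
    return -math.log(1.0 - 2.0 * gen_mean(1.0, [p1, p2])
                     + 2.0 * gen_mean(1.0 - alpha, [p1, p2]))

def l1_radius(p1, p2, lam):
    """Certified l1 radius with Laplacian noise of scale lam."""
    return -lam * math.log(1.0 - p1 + p2)
```

Since $M_{1-\alpha}$ decreases toward $\min(p_{(1)},p_{(2)})=p_{(2)}$ as $\alpha\rightarrow\infty$, the bound increases monotonically toward the limit $-\log(1-p_{(1)}+p_{(2)})$.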
A more comprehensive analysis for the $\ell_1$ norm can be found in \cite{anonymous2020ell}. Interestingly, they have proved that the bound $-\lambda\log(1-p_{(1)}+p_{(2)})$ is tight in the binary case for the $\ell_1$ norm \cite{anonymous2020ell}. With Theorem \ref{theo:main}, we can enable certified $\ell_2$ ($\ell_1$) robustness on any classifier $f$ by adding i.i.d. Gaussian (Laplacian) noise to pixels of inputs during testing, as done in Algorithm~\ref{alg:1}. It provides an upper bound for the tolerable size of attacks for a classifier, {\it i.e.}, as long as the perturbation size is less than the upper bound (the ``$\sup$'' part in Theorem~\ref{theo:main}), any adversarial sample can be classified correctly. \paragraph{Confidence Interval and Sample Size} In practice we can only estimate $p_{(1)}$ and $p_{(2)}$ from samples, thus the obtained bound is not precise and requires adjustment. Note that ($p_1,\dots,p_k$) forms a multinomial distribution, and therefore the confidence intervals for $p_{(1)}$ and $p_{(2)}$ can be estimated using a one-sided Clopper-Pearson interval along with a Bonferroni correction. We refer to \cite{lecuyer2018certified} for further details. In all our subsequent experiments, we use the end points (lower for $p_{(1)}$ and upper for $p_{(2)}$) of the $95\%$ confidence intervals for estimating $p_{(1)}$ and $p_{(2)}$, and multiply the corresponding accuracy by $95\%$. Moreover, the estimates for the confidence intervals are more precise when we increase the sample size $n$, but at the cost of extra computational burden. In practice, we find a sample size of $n=100$ is sufficient. \paragraph{Choice of $\sigma$} The formula for our bound indicates that a higher standard deviation $\sigma$ results in a higher bound. In practice, however, a larger amount of added noise also leads to higher classification error and a smaller gap between $p_{(1)}$ and $p_{(2)}$, which lowers the bound. Therefore, the best choice of $\sigma^2$ is not obvious.
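The one-sided Clopper-Pearson adjustment mentioned above can be sketched with a simple bisection on the binomial tail. This is a hedged stand-in for the procedure of \cite{lecuyer2018certified} (which additionally applies a Bonferroni correction across the classes); the numbers in the usage below are illustrative:

```python
import math

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p); increasing in p."""
    return sum(math.comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))

def clopper_pearson_lower(k, n, alpha=0.05):
    """One-sided (1 - alpha) lower confidence bound for a binomial
    proportion: the smallest p with P(Bin(n, p) >= k) >= alpha."""
    if k == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisection on the monotone tail probability
        mid = 0.5 * (lo + hi)
        if binom_tail(k, n, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, observing $\hat{p}_{(1)}=90/100$ noisy classifications of the top class gives a conservative lower estimate of $p_{(1)}$ around $0.84$ to plug into the certified bound.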
We will demonstrate the effect of different choices of $\sigma^2$ in the experiments of Sec. \ref{sec:exp}. \section{Improved Certified Robustness} Based on the properties of the {\em generalized mean}, one can show that the upper bound is larger when the difference between $p_{(1)}$ and $p_{(2)}$ becomes larger. This is consistent with the intuition that a larger difference between $p_{(1)}$ and $p_{(2)}$ indicates more confident classification. In other words, more confident and accurate prediction in the presence of additive Gaussian noise, in the sense that $p_{(1)}$ is much larger than $p_{(2)}$, leads to better certified robustness. In this way, our analysis establishes a connection between robustness to adversarial examples and robustness to added random noise. Such a connection is beneficial, because robustness to additive Gaussian noise is much easier to achieve than robustness to carefully crafted adversarial examples. Consequently, it is natural to consider improving the adversarial robustness of a model by first improving its robustness to added random noise. In the context of Algorithm \ref{alg:1}, we aim to improve the robustness of $f$ to additive random noise. Note that improving robustness to added Gaussian noise as a way of improving adversarial robustness has been proposed by \cite{zantedeschi2017efficient} and was later shown ineffective \cite{carlini2017magnet}. Our method is different in that it requires added Gaussian noise during the testing phase, and more importantly it is supported theoretically. There have been notable efforts to develop neural networks that are robust to added random noise \citep{xie2012image,zhang2017beyond}; yet, these methods failed to defend against adversarial attacks, as they are not particularly designed for this task.
Within our framework, Algorithm \ref{alg:1} places no constraint on the classifier $f$, which gives us the flexibility to modify $f$; we can therefore adapt these methods to improve the accuracy of classification when Gaussian noise is present, hence improving the robustness to adversarial attacks. In this paper, we only discuss stability training, but a much broader literature exists on robustness to added random noise \cite{xie2012image,zhang2017beyond}. \subsection{Stability Training} The idea of introducing perturbations during training to improve model robustness has been studied widely. In \citep{bachman2014learning} the authors considered perturbing models as a construction of pseudo-ensembles, to improve semi-supervised learning. More recently, \cite{zheng2016improving} used a similar training strategy, named stability training, to improve classification robustness on noisy images. For any natural image $\mathbf{x}$, stability training encourages its perturbed version $\mathbf{x}^\prime$ to yield a similar classification result under a classifier $f$, {\it i.e.}, $D(f(\mathbf{x}),f(\mathbf{x}^\prime))$ is small for some distance measure $D$. Specifically, given a loss function $\mathcal{L}^*$ for the original classification task, stability training introduces a regularization term $\mathcal{L}(\mathbf{x},\mathbf{x}^\prime)=\mathcal{L}^*+\gamma \mathcal{L}_{\text{stability}}(\mathbf{x},\mathbf{x}^\prime)=\mathcal{L}^*+\gamma D(f(\mathbf{x}),f(\mathbf{x}^\prime))$, where $\gamma$ controls the strength of the stability term. As we are interested in a classification task, we use cross-entropy as the distance $D$ between $f(\mathbf{x})$ and $f(\mathbf{x}^\prime)$, yielding the stability loss $ \mathcal{L}_\text{stability}=-\sum_j P(y_j|\mathbf{x})\log P(y_j|\mathbf{x}^\prime)$, where $P(y_j|\mathbf{x})$ and $P(y_j|\mathbf{x}^\prime)$ are the probabilities generated after softmax. In this paper, we add i.i.d.\!
Gaussian noise to each pixel of $\mathbf{x}$ to construct $\mathbf{x}^\prime$, as suggested in \citep{zheng2016improving}. Stability training is in the same spirit as adversarial training, but is only designed to improve the classification accuracy under a Gaussian perturbation. Within our framework, we can apply stability training to $f$, to improve the robustness of Algorithm \ref{alg:1} against adversarial perturbations. We call the resulting defense method Stability Training with Noise (STN). \paragraph{Adversarial Logit Pairing} Adversarial Logit Pairing (ALP) was proposed in \cite{kannan2018adversarial}; it adds $D(f(\mathbf{x}),f(\mathbf{x}^\prime))$ as the regularizer, with $\mathbf{x}^\prime$ being an adversarial example. Subsequent work has shown ALP fails to obtain adversarial robustness \cite{engstrom2018evaluating}. Our method is different from ALP and any other regularizer-based approach, as the key component in our framework is the added Gaussian noise at the testing phase of Algorithm \ref{alg:1}, while stability training is merely a technique for improving the robustness further. We do not claim stability training alone yields adversarial robustness. \section{Experiments} \label{sec:exp} We perform experiments on the MNIST and CIFAR-10 datasets, to evaluate the theoretical and empirical performance of our methods. We subsequently also consider the larger ImageNet dataset. For the MNIST dataset, the model architecture follows the models used in \citep{tramer2017ensemble}, which contains two convolutional layers, each with $64$ filters, followed by a fully connected layer of size $128$. For the CIFAR-10 dataset, we use a convolutional neural network with seven convolutional layers along with MaxPooling. In both datasets, image intensities are scaled to $[0,1]$, and the sizes of attacks are rescaled accordingly. For reference, a distortion of $0.031$ in the $[0,1]$ scale corresponds to $8$ in the $[0,255]$ scale.
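For reference, the stability loss defined in the previous subsection is just a cross-entropy between the softmax outputs on the clean and the noise-perturbed input. A minimal self-contained sketch follows; the logit values used in the check are illustrative:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def stability_loss(logits_clean, logits_noisy):
    """L_stability = -sum_j P(y_j | x) log P(y_j | x'), with P given by softmax."""
    p = softmax(logits_clean)
    q = softmax(logits_noisy)
    return -sum(pj * math.log(max(qj, 1e-12)) for pj, qj in zip(p, q))
```

During training this term, scaled by $\gamma$, would be added to the task loss $\mathcal{L}^*$, with the noisy input constructed as $\mathbf{x}^\prime=\mathbf{x}+N(\mathbf{0},\sigma^2 I)$.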
The source code can be found at \url{https://github.com/Bai-Li/STN-Code}. \subsection{Theoretical Bound} With Algorithm \ref{alg:1}, we are able to classify a natural image $\mathbf{x}$ and calculate an upper bound for the tolerable size of attacks $L$ for this particular image. Thus, with a given size of the attack $L^*$, the classification must be robust if $L^*<L$. If a natural example is correctly classified and robust for $L^*$ simultaneously, any adversarial examples $\mathbf{x}^\prime$ with $\|\mathbf{x}-\mathbf{x}^\prime\|_2<L^*$ will be classified correctly. Therefore, we can determine the proportion of such examples in the test set as \textbf{a lower bound of accuracy} given size $L^*$. We plot in Figure \ref{fig:result0} different lower bounds for various choices of $\sigma$ and $L^*$, for both MNIST and CIFAR-10. To interpret the results, for example on MNIST, when $\sigma=0.7$, Algorithm \ref{alg:1} achieves at least $51\%$ accuracy under any attack whose $\ell_2$-norm size is smaller than $1.4$. \begin{figure}[ht] \centering \includegraphics[width=0.8\textwidth]{result0.eps} \caption{Accuracy lower bounds for \textbf{MNIST} (left) and \textbf{CIFAR-10} (right). We test various choices of $\sigma$ in Algorithm \ref{alg:1}. For reference, we include results for PixelDP (green) and the lower bound without stability training (orange).} \label{fig:result0} \end{figure} When $\sigma$ is fixed, there exists a threshold beyond which the certified lower bound degenerates to zero. The larger the deviation $\sigma$ is, the later the degeneration happens. However, larger $\sigma$ also leads to a worse bound when $L^*$ is small, as the added noise reduces the accuracy of classification. As an ablation study, we include the corresponding lower bound without stability training. The improvement due to stability training is significant. 
In addition, to demonstrate the improvement of the certified bound compared to PixelDP \cite{lecuyer2018certified}, we also include the results for PixelDP. Although PixelDP also has tuning parameters $\delta$ and $\epsilon$ similar to $\sigma$ in our setting, we only include the optimal pair of parameters found by grid search, for simplicity of the plots. One observes the accuracy lower bounds for PixelDP (green) are dominated by our bounds. We also compare STN with the approach from \cite{wong2018scaling}. Besides training a single robust classifier, \cite{wong2018scaling} also proposed a strategy of training a sequence of robust classifiers as a cascade model which results in better provable robustness, although it reduces the accuracy on natural examples. We compare both to STN in Table \ref{table:bound}. Since both methods show a clear trade-off between the certified bound and corresponding robustness accuracy, we include the certified bounds and corresponding accuracy in parenthesis for both models, along with the accuracy on natural examples. \begin{table}[ht] \caption{\label{table:bound} Comparison on MNIST and CIFAR-10. 
The numbers ``$a ~(b\%)$'' mean a certified bound $a$ with the corresponding accuracy $b\%$.} \centering \begin{tabular}{c||c|c|c|c}\hline &\multicolumn{2}{ c |}{\textbf{MNIST}} & \multicolumn{2}{c}{\textbf{CIFAR-10}} \\\hline Model&Robust Accuracy & Natural & Robust Accuracy & Natural\\\hline \cite{wong2018scaling} (Single) & 1.58 (43.5\%)& 88.2\% & 36.00 (53.0\%)& 61.2\%\\ \cite{wong2018scaling} (Cascade) & \textbf{1.58 (74.6\%)}& 81.4\% & 36.00 (58.7\%)& 68.8\%\\ STN & 1.58 (69.0\%) & \textbf{98.9\%}& \textbf{36.00 (65.6\%)}&\textbf{80.5\%}\\\hline \end{tabular} \end{table} \begin{wrapfigure}{r}{0.30\textwidth} \centering \vspace{-.40cm} \includegraphics[width=0.30\textwidth]{imagenet.eps} \caption{Comparison of the certified bound from STN (orange) and PixelDP (blue) on ImageNet.} \label{fig:imagenet} \end{wrapfigure} Our bound is close to the one from \cite{wong2018scaling} on MNIST, and becomes better on CIFAR-10. In addition, since the training objective of \cite{wong2018scaling} is particularly designed for provable robustness and depends on the size of attacks, its accuracy on the natural examples decreases drastically when accommodating stronger attacks. On the other hand, STN is capable of keeping a high accuracy on natural examples while providing strong robustness guarantees. \paragraph{Certified Robustness on ImageNet} As our framework adds almost no extra computational burden on the training procedure, we are able to compute accuracy lower bounds for ImageNet \cite{imagenet_cvpr09}, a large-scale image dataset that contains over 1 million images and 1,000 classes. We compare our bound with PixelDP in Figure \ref{fig:imagenet}. Clearly, our bound is higher than the one obtained via PixelDP. \subsection{Empirical Results} We next perform classification and measure the accuracy on real adversarial examples to evaluate the empirical performance of our defense methods. For each pair of attacks and defense models, we generate a robust accuracy vs. 
perturbation size curve for a comprehensive evaluation. We compare our method to TRADES on MNIST and CIFAR-10. Although we have emphasized the theoretical bound of the defense, the empirical performance is promising as well. More details and results of the experiments, such as for gradient-free attacks, are included in the Appendix. \paragraph{Avoid Gradient Masking} A defensive model incorporating randomness may make it difficult to apply standard attacks, by causing gradient masking as discussed in \cite{athalye2018obfuscated}, thus achieving robustness unfairly. To ensure the robustness of our approach is not due to gradient masking, we use the expectation of the gradient with respect to the randomization when estimating gradients, to ensemble over randomization and eliminate the effect of randomness, as recommended in \cite{athalye2018obfuscated,carlini2019evaluating}. In particular, the gradient is estimated as $\mathbb{E}_{\mathbf{r}\sim N(0,\sigma^2I)}\left[\nabla_{\mathbf{x}+\mathbf{r}}L(\theta,\mathbf{x}+\mathbf{r},y)\right]\approx \frac{1}{n_0}\sum_{i=1}^{n_0} \left[\nabla_{\mathbf{x}+\mathbf{r}_i}L(\theta,\mathbf{x}+\mathbf{r}_i,y)\right]$, where the $\mathbf{r}_i$'s are i.i.d. samples from the $N(\mathbf{0},\sigma^2I)$ distribution, and $n_0$ is the number of samples. We assume threat models are aware of the value of $\sigma$ in Algorithm 1 and use the same value for attacks. \paragraph{White-box Attacks} For $\ell_\infty$ attacks, we use Projected Gradient Descent (PGD) attacks \cite{madry2017towards}. PGD constructs an adversarial example by iteratively updating the natural input along the sign of its gradient and projecting it into the constrained space, to ensure it is a valid input. For $\ell_2$ attacks, we perform a Carlini \& Wagner attack \cite{carlini2017towards}, which constructs an adversarial example via solving an optimization problem for minimizing distortion distance and maximizing classification error.
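The ensemble-over-randomization gradient estimate displayed above can be sketched as a Monte-Carlo average. This is a hedged illustration: the toy quadratic loss and its analytic gradient stand in for a network's loss gradient, and the sample count is arbitrary:

```python
import random

def eot_gradient(grad_fn, x, sigma, n0=1000, seed=0):
    """Monte-Carlo estimate of E_{r ~ N(0, sigma^2 I)}[grad L(x + r)]:
    average the gradient over n0 randomly perturbed copies of x."""
    rng = random.Random(seed)
    d = len(x)
    acc = [0.0] * d
    for _ in range(n0):
        perturbed = [xi + rng.gauss(0.0, sigma) for xi in x]
        g = grad_fn(perturbed)
        for j in range(d):
            acc[j] += g[j] / n0
    return acc

# Toy check: for L(x) = sum_i x_i^2, grad L = 2x, and since E[r] = 0 the
# averaged gradient at x + r should concentrate around 2x.
```

An attacker would feed this averaged gradient into PGD or Carlini \& Wagner updates in place of a single noisy gradient.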
We also use a technique from \cite{1905.06455} that has been shown to be more effective against adversarially trained models, where the gradients are estimated as the average of gradients over multiple randomly perturbed samples. This yields a variant of the Carlini \& Wagner attack with the same form as the ensemble-over-randomization mentioned above, making it a particularly fair choice for evaluation. The results for white-box attacks are illustrated in Figure \ref{fig:main}. \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{main_plot.eps} \caption{\textbf{MNIST and CIFAR-10}: Comparisons of the adversarial robustness of TRADES and STN with various attack sizes for both $\ell_2$ and $\ell_\infty$. The plots are ordered as: MNIST($\ell_2$), MNIST($\ell_\infty$), CIFAR-10($\ell_2$), and CIFAR-10($\ell_\infty$). Both white-box (solid lines) and black-box attacks (dashed lines) are considered.} \label{fig:main} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\textwidth]{sigma_plot.eps} \caption{Robust accuracy of STN with different choices of $\sigma$ for both $\ell_2$ and $\ell_\infty$ attacks. The plots are ordered as: MNIST($\ell_2$), MNIST($\ell_\infty$), CIFAR-10($\ell_2$), and CIFAR-10($\ell_\infty$).} \label{fig:sigma} \end{figure} \paragraph{Black-box Attacks} To better understand the behavior of our methods and to further ensure there is no gradient masking, we include results for black-box attacks. After comparison, we find that the adversarial examples generated from Madry's model \cite{madry2017towards} result in the strongest black-box attacks for both TRADES and STN. Therefore, we apply the $\ell_2$ and $\ell_\infty$ white-box attacks to Madry's model and test the resulting adversarial examples on TRADES and STN. The results are reported as the dashed lines in Figure \ref{fig:main}. \paragraph{Summary of Results} Overall, STN shows a promising level of robustness, especially regarding $\ell_2$-bounded distortions, as anticipated.
One observes that STN performs slightly worse than TRADES when the size of attacks is small, and becomes better when the size increases. Intuitively, for small attacks the added random noise dominates the loss of accuracy, while against stronger attacks it becomes beneficial. It is worth noting that Algorithm 1 adds almost no computational burden, as it only requires multiple forward passes, and stability training only requires augmenting randomly perturbed examples. On the other hand, TRADES is extremely time-consuming, due to the iterative construction of adversarial examples. \paragraph{Choice of $\sigma$} In previous experiments, we use $\sigma=0.7$ and $\sigma=\frac{100}{255}$ for MNIST and CIFAR-10, respectively. However, the choice of $\sigma$ plays an important role, as shown in Figure \ref{fig:result0}; therefore, we study in Figure \ref{fig:sigma} how different values of $\sigma$ affect the empirical robust accuracy. The results make it clearer that the noise hurts when small attacks are considered, but helps against large attacks. Ideally, using an adaptive amount of noise, letting the amount of added noise grow with the size of attacks, could lead to better empirical results, yet this is impractical as the size of attacks is unknown beforehand. In addition, we include results for $\sigma=0$, which is equivalent to a model without additive Gaussian noise. Its vulnerability indicates that the additive Gaussian noise is essential to our framework. \section{Comparison to \cite{cohen2019certified}} Following this work, \cite{cohen2019certified} proposed a tighter bound in the $\ell_2$ norm than the one in Section \ref{sec:bound}. Although they do indeed improve our bound, our work has unique contributions in several ways: $(i)$ We propose stability training to improve the bound and robustness, while they only use Gaussian augmentation. In general, stability training works better than Gaussian augmentation, as shown in Figure \ref{fig:result0}.
Thus, stability training is an important and unique contribution of this paper. $(ii)$ We conduct empirical evaluation against real attacks and compare to the state-of-the-art defense method (adversarial training) to show our approach is competitive. \cite{cohen2019certified} only discusses the certified bound and does not provide evaluation against real attacks. $(iii)$ The analysis from \cite{cohen2019certified} is difficult to extend to other norms, because it requires isotropy. On the other hand, ours leads to a tight certified $\ell_1$ bound by adding Laplacian noise, as discussed in Section \ref{sec:bound}. \section{Conclusions} We propose an analysis for constructing defensive models with certified robustness. Our analysis leads to a connection between robustness to adversarial attacks and robustness to additive random perturbations. We then propose a new strategy based on stability training for improving the robustness of the defense models. The experimental results show our defense model provides competitive provable robustness and empirical robustness compared to the state-of-the-art models. It especially yields strong robustness when strong attacks are considered. There is a noticeable gap between the theoretical lower bounds and the empirical accuracy, indicating that the proposed upper bound might not be tight, or that the empirical accuracy may drop under stronger attacks yet to be developed, as has happened to many defense models. We believe each explanation points to a direction for future research. \paragraph{Acknowledgments} This research was supported in part by DARPA, DOE, NIH, NSF and ONR.
\section{Introduction} Large dimensional random matrices play important roles in high dimensional statistics and machine learning theory. Given a data matrix $Y,$ researchers are interested in understanding the behavior of the singular values and vectors of $Y,$ or equivalently, the eigenvalues and eigenvectors of $YY^*$ and $Y^*Y.$ In particular, the study of the first few largest eigenvalues and their associated eigenvectors, which are closely related to principal component analysis (PCA) \cite{jolliffe2013principal}, has drawn great attention. The sample covariance matrix \cite{yao2015large} plays a central role in the context of modern statistical learning theory, where the data matrix $Y$ is assumed to be $Y=A^{1/2}X$ for some positive definite population covariance matrix $A$, and $X=(x_{ij})$ is the main random source, where the $x_{ij}$'s are i.i.d. centered random variables. An extension of the sample covariance matrix is the separable covariance matrix \cite{DYaos,PAUL200937,yang2019}, where the data matrix is $Y=A^{1/2}XB^{1/2}$ with another positive definite matrix $B$. In spatiotemporal data analysis, $A$ and $B$ are respectively the spatial and temporal covariance matrices. Even though the assumption that $X$ has i.i.d. entries is popular and useful in the literature, it precludes applications involving other types of random matrices. An important example is the Haar distributed random matrices. We mention that Haar distributed random matrices have been used in the study of high dimensional statistics and machine learning theory; for instance, see \cite{ELnips,nips20201,Liu2020Ridge,2020arXiv200500511Y}. In this paper, we consider $X=U$ to be an $N \times N$ random Haar unitary or orthogonal matrix and assume that \begin{equation}\label{eq_defndatamatrixtype} Y=A^{1/2}UB^{1/2}. \end{equation} In this sense, we introduce a new class of separable random matrices.
The data matrix (\ref{eq_defndatamatrixtype}) has appeared in high dimensional data analysis, for instance, in data acquisition and sensor fusion \cite{DW} and matrix denoising \cite{7587390,bun2017}. We remark that when $X=U$ is a Haar random matrix, all the discussions in \cite{DYaos,yang2019} become invalid since the independence structure of $X$ is violated. The study of the singular values of $Y$ is closely related to the free convolution of the empirical spectral distributions (ESDs) of $A$ and $B,$ denoted as $\mu_A$ and $\mu_B,$ respectively. More specifically, $YY^*$ has the same eigenvalues as the product of random matrices in generic position, i.e., \begin{equation}\label{defn_freemultiplicationmodel} H=AUBU^*, \end{equation} whose global law, i.e., the ESD of $H,$ has been identified in the literature of free probability theory. In the influential work \cite{Voiculescu1991}, Voiculescu studied the limiting spectral distribution of the eigenvalues of $H$ and showed that it is given by the free multiplicative convolution of $\mu_A$ and $\mu_B,$ denoted as $\mu_A \boxtimes \mu_B$; see Definition \ref{def:freeconv} for a precise definition. More recently, in \cite{JHC}, the authors investigated the behavior of $\mu_A \boxtimes \mu_B$ by analyzing a system of deterministic equations, known as \emph{subordination equations}, that define the free convolution; see (\ref{eq_suborsystemPhi}) for details. They also proved that under certain regularity assumptions, the density of $\mu_A \boxtimes \mu_B$ has a regular square root behavior near the edge of its support. In this paper, we first study the singular values and vectors of $Y$ of the form (\ref{eq_defndatamatrixtype}), assuming that $A$ and $B$ are positive definite matrices whose ESDs behave regularly; see Assumptions \ref{assu_limit} and \ref{assu_esd} for more details. We establish the local laws for $Y,$ and prove eigenvalue rigidity and eigenvector delocalization results.
Motivated by the applications in statistics, we focus our discussion both near and far away from the rightmost edge of the support of $\mu_A \boxtimes \mu_B.$ Then we study a deformation of $Y$ in the sense that a few spikes (i.e., eigenvalues detached from the bulk of the spectrum) are added to $A$ or $B,$ i.e., the spiked or deformed invariant model \cite{outliermodel} \begin{equation}\label{eq_modespikeddefn} \widehat{Y}=\widehat{A}^{1/2} U \widehat{B}^{1/2}, \end{equation} where $\widehat{A}$ and $\widehat{B}$ are finite-rank perturbations of $A$ and $B,$ respectively; see equation (\ref{eq_spikes}) and Section \ref{sec_spikedmodel} for more precise definitions. For (\ref{eq_modespikeddefn}), we study the first order convergent limits and rates for the outlier eigenvalues and their associated generalized components in very general settings. Our results can be used for statistical estimation for the models (\ref{eq_defndatamatrixtype}) and (\ref{eq_modespikeddefn}) and provide insights for statistical learning problems involving Haar random matrices. We point out that the addition of random matrices in generic position, i.e., $A+UBU^*,$ has been studied in a series of papers \cite{Bao-Erdos-Schnelli2016,BAO3,BAO2,bao2019,BEC}. As mentioned in \cite{BEC}, the results demonstrate that the Haar randomness in the additive model has an analogous behavior to the Wigner matrices \cite{2017dynamical} in the sense of the strong concentration of the eigenvalues and eigenvectors. In this spirit, the Haar randomness in our multiplicative models, both (\ref{eq_defndatamatrixtype}) and (\ref{eq_modespikeddefn}), has a similar behavior to the separable covariance matrices in \cite{yang2019, DYaos}. However, on the technical level, as we mentioned earlier, the techniques developed in \cite{yang2019, DYaos} cannot be applied since the main source of randomness is a Haar matrix. To address this issue, we use the technique of partial randomness decomposition (cf.
(\ref{eq_prd})) as developed in \cite{Bao-Erdos-Schnelli2016,BAO3,BAO2,bao2019,BEC}. It is the counterpart of the Schur complement in \cite{yang2019, DYaos}. In this sense, our current paper is in the research line of studying random matrices whose main random resources are Haar matrices. We expect these would be welcome results for researchers who are interested in the intersection of free probability and random matrices. The core of our proof is a sophisticated analysis of the subordination functions and their random equivalents in terms of the resolvents of $H$. Inspired by the arguments in \cite{JHC}, in the current paper, instead of using the subordination functions originally introduced in \cite{Voiculescu1987}, we use their variants in terms of the $\mathsf{M}$-transform (cf. Definition \ref{defn_transform}). One advantage of using the $\mathsf{M}$-transform is that it provides a simple way to write down the approximate subordination functions (cf. Definition \ref{defn_asf}) in terms of $(H-z)^{-1}, z \in \mathbb{C}_+$, which in turn offers the deterministic limits of the resolvents in terms of the subordination functions (cf. (\ref{eq:main}) and (\ref{eq:main1})). Similar to the discussion of the additive model in \cite{BEC}, for the rigorous proof, we need to conduct a stability analysis for the system of subordination equations and control the errors between the subordination functions and their approximates. The stability analysis utilizes Kantorovich's theorem and the regularity behavior of the subordination functions. To control the error, we first use the partial randomness decomposition to explore some hidden relations, for instance, the error expression between the subordination functions and their approximates in terms of the resolvents; see (\ref{eq:Lambda}) and (\ref{eq:BGii-S}) for an illustration. Then we use the device of integration by parts to start the recursive estimates.
As mentioned in \cite{BEC}, the weights in the fluctuation averaging mechanisms need to be properly chosen. In our case, these weights (cf. (\ref{eq_optimalfaquantitiescoeff})) can be easily constructed using the hidden identities obtained earlier. We emphasize that in the current paper, we basically follow the proof idea in \cite{BEC} for the additive model. However, as pointed out in \cite{JHC}, there exist many differences and extra technical difficulties for our multiplicative model; for instance, the counterpart of the $\mathsf{M}$-transform in \cite{BEC} is simply the negative reciprocal Stieltjes transform, which makes their calculation easier than ours. Compared to \cite{BEC}, we also establish the bounds for the off-diagonal entries of the resolvents by using the recursive estimate procedure. Such controls were only proved in \cite{BAO2} for the additive model in the bulk case, using a more complicated discussion. Moreover, we also establish the local laws for the spectral parameter far away from the edge and all the way down to the real axis. The counterpart for the additive model is only proved for the averaged local law in \cite{BEC}. Once we have established the local law outside the bulk spectrum, we can use it to study the model (\ref{eq_modespikeddefn}). It is notable that in \cite{outliermodel}, the authors have studied the limits of the eigenvalues and eigenvectors of $\widehat{Y} \widehat{Y}^*$ to some extent for the Haar unitary matrices under stronger assumptions; see (\ref{eq_strongone}), (\ref{eq_strongtwo}) and Remark \ref{eq_strongeigenvectorassumption}. In the current paper, we greatly improve the results of \cite{outliermodel} by, on the one hand, establishing the convergence rates, which we believe are optimal up to some $N^{\epsilon}$ factor for some small $\epsilon>0$, and, on the other hand, considering more general assumptions; see Assumption \ref{assum_outlier}.
We mention that our results can be used for statistical estimation in the models (\ref{eq_defndatamatrixtype}) and (\ref{eq_modespikeddefn}). For instance, our local laws, Theorems \ref{thm:main} and \ref{thm_outlierlocallaw}, can be used to estimate the subordination functions, which in general are difficult to calculate. Moreover, Theorem \ref{thm_outlier} can be used to estimate the values of the spikes once we have obtained approximations for the subordination functions, and Theorem \ref{thm_outlieereigenvector} can be used to estimate the number of spikes. All these results can provide useful insights for random matrices involving Haar randomness in statistical learning theory. These will be discussed in Section \ref{sec_statapplication}. Finally, we believe that both the results and the techniques established in this paper can be employed to study other problems, for instance, the Tracy--Widom distribution for the largest eigenvalue of $H$ in (\ref{defn_freemultiplicationmodel}) and the second-order asymptotics of the outlying eigenvalues and eigenvectors of $\widehat{Y}\widehat{Y}^*$ in (\ref{eq_modespikeddefn}). We will pursue these topics in future works. The rest of the paper is organized as follows. In Section \ref{sec:mainresult}, we introduce the necessary notations and state the main results: Section \ref{sec:subsecnotationandassumption} introduces the notations and assumptions, Section \ref{sec_locallawresults} states the main results on the local laws, eigenvalue rigidity and eigenvector delocalization, Section \ref{sec_spikedmodel} explains the results for the spiked model (\ref{eq_modespikeddefn}), and some discussions on the statistical applications are presented in Section \ref{sec_statapplication}. In Section \ref{sec_subordiationproperties}, we prove the properties of the free multiplicative convolution and the subordination functions.
Section \ref{sec_proofofspikedmodel} is devoted to the proof of the results regarding the spiked model (\ref{eq_modespikeddefn}) and Section \ref{sec_proofofceonsequenceslocallaw} proves the eigenvalue rigidity and eigenvector delocalization. The other sections are devoted to the proof of the local laws: Section \ref{sec:pointwiselocallaw} proves the local laws for fixed spectral parameter, Section \ref{sec_faall} proves the results of fluctuation averaging and finally, Section \ref{sec_finalsection} establishes the local laws. We put additional technical results in the appendices: Some derivative formulas which will be used through the integration by parts are collected in Appendix \ref{appendix_deriavtive}, some auxiliary lemmas regarding large deviation estimates and stability analysis are provided in Appendix \ref{append:A}, the continuity arguments for generalizing the domain of spectral parameter are offered in Appendix \ref{append:C} and some additional lemmas are proved in Appendix \ref{sec_additional}. \vspace{3pt} \noindent {\bf Conventions.} For $M,N\in\N$, we denote the set $\{k\in\N:M\leq k\leq N\}$ by $\llbra M,N\rrbra$. For $N\in\N$ and $i\in\llbra 1,N\rrbra$, we denote by $\mathbf{e}_{i}^{(N)}$ the $(N\times 1)$ column vector with $(\mathbf{e}_{i})_{j}=\delta_{ij}$. We will often omit the superscript $N$ to write $\mathbf{e}_{i}^{(N)}\equiv \mathbf{e}_{i}$ as long as there is no confusion. We use $I$ for the identity matrix of any dimension without causing any confusion. We use $\mathbb{I}$ for the indicator function. For $N$-dependent nonnegative numbers $a_{N}$ and $b_{N}$, we write $a_{N}\sim b_{N}$ if there exists a constant $C>1$ such that $C^{-1}b_{N}\leq \absv{a_{N}}\leq Cb_{N}$ for all sufficiently large $N$. Also we write $a_{N}=\rO(b_{N})$ or $a_{N}\ll b_{N}$ if $a_{N}\leq Cb_{N}$ for some constant $C>0$ and all sufficiently large $N$. 
Finally, we write $a_{N}=\ro(b_{N})$ if for all small $\epsilon>0$ there exists $N_{0}$ such that $a_{N}\leq \epsilon b_{N}$ for all $N\geq N_{0}$. We write $a_N \lesssim b_N$ if $a_N=\rO(b_N).$ Let $\bm{g}=(g_1,\cdots, g_N)$ be an $N$-dimensional real or complex Gaussian vector. We write $\bm{g} \sim \mathcal{N}_{\mathbb{R}}(0,\sigma^2 I_N)$ if $g_1,\cdots, g_N$ are i.i.d. $\mathcal{N}(0,\sigma^2)$ random variables; and we write $\bm{g} \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2 I_N)$ if $g_1, \cdots, g_N$ are i.i.d. $\mathcal{N}_{\mathbb{C}}(0,\sigma^2)$ variables, where $g_i \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2)$ means that $\re g_i$ and $\im g_i$ are independent $\mathcal{N}(0,\frac{\sigma^2}{2})$ random variables. We use $\mathbb{C}_+$ to denote the complex upper half plane. For any matrix $A,$ we denote its operator norm by $\norm{A}$ and its Hilbert--Schmidt norm by $\norm{A}_{\text{HS}}.$ For a vector $\bm{v},$ we use $\| \bm{v}\|$ for its $\ell_2$ norm. Moreover, for any complex number $a,$ we use $\bar{a}$ for its complex conjugate. \vspace{3pt} \noindent {\bf Acknowledgments. } The authors would like to thank Zhigang Bao and Ji Oon Lee for many helpful discussions and comments. The first author also wants to thank Hau-Tieng Wu for some discussions on the applications in medical sciences. \section{Main results}\label{sec:mainresult} \subsection{Notations and assumptions}\label{sec:subsecnotationandassumption} For any $N \times N$ matrix $W,$ we denote its normalized trace by $\tr$, i.e., \begin{equation}\label{eq_defntrace} \tr W=\frac{1}{N}\sum_{i=1}^{N} W_{ii}. \end{equation} Moreover, its empirical spectral distribution (ESD) is defined as \begin{equation*} \mu_W=\frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i(W)}. \end{equation*} In the present paper, even if the matrix is not of size $N\times N$, the trace is always normalized by $N^{-1}$ unless otherwise specified. We first introduce our model.
Consider two $N\times N$ real deterministic positive definite matrices \begin{equation*} A\equiv A_{N}=\diag(a_{1},\cdots,a_{N}) \AND B\equiv B_{N}=\diag(b_{1},\cdots,b_{N}), \end{equation*} where the diagonal entries are ordered as $a_{1}\geq a_{2}\geq \cdots\geq a_{N}>0$ and $b_{1}\geq b_{2}\geq\cdots\geq b_{N}>0$. Let $U\equiv U_{N}$ be a random unitary or orthogonal matrix, Haar distributed on the $N$-dimensional unitary group $U(N)$ or orthogonal group $O(N)$. Define $\widetilde{A}\mathrel{\mathop:}= U^* AU$, $\widetilde{B}\mathrel{\mathop:}= UBU^*$, and \begin{equation} \label{defn_eq_matrices} H\mathrel{\mathop:}= AUBU^*, \quad {\mathcal H}\mathrel{\mathop:}= U^* AU B, \quad \widetilde{H}\mathrel{\mathop:}= A^{1/2}\widetilde{B}A^{1/2}, \AND \widetilde{{\mathcal H}}\mathrel{\mathop:}= B^{1/2}\widetilde{A}B^{1/2}. \end{equation} Note that we only need to consider diagonal matrices $A$ and $B$ since $U$ is a Haar random unitary or orthogonal matrix. Moreover, $\widetilde{H}$ and $\widetilde{{\mathcal H}}$ are Hermitian random matrices. Since $H$, ${\mathcal H}$, $\widetilde{H}$ and $\widetilde{{\mathcal H}}$ have the same eigenvalues, in the sequel, we denote the eigenvalues of all of them as $\lambda_1 \geq \lambda_2 \geq \cdots \geq\lambda_N$ without causing any confusion. Further, we define the empirical spectral distributions (ESD) of the above matrices by \begin{equation*} \mu_{A}\equiv \mu_{A}^{(N)}\mathrel{\mathop:}= \frac{1}{N}\sum_{i=1}^{N}\delta_{a_{i}},\quad \mu_{B}\equiv\mu_{B}^{(N)}\mathrel{\mathop:}= \frac{1}{N}\sum_{i=1}^{N}\delta_{b_{i}}\AND \mu_{H}\equiv\mu_{H}^{(N)}\mathrel{\mathop:}= \frac{1}{N}\sum_{i=1}^{N}\delta_{\lambda_{i}}. 
\end{equation*} For $z\in\C_{+}:=\{z \in \mathbb{C}: \im z >0\}$, we define the \emph{resolvents} of the above random matrices as follows \begin{equation} \label{defn_greenfunctions} G(z)\mathrel{\mathop:}= (H-zI)^{-1},\quad {\mathcal G}(z)\mathrel{\mathop:}= ({\mathcal H}-zI)^{-1},\quad \widetilde{G}(z)\mathrel{\mathop:}= (\widetilde{H}-zI)^{-1},\quad \widetilde{{\mathcal G}}(z)\mathrel{\mathop:}= (\widetilde{{\mathcal H}}-zI)^{-1}. \end{equation} In the rest of the paper, we usually omit the dependence on $z$ and simply write $G, {\mathcal G}, \widetilde{G}$ and $\widetilde{{\mathcal G}}.$ The following transforms will play important roles in the current paper. \begin{defn}\label{defn_transform} For any probability measure $\mu$ defined on $\mathbb{R}_+,$ its \emph{Stieltjes transform} $m_{\mu}$ is defined as \begin{equation*} m_{\mu}(z)\mathrel{\mathop:}=\int\frac{1}{x-z}\mathrm{d}\mu(x),\quad \text{for }z\in\CR. \end{equation*} Moreover, we define the $\mathsf{M}$-transform $M_{\mu}$ and the $\mathsf{L}$-transform $L_{\mu}$ on $\CR$ as \begin{align}\label{eq_mtrasindenity} M_{\mu}(z)&\mathrel{\mathop:}= 1-\left(\int\frac{x}{x-z}\mathrm{d}\mu(x)\right)^{-1}= \frac{zm_{\mu}(z)}{1+zm_{\mu}(z)}, & L_{\mu}(z)&\mathrel{\mathop:}= \frac{M_{\mu}(z)}{z}. \end{align} \end{defn} Let $m_{H}(z)$ be the Stieltjes transform of the ESD of $H.$ Since $H,{\mathcal H},\widetilde{H}$ and $\widetilde{{\mathcal H}}$ are similar to each other, we have that $m_H(z)=\tr G=\tr {\mathcal G} =\tr \widetilde{G}=\tr \widetilde{{\mathcal G}}.$ Moreover, it is easy to see that \begin{equation}\label{eq_connectiongreenfunction} G_{ij}=\sqrt{a_i/a_j} \widetilde{G}_{ij}, \ \mathcal{G}_{ij}=\sqrt{b_j/b_i} \widetilde{{\mathcal G}}_{ij}. \end{equation} We next introduce the main assumptions of the paper. Throughout the paper, we assume that there exist two $N$-independent absolutely continuous probability measures $\mu_{\alpha}$ and $\mu_{\beta}$ on $(0, \infty)$ satisfying the following conditions.
\begin{assu}\label{assu_limit} Suppose the following assumptions hold true: \begin{itemize} \item[(i).] $\mu_{\alpha}$ and $\mu_{\beta}$ have densities $\rho_{\alpha}$ and $\rho_{\beta}$, respectively. For the ease of discussion, we assume that both of them have mean $1$, i.e., \begin{equation*} \int x\mathrm{d}\mu_{\alpha}(x)=\int x\rho_{\alpha}(x)\mathrm{d} x=1. \end{equation*} \item[(ii).] Both $\rho_{\alpha}$ and $\rho_{\beta}$ are supported on single non-empty intervals, denoted as $[E_-^{\alpha}, E_+^{\alpha}]$ and $[E_-^{\beta}, E_+^{\beta}],$ respectively. Here $E_-^{\alpha}, E_+^{\alpha}, E_-^{\beta}$ and $E_+^{\beta}$ are all positive numbers. Moreover, both of the density functions are strictly positive in the interior of their supports. \item[(iii).] There exist constants $-1<t^{\alpha}_{\pm},t^{\beta}_{\pm}<1$ and $C>1$ such that \begin{align*} &C^{-1}\leq \frac{\rho_{\alpha}(x)}{(x-E_{-}^{\alpha})^{t_{-}^{\alpha}}(E_{+}^{\alpha}-x)^{t_{+}^{\alpha}}}\leq C,\quad\forall x\in[E_{-}^{\alpha},E_{+}^{\alpha}],\\ &C^{-1}\leq \frac{\rho_{\beta}(x)}{(x-E_{-}^{\beta})^{t_{-}^{\beta}}(E_{+}^{\beta}-x)^{t_{+}^{\beta}}}\leq C,\quad\forall x\in[E_{-}^{\beta},E_{+}^{\beta}]. \end{align*} \end{itemize} \end{assu} \begin{rem} The assumption (i) is introduced for technical simplicity and can easily be removed; see Remark 3.2 of \cite{JHC} for details. Moreover, the assumption (iii) is introduced to guarantee the square root behavior near the edges of the free multiplicative convolution of $\mu_{\alpha}$ and $\mu_\beta.$ When this condition fails, the behavior of $\mu_{\alpha} \boxtimes \mu_{\beta}$ near the edge can be very different from our current discussion; see \cite{KLP} for more details. \end{rem} The second assumption, Assumption \ref{assu_esd}, ensures that $\mu_A$ and $\mu_B$ are close to $\mu_{\alpha}$ and $\mu_{\beta},$ respectively.
Specifically, it demonstrates that the convergence rates of $\mu_{A}$ and $\mu_{B}$ to $\mu_{\alpha}$ and $\mu_{\beta}$ are bounded by an order of $N^{-1}$, so that their fluctuations do not dominate that of $\mu_{H}$. \begin{assu}\label{assu_esd} Suppose the following assumptions hold true: \begin{itemize} \item[(iv).] For the L\'{e}vy distance $\mathcal{L}(\cdot, \cdot),$ we have that for any small constant $\epsilon>0$ \begin{equation*} {\boldsymbol d}\mathrel{\mathop:}= \mathcal{L}(\mu_{\alpha}, \mu_A)+\mathcal{L}(\mu_{\beta}, \mu_{B}) \leq N^{-1+\epsilon}, \end{equation*} when $N$ is sufficiently large. \item[(v).] For the supports \footnote{Note that the support of a non-negative measure is defined to be the largest closed subset for which every open neighbourhood of every point of the set has positive measure. The notion does not take isolated singletons into consideration.} of $\mu_{A}$ and $\mu_{B}$, we have that for any constant $\delta>0$ \begin{equation*} \supp\mu_{A}\subset [E_{-}^{\alpha}-\delta,E_{+}^{\alpha}+\delta]\AND \supp\mu_{B}\subset[E_{-}^{\beta}-\delta,E_{+}^{\beta}+\delta], \end{equation*} when $N$ is sufficiently large. \end{itemize} \end{assu} \begin{rem} We remark that we will consistently use $\epsilon$ as a sufficiently small constant for the rest of the paper. The assumption $(v)$ assures that both of the upper edges of $\mu_A$ and $\mu_B$ are bounded. \end{rem} As proved by Voiculescu in \cite{Voiculescu1987,Voiculescu1991}, under Assumptions \ref{assu_limit} and \ref{assu_esd}, $\mu_{H}$ converges weakly to a deterministic measure, denoted as $\mu_{\alpha}\boxtimes\mu_{\beta}$. It is called the \emph{free multiplicative convolution} of $\mu_{\alpha}$ and $\mu_{\beta}$. In the present paper, we use the $\mathsf{M}$-transform in (\ref{eq_mtrasindenity}) to define the free multiplicative convolution.
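Although our results are deterministic limit statements, the basic structure of the model is easy to probe numerically. The following sketch (purely illustrative; the QR-based Haar sampler, the dimension, and the choices of $A$ and $B$ are ours, not part of the paper) samples $H=AUBU^*$ and checks two elementary facts used throughout: $H$ and $\widetilde{H}=A^{1/2}\widetilde{B}A^{1/2}$ share their real, positive spectrum, and this spectrum lies deterministically in $[a_N b_N, a_1 b_1]$.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar measure on O(n): QR of a Gaussian matrix with the sign
    # correction (Mezzadri), so that Q is exactly Haar distributed.
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(0)
n = 200
a = np.sort(rng.uniform(0.5, 1.5, n))[::-1]   # spectrum of A (decreasing)
b = np.sort(rng.uniform(0.5, 1.5, n))[::-1]   # spectrum of B (decreasing)
U = haar_orthogonal(n, rng)

H = np.diag(a) @ U @ np.diag(b) @ U.T         # H = A U B U^*
Ht = np.diag(np.sqrt(a)) @ U @ np.diag(b) @ U.T @ np.diag(np.sqrt(a))  # A^{1/2} UBU^* A^{1/2}

ev_H = np.sort(np.linalg.eigvals(H).real)     # H is similar to the symmetric Ht
ev_Ht = np.sort(np.linalg.eigvalsh(Ht))

# H and \tilde{H} are similar, hence share their (real, positive) spectrum,
assert np.allclose(ev_H, ev_Ht, atol=1e-8)
# and the spectrum lies deterministically in [a_N b_N, a_1 b_1].
assert ev_H[0] >= a[-1] * b[-1] - 1e-10 and ev_H[-1] <= a[0] * b[0] + 1e-10
```

For large $N$, a histogram of these eigenvalues approximates the density of $\mu_{\alpha}\boxtimes\mu_{\beta}$, in line with Voiculescu's weak convergence result quoted above.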
\begin{lem}[Proposition 2.5 of \cite{JHC}]\label{lem_subor} For Borel probability measures $\mu_{\alpha}$ and $\mu_{\beta}$ on $\R_{+}$, there exist unique analytic functions $\Omega_{\alpha},\Omega_{\beta}:\CR\to\CR$ satisfying the following: \\ \noindent{(1).} For all $z\in\C_{+}$, we have \begin{equation} \label{eq_subsys3} \arg \Omega_{\alpha}(z)\geq \arg z \AND \arg\Omega_{\beta}(z)\geq \arg z. \end{equation} \noindent{(2).} We have \begin{equation} \label{eq_subsys2} \lim_{z\searrow-\infty}\Omega_{\alpha}(z)=\lim_{z\searrow-\infty}\Omega_{\beta}(z)=-\infty. \end{equation} \noindent{(3).} For all $z\in\CR$, we have \begin{equation} \label{eq_suborsystem} zM_{\mu_{\alpha}}(\Omega_{\beta}(z))=zM_{\mu_{\beta}}(\Omega_{\alpha}(z))=\Omega_{\alpha}(z)\Omega_{\beta}(z). \end{equation} \end{lem} With Lemma \ref{lem_subor}, we now define the free multiplicative convolution. \begin{defn}\label{def:freeconv} Define the analytic function $M: \mathbb{C} \backslash \mathbb{R}_+ \rightarrow \mathbb{C} \backslash \mathbb{R}_+ $ by \begin{equation} \label{eq_defn_eq} M(z):=M_{\mu_{\alpha}}(\Omega_{\beta}(z))=M_{\mu_{\beta}}(\Omega_{\alpha}(z)). \end{equation} The \emph{free multiplicative convolution} of $\mu_{\alpha}$ and $\mu_{\beta}$ is defined as the unique probability measure $\mu,$ denoted as $\mu \equiv \mu_{\alpha}\boxtimes\mu_{\beta},$ such that (\ref{eq_defn_eq}) holds for all $z\in\CR$. In this sense, $M(z) \equiv M_{\mu_{\alpha} \boxtimes \mu_{\beta}}(z)$ is the $\mathsf{M}$-transform of $\mu_{\alpha} \boxtimes \mu_{\beta}.$ Furthermore, the analytic functions $\Omega_{\alpha}$ and $\Omega_{\beta}$ are referred to as the subordination functions. Similarly, we define $\Omega_{A}$ and $\Omega_{B}$ by replacing $(\alpha,\beta)$ with $(A,B)$ in Lemma \ref{lem_subor}, and define $\mu_{A}\boxtimes\mu_{B}$ so that $M_{\mu_{A}}(\Omega_{B}(z))=M_{\mu_{B}}(\Omega_{A}(z))=M_{\mu_{A}\boxtimes\mu_{B}}(z)$ for all $z\in\CR$.
\end{defn} Note that a straightforward consequence of (\ref{eq_suborsystem}) and the definition of $M_{\mu}(z)$ is the following identity \begin{equation}\label{eq_multiidentity} \int \frac{x}{x-z} d (\mu_{\alpha} \boxtimes \mu_{\beta}) (x)=z m_{\mu_{\alpha} \boxtimes \mu_{\beta}}(z)+1=\Omega_{\beta}(z) m_{\mu_{\alpha}}(\Omega_{\beta}(z))+1=\int \frac{x}{x-\Omega_{\beta}(z)} d \mu_{\alpha}(x). \end{equation} \begin{rem} Since all of $\mu_{\alpha},\mu_{\beta},\mu_{A}$, and $\mu_{B}$ are compactly supported in $(0,\infty)$, similar results hold for $\mu_{\alpha}\boxtimes\mu_{\beta}$ and $\mu_{A}\boxtimes\mu_{B}$. Specifically, we have \cite[Remark 3.6.2. (iii)]{Voiculescu-Dykema-Nica1992} \begin{align}\label{eq:priorisupp} \supp \mu_{\alpha}\boxtimes\mu_{\beta}&\subset [E_{-}^{\alpha}E_{-}^{\beta},E_{+}^{\alpha}E_{+}^{\beta}], & \supp \mu_{A}\boxtimes\mu_{B}&\subset[a_{N}b_{N}, a_1 b_1]. \end{align} Moreover, we conclude from \cite[Theorem 3.1]{JHC} that, if (i) and (ii) in Assumption \ref{assu_limit} hold, then $\mu_{\alpha} \boxtimes \mu_{\beta} $ is absolutely continuous and supported on a single non-empty compact interval on $(0, \infty),$ denoted as $[E_-, E_+],$ i.e., \begin{equation} \label{eq_edge} E_{-}\mathrel{\mathop:}=\inf \supp(\mu_{\alpha}\boxtimes\mu_{\beta}),\quad E_{+}\mathrel{\mathop:}= \sup\supp(\mu_{\alpha}\boxtimes\mu_{\beta}). \end{equation} Moreover, let the density of $\mu_{\alpha} \boxtimes \mu_{\beta}$ be $\rho.$ For small constant $\tau>0,$ with (iii) of Assumption \ref{assu_limit}, we have \begin{equation}\label{eq_originalsquarerootbehavior} \rho(x) \sim \sqrt{E_+-x}, \ x \in [E_+-\tau, E_+]. \end{equation} \end{rem} \begin{rem} As we will see later in Lemma \ref{lem:suborsqrt}, the subordination functions of $\Omega_{\alpha}$ and $\Omega_{\beta}$ will also have square root behaviors near the edges. 
The regularity behavior is assured by the fact that the subordination function $\Omega_{\alpha}$ (resp.\ $\Omega_\beta$) is well separated from the support of $\mu_\beta$ (resp.\ $\mu_\alpha$), i.e., (ii) of Lemma \ref{lem:stabbound}. In fact, from the proof of Proposition 5.6 of \cite{JHC}, we see that the assumption (iii) in Assumption \ref{assu_limit} implies this stability result. \end{rem} \begin{rem} \label{rem_ctxexpansubordination} It is known from \cite{Belinschi2006, JHC} that Assumption \ref{assu_limit} ensures that the subordination functions $\Omega_{\alpha}\vert_{\C_{+}}$ and $\Omega_{\beta}\vert_{\C_{+}}$ can be extended continuously to the real line. For the rest of the paper, we will write $\Omega_{\alpha}(x)$ or $\Omega_{\beta}(x)$ for $x \in \mathbb{R}$ to denote the values of the continuous extensions. In particular, $\Omega_{\alpha}(x)$ and $\Omega_{\beta}(x)$ always have nonnegative imaginary parts for all $x\in\R$. \end{rem} \subsection{Local laws for free multiplication of random matrices} \label{sec_locallawresults} In this section, we state the results of the local laws. We will need the following notion of stochastic domination to state our main results. It was first introduced in \cite{MR3119922} and subsequently used in many works on random matrix theory. It simplifies the presentation of the results and the proofs by systematizing statements of the form ``$X_N$ is bounded by $Y_N$ with high probability up to a small power of $N$''. \begin{defn} For two sequences of random variables $\{X_{N}\}_{N\in\N}$ and $\{Y_{N}\}_{N\in\N}$, we say that $X_{N}$ is \emph{stochastically dominated} by $Y_{N},$ written as $X_{N}\prec Y_{N}$ or $X_{N}=\rO_{\prec}(Y_{N}),$ if for all (small) $\epsilon>0$ and (large) $D>0$, we have \begin{equation*} \prob{\absv{X_{N}}\geq N^{\epsilon}\absv{Y_{N}}}\leq N^{-D}, \end{equation*} for sufficiently large $N\geq N_{0}(\epsilon,D)$.
If $X_{N}(v)$ and $Y_{N}(v)$ depend on some common parameter $v$, we say $X_{N}\prec Y_{N}$ \emph{uniformly in $v$} if the threshold $N_{0}(\epsilon,D)$ can be chosen independent of the parameter $v$. Moreover, we say an event $\Xi$ holds with high probability if for any constant $D>0,$ $\mathbb{P}(\Xi) \geq 1-N^{-D}$ for large enough $N.$ \end{defn} We first introduce some notation. For any spectral parameter $z=E+\mathrm{i}\eta\in\C_{+}$, we define \begin{equation*} \kappa\equiv\kappa(z):=\absv{E-E_{+}}, \end{equation*} where $E_{+}$ is the rightmost edge of $\mu_{\alpha}\boxtimes\mu_{\beta}$ given in (\ref{eq_edge}). For given constants $0\leq a\leq b$ and small constant $0<\tau<\min\{\frac{E_{+}-E_{-}}{2},1\}$, we define the following set of spectral parameters $z$ by \begin{equation} \label{eq_fundementalset} {\mathcal D}_{\tau}(a,b)\mathrel{\mathop:}= \{z=E+\mathrm{i}\eta\in\C_{+}:E_+-\tau \leq E \leq \tau^{-1}, a\leq \eta\leq b\}. \end{equation} Further, for any small positive $\gamma>0$, we let \begin{equation} \label{eq_eltalgamma} \eta_{L}\equiv\eta_{L}(\gamma)\mathrel{\mathop:}= N^{-1+\gamma}, \end{equation} and let $\eta_{U}>1$ be a large $N$-independent constant. The following theorem establishes the local laws for the matrices $H, \widetilde{H}, \mathcal{H}$ and $\widetilde{{\mathcal H}}$ near the upper edge $E_+.$ {Analogous results can be obtained for the lower edge $E_-.$ } { \begin{thm}[Local laws near the edge]\label{thm:main} Suppose Assumptions \ref{assu_limit} and \ref{assu_esd} hold. Let $\tau$ and $\gamma$ be fixed small positive constants. For any deterministic vector $\bm{v}=(v_1, \cdots, v_N) \in \mathbb{C}^N$ such that $\| \bm{v} \|_{\infty} \leq 1,$ we have: \\ \noindent{(1)}.
For the matrix $H$ and its resolvent $G(z),$ we have \begin{equation} \label{eq:main} \Absv{\frac{1}{N}\sum_{i=1}^{N}v_{i}\left(zG_{ii}(z)+1-\frac{a_{i}}{a_{i}-\Omega_{B}(z)}\right)}\prec \frac{1}{N\eta}, \end{equation} uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ with $\eta_L$ defined in (\ref{eq_eltalgamma}) and any fixed constant $\eta_{U}$. In particular, we have \begin{equation*} \absv{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}\prec\frac{1}{N\eta}. \end{equation*} Moreover, for the off-diagonal entries, we have \begin{equation}\label{eq_off1} \max_{i \neq j} |G_{ij}(z)| \prec \frac{1}{\sqrt{N \eta}}. \end{equation} Similar results hold true by simply replacing $H$ and $G(z)$ with $\widetilde{H}$ and $\widetilde{G}(z)$, respectively. \\ \noindent{(2).} For the matrix $\mathcal{H}$ and its resolvent $\mathcal{G}(z),$ we have \begin{equation} \label{eq:main1} \Absv{\frac{1}{N}\sum_{i=1}^{N}v_{i}\left(z\mathcal{G}_{ii}(z)+1-\frac{b_{i}}{b_{i}-\Omega_{A}(z)}\right)}\prec \frac{1}{N\eta}. \end{equation} In particular, we have \begin{equation*} \absv{m_{\mathcal{H}}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}\prec\frac{1}{N\eta}, \end{equation*} uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. Moreover, for the off-diagonal entries, we have \begin{equation}\label{eq:mainoff2} \max_{i \neq j} |\mathcal{G}_{ij}(z)| \prec \frac{1}{\sqrt{N \eta}}. \end{equation} Similar results hold true by simply replacing ${\mathcal H}$ and ${\mathcal G}(z)$ with $\widetilde{{\mathcal H}}$ and $\widetilde{{\mathcal G}}(z)$, respectively. \end{thm} } \begin{rem} We remark that in Theorem 2.5 of \cite{BEC}, which is the counterpart for the additive model, only the local laws for the diagonal elements of the resolvents are established. We also consider the off-diagonal entries and prove that they are small. 
Moreover, in Proposition \ref{prop_linearlocallaw}, we see that all the bounds in the above theorem can be replaced by \begin{equation*} \sqrt{\frac{\im m_{\mu_A \boxtimes \mu_B}(z)}{N\eta}}+\frac{1}{N\eta}, \end{equation*} which matches the typical form of the bounds of local laws in the random matrix theory literature; for instance, see the monograph \cite{2017dynamical}. We keep the current form to highlight both the similarities and the differences between our multiplicative model and the additive model in \cite{BEC}. \end{rem} Denote by $\gamma_j$ the $j$-th $N$-quantile (or typical location) of $\mu_{\alpha} \boxtimes \mu_{\beta}$ such that \begin{equation*} \int_{\gamma_j}^\infty d \mu_{\alpha} \boxtimes \mu_{\beta}(x)=\frac{j}{N}. \end{equation*} Similarly, we denote by $\gamma_j^*$ the $j$-th $N$-quantile of $\mu_A \boxtimes \mu_B.$ Recall that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_N$ are the eigenvalues of $AUBU^*.$ We next state two important consequences of our local laws. {\begin{thm}[Spectral rigidity near the upper edge] \label{thm_rigidity} Suppose Assumptions \ref{assu_limit} and \ref{assu_esd} hold true. For any small constant $0<c<1/2,$ we have that for all $1 \leq i \leq cN,$ \begin{equation*} |\lambda_i-\gamma_i^*| \prec i^{-1/3}N^{-2/3}. \end{equation*} Moreover, the same conclusion holds if $\gamma_i^*$ is replaced with $\gamma_i.$ \end{thm} Consider the model (\ref{eq_defndatamatrixtype}), i.e., $Y=A^{1/2} UB^{1/2}.$ Denote its singular value decomposition (SVD) by \begin{equation*} Y=\sum_{k=1}^N \sqrt{\lambda_k} \mathbf{u}_k \mathbf{v}_k^*, \end{equation*} where $\{\mathbf{u}_k\}$ and $\{\mathbf{v}_k\}$ are the left and right singular vectors of $Y,$ respectively.
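The typical locations $\gamma_j$ in Theorem \ref{thm_rigidity} are obtained by inverting the distribution function of $\mu_{\alpha}\boxtimes\mu_{\beta}$, which is straightforward numerically. A minimal sketch (the semicircle density below is a hypothetical stand-in exhibiting the square-root edge behavior (\ref{eq_originalsquarerootbehavior}); it is not an actual free multiplicative convolution, and all numerical choices are ours):

```python
import numpy as np

# Hypothetical stand-in density with square-root behavior at both edges:
# a (scaled) semicircle on [E_-, E_+].
E_minus, E_plus = 1.0, 3.0
xs = np.linspace(E_minus, E_plus, 200001)
rho = np.sqrt(np.clip((E_plus - xs) * (xs - E_minus), 0.0, None))

# Distribution function F via trapezoidal integration, normalized to 1.
cdf = np.concatenate([[0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(xs))])
cdf /= cdf[-1]

N = 1000
j = np.arange(1, N + 1)
# gamma_j solves  mu([gamma_j, infinity)) = j/N,  i.e.  F(gamma_j) = 1 - j/N.
gamma = np.interp(1.0 - j / N, cdf, xs)

assert np.all(np.diff(gamma) <= 0)   # typical locations decrease in j
# Square-root edge: E_+ - gamma_j is of order (j/N)^{2/3} for small j.
ratio = (E_plus - gamma[:50]) / (j[:50] / N) ** (2.0 / 3.0)
assert ratio.max() / ratio.min() < 4.0
```

Near the edge this gives $E_+-\gamma_j \sim (j/N)^{2/3}$, which is the origin of the $i^{-1/3}N^{-2/3}$ scale appearing in Theorem \ref{thm_rigidity}.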
\begin{thm}[Delocalization of the singular vectors] \label{thm_delocalization} Suppose Assumptions \ref{assu_limit} and \ref{assu_esd} hold true. For any small constant $0<c<1/2,$ we have that for all $1 \leq k \leq cN,$ \begin{equation*} \max_i|\mathbf{u}_k(i)|^2+\max_\mu|\mathbf{v}_k(\mu)|^2 \prec \frac{1}{N}. \end{equation*} \end{thm} } Finally, outside the spectrum, we have stronger control all the way down to the real axis. We record such results in the following theorem, which will be the key technical input for the proof of the spiked model. For the parameter $\tau$ in (\ref{eq_fundementalset}), we denote the spectral domain of the parameter as \begin{equation} \label{eq_fundementalsetoutlier} {\mathcal D}_\tau(\eta_U) \mathrel{\mathop:}= \{z=E+\mathrm{i}\eta\in\C_{+}: E_++N^{-2/3+\tau} \leq E \leq \tau^{-1}, \ 0 \leq \eta\leq \eta_U \}. \end{equation} { \begin{thm}[Local laws far away from the spectrum]\label{thm_outlierlocallaw} Suppose Assumptions \ref{assu_limit} and \ref{assu_esd} hold. Let $\tau$ be a fixed small positive constant. We have the following hold true uniformly in $z \in \mathcal{D}_{\tau}(\eta_U)$: \\ \noindent{(1).} For the matrix $H$, we have \begin{equation*} \absv{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}\prec\frac{1}{N(\kappa+\eta)}. \end{equation*} Similar results hold if we replace $H$ with ${\mathcal H}, \widetilde{H}$ and $\widetilde{{\mathcal H}}.$\\ \noindent{(2).} For the resolvent $G,$ we have \begin{equation*} \max_{i}\left|zG_{ii}(z)+1-\frac{a_i}{a_i-\Omega_B(z)} \right| \prec N^{-1/2}(\kappa+\eta)^{-1/4}, \end{equation*} and \begin{equation*} \max_{i,j}|G_{ij}(z)| \prec N^{-1/2}(\kappa+\eta)^{-1/4}.
\end{equation*} Similar results hold for ${\mathcal G}, \widetilde{G}$ and $\widetilde{{\mathcal G}}.$ \end{thm} } \subsection{Spiked invariant model}\label{sec_spikedmodel} In this section, we employ the local laws obtained in Section \ref{sec_locallawresults} to study the eigenvalues and eigenvectors of the spiked model (\ref{eq_modespikeddefn}) and improve the results obtained in \cite[Section 2.2]{outliermodel}, which only concerns the Haar unitary random matrices. To add a few spikes, we follow the setup of \cite{DINGRMTA,DYaos} and assume that there exist some fixed integers $r$ and $s$ with two sequences of positive numbers $\{d_i^a\}_{i \leq r}$ and $\{d_j^b\}_{j \leq s}$ such that $\widehat{A}=\operatorname{diag}\{\widehat{a}_1, \cdots, \widehat{a}_N\}$ and $\widehat{B}=\operatorname{diag}\{\widehat{b}_1, \cdots, \widehat{b}_N\},$ where \begin{equation}\label{eq_spikes} \widehat{a}_k= \begin{cases} a_k(1+d^a_k), & 1 \leq k \leq r \\ a_k, & k \geq r+1 \end{cases}, \ \widehat{b}_k= \begin{cases} b_k(1+d^b_k), & 1 \leq k \leq s \\ b_k, & k \geq s+1 \end{cases}. \end{equation} Without loss of generality, we assume that $\widehat{a}_1 \geq \widehat{a}_2 \geq \cdots \geq \widehat{a}_N$ and $\widehat{b}_1 \geq \widehat{b}_2 \geq \cdots \geq \widehat{b}_N.$ In the current paper, we assume that all the $d_k^a$'s and $d_k^b$'s are bounded. We will investigate the behavior of the singular values and vectors of $\widehat{Y}=\widehat{A}^{1/2}U \widehat{B}^{1/2}.$ We denote $\widehat{\mathcal{Q}}_1=\widehat{Y} \widehat{Y}^*$ and $\widehat{\mathcal{Q}}_2=\widehat{Y}^* \widehat{Y}.$ Denote the eigenvalues of $\widehat{\mathcal{Q}}_1$ and $\widehat{\mathcal{Q}}_2$ as $\widehat{\lambda}_1 \geq \widehat{\lambda}_2 \geq \cdots \geq \widehat{\lambda}_N.$ Recall that $\Omega_A(\cdot)$ and $\Omega_B(\cdot)$ are the subordination functions associated with $\mu_A$ and $\mu_B$; recall also Remark \ref{rem_ctxexpansubordination}.
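Before stating the precise results, a minimal simulation illustrates the phenomenon: one sufficiently large spike in $\widehat{A}$ detaches a single eigenvalue of $\widehat{\mathcal{Q}}_1$ from the bulk, while the remaining eigenvalues interlace with the unspiked spectrum by rank-one Weyl interlacing. (A hedged sketch; the Haar sampler, the dimension, and the spike strength $d_1^a$ are our illustrative choices, not from the paper.)

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar measure on O(n) via sign-corrected QR of a Gaussian matrix.
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

rng = np.random.default_rng(1)
n = 300
a = np.sort(rng.uniform(0.5, 1.5, n))[::-1]
b = np.sort(rng.uniform(0.5, 1.5, n))[::-1]
U = haar_orthogonal(n, rng)
W = U @ np.diag(b) @ U.T                     # \tilde{B} = U B U^*

d1 = 5.0                                     # one large spike: \hat{a}_1 = a_1 (1 + d_1^a)
a_hat = a.copy()
a_hat[0] *= 1.0 + d1

def spectrum(diag_a):
    # eigenvalues of diag(a)^{1/2} W diag(a)^{1/2}, decreasing order
    s = np.diag(np.sqrt(diag_a))
    return np.sort(np.linalg.eigvalsh(s @ W @ s))[::-1]

lam, lam_hat = spectrum(a), spectrum(a_hat)

# The spiked top eigenvalue detaches from the unspiked spectrum ...
assert lam_hat[0] > lam[0]
# ... while the rest interlace (rank-one perturbation, Weyl interlacing).
assert np.all(lam_hat[1:] <= lam[:-1] + 1e-8)
```

With this spike strength the separation is guaranteed deterministically, since $\widehat{\lambda}_1 \geq \widehat{a}_1 b_N > a_1 b_1 \geq \lambda_1$ for the values chosen here; Theorem \ref{thm_outlier} below quantifies where the outlier lands and at which rate.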
We will see that a spike $\widehat{a}_i, 1 \leq i \leq r,$ or $\widehat{b}_j, 1 \leq j \leq s,$ causes an outlier eigenvalue if \begin{equation}\label{eq_outlierlocation} \widehat{a}_i>\Omega_B(E_+), \ \text{or} \ \widehat{b}_j>\Omega_A(E_+). \end{equation} We first introduce the following assumption. \begin{assu}\label{assum_outlier} We assume that (\ref{eq_outlierlocation}) holds for all $1 \leq i \leq r$ and $1 \leq j \leq s.$ Moreover, we define the integers $0 \leq r^{+} \leq r$ and $0 \leq s^{+} \leq s$ such that \begin{equation*} \widehat{a}_i \geq \Omega_B(E_+)+N^{-1/3} \ \text{if and only if} \ 1 \leq i \leq r^+, \end{equation*} and \begin{equation*} \widehat{b}_j \geq \Omega_A(E_+)+N^{-1/3} \ \text{if and only if} \ 1 \leq j \leq s^+. \end{equation*} The lower bound $N^{-1/3}$ is chosen for definiteness, and it can be replaced with any $N$-dependent parameter that is of the same order. \end{assu} \begin{rem} A spike $\widehat{a}_i$ or $\widehat{b}_j$ that does not satisfy the conditions in Assumption \ref{assum_outlier} will cause an outlying eigenvalue that lies within an $\rO_{\prec}(N^{-2/3})$ neighborhood of the edge $E_+.$ In this sense, it will be hard to detect such spikes since, as we have seen from Theorem \ref{thm_rigidity}, the eigenvalues of $\mathcal{Q}_1=YY^*$ near the edge also have fluctuations of order $N^{-2/3}.$ Assumption \ref{assum_outlier} simply selects the ``real'' spikes. In the statistical literature, this is referred to as the supercritical regime, and a reliable detection of the spikes is only available in this regime. We refer the readers to \cite{BDWW,Bloemendal2016, DYaos,perry2018} for more detailed discussion. \end{rem} To state the results of the outlier and extremal non-outlier eigenvalues, we introduce the following definition following \cite[Definition 3.5]{DYaos}.
Since both $\Omega_A(x)$ and $\Omega_B(x)$ are real and monotone increasing for $x \geq E_+,$ we can denote by $\Omega_A^{-1}(\cdot)$ and $\Omega_B^{-1}(\cdot)$ the inverse functions of $\Omega_A$ and $\Omega_B$ on $[\Omega_A(E_+), \infty)$ and $[\Omega_B(E_+), \infty),$ respectively. \begin{defn} We define the labelling functions $\pi_a:\{1,\cdots, N\}\to \mathbb{N}$ and $\pi_b:\{1,\cdots, N\}\to \mathbb{N}$ as follows. For any $1\leq i \leq r$, we assign to it a label $\pi_a(i)\in \{1,\cdots, r+s\}$ if $\Omega_B^{-1}(\widehat{a}_i)$ is the $\pi_a(i)$-th largest element in $\{\Omega_B^{-1}(\widehat{a}_i)\}_{i=1}^r \cup \{\Omega^{-1}_{A}(\widehat{b}_j)\}_{j=1}^s$. We also assign to any $1\leq j \leq s$ a label $\pi_b(j)\in \{1,\cdots, r+s\}$ in a similar way. Moreover, we define $\pi_a(i)=i+s$ if $i>r$ and $\pi_b(j)=j + r$ if $j >s$. We define the following sets of outlier indices: \begin{align*} & \mathcal O:= \{\pi_a(i): 1\le i \le r\}\cup \{\pi_b(j): 1\le j \le s\}, \end{align*} and \begin{align*} & \mathcal O^+:= \{\pi_a(i): 1\le i \le r^+\}\cup \{\pi_b(j): 1\le j \le s^+\}. \end{align*} \end{defn} \begin{thm}[Eigenvalue statistics]\label{thm_outlier} Suppose Assumptions \ref{assu_limit}, \ref{assu_esd} and \ref{assum_outlier} hold. Then we have that \begin{equation*} \left| \widehat{\lambda}_{\pi_a(i)}-\Omega_B^{-1}(\widehat{a}_i) \right| \prec N^{-1/2}(\widehat{a}_i-\Omega_B(E_+))^{1/2}, \ 1 \leq i \leq r^+, \end{equation*} and \begin{equation*} \left| \widehat{\lambda}_{\pi_b(j)}-\Omega^{-1}_A(\widehat{b}_j) \right| \prec N^{-1/2}(\widehat{b}_j-\Omega_A(E_+))^{1/2}, \ 1 \leq j \leq s^+. \end{equation*} Moreover, for any fixed integer $\varpi>r+s,$ we have \begin{equation*} \left|\widehat{\lambda}_i-E_+ \right| \prec N^{-2/3}, \ \text{for} \ i \notin \mathcal{O}^+ \ \text{and} \ i \leq \varpi.
\end{equation*} \end{thm} \begin{rem}\label{rem_eigenvaluesitkcing} We remark that the limits of the outlier eigenvalues have been obtained under stronger assumptions in \cite[Section 2.2]{outliermodel} for the spiked unitarily invariant model, where the conditions in Assumption \ref{assum_outlier} are strengthened to \begin{equation}\label{eq_strongone} \widehat{a}_i \geq \Omega_B(E_+)+\varsigma \ \text{if and only if} \ 1 \leq i \leq r^+, \end{equation} and \begin{equation}\label{eq_strongtwo} \widehat{b}_j \geq \Omega_A(E_+)+\varsigma \ \text{if and only if} \ 1 \leq j \leq s^+, \end{equation} where $\varsigma>0$ is some fixed constant. We extend the results in \cite{outliermodel} in two aspects: we consider the more general Assumption \ref{assum_outlier}, and we establish the convergence rates. We believe that Assumption \ref{assum_outlier} is the most general assumption possible for the existence of the outliers, and the convergence rates obtained here are optimal up to some $N^{\epsilon}$ factor, where $\epsilon>0$ is some small constant. These results will be used for the discussion of statistical applications in Section \ref{sec_statapplication}. Finally, we mention that we can follow the discussion in \cite[Theorem 2.7]{Bloemendal2016} or \cite[Theorem 3.7]{DYaos} to show that the non-outlier eigenvalues of $\widehat{\mathcal{Q}}_1$ will be close to those of $\mathcal{Q}_1,$ which is called the \emph{eigenvalue sticking property}. We will pursue this direction elsewhere. \end{rem} Next, we introduce the results regarding the singular vectors. Specifically, we will state our results under the so-called \emph{non-overlapping} condition, which was first introduced in \cite{Bloemendal2016}. In what follows, we consider an index set $S$ such that \begin{equation}\label{eq_setscondition} S \subset \mathcal{O}^+. \end{equation} For convenience, we introduce the following notations.
For $1\le i_1 \le r^+$, $1\le i_2 \le N$ and $1\le j \le N$, we define \begin{equation}\label{eq_differencedefinition} \delta_{\pi_a(i_1), \pi_a(i_2)}^{a}:=|\widehat{a}_{i_1}-\widehat{a}_{i_2}|, \quad \delta_{\pi_a(i_1), \pi_b(j)}^{a}:= \left| \widehat{b}_j - \Omega_{A}(\Omega_B^{-1}( \widehat{a}_{i_1}))\right|. \end{equation} Similarly, for $1\le j_1 \le s^+$, $1\le j_2 \le N$ and $1\le i \le N$, we define \begin{equation*} \delta_{\pi_b(j_1), \pi_a(i)}^{b}:=|\widehat{a}_i - \Omega_B(\Omega_A^{-1}(\widehat{b}_{j_1}))|, \quad \delta_{\pi_b(j_1), \pi_b(j_2)}^{b}:=|\widehat{b}_{j_1}-\widehat{b}_{j_2}|. \end{equation*} Further if $\mathfrak a\in S$, we define \begin{equation}\label{eq_alphasinside} \delta_{\mathfrak a}(S):=\begin{cases}\left( \min_{ k:\pi_a(k)\notin S}\delta^a_{\mathfrak a, \pi_a(k)}\right)\wedge \left( \min_{j:\pi_b(j)\notin S}\delta^a_{\mathfrak a, \pi_b(j)}\right), & \ \text{if } \mathfrak a=\pi_a(i) \in S\\ \left( \min_{k:\pi_a(k)\notin S}\delta^b_{\mathfrak a, \pi_a(k)}\right)\wedge \left( \min_{j:\pi_b(j)\notin S}\delta^b_{\mathfrak a,\pi_b(j)}\right), & \ \text{if } \mathfrak a=\pi_b(j) \in S \end{cases}; \end{equation} if $\mathfrak a\notin S$, then we define \begin{equation}\label{eq_alphaoutside} \delta_{\mathfrak a}(S):=\left( \min_{k:\pi_a(k)\in S}\delta^a_{\pi_a(k), \mathfrak a}\right)\wedge \left( \min_{j:\pi_b(j)\in S}\delta^b_{\pi_b(j), \mathfrak a}\right). \end{equation} We are now ready to impose some assumptions. \begin{assu}\label{assum_eigenvector} For some fixed small constants $\tau_1, \tau_2>0,$ we assume that for $\pi_a(i) \in S$ and $\pi_b(j) \in S,$ \begin{equation}\label{condition_nonoverone} \widehat{a}_i-\Omega_B(E_+) \geq N^{-1/3+\tau_1}, \ \widehat{b}_j-\Omega_A(E_+) \geq N^{-1/3+\tau_1}. 
\end{equation} Moreover, we assume that \begin{equation}\label{conidition_nonovertwo} \delta_{{\pi_a(i)}}(S) \geq N^{-1/2+\tau_2} (\widehat{a}_i-\Omega_B(E_+))^{-1/2}, \ \delta_{\pi_b(j)}(S) \geq N^{-1/2+\tau_2} (\widehat{b}_j-\Omega_A(E_+))^{-1/2}. \end{equation} \end{assu} In the above assumption, (\ref{condition_nonoverone}) implies that for $\pi_a(i) \in S$ and $\pi_b(j) \in S,$ $\{\widehat{a}_i\}$ and $\{\widehat{b}_j\}$ are ``real'' spikes, whereas (\ref{conidition_nonovertwo}) guarantees a phenomenon of cone concentration; see Remark \ref{rem_example} for a more detailed discussion. With the above preparation, we proceed to state our main results regarding the singular vectors. Let the left singular vectors of $\widehat{Y}$ be $\{\mathbf{\widehat{u}}_i\}$ and the right singular vectors be $\{\mathbf{\widehat{v}}_i\}.$ We denote \begin{equation*} \mathcal{P}_S= \sum_{k \in S} \mathbf{\widehat{u}}_k \mathbf{\widehat{u}}_k^*, \ \text{and} \ \mathcal{P}'_{S}= \sum_{k \in S} \mathbf{\widehat{v}}_k \mathbf{\widehat{v}}_k^*. \end{equation*} \begin{thm}[Eigenvector statistics]\label{thm_outlieereigenvector} Suppose that Assumptions \ref{assu_limit}, \ref{assu_esd} and \ref{assum_eigenvector} hold. For the set $S$ in (\ref{eq_setscondition}) and any given deterministic vector $\mathbf{v}=(v_1, \cdots, v_N)^* \in \mathbb{C}^N,$ we have that for the left singular vectors, \begin{equation*} \left| \langle \mathbf{v}, \mathcal{P}_S \mathbf{v}\rangle-g_a(\mathbf{v}, S) \right| \prec \sum_{i: \pi_a(i) \in S} \frac{|v_i|^2}{\sqrt{N(\widehat{a}_i-\Omega_B(E_+))}}+\sum_{i=1}^N \frac{|v_i|^2}{N \delta_{\pi_a(i)}(S)}+g_a(\mathbf{v},S)^{1/2} \left( \sum_{\pi_a(i) \notin S} \frac{|v_i|^2}{N \delta_{\pi_a(i)}(S)} \right)^{1/2}, \end{equation*} where $g_a(\mathbf{v}, S)$ is defined as \begin{equation}\label{eq_concentrationlimit} g_a(\mathbf{v}, S):=\sum_{i: \pi_a(i) \in S} \widehat{a}_i \frac{(\Omega_B^{-1})'(\widehat{a}_i)}{\Omega_B^{-1}(\widehat{a}_i)}|v_i|^2.
\end{equation} Similarly, for the right singular vectors, we have \begin{equation*} \left| \langle \mathbf{v}, \mathcal{P}_S' \mathbf{v}\rangle -g_b(\mathbf{v}, S) \right| \prec \sum_{j: \pi_b(j) \in S} \frac{|v_j|^2}{\sqrt{N(\widehat{b}_j-\Omega_A(E_+))}}+\sum_{j=1}^N \frac{|v_j|^2}{N \delta_{\pi_b(j)}(S)}+g_b(\mathbf{v},S)^{1/2} \left( \sum_{\pi_b(j) \notin S} \frac{|v_j|^2}{N \delta_{\pi_b(j)}(S)} \right)^{1/2} , \end{equation*} where $g_b(\mathbf{v},S)$ is defined as \begin{equation*} g_b(\mathbf{v}, S):=\sum_{j: \pi_b(j) \in S} \widehat{b}_j \frac{(\Omega_A^{-1})'(\widehat{b}_j)}{\Omega_A^{-1}(\widehat{b}_j)}|v_j|^2. \end{equation*} \end{thm} \begin{rem}\label{eq_strongeigenvectorassumption} We mention that some partial results of Theorem \ref{thm_outlieereigenvector} have been obtained in (4) of Theorem 2.5 in \cite{outliermodel} under stronger assumptions for the spiked unitarily invariant model. More specifically, by assuming that $r=0$ or $s=0,$ and (\ref{eq_strongone}) and (\ref{eq_strongtwo}), they obtained the concentration limit (\ref{eq_concentrationlimit}). We extend the counterparts in \cite{outliermodel} by, on the one hand, stating the results in great generality under Assumption \ref{assum_eigenvector} and, on the other hand, establishing the convergence rates. Moreover, we remark that we can obtain the results of Theorem \ref{thm_outlieereigenvector} without Assumption \ref{assum_eigenvector} following the arguments of \cite[Sections S.5.2 and S.6]{DYaos}. Finally, for the non-outlier singular vectors, we can prove a delocalization result similar to \cite[Theorem 3.14]{DYaos}. Both of these extensions will require more dedicated effort and will be the topics of future study. \end{rem} \begin{rem}\label{rem_example} We provide an example for illustration.
For simplicity, we consider the non-degenerate case such that all the outliers are well-separated in the sense that we can simply choose $S=\{\pi_a(i)\}$ or $S=\{\pi_b(j)\}.$ Let $S=\{\pi_a(i)\}$ and $\mathbf{v}=\mathbf{e}_i.$ Then we obtain from Theorem \ref{thm_outlieereigenvector} that \begin{equation}\label{eq_exampleeqone} \left| \langle \widehat{\mathbf{u}}_i, \mathbf{e}_i \rangle \right|^2=\widehat{a}_i \frac{(\Omega_B^{-1})'(\widehat{a}_i)}{\Omega_B^{-1}(\widehat{a}_i)}+\rO_{\prec}\left(\frac{1}{\sqrt{N}(\widehat{a}_i-\Omega_B(E_+))^{1/2}}+\frac{1}{N \delta^2_i} \right), \ \delta_i=\delta_{\pi_a(i)}(\{\pi_a(i)\}). \end{equation} It is easy to see that the non-overlapping condition (\ref{conidition_nonovertwo}), together with the estimates (\ref{eq_complexedgederterministicbound}) and (\ref{eq_derivativeinversebound}), implies that the error term is much smaller than the first term on the right-hand side of (\ref{eq_exampleeqone}). In this sense, $\widehat{\mathbf{u}}_i$ is concentrated on a cone with axis parallel to $\mathbf{e}_i.$ \end{rem} \subsection{Remarks on statistical applications}\label{sec_statapplication} In this section, we discuss how we can apply the results of Theorems \ref{thm:main}, \ref{thm_rigidity}, \ref{thm_delocalization}, \ref{thm_outlier} and \ref{thm_outlieereigenvector} to study the models (\ref{eq_defndatamatrixtype}) and (\ref{eq_modespikeddefn}). In all the discussions of this subsection, we assume that $r^+=r$ and $s^+=s,$ i.e., all the spikes are in the supercritical regime, which is the most interesting regime in statistics. As we have seen from the above theorems, all the results involve the subordination functions. Even though we know both $A$ and $B,$ it is generally difficult to calculate these functions, even numerically. In this sense, the results of Theorems \ref{thm:main} and \ref{thm_outlierlocallaw} provide us with a simple way to approximate the subordination functions and estimate the quantities in which we are interested.
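To see how the eigenvalue rates of Theorem \ref{thm_outlier} translate into accuracy for recovering the spikes, consider the following first-order sketch. It is heuristic: it assumes that $\Omega_B$ is differentiable with $|\Omega_B'|$ bounded on a neighborhood of $\Omega_B^{-1}(\widehat{a}_i)$, which is in line with (iv) of Proposition \ref{prop:stabN} away from the edge.

```latex
% Heuristic accuracy of recovering a spike from its outlier location.
% Assumption (beyond the stated theorems): \Omega_B is C^1 with bounded
% derivative near \Omega_B^{-1}(\widehat{a}_i).
\begin{align*}
\left| \Omega_B\big(\widehat{\lambda}_{\pi_a(i)}\big) - \widehat{a}_i \right|
&= \left| \Omega_B\big(\widehat{\lambda}_{\pi_a(i)}\big)
   - \Omega_B\big(\Omega_B^{-1}(\widehat{a}_i)\big) \right| \\
&\leq \sup\absv{\Omega_B'} \cdot
   \left| \widehat{\lambda}_{\pi_a(i)} - \Omega_B^{-1}(\widehat{a}_i) \right|
 \prec N^{-1/2}\big(\widehat{a}_i-\Omega_B(E_+)\big)^{1/2},
\end{align*}
```

where the last step uses the first estimate of Theorem \ref{thm_outlier}; the analogous computation with $\Omega_A$ applies to the spikes $\widehat{b}_j$.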
For example, suppose that we want to estimate the values of the spikes $\widehat{a}_i, 1 \leq i \leq r,$ and $\widehat{b}_j, 1 \leq j \leq s,$ given the data matrix $Y$ and the matrices $A$ and $B.$ By Theorem \ref{thm_outlier}, we see that $\widehat{a}_i$ and $\widehat{b}_j$ can be well approximated by $\Omega_B(\widehat{\lambda}_{\pi_a(i)})$ and $\Omega_A(\widehat{\lambda}_{\pi_b(j)}),$ respectively. Together with Remark \ref{rem_eigenvaluesitkcing}, we can propose the following estimators: \begin{equation*} \widetilde{\widehat{a}}_i=\tr{A}-\frac{1}{N} \sum_{k=r+s+1}^{N} \frac{a_k}{\widehat{\lambda}_{\pi_a(i)} \widetilde{G}_{kk}(\widehat{\lambda}_{\pi_a(i)})+1}, \ \widetilde{\widehat{b}}_j=\tr{B}-\frac{1}{N} \sum_{k=r+s+1}^{N} \frac{b_k}{\widehat{\lambda}_{\pi_b(j)} \widetilde{{\mathcal G}}_{kk}(\widehat{\lambda}_{\pi_b(j)})+1}, \end{equation*} for $1 \leq i \leq r, 1 \leq j \leq s.$ Note that our proposed estimators only require the knowledge of $A$ and $B$ once we obtain the data matrix $Y.$ It is easy to see that the above statistics are consistent estimators for the spikes. Our results can also be used to estimate the number of spikes in (\ref{eq_modespikeddefn}), i.e., the values of $r$ and $s.$ The number of spikes has important meaning in practice; for instance, it represents the number of factors in factor models and the number of signals in signal processing. In our model, we have two sources of spikes, from either $\widehat{A}$ or $\widehat{B}.$ We can follow the arguments of Section 4.1 of \cite{DYaos} to propose the estimators. For simplicity, we assume that Assumption \ref{assum_eigenvector} holds with $\tau_1=1/3$ and $\tau_2=1/2.$ In this sense, by Theorem \ref{thm_outlieereigenvector} and Remark \ref{eq_strongeigenvectorassumption}, we see that \begin{equation*} \left| \langle \widehat{\mathbf{u}}_k, \mathbf{e}_i \rangle \right|^2=\mathbb{I}(k=\pi_a(i))\widehat{a}_i \frac{(\Omega_B^{-1})'(\widehat{a}_i)}{\Omega_B^{-1}(\widehat{a}_i)} +\ro(1).
\end{equation*} A similar discussion applies to the $\widehat{\mathbf{v}}_k$'s. Therefore, for some constant $c>0,$ we propose the following statistics to estimate $r$ and $s$, respectively: \begin{equation*} \widehat{r}=\underset{1 \leq i \leq cN}{\text{arg} \min} \left\{ \max_{k} |\widehat{\mathbf{u}}_i(k)|^2 \leq \omega \right\}, \ \widehat{s}=\underset{1 \leq i \leq cN}{\text{arg} \min} \left\{ \max_{k} |\widehat{\mathbf{v}}_i(k)|^2 \leq \omega \right\}, \end{equation*} where the threshold $\omega=\ro(1)$ can be chosen using a resampling procedure as discussed in \cite[Section 4.1]{DYaos}. Similar to the discussion of Theorem 4.3 of \cite{DYaos}, we can conclude that $\widehat{r}$ and $\widehat{s}$ are consistent estimators for $r$ and $s,$ respectively. Finally, we mention that our results can be used for other statistical applications. For instance, the truncated Haar random matrices, i.e., submatrices of Haar random matrices, are important objects in the study of random sketching \cite{ELnips}. For such a generalization, we need to allow $A$ or $B$ to be nonnegative definite, in the sense that a portion of the eigenvalues of $A$ or $B$ are zero in our model (\ref{eq_defndatamatrixtype}). In fact, all the results obtained in this paper still hold true with minor modifications. These results can deepen the understanding of and provide more insight into the problem considered in \cite{ELnips}, where only the global laws and convergent limits were used as the technical inputs. Moreover, in \cite{MR4009717}, the universality has been established for the local spectral statistics of the additive model in the bulk. For Wigner matrices and Gram matrices, the edge universality has been established in \cite{2017arXiv171203881L,2020arXiv200804166D}. The key technical inputs are the local laws.
Therefore, we believe that, combining the arguments in the aforementioned works with Theorem \ref{thm:main}, we should be able to derive the distributions of the edge eigenvalues and use them for hypothesis testing regarding the models (\ref{eq_defndatamatrixtype}) and (\ref{eq_modespikeddefn}). We will pursue these directions in future works. \section{Properties of subordination functions}\label{sec_subordiationproperties} In this section, we investigate the properties of the subordination functions. We first introduce some notations. The system of equations \eqref{eq_suborsystem} can be written as \begin{align}\label{eq_suborsystemPhi} \Phi_{\alpha\beta}(\Omega_{\alpha}(z),\Omega_{\beta}(z),z)=0, \end{align} where we define $\Phi_{\alpha\beta}\equiv(\Phi_{\alpha},\Phi_{\beta}):\{(\omega_{1},\omega_{2},z)\in\C_{+}^{3}: \arg \omega_{1},\arg\omega_{2}\geq \arg z\}\to \C^{2}$ by \begin{align}\label{eq:def_Phi_ab} &\Phi_{\alpha}(\omega_{1},\omega_{2},z)\mathrel{\mathop:}= \frac{M_{\mu_{\alpha}}(\omega_{2})}{\omega_{2}}-\frac{\omega_{1}}{z}, &\Phi_{\beta}(\omega_{1},\omega_{2},z)\mathrel{\mathop:}= \frac{M_{\mu_{\beta}}(\omega_{1})}{\omega_{1}}-\frac{\omega_{2}}{z}. \end{align} Here $\Phi_{\alpha\beta}$ is defined as a function of three complex variables. We will also use the following quantities, which are closely related to the first and second derivatives of the system (\ref{eq_suborsystemPhi}). Recall (\ref{eq_mtrasindenity}).
\begin{align} &{\mathcal S}_{\alpha\beta}(z)\mathrel{\mathop:}= z^{2}L_{\mu_{\beta}}'(\Omega_{\alpha}(z))L_{\mu_{\alpha}}'(\Omega_{\beta}(z))-1, \label{eq_defn_salphabeta} \\ &{\mathcal T}_{\alpha}(z)\mathrel{\mathop:}= \frac{1}{2}\left[zL_{\mu_{\beta}}''(\Omega_{\alpha}(z))L_{\mu_{\alpha}}'(\Omega_{\beta}(z)) +(zL_{\mu_{\beta}}'(\Omega_{\alpha}(z)))^{2}L_{\mu_{\alpha}}''(\Omega_{\beta}(z))\right], \label{eq_defn_talpha} \\ &{\mathcal T}_{\beta}(z)\mathrel{\mathop:}= \frac{1}{2}\left[zL_{\mu_{\alpha}}''(\Omega_{\beta}(z))L_{\mu_{\beta}}'(\Omega_{\alpha}(z))+(zL_{\mu_{\alpha}}'(\Omega_{\beta}(z)))^{2}L_{\mu_{\beta}}''(\Omega_{\alpha}(z))\right]. \nonumber \end{align} We point out that, denoting by $\mathrm{D}$ the differential operator with respect to $\omega_{1}$ and $\omega_{2}$, the first derivative of $\Phi_{\alpha\beta}$ is given by \begin{equation} \label{eq_diferentialoperator} \mathrm{D}\Phi_{\alpha\beta}(\omega_{1},\omega_{2},z)\mathrel{\mathop:}=\begin{pmatrix} -z^{-1} & L'_{\mu_{\alpha}}(\omega_{2}) \\ L'_{\mu_{\beta}}(\omega_{1}) & -z^{-1} \end{pmatrix}, \end{equation} whose determinant is equal to $-z^{-2}{\mathcal S}_{\alpha\beta}(z)$ at the point $(\Omega_{\alpha}(z),\Omega_{\beta}(z),z)$. Similarly, using $\Phi_{\alpha\beta}(\Omega_{\alpha}(z),\Omega_{\beta}(z),z)=0$, we find that \begin{align*} {\mathcal T}_{\alpha}(z)&= z\left[\frac{\partial}{\partial\omega_{1}}\det \mathrm{D} \Phi_{\alpha\beta}(\omega_{1},zL_{\mu_{\beta}}(\omega_{1}),z)\right]_{\omega_{1}=\Omega_{\alpha}(z)}, \\ {\mathcal T}_{\beta}(z)&=z\left[\frac{\partial}{\partial\omega_{2}}\det \mathrm{D} \Phi_{\alpha\beta}(zL_{\mu_{\alpha}}(\omega_{2}),\omega_{2},z)\right]_{\omega_{2}=\Omega_{\beta}(z)}. \end{align*} By replacing the pair $(\alpha,\beta)$ with $(A,B)$, we can define $\Phi_{AB}$, ${\mathcal S}_{AB}$, ${\mathcal T}_{A}$, and ${\mathcal T}_{B}$ analogously. 
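For the reader's convenience, the determinant value quoted above can be verified directly from (\ref{eq_diferentialoperator}) and the definition (\ref{eq_defn_salphabeta}):

```latex
% Direct verification that det D\Phi = -z^{-2} S_{\alpha\beta}(z)
% at the point (\Omega_\alpha(z), \Omega_\beta(z), z).
\begin{align*}
\det \mathrm{D}\Phi_{\alpha\beta}\big(\Omega_{\alpha}(z),\Omega_{\beta}(z),z\big)
&= z^{-2}-L'_{\mu_{\alpha}}(\Omega_{\beta}(z))\,L'_{\mu_{\beta}}(\Omega_{\alpha}(z)) \\
&= -z^{-2}\Big[z^{2}L_{\mu_{\beta}}'(\Omega_{\alpha}(z))\,L_{\mu_{\alpha}}'(\Omega_{\beta}(z))-1\Big]
 = -z^{-2}{\mathcal S}_{\alpha\beta}(z).
\end{align*}
```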
The main result of this section is the following proposition, which establishes the properties of $\mu_A \boxtimes \mu_B$ and its associated subordination functions. For $z=E+\mathrm{i} \eta \in \mathbb{C}_+,$ we define \begin{equation}\label{eq_defnkappa} \kappa \equiv \kappa(z):=|E-E_+|. \end{equation} \begin{prop}\label{prop:stabN} Suppose Assumptions \ref{assu_limit} and \ref{assu_esd} hold. Then for any fixed small constant $\tau>0$ and sufficiently large $N$, the following hold: \begin{itemize} \item[(i)] There exists some constant $C>1$ such that \begin{align*} &\min_i\absv{a_{i}-\Omega_{B}(z)}\geq C^{-1},& &\min_i \absv{b_{i}-\Omega_{A}(z)}\geq C^{-1},&\\ & C^{-1}\leq \absv{\Omega_{A}(z)}\leq C,& &C^{-1}\leq \absv{\Omega_{B}(z)}\leq C,& \end{align*} uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. \item[(ii)] For all $z \in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ and $\kappa$ defined in (\ref{eq_defnkappa}), we have \begin{equation*} \im m_{\mu_{A}\boxtimes\mu_{B}}(z)\sim\left\{ \begin{array}{lcl} \sqrt{\kappa+\eta}, &\text{if }& E\in\supp\mu_{A}\boxtimes\mu_{B},\\ \dfrac{\eta}{\sqrt{\kappa+\eta}}, &\text{if }& E\notin\supp\mu_{A}\boxtimes\mu_{B}. \end{array} \right. \end{equation*} \item[(iii)] For all $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}),$ we have the following bounds for ${\mathcal S}_{AB}$, ${\mathcal T}_{A}$, and ${\mathcal T}_{B}$: \begin{align*} &{\mathcal S}_{AB}\sim\sqrt{\kappa+\eta},& &\absv{{\mathcal T}_{A}(z)}\leq C,& &\absv{{\mathcal T}_{B}(z)}\leq C.& \end{align*} Furthermore, if $\absv{z-E_{+}}\leq\delta$ for a sufficiently small constant $\delta>0$, we also have the lower bounds for ${\mathcal T}_{A}$ and ${\mathcal T}_{B}$: \begin{align*} \absv{{\mathcal T}_{A}(z)}\geq c, && \absv{{\mathcal T}_{B}(z)}\geq c, \end{align*} where $c>0$ is some constant.
\item[(iv)] For the derivatives of $\Omega_{A}$, $\Omega_{B}$ and ${\mathcal S}_{AB}$, we have \begin{align*} \absv{\Omega_{A}'(z)} \leq C\frac{1}{\sqrt{\kappa+\eta}}, && \absv{\Omega_{B}'(z)}\leq C\frac{1}{\sqrt{\kappa+\eta}}, && \absv{{\mathcal S}_{AB}'(z)}\leq C\frac{1}{\sqrt{\kappa+\eta}}, \end{align*} uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. \end{itemize} \end{prop} The proof is divided into two parts. In Section \ref{sec:freelimit}, we prove an analogue of Proposition \ref{prop:stabN} for $\Omega_{\alpha}$, $\Omega_{\beta}$ and $\mu_{\alpha}\boxtimes\mu_{\beta}$ in Lemma \ref{lem:stabbound} and Proposition \ref{prop:stablimit}. Then in Section \ref{sec:freeN}, by establishing the bounds $\absv{\Omega_{A}(z)-\Omega_{\alpha}(z)}=\rO_{\prec}(N^{-1/2})$ and $\absv{\Omega_{B}(z)-\Omega_{\beta}(z)}=\rO_{\prec}(N^{-1/2}),$ we extend the results of Section \ref{sec:freelimit} to $\Omega_A$ and $\Omega_B,$ and complete the proof of Proposition \ref{prop:stabN}. In Section \ref{sec:omegabound}, we prove the controls for the subordination functions, namely the upper bounds on $\absv{\Omega_{A}(z)-\Omega_{\alpha}(z)}$ and $\absv{\Omega_{B}(z)-\Omega_{\beta}(z)}.$ The proofs therein follow the ideas of \cite{Bao-Erdos-Schnelli2016,BEC}, in the sense that we first prove the bound when $\im z$ is large and then use a bootstrapping argument to expand the domain of $z$. \subsection{Free convolution of $\mu_{\alpha} \boxtimes \mu_{\beta}$}\label{sec:freelimit} In this section, we collect the results concerning the $N$-independent measure $\mu_{\alpha} \boxtimes \mu_{\beta}$ and its corresponding subordination functions. Most of the results have been proved in \cite[Section 5]{JHC}. \begin{lem}[Lemma 5.2 and Proposition 5.6 of \cite{JHC}]\label{lem:stabbound} Suppose that $\mu_{\alpha}$ and $\mu_{\beta}$ satisfy Assumption \ref{assu_limit}. Then the following statements hold: \\ (i).
For any compact ${\mathcal D} \subset \C_{+}\cup(0,\infty)$, there exists some constant $C>1$ such that for all $z\in{\mathcal D}$, \begin{align*} &C^{-1}\leq \absv{\Omega_{\alpha}(z)}\leq C, &C^{-1}\leq \absv{\Omega_{\beta}(z)}\leq C. \end{align*} (ii). There exists some constant $\varsigma>0$ such that for all $z\in\C_{+}$, \begin{align} \label{eq:stabbound} \dist(\Omega_{\alpha}(z),\supp\mu_{\beta})>&\varsigma, & \dist(\Omega_{\beta}(z),\supp\mu_{\alpha})>&\varsigma. \end{align} (iii). Furthermore, we have \begin{align} \label{eq_boundedomega} &0<\Omega_{\alpha}(E_{-})<E_{-}^{\beta}<E_{+}^{\beta}<\Omega_{\alpha}(E_{+}), &0<\Omega_{\beta}(E_{-})<E_{-}^{\alpha}<E_{+}^{\alpha}<\Omega_{\beta}(E_{+}). \end{align} \end{lem} The lower bound in the second statement of Lemma \ref{lem:stabbound} is the so-called \emph{stability bound}, as it implies the stability of the systems \eqref{eq_suborsystem} and \eqref{eq_suborsystemPhi}. In particular, the edges of $\mu_{\alpha} \boxtimes \mu_{\beta}$ are completely characterized by the following equation. Recall the definitions of the edges $E_{-}$ and $E_{+}$ of $\rho$ in (\ref{eq_edge}). \begin{lem}[Proposition 5.7 of \cite{JHC}] The edges $E_-$ and $E_+$ satisfy the following equation when $z=E_{\pm}$: \begin{equation} \label{eq:edgechar} \left(\frac{\Omega_{\beta}(z)}{M_{\mu_{\alpha}}(\Omega_{\beta}(z))}M_{\mu_{\alpha}}'(\Omega_{\beta}(z))-1\right)\left(\frac{\Omega_{\alpha}(z)}{M_{\mu_{\beta}}(\Omega_{\alpha}(z))}M_{\mu_{\beta}}'(\Omega_{\alpha}(z))-1\right)-1=0.
\end{equation} \end{lem} \begin{rem}\label{rem_salphabetaequivalent} In fact the left-hand side of (\ref{eq:edgechar}) is exactly $\mathcal{S}_{\alpha \beta}(z).$ To see this, by the definitions of $L_{\mu_{\alpha}}(z)$ and $L_{\mu_{\beta}}(z)$ together with \eqref{eq_suborsystem}, we have \begin{equation*} L'_{\mu_{\beta}}(\Omega_{\alpha}(z))=\frac{\Omega_{\alpha}(z)M_{\mu_{\beta}}'(\Omega_{\alpha}(z))-M_{\mu_{\beta}}(\Omega_{\alpha}(z))}{\Omega_{\alpha}(z)^{2}} =\frac{\Omega_{\beta}(z)}{z\Omega_{\alpha}(z)}\left[\frac{\Omega_{\alpha}(z)}{M_{\mu_{\beta}}(\Omega_{\alpha}(z))}M_{\mu_{\beta}}'(\Omega_{\alpha}(z))-1\right]. \end{equation*} Similar results hold for $L_{\mu_{\alpha}}'(\Omega_{\beta}(z))$ by interchanging $\alpha$ and $\beta$. In this sense, the edges of $\rho$ satisfy that $\mathcal{S}_{\alpha \beta}(E_{\pm})=0.$ \end{rem} In \cite[Proposition 6.15]{JHC}, the author has proved that there are exactly two real positive solutions to the equation \eqref{eq:edgechar}, denoted as $E_-$ and $E_+,$ so that $\{x\in\R_{+}:\rho(x)>0\}=(E_{-},E_{+})$. Using this as an input, the author in \cite{JHC} also shows that the subordination functions admit the following Pick representations. Analogous results hold for the $L$-transform. \begin{lem}[Lemma 5.8 of \cite{JHC}]\label{lem:reprM} Suppose that $\mu_{\alpha}$ and $\mu_{\beta}$ satisfy Assumption \ref{assu_limit}. Then there exist unique finite measures $\widehat{\mu}_{\alpha}$, $\widehat{\mu}_{\beta}$, $\widetilde{\mu}_{\alpha}$, and $\widetilde{\mu}_{\beta}$ on $\R_{+}$ such that the following hold: \begin{align} L_{\mu_{\alpha}}(z)&=1+m_{\widehat{\mu}_{\alpha}}(z),& \widehat{\mu}_{\alpha}(\R_{+})&=\Var{\mu_{\alpha}}=\int_{\R_{+}} x^{2}\mathrm{d}\mu_{\alpha}(x)-1, & \supp\widehat{\mu}_{\alpha}&=\supp\mu_{\alpha}, \label{eq:reprM} \\ \frac{\Omega_{\alpha}(z)}{z}&=1+m_{\widetilde{\mu}_{\alpha}}(z), & \widetilde{\mu}_{\alpha}(\R_{+})&=\Var{\mu_{\alpha}}, & \supp \widetilde{\mu}_{\alpha}&=\supp \rho. 
\nonumber \end{align} Similar results hold true if we replace $\alpha$ with $\beta.$ \end{lem} Armed with Lemma \ref{lem:stabbound}, we can see that $\mathcal{S}_{\alpha \beta}$ is \emph{locally quadratic} for $z$ around the edges $E_{+}$ and $E_{-}$, and consequently $\Omega_{\alpha}$ and $\Omega_{\beta}$ have square root behavior. Specifically, denote \begin{equation} \label{eq_defnztilde} \widetilde{z}_{+}(\Omega)\mathrel{\mathop:}= \Omega\frac{M_{\mu_{\alpha}}^{-1} ( M_{\mu_{\beta}}(\Omega) )}{M_{\mu_{\beta}}(\Omega)}. \end{equation} From the results of \cite[Section 6.3]{JHC}, we find that there exists some neighborhood $U$ around $E_+$ such that when $z \in U,$ $\widetilde{z}_{+}(\Omega_{\alpha}(z))=z$ and \begin{equation} \label{eq:suborTaylor} z-E_{+}=\widetilde{z}_{+}(\Omega_{\alpha}(z))-\widetilde{z}_{+}(\Omega_{\alpha}(E_{+}))= \frac{1}{2}\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))(\Omega_{\alpha}(z)-\Omega_{\alpha}(E_{+}))^{2}+\rO(\absv{\Omega_{\alpha}(z)-\Omega_{\alpha}(E_{+})}^{3}). \end{equation} Moreover, we have that $\widetilde{z}_{+}'(\Omega_{\alpha}(E_{+}))=0$ and $\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))>0.$ \begin{lem}\label{lem:suborsqrt} Suppose Assumption \ref{assu_limit} holds. Then there exist positive constants $\gamma^{\alpha}_{\pm}$, $\gamma^{\beta}_{\pm}$, $0<\tau<1$ and $\eta_{0}$ such that \begin{equation} \label{eq_squareroot} \Omega_{\alpha}(z)=\Omega_{\alpha}(E_{\pm})+\gamma^{\alpha}_{\pm}\sqrt{\pm(z-E_{\pm})}+\rO(\absv{z-E_{\pm}}^{3/2}), \end{equation} uniformly in $z\in\{z\in\C_{+}: E_+-\tau \leq E\leq \tau^{-1},\ 0 \leq \eta<\eta_{0}\}$, where $\sqrt{-1}=\mathrm{i}$. The same asymptotics hold with $\alpha$ replaced by $\beta$. Also, for any compact interval $[a,b]\subset(E_{-},E_{+})$, there exists a constant $c>0$ such that \begin{align}\label{eq_imaginarypart} \im\Omega_{\alpha}(x)>c,\ \im\Omega_{\beta}(x)>c, \end{align} for all $x\in[a,b]$.
\end{lem} \begin{proof} Estimate (\ref{eq_squareroot}) has been proved in \cite[Proposition 5.10]{JHC}. We prove \eqref{eq_imaginarypart} here. The lower bound in \eqref{eq_imaginarypart} is a consequence of the equality $\{x\in\R:\rho(x)>0\}=(E_{-},E_{+})$. In fact, since the density is continuous, there must exist a constant $c>0$ so that $\rho(x)>c$ holds for all $x\in[a,b]$. Since \begin{equation*} \im\Omega_{\alpha}(x)\int_{\R_{+}}\frac{t}{\absv{t-\Omega_{\alpha}(x)}^{2}}\mathrm{d}\mu_{\beta}(t)=\im (x m_{\rho}(x)+1)=x\rho(x)>ca, \end{equation*} we conclude, using (i) and (ii) of Lemma \ref{lem:stabbound}, that $\im\Omega_{\alpha}(x)$ is bounded from below for $x\in[a,b]$. \end{proof} Furthermore, the density of $\mu_{\alpha} \boxtimes \mu_{\beta}$ also has square root behavior. \begin{lem}[Theorem 3.3 of \cite{JHC}] Suppose that (i) and (ii) of Assumption \ref{assu_limit} hold. Then there exists a constant $C>1$ such that for all $x\in[E_{-},E_{+}]$, \begin{equation*} C^{-1}\leq \frac{\rho(x)}{\sqrt{(E_{+}-x)(x-E_{-})}} \leq C. \end{equation*} \end{lem} To characterize the behavior of ${\mathcal S}_{\alpha \beta}, \mathcal{T}_{\alpha}$ and $\mathcal{T}_{\beta},$ we need the following lemma. \begin{lem}\label{lem:suborder} Suppose Assumption \ref{assu_limit} holds. Then there exist positive constants $\tau$ and $\eta_{0}$ such that the following hold uniformly in $z \in \mathcal{D}_\tau(0,\eta_0)$: \begin{align} L_{\mu_{\alpha}}'(\Omega_{\beta}(z))\sim M_{\mu_{\alpha}}'(\Omega_{\beta}(z)) \sim 1, && L_{\mu_{\alpha}}''(\Omega_{\beta}(z))\sim M_{\mu_{\alpha}}''(\Omega_{\beta}(z)) \sim 1,\label{eq:LMder}\\ \absv{\Omega_{\alpha}'(z)}\sim\frac{1}{\sqrt{\absv{z-E_{+}}}}, && \absv{\Omega_{\alpha}''(z)}\sim\frac{1}{\absv{z-E_{+}}^{3/2}}\label{eq:suborder}. \end{align} Similar results hold true if we replace $\alpha$ with $\beta.$ Without loss of generality, we set $\tau$ to be the same as defined in (\ref{eq_fundementalset}).
\end{lem} \begin{proof} We start with the first relation in (\ref{eq:LMder}). By differentiating the first identity in (\ref{eq:reprM}), we find that \begin{align} \label{eq_firstderivative} L'_{\mu_{\alpha}}(z)=\int\frac{1}{(x-z)^{2}}\mathrm{d}\widehat{\mu}_{\alpha}(x), && M'_{\mu_{\alpha}}(z)=1+\int\frac{x}{(x-z)^{2}}\mathrm{d}\widehat{\mu}_{\alpha}(x). \end{align} Then the proof follows from Remark \ref{rem_ctxexpansubordination}, $\text{supp} \ \widehat{\mu}_{\alpha}=\text{supp} \ \mu_{\alpha}$ (c.f. (\ref{eq:reprM})) and (\ref{eq:stabbound}). The second relation follows from a similar argument by differentiating (\ref{eq_firstderivative}), and we omit the details. Next, we prove (\ref{eq:suborder}). The proof is similar to that of \cite[Section 6.3]{JHC} and we only sketch it here. Considering the Taylor expansion of $\widetilde{z}_{+}'$ around $\Omega_{\alpha}(E_{+})$, we have \begin{equation} \label{eq:z'expa} \widetilde{z}_{+}'(\Omega_{\alpha}(z))=\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))(\Omega_{\alpha}(z)-\Omega_{\alpha}(E_{+}))+\rO(\absv{\Omega_{\alpha}(z)-\Omega_{\alpha}(E_{+})}^{2}). \end{equation} Recall that $\widetilde{z}_{+}(\Omega_{\alpha}(z))=z.$ By the inverse function theorem and the facts $\absv{\Omega_{\alpha}(z)-\Omega_{\alpha}(E_{+})}\sim \sqrt{\absv{z-E_{+}}}$ and $\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))>0,$ we have \begin{align} \label{eq_derivative} \Omega_{\alpha}'(z)=\frac{1}{\widetilde{z}_{+}'(\Omega_{\alpha}(z))}\sim \frac{1}{\sqrt{\absv{z-E_{+}}}}, && \Omega_{\alpha}''(z)=-\frac{\widetilde{z}_{+}''(\Omega_{\alpha}(z)) \Omega_{\alpha}'(z)^{2}}{\widetilde{z}_{+}'(\Omega_{\alpha}(z))}\sim \frac{1}{\absv{z-E_{+}}^{3/2}}. \end{align} \end{proof} Finally, we investigate the properties of ${\mathcal S}_{\alpha\beta}$, ${\mathcal T}_{\alpha}$, and ${\mathcal T}_{\beta}$ around the edge (and away from the bulk). They are summarized in the following proposition, whose proof is given in Appendix \ref{sec_additional}.
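As a consistency check between the quadratic expansion (\ref{eq:suborTaylor}) and the square root expansion (\ref{eq_squareroot}), one can formally invert the former to identify the square-root coefficient. This is a formal computation, using only $\widetilde{z}_{+}'(\Omega_{\alpha}(E_{+}))=0$ and $\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))>0$:

```latex
% Solving z - E_+ = (1/2) \tilde z_+''(\Omega_\alpha(E_+))
%                    (\Omega_\alpha(z) - \Omega_\alpha(E_+))^2 + ...
% for \Omega_\alpha(z) - \Omega_\alpha(E_+) to leading order gives
\begin{equation*}
\Omega_{\alpha}(z)-\Omega_{\alpha}(E_{+})
=\sqrt{\frac{2}{\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))}}\,\sqrt{z-E_{+}}
\,\Big(1+\rO\big(\absv{z-E_{+}}^{1/2}\big)\Big),
\qquad \text{so that} \qquad
\gamma^{\alpha}_{+}=\sqrt{\frac{2}{\widetilde{z}_{+}''(\Omega_{\alpha}(E_{+}))}}.
\end{equation*}
```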
\begin{prop}\label{prop:stablimit} Suppose that Assumption \ref{assu_limit} holds. Then there exist constants $\tau,\eta_{0}>0$ such that the following hold: \begin{itemize} \item[(i)] We have \begin{align*} \im M_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z)\sim \im m_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z) \sim\im \Omega_{\alpha}(z)\sim\im\Omega_{\beta}(z)\sim \begin{cases} \sqrt{\kappa+\eta} &\text{if }E\in[E_{-},E_{+}], \\ \dfrac{\eta}{\sqrt{\kappa+\eta}} &\text{if }E\notin[E_{-},E_{+}], \end{cases} \end{align*} uniformly in $z=E+\mathrm{i}\eta\in{\mathcal D}_{\tau}(0,\eta_{0})$. \item[(ii)] For all fixed $\eta_{U}\geq\eta_{0}$, there exists a constant $C>0$ such that \begin{align}\label{eq_stalphabetabound} {\mathcal S}_{\alpha\beta}(z) &\sim\sqrt{\kappa+\eta}, &|{\mathcal T}_{\alpha}(z)| &\leq C,& |{\mathcal T}_{\beta}(z)| &\leq C \end{align} hold uniformly in $z\in{\mathcal D}_{\tau}(0,\eta_{U})$. \item[(iii)] There exists a constant $\delta>0$ such that \begin{align*} &{\mathcal T}_{\alpha}(z)\sim 1, & &{\mathcal T}_{\beta}(z)\sim 1 \end{align*} hold uniformly in $z\in\{z\in\C_{+}:\absv{z-E_{+}}<\delta\}$. \end{itemize} \end{prop} \subsection{Free convolution of $\mu_{A} \boxtimes \mu_{B}$: proof of Proposition \ref{prop:stabN}}\label{sec:freeN} In this section, we prove Proposition \ref{prop:stabN}. In fact, we will prove our results on a slightly larger domain ${\mathcal D}_{0},$ where ${\mathcal D}_{0}\mathrel{\mathop:}= {\mathcal D}_{I}\cup{\mathcal D}_{O}$ and \begin{align} {\mathcal D}_{I}\mathrel{\mathop:}= & \{z\in\C_{+}: \re z\in[E_{+}-\tau ,E_{+}+N^{-1+\xi \epsilon}],\,\im z\in[ N^{-1+\xi \epsilon},\eta_{1}]\}, \nonumber \\ {\mathcal D}_{O}\mathrel{\mathop:}= & \{z\in\C_{+}: \re z\in[E_{+}+N^{-1+\xi \epsilon},\tau^{-1}],\im z\in(0,\eta_{1})\}, \label{eq_defineoutsideset} \end{align} where $\xi>1$ is a fixed constant and $\eta_{1}>\eta_{U}$ will be given later in the proof (see Lemma \ref{lem:OmegaBound1}).
Our proof relies on the control of the difference between $(\Omega_{\alpha},\Omega_{\beta})$ and $(\Omega_{A},\Omega_{B})$. \begin{lem}\label{lem:OmegaBound} Let $\mu_{\alpha}$, $\mu_{\beta}$, $\mu_{A}$, and $\mu_{B}$ satisfy Assumptions \ref{assu_limit} and \ref{assu_esd}. Then there exists some constant $C>0$ such that for sufficiently large $N$ the following statements hold: \begin{itemize} \item[(i)] For all $z\in{\mathcal D}_{0}$, \begin{equation} \label{eq:OmegaBound} \absv{\Omega_{A}(z)-\Omega_{\alpha}(z)}+\absv{\Omega_{B}(z)-\Omega_{\beta}(z)} \leq C\frac{N^{-1+\epsilon}}{\sqrt{\absv{z-E_{+}}}}\leq N^{-1/2+\epsilon}. \end{equation} \item[(ii)] For all $z\in{\mathcal D}_{O},$ we have \begin{equation} \label{eq:OmegaImBound} \absv{\im\Omega_{A}(z)-\im\Omega_{\alpha}(z)}+\absv{\im\Omega_{B}(z)-\im\Omega_{\beta}(z)} \leq C\frac{N^{-1+\epsilon}\im(\Omega_{\alpha}(z)+\Omega_{\beta}(z))+\im z}{\sqrt{\absv{z-E_{+}}}}. \end{equation} \end{itemize} \end{lem} We first present the proof of Proposition \ref{prop:stabN} using Lemma \ref{lem:OmegaBound}, and postpone the proof of the lemma to Section \ref{sec:omegabound}. In the proof of Proposition \ref{prop:stabN}, we require two additional lemmas. The first result is an analogue of Lemma \ref{lem:reprM} for the ESDs $\mu_{A}$ and $\mu_{B}$. \begin{lem}\label{lem:reprMemp} Suppose that $\mu_{A}$ and $\mu_{B}$ satisfy Assumptions \ref{assu_limit} and \ref{assu_esd}. Then there exist unique finite measures $\widehat{\mu}_{A}$, $\widehat{\mu}_{B}$, $\widetilde{\mu}_{A}$, and $\widetilde{\mu}_{B}$ on $\R_{+}$ such that the following hold: \begin{align} \label{eq_newsupportbound} L_{\mu_{A}}(z)&=1+m_{\widehat{\mu}_{A}}(z), & \widehat{\mu}_{A}(\R_{+})&=\Var{\mu_{A}}, & \supp\widehat{\mu}_{A}&\subset [a_{N},a_{1}], \nonumber \\ \frac{\Omega_{A}(z)}{z}&=1+m_{\widetilde{\mu}_{A}}(z), & \widetilde{\mu}_{A}(\R_{+})&=\Var{\mu_{A}}, & \supp\widetilde{\mu}_{A}&\subset[\inf \supp \mu_{A}\boxtimes\mu_{B},\sup\supp \mu_{A}\boxtimes\mu_{B}].
\end{align} Similar results hold true if we replace $A$ with $B.$ \end{lem} \begin{proof} The proof is similar to that of \cite[Lemmas 6.2 and 6.14]{JHC}. First, we use the classical Nevanlinna--Pick representation theorem (see \cite[Lemma 3.7]{JHC}) to establish the existence and uniqueness of $\widehat{\mu}_{A}$ and $\widetilde{\mu}_{A}$. Second, we use the fact that $L_{\mu_{A}}$ and $\Omega_{A}$ are analytic in the complements of $[a_{N},a_{1}]$ and $[\inf\supp(\mu_{A}\boxtimes\mu_{B}),\sup\supp(\mu_{A}\boxtimes\mu_{B})]$ to get the inclusion of the supports. We omit further details here and refer to the proofs of \cite[Lemmas 6.2 and 6.14]{JHC}. \end{proof} We remark that by (\ref{eq_newsupportbound}), we have \begin{align} \im\Omega_{A}(z)&=\im (z+zm_{\widetilde{\mu}_{A}}(z))=\eta+\eta\int\frac{x}{\absv{x-z}^{2}}\mathrm{d}\widetilde{\mu}_{A}(x)\geq\eta, \label{eq_omegabbound1} \\ \im\frac{\Omega_{A}(z)}{z}&=\eta\int\frac{1}{\absv{x-z}^{2}}\mathrm{d}\widetilde{\mu}_{A}(x)\geq \Var{\mu_{A}}\frac{\eta}{2(\absv{z}^{2}+\norm{A}^{2}\norm{B}^{2})} \label{eq_omegabbound}, \end{align} where in the second step of (\ref{eq_omegabbound}) we used (\ref{eq:priorisupp}). We next introduce the second auxiliary lemma. We will frequently refer to this lemma whenever we need to bound the differences between different Stieltjes transforms in terms of ${\mathcal L}(\mu_{\alpha},\mu_{A})+{\mathcal L}(\mu_{\beta},\mu_{B}).$ Its proof can be found in Appendix \ref{sec_additional}. \begin{lem}\label{lem:rbound} Let $\delta>0$ be fixed and $f:\R\to\C$ be a continuous function which is continuously differentiable in $(E_{-}^{\alpha}-\delta,E_{+}^{\alpha}+\delta)$. Suppose that there exists $N_0 \in \mathbb{N}$ independent of $f$ such that ${\boldsymbol d}\leq \delta/2$ and $[a_{N},a_{1}]\subset[E_{-}^{\alpha}-\delta/2,E_{+}^{\alpha}+\delta/2]$ for all $N\geq N_{0}$.
Then there exists some constant $C>0$, independent of $f$, such that we have \begin{equation*} \Absv{\int_{\R_{+}} f(x)\mathrm{d}\mu_{\alpha}(x)-\int_{\R_{+}} f(x)\mathrm{d}\mu_{A}(x)}\leq C \norm{f'}_{\mathrm{Lip},\delta}{\boldsymbol d}, \end{equation*} for all $N\geq N_{0}$, where we denoted \begin{equation*} \norm{f'}_{\mathrm{Lip},\delta}\mathrel{\mathop:}=\sup_{{x\in[E_{-}^{\alpha}-\delta,E_{+}^{\alpha}+\delta]}}\absv{f'(x)}+\sup\left\{\Absv{\frac{f'(x)-f'(y)}{x-y}}:x,y\in[E_{-}^{\alpha}-\delta,E_{+}^{\alpha}+\delta], x\neq y\right\}. \end{equation*} The same result holds if we replace $\alpha$ and $A$ by $\beta$ and $B$. \end{lem} We are now ready to present the proof of Proposition \ref{prop:stabN}. \begin{proof}[\bf Proof of Proposition \ref{prop:stabN}] Throughout the proof we choose $\eta_1 \geq \eta_U$ such that ${\mathcal D}_{\tau}(\eta_{L},\eta_{U})\subset{\mathcal D}_{0}.$ It is easy to see that (i) follows directly from Lemmas \ref{lem:stabbound} and \ref{lem:OmegaBound}. Next, we proceed to prove (ii). By (\ref{eq_multiidentity}), we find that \begin{equation*} \im zm_{\mu_{A}\boxtimes\mu_{B}}(z) =\im \Omega_{B}(z)m_{\mu_{A}}(\Omega_{B}(z))=\im\Omega_{B}(z)\int\frac{x}{\absv{x-\Omega_{B}(z)}^{2}}\mathrm{d}\mu_{A}(x). \end{equation*} Together with (i) of Proposition \ref{prop:stabN}, we obtain that \begin{equation}\label{eq:reduceimone} \im zm_{\mu_{A}\boxtimes\mu_{B}}(z) \sim \im \Omega_{B}(z). \end{equation} On the other hand, by (\ref{eq:OmegaBound}), it is easy to verify that \begin{equation} \label{eq_derivativecontroldifference} \absv{\Omega_{\beta}(z)-\Omega_{B}(z)}\leq C\frac{N^{-1+\epsilon}}{\sqrt{\kappa+\eta}} \ll\left\{ \begin{array}{cl} \sqrt{\kappa+\eta} & \text{ if }E\in[E_{-},E_{+}],\\ \frac{\eta}{\sqrt{\kappa+\eta}} & \text{ if }E>E_{+}, \end{array} \right. \end{equation} holds uniformly in $z\in{\mathcal D}_{0}$ when $N$ is sufficiently large.
In light of (\ref{eq:reduceimone}) and (i) of Proposition \ref{prop:stablimit}, we find that \begin{equation}\label{eqeq:twotwo} \im z m_{\mu_A \boxtimes \mu_B}(z) \sim \im \Omega_{\beta}(z). \end{equation} On the other hand, by (\ref{eq_multiidentity}) and (i) of Proposition \ref{prop:stabN} \begin{equation*} \im (zm_{\mu_{A}\boxtimes\mu_{B}}(z)) =\eta\int\frac{x}{\absv{x-z}^{2}}\mathrm{d}(\mu_{A}\boxtimes\mu_{B})(x) \sim \eta\int\frac{1}{\absv{x-z}^{2}}\mathrm{d}(\mu_{A}\boxtimes\mu_{B})(x) =\im m_{\mu_{A}\boxtimes\mu_{B}}(z). \end{equation*} Together with (\ref{eqeq:twotwo}) and (i) of Proposition \ref{prop:stablimit}, we finish the proof of (ii). For the proof of (iii), due to similarity, we will focus our proof on $\mathcal{S}_{AB}(z)$ and briefly discuss the proof of $\mathcal{T}_A(z)$ and $\mathcal{T}_B(z).$ Notice that \begin{align}\label{eq_sabalphabetadifference} \mathcal{S}_{AB}&-\mathcal{S}_{\alpha \beta} \nonumber \\ &=z^2 \left[ L'_{\mu_A}(\Omega_B(z)) \left(L_{\mu_\beta}'(\Omega_{\alpha}(z))-L_{\mu_B}'(\Omega_A(z))\right)+L_{\mu_\beta}'(\Omega_{\alpha}(z)) \left(L_{\mu_\alpha}'(\Omega_{\beta}(z))-L_{\mu_A}'(\Omega_B(z)) \right) \right]. \end{align} Hence, we need to control the right-hand side of (\ref{eq_sabalphabetadifference}). On one hand, we find that \begin{equation} \label{eq:L'bound} \absv{L'_{\mu_{B}}(\Omega_{A}(z))-L_{\mu_{\beta}}'(\Omega_{\alpha}(z))}\leq \absv{L'_{\mu_{B}}(\Omega_{A}(z))-L'_{\mu_{B}}(\Omega_{\alpha}(z))}+\absv{L'_{\mu_{B}}(\Omega_{\alpha}(z)-L'_{\mu_{\beta}}(\Omega_{\alpha}(z))}. 
\end{equation} By definition, we have that \begin{align} \label{eq_expresenationl} \absv{L'_{\mu_{B}}(\Omega_{\alpha}(z))-L'_{\mu_{\beta}}(\Omega_{\alpha}(z))} =\frac{1}{\absv{\Omega_{\alpha}(z)}^{2}}\bigg\vert(\Omega_{\alpha}(z)m_{\mu_{B}}(\Omega_{\alpha}(z))+1)^{-2}\int\frac{x^{2}}{(x-\Omega_{\alpha}(z))^{2}}\mathrm{d}\mu_{B}(x)\\ -(\Omega_{\alpha}(z)m_{\mu_{\beta}}(\Omega_{\alpha}(z))+1)^{-2} \int\frac{x^{2}}{(x-\Omega_{\alpha}(z))^{2}}\mathrm{d}\mu_{\beta}(x)\bigg\vert. \nonumber \end{align} By using Lemma \ref{lem:rbound} with $f(x)=x^{2}(x-\Omega_{\alpha}(z))^{-2}$ and (iv) of Assumption \ref{assu_esd}, when $N$ is large enough, we have \begin{equation*} L'_{\mu_B}(\Omega_\alpha(z))=L'_{\mu_\beta}(\Omega_\alpha(z))+\rO(N^{-1+\epsilon}), \end{equation*} where we used (i) of Lemma \ref{lem:stabbound} and (i) of Proposition \ref{prop:stabN}. Together with (\ref{eq:LMder}), we can see that $L''_{\mu_B}(\omega) \sim 1$ when $\omega$ is around $\Omega_{\alpha}(z)$ and $N$ is large enough. Then by (\ref{eq_expresenationl}), it is easy to see that for some constant $C>0,$ we have \begin{equation*} \absv{L'_{\mu_{B}}(\Omega_{A}(z))-L'_{\mu_{B}}(\Omega_{\alpha}(z))} \leq C |\Omega_A(z)-\Omega_{\alpha}(z)|. \end{equation*} As a consequence, by (\ref{eq:OmegaBound}), we can bound (\ref{eq:L'bound}) by \begin{equation} \label{eq_Lderivativecontrol} \absv{L'_{\mu_{B}}(\Omega_{A}(z))-L'_{\mu_{\beta}}(\Omega_{\alpha}(z))} \leq C\left(\absv{\Omega_{\alpha}(z)-\Omega_{A}(z)}+{\boldsymbol d}\right)\leq C\frac{{\boldsymbol d}}{\sqrt{\kappa+\eta}}\ll\sqrt{\kappa+\eta}\sim{\mathcal S}_{\alpha\beta}(z), \end{equation} where we used the definition of ${\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ and chose $\epsilon<\gamma$ in the third inequality. Note that $L'_{\mu_A}( \Omega_B(z)) \sim 1$ by a discussion similar to (\ref{eq:LMder}). This completes the estimate of the first term on the right-hand side of (\ref{eq_sabalphabetadifference}); the second term can be treated in the same way.
In summary, we have that \begin{equation} \label{eq_compareboundsalphabeta} \absv{{\mathcal S}_{AB}(z)-{\mathcal S}_{\alpha\beta}(z)}\ll {\mathcal S}_{\alpha\beta}(z). \end{equation} Then we complete the proof of the control of ${\mathcal S}_{AB}(z)$ in (iii) using (ii) of Proposition \ref{prop:stablimit}. For the control of $\mathcal{T}_A$ and $\mathcal{T}_B,$ by the definition of $\mathcal{T}_{\alpha}$ in (\ref{eq_defn_talpha}), with an argument similar to (\ref{eq_compareboundsalphabeta}), we find that $\mathcal{T}_{A}-\mathcal{T}_{\alpha}=o(1)$ and $\mathcal{T}_{B}-\mathcal{T}_{\beta}=o(1).$ This concludes the proof using (ii) and (iii) of Proposition \ref{prop:stablimit}. Finally, we prove (iv). By applying $\frac{\mathrm{d}}{\mathrm{d} z}$ to the subordination system \eqref{eq_suborsystemPhi} with $\alpha$ and $\beta$ replaced by $A$ and $B$, we have { \begin{equation*} \begin{aligned} L'_{\mu_{A}}(\Omega_{B}(z))\Omega_{B}'(z) -\frac{\Omega_{A}'(z)}{z} +\frac{\Omega_{A}(z)}{z^{2}} =0, \\ L'_{\mu_{B}}(\Omega_{A}(z))\Omega_{A}'(z) -\frac{\Omega_{B}'(z)}{z}+\frac{\Omega_{B}(z)}{z^{2}}=0. \end{aligned} \end{equation*} } Equivalently, we rewrite the above equations as { \begin{equation*} \begin{pmatrix} zL'_{\mu_{B}}(\Omega_{A}(z)) & -1 \\ -1 & zL'_{\mu_{A}}(\Omega_{B}(z)) \end{pmatrix} \begin{pmatrix} \Omega_{A}'(z) \\ \Omega_{B}'(z) \end{pmatrix} =-\frac{1}{z}\begin{pmatrix} \Omega_{B}(z) \\ \Omega_{A}(z) \end{pmatrix}. \end{equation*} } Note that the determinant of the $(2\times 2)$-matrix on the left-hand side of the above equation is ${\mathcal S}_{AB}(z)$. Then we have { \begin{equation} \label{eq_brackter} \begin{pmatrix} \Omega_{A}'(z) \\ \Omega_{B}'(z) \end{pmatrix} =-\frac{1}{z{\mathcal S}_{AB}(z)} \begin{pmatrix} zL'_{\mu_{A}}(\Omega_{B}(z)) & 1 \\ 1 & zL'_{\mu_{B}}(\Omega_{A}(z)) \end{pmatrix} \begin{pmatrix} \Omega_{B}(z) \\ \Omega_{A}(z) \end{pmatrix}.
\end{equation} } By a discussion similar to (\ref{eq:LMder}), (i) of Lemma \ref{lem:stabbound} and (i) of Proposition \ref{prop:stabN}, we find that the entries of the matrix in the brackets on the right-hand side of (\ref{eq_brackter}) are bounded from above and below. Then by (iii) of Proposition \ref{prop:stabN}, we get the desired results for $\Omega_A'(z)$ and $\Omega_B'(z)$. For ${\mathcal S}_{AB}'(z),$ we first note that \begin{equation} \label{eq_sqbprime} {\mathcal S}_{AB}'(z) =\frac{2}{z}({\mathcal S}_{AB}+1)+z^{2}L_{\mu_{B}}''(\Omega_{A}(z))L_{\mu_{A}}'(\Omega_{B}(z))\Omega_{A}'(z) +z^{2}L_{\mu_{B}}'(\Omega_{A}(z))L_{\mu_{A}}''(\Omega_{B}(z))\Omega_{B}'(z). \end{equation} Since \begin{equation*} \frac{2}{z}(\mathcal{S}_{AB}+1)=2z L'_{\mu_A} (\Omega_B(z))L'_{\mu_B}(\Omega_A(z)), \end{equation*} by a discussion similar to (\ref{eq:LMder}), we find that \begin{equation*} \frac{2}{z}({\mathcal S}_{AB}+1) \sim 1 \leq C \frac{1}{\sqrt{\kappa+\eta}}, \ z \in {\mathcal D}_\tau(\eta_L, \eta_U). \end{equation*} Similarly, we can bound the other terms on the right-hand side of (\ref{eq_sqbprime}) using the first two estimates of (iv) of Proposition \ref{prop:stabN}. This finishes the proof of the bound for ${\mathcal S}'_{AB}(z).$ \end{proof} \subsection{Closeness of subordination functions: proof of Lemma \ref{lem:OmegaBound}}\label{sec:omegabound} In this section, we prove Lemma \ref{lem:OmegaBound}.
By the construction of (\ref{eq_suborsystemPhi}), we find that $\Omega_A(z)$ and $\Omega_B(z)$ are determined by the equation $$\Phi_{AB}(\Omega_A, \Omega_B,z)=0.$$ Equivalently, $\Omega_A$ and $\Omega_B$ are solutions of the equations \begin{align}\label{eq:SuborPerturb} \Phi_{\alpha}(\omega_{1},\omega_{2},z)=r_{1}(\omega_{2}), \ \Phi_{\beta}(\omega_{1},\omega_{2},z)=r_{2}(\omega_{1}), \end{align} where $\Phi_{\alpha}$ and $\Phi_{\beta}$ are defined in \eqref{eq:def_Phi_ab} and \begin{align}\label{eq_defnra} r_{1}(\omega_{2})&\mathrel{\mathop:}= \frac{M_{\mu_{\alpha}}(\omega_2)-M_{\mu_{A}}(\omega_2)}{\omega_2}, & r_{2}(\omega_{1})&\mathrel{\mathop:}= \frac{M_{\mu_{\beta}}(\omega_1)-M_{\mu_{B}}(\omega_1)}{\omega_1}. \end{align} In what follows, we use the notations $r_A(z) \equiv r_1(\Omega_B(z))$ and $r_B(z) \equiv r_2(\Omega_A(z)).$ Ideally, both $r_A(z)$ and $r_B(z)$ should be small, and we aim to bound $\absv{\Omega_{A}(z)-\Omega_{\alpha}(z)}$ and $\absv{\Omega_{B}(z)-\Omega_{\beta}(z)}$ in terms of the norm of $r(z)\mathrel{\mathop:}= (r_{A}(z),r_{B}(z))$. Since the lower bounds of $\text{dist}(\Omega_A,\supp\mu_\beta)$ and $\text{dist}(\Omega_B,\supp\mu_\alpha)$ play an important role in our proof, in light of (\ref{eq_omegabbound1}), we split the proof of Lemma \ref{lem:OmegaBound} into two steps. First, we start from the regime in which $\eta=\im z$ is large. Second, we use a bootstrapping argument to extend the result to the whole domain ${\mathcal D}_0$. We mention that when $\eta$ is large, many critical quantities can be bounded from above and below. For instance, when $\eta=O(1),$ we assert that $\Omega_A(z)$ and $\Omega_B(z)$ are bounded from below in light of (\ref{eq_omegabbound1}). In the first step, we will need the following lemma. Its proof relies on a consequence of the Kantorovich theorem (cf. Lemma \ref{lem:Kantorovich_appl}), which provides an upper bound on the deviation of the initial value from the exact solution in an application of Newton's method.
In our case, we consider $(\Omega_{A}(z),\Omega_{B}(z))$ as the initial point to find the solution $(\Omega_{\alpha}(z),\Omega_{\beta}(z))$ of $\Phi_{\alpha\beta}(\cdot,\cdot,z)=0$. The upper bound in Kantorovich's theorem is given in terms of the first and second derivatives of $\Phi_{\alpha\beta}(\cdot,\cdot,z).$ We further show that the resulting deviation can be bounded by $2\norm{r(z)}$. The lemma will be proved in Appendix \ref{sec_additional}. \begin{lem}\label{lem:OmegaBound1} There exists a constant $\eta_{1}>0$ such that for all $z\in\C_{+}$ with $\re z\in [E_{+}-\tau,\tau^{-1}]$ and $\im z=\eta_{1}$ the following hold: \begin{align*} \absv{\Omega_{\alpha}(z)-\Omega_{A}(z)}&\leq 2\norm{r(z)}, & \absv{\Omega_{\beta}(z)-\Omega_{B}(z)}&\leq 2\norm{r(z)}, \end{align*} for all sufficiently large $N$. \end{lem} For the second step, we will need the following lemma to extend the results in Lemma \ref{lem:OmegaBound1} to smaller $\im z.$ Denote $K_{1}\mathrel{\mathop:}= 4K_{4}$ and $K_{2}\mathrel{\mathop:}= (4K_{3}K_{4})^{-1}$, where \begin{equation*} K_{3}\mathrel{\mathop:}= 27\kappa_{0}^{-3}\max(\widehat{\mu}_{\alpha}(\R_{+}),\widehat{\mu}_{\beta}(\R_{+})), \end{equation*} with $\kappa_0$ defined as \begin{equation*} \kappa_0=\min_{z \in \mathbb{C}_+} \{\text{dist}(\Omega_{\alpha}(z), \supp \mu_\beta ), \ \text{dist}(\Omega_{\beta}(z), \supp \mu_\alpha ) \}, \end{equation*} and \begin{equation*} K_{4}\mathrel{\mathop:}= \sup\left\{\absv{z}+\absv{z^{2}L_{\mu_{\alpha}}'(\Omega_{\beta}(z))},\absv{z}+\absv{z^{2}L_{\mu_{\beta}}'(\Omega_{\alpha}(z))}:z\in{\mathcal D}_{\tau}(0,\eta_{1})\right\}, \end{equation*} where $\eta_{1}$ is introduced in Lemma \ref{lem:OmegaBound1}. It is easy to see that both $K_3$ and $K_4$ are positive and finite. We next state the result. Recall (\ref{eq_defn_talpha}).
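To make the Newton--Kantorovich mechanism behind Lemma \ref{lem:OmegaBound1} concrete, the following self-contained numerical sketch (in Python) runs Newton's method on a made-up two-dimensional system standing in for $\Phi_{\alpha\beta}(\cdot,\cdot,z)=0$; the functions `Phi`, `jac` and all numbers are illustrative and do not come from the paper. It checks that the distance from the initial point to the Newton limit is bounded by twice the initial residual, mirroring the bound $2\norm{r(z)}$.

```python
import numpy as np

# Made-up 2x2 nonlinear system standing in for the subordination system
# Phi_{alpha beta}(., ., z) = 0; Phi, jac and all numbers are illustrative.
def Phi(w):
    w1, w2 = w
    return np.array([w1**2 + w2 - 3.0, w1 + w2**2 - 3.0])

def jac(w):
    # Jacobian matrix of Phi, used in each Newton step
    w1, w2 = w
    return np.array([[2.0 * w1, 1.0], [1.0, 2.0 * w2]])

def newton(w0, tol=1e-12, maxit=50):
    # Newton's method started from w0, as in the Kantorovich framework
    w = np.array(w0, dtype=float)
    for _ in range(maxit):
        step = np.linalg.solve(jac(w), Phi(w))
        w = w - step
        if np.linalg.norm(step) < tol:
            break
    return w

# (Omega_A, Omega_B) plays the role of the initial point; the Newton limit
# plays the role of the exact solution (Omega_alpha, Omega_beta).
w0 = np.array([1.1, 0.9])
w_star = newton(w0)
residual = np.linalg.norm(Phi(w0))       # analogue of ||r(z)||
deviation = np.linalg.norm(w0 - w_star)
assert deviation <= 2.0 * residual       # mirrors the bound 2||r(z)||
```

The factor $2$ here is the same as in the lemma; for a generic system the admissible factor depends on the first and second derivatives of $\Phi$ near the initial point, which is exactly what Kantorovich's theorem quantifies.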
\begin{lem}\label{lem:OmegaBound2} For sufficiently large $N$ and $z_{0}\in{\mathcal D}_{\tau}(0,\eta_{1}),$ where $\eta_{1}$ is introduced in Lemma \ref{lem:OmegaBound1}, suppose that there exists some $q>0$ such that the following hold: \begin{itemize} \item[(i).] $\displaystyle q\leq \frac{1}{3}\kappa_{0};$ \item[(ii).] $\displaystyle \absv{\Omega_{A}(z_{0})-\Omega_{\alpha}(z_{0})}\leq q, \quad \absv{\Omega_{B}(z_{0})-\Omega_{\beta}(z_{0})}\leq q; $ \item[(iii).] $\displaystyle q\leq \frac{1}{2}K_{2}{\mathcal S}_{\alpha\beta}(z_{0})$. \end{itemize} Then we have \begin{equation*} \absv{\Omega_{A}(z_{0})-\Omega_{\alpha}(z_{0})}+\absv{\Omega_{B}(z_{0})-\Omega_{\beta}(z_{0})}\leq K_{1}\frac{\norm{r(z_{0})}}{\absv{{\mathcal S}_{\alpha\beta}(z_{0})}}. \end{equation*} \end{lem} The proof of Lemma \ref{lem:OmegaBound2} will be provided in Appendix \ref{sec_additional}. Given these two lemmas, we proceed to finish the proof of Lemma \ref{lem:OmegaBound} using the two-step strategy described earlier. \begin{proof}[\bf Proof of Lemma \ref{lem:OmegaBound}]{ We start with the proof of (\ref{eq:OmegaBound}). We first prove that there exist a constant $C_{0}>0$ and $N_{0}\in\N$ such that the following holds: if $N\geq N_{0}$ and $z\in{\mathcal D}_{0}$ satisfy the assumptions of Lemma \ref{lem:OmegaBound2}, then \begin{equation*} K_{1}\frac{\norm{r(z)}}{\absv{{\mathcal S}_{\alpha\beta}(z)}}\leq C_{0}\frac{N^{-1+\epsilon/2}}{\sqrt{\kappa+\eta}}. \end{equation*} Note that \eqref{eq:stabbound} ensures that at least one of the following inequalities holds for any $z\in{\mathcal D}_{0}$: \begin{align*} \re \Omega_{\beta}(z)&\leq E_{-}^{\alpha}-\frac{\kappa_{0}}{2},& \re \Omega_{\beta}(z)&\geq E_{+}^{\alpha}+\frac{\kappa_{0}}{2},& \im \Omega_{\beta}(z)&\geq \frac{\kappa_{0}}{2}.
\end{align*} Moreover, by assumptions (i) and (ii) of Lemma \ref{lem:OmegaBound2}, $\Omega_{B}(z)$ satisfies at least one of \begin{align}\label{eq_boundmore} \re \Omega_{B}(z)&\leq E_{-}^{\alpha}-\frac{\kappa_{0}}{6}, & \re \Omega_{B}(z)&\geq E_{+}^{\alpha}+\frac{\kappa_{0}}{6}, & \im \Omega_{B}(z)&\geq \frac{\kappa_{0}}{6}. \end{align} Thus, by Lemma \ref{lem:rbound}, we have \begin{equation} \label{eq:Mdiff} \Absv{\int\frac{x\mathrm{d}(\mu_{A}-\mu_{\alpha})(x)}{x-\Omega_{B}(z)}}\leq C{\boldsymbol d}, \end{equation} where $C$ is some constant. Furthermore, we have \begin{equation} \label{eq:MLowerBound} \Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}\mu_{\alpha}(x)}\geq\Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}\mu_{A}(x)}-C{\boldsymbol d}=\Absv{\int\frac{x}{x-z}\mathrm{d}(\mu_{A}\boxtimes\mu_{B})(x)}-C{\boldsymbol d}\geq c \end{equation} for some positive constant $c>0.$ Together with (\ref{eq_defnra}), we find that $|r_A(z)|\leq C{\boldsymbol d}.$ Similarly, we can show $|r_B(z)| \leq C {\boldsymbol d}$ and consequently, $\|r(z)\| \leq C {\boldsymbol d}.$ This implies \begin{equation} \label{eq_finalrepresenationbound} K_{1}\frac{\norm{r(z)}}{{\mathcal S}_{\alpha\beta}(z)}\leq C_{0} \frac{N^{-1+\epsilon/2}}{\sqrt{\kappa+\eta}}. \end{equation} Now we prove \eqref{eq:OmegaBound} using the result above. First, we fix $N\geq N_{0}$ and $z\in{\mathcal D}_{0}$. Define a finite decreasing sequence $(\eta_{1},\cdots,\eta_{M})$ of positive numbers, starting from $\eta_{1}$ given in Lemma \ref{lem:OmegaBound1} and ending with $\eta_{M}=\eta$, such that \begin{equation*} \eta_{i}-\eta_{i+1}\leq L\mathrel{\mathop:}= \eta^{2}N^{(-1+(1-\xi)\epsilon)/2}. \end{equation*} Note that the length $M$ of this sequence may depend on $N$. Then it suffices to prove that $z_{i}=E+\mathrm{i}\eta_{i}$ satisfies the assumptions of Lemma \ref{lem:OmegaBound2} for each $i=1,\cdots,M$. Suppose inductively that the result holds for some $i$.
By Lemmas \ref{lem:reprM} and \ref{lem:reprMemp} we have \begin{align*} &\absv{\Omega_{\alpha}(z_{i})-\Omega_{\alpha}(z_{i+1})}\leq C_{1}\eta^{-2}L,& &\absv{\Omega_{A}(z_{i})-\Omega_{A}(z_{i+1})}\leq C_{1}\eta^{-2}L, \end{align*} for some constant $C_{1}>0$ that depends only on $\mu_{\alpha}$, and the same bound holds also for $\Omega_{\beta}$ and $\Omega_{B}$. Now we see that \begin{align*} \absv{\Omega_{A}(z_{i+1})-\Omega_{\alpha}(z_{i+1})}+\absv{\Omega_{B}(z_{i+1})-\Omega_{\beta}(z_{i+1})}\leq \absv{\Omega_{A}(z_{i})-\Omega_{\alpha}(z_{i})}+\absv{\Omega_{B}(z_{i})-\Omega_{\beta}(z_{i})}+2C_{1}\eta^{-2}L\\ \leq K_{1}\frac{\norm{r(z_{i})}}{\absv{{\mathcal S}_{\alpha\beta}(z_{i})}}+2C_{1}\eta^{-2}L \leq C_{0}\frac{N^{-1+\epsilon/2}}{\sqrt{\kappa+\eta_{i}}}+2C_{1}N^{(-1+(1-\xi)\epsilon)/2} \leq (C_{0}+2C_{1})N^{(-1+(1-\xi)\epsilon)/2}. \end{align*} Thus, with the choice \begin{equation*} q=(C_{0}+2C_{1})N^{(-1+(1-\xi)\epsilon)/2}, \end{equation*} we can enlarge $N_{0}$ so that the following hold: \begin{align*} & q=(C_{0}+2C_{1})N^{(-1+(1-\xi)\epsilon)/2}\leq \frac{1}{3}\kappa_{0},\\ & \frac{q}{\absv{{\mathcal S}_{\alpha\beta}(z_{i+1})}}\leq \frac{C(C_{0}+2C_{1})}{\sqrt{\kappa+\eta_{i+1}}}N^{(-1+(1-\xi)\epsilon)/2}\leq N^{(1-2\xi)\epsilon/2}\leq CN^{-\epsilon/2}\leq \frac{K_{2}}{2}, \end{align*} where we used $\xi>1$. Therefore $z_{i+1}$ satisfies the assumptions of Lemma \ref{lem:OmegaBound2}. Since $z_{1}$ automatically satisfies the assumptions of Lemma \ref{lem:OmegaBound2} by Lemma \ref{lem:OmegaBound1}, we conclude the proof of \eqref{eq:OmegaBound} by induction. } We next prove \eqref{eq:OmegaImBound}. The proof is similar to that of Lemma \ref{lem:OmegaBound2} by considering the imaginary parts.
The key ingredients are \eqref{eq:OmegaBound} and the fact that \begin{equation*} \im r_{A}(z)\leq C\im \Omega_{B}(z)\left({\boldsymbol d} +\Absv{ \int\frac{x^{2}}{\absv{x-\Omega_{B}(z)}^{2}}\mathrm{d}(\mu_{\alpha}-\mu_{A})(x)}\right)\leq C\im\Omega_{B}(z){\boldsymbol d}, \end{equation*} where we use \eqref{eq:OmegaBound}, \eqref{eq:Mdiff}, \eqref{eq:MLowerBound}, the identity $\im \frac{xz}{x-z}=\im z\frac{x^{2}}{\absv{x-z}^{2}}$, and the fact that $\im r_{A}(z)$ is given by \begin{equation*} \im r_{A}(z)= \im\left(\int\frac{x\Omega_{B}(z)}{x-\Omega_{B}(z)}\mathrm{d}\mu_{\alpha}(x)\right)^{-1}-\im\left(\int\frac{x\Omega_{B}(z)}{x-\Omega_{B}(z)}\mathrm{d}\mu_{A}(x)\right)^{-1}. \end{equation*} Similar results hold for $\im r_B(z).$ By taking the imaginary parts of the perturbed equations \eqref{eq:SuborPerturb}, we have \begin{equation*} \begin{aligned} \im L_{\mu_{\alpha}}(\Omega_{B}(z))-\im \frac{\Omega_{A}(z)}{z}=\im r_{A}(z),\\ \im L_{\mu_{\beta}}(\Omega_{A}(z))-\im\frac{\Omega_{B}(z)}{z}=\im r_{B}(z). \end{aligned} \end{equation*} The rest of the proof follows from a discussion similar to the proof of Lemma \ref{lem:OmegaBound2}. We omit the details here. \end{proof} \section{Proof of Theorems \ref{thm_outlier} and \ref{thm_outlieereigenvector}} \label{sec_proofofspikedmodel} In this section, we prove the main results regarding the spiked invariant model in Section \ref{sec_spikedmodel}. For the convenience of our proof, we introduce the following linearization form for $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U) \cup \mathcal{D}_{\tau}(\eta_U)$ defined in (\ref{eq_fundementalset}) and (\ref{eq_fundementalsetoutlier}).
Recall $Y=A^{1/2}U B^{1/2}.$ Define $\mathbf{H} \equiv \mathbf{H}(z)$ by \begin{equation*} \mathbf{H}(z) := \begin{pmatrix} 0 & z^{1/2} Y \\ z^{1/2}Y^* & 0 \end{pmatrix}, \end{equation*} and $\mathbf{G}(z)=(\mathbf{H}-z)^{-1}.$ By the Schur complement formula, it is easy to see that \begin{equation}\label{eq_schurcomplement} \mathbf{G}(z)= \begin{pmatrix} \widetilde{G}(z) & z^{-1/2} \widetilde{G}(z) Y \\ z^{-1/2}Y^* \widetilde{G}(z) & \widetilde{\mathcal{G}}(z) \end{pmatrix}, \end{equation} where we recall the definitions in (\ref{defn_greenfunctions}). For simplicity of notation, we define the index sets \begin{equation*} \mathcal{I}_1:=\{1, \cdots, N\}, \ \mathcal{I}_2:=\{N+1, \cdots, 2N\}, \ \mathcal{I}:= \mathcal{I}_1 \cup \mathcal{I}_2. \end{equation*} Then we relabel the indices of the matrices according to \begin{equation*} U=(U_{i \mu}: i \in \mathcal{I}_1, \mu \in \mathcal{I}_2), \ A=(A_{ij}: i,j \in \mathcal{I}_1), \ B=(B_{\mu \nu}: \mu, \nu \in \mathcal{I}_2). \end{equation*} In the proofs of this section, we will consistently use the Latin letters $i,j \in \mathcal{I}_1$ and Greek letters $\mu, \nu \in \mathcal{I}_2$. We define the $2N \times 2N$ diagonal matrix $\Theta \equiv \Theta(z)$ by setting \begin{equation*} \Theta_{ii}=\frac{1}{z} \frac{\Omega_B(z)}{a_i-\Omega_B(z)}, \ \Theta_{\mu \mu}=\frac{1}{z} \frac{\Omega_A(z)}{b_\mu-\Omega_A(z)}. \end{equation*} Similarly to Theorems \ref{thm:main} and \ref{thm_outlierlocallaw}, we have the following controls for the resolvent $\mathbf{G}(z)$ uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U) \cup \mathcal{D}_{\tau}(\eta_U).$ They are the key technical input for this section. We postpone the proof to Section \ref{sec_suboutlierlocallaws}.
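For a quick numerical sanity check of the block formula (\ref{eq_schurcomplement}), the following Python sketch verifies it on a small made-up matrix $Y$, using $\widetilde{G}(z)=(YY^{*}-z)^{-1}$ (our reading of the notation in (\ref{defn_greenfunctions}); the size, seed, and $z$ are arbitrary):

```python
import numpy as np

# Numerical check of the 2x2 block structure of G(z) = (H - z)^{-1}; the
# matrix Y below is made up, and Gtil = (Y Y^* - z)^{-1} is our reading of
# the Green function \widetilde{G}(z).
rng = np.random.default_rng(1)
N = 5
z = 2.0 + 0.5j                               # spectral parameter, Im z > 0
Y = rng.normal(size=(N, N)) / np.sqrt(N)     # stand-in for A^{1/2} U B^{1/2}

sz = np.sqrt(z)
H = np.block([[np.zeros((N, N)), sz * Y],
              [sz * Y.conj().T, np.zeros((N, N))]])
G = np.linalg.inv(H - z * np.eye(2 * N))

Gtil = np.linalg.inv(Y @ Y.conj().T - z * np.eye(N))
assert np.allclose(G[:N, :N], Gtil)                    # top-left block
assert np.allclose(G[:N, N:], Gtil @ Y / sz)           # top-right block
assert np.allclose(G[N:, :N], Y.conj().T @ Gtil / sz)  # bottom-left block
```

The check amounts to the algebraic identity that the top-left Schur complement of $\mathbf{H}(z)-z$ is $-z+z^{1/2}Y\,z^{-1}\,z^{1/2}Y^{*}=YY^{*}-z$.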
\begin{prop}\label{prop_linearlocallaw} Under the assumptions of Theorem \ref{thm:main}, we have \begin{equation*} \sup_{1 \leq k ,l \leq 2N}|(\mathbf{G}(z)-\Theta(z))_{kl}| \prec \sqrt{\frac{\im m_{\mu_A \boxtimes \mu_B}}{N \eta}}+\frac{1}{N \eta}, \end{equation*} holds uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ Moreover, when the assumptions of Theorem \ref{thm_outlierlocallaw} hold, we have that \begin{equation*} \sup_{1 \leq k ,l \leq 2N}|(\mathbf{G}(z)-\Theta(z))_{kl}| \prec N^{-1/2}(\kappa+\eta)^{-1/4}, \ \kappa=|z-E_+|, \end{equation*} holds uniformly in $z \in \mathcal{D}_{\tau}(\eta_U).$ \end{prop} \subsection{Eigenvalue statistics: proof of Theorem \ref{thm_outlier}} We first provide the following lemma which gives the master equation for the locations of the outlier eigenvalues. Denote \begin{equation}\label{eq_defnudefnd} \mathbf{U}= \begin{pmatrix} \mathbf{E}_r & 0 \\ 0 & \mathbf{E}_s \end{pmatrix}, \ \bm{\mathcal{D}}= \begin{pmatrix} D^a (D^a+1)^{-1} & 0 \\ 0 & D^b (D^b+1)^{-1} \end{pmatrix}, \end{equation} where $\mathbf{E}_r=(\mathbf{e}_1, \cdots, \mathbf{e}_r)$, \ $\mathbf{E}_s=(\mathbf{e}_1, \cdots, \mathbf{e}_s)$, $D^a=\operatorname{diag}(d_1^a, \cdots, d_r^a)$ and $D^b=\operatorname{diag}(d_1^b, \cdots, d_s^b).$ For notational convenience, we set $\mathcal{Q}_1= \widetilde{H}$ as defined in (\ref{defn_eq_matrices}). \begin{lem}\label{lem_evmaster} If $x \neq 0$ is not an eigenvalue of $\mathcal{Q}_1,$ then it is an eigenvalue of $\widehat{\mathcal{Q}}_1$ if and only if \begin{equation*} \det (\bm{\mathcal{D}}^{-1}+x\mathbf{U}^* \mathbf{G}(x) \mathbf{U} )=0. \end{equation*} \end{lem} \begin{proof} See Lemma S.4.1 of \cite{DYaos}. 
\end{proof} Heuristically, by Proposition \ref{prop_linearlocallaw} and Lemma \ref{lem_evmaster}, an outlier location $x > E_+$ should satisfy the condition that \begin{equation}\label{eq_determinantexplicitform} \prod_{i=1}^r \left( \frac{d_i^a+1}{d_i^a}+\frac{\Omega_B(x)}{a_i-\Omega_B(x)} \right) \prod_{j=1}^s \left( \frac{d_j^b+1}{d_j^b}+\frac{\Omega_A(x)}{b_j-\Omega_A(x)} \right)=0. \end{equation} We write \begin{equation*} \frac{d_i^a+1}{d_i^a}+\frac{\Omega_B(x)}{a_i-\Omega_B(x)}=\frac{1}{d_i^a}-\frac{a_i}{\Omega_B(x)-a_i}. \end{equation*} Since $(\Omega_B(x)-a_i)^{-1}$ is decreasing (cf. (\ref{eq_derivative})), by (\ref{eq_boundedomega}), we find that there exists some $x>E_+$ with \begin{equation*} \frac{d_i^a+1}{d_i^a}+\frac{\Omega_B(x)}{a_i-\Omega_B(x)}=0 \end{equation*} if and only if \begin{equation*} \frac{d_i^a+1}{d_i^a}+\frac{\Omega_B(E_+)}{a_i-\Omega_B(E_+)}<0, \end{equation*} which implies that $\widehat{a}_i>\Omega_B(E_+).$ A similar calculation holds for $\widehat{b}_j, 1 \leq j \leq s. $ We now proceed to prove Theorem \ref{thm_outlier}. We will follow the basic strategy summarized at the beginning of \cite[Appendix S.4]{DYaos} for the i.i.d. model, i.e., the model in which the entries of $U$ are centered i.i.d. random variables with variance $N^{-1}$. We emphasize that similar strategies have been employed in \cite[Section 4]{Bloemendal2016} for spiked covariance matrices and in \cite[Section 6]{KY13} for finite rank deformations of Wigner matrices.
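The determinant condition of Lemma \ref{lem_evmaster} can be seen in miniature in the rank-one case: for a Hermitian $H$ and a spike $d\,vv^{*}$, a point $x$ outside the spectrum of $H$ is an eigenvalue of $H+d\,vv^{*}$ precisely when $1/d+v^{*}(H-x)^{-1}v=0$. The following Python sketch (with $H$, $v$, $d$ made up for the demonstration; they are not the matrices of the theorem) checks this numerically:

```python
import numpy as np

# Rank-one toy version of the master equation for outlier locations;
# H, v, d are made up and are not the matrices of the theorem.
rng = np.random.default_rng(0)
n, d = 6, 3.0
H = np.diag(np.linspace(0.0, 1.0, n))   # unperturbed Hermitian matrix
v = rng.normal(size=n)
v /= np.linalg.norm(v)                  # unit spike direction

spiked = H + d * np.outer(v, v)
x = np.linalg.eigvalsh(spiked)[-1]      # the outlier created by the spike

# The secular (master) equation holds at the outlier location ...
secular = 1.0 / d + v @ np.linalg.solve(H - x * np.eye(n), v)
assert abs(secular) < 1e-8
# ... and is bounded away from zero at a point far from the outlier.
assert abs(1.0 / d + v @ np.linalg.solve(H - 2.0 * x * np.eye(n), v)) > 1e-3
```

In the theorem itself the scalar resolvent $v^{*}(H-x)^{-1}v$ is replaced by the deterministic approximation $x\,\mathbf{U}^{*}\Theta(x)\mathbf{U}$, which is what produces the explicit products in (\ref{eq_determinantexplicitform}).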
\begin{proof}[\bf Proof of Theorem \ref{thm_outlier}] By Proposition \ref{prop_linearlocallaw} and Theorem \ref{thm_rigidity}, for any fixed $\epsilon>0,$ we can choose a high probability event $\Xi \equiv \Xi(\epsilon)$ on which the following estimates hold: \begin{equation}\label{eq_proofinsidelocallaw} \mathbf{1}(\Xi) \norm{\mathbf{U}^*(\mathbf{G}(z)-\Theta(z))\mathbf{U}} \leq N^{\epsilon/2} \left( \sqrt{\frac{\im m_{\mu_A \boxtimes \mu_B}}{N \eta}}+\frac{1}{N \eta} \right), \ z \in \mathcal{D}_\tau(\eta_L, \eta_U) ; \end{equation} \begin{equation}\label{eq_proofoutlierlocallaw} \mathbf{1}(\Xi) \norm{\mathbf{U}^* (\mathbf{G}(z)-\Theta(z))\mathbf{U}} \leq N^{-1/2+\epsilon/2} (\kappa+\eta)^{-1/4}, \ z \in \mathcal{D}_{\tau}(\eta_U); \end{equation} \begin{equation}\label{eq_proofoutlier} \mathbf{1}(\Xi)|\lambda_i(\mathcal{Q}_1)-E_+| \leq N^{-2/3+\epsilon}, \ 1 \leq i \leq \varpi. \end{equation} We will restrict our proof to $\Xi$ in what follows, and hence the discussion below will be entirely deterministic. We first prepare some notations following the proof of \cite[Theorem 3.6]{DYaos}. For any fixed constant $\epsilon>0,$ we denote \begin{equation*} \mathcal{O}_{\epsilon}^{(a)}:=\left\{i: \widehat{a}_i-\Omega_B(E_+) \geq N^{-1/3+\epsilon} \right\}, \ \mathcal{O}_{\epsilon}^{(b)}:=\left\{\mu: N+1 \leq \mu \leq 2N, \widehat{b}_\mu-\Omega_A(E_+) \geq N^{-1/3+\epsilon} \right\}. \end{equation*} Hereafter, we use $\widehat{b}_{\mu}:=\widehat{b}_{\mu-N}$ for $\mu \in \mathcal{I}_2$. Recall that the eigenvalues of $\widehat{A}$ and $\widehat{B}$ are arranged in decreasing order. It is easy to see that \begin{equation*} \sup_{\mu \notin \mathcal{O}_{\epsilon}^{(b)}} (\widehat{b}_{\mu}-\Omega_A(E_+)) \lesssim N^{-1/3+\epsilon}, \ \inf_{\mu \in \mathcal{O}_{\epsilon}^{(b)}} (\widehat{b}_{\mu}-\Omega_A(E_+)) \gtrsim N^{-1/3+\epsilon}.
\end{equation*} Moreover, since we are mainly interested in the outlying and extremal non-outlier eigenvalues, we use the convention that $\Omega_{B}^{-1}(\widehat{a}_i)=E_+, i \geq r$ and $\Omega_A^{-1}(\widehat{b}_\mu)=E_+, \mu \geq N+s.$ Throughout the proof, we will need the following estimate, which follows from (\ref{eq_squareroot}) and (\ref{eq:OmegaBound}): \begin{equation}\label{eq_edgeestimationrough} \Omega_B^{-1}(\widehat{a}_i)-E_+ \sim (\widehat{a}_i-\Omega_B(E_+))^2. \end{equation} Indeed, when $\Omega_B^{-1}(\widehat{a}_i)-E_+ \leq \varsigma_1$ for some sufficiently small constant $0<\varsigma_1<1,$ using (\ref{eq_squareroot}), (\ref{eq:OmegaBound}) and the fact that $\Omega_B(\cdot)$ is monotone increasing, we readily see that \begin{equation}\label{eq_smallerregion} \widehat{a}_i \sim \Omega_B(E_+)+\gamma \sqrt{\Omega_B^{-1}(\widehat{a}_i)-E_+}+N^{-1/2+\epsilon}, \end{equation} where $\gamma>0$ is some universal constant. This immediately implies (\ref{eq_edgeestimationrough}) using Assumption \ref{assum_outlier}. On the other hand, when $\Omega^{-1}_B(\widehat{a}_i)-E_+ \geq \varsigma_1,$ since $\Omega_B(\cdot)$ is increasing, we obtain that \begin{equation*} \widehat{a}_i \geq \Omega_B(E_++\varsigma_1) \sim \Omega_B(E_+)+\gamma' \sqrt{\varsigma_1}+N^{-1/2+\epsilon}. \end{equation*} This proves the claim (\ref{eq_edgeestimationrough}). For any $i \notin \mathcal{O}_{\epsilon}^{(a)}$ and $\mu \in \mathcal{O}_{\epsilon}^{(b)},$ by (\ref{eq_edgeestimationrough}) we have $ \Omega_B^{-1}(\widehat{a}_i) \leq E_++ N^{-2/3+2\epsilon} \leq \Omega_A^{-1}(\widehat{b}_{\mu}),$ and therefore \begin{equation*} \sup_{i \notin \mathcal{O}_{\epsilon}^{(a)}} \Omega_B^{-1}(\widehat{a}_i) \leq \inf_{\mu \in \mathcal{O}_{\epsilon}^{(b)}} \Omega_A^{-1}(\widehat{b}_{\mu})+N^{-2/3+\epsilon}.
\end{equation*} Similarly, we have that \begin{equation*} \ \sup_{\mu \notin \mathcal{O}_{\epsilon}^{(b)}} \Omega_A^{-1}(\widehat{b}_\mu) \leq \inf_{i \in \mathcal{O}_{\epsilon}^{(a)}} \Omega_B^{-1}(\widehat{a}_i)+N^{-2/3+\epsilon}. \end{equation*} An advantage of the above labelling is that the largest outliers of $\widehat{\mathcal{Q}}_1$ can be labelled according to $i \in \mathcal{O}_\epsilon^{(a)}$ and $\mu \in \mathcal{O}_{\epsilon}^{(b)}.$ Analogously to (S.9) and (S.10) of \cite{DYaos}, we find that to prove Theorem \ref{thm_outlier}, it suffices to prove that for arbitrarily small constant $\epsilon>0,$ there exists some constant $C>0$ such that \begin{equation}\label{eq_reduceproofoutlier} \mathbf{1}(\Xi) \left| \widehat{\lambda}_{\pi_a(i)}-\Omega_B^{-1}(\widehat{a}_i) \right| \leq C N^{-1/2+2\epsilon} \Delta_1(\widehat{a}_i), \ \ \mathbf{1}(\Xi) \left| \widehat{\lambda}_{\pi_b(\mu)}-\Omega_A^{-1}(\widehat{b}_\mu) \right| \leq C N^{-1/2+2\epsilon} \Delta_2(\widehat{b}_\mu), \end{equation} for all $i \in \mathcal{O}_{4 \epsilon}^{(a)}$ and $\mu \in \mathcal{O}_{4\epsilon}^{(b)},$ where we used the short-hand notations \begin{equation}\label{eq_shorthandnotationsdelta} \Delta_1(\widehat{a}_i):=(\widehat{a}_i-\Omega_B(E_+))^{1/2}, \ \Delta_2(\widehat{b}_\mu):=(\widehat{b}_\mu-\Omega_A(E_+))^{1/2} , \end{equation} and we let $\pi_b$ be defined on the set $\{N+1,\cdots,2N\}$ for notational convenience, and \begin{equation}\label{eq_reducebulkvalue} \mathbf{1}(\Xi)\left| \widehat{\lambda}_{\pi_a(i)}-E_+ \right| \leq CN^{-2/3+12\epsilon}, \ \mathbf{1}(\Xi)\left| \widehat{\lambda}_{\pi_b(\mu)}-E_+ \right| \leq CN^{-2/3+12\epsilon}, \end{equation} for all $i \in \left\{1,2,\cdots,r \right\} \backslash \mathcal{O}_{4 \epsilon}^{(a)}$ and $\mu \in \left\{N+1,\cdots, N+s \right\} \backslash \mathcal{O}_{4 \epsilon}^{(b)}.$ We now follow the strategy of \cite[Theorem 3.6]{DYaos} to complete our proof. 
We divide our discussion into four steps for the convenience of the reader. We will focus on explaining Step 1 since it differs the most from its counterpart in the proof of \cite[Theorem 3.6]{DYaos}, and briefly sketch Steps 2--4. \noindent{\bf Step 1:} For each $1 \leq i \leq r^+,$ we define the permissible intervals \begin{equation*} \mathrm{I}_i^{(a)}:=\left[\Omega_B^{-1}(\widehat{a}_i)-N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i), \ \Omega_B^{-1}(\widehat{a}_i)+N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i) \right]. \end{equation*} Similarly, for each $1 \leq \mu-N \leq s^+,$ we denote \begin{equation*} \mathrm{I}_\mu^{(b)}:=\left[\Omega_A^{-1}(\widehat{b}_\mu)-N^{-1/2+\epsilon} \Delta_2(\widehat{b}_\mu), \ \Omega_A^{-1}(\widehat{b}_\mu)+N^{-1/2+\epsilon} \Delta_2(\widehat{b}_\mu) \right]. \end{equation*} Then we define \begin{equation*} \mathrm{I}:=\mathrm{I}_0 \cup \Big(\bigcup_{i \in \mathcal{O}^{(a)}_{\epsilon}}\mathrm{I}_i^{(a)}\Big) \cup \Big(\bigcup_{\mu \in \mathcal{O}^{(b)}_{\epsilon}}\mathrm{I}_\mu^{(b)}\Big) ,\quad \mathrm{I}_0:=\left[0, E_+ + N^{-2/3+3\epsilon}\right]. \end{equation*} The main task of this step is to prove that on the event $\Xi,$ there exist no eigenvalues outside $\mathrm{I}.$ This is summarized as the following lemma. \begin{lem} On the event $\Xi,$ the complement of $\mathrm{I}$ contains no eigenvalues of $\widehat{\mathcal{Q}}_1.$ \end{lem} \begin{proof} By Lemma \ref{lem_evmaster}, (\ref{eq_proofoutlier}) and (\ref{eq_proofoutlierlocallaw}), we find that $x \notin \mathrm{I}_0$ is an eigenvalue of $\widehat{\mathcal{Q}}_1$ if and only if \begin{equation}\label{eq_lefthandsidenonsingular} \mathbf{1}(\Xi)(\bm{\mathcal{D}}^{-1}+x \mathbf{U}^* \mathbf{G}(x) \mathbf{U})=\mathbf{1}(\Xi)\left(\bm{\mathcal{D}}^{-1}+x \mathbf{U}^* \Theta(x) \mathbf{U}+\mathrm{O}(\kappa^{-1/4} N^{-1/2+\epsilon/2}) \right) \end{equation} is singular.
In light of (\ref{eq_determinantexplicitform}), it suffices to show that if $x \notin \mathrm{I},$ then \begin{equation}\label{eq_sufficientconverse} \min\left\{ \min_{1 \leq i \leq r}\left| \frac{d_i^a+1}{d_i^a}+\frac{\Omega_B(x)}{a_i-\Omega_B(x)} \right|, \min_{1 \leq \mu-p \leq s } \left| \frac{d_\mu^b+1}{d_\mu^b}+\frac{\Omega_A(x)}{b_\mu-\Omega_A(x)} \right| \right\} \gg \kappa^{-1/4} N^{-1/2+\epsilon/2}. \end{equation} Indeed, when (\ref{eq_sufficientconverse}) holds, the matrix on the left-hand side of (\ref{eq_lefthandsidenonsingular}) is non-singular. Note that \begin{align} \frac{d_i^a+1}{d_i^a}+\frac{\Omega_B(x)}{a_i-\Omega_B(x)}=\frac{1}{d_i^a}-\frac{a_i}{\Omega_B(x)-a_i}& =\frac{a_i}{\Omega_B(\Omega_B^{-1}(\widehat{a}_i))-a_i}-\frac{a_i}{\Omega_B(x)-a_i} \nonumber \\ &=\mathrm{O} \left( \left| \Omega_B(x)-\Omega_B(\Omega_B^{-1}(\widehat{a}_i)) \right| \right), \label{eq_mvtcontrol} \end{align} where in the last equality we used (i) of Proposition \ref{prop:stabN}. The rest of the proof is devoted to controlling (\ref{eq_mvtcontrol}) using the mean value theorem. First, we have that \begin{equation} \label{eq_mvtdifference} |x-\Omega_B^{-1}(\widehat{a}_i)| \geq N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i), \ \text{for all} \ x \notin \mathrm{I}. \end{equation} In fact, when $i \in \mathcal{O}_{\epsilon}^{(a)},$ (\ref{eq_mvtdifference}) holds by definition. When $i \notin \mathcal{O}_{\epsilon}^{(a)},$ by (\ref{eq_edgeestimationrough}) and the fact that $\Omega_B(x)$ is monotone increasing when $x>E_+$, we have \begin{equation*} \Omega_B^{-1}(\widehat{a}_i)-E_+ \lesssim N^{-2/3+2\epsilon} \ll N^{-2/3+3 \epsilon}, \end{equation*} which, combined with the fact that $x>E_++N^{-2/3+3\epsilon}$ for any $x \notin \mathrm{I},$ yields (\ref{eq_mvtdifference}). Now we return to the proof of (\ref{eq_sufficientconverse}). We divide our proof into two cases.
In the first case, suppose there exists a constant $c>0$ such that $\Omega_B^{-1}(\widehat{a}_i) \notin [x-c\kappa, x+c\kappa].$ Since $\Omega_B(\cdot)$ is monotonically increasing on $(E_+, \infty),$ we have that \begin{equation*} |\Omega_B(x)-\Omega_B(\Omega_B^{-1}(\widehat{a}_i))| \geq |\Omega_B(x)-\Omega_B(x \pm \kappa)| \sim \kappa^{1/2} \gg N^{-1/2+\epsilon/2} \kappa^{-1/4}, \end{equation*} where in the second step we used (\ref{eq_derivative}) and (\ref{eq_derivativecontroldifference}) with Cauchy's integral formula when $x>E_+.$ In the second case, suppose $\Omega_B^{-1}(\widehat{a}_i) \in [x-c\kappa, x+c\kappa],$ so that $\Omega_B^{-1}(\widehat{a}_i)-E_+ \sim \kappa,$ where $c<1$ is some small constant. By (\ref{eq_edgeestimationrough}) and the fact $\widehat{a}_i-\Omega_B(E_+) \geq N^{-1/3+\epsilon},$ we have that \begin{equation*} \Omega_B^{-1}(\widehat{a}_i)-E_+ \sim \Delta_1(\widehat{a}_i)^4 \gg N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i). \end{equation*} Moreover, by (\ref{eq:suborder}) and (\ref{eq_derivativecontroldifference}), we conclude that \begin{equation*} |\Omega'_B(\xi)| \sim |\Omega_B'(\Omega_B^{-1}(\widehat{a}_i))| \sim \Delta_1(\widehat{a}_i)^{-2}, \ \xi \in \mathrm{I}_i^{(a)}, \end{equation*} where we used (\ref{eq_edgeestimationrough}) in the second step. Since $\Omega_B$ is monotonically increasing on $(E_+, \infty),$ for $x \notin \mathrm{I}_i^{(a)},$ by (\ref{eq_mvtdifference}) and (\ref{eq_edgeestimationrough}), we conclude that \begin{align*} |\Omega_B(x)-\Omega_B(\Omega_B^{-1}(\widehat{a}_i))| &\geq |\Omega_B(\Omega_B^{-1}(\widehat{a}_i) \pm N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i))-\Omega_B(\Omega_B^{-1}(\widehat{a}_i))| \nonumber \\ & \sim N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i)^{-1} \gg N^{-1/2+\epsilon/2} \kappa^{-1/4}. \end{align*} The $d_{\mu}^b$ term can be dealt with in the same way, and this completes our proof.
\end{proof} \noindent{\bf Steps 2 and 3:} For the ease of discussion, we relabel all the spikes with indices in $\mathcal{O}_{\epsilon}^{(a)} \cup \mathcal{O}_{\epsilon}^{(b)}$ as $\sigma_1, \sigma_2, \cdots, \sigma_{r_\epsilon}$ and call them \emph{$\epsilon$-spikes}. Further, we assume that they correspond to the classical locations of the outlying eigenvalues as $x_1 \geq x_2 \geq \cdots \geq x_{r_\epsilon},$ where some of them are determined by $\Omega_A^{-1}$ while others are given by $\Omega_B^{-1}.$ Correspondingly, all the permissible intervals are relabelled as $\mathrm{I}_i, 1 \leq i \leq r_{\epsilon}.$ We claim that, for a specific $N$-independent configuration $\mathbf{x} \equiv \mathbf{x}(0):=(x_1, \cdots, x_{r_\epsilon})$ satisfying $x_1>x_2>\cdots>x_{r_\epsilon},$ each $\mathrm{I}_i(\mathbf{x})$ contains precisely one eigenvalue of $\widehat{\mathcal{Q}}_1.$ This finishes Step 2. For Step 3, we use a continuity argument to generalize the results of Step 2 with configuration $\mathbf{x}(0)$ to arbitrary $N$-dependent configurations, which proves (\ref{eq_reduceproofoutlier}). The justification of the above discussion makes use of Rouch{\' e}'s theorem, (\ref{eq_edgeestimationrough}), (\ref{eq:suborder}), (\ref{eq_derivative}) and (\ref{eq_derivativecontroldifference}). We can follow verbatim the counterparts in Steps 2 and 3 of the proof of Theorem 3.6 of \cite{DYaos} to complete it. We omit further details here. \vspace{3pt} \noindent{\bf Step 4:} In this step, we consider the extremal non-outlier eigenvalues when $i \notin \left( \mathcal{O}_{\epsilon}^{(a)} \cup \mathcal{O}_{\epsilon}^{(b)} \right)$ and prove (\ref{eq_reducebulkvalue}). The discussion will use the following eigenvalue interlacing result. \begin{lem}\label{lem_interlacing} Recall that the eigenvalues of $\widehat{\mathcal{Q}}_1$ and $\mathcal{Q}_1$ are denoted as $\{\widehat{\lambda}_i\}$ and $\{\lambda_i\},$ respectively.
Then we have that \begin{equation*} \widehat{\lambda}_i \in [\lambda_i, \lambda_{i-r-s}], \end{equation*} where we adopt the convention that $\lambda_i=\infty$ if $i<1$ and $\lambda_i=0$ if $i>N.$ \end{lem} \begin{proof} See \cite[Lemma S.3.3]{DYaos}. \end{proof} We first fix a configuration $\mathbf{x}(0)$ as mentioned earlier. Then by the discussion of Step 2, (\ref{eq_proofoutlier}) and Lemma \ref{lem_interlacing}, we can prove (\ref{eq_reducebulkvalue}) under the configuration $\mathbf{x}(0).$ For arbitrary $N$-dependent configuration, we again use a continuity argument as mentioned in Step 3. We refer the readers to Step 4 of the proof of Theorem 3.6 of \cite{DYaos}. This finishes the proof of Theorem \ref{thm_outlier}. \end{proof} \subsection{Eigenvector statistics: proof of Theorem \ref{thm_outlieereigenvector}} In this section, we prove Theorem \ref{thm_outlieereigenvector}. Due to similarity, we only focus on the left singular vectors. The main technical task of this section is to prove Proposition \ref{prop_maineigenvector}, which implies Theorem \ref{thm_outlieereigenvector}. Recall the definitions in (\ref{eq_shorthandnotationsdelta}) and (\ref{eq_differencedefinition}). \begin{prop}\label{prop_maineigenvector} Suppose the assumptions of Theorem \ref{thm_outlieereigenvector} hold. 
Then for all $i,j=1,2,\cdots, N,$ we have that \begin{align*} & \left|\langle \mathbf{e}_i, \mathcal{P}_S \mathbf{e}_j \rangle-\delta_{ij} \mathbb{I}(\pi_a(i) \in S) \widehat{a}_i \frac{(\Omega_B^{-1})'(\widehat{a}_i)}{\Omega_B^{-1}(\widehat{a}_i)} \right|\prec \frac{\mathbb{I}(\pi_a(i) \in S, \pi_a(j) \in S)}{\sqrt{N} \sqrt{\Delta_1(\widehat{a}_i) \Delta_1(\widehat{a}_j)}}+\frac{\mathbb{I}(\pi_a(i) \in S, \pi_b(j) \notin S)\Delta_1(\widehat{a}_i)}{\sqrt{N} \delta^a_{\pi_a(i), \pi_b(j)}} \\ &+ \frac{1}{N} \left( \frac{1}{\delta_{\pi_a(i)}(S)}+\frac{\mathbb{I}(\pi_a(i) \in S)}{\Delta_1(\widehat{a}_i)^2} \right) \left( \frac{1}{\delta_{\pi_a(j)}(S)}+\frac{\mathbb{I}(\pi_a(j) \in S)}{\Delta_1(\widehat{a}_j)^2} \right)+(i \leftrightarrow j), \end{align*} where $(i \leftrightarrow j)$ denotes the same terms but with $i$ and $j$ interchanged. \end{prop} \begin{proof}[\bf Proof of Theorem \ref{thm_outlieereigenvector}] For the right singular vectors, we write \begin{equation*} \mathbf{v}=\sum_{k=1}^N \langle \mathbf{e}_k, \mathbf{v} \rangle \mathbf{e}_k=\sum_{k=1}^N v_k \mathbf{e}_k. \end{equation*} Then the results follow directly from Proposition \ref{prop_maineigenvector}. The results for the left singular vectors can be obtained similarly. \end{proof} The rest of the subsection is devoted to the proof of Proposition \ref{prop_maineigenvector}. Let $\omega<\tau_1/2$ and $0<\epsilon<\min\{\tau_1,\tau_2\}/10$ be some small positive constants to be chosen later. By Proposition \ref{prop_linearlocallaw}, Theorems \ref{thm_rigidity} and \ref{thm_outlier}, we can choose a high probability event $\Xi_1 \equiv \Xi_1(\epsilon,\omega, \tau_1,\tau_2)$ on which the following statements hold.
\begin{enumerate} \item[(i)] For all \begin{equation}\label{eq_evoutsideparameter} z \in \mathcal{D}_{out}(\omega):=\left\{E+\mathrm{i} \eta: E_++N^{-2/3+\omega} \leq E \leq \omega^{-1}, 0 \leq \eta \leq \omega^{-1} \right\}, \end{equation} we have that \begin{equation}\label{eq_xi1setestimation} \mathbb{I}(\Xi_1)\| \mathbf{U}^* (\mathbf{G}(z)-\Theta(z)) \mathbf{U} \| \leq N^{-1/2+\epsilon} (\kappa+\eta)^{-1/4}. \end{equation} \item[(ii)] Recall the notations in (\ref{eq_shorthandnotationsdelta}). For all $1 \leq i \leq r^+$ and $1 \leq \mu-N \leq s^+$, we have \begin{equation*} \mathbb{I}(\Xi_1) \left| \widehat{\lambda}_{\pi_a(i)}-\Omega_B^{-1}(\widehat{a}_i) \right| \leq N^{-1/2+\epsilon} \Delta_1(\widehat{a}_i),\ \mathbb{I}(\Xi_1) \left| \widehat{\lambda}_{\pi_b(\mu)}-\Omega_A^{-1}(\widehat{b}_\mu) \right| \leq N^{-1/2+\epsilon} \Delta_2(\widehat{b}_\mu). \end{equation*} \item[(iii)] For any fixed integer $\varpi>r+s$ and all $r^++s^+<i \leq \varpi,$ we have that \begin{equation}\label{eq_edgerigidity} \mathbb{I}(\Xi_1) \left[ |\lambda_1-E_+|+|\widehat{\lambda}_i-E_+| \right] \leq N^{-2/3+\epsilon}. \end{equation} \end{enumerate} From now on, we will focus our discussion on the high probability event $\Xi_1$ and hence all the discussion will be purely deterministic. We next construct the contour that will be used in our proof. Recall (\ref{eq_alphasinside}) and (\ref{eq_alphaoutside}).
Denote \begin{equation*} \rho_i^a=c_i\left[ \delta_{\pi_a(i)}(S) \wedge (\widehat{a}_i-\Omega_B(E_+)) \right], \ \pi_a(i) \in S, \end{equation*} and \begin{equation*} \rho_\mu^b=c_\mu \left[ \delta_{\pi_b(\mu)}(S) \wedge (\widehat{b}_\mu-\Omega_A(E_+)) \right], \ \pi_b(\mu) \in S, \end{equation*} for some sufficiently small constants $0<c_i, c_\mu<1.$ Define the contour $\Gamma:=\partial \mathsf{C}$ as the boundary of the union of the open discs \begin{equation}\label{eq_defndiscs} \mathsf{C}:=\bigcup_{\pi_a(i) \in S} B_{\rho_i^a}(\widehat{a}_i) \ \cup \bigcup_{\pi_b(\mu) \in S} B_{\rho_\mu^b}(\Omega_B(\Omega_A^{-1}(\widehat{b}_\mu))), \end{equation} where $B_r(x)$ denotes an open disc of radius $r$ around $x.$ In the following lemma, we will show that, by choosing sufficiently small $c_i, c_\mu,$ we have the following: (1) $\overline{\Omega_B^{-1}(\mathsf{C})}$ is a subset of (\ref{eq_evoutsideparameter}) and hence (\ref{eq_xi1setestimation}) holds; (2) $\partial \Omega_B^{-1}(\mathsf{C})=\Omega_B^{-1}(\Gamma)$ only encloses the outliers with indices in $S.$ Its proof is deferred to Appendix \ref{sec_additional}. \begin{lem}\label{lem_contour} Suppose that the assumptions of Theorem \ref{thm_outlieereigenvector} hold. Then the set $\overline{\Omega_B^{-1}(\mathsf{C})}$ lies in the parameter set (\ref{eq_evoutsideparameter}) as long as the $c_i$'s and $c_\mu$'s are sufficiently small. Moreover, we have that $\{\widehat{\lambda}_{\mathfrak{a}}\}_{\mathfrak{a} \in S} \subset \Omega_B^{-1}(\mathsf{C})$ and all the other eigenvalues lie in the complement of $\overline{\Omega_B^{-1}(\mathsf{C})}.$ \end{lem} Next we introduce some decompositions. Denote $\widehat{\mathbf{H}} \equiv \widehat{\mathbf{H}}(z)$ as \begin{equation}\label{eq_wthdecomposition} \widehat{\mathbf{H}}(z):=\mathbf{P} \mathbf{H}(z) \mathbf{P} = \begin{pmatrix} 0 & z^{1/2} \widehat Y \\ z^{1/2}\widehat Y^* & 0 \end{pmatrix}, \ \mathbf{P}= \begin{pmatrix} (1+D^a)^{1/2} & 0\\ 0 & (1+D^b)^{1/2} \end{pmatrix}.
\end{equation} Correspondingly, we denote $\widehat{\mathbf{G}}(z)=(\widehat{\mathbf{H}}(z)-z)^{-1}.$ By a discussion similar to (\ref{eq_schurcomplement}) and the singular value decomposition (SVD) of $\widehat Y$, we have that \begin{align}\label{eq_linearizationspectral} \widehat{\mathbf{G}}_{ij}=\sum_{k=1}^N \frac{\widehat{\mathbf{u}}_k(i) \widehat{\mathbf{u}}_k^*(j)}{\widehat{\lambda}_k-z}, \ \widehat{\mathbf{G}}_{\mu \nu}=\sum_{k=1}^N \frac{\widehat{\mathbf{v}}_k(\mu) \widehat{\mathbf{v}}_k^*(\nu)}{\widehat{\lambda}_k-z}, \end{align} \begin{align*} \widehat{\mathbf{G}}_{i \mu}=\frac{1}{\sqrt{z}} \sum_{k=1}^N \frac{\sqrt{\widehat{\lambda}_k} \widehat{\mathbf{u}}_k(i)\widehat{\mathbf{v}}_k^*(\mu) }{\widehat{\lambda}_k-z}, \ \widehat{\mathbf{G}}_{\mu i}=\frac{1}{\sqrt{z}} \sum_{k=1}^N \frac{\sqrt{\widehat{\lambda}_k} \widehat{\mathbf{v}}_k(\mu)\widehat{\mathbf{u}}_k^*(i) }{\widehat{\lambda}_k-z}. \end{align*} Our starting point is an integral representation. Specifically, by (\ref{eq_linearizationspectral}), Lemma \ref{lem_contour}, and Cauchy's integral formula, we have that \begin{equation}\label{eq_intergralrepresentation} \langle \mathbf{e}_i, \mathcal{P}_S \mathbf{e}_j \rangle=-\frac{1}{2 \pi \mathrm{i}} \oint_{\Omega_B^{-1}(\Gamma)} \langle \bm{e}_i, \widehat{\mathbf{G}}(z) \bm{e}_j \rangle \mathrm{d} z, \end{equation} where $\bm{e}_i$ and $\bm{e}_j$ are the natural embeddings of $\mathbf{e}_i$ and $\mathbf{e}_j$ in $\mathbb{C}^{2N}.$ Next, we express $\mathbf{U}^* \widehat{\mathbf{G}}(z) \mathbf{U}$ in terms of $\mathbf{U}^* \mathbf{G}(z) \mathbf{U}.$ For matrices $\mathcal{A}, \mathcal{S}, \mathcal{B}$ and $\mathcal{T}$ of conformable dimensions, by the Woodbury matrix identity, we have \begin{equation}\label{eq_woodbury} (\mathcal{A}+\mathcal{S} \mathcal{B} \mathcal{T})^{-1}=\mathcal{A}^{-1}-\mathcal{A}^{-1} \mathcal{S} (\mathcal{B}^{-1}+\mathcal{T} \mathcal{A}^{-1} \mathcal{S})^{-1} \mathcal{T} \mathcal{A}^{-1}, \end{equation} as long as all the
operations are legitimate. Moreover, when $\mathcal{A}+\mathcal{B}$ is non-singular, we have that \begin{equation}\label{eq_hua} \mathcal{A}-\mathcal{A}(\mathcal{A}+\mathcal{B})^{-1} \mathcal{A}=\mathcal{B}-\mathcal{B}(\mathcal{A}+\mathcal{B})^{-1} \mathcal{B}. \end{equation} By (\ref{eq_wthdecomposition}), (\ref{eq_linearizationspectral}) and the matrix identities (\ref{eq_woodbury}) and (\ref{eq_hua}), we have that \begin{align}\label{eq_keyexpansionev} \mathbf{U}^* \widehat{\mathbf{G}}(z) \mathbf{U}&=\mathbf{U}^* \mathbf{P}^{-1} \left( \mathbf{H}-z+z(I-\mathbf{P}^{-2}) \right)^{-1} \mathbf{P}^{-1} \mathbf{U}=\mathbf{U}^*\mathbf{P}^{-1}(\mathbf{G}^{-1}(z)+z\mathbf{U} \bm{\mathcal{D}} \mathbf{U}^*)^{-1} \mathbf{P}^{-1} \mathbf{U} \nonumber \\ &= \mathbf{U}^* \mathbf{P}^{-1} \left[ \mathbf{G}(z)-z\mathbf{G}(z) \mathbf{U} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \mathbf{G}(z) \mathbf{U}} \mathbf{U}^* \mathbf{G}(z) \right] \mathbf{P}^{-1} \mathbf{U} \nonumber \\ &=\widetilde{\bm{\mathcal{D}}}^{1/2} \left[ \mathbf{U}^* \mathbf{G}(z) \mathbf{U}-z \mathbf{U}^* \mathbf{G}(z) \mathbf{U} \frac{1}{\bm{\mathcal{D}}^{-1}+z\mathbf{U}^* \mathbf{G}(z) \mathbf{U}} \mathbf{U}^* \mathbf{G}(z) \mathbf{U} \right] \widetilde{\bm{\mathcal{D}}}^{1/2} \nonumber \\ &= \frac{1}{z}\widetilde{\bm{\mathcal{D}}}^{1/2} \left[\bm{\mathcal{D}}-\bm{\mathcal{D}}^{-1} \frac{1}{\bm{\mathcal{D}}^{-1}+z\mathbf{U}^* \mathbf{G}(z) \mathbf{U}} \bm{\mathcal{D}}^{-1} \right] \widetilde{\bm{\mathcal{D}}}^{1/2}, \end{align} where $\widetilde{\bm{\mathcal{D}}}=\mathbf{P}^{-2}$ and $\mathbf{P}$ is defined in (\ref{eq_wthdecomposition}). Then we prove Proposition \ref{prop_maineigenvector}. 
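Both matrix identities admit short direct verifications, which we record for the reader's convenience; they are elementary algebraic facts, not specific to the present model. For (\ref{eq_woodbury}), multiplying the claimed inverse by $\mathcal{A}+\mathcal{S}\mathcal{B}\mathcal{T}$ from the left yields the identity matrix, and both sides of (\ref{eq_hua}) equal $\mathcal{A}(\mathcal{A}+\mathcal{B})^{-1}\mathcal{B}$:

```latex
% Verification of the Woodbury identity \eqref{eq_woodbury}: the key cancellation is
% S + S B T A^{-1} S = S B (B^{-1} + T A^{-1} S).
\begin{align*}
(\mathcal{A}+\mathcal{S}\mathcal{B}\mathcal{T})
\left[\mathcal{A}^{-1}-\mathcal{A}^{-1}\mathcal{S}
(\mathcal{B}^{-1}+\mathcal{T}\mathcal{A}^{-1}\mathcal{S})^{-1}\mathcal{T}\mathcal{A}^{-1}\right]
&=I+\mathcal{S}\mathcal{B}\mathcal{T}\mathcal{A}^{-1}
-\bigl(\mathcal{S}+\mathcal{S}\mathcal{B}\mathcal{T}\mathcal{A}^{-1}\mathcal{S}\bigr)
(\mathcal{B}^{-1}+\mathcal{T}\mathcal{A}^{-1}\mathcal{S})^{-1}\mathcal{T}\mathcal{A}^{-1} \\
&=I+\mathcal{S}\mathcal{B}\mathcal{T}\mathcal{A}^{-1}
-\mathcal{S}\mathcal{B}\mathcal{T}\mathcal{A}^{-1}=I.
\end{align*}
% Verification of \eqref{eq_hua}: factor out (A+B)^{-1} on either side.
\begin{equation*}
\mathcal{A}-\mathcal{A}(\mathcal{A}+\mathcal{B})^{-1}\mathcal{A}
=\mathcal{A}(\mathcal{A}+\mathcal{B})^{-1}\mathcal{B}
=\mathcal{B}(\mathcal{A}+\mathcal{B})^{-1}\mathcal{A}
=\mathcal{B}-\mathcal{B}(\mathcal{A}+\mathcal{B})^{-1}\mathcal{B}.
\end{equation*}
```

The middle equality in the second display follows since both of its sides equal $\mathcal{B}-\mathcal{B}(\mathcal{A}+\mathcal{B})^{-1}\mathcal{B}$ after writing $\mathcal{A}=(\mathcal{A}+\mathcal{B})-\mathcal{B}.$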
The proof follows the same strategy as that of \cite[Proposition S.5.5]{DYaos}. \begin{proof}[\bf Proof of Proposition \ref{prop_maineigenvector}] Throughout the proof, for convenience, we set $d_i^a=0$ when $i>r.$ Denote $\mathcal{E}(z)=z\mathbf{U}^*(\Theta(z)-\mathbf{G}(z))\mathbf{U}.$ Using the resolvent expansion, we obtain that \begin{align*} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \mathbf{G}(z) \mathbf{U}}&=\frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \Theta(z) \mathbf{U}}+\frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \Theta(z) \mathbf{U}} \mathcal{E} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \Theta(z) \mathbf{U}} \nonumber \\ &+\frac{1}{\bm{\mathcal{D}}^{-1} +z \mathbf{U}^* \Theta(z) \mathbf{U}} \mathcal{E} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \mathbf{G}(z) \mathbf{U}} \mathcal{E} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \Theta(z) \mathbf{U}}. \end{align*} Together with (\ref{eq_intergralrepresentation}) and (\ref{eq_keyexpansionev}), using the fact that $\Omega_B^{-1}(\Gamma)$ does not enclose any pole of $\mathbf{G}$ (i.e.
(\ref{eq_edgerigidity})), we have the following decomposition \begin{equation*} \langle \mathbf{e}_i, \mathcal{P}_S \mathbf{e}_j \rangle =\frac{\sqrt{(1+d_i^a)(1+d_j^a)}}{d_i^a d_j^a}(s_0+s_1+s_2), \end{equation*} where $s_0, s_1$ and $s_2$ are defined as \begin{equation*} s_0=\frac{\delta_{ij}}{2 \pi \mathrm{i}} \oint_{\Omega_B^{-1}(\Gamma)} \frac{1}{(d_i^a)^{-1}+a_i(a_i-\Omega_B(z))^{-1}} \frac{\mathrm{d} z}{z}, \end{equation*} \begin{equation*} s_1=\frac{1}{2 \pi \mathrm{i}} \oint_{\Omega_B^{-1}(\Gamma)} \frac{\mathcal{E}_{ij}(z)}{((d_i^a)^{-1}+a_i(a_i-\Omega_B(z))^{-1})((d_j^a)^{-1}+a_j(a_j-\Omega_B(z))^{-1})} \frac{\mathrm{d} z}{z}, \end{equation*} \begin{equation*} s_2=\frac{1}{2 \pi \mathrm{i}} \oint_{\Omega_B^{-1}(\Gamma)} \left(\frac{1}{\bm{\mathcal{D}}^{-1} +z \mathbf{U}^* \Theta(z) \mathbf{U}} \mathcal{E} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \mathbf{G}(z) \mathbf{U}} \mathcal{E} \frac{1}{\bm{\mathcal{D}}^{-1}+z \mathbf{U}^* \Theta(z) \mathbf{U}} \right)_{ij} \frac{\mathrm{d} z}{z}. \end{equation*} First, we deal with the term containing $s_0.$ Using the change of variable $z=\Omega_B^{-1}(\zeta)$ and the residue theorem, we readily see that \begin{align*} \frac{\sqrt{(1+d_i^a)(1+d_j^a)}}{d_i^a d_j^a}s_0 &=\frac{\sqrt{(1+d_i^a)(1+d_j^a)}}{d_j^a} \frac{\delta_{ij}}{2 \pi \mathrm{i}} \oint_{\Gamma} \frac{(\Omega_B^{-1})'(\zeta)}{\Omega_B^{-1}(\zeta)} \frac{a_i-\zeta}{\widehat{a}_i-\zeta} \mathrm{d} \zeta \\ &= \delta_{ij} \widehat{a}_i \frac{(\Omega_B^{-1})'(\widehat{a}_i)}{\Omega_B^{-1}(\widehat{a}_i)}. \end{align*} Second, we control the term containing $s_1.$ For the ease of discussion, we further apply the change of variable $z=\Omega_B^{-1}(\zeta)$ to obtain that \begin{equation}\label{eq_s1decomposition} s_1=\frac{d_i^a d_j^a}{2 \pi \mathrm{i}} \oint_{\Gamma} \frac{\xi_{ij}(\zeta)}{(\zeta-\widehat{a}_i)(\zeta-\widehat{a}_j)} \mathrm{d} \zeta, \ \xi_{ij}(\zeta)=(\zeta-a_i)(\zeta-a_j) \mathcal{E}_{ij}(\Omega_B^{-1}(\zeta)) \frac{(\Omega_B^{-1})'(\zeta)}{\Omega_B^{-1}(\zeta)}.
\end{equation} To bound $\xi_{ij}(\zeta),$ we first prepare some useful estimates. When the $c_i$'s and $c_\mu$'s are sufficiently small, by a discussion similar to (\ref{eq_edgeestimationrough}), we have that for $\zeta \in \Gamma,$ \begin{equation}\label{eq_complexedgederterministicbound} |\Omega_{B}^{-1}(\zeta)-E_+| \sim |\zeta-\Omega_B(E_+)|^2. \end{equation} Moreover, letting $z_b=\Omega_B^{-1}(\zeta),$ by Cauchy's differentiation formula, we obtain that \begin{equation}\label{eq_cauchydifferentation} \Omega_B'(z_b)-\Omega_{\beta}'(z_b)=\frac{1}{2 \pi \mathrm{i}} \oint_{\mathcal{C}_b} \frac{\Omega_B(\xi)-\Omega_{\beta}(\xi)}{(\xi-z_b)^2} \mathrm{d} \xi, \end{equation} where $\mathcal{C}_b$ is the circle of radius $|z_b-E_+|/2$ centered at $z_b.$ Here we used the fact that both $\Omega_B$ and $\Omega_{\beta}$ are holomorphic on $\Omega_B^{-1}(\Gamma).$ Together with (\ref{eq:OmegaBound}) and (\ref{eq_derivative}), using (\ref{eq_cauchydifferentation}), we readily see that for some constant $C>0$ \begin{equation*} \Omega_B'(z_b) \sim C N^{-1/2+\epsilon} |z_b-E_+|^{-1}+|z_b-E_+|^{-1/2} \sim |z_b-E_+|^{-1/2}, \end{equation*} where in the last step we used (\ref{eq_complexedgederterministicbound}), which gives $|z_b-E_+| \geq CN^{-2/3+\epsilon}.$ Consequently, using implicit differentiation, we conclude that \begin{equation}\label{eq_derivativeinversebound} (\Omega_B^{-1})'(\zeta) \sim |z_b-E_+|^{1/2} \sim |\zeta-\Omega_B(E_+)|, \end{equation} where in the last step we used (\ref{eq_complexedgederterministicbound}). With the above preparation, we now proceed to control $\xi_{ij}(\zeta)$ and $s_1.$ By (\ref{eq_xi1setestimation}), (\ref{eq_complexedgederterministicbound}) and (\ref{eq_derivativeinversebound}), it is easy to see that for $\zeta \in \Gamma$ \begin{equation}\label{eq_xiijwholebound} |\xi_{ij}(\zeta)| \lesssim N^{-1/2+\epsilon} |\zeta-\Omega_B(E_+)|^{1/2}.
\end{equation} Together with a discussion similar to (\ref{eq_cauchydifferentation}), we see that \begin{equation}\label{eq_xiijderivativewholebound} |\xi_{ij}'(\zeta)| \lesssim N^{-1/2+\epsilon} |\zeta-\Omega_B(E_+)|^{-1/2}. \end{equation} In order to control $s_1,$ we will consider several cases. In the first case when both $\pi_a(i) \in S$ and $\pi_a(j) \in S,$ if $\widehat{a}_i \neq \widehat{a}_j,$ by (\ref{eq_s1decomposition}) and the residue theorem, we have that \begin{align*} |s_1| \leq C \left| \frac{\xi_{ij}(\widehat{a}_i)-\xi_{ij}(\widehat{a}_j)}{\widehat{a}_i-\widehat{a}_j} \right| \leq \frac{C}{|\widehat{a}_i-\widehat{a}_j|} \left| \int^{\widehat{a}_i}_{\widehat{a}_j}|\xi'_{ij}(\zeta)| \mathrm{d} \zeta \right| \leq \frac{CN^{-1/2+\epsilon}}{\sqrt{\Delta_1(\widehat{a}_i) \Delta_1(\widehat{a}_j)}}, \end{align*} where we used (\ref{eq_xiijderivativewholebound}) in the last step. On the other hand, if $\widehat{a}_i=\widehat{a}_j,$ we can obtain the same bound using the residue theorem. In the second case when $\pi_a(i) \in S$ and $\pi_a(j) \notin S,$ we conclude from (\ref{eq_xiijwholebound}) that \begin{equation*} |s_1| \leq C \frac{\left| \xi_{ij}(\widehat{a}_i) \right|}{\left| \widehat{a}_i-\widehat{a}_j \right|} \leq \frac{C \Delta_1(\widehat{a}_i) N^{-1/2+\epsilon}}{\delta^a_{\pi_a(i), \pi_a(j)}}. \end{equation*} Similarly, we can estimate $s_1$ when $\pi_a(i) \notin S$ and $\pi_a(j) \in S.$ Finally, when both $\pi_a(i) \notin S$ and $\pi_a(j) \notin S,$ we have $s_1=0$ by the residue theorem. This completes the estimation regarding $s_1.$ We then estimate $s_2,$ which relies on some crucial estimates on the contour. We decompose the contour into \begin{equation}\label{decompose_contour} \Gamma=\bigcup_{\pi_a(i) \in S} \Gamma_i \cup \bigcup_{\pi_b(\mu) \in S} \Gamma_\mu, \ \Gamma_i:=\Gamma \cap \partial B_{\rho_i^a}(\widehat{a}_i), \ \Gamma_\mu:=\Gamma \cap \partial B_{\rho_\mu^b}(\Omega_B(\Omega_A^{-1}(\widehat{b}_\mu))).
\end{equation} The following lemma is the key technical input for the estimation of $s_2.$ Its proof will be given in Appendix \ref{sec_additional}. \begin{lem}\label{lem_contourestimation} For any $\pi_a(i) \in S, 1 \leq j \leq r, 1 \leq \nu-N \leq s$ and $\zeta \in \partial B_{\rho_i^a}(\widehat{a}_i),$ we have that \begin{equation}\label{eq_onlyproveresult} \left| \zeta-\widehat{a}_j \right| \sim \rho_i^a+\delta^a_{\pi_a(i), \pi_a(j)}, \end{equation} and \begin{equation*} \left| \Omega_A(\Omega_B^{-1}(\zeta))-\widehat{b}_\nu \right| \sim \rho_i^a+\delta^a_{\pi_a(i), \pi_b(\nu)}. \end{equation*} For any $\pi_b(\mu) \in S, 1 \leq j \leq r, 1 \leq \nu-N \leq s$ and $\zeta \in \partial B_{\rho_\mu^b}(\Omega_B(\Omega_A^{-1}(\widehat{b}_\mu))),$ we have \begin{equation*} \left| \zeta-\widehat{a}_j \right| \sim \rho_\mu^b+\delta^b_{\pi_b(\mu), \pi_a(j)}, \end{equation*} and \begin{equation*} \left| \Omega_A(\Omega_B^{-1}(\zeta))-\widehat{b}_\nu \right| \sim \rho_\mu^b+\delta^b_{\pi_b(\mu), \pi_b(\nu)}.
\end{equation*} \end{lem} For the estimation of $s_2,$ by (\ref{eq_xi1setestimation}), (\ref{eq_xiijwholebound}), (\ref{eq_complexedgederterministicbound}) and (i) of Proposition \ref{prop:stabN}, we have that \begin{align}\label{eq_s3bound} |s_2| & \leq C \oint_{\Gamma} \frac{N^{-1/2+\epsilon}}{|\zeta-\widehat{a}_i||\zeta-\widehat{a}_j|} \frac{|(\Omega_B^{-1})'(\zeta)|}{|\zeta-\Omega_B(E_+)|} \left \| \left(\bm{\mathcal{D}}^{-1}+ \Omega_B^{-1}(\zeta) \mathbf{U}^* \mathbf{G}(\Omega_B^{-1}(\zeta)) \mathbf{U}\right)^{-1} \right\| |\mathrm{d} \zeta|, \nonumber \\ & \leq C \oint_{\Gamma}\frac{N^{-1+2\epsilon}}{|\zeta-\widehat{a}_i| |\zeta-\widehat{a}_j|} \frac{1}{\mathfrak{d}(\zeta)-\|\mathcal{E}(\Omega_B^{-1}(\zeta)) \|} |\mathrm{d} \zeta|, \end{align} where $\mathfrak{d}(\zeta)$ is defined as \begin{equation*} \mathfrak{d}(\zeta):=\left( \min_{1 \leq j \leq r}\left| \widehat{a}_j-\zeta \right| \right) \wedge \left( \min_{1 \leq \mu-N \leq s}|\widehat{b}_{\mu}-\Omega_A(\Omega_B^{-1}(\zeta))| \right). \end{equation*} We mention that $\mathfrak{d}(\zeta)$ can be bounded from below using Lemma \ref{lem_contourestimation}. Moreover, by (\ref{eq_xi1setestimation}) and (\ref{eq_complexedgederterministicbound}), for some constant $C>0,$ we can bound \begin{equation}\label{eq_smallbound} \| \mathcal{E}(\Omega_B^{-1}(\zeta)) \| \leq C \sqrt{rs} N^{-1/2+\epsilon}|\zeta-\Omega_B(E_+)|^{-1/2}. \end{equation} Recall that both $r$ and $s$ are bounded. Together with Lemma \ref{lem_contourestimation} and the fact that $\epsilon<\tau_2$, we obtain that \begin{equation*} \| \mathcal{E}(\Omega_B^{-1}(\zeta)) \| \ll (\widehat{a}_i-\Omega_B(E_+))^{-1/2} N^{-1/2+\tau_2} \lesssim \begin{cases} \rho_i^a \lesssim \mathfrak{d}(\zeta), & \text{for} \ \zeta \in \Gamma_i \\ \rho_\mu^b \lesssim \mathfrak{d}(\zeta), & \text{for} \ \zeta \in \Gamma_\mu \end{cases}, \end{equation*} where we used (\ref{eq_smallbound}) and Assumption \ref{assum_eigenvector}.
Based on the above estimates, we arrive at \begin{equation}\label{eq_boundsim1} \frac{1}{\mathfrak{d}(\zeta)-\|\mathcal{E}(\Omega_B^{-1}(\zeta)) \|} \lesssim \begin{cases} (\rho_i^a)^{-1}, & \text{for} \ \zeta \in \Gamma_i \\ (\rho_\mu^b)^{-1}, & \text{for} \ \zeta \in \Gamma_\mu \end{cases}. \end{equation} Now we proceed to control $s_2.$ Decomposing the integral contour in \eqref{eq_s3bound} as in \eqref{decompose_contour}, using \eqref{eq_boundsim1} and Lemma \ref{lem_contourestimation}, and recalling that the length of $\Gamma_i$ (or $\Gamma_\mu$) is at most $2 \pi \rho_i^a$ (or $2\pi \rho_\mu^b$), we get that for some constant $C>0,$ \begin{equation}\label{estimate_s21} \begin{split} |s_2| \le C \sum_{\pi_a(k) \in S} \frac{N^{-1+2\epsilon}}{(\rho_k^a + \delta^a_{\pi_a(k),\pi_a(i)})(\rho_k^a + \delta^a_{\pi_a(k),\pi_a(j)}) } +C \sum_{\pi_b(\mu) \in S} \frac{N^{-1+2\epsilon}}{(\rho_\mu^b + \delta^b_{\pi_b(\mu),\pi_a(i)})(\rho_\mu^b + \delta^b_{\pi_b(\mu),\pi_a(j)})} . \end{split} \end{equation} Now we bound the terms of the right-hand side of \eqref{estimate_s21} using the Cauchy-Schwarz inequality. For $\pi_a(i)\notin S$, we have \begin{align*} \sum_{\pi_a(k)\in S}\frac{1}{(\rho_k^a + \delta^a_{\pi_a(k),\pi_a(i)})^2 }+\sum_{\pi_b(\mu)\in S}\frac{1}{(\rho_\mu^b + \delta^b_{\pi_b(\mu),\pi_a(i)})^2 } & \le \sum_{\pi_a(k)\in S}\frac{1}{(\delta^a_{\pi_a(k),\pi_a(i)})^2 }+\sum_{\pi_b(\mu)\in S}\frac{1}{ (\delta^b_{\pi_b(\mu),\pi_a(i)})^2 } \\ & \le \frac{C}{\delta_{\pi_a(i)}(S)^2} . \end{align*} For $\pi_a(i)\in S$, we have $\rho_k^a + \delta^a_{\pi_a(k),\pi_a(i)}\gtrsim\rho_i^a $ for $\pi_a(k)\in S$, and $\rho_\mu^b + \delta^b_{\pi_b(\mu),\pi_a(i)}\gtrsim \rho_i^a$ for $\pi_b(\mu)\in S$.
Then we have for some constant $C>0$ $$\sum_{\pi_a(k)\in S}\frac{1}{(\rho_k^a + \delta^a_{\pi_a(k),\pi_a(i)})^2 }+\sum_{\pi_b(\mu)\in S}\frac{1}{(\rho_\mu^b + \delta^b_{\pi_b(\mu),\pi_a(i)})^2 } \le \frac{C}{(\rho_i^a)^2}\le \frac{C}{\delta_{\pi_a(i)}(S)^2} + \frac{C}{\Delta_1(\widehat{a}_i)^4} .$$ Plugging the above two estimates into \eqref{estimate_s21}, we get that \begin{equation*} \begin{split} |s_2| \le CN^{-1+2\epsilon}\left(\frac1{\delta_{\pi_a(i)}(S)} + \frac{\mathbb{I}(\pi_a(i)\in S)}{\Delta_1(\widehat{a}_i)^2} \right)\left(\frac1{\delta_{\pi_a(j)}(S)} + \frac{\mathbb{I}(\pi_a(j)\in S)}{\Delta_1(\widehat{a}_j)^2} \right). \end{split} \end{equation*} So far, we have proved Proposition \ref{prop_maineigenvector} for $1\le i,j \le r$ since $\epsilon$ can be arbitrarily small. Finally, the general case can be dealt with easily. For general $i,j \in \{1,\cdots, N\}$, we define $\mathcal R:=\{1,\cdots, r\}\cup \{i,j\}$. Then we define a perturbed model as \begin{equation}\nonumber \widehat{\mathcal{A}} = A\Big(I+ {\widehat{D}}^{a} \Big), \quad \widehat{D}^{a}=\text{diag}(\widehat{d}_k^{\,a})_{k\in \mathcal R}, \end{equation} where for some $\widetilde \epsilon>0,$ $$\quad \widehat{d}_{k}^{\,a}:=\begin{cases} d_k^a, \ &\text{if } 1\le k\le r\\ \widetilde\epsilon, \ &\text{if } k\in \mathcal R \text{ and } k>r\end{cases}. $$ Then the previous proof goes through for the perturbed model as long as we replace the $\mathbf U$ and $\bm{\mathcal D}$ in \eqref{eq_defnudefnd} with \begin{equation}\label{Depsilon0_general} \widehat{\mathbf{U}}= \begin{pmatrix} \mathbf{E}_{r+2} & 0 \\ 0 & \mathbf{E}_s \end{pmatrix}, \quad \widehat{\bm{\mathcal{D}}}= \begin{pmatrix} \widehat D^a(\widehat D^a+1)^{-1} & 0 \\ 0 & D^b(D^b+1)^{-1} \end{pmatrix}. \end{equation} Note that in the proof, only the upper bound on the $\widehat{d}_k^{\,a}$'s was used.
Moreover, the proof does not depend on the fact that $\widehat{a}_i$ or $\widehat{a}_j$ satisfy \eqref{eq_outlierlocation} (we only need the indices in $S$ to satisfy Assumption \ref{assum_eigenvector}). By taking $\widetilde\epsilon\downarrow 0$ and using continuity, we get that Proposition \ref{prop_maineigenvector} holds for general $i,j \in \{1,\cdots, N\}$. \end{proof} \section{Proof of Theorems \ref{thm_rigidity} and \ref{thm_delocalization}}\label{sec_proofofceonsequenceslocallaw} In this section, we prove the spectral rigidity near the upper edge, i.e., Theorem \ref{thm_rigidity}, and the complete delocalization of the singular vectors, i.e., Theorem \ref{thm_delocalization}. We start with Theorem \ref{thm_rigidity}. For the first part of the results, i.e., that the eigenvalues are close to the quantiles $\gamma_i^*,$ the proof follows from a standard argument established in \cite{erdos2012}, by translating the closeness of the resolvents into the closeness of the eigenvalues to the quantiles of $\mu_{A} \boxtimes \mu_{B}.$ Since the proof strategy is rather standard, we only sketch the key inputs and steps. Overall, the proof relies on the following two important inputs: \begin{enumerate} \item For the largest eigenvalue $\lambda_1,$ we have \begin{equation}\label{eq_controllargestdifference} |\lambda_1-\gamma_1^*| \prec N^{-2/3}. \end{equation} \item Given a sufficiently small constant $\varsigma>0,$ we have \begin{equation}\label{eq_ctronlclaimone} \sup_{x \leq E_++\varsigma}|\mu_{H}((-\infty,x])-\mu_{A} \boxtimes \mu_B((-\infty,x])|\prec N^{-1}. \end{equation} \end{enumerate} The proof of the first part of Theorem \ref{thm_rigidity} follows from the above two claims and the square-root behavior of $\mu_A \boxtimes \mu_B$ established in (ii) of Proposition \ref{prop:stabN}. We mention that the second technical input follows from an argument that is independent of the underlying random matrix model, once the local laws in Theorem \ref{thm:main} have been established.
In detail, it follows from a standard application of the Helffer-Sj{\" o}strand formula on $\mathcal{D}_{\tau}(\eta_L, \eta_U)$; see Section C and the arguments in Section 8 of \cite{BKnotes} for details. In what follows, we focus our discussion on the justification of the first claim since it usually depends on a case-by-case checking. For the second part of Theorem \ref{thm_rigidity}, where we replace $\gamma_i^*$ with $\gamma_i,$ we will bound the differences between $\gamma_i^*$ and $\gamma_i.$ \begin{proof}[\bf Proof of Theorem \ref{thm_rigidity}] First, we prove the first part by establishing (\ref{eq_controllargestdifference}) via a contradiction argument. It relies on the following improved estimate, whose proof can be found in Appendix \ref{sec_additional}. Define $\widetilde{\mathcal{D}}_{>}$ as \begin{equation}\label{eq_wtmathcald} \widetilde{\mathcal{D}}_{>}:=\{z=E+\mathrm{i} \eta \in \mathcal{D}_{\tau}(\eta_L, \eta_U): \sqrt{\kappa+\eta}>\frac{N^{2 \epsilon}}{N \eta} \ \text{and} \ E>E_+\}. \end{equation} \begin{lem}\label{lem_improvedestimate} Under the assumptions of Theorem \ref{thm_rigidity}, the following estimate holds uniformly in $z \in \widetilde{\mathcal{D}}_{>}$: \begin{equation}\label{eq_improveboundequation} \Lambda(z) \prec \frac{1}{N \sqrt{(\kappa+\eta)\eta}}+\frac{1}{\sqrt{\kappa+\eta}} \frac{1}{(N \eta)^2}. \end{equation} \end{lem} Recall (\ref{eq:priorisupp}). For any (small) constant $\epsilon>0,$ we define the following line segment \begin{equation*} \widetilde{\mathcal{D}}(\epsilon):=\{z=E+\mathrm{i} \eta: E_++N^{-2/3+6\epsilon} \leq E <\tau^{-1}, \ \eta=N^{-2/3+\epsilon} \}, \end{equation*} where $\tau>0$ is some small fixed constant.
Clearly, we have that $\widetilde{\mathcal{D}}(\epsilon) \subset \widetilde{\mathcal{D}}_{>}.$ By (\ref{eq_improveboundequation}), we readily obtain that $\Lambda \prec \frac{N^{-\epsilon}}{N \eta}$ holds uniformly in $z \in \widetilde{\mathcal{D}}(\epsilon).$ Together with (\ref{eq_rigidityuse}), we see that uniformly for $z \in \widetilde{\mathcal{D}}(\epsilon)$ \begin{equation*} |m_H(z)-m_{\mu_A \boxtimes \mu_B}(z)| \prec \frac{N^{-\epsilon}}{N \eta}. \end{equation*} Together with (ii) of Proposition \ref{prop:stabN}, we arrive at \begin{equation}\label{eq_contradiction} \im m_H(z) \prec \frac{N^{-\epsilon}}{N \eta}, \end{equation} uniformly in $z \in \widetilde{\mathcal{D}}(\epsilon).$ By (\ref{eq:priorisupp}), in order to prove (\ref{eq_controllargestdifference}) it suffices to prove that with high probability $\lambda_1 \leq E_++N^{-2/3+6\epsilon}$. We now justify this by contradiction. Suppose that $\lambda_1 > E_++N^{-2/3+6\epsilon}$; then for any $\eta>0$, by the definition of the Stieltjes transform, we see that \begin{equation*} \sup_{E>E_++N^{-2/3+6\epsilon}} \im m_H(E+\mathrm{i} \eta)=\sup_{E>E_++N^{-2/3+6\epsilon}} \frac{1}{N} \sum_{k=1}^N \frac{\eta}{(\lambda_k-E)^2+\eta^2} \geq \frac{1}{N \eta}, \end{equation*} which contradicts (\ref{eq_contradiction}). Therefore, we have proved (\ref{eq_controllargestdifference}). Next, for the second part of the results, we first claim the following estimate. \begin{lem} \label{lem:rigidity_AB} Suppose the assumptions of Theorem \ref{thm_rigidity} hold. Then for some sufficiently large $N_0 \equiv N_0(\epsilon,c),$ when $N \geq N_0,$ we have \begin{equation}\label{eq_closeABalphabeta} |\gamma_j-\gamma_j^*| \leq j^{-1/3} N^{-2/3+\epsilon}, \ 1 \leq j \leq cN. \end{equation} \end{lem} It is easy to see that the second part of the results follows from (\ref{eq_closeABalphabeta}) and the first part of the results. The rest of the proof is devoted to proving (\ref{eq_closeABalphabeta}).
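The lower bound on $\im m_H$ used in the contradiction step above is purely deterministic: evaluating the empirical Stieltjes transform at $E=\lambda_1$, the $k=1$ summand alone contributes $1/(N\eta)$. A minimal numerical sketch (assuming NumPy; the spectrum below is synthetic and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta = 200, 1e-3

# A synthetic point spectrum; the inequality below is deterministic
# and holds for any eigenvalue configuration.
lam = np.sort(rng.uniform(0.0, 2.0, size=N))[::-1]

def im_m(E):
    # Imaginary part of the empirical Stieltjes transform at E + i*eta.
    return np.mean(eta / ((lam - E) ** 2 + eta ** 2))

# Taking E = lambda_1, the k = 1 summand alone contributes
# (1/N) * eta/eta^2 = 1/(N*eta), so the supremum over E is at least that.
assert im_m(lam[0]) >= 1.0 / (N * eta)
```

Away from the spectrum the same quantity is of order $\eta$, which is what makes the bound incompatible with (\ref{eq_contradiction}).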
Similar to the previous discussion of the first part of the results, we will follow the proof strategy of \cite[Lemma 3.14]{BEC} and focus on establishing results analogous to (\ref{eq_controllargestdifference}) and (\ref{eq_ctronlclaimone}). More specifically, we prove the following results: \begin{equation} \absv{\gamma_1-\gamma_1^*} \prec N^{-2/3},\label{eq_controllargestdifference1} \end{equation} \begin{equation} \sup_{x \leq E_++\varsigma} \left| \mu_A \boxtimes \mu_B((-\infty,x])-\mu_{\alpha} \boxtimes \mu_\beta((-\infty,x]) \right| \leq CN^{-1+\epsilon}.\label{eq_ctronlclaimone1} \end{equation} \begin{proof} We first show (\ref{eq_ctronlclaimone1}). By the continuity of the free multiplicative convolution, i.e., Proposition 4.14 of \cite{MR1254116}, and (iv) of Assumption \ref{assu_esd}, we obtain that \begin{equation*} \mathrm{d}_L(\mu_A \boxtimes \mu_B, \mu_\alpha \boxtimes \mu_\beta) \leq \mathrm{d}_L(\mu_A, \mu_\alpha)+\mathrm{d}_L(\mu_B, \mu_\beta) \leq N^{-1+\epsilon}. \end{equation*} Then we can conclude that for some constant $C>0,$ \begin{equation*} \sup_{x \leq E_++\varsigma} \left| \mu_A \boxtimes \mu_B((-\infty,x])-\mu_{\alpha} \boxtimes \mu_\beta((-\infty,x]) \right| \leq CN^{-1+\epsilon}, \end{equation*} where we used the definition of the L{\'e}vy distance and (\ref{eq:priorisupp}). Then we show the counterpart of (\ref{eq_controllargestdifference}). By (\ref{eq:OmegaImBound}), the definition of the subordination functions (\ref{eq_suborsystem}) and the inversion formula for the Stieltjes transform, we obtain that \begin{equation*} \gamma_1^* \leq E_++N^{-1+\xi \epsilon}, \end{equation*} where we recall from (\ref{eq_defineoutsideset}) that $\xi>1$ is some fixed constant.
Together with (ii) of Proposition \ref{prop:stabN}, by definition, we obtain that \begin{align*} \frac{j}{N}=\int_{\gamma_j^*}^{\infty} \mathrm{d} \mu_A \boxtimes \mu_B(x)& =\int_{\gamma_j^*}^{E_++N^{-1+\xi \epsilon}} \mathrm{d} \mu_A \boxtimes \mu_B(x) \leq C \int_{\gamma_j^*}^{E_++N^{-1+\xi \epsilon}} \im m_{\mu_A \boxtimes \mu_B}(x+\mathrm{i} N^{-1+\xi \epsilon}) \mathrm{d} x \\ & \leq C \int_{\gamma_j^*}^{E_++N^{-1+\xi \epsilon}} \left[|E_+-x|+N^{-1+\xi \epsilon} \right]^{1/2} \mathrm{d} x \\ & \leq C|E_+-\gamma_j^*|^{3/2}+CN^{-1+\xi \epsilon}|E_+-\gamma_j^*|, \end{align*} where we used the inversion formula for the Stieltjes transform and the square root behavior of $\mu_A \boxtimes \mu_B$ from (ii) of Proposition \ref{prop:stabN}. This yields that for some constant $c>0,$ \begin{equation*} |\gamma_j^*-E_+| \geq c N^{-2/3}j^{2/3}. \end{equation*} Together with the fact that $E_+-\gamma_j \sim N^{-2/3} j^{2/3}$ by (\ref{eq_originalsquarerootbehavior}), we conclude that \begin{equation*} \gamma_j^* \leq E_+-cN^{-2/3+\epsilon}, \end{equation*} whenever $\gamma_j \leq E_+-cN^{-2/3+\epsilon}.$ Similarly, we can show that $\gamma_j^* \geq E_+-CN^{-2/3+\epsilon}$ whenever $\gamma_j \geq E_+-CN^{-2/3+\epsilon}.$ Based on the above arguments, we conclude that for some constant $C_1>0,$ \begin{equation*} |\gamma_j-\gamma_j^*| \leq E_+-\gamma_j+|E_+-\gamma_j^*| \leq C_1N^{-2/3+\epsilon}, \ \text{whenever} \ E_+-N^{-2/3+\epsilon} \leq \gamma_j \leq E_+. \end{equation*} This establishes (\ref{eq_controllargestdifference1}) and hence completes the proof. \end{proof} \end{proof} Then we prove Theorem \ref{thm_delocalization}. \begin{proof}[\bf Proof of Theorem \ref{thm_delocalization}] We focus our discussion on the left singular vectors. By spectral decomposition, we have that \begin{equation}\label{spectral_singularvector} \sum_{k=1}^N \frac{\eta |\mathbf{u}_k(i)|^2}{(\lambda_k-E)^2+\eta^2}=\im \widetilde{G}_{ii}(z), \ z=E+\mathrm{i} \eta.
\end{equation} For $1 \leq k_0 \leq cN,$ let $z_0=\lambda_{k_0}+\mathrm{i} \eta_0,$ where $\eta_0=N^{-1+\epsilon_0}$ and $\epsilon_0>\gamma$ is a small constant. Inserting $z=z_0$ into (\ref{spectral_singularvector}), by Theorem \ref{thm:main} and (i) of Proposition \ref{prop:stabN}, we immediately see that \begin{equation*} |\mathbf{u}_{k_0}(i)|^2 \prec \eta_0 \lesssim N^{-1}. \end{equation*} This concludes our proof. \end{proof} \section{Pointwise local laws}\label{sec:pointwiselocallaw} In this section, we provide several controls for the entries of the resolvents for each fixed spectral parameter $z$ under certain assumptions (cf. Assumption \ref{assu_ansz} and the assumption (\ref{eq:ansz_off})). The general and optimal case, i.e., Theorem \ref{thm:main}, can be proved by removing Assumption \ref{assu_ansz} and (\ref{eq:ansz_off}) and using a standard dynamic bootstrapping procedure in the next section. Moreover, we prove optimal estimates (up to an $N^{\epsilon}$ error for arbitrarily small $\epsilon>0$) for some functionals of the resolvents, which are the key ingredients for the bootstrapping procedure. Throughout the rest of the paper, we focus our discussion on the Haar unitary matrix and only point out the main technical differences from the Haar orthogonal case along the way. \subsection{Main tools}\label{sec:partialrandomness} In this subsection, we prepare some important identities for our later calculation. We first introduce some analytic functions, which are good approximations for the subordination functions and the diagonal entries of the Green functions. \begin{defn}[Approximate subordination functions]\label{defn_asf} For $z\in\CR$, we define \begin{equation*} \Omega_A^c \equiv \Omega_{A}^{c}(z) \mathrel{\mathop:}=\frac{z\tr \widetilde{B}G}{1+z\tr G}, \ \Omega_B^c \equiv \Omega_{B}^{c}(z) \mathrel{\mathop:}=\frac{z\tr AG}{1+z\tr G}=\frac{z\tr{\mathcal G}\widetilde{A}}{1+z\tr{\mathcal G}}.
\end{equation*} \end{defn} Next, we summarize some resolvent identities which will be used in the proof of the local laws. Recall (\ref{defn_eq_matrices}). Using the trivial relations $G(H-z)=I$ and $(\mathcal{H}-z)\mathcal{G}=I,$ it is not hard to see that \begin{equation} \label{eq:apxsubor} \begin{aligned} (HG)_{ii}-zG_{ii}=a_{i}(\widetilde{B}G)_{ii}-zG_{ii}=1,\\ ({\mathcal G}{\mathcal H})_{ii}-z{\mathcal G}_{ii}=b_{i}({\mathcal G}\widetilde{A})_{ii}-z{\mathcal G}_{ii}=1. \end{aligned} \end{equation} Moreover, using the fact \begin{equation}\label{eq_gggconnetction} G=A^{1/2}\widetilde{G}A^{-1/2}, {\mathcal G}=B^{-1/2}\widetilde{{\mathcal G}}B^{1/2}, \end{equation} we readily obtain that \begin{equation} \label{eq_ggrelationship} G^* G =A^{-1/2}\widetilde{G}^* A \widetilde{G}A^{-1/2}, \ {\mathcal G}^* {\mathcal G}= B^{1/2}\widetilde{{\mathcal G}}^* B^{-1}\widetilde{{\mathcal G}} B^{1/2}. \end{equation} Our proof makes use of the partial randomness decomposition for the Haar unitary matrix. This technique has been employed in studying the addition of random matrices in \cite{Bao-Erdos-Schnelli2016,BAO3,BAO2,BEC}. Let $U$ be an $N\times N$ Haar unitary random matrix. For all $i\in\llbra1,N\rrbra$, define ${\boldsymbol v}_{i}\mathrel{\mathop:}= U{\boldsymbol e}_{i}$ as the $i$-th column vector of $U$ and $\theta_{i}$ as the argument of ${\boldsymbol e}_{i}^*{\boldsymbol v}_{i}$. Following \cite{PM1}, we denote \begin{equation} \label{eq_prd} U^{\langle i\rangle}\mathrel{\mathop:}= -\e{-\mathrm{i}\theta_{i}}R_{i}U, \quad\text{where}\ R_{i}\mathrel{\mathop:}= I-{\boldsymbol r}_{i}{\boldsymbol r}_{i}^*, \quad {\boldsymbol r}_{i}\mathrel{\mathop:}= \sqrt{2}\frac{{\boldsymbol e}_{i}+\e{-\mathrm{i}\theta_{i}}{\boldsymbol v}_{i}}{\norm{{\boldsymbol e}_{i}+\e{-\mathrm{i}\theta_{i}}{\boldsymbol v}_{i}}_{2}}. \end{equation} Since $\|{\boldsymbol r}_i \|_2^2=2$, we have that $R_i$ is a Householder reflection.
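The algebraic properties of the decomposition \eqref{eq_prd} can be verified numerically on a small instance. A minimal sketch (assuming NumPy; the Haar sample is generated by a phase-corrected QR factorization of a complex Ginibre matrix, and the matrix size is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, i = 8, 0

# Sample a Haar unitary matrix via a phase-corrected QR factorization
# of a complex Ginibre matrix.
Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
Q, R = np.linalg.qr(Z)
U = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

e_i = np.zeros(N, dtype=complex)
e_i[i] = 1.0
v_i = U[:, i]                                   # v_i = U e_i
theta_i = np.angle(v_i[i])                      # argument of e_i^* v_i
w = e_i + np.exp(-1j * theta_i) * v_i
r_i = np.sqrt(2) * w / np.linalg.norm(w)
R_i = np.eye(N) - np.outer(r_i, r_i.conj())     # R_i = I - r_i r_i^*
U_i = -np.exp(-1j * theta_i) * R_i @ U          # U^{<i>}

assert np.isclose(np.vdot(r_i, r_i).real, 2.0)  # ||r_i||_2^2 = 2
assert np.allclose(R_i @ R_i, np.eye(N))        # R_i is an involution
assert np.allclose(U_i @ e_i, e_i)              # U^{<i>} e_i = e_i
assert np.allclose(e_i.conj() @ U_i, e_i)       # e_i^* U^{<i>} = e_i^*
assert np.allclose(U_i.conj().T @ U_i, np.eye(N))  # U^{<i>} stays unitary
```

The last three assertions confirm the block-diagonal structure of $U^{\langle i\rangle}$ exploited below.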
Consequently, we have that $R_i^*=R_i$ and $R_i^2=I.$ Furthermore, it is elementary to see that $U^{\langle i\rangle} \bm{e}_i=\bm{e}_i$ and $\bm{e}_i^* U^{\langle i\rangle}=\bm{e}_i^*.$ This implies that $U^{\langle i\rangle}$ is a unitary block-diagonal matrix; that is, $U^{\langle i\rangle}_{ii}=1$, the $(i,i)$-matrix minor of $U^{\langle i\rangle}$ is Haar distributed on ${\mathcal U}(N-1)$, and $\bm{v}_i$ is uniformly distributed on the unit sphere ${\mathbb S}^{N-1}_{\C}$. We next introduce some notations which will be frequently used throughout the proof. Denote \begin{equation} \label{eq_hatubhatu} \widetilde{B}^{\langle i\rangle}\mathrel{\mathop:}= U^{\langle i\rangle}B(U^{\langle i\rangle})^*. \end{equation} Since ${\boldsymbol v}_{i}$ is uniformly distributed on the unit sphere ${\mathbb S}^{N-1}_{\C}$, we can find a Gaussian vector $\widetilde{{\boldsymbol g}}_{i}\sim{\mathcal N}_{\C}(0,N^{-1}I_{N})$ such that \begin{equation} \label{eq_defnvb} {\boldsymbol v}_{i}= \frac{\widetilde{{\boldsymbol g}}_{i}}{\norm{\widetilde{{\boldsymbol g}}_{i}}}. \end{equation} Armed with the above Gaussian vector, we define \begin{equation} \label{eq_prd2} {\boldsymbol g}_{i}\mathrel{\mathop:}= \e{-\mathrm{i}\theta_{i}}\widetilde{{\boldsymbol g}}_{i}, \quad {\boldsymbol h}_{i}\mathrel{\mathop:}= \frac{{\boldsymbol g}_{i}}{\norm{{\boldsymbol g}_{i}}}=\e{-\mathrm{i}\theta_{i}}{\boldsymbol v}_{i}, \quad \ell_{i}\mathrel{\mathop:}= \frac{\sqrt{2}}{\norm{{\boldsymbol e}_{i}+{\boldsymbol h}_{i}}_2}, \quad \mathring{{\boldsymbol g}}_{i}\mathrel{\mathop:}= {\boldsymbol g}_{i}-g_{ii}{\boldsymbol e}_{i}, \ \mathring{{\boldsymbol h}}_{i}\mathrel{\mathop:}= {\boldsymbol h}_{i}-h_{ii}{\boldsymbol e}_{i}. \end{equation} Recall (\ref{eq_prd}). We have that $\bm{r}_i=\ell_i(\bm{e}_i+\bm{h}_i)$. In addition, we have \begin{equation} \label{eq_prd1} R_{i}{\boldsymbol e}_{i}=-{\boldsymbol h}_{i} \AND R_{i}{\boldsymbol h}_{i}=-{\boldsymbol e}_{i}.
\end{equation} This implies that \begin{equation} \label{eq_hbwtr} \begin{aligned} {\boldsymbol h}_{i}^* \widetilde{B}^{\langle i\rangle}R_{i}=-{\boldsymbol e}_{i}^* \widetilde{B}, \quad {\boldsymbol e}_{i}^*\widetilde{B}^{\langle i\rangle}R_{i}= -{\boldsymbol h}_{i}^* \widetilde{B}=-b_{i}{\boldsymbol h}_{i}^* . \end{aligned} \end{equation} In this paper, we focus on the discussion of the Haar unitary matrix. For a Haar orthogonal random matrix on $O(N)$, the only difference lies in the partial randomness decomposition. In fact, we can decompose $U$ in the same way as in (\ref{eq_prd}), except that the factor $\e{-\mathrm{i}\theta_{i}}$ in \eqref{eq_prd} should be replaced by $\mathrm{sgn}({\boldsymbol e}_{i}^* {\boldsymbol v}_{i})$. We refer the readers to \cite[Appendix A]{BAO2} for more details. In the rest of this section, we introduce some useful decompositions. Without loss of generality, we assume that both $A$ and $B$ are normalized such that $\tr A =\tr B=1,$ where we recall the definition (\ref{eq_defntrace}). Recall that we always use $z=E+\mathrm{i} \eta.$ Throughout the paper, we use the following control parameters \begin{equation} \label{eq_controlparameter} \Psi\equiv\Psi(z)\mathrel{\mathop:}= \sqrt{\frac{1}{N\eta}},\quad \Pi\equiv\Pi(z)\mathrel{\mathop:}= \sqrt{\frac{\im m_{\mu_A \boxtimes \mu_B}(z)}{N\eta}},\ \Pi_{i}\equiv\Pi_{i}(z)\mathrel{\mathop:}= \sqrt{\frac{\im G_{ii}(z)+\im {\mathcal G}_{ii}(z)}{N\eta}}. \end{equation} Next, we show some important decompositions regarding the approximate subordination functions.
By (\ref{eq:apxsubor}) and Definition \ref{defn_asf}, we observe that \begin{align}\label{eq:Lambda} (zG_{ii}+1)-\frac{a_{i}}{a_{i}-\Omega_{B}^{c}} &=a_{i}(\widetilde{B}G)_{ii}-\frac{a_{i}}{a_{i}-\Omega_{B}^{c}} \nonumber \\ &=\frac{a_{i}}{(1+z\tr G)(a_{i}-\Omega_{B}^{c})} \left((z\tr G+1)(a_{i}-\Omega_{B}^{c})(\widetilde{B}G)_{ii}-(z\tr G+1)\right) \nonumber \\ &=\frac{a_{i}}{(1+z\tr G)(a_{i}-\Omega_{B}^{c})}(a_{i}(z\tr G+1)(\widetilde{B}G)_{ii}-z\tr(AG)(\widetilde{B}G)_{ii}-(z\tr G+1)) \nonumber \\ &=\frac{{a_{i}}z}{(1+z\tr G)(a_{i}-\Omega_{B}^{c})}\left(G_{ii}\tr(A\widetilde{B}G)-\tr (GA)(\widetilde{B}G)_{ii}\right). \end{align} Consequently, by Proposition \ref{prop:stabN}, to prove (\ref{eq:main}), it suffices to show that $\Omega_B^c$ is close to $\Omega_B$ and to control the following quantity \begin{equation} \label{eq_defnq} Q_{i}\mathrel{\mathop:}= G_{ii}\tr(A\widetilde{B}G)-\tr(GA)(\widetilde{B}G)_{ii}. \end{equation} In what follows, we present a detailed decomposition of $Q_i.$ In particular, we discuss the decomposition of $(\widetilde{B} G)_{ii}$ using the partial randomness decomposition. Our goal is to explore the independence structure. Recall (\ref{eq_hatubhatu}) and (\ref{eq_prd2}).
For convenience, we introduce the notations \begin{equation} \label{eq_shorhandnotation} S_{i}\mathrel{\mathop:}= {\boldsymbol h}_{i}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i},\quad \mathring{S}_{i}\mathrel{\mathop:}= \mathring{{\boldsymbol h}}_{i}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i},\quad T_{i}\mathrel{\mathop:}= {\boldsymbol h}_{i}^* G{\boldsymbol e}_{i}=\e{\mathrm{i}\theta_{i}}{\boldsymbol e}_{i}^* U^* G{\boldsymbol e}_{i},\AND\mathring{T}_{i}\mathrel{\mathop:}= \mathring{{\boldsymbol h}}_{i}^* G{\boldsymbol e}_{i}, \end{equation} where we used $\overline{\e{-\mathrm{i}\theta_{i}}}=\e{\mathrm{i}\theta_{i}}.$ By the construction of $U^{\langle i\rangle}$ in (\ref{eq_prd}) and (\ref{eq_prd1}), we find that \begin{equation*} (\widetilde{B}G)_{ii}=-{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}. \end{equation*} Using the definition of $R_i$ in (\ref{eq_prd}), we can further write \begin{equation*} (\widetilde{B}G)_{ii}=-{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i} +\ell_{i}^{2}{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G{\boldsymbol e}_{i}, \end{equation*} where we recall that $\bm{r}_i=\ell_i(\bm{e}_i+\bm{h}_i).$ With the notations in (\ref{eq_shorhandnotation}), we can further write that \begin{equation*} (\widetilde{B}G)_{ii}=-S_{i}+\ell_{i}^{2}({\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{i}+{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i})(G_{ii}+T_{i}).
\end{equation*} Further, since $R_i$ is a reflection satisfying (\ref{eq_prd1}), we have that \begin{align}\label{eq:BGii-S} (\widetilde{B}G)_{ii} =&-S_{i}+\ell_{i}^{2}(-b_{i}{\boldsymbol h}_{i}^* R_{i}{\boldsymbol h}_{i}+{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i})(G_{ii}+T_{i}) \nonumber \\ =&-S_{i}+\ell_{i}^{2}(b_{i}h_{ii}+{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i})(G_{ii}+T_{i}). \end{align} Note that $\widetilde{B}^{\langle i\rangle}$ is independent of ${\boldsymbol h}_i.$ As we will see later in the proof (e.g., (\ref{eq:BGii})), the discussion boils down to controlling $S_{i}$ and $T_{i}$. The main idea is to employ the technique of integration by parts with respect to the coordinates of ${\boldsymbol h}_i.$ Throughout the calculation, we will frequently use the following terms \begin{equation} \label{eq_pk} \begin{aligned} &P_{i}\mathrel{\mathop:}= Q_i+(G_{ii}+T_{i})\Upsilon, \\ &K_{i}\mathrel{\mathop:}= T_{i}+\tr(GA)(b_{i}T_{i}+(\widetilde{B}G)_{ii})-\tr(GA\widetilde{B})(G_{ii}+T_{i}), \end{aligned} \end{equation} where \begin{equation} \label{eq_defnupsilon} \Upsilon\mathrel{\mathop:}= (\tr(GA\widetilde{B}))^{2}-\tr(GA)\tr(\widetilde{B}GA\widetilde{B})-\tr(GA\widetilde{B})+\tr(GA). \end{equation} Using the facts $GA\widetilde{B}=A\widetilde{B}G=zG+I$ and $\tr B=\tr \widetilde{B}=1$, we have \begin{align} \Upsilon&=\tr(A\widetilde{B}G)(\tr(zG+I)-1)-\tr(GA)(\tr(\widetilde{B}(zG+I))-\tr{\widetilde{B}}) \nonumber \\ &=z\left(\tr (G)\tr(A\widetilde{B}G)-\tr(GA)\tr(\widetilde{B}G)\right) \nonumber \\ & =\frac{z}{N}\sum_{i}Q_{i}.
\label{eq_averageqdefinitionupsilon} \end{align} Moreover, we have that \begin{align}\label{eq:approx_subor} \Omega_{A}^{c}\Omega_{B}^{c}-zM_{\mu_{H}}(z)&=\frac{z^{2}}{(1+zm_{H}(z))^{2}}(\tr(GA)\tr(\widetilde{B}G)-\tr(G)\tr(A\widetilde{B}G)) \nonumber \\ & =-\frac{z}{(1+zm_{H}(z))^{2}}\Upsilon(z), \end{align} where in the first equality we used $A \widetilde{B}G=zG+I.$ The above discussion focuses on the diagonal entries of the resolvents. Similar arguments hold for the off-diagonal terms by considering the decomposition of $(\widetilde{B}G)_{ij}$ using a strategy similar to (\ref{eq:BGii-S}). In what follows, we introduce the counterparts of the diagonal quantities. Corresponding to (\ref{eq:Lambda}), we denote \begin{equation} \label{eq_qij} Q_{ij}\mathrel{\mathop:}=\tr(GA)(\widetilde{B}G)_{ij}-G_{ij}\tr(A\widetilde{B}G). \end{equation} Moreover, using the fact $G_{ij}=a_{i}(\widetilde{B}G)_{ij}/z,$ which follows from $zG+I=A\widetilde{B}G,$ we see that \begin{equation} \label{eq_qijdecomposution} Q_{ij}=\left(\frac{z}{a_{i}}\tr(GA)-\tr(A\widetilde{B}G)\right)G_{ij}=\frac{\tr (GA\widetilde{B})}{a_{i}}(\Omega_{B}^{c}-a_{i})G_{ij}. \end{equation} In this sense, it suffices to bound $Q_{ij}$ in order to prove \eqref{eq:mainoff2}. Similar to the discussion for the diagonal entries, we will need the following quantities \begin{align}\label{eq_quantitydefinitionoffdiagonal} &S_{ij}\mathrel{\mathop:}={\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{j},& &\mathring{S}_{ij}\mathrel{\mathop:}=\mathring{{\boldsymbol h}}_{i}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{j},\\ &T_{ij}\mathrel{\mathop:}= {\boldsymbol h}_{i}^* G{\boldsymbol e}_{j},& &\mathring{T}_{ij}\mathrel{\mathop:}= \mathring{{\boldsymbol h}}_{i}^* G{\boldsymbol e}_{j},\nonumber\\ &P_{ij}\mathrel{\mathop:}= Q_{ij}+(G_{ij}+T_{ij})\Upsilon, & &K_{ij}\mathrel{\mathop:}= T_{ij}+\tr(GA)(b_{i}T_{ij}+(\widetilde{B}G)_{ij})-\tr(GA\widetilde{B})(G_{ij}+T_{ij}).
\nonumber \end{align} \subsection{Pointwise local laws} In this subsection, we prove (\ref{eq:main}) and (\ref{eq_off1}) of Theorem \ref{thm:main} for each fixed spectral parameter $z.$ Since the arguments are similar, we focus our discussion on the diagonal entries (\ref{eq:main}) and only briefly discuss the results for the off-diagonal entries (\ref{eq_off1}). We start with introducing some notations. Denote \begin{equation} \label{eq_moregeneralerrorbound} \Lambda_{di} \mathrel{\mathop:}= \Absv{zG_{ii}+1-\frac{a_{i}}{a_{i}-\Omega_{B}}},\quad \Lambda_{d}\mathrel{\mathop:}= \max_{i}\Lambda_{di},\quad \Lambda_{T}\mathrel{\mathop:}= \max_{i}\absv{T_{i}}. \end{equation} Similarly, we can define $\Lambda_{di}^{c}$ and $\Lambda_{d}^{c}$ by replacing $\Omega_{B}$ with its approximation $\Omega_{B}^{c}$. Moreover, we define \begin{equation*} \widetilde{\Lambda}_{di}\mathrel{\mathop:}= \Absv{z{\mathcal G}_{ii}+1-\frac{b_{i}}{b_{i}-\Omega_{A}}},\quad \widetilde{\Lambda}_{d}\mathrel{\mathop:}= \max_{i}\widetilde{\Lambda}_{di},\quad \widetilde{\Lambda}_{T}\mathrel{\mathop:}= \max_{i} \absv{{\boldsymbol e}_{i}^* U{\mathcal G} {\boldsymbol e}_{i}}. \end{equation*} We first introduce the following assumption. \begin{assu}\label{assu_ansz} Recall that $\gamma>0$ is a small constant defined in (\ref{eq_eltalgamma}). Fix $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ as defined in (\ref{eq_fundementalset}). We suppose that the following bounds hold: \begin{equation} \label{eq_locallaweqbound} \Lambda_{d}(z)\prec N^{-\gamma/4},\quad \widetilde{\Lambda}_{d}(z)\prec N^{-\gamma/4},\quad \Lambda_{T}\prec 1,\quad \widetilde{\Lambda}_{T}\prec 1. \end{equation} \end{assu} In Section \ref{sec:pointwiselocallaw}, for ease of statement, we prove all the results under Assumption \ref{assu_ansz}. As we will see later in Section \ref{subsec:weaklocallaw}, Assumption \ref{assu_ansz} can be easily removed by establishing the weak local laws; see Proposition \ref{prop:weaklaw} for more details.
The main result of this section is the following proposition. \begin{prop}\label{prop:entrysubor} Suppose that the assumptions of Theorem \ref{thm:main} and Assumption \ref{assu_ansz} hold. Fix $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. Recall (\ref{eq_pk}). Then for all $i\in\llbra1, N\rrbra,$ we have that \begin{equation} \label{eq:PKbound} \absv{P_{i}(z)}\prec\Psi(z),\quad \absv{K_{i}(z)}\prec \Psi(z). \end{equation} Furthermore, we have \begin{equation} \label{eq_lambdaother} \Lambda_{d}^c(z)\prec\Psi(z),\quad \Lambda_{T}\prec\Psi(z),\quad \widetilde{\Lambda}_{d}^c\prec\Psi(z),\quad \widetilde{\Lambda}_{T}\prec\Psi(z),\quad \Upsilon\prec\Psi(z). \end{equation} \end{prop} We now prove Proposition \ref{prop:entrysubor}. As we will see later, the proof of (\ref{eq_lambdaother}) follows from (\ref{eq:PKbound}) and (\ref{eq_locallaweqbound}), whereas (\ref{eq:PKbound}) can be proved using recursive estimates. \begin{proof}[\bf Proof of Proposition \ref{prop:entrysubor}] We first prove (\ref{eq_lambdaother}) assuming that (\ref{eq:PKbound}) holds. We claim that \begin{equation}\label{eq_claimtibound} T_i \prec N^{-\gamma/4}, \end{equation} where $\gamma>0$ is the same parameter as in (\ref{eq_locallaweqbound}) (also in (\ref{eq_eltalgamma})). To see (\ref{eq_claimtibound}), using (\ref{eq:PKbound}) and the definition of $K_i$ in (\ref{eq_pk}), we have \begin{align}\label{eq:T1} T_{i}(1+b_{i}\tr(GA)-\tr(GA\widetilde{B}))&=\tr (GA\widetilde{B})G_{ii}-\tr(GA)(\widetilde{B}G)_{ii} +K_i \nonumber \\ & =\tr (GA\widetilde{B})G_{ii}-\tr(GA)(\widetilde{B}G)_{ii}+\rO_{\prec}(\Psi). \end{align} By (\ref{eq:apxsubor}), we have that \begin{equation} \label{eq_gbgindeti} (\widetilde{B}G)_{ii}=\frac{zG_{ii}+1}{a_{i}}, \ G_{ii}= \frac{1}{z}\left( a_i (\widetilde{B}G)_{ii}-1 \right).
\end{equation} Based on (\ref{eq_gbgindeti}), on one hand, by (\ref{eq_locallaweqbound}), we find that \begin{equation} \label{eq_wtbgiipointwise} (\widetilde{B}G)_{ii}=\frac{1}{a_{i}-\Omega_{B}}+\rO_{\prec}(N^{-\gamma/4}). \end{equation} On the other hand, using the fact that $\tr{(GA \widetilde{B})}=z m_H(z)+1$ and a relation similar to (\ref{eq_multiidentity}), by (\ref{eq_locallaweqbound}), we conclude that \begin{equation}\label{eq_trgawtbpointwise} \tr(GA\widetilde{B})=\int\frac{x}{x-\Omega_{B}}\mathrm{d}\mu_{A}(x)+\rO_{\prec}(N^{-\gamma/4})=zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1+\rO_{\prec}(N^{-\gamma/4}), \end{equation} and \begin{equation}\label{eq_trgapointwise} \tr(GA)=\frac{1}{N}\sum_{i}a_{i}G_{ii}=\frac{\Omega_{B}}{z}\frac{1}{N}\sum_{i}\frac{a_{i}}{a_{i}-\Omega_{B}}+\rO_{\prec}(N^{-\gamma/4})=\frac{\Omega_{B}}{z}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)+\rO_{\prec}(N^{-\gamma/4}). \end{equation} Combining (\ref{eq:T1}), (\ref{eq_wtbgiipointwise}), (\ref{eq_trgawtbpointwise}) and (\ref{eq_trgapointwise}), by (\ref{eq_locallaweqbound}), we have that \begin{equation} \label{eq:T2} T_{i}\left(1+(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\left(\frac{b_{i}\Omega_{B}}{z}-1\right)\right)=\rO_{\prec}(\Psi+N^{-\gamma/4}). \end{equation} Moreover, invoking (\ref{eq_mtrasindenity}) and (\ref{eq_suborsystem}), we see that \begin{align}\label{eq_t2simplify} 1+(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\left(\frac{b_{i}\Omega_{B}}{z}-1\right) &=(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\left(\frac{b_{i}\Omega_{B}}{z}-M_{\mu_{A}\boxtimes\mu_{B}}(z)\right) \nonumber \\ &=(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\frac{\Omega_{B}}{z}(b_{i}-\Omega_{A}). \end{align} By (\ref{eq_t2simplify}), a relation similar to (\ref{eq_multiidentity}) and (i) of Proposition \ref{prop:stabN}, we have proved the claim (\ref{eq_claimtibound}) using (\ref{eq:T2}). Armed with (\ref{eq_claimtibound}) and (\ref{eq:PKbound}), we are ready to prove (\ref{eq_lambdaother}). 
First, using the definitions in (\ref{eq_pk}) and (\ref{eq:PKbound}), we find that \begin{equation} \label{eq_gammastepone} \frac{1}{N}\sum_{i}a_{i}P_{i} =\Upsilon\frac{1}{N}\sum_{i}a_{i}(G_{ii}+T_{i})\prec\Psi, \end{equation} where we used the fact that $\{a_i\}$ are bounded. By (\ref{eq_gammastepone}), (\ref{eq_trgapointwise}) and (i) of Proposition \ref{prop:stabN}, we have proved that \begin{equation}\label{eq_upsilonbound} \Upsilon \prec \Psi. \end{equation} Second, using the definition of $P_i$ in (\ref{eq_pk}), the expansion (\ref{eq:Lambda}) and (\ref{eq_upsilonbound}), we have proved that $\Lambda_{d}^c \prec \Psi.$ Third, by (\ref{eq:T1}) and a discussion similar to (\ref{eq_claimtibound}) with the bound $\Lambda_{d}^c \prec \Psi,$ it is easy to see that $\Lambda_T \prec \Psi(z).$ Finally, the bounds on $\widetilde{\Lambda}_d^c$ and $\widetilde{\Lambda}_T$ follow from an argument similar to (\ref{eq:Lambda}) and the relationship (\ref{eq:apxsubor}). This completes the proof of (\ref{eq_lambdaother}). It remains to prove \eqref{eq:PKbound}. Indeed, it suffices to bound the moments of $P_{i}$ and $K_{i}$. More specifically, by Markov's inequality, it suffices to prove that for every integer $p\geq 2,$ the following bounds hold: \begin{equation} \label{eq:PKmomentbound} \expct{\absv{P_{i}}^{2p}}\prec\Psi^{2p}\AND \expct{\absv{K_{i}}^{2p}}\prec\Psi^{2p}. \end{equation} Denote \begin{equation} \label{eq_definitionpq} {\mathfrak X}_{i}^{(p,q)}\mathrel{\mathop:}= P_{i}^{p}\overline{P^q_i},\AND {\mathfrak Y}_{i}^{(p,q)}\mathrel{\mathop:}= K_{i}^{p}\overline{K^q_i}. \end{equation} We now state the recursive moment estimates for ${\mathfrak X}_{i}^{(p,q)}$ and ${\mathfrak Y}_{i}^{(p,q)}$.
\begin{lem}\label{lem:PKrecmoment} For any fixed integer $p\geq 2$ and $i\in\llbra 1,N\rrbra$, we have \begin{align} \expct{{\mathfrak X}_{i}^{(p,p)}} \leq \expct{\rO_{\prec}(\Psi){\mathfrak X}_{i}^{(p-1,p)}} +\expct{\rO_{\prec}(\Psi^{2}){\mathfrak X}_{i}^{(p-2,p)}} +\expct{\rO_{\prec}(\Psi^{2}){\mathfrak X}_{i}^{(p-1,p-1)}}, \label{eq:PKrecmoment} \\ \expct{{\mathfrak Y}_{i}^{(p,p)}}\leq \expct{\rO_{\prec}(\Psi){\mathfrak Y}_{i}^{(p-1,p)}}+\expct{\rO_{\prec}(\Psi^{2}){\mathfrak Y}_{i}^{(p-2,p)}}+\expct{\rO_{\prec}(\Psi^{2}){\mathfrak Y}_{i}^{(p-1,p-1)}}. \label{eq:etakrecursive} \end{align} \end{lem} We will give the proof of Lemma \ref{lem:PKrecmoment} at the end of this subsection. To conclude the proof of Proposition \ref{prop:entrysubor}, we explain how Lemma \ref{lem:PKrecmoment} implies \eqref{eq:PKmomentbound}. We first recall Young's inequality: for any constants $\alpha, \beta>0,$ we have $$\alpha \beta \leq \frac{\alpha^{m}}{m}+\frac{\beta^{n}}{n}, \ \frac{1}{m}+\frac{1}{n}=1, \ m,n>1 \ \text{are real numbers}.$$ For $k=1,2$, any arbitrarily small constant $\epsilon>0$ and any random variable ${\mathfrak N}=\rO_{\prec}(\Psi^{k})$ satisfying $\expct{\absv{{\mathfrak N}}^{q}}\prec \Psi^{qk}$, we have that \begin{align} \expct{\absv{{\mathfrak N} P_{i}^{2p-k}}}=\expct{\absv{N^{\epsilon}{\mathfrak N}}\absv{N^{-\frac{\epsilon}{2p-k}}P_{i}}^{2p-k}} \leq &\frac{kN^{\frac{2p\epsilon}{k}}}{2p}\expct{\absv{{\mathfrak N}}^{\frac{2p}{k}}}+\frac{(2p-k)N^{-\frac{2p\epsilon}{(2p-k)^2}}}{2p}\expct{\absv{P_{i}}^{(2p-k)\frac{2p}{2p-k}}} \nonumber \\ \leq & \frac{kN^{(\frac{2p}{k}+1)\epsilon}}{2p}\Psi^{2p}+\frac{(2p-k)N^{-\frac{2p\epsilon}{(2p-k)^2}}}{2p}\expct{\absv{P_{i}}^{2p}}, \label{eq_arbitraydiscussion} \end{align} where in the first inequality we used Young's inequality with $m=2p/k$ and $n=2p/(2p-k)$, and in the second inequality we used $\expct{\absv{{\mathfrak N}}^{q}}\prec \Psi^{qk}.$ Together with \eqref{eq:PKrecmoment}, this yields that \begin{equation*}
\expct{\absv{P_{i}}^{2p}}\leq \frac{3}{2p}N^{(2p+1)\epsilon}\Psi^{2p}+\frac{3(2p-1)}{2p}N^{-\frac{2p\epsilon}{(2p-1)^2}}\expct{\absv{P_{i}}^{2p}}. \end{equation*} Since $\epsilon>0$ is arbitrarily small, we can conclude the first part of \eqref{eq:PKmomentbound}. The second part can be proved similarly and we omit the details here. \end{proof} The rest of the section is devoted to the proof of Lemma \ref{lem:PKrecmoment}. Throughout the proof, we will need some large deviation estimates as our technical inputs. These estimates and their proofs can be found in Lemmas \ref{lem:LDE}, \ref{lem:DeltaG} and \ref{lem:recmomerror} of Appendix \ref{append:A}. \begin{proof}[\bf Proof of Lemma \ref{lem:PKrecmoment}] We start with the proof of (\ref{eq:PKrecmoment}). Since $h_{ii}{\boldsymbol e}_{i}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}=b_{i}h_{ii}G_{ii}$, we can rewrite \eqref{eq:BGii-S} as \begin{equation} \label{eq:BGii} (\widetilde{B}G)_{ii}=-S_{i}+\ell_{i}^{2}(b_{i}h_{ii}+{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i})(G_{ii}+T_{i})=-\mathring{S}_{i}+G_{ii}+T_{i}+\mathsf{e}_{i1}, \end{equation} where we denoted \begin{equation} \label{eq_epsilon1} \mathsf{e}_{i1} \mathrel{\mathop:}= (\ell_{i}^{2}-1)b_{i}h_{ii}G_{ii}+(\ell_{i}^{2}{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i}-1)(G_{ii}+T_{i})+\ell_{i}^{2}b_{i}h_{ii}T_{i}. \end{equation} By Lemma \ref{lem:LDE}, recalling that $\widetilde{{\boldsymbol g}}_{i} \sim \mathcal{N}_{\mathbb{C}}(0, N^{-1}I_N)$, we see that \begin{equation}\label{eq_hiicontrol} h_{ii}=\norm{\widetilde{{\boldsymbol g}}_{i}}^{-1}\absv{{\boldsymbol e}_{i}^* \widetilde{{\boldsymbol g}}_{i}}\prec N^{-1/2}. \end{equation} Consequently, using the definitions in (\ref{eq_prd2}), we obtain that \begin{equation} \label{eq_licontrol} \ell_{i}^{2}=\frac{2}{\norm{{\boldsymbol e}_{i}+{\boldsymbol h}_{i}}^{2}}=\frac{1}{1+{\boldsymbol e}_{i}^* {\boldsymbol h}_{i}}=1+\rO_{\prec}(N^{-1/2}).
\end{equation} Moreover, by (\ref{eq_prd}), (\ref{eq_prd1}), (\ref{eq_defnvb}) and Lemma \ref{lem:LDE}, we have \begin{equation} \label{eq_controlhibhi} {\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i}={\boldsymbol h}_{i}^* R_{i}\widetilde{B}R_{i}{\boldsymbol h}_{i}={\boldsymbol e}_{i}^*\widetilde{B}{\boldsymbol e}_{i}=\frac{1}{\norm{\widetilde{{\boldsymbol g}}_{i}}^{2}}\widetilde{{\boldsymbol g}}_{i}^* B\widetilde{{\boldsymbol g}}_{i}=\tr B+\rO_{\prec}(N^{-1/2})=1+\rO_{\prec}(N^{-1/2}), \end{equation} where we recall that $B$ is normalized such that $\tr B=1.$ Using the definition (\ref{eq_epsilon1}), by (\ref{eq_hiicontrol}), (\ref{eq_licontrol}), (\ref{eq_controlhibhi}) and (\ref{eq_locallaweqbound}), we conclude that \begin{equation}\label{eq_boundepsilon1} \absv{\mathsf{e}_{i1}}\prec N^{-1/2}. \end{equation} Therefore, by (\ref{eq_pk}), \eqref{eq:BGii} and (\ref{eq_boundepsilon1}), we have \begin{align}\label{eq:recmomP} \expct{{\mathfrak X}_{i}^{(p,p)}} =&\expct{\left(G_{ii}\tr(A\widetilde{B}G)+\tr(GA)\mathring{S}_{i}+(G_{ii}+T_{i})(\Upsilon-\tr(GA))\right){\mathfrak X}_{i}^{(p-1,p)}} \\ &-\expct{ \mathsf{e}_{i1} \tr(GA){\mathfrak X}_{i}^{(p-1,p)}}. \nonumber \end{align} Next, we control all the terms on the RHS of (\ref{eq:recmomP}). We mainly focus on the term involving $\tr(GA)\mathring{S}_{i}$. As we will see, by exploring the hidden terms using integration by parts, the term involving $\tr(GA)\mathring{S}_{i}$ will generate several terms which cancel the remaining terms on the RHS of (\ref{eq:recmomP}).
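The concentration estimates (\ref{eq_hiicontrol}), (\ref{eq_licontrol}) and (\ref{eq_controlhibhi}) admit a quick Monte Carlo illustration. A minimal sketch (assuming NumPy; $B$ is taken diagonal with normalized trace one, and the constants in the thresholds are illustrative rather than optimal):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4000

# g~_i ~ N_C(0, N^{-1} I_N); then v_i = g~_i/||g~_i|| is uniform on the
# sphere and h_i = e^{-i theta_i} v_i has a real nonnegative i-th entry.
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2 * N)
h = np.exp(-1j * np.angle(g[0])) * g / np.linalg.norm(g)

# h_ii is of order N^{-1/2} (here i = 0) ...
assert abs(h[0]) < 10 / np.sqrt(N)

# ... hence ell_i^2 = 1/(1 + e_i^* h_i) = 1 + O(N^{-1/2}) ...
ell2 = 1.0 / (1.0 + h[0].real)
assert abs(ell2 - 1.0) < 10 / np.sqrt(N)

# ... and the quadratic form h_i^* B h_i concentrates at tr B = 1,
# with diagonal B normalized so that (1/N) sum_k b_k = 1.
b = rng.uniform(0.5, 1.5, size=N)
b *= N / b.sum()
quad = np.vdot(h, b * h).real
assert abs(quad - 1.0) < 10 / np.sqrt(N)
```

These are exactly the estimates that make $\mathsf{e}_{i1}$ a negligible error in (\ref{eq:recmomP}).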
Note that \begin{equation} \label{eq_rewritemrs} \mathring{S}_{i}=\mathring{{\boldsymbol h}}_{i}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i} =\sum_{k}\mathring{{\boldsymbol h}}_{i}^* {\boldsymbol e}_{k}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i} =\sum_{k}^{(i)}\overline{g}_{ik}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}, \end{equation} where we use the shorthand notation \begin{equation*} \sum_{k}^{(i)}=\sum_{\substack{k=1 \\ k \neq i}}^N. \end{equation*} Our calculation relies on the following integration by parts formula for a centered complex Gaussian variable $g \sim \mathcal{N}_{\mathbb{C}}(0,\sigma^2)$ (see eq. (5.33) of \cite{BEC}) \begin{equation}\label{eq_formulaintergrationbyparts} \int_{\mathbb{C}} \bar{g} f(g, \bar{g}) e^{-\frac{|g|^2}{\sigma^2}} \mathrm{d}^2 g= \sigma^2 \int_{\mathbb{C}} \partial_g f(g, \bar{g}) e^{-\frac{|g|^2}{\sigma^2}} \mathrm{d}^2 g, \end{equation} where $f: \mathbb{C}^2 \rightarrow \mathbb{C}$ is a differentiable function.
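As a numerical aside (purely illustrative and not part of the argument; the test function $f(g,\bar{g})=g^{2}\bar{g}$, the variance and the sample size are our own choices), the formula \eqref{eq_formulaintergrationbyparts} can be checked by Monte Carlo, using the Wirtinger derivative $\partial_{g}(g^{2}\bar{g})=2g\bar{g}$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.5       # variance E|g|^2 = sigma^2
n = 400_000        # Monte Carlo sample size

# g ~ N_C(0, sigma^2): independent real and imaginary parts of variance sigma^2 / 2
g = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# test function f(g, gbar) = g^2 * gbar, with Wirtinger derivative d_g f = 2 g gbar
lhs = np.mean(np.conj(g) * g**2 * np.conj(g))  # E[ gbar * f ]
rhs = sigma2 * np.mean(2 * g * np.conj(g))     # sigma^2 * E[ d_g f ]
print(lhs.real, rhs.real)
```

Both averages approximate $2\sigma^{4}$, the common value of the two sides of the identity for this choice of $f$.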
By (\ref{eq_rewritemrs}) and (\ref{eq_formulaintergrationbyparts}), we have \begin{align} &\expct{\mathring{S}_{i}\tr (GA){\mathfrak X}_{i}^{(p-1,p)}} =\sum_{k}^{(i)}\expct{\overline{g}_{ik}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA){\mathfrak X}_{i}^{(p-1,p)}} \nonumber \\ &=\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA){\mathfrak X}_{i}^{(p-1,p)}} +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}\frac{\partial ({\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i})}{\partial g_{ik}}\tr (GA){\mathfrak X}_{i}^{(p-1,p)}} \nonumber \\ &+\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial\tr(GA)}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-1,p)}} +\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA)\frac{\partial P_{i}}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-2,p)}} \nonumber \\ &+\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA)\frac{\partial \overline{P}_{i}}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-1,p-1)}}, \label{eq_cumulantfirst} \end{align} where we recall that ${\mathfrak X}_{i}^{(p-1,p)}=P_{i}^{p-1}\overline{P}_{i}^{p}$ as defined in (\ref{eq_definitionpq}). To control the terms on the RHS of (\ref{eq_cumulantfirst}), it suffices to investigate the coefficients involving the partial derivatives with respect to $g_{ik}.$ More specifically, the second term on the RHS of (\ref{eq_cumulantfirst}) will provide some hidden terms for cancellation.
Further, since $\widetilde{B}^{\langle i\rangle}$ is independent of $\bm{v}_i,$ we have that \begin{equation*} \frac{\partial(\bm{e}_k^* \widetilde{B}^{\langle i\rangle} G \bm{e}_i)}{\partial g_{ik}}=\bm{e}_k^* \widetilde{B}^{\langle i\rangle} \frac{\partial G}{\partial g_{ik}} \bm{e}_i. \end{equation*} We start with the analysis of the Householder reflection defined in (\ref{eq_prd}). Recall $\bm{r}_i=\ell_i(\bm{e}_i+\bm{h}_i)$ and $\ell_i$ defined in (\ref{eq_prd2}). By (\ref{eq_householdderivative}), (\ref{eq_hiderivative}), (\ref{eq_histartderivative}) and (\ref{eq_partialli2}), we have that \begin{multline*} \frac{\partial R_{i}}{\partial g_{ik}}=-\ell_{i}^{4}\norm{{\boldsymbol g}_{i}}^{-1}\overline{h}_{ik}h_{ii}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* \\ -\ell_{i}^{2}\norm{{\boldsymbol g}_{i}}^{-1}\left({\boldsymbol e}_{k}{\boldsymbol e}_{i}^*-\overline{h}_{ik}({\boldsymbol h}_{i}{\boldsymbol e}_{i}^*+{\boldsymbol e}_{i}{\boldsymbol h}_{i}^*) +{\boldsymbol e}_{k}{\boldsymbol h}_{i}^*-\overline{h}_{ik}{\boldsymbol h}_{i}{\boldsymbol h}_{i}^* -\overline{h}_{ik}{\boldsymbol h}_{i}{\boldsymbol h}_{i}^*\right). \end{multline*} We can further rewrite the above equation as \begin{equation} \label{eq_househouldfinalexpression} \frac{\partial R_{i}}{\partial g_{ik}}=-\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}({\boldsymbol e}_{i}^* +{\boldsymbol h}_{i}^*)+\Delta_{R}(i,k), \end{equation} where we defined \begin{equation} \label{eq_deltarg} \Delta_{R}(i,k) \mathrel{\mathop:}= -\frac{\ell_{i}^{4}}{\norm{{\boldsymbol g}_{i}}^{2}}\bar{h}_{ik}h_{ii}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* +\ell_{i}^{2}\norm{{\boldsymbol g}_{i}}^{-1}\overline{h}_{ik}({\boldsymbol h}_{i}{\boldsymbol e}_{i}^*+{\boldsymbol e}_{i}{\boldsymbol h}_{i}^* +2{\boldsymbol h}_{i}{\boldsymbol h}_{i}^*).
\end{equation} By (\ref{eq_househouldfinalexpression}) and (\ref{eq:Gder}), we obtain that \begin{equation} \label{eq:Gder1} \frac{\partial G}{\partial g_{ik}}=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}GA\left({\boldsymbol e}_{k}({\boldsymbol e}_{i}^*+{\boldsymbol h}_{i}^*)\widetilde{B}^{\langle i\rangle}R_{i}+R_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}^*+{\boldsymbol h}_{i}^*)\right)G+\Delta_{G}(i,k), \end{equation} where \begin{equation} \label{eq_defndeltag} \Delta_{G}(i,k)\mathrel{\mathop:}= -GA\left(\Delta_{R}(i,k)\widetilde{B}^{\langle i\rangle}R_{i}+R_{i}\widetilde{B}^{\langle i\rangle}\Delta_{R}(i,k)\right)G. \end{equation} We see from (\ref{eq:Gder1}) and \eqref{eq:DeltaG1} that \begin{align}\label{eq:1st_der_expa} &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i} =\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}GA\left({\boldsymbol e}_{k}({\boldsymbol e}_{i}^* +{\boldsymbol h}_{i}^*)\widetilde{B}^{\langle i\rangle}R_{i}+R_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}^*+{\boldsymbol h}_{i}^*)\right)G{\boldsymbol e}_{i} +\rO_{\prec}(\Pi_{i}^{2}) \nonumber \\ &=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\frac{1}{N}\sum_{k}^{(i)}a_{k}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{k}(-{\boldsymbol h}_{i}^*\widetilde{B}-{\boldsymbol e}_{i}^*\widetilde{B})G{\boldsymbol e}_{i}+{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}(G_{ii}+{\boldsymbol h}_{i}^* G{\boldsymbol e}_{i})+\rO_{\prec}(\Pi_{i}^{2}) \nonumber \\ &=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\frac{1}{N}\sum_{k}^{(i)} a_{k}(\widetilde{B}^{\langle i\rangle}G)_{kk}(-b_{i}T_{i}-(\widetilde{B}G)_{ii})+({\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}GAR_{i}\widetilde{B}^{\langle 
i\rangle}{\boldsymbol e}_{k})(G_{ii}+T_{i})+\rO_{\prec}(\Pi_{i}^{2}), \end{align} where in the second equality we used (\ref{eq_hbwtr}) and in the third equality we used (\ref{eq_hbwtr}) and (\ref{eq_shorhandnotation}). Moreover, by (\ref{eq_locallaweqbound}) and the assumption that $\{a_i\}$ and $\{b_i\}$ are bounded, we readily see that \begin{equation}\label{eq_firstapproximatetrace} \tr(A\widetilde{B}^{\langle i\rangle}G)-\frac{1}{N}\sum_{k}^{(i)}a_{k}(\widetilde{B}^{\langle i\rangle}G)_{kk} =\frac{1}{N}a_{i}b_{i}G_{ii}\prec\frac{1}{N}, \end{equation} and \begin{align}\label{eq_firstapproximatetrace2} \tr(\widetilde{B}^{\langle i\rangle}GAR_{i}\widetilde{B}^{\langle i\rangle})-\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k} =-\frac{b_{i}}{N}{\boldsymbol e}_{i}^* GA\widetilde{B}{\boldsymbol h}_{i}=-\frac{b_{i}}{N}{\boldsymbol e}_{i}^*(zG+1){\boldsymbol h}_{i}\prec\frac{1}{N}. \end{align} We claim that we can replace $\widetilde{B}^{\langle i\rangle}$ by $\widetilde{B}$ in (\ref{eq_firstapproximatetrace}) and (\ref{eq_firstapproximatetrace2}) without changing the error bound in (\ref{eq:1st_der_expa}). Indeed, by the definition of the Householder reflection, we have that \begin{align}\label{eq_removeindexcontrol} \tr(A\widetilde{B}G)-\tr(A\widetilde{B}^{\langle i\rangle}G) & =\tr(A\widetilde{B}G)-\tr(AR_{i}\widetilde{B}R_{i}G) \nonumber \\ &=\tr(A{\boldsymbol r}_{i}{\boldsymbol r}_{i}^*\widetilde{B}G)+\tr(A\widetilde{B}{\boldsymbol r}_{i}{\boldsymbol r}_{i}^* G)-\tr(A{\boldsymbol r}_{i}{\boldsymbol r}_{i}^* \widetilde{B}{\boldsymbol r}_{i}{\boldsymbol r}_{i}^* G) \nonumber \\ &=\frac{1}{N}{\boldsymbol r}_{i}^*\widetilde{B}GA{\boldsymbol r}_{i}+\frac{1}{N}{\boldsymbol r}_{i}^* GA\widetilde{B}{\boldsymbol r}_{i}-\frac{1}{N}{\boldsymbol r}_{i}^* \widetilde{B}{\boldsymbol r}_{i}{\boldsymbol r}_{i}^* GA{\boldsymbol r}_{i}.
\end{align} Recall that ${\boldsymbol r}_i=\ell_i(\bm{e}_i+\bm{h}_i).$ We have that \begin{equation} \label{eq_casestudyerrorone} \left|\frac{1}{N}{\boldsymbol r}_{i}^*\widetilde{B}GA{\boldsymbol r}_{i}\right| \lesssim \frac{1}{N}\left(\norm{\sqrt{A}G^* \widetilde{B}{\boldsymbol e}_{i}} \norm{\sqrt{A}}+\norm{G^* \widetilde{B}{\boldsymbol h}_{i}}\right) \lesssim \frac{1}{N}\left({\boldsymbol e}_{i}^*\widetilde{B} G AG^* \widetilde{B}{\boldsymbol e}_{i}+b_{i}^{2}{\boldsymbol h}_{i}^* G G^*{\boldsymbol h}_{i}\right)^{1/2}, \end{equation} where in the second inequality we used the fact that $\| A \|$ is bounded. Moreover, using (\ref{eq_gggconnetction}), (\ref{eq_connectiongreenfunction}) and (\ref{defn_eq_matrices}), it is easy to see that \begin{equation} \label{eq:a_i} {\boldsymbol e}_{i}^* \widetilde{B}GAG^* \widetilde{B}{\boldsymbol e}_{i} =\frac{{\boldsymbol e}_{i}^*\sqrt{A}\widetilde{B}\sqrt{A}\widetilde{G}\widetilde{G}^* \sqrt{A}\widetilde{B}\sqrt{A}{\boldsymbol e}_{i}} {a_{i}} =\frac{{\boldsymbol e}_{i}^* \widetilde{H}\widetilde{G}\widetilde{G}^* \widetilde{H}{\boldsymbol e}_{i}}{a_{i}} \leq \norm{\widetilde{H}}^{2}\frac{\im \widetilde{G}_{ii}}{a_{i}\eta}, \end{equation} where in the last inequality we used the fact that $\widetilde{H}$ is Hermitian and the Ward identity \begin{equation*} \sum_{j=1}^N |\widetilde{G}_{ij}|^2=(\widetilde{G} \widetilde{G}^*)_{ii} =\frac{\im \widetilde{G}_{ii}}{\eta}. \end{equation*} Similarly, we can show that \begin{equation} \label{eq_part2ai} b_{i}^{2}{\boldsymbol h}_{i}^* GG^* {\boldsymbol h}_{i}=b_{i}^{2}{\boldsymbol e}_{i}^*{\mathcal G} {\mathcal G}^* {\boldsymbol e}_{i} =b_{i}{\boldsymbol e}_{i}^*\widetilde{{\mathcal G}}B\widetilde{{\mathcal G}}^* {\boldsymbol e}_{i}\leq b_{i}\norm{B}\frac{\im \widetilde{{\mathcal G}}_{ii}}\eta.
\end{equation} Since $A, B, \widetilde{H}$ are bounded, by (\ref{eq_casestudyerrorone}), (\ref{eq:a_i}) and (\ref{eq_part2ai}), we see that \begin{equation*} \left|\frac{1}{N} {\boldsymbol r}_i^* \widetilde{B} GA {\boldsymbol r}_i \right| \lesssim \frac{1}{N}\left( \frac{\im \widetilde{G}_{ii}}{\eta}+\frac{\im \widetilde{{\mathcal G}}_{ii}}{\eta} \right)^{1/2}. \end{equation*} By an analogous discussion, we can control the other two terms on the RHS of (\ref{eq_removeindexcontrol}) and obtain that \begin{equation*} \left|\frac{1}{N} {\boldsymbol r}_i^* GA \widetilde{B} {\boldsymbol r}_i \right| \lesssim \frac{1}{N}\left( \frac{\im \widetilde{G}_{ii}}{\eta}+\frac{\im \widetilde{{\mathcal G}}_{ii}}{\eta} \right)^{1/2}, \ \left|\frac{1}{N} {\boldsymbol r}_i^* \widetilde{B} {\boldsymbol r}_i {\boldsymbol r}_i^* GA {\boldsymbol r}_i \right| \lesssim \frac{1}{N}\left( \frac{\im \widetilde{G}_{ii}}{\eta}+\frac{\im \widetilde{{\mathcal G}}_{ii}}{\eta} \right)^{1/2}. \end{equation*} Further, from the spectral decomposition of $\widetilde{H}$ and $\widetilde{{\mathcal H}}$, it is clear that $\im \widetilde{G}_{ii}/\eta\geq c$ and $\im \widetilde{{\mathcal G}}_{ii}/\eta\geq c$ for some fixed constant $c>0.$ This shows that for some constant $C>0,$ \begin{equation*} \frac{1}{N}\left(\frac{\im\widetilde{G}_{ii}+\im\widetilde{{\mathcal G}}_{ii}}{\eta}\right)^{1/2}\leq \frac{C}{N}\frac{\im\widetilde{G}_{ii}+\im\widetilde{{\mathcal G}}_{ii}}{\eta}=C\Pi_{i}^{2}. \end{equation*} Together with (\ref{eq_removeindexcontrol}), we arrive at \begin{equation}\label{eq_firstindexreducecomplete} \tr(A\widetilde{B}^{\langle i\rangle}G)=\tr(A\widetilde{B}G)+\rO_{\prec}(\Pi_{i}^2). \end{equation} By a discussion similar to (\ref{eq_firstindexreducecomplete}), we obtain \begin{equation}\label{eq_secondindexreducecomplete} \tr(\widetilde{B}^{\langle i\rangle}GAR_{i}\widetilde{B}^{\langle i\rangle})=\tr{(\widetilde{B}GA \widetilde{B})}+\rO_{\prec}(\Pi_{i}^2).
\end{equation} Therefore, by (\ref{eq:1st_der_expa}), (\ref{eq_firstindexreducecomplete}) and (\ref{eq_secondindexreducecomplete}), we conclude that \begin{equation} \label{eq:Sexpand} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\left(\tr(A\widetilde{B}G)(-b_{i}T_{i}-(\widetilde{B}G)_{ii})+\tr(\widetilde{B}GA\widetilde{B})(G_{ii}+T_{i})\right)+\rO_{\prec}(\Pi_{i}^{2}). \end{equation} Note that compared to the expansion (\ref{eq:recmomP}), the coefficient in front of $\tr{(A \widetilde{B}G)}$ is still different. We need to explore the hidden relation further. By a discussion similar to (\ref{eq:Sexpand}), we have that \begin{equation}\label{eq:Texpand} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* \frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i} =\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\left(\tr(GA)(-b_{i}T_{i}-(\widetilde{B}G)_{ii})+\tr(GA\widetilde{B})(G_{ii}+T_{i})\right)+\rO_{\prec}(\Pi_{i}^{2}).
\end{equation} In light of (\ref{eq:Sexpand}), (\ref{eq:Texpand}) and (\ref{eq:recmomP}), it suffices to control $$\tr(GA)\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}-\tr(A\widetilde{B}G)\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* \frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}.$$ Combining (\ref{eq:Sexpand}) and (\ref{eq:Texpand}), we have that \begin{align}\label{eq_combineminus} \tr(GA)& \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}-\tr(A\widetilde{B}G)\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* \frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i} \nonumber \\ &=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}(G_{ii}+T_{i})(\tr(GA)\tr(\widetilde{B}GA\widetilde{B})-\tr(GA\widetilde{B})\tr(GA\widetilde{B}))+\rO_{\prec}(\Pi_{i}^{2}) \nonumber \\ &=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}(G_{ii}+T_{i})(-\Upsilon-\tr(A\widetilde{B}G)+\tr(GA))+\rO_{\prec}(\Pi_{i}^{2}), \end{align} where in the second equality we employed the definition of $\Upsilon$ in (\ref{eq_defnupsilon}). Denote \begin{align}\label{eq_defnmathsfe2} \mathsf{e}_{i2}\mathrel{\mathop:}= &\left(\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}-\norm{{\boldsymbol g}_{i}}\right)(-G_{ii}\tr(A\widetilde{B}G)-(G_{ii}+T_{i})(\Upsilon-\tr(GA)))+\tr(A\widetilde{B}G)\left(\norm{{\boldsymbol g}_{i}}\mathring{T}_{i}-\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}T_{i}\right). \end{align} By a discussion similar to (\ref{eq_boundepsilon1}), we can conclude that \begin{equation}\label{eq_controlepsilon2} |\mathsf{e}_{i2}| \prec N^{-1/2}. 
\end{equation} Moreover, by a simple algebraic calculation using (\ref{eq_combineminus}) and (\ref{eq_defnmathsfe2}), we find that \begin{align}\label{eq_finalexpansion} \tr(GA)\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}=&\norm{{\boldsymbol g}_{i}}(-G_{ii}\tr(A\widetilde{B}G)-(G_{ii}+T_{i})(\Upsilon-\tr(GA))) \nonumber \\ +&\tr(A\widetilde{B}G)\left(\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}-\norm{{\boldsymbol g}_{i}}\mathring{T}_{i}\right)+\mathsf{e}_{i2}+\rO_{\prec}(\Pi_{i}^{2}). \end{align} With (\ref{eq_finalexpansion}) and (\ref{eq_cumulantfirst}), we come back to the discussion \eqref{eq:recmomP}. More specifically, inserting (\ref{eq_finalexpansion}) into (\ref{eq_cumulantfirst}) and then (\ref{eq:recmomP}), we have that \begin{align}\label{eq:recmomP1} &\expct{{\mathfrak X}_{i}^{(p,p)}} =\expct{\left(\frac{1}{N}\sum_{k}^{(i)}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}-\mathring{T}_{i}\right)\tr(A\widetilde{B}G){\mathfrak X}_{i}^{(p-1,p)}} \\ & +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA){\mathfrak X}_{i}^{(p-1,p)}} \nonumber +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tr\left(\frac{\partial G}{\partial g_{ik}}A\right){\mathfrak X}_{i}^{(p-1,p)}} \\ & +\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA)\frac{\partial P_{i}}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-2,p)}} +\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol 
e}_{i}\tr(GA)\frac{\partial \overline{P}_{i}}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-1,p-1)}} \nonumber \\ & +\expct{\left(\mathsf{e}_{i1}\tr(GA)+\frac{1}{\norm{{\boldsymbol g}_{i}}}\mathsf{e}_{i2}+\rO_{\prec}(\Pi_{i}^{2})\right){\mathfrak X}_{i}^{(p-1,p)}}. \nonumber \end{align} We do one more expansion for the first term of the above equation. Recall the definitions in (\ref{eq_shorhandnotation}). Applying the technique of integration by parts, i.e., (\ref{eq_formulaintergrationbyparts}), we get that \begin{align}\label{eq:recmomP2} &\expct{\left(\mathring{T}_{i}-\frac{1}{N}\sum_{k}^{(i)}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}\right)\tr(A\widetilde{B}G){\mathfrak X}_{i}^{(p-1,p)}} \\ &=\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}G_{ki}\tr(A\widetilde{B}G){\mathfrak X}_{i}^{(p-1,p)}} +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}G_{ki}\tr(A\widetilde{B}\frac{\partial G}{\partial g_{ik}}){\mathfrak X}_{i}^{(p-1,p)}} \nonumber \\ &+\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}G_{ki}\tr(A\widetilde{B}G)\frac{\partial P_{i}}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-2,p)}} +\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}G_{ki}\tr(A\widetilde{B}G)\frac{\partial \overline{P}_{i}}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-1,p-1)}}. 
\nonumber \end{align} Combining (\ref{eq:recmomP2}) and (\ref{eq:recmomP1}), we can rewrite \begin{equation}\label{eq_pkfinalrepresentation} \expct{{\mathfrak X}_{i}^{(p,p)}}=\expct{ \mathfrak{C}_1{\mathfrak X}_{i}^{(p-1,p)}} +\expct{\mathfrak{C}_2{\mathfrak X}_{i}^{(p-2,p)}} +\expct{\mathfrak{C}_3{\mathfrak X}_{i}^{(p-1,p-1)}}, \end{equation} where the coefficients $\mathfrak{C}_k,k=1,2,3$ are defined as \begin{align}\label{eq_defnmathfrackc1} &\mathfrak{C}_1:=\frac{1}{N} \sum_{k}^{(i)} \Big(\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}G_{ki}\tr(A\widetilde{B}G)+\frac{1}{\norm{{\boldsymbol g}_{i}}}G_{ki}\tr(A\widetilde{B}\frac{\partial G}{\partial g_{ik}})+\frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA) \\ &+\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tr\left(\frac{\partial G}{\partial g_{ik}}A\right)+\left(\mathsf{e}_{i1}\tr(GA)+\frac{1}{\norm{{\boldsymbol g}_{i}}}\mathsf{e}_{i2}+\rO_{\prec}(\Pi_{i}^{2})\right) \Big), \nonumber \\ & \mathfrak{C}_2:=\frac{p-1}{N} \sum_{k}^{(i)} \Big(\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA)\frac{\partial P_{i}}{\partial g_{ik}}+\frac{1}{\norm{{\boldsymbol g}_{i}}}G_{ki}\tr(A\widetilde{B}G)\frac{\partial P_{i}}{\partial g_{ik}} \Big), \label{eq_defnmathfrackc2} \\ & \mathfrak{C}_3:=\frac{p}{N} \sum_{k}^{(i)} \Big(\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA)\frac{\partial \overline{P}_{i}}{\partial g_{ik}}+ \frac{1}{\norm{{\boldsymbol g}_{i}}}G_{ki}\tr(A\widetilde{B}G)\frac{\partial \overline{P}_{i}}{\partial g_{ik}}\Big). 
\label{eq_defnmathfrackc3} \end{align} To conclude the proof of (\ref{eq:PKrecmoment}), it suffices to control the coefficients $\mathfrak{C}_k, k=1,2,3.$ For $\mathfrak{C}_1,$ by Lemma \ref{lem:recmomerror}, we find that \begin{equation}\label{eq:controlc1} \mathfrak{C}_1 \prec N^{-1/2}+\Pi_i^2. \end{equation} For $\mathfrak{C}_2,$ by (\ref{eq_partialpi}) and Lemma \ref{lem:recmomerror}, we find that \begin{equation}\label{eq:controlc2} \mathfrak{C}_2 \prec \Pi_i^2. \end{equation} Similarly, we can show that \begin{equation}\label{eq:controlc3} \mathfrak{C}_3 \prec \Pi_i^2. \end{equation} Invoking the definitions in (\ref{eq_controlparameter}), we complete the proof of (\ref{eq:PKrecmoment}) using (\ref{eq:controlc1}), (\ref{eq:controlc2}), (\ref{eq:controlc3}) and (\ref{eq_pkfinalrepresentation}). Finally, since the argument is similar, we only briefly discuss the proof of (\ref{eq:etakrecursive}). Using the definition of $K_i$ in (\ref{eq_pk}) and the fact that $T_{i}-\mathring{T}_{i}=h_{ii}G_{ii}\prec N^{-1/2},$ we find that \begin{align} &\expct{{\mathfrak Y}_{i}^{(p,p)}}=\expct{\left(\mathring{T}_{i}+\tr(GA)(b_{i}T_{i}+(\widetilde{B}G)_{ii})-\tr(GA\widetilde{B}G)(G_{ii}+T_{i})\right){\mathfrak Y}_{i}^{(p-1,p)}}+\expct{\rO_{\prec}(N^{-1/2}){\mathfrak Y}_{i}^{(p-1,p)}} \nonumber \\ & =\sum_{k}^{(i)}\expct{\frac{\overline{g}_{ik}}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}{\mathfrak Y}_{i}^{(p-1,p)}}+\expct{\left(\tr(GA)(b_{i}T_{i}+(\widetilde{B}G)_{ii})-\tr(GA\widetilde{B}G)(G_{ii}+T_{i})\right){\mathfrak Y}_{i}^{(p-1,p)}} \label{eq_closeendetaexpansion} \\ & +\expct{\rO_{\prec}(N^{-1/2}){\mathfrak Y}_{i}^{(p-1,p)}} \nonumber, \end{align} where in the second equality we used the definition in (\ref{eq_shorhandnotation}).
Applying (\ref{eq_formulaintergrationbyparts}) to the first term of the RHS of the above equation, we obtain \begin{align} & \sum_{k}^{(i)}\expct{\frac{\overline{g}_{ik}}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}{\mathfrak Y}_{i}^{(p-1,p)}} =\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}{\mathfrak Y}_{i}^{(p-1,p)}} +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}{\mathfrak Y}_{i}^{(p-1,p)}} \nonumber \\ & +\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}\frac{\partial K_{i}}{\partial g_{ik}}{\mathfrak Y}_{i}^{(p-2,p)}} +\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}\frac{\partial \overline{K}_{i}}{\partial g_{ik}}{\mathfrak Y}_{i}^{(p-1,p-1)}}. \label{eq_firstterm} \end{align} Inserting \eqref{eq:Texpand} into (\ref{eq_firstterm}) and then (\ref{eq_closeendetaexpansion}), by a discussion similar to the cancellation in (\ref{eq:recmomP1}) and error controls in (\ref{eq_boundepsilon1}) and (\ref{eq_controlepsilon2}), we conclude that \begin{align*} \expct{{\mathfrak Y}_{i}^{(p,p)}} &=\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}{\mathfrak Y}_{i}^{(p-1,p)}}+\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}\frac{\partial K_{i}}{\partial g_{ik}}{\mathfrak Y}_{i}^{(p-2,p)}} \\ &+\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^* G{\boldsymbol e}_{i}\frac{\partial \overline{K}_{i}}{\partial g_{ik}}{\mathfrak Y}_{i}^{(p-1,p-1)}}+\expct{\rO_{\prec}(N^{-1/2}){\mathfrak Y}_{i}^{(p-1,p)}}. 
\end{align*} Using (\ref{eq_partialKderivative}), Lemma \ref{lem:recmomerror} and a discussion similar to (\ref{eq:controlc1}), (\ref{eq:controlc2}) and (\ref{eq:controlc3}), we can finish the proof of (\ref{eq:etakrecursive}). \end{proof} Before concluding this section, we briefly discuss the control of the off-diagonal entries. Specifically, we will prove the following proposition. Recall the definitions in (\ref{eq_quantitydefinitionoffdiagonal}). \begin{prop}\label{prop_offdiagonal} Fix $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. Suppose that the assumptions of Theorem \ref{thm:main} and Assumption \ref{assu_ansz} hold. Moreover, we assume that \begin{align}\label{eq:ansz_off} \Lambda_{o}\mathrel{\mathop:}=\max_{i\neq j}\absv{G_{ij}}\prec N^{-\gamma/4}, \ \ \Lambda_{To}\mathrel{\mathop:}=\max_{i\neq j}\absv{T_{ij}}\prec 1. \end{align} Then for all $i,j\in\llbra N\rrbra$, we have \begin{align*} \absv{P_{ij}(z)}\prec\Psi, \ \ \absv{K_{ij}(z)}\prec \Psi, \ \ \Lambda_{o}\prec\Psi, \ \ \Lambda_{To}\prec\Psi. \end{align*} \end{prop} \begin{proof} The proof is similar to that of Proposition \ref{prop:entrysubor}, using the identities (\ref{eq_qij}), (\ref{eq_qijdecomposution}) and the estimates in Lemma \ref{lem:recmomerror_off}. We omit further details here. \end{proof} \section{Fluctuation averaging}\label{sec_faall} \subsection{Rough fluctuation averaging} As we have seen from Proposition \ref{prop:entrysubor}, the error bounds are not optimal compared to our final results in Theorem \ref{thm:main}. More specifically, the obtained control for $\Upsilon$ in (\ref{eq_upsilonbound}) is too loose. In this subsection, we improve the control of $\Upsilon$ based on Proposition \ref{prop:entrysubor}, which provides better estimates than (\ref{eq_locallaweqbound}).
More specifically, instead of using (\ref{eq_boundepsilon1}) and (\ref{eq_controlepsilon2}), we will conduct a more careful analysis of $\mathsf{e}_{i1}$ and $\mathsf{e}_{i2}$ defined in (\ref{eq_epsilon1}) and (\ref{eq_defnmathsfe2}), respectively. The improved estimates will be utilized in Section \ref{subsec_stronglocallawfixed} to prove Theorem \ref{thm:main} for each fixed spectral parameter $z.$ The main result of this section is Proposition \ref{prop:FA1}, which provides the estimates for an extension of $\Upsilon.$ More specifically, we will consider the control of the weighted average of $Q_{i}$'s, i.e., \begin{equation} \label{eq_weighted} {\mathfrak X} \equiv {\mathfrak X}(D):= \frac{1}{N}\sum_{i}d_{i}Q_{i}=\tr(GA\widetilde{B})\tr(GD)-\tr(GA)\tr(\widetilde{B}GD), \end{equation} where $D=\operatorname{diag}\{d_1, \cdots, d_N\}$ and $d_{i} \equiv d_i(H)$ are generic weights which in general are functions of $H.$ It is necessary and natural for us to consider such a generalization. For instance, in the decomposition (\ref{eq:Lambda}), we have an extra factor, which is a function of $H,$ in front of $Q_i.$ We first impose some assumptions regarding the concentration properties of $d_i, i=1,2,\cdots, N.$ It will be seen later that the following assumption is sufficient for most of our applications. In particular, when all $d_i, i=1,2,\cdots, N,$ do not depend on $H$, Assumption \ref{assum_conditiond} holds trivially. \begin{assu}\label{assum_conditiond} Let $X_{i}=I$ or $\widetilde{B}^{\langle i\rangle}$, and let $d_{1},\cdots,d_{N}$ be functions of $H$ with $\max_{i}\absv{d_{i}}\prec 1$.
Assume that for all $i,j\in \llbra 1,N\rrbra$ the following hold: \begin{align}\label{eq:weight_cond} \frac{1}{N}\sum_{k}^{(i)}\frac{\partial d_{j}}{\partial g_{ik}}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}&=\rO_{\prec}(\Psi^{2}\Pi_{i}^{2}), \ \ \frac{1}{N}\sum_{k}^{(i)}\frac{\partial d_{j}}{\partial g_{ik}}{\boldsymbol e}_{k}^* X_{i}\mathring{{\boldsymbol g}}_{i}=\rO_{\prec}(\Psi^{2}\Pi_{i}^{2}), \end{align} and the same bound also holds with $d_{j}$'s replaced by $\overline{d}_{j}$. \end{assu} Now we state the main result of this subsection. \begin{prop}\label{prop:FA1} Fix $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ and suppose that the assumptions of Proposition \ref{prop:entrysubor} and Assumption \ref{assum_conditiond} hold. Let $\Pi(z)\prec\widehat{\Pi}(z)$ for some deterministic positive $\widehat{\Pi}(z)$ with $N^{-1/2}\eta^{-1/4}\prec \widehat{\Pi}\prec\Psi$. Then we have that \begin{equation*} {\mathfrak X} \prec\Psi\widehat{\Pi}. \end{equation*} \end{prop} The proof of Proposition \ref{prop:FA1} is similar to that of (\ref{eq:PKbound}), which follows from the recursive estimate in Lemma \ref{lem:PKrecmoment} and Young's inequality. \begin{proof}[\bf Proof of Proposition \ref{prop:FA1}] Denote \begin{align*} {\mathfrak X}^{(p,q)}&\mathrel{\mathop:}= {\mathfrak X}^{p}\overline{{\mathfrak X}}^{q},\quad p,q \in\N. \end{align*} We claim that the following recursive estimate holds for ${\mathfrak X}.$ \begin{lem}\label{lem:Avrecmoment} For any fixed integer $p\geq 2,$ we have that \begin{equation} \label{eq_roughfarecuversivestimateeq} \expct{{\mathfrak X}^{(p,p)}}\leq \expct{\rO_{\prec}(\widehat{\Pi}^{2}){\mathfrak X}^{(p-1,p)}}+\expct{\rO_{\prec}(\Psi^{2}\widehat{\Pi}^{2}){\mathfrak X}^{(p-2,p)}}+\expct{\rO_{\prec}(\Psi^{2}\widehat{\Pi}^{2}){\mathfrak X}^{(p-1,p-1)}}. \end{equation} \end{lem} \noindent By a discussion similar to (\ref{eq_arbitraydiscussion}), together with Lemma \ref{lem:Avrecmoment} and Markov's inequality, we can complete the proof.
\end{proof} The rest of this section is devoted to proving Lemma \ref{lem:Avrecmoment}, where we will again apply the technique of integration by parts, i.e., (\ref{eq_formulaintergrationbyparts}). The proof is similar to that of Lemma \ref{lem:PKrecmoment}, except that we have better inputs as demonstrated in Proposition \ref{prop:entrysubor}. In what follows, we focus on discussing the hidden cancellation terms and will only briefly investigate the error terms. \begin{proof}[\bf Proof of Lemma \ref{lem:Avrecmoment}] In the first step, we follow the proof idea of \cite[Lemma 6.2]{BEC} to provide a sufficient condition for (\ref{eq_roughfarecuversivestimateeq}); that is, if $\widehat{\Upsilon}(z)$ is another deterministic control parameter such that $\absv{\Upsilon(z)}\prec \widehat{\Upsilon}(z)\leq \Psi(z)$, then \begin{equation} \label{eq:Avrecmomentrec} \expct{{\mathfrak X}^{(p,p)}}\leq \expct{\rO_{\prec}(\widehat{\Pi}^{2}+\Psi\widehat{\Upsilon}){\mathfrak X}^{(p-1,p)}}+\expct{\rO_{\prec}(\Psi^{2}\widehat{\Pi}^{2}){\mathfrak X}^{(p-2,p)}}+\expct{\rO_{\prec}(\Psi^{2}\widehat{\Pi}^{2}){\mathfrak X}^{(p-1,p-1)}}. \end{equation} { Similarly to the discussion of (\ref{eq_arbitraydiscussion}), we can employ Young's and Markov's inequalities to obtain that \begin{equation} \label{eq_lasttofirst} \Absv{\frac{1}{N}\sum_{i}d_{i}Q_{i}}\prec \widehat{\Pi}^{2}+\Psi\widehat{\Upsilon}(z)+\Psi\widehat{\Pi}\prec \Psi\widehat{\Upsilon}(z)+\Psi\widehat{\Pi}, \end{equation} where we used the assumption $\widehat{\Pi}\prec\Psi$. Recall (\ref{eq_averageqdefinitionupsilon}). We now set $d_{i}=z$ for all $i$ in (\ref{eq_lasttofirst}) to get \begin{equation} \label{eq_iterationprocesureend} \absv{\Upsilon(z)}\prec \Psi\widehat{\Upsilon}(z)+\Psi\widehat{\Pi}\prec N^{-\gamma/2}\widehat{\Upsilon}(z)+\Psi\widehat{\Pi}, \end{equation} where we used the fact that $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U)$ is fixed.
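The bound \eqref{eq_iterationprocesureend} is self-improving: schematically, a map $\widehat{\Upsilon}\mapsto q\,\widehat{\Upsilon}+b$ with contraction factor $q=N^{-\gamma/2}<1$ and target size $b=\Psi\widehat{\Pi}$ converges geometrically to its fixed point $b/(1-q)\asymp b$. A toy numerical illustration (the values of $q$, $b$ and the initial bound below are arbitrary choices of ours):

```python
# toy model of the bootstrap Upsilon_hat -> q * Upsilon_hat + b,
# with q playing the role of N^{-gamma/2} and b that of Psi * Pi_hat
q, b = 0.1, 1e-3   # arbitrary contraction factor and target size
x = 1.0            # arbitrary initial (rough) bound

for _ in range(20):
    x = q * x + b  # one self-improvement step

# the iteration converges geometrically to the fixed point b / (1 - q)
print(x, b / (1 - q))
```

After a few iterations the initial rough bound is forgotten and only the target size $b$ survives, up to a constant.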
Using the RHS of (\ref{eq_iterationprocesureend}) as an updated deterministic bound for $\Upsilon$ instead of $\widehat{\Upsilon}$ and iterating the estimation procedure as in (\ref{eq_iterationprocesureend}), we can finally obtain that \begin{equation*} \absv{\Upsilon(z)}\prec \Psi\widehat{\Pi}. \end{equation*} Consequently, we can set $\widehat{\Upsilon}(z)=\Psi\widehat{\Pi}$ in \eqref{eq:Avrecmomentrec}. This yields that \begin{equation} \label{eq_reducedroughfarecursive} \expct{{\mathfrak X}^{(p,p)}}\leq \expct{\rO_{\prec}(\widehat{\Pi}^{2}+\Psi^{2}\widehat{\Pi}){\mathfrak X}^{(p-1,p)}}+\expct{\rO_{\prec}(\Psi^{2}\widehat{\Pi}^{2})({\mathfrak X}^{(p-2,p)}+{\mathfrak X}^{(p-1,p-1)})}. \end{equation} Since the term $\Psi^{2}\widehat{\Pi}$ in the first expectation of the RHS of (\ref{eq_reducedroughfarecursive}) can be absorbed into $\widehat{\Pi}^{2}$ as we assume that $N^{-1/2}\eta^{-1/4}\prec \widehat{\Pi}$, this recovers (\ref{eq_roughfarecuversivestimateeq}). } It remains to prove \eqref{eq:Avrecmomentrec}. Recall $D=\diag\{d_{1},\cdots,d_{N}\}$ and (\ref{eq_defnq}). We see that \begin{align}\label{eq_keyexpansionfrx} {\mathfrak X}=\frac{1}{N}\sum_{i}d_{i}Q_{i} & =\frac{1}{N}\sum_{i}d_{i}(G_{ii}\tr(A\widetilde{B}G)-\tr(GA)(\widetilde{B}G)_{ii}) \nonumber \\ & =\frac{1}{N}\sum_{i=1}^{N}(\widetilde{B}G)_{ii}\tau_{i1}\tr(GA), \end{align} where we denoted \begin{equation} \label{eq_defntauil} \tau_{i1}\mathrel{\mathop:}= \frac{a_{i}\tr(GD)}{\tr(GA)}-d_{i}. \end{equation} We state two important properties regarding $\tau_{i1}.$ First, \begin{equation} \label{eq:tau_aver} \sum_{i}G_{ii}\tau_{i1}=\frac{\tr(GD)}{\tr(GA)}\sum_{i}a_{i}G_{ii}-\sum_{i}d_{i}G_{ii}=0. \end{equation} Second, using the identity (cf. Definition \ref{defn_asf}), we have that \begin{equation*} \tr{(GA)}=\frac{(1+z m_H(z))\Omega_B^c(z)}{z}.
\end{equation*} Therefore, by Lemma \ref{lem:stabbound}, Propositions \ref{prop:stabN} and \ref{prop:entrysubor}, it is easy to see that $\tau_{i1} \prec 1.$ Inserting (\ref{eq:BGii}) into (\ref{eq_keyexpansionfrx}), we obtain that \begin{align} \expct{{\mathfrak X}^{(p,p)}} &=\frac{1}{N}\sum_{i=1}^{N}\expct{(\widetilde{B}G)_{ii} \tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}} \nonumber \\ & =-\frac{1}{N}\sum_{i=1}^{N}\expct{\mathring{S}_{i}\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}} +\frac{1}{N}\sum_{i=1}^{N}\expct{(G_{ii}+T_{i})\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}} \label{eq:recomonX10} \\ & \quad +\frac{1}{N}\sum_{i=1}^{N}\expct{\mathsf{e}_{i1}\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}}, \label{eq:recmomX1} \end{align} where $\mathsf{e}_{i1}$ is defined in \eqref{eq_epsilon1}. It suffices to estimate all the terms of the RHS of (\ref{eq:recomonX10}) and (\ref{eq:recmomX1}). Due to (\ref{eq:tau_aver}), for part of the second term of the RHS of (\ref{eq:recomonX10}), we have that \begin{equation}\label{eq_roughfrcancellone} \frac{1}{N} \sum_{i=1}^N \mathbb{E} G_{ii} \tau_{i1} \tr{(GA)} {\mathfrak X}^{(p-1,p)}=0. \end{equation} For the remaining terms, we apply (\ref{eq_formulaintergrationbyparts}) to estimate them. We start with the first term of the RHS of (\ref{eq:recomonX10}). Recall (\ref{eq_rewritemrs}). 
We have that \begin{align}\label{eq:recmomX2} &\expct{\mathring{S}_{i}\tau_{i1}\tr (GA){\mathfrak X}^{(p-1,p)}} =\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}\frac{\partial ({\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i})}{\partial g_{ik}}\tau_{i1}\tr (GA){\mathfrak X}^{(p-1,p)}} \\ & +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}} +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial(\tau_{i1}\tr(GA))}{\partial g_{ik}}{\mathfrak X}^{(p-1,p)}} \nonumber \\ & +\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(GA)\frac{\partial{\mathfrak X}}{\partial g_{ik}}{\mathfrak X}^{(p-2,p)}} +\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(GA)\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p-1)}}. 
\nonumber \end{align} For the first term of the RHS of (\ref{eq:recmomX2}), by (\ref{eq_finalexpansion}), we find that \begin{align}\label{eq:recmomX3}\nonumber &\frac{1}{N^{2}}\sum_{i=1}^N\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}}\frac{\partial ({\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i})}{\partial g_{ik}}\tau_{i1}\tr (GA){\mathfrak X}^{(p-1,p)}} \nonumber \\ & =\frac{1}{N}\sum_{i}\expct{(-G_{ii}\tr(A\widetilde{B}G)-(G_{ii}+T_{i})(\Upsilon(z)-\tr(GA)))\tau_{i1}{\mathfrak X}^{(p-1,p)}} \\\ &+\frac{1}{N}\sum_{i}\expct{\tr(A\widetilde{B}G)\left(\frac{1}{N}\sum_{k}^{(i)}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}-\mathring{T}_{i}\right)\tau_{i1}{\mathfrak X}^{(p-1,p)}} +\frac{1}{N}\sum_{i}\expct{\frac{\mathsf{e}_{i2}\tau_{i1}}{\norm{{\boldsymbol g}_{i}}}{\mathfrak X}^{(p-1,p)}} \nonumber \\ & +\frac{1}{N}\sum_{i}\expct{\frac{\absv{\tau_{i1}}}{\norm{{\boldsymbol g}_{i}}}\rO_{\prec}(\Pi_{i}^{2}){\mathfrak X}^{(p-1,p)}}, \nonumber \end{align} where $\mathsf{e}_{i2}$ is defined in \eqref{eq_defnmathsfe2}. Recall (\ref{eq_moregeneralerrorbound}). Using (\ref{eq:tau_aver}), $\Lambda_{T}\prec\Psi(z)$ in Proposition \ref{prop:entrysubor}, and the assumption $|\Upsilon(z)| \prec \widehat{\Upsilon}(z),$ we conclude that the first term on the RHS of \eqref{eq:recmomX3} can be simplified as \begin{align}\label{eq:tau_aver2} \frac{1}{N}\sum_{i}(-G_{ii}\tr(A\widetilde{B}G)-(G_{ii}+T_{i})(\Upsilon(z)-\tr(GA)))\tau_{i1}=\frac{1}{N}\sum_{i=1}T_{i}\tau_{i1}\tr(GA)+\rO_{\prec}(\widehat{\Upsilon}(z)\Psi(z)). 
\end{align} For the last term of the RHS of \eqref{eq:recmomX3}, since $\tau_{i1}\prec 1$ and $\norm{{\boldsymbol g}_{i}}=1+\rO_{\prec}(N^{-1/2}),$ together with the fact \begin{equation*} \frac{1}{N}\sum_{i} \Pi_{i}^{2}\prec \Pi^2, \end{equation*} we arrive at \begin{equation*} \frac{1}{N}\sum_{i}\expct{\frac{\absv{\tau_{i1}}}{\norm{{\boldsymbol g}_{i}}}\rO_{\prec}(\Pi_{i}^{2}){\mathfrak X}^{(p-1,p)}} \prec \Pi^2 \prec \widehat{\Pi} \Psi. \end{equation*} For the second term of the RHS of \eqref{eq:recmomX3}, by a discussion similar to \eqref{eq:recmomP2}, we have that \begin{align}\label{eq:Iparts_X}\nonumber &\expct{\left(\mathring{T}_{i}-\frac{1}{N}\sum_{k}^{(i)}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}\right)\tau_{i1}\tr(A\widetilde{B}G){\mathfrak X}_{i}^{(p-1,p)}} \\\nonumber &=\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}G_{ki}\tau_{i1}\tr(A\widetilde{B}G)\tau_{i1}{\mathfrak X}_{i}^{(p-1,p)}} +\frac{1}{N}\sum_{k}^{(i)}\expct{\frac{G_{ki}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial (\tau_{i1} \tr(A\widetilde{B}G))}{\partial g_{ik}}{\mathfrak X}_{i}^{(p-1,p)}}\\ &+\frac{p-1}{N}\sum_{k}^{(i)}\expct{\frac{G_{ki}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(A\widetilde{B}G)\frac{\partial{\mathfrak X}}{\partial g_{ik}}\tau_{i1}{\mathfrak X}_{i}^{(p-2,p)}} +\frac{p}{N}\sum_{k}^{(i)}\expct{\frac{G_{ki}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(A\widetilde{B}G)\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}\tau_{i1}{\mathfrak X}_{i}^{(p-1,p-1)}}. \end{align} At this point, we deviate a little bit from the current calculation and revisit (\ref{eq:recmomX1}). 
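For later reference, we record the elementary norm-derivative identities that enter these expansions repeatedly; as a sketch, writing $\norm{{\boldsymbol g}_{i}}^{2}=\sum_{k}g_{ik}\overline{g}_{ik}$ and letting $\partial/\partial g_{ik}$ denote the holomorphic derivative, one computes

```latex
\frac{\partial \norm{{\boldsymbol g}_{i}}^{2}}{\partial g_{ik}}=\overline{g}_{ik},
\qquad
\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}
=-\frac{\overline{g}_{ik}}{2\norm{{\boldsymbol g}_{i}}^{3}},
\qquad
\frac{\partial \norm{{\boldsymbol g}_{i}}^{-2}}{\partial g_{ik}}
=-\frac{\overline{g}_{ik}}{\norm{{\boldsymbol g}_{i}}^{4}},
```

each of order $\rO_{\prec}(N^{-1/2})$ since $\absv{g_{ik}}\prec N^{-1/2}$ and $\norm{{\boldsymbol g}_{i}}=1+\rO_{\prec}(N^{-1/2})$.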
On one hand, we emphasize that, combining (\ref{eq:tau_aver2}) and (\ref{eq_roughfrcancellone}), we could cancel the second term of the RHS of (\ref{eq:recomonX10}); on the other hand, for the term of the RHS of \eqref{eq:recmomX1}, by the definition of $\mathsf{e}_{i1},$ (\ref{eq_hiicontrol}), (\ref{eq_licontrol}), (\ref{eq_controlhibhi}) and Proposition \ref{prop:entrysubor}, we have that \begin{align}\label{eq_epsilon1details} \mathsf{e}_{i1} & =\left((l_{i}^{2}-1)+({\boldsymbol h}_{i}^* \widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i}-1)\right)G_{ii}+\rO_{\prec}(N^{-1/2}\Psi) \nonumber \\ & =-h_{ii}G_{ii}+\frac{\Omega_{B}^{c}}{z}\frac{{\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i}-1}{a_{i}-\Omega_{B}^{c}}+\rO_{\prec}(N^{-1/2}\Psi). \end{align} Recall (\ref{eq_prd2}). By (\ref{eq_largedeviationbound}), we see that \begin{align}\label{eq_replacehizero} {\boldsymbol h}_{i}^*\widetilde{B}^{\langle i\rangle}{\boldsymbol h}_{i}-\mathring{{\boldsymbol h}}_{i}^*\widetilde{B}^{\langle i\rangle}\mathring{{\boldsymbol h}}_{i} =\rO_{\prec}(N^{-1}). \end{align} Moreover, using (\ref{eq_largedeviationbound}), we obtain that \begin{equation} \label{eq_hizeroquadratic} \mathring{{\boldsymbol h}}_{i}^*\widetilde{B}^{\langle i\rangle}\mathring{{\boldsymbol h}}_{i}-1=\mathring{{\boldsymbol h}}_{i}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol h}}_{i}+\rO_{\prec}(1/N). 
\end{equation} By (\ref{eq_epsilon1details}), (\ref{eq_replacehizero}), (\ref{eq_hizeroquadratic}) and the assumption that $N^{-1/2}\eta^{-1/4}\prec \widehat{\Pi}$, we conclude that the last term in \eqref{eq:recmomX1} can be written as \begin{align}\label{eq:expan_ei1} \expct{\frac{1}{N}\sum_{i}\mathsf{e}_{i1}\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}} &=-\expct{\frac{1}{N}\sum_{i}h_{ii}G_{ii}\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}} \nonumber \\ &+\frac{1}{z}\expct{\frac{1}{N}\sum_{i}(\mathring{{\boldsymbol h}}_{i}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol h}}_{i})\tau_{i2}{\mathfrak X}^{(p-1,p)}}+\expct{\rO_{\prec}(\widehat{\Pi}^{2}){\mathfrak X}^{(p-1,p)}}, \end{align} where $\tau_{i2}$ is defined as \begin{equation*} \tau_{i2}\mathrel{\mathop:}=\frac{\Omega_{B}^{c}(a_{i}\tr(GD)-d_{i}\tr(GA))}{a_{i}-\Omega_{B}^{c}}. \end{equation*} We point out that the first term of the RHS of (\ref{eq:expan_ei1}) will be kept for future cancellation (see the second term of the RHS of (\ref{eq:expan_ei2})).
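The bound $\tau_{i2}\prec 1$, used in the estimates below, can be sketched as follows, assuming the stability bounds of Lemma \ref{lem:stabbound} (in particular $\absv{a_{i}-\Omega_{B}^{c}}\gtrsim 1$ and $\absv{\Omega_{B}^{c}}\lesssim 1$) together with $\absv{\tr(GD)}+\absv{\tr(GA)}\prec 1$ and the boundedness of the $a_{i}$ and $d_{i}$:

```latex
\absv{\tau_{i2}}
=\Absv{\frac{\Omega_{B}^{c}\,\big(a_{i}\tr(GD)-d_{i}\tr(GA)\big)}{a_{i}-\Omega_{B}^{c}}}
\lesssim \absv{a_{i}}\absv{\tr(GD)}+\absv{d_{i}}\absv{\tr(GA)}\prec 1.
```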
For the second term of the RHS of (\ref{eq:expan_ei1}), by (\ref{eq_formulaintergrationbyparts}), we see that \begin{align}\label{eq:recmom_ei1}\nonumber &\frac{1}{N}\sum_{i}\expct{\mathring{{\boldsymbol h}}_{i}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol h}}_{i}\tau_{i2}{\mathfrak X}^{(p-1,p)}} =\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\overline{g}_{ik}\frac{1}{\norm{{\boldsymbol g}_{i}}^{2}}{\boldsymbol e}_{k}^* (\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}\tau_{i2}{\mathfrak X}^{(p-1,p)}} \\\nonumber &=\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{\partial \norm{{\boldsymbol g}_{i}}^{-2}}{\partial g_{ik}}{\boldsymbol e}_{k}^* (\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}\tau_{i2}{\mathfrak X}^{(p-1,p)}} +\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^* (\widetilde{B}^{\langle i\rangle}-I){\boldsymbol e}_{k}}{\norm{{\boldsymbol g}_{i}}^{2}}\tau_{i2}{\mathfrak X}^{(p-1,p)}}\\\nonumber &+\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}}{\norm{{\boldsymbol g}_{i}}^{2}}\frac{\partial \tau_{i2}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p)}} +\frac{p-1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}}{\norm{{\boldsymbol g}_{i}}^{2}}\tau_{i2}\frac{\partial {\mathfrak X}}{\partial g_{ik}}{\mathfrak X}^{(p-2,p)}} \\ &+\frac{p}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{{\boldsymbol e}_{k}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}}{\norm{{\boldsymbol g}_{i}}^{2}}\tau_{i2}\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p-1)}}. 
\end{align} For the first term of the RHS of (\ref{eq:recmom_ei1}), using the facts $\frac{\partial \norm{{\boldsymbol g}_{i}}^{-2}}{\partial g_{ik}}=-\norm{{\boldsymbol g}_{i}}^{-4}\overline{g}_{ik}$ and $\tau_{i2} \prec 1,$ we can get \begin{align*} &\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{\partial \norm{{\boldsymbol g}_{i}}^{-2}}{\partial g_{ik}}{\boldsymbol e}_{k}^* (\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}\tau_{i2}{\mathfrak X}^{(p-1,p)}}\\ &=-\frac{1}{N^{2}}\sum_{i}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}^{4}}\mathring{{\boldsymbol g}}_{i}^* (\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}\tau_{i2}{\mathfrak X}^{(p-1,p)}} =\expct{\rO_{\prec}(N^{-1}){\mathfrak X}^{(p-1,p)}}. \end{align*} For the second term of the RHS of (\ref{eq:recmom_ei1}), using that $\tr B=\tr\widetilde{B}^{\langle i\rangle}=1,$ we conclude \begin{align*} &\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}^{2}}{\boldsymbol e}_{k}^* (\widetilde{B}^{\langle i\rangle}-I){\boldsymbol e}_{k}\tau_{i2}{\mathfrak X}^{(p-1,p)}}\\ &=\frac{1}{N^{2}}\sum_{i}\expct{\frac{1}{\norm{{\boldsymbol g}_{i}}^{2}}(b_{i}-1)\tau_{i2}{\mathfrak X}^{(p-1,p)}} =\expct{\rO_{\prec}(1/N){\mathfrak X}^{(p-1,p)}}. \end{align*} Now we return to (\ref{eq:recmomX3}) and investigate the remaining term, i.e., the third term. Recall (\ref{eq_defnmathsfe2}). By a discussion similar to (\ref{eq_epsilon1details}), we find that \begin{align}\label{eq_ei2definition} \mathsf{e}_{i2}=&(\norm{{\boldsymbol g}_{i}}^{2}-1+h_{ii})G_{ii}(\tr(A\widetilde{B}G)-\tr(GA))-h_{ii}G_{ii}\tr(A\widetilde{B}G)+\rO_{\prec}(N^{-1/2}\Psi) \nonumber \\ =&(\norm{{\boldsymbol g}_{i}}^{2}-1)G_{ii}\tr(A(\widetilde{B}-I)G)-h_{ii}G_{ii}\tr(GA)+\rO_{\prec}(N^{-1/2}\Psi) \nonumber \\ =&(\norm{\mathring{{\boldsymbol g}}_{i}}^{2}-1)\frac{\Omega_{B}^{c}}{z}\frac{\tr(A(\widetilde{B}-I)G)}{a_{i}-\Omega_{B}^{c}}-h_{ii}G_{ii}\tr(GA)+\rO_{\prec}(N^{-1/2}\Psi).
\end{align} Using the definitions in (\ref{eq_prd2}), by (\ref{eq_largedeviationbound}), (\ref{eq_ei2definition}) and a discussion similar to (\ref{eq:expan_ei1}), we find that the third term of the RHS of (\ref{eq:recmomX3}) can be written as \begin{align}\label{eq:expan_ei2} \frac{1}{N}\sum_{i}\expct{\frac{\mathsf{e}_{i2}\tau_{i1}}{\norm{{\boldsymbol g}_{i}}}{\mathfrak X}^{(p-1,p)}} & =\frac{1}{Nz}\sum_{i}\expct{\left(\|\mathring{{\boldsymbol g}}_{i}\|^2-1\right)\tau_{i3}{\mathfrak X}^{(p-1,p)}} \\ & -\frac{1}{N}\sum_{i}\expct{h_{ii}G_{ii}\tau_{i1}\tr(GA){\mathfrak X}^{(p-1,p)}}+\expct{\rO_{\prec}(N^{-1/2}\Psi){\mathfrak X}^{(p-1,p)}}, \nonumber \end{align} where we denoted \begin{equation*} \tau_{i3}\mathrel{\mathop:}= \tau_{i2}\tr(A(\widetilde{B}-I)G). \end{equation*} We emphasize that the second term of the RHS of (\ref{eq:expan_ei2}) is canceled out with the first term of the RHS of (\ref{eq:expan_ei1}). At this point, all the leading terms in the expansion (\ref{eq:recmomX1}) have been canceled out.
Regarding the first term of the RHS of \eqref{eq:expan_ei2}, by (\ref{eq_formulaintergrationbyparts}) and the definitions in (\ref{eq_prd2}), we conclude that \begin{align}\label{eq:recmom_ei2} &\frac{1}{N}\sum_{i}\expct{\left(\|\mathring{{\boldsymbol g}}_{i}\|^2-1\right)\tau_{i3}{\mathfrak X}^{(p-1,p)}}=\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{\left({\boldsymbol e}_{k}^*{\boldsymbol e}_{k}-1+\frac{1}{N-1}\right)\tau_{i3}{\mathfrak X}^{(p-1,p)}} \nonumber \\ & +\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{{\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}_{i}}\frac{\partial \tau_{i3}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p)}}+\frac{p-1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{{\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}_{i}}\tau_{i3}\frac{\partial {\mathfrak X}}{\partial g_{ik}}{\mathfrak X}^{(p-2,p)}} \nonumber \\ & +\frac{p}{N^{2}}\sum_{i}\sum_{k}^{(i)}\expct{{\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}_{i}}\tau_{i3}\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p-1)}}. \end{align} It is easy to see that the first term of the RHS of (\ref{eq:recmom_ei2}) can be bounded by $\expct{\rO_{\prec}(N^{-1}){\mathfrak X}^{(p-1,p)}}$.
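Indeed, the $\rO_{\prec}(N^{-1})$ bound for the first term of the RHS of (\ref{eq:recmom_ei2}) is immediate: since ${\boldsymbol e}_{k}^*{\boldsymbol e}_{k}=1$, the bracket is deterministic and of size $(N-1)^{-1}$, so together with $\tau_{i3}\prec 1$ (a sketch, using $\tau_{i2}\prec 1$ and $\absv{\tr(A(\widetilde{B}-I)G)}\prec 1$) we get

```latex
{\boldsymbol e}_{k}^*{\boldsymbol e}_{k}-1+\frac{1}{N-1}=\frac{1}{N-1},
\qquad
\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\frac{\absv{\tau_{i3}}}{N-1}\prec \frac{1}{N}.
```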
Armed with the discussions between (\ref{eq_roughfrcancellone}) and (\ref{eq:recmom_ei2}), by a discussion similar to (\ref{eq_pkfinalrepresentation}), we can write (\ref{eq:recmomX1}) as follows \begin{equation}\label{eq_explicitlysolution} \mathbb{E}\left[ {\mathfrak X}^{(p,p)} \right]=\mathbb{E}\left[ \mathfrak{D}_1 {\mathfrak X}^{(p-1,p)} \right]+\mathbb{E}\left[ \mathfrak{D}_2 {\mathfrak X}^{(p-2,p)} \right]+\mathbb{E}\left[ \mathfrak{D}_3 {\mathfrak X}^{(p-1,p-1)} \right], \end{equation} where $\mathfrak{D}_k, k=1,2,3,$ are defined as \begin{align*} \mathfrak{D}_1:&=-\sum_i \left(\frac{1}{N}\sum_{k}^{(i)}\frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* \widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tau_{i1}\tr(GA) -\frac{1}{N}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial(\tau_{i1}\tr(GA))}{\partial g_{ik}} \right) \nonumber \\ &+\sum_{i} \left(\frac{1}{N}\sum_{k}^{(i)}\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}G_{ki}\tau_{i1}\tr(A\widetilde{B}G)\tau_{i1} +\frac{1}{N}\sum_{k}^{(i)}\frac{G_{ki}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial (\tau_{i1} \tr(A\widetilde{B}G))}{\partial g_{ik}}+ \frac{1}{zN^{2}}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}}{\norm{{\boldsymbol g}_{i}}^{2}}\frac{\partial \tau_{i2}}{\partial g_{ik}} \right) \nonumber \\ &-\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}_{i}}\frac{\partial \tau_{i3}}{\partial g_{ik}}+\rO_{\prec}\left(N^{-1}+\widehat{\Upsilon} \Psi\right), \end{align*} \begin{align*} \mathfrak{D}_2:&=\frac{p-1}{N}\sum_i \left(-\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(GA)\frac{\partial{\mathfrak X}}{\partial g_{ik}}+\sum_{k}^{(i)}\frac{G_{ki}}{\norm{{\boldsymbol
g}_{i}}}\tau_{i1}\tr(A\widetilde{B}G)\frac{\partial{\mathfrak X}}{\partial g_{ik}}\tau_{i1} \right) \\ &+\frac{p-1}{N^2}\sum_i \left(\frac{1}{z} \sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}}{\norm{{\boldsymbol g}_{i}}^{2}}\tau_{i2}\frac{\partial {\mathfrak X}}{\partial g_{ik}} - \sum_{k}^{(i)}{\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}_{i}}\tau_{i3}\frac{\partial {\mathfrak X}}{\partial g_{ik}}\right), \end{align*} \begin{align*} \mathfrak{D}_3:&=\frac{p}{N} \sum_i \left(-\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(GA)\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}+\sum_{k}^{(i)}\frac{G_{ki}}{\norm{{\boldsymbol g}_{i}}}\tau_{i1}\tr(A\widetilde{B}G)\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}\tau_{i1} \right) \\ &+\frac{p}{N^2} \sum_i \left(\frac{1}{z} \sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol g}}_{i}}{\norm{{\boldsymbol g}_{i}}^{2}}\tau_{i2}\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}}-\sum_{k}^{(i)}{\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}_{i}}\tau_{i3}\frac{\partial \overline{{\mathfrak X}}}{\partial g_{ik}} \right). \end{align*} Finally, we need to control $\mathfrak{D}_k, k=1,2,3,$ using a discussion similar to (\ref{eq_defnmathfrackc1}), (\ref{eq_defnmathfrackc2}) and (\ref{eq_defnmathfrackc3}) utilizing Lemmas \ref{lem:DeltaG} and \ref{lem:recmomerror}. Due to similarity, we only sketch the proof and omit the details. Specifically, using the linear property of ${\mathfrak X}$ (c.f. 
(\ref{eq_weighted})) and chain rules, we find that all the terms in $\mathfrak{D}_k, k=1,2,3,$ are linear combinations of the form \begin{equation*} \frac{1}{N^2} \sum_i \sum_{k}^{(i)} c_i \Gamma_{i,k}, \end{equation*} where $c_i$'s are some generic weights and $\Gamma_{i,k}$ can be one of the following expressions \begin{align} &{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr (X_{D}X_{A}\frac{\partial G}{\partial g_{ik}}){\mathfrak X}^{\mathbb{J}}, \ \ {\boldsymbol e}_{k}^*\mathring{{\boldsymbol g}_{i}}\tr (X_{D}X_{A}\frac{\partial G}{\partial g_{ik}}){\mathfrak X}^{\mathbb{J}}, \ \ \frac{\partial\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\mathfrak X}^{(p-1,p)},\label{eq_roughfrerrorfirstthree} \\ &{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr (\frac{\partial D}{\partial g_{ik}}X_{A}G){\mathfrak X}^{\mathbb{J}}, \ \ {\boldsymbol e}_{k}^*\mathring{{\boldsymbol g}_{i}}\tr (\frac{\partial D}{\partial g_{ik}}X_{A}G){\mathfrak X}^{\mathbb{J}}, \ \ {\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\frac{\partial d_{i}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p)}, \ \ {\boldsymbol e}_{k}^* \mathring{{\boldsymbol g}}_{i}\frac{\partial d_{i}}{\partial g_{ik}}{\mathfrak X}^{(p-1,p)}, \label{eq_roughfrerrorlastfour} \end{align} where $X_{i}$ is either $\widetilde{B}^{\langle i\rangle}$ or $I$, $X_{A}$ is $A$, $I$ or $A^{-1}$, $X_{D}$ is either $D$ or $I$, and $\mathbb{J}$ is $(p-1,p),(p-2,p)$, or $(p-1,p-1)$. All the terms in (\ref{eq_roughfrerrorfirstthree}) can be tackled using Lemma \ref{lem:recmomerror}, whereas the terms in (\ref{eq_roughfrerrorlastfour}) can be bounded using Proposition \ref{prop:entrysubor}, the assumption (\ref{eq:weight_cond}), Lemmas \ref{lem:DeltaG} and \ref{lem:recmomerror} with the following linear property \begin{equation*} \tr(\frac{\partial D}{\partial g_{ik}}X_{A}G)=\frac{1}{N}\sum_{j}\frac{\partial d_{j}}{\partial g_{ik}}(X_{A}G)_{jj}. 
\end{equation*} Based on the above discussion, we can show that \begin{equation*} \mathfrak{D}_1 \prec \widehat{\Pi}^2+\Psi \widehat{\Upsilon}, \ \mathfrak{D}_2 \prec \Psi^2 \widehat{\Pi}^2, \ \mathfrak{D}_3 \prec \Psi^2 \widehat{\Pi}^2. \end{equation*} This completes the proof of (\ref{eq:Avrecmomentrec}). \end{proof} \subsection{Optimal fluctuation averaging}\label{subsec_stronglocallawfixed} In this section, we will establish an estimate for the key quantities regarding the stability of the system that governs the subordination functions (c.f. (\ref{eq_subordinationsystemab})). Such an estimate relies on strengthening the estimates obtained in Proposition \ref{prop:FA1} for certain explicit choices of the functions $d_i, i=1,2,\cdots,N;$ see (\ref{eq_optimalfaquantities}) and (\ref{eq_optimalfaquantitiescoeff}) for details. These results will serve as the basis for the proof of Theorem \ref{thm:main}. Moreover, as an important byproduct, we can show the closeness between the subordination functions and their approximations in Definition \ref{defn_asf}. Denote \begin{align*} \Lambda_{A}(z) \mathrel{\mathop:}= \Omega_{A}(z)-\Omega_{A}^{c}(z), \ \Lambda_{B}(z)\mathrel{\mathop:}= \Omega_{B}(z)-\Omega_{B}^{c}(z), \ \Lambda(z) \mathrel{\mathop:}= \absv{\Lambda_{A}(z)}+\absv{\Lambda_{B}(z)}. \end{align*} As we have seen in Section \ref{sec_subordiationproperties}, especially Lemma \ref{lem:OmegaBound2}, the control of $\Lambda$ is expected to reduce to studying a system similar to (\ref{eq_suborsystemPhi}).
Recall that $\Phi_{AB} \equiv (\Phi_A, \Phi_B)$ is defined on $\mathbb{C}_+^3$ and that the subordination functions $\Omega_A$ and $\Omega_B$ are governed by the following system \begin{equation}\label{eq_subordinationsystemab} \Phi_{AB}(\Omega_A(z), \Omega_B(z), z)=0, \end{equation} where $\Phi_{A}$ and $\Phi_{B}$ are defined as follows \begin{equation} \label{eq:def_PhiAB} \Phi_{A}(\omega_{1},\omega_{2},z)\mathrel{\mathop:}= \frac{M_{\mu_{A}}(\omega_{2})}{\omega_{2}}-\frac{\omega_{1}}{z},\AND \Phi_{B}(\omega_{1},\omega_{2},z)\mathrel{\mathop:}= \frac{M_{\mu_{B}}(\omega_{1})}{\omega_{1}}-\frac{\omega_{2}}{z}. \end{equation} We further introduce the following shorthand notations \begin{equation} \label{eq_phacdefinition} \Phi_{A}^{c}\equiv \Phi_{A}^{c}(z)\mathrel{\mathop:}= \Phi_{A}(\Omega_{A}^{c}(z),\Omega_{B}^{c}(z),z) \AND \Phi_{B}^{c}\equiv \Phi_{B}^{c}(z)\mathrel{\mathop:}= \Phi_{B}(\Omega_{A}^{c}(z),\Omega_{B}^{c}(z),z). \end{equation} Whenever there is no ambiguity, we will omit the dependence on $z$ and we will often consider $(\Phi_{A},\Phi_{B})$ as a function of the first two variables $(\omega_{1},\omega_{2})$. Recall that $\mathcal{S}_{AB}$, $\mathcal{T}_{A}$ and $\mathcal{T}_{B}$ are defined analogously to (\ref{eq_defn_salphabeta}) and (\ref{eq_defn_talpha}) by replacing the pair $(\alpha, \beta)$ with $(A,B).$ We first introduce an estimate regarding the linear combination of $\mathcal{S}_{AB}, \mathcal{T}_{A}, \mathcal{T}_B$ and $\Lambda.$ It serves as a fundamental input for the continuity argument in Section \ref{sec_finalsection}. Before stating the result, we observe that, under Assumption \ref{assu_ansz}, by (\ref{eq:Lambda}) and Proposition \ref{prop:entrysubor}, \begin{equation}\label{eq_lambdainitialbound} \Lambda \prec N^{-\gamma/4}. \end{equation} \begin{prop}\label{prop:FA2} Fix $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}).$ Suppose the assumptions of Proposition \ref{prop:entrysubor} hold.
Let $\widehat{\Lambda}(z)$ be a deterministic positive function such that $\Lambda(z)\prec\widehat{\Lambda}(z)\prec N^{-\gamma/4}$. Then we have for $\iota=A,B,$ \begin{equation} \label{eq:FA2} \Absv{\frac{{\mathcal S}_{AB}(z)}{z}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}^{2}+O(\absv{\Lambda_{\iota}}^{3})}\prec \mho, \end{equation} where $\mho$ is defined as \begin{equation} \label{eq_defnTheta} \mho \equiv \mho(z):= \Psi^{2}\left(\sqrt{(\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda}(z))(\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda}(z))}+\Psi^{2}\right). \end{equation} \end{prop} The proof of Proposition \ref{prop:FA2} relies on an estimate for a special linear combination of the $Q_i$'s and their analogues. Recall (\ref{eq_mtrasindenity}). Denote \begin{align} \label{eq_frz1z2definition} {\mathfrak Z}_{1}\mathrel{\mathop:}= \Phi_{A}^{c}+zL_{\mu_{A}}'(\Omega_{B})\Phi_{B}^{c}, \ {\mathfrak Z}_{2}\mathrel{\mathop:}= \Phi_{B}^{c}+zL_{\mu_{B}}'(\Omega_{A})\Phi_{A}^{c}, \ {\mathfrak Z}_{1}^{(p,q)}\mathrel{\mathop:}={\mathfrak Z}_{1}^{p}\overline{{\mathfrak Z}}_{1}^{q}, \ {\mathfrak Z}_{2}^{(p,q)}&\mathrel{\mathop:}={\mathfrak Z}_{2}^{p}\overline{{\mathfrak Z}}_{2}^{q}. \end{align} We collect the recursive moment estimates for ${\mathfrak Z}_{1}$ and ${\mathfrak Z}_{2}$ in the following lemma. Its proof will be provided after we finish proving Proposition \ref{prop:FA2}. \begin{lem}\label{lem:Zrecmoment} Fix $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}).$ Suppose the assumptions of Proposition \ref{prop:FA2} hold. Then we have \begin{equation} \label{eq_Zrecmomentequation} \expct{{\mathfrak Z}_{1}^{(p,p)}}=\expct{\rO_{\prec}(\mho){\mathfrak Z}_{1}^{(p-1,p)}}+\expct{\rO_{\prec}(\mho^{2}){\mathfrak Z}_{1}^{(p-2,p)}}+\expct{\rO_{\prec}(\mho^{2}){\mathfrak Z}_{1}^{(p-1,p-1)}}, \end{equation} where $\mho$ is defined in (\ref{eq_defnTheta}). Similar results hold for ${\mathfrak Z}_{2}$.
\end{lem} Armed with Lemma \ref{lem:Zrecmoment}, we are ready to prove Proposition \ref{prop:FA2}. \begin{proof}[\bf Proof of Proposition \ref{prop:FA2}] Due to similarity, we will focus on the discussion when $\iota=A.$ First of all, we provide some identities to connect the left-hand side of (\ref{eq:FA2}) with ${\mathfrak Z}_1$ and ${\mathfrak Z}_2.$ Observing (\ref{eq_lambdainitialbound}), by (\ref{eq_subordinationsystemab}) and a discussion similar to (\ref{eq_diferentialoperator}), we expand $(\Phi_{A},\Phi_{B})$ around $(\Omega_{A},\Omega_{B})$ and obtain \begin{align} \Phi_{A}^{c} &=L_{\mu_{A}}'(\Omega_{B})\Lambda_{B}+\frac{1}{2}L_{\mu_{A}}''(\Omega_{B})\Lambda_{B}^{2}-\frac{\Lambda_{A}}{z}+O(\Lambda_{B}^{3}), \label{eq_phacexpansion}\\ \Phi_{B}^{c}&=L_{\mu_{B}}'(\Omega_{A})\Lambda_{A} +\frac{1}{2}L_{\mu_{B}}''(\Omega_{A})\Lambda_{A}^{2}-\frac{\Lambda_{B}}{z}+O(\Lambda_{A}^{3}). \label{eq_phbcexpansion} \end{align} Combining (\ref{eq_phbcexpansion}) and (\ref{eq_phacexpansion}), by a simple algebraic calculation (i.e., inserting (\ref{eq_phbcexpansion}), solved for $\Lambda_B$, into (\ref{eq_phacexpansion})), we can conclude that \begin{align}\label{eq:weight_constr} \Phi_{A}^{c}+zL_{\mu_{A}}'(\Omega_{B})\Phi_{B}^{c} &=\left(zL_{\mu_{B}}'(\Omega_{A})L_{\mu_{A}}'(\Omega_{B})-\frac{1}{z}\right)\Lambda_{A} +\frac{1}{2}\left(zL_{\mu_{B}}''(\Omega_{A})L_{\mu_{A}}'(\Omega_{B})+(zL_{\mu_{B}}'(\Omega_{A}))^{2}L_{\mu_{A}}''(\Omega_{B})\right)\Lambda_{A}^{2} \nonumber \\ &+O(\absv{\Phi_{B}^{c}}^{2})+O(\absv{\Phi_{B}^{c}\Lambda_{A}})+O(\absv{\Lambda_{A}}^{3}) \nonumber \\ & =\frac{{\mathcal S}_{AB}}{z}\Lambda_{A}+{\mathcal T}_{A}\Lambda_{A}^{2}+O(\absv{\Phi_{B}^{c}}^{2})+O(\absv{\Phi_{B}^{c}\Lambda_{A}})+O(\absv{\Lambda_{A}}^{3}).
\end{align} This implies that \begin{equation} \label{eq:ZtoFA2} \frac{{\mathcal S}_{AB}}{z}\Lambda_{A}+{\mathcal T}_{A}\Lambda_{A}^{2}+O(\absv{\Lambda_{A}}^{3})={\mathfrak Z}_{1}+\rO_{\prec}(\absv{\Phi_{B}^{c}}^{2}+\absv{\Phi_{B}^{c}\Lambda_{A}}). \end{equation} By a discussion similar to (\ref{eq_arbitraydiscussion}), using Young's and Markov's inequalities, together with Lemma \ref{lem:Zrecmoment}, we have that \begin{equation}\label{eq_frz1frz2control} {\mathfrak Z}_1 \prec \mho, \ {\mathfrak Z}_2 \prec \mho. \end{equation} In what follows, we will apply Proposition \ref{prop:FA1} to prove that \begin{equation}\label{eq_phibc} \Phi_\iota^c \prec \Psi \mho^{1/2}, \ \iota=A,B. \end{equation} Recall (\ref{eq_defnq}) and denote \begin{equation*} {\mathcal Q}_{i}\mathrel{\mathop:}= \tr({\mathcal G}\widetilde{A}B){\mathcal G}_{ii}-\tr(B{\mathcal G})({\mathcal G}\widetilde{A})_{ii}. \end{equation*} We next write $\Phi_A^c$ and $\Phi_B^c$ as linear combinations of $Q_i$'s and $\mathcal{Q}_i$'s, respectively. Specifically, \begin{align} \label{eq_optimalfaquantities} \Phi_{A}^{c}=\frac{1}{N}\sum_{i}{\mathfrak d}_{i1}Q_{i}, \ \ \Phi_{B}^{c}=\frac{1}{N}\sum_{i}{\mathfrak d}_{i2}{\mathcal Q}_{i}, \end{align} where we denote \begin{align}\label{eq_optimalfaquantitiescoeff} {\mathfrak d}_{i1}&\mathrel{\mathop:}= z\frac{1-M_{\mu_{A}}(\Omega_{B}^{c})}{(zm_{H}(z)+1)^{2}}\frac{a_{i}\tr(\widetilde{B}G)-\tr(A\widetilde{B}G)}{(a_{i}-\Omega_{B}^{c})},& {\mathfrak d}_{i2}&\mathrel{\mathop:}= z\frac{1-M_{\mu_{B}}(\Omega_{A}^{c})}{(zm_{H}(z)+1)^{2}} \frac{b_{i}\tr(\widetilde{A}{\mathcal G})-\tr(B\widetilde{A}{\mathcal G})}{b_{i}-\Omega_{A}^{c}}. 
\end{align} Indeed, (\ref{eq_optimalfaquantities}) follows from the following decompositions \begin{align*} \Phi_{A}^{c}&=\frac{m_{\mu_{A}}(\Omega_{B}^{c})}{(\Omega_{B}^{c}m_{\mu_{A}}(\Omega_{B}^{c})+1)}-\frac{\Omega_{A}^{c}}{z} \\ &=\frac{1}{(\Omega_{B}^{c}m_{\mu_{A}}(\Omega_{B}^{c})+1)(zm_{H}(z)+1)}\left[m_{\mu_{A}}(\Omega_{B}^{c})(zm_{H}(z)+1)-\tr(\widetilde{B}G)(\Omega_{B}^{c}m_{\mu_{A}}(\Omega_{B}^{c})+1)\right] \\ &=\frac{1-M_{\mu_{A}}(\Omega_{B}^{c})}{zm_{H}(z)+1}\frac{1}{N^{2}} \sum_{i,j}\left[\frac{a_{j}}{a_{i}-\Omega_{B}^{c}}(\widetilde{B}G)_{jj}-(\widetilde{B}G)_{ii}\frac{a_{j}}{a_{j}-\Omega_{B}^{c}}\right]\\ &=\frac{1-M_{\mu_{A}}(\Omega_{B}^{c})}{zm_{H}(z)+1}\frac{1}{N^{2}}\sum_{i,j}a_{j}\left[(\widetilde{B}G)_{jj}\left(\frac{1}{a_{i}-\Omega_{B}^{c}}-(\widetilde{B}G)_{ii}\right)+(\widetilde{B}G)_{ii}\left((\widetilde{B}G)_{jj}-\frac{1}{a_{j}-\Omega_{B}^{c}}\right)\right] \\ &=\frac{z}{N}\frac{1-M_{\mu_{A}}(\Omega_{B}^{c})}{zm_{H}(z)+1}\left[-\sum_{i}\frac{1}{a_{i}-\Omega_{B}^{c}}Q_{i}+\frac{\tr(\widetilde{B}G)}{zm_{H}(z)+1}\sum_{j}\frac{a_{j}}{a_{j}-\Omega_{B}^{c}}Q_{j}\right] \\ &=z\frac{1-M_{\mu_{A}}(\Omega_{B}^{c})}{(zm_{H}(z)+1)^{2}}\frac{1}{N}\sum_{i}\frac{a_{i}\tr(\widetilde{B}G)-\tr(A\widetilde{B}G)}{(a_{i}-\Omega_{B}^{c})}Q_{i}, \end{align*} where in the second equality we used (\ref{eq:apxsubor}) and the definition in (\ref{eq_mtrasindenity}). 
Similarly, we have \begin{align*} \Phi_{B}^{c}&=\frac{m_{\mu_{B}}(\Omega_{A}^{c})}{\Omega_{A}^{c}m_{\mu_{B}}(\Omega_{A}^{c})+1}-\frac{\Omega_{B}^{c}}{z} \\ &=\frac{1-M_{\mu_{B}}(\Omega_{A}^{c})}{zm_{H}(z)+1}\frac{1}{N^{2}}\sum_{ij}\left[\frac{b_{j}}{b_{i}-\Omega_{A}^{c}}({\mathcal G}\widetilde{A})_{jj}-({\mathcal G}\widetilde{A})_{ii}\frac{b_{j}}{b_{j}-\Omega_{A}^{c}}\right] \\ &=\frac{1-M_{\mu_{B}}(\Omega_{A}^{c})}{zm_{H}(z)+1}\frac{1}{N^{2}}\sum_{ij}b_{j}\left[\frac{1}{b_{i}-\Omega_{A}^{c}}({\mathcal G}\widetilde{A})_{jj}-({\mathcal G}\widetilde{A})_{ii}\frac{1}{b_{j}-\Omega_{A}^{c}}\right] \\ &=z\frac{1-M_{\mu_{B}}(\Omega_{A}^{c})}{(zm_{H}(z)+1)^{2}}\frac{1}{N}\sum_{i} \frac{b_{i}\tr(\widetilde{A}{\mathcal G})-\tr(B\widetilde{A}{\mathcal G})}{b_{i}-\Omega_{A}^{c}}{\mathcal Q}_{i}. \end{align*} To apply Proposition \ref{prop:FA1}, we need to check whether its assumptions are satisfied in our case. Recall (\ref{eq_defnTheta}). First, we show that we can choose $\widehat{\Pi}=\mho^{1/2}.$ By (\ref{eq_mtrasindenity}) and (\ref{eq_suborsystem}), we find that \begin{align} &m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z) =\frac{(zm_{H}(z)+1)(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)}{z}\left[\frac{1}{zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1}-\frac{1}{zm_{H}(z)+1}\right] \nonumber \\ &=\frac{(zm_{H}(z)+1)(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)}{z}\left(M_{\mu_{A}\boxtimes\mu_{B}}(z)-\frac{zm_{H}(z)}{zm_{H}(z)+1}\right) \nonumber \\ &=\frac{(zm_{H}(z)+1)(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)}{z}\left(\frac{\Omega_{A}(z)\Omega_{B}(z)}{z}-\frac{\Omega_{A}^{c}\Omega_{B}^{c}}{z}+z\frac{\tr(AG)\tr(\widetilde{B}G)-\tr G\tr(A\widetilde{B}G)}{(1+z\tr G)^{2}}\right) \nonumber \\ &=\frac{(zm_{H}(z)+1)(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)}{z}\left(\frac{\Omega_{A}(z)\Omega_{B}(z)}{z}-\frac{\Omega_{A}^{c}\Omega_{B}^{c}}{z}+\frac{\Upsilon(z)}{(1+z\tr G)^{2}}\right).
\nonumber \end{align} Together with Propositions \ref{prop:stabN}, \ref{prop:entrysubor}, and \ref{prop:FA1}, we find that \begin{equation} \label{eq_rigidityuse} \absv{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}\prec \Lambda(z)+\Upsilon\prec\widehat{\Lambda}+\Psi^{2}. \end{equation} This yields that \begin{equation*} \Pi^{2}=\frac{\im m_H(z)}{N\eta} \prec\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda}+\Psi^{2}}{N\eta} \prec\Psi^{2}\left(\sqrt{(\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda})(\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda})}+\Psi^{2}\right), \end{equation*} where we used $\im m_{\mu_{A}\boxtimes\mu_{B}}(z)\prec \absv{{\mathcal S}_{AB}(z)}$ in the last inequality, which follows from (ii) and (iii) of Proposition \ref{prop:stabN}. In other words, we see that \begin{equation}\label{eq_propfa1condition1} \Pi \prec \mho^{1/2}. \end{equation} Moreover, by (ii) and (iii) of Proposition \ref{prop:stabN}, we conclude that both $\im m_{\mu_{A}\boxtimes\mu_{B}}(z)$ and $\absv{{\mathcal S}_{AB}(z)}$ are bounded. Consequently, we have that \begin{equation}\label{eq_propfa1condition2} \mho^{1/2} \prec \Psi. \end{equation} Using (ii) and (iii) of Proposition \ref{prop:stabN} again, we can conclude that \begin{equation*} \frac{1}{N\sqrt{\eta}} \prec \Psi^{2}\sqrt{\im m_{\mu_{A}\boxtimes\mu_{B}}(z)\absv{{\mathcal S}_{AB}(z)}}\prec \Psi^{2}\left(\sqrt{(\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda})(\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda})}+\Psi^{2}\right). \end{equation*} This implies that \begin{equation}\label{eq_propfa1condition3} N^{-1/2}\eta^{-1/4} \prec \mho^{1/2}. \end{equation} By (\ref{eq_propfa1condition1}), (\ref{eq_propfa1condition2}) and (\ref{eq_propfa1condition3}), we have seen that we can choose $\widehat{\Pi}=\mho^{1/2}.$ Second, we show that the coefficients $\{{\mathfrak d}_{i1}\}$ and $\{{\mathfrak d}_{i2}\}$ satisfy Assumption \ref{assum_conditiond}. 
Indeed, for $j=1,2,\cdots,N,$ we can write ${\mathfrak d}_{j1}$ as \begin{equation*} {\mathfrak d}_{j1}= \frac{z}{\Omega_B^c m_{\mu_A}(\Omega_B^c)(\tr{(A \widetilde{B}G)})^2} \frac{a_j \tr{(\widetilde{B}G)}-\tr{(A \widetilde{B}G)}}{a_j-\Omega_B^c}, \end{equation*} where we used (\ref{eq:apxsubor}). Using the definition of $\Omega_B^c$ in Definition \ref{defn_asf}, we find that ${\mathfrak d}_{j1}$ can be regarded as a smooth function of $\tr{(\widetilde{B}G)}$ and $\tr{G}.$ Then, using the chain rule, the general Leibniz rule and Lemma \ref{lem:recmomerror}, it is easy to see that ${\mathfrak d}_{j1}$ satisfies Assumption \ref{assum_conditiond}. Similar results hold for ${\mathfrak d}_{j2}.$ Based on the above discussion, we find that both $\Phi_A^c$ and $\Phi_B^c$ satisfy the conditions of Proposition \ref{prop:FA1} with $\widehat{\Pi}=\mho^{1/2}.$ Then Proposition \ref{prop:FA1} implies (\ref{eq_phibc}). Now we return to (\ref{eq:ZtoFA2}). By (\ref{eq_frz1frz2control}) and (\ref{eq_phibc}), we find that \begin{equation} \label{eq_laststepsabz} \frac{{\mathcal S}_{AB}}{z}\Lambda_{A}+{\mathcal T}_{A}\Lambda_{A}^{2}+O(\absv{\Lambda_{A}}^{3}) \prec \mho+\Psi^{2}\mho+\widehat{\Lambda}\Psi\mho^{1/2}. \end{equation} Recall the definition of $\mho$ in (\ref{eq_defnTheta}). It is easy to see that \begin{equation*} \widehat{\Lambda}\Psi\prec \sqrt{\widehat{\Lambda}}\Psi \prec \mho^{1/2}. \end{equation*} Together with (\ref{eq_laststepsabz}), we complete the proof of Proposition \ref{prop:FA2}. \end{proof} We next prove Lemma \ref{lem:Zrecmoment}. Before stepping into the proof, we record a bound for ${\mathfrak Z}_{1}$ and ${\mathfrak Z}_{2}$ obtained from the previous discussion. Specifically, using the definitions of ${\mathfrak Z}_{1}$ and ${\mathfrak Z}_{2}$ in (\ref{eq_frz1z2definition}) and a discussion similar to (\ref{eq:LMder}) and (\ref{eq_phibc}), we conclude that \begin{equation*} {\mathfrak Z}_{1} \prec\Psi\mho^{1/2}, \ {\mathfrak Z}_{2}\prec\Psi\mho^{1/2}.
\end{equation*} \begin{proof}[\bf Proof of Lemma \ref{lem:Zrecmoment}] We only focus our proof on ${\mathfrak Z}_1$, since ${\mathfrak Z}_2$ can be handled similarly. Recall from (\ref{eq_frz1z2definition}) that ${\mathfrak Z}_1$ is a linear combination of $\Phi_A^c$ and $\Phi_B^c.$ We can write \begin{equation*} \mathbb{E}\left[ {\mathfrak Z}_1^{(p,p)} \right]=\frac{1}{N} \sum_{i=1}^N \mathbb{E} \left[ {\mathfrak d}_{i1} Q_i {\mathfrak Z}_1^{(p-1,p)} \right]+\frac{zL_{\mu_A}'(\Omega_B)}{N} \sum_{i=1}^N \mathbb{E} \left[ {\mathfrak d}_{i2} \mathcal{Q}_i {\mathfrak Z}_1^{(p-1,p)} \right]. \end{equation*} Due to similarity, we only state the estimate of the first term on the RHS of the above equation. The proof is again an application of the formula (\ref{eq_formulaintergrationbyparts}). By a discussion similar to (\ref{eq_roughfarecuversivestimateeq}) using $\widehat{\Pi}=\mho^{1/2}$, we find that \begin{equation}\label{eq_expansionstrongfr} \frac{1}{N} \sum_{i=1}^N \mathbb{E}\left[ {\mathfrak d}_{i1} Q_i {\mathfrak Z}_1^{(p-1,p)} \right]=\mathbb{E}\left[ \rO_{\prec}(\mho) {\mathfrak Z}_1^{(p-1,p)} \right]+\mathbb{E} \left[\rO_{\prec}(\Psi^2 \mho) {\mathfrak Z}_1^{(p-2,p)} \right]+\mathbb{E} \left[\rO_{\prec}(\Psi^2 \mho) {\mathfrak Z}_1^{(p-1,p-1)} \right]. \end{equation} Since the estimate of the coefficient in front of the term ${\mathfrak Z}_1^{(p-1,p)}$ matches that in (\ref{eq_Zrecmomentequation}), it suffices to improve the estimates of the second and third terms on the RHS of (\ref{eq_expansionstrongfr}) in view of (\ref{eq_propfa1condition2}). In the rest of the proof, we briefly discuss the second term on the RHS of (\ref{eq_expansionstrongfr}); the third term can be estimated similarly.
Regarding the coefficients of ${\mathfrak Z}_{1}^{(p-2,p)}$, by a discussion similar to (\ref{eq_explicitlysolution}), we find that all of them take one of the following forms \begin{align}\label{eq:Zerror} \frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}c_{i}{\boldsymbol e}_{k}^* X_{i}{\boldsymbol e}_{i} \frac{\partial {\mathfrak Z}_{1}}{\partial g_{ik}}, \ \frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}c_{i}{\boldsymbol e}_{k}^* X_{i}\mathring{{\boldsymbol g}}_{i}\frac{\partial {\mathfrak Z}_{1}}{\partial g_{ik}}, \end{align} where $c_{i}$ stand for some $\rO_{\prec}(1)$ factors. By the chain rule, we have \begin{align*} \frac{\partial {\mathfrak Z}_{1}}{\partial g_{ik}}&=\frac{\partial}{\partial g_{ik}}(\Phi_{A}(\Omega_{A}^{c},\Omega_{B}^{c})+zL_{\mu_{A}}'(\Omega_{B})\Phi_{B}(\Omega_{A}^{c},\Omega_{B}^{c}))\\ & =\left(zL'_{\mu_{A}}(\Omega_{B})L'_{\mu_{B}}(\Omega_{A}^{c})-\frac{1}{z}\right)\frac{\partial \Omega_{A}^{c}}{\partial g_{ik}}+(L'_{\mu_{A}}(\Omega_{B}^{c})-L'_{\mu_{A}}(\Omega_{B}))\frac{\partial \Omega_{B}^{c}}{\partial g_{ik}}. \end{align*} From Proposition \ref{prop:stabN} and Assumption \ref{assu_ansz}, we find that the coefficients in front of the derivatives of $\Omega_{A}^{c}$ and $\Omega_{B}^{c}$ in the above equation admit the estimates \begin{align*} \Absv{zL'_{\mu_{A}}(\Omega_{B})L'_{\mu_{B}}(\Omega_{A}^{c})-\frac{1}{z}}\prec \absv{{\mathcal S}_{AB}}+\Lambda, \ \ \absv{L'_{\mu_{A}}(\Omega_{B})-L'_{\mu_{A}}(\Omega_{B}^{c})} \prec \Lambda. \end{align*} Since they do not depend on the indices $i$ and $k$, we can simply pull them out as scaling factors. As a consequence, in light of (\ref{eq_optimalfaquantities}), the remaining weighted sum has the same form as in the second or third estimate in Lemma \ref{lem:recmomerror}, which is $\rO_{\prec}(\Pi^{2}\Psi^{2})$.
Thus we conclude that both quantities in \eqref{eq:Zerror} satisfy \begin{align} \frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}c_{i}{\boldsymbol e}_{k}^* X_{i}{\boldsymbol e}_{i} \frac{\partial {\mathfrak Z}_{1}}{\partial g_{ik}}&\prec(\absv{{\mathcal S}_{AB}}+\Lambda)\Pi^{2}\Psi^{2}\prec \mho^{2}, & \frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}c_{i}{\boldsymbol e}_{k}^* X_{i}\mathring{{\boldsymbol g}}_{i}\frac{\partial {\mathfrak Z}_{1}}{\partial g_{ik}}&\prec \Lambda\Pi^{2}\Psi^{2}\prec \mho^{2}, \end{align} where we used the definition of $\mho$ in (\ref{eq_defnTheta}) and Proposition \ref{prop:stabN} to obtain \begin{equation*} (\absv{{\mathcal S}_{AB}}+\Lambda)\Psi^{2} \prec \Psi^{2}\sqrt{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}\prec \mho \AND \Lambda\Psi^{2}\prec \mho. \end{equation*} This proves that the estimate of the coefficient of ${\mathfrak Z}_{1}^{(p-2,p)}$ is $\rO_{\prec}(\mho^2).$ This finishes our proof. \end{proof} \section{Proof of Theorems \ref{thm:main} and \ref{thm_outlierlocallaw} and Proposition \ref{prop_linearlocallaw}}\label{sec_finalsection} This section is mainly devoted to the proof of Theorems \ref{thm:main} and \ref{thm_outlierlocallaw}, and their linearization counterpart, Proposition \ref{prop_linearlocallaw}. The discussion will make use of the weak local law which will be established in Section \ref{subsec:weaklocallaw}. \subsection{Weak local laws}\label{subsec:weaklocallaw} In Section \ref{sec:pointwiselocallaw}, we proved estimates for the entries of the resolvents and optimal fluctuation averaging estimates for linear combinations of them. We also proved the closeness of the subordination functions and their approximants. However, all these results concern pointwise control for fixed $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U)$ and under Assumption \ref{assu_ansz}. In this subsection, we will establish a weak local law without imposing Assumption \ref{assu_ansz} and uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U),$ using a continuity and bootstrapping argument.
The weak local law will guarantee that Assumption \ref{assu_ansz} holds uniformly for $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ As a consequence, the results in Section \ref{sec:pointwiselocallaw} also hold uniformly for $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ More specifically, the main result of this subsection is stated in the following proposition. \begin{prop}\label{prop:weaklaw} Suppose that Assumptions \ref{assu_limit} and \ref{assu_esd} hold. Let $\tau>0$ be a sufficiently small constant and $\gamma>0$ be any fixed small constant. Then we have \begin{align}\label{eq_weaklaw} \Lambda_{d}(z)\prec \frac{1}{(N\eta)^{1/3}}, \ \Lambda(z)&\prec\frac{1}{(N\eta)^{1/3}}, \ \Lambda_{T}(z)\prec\Psi(z), \ \Lambda_o \prec \frac{1}{(N \eta)^{1/3}}, \ \Lambda_{T_o} \prec \Psi(z), \end{align} uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. The same statements hold for $\widetilde{\Lambda}_{d}$ and $\widetilde{\Lambda}_{T}$. \end{prop} The proof of Proposition \ref{prop:weaklaw} will be divided into three steps. In the first step, we prove (\ref{eq_weaklaw}) on the global scale $\eta \geq \eta_U,$ where $\eta_U$ is a sufficiently large constant. The key idea in this step is to regard $Q_i$ defined in (\ref{eq_defnq}) as a function of the random unitary matrix $U.$ This strategy has been employed in the study of the addition of random matrices in \cite[Theorem 8.1]{BEC} and of the local single ring theorem in \cite{bao2019}. An advantage of doing so is that we can employ the Gromov-Milman concentration inequality. We collect the related results in the following lemma. Denote by $U(N)$ the set of $N \times N$ unitary matrices over $\mathbb{C}$ and by $SU(N) \subseteq U(N)$ the set of special unitary matrices defined by \begin{equation*} SU(N):=\left\{ U \in U(N): \det(U)=1 \right\}. \end{equation*} Recall that both $U(N)$ and $SU(N)$ are compact Lie groups with respect to matrix multiplication.
\begin{lem}\label{lem:gromovmilman} Let $f$ be a real-valued Lipschitz continuous function defined on $U(N).$ We denote its Lipschitz constant $\mathcal{L}_f$ as \begin{equation*} {\mathcal L}_{f}\mathrel{\mathop:}= \sup\left\{\frac{\Absv{f(U_1)-f(U_2)}}{\norm{U_1-U_2}_{2}}:U_1,U_2\in U(N)\right\},\quad \norm{U_1-U_2}_{2}\mathrel{\mathop:}= \sqrt{\Tr((U_1-U_2)^* (U_1-U_2))}. \end{equation*} Moreover, we let $\nu$ be the Haar measure defined on $U(N)$ and $\nu_s$ be that on $SU(N).$ Then for some universal constants $c,C>0,$ we have that for any constant $\delta>0,$ \begin{equation} \label{eq:Gromov-Milman} \int_{U(N)}\lone\left(\Absv{f(U)-\int_{SU(N)}f(VU)\mathrm{d}\nu_{s}(V)}>\delta\right)\mathrm{d}\nu (U) \leq C\mathrm{exp}\left(-c\frac{N\delta^{2}}{{\mathcal L}_{f}^{2}}\right). \end{equation} Finally, let $H_{N}$ be the group of the form $\{\diag(\e{\mathrm{i}\theta},1,\cdots,1):\theta\in[0,2\pi]\}$ and $\nu_{h}$ be its associated Haar measure in the sense that $\theta$ is uniformly distributed on $[0,2\pi]$, then we have \begin{equation} \label{eq:integ_SU} \int_{SU(N)}\int_{H_{N}} f(VU_1W)\mathrm{d}\nu_{h}(W)\mathrm{d}\nu_{s}(V)=\int_{U(N)}f(U)\mathrm{d}\nu(U), \quad \text{for all} \ U_1 \in U(N). \end{equation} \end{lem} \begin{proof} See Corollary 4.4.28 and Lemma 4.4.29 of \cite{Anderson-Guionnet-Zeitouni2010}. 
\end{proof} In the second step, we will employ a continuity argument based on the estimates obtained from step one to prove that the results hold for each fixed $z \in \mathcal{D}_\tau(\eta_L, \eta_U).$ To this end, for $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ and $\delta,\delta'\in[0,1],$ we define events \begin{align} & \Theta(z,\delta,\delta')\mathrel{\mathop:}= \left\{\Lambda_{d}(z)\leq \delta, \widetilde{\Lambda}_{d}(z)\leq \delta, \Lambda(z)\leq \delta, \Lambda_{T}(z)\leq \delta',\widetilde{\Lambda}_{T}(z)\leq\delta'\right\}, \label{eq_setdefnTheta}\\ &\Theta_{>}(z,\delta,\delta',\epsilon')\mathrel{\mathop:}= \Theta(z,\delta,\delta')\cap\left\{\Lambda(z)\leq N^{-\epsilon'}\absv{{\mathcal S}_{AB}(z)}\right\}. \label{eq_setdefnThetag} \end{align} Moreover, we decompose the domain ${\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ into two disjoint sets, \begin{align*} {\mathcal D}_{>}\equiv{\mathcal D}_{>}(\tau,\eta_{L},\eta_{U},\epsilon)&\mathrel{\mathop:}=\left\{z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}):\sqrt{\kappa+\eta}>\frac{N^{2\epsilon}}{(N\eta)^{1/3}}\right\}, \\ {\mathcal D}_{\leq}\equiv{\mathcal D}_{\leq}(\tau,\eta_{L},\eta_{U},\epsilon)&\mathrel{\mathop:}= {\mathcal D}_{\tau}(\eta_{L},\eta_{U})\setminus{\mathcal D}_{>}. \end{align*} The main technical input for the second step is Lemma \ref{lem:iteration_weaklaw} below, whose proof will be postponed to Appendix \ref{append:C}. As will be seen from Lemma \ref{lem:iteration_weaklaw}, once we restrict ourselves to a suitable high-probability event, it is always possible to gradually improve our estimates. \begin{lem}\label{lem:iteration_weaklaw} Suppose that Assumptions \ref{assu_limit} and \ref{assu_esd} hold.
For any fixed $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$, any $\epsilon\in(0,\gamma/12)$ and $D>0$, there exists $N_{1}(D,\epsilon)\in\N$ and an event $\Xi(z,D,\epsilon)$ with \begin{equation} \label{eq_highprobabilityevent} \P(\Xi(z,D,\epsilon))\geq 1-N^{-D}, \quad \text{for all} \ N\geq N_{1}(D,\epsilon), \end{equation} such that the following statements hold: \begin{itemize} \item[(i)] For all $z\in{\mathcal D}_{>}$, \begin{equation*} \Theta_{>}\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}},\frac{\epsilon}{10}\right)\cap\Xi(z,D,\epsilon)\subset \Theta_{>}\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}},\frac{\epsilon}{2}\right). \end{equation*} \item[(ii)] For all $z\in{\mathcal D}_{\leq}$, \begin{equation*} \Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right)\cap\Xi(z,D,\epsilon)\subset \Theta\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}}\right). \end{equation*} \end{itemize} \end{lem} In the third step, we prove the uniformity. With the help of Lemma \ref{lem:iteration_weaklaw}, we first establish the results on a discrete lattice of mesh size $N^{-5}.$ Then, by using the Lipschitz continuity of the subordination functions and the resolvents, we extend the bounds to the entire domain $\mathcal{D}_{\tau}(\eta_L, \eta_U).$ In the rest of this subsection, we prove the weak local law. \begin{proof}[\bf Proof of Proposition \ref{prop:weaklaw}] We will follow the three-step strategy to complete our proof. Due to similarity, we only prove the bounds for $\Lambda_d$, $\Lambda$ and $\Lambda_T.$ \vspace{3pt} \noindent{\bf Step 1.} The goal of this step is to prove the results for $\eta \geq \eta_U.$ A main technical input is the following estimate \begin{equation} \label{eq:Qi_global} \absv{zQ_{i}}\prec \frac{1}{\sqrt{N}\eta_{U}}, \end{equation} for all fixed $z$ with $\im z\geq \eta_{U}$.
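Before turning to the proof of (\ref{eq:Qi_global}), we record the mechanism behind it (a heuristic summary, not needed for the formal argument): if a function $f$ on $U(N)$ is Lipschitz with constant ${\mathcal L}_{f}$ and its $SU(N)$-average in (\ref{eq:Gromov-Milman}) vanishes, then choosing $\delta=N^{\epsilon-1/2}{\mathcal L}_{f}$ in (\ref{eq:Gromov-Milman}) yields \begin{equation*} \P\left(\absv{f(U)}>N^{\epsilon-1/2}{\mathcal L}_{f}\right)\leq C\exp\left(-cN^{2\epsilon}\right), \quad \text{i.e.} \quad \absv{f(U)}\prec \frac{{\mathcal L}_{f}}{\sqrt{N}}. \end{equation*} In particular, a Lipschitz constant of order $\eta_{U}^{-1}$ produces exactly the scale $N^{-1/2}\eta_{U}^{-1}$ appearing in (\ref{eq:Qi_global}).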
We mention that (\ref{eq:Qi_global}) is slightly weaker than its counterpart, equation (8.35) of \cite{BEC}. Since $\eta_U$ is large, (\ref{eq:Qi_global}) is already sufficient for our discussion. To show (\ref{eq:Qi_global}), we employ (\ref{eq:Gromov-Milman}) for a properly chosen function. Specifically, for $U \in U(N),$ we let \begin{equation}\label{eq_defnfingeneral} f(U)=zQ_i(U), \end{equation} where $Q_i$ is defined in (\ref{eq_defnq}) and we regard $Q_i$ as a function of $U.$ To show that $f(\cdot)$ is Lipschitz for large $\eta_U$, we directly calculate its Lipschitz constant in the spirit of \cite[Section 8.1]{bao2019} with a slightly different justification. Indeed, since $f$ is differentiable, it suffices to bound its derivative in order to obtain the Lipschitz constant. Consider the adjoint representation of the Lie algebra associated with $U(N)$ \cite{hall2003lie}. Note that the natural product of this Lie algebra is the Lie bracket, i.e., the commutator. We now introduce a ``curve'' based on a bounded $N \times N$ Hermitian matrix $X$. Since $B$ is diagonal, by definition, we see for $0 \leq t \leq 1$ that \begin{equation*} \mathrm{Ad}_{\e{\mathrm{i} tX}}UBU^*=\e{\mathrm{i} tX}UBU^* \e{-\mathrm{i} tX}=\e{\mathrm{i} t\mathrm{ad}_{X}}(UBU^*), \end{equation*} where $\mathrm{Ad}_{\cdot}$ is the adjoint action and $\mathrm{ad}_\cdot$ is the derivative of the adjoint action at $t=0$, which turns out to be the Lie bracket. Indeed, by an elementary calculation using the Lie bracket, we see that \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} t}\left(\mathrm{Ad}_{\e{\mathrm{i} tX}}UBU^*\right)_{t=0}=\frac{\mathrm{d}}{\mathrm{d} t}\e{\mathrm{i} t\mathrm{ad}_{X}}(UBU^*)=\left(\mathrm{ad}_{X}\e{\mathrm{i} t\mathrm{ad}_{X}}UBU^*\right)_{t=0}=\mathrm{ad}_{X}UBU^*.
\end{equation*} Furthermore, it is easy to see that \begin{equation} \label{eq_generalderivativecontrolliealgebra} \frac{\mathrm{d}}{\mathrm{d} t} \left(X_{A}(\e{\mathrm{i} tX}U)B(\e{\mathrm{i} tX}U)^* G_{(\e{\mathrm{i} tX}U)}\right)_{t=0}=X_{A}(I+GA)\mathrm{ad}_{X}(UBU^*)G, \end{equation} where $X_{A}$ can be $A$, $I$, or $A^{2}$. Actually, invoking the definition of $Q_{i}$ in \eqref{eq_defnq} and using (\ref{eq:apxsubor}) together with the facts $zG+1=A\widetilde{B}G$ and $\tr A=1,$ we can write \begin{align}\label{eq:zQi} zQ_{i}=(A\widetilde{B}G)_{ii}\tr(A\widetilde{B}G)-\tr(A\widetilde{B}G)-\tr(A\widetilde{B}GA)(\widetilde{B}G)_{ii}+(\widetilde{B}G)_{ii}. \end{align} In this sense, it suffices to analyze $(X_{A}\widetilde{B}G)_{ii}$ or $\tr (X_{A}\widetilde{B}G)$, where $X_{A}$ can be $I$, $A$, or $A^{2}$, in order to study $f(U)$ in (\ref{eq_defnfingeneral}). More specifically, due to the nature of the Lie algebra and (\ref{eq:zQi}), the control of the derivative of $f$ reduces to bounding the terms on the RHS of (\ref{eq_generalderivativecontrolliealgebra}). First, for the term of the form $(X_A \widetilde{B} G)_{ii},$ by (\ref{eq_generalderivativecontrolliealgebra}) and (v) of Assumption \ref{assu_esd}, we see that \begin{align} \absv{{\boldsymbol e}_{i}^* X_{A}(I+GA)\mathrm{ad}_{X}(\widetilde{B})G{\boldsymbol e}_{i}} &\leq \norm{X(I+GA)^* X_{A}{\boldsymbol e}_{i}}\norm{\widetilde{B}G{\boldsymbol e}_{i}}+\norm{\widetilde{B}(I+GA)^* X_{A}{\boldsymbol e}_{i}}\norm{XG{\boldsymbol e}_{i}} \nonumber \\ &\leq C\eta_{U}^{-1}\norm{X}\leq C\eta_{U}^{-1}\norm{X}_{2},\label{eq_lipconstantone} \end{align} where $C \equiv C(A,B)>0$ is some constant and $\| X \|_2$ is the Hilbert-Schmidt norm of $X$. Here we used the inequalities $\norm{G}\leq \eta^{-1}$ and $\norm{X}\leq \norm{X}_{2}$.
Second, for the term of the form $\tr{(X_A \widetilde{B}G)},$ by H\"{o}lder's inequality, we have that \begin{align} \absv{\tr X_{A}(I+GA)\mathrm{ad}_{X}(\widetilde{B})G} & \leq \norm{X_{A}(I+GA)}(\tr\absv{X\widetilde{B}}+\tr\absv{\widetilde{B}X})\norm{G} \nonumber \\ & \leq \frac{C\norm{\widetilde{B}}_{2}}{N\eta_{U}}\norm{X}_{2}\leq \frac{C}{\sqrt{N}\eta_{U}}\norm{X}_{2}, \label{eq_lipconstanttwo} \end{align} where we used the Cauchy-Schwarz inequality for (\ref{eq_lipconstanttwo}). Combining (\ref{eq_lipconstantone}) and (\ref{eq_lipconstanttwo}), we conclude that for some constant $C>0,$ \begin{equation}\label{eq_lipconstantbound} {\mathcal L}_f \leq \frac{C}{\eta_U}. \end{equation} Then we show that \begin{equation} \label{eq_controlofmean} \int_{SU(N)}f(VU)\mathrm{d}\nu_{s}(V)=0. \end{equation} By (\ref{eq:integ_SU}), the fact that $B$ is diagonal, and the identity $W(\theta)BW(\theta)^*=B$ for $W(\theta)=\diag(\e{\mathrm{i}\theta},1,\cdots,1),$ we readily find that \begin{equation} \label{eq:EQi} \int_{SU(N)}f(VU)\mathrm{d}\nu_{s}(V)=\int f(U)\mathrm{d}\nu (U)=z\expct{Q_{i}}. \end{equation} Furthermore, by Proposition 3.2 and equation (3.25) of \cite{Vasilchuk2001}, we find that \begin{equation*} \expct{(GA)_{jj}(\widetilde{B}G)_{ii}}=\expct{G_{ii}(GA\widetilde{B})_{jj}}. \end{equation*} By averaging over the index $j$, we obtain $\expct{Q_{i}}=0$. Together with (\ref{eq:EQi}), we conclude the proof of (\ref{eq_controlofmean}). By (\ref{eq_lipconstantbound}) and (\ref{eq_controlofmean}), using (\ref{eq:Gromov-Milman}), we have proved the claim (\ref{eq:Qi_global}). Armed with this control, we proceed to derive some further useful estimates. We begin by showing that \begin{equation}\label{eq:Qi_global_unif} \sup_{z: \im z \geq \eta_U} |z Q_i| \prec N^{-1/2}.
\end{equation} Indeed, when $|z| \geq \sqrt{N},$ using the definition of the resolvent and (\ref{eq:priorisupp}), we find that for some constant $C>0,$ \begin{align}\label{eq_largenestimate} \sup_{\absv{z}\geq\sqrt{N}}\norm{G}&\leq \sup_{|z| \geq \sqrt{N}}\sup_{i}\{\absv{z-\lambda_{i}}^{-1}\}\\ &\leq \sup\{\absv{z-x}^{-1}: a_N b_N+o(1)<x<a_1 b_1+o(1),\absv{z}\geq \sqrt{N}\} \nonumber \\ & \leq \frac{C}{\sqrt{N}}. \nonumber \end{align} On the other hand, it is easy to see that there exists a constant $C>0$ such that the following statement holds for all fixed $z_{0}$ with $\im z_{0}\geq\eta_{U}$: \begin{equation} \label{eq:Qi_global_lattice} \left\{\absv{z_{0}Q_{i}(z_{0})}\leq N^{-1/2+\epsilon}\right\}\subset\left\{\sup\left\{\absv{zQ_{i}(z)}:\im z\geq\eta_{U},\absv{z-z_{0}}\leq N^{-1/2}\right\}\leq CN^{-1/2+\epsilon}\right\}. \end{equation} Since $G$ is $\eta_{U}^{-2}$-Lipschitz, by (\ref{eq:zQi}), $Q_i$ is also $\eta_{U}^{-2}$-Lipschitz. We take a discrete lattice $\mathcal{P}$ in $\{z:\im z\geq\eta_{U},\absv{z}\leq\sqrt{N}\}$ with side length $N^{-1/2}$ and construct a net $\widetilde{\mathcal{P}}$ which is the intersection of $\mathcal{P}$ and the term on the RHS of (\ref{eq:Qi_global_lattice}). Since $\sup_{z\in\widetilde{\mathcal{P}}}\absv{zQ_{i}(z)}\prec N^{-1/2}$, by Lipschitz continuity, we conclude the proof of (\ref{eq:Qi_global_unif}). Then we prove the estimate \begin{equation} \label{eq:Lambda_c_macro} \sup_i \sup_{\substack{\im z>\eta_{U}}}\Absv{\Lambda_{di}^c}\prec N^{-1/2}. \end{equation} To show (\ref{eq:Lambda_c_macro}), by \eqref{eq:Lambda}, we rewrite \begin{equation} \label{eq_lambdadi} \Lambda_{di}^{c}=\Absv{\frac{{a_{i}}}{(1+z\tr G)(a_{i}-\Omega_{B}^{c})}}\Absv{zQ_{i}}. \end{equation} In view of (\ref{eq:Qi_global_unif}), it suffices to show that \begin{equation}\label{eq_reduceddenominator} (1+z\tr{G})(a_i-\Omega_B^c) \prec 1. 
\end{equation} Since $\eta_{U}$ is large enough, for all $z$ with $\im z\geq\eta_{U},$ for some constant $C>0,$ we have \begin{equation} \label{eq_usenexttrab} \Absv{1+z\tr G+\frac{\tr(A\widetilde{B})}{z}}=\Absv{\tr(A\widetilde{B}((A\widetilde{B}-z)^{-1}+z^{-1}))}=\Absv{\tr((A\widetilde{B})^{2}(z(A\widetilde{B}-z))^{-1})}\leq \frac{C}{\absv{z}^{2}}, \end{equation} where in the second equality we used the property of trace and the decomposition $A \widetilde{B}=A \widetilde{B}-z+z,$ and in the last step we apply Von Neumann's trace inequality and a discussion similar to (\ref{eq_largenestimate}). Furthermore, by Gromov-Milman concentration inequality (\ref{eq:Gromov-Milman}), we readily obtain \begin{equation*} \P\left[\Absv{\tr(A\widetilde{B})-1}>\delta\right]\leq C\exp(-c(N\delta)^{2}), \end{equation*} where we used \begin{align*} \expct{\tr(A\widetilde{B})}=\frac{1}{N}\sum_{ij}a_{i}b_{j}\expct{\absv{v_{ij}}^{2}}=\tr A\tr B=1, \ \Absv{\tr(A\mathrm{ad}_{X}\widetilde{B})}\leq \frac{C}{\sqrt{N}}\norm{X}_{2}. \end{align*} Since ${\boldsymbol v}_{i}$ are uniformly distributed on the unit sphere $S^{N}$, we have that $\expct{\absv{v_{ij}}^2}=1/N$. Together with (\ref{eq_usenexttrab}), we have that \begin{equation} \label{eqeqqqqeqqqqeqqq} \absv{1+z\tr G-z^{-1}}\leq C\absv{z}^{-2}+\rO_{\prec}((\absv{z}N)^{-1}). \end{equation} By a discussion similar to (\ref{eq_usenexttrab}), we have \begin{equation*} \Absv{\tr(AG)-z^{-1}}=\Absv{\tr(A(G+z^{-1}))}=\Absv{\tr(A^{2}\widetilde{B}(z(A\widetilde{B}-z))^{-1})}\leq C\absv{z}^{-2}. \end{equation*} Based on the above calculation, we arrive at \begin{equation} \label{eq:approx_subor_macro} \Omega_{B}^{c}=z\frac{\tr(AG)}{1+z\tr G}=z(1+\rO(\absv{z}^{-1})+\rO_{\prec}(N^{-1})). \end{equation} Since $a_i$ is bounded and $\eta_U$ is large enough, by (\ref{eqeqqqqeqqqqeqqq}) and (\ref{eq:approx_subor_macro}), we conclude that \begin{equation*} (1+z\tr G)(a_{i}-\Omega_{B}^{c})=1+\rO(\absv{z}^{-1})+\rO_{\prec}(N^{-1}). 
\end{equation*} This proves (\ref{eq_reduceddenominator}) and hence (\ref{eq:Lambda_c_macro}). Next, we provide some error bounds regarding the system (\ref{eq_subordinationsystemab}). Using (\ref{eqeqqqqeqqqqeqqq}), (\ref{eq:approx_subor_macro}), (\ref{eq:Lambda_c_macro}) and (\ref{eq_multiidentity}), we find that \begin{equation*} \absv{M_{\mu_{H}}(z)-M_{\mu_{A}}(\Omega_{B}^{c})}=\Absv{\frac{1}{z\tr G+1}-\frac{1}{\Omega_{B}^{c}m_{\mu_{A}}(\Omega_{B}^{c})+1}}\prec \absv{z}^{2}N^{-1/2}, \end{equation*} where in the first step we used the definition (\ref{eq_mtrasindenity}). Recall \eqref{eq_averageqdefinitionupsilon} and \eqref{eq:approx_subor}. Similarly, we have that \begin{equation*} \absv{\Omega_{A}^{c}\Omega_{B}^{c}-zM_{\mu_{H}}(z)}=\Absv{\frac{z}{(1+zm_{H}(z))^{2}}}\Absv{\frac{1}{N}{\sum_{i}zQ_{i}}}\prec \absv{z}^3N^{-1/2}. \end{equation*} Recall the definitions in (\ref{eq_phacdefinition}). Combining the above two inequalities, by (\ref{eq:approx_subor_macro}), we get \begin{equation} \label{eq_phacboundbound1} \Absv{\Phi_{A}^c}\leq\Absv{\frac{M_{\mu_{A}}(\Omega_{B}^{c})-M_{\mu_{H}}(z)}{\Omega_{B}^{c}}}+\Absv{\frac{zM_{\mu_{H}}(z)-\Omega_{A}^{c}\Omega_{B}^{c}}{z\Omega_{B}^{c}}}\prec\absv{z} N^{-1/2}. \end{equation} On the other hand, by Lemma \ref{lem:reprMemp}, $\absv{\Phi_{A}^{c}}$ also admits the following bound \begin{equation*} \absv{\Phi_{A}^c} \prec \absv{z}^{-1}. \end{equation*} Similarly, we can show that \begin{equation*} \absv{\Phi_{B}^c} \prec \absv{z}^{-1}. \end{equation*} Together with (\ref{eq:approx_subor_macro}) and (\ref{eq_phacboundbound1}), by Lemma \ref{lem:Kantorovich_appl}, we readily find that uniformly in $z\in{\mathcal D}_{\tau}(\eta_{U},\infty)$ \begin{align}\label{eq_differencesubordinationdifference} \absv{\Lambda_{A}}=\absv{\Omega_{A}^{c}-\Omega_{A}(z)}\leq 2(\absv{\Phi_{A}^{c}(z)}+\absv{\Phi_{B}^{c}(z)})\prec \absv{z}N^{-1/2}.
\end{align} Similar results hold for $\Lambda_B.$ Fix $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U),$ say $z=E+\mathrm{i} \eta.$ By \eqref{eq:Lambda_c_macro}, (\ref{eq_differencesubordinationdifference}) and (i) of Proposition \ref{prop:stabN}, we conclude that \begin{equation*} \sup_i \Lambda_{di}(E+\mathrm{i}\eta_{U})=\sup_i \Absv{zG_{ii}-\frac{a_{i}}{a_{i}-\Omega_{B}}}\leq \sup_i \Lambda_{di}^{c}+\sup_i \left( \frac{a_i \absv{\Lambda_{B}^{c}}}{\absv{a_{i}-\Omega_{B}}(\absv{a_{i}-\Omega_{B}}-\absv{\Lambda_{B}^{c}})}\right)\prec N^{-1/2}. \end{equation*} Recall the definition of $T_i$ in (\ref{eq_shorhandnotation}). Using the trivial bound $\|G(E+\mathrm{i} \eta_U) \| \leq \eta_U^{-1},$ we find that \begin{equation*} \Lambda_T(E+\mathrm{i} \eta_U) \leq \eta_U^{-1}. \end{equation*} Similarly, we can show that $\widetilde{\Lambda}_T(E+\mathrm{i} \eta_U) \leq \eta_U^{-1}.$ Based on the above discussion, since $\eta_U$ is a sufficiently large constant, we see that Assumption \ref{assu_ansz} holds for $z=E+\mathrm{i} \eta_U.$ Then by Proposition \ref{prop:entrysubor}, we have that for $z=E+\mathrm{i} \eta_U$ \begin{align*} \Lambda_{T}\prec N^{-1/2}, \ \widetilde{\Lambda}_{T}&\prec N^{-1/2}, \ \Upsilon \prec N^{-1/2}. \end{align*} Moreover, by (iii) of Proposition \ref{prop:stabN} and (\ref{eq_differencesubordinationdifference}), we have $\Lambda_{A}(z)\prec N^{-\epsilon}\sqrt{\kappa+\eta_{U}}$ for $z=E+\mathrm{i}\eta_{U}.$ Quantitatively, for any fixed $E \in \mathbb{R},$ \begin{equation} \label{eq:WLL_global} \P\left[\Theta_{>}\left(E+\mathrm{i}\eta_{U},\frac{N^{3\epsilon}}{(N\eta_{U})^{1/3}},\frac{N^{3\epsilon}}{(N\eta_{U})^{1/2}},\frac{\epsilon}{10}\right)\right]\geq 1-N^{-D}, \end{equation} for all $D>0$ and $N\geq N_{2}(\epsilon,D),$ where $N_{2}(\epsilon,D)$ depends only on $\epsilon$ and $D$.
\vspace{3pt} \noindent{\bf Step 2.} In this step, with the estimate (\ref{eq:WLL_global}) and Lemma \ref{lem:iteration_weaklaw}, we control the probability of the ``good'' events $\Theta_{>}$ for $z \in \mathcal{D}_{>}$ and $\Theta$ for $z \in \mathcal{D}_{\leq}.$ Consequently, we can iteratively make $\im z$ smaller. More specifically, in this step, we will prove that for the high probability event $\Xi(\cdot, \cdot, \cdot)$ in (\ref{eq_highprobabilityevent}), the following statements hold: \begin{itemize} \item[(i)] For all $z\in{\mathcal D}_{>}$, \begin{equation} \label{eq:WLL_iter_event1} \Theta_{>}\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}},\frac{\epsilon}{2}\right)\cap\Xi(z-N^{-5}\mathrm{i},D,\epsilon) \subset \Theta_{>}\left(z-N^{-5}\mathrm{i},\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}},\frac{\epsilon}{2}\right); \end{equation} \item[(ii)] For all $z\in{\mathcal D}_{\leq}$, \begin{equation} \label{eq:WLL_iter_event2} \Theta\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}}\right)\cap\Xi(z-N^{-5}\mathrm{i},D,\epsilon)\subset \Theta\left(z-N^{-5}\mathrm{i},\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}}\right). \end{equation} \end{itemize} We decompose the domain $\mathcal{D}_{\tau}(\eta_L,\eta_U)$ into $\mathcal{D}_{>}$ and $\mathcal{D}_{\leq}$ because in $\mathcal{D}_{>}$ we need to keep track of the event $\Lambda \leq N^{-\epsilon/2} \absv{\mathcal{S}_{AB}}$ in order to apply (i) of Lemma \ref{lem:iteration_weaklaw}; see equation (\ref{eq_keeptrackofeventhighprobability}) for more details.
We start by discussing the event $\Theta.$ First, for generic values $z,\delta$ and $\delta',$ we claim that on the event $\Theta(z,\delta,\delta')$ defined in (\ref{eq_setdefnTheta}), there exists some constant $C>0$ such that \begin{align}\label{eq_claimoneone} \Lambda_{d}(z+w)\leq \delta+CN^{-3}, \ \Lambda_{T}(z+w) \leq \delta'+CN^{-3}, \end{align} for all $w\in\C$ with $\absv{w}\leq 2N^{-5},$ and $z+w\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. Indeed, the above estimates follow from the mean value theorem and \begin{align*} \norm{\frac{\mathrm{d}}{\mathrm{d} z}(zG+1)}\leq \eta^{-2}\leq N^{2}, \ \absv{\Omega_{A}'(z)}\leq \frac{C}{\sqrt{\kappa+\eta}}\leq N^{1/2},\ \absv{\Omega_{B}'(z)}\leq \frac{C}{\sqrt{\kappa+\eta}}\leq N^{1/2}, \end{align*} where we used (iv) of Proposition \ref{prop:stabN}. Second, we claim the following result \begin{equation} \label{eq_claimnlambda} \Lambda(z+w)\leq \delta+CN^{-3}. \end{equation} We now prove it. Recall Definition \ref{defn_asf}. Using (\ref{eq:apxsubor}) and the fact $\frac{\mathrm{d} G}{\mathrm{d} z}=G^2,$ we find that \begin{equation*} \frac{\mathrm{d}}{\mathrm{d} z}\Omega_{A}^{c}(z)=\frac{\tr(\widetilde{B}A\widetilde{B}G^{2})\tr(A\widetilde{B}G)-\tr(\widetilde{B}A\widetilde{B}G)\tr(A\widetilde{B}G^{2})}{\tr(A\widetilde{B}G)^{2}}. \end{equation*} For some constant $C>0,$ by (\ref{eq:Lambda}), (i) of Proposition \ref{prop:stabN} and the definition of $\Theta(z, \delta, \delta'),$ we have \begin{equation*} \absv{\tr(\widetilde{B}A\widetilde{B}G)}=\absv{\tr(\widetilde{B}(zG+1))}\leq1+\Absv{\frac{C z}{N}\sum_{i}\frac{1}{a_{i}-\Omega_{B}}}+C\absv{z}\delta\leq C, \end{equation*} and \begin{equation*} \absv{\tr(A\widetilde{B}G)}\geq \absv{zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1}-\delta. \end{equation*} Combining the above discussions, when $\delta \ll 1,$ we obtain that \begin{equation*} \left| \frac{\mathrm{d} }{\mathrm{d} z} \Omega_A^c(z)\right| \leq C \eta^{-2} \leq CN^2.
\end{equation*} A similar result holds for $\Omega_B^c.$ Then we can complete the proof of (\ref{eq_claimnlambda}) with the mean value theorem. Armed with (\ref{eq_claimoneone}) and (\ref{eq_claimnlambda}), we immediately see that \begin{align} \Theta\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}}\right) &\subset\bigcap_{\absv{w}\leq N^{-5}}\Theta\left(z+w,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}}+CN^{-3},\frac{N^{5\epsilon/2}}{{\sqrt{N\eta}}}+CN^{-3}\right) \nonumber\\ &\subset\bigcap_{\absv{w}\leq N^{-5}}\Theta\left(z+w,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right). \label{eq:WLL_lipschitz1} \end{align} Due to similarity, we only briefly discuss the event $\Theta_{>}$. On the event $\Theta_{>}\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}},\frac{\epsilon}{2}\right)$, by a discussion similar to (\ref{eq_claimoneone}) and (iv) of Proposition \ref{prop:stabN}, we have that for some constants $c, C>0,$ \begin{align}\label{eq_keeptrackofeventhighprobability} \Lambda(z-N^{-5}\mathrm{i})\leq \Lambda(z)+CN^{-3}&\leq N^{-\epsilon/2}\absv{{\mathcal S}_{AB}(z)}+CN^{-3} \nonumber \\ & \leq CN^{-\epsilon/2}(\kappa+\eta)^{-1/2}+CN^{-3} \leq cN^{-\epsilon/10}(\kappa+\eta-N^{-5})^{-1/2} \nonumber \\ & \leq N^{-\epsilon/10}\absv{{\mathcal S}_{AB}(z-N^{-5}\mathrm{i})}. \end{align} Consequently, we have that \begin{align} \Theta_{>}\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}},\frac{\epsilon}{2}\right) &\subset\bigcap_{\absv{w}\leq N^{-5}}\Theta_{>}\left(z+w,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}},\frac{\epsilon}{10}\right).\label{eq:WLL_lipschitz2} \end{align} By Lemma \ref{lem:iteration_weaklaw} and taking $w=-N^{-5}\mathrm{i}$ in (\ref{eq:WLL_lipschitz1}) and (\ref{eq:WLL_lipschitz2}), we have proved \eqref{eq:WLL_iter_event1} and \eqref{eq:WLL_iter_event2}.
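For orientation, we record a rough count that is implicit in the next step (not needed for the argument itself): each application of \eqref{eq:WLL_iter_event1} or \eqref{eq:WLL_iter_event2} decreases the imaginary part by $N^{-5}$, so connecting the global scale $\im z=\eta_{U}$ to a point $z$ with $\im z=\eta\geq\eta_{L}$ requires at most \begin{equation*} \left\lceil (\eta_{U}-\eta)N^{5}\right\rceil\leq \eta_{U}N^{5} \end{equation*} iterations. Hence the accompanying union bound over the events $\Xi(\cdot,D,\epsilon)$ only costs a factor polynomial in $N$, which is harmless since $D$ can be taken arbitrarily large.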
\vspace{3pt} \noindent{\bf Step 3.} In this step, we use a continuity argument to extend the bound first to a lattice of mesh size $N^{-5}$ and then to the whole domain $\mathcal{D}_{\tau}(\eta_L, \eta_U)$. To simplify notation, we denote \begin{align*} & {\mathcal P}\mathrel{\mathop:}={\mathcal D}_{\tau}(\eta_{L},\eta_{U})\cap (N^{-5}\Z)^{2}, \ \ {\mathcal P}_{>}\mathrel{\mathop:}= {\mathcal D}_{>}\cap (N^{-5}\Z)^{2}, \\ & {\mathcal P}_{\leq}\mathrel{\mathop:}= {\mathcal D}_{\leq}\cap(N^{-5}\Z)^{2}, \ \ {\mathcal P}_{E}\mathrel{\mathop:}= [E_{+}-\tau,\tau^{-1}]\cap N^{-5}\Z. \end{align*} Repeatedly applying \eqref{eq:WLL_iter_event1} and \eqref{eq:WLL_iter_event2}, we have \begin{align}\label{eq:WLL_conc} &\bigcap_{z\in{\mathcal P}}\Xi(z,D,\epsilon)\cap \bigcap_{E\in{\mathcal P}_{E}}\Theta_{>}\left(E+\mathrm{i}\eta_{U},\frac{N^{5\epsilon/2}}{(N\eta_{U})^{1/3}},\frac{N^{5\epsilon/2}}{(N\eta_{U})^{1/2}},\frac{\epsilon}{2}\right)\\ & \subset\bigcap_{z\in{\mathcal P}_{>}}\Theta_{>}\left(E+\mathrm{i}\eta,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{(N\eta)^{1/2}},\frac{\epsilon}{2}\right)\cap \bigcap_{z\in{\mathcal P}_{\leq}}\Theta\left(E+\mathrm{i}\eta,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{(N\eta)^{1/2}}\right) \nonumber \\ & \subset \bigcap_{z\in{\mathcal D}_{>}}\Theta_{>}\left(E+\mathrm{i}\eta,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{(N\eta)^{1/2}},\frac{\epsilon}{10}\right)\cap \bigcap_{z\in{\mathcal D}_{\leq}}\Theta\left(E+\mathrm{i}\eta,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{(N\eta)^{1/2}}\right), \nonumber \end{align} where in the last step we used \eqref{eq:WLL_lipschitz1} and \eqref{eq:WLL_lipschitz2}. 
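The mesh size $N^{-5}$ is compatible with the Lipschitz constant of the resolvent: by the first resolvent identity $G(z)-G(z')=(z-z')G(z)G(z')$ and $\norm{G(z)}\leq (\im z)^{-1}$, the resolvent varies by at most $N^{-5}\cdot\eta_{L}^{-2}=\rO(N^{-3})$ between neighboring lattice points. The following is a minimal numerical sanity check of this identity and the resulting bound; the matrix, its size, and the spectral parameters below are arbitrary illustrative choices, not objects from the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
# toy Hermitian matrix (symmetrized complex Gaussian entries)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (X + X.conj().T) / 2

def resolvent(z):
    """G(z) = (H - z)^{-1}."""
    return np.linalg.inv(H - z * np.eye(n))

z1 = 0.4 + 0.2j
z2 = z1 + 1e-4 * (1 + 1j)   # a nearby spectral parameter
G1, G2 = resolvent(z1), resolvent(z2)

# first resolvent identity: G(z1) - G(z2) = (z1 - z2) G(z1) G(z2)
assert np.allclose(G1 - G2, (z1 - z2) * G1 @ G2)

# Lipschitz bound: ||G(z1) - G(z2)|| <= |z1 - z2| / (Im z1 * Im z2)
op_norm = np.linalg.norm(G1 - G2, 2)
assert op_norm <= abs(z1 - z2) / (z1.imag * z2.imag)
```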
Moreover, by Lemma \ref{lem:iteration_weaklaw} and \eqref{eq:WLL_global}, when $N\geq \max(N_{1}(D,\epsilon),N_{2}(D,\epsilon)),$ the probability of the first event of (\ref{eq:WLL_conc}) is at least \begin{equation*} 1-\sum_{E\in{\mathcal P}_{E}}N^{-D}-\sum_{z\in{\mathcal P}}N^{-D}\geq 1- CN^{10-D}. \end{equation*} Since $D$ is arbitrary, we can prove the bounds on the discrete lattice by choosing $D>10$ large enough. Finally, by the Lipschitz continuity of the resolvent, as demonstrated in the discussion below (\ref{eq:Qi_global_lattice}), and that of the subordination functions in (iv) of Proposition \ref{prop:stabN}, we can extend all the bounds from the discrete lattice to $\mathcal{D}_{\tau}(\eta_L, \eta_U).$ This concludes the proof of Proposition \ref{prop:weaklaw}. \end{proof} \subsection{Strong local law: proof of Theorem \ref{thm:main}}\label{sec_proofofstronglocallaw} In this section, using the weak local law, Proposition \ref{prop:weaklaw}, as an initial input, we prove Theorem \ref{thm:main}. Indeed, the proof follows the same three-step strategy as the proof of Proposition \ref{prop:weaklaw}, except that we have better estimates. We first prepare some technical ingredients. The first ingredient is Lemma \ref{lem:self_improv} below, which provides a device for us to gradually improve the estimate of $\Lambda(z)$ with fixed $\im z.$ Its proof is deferred to Appendix \ref{append:C}. \begin{lem}\label{lem:self_improv} Let $\widehat{\Lambda}(z)$ be a deterministic control parameter such that $(N\eta)^{-1}\leq \widehat{\Lambda}(z)\leq N^{-\gamma/4}$. Moreover, denote by $\Xi(z)$ an event on which $\Lambda(z)\leq \widehat{\Lambda}(z)$ holds. 
Then for any fixed $\epsilon_{0}\in(0,\gamma/12)$ and large $D>0$, there exists $N_0 \equiv N_{0}(\epsilon_{0},D)$ such that the following hold for all $N\geq N_{0}$ and $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$: \begin{itemize} \item[(i)] If $\sqrt{\kappa+\eta}>N^{-\epsilon_{0}}\widehat{\Lambda}$, there exists a sufficiently large constant $K_0>0$ such that \begin{equation*} \prob{\Xi(z)\cap\left[\lone\left(\Lambda\leq \frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\absv{\Lambda_{\iota}}> N^{-2\epsilon_{0}}\widehat{\Lambda}+\frac{N^{7\epsilon_{0}/5}}{N\eta}\right]}\leq N^{-D}, \ \iota=A,B. \end{equation*} \item[(ii)] If $\sqrt{\kappa+\eta}\leq N^{-\epsilon_{0}}\widehat{\Lambda}$, we have \begin{equation*} \prob{\Xi(z)\cap\left[ \absv{\Lambda_{\iota}}> N^{-\epsilon_{0}}\widehat{\Lambda}+\frac{N^{7\epsilon_{0}/5}}{N\eta}\right]}\leq N^{-D}. \end{equation*} \end{itemize} \end{lem} Our second input, Lemma \ref{lem:self_improv_S}, is analogous to Lemma \ref{lem:iteration_weaklaw}. It serves as the key input to gradually extend the bounds to the entire domain $\mathcal{D}_\tau(\eta_L, \eta_U).$ Its proof will be given in Appendix \ref{append:C}. To this end, we define the domains $\widetilde{{\mathcal D}}_{>},\widetilde{{\mathcal D}}_{\leq}$ and events $\widetilde{\Theta},\widetilde{\Theta}_{>}$ as follows: \begin{align*} &\widetilde{{\mathcal D}}_{>}\mathrel{\mathop:}= \left\{z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}):\sqrt{\kappa+\eta}>\frac{N^{2\epsilon}}{N\eta}\right\},& &\widetilde{{\mathcal D}}_{\leq}\mathrel{\mathop:}= \left\{z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}):\sqrt{\kappa+\eta}\leq\frac{N^{2\epsilon}}{N\eta}\right\}, \\ &\widetilde{\Theta}(z,\delta)\mathrel{\mathop:}= \left\{\Lambda\leq \delta \right\},& &\widetilde{\Theta}_{>}(z,\delta,\epsilon')\mathrel{\mathop:}= \widetilde{\Theta}(z,\delta)\cap \left\{\Lambda(z)\leq N^{-\epsilon'}\absv{{\mathcal S}_{AB}}\right\}. 
\end{align*} \begin{lem}\label{lem:self_improv_S} For any fixed $\epsilon_{0}\in(0,\gamma/12)$ and large $D>0$, there exists $N_0 \equiv N_{0}(\epsilon_{0},D)$ such that the following hold for all $N\geq N_{0}$ and $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$: \begin{align} &\prob{\widetilde{\Theta}_{>}\left(z,\frac{N^{3\epsilon_{0}}}{N\eta},\frac{\epsilon_{0}}{10}\right)\setminus\widetilde{\Theta}_{>}\left(z,\frac{N^{5\epsilon_{0}/2}}{N\eta},\frac{\epsilon_{0}}{2}\right)}\leq N^{-D}, \ \ \text{ for }z\in\widetilde{{\mathcal D}}_{>},\\ &\prob{\widetilde{\Theta}\left(z,\frac{N^{3\epsilon_{0}}}{N\eta}\right)\setminus\widetilde{\Theta}\left(z,\frac{N^{5\epsilon_{0}/2}}{N\eta}\right)}\leq N^{-D}, \ \ \text{ for }z\in\widetilde{{\mathcal D}}_{\leq} . \end{align} \end{lem} Then we prove Theorem \ref{thm:main} using Lemmas \ref{lem:self_improv_S} and \ref{lem:self_improv}. \begin{proof}[\bf Proof of Theorem \ref{thm:main}] First, we prove that \begin{equation}\label{lambdaz_uniform} \Lambda \prec \frac{1}{N \eta}, \end{equation} holds uniformly in $z \in \mathcal{D}_\tau(\eta_L, \eta_U).$ As we mentioned earlier, the proof follows the same three-step strategy as the proof of Proposition \ref{prop:weaklaw}. We focus on explaining Step 1, which differs most from its counterpart in the proof of Proposition \ref{prop:weaklaw}. Analogous to (\ref{eq:WLL_global}), Step 1 will prove that for fixed $\im z=\eta_U,$ \begin{equation} \label{eq_limitstrongglobal} \inf_{E\in[E_{+}-\tau,\tau^{-1}]}\prob{\widetilde{\Theta}_{>}\left(E+\mathrm{i}\eta_{U},\frac{N^{3\epsilon_{0}}}{N\eta_U},\frac{\epsilon_{0}}{10}\right)}\geq 1-N^{-D}. 
\end{equation} To see (\ref{eq_limitstrongglobal}), on one hand, we notice that by Proposition \ref{prop:weaklaw}, we can initially choose $\widehat{\Lambda}(z)=N^{3\epsilon_{0}}(N\eta_U)^{-1/3}$ in Lemma \ref{lem:self_improv}; on the other hand, we will repeatedly apply Lemma \ref{lem:self_improv} to gradually improve the control parameter $\widehat{\Lambda}(z)$ until it reaches the bound $(N\eta_U)^{-1}$ (up to a factor of $N^{\epsilon_{0}}$). To facilitate our discussion, we denote \begin{align*} \widehat{\Lambda}_{1}(z)&\mathrel{\mathop:}=\frac{N^{3\epsilon_{0}}}{(N\eta_U)^{1/3}},& \widehat{\Lambda}_{k}(z)&\mathrel{\mathop:}= N^{(4-k)\epsilon_{0}}\frac{1}{(N\eta_U)^{1/3}}+\frac{N^{2\epsilon_{0}}}{N\eta_U},\quad\text{for } k\in\left\llbra 2,\left\lceil\frac{2}{3\epsilon_{0}}+4\right\rceil\right\rrbra. \end{align*} With the above preparation, we will prove (\ref{eq_limitstrongglobal}) by induction, in the sense that for each $k$, there exists an $N_k \equiv N_{k}(\epsilon_{0},D)$ so that \begin{equation} \label{eq:SLL_global_indct} \prob{\Lambda(E+\mathrm{i}\eta_{U})\leq\widehat{\Lambda}_{k}(E+\mathrm{i}\eta_{U})}\geq 1-kN^{-D}, \end{equation} for all $N\geq N_{k}$ and uniformly in $E\in[E_{+}-\tau,\tau^{-1}]$. Clearly, we have that $(N\eta)^{-1}\leq \widehat{\Lambda}_{k}(z)\leq N^{-\gamma/4}$ for all $k$. Therefore, Lemma \ref{lem:self_improv} is applicable with all the choices $\widehat{\Lambda}=\widehat{\Lambda}_{k}$. 
Since $\sqrt{\kappa+\eta_{U}}>N^{-\epsilon_{0}}\widehat{\Lambda}_{k}$, by Lemma \ref{lem:self_improv}, we have \begin{equation} \label{eq_withindicator} \prob{\left[\Lambda(E+\mathrm{i}\eta_{U})\leq \widehat{\Lambda}_{k}(E+\mathrm{i}\eta_{U})\right]\cap\left[\lone\left(\Lambda\leq \frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\absv{\Lambda_{\iota}}>N^{-2\epsilon_{0}}\widehat{\Lambda}_{k}+\frac{N^{7\epsilon_{0}/5}}{N\eta_U}\right]}\leq N^{-D}, \end{equation} for all $N\geq N_{k}' \equiv N_k'(\epsilon_{0},D)$ and uniformly in $E\in[E_{+}-\tau,\tau^{-1}]$. By (iii) of Proposition \ref{prop:stabN}, we find that for some constant $c>0,$ \begin{equation*} \frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\geq c\sqrt{\kappa+\eta_{U}}\geq \widehat{\Lambda}_{k}\geq \Lambda, \end{equation*} on the event $[\Lambda\leq \widehat{\Lambda}_{k}]$. Therefore, we can safely remove the indicator function in (\ref{eq_withindicator}). Consequently, combining \eqref{eq:SLL_global_indct} with (\ref{eq_withindicator}), we obtain \begin{align*} &\prob{\absv{\Lambda_{\iota}}> N^{-2\epsilon_{0}}\widehat{\Lambda}_{k}+\frac{N^{7\epsilon_{0}/5}}{N\eta_U}}\\ &\leq \prob{\Lambda> \widehat{\Lambda}_{k}}+ \prob{\left[\Lambda\leq \widehat{\Lambda}_{k}\right]\cap\left[\lone\left(\Lambda\leq \frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\absv{\Lambda_{\iota}}>N^{-2\epsilon_{0}}\widehat{\Lambda}_{k}+\frac{N^{7\epsilon_{0}/5}}{N\eta_U}\right]} \\ & \leq (k+1)N^{-D}, \end{align*} whenever $N\geq \max\{N_{k}',N_{k} \}$ and $z=E+\mathrm{i}\eta_{U}$, uniformly in $E\in[E_{+}-\tau,\tau^{-1}]$. Together with the fact that, for $N$ large enough, \begin{equation*} N^{-2\epsilon_{0}}\widehat{\Lambda}_{k}+\frac{N^{7\epsilon_{0}/5}}{N\eta_U} \leq \frac{N^{(2-k)\epsilon_{0}}}{(N\eta_U)^{1/3}}+\frac{1+N^{7\epsilon_{0}/5}}{N\eta_U}\leq \frac{N^{(3-k)\epsilon_{0}}}{(N\eta_U)^{1/3}}+\frac{N^{2\epsilon_{0}}}{N\eta_U}=\widehat{\Lambda}_{k+1}, \end{equation*} we have proved that \eqref{eq:SLL_global_indct} holds for the case $k+1$ whenever it holds for the case $k$. 
Recall that by Proposition \ref{prop:weaklaw} we have that \eqref{eq:SLL_global_indct} holds for $k=1$. By induction, \eqref{eq:SLL_global_indct} holds for all fixed $k$. In particular, taking $k=\lceil 2/(3\epsilon_{0})+4\rceil$, we have \begin{equation*} \widehat{\Lambda}_{k}\leq \frac{N^{4\epsilon_{0}-k\epsilon_{0}+2/3}+N^{2\epsilon_{0}}}{N\eta_U} \leq \frac{N^{3\epsilon_{0}}}{N\eta_U}. \end{equation*} Since by (iii) of Proposition \ref{prop:stabN}, $\Lambda\leq\widehat{\Lambda}_{k}$ implies $\Lambda\leq N^{-\epsilon_{0}/10}\absv{{\mathcal S}_{AB}}$ for $\eta=\eta_{U}$, we have proved (\ref{eq_limitstrongglobal}). Steps 2 and 3 of the proof of (\ref{lambdaz_uniform}) follow analogously to their counterparts in the proof of Proposition \ref{prop:weaklaw}, using Lemma \ref{lem:self_improv_S} and (\ref{eq_limitstrongglobal}); we omit the details here. Second, with (\ref{lambdaz_uniform}) and the local laws, we can see that the bounds in Theorem \ref{thm:main} hold. In detail, with the local law in Proposition \ref{prop:weaklaw}, we see that Assumption \ref{assu_ansz} and the assumption (\ref{eq:ansz_off}) hold uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ Consequently, Propositions \ref{prop:entrysubor} and \ref{prop_offdiagonal} hold uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ Then the bounds for the off-diagonal entries, i.e., (\ref{eq_off1}) and (\ref{eq:mainoff2}), follow immediately. For the diagonal entries, we focus on explaining (\ref{eq:main}). Recall (\ref{eq:Lambda}). We see that for any deterministic $v_1, \cdots, v_N \in \mathbb{C},$ we can write \begin{equation*} \frac{1}{N} \sum_{i=1}^N v_i \left( zG_{ii}+1-\frac{a_i}{a_i-\Omega_B^c(z)} \right)=\frac{1}{N} \sum_{i=1}^N \frac{zv_i a_i}{(1+z\tr G)(a_i-\Omega_B^c)} Q_i. \end{equation*} Regarding $\frac{zv_i a_i}{(1+z\tr G)(a_i-\Omega_B^c)}$ as the random coefficients $d_i$ in (\ref{eq_weighted}), it is easy to see that the conditions in (\ref{eq:weight_cond}) hold. 
Hence, by Proposition \ref{prop:FA1} and the weak local law, we find that \begin{equation*} \left|\frac{1}{N} \sum_{i=1}^N v_i \left( zG_{ii}+1-\frac{a_i}{a_i-\Omega_B^c(z)} \right) \right| \prec \Psi \widehat{\Pi}. \end{equation*} Together with (\ref{lambdaz_uniform}), we conclude the proof. \end{proof} \subsection{Proof of Theorem \ref{thm_outlierlocallaw} and Proposition \ref{prop_linearlocallaw}} \label{sec_suboutlierlocallaws} In this section, we prove the local laws outside the bulk spectrum. We first prove Proposition \ref{prop_linearlocallaw}. \begin{proof}[\bf Proof of Proposition \ref{prop_linearlocallaw}] We first prove the first part of the results when $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ Recall \eqref{eq:controlc1}, \eqref{eq:controlc2}, and \eqref{eq:controlc3}. Along the proof of Proposition \ref{prop:entrysubor}, we have proved that under Assumption \ref{assu_ansz} \begin{align}\label{eq:PK_improv} \absv{P_{i}}\prec\Pi_{i}, \ \ \absv{K_{i}}\prec\Pi_{i}. \end{align} Applying the same argument to the off-diagonal elements defined in (\ref{eq_quantitydefinitionoffdiagonal}), we can obtain that when $i \neq j$ \begin{align}\label{eq:PK_improv_off} \absv{P_{ij}}\prec\Pi_{i}+\Pi_{j}, \ \ \absv{K_{ij}}\prec\Pi_{i}+\Pi_{j}. \end{align} Since we have already proved in Proposition \ref{prop:weaklaw} that Assumption \ref{assu_ansz} and the assumption (\ref{eq:ansz_off}) hold uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$, \eqref{eq:PK_improv} and \eqref{eq:PK_improv_off} indeed hold uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. Moreover, applying Proposition \ref{prop:FA1} to the weights $d_{i}\equiv1, i=1,2,\cdots,N,$ and $\widehat{\Pi}=\Psi$, we see that $\Upsilon\prec\Psi^{2}$. 
Therefore, by (ii) of Proposition \ref{prop:stabN}, we conclude that for $i\in{\mathcal I}_{1}$ \begin{align}\label{eq_diagonalboundextend} \absv{G_{ii}-\Theta_{ii}}\prec\absv{Q_{i}} & \prec \absv{P_{i}}+\frac{1}{N\eta}\\ & \prec\sqrt{\frac{\im G_{ii}+\im{\mathcal G}_{ii}}{N\eta}}+\frac{1}{N\eta}\prec\sqrt{\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}+(N\eta)^{-1}}{N\eta}}+\frac{1}{N\eta} \nonumber \\ & \prec\sqrt{\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}}{N\eta}}+\frac{1}{N\eta}. \nonumber \end{align} Similarly, for $i\neq j\in{\mathcal I}_{1},$ we get \begin{equation} \label{eq_offdiagonalboundextension} \absv{G_{ij}}\prec\absv{Q_{ij}}\prec \sqrt{\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}}{N\eta}}+\frac{1}{N\eta}. \end{equation} Applying exactly the same reasoning to ${\mathcal G}$ proves that the same bounds hold for $\absv{(\mathbf{G}-\Theta)_{\mu \nu}}$ with $\mu,\nu\in{\mathcal I}_{2}$. It only remains to consider the case when $i\in{\mathcal I}_{1}$ and $\mu \in{\mathcal I}_{2}$. Here we use the fact that $Y^*\widetilde{G}=\sqrt{B}UG\sqrt{A}$ to get \begin{equation*} \mathbf{G}_{i\mu}=z^{-1/2}(Y^*\widetilde{G})_{i\mu}=z^{-1/2}\sqrt{b_{i}}\sqrt{a_{\mu}}(U^* G)_{i\mu}=z^{-1/2}\sqrt{b_{i}a_{\mu}}\e{\mathrm{i}\theta_{i}}T_{\mu}, \end{equation*} where we denote $T_{\mu}=T_{\mu-N}.$ On the other hand, using the definitions in (\ref{eq_quantitydefinitionoffdiagonal}), we have \begin{equation*} K_{ij}=(1+b_{i}\tr(GA)-\tr(GA\widetilde{B}))T_{ij}+Q_{ij}. \end{equation*} Recall from \eqref{eq_t2simplify} that \begin{equation*} 1+b_{i}\tr(GA)-\tr(GA\widetilde{B})=(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\frac{\Omega_{B}(z)}{z}(b_{i}-\Omega_{A}(z))+\rO_{\prec}\left(\frac{1}{N\eta}\right)\sim 1. 
\end{equation*} We readily obtain that \begin{align*} \absv{\mathbf{G}_{i\mu}}\leq C \absv{T_{i\mu}}\prec \absv{K_{i\mu}}+\absv{Q_{i\mu}}\prec \sqrt{\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}}{N\eta}}+\frac{1}{N\eta}, \end{align*} where $T_{i\mu}=T_{i (\mu-N)}$ and similarly for $K_{i \mu}$ and $Q_{i \mu}.$ This concludes the first part of Proposition \ref{prop_linearlocallaw}. The second part of the results follows from (ii) of Proposition \ref{prop:stabN} and the first part of the results. The calculation is standard in the literature of Random Matrix Theory; for instance, see Theorem 3.12 and its proof in \cite{alex2014}. We omit the details here. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{thm_outlierlocallaw}] By (\ref{eq_connectiongreenfunction}) and (\ref{eq_schurcomplement}), we see that the control of the off-diagonal entries, i.e., part (2) of the results, has been proved in Proposition \ref{prop_linearlocallaw}. It remains to prove part (1). We fix an arbitrarily chosen $\epsilon\in(0,\tau/100)$ and consider the event $\Xi$ on which we have \begin{align}\label{eq:locallawout_Xi} &\sup_{z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})}\eta\absv{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}\leq N^{-1+\epsilon/2},& &\max_{1\leq i\leq N/3}i^{1/3}\absv{\lambda_{i}-\gamma_{i}}\leq N^{-2/3+\epsilon}, \end{align} where in the proof we choose $\eta_{L}=N^{-1+\epsilon}$. By Theorems \ref{thm:main} and \ref{thm_rigidity}, we have $\prob{\Xi}\geq 1- N^{-D}$ for any large $D>0$. For all $z_{0}=E_{0}+\mathrm{i}\eta_{0}\in{\mathcal D}_{\tau}(\eta_{U})$ with $4\eta_{0}\leq\kappa_{0}=\absv{E_{0}-E_{+}}$, we consider a counter-clockwise square contour ${\mathcal C}(z_{0})$ with side length $\kappa_{0}$ and (bary)center $z_{0}$. 
Then, on the event $\Xi,$ by Cauchy's theorem, we have \begin{equation*} m_{H}(z_{0})-m_{\mu_{A}\boxtimes\mu_{B}}(z_{0}) =\left(\int_{{\mathcal C}_{>}(z_{0})}+\int_{{\mathcal C}_{\leq}(z_{0})}\right)\frac{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}{z-z_{0}}\mathrm{d} z, \end{equation*} where ${\mathcal C}_{>}(z_{0})={\mathcal C}(z_{0})\cap\{z:\absv{\im z}>\eta_{L}\}$ and ${\mathcal C}_{\leq}(z_{0})={\mathcal C}(z_{0})\cap\{z:\absv{\im z}\leq \eta_{L}\}$. On the contour ${\mathcal C}_{>}(z_{0}),$ we use the first bound in \eqref{eq:locallawout_Xi} to get that for some constant $C>0$ \begin{align}\label{eq:contour_in} &\Absv{\int_{{\mathcal C}_{>}(z_{0})}\frac{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}{z-z_{0}}\mathrm{d} z} \leq N^{-1+\epsilon/2}\frac{2}{\kappa_{0}}\Absv{\int_{{\mathcal C}_{>}(z_{0})}\frac{1}{\im z}\mathrm{d} z}\\ & \leq N^{-1+\epsilon/2}\frac{4}{\kappa_{0}}\left(\frac{\kappa_{0}}{\eta_{0}+\kappa_{0}/2}+\log\left(\frac{\eta_{0}+\kappa_{0}/2}{\eta_{L}}\right)\right)\leq \frac{C}{\kappa_{0}+\eta_{0}}N^{-1+\epsilon/2}\log N\leq \frac{C}{\kappa_{0}+\eta_{0}}N^{-1+\epsilon}. \nonumber \end{align} On the other hand, for $z$ on the other contour ${\mathcal C}_{\leq}(z_{0}),$ we use \begin{equation*} \absv{m_{H}(z)}\leq \frac{1}{N}\sum_{i}\frac{1}{E-\lambda_{i}}\leq \frac{1}{N}\sum_{i\leq N/3} \frac{1}{E-\gamma_{i}-i^{-1/3}N^{-2/3+\epsilon}}+\frac{1}{N}\sum_{i>N/3}\frac{1}{E-\gamma_{N/3}-3N^{-1+\epsilon}}\leq C, \end{equation*} where in the second step we used the second bound in \eqref{eq:locallawout_Xi} and in the third step we used the fact that $E_+-\gamma_i \sim i^{2/3} N^{-2/3}$. Following the same argument and using Lemma \ref{lem:rigidity_AB}, we get $\absv{m_{\mu_{A}\boxtimes\mu_{B}}(z)}\leq C$. 
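The contour representation above is simply Cauchy's integral formula applied to $m_{H}-m_{\mu_{A}\boxtimes\mu_{B}}$, which is analytic off the real axis. As an aside, the formula on a square contour can be sanity-checked numerically; everything below (the test function, which is the Stieltjes transform of a point mass, and the contour parameters) is a toy choice, unrelated to the objects of the proof.

```python
import numpy as np

def cauchy_square(f, z0, side, m=4000):
    """Approximate (2*pi*i)^(-1) times the contour integral of f(z)/(z - z0)
    over a counter-clockwise square contour of the given side length centered
    at z0, using the trapezoidal rule with m nodes per edge."""
    h = side / 2
    corners = [z0 - h - 1j * h, z0 + h - 1j * h, z0 + h + 1j * h, z0 - h + 1j * h]
    total = 0j
    for a, b in zip(corners, corners[1:] + corners[:1]):
        z = a + np.linspace(0.0, 1.0, m) * (b - a)
        g = f(z) / (z - z0)
        dz = np.diff(z)
        total += np.sum(dz * (g[1:] + g[:-1]) / 2)  # trapezoidal rule on this edge
    return total / (2j * np.pi)

# Stieltjes transform of a point mass at 2; analytic away from z = 2
f = lambda z: 1.0 / (2.0 - z)
z0 = 4.0 + 0.3j   # contour center; the pole of f lies outside the contour
approx = cauchy_square(f, z0, side=1.0)
assert abs(approx - f(z0)) < 1e-6   # Cauchy's formula recovers f(z0)
```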
Using the above bounds, we get \begin{equation} \label{eq:contour_out} \Absv{\int_{{\mathcal C}_{\leq}(z_{0})}\frac{m_{H}(z)-m_{\mu_{A}\boxtimes\mu_{B}}(z)}{z-z_{0}}\mathrm{d} z}\leq \frac{C}{\kappa_{0}}\eta_{L}=C\frac{N^{\epsilon}}{N\kappa_{0}}\leq \frac{C}{\kappa_{0}+\eta_{0}}N^{-1+\epsilon}. \end{equation} Combining \eqref{eq:contour_in} and \eqref{eq:contour_out}, we conclude our proof. \end{proof} \begin{appendix} \section{Collection of derivatives}\label{appendix_deriavtive} In this section, we collect some results involving derivatives. They can be easily checked by elementary calculation. \begin{lem}\label{lem_derivative} Recall the notations in (\ref{defn_greenfunctions}), (\ref{eq_prd}), (\ref{eq_prd2}), (\ref{eq_prd1}), (\ref{eq_pk}) and (\ref{eq_defnupsilon}). We have the following identities \begin{equation} \label{eq:Gder} \frac{\partial G}{\partial g_{ik}}=-G A\frac{\partial \widetilde{B}}{\partial g_{ik}}G =-GA \frac{\partial (R_{i}U^{\langle i\rangle}B(U^{\langle i\rangle})^* R_{i})}{\partial g_{ik}}G =-GA\left(\frac{\partial R_{i}}{\partial g_{ik}}\widetilde{B}^{\langle i\rangle}R_{i} +R_{i}\widetilde{B}^{\langle i\rangle}\frac{\partial R_{i}}{\partial g_{ik}}\right)G, \end{equation} \begin{equation} \label{eq_householdderivative} \frac{\partial R_{i}}{\partial g_{ik}} =-\frac{\partial (\ell_{i}^{2})}{\partial g_{ik}}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* -\ell_{i}^{2}\left(\frac{\partial {\boldsymbol h}_{i}}{\partial g_{ik}}{\boldsymbol e}_{i}^* +{\boldsymbol e}_{i}\frac{\partial {\boldsymbol h}_{i}^*}{\partial g_{ik}} +\frac{\partial {\boldsymbol h}_{i}}{\partial g_{ik}}{\boldsymbol h}_{i}^* +{\boldsymbol h}_{i}\frac{\partial {\boldsymbol h}_{i}^*}{\partial g_{ik}}\right), \end{equation} \begin{equation} \label{eq_hiderivative} \frac{\partial {\boldsymbol h}_{i}}{\partial g_{ik}} =\frac{1}{\norm{{\boldsymbol g}_{i}}}\frac{\partial {\boldsymbol g}_{i}}{\partial g_{ik}}+\frac{\partial 
\norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol g}_{i} =\norm{{\boldsymbol g}_{i}}^{-1}{\boldsymbol e}_{k} -\norm{{\boldsymbol g}_{i}}^{-3}\overline{g}_{ik}{\boldsymbol g}_{i}=\norm{{\boldsymbol g}_{i}}^{-1}({\boldsymbol e}_{k}-\overline{h}_{ik}{\boldsymbol h}_{i}), \end{equation} { \begin{equation} \label{eq_histartderivative} \frac{\partial {\boldsymbol h}_{i}^*}{\partial g_{ik}}=-\frac{1}{2}\norm{{\boldsymbol g}_{i}}^{-3}\bar{g}_{ik}{\boldsymbol g}_{i}^* =-\frac{1}{2}\norm{{\boldsymbol g}_{i}}^{-1}\overline{h}_{ik}{\boldsymbol h}_{i}^*, \end{equation} } \begin{align}\label{eq_partialli2} \frac{\partial \ell_{i}^{2}}{\partial g_{ik}} =2\frac{\partial\norm{{\boldsymbol e}_{i}+{\boldsymbol h}_{i}}^{-2}}{\partial g_{ik}} &=-2\norm{{\boldsymbol e}_{i}+{\boldsymbol h}_{i}}^{-4}\frac{\partial \norm{{\boldsymbol e}_{i}+{\boldsymbol h}_{i}}^{2}}{\partial g_{ik}} =-2\norm{{\boldsymbol e}_{i}+{\boldsymbol h}_{i}}^{-4}\left(\frac{\partial {\boldsymbol h}_{i}^* {\boldsymbol e}_{i}}{\partial g_{ik}}+\frac{\partial {\boldsymbol e}_{i}^*{\boldsymbol h}_{i}}{\partial g_{ik}}\right), \nonumber \\ & =\ell_{i}^{4}\norm{{\boldsymbol g}_{i}}^{-3}\overline{g}_{ik}g_{ii}=\ell_{i}^{4}\norm{{\boldsymbol g}_{i}}^{-1}\overline{h}_{ik}h_{ii}, \ k \neq i. 
\end{align} \begin{align} \frac{\partial P_{i}}{\partial g_{ik}} =&{\boldsymbol e}_{i}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}\tr(A\widetilde{B}G) +zG_{ii}\frac{\partial \tr G}{\partial g_{ik}}-\tr(\frac{\partial G}{\partial g_{ik}}A)(\widetilde{B}G)_{ii} -\tr(GA){\boldsymbol e}_{i}^* A^{-1}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i} \nonumber \\ +&({\boldsymbol e}_{i}^* \frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}+\frac{\partial T_{i}}{\partial g_{ik}})\Upsilon +(G_{ii}+T_{i})\frac{\partial \Upsilon}{\partial g_{ik}}, \label{eq_partialpi}\\ \frac{\partial \Upsilon}{\partial g_{ik}} =&z\left(\tr(2zG+1)\tr\frac{\partial G}{\partial g_{ik}}-\tr\left(\frac{\partial G}{\partial g_{ik}}A\right)\tr(\widetilde{B}G)-z\tr(GA)\tr(A^{-1}\frac{\partial G}{\partial g_{ik}})\right), \nonumber \\ \frac{\partial K_{i}}{\partial g_{ik}} =&\frac{\partial T_{i}}{\partial g_{ik}} +\tr(\frac{\partial G}{\partial g_{ik}}A)(b_{i}T_{i}+(\widetilde{B}G)_{ii}) +\tr(GA)\left(b_{i}\frac{\partial T_{i}}{\partial g_{ik}}+\frac{z}{a_{i}}{\boldsymbol e}_{i}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}\right) \nonumber \\ -&z\tr\frac{\partial G}{\partial g_{ik}}(G_{ii}+T_{i}) -\tr(GA\widetilde{B})\left({\boldsymbol e}_{i}^*\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}+\frac{\partial T_{i}}{\partial g_{ik}}\right). \label{eq_partialKderivative} \end{align} \end{lem} \section{Some auxiliary lemmas}\label{append:A} In this section, we prove some auxiliary lemmas. \subsection{Large deviation inequalities} In this subsection, we provide some large deviation type controls. \begin{lem}[Large deviation estimates]\label{lem:LDE} Let $X$ be an $N \times N$ complex-valued deterministic matrix and let ${\boldsymbol y}\in\C^{N}$ be a deterministic complex vector. 
For a real or complex Gaussian random vector ${\boldsymbol g}\in\C^{N}$ with covariance matrix $\sigma^{2}I_{N}$, we have \begin{equation} \label{eq_largedeviationbound} \absv{{\boldsymbol y}^*{\boldsymbol g}}\prec\sigma \norm{{\boldsymbol y}},\quad \absv{{\boldsymbol g}^* X{\boldsymbol g}-\sigma^{2}N\tr X}\prec\sigma^{2}\norm{X}_{2}. \end{equation} \end{lem} \begin{proof} See Lemma A.1 of \cite{BEC}. \end{proof} \begin{lem}\label{lem:DeltaG} Let $X_{i}$ be $\widetilde{B}^{\langle i\rangle}$ or $I$, $X_{A}$ be $A$ or $I$ or $A^{-1}$, and $D=(d_{i})$ be a random diagonal matrix with $\norm{D}\prec 1.$ Recall (\ref{eq_defndeltag}) and (\ref{eq_controlparameter}). Under the assumption of (\ref{eq_locallaweqbound}), for each fixed $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U),$ we have \begin{equation} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}\Delta_{G}(i,k){\boldsymbol e}_{i}\prec\Pi_{i}^{2}, \ \ \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr(DX_{A}\Delta_{G}(i,k))\prec\Pi_{i}^{2}\Psi^{2}, \label{eq:DeltaG1} \end{equation} \begin{align} \ \ \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i} \mathring{{\boldsymbol g}_{i}}\tr(DX_{A}\Delta_{G}(i,k))\prec\Pi_{i}^{2}\Psi^{2}, \ \ \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol e}_{i}^* X_{A}\Delta_{G}(i,k){\boldsymbol e}_{i}\prec \Pi_{i}^{2}, \label{eq:DeltaG2} \end{align} \begin{equation} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol h}_{i}^* \Delta_{G}(i,k){\boldsymbol e}_{i}\prec\Pi_{i}^{2}. \label{eq:DeltaG3} \end{equation} \end{lem} \begin{proof} Recall the definitions in (\ref{eq_defndeltag}) and (\ref{eq_deltarg}). 
It is easy to see that every term in $\Delta_{G}(i,k)$ belongs to one of the following forms \begin{align}\label{eq_twotermschoice} d_{i}\overline{h}_{ik} GA{\boldsymbol{\alpha}}_{i}{\boldsymbol{\beta}}_{i}^*\widetilde{B}^{\langle i\rangle}R_{i}G \ \text{or} \ d_{i}\overline{h}_{ik}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i}{\boldsymbol{\beta}}_{i}^* G, \end{align} where the $d_{i}$'s are $\rO_{\prec}(1)$, $k$-independent generic constants and ${\boldsymbol{\alpha}}_{i},{\boldsymbol{\beta}}_{i}$ are either ${\boldsymbol e}_{i}$ or ${\boldsymbol h}_{i}$. Due to similarity, we focus our discussion on the first inequality in (\ref{eq:DeltaG1}) and briefly explain the others. We start with the first inequality in (\ref{eq:DeltaG1}). In view of (\ref{eq_twotermschoice}), we find that every term in the first quantity in \eqref{eq:DeltaG1} takes one of the following two forms: \begin{align}\label{eq:Expan_DeltaG} &d_{i}\frac{1}{N}\sum_{k}^{(i)}\overline{h}_{ik}{\boldsymbol e}_{k}^* X_{i}GA{\boldsymbol{\alpha}}_{i}{\boldsymbol{\beta}}_{i}^* \widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i} =\frac{d_{i}}{N}(\mathring{{\boldsymbol h}}_{i}^* X_{i}GA{\boldsymbol{\alpha}}_{i})({\boldsymbol{\beta}}_{i}^*\widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}),\\ &d_{i}\frac{1}{N}\sum_{k}^{(i)}\overline{h}_{ik}{\boldsymbol e}_{k}^* X_{i}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i}{\boldsymbol{\beta}}_{i}^* G{\boldsymbol e}_{i} =\frac{d_{i}}{N}(\mathring{{\boldsymbol h}}_{i}^* X_{i}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i})({\boldsymbol{\beta}}_{i}^* G{\boldsymbol e}_{i}), \nonumber \end{align} where we used the definition of $\mathring{{\boldsymbol h}}_{i}$ in (\ref{eq_prd2}). Then we provide controls for the terms on the right-hand side of the above two equations. 
Since $\norm{X_{i}}$ is bounded and $\norm{{\boldsymbol h}_{i}}_{2}^{2}=1,$ by (\ref{eq_hbwtr}) and (v) of Assumption \ref{assu_esd}, we find that for some constant $C>0,$ \begin{align} &\absv{\mathring{{\boldsymbol h}}_{i}^* X_{i}GA{\boldsymbol{\alpha}}_{i}}\leq \norm{X_{i}\mathring{{\boldsymbol h}}_{i}}\norm{GA{\boldsymbol{\alpha}}_{i}} \leq C({\boldsymbol{\alpha}}_{i}^* AG^* GA{\boldsymbol{\alpha}}_{i})^{1/2},& &\absv{{\boldsymbol{\beta}}_{i}^* \widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}} \leq C({\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i})^{1/2},& \label{eq_termonecontrolone}\\ &\absv{\mathring{{\boldsymbol h}}_{i}^* X_{i}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i}} \leq C({\boldsymbol{\alpha}}_{i}^* R_{i}\widetilde{B}AG^* G A\widetilde{B}R_{i}{\boldsymbol{\alpha}}_{i})^{1/2}, & &\absv{{\boldsymbol{\beta}}_{i}^* G{\boldsymbol e}_{i}}\leq C({\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i})^{1/2}. \label{eq_termtwocontroltwo} \end{align} It remains to study the bounds in (\ref{eq_termonecontrolone}) and (\ref{eq_termtwocontroltwo}). We first study the bounds of the first terms in (\ref{eq_termonecontrolone}) and (\ref{eq_termtwocontroltwo}). When ${\boldsymbol{\alpha}}_{i}={\boldsymbol e}_{i}$, we have that for some constant $C>0$ \begin{align}\label{eq_transfernonhermitiantohermitian} {\boldsymbol{\alpha}}_{i}^* AG^* GA{\boldsymbol{\alpha}}_{i} =a_{i}{\boldsymbol e}_{i}^* \widetilde{G}^* A\widetilde{G}{\boldsymbol e}_{i} &\leq C\frac{\im G_{ii}}{\eta}, \end{align} where in the first step we used (\ref{eq_ggrelationship}) and in the second step we used (v) of Assumption \ref{assu_esd}, the Ward identity and (\ref{eq_connectiongreenfunction}). 
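Here and in what follows, the Ward identity refers to the following elementary fact, which we record for the reader's convenience: for the resolvent $G(z)=({\mathcal H}-z)^{-1}$ of any Hermitian matrix ${\mathcal H}$ and $\eta=\im z>0$,
\begin{equation*}
\sum_{j}\absv{G_{ij}(z)}^{2}=(GG^{*})_{ii}=\frac{(G-G^{*})_{ii}}{z-\bar{z}}=\frac{\im G_{ii}(z)}{\eta},
\end{equation*}
where the second equality follows from the resolvent identity $G-G^{*}=(z-\bar{z})GG^{*}$.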
Similarly, we have that \begin{align*} {\boldsymbol{\alpha}}_{i}^* R_{i}\widetilde{B}AG^* GA\widetilde{B}R_{i}{\boldsymbol{\alpha}}_{i} ={\boldsymbol h}_{i}^* G^* \widetilde{B}A^{2}\widetilde{B}G{\boldsymbol h}_{i} ={\boldsymbol e}_{i}^* B^{1/2}\widetilde{{\mathcal G}}B^{1/2}\widetilde{A}^{2}B^{1/2}\widetilde{{\mathcal G}}B^{1/2}{\boldsymbol e}_{i} &\leq C\frac{\im {\mathcal G}_{ii}}{\eta}. \end{align*} Analogously, when ${\boldsymbol{\alpha}}_{i}={\boldsymbol h}_{i}$, we have that \begin{align*} {\boldsymbol{\alpha}}_{i}^* AG^* GA{\boldsymbol{\alpha}}_{i}=&{\boldsymbol e}_{i}^*\widetilde{A}{\mathcal G}^* {\mathcal G}\widetilde{A}{\boldsymbol e}_{i} ={\boldsymbol e}_{i}^*\widetilde{A}B^{1/2}\widetilde{{\mathcal G}}^* B^{-1}\widetilde{{\mathcal G}}B^{1/2}\widetilde{A}{\boldsymbol e}_{i} =b_{i}^{-1}{\boldsymbol e}_{i}^*\widetilde{{\mathcal G}}B\widetilde{A}^{2}B\widetilde{{\mathcal G}}{\boldsymbol e}_{i} \leq C\frac{\im{\mathcal G}_{ii}}{\eta},\\ & {\boldsymbol{\alpha}}_{i}^* R_{i}\widetilde{B}AG^* GA\widetilde{B}R_{i}{\boldsymbol{\alpha}}_{i} ={\boldsymbol e}_{i}^* G^* \widetilde{B}A^{2}\widetilde{B}G{\boldsymbol e}_{i} \leq C\frac{\im G_{ii}}{\eta}. \end{align*} Moreover, for the second terms in (\ref{eq_termonecontrolone}) and (\ref{eq_termtwocontroltwo}), we readily see that $${\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i}\leq C\frac{\im G_{ii}}{\eta}.$$ Combining all the above bounds, we conclude that \begin{equation*} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}\Delta_{G}(i,k){\boldsymbol e}_{i}\prec \frac{1}{N} \frac{\im G_{ii}+\im {\mathcal G}_{ii}}{\eta} =\Pi_{i}^{2}. \end{equation*} Then we briefly discuss the proofs of the remaining inequalities. In fact, for the remaining estimates, we will expand $\Delta_{G}$ as a sum of quantities of the form \eqref{eq:Expan_DeltaG} and estimate every summand. 
Specifically, for the second inequality, all terms take one of the following two forms: \begin{align*} \frac{d_{i}}{N}\sum_{k}^{(i)}\overline{h}_{ik}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr(DX_{A}GA{\boldsymbol{\alpha}}_{i}{\boldsymbol{\beta}}_{i}^*\widetilde{B}^{\langle i\rangle}R_{i}G) =\frac{d_{i}}{N^{2}}(\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i})( {\boldsymbol{\beta}}_{i}^*\widetilde{B}^{\langle i\rangle}R_{i}GDX_{A}GA{\boldsymbol{\alpha}}_{i}),\\ \frac{d_{i}}{N}\sum_{k}^{(i)}\overline{h}_{ik}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr(DX_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i}{\boldsymbol{\beta}}_{i}^* G) =\frac{d_{i}}{N^{2}}(\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i})({\boldsymbol{\beta}}_{i}^* GDX_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i}). \end{align*} Similarly, all terms of the first quantity in (\ref{eq:DeltaG2}) can be written in the same form except that the factor $(\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i})$ needs to be replaced by $g_{ik}$ or $b_{i}g_{ik}$ respectively, according to $X_{i}=I$ or $\widetilde{B}^{\langle i\rangle}$. As $\norm{DX_{A}G^* R_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\beta}}_{i}}$ and $\norm{DX_{A}G^*{\boldsymbol{\beta}}_{i}}$ are $\rO_{\prec}(\eta^{-1})$ terms, the second inequality of (\ref{eq:DeltaG1}) follows from an argument similar to the first inequality in (\ref{eq:DeltaG1}) using \begin{equation*} \absv{\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i}}\prec \norm{G{\boldsymbol e}_{i}}=\left(\frac{\im G_{ii}}{\eta}\right)^{1/2}. \end{equation*} Analogously, the first inequality in (\ref{eq:DeltaG2}) follows from the same reasoning with the bound above and the fact that $g_{ik}\prec N^{-1/2}\prec (\im G_{ii}/\eta)^{1/2}$. 
For the second inequality in (\ref{eq:DeltaG2}), all terms take one of the following two forms: \begin{align*} \frac{d_{i}}{N}(\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i})({\boldsymbol e}_{i}^* X_{A}GA{\boldsymbol{\alpha}}_{i})({\boldsymbol{\beta}}_{i}^* \widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}),\\ \frac{d_{i}}{N}(\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i})({\boldsymbol e}_{i}^* X_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i})({\boldsymbol{\beta}}_{i}^* G{\boldsymbol e}_{i}). \end{align*} Similarly, for (\ref{eq:DeltaG3}), all terms are given by \begin{align*} (\frac{d_{i}}{N}\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i}) ({\boldsymbol h}_{i}^* GA{\boldsymbol{\alpha}}_{i})({\boldsymbol{\beta}}_{i}^* \widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}), \\ (\frac{d_{i}}{N}\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i}) ({\boldsymbol h}_{i}^* GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol{\alpha}}_{i})({\boldsymbol{\beta}}_{i}^* G{\boldsymbol e}_{i}). \end{align*} In all of these cases, the proof follows from an analogous discussion as for the previous inequalities and the estimate \begin{equation*} \mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i}=\rO_{\prec}(1), \end{equation*} which follows from Assumption \ref{assu_ansz} and the fact that the left-hand side of the above equation is either $\mathring{S}_{i}$ or $\mathring{T}_{i}$. This completes our proof. \end{proof} \begin{lem}\label{lem:recmomerror} Let $X_{i}$ be $\widetilde{B}^{\langle i\rangle}$ or $I$, $X_{A}$ be $A$ or $I$ or $A^{-1}$. Suppose that $D$ is a random diagonal matrix satisfying $\norm{D}\prec 1$. 
Then under the assumptions of Lemma \ref{lem:DeltaG}, we have \begin{align} &\frac{1}{N}\sum_{k}^{(i)}\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\prec\frac{1}{N}, & &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i} G{\boldsymbol e}_{i}\tr(DX_{A}\frac{\partial G}{\partial g_{ik}})\prec \Pi_{i}^{2}\Psi^{2}, \label{eq:condiff1} \\ &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}\mathring{{\boldsymbol g}_{i}}\tr(DX_{A}\frac{\partial G}{\partial g_{ik}})\prec \Pi_{i}^{2}\Psi^{2},& &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol e}_{i}^* X_{A}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}\prec\Pi_{i}^{2}, \label{eq:condiff2} \\ &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\frac{\partial T_{i}}{\partial g_{ik}}\prec \Pi_{i}^{2}. && \label{eq:condiff3} \end{align} \end{lem} \begin{proof} We start with the first inequality in (\ref{eq:condiff1}). Using the identity \begin{equation*} \frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}=-\frac{\overline{g}_{ik}}{2\norm{{\boldsymbol g}_{i}}^{3}}, \end{equation*} and the definitions in (\ref{eq_prd2}) and (\ref{eq_shorhandnotation}), we find that \begin{align*} \frac{1}{N}\sum_{k}^{(i)}\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i} & =-\frac{1}{\norm{{\boldsymbol g}_{i}}^{3}}\frac{1}{2N}\sum_{k}^{(i)}\overline{g}_{ik}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i} =-\frac{1}{2N\norm{{\boldsymbol g}_{i}}^{2}}\mathring{{\boldsymbol h}_{i}}^* X_{i}G{\boldsymbol e}_{i} \\ & =\left\{ \begin{array}{cc} -\frac{1}{2N}\mathring{S}_{i} & \text{if }X_{i}=\widetilde{B}^{\langle i\rangle},\\ -\frac{1}{2N}\mathring{T}_{i} & \text{if }X_{i}=I. \end{array} \right. 
\end{align*} It suffices to control $\mathring{S}_{i}$ and $\mathring{T}_{i}.$ By the definition of $\mathring{T}_i$ in (\ref{eq_shorhandnotation}), $\mathring{T}_{i}\prec 1$ follows from Assumption \ref{assu_ansz}. Moreover, under Assumption \ref{assu_ansz}, by (\ref{eq_gbgindeti}), (\ref{eq_wtbgiipointwise}), (\ref{eq:BGii}) and (\ref{eq_boundepsilon1}), we have \begin{equation*} \mathring{S}_{i}=-(\widetilde{B}G)_{ii}+G_{ii}+T_{i}+\mathsf{e}_{i1}=\left(\frac{\Omega_{B}}{z}-1\right)\frac{1}{a_{i}-\Omega_{B}}+T_{i}+\rO_{\prec}(N^{-\gamma/4}) \prec 1, \end{equation*} where we used (i) of Proposition \ref{prop:stabN}. This proves the first inequality in (\ref{eq:condiff1}). For the second inequality in (\ref{eq:condiff1}), by \eqref{eq:Gder1}, we have that \begin{align}\label{eq_secondinequalitysummation} &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr(DX_{A}\frac{\partial G}{\partial g_{ik}}) \\ =&\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr\left(DX_{A}GA({\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*\widetilde{B}^{\langle i\rangle}R_{i}+R_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*)G+DX_{A}\Delta_{G}\right). \nonumber \end{align} In view of (\ref{eq_secondinequalitysummation}), by (\ref{eq_licontrol}) and Lemma \ref{lem:DeltaG}, the contribution of the last term with $\Delta_{G}$ is $\rO_{\prec}(\Pi_{i}^{2} \Psi^2)$. 
Note that the first term can be written as \begin{align}\label{eq:Expan_Der_Trace} \frac{1}{N}\sum_{k}^{(i)}\tr(DX_{A}GA{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* \widetilde{B}^{\langle i\rangle} R_{i}G){\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i} =\frac{1}{N^{2}}\sum_{k}^{(i)}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*\widetilde{B}^{\langle i\rangle} R_{i}GDX_{A}GA{\boldsymbol e}_{k}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i} \\ =\frac{1}{N^{2}}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* \widetilde{B}^{\langle i\rangle}R_{i}GDX_{A}GAI_{i}X_{i}G{\boldsymbol e}_{i} =\frac{1}{N^{2}}(-b_{i}{\boldsymbol h}_{i}-\widetilde{B}{\boldsymbol e}_{i})^* GDX_{A}GAI_{i}X_{i}G{\boldsymbol e}_{i}, \nonumber \end{align} where we denoted $I_{i}=I-{\boldsymbol e}_{i}{\boldsymbol e}_{i}^*$ and in the last step we used (\ref{eq_hbwtr}). Using the elementary bound that \begin{equation*} \norm{DX_{A}GAI_{i}X_{i}}\leq C\norm{G}\leq \frac{C}{\eta}, \end{equation*} and by a discussion similar to (\ref{eq_transfernonhermitiantohermitian}), for some constant $C>0,$ we can bound \begin{align}\label{eq:Expan_Der_Trace2} &\frac{1}{N^{2}}\absv{(-b_{i}{\boldsymbol h}_{i}-\widetilde{B}{\boldsymbol e}_{i})^* GDX_{A}GAI_{i}X_{i}G{\boldsymbol e}_{i}}\leq \frac{C}{N^{2}\eta} \left(\norm{G^*{\boldsymbol h}_{i}+G^*(\widetilde{B})^{1/2}{\boldsymbol e}_{i}} \right)\norm{G{\boldsymbol e}_{i}} \\ & \leq C\frac{1}{N^{2}\eta}({\boldsymbol h}_{i}^* GG^* {\boldsymbol h}_{i}+{\boldsymbol e}_{i}^*(\widetilde{B})^{1/2} GG^*(\widetilde{B})^{1/2}{\boldsymbol e}_{i}+{\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i}) \nonumber \\ & =C\frac{1}{N^{2}\eta} ({\boldsymbol e}_{i}^* B^{-1/2}\widetilde{{\mathcal G}}B\widetilde{{\mathcal G}}^* B^{-1/2}{\boldsymbol e}_{i} +{\boldsymbol e}_{i}^* A^{-1/2}\widetilde{G}\widetilde{H}\widetilde{G}^* A^{-1/2}{\boldsymbol e}_{i} +{\boldsymbol e}_{i}^* A^{-1/2}\widetilde{G}^* A\widetilde{G}A^{-1/2}{\boldsymbol e}_{i}) \nonumber \\ & \leq 
C\frac{1}{N^{2}\eta^{2}}\im(\widetilde{{\mathcal G}}_{ii}+\widetilde{G}_{ii})=C\Pi_{i}^{2}\Psi^{2}. \nonumber \end{align} Similarly, we can bound the second term using the identity \begin{align*} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\tr(DX_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G)=\frac{1}{N^{2}}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* GDX_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}I_{i}X_{i}G{\boldsymbol e}_{i}. \end{align*} Regarding the first inequality in (\ref{eq:condiff2}), its proof is similar to the second inequality in (\ref{eq:condiff1}) with minor modification. For instance, the counterpart of \eqref{eq:Expan_Der_Trace} is \begin{align*} \frac{1}{N}\sum_{k}^{(i)}\tr(DX_{A}GA{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* \widetilde{B}^{\langle i\rangle} R_{i}G){\boldsymbol e}_{k}^* X_{i}\mathring{{\boldsymbol g}}_{i} =\frac{1}{N^{2}}(-b_{i}{\boldsymbol h}_{i}-\widetilde{B}{\boldsymbol e}_{i})^* GDX_{A}GAI_{i}X_{i}\mathring{{\boldsymbol g}}_{i}. \end{align*} Compared to \eqref{eq:Expan_Der_Trace2}, we can simply replace the factor $\norm{G{\boldsymbol e}_{i}}$ by $\norm{\mathring{{\boldsymbol g}}_{i}}\prec1\prec(\im G_{ii}/\eta)^{1/2}$ in the first step. 
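The estimates above repeatedly reduce quadratic forms of the type ${\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i}$ to $\im G_{ii}/\eta$; for a Hermitian matrix this is exactly the Ward identity $(GG^*)_{ii}=\im G_{ii}/\eta$ for the resolvent $G(z)=(H-z)^{-1}$ with $\eta=\im z$. The following minimal numerical sketch checks this identity on a toy random symmetric matrix; it is purely illustrative and is not the matrix model of this paper, where the identity enters only after the reduction to the Hermitized resolvents $\widetilde{G}$ and $\widetilde{{\mathcal G}}$.

```python
import numpy as np

# Toy check of the Ward identity (G G^*)_{ii} = Im G_{ii} / eta for the
# resolvent G(z) = (H - z)^{-1} of a Hermitian H.  H below is a generic
# random symmetric matrix, chosen purely for illustration.
rng = np.random.default_rng(0)
N = 100
H = rng.standard_normal((N, N))
H = (H + H.T) / np.sqrt(2 * N)          # Hermitian (real symmetric) toy matrix

z = 0.3 + 0.05j                          # spectral parameter, eta = Im z
eta = z.imag
G = np.linalg.inv(H - z * np.eye(N))     # resolvent G(z)

lhs = (G @ G.conj().T).diagonal().real   # (G G^*)_{ii}
rhs = G.diagonal().imag / eta            # Im G_{ii} / eta
assert np.allclose(lhs, rhs)             # Ward identity, entry by entry
```

Since $H$ is Hermitian, $G$ is normal, so the same computation also verifies ${\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i}=\im G_{ii}/\eta$, which is the form used in the bounds above.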
For the second inequality in (\ref{eq:condiff2}), we again use \eqref{eq:Gder1} and a discussion similar to (\ref{eq:Expan_Der_Trace}) to get \begin{align}\label{eq_bbbberrorpi2} &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol e}_{i}^* X_{A}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i} \\ & =\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol e}_{i}^* X_{A}GA({\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*\widetilde{B}^{\langle i\rangle}R_{i}+R_{i}\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*)G{\boldsymbol e}_{i}+\rO_{\prec}(\Pi_{i}^{2}) \nonumber \\ & =\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}} \frac{1}{N}\left({\boldsymbol e}_{i}^* X_{A}GAI_{i}X_{i}G{\boldsymbol e}_{i}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*\widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}+{\boldsymbol e}_{i}^* X_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}I_{i}X_{i}G{\boldsymbol e}_{i}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G{\boldsymbol e}_{i}\right)+\rO_{\prec}(\Pi_{i}^{2}), \nonumber \end{align} where in the second step we used Lemma \ref{lem:DeltaG}. To bound the above quantity, on one hand, by (\ref{eq_hbwtr}) and (\ref{eq_shorhandnotation}), under Assumption \ref{assu_ansz}, we conclude that \begin{align}\label{eq_boundfrom2} ({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* \widetilde{B}^{\langle i\rangle}R_{i}G{\boldsymbol e}_{i}=-((\widetilde{B}G)_{ii}+b_{i}T_{i})\prec 1, \ \ ({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G{\boldsymbol e}_{i} =G_{ii}+T_{i}\prec 1. 
\end{align} On the other hand, using the bound that \begin{align*} &\norm{G^* X_{A}{\boldsymbol e}_{i}}\norm{G{\boldsymbol e}_{i}} \leq C\left({\boldsymbol e}_{i}^* GG^* {\boldsymbol e}_{i}\right)^{1/2}\left({\boldsymbol e}_{i}^* G^* G{\boldsymbol e}_{i}\right)^{1/2} \\ & =C\left({\boldsymbol e}_{i}^* A^{1/2}\widetilde{G}A^{-1}\widetilde{G}A^{1/2}{\boldsymbol e}_{i}\right)^{1/2}\left({\boldsymbol e}_{i}^* A^{-1/2}\widetilde{G}A\widetilde{G}A^{-1/2}{\boldsymbol e}_{i}\right)^{1/2} \leq C\frac{\im G_{ii}}{\eta}, \end{align*} for some constant $C>0,$ and the fact that both $\norm{AI_{i}X_{i}}$ and $\norm{AR_{i}\widetilde{B}^{\langle i\rangle}I_{i}X_{i}}$ are bounded, we conclude that for some constant $C>0$ \begin{align*} |{\boldsymbol e}_{i}^*X_{A}GAI_{i}X_{i}G{\boldsymbol e}_{i}| \leq C \frac{\im G_{ii}}{\eta}, \ \ \absv{{\boldsymbol e}_{i}^*X_{A}GAR_{i}\widetilde{B}^{\langle i\rangle}I_{i}X_{i}G{\boldsymbol e}_{i}} \leq C\frac{\im G_{ii}}{\eta}. \end{align*} Together with (\ref{eq_bbbberrorpi2}) and (\ref{eq_boundfrom2}), we can complete the proof. For the inequality in (\ref{eq:condiff3}), we note that \begin{equation*} \frac{\partial T_{i}}{\partial g_{ik}}=\frac{\partial {\boldsymbol h}_{i}^*}{\partial g_{ik}} G{\boldsymbol e}_{i}+{\boldsymbol h}_{i}^* \frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{i}. 
\end{equation*} Due to similarity, we only study the summand containing the first term, i.e., \begin{align*} \frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\frac{\partial {\boldsymbol h}_{i}^*}{\partial g_{ik}}G{\boldsymbol e}_{i} &=\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol g}_{i}^* G{\boldsymbol e}_{i} =-\frac{1}{2\norm{{\boldsymbol g}_{i}}N}\sum_{k}^{(i)}\overline{h}_{ik}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol h}_{i}^* G{\boldsymbol e}_{i} \\ &=-\frac{1}{2\norm{{\boldsymbol g}_{i}}N}\mathring{{\boldsymbol h}}_{i}^* X_{i}G{\boldsymbol e}_{i}{\boldsymbol h}_{i}^* G{\boldsymbol e}_{i}=\left\{ \begin{array}{ll} -\frac{1}{2\norm{{\boldsymbol g}_{i}}N}\mathring{S}_{i}T_{i} & \text{if }X_{i}=\widetilde{B}^{\langle i\rangle},\\ -\frac{1}{2\norm{{\boldsymbol g}_{i}}N}\mathring{T}_{i}T_{i} & \text{if }X_{i}=I. \end{array} \right. , \end{align*} where we used (\ref{eq_histartderivative}). Since $\mathring{S}_{i},\mathring{T}_{i},T_{i}$ are all $\rO_{\prec}(1)$ under Assumption \ref{assu_ansz}, we can finish the proof. \end{proof} Following the same proof strategy as in Lemma \ref{lem:recmomerror}, we can prove the following result for the errors arising in the expansion of off-diagonal entries. We omit the details of the proof. \begin{lem}\label{lem:recmomerror_off} Let $X_{i}$ be $\widetilde{B}^{\langle i\rangle}$ or $I$, $X_{A}$ be $A$ or $I$ or $A^{-1}$, and $D=(d_{i})$ be a random diagonal matrix with $\norm{D}\prec 1$. Fix $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U)$ and $j\neq i$, and assume that \eqref{eq_locallaweqbound} and \eqref{eq:ansz_off} hold. 
Then we have \begin{align*} &\frac{1}{N}\sum_{k}^{(i)}\frac{\partial \norm{{\boldsymbol g}_{i}}^{-1}}{\partial g_{ik}}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{j}\prec\frac{1}{N}, & &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i} G{\boldsymbol e}_{j}\tr(DX_{A}\frac{\partial G}{\partial g_{ik}})\prec (\Pi_{i}+\Pi_{j})^{2}\Psi^{2}, \\ &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{j}{\boldsymbol e}_{i}^* X_{A}\frac{\partial G}{\partial g_{ik}}{\boldsymbol e}_{j}\prec(\Pi_{i}+\Pi_{j})^{2}, & &\frac{1}{N}\sum_{k}^{(i)}{\boldsymbol e}_{k}^* X_{i}G{\boldsymbol e}_{j}\frac{\partial T_{ij}}{\partial g_{ik}}\prec (\Pi_{i}+\Pi_{j})^{2}. & \end{align*} \end{lem} \subsection{Stability of perturbed linear system} In this section, we prove a stability result concerning perturbations of the linear system \eqref{eq_suborsystemPhi} for a sufficiently large $\eta$, i.e. Lemma \ref{lem:Kantorovich_appl}. The stability result will serve as the starting point of our bootstrapping arguments in Section \ref{sec:freeN} for the system \eqref{eq:def_Phi_ab} and in Section \ref{subsec_stronglocallawfixed} for \eqref{eq:def_PhiAB}. The proof of Lemma \ref{lem:Kantorovich_appl} relies on the well-known Kantorovich theorem. We record it in the following lemma, adapted to our setting. \begin{lem}\label{lem:Kantorovich} Let ${\mathcal C}\subset\C\times\C$ and $F:{\mathcal C}\to \C\times \C$ be a continuous function which is also continuously differentiable on $\mathrm{int}({\mathcal C})$, where $\mathrm{int}({\mathcal C})$ denotes the interior of ${\mathcal C}$. Let $x_{0}\in\mathrm{int}({\mathcal C})$, and suppose that there exist constants $b,L>0$ such that, with $\mathrm{D}$ denoting the matrix form of the differential operator, the following hold: \begin{itemize} \item[(i).] $\mathrm{D}F(x_{0})$ is non-singular; \item[(ii).] $\norm{\mathrm{D}F(x_{0})^{-1}(\mathrm{D}F(y)-\mathrm{D}F(x))}\leq L\norm{y-x}$ for all $x,y\in{\mathcal C}$; \item[(iii).] 
$\norm{\mathrm{D}F(x_{0})^{-1}F(x_{0})}\leq b$; \item [(iv).] $2bL<1$. \end{itemize} Denote \begin{align}\label{eq:def_Kant_t} t_{*}&\mathrel{\mathop:}= \frac{1-\sqrt{1-2bL}}{L}. \end{align} If $\overline{\mathrm{Ball}(x_{0},t_{*})}\subset{\mathcal C}$, then there exists a unique $x_{*}\in \overline{\mathrm{Ball}(x_{0},t_{*})}$ such that $F(x_{*})=0$, {where $\mathrm{Ball}(x,t)$ denotes the ball in $\C \times \C$ of radius $t$ centered at $x$.} \end{lem} \begin{proof} See Theorem 1 of \cite{Ferreira-Svaiter2012} and the reference therein. \end{proof} We now state the main result of this subsection. \begin{lem}\label{lem:Kantorovich_appl} For $\eta>0$ and $\theta\in(0,\pi/2)$, define \begin{equation} \label{eq_epsilonetatheta} {\mathcal E}(\eta,\theta)\mathrel{\mathop:}= \{z\in\C_{+}:\im z>\eta, \theta<\arg z <\pi-\theta\}. \end{equation} Let $(\mu_{1},\mu_{2})$ be either $(\mu_{\alpha},\mu_{\beta})$ or $(\mu_{A},\mu_{B})$, and let $\Phi$ be either $\Phi_{\alpha\beta}$ or $\Phi_{AB}$ accordingly. Moreover, let $\widetilde{\Omega}_{1}$ and $\widetilde{\Omega}_{2}$ be analytic functions mapping ${\mathcal E}(\widetilde{\eta}_{1},\widetilde{\theta}),$ for some $\widetilde{\eta}_{1}$ and $\widetilde{\theta},$ to $\C_{+}$. Denote $\widetilde{r}(z)\mathrel{\mathop:}= \Phi(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z),z)$. Assume that there exists a constant $c\in(0,1)$ such that the following hold for all $z\in{\mathcal E}(\widetilde{\eta}_{1},\widetilde{\theta})$: \begin{align}\label{eq:kanappconidition1} \Absv{\frac{\widetilde{\Omega}_{1}(z)}{z}-1}\leq c, \ \ \Absv{\frac{\widetilde{\Omega}_{2}(z)}{z}-1}\leq c, \ \ \norm{\widetilde{r}(z)}\leq c. 
\end{align} Then there exist $\eta_{1}>\widetilde{\eta}_{1}$ and $\theta>\widetilde{\theta}$ depending only on $\mu_{\alpha},\mu_{\beta}$ and $c$ such that for all sufficiently large $N,$ we have \begin{align} \absv{\widetilde{\Omega}_{1}(z)-\Omega_{1}(z)}\leq 2\norm{\widetilde{r}(z)}, \ \ \absv{\widetilde{\Omega}_{2}(z)-\Omega_{2}(z)}\leq 2\norm{\widetilde{r}(z)}, \end{align} hold for all $z\in{\mathcal E}(\eta_{1},\theta)$, where $\Omega_{1}$ and $\Omega_{2}$ are the subordination functions corresponding to the pair $(\mu_{1},\mu_{2})$ via Lemma \ref{lem_subor}. \end{lem} \begin{proof} Let $\eta_1>\widetilde{\eta}_1$ be some sufficiently large constant. For each $z$ with $\im z>\eta_{1}$, denote $\Phi_{z}(\cdot,\cdot)\mathrel{\mathop:}= \Phi(\cdot,\cdot,z).$ Our proof can be divided into two steps. In the first step, we apply Lemma \ref{lem:Kantorovich} to the function $\Phi_{z}$ with initial value $x_0=(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))$ to conclude that there exists a unique solution $x_*$ on a properly chosen open set around $x_0$. In the second step, we prove that the obtained $x_*$ coincides with $(\Omega_1(z), \Omega_2(z)).$ \\ \noindent{\bf Step 1.} In this step, we will justify that conditions (ii)--(iv) of Lemma \ref{lem:Kantorovich} hold on a properly chosen open set $\mathcal{C}_z \subset \mathbb{C}_+ \times \mathbb{C}_+$ (cf. (\ref{eq_opensetkk})) and $\overline{\text{Ball}(x_0, t_*)} \subset \mathcal{C}_z.$ Note that (i) automatically holds once we have checked (ii) and (iii). Specifically, we will establish upper bounds of the following two quantities on $\mathcal{C}_z$ (i.e. 
conditions (ii) and (iii)) \begin{align}\label{eq:Kantorovich} &\norm{\mathrm{D}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))^{-1}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))}, &\norm{\mathrm{D}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))^{-1}}\sup_{(\omega_{1},\omega_{2})\in{\mathcal C}_{z}}\norm{\mathrm{D}^{2}\Phi_{z}(\omega_{1},\omega_{2})}, \end{align} and show that their product is bounded by $1/2$ (i.e. condition (iv)). We start by verifying condition (iii) of Lemma \ref{lem:Kantorovich}. By (\ref{eq_diferentialoperator}), we have that \begin{align}\label{eq_detaileddifferential} \norm{\mathrm{D}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))^{-1}} &=\frac{1}{\absv{1-z^{2}L_{\mu_{1}}'(\widetilde{\Omega}_{2}(z))L_{\mu_{2}}'(\widetilde{\Omega}_{1}(z))}}\norm{\begin{pmatrix} 1 & -zL_{\mu_{1}}'(\widetilde{\Omega}_{2}(z)) \\ -zL_{\mu_{2}}'(\widetilde{\Omega}_{1}(z)) & 1 \end{pmatrix}} . \end{align} Moreover, under (v) of Assumption \ref{assu_esd}, by Lemmas \ref{lem:reprM} and \ref{lem:reprMemp}, we have that \begin{align*} \absv{zL_{\mu_{1}}'(\widetilde{\Omega}_{2}(z))} &=\Absv{\frac{z}{\widetilde{\Omega}_{2}(z)}\int\frac{\widetilde{\Omega}_{2}(z)}{(x-\widetilde{\Omega}_{2}(z))^{2}}\mathrm{d}\widehat{\mu}_{1}(x)} \leq\Absv{\frac{z}{\widetilde{\Omega}_{2}(z)}}\left(\int\left[\frac{x}{\absv{x-\widetilde{\Omega}_{2}(z)}^{2}}+\frac{1}{\absv{x-\widetilde{\Omega}_{2}(z)}} \right]\mathrm{d}\widehat{\mu}_{1}(x)\right)\\ &\leq \frac{1}{1-c}\left(\frac{E_{+}^{\alpha}+\delta}{(\absv{\widetilde{\Omega}_{2}(z)}-(E_{+}^{\alpha}+\delta))^{2}}+\frac{1}{\absv{\absv{\widetilde{\Omega}_{2}(z)}-(E_{+}^{\alpha}+\delta)}}\right)\widehat{\mu}_{1}(\R_{+}), \end{align*} where in the third step we used (\ref{eq:kanappconidition1}). 
Using the bound $\absv{\widetilde{\Omega}_{2}(z)/z}\geq 1-c$ implied by (\ref{eq:kanappconidition1}) and the fact that $\eta_1$ is sufficiently large, we have that for some constant $C>0,$ \begin{equation*} \absv{zL'_{\mu_{1}}(\widetilde{\Omega}_{2}(z))}\leq C\absv{z}^{-1}\leq C\eta_{1}^{-1}, \end{equation*} holds for all $z\in{\mathcal E}(\eta_{1},\widetilde{\theta})$. Together with (\ref{eq_detaileddifferential}), we conclude that for some constant $C>0,$ \begin{align*} \norm{\mathrm{D}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))^{-1}} \leq(1-C\eta_{1}^{-2})^{-1}\left(2+2C\eta_{1}^{-2}\right)^{1/2} \leq 2, \end{align*} where we simply bounded the operator norm using its Frobenius norm in the first step and used the assumption that $\eta_1$ is sufficiently large in the second step. Thus we have established that \begin{equation} \label{eq_b_used} \norm{\mathrm{D}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))^{-1}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))}\leq 2\norm{\widetilde{r}(z)}=:b. \end{equation} Next, we justify conditions (ii) and (iv) of Lemma \ref{lem:Kantorovich}. Recall (\ref{eq_diferentialoperator}). We see that \begin{equation*} \norm{\mathrm{D}^{2}\Phi_{z}(\omega_{1},\omega_{2})}=\norm{\mathrm{D}(zL_{\mu_{1}}'(\omega_{2}),zL_{\mu_{2}}'(\omega_{1}))}=\norm{ \begin{pmatrix} zL''_{\mu_{1}}(\omega_{2}) & 0 \\ 0& zL''_{\mu_{2}}(\omega_{1}) \end{pmatrix}}. \end{equation*} By Lemma \ref{lem:reprM} and (v) of Assumption \ref{assu_esd}, we have \begin{equation*} \absv{L''_{\mu_{1}}(\omega_{2})}\leq\int_{\R_{+}}\frac{1}{\absv{x-\omega_{2}}^{3}}\mathrm{d}\widehat{\mu}_{1}(x) \leq \widehat{\mu}_{1}(\R_{+})\frac{1}{\absv{\absv{\omega_{2}}-E_{+}^{\alpha}-\delta}^{3}}. 
\end{equation*} Based on the above discussion, we see that for all $z\in{\mathcal E}(\eta_{1},\widetilde{\theta})$, the function $\mathrm{D}\Phi_{z}$ is $C_{0}\absv{z}^{-2}$-Lipschitz in the domain \begin{equation} \label{eq_opensetkk} {\mathcal C}_{z}\mathrel{\mathop:}=\left\{(\omega_{1},\omega_{2})\in\C_{+}^{2}:\frac{\absv{\omega_{1}}}{\absv{z}}\geq \frac{1-c}{2}, \frac{\absv{\omega_{2}}}{\absv{z}}\geq \frac{1-c}{2}\right\}, \end{equation} for some constant $C_{0}$ depending only on $\mu_{\alpha},\mu_{\beta}$, and $c$. Therefore, we have that \begin{equation} \label{eq_lusedinproof} \norm{\mathrm{D}\Phi_{z}(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))^{-1}}\sup_{(\omega_{1},\omega_{2})\in{\mathcal C}_{z}}\norm{\mathrm{D}^{2}\Phi_{z}(\omega_{1},\omega_{2})}\leq 2C_{0}\absv{z}^{-2}\leq 2C_{0}\eta_{1}^{-2}=:L. \end{equation} Moreover, under the assumption (\ref{eq:kanappconidition1}), it is clear that $(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))\in{\mathcal C}_{z}$ for all $z\in{\mathcal E}(\eta_{1},\widetilde{\theta})$, and $b$ and $L$ defined in (\ref{eq_b_used}) and (\ref{eq_lusedinproof}) can be bounded by \begin{equation} \label{eq:Kantorovich_appl} Lb\leq 4C_{0}\eta_{1}^{-2}\norm{\widetilde{r}(z)}\leq 4C_{0}c\eta_{1}^{-2}. \end{equation} Since $\eta_1$ is sufficiently large, we conclude that condition (iv) of Lemma \ref{lem:Kantorovich} holds for our choices of $b$ and $L.$ To complete the argument of Step 1, we need to show that $\overline{\mathrm{Ball}((\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z)),t_{*})}$ is contained in ${\mathcal C}_{z}.$ Substituting $b=2\norm{\widetilde{r}(z)}$ and $L=2C_{0}\eta_{1}^{-2}$ in \eqref{eq:def_Kant_t}, we get \begin{align}\label{eq_tstartbound} t_{*}&=\frac{1-\sqrt{1-8C_{0}\eta_{1}^{-2}\norm{\widetilde{r}(z)}}}{2C_{0}\eta_{1}^{-2}}\leq 2\norm{\widetilde{r}(z)} \leq 2c, \end{align} where we used the assumption that $\eta_1$ is sufficiently large and (\ref{eq:kanappconidition1}). 
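To make the mechanism of Lemma \ref{lem:Kantorovich} concrete, the following numerical sketch computes the quantities $b$, $L$ and $t_{*}$ for a toy system of the same shape, $\Phi_{z}(\omega_{1},\omega_{2})=(\omega_{1}-zL_{1}(\omega_{2}),\,\omega_{2}-zL_{2}(\omega_{1}))$, where $L_{1},L_{2}$ are hypothetical analytic stand-ins and not the transforms $L_{\mu_{1}},L_{\mu_{2}}$ of this paper; the Lipschitz constant is estimated crudely by sampling, so the numbers are illustrative only.

```python
import numpy as np

# Hypothetical analytic stand-ins for L_{mu_1}, L_{mu_2}; both tend to a
# finite constant at infinity.  They are NOT the transforms of this paper.
def L1(w):  return 1.0 + 1.0 / (1.0 - w)
def L2(w):  return 1.0 + 0.5 / (2.0 - w)
def dL1(w): return 1.0 / (1.0 - w) ** 2
def dL2(w): return 0.5 / (2.0 - w) ** 2

def Phi(x, z):
    w1, w2 = x
    return np.array([w1 - z * L1(w2), w2 - z * L2(w1)])

def DPhi(x, z):
    w1, w2 = x
    return np.array([[1.0, -z * dL1(w2)], [-z * dL2(w1), 1.0]])

z = 10j                                  # large Im z, as in the lemma
x0 = np.array([z, z])                    # initial guess: Omega_i(z) ~ z

J0inv = np.linalg.inv(DPhi(x0, z))
b = np.linalg.norm(J0inv @ Phi(x0, z))   # condition (iii)
# crude sampled Lipschitz constant of DPhi(x0)^{-1} DPhi (condition (ii))
pts = [x0 + np.array(d) for d in ([0.6, 0], [0, 0.6], [-0.6, 0], [0, -0.6])]
L = max(np.linalg.norm(J0inv @ (DPhi(p, z) - DPhi(q, z)))
        / np.linalg.norm(p - q)
        for p in pts for q in pts if np.linalg.norm(p - q) > 0)
assert 2 * b * L < 1                     # condition (iv)
t_star = (1 - np.sqrt(1 - 2 * b * L)) / L

# Newton iteration converges to a zero of Phi_z located near x0
x = x0.copy()
for _ in range(30):
    x = x - np.linalg.solve(DPhi(x, z), Phi(x, z))
assert np.linalg.norm(Phi(x, z)) < 1e-10
assert np.linalg.norm(x - x0) < 2 * t_star
```

With the ansatz $x_{0}=(z,z)$ and $\im z$ large, condition (iv) holds with room to spare, mirroring the $\eta_{1}^{-2}$ smallness of $Lb$ in (\ref{eq:Kantorovich_appl}).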
We now choose $\theta>\widetilde{\theta}$ such that $\theta$ is sufficiently close to $\pi/2.$ Since $\absv{z}\leq\csc\theta \im z$ for $z\in {\mathcal E}(\eta_{1},\theta)$, by (\ref{eq:kanappconidition1}), we have \begin{equation*} \im \widetilde{\Omega}_{1}(z) = \im z+\im\left(z\left(\frac{\widetilde{\Omega}_{1}(z)}{z}-1\right)\right)\geq \im z -c\absv{z} \geq \eta_{1}(1-c\csc\theta). \end{equation*} Consequently, the condition $\absv{\omega_{1}-\widetilde{\Omega}_{1}(z)}\leq t_{*}$ implies that \begin{equation*} \im\omega_{1}\geq \im \widetilde{\Omega}_{1}(z)-2\norm{\widetilde{r}(z)} \geq \eta_{1}(1-c\csc\theta)-2c>0, \end{equation*} for all $z\in {\mathcal E}(\eta_{1},\theta)$. This shows that $\omega_1 \in \mathbb{C}_+.$ On the other hand, $\absv{\omega_{1}-\widetilde{\Omega}_{1}(z)}\leq t_{*}$ also yields that \begin{equation*} \frac{\absv{\omega_{1}(z)}}{\absv{z}}\geq 1-c-\frac{2c}{\absv{z}}\geq 1-c-\frac{2c}{\eta_{1}}>\frac{1-c}{2}, \end{equation*} where we used (\ref{eq_tstartbound}). Similar results hold for $\omega_2$ satisfying $|\omega_2-\widetilde{\Omega}_2(z)| \leq t_*.$ This proves that \begin{equation*} \overline{\left\{(\omega_{1},\omega_{2})\in\C_+^{2}:\norm{(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))-(\omega_{1},\omega_{2})}\leq t_{*}\right\}}\subset{\mathcal C}_{z}. \end{equation*} This finishes the proof of Step 1. \\ \noindent{\bf Step 2.} In Step 1, we have shown that the conditions of Lemma \ref{lem:Kantorovich} are satisfied. Therefore, we conclude that there exists a solution $(\omega_{1}(z),\omega_{2}(z))$ of the equation $\Phi(\cdot,\cdot, z)=0$ such that \begin{equation*} \norm{(\omega_{1}(z),\omega_{2}(z))-(\widetilde{\Omega}_{1}(z),\widetilde{\Omega}_{2}(z))}\leq t_{*}. \end{equation*} In this step, we prove that $(\omega_{1}(z),\omega_{2}(z))$ coincides with $(\Omega_{1}(z),\Omega_{2}(z))$. 
As $\Phi(\omega_{1}(z),\omega_{2}(z),z)=0$, we find that $\omega_{1}(z)/z$ satisfies \begin{equation*} \frac{\omega_{1}(z)}{z}=L_{\mu_{1}}(\omega_{2}(z))=L_{\mu_{1}}(zL_{\mu_{2}}(\omega_{1}(z))). \end{equation*} That is to say, $\omega_{1}(z)/z$ is a fixed point of the function \begin{equation*} \omega\mapsto {\mathcal F}_{z}(\omega)\mathrel{\mathop:}= L_{\mu_{1}}(zL_{\mu_{2}}(z\omega)). \end{equation*} By Lemmas \ref{lem:reprM} and \ref{lem:reprMemp}, the definition of $L_{\mu}(z)$ in (\ref{eq_mtrasindenity}) and the facts that $L_{\mu}(z)\in\C_{+}$ and $M_{\mu}(z)\in\C_{+},$ we have that \begin{equation*} 0<\arg L_{\mu}(z)<\pi-\arg z,\quad z\in\C_{+}. \end{equation*} Based on the above observation, we find that the function ${\mathcal F}_{z}$ maps the sector $\{\omega\in\C_{+}:0<\arg \omega <\pi-\arg z\}$ to itself. It is clear that the sector is conformally equivalent to the unit disk. Since ${\mathcal F}_{z}$ is an analytic self-map of the sector which is not the identity map, by the Schwarz--Pick lemma, ${\mathcal F}_z$ can have at most one fixed point in the sector. Moreover, on one hand, as $\Phi(\omega_{1}(z),\omega_{2}(z),z)=0$, we have \begin{equation*} \im \frac{\omega_{1}(z)}{z}=\im L_{\mu_{1}}(\omega_{2}(z))>0, \end{equation*} which implies that $\omega_{1}(z)/z$ is in the sector; on the other hand, $\Omega_{1}(z)/z$ is already a fixed point in the sector. This proves that $\omega_{1}(z)=\Omega_{1}(z)$. Similarly, we can show that $\omega_{2}(z)=\Omega_{2}(z).$ This finishes the proof of Step 2. \end{proof} \section{Bootstrapping continuity argument}\label{append:C} \subsection{Proof of Lemmas \ref{lem:iteration_weaklaw}, \ref{lem:self_improv} and \ref{lem:self_improv_S}} In this subsection, we prove Lemmas \ref{lem:iteration_weaklaw}, \ref{lem:self_improv} and \ref{lem:self_improv_S}. We begin with Lemma \ref{lem:iteration_weaklaw}. The proof strategy is the same as that of Lemma 8.3 of \cite{BEC}, which needs three technical inputs. 
The first input is Lemma \ref{lem:priori_weaklaw}, which is a weaker analog of Proposition \ref{prop:FA2} and will be used in the final stage of the proof of Lemma \ref{lem:iteration_weaklaw}. The other two ingredients are Lemmas \ref{lem:entrysuborconc} and \ref{lem:FA1conc} below. They are cutoff versions of Proposition \ref{prop:entrysubor} and Proposition \ref{prop:FA1} respectively; see the cutoff function in (\ref{eq_cutofffunction}). We first state these results and prove Lemma \ref{lem:iteration_weaklaw} in this subsection. The proofs of Lemmas \ref{lem:priori_weaklaw}, \ref{lem:entrysuborconc} and \ref{lem:FA1conc} are given in Section \ref{secc_subsection}. \begin{lem}\label{lem:priori_weaklaw} Let $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U}),\epsilon\in(0,\gamma/12)$, and $k\in(0,1]$ be fixed values. Let $\widehat{\Lambda}(z)$ be some deterministic positive control parameter such that $\widehat{\Lambda}(z)\leq N^{-\gamma/4}$. Suppose that $\Lambda(z)\leq\widehat{\Lambda}(z)$ and \begin{equation} \label{eq:self_improv_assu} \absv{{\mathcal S}_{AB}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}^{2}+O(\Lambda_{\iota}^{3})} \leq N^{2\epsilon/5}\frac{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}{(N\eta)^{k}},\quad \iota=A,B, \end{equation} hold on some event $\widetilde{\Xi}(z)$. Then there exists a constant $C>0$ such that for sufficiently large $N,$ the following hold: \begin{itemize} \item[(i)] If $\sqrt{\kappa+\eta}>N^{-\epsilon}\widehat{\Lambda}$, there is a sufficiently large constant $K_{0}>0$ which is independent of $z$ and $N,$ such that \begin{equation} \label{eq_lemmac1c1} \mathbb{I}\left(\Lambda\leq \frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\absv{\Lambda_{\iota}}\leq C\left( N^{-2\epsilon}\widehat{\Lambda}+\frac{N^{7\epsilon/5}}{(N\eta)^{k}}\right), \end{equation} holds on $\widetilde{\Xi}(z)$. 
\item[(ii)] If $\sqrt{\kappa+\eta}\leq N^{-\epsilon}\widehat{\Lambda}$, then we have that \begin{equation} \label{eq_lemmac1c2} \absv{\Lambda_{\iota}}\leq C\left(N^{-\epsilon}\widehat{\Lambda}+\frac{N^{7\epsilon/5}}{(N\eta)^{k}}\right), \ \iota=A,B. \end{equation} holds on $\widetilde{\Xi}(z)$. \end{itemize} \end{lem} \begin{lem}\label{lem:entrysuborconc} Let small $\epsilon\in(0,\gamma/12)$ and large $D>0$ be fixed. Recall (\ref{eq_setdefnTheta}). There exists an $N_{01}(\epsilon,D)\in\N$ such that the following hold for all $N\geq N_{01}(\epsilon,D)$: for all $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$, there exists an event $\Xi_{1}(z)$ with $\prob{\Xi_{1}(z)}>1-N^{-D}$ so that the following hold on $\Xi_{1}(z)\cap\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{(N\eta)^{1/2}}\right)$: \begin{align*} \Lambda_{d}^{c}(z)&\leq N^{\epsilon/2}\Psi(z),& \Lambda_{T}&\leq N^{\epsilon/2}\Psi(z),& \widetilde{\Lambda}_{d}^{c}&\leq N^{\epsilon/2}\Psi(z),&\widetilde{\Lambda}_{T}&\leq N^{\epsilon/2}\Psi(z),&\Upsilon(z)&\leq N^{\epsilon/2}\Psi(z). \end{align*} \end{lem} Denote the control parameters \begin{align}\label{eq_parameterusedlemmac3} \widehat{\Lambda}&\mathrel{\mathop:}= \frac{N^{3\epsilon}}{(N\eta)^{1/3}}, & \widehat{\Pi}&\mathrel{\mathop:}=\left(\sqrt{\frac{(\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda})(\absv{{\mathcal S}_{AB}}+\widehat{\Lambda})}{(N\eta)^{2}}}+\frac{1}{(N\eta)^{2}}\right)^{1/2}. \end{align} \begin{lem}\label{lem:FA1conc} Let small $\epsilon\in(0,\gamma/12)$ and large $D>0$ be fixed. Recall (\ref{eq_frz1z2definition}). 
There exists an $N_{02}(\epsilon,D)\in\N$ such that the following hold for all $N\geq N_{02}(\epsilon,D)$: for all $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$, there exists an event $\Xi_{2}(z)$ with $\prob{\Xi_{2}(z)}>1-N^{-D}$ so that the following hold on $\Xi_{2}(z)\cap\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{(N\eta)^{1/2}}\right)$: \begin{align*} \Phi_{A}^{c}&\leq N^{\epsilon/3}\widehat{\Pi},& \Phi_{B}^{c}&\leq N^{\epsilon/3}\widehat{\Pi},& \absv{{\mathfrak Z}_{1}}&\leq N^{\epsilon/3}\widehat{\Pi},& \absv{{\mathfrak Z}_{2}}&\leq N^{\epsilon/3}\widehat{\Pi}. \end{align*} \end{lem} Armed with Lemmas \ref{lem:priori_weaklaw}, \ref{lem:entrysuborconc}, and \ref{lem:FA1conc}, we proceed to the proof of Lemma \ref{lem:iteration_weaklaw}. \begin{proof}[\bf Proof of Lemma \ref{lem:iteration_weaklaw}] By (ii) and (iii) of Proposition \ref{prop:stabN}, we have that $\im m_{\mu_{A}\boxtimes\mu_{B}}(z)\sim \absv{{\mathcal S}_{AB}}.$ For the parameters in (\ref{eq_parameterusedlemmac3}), we have that for some constant $C>0$ \begin{equation*} N^{\epsilon/3}\widehat{\Pi}\leq CN^{\epsilon/3}\frac{\sqrt{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}}{N\eta}+\frac{N^{\epsilon/3}}{N\eta} =CN^{\epsilon/3}\sqrt{\frac{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}{(N\eta)^{1/3}}\cdot \frac{1}{(N\eta)^{2/3}}}+\frac{N^{\epsilon/3}}{N\eta} \leq CN^{\epsilon/3}\frac{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}{(N\eta)^{1/3}}, \end{equation*} where in the last step we used that $\sqrt{xy} \leq x+y, \ x, y \geq 0.$ By a discussion similar to \eqref{eq:weight_constr}, we obtain that \begin{equation*} \Absv{\frac{{\mathcal S}}{z}\Lambda_{A}+{\mathcal T}_{A}\Lambda_{A}^{2}+O(\absv{\Lambda_{A}^{3}})}\leq\absv{{\mathfrak Z}_{1}}+O(\absv{\Phi_{B}^{c}}^{2})+O(\absv{\Phi_{B}^{c}\Lambda_{A}})\leq CN^{\epsilon/3}\widehat{\Pi}\leq N^{2\epsilon/5}\frac{\absv{{\mathcal S}}+\widehat{\Lambda}}{(N\eta)^{1/3}}, \end{equation*} on the event 
$\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{(N\eta)^{1/2}}\right),$ where we used (\ref{eq:LMder}) and (\ref{eq_Lderivativecontrol}), and the fact that $\Lambda\leq N^{3\epsilon}/(N\eta)^{1/3}$. In what follows, we will show that we can choose $\Xi(z, D,\epsilon)=\Xi_1(z) \cap \Xi_2(z)$ and $N_1=\max\{N_{01}(\epsilon, D), N_{02}(\epsilon,D)\},$ where $\Xi_1(z)$ and $\Xi_2(z)$ are the events from Lemma \ref{lem:entrysuborconc} and Lemma \ref{lem:FA1conc}, respectively. Due to similarity, we only prove part (i). Denote \begin{align*} \widetilde{\Xi}(z)= \begin{cases} \Xi_{1}(z)\cap\Xi_{2}(z)\cap\Theta_{>}\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}},\frac{\epsilon}{10}\right)&\text{if }z\in{\mathcal D}_{>},\\ \Xi_{1}(z)\cap\Xi_{2}(z)\cap\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right)&\text{if }z\in{\mathcal D}_{\leq}. \end{cases} \end{align*} Applying (\ref{eq_lemmac1c1}) with $k=1/3$ and $\widehat{\Lambda}=N^{3\epsilon}/(N\eta)^{1/3}$, when $\sqrt{\kappa+\eta}>N^{-\epsilon}\widehat{\Lambda}$, i.e. $z\in{\mathcal D}_{>}$, the bound \begin{equation*} \absv{\Lambda_{\iota}}\leq C\left( N^{-2\epsilon}\widehat{\Lambda}+\frac{N^{7\epsilon/5}}{(N\eta)^{1/3}}\right) \end{equation*} holds on the event $\widetilde{\Xi}(z)$ for some constant $C>0$, where we can ignore the indicator function since $\Lambda\leq N^{-\epsilon/10}\absv{{\mathcal S}_{AB}}$. Thus we have that for some constant $C>0$ \begin{align*} \absv{\Lambda}\leq C\left(N^{-2\epsilon}\widehat{\Lambda}+\frac{N^{7\epsilon/5}}{(N\eta)^{1/3}}\right) \leq CN^{-8\epsilon/5}\widehat{\Lambda} \leq CN^{-3\epsilon/5}\sqrt{\kappa+\eta}.
\end{align*} Consequently, we have that \begin{equation} \label{eq_belongeq1} \Lambda\leq C\min\left(\frac{N^{12\epsilon/5}}{(N\eta)^{1/3}},N^{-3\epsilon/5}\sqrt{\kappa+\eta}\right)\leq \min\left(\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},N^{-\epsilon/2}\absv{{\mathcal S}}\right). \end{equation} Moreover, by Lemma \ref{lem:entrysuborconc}, (i) of Proposition \ref{prop:stabN} and (\ref{eq_belongeq1}), we see that \begin{align}\label{eq_belongeq2} \Lambda_{d}(z)\leq\Lambda_{d}^{c}(z)+\frac{\Lambda}{(\absv{a_{i}-\Omega_{B}(z)}-\Lambda)^{2}}\leq \frac{N^{5\epsilon/2}}{(N\eta)^{1/3}}, \ \ \Lambda_{T}(z)\leq N^{\epsilon/2}\Psi \leq \frac{N^{5\epsilon/2}}{\sqrt{N\eta}}. \end{align} Similarly, we can show that \begin{align}\label{eq_belongeq3} \widetilde{\Lambda}_{d}(z)\leq \frac{N^{5\epsilon/2}}{(N\eta)^{1/3}}, \ \ \widetilde{\Lambda}_{T}(z)\leq \frac{N^{5\epsilon/2}}{\sqrt{N\eta}}. \end{align} Therefore, by (\ref{eq_belongeq1}), (\ref{eq_belongeq2}) and (\ref{eq_belongeq3}), using the definition (\ref{eq_setdefnThetag}), we have proved that \begin{equation*} \widetilde{\Xi}(z)=\Xi_{1}(z)\cap\Xi_{2}(z)\cap\Theta_{>}\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}},\frac{\epsilon}{10}\right)\subset\Theta_{>}\left(z,\frac{N^{5\epsilon/2}}{(N\eta)^{1/3}},\frac{N^{5\epsilon/2}}{\sqrt{N\eta}},\frac{\epsilon}{2}\right). \end{equation*} This completes the proof. \end{proof} Then we prove Lemma \ref{lem:self_improv}. 
\begin{proof}[\bf Proof of Lemma \ref{lem:self_improv}] From Proposition \ref{prop:weaklaw}, we find that Assumption \ref{assu_ansz} holds uniformly in $z \in {\mathcal D}_{\tau}(\eta_L, \eta_U).$ Consequently, Proposition \ref{prop:FA2} holds uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U).$ More specifically, for any deterministic positive function $\widehat{\Lambda}(z)$ with $\Lambda(z)\prec\widehat{\Lambda}(z)\leq N^{-\gamma/4}$, we have that for $\iota=A,B,$ \begin{equation} \label{eq:SLL} \Absv{\frac{{\mathcal S}_{AB}(z)}{z}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}(z)^{2}+O(\absv{\Lambda_{\iota}}^{3})}\prec \Psi^{2}\left(\sqrt{(\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda}(z))(\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda}(z))}+\Psi^{2}\right) \end{equation} holds uniformly in $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$. Moreover, since $\absv{{\mathcal S}_{AB}(z)}\sim \sqrt{\kappa+\eta}\sim \im m_{\mu_{A}\boxtimes\mu_{B}}(z)$ and $\widehat{\Lambda}(z)\geq(N\eta)^{-1},$ by \eqref{eq:SLL}, we have that \begin{equation*} \Absv{\frac{{\mathcal S}_{AB}(z)}{z}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}(z)^{2}+O(\absv{\Lambda_{\iota}}^{3})} \prec \frac{1}{N\eta}\left(\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda}(z)\right).
\end{equation*} Therefore, for fixed small $\epsilon_0 \in (0,\gamma/12)$ and large $D>0,$ we can find an $N_{0}(\epsilon_{0},D)\in\N$ such that for each $z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})$ and $N\geq N_{0}(\epsilon_{0},D),$ there exists an event $\widetilde{\Xi}(z,\epsilon_{0},D)$ with $\P[\widetilde{\Xi}(z,\epsilon_{0},D)]>1-N^{-D}$ on which we have \begin{align}\label{eq_toapplylemmac1condition} &\Absv{\frac{{\mathcal S}_{AB}(z)}{z}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}(z)^{2}+O(\absv{\Lambda_{\iota}}^{3})} \leq N^{\epsilon_{0}/3}\frac{1}{N\eta}\left(\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda}(z)\right)\leq N^{2\epsilon_{0}/5}\frac{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}{N\eta}. \end{align} Using (\ref{eq_toapplylemmac1condition}), by Lemma \ref{lem:priori_weaklaw} with $k=1,$ we have shown that the results hold on $\widetilde{\Xi}(z,\epsilon_{0},D).$ This completes our proof. \end{proof} \begin{proof}[\bf Proof of Lemma \ref{lem:self_improv_S}] The proof is similar to that of Lemma \ref{lem:iteration_weaklaw}, except that we employ Lemma \ref{lem:self_improv} as a technical input instead of Lemma \ref{lem:priori_weaklaw}. We omit the details here. \end{proof} \subsection{Proof of Lemmas \ref{lem:priori_weaklaw}, \ref{lem:entrysuborconc} and \ref{lem:FA1conc}}\label{secc_subsection} We first prove Lemma \ref{lem:priori_weaklaw} in this subsection. \begin{proof}[\bf Proof of Lemma \ref{lem:priori_weaklaw}] Notice that \eqref{eq:self_improv_assu} together with the assumption $\widehat{\Lambda}(z)\leq N^{-\gamma/4}$ implies \begin{equation} \label{eq:self_improv_quadratic} \Absv{\frac{{\mathcal S}_{AB}(z)}{z}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}^{2}} \leq N^{2\epsilon/5}\frac{\absv{{\mathcal S}_{AB}(z)}+\widehat{\Lambda}}{(N\eta)^{k}}+N^{-\gamma/4}\widehat{\Lambda}^{2}, \ \iota=A,B. \end{equation} We first prove (\ref{eq_lemmac1c1}). 
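Before turning to the two parts, we record how the cubic error is absorbed in (\ref{eq:self_improv_quadratic}): since $\absv{\Lambda_{\iota}}\leq\widehat{\Lambda}$ on the event under consideration, we have
\begin{equation*}
O(\absv{\Lambda_{\iota}}^{3})=O\big(\widehat{\Lambda}\cdot\widehat{\Lambda}^{2}\big)=O\big(N^{-\gamma/4}\widehat{\Lambda}^{2}\big),
\end{equation*}
where we used the assumption $\widehat{\Lambda}\leq N^{-\gamma/4}$ in the last step.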
We choose $K_0$ such that \begin{equation*} K_{0}\geq 2\sup\left\{\absv{z{\mathcal T}_{\iota}(z)}:z\in{\mathcal D}_{\tau}(\eta_{L},\eta_{U})\right\}, \ \iota=A,B. \end{equation*} By (iii) of Proposition \ref{prop:stabN}, we find that $K_0$ is indeed finite. With such a choice of $K_0,$ under the assumptions of (i), we find that \begin{equation*} \mathbb{I}\left(\Lambda\leq\frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\absv{{\mathcal T}_{\iota}\Lambda_{\iota}^{2}} \leq \Absv{\frac{{\mathcal S}_{AB}}{z}\Lambda_{\iota}}\frac{\absv{z{\mathcal T}_{\iota}}}{K_{0}}\leq \frac{1}{2}\Absv{\frac{{\mathcal S}_{AB}}{z}\Lambda_{\iota}}, \end{equation*} where we used $\Lambda_{\iota} \leq |\mathcal{S}_{AB}|/K_0$ and the definition of $K_0$ in the first and second inequalities, respectively. This shows that, on the event $\{\Lambda\leq\absv{{\mathcal S}_{AB}}/K_{0}\}$, the quadratic term on the left-hand side of (\ref{eq:self_improv_quadratic}) is dominated by the linear term ${\mathcal S}_{AB}\Lambda_{\iota}/z$. Consequently, we have that \begin{equation*} \mathbb{I}\left(\Lambda\leq\frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\Absv{\frac{{\mathcal S}_{AB}}{z}\Lambda_{\iota}} \leq2\left(N^{2\epsilon/5}\frac{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}{(N\eta)^{k}}+N^{-\gamma/4}\widehat{\Lambda}^{2}\right). \end{equation*} This further implies that for some constant $C>0$ \begin{align*} \mathbb{I}\left(\Lambda\leq\frac{\absv{{\mathcal S}_{AB}}}{K_{0}}\right)\absv{\Lambda_{\iota}} \leq C\frac{1}{\absv{{\mathcal S}_{AB}}}\left(N^{2\epsilon/5}\frac{\absv{{\mathcal S}}+\widehat{\Lambda}}{(N\eta)^{k}}+N^{-\gamma/4}\widehat{\Lambda}^{2}\right)\leq C\left(\frac{N^{7\epsilon/5}}{(N\eta)^{k}}+N^{\epsilon-\gamma/4}\widehat{\Lambda}\right), \end{align*} where we used $\sqrt{\kappa+\eta}>N^{-\epsilon}\widehat{\Lambda}$ from the assumption and $\sqrt{\kappa+\eta}\sim\absv{{\mathcal S}}$ from (iii) of Proposition \ref{prop:stabN}. Since $\epsilon<\gamma/12$, we conclude the proof of part (i). We next handle (\ref{eq_lemmac1c2}).
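In the argument below we will repeatedly use the elementary inequalities
\begin{equation*}
\sqrt{xy}\leq\frac{x+y}{2}\leq x+y\quad\text{and}\quad\sqrt{x+y}\leq\sqrt{x}+\sqrt{y},\qquad x,y\geq0,
\end{equation*}
where the first is the arithmetic--geometric mean inequality and the second follows from $(\sqrt{x}+\sqrt{y})^{2}=x+y+2\sqrt{xy}\geq x+y$.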
Denote \begin{equation*} X=-\frac{{\mathcal S}_{AB}}{z}\Lambda_{\iota}+{\mathcal T}_{\iota}\Lambda_{\iota}^{2}. \end{equation*} Using the fact that $\absv{{\mathcal T}_{\iota}}\geq c$ for some small constant $c>0,$ i.e., (iii) of Proposition \ref{prop:stabN}, we can consider $\Lambda_{\iota}$ as the solution of a quadratic equation involving $X$ so that \begin{equation*} \Lambda_{\iota}=\frac{1}{2{\mathcal T}_{\iota}}\left(\frac{{\mathcal S}_{AB}}{z}+\sqrt{\frac{{\mathcal S}_{AB}^{2}}{z^{2}}+4{\mathcal T}_{\iota}X}\right), \end{equation*} for an appropriate choice of branch cut. We thus have that for some constant $C>0,$ \begin{align*} \absv{\Lambda_{\iota}}&\leq C\left(\absv{{\mathcal S}_{AB}}+\sqrt{\absv{X}}\right) \leq C\left(\sqrt{\kappa+\eta}+\sqrt{N^{2\epsilon/5}\frac{\absv{{\mathcal S}_{AB}}+\widehat{\Lambda}}{(N\eta)^{k}}+N^{-\gamma/4}\widehat{\Lambda}^{2}}\right) \\ &\leq C\left(N^{-\epsilon}\widehat{\Lambda}+\sqrt{\frac{N^{2\epsilon/5}\widehat{\Lambda}}{(N\eta)^{k}}+N^{-\gamma/4}\widehat{\Lambda}^{2}}\right) \leq C\left(N^{-\epsilon}\widehat{\Lambda}+\frac{N^{7\epsilon/5}}{(N\eta)^{k}}+N^{-\gamma/8}\widehat{\Lambda}\right), \end{align*} where we used \eqref{eq:self_improv_quadratic} in the second step, $\absv{{\mathcal S}}\sim\sqrt{\kappa+\eta}\leq N^{-\epsilon}\widehat{\Lambda}$ in the third step, and $\sqrt{xy}\leq x+y$ and $\sqrt{x+y}\leq \sqrt{x}+\sqrt{y}$ for $x,y>0$ in the last step. Using $\epsilon<\gamma/12$, we complete the proof of part (ii). \end{proof} Next, we prove Lemma \ref{lem:entrysuborconc}, which essentially generalizes Proposition \ref{prop:entrysubor} and its key input Lemma \ref{lem:PKrecmoment}. Our goal here is to replace Assumption \ref{assu_ansz} with the event $\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right)$, which holds with high probability. We will follow the proof idea of \cite[Lemma 8.3]{BEC}.
In detail, if $X$ is the quantity for which we want to establish a bound when restricted to $\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right)$, we will bound the high-order moments of $ \mathbb{I}(\Theta_{0})X$ instead of $X$ directly, where $\Theta_{0}$ is an event such that $\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right)\subset\Theta_{0}$. Then on the intersection of $\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right)$ and some high-probability event on which the bound for $\mathbb{I}(\Theta_{0})X$ holds, we can establish the bound for $X$. As we have seen in the proof of Proposition \ref{prop:entrysubor}, we need to apply a recursive moment estimate, which requires differentiability. To obtain a smooth approximation of the indicator function, we introduce the following cutoff function. For a (large) constant $\mathrm{K}$ and some constant $C>0$, we choose a smooth function $\varphi:\R\to\R$ that satisfies for all $x \in \mathbb{R}$ \begin{align}\label{eq_cutofffunction} \varphi(x) =\begin{cases} 0, & \text{ if }\absv{x}>2\mathrm{K} \\ 1, & \text{ if }\absv{x}\leq \mathrm{K} \end{cases}, \ \ \absv{\varphi'(x)}\leq C\mathrm{K}^{-1}. \end{align} Denote \begin{equation} \label{eq:FA1conc_Gammai} \Gamma_{i}=\absv{G_{ii}}^{2}+\absv{{\mathcal G}_{ii}}^{2}+\absv{T_{i}}^{2}+\absv{\widetilde{T}_{i}}^{2}+\absv{\tr G}^{2}+\absv{\tr(GA)}^{2}+\absv{\tr(\widetilde{B}G)}^{2}, \end{equation} where $\widetilde{T}_i$ is defined as an analogue of $T_i$ (cf. (\ref{eq_shorhandnotation})) by switching the roles of $A$ and $B$ and also $U$ and $U^*.$ \begin{proof}[\bf Proof of Lemma \ref{lem:entrysuborconc}] Recall (\ref{eq_pk}) and (\ref{eq_defnq}).
Denote \begin{align*} \widetilde{{\mathfrak X}}_{i}^{(p,q)}\mathrel{\mathop:}= \varphi(\Gamma_{i})^{p+q}P_{i}^{p}\overline{P}_{i}^{q}, \ \ \widetilde{{\mathfrak Y}}_{i}^{(p,q)}\mathrel{\mathop:}= \varphi(\Gamma_{i})^{p+q}K_{i}^{p}\overline{K}_{i}^{q}. \end{align*} We first claim that the following recursive moment estimates hold true for $N\geq N_{1}'(\epsilon_{1},D_{1}),$ where $N_1'(\epsilon_1,D_1)$ is some large positive integer and $\epsilon_{1},D_{1}>0$ will be chosen later in terms of $\epsilon,D$, \begin{align} &\expct{\widetilde{{\mathfrak X}}_{i}^{(p,p)}}\leq \expct{{\mathfrak a}_{i1}\widetilde{{\mathfrak X}}_{i}^{(p-1,p)}}+\expct{{\mathfrak a}_{i2}\widetilde{{\mathfrak X}}_{i}^{(p-2,p)}}+\expct{{\mathfrak a}_{i3}\widetilde{{\mathfrak X}}_{i}^{(p-1,p-1)}},\label{eq:Precmom_conc}\\ &\expct{\widetilde{{\mathfrak Y}}_{i}^{(p,p)}}\leq \expct{{\mathfrak b}_{i1}\widetilde{{\mathfrak Y}}_{i}^{(p-1,p)}}+\expct{{\mathfrak b}_{i2}\widetilde{{\mathfrak Y}}_{i}^{(p-2,p)}}+\expct{{\mathfrak b}_{i3}\widetilde{{\mathfrak Y}}_{i}^{(p-1,p-1)}}.\label{eq:Krecmom_conc} \end{align} Here ${\mathfrak a}_{ij},{\mathfrak b}_{ij}, j=1,2,3$ are random variables satisfying, for some constant $C_{p,\mathrm{K}}$, \begin{align*} &\absv{{\mathfrak a}_{i1}}\mathbb{I}(\Xi_{1}'(z))\leq N^{\epsilon_{1}}\Psi,& &\absv{{\mathfrak a}_{i2}}\mathbb{I}(\Xi_{1}'(z))\leq N^{2\epsilon_{1}}\Psi^{2},& &\absv{{\mathfrak a}_{i3}}\mathbb{I}(\Xi_{1}'(z))\leq N^{2\epsilon_{1}}\Psi^{2},& &\expct{\absv{{\mathfrak a}_{ij}}^{p}}\leq C_{p,\mathrm{K}},& \end{align*} for some high-probability event $\Xi_{1}'(z)$ such that $\prob{\Xi_{1}'(z)}>1-N^{-D_{1}}$, and the same bounds hold for ${\mathfrak b}_{ij}.$ We then proceed to the proof of Lemma \ref{lem:entrysuborconc} assuming that both (\ref{eq:Precmom_conc}) and (\ref{eq:Krecmom_conc}) hold.
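The basic tool to decouple the coefficients ${\mathfrak a}_{ij}$ from the powers of $P_{i}$ is the weighted Young inequality: for $u,v\geq 0$,
\begin{equation*}
uv\leq\frac{u^{2p}}{2p}+\frac{2p-1}{2p}\,v^{2p/(2p-1)},
\end{equation*}
applied with $u=N^{\epsilon_{1}'}\absv{{\mathfrak a}_{i1}}$ and $v=N^{-\epsilon_{1}'}\absv{\varphi(\Gamma_{i})P_{i}}^{2p-1}$ (and with the conjugate exponent pair $(p,p/(p-1))$ for the terms with coefficients ${\mathfrak a}_{i2},{\mathfrak a}_{i3}$).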
Similar to the discussion of (\ref{eq_arbitraydiscussion}), by applying Young's inequality to \eqref{eq:Precmom_conc}, we find that for all $\epsilon_{1}'>0$ \begin{equation*} \expct{\absv{{\mathfrak a}_{i1}}\absv{\varphi(\Gamma_{i})P_{i}}^{2p-1}}\leq \frac{N^{2p\epsilon_{1}'}}{2p}\expct{\absv{{\mathfrak a}_{i1}}^{2p}}+\frac{(2p-1)N^{-\frac{2p\epsilon_{1}'}{2p-1}}}{2p}\expct{\absv{\varphi(\Gamma_{i})P_{i}}^{2p}}. \end{equation*} Utilizing the bound \begin{equation*} \expct{\absv{{\mathfrak a}_{i1}}^{2p}}\leq \expct{\absv{{\mathfrak a}_{i1}}^{2p}\mathbb{I}(\Xi_{1}'(z))} +\expct{\absv{{\mathfrak a}_{i1}}^{4p}}^{1/2}(1-\prob{\Xi_{1}'(z)})^{1/2}\leq N^{2p\epsilon_{1}}\Psi^{2p}+C_{p,\mathrm{K}}N^{-D_{1}/2}, \end{equation*} we get \begin{equation*} \expct{\absv{{\mathfrak a}_{i1}\widetilde{{\mathfrak X}}_{i}^{(p-1,p)}}} \leq C_{p,\mathrm{K}}N^{2p(\epsilon_{1}+\epsilon_{1}')}\Psi^{2p}+C_{p,\mathrm{K}}N^{-D_{1}/2+2p\epsilon_{1}'} +C_{p,\mathrm{K}}N^{-\frac{2p\epsilon_{1}'}{2p-1}}\expct{\absv{\varphi(\Gamma_{i})P_{i}}^{2p}}. \end{equation*} Similarly, we can bound the terms regarding $\widetilde{{\mathfrak X}}_{i}^{(p-2,p)}$ and $\widetilde{{\mathfrak X}}_{i}^{(p-1,p-1)}$. Specifically, for $j=2,3,$ we have that \begin{equation*} \expct{\absv{{\mathfrak a}_{ij}}\absv{\varphi(\Gamma_{i})P_{i}}^{2p-2}} \leq C_{p,\mathrm{K}}N^{p(2\epsilon_{1}+\epsilon_{1}')}\Psi^{2p}+C_{p,\mathrm{K}}N^{-D_{1}/2+p\epsilon_{1}'}+C_{p,\mathrm{K}}N^{-\frac{p\epsilon_{1}'}{p-1}}\expct{\absv{\varphi(\Gamma_{i})P_{i}}^{2p}}. \end{equation*} Combining the above estimates, we arrive at \begin{equation} \expct{\absv{\varphi(\Gamma_{i})P_{i}}^{2p}}\left(1-C_{p,\mathrm{K}}N^{-\frac{p\epsilon_{1}'}{p-1}}\right)\leq C_{p,\mathrm{K}}N^{2p(\epsilon_{1}+\epsilon_{1}')}\Psi^{2p}+C_{p,\mathrm{K}}N^{-D_{1}/2+2p\epsilon_{1}'}.
\end{equation} Therefore, by Markov's inequality, we have \begin{align} &\prob{\absv{\varphi(\Gamma_{i})P_{i}}>\Psi N^{\epsilon/4}}\leq C_{p,\mathrm{K}}N^{2p(\epsilon_{1}+\epsilon_{1}'-\epsilon/4)} +C_{p,\mathrm{K}}N^{-D_{1}/2+2p(\epsilon_{1}'-\epsilon/4)}\Psi^{-2p} \nonumber \\ &\leq C_{p,\mathrm{K}}N^{2p(\epsilon_{1}+\epsilon_{1}'-\epsilon/4)} +C_{p,\mathrm{K}}N^{-D_{1}/2+2p(\epsilon_{1}'-\epsilon/4+1)}. \label{eq_markovfinal} \end{align} We now require that $\epsilon_{1},\epsilon_{1}'<\epsilon/16$, $p>(2D+2)/\epsilon$ and $D_{1}>4p+2D+2$. Then (\ref{eq_markovfinal}) yields that \begin{equation*} \prob{\absv{\varphi(\Gamma_{i})P_{i}}>\Psi N^{\epsilon/4}}\leq N^{-D-1}. \end{equation*} Denote \begin{equation*} \Xi_{11}(z)=\left\{\absv{\varphi(\Gamma_{i})P_{i}}\leq \Psi N^{\epsilon/4}\right\}. \end{equation*} Since $\varphi(\Gamma_{i})=1$ on $\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right),$ we have $|P_{i}|\leq N^{\epsilon/4}\Psi$ on the event $\Xi_{11}(z)\cap \Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right).$ Similarly, we can see that $|K_i| \leq N^{\epsilon/4} \Psi$ on the event $\Xi_{12}(z)\cap \Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right),$ where $\Xi_{12}(z)$ is defined as \begin{equation*} \Xi_{12}(z)=\left\{\absv{\varphi(\Gamma_{i})K_{i}}\leq \Psi N^{\epsilon/4}\right\}. \end{equation*} This shows that we can further restrict ourselves to the high-probability event $\Xi_{11}(z) \cap \Xi_{12}(z) \cap\Theta\left(z,\frac{N^{3\epsilon}}{(N\eta)^{1/3}},\frac{N^{3\epsilon}}{\sqrt{N\eta}}\right),$ on which the following hold \begin{equation*} \absv{P_{i}}\leq N^{\epsilon/4}\Psi,\quad \absv{K_{i}}\leq N^{\epsilon/4}\Psi.
\end{equation*} We can conclude the proof of Lemma \ref{lem:entrysuborconc} by setting $\Xi_1(z):=\Xi_{11}(z) \cap \Xi_{12}(z)$ and following the proof of (\ref{eq_lambdaother}), i.e., the discussion between (\ref{eq_claimtibound}) and the paragraph before (\ref{eq:PKmomentbound}). It remains to prove (\ref{eq:Precmom_conc}) and (\ref{eq:Krecmom_conc}). We only discuss (\ref{eq:Precmom_conc}). In the proof of Lemma \ref{lem:PKrecmoment}, we only require $\Gamma_i$ to be $\rO_{\prec}(1).$ In this sense, the cutoff factor $\varphi(\Gamma_{i})$ can be used as a substitute for Assumption \ref{assu_ansz}. Moreover, we can restrict our discussion to a high-probability event using Gaussian tail bounds. Specifically, for any $\epsilon_1>0,$ let $\Xi_1'(z) \equiv \Xi_1'(z, \epsilon_1)$ be the event that all the large deviation estimates regarding the Gaussian vectors ${\boldsymbol g}_{i}$ hold with precision $N^{\epsilon_1}.$ For example, the error bounds in (\ref{eq_hiicontrol}), (\ref{eq_licontrol}) and (\ref{eq_controlhibhi}), which are all $\rO_{\prec}(N^{-1/2}),$ can be replaced by $N^{-1/2+\epsilon_1}$ on $\Xi_1'(z).$ Using the Gaussian tails of the ${\boldsymbol g}_i$, we find that for any $D_1>0,$ there exists an $N(D_1, \epsilon_1)$ such that for all $N \geq N(D_1, \epsilon_1),$ we have \begin{equation*} \mathbb{P}(\Xi_1'(z)) \geq 1-N^{-D_1}. \end{equation*} Consequently, we can conclude that (\ref{eq:Precmom_conc}) has the same form as (\ref{eq:PKrecmoment}) except for the cutoff factor $\varphi(\Gamma_i).$ We can follow the proof of Lemma \ref{lem:PKrecmoment} and accommodate the additional terms in the integration by parts resulting from the derivatives of $\varphi(\Gamma_{i})$ into $\widetilde{{\mathfrak X}}_i$. Note that the derivative of $\varphi(\Gamma_{i})$ appears in \eqref{eq_cumulantfirst} and \eqref{eq:recmomP2}.
For instance, in the analogue of \eqref{eq_cumulantfirst}, we will get an extra term \begin{equation*} \frac{p}{N}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tr(GA)\frac{\partial \varphi(\Gamma_{i})}{\partial g_{ik}}\widetilde{{\mathfrak X}}_{i}^{(p-1,p)}. \end{equation*} By bounding $\varphi'(\Gamma_{i})$ by $C\mathrm{K}^{-1}$ and applying the chain rule, it suffices to control the derivatives of each of the following: \begin{align}\label{eq_listterm} \absv{G_{ii}}^{2}, \ \absv{{\mathcal G}_{ii}}^{2}, \ \absv{T_{i}}^{2}, \ \absv{\widetilde{T}_{i}}^{2}, \ \absv{\tr G}^{2}, \ \absv{\tr(GA)}^{2}, \ \absv{\tr(\widetilde{B}G)}^{2}. \end{align} For an illustration, we provide the estimate only for $\absv{G_{ii}}^{2}$. By definition, we only need to focus on the event $\{\varphi'(\Gamma_i) \neq 0\} \cap \Xi_1'(z).$ Using \begin{equation*} \frac{\partial \absv{G_{ii}}^{2}}{\partial g_{ik}}=\overline{G}_{ii}\frac{\partial G_{ii}}{\partial g_{ik}}+G_{ii}\frac{\partial \overline{G}_{ii}}{\partial g_{ik}}, \end{equation*} the corresponding term becomes \begin{equation*} \frac{\overline{G}_{ii}\tr(GA)}{N}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial G_{ii}}{\partial g_{ik}}\widetilde{{\mathfrak X}}_{i}^{(p-1,p)} +\frac{G_{ii}\tr(GA)}{N}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial \overline{G}_{ii}}{\partial g_{ik}}\widetilde{{\mathfrak X}}_{i}^{(p-1,p)}.
\end{equation*} Note that the above terms can be controlled using Lemma \ref{lem:recmomerror} and in fact, \begin{equation*} \frac{p}{N}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\tr(GA)\frac{\partial |G_{ii}|^2}{\partial g_{ik}}\widetilde{{\mathfrak X}}_{i}^{(p-1,p)}=\rO\left( \frac{N^{\epsilon_1}}{\sqrt{N \eta}} \right). \end{equation*} The remaining terms in (\ref{eq_listterm}) can be handled in the same way, and we omit the details here. In summary, the additional terms regarding the derivatives of $\varphi(\Gamma_i)$ can be absorbed into the first term on the RHS of (\ref{eq:Precmom_conc}). This completes our proof. \end{proof} Finally, we prove Lemma \ref{lem:FA1conc}. Compared with the proof of Lemma \ref{lem:entrysuborconc}, we need to handle some extra technical complexity. For some sufficiently small constant $c>0,$ denote \begin{align}\label{eq:FA1conc_Gamma} &\Gamma=\left(\frac{N^{5\epsilon}}{\sqrt{N\eta}}\right)^{-1}\frac{1}{N}\sum_{i}(\absv{T_{i}}^{2}+N^{-1})^{1/2} +\left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-2}\absv{\Upsilon}^{2}\\ &+\frac{\left(\absv{z\tr(GA)-\Omega_{B}(zm_{\boxtimes}(z)+1)}^{2}+\absv{z\tr(\widetilde{B}G)-\Omega_{A}(zm_{\boxtimes}(z)+1)}^{2}+\absv{\tr (zG+1)-(zm_{\boxtimes}(z)+1)}^{2}\right)}{\left(c\im m_{\boxtimes}(z)+\widehat{\Lambda}\right)^{2}}, \nonumber \end{align} where we used the shorthand notation $m_{\boxtimes}=m_{\mu_{A}\boxtimes\mu_{B}}$. Recall (\ref{eq_defnq}) and (\ref{eq_optimalfaquantitiescoeff}). Denote \begin{equation*} \widetilde{{\mathfrak X}}^{(p,q)}=\left(\frac{1}{N}\sum_{i}d_{i}\varphi(\Gamma_{i})\varphi(\Gamma)Q_{i}\right)^{p}\overline{\left(\frac{1}{N}\sum_{i}d_{i}\varphi(\Gamma_{i})\varphi(\Gamma)Q_{i}\right)^{q}},\quad d_{i}={\mathfrak d}_{i1}\text{ or }{\mathfrak d}_{i2}.
\end{equation*} \begin{proof}[\bf Proof of Lemma \ref{lem:FA1conc}] Similar to the construction of the event $\Xi_1'(z)$ in the proof of Lemma \ref{lem:entrysuborconc}, for any $\epsilon>0,$ we let $\Xi_2'(z)$ be the event on which all the large deviation estimates regarding the Gaussian random vectors $\bm{g}_i$ in the proof of Lemma \ref{lem:Avrecmoment} hold with precision $N^{\epsilon}.$ Again, by the properties of Gaussian tails, we find that for any $D>0,$ there exists an $N'(D, \epsilon)$ such that when $N \geq N'(D, \epsilon),$ the following holds \begin{equation*} \mathbb{P}(\Xi_2'(z)) \geq 1-N^{-D}. \end{equation*} Analogously to the proof of Lemma \ref{lem:entrysuborconc} (cf. (\ref{eq:Precmom_conc}) and (\ref{eq:Krecmom_conc})), it suffices to prove the following recursive moment estimate when $N$ is sufficiently large \begin{equation} \label{eq_recursiverougheqprior} \expct{\widetilde{{\mathfrak X}}^{(p,p)}}\leq \expct{{\mathfrak c}_{1}\widetilde{{\mathfrak X}}^{(p-1,p)}}+\expct{{\mathfrak c}_{2}\widetilde{{\mathfrak X}}^{(p-2,p)}}+\expct{{\mathfrak c}_{3}\widetilde{{\mathfrak X}}^{(p-1,p-1)}}, \end{equation} where ${\mathfrak c}_{j}$ for $j=1,2,3$ are random variables satisfying \begin{align*} &\absv{{\mathfrak c}_{1}}\mathbb{I}(\Xi_{2}'(z))\leq N^{\epsilon}\widehat{\Pi},& &\absv{{\mathfrak c}_{2}}\mathbb{I}(\Xi_{2}'(z))\leq N^{2\epsilon}\widehat{\Pi}^{2},& &\absv{{\mathfrak c}_{3}}\mathbb{I}(\Xi_{2}'(z))\leq N^{2\epsilon}\widehat{\Pi}^{2},& &\expct{\absv{{\mathfrak c}_{j}}^{p}}\leq C_{p,\mathrm{K}}. \end{align*} We mention that, unlike (\ref{eq:Precmom_conc}), which is a direct cutoff version of (\ref{eq:PKrecmoment}), (\ref{eq_recursiverougheqprior}) is a cutoff version of (\ref{eq_roughfarecuversivestimateeq}) with weaker bounds for the coefficients ${\mathfrak c}_{j}$. The weakness of the bounds is due to the weak a priori inputs in the cutoffs $\varphi(\Gamma_i)$ and $\varphi(\Gamma),$ and to the terms involving the derivatives of these cutoffs.
We now discuss these two aspects in turn and list only the necessary modifications. First, we discuss the technical inputs in the cutoffs. Before explaining the detailed modifications, we introduce some useful estimates. We observe that \begin{equation} \label{eq_imbeforeindentity} \Omega_{B}(z)(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)=\Omega_{B}(z)(\Omega_{B}m_{\mu_{A}}(\Omega_{B}(z))+1)=\int\frac{x^{2}}{x-\Omega_{B}(z)}\mathrm{d}\mu_{A}(x)-1, \end{equation} where in the second step we used the assumption that $\tr A=1$ and the identity \begin{equation*} z(zm_{\mu}(z)+1)=\int \frac{x^2}{x-z} \mathrm{d} \mu(x)-\int x \mathrm{d} \mu(x). \end{equation*} In what follows, we will write $\Omega_B \equiv \Omega_B(z)$ for simplicity. Based on (\ref{eq_imbeforeindentity}), we find that \begin{align}\label{eq_imimboundone} &\im\left(\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\right) =\im \Omega_{B}\int\frac{x^{2}}{\absv{x-\Omega_{B}}^{2}}\mathrm{d}\mu_{A}(x) \geq a_{N}\im\Omega_{B}\int\frac{x}{\absv{x-\Omega_{B}}^{2}}\mathrm{d}\mu_{A}(x) \nonumber \\ &=a_{N}\im(\Omega_{B}m_{\mu_{A}}(\Omega_{B})+1)=a_{N}\im (zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)\geq a_{N}^{2}b_{N}\im m_{\mu_{A}\boxtimes\mu_{B}}(z), \end{align} where we used $\inf \supp \mu_{A}\boxtimes\mu_{B}\geq a_{N}b_{N}$ in the last inequality. Similarly, by reversing the inequalities in (\ref{eq_imimboundone}), we have $\im(\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1))\leq a_{1}^{2}b_{1}\im m_{\mu_{A}\boxtimes\mu_{B}}(z)$.
Hence, we have that for some constants $c_\mathrm{K}, c'_{\mathrm{K}}>0,$ when $c$ is sufficiently small \begin{align}\label{eq:RFAconc_GAlbd} &\varphi(\Gamma)\absv{\tr GA}\geq \varphi(\Gamma)\left( \absv{\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)}-\sqrt{2\mathrm{K}}(c\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda})\right) \nonumber \\ &\geq \frac{\sqrt{2}}{2}\varphi(\Gamma)\left[\absv{\re (\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1))}+ a_N^2 b_N\left(1-\frac{c\sqrt{2\mathrm{K}}}{a_{N}^{2}b_{N}}\right)\im m_{\mu_{A}\boxtimes\mu_{B}}(z)-c\sqrt{2\mathrm{K}} N^{3\epsilon-\gamma/3}\right] \nonumber \\ &\geq \frac{c_\mathrm{K}}{2} \varphi(\Gamma)\left[\absv{\re(\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1))}+\frac{a_N^2 b_1} \left(1-\frac{c\sqrt{2\mathrm{K}}}{a_{N}^{2}b_{N}}\right) \im(\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)) \right] \nonumber \\ & \geq \frac{c_\mathrm{K}}{4} \varphi(\Gamma)\absv{\Omega_{B}(zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)} \nonumber \\ & \geq c'_\mathrm{K} \varphi(\Gamma), \end{align} where in the first step we used the definitions of $\varphi(\cdot)$ in (\ref{eq_cutofffunction}) and $\Gamma$ in (\ref{eq:FA1conc_Gamma}), in the second step we used $x^2+y^2 \geq (x+y)^2/2,$ (\ref{eq_imimboundone}), the definition (\ref{eq_parameterusedlemmac3}) and the fact $\eta\geq N^{-1+\gamma},$ in the third step we used the assumption $\epsilon<\gamma/12$, in the fourth step we used $x+y \geq \sqrt{(x^2+y^2)/2}$ when $x,y \geq 0$ and the assumption that $c$ is sufficiently small, and in the last step we used a relation similar to (\ref{eq_multiidentity}) and (i) of Proposition \ref{prop:stabN}. Similarly, we can show that \begin{equation}\label{eq_secondpreparebound} \varphi(\Gamma)|\tr(zG+1)| \geq c_{\mathrm{K}}' \varphi(\Gamma).
\end{equation} Analogously, for some constant $C_\mathrm{K}>0,$ we have that \begin{align*} &\varphi(\Gamma)\im \tr G\sim \varphi(\Gamma)\im \tr (zG+1) \sim \varphi(\Gamma)\im\tr (GA) \\ &\leq\varphi(\Gamma)C_{\mathrm{K}}(\im (zm_{\mu_{A}\boxtimes\mu_{B}}(z)+1)+\widehat{\Lambda})\sim \varphi(\Gamma)C_{\mathrm{K}}(\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda}). \end{align*} This implies that \begin{equation}\label{eq_preparethirdequation} \varphi(\Gamma)\frac{N^{\epsilon}}{N^{2}}\sum_{i}\absv{G_{ii}} \leq \varphi(\Gamma)\frac{N^{\epsilon}}{N}\sqrt{\frac{\im \tr G}{\eta}} \leq\varphi(\Gamma) C_{\mathrm{K}}\frac{N^{\epsilon}}{\sqrt{N}}\sqrt{\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}(z)+\widehat{\Lambda}}{N\eta}}\leq \frac{N^{\epsilon}}{\sqrt{N}}\widehat{\Pi}. \end{equation} Moreover, by the definition of $\Gamma,$ we have that \begin{equation} \label{eq:FA1conc_aver_T} \varphi(\Gamma)\frac{1}{N}\sum_{i}\absv{T_{i}}\leq \varphi(\Gamma)\frac{1}{N}\sum_{i}(\absv{T_{i}}^{2}+N^{-1})^{1/2}\leq 2\mathrm{K} \varphi(\Gamma)\frac{N^{5\epsilon}}{\sqrt{N\eta}}. \end{equation} This yields that \begin{align}\label{eq_fourthpreparation} \varphi(\Gamma)\frac{N^{\epsilon}}{N^{3/2}}\sum_{i}\absv{T_{i}}\leq \frac{N^{7\epsilon}}{\sqrt{N^{2}\eta}}\leq N^{\epsilon}\widehat{\Pi}. \end{align} With the above preparation, we explain the modifications regarding the technical inputs. The first quantity is $\tau_{i1}$, defined in (\ref{eq_defntauil}). The counterpart of $\tau_{i1}$ is the following \begin{align}\label{eq_divarphigamma} \widetilde{\tau}_{i1}\mathrel{\mathop:}= a_{i}\frac{\tr(G\widetilde{D})}{\tr(GA)}-\widetilde{d}_{i}, \ \ \widetilde{D}=\diag(\widetilde{d}_{1},\cdots,\widetilde{d}_{N}), \ \ \widetilde{d}_{i}\mathrel{\mathop:}= d_{i}\varphi(\Gamma_{i})\varphi(\Gamma).
\end{align} By \eqref{eq:RFAconc_GAlbd}, (\ref{eq_secondpreparebound}) and the fact that $\varphi(\Gamma_{i})\absv{G_{ii}}\leq 2\mathrm{K}$, we conclude that $\absv{\widetilde{\tau}_{i1}}\leq C_{\mathrm{K}}$ for some constant $C_\mathrm{K}>0$. The second modification concerns the bound on the analogue of the term $\rO_{\prec}(\Psi \widehat{\Upsilon})$ in (\ref{eq:Avrecmomentrec}). This error term was obtained when we bounded $\frac{1}{N}\sum_{i}T_{i}\Upsilon\tau_{i1}$ in \eqref{eq:tau_aver2}. For its counterpart in the current proof, using the definition of $\Gamma$ in (\ref{eq:FA1conc_Gamma}) and the bound in \eqref{eq:FA1conc_aver_T}, we conclude that for some constant $C>0$ \begin{equation*} \varphi(\Gamma)\Absv{\frac{1}{N}\sum_{i}T_{i}\Upsilon\widetilde{\tau}_{i1}}\leq C\varphi(\Gamma)C_{\mathrm{K}}\frac{N^{5\epsilon}}{\sqrt{N\eta}}\frac{N^{5\epsilon}}{(N\eta)^{1/3}} \leq C\varphi(\Gamma)C_{\mathrm{K}}\frac{N^{10\epsilon}}{(N\eta)^{5/6}} \leq \sqrt{\frac{\widehat{\Lambda}}{N\eta}}\leq \widehat{\Pi}, \end{equation*} where we recall the definitions in (\ref{eq_parameterusedlemmac3}). This implies that $\frac{1}{N}\sum_{i}T_{i}\Upsilon\widetilde{\tau}_{i1}$ can be absorbed into the bound for $\mathfrak{c}_1$ in (\ref{eq_recursiverougheqprior}). The third and fourth modifications are devoted to the terms involving $\mathsf{e}_{i1}$ in \eqref{eq_epsilon1} and $\mathsf{e}_{i2}$ in (\ref{eq_defnmathsfe2}). By a discussion similar to (\ref{eq_epsilon1details}), we have that \begin{equation*} \varphi(\Gamma)\mathsf{e}_{i1}=(-h_{ii}+\mathring{{\boldsymbol h}}_{i}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol h}}_{i})G_{ii}+\rO_{\prec}(1/N)G_{ii}+\rO_{\prec}(1/\sqrt{N})T_{i}.
\end{equation*} Without loss of generality, we assume that the stochastic dominance holds on some event $\Xi_2'$ with parameters $\epsilon$ and $D.$ Then taking the average over $i$ and focusing on the above event $\Xi_2'$, we have \begin{align}\label{eq:expan_ei1_conc} \varphi(\Gamma)\frac{1}{N}\sum_{i}\mathsf{e}_{i1}\widetilde{\tau}_{i1}\tr(GA)= \varphi(\Gamma)\frac{1}{N}\sum_{i}(-h_{ii}+\mathring{{\boldsymbol h}}_{i}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol h}}_{i})G_{ii}\widetilde{\tau}_{i1}\tr(GA)+O(N^{\epsilon}\widehat{\Pi}), \end{align} where we used (\ref{eq_preparethirdequation}) and (\ref{eq_fourthpreparation}). Similarly, we have that \begin{equation} \label{eq:expan_ei2_conc} \varphi(\Gamma)\frac{1}{N}\sum_{i}\frac{\mathsf{e}_{i2}\widetilde{\tau}_{i1}}{\norm{{\boldsymbol g}_{i}}} =\varphi(\Gamma)\frac{1}{N}\sum_{i}\left((\mathring{{\boldsymbol g}}_{i}^*\mathring{{\boldsymbol g}}_{i}-1)\widetilde{\tau}_{i1}\tr(A(\widetilde{B}-I)G)-h_{ii}\tr(GA)\right)G_{ii}+O(N^{\epsilon}\widehat{\Pi}). \end{equation} Similar to the discussion in (\ref{eq:expan_ei1}) and (\ref{eq:expan_ei2}), the second term of \eqref{eq:expan_ei2_conc} cancels with the first term of \eqref{eq:expan_ei1_conc}, and the rest will be handled by integration by parts. Second, we estimate the additional terms arising from the derivatives of $\varphi(\Gamma)$ and $\varphi(\Gamma_i)$. Since the terms involving derivatives of $\varphi(\Gamma_{i})$ can be handled in the same way as in the proof of Lemma \ref{lem:entrysuborconc}, we only focus on the discussion of $\varphi(\Gamma)$.
In light of (\ref{eq_divarphigamma}), the derivatives containing $\varphi(\Gamma)$ appear in the analogues of \eqref{eq:recmomX2}, \eqref{eq:Iparts_X}, and the new terms after the integration by parts using the following two terms from \eqref{eq:expan_ei1_conc} and \eqref{eq:expan_ei2_conc} \begin{equation*} \frac{1}{N}\sum_{i}\expct{\mathring{{\boldsymbol h}}_{i}^*(\widetilde{B}^{\langle i\rangle}-I)\mathring{{\boldsymbol h}}_{i}G_{ii}\widetilde{\tau}_{i1}\tr(GA)\widetilde{{\mathfrak X}}^{(p-1,p)}}, \ \ \frac{1}{N}\sum_{i}\expct{(\mathring{{\boldsymbol g}}_{i}^*\mathring{{\boldsymbol g}}_{i}-1)\widetilde{\tau}_{i1}\tr(A(\widetilde{B}-I)G)G_{ii}\widetilde{{\mathfrak X}}^{(p-1,p)}}. \end{equation*} Due to similarity, we only discuss the analogue of the step \eqref{eq:recmomX2}; the other terms can be handled similarly. More specifically, we study \begin{equation} \label{eq:FA1conc_ex} \frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\frac{{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}{\norm{{\boldsymbol g}_{i}}}\frac{\partial\widetilde{\tau}_{i1}}{\partial g_{ik}}\tr(GA). \end{equation} Recall (\ref{eq_divarphigamma}). One new term in $\partial \widetilde{\tau}_{i1}/\partial g_{ik}$ is \begin{equation}\label{eq_finaldiscussion} -d_i \varphi(\Gamma_i) \varphi'(\Gamma) \frac{\partial \Gamma}{\partial g_{ik}}. \end{equation} In what follows, we study the contribution of the term (\ref{eq_finaldiscussion}) to (\ref{eq:FA1conc_ex}); the other terms can be handled similarly. In view of (\ref{eq:FA1conc_Gamma}) and (\ref{eq_finaldiscussion}), it remains to analyze the derivative of each term in $\Gamma$. In the sequel, we only focus on the terms involving $(|T_j|^2+N^{-1})$ and $\Upsilon;$ the other terms can be investigated in the same way (in fact more easily) since they are tracial. Recall (\ref{eq_shorhandnotation}).
For the term involving $(\absv{T_{j}}^{2}+N^{-1}),$ using \begin{equation*} \frac{\partial (\e{-\mathrm{i}\theta_{j}}T_{j})}{\partial g_{ik}}=\frac{\partial ({\boldsymbol e}_{j}^* U G{\boldsymbol e}_{j})}{\partial g_{ik}}={\boldsymbol e}_{j}^* U^{\langle i\rangle}\frac{\partial (R_{i}G)}{\partial g_{ik}}{\boldsymbol e}_{j}, \end{equation*} we find that \begin{align}\label{eq:FA1conc_Tj} \frac{\partial(\absv{T_{j}}^{2}+N^{-1})^{1/2}}{\partial g_{ik}} & =\frac{1}{(\absv{T_{j}}^{2}+N^{-1})^{1/2}}\Big({\boldsymbol e}_{j}^* G^* U^* {\boldsymbol e}_{j}{\boldsymbol e}_{j}^* U^{\langle i\rangle}\frac{\partial (R_{i}G)}{\partial g_{ik}}{\boldsymbol e}_{j} \nonumber \\ & +{\boldsymbol e}_{j}^*\frac{\partial (G^* R_{i})}{\partial g_{ik}}(U^{\langle i\rangle})^*{\boldsymbol e}_{j}{\boldsymbol e}_{j}^* UG{\boldsymbol e}_{j}\Big). \end{align} Recall (\ref{eq_househouldfinalexpression}) and (\ref{eq:Gder1}). It is easy to see that \begin{align}\label{eq:FA1conc_RG} \frac{\partial (R_{i}G)}{\partial g_{ik}}&=\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}}\left(-{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G +R_{i}GA{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^*\widetilde{B}^{\langle i\rangle}R_{i}G+R_{i}GA\widetilde{B}^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G\right)\\ &+\Delta_{R}(i,k)G+R_{i}\Delta_{G}(i,k).\nonumber \end{align} Based on the above expression, the definition of $T_j$ in (\ref{eq_shorhandnotation}) and (\ref{eq:FA1conc_Tj}), the main contribution of the first term in \eqref{eq:FA1conc_RG} to \eqref{eq_finaldiscussion} and (\ref{eq:FA1conc_ex}) involving $(\absv{T_{j}}^2+N^{-1})$ reads \begin{align}\label{eq_finalfinal} &\left(\frac{N^{5\epsilon}}{\sqrt{N\eta}}\right)^{-1}\frac{1}{N^{3}}\sum_{i,j}\sum_{k}^{(i)}\frac{e^{\mathrm{i} \theta_j}T_{j}^*}{(\absv{T_{j}}^{2}+N^{-1})^{1/2}}\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}^{2}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(GA)
{\boldsymbol e}_{j}^* U^{\langle i\rangle}{\boldsymbol e}_{k}({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G {\boldsymbol e}_{j} \nonumber \\ &=\left(\frac{N^{5\epsilon}}{\sqrt{N\eta}}\right)^{-1} \frac{1}{N^{3}}\sum_{i}\frac{\ell_{i}^{2}}{\norm{{\boldsymbol g}_{i}}^{2}}\tr(GA)({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G {\mathfrak D}_{1} U^{\langle i\rangle}I_{i}\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}, \end{align} where $I_{i}\mathrel{\mathop:}= I-{\boldsymbol e}_{i}{\boldsymbol e}_{i}^*$ and \begin{equation*} {\mathfrak D}_{1}\mathrel{\mathop:}= \diag\left(\frac{\e{\mathrm{i}\theta_{j}}T_{j}^*}{(\absv{T_{j}}^{2}+N^{-1})^{1/2}}\right)_{j=1,\cdots,N}. \end{equation*} Then we can follow the proof of Lemma \ref{lem:recmomerror} to get that for some constant $C>0$ \begin{equation*} \Absv{({\boldsymbol e}_{i}+{\boldsymbol h}_{i})^* G{\mathfrak D}_{1}U^{\langle i\rangle}I_{i}\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}}\leq C_{\mathrm{K}}\left(\norm{G^* {\boldsymbol e}_{i}}^{2}+\norm{G^* {\boldsymbol h}_{i}}^{2}+\norm{G{\boldsymbol e}_{i}}^{2}\right)\leq C\frac{\im G_{ii}+\im {\mathcal G}_{ii}}{\eta}, \end{equation*} where we used that \begin{equation*} \norm{G^* {\boldsymbol h}_{i}}^{2}={\boldsymbol h}_{i}^* GG^* {\boldsymbol h}_{i}={\boldsymbol e}_{i}^* U^* GG^* U{\boldsymbol e}_{i}={\boldsymbol e}_{i}^* {\mathcal G}\caG^* {\boldsymbol e}_{i}=\frac{1}{b_{i}}{\boldsymbol e}_{i}^* \widetilde{{\mathcal G}} B\widetilde{{\mathcal G}}^*{\boldsymbol e}_{i}. \end{equation*} To sum up, we have \begin{equation*} |(\ref{eq_finalfinal})| \leq C \frac{1}{N}\left(\frac{N^{4\epsilon}}{\sqrt{N\eta}}\right)^{-1}\widehat{\Pi}^{2}\leq \widehat{\Pi}^{2}. \end{equation*} The same reasoning also applies to the remaining terms of \eqref{eq:FA1conc_RG}, and we thus conclude the estimate for the derivative involving $(\absv{T_{j}}^{2}+N^{-1})^{1/2}$. We finally discuss the terms involving the derivatives of $\Upsilon$, which is defined in (\ref{eq_defnupsilon}). 
By the definition of $\Upsilon,$ we have that \begin{equation} \label{eq:FA1conc_UpsDer} \frac{\partial \Upsilon}{\partial g_{ik}}=z\left(\tr\frac{\partial G}{\partial g_{ik}}\tr(A\widetilde{B}G)+\tr G\tr(A\widetilde{B}\frac{\partial G}{\partial g_{ik}})-\tr(\frac{\partial G}{\partial g_{ik}}A)\tr(\widetilde{B}G)-\tr(GA)\tr(\widetilde{B}\frac{\partial G}{\partial g_{ik}})\right), \end{equation} and that \begin{equation} \label{eq_twopartsderivatives} \frac{\partial \absv{\Upsilon}^{2}}{\partial g_{ik}}=\frac{\partial\overline{\Upsilon}}{\partial g_{ik}}\Upsilon+\overline{\Upsilon}\frac{\partial \Upsilon}{\partial g_{ik}}. \end{equation} Due to similarity, for (\ref{eq_twopartsderivatives}), we only discuss the contribution of $\overline{\Upsilon}\frac{\partial \Upsilon}{\partial g_{ik}}$ to \eqref{eq:FA1conc_ex}. Specifically, we study \begin{equation*} \left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-2}\overline{\Upsilon}\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\frac{\partial \Upsilon}{\partial g_{ik}}\tr(GA), \end{equation*} which reduces to bounding all the terms in \eqref{eq:FA1conc_UpsDer}.
For instance, for the first term in \eqref{eq:FA1conc_UpsDer}, we have that for some constants $C, C_\mathrm{K}>0$ \begin{align*} &\Absv{\left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-2}\overline{\Upsilon}\frac{1}{N^{2}}\sum_{i}\sum_{k}^{(i)}\frac{1}{\norm{{\boldsymbol g}_{i}}}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr(A\widetilde{B}G)\tr\frac{\partial G}{\partial g_{ik}}\tr(GA)}\\ &\leq \varphi(\Gamma)C_\mathrm{K}\left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-1}\frac{1}{N^{2}}\sum_{i}\frac{1}{\norm{{\boldsymbol g}_{i}}}\Absv{\sum_{k}^{(i)}{\boldsymbol e}_{k}^*\widetilde{B}^{\langle i\rangle}G{\boldsymbol e}_{i}\tr\frac{\partial G}{\partial g_{ik}}}\prec \varphi(\Gamma)\left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-1}\frac{1}{N}\Pi^{2}\Psi^{2}\\ &\leq C_{\mathrm{K}}\varphi(\Gamma)\left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-1}\frac{1}{N}\frac{\im m_{\mu_{A}\boxtimes\mu_{B}}+\widehat{\Lambda}}{(N\eta)^{2}} \\ & \leq C_{\mathrm{K}} \left(\frac{N^{5\epsilon}}{(N\eta)^{1/3}}\right)^{-1}\frac{1}{N^{2}\eta}\widehat{\Pi}^{2} \ll C \widehat{\Pi}^2, \end{align*} where in the first step we used the definition of $\Gamma$ and $\varphi(\cdot)$. A similar argument applies to the other terms in \eqref{eq:FA1conc_UpsDer}. Except for the aforementioned changes, the rest of the proof of (\ref{eq_recursiverougheqprior}) is the same as that for (\ref{eq_roughfarecuversivestimateeq}). This concludes the proof of Lemma \ref{lem:FA1conc}. \end{proof} \section{Additional technical proofs}\label{sec_additional} \subsection{Proofs of Proposition \ref{prop:stablimit}, Lemmas \ref{lem:rbound}, \ref{lem:OmegaBound1} and \ref{lem:OmegaBound2}} \begin{proof}[\bf Proof of Proposition \ref{prop:stablimit}] First of all, we choose $\tau$ and $\eta_{0}$ to satisfy the conclusions of Lemmas \ref{lem:suborsqrt} and \ref{lem:suborder}. We start with the proof of (i).
The last two equivalences follow from an elementary computation by taking the imaginary parts of (\ref{eq_squareroot}) and using the fact that $\Omega_{\alpha}(E_+), \Omega_{\beta}(E_+) \in \mathbb{R}^+.$ By (\ref{eq_multiidentity}) and (ii) of Lemma \ref{lem:stabbound}, we find that \begin{equation} \label{eq_firstreductionfirstreduction} \im (zm_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z)) =\eta\int\frac{x}{\absv{x-z}^{2}}\mathrm{d}(\mu_{\alpha}\boxtimes\mu_{\beta})(x) \sim \eta\int\frac{1}{\absv{x-z}^{2}}\mathrm{d}(\mu_{\alpha}\boxtimes\mu_{\beta})(x) =\im m_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z). \end{equation} Moreover, by the definition of the $M$-transform in (\ref{eq_mtrasindenity}), we find that \begin{equation*} \im M_{\mu_{\alpha} \boxtimes \mu_{\beta}}=\frac{\im (z m_{\mu_{\alpha} \boxtimes \mu_{\beta}}(z))}{|1+z m_{\mu_\alpha \boxtimes \mu_\beta}(z)|^2} \sim \im (z m_{\mu_{\alpha} \boxtimes \mu_{\beta}}(z)), \end{equation*} where we again use (\ref{eq_multiidentity}) and (ii) of Lemma \ref{lem:stabbound}. Together with (\ref{eq_firstreductionfirstreduction}), we conclude that \begin{equation*} \im M_{\mu_{\alpha} \boxtimes \mu_{\beta}} \sim \im m_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z). \end{equation*} By (\ref{eq_multiidentity}), we find that \begin{equation*} \im zm_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z) =\im \Omega_{\beta}(z)m_{\mu_{\alpha}}(\Omega_{\beta}(z))=\im\Omega_{\beta}(z)\int\frac{x}{\absv{x-\Omega_{\beta}(z)}^{2}}\mathrm{d}\mu_{\alpha}(x). \end{equation*} Together with (ii) of Lemma \ref{lem:stabbound}, we obtain that \begin{equation*} \im zm_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z) \sim \im \Omega_{\beta}(z). \end{equation*} Similarly, again using (\ref{eq_multiidentity}) and (ii) of Lemma \ref{lem:stabbound}, we find that \begin{equation*} \im zm_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z) =\im \Omega_{\alpha}(z)m_{\mu_{\beta}}(\Omega_{\alpha}(z))=\im\Omega_{\alpha}(z)\int\frac{x}{\absv{x-\Omega_{\alpha}(z)}^{2}}\mathrm{d}\mu_{\beta}(x) \sim \im \Omega_{\alpha}(z).
\end{equation*} This concludes the proof of (i). Then we prove (ii) and start with the first term. First, when $z \in \mathcal{D}_\tau(0,\eta_0)$, by differentiating $\widetilde{z}_+$ defined in (\ref{eq_defnztilde}) with respect to $\Omega$, we find that \begin{align} &-M_{\mu_{\alpha}}'(\Omega_{\beta}(z))\widetilde{z}_{+}'(\Omega_{\alpha}(z)) \nonumber \\ =&-M_{\mu_{\alpha}}'(\Omega_{\beta}(z))\left(\frac{\Omega_{\beta}(z)}{M_{\mu_{\beta}}(\Omega_{\alpha}(z))} +\frac{\Omega_{\alpha}(z)}{M_{\mu_{\beta}}(\Omega_{\alpha}(z))}\frac{M_{\mu_{\beta}}'(\Omega_{\alpha}(z))}{M_{\mu_{\alpha}}'(\Omega_{\beta}(z))} -\Omega_{\alpha}(z)\Omega_{\beta}(z)\frac{M_{\mu_{\beta}}'(\Omega_{\alpha}(z))}{M_{\mu_{\beta}}(\Omega_{\alpha}(z))^{2}}\right), \label{eq:z'} \end{align} where in the first equality we use (\ref{eq_suborsystem}) and the inverse function theorem. Note that $M_{\mu_{\alpha}}^{-1}(\cdot)$ is analytic around $M_{\mu_{\beta}}(\Omega_{\alpha}(z))$ since $M_{\mu_{\alpha}}'(\Omega_{\beta}(z))\sim 1$. By Remark \ref{rem_salphabetaequivalent} and (\ref{eq:z'}), we readily find that \begin{equation}\label{eq_sabanotherform} -M_{\mu_{\alpha}}'(\Omega_{\beta}(z))\widetilde{z}_{+}'(\Omega_{\alpha}(z)) =\mathcal{S}_{\alpha \beta}(z). \end{equation} By (\ref{eq:reprM}), we find that \begin{equation*} M'_{\mu_{\alpha}}(\Omega_{\beta}(z))=1+\int\frac{x}{(x-\Omega_{\beta}(z))^{2}}\mathrm{d}\widehat{\mu}_{\alpha}(x). \end{equation*} Since $\text{supp} \ \widehat{\mu}_{\alpha} =\text{supp} \ \mu_{\alpha},$ by (ii) of Lemma \ref{lem:stabbound}, we find that $M_{\mu_{\alpha}}'(\Omega_{\beta}(z)) \sim 1.
$ Therefore, the first term of (ii) follows from (\ref{eq_sabanotherform}), (\ref{eq:z'expa}) and (\ref{eq_squareroot}) when $z \in \mathcal{D}_*.$ Second, when $z \in \mathcal{D}_\tau(0, \eta_U) \backslash \mathcal{D}_*={\mathcal D}_{\tau}(\eta_0, \eta_U),$ following a discussion similar to the proof of \cite[Lemma 6.6]{JHC}, we get \begin{align*} \absv{{\mathcal S}_{\alpha\beta}(z)}&\geq 1-\absv{{\mathcal S}_{\alpha\beta}(z)+1} \geq 1-\absv{z}^{2} \left(\int\frac{1}{\absv{x-\Omega_{\alpha}(z)}^{2}}\mathrm{d}\widehat{\mu}_{\beta}(x)\right)\left(\int\frac{1}{\absv{x-\Omega_{\beta}(z)}^{2}}\mathrm{d}\widehat{\mu}_{\alpha}(x)\right) \\ &=1-\absv{z}^{2}\frac{\im L_{\mu_{\beta}}(\Omega_{\alpha}(z))\im L_{\mu_{\alpha}}(\Omega_{\beta}(z))}{\im\Omega_{\alpha}(z)\im\Omega_{\beta}(z)} =1-\absv{z}^{2}\frac{\im(\Omega_{\alpha}(z)/z)\im(\Omega_{\beta}(z)/z)}{\im\Omega_{\alpha}(z)\im\Omega_{\beta}(z)}\\ &=\frac{\im z\im M_{\mu_{\alpha}\boxtimes\mu_{\beta}}(z)}{\im\Omega_{\alpha}(z)\im\Omega_{\beta}(z)}\geq c>0, \end{align*} where we use the fact that $\eta_0=O(1)$. This completes the proof for ${\mathcal S}_{\alpha \beta}(z).$ We then treat the second and third terms of (ii). Due to similarity, we only discuss how to bound $\mathcal{T}_{\alpha}(z).$ Recall the definition of $\mathcal{T}_{\alpha}(z) $ in (\ref{eq_defn_talpha}). Since $z \in \mathcal{D}_{\tau}(0, \eta_U)$ is bounded, it suffices to control $L'_{\mu_\beta}(\Omega_{\alpha}(z))$ and $L^{''}_{\mu_\beta}(\Omega_{\alpha}(z)).$ By (\ref{eq:reprM}), we find that \begin{align*} L'_{\mu_{\beta}}(\Omega_{\alpha}(z)) &=\int\frac{1}{(x-\Omega_{\alpha}(z))^{2}}\mathrm{d}\widehat{\mu}_{\beta}(x), \ L''_{\mu_{\beta}}(\Omega_{\alpha}(z))=\int\frac{1}{(x-\Omega_{\alpha}(z))^{3}}\mathrm{d}\widehat{\mu}_{\beta}(x). \end{align*} Hence, we can conclude the proof using $\text{supp} \ \widehat{\mu}_{\beta} =\text{supp} \ \mu_{\beta}$ and (ii) of Lemma \ref{lem:stabbound}.
Finally, we prove (iii) and only provide the details for $\mathcal{T}_{\alpha}$. Since we have provided an upper bound in (ii), it suffices to find a lower bound. By (iii) of Lemma \ref{lem:stabbound}, which gives $\Omega_{\alpha}(E_{+})>E_{+}^{\beta}$ and $\Omega_{\beta}(E_{+})>E_{+}^{\alpha}$, and by Lemma \ref{lem:reprM}, we have \begin{align*} L'_{\mu_{\beta}}(\Omega_{\alpha}(E_{+})) &=\int\frac{1}{(x-\Omega_{\alpha}(E_{+}))^{2}}\mathrm{d}\widehat{\mu}_{\beta}(x) >0, & L'_{\mu_{\alpha}}(\Omega_{\beta}(E_{+})) &=\int\frac{1}{(x-\Omega_{\beta}(E_{+}))^{2}}\mathrm{d}\widehat{\mu}_{\alpha}(x)>0, \\ L''_{\mu_{\beta}}(\Omega_{\alpha}(E_{+}))&=\int\frac{1}{(x-\Omega_{\alpha}(E_{+}))^{3}}\mathrm{d}\widehat{\mu}_{\beta}(x)<0, & L''_{\mu_{\alpha}}(\Omega_{\beta}(E_{+})) &=\int\frac{1}{(x-\Omega_{\beta}(E_{+}))^{3}}\mathrm{d}\widehat{\mu}_{\alpha}(x) <0. \end{align*} Consequently, ${\mathcal T}_{\alpha}(E_{+})<0$ and ${\mathcal T}_{\beta}(E_{+})<0.$ Since $\Omega_{\alpha},\Omega_{\beta}$ are continuous around $E_{+}$ and $L_{\mu_{\alpha}}$ and $L_{\mu_{\beta}}$ are analytic around $\Omega_{\beta}(E_{+})$ and $\Omega_{\alpha}(E_{+})$ respectively, we find that the bounds hold in a small neighborhood of $E_{+}$. This concludes our proof. \end{proof} \begin{proof}[\bf Proof of Lemma \ref{lem:rbound}] We present only the proof for $\mu_{A}$, as the proof for $\mu_{B}$ is exactly the same. Denote the cumulative distribution functions of $\mu_{\alpha}$ and $\mu_{A}$ by $F_{\alpha}$ and $F_{A}$. We take $N_{0}$ such that ${\boldsymbol d}\leq \delta/2$ and $[a_{N},a_{1}]\subset[E_{-}^{\alpha}-\delta/2,E_{+}^{\alpha}+\delta/2]$ for all $N\geq N_{0}$. Then by the definition of ${\mathcal L}(\cdot,\cdot)$, we have \begin{equation*} \absv{F_{\alpha}(x)-F_{A}(x)}\leq \mathbf{1}_{[E_{-}^{\alpha}-\delta/2,E_{+}^{\alpha}+\delta/2]}(x)\left(F_{\alpha}(x+{\boldsymbol d})-F_{\alpha}(x-{\boldsymbol d})+2{\boldsymbol d}\right).
\end{equation*} Thus, integrating by parts, we get \begin{align*} &\Absv{\int_{\R_{+}}f(x)\mathrm{d}(\mu_{\alpha}-\mu_{A})(x)}=\Absv{\int_{E_{-}^{\alpha}-\delta/2}^{E_{+}^{\alpha}+\delta/2}f'(x)(F_{\alpha}(x)-F_{A}(x))\mathrm{d} x} \\ & \leq 2{\boldsymbol d} (E_{+}^{\alpha}-E_{-}^{\alpha}+\delta)\sup_{x\in [E_{-}^{\alpha}-\delta,E_{+}^{\alpha}+\delta]}\absv{f'(x)} +\int_{E_{-}^{\alpha}-\delta/2}^{E_{+}^{\alpha}+\delta/2}\absv{f'(x)}(F_{\alpha}(x+{\boldsymbol d})-F_{\alpha}(x-{\boldsymbol d}))\mathrm{d} x \\ & \leq 2{\boldsymbol d} (E_{+}^{\alpha}-E_{-}^{\alpha}+\delta)\sup_{x\in [E_{-}^{\alpha}-\delta,E_{+}^{\alpha}+\delta]}\absv{f'(x)} +\int_{E_{-}^{\alpha}}^{E_{+}^{\alpha}}\left(\absv{f'(x-{\boldsymbol d})}-\absv{f'(x+{\boldsymbol d})}\right)F_{\alpha}(x)\mathrm{d} x \\ & \leq 2{\boldsymbol d} (E_{+}^{\alpha}-E_{-}^{\alpha}+\delta)\norm{f'}_{\mathrm{Lip},\delta}. \end{align*} This concludes the proof. \end{proof} \begin{proof}[\bf Proof of Lemma \ref{lem:OmegaBound1}] The proof follows from Lemma \ref{lem:Kantorovich_appl}. More specifically, by Lemma \ref{lem:Kantorovich_appl}, we find that there exist $\eta_{1}>0$ and $\theta\in(0,\pi/2)$ such that for all $z\in{\mathcal E}(\eta_{1},\theta)$ \begin{align*} \absv{\Omega_{A}(z)-\Omega_{\alpha}(z)}&\leq 2\norm{r(z)}, & \absv{\Omega_{B}(z)-\Omega_{\beta}(z)}&\leq 2\norm{r(z)}. \end{align*} Since \begin{equation*} \left\{z\in\C_{+}:\re z\in[E_{+}-\tau,\tau^{-1}],\im z>\eta_{1}\right\}\subset{\mathcal E}(\eta_{1},\theta), \end{equation*} for a sufficiently large constant $\eta_{1}>\widetilde{\eta}_1$, we conclude the proof. It remains to verify the conditions of Lemma \ref{lem:Kantorovich_appl}. By Lemma \ref{lem:reprMemp} and (v) of Assumption \ref{assu_esd}, we find that there exists a large constant $\widetilde{\eta}_1>0$ such that for $z=E+\mathrm{i} \widetilde{\eta}_1$ \begin{equation}\label{eq:choiceofcm} |m_{\widehat{\mu}_A}(z)| \leq \frac{1}{2}, \ | m_{\widehat{\mu}_B}(z) | \leq \frac{1}{2}.
\end{equation} Recall (\ref{eq_epsilonetatheta}). By Lemma \ref{lem_subor}, we find that $\Omega_A, \ $ $\Omega_B: \mathcal{E}(\widetilde{\eta}_1,0) \rightarrow \mathbb{C}_+$ are analytic functions. We now apply Lemma \ref{lem:Kantorovich_appl} with the choice $(\mu_{1},\mu_{2})=(\mu_{\alpha},\mu_{\beta})$, $(\widetilde{\Omega}_{1},\widetilde{\Omega}_{2})=(\Omega_{A},\Omega_{B}), \Phi=\Phi_{\alpha \beta}$ and $c=1/2$. The first two conditions of (\ref{eq:kanappconidition1}) hold by (\ref{eq_newsupportbound}) and (\ref{eq:choiceofcm}). To verify the third condition of (\ref{eq:kanappconidition1}), we find that \begin{equation} \label{eq_defnra} \absv{r_{A}(z)}=\absv{\Omega_{B}(z)}^{-1}\Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}\mu_{A}(x)}^{-1}\Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}\mu_{\alpha}(x)}^{-1}\Absv{\int\frac{x\mathrm{d}(\mu_{A}-\mu_{\alpha})(x)}{x-\Omega_{B}(z)}}. \end{equation} By (\ref{eq_omegabbound1}), we have $\im \Omega_{B}(z)\geq \widetilde{\eta}_{1}$ for $z \in \mathcal{E}(\widetilde{\eta}_1, 0).$ Together with Lemma \ref{lem:rbound}, we obtain that for some constant $C>0$ \begin{align}\label{eq_ra1} &\Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}(\mu_{A}-\mu_{\alpha})(x)} \leq C {\boldsymbol d}. \end{align} Further, by the same reasoning, since $\im \Omega_{B}(z)\geq \widetilde{\eta}_{1},$ it is easy to see that for some constant $c_0>0,$ \begin{equation} \label{eq_ra2} \Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}\mu_{A}(x)}\geq c_0, \ \Absv{\int\frac{x}{x-\Omega_{B}(z)}\mathrm{d}\mu_{\alpha}(x)}\geq c_0. \end{equation} By (\ref{eq_ra1}) and (\ref{eq_ra2}), we find that for some constant $C>0,$ we have $|r_A(z)| \leq C \bm{d}.$ Similarly, we have that $|r_B(z)| \leq C \bm{d}.$ This implies that \begin{equation}\label{rlessequald} \|r(z) \| \leq C \bm{d}. \end{equation} Invoking (iv) of Assumption \ref{assu_esd}, we conclude that the third condition of (\ref{eq:kanappconidition1}) holds when $N$ is large enough.
\end{proof} \begin{proof}[\bf Proof of Lemma \ref{lem:OmegaBound2}] By (\ref{eq_suborsystem}), we have that \begin{equation}\label{eq_elementaryone} L_{\mu_{\alpha}}(\Omega_{B}(z_{0})) =L_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))+\frac{\Delta\Omega_{1}(z_{0})}{z_{0}}+r_{A}(z_{0}), \end{equation} where we denote \begin{equation*} \Delta\Omega(z_{0})\equiv(\Delta\Omega_{1}(z_{0}),\Delta\Omega_{2}(z_{0}))\mathrel{\mathop:}=(\Omega_{A}(z_{0})-\Omega_{\alpha}(z_{0}),\Omega_{B}(z_{0})-\Omega_{\beta}(z_{0})). \end{equation*} Expanding $L_{\mu_{\alpha}}(\Omega_{B}(z_{0}))$ around $\Omega_{\beta}(z_{0})$, we have \begin{align}\label{eq_elemenentarytwo} L_{\mu_{\alpha}}(\Omega_{B}(z_{0}))=L_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))+L_{\mu_{\alpha}}'(\Omega_{\beta}(z_{0}))\Delta\Omega_{2}(z_{0})+L''_{\mu_\alpha}(\xi_0) \Delta \Omega_2(z_0)^2, \end{align} where $\xi_0$ is some value between $\Omega_B(z_0)$ and $\Omega_\beta(z_0).$ Note that \begin{equation*} \inf_{x\in\supp\widetilde{\mu}_{A}}\absv{x-\xi_{0}}\geq \inf_{x\in\supp\widetilde{\mu}_{A}}\absv{x-\Omega_{\beta}(z_{0})}-\absv{\Omega_{\beta}(z_{0})-\xi_{0}}\geq \kappa_{0}-\delta-q\geq \kappa_{0}/3, \end{equation*} where $\delta$ is defined in (v) of Assumption \ref{assu_esd} and we also use the assumption that $q \leq \frac{1}{3} \kappa_0$. Consequently, we have \begin{equation*} \absv{L''_{\mu_{\alpha}}(\xi_{0})}=\Absv{\int \frac{1}{(x-\xi_{0})^{3}}\mathrm{d}\widetilde{\mu}_{\alpha}(x)}\leq (\kappa_{0}/3)^{-3}\widetilde{\mu}_{\alpha}(\R_{+}). \end{equation*} Moreover, since $\absv{\Omega_{B}(z_{0})-\Omega_{\beta}(z_{0})}\leq q\leq \frac{1}{3}\kappa_{0},$ we have \begin{equation*} \absv{L''_{\mu_\alpha}(\xi_0) \Delta \Omega_2(z_0)^2}\leq 27\kappa_{0}^{-3}\widetilde{\mu}_{\alpha}(\R_{+})\absv{\Delta\Omega_{2}(z_{0})}^{2}\leq 27\kappa_{0}^{-3}\widetilde{\mu}_{\alpha}(\R_{+})\norm{\Delta\Omega(z_{0})}^{2}\leq K_{3}\norm{\Delta\Omega(z_{0})}^{2}.
\end{equation*} By (\ref{eq_elementaryone}) and (\ref{eq_elemenentarytwo}), we readily obtain that { \begin{equation}\label{eq_sbound1} \Absv{L_{\mu_{\alpha}}'(\Omega_{\beta}(z_{0}))\Delta\Omega_{2}(z_{0})-\frac{\Delta\Omega_{1}(z_{0})}{z_{0}}} \leq\norm{r(z_{0})}+ K_{3}\norm{\Delta\Omega(z_{0})}^{2}. \end{equation} Similarly, we have that \begin{equation}\label{eq_sbound2} \Absv{L_{\mu_{\beta}}'(\Omega_{\alpha}(z_{0}))\Delta\Omega_{1}(z_{0})-\frac{\Delta\Omega_{2}(z_{0})}{z_{0}}} \leq \norm{r(z_{0})}+K_{3}\norm{\Delta\Omega(z_{0})}^{2}. \end{equation} } Recall (\ref{eq_defn_salphabeta}). We have that \begin{align}\label{eq_expansdelta} &\absv{{\mathcal S}_{\alpha\beta}(z_{0})}\absv{\Delta\Omega_{1}(z_{0})} =\Absv{1-z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))L'_{\mu_{\beta}}(\Omega_{\alpha}(z_{0}))}\absv{\Delta\Omega_{1}(z_{0})}. \end{align} Moreover, we note that \begin{align*} &\absv{1-z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))L'_{\mu_{\beta}}(\Omega_{\alpha}(z_{0}))}\absv{\Delta\Omega_{1}(z_{0})}\\ &=\Absv{ z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))\left(L'_{\mu_{\beta}}(\Omega_{\alpha}(z_{0}))\Delta\Omega_{1}(z_{0})-\frac{\Delta\Omega_{2}(z_{0})}{z_{0}}\right)+z_{0}\left(L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))\Delta\Omega_{2}(z_{0})-\frac{\Delta\Omega_{1}(z_{0})}{z_{0}}\right)} \\ &\leq \absv{z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))}\Absv{L'_{\mu_{\beta}}(\Omega_{\alpha}(z_{0}))\Delta\Omega_{1}(z_{0})-\frac{\Delta\Omega_{2}(z_{0})}{z_{0}}}+\absv{z_{0}}\Absv{L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))\Delta\Omega_{2}(z_{0})-\frac{\Delta\Omega_{1}(z_{0})}{z_{0}}} \\ &\leq (\absv{z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))}+\absv{z_{0}})(\norm{r(z_{0})}+K_{3}\norm{\Delta\Omega(z_{0})}^{2}), \end{align*} where we used (\ref{eq_sbound1}) and (\ref{eq_sbound2}) in the first inequality.
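For the reader's convenience, we note that the equality in the display above is an exact algebraic cancellation rather than an estimate: expanding the two brackets gives \begin{equation*} z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))L'_{\mu_{\beta}}(\Omega_{\alpha}(z_{0}))\Delta\Omega_{1}(z_{0})-z_{0}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))\Delta\Omega_{2}(z_{0}) +z_{0}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))\Delta\Omega_{2}(z_{0})-\Delta\Omega_{1}(z_{0}) =-\left(1-z_{0}^{2}L'_{\mu_{\alpha}}(\Omega_{\beta}(z_{0}))L'_{\mu_{\beta}}(\Omega_{\alpha}(z_{0}))\right)\Delta\Omega_{1}(z_{0}), \end{equation*} since the two middle terms cancel.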
Together with (\ref{eq_expansdelta}), we readily see that \begin{align*} \absv{{\mathcal S}_{\alpha\beta}(z_{0})}\absv{\Delta\Omega_{1}(z_{0})} \leq (\absv{z_{0}}+\absv{z_{0}^{2}L_{\mu_{\alpha}}'(\Omega_{\beta}(z_{0}))})(\norm{r(z_{0})}+K_{3}\norm{\Delta\Omega(z_{0})}^{2}) \leq K_{4}\norm{r(z_{0})}+K_{3} K_4\norm{\Delta\Omega(z_{0})}^{2}. \end{align*} Similarly, we have \begin{equation*} \absv{{\mathcal S}_{\alpha\beta}(z_{0})}\absv{\Delta\Omega_{2}(z_{0})} \leq K_{4}\norm{r(z_{0})}+K_{3}K_4\norm{\Delta\Omega(z_{0})}^{2}. \end{equation*} Consequently, this yields that \begin{align*} \norm{\Delta\Omega(z_{0})}\leq |\Delta \Omega_1(z_0)|+|\Delta \Omega_2(z_0)| & \leq \frac{2K_{4}}{\absv{{\mathcal S}_{\alpha\beta}(z_{0})}}(\norm{r(z_{0})}+K_{3}\norm{\Delta\Omega(z_{0})}^{2}) \\ & =\frac{1}{\absv{{\mathcal S}_{\alpha\beta}(z_{0})}}\left(\frac{K_{1}}{2}\norm{r(z_{0})}+\frac{1}{2K_{2}}\| \Delta\Omega(z_{0})\|^{2}\right). \end{align*} Note that the above estimate can be regarded as a quadratic inequality in $\| \Delta \Omega(z_0) \|.$ By an elementary computation, it is easy to see that if the inequality holds then at least one of the following holds \begin{align}\label{eq:dichotomy} \norm{\Delta\Omega(z_{0})}&\leq K_{1}\frac{\norm{r(z_{0})}}{\absv{{\mathcal S}_{\alpha\beta}(z_{0})}} \ \text{or } \ \norm{\Delta\Omega(z_{0})} \geq K_{2}\absv{{\mathcal S}_{\alpha\beta}(z_{0})}, \end{align} where we use the trivial bounds \begin{align*} 1-\sqrt{1-x}&\leq x, \ 1+\sqrt{1-x} \geq 1. \end{align*} The second inequality of \eqref{eq:dichotomy} contradicts the assumption (iii) that $q\leq \frac{1}{2}K_{2}\absv{{\mathcal S}_{\alpha\beta}(z_{0})}$, so the first inequality must hold. This completes the proof. \end{proof} \subsection{Proofs of Lemmas \ref{lem_contour}, \ref{lem_contourestimation} and \ref{lem_improvedestimate}} \begin{proof}[\bf Proof of Lemma \ref{lem_contour}] Our proof is similar to Lemma S.5.6 of \cite{DYaos} and Lemmas 5.4 and 5.5 of \cite{Bloemendal2016}.
We start with the first part of the results and first show that each $\Omega_B^{-1}(B_{\rho_i^a}(\widehat{a}_i)), \pi_a(i) \in S,$ is a subset of $\mathcal{D}_{out}(\omega)$ in (\ref{eq_evoutsideparameter}). By (\ref{eq_edgeestimationrough}), it is easy to see that $|\Omega_B^{-1}(\zeta)| \leq \omega^{-1}$ for all $\zeta \in \mathsf{C}$ as long as $\omega$ is sufficiently small. For the lower bound, we claim that for any constant $\widetilde{C}>0$ and sufficiently small constant $\widetilde{c}_0<1,$ there exists a constant $\widetilde{c}_1 \equiv \widetilde{c}_1(\widetilde{c}_0, \widetilde{C})$ such that \begin{equation}\label{eq_lemm46claim} \re \Omega_{B}^{-1}(\zeta) \geq E_++\widetilde{c}_1(\re \zeta-\Omega_B(E_+))^2, \end{equation} for $\re \zeta \geq \Omega_B(E_+), |\im \zeta| \leq \widetilde{c}_0 (\re \zeta-\Omega_B(E_+))$ and $|\zeta| \leq \widetilde{C}.$ By the definition of $\Omega_B^{-1}(B_{\rho_i^a}(\widehat{a}_i))$ and the definition of $\rho_i^a,$ whenever $\zeta \in B_{\rho_i^a}(\widehat{a}_i),$ we have that \begin{equation*} \im \zeta \leq c_i(\widehat{a}_i-\Omega_B(E_+)). \end{equation*} Moreover, since $\pi_a(i) \in S,$ we have that $\re \zeta \geq \Omega_B(E_+).$ Again using the definition of $\rho_i^a,$ we immediately see that \begin{equation*} \re \zeta-\Omega_B(E_+) \geq (1-c_i)(\widehat{a}_i-\Omega_B(E_+)). \end{equation*} Together with (\ref{eq_lemm46claim}), the assumptions $\widehat{a}_i-\Omega_B(E_+) \geq N^{-1/3+\tau_1}$ and $\omega<\tau_1/2$, we can conclude the proof when the $c_i$'s are sufficiently small. It remains to prove the claim (\ref{eq_lemm46claim}). The proof follows from an argument similar to equation (S.11) of \cite{DYaos} and a discussion similar to (\ref{eq_edgeestimationrough}). We omit the details here. Similarly, we can show that $\Omega_B^{-1}(B_{\rho_i^b}(\Omega_B( \Omega_A^{-1}(\widehat{b}_\mu)))) \subset {\mathcal D}_{out}(\omega).$ This concludes the proof of the first part of the results.
For the second part of the results, it suffices to prove the following: \begin{enumerate} \item[(i)] $\widehat{\lambda}_{\pi_a(i)} \in \Omega_B^{-1}(B_{\rho_i^a}(\widehat{a}_i))$ and $\widehat{\lambda}_{\pi_b(\mu)} \in \Omega_B^{-1}(B_{\rho_i^b}(\Omega_B( \Omega_A^{-1}(\widehat{b}_\mu))))$ for all $\pi_a(i) \in S$ and $\pi_b(\mu) \in S$; \item[(ii)] All the other eigenvalues $\widehat{\lambda}_j$ satisfy $\widehat{\lambda}_j \notin \Omega_B^{-1}(B_{\rho_i^a}(\widehat{a}_i))$ and $\widehat{\lambda}_j \notin \Omega_B^{-1}(B_{\rho_i^b}(\Omega_B( \Omega_A^{-1}(\widehat{b}_\mu))))$ for all $\pi_a(i) \in S$ and $\pi_b(\mu) \in S.$ \end{enumerate} The proof of (i) and (ii) follows from an analogous discussion to the counterpart in Lemma S.5.6 of \cite{DYaos}. We omit the details here. \end{proof} \begin{proof}[\bf Proof of Lemma \ref{lem_contourestimation}] The proof is similar to Lemma S.5.7 of \cite{DYaos}. Due to similarity, we only prove (\ref{eq_onlyproveresult}). First, the upper bound simply follows from the triangle inequality \begin{equation*} |\zeta-\widehat{a}_j| \leq \rho_i^a+|\widehat{a}_j-\widehat{a}_i| \lesssim \rho_i^a+\delta^a_{\pi_a(i),\pi_a(j)}. \end{equation*} Second, we provide a lower bound. When $\pi_a(j) \notin S,$ using the fact $|\widehat{a}_i-\widehat{a}_j| \geq 2 \rho_i^a,$ we have that \begin{equation*} |\zeta-\widehat{a}_j| \gtrsim \rho_i^a+|\widehat{a}_i-\widehat{a}_j|. \end{equation*} When $\pi_a(j) \in S,$ denote $\delta:=|\widehat{a}_i-\widehat{a}_j|-\rho_i^a-\rho_j^a.$ In the case when $C_0 \delta>|\widehat{a}_i-\widehat{a}_j|$ for some constant $C_0>1,$ we have that \begin{equation*} \rho_i^a+\rho_j^a \leq \frac{C_0-1}{C_0}|\widehat{a}_i-\widehat{a}_j|. \end{equation*} Consequently, we obtain that \begin{equation*} |\zeta-\widehat{a}_j| \geq |\widehat{a}_i-\widehat{a}_j|-\rho_i^a \geq \frac{1}{C_0}|\widehat{a}_i-\widehat{a}_j| \gtrsim \rho_i^a+\delta^a_{\pi_a(i),\pi_a(j)}. 
\end{equation*} In the other case when $C_0 \delta \leq |\widehat{a}_i-\widehat{a}_j|,$ we have that \begin{equation*} |\widehat{a}_i-\widehat{a}_j| \leq \frac{C_0}{C_0-1}(\rho_i^a+\rho_j^a). \end{equation*} To provide a bound regarding $\rho_i^a$ and $\rho_j^a,$ we claim that for a large enough constant $C_0>0,$ there exists some constant $\widetilde{C}>1$ such that \begin{equation}\label{eq_finalpaperclaim} \frac{1}{\widetilde{C}} \rho_i^a \leq \rho_j^a \leq \widetilde{C} \rho_i^a. \end{equation} Combining the above bounds, we can prove (\ref{eq_onlyproveresult}). For the justification of (\ref{eq_finalpaperclaim}), we can repeat the proof of equation (S.27) of \cite{DYaos} verbatim. We omit the details here. This concludes our proof. \end{proof} \begin{proof}[\bf Proof of Lemma \ref{lem_improvedestimate}] The proof is similar to Lemma 10.1 of \cite{BEC}. Recall that we have shown in (\ref{lambdaz_uniform}) that $\Lambda \prec (N \eta)^{-1}$ holds uniformly in $z \in \mathcal{D}_{\tau}(\eta_L, \eta_U)$. Suppose that $\Lambda \prec \widehat{\Lambda}$ for some deterministic $\widehat{\Lambda}(z)$ that satisfies \begin{equation}\label{eq_lowerbound} N^{\epsilon}\left( \frac{1}{N \sqrt{(\kappa+\eta) \eta}}+\frac{1}{\sqrt{\kappa+\eta}} \frac{1}{(N\eta)^2} \right) \leq \widehat{\Lambda}(z) \leq \frac{N^{\epsilon}}{N \eta}, \end{equation} where $\epsilon$ is the same as in (\ref{eq_wtmathcald}).
We point out that such a $\widehat{\Lambda}$ always exists on $\widetilde{{\mathcal D}}_{>}.$ Since $\Lambda \prec (N\eta)^{-1},$ by (\ref{eq:FA2}) and (ii) and (iii) of Proposition \ref{prop:stabN}, we conclude that for $\iota=A,B,$ and $z \in \widetilde{{\mathcal D}}_{>},$ \begin{equation}\label{eq_summationcontrol} \left| \frac{\mathcal{S}_{AB}(z)}{z} \Lambda_{\iota}+\mathcal{T}_{\iota} \Lambda_{\iota}^2 \right| \prec \frac{\sqrt{\left( \frac{\eta}{\sqrt{\kappa+\eta}}+\widehat{\Lambda} \right) \left( \sqrt{\kappa+\eta}+\widehat{\Lambda} \right)}}{N \eta}+\frac{1}{(N \eta)^2} \prec \frac{\sqrt{\widehat{\Lambda} \sqrt{\kappa+\eta}}}{N \eta}+ \frac{\sqrt{\eta}}{N \eta}+\frac{1}{(N \eta)^2}, \end{equation} where we used the fact that $\widehat{\Lambda} \leq \frac{N^{\epsilon}}{N \eta} \leq N^{-\epsilon} \sqrt{\kappa+\eta}$ for all $z \in \widetilde{{\mathcal D}}_{>}.$ Moreover, again using (iii) of Proposition \ref{prop:stabN}, we get \begin{equation}\label{eq_absorbedone} |\Lambda_{\iota}| \prec \frac{1}{N \eta} \leq N^{-2\epsilon} \sqrt{\kappa+\eta} \sim N^{-2\epsilon} |\mathcal{S}_{AB}|. \end{equation} Since $|{\mathcal T}_\iota| \leq C$ for some constant $C>0$ by (iii) of Proposition \ref{prop:stabN}, combining (\ref{eq_absorbedone}) and (\ref{eq_summationcontrol}), we obtain that \begin{equation*} |\Lambda_\iota| \prec \frac{1}{\sqrt{\kappa+\eta}}\left( \frac{\sqrt{\widehat{\Lambda} \sqrt{\kappa+\eta}}}{N \eta}+ \frac{\sqrt{\eta}}{N \eta}+\frac{1}{(N \eta)^2} \right) \leq \frac{1}{N \eta (\kappa+\eta)^{1/4}} \widehat{\Lambda}^{1/2}+N^{-\epsilon} \widehat{\Lambda} \leq N^{-\epsilon/4} \widehat{\Lambda}, \end{equation*} where we used (iii) of Proposition \ref{prop:stabN} and (\ref{eq_lowerbound}). From the above argument, we see that we have improved the bound from $\Lambda \prec \widehat{\Lambda}$ to $\Lambda \prec N^{-\epsilon/4} \widehat{\Lambda}$ as long as the lower bound in (\ref{eq_lowerbound}) holds. The proof then follows from an iterative improvement.
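To make the last step explicit, the iteration can be sketched as follows (a standard bootstrap argument; the notation is as in (\ref{eq_lowerbound})): set \begin{equation*} \widehat{\Lambda}_{0}\mathrel{\mathop:}=\frac{N^{\epsilon}}{N\eta},\qquad \widehat{\Lambda}_{k+1}\mathrel{\mathop:}=\max\left\{N^{-\epsilon/4}\widehat{\Lambda}_{k},\, N^{\epsilon}\left(\frac{1}{N\sqrt{(\kappa+\eta)\eta}}+\frac{1}{\sqrt{\kappa+\eta}}\frac{1}{(N\eta)^{2}}\right)\right\}. \end{equation*} Each $\widehat{\Lambda}_{k}$ satisfies (\ref{eq_lowerbound}), so the argument above yields $\Lambda\prec\widehat{\Lambda}_{k}$ for every fixed $k$, and since $\epsilon$ is a fixed constant, after finitely many steps $\widehat{\Lambda}_{k}$ coincides with the lower threshold in (\ref{eq_lowerbound}), which gives the desired estimate.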
\end{proof}
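The self-improving mechanism in the last step can be illustrated numerically. The sketch below (Python, with illustrative stand-in values for $N$, $\eta$, $\epsilon$ and for the floor in (\ref{eq_lowerbound}), none of which come from the paper) merely shows that repeatedly multiplying a bound by the factor $N^{-\epsilon/4}$, while never dropping below the floor, terminates at the floor after finitely many iterations.

```python
import math

def iterate_bound(initial, floor, shrink, max_steps=200):
    """Repeatedly apply the improvement bound -> shrink * bound,
    never dropping below `floor`; return the successive bounds."""
    bounds = [initial]
    while bounds[-1] > floor and len(bounds) <= max_steps:
        bounds.append(max(floor, shrink * bounds[-1]))
    return bounds

# Illustrative stand-ins (not from the paper): N = 10^4, eta = N^{-0.9}, eps = 0.05.
N, eps = 10**4, 0.05
eta = N ** (-0.9)
initial = N ** eps / (N * eta)       # the upper bound N^eps / (N eta)
floor = N ** eps / (N * eta) ** 2    # a schematic stand-in for the lower bound
shrink = N ** (-eps / 4)             # the improvement factor N^{-eps/4}
trace = iterate_bound(initial, floor, shrink)
assert trace[-1] == floor            # the iteration terminates at the floor
assert all(a >= b for a, b in zip(trace, trace[1:]))  # and is monotone
```

The geometric decay means the number of iterations is logarithmic in the ratio of the initial bound to the floor.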
\def\Section#1#2{\section[#1]{#2}} \def\pr {\noindent {\it Proof.} } \def\rmk {\noindent {\it Remark} } \def\n{\nabla} \def\bn{\overline\nabla} \def\ir#1{\mathbb R^{#1}} \def\hh#1{\Bbb H^{#1}} \def\ch#1{\Bbb {CH}^{#1}} \def\cc#1{\Bbb C^{#1}} \def\f#1#2{\frac{#1}{#2}} \def\qq#1{\Bbb Q^{#1}} \def\cp#1{\Bbb {CP}^{#1}} \def\qp#1{\Bbb {QP}^{#1}} \def\grs#1#2{\bold G_{#1,#2}} \def\bb#1{\Bbb B^{#1}} \def\dd#1#2{\frac {d\,#1}{d\,#2}} \def\dt#1{\frac {d\,#1}{d\,t}} \def\mc#1{\mathcal{#1}} \def\pr{\frac {\partial}{\partial r}} \def\pfi{\frac {\partial}{\partial \phi}} \def\pf#1{\frac{\partial}{\partial #1}} \def\pd#1#2{\frac {\partial #1}{\partial #2}} \def\ppd#1#2{\frac {\partial^2 #1}{\partial #2^2}} \def\td{\tilde} \font\subjefont=cmti8 \font\nfont=cmr8 \def\a{\alpha} \def\be{\beta} \def\gr{\bold G_{2,2}^2} \def\r{\Re_{I\!V}} \def\sc{\bold C_m^{n+m}} \def\sg{\bold G_{n,m}^m(\bold C)} \def\p#1{\partial #1} \def\pb#1{\bar\partial #1} \def\de{\delta} \def\De{\Delta} \def\e{\eta} \def\ep{\varepsilon} \def\eps{\epsilon} \def\G{\Gamma} \def\g{\gamma} \def\k{\kappa} \def\la{\lambda} \def\La{\Lambda} \def\om{\omega} \def\Om{\Omega} \def\th{\theta} \def\Th{\Theta} \def\si{\sigma} \def\Si{\Sigma} \def\ul{\underline} \def\w{\wedge} \def\vs{\varsigma} \def\Hess{\mbox{Hess}} \def\R{\Bbb{R}} \def\C{\Bbb{C}} \def\tr{\mbox{tr}} \def\U{\Bbb{U}} \def\lan{\langle} \def\ran{\rangle} \def\ra{\rightarrow} \def\Dirac{D\hskip -2.9mm \slash\ } \def\dirac{\partial\hskip -2.6mm \slash\ } \def\bn{\bar{\nabla}} \def\aint#1{-\hskip -4.5mm\int_{#1}} \def\V{\mbox{Vol}} \def\ol{\overline} \renewcommand{\subjclassname}{\textup{2000} Mathematics Subject Classification} \subjclass{58E20,53A10.} \begin{document} \title [The Gauss image and Bernstein type theorems] {The Gauss image of entire graphs of higher codimension and Bernstein type theorems} \author [J. Jost, Y. L. Xin and Ling Yang]{J. Jost, Y. L. Xin and Ling Yang} \address{Max Planck Institute for Mathematics in the Sciences, Inselstr.
22, 04103 Leipzig, Germany.} \email{[email protected]} \address {Institute of Mathematics, Fudan University, Shanghai 200433, China.} \email{[email protected]} \address{Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, 04103 Leipzig, Germany.} \email{[email protected]} \thanks{The second named author is grateful to the Max Planck Institute for Mathematics in the Sciences in Leipzig for its hospitality and continuous support. He is also partially supported by NSFC and SFMEC} \begin{abstract} Under suitable conditions on the range of the Gauss map of a complete submanifold of Euclidean space with parallel mean curvature, we construct a strongly subharmonic function and derive a priori estimates for the harmonic Gauss map. The required conditions here are more general than in previous work, and they therefore enable us to substantially improve previous results for the Lawson-Osserman problem concerning the regularity of minimal submanifolds in higher codimension and to derive Bernstein type results. \end{abstract} \maketitle \Section{Introduction}{Introduction} We consider an oriented $n$-dimensional submanifold $M$ in $\ir{n+m}$ with $n\ge 3,\; m\ge 2.$ The Gauss map $\g:M\to \grs{n}{m}$ maps $M$ into a Grassmann manifold. In fact, for codimension $m=1$, this Grassmann manifold $\grs{n}{1}$ is the unit sphere $S^n$. In this paper, however, we are interested in the case $m\ge 2$, where the geometry of this Grassmann manifold is more complicated. By the theorem of Ruh-Vilms \cite{r-v}, $\g$ is harmonic if and only if $M$ has parallel mean curvature. This result thus applies in particular to the case where $M$ is a {\it minimal} submanifold of Euclidean space. Now, the Bernstein problem for entire minimal graphs is one of the central problems in geometric analysis. Let us summarize the status of this problem, first for the case of codimension 1.
The central result is that an entire minimal graph $M$ of dimension $n\le 7 $ and codimension 1 has to be planar, but there are counterexamples to such a Bernstein type theorem in dimension $8$ or higher. However, when the additional condition is imposed that the slope of the graph be uniformly bounded, then a theorem of Moser \cite{m}, called a weak Bernstein theorem, asserts that such an $M$ in arbitrary dimension has to be planar. Thus, the counterexamples arise from a non-uniform behavior at infinity. In fact, by a general scaling argument, the Bernstein theorems are intimately related to the regularity question for the minimal hypersurface equation. A natural and important question then is to what extent such Bernstein type theorems generalize to entire minimal graphs of codimension $m\ge 2$. Moser's result has been extended to higher codimension by Chern-Osserman for dimension $n=2$ \cite{c-o} and Barbosa \cite{b} and Fisher-Colbrie \cite{fc} for dimension $n=3$. For dimension $n=4$ and codimension $m=3$, however, there is a counterexample given by Lawson-Osserman \cite{l-o}. In fact, their paper emphasizes the stark contrast between the cases of codimension 1 and greater than 1 for the minimal submanifold system, concerning regularity, uniqueness, and existence. The Lawson-Osserman problem then is concerned with a systematic understanding of the analytic aspects of the minimal submanifold system in higher codimension. As in the case of codimension 1, the Bernstein problem provides a key towards this aim. While the work of Lawson-Osserman produced a counterexample for a general Bernstein theorem, there are also some positive results in this direction which we shall now summarize. Hildebrandt-Jost-Widman \cite{h-j-w} started a systematic approach on the basis of the aforementioned Ruh-Vilms theorem. 
That is, they developed and employed the theory of harmonic maps and the convex geometry of Grassmann manifolds, and obtained Bernstein type results in general dimension and codimension. Their main result says that a Bernstein result holds if the image of the Gauss map is contained in a strictly convex distance ball. Since the Riemannian sectional curvature of $\grs{n}{m}$ is nonnegative, the maximal radius of such a convex ball is bounded. In codimension 1, this in particular reproduces Moser's theorem, and in this sense, their result is optimal. For higher codimension, their result can be improved, for the following reason. Since the sectional curvature of $\grs{n}{m}$ for $n,m \ge 2$ is not constant, there exist larger convex sets than geodesic distance balls, and it turns out that harmonic (e.g. Gauss) maps with values in such convex sets can still be well enough controlled. In this sense, the results of \cite{h-j-w} could be improved by Jost-Xin \cite{j-x}, Wang \cite{wang} and Xin-Yang \cite{x-y1}. In \cite{j-x}, the largest such geodesically convex set in a Grassmann manifold was found. Formulating it somewhat differently, the harmonic map approach is based on the fact that the composition of a harmonic map with a convex function is a subharmonic function, and by using quantitative estimates for such subharmonic functions, regularity and Liouville type results for harmonic maps can be obtained. The most natural such convex function is the squared distance from some point, when its domain is restricted to a suitably small ball. As mentioned, the largest such ball on which a squared distance function is convex was utilized in \cite{h-j-w}. As also mentioned, however, this result is not yet optimal, and other convex functions were systematically utilized in \cite{x-y1}. In that paper, also the fundamental connection between estimates for the second fundamental form of minimal submanifolds and estimates for their Gauss maps was systematically explored. 
On this basis, the fundamental curvature estimate technique, as developed by Schoen-Simon-Yau \cite{s-s-y} and Ecker-Huisken \cite{e-h}, could be used in \cite{x-y1}. Still, there remains a large quantitative gap between those positive results and the counterexample of Lawson-Osserman. In this situation, it could either be that Bernstein theorems can be found under more general conditions, or that there exist other counterexamples in the so far unexplored range. In the present paper, we make a step towards closing this gap in the positive direction. We identify a geometrically natural function $v$ on a Grassmann manifold and a natural quantitative condition under which the precomposition of this function with a harmonic (Gauss) map is (strongly) subharmonic (Theorem \ref{thm1}). When the precomposition of $v$ with the Gauss map of a complete minimal submanifold is bounded, then that submanifold is an entire graph of bounded slope. On one hand, this is the first systematic example in harmonic map regularity theory where this auxiliary function is not necessarily convex. On the other hand, the Lawson-Osserman counterexample can also be readily characterized in terms of this function. Still, the range of values for $v$ where we can apply our scheme is strictly separated from the value of $v$ in that example. Therefore, some gap still remains, which should be explored in future work. Our work also finds its natural position in the general regularity theory for harmonic maps. Also, once we have a strongly subharmonic function, we could derive Bernstein type results within the framework of geometric measure theory, by the standard blow-down procedure and an appeal to Allard's regularity theorem \cite{a}. By building upon the work of many people on harmonic map regularity, we can obtain more insight, however.
In particular, we shall use the iteration method of \cite{h-j-w}, explore the relation with curvature estimates, and utilize a version of the telescoping trick (Theorem \ref{thm2}) to finally obtain a quantitatively controlled Gauss image shrinking process (Theorem \ref{thm3} and Theorem \ref{thm4}). In this way, we can understand why the submanifold is flat, as the Bernstein result asserts. More precisely, we obtain the following Bernstein type result, which substantially improves our previous results. \begin{thm}\label{thm5} Let $z^\a=f^\a(x^1,\cdots,x^n),\ \a=1,\cdots,m$, be smooth functions defined everywhere in $\R^n$ ($n\geq 3,m\geq 2$). Suppose their graph $M=(x,f(x))$ is a submanifold with parallel mean curvature in $\R^{n+m}$. Suppose that there exists a number $\be_0<3$ such that \begin{equation} \De_f=\Big[\det\Big(\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}\Big)\Big]^{\f{1}{2}}\leq \be_0.\label{be2} \end{equation} Then $f^1,\cdots,f^m$ have to be affine linear, i.e., the graph represents an affine $n$-plane. \end{thm} The essential point is to show that $v:=\De_f$ is subharmonic when $v<3$. In fact, when $v \le \be_0 <3$, then $\Delta v \ge K_0 |B|^2$, where $K_0$ is a positive constant and $B$ is the second fundamental form of $M$ in $\R^{n+m}$. This principle is not new. Wang \cite{wang} has given conditions under which $\log v$ is subharmonic and has derived Bernstein results from this, as indicated above. He only needs that $v$ be uniformly bounded by some constant, not necessarily $<3$, but in addition that there exist some $\delta >0$ such that for any two eigenvalues $\la_i, \la_j$ with $i\neq j$, the inequality $|\la_i \la_j|\le 1 -\delta$ holds (the latter condition means in geometric terms that $df$ is strictly area decreasing on any two-dimensional subspace).
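As a purely numerical illustration of the quantity $\De_f$ in (\ref{be2}), one can check that $\big[\det(I+(df)^Tdf)\big]^{1/2}$ coincides with $\prod_i(1+\la_i^2)^{1/2}$, where the $\la_i$ are the singular values of $df$. The Python sketch below does this for a $2\times 2$ Jacobian; the sample matrix is made up for illustration and plays no role in the argument.

```python
import math

def delta_f(J):
    """Delta_f = sqrt(det(I_2 + J^T J)) for a 2x2 Jacobian J,
    computed with the explicit 2x2 determinant of the graph metric."""
    g11 = 1 + J[0][0]**2 + J[1][0]**2
    g22 = 1 + J[0][1]**2 + J[1][1]**2
    g12 = J[0][0]*J[0][1] + J[1][0]*J[1][1]
    return math.sqrt(g11 * g22 - g12**2)

def singular_values_2x2(J):
    """Singular values of a 2x2 matrix via the eigenvalues of J^T J."""
    a = J[0][0]**2 + J[1][0]**2
    d = J[0][1]**2 + J[1][1]**2
    b = J[0][0]*J[0][1] + J[1][0]*J[1][1]
    tr, det = a + d, a*d - b*b
    disc = math.sqrt(max(tr*tr - 4*det, 0.0))
    return math.sqrt((tr + disc)/2), math.sqrt(max((tr - disc)/2, 0.0))

J = [[0.7, 0.2], [-0.3, 0.5]]   # an arbitrary sample Jacobian
l1, l2 = singular_values_2x2(J)
# Delta_f agrees with prod sqrt(1 + lambda_i^2) over the singular values:
assert abs(delta_f(J) - math.sqrt((1 + l1**2)*(1 + l2**2))) < 1e-12
```

For this sample Jacobian the value of $\De_f$ is well below the threshold $3$ of Theorem \ref{thm5}.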
Since subharmonicity of $\log v$ is a weaker property than subharmonicity of $v$ itself, his computation is substantially easier than ours, and our results cannot be deduced from his. In fact, $v^2=\prod (1+\la_i^2)$, and while the condition of \cite{j-x}, which can be reformulated as $v^2$ being bounded away from 4, implies the condition of \cite{wang}, so that the latter result generalizes the former, the condition needed in the present paper is only the weaker one that $v^2$ be bounded away from 9. In fact, somewhat more refined results can be obtained, as will be pointed out in the final remarks of this paper. \Section{Geometry of Grassmann manifolds}{Geometry of Grassmann manifolds}\label{s1} Let $\R^{n+m}$ be an $(n+m)$-dimensional Euclidean space. Its oriented $n$-subspaces constitute the Grassmann manifold $\grs{n}{m}$, which is the Riemannian symmetric space of compact type $SO(n+m)/SO(n)\times SO(m).$ $\grs{n}{m}$ can be viewed as a submanifold of some Euclidean space via the Pl\"ucker embedding. The restriction of the Euclidean inner product to $\grs{n}{m}$ is denoted by $w:\grs{n}{m}\times \grs{n}{m}\ra \R$, $$w(P,Q)=\lan e_1\w\cdots\w e_n,f_1\w\cdots\w f_n\ran=\det W,$$ where $P$ is spanned by a unit $n$-vector $e_1\w\cdots\w e_n$, $Q$ is spanned by another unit $n$-vector $f_1\w\cdots \w f_n$, and $W=\big(\lan e_i,f_j\ran\big)$. It is well-known that $$W^T W=O^T \La O$$ with $O$ an orthogonal matrix and $$\La=\left(\begin{array}{ccc} \mu_1^2 & & \\ & \ddots & \\ & & \mu_n^2 \end{array}\right).$$ Here each $0\leq \mu_i^2\leq 1$. Putting $p:=\min\{m,n\}$, we see that at most $p$ elements in $\{\mu_1^2,\cdots, \mu_n^2\}$ are not equal to $1$. Without loss of generality, we can assume $\mu_i^2=1$ whenever $i>p$. We also note that the $\mu_i^2$ can be expressed as \begin{equation}\label{di1a} \mu_i^2=\frac{1}{1+\la_i^2}.
\end{equation} The Jordan angles between $P$ and $Q$ are defined by $$\th_i=\arccos(\mu_i)\qquad 1\leq i\leq p.$$ The distance between $P$ and $Q$ is defined by \begin{equation}\label{di} d(P, Q)=\sqrt{\sum\th_i^2}. \end{equation} Thus, (\ref{di1a}) becomes \begin{equation}\label{di2} \la_i=\tan\th_i. \end{equation} In the sequel, we shall assume $n\geq m$ without loss of generality. We use the summation convention and agree on the ranges of indices: $$1\leq i,j,k,l\leq n,\; 1\leq \a,\be,\g\leq m,\; a, b,\cdots =1,\cdots, n+m.$$ Now we fix $P_0\in \grs{n}{m}.$ We represent it by $n$ vectors $\eps_i$, which are complemented by $m$ vectors $\eps_{n+\a}$, such that $\{\eps_i, \eps_{n+\a}\}$ form an orthonormal base of $\ir{m+n}$. Denote $$\Bbb{U}:=\{P\in \grs{n}{m},\; w(P,P_0)>0\}.$$ We can span an arbitrary $P\in \Bbb{U}$ by $n$ vectors $f_i$: $$f_i=\eps_i+Z_{i\a}\eps_{n+\a}.$$ The canonical metric in $\Bbb{U}$ can be described as \begin{equation}\label{m1}ds^2 = \tr (( I_n + ZZ^T )^{-1} dZ (I_m + Z^TZ)^{-1} dZ^T ),\end{equation} where $Z = (Z_{i \a})$ is an $(n \times m)$-matrix and $I_n$ (resp. $I_m$) denotes the $(n\times n)$-identity (resp. $(m \times m)$-identity) matrix. It is shown in \cite{x} that (\ref{m1}) can be derived from (\ref{di}). For any $P\in\Bbb{U}$, the Jordan angles between $P$ and $P_0$ are defined by $\{\th_i\}$. Let $E_{i\a}$ be the matrix with $1$ in the intersection of row $i$ and column $\a$ and $0$ otherwise. Then, $\sec\th_i\sec\th_\a E_{i\a}$ form an orthonormal basis of $T_P\grs{n}{m}$ with respect to (\ref{m1}). Denote its dual frame by $\om_{i\a}.$ Our fundamental quantity will be \begin{equation} v(\cdot, P_0):=w^{-1}(\cdot, P_0) \text{ on }\Bbb{U}. \end{equation} For arbitrary $P\in \U$ determined by an $n\times m$ matrix $Z$, it is easily seen that \begin{equation}\label{v} v(P,P_0)=\big[\det(I_n+ZZ^T)\big]^{\f{1}{2}}=\prod_{\a=1}^m \sec\th_\a =\prod_{\a=1}^m \frac{1}{\mu_\alpha}.
\end{equation} where $\th_1,\cdots,\th_m$ denote the Jordan angles between $P$ and $P_0$. In this terminology, $\Hess(v(\cdot, P_0))$ has been estimated in \cite{x-y1}. By (3.8) in \cite{x-y1}, we have \begin{eqnarray}\label{He}\aligned \Hess(v(\cdot,P_0))&=\sum_{i\neq \a}v\ \om_{i\a}^2+\sum_\a (1+2\la_\a^2)v\ \om_{\a\a}^2 +\sum_{\a\neq\be} \la_\a\la_\be v(\om_{\a\a}\otimes \om_{\be\be}+\om_{\a\be}\otimes\om_{\be\a})\\ &=\sum_{m+1\leq i\leq n,\a}v\ \om_{i\a}^2+\sum_{\a}(1+2\la_\a^2)v\ \om_{\a\a}^2 +\sum_{\a\neq \be}\la_\a\la_\be v\ \om_{\a\a}\otimes\om_{\be\be}\\ &\qquad\qquad+\sum_{\a<\be}\Big[(1+\la_\a\la_\be)v\Big(\f{\sqrt{2}}{2}(\om_{\a\be} +\om_{\be\a})\Big)^2\\ &\hskip2in+(1-\la_\a\la_\be)v\Big(\f{\sqrt{2}}{2}(\om_{\a\be}-\om_{\be\a})\Big)^2\Big]. \endaligned \end{eqnarray} It follows that \begin{equation}\label{hess} v(\cdot,P_0)^{-1}\Hess(v(\cdot,P_0)) =g+\sum_\a 2\la_\a^2 \om_{\a\a}^2+\sum_{\a\neq \be}\la_\a\la_\be(\om_{\a\a}\otimes \om_{\be\be}+ \om_{\a\be}\otimes \om_{\be\a}). \end{equation} The canonical Riemannian metric on $\grs{n}{m}$ derived from (\ref{di}) can also be described by the moving frame method. This will be useful for understanding some of the sequel. Let $\{e_i,e_{n+\a}\}$ be a local orthonormal frame field in $\ir{n+m}.$ Let $\{\om_i,\om_{n+\a}\}$ be its dual frame field so that the Euclidean metric is $$g=\sum_{i}\om_i^2+\sum_{\a}\om_{n+\a}^2.$$ The Levi-Civita connection forms $\om_{ab}$ of $\ir{n+m}$ are uniquely determined by the equations $$\aligned &d\om_{a}=\om_{ab}\wedge\om_b,\cr &\om_{ab}+\om_{ba}=0. \endaligned$$ It is shown in \cite{x} that the canonical Riemannian metric on $\grs{n}{m}$ can be written as \begin{equation}\label{m2} ds^2=\sum_{i,\ \a}\om_{i\, n+\a}^2.
\end{equation} \Section{Subharmonic functions}{Subharmonic functions} Let $M^n\ra \R^{n+m}$ be an isometric immersion with second fundamental form $B.$ Around any point $p\in M$, we choose an orthonormal frame field $e_1,\cdots, e_{n+m}$ in $\R^{n+m},$ such that $\{e_i\}$ are tangent to $M$ and $\{e_{n+\a}\}$ normal to $M.$ The metric on $M$ is $g=\sum_i \om_i^2.$ We have the structure equations \begin{equation}\label{str} \om_{i\ n+\a}=h_{\a ij}\om_j, \end{equation} where $h_{\a ij}$ are the coefficients of the second fundamental form $B$ of $M$ in $\R^{n+m}.$ Let $0$ be the origin of $\R^{n+m}$, $SO(m+n)$ be the Lie group consisting of all orthonormal frames $(0;e_i,e_{n+\a})$, $TF=\big\{(p;e_1,\cdots,e_n):p\in M,e_i\in T_p M,\lan e_i,e_j\ran=\de_{ij}\big\}$ be the principal bundle of orthonormal tangent frames over $M$, and $NF=\big\{(p;e_{n+1},\cdots,e_{n+m}):p\in M,e_{n+\a}\in N_p M\big\}$ be the principal bundle of orthonormal normal frames over $M$. Then $\bar{\pi}: TF\oplus NF\ra M$ is the projection with fiber $SO(n)\times SO(m)$. The Gauss map $\g: M\ra \grs{n}{m}$ is defined by $$\g(p)=T_p M\in \grs{n}{m}$$ via the parallel translation in $\R^{n+m}$ for every $p\in M$. Then the following diagram commutes $$\CD TF \oplus NF @>i>> SO (n+m) \\ @V{\bar\pi}VV @VV{\pi}V \\ M @>{\g}>> \grs{n}{m} \endCD$$ where $i$ denotes the inclusion map and $\pi: SO(n+m)\ra \grs{n}{m}$ is defined by $$(0;e_i,e_{n+\a})\mapsto e_1\w\cdots\w e_n.$$ It follows that \begin{equation}\label{edg} |d\g|^2=\sum_{\a,i,j}h_{\a ij}^2=|B|^2.
\end{equation} (\ref{hess}) was computed for the metric (\ref{m1}) whose corresponding coframe field is $\om_{i \a}.$ Since (\ref{m1}) and (\ref{m2}) are equivalent to each other, at any fixed point $P\in\grs{n}{m}$ there exists an isotropic group action, i.e., an $SO(n)\times SO(m)$ action, such that $\om_{i\a}$ is transformed to $\om_{i\ n+\a}$, namely, there are a local tangent frame field and a local normal frame field such that at the point under consideration, \begin{equation}\label{str2} \om_{i\ n+\a}=\g^*\om_{i \a}. \end{equation} In conjunction with (\ref{str}) and (\ref{str2}) we obtain \begin{equation}\label{hij} \g^*\om_{i\a}=h_{\a ij}\om_j. \end{equation} By the Ruh-Vilms theorem \cite{r-v}, the mean curvature of $M$ is parallel if and only if its Gauss map is a harmonic map. Now, we assume that $M$ has parallel mean curvature. We define \begin{equation} v:=v(\cdot,P_0)\circ \g. \end{equation} This function $v$ on $M$ will be the source of the basic inequality for this paper. Its geometric significance is seen from the following observation. If the $v$-function has an upper bound (or the $w$-function has a positive lower bound), then $M$ can be described as an entire graph on $\ir{n}$ by $f:\ir{n}\to \ir{m}$, provided $M$ is complete. In this situation, the $\la_i$ are the singular values of $df$ and \begin{equation}\label{v1} v=\Big[\det\Big(\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}\Big)\Big]^{\f{1}{2}}. \end{equation} Using the composition formula, in conjunction with (\ref{hess}), (\ref{edg}) and (\ref{hij}), and the fact that $\tau(\g)=0$ (the tension field of the Gauss map vanishes \cite{r-v}), we deduce the important formula of Lemma 1.1 in \cite{fc} and Prop. 2.1 in \cite{wang}. \begin{pro}Let $M$ be an $n$-submanifold in $\ir{n+m}$ with parallel mean curvature.
Then \begin{equation}\label{Dv} \De v=v|B|^2+v\sum_{\a,j}2\la_\a^2h_{\a,\a j}^2 +v\sum_{\a\neq \be,j}\la_\a\la_\be(h_{\a,\a j}h_{\be,\be j}+h_{\a,\be j}h_{\be,\a j}), \end{equation} where $h_{\a,ij}$ are the coefficients of the second fundamental form of $M$ in $\ir{n+m}$ (see (\ref{str})). \end{pro} A crucial step in this paper is to find a condition which guarantees the strong subharmonicity of the $v$-function on $M$. More precisely, under a condition on $v$, we shall bound its Laplacian from below by a positive constant times the squared norm of the second fundamental form. Looking at the expression (\ref{Dv}), we group its terms according to the different types of the indices of the coefficients of the second fundamental form as follows. \begin{equation} v^{-1}\De v= \sum_\a\sum_{i,j>m}h_{\a,ij}^2+\sum_{j>m}I_j+\sum_{j>m,\a<\be}II_{j\a\be} +\sum_{\a<\be<\g}III_{\a\be\g}+\sum_\a IV_\a \end{equation} where \begin{equation} I_j=\sum_\a(2+2\la_\a^2)h_{\a,\a j}^2+\sum_{\a\neq \be}\la_\a\la_\be h_{\a,\a j}h_{\be,\be j}, \end{equation} \begin{equation} II_{j\a\be}=2h_{\a,\be j}^2+2h_{\be,\a j}^2+2\la_\a\la_\be h_{\a,\be j}h_{\be,\a j}, \end{equation} \begin{equation}\aligned III_{\a\be\g}=&2h_{\a,\be\g}^2+2h_{\be,\g\a}^2+2h_{\g,\a\be}^2\\ &+2\la_\a\la_\be h_{\a,\be\g}h_{\be,\g\a}+2\la_\be\la_\g h_{\be,\g\a}h_{\g,\a\be}+2\la_\g\la_\a h_{\g,\a\be}h_{\a,\be\g} \endaligned \end{equation} and \begin{equation}\aligned IV_\a=&(1+2\la_\a^2)h_{\a,\a\a}^2+\sum_{\be\neq \a}\big(h_{\a,\be\be}^2+(2+2\la_\be^2)h_{\be,\be\a}^2\big)\\ &+\sum_{\be\neq \g}\la_\be\la_\g h_{\be,\be \a}h_{\g,\g \a}+2\sum_{\be\neq \a}\la_\a\la_\be h_{\a,\be\be}h_{\be,\be\a}. \endaligned \end{equation} It is easily seen that \begin{equation}\label{es1} I_j=(\sum_\a \la_\a h_{\a,\a j})^2+\sum_\a (2+\la_\a^2)h_{\a,\a j}^2\geq 2\sum_\a h_{\a,\a j}^2. \end{equation} Obviously \begin{equation} II_{j\a\be}=\la_\a\la_\be(h_{\a,\be j}+h_{\be,\a j})^2+(2-\la_\a\la_\be)(h_{\a,\be j}^2+h_{\be,\a j}^2).
\end{equation} $v=\Big(\prod_\a (1+\la_\a^2)\Big)^{\f{1}{2}}$ implies $(1+\la_\a^2)(1+\la_\be^2)\leq v^2$. Suppose $(1+\la_\a^2)(1+\la_\be^2)\equiv C$ for some constant $C\leq v^2$; differentiating both sides then gives $$\f{\la_\a d\la_\a}{1+\la_\a^2}+\f{\la_\be d\la_\be}{1+\la_\be^2}=0.$$ Therefore \begin{equation} \aligned d(\la_\a \la_\be)&=\la_\be d\la_\a+\la_\a d\la_\be\\ &=\big[\la_\be^2(1+\la_\a^2)-\la_\a^2(1+\la_\be^2)\big]\f{d\la_\a}{\la_\be(1+\la_\a^2)}\\ &=(\la_\be^2-\la_\a^2)\f{d\la_\a}{\la_\be(1+\la_\a^2)}. \endaligned \end{equation} It follows that $(\la_\a,\la_\be)\mapsto \la_\a\la_\be$ attains its maximum at the point satisfying $\la_\a=\la_\be$, which is hence $((C^{\f{1}{2}}-1)^{\f{1}{2}},(C^{\f{1}{2}}-1)^{\f{1}{2}})$. Thus $\la_\a\la_\be\leq C^{\f{1}{2}}-1\leq v-1$, and moreover \begin{equation}\label{es2} II_{j\a\be}\geq (3-v)(h_{\a,\be j}^2+h_{\be,\a j}^2). \end{equation} \bigskip \begin{lem}\label{l1} $III_{\a\be\g}\geq (3-v)(h_{\a,\be\g}^2+h_{\be,\g\a}^2+h_{\g,\a\be}^2)$. \end{lem} \begin{proof} It is easily seen that $$\aligned III_{\a\be\g}-&(3-v)(h_{\a,\be\g}^2+h_{\be,\g\a}^2+h_{\g,\a\be}^2)\\ =&(\la_\a h_{\a,\be\g}+\la_\be h_{\be,\g\a}+\la_\g h_{\g,\a\be})^2+(v-1-\la_\a^2)h_{\a,\be\g}^2\\ &\qquad+(v-1-\la_\be^2)h_{\be,\g\a}^2 +(v-1-\la_\g^2)h_{\g,\a\be}^2.\endaligned$$ If $\la_\a^2,\la_\be^2,\la_\g^2\leq v-1$, then $III_{\a\be\g}-(3-v)(h_{\a,\be\g}^2+h_{\be,\g\a}^2+h_{\g,\a\be}^2)$ is obviously nonnegative definite. Otherwise, we can assume $\la_\g^2>v-1$ without loss of generality; then $(1+\la_\a^2)(1+\la_\be^2)(1+\la_\g^2)\leq v^2$ implies $\la_\a^2<v-1, \la_\be^2<v-1$.
Denote $s=\la_\a h_{\a,\be\g}+\la_\be h_{\be,\g\a}$, then by the Cauchy-Schwarz inequality, $$\aligned s^2&=(\la_\a h_{\a,\be\g}+\la_\be h_{\be,\g\a})^2\\ &=\Big(\f{\la_\a}{\sqrt{v-1-\la_\a^2}}\sqrt{v-1-\la_\a^2}h_{\a,\be\g}+\f{\la_\be}{\sqrt{v-1-\la_\be^2}} \sqrt{v-1-\la_\be^2}h_{\be,\g\a}\Big)^2\\ &\leq\Big(\f{\la_\a^2}{v-1-\la_\a^2}+\f{\la_\be^2}{v-1-\la_\be^2}\Big)\big((v-1-\la_\a^2)h_{\a,\be\g}^2+(v-1-\la_\be^2)h_{\be,\g\a}^2\big) \endaligned$$ i.e. \begin{equation} (v-1-\la_\a^2)h_{\a,\be\g}^2+(v-1-\la_\be^2)h_{\be,\g\a}^2\geq \Big(\f{\la_\a^2}{v-1-\la_\a^2}+\f{\la_\be^2}{v-1-\la_\be^2}\Big)^{-1}s^2. \end{equation} Hence \begin{equation}\label{ineq3}\aligned &III_{\a\be\g}-(3-v)(h_{\a,\be\g}^2+h_{\be,\g\a}^2+h_{\g,\a\be}^2)\\ \geq& (s+\la_\g h_{\g,\a\be})^2+\Big(\f{\la_\a^2}{v-1-\la_\a^2}+\f{\la_\be^2}{v-1-\la_\be^2}\Big)^{-1}s^2+(v-1-\la_\g^2)h_{\g,\a\be}^2\\ =&\Big[1+\Big(\f{\la_\a^2}{v-1-\la_\a^2}+\f{\la_\be^2}{v-1-\la_\be^2}\Big)^{-1}\Big]s^2+(v-1)h_{\g,\a\be}^2+2\la_\g sh_{\g,\a\be}. \endaligned \end{equation} It is well known that $ax^2+2bxy+cy^2$ is nonnegative definite if and only if $a,c\geq 0$ and $ac-b^2\geq 0$. Hence the right hand side of (\ref{ineq3}) is nonnegative definite if and only if \begin{equation}\label{cond} (v-1)\Big[1+\Big(\f{\la_\a^2}{v-1-\la_\a^2}+\f{\la_\be^2}{v-1-\la_\be^2}\Big)^{-1}\Big]-\la_\g^2\geq 0 \end{equation} i.e. \begin{equation}\label{cond2} \f{1}{v-1-\la_\a^2}+\f{1}{v-1-\la_\be^2}+\f{1}{v-1-\la_\g^2}\leq \f{2}{v-1}. \end{equation} Denote $x=1+\la_\a^2$, $y=1+\la_\be^2$, $z=1+\la_\g^2$. Let $C$ be a constant $\leq v^2$, denote $$\Om=\big\{(x,y,z)\in \R^3:1\leq x,y<v,\; z>v,\; xyz=C\big\}$$ and $f:\Om\ra \R$ $$(x,y,z)\mapsto \f{1}{v-x}+\f{1}{v-y}+\f{1}{v-z}.$$ We claim $f\leq \f{2}{v-1}$ on $\Om$. Then (\ref{cond2}) follows and hence $$III_{\a\be\g}-(3-v)(h_{\a,\be\g}^2+h_{\be,\g\a}^2 +h_{\g,\a\be}^2)$$ is nonnegative definite. We now verify the claim. 
For arbitrary $\ep>0$, denote $$f_\ep=\f{1}{v+\ep-x}+\f{1}{v+\ep-y}+\f{1}{v+\ep-z},$$ then $f_\ep$ is obviously a smooth function on $$\Om_\ep=\big\{(x,y,z)\in \R^3:1\leq x,y\leq v,\; z\geq v+2\ep,\; xyz=C\big\}.$$ The compactness of $\Om_\ep$ implies the existence of $(x_0,y_0,z_0)\in \Om_\ep$ satisfying \begin{equation}\label{sup} f_\ep(x_0,y_0,z_0)=\sup_{\Om_\ep} f_\ep. \end{equation} Fix $x_0$, then (\ref{sup}) implies that for arbitrary $(y,z)\in \R^2$ satisfying $1\leq y\leq v,\; z\geq v+2\ep$ and $yz=\f{C}{x_0}$, we have $$f_{\ep,x_0}(y, z)=\f{1}{v+\ep-y}+\f{1}{v+\ep-z}\leq \f{1}{v+\ep-y_0}+\f{1}{v+\ep-z_0}.$$ Differentiating both sides of $yz=\f{C}{x_0}$ yields $\f{dy}{y}+\f{dz}{z}=0.$ Hence \begin{equation}\aligned &d\Big(\f{1}{v+\ep-y}+\f{1}{v+\ep-z}\Big)=\f{dy}{(v+\ep-y)^2}+\f{dz}{(v+\ep-z)^2}\\ =&\Big[\f{y}{(v+\ep-y)^2}-\f{z}{(v+\ep-z)^2}\Big]\f{dy}{y}=\f{((v+\ep)^2-yz)(y-z)}{(v+\ep-y)^2(v+\ep-z)^2}\f{dy}{y}. \endaligned \end{equation} It implies that $f_{\ep,x_0}\left(y, \f{C}{yx_0}\right)$ is decreasing in $y$ and $y_0=1.$ Similarly, one can derive $x_0=1$. Therefore $$\sup_{\Om_\ep} f_\ep=f_\ep(1,1,C)=\f{2}{v+\ep-1}+\f{1}{v+\ep-C}<\f{2}{v+\ep-1}.$$ Note that $f_\ep\ra f$ and $\Om\subset \lim_{\ep\ra 0^+}\Om_\ep$. Hence by letting $\ep\ra 0$ one can obtain $f\leq \f{2}{v-1}.$ \end{proof} \bigskip \begin{lem}\label{l2} There exists a positive constant $\ep_0$, such that if $v\leq 3$, then $$IV_\a\geq \ep_0\big(h_{\a,\a\a}^2+\sum_{\be\neq \a}(h_{\a,\be\be}^2+2h_{\be,\be\a}^2)\big).$$ \end{lem} \begin{proof} For arbitrary $\ep_0\in [0,1)$, denote $C=1-\ep_0$, then \begin{equation}\label{ineq4} \aligned &IV_\a-\ep_0\big(h_{\a,\a\a}^2+\sum_{\be\neq \a}(h_{\a,\be\be}^2+2h_{\be,\be\a}^2)\big)\\ =&(\sum_\be \la_\be h_{\be,\be \a})^2+(C+\la_\a^2)h_{\a,\a\a}^2+\sum_{\be\neq \a}\big[Ch_{\a,\be\be}^2+(2C+\la_\be^2)h_{\be,\be\a}^2+2\la_\a\la_\be h_{\a,\be\be}h_{\be,\be\a}\big]. 
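Lemma \ref{l1} can be spot-checked numerically: sampling $\la_\a,\la_\be,\la_\g$ with $(1+\la_\a^2)(1+\la_\be^2)(1+\la_\g^2)\leq 9$ (the extreme case $v\leq 3$ in which the remaining angles vanish) together with random coefficients $h$, the inequality of the lemma holds in every trial. The Python sketch below, with arbitrary sampling ranges, is of course no substitute for the proof above.

```python
import math, random

def III(la, lb, lc, ha, hb, hc):
    """The quantity III_{alpha beta gamma} from the grouping of Delta v."""
    return (2*ha**2 + 2*hb**2 + 2*hc**2
            + 2*la*lb*ha*hb + 2*lb*lc*hb*hc + 2*lc*la*hc*ha)

random.seed(0)
for _ in range(2000):
    # Rejection-sample lambda's with (1+la^2)(1+lb^2)(1+lc^2) <= 9, i.e. v <= 3.
    while True:
        la, lb, lc = (random.uniform(-2.5, 2.5) for _ in range(3))
        v2 = (1 + la**2) * (1 + lb**2) * (1 + lc**2)
        if v2 <= 9:
            break
    v = math.sqrt(v2)
    ha, hb, hc = (random.uniform(-1.0, 1.0) for _ in range(3))
    # Lemma l1: III >= (3 - v) * (ha^2 + hb^2 + hc^2), up to float tolerance.
    assert III(la, lb, lc, ha, hb, hc) >= (3 - v) * (ha**2 + hb**2 + hc**2) - 1e-9
```

The tolerance is needed because equality can occur, e.g. for $\la_\a=\la_\g$, $\la_\be=0$, $h_{\a,\be\g}=-h_{\g,\a\be}$, $h_{\be,\g\a}=0$.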
\endaligned \end{equation} Obviously $$\aligned C\, h_{\a,\be\be}^2&+C^{-1}\la_\a^2\la_\be^2h_{\be,\be\a}^2+2\la_\a\la_\be h_{\a,\be\be}h_{\be,\be\a}\\ &\geq (C^{\f{1}{2}}h_{\a,\be\be}+C^{-\f{1}{2}}\la_\a\la_\be h_{\be,\be\a})^2\geq 0,\endaligned$$ hence, the third term of the right hand side of (\ref{ineq4}) satisfies \begin{equation}\label{ineq2} Ch_{\a,\be\be}^2+(2C+\la_\be^2)h_{\be,\be\a}^2+2\la_\a\la_\be h_{\a,\be\be}h_{\be,\be\a}\geq (2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2)h_{\be,\be\a}^2 \end{equation} If there exist 2 distinct indices $\be,\g\neq \a$ satisfying $$2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2\leq 0$$ and $$2C+\la_\g^2-C^{-1}\la_\a^2\la_\g^2\leq 0,$$ then $\la_\a^2>C$ and $$\la_\be^2\geq \f{2C^2}{\la_\a^2-C},\qquad \la_\g^2\geq \f{2C^2}{\la_\a^2-C}.$$ It implies $$(1+\la_\a^2)(1+\la_\be^2)(1+\la_\g^2)\geq \f{(\la_\a^2+1)(\la_\a^2+2C^2-C)^2}{(\la_\a^2-C)^2}.$$ Define $f:x\in (C,+\infty)\mapsto \f{(x+1)(x+2C^2-C)^2}{(x-C)^2}$, then a direct calculation shows $$(\log f)'=\f{1}{x+1}+\f{2}{x+2C^2-C}-\f{2}{x-C}=\f{(x-C(2C+3))(x+C)}{(x+1)(x+2C^2-C)(x-C)}.$$ It follows that $f(x)\geq f(C(2C+3))=\f{(2C+1)^3}{C+1}$, i.e. \begin{equation}\label{ineq7} v^2\geq (1+\la_\a^2)(1+\la_\be^2)(1+\la_\g^2)\geq \f{(2C+1)^3}{C+1}. \end{equation} If $C=1$, then $\f{(2C+1)^3}{C+1}=\f{27}{2}>9$; hence there is $\ep_1>0$, once $\ep_0\leq \ep_1$, then $C=1-\ep_0$ satisfies $\f{(2C+1)^3}{C+1}>9$, which causes a contradiction to $v^2\leq 9$. Hence, one can find an index $\g\neq \a$, such that \begin{equation} 2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2> 0\qquad \text{for arbitrary }\be\neq \a,\g. \end{equation} Denote $s=\sum_{\be\neq \g}\la_\be h_{\be,\be\a}$, then by using the Cauchy-Schwarz inequality, \begin{equation}\label{ineq5}\aligned (C+\la_\a^2)h_{\a,\a\a}^2&+\sum_{\be\neq \a,\g}(2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2)h_{\be,\be\a}^2\\ & \geq \Big(\f{\la_\a^2}{C+\la_\a^2}+\sum_{\be\neq \a,\g}\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}\Big)^{-1}s^2. 
\endaligned\end{equation} Substituting (\ref{ineq5}) and (\ref{ineq2}) into (\ref{ineq4}) yields \begin{equation}\label{ineq6} \aligned IV_\a&-\ep_0\big(h_{\a,\a\a}^2+\sum_{\be\neq \a}(h_{\a,\be\be}^2+2h_{\be,\be\a}^2)\big)\\ &\geq (s+\la_\g h_{\g,\g\a})^2+ \Big(\f{\la_\a^2}{C+\la_\a^2}+\sum_{\be\neq \a,\g}\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}\Big)^{-1}s^2\\ &\qquad +(2C+\la_\g^2-C^{-1}\la_\a^2\la_\g^2)h_{\g,\g\a}^2\\ &\geq \Big[1+\Big(\f{\la_\a^2}{C+\la_\a^2}+\sum_{\be\neq \a,\g}\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}\Big)^{-1}\Big]s^2\\ &\qquad +(2C+2\la_\g^2-C^{-1}\la_\a^2\la_\g^2)h_{\g,\g\a}^2+2\la_\g s h_{\g,\g\a}. \endaligned \end{equation} Note that when $m=2$, $s=\la_\a h_{\a,\a\a}$ and $\sum_{\be\neq \a,\g}\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}=0$. The right hand side of (\ref{ineq6}) is nonnegative definite if and only if \begin{equation}\label{con1} 2C+2\la_\g^2-C^{-1}\la_\a^2\la_\g^2\geq 0 \end{equation} and \begin{equation}\label{con2} \Big[1+\Big(\f{\la_\a^2}{C+\la_\a^2}+\sum_{\be\neq \a,\g}\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}\Big)^{-1}\Big](2C+2\la_\g^2-C^{-1}\la_\a^2\la_\g^2)-\la_\g^2\geq 0. \end{equation} Assume $2C+2\la_\g^2-C^{-1}\la_\a^2\la_\g^2< 0$, then $\la_\a^2>2C$ and $\la_\g^2> \f{2C^2}{\la_\a^2-2C}$, which implies $(1+\la_\a^2)(1+\la_\g^2)\geq \f{(\la_\a^2+1)(\la_\a^2+2C(C-1))}{\la_\a^2-2C}$. Define $f:x\in (2C,+\infty)\mapsto \f{(x+1)(x+2C(C-1))}{x-2C}$, then $$(\log f)'=\f{1}{x+1}+\f{1}{x+2C(C-1)}-\f{1}{x-2C}= \f{x^2-4Cx-2C^2(2C-1)}{(x+1)(x+2C(C-1))(x-2C)}.$$ and hence $$\min f=f\big(C(2+\sqrt{4C+2})\big)=2C^2+2C+1+2C\sqrt{4C+2}.$$ In particular, when $C=1,\; \min f= 5+2\sqrt{6}>9$. There exists $\ep_2>0$, such that once $\ep_0\leq \ep_2$, one can derive $\min f>9$ and moreover $v^2\geq (1+\la_\a^2)(1+\la_\g^2)>9$, which contradicts $v\leq 3$. Therefore (\ref{con1}) holds. If $2C+\la_\g^2-C^{-1}\la_\a^2\la_\g^2\geq 0$, (\ref{con2}) trivially holds. 
At last, we consider the situation when there exists $\g, \, \g\neq\a$, such that $$2C+\la_\g^2-C^{-1}\la_\a^2\la_\g^2<0.$$ In this case, (\ref{con2}) is equivalent to \begin{equation}\label{con} \f{\la_\a^2}{C+\la_\a^2}+\sum_{\be\neq \a}\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}\leq -1. \end{equation} Noting that $$\f{\la_\be^2}{2C+\la_\be^2-C^{-1}\la_\a^2\la_\be^2}=\f{C}{C-\la_\a^2}-\f{2C^3}{(C-\la_\a^2)^2}\f{1}{1+\la_\be^2 +\f{\la_\a^2+C(2C-1)}{C-\la_\a^2}}$$ and let $x_\be=1+\la_\be^2$, then (\ref{con}) is equivalent to \begin{equation} \f{x_\a-1}{x_\a+C-1}+\sum_{\be\neq \a}\Big[\f{C}{C+1-x_\a}-\f{2C^3}{(C+1-x_\a)^2}\f{1}{x_\be -\f{x_\a+2C^2-C-1}{x_\a-C-1}}\Big]\leq -1. \end{equation} Denote $$\aligned \psi(x_\a)=\f{x_\a-1}{x_\a+C-1},&\qquad \varphi(x_\a)=\f{x_\a+2C^2-C-1}{x_\a-C-1},\\ \zeta(x_\a)=\f{C}{C+1-x_\a},&\qquad \xi(x_\a)=\f{2C^3}{(C+1-x_\a)^2}. \endaligned $$ Let \begin{equation}\label{Om} \aligned\Om=\big\{&(x_1,\cdots,x_m)\in \R^m: x_\a>C+1,1\leq x_\be<\varphi(x_\a)\text{ for all }\be\neq \a,\g, \\ &\qquad x_\g>\varphi(x_\a), \prod_{\be}x_\be=v^2\big\} \endaligned \end{equation} and define $f:\Om\ra \R$ $$(x_1,\cdots,x_m)\mapsto \psi(x_\a)+\sum_{\be\neq \a}\Big[\zeta(x_\a)-\f{\xi(x_\a)}{x_\be-\varphi(x_\a)}\Big].$$ We point out that in (\ref{Om}), $\a$ and $\g$ are fixed indices. Now we claim \begin{equation}\label{claim} \sup_\Om f=\sup_\G f \end{equation} where \begin{equation}\aligned \G=\big\{&(x_1,\cdots,x_m)\in \R^m: x_\a\geq C+1, x_\be=1\text{ for all }\be\neq \a,\g,\\ &\qquad x_\g\geq \varphi(x_\a),\prod_\be x_\be=v^2\big\}\subset \Om. \endaligned \end{equation} When $m=2$, obviously $\G=\Om$ and (\ref{claim}) is trivial. We put $$\varphi_\ep(x_\a)=\varphi(x_\a+\ep),\ \zeta_\ep(x_\a)=\zeta(x_\a+\ep),\ \xi_\ep(x_\a)=\xi(x_\a+\ep)$$ for arbitrary $\ep>0$. 
If $m\geq 3$, as in the proof of Lemma \ref{l1}, we define $$f_\ep=\psi(x_\a)+\sum_{\be\neq \a}\Big[\zeta_\ep(x_\a)-\f{\xi_\ep(x_\a)}{x_\be-\varphi_\ep(x_\a)}\Big],$$ then $f_\ep$ is well-defined on $$\aligned\Om_\ep=\big\{&(x_1,\cdots,x_m)\in \R^m:x_\a\geq C+1,1\leq x_\be\leq \varphi_{2\ep}(x_\a)\text{ for all }\be\neq \a,\g,\\ &\qquad x_\g\geq \varphi_{\f{\ep}{2}}(x_\a), \prod_{\be}x_\be=v^2\big\}.\endaligned$$ The compactness of $\Om_\ep$ enables us to find $(y_1,\cdots,y_m)\in \Om_\ep$, such that \begin{equation}\label{mini} f_\ep(y_1,\cdots,y_m)=\sup_{\Om_\ep} f_\ep. \end{equation} Denote $b=\varphi_\ep(y_\a)$, then (\ref{mini}) implies for arbitrary $\be\neq \a,\g$ that $$\f{1}{x_\be-b}+\f{1}{x_\g-b}\geq \f{1}{y_\be-b}+\f{1}{y_\g-b}$$ holds whenever $x_\be x_\g=y_\be y_\g$, $1\leq x_\be\leq \varphi_{2\ep}(y_\a)$ and $x_\g\geq \varphi_{\f{\ep}{2}}(y_\a)$. Differentiating both sides yields $\f{dx_\be}{x_\be}+\f{dx_\g}{x_\g}=0$, thus \begin{equation}\label{diff}\aligned d\Big(\f{1}{x_\be-b}+\f{1}{x_\g-b}\Big)&=-\f{dx_\be}{(x_\be-b)^2}-\f{dx_\g}{(x_\g-b)^2}\\ &=\f{(b^2-x_\be x_\g)(x_\g-x_\be)}{(x_\be-b)^2(x_\g-b)^2}\f{dx_\be}{x_\be}. \endaligned \end{equation} Similarly to (\ref{ineq7}), one can prove $y_\a b^2=\f{y_\a(y_\a+\ep+2C^2-C-1)^2}{(y_\a+\ep-C-1)^2}>9$ when $\ep_0\leq \ep_1$ (note that $C=1-\ep_0$) and $\ep_1$ is sufficiently small. In conjunction with $y_\a x_\be x_\g=y_\a y_\be y_\g\leq v^2<9$, we have $b^2-x_\be x_\g>0$. Hence (\ref{diff}) implies $y_\be=1$ for all $\be\neq \a, \g$. In other words, if we put $$\aligned \G_\ep=\big\{(x_1,\cdots,x_m)&\in \R^m: x_\a\geq C+1,\; x_\be=1\text{ for all }\be\neq \a,\g,\\ &x_\g\geq \varphi_{\f{\ep}{2}}(x_\a),\prod_\be x_\be=v^2\big\}, \endaligned$$ then $\max_{\Om_\ep}f_\ep=\max_{\G_\ep}f_\ep$. Therefore, (\ref{claim}) follows from $\Om\subset \bigcup_{\ep>0}\Om_\ep$, $\G\subset \bigcup_{\ep>0}\G_\ep$ and $\lim_{\ep\ra 0}f_\ep=f$. To prove (\ref{con2}), i.e. 
$f\leq -1$, it is sufficient to show on $\G$, \begin{equation} \psi(x_\a)+\zeta(x_\a)-\f{\xi(x_\a)}{\f{v^2}{x_\a}-\varphi(x_\a)}\leq -1 \end{equation} whenever $x_\a > C+1$ and $\f{v^2}{x_\a}>\varphi(x_\a)$. After a straightforward calculation, the above inequality is equivalent to \begin{equation}\label{con6} x_\a^3+(2C^2-C-2)x_\a^2+(C^3-3C^2+C+1)x_\a-v^2(x_\a^2-(C+2)x_\a-(C^2-C-1))\geq 0. \end{equation} It is easily seen that if \begin{equation}\label{con5} \inf_{t^2-(C+2)t-(C^2-C-1)>0}\f{t^3+(2C^2-C-2)t^2+(C^3-3C^2+C+1)t}{t^2-(C+2)t-(C^2-C-1)}> 9, \end{equation} then (\ref{con6}) naturally holds and furthermore one can deduce that $IV_\a-\ep_0\big(h_{\a,\a\a}^2+\sum_{\be\neq \a}(h_{\a,\be\be}^2+2h_{\be,\be\a}^2)\big)$ is nonnegative definite. When $C=1$, (\ref{con5}) becomes \begin{equation}\label{con7}\inf_{t>\f{3+\sqrt{5}}{2}}\f{t^2(t-1)}{t^2-3t+1}>9. \end{equation} If this is true, one can find a positive constant $\ep_3$ to ensure (\ref{con5}) holds true whenever $\ep_0\leq \ep_3$. Finally, by taking $\ep_0=\min\{\ep_1,\ep_2,\ep_3\}$ we obtain the final conclusion. (\ref{con7}) is equivalent to the property that $h(t)=t^2(t-1)-9(t^2-3t+1)=t^3-10t^2+27t-9$ has no zeros on $\big(\f{3+\sqrt{5}}{2},+\infty\big)$. $h'(t)=3t^2-20t+27$ implies $h'(t)<0$ on $\big(\f{3+\sqrt{5}}{2},\f{10+\sqrt{19}}{3}\big)$ and $h'(t)>0$ on $\big(\f{10+\sqrt{19}}{3},+\infty\big)$, hence $$\inf_{t> \f{3+\sqrt{5}}{2}}h=h\big(\f{10+\sqrt{19}}{3}\big)=\f{187-38\sqrt{19}}{27}>0$$ and (\ref{con7}) follows. \end{proof} \bigskip In conjunction with (\ref{es1}), (\ref{es2}), Lemmas \ref{l1} and \ref{l2}, we can arrive at \begin{thm}\label{thm1} Let $M^n$ be a submanifold in $\R^{n+m}$ with parallel mean curvature. Then for arbitrary $p\in M$ and $P_0\in \grs{n}{m}$, once $v(\g(p),P_0)\leq 3$, we have $\De\big(v(\cdot,P_0)\circ \g\big)\geq 0$ at $p$.
Moreover, if $v(\g(p),P_0)\leq \be_0<3$, then there exists a positive constant $K_0$, depending only on $\be_0$, such that \begin{equation}\label{Dv1} \De\big(v(\cdot,P_0)\circ \g\big)\geq K_0|B|^2 \end{equation} at $p$. \end{thm} We also express this result by saying that the function $v$ satisfying (\ref{Dv1}) is strongly subharmonic under the condition $v(\g(p),P_0)\leq \be_0<3$. \begin{rem} If\, $\log v$ is a strongly subharmonic function, then $v$ is certainly strongly subharmonic, but the converse is not necessarily true. Therefore, the above result does not seem to follow from Theorem 1.2 in \cite{wang}. \end{rem} \Section{Curvature estimates}{Curvature estimates} Let $z^\a=f^\a(x^1,\cdots,x^n),\a=1,\cdots,m$ be smooth functions defined on $D_{R_0}\subset \R^n$. Their graph $M=(x,f(x))$ is a submanifold with parallel mean curvature in $\R^{n+m}$. Suppose there is $\be_0\in [1,3)$, such that \begin{equation}\label{slope} \De_f=\Big[\det\Big(\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}\Big)\Big]^{\f{1}{2}}\leq \be_0. \end{equation} Denote by $\eps_1,\cdots,\eps_{n+m}$ the canonical basis of $\R^{n+m}$ and put $P_0=\eps_1\w\cdots\w\eps_n$. Then by (\ref{slope}) $$v(\cdot,P_0)\circ \g\leq \be_0$$ holds everywhere on $M$. Putting $v=v(\cdot,P_0)\circ \g$, Theorem \ref{thm1} tells us \begin{equation}\label{sub} \De v\geq K_0(\be_0)|B|^2. \end{equation} Let $\eta$ be a nonnegative smooth function on $M$ with compact support. Multiplying both sides of (\ref{sub}) by $\eta$, integrating on $M$ and then integrating by parts gives \begin{equation}\label{weak} K_0\int_M |B|^2 \eta*1\leq -\int_M \n\eta\cdot\n v*1. \end{equation} $F:D_{R_0}\ra M$ defined by $$x=(x^1,\cdots,x^n)\mapsto (x,f(x))$$ is obviously a diffeomorphism.
$F_*\f{\p}{\p x^i}=\eps_i+\f{\p f^\a}{\p x^i}\eps_{n+\a}$ implies $$\big\lan F_*\f{\p}{\p x^i},F_*\f{\p}{\p x^j}\big\ran=\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}.$$ Hence \begin{equation} F^* g=\Big(\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}\Big)dx^idx^j \end{equation} where $g$ is the metric tensor on $M$. In other words, $M$ is isometric to the Euclidean ball $D_{R_0}$ equipped with the metric $g_{ij}dx^i dx^j$ ($g_{ij}=\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}$). It is easily seen that for arbitrary $\xi\in \R^n$, \begin{equation}\label{eig1} \xi^i g_{ij}\xi^j=|\xi|^2+\sum_\a \Big(\sum_i \f{\p f^\a}{\p x^i}\xi^i\Big)^2\geq |\xi|^2. \end{equation} On the other hand, $\De_f\leq \be_0$ implies $\prod_{i=1}^n \mu_i\leq \be_0^2$, with $\mu_1,\cdots,\mu_n$ the eigenvalues of $(g_{ij})$, thus \begin{equation}\label{eig2} \xi^i g_{ij}\xi^j\leq \be_0^2|\xi|^2\leq 9|\xi|^2. \end{equation} In local coordinates, the Laplace-Beltrami operator is $$\De=\f{1}{\sqrt{G}}\f{\p}{\p x^i}\Big(\sqrt{G}g^{ij}\f{\p }{\p x^j}\Big).$$ Here $(g^{ij})$ is the inverse matrix of $(g_{ij})$, and $G=\det(g_{ij})=\De_f^2$. From (\ref{slope}), (\ref{eig1}) and (\ref{eig2}) it is easily seen that \begin{equation}\label{uniform} \f{1}{3}|\xi|^2\leq \be_0^{-1}|\xi|^2\leq \xi^i(\sqrt{G}g^{ij})\xi^j\leq \be_0|\xi|^2\leq 3|\xi|^2. \end{equation} Following \cite{j} and \cite{j-x-y} we shall make use of the following abbreviations: For arbitrary $R\in (0,R_0)$, let \begin{equation} B_R=\big\{(x,f(x)):x\in D_R\big\}\subset M. \end{equation} And for arbitrary $h\in L^\infty(B_R)$ denote \begin{equation} \aligned &h_{+,R}\mathop{=}\limits^{\text{def.}}\sup_{B_R}h,\qquad h_{-,R}\mathop{=}\limits^{\text{def.}}\inf_{B_R}h,\qquad \bar{h}_R\mathop{=}\limits^{\text{def.}}\aint{B_R}h=\f{\int_{B_R}h*1}{|\text{Vol}(B_R)|}\\ &|\bar{h}|_{p,R}\mathop{=}\limits^{\text{def.}}\Big(\aint{B_R}|h|^p\Big)^{\f{1}{p}}\; (p\in (-\infty,+\infty)).
\endaligned \end{equation} (\ref{uniform}) shows that $\De$ is a uniformly elliptic operator. Moser's Harnack inequality \cite{m} for supersolutions of elliptic PDEs in divergence form gives \begin{lem}\label{Har} For a positive superharmonic function $h$ on $B_R$ with $R\in (0,R_0]$, $p\in (0,\f{n}{n-2})$ and $\th\in [\f{1}{2},1)$, we have the following estimate $$|\bar{h}|_{p,\th R}\leq \g_1 h_{-,\th R}.$$ Here $\g_1$ is a positive constant only depending on $n$, $p$ and $\th$, but not on $h$ and $R$. \end{lem} (\ref{sub}) shows the subharmonicity of $v$, and therefore $v_{+,R}-v+\ep$ is a positive superharmonic function on $B_R$ for arbitrary $\ep>0$. With the aid of Lemma \ref{Har}, one can follow \cite{j} to get \bigskip \begin{cor}\label{c1} There is a constant $\de_0\in (0,1)$, depending only on $n$, such that $$v_{+,\f{R}{2}}\leq (1-\de_0)v_{+,R}+\de_0\bar{v}_{\f{R}{2}}.$$ \end{cor} \bigskip Denote by $G^\rho$ the mollified Green function for the Laplace-Beltrami operator on $B_R$. Then for arbitrary $p=(y,f(y))\in B_R$, once $$B_\rho(p)=\big\{(x,f(x))\in M: x\in D_\rho(y)\big\}\subset B_R$$ we have $$\int_{B_R}\n G^\rho(\cdot,p)\cdot \n\phi*1=\aint{B_\rho(p)}\phi$$ for every $\phi\in H_0^{1,2}(B_R)$. The a priori estimates for mollified Green functions of \cite{g-w} tell us \bigskip \begin{lem}\label{l4} With $o:=(0,f(0))$, we have \begin{equation}\label{Gr1}\aligned 0\leq G^{\f{R}{2}}(\cdot,o)\leq c_2(n)R^{2-n}&\qquad \text{on }B_R\\ G^{\f{R}{2}}(\cdot,o)\geq c_1(n)R^{2-n}&\qquad \text{on }B_{\f{R}{2}}. \endaligned \end{equation} For arbitrary $p\in B_{\f{R}{4}}$, \begin{equation}\label{Gr3} G^\rho(\cdot,p)\leq C(n)R^{2-n}\qquad \text{on }B_R\backslash \bar{B}_{\f{R}{2}}. \end{equation} Moreover if $\rho\leq \f{R}{8}$, \begin{equation}\label{Gr2} \int_{B_R\backslash \bar{B}_{\f{R}{2}}}\big|\n G^\rho(\cdot,p)\big|^2*1\leq C(n)R^{2-n}.
\end{equation} \end{lem} \bigskip Based on (\ref{weak}), Corollary \ref{c1} and Lemma \ref{l4}, we can use the method of \cite{j} to derive a telescoping lemma \`{a} la Giaquinta-Giusti \cite{g-g} and Giaquinta-Hildebrandt \cite{g-h}. \bigskip \begin{thm}\label{thm2} There exists a positive constant $C_1$, only depending on $n$ and $\be_0$, such that for arbitrary $R\leq R_0$, \begin{equation}\label{tele} R^{2-n}\int_{B_{\f{R}{2}}}|B|^2*1\leq C_1(v_{+,R}-v_{+,\f{R}{2}}). \end{equation} Moreover, there exists a positive constant $C_2$, only depending on $n$ and $\be_0$, such that for arbitrary $\ep>0$, we can find $R\in [\exp(-C_2\ep^{-1})R_0,R_0]$, such that \begin{equation} R^{2-n}\int_{B_{\f{R}{2}}}|B|^2*1\leq \ep. \end{equation} \end{thm} \begin{proof} With $$\om^R=R^{-2}\text{Vol}(B_{\f{R}{2}})G^{\f{R}{2}}(\cdot,o)\qquad \text{where }o=(0,f(0)),$$ then $$\int_{B_R}\n \om^R\cdot \n \phi*1=R^{-2}\int_{B_{\f{R}{2}}}\phi*1.$$ Choosing $(\om^R)^2\in H_0^{1,2}(B_R)$ as a test function in (\ref{weak}), we obtain $$\aligned K_0 \int_{B_R}|B|^2(\om^R)^2*1&\leq -\int_{B_R}\n (\om^R)^2\cdot \n v*1=-2\int_{B_R}\n \om^R\cdot \om^R\n(v-v_{+,R})*1\\ &=-2\int_{B_R}\n \om^R\cdot\big(\n(\om^R(v-v_{+,R}))-(v-v_{+,R})\n \om^R\big)*1\\ &\leq -2\int_{B_R}\n \om^R\cdot \n\big(\om^R(v-v_{+,R})\big)*1\\ &=-2R^{-2}\int_{B_{\f{R}{2}}}\om^R(v-v_{+,R})*1. \endaligned$$ By (\ref{Gr1}), there exist positive constants $c_3,c_4$, depending only on $n$, such that $$\aligned 0\leq \om^R\leq c_4 \qquad &\text{on } B_R,\\ \om^R\geq c_3\qquad &\text{on }B_{\f{R}{2}}. \endaligned$$ Hence \begin{equation}\label{tele2}\aligned \int_{B_{\f{R}{2}}}|B|^2*1&\leq -2K_0^{-1}c_3^{-2}c_4R^{-2}\int_{B_{\f{R}{2}}}(v-v_{+,R})*1\\ &\leq c_5(n,\be_0)R^{n-2}(v_{+,R}-\bar{v}_{\f{R}{2}}). \endaligned \end{equation} By Corollary \ref{c1}, $v_{+,R}-\bar{v}_{\f{R}{2}}\leq \de_0^{-1}(v_{+,R}-v_{+,\f{R}{2}})$; substituting it into (\ref{tele2}) yields (\ref{tele}).
For arbitrary $k\in \Bbb{Z}^+$, (\ref{tele}) gives \begin{equation}\aligned \sum_{i=0}^k (2^{-i}R_0)^{2-n}\int_{B_{2^{-i-1}R_0}}|B|^2*1&\leq C_1\sum_{i=0}^k(v_{+,2^{-i}R_0}-v_{+,2^{-i-1}R_0})\\ &=C_1(v_{+,R_0}-v_{+,2^{-k-1}R_0})\\ &\leq C_1(\be_0-1)\leq 2C_1. \endaligned \end{equation} For arbitrary $\ep>0$, we take $$k=[2C_1\ep^{-1}],$$ then we can find $1\leq j\leq k$, such that $$(2^{-j}R_0)^{2-n}\int_{B_{2^{-j-1}R_0}}|B|^2*1\leq \f{2}{k+1}C_1\leq \ep.$$ Since $2^{-j}\geq 2^{-k}\geq 2^{-2C_1\ep^{-1}}=\exp\big[-2(\log 2)C_1\ep^{-1}\big]$, it is sufficient to choose $C_2=2(\log 2)C_1$. \end{proof} \Section{Gauss image shrinking property}{A Gauss image shrinking property} \begin{lem}\label{l3} For arbitrary $a>1$ and $\be_0\in [1,a)$, there exists a positive constant $\ep_1=\ep_1(a,\be_0)$ with the following property. If $P_1,Q\in \grs{n}{m}$ satisfy $v(Q,P_1)\leq b\leq \be_0$, then we can find $P_2\in \grs{n}{m}$, such that $v(P,P_2)\leq a$ for every $P\in \grs{n}{m}$ satisfying $v(P,P_1)\leq b$, and \begin{equation} 1\leq v(Q,P_2)\leq \left\{\begin{array}{cc} 1 & \text{if }b<\sqrt{2}(1+a^{-1})^{-\f{1}{2}}\\ b-\ep_1 & \text{otherwise.} \end{array}\right. \end{equation} \end{lem} \begin{proof} Obviously $w(P,P)=1$ for every $P\in \grs{n}{m}$, which shows $\grs{n}{m}$ is a submanifold in a Euclidean sphere via the Pl\"ucker embedding. Denote by $r(\cdot,\cdot)$ the restriction of the spherical distance on $\grs{n}{m}$, then by spherical geometry, $w=\cos r$ and hence $v=\sec r$. Denote $\a=\arccos(a^{-1})$ and $\be=\arccos(b^{-1})$. Now we put $\g=\a-\be$ and \begin{equation}\label{dis} c=\sec \g=(a^{-1}b^{-1}+(1-a^{-2})^{\f{1}{2}}(1-b^{-2})^{\f{1}{2}})^{-1}. \end{equation} Once $v(P_2,P_1)\leq c$, the triangle inequality implies $$r(P,P_2)\leq r(P,P_1)+r(P_2,P_1)\leq \arccos(b^{-1})+\arccos(c^{-1})=\a$$ for every $P$ satisfying $v(P,P_1)\leq b$, and thus $v(P,P_2)\leq a$ follows.
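The expression (\ref{dis}) for $c$ is simply the angle-subtraction formula: since $\cos\a=a^{-1}$, $\cos\be=b^{-1}$ and $\a,\be\in [0,\f{\pi}{2})$,
$$\cos\g=\cos(\a-\be)=\cos\a\cos\be+\sin\a\sin\be=a^{-1}b^{-1}+(1-a^{-2})^{\f{1}{2}}(1-b^{-2})^{\f{1}{2}},$$
and $c=\sec\g$ is the reciprocal of this quantity.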
If $b<\sqrt{2}(1+a^{-1})^{-\f{1}{2}}$, then a direct calculation shows $\be<\f{\a}{2}$, hence $\g>\be$ and moreover $v(Q,P_1)\leq b<c$. Thereby $P_2=Q$ is the required point. Otherwise $b\geq \sqrt{2}(1+a^{-1})^{-\f{1}{2}}$ and hence $c\leq b$. Obviously one of the following two cases has to occur: \textit{Case I.} $v(Q,P_1)<c$. One can take $P_2=Q$ to ensure $v(\cdot,P_2)\leq a$ whenever $v(\cdot,P_1)\leq b$. In this case \begin{equation}\label{below1} b-v(Q,P_2)=b-1\geq \sqrt{2}(1+a^{-1})^{-\f{1}{2}}-1. \end{equation} \textit{Case II.} $v(Q,P_1)\geq c$. Denote by $\th_1,\cdots,\th_m$ the Jordan angles between $Q$ and $P_1$, and put $L^2=\sum_{1\leq \a\leq m}\th_\a^2$, then as shown in \cite{w}, if we denote the shortest normal geodesic from $Q$ to $P_1$ by $\g$, then the Jordan angles between $Q$ and $\g(t)$ are $\f{\th_1}{L}t,\cdots,\f{\th_m}{L}t$, while the Jordan angles between $\g(t)$ and $P_1$ are $\f{\th_1}{L}(L-t),\cdots,\f{\th_m}{L}(L-t)$. Hence $$\aligned v(Q,\g(t))&=\prod_\a \sec\big(\f{\th_\a}{L}t\big),\\ v(\g(t),P_1)&=\prod_\a \sec\big(\f{\th_\a}{L}(L-t)\big). \endaligned$$ Since $t\mapsto \prod_\a \sec\big(\f{\th_\a}{L}(L-t)\big)$ is a strictly decreasing function, there exists a unique $t_0\in [0,L)$, such that $\prod_\a \sec\big(\f{\th_\a}{L}(L-t_0)\big)=c$. Now we choose $P_2=\g(t_0)$, then $v(P_2,P_1)=c$ and \begin{equation}\label{below2} b-v(Q,P_2)=b-\prod_\a \sec\big(\f{\th_\a}{L}t_0\big). \end{equation} It remains to show $b-\prod_\a \sec\big(\f{\th_\a}{L}t_0\big)$ is bounded from below by a universal positive constant $\ep_2$. Once this holds true, in conjunction with (\ref{below1}) and (\ref{below2}), \begin{equation} \ep_1=\min\{\sqrt{2}(1+a^{-1})^{-\f{1}{2}}-1,\ep_2\} \end{equation} is the required constant. 
$t_0$ can be regarded as a smooth function on $$\Om=\big\{(b,\th_1,\cdots,\th_m)\in \R^{m+1}:\sqrt{2}(1+a^{-1})^{-\f{1}{2}}\leq b\leq \be_0, 0\leq \th_\a\leq \f{\pi}{2},c\leq \prod_\a \sec(\th_\a)\leq b\big\}$$ which is the unique one satisfying $$\prod_\a \sec\big(\f{\th_\a}{L}(L-t_0)\big)=c.$$ (By (\ref{dis}), $c$ can be viewed as a function of $b$.) The smoothness of $t_0$ follows from the implicit function theorem. Therefore $F:\Om\ra \R$ $$(b,\th_1,\cdots,\th_m)\mapsto b-\prod_\a \sec\big(\f{\th_\a}{L}t_0\big)$$ is a smooth function on $\Om$. $t_0<L$ implies $F>0$; then the compactness of $\Om$ gives $\inf_\Om F>0$, and $\ep_2=\inf_\Om F$ is the required constant. \end{proof} \begin{rem} $\ep_1$ depends only on $a$ and $\be_0$, and does not decrease during the iteration process in Theorem \ref{thm4}. \end{rem} \begin{thm}\label{thm3} Let $M=\big\{(x,f(x)):x\in D_{R_0}\subset \R^n\big\}$ be a graph with parallel mean curvature, and $\De_f\leq \be_0$ with $\be_0\in [1,3)$. Assume there exists $P_0\in \grs{n}{m}$, such that $v(\cdot,P_0)\circ \g\leq b$ on $M$ with $1\leq b\leq \be_0$. If $b<\f{\sqrt{6}}{2}$, then for arbitrary $\ep>0$, one can find a constant $\de\in (0,1)$ depending only on $n$, $\be_0$ and $\ep$ such that \begin{equation}\label{es3} 1\leq v(\cdot,P_1)\circ \g\leq 1+\ep\qquad \text{on }B_{\de R_0} \end{equation} for a point $P_1\in \grs{n}{m}$. If $b\geq \f{\sqrt{6}}{2}$, then there are two constants $\de_0\in (0,1)$ and $\ep_1>0$, only depending on $n$ and $\be_0$, such that \begin{equation}\label{es4} 1\leq v(\cdot,P_1)\circ \g\leq b-\f{\ep_1}{2}\qquad \text{on }B_{\de_0R_0} \end{equation} for a point $P_1\in \grs{n}{m}$. \end{thm} \begin{proof} Let $H$ be a smooth function on $\grs{n}{m}$, then $h=H\circ \g$ gives a smooth function on $M$.
Let $\eta$ be a nonnegative smooth function on $M$ with compact support and $\varphi$ be a $H^{1,2}$-function on $M$, then by Stokes' Theorem, $$\aligned 0&=\int_M \text{div}(\varphi\eta\n h)*1\\ &=\int_M \varphi\n\eta\cdot \n h*1+\int_M \eta\n\varphi\cdot \n h*1+\int_M \varphi \eta\De h*1\\ &=\int_M \varphi\n \eta\cdot \n h*1+\int_M\n\varphi\cdot \n(\eta h)*1-\int_M h\n \varphi\cdot \n\eta*1+\int_M \varphi\eta \De h*1. \endaligned$$ Hence \begin{equation}\label{sh0} \int_M \n\varphi\cdot \n(\eta h)*1=-\int_M \varphi\n\eta\cdot \n h*1+\int_M h\n\varphi\cdot \n\eta*1-\int_M \varphi\eta\De h*1. \end{equation} For arbitrary $R\leq R_0$, we take a cut-off function $\eta$ supported in the interior of $B_R$, $0\leq \eta\leq 1$, $\eta\equiv 1$ on $B_{\f{R}{2}}$ and $|\n \eta|\leq c_0 R^{-1}$. For every $\rho\leq \f{R}{8}$, denote by $G^\rho$ the mollified Green function on $B_R$. For arbitrary $p\in B_{\f{R}{4}}$, inserting $\varphi=G^\rho(\cdot,p)$ into (\ref{sh0}) gives \begin{equation}\label{sh}\aligned &\int_{B_R}\n G^\rho(\cdot,p)\cdot \n(\eta h)*1\\ =&-\int_{B_R}G^\rho(\cdot,p)\n\eta\cdot \n h*1+\int_{B_R}h\n G^\rho(\cdot,p)\cdot \n\eta*1 -\int_{B_R}G^\rho(\cdot,p)\eta\De h*1.\endaligned \end{equation} We write (\ref{sh}) as $$I_\rho=II_\rho+III_\rho+IV_\rho.$$ By the definition of mollified Green functions, \begin{equation}\label{sh11} I_\rho=\aint{B_\rho(p)}\eta h=\aint{B_\rho(p)}h. \end{equation} Hence \begin{equation}\label{sh1} \lim_{\rho\ra 0^+}I_\rho=h(p). \end{equation} Noting that $|d\g|^2=|B|^2$, we have $|\n h|\leq |\n^G H||d\g|=|\n^G H||B|$. Here and in the sequel, $\n^G$ denotes the Levi-Civita connection on $\grs{n}{m}$. 
In conjunction with (\ref{Gr3}), we have \begin{equation}\label{sh2}\aligned |II_\rho|&\leq \int_{T_R}G^\rho(\cdot,p)|\n \eta||\n h|*1\\ &\leq \sup_{T_R}G^\rho(\cdot,p)\sup_{T_R}|\n \eta|\sup_{\Bbb{V}}|\n^G H|\int_{B_R}|B|*1\\ &\leq C(n)R^{1-n}\sup_{\Bbb{V}}|\n^G H|\Big(\int_{B_R}|B|^2*1\Big)^{\f{1}{2}}\text{Vol}(B_R)^{\f{1}{2}}\\ &\leq c_1(n)\sup_{\Bbb{V}}|\n^G H|\Big(R^{2-n}\int_{B_R}|B|^2*1\Big)^{\f{1}{2}}. \endaligned \end{equation} Here $T_R\mathop{=}\limits^{\text{def.}}B_R\backslash \bar{B}_{\f{R}{2}}$ and $$\Bbb{V}=\{P\in \grs{n}{m}: v(P,P_0)\leq 3\},$$ which is a compact subset of $\Bbb{U}$. As shown in Section \ref{s1}, there is a one-to-one correspondence between the points in $\Bbb{U}$ and the $n\times m$-matrices. And each $n\times m$-matrix can be viewed as a corresponding vector in $\R^{nm}$. Define $T:\Bbb{U}\ra \R^{nm}$ $$Z\mapsto \big(\det(I+ZZ^T)^\f{1}{2}-1\big)\f{Z}{\big(\tr(ZZ^T)\big)^{\f{1}{2}}}$$ Note that $\big(\tr(ZZ^T)\big)^{\f{1}{2}}=(\sum_{i,\a}Z_{i\a}^2)^{\f{1}{2}}$ equals $|Z|$ when $Z$ is treated as a vector in $\R^{nm}$. Since $t\in [0,+\infty)\mapsto \left[\det\big(I+(tZ)(tZ)^T\big)\right]^{\f{1}{2}}$ is a strictly increasing function and maps $[0,+\infty)$ onto $[1,+\infty)$, $T$ is a diffeomorphism. By (\ref{v}), $|T(Z)|=v(P,P_0)-1$. Via $T$, we can define the mean value of $\g$ on $B_R$ by \begin{equation} \bar\g_R=T^{-1}\Big[\f{\int_{B_R}(T\circ \g)*1}{\text{Vol}(B_R)}\Big]. \end{equation} Note that $T$ maps sublevel sets of $v(\cdot,P_0)$ onto Euclidean balls centered at the origin. Hence the convexity of Euclidean balls gives \begin{equation} v(\bar{\g}_R,P_0)\leq \sup_{B_R}v(\cdot,P_0)\circ \g\leq b. 
\end{equation} The compactness of $\Bbb{V}$ ensures the existence of positive constants $K_1$ and $K_2$, such that for arbitrary $X\in T\Bbb{V}$, $$K_1|X|\leq |T_* X|\leq K_2|X|.$$ The classical Neumann-Poincar\'{e} inequality says $$\int_{D_R}|\phi-\bar{\phi}|^2\leq C(n)R^2\int_{D_R}|D \phi|^2.$$ As shown above, $B_R$ can be regarded as $D_R$ equipped with the metric $g=g_{ij}dx^idx^j$, and the eigenvalues of $(g_{ij})$ are bounded. Hence it is easy to get $$\int_{B_R}|\phi-\bar{\phi}|^2*1\leq C(n)R^2\int_{B_R}|\n \phi|^2*1.$$ Here $\phi$ can be a vector-valued function. Denote by $d_G$ the distance function on $\grs{n}{m}.$ Then, by using the above Neumann-Poincar\'{e} inequality we have \begin{equation}\label{poin}\aligned \int_{B_R}d_G^2(\g,\bar\g_R)*1&\leq K_1^{-2}\int_{B_R}\big|T\circ \g-T(\bar{\g}_R)\big|^2*1\\ &\leq C(n)K_1^{-2}R^2\int_{B_R}\big|d(T\circ \g)\big|^2*1\\ &\leq C(n)K_1^{-2}K_2^2R^2\int_{B_R}|d\g|^2*1\\ &= C(n)K_1^{-2}K_2^2R^2\int_{B_R}|B|^2*1. \endaligned \end{equation} Now we write $$h=H\circ \g=H(\bar{\g}_R)+\big(H\circ \g-H(\bar{\g}_R)\big),$$ then \begin{equation}\label{sh3} III_\rho=H(\bar{\g}_R)\int_{B_R}\n G^\rho(\cdot,p)\cdot \n\eta*1+\int_{T_R}\big(H\circ \g-H(\bar{\g}_R)\big)\n G^\rho(\cdot,p)\cdot \n \eta*1. \end{equation} Similarly to (\ref{sh11}), \begin{equation}\label{sh31} \lim_{\rho\to 0^+}H(\bar{\g}_R)\int_{B_R}\n G^\rho(\cdot,p)\cdot \n \eta*1=\lim_{\rho\to 0^+}H(\bar{\g}_R)\aint{B_\rho(p)}\eta=H(\bar\g_R).
\end{equation} The second term can be controlled by \begin{equation}\label{sh321}\aligned &\int_{T_R}\big(H\circ \g-H(\bar{\g}_R)\big)\n G^\rho(\cdot,p)\cdot \n \eta*1\\ \leq&\sup_{\Bbb{V}}|\n^G H|\sup_{T_R}|\n \eta|\int_{T_R}d_G(\g,\bar{\g}_R)|\n G^\rho(\cdot,p)|*1\\ \leq&c_0R^{-1}\sup_{\Bbb{V}}|\n^G H|\Big(\int_{B_R}d_G^2(\g,\bar{\g}_R)\Big)^{\f{1}{2}} \Big(\int_{T_R}|\n G^\rho(\cdot,p)|^2*1\Big)^{\f{1}{2}} \endaligned \end{equation} Substituting (\ref{Gr2}) and (\ref{poin}) into (\ref{sh321}) yields \begin{equation}\label{sh32} \int_{T_R}\big(H\circ \g-H(\bar{\g}_R)\big)\n G^\rho(\cdot,p)\cdot \n \eta*1\leq c_2(n)\sup_{\Bbb{V}}|\n^G H|\Big(R^{2-n}\int_{B_R}|B|^2*1\Big)^{\f{1}{2}}. \end{equation} From (\ref{sh}), (\ref{sh1}), (\ref{sh2}), (\ref{sh3}), (\ref{sh31}) and (\ref{sh32}), letting $\rho\ra 0$ we arrive at \begin{equation}\label{sh4} \aligned h(p)\leq &H(\bar{\g}_R)+c_3(n)\sup_{\Bbb{V}}|\n^G H|\Big(R^{2-n}\int_{B_R}|B|^2*1\Big)^{\f{1}{2}}\\ &-\limsup_{\rho\ra 0^+}\int_{B_R} G^\rho(\cdot,p)\eta\De h*1. \endaligned \end{equation} for every $p\in B_{\f{R}{4}}$. The compactness of $\grs{n}{m}$ implies the existence of a positive constant $K_3$, such that \begin{equation}\label{sh51} \big|\n^Gv(\cdot,P)\big|\leq K_3\qquad \text{whenever }1\leq v(\cdot,P)\leq 3 \end{equation} for arbitrary $P\in \grs{n}{m}$. Hence by inserting $H=v(\cdot,P)$ into (\ref{sh4}) one can obtain \begin{equation}\label{sh6} \aligned v(\g(p),P)\leq &v(\bar{\g}_R,P)+c_3K_3\Big(R^{2-n}\int_{B_R}|B|^2*1\Big)^{\f{1}{2}}\\ &-\limsup_{\rho\ra 0^+}\int_{B_R} G^\rho(\cdot,p)\eta\De \big(v(\cdot,P)\circ \g\big)*1. \endaligned \end{equation} By Lemma \ref{l3}, if we put $P_1=\bar{\g}_R$, then $1\leq v(\cdot,P_1)\leq 3$ whenever $1\leq v(\cdot,P_0)\leq b$ provided that $b<\f{\sqrt{6}}{2}$, which implies $v(\cdot,\bar{\g}_R)\circ \g$ is a subharmonic function on $B_R$.
Letting $P=\bar{\g}_R$ in (\ref{sh6}) yields \begin{equation}\label{sh7} v(\g(p),\bar{\g}_R)\leq 1+c_3K_3\Big(R^{2-n}\int_{B_R}|B|^2*1\Big)^{\f{1}{2}} \end{equation} for all $p\in B_{\f{R}{4}}$. By Theorem \ref{thm2}, for every $\ep>0$, there is $\de\in (0,1)$, depending only on $n,\be_0$ and $\ep$, such that \begin{equation}\label{sh8} R^{2-n}\int_{B_R}|B|^2*1\leq c_3^{-2}K_3^{-2}\ep^2 \end{equation} for some $R\in [4\de R_0,R_0]$. Substituting (\ref{sh8}) into (\ref{sh7}) gives (\ref{es3}). If $b\geq \f{\sqrt{6}}{2}$, we put $\ep_1=\ep_1(3,\be_0)$ as given in Lemma \ref{l3}. Then Theorem \ref{thm2} enables us to find $R\in [4\de_0R_0,R_0]$ such that \begin{equation}\label{sh53} R^{2-n}\int_{B_R}|B|^2*1\leq \f{1}{4}c_3^{-2}K_3^{-2}\ep_1^2, \end{equation} where $\de_0$ only depends on $n$ and $\be_0$. Applying Lemma \ref{l3}, one can find $P_1\in \grs{n}{m}$, such that \begin{equation}\label{sh52} v(\bar{\g}_R,P_1)\leq b-\ep_1 \end{equation} and $1\leq v(\cdot,P_1)\leq 3$ whenever $1\leq v(\cdot,P_0)\leq b$. Theorem \ref{thm1} ensures that $v(\cdot,P_1)\circ \g$ is a subharmonic function on $B_R$. Taking $P=P_1$ in (\ref{sh6}) yields \begin{equation} v\big(\g(p),P_1\big)\leq v(\bar{\g}_R,P_1)+c_3K_3(\f{1}{4}c_3^{-2}K_3^{-2}\ep_1^2)^{\f{1}{2}}\leq b-\f{\ep_1}{2} \end{equation} for all $p\in B_{\f{R}{4}}$. Here we have used (\ref{sh53}) and (\ref{sh52}). From the above inequality (\ref{es4}) immediately follows.
\end{proof} \Section{Bernstein type results}{Bernstein type results} Now we can start an iteration as in \cite{h-j-w} and \cite{g-j} to get the following estimates: \bigskip \begin{thm}\label{thm4} Let $M=\big\{(x,f(x)):x\in D_{R_0}\subset \R^n\big\}$ be a graph with parallel mean curvature, and $\De_f\leq \be_0$ with $\be_0\in [1,3)$, then for arbitrary $\ep>0$, there exists $\de\in (0,1)$, only depending on $n$, $\be_0$ and $\ep$, not depending on $f$ and $R_0$, such that $$1\leq v(\cdot,\g(o))\circ \g\leq 1+\ep\qquad \text{on }B_{\de R_0},$$ where $o=(0,f(0))$. In particular, if $|Df|(0)=0$, then $$\De_f\leq 1+\ep\qquad \text{on }D_{\de R_0}.$$ \end{thm} \begin{proof} Let $\{\eps_1,\cdots,\eps_{n+m}\}$ be canonical orthonormal basis of $\R^{n+m}$ and put $P_0=\eps_1\w\cdots\w\eps_n$. Then $\De_f\leq \be_0$ implies $v(\cdot,P_0)\leq \be_0$ on $B_{R_0}$. If $\be_0<\f{\sqrt{6}}{2}$, we put $Q_0=P_0$. Otherwise by Theorem \ref{thm3}, one can find $P_1\in \grs{n}{m}$, such that \begin{equation} v(\cdot,P_1)\circ \g \leq \be_0-\ep_1\qquad \text{on }B_{\de_0 R_0} \end{equation} with constants $\de_0$ and $\ep_1$ depending only on $n$ and $\be_0$. Similarly for each $j\geq 1$, if $\be_0-j\ep_1<\f{\sqrt{6}}{2}$, then we put $Q_0=P_j$; otherwise Theorem \ref{thm3} enables us to find $P_{j+1}\in \grs{n}{m}$ satisfying \begin{equation} v(\cdot,P_{j+1})\circ \g\leq \be_0-(j+1)\ep_1\qquad \text{on }B_{\de_0^{j+1}R_0}. \end{equation} Denoting $$k=\big[(3-\f{\sqrt{6}}{2})\ep_1^{-1}\big]+1,$$ then obviously $\be_0-k\ep_1<\f{\sqrt{6}}{2}$. Hence there exists $Q_0\in \grs{n}{m}$, such that \begin{equation} v(\cdot,Q_0)\circ \g\leq b<\f{\sqrt{6}}{2}\qquad \text{on }B_{\de_0^kR_0}. 
\end{equation} Again using Theorem \ref{thm3}, for arbitrary $\ep>0$, there exists $\de_1\in (0,1)$, depending only on $n,\be_0$ and $\ep$, such that \begin{equation} v(\cdot,Q_1)\circ \g\leq \sqrt{2}(1+(1+\ep)^{-1})^{-\f{1}{2}}\qquad \text{on }B_{\de_1\de_0^kR_0} \end{equation} for a point $Q_1\in \grs{n}{m}$. With $r(\cdot,\cdot)$ as in the proof of Lemma \ref{l3}, we have $$r(\cdot,Q_1)\circ \g=\arccos v(\cdot,Q_1)^{-1}\circ \g\leq \f{1}{2}\arccos (1+\ep)^{-1}.$$ Using the triangle inequality we get $$r(\cdot,\g(0))\circ \g\leq r(\cdot,Q_1)\circ \g+r(\g(0),Q_1)\leq \arccos(1+\ep)^{-1}.$$ Thus $v(\cdot,\g(0))\circ \g\leq 1+\ep$ on $B_{\de_1\de_0^k R_0}$. It is sufficient to put $\de=\de_1\de_0^k$. \end{proof} Letting $R_0\ra +\infty$ we can arrive at a Bernstein-type theorem: \bigskip \begin{thm}\label{thm5} Let $z^\a=f^\a(x^1,\cdots,x^n),\ \a=1,\cdots,m$, be smooth functions defined everywhere in $\R^n$ ($n\geq 3,m\geq 2$). Suppose their graph $M=(x,f(x))$ is a submanifold with parallel mean curvature in $\R^{n+m}$. Suppose that there exists a number $\be_0<3$ with \begin{equation} \De_f=\Big[\det\Big(\de_{ij}+\sum_\a \f{\p f^\a}{\p x^i}\f{\p f^\a}{\p x^j}\Big)\Big]^{\f{1}{2}}\leq \be_0.\label{be2} \end{equation} Then $f^1,\cdots,f^m$ have to be affine linear (representing an affine $n$-plane). \end{thm} \noindent{\bf Final remarks} \medskip For any $P_0\in\grs{n}{m}$, denote by $r$ the distance function from $P_0$ in $\grs{n}{m}$. The eigenvalues of $\Hess(r)$ were computed in \cite{j-x}. Then define \begin{equation*} B_{JX}(P_0)=\big\{P\in \grs{n}{m}:\mbox{ sum of any two Jordan angles between }P\mbox{ and }P_0<\f{\pi}{2}\big\} \end{equation*} in the geodesic polar coordinate neighborhood around $P_0$ on the Grassmann manifold.
From (3.2), (3.7) and (3.9) in \cite{j-x} it turns out that $\Hess(r)>0$ on $B_{JX}(P_0).$ Moreover, if $\Si\subset B_{JX}(P_0)$ is a closed subset, then there exists $\be_0<\f{\pi}2$ such that $\th_\a+\th_\be\le \be_0$ on $\Si$ and $$\Hess(r)\ge \cot\be_0\ g,$$ where $g$ is the metric tensor on $\grs{n}{m}.$ Hence, the composition of the distance function with the Gauss map is a strongly subharmonic function on $M$, provided the Gauss image of the submanifold $M$ with parallel mean curvature in $\ir{n+m}$ is contained in $\Si$. The largest sub-level set of $v(\cdot, P_0)$ in $B_{JX}(P_0)$ was studied in \cite{j-x}. Theorem 3.2 in \cite{j-x} shows that $$\max\{w(P, P_0);\; P\in\p B_{JX}(P_0)\}=\f{1}{2}.$$ Therefore, $$\{P\in \grs{n}{m};\; v(P, P_0)<2\}\subset B_{JX}(P_0),$$ and $$\{P\in \grs{n}{m};\; v(P, P_0)= 2\}\bigcap\p B_{JX}(P_0)\neq\emptyset.$$ On the other hand, we can compute directly. From (\ref{He}) we also have $$\aligned \Hess(v(\cdot,P_0))&=\sum_{m+1\leq i\leq n,\a}v\ \om_{i\a}^2+\sum_{\a}(1+2\la_\a^2)v\ \om_{\a\a}^2 +\sum_{\a\neq \be}\la_\a\la_\be v\ \om_{\a\a}\otimes\om_{\be\be}\\ &\qquad\qquad+\sum_{\a<\be}\Big[(1+\la_\a\la_\be)v\Big(\f{\sqrt{2}}{2}(\om_{\a\be} +\om_{\be\a})\Big)^2\\ &\hskip2in+(1-\la_\a\la_\be)v\Big(\f{\sqrt{2}}{2}(\om_{\a\be}-\om_{\be\a})\Big)^2\Big]. \endaligned $$ It follows that $v(\cdot, P_0)$ is strictly convex on $B_{JX}(P_0).$ Moreover, if $\th_\a+\th_\be\le \be_0<\f{\pi}{2},$ then $$\Hess (v(\cdot, P_0))\ge (1-\tan\th_\a\tan\th_\be)v\ g=\f{\cos(\th_\a+\th_\be)}{\cos\th_\a\cos\th_\be}v\,g \ge\cos\be_0 v\,g$$ where $g$ is the metric tensor of $\grs{n}{m}$ and $$\De v(\g(\cdot), P_0)\ge \cos\be_0 v|B|^2\ge \cos\be_0 |B|^2.$$ Now, we define $$\Sigma(P_0)=B_{JX}(P_0)\bigcup\{P\in \grs{n}{m};\; v(P,P_0)<3\}\subset\grs{n}{m}.$$ The function $v(\cdot, P_0)$ is not convex on all of $\Si(P_0)$. But its precomposition with the Gauss map could be a strongly subharmonic function on $M$ under suitable conditions.
Therefore, we could obtain a more general result: Let $M$ be a complete submanifold in $\ir{n+m}$ with parallel mean curvature. If its image under the Gauss map is contained in a closed subset of $\Sigma(P_0)$ for some $P_0\in \grs{n}{m}$, then $M$ has to be an affine linear subspace. \bigskip\bigskip \bibliographystyle{amsplain}
\section{Approach} \label{sec:approach} The following presents the components of \name{} outlined in the previous section in more detail. \subsection{Dynamic Analysis of Assignments} \label{subsec:data_extraction} The goal of this component is to gather name-value pairs from a corpus of programs. Our analysis focuses on assignments because they associate a value with the name of a variable. One option would be to statically analyze all assignments in a program. However, a static analysis could capture only those values where the right-hand side of an assignment is a literal, but would miss many other assignments, e.g., when the right-hand side is a complex expression or function call. In the code corpus used in our evaluation, we find that 90\% of all assignments have a value other than a primitive literal on the right-hand side, i.e., a static analysis could not gather name-value pairs from them. Instead, \name{} uses a dynamic analysis that observes all assignments during the execution of a program. Besides the benefit of capturing assignments that are hard to reason about statically, a dynamic analysis can easily extract additional properties of values, such as the length or shape, which we find to be useful for training an effective model. \subsubsection{Instrumentation and Data Gathering} To dynamically analyze assignments, \name{} instruments and then executes the programs in the corpus. For instrumentation, the analysis traverses the abstract syntax tree of a program and augments all assignments to a variable with a call to a function that records the name of the variable and the assigned value. As runtime values can be arbitrarily complex, the analysis can extract only limited information about a value. We extract four properties of each value, which we found to be useful for training an effective model, but extending the approach to gather additional properties of values is straightforward. 
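To make the instrumentation step concrete, the following is a minimal, self-contained sketch (not \name{}'s actual implementation) that rewrites every assignment to a single variable so that the name and the four value properties are recorded at runtime, using Python's standard \texttt{ast} module; the helper name \texttt{\_record} is hypothetical:

```python
import ast

observed = []  # gathered (name, value repr, type, length, shape) tuples

def _record(name, value):
    # Record the four value properties and pass the value through
    # unchanged, so program behavior is preserved.
    length = len(value) if hasattr(value, "__len__") else None
    shape = getattr(value, "shape", None)  # e.g. for NumPy arrays
    observed.append((name, repr(value), type(value).__name__, length, shape))
    return value

class AssignInstrumenter(ast.NodeTransformer):
    # Rewrite `x = expr` into `x = _record('x', expr)`.
    def visit_Assign(self, node):
        self.generic_visit(node)
        # Only the simple single-variable case is handled in this sketch.
        if len(node.targets) == 1 and isinstance(node.targets[0], ast.Name):
            node.value = ast.Call(
                func=ast.Name(id="_record", ctx=ast.Load()),
                args=[ast.Constant(node.targets[0].id), node.value],
                keywords=[],
            )
        return node

source = "files = ['a.py', 'b.py']\ncount = len(files)"
tree = ast.fix_missing_locations(AssignInstrumenter().visit(ast.parse(source)))
exec(compile(tree, "<instrumented>", "exec"), {"_record": _record})
print(observed)
```

Since \texttt{\_record} returns the assigned value, the instrumented program behaves exactly like the original; note that the second assignment, whose right-hand side is a function call, is captured as well.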
Slightly abusing the term ``pair'' to include the properties extracted for each value, the analysis extracts the following information: \begin{definition}[Name-value pair] A name-value pair is a tuple $(n, v, \tau, l, s)$, where $n$ denotes the variable name on the left-hand side, $v$ is a string representation of the value, $\tau$ represents the type of the value, and $l$ and $s$ represent the length and shape of the value, respectively. \end{definition} The string representation builds upon Python's built-in string conversion, which often yields a meaningful representation because developers commonly use this representation, e.g., for debugging. The type of values is relevant because it allows \name{} to find type-related mistakes, which otherwise remain easily unnoticed in a dynamically typed language. Length here refers to the number of items present in a collection or sequence type value, which is useful, e.g., to enable the model to distinguish empty from non-empty collections. Since some common data types are multidimensional, the \emph{shape} refers to the number of items present in each dimension. The table in Figure~\ref{fig:overview} shows examples of name-value pairs gathered by the analysis. We show in the evaluation how much the extracted properties contribute to the overall effectiveness of the model. \subsubsection{Filtering and Processing of Name-Value Pairs} \label{sec:data filtering} \paragraph{Merge Types} We observe that the gathered data forms a long-tailed distribution of types. One of the reasons is the presence of many similar types, such as Python's dictionary type \emph{dict} and its subclass \emph{defaultdict}. To help the model generalize across similar types, we reduce the overall number of types by merging some of the less frequent types. To this end, we first choose the ten most frequent types present in the dataset.
For the remaining types, we replace any types that are in a subclass relationship with one of the frequent types by the frequent type. For example, consider a name-value pair \textit{(stopwords, frozenset(\{"all", "afterwards", "eleven", ...\}), frozenset, 337, null)}. Because type \emph{frozenset} is not among the ten most frequent types, but type \emph{set} is, we change the name-value pair into \textit{(stopwords, frozenset(\{"all", "afterwards", "eleven", ...\}), set, 337, null)}. \paragraph{Filter Meaningless Names} An underlying assumption of \name{} is that developers use meaningful variable names. Unfortunately, some names are rather cryptic, such as variables called \code{a} or \code{ts\_pd}. Such names help neither our model nor developers in deciding whether a name fits the value it refers to, and hence, we filter likely meaningless names. The first type of filtering considers the length of the variable names and discards any name-value pairs where the name is less than three characters long. The second type of filtering is similar to the first one, except that it targets names composed of multiple subtokens, such as \code{ts\_pd}. We split names at underscores\footnote{\url{https://www.python.org/dev/peps/pep-0008/\#function-and-variable-names}}, and remove any name-value pairs where each subtoken has less than three characters. \subsection{Generation of Negative Examples} \label{subsec:create_negative_examples} The gathered name-value pairs provide numerous examples of names and values that developers typically combine. \name{} uses supervised learning to train a classification model that distinguishes consistent, or positive, name-value pairs from inconsistent, or negative, pairs. Based on the common assumption that most code is correct, we consider the name-value pairs extracted from executions as positive examples. The following presents two techniques for generating negative examples. 
First, we explain a purely random technique, followed by a type-guided technique that we find to yield a more effective training dataset. \subsubsection{Purely Random Generation} Our purely random algorithm for generating negative examples is straightforward. For each name-value pair $(n, v, \tau, l, s)$, the algorithm randomly selects another name-value pair $(n', v', \tau', l', s')$ from the dataset. Then, the algorithm creates a new negative example by combining the name of the original pair and the value of the randomly selected pair, which yields $(n, v', \tau', l', s')$. While simple, the purely random generation of negative examples suffers from the problem of creating many name-value pairs that do fit well together. The underlying root cause is that the distribution of values and types is long-tailed, i.e., the dataset contains many examples of similar values among the most common types. For example, consider a name-value pair gathered from an assignment \code{num = 23}. When creating a negative example, the purely random algorithm may choose a value gathered from another assignment \code{age = 3}. As both values are positive integers, they both fit the name \code{num}, i.e., the supposedly negative example actually is a legitimate name-value pair. Having many such legitimate, negative examples in the training data makes it difficult for a classifier to discriminate between consistent and inconsistent name-value pairs. 
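For illustration, the purely random scheme amounts to a few lines of Python. This is a simplified sketch with made-up dataset entries, not the actual implementation:

```python
import random

def random_negative(pair, dataset, rng=random):
    """Pair the name of `pair` with the value, type, length, and shape
    of a randomly chosen other entry from the dataset."""
    other = rng.choice([p for p in dataset if p is not pair])
    return (pair[0],) + other[1:]

dataset = [
    ("num", "23", "int", None, None),
    ("age", "3", "int", None, None),
    ("name", "'Bob'", "str", 3, None),
]
negative = random_negative(dataset[0], dataset)
```

Because both \code{num} and \code{age} hold positive integers, the generated pair may accidentally be a legitimate combination, which is exactly the weakness that motivates the type-guided variant below.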
\subsubsection{Type-Guided Generation} \label{sec:type-guided} \begin{algorithm}[tb] \caption{Create a negative example} \label{alg:create_a_negative_example} \begin{algorithmic}[1] \Require Name-value pair $(n, v, \tau, l, s)$, dataset $D$ of all pairs \Ensure Negative example $(n, v', \tau', l', s')$ \State $F_{\mathit{global}} \leftarrow$ Compute from $D$ a map from types to their frequency \State $F_{\mathit{name}} \leftarrow$ Compute from $D$ and $n$ a map from types observed with $n$ to their frequency \State $T_{\mathit{name}} \leftarrow \emptyset$ \Comment{Types seen with $n$} \label{line:typesets start} \State $T_{\mathit{name\_infreq}} \leftarrow \emptyset$ \Comment{Types infrequently seen with $n$} \ForAll {($\overline{\tau} \mapsto f) \in F_{\mathit{name}}$} \State $T_{\mathit{name}} \leftarrow \overline{\tau}$ \If{$f \leq$ threshold} \State $T_{\mathit{name\_infreq}} \leftarrow \overline{\tau}$ \EndIf \EndFor \label{line:typesets end} \State $T_{\mathit{all}} \leftarrow \mathit{dom}(F_{\mathit{global}})$ \Comment{All types ever seen} \label{line:neg value start} \State $T_{\mathit{cand}} = (T_{\mathit{all}} \setminus T_{\mathit{name}}) \bigcup T_{\mathit{name\_infreq}}$ \Comment{Types never or infrequently seen with $n$} \State $\tau' \leftarrow \mathit{weightedRandomChoice}(T_{\mathit{cand}}, F_{\mathit{global}})$ \State $v', l', s' \leftarrow \mathit{randomChoice}(D, \tau')$ \label{line:pick val} \State \Return $(n, v', \tau', l', s')$ \label{line:neg value end} \end{algorithmic} \end{algorithm} To mitigate the problem of legitimate, negative examples that the purely random generation algorithm suffers from, we present a type-guided algorithm for creating negative examples. The basic idea is to first select a type that a name is infrequently observed with, and to then select a random value among those observed with the selected type. 
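A condensed Python sketch of this idea follows; it is simplified relative to the full algorithm, and the helper names and the toy dataset are illustrative:

```python
import random
from collections import Counter

def type_guided_negative(pair, dataset, threshold=0.03, rng=random):
    """Pick a type never or only infrequently seen with the pair's name,
    then a random value of that type (simplified sketch)."""
    name = pair[0]
    global_freq = Counter(p[2] for p in dataset)                # F_global
    name_freq = Counter(p[2] for p in dataset if p[0] == name)  # F_name
    total = sum(name_freq.values())
    infrequent = {t for t, f in name_freq.items() if f / total <= threshold}
    # Candidate types: never seen with the name, or only infrequently.
    candidates = sorted((set(global_freq) - set(name_freq)) | infrequent)
    # Weighted random choice following the global type distribution.
    t_neg = rng.choices(candidates,
                        weights=[global_freq[t] for t in candidates])[0]
    other = rng.choice([p for p in dataset if p[2] == t_neg])
    return (name,) + other[1:]

dataset = [
    ("years", "[1990, 1991]", "list", 2, None),
    ("years", "5", "int", None, None),
    ("msg", "'hi'", "str", 2, None),
]
negative = type_guided_negative(dataset[0], dataset)
```

In this toy dataset, \code{str} is the only type never observed with the name \code{years}, so the sketch deterministically combines \code{years} with the string value.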
Algorithm~\ref{alg:create_a_negative_example} shows the type-guided technique for creating a negative example for a given name-value pair. The inputs to the algorithm are a name-value pair $(n, v, \tau, l, s)$ and the complete dataset $D$ of positive name-value pairs. The first two lines of Algorithm~\ref{alg:create_a_negative_example} create two helper maps, which map types to their frequency. The $F_{\mathit{global}}$ map assigns each type to its frequency across the entire dataset $D$, whereas the $F_{\mathit{name}}$ map assigns each type to how often it occurs with the name $n$ of the positive example. Next, lines~\ref{line:typesets start} to~\ref{line:typesets end} populate two sets of types. The first set, $T_{\mathit{name}}$, is populated with all types ever observed with name $n$. The second set, $T_{\mathit{name\_infreq}}$, is populated with all types that are infrequently observed with name $n$. ``Infrequent'' here means that the frequency of the type among all name-value tuples with name $n$ is below some threshold, which is 3\% in the evaluation. The goal of selecting types that are infrequent for a particular name is to create negative examples that are unusual, and hence, likely to be inconsistent. The remainder of the algorithm (lines~\ref{line:neg value start} to~\ref{line:neg value end}) picks a type to be used for the negative example and then creates a negative name-value pair by combining the name $n$ with a value of that type. To this end, the algorithm computes all candidate types, $T_{\mathit{cand}}$, that are either never observed with name $n$ or among the types $T_{\mathit{name\_infreq}}$ that infrequently occur with $n$. The algorithm then randomly selects among the candidate types, using the global type frequency as weights for the random selection. The rationale is to choose a type that is unlikely for the name $n$, while following the overall distribution of types.
The latter is necessary to prevent the model from simply learning to spot unlikely types, but to instead learn to find unlikely combinations of names and values. Once the target type $\tau'$ for the negative example is selected, the algorithm randomly picks a value among all values (line~\ref{line:pick val}) observed with type $\tau'$, and eventually returns a negative example that combines name $n$ with the selected value. \begin{figure} \includegraphics[width=.95\linewidth]{figures/example.pdf} \caption{Steps for creating a negative example.} \label{fig:example_negative} \end{figure} Figure~\ref{fig:example_negative} illustrates the algorithm with an example from our evaluation. The goal is to create a negative example for a name-value pair where the name is \code{years}. In the dataset of positive examples, the name \code{years} occurs with values of types \emph{list}, \emph{ndarray}, \emph{int}, etc., with the frequencies shown in the figure. For example, \code{years} occurs 235 times with a \emph{list} value, but only seven times with a \emph{float} value. Among all types that occur in the dataset, many never occur together with the name \code{years}, e.g., \emph{str} and \emph{bool}. Based on the global frequencies of types that \code{years} never or only infrequently occurs with, the algorithm picks \emph{float} as the target type. Finally, a corresponding \emph{float} value is selected from the dataset, which is \textit{1.8} for the example, and the negative example shown at the bottom of the figure is returned. By default, \name{} uses the type-guided generation of negative examples, and our evaluation compares it with the purely random technique. The generated negative examples are combined with the positive examples in the dataset, and the joint dataset serves as training data for the neural classifier. Due to the automated generation, a generated negative example may coincidentally be identical to an existing positive example.
In practice, the dataset used during the evaluation contains 38 instances of identical positive and negative examples out of 490,332 negative examples. \subsection{Representation as Vectors} Given a dataset of name-value pairs, each labeled either as a positive or a negative example, \name{} trains a neural classification model to distinguish the two kinds of examples. A crucial step is to represent the information in a name-value pair as vectors, which we explain in the following. The approach first represents each of the five components $(n, v, \tau, l, s)$ of a name-value pair as a vector, and then feeds the concatenation of these vectors into the classifier. Figure~\ref{fig:model} shows an overview of the neural architecture. The following describes the vector representation in more detail, followed by a description of the classifier in Section~\ref{sec:training_and_prediction}. \begin{figure} \includegraphics[width=.9\linewidth]{figures/model.pdf} \caption{Architecture of the neural model.} \label{fig:model} \end{figure} \paragraph{Representing Variable Names} To enable \name{} to reason about the meaning of variable names, it maps each name into a vector representation that encodes the semantics of the name. For example, the representation should map the names \code{list\_of\_numbers} and \code{integers} to similar vectors, as both represent similar concepts, but the vector representations of the names \code{age} and \code{file\_name} should differ from the previous vectors. To this end, our approach builds on pre-trained word embeddings, i.e., a learned function that maps each name into a vector. Originally proposed in natural language processing as a means to represent words~\cite{Mikolov2013a,Bojanowski2017}, word embeddings are becoming increasingly popular also on source code~\cite{icse2019,oopsla2018-DeepBugs,Alon2018,DBLP:conf/acl/ChangPCR18,Nguyen2017}, where they represent individual tokens, e.g., variable names.
We build upon FastText~\cite{Bojanowski2017}, a neural word embedding known to represent the semantics of identifiers more accurately than other popular embeddings~\cite{icse2021}. An additional key benefit of FastText is to avoid the out-of-vocabulary problem that other embeddings, e.g., Word2vec~\cite{Mikolov2013a}, suffer from, by splitting each token into n-grams and by computing a separate vector representation for each n-gram. To obtain meaningful embeddings for the Python domain, we pre-train a FastText model on token sequences extracted from the corpus of Python programs used in our evaluation. Formally, the trained FastText model $M$ assigns to each name $n$ a real-valued vector $M(n) \in \mathbb{R}^d$, where $d=100$ in our evaluation. \paragraph{Representing Values} The key challenge for representing the string representations of values as vectors is that there is a wide range of different values, including sequential structures, e.g., in values of types \textit{string}, \textit{ndarray}, \textit{list}, and values without an obvious sequential structure, e.g., primitives and custom objects. The string representations of values may capture many interesting properties, including and beyond the information conveyed by the type of a value. For example, the string representation of an \emph{int} implicitly encodes whether the value is a positive or negative number. Our goal when representing values as vectors is to pick up such intricacies, without manually defining type-specific vector encoders. To this end, \name{} represents each value as a combination of two vector representations, each computed by a neural model that we jointly learn along with the overall classification model. On the one hand, we use a recurrent neural network (RNN) suitable for capturing sequential structures. Specifically, we apply gated recurrent units (GRU) over the sequence of characters, where each character is used as an input at every timestep.
The vector obtained from the hidden state of the last timestep then serves as the representation of the complete sequence. On the other hand, we use a convolutional neural network (CNN) suitable for capturing non-sequential information about the value. Specifically, the approach applies a one-dimensional CNN over the sequence of characters, where the number of channels for the CNN is equal to the number of characters in the string representation of the value, the number of output channels is set to 100, ReLU is the activation function, and a one-dimensional MaxPool layer serves as the final layer. Finally, \name{} concatenates the vectors obtained from the RNN and the CNN into the overall vector representation of the value. \paragraph{Representing Types} To represent the type of a value as a vector, the approach computes a one-hot vector for each type. Each vector has a dimension equal to the number of types present in the dataset. A type is represented by setting an element to one while keeping the remaining elements set to zero. For example, if we have only three types, namely \textit{int, float,} and \textit{list}, in our dataset, then using one-hot encoding, each of them can be represented as \textit{[1, 0, 0], [0, 1, 0]}, and \textit{[0, 0, 1]}, respectively. For the evaluation, we set the maximum number of types to ten. More sophisticated representations of types, e.g., learned jointly with the overall model~\cite{Allamanis2020}, could be integrated into \name{} as part of future work. \paragraph{Representing Length and Shape} Length and shape are similar concepts, and hence, we represent them in a similar fashion. Because the length of a value is theoretically unbounded, we consider ten ranges of lengths and represent each of them with a one-hot vector. Specifically, \name{} considers ranges of length 100, starting from 0 until 1,000.
That is, any length between 0 and 100 will be represented by the same one-hot vector, and likewise any length greater than 1,000 will be represented by another vector. The shape of a value is a tuple of discrete numbers, which we represent similarly to the length, except that we first multiply the elements of the shape tuple. For example, for a value of shape $x,y,z$, we encode $x \cdot y \cdot z$ using the same approach as for the length. For values that do not have a length or shape, we use a special one-hot vector. \subsection{Training and Prediction} \label{sec:training_and_prediction} Once \name{} has obtained a vector representation for each component of a name-value pair, the individual vectors are concatenated into the combined representation of the pair. We then feed this combined representation into a neural classifier that predicts the probability $p$ of the name-value pair to be inconsistent. The classification model consists of two linear layers with a sigmoid activation function at the end. We also add a dropout with probability of 0.5 before each linear layer. We train the model with a batch size of 128, using the Adam~\cite{kingma2014adam} optimizer, for 15 epochs, after which the validation accuracy saturates. During training, the model is trained toward predicting $p=0.0$ for all positive examples and $p=1.0$ for all negative examples. Once trained, we interpret the predicted probability $p$ as the confidence \name{} has in flagging a name-value pair as inconsistent, and the approach reports to the user only pairs with $p$ above some threshold (Section~\ref{sec:eval effectiveness}). \subsection{Heuristic Filtering of Likely False Positives} \label{sec:heuristics} Before reporting name-value pairs that the model predicts as inconsistent to the user, \name{} applies two simple heuristics to prune likely false positives.
The heuristics aim at removing generic and meaningless names that have passed the filtering described in Section~\ref{sec:data filtering}, such as \code{data} and \code{val\_0}. The rationale is that judging whether those names match a specific value is difficult, but the goal of \name{} is to identify name-value pairs that clearly mismatch. The first heuristic removes pairs with names that contain one of the following terms, which are often found in generic names: \code{data}, \code{value}, \code{result}, \code{temp}, \code{tmp}, \code{str}, and \code{sample}. The second heuristic removes pairs with short and cryptic names. To this end, we tokenize names at underscores and then remove pairs with names where at least one subtoken has less than three characters. \section{Conclusion} Using meaningful identifier names is important for code understandability and maintainability. This paper presents \name{}, which addresses the problem of finding inconsistencies that arise due to the use of a misleading name or due to assigning an incorrect value. The key novelty of \name{} is to learn from names and their values assigned at runtime. To reason about the meaning of names and values, the approach embeds them into vector representations that assign similar vectors to similar names and values. Our evaluation with about 500k name-value pairs gathered from real-world Python programs shows that the model is highly accurate, leading to warnings reported with a precision of 80\% and recall of 76\%. \medskip \noindent Our implementation and experimental results are available:\\ \url{https://github.com/sola-st/Nalin} \section*{Acknowledgments} This work was supported by the European Research Council (ERC, grant agreement 851895), and by the German Research Foundation within the ConcSys and Perf4JS projects. 
\section{Evaluation} \label{sec:evaluation} Our evaluation focuses on the following research questions: \begin{itemize} \item RQ1: How effective is the neural model of \name{} in detecting name-value inconsistencies? \item RQ2: Are the inconsistencies that \name{} reports perceived as hard to understand by software developers? \item RQ3: What kinds of inconsistencies does the approach find in real-world code? \item RQ4: How does our approach compare to popular static code analysis tools? \item RQ5: How does \name{} compare to simpler variants of the approach? \end{itemize} \subsection{Experimental Setup} \label{subsec:exprimental_setup} We implement our approach for Python as it is one of the most popular dynamically typed programming languages~\cite{Tiobe}. All experiments are run on a machine with an Intel Xeon E5-2650 CPU with 48 cores, 64GB of memory, and an NVIDIA Tesla P100 GPU. The machine runs Ubuntu 18.04, and we use Python 3.8 for the implementation. The evaluation requires a large-scale, diverse, and realistic dataset of closed (i.e., including all inputs) programs. We choose one million computational notebooks from an existing dataset of Jupyter notebooks scraped from GitHub~\cite{10.1145/3173574.3173606}. The dataset is (i) large-scale because there are many notebooks available, (ii) diverse because they are written by various developers and cover various application domains, (iii) realistic because Jupyter notebooks are one of the most popular ways of writing Python code these days, and (iv) closed because notebooks do not rely on user input. Another option would be to apply \name{} to executions of test suites, which, however, often focus on unusual inputs and, by definition, exercise well-tested and hence likely correct behavior. Excluding some malformed notebooks, we convert 985,865 notebooks into Python scripts using \textit{nbconvert}.
Some of these notebooks contain only text and no code, while for others, the code has syntax errors, or the code is very short and does not perform any assignments. All of this decreases the number of Python files that \name{} can instrument, and we finally obtain 598,321 instrumented files. The instrumentation takes approximately two hours. When gathering name-value pairs, we face general challenges related to reproducing Jupyter notebooks~\cite{wang2020assessing}. First, even with the installation of the 100 most popular Python packages, unresolved dependencies result in crashes during some executions. Second, some Python scripts read inputs from files, e.g., a dataset for training a machine learning model, which may not be locally available. Considering all notebooks that we can successfully execute despite these obstacles, \name{} gathers a total of 947,702 name-value pairs, of which 500,332 remain after the filtering described in Section~\ref{sec:data filtering}. The extracted pairs come from 106,652 Python files with a total of 7,231,218 lines of non-comment, non-blank Python code. Running the instrumented files to extract name-value pairs takes approximately 48 hours. Before running any experiments with the model, we sample 10,000 name-value pairs as a held-out \textit{test dataset}. Unless mentioned otherwise, all reported results are on this test dataset. On the remaining 490,332 name-value pairs, we perform an 80-20 split into \textit{training} and \textit{validation} data. For each name-value pair present in the training, validation, and test datasets, we create a corresponding negative example, which takes two hours in total. The total number of data points used to train the \name{} model hence is about 780k. Training takes an average of 190 seconds per epoch and once trained, prediction on the entire test dataset takes about 15 seconds. We find the name-value pairs to consist of a diverse set of values and types. 
There are 99.8k unique names, i.e., each name appears, on average, about 10 times. The top-5 frequent types are \emph{list}, \emph{ndarray}, \emph{str}, \emph{int}, and \emph{float}. The presence of a large number of collection types, such as \emph{list} and \emph{ndarray}, which usually are not fully initialized as literals, shows that extracting values at run-time is worthwhile. \subsection{RQ1: Effectiveness of the Trained Model} \label{sec:eval effectiveness} We measure the effectiveness of \name{}'s model by applying the trained model to the held-out test dataset. The output of the model can be interpreted as a confidence score that indicates how likely the model believes a given name-value pair to be inconsistent. We consider all name-value pairs $P_{\mathit{warning}}$ with a score above some threshold as a warning, and then measure precision and recall of the model w.r.t.\ the inconsistency labels in the dataset ($P_{\mathit{inconsistent}}$ are pairs labeled as inconsistent): \begin{center} $\mathit{precision} = \frac{|P_{\mathit{warning}} \cap P_{\mathit{inconsistent}}|}{|P_{\mathit{warning}}|}$\\ $\mathit{recall} = \frac{|P_{\mathit{warning}} \cap P_{\mathit{inconsistent}}|}{|P_{\mathit{inconsistent}}|}$ \end{center} We also compute the F1 score, which is the harmonic mean of precision and recall. \begin{figure} \includegraphics[width=.9\linewidth]{plots/precision_recall_over_threshold.pdf} \caption{Precision, recall, and F1 score with different thresholds for reporting warnings.} \label{fig:p_r_curve} \end{figure} Figure~\ref{fig:p_r_curve} shows the results for different thresholds for reporting a prediction as a warning. The results illustrate the usual precision-recall tradeoff, where a user can reduce the risk of false positive warnings at the cost of finding fewer inconsistencies. The model achieves the highest F1 score of 89\% at a threshold of 0.4, with a precision of 88\% and a recall of 91\%.
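The metrics above reduce to simple set computations; the following sketch, with made-up pair identifiers, illustrates how precision, recall, and F1 relate to the sets of warnings and ground-truth labels:

```python
def warning_scores(warnings, inconsistent):
    """Precision, recall, and F1 of reported warnings w.r.t. ground-truth labels."""
    true_positives = len(warnings & inconsistent)
    precision = true_positives / len(warnings)
    recall = true_positives / len(inconsistent)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

warnings = {"pair1", "pair2", "pair3", "pair4"}
inconsistent = {"pair1", "pair2", "pair3", "pair5"}
precision, recall, f1 = warning_scores(warnings, inconsistent)  # 0.75 each
```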
Unless otherwise mentioned, we use a threshold of 0.5 as the default, which gives 87\% F1 score. Out of 8,858 files in the held-out test set, 336 (3.8\%) have at least one warning reported by \name{}. \begin{finding} The model effectively identifies inconsistent name-value pairs, with a maximum F1 score of 89\%. \end{finding} \subsection{RQ2: Study with Developers} To answer the question of how well \name{}'s warnings match name-value pairs that developers perceive as hard to understand, we perform a study with eleven software developers. The participants are four PhD students and seven master-level students, all of whom regularly develop software, and none of whom overlap with the authors of this paper. During the study, each participant is shown 40 name-value pairs and asked to assess each pair regarding its understandability. The participants provide their assessment on a five-point Likert scale ranging from ``hard''~(1) to ``easy''~(5), where ``hard'' means that the name and the value are inconsistent, making it hard to understand and maintain the code. The 40 name-value pairs consist of 20 pairs that are randomly selected from all warnings \name{} reports as inconsistent with a confidence above 80\% and of 20 randomly selected pairs that the approach does not warn about. For each pair, the participants are shown the name of the variable, the value that \name{} deems inconsistent with this name, and the type of the value. In total, the study hence involves 440 developer ratings. Because what constitutes a meaningful variable name is, to some extent, subjective, we expect some variance in the ratings provided by the participants. To quantify this variance, we compute the inter-rater agreement using Krippendorff's alpha, which yields an agreement of 56\%. That is, developers agree with a medium to high degree on whether a name-value pair is easy to understand. Before providing quantitative results, we discuss a few representative examples.
Among the name-value pairs without a warning is a variable called \code{DATA\_URL} that stores a string containing a URL. This pair is consistently rated as easy to understand, with a mean ranking of 5.0. Among the pairs that \name{} reports as inconsistent are a variable \code{password\_text} storing an integer value \code{0}, which most participants consider as hard to understand (mean rating: 1.54). Another pair that the approach warns about is a variable called \code{path} that stores an empty list. The study participants are rather undecided about this example, with a mean rating of 2.72. \begin{figure} \begin{subfigure}[b]{\linewidth} \centering \begin{tabular}{@{}l|rr@{}} \toprule & \multicolumn{2}{c}{\name{}'s prediction} \\ \cmidrule{2-3} Developer & Consistent & Inconsistent \\ assessment & ($P_{\mathit{noWarning}}$) & ($P_{\mathit{warning}}$) \\ \midrule Easy to understand ($P_{\mathit{easy}}$) & 15 & 4 \\ Hard to understand ($P_{\mathit{hard}}$) & 5 & 16 \\ \bottomrule \end{tabular} \caption{\name{} predictions of inconsistencies vs.\ developer-perceived understandability.} \label{fig:per pair comparison} \end{subfigure} \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=.9\linewidth]{plots/rating_line} \caption{Understandability ratings for name-value pairs with and without warnings by \name{}.} \label{fig:all ratings} \end{subfigure} \caption{Results from user study.} \label{fig:user study} \end{figure} \begin{table*}[t] \centering \caption{Examples of warnings produced by~\name{}.} \label{tab:examples_of_warnings_category} \small \setlength{\tabcolsep}{7pt} \begin{tabular}{@{}p{23em}p{4em}p{8em}p{23em}@{}} \toprule Code Example & Category & Run-time value & Comment \\ \midrule \vspace{-1.2em} \begin{lstlisting}[escapechar=§] name = 'Philip K. Dick' ... 
§\colorbox{hlyellow}{\makebox(30,3){\strut\textcolor{black}{name = \textcolor{orange1}{2.5}}}}§ if type(name) == str: print('yes') \end{lstlisting} & \vspace{-1em} Misleading name & \code{2.5} & \vspace{-1em} A variable called \code{name} is typically holding a string, but here stores a float value. \\ \midrule \vspace{-1.2em} \begin{lstlisting}[escapechar=§] §\colorbox{hlyellow}{\makebox(130,3){\strut\textcolor{black}{file = os.path.exists(\textcolor{green}{'reference.csv'})}}}§ if file == False: print('Warning: ...') \end{lstlisting} & \vspace{-1em} Misleading name & \code{False} & \vspace{-1em} The name \code{file} suggests that the variable stores either a file handle or a file name, but it here stores a boolean. \\ \midrule \vspace{-1.2em} \begin{lstlisting}[escapechar=§] def Custom(information): §\colorbox{hlyellow}{\makebox(155,3){\strut\textcolor{black}{prob = \textcolor{cyan}{get\_betraying\_probability}(information)}}}§ if(prob > 1 / 2): return D elif(prob == 1 / 2): return choice([D, C]) else: return C \end{lstlisting} & \vspace{-1em} Incorrect value & \code{"Corporate"} & \vspace{-1em} Assigning a string to a variable called \code{prob} is unusual, because \code{prob} usually refers to a probability. The value is incorrect and leads to a crash in the next line because comparing a string and a float causes a type error. \\ \midrule \vspace{-1.2em} \begin{lstlisting}[escapechar=§] §\colorbox{hlyellow}{\makebox(179,3){\strut\textcolor{black}{dwarF = \textcolor{green}{'/Users/iayork/Downloads/dwar\_2013\_2015.txt'}}}}§ dwar = pd.read_csv(dwarF, sep=' ', header=None) \end{lstlisting} & \vspace{-1em} False positive & \code{"/Users/.."} & \vspace{-1em} The value is a string that describes file path, which fits the name, where the \code{F} supposedly means ``file''. The model reports this false positive because it fails to understand the abbreviation. 
\\ \bottomrule \end{tabular} \end{table*} The main question of the user study is to what extent \name{} pinpoints name-value pairs that developers also consider to be hard to understand. We address this question in two ways, first by computing precision and recall of \name{} w.r.t. the developer ratings, and then by comparing the ratings for warnings and non-warnings. \paragraph{Precision and Recall w.r.t. Developer Ratings} We assign each of the 40 name-value pairs into two sets: On the one hand, a pair is in $P_{\mathit{hard}}$ if the mean rating assigned by the developers is less than three and in $P_{\mathit{easy}}$ otherwise. On the other hand, a pair is in $P_{\mathit{warning}}$ if \name{} flags it as an inconsistency and in $P_{\mathit{noWarning}}$ otherwise. Table~\ref{fig:per pair comparison} shows the intersections between these sets. For example, we see that 16 of the pairs that \name{} warns about, but only 5 of the pairs without a warning, are considered to be hard to understand. We compute precision and recall as follows: \vspace{-.1em} \begin{center} $\mathit{precision} = \frac{|P_{\mathit{warning}} \cap P_{\mathit{hard}}|}{|P_{\mathit{warning}}|} = \frac{16}{20} = 80\%$\\ $\mathit{recall} = \frac{|P_{\mathit{warning}} \cap P_{\mathit{hard}}|}{|P_{\mathit{hard}}|} = \frac{16}{21} = 76\%$ \end{center} \paragraph{Ratings for Warnings vs. Non-Warnings} In addition to the pair-based metrics above, we also globally compare the ratings for pairs with and without warnings. The goal is to understand whether \name{} is effective at distinguishing between name-value pairs that developers perceive as easy and hard to understand. To this end, consider two sets of ratings: ratings $R_{\mathit{warning}}$ for name-value pairs that \name{} reports as inconsistent, and ratings $R_{\mathit{noWarning}}$ for other name-value pairs. 
Figure~\ref{fig:all ratings} compares the two sets of ratings with each other, showing how many ratings there are for each point on the 5-point Likert scale. The results show a clear difference between the two sets: ``easy'' is the most common rating in $R_{\mathit{noWarning}}$, whereas the majority of ratings in $R_{\mathit{warning}}$ is either ``relatively hard'' or ``hard''. We also statistically compare $R_{\mathit{warning}}$ and $R_{\mathit{noWarning}}$ using a Mann-Whitney U-test, which shows the two sets of ratings to be extremely likely to be sampled from different populations (with a p-value of less than 0.1\%). \begin{finding} Developers mostly agree with the (in)consistency predictions by \name{}. In particular, they assess 80\% of the name-value pairs that the approach warns about as hard to maintain and understand. \end{finding} \subsection{RQ3: Kinds of Inconsistencies in Real-World Code} \label{subsec:rq2-kinds_of_inconsistencies} To better understand the kinds of name-value inconsistencies detected in real-world code, we inspect name-value pairs in the test datasets that appear as such in the code, but that are classified as inconsistent by the model. When using \name{} to search for previously unknown issues, these name-value pairs will be reported as warnings. We inspect the top-30 predictions, sorted by the probability score provided by the model, and classify each warning into one of three categories: \begin{itemize} \item \textit{Misleading name}. Name-value pairs where the name clearly fails to match the value it refers to. These cases do not lead to wrong program behavior, but should be fixed to increase the readability and maintainability of the code. \item \textit{Incorrect value}. Name-value pairs where the mismatch between a name and a value is due to an incorrect value being assigned. These cases cause unexpected program behavior, e.g., a program crash or incorrect output. \item \textit{False positive}.
Name-value pairs that are consistent with each other, and which ideally would not be reported as a warning. \end{itemize} The inspection shows that 21 of the warnings correspond to misleading names, 2 are incorrect values, and 7 are false positives. That is, the majority of the reported inconsistencies are due to the name, whereas only a few are caused by an incorrect value being assigned to a meaningful name. This result is expected because incorrect behavior is easier to detect, e.g., via testing, than misleading names, for which currently few tools exist. The fact that 23 out of 30 warnings (77\%) are true positives is also consistent with the developer study in RQ2. Table~\ref{tab:examples_of_warnings_category} shows representative examples of warnings produced by \name{}. The first two examples show misleading names. For example, it is highly unusual to assign a number to a variable called~\code{name} or to assign a boolean to a variable called \code{file}. To the best of our knowledge, these misleading names do not cause unexpected behavior, but developers may still want to fix them to increase the readability and maintainability of the code. In the third example, \name{} produces a warning about the assignment on line 2. The value assigned during the execution is a string \code{'Cooperate'}. Due to the string assignment, the code on line 3 crashes since the operator \code{>} does not support a comparison between a string and a float. \name{} is correct in predicting this warning because the variable name \code{prob} is typically used to refer to a probability, not to a string like \code{'Cooperate'}. The final example is a false positive, which illustrates one of the most common causes of false positives seen during our inspection, namely short (and somewhat cryptic) names for which the model fails to understand the meaning.
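The crash in the third example is easy to reproduce in isolation: Python 3 refuses to order values of unrelated types, so comparing the string with a float raises a \code{TypeError}:

```python
prob = 'Cooperate'  # the incorrect, string-typed value from the example

try:
    prob > 1 / 2  # the comparison from line 3 of the example
    crashed = False
except TypeError:
    # Python 3: "'>' not supported between instances of 'str' and 'float'"
    crashed = True
```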
\begin{finding} The majority of inconsistencies detected in real-world code are due to the name in a name-value pair being misleading, and occasionally also due to incorrect values. \end{finding} \subsection{RQ4: Comparison with Previous Bug Detection Approaches} We compare \name{} to three state-of-the-art static analysis tools aimed at finding bugs and other kinds of noteworthy issues: (i) \textit{pyre}, a static type checker for Python that infers types and uses available type annotations. We compare with pyre because many of the inconsistencies that \name{} reports are type-related, and hence, might also be spotted by a type checker. (ii) \textit{flake8}, a Python linter that warns about commonly made mistakes. We compare with flake8 because it is widely used and because linters share the goal of improving the quality of code. (iii) \textit{DeepBugs}~\cite{oopsla2018-DeepBugs}, a learning-based bug detection technique. We compare with DeepBugs because it also aims to find name-related bugs using machine learning, but using static instead of dynamic analysis. We run pyre and flake8 using their default configurations. For DeepBugs, we install the ``DeepBugs for Python'' plugin from the marketplace of the PyCharm IDE. We apply each of the three approaches to the 30 files where \name{} has produced a warning and which have been manually inspected (RQ3). Namer~\cite{He2021}, a recent technique for finding name-related coding issues through a combination of static analysis, pattern mining, and supervised learning, would be another candidate for comparison, but neither the implementation nor the experimental results are publicly available.
\begin{table}[t] \centering \caption{Comparison with existing static bug detectors.} \label{tab:comparison_with_static_analysis_and_deepbugs} \small \setlength{\tabcolsep}{18pt} \begin{tabular}{@{}lrr@{}} \toprule Approach & Warnings & Warnings common with \name{} \\ \midrule pyre & 54 & 1/30 \\ flake8 & 1,247 & 0/30 \\ DeepBugs & 151 & 0/30 \\ \bottomrule \end{tabular} \end{table} Table~\ref{tab:comparison_with_static_analysis_and_deepbugs} shows the number of warnings reported by the existing tools and how many of these warnings overlap with those reported by \name{}. We find that, except for one warning reported by pyre, none matches the 30 manually inspected warnings from \name{}. The matching warning is a misleading name, shown on the first row of Table~\ref{tab:examples_of_warnings_category}. The pyre type checker reports this as an ``Incompatible variable type'' because in the same file, the variable \scode{name} is first assigned a string \scode{'Philip K. Dick'} and later assigned a float value \scode{2.5}. The 1,247 warnings produced by flake8 are mostly about coding style, e.g., ``missing white space'' and ``whitespace after '(' ''. The warnings reported by DeepBugs include possibly wrong operator usages and incorrectly ordered function arguments, but none matches the warnings reported by \name{}. \begin{finding} \name{} is complementary to both traditional static analysis-based tools and to a state-of-the-art learning-based bug detector aimed at name-related bugs. \end{finding} \subsection{RQ5: Comparison with Variants of the Approach} \subsubsection{Type-Guided vs.\ Purely Random Negative Examples} The following compares the two algorithms for generating negative examples described in Section~\ref{subsec:create_negative_examples}. Following the setup from RQ1, we find that the purely random generation reduces both precision and recall, leading to a maximum F1 score of 0.82, compared to 0.89 with the type-guided approach.
Manually inspecting the top-30 reported warnings as in RQ3, we find 21 false positives, nine misleading names, and zero incorrect values, which clearly reduces the precision compared to the type-guided generation approach. These results confirm the motivation for the type-guided algorithm (Section~\ref{sec:type-guided}) and show that it outperforms a simpler baseline. \subsubsection{Ablation Study} \begin{figure} \includegraphics[width=.48\textwidth]{plots/ablation_study_results.pdf} \caption{Results of the ablation study.} \label{fig:ablation_study_results} \end{figure} We perform an ablation study to measure the importance of the different components of a name-value pair fed into the model. To this end, we set the vector representation of individual components to zero during training and prediction, and then measure the effect on the F1 score of the model. Figure~\ref{fig:ablation_study_results} shows the results, where the vertical axis shows the F1 score obtained on the validation dataset at each epoch during training. Each line in Figure~\ref{fig:ablation_study_results} shows the F1 score obtained while training the model with that particular feature set to zero. For example, the green line (``No Shape'') is for a model that does not use the shape of a value, and the blue line (``all'') is for a model that uses all components of a name-value pair. We find that the most important inputs to the model are the variable name and the string representation of the value. Removing the length or the type of a value does not significantly decrease the model's effectiveness. The reason is that these properties can often be inferred from other inputs given to the model, e.g., by deriving the type from the string representation of a value. We confirm this explanation by removing both the type and the string representation of a value, which yields an F1 score similar to the model trained by removing only values.
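Conceptually, the ablation can be implemented by zeroing the vector slots of a component before the concatenated input reaches the model. A minimal sketch with hypothetical component names and vector sizes:

```python
def build_input(components, ablate=()):
    """Concatenate per-component vectors, zeroing the ablated ones."""
    vector = []
    for name, values in components:
        if name in ablate:
            values = [0.0] * len(values)  # keep the slot, drop the signal
        vector.extend(values)
    return vector

# Toy feature vectors for one name-value pair (illustrative values).
pair = [
    ("name", [0.3, -0.1]),   # embedding of the variable name
    ("value", [0.7, 0.2]),   # encoding of the string representation
    ("type", [1.0, 0.0]),    # encoding of the value's type
]
full = build_input(pair)
no_type = build_input(pair, ablate={"type"})
```

Keeping the zeroed slot, rather than dropping it, preserves the input dimensionality so the same model architecture can be trained for every ablation.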
\begin{finding} Each component of the approach contributes to the overall effectiveness, but there is some redundancy in the properties of values given to the model. \end{finding} \section{Implementation} \section{Introduction} Variable names are a means to convey the intended semantics of code. Because meaningful names are crucial for the understandability and maintainability of code~\cite{Butler2010}, developers generally try to name a variable according to the value(s) it refers to. Names are particularly relevant in dynamically typed languages, e.g., Python and JavaScript, where the lack of types forces developers to rely on names, e.g., to understand what types of values a variable stores. Unfortunately, the name and the value of a variable sometimes do not match, which we refer to as a \emph{name-value inconsistency}. A common reason is a \emph{misleading name} that is bound to a correct value. Because such names make code unnecessarily hard to understand and maintain, developers may want to replace them with more meaningful names. Another possible reason is that a meaningful name refers to an \emph{incorrect value}. Because such values may propagate through the program and cause unexpected behavior, developers should fix the corresponding code. The following illustrates the problem with two motivating examples, both found during our evaluation on real-world Python code~\cite{10.1145/3173574.3173606}. As an example of a misleading name consider the following code: \vspace{-.1em} \begin{lstlisting} log_file = glob.glob('/var/www/some_file.csv') \end{lstlisting} \vspace{-.1em} The right-hand side of the assignment yields a list of file names, which is inconsistent with the name of the variable it gets assigned to, because \code{log\_file} suggests a single file name. The code is even more confusing since this specific call to \code{glob} will return a list with at most one file name.
That is, a cursory reader of the code may incorrectly assume this file name to be stored in the \code{log\_file} variable, whereas it is actually wrapped into a list. To clarify the meaning of the variable, it could be named, e.g., \code{log\_files} or \code{log\_file\_list}, or the developer could adapt the right-hand side of the assignment by retrieving the first (and only) element from the list. We find misleading names to be the most common reason for name-value inconsistencies. Less common, but perhaps even worse, are name-value inconsistencies caused by an incorrect value, as in the following example: \vspace{-.5em} \begin{lstlisting} train_size = 0.9 * iris.data.shape[0] test_size = iris.data.shape[0] - train_size train_data = data[0:train_size] \end{lstlisting} \vspace{.5em} The code tries to divide a dataset into training and test sets. Names like \code{train\_size} are usually bound to non-negative integer values. However, the above code assigns the value 135.0 to the \code{train\_size} variable, i.e., a floating point value. Unfortunately, this value causes the code to crash at the third line, where \code{train\_size} is used as an index to slice the dataset, but indices for slicing must be integers. While the root cause and the manifestation of the crash are close to each other in this simple example, in general, incorrect values may propagate through a program and cause hard to understand misbehavior. Finding name-value inconsistencies is difficult because it requires both understanding the meaning of names and realizing that a value that occurs at runtime does not match the usual meaning of a name. As a result, name-value inconsistencies so far are found mostly during some manual activity. For example, a developer may point out a misleading name during code review, or a developer may stumble across an incorrect value during debugging. Because developer time is precious, tool support for finding name-value inconsistencies is highly desirable. 
This paper presents \underline{\name{}}, an approach for detecting \underline{na}me-va\underline{l}ue \underline{in}consistencies automatically. The approach combines dynamic program analysis with deep learning. At first, a dynamic analysis keeps track of assignments during an execution and gathers pairs of names and values the names are bound to. Then, a neural model predicts whether a name and a value fit together. When the dynamic analysis observes a name-value pair that the neural model predicts to not fit together, then the approach reports a warning about a likely name-value inconsistency. While simple at its core, realizing the \name{} idea involves four key challenges: \begin{itemize}[leftmargin=1.7em] \item[C1] Understanding the semantics of names and how developers typically use them. The approach addresses this challenge through a learned token embedding that represents semantic similarities of names in a vector space. For example, the embedding maps the names \code{train\_size}, \code{size}, and \code{len} to similar vectors, as they refer to similar concepts. \item[C2] Understanding the meaning of values and how developers typically use them. The approach addresses this challenge by recording runtime values and by encoding them into a vector representation based on several properties of values. The properties include a string representation of the value, its type, and type-specific features, such as the shape of multi-dimensional numeric values. \item[C3] Pinpointing unusual name-value pairs. We formulate this problem as a binary classification task and train a neural model that predicts whether a name and a value match. To the best of our knowledge, this work is the first to detect coding issues through neural classification over names and runtime values. \item[C4] Obtaining a dataset for training an effective model. 
The approach addresses this challenge by considering observed name-value pairs as correct examples, and by creating incorrect examples by combining names and values through a statistical, type-guided sampling that is likely to yield an incorrect pair. \end{itemize} Our work relates to learning-based bug detectors~\cite{oopsla2018-DeepBugs,Allamanis2018b,Li2019,Dinella2020,Wang2020a}, which share the idea to classify code as correct or incorrect. However, we are the first to focus on name-value inconsistencies, whereas prior work targets other kinds of problems. \name{} also relates to learned models that predict missing identifier names~\cite{Raychev2015,Context2Name,David2020}. Our work differs by analyzing code with names supposed to be meaningful, instead of targeting obfuscated or compiled code. Finally, there are static analysis-based approaches for finding inconsistent method names~\cite{Nguyen2020,Liu2019,Host2009} and other naming issues~\cite{He2021}. A key difference to all the above work is that \name{} is based on dynamic instead of static analysis, allowing it to learn from runtime values, which static analysis can only approximate. One of the few existing approaches that learn from runtime behavior~\cite{Wang2020} aims at finding vector representations for larger pieces of code, but cannot pinpoint name-value inconsistencies. We train \name{} on 780k name-value pairs and evaluate it on 10k previously unseen pairs from real-world Python code extracted from Jupyter notebooks. The model effectively distinguishes consistent from inconsistent examples, with an F1 score of 0.89. Comparing the classifications by \name{} to a ground truth gathered in a study with eleven developers shows that the reported inconsistencies have a precision of 80\% and a recall of 76\%. Most of the inconsistencies detected in real-world code are due to misleading names, but there also are some inconsistencies caused by incorrect values. 
Finally, we show that the approach complements state-of-the-art static analysis-based tools that warn about frequently made mistakes, type-related issues, and name-related bugs. In summary, this paper contributes the following: \begin{itemize} \item An automatic technique to detect name-value inconsistencies. \item The first approach to find coding issues through neural machine learning on names and runtime behavior. \item A type-guided generation of negative examples that improves upon a purely random approach. \item Empirical evidence that the approach effectively identifies name-value pairs that developers perceive as detrimental to the understandability and maintainability of the code. \end{itemize} \section{Related Work} \paragraph{Detecting Poor Names} The importance of meaningful names during programming has been studied and established~\cite{Lawrie2006,Butler2010}. There are several techniques for finding poorly named program elements, e.g., based on pre-defined rules~\cite{abebe2009lexicon}, by comparing method names against method bodies~\cite{Host2009}, and through a type inference-like analysis of names and their occurrences~\cite{Lawall2010}. To improve identifier names, rule-based expansion~\cite{lawrie2011expanding}, n-gram models of code~\cite{Allamanis2014}, and learning-based techniques that compare method bodies and method names have been proposed~\cite{Liu2019,Nguyen2020}. Namer~\cite{He2021} combines static analysis, pattern mining, and supervised learning to find name-related coding issues. Many of the above approaches focus on method names, whereas we target variables. Moreover, none of the existing approaches exploits dynamically observed values. \paragraph{Predicting Names} When names are completely missing, e.g., in minified, compiled, or obfuscated code, learned models can predict them~\cite{Raychev2015,Vasilescu2017,Context2Name,Lacomis2019}. 
Another line of work predicts method names given the body of a method~\cite{Allamanis2015,Allamanis2016,Alon2018}, which beyond being potentially useful for developers serves as a pseudo-task to force a model to summarize code in a semantics-preserving way. \name{} differs by considering values observed at runtime, and not only static source code, and by checking names for inconsistencies with the values they refer to, instead of predicting names from scratch. \paragraph{Name-based Program Analysis} DeepBugs introduced learning-based and name-based bug detection~\cite{oopsla2018-DeepBugs}, which differs from \name{} by being purely static and by focusing on different kinds of errors. The perhaps most popular kind of name-based analysis is probabilistic type inference~\cite{Xu2016}, often using deep neural network models~\cite{Hellendoorn2018,icse2019,fse2020,Allamanis2020,Wei2020} that reason about the to-be-typed code. RefiNum uses names to identify conceptual types, which further refine the usual programming language types~\cite{Dash2018}. SemSeed exploits semantic relations between names to inject realistic bugs~\cite{fse2021}. All of the above work is based on the observation that the implicit information embedded in identifiers is useful for program analyses. Our work is the first to exploit this observation to find name-value inconsistencies. \paragraph{Natural Language vs.\ Code} Beyond natural language in the form of identifiers, comments and documentation associated with code are another valuable source of information. iComment~\cite{tan2007icomment} and tComment~\cite{tan2012tcomment} use this information to detect inconsistencies between comments and code. Our work differs by focusing on variable names instead of comments, by comparing the natural language artifact against runtime values instead of static code, and by using a learning-based approach. 
Another line of work uses natural language documentation to infer specifications of code~\cite{pandita2012inferring,Motwani2019,goffi2016automatic}, which is complementary to our work. \paragraph{Learning on Code} In addition to the work discussed above, machine learning on code is receiving significant interest recently~\cite{NeuralSoftwareAnalysis}. Embeddings of code are one important topic, e.g., using AST paths~\cite{Alon2019}, control flow graphs~\cite{Wang2020a}, ASTs~\cite{Zhang2019}, or a combination of token sequences and a graph representation of code~\cite{Hellendoorn2020}. Our encoder of variable names could benefit from being combined with an encoding of the code surrounding the assignment using those ideas. Other work models code changes and then makes predictions about them~\cite{Hoang2020,Brody2020}, or trains models for program repair~\cite{Gupta2017,Dinella2020}, code completion~\cite{DBLP:journals/corr/abs-2004-05249,kim-arxiv-2020,Alon2019a}, and code search~\cite{Gu2018,Sachdev2018}. \paragraph{Learning from Executions} Despite the recent surge of work on learning on code, learning on data gathered during executions is a relatively unexplored area. One model embeds student programs based on dynamically observed input-output relations~\cite{Piech2015}. Wang et al.'s ``blended'' code embedding learning~\cite{Wang2020} combines runtime traces, which include values of multiple variables, and static code elements to learn a distributed vector representation of code. Beyond code embedding, BlankIt~\cite{Porter2020} uses a decision tree model trained on runtime data to predict the library functions that a code location may use. In contrast to these papers, our work addresses a different problem and feeds one value at a time into the model. 
\section{Overview} \begin{figure*}[t] \includegraphics[width=\linewidth]{figures/overview2.pdf} \caption{Overview of the approach.} \label{fig:overview} \end{figure*} This section describes the problem we address and gives an overview of our approach. \name{} reasons about \emph{name-value pairs}, i.e., pairs of a variable name and a value that gets assigned to the variable. The problem we address is to identify name-value pairs where the name is not a good fit for the value, which we call \emph{inconsistent name-value pairs}. Identifying such pairs is an inherently fuzzy problem: Whether a name fits a value depends on the conventions that programmers follow when naming variables. The fuzziness of the problem motivates a data-driven approach~\cite{NeuralSoftwareAnalysis}, where we use the vast amounts of available programs as guidance for what name-value pairs are common and what name-value pairs stand out as inconsistent. Broadly speaking, \name{} consists of six components and two phases, illustrated in Figure~\ref{fig:overview}. During the training phase, the approach learns from a corpus of executable programs a neural classification model, which then serves during the prediction phase for identifying name-value inconsistencies in previously unseen programs. The following illustrates the six components of the approach with some examples. A detailed description follows in Section~\ref{sec:approach}. Given a corpus of executable programs, the first component is a dynamic analysis of assignments of values to variables. For each assignment during the execution of the program, the analysis extracts the variable name, the value assigned to the variable, and several properties of the value, e.g., the type, length, and shape. As illustrated in Figure~\ref{fig:overview}, properties that do not exist for a particular value are represented by \emph{null}. 
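The recorded properties can be sketched as follows; this is a simplified illustration with a hypothetical helper name, and the actual analysis records further features:

```python
def value_properties(value):
    """Extract the properties recorded for a runtime value.
    Properties that do not exist for a value are left as None (null)."""
    props = {
        "string": repr(value),          # string representation of the value
        "type": type(value).__name__,   # e.g., "int", "list", "DataFrame"
        "length": None,
        "shape": None,
    }
    try:
        props["length"] = len(value)
    except TypeError:
        pass  # primitives such as ints and floats have no length
    shape = getattr(value, "shape", None)  # e.g., numpy arrays have .shape
    if shape is not None:
        props["shape"] = tuple(shape)
    return props
```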
For example, the analysis extracts the length of the assigned value for \code{Xs\_train}, but not for \code{age} and \code{probability}, as the corresponding values are primitives that do not have a length. While the name-value pairs obtained by the dynamic analysis serve as positive examples, the second component generates negative examples that combine names and values in an unusual and likely inconsistent way. The motivation behind generating negative examples is that \name{} trains a classification model in a supervised manner, i.e., the approach requires examples of both consistent and inconsistent name-value pairs. Using the example pairs in Figure~\ref{fig:overview}, one negative example would be the name \code{Xs\_train} paired with the floating point value 0.83, which indeed is an unusual name-value pair. Our approach for generating negative examples is a probabilistic algorithm that biases the selection of unusual values toward unusual types based on the types of values that are usually observed with a name. The first and second component together address challenge~C4 from the introduction, i.e., obtaining a dataset for training an effective model. The third component of \name{} addresses challenges~C1 and~C2, i.e., ``understanding'' the semantics of names and values. To this end, the approach represents names and values as vectors that preserve their meaning. To represent identifier names, we build on learned token embeddings~\cite{Bojanowski2017}, which map each name into a vector while preserving the semantic similarities of names~\cite{icse2021}. For example, the vector of \code{probability} will be close to the vectors of names \code{probab} and \code{likelihood}, because these names refer to similar concepts. To represent values, we present a neural encoding of values based on their string representation, type, and other properties. 
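The type-guided generation of negative examples can be approximated as in the following sketch, which biases the sampled value toward types that rarely co-occur with the given name; this is a simplification of the actual algorithm:

```python
import random
from collections import Counter, defaultdict

def type_profiles(pairs):
    """Count which value types are observed with each name."""
    profiles = defaultdict(Counter)
    for name, value in pairs:
        profiles[name][type(value).__name__] += 1
    return profiles

def sample_negative(name, pairs, profiles, rng=random):
    """Pair `name` with a value whose type is unusual for that name."""
    usual = profiles[name]
    total = sum(usual.values())
    values = [v for _, v in pairs]
    # Weight each candidate inversely to how often its type occurs with `name`.
    weights = [1.0 - usual[type(v).__name__] / total for v in values]
    if sum(weights) == 0:  # every candidate has a usual type
        weights = [1.0] * len(values)
    return name, rng.choices(values, weights=weights, k=1)[0]

# Toy corpus of observed name-value pairs.
observed = [("age", 23), ("age", 42), ("name", "bob"), ("probability", 0.83)]
profiles = type_profiles(observed)
negative = sample_negative("age", observed, profiles, rng=random.Random(0))
```

With this weighting, integer candidates get zero weight for the name \code{age}, so the sampled negative pair binds \code{age} to a string or float, which is exactly the kind of unusual combination the classifier should learn to reject.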
Given the vector representations of name-value pairs, the fourth component trains a neural model to distinguish positive from negative examples. The result is a classifier that, once trained with sufficiently many examples, addresses challenge~C3. The fifth component of the approach queries the classifier with vector representations of name-value pairs extracted from previously unseen programs, producing a set of pairs predicted to be inconsistent. The final component heuristically filters pairs that are likely false positives, and then reports the remaining pairs as warnings to the developer. For the two new assignments shown in Figure~\ref{fig:overview}, the trained classifier will correctly identify the assignment \code{name = 2.5} as unusual and raise a warning. \section{Threats to Validity} \paragraph{Internal Validity} Several factors may influence our results. First, the filtering of name-value pairs based on the length of names may accidentally remove short but meaningful names, such as abbreviations that are common in a specific domain. Preliminary experiments without such filtering resulted in many false positives, and we prefer false negatives over false positives to increase developer acceptance. Second, our manual classification into different kinds of inconsistencies is subject to our limited knowledge of the analyzed Python files. To mitigate this threat, the classification is done by two of the authors, discussing any unclear cases until reaching consensus. \paragraph{External Validity} Several factors may influence the generalizability of our results. First, our approach is designed with a dynamically typed programming language in mind, because meaningful identifier names are particularly important in such languages. This focus and the setup of our experiments imply that we cannot draw conclusions beyond Python or beyond the kind of Python code found in Jupyter notebooks.
Second, our developer study is limited to eleven participants, and other developers may assess the understandability of the name-value pairs differently. We mitigate this threat by getting eleven opinions about each name-value pair and by statistically analyzing the relevance of the results.
\section{Introduction} Text-to-speech (TTS) and Voice Conversion (VC) are technologies that generate speech with a target voice from a text or a source utterance input. Recent studies have shown that we can clone new voices for these speech generation systems using a small amount of speech data either with or without transcription \cite{luong2020nautilus} while maintaining a high quality and speaker similarity to the target speakers \cite{arik2018neural,liu2018wavenet}. Voice cloning can be regarded as a classical scenario of speech generation in which the objective is to generate utterances that are as similar to the natural speech of the target speaker as possible. This framing is straightforward and easy to evaluate, but it ignores the potential of speech synthesis systems, namely the ability to create something new and/or better than the original under certain terms and conditions. Dysarthric speech reconstruction \cite{kain2007improving,wang2020end} and accent-reduction voice conversion \cite{aryal2014can,zhao2018accent} are two examples of scenarios in which we want to create a speech generation model with voices that are better than the originals while maintaining a certain level of speaker individuality. Cross-lingual speech generation \cite{wu2008cross,sun2016personalized}, the scenario in which speech is generated in a language not spoken by the target speaker, is another such scenario, and it is the main topic of this paper. In the case of cross-lingual TTS, we could adapt an HMM-based TTS model to a target speaker whose data is not in the target language by establishing a phoneme mapping between the two languages \cite{wu2008cross,liang2010comparison,oura2010unsupervised}, training a bilingual state-sharing HMM model \cite{qian2009cross}, or factorizing the language and speaker components as transformation functions \cite{zen2012statistical}. Similar principles have been applied to neural TTS systems.
For example, several works have proposed training a multi-speaker multi-lingual neural TTS model capable of cross-lingual speech generation by utilizing factorizing speaker and language components \cite{li2016multi,fan2016speaker,zhang2019learning}. Unsupervised speaker adaptation methods, such as speaker-adaptive TTS model conditioning on neural speaker embedding \cite{chen2019cross}, have also shown promising results for the cross-lingual scenario. Cross-lingual TTS is also the foundation for more interesting applications such as code-mixing speech synthesis \cite{rallabandi2019variational,cao2020code}. In the case of cross-lingual VC, the systems are generally based on non-parallel VC systems, which are developed using the non-parallel utterances of source and target speakers. Specifically, Phonetic PosteriorGram (PPG) based models are often used for the cross-lingual scenario \cite{sun2016personalized}. Even though a monolingual PPG trained on the target language is good enough for cross-lingual VC, it was reported that a bilingual PPG \cite{zhou2019cross,cao2020code} or mixed-lingual PPG \cite{zhou2019modularized} can significantly improve the performance. Non-parallel VC systems based on a Variational Autoencoder (VAE) \cite{mohammadi2018investigation} or Generative Adversarial Network (GAN) \cite{kameoka2018stargan,sisman2019study} are also applicable to the cross-lingual scenario. Cross-lingual VC was also the main topic of the Voice Conversion Challenge 2020 (VCC2020) which is the basis of the evaluations we present in this paper. In this study, we extend the versatile voice cloning method of the NAUTILUS system \cite{luong2020nautilus} to the cross-lingual speech generation scenario. While we follow the VCC2020 theme, our primary focus is the development of a unified framework for both cross-lingual TTS and VC. 
The proposed system is expected to generate speech with target voices in a language that is not spoken by them by using either the TTS or VC input interface. Moreover, the performance when switching between the two modes should be relatively consistent. In Section \ref{sec:methodology} of this paper, we describe our procedure to create a unified cross-lingual TTS and VC system for a target speaker. Section \ref{sec:experiments} provides details about the experimental setup, particularly the data conditions, and in Section \ref{sec:evaluations} we discuss the results of the VCC2020 and those of our own listening test. We conclude in Section \ref{sec:conclusion} with a brief summary and mention of future work. \begin{figure*}[t] \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{arc-training-big.pdf} \caption{Initial training} \label{fig:step-training} \end{subfigure} \hfill \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{arc-heatup.pdf} \caption{Step 1 - Adaptation} \label{fig:step-adaptation} \end{subfigure} \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{arc-welding.pdf} \caption{Step 2 - Welding} \label{fig:step-welding} \end{subfigure} \caption{Steps of the intra-language/cross-language speaker adaptation procedure for the proposed unified TTS/VC system. The proposed system comprises a text encoder ($TEnc$), a speech encoder ($SEnc$), a text decoder ($TDec$), a speech decoder ($SDec$), and a neural vocoder ($Voc$). $\boldsymbol{x}$ is the phoneme sequence, $\boldsymbol{y}$ the acoustic features, $\boldsymbol{o}$ the waveform, and $\boldsymbol{z}$ the latent linguistic embedding.
The $\textrm{loss}_{goal}$ term is a placeholder for either $\textrm{loss}_{tts}$, $\textrm{loss}_{sts}$, $\textrm{loss}_{stt}$, or $\textrm{loss}_{ttt}$ depending on the encoder/decoder combination.} \label{fig:steps} \centering \vspace{-3mm} \end{figure*} \section{Cross-lingual TTS and VC system with latent linguistic embedding} \label{sec:methodology} Our methodology for cross-lingual speaker adaptation is based on the unsupervised voice cloning strategy using Latent Linguistic Embedding (LLE) \cite{luong2020nautilus}. Previously, we showed that this type of cross-lingual VC system can deal with several sub-scenarios thanks to the imbalance of data demands in each step \cite{luong2019bootstrapping}. In this work, we focus on the cross-language speaker adaptation scenario of TTS and VC to align with the theme of VCC2020. The steps to develop a unified cross-lingual TTS/VC system are summarized in this section, while further details can be found in the original paper \cite{luong2020nautilus}. \subsection{Initial training} This step focuses on training a robust and speaker-disentangled LLE in a target language as well as initializing all text/speech encoders/decoders using a large-scale multi-speaker corpus (Section III-A in \cite{luong2020nautilus}). We achieve this by jointly training the modules of the text-speech multimodal system using joint-goal and tied-layer objectives as follows: \begin{equation} \begin{aligned} \label{eq:losstrain} \textrm{loss}_{train} &= \textrm{loss}_{goals} + \beta \; \textrm{loss}_{tie} \\ &= \textrm{loss}_{tts} + \alpha_{sts} \; \textrm{loss}_{sts} + \alpha_{stt} \; \textrm{loss}_{stt} \\ &\qquad + \beta \; \textrm{loss}_{tie} \;.
\end{aligned} \end{equation} Given the VAE-like structure of the encoders, we use symmetrized Kullback-Leibler divergence between the encoder outputs as a tied-layer loss: \begin{multline} \label{eq:losstie} \textrm{loss}_{tie} = \frac{1}{2} \; L_{KLD}(TEnc(\boldsymbol{x}), SEnc(\boldsymbol{y})) \\ + \frac{1}{2}\; L_{KLD}(SEnc(\boldsymbol{y}), TEnc(\boldsymbol{x})) \;. \end{multline} A multi-speaker WaveNet vocoder is separately trained on the same corpus in this step, with \begin{equation} \textrm{loss}^\prime_{train} = \textrm{loss}_{voc} \; . \end{equation} It is used as the initial model for speaker adaptation in later steps. Figure \ref{fig:step-training} shows the conceptual actions performed in the initial training step. The same initial model obtained in this step will be reused for all target speakers. \subsection{Step 1 - Adaptation} Cross-lingual adaptation can be considered a special case of unsupervised speaker adaptation in which transcribed speech of the target speakers is unobtainable, as it is in a foreign/unseen language, so we can use the unsupervised voice cloning strategy to tackle the cross-lingual scenario (Section III-B1 in \cite{luong2020nautilus}). Specifically, we adapt the speech decoder to the speech data of the target speaker as illustrated in Fig.\ \ref{fig:step-adaptation}: \begin{equation} \textrm{loss}_{adapt} = \textrm{loss}_{sts} + \beta \; \textrm{loss}_{cycle} \; , \end{equation} where $\textrm{loss}_{cycle}$ is the KL divergence between the LLE distributions extracted from natural and converted acoustic features. As LLE is a latent representation of sound, it is expected to be generalizable to other languages. The neural vocoder is also adapted to the target speaker, as \begin{equation} \textrm{loss}^\prime_{adapt} = \textrm{loss}_{voc} \; . \end{equation} The fact that the initial model was never trained on data in languages other than English might negatively affect the performance of the cross-language speaker adaptation.
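For concreteness, the tied-layer loss of Eq. \eqref{eq:losstie} can be sketched as follows, under the assumption (consistent with the VAE-like encoders, but not a description of the actual NAUTILUS implementation) that each encoder predicts the mean and log-variance of a diagonal Gaussian over the LLE; the names \texttt{kld\_gaussian} and \texttt{loss\_tie} are illustrative:

```python
import math

def kld_gaussian(mu_p, logvar_p, mu_q, logvar_q):
    """KL(p || q) between two diagonal Gaussians, summed over dimensions.

    Uses the closed form 0.5 * (log(vq/vp) + (vp + (mp - mq)^2) / vq - 1)
    per dimension, with variances vp = exp(logvar_p), vq = exp(logvar_q).
    """
    total = 0.0
    for mp, lp, mq, lq in zip(mu_p, logvar_p, mu_q, logvar_q):
        vp, vq = math.exp(lp), math.exp(lq)
        total += 0.5 * (lq - lp + (vp + (mp - mq) ** 2) / vq - 1.0)
    return total

def loss_tie(text_post, speech_post):
    """Symmetrized KL between the LLE distributions predicted by the
    text and speech encoders, mirroring the tie loss above."""
    mu_t, lv_t = text_post
    mu_s, lv_s = speech_post
    return 0.5 * kld_gaussian(mu_t, lv_t, mu_s, lv_s) \
         + 0.5 * kld_gaussian(mu_s, lv_s, mu_t, lv_t)
```

In practice such a loss would be computed frame by frame over the aligned encoder output sequences and averaged over a minibatch; the symmetrization makes the loss invariant to swapping the two encoders.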
However, we decided not to use additional data of the languages spoken by the target speakers in order to keep data demand low and simply test the generalization ability of the LLE. \subsection{Step 2 - Welding} Although tuning the speech decoder and neural vocoder separately is sufficient, we perform an additional step in which they are jointly tuned in order to increase their compatibility, as shown in Fig.\ \ref{fig:step-welding}. The optimizing loss is formulated as follows: \begin{equation} \textrm{loss}_{weld} = \textrm{loss}_{sts} + \gamma \; \textrm{loss}_{voc} \; . \end{equation} The $\textrm{loss}_{sts}$ term is included to maintain the acoustic space for the autoregressive speech decoder. Several training tactics are applied in this step to enable the model to learn fine-grained details while avoiding overfitting \cite{luong2020nautilus}. \subsection{Step 3 - Inference} The adapted speech decoder and neural vocoder can be used along with either the text or speech encoder to form a complete TTS or VC system. For the standard intra-language speaker adaptation scenario, the NAUTILUS system has demonstrated a highly consistent performance between the TTS and VC inference \cite{luong2020nautilus}. In this work, we check whether the same statement holds true in the cross-lingual scenario. \begin{figure*}[t] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{chart_vcc2020_task1.pdf} \caption{Standard scenario (Task 1)} \label{fig:result-task1} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{chart_vcc2020_task2.pdf} \caption{Cross-lingual scenario (Task 2)} \label{fig:result-task2} \end{subfigure} \caption{Subjective evaluations of quality and speaker similarity at VCC2020.
Listeners are native English speakers.} \centering \vspace{-3mm} \end{figure*} \section{Experiments} \label{sec:experiments} \subsection{Model configuration and LLE training} The same NAUTILUS system used to evaluate scenario B in the original paper \cite{luong2020nautilus} was reused for the cross-language speaker adaptation experiments. Each module of the neural model consisted of several one-dimensional convolution layers to capture temporal context. Phonemes were used as the text representation and 80-dimensional mel-spectrograms were used as the acoustic representation. The current system is not yet an end-to-end (E2E) system, as it requires explicit phoneme duration information when generating speech from text input. This setup allows us to have the same duration condition between the TTS and VC systems. The initial model used for scenario B (and reused in this paper) was first trained on 24 kHz English transcribed speech of LibriTTS \cite{zen2019libritts}, which has diverse linguistic content, and then on the VCTK corpus \cite{veaux2017superseded}, which was recorded in a more controlled environment. This initial model trained on English data was adapted to target speakers whose data is in languages other than English. Additional data of languages spoken by the target speakers would likely improve the performance \cite{zhang2019learning,zhou2019cross}, but we leave such investigation for future work. \subsection{Target speakers for evaluations} We tested the performance of the proposed cross-lingual speech generation system on target speakers from VCC2020, which consists of four native English speakers (Task 1) and six speakers who spoke languages other than English (Task 2). While the focus of this work is the cross-lingual scenario, we performed both tasks to establish a reliable baseline with English target speakers.
In summary, the target speakers included four English, two Finnish, two German, and two Mandarin speakers with an equal number of male and female speakers in each language. Each speaker had 70 utterances (around five minutes) of untranscribed speech available for adaptation. We tested our cross-lingual TTS system on the same target speakers and compared its performance with our cross-lingual VC system to evaluate the consistency between the two. Even though the challenge provided speech data of source speakers, we decided not to use it for either training or adaptation. \section{Evaluations} \label{sec:evaluations} \subsection{Voice Conversion Challenge 2020} Figures \ref{fig:result-task1} and \ref{fig:result-task2} show the subjective quality and speaker similarity results, judged by native English speakers, of the systems submitted to VCC2020. We recreated the figures based on results provided by the organizers to focus on the information relevant to our work\footnote{See the paper written by the organizers for detailed results \cite{zhao2020voice}}. For the standard intra-language task, our system, T07, achieved moderate performance and ranked among the top ten systems with the highest speaker similarity (Fig.\ \ref{fig:result-task1}). Our system had a higher speaker similarity but lower quality than T11, which was the best system (N10) of VCC2018 \cite{liu2018wavenet}. This is similar to the results presented in the original paper on the NAUTILUS system \cite{luong2020nautilus}. Compared with other baselines, our system had a higher quality but lower speaker similarity than T22 \cite{Huang2020}, while T16 \cite{tobing2019non} did not make it into the top ten ranking in the intra-language scenario. In summary, the subjective results of Task 1 confirm the competitive performance of our system when used as a VC system.
\begin{figure*}[t] \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{chart_self_ttsnat_qua.pdf} \caption{TTS$_u$ vs. NAT} \label{fig:quality-ttsnat} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{chart_self_ttsvc_qua.pdf} \caption{TTS$_u$ vs. VCA$_u$} \label{fig:quality-ttsvca} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{chart_self_ttsxv_qua.pdf} \caption{TTS$_u$ vs. XV} \label{fig:quality-ttsxv} \end{subfigure} \vspace{-5mm} \caption{Subjective quality evaluations. Each speaker/system pair was judged 300 times by native Japanese speakers.} \label{fig:quality} \centering \end{figure*} \begin{figure*}[t] \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{chart_self_ttsnat_sim.pdf} \caption{TTS$_u$ vs. NAT} \label{fig:similarity-ttsnat} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{chart_self_ttsvc_sim.pdf} \caption{TTS$_u$ vs. VCA$_u$} \label{fig:similarity-ttsvca} \end{subfigure} \hfill \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{chart_self_ttsxv_sim.pdf} \caption{TTS$_u$ vs. XV} \label{fig:similarity-ttsxv} \end{subfigure} \vspace{-5mm} \caption{Subjective similarity evaluations. Each speaker/system pair was judged 300 times by native Japanese speakers.} \label{fig:similarity} \centering \vspace{-3mm} \end{figure*} For the cross-lingual task, the setup for the listening test was trickier due to the language mismatch between the generated and natural utterances. Based on information provided by the organizers, for the speaker similarity evaluation, listeners were presented with either a native language utterance or the English utterance of the target, the data of which was not provided to the challenge participants.
The results shown in Fig.\ \ref{fig:result-task2} are the averages over both cases. In general, the cross-lingual scenario seems to have had a lower speaker similarity than the standard scenario, which is not surprising. Our system (T07) was ranked 3rd in the similarity metric and higher than all three baselines. However, it received relatively low quality scores compared to other systems. Interestingly, if we only consider the speaker similarity results between converted utterances and the native language utterance of the target speakers, our system ranked 1st in the case of Finnish speakers, 4th in the case of German speakers, and 4th in the case of Mandarin speakers. This result demonstrates the feasibility of using the NAUTILUS system for cross-language speaker adaptation, as it achieved very high speaker similarity without having to pre-train on data of languages spoken by the target speakers. \subsection{A unified cross-lingual system of TTS and VC} The strength of the NAUTILUS system is its ability to switch between TTS and VC modes. We evaluate its performance as a cross-lingual TTS system in this section. Specifically, we compared our cross-lingual TTS system (TTS$_u$) with the cross-lingual VC system (VCA$_u$), a simple TTS baseline based on x-vector (XV) \cite{chen2019cross}, and the natural utterances of the target speakers (NAT)\footnote{Samples are available at \url{https://nii-yamagishilab.github.io/sample-cross-lingual-tts-vc/}}. The adapted models used in the previous section were reused here for TTS$_u$ and VCA$_u$, while XV\footnote{The XV system consists of the pretrained \textit{libritts.tacotron2.v1} and \textit{libritts.wavenet.mol.v1} models based on the ESPnet toolkit \cite{watanabe2018espnet}} was the same system used in the original paper \cite{luong2020nautilus}. We conducted our own listening test by asking 170 participants, who each did one to ten tests, to pick which of the two samples had better quality or similarity.
Each test consisted of 24 quality and 24 similarity questions covering all speaker and system pairs. For the TTS$_u$/NAT pair, we only evaluated the English target speakers, as comparing English and non-English utterances would be uninformative. The reference utterances used in the similarity question were in the native languages of the target speakers. The quality and speaker similarity results are presented in Figures \ref{fig:quality} and \ref{fig:similarity}, respectively. As expected, TTS$_u$ was not comparable with the natural utterances, but neither was it totally dominated by NAT, which is an encouraging result. Interestingly, the listeners seemed to pick the natural samples more often when presented with a reference utterance, as shown in the similarity results. Between TTS$_u$ and VCA$_u$, there were no clear winners across all speakers, which suggests highly consistent performance between the two modes of the proposed system even in the cross-lingual scenario. Last but not least, compared with the cross-lingual TTS baseline, XV, our system dominated the results across all speakers, which validates our cross-lingual methodology on the TTS side. If we look at individual speakers, our system seems to have had noticeably worse results in the case of TEF1 and TMF1 in the similarity metric. This suggests that the performance of the proposed method may not be consistent between speakers. \section{Conclusions} \label{sec:conclusion} The cross-lingual experiments presented in this paper have once again demonstrated the versatility of the proposed NAUTILUS system. Even though the initial LLE is only trained on English speech data, the proposed system can generalize to other languages spoken by the target speakers and generate utterances with high speaker similarity. This significantly reduces the data demands of the cross-lingual system and allows it to adapt to target speakers who only speak low-resource languages.
However, as cross-lingual systems are expected to benefit from multilingual data \cite{chen2019cross,zhou2019cross}, our future work will focus on taking advantage of either transcribed or untranscribed speech from large-scale multilingual corpora. Moreover, although the VCC2020 has taken the first step in shifting the focus of the research community to developing speech generation systems that are better than and different from the original voice, concrete guidelines on how to evaluate such systems have not yet been established. Additional research on this new and exciting topic is required. \section{Acknowledgements} This work was partially supported by a JST CREST Grant (JPMJCR18A6, VoicePersonae project), Japan, and MEXT KAKENHI Grants (16H06302, 17H04687, 18H04120, 18H04112, 18KT0051), Japan. \bibliographystyle{IEEEtran}
\section{Introduction} In this paper, we want to point out the similarities between the dynamics of vector fields in $\mathbb{R}^d$ and those of reaction-diffusion equations on bounded domains. More precisely, we consider the following classes of equations.\\[2mm] {\noindent \bf Class of vector fields}\\ Let $d\geq 1$ and $r\geq 1$ and let $g\in\mathcal{C}^r(\mathbb{R}^d,\mathbb{R}^d)$ be a given vector field. We consider the ordinary differential equation \begin{equation}\label{ODE} \left\{\begin{array}{l}\dot y(t)=g(y(t))~~t>0\\ y(0)=y_0\in\mathbb{R}^d \end{array}\right. \end{equation} where $\dot y(t)$ denotes the time-derivative of $y(t)$.\\ The equation \eqref{ODE} defines a local dynamical system $T_g(t)$ on $\mathbb{R}^d$ by setting $T_g(t)y_0=y(t)$. We assume that there exists $M>0$ large enough such that $$\forall y\in\mathbb{R}^d,~\|y\|\geq M~\Rightarrow~\langle y|g(y)\rangle <0~.$$ This condition ensures that $T_g(t)$ is a global dynamical system. Moreover, the ball $B(0,M)$ attracts the bounded sets of $\mathbb{R}^d$. Therefore, $T_g(t)$ admits a compact global attractor\footnotemark[1] \footnotetext[1]{ To make the reading of this article easier for the reader, who is not familiar with dynamical systems theory or with the study of PDEs, we add a short glossary at the end of the paper.} $\mathcal{A}_g$. The attractor $\mathcal{A}_g$ contains the most interesting trajectories such as periodic, homoclinic and heteroclinic orbits\footnotemark[1] and any $\alpha-$ or $\omega-$limit set\footnotemark[1]. Therefore, if one neglects the transient dynamics, the dynamics on $\mathcal{A}_g$ is a good representation of the whole dynamics of $T_g(t)$.\\[2mm] {\noindent \bf Class of scalar parabolic equations}\\ Let $d'\geq 1$ and let $\Omega$ be either a regular bounded domain of $\mathbb{R}^{d'}$, or the torus $\mathbb{T}^{d'}$. We choose $p> d'$ and $\alpha\in ((p+d')/2p,1)$. 
We denote $X^\alpha \equiv D((-\Delta_N)^{\alpha})$ the fractional power space\footnotemark[1] associated with the Laplacian operator $\Delta_N$ on $\mathbb{L}^p(\Omega)$ with homogeneous Neumann boundary conditions. It is well-known\footnotemark[1] that $X^\alpha$ is continuously embedded in the Sobolev space $W^{2\alpha,p}(\Omega)$ and thus it is compactly embedded in $\mathcal{C}^1(\overline\Omega)$. Let $r\geq 1$ and $f\in\mathcal{C}^r(\overline\Omega \times \mathbb{R} \times \mathbb{R}^{d'}, \mathbb{R})$. We consider the parabolic equation \begin{equation}\label{PDE} \left\{\begin{array}{ll}\dot u(x,t)=\Delta u(x,t) +f(x,u(x,t),{\nabla} u(x,t))~&~(x,t)\in\Omega\times (0,+\infty)\\ \Drond u\nu (x,t)=0~&~(x,t) \in\partial\Omega\times(0,+\infty)\\ u(x,0)=u_0(x)\in X^\alpha & \end{array}\right. \end{equation} where $\dot u(t)$ is the time-derivative of $u(t)$.\\ Eq. \eqref{PDE} defines a local dynamical system $S_f(t)$ on $X^\alpha$ (see \cite{Henry-book}) by setting $S_f(t)u_0=u(t)$. We assume moreover that there exist $c\in\mathcal{C}^0(\mathbb{R}_+,\mathbb{R}_+)$, $ \varepsilon>0$ and $\kappa>0$ such that $f$ satisfies \begin{align*} \forall R>0,~\forall \xi\in\mathbb{R}^{d'},~&\sup_{(x,z)\in\overline\Omega\times[-R,R]} |f(x,z,\xi)|\leq c(R)(1+|\xi|^{2-\varepsilon}) \\ \text{and } \forall z\in\mathbb{R} ,~\forall x\in\overline\Omega,~~~&~~|z|\geq\kappa~\Rightarrow~ zf(x,z,0) < 0~. \end{align*} Then, Eq. 
\eqref{PDE} defines a global dynamical system in $X^\alpha$ which admits a compact global attractor $\mathcal{A}$ (see \cite{Polacik-handbook}).\\[2mm] \begin{table}\label{table} {\noindent \small \begin{tabular}{|c|l|c|} \hline ODE& &PDE\\ \hline \hline \begin{tabular}{c}$d=1$\\$~$\\{\footnotesize (Or more generally}\\{\footnotesize tridiagonal}\\{ \footnotesize cooperative}\\{ \footnotesize system of ODEs)} \end{tabular} &\begin{tabular}{l} $\bullet$ Gradient dynamics\\ $\bullet$ Convergence to an equilibrium point\\ $\bullet$ Automatic transversality of stable\\ and unstable manifolds\\ $\bullet$ Genericity of Morse-Smale property\\ $\bullet$ Knowledge of the equilibrium points\\ implies knowledge of the whole dynamics\\ $\bullet$ Dimension of the attractor equal to the\\ largest dimension of the unstable manifolds\\ $\bullet$ Realisation of the ODE in the PDE \end{tabular} &$\Omega=(0,1)$\\ \hline \hline \begin{tabular}{c}$d=2$\\General case\\$~$\\{\footnotesize(Or more generally}\\{\footnotesize cyclic tridiagonal}\\{ \footnotesize cooperative }\\{\footnotesize system of ODEs)} \end{tabular} & \begin{tabular}{l} $\bullet$ Poincar\'e-Bendixson property\\ $\bullet$ Automatic transversality of stable\\ and unstable manifolds of two orbits\\ if one of them is a hyperbolic periodic orbit\\ or if both are equilibrium points\\ with different Morse indices.\\ $\bullet$ Non-existence of homoclinic orbits\\ for periodic orbits\\ $\bullet$ Genericity of Morse-Smale property\\ $\bullet$ Realisation of the ODE in the PDE \end{tabular} & \begin{tabular}{c}$\Omega=\mathbb{T}^1$\\General case\end{tabular}\\ \hline \begin{tabular}{c}$d=2$\\$g$ radially\\ symmetric \end{tabular} & \begin{tabular}{l} $\bullet$ Automatic transversality of stable\\ and unstable manifolds of\\ equilibrium points and periodic orbits\\ $\bullet$ No homoclinic orbit\\ $\bullet$ Knowledge of the equilibrium points\\ and of the periodic orbits\\ implies knowledge of the whole dynamics\\ $\bullet$ 
Genericity of the Morse-Smale property.\\ $\bullet$ Dimension of the attractor equal to the\\ largest dimension of the unstable manifolds\\ $\bullet$ Realisation of the ODE in the PDE \end{tabular} & \begin{tabular}{c}$\Omega=\mathbb{T}^1$\\ $f(x,u,{\nabla} u)\equiv f(u,{\nabla} u)$ \end{tabular}\\ \hline \hline $d\geq 3$& \begin{tabular}{l} $\bullet$ Existence of persistent chaotic dynamics\\ $\bullet$ Genericity of Kupka-Smale property\\ $\bullet$ Realisation of the ODE in the PDE \end{tabular} & dim($\Omega$)$\geq 2$\\ \hline \hline \begin{tabular}{c}Any $d$\\$g\equiv {\nabla} G$\end{tabular} & \begin{tabular}{l} $\bullet$ Gradient dynamics\\ $\bullet$ Genericity of the Morse-Smale property\\ $\bullet$ Realisation of a generic ODE in the PDE \end{tabular} & \begin{tabular}{c}Any $\Omega$\\$f(x,u,{\nabla} u)\equiv f(x,u)$ \end{tabular}\\ \hline \end{tabular} }\caption{The correspondence between the dynamics of vector fields and those of parabolic equations.}\end{table} The reader who is not familiar with partial differential equations may neglect all the technicalities about $X^\alpha$, the Sobolev spaces and the parabolic equations on a first reading. The most important point is that $S_f(t)$ is a dynamical system defined on an {\it infinite-dimensional function space}. Compared with the finite-dimensional case, new difficulties arise. For example, the existence of a compact global attractor requires compactness properties, coming here from the smoothing effect of \eqref{PDE}. We also mention that, even if the backward uniqueness property holds, backward trajectories do not exist in general for \eqref{PDE}. The reader interested in the dynamics of \eqref{PDE} may consult \cite{Fiedler-Scheel}, \cite{Henry-book}, \cite{HaleCanada}, \cite{Polacik-handbook} or \cite{GR-handbook}. The purpose of this paper is to emphasize the different relationships between the dynamics of \eqref{ODE} and \eqref{PDE}. The correspondence is surprisingly perfect.
It can be summarized by Table \ref{table}. This correspondence has already been noticed for some of the properties of the table. We complete here the correspondence for all the known properties of the dynamics of the parabolic equation. Table \ref{table} will be discussed in more detail in Section \ref{section-comments} and, for cooperative systems, in Section \ref{section-coop}. Some of the properties presented in the table concerning finite-dimensional dynamical systems are trivial, and others are now well-known. However, the corresponding results for the parabolic equation are more involved and some of them are very recent. These properties are mainly based on Sturm-Liouville arguments and unique continuation properties for the parabolic equations, as explained in Section \ref{section-ucp}. The study of the dynamics generated by vector fields in dimension $d\geq 3$ is still a subject of research. Taking into account the correspondence presented in Table \ref{table} should give a guideline for research on the dynamics of the parabolic equations. Some examples of open questions are given in Section \ref{section-open}. We underline that we only consider the dynamics on the compact global attractors. Hence, we deal with dynamical systems on compact sets. It is important to be aware of the fact that, even if the dimension of the compact global attractor $\mathcal{A}$ of the parabolic equation \eqref{PDE} is finite, it can be made as large as wanted by choosing a suitable function $f$. This is true even if $\Omega$ is one-dimensional. Therefore, all the possible properties of the dynamics of \eqref{PDE} do not come from the low dimension of $\mathcal{A}$ but from properties that are very particular to the flow of the parabolic equations. Finally, we remark that most of the results described here also hold in more general frames than \eqref{ODE} and \eqref{PDE}. For example, $\mathbb{R}^d$ could be replaced by a compact orientable manifold without boundary.
We could also choose for \eqref{PDE} more general boundary conditions than Neumann ones, or less restrictive growth conditions for $f$. The domain $\Omega$ may be replaced by a bounded smooth manifold. Finally, notice that the case $\Omega=\mathbb{T}^{d'}$ can be seen as $\Omega=(0,1)^{d'}$ with periodic boundary conditions. \section{Details and comments about the correspondence table}\label{section-comments} We expect the reader to be familiar with the basic notions of the theory of dynamical systems and flows. Some definitions are briefly recalled in the glossary at the end of this paper. For more details, we refer for example to \cite{Katok-Hasselblatt}, \cite{Newhouse2}, \cite{Palis-de-Melo}, \cite{Robinson2} or \cite{Ruelle} for finite-dimensional dynamics and to \cite{HMO}, \cite{Henry-book}, \cite{Robinson} or \cite{HJR} for the infinite-dimensional ones. We first would like to give short comments and motivations concerning the properties appearing in Table \ref{table}. Notice that we do not deal in this section with the cooperative systems of ODEs. The properties of these systems are discussed in Section \ref{section-coop}. A {\it generic property of the dynamics} is a property satisfied by a countable intersection of open dense subsets of the considered class of dynamical systems. Generic dynamics represent the typical behaviour of a class of dynamical systems. For finite-dimensional flows, we mainly consider classes of the form $(T_g(t))_{g\in\mathcal{C}^1(\mathbb{R}^d,\mathbb{R}^d)}$. The parameter is the vector field $g$, which belongs to the space $\mathcal{C}^1(\mathbb{R}^d,\mathbb{R}^d)$ endowed with either the classical $\mathcal{C}^1$ or the $\mathcal{C}^1$ Whitney topology. Notice that the question whether or not a property is generic for $g\in\mathcal{C}^r(\mathbb{R}^d,\mathbb{R}^d)$ for some $r\geq 2$ may be much more difficult than $\mathcal{C}^1$ genericity. We will not discuss this problem here.
In some cases, we restrict the class of vector fields to subspaces of $\mathcal{C}^1(\mathbb{R}^d,\mathbb{R}^d)$ such as radially symmetric, gradient vector fields or cooperative systems. In a similar way, for infinite-dimensional dynamics, we consider families of the type $(S_f(t))_{f\in\mathcal{C}^1(\overline\Omega\times\mathbb{R}\times\mathbb{R}^{d'},\mathbb{R})}$, where $\mathcal{C}^1(\overline\Omega\times\mathbb{R}\times\mathbb{R}^{d'},\mathbb{R})$ is endowed with either the classical $\mathcal{C}^1$ or the $\mathcal{C}^1$ Whitney topology. For some results, we restrict the class of nonlinearities $f$ to homogeneous ones or to ones that are independent of the last variable $\xi$. {\it Poincar\'e-Bendixson property} and {\it the convergence to an equilibrium or a periodic orbit} are properties related to the following question: how simple are the $\alpha-$ and $\omega-$limit sets of the trajectories~? For vector fields, the restriction of the complexity of the limit sets may come from the restriction of freedom due to the low dimension of the flow. As said above, there is no restriction on the dimension of the global attractor for the parabolic equations. The possible restrictions of the complexity of the limit sets come from particular properties of the parabolic equations, see Section \ref{section-ucp}. {\it Hyperbolicity of equilibria and periodic orbits, transversality of stable and unstable manifolds, Kupka-Smale and Morse-Smale properties} are properties related to the question of stability of the local and global dynamics respectively. The Morse-Smale property is the strongest one.
It implies the structural stability of the global dynamics: if the dynamical system $T_g(t)$ satisfies the Morse-Smale property, then for $\tilde g$ close enough to $g$, the dynamics of $T_{\tilde g}(t)$, restricted to its attractor $\mathcal{A}_{\tilde g}$, are qualitatively the same as those of $T_g(t)$ on $\mathcal{A}_g$, see \cite{Palis}, \cite{Palis-Smale} and \cite{Palis-de-Melo}. The same structural stability result holds for parabolic equations satisfying the Morse-Smale property, see \cite{HMO}, \cite{HJR} and \cite{Oliva}. It is natural to wonder if almost all the dynamics satisfy these properties, that is, if these properties are generic. The fact that {\it the knowledge of the equilibria and the periodic orbits implies the knowledge of the whole dynamics} may be studied at different levels. Given two equilibria or periodic orbits, can we know whether or not they are connected by a heteroclinic orbit~? Are two dynamics with the same equilibria and periodic orbits equivalent~? Is there a simple algorithm to determine the global dynamics from the position of the equilibria and the periodic orbits~? These questions are among the rare dynamical questions coming from the study of partial differential equations and not from the study of vector fields. Indeed, for finite-dimensional dynamical systems, the answers, either positive or negative, are too simple. In contrast, such kinds of results are probably among the most amazing ones for the dynamics of the parabolic equations. {\it The persistent chaotic dynamics} and the fact that {\it the dimension of the attractor is equal to the largest dimension of the unstable manifolds} are related to the following question: how complicated may the dynamics be~? In general, the dimension of the attractor of a dynamical system may be larger than the largest dimension of the unstable manifolds.
The classes of systems where these dimensions automatically coincide are strongly constrained, which in some sense implies a simple behaviour. On the contrary, chaotic dynamics have a very complicated behaviour. Chaotic dynamics may occur through several phenomena, and the notion of chaotic behaviour depends on the author. In this paper, ``persistent chaotic dynamics'' refers to the presence of a transversal homoclinic orbit generating a Smale horseshoe (see \cite{Smale65}). The persistent chaotic dynamics provide complicated dynamics, which cannot be removed by small perturbations of the system. Such an open set of chaotic dynamics is a counter-example to the genericity of the Morse-Smale systems. The question of {\it the realization of vector fields in the parabolic equations} is as follows: a vector field $g\in\mathcal{C}^r(\mathbb{R}^d,\mathbb{R}^d)$ being given, can we find a function $f$ and an invariant manifold $M\subset L^{p}(\Omega)$ such that the dynamics of the parabolic equation \eqref{PDE} restricted to $M$ are equivalent to the dynamics generated by the vector field $g$~? A positive answer to this question implies that the dynamics of the considered class of parabolic equations are at least as complicated as the dynamics of the considered class of vector fields. Such a realization result is very interesting since, on the contrary, the other properties stated in Table \ref{table} roughly say that the dynamics of the parabolic equation \eqref{PDE} cannot be much more complicated than the ones of the corresponding class of finite-dimensional flows. One has to keep in mind that the manifold $M$, on which the finite-dimensional dynamics are realized, is not necessarily stable with respect to the dynamics of the parabolic equation. Typically, $M$ cannot be stable if the finite-dimensional system contains a stable periodic orbit, since all periodic orbits of \eqref{PDE} are unstable (see for example \cite{Hirsch3}).
\vspace{2mm} Now, we give short comments and references for the correspondences stated in Table \ref{table}. {\noindent \bf $\bullet$ $d=1$ and $\Omega=(0,1)$} The dynamics generated by a one-dimensional vector field are very simple. The attractor consists of equilibrium points and heteroclinic orbits connecting two of them. The existence of these heteroclinic orbits is easily deduced from the positions of the equilibrium points. Moreover, these heteroclinic connections are trivially transversal. Finally, \eqref{ODE} is clearly a gradient system with associated Lyapounov functional $F(y) =-\int_0^y g(s) ds$. As a consequence, the Morse-Smale property is equivalent to the hyperbolicity of all the equilibrium points, which holds for a generic one-dimensional vector field. The dynamics $S_f(t)$ generated by \eqref{PDE} for $\Omega=(0,1)$ are richer since the attractor may have a very large dimension. However, these dynamics satisfy similar properties. These similarities are mainly due to the constraints coming from the non-increase of the number of zeros of solutions of the linear parabolic equation (see Theorem \ref{th-Angenent}). Zelenyak has proved in \cite{Zelenyak} that $S_f(t)$ admits an explicit Lyapounov function and thus that it is a gradient system. He also showed that the $\omega-$limit sets of the trajectories consist of single equilibrium points. In Proposition \ref{prop-cv-single}, we give a short proof, due to Matano, of this result. The fact that the stable and unstable manifolds of equilibrium points always intersect transversally comes from Theorem \ref{th-Angenent} and the standard Sturm-Liouville theory. This property was first proved by Henry in \cite{Henry-T} and later by Angenent \cite{Angenent-T} in the weaker case of hyperbolic equilibria. As a consequence of the previous results, the Morse-Smale property is equivalent to the hyperbolicity of the equilibrium points and is satisfied by the parabolic equation on $(0,1)$ generically with respect to $f$.
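For completeness, let us recall the one-line computation behind the gradient structure of \eqref{ODE} in dimension one: along a solution $y(t)$, $$\frac{d}{dt}\, F(y(t))= -g(y(t))\,\dot y(t)=-g(y(t))^2\leq 0~,$$ with equality exactly at the equilibrium points, so that $F$ is strictly decreasing along any non-constant trajectory.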
The most surprising result concerning \eqref{PDE} on $\Omega=(0,1)$ is the following one. Assuming that every equilibrium point is hyperbolic and that the equilibrium points $e_1$,...,$e_p$ are known, one can determine whether two given equilibria $e_i$ and $e_j$ are connected by a heteroclinic orbit or not. This property has been proved by Brunovsk\'y and Fiedler in \cite{Brunovsky-Fiedler} for $f=f(u)$ and by Fiedler and Rocha in \cite{Fiedler-Rocha} in the general case. The description of the heteroclinic connections is obtained from the Sturm permutation, a permutation generated by the respective positions of the values $e_i(0)$ and $e_i(1)$ of the equilibrium points at the endpoints of $\Omega=(0,1)$. The importance of the Sturm permutation was first underlined by Fusco and Rocha in \cite{Fusco-Rocha}. We also refer to the work of Wolfrum \cite{Wolfrum}, which presents a very nice formalism for this property. Fiedler and Rocha showed in \cite{Fiedler-Rocha-Equivalence} that the Sturm permutation characterizes the global dynamics of \eqref{PDE} on $(0,1)$. They proved in \cite{Fiedler-Rocha-realization} that it is possible to give the exact list of all the permutations which are Sturm permutations for some nonlinearity $f$ and thus to give the list of all the possible dynamics of the parabolic equation on $(0,1)$. The fact that the dimension of the attractor is equal to the largest dimension of the unstable manifolds has been shown by Rocha in \cite{Ro91}. The earlier works of Jolly \cite{Jo} and Brunovsk\'y \cite{Br90} deal with the particular case $f\equiv f(u)$, but show a stronger result: the attractor can be embedded in a $\mathcal{C}^1$ invariant graph of dimension equal to the largest dimension of the unstable manifolds. Finally, let us mention that it is easy to realize any one-dimensional flow in an invariant manifold of the semi-flow generated by the one-dimensional parabolic equation.
For example, in the simplest case of Neumann boundary conditions as in \eqref{PDE}, one can realize the flow of any vector field $g$ as the restriction of the dynamics of the equation $\dot u=\Delta u +g(u)$ to the subspace of spatially constant functions. \vspace{2mm} {\noindent \bf $\bullet$ $d=2$ and $\Omega=\mathbb{T}^1$, general case}\\ Even if they are richer than in the one-dimensional case, the flows generated by vector fields on $\mathbb{R}^2$ are constrained by the Poincar\'e-Bendixson property (see the original works of Poincar\'e \cite{Poincare} and Bendixson \cite{Bendixson} or any textbook on ordinary differential equations). In particular, this constraint precludes the existence of non-trivial non-wandering points in Kupka-Smale dynamics. Due to the low dimension of the dynamics, the stable and unstable manifolds of hyperbolic equilibria or periodic orbits always intersect transversally if either one of the manifolds corresponds to a periodic orbit or if the invariant manifolds correspond to two equilibrium points with different Morse indices. Moreover, there is no homoclinic trajectory for periodic orbits. Using these particular properties, Peixoto proved in \cite{Peixoto1} that the Morse-Smale property holds for a generic two-dimensional vector field. The first correspondence between two-dimensional flows and the dynamics of the parabolic equation \eqref{PDE} on the circle $\Omega=\mathbb{T}^1$ has been obtained by Fiedler and Mallet-Paret in \cite{FMP}. They proved that the Poincar\'e-Bendixson property holds for \eqref{PDE} on $\mathbb{T}^1$, by using the properties of the zero number (see Theorem \ref{th-Angenent}). The realization of any two-dimensional flow in a two-dimensional invariant manifold of the parabolic equation on the circle has been proved by Sandstede and Fiedler in \cite{Sandstede-Fiedler}.
Very recently, Czaja and Rocha have shown in \cite{Czaja-Rocha} that the stable and unstable manifolds of two hyperbolic periodic orbits always intersect transversally and that there is no homoclinic connection for a periodic orbit. The other automatic transversality results and the proof of the genericity of the Morse-Smale property have been completed by the authors in \cite{JR} and \cite{JR2}. \vspace{2mm} {\noindent \bf $\bullet$ $d=2$ and $\Omega=\mathbb{T}^1$, radial symmetry and $\mathbb{T}^1-$equivariance}\\ When the vector field $g$ satisfies a radial symmetry, the dynamics of the two-dimensional flow generated by \eqref{ODE} become roughly one-dimensional. The closed orbits consist of the origin, circles of equilibrium points and periodic orbits which are circles described at a constant rotation speed. The dynamics are so constrained that, the closed orbits being given, it is possible to describe all the heteroclinic connections. Notice that no homoclinic connection is possible. We also underline that the Morse-Smale property is generic in the class of radially symmetric vector fields. While the two-dimensional radial vector fields are too simple to attract much attention, the corresponding case for the parabolic equation \eqref{PDE} on $\Omega=\mathbb{T}^1$ with homogeneous nonlinearity $f(x,u,\partial_x u)\equiv f(u,\partial_x u)$ has been extensively studied. Since Theorem \ref{th-Angenent} holds for \eqref{PDE} with any one-dimensional domain $\Omega$, it is natural to expect results for \eqref{PDE} on $\Omega=\mathbb{T}^1$ similar to the ones obtained for \eqref{PDE} on $\Omega=(0,1)$. In particular, one may wonder if it is possible to describe the global dynamics of \eqref{PDE} knowing the equilibria and the periodic orbits only. However, this question is still open for general nonlinearities $f(x,u,\partial_x u)$ in the case $\Omega=\mathbb{T}^1$.
Moreover, if one believes in the correspondence stated in this paper, one may conjecture that it is in fact false for a general nonlinearity $f(x,u,\partial_x u)$. Therefore, it was natural to first study the simpler case of homogeneous nonlinearities $f\equiv f(u,\partial_x u)$. Indeed, the dynamics in this case are much simpler; in particular, the closed orbits are either homogeneous equilibrium points $e(x)\equiv e\in\mathbb{R}$, or circles of non-homogeneous equilibrium points, or periodic orbits consisting of rotating waves $u(x,t)=u(x-ct)$ (notice the correspondence with the closed orbits of a radially symmetric two-dimensional flow). This property is a consequence of the zero number property of Theorem \ref{th-Angenent} and has been proved in \cite{Angenent-Fiedler} by Angenent and Fiedler. The works of Matano and Nakamura \cite{Matano-Nakamura} and of Fiedler, Rocha and Wolfrum \cite{Fiedler-Rocha-Wolfrum} show that the unstable and stable manifolds of the equilibria and the periodic orbits always intersect transversally and that no homoclinic orbit can occur. Moreover, in \cite{Fiedler-Rocha-Wolfrum}, the authors give an algorithm for determining the global dynamics of the parabolic equation \eqref{PDE} on $\Omega=\mathbb{T}^1$ with homogeneous nonlinearity $f\equiv f(u,\partial_x u)$. This algorithm uses the knowledge of the equilibria and the periodic orbits only. In \cite{Rocha}, Rocha also characterized all the dynamics which may occur. Due to the automatic transverse intersection of the stable and unstable manifolds and due to the possibility of transforming any circle of equilibrium points into a rotating periodic orbit (see \cite{Fiedler-Rocha-Wolfrum}), one can show that the Morse-Smale property holds for the parabolic equation on $\mathbb{T}^1$ for a generic homogeneous nonlinearity $f(u,\partial_x u)$ (see \cite{JR}).
Finally, the realization of any radially symmetric two-dimensional flow in the dynamics of \eqref{PDE} on $\mathbb{T}^1$ for some $f\equiv f(u,\partial_x u)$ and the fact that the dimension of the attractor is equal to the largest dimension of the unstable manifolds are shown in \cite{HJR}. \vspace{2mm} {\noindent \bf $\bullet$ $d\geq 3$ and dim($\Omega$)$\geq 2$}\\ The genericity of the Kupka-Smale property for vector fields in $\mathbb{R}^d$, $d\geq 3$, has been proved independently by Kupka in \cite{Kupka} and by Smale in \cite{Smale2}. Their proofs have been simplified by Peixoto in \cite{Peixoto2} (see \cite{Abraham-Robin} and \cite{Palis-de-Melo}). The major difference with the lower-dimensional vector fields is that, when $d\geq 3$, \eqref{ODE} may admit transversal homoclinic orbits consisting of the transversal intersection of the stable and unstable manifolds of a hyperbolic periodic orbit. The existence of such an intersection is stable under small perturbations and yields a Smale horseshoe containing an infinite number of periodic orbits and chaotic dynamics equivalent to the dynamics of the shift operator, see \cite{Smale65}. Therefore, the Morse-Smale property cannot be dense in vector fields on $\mathbb{R}^d$ with $d\geq 3$. Even worse, the set of vector fields whose dynamics are structurally stable under small perturbations is not dense (notice that this set contains the vector fields satisfying the Morse-Smale property). Indeed, as shown in \cite{Guckenheimer-Williams}, there exist sets of vector fields $g_\lambda$, depending smoothly on a real parameter $\lambda$, such that, for each $\lambda$, $T_{g_\lambda}(t)$ admits a Lorenz attractor, whose dynamics are qualitatively different for each value of $\lambda$. The possible presence of other chaotic dynamics such as Anosov systems or wild dynamics is also noteworthy, see \cite{Anosov}, \cite{Newhouse} and \cite{Bonatti-Diaz}.
For the interested reader, we refer to \cite{Robinson2} or \cite{Smale67}. The genericity of the Kupka-Smale property for the parabolic equation \eqref{PDE} on any domain $\Omega$ is proved by Brunovsk\'y and both authors in \cite{BJR}. There exist several results concerning the embedding of the finite-dimensional flows into the parabolic equations. Pol\'a\v{c}ik has shown in \cite{Polacik5} that any ordinary differential equation on $\mathbb{R}^d$ can be embedded into the flow of \eqref{PDE} for some $f$ and for some domain $\Omega\subset\mathbb{R}^d$. The constraint that the dimension of $\Omega$ is equal to the dimension of the embedded flow is removed in \cite{Polacik}; however, the result concerns a dense set of flows only. A similar result has been obtained by Dancer and Pol\'a\v{c}ik in \cite{Dancer-Polacik} for homogeneous nonlinearities $f(u,{\nabla} u)$ (see also \cite{Prizzi-Rybakowski}). These realization results imply the possible existence of persistent chaotic dynamics in the flow of the parabolic equation \eqref{PDE} as soon as $\Omega$ has a dimension larger than one: transversal homoclinic orbits, Anosov flows on invariant manifolds of any dimension, Lorenz attractors, etc. \vspace{2mm} {\noindent \bf $\bullet$ Gradient case}\\ When $g$ is a gradient vector field ${\nabla} G$ with $G\in\mathcal{C}^2(\mathbb{R}^d,\mathbb{R})$, then $-G$ is a strict Lyapounov function and \eqref{ODE} is a gradient system. In this case, the Kupka-Smale property is equivalent to the Morse-Smale property. The genericity of the Morse-Smale property for gradient vector fields has been obtained by Smale in \cite{Smale1}.
In the case where the nonlinearity $f \in\mathcal{C}^r(\overline\Omega \times\mathbb{R}, \mathbb{R})$ (that is, $f\equiv f(x,u)$ does not depend on ${\nabla} u$), the parabolic equation \eqref{PDE} admits a strict Lyapounov function given by $E(u)=\int_\Omega\left(\frac 12 |{\nabla} u(x)|^2 - F(x,u(x))\right) dx$, where $F(x,u)=\int_0^u f(x,s)ds$ is a primitive of $f$, and hence generates a gradient system. Brunovsk\'y and Pol\'a\v{c}ik have shown in \cite{Bruno-Pola} that the Morse-Smale property holds for the parabolic equation, generically with respect to $f(x,u)$. It is noteworthy that the Morse-Smale property is no longer generic if one restricts the nonlinearities to the class of homogeneous functions $f\equiv f(u)$ (see \cite{Polacik2}). Pol\'a\v{c}ik has shown in \cite{Polacik3} that any generic gradient vector field of $\mathbb{R}^d$ can be realized in the flow of the parabolic equation \eqref{PDE} on a bounded domain of $\mathbb{R}^2$ with an appropriate nonlinearity $f(x,u)$. The paper \cite{Polacik3} also contains the realization of particular dynamics such as non-transversal intersections of stable and unstable manifolds. \vspace{2mm} {\noindent \bf $\bullet$ Caveat: general ODEs or cooperative systems?}\\ In Table 1, we have given the striking correspondence between the flow generated by Eq. \eqref{ODE} and the semiflow generated by the parabolic equation \eqref{PDE}. In addition, we have pointed out that some classes of cooperative systems are also involved in this correspondence. In fact, the reader should be aware that the dynamics of \eqref{PDE} are much closer to the ones of a cooperative system than to the ones of the general vector field \eqref{ODE}. Indeed, the semiflow $S_f(t)$ generated by the parabolic equation \eqref{PDE} belongs to the class of strongly monotone semiflows, which means that this semiflow has more constraints than the flow $T_g(t)$ generated by a general vector field $g$ (see Section \ref{section-coop}).
That is why it could be more relevant to write Table 1 in terms of cooperative systems only (for example, by replacing the case of the general ODE with $d\geq 3$ by the case of a cooperative system of ODEs in dimension $d\geq 4$). However, we have chosen to mainly write Table 1 in terms of general ODEs for several reasons:\\ - as far as the properties stated in Table 1 are concerned, there is no difference between the dynamics of a general ODE and the ones of a parabolic PDE,\\ - the dynamics of general ODEs are common knowledge, whereas speaking in terms of cooperative systems may not give a good insight into the dynamics of \eqref{PDE},\\ - not all the properties stated in Table 1 are known for the class of cooperative systems (for example, the genericity of the Kupka-Smale property is not yet known for $d\geq 4$). \section{Zero number and unique continuation properties for the scalar parabolic equation}\label{section-ucp} The results presented in Table \ref{table} and in Section \ref{section-comments} strongly rely on properties specific to the parabolic equations. The purpose of this section is to give the reader a first insight into these particular properties and their use. Dynamical systems generated by vector fields are flows on $\mathbb{R}^d$, whereas the phase-space of the parabolic equation is an infinite-dimensional space $X^\alpha$. It is important to be aware of the fact that the parabolic equations generate only a small part of all possible dynamical systems on the Banach space $X^\alpha$. On the one hand, this implies less freedom in perturbing the dynamics and hence in obtaining density results. In particular, whereas one can easily construct perturbations of a vector field $g$ which are localized in the phase space $\mathbb{R}^d$, the perturbations of the nonlinearity $f$ act in a nonlocal way on $X^\alpha$ (many different functions $u$ can have the same values of $u$ and ${\nabla} u$ at a given point $x$).
Therefore, it is important to obtain unique continuation results in order to find values $(x,u,{\nabla} u)$ which are reached only once by a given periodic, heteroclinic or homoclinic orbit. On the other hand, the small class of dynamics generated by the parabolic equations admits special properties. These properties may in particular yield the constraints which make the dynamics similar to the ones of low-dimensional vector fields. \vspace{2mm} The scalar parabolic equation in space dimension one ($\Omega=(0,1)$ or $\mathbb{T}^1$) satisfies a very strong property: the number of zeros of the solutions of the linearized equation is non-increasing in time. This property is often called the Sturm property since its idea goes back to Sturm \cite{Sturm} in 1836. There are different versions of this result, which have been proved by Nickel \cite{Nickel}, Matano \cite{Matano,Matano2}, Angenent and Fiedler \cite{Angenent,Angenent-Fiedler} (see also \cite{GalakHarwin} for a survey). By similar techniques, a geometrical result on braids formed by solutions of the one-dimensional parabolic equation is obtained in \cite{GV}. \begin{theorem}\label{th-Angenent} Let $\Omega=(0,1)$ with Neumann boundary conditions or $\Omega=\mathbb{T}^1$. Let $T>0$, $a\in W^{1,\infty}(\overline\Omega\times [0,T],\mathbb{R})$ and $b\in\mathbb{L}^\infty(\overline\Omega\times [0,T] ,\mathbb{R})$. Let $v:\overline\Omega\times (0,T)\rightarrow \mathbb{R}$ be a bounded non-trivial classical solution of $$\partial_t v=\partial^2_{xx}v+a(x,t)\partial_x v+b(x,t)v~~,~~(x,t)\in\Omega\times (0,T)~.$$ Then, for any $t\in (0,T)$, the number of zeros of the function $x\in\overline\Omega\mapsto v(x,t)$ is finite and non-increasing in time. Moreover, it strictly decreases at $t=t_0$ if and only if $x\mapsto v(x,t_0)$ has a multiple zero. \end{theorem} Theorem \ref{th-Angenent} is the fundamental ingredient of almost all the results given in Table \ref{table} in the cases $\Omega=(0,1)$ and $\Omega=\mathbb{T}^1$.
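The non-increase of the zero number is easy to observe numerically. The following sketch (plain Python; the grid, the time step and the initial datum are illustrative choices, not taken from the references) discretizes the heat equation $\partial_t v=\partial^2_{xx}v$ on $(0,1)$ with Neumann boundary conditions by an explicit scheme and counts the sign changes of the discrete solution:

```python
# Numerical illustration of the Sturm property (Theorem th-Angenent):
# explicit finite differences for v_t = v_xx on (0,1), Neumann BCs.
# All parameters (grid size, time step, initial datum) are illustrative.
import math

N = 51          # grid points x_k = k/(N-1)
r = 0.25        # r = dt/dx^2; small enough so that each step is an averaging
steps = 500

# initial datum with three sign changes: cos(3*pi*x) + 0.3*cos(pi*x)
u = [math.cos(3*math.pi*k/(N-1)) + 0.3*math.cos(math.pi*k/(N-1)) for k in range(N)]

def sign_changes(v, tol=1e-9):
    """Number of sign changes of the sequence v, ignoring (near-)zeros."""
    signs = [1 if x > tol else -1 for x in v if abs(x) > tol]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)

counts = [sign_changes(u)]
for _ in range(steps):
    new = u[:]
    new[0] = u[0] + 2*r*(u[1] - u[0])             # Neumann at x=0
    new[N-1] = u[N-1] + 2*r*(u[N-2] - u[N-1])     # Neumann at x=1
    for k in range(1, N-1):
        new[k] = u[k] + r*(u[k+1] - 2*u[k] + u[k-1])
    u = new
    counts.append(sign_changes(u))

# the zero number never increases, and here it drops from 3 to 1
# because the third Fourier mode decays faster than the first one
assert all(c1 >= c2 for c1, c2 in zip(counts, counts[1:]))
print(counts[0], counts[-1])
```

Observing the drop of the zero number at the instant where the solution develops a multiple zero (here, a triple zero at $x=1/2$) is exactly the mechanism described in the theorem.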
It can be used either as a strong comparison principle or as a strong unique continuation property, as shown in the following examples of applications. General surveys can be found in \cite{Fiedler-Scheel}, \cite{HaleCanada} and \cite{HJR}. In the first application presented here, Theorem \ref{th-Angenent} is used as a strong maximum principle. In some sense, it yields an order on the phase space which is preserved by the flow. This illustrates how Theorem \ref{th-Angenent} may imply constraints similar to the ones of low-dimensional vector fields. The following result was first proved in \cite{Zelenyak} and the proof given here comes from \cite{Matano} (see also \cite{Fiedler-Scheel}). \begin{prop}\label{prop-cv-single} Let $\Omega=(0,1)$, let $u_0\in X^\alpha$ and let $u(x,t)$ be the corresponding solution of the parabolic equation \eqref{PDE} with homogeneous Neumann boundary conditions. The $\omega-$limit set of $u_0$ consists of a single equilibrium point. \end{prop} \begin{demo} We first notice that $v(x,t)=\partial_t u(x,t)$ satisfies the equation $$\partial_t v(x,t)=\partial^2_{xx} v(x,t)+ f'_{u}(x,u(x,t),\partial_x u(x,t)) v(x,t) + f'_{\partial_x u}(x,u(x,t),\partial_x u(x,t)) \partial_x v(x,t)~.$$ Due to the Neumann boundary conditions, we have $\partial_x u(0,t)=\partial^2_{xt} u(0,t)=\partial_x v(0,t)=0$ for all $t >0$. In particular, as soon as $v(0,t)=0$, $v(t)$ has a double zero at $x=0$. Due to Theorem \ref{th-Angenent}, either $v$ is a trivial solution, that is $v\equiv 0$ for all $t$, and $u$ is an equilibrium point, or $v(0,t)$ vanishes at most a finite number of times since $v(t)$ can have a multiple zero only a finite number of times. Assume that $u$ is not an equilibrium; then $u(0,t)$ must be monotone for large times and thus converges to some $a\in\mathbb{R}$. Any trajectory $w$ in the $\omega-$limit set of $u_0$ must hence satisfy $w(0,t)=a$ for all $t$.
Therefore, $\partial_t w(0,t)=0$ for all $t$ and $x\mapsto \partial_t w(x,t)$ has a multiple zero at $x=0$ for all times. Using Theorem \ref{th-Angenent}, we deduce as above that $w$ is an equilibrium point of \eqref{PDE}. But there exists at most one equilibrium $w$ satisfying $w(0)=a$ and the Neumann boundary condition $\partial_x w(0)=0$. Therefore, the $\omega-$limit set of $u_0$ is a single equilibrium point $w$. \end{demo} The second application comes from \cite{JR}. It shows how Theorem \ref{th-Angenent} can be used as a unique continuation property. This kind of property roughly says that, if two solutions coincide too much near a point $(x_0,t_0)$, then they must be equal everywhere. The motivation behind this example of application is the following. We consider a time-periodic solution of \eqref{PDE} on $\Omega=\mathbb{T}^1$. The problem is to find a perturbation of the nonlinearity $f$ which makes this periodic orbit hyperbolic. As emphasized above, such a perturbation is nonlocal in the phase space of \eqref{PDE}. To be able to perform perturbation arguments, it is important to show that one can find a perturbation of $f$ which acts only locally on the periodic orbit. To this end, one proves the following result. \begin{prop}\label{prop-injectivite} Let $p(x,t)$ be a periodic orbit of \eqref{PDE} on $\Omega=\mathbb{T}^1$. Let $T>0$ be its minimal period. Then, the map $$(x,t) \in \mathbb{T}^1 \times [0, T) \mapsto (x, p(x,t), \partial_x p(x,t)) $$ is one-to-one. \end{prop} \begin{demo} Assume that this map is not injective.
Then there exist $x_0\in\mathbb{T}^1$ and $t_0$, $t_1 \in [0, T)$ with $t_0\ne t_1$ such that $$ p(x_0, t_0)=p(x_0, t_1)~\text{ and }~\partial_x p(x_0, t_0)= \partial_x p(x_0, t_1)~.$$ The function $v(x,t)= p(x, t + t_1 -t_0)-p(x,t)$ is a solution of the equation $$ \partial_t v(x,t)= \partial^2_{xx} v(x,t) + a(x,t)v(x,t) + b(x,t) \partial_x v(x,t)~, $$ where $a(x,t)= \int_0^1 f'_u(x,p(x,t) + s(p(x,t+ t_1-t_0)-p(x,t)), \partial_x p(x,t + t_1-t_0)) ds$ and $b(x,t) =\int_{0}^{1} f'_{u_x}(x,p(x,t), \partial_x (p(x,t) +s(p(x,t + t_1-t_0)-p(x,t)))) ds$. Moreover, the function $v(x,t)$ satisfies $v(x_0,t_0)=0$ and $\partial_x v(x_0, t_0)=0$ and does not vanish identically since $|t_1-t_0|<T$. Due to Theorem \ref{th-Angenent}, the number of zeros of $v(t)$ drops strictly at $t=t_0$ and never increases. However, $v(t)$ is a periodic function of period $T$, and thus its number of zeros is periodic. This leads to a contradiction and proves the proposition. \end{demo} \vspace{2mm} In a domain $\Omega$ of dimension $d'\geq 2$, Theorem \ref{th-Angenent} has no known counterpart, as shown in \cite{Fusco-Lunel}. In particular, Proposition \ref{prop-injectivite} no longer holds. However, to be able to construct relevant perturbations of periodic orbits, one needs a result similar to Proposition \ref{prop-injectivite}, even if weaker. The following result can be found in \cite{BJR}. Its proof is based on a generalization of the arguments of \cite{Hardt-Simon} and on unique continuation properties of the parabolic equations. \begin{theorem}\label{th-nodal-sing} Let $p(x,t)$ be a periodic orbit of \eqref{PDE} with minimal period $T>0$. There exists a generic set of points $(x_0,t_0)\in \Omega \times [0,T)$ such that if $t\in [0,T)$ satisfies $p(x_0,t)=p(x_0,t_0)$ and ${\nabla} p(x_0,t)={\nabla} p(x_0,t_0)$, then $t=t_0$.
\end{theorem} \section{Cooperative systems of ODEs}\label{section-coop} We consider a system of differential equations \begin{equation}\label{eq-coop} \dot y(t)=g(y(t))~, \quad y(0)=y_0\in\mathbb{R}^N~, \end{equation} where $g=(g_i)_{i=1\ldots N}$ is a $\mathcal{C}^1$ vector field.\\ Due to the analogy with biological models, the following definitions are natural. We say that \eqref{eq-coop} is a {\it cooperative} (resp. {\it competitive}) system if for any $y\in\mathbb{R}^N$ and $i\neq j$, $\Drond {g_i}{y_j}(y)$ is non-negative (resp. non-positive) and the matrix $(\Drond{g_i}{y_j})(y)$ is irreducible, i.e. it cannot be put into block triangular form by a permutation of the coordinates (the simpler assumption that all the coefficients $\Drond {g_i}{y_j}(y)$ are positive is sometimes made instead of the irreducibility). We say that \eqref{eq-coop} is a {\it tridiagonal} system if $\Drond {g_i}{y_j}=0$ for $|i-j|\geq 2$ and a {\it cyclic tridiagonal} system if the indices $i$ and $j$ are considered modulo $N$, i.e. if, in addition, we allow $\Drond {g_1}{y_N}$ and $\Drond {g_N}{y_1}$ to be non-zero. For the reader interested in cooperative systems, we refer to \cite{Smith2}. In this section, we only consider cooperative systems. However, notice that, by changing $t$ into $-t$ or $y_i$ into $-y_i$, we obtain similar results for competitive systems and for systems with different sign conditions. The dynamics of cooperative systems may be as complicated as the dynamics of general vector fields. Indeed, Smale has shown in \cite{Smale-coop} that any vector field in $\mathbb{R}^{N-1}$ can be realized in an invariant manifold of a cooperative system in $\mathbb{R}^N$. Notice that this realization result implies that any one-dimensional vector field can be embedded in a tridiagonal cooperative system and any two-dimensional vector field can be embedded in a cyclic tridiagonal cooperative system.
This explains why we present the tridiagonal cooperative systems in Table \ref{table} as generalizations of one- and two-dimensional vector fields. However, the dynamics of a cooperative system \eqref{eq-coop} are really different from the ones of the general ODE \eqref{ODE} since a cooperative system generates a strongly monotone flow, that is, a flow which preserves a partial order. It is noteworthy that the semiflow $S_f(t)$ generated by the parabolic equation \eqref{PDE} also belongs to the class of strongly monotone semiflows (it preserves the order of $X^\alpha$ induced by the classical order of $\mathcal{C}^0(\Omega)$). Therefore, the semiflow of \eqref{PDE} is much closer to the one of the cooperative system \eqref{eq-coop}, both admitting more constraints than the flow $T_g(t)$ generated by a general vector field $g$. In \cite{Hirsch0} and \cite{Hirsch3} for example, Hirsch has shown that almost all bounded trajectories of a strongly monotone semiflow are quasiconvergent, that is, their $\omega$-limit sets consist only of equilibria. More precisely, all initial data which have bounded non-quasiconvergent trajectories form a meager subset (that is, the complement of a generic subset) of the phase space. Later, in \cite{Polacik0}, Pol\'a\v{c}ik has proved that the set of all initial data $u_0 \in X^{\alpha}$, which have bounded nonconvergent trajectories in the semiflow of the parabolic equation \eqref{PDE}, is meager in $X^{\alpha}$. Moreover, since the works of Hirsch and Smillie, it is known that the dynamics of cooperative systems which are in addition tridiagonal are very constrained in any dimension $N$. Indeed, in \cite{Hirsch1}, \cite{Hirsch2} and \cite{Smillie}, strong properties of the limit sets of cooperative systems are proved. In particular, any three-dimensional cooperative system satisfies the Poincar\'e-Bendixson property and any bounded trajectory of a tridiagonal cooperative system converges to a single equilibrium point.
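As an elementary illustration (a toy numerical check, not taken from the references), the sign conditions defining cooperativity can be tested directly on the Jacobian matrix. The sketch below does so for the finite-difference system \eqref{eq-lien2} written at the end of this section: the off-diagonal entries $1/h^2 + a_k/h$ and $1/h^2$ are positive once the mesh size $h$ is small enough, while a coarse mesh combined with a negative drift $a$ violates the condition. The coefficients $a$, $b$ and the Neumann-type closure at the endpoints are illustrative choices.

```python
# Toy check: the spatial discretization of v_t = v_xx + a(x) v_x + b(x) v
# is a tridiagonal cooperative system once the mesh size h is small enough.
# Coefficients a, b and the boundary closure are illustrative choices.

def jacobian(N, h, a, b):
    """Tridiagonal matrix of the linear system dy_k/dt = sum_j J[k][j] y_j,
    with J[k][k+1] = 1/h^2 + a_k/h and J[k][k-1] = 1/h^2 at interior nodes."""
    J = [[0.0]*N for _ in range(N)]
    for k in range(1, N-1):
        J[k][k-1] = 1.0/h**2
        J[k][k]   = -2.0/h**2 - a(k*h)/h + b(k*h)
        J[k][k+1] = 1.0/h**2 + a(k*h)/h
    # crude Neumann-type closure at the two endpoints
    J[0][0], J[0][1] = -1.0/h**2 + b(0.0), 1.0/h**2
    J[N-1][N-2], J[N-1][N-1] = 1.0/h**2, -1.0/h**2 + b(1.0)
    return J

def is_cooperative(J):
    """Off-diagonal entries of the Jacobian must be non-negative."""
    N = len(J)
    return all(J[i][j] >= 0.0 for i in range(N) for j in range(N) if i != j)

a = lambda x: -3.0     # a drift of negative sign is the interesting case
b = lambda x: 1.0

print(is_cooperative(jacobian(21, 1.0/20, a, b)))   # fine mesh: cooperative
print(is_cooperative(jacobian(3, 1.0/2, a, b)))     # coarse mesh: 1/h^2 + a/h < 0
```

Since the off-diagonal couplings of a fine-mesh discretization are positive, the resulting tridiagonal system is also irreducible, hence cooperative in the sense defined above.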
Inspired by the articles of Henry and Angenent about the parabolic equation on $(0,1)$, Fusco and Oliva (see \cite{Fusco-Oliva1}) showed a theorem similar to Theorem \ref{th-Angenent} (see \cite{Smith} for a more general statement). \begin{theorem}\label{th-FO} Let $\mathcal{N}$ be the set of vectors $y\in\mathbb{R}^N$ such that, for all $i=1\ldots N$, either $y_i\neq 0$ or $y_i=0$ and $y_{i-1}y_{i+1}<0$ (where $y_0=y_{N+1}=0$). For every $y\in\mathcal{N}$, we set $N(y)$ to be the number of sign changes of the sequence $(y_i)$ when $i$ goes from $1$ to $N$. Let $y(t)\neq 0$ be a solution of \begin{equation}\label{eq-th-FO} \dot y(t)=A(t)y(t)~, \end{equation} where $A\in\mathcal{C}^0(\mathbb{R},\mathcal{M}_N(\mathbb{R}))$ satisfies $A_{ij}(t)>0$ for all $t\in\mathbb{R}$ and all $i\neq j$.\\ Then, the times $t$ where $y(t)\not\in \mathcal{N}$ are isolated and, if $y(t_0)\not\in\mathcal{N}$, then, for every $\varepsilon>0$ small enough, $N(y(t_0+\varepsilon))<N(y(t_0-\varepsilon))$. \end{theorem} In other words, the number of sign changes of the solutions of the linear equation \eqref{eq-th-FO} is non-increasing in time and strictly drops at $t_0$ if and only if $y(t_0)$ has in some sense a multiple zero. The parallel with Theorem \ref{th-Angenent} is of course striking. Using Theorem \ref{th-FO}, Fusco and Oliva have shown that the stable and unstable manifolds of equilibrium points of a tridiagonal cooperative system always intersect transversally. As a consequence, the Morse-Smale property is generic in the class of tridiagonal cooperative systems. Theorem \ref{th-FO} also holds for cyclic tridiagonal cooperative systems, see \cite{Fusco-Oliva2} and \cite{Smith}. Using this fundamental property, Fusco and Oliva have shown in \cite{Fusco-Oliva2} that the stable and unstable manifolds of periodic orbits of cyclic tridiagonal cooperative systems always intersect transversally.
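Theorem \ref{th-FO} can also be observed numerically. In the following sketch (plain Python with a crude explicit Euler scheme; the matrix and the initial datum are illustrative choices), all off-diagonal entries of $A$ are positive and the number of sign changes of $y(t)$ drops from $2$ to $0$:

```python
# Sign changes N(y(t)) for dy/dt = A y with A_ij > 0 for i != j
# (constant A here); crude explicit Euler integration.
# The matrix and the initial datum are illustrative choices.
A = [[0.0 if i == j else 1.0 for j in range(4)] for i in range(4)]
y = [1.0, -1.0, 1.0, 1.0]        # two sign changes initially
dt, steps = 1e-3, 500

def sign_changes(v, tol=1e-9):
    """Number of sign changes of the sequence v, ignoring (near-)zeros."""
    signs = [1 if x > tol else -1 for x in v if abs(x) > tol]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)

counts = [sign_changes(y)]
for _ in range(steps):
    y = [y[i] + dt*sum(A[i][j]*y[j] for j in range(4)) for i in range(4)]
    counts.append(sign_changes(y))

assert all(c1 >= c2 for c1, c2 in zip(counts, counts[1:]))   # non-increasing
print(counts[0], counts[-1])
```

The drop occurs at the instant where the negative component crosses zero while its neighbours are positive, that is, exactly when $y(t)\not\in\mathcal{N}$.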
In addition, Mallet-Paret and Smith have shown in \cite{MP-Smith} that cyclic tridiagonal cooperative systems satisfy the Poincar\'e-Bendixson property. Notice that, following \cite{JR} and \cite{JR2}, one should be able to prove the genericity of the Morse-Smale property for cyclic tridiagonal cooperative systems. This has been proved very recently by Percie du Sert (see \cite{Percie}).\\[3mm] Considering all these results, it is not surprising that there exists a parallel between tridiagonal cooperative systems and the parabolic equation on $(0,1)$. Indeed, consider a solution $v$ of the linear one-dimensional parabolic equation \begin{equation}\label{eq-lien} \dot v(x,t)=\partial^2_{xx} v(x,t) + a(x,t)\partial_x v(x,t) + b(x,t)v(x,t)~~~ (x,t)\in (0,1)\times\mathbb{R}_+~. \end{equation} We discretize the segment $(0,1)$ by a sequence of points $x_k=(k-1)/(N-1)$ with $k=1\ldots N$. The natural approximation of $v$ is given by $y_k\approx v(x_k)$, solution of \begin{equation}\label{eq-lien2} \dot y_k(t)=\frac {y_{k+1}(t)-2y_k(t)+y_{k-1}(t)}{h^2} + a_k(t)\frac{y_{k+1}(t)-y_k(t)}h + b_k(t)y_k(t)~, \end{equation} where $a_k(t)=a(x_k,t)$, $b_k(t)=b(x_k,t)$ and $h=1/(N-1)$. If $h$ is small enough, \eqref{eq-lien2} is a tridiagonal cooperative system. The relation between Theorems \ref{th-Angenent} and \ref{th-FO} is obvious in this framework. \section{Beyond Kupka-Smale and other open problems}\label{section-open} One of the main goals of the study of dynamical systems is to understand the behaviour of a generic dynamical system. The most recent results concerning the parabolic equations are related to the genericity of the Kupka-Smale property. However, such a property cannot give a good insight into the complex and chaotic dynamics that may be generated by homoclinic connections. For this reason, the study of finite-dimensional flows has been pursued beyond the Kupka-Smale property and is still in progress. 
The corresponding results should serve as a guideline for the study of the flow generated by the parabolic equation \eqref{PDE}. As for vector fields, going beyond the Kupka-Smale genericity property for scalar parabolic equations will be an important and difficult problem. For vector fields, one of the main steps for this purpose is Pugh's closing lemma: if $p$ is a non-wandering point of the dynamical system $T_g(t)$ generated by \eqref{ODE}, then there exists a $\mathcal{C}^1-$perturbation $\tilde g$ of $g$ such that $p$ is a periodic point of $T_{\tilde g}(t)$ (the case of a $\mathcal{C}^r-$perturbation with $r\geq2$ is still open). The proof of Pugh in \cite{Pugh} concerns discrete dynamical systems. It has been adapted to the case of flows by Pugh and Robinson in \cite{Pugh-Robinson} (see also \cite{Robinson3} for an introduction to the proof). A direct consequence of Pugh's closing lemma is the general density theorem: for a generic finite-dimensional flow, the non-wandering set is the closure of the set of periodic points (see \cite{Pugh2} and \cite{Robinson2}). Other connecting lemmas have been proved by Hayashi \cite{Hayashi} and Bonatti and Crovisier \cite{Bonatti-Crovisier}. They enable a better understanding of generic dynamics. For example, the class of finite-dimensional dynamical systems which either satisfy the Morse-Smale property or admit a transversal homoclinic connection is generic (see \cite{Pujals-Sambarino}, \cite{Bonatti-Gan-Wen} and \cite{Crovisier} for discrete dynamical systems in dimensions $d=2$, $d=3$ and $d\geq 4$ respectively, and see \cite{Arroyo-Rodriguez} for three-dimensional flows). Obtaining similar results for the flow of the parabolic equation should be a very interesting and difficult challenge. Other interesting open problems concern the realization of finite-dimensional dynamics in the semiflow of parabolic equations. 
Indeed, we only know that one can realize the dynamics of a dense set of general ODEs in the flow of a parabolic equation \eqref{PDE} on a two-dimensional domain. One may wonder if it is possible to realize the dynamics of all ODEs. Since the parallel between parabolic equations and cooperative systems is stronger, the following strong realization conjecture may be more plausible: \emph{any} flow of a cooperative system of ODEs can be realized in an invariant manifold of the flow of a parabolic equation \eqref{PDE} on a two-dimensional domain. Finally, the genericity of the Morse- and Kupka-Smale properties is also an interesting problem for other classes of partial differential equations. The genericity of the Morse-Smale property is known for the wave equations $\ddot u+\gamma \dot u =\Delta u+f(x,u)$ with constant damping $\gamma>0$ (see \cite{Bruno-Raugel}) and with variable damping $\gamma(x)\geq 0$ in space dimension one (see \cite{Joly}). We recall that, in both cases, the associated dynamical system is gradient. Nothing is known for other classes of PDEs, in particular for the equations of fluid dynamics and for systems of parabolic equations $\dot U=\Delta U+f(x,U)$, with $U(x,t)\in\mathbb{R}^N$. In all these cases, the main problem consists in understanding how the perturbations act on the phase space of the PDE. Either one proves unique continuation results similar to Theorem \ref{th-nodal-sing} in order to be able to use local perturbations of the flow (as in \cite{JR}, \cite{JR2} and \cite{BJR}), or one uses particular non-local perturbations in a very careful way (as in \cite{Bruno-Pola}, \cite{Bruno-Raugel} and \cite{Joly}). \section*{Glossary} In this section, $S(t)$ denotes a general continuous dynamical system on a Banach space $X$. An orbit of $S(t)$ is denoted by $x(t)=S(t)x_0$ with $t\in I$, where $I=[0,+\infty)$, $I=(-\infty,0]$ or $I=(-\infty,+\infty)$ in the case of a positive, negative or global trajectory respectively. 
{\noindent\bf Compact global attractor:} if it exists, the compact global attractor $\mathcal{A}$ of $S(t)$ is a compact invariant set which attracts all the bounded sets of $X$. Notice that $\mathcal{A}$ is then the set of all the bounded global trajectories. See \cite{Hal88}. {\noindent\bf $\alpha-$ and $\omega-$limit sets:} let $x_0\in X$. The $\alpha-$limit set $\alpha(x_0)$ and the $\omega-$limit set $\omega(x_0)$ of $x_0$ are the sets of accumulation points of the negative and positive orbits coming from $x_0$ respectively. More precisely, \begin{align*} \alpha(x_0)&=\{ x\in X~/~\exists (t_n)_{n\in\mathbb{N}},~t_n\xrightarrow[n\rightarrow \infty]{}-\infty~\text{ and a negative trajectory } x(t)\\ &~~~~\hbox{such that }x(0)=x_0~\hbox{ and }~x(t_n) \xrightarrow[n\rightarrow \infty]{} x \} \\ \omega(x_0)&=\{ x\in X~/~\exists (t_n)_{n\in\mathbb{N}},~t_n\xrightarrow[n\rightarrow \infty]{}+\infty~\text{ such that }S(t_n)x_0\xrightarrow[n\rightarrow \infty]{} x \} \end{align*} The limit sets $\alpha(x_0)$ and $\omega(x_0)$ are non-empty connected compact sets. {\noindent\bf Homoclinic or heteroclinic orbit:} let $x(t)=S(t)x_0$ be a global trajectory of $S(t)$. Assume that the $\alpha-$ and $\omega-$limit sets of $x_0$ each consist of exactly one orbit, denoted $x_-(t)$ and $x_+(t)$ respectively, this orbit being either an equilibrium point or a periodic orbit. The trajectory $x(t)$ is said to be a homoclinic orbit if $x_-(t)=x_+(t)$ and a heteroclinic orbit if $x_-(t)\neq x_+(t)$. {\noindent\bf Backward uniqueness property:} $S(t)$ satisfies the backward uniqueness property if for any time $t_0>0$ and any trajectories $x_1(t)$ and $x_2(t)$, $x_1(t_0)=x_2(t_0)$ implies $x_1(t)=x_2(t)$ for all $t\in [0,t_0]$. Notice that this does not mean that $S(t)$ admits negative trajectories. 
{\noindent\bf Hyperbolic equilibrium points or periodic orbits:} an equilibrium point $e$ of $S(t)$ is hyperbolic if the linearized operator $x\mapsto D_eS(1)x$ has no spectrum on the unit circle. Let $p(t)$ be a periodic solution of $S(t)$ with minimal period $T$. For each $x\in X$, we denote by $t\mapsto\Pi(t,0)x$ the corresponding trajectory of the linearization of $S(t)$ along the periodic solution $p(t)$. Then, $p(t)$ is said to be hyperbolic if the linear map $x\mapsto \Pi(T,0)x$ has no spectrum on the unit circle except the eigenvalue $1$, which is simple. Note that then, for any integer $k \ne 0$, the linear map $x\mapsto \Pi(kT,0)x$ has no spectrum on the unit circle except the eigenvalue $1$ (which is simple). {\noindent\bf Stable and unstable manifolds:} let $e$ be a hyperbolic equilibrium point of $S(t)$. There exists a neighbourhood $\mathcal{N}$ of $e$ such that the set \begin{equation*} \begin{split} W^u_{loc}(e)=\{x_0\in X~,~\exists\hbox{ a negative trajectory }x(t)\hbox{ with }x(0&)=x_0 \cr &\hbox{ and, } \forall t\leq 0,~x(t)\in\mathcal{N}\} \end{split} \end{equation*} is a submanifold of $X$, in which all negative trajectories converge to $e$ when $t$ goes to $-\infty$. The manifold $W^u_{loc}(e)$ is called the local unstable manifold of $e$. Pushing $W^u_{loc}(e)$ by the flow $S(t)$, one can define the (global) unstable set $W^u(e)=\cup_{t\geq 0} S(t)W^u_{loc}(e)$, which consists of all the negative trajectories converging to $e$ when $t$ goes to $-\infty$. This unstable set $W^u(e)$ is an immersed submanifold under suitable properties. For instance, backward uniqueness properties are needed to extend the manifold structure.\\ In the same way, one defines the local stable manifold \begin{align*} W^s_{loc}(e)&=\{x_0\in X~,~\forall t\geq 0,~S(t)x_0\in\mathcal{N}\}\\ &=\{x_0\in X~,~\forall t\geq 0,~S(t)x_0\in\mathcal{N}\text{ and }S(t)x_0\xrightarrow[t\rightarrow +\infty]{}e\}~. 
\end{align*} General partial differential equations (and parabolic equations in particular) do not admit negative trajectories. Therefore, it is less easy to extend the local stable manifold to a global stable manifold. However, one can define the stable set $W^s(e)$ of $e$ as follows $$ W^s(e) =\{x_0\in X~, ~S(t)x_0\xrightarrow[t\rightarrow +\infty]{}e\}~. $$ Under suitable additional properties (which are satisfied by the parabolic equation \eqref{PDE}), one can show that $W^s(e)$ is also an immersed submanifold. For instance, backward uniqueness properties of the adjoint dynamical system $S^*(t)$ on $X^*$ and finite-codimensionality of $W^s_{loc}(e)$ are needed (see \cite{Henry-book} for more details).\\ If $p(t)$ is a hyperbolic periodic orbit, one defines its unstable and local stable manifolds in the same way. See for example \cite{Palis-de-Melo} for more details. {\noindent\bf Non-wandering set:} a point $x_0\in X$ is non-wandering if for any neighbourhood $\mathcal{N}\ni x_0$ and any time $t_0>0$, there exists $t\geq t_0$ such that $S(t)\mathcal{N}\cap \mathcal{N}\neq\emptyset$. {\noindent\bf The Kupka-Smale and Morse-Smale properties:} $S(t)$ satisfies the Kupka-Smale property if all its equilibrium points and periodic orbits are hyperbolic and if their stable and unstable manifolds intersect transversally. It satisfies the Morse-Smale property if in addition its non-wandering set consists only of a finite number of equilibrium points and periodic orbits. We refer to \cite{Palis-de-Melo} for more details on these notions. {\noindent\bf Gradient dynamical systems:} $S(t)$ is gradient if it admits a Lyapunov functional, that is, a function $\Phi\in\mathcal{C}^0(X,\mathbb{R})$ such that, for all $x_0\in X$, $t\mapsto \Phi(S(t)x_0)$ is non-increasing and is constant if and only if $x_0$ is an equilibrium point. We recall that a gradient dynamical system does not admit periodic or homoclinic orbits. 
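For finite-dimensional flows, the hyperbolicity condition recalled above is easy to test in practice: for $\dot x = g(x)$, one has $D_eS(1)=\exp(Dg(e))$, so $e$ is hyperbolic if and only if no eigenvalue of the Jacobian $Dg(e)$ is purely imaginary. A minimal sketch (the planar vector field $g(x,y)=(y,\,-x-y)$ is a hypothetical example):

```python
import numpy as np

# Jacobian at the equilibrium (0, 0) of the hypothetical field
# g(x, y) = (y, -x - y).
Dg = np.array([[0.0, 1.0],
               [-1.0, -1.0]])

# e is hyperbolic iff no eigenvalue of Dg(e) lies on the imaginary axis,
# i.e. iff D_e S(1) = exp(Dg(e)) has no spectrum on the unit circle.
eigs = np.linalg.eigvals(Dg)
hyperbolic = bool(np.all(np.abs(eigs.real) > 1e-12))
assert hyperbolic  # eigenvalues (-1 ± i*sqrt(3))/2 have real part -1/2
```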
{\noindent\bf Cooperative system of ODEs:} see Section \ref{section-coop}. {\noindent\bf Generic set and Baire space:} a generic subset of a topological space $Y$ is a set which contains a countable intersection of dense open subsets of $Y$. A property is generic in $Y$ if it is satisfied for a generic set of $Y$. The space $Y$ is called a Baire space if any generic set is dense in $Y$. In particular, a complete metric space is a Baire space. {\noindent\bf Whitney topology:} let $k\geq 0$ and let $M$ be a Banach manifold. The Whitney topology on $\mathcal{C}^k( M,\mathbb{R})$ is the topology generated by the neighbourhoods $$ \{g\in\mathcal{C}^k( M,\mathbb{R})~,~|D^i f(x)-D^i g(x)|\leq\delta(x),~\forall i\in\{0,1,\ldots,k\},~ \forall x\in M \}~,$$ where $f$ is any function in $\mathcal{C}^k( M,\mathbb{R})$ and $\delta$ is any positive function in $\mathcal{C}^k( M,(0,+\infty))$. Notice that $\mathcal{C}^k( M,\mathbb{R})$ endowed with the Whitney topology is a Baire space even if it is not a metric space when $M$ is not compact. We refer for instance to \cite{GG}. {\noindent\bf The fractional power space $X^\alpha$:} let $A$ be a positive self-adjoint operator with compact inverse on $\mathbb{L}^2(\Omega)$. Let $(\lambda_n)$ be the sequence of its eigenvalues, which are positive, and let $(\varphi_n)$ be the corresponding sequence of eigenfunctions, which is an orthonormal basis of $\mathbb{L}^2(\Omega)$. For each $\alpha\in\mathbb{R}$, we define the fractional power $A^\alpha$ of $A$ by $A^\alpha(\sum_n c_n \varphi_n)=\sum_n c_n \lambda_n^\alpha \varphi_n$. In particular, $A^0=Id$ and $A^1=A$. The space $X^\alpha$ is the domain of $A^\alpha$, that is $$X^\alpha=\{\varphi\in\mathbb{L}^2(\Omega)~,~\varphi=\sum_n c_n\varphi_n \hbox{ such that }(c_n\lambda_n^\alpha)\in\ell^2(\mathbb{N}) \}~.$$ It is possible to define fractional powers of more general operators, called sectorial operators, see \cite{Henry-book}. 
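In finite dimension, the definition of the fractional power can be mimicked directly: replace $A$ by a symmetric positive definite matrix and the eigenfunctions $\varphi_n$ by its eigenvectors. The sketch below (our own illustration; the matrix is a discrete Dirichlet Laplacian) computes $A^\alpha$ through the eigendecomposition and checks the identities $A^0=Id$, $A^1=A$ and $A^{1/2}A^{1/2}=A$:

```python
import numpy as np

def frac_power(A, alpha):
    # A^alpha for a symmetric positive definite matrix, defined on the
    # eigenbasis by A^alpha v_n = lambda_n^alpha v_n.
    lam, V = np.linalg.eigh(A)
    assert np.all(lam > 0), "A must be positive definite"
    return V @ np.diag(lam ** alpha) @ V.T

# Discrete Dirichlet Laplacian on (0,1): positive and self-adjoint.
n = 8
h = 1.0 / (n + 1)
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

assert np.allclose(frac_power(A, 0.0), np.eye(n))   # A^0 = Id
assert np.allclose(frac_power(A, 1.0), A)           # A^1 = A
half = frac_power(A, 0.5)
assert np.allclose(half @ half, A)                  # A^{1/2} A^{1/2} = A
```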
{\noindent\bf The Sobolev space $W^{s,p}(\Omega)$:} if $s$ is a positive integer, $W^{s,p}(\Omega)$ is the space of (classes of) functions $f\in\mathbb{L}^p(\Omega)$, which are $s$ times differentiable in the sense of distributions and whose derivatives up to order $s$ belong to $\mathbb{L}^p(\Omega)$. It is possible to extend this notion to positive numbers $s$ which are not integers by using interpolation theory. {\noindent\bf Unique continuation properties:} let us consider a partial differential equation on $\Omega$ and let $u(x,t)$ be any solution of it. A unique continuation property for this PDE is a result stating that if $u(x,t)$ vanishes on a subset of $\Omega\times\mathbb{R}_+$ which is too large in some sense, then $u(x,t)$ must vanish for all $(x,t)$ in $\Omega\times\mathbb{R}_+$. \section*{Acknowledgments} The authors are very grateful to Sylvain Crovisier and Lucien Guillou for fruitful discussions.\\ \addcontentsline{toc}{chapter}{Bibliography}
\section{Electronic Submission} \label{submission} Submission to ICML 2023 will be entirely electronic, via a web site (not email). Information about the submission process and \LaTeX\ templates are available on the conference web site at: \begin{center} \textbf{\texttt{http://icml.cc/}} \end{center} The guidelines below will be enforced for initial submissions and camera-ready copies. Here is a brief summary: \begin{itemize} \item Submissions must be in PDF\@. \item \textbf{New to this year}: If your paper has appendices, submit the appendix together with the main body and the references \textbf{as a single file}. Reviewers will not look for appendices as a separate PDF file. So if you submit such an extra file, reviewers will very likely miss it. \item Page limit: The main body of the paper has to be fitted to 8 pages, excluding references and appendices; the space for the latter two is not limited. For the final version of the paper, authors can add one extra page to the main body. \item \textbf{Do not include author information or acknowledgements} in your initial submission. \item Your paper should be in \textbf{10 point Times font}. \item Make sure your PDF file only uses Type-1 fonts. \item Place figure captions \emph{under} the figure (and omit titles from inside the graphic file itself). Place table captions \emph{over} the table. \item References must include page numbers whenever possible and be as complete as possible. Place multiple citations in chronological order. \item Do not alter the style template; in particular, do not compress the paper format by reducing the vertical spaces. \item Keep your abstract brief and self-contained, one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase. The title should have content words capitalized. \end{itemize} \subsection{Submitting Papers} \textbf{Paper Deadline:} The deadline for paper submission that is advertised on the conference website is strict. 
If your full, anonymized, submission does not reach us on time, it will not be considered for publication. \textbf{Anonymous Submission:} ICML uses double-blind review: no identifying author information may appear on the title page or in the paper itself. \cref{author info} gives further details. \textbf{Simultaneous Submission:} ICML will not accept any paper which, at the time of submission, is under review for another conference or has already been published. This policy also applies to papers that overlap substantially in technical content with conference papers under review or previously published. ICML submissions must not be submitted to other conferences and journals during ICML's review period. Informal publications, such as technical reports or papers in workshop proceedings which do not appear in print, do not fall under these restrictions. \medskip Authors must provide their manuscripts in \textbf{PDF} format. Furthermore, please make sure that files contain only embedded Type-1 fonts (e.g.,~using the program \texttt{pdffonts} in linux or using File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3) might come from graphics files imported into the document. Authors using \textbf{Word} must convert their document to PDF\@. Most of the latest versions of Word have the facility to do this automatically. Submissions will not be accepted in Word format or any format other than PDF\@. Really. We're not joking. Don't send Word. Those who use \textbf{\LaTeX} should avoid including Type-3 fonts. Those using \texttt{latex} and \texttt{dvips} may need the following two commands: {\footnotesize \begin{verbatim} dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi ps2pdf paper.ps \end{verbatim}} It is a zero following the ``-G'', which tells dvips to use the config.pdf file. Newer \TeX\ distributions don't always need this option. Using \texttt{pdflatex} rather than \texttt{latex}, often gives better results. 
This program avoids the Type-3 font problem, and supports more advanced features in the \texttt{microtype} package. \textbf{Graphics files} should be a reasonable size, and included from an appropriate format. Use vector formats (.eps/.pdf) for plots, lossless bitmap formats (.png) for raster graphics with sharp lines, and jpeg for photo-like images. The style file uses the \texttt{hyperref} package to make clickable links in documents. If this causes problems for you, add \texttt{nohyperref} as one of the options to the \texttt{icml2023} usepackage statement. \subsection{Submitting Final Camera-Ready Copy} The final versions of papers accepted for publication should follow the same format and naming convention as initial submissions, except that author information (names and affiliations) should be given. See \cref{final author} for formatting instructions. The footnote, ``Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.'' must be modified to ``\textit{Proceedings of the $\mathit{40}^{th}$ International Conference on Machine Learning}, Honolulu, Hawaii, USA, PMLR 202, 2023. Copyright 2023 by the author(s).'' For those using the \textbf{\LaTeX} style file, this change (and others) is handled automatically by simply changing $\mathtt{\backslash usepackage\{icml2023\}}$ to $$\mathtt{\backslash usepackage[accepted]\{icml2023\}}$$ Authors using \textbf{Word} must edit the footnote on the first page of the document themselves. Camera-ready copies should have the title of the paper as running head on each page except the first one. The running title consists of a single line centered above a horizontal rule which is $1$~point thick. The running head should be centered, bold and in $9$~point type. The rule should be $10$~points above the main text. 
For those using the \textbf{\LaTeX} style file, the original title is automatically set as running head using the \texttt{fancyhdr} package which is included in the ICML 2023 style file package. In case that the original title exceeds the size restrictions, a shorter form can be supplied by using \verb|\icmltitlerunning{...}| just before $\mathtt{\backslash begin\{document\}}$. Authors using \textbf{Word} must edit the header of the document themselves. \section{Format of the Paper} All submissions must follow the specified format. \subsection{Dimensions} The text of the paper should be formatted in two columns, with an overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches between the columns. The left margin should be 0.75~inches and the top margin 1.0~inch (2.54~cm). The right and bottom margins will depend on whether you print on US letter or A4 paper, but all final versions must be produced for US letter size. Do not write anything on the margins. The paper body should be set in 10~point type with a vertical spacing of 11~points. Please use Times typeface throughout the text. \subsection{Title} The paper title should be set in 14~point bold type and centered between two horizontal rules that are 1~point thick, with 1.0~inch between the top rule and the top edge of the page. Capitalize the first letter of content words and put the rest of the title in lower case. \subsection{Author Information for Submission} \label{author info} ICML uses double-blind review, so author information must not appear. If you are using \LaTeX\/ and the \texttt{icml2023.sty} file, use \verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information will not be printed unless \texttt{accepted} is passed as an argument to the style file. Submissions that include the author information will not be reviewed. 
\subsubsection{Self-Citations} If you are citing published papers for which you are an author, refer to yourself in the third person. In particular, do not use phrases that reveal your identity (e.g., ``in previous work \cite{langley00}, we have shown \ldots''). Do not anonymize citations in the reference section. The only exception is manuscripts that are not yet published (e.g., under submission). If you choose to refer to such unpublished manuscripts \cite{anonymous}, anonymized copies have to be submitted as Supplementary Material via CMT\@. However, keep in mind that an ICML paper should be self contained and should contain sufficient detail for the reviewers to evaluate the work. In particular, reviewers are not required to look at the Supplementary Material when writing their review (they are not required to look at more than the first $8$ pages of the submitted document). \subsubsection{Camera-Ready Author Information} \label{final author} If a paper is accepted, a final camera-ready copy must be prepared. For camera-ready papers, author information should start 0.3~inches below the bottom rule surrounding the title. The authors' names should appear in 10~point bold type, in a row, separated by white space, and centered. Author names should not be broken across lines. Unbolded superscripted numbers, starting from 1, should be used to refer to affiliations. Affiliations should be numbered in the order of appearance. A single footnote block of text should be used to list all the affiliations. (Academic affiliations should list Department, University, City, State/Region, Country. Similarly for industrial affiliations.) Each distinct affiliation should be listed once. If an author has multiple affiliations, multiple superscripts should be placed after the name, separated by thin spaces. 
If the authors would like to highlight equal contribution by multiple first authors, those authors should have an asterisk placed after their name in superscript, and the term ``\textsuperscript{*}Equal contribution" should be placed in the footnote block ahead of the list of affiliations. A list of corresponding authors and their emails (in the format Full Name \textless{}[email protected]\textgreater{}) can follow the list of affiliations. Ideally only one or two names should be listed. A sample file with author names is included in the ICML2023 style file package. Turn on the \texttt{[accepted]} option to the stylefile to see the names rendered. All of the guidelines above are implemented by the \LaTeX\ style file. \subsection{Abstract} The paper abstract should begin in the left column, 0.4~inches below the final address. The heading `Abstract' should be centered, bold, and in 11~point type. The abstract body should use 10~point type, with a vertical spacing of 11~points, and should be indented 0.25~inches more than normal on left-hand and right-hand margins. Insert 0.4~inches of blank space after the body. Keep your abstract brief and self-contained, limiting it to one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase. \subsection{Partitioning the Text} You should organize your paper into sections and paragraphs to help readers place a structure on the material and understand its contributions. \subsubsection{Sections and Subsections} Section headings should be numbered, flush left, and set in 11~pt bold type with the content words capitalized. Leave 0.25~inches of space before the heading and 0.15~inches after the heading. Similarly, subsection headings should be numbered, flush left, and set in 10~pt bold type with the content words capitalized. Leave 0.2~inches of space before the heading and 0.13~inches afterward. 
Finally, subsubsection headings should be numbered, flush left, and set in 10~pt small caps with the content words capitalized. Leave 0.18~inches of space before the heading and 0.1~inches after the heading. Please use no more than three levels of headings. \subsubsection{Paragraphs and Footnotes} Within each section or subsection, you should further partition the paper into paragraphs. Do not indent the first line of a given paragraph, but insert a blank line between succeeding ones. You can use footnotes\footnote{Footnotes should be complete sentences.} to provide readers with additional information about a topic without interrupting the flow of the paper. Indicate footnotes with a number in the text where the point is most relevant. Place the footnote in 9~point type at the bottom of the column in which it appears. Precede the first footnote in a column with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can appear in each column, in the same order as they appear in the text, but spread them across columns and pages if possible.} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{icml_numpapers}} \caption{Historical locations and number of accepted papers for International Machine Learning Conferences (ICML 1993 -- ICML 2008) and International Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was produced, the number of accepted papers for ICML 2008 was unknown and instead estimated.} \label{icml-historical} \end{center} \vskip -0.2in \end{figure} \subsection{Figures} You may want to include figures in the paper to illustrate your approach and results. Such artwork should be centered, legible, and separated from the text. Lines should be dark and at least 0.5~points thick for purposes of reproduction, and text should not appear on a gray background. Label all distinct components of each figure. 
If the figure takes the form of a graph, then give a name for each axis and include a legend that briefly describes each curve. Do not include a title inside the figure; instead, the caption should serve this function. Number figures sequentially, placing the figure number and caption \emph{after} the graphics, with at least 0.1~inches of space before the caption and 0.1~inches after it, as in \cref{icml-historical}. The figure caption should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. You may float figures to the top or bottom of a column, and you may set wide figures across both columns (use the environment \texttt{figure*} in \LaTeX). Always place two-column figures at the top or bottom of the page. \subsection{Algorithms} If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic'' environments to format pseudocode. These require the corresponding stylefiles, algorithm.sty and algorithmic.sty, which are supplied with this package. \cref{alg:example} shows an example. \begin{algorithm}[tb] \caption{Bubble Sort} \label{alg:example} \begin{algorithmic} \STATE {\bfseries Input:} data $x_i$, size $m$ \REPEAT \STATE Initialize $noChange = true$. \FOR{$i=1$ {\bfseries to} $m-1$} \IF{$x_i > x_{i+1}$} \STATE Swap $x_i$ and $x_{i+1}$ \STATE $noChange = false$ \ENDIF \ENDFOR \UNTIL{$noChange$ is $true$} \end{algorithmic} \end{algorithm} \subsection{Tables} You may also want to include tables that summarize material. Like figures, these should be centered, legible, and numbered consecutively. However, place the title \emph{above} the table with at least 0.1~inches of space before the title and the same after it, as in \cref{sample-table}. The table title should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. 
\begin{table}[t] \caption{Classification accuracies for naive Bayes and flexible Bayes on various data sets.} \label{sample-table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccc} \toprule Data set & Naive & Flexible & Better? \\ \midrule Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\ Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\ Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\ Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\ Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\ Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\ Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\ Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} Tables contain textual material, whereas figures contain graphical material. Specify the contents of each row and column in the table's topmost row. Again, you may float tables to a column's top or bottom, and set wide tables across both columns. Place two-column tables at the top or bottom of the page. \subsection{Theorems and such} The preferred way is to number definitions, propositions, lemmas, etc. consecutively, within sections, as shown below. \begin{definition} \label{def:inj} A function $f:X \to Y$ is injective if for any $x,y\in X$ different, $f(x)\ne f(y)$. \end{definition} Using \cref{def:inj} we immediately get the following result: \begin{proposition} If $f$ is injective mapping a set $X$ to another set $Y$, the cardinality of $Y$ is at least as large as that of $X$. \end{proposition} \begin{proof} Left as an exercise to the reader. \end{proof} \cref{lem:usefullemma} stated next will prove to be useful. \begin{lemma} \label{lem:usefullemma} For any injective functions $f:X \to Y$ and $g:Y\to Z$, the composition $g \circ f$ is injective. \end{lemma} \begin{theorem} \label{thm:bigtheorem} If $f:X\to Y$ is bijective, the cardinality of $X$ and $Y$ are the same. 
\end{theorem} An easy corollary of \cref{thm:bigtheorem} is the following: \begin{corollary} If $f:X\to Y$ is bijective, the cardinality of $X$ is at least as large as that of $Y$. \end{corollary} \begin{assumption} The set $X$ is finite. \label{ass:xfinite} \end{assumption} \begin{remark} According to some, it is only the finite case (cf. \cref{ass:xfinite}) that is interesting. \end{remark} \subsection{Citations and References} Please use APA reference format regardless of your formatter or word processor. If you rely on the \LaTeX\/ bibliographic facility, use \texttt{natbib.sty} and \texttt{icml2023.bst} included in the style-file package to obtain this format. Citations within the text should include the authors' last names and year. If the authors' names are included in the sentence, place only the year in parentheses, for example when referencing Arthur Samuel's pioneering work \yrcite{Samuel59}. Otherwise place the entire reference in parentheses with the authors and year separated by a comma \cite{Samuel59}. List multiple references separated by semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.' construct only for citations with three or more authors or after listing all authors to a publication in an earlier reference \cite{MachineLearningI}. Authors should cite their own work in the third person in the initial version of their paper submitted for blind review. Please refer to \cref{author info} for detailed instructions on how to cite your own papers. Use an unnumbered first-level section heading for the references, and use a hanging indent style, with the first line of the reference flush against the left margin and subsequent lines indented by 10 points. 
The references at the end of this document give examples for journal articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports \cite{mitchell80}, and dissertations \cite{kearns89}. Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. Make sure that each reference includes all relevant information (e.g., page numbers). Please put some effort into making references complete, presentable, and consistent, e.g. use the actual current name of authors. If using bibtex, please protect capital letters of names and abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz in your .bib file. \section*{Accessibility} Authors are kindly asked to make their submissions as accessible as possible for everyone including people with disabilities and sensory or neurological differences. Tips of how to achieve this and what to pay attention to will be provided on the conference website \url{http://icml.cc/}. \section*{Software and Data} If a paper is accepted, we strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, \textbf{do not} include URLs that reveal your institution or identity in your submission for review. Instead, provide an anonymous URL or upload the material as ``Supplementary Material'' into the CMT reviewing system. Note that reviewers are not required to look at this material when writing their review. \section*{Acknowledgements} \textbf{Do not} include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. 
\section{Introduction} \label{submission} Bayesian optimization (BO)~\cite{garnett_bayesoptbook_2023} is a powerful machine learning technique designed to efficiently find the optimum of an expensive-to-evaluate black-box function. In its standard form, BO ignores the causal relations that can exist among controllable variables and purely observational ones, which can be captured by a causal graph. However, results from \cite{lee-2018} suggest that ignoring the causal graph, when it is available, and considering only a fixed set of controlled variables may be sub-optimal. A naive approach that tests all subsets of controllable variables by brute force rapidly becomes infeasible, as the number of subsets grows exponentially with the number of variables. The line of work on causal Bayesian optimisation assumes that the causal graph is available and develops specific tools to exploit the graph and achieve lower regret than standard BO. Causal Bayesian Optimization (CaBO) attempts to reduce the search space by assuming the availability of the causal graph of the environment and exploiting results from \cite{lee-2019}. CaBO then focuses on identifying the subset of controllable variables, and the optimal values for those variables, to optimize the expectation of the target variable. Like CaBO, we assume the availability of the causal graph; otherwise, the vast literature on causal discovery describes methods to infer it \cite{Spirtes2000, hoyer-2008, hyttinen-2013}. We demonstrate that utilizing information from non-interventional variables as context leads to improved optimal values for interventional variables.
This setup of contextual policy (distributions mapping from contexts to controlled variables) optimisation is ubiquitous, including applications in engineering \cite{ANDERSON19951,partanen-1993,Mills1991APS}, economics \cite{hota-2021}, robotics \cite{shixiang}, biology, and healthcare \cite{magill-2009, health-exp}. A generalisation of standard Bayesian optimisation for such scenarios is contextual Bayesian optimisation (CoBO), which uses the current context to inform its decision on which point the black-box function should evaluate next. For instance, Baheri and Vermillion \yrcite{Baheri} utilised CoBO to tune wind energy systems, while Fiducioso et al. \yrcite{andreas} considered a constrained version of CoBO to achieve sustainable room temperature control. The authors in \cite{pinsler-2019, feng-2020} demonstrated the importance of using context information for data-efficient decision making by tuning contextual policies in a reinforcement learning setting, and Fauvel and Chalk \yrcite{Fauvel2021ContextualBO} also stressed this importance by proposing successful contextualised strategies for adaptive optimisation in psychometric measurements. Although widespread and commonly used, CoBO learns contextualised policies assuming fixed action and state spaces, i.e., it relies \textit{a priori} on prespecified sets of interventional (optimisable) and contextual variables, respectively. We generally refer to such a preselected choice as a policy scope. Unfortunately, \emph{CoBO may suffer a provably linear regret when the policy scope is fixed}, such as the policy scope that contains all controllable variables in the action space and all contextual variables in the state space, which is often the case in practice. {\underline{\textbf{An example of CoBO/CaBO's failure modes:}}} We employ the semantical framework of structural causal models \cite{pearlscm} to describe data generating processes in this paper.
An SCM $M$ is a tuple $\langle\boldsymbol{V}, \boldsymbol{U}, \mathcal{F}, P(\boldsymbol{U})\rangle$, where $\boldsymbol{V}$ is a set of endogenous variables and $\boldsymbol{U}$ is a set of exogenous variables. $\mathcal{F}$ is a set of functions s.t. each $f_V \in \mathcal{F}$ determines the value of an endogenous variable $V \in \boldsymbol{V}$ taking as argument a combination of other variables in the system. That is, $V \leftarrow f_V\left(\boldsymbol{P} \boldsymbol{A}_V, \boldsymbol{U}_V\right), \boldsymbol{P} \boldsymbol{A}_V \subseteq \boldsymbol{V}, \boldsymbol{U}_V \subseteq \boldsymbol{U}$. Exogenous variables $U \in \boldsymbol{U}$ are mutually independent, and their values are drawn from the exogenous distribution $P(\boldsymbol{U})$. Naturally, $M$ induces a joint distribution $P(\boldsymbol{V})$ over the endogenous variables $\boldsymbol{V}$, called the observational distribution. Equipped with the definition of an SCM, we illustrate a failure mode for CaBO and CoBO. To do so, consider the example in Fig.~\ref{Fig:FailingExample}, where the context variable $C$ affects a controllable variable $X_2$ and the target objective $Y$. $\mathcal{G}$ also introduces another controllable variable $X_1$, which influences $X_2$ and is associated with the context $C$.
\begin{figure} \centering \begin{subfigure}{.2\textwidth} \centering \begin{tikzpicture}[every text node part/.style={align=center}] \node[draw,circle, minimum size=.8cm] (X1) at (1.5, 3) {$X_1$}; \node[draw,circle, minimum size=.8cm] (X2) at (1.5, 1.5) {$X_2$}; \node[draw,circle,pattern=dots, minimum size=.8cm] (Y) at (1.5, 0) {$Y$}; \node[draw,circle,fill=gray!30, minimum size=.8cm] (C) at (0, 1.5) {$C$}; \draw[>=triangle 45, ->] (X1) -- (X2); \draw[>=triangle 45, ->] (X2) -- (Y); \draw[>=triangle 45, ->] (C) -- (X2); \draw[>=triangle 45, ->] (C) -- (Y); \draw[>=triangle 45, <->, dashed, in=75,out=-165] (X1) to (C); \draw[>=triangle 45, <->, dashed, in=35,out=-35] (X2) to (Y); \end{tikzpicture} \caption{Graph: $\mathcal{G}$} \label{fig:FailingExampleGraph} \end{subfigure}% \begin{subfigure}{.25\textwidth} \centering \begin{align*} &U_1, U_2 \sim \operatorname{Uniform}([-1, 1]) \\ &X_1 = U_1, \quad C=U_1 \\ &X_2 = U_2 \exp\left( -(X_1 + C)^2\right) \\ &Y = U_2 X_2 + \operatorname{cnst.} C \end{align*} \caption{Structural causal model} \label{fig:FailingExampleSCM} \end{subfigure} \caption{A toy illustration depicting a failure mode of contextual Bayesian optimisation. Fig.~\ref{fig:FailingExampleGraph} presents the dependence or causal graph, and Fig.~\ref{fig:FailingExampleSCM} shows the compatible structural model. The variables $X_1$ and $X_2$ are controllable, $C$ represents a context variable, and $Y$ is the objective variable to be maximised. $U_1$ and $U_2$ are two unobserved random variables independently sampled from a uniform distribution.} \label{Fig:FailingExample} \end{figure} The goal is for the agent to determine a policy $\pi$ that maximises the expected target objective, i.e., $\max_{\pi} \mathbb{E}_{\pi}[Y]$. First, we note that there is a policy that achieves $\sfrac{1}{3}$ as the expected value of $Y$.
We will show this shortly; let us first consider CaBO, which identifies three sets of variables that should be considered as policy scopes $\{\varnothing, \{X_1\}, \{X_2\}\}$, where $\varnothing$ represents passive observation. Under passive observation we compute that the expected value of $Y$ is $\mathbb{E}[Y]=\sfrac{1}{3}\mathbb{E}[e^{-4U_1^2}]$, which is less than $\sfrac{1}{3}$. Now, if $X_2$ is controlled, it becomes independent of $U_2$ and we trivially see that the expectation of $Y$ becomes 0. Lastly, the control of $X_1$ yields $\mathbb{E}_{\operatorname{do}(X_1=x_1)}[Y]=\sfrac{1}{3}\mathbb{E}[e^{-(x_1+U_1)^2}]$, which reaches its maximum when $x_1$ is equal to 0; note that this maximum $\mathbb{E}_{\operatorname{do}(X_1=0)}[Y]$ is still less than $\sfrac{1}{3}$. Similarly, let us detail the potential of CoBO in this problem. By design, CoBO's policy is such that $\pi_{\text{CoBO}}\equiv \pi_{\text{CoBO}}(x_1,x_2|c)$, where $X_1$ and $X_2$ are jointly controlled based on the context observation ${C}$. Following this strategy with the given policy scope, we derive that: $\mathbb{E}_{\pi_{\text{CoBO}}}[Y] = \mathbb{E}_{\pi_{\text{CoBO}}}[U_2{X}_2 + \text{cnst.}{C}] = 0 $ since $\pi_{\text{CoBO}}$ controls ${X}_2$ independently of ${U}_2$. However, if we choose another policy scope where we control ${X}_1$ based on ${C}$, i.e., $\pi_{\text{scope}} \equiv \pi({x}_1|{c})$, we would arrive at a more favourable solution. To do so, first notice that we can expand the objective as follows: $\mathbb{E}_{\pi_{\text{scope}}}[{Y}] = \mathbb{E}_{\pi_{\text{scope}}}[{U}_2^2]\mathbb{E}_{\pi_{\text{scope}}}[\exp(-({X}_1 + {C})^2)] + \mathbb{E}_{\pi_{\text{scope}}}[\text{cnst.}{C}] $. We can clearly see that the optimum is achieved when \emph{only} ${X}_1$ is controlled and set to $-{C}$, whereby: $\mathbb{E}_{\pi_{\text{scope}}}[{Y}] = \sfrac{1}{3} > \mathbb{E}_{\pi_{\text{CoBO}}}[{Y}]$.
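These closed-form values can be verified numerically. The sketch below (illustrative only, not part of the method) simulates the SCM of Fig.~\ref{Fig:FailingExample}; we set the unspecified constant to 1, which does not affect the comparison since $\mathbb{E}[C]=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
U1 = rng.uniform(-1.0, 1.0, N)
U2 = rng.uniform(-1.0, 1.0, N)
CNST = 1.0  # arbitrary choice: E[C] = 0, so this term never shifts E[Y]

def expected_y(x1=None, x2=None):
    """Monte Carlo estimate of E[Y] under the toy SCM, optionally intervening
    on X1 and/or X2. An argument may be None (no intervention), a constant,
    or a function of the observed context C."""
    c = U1                                                  # C = U1
    v1 = U1 if x1 is None else np.broadcast_to(x1(c) if callable(x1) else x1, (N,))
    v2 = U2 * np.exp(-(v1 + c) ** 2) if x2 is None else np.broadcast_to(x2, (N,))
    return float((U2 * v2 + CNST * c).mean())               # Y = U2*X2 + cnst*C

passive = expected_y()                     # E[Y] = (1/3) E[exp(-4 U1^2)] < 1/3
control_x2 = expected_y(x2=0.5)            # controlling X2 decouples it from U2: E[Y] = 0
contextual = expected_y(x1=lambda c: -c)   # scope {<X1 | C>} with X1 = -C: E[Y] = 1/3
print(passive, control_x2, contextual)
```

Running it reproduces the ranking derived above: the contextual scope over $X_1$ attains $\sfrac{1}{3}$, while controlling $X_2$ yields zero and passive observation stays strictly below $\sfrac{1}{3}$.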
In other words, we see through this example that choosing the ``wrong'' scope may produce policies that do not achieve the sought optima. The problem we tackle in this paper naturally emerges from the three elements analysed above: 1) CoBO has to optimise over candidate policies with varying state-action spaces to determine optima; 2) brute forcing the solution is not feasible; and 3) CaBO fails when faced with contextual variables. To our knowledge, no algorithm exists to handle these settings, and our goal is to propose the first technique for contextual and causal BO. Our method, called CoCa-BO, is capable of reducing the search space of policy scopes without compromising convergence to the optimum. We highlight difficulties arising from attempts to use the causal acquisition function \cite{aglietti-2020} for choosing the best policy scope in a contextual setting. We propose a novel approach that is practically capable of finding the best policy scope and further simplifies the selection procedure. We consider setups for which we show analytically that other methods fail and demonstrate experimentally that our method converges to the optimum. Additionally, we empirically demonstrate that our method retains a significant reduction in sample complexity in large systems even when other techniques can acquire the optima (Appendix \ref{big_sys}). \section{Related Work} Utilising causal knowledge to derive better policies and policy scopes is an active and recent research area \cite{lee-2018, lee-2019, lee-2020}. There are already prominent results for conditional and unconditional policies in various setups \cite{aglietti-2020, zhang-2017, zhang2022online}, and some of these results, integrated in the scope of BO \cite{aglietti-2020}, have been proven efficient and optimal. We describe those developments from both causal and Bayesian optimisation perspectives.
\subsection{Causal BO} Causal Bayesian Optimisation (CaBO) \cite{aglietti-2020} combines the theory of possibly-optimal minimal interventional sets (POMISs) \cite{lee-2018, lee-2019} with the BO framework. CaBO assumes that a causal graph is given, possibly containing unobserved confounders, and attempts to find a subset of interventional variables and interventional values such that the expectation of the target variable is optimised when controlling the former at the latter. As the power set of all interventional variables grows exponentially with the number of interventional variables, rendering optimisation infeasible in large systems, CaBO derives the set of POMISs \cite{lee-2018} from the causal graph and restricts its search space to this set. By definition, a POMIS is guaranteed to be optimal for some SCM consistent with the causal graph, which means that if one ignores a POMIS, an SCM exists such that no other interventional set yields the optimum. The authors of \cite{lee-2018} also show that in some cases, an agent intervening on all interventional variables, which is a standard case in BO, never achieves the optimum. CaBO handles the case where not all observed variables can be controlled by projecting the original graph onto a new one where non-controllable observed variables are latent \cite{lee-2019}. This procedure effectively marginalises some of the observed variables. After computing the POMISs, CaBO decides at each step which POMIS and which interventional values to suggest, and runs the corresponding black-box evaluation. To make this decision, CaBO maximises an acquisition function for each POMIS, based on one causal Gaussian process (GP) \cite{rasmussenW06} model per POMIS. CaBO returns the POMIS and the corresponding interventional values that attained the largest acquisition function value overall. Once CaBO has performed the intervention on the system, it collects the value of the target variable and updates the GP of the selected POMIS.
Moreover, CaBO performs an $\epsilon$-greedy policy search alternating between passively observing and actively intervening. This is necessary for CaBO if the empty set is a POMIS for the given causal graph. Contrary to CaBO, our method exploits the contextual variables available to the agent, and performs a search over the joint space of interventional and contextual variables. Note that this space is a super-set of the set of POMISs; therefore our method recovers CaBO's optima in the absence of contexts, and achieves better optima than CaBO when contextual information is essential for the decisions. \subsection{Contextual BO} As discussed in the introduction, an agent may achieve better optima by considering some observable variables as contextual ones. An agent should decide the values at which interventional variables are controlled based on the values of contextual variables. Contextual BO is an algorithm well suited for this scenario. A kernel function is defined on the product space of both contextual and interventional variables, but the acquisition function is optimised only over the controllable variables, as the contextual ones are fixed by the system at each step. The common way of utilizing CoBO is to consider all controllable variables as interventional and all other observed ones as contextual within the system. This may lead to slower convergence as the optimisation space grows with each variable. Moreover, the work in \cite{lee-2020} on possibly optimal mixed policy scopes (POMPSes) provides an example in which such a policy scope, i.e. controlling all interventional variables based on all contextual variables, may never recover the optimal policy. Compared to CoBO, the method we propose optimises over MPSes in an efficient manner that does not iterate through the vast space of MPSes. This allows our method to circumvent the suboptimality of CoBO.
\subsection{Reinforcement Learning for Mixed Policy Scopes} The work in \cite{zhang2022online} considers a setup where all endogenous variables are discrete, and exogenous ones can be discrete or continuous. The first algorithm they propose is $\text{causal-UCB}^*$, which focuses on special policy scopes, guaranteed to contain the optimal one, to reduce the search space. Furthermore, they derive a method that allows the usage of samples gathered under a policy from one scope to improve the policies of other scopes under the considered setup. To do so, they identify $c$-components that are common to a group of policy scopes and learn the $c$-factors of those $c$-components \cite{tian-2002} from samples coming from all the policy evaluations that are compatible with a policy scope belonging to the group. The authors derive sub-linear bounds for the cumulative regret based on the size of the policy scope being evaluated and the size of the domains of the variables of the policy scope. They show that, in general, the bound for $\text{causal-UCB}^*$ is smaller than the bound for the standard UCB, which demonstrates the importance of causal knowledge for building efficient agents. This method limits itself to cases where all observable variables are discrete and have finite domains, whereas our proposed solution works for both discrete and continuous variables. \section{Background} This section aims to provide the reader with a background on POMPSes. It highlights their importance for the contextual decision-making setup where the causal graph is available. \subsection{Preliminaries} We will denote the causal graph of the problem by $\mathcal{G}$, and the target variable that we should optimise by $Y$. Random variables will be denoted by capital letters like $X$ and their values with corresponding lowercase letters like $x$. The domain of a random variable $X$ will be denoted $\mathfrak{X}_X$.
Sets of variables will be denoted by bold capital letters $\mathbf {X} $ and the sets of corresponding values by bold lowercase letters $\mathbf{x}$. Let $\mathbf{V}$ denote the set of all the observable variables in the system. We may now formally define mixed policy scopes (MPS). \begin{definition}[Mixed policy scopes (MPS)] Let $\mathbf{X}^*\subseteq \mathbf{V}\setminus\{Y\}$ be a set of interventional variables, and $\mathbf{C}^*\subseteq \mathbf{V}\setminus\{Y\}$ be a set of contextual variables. A mixed policy scope $\mathcal{S}$ is defined as a collection of pairs $\langle X \mid \mathbf{C}_X\rangle$ such that \begin{enumerate} \item $X\in \mathbf{X}^*$, $\mathbf{C}_X\subseteq \mathbf{C}^*\setminus\{X\}$ \item $\mathcal{G}_{\mathcal{S}}$ is acyclic, where $\mathcal{G}_{\mathcal{S}}$ is defined as $\mathcal{G}$ with edges onto $X$ removed and edges directed from $\mathbf{C}_X$ to $X$ added for every $\langle X \mid \mathbf{C}_X\rangle\in \mathcal{S}$ \end{enumerate} \end{definition} Then we can properly describe what a policy based on the MPS $\mathcal{S}$ is: \begin{definition}[Policy of an MPS] Given $\langle\mathcal{G}, Y, \mathbf{X}^*,\mathbf{C}^*\rangle$ and an SCM $\mathcal{M}$ conforming to the graph $\mathcal{G}$ with $\mathfrak{X}_Y\subseteq\mathbb{R}$, a mixed policy $\pi$ compatible with a mixed policy scope $\mathcal{S}$ is a tuple $\pi\doteq\{\pi_{X|\mathbf{C}_X}\}_{\langle X \mid \mathbf{C}_X\rangle \in \mathcal{S}}$, where each $\pi_{X|\mathbf{C}_X}:\mathfrak{X}_X\times\mathfrak{X}_{\mathbf{C}_X}\to \lbrack0,1 \rbrack$ is a proper probability mapping. \end{definition} We denote the space of policies that are compatible with an MPS $\mathcal{S}$ by $\Pi_\mathcal{S}$, and the set of all mixed policy scopes by $\mathbb{S}$. We define $\Pi_\mathbb{S}=\bigcup_{\mathcal{S}\in\mathbb{S}}\Pi_{\mathcal{S}}$ as the space of all policies, and $\mathbf{X}(\mathcal{S})$ designates the set of interventional variables of an MPS $\mathcal{S}$.
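The acyclicity requirement in the MPS definition is a purely syntactic check: construct $\mathcal{G}_{\mathcal{S}}$ and test it for directed cycles (bidirected confounding edges play no role here). A minimal sketch, with our own helper names and the directed part of the graph in Fig.~\ref{fig:FailingExampleGraph} as the running example:

```python
def scope_graph(edges, scope):
    """Build G_S: drop edges onto each intervened X, add C -> X edges.

    edges: set of directed pairs (u, v); scope: dict X -> set of contexts C_X."""
    g = {(u, v) for (u, v) in edges if v not in scope}
    g |= {(c, x) for x, ctx in scope.items() for c in ctx}
    return g

def is_acyclic(edges):
    """Kahn-style check: repeatedly strip nodes with no remaining parents."""
    nodes = {n for e in edges for n in e}
    incoming = {n: {u for (u, v) in edges if v == n} for n in nodes}
    while nodes:
        free = {n for n in nodes if not (incoming[n] & nodes)}
        if not free:
            return False          # every remaining node lies on a cycle
        nodes -= free
    return True

# Directed edges of Fig. 1: X1 -> X2, X2 -> Y, C -> X2, C -> Y
G = {("X1", "X2"), ("X2", "Y"), ("C", "X2"), ("C", "Y")}
print(is_acyclic(scope_graph(G, {"X1": {"C"}})))   # {<X1 | {C}>} is a valid MPS
```

Conditioning $X_2$ on $Y$, by contrast, would add the edge $Y \to X_2$ and close the cycle $X_2 \to Y \to X_2$, so that scope is rejected by the same check.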
Our goal is to find a policy $\pi^* \in \Pi_\mathbb{S}$ maximising the expected reward $Y$, i.e., \begin{equation*} \pi^*=\underset{\pi\in\Pi_{\mathbb{S}}}{\operatorname{argmax}}\ \mathbb{E}_\pi[Y]. \end{equation*} In this paper, we make the standard assumption that the agent knows the causal graph $\mathcal{G}$ but does not have access to the underlying SCM of the environment. \subsection{POMPS} Some variables make a policy redundant, as when a policy scope contains an actionable variable that has no causal influence on $Y$. Redundant variables in a policy scope negatively affect the convergence speed; therefore we wish to restrict our search space to non-redundant policy scopes. We also want to converge to the optimum for any SCM $\mathcal{M}$ compatible with the causal graph, as we cannot rule out any such SCM \textit{a priori}. To meet both of these requirements, we employ the framework of possibly optimal mixed policy scopes (POMPS) \cite{lee-2020}. To properly define POMPS we need to introduce several entities, starting with the optimal value achievable by a mixed policy scope $\mathcal{S}$, given by $\mu_{\mathcal{S}}^*=\max_{\pi\in\Pi_{\mathcal{S}}}\mathbb{E}_{\pi}[Y]$. An important relation among MPSes that will serve to characterise POMPS is the relation of subsumption. \begin{definition}[Subsumption] An MPS $\mathcal{S}$ subsumes another MPS $\mathcal{S}'$, denoted by $\mathcal{S}'\subseteq\mathcal{S}$, if $\mathbf{X}(\mathcal{S}')\subseteq\mathbf{X}(\mathcal{S})$ and $\mathbf{C}'_X\subseteq\mathbf{C}_X$ for every pair $\langle X\mid\mathbf{C}'_X\rangle\in\mathcal{S}'$. \end{definition} Armed with the notion of subsumption, we can define non-redundancy.
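Like acyclicity, subsumption is a purely syntactic comparison between scopes. Encoding a scope as a mapping from each intervened variable to its context set (our own representation, for illustration), it reads:

```python
def subsumes(s, s_prime):
    """True iff MPS s subsumes s_prime: X(s') is a subset of X(s) and,
    for every pair <X | C'_X> in s', C'_X is a subset of C_X.
    Scopes are dicts mapping an intervened variable to its context set."""
    return all(x in s and ctx <= s[x] for x, ctx in s_prime.items())

full = {"X1": {"C"}, "X2": {"C"}}   # scope {<X1 | {C}>, <X2 | {C}>}
small = {"X1": set()}               # scope {<X1 | {}>}
print(subsumes(full, small), subsumes(small, full))
```

Here `full` strictly subsumes `small`, while the converse fails because `small` neither covers $X_2$ nor grants $X_1$ the context $C$.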
\begin{definition}[Non-Redundancy under Optimality (NRO)] Given $\langle\mathcal{G}, Y, \mathbf{X}^*, \mathbf{C}^*\rangle$, an MPS $\mathcal{S}$ is said to be non-redundant under optimality if there exists an SCM $\mathcal{M}$ compatible with $\mathcal{G}$ such that $\mu^*_{\mathcal{S}}>\mu^*_{\mathcal{S'}}$ for every strictly subsumed MPS $\mathcal{S'}\subsetneq\mathcal{S}$. \end{definition} Finally, we arrive at the definition of the target MPSes that an agent should optimise over. \begin{definition}[Possibly-Optimal MPS (POMPS)] Given $\langle\mathcal{G}, Y, \mathbf{X}^*, \mathbf{C}^*\rangle$, let $\mathbb{S}$ be a set of NRO MPSes. An MPS $\mathcal{S}\in\mathbb{S}$ is said to be possibly-optimal if there exists an SCM $\mathcal{M}$ compatible with the graph $\mathcal{G}$ such that \begin{equation*} \mu^*_{\mathcal{S}}>\max_{\mathcal{S'}\in\mathbb{S}\setminus\{\mathcal{S}\}}\mu^*_{\mathcal{S'}} \end{equation*} \end{definition} We denote the set of all POMPSes by $\mathbb{S}^*$. Analogous to the POMISs in CaBO, it is impossible to further restrict the search space to a subset of $\mathbb{S}^*$ \textit{a priori}. Indeed, for each POMPS $\mathcal{S}$, there will always exist an SCM $\mathcal{M}$ compatible with $\mathcal{G}$ for which $\mathcal{S}$ yields the optimum value. Therefore, knowing only $\mathcal{G}$, the agent cannot exclude a POMPS without the risk of missing the optimal policy. \section{Proposed Solution} As outlined in Alg.~\ref{alg:CoCa-BO}, our method starts from the given causal graph and computes $\mathbb{S}^*$, the set of all POMPSes, using the criteria provided by Lee and Bareinboim \yrcite{lee-2020}. At each step $t$ we select one POMPS $\mathcal{S}_t$ and assign values to its controllable variables before intervening on the system. We maintain one contextual BO instance per POMPS to guide this decision. We begin this section by defining the optimisation algorithm utilised in the proposed work, and subsequently explain the rationale behind its implementation.
We further highlight the inappropriateness of the causal acquisition function in a contextual setting, which renders CaBO inapplicable, and propose an alternative approach that is effective in identifying the optimal POMPS for the given task. To illustrate the challenges encountered in aligning the causal acquisition function with a contextual setup, we provide an example demonstrating the difficulties that arise in such scenarios. \subsection{Contextual BO policy} We augment each POMPS $\mathcal{S}$ with a contextual GP that models the black-box to optimise as a function of $\mathbf{C}(\mathcal{S})$ and $\mathbf{X}(\mathcal{S})$. We also keep track of all the points evaluated so far when $\mathcal{S}$ was selected in a dataset $\mathcal{D}_\mathcal{S}^t = \{(\mathbf{x}_{\mathcal{S}}^i,\mathbf{c}_{\mathcal{S}}^i, \mathbf{y}^i)\}_{i \in t_\mathcal{S}}$ where $t_\mathcal{S} = \{i \leq t \ \text{s.t.} \ \mathcal{S}_i = \mathcal{S}\}$. Each GP model is determined by a prior mean $\mu$ and a contextual kernel $k$ supposed to capture the behaviour of the black-box as context and intervention values vary.
The GP associated with scope $\mathcal{S}$ and dataset $\mathcal{D}_\mathcal{S}^t$ provides a posterior prediction of the output we would observe by applying interventions $\boldsymbol{x}_\mathcal{S}$ in context $\boldsymbol{c}_\mathcal{S}$, given by $\mathcal{N}(\mu_\text{post}(\boldsymbol{z}_\mathcal{S}), \sigma_\text{post}(\boldsymbol{z}_\mathcal{S})^2)$ where $\boldsymbol{z}_\mathcal{S} = (\boldsymbol{x}_\mathcal{S}, \boldsymbol{c}_\mathcal{S})$ and: \begin{align*} \mu_\text{post}(\boldsymbol{z}_\mathcal{S}) &= \boldsymbol{k}_{t_\mathcal{S}}^{\intercal}(\boldsymbol{z}_\mathcal{S})\left(\boldsymbol{K}_{t_\mathcal{S}} + \sigma_{\text{n}}^2\boldsymbol{I}\right)^{-1} \boldsymbol{y}_{t_\mathcal{S}} \\ \sigma_\text{post}(\boldsymbol{z}_\mathcal{S})^2 &= k(\boldsymbol{z}_\mathcal{S},\boldsymbol{z}_\mathcal{S}) \\ & \quad- \boldsymbol{k}_{t_\mathcal{S}}^{\intercal} (\boldsymbol{z}_\mathcal{S}) \left(\boldsymbol{K}_{t_\mathcal{S}} + \sigma_{\text{n}}^2 \boldsymbol{I}\right)^{-1} \boldsymbol{k}_{t_\mathcal{S}} (\boldsymbol{z}_\mathcal{S}), \end{align*} with $\boldsymbol{I}$ the identity matrix, $\boldsymbol{y}_{t_\mathcal{S}}$ the vector of all observed targets in $\mathcal{D}_\mathcal{S}^t$, $\boldsymbol{K}_{t_\mathcal{S}}$ the covariance matrix of all interventional values and context pairs in $\mathcal{D}_\mathcal{S}^t$, $\boldsymbol{k}_{t_\mathcal{S}}$ the vector of covariances between $\boldsymbol{z}_\mathcal{S}$ and each already evaluated input, and $\sigma_{\text{n}}$ a tunable noise level. In our method we deploy HEBO \cite{hebo} for each POMPS as it is a state-of-the-art BO algorithm \cite{turner-21} that handles optimisation in a contextual setting, and which is able to cope with both continuous and discrete variables.
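For concreteness, the posterior equations above can be sketched in a few lines, assuming a squared-exponential kernel on the joint input $\boldsymbol{z}_\mathcal{S} = (\boldsymbol{x}_\mathcal{S}, \boldsymbol{c}_\mathcal{S})$; HEBO's actual model is richer (learnable transformations, multiple kernels), so this is only the plain GP posterior:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(Z_train, y_train, Z_query, noise=1e-2):
    """Posterior mean/variance at Z_query, mirroring the text:
    mu = k^T (K + s^2 I)^{-1} y and var = k(z,z) - k^T (K + s^2 I)^{-1} k."""
    K = rbf(Z_train, Z_train) + noise ** 2 * np.eye(len(Z_train))
    k = rbf(Z_train, Z_query)                 # one column per query point
    mu = k.T @ np.linalg.solve(K, y_train)
    var = rbf(Z_query, Z_query).diagonal() - np.einsum(
        "ij,ij->j", k, np.linalg.solve(K, k))
    return mu, var

Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # (intervention, context) pairs
y = np.array([0.0, 1.0, -1.0])
mu, var = gp_posterior(Z, y, Z)   # predictions at the training inputs
```

At the training inputs the posterior mean nearly interpolates the data and the variance collapses towards the noise level; far from the data, the prediction reverts to the prior mean 0 with unit variance.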
HEBO's GP model integrates learnable input and output transformations, which makes it robust to potential heteroscedasticity and non-stationarity of the black-box; this is not unexpected in a contextual setting, where a change of context may induce a change of regime. Each HEBO instance tries to find the best policy $\pi^*_{\mathcal{S}}$ within its POMPS $\mathcal{S}$ such that: \begin{equation*} \pi^*_\mathcal{S}=\underset{\pi\in\Pi_{\mathcal{S}}}{\operatorname{argmax}} \ \mathbb{E}_{\pi}[Y] \end{equation*} \subsection{POMPS selection} In contrast to our method, CaBO utilises the values of the acquisition function for each GP to decide which POMIS should be controlled next. The motivation for such an approach comes from the observation that, by definition, the best POMIS with the best interventional values will have a higher value of the expectation of the target than the others. Consequently, the acquisition function should be able to capture this pattern, especially in regions of low uncertainty. Nevertheless, this approach is not applicable in contextual cases, as the optimal values of the acquisition functions depend on the observed context. This leads to the comparison of different functions at different points, i.e. different contexts. For example, consider POMPSes $\mathcal{S}$ and $\mathcal{S'}$ for which contexts $c_\mathcal{S}$ and $c_{\mathcal{S}'}$ have been observed, respectively. The comparison of the maximal acquisition function values associated with $\mathcal{S}$ and $c_\mathcal{S}$ on the one hand, and $\mathcal{S}'$ and $c_{\mathcal{S}'}$ on the other hand, is not reflective of the relation between $\mathbb{E}_{\pi^*_\mathcal{S}}[Y]$ and $\mathbb{E}_{\pi^*_\mathcal{S'}}[Y]$. It may even lead to the divergence of the optimiser, as such a comparison may switch between POMPSes depending on the observed context. We adopt multi-armed bandits (MAB) with the upper confidence bound (UCB) action selection criterion \cite{Sutton1998}.
Each POMPS serves as an arm for the MAB-UCB, which attempts to find the arm with the highest reward, i.e. the POMPS with the best $\mu^*_{\mathcal{S}}$. One may argue that for a given POMPS $\mathcal{S}$, $\mathbb{E}_{\pi_{\mathcal{S}}}[Y]$ changes as the policy is being optimised. Nevertheless, note that under optimality it is stationary and equal to $\mu^*_{\mathcal{S}}$. \begin{algorithm} \caption{Contextual Causal BO (CoCa-BO)} \label{alg:CoCa-BO} \begin{algorithmic} \STATE {\bfseries Input:} causal graph $\mathcal{G}$, number of iterations $n$, set of interventional variables $\mathbf{X}$, domain of variables $\mathfrak{X}_\mathbf{V}$. \STATE Compute $\mathbb{S}^*=\operatorname{POMPS}(\mathcal{G},\mathbf{X}) $ \STATE Create $A=\operatorname{MAB-UCB}(\mathbb{S}^*)$ \FOR{$\mathcal{S}\in\mathbb{S}^*$} \STATE Create $\textnormal{Opt}_\mathcal{S}=\operatorname{HEBO}(\mathbf{X}(\mathcal{S}), \mathbf{C}(\mathcal{S}), \mathfrak{X}_\mathcal{S})$ \ENDFOR \FOR{$i=1$ {\bfseries to} $n$} \STATE Select a POMPS: $\mathcal{S}=A.\textnormal{suggest}()$ \STATE Implement $\pi_{\mathcal{S}}$ dictated by $\textnormal{Opt}_{\mathcal{S}}$, $y, \mathbf{x_\mathcal{S}},\mathbf{c_{\mathcal{S}}}\sim \mathcal{M}_{\pi_{\mathcal{S}}}$ \STATE $A.\textnormal{update}(y, \mathcal{S})$ \STATE $\textnormal{Opt}_{\mathcal{S}}.\textnormal{update}(y, \mathbf{x_\mathcal{S}},\mathbf{c_{\mathcal{S}}})$ \ENDFOR \end{algorithmic} \end{algorithm} We summarise our method in Algorithm \ref{alg:CoCa-BO}. Here, the $\operatorname{POMPS}$ function computes the set of POMPSes \cite{lee-2020} for a given causal graph $\mathcal{G}$ and the set of interventional variables $\mathbf{X}$. It is assumed that all observed variables may be contextual, but a subset of observed variables may be provided optionally. $\operatorname{MAB-UCB}$ creates a UCB multi-armed bandit for choosing which POMPS to implement at iteration $i$.
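The $\operatorname{MAB-UCB}$ component can be sketched as a plain UCB1 bandit whose arms are POMPSes. The class below is a minimal stand-in, not the implementation used in the paper, and the Gaussian rewards merely imitate the per-scope returns of the toy example:

```python
import math
import random

class MABUCB:
    """UCB1 bandit over a finite set of POMPSes (the arms)."""
    def __init__(self, arms, c=0.5):
        self.arms, self.c, self.t = list(arms), c, 0
        self.n = {a: 0 for a in self.arms}       # times each arm was pulled
        self.mean = {a: 0.0 for a in self.arms}  # running mean reward per arm

    def suggest(self):
        self.t += 1
        for a in self.arms:                      # pull every arm once first
            if self.n[a] == 0:
                return a
        return max(self.arms, key=lambda a: self.mean[a]
                   + self.c * math.sqrt(math.log(self.t) / self.n[a]))

    def update(self, reward, arm):
        self.n[arm] += 1
        self.mean[arm] += (reward - self.mean[arm]) / self.n[arm]

random.seed(0)
bandit = MABUCB(["<X1|C>", "<X2|C>"])
for _ in range(500):
    s = bandit.suggest()
    mu = 1 / 3 if s == "<X1|C>" else 0.0         # stand-in expected rewards
    bandit.update(random.gauss(mu, 0.1), s)
print(bandit.n)   # the scope with the higher mean dominates the pulls
```

The stationarity argument above is what justifies this treatment: once each per-scope optimiser is near its optimum, arm rewards concentrate around $\mu^*_{\mathcal{S}}$, and the bandit concentrates its pulls on the best scope.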
The selection is done by the $\textnormal{suggest}()$ method call on the $\operatorname{MAB-UCB}$. The first \textit{for} loop initialises the HEBO optimisers for each POMPS $\mathcal{S}$ based on its contextual variables $\mathbf{C}(\mathcal{S})$, interventional variables $\mathbf{X}(\mathcal{S})$, and the domain of all variables within $\mathcal{S}$, denoted by $\mathfrak{X}_\mathcal{S}$. The last \textit{for} loop implements the policy $\pi_{\mathcal{S}}$ defined by $\textnormal{Opt}_{\mathcal{S}}$ and collects the values under the policy. Then both the $\operatorname{MAB-UCB}$ and the chosen HEBO optimiser are updated with the observed values. The policy dictated by the HEBO optimiser is based on acquisition function maximisation. We use multiple acquisition functions for the GPs: \begin{align*} \alpha_{\mathcal{S}}(\boldsymbol{x}_{\mathcal{S}},\boldsymbol{c}_{\mathcal{S}}) = (\alpha_{\mathrm{EI}}^{\boldsymbol{\theta}}\left(\boldsymbol{x}_\mathcal{S}\mid \mathcal{D}_\mathcal{S}, \boldsymbol{c}_\mathcal{S}\right), \alpha_{\mathrm{PI}}^{\boldsymbol{\theta}}\left(\boldsymbol{x}_\mathcal{S}\mid \mathcal{D}_\mathcal{S}, \boldsymbol{c}_\mathcal{S}\right), \\ \alpha_{\mathrm{UCB}}^{\boldsymbol{\theta}}\left(\boldsymbol{x}_\mathcal{S}\mid \mathcal{D}_\mathcal{S}, \boldsymbol{c}_\mathcal{S}\right)) \end{align*} where $\boldsymbol{x}_\mathcal{S}$ is the candidate compatible with $\mathcal{S}$, $\boldsymbol{c}_\mathcal{S}$ is the context under $\mathcal{S}$, $\mathcal{D}_\mathcal{S}$ is the collection of samples under POMPS $\mathcal{S}$, and $\boldsymbol{\theta}$ are the parameters of the GP. HEBO performs candidate selection by maximising the acquisition function, $\max_{\boldsymbol{x}_\mathcal{S}} \ \alpha_{\mathcal{S}}(\boldsymbol{x}_{\mathcal{S}},\boldsymbol{c}_{\mathcal{S}})$. This process defines the policy by which a candidate $\boldsymbol{x}_{\mathcal{S}}$ is chosen given $\boldsymbol{c}_{\mathcal{S}}$.
$$ \pi_\mathcal{S}(x_\mathcal{S}\mid c_\mathcal{S})=\underset{\boldsymbol{x_\mathcal{S}}}{\operatorname{argmax}} \ \alpha_{\mathcal{S}}(\boldsymbol{x}_{\mathcal{S}},\boldsymbol{c}_{\mathcal{S}}) $$ For details on candidate selection, please see \cite{hebo}. The utilisation of multiple acquisition functions is motivated by the work of Lyu et al. \yrcite{multi-ac}, which describes the conflicts that may arise between different acquisition functions. \section{Experiments} \begin{figure*}[t] \centering \begin{subfigure}{.245\textwidth} \centering \includegraphics[width= \textwidth ]{Figures/CoBO_v_us_CoBO_better.pdf} \caption{CoBO vs CoCa-BO (I.)} \label{fig:cobobetter} \end{subfigure} \begin{subfigure}{.245\textwidth} \centering \includegraphics[width=\textwidth]{Figures/CoBO_v_us_CoBO_impossible.pdf} \caption{CoBO vs CoCa-BO (II.)} \label{fig:CoBOImp} \end{subfigure} \begin{subfigure}{.245\textwidth} \centering \includegraphics[width= \textwidth ]{Figures/aspirin_statin_hom.pdf} \caption{CaBO vs CoCa-BO (I.) } \label{fig:aspirin_statin_hom} \end{subfigure} \begin{subfigure}{.245\textwidth} \centering \includegraphics[width=\textwidth ]{Figures/aspirin_statin_het.pdf} \caption{CaBO vs CoCa-BO (II.)} \label{fig:aspirin_statin_het} \end{subfigure} \caption{ The upper figures show the cumulative probability of selecting the corresponding MPS (POMPS or POMIS). The frequency curve is omitted when there is only one candidate MPS. The lower charts show the time-normalised cumulative regret $\bar{R}_T$.} \label{fig:big} \end{figure*} We conduct a series of experiments to compare the proposed method with CaBO and CoBO. There are two general cases considered for the experimental setup, each with a certain purpose. For each setup, $R_t = y_t - \mathbb{E}_{\pi^*}[Y]$ denotes the immediate regret suffered at iteration $t$, and we report the time-normalised cumulative regret $\bar{R}_T = \frac{1}{T}\sum_{t=1}^T R_t$ associated with each optimiser.
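The time-normalised cumulative regret $\bar{R}_T$ defined above is a running average of immediate regrets; a minimal sketch (toy values, not the experimental data):

```python
def normalised_cumulative_regret(ys, y_star):
    """Return bar_R_T = (1/T) * sum_{t<=T} (y_t - y_star) for every prefix T.

    ys     : observed target values y_t (minimisation setting)
    y_star : the optimal expected target E_{pi*}[Y]
    """
    total, out = 0.0, []
    for t, y in enumerate(ys, start=1):
        total += y - y_star   # immediate regret R_t
        out.append(total / t)
    return out
```

A decreasing sequence of these values indicates sub-linear cumulative regret, which is how convergence is read off the lower charts of the figures.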
We also report the cumulative MPS selection frequency, which gives at index $t$ the fraction of times each MPS has been selected up to iteration $t$. \textbf{Configuration I.:} Both the proposed method and the benchmark method can achieve the optimum. We attempt to measure the additional cost introduced by the proposed method in terms of sample complexity. Please note that in an environment with a large number of observed variables, CoBO yields a large optimisation domain, requiring a large number of samples (Appendix \ref{big_sys}). Our method utilises the independencies encoded within the causal graph to reduce the optimisation domain. \textbf{Configuration II.:} The benchmark method cannot achieve the optimum. We practically confirm that there are cases where neither CoBO nor CaBO converges to the optimum, leading to linear cumulative regret. We also show that CoCa-BO converges to the optimal policy and has sub-linear cumulative regret. All experiments have been run with 110 seeds for 700 iterations, i.e. 700 different interventions have been implemented in the system. The results are averaged and reported with 1.96 standard error intervals. \subsection{CaBO vs CoCa-BO} \subsubsection{First Configuration} The lack of contextual information in CaBO limits its potential to achieve optimal values. However, in cases where the context is parametrically irrelevant, i.e. the SCM of the system is such that the optimal value of the target is the same across all values of the contextual variables (homogeneous), CaBO is capable of attaining optima even in a contextual setup. However, these cases require a specific alignment of the SCM parameters, which may be unstable and can disappear even under small fluctuations \cite{pearlstability}. Another, less interesting, setup is when the contextual variables are in general irrelevant to the target $Y$, which can be inferred from the causal graph $\mathcal{G}$.
Our method is able to identify such cases and performs on par with CaBO, as the set of POMPSes collapses to the set of POMISs in this case. To further illustrate this, we define two SCMs that are compatible with the causal graph depicted in Fig.~\ref{Fig:AspStat}, from Ferro et al. (2015) \cite{ferro-2015} and Thompson et al. (2019) \cite{Thompson2019CausalGA}, where Age, BMI, and Cancer are observable-only variables and Aspirin and Statin are interventional variables. PSA, which stands for prostate-specific antigen \cite{wang-2011}, is used to detect prostate cancer. We then compute the POMISs and POMPSes from the graph. The set of POMISs is $\{\varnothing, \{\text{Aspirin}\}, \{\text{Statin}\}, \{\text{Aspirin, Statin}\}\}$ and the set of POMPSes consists of a single element, namely $\langle \text{Aspirin, Statin}\mid \text{Age, BMI}\rangle$. \begin{figure} \centering \resizebox{.35\textwidth}{!}{ \begin{tikzpicture}[every node/.style={circle, draw, minimum size=1.5cm}] \node[fill=gray!30, ] (age) at (-1, 3.5) {Age}; \node[fill=gray!30, ] (bmi) at (-1, .5) {BMI}; \node[, ] (aspirin) at (1.5, 3) {Aspirin}; \node[, ] (statin) at (1.5, 1) {Statin}; \node[fill=gray!30, ] (cancer) at (3.5, 2) {Cancer}; \node[pattern=dots] (psa) at (6, 2) {PSA}; \draw[>=triangle 45, ->] (age) -- (bmi); \draw[>=triangle 45, ->] (age) -- (aspirin); \draw[>=triangle 45, ->] (bmi) -- (aspirin); \draw[>=triangle 45, ->] (age) -- (statin); \draw[>=triangle 45, ->] (bmi) -- (statin); \draw[>=triangle 45, ->] (aspirin) -- (cancer); \draw[>=triangle 45, ->, in=150, out=10] (aspirin) to (psa); \draw[>=triangle 45, ->] (cancer) to (psa); \draw[>=triangle 45, ->, in=-150, out=-10] (statin) to (psa); \draw[>=triangle 45, ->] (statin) to (cancer); \draw[>=triangle 45, ->, in=120, out=20] (age) to (psa); \draw[>=triangle 45, ->, in=-120,out=-20] (bmi) to (psa); \end{tikzpicture} } \caption{Causal graph of PSA level.
White nodes represent variables which can be intervened on, and gray nodes represent non-manipulative variables. The target variable PSA is denoted with a shaded node.} \label{Fig:AspStat} \end{figure} Under the parametrisation from \cite{aglietti-2020}, the optimal values at which Statin and Aspirin are controlled to minimise PSA are the same for all values of Age and BMI. Let us denote the domain of a random variable $X$ by $\mathfrak{X}_X$. Formally, the former statement can be written as $\forall(\textnormal{age}, \textnormal{bmi})\in\mathfrak{X}_{\textnormal{Age}} \times \mathfrak{X}_{\textnormal{BMI}}$: \begin{align*} \underset{(\textnormal{aspirin}, \textnormal{statin})}{\operatorname{argmin}}\mathbb E[Y|&\operatorname{do}(\textnormal{aspirin}, \textnormal{statin}), \textnormal{age}, \textnormal{bmi}] \\ &=\underset{(\textnormal{aspirin}, \textnormal{statin})}{\operatorname{argmin}}\mathbb E[Y|\operatorname{do}(\textnormal{aspirin}, \textnormal{statin})] \end{align*} We show that the optimal values at which Aspirin and Statin should be controlled are $0$ and $1$, respectively. For details, please see Appendix \ref{cabovsus_1}. Fig.~\ref{fig:aspirin_statin_hom} illustrates the results for this setup. CaBO selects between POMISs via the causal acquisition function procedure. We note the high variance of this selection method, which practically harms the convergence properties of the optimiser. On the other hand, the MAB-UCB based POMPS selection of CoCa-BO does not suffer from this, as can be seen from Figs.~\ref{fig:cobobetter} and \ref{fig:CoBOImp}. \subsubsection{Second Configuration} The SCM for the second configuration, consistent with the causal graph in Fig.~\ref{Fig:AspStat}, is such that if Aspirin and Statin are controlled based on Age and BMI, a lower expected PSA is achieved than if Aspirin and Statin are controlled at constant values.
Appendix \ref{cabovsus_2} shows that both Aspirin and Statin should be controlled at $\left(\frac{\text{Age}-55}{21}\right)\left| \frac{\text{BMI}-27}{4}\right|$ after plugging the Cancer equation into the PSA equation. The optimal values for Aspirin and Statin depend on Age and BMI. CaBO cannot control the medications based on Age and BMI, as those contextual variables are marginalised. Our method converges as it utilises $\langle \text{Aspirin, Statin}\mid \text{Age, BMI}\rangle$ and is able to identify a policy that minimises PSA given Age and BMI. Fig.~\ref{fig:aspirin_statin_het} shows the results for this setup. We again observe high variance in the POMIS selection process for CaBO. Since the optimal policy is a contextual one, it is impossible for CaBO to achieve the optimum, yielding linear cumulative regret, as can be seen from the normalised cumulative regret in Fig.~\ref{fig:aspirin_statin_het}. \subsection{CoBO vs CoCa-BO} \subsubsection{First Configuration} Appendix \ref{cobo_good} describes an SCM that is consistent with the causal graph illustrated in Fig.~\ref{fig:FailingExampleGraph}. It also shows that controlling both $X_1$ and $X_2$ does not impede the agent's ability to achieve the optimal outcome. Fig.~\ref{fig:cobobetter} illustrates that both methods converge to the optimal solution, as evidenced by the decreasing trend of the normalised cumulative regrets. We observe that CoBO converges faster than CoCa-BO. This is expected, as CoBO optimises over a fixed policy scope, which removes the cost associated with finding the optimal policy scope. Nevertheless, it is important to note that the convergence of CoBO is contingent on the specific parametric configuration of the SCM. It is not possible to a priori exclude cases such as the one depicted in Fig.~\ref{fig:FailingExampleSCM}, and therefore, it is necessary to perform a search over mixed policy sets (MPSes), or preferably over POMPSes, to make convergence achievable.
Additionally, in environments with a large number of variables, CoBO's performance deteriorates even if convergence is possible (Appendix \ref{big_sys}). \subsubsection{Second Configuration} The second configuration examined in this study is the SCM depicted in Fig.~\ref{fig:FailingExampleSCM} in the introduction. We have shown analytically that the CoBO method is unable to attain the optimal solution. Our proposed method, CoCa-BO, has been demonstrated to effectively achieve the desired optimum, resulting in sub-linear cumulative regret. The upper sub-figure of Fig.~\ref{fig:CoBOImp} illustrates the cumulative probability of selecting the corresponding POMPS. Our proposed method, CoCa-BO, converges to the policy scope that yields the optimal value, specifically $\langle X_1\mid C\rangle$. The lower sub-figure of Fig.~\ref{fig:CoBOImp} illustrates that the normalised cumulative regret remains constant for CoBO, indicating its linear cumulative regret. On the other hand, CoCa-BO attains sub-linear cumulative regret, as evidenced by the decreasing trend in its normalised cumulative regret. \section{Future Work} \begin{figure} \centering \begin{tikzpicture} \node[draw,circle, minimum size=1cm] (X1) at (0, 1.2) {$X_1$}; \node[draw,circle, minimum size=1cm, ] (X2) at (0, 0) {$X_2$}; \node[draw,circle,pattern=dots, minimum size=1cm] (Y) at (3, .6) {$Y$}; \draw[>=triangle 45, ->] (X1) -- (Y); \draw[>=triangle 45, ->] (X2) -- (Y); \draw[>=triangle 45, <->,out=25,in=140,dashed] (X1) to (Y); \end{tikzpicture} \caption{An example where no POMPSes share the same context} \label{Fig:no_common_context} \end{figure} One may think that sharing context between POMPSes, e.g. when two POMPSes have the same set of contextual variables so that observations from one can be used to evaluate the acquisition function for the other, would make the optimal value of the acquisition function a valid selection criterion.
Unfortunately, this is not possible in general. To further reinforce this argument, we consider the causal graph in Fig.~\ref{Fig:no_common_context}, for which the POMPSes are: controlling $X_1$ based on $X_2$ ($\langle X_1 \mid X_2\rangle$), controlling $X_2$ based on $X_1$ ($\langle X_2 \mid X_1\rangle$), and controlling $X_1$ and $X_2$ simultaneously ($\langle X_1 \mid \varnothing\rangle, \langle X_2 \mid \varnothing\rangle$). It is clear that sharing context between those POMPSes is impossible, as a variable that is contextual for one is interventional for the other. Moreover, we do not observe stable behaviour of the causal acquisition function selection procedure in our experiments where it is applicable (Figs.~\ref{fig:aspirin_statin_het}, \ref{fig:aspirin_statin_hom}). Nevertheless, we acknowledge the necessity of detailed analysis and experiments to confidently compare MAB-UCB with the acquisition function-based selection procedure in scenarios where the latter is applicable. \section{Conclusion} We review problems where agents should utilise contextual information and base their decisions on it to achieve the optimal outcome. We highlight the importance of searching through policy scopes rather than performing policy optimisation on a fixed one, and we note that currently available solutions in the BO domain do not address this problem. Moreover, a direct application of them is either impossible or yields sub-optimal results. The proposed method, named CoCa-BO, overcomes the discussed problems and efficiently performs policy optimisation in mixed policy scopes. We analyse the performance of CoCa-BO against CaBO and CoBO and show empirically that the proposed method always achieves the optimum. Furthermore, CoCa-BO has lower sample complexity than CoBO in environments with a sufficiently large number of variables, as it is capable of finding smaller optimisation domains without sacrificing convergence.
We empirically show that CoCa-BO has lower variance than CaBO and consistently reaches the optimum across multiple runs. We also share our Pyro \cite{bingham2018pyro, phan2019composable} based code for the reproducibility of our results and in an attempt to provide a unified framework for future research in this domain.
\section{Conclusions} In this paper, we considered the problem of distributed decision-making in multi-agent systems (via potential games) with an emphasis on the impact of realistic communication links. We showed how to extend the current literature on potential games with binary log-linear learning to account for stochastic communication channels. We derived conditions on the probabilities of link connectivity and BLLL's temperature to achieve a target probability for the set of potential maximizers. Furthermore, our toy example demonstrated a transition phenomenon for achieving any target probability. \section{Introduction}\label{sec:introduction} Non-cooperative game theory has recently emerged as a powerful tool for the distributed control of multi-agent systems \cite{marden_cooperative_potential,martinez_cov_game,shamma_cov_game}. By designing proper local utility functions and learning algorithms that satisfy certain properties, desirable global behaviors can be achieved. Potential games \cite{monderer_potential} are an important class of non-cooperative games and have recently received considerable attention in the literature \cite{marden_welfare}. In potential games, the local utility function of the agents is aligned with a potential function in order to achieve a global objective through local decisions. There are a number of learning algorithms that can guarantee the convergence to a Nash equilibrium for potential games, such as fictitious play \cite{monderer_fictitious} and joint strategy fictitious play \cite{marden_JSFP}. However, a Nash equilibrium may be a sub-optimal outcome and not the potential maximizer. Log-linear learning (first introduced in \cite{blume_log-linear}), on the other hand, is a learning mechanism that can guarantee convergence to the set of potential maximizers. As a result, it has been the subject of considerable research recently \cite{marden_revisiting}.
Binary log-linear learning \cite{marden_revisiting},\cite{arslan_autonomous} is a variant of log-linear learning which can further handle constrained actions sets, i.e.\ scenarios where the future actions of the players are limited based on their current action (like in robotic networks). While considerable progress has been made for distributed decision making using potential games, ideal communication links are often assumed. In other words, it is typically assumed that an agent can hear from all the other agents that will impact its utility function. In realistic communication environments with packet-dropping stochastic communication links, this is simply not possible. For instance, Fig.\ \ref{fig:Allscales_blue_gray} shows an example of real channel measurements. We can see that the channel exhibits a great degree of stochasticity due to the shadowing and multipath fading components. Thus, it is the goal of this paper to bring an understanding of the impact of stochastic packet-dropping communication links on potential games with binary log-linear learning, where each link is properly represented with an action-dependent probability of connectivity. By extending \cite{marden_revisiting}, we derive conditions on the temperature (defined in Section \ref{sec:BLLL}) and probabilities of connectivity to achieve a given target probability (in the stationary distribution) for the set of potential maximizers (Theorems \ref{thm:blll-comm} and \ref{thm:blll-comm-2}). In Section \ref{sec:toy_game}, in a toy example, we further observe a transition behavior for achieving any target probability. \section{Problem Setup}\label{sec:problemsetup} In this section, we first introduce some basic concepts and properties of potential games. We then review the binary log-linear learning algorithm and the theory of resistance trees, which we use in our subsequent analysis. Finally, we motivate the need for considering stochastic communication links. 
\subsection{Potential Game (see \cite{drew_game_theory} for more details)}\label{sec:background} A game $\mathcal{G} = \{\mathcal{I},\{\mathcal{A}_{i}\}_{i\in \mathcal{I}},\{U_i\}_{i\in \mathcal{I}}\}$ is defined by its three components: \begin{enumerate} \item $\mathcal{I}=\{1,2,\cdots,n\}$ is the set of players/agents/robots; \item $\mathcal{A}_{i}$ is the set of all the actions (choices) that agent $i$ has. Then, an action profile $a=(a_1,\cdots,a_n)\in\mathcal{A}$ denotes the collection of actions of all the agents, where $\mathcal{A}=\mathcal{A}_{1}\times \cdots \times \mathcal{A}_{n}$ is the space of all action profiles; \item $U_{i}:\mathcal{A} \to \mathbb{R}$ is the utility function of agent $i$. \end{enumerate} One of the most important concepts in game theory is that of a pure Nash equilibrium, which is defined as follows. \begin{definition}[Pure Nash Equilibrium]\label{def:nasheq} Consider a game $\mathcal{G} = \{\mathcal{I},\{\mathcal{A}_{i}\}_{i\in \mathcal{I}},\{U_i\}_{i\in \mathcal{I}}\}$. An action profile $a^{\text{NE}}$ is said to be a pure Nash equilibrium of the game if and only if \begin{equation}\label{eq:nasheq} U_{i}(a^{\text{NE}}) \geq U_{i}(a_{i},a_{-i}^{\text{NE}}), \quad \forall \;a_{i} \in \mathcal{A}_{i} \text{ and } \forall\; i \in \mathcal{I}, \end{equation} where $a_{-i}=(a_1,\cdots,a_{i-1},a_{i+1},\cdots,a_n) \in \mathcal{A}_{-i} $ denotes the action profile of all the agents except $i$ and $\mathcal{A}_{-i} = \mathcal{A}_{1}\times \cdots \times \mathcal{A}_{i-1} \times \mathcal{A}_{i+1} \times \cdots \times \mathcal{A}_{n}$. \end{definition} As can be seen, a game has reached a pure Nash equilibrium if and only if no agent has the motivation to unilaterally change its action. In this paper, we are interested in potential games. Potential games can have broad applications in distributed multi-robot systems since they allow each robot to make local decisions while a global objective (the potential function) is optimized.
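The pure Nash equilibrium condition (\ref{eq:nasheq}) can be checked directly for a small finite game by enumerating all unilateral deviations. The sketch below uses a toy two-player coordination game (which is also a potential game); all names and payoffs are illustrative.

```python
def is_pure_nash(utilities, profile, action_sets):
    """Check condition (1): no agent i can gain by unilaterally
    deviating from the given action profile.

    utilities[i] maps a full action profile (tuple) to agent i's payoff.
    """
    for i, U in enumerate(utilities):
        for ai in action_sets[i]:
            deviation = profile[:i] + (ai,) + profile[i + 1:]
            if U(deviation) > U(profile):
                return False
    return True

# toy 2x2 coordination game: both agents prefer matching actions
U0 = U1 = lambda a: 1.0 if a[0] == a[1] else 0.0
```

For this game both matching profiles are pure Nash equilibria, while the mismatched profiles are not, since either agent gains by switching.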
\begin{definition}[Potential Games \cite{monderer_potential}]\label{def:potential_game} A game $\mathcal{G} = \{\mathcal{I},$ $\{\mathcal{A}_{i}\}_{i\in \mathcal{I}},\{U_i\}_{i\in \mathcal{I}}\}$ is said to be a potential game with potential function $\phi:\mathcal{A} \to \mathbb{R}$ if \begin{align}\label{eq:potential_game} &U_{i}(a'_{i},a_{-i}) -U_{i}(a_{i},a_{-i}) = \phi(a'_{i},a_{-i}) -\phi(a_{i},a_{-i}),\nonumber\\ &\hspace{0.5in} \forall\; a_{i},a'_{i} \in \mathcal{A}_{i} \text{,}\; \forall \;a_{-i} \in \mathcal{A}_{-i} \text{ and } \forall\; i \in \mathcal{I}. \end{align} \end{definition} As can be seen, a potential game requires a perfect alignment between the potential function and the agents' local utility functions. It is straightforward to confirm that the action profile that maximizes the potential function is a pure Nash equilibrium. Hence, a pure Nash equilibrium is guaranteed to exist in potential games. \subsection{Binary Log-Linear Learning (see \cite{marden_revisiting})}\label{sec:BLLL} In several scenarios, the set of possible actions that an agent can take is limited by its current action. For instance, in multi-robot systems, the next possible position of an agent is limited by its current position and velocity. Formally, we refer to this limited set as an agent's constrained action set, i.e., $\mathcal{A}_{i}^{\text{cons}}(a_{i}) \subseteq \mathcal{A}_{i}$ is agent $i$'s constrained action set, where $a_{i}$ is its current action. Binary log-linear learning (BLLL) is a variant of log-linear learning (as shown in \cite{marden_revisiting}) which can handle constrained action sets. It is summarized as follows. At each time step $t$, an agent $i \in \mathcal{I}$ is chosen randomly (uniformly) and is allowed to alter its action.\footnote{Note that the selection does not require coordination among the nodes and can be achieved through each agent using a Poisson clock \cite{boyd_randomized}.} All the other agents repeat their previous actions, i.e.
$a_{-i}(t)=a_{-i}(t-1)$. Agent $i$ then plays according to the following strategy: \begin{align} p_{i}^{a_{i}(t-1)}(t) & = {e^{{1 \over \tau}U_{i}(a(t-1))} \over {e^{{1 \over \tau}U_{i}(a(t-1))}+e^{{1 \over \tau}U_{i}(\hat{a}_{i},a_{-i}(t-1))}}}, \label{eq:blll-1} \\ p_{i}^{\hat{a}_{i}}(t) & = {e^{{1 \over \tau}U_{i}(\hat{a}_{i},a_{-i}(t-1))} \over {e^{{1 \over \tau}U_{i}(a(t-1))}+e^{{1 \over \tau}U_{i}(\hat{a}_{i},a_{-i}(t-1))}}},\label{eq:blll-2} \end{align} where $\hat{a}_{i}$ is an action that is chosen uniformly from the constrained action set $\mathcal{A}^{\text{cons}}_{i}(a_{i}(t-1))$, $p_{i}^{a_{i}(t-1)}(t)$ is the probability of repeating its previous action, $p_{i}^{\hat{a}_{i}}(t)$ is the probability of selecting action $\hat{a}_{i}$, and $\tau > 0$ is the temperature. Moreover, the constrained action sets should possess the following two properties: \begin{definition}[Reachability]\label{def:reachability} For all $i \in \mathcal{I}$ and any action pair $a_{i}^{0},a_{i}^{m} \in \mathcal{A}_{i}$, there exists a sequence of actions $a_{i}^{0} \to a_{i}^{1} \to \cdots \to a_{i}^{m-1} \to a_{i}^{m}$ satisfying $a^{k}_{i} \in \mathcal{A}^{\text{cons}}_{i}(a^{k-1}_{i})$, $\forall \; k \in \{1,\cdots,m\}$. \end{definition} \begin{definition}[Reversibility]\label{def:reversability} For all $i \in \mathcal{I}$ and any action pair $a_{i}^{0},a_{i}^{1} \in \mathcal{A}_{i}$, if $a^{1}_{i} \in \mathcal{A}^{\text{cons}}_{i}(a^{0}_{i})$, then we have $a^{0}_{i} \in \mathcal{A}^{\text{cons}}_{i}(a^{1}_{i})$. \end{definition} Note that Definition \ref{def:reachability} implies that any action profile in $\mathcal{A}$ can be reached in finite time steps. Definition \ref{def:reversability} means that each agent can go back to its previous action. \begin{theorem} (see \cite{marden_revisiting})\label{thm:blll} Consider a potential game with constrained action sets that satisfy the reachability and reversibility properties. 
BLLL ensures that the support of the stationary distribution is the set of potential maximizers, as $\tau\rightarrow 0$. \end{theorem} We next introduce the concept of an asynchronous best reply process over constrained action sets, which is a process where each agent locally improves its own utility function when it is its turn to alter its action. \begin{definition}\label{def:best-reply} An asynchronous best reply process over constrained action sets is defined as follows. At each time $t>0$, an agent $i$ is randomly chosen (uniformly) and allowed to alter its action. All other agents repeat their current action, i.e. $a_{-i}(t) = a_{-i}(t-1)$. Agent $i$ then selects an action $\hat{a}_{i}$ uniformly from its constrained action set, i.e. $\hat{a}_{i}\sim\text{unif}(\mathcal{A}_{i}^{\text{cons}}(a_{i}(t-1)))$. It then plays the action which maximizes its utility function: ${a}_{i}(t) \in \big\{a_{i}\in\left\{\hat{a}_i,a_{i}(t-1)\right\}:U_{i}(a_{i},a_{-i}(t-1)) = $ $\max\left\{ U_{i}(a(t-1)),U_{i}(\hat{a}_{i},a_{-i}(t-1))\right\}\big\}$. \end{definition} The best reply process does not necessarily maximize the overall potential function of the game as it may result in a suboptimal Nash equilibrium. When $\tau=0$, the BLLL algorithm boils down to an asynchronous best reply process on the constrained action sets. A $\tau>0$ then allows each agent to occasionally select locally suboptimal moves, i.e. it selects an action that decreases its local utility with a non-zero probability. These occasional suboptimal moves are useful as they prevent the agents from converging to a suboptimal Nash equilibrium. The BLLL algorithm can then be thought of as a perturbation of the asynchronous best reply process, where the size of the perturbation is controlled by the temperature $\tau$. This idea is formalized in Section \ref{sec:BLLL_perturbed}. 
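A single BLLL iteration, as described by (\ref{eq:blll-1})--(\ref{eq:blll-2}), can be sketched as follows; for small $\tau$ the softmax acceptance approaches the asynchronous best reply process. The game, utilities, and constrained sets below are illustrative placeholders, not a specific game from the paper.

```python
import math
import random

def blll_step(profile, utilities, constrained_sets, tau, rng=random):
    """One BLLL iteration: a uniformly chosen agent i draws a trial action
    from its constrained set and adopts it with the softmax probability
    over the two candidate utilities; all other agents repeat their actions."""
    a = list(profile)
    i = rng.randrange(len(a))
    trial = rng.choice(constrained_sets[i](a[i]))
    u_old = utilities[i](tuple(a))
    a_trial = a[:]
    a_trial[i] = trial
    u_new = utilities[i](tuple(a_trial))
    # numerically stable two-way softmax (subtract the max utility)
    m = max(u_old, u_new)
    p_new = math.exp((u_new - m) / tau) / (
        math.exp((u_old - m) / tau) + math.exp((u_new - m) / tau))
    if rng.random() < p_new:
        a[i] = trial
    return tuple(a)
```

With $\tau$ near zero the agent almost surely keeps whichever of the two actions has higher utility, recovering the best reply; larger $\tau$ admits occasional utility-decreasing moves.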
\subsection{Resistance Trees}\label{sec:resistance} In this part, we briefly review the concept of resistance trees, which we will use in our subsequent sections. We refer the readers to \cite{young_evolution} for a detailed discussion. \subsubsection{Resistance Trees}\label{sec:resistance_trees} Let $P^{0}$ be a stationary Markov chain defined on a state space $X$. We call this the \textit{unperturbed} process. The process $P^{\epsilon}$ is then called a \textit{regular perturbed Markov process} if it satisfies the following conditions: \begin{enumerate} \item $P^{\epsilon}$ is aperiodic and irreducible; \item $\lim_{\epsilon \rightarrow 0} P^{\epsilon}(x \rightarrow y) = P^{0}(x \rightarrow y)$, $\forall \;x,y \in X$, where $P^{\epsilon}(x \rightarrow y)$ and $P^{0}(x \rightarrow y)$ denote the transition probabilities from state $x$ to $y$ of processes $P^{\epsilon}$ and $P^{0}$ respectively; \item if $P^{\epsilon}(x \rightarrow y)>0$, for some $\epsilon >0$, then there exists some $R(x \rightarrow y) \geq 0$, such that $0<{\lim_{\epsilon \rightarrow 0}} \epsilon^{-R(x \rightarrow y)}P^{\epsilon}(x \rightarrow y) < \infty$, where we refer to $R(x \rightarrow y)$ as the resistance of the transition from state $x$ to $y$. \end{enumerate} Construct a tree $T$ with $\left\vert{X}\right\vert$ vertices, one for each state, rooted at some vertex $z$, such that there exists a unique directed path to $z$ from every other vertex. The weight of a directed edge from vertex $x$ to $y$ is given by the resistance $R(x \rightarrow y)$. Such a tree is called a resistance tree, whose resistance is given by the sum of the resistances of the $\left\vert{X}\right\vert - 1$ edges that compose it. Since $P^{\epsilon}$ is aperiodic and irreducible (the first condition of the regular perturbed Markov process), there exists a unique stationary distribution $\mu^{\epsilon}$ for a given $\epsilon$.
Define $p^{\epsilon}_z = \sum_{T\in\mathcal{T}_z}\prod_{[x,y]\in T}P^{\epsilon}(x \rightarrow y)$, where $[x,y]$ is the directed edge from vertex $x$ to $y$ and $\mathcal{T}_{z}$ denotes the set of all the trees that are rooted at $z$. Then, we have \begin{align}\label{eq:res-2} \mu^{\epsilon}_z = \frac{p^{\epsilon}_z}{\sum_{z'\in X} {p^{\epsilon}_{z'}}}, \end{align} where $\mu_{z}^{\epsilon}$ denotes the probability of state $z$ in the stationary distribution. The stochastic potential of state $z$ is then defined as the minimum resistance among all the trees that are rooted at $z$: \begin{equation}\label{eq:res-1} \gamma(z) = \min_{T \in \mathcal{T}_{z}} \sum_{[x,y] \in T}R(x \rightarrow y). \end{equation} \begin{theorem} (see \cite{young_evolution})\label{thm:resistance-tree} Let $P^{\epsilon}$ be a regular perturbed Markov process of $P^{0}$ and $\mu^{\epsilon}$ be its unique stationary distribution. Then \begin{enumerate} \item $\lim_{\epsilon \rightarrow 0}\mu^{\epsilon} = \mu^{0}$ exists,\footnote{The perturbations effectively select one of the stationary distributions of $P^0$.} where $\mu^{0}$ is a stationary distribution of $P^{0}$; \item We have $\mu_{x}^{0}>0$ iff $\gamma(x) \leq \gamma(y)$, $\forall \; y \in X$, where $\mu_{x}^{0}$ denotes the probability of state $x$ in the stationary distribution $\mu^0$. \end{enumerate} \end{theorem} Theorem \ref{thm:resistance-tree} shows that the stochastically stable states (the support of the stationary distribution $\mu^{0}$) are the states with the minimum stochastic potential, i.e. $\mu_{x}^{0}>0$ if and only if $x$ minimizes $\gamma$. Informally, resistance (of a transition) is a measure of how difficult that transition is. The greater the resistance, the more difficult (less likely) the transition. So the resistance of a tree rooted at state $x$ is a measure of how difficult it is for other states to transit to $x$.
Thus, a state with minimum stochastic potential is, informally speaking, a state that is the easiest to get to, as compared to other states. We will utilize this metaphor of difficulty in Section \ref{sec:toy_game} when explaining some of our results. \subsubsection{BLLL as a Regular Perturbed Markov process}\label{sec:BLLL_perturbed} The BLLL algorithm induces a regular perturbed Markov process, with the unperturbed process corresponding to the asynchronous best reply process defined in Section \ref{sec:BLLL} \cite{marden_revisiting}. The probability of a feasible transition $a^{0} \rightarrow a^{1} = (a_{i}^{1},a^{0}_{-i})$ (where agent $i$ alters its action and $a^0,a^1 \in \mathcal{A}$) is then given by \begin{equation}\label{eq:log-res-3} P^{\epsilon}(a^{0} \rightarrow a^{1}) = {1 \over n|\mathcal{A}_{i}^{\text{cons}}(a_{i}^0)|}{\epsilon^{-U_{i}(a_{i}^{1},a_{-i}^{0})} \over { \epsilon^{-U_{i}({a}_{i}^{1},a_{-i}^{0})} + \epsilon^{-U_{i}({a}_{i}^{0},a_{-i}^{0})}}}, \end{equation} where $ \epsilon = e^{-\frac{1}{\tau}}$. As shown in \cite{marden_revisiting}, the resistance of this transition is as follows: \begin{equation}\label{eq:log-res-4} R(a^{0} \rightarrow a^{1}) = V_{i}(a^{0},a^{1}) - U_{i}(a^{1}), \end{equation} where $V_{i}(a^{0},a^{1}) = \max \{ U_{i}(a^{0}),U_{i}(a^{1})\}$. Based on the theory of resistance trees, it can be shown that only the action profiles that maximize the potential function have the minimum stochastic potential \cite{young_evolution}. This in turn means that the stochastically stable states of BLLL are the set of potential maximizers, as also stated in Theorem \ref{thm:blll}. \subsection{Stochastic Communication Links}\label{sec:comm_links} Most of the current research in the area of motion planning of multi-robot systems assumes over-simplified channel models. For instance, it is common to assume perfect links or links that are perfect within a certain radius.
In reality, however, communication links are best modeled stochastically. More specifically, the received channel to noise ratio (CNR) is a multi-scale random process with three major components: distance-dependent path loss, shadowing and multipath fading \cite{goldsmith_wireless}. See Fig. \ref{fig:Allscales_blue_gray} for a real example. In the current literature on potential games, it is assumed that each agent is connected to all the other agents that can impact its next step utility function for all the possible actions in its constrained set. If the wireless channel is modeled as a disk with a known radius, then this can be achieved by properly designing the constrained action set. However, in the case of realistic communication links, this is simply not the case. More specifically, it is not possible for every agent to truly evaluate its utility function as other agents with whom it cannot communicate may be influencing it. Thus, realistic communication links have considerable implications for distributed decision making using potential games. It is the goal of this paper to bring an understanding of their impact on BLLL and derive sufficient conditions (on link quality and temperature) to guarantee a target probability for the set of potential maximizers (in the stationary distribution) in the presence of stochastic links. \begin{figure}[h] \centering \mbox{\epsfig{figure=Allscales_blue_gray.eps,height=1.6in,width=2.6in}} \vspace{-0.15in} \caption{Underlying dynamics of the received signal power across an indoor route \cite{malmirchegini_spatial}.} \vspace{-0.1in} \label{fig:Allscales_blue_gray} \end{figure} \section{Impact of Stochastic Communication Links on Binary Log-Linear Learning}\label{sec:impact} In this section, we characterize the impact of imperfect communication on the outcome of the BLLL algorithm.
We first prove that, given any arbitrarily-high target probability of the set of potential maximizers, there exist a connectivity probability and a temperature $\tau$ that can achieve it. We then give an illustrative example to provide a deeper understanding of our results. \subsection{BLLL with Stochastic Communication Links}\label{sec:BLLL_links} Consider the case where the communication graph among the agents is given by an undirected random graph $\mathcal{C}(a) = (\mathcal{I}, \mathcal{E}(a))$, where $\mathcal{E}(a)$ denotes the set of edges, i.e.\ the communication links among the agents. Then, the probability of having a link (probability of connectivity) between agents $i$ and $j$ is given by $p_{\text{c},j,i}(a) = p_{\text{c},i,j}(a) = p_{\text{c}}(a_i, a_j)$, where we take $p_{\text{c},i,i}(a) = 1$, for all $a\in \mathcal{A}$ and $i\in\mathcal{I}$.\footnote{Notation not to be confused with $p_{z}^{\epsilon}$, which was used in (\ref{eq:res-2}).} Note that we have taken the probability of connectivity (and subsequently the communication graph) to be action dependent to make our analysis more general (which naturally implies a time-varying graph). For instance, when the action of an agent involves its position, then the probability of connectivity becomes a function of the action profile. In this paper, we assume that the probabilities of connectivity of different links are independent of each other. We further assume that the communication graph is drawn independently in each iteration. As mentioned in Section \ref{sec:BLLL}, in each iteration of the BLLL algorithm, an agent is chosen randomly (uniformly) to alter its action. Meanwhile, a realization of the communication graph is drawn from the random graph $\mathcal{C}(a)$. Let $\mathcal{I}_{\text{c},i}$ be the corresponding realization of the set of agents that agent $i$ can communicate with.
The probability of realization $\mathcal{I}_{\text{c},i}$ is given by $p_{\text{c},i}(\mathcal{I}_{\text{c},i},a) = \prod_{j\notin\mathcal{I}_{\text{c},i}}(1-p_{\text{c},i,j}(a)) \prod_{j\in\mathcal{I}_{\text{c},i}} p_{\text{c},i,j}(a)$. Note that $\mathcal{I}_{\text{c},i} = \mathcal{I}$ corresponds to the case where agent $i$ can hear from all the other agents. Also, since the probability of connectivity is state-dependent, $p_{\text{c},i}(\mathcal{I}_{\text{c},i},a)$ is also a function of $a$. The agent then has to assess its local utility and determine its action based on incomplete information. To represent this, we extend the definition of the utility function $U_{i}:\mathcal{A} \to \mathbb{R}$ such that $U_{i}(a|\mathcal{I}_{\text{c},i})$ is well defined for all $a \in \mathcal{A}$ and all $\mathcal{I}_{\text{c},i}$, where $U_{i}(a|\mathcal{I}_{\text{c},i})$ is the evaluated local utility function of agent $i$ given that it only communicates with agents in $\mathcal{I}_{\text{c},i}$. One possibility for evaluating $U_{i}(a|\mathcal{I}_{\text{c},i})$ is that the agent ignores the impact of agents not in $\mathcal{I}_{\text{c},i}$. Another possible strategy is for an agent to assume the last communicated action for the agents it is unable to communicate with.\footnote{However, evaluating which is a better strategy becomes case dependent and is an avenue for future work.} In order to evaluate the impact of the stochastic communication links on the learning dynamics, we start with a temperature-dependent probability of connectivity of the form $p_{\text{c},i,j}(a) = {1\over 1+\epsilon^{m_{i,j}(a)}},\;\forall i,j\in \mathcal{I},\;\forall a\in \mathcal{A}$, where $m_{i,j}(a)>0$ is a constant. Based on our assumed form, we always have $p_{\text{c},i,j}(a) > 0.5$.
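The link model and the realization probability $p_{\text{c},i}(\mathcal{I}_{\text{c},i},a)$ can be sketched as follows; the function names are illustrative, and the products are taken over the agents other than $i$, which is harmless since $p_{\text{c},i,i} = 1$:

```python
def connectivity_prob(eps, m):
    # One link: p_c = 1 / (1 + eps^m) with m > 0; always above 0.5 for eps < 1.
    return 1.0 / (1.0 + eps ** m)

def realization_prob(heard, others, p_link):
    # p_{c,i}(I_{c,i}, a): with independent links, agent i hears exactly
    # the agents in `heard` with the product probability below;
    # p_link[j] holds p_{c,i,j}(a) for each other agent j.
    prob = 1.0
    for j in others:
        prob *= p_link[j] if j in heard else (1.0 - p_link[j])
    return prob
```

Since the links are drawn independently, the probabilities of all $2^{|\mathcal{I}|-1}$ realizations sum to one.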
Note that for $p_{\text{c},i,j}(a) = p_{\text{c}},\;\forall a\in \mathcal{A},\; \forall i,j \in \mathcal{I}$, $p_{\text{c}}$ need not have this temperature-dependent form, as we will show in our result (Theorem \ref{thm:blll-comm-2}). The probability of the transition $a^0 \rightarrow a^1 = (a^1_i, a^0_{-i})$ in the presence of stochastic communication links can then be characterized as follows: \begin{align} & P^{\epsilon}_c(a^{0} \rightarrow a^{1}) = \sum_{\mathcal{I}_{\text{c},i}}{p_{\text{c},i}(\mathcal{I}_{\text{c},i},a^0)P^{\epsilon}_c(a^{0} \rightarrow a^{1}|\mathcal{I}_{\text{c},i})} \nonumber \\ & = {1 \over n|\mathcal{A}_{i}^{\text{cons}}(a_{i}^0)|}\sum_{\mathcal{I}_{\text{c},i}}{p_{\text{c},i}(\mathcal{I}_{\text{c},i},a^0)\epsilon^{-U_{i}(a^{1}|{\mathcal{I}_{\text{c},i}})} \over { \epsilon^{-U_{i}({a}^{1}|{\mathcal{I}_{\text{c},i}})} + \epsilon^{-U_{i}({a}^{0}|{\mathcal{I}_{\text{c},i}})}}} \nonumber \end{align} \scriptsize \vspace{-0.1in} \begin{align*} = {1 \over n|\mathcal{A}_{i}^{\text{cons}}(a_{i}^0)|}\sum_{\mathcal{I}_{\text{c},i}}\frac{\epsilon^{-U_{i}(a^{1}|{\mathcal{I}_{\text{c},i}}) + \sum_{j\notin \mathcal{I}_{\text{c},i}} m_{i,j}(a^0)}}{(\epsilon^{-U_{i}({a}^{1}|{\mathcal{I}_{\text{c},i}})} + \epsilon^{-U_{i}({a}^{0}|{\mathcal{I}_{\text{c},i}})})\prod_{j \in \mathcal{I}}(1+\epsilon^{m_{i,j}(a^0)})}. \end{align*} \normalsize It can be seen that expressing the probability of connectivity in this fashion ensures that BLLL in the presence of stochastic communication links induces a regular perturbed Markov process with the unperturbed process as the asynchronous best reply process (Definition \ref{def:best-reply}). 
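As a sanity check of the algebra above: the two expressions for $P^{\epsilon}_c$ agree term by term, since $1 - p_{\text{c},i,j} = \epsilon^{m_{i,j}}/(1+\epsilon^{m_{i,j}})$. The following sketch verifies this numerically, omitting the common prefactor $1/(n|\mathcal{A}_{i}^{\text{cons}}(a_{i}^0)|)$, taking the products over the agents other than $i$, and using made-up conditional utilities indexed by subset bitmasks (all names and numbers here are illustrative):

```python
def accept(eps, u_new, u_old):
    # Boltzmann acceptance factor with eps = exp(-1/tau).
    return eps ** (-u_new) / (eps ** (-u_new) + eps ** (-u_old))

def p_sum_form(eps, m, u1, u0):
    # First form: sum over realizations I of p_{c,i}(I) times acceptance,
    # with p_{c,i,j} = 1/(1+eps^{m_j}) and independent links.
    total = 0.0
    for mask in range(1 << len(m)):
        p_real = 1.0
        for j, mj in enumerate(m):
            p_j = 1.0 / (1.0 + eps ** mj)
            p_real *= p_j if mask >> j & 1 else (1.0 - p_j)
        total += p_real * accept(eps, u1[mask], u0[mask])
    return total

def p_product_form(eps, m, u1, u0):
    # Second (factored) form displayed above: numerator
    # eps^{-U_i(a1|I) + sum_{j not in I} m_j}, denominator as shown.
    denom_prod = 1.0
    for mj in m:
        denom_prod *= 1.0 + eps ** mj
    total = 0.0
    for mask in range(1 << len(m)):
        extra = sum(mj for j, mj in enumerate(m) if not mask >> j & 1)
        num = eps ** (-u1[mask] + extra)
        den = (eps ** (-u1[mask]) + eps ** (-u0[mask])) * denom_prod
        total += num / den
    return total
```

Both forms return the same value up to floating-point rounding, which is exactly the rewriting used to obtain (\ref{eq:blll-alg-comm-2}).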
\begin{figure*}[!htb] \begin{equation}\label{eq:blll-alg-comm-2} P^{\epsilon}_c(a^{0} \rightarrow a^{1}) = {1 \over n|\mathcal{A}_{i}^{\text{cons}}(a_{i}^0)|}\sum_{\mathcal{I}_{\text{c},i}}\frac{\epsilon^{V_{i}({a}^{0}, {a}^{1}|{\mathcal{I}_{\text{c},i}}) - U_i({a}^{1}|{\mathcal{I}_{\text{c},i}}) + \sum_{j\notin \mathcal{I}_{\text{c},i}} m_{i,j}(a^0)}}{\left(\epsilon^{V_{i}({a}^{0}, {a}^{1}|{\mathcal{I}_{\text{c},i}})-U_i({a}^{1}|{\mathcal{I}_{\text{c},i}})} + \epsilon^{V_{i}({a}^{0}, {a}^{1}|{\mathcal{I}_{\text{c},i}}) -U_i({a}^{0}|{\mathcal{I}_{\text{c},i}})}\right) \prod_{j \in \mathcal{I}}\left(1+\epsilon^{m_{i,j}(a^0)}\right)}. \end{equation} \hrulefill \end{figure*} We can further show that the equation above can be expressed as shown in (\ref{eq:blll-alg-comm-2}) on top of the next page, which results in the following expression for the resistance of this transition: \begin{align} & R_{c}(a^{0} \rightarrow a^{1}) = \min_{\mathcal{I}_{\text{c},i}} \left\{ R_{c}(a^{0} \rightarrow a^{1}| \mathcal{I}_{\text{c},i}) + \sum_{j\notin \mathcal{I}_{\text{c},i}} m_{i,j}(a^0)\right\},\label{eq:res_comm} \end{align} where $R_{c}(a^{0} \rightarrow a^{1}| \mathcal{I}_{\text{c},i}) = V_{i}(a^{0},a^{1}|{\mathcal{I}_{\text{c},i}}) - U_{i}({a}^{1}|{\mathcal{I}_{\text{c},i}})$ and $V_{i}({a}^{0}, {a}^{1}|{\mathcal{I}_{\text{c},i}}) = \max\{U_i({a}^{0}|{\mathcal{I}_{\text{c},i}}),$ $U_i({a}^{1}|{\mathcal{I}_{\text{c},i}})\}$. Note that $R_{c}(a^0\to a^1|\mathcal{I}) = R(a^0\to a^1)$, where $R(a^0\to a^1)$ is the resistance in case of perfect communication (see (\ref{eq:log-res-4})). It can be seen that imperfect communication affects the transition probability from $a^0 \rightarrow a^1$, and as a result, affects its resistance. This means that the stochastically stable states may change. Hence, the outcome of the game may be significantly different as compared to the case of perfect communication. 
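The resistance bookkeeping used above can be sketched in a few lines; the function names are illustrative and utilities are plain numbers, with the same `resistance` applying to the conditional quantities $R_{c}(\cdot|\mathcal{I}_{\text{c},i})$ when the conditional utilities $U_i(\cdot|\mathcal{I}_{\text{c},i})$ are passed in:

```python
def blll_transition_prob(eps, u_new, u_old, n, n_cons):
    # Probability of the feasible transition a0 -> a1 = (a_i^1, a_-i^0)
    # under perfect communication, Eq. (log-res-3): uniform agent/trial
    # choice times a Boltzmann acceptance factor, with eps = exp(-1/tau).
    acc = eps ** (-u_new) / (eps ** (-u_new) + eps ** (-u_old))
    return acc / (n * n_cons)

def resistance(u_new, u_old):
    # R(a0 -> a1) = V_i(a0, a1) - U_i(a1), Eq. (log-res-4),
    # with V_i = max(U_i(a0), U_i(a1)).
    return max(u_new, u_old) - u_new
```

The resistance is the decay exponent of the acceptance factor: as $\epsilon \to 0$, $P^{\epsilon}(a^0 \to a^1) \sim \epsilon^{R(a^0 \to a^1)}$ up to the uniform prefactor.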
\begin{lemma} \label{lem:suff-bound} Consider a potential game where the agents employ the BLLL algorithm in the presence of stochastic communication links. Furthermore, consider constrained action sets that satisfy the reachability and reversibility properties. The states with the minimum stochastic potential are the set of potential maximizers if the following holds: \begin{align} \sum_{j\notin\mathcal{I}_{\text{c},i}} m_{i,j}(a^0)\geq R(a^{0} \rightarrow a^{1}) -R_{c}(a^{0} \rightarrow a^{1}| \mathcal{I}_{\text{c},i}),\label{eq:suff-bound-1} \end{align} for every agent $i\in\mathcal{I}$, all $\mathcal{I}_{\text{c},i}$ and all $a^{0} \rightarrow a^{1} = (a_{i}^1,a_{-i}^0)$, where $R(a^{0} \rightarrow a^{1})$ is the resistance for the case of perfect communication (see (\ref{eq:log-res-4})). \end{lemma} \begin{proof} If the conditions in the lemma hold, then the resistance of the transition from $a^{0}$ to $a^{1} = (a^1_i, a^0_{-i})$, for some agent $i$, becomes $R_{c}(a^{0} \rightarrow a^{1}) = R(a^{0} \rightarrow a^{1})$. Therefore, the resistances of the transitions do not change as compared to the case of perfect communication. The proof of the lemma then follows immediately from Lemma 5.2 and Theorem 5.1 in \cite{marden_revisiting}. \end{proof} \begin{remark}\label{remark:suff-bound} A good choice of the constants $\{m_{i,j}(a)\}_{i,j\in \mathcal{I},\;a \in \mathcal{A}}$ is one that satisfies $m_{i,j}(a) \geq \max_{a^{0}\rightarrow a^{1}}\{R(a^{0} \rightarrow a^{1})\}$. This has the advantage that there are separate conditions for each $m_{i,j}(a)$ and that they do not depend on how communication failures affect the game. \end{remark} \begin{remark} Lemma \ref{lem:suff-bound} provides sufficient conditions to guarantee that the states with the minimum stochastic potential are still the set of potential maximizers.
Equation (\ref{eq:suff-bound-1}) can be more explicitly expressed as a function of connectivity as follows: \begin{align} \sum_{j\notin\mathcal{I}_{\text{c},i}} m_{i,j}(a^0) & = \sum_{j\notin\mathcal{I}_{\text{c},i}} \log_{\epsilon}(\epsilon^{m_{i,j}(a^0)})\nonumber\\ & = \sum_{j\notin\mathcal{I}_{\text{c},i}} \log_{\epsilon}{1-p_{\text{c},i,j}(a^0) \over p_{\text{c},i,j}(a^0)}\nonumber\\ & \geq R(a^{0} \rightarrow a^{1}) -R_{c}(a^{0} \rightarrow a^{1}| \mathcal{I}_{\text{c},i}),\nonumber \end{align} for all $\mathcal{I}_{\text{c},i}$ and all $a^{0} \rightarrow a^{1}$. We can see that $\log_{\epsilon}{1-p_{\text{c},i,j}(a) \over p_{\text{c},i,j}(a)}$ is an important parameter (always positive). Furthermore, if $R_{c}(a^{0} \rightarrow a^{1}| \mathcal{I}_{\text{c},i}) \geq R(a^{0} \rightarrow a^{1}) $, then connectivity to agent $i$ is not important, since the condition is always satisfied. The following theorem shows that we can find some $\tau>0$ and $p_{\text{c},i,j}(a)<1$ to guarantee that the probability of the set of potential maximizers in the stationary distribution is larger than or equal to some required threshold. \end{remark} \begin{theorem}\label{thm:blll-comm} Consider a potential game where the agents employ the BLLL algorithm in the presence of stochastic communication links. Furthermore, consider constrained action sets that satisfy the reachability and reversibility properties. For any given $p_{\text{tar}} < 1$, there exists a $\tau_{\text{th}} > 0$, such that the probability of the set of potential maximizers in the stationary distribution is larger than or equal to $p_{\text{tar}}$, if $0<\tau \leq \tau_{\text{th}}$ and $p_{\text{c},i,j}(a) = {1\over 1+\epsilon^{m_{i,j}(a)}},\; \forall a \in \mathcal{A},\; \forall i,j \in \mathcal{I}$, where $\{m_{i,j}(a)\}_{i,j\in \mathcal{I},\;a \in \mathcal{A}}$ are constants satisfying Lemma \ref{lem:suff-bound}.
\end{theorem} \begin{proof} We construct temperature-dependent probabilities of connectivity, as discussed in Section \ref{sec:BLLL_links}, such that the constants $\{m_{i,j}(a)\}_{i,j\in \mathcal{I},\;a \in \mathcal{A}}$ satisfy Lemma \ref{lem:suff-bound}. Then, we have a regular perturbed Markov process, and the states with the minimum stochastic potential are still the set of potential maximizers. From Theorem \ref{thm:resistance-tree}, we have $\mu_x^0=\lim_{\tau \to 0}\mu_{x}^{\epsilon} = 0$, $\forall x\notin \mathcal{A}^{*}$, where $\mathcal{A}^{*} \subseteq \mathcal{A}$ is the set of potential maximizers. Thus, we know that, for any $ x \notin \mathcal{A}^{*}$, there exists a $\tau_{x}>0$ such that $\mu_{x}^{\epsilon}\leq{1-p_{\text{tar}}\over|\mathcal{A}\setminus\mathcal{A}^{*}|} $, if $0 < \tau\leq \tau_{x}$. Hence, we have \begin{align*} \sum_{a\in \mathcal{A}^{*}}\mu_{a}^{\epsilon} & = 1 - \sum_{a\notin \mathcal{A}^{*}}\mu_{a}^{\epsilon} \geq p_{\text{tar}}, \end{align*} if $0< \tau \leq \tau_{\text{th}} = \min_{x\notin \mathcal{A}^{*}}\tau_{x}$. \end{proof} The following theorem shows that we can find a sufficient lower bound on the probability of connectivity to ensure that the probability of the set of potential maximizers is larger than or equal to some required threshold, for the special case of probabilities of connectivity which are state independent and equal for all links, i.e., $p_{\text{c},i,j}(a) = p_{\text{c}}, \; \forall a \in \mathcal{A}, \; \forall i,j \in \mathcal{I}$. \begin{theorem}\label{thm:blll-comm-2} Consider a potential game where the agents employ the BLLL algorithm in the presence of stochastic communication links with probabilities of connectivity that are state independent and equal for all links, i.e., $p_{\text{c},i,j}(a) = p_{\text{c}}, \; \forall a \in \mathcal{A}, \; \forall i,j \in \mathcal{I}$. Furthermore, consider constrained action sets that satisfy the reachability and reversibility properties.
For any given $p_{\text{tar}} < 1$, there exists a $p_{\text{c},\text{th}} < 1$, such that the probability of the set of potential maximizers in the stationary distribution is larger than or equal to $p_{\text{tar}}$, if $p_{\text{c}} \geq p_{\text{c},\text{th}}$, where $\tau = {- m\over \ln\left({1-p_{\text{c}}\over p_{\text{c}}}\right)}$, with $m$ representing a constant that satisfies Lemma \ref{lem:suff-bound}. \end{theorem} \begin{proof} We construct a temperature-dependent probability of connectivity, as discussed in Section \ref{sec:BLLL_links}, such that the constant $m$ satisfies Lemma \ref{lem:suff-bound}, in order to establish a $p_{\text{c},\text{th}}$. From Theorem \ref{thm:blll-comm}, we know that there exists a $\tau_{\text{th}}>0$, such that the probability of convergence to the set of potential maximizers is larger than or equal to $p_{\text{tar}}$, if $0<\tau\leq\tau_{\text{th}}$. Consider $p_{\text{c}} \geq p_{\text{c},\text{th}} = {1\over 1+e^{-m\over\tau_{\text{th}}}}$. Then, the temperature associated with this probability of connectivity is $\tau = {- m\over \ln\left({1-p_{\text{c}}\over p_{\text{c}}}\right)} = \tau_{\text{th}}{\ln\left({1-p_{\text{c},\text{th}}\over p_{\text{c},\text{th}}}\right)\over\ln\left({1-p_{\text{c}}\over p_{\text{c}}}\right)}\leq\tau_{\text{th}}$. Thus, the probability of the set of potential maximizers is larger than or equal to $p_{\text{tar}}$, if $p_{\text{c}} \geq p_{\text{c},\text{th}} $ with $\tau = {- m\over \ln\left({1-p_{\text{c}}\over p_{\text{c}}}\right)}$. \end{proof} Recall that the BLLL algorithm allows perturbations from the asynchronous best reply process, so that agents may intentionally choose locally sub-optimum actions with a small probability. However, in the imperfect communication case, the failure of communication links also causes the agents to unintentionally make sub-optimum decisions.
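The interplay between $p_{\text{c},\text{th}}$ and $\tau$ in Theorem \ref{thm:blll-comm-2} can be checked numerically with a short sketch (function names are illustrative):

```python
import math

def pc_threshold(tau_th, m):
    # Connectivity threshold from the proof of Theorem (blll-comm-2):
    # p_c,th = 1 / (1 + e^{-m/tau_th}).
    return 1.0 / (1.0 + math.exp(-m / tau_th))

def tau_of_pc(pc, m):
    # Temperature associated with a connectivity p_c:
    # tau = -m / ln((1 - p_c) / p_c).
    return -m / math.log((1.0 - pc) / pc)
```

At $p_{\text{c}} = p_{\text{c},\text{th}}$ the associated temperature is exactly $\tau_{\text{th}}$, and it decreases as $p_{\text{c}}$ grows toward $1$, so $p_{\text{c}} \geq p_{\text{c},\text{th}}$ indeed forces $\tau \leq \tau_{\text{th}}$.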
Given some $p_{\text{tar}}$, our sufficient conditions then aim to restrict how often such unintentional mistakes can be made. One observation in the proof of Theorem \ref{thm:blll-comm} is that, in general, $\tau_{\text{th}}$ and $p_{\text{c},i,j}(a)$ are decreasing and increasing functions of $p_{\text{tar}}$, respectively, i.e.\ if a higher $p_{\text{tar}}$ is required, then a smaller temperature and better connectivity are needed. This is because when $p_{\text{tar}}$ is higher, less perturbation from the asynchronous best reply process is allowed. Hence, the agents have to assess their local utilities more accurately, which requires better connectivity. Theorem \ref{thm:blll-comm} can easily be extended to the case of Log-Linear Learning. We skip the details for brevity. \subsection{An illustrative example}\label{sec:toy_game} In this part, we provide an example to give a better understanding of the impact of imperfect communication and of the results of Theorem \ref{thm:blll-comm}. Consider a 2-agent game where the action sets of the players are given by $ \mathcal{A}_{1} = \{\text{T},\text{B}\} $ (top, bottom) and $ \mathcal{A}_{2} = \{\text{L},\text{R}\}$ (left, right).
The utility function $U_{i}:\mathcal{A}=\mathcal{A}_{1} \times \mathcal{A}_{2} \to \mathbb{R}$ is given by \begin{center} \begin{tabular}{ r|c|c| } \multicolumn{1}{l}{$U_{1}$} & \multicolumn{1}{c}{$\text{L}$} & \multicolumn{1}{c}{$\text{R}$} \\ \cline{2-3} $\text{T}$ & 1 & 3 \\ \cline{2-3} $\text{B}$ & 3 & 1 \\ \cline{2-3} \end{tabular} \hspace{0.4in} \begin{tabular}{ r|c|c| } \multicolumn{1}{l}{$U_{2}$} & \multicolumn{1}{c}{$\text{L}$} & \multicolumn{1}{c}{$\text{R}$} \\ \cline{2-3} $\text{T}$ & 1 & 2 \\ \cline{2-3} $\text{B}$ & 4 & 1 \\ \cline{2-3} \end{tabular} \end{center} This is a potential game with the following potential function $\phi:\mathcal{A} \rightarrow \mathbb{R}$: \begin{center} \begin{tabular}{ r|c|c|c| } \multicolumn{1}{l}{} & \multicolumn{1}{c}{$U_{1}$} & \multicolumn{1}{c}{$U_{2}$} & \multicolumn{1}{c}{$\phi$}\\ \cline{2-4} $a^1 = (\text{B},\text{R})$ & 1 & 1 & 1 \\ \cline{2-4} $a^2 = (\text{T},\text{L})$ & 1 & 1 & 2\\ \cline{2-4} $a^3 = (\text{T},\text{R})$& 3 & 2 & 3\\ \cline{2-4} $a^4 = (\text{B},\text{L})$ & 3 & 4 & 4\\ \cline{2-4} \end{tabular} \end{center} For the case where the two nodes cannot communicate, we take $U_{1}(a|\{1\}) = \left\{\begin{array}{ll}3 & \text{if }a_1 = \text{T}\\ 1 & \text{if }a_1 = \text{B}\end{array}\right.$ and $U_{2}(a|\{2\}) = \left\{\begin{array}{ll}1 & \text{if }a_2 = \text{L}\\ 2 & \text{if }a_2 = \text{R}\end{array}\right.$. We next find the stochastic potential for each action profile, i.e.\ the minimum resistance of the tree rooted at each action profile, by using (\ref{eq:res-1}). For the case of perfect communication, the (only) state with minimum stochastic potential is $a^4$. For the case of imperfect communication, we consider the scenario where the probability of connectivity is state-independent, i.e., $p_{\text{c}}=p_{\text{c},1,2} = p_{\text{c},2,1}$. Let $m = m_{1,2}=m_{2,1}=\log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}$. 
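Following Remark \ref{remark:suff-bound}, a sufficient choice of the constant is $m \geq \max_{a^{0}\rightarrow a^{1}} R(a^{0} \rightarrow a^{1})$; for this game the maximum can be enumerated directly (a small sketch with illustrative variable names):

```python
# Utility tables of the 2-agent toy game.
U = {1: {('T', 'L'): 1, ('T', 'R'): 3, ('B', 'L'): 3, ('B', 'R'): 1},
     2: {('T', 'L'): 1, ('T', 'R'): 2, ('B', 'L'): 4, ('B', 'R'): 1}}

def resistance(i, a_old, a_new):
    # R(a0 -> a1) = max(U_i(a0), U_i(a1)) - U_i(a1), Eq. (log-res-4).
    return max(U[i][a_old], U[i][a_new]) - U[i][a_new]

# Enumerate all unilateral transitions (one agent flips its action).
transitions = []
for a1 in 'TB':
    for a2 in 'LR':
        transitions.append((1, (a1, a2), ('B' if a1 == 'T' else 'T', a2)))
        transitions.append((2, (a1, a2), (a1, 'R' if a2 == 'L' else 'L')))

max_resistance = max(resistance(i, a, b) for i, a, b in transitions)
```

Here `max_resistance` equals $3$, attained by $a^4 \rightarrow a^1$ (agent $2$ giving up utility $4$ for $1$), which matches the sufficient condition $m \geq 3$ obtained from Lemma \ref{lem:suff-bound}.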
By constructing all the resistance trees and minimizing (\ref{eq:res-1}), we find that $a^4$ still has the minimum stochastic potential if and only if $m = \log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}>1$. From (\ref{eq:suff-bound-1}) in Lemma \ref{lem:suff-bound}, our derived sufficient condition for the probability of connectivity to guarantee that $a^4$ still has the minimum stochastic potential can be found as $m = \log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}\geq 3$. \begin{figure}[h] \centering \mbox{\epsfig{figure=var_temp_pcomm_3.eps,height=2.8in,width=3.6in}} \vspace{-0.3in} \caption{The color map corresponds to the probability of the potential maximizer as a function of $\tau$ and $p_{\text{c}}$. The blue line with empty circle markers shows $p_{\text{c}} = {1 \over 1+e^{-3/\tau}}$, while the black line with filled circle markers shows $p_{\text{c}} = {1 \over 1+e^{-1/\tau}}$.} \vspace{-0.1in} \label{fig:var_temp_m} \end{figure} By evaluating (\ref{eq:res-2}), we calculate the stationary distribution for the case of imperfect communication to see how the probability of the potential maximizer ($a^4$) changes as a function of $p_{\text{c}}$ and $\tau$, as shown in Fig.\ \ref{fig:var_temp_m}. The blue line with empty circle markers in the figure represents the curve $p_{\text{c}} = {1 \over 1+e^{-3/\tau}}$ ($m=3$), while the black line with filled circle markers represents $p_{\text{c}} = {1 \over 1+e^{-1/\tau}}$ ($m=1$). It can be seen that given any $p_{\text{tar}}$, the required probability of the potential maximizer can always be achieved by choosing a fixed $m>1$ and finding some appropriate $\tau$ and $p_{\text{c}}$. (Theorem \ref{thm:blll-comm} shows a sufficient condition for this, where $m\geq 3$). Informally, as discussed in Section \ref{sec:resistance}, this is because the state with the minimum stochastic potential (in this case, the potential maximizer $a^4$) is the easiest to transit to. 
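The qualitative behavior just described can be reproduced by solving for the stationary distribution of the induced four-state chain directly (a small linear system rather than evaluating (\ref{eq:res-2})). In the sketch below, the fallback to the marginal utilities $U_i(\cdot|\{i\})$ for a non-communicating agent, and all function names, are illustrative modeling choices, not the code behind Fig.~\ref{fig:var_temp_m}:

```python
# BLLL with i.i.d. link failures for the 2-agent toy game; an agent that
# fails to hear the other one evaluates the marginal utilities U_i(.|{i}).
U = {1: {('T', 'L'): 1, ('T', 'R'): 3, ('B', 'L'): 3, ('B', 'R'): 1},
     2: {('T', 'L'): 1, ('T', 'R'): 2, ('B', 'L'): 4, ('B', 'R'): 1}}
UBAR = {1: {'T': 3, 'B': 1}, 2: {'L': 1, 'R': 2}}
STATES = [('T', 'L'), ('T', 'R'), ('B', 'L'), ('B', 'R')]

def accept(eps, u_new, u_old):
    # Boltzmann acceptance factor, eps = exp(-1/tau).
    return 1.0 / (1.0 + eps ** (u_new - u_old))

def transition_matrix(eps, m):
    # p_c = 1/(1+eps^m); uniform prefactor 1/(n |A_i^cons|) = 1/4 here.
    pc = 1.0 / (1.0 + eps ** m)
    idx = {s: k for k, s in enumerate(STATES)}
    P = [[0.0] * 4 for _ in range(4)]
    for a in STATES:
        flips = [(1, ('B' if a[0] == 'T' else 'T', a[1])),
                 (2, (a[0], 'R' if a[1] == 'L' else 'L'))]
        for i, b in flips:
            ai, bi = (a[0], b[0]) if i == 1 else (a[1], b[1])
            p = (pc * accept(eps, U[i][b], U[i][a])
                 + (1.0 - pc) * accept(eps, UBAR[i][bi], UBAR[i][ai])) / 4.0
            P[idx[a]][idx[b]] = p
        P[idx[a]][idx[a]] = 1.0 - sum(P[idx[a]][j] for j in range(4) if j != idx[a])
    return P

def stationary(P):
    # Solve pi P = pi with sum(pi) = 1 by Gaussian elimination.
    n = len(P)
    A = [[P[j][i] - (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv], b[c], b[piv] = A[piv], A[c], b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x
```

The assertions below check the transition behavior at $\tau = 0.1$: for $m = 3$ almost all stationary mass sits on the potential maximizer $a^4 = (\text{B},\text{L})$, while for $m = 0.5 < 1$ it shifts away from $a^4$.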
The curve $m=1$, i.e., $p_{\text{c}} = {1 \over 1+e^{-1/\tau}}$ could be thought of as a transition curve, as curves above it can achieve any $p_{\text{tar}}$, while those below it cannot. This is due to the fact that above this curve $a^4$ is the only state with minimum stochastic potential, while, on this curve, the states with minimum stochastic potential are $a^4$ and $a^3$. On the other hand, below this curve, $a^3$ is the (only) state with minimum stochastic potential. Informally, this means that the potential maximizer, $a^4$, becomes more difficult to transit to as compared to $a^3$ when $m < 1$. In fact, for $m<1$, the probability of the potential maximizer $a^4$ becomes arbitrarily small as $\tau \to 0$. Note that we have plotted the y-axis only down to 0.88 for better visibility. Also, note that a case of connectivity of 96\%, for instance, means that the packets are dropped 4\% of the time, which is a typical value for several scenarios\cite{boyce1998packet}. \begin{figure}[h] \centering \mbox{\epsfig{figure=m_curves_4.eps,height=2in,width=3.2in}} \vspace{-0.15in} \caption{Probability of potential maximizer ($a^4$) as a function of temperature for different values of $\log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}$.} \vspace{-0.15in} \label{fig:m_curves} \end{figure} Fig. \ref{fig:m_curves} and \ref{fig:var_m} better highlight the transition behavior. Fig.\ \ref{fig:m_curves} shows the probability of the potential maximizer as a function of the temperature, for various values of $\log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}$ ($m$). The transition behavior of the curve $m = \log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}} = 1$, is clearly observed. Finally, Fig.\ \ref{fig:var_m} shows the probability of the potential maximizer as a function of $\log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}$ for various values of temperature $\tau$. The transition point can clearly be seen at $\log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}} = 1$. 
\begin{figure}[h] \centering \mbox{\epsfig{figure=var_m_4.eps,height=1.9in,width=2.8in}} \vspace{-0.15in} \caption{Probability of potential maximizer ($a^4$) vs $\log_{\epsilon}{1-p_{\text{c}} \over p_{\text{c}}}$ for different temperatures.} \vspace{-0.27in} \label{fig:var_m} \end{figure}
\section{Introduction} Harder's reduction theory (\cite{Harder:1968}, \cite{Harder:1969}) provides filtrations of Euclidean buildings that allow one to deduce cohomological (\cite{Harder:1977}) and homological (\cite{Stuhler:1980}, \cite{Bux/Wortman}) properties of $S$-arithmetic groups over global function fields. In this survey I will sketch the main points of Harder's reduction theory, starting from Weil's geometry of numbers and the Riemann--Roch theorem. I will describe a filtration, used for example in \cite{Behr:1998}, that is particularly useful for deriving finiteness properties of $S$-arithmetic groups. Finally, I will state the recently established rank theorem and some of its earlier partial verifications that do not restrict the cardinality of the underlying field of constants. As a motivation for further research I also state a much more general conjecture on isoperimetric properties of $S$-arithmetic groups over global fields (number fields or function fields). \medskip \noindent {\bf Acknowledgements.} The author expresses his gratitude to Kai-Uwe Bux, Bernhard M\"uhlherr and Stefan Witzel for numerous invaluable discussions on the topics of this survey during joint research activities at the Hausdorff Institute of Mathematics at Bonn, the MFO at Oberwolfach, and the University of Bielefeld. The author also thanks Kai-Uwe Bux, Max Horn, Timoth\'ee Marquis, Andreas Mars, Susanne Schimpf, Rebecca Waldecker, Markus-Ludwig Wermer, Stefan Witzel and the participants of the reduction theory seminar at Bielefeld during summer 2010, especially Werner Hoffmann and Andrei Rapinchuk, for numerous comments, remarks and suggestions on how to improve the contents and exposition of this survey. Furthermore, the author thanks two anonymous referees for valuable comments and observations. \section{Projective varieties} \begin{introduction} In this section I give a quick introduction to the concept of projective varieties.
For further reading the sources \cite[I]{Hartshorne:1977} and \cite[2]{Niederreiter/Xing:2009} are highly recommended. \end{introduction} \begin{definition} \label{begin} Let $k$ be a perfect field, let $\overline{k}$ be its algebraic closure, and let $S \subset \overline{k}[X] = \overline{k}[x_0,x_1,...,x_n]$ be a set of {homogeneous} polynomials. The set $$Z(S) = \{ P \in \mathbb{P}_n(\overline{k}) \mid f(P) = 0 \mbox{ for all $f \in S$} \}$$ is called a {\bf projective algebraic set}. The algebraic set $Z(S)$ is {\bf defined over $k$}, if $S$ can be chosen to be contained in $k[X]$. The {\bf Zariski topology} on $\mathbb{P}_n(\overline{k})$ is defined by taking the closed sets to be the projective algebraic sets. A non-empty projective algebraic set in $\mathbb{P}_n(\overline{k})$ is called a {\bf projective variety}, if it is irreducible in the Zariski topology of $\mathbb{P}_n(\overline{k})$, i.e., if it is not equal to the union of two proper closed subsets. It is called a {\bf projective $k$-variety} if it is defined over $k$. The {\bf dimension} of a non-empty projective variety is defined to be its dimension as a topological space in the induced Zariski topology, i.e., the supremum of all integers $n$ such that there exists a chain $Z_0 \subsetneq Z_1 \subsetneq \cdots \subsetneq Z_n$ of distinct non-empty irreducible closed subsets. \end{definition} \begin{theorem}[{\cite[2.3.8]{Niederreiter/Xing:2009}}] A projective algebraic set $V$ is a projective variety if and only if the ideal $I(V)$ of $\overline{k}[X]$ generated by the set $\{ f \in \overline{k}[X] \mid \mbox{ $f$ homogeneous and $f(P)=0$ for all $P \in V$} \}$ is a prime ideal of $\overline{k}[X]$. \end{theorem} \begin{definition} A non-empty intersection of a projective variety in $\mathbb{P}_n(\overline{k})$ with an open subset of $\mathbb{P}_n(\overline{k})$ is called a {\bf quasi-projective variety}. \end{definition} Let $V \subseteq \mathbb{P}_n(\overline{k})$ be a quasi-projective variety. 
For each $P \in V$ there exists a hyperplane $H \subset \mathbb{P}_n(\overline{k})$ with $P \not\in H$. Then $P \in V \backslash H = V \cap A_n(\overline{k})$ for $A_n(\overline{k}) := \mathbb{P}_n(\overline{k}) \backslash H$. Let $r$ be a defining relation of the hyperplane $H$ in the variables $x_0, ..., x_n$ (cf.\ \ref{begin}). A $\overline{k}$-valued function $f$ on $V$ is called {\bf regular at $P$}, if there exists a neighbourhood $N$ of $P$ in $V \cap A_n(\overline{k})$ such that there exist polynomials $a, b \in \overline{k}[X]/(r)$ with $b(Q) \neq 0$ for all $Q \in N$ and $f_{|N} = \frac{a_{|N}}{b_{|N}}$. The function $f$ is {\bf regular} on a non-empty open subset $U$ of $V$, if it is regular at every point of $U$. \begin{definition}\label{localring} Let $V$ be a quasi-projective variety and let $P \in V$. Then the {\bf local ring $\mathcal{O}_P = \mathcal{O}_P(V)$ at $P$} is defined as the ring of germs $[f]$ of functions $f : V \to \overline{k}$ which are regular on a neighbourhood of $P$. In other words, an element of $\mathcal{O}_P$ is an equivalence class of pairs $(U,f)$ where $U$ is an open subset of $V$ containing $P$, and $f$ is a regular function on $U$, and two such pairs $(U,f)$ and $(U',g)$ are equivalent if $f_{|U \cap U'} = g_{|U \cap U'}$. For a non-empty open subset $U$ of $V$ define $\mathcal{O}_U = \mathcal{O}_U(V) := \bigcap_{P \in U} \mathcal{O}_P(V)$. \end{definition} The ring $\mathcal{O}_P$ is indeed a local ring in the sense of commutative algebra: its unique maximal ideal $\mathfrak{m}_P$ is the set of germs of regular functions which vanish at $P$. The $i$th power $\mathfrak{m}^i_P$ of $\mathfrak{m}_P$ consists of the germs of regular functions whose vanishing order at $P$ is at least $i$. Taking these powers $\mathfrak{m}^i_P$ as a neighbourhood basis of $0$ in $\mathcal{O}_P$, this defines a topology on $\mathcal{O}_P$, the {\bf $\mathfrak{m}_P$-adic topology}.
The inverse limit $\hat{\mathcal{O}}_P := \lim_\leftarrow \mathcal{O}_P/\mathfrak{m}^i_P$ is the {\bf completion} of $\mathcal{O}_P$; cf.\ \cite[7.1]{Eisenbud:1995}, \cite[p.~33]{Hartshorne:1977}, \cite[VIII \S 2]{Zariski/Samuel:1975b}. It is a local ring whose maximal ideal is denoted by $\hat{\mathfrak{m}}_P$. \begin{example} \label{projectiveline} The projective line $\mathbb{P}_1(\mathbb{C}) \cong \mathbb{C} \cup \{ \infty \}$ is a non-singular projective curve (see \ref{curvevar} below). For $P \in \mathbb{C}$ one has \begin{eqnarray*} \mathcal{O}_P & = & \left\{ \frac{a}{b} \in \mathbb{C}(t) \mid b(P) \neq 0 \right\}, \\ \mathfrak{m}_P & = & \left\{ \frac{a}{b} \in \mathcal{O}_P \mid a(P) = 0 \right\}, \\ \hat{\mathcal{O}}_P & = & \left\{ \frac{a}{b} \in \mathbb{C}((t)) \mid b(P) \neq 0 \right\}, \\ \hat{\mathfrak{m}}_P & = & \left\{ \frac{a}{b} \in \hat{\mathcal{O}}_P \mid a(P) = 0 \right\}. \end{eqnarray*} \end{example} \begin{definition}\label{rationalfunctions} Let $V$ be a variety defined over $k$. Then the {\bf field $\overline{k}(V)$ of rational functions of $V$} consists of the equivalence classes of pairs $(U,f)$ where $U$ is a non-empty open subset of $V$ and $f$ is a regular function on $U$, and two such pairs $(U,f)$ and $(U',g)$ are equivalent if $f_{|U \cap U'} = g_{|U \cap U'}$. The {\bf field of $k$-rational functions of $V/k$} is $$K := k(V) := \{ h \in \overline{k}(V) \mid \sigma(h) = h \mbox{ for all $\sigma \in \mathrm{Gal}(\overline{k}/k)$} \}.$$ \end{definition} \begin{definition} \label{closed} For a point $P \in \mathbb{P}_n(\overline{k})$ the set $\{ \sigma(P) \mid \sigma \in \mathrm{Gal}(\overline{k}/k) \}$ is called a {\bf $k$-closed point}. A $k$-closed point of cardinality $1$ is called {\bf $k$-rational}.
\end{definition} For each $P \in \mathbb{P}_n(\overline{k})$, each homogeneous polynomial $f \in \overline{k}[X]$, and each $\sigma \in \mathrm{Gal}(\overline{k}/k)$ one has $f(P) = 0$ if and only if $\sigma(f(P)) = 0$ if and only if $\sigma(f)(\sigma(P)) = 0$. Hence, given a projective $k$-variety $V \subseteq \mathbb{P}_n(\overline{k})$, the assertion $P \in V$ is equivalent to the assertion $\{ \sigma(P) \mid \sigma \in \mathrm{Gal}(\overline{k}/k) \} \subset V$. It therefore makes sense to speak of {\bf $k$-closed points} of $V$. The set of $k$-closed points of $V$ is denoted by $V^\circ = V^\circ/k$. The {\bf degree} $\mathrm{deg}(P)$ of a $k$-closed point $P$ equals the number of points it contains. \begin{example} For an involutive automorphism $\sigma$ of $\mathbb{C}$, define $\mathbb{K} := \mathrm{Fix}_\mathbb{C}(\sigma)$. The set of $\mathbb{K}$-closed points of $\mathbb{P}_1(\mathbb{C})$ equals the set of $\sigma$-orbits of $\mathbb{P}_1(\mathbb{C})$, i.e., the $\sigma$-fixed/$\mathbb{K}$-rational points of $\mathbb{P}_1(\mathbb{C})$ and the pairs of $\sigma$-conjugate points of $\mathbb{P}_1(\mathbb{C})$. \end{example} \section{Curves over finite fields, considered as varieties} \begin{introduction} In this section I turn to one of the main objects of study of this survey, non-singular projective curves. For further reading the sources \cite[I]{Hartshorne:1977}, \cite[3]{Niederreiter/Xing:2009}, and \cite[II]{Serre:1988} or, if one prefers a very algebraic approach, \cite[I, II]{Chevalley:1951} and \cite[5, 6]{Rosen:2002} are highly recommended. \end{introduction} \begin{definition} \label{curvevar} \label{nonsingular} Let $k$ be a perfect field. A projective variety of dimension $1$ defined over $k$ is called a {\bf projective curve} over $k$ (cf.\ \ref{begin}). A projective curve $Y$ is called {\bf non-singular} at $P \in Y$, if $\mathcal{O}_P$ (cf.\ \ref{localring}) is a discrete valuation ring. 
It is called {\bf non-singular}, if it is non-singular at each of its points. \end{definition} For $Y$ non-singular at $P$, the valuation of $\mathcal{O}_P$ is given by $$\overline{\nu}_P : \mathcal{O}_P \to \mathbb{Z} \cup \{ \infty \}: x \mapsto \sup\left\{ i \in \mathbb{N} \cup \{ 0 \} \mid x \in \mathfrak{m}^i_P \right\},$$ with the understanding that $\mathfrak{m}^0_P = \mathcal{O}_P$. This valuation extends to the valuation $\hat{\mathcal{O}}_P \to \mathbb{Z} \cup \{ \infty \} : x \mapsto \sup\left\{ i \in \mathbb{N} \cup \{ 0 \} \mid x \in \hat{\mathfrak{m}}^i_P \right\}$, as $\hat{\mathfrak{m}}_P \cap \mathcal{O}_P = \mathfrak{m}_P$; cf.\ \cite[7.1]{Eisenbud:1995}, \cite[I.5.4A]{Hartshorne:1977}. The valuation $\overline{\nu}_P : \mathcal{O}_P \to \mathbb{Z} \cup \{ \infty \}$ also extends to a valuation on its field of fractions $\overline{k}(Y)$ via $\overline{\nu}_P(x) := \overline{\nu}_P(a) - \overline{\nu}_P(b)$ for $a, b \in \mathcal{O}_P$ with $\frac{a}{b} = x$. The geometric intuition is that $\overline{\nu}_P(x)$ indicates whether $P$ is a zero or a pole of $x$ and counts its multiplicity (cf.\ \ref{projectiveline}). \medskip An {\bf algebraic function field} (in one variable over $k$) is an extension field $\mathbb{F}$ of $k$ that admits an element $x$ that is transcendental over $k$ such that $\mathbb{F}/k(x)$ is a field extension of finite degree. \begin{theorem}[{\cite[3.2.9]{Niederreiter/Xing:2009}}] There is a one-to-one correspondence between $k$-isomorphism classes of non-singular projective curves over $k$ and $k$-isomorphism classes of algebraic function fields of one variable with full constant field $k$, via the map $Y/k \mapsto k(Y)$ (cf.\ \ref{rationalfunctions}). \end{theorem} Here, the {\bf full constant field} of an algebraic function field $K$ over $k$ is the algebraic closure of $k$ in $K$. In case the full constant field is finite, $K$ is a {\bf global function field}. 
For example, $\mathbb{F}_q(t)$ is a global function field with full constant field $\mathbb{F}_q$. \begin{definition} \label{place} Two discrete valuations $\nu_1$ and $\nu_2$ of an algebraic function field $K$ are {\bf equivalent} if there exists a constant $c > 0$ such that $\nu_1(x) = c\nu_2(x)$ for all $0 \neq x \in K$. An equivalence class of discrete valuations is called a {\bf place}. The {\bf degree} of a valuation/place is the degree of the residue class field $\mathcal{O}_P/\mathfrak{m}_P$ over the constant field $k$. This is always a finite number; cf.\ \cite[1.5.13]{Niederreiter/Xing:2009}. \end{definition} \begin{theorem}[{\cite[3.1.15]{Niederreiter/Xing:2009}}] Let $Y/\mathbb{F}_q$ be a non-singular projective curve. Then there exists a natural one-to-one correspondence between $\mathbb{F}_q$-closed points of $Y$ and places (cf.\ \ref{place}) of the field $K=\mathbb{F}_q(Y)$ of $\mathbb{F}_q$-rational functions (cf.\ \ref{rationalfunctions}). Moreover, the degree of an $\mathbb{F}_q$-closed point is equal to the degree of the corresponding place. \end{theorem} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{valuation} Given a point $P$ in an $\mathbb{F}_q$-closed point of $Y$, the valuation $\overline{\nu}_P : \mathcal{O}_P \to \mathbb{Z} \cup \{ \infty \}$ (cf.\ \ref{nonsingular}) restricts to a valuation $\nu_P : \mathcal{O}_{P,K} := \mathcal{O}_P \cap \mathbb{F}_q(Y) \to \mathbb{Z} \cup \{ \infty \}$. By \cite[3.1.14]{Niederreiter/Xing:2009} this valuation only depends on the $\mathbb{F}_q$-closed point of $Y$ containing $P$ and not on the particular choice of $P$ inside that $\mathbb{F}_q$-closed point. As before, $\nu_P$ extends to the field of $\mathbb{F}_q$-rational functions $K = \mathbb{F}_q(Y)$, the completion $\hat{\mathcal{O}}_{P,K} := \widehat{\mathcal{O}_P \cap K}$, and to the field of fractions $K_P$ of the completion $\hat{\mathcal{O}}_{P,K}$. The field $K_P$ is the {\bf local function field at $P$}. 
For example, $\mathbb{F}_q((t))$ is the local function field of the global function field $\mathbb{F}_q(t)$ at the place corresponding to the irreducible polynomial $t$. \begin{definition} \label{divisor} Let $Y/\mathbb{F}_q$ be a non-singular projective curve. The {\bf Weil divisor group} $\mathrm{Div}(Y)=\mathrm{Div}(Y/\mathbb{F}_q)$ is the free abelian group on the set $Y^\circ$ of $\mathbb{F}_q$-closed points of $Y$ (cf.\ \ref{closed}). An element $D = \sum_{P \in Y^\circ} n_P P \in \mathrm{Div}(Y)$ is called a {\bf Weil divisor} of $Y$. It is {\bf effective}, in symbols $D \geq 0$, if $n_P \geq 0$ for all $P \in Y^\circ$. For two divisors $D_1$ and $D_2$ of $Y$ one writes $D_1 \geq D_2$, if $D_1 - D_2 \geq 0$. The {\bf degree} $\mathrm{deg}(D)$ of a Weil divisor $D = \sum_{P \in Y^\circ} n_P P$ is given by $\mathrm{deg}(D) := \sum_{P \in Y^\circ} n_P \mathrm{deg}(P)$ (cf.~\ref{closed}). Also, define $\nu_P(D) := n_P$. \end{definition} For $0 \neq x \in K=\mathbb{F}_q(Y)$, define the {\bf divisor} $\mathrm{div}(x)$ of $x$ by $\mathrm{div}(x) := \sum_{P \in Y^\circ} \nu_P(x) P$. As, by \cite[3.3.2]{Niederreiter/Xing:2009}, \cite[5.1]{Rosen:2002}, any $0 \neq x \in K$ admits only finitely many zeros (i.e., points $P \in Y^\circ$ with $\nu_P(x) > 0$) and poles (i.e., points $P \in Y^\circ$ with $\nu_P(x) < 0$), the divisor $\mathrm{div}(x)$ is indeed a Weil divisor. A Weil divisor obtained in this way is {\bf principal}, and has degree $0$; cf.~\cite[3.4.3]{Niederreiter/Xing:2009}, \cite[5.1]{Rosen:2002}. Two Weil divisors that differ by a principal divisor are called {\bf equivalent}. \begin{definition} For a divisor $D$ of $Y/\mathbb{F}_q$, the {\bf Riemann--Roch space} $L(D)$ is defined as $$L(D) := \{ 0 \neq x \in K = \mathbb{F}_q(Y) \mid \mathrm{div}(x) + D \geq 0 \} \cup \{ 0 \}.$$ \end{definition} The Riemann--Roch space $L(D)$ of a divisor $D$ is an $\mathbb{F}_q$-vector space of finite dimension (cf.\ \cite[3.4.1(iv)]{Niederreiter/Xing:2009}).
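These notions can be illustrated in the simplest case, the projective line. The following is a sketch; it assumes the usual identification of the places of $\mathbb{F}_q(t)$ with the monic irreducible polynomials together with the place at infinity.

```latex
\begin{example}
Let $Y = \mathbb{P}_1/\mathbb{F}_q$, so that $K = \mathbb{F}_q(t)$, and write
$(0)$ and $(\infty)$ for the degree-one $\mathbb{F}_q$-closed points at $0$
and at infinity. The function $t$ has a simple zero at $(0)$ and a simple
pole at $(\infty)$, so
$$\mathrm{div}(t) = (0) - (\infty),$$
a principal divisor of degree $1 - 1 = 0$, as predicted. For $n \geq 0$ and
$D = n(\infty)$, a function $0 \neq x \in K$ lies in $L(D)$ if and only if
it has no pole at any finite place and a pole of order at most $n$ at
infinity, i.e., if and only if $x$ is a polynomial in $t$ of degree at most
$n$. Hence $\mathrm{dim}_{\mathbb{F}_q}(L(n(\infty))) = n+1 =
\mathrm{deg}(D) + 1$, consistent with the Riemann--Roch Theorem
(\ref{RiemannRoch} below) for a curve of genus $0$.
\end{example}
```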
\begin{definition} \label{incompleteadeles} Define the {\bf ring of repartitions} as $$\mathbb{A}_K := \{ (x_P)_{P \in Y^{\circ}} \in \prod_{P \in Y^{\circ}} K \mid x_P \in \mathcal{O}_P \mbox{ for almost all $P\in Y^\circ$ } \} .$$ By \cite[p.~25]{Chevalley:1951}, \cite[3.3.2]{Niederreiter/Xing:2009}, the field $K$ embeds diagonally in $\mathbb{A}_K$. For any divisor $D \in \mathrm{Div}(Y)$ define $\mathbb{A}_K(D) := \{ x \in \mathbb{A}_K \mid \nu_P(x_P) + \nu_P(D) \geq 0 \mbox{ for all $P \in Y^\circ$} \}$. Note that $\mathbb{A}_K(D)$ is an $\mathbb{F}_q$-subvector space of $\mathbb{A}_K$. \end{definition} \begin{definition} A {\bf Weil differential} of a non-singular projective curve $Y/\mathbb{F}_q$, resp.\ of its global function field $K$, is an $\mathbb{F}_q$-linear map $\omega : \mathbb{A}_K \to \mathbb{F}_q$ such that $\omega_{|\mathbb{A}_K(D)+K} = 0$ for some divisor $D \in \mathrm{Div}(Y)$. Denote by $\Omega_K$ the set of all Weil differentials of $K$ and by $\Omega_K(D)$ the set of all Weil differentials of $K$ that vanish on $\mathbb{A}_K(D) + K$. \end{definition} Note the difference between the ring of repartitions $\mathbb{A}_K$ and the ring of ad\`eles $\hat{\mathbb{A}}_K$ defined in \ref{adeles} below (completions). The concept of a Weil differential can equally well be introduced using the ring of ad\`eles $\hat{\mathbb{A}}_K$, cf.\ \cite{Rosen:2002}. \begin{observation} \label{omegaadel} Let $D \in \mathrm{Div}(Y)$. Then $\Omega_K(D) \cong \mathbb{A}_K/\mathbb{A}_K(D) + K$ as $\mathbb{F}_q$-vector spaces. \end{observation} If $0 \neq \omega \in \Omega_K$, then by \cite[3.6.11]{Niederreiter/Xing:2009}, \cite[6.8]{Rosen:2002} the set $\{ D \in \mathrm{Div}(Y) \mid \omega_{|\mathbb{A}_K(D) + K} = 0 \}$ has a unique maximal element with respect to $\geq$. This maximal element is called a {\bf canonical divisor} of $Y$ and denoted by $(\omega)$. 
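To make the membership conditions for $\mathbb{A}_K(D)$ and $\Omega_K(D)$ concrete, here is a small sketch on the projective line (the identification of places with points of $\mathbb{P}_1$ is the standard one).

```latex
\begin{example}
Let $K = \mathbb{F}_q(t)$ be the function field of
$Y = \mathbb{P}_1/\mathbb{F}_q$ and let $D = (0)$, the closed point $t = 0$
taken with multiplicity one. The repartition $x = (x_P)_{P \in Y^\circ}$ with
$x_0 = t^{-1}$ and $x_P = 0$ for all $P \neq (0)$ lies in $\mathbb{A}_K$,
since only one coordinate fails to be integral. Moreover
$x \in \mathbb{A}_K(D)$, because
$$\nu_0(x_0) + \nu_0(D) = -1 + 1 = 0 \geq 0$$
and the remaining coordinates vanish. By contrast, the repartition with
$x_0 = t^{-2}$ instead does not lie in $\mathbb{A}_K(D)$. Consequently, every
Weil differential $\omega \in \Omega_K(D)$ must vanish on the first
repartition, but may be non-zero on the second.
\end{example}
```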
By \cite[3.6.10]{Niederreiter/Xing:2009}, \cite[6.10]{Rosen:2002} \begin{eqnarray} \mathrm{dim}_K(\Omega_K) & = & 1 \label{dim1} \end{eqnarray} and by \cite[3.6.13]{Niederreiter/Xing:2009}, \cite[6.9]{Rosen:2002} the Weil differential $x\omega : \mathbb{A}_K \to \mathbb{F}_q : a \mapsto \omega(xa)$ satisfies \begin{eqnarray} (x\omega) = \mathrm{div}(x) + (\omega) \label{mult} \end{eqnarray} for any $0 \neq x \in K$, $0 \neq \omega \in \Omega_K$. Hence any two canonical divisors of $Y$ are equivalent (cf.\ \ref{divisor}). \begin{observation} \label{lomega} Let $D$ be a divisor of $Y/\mathbb{F}_q$ and let $W = (\omega)$ be a canonical divisor of $Y$. Then $$L(W-D) \to \Omega_K(D) : x \mapsto x\omega$$ is an isomorphism of $\mathbb{F}_q$-vector spaces. \end{observation} \begin{proof} For $0 \neq x \in L(W-D)$ we have\footnote{The notation $\stackrel{{(\ref{mult})}}{=}$ means that (\ref{mult}) is a justification for the equality.} $(x\omega) \stackrel{{(\ref{mult})}}{=} \mathrm{div}(x) + (\omega) \geq -(W-D)+W = D$, so that indeed $x\omega \in \Omega_K(D)$. Injectivity and $\mathbb{F}_q$-linearity of the map $x \mapsto x\omega$ are clear. In order to prove surjectivity, let $0\neq \omega_1 \in \Omega_K(D)$. By (\ref{dim1}) there exists $0 \neq x \in K$ such that $\omega_1 = x\omega$. Since we have $\mathrm{div}(x) + W \stackrel{(\ref{mult})}{=} (x\omega) = (\omega_1) \geq D$, it follows that $x \in L(W-D)$. \end{proof} \begin{definition} \label{genus} Let $Y/\mathbb{F}_q$ be a non-singular projective curve and let $W$ be a canonical divisor of $Y$. 
Define the {\bf genus} $g$ of the curve $Y/\mathbb{F}_q$ by $$g := \mathrm{dim}_{\mathbb{F}_q}(L(W)) \stackrel{\ref{lomega}}{=} \mathrm{dim}_{\mathbb{F}_q}(\Omega_K(0)) \stackrel{\ref{omegaadel}}{=} \mathrm{dim}_{\mathbb{F}_q}(\mathbb{A}_K/\mathbb{A}_K(0) + K).$$ \end{definition} \begin{theorem}[Riemann--Roch Theorem, {\cite[p.~30]{Chevalley:1951}, \cite[3.6.14]{Niederreiter/Xing:2009}, \cite[5.4]{Rosen:2002}}] \label{RiemannRoch} Let $Y/\mathbb{F}_q$ be a non-singular projective curve of genus $g$ and let $W$ be a canonical divisor. Then for any divisor $D \in \mathrm{Div}(Y)$ one has $$\mathrm{dim}_{\mathbb{F}_q}(L(D)) - \mathrm{dim}_{\mathbb{F}_q}(L(W-D)) = \mathrm{deg}(D) + 1 - g.$$ \end{theorem} The Riemann--Roch theorem combined with the study of the zeta function of the global function field $\mathbb{F}_q(Y)$ allows one to establish the following useful result. \begin{proposition}[{\cite[4.1.10]{Niederreiter/Xing:2009}}]\label{invertible1} Let $Y/\mathbb{F}_q$ be a non-singular projective curve. Then there exists a Weil divisor of $Y$ of degree~$1$. \end{proposition} Note that this Weil divisor of degree $1$ need not be effective. In fact, for $p \geq 5$ prime the non-singular projective curve over $\mathbb{F}_p$ given by $x^{p-1} + y^{p-1} = 3z^{p-1}$ does not admit any $\mathbb{F}_p$-rational point and, thus, there cannot exist an effective Weil divisor of degree $1$ of this curve. \begin{definition} \label{vectorbundle} A {\bf vector bundle} of rank $r$ over a curve $Y/k$ is a variety $E/k$ together with a morphism $\pi : E \to Y$ such that there exists an open covering $\{ U_i \}$ of $Y$ and isomorphisms $\phi_i : \pi^{-1}(U_i) \to U_i \times A_r(k)$ (where $A_r(k)$ denotes the affine space over $k$ of dimension $r$) such that for each pair $U_i$, $U_j$ the composition ${\phi_j {\phi_i}^{-1}}_{|U_i \cap U_j}$ equals $(\mathrm{id}, \phi_{i,j})$ for a linear map $\phi_{i,j}$. 
A {\bf section} of a vector bundle $\pi : E \to Y$ over an open set $U \subset Y$ is a map $s : U \to E$ such that $\pi \circ s = \mathrm{id}$. \end{definition} \section{Geometry of numbers} \begin{introduction} In this section I describe Weil's geometry of numbers. For further reading the sources \cite[2]{Weil:1982}, \cite[VI]{Weil:1995} are highly recommended. \end{introduction} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{defnorm} Let $Y/\mathbb{F}_q$ be a non-singular projective curve, let $P$ be an $\mathbb{F}_q$-closed point of $Y$, and let $\nu_P$ be the valuation of $K = \mathbb{F}_q(Y)$ discussed in \ref{valuation}. Then $$|\cdot|_P : K \to \mathbb{R} : x \mapsto (q^{\mathrm{deg}(P)})^{-\nu_P(x)}$$ defines an absolute value on $K$. The completion of $K$ with respect to this absolute value equals $K_P$ (cf.~\ref{valuation}), which is locally compact as the zero neighbourhood $\hat{\mathcal{O}}_{P,K}$ is an inverse limit of finite rings (cf.~\ref{localring}), whence compact. On each field $K_P$ define the canonical Haar measure to be the one with respect to which $\hat{\mathcal{O}}_{P,K}$ has volume $1$. \begin{theorem}[\cite{Artin/Whaples:1945}] \label{productformula} Let $Y/\mathbb{F}_q$ be a non-singular projective curve. Then each $0 \neq x \in K$ satisfies $|x|_P = 1$ for almost all $P \in Y^\circ$. Moreover, for each $0 \neq x \in K$, $$\prod_{P \in Y^\circ} |x|_P = 1.$$ \end{theorem} \begin{definition} \label{adeles} In analogy to \ref{incompleteadeles} define the {\bf ad\`ele ring} $$\hat{\mathbb{A}}_K := \{ (x_P)_{P \in Y^{\circ}} \in \prod_{P \in Y^{\circ}} K_P \mid x_P \in \hat{\mathcal{O}}_{P,K} \mbox{ for almost all $P\in Y^\circ$ } \} .$$ It is a locally compact ring and contains $K$ embedded diagonally as a discrete subring (cf.\ \ref{productformula}).
Define $$|\cdot| : \hat{\mathbb{A}}_K \to \mathbb{R} : x \mapsto \prod_{P \in Y^{\circ}} |x|_P.$$ The product measure of the canonical Haar measures on each individual $K_P$ (\ref{defnorm}) yields a Haar measure on the locally compact group $(\hat{\mathbb{A}}_K,+)$. Let $\omega_{\hat{\mathbb{A}}_K}$ denote the corresponding volume form. For any divisor $D \in \mathrm{Div}(Y)$, define $\hat{\mathbb{A}}_K(D) := \{ x \in \hat{\mathbb{A}}_K \mid \nu_P(x_P) + \nu_P(D) \geq 0 \mbox{ for all $P \in Y^\circ$} \}$. Note that $\hat{\mathcal{O}}_K := \hat{\mathbb{A}}_K(0)$ is the maximal compact subring of $\hat{\mathbb{A}}_K$. It has volume $1$ with respect to the chosen Haar measure. \end{definition} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{sadeles} For a finite subset $S \subset Y^\circ$ define the {\bf ring of $S$-ad\`eles} $$\hat{\mathbb{A}}_S := \prod_{P \in S} K_P \times \prod_{P \in Y^\circ \backslash S} \hat{\mathcal{O}}_{P,K}$$ (cf.~\ref{valuation}). One has $\hat{\mathbb{A}}_K = \lim_\rightarrow \hat{\mathbb{A}}_S$. For $K$ embedded diagonally in $\hat{\mathbb{A}}_K$ as a discrete subring (cf.~\ref{adeles}) define the {\bf ring of $S$-integers} $$\mathcal{O}_S := K \cap \hat{\mathbb{A}}_S = \bigcap_{P \in Y^\circ \backslash S} (K \cap \hat{\mathcal{O}}_{P,K}) \stackrel{\ref{valuation}}{=} \bigcap_{P \in Y^\circ \backslash S} \mathcal{O}_{P,K} \stackrel{\ref{localring}}{=} \mathcal{O}_{Y^\circ \backslash S, K};$$ these are the $\mathbb{F}_q$-rational functions of $Y$ which are regular outside $S$. The field $K$ embeds diagonally in $\prod_{P \in S} K_P$ as a dense subring and, hence, also in $\hat{\mathbb{A}}_S$.
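For orientation, the ring of $S$-integers can be computed explicitly in the basic example of the projective line; the following sketch assumes the standard identification of the finite places of $\mathbb{F}_q(t)$ with the monic irreducible polynomials.

```latex
\begin{example}
Let $Y = \mathbb{P}_1/\mathbb{F}_q$, so that $K = \mathbb{F}_q(t)$, and let
$S = \{ \infty \}$. A rational function is regular outside $S$ if and only if
it has no pole at any finite place, i.e., if and only if it is a polynomial,
whence $\mathcal{O}_S = \mathbb{F}_q[t]$. Enlarging $S$ to
$\{ \infty, (t) \}$, where $(t)$ denotes the place of the irreducible
polynomial $t$, allows poles at $0$ and at infinity only and yields the ring
of Laurent polynomials $\mathcal{O}_S = \mathbb{F}_q[t,t^{-1}]$.
\end{example}
```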
Define $$|\cdot|_S : \prod_{P \in S} K_P \to \mathbb{R} : x \mapsto \prod_{P \in S} |x|_P.$$ \medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{serre} Let $G$ be a unimodular locally compact group, let $\Gamma$ be a discrete subgroup of $G$, let $H$ be a compact open subgroup of $G$, let $\mu$ be a Haar measure on $G$, and assume the double coset space $X \cong H \backslash G / \Gamma$, considered as a system of representatives of the $\Gamma$-orbits on $H\backslash G$, is countable. Note that $\mu$ is also a Haar measure on $H$, since $H$ is open, and hence of positive volume. Then, cf.~\cite[p.~84]{Serre:2003}, $$\int_{G/\Gamma} d\mu = \sum_{x \in X} \left(\int_{G_{x}/\Gamma_{x}} d\mu\right) = \sum_{x\in X} \frac{\int_{G_{x}} d\mu}{|\Gamma_{x}|} = \sum_{x\in X} \frac{\int_{H} d\mu}{|\Gamma_{x}|} = \int_{H} d\mu \sum_{x\in X} \frac{1}{|\Gamma_{x}|}.$$ Here, for choices of $x_0 \in X$ and $(g^x_{x_0})_{x \in X} \in G$ with $g^x_{x_0}(x_0)=x$, the map $\bigsqcup_{x \in X} G_x/\Gamma_x \to G/\Gamma : g\Gamma_x \mapsto g^{x}_{x_0}g\Gamma$ is an isomorphism of orbit spaces, where $G_x$ and $\Gamma_x$ denote the stabilizers of $x$ in the respective group. \medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{12} One has \begin{eqnarray} & \hat{\mathbb{A}}_K(D) \backslash \hat{\mathbb{A}}_K/ K = \hat{\mathbb{A}}_K/ \hat{\mathbb{A}}_K(D) + K \cong \mathbb{A}_K / \mathbb{A}_K(D) + K \stackrel{\ref{omegaadel}}{\cong} \Omega_K(D) \stackrel{\ref{lomega}}{\cong} L(W-D), & \label{1}\\ & K \cap \hat{\mathbb{A}}_K(D) = K \cap \{ x \in \hat{\mathbb{A}}_K \mid \nu_P(x) + \nu_P(D) \geq 0 \mbox{ for all $P \in Y^\circ$} \} = L(D). & \label{2} \end{eqnarray} The observation that the intersection of the lattice $K$ with the compactum $\hat{\mathbb{A}}_K(D)$ equals $L(D)$ is a classical theme in the geometry of numbers, cf.\ \cite[p.~387]{Armitage:1967}. 
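The lattice-point interpretation just mentioned can be checked by hand on the projective line. The following is a sketch, using the notation of \ref{sadeles} and the identification of places with points of $\mathbb{P}_1$.

```latex
\begin{example}
Let $Y = \mathbb{P}_1/\mathbb{F}_q$, $K = \mathbb{F}_q(t)$, and
$D = n(\infty)$ with $n \geq 0$. An element $x \in K$, embedded diagonally,
lies in $\hat{\mathbb{A}}_K(D)$ if and only if $\nu_P(x) \geq 0$ at every
finite place $P$ and $\nu_\infty(x) \geq -n$, i.e., if and only if $x$ is a
polynomial in $t$ of degree at most $n$. Thus the intersection of the lattice
$K$ with the compactum $\hat{\mathbb{A}}_K(D)$ equals $L(D)$, a ``box'' of
$q^{n+1}$ lattice points, in accordance with (\ref{2}).
\end{example}
```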
\begin{proposition}[{\cite[2.1.3]{Weil:1982}, \cite[p.~100]{Weil:1995}}] \label{mu} Using the notation of \ref{adeles}, one has $$\int_{\hat{\mathbb{A}}_K/K} \omega_{\hat{\mathbb{A}}_K} = q^{g-1}.$$ \end{proposition} \begin{proof} For the compact open subgroup $\hat{\mathbb{A}}_K(0)$ of $\hat{\mathbb{A}}_K$ Serre's formula (cf.\ \ref{serre}) yields \begin{equation} \int_{\hat{\mathbb{A}}_K/K} \omega_{\hat{\mathbb{A}}_K} = \int_{\hat{\mathbb{A}}_K(0)} \omega_{\hat{\mathbb{A}}_K} \sum_{\hat{\mathbb{A}}_K(0) \backslash \hat{\mathbb{A}}_K/ K} \frac{1}{|K \cap \hat{\mathbb{A}}_K(0)|} = \frac{|\hat{\mathbb{A}}_K(0) \backslash \hat{\mathbb{A}}_K/ K|}{|K \cap \hat{\mathbb{A}}_K(0)|} \int_{\hat{\mathbb{A}}_K(0)} \omega_{\hat{\mathbb{A}}_K}. \label{applicationofserre} \end{equation} Therefore, as $\int_{\hat{\mathbb{A}}_K(0)} \omega_{\hat{\mathbb{A}}_K} = 1$ (cf.~\ref{adeles}), the Riemann--Roch Theorem (cf.~\ref{RiemannRoch}) or, in fact, its underlying definitions and observations \ref{genus} and \ref{12} imply $\int_{\hat{\mathbb{A}}_K/K} \omega_{\hat{\mathbb{A}}_K} = q^{g-1}$. \end{proof} \begin{proposition}[{\cite[p.~39]{Harder:1969}, \cite[p.~98]{Weil:1995}}] \label{mu2} One has $$\int_{\hat{\mathbb{A}}_K(D)} \omega_{\hat{\mathbb{A}}_K} = q^{\mathrm{deg}(D)}.$$ \end{proposition} \begin{proof} The strategy is to first use \ref{mu} and then apply \ref{serre} to the compact open subgroup $\hat{\mathbb{A}}_K(D)$ of $\hat{\mathbb{A}}_K$ in analogy to (\ref{applicationofserre}) in the proof of \ref{mu}. This way one computes \begin{eqnarray*} q^{g-1} & \stackrel{\ref{mu}}{=} & \int_{\hat{\mathbb{A}}_K/K} \omega_{\hat{\mathbb{A}}_K} \\ & \stackrel{\ref{serre}}{=} & \frac{|\hat{\mathbb{A}}_K(D) \backslash \hat{\mathbb{A}}_K/ K|}{|K \cap \hat{\mathbb{A}}_K(D)|} \int_{\hat{\mathbb{A}}_K(D)} \omega_{\hat{\mathbb{A}}_K} \\ & \stackrel{\ref{12}}{=} & \frac{|L(W-D)|}{|L(D)|} \int_{\hat{\mathbb{A}}_K(D)} \omega_{\hat{\mathbb{A}}_K}. 
\end{eqnarray*} Hence, by the Riemann--Roch Theorem (cf.~\ref{RiemannRoch}), one has $\int_{\hat{\mathbb{A}}_K(D)} \omega_{\hat{\mathbb{A}}_K}= q^{\mathrm{deg}(D)}.$ \end{proof} \section{Curves over finite fields, considered as schemes} \begin{introduction} In this section I return to the study of non-singular projective curves, in the more general context of schemes. For further reading the sources \cite[II, III, IV]{Hartshorne:1977}, \cite[2, 6, 7]{Liu:2002} are highly recommended. \end{introduction} The basic idea of Harder's reduction theory is to apply the geometry of numbers to the groups of $\hat{\mathbb{A}}_K$-rational points of reductive $K$-isotropic algebraic $K$-groups, see \ref{GeometryOfNumbers} below. The concept of schemes, which generalizes the concept of varieties, allows one to do so in a very efficient way, as in this language one can consider reductive groups over projective curves, thus making the Riemann--Roch theorem applicable. \begin{definition} \label{stalk} Let $X$ be a topological space, let $\mathfrak{Top}(X)$ be the category of open subsets of $X$ with the inclusion maps as morphisms, and let $\mathfrak{C}$ be one of the categories $\mathfrak{Ab}$ of abelian groups, $\mathfrak{CRing}$ of commutative rings with $1$, or $\mathfrak{Mod}(A)$ of modules over the (commutative) ring $A$. A contravariant functor $\mathcal{F}$ from $\mathfrak{Top}(X)$ to $\mathfrak{C}$ which satisfies $\mathcal{F}(\emptyset) = 0$ is called a {\bf presheaf} on $X$ with values in $\mathfrak{C}$. For presheaves $\mathcal{F}$, $\mathcal{G}$ on $X$ with values in $\mathfrak{C}$, a natural transformation from $\mathcal{F}$ to $\mathcal{G}$ is called a {\bf morphism} of presheaves. 
A presheaf $\mathcal{F}$ is called a {\bf sheaf}, if it satisfies the following conditions: (a) if $U$ is an open subset of $X$, if $\{ V_i \}$ is an open covering of $U$, and if $s \in \mathcal{F}(U)$ satisfies $s_{|V_i} = 0$ for all $i$, then $s=0$; and (b) if $U$ is an open subset of $X$, if $\{ V_i \}$ is an open covering of $U$, and if the $s_i \in \mathcal{F}(V_i)$ satisfy ${s_i}_{|V_i \cap V_j} = {s_j}_{|V_i \cap V_j}$ for all $i$, $j$, then there exists $s \in \mathcal{F}(U)$ such that $s_{|V_i} = s_i$ for all $i$. For a presheaf $\mathcal{F}$ on $X$ and an element $x \in X$, the {\bf stalk} $\mathcal{F}_x$ is defined as the direct limit $\lim_\rightarrow \mathcal{F}(U)$ over the open neighbourhoods $U$ of $x$ where $U \leq V$ if $U \supseteq V$, i.e., $\mathcal{F}_x$ consists of the germs of elements of $\mathcal{F}(U)$ at $x$. \end{definition} \begin{example} \label{curveasscheme} Let $V$ be a quasi-projective variety. The functor $\mathcal{O}_V : U \mapsto \mathcal{O}_V(U)$ (cf.\ \ref{localring}) is a sheaf on $V$. The stalk $\mathcal{O}_{V,P}$ at a point $P$ equals the local ring $\mathcal{O}_P$ at that point. \end{example} \begin{definition} \label{sheafification} Let $X$ be a topological space and let $\mathcal{F}$ be a presheaf on $X$. The {\bf sheaf associated to the presheaf $\mathcal{F}$}, also called the {\bf sheafification of $\mathcal{F}$}, is a pair $(\mathcal{F}^\dagger,\theta)$ consisting of a sheaf $\mathcal{F}^\dagger$ and a morphism $\theta : \mathcal{F} \to \mathcal{F}^\dagger$ of presheaves that satisfies the following universal property: for every morphism $\alpha : \mathcal{F} \to \mathcal{G}$, where $\mathcal{G}$ is a sheaf, there exists a unique morphism $\tilde{\alpha} : \mathcal{F}^\dagger \to \mathcal{G}$ such that $\alpha = \tilde{\alpha} \circ \theta$.
\end{definition} The sheafification of a presheaf is unique for abstract reasons and exists by \cite[2.2.15]{Liu:2002}; see also \cite[3.3]{Harder:2008}, \cite[II.1.2]{Hartshorne:1977}, \cite[Ex.\ 2.2.3]{Liu:2002}. \begin{definition} \label{quotient} A subfunctor $\mathcal{F}'$ of a sheaf $\mathcal{F}$ that is itself a sheaf is called a {\bf subsheaf} of $\mathcal{F}$. If $\mathcal{F}$ takes values in $\mathfrak{Ab}$, then the {\bf quotient sheaf} $\mathcal{F}/\mathcal{F}'$ is the sheaf associated to the presheaf $U \mapsto \mathcal{F}(U)/\mathcal{F}'(U)$ (cf.~\ref{sheafification}). If $\mathcal{F}$ is a sheaf on $X$ then, given an open cover $\{ U_i \}$ of $X$, an element of $(\mathcal{F}/\mathcal{F}')(X)$ can be described by elements $f_i \in \mathcal{F}(U_i)$ such that $\frac{f_i}{f_j} \in \mathcal{F}'(U_i \cap U_j)$ for all $i$, $j$. \end{definition} \begin{definition}\label{directimage} Let $f : X \to Y$ be a continuous map between topological spaces, let $\mathcal{F}$ be a sheaf on $X$, and let $\mathcal{G}$ be a sheaf on $Y$. Then $Y \supseteq U \mapsto \mathcal{F}(f^{-1}(U))$ defines a sheaf $f_*\mathcal{F}$ on $Y$, the {\bf direct image sheaf}. For each $x \in X$ there is a natural map $\epsilon_x : (f_*\mathcal{F})_{f(x)} \to \mathcal{F}_{x}$. The {\bf inverse image sheaf} $f^{-1}\mathcal{G}$ on $X$ is the sheaf associated to the presheaf $X \supseteq U \mapsto \lim_{V \supseteq f(U)} \mathcal{G}(V)$ where the limit is taken over all open subsets $V$ of $Y$ that contain $f(U)$; cf.\ \ref{sheafification}. For each $x \in X$ one has $(f^{-1}\mathcal{G})_x = \mathcal{G}_{f(x)}$ (\cite[p.~37]{Liu:2002}). \end{definition} \begin{definition} A {\bf ringed space} is a pair $(X,\mathcal{O}_X)$ consisting of a topological space $X$ and a sheaf of rings $\mathcal{O}_X$, i.e., a sheaf on $X$ with values in $\mathfrak{CRing}$. It is a {\bf locally ringed space}, if for each point $P \in X$, the stalk $\mathcal{O}_{X,P}$ is a local ring.
\end{definition} \begin{example} Let $A$ be a commutative ring, define the set $\mathrm{Spec}(A)$ to be the set of prime ideals of $A$, and define a topology on $\mathrm{Spec}(A)$ by taking the closed sets to be sets of the type $V(\mathfrak{a}) = \{ \mathfrak{p} \in \mathrm{Spec}(A) \mid \mathfrak{a} \subseteq \mathfrak{p} \}$, for arbitrary ideals $\mathfrak{a}$ of $A$; cf.\ \cite[II.2.1]{Hartshorne:1977}, \cite[2.1.1]{Liu:2002}. For an open set $U \subseteq \mathrm{Spec}(A)$ define $\mathcal{O}(U)$ to be the set of functions $s : U \to \bigsqcup_{\mathfrak{p} \in U} A_\mathfrak{p}$, where $A_\mathfrak{p}$ denotes the localization of $A$ at $\mathfrak{p}$, such that $s(\mathfrak{p}) \in A_\mathfrak{p}$ and for each $\mathfrak{p} \in U$ there exist a neighbourhood $V \subseteq U$ of $\mathfrak{p}$ and elements $a, f \in A$ with $f \not\in \mathfrak{q}$ and $s(\mathfrak{q}) = \frac{a}{f}$ in $A_\mathfrak{q}$ for each $\mathfrak{q} \in V$. The resulting locally ringed space $\mathrm{Spec} A = (\mathrm{Spec}(A),\mathcal{O})$ is called the {\bf spectrum} of $A$; cf.\ \cite[p.~70]{Hartshorne:1977}, \cite[2.3.2]{Liu:2002}. \end{example} \begin{definition} \label{morphring} A {\bf morphism of locally ringed spaces} $(f,f^\sharp) : (X,\mathcal{O}_X) \to (Y,\mathcal{O}_Y)$ consists of a continuous map $f : X \to Y$ and a morphism of sheaves of rings $f^\sharp : \mathcal{O}_Y \to f_* \mathcal{O}_X$ such that for every $x \in X$ the induced map $\mathcal{O}_{Y,f(x)} \stackrel{f^\sharp}{\to} (f_*\mathcal{O}_X)_{f(x)} \stackrel{\epsilon_x}{\to} \mathcal{O}_{X,x}$ is a local homomorphism (cf.\ \ref{directimage}). An {\bf isomorphism} is an invertible morphism. A morphism $(f,f^\sharp) : (Z,\mathcal{O}_Z) \to (X,\mathcal{O}_X)$ of ringed spaces is a {\bf closed immersion} if $f$ is a topological closed immersion and if each $f^\sharp_x$ is surjective. \end{definition} \begin{definition} \label{oxmodule} Let $(X,\mathcal{O}_X)$ be a ringed space. 
An {\bf $\mathcal{O}_X$-module} is a sheaf $\mathcal{F}$ on $X$ with values in $\mathfrak{Ab}$ such that for each open subset $U \subseteq X$ the group $\mathcal{F}(U)$ is an $\mathcal{O}_X(U)$-module and, for each inclusion of open sets $V \subseteq U$, the group homomorphism $\mathcal{F}(U) \to \mathcal{F}(V)$ is compatible with the module structures via the ring homomorphism $\mathcal{O}_X(U) \to \mathcal{O}_X(V)$. If $U$ is an open subset of $X$, and if $\mathcal{F}$ is an $\mathcal{O}_X$-module, then $\mathcal{F}_{|U}$ is an $\mathcal{O}_{X|U}$-module. If $\mathcal{F}$ and $\mathcal{G}$ are two $\mathcal{O}_X$-modules, the group of morphisms from $\mathcal{F}$ to $\mathcal{G}$ is denoted by $\mathrm{Hom}_{\mathcal{O}_X}(\mathcal{F},\mathcal{G})$. The presheaf $U \mapsto \mathrm{Hom}_{\mathcal{O}_{X|U}}(\mathcal{F}_{|U}, \mathcal{G}_{|U})$ is a sheaf, denoted by $\mathcal{H}om_{\mathcal{O}_X}(\mathcal{F},\mathcal{G})$; it is also an $\mathcal{O}_X$-module. The {\bf tensor product} $\mathcal{F} \otimes_{\mathcal{O}_X} \mathcal{G}$ of two $\mathcal{O}_X$-modules is defined as the sheaf associated to the presheaf $U \mapsto \mathcal{F}(U) \otimes_{\mathcal{O}_X(U)} \mathcal{G}(U)$ (cf.\ \ref{sheafification}). An $\mathcal{O}_X$-module $\mathcal{F}$ is {\bf free} if it is isomorphic to a direct sum of copies of $\mathcal{O}_X$. It is {\bf locally free} if there exists an open cover $\{ U_i \}$ of $X$ for which each $\mathcal{F}_{|U_i}$ is a free $\mathcal{O}_X$-module. In that case the {\bf rank} of $\mathcal{F}$ on such an open set is the number of copies of the structure sheaf $\mathcal{O}_X$ needed. For $X$ irreducible, thus, the notion of rank of a locally free sheaf is well defined. A locally free $\mathcal{O}_X$-module of rank $1$ is called an {\bf invertible $\mathcal{O}_X$-module}. 
\end{definition} \begin{definition} For any ringed space $(X,\mathcal{O}_X)$, define the {\bf Picard group} $\mathrm{Pic}(X)$ of $X$ to be the group of isomorphism classes of invertible $\mathcal{O}_X$-modules under the operation $\otimes_{\mathcal{O}_X}$, cf.\ \cite[II.6.12]{Hartshorne:1977}. The inverse of an invertible $\mathcal{O}_X$-module $\mathcal{L}$ is $\mathcal{L}^\vee := \mathcal{H}om_{\mathcal{O}_X}(\mathcal{L},\mathcal{O}_X)$, cf.\ \cite[Ex.~II.5.1]{Hartshorne:1977}, \cite[Ex.~5.1.12]{Liu:2002}. \end{definition} \begin{definition} An {\bf affine scheme} is a locally ringed space $(X,\mathcal{O}_X)$ which is isomorphic to the spectrum of some ring. A locally ringed space $(X,\mathcal{O}_X)$ is a {\bf scheme}, if every point of $X$ has an open neighbourhood $U$ such that $(U,\mathcal{O}_{X|U})$ is an affine scheme. Its {\bf dimension} is the dimension of $X$ as a topological space (cf.\ \ref{begin}) and it is called {\bf irreducible}, if $X$ is irreducible. \end{definition} \begin{definition} \label{closedsub} A {\bf morphism} of schemes is a morphism of locally ringed spaces. Likewise, an {\bf isomorphism} of schemes is an isomorphism of locally ringed spaces. A {\bf closed subscheme} $(Z,\mathcal{O}_Z)$ of a scheme $(X,\mathcal{O}_X)$ consists of a closed subset $Z$ of $X$ and a closed immersion $(j,j^\sharp) : (Z,\mathcal{O}_Z) \to (X,\mathcal{O}_X)$ where $j$ is the canonical injection. (Cf.\ \ref{morphring}.) \end{definition} \begin{definition}\label{sscheme} Let $S$ be a scheme. An {\bf $S$-scheme} or a {\bf scheme over $S$} is a scheme $X$ endowed with a morphism of schemes $\pi : X \to S$. If $S = \mathrm{Spec} A$ for a ring $A$, then an $S$-scheme is also called an {\bf $A$-scheme}. \end{definition} \begin{example}\label{morphismglobalfield} Let $(X,\mathcal{O}_X)$ be a scheme, for $x \in X$ let $\mathcal{O}_{X,x}$ be the stalk at $x$ (cf.\ \ref{stalk}) and let $\mathfrak{m}_x$ be its maximal ideal.
The {\bf residue field} of $x$ on $X$ is $k(x) := \mathcal{O}_{X,x}/\mathfrak{m}_x$. For any field $K$, by \cite[Ex.~II.2.7]{Hartshorne:1977}, there exists a one-to-one correspondence between morphisms from the spectrum of $K$ to $(X,\mathcal{O}_X)$ and pairs of a point $x \in X$ and an inclusion $k(x) \to K$. In other words, any spectrum of a field that contains the residue field of some point $x \in X$ as a subfield can be considered as an $X$-scheme. \end{example} \begin{definition}\label{fibredproduct} Let $S$ be a scheme and let $X$, $Y$ be $S$-schemes with respect to the morphisms $\pi_X : X \to S$ and $\pi_Y : Y \to S$. The {\bf fibred product} $X \times_S Y$ of $X$ and $Y$ over $S$ is defined to be an $S$-scheme together with morphisms $p_X : X \times_S Y \to X$ and $p_Y : X \times_S Y \to Y$ satisfying $\pi_X \circ p_X = \pi_Y \circ p_Y$, such that given any $S$-scheme $Z$ and morphisms $f : Z \to X$ and $g : Z \to Y$ satisfying $\pi_X \circ f = \pi_Y \circ g$, then there exists a unique morphism $\theta : Z \to X \times_S Y$ satisfying $f = p_X \circ \theta$ and $g = p_Y \circ \theta$. $$\xymatrix{ Z \ar@{-->}[rrr]^\theta \ar@/^/[drr]_f \ar@/^/[drrrr]^g & & & X \times_S Y \ar[dl]^{p_X} \ar[dr]^{p_Y}& \\ & & X \ar[dr]^{\pi_X} & & Y \ar[dl]_{\pi_Y} \\ & & & S & } $$ By \cite[II.3.3]{Hartshorne:1977}, \cite[3.1.2]{Liu:2002} the fibred product of two $S$-schemes exists and is unique up to unique isomorphism. \end{definition} \begin{example} Let $A$ be a ring, let $B = \bigoplus_{d \geq 0} B_d$ be a graded $A$-algebra, and let $B_+ := \bigoplus_{d > 0} B_d$. 
Define the set $\mathrm{Proj}(B)$ to be the set of homogeneous prime ideals of $B$ which do not contain $B_+$, and define a topology on $\mathrm{Proj}(B)$ by taking the closed sets to be sets of the form $V(\mathfrak{a}) = \{ \mathfrak{p} \in \mathrm{Proj}(B) \mid \mathfrak{a} \subseteq \mathfrak{p} \}$, for arbitrary homogeneous ideals $\mathfrak{a}$ of $B$; cf.\ \cite[II.2.4]{Hartshorne:1977}, \cite[p.~50]{Liu:2002}. For each $\mathfrak{p} \in \mathrm{Proj}(B)$ let $B_\mathfrak{p}$ be the ring of elements of degree zero in the localized ring $T^{-1}B$, where $T$ is the multiplicative system of all homogeneous elements of $B$ which are not in $\mathfrak{p}$. For an open set $U \subseteq \mathrm{Proj}(B)$ define $\mathcal{O}(U)$ to be the set of functions $s : U \to \bigsqcup_{\mathfrak{p} \in U} B_\mathfrak{p}$ such that $s(\mathfrak{p}) \in B_\mathfrak{p}$ and for each $\mathfrak{p} \in U$ there exist a neighbourhood $V \subseteq U$ of $\mathfrak{p}$ and homogeneous elements $a, f \in B$ of the same degree with $f \not\in \mathfrak{q}$ for each $\mathfrak{q} \in V$ and $s(\mathfrak{q}) = a/f$ in $B_\mathfrak{q}$. The resulting ringed space $\mathrm{Proj} B = (\mathrm{Proj}(B),\mathcal{O})$ is a scheme, in fact an $A$-scheme, cf.\ \cite[II.2.5]{Hartshorne:1977}, \cite[2.3.38]{Liu:2002}. \end{example} \begin{example}\label{pna} Let $A$ be a ring and let $B = A[x_0,x_1,...,x_n]$ with grading by degree. Then the scheme $\mathbb{P}_n(A) := \mathrm{Proj} B$ is the {\bf projective $n$-space over $A$}. \end{example} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{catschemes}\label{duality} The schemes together with their morphisms form a category $\mathfrak{Sch}$. It contains the category $\mathfrak{AffSch}$ of affine schemes as a subcategory.
The functor $\mathrm{Spec}$ from $\mathfrak{CRing}$ to $\mathfrak{AffSch}$ that sends $A$ to $\mathrm{Spec} A$ and a ring homomorphism $A \to B$ to the corresponding morphism of ringed spaces from the spectrum of $B$ to the spectrum of $A$ (cf.~\cite[II.2.3]{Hartshorne:1977}) is a contravariant equivalence (also called a duality) between the category of commutative rings and the category of affine schemes. The global section functor, i.e., the functor from $\mathfrak{AffSch}$ to $\mathfrak{CRing}$ that sends $(\mathrm{Spec}(A),\mathcal{O})$ to $\mathcal{O}(\mathrm{Spec}(A))$, is an inverse functor to $\mathrm{Spec}$; cf.\ \cite[II.2.2, II.2.3]{Hartshorne:1977}. \medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{finalobject} The affine scheme $\mathrm{Spec} \mathbb{Z}$ is a final object in both categories of schemes and affine schemes; cf.\ \cite[Ex.~II.2.5]{Hartshorne:1977}. \begin{definition}\label{curvesch}\label{nonsingular2} Let $k$ be a perfect field. A {\bf projective scheme over $k$} is a scheme over $k$ that is isomorphic to a closed subscheme of the scheme $\mathbb{P}_n(k)$, for some $n$ (cf.\ \ref{closedsub}, \ref{pna}). A {\bf projective curve over $k$} is an irreducible projective scheme of dimension $1$. A projective curve $Y$ is called {\bf non-singular} at $P \in Y$, if $\mathcal{O}_P$ (cf.\ \ref{stalk}) is a discrete valuation ring (cf.\ \cite[4.2.9]{Liu:2002}). A projective curve is called {\bf non-singular}, if it is non-singular at each of its points. \end{definition} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{vectormodule} Comparing \ref{nonsingular} with \ref{nonsingular2} it is clear that a non-singular projective curve, defined as a variety, yields a non-singular projective curve if considered as a scheme. 
In the context of \ref{curveasscheme} one can convert locally free $\mathcal{O}_Y$-modules (\ref{oxmodule}) into vector bundles over $Y$ (\ref{vectorbundle}) and vice versa, cf.\ \cite[p.~61]{Harder:2008}, \cite[Ex.\ II.5.18]{Hartshorne:1977}, \cite{Serre:1955}: a vector bundle $\pi : E \to Y$ yields a sheaf $\mathcal{L}_E$ by sending an open subset $U$ of $Y$ to the (fibre-wise) $\mathcal{O}_Y(U)$-module of sections $U \to E$ over $U$. \medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} Let $A$ be a ring. The quotient of $A[x_0,x_1,...,x_n]$ by a homogeneous ideal is a {\bf homogeneous $A$-algebra}. By \cite[2.3.41]{Liu:2002}, for every homogeneous $A$-algebra $B$, the scheme $\mathrm{Proj} B$ is a projective scheme and, by \cite[5.1.30]{Liu:2002}, conversely every projective scheme over $A$ is isomorphic to $\mathrm{Proj} B$, for some homogeneous $A$-algebra $B$. \begin{definition} Let $X$ be a scheme. For each open subset $U$, let $S(U)$ denote the set of elements $s \in \mathcal{O}_X(U)$ such that for each $x \in U$ the germ $s_x$ is not a zero divisor in the stalk $\mathcal{O}_x$. Then the {\bf sheaf $\mathcal{K}$ of total quotient rings} of the sheaf $\mathcal{O}_X$ is the sheaf associated to the presheaf $U \mapsto S(U)^{-1}\mathcal{O}_X(U)$ (cf.\ \ref{sheafification}). Let $\mathcal{K}^*$ and $\mathcal{O}^*$ denote the sheaves of groups of invertible elements in the sheaves of rings $\mathcal{K}$, resp.\ $\mathcal{O}_X$. An element of the group $(\mathcal{K}^*/\mathcal{O}^*)(X)$ is called a {\bf Cartier divisor}. A Cartier divisor is {\bf principal} if it is in the image of the natural map $\mathcal{K}^*(X) \to (\mathcal{K}^*/\mathcal{O}^*)(X)$. Two Cartier divisors are {\bf linearly equivalent} if their difference is principal. \end{definition} \begin{proposition}[{\cite[II.6.11]{Hartshorne:1977}, \cite[7.2.16]{Liu:2002}}] \label{weilcartier} Let $Y$ be a non-singular projective curve over $\mathbb{F}_q$. 
Then there exists a natural isomorphism between the group $\mathrm{Div}(Y)$ of Weil divisors (cf.~\ref{divisor}) and the group $(\mathcal{K}^*/\mathcal{O}^*)(Y)$ of Cartier divisors and, furthermore, the principal Weil divisors correspond to the principal Cartier divisors under this isomorphism. \end{proposition} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{concreteweil} In fact, as $\mathcal{O}_Y$ is integral (\cite[2.4.17]{Liu:2002}), the sheaf $\mathcal{K}$ is the constant sheaf corresponding to the function field $K$ of $Y$. As in \ref{quotient}, a Cartier divisor is given by a family $\{ U_i, f_i\}$ where $\{U_i\}$ is an open cover of $Y$ and $f_i \in \mathcal{K}^*(U_i) = K^*$. For each $P \in Y$ and for all $i$, $j$ such that $P \in U_i, U_j$ one has $\nu_P(f_i) = \nu_P(f_j)$, as $\frac{f_i}{f_j}$ is invertible on $U_i \cap U_j$. One therefore obtains the well-defined Weil divisor $\sum_{P \in Y^\circ} \nu_P(f_i)P$. \begin{definition} Let $D$ be a Cartier divisor on a non-singular projective curve $Y/\mathbb{F}_q$ represented by a family $\{ U_i, f_i\}$ where $\{U_i\}$ is an open cover of $Y$ and $f_i \in \mathcal{K}^*(U_i)$. The {\bf sheaf $\mathcal{L}(D)$ associated to $D$} is the subsheaf of $\mathcal{K}$ given by taking $\mathcal{L}(D)$ to be the sub-$\mathcal{O}_Y$-module of $\mathcal{K}$ generated by $f_i^{-1}$ on $U_i$. Since $\frac{f_i}{f_j}$ is invertible on $U_i \cap U_j$, the elements $f_i^{-1}$ and $f_j^{-1}$ generate the same $\mathcal{O}_Y$-module, and hence $\mathcal{L}(D)$ is well defined. \end{definition} \begin{proposition}[{\cite[II.6.13, II.6.15]{Hartshorne:1977}, \cite[7.1.19]{Liu:2002}}] \label{cartierinvert} Let $Y$ be a non-singular projective curve over $\mathbb{F}_q$. Then for any Cartier divisor $D$, the sheaf $\mathcal{L}(D)$ is an invertible $\mathcal{O}_Y$-module. Moreover, the map $D \mapsto \mathcal{L}(D)$ induces a surjection from the group of Cartier divisors onto the Picard group $\mathrm{Pic}(Y)$.
The kernel of this surjection is the group of principal Cartier divisors. \end{proposition} \begin{definition} \label{c} Let $Y/\mathbb{F}_q$ be a non-singular projective curve. Define the {\bf degree} $c(\mathcal{L})$ of an invertible $\mathcal{O}_Y$-module $\mathcal{L}$ to be the degree $\mathrm{deg}(D)$ of the corresponding Weil divisor; cf.\ \ref{weilcartier}, \ref{concreteweil}, \ref{cartierinvert}; \cite[7.3.1, 7.3.2]{Liu:2002}. \end{definition} \begin{example} Let $Y = \mathbb{P}_1 = \{ (x:y) \}$ and let $U_1 = \{ (x:y) \mid y \neq 0 \}$ and $U_2 = \{ (x:y) \mid x \neq 0 \}$ be an open cover of $Y$ with local coordinates $s = \frac{x}{y}$ and $t = \frac{y}{x}$. For $d \in \mathbb{N}$ define the line bundle $L(d) = \{ ((x:y),(a,b)) \in \mathbb{P}_1 \times A_2 \mid x^db = y^da \}$ with canonical projection $\pi : L(d) \to \mathbb{P}_1$. Then \begin{eqnarray*} U_1 \times A_1 & \stackrel{f_1}{\cong} & \pi^{-1}(U_1) = \{ ((x:y),(a,b)) \in U_1 \times A_2 \mid a = s^db \} \\ (s,b) & \mapsto & ((s:1),(s^db,b)) \\ U_2 \times A_1 & \stackrel{f_2}{\cong} & \pi^{-1}(U_2) = \{ ((x:y),(a,b)) \in U_2 \times A_2 \mid b = t^da \} \\ (t,a) & \mapsto & ((1:t),(a,t^da)). \end{eqnarray*} One has ${f_1}_{|U_1 \cap U_2} = s^d {f_2}_{|U_1 \cap U_2}$, so that the line bundle $L(d)$ corresponds to the Weil divisor $\sum_{P \in Y^\circ} \nu_P(f_i)P$ of degree $d$ (cf.~\ref{concreteweil}). \end{example} \begin{theorem}[Riemann--Roch Theorem, {\cite[IV.1.3]{Hartshorne:1977}, \cite[7.3.26]{Liu:2002}}] \label{RiemannRoch2} Let $Y/\mathbb{F}_q$ be a non-singular projective curve of genus $g$. Then for any invertible $\mathcal{O}_Y$-module $\mathcal{L}$ $$\mathrm{dim}_{\mathbb{F}_q}(H^0(Y,\mathcal{L})) - \mathrm{dim}_{\mathbb{F}_q}(H^1(Y,\mathcal{L})) = c(\mathcal{L}) + 1 - g.$$ \end{theorem} Here $H^i(Y,\mathcal{L})$ denotes sheaf cohomology groups as defined in \cite[III]{Hartshorne:1977}, \cite[5.2]{Liu:2002}.
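As a quick numerical sanity check of the Riemann--Roch theorem, added for this exposition, one can take $Y = \mathbb{P}_1$ (genus $g = 0$), where the invertible sheaves are the $\mathcal{O}(d)$, $d \in \mathbb{Z}$, of degree $c(\mathcal{O}(d)) = d$: the dimension of $H^0$ is $d+1$ for $d \geq 0$ (the homogeneous polynomials of degree $d$ in $x$, $y$) and $0$ otherwise, while $H^1$ is computed by Serre duality against the canonical sheaf $\mathcal{O}(-2)$.

```python
# Sanity check of the Riemann-Roch theorem for Y = P^1 (genus g = 0).
# The cohomology dimensions used here are classical facts about P^1,
# not computed from the sheaf-theoretic definitions.

def h0(d: int) -> int:
    """dim H^0(P^1, O(d)): homogeneous polynomials of degree d in x, y."""
    return d + 1 if d >= 0 else 0

def h1(d: int) -> int:
    """dim H^1(P^1, O(d)) via Serre duality with canonical sheaf O(-2)."""
    return h0(-d - 2)

g = 0  # genus of P^1
for d in range(-10, 11):  # c(O(d)) = d
    assert h0(d) - h1(d) == d + 1 - g
```

The dimensions do not depend on the base field, so the check applies verbatim over $\mathbb{F}_q$ as in the theorem above.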
\medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{RiemannRoch3} For a locally free $\mathcal{O}_Y$-module $\mathcal{E}$ of rank $n$ define $c(\mathcal{E}) := \sum_{i=1}^n c(\mathcal{E}_i/\mathcal{E}_{i-1})$ where $0 = \mathcal{E}_0 \subset \mathcal{E}_1 \subset \cdots \subset \mathcal{E}_{n-1} \subset \mathcal{E}_n = \mathcal{E}$ is a filtration such that each $\mathcal{E}_i/\mathcal{E}_{i-1}$ is invertible; cf.\ \cite[2.1]{Grothendieck:1957}, \cite[p.~122]{Harder:1968}. Then \ref{RiemannRoch2} implies $\mathrm{dim}_{\mathbb{F}_q}(H^0(Y,\mathcal{E})) - \mathrm{dim}_{\mathbb{F}_q}(H^1(Y,\mathcal{E})) = c(\mathcal{E}) + (1 - g)n$. (Cf.\ \cite[p.~99]{Weil:1995}.) \section{Reduction theory for rationally trivial group schemes} \label{reductiontheoryrtgs} \begin{introduction} In this section I describe the heart of Harder's reduction theory based on \cite{Harder:1968}. From this section on I will assume the reader has a basic intuition for linear algebraic groups and feels comfortable with some of the standard terminology such as that of Borel subgroups and parabolic subgroups. For introductory reading the sources \cite{Borel:1991} and \cite{Humphreys:1975}, for further reading the sources \cite{Jantzen:2003}, \cite{Springer:1998}, \cite{Voskresenski:1998}, \cite{Waterhouse:1979} are highly recommended. I also recommend \cite[Appendix E]{Laumon:1997}. \end{introduction} \begin{definition} An object $X$ of a category $\mathfrak{C}$ is called a {\bf group object}, if for each object $Y$ of $\mathfrak{C}$ the set $\mathrm{Hom}_\mathfrak{C}(Y,X)$ of morphisms from $Y$ to $X$ is a group and the correspondence $Y \mapsto \mathrm{Hom}_\mathfrak{C}(Y,X)$ is a (contravariant) functor from the category $\mathfrak{C}$ to the category $\mathfrak{Gr}$ of groups; cf.\ \cite[p.~3]{Voskresenski:1998}.
\end{definition} \begin{definition} A group object in the categories $\mathfrak{Sch}$ of schemes, resp.\ $\mathfrak{AffSch}$ of affine schemes (cf.\ \ref{catschemes}) is called a {\bf group scheme}, resp.\ an {\bf affine group scheme}. \end{definition} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{affinegroupschemeisgroupscheme} As the categories $\mathfrak{Sch}$ and $\mathfrak{AffSch}$ both have $\mathrm{Spec} \mathbb{Z}$ as a final object (cf.\ \ref{finalobject}) and have finite products, being an (affine) group scheme is an intrinsic property; cf.\ \cite[p.~3]{Voskresenski:1998}. In particular, any affine group scheme is a group scheme. \medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{duality2} When restricting (and co-restricting accordingly) the functor $\mathrm{Spec}$ from $\mathfrak{CRing}$ to $\mathfrak{AffSch}$ (cf.~\ref{catschemes}) to the subcategory of commutative Hopf algebras, it yields a contravariant equivalence (duality) to the category of affine group schemes; cf.\ \cite[1.3]{Voskresenski:1998}, \cite[p.~9]{Waterhouse:1979}. \begin{example} Let $\mathrm{G}_\mathrm{a}$ be the covariant functor on the category $\mathfrak{CRing}$ defined by $\mathrm{G}_\mathrm{a}(A) = (A,+)$. Since by means of duality (cf.\ \ref{duality}) $$\mathrm{Hom}_{\mathfrak{CRing}}(\mathbb{Z}[T],A) = A \quad \Longleftrightarrow \quad \mathrm{Hom}_{\mathfrak{AffSch}}(\mathrm{Spec}A, \mathrm{Spec}\mathbb{Z}[T]) = A,$$ the functor $\mathrm{G}_\mathrm{a}$, considered as the contravariant functor on the category $\mathfrak{AffSch}$ that sends $(\mathrm{Spec}(A),\mathcal{O})$ to $(\mathcal{O}(\mathrm{Spec}(A)),+)$, is represented by $\mathrm{Spec}\mathbb{Z}[T]$. Therefore, $\mathrm{Spec}\mathbb{Z}[T]$ is an affine group scheme. By \ref{affinegroupschemeisgroupscheme}, it is also a group scheme, which as a functor sends $(X,\mathcal{O}_X)$ to $(\mathcal{O}_X(X),+)$.
\end{example} \begin{example} Let $\mathrm{G}_\mathrm{m}$ be the covariant functor on the category $\mathfrak{CRing}$ defined by $\mathrm{G}_\mathrm{m}(A) = (A^*,\cdot)$. Since $$\mathrm{Hom}_{\mathfrak{CRing}}(\mathbb{Z}[T,T^{-1}],A) = A^* \quad \Longleftrightarrow \quad \mathrm{Hom}_{\mathfrak{AffSch}}(\mathrm{Spec}A, \mathrm{Spec}\mathbb{Z}[T,T^{-1}]) = A^*,$$ the functor $\mathrm{G}_\mathrm{m}$, considered as a contravariant functor on the category $\mathfrak{AffSch}$, is represented by $\mathrm{Spec}\mathbb{Z}[T,T^{-1}]$. Therefore, $\mathrm{Spec}\mathbb{Z}[T,T^{-1}]$ is an affine group scheme and, by \ref{affinegroupschemeisgroupscheme}, also a group scheme. \end{example} \begin{example} \label{gln} Let $\mathrm{GL}_n$ be the covariant functor on the category $\mathfrak{CRing}$ defined by $$A \mapsto \mathrm{GL}_n(A) = \{ M \in A^{n \times n} \mid \mathrm{det}(M) \in A^* \}.$$ It is represented by the affine scheme $\mathrm{GL}_n = \mathrm{Spec} \mathbb{Z}[T_{11}, ..., T_{nn}, \mathrm{det}(T_{ij})^{-1}]$ and, hence, is an affine group scheme. By \ref{affinegroupschemeisgroupscheme}, $\mathrm{GL}_n$ is also a group scheme, and induces the contravariant functor on $\mathfrak{Sch}$ that sends $(X,\mathcal{O}_X)$ to $\mathrm{GL}_n(\mathcal{O}_X(X))$.
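To make the functor of points concrete, here is a brute-force evaluation of $\mathrm{GL}_2$ on a few small finite rings; this illustration is added for this exposition and is not taken from the cited sources. Note that over an arbitrary commutative ring $A$ invertibility of a matrix $M$ amounts to $\mathrm{det}(M)$ being a unit of $A$ (for a field this is the condition $\mathrm{det}(M) \neq 0$).

```python
# Brute-force evaluation of the functor GL_2 on the rings Z/n.
# Over a commutative ring A, M is invertible iff det(M) is a unit of A.
from itertools import product

def gl2_order(n: int) -> int:
    """Number of invertible 2x2 matrices over Z/n."""
    units = {u for u in range(n) if any((u * v) % n == 1 for v in range(n))}
    count = 0
    for a, b, c, d in product(range(n), repeat=4):
        if (a * d - b * c) % n in units:
            count += 1
    return count

# Classical counts: |GL_2(F_q)| = (q^2 - 1)(q^2 - q) for a prime q.
assert gl2_order(2) == (4 - 1) * (4 - 2)      # 6
assert gl2_order(3) == (9 - 1) * (9 - 3)      # 48
# Over Z/4, reduction mod 2 is surjective with kernel of order 2^4.
assert gl2_order(4) == 2 ** 4 * gl2_order(2)  # 96
```

The three counts match the classical formula for $|\mathrm{GL}_2(\mathbb{F}_q)|$ and, for $\mathbb{Z}/4$, the fact that reduction modulo $2$ is surjective with kernel $\{ I + 2N \}$ of order $2^4$.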
An alternative, basis-free way of defining $\mathrm{GL}_n$ is to define a contravariant functor on $\mathfrak{Sch}$ that sends $$(X,\mathcal{O}_X) \quad \mbox{to} \quad \mathrm{Aut}_{\mathcal{O}_X(X)}\left(\bigoplus_{i=1}^n \mathcal{O}_X(X)\right).$$ \end{example} \begin{example} \label{sln} Let $\mathrm{SL}_n$ be the covariant functor on the category $\mathfrak{CRing}$ defined by $$A \mapsto \mathrm{ker}\left(\mathrm{GL}_n(A) \to \mathrm{G}_{\mathrm{m}}(A) \right) = \{ M \in A^{n \times n} \mid \mathrm{det}(M) = 1 \}.$$ By Yoneda's lemma (\cite[p.~6]{Waterhouse:1979}) the natural transformation $\mathrm{GL}_n \to \mathrm{G}_{\mathrm{m}}$ can be described by a homomorphism $\mathbb{Z}[T,T^{-1}] \to \mathbb{Z}[T_{11}, ..., T_{nn}, \mathrm{det}(T_{ij})^{-1}]$, the one that sends $T$ to $\mathrm{det}(T_{ij})$. Therefore $\mathrm{SL}_n$ is represented by $$\mathbb{Z}[T_{11}, ..., T_{nn}, \mathrm{det}(T_{ij})^{-1}] \otimes_{\mathbb{Z}[T,T^{-1}]} \mathbb{Z} = \mathbb{Z}[T_{11}, ..., T_{nn}]/(\mathrm{det}(T_{ij})-1).$$ In other words, $\mathrm{Spec} \mathbb{Z}[T_{11}, ..., T_{nn}]/(\mathrm{det}(T_{ij})-1)$ is an (affine) group scheme. \end{example} \begin{definition} \label{sgroupscheme} An $S$-scheme (cf.\ \ref{sscheme}) that is an (affine) group scheme is called an {\bf (affine) group $S$-scheme}. As the categories of (affine) $S$-schemes have finite products and have the scheme $S$ as a final object, as in \ref{affinegroupschemeisgroupscheme} being an (affine) group $S$-scheme is an intrinsic property. In particular, any affine group $S$-scheme is a group $S$-scheme. \end{definition} \begin{example} \label{slngen} Generalizing \ref{sln}, let $S$ be a scheme, let $G$ and $H$ be group $S$-schemes, and let $f : G \to H$ be a homomorphism of group $S$-schemes, i.e., a morphism of $S$-schemes from $G$ to $H$ such that for each $S$-scheme $Y$ the induced map $\mathrm{Hom}_\mathfrak{C}(Y,G) \to \mathrm{Hom}_\mathfrak{C}(Y,H)$ is a group homomorphism. 
Define a functor on the category of $S$-schemes by sending $$Y \quad \mbox{to} \quad \mathrm{ker}(\mathrm{Hom}_\mathfrak{C}(Y,G) \to \mathrm{Hom}_\mathfrak{C}(Y,H)).$$ It is represented by the fibred product $S$-scheme $G \times_{H} S$, cf.~\ref{fibredproduct}. \end{example} \begin{example} \label{sle} Let $(X,\mathcal{O}_X)$ be an (irreducible) scheme and let $\mathcal{E}$ be a locally free $\mathcal{O}_X$-module of finite rank (\ref{oxmodule}). Following \cite[I~4.5]{Demazure/Grothendieck:1970a} define a functor $\mathrm{GL}(\mathcal{E})$ on the category of $X$-schemes that sends $$(Y,\mathcal{O}_Y) \quad \mbox{to} \quad \mathrm{Aut}_{\mathcal{O}_Y(Y)}\left(\left(\mathcal{O}_Y \otimes_{f^{-1}\mathcal{O}_X} f^{-1}\mathcal{E}\right)(Y)\right),$$ where $f$ denotes the morphism of schemes from the $X$-scheme $(Y,\mathcal{O}_Y)$ to the scheme $(X,\mathcal{O}_X)$ (\ref{sscheme}) and $f^{-1}\mathcal{O}_X$ and $f^{-1}\mathcal{E}$ denote the respective inverse image sheaves (\ref{directimage}). For $\mathcal{E} = \bigoplus_{i = 1}^n \mathcal{O}_X$ this is just the affine group $X$-scheme $\mathrm{GL}_n/X$ from \ref{gln}. The present more general example is obtained from that example by a twisting process; cf.\ \cite[p.~134]{Harder:1968}, \cite[p.~117]{Hartshorne:1977}. In analogy to \ref{sln} define the (affine) group $X$-scheme $\mathrm{SL}(\mathcal{E})$ as $\mathrm{ker}(\mathrm{GL}(\mathcal{E}) \to \mathrm{G}_\mathrm{m}/X)$ (cf.~\ref{slngen}). \end{example} \begin{definition} Let $Y/\mathbb{F}_q$ be a non-singular projective curve (\ref{vectormodule}), let $K = \mathbb{F}_q(Y)$ be its field of $\mathbb{F}_q$-rational functions (\ref{rationalfunctions}) considered as the residue field of $Y$ at its generic point (\ref{morphismglobalfield} and \cite[Ex.~II.3.6]{Hartshorne:1977}), let $G/Y$ be an affine group $Y$-scheme (\ref{sgroupscheme}), and consider $\mathrm{Spec}K$ as a $Y$-scheme via the identity map on $K$ (\ref{morphismglobalfield}). 
The scheme $G/Y$ is called a {\bf rationally trivial group ($Y$-)scheme} if $G/K := G \times_Y \mathrm{Spec}K$ (\ref{fibredproduct}) is a Chevalley scheme (\cite[I.8.3.1]{Grothendieck:1960}). It is called {\bf reductive}, if $G/K = G \times_Y \mathrm{Spec} K$ is reductive (\cite[XIX~1.6]{Demazure/Grothendieck:1970c}). \end{definition} \begin{proposition}[{\cite[XX~1.3,~1.5,~1.16,~1.17; XXII~5.6.5,~5.9.5; XXVI~1.12,~2.1]{Demazure/Grothendieck:1970c}}] \label{filtration} Let $Y/\mathbb{F}_q$ be a non-singular projective curve, let $G/Y$ be a reductive group $Y$-scheme, and let $P/Y$ be a parabolic subgroup of $G/Y$ (cf.~\cite[XXVI~1.1]{Demazure/Grothendieck:1970c}). Then the unipotent radical $R_u(P)$ (cf.~\cite[XIX~1.2]{Demazure/Grothendieck:1970c}) admits a filtration $R_u(P) = U_0 \supset U_1 \supset \cdots \supset U_k = \{ e \}$ such that each $U_i/U_{i+1}$ is a vector bundle over $Y$. More precisely, if $P_\alpha$ denotes the vector bundle over $Y$ corresponding to the root space $\mathfrak{g}^\alpha$ (cf.~\cite[XIX~1.10]{Demazure/Grothendieck:1970c}), then $$U_i = \prod_{\alpha \in \Delta^+_P, l(\alpha)>i} P_\alpha \quad \quad \mbox{ and } \quad \quad U_i/U_{i+1} \cong \prod_{\alpha \in \Delta^+_P, l(\alpha)=i+1} P_\alpha.$$ \end{proposition} \begin{example} \label{borelcorrespondence} Let $A_1 \cong \mathcal{O}_Y \cong A_2$, let $\mathcal{E} = A_1 \oplus A_2$ and let $\mathrm{SL}(\mathcal{E}) = \mathrm{SL}_2/Y$ be the group $Y$-scheme defined in \ref{sln}, \ref{sle}. Each of $\mathcal{H}om_{\mathcal{O}_Y}(A_1,A_2)$ and $\mathcal{H}om_{\mathcal{O}_Y}(A_2,A_1)$ equals the unipotent radical of a Borel subgroup of $\mathrm{SL}(A_1 \oplus A_2)$ and is an invertible $\mathcal{O}_Y$-module, because $\mathcal{H}om_{\mathcal{O}_Y}(A_i,A_j) \cong A_i^\vee \otimes_{\mathcal{O}_Y} A_j$. By \ref{vectormodule}, this corresponds to a vector bundle (of dimension $1$), confirming \ref{filtration} for $\mathrm{SL}_2$. 
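The filtration of \ref{filtration} can also be checked by direct computation for $G = \mathrm{SL}_3$, where $R_u(B)$ consists of the upper unitriangular matrices and the root heights give $U_1$ as the root group of the highest root $\alpha_1 + \alpha_2$. The following sketch is added for this exposition; it treats a single fibre, i.e.\ works over a field $\mathbb{F}_p$, and verifies that $[U_0, U_0] \subseteq U_1$, so that $U_0/U_1$ is indeed a vector group.

```python
# Check of the unipotent-radical filtration for SL_3 over F_p:
# U_0 = upper unitriangular matrices (roots a1, a2, a1+a2),
# U_1 = root group of the highest root a1+a2 (zero superdiagonal).
from itertools import product

p = 5  # work over the field F_p

def unitri(a, b, c):
    """Upper unitriangular 3x3 matrix with superdiagonal (a, b) and corner c."""
    return ((1, a, c), (0, 1, b), (0, 0, 1))

def mul(X, Y):
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

def inv(X):
    # (I + N)^{-1} = I - N + N^2 for the nilpotent part N of X.
    a, b, c = X[0][1], X[1][2], X[0][2]
    return unitri(-a % p, -b % p, (a * b - c) % p)

def in_U1(X):
    """Only the highest-root coordinate (the corner entry) may survive."""
    return X[0][1] == 0 and X[1][2] == 0

for x in product(range(p), repeat=3):
    for y in product(range(p), repeat=3):
        X, Y = unitri(*x), unitri(*y)
        comm = mul(mul(X, Y), mul(inv(X), inv(Y)))
        assert in_U1(comm)  # [U_0, U_0] lies in U_1, so U_0/U_1 is a vector group
```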
More generally, let $\mathcal{E}$ be a locally free $\mathcal{O}_Y$-module of rank $2$ with an invertible $\mathcal{O}_Y$-submodule $\mathcal{L}$ such that $\mathcal{E}/\mathcal{L}$ is also invertible and let $\mathrm{SL}(\mathcal{E})$ be the group $Y$-scheme defined in \ref{sle}. There is a one-to-one correspondence between the Borel subgroups $B$ of $\mathrm{SL}(\mathcal{E})$ and the invertible $\mathcal{O}_Y$-submodules $\mathcal{L}$ such that $\mathcal{E}/\mathcal{L}$ is also invertible, given by $\mathcal{L} \mapsto \mathrm{Stab}_{\mathrm{SL}(\mathcal{E})}(\mathcal{L})$. The unipotent radical $R_u(B)$ is isomorphic to $\mathcal{H}om_{\mathcal{O}_Y}(\mathcal{E}/\mathcal{L},\mathcal{L})$. For any invertible $\mathcal{O}_Y$-module $\mathcal{H}$, the equality $\mathcal{H}om_{\mathcal{O}_Y}(\mathcal{E}/\mathcal{L} \otimes_{\mathcal{O}_Y} \mathcal{H}, \mathcal{L} \otimes_{\mathcal{O}_Y} \mathcal{H}) = (\mathcal{E}/\mathcal{L} \otimes_{\mathcal{O}_Y} \mathcal{H})^\vee \otimes_{\mathcal{O}_Y} (\mathcal{L} \otimes_{\mathcal{O}_Y} \mathcal{H}) = \mathcal{H}om_{\mathcal{O}_Y}(\mathcal{E}/\mathcal{L},\mathcal{L})$ implies $\mathrm{SL}(\mathcal{E} \otimes_{\mathcal{O}_Y} \mathcal{H}) \cong \mathrm{SL}(\mathcal{E})$. (Cf.~\cite[I, 4.5]{Demazure/Grothendieck:1970a},\cite[XX, 5.1]{Demazure/Grothendieck:1970c}.) \end{example} \begin{definition} Let $G/Y$ be a rationally trivial group $Y$-scheme, let $B/Y$ be a Borel subgroup of $G/Y$ and let $\{ \alpha_1, ..., \alpha_r \}$ be the simple roots of $B$. Using the notation introduced in \ref{vectormodule}, \ref{c} and \ref{filtration}, define $$n_i(B) := c\left(\mathcal{L}_{B_{\alpha_i}}\right).$$ Note that, since $G/Y$ is rationally trivial, each $B_{\alpha_i}$ is a vector bundle of dimension $1$, so that $\mathcal{L}_{B_{\alpha_i}}$ is an invertible $\mathcal{O}_Y$-module. 
\end{definition} \begin{theorem}[{\cite[2.2.6]{Harder:1968}}]\label{reduction1} Let $G/Y$ be a rationally trivial group $Y$-scheme, where the curve $Y$ has genus $g$ (cf.~\ref{genus}). Then there exists a Borel subgroup $B/Y$ such that $n_i(B) \geq -2g$ for all $i \in I = \{ 1, ..., r \}$. \end{theorem} \begin{proof} In case $G/Y$ is of type $A_1$, i.e., if there exists a locally free $\mathcal{O}_Y$-module $\mathcal{E}$ of rank $2$ such that $G \cong \mathrm{SL}(\mathcal{E})$, one can proceed as follows. Let $\mathcal{L}$ be an invertible $\mathcal{O}_Y$-submodule such that $\mathcal{E}/\mathcal{L}$ is also invertible and let $B = \mathrm{Stab}_{\mathrm{SL}(\mathcal{E})}(\mathcal{L})$ (cf.\ \ref{borelcorrespondence}). Then $n_1(B) = c(R_u(B)) = c(\mathcal{H}om_{\mathcal{O}_Y}(\mathcal{E}/ \mathcal{L}, \mathcal{L})) = c((\mathcal{E}/ \mathcal{L})^\vee \otimes_{\mathcal{O}_Y} \mathcal{L}) = c(\mathcal{L}) - c(\mathcal{E} / \mathcal{L}) = 2c(\mathcal{L}) - c(\mathcal{E})$. By \ref{invertible1} and \ref{c} there exists an invertible $\mathcal{O}_Y$-module $\mathcal{H}$ with $c(\mathcal{H}) = 1$. Since $\mathrm{SL}(\mathcal{E} \otimes_{\mathcal{O}_Y} \mathcal{H}^\vee) \cong \mathrm{SL}(\mathcal{E}) \cong \mathrm{SL}(\mathcal{E} \otimes_{\mathcal{O}_Y} \mathcal{H})$ by \ref{borelcorrespondence}, the formula $c(\mathcal{E} \otimes_{\mathcal{O}_Y} \mathcal{H}^\vee) = c(\mathcal{E}) - 2 c(\mathcal{H}) = c(\mathcal{E}) - 2$ allows us to assume without loss of generality that $2g-2 < c(\mathcal{E}) \leq 2g$. By the Riemann--Roch theorem (cf.\ \ref{RiemannRoch3}) there exists $0 \neq t \in H^0(Y,\mathcal{E}) \cong \mathcal{E}(Y)$. Since $\mathcal{E}$ is locally free of rank $2$ there exists an open cover $\{ U_i\}$ of $Y$ such that $\mathcal{E}_{|U_i} \cong {\mathcal{O}_Y}_{|U_i} \oplus {\mathcal{O}_Y}_{|U_i}$ for each $U_i$. Therefore $t$ is contained in an invertible $\mathcal{O}_Y$-submodule $\mathcal{L}_t$ such that $\mathcal{E} / \mathcal{L}_t$ is also invertible.
As $t \in H^0(Y,\mathcal{L}_t)$, one has $c(\mathcal{L}_t) \geq 0$ by \cite[Lemma IV.1.2]{Hartshorne:1977}. Hence $$n_1(B_t) = c(\mathcal{H}om_{\mathcal{O}_Y}(\mathcal{E}/ \mathcal{L}_t, \mathcal{L}_t)) = 2c(\mathcal{L}_t) - c(\mathcal{E}) \geq -2g,$$ where $B_t = \mathrm{Stab}_{\mathrm{SL}(\mathcal{E})}(\mathcal{L}_t)$ (cf.\ \ref{borelcorrespondence}). The general case can be reduced to the case $A_1$ via a local to global argument. \end{proof} \begin{definition} Let $c_1 < -2g$. A Borel subgroup $B/Y$ of $G/Y$ is {\bf reduced}, if $n_i(B) \geq c_1$ for all $i$. \end{definition} \begin{theorem}[{\cite[2.2.13, 2.2.14]{Harder:1968}}] \label{reduction2} Let $G/Y$ be a rationally trivial group $Y$-scheme. Then there exist constants $c_2 > \gamma > c_1$ such that the following hold: Let $B/Y$ be a reduced Borel subgroup of $G/Y$ and let $\alpha_{i_0}$ be a simple root of $B$ such that $n_{i_0}(B) \geq c_2$. Then each reduced Borel subgroup $B'$ of $G/Y$ satisfies $n_{i_0}(B') \geq \gamma$ and is contained in $P_{i_0}(B)$, where $P_{i_0}(B)$ denotes the maximal parabolic of type $\alpha_{i_0}$ containing $B$. \end{theorem} \begin{proposition}[{\cite[2.2.3, 2.2.11]{Harder:1968}}] \label{isolatedparabolic2} To each reduced Borel subgroup $B/Y$ of $G/Y$, let $I^B := \{ i \in I \mid n_i(B) \geq c_2 \}$, and let $P^B := \bigcap_{i \in I^B} P_i(B)$. Then $$P := \bigcap_{B \subset G/Y \mbox{ reduced}} P^B$$ either equals $G/Y$ or is a parabolic subgroup of $G/Y$. \end{proposition} \begin{proof} Let $B$ be a reduced Borel subgroup of $G$. By \ref{reduction2} each reduced Borel subgroup $B'$ of $G$ is contained in $P^{B}$ and conversely, again by \ref{reduction2}, $B \subset P^{B'}$. Hence $P$ contains a Borel group and therefore either equals $G/Y$ or is a parabolic subgroup of $G/Y$. \end{proof} \noindent Theorem \ref{reduction2} is the heart of Harder's reduction theory. 
The only proof of \ref{reduction2} known to me in fact uses \ref{isolatedparabolic2}, as can be guessed from the numbering used in \cite{Harder:1968}. However, as the length of the proof of \ref{reduction2} by far surpasses anything I can reasonably include in this survey and as I will need to refer to \ref{isolatedparabolic2} later, I took the liberty of deducing \ref{isolatedparabolic2} from \ref{reduction2} for the sake of this exposition. \begin{proposition}[{\cite[p.~120]{Harder:1968}, \cite[p.~39]{Harder:1969}}] \label{sumofcharacters} Let $G/Y$ be a reductive group $Y$-scheme, let $P/Y$ be a parabolic $K$-subgroup of $G/Y$, let $d:= \mathrm{dim}(R_u(P))$, and let $\Delta^+_{P}$ be the set of positive roots of $P$. Then the character $$\chi_{P} : P \stackrel{\mathrm{Ad}}{\rightarrow} \mathrm{GL}\left(\mathrm{Lie}(R_u(P))\right) \stackrel{\mathrm{det}}{\rightarrow} \mathrm{GL}\left(\bigwedge^{d} \mathrm{Lie}(R_u(P))\right) \cong \mathrm{G}_\mathrm{m},$$ considered as a character of a maximal split torus contained in $P$, is given by $\chi_{P} = \sum_{\alpha \in \Delta^+_{P}} \mathrm{dim}(P_\alpha) \alpha$. \end{proposition} \begin{definition} \label{ccc} Let $B$ be a minimal parabolic $K$-subgroup of $G/Y$, let $\{ \alpha_1, ..., \alpha_r \}$ be the simple roots of $B$, and let $(P_i)_i$ be the maximal parabolic $K$-subgroups of $G$ of type $\alpha_i$. 
Using the notation of \ref{vectormodule}, \ref{RiemannRoch3} and \ref{filtration}, define $$p_i(B) := p(P_i) := \sum_{\alpha \in \Delta^+_{P_i}} c(\mathcal{L}_{P_\alpha}).$$ \end{definition} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{piandnu} Let $B$ be a minimal parabolic $K$-subgroup of $G$, let $R(B)$ be the radical of $B$, let $R_u(B)$ be the unipotent radical of $B$, let $T = R(B)/R_u(B)$, let $S \subseteq T$ be the maximal $K$-split subtorus of $T$, let $\pi = \{ \alpha_1, ..., \alpha_r \} \subset X(S)$ be the system of simple roots, and let $X(B) = \mathrm{Hom}_K(B,\mathrm{G}_\mathrm{m})$ be the module of $K$-rational characters of $B$ so that $X(B) \otimes \mathbb{Q} = X(S) \otimes \mathbb{Q}$, let $P_i \supseteq B$ be the maximal $K$-parabolic of type $\alpha_i$, let $\chi_{P_i} : P_i \to \mathrm{G}_\mathrm{m}$ be the sum of roots of $P_i$ (cf.\ \ref{sumofcharacters}), and let $\chi_i := {\chi_{P_i}}_{|B}$. The $\chi_i$ form a basis of $X(B) \otimes \mathbb{Q}$ and, if $(\cdot,\cdot)$ is a positive definite bilinear form on $X(B) \otimes \mathbb{Q}$ which is invariant under the action of the Weyl group, then \begin{eqnarray} (\chi_i,\alpha_j) & = & 0, \quad \mbox{if $i \neq j$, and} \label{orthogonal} \\ (\chi_i,\alpha_i) & > & 0 \quad \mbox{for all $i \in I$}. \label{acute} \end{eqnarray} If $G$ is rationally trivial and the $m_{i,j} \in \mathbb{Z}$ are such that $\chi_i = \sum_{j=1}^r m_{i,j} {\alpha_j}$, then $$p_i(B) = \sum_{j=1}^r m_{i,j} n_j(B).$$ Conversely, if the $c_{i,j} \in \mathbb{Q}$ are such that $\alpha_i = \sum_{j=1}^r c_{i,j} \chi_j$, then $$n_i(B) = \sum_{j=1}^r c_{i,j} p_j(B).$$ \section{Reduction theory for reductive groups over the ad\`eles} \begin{introduction} In this section I describe the interplay between Harder's reduction theory and Weil's geometry of numbers based on \cite{Harder:1969} in order to arrive at a geometric version of Harder's reduction theory.
\end{introduction} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{tamagawameasure} Let $G/Y$ be a reductive group $Y$-scheme, let $P/Y$ be a parabolic $K$-subgroup of $G/Y$, and let $\omega$ be a non-trivial volume form on $R_u(P)$ defined over $K$. This volume form yields a Haar measure of $R_u(P(\hat{\mathbb{A}}_K))$ which is independent of the choice of $\omega$, i.e., for each $0 \neq \lambda \in K$ the volume forms $\omega$ and $\lambda \omega$ yield identical Haar measures; cf.\ \cite[p.~37]{Harder:1969}, \cite[2.3.1]{Weil:1982}. This measure differs from the classical Tamagawa measure by the factor $q^{(1-g)d}$ where $d = \mathrm{dim}(R_u(P))$, cf.\ \cite[p.~38]{Harder:1969}, \cite{Tamagawa:1966}, \cite[\S~14.4]{Voskresenski:1998}, \cite[2.3]{Weil:1982}, \cite[p.~113]{Weil:1995}. \begin{theorem}[{\cite[1.3.1]{Harder:1969}}] \label{GeometryOfNumbers} Let $G/Y$ be a reductive group $Y$-scheme, let $P/Y$ be a maximal parabolic $K$-subgroup of $G/Y$, let $\omega$ be a non-trivial volume form of $R_u(P)$ defined over $K$, and let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$, cf.~\ref{adeles}. Then $$\int_{R_u(P(\hat{\mathbb{A}}_K)) \cap \mathfrak{K}} \omega_{\hat{\mathbb{A}}_K} = q^{p(P)}.$$ \end{theorem} \begin{proof} For each $\alpha \in \Delta_P^+$ let $0 \subset P_\alpha^{(1)} \subset \cdots \subset P_\alpha^{(\mathrm{dim}(P_\alpha))} = P_\alpha$ be a filtration such that each $P^{(i)}_\alpha/P^{(i-1)}_\alpha$ is a vector bundle of dimension $1$; cf.\ \cite[2.1]{Grothendieck:1957}, \cite[p.~122]{Harder:1968}. 
One then computes \begin{eqnarray*} \int_{R_u(P(\hat{\mathbb{A}}_K)) \cap \mathfrak{K}} \omega_{\hat{\mathbb{A}}_K} & \stackrel{\ref{filtration}}{=} & \prod_{\alpha \in \Delta_P^+} \left(\int_{P_\alpha(\hat{\mathbb{A}}_K) \cap {\hat{\mathcal{O}}_K}^{\mathrm{dim}(P_\alpha)}} \omega_{\hat{\mathbb{A}}_K} \right) \\ & \stackrel{\ref{RiemannRoch3}}{=} & \prod_{\alpha \in \Delta_P^+} \prod_{i=1}^{\mathrm{dim}(P_\alpha)}\left(\int_{(P^{(i)}_\alpha/P^{(i-1)}_\alpha)(\hat{\mathbb{A}}_K) \cap {\hat{\mathcal{O}}_K}} \omega_{\hat{\mathbb{A}}_K} \right) \\ & \stackrel{\ref{mu2}}{=} & \prod_{\alpha \in \Delta_P^+} q^{c(\mathcal{L}_{P_\alpha})} \\ & \stackrel{\ref{ccc}}{=} & q^{p(P)}. \end{eqnarray*} \end{proof} \begin{theorem}[{\cite[1.3.2]{Harder:1969}}] \label{transformationformula} Let $G/Y$ be a reductive group $Y$-scheme, let $P/Y$ be a maximal parabolic $K$-subgroup, and let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$. Then, for each $x \in P(\hat{\mathbb{A}}_K)$, one has $$\int_{R_u(P(\hat{\mathbb{A}}_K)) \cap \mathfrak{K}} \omega_{\hat{\mathbb{A}}_K} = |\chi_P(x)| \int_{R_u(P(\hat{\mathbb{A}}_K)) \cap {}^x\mathfrak{K}} \omega_{\hat{\mathbb{A}}_K}.$$ \end{theorem} \begin{proof} The absolute value of the determinant of the derivative of conjugation by $x$ \begin{eqnarray*} |\chi_{P}(\cdot)| : P(\hat{\mathbb{A}}_K) & \stackrel{\mathrm{Ad}}{\rightarrow} & \mathrm{GL}\left(\mathrm{Lie}(R_u(P(\hat{\mathbb{A}}_K)))\right) \stackrel{\mathrm{det}}{\rightarrow} \mathrm{GL}\left(\bigwedge^{d} \mathrm{Lie}(R_u(P(\hat{\mathbb{A}}_K)))\right) \stackrel{\mathrm{|\cdot|}}{\rightarrow} \mathbb{R} \\ x & \mapsto & |\chi_P(x)| \end{eqnarray*} (cf.\ \ref{sumofcharacters}) measures the ratio of the volumes of $R_u(P(\hat{\mathbb{A}}_K)) \cap \mathfrak{K}$ and of $R_u(P(\hat{\mathbb{A}}_K)) \cap {}^x\mathfrak{K}$. 
\end{proof} \begin{definition} \label{piandnu2} Using the notation of \ref{piandnu}, let \begin{eqnarray*} \pi_i(B,{}^x\mathfrak{K}) := \pi(P_i,{}^x\mathfrak{K})& := & \int_{R_u(P_i(\hat{\mathbb{A}}_K)) \cap {}^x\mathfrak{K}} \omega_{\hat{\mathbb{A}}_K} \quad \mbox{ and} \\ \nu_i(B,{}^x\mathfrak{K}) & := & \prod_{j=1}^r \pi_j(B,{}^x\mathfrak{K})^{c_{i,j}}. \end{eqnarray*} \end{definition} \begin{corollary}[{\cite[p.~40]{Harder:1969}}]\label{transformationformulafornu}For each $x \in B(\hat{\mathbb{A}}_K)$, one has $\nu_i(B,\mathfrak{K}) = |\alpha_i(x)| \nu_i(B,{}^x\mathfrak{K})$. \end{corollary} \begin{observation} \label{kinvariance} For each $x \in G(K)$, one has $\pi_i(B,\mathfrak{K}) = \pi_i({}^xB,{}^x\mathfrak{K})$ and $\nu_i(B,\mathfrak{K}) = \nu_i({}^xB,{}^x\mathfrak{K})$. \end{observation} \begin{proof} Conjugation by $x \in G(K)$ maps the $K$-volume form $\omega$ on $R_u(P)$ onto a $K$-volume form ${}^x\omega$ on $R_u({}^xP)$, whence $$\pi_i(B,\mathfrak{K}) = \int_{R_u(P_i(\hat{\mathbb{A}}_K)) \cap \mathfrak{K}} \omega_{\hat{\mathbb{A}}_K} = \int_{R_u({}^xP_i(\hat{\mathbb{A}}_K)) \cap {}^x\mathfrak{K}} {}^x\omega_{\hat{\mathbb{A}}_K} \stackrel{\ref{tamagawameasure}}{=} \pi_i({}^xB,{}^x\mathfrak{K}).$$ By \ref{piandnu2} the second identity follows from the first. \end{proof} \begin{theorem}[{\cite[2.1.1]{Harder:1969}}] \label{reduction3} Let $G/Y$ be a rationally trivial group $Y$-scheme and let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$. Then there exists a constant $C_1 > 0$ such that for each $x \in G(\hat{\mathbb{A}}_K)$ there exists a Borel subgroup $B/Y$ of $G/Y$ such that $$\nu_i(B,{}^x\mathfrak{K}) \geq C_1.$$ \end{theorem} \begin{proof} By \cite[1.1.2]{Harder:1969}, for each $x \in G(\hat{\mathbb{A}}_K)$ there exists a rationally trivial group scheme $G^{(x)}/Y$ such that ${}^x\mathfrak{K} = G^{(x)}(\hat{\mathcal{O}}_K)$. Let $B^{(x)}/Y$ be a Borel subgroup of $G^{(x)}/Y$. 
Then, by \ref{piandnu}, \ref{GeometryOfNumbers}, and \ref{piandnu2}, $$\nu_i(B^{(x)},G^{(x)}(\hat{\mathcal{O}}_K)) = q^{n_i(B^{(x)})}.$$ Therefore, if $c_1 < -2g$, then by \ref{reduction1} the conclusion of the theorem holds for $C_1 := q^{c_1}$. \end{proof} \begin{corollary}[{\cite[2.1.2]{Harder:1969}}] \label{reduction4} Let $G/Y$ be a rationally trivial group $Y$-scheme, let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$, and let $C_1 = q^{c_1}$ be a constant for which the conclusion of \ref{reduction3} holds. Then there exist constants $C_2 > \Gamma > C_1$ such that the following hold: let $x \in G(\hat{\mathbb{A}}_K)$, let $B/Y$ be a Borel subgroup of $G/Y$ such that $\nu_{i}(B,{}^x\mathfrak{K}) \geq C_1$ for all $i \in I$, and let $\alpha_{i_0}$ be a simple root of $B$ such that $\nu_{i_0}(B,{}^x\mathfrak{K}) \geq C_2$. Then each Borel subgroup $B'/Y$ of $G/Y$ with $\nu_{i}(B',{}^x\mathfrak{K}) \geq C_1$ for all $i \in I$ satisfies $\nu_{i_0}(B',{}^x\mathfrak{K}) \geq \Gamma$ and is contained in $P_{i_0}(B)$. \end{corollary} \begin{proof} By \ref{reduction2} the conclusion holds for $C_2 := q^{c_2}$ and $\Gamma := q^\gamma$. \end{proof} \begin{corollary}[{\cite[2.3.2]{Harder:1969}}] \label{reduction5} Let $G/Y$ be a reductive group $Y$-scheme and let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$. Then there exists a constant $C_1 > 0$ such that for each $x \in G(\hat{\mathbb{A}}_K)$ there exists a minimal $K$-parabolic subgroup $B/Y$ of $G/Y$ with $$\nu_i(B,{}^x\mathfrak{K}) \geq C_1 \quad \quad \mbox{ for all $i \in I$.}$$ \end{corollary} \begin{definition} \label{reduceddef} Once and for all fix $C_1 \in (0,1)$ such that the conclusion of \ref{reduction5} holds. A pair consisting of a minimal parabolic subgroup $B$ of $G/Y$ and an element $x \in G(\hat{\mathbb{A}}_K)$ is called {\bf reduced}, if $\nu_i(B,{}^x\mathfrak{K}) \geq C_1$ for all $i$. 
\end{definition} \begin{corollary}[{\cite[2.3.3]{Harder:1969}}] \label{reduction6} Let $G/Y$ be a reductive group $Y$-scheme and let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$. Then there exist constants $C_2 > \Gamma > C_1$ such that the following hold: let $x \in G(\hat{\mathbb{A}}_K)$, let $B/Y$ be a minimal parabolic $K$-subgroup of $G/Y$ such that $(B,x)$ is reduced, and let $\alpha_{i_0}$ be a simple root of $B$ such that $\nu_{i_0}(B,{}^x\mathfrak{K}) \geq C_2$. Then each minimal parabolic $K$-subgroup $B'/Y$ of $G/Y$ with $(B',x)$ reduced satisfies $\nu_{i_0}(B',{}^x\mathfrak{K}) \geq \Gamma$ and $B' \subset P_{i_0}(B)$. \end{corollary} Note that \ref{reduction5} and \ref{reduction6} follow from \ref{reduction3} and \ref{reduction4} by a standard field extension argument using \cite[2.3.5]{Harder:1969}. \medskip \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{finitecosetspace} Let $G/Y$ be a reductive group $Y$-scheme, let $B/Y$ be a minimal parabolic $K$-subgroup of $G$, and let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$. Since $G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$ is compact and $\mathfrak{K}$ is open (\cite[p.~36]{Harder:1969}), the double coset space $\mathfrak{K} \backslash G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$ is finite. As $B(\hat{\mathbb{A}}_K)$ is self-normalizing in $G(\hat{\mathbb{A}}_K)$, one can consider $G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$ as the space of conjugates of $B(\hat{\mathbb{A}}_K)$ in $G(\hat{\mathbb{A}}_K)$, and $\mathfrak{K} \backslash G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$ as the space of $\mathfrak{K}$-orbits via conjugation on these. 
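The finiteness of the double coset space $\mathfrak{K} \backslash G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$ can be made concrete in a toy finite setting. The following sketch (my own illustration, not taken from the sources cited; the groups chosen are arbitrary) computes a double coset decomposition $K\backslash G/B$ for $G = S_3$ and two order-two subgroups and checks that the double cosets partition $G$:

```python
from itertools import permutations

# Toy finite analogue (illustrative only) of a double coset space
# K \ G / B: take G = S_3, realised as tuples p with p[i] = image of i,
# and two order-two subgroups K and B.

def compose(p, q):
    """Composition (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))
K = [(0, 1, 2), (1, 0, 2)]        # identity and the transposition (0 1)
B = [(0, 1, 2), (0, 2, 1)]        # identity and the transposition (1 2)

# each element g lies in the double coset K g B = {k g b : k in K, b in B}
double_cosets = {frozenset(compose(compose(k, g), b) for k in K for b in B)
                 for g in G}

# the double cosets partition G, and here there are exactly two of them
assert sum(len(c) for c in double_cosets) == len(G)
assert len(double_cosets) == 2
```

As in the adelic setting, distinct double cosets are disjoint, so their sizes sum to $|G|$; here one double coset has four elements and the other two.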
\begin{theorem}[{\cite[p.~40]{Harder:1969}}] \label{fundamentaldomain} Let $G/Y$ be a reductive group $Y$-scheme, let $B/Y$ be a minimal parabolic $K$-subgroup of $G$, let $\mathfrak{K} := G(\hat{\mathcal{O}}_K)$, let $B^{(1)}$, ..., $B^{(t)}$ be a system of representatives of the $\mathfrak{K}$-orbit space $\mathfrak{K} \backslash G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$ (cf.\ \ref{finitecosetspace}), let $\xi_s \in G(K)$ with $B^{(s)} = \xi_s^{-1} B^{(1)}\xi_s$, and for $c \in \mathbb{R}$ define $B^{(s)}(c) = \{ x \in B^{(s)}(\hat{\mathbb{A}}_K) \mid |\alpha_i(x)| \leq c \mbox{ for all $i \in I$} \}$. Then there exists $r \in \mathbb{R}$ such that $$\bigcup_{s = 1}^t B^{(1)}(r)\xi_s\mathfrak{K}$$ is a fundamental domain for $G(K)\backslash G(\hat{\mathbb{A}}_K)$. \end{theorem} \begin{proof} For $x \in G(\hat{\mathbb{A}}_K)$, by \ref{reduction5}, there exists a minimal parabolic $K$-subgroup $B$ of $G$ such that $\nu_i(B,{}^x\mathfrak{K}) \geq C_1$ for all $i \in I$. Let $B^{(s)}$ be the representative of the $\mathfrak{K}$-orbit of $x^{-1}Bx$ in $G(\hat{\mathbb{A}}_K)/B(\hat{\mathbb{A}}_K)$, let $u \in \mathfrak{K}$ such that $u^{-1}x^{-1}Bxu = B^{(s)}$, let $a \in G(K)$ with $aBa^{-1} = B^{(s)}$, and define $y:=axu \in N_{G(\hat{\mathbb{A}}_K)}(B^{(s)}(\mathbb{A}_K)) = B^{(s)}(\mathbb{A}_K)$. Since $a \in G(K)$ one has $$\nu_i(B,{}^x\mathfrak{K}) \stackrel{\ref{kinvariance}}{=} \nu_i({}^aB,{}^{ax}\mathfrak{K}) = \nu_i(B^{(s)},{}^{y}\mathfrak{K}).$$ By \ref{transformationformulafornu}, for each $i \in I$ one has $|\alpha_i(y)| = \nu_i(B^{(s)},\mathfrak{K}) \left(\nu_i(B^{(s)},{}^{y}\mathfrak{K})\right)^{-1} \leq \nu_i(B^{(s)},\mathfrak{K}) C_1^{-1}$. Hence, for $r := \max\{ \nu_i(B^{(s)},\mathfrak{K}) C_1^{-1} \mid 1 \leq s \leq t, i \in I \}$, to each $x \in G(\hat{\mathbb{A}}_K)$ there exists $a \in G(K)$ and $u \in \mathfrak{K}$ such that $ax = yu^{-1} \in B^{(s)}(r)\mathfrak{K} = \xi_s^{-1} B^{(1)}(r)\xi_s\mathfrak{K}$. 
The claim follows because $a, \xi_s \in G(K)$. \end{proof} \section{Filtrations of Euclidean buildings} \begin{introduction} In this section I translate the geometric version of Harder's reduction theory into the setting of Euclidean buildings based on \cite{Harder:1977}. From this section on the survey is intended for the reader familiar with the concept of Euclidean buildings, as simplicial complexes and as CAT(0) spaces. For both introductory and further reading the sources \cite{Abramenko/Brown:2008}, \cite{Bridson/Haefliger:1999}, \cite{Brown:1989}, \cite{Weiss:2003}, \cite{Weiss:2009} are highly recommended. \end{introduction} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{building} Let $Y/\mathbb{F}_q$ be a non-singular projective curve, let $S \subset Y^\circ$ be finite, and let $G/Y$ be a reductive group $Y$-scheme. Clearly, each of \ref{reduction3}, \ref{reduction4}, \ref{reduction5}, \ref{reduction6} holds for $x \in G(\hat{\mathbb{A}}_S) \subset G(\hat{\mathbb{A}}_K)$ (cf.\ \ref{sadeles}). Since $$G(K \cap \hat{\mathbb{A}}_S)\backslash G(\hat{\mathbb{A}}_S)/G(\hat{\mathcal{O}}_K) \stackrel{\ref{sadeles}}{\cong} G(\mathcal{O}_S) \backslash G(\prod_{P \in S} K_P)/G(\prod_{P \in S} \hat{\mathcal{O}}_{P,K}) = G(\mathcal{O}_S) \backslash \prod_{P \in S} G(K_P)/G(\hat{\mathcal{O}}_{P,K}),$$ the functions $\pi_i(B,{}^x\mathfrak{K})$ and $\nu_i(B,{}^x\mathfrak{K})$ (cf.\ \ref{piandnu2}) allow one to define $G(\mathcal{O}_S)$-invariant (cf.\ \ref{kinvariance}) filtrations on the Euclidean building $X$ of $\prod_{P \in S} G(K_P)$. The group $G(\mathcal{O}_S)$ is called an {\bf $S$-arithmetic group}. The set of special vertices of $X$ is $X_v := \prod_{P \in S} G(K_P)/G(\hat{\mathcal{O}}_{P,K})$. The diagonal embedding of $K$ in $\prod_{P \in S} K_P$ (cf.\ \ref{sadeles}) yields a diagonal embedding of the spherical building of $G(K)$ into the spherical building at infinity of $X$ with respect to the complete system of apartments. 
Note that this embedding is in general not simplicial. \begin{definition} \label{horoball} Let $X$ be a CAT(0) space, e.g., a Euclidean building, and let $\gamma : [0,\infty) \to X$ be a unit speed geodesic ray, i.e., $d(\gamma(t),\gamma(0)) = t$ for all $t \geq 0$. The function $$b_\gamma : X \to \mathbb{R}: x \mapsto \lim_{t \to \infty} (t-d(x,\gamma(t)))$$ is the {\bf Busemann function} with respect to $\gamma$. Note that $t = d(\gamma(0),\gamma(t)) \leq d(\gamma(0),x) + d(x,\gamma(t))$, so that $t - d(x,\gamma(t))$ is bounded above by $d(\gamma(0),x)$; since $t - d(x,\gamma(t))$ is moreover non-decreasing in $t$, the limit always exists. A linear reparametrization of a Busemann function is called a {\bf generalized Busemann function}. For a (generalized) Busemann function $b_\gamma$, a sub-level set of $-b_\gamma$ is a {\bf horoball centred at $\gamma(\infty)$}. The boundary of a horoball is a {\bf horosphere} centred at $\gamma(\infty)$. \end{definition} Some sources define the Busemann function with respect to a geodesic ray as $b_\gamma : X \to \mathbb{R}: x \mapsto \lim_{t \to \infty} (d(x,\gamma(t))-t)$. This does not affect the concept of a horoball, i.e., in that case a horoball is defined as a sub-level set of $b_\gamma$. \begin{theorem} \label{logarithm} \label{transformation} \label{formula1} Let $P$ be a maximal $K$-parabolic and let $\mathfrak{K} := G(\mathcal{O}_K)$. Then for each $g \in \prod_{P \in S} P(K_P)$ one has $$\log_q(\pi(P,{}^g\mathfrak{K})) = \log_q(\pi(P,\mathfrak{K})) + \sum_{P \in S} \mathrm{deg}(P)\nu_P(\chi_P(g)).$$ In particular, there exists a generalized Busemann function $p(P,\cdot) : X \to \mathbb{R}$ whose restriction to the set $X_v$ of special vertices of $X$ equals $\log_q(\pi(P,\cdot))$.
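The definition can be checked concretely in the model CAT(0) space $\mathbb{R}^2$ (an apartment): for a unit speed ray $\gamma(t) = p + tv$ with $|v| = 1$, the Busemann function evaluates in closed form to $b_\gamma(x) = \langle x - p, v\rangle$, and the partial expressions $t - d(x,\gamma(t))$ increase monotonically to this value. A small numerical sketch (my own toy computation; the points chosen are arbitrary):

```python
import numpy as np

# Illustrative check of the Busemann function in R^2: for the ray
# gamma(t) = p + t*v with |v| = 1 one has
#     b_gamma(x) = lim_{t->inf} (t - d(x, gamma(t))) = <x - p, v>.

def busemann_partial(x, p, v, t):
    """The partial expression t - d(x, gamma(t))."""
    return t - np.linalg.norm(x - (p + t * v))

p = np.array([0.0, 0.0])
v = np.array([1.0, 0.0])                  # unit direction of the ray
x = np.array([3.0, 4.0])

vals = [busemann_partial(x, p, v, t) for t in (10.0, 100.0, 10000.0)]
limit = float(np.dot(x - p, v))           # closed form: <x - p, v> = 3

# the partial values are non-decreasing and converge to the limit
assert vals[0] <= vals[1] <= vals[2] <= limit + 1e-9
assert abs(vals[-1] - limit) < 1e-3
```

The monotone convergence seen here is exactly the boundedness-plus-monotonicity argument in the definition above.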
\end{theorem} \begin{proof} One has \begin{eqnarray*} \log_q(\pi(P,{}^g\mathfrak{K})) & \stackrel{\ref{transformationformula}}{=} & \log_q(\pi(P,\mathfrak{K})) + \log_q(|\chi_P(g)|^{-1}) \\ & \stackrel{\ref{sadeles}}{=} & \log_q(\pi(P,\mathfrak{K})) + \log_q(|\chi_P(g)|_S^{-1}) \\& \stackrel{\ref{defnorm}}{=} & \log_q(\pi(P,\mathfrak{K})) + \sum_{P \in S} \mathrm{deg}(P)\nu_P(\chi_P(g)). \end{eqnarray*} The maximal $K$-parabolic $P$ corresponds to a vertex in the building of $G(K)$ and hence yields an element $\xi$ in the spherical building at infinity of $X$ (cf.\ \ref{building}). As $\chi_P$ is a multiple of the fundamental weight corresponding to $P$, any geodesic ray $\gamma$ in $X$ with $\gamma(\infty) = \xi$ provides a generalized Busemann function $b_\gamma = p(P,\cdot) : X \to \mathbb{R}$ as claimed. \end{proof} \begin{definition} \label{pandn} Let $B/Y$ be a minimal parabolic $K$-subgroup of $G/Y$, let $\{ \alpha_1, ..., \alpha_r \}$ be the simple roots of $B$, let $(P_i)_i$ be the maximal parabolic $K$-subgroups of $G$, of type $\alpha_i$, and, for each $i$, let $p(P_i,\cdot) : X \to \mathbb{R}$ be the generalized Busemann function from \ref{formula1}. For $x \in X$ define \begin{eqnarray*} p_i(B,x) & := & p(P_i,x), \\ \overline{p}_i(B,x) & := & c_ip_i(B,x), \\ n_i(B,x) & := & \sum_{j=1}^r c_{i,j} p_j(B,x), \end{eqnarray*} where $(c_i)_{1 \leq i \leq r}$ is a family of positive real numbers such that each $c_i{p}_i(B,\cdot)$ is a Busemann function and $c_{i,j} \in \mathbb{Q}$ are such that $\alpha_i = \sum_{j=1}^r c_{i,j} \chi_j$ (cf.\ \ref{piandnu}). \end{definition} \begin{corollary} \label{logarithm2}\label{transformation2} \label{formula2} Let $B/Y$ be a minimal parabolic $K$-subgroup of $G/Y$. 
For each $1 \leq i \leq r$ and each $g \in \prod_{P \in S} B(K_P)$ one has $$n_i(B,{}^g\mathfrak{K}) = n_i(B,\mathfrak{K}) + \sum_{P \in S} \mathrm{deg}(P)\nu_P(\alpha_i(g)).$$ \end{corollary} \begin{proof} Combine \ref{transformationformulafornu} with \ref{pandn}. \end{proof} \noindent \refstepcounter{theorem}{\bf \thetheorem} \label{existenceofD} The chambers of $X$ are compact and pairwise isometric and each contains a special vertex. Hence there exists $d > 0$ such that for each element $x \in X$ there exists a special vertex $x' \in X$ such that for each minimal parabolic $K$-subgroup $B$ of $G$ \begin{eqnarray*} n_i(B,x) \geq c & \mbox{implies} & n_i(B,x') \geq c - d \quad \mbox{for all $i \in I$ and $c \in \mathbb{R}$ and,} \\ n_i(B,x') \geq c & \mbox{implies} & n_i(B,x) \geq c - d \quad \mbox{for all $i \in I$ and $c \in \mathbb{R}$.} \end{eqnarray*} \begin{theorem}[{\cite[1.4.2]{Harder:1977}}]\label{c1} There exists $c_1 \in \mathbb{R}$ such that for each element $x \in X$ there exists a minimal parabolic $K$-subgroup $B$ of $G$ with $n_i(B,x) \geq c_1$ for all $i \in I$. \end{theorem} \begin{proof} In case one only considers special vertices $x \in X$, such a constant $c_1:=\log_q(C_1)$ exists by \ref{reduction3}. If $d$ is a constant for which the conclusion of \ref{existenceofD} holds, then replacing $c_1$ by $c_1 - d$ therefore implies the assertion for arbitrary elements $x \in X$. \end{proof} \begin{theorem}[{\cite[1.4.4]{Harder:1977}}] \label{c2} There exist constants $c_2 > \gamma > c_1$ such that for $x \in X$, a minimal $K$-parabolic $B$ with $n_i(B,x) \geq c_1$ for all $i \in I$, the family $(P_i)_{i \in I}$ of maximal parabolic $K$-subgroups of $G$ containing $B$, and $j \in I$ with $n_j(B,x) \geq c_2$, each minimal parabolic $K$-subgroup $B'$ of $G$ with $n_i(B',x) \geq c_1$ for all $i \in I$ satisfies $n_j(B',x) \geq \gamma$, and is contained in $P_j$. 
\end{theorem} \begin{proof} In case one only considers special vertices $x \in X$, such constants $c_2:=\log_q(C_2)$ and $\gamma := \log_q(\Gamma)$ exist by \ref{reduction6}. Choose again a constant $d$ for which the conclusion of \ref{existenceofD} holds, define $c_1' := c_1 - d$ and $C'_1 := q^{c_1'}$, use this constant $C'_1$ in \ref{reduceddef} to define reduced pairs, let $C'_2$ and $\Gamma'$ be constants for which the conclusion of \ref{reduction6} holds for this definition of a reduced pair, and let $c'_2 := \log_q(C'_2)$. For $c_2 := c'_2 + d$ by \ref{existenceofD} there exists a special vertex $x' \in X$ such that \begin{eqnarray*} n_i(B,x) \geq c_1 \quad \mbox{implies} & n_i(B,x') \geq c_1' & \mbox{for all $i \in I$}, \\ n_i(B',x) \geq c_1 \quad \mbox{implies} & n_i(B',x') \geq c_1' & \mbox{for all $i \in I$, and} \\ n_j(B,x) \geq c_2 \quad \mbox{implies} & n_j(B,x') \geq c_2'.& \end{eqnarray*} We conclude from \ref{reduction6} that $B'$ is contained in $P_j$ and that $n_j(B',x') \geq \log_q(\Gamma')$, whence $n_j(B',x) \geq \log_q(\Gamma')-d =: \gamma$. \end{proof} \begin{notation} \label{choiceofconstants} Once and for all fix constants $c_1, c_2, \gamma \in \mathbb{R}$ such that $c_1$ is negative, $c_2 > \gamma > c_1$ and such that the conclusions of \ref{c1} and \ref{c2} hold. \end{notation} \begin{definition} \label{reducedpair}\label{close}\label{isolatedparabolic} A pair $(B,x)$ consisting of a minimal $K$-parabolic subgroup $B$ of $G$ and an element $x \in X$ such that $n_i(B,x) \geq c_1$ for all $i \in I$ is called {\bf reduced}. For a minimal parabolic $K$-subgroup $B$ of $G$ and a maximal $K$-parabolic $P_j \supseteq B$, following \cite[p.~254]{Harder:1974}, an element $x \in X$ is called {\bf close to the boundary of $X$ with respect to $P_j$}, if $(B,x)$ is a reduced pair and $n_j(B,x) \geq c_2$. 
An element $x \in X$ is called {\bf close to the boundary of $X$}, if there exists a maximal $K$-parabolic $P$ such that $x$ is close to the boundary of $X$ with respect to $P$. For $x \in X$ close to the boundary of $X$, define $$P_x := \bigcap\{ P \subset G \mid \mbox{$x$ is close to the boundary of $X$ with respect to $P$}\}.$$ By \ref{c2} the group $P_x$ is a $K$-parabolic subgroup of $G$ (cf.\ \ref{isolatedparabolic2}). Following \cite[p.~138]{Harder:1968}, it is called the {\bf isolated parabolic subgroup of $G$ corresponding to $x$}. For each $K$-parabolic $Q \supset P_x$, the element $x \in X$ is called {\bf close to the boundary of $X$ with respect to $Q$}. \end{definition} \begin{proposition}[{\cite[p.~35]{Behr:2004}}] \label{inclusionatinfinity} Let $x \in X$ be close to the boundary of $X$ and let $P_x$ be the corresponding isolated parabolic subgroup of $G$. Let $\gamma$ be a geodesic ray in $X$ with $\gamma(0) = x$ and whose end point lies in the simplex of the building at infinity corresponding to $P_x$. Then each $y \in \gamma([0,\infty))$ is close to the boundary of $X$. Moreover, one has $P_y = P_x$ and for each minimal $K$-parabolic $B$ the pair $(B,y)$ is reduced if and only if $(B,x)$ is reduced. \end{proposition} \begin{Proof}{(Bux, Gramlich, Witzel)} First notice that by (\ref{orthogonal}) and (\ref{acute}) for each reduced pair $(B,x)$ and all $i \in I$ one has $n_i(B,y) \geq n_i(B,x)$. In particular, each reduced pair $(B,x)$ gives rise to a reduced pair $(B,y)$ and, moreover, $y$ lies close to the boundary of $X$ with respect to $P_x$. This implies $P_y \subseteq P_x$. Conversely, let $(B,y)$ be a reduced pair. Then by \ref{c2} one has $B \subseteq P_y \subseteq P_x$. Let $(P_i)_{i \in I}$ be the family of maximal parabolic $K$-subgroups of $G$ containing $B$ and let $I' \subseteq I$ such that $P_x = \bigcap_{i \in I'} P_i$. If there exists $j \in I$ such that $n_j(B,y) \geq c_2$, but $P_j \not\supseteq P_x$, then $j \in I \backslash I'$. 
As for each $i \in I \backslash I'$ one has $n_i(B,x) = n_i(B,y)$, in particular $n_j(B,x) = n_j(B,y) \geq c_2$, and in view of \ref{isolatedparabolic} the pair $(B,x)$ cannot be reduced. As being a reduced pair is a closed condition (see \ref{reducedpair}), there therefore exists a minimal $a \in (0,\infty)$ such that $(B,\gamma(a))$ is reduced. The first paragraph of this proof implies that $P_{\gamma(a)} \subseteq P_x$. Thus \ref{c2} applied to the reduced pair $(B,\gamma(a))$ yields $n_i(B,\gamma(a)) \geq \gamma$, and hence $n_i(B,\gamma(a)) > c_1$ by \ref{choiceofconstants}, for all $i \in I'$. As the $n_i(B,\gamma(b))$ are continuous in $b$ and as for each $i \in I \backslash I'$ one has $n_i(B,x) = n_i(B,\gamma(b)) = n_i(B,y) \geq c_1$, this contradicts the minimality of $a$. Therefore $(B,x)$ has to be reduced. Consequently, each $j \in I$ with the property that $n_j(B,y) \geq c_2$ satisfies $P_j \supseteq P_x$, whence $j \in I'$. Thus we have $P_y = P_x$. \end{Proof} \begin{definition} \label{filtrationsc3} For $c \in \mathbb{R}$ define \begin{eqnarray*} X^n(c) & = & \{ x \in X \mid (B,x) \mbox{ reduced implies $n_i(B,x) \leq c$ for all $i \in I$} \}, \\ X^p(c) & = & \{ x \in X \mid (B,x) \mbox{ reduced implies $p_i(B,x) \leq c$ for all $i \in I$} \}, \quad \mbox{ and}\\ X^{\overline{p}}(c) & = & \{ x \in X \mid (B,x) \mbox{ reduced implies $\overline{p}_i(B,x) \leq c$ for all $i \in I$} \}. \end{eqnarray*} These filtrations are $G(\mathcal{O}_S)$-invariant (cf.\ \ref{kinvariance}) and $G(\mathcal{O}_S)$-cocompact (cf.\ \cite[2.2.2]{Harder:1969}). There exists $c_3 \in \mathbb{R}$ such that $X^n(c_2) \subseteq X^{\overline{p}}(c_3)$. 
\end{definition} \begin{proposition}[{\cite[Section 5]{Bux/Gramlich/Witzel:2011}}] \label{uniquedirectiontoinfinity} Let $c \geq c_3$, let $x \in X \backslash X^{\overline{p}}(c)$ and let $\overline{x} \in X^{\overline{p}}(c)$ be an element at which the function $X^{\overline{p}}(c) \to \mathbb{R} : z \mapsto d(x,z)$ assumes a global minimum. Then $P_{\overline{x}} = P_x$. Furthermore, there exists a unique unit speed geodesic ray $\gamma_x^c : [0,\infty) \to X$ with $\gamma_x^c(0) = x$ along which the function $X \backslash X^{\overline{p}}(c) \to \mathbb{R} : x \mapsto d(x,X^{\overline{p}}(c))$ assumes its steepest ascent; its end point lies in the simplex at infinity corresponding to $P_x$. \end{proposition} The preceding proposition shows that one can measure the distance from the set $X^{\overline{p}}(c_3)$ in a neat way. It also shows that there is a substantial difference between $K$-rank $1$ and higher $K$-rank. Indeed, by \ref{uniquedirectiontoinfinity}, in the case of $K$-rank $1$ the boundary of $X^{\overline{p}}(c_3)$ consists of hypersurfaces of codimension one which must have pairwise empty intersections, as otherwise one would need non-minimal isolated $K$-parabolics, which do not exist. The following result makes this heuristic argument more concrete. \begin{theorem}[{\cite[3.7]{Bux/Wortman}}] \label{horo} If $\mathrm{rk}_K(G) = 1$, then there exists a collection $\mathcal{H}$ of pairwise disjoint horoballs of $X$ such that $X^{\overline{p}}(c_3) = X \backslash \mathcal{H}$ is $G(\mathcal{O}_S)$-invariant and $G(\mathcal{O}_S)$-cocompact. \end{theorem} \begin{proof} Since the functions $\overline{p}$ are Busemann functions (cf.\ \ref{formula1}, \ref{pandn}), there clearly exists a collection $\mathcal{H}$ of horoballs of $X$ such that $X^{\overline{p}}(c_3) = X \backslash \mathcal{H}$. By \ref{filtrationsc3} the set $X^{\overline{p}}(c_3)$ is $G(\mathcal{O}_S)$-invariant and $G(\mathcal{O}_S)$-cocompact. 
It therefore remains to prove that the horoballs in $\mathcal{H}$ can be chosen to be either disjoint or equal. Let $H_1, H_2 \in \mathcal{H}$ have non-trivial intersection and let $B_1$, $B_2$ be the minimal $K$-parabolic subgroups corresponding to their respective centres (cf.\ \ref{horoball}). Then there exists $x \in X$ such that $\overline{p}_\alpha(B_1,x), \overline{p}_\alpha(B_2,x) \geq c_3$, where $\alpha$ denotes the unique simple root. Therefore, by \ref{filtrationsc3}, $n_\alpha(B_1,x), n_\alpha(B_2,x) \geq c_2$, and by \ref{c2} one has $B_1 = B_2$. Hence, $H_1 \subseteq H_2$ or $H_2 \subseteq H_1$. If $H_1 \neq H_2$, one can remove one of the two from $\mathcal{H}$ without changing $X \backslash \mathcal{H}$. \end{proof} \section{Applications and conjectures} \begin{introduction} In this section I sketch the applicability of the geometric version of Harder's reduction theory to the study of finiteness properties of $S$-arithmetic groups over global function fields and state a very general conjecture on isoperimetric properties of $S$-arithmetic groups over arbitrary global fields. For reduction theory over number fields I strongly recommend \cite{Platonov/Rapinchuk:1994}. \end{introduction} \subsection{Finiteness properties of $S$-arithmetic groups} \begin{definition} A group $\Gamma$ is of {\bf type $F_m$}, if it admits a free action on a contractible CW complex $X$ with finitely many orbits on the $m$-skeleton of $X$. \end{definition} A group action on a CW complex is called {\bf cellular}, if the action preserves the cell structure, and {\bf rigid}, if each group element that elementwise fixes the skeleton of a cell in fact elementwise fixes the whole cell. \begin{theorem}[{\cite[1.1]{Brown:1987}}] \label{brown} Let $m \in \mathbb{N}$, let $X$ be an $(m-1)$-connected CW complex, and let $\Gamma \to \mathrm{Aut}(X)$ act cellularly, rigidly and cocompactly on $X$ such that the stabilizer of each $i$-cell is of type $F_{m-i}$. 
Then $\Gamma$ is of type $F_m$. \end{theorem} \begin{theorem}[{\cite[7.7]{Bux/Wortman}}] \label{horoconnected} Let $X = X_1 \times \cdots \times X_t$ be an affine building, decomposed into its irreducible factors, let $\partial X$ be the spherical building at infinity of $X$, let $\partial^j X$ be the spherical building at infinity of $\prod_{i=1, i \neq j}^t X_i$, and let $\xi \in \partial X \backslash \bigcup_{1 \leq j \leq t} \partial^j X$. Then any horosphere centred at $\xi$ (\ref{horoball}) is $(\mathrm{dim}(X)-2)$-connected. \end{theorem} \begin{theorem}[{\cite[8.1]{Bux/Wortman}}] Let $K$ be a global function field, let $G$ be an absolutely almost simple $K$-group of $K$-rank $1$, let $\emptyset \neq S \subset Y^\circ$ be finite, let $X$ be the Euclidean building of $\prod_{P \in S} G(K_P)$, and let $m = \mathrm{dim}(X) = \sum_{P \in S} \mathrm{rk}_{K_P}(G)$. Then $G(\mathcal{O}_S)$ is of type $F_{m-1}$, but not $F_m$. \end{theorem} \begin{proof} The group $G(\mathcal{O}_S)$ clearly acts rigidly and cellularly on the (contractible) building $X$. By \ref{horo}, the group $G(\mathcal{O}_S)$ acts cocompactly on $X^{\overline{p}}(c_3) = X \backslash \mathcal{H}$ and, in view of \ref{building}, each of the horoballs $H \in \mathcal{H}$ is centred at some $\xi$ which satisfies the hypothesis of \ref{horoconnected}. Therefore, by \ref{horoconnected}, $X^{\overline{p}}(c_3)$ is $(n-2)$-connected. As cell stabilizers in $G(\mathcal{O}_S)$ are finite, whence of type $F_\infty$, the claim follows from \ref{brown}. \end{proof} For $K$-rank greater than $1$ this strategy cannot work, as \ref{horo} becomes false. It can be adapted, however, which leads to a couple of technical difficulties that have first been overcome in \cite{Bux/Gramlich/Witzel}, \cite{Witzel} for the $S$-arithmetic groups $G(\mathbb{F}_q[t])$ and $G(\mathbb{F}_q[t,t^{-1}])$ where $G$ is an absolutely almost simple $\mathbb{F}_q$-group of rank $n \geq 1$. 
In this situation one can in fact make use of the theory of Euclidean twin buildings, as in \cite{Abramenko:1996}. \begin{theorem}[{\cite[A]{Bux/Gramlich/Witzel}, \cite[Main Theorem]{Witzel}}] Let $G$ be an absolutely almost simple $\mathbb{F}_q$-group of rank $n \geq 1$. Then $G(\mathbb{F}_q[t])$ is of type $F_{n-1}$, but not $F_n$, and $G(\mathbb{F}_q[t,t^{-1}])$ is of type $F_{2n-1}$, but not $F_{2n}$. \end{theorem} Recently, a combination of the ideas developed in \cite{Bux/Wortman}, \cite{Bux/Gramlich/Witzel}, \cite{Witzel} with Harder's reduction theory allowed the authors of \cite{Bux/Gramlich/Witzel:2011} to prove the following theorem, providing a positive answer to the question asked in \cite[13.20]{Abramenko/Brown:2008}, \cite[p.~80]{Behr:1998}, \cite[p.~197]{Brown:1989}. \begin{theorem}[{\cite[Rank Theorem]{Bux/Gramlich/Witzel:2011}}] \label{rank} Let $K$ be a global function field, let $G$ be an absolutely almost simple $K$-isotropic $K$-group, let $\emptyset \neq S \subset Y^\circ$ be finite, let $X$ be the Euclidean building of $\prod_{P \in S} G(K_P)$, and let $m = \mathrm{dim}(X) = \sum_{P \in S} \mathrm{rk}_{K_P}(G)$. Then $G(\mathcal{O}_S)$ is of type $F_{m-1}$, but not $F_m$. \end{theorem} \begin{remark} Using the notation introduced above, already \cite[1.1]{Bux/Wortman:2007} established that $G(\mathcal{O}_S)$ is not of type $F_{m}$. \end{remark} \begin{remark} $S$-arithmetic groups over number fields are known to be of type $F_\infty$, cf.\ \cite[\S 11]{Borel/Serre:1976}. \end{remark} \subsection{Isoperimetric properties of $S$-arithmetic groups} In the number field case, results similar to \ref{reduction5} and \ref{reduction6} hold, cf.\ \cite{Borel/Harish-Chandra:1962}, \cite[p.~53]{Harder:1969}. 
To the best of my knowledge, the question posed by Harder (\cite[p.~54]{Harder:1969}) whether it is possible to prove the results from \cite{Borel/Harish-Chandra:1962} using methods similar (or at least closer) to the approach used by Harder is still open. The key, of course, would be to prove \ref{reduction2} for number fields. As this problem by itself probably will not attract sufficient attention, I will finish this survey by stating a very general conjecture on properties of $S$-arithmetic groups over arbitrary global fields, i.e., global function fields or number fields, which, if verified, provides another proof of \ref{rank}. \begin{definition} A {\bf coarse $n$-manifold} $\Sigma$ in a metric space $X$ is a function from the vertices of a triangulated $n$-manifold $M$ into $X$. The homeomorphism type of the manifold $M$ is called the {\bf topological type} of $\Sigma$. The {\bf boundary} $\partial \Sigma$ is the restriction to $\partial M$ of the function $\Sigma$. The coarse manifold $\Sigma$ has {\bf scale} $r \in \mathbb{R}_+$, if $d(\Sigma(x),\Sigma(y)) \leq r$ for all adjacent vertices $x$, $y$ of $M$. The {\bf volume} $\mathrm{vol}(\Sigma)$ equals the number of vertices in $M$. \end{definition} \begin{conjecture}[{\cite{Bux/Wortman:2007}}] Let $K$ be a global field, i.e., a global function field or a number field, let $G$ be an absolutely almost simple $K$-isotropic $K$-group, let $S$ be a non-empty finite set of places of $K$ containing all archimedean ones, and let $n < \sum_{P \in S} \mathrm{rk}_{K_P}(G)$. For any $r_1 > 0$ there exist a linear polynomial $f$ and $r_2 > 0$ such that, if $\Sigma \subseteq \prod_{P \in S} G(K_P)$ is a coarse $n$-manifold of scale $r_1$ with $\partial\Sigma \subseteq G(\mathcal{O}_S)$, then there exists a coarse $n$-manifold $\Sigma' \subseteq G(\mathcal{O}_S)$ of scale $r_2$ and of the same topological type as $\Sigma$ such that $\partial\Sigma' = \partial\Sigma$ and $\mathrm{vol}(\Sigma') \leq f(\mathrm{vol}(\Sigma))$.
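The combinatorial definition of a coarse $n$-manifold translates directly into data: a triangulated manifold given by its vertices and adjacencies, a vertex map into the metric space, a scale check, and a volume count. A minimal sketch (my own; the triangulation, the points, and $X = \mathbb{R}^2$ are illustrative choices, not from the text):

```python
import math

# Minimal sketch of the data behind a coarse n-manifold: a map sigma
# from the vertices of a triangulated manifold M into a metric space X.

# triangulated 1-manifold M: a cycle with vertices 0..3 and these edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

# the coarse manifold Sigma itself: vertex of M -> point of X = R^2
sigma = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}

def volume(sigma):
    """vol(Sigma) = number of vertices of M."""
    return len(sigma)

def has_scale(sigma, edges, r):
    """Sigma has scale r if adjacent vertices land r-close together."""
    return all(math.dist(sigma[x], sigma[y]) <= r for x, y in edges)

assert volume(sigma) == 4
assert has_scale(sigma, edges, 1.0)       # every edge has length 1
assert not has_scale(sigma, edges, 0.5)
```

Note that the scale condition only constrains adjacent vertices, so the same map can have large scale while its image is metrically close together, which is the point of the "coarse" terminology.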
\end{conjecture} In case $n < |S|$, the existence of a polynomial $f$ of {\em unspecified degree} and $r_2 > 0$ as in the conjecture has been established by Bestvina, Eskin and Wortman. The precise statement of their result is as follows. \begin{theorem}[{\cite[2]{Wortman}}] Let $K$ be a global field, i.e., a global function field or a number field, let $G$ be an absolutely almost simple $K$-isotropic $K$-group, let $S$ be a non-empty finite set of places of $K$ containing all archimedean ones, and let $n < |S|$. For any $r_1 > 0$ there exist a polynomial $f$ and $r_2 > 0$ such that, if $\Sigma \subseteq \prod_{P \in S} G(K_P)$ is a coarse $n$-manifold of scale $r_1$ with $\partial\Sigma \subseteq G(\mathcal{O}_S)$, then there exists a coarse $n$-manifold $\Sigma' \subseteq G(\mathcal{O}_S)$ of scale $r_2$ and of the same topological type as $\Sigma$ such that $\partial\Sigma' = \partial\Sigma$ and $\mathrm{vol}(\Sigma') \leq f(\mathrm{vol}(\Sigma))$. \end{theorem} Note that this result provides an alternative proof that the $S$-arithmetic group $G(\mathcal{O}_S)$ in \ref{rank} is of type $F_{|S|-1}$. \begin{footnotesize} \bibliographystyle{ralf}
\section{\label{sec:intro}Introduction} During the past two decades, cosmological observations have achieved a remarkable degree of precision. Measurements of Type~Ia supernovae~\cite{Riess:2016jrr}, the cosmic microwave background (CMB)~\cite{Planck2018VI}, and large scale structure~\cite{SDSS14,Samushia:2013yga} indicate that around $96\%$ of the energy content of the universe is in the form of so-called dark energy and dark matter. These exotic species may be described by the standard cosmological model, $\Lambda$CDM{}, in which dark energy takes the form of a cosmological constant and dark matter is taken to be cold, in other words having an equation of state equal to zero. While $\Lambda$CDM{} fits the available data very well, it suffers from a number of issues that motivate the study of alternatives. These include the fine-tuning~\cite{Martin:2012bt} and coincidence~\cite{whynow} problems. In addition, there are certain tensions between early- and late-universe observations in $\Lambda$CDM{}. The present-day expansion rate of the universe, $H_0$, and the growth of structure, quantified by $\sigma_8$, can be calculated using the best-fit $\Lambda$CDM{} parameters to cosmological data, including the CMB. This gives rise to a smaller $H_0$ and a larger $\sigma_8$ than the results of local, late-universe measurements (for a recent discussion see Ref.~\cite{Verde:2019ivm}). At present the tension in $H_0$ appears to be the more problematic of the two, though either or both issues may be caused by systematic effects that have not been accounted for. Future data from surveys such as EUCLID should confirm or resolve these tensions~\cite{Amendola:2012ys}. In the meantime it is worth exploring alternative explanations involving new physics. In this work we are especially interested in possible resolutions to the $\sigma_8$ tension.
The value of $\sigma_8$ inferred from CMB data is $0.811\pm 0.006$~\cite{Planck2018VI}, while cluster counts from the SZ effect give $\sigma_8 = 0.77\pm 0.02$~\cite{Ade:2013lmv} and weak lensing gives values of $\sigma_8$ ranging from $0.65$ to $0.75$~\cite{Hildebrandt:2016iqg, 10.1093/mnras/stx1820, 10.1093/mnras/stw3161}. A popular class of modifications to $\Lambda$CDM{} is quintessence~\cite{1988PhRvD..37.3406R}, in which the cosmological constant $\Lambda$ is set to zero and a scalar field $\phi$ is introduced whose dynamical properties produce a negative equation of state giving rise to the observed late-time accelerated expansion of the universe. Normally it is assumed that the scalar field does not interact with dark matter. However there is no reason why this must be the case, and the consequences of relaxing this assumption have been widely studied. See Ref.~\cite{Wang:2016lxa} and references therein for a discussion of recent research on interacting dark energy. Traditionally, couplings between dark energy and dark matter are introduced at the level of the equations of motion, for example: \begin{equation} \label{eq:tradIDE} \nabla^\mu T_{\mu\nu}^{(\text{c})} = J_\nu \,, \quad \quad \nabla^\mu T_{\mu\nu}^{(\text{DE})} = -J_\nu \,, \end{equation} such that the overall energy--momentum tensor $T_{\mu\nu} = T_{\mu\nu}^{(\text{c})} + T_{\mu\nu}^{(\text{DE})}$, where $({\rm c})$ denotes cold dark matter and $({\rm DE})$ dark energy, is conserved as usual. $J_\nu$ is the flow of energy and momentum between dark energy and dark matter. A notable example was pioneered by Wetterich and Amendola~\cite{Wetterich:1994bg, Amendola:1999qq, Amendola:1999er}, in which $J_\nu = \beta T^{(\text{c})} \nabla_\nu \phi$, where $\beta$ is a constant, $\phi$ is the quintessence field and $T^{(\text{c})}$ is the trace of the dark matter energy--momentum tensor. 
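At the background (FLRW) level, a coupling of the form of \cref{eq:tradIDE} reduces to a pair of continuity equations exchanging energy through a rate $Q$, with the transfer term cancelling in the total. The following sketch is my own illustration, not taken from the text: the specific choice $Q \propto \mathcal{H}\bar{\rho}_\text{c}$ and all numerical values are assumptions made purely for demonstration.

```python
import math

# Illustrative background reduction of coupled continuity equations,
# written in e-folds N = ln(a):
#     drho_c/dN  = -3 rho_c        + Q/H,
#     drho_de/dN = -3 (1+w) rho_de - Q/H.
# The choice Q = q*H*rho_c and the values of w, q are assumptions.

w, q = -0.9, 0.05

def rhs(rc, rde, q):
    Q_over_H = q * rc
    return (-3.0 * rc + Q_over_H, -3.0 * (1.0 + w) * rde - Q_over_H)

rc, rde = 1.0, 0.7
d_c, d_de = rhs(rc, rde, q)
d_c0, d_de0 = rhs(rc, rde, 0.0)

# the transfer term cancels in the total, so overall energy--momentum
# conservation holds for any coupling strength
assert abs((d_c + d_de) - (d_c0 + d_de0)) < 1e-12

# forward-Euler integration over one e-fold: with Q > 0, energy flows
# into the dark matter, which therefore dilutes more slowly than a**-3
dN = 1e-4
for _ in range(10000):
    d_c, d_de = rhs(rc, rde, q)
    rc, rde = rc + dN * d_c, rde + dN * d_de
assert rc > math.exp(-3.0)
```

The cancellation of $Q$ in the summed equations is the background counterpart of the statement that $T_{\mu\nu} = T_{\mu\nu}^{(\text{c})} + T_{\mu\nu}^{(\text{DE})}$ is conserved as usual.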
Other couplings that have been proposed in the literature include promoting $\beta$ to be a function of $\phi$~\cite{Liu:2005wga, 2010PhRvD..82l3525L}, introducing a direct dependence on the expansion rate~\cite{Billyard:2000bh, Boehmer:2008av}, and couplings with non-linear dependence on the energy--momentum tensor or the scalar field gradient~\cite{Mimoso:2005bv, Chen:2008pz}. In Ref.~\cite{Pourtsidou:2013nha}, a construction was developed using the pull-back formalism for fluids to introduce dark energy--dark matter couplings at the level of the action. Defining the coupling at the level of the action is desirable for several reasons. It is often a more intuitive way to see the coupling, and it is easier to connect it to more fundamental physics. Perhaps more importantly, instabilities can often be more easily identified and avoided, saving time and computation when studying new models. The construction of Ref.~\cite{Pourtsidou:2013nha} leads to three distinct classes, or `Types' of models. Type~1 has been shown to include the commonly considered coupled quintessence model~\cite{Amendola:1999qq,Amendola:1999er} as a sub-class~\cite{Pourtsidou:2013nha,Skordis:2015yra}. Type~2 models have not been widely studied, but allow for both energy and momentum transfer between dark energy and dark matter. We focus on Type~3 models, which have a pure momentum coupling between dark energy and dark matter. Type~3 models are interesting for several reasons. Due to the absence of a coupling at the background level between dark energy and dark matter, they are much less tightly constrained than Types~1 and~2~\cite{Pourtsidou:2013nha}. They give rise to a varying speed of sound of dark energy, the consequences of which were studied in Ref.~\cite{Linton:2017ged}. 
Perhaps most significantly, Type~3 models have been shown to provide a basis for easing the tension between early- and late-universe probes of structure formation by reducing the predicted value of $\sigma_8$ inferred from early-universe data~\cite{Pourtsidou:2016ico}. In this paper we investigate the mechanism by which Type~3 models provide this reduction in structure growth and also study a more general form of the coupling function than previously considered. In \cref{sec:eqs} we present the cosmological equations of motion for the Type~3 models under consideration. In \cref{sec:overview} we describe in broad terms the way in which the structure growth suppression comes about, before explaining in detail the impact of a Type~3 coupling on the background cosmological evolution in \cref{sec:back} and how the linear perturbations are affected in \cref{sec:perts}. Finally, in \cref{sec:disc} we present our conclusions and discuss possible avenues for future work. \section{Equations of motion} \label{sec:eqs} In the formalism of Ref.~\cite{Pourtsidou:2013nha}, a Type~3 model is described by the Lagrangian: \begin{equation} \label{eq:LT3} L(n,Y,Z,\phi) = F(Y,Z,\phi) + f(n)\,, \end{equation} where $n$ is the fluid number density, $Y=(1/2)\nabla_\mu \phi \nabla^\mu\phi$ is the usual kinetic term, and \begin{equation} Z = u^\mu \nabla_\mu \phi \,, \end{equation} is a direct coupling between the gradient of the scalar field and the fluid velocity $u^\mu$. We consider a coupled quintessence model of the form: \begin{equation} F = Y + V(\phi) + \gamma(Z)\,, \end{equation} where $V(\phi)$ is the scalar field potential and $\gamma(Z)$ is the coupling function. In this work we limit our analysis to power-law couplings of the form $\gamma(Z) = \beta_{n-2}Z^n$, where the exponent $n$ is an integer with $n \geq 2$ (not to be confused with the fluid number density appearing in \cref{eq:LT3}).
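For reference in what follows, the derivatives of the power-law coupling take a simple closed form. The sketch below (the function names are ours and purely illustrative, not from the paper's code) writes them down together with a finite-difference sanity check:

```python
# Illustrative helpers (names are ours, not the paper's): the power-law
# Type 3 coupling gamma(Z) = beta_{n-2} Z^n and the derivatives gamma_{,Z}
# and gamma_{,ZZ} that enter the equations of motion below.
def gamma(Z, beta, n):
    return beta * Z**n

def gamma_Z(Z, beta, n):
    return n * beta * Z**(n - 1)

def gamma_ZZ(Z, beta, n):
    return n * (n - 1) * beta * Z**(n - 2)

def fd(f, Z, h=1e-6):
    """Central finite difference, used only to sanity-check the derivatives."""
    return (f(Z + h) - f(Z - h)) / (2.0 * h)

# On the background Z = -phidot/a < 0, and the ghost-free sign choices
# discussed later in the text (beta < 0 for even n, beta > 0 for odd n)
# make gamma(Z) negative:
assert gamma(-0.7, -0.3, 2) < 0 and gamma(-0.7, 0.3, 3) < 0
```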
The background equations of motion may be found by assuming a spatially flat Friedmann--Lema\^itre--Robertson--Walker{} metric: \begin{equation} \mathrm{d} s^2 = a^2(\tau)(-\mathrm{d} \tau^2 + \mathrm{d} x_i \mathrm{d} x^i)\,, \end{equation} where $a(\tau)$ is the scale factor and $\tau$ is the conformal time. $a$ evolves according to the usual Friedmann equation: \begin{equation} \mathcal{H}^2 = \frac{1}{3 M_\text{P}^2} (\bar{\rho}_\text{b} + \bar{\rho}_\text{c} + \bar{\rho}_\gamma + \bar{\rho}_\phi)a^2\,, \end{equation} expressing the conformal Hubble parameter $\mathcal{H}$ in terms of the background energy densities of baryons ($\bar{\rho}_\text{b}$), dark matter ($\bar{\rho}_\text{c}$), radiation ($\bar{\rho}_\gamma$) and the scalar field ($\bar{\rho}_\phi$). $M_\text{P}$ denotes the reduced Planck mass. The energy density and pressure of the scalar field are given by~\cite{Pourtsidou:2013nha} \begin{equation} \label{eq:rhophi} \bar{\rho}_\phi = \frac{1}{2}\frac{\dot{\bar{\phi}}^2}{a^2} + \frac{\dot{\bar{\phi}}}{a}\gamma_{,Z} + \gamma(Z) + V(\phi)\,, \end{equation} \begin{equation} \label{eq:pphi} \bar{p}_\phi = \frac{1}{2}\frac{\dot{\bar{\phi}}^2}{a^2} - \gamma(Z) - V(\phi)\,, \end{equation} where $\bar{\phi}$ is the background value of the scalar field and dots denote differentiation with respect to conformal time. The background part of $Z$ is given by $\bar{Z} = -\dot{\bar{\phi}}/a$. The scalar field obeys \begin{equation} \label{eq:sfeback} (1-\gamma_{,ZZ})(\ddot{\bar{\phi}} - \mathcal{H}\dot{\bar{\phi}}) + 3a\mathcal{H}(\gamma_{,Z} - \bar{Z}) + a^2 V_{,\phi} = 0\,, \end{equation} and the background energy density of the cold dark matter is not modified by the Type~3 coupling: \begin{equation} \dot{\bar{\rho}}_\text{c} + 3\mathcal{H}\bar{\rho}_\text{c} = 0\,. 
\end{equation} To perturb the equations of motion to linear order we work in the synchronous gauge, where the metric tensor reads: \begin{equation} \mathrm{d} s^2 = a^2(\tau)\left\{-\mathrm{d} \tau^2 + \left[\left(1+\frac{1}{3}h\right)\gamma_{ij} + D_{ij}\nu\right]\mathrm{d} x^i \mathrm{d} x^j\right\}\,. \end{equation} Here, $h$ and $\nu$ are scalar perturbation variables, $D_{ij}$ is the traceless derivative operator $D_{ij} = \vec{\nabla}_i \vec{\nabla}_j - (1/3)\vec{\nabla}^2\gamma_{ij}$ and $\vec{\nabla}_i$ is the covariant derivative associated with the 3-space metric $\gamma_{ij}$. The unit time-like vector field $u_\mu$ is perturbed as \begin{equation} u_\mu = a(1,\vec{\nabla}_i\theta)\,. \end{equation} The cold dark matter density contrast $\delta_\text{c} = \delta\rho_\text{c}/\bar{\rho}_\text{c}$ obeys the standard continuity equation (in Fourier space): \begin{equation} \label{eq:delta0} \dot{\delta}_\text{c} = -k^2 \theta_\text{c} - \frac{1}{2}\dot{h}\,, \end{equation} while the velocity divergence $\theta_\text{c}$ obeys the modified Euler equation~\cite{Pourtsidou:2013nha}: \begin{equation} \label{eq:theta0} \dot{\theta}_\text{c} + \mathcal{H}\theta_\text{c} = \frac{(3\mathcal{H}\gamma_{,Z} + \gamma_{,ZZ}\dot{\bar{Z}})\delta\phi + \gamma_{,Z}\dot{\delta\phi}}{a(\bar{\rho}_\text{c} - \bar{Z}\gamma_{,Z})}\,, \end{equation} and the scalar field perturbation, $\delta\phi$, obeys \begin{multline} \label{eq:sfeperturb} (1-\gamma_{,ZZ})(\ddot{\delta\phi} + 2\mathcal{H}\dot{\delta\phi}) - \gamma_{,ZZZ}\dot{\bar{Z}}\dot{\delta\phi} \\+ (k^2 + a^2V_{,\phi\phi})\delta\phi + \frac{1}{2}(\dot{\bar{\phi}} + a\gamma_{,Z})\dot{h} + a k^2 \gamma_{,Z} \theta_\text{c} = 0\,. \end{multline} The perturbed Einstein field equations are not modified by a Type~3 coupling and take their standard form, see e.g. Ref.~\cite{Ma:1995ey}. 
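As a rough illustration of the background dynamics presented above (this sketch is ours and is not the paper's {\tt CLASS}{} computation), one can integrate the power-law form of \cref{eq:sfeback} on an assumed, fixed matter-dominated background, $a=\tau^2$, $\mathcal{H}=2/\tau$, ignoring the field's backreaction on the expansion and setting $M_\text{P}=A=\lambda=1$:

```python
# Toy sketch (ours, not the paper's CLASS pipeline): RK4 integration of the
# background field equation for gamma(Z) = beta*Z**n and V = A*exp(-lam*phi),
# on an ASSUMED fixed matter-dominated background a = tau**2, H = 2/tau.
# Backreaction on the expansion is ignored; M_P = A = lam = 1 for brevity.
import math

def phidot_at(beta, n, lam=1.0, A=1.0, tau0=1.0, tau1=1.2, steps=2000):
    """Return dphi/dtau at tau1, starting from phi = dphi/dtau = 0 at tau0."""
    def deriv(tau, phi, phid):
        a, H = tau * tau, 2.0 / tau
        Z = -phid / a
        gZZ = n * (n - 1) * beta * Z**(n - 2)
        # power-law background equation solved for the second derivative:
        phidd = (a * a * lam * A * math.exp(-lam * phi)
                 - 2.0 * H * phid
                 - n * (4 - n) * a * H * beta * Z**(n - 1)) / (1.0 - gZZ)
        return phid, phidd
    h = (tau1 - tau0) / steps
    phi, phid, tau = 0.0, 0.0, tau0
    for _ in range(steps):
        k1 = deriv(tau, phi, phid)
        k2 = deriv(tau + h / 2, phi + h / 2 * k1[0], phid + h / 2 * k1[1])
        k3 = deriv(tau + h / 2, phi + h / 2 * k2[0], phid + h / 2 * k2[1])
        k4 = deriv(tau + h, phi + h * k3[0], phid + h * k3[1])
        phi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        phid += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        tau += h
    return phid

# stronger coupling -> slower field evolution; for n = 2 the coupling simply
# rescales the source, so the ratio below should sit close to 1 - 2*beta0 = 21
r = phidot_at(0.0, 2) / phidot_at(-10.0, 2)
```

For $n=2$ the coupling acts at all times, whereas for $n>2$ the slowing only switches on once $\dot{\bar{\phi}}$ has grown, as discussed in \cref{sec:back}.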
\section{Overview of suppression of structure formation} \label{sec:overview} As found in Ref.~\cite{Pourtsidou:2016ico}, Type~3 models can result in a suppression of structure growth relative to $\Lambda$CDM{}. We examine the mechanism by which this suppression occurs with reference to the underlying equations of motion. Following Ref.~\cite{Pourtsidou:2016ico}, we consider a Type~3 coupled quintessence model with an exponential potential of the form \begin{equation} V(\phi) = A \e{-\lambda\phi/M_\text{P}}\,, \end{equation} and a power-law Type~3 coupling function given by \begin{equation} \gamma(Z) = \beta_{n-2}Z^n\,, \end{equation} where for $n=2$ we recover the quadratic coupling studied in Ref.~\cite{Pourtsidou:2016ico}. We consider only values of $\beta_{n-2}$ such that $\gamma(Z)$ is negative, as positive $\gamma(Z)$ can result in a ghost instability if $\gamma_{,ZZ}>1$~\cite{Pourtsidou:2013nha}. Since $Z$ always takes negative values, this means that we consider only negative values of $\beta_{n-2}$ for $n$ even and only positive $\beta_{n-2}$ for $n$ odd. We use the modified version of the Boltzmann code {\tt CLASS}{}~\cite{2011arXiv1104.2932L, 2011JCAP...07..034B, 2011arXiv1104.2934L, 2011JCAP...09..032L} developed by the authors of Ref.~\cite{Pourtsidou:2016ico}, further modifying it to compute the evolution of power-law couplings with $n>2$. The matter power spectrum at a time $t$ is given by \begin{equation} \label{eq:P} P(k,t) = \frac{2\pi^2}{k^3} T^2(k,t) \mathcal{P}(k)\,, \end{equation} where $\mathcal{P}(k)$ is the primordial power spectrum, which is assumed to have the form $\mathcal{P}(k) = A_\text{s}(k/k_*)^{n_\text{s}-1}$, and $T(k,t)$ is the transfer function describing the evolution of the matter density perturbation $\delta_\text{m}(k,t)$~\cite{Eisenstein:1997ik}. All perturbed quantities, and the transfer function, are computed numerically by {\tt CLASS}{}. 
The present-day matter power spectrum $P(k,t_0)$ is denoted by $P(k)$ for compactness. Since the primordial power spectrum is close to scale-invariant, with $n_\text{s}\approx 1$~\cite{Planck2018VI}, the matter power spectrum $P(k)$ derives all its interesting features from the transfer function. Due to the gravitational interaction between dark matter and baryons, their density contrasts obey $\delta_\text{c}\approx \delta_\text{b}\approx \delta_\text{m}$ to a very good approximation. In our numerical evolution we took $n_\text{s} = 0.97$~\cite{Planck2018VI}. The amplitude of the late-time matter density perturbations is commonly parameterised in terms of $\sigma_8$, defined as \begin{equation} \label{eq:sigma} \sigma_R^2 = \frac{1}{2\pi^2}\int W_R(k)^2 P(k) k^2\mathrm{d} k\,, \end{equation} with $R=8 h^{-1}\mathrm{Mpc}$, where $W_R(k)$ is the Fourier transform of the spherical top-hat window function: \begin{equation} W_R(k) = \frac{3}{k^3R^3} \,[\sin(kR) - kR \cos(kR)]\,. \end{equation} The structure growth suppression is illustrated for the $n=2$ case by \cref{fig:0pk,fig:0sigma8}, which show the linear matter power spectrum and $\sigma_8$ for the quadratic coupling function and an exponential potential. Following Ref.~\cite{Pourtsidou:2016ico}, here we fix the slope of the potential to be $\lambda = 1.22$, which is within the range of values providing a good fit to cosmological data~\cite{Pourtsidou:2016ico}. In \cref{sec:back,sec:perts} we investigate the effect of varying $\lambda$. We have set the sound horizon at recombination, which is tightly constrained by CMB measurements~\cite{Planck2018VI}, to $\theta_\text{s} = 0.0104$. Except where stated otherwise, we keep $\lambda$ and $\theta_\text{s}$ fixed throughout. \Cref{fig:0pk} is the analogue of the right-hand panel of Fig.~2 in Ref.~\cite{Pourtsidou:2016ico}. The slightly different value of $P(k)$ in the large-$k$ limit is due to different input parameters used in Ref.~\cite{Pourtsidou:2016ico}.
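\Cref{eq:sigma} is straightforward to evaluate numerically. The following sketch implements the top-hat window and the $\sigma_R$ integral for a toy power spectrum (the spectrum shape and all numbers are invented for illustration; the paper's $P(k)$ is computed by {\tt CLASS}{}):

```python
# Sketch of the sigma_R integral of eq. (sigma):
#   sigma_R^2 = (1/2pi^2) * Integral[ W_R(k)^2 P(k) k^2 dk ],
# with the spherical top-hat window W_R(k) = 3/(kR)^3 [sin(kR) - kR cos(kR)].
# The power spectrum used below is a TOY shape in arbitrary units, not a
# CLASS output.
import math

def W_tophat(k, R):
    x = k * R
    if x < 1e-4:                  # series limit: W -> 1 as kR -> 0
        return 1.0 - x * x / 10.0
    return 3.0 / x**3 * (math.sin(x) - x * math.cos(x))

def sigma_R(P, R, kmin=1e-4, kmax=10.0, N=20000):
    """Trapezoidal integration of the sigma_R integral in log k."""
    lkmin, lkmax = math.log(kmin), math.log(kmax)
    dlk = (lkmax - lkmin) / N
    total = 0.0
    for i in range(N + 1):
        k = math.exp(lkmin + i * dlk)
        w = 0.5 if i in (0, N) else 1.0
        total += w * W_tophat(k, R)**2 * P(k) * k**3  # extra k: dk = k dln k
    return math.sqrt(total * dlk / (2.0 * math.pi**2))

# toy spectrum rising as k and falling beyond a turnover, illustration only
P_toy = lambda k: k / (1.0 + (k / 0.02)**3)
s8 = sigma_R(P_toy, R=8.0)
```

Smoothing over a larger radius $R$ suppresses the small-scale contribution, so $\sigma_R$ decreases with $R$, as the test below confirms for the toy spectrum.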
For moderate values of $|\beta_0|$ there is a slight reduction in $\sigma_8$ relative to uncoupled quintessence (given by the limit of small $|\beta_0|$). For large values of $|\beta_0|$ we see an enhancement of $\sigma_8$ relative to uncoupled quintessence. Qualitatively, the suppression arises because the Type~3 coupling gives the CDM fluid a non-zero velocity divergence $\theta_\text{c}$, given by \cref{eq:theta0}. This results in a suppression of the CDM density contrast. We find numerically that the two terms on the right-hand side of \cref{eq:delta0} are always of opposite signs, and the second term is larger in magnitude, so the larger $|\theta_\text{c}|$ is, the smaller $|\delta_\text{c}|$ becomes. \begin{figure} \centering \includegraphics[width=\columnwidth]{coupling0_pk.pdf} \caption{The linear matter power spectrum, $P(k)$, for an exponential potential, $V(\phi) = A \e{-\lambda\phi/M_\text{P}}$, and a coupling $\gamma(Z)=\beta_0 Z^2$ for various values of $\beta_{0}$. The slope of the potential is set to $\lambda = 1.22$ and the sound horizon at recombination is held fixed at $\theta_\text{s} = 0.0104$.} \label{fig:0pk} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{defaultsigma8.pdf} \caption{The dependence of $\sigma_8$ on $\beta_0$ for a quadratic coupling function and an exponential potential, as in \cref{fig:0pk}.} \label{fig:0sigma8} \end{figure} The steps by which the Type~3 coupling $\gamma(Z)$ impacts the parameter $\sigma_8$ are shown schematically in \cref{fig:flowchart}. In \cref{sec:back} we discuss how the Type~3 coupling affects the background cosmological evolution and in \cref{sec:perts} we demonstrate its effect on the perturbations, in particular how the CDM velocity divergence $\theta_\text{c}$ depends on the coupling.
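The sign structure just described can be made concrete with a deliberately crude toy version of \cref{eq:delta0} in which both sources are held constant (all values below are invented for illustration and are not {\tt CLASS}{} output):

```python
# Toy integration of eq. (delta0): delta_c' = -k^2 theta_c - h'/2, with
# constant, INVENTED sources.  The -h'/2 term drives growth while the
# -k^2 theta_c term opposes it, so a larger theta_c leaves a smaller
# final density contrast.
def final_delta(theta_amp, k=0.1, hdot=-2.0, T=10.0, steps=1000):
    dt = T / steps
    delta = 0.0
    for _ in range(steps):
        delta += dt * (-(k * k) * theta_amp - 0.5 * hdot)
    return delta
```

With these toy numbers, switching on a velocity divergence of amplitude 50 halves the final contrast relative to the uncoupled case, mirroring (very schematically) the suppression mechanism.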
\begin{figure} \centering \begin{tikzpicture}[node distance=1.6cm] \node (coupling) [chainstep, text width=3cm] {Type 3 coupling $\gamma(Z) = \beta_{n-2}Z^n$}; \node (theta) [chainstep, below of=coupling, text width=3cm] {CDM velocity divergence $\theta_\text{c}$, \\see \cref{eq:theta0}}; \node (delta) [chainstep, below of=theta, text width=3cm] {CDM density contrast $\delta_\text{c}$, \\see \cref{eq:delta0}}; \node (P) [chainstep, below of=delta, text width=3cm] {Matter power spectrum $P(k)$, \\see \cref{eq:P}}; \node (sigma) [chainstep, below of=P, text width=3cm] {Amplitude of fluctuations $\sigma_8$, \\see \cref{eq:sigma}}; \node (back) [chainstep, right of=theta, xshift=2.2cm, text width=3cm] {Background $\bar{\phi} \rightarrow \bar{\rho}_\phi \rightarrow \mathcal{H}$, \\see \cref{sec:back}}; \draw[->] (coupling) -- (theta); \draw[->] (theta) -- (delta); \draw[->] (delta) -- (P); \draw[->] (P) -- (sigma); \draw[->] (coupling) --(back); \draw[->] (back) -- (delta); \end{tikzpicture} \caption{A schematic illustration of the steps by which the Type~3 coupling affects the amplitude of fluctuations $\sigma_8$.} \label{fig:flowchart} \end{figure} \section{Effect of Type~3 Coupling on the Background Evolution} \label{sec:back} For a Type~3 coupled quintessence model with power law coupling $\gamma(Z) = \beta_{n-2}Z^n$ and an exponential potential $V(\phi) = A\e{-\lambda\phi/M_\text{P}}$ the scalar field evolution equation, Eq.~(\ref{eq:sfeback}), becomes \begin{multline} \label{eq:SFEn} \left[1-n(n-1)\beta_{n-2}\left(-\frac{\dot{\bar{\phi}}}{a}\right)^{n-2}\right]\ddot{\bar{\phi}} + 2\mathcal{H}\dot{\bar{\phi}} \\ + n(4-n)a\mathcal{H}\beta_{n-2} \left(-\frac{\dot{\bar{\phi}}}{a}\right)^{n-1} - a^2 \frac{\lambda A}{M_\text{P}} \e{-\lambda\bar{\phi}/M_\text{P}} = 0\,, \end{multline} where we have used $\bar{Z} = -\dot{\bar{\phi}}/a$. To understand the behaviour of the scalar field it is instructive to consider certain limiting cases. 
In the interest of readability, in the following we use the general quantity $\gamma_{,ZZ}$ in place of its specific form for the power-law coupling, $n(n-1)\beta_{n-2}(-\dot{\bar{\phi}}/a)^{n-2}$. First we consider the case in which $1 \gg |\gamma_{,ZZ}|$. This can result from either $|\beta_{n-2}|$ or $|\dot{\bar{\phi}}|$ being very small. In this limit, the second term in the square bracket of \cref{eq:SFEn} becomes negligible, as does the third term of the equation, and hence \begin{equation} \label{eq:SFEnearly} \ddot{\bar{\phi}} + 2\mathcal{H}\dot{\bar{\phi}} - a^2 \frac{\lambda A}{M_\text{P}} \e{-\lambda\bar{\phi}/M_\text{P}} = 0\,, \end{equation} which is simply the scalar field equation for uncoupled quintessence. In this case the scalar field will roll down the potential with $\bar{\phi}$ and $\dot{\bar{\phi}}$ increasing with time. In the opposite case, where $1 \ll |\gamma_{,ZZ}|$, \cref{eq:SFEn} becomes \begin{multline} \label{eq:SFEnlate} -n(n-1)\beta_{n-2}\left(-\frac{\dot{\bar{\phi}}}{a}\right)^{n-2} \ddot{\bar{\phi}} \\+ n(4-n)a\mathcal{H}\beta_{n-2} \left(-\frac{\dot{\bar{\phi}}}{a}\right)^{n-1} - a^2 \frac{\lambda A}{M_\text{P}} \e{-\lambda\bar{\phi}/M_\text{P}} = 0\,. \end{multline} Note that for $n=4$ the second term in \cref{eq:SFEnlate} is equal to zero, and therefore one should not neglect the $2\mathcal{H}\dot{\bar{\phi}}$ term in \cref{eq:SFEn}. For our present purposes, however, this distinction is not vital. What is important to note is that, since we are considering the regime where $|\gamma_{,ZZ}| \gg 1$, \cref{eq:SFEnlate} predicts a slower evolution of $\dot{\bar{\phi}}$ and hence $\bar{\phi}$ due to the large factor multiplying $\ddot{\bar{\phi}}$. Provided $n\neq 2$, we can now see that a Type~3 coupled quintessence model will transition from the first limit, when $\dot{\bar{\phi}}$ is very small, to the second limit, since $\dot{\bar{\phi}}$ grows with time.
Larger $|\beta_{n-2}|$ will result in an earlier transition from the uncoupled quintessence regime of \cref{eq:SFEnearly} to the `slowed' regime of \cref{eq:SFEnlate}. This is demonstrated by \cref{fig:phip}, which shows the evolution of $\dot{\bar{\phi}}$ for $n=3$ and $n=4$. \begin{figure*} \centering \includegraphics[width=\columnwidth]{coupling1_phip_consth.pdf} \includegraphics[width=\columnwidth]{coupling2_phip.pdf} \caption{The evolution of the conformal time derivative of the scalar field with the scale factor $a$ for a Type~3 coupling of the form $\gamma(Z) = \beta_{n-2} Z^n$, for different values of the coupling parameter $|\beta_{n-2}|$. The left panel shows the $n=3$ case and the right panel shows $n=4$. The units of $\beta_{n-2}$ are $(\mathrm{Mpc}/M_\text{P})^{n-2}$.} \label{fig:phip} \end{figure*} To understand the way in which $\dot{\bar{\phi}}$ scales with $\beta_{n-2}$ it is instructive to consider the special case in which $n=2$. In this case, the scalar field equation \cref{eq:SFEn} becomes: \begin{equation} \label{eq:sfe1} (1-2\beta_0)(\ddot{\bar{\phi}} + 2\mathcal{H}\dot{\bar{\phi}}) - a^2 \frac{\lambda A}{M_\text{P}} \e{-\lambda\bar{\phi}/M_\text{P}} = 0\,. \end{equation} The factor multiplying the kinetic term is now independent of time, implying that the transition described above for general $n$ is not present for $n=2$. Instead, one can see that for small $|\beta_0|$ the uncoupled quintessence case is recovered for all time, and for large $|\beta_0|$ the scalar field evolution is slowed; while for all $\beta_0$, $\dot{\bar{\phi}}$ scales as $1/(1-2\beta_0)$, which we have confirmed numerically. We can obtain the scaling behaviour for general $n$, illustrated by \cref{fig:phip}, in a schematic way as follows. 
In analogy to the $n=2$ case in which $\dot{\bar{\phi}}$ scales with $1/(1-2\beta_0)$, let us suppose that $\dot{\bar{\phi}}$ for general $n$ will scale as the inverse of the term in square brackets of \cref{eq:SFEn}: \begin{equation} \label{eq:nscale} \dot{\bar{\phi}} \sim \frac{1}{1-n(n-1)\beta_{n-2}(-\dot{\bar{\phi}}/a)^{n-2}}\,. \end{equation} This relation is of little use in its present form because it contains $\dot{\bar{\phi}}$ on both sides. However, as above, we can consider the two limits: $1 \gg |\gamma_{,ZZ}|$ and $1 \ll |\gamma_{,ZZ}|$. In the first case there is no scaling with $\beta_{n-2}$ as the uncoupled quintessence case is recovered. In the second case, however, \cref{eq:nscale} becomes: \begin{equation} \label{eq:nscalelate} \dot{\bar{\phi}} \sim \frac{1}{-n(n-1)\beta_{n-2}(-\dot{\bar{\phi}}/a)^{n-2}}\,, \end{equation} and we can now rearrange to obtain \begin{equation} \label{eq:nscalelate2} \dot{\bar{\phi}} \sim |\beta_{n-2}|^{-\frac{1}{n-1}}\,, \end{equation} which agrees with the late time, large $|\beta_{n-2}|$ regime in \cref{fig:phip}. \subsection{Impact on expansion rate} In order to understand how the expansion rate depends on the Type~3 coupling it is necessary to consider the evolution of the scalar field energy density $\bar{\rho}_\phi$. The energy density and pressure are given by \cref{eq:rhophi,eq:pphi}. For a power-law coupling and exponential potential: \begin{equation} \label{eq:rhophi2} \bar{\rho}_\phi = \frac{1}{2} \left(\frac{\dot{\bar{\phi}}}{a}\right)^2 - (n-1) \beta_{n-2} \left(-\frac{\dot{\bar{\phi}}}{a}\right)^n + A\e{-\lambda\bar{\phi}/M_\text{P}}\,, \end{equation} and \begin{equation} \label{eq:pphi2} \bar{p}_\phi = \frac{1}{2}\left(\frac{\dot{\bar{\phi}}}{a}\right)^2 - \beta_{n-2}\left(-\frac{\dot{\bar{\phi}}}{a}\right)^n - A\e{-\lambda\bar{\phi}/M_\text{P}}\,. 
\end{equation} The energy density of the scalar field obeys the usual conservation equation: \begin{align} &\dot{\bar{\rho}}_\phi + 3\mathcal{H}(\bar{\rho}_\phi + \bar{p}_\phi) = 0 \,, \\\Rightarrow \quad &\dot{\bar{\rho}}_\phi = -3\mathcal{H} \left[\left(\frac{\dot{\bar{\phi}}}{a}\right)^2 - n\beta_{n-2}\left(-\frac{\dot{\bar{\phi}}}{a}\right)^n\right]\,. \label{eq:T3rhophicons} \end{align} Once again it is instructive to consider this expression in the small- and large-$|\beta_{n-2}|$ limits separately. In the limit where $1 \gg |\gamma_{,ZZ}|$, the second term in the square bracket of \cref{eq:T3rhophicons} is negligible and the uncoupled quintessence case is recovered. In the limit where $1 \ll |\gamma_{,ZZ}|$, however, the first term in the square bracket can be neglected and one obtains \begin{equation} \dot{\bar{\rho}}_\phi = 3n\mathcal{H} \beta_{n-2}\left(-\frac{\dot{\bar{\phi}}}{a}\right)^n\,. \end{equation} We have already established that in this limit, $\dot{\bar{\phi}}$ scales according to \cref{eq:nscalelate2}, so we can infer that $\dot{\bar{\rho}}_\phi$ scales with $\beta_{n-2}$ as \begin{equation} \dot{\bar{\rho}}_\phi \sim |\beta_{n-2}|^{-\frac{1}{n-1}}\,. \end{equation} Finally, since $\beta_{n-2}(-\dot{\bar{\phi}}/a)^n$ is always negative for the choices of $\beta_{n-2}$ we consider, we can conclude that $\bar{\rho}_\phi$ falls with time, and does so more slowly the larger $|\beta_{n-2}|$ is. Thus for very large values of $|\beta_{n-2}|$ the scalar field behaves similarly at the background level to a cosmological constant. As in the case of uncoupled quintessence, a steeper scalar field potential also results in a faster evolution of $\bar{\phi}$ and hence a drop in $\bar{\rho}_\phi$.
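The scaling \cref{eq:nscalelate2} is easy to verify numerically by treating the schematic relation \cref{eq:nscale} as an exact fixed-point equation in arbitrary units (a deliberate idealisation; the prefactors and units below are illustrative only):

```python
# Numerical check of the schematic scaling phidot ~ |beta|^(-1/(n-1)):
# solve the fixed-point relation x = 1/(1 + n(n-1)|beta| x^(n-2)), with x
# standing in for phidot in ARBITRARY toy units (sign conventions absorbed
# into |beta|), by bisection, then compare solutions at two couplings.
def phidot_fixed_point(beta_abs, n, iters=200):
    c = n * (n - 1) * beta_abs
    f = lambda x: x * (1.0 + c * x**(n - 2)) - 1.0  # root = fixed point
    lo, hi = 0.0, 1.0                               # f(0) < 0 < f(1)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# for n = 3, the large-coupling limit predicts x ~ |beta|^(-1/2), so
# increasing |beta| by 100 should reduce x by a factor of about 10:
x1 = phidot_fixed_point(1e4, 3)
x2 = phidot_fixed_point(1e6, 3)
ratio = x1 / x2
```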
In terms of the background evolution we therefore find that the potential parameter $\lambda$ and the Type~3 coupling parameter $\beta_{n-2}$ act in opposition to each other, with an increase in the former tending to speed up the scalar field evolution and an increase in the latter tending to slow it. Both of these effects are illustrated by \cref{fig:rhophi} for a Type~3 coupling with $n=2$. \begin{figure} \centering \includegraphics[width=\columnwidth]{rhop_selection.pdf} \caption{The evolution of the background energy density of the scalar field $\bar{\rho}_\phi$ as a function of the scale factor, $a$, for two values of the coupling parameter $\beta_0$ and the potential parameter $\lambda$.} \label{fig:rhophi} \end{figure} The expansion rate can be calculated using the Friedmann equation. At early times, the contribution of the scalar field is negligible compared to those of matter and radiation, but at late times it is the dominant species. A small value of $|\beta_{n-2}|$, allowing $\bar{\rho}_\phi$ to fall, will give rise to a smaller present-day expansion rate than a large value of $|\beta_{n-2}|$, which slows the evolution of $\bar{\rho}_\phi$ and gives an expansion rate close to that expected from a cosmological constant. \Cref{fig:h} illustrates the impact of the slope of the potential and the Type~3 coupling parameter on the present-day expansion rate for a Type~3 coupling with $n=2$. It can be seen that steep potentials give rise to unrealistically low values of $H_0$ unless the Type~3 coupling is sufficiently strong.
\begin{figure} \centering \includegraphics[width=\columnwidth]{h.pdf} \caption{The present-day value of the Hubble parameter, $H_0$, as a function of the coupling parameter, $|\beta_0|$, for a range of potential parameters, $\lambda$, for a quadratic coupling function $\gamma(Z) = \beta_0 Z^2$ and an exponential potential $V(\phi) = A \e{-\lambda\phi/M_\text{P}}$.} \label{fig:h} \end{figure} \section{Evolution of linear perturbations} \label{sec:perts} \subsection{Dependence on coupling parameter} Type~3 models affect the cosmological perturbations through the modified equation for the CDM velocity divergence, \cref{eq:theta0}. In the case of a power-law coupling, \cref{eq:theta0} can be written: \begin{equation} \label{eq:thetacn} \dot{\theta}_\text{c} + \mathcal{H}\theta_\text{c} = \frac{n\beta_{n-2} \,\frac{\mathrm{d}}{\mathrm{d}\tau}\!\left[a^{4-n} (-\dot{\bar{\phi}})^{n-1} \delta\phi\right]} {a^4[\bar{\rho}_\text{c}-n\beta_{n-2}(-\dot{\bar{\phi}}/a)^n]}\,. \end{equation} It turns out that the second term in the denominator is always significantly smaller than the first. Thus to understand the behaviour of $\theta_\text{c}$ it suffices to consider the numerator. Let us separately consider how $\beta_{n-2}(-\dot{\bar{\phi}})^{n-1}$ and $\delta\phi$ depend on $\beta_{n-2}$. From \cref{eq:nscalelate2} we can see that, for sufficiently large $|\beta_{n-2}|$, the factor $\beta_{n-2}(-\dot{\bar{\phi}})^{n-1}$ is approximately constant and any $\beta_{n-2}$-dependence of $\theta_\text{c}$ must come from $\delta\phi$. In the limit of small $|\beta_{n-2}|$, however, we have already seen that $\dot{\bar{\phi}}$ is approximately independent of $\beta_{n-2}$, so the factor $\beta_{n-2}(-\dot{\bar{\phi}})^{n-1}$ rises linearly with $\beta_{n-2}$.
As illustrated by \cref{fig:dphi}, $\delta\phi$ is constant with $\beta_{n-2}$ for small $|\beta_{n-2}|$, and drops as \begin{equation} \delta\phi \sim |\beta_{n-2}|^{-\frac{1}{n-1}}\,, \end{equation} for large $|\beta_{n-2}|$, with the transition from the approximately constant regime to the $|\beta_{n-2}|^{-1/(n-1)}$ regime occurring at larger $|\beta_{n-2}|$ on smaller scales. For the special case of $n=2$, we can make a more precise statement and say that $\delta\phi$ scales as $1/(1-2\beta_0)$ on large scales. The black line in the top panel of \cref{fig:dphi} illustrates this scaling, closely matching the form of the magenta and cyan lines which correspond to large scales, while the blue line, corresponding to small scales, is constant for a wide range of $\beta_0$. The above scaling arguments for the factors in \cref{eq:thetacn} allow us to understand how $\theta_\text{c}$ depends on $\beta_{n-2}$. For small $|\beta_{n-2}|$, we expect $|\theta_\text{c}|$ to rise linearly with $|\beta_{n-2}|$, while for large $|\beta_{n-2}|$ we expect it to fall as $|\beta_{n-2}|^{-1/(n-1)}$. This behaviour is illustrated in \cref{fig:thetac}. The broad peak of $|\theta_\text{c}|$ on small scales results from the fact that $\delta\phi$ is approximately constant for a wide range of $\beta_{n-2}$ on small scales. Once again, we can be more precise in the special case in which $n=2$. Inserting the scalings for $\dot{\bar{\phi}}$ and $\delta\phi$ into \cref{eq:thetacn}, we find that $\theta_\text{c}$ scales as $\beta_0/(1-2\beta_0)^2$ on large scales (black solid line in the top panel of \cref{fig:thetac}) and as $\beta_0/(1-2\beta_0)$ on small scales (black dashed line in the top panel of \cref{fig:thetac}). 
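The rise and fall of $|\theta_\text{c}|$ follows directly from these scalings. For $n=2$ on large scales, writing $b=|\beta_0|$ (with $\beta_0<0$) so that the magnitude of the scaling reads $b/(1+2b)^2$, a toy grid search locates the turnover (the position of the true peak depends on scale and cosmology; this is not a {\tt CLASS}{} result):

```python
# Locating the maximum of the large-scale n=2 scaling |theta_c| ~ b/(1+2b)^2,
# with b = |beta_0|.  A toy grid search for illustration only.
def theta_scale(b):
    return b / (1.0 + 2.0 * b)**2

grid = [i * 1e-3 for i in range(1, 5000)]   # b in (0, 5)
b_peak = max(grid, key=theta_scale)
# analytically: d/db [b/(1+2b)^2] = (1-2b)/(1+2b)^3, which vanishes at b = 1/2
```

The scaling rises linearly for small $b$ and falls off for large $b$, consistent with the numerically observed maximum of $|\theta_\text{c}|$ at intermediate coupling strength.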
\begin{figure} \centering \includegraphics[width=\columnwidth]{coupling0_dphi_consth.pdf} \includegraphics[width=\columnwidth]{coupling1_deltaphi.pdf} \caption{The present-day value of the scalar field perturbation $\delta\phi$ for a Type~3 coupling $\gamma(Z) = \beta_{n-2}Z^n$ as a function of $\beta_{n-2}$ for different scales $k$. The top panel shows the $n=2$ case and the bottom panel shows $n=3$. } \label{fig:dphi} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{coupling0_thetac_consth.pdf} \includegraphics[width=\columnwidth]{coupling1_thetac.pdf} \caption{The present-day CDM velocity divergence $\theta_\text{c}$ for a Type~3 coupling $\gamma(Z) = \beta_{n-2}Z^n$ as a function of $\beta_{n-2}$ for several scales $k$. The top panel shows the $n=2$ case and the bottom panel shows $n=3$.} \label{fig:thetac} \end{figure} \subsection{Dependence on potential} The slope of the potential, $\lambda$, also has an impact on the growth of matter perturbations. \Cref{fig:sigma8} demonstrates for a coupling with $n=2$ that a steeper potential can give rise to a very large reduction in $\sigma_8$, certainly large enough to resolve the discrepancy between early- and late-universe observations. It should be noted that increasing the slope $\lambda$ of the potential has the effect of reducing the expansion rate, which is to be avoided since this exacerbates the Hubble tension. (See Ref.~\cite{Verde:2019ivm} for a recent discussion.) The lines in \cref{fig:sigma8} stop once $H_0< 30 \, \mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}$ but even much smaller reductions in $H_0$ are problematic. However, from \cref{fig:h,fig:sigma8} one can see that there are choices for $\lambda$ and $\beta_0$ that give rise to significant reduction in $\sigma_8$ without having a noticeable impact on $H_0$, for example $\lambda=3$, $\beta_0 = -10^2$. 
\begin{figure} \centering \includegraphics[width=\columnwidth]{sigma8.pdf} \caption{The amplitude of matter fluctuations $\sigma_8$ as a function of the coupling parameter $|\beta_0|$ for a range of potential parameters $\lambda$ for a quadratic coupling function $\gamma(Z) = \beta_0 Z^2$ and an exponential potential $V(\phi) = A \e{-\lambda\phi/M_\text{P}}$.} \label{fig:sigma8} \end{figure} To understand how structure growth depends on $\lambda$ one needs to consider the CDM velocity divergence. \Cref{fig:thetac_lambda} shows, again for the $n=2$ case, how the evolution of $\theta_\text{c}$ is affected by the potential parameter $\lambda$: larger $\lambda$, corresponding to a steeper potential, results in $|\theta_\text{c}|$ rising more rapidly. A larger $|\theta_\text{c}|$ at a given time reduces the time derivative of the CDM density contrast $\delta_\text{c}$ (see \cref{eq:delta0}), resulting in a smaller $|\delta_\text{c}|$ at the present epoch and hence a reduction of $\sigma_8$ for large $\lambda$ as seen in \cref{fig:sigma8}. The $\lambda$-dependence of $\theta_\text{c}$ can be seen in the $\theta_\text{c}$ equation (\cref{eq:theta0}). Substituting for $\ddot{\bar{\phi}}$ using \cref{eq:sfe1}, \cref{eq:theta0} becomes \begin{equation} \label{eq:thetapot} \dot{\theta}_\text{c} = -\mathcal{H}\theta_\text{c} + \frac{\frac{2\beta_0}{1-2\beta_0} a^2 V_{,\phi} \delta\phi - 2\beta_0\dot{\bar{\phi}}\dot{\delta\phi}} {(\bar{\rho}_\text{c}a^2 - 2\beta_0\dot{\bar{\phi}}^2)}\,, \end{equation} which, for an exponential potential $V(\phi) = A\e{-\lambda\phi/M_\text{P}}$, yields \begin{equation} \label{eq:thetapot2} \dot{\theta}_\text{c} = -\mathcal{H}\theta_\text{c} + \frac{-\frac{2\beta_0}{1-2\beta_0} a^2 \frac{A\lambda}{M_\text{P}}\e{-\lambda\bar{\phi}/M_\text{P}}\delta\phi - 2\beta_0\dot{\bar{\phi}}\dot{\delta\phi}} {(\bar{\rho}_\text{c}a^2 - 2\beta_0\dot{\bar{\phi}}^2)}\,. \end{equation} Both of the terms in the numerator become larger in magnitude when $\lambda$ is large.
In the first term this is obvious; in the second it is a consequence of the $V_{,\phi\phi}$ term in \cref{eq:sfeperturb}. Hence, a large slope $\lambda$ results in a large (negative) $\theta_\text{c}$ leading to a reduction in $\delta_\text{c}$ and a suppression of structure growth. \begin{figure} \centering \includegraphics[width=\columnwidth]{thetac_lambda.pdf} \caption{The evolution of the CDM velocity divergence $\theta_\text{c}$ as a function of the scale factor $a$ for a range of different potential parameters $\lambda$ with a coupling parameter $\beta_0 = -10^2$, at a scale $k = 0.1 \,\mathrm{Mpc}^{-1}$. The sound horizon at recombination is held fixed at $\theta_\text{s} = 0.0104$.} \label{fig:thetac_lambda} \end{figure} \subsection{Metric perturbation} As can be seen in \cref{eq:delta0}, the CDM density contrast depends not only on the CDM velocity divergence but also on the time derivative of the metric perturbation $h$. This latter quantity has a weak indirect dependence on the Type~3 coupling through its dependence on the background expansion rate. In general, a larger expansion rate at a given time results in a smaller value of $\dot{h}$, which in turn reduces $|\delta_\text{c}|$ and suppresses structure growth. For all of the cases we have considered, this effect is much smaller than the effect due to $\theta_\text{c}$. \section{Conclusions} \label{sec:disc} Unlike most coupled dark energy models that have been studied in the literature, Type~3 models, as classified at the Lagrangian level in Ref.~\cite{Pourtsidou:2013nha}, consist of a coupling between the momentum of the dark matter and the gradient of the dark energy scalar field. It was demonstrated in Ref.~\cite{Pourtsidou:2016ico} using MCMC methods that such models can ease the tension between early- and late-universe measurements of the degree of structure formation in the universe. 
In this work we have presented an explanation, using both analytical and numerical methods, of why Type~3 models suppress the growth of structure. We considered a fairly general power-law coupling function, finding that it gives rise to similar structure suppression behaviour to the quadratic case previously studied. We explored in detail the behaviour of the background cosmological evolution of Type~3 coupled quintessence models, demonstrating how the scalar field evolution depends on the coupling and the scalar field potential, and how the expansion rate is affected. In particular, we find that, as with uncoupled quintessence, a steeper slope of the scalar field potential gives rise to faster evolution of the scalar field and thus a reduced present-day expansion rate $H_0$. On the other hand, increasing the strength of the coupling via the parameter $\beta_{n-2}$ slows down the scalar field evolution and gives rise to a present-day expansion rate similar to that predicted by $\Lambda$CDM{}. In the models we have studied there is therefore no mechanism for increasing $H_0$ beyond the value predicted by $\Lambda$CDM{}, and hence no means of resolving the existing Hubble constant tension. This understanding of the background evolution was then applied to the perturbed equations of motion. A Type~3 coupling between dark energy and CDM gives rise to a non-zero CDM velocity divergence, $\theta_\text{c}$, which suppresses structure growth via its role in the evolution of the density contrast of CDM, $\delta_\text{c}$ (see \cref{eq:delta0}). We found that $|\theta_\text{c}|$ first rises and then falls as $|\beta_{n-2}|$ increases, with its maximum corresponding to the maximum possible structure suppression. We demonstrated this behaviour using both approximate analytic arguments based on the equations of motion and a numerical analysis using an appropriately modified version of {\tt CLASS}{}.
We also demonstrated how the structure suppression depends on the slope $\lambda$ of the scalar field potential. In particular, increasing $\lambda$ gives rise to a stronger suppression of structure growth. As our background analysis demonstrated, this can have the unwanted side effect of reducing the predicted value of $H_0$, thus worsening the Hubble tension. However, for appropriate values, such as $\lambda = 3$ and $\beta_0 = -10^2$, the structure suppression can be achieved without the Hubble constant being reduced. Thus our results indicate that an even greater suppression of structure formation is possible than has previously been realised. In order to understand the physical origin of the suppression of structure, we have held most cosmological parameters fixed. To fully explore the interplay between model parameters such as $\beta_{n-2}$ and $\lambda$ and cosmological parameters such as $\sigma_8$ and $H_0$, a multi-parameter MCMC analysis is needed. In Ref.~\cite{Linton:2017ged} such an analysis was carried out using CMB data from Planck for a Type~3 model with a cubic coupling, allowing the potential parameter $\lambda$ to vary between 0 and 2.1. They found the Type~3 model to be consistent with the CMB data but marginally disfavoured when compared to $\Lambda$CDM{}. We have not discussed the physical origin of the Type~3 coupling. Presenting a more physically motivated model would be a worthwhile avenue for future study. Recently in \cite{Kase:2019veo,Kase:2019mox} the authors have considered the presence of related interactions in the context of Horndeski theories of modified gravity. We note that our analysis has involved the use of large dimensionless numbers for the coupling parameter $\beta_0$. Without reference to a deeper underlying theory it is difficult to say whether such values are reasonable, but the requirement of large dimensionless numbers is somewhat unappealing. This would be a challenge for any future physically motivated Type~3 theory.
Another possible focus of future research would be to study Type~3 models in a more model-independent way, using the PPF formalism developed in Ref.~\cite{Skordis:2015yra}. In this approach, there is a certain set of non-zero parameters that define a Type~3 model, which can in principle be constrained by observational surveys. Type~3 interacting dark energy is still a young and little-studied area of research but it has been shown to have interesting consequences for the structure and evolution of the universe. \section{Acknowledgements} F.N.C. is supported by a United Kingdom Science and Technology Facilities Council (STFC) studentship. A.A., E.J.C. and A.M.G. acknowledge support from STFC grant ST/P000703/1. A.P. is a UK Research and Innovation Future Leaders Fellow, grant MR/S016066/1, and also acknowledges support from the UK Science and Technology Facilities Council through grant ST/S000437/1. A.P. is grateful to the University of Nottingham for hospitality during the initial stages of this work. We are grateful to Thomas Tram for useful discussions.
\section{Introduction} The isotropy of the cosmic expansion is well-constrained for the early universe, particularly by Cosmic Microwave Background observations, and is a basic tenet of physical cosmology. The change from a matter-dominated to a dark energy-dominated universe in recent times, however, raises the possibility of a dark energy-driven anisotropic expansion if dark energy is itself anisotropic. There is no obvious reason for such symmetry breaking, but observational tests of something as fundamental as the isotropy of the Hubble expansion should be made for late times (the current epoch). One such test is possible via extragalactic proper motions: if the expansion is anisotropic, then quasars and galaxies will stream toward directions of faster expansion and away from directions of slower expansion. The signature of anisotropic expansion in a homogeneous universe is thus a curl-free proper motion vector field \citep[to first order;][]{quercellini09,fontanini09,titov09}. The term ``cosmic parallax'' has been used by some to indicate a general relative angular motion of objects in the universe \citep[e.g.,][]{quercellini09,fontanini09} and used by others in a more canonical sense to indicate apparent angular motion induced by the motion of the observer \citep[e.g.,][]{ding09}. We favor the latter usage; this paper therefore treats the apparent proper motion induced by anisotropic cosmic expansion \citep{amendola13}, referenced to the International Celestial Reference Frame (ICRF) in the current epoch. The observed proper motions are therefore relative, but are not necessarily induced by the observer's motion. In this paper, we present a simple model of anisotropic expansion and fit the model to the \citet{titov13} proper motion catalog to place a new constraint on the isotropy of the Hubble expansion and thus on the isotropy of dark energy. 
We assume $H_\circ=72$~km~s$^{-1}$~Mpc$^{-1}$ and a flat cosmology (this treatment is independent of specific assumptions about $\Omega_\Lambda$ and $\Omega_M$, provided $\Omega_\Lambda + \Omega_M = 1$). \section{Anisotropic Expansion Model}\label{sec:theory} As described in \citet{quercellini09} and \citet{fontanini09}, a homogeneous but anisotropic Bianchi \rm{I} model with metric \begin{equation} ds^2 = -dt^2+a^2(t)\,dx'^{\,2}+b^2(t)\,dy'^{\,2}+c^2(t)\,dz'^{\,2} \label{eqn:metric} \end{equation} has three different expansion rates, $H_{x'} = \dot{a}/a$, $H_{y'} = \dot{b}/b$, and $H_{z'} = \dot{c}/c$, where the Hubble parameter as observed is $H={d\over dt}(abc)^{1/3}/(abc)^{1/3}$ and the Friedmann-Robertson-Walker metric is recovered for $a(t)=b(t)=c(t)$. This metric has a flat geometry and no global vorticity, but the anisotropic expansion will produce a shearing velocity field, causing objects to stream toward directions of faster expansion and away from directions of slower expansion. The shear can be characterized by the fractional deviation from the average Hubble expansion today ($t=t_\circ$), \begin{equation} \Sigma_{i'} = {H_{i',\circ}\over H_\circ}-1, \label{eqn:sheardefn} \end{equation} where $i' = x'$, $y'$, or $z'$, isotropy corresponds to $\Sigma_{x'} = \Sigma_{y'} = \Sigma_{z'} =0$ (no deviation from $H_\circ$ in any direction), and the expansion is ``conserved'': $\Sigma_{x'}+\Sigma_{y'}+\Sigma_{z'}=0$ (the directionless overall $H_\circ$ is preserved, despite anisotropy). Under the simplifying assumption of straight geodesics (incorrect, but a small error as demonstrated by \citealt{quercellini09}), the sky signal of an anisotropic expansion is a curl-free quadrupolar proper motion vector field that is independent of distance. Note that this is a ``real-time'' signal, meaning that the time derivatives are with respect to small coordinate time intervals today (decades) and therefore a constant $H_\circ$ is a good assumption. 
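As a numerical illustration of the shear definition in Equation (\ref{eqn:sheardefn}) and the conservation condition (a sketch in Python; the $+17\%$/$-19\%$/$+2\%$ shear values are taken from the best fit reported later in Table \ref{tab:shear_meas}):

```python
H0 = 72.0  # km/s/Mpc, the assumed isotropic Hubble constant

# Illustrative directional expansion rates today; these particular
# fractional deviations are the best-fit shears of Table 2.
Hx, Hy, Hz = 1.17 * H0, 0.81 * H0, 1.02 * H0

# Shear parameters, Equation (2): fractional deviation from H0.
Sx, Sy, Sz = Hx / H0 - 1.0, Hy / H0 - 1.0, Hz / H0 - 1.0

# "Conserved" expansion: the mean rate of (abc)^(1/3) equals H0,
# so the three shears sum to zero.
print(Sx + Sy + Sz)
```

The third shear is thus fixed once the other two are chosen, which is why only two of the $\Sigma_{i'}$ are free parameters in the fit below.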
The application of this model by \citet{quercellini09}, \citet{titov09}, and \citet{titov11} to the apparent proper motions of extragalactic objects, however, retained a metric fixed with respect to the equatorial coordinate system and did not allow the anisotropy to have an arbitrary orientation. Application of this model to observations by \citet{titov11} was not rotationally invariant (the result would depend on the choice of coordinate system). In order to obtain a fully general model, we require rotational invariance appropriate for vector spherical harmonics (the divergence and curl of the scalar spherical harmonics, $Y_{\ell m}$): all orders $m$ for a given degree $\ell$ must be included in a model fit to data \citep{mignard12}. Here we demonstrate that all quadrupolar ($\ell=2$) curl-free vector spherical harmonic orders naturally arise from an arbitrary rotation of the anisotropic coordinate system. Using an arbitrary rotation from the equatorial reference frame to the one described by Equation (\ref{eqn:metric}), we can allow for any anisotropy orientation. The anisotropy also need not be triaxial; there could simply be a single direction of high or low expansion, provided the expansion conservation condition is satisfied. Rotations are made about $\hat{z}$ by angle $\alpha^*$, about the new $\hat{y}$ by $\delta^*$, and about the final $\hat{x}'$ axis by $\psi^*$. Thus, the anisotropic Bianchi \rm{I} axes can be expressed in the equatorial coordinate system via $\bmath{x}' = \mathbfss{R}_x(\psi^*)\,\mathbfss{R}_y(\delta^*)\, \mathbfss{R}_z(\alpha^*)\, \bmath{x}$, where $\mathbfss{R}_i(\phi)$ is the rotation matrix about axis $i$ by angle $\phi$. 
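The rotation sequence can be sketched numerically; the specific sign conventions below (passive rotations, with the $\hat{y}$ rotation taken through $-\delta^*$ so that the $\hat{x}'$ axis lands at $(\alpha^*,\delta^*)$) are our own choice for illustration, not fixed by the text:

```python
import numpy as np

def Rz(a):  # passive rotation about z by angle a
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):  # passive rotation about y
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def Rx(a):  # passive rotation about x
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]])

# Best-fit angles from Table 2 (degrees -> radians)
alpha, delta, psi = np.radians([193.0, 47.0, -2.0])

# Rotation from the equatorial frame to the anisotropy frame;
# the rows of R are the x', y', z' axes in equatorial components.
R = Rx(psi) @ Ry(-delta) @ Rz(alpha)

xhat = R[0]
alpha_x = np.degrees(np.arctan2(xhat[1], xhat[0])) % 360.0
delta_x = np.degrees(np.arcsin(xhat[2]))
```

With this convention the recovered $\hat{x}'$ direction reproduces $(\alpha^*,\delta^*)$, and the rows of $R$ give the $\hat{y}'$ and $\hat{z}'$ coordinates of the equations that follow.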
The equatorial coordinates of the anisotropy axes are therefore: \begin{subequations} \begin{eqnarray} (\alpha_{x'},\delta_{x'}) = (\alpha^*,\delta^*) \label{eqn:xaxis} \end{eqnarray} \begin{eqnarray} (\alpha_{y'},\delta_{y'}) = \left(\sin^{-1}\left[{\cos\alpha^*\cos\psi^*-\sin\alpha^*\sin\delta^*\sin\psi^*\over\sqrt{1-\cos^2\delta^*\sin^2\psi^*}}\right] , \right. \nonumber \\ \left. \sin^{-1}(\cos\delta^*\sin\psi^*) \right) \label{eqn:yaxis} \end{eqnarray} \begin{eqnarray} (\alpha_{z'},\delta_{z'}) = \left(\sin^{-1}\left[{-\cos\alpha^*\sin\psi^*-\sin\alpha^*\sin\delta^*\cos\psi^*\over\sqrt{1-\cos^2\delta^*\cos^2\psi^*}}\right] , \right. \nonumber \\ \left. \sin^{-1}(\cos\delta^*\cos\psi^*) \right). \label{eqn:zaxis} \end{eqnarray} \end{subequations} The coefficients of the anisotropic expansion proper motion vector field $\bmath{V}_{\rm Shear}(\alpha,\delta)$ are listed in Table \ref{tab:shear_theory} (see Appendix \ref{appdx} for the full equation). These were obtained by taking the time derivatives of the equatorial coordinates, $\dot{\alpha}\cos\delta$ and $\dot{\delta}$, expressed in terms of the shear terms $\Sigma_{i'}$. The general form of the vector field is \begin{eqnarray} \bmath{V}_{\rm Shear}(\alpha,\delta) = H_\circ\ \sum_{m=0}^\ell\ \sum_{i=\alpha,\delta}\ \xi_{2m,i}^{Re,Im}(\alpha,\delta) \nonumber \\ \times\left[ a_{2m}^{Re,Im}(\alpha^*,\delta^*,\psi^*) \left(\Sigma_{y'}+{1\over2}\,\Sigma_{x'}\right) \right. \nonumber \\ \left. +\ b_{2m}^{Re,Im}(\alpha^*,\delta^*,\psi^*) \left( {1\over2}\,\Sigma_{x'}\right)\right] \bmath{\hat{e}}_i \label{eqn:sheareqn} \end{eqnarray} where there are no imaginary coefficients for $m=0$ and the sum over real and imaginary coefficients is implied.
The Hubble constant can be written as $H_\circ = 15.2$~$\mu$as~yr$^{-1}$ (for $H_\circ = 72$~km~s$^{-1}$~Mpc$^{-1}$), so a shear of 10\% would produce streaming motions of order 1.5~$\mu$as~yr$^{-1}$. Comparison of this derivation of the shear field $\bmath{V}_{\rm Shear}(\alpha,\delta)$ to the spheroidal (curl-free or E-mode) quadrupolar vector spherical harmonics \citep{mignard12} reveals an exact one-to-one correspondence (Table \ref{tab:shear_theory}). The treatment here is therefore rotationally invariant (as desired), and it is therefore completely equivalent to fit the spheroidal $\ell=2$ vector spherical harmonics to a proper motion vector field to test the isotropy of expansion. It is not, however, correct to fit or select single or selected orders of a vector spherical harmonic model, as has been done previously. Table \ref{tab:shear_theory} lists the equivalent vector spherical harmonic coefficients (see Appendix \ref{appdx} for the full equation), following the \citet{mignard12} conventions: \begin{eqnarray} \bmath{V}_{E2}(\alpha,\delta) = \sum_{m=0}^\ell\ \sum_{i=\alpha,\delta}\ \xi_{2m,i}^{Re,Im}(\alpha,\delta)\,\chi_{2m}^{Re,Im} s_{2m}^{Re,Im}\,\bmath{\hat{e}}_i\ . \label{eqn:E2eqn} \end{eqnarray} Note that this equation has absorbed the factors of 2 for the $m>0$ orders and the factors of $-1$ for the imaginary terms into the coefficients $\chi_{2m}^{Re,Im}$, in contrast to the definitions used by \citet{mignard12}. 
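The conversion of $H_\circ$ into an angular rate can be verified directly (a sketch assuming standard values for the megaparsec and the Julian year):

```python
import math

H0 = 72.0               # km/s/Mpc
Mpc_km = 3.0857e19      # kilometres per megaparsec
yr_s = 3.1557e7         # seconds per Julian year
uas_per_rad = math.degrees(1.0) * 3600e6  # microarcseconds per radian

# H0 as a dimensionless rate per year, then as an angular rate:
# transverse streaming at the full Hubble rate subtends H0 rad/yr.
H0_per_yr = H0 / Mpc_km * yr_s          # yr^-1
H0_uas_yr = H0_per_yr * uas_per_rad     # ~15.2 uas/yr

shear_10pct = 0.10 * H0_uas_yr          # ~1.5 uas/yr streaming motion
```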
\begin{table*} \begin{minipage}{135mm} \caption{Anisotropy Model Coefficients} \label{tab:shear_theory} \begin{tabular}{@{}cccccc@{}} \hline $\bmath{\hat{e}}_\alpha$ & $\bmath{\hat{e}}_\delta$ & $\Sigma_{y'}+{1\over2}\Sigma_{x'}$ & ${1\over2}\Sigma_{x'}$ & \multicolumn{2}{c}{$\bmath{V}_{E2}$} \\ \noalign{\vskip 2mm} $\xi_{2m,\alpha}^{Re,Im}(\alpha,\delta)$ & $\xi_{2m,\delta}^{Re,Im}(\alpha,\delta)$ & $a_{2m}^{Re,Im}(\alpha^*,\delta^*,\psi^*)$ & $b_{2m}^{Re,Im}(\alpha^*,\delta^*,\psi^*)$ & $s_{2m}$ & $\chi_{2m}^{Re,Im}$ \\ \hline 0 & ${3\over8}\sin2\delta$ & $-\cos2\psi^*(1+\cos2\delta^*)$ & $1-3\cos2\delta^*$ & $s_{20}$ & ${2\over3}\sqrt{15\over2\pi}$ \\ ${1\over2} \sin\alpha\sin\delta$ & $-{1\over2}\cos\alpha\cos2\delta$ & \vtop{\hbox{\strut $2\sin\alpha^*\cos\delta^*\sin2\psi^*$}\hbox{\strut $-\cos\alpha^*\sin2\delta^*\cos2\psi^*$}} & $-3\cos\alpha^*\sin2\delta^*$ & $s_{21}^{Re}$ & $\sqrt{5\over\pi}$\\ ${1\over2} \cos\alpha\sin\delta$ & ${1\over2}\sin\alpha\cos2\delta$ & \vtop{\hbox{\strut $2\cos\alpha^*\cos\delta^*\sin2\psi^*$}\hbox{\strut $+\sin\alpha^*\sin2\delta^*\cos2\psi^*$}} & $3\sin\alpha^*\sin2\delta^*$ & $s_{21}^{Im}$ & $\sqrt{5\over\pi}$\\ ${1\over4} \sin2\alpha\cos\delta$ & ${1\over8}\cos2\alpha\sin2\delta$ & \vtop{\hbox{\strut $3\cos2\alpha^*\cos2\psi^*$}\hbox{\strut $-\cos2\alpha^*\cos2\delta^*\cos2\psi^*$}\hbox{\strut $-4\sin2\alpha^*\sin\delta^*\sin2\psi^*$}} & $-3\cos2\alpha^*(1+\cos2\delta^*)$ & $s_{22}^{Re}$ & $-2\sqrt{5\over\pi}$ \\ ${1\over4} \cos2\alpha\cos\delta$ & $-{1\over8}\sin2\alpha\sin2\delta$ & \vtop{\hbox{\strut $-3\sin2\alpha^*\cos2\psi^*$}\hbox{\strut $+\sin2\alpha^*\cos2\delta^*\cos2\psi^*$}\hbox{\strut $-4\cos2\alpha^*\sin\delta^*\sin2\psi^*$}} & $3\sin2\alpha^*(1+\cos2\delta^*)$ & $s_{22}^{Im}$ & $-2\sqrt{5\over\pi}$ \\ \hline \end{tabular} Model coefficients and angular terms corresponding to the terms in Equations (\ref{eqn:sheareqn}) and (\ref{eqn:E2eqn}). 
See Equation (\ref{eqn:fullshear}) for the full shear vector field and Equation (\ref{eqn:E2}) for the full spheroidal quadrupolar vector field. \end{minipage} \end{table*} \section{Data Analysis Methods}\label{sec:methods} We employ the \citet{titov13} proper motion measurements of 429 radio sources to examine the isotropy of the Hubble expansion. The data were obtained from sessions of the permanent geodetic and astrometric VLBI program, which includes the Very Long Baseline Array\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}, at 8.4 GHz in 1990--2013 using a relaxed per-session no-net rotation constraint and an iterative process to reject objects with large intrinsic or spurious proper motions \citep{titov13}. Objects in this catalog can show large intrinsic proper motions due to plasmon ejection in jets or due to core shift effects, but these motions are uncorrelated between objects; they simply add intrinsic proper motion noise to any correlated global signals. \citet{titov11} first detected the secular aberration drift quasar proper motion signature induced by the barycenter acceleration about the Galactic Center, which was later confirmed by \citet{xu12} and refined by \citet{titov13}. The signature is an E-mode (curl-free) proper motion dipole with apex at the Galactic Center. In order to measure or constrain the E-mode quadrupolar anisotropy signal, we subtract the dipole proper motion pattern from the observed proper motion vector field, but employ the \citet{reid14} results obtained from trigonometric parallaxes and proper motions of masers associated with young massive stars: they obtain a Galactic Center distance of $R_0=8.34\pm0.16$~kpc and a rotation speed at $R_0$ of $\Theta_0=240\pm8$~km~s$^{-1}$. 
Since the relevant quantity for aberration drift is the solar acceleration about the Galactic Center, we use the actual solar orbital motion that includes the solar motion in the direction of the Galactic rotation, $\Theta_0+V_\odot=255.2\pm5.1$~km~s$^{-1}$ \citep{reid14}, which yields an acceleration of $0.80\pm0.04$~cm~s$^{-1}$~yr$^{-1}$ and a dipole amplitude of $5.5\pm0.2$~$\mu$as~yr$^{-1}$. This dipole in the \citet{titov13} notation is $\vec{d} = (d_1,d_2,d_3) = (-0.300\pm0.013,-4.80\pm0.21,-2.66\pm0.12)$~$\mu$as~yr$^{-1}$; in the \citet{mignard12} notation, it is $(s_{10},s_{11}^{Re},s_{11}^{Im}) = (-7.71\pm0.34,+0.615\pm0.027,-9.82\pm0.44)$~$\mu$as~yr$^{-1}$. We assume that the acceleration direction is exactly toward the Galactic Center and do not include the out-of-the-disk acceleration described by \citet{xu12} in our correction (following \citealt{darling13}). \citet{titov13} do not confirm the acceleration detected by \citet{xu12}. The derived dipole has substantially smaller errors than the \citet{titov13} measurement, and when we subtract this dipole from the vector field, we introduce negligible statistical errors compared to the proper motion uncertainty in individual objects. Although dipole and quadrupole signals are in principle orthogonal, covariance between different-degree vector spherical harmonics does exist \citep[e.g.,][]{titovmalkin09,titov13}, so subtraction of the best-measured dipole --- using completely independent observations --- is appropriate before measuring the E-mode quadrupole. After subtracting the dipole signal from the quasar proper motion catalog, we perform a least-squares minimization fit of the observed proper motions to the anisotropy model described by Equations (\ref{eqn:sheareqn}) and (\ref{eqn:fullshear}) and Table \ref{tab:shear_theory}. 
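The acceleration and dipole amplitude quoted above can be reproduced from the \citet{reid14} values (a sketch assuming standard constants):

```python
import math

Theta = 255.2e3          # m/s, solar speed about the Galactic Center
R0 = 8.34 * 3.0857e19    # m, Galactic Center distance (kpc -> m)
c = 2.998e8              # m/s, speed of light
yr_s = 3.1557e7          # seconds per Julian year
uas_per_rad = math.degrees(1.0) * 3600e6  # microarcseconds per radian

# Centripetal acceleration of the solar system barycenter
a = Theta**2 / R0                    # m/s^2
a_cm_yr = a * yr_s * 100.0           # ~0.80 cm/s/yr

# Aberration drift dipole amplitude: a/c as an angular rate
dipole = a / c * yr_s * uas_per_rad  # ~5.5 uas/yr
```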
The free parameters are the rotation angles between the equatorial reference frame and the anisotropy frame, $\alpha^*$, $\delta^*$, and $\psi^*$, and two of the shear parameters describing the anisotropy, $\Sigma_{x'}$ and $\Sigma_{y'}$. The third shear parameter, $\Sigma_{z'}$, is determined by the expansion conservation condition (Section \ref{sec:theory}). The Hubble constant is held fixed at its assumed value. Unless genuine significant anisotropy is detected, this model does not constrain $H_\circ$, which acts as a scaling amplitude for the streaming proper motions ($H_\circ = 15.2$~$\mu$as~yr$^{-1}$). We also fit the spheroidal quadrupole vector spherical harmonics for comparison. \section{Results}\label{sec:results} Table \ref{tab:shear_meas} shows the measured anisotropy based on the least-squares fitting of the proper motion catalog to the anisotropy model. Table \ref{tab:E2quad_meas} shows the more general vector spherical harmonic parameters for a spheroidal quadrupole proper motion vector field. None of the shear parameters or quadrupole vector spherical harmonic coefficients are significant; all are consistent with zero, indicating an isotropic Hubble expansion. The largest deviation from isotropy is $-19\%\pm7\%$, and the largest positive deviation is $+17\%\pm7\%$ (neither significant). The smallest deviation (and thus the best anisotropy constraint) is $+2\%\pm7\%$. The anisotropy of the Hubble expansion in the epoch of dark energy is thus less than 7\% ($1\,\sigma$) in the best-constrained direction. \begin{table} \caption{Measured Expansion Anisotropy} \label{tab:shear_meas} \begin{tabular}{@{}cccccc@{}} \hline $\Sigma_{x'}$ & $\Sigma_{y'}$ & $\Sigma_{z'}$ & $\alpha^*$($^\circ$) & $\delta^*$($^\circ$) & $\psi^*$($^\circ$) \\ \hline 0.17(7) & $-$0.19(7) & 0.02(7) & 193(15) & 47(26) & $-$2(21)\\ \hline \end{tabular} \smallskip The expansion shear terms $\Sigma_{i'}$ indicate the fractional departure from the average Hubble expansion (Equation (\ref{eqn:sheardefn})). 
The $\hat{x}'$ axis lies in the $(\alpha^*,\delta^*)$ direction, and the $\hat{y}'$ and $\hat{z}'$ axes are rotated about the $\hat{x}'$ axis by $\psi^*$; their coordinates are listed generally in Equations (\ref{eqn:yaxis}) and (\ref{eqn:zaxis}) and as measured in Section \ref{sec:results}. Parenthetical values are $1\,\sigma$ uncertainties on the final digit(s). \end{table} \begin{table} \caption{Measured Spheroidal Quadrupole} \label{tab:E2quad_meas} \begin{tabular}{@{}cccccc@{}} \hline $s_{20}$ & $s_{21}^{Re}$ & $s_{21}^{Im}$ & $s_{22}^{Re}$ & $s_{22}^{Im}$ & $\sqrt{P_2^s}$\\ \hline 3.0(2.5) & 1.7(1.4) & $-$0.5(1.7) & 3.1(1.4) & $-$1.4(1.3) & 6.2(2.1)\\ \hline \end{tabular} \smallskip The quadrupolar ($\ell=2$) spheroidal vector spherical harmonic coefficients $s_{2m}$ and the total power in the $\ell=2$ curl-free order $P_2^s$ follow the \citet{mignard12} conventions. Parenthetical values are $1\,\sigma$ uncertainties. The units are $\mu$as~yr$^{-1}$. The Z-score of a one-sided significance test is 1.1, so the power in this mode is not significant. \end{table} Figure \ref{fig:shear} shows the model fit to the proper motion vector field for the 429 objects in the \citet{titov13} catalog. Positive deviation from the Hubble expansion ($\Sigma_{i'} > 0$) appears as an antipodal pair of convergent points, and negative deviation appears as an antipodal pair of divergent points. The equatorial coordinates of the best-fit (but not significant) anisotropy axes are: $(\alpha_{x'},\delta_{x'}) = (13^\circ\pm15^\circ,-47^\circ\pm26^\circ)$ and $(193^\circ\pm15^\circ,+47^\circ\pm26^\circ)$, $(\alpha_{y'},\delta_{y'}) = (102^\circ\pm24^\circ,+1^\circ\pm14^\circ)$ and $(282^\circ\pm24^\circ,-1^\circ\pm14^\circ)$, and $(\alpha_{z'},\delta_{z'}) = (11^\circ\pm33^\circ,+43^\circ\pm26^\circ)$ and $(191^\circ\pm33^\circ,-43^\circ\pm26^\circ)$. These results are insensitive to the initial parameter assumptions. 
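The total power entry in Table \ref{tab:E2quad_meas} follows from the fitted coefficients via the rotationally invariant combination $P_2^s = s_{20}^2 + 2\sum_{m=1}^{2}\left[(s_{2m}^{Re})^2+(s_{2m}^{Im})^2\right]$ (a quick numerical check):

```python
import math

# Fitted spheroidal quadrupole coefficients (uas/yr) from Table 3
s20, s21_re, s21_im, s22_re, s22_im = 3.0, 1.7, -0.5, 3.1, -1.4

# Rotationally invariant power in the l = 2 spheroidal (E-mode) harmonics
P2 = s20**2 + 2.0 * (s21_re**2 + s21_im**2 + s22_re**2 + s22_im**2)
amplitude = math.sqrt(P2)  # ~6.2 uas/yr
```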
While the anisotropy axes can be interchanged arbitrarily through rotations, the shear values and directions are stable best-fit solutions. The solution is likewise insensitive to the sometimes large intrinsic proper motions or large proper motion uncertainties of individual objects. Restricting the sample to the 358 objects with proper motions and proper motion uncertainties less than 100~$\mu$as~yr$^{-1}$ has a negligible impact on the model parameters or uncertainties listed in Table \ref{tab:shear_meas}. The power in the spheroidal vector spherical harmonics of degree $\ell$ is \begin{equation} P_\ell^s = s_{\ell0}^2+2 \sum_{m=1}^\ell \left(\left(s_{\ell m}^{Re}\right)^2+\left(s_{\ell m}^{Im}\right)^2\right), \label{eqn:power} \end{equation} a scalar under coordinate rotation \citep{mignard12}. For the $\ell=2$ fit in Table \ref{tab:E2quad_meas}, the power is $\sqrt{P_2^s} = 6.2\pm2.1$~$\mu$as~yr$^{-1}$. The Z-score of a one-sided significance test of the mean-error-reduced power \citep[][Equations (85) and (87)]{mignard12} is 1.1, so the power in this mode is not significant. \begin{figure} \includegraphics[width=0.48\textwidth]{f1.ps} \caption{ The best-fit model anisotropy vector field in equatorial coordinates. The model parameters and uncertainties are listed in Tables \ref{tab:shear_meas} and \ref{tab:E2quad_meas}. The measured anisotropy is not significant, either by individual parameter or in total power. The high and low deviations from the average Hubble expansion are indicated as converging ($\Sigma_{x'}=0.17\pm0.07$) and diverging ($\Sigma_{y'}=-0.19\pm0.07$) loci, including error bars. 
The upper right scale bar indicates the amplitude of the vectors, the dotted line shows the Galactic plane, the solid circle indicates the Galactic Center, the open circles mark the Galactic poles, and the crosses indicate the $\hat{z}'$ direction.} \label{fig:shear} \end{figure} \section{Discussion} While the Bianchi \rm{I} metric model assumed for the data analysis is specific, the fitting function is a general $\ell=2$ spheroidal vector spherical harmonic; the model plays a role in specific parameter estimation, but the vector field sky pattern is generic. One can therefore adapt the fit parameters to different cosmological models, and one can fit additional vector spherical harmonic terms \citep[e.g.,][]{mignard12,titov13}. Figure \ref{fig:shear} shows a remarkable (but non-significant) alignment of the fit anisotropy axes with the Galactic plane-equator intersection. This could be driven by a vertical spheroidal dipole component induced by vertical (out-of-the-plane) solar acceleration \citep[a second aberration drift;][]{xu12}. A dipole fit to the proper motion vector field (after subtracting the galactocentric acceleration described in Section \ref{sec:methods}) is not significant and does not point toward the $\hat{x}'$ axis (the points of convergence). Since the anisotropy is neither significant in total power nor in individual parameters, this surprising alignment seems to be coincidental and should not be over-interpreted. The main limitation to this technique is the proper motion precision and sample size. The overall proper motion precision of individual objects will improve with time, as will the sample size of ICRF ``defining sources,'' but these will be slow, secular improvements. The next order-of-magnitude improvement will be provided by the Gaia mission. Gaia is an optical astrometry mission that will measure 500,000 quasar proper motions with $\sim$80~$\mu$as astrometry for $V=18$ mag stars \citep{debruijne05}. 
Unlike radio sources, compact optical extragalactic sources do not show significant internal intrinsic proper motions, so Gaia proper motion catalogs will not exhibit the uncorrelated proper motion signals that contaminate radio measurements. We estimate that the Gaia mission will constrain anisotropy below 1\%. \section{Conclusions} We have demonstrated how anisotropic Hubble expansion can be measured or constrained using extragalactic proper motions, and we applied this technique to the best current proper motion catalog \citep{titov13} to place a new constraint on the isotropy of the Hubble expansion and thus on the isotropy of dark energy. No significant anisotropy was detected; the Hubble expansion is isotropic to 7\% ($1\,\sigma$), corresponding to streaming motions of 1~$\mu$as~yr$^{-1}$, in the best-constrained directions ($-$19\% and +17\% in the least-constrained directions) and does not significantly deviate from isotropy in any direction. The Gaia mission, which is expected to obtain proper motions for 500,000 quasars, will likely constrain the anisotropy below 1\%. \section*{Acknowledgments} The author thanks the anonymous referee for helpful comments and O. Titov and S. B. Lambert for making their proper motion catalog publicly available. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published in A\&AS 143, 23. This research has made use of NASA's Astrophysics Data System Bibliographic Services.
\section{Introduction} The aim of this paper is to introduce the notion of $\imath$crystals associated to a quantum symmetric pair $(\U,\Ui)$ of certain quasi-split type. This study is motivated by the theory of canonical bases for quantum symmetric pairs (also known as the theory of $\imath$canonical bases) initiated by Bao and Wang in \cite{BW18a} and developed in \cite{BW18,BW21}. Let $(I,I_\bullet,\tau)$ be a Satake diagram of symmetrizable Kac-Moody type (also known as an admissible pair). Namely, $I$ is a Dynkin diagram of symmetrizable Kac-Moody type, $I_\bullet \subset I$ is a subdiagram of finite type, and $\tau \in \Aut(I)$ is a diagram automorphism of order at most two, satisfying certain axioms. From our Dynkin diagram $I$, we can construct the quantum group $\U = \U_I = U_q(I)$. It has a set of generators $E_i,F_i,K_h$, $i \in I$, $h \in Y$, where $Y$ denotes the coroot lattice. In \cite{Ko14}, Kolb defined a right coideal subalgebra $\Ui = \Ui_{\bfvarsigma,\bfkappa} \subset \U$ in terms of $I_\bullet$ and $\tau$, where $\bfvarsigma = (\varsigma_i)_{i \in I} \in (\C(q)^\times)^I$ and $\bfkappa = (\kappa_i)_{i \in I} \in \C(q)^I$ are parameters. To be a bit more precise, $\Ui$ is generated by the quantum group $\U_{I_\bullet}$ associated to the subdiagram $I_\bullet$, $K_h$ for $h \in Y^\imath := \{ h \in Y \mid w_\bullet \tau(h) = -h \}$, and distinguished elements $B_i$ for $i \in I {\setminus} I_\bullet$. The pair $(\U,\Ui)$ is called the quantum symmetric pair associated to our Satake diagram $(I,I_\bullet,\tau)$ and parameters $\bfvarsigma,\bfkappa$. The coideal subalgebra $\Ui$ itself is referred to as the $\imath$quantum group. Kolb's construction generalizes Letzter's construction \cite{Le99} for $I$ of finite type, which unifies earlier examples constructed by Koornwinder \cite{Ko90}, Gavrilik-Klimyk \cite{GK91}, Noumi \cite{N96} and others. 
It has turned out that the theory of quantum symmetric pairs has applications in numerous areas of mathematics and physics, such as representation theory of Lie algebras, orthogonal polynomials, low-dimensional topology, categorifications, and integrable systems. Such applications are often based on the fact that the $\imath$quantum group $\Ui$ can be thought of as a generalization of the quantum group. To be more precise, when we take a Satake diagram of diagonal type, the embedding $\Ui \hookrightarrow \U$ can be identified with the comultiplication map $\Delta : \U_{I'} \rightarrow \U_{I'} \otimes \U_{I'}$ for some Dynkin diagram $I'$ (see Example \ref{set up for diagonal type} for details). From this point of view, the theory of $\imath$canonical bases is thought of as a generalization of the theory of canonical bases for quantum groups initiated by Lusztig \cite{L90}. Namely, the theory of $\imath$canonical bases for diagonal types recovers the usual theory of canonical bases. Let us see this in more detail. Let $X = X_I$ denote the weight lattice, and set $X^\imath := X/\{ \lm+w_\bullet \tau(\lm) \mid \lm \in X \}$, where $w_\bullet$ denotes the longest element of the Weyl group associated to $I_\bullet$ (recall that $I_\bullet$ is of finite type). Then, the modified $\imath$quantum group is defined as $$ \Uidot := \bigoplus_{\zeta \in X^\imath} \Ui \mathbf{1}_\zeta. $$ Here, $\mathbf{1}_\zeta$'s are orthogonal idempotents. When our Satake diagram is of diagonal type, we can identify $X$ with $X_{I'} \oplus X_{I'}$ and $X^\imath$ with $X_{I'}$, and hence, the modified $\imath$quantum group is identified with the modified quantum group $\Udot_{I'} = \bigoplus_{\lm \in X_{I'}} \U_{I'} \mathbf{1}_\lm$ (recall that we have identified $\U = \U_{I'} \otimes \U_{I'}$ and $\Ui = \Delta(\U_{I'}) \simeq \U_{I'}$). 
For each dominant integral weight $\lm \in X^+$, let $V(\lm)$ (resp., $V^{\rm{low}}(\lm)$) denote the irreducible highest weight module of highest weight $\lm$, and $v_\lm \in V(\lm)$ the highest weight vector (resp., irreducible lowest weight module of lowest weight $-\lm$, and $v^{\rm{low}}_\lm$ the lowest weight vector). Given $\lm,\mu \in X^+$, let $V^\imath(\lm,\mu)$ denote the $\U$-submodule of $V(\lm) \otimes V(\mu)$ generated by $v_{w_\bullet(\lm)} \otimes v_\mu$, where $v_{w_\bullet(\lm)} \in V(\lm)$ denotes the canonical basis element of weight $w_\bullet(\lm)$. Then, Bao and Wang \cite{BW18,BW21} proved that $V^\imath(\lm,\mu)$ has a distinguished basis, which they called the $\imath$canonical basis, of the form $$ \bfB^\imath(\lm,\mu) = \{ (b_1 \diamond b_2)^\imath_{w_\bullet(\lm),\mu} \mid b_1 \in \bfB_{I_\bullet},\ b_2 \in \bfB \} {\setminus} \{0\}. $$ Here, $\bfB = \bfB_I$ denotes the canonical basis of the Lusztig algebra $\bff$ associated to $I$. Also, they constructed a projective system of $\Ui$-modules $$ V^\imath(\lm+\tau(\nu),\mu+\nu) \rightarrow V^\imath(\lm,\mu) $$ which sends $v_{w_\bullet(\lm+\tau(\nu))} \otimes v_{\mu+\nu}$ to $v_{w_\bullet(\lm)} \otimes v_\mu$. Although it is not always true that this morphism is based (i.e., it sends an $\imath$canonical basis element to either an $\imath$canonical basis element or zero), Bao and Wang were able to prove that it is asymptotically true. Namely, given $b_1 \in \bfB_{I_\bullet}$, $b_2 \in \bfB$, if we take $\lm,\mu \in X^+$ to be sufficiently dominant, then the $\imath$canonical basis element $(b_1 \diamond b_2)^\imath_{w_\bullet(\lm+\tau(\nu)),\mu+\nu}$ is sent to $(b_1 \diamond b_2)^\imath_{w_\bullet(\lm),\mu}$ for all $\nu \in X^+$. Furthermore, they constructed the $\imath$canonical basis $\bfBidot$ of $\Uidot$ as an asymptotic limit of this projective system (see Theorem \ref{asymptotical limit} for the precise meaning). 
When our Satake diagram is of diagonal type, the $\U$-module $V^\imath(\lm,\mu)$ is just the tensor product $V^{\rm{low}}(\lm') \otimes V(\mu')$ for some $\lm',\mu' \in X_{I'}^+$, and its $\imath$canonical basis coincides with the canonical basis of this tensor product regarded as a $\U_{I'}$-module. In this case, the morphisms in the projective system are based. Thus, we are led to investigate why and to what extent this property fails for quantum symmetric pairs beyond diagonal type. In this paper, we shall take a crystal-theoretic approach. Namely, we focus on the projective system at $q = \infty$. Until the end of this Introduction, assume that our Satake diagram is quasi-split (i.e., $I_\bullet = \emptyset$), and that the Cartan matrix $(a_{i,j})_{i,j \in I}$ satisfies $a_{i,\tau(i)} \in \{ 2,0,-1 \}$ for all $i \in I$. Note that every Satake diagram of diagonal type satisfies this assumption. Earlier works \cite{W17, W21b, W21c} of the author suggest that there is a good combinatorial theory which carries much information about the representation theory of the $\imath$quantum group associated to our Satake diagram and special parameters $\bfvarsigma,\bfkappa$. In this paper, for each $i \in I$, we define a linear operator $\Btil_i$ acting on certain $\Ui$-modules which modifies the action of $B_i$ (recall that $B_i$ is one of the generators of $\Ui$). Such an operator $\Btil_i$ for $i \in I$ with $a_{i,\tau(i)} = 2$ (resp., $a_{i,\tau(i)} = 0$) has been introduced in \cite{W21b} (resp., \cite{W17}). The $\Btil_i$ for $i \in I$ with $a_{i,\tau(i)} = -1$ is new, and it generalizes the operator introduced in \cite{W17}. As with the Kashiwara operators in the representation theory of quantum groups, our operators $\Btil_i$ are defined in such a way that we can take the crystal limit, i.e., the $q \rightarrow \infty$ limit. When our Satake diagram is of diagonal type, the $\Btil_i$ coincides with a Kashiwara operator for $\U_{I'}$.
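For orientation, the three allowed values of $a_{i,\tau(i)}$ can be read off from the relative position of $i$ and $\tau(i)$ in the Dynkin diagram; the following case description is a sketch based on general properties of Cartan matrices and diagram automorphisms, not a statement quoted from elsewhere in this paper: $$ a_{i,\tau(i)} = \begin{cases} 2 & \IF \tau(i) = i, \\ 0 & \IF \tau(i) \neq i \text{ and } i, \tau(i) \text{ are non-adjacent}, \\ -1 & \IF \tau(i) \neq i \text{ and } i, \tau(i) \text{ are adjacent (simply laced)}. \end{cases} $$ In particular, for diagonal type the vertices $i$ and $\tau(i)$ lie in different copies of $I'$, so $a_{i,\tau(i)} = 0$ always holds, consistent with the assumption above.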
Then, we investigate how the operators $\Btil_i$ act on the tensor product of a $\Ui$-module on which the $\Btil_i$'s are defined, and a $\U$-module on which Kashiwara operators are defined (recall that $\Ui$ is a right coideal of $\U$). Taking the crystal limit, we obtain a combinatorial tensor product rule for $\Btil_i$. As a special case, let us consider the tensor product of the trivial $\Ui$-module and a $\U$-module $M$. Such a $\Ui$-module is canonically identified with $M$ regarded as a $\Ui$-module by restriction. Then, the tensor product rule for $\Btil_i$ gives a $\Ui$-module structure of $M$ at $q = \infty$. When our Satake diagram is of diagonal type, the $\U$-module $M$ is of the form $L \otimes N$ for some $\U_{I'}$-modules $L,N$. Then, $\Btil_i$ on $M = L \otimes N$ at $q = \infty$ coincides with the tensor product rule for a Kashiwara operator for $\U_{I'}$. Following the theory of crystals, we introduce the notion of $\imath$crystals, which abstracts the operators $\Btil_i$ at $q = \infty$. An $\imath$crystal is a set $\clB$ equipped with several structure maps including linear operators $\Btil_i \in \End_{\C}(\C \clB)$, $i \in I$. We prove that given an $\imath$crystal $\clB_1$ and a crystal $\clB_2$ satisfying certain conditions, the tensor product $\clB_1 \otimes \clB_2 := \clB_1 \times \clB_2$ of them has an $\imath$crystal structure. Taking $\clB_1$ to be ``the trivial $\imath$crystal'', we obtain a way of making a crystal into an $\imath$crystal. As a result, it turns out that the $\Ui$-module structure of a $\U$-module $M$ at $q = \infty$ is described by the crystal basis of $M$ regarded as an $\imath$crystal via this method. Let us return to the projective system $\{ V^\imath(\lm,\mu) \}_{\lm,\mu \in X^+}$. Under the assumption on our Satake diagram, the $\U$-module $V^\imath(\lm,\mu)$ is isomorphic to the irreducible highest weight module $V(\lm+\mu)$.
Hence, the projective system becomes $\{ V(\lm) \}_{\lm \in X^+}$ with morphisms $V(\lm+\nu+\tau(\nu)) \rightarrow V(\lm)$ sending $v_{\lm+\nu+\tau(\nu)}$ to $v_\lm$. Now, we are able to answer our question of why and to what extent the morphisms $V(\lm+\nu+\tau(\nu)) \rightarrow V(\lm)$ fail to be based. That is, when $\lm$ is not sufficiently dominant, there is no morphism $\clB(\lm+\nu+\tau(\nu)) \rightarrow \clB(\lm)$ of $\imath$crystals sending the highest weight element $b_{\lm+\nu+\tau(\nu)}$ to the highest weight element $b_\lm$, where $\clB(\lm)$ denotes the crystal basis of $V(\lm)$. Via a further observation, it turns out that there exists a dominant weight $\sigma \in X^+$ such that for each $\lm,\nu \in X^+$, there exists a morphism $\clB(\sigma+\lm+\nu+\tau(\nu);\lm+\nu+\tau(\nu)) \rightarrow \clB(\sigma+\lm;\lm)$ of $\imath$crystals sending $b_{\sigma+\lm+\nu+\tau(\nu)}$ to $b_{\sigma+\lm}$, where $$ \clB(\lm;\mu) := \{ \Ftil_{i_1} \cdots \Ftil_{i_r} b_\lm \mid \Ftil_{i_1} \cdots \Ftil_{i_r} b_\mu \in \clB(\mu) \} {\setminus} \{0\} \subset \clB(\lm). $$ Also, for each $\lm \in X^+$, there exists an $\imath$crystal $\clB(\lm)^\sigma$ isomorphic to $\clB(\sigma+\lm;\lm)$ whose underlying set is $\clB(\lm)$. Thus, we obtain a projective system $\{ \clB(\lm)^\sigma \}_{\lm \in X^+, \ol{\sigma+\lm} = \zeta}$ of $\imath$crystals for each $\zeta \in X^\imath$, where $\ol{\lm} \in X^\imath$ denotes the image of $\lm \in X$. This projective system has a projective limit of the form $$ \clT_{\zeta} \otimes \clB(\infty), $$ where $\clT_\zeta$ is a certain $\imath$crystal consisting of a single element, and $\clB(\infty)$ denotes the crystal basis of the negative part $\U^-$ of $\U$. Then, we lift these results to the representation theory of $\imath$quantum groups. Namely, for each $\lm \in X^+$, we construct a $\Ui$-module $V(\lm)^\sigma$ whose underlying set is $V(\lm)$.
The $V(\lm)^\sigma$ has a distinguished basis, called the $\imath$canonical basis, whose crystal limit is the $\imath$crystal $\clB(\lm)^\sigma$. Furthermore, there exists a based $\Ui$-module homomorphism $V(\lm+\nu+\tau(\nu))^\sigma \rightarrow V(\lm)^\sigma$ which sends $v_{\lm+\nu+\tau(\nu)}$ to $v_\lm$. Thus, we obtain a projective system $\{ V(\lm)^\sigma \}_{\lm \in X^+, \ol{\sigma+\lm} = \zeta}$ of $\Ui$-modules and based homomorphisms. It turns out that this projective system has a projective limit, namely $\Uidot \mathbf{1}_\zeta$ with the $\imath$canonical basis $\bfBidot \mathbf{1}_\zeta$. In view of the results above, it is natural to call the $\imath$crystal $\bigsqcup_{\zeta \in X^\imath} \clT_\zeta \otimes \clB(\infty)$ the $\imath$crystal basis of $\Uidot$, and to denote it by $\clBidot$. The description $\clBidot = \bigsqcup_{\zeta \in X^\imath} \clT_{\zeta} \otimes \clB(\infty)$ can be interpreted as the crystal limit of the description $\Uidot = \bigoplus_{\zeta \in X^\imath} \Ui \mathbf{1}_\zeta$ since $\clT_\zeta$ consists of a single element, and $\clB(\infty)$ can be seen as the crystal limit of $\U^-$, which is isomorphic to $\Ui$ as a vector space. When our Satake diagram is of diagonal type, we can take $\sigma = 0$, and can identify $\clB(\infty) = \clB(-\infty)_{I'} \otimes \clB(\infty)_{I'}$, where $\clB(-\infty)_I$ denotes the crystal basis of the positive part of $\U$.
Hence, we recover the projective system $\{ V^{\rm{low}}(\lm) \otimes V(\mu) \}_{\lm,\mu \in X_{I'}^+, -\lm+\mu = \zeta}$ of $\U_{I'}$-modules and based homomorphisms, whose projective limit is $\Udot_{I'} \mathbf{1}_\zeta$, and the projective system $\{ \clB^{\rm{low}}(\lm) \otimes \clB(\mu) \}_{\lm,\mu \in X_{I'}^+, -\lm+\mu=\zeta}$ of crystals, whose projective limit is $\clB(-\infty)_{I'} \otimes \clT_\zeta \otimes \clB(\infty)_{I'}$, where $\clT_\zeta$ denotes a certain crystal consisting of a single element, and $\clB^{\rm{low}}(\lm)$ the crystal basis of $V^{\rm{low}}(\lm)$. As explained above, our construction of the new projective system $\{ V(\lm)^\sigma \}_{\lm \in X^+}$ is motivated by an observation of the $\imath$crystal structures of various $\U$-modules. However, the construction itself may be possible without the theory of $\imath$crystals and our assumption on Satake diagrams and parameters $\bfvarsigma,\bfkappa$. We will treat this in a future work. This paper is organized as follows. In Section \ref{Section: quantum groups and crystals}, we recall necessary knowledge concerning ordinary quantum groups and crystals. In Section \ref{Section: iquantum groups and icrystals}, we set up an $\imath$quantum group of quasi-split type, and recall Bao-Wang's construction of the projective system and the $\imath$canonical basis of the modified $\imath$quantum group. Also, we define the notion of $\imath$crystals and their morphisms. Basic examples of $\imath$crystals are given there. Section \ref{Section: modified action of Bi} is devoted to defining the operators $\Btil_i$ acting on certain $\Ui$-modules. The tensor product rule is also investigated. Based on the results obtained there, we define the tensor product of an $\imath$crystal and a crystal in Section \ref{Section: tensor product rule for icrystal}. The associativity of this tensor product is stated there, too. 
The proofs of the well-definedness and the associativity of the tensor product are given in Section \ref{Section: proofs} because they are lengthy and independent of the later arguments. In Section \ref{Section: stability}, we construct the projective system of $\imath$crystals and its projective limit mentioned earlier. Then, we finally construct our new projective system of $\Ui$-modules and based homomorphisms. \subsection*{Acknowledgement} This work was supported by JSPS KAKENHI Grant Numbers JP20K14286 and JP21J00013. \subsection*{Notation} Throughout this paper, we use the following notation: \begin{itemize} \item $\bbK = \C(q)$: the field of rational functions in one variable $q$. \item $\bbK_\infty$: the subring of $\bbK$ consisting of functions regular at $q = \infty$. \item $\bfA := \C[q,q\inv]$. \item For $a,m,n \in \Z$, $[n]_{q^a} := \frac{q^{an}-q^{-an}}{q^a-q^{-a}}$, $[n]_{q^a}! := \prod_{k=1}^n [k]_{q^a}$, ${m+n \brack n}_{q^a} := \frac{[m+n]_{q^a}!}{[m]_{q^a}![n]_{q^a}!}$. \item $\ol{n} \in \Z/2\Z$: the image of $n \in \Z$ under the quotient map $\Z \rightarrow \Z/2\Z$. \item For $n \in \Z$, $\sgn(n) := \begin{cases} 1 & \IF n > 0, \\ 0 & \IF n = 0, \\ -1 & \IF n < 0. \end{cases}$ \end{itemize} \section{Proofs}\label{Section: proofs} In this section, we give proofs of Propositions \ref{tensor product of icrystal and crystal} and \ref{associativity for icrystal}, and complete our argument. \subsection{Proof of Proposition \ref{tensor product of icrystal and crystal}}\label{Subsection: proof of tensor product rule} Let $b_1 \in \clB_1$, $b_2 \in \clB_2$, and $i \in I$. Set $b := b_1 \otimes b_2$, $\beta_i := \beta_i(b_1)$, $\wti := \wti(b_1)$, $\wti_i := \wti_i(b_1)$, $\vep_i := \vep_i(b_2)$, $\vphi_i := \vphi_i(b_2)$, $\wt := \wt(b_2)$, $\wt_i := \wt_i(b_2)$. \begin{lem}\label{estimate} Suppose that $a_{i,\tau(i)} = -1$.
Then, the following hold: \begin{enumerate} \item If $B_i(b) = \beta_{\tau(i)}$, then $$ \beta_{\tau(i)}(b)+\wti_i(b)-s_i = \max(E_i(b)-1, B_i(b), F_i(b))+\wti_i-s_i-\wt_{\tau(i)}. $$ \item If $B_i(b) \neq \beta_{\tau(i)}$, then $$ \beta_{\tau(i)}(b)+\wti_i(b)-s_i = \max(E_i(b)-1, B_i(b)-1, F_i(b))+\wti_i-s_i-\wt_{\tau(i)}. $$ \end{enumerate} \end{lem} \begin{proof} The assertions follow from the definitions and a direct calculation. \end{proof} First, we confirm Definition \ref{Def: icrystal} \eqref{Def: icrystal 2}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. By the definition of $\Btil_i b$, we see that $b'$ is either $b_1 \otimes \Ftil_i b_2$, $b_1 \otimes \Etil_{\tau(i)} b_2$, or $b''_1 \otimes b_2$ for some $b''_1 \in \clB_1$ with $(\Btil_i b_1,b''_1) \neq 0$. Each element has weight $\wti(b)-\ol{\alpha_i}$. This confirms the axiom. Below, we confirm the remaining axioms. \subsubsection{When $a_{i,\tau(i)} = 2$} By the definition of $\beta_i(b)$, we have $\beta_i(b) \notin \Z$ if and only if $\beta_i,\vphi_i \notin \Z$. In this case, we have $\beta_i(b) = \beta_i-\wt_i \in \{ -\infty_\ev, -\infty_\odd \}$ and $$ \Btil_i b = \Btil_i b_1 \otimes b_2 = 0. $$ Hence, Definition \ref{Def: icrystal} \eqref{Def: icrystal 1} and \eqref{Def: icrystal 3a} are satisfied. \begin{enumerate} \item When $\beta_i \leq \vphi_i$ and $\ol{\beta_i} \neq \ol{\vphi_i}$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = \vep_i+1, \\ &\Btil_i b = b_1 \otimes \Ftil_i b_2. \end{split} \nonumber \end{align} Noting that $\wti_i = \ol{\beta_i + s_i}$ and $\wt_i = \vphi_i-\vep_i$, we see that $$ \wti_i(b) = \ol{\beta_i+s_i+\wt_i} = \ol{s_i-\vep_i+1} = \ol{\beta_i(b)+s_i}. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 3b}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Ftil_i b_2 = \Btil_i b$.
Since $\vphi_i(b'_2) = \vphi_i-1 \geq \beta_i$ (note that $\beta_i \leq \vphi_i$ and $\ol{\beta_i} \neq \ol{\vphi_i}$ implies $\beta_i < \vphi_i$) and $\ol{\vphi_i(b'_2)} \neq \ol{\vphi_i} \neq \ol{\beta_i}$, we obtain \begin{align} \begin{split} &\beta_i(b') = \vep_i(b'_2) = \vep_i+1 = \beta_i(b), \\ &\Btil_i b' = b'_1 \otimes \Etil_i b'_2 = b_1 \otimes b_2 = b. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}, \eqref{Def: icrystal 2.6}, and \eqref{Def: icrystal 3c}. \item When $\beta_i > \vphi_i$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = \beta_i-\wt_i, \\ &\Btil_i b = \Btil_i b_1 \otimes b_2. \end{split} \nonumber \end{align} Since $\wti_i = \ol{\beta_i+s_i}$, we obtain $$ \wti_i(b) = \wti_i + \ol{\wt_i} = \ol{\beta_i+s_i+\wt_i} = \ol{\beta_i(b)+s_i}. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 3b}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $(\Btil_i b_1,b'_1) \neq 0$, and $b'_2 = b_2$. Since $\beta_i(b'_1) = \beta_i$, we obtain $\beta_i(b'_1) > \vphi_i$. Hence, it follows that \begin{align} \begin{split} &\beta_i(b') = \beta_i(b'_1)-\wt_i(b'_2) = \beta_i-\wt_i = \beta_i(b), \\ &\Btil_i b' = \Btil_i b'_1 \otimes b'_2 = \Btil_i b'_1 \otimes b_2, \\ &(b, \Btil_i b') = (b_1, \Btil_i b'_1) = (\Btil_i b_1, b'_1) = (\Btil_i b, b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 3c}. Furthermore, if $\Btil_i b \in \clB$, we have $b'_1 = \Btil_i b_1$, and hence, $$ \Btil_i b' = \Btil_i b'_1 \otimes b_2 = b_1 \otimes b_2 = b. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}. \item When $\beta_i \leq \vphi_i$ and $\ol{\beta_i} = \ol{\vphi_i}$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = \vep_i, \\ &\Btil_i b = b_1 \otimes \Etil_i b_2. 
\end{split} \nonumber \end{align} Noting that $\wti_i = \ol{\beta_i + s_i}$ and $\wt_i = \vphi_i-\vep_i$, we see that $$ \wti_i(b) = \ol{\beta_i+s_i+\wt_i} = \ol{s_i-\vep_i} = \ol{\beta_i(b)+s_i}. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 3b}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Etil_i b_2 = \Btil_i b$. Since $\vphi_i(b'_2) = \vphi_i+1 > \beta_i$ and $\ol{\vphi_i(b'_2)} \neq \ol{\vphi_i} = \ol{\beta_i}$, we obtain \begin{align} \begin{split} &\beta_i(b') = \vep_i(b'_2) + 1 = \vep_i = \beta_i(b), \\ &\Btil_i b' = b'_1 \otimes \Ftil_i b'_2 = b_1 \otimes b_2 = b. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}, \eqref{Def: icrystal 2.6}, and \eqref{Def: icrystal 3c}. \end{enumerate} \subsubsection{When $a_{i,\tau(i)} = 0$} By the definition of $\beta_i(b)$, we have $\beta_i(b) \notin \Z$ if and only if $\vphi_i,\beta_i,\vphi_{\tau(i)} = -\infty$. In this case, we have $\beta_i(b) = \vep_{\tau(i)} = -\infty$ and $$ \Btil_i b = b_1 \otimes \Etil_{\tau(i)} b_2 = 0. $$ Hence, Definition \ref{Def: icrystal} \eqref{Def: icrystal 1} and \eqref{Def: icrystal 4a} are satisfied. Also, we have \begin{align} \begin{split} \beta_{\tau(i)}(b)+\wti_i(b) &= \max(\vphi_{\tau(i)}+\wti_{\tau(i)}-\wt_i, \beta_{\tau(i)}-\wt_i, \vep_i) + (\wti_i + \wt_i-\wt_{\tau(i)}) \\ &= \max(\vep_{\tau(i)}, \beta_i-\wt_{\tau(i)}, \vphi_i+\wti_i-\wt_{\tau(i)}) = \beta_i(b). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 4b}. \begin{enumerate} \item When $\vphi_i > \beta_{\tau(i)}, \vphi_{\tau(i)}-\wti_i$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = \vphi_i+\wti_i-\wt_{\tau(i)}, \\ &\Btil_i b = b_1 \otimes \Ftil_i b_2. \end{split} \nonumber \end{align} Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b, b') \neq 0$. Then, we have $b' = b_1 \otimes \Ftil_i b_2 = \Btil_i b$.
We compute as \begin{align} \begin{split} &\vphi_{\tau(i)}(b'_2) = \vphi_{\tau(i)} < \vphi_i+\wti_i, \\ &\beta_i(b'_1) = \beta_i = \beta_{\tau(i)}+\wti_i < \vphi_i+\wti_i, \\ &\vphi_i(b'_2)-\wti_{\tau(i)}(b'_1) = (\vphi_i-1)+\wti_i. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b'), B_{\tau(i)}(b') \leq E_{\tau(i)}(b')$, and hence, \begin{align} \begin{split} &\beta_{\tau(i)}(b') = \vep_i(b'_2) = \vep_i+1, \\ &\beta_i(b') = \beta_{\tau(i)}(b')+\wti_i(b') = (\vep_i+1)+(\wti_i+\wt_i-\wt_{\tau(i)}-2) = \beta_i(b)-1, \\ &\Btil_{\tau(i)} b' = b'_1 \otimes \Etil_i b'_2 = b_1 \otimes b_2 = b. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}, \eqref{Def: icrystal 2.6}, and \eqref{Def: icrystal 4c}. \item When $\vphi_i \leq \beta_{\tau(i)} > \vphi_{\tau(i)}-\wti_i$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = \beta_i-\wt_{\tau(i)}, \\ &\Btil_i b = \Btil_i b_1 \otimes b_2. \end{split} \nonumber \end{align} Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b, b') \neq 0$. Then, we have $b' = \Btil_i b_1 \otimes b_2 = \Btil_i b$. We compute as \begin{align} \begin{split} &\vphi_{\tau(i)}(b'_2) = \vphi_{\tau(i)} < \beta_{\tau(i)}+\wti_i = \beta_i, \\ &\beta_i(b'_1) = \beta_i-1, \\ &\vphi_i(b'_2)-\wti_{\tau(i)}(b'_1) = \vphi_i+(\wti_i-2) \leq \beta_{\tau(i)}+\wti_i-2 = \beta_i-2. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b') \leq B_{\tau(i)}(b') > E_{\tau(i)}(b')$, and hence, \begin{align} \begin{split} &\beta_{\tau(i)}(b') = \beta_{\tau(i)}(b'_1)-\wt_i(b'_2) = (\beta_{\tau(i)}+1)-\wt_i, \\ &\beta_i(b') = \beta_{\tau(i)}(b')+\wti_i(b') = (\beta_{\tau(i)}-\wt_i+1)+(\wti_i+\wt_i-\wt_{\tau(i)}-2) = \beta_i(b)-1, \\ &\Btil_{\tau(i)} b' = \Btil_{\tau(i)} b'_1 \otimes b'_2 = b_1 \otimes b_2 = b. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}, \eqref{Def: icrystal 2.6}, and \eqref{Def: icrystal 4c}. 
\item When $\vphi_i,\beta_{\tau(i)} \leq \vphi_{\tau(i)}-\wti_i$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = \vep_{\tau(i)}, \\ &\Btil_i b = b_1 \otimes \Etil_{\tau(i)} b_2. \end{split} \nonumber \end{align} Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b, b') \neq 0$. Then, we have $b' = b_1 \otimes \Etil_{\tau(i)} b_2 = \Btil_i b$. We compute as \begin{align} \begin{split} &\vphi_{\tau(i)}(b'_2) = \vphi_{\tau(i)}+1, \\ &\beta_i(b'_1) = \beta_i = \beta_{\tau(i)}+\wti_i \leq \vphi_{\tau(i)}, \\ &\vphi_i(b'_2)-\wti_{\tau(i)}(b'_1) = \vphi_i+\wti_i \leq \vphi_{\tau(i)}. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b') > B_{\tau(i)}(b'), E_{\tau(i)}(b')$, and hence, \begin{align} \begin{split} &\beta_{\tau(i)}(b') = \vphi_{\tau(i)}(b'_2)+\wti_{\tau(i)}(b'_1)-\wt_i(b'_2) = (\vphi_{\tau(i)}+1)-\wti_i-\wt_i, \\ &\beta_i(b') = \beta_{\tau(i)}(b')+\wti_i(b') = (\vphi_{\tau(i)}-\wti_i-\wt_i+1)+(\wti_i+\wt_i-\wt_{\tau(i)}-2) = \beta_i(b)-1, \\ &\Btil_{\tau(i)} b' = b'_1 \otimes \Ftil_{\tau(i)} b'_2 = b_1 \otimes b_2 = b. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}, \eqref{Def: icrystal 2.6}, and \eqref{Def: icrystal 4c}. \end{enumerate} \subsubsection{When $a_{i,\tau(i)} = -1$} By the definition of $\beta_i(b)$, we have $\beta_i(b) \notin \Z$ if and only if $\vphi_i,\beta_i,\vphi_{\tau(i)} = -\infty$. In this case, we have $\beta_i(b) = \vep_{\tau(i)} = -\infty$, and $$ \Btil_i b = b_1 \otimes \Etil_{\tau(i)} b_2 = 0. $$ Hence, Definition \ref{Def: icrystal} \eqref{Def: icrystal 1} and \eqref{Def: icrystal 5a} are satisfied. \begin{enumerate} \item When $B_i(b) < F_i(b) = E_i(b)+1$ and $\vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}+1$. 
In this case, we have \begin{align} \begin{split} &\beta_i(b) = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = (\vphi_{\tau(i)}+1)-\wt_{\tau(i)} = \vep_{\tau(i)}+1, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b), \\ &\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Ftil_i b_2 \notin \clB. \end{split} \nonumber \end{align} For the second line of the identity above, we used Lemma \ref{estimate}. This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, \eqref{Def: icrystal 5b}, and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Ftil_i b_2$. By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i-1 < F_i(b), \\ &B_i(b') = B_i(b) < F_i(b), \\ &E_i(b') = E_i(b)+1 = F_i(b), \end{split} \nonumber \end{align} and hence, \begin{align} \begin{split} &\beta_i(b') = E_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b') = \vep_{\tau(i)}(b'_2) = \vep_{\tau(i)} = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i-s_i-\wt_{\tau(i)}(b'_2) \neq \beta_i(b'). \end{split} \nonumber \end{align} For the last line of the identity above, we used Lemma \ref{estimate}. This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}+1, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}+\wti_i-s_i+1 \leq \beta_i+1 < \vphi_{\tau(i)}+2, \\ &E_{\tau(i)}(b') = (\vphi_i-1)+\wti_i-s_i+1 = \vphi_i+\wti_i-s_i = \vphi_{\tau(i)}+1, \\ &\beta_i(b'_1) = \beta_i < \vphi_{\tau(i)}+1, \\ &\Etil_i b'_2 = b_2 \neq 0, \\ &\vphi_{\tau(i)}(\Etil_i b'_2) = \vphi_{\tau(i)} = \vphi_{\tau(i)}(b'_2)-1. 
\end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}}(b'_1 \otimes \Etil_i b'_2 + b'_1 \otimes \Ftil_{\tau(i)} b'_2) = \frac{1}{\sqrt{2}}(b + b_1 \otimes \Ftil_{\tau(i)} b'_2), \end{split} \nonumber \end{align} which shows that $$ (\Btil_i b,b') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b'). $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}. \item When $F_i(b) \leq B_i(b) = E_i(b)+1$ and $\beta_i(\Btil_i b_1) = \beta_i-2$. In this case, we have \begin{align} \begin{split} &\beta_{\tau(i)} = B_i(b), \\ &\beta_i(b) = B_i(b)+\wti_i-s_i-\wt_{\tau(i)} = (\vphi_{\tau(i)}+1)-\wt_{\tau(i)} = \vep_{\tau(i)}+1, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = B_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b), \\ &\Btil_i b = \frac{1}{\sqrt{2}} \Btil_i b_1 \otimes b_2 \notin \clB. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, \eqref{Def: icrystal 5b}, and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = \Btil_i b_1 \otimes b_2$. By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i \leq B_i(b), \\ &B_i(b') = (\beta_i-2)-(\wti_i-3)+s_i = B_i(b)+1, \\ &E_i(b') = E_i(b)+3 = B_i(b)+2, \end{split} \nonumber \end{align} and hence, \begin{align} \begin{split} &\beta_i(b') = E_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)} = \vep_{\tau(i)}(b'_2) = \vep_{\tau(i)} = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i(b'_1)-s_i-\wt_{\tau(i)} \neq \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. 
Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}(b'_1)+(\wti_i-3)-s_i+1 = \beta_{\tau(i)}+\wti_i-s_i-1 = \vphi_{\tau(i)}, \\ &E_{\tau(i)}(b') = \vphi_i+(\wti_i-3)-s_i+1 = \vphi_i+\wti_i-s_i-2 \leq \vphi_{\tau(i)}-1, \\ &\beta_i(b'_1) = \beta_i-2 = \vphi_{\tau(i)}-1. \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}}(\Btil_{\tau(i)} b'_1 \otimes b'_2 + b'_1 \otimes \Ftil_{\tau(i)} b'_2) = \frac{1}{\sqrt{2}}(b + b'_1 \otimes \Ftil_{\tau(i)} b_2), \end{split} \nonumber \end{align} which shows that $$ (\Btil_i b,b') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b'). $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}. \item When $E_i(b) < F_i(b) = B_i(b) \neq \beta_{\tau(i)}$. Since $B_i(b) \neq \beta_{\tau(i)}$, we have $$ \Btil_i b_1 \in \clB_1, \qu \beta_i(\Btil_i b_1) = \beta_i-1, \qu \beta_{\tau(i)}(\Btil_i b_1) = \beta_{\tau(i)}+2 $$ if $\Btil_i b_1 \neq 0$. Also, we have \begin{align} \begin{split} &\beta_i(b) = B_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i-\wt_{\tau(i)}, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b), \\ &\Btil_i b = \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes b_2 + b_1 \otimes \Ftil_i b_2) \notin \clB. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, \eqref{Def: icrystal 5b}, and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = \Btil_i b_1 \otimes b_2$ or $b' = b'' = b''_1 \otimes b''_2 = b_1 \otimes \Ftil_i b_2$. 
By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i = B_i(b), \\ &B_i(b') = (\beta_i-1)-(\wti_i-3)+s_i = B_i(b)+2, \\ &E_i(b') = E_i(b)+3 \leq B_i(b)+2, \end{split} \nonumber \end{align} and hence, \begin{align}\label{id 1} \begin{split} &\beta_i(b') = B_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)} = \beta_i(b'_1)-\wt_{\tau(i)} = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (B_i(b')-1)+\wti_i(b'_1)-s_i-\wt_{\tau(i)} \neq \beta_i(b'). \end{split} \end{align} Also, we compute as \begin{align} \begin{split} &F_i(b'') = \vphi_i-1 = B_i(b)-1, \\ &B_i(b'') = B_i(b), \\ &E_i(b'') \leq E_i(b)+1 \leq B_i(b). \end{split} \nonumber \end{align} This shows that \begin{align}\label{id 2} \begin{split} &\beta_i(b'') = B_i(b'')+\wti_i-s_i-\wt_{\tau(i)}(b''_2) = \beta_i-\wt_{\tau(i)}(b''_2) = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b'')+\wti_i(b'')-s_i = (B_i(b'')-1)+\wti_i-s_i-\wt_{\tau(i)}(b''_2) \neq \beta_i(b''). \end{split} \end{align} Identities \eqref{id 1} and \eqref{id 2} confirm Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)} < \beta_i, \\ &B_{\tau(i)}(b') = (\beta_{\tau(i)}+2)+(\wti_i-3)-s_i+1 = \beta_i-1, \\ &E_{\tau(i)}(b') = \vphi_i+\wti_i-s_i-2 = \beta_i-2, \\ &\Btil_{\tau(i)} b'_1 = b_1 \in \clB_1, \\ &\beta_{\tau(i)}(\Btil_{\tau(i)} b'_1) = \beta_{\tau(i)} = \beta_{\tau(i)}(b'_1)-2. \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}} \Btil_{\tau(i)} b'_1 \otimes b'_2 = \frac{1}{\sqrt{2}}b. \end{split} \nonumber \end{align} The last identity shows that \begin{align}\label{id 3} (\Btil_i b,b') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b'). 
\end{align} Also, we have \begin{align} \begin{split} &F_{\tau(i)}(b'') \leq \vphi_{\tau(i)}+1 < \beta_i+1, \\ &B_{\tau(i)}(b'') = \beta_{\tau(i)}+\wti_i-s_i+1 = \beta_i, \\ &E_{\tau(i)}(b'') = (\vphi_i-1)+\wti_i-s_i+1 = \beta_i, \\ &\beta_i(b''_1) = \beta_i, \\ &\Etil_i b''_2 = b_2 \neq 0, \\ &\vphi_{\tau(i)}(\Etil_i b''_2) = \vphi_{\tau(i)} < \beta_i. \end{split} \nonumber \end{align} This shows that \begin{align} \begin{split} &\Btil_{\tau(i)} b'' = \frac{1}{\sqrt{2}} b''_1 \otimes \Etil_i b''_2 = \frac{1}{\sqrt{2}}b. \end{split} \nonumber \end{align} The last identity shows that \begin{align}\label{id 4} (\Btil_i b,b'') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b''). \end{align} Identities \eqref{id 3} and \eqref{id 4} confirm Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}. \item When $B_i(b) \leq E_i(b) = F_i(b)$ and $\vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = E_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \vep_{\tau(i)}, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b), \\ &\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Etil_{\tau(i)} b_2 \notin \clB. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, \eqref{Def: icrystal 5b}, and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Etil_{\tau(i)} b_2$. By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i, \\ &B_i(b') = B_i(b) \leq \vphi_i, \\ &E_i(b') = E_i(b)+1 = \vphi_i+1, \end{split} \nonumber \end{align} and hence, \begin{align} \begin{split} &\beta_i(b') = E_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b'_2) = \vep_{\tau(i)}(b'_2) = \vep_{\tau(i)}-1 = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i-s_i-\wt_{\tau(i)}(b'_2) \neq \beta_i(b'). 
\end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}(\Etil_{\tau(i)} b_2) = \vphi_{\tau(i)}+1, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}+\wti_i-s_i+1 \leq B_i(b)+\wti_i-s_i+1 \leq \vphi_{\tau(i)}+1, \\ &E_{\tau(i)}(b') = \vphi_i(\Etil_{\tau(i)} b_2)+\wti_i-s_i+1 = \vphi_i+\wti_i-s_i+1 = \vphi_{\tau(i)}+1, \\ &\beta_i(b'_1) = \beta_i \leq \vphi_{\tau(i)}, \\ &\Ftil_{\tau(i)} b'_2 = b_2 \neq 0, \\ &\vphi_i(\Ftil_{\tau(i)} b'_2) = \vphi_i = \vphi_i(b'_2). \end{split} \nonumber \end{align} Since $\vphi_i(\Ftil_{\tau(i)} b'_2) = \vphi_i(b'_2)$ is equivalent to $\vphi_{\tau(i)}(\Etil_i b'_2) = \vphi_{\tau(i)}(b'_2)-1$ by Lemma \ref{Deduction from condition S's}, this implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}}(b'_1 \otimes \Etil_i b'_2 + b), \end{split} \nonumber \end{align} which shows that $$ (\Btil_i b,b') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b'). $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}. \item When $F_i(b) \leq E_i(b) = B_i(b) = \beta_{\tau(i)}$ and $\vphi_i(\Etil_{\tau(i)} b_2) < E_i(b)$. Since $B_i(b) = \beta_{\tau(i)}$, we have $$ \Btil_{\tau(i)} b_1 \in \clB_1, \qu \beta_{\tau(i)}(\Btil_{\tau(i)} b_1) = \beta_{\tau(i)}-1, \qu \beta_i(\Btil_{\tau(i)} b_1) = \beta_i+2 $$ if $\Btil_{\tau(i)} b_1 \neq 0$. Also, we have \begin{align} \begin{split} &\beta_i(b) = E_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \vep_{\tau(i)}, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = B_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b), \\ &\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Etil_{\tau(i)} b_2 \notin \clB. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, \eqref{Def: icrystal 5b}, and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Etil_{\tau(i)} b_2$.
By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i(\Etil_{\tau(i)} b_2) < E_i(b), \\ &B_i(b') = B_i(b) = E_i(b), \\ &E_i(b') = E_i(b)+1, \end{split} \nonumber \end{align} and hence, \begin{align} \begin{split} &\beta_i(b') = E_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b'_2) = \vep_{\tau(i)}(b'_2) = \vep_{\tau(i)}-1 = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i-s_i-\wt_{\tau(i)}(b'_2) \neq \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}+1, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}+\wti_i-s_i+1 = \vphi_{\tau(i)}+1, \\ &E_{\tau(i)}(b') = \vphi_i(\Etil_{\tau(i)} b_2)+\wti_i-s_i+1 < \vphi_{\tau(i)}+1, \\ &\beta_i(b'_1) = \beta_i = \vphi_{\tau(i)}. \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}}(\Btil_{\tau(i)} b_1 \otimes \Etil_{\tau(i)} b_2 + b), \end{split} \nonumber \end{align} which shows that $$ (\Btil_i b,b') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b'). $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}. \item When $B_i(b) \leq E_i(b) = F_i(b) > \beta_{\tau(i)}$ and $\vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i-1$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = E_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \vep_{\tau(i)}, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b), \\ &\Btil_i b = \frac{1}{\sqrt{2}}(b_1 \otimes \Etil_{\tau(i)} b_2 + b_1 \otimes \Ftil_i b_2) \notin \clB. \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, \eqref{Def: icrystal 5b}, and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have either $b' = b_1 \otimes \Etil_{\tau(i)} b_2$ or $b' = b'' = b''_1 \otimes b''_2 = b_1 \otimes \Ftil_i b_2$. 
By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i-1, \\ &B_i(b') = B_i(b) \leq \vphi_i, \\ &E_i(b') = E_i(b)+1 \leq \vphi_i+1, \end{split} \nonumber \end{align} and hence, \begin{align}\label{id 5} \begin{split} &\beta_i(b') = E_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b'_2) = \vep_{\tau(i)}(b'_2) = \vep_{\tau(i)}-1 = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i-s_i-\wt_{\tau(i)}(b'_2) \neq \beta_i(b'). \end{split} \end{align} Also, we compute as \begin{align} \begin{split} &F_i(b'') = \vphi_i-1, \\ &B_i(b'') = B_i(b) \leq \vphi_i, \\ &E_i(b'') \leq E_i(b) = \vphi_i. \end{split} \nonumber \end{align} This shows that \begin{align}\label{id 6} \begin{split} &\beta_i(b'') = E_i(b'')+\wti_i-s_i-\wt_{\tau(i)}(b''_2) = \vep_{\tau(i)}(b''_2) = \vep_{\tau(i)}-1 = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b'')+\wti_i(b'')-s_i = (E_i(b'')-1)+\wti_i-s_i-\wt_{\tau(i)}(b''_2) \neq \beta_i(b''). \end{split} \end{align} Identities \eqref{id 5} and \eqref{id 6} confirm Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}+1, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}+\wti_i-s_i+1 < \vphi_{\tau(i)}+1, \\ &E_{\tau(i)}(b') = \vphi_i+\wti_i-s_i = \vphi_{\tau(i)}, \\ &\Ftil_{\tau(i)} b'_2 = b_2 \neq 0, \\ &\vphi_i(\Ftil_{\tau(i)} b'_2) = \vphi_i = \vphi_i(b'_2)+1. \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}} b'_1 \otimes \Ftil_{\tau(i)} b'_2 = \frac{1}{\sqrt{2}}b, \end{split} \nonumber \end{align} which shows that \begin{align}\label{id 7} (\Btil_i b,b') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b'). 
\end{align} Also, noting that $\vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i-1$ is equivalent to $\vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}$, we have \begin{align} \begin{split} &F_{\tau(i)}(b'') = \vphi_{\tau(i)}, \\ &B_{\tau(i)}(b'') = \beta_{\tau(i)}+\wti_i-s_i+1 < \vphi_{\tau(i)}+1, \\ &E_{\tau(i)}(b'') = \vphi_i+\wti_i-s_i = \vphi_{\tau(i)}, \\ &\Etil_i b''_2 = b_2 \neq 0, \\ &\vphi_{\tau(i)}(\Etil_i b''_2) = \vphi_{\tau(i)} = \vphi_{\tau(i)}(b''_2). \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\Btil_{\tau(i)} b'' = \frac{1}{\sqrt{2}} b''_1 \otimes \Etil_i b''_2 = \frac{1}{\sqrt{2}}b, \end{split} \nonumber \end{align} which shows that \begin{align}\label{id 8} (\Btil_i b,b'') = \frac{1}{\sqrt{2}} = (b, \Btil_{\tau(i)}b''). \end{align} Identities \eqref{id 7} and \eqref{id 8} confirm Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}. \item When $F_i(b) > B_i(b), E_i(b)$, and $\Btil_i b = b_1 \otimes \Ftil_i b_2$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \vphi_i-\wt_{\tau(i)}+\wti_i-s_i, \\ &\beta_{\tau(i)}(b)+\wti_i-s_i = F_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5b} and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Ftil_i b_2 = \Btil_i b$. By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i-1, \\ &B_i(b') = B_i(b) < \vphi_i, \\ &E_i(b') = \vphi_{\tau(i)}(\Ftil_i b_2)+\wti_i-s_i \leq E_i(b)+1 \leq \vphi_i. \end{split} \nonumber \end{align} We claim that $E_i(b') < \vphi_i$. Otherwise, by the last inequality, we must have $E_i(b)+1 = F_i(b)$ and $\vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}+1$. However, in this case, it holds that $\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Ftil_i b_2$, which contradicts our assumption that $\Btil_i b = b'$. 
Thus, our claim follows. Then, we obtain \begin{align} \begin{split} &\beta_i(b') = F_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b'_2), \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = F_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b'_2) = \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Now, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}(\Ftil_i b_2) \leq \vphi_{\tau(i)}+1 < \vphi_i+\wti_i-s_i+1, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}+\wti_i-s_i+1 \leq B_i(b)+\wti_i-s_i+1 < \vphi_i+\wti_i-s_i+1, \\ &E_{\tau(i)}(b') = \vphi_i+\wti_i-s_i. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b'), B_{\tau(i)}(b') \leq E_{\tau(i)}(b')$. We claim that $\Btil_{\tau(i)} b' = b'_1 \otimes \Etil_i b'_2$. Otherwise, we have either $\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}} b'_1 \otimes \Etil_i b'_2$ or $\Btil_{\tau(i)} b' = \frac{1}{\sqrt{2}} (b'_1 \otimes \Etil_i b'_2 + b'_1 \otimes \Ftil_{\tau(i)} b'_2)$. In each case, by consideration above, we obtain $\Btil_i(b'_1 \otimes \Etil_i b'_2) = \frac{1}{\sqrt{2}} b'_1 \otimes \Ftil_i \Etil_i b'_2 = \frac{1}{\sqrt{2}} b'$, which is a contradiction because $b'_1 \otimes \Etil_i b'_2 = b$. Thus, it follows that $$ \Btil_{\tau(i)} b' = b'_1 \otimes \Etil_i b'_2 = b. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 2.6}. \item When $F_i(b) \leq B_i(b) > E_i(b)$, $\Btil_i b = \Btil_i b_1 \otimes b_2$, and $B_i(b) = \beta_{\tau(i)}$. In this case, we have \begin{align} \begin{split} &\beta_i(b) = B_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i-\wt_{\tau(i)}, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = B_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \beta_i(b). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5b} and \eqref{Def: icrystal 5c}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $(\Btil_i b_1,b'_1) \neq 0$ and $b'_2 = b_2$. 
By our assumption, we see that \begin{align} \begin{split} &F_i(b') = F_i(b) \leq B_i(b), \\ &B_i(b') = \beta_i(b'_1)-(\wti_i-3)+s_i = \begin{cases} B_i(b)+1 & \IF \beta_{\tau(i)}(b'_1) = B_i(b'), \\ B_i(b)+2 & \IF \beta_{\tau(i)}(b'_1) \neq B_i(b'), \end{cases} \\ &E_i(b') = E_i(b)+3 \leq B_i(b)+2. \end{split} \nonumber \end{align} Let us consider the case when $\beta_{\tau(i)}(b'_1) = B_i(b')$ and $E_i(b') < B_i(b)+2$. By Proposition \ref{basic property for a = -1}, we have $b'_1 = \Btil_i b_1$, $\beta_i(b'_1) = \beta_i -2$, and hence, \begin{align} \begin{split} &\beta_i(b') = B_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)}, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = B_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)} = \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. Also, we compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)} < \beta_i, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}(b'_1)+\wti_i(b'_1)-s_i+1 = \beta_i(b'_1)+1 = \beta_i-1, \\ &E_{\tau(i)}(b') = \vphi_i+(\wti_i-3)-s_i+1 \leq \beta_i-2. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b') \leq B_{\tau(i)}(b') > E_{\tau(i)}(b')$. Now, as in the previous case, we obtain $$ \Btil_{\tau(i)} b' = \Btil_{\tau(i)} b'_1 \otimes b'_2 = b. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 2.6}. Next, let us consider the case when $\beta_{\tau(i)}(b'_1) = B_i(b')$ and $E_i(b') = B_i(b)+2$. In this case, we have $b'_1 = \Btil_i b_1$, $\beta_i(b'_1) = \beta_i -2$, and \begin{align} \begin{split} &\beta_i(b') = E_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)} = B_i(b)+\wti_i-s_i-\wt_{\tau(i)}-1 = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i(b'_1)-s_i-\wt_{\tau(i)} \neq \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. 
We compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)} < \beta_i, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}(b')+\wti_i(b')-s_i+1 = \beta_i(b')+1 = \beta_i-1, \\ &E_{\tau(i)}(b') = \vphi_i+(\wti_i-3)-s_i+1 \leq \beta_i-2. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b') \leq B_{\tau(i)}(b') > E_{\tau(i)}(b')$. Hence, we have $$ \Btil_{\tau(i)} b' = \Btil_{\tau(i)} b'_1 \otimes b'_2 = b. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 2.6}. Finally, let us consider the case when $\beta_{\tau(i)}(b'_1) \neq B_i(b')$. By Proposition \ref{basic property for a = -1}, we have $\beta_i(b') = \beta_i -1$, $\beta_{\tau(i)}(b') = \beta_{\tau(i)}+1$, and hence, \begin{align} \begin{split} &\beta_i(b') = B_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)} = B_i(b)+\wti_i-s_i-\wt_{\tau(i)}-1 = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (B_i(b')-1)+\wti_i(b'_1)-s_i-\wt_{\tau(i)} \neq \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d}. We compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)} < \beta_i, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}(b')+\wti_i(b')-s_i+1 = \beta_i(b')+1 = \beta_i-1, \\ &E_{\tau(i)}(b') = \vphi_i+(\wti_i-3)-s_i+1 \leq \beta_i-2. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b') \leq B_{\tau(i)}(b') > E_{\tau(i)}(b')$. Hence, we have $$ \Btil_{\tau(i)} b' = \Btil_{\tau(i)} b'_1 \otimes b'_2, $$ which implies that $\Btil_{\tau(i)} b' = b$ if $b' = \Btil_i b$, and that $$ (\Btil_i b,b') = (\Btil_i b_1,b'_1) = (b_1, \Btil_{\tau(i)} b'_1) = (b, \Btil_{\tau(i)} b'). $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 2.6}. \item When $F_i(b) \leq B_i(b) > E_i(b)$, $\Btil_i b = \Btil_i b_1 \otimes b_2$, and $B_i(b) \neq \beta_{\tau(i)}$. 
In this case, we have \begin{align} \begin{split} &\beta_i(b) = B_i(b)+\wti_i-s_i-\wt_{\tau(i)}, \\ &\beta_{\tau(i)}(b)+\wti_i(b)-s_i = \begin{cases} (B_i(b)-1)+\wti_i-s_i-\wt_{\tau(i)} & \IF F_i(b) < B_i(b), \\ B_i(b)+\wti_i-s_i-\wt_{\tau(i)} & \IF F_i(b) = B_i(b). \end{cases} \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5b}. Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $(\Btil_i b_1,b'_1) \neq 0$ and $b'_2 = b_2$. Since $B_i(b) \neq \beta_{\tau(i)}$, it follows from Proposition \ref{basic property for a = -1} that $b'_1 = \Btil_i b_1$, $\beta_i(b'_1) \neq B_i(b')$, and $\beta_i(b'_1) = \beta_i-1$. By our assumption, we see that \begin{align} \begin{split} &F_i(b') = F_i(b) \leq B_i(b), \\ &B_i(b') = \beta_i(b'_1)-(\wti_i-3)+s_i = B_i(b)+2, \\ &E_i(b') = E_i(b)+3 \leq B_i(b)+2. \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\beta_i(b') = B_i(b')+\wti_i(b'_1)-s_i-\wt_{\tau(i)} = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (B_i(b')-1)+\wti_i(b'_1)-s_i-\wt_{\tau(i)} \neq \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5c} and \eqref{Def: icrystal 5d} (note that $\Btil_i b = \Btil_i b_1 \otimes b_2 = b' \in \clB$). We compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)} < \beta_i, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}(b'_1)+\wti_i(b'_1)-s_i+1 = \beta_i(b'_1) = \beta_i-1, \\ &E_{\tau(i)}(b') = \vphi_i+(\wti_i-3)-s_i+1 \leq \beta_i-2. \end{split} \nonumber \end{align} This implies that $F_{\tau(i)}(b') \leq B_{\tau(i)}(b') > E_{\tau(i)}(b')$. Hence, we have $$ \Btil_{\tau(i)} b' = \Btil_{\tau(i)} b'_1 \otimes b'_2 = b. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 2.6}. \item When $F_i(b), B_i(b) \leq E_i(b)$ and $\Btil_i b = b_1 \otimes \Etil_{\tau(i)} b_2$. 
In this case, we have \begin{align} \begin{split} &\beta_i(b) = E_i(b)+\wti_i-s_i-\wt_{\tau(i)} = \vep_{\tau(i)}. \end{split} \nonumber \end{align} Let $b' = b'_1 \otimes b'_2 \in \clB$ be such that $(\Btil_i b,b') \neq 0$. Then, we have $b' = b_1 \otimes \Etil_{\tau(i)} b_2 = \Btil_i b$. By our assumption, we see that \begin{align} \begin{split} &F_i(b') = \vphi_i(\Etil_{\tau(i)} b_2) \leq \vphi_i \leq E_i(b), \\ &B_i(b') = B_i(b) \leq E_i(b), \\ &E_i(b') = E_i(b)+1. \end{split} \nonumber \end{align} This implies that \begin{align} \begin{split} &\beta_i(b') = E_i(b')+\wti_i-s_i-\wt_{\tau(i)}(b'_2) = \vep_{\tau(i)}(b'_2) = \beta_i(b)-1, \\ &\beta_{\tau(i)}(b')+\wti_i(b')-s_i = (E_i(b')-1)+\wti_i-s_i-\wt_{\tau(i)}(b'_2) \neq \beta_i(b'). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 5c} and \eqref{Def: icrystal 5d}. We compute as \begin{align} \begin{split} &F_{\tau(i)}(b') = \vphi_{\tau(i)}+1, \\ &B_{\tau(i)}(b') = \beta_{\tau(i)}+\wti_i-s_i+1 \leq \beta_i+1 \leq \vphi_{\tau(i)}+1, \\ &E_{\tau(i)}(b') = \vphi_i(b'_2)+\wti_i-s_i+1 \leq \vphi_i+\wti_i-s_i+1 \leq \vphi_{\tau(i)}+1. \end{split} \nonumber \end{align} We claim that $F_{\tau(i)}(b') > B_{\tau(i)}(b'), E_{\tau(i)}(b')$. Assume to the contrary that $E_{\tau(i)}(b') = F_{\tau(i)}(b')$. In this case, we must have $\vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i$ and $F_i(b) = E_i(b)$. This implies that $\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Etil_{\tau(i)}b_2$, which contradicts our assumption that $\Btil_i b = b'$. Next, assume to the contrary that $B_{\tau(i)}(b') = F_{\tau(i)}(b')$. In this case, we must have $B_i(b) = \beta_{\tau(i)}$ and $B_i(b) = E_i(b)$. If $\vphi_i(\Etil_{\tau(i)} b_2) < E_i(b)$, then it follows that $\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Etil_{\tau(i)} b_2$, which causes a contradiction again. Hence, we obtain $\vphi_i(\Etil_{\tau(i)} b_2) = E_i(b)$. 
Since $\vphi_i(\Etil_{\tau(i)} b_2) \leq \vphi_i \leq E_i(b)$, we must have $\vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i$ and $F_i(b) = E_i(b)$. This implies that $\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Etil_{\tau(i)} b_2$, which is a contradiction. Thus, our claim follows. Consequently, we obtain $$ \Btil_{\tau(i)} b' = b'_1 \otimes \Ftil_i b'_2 = b. $$ This confirms Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} and \eqref{Def: icrystal 2.6}. \end{enumerate} Now, we have exhausted all the cases. This completes the proof. \subsection{Proof of Proposition \ref{associativity for icrystal}}\label{Subsection: proof of associativity} Let $b_i \in \clB_i$, $i = 1,2,3$, and set $b := b_1 \otimes (b_2 \otimes b_3)$, $b' := (b_1 \otimes b_2) \otimes b_3$. It suffices to show that $\beta_i(b) = \beta_i(b')$ and $\Btil_i b = \Btil_i b'$ for all $i \in I$. For later use, we note the following: \begin{align} \begin{split} &\vphi_i(b_2 \otimes b_3) = \max(\vphi_i(b_3)+\wt_i(b_2),\vphi_i(b_2)), \\ &\Ftil_i(b_2 \otimes b_3) = \begin{cases} b_2 \otimes \Ftil_i b_3 & \IF \vphi_i(b_3)+\wt_i(b_2) > \vphi_i(b_2), \\ \Ftil_i b_2 \otimes b_3 & \IF \vphi_i(b_3)+\wt_i(b_2) \leq \vphi_i(b_2), \end{cases} \\ &\vep_{\tau(i)}(b_2 \otimes b_3) = \max(\vphi_{\tau(i)}(b_2), \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2))-\wt_{\tau(i)}(b_2)-\wt_{\tau(i)}(b_3), \\ &\Etil_{\tau(i)}(b_2 \otimes b_3) = \begin{cases} \Etil_{\tau(i)} b_2 \otimes b_3 & \IF \vphi_{\tau(i)}(b_2) > \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2), \\ b_2 \otimes \Etil_{\tau(i)} b_3 & \IF \vphi_{\tau(i)}(b_2) \leq \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2). 
\end{cases} \end{split} \nonumber \end{align} \subsubsection{When $a_{i,\tau(i)} = 2$} Setting \begin{align} \begin{split} &A_1 := \vphi_i(b_3)+\wt_i(b_2)+\delta_{\ol{\beta_i(b_1)+1}, \ol{\vphi_i(b_3)+\wt_i(b_2)}}, \\ &A_2 := \vphi_i(b_2) + \delta_{\ol{\beta_i(b_1)+1}, \ol{\vphi_i(b_2)}}, \\ &A_3 := \beta_i(b_1), \\ &A_4 := \vphi_i(b_2), \\ &A_5 := \vphi_i(b_3)+\wt_i(b_2), \end{split} \nonumber \end{align} we see that \begin{align} \begin{split} &F_i(b) = \max(A_1,A_2), \qu B_i(b) = A_3, \qu E_i(b) = \max(A_4,A_5), \\ &F_i(b') = A_1-\wt_i(b_2), \qu B_i(b') = \max(A_2,A_3,A_4)-\wt_i(b_2), \qu E_i(b') = A_5-\wt_i(b_2), \\ &F_i(b_1 \otimes b_2) = A_2, \qu B_i(b_1 \otimes b_2) = A_3, \qu E_i(b_1 \otimes b_2) = A_4. \end{split} \nonumber \end{align} Therefore, we compute as \begin{align} \begin{split} \beta_i(b) &= \max(\max(A_1,A_2),A_3,\max(A_4,A_5)) - \wt_i(b_2 \otimes b_3) \\ &= \max(A_1,A_2,A_3,A_4,A_5)-\wt_i(b_2)-\wt_i(b_3), \\ \beta_i(b') &= \max(A_1-\wt_i(b_2),\max(A_2,A_3,A_4)-\wt_i(b_2),A_5-\wt_i(b_2))-\wt_i(b_3) \\ &= \max(A_1,A_2,A_3,A_4,A_5)-\wt_i(b_2)-\wt_i(b_3). \end{split} \nonumber \end{align} This implies that $\beta_i(b) = \beta_i(b')$. 
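The equality $\beta_i(b) = \beta_i(b')$ above rests only on two elementary properties of $\max$: it is associative over arbitrary groupings, and it commutes with translation by a constant. As a purely illustrative sanity check (not part of the argument), the two groupings can be compared numerically, with the hypothetical variables \texttt{w2}, \texttt{w3} standing in for $\wt_i(b_2)$, $\wt_i(b_3)$:

```python
import itertools

def beta_b(A, w2, w3):
    # beta_i(b): group as max(max(A1, A2), A3, max(A4, A5)), then shift.
    A1, A2, A3, A4, A5 = A
    return max(max(A1, A2), A3, max(A4, A5)) - w2 - w3

def beta_bprime(A, w2, w3):
    # beta_i(b'): shift A1, max(A2, A3, A4), A5 by -w2 first, then by -w3.
    A1, A2, A3, A4, A5 = A
    return max(A1 - w2, max(A2, A3, A4) - w2, A5 - w2) - w3

# The two groupings agree on every integer tuple of a small grid.
for A in itertools.product(range(-2, 3), repeat=5):
    for w2 in (-1, 0, 2):
        assert beta_b(A, w2, 1) == beta_bprime(A, w2, 1)
print("beta_i(b) == beta_i(b') on the whole grid")
```

The same identity is what makes the $a_{i,\tau(i)} = 0$ and $a_{i,\tau(i)} = -1$ computations below go through verbatim, since only the definitions of $A_1,\dots,A_5$ change.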
Also, we compute as \begin{align} \begin{split} \Btil_i b &= \begin{cases} b_1 \otimes \Ftil_i(b_2 \otimes b_3) & \IF \max(A_1,A_2) > A_3,\max(A_4,A_5), \\ \Btil_i b_1 \otimes (b_2 \otimes b_3) & \IF \max(A_1,A_2) \leq A_3 > \max(A_4,A_5), \\ b_1 \otimes \Etil_i(b_2 \otimes b_3) & \IF \max(A_1,A_2), A_3 \leq \max(A_4,A_5), \end{cases} \\ &= \begin{cases} b_1 \otimes (b_2 \otimes \Ftil_i b_3) & \IF A_1 > A_2,A_3,A_4,A_5, \\ b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \IF A_1 \leq A_2 > A_3,A_4,A_5, \\ \Btil_i b_1 \otimes (b_2 \otimes b_3) & \IF A_1,A_2 \leq A_3 > A_4,A_5, \\ b_1 \otimes (\Etil_i b_2 \otimes b_3) & \IF A_1,A_2,A_3 \leq A_4 > A_5, \\ b_1 \otimes (b_2 \otimes \Etil_i b_3) & \IF A_1,A_2,A_3,A_4 \leq A_5, \end{cases} \\ \Btil_i b' &= \begin{cases} (b_1 \otimes b_2) \otimes \Ftil_ib_3 & \IF A_1 > \max(A_2,A_3,A_4),A_5, \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \IF A_1 \leq \max(A_2,A_3,A_4) > A_5, \\ (b_1 \otimes b_2) \otimes \Etil_i b_3 & \IF A_1, \max(A_2,A_3,A_4) \leq A_5, \end{cases} \\ &= \begin{cases} (b_1 \otimes b_2) \otimes \Ftil_i b_3 & \IF A_1 > A_2,A_3,A_4,A_5, \\ (b_1 \otimes \Ftil_i b_2) \otimes b_3 & \IF A_1 \leq A_2 > A_3,A_4,A_5, \\ (\Btil_i b_1 \otimes b_2) \otimes b_3 & \IF A_1,A_2 \leq A_3 > A_4,A_5, \\ (b_1 \otimes \Etil_i b_2) \otimes b_3 & \IF A_1,A_2,A_3 \leq A_4 > A_5, \\ (b_1 \otimes b_2) \otimes \Etil_i b_3 & \IF A_1,A_2,A_3,A_4 \leq A_5. \end{cases} \end{split} \nonumber \end{align} Thus, we obtain $\Btil_i b = \Btil_i b'$. 
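The five-way case analysis above can be read as an argmax statement: with non-strict inequalities to the left and strict ones to the right, each list of cases picks out the last index at which the maximum of $A_1,\dots,A_5$ is attained, and both bracketings refine to the same rule. The following sketch (again only illustrative, for the generic pattern of this subsubsection, and ignoring the $\frac{1}{\sqrt{2}}$-coefficient subtleties of the $a_{i,\tau(i)} = -1$ case) checks this exhaustively:

```python
import itertools

def last_argmax(A):
    # Index (0-based) of the last entry attaining the maximum.
    m = max(A)
    return max(i for i, a in enumerate(A) if a == m)

def branch_b(A):
    # b = b1 (x) (b2 (x) b3): first compare F = max(A1,A2), B = A3,
    # E = max(A4,A5), then refine inside the chosen group; ties always
    # go to the later index.
    A1, A2, A3, A4, A5 = A
    F, B, E = max(A1, A2), A3, max(A4, A5)
    if F > B and F > E:
        return 0 if A1 > A2 else 1
    if B > E:  # the first test failed, so F <= B here
        return 2
    return 3 if A4 > A5 else 4

def branch_bprime(A):
    # b' = (b1 (x) b2) (x) b3: compare A1, max(A2,A3,A4), A5, then refine
    # inside b1 (x) b2 with the same tie-breaking convention.
    A1, A2, A3, A4, A5 = A
    M = max(A2, A3, A4)
    if A1 > M and A1 > A5:
        return 0
    if M > A5:  # A1 <= M here
        if A2 > A3 and A2 > A4:
            return 1
        return 2 if A3 > A4 else 3
    return 4

# Both bracketings select the last maximizer, hence the same case.
for A in itertools.product(range(4), repeat=5):
    assert branch_b(A) == branch_bprime(A) == last_argmax(A)
print("both bracketings choose the same case on all tuples")
```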
\subsubsection{When $a_{i,\tau(i)} = 0$} Setting \begin{align} \begin{split} &A_1 := \vphi_i(b_3)+\wt_i(b_2), \\ &A_2 := \vphi_i(b_2), \\ &A_3 := \beta_i(b_1)-\wti_i(b_1), \\ &A_4 := \vphi_{\tau(i)}(b_2)-\wti_i(b_1), \\ &A_5 := \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2)-\wti_i(b_1), \end{split} \nonumber \end{align} we see that \begin{align} \begin{split} &F_i(b) = \max(A_1,A_2), \qu B_i(b) = A_3, \qu E_i(b) = \max(A_4,A_5), \\ &F_i(b') = A_1-\wt_i(b_2), \qu B_i(b') = \max(A_2,A_3,A_4)-\wt_i(b_2), \qu E_i(b') = A_5-\wt_i(b_2), \\ &F_i(b_1 \otimes b_2) = A_2, \qu B_i(b_1 \otimes b_2) = A_3, \qu E_i(b_1 \otimes b_2) = A_4. \end{split} \nonumber \end{align} Therefore, we can compute as before to obtain $\beta_i(b) = \beta_i(b')$ and $\Btil_i b = \Btil_i b'$. \subsubsection{When $a_{i,\tau(i)} = -1$} Setting \begin{align} \begin{split} &A_1 := \vphi_i(b_3)+\wt_i(b_2), \\ &A_2 := \vphi_i(b_2), \\ &A_3 := \beta_i(b_1)-\wti_i(b_1)+s_i, \\ &A_4 := \vphi_{\tau(i)}(b_2)-\wti_i(b_1)+s_i, \\ &A_5 := \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2)-\wti_i(b_1)+s_i, \end{split} \nonumber \end{align} we see that \begin{align} \begin{split} &F_i(b) = \max(A_1,A_2), \qu B_i(b) = A_3, \qu E_i(b) = \max(A_4,A_5), \\ &F_i(b') = A_1-\wt_i(b_2), \qu B_i(b') = \max(A_2,A_3,A_4)-\wt_i(b_2), \qu E_i(b') = A_5-\wt_i(b_2), \\ &F_i(b_1 \otimes b_2) = A_2, \qu B_i(b_1 \otimes b_2) = A_3, \qu E_i(b_1 \otimes b_2) = A_4. \end{split} \nonumber \end{align} Hence, we can compute as before to obtain $\beta_i(b) = \beta_i(b')$. Note also that \begin{align} \begin{split} \beta_i(b_1 \otimes b_2) &= \max(A_2,A_3,A_4)+\wti_i(b_1)-s_i-\wt_{\tau(i)}(b_2), \\ \beta_{\tau(i)}(b_1 \otimes b_2) &= \max(A_4-1, \beta_{\tau(i)}(b_1), A_2)-\wt_i(b_2) \\ &= \begin{cases} \max(A_4-1,A_3,A_2)-\wt_i(b_2) & \IF \beta_{\tau(i)}(b_1) = A_3, \\ \max(A_4-1,A_3-1,A_2)-\wt_i(b_2) & \IF \beta_{\tau(i)}(b_1) \neq A_3. \end{cases} \end{split} \nonumber \end{align} \begin{enumerate} \item When $A_1 > A_2,A_3,A_4,A_5$. 
In this case, we have $$ \Ftil_i(b_2 \otimes b_3) = b_2 \otimes \Ftil_i b_3, $$ and hence, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (b_2 \otimes \Ftil_i b_3) & \IF A_1 = \max(A_4,A_5)+1, \\ &\AND \vphi_{\tau(i)}(b_2 \otimes \Ftil_i b_3) = \vphi_{\tau(i)}(b_2 \otimes b_3)+1, \\ b_1 \otimes (b_2 \otimes \Ftil_i b_3) & \OW. \end{cases} $$ Since $\vphi_{\tau(i)}(b_2 \otimes \Ftil_i b_3) = \max(\vphi_{\tau(i)}(b_2), \vphi_{\tau(i)}(\Ftil_ib_3)+\wt_{\tau(i)}(b_2))$ and $\vphi_{\tau(i)}(b_2 \otimes b_3) = \max(A_4,A_5)$, we have $\vphi_{\tau(i)}(b_2 \otimes \Ftil_i b_3) = \vphi_{\tau(i)}(b_2 \otimes b_3)+1$ if and only if $A_4 \leq A_5$ and $\vphi_{\tau(i)}(\Ftil_i b_3) = \vphi_{\tau(i)}(b_3)+1$. Therefore, we obtain $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (b_2 \otimes \Ftil_i b_3) & \IF A_1 = A_5+1 \AND \vphi_{\tau(i)}(\Ftil_i b_3) = \vphi_{\tau(i)}(b_3)+1, \\ b_1 \otimes (b_2 \otimes \Ftil_i b_3) & \OW. \end{cases} $$ Note that $A_1 = A_5+1$ implies $A_4 \leq A_1-1 = A_5$. On the other hand, we have \begin{align} \begin{split} &\Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes b_2) \otimes \Ftil_i b_3 & \IF A_1 = A_5+1 \AND \vphi_{\tau(i)}(\Ftil_i b_3) = \vphi_{\tau(i)}(b_3)+1, \\ (b_1 \otimes b_2) \otimes \Ftil_i b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Thus we obtain $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1 \leq A_2 > A_3,A_4,A_5$ and $A_4 > A_5$. In this case, we have \begin{align} \begin{split} &\Ftil_i(b_2 \otimes b_3) = \Ftil_i b_2 \otimes b_3, \end{split} \nonumber \end{align} and hence, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \IF A_2=A_4+1, \\ & \AND \vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(b_2 \otimes b_3)+1, \\ b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \OW. 
\end{cases} $$ Since $\vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(\Ftil_i b_2)$ and $\vphi_{\tau(i)}(b_2 \otimes b_3) = \vphi_{\tau(i)}(b_2)$, we have $\vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(b_2 \otimes b_3)+1$ if and only if $\vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(b_2)+1$. Therefore, we obtain $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \IF A_2=A_4+1 \AND \vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(b_2)+1, \\ b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \OW. \end{cases} $$ On the other hand, since $A_2 > A_3,A_4$, we have $$ \beta_{\tau(i)}(b_1 \otimes b_2) = A_2-\wt_i(b_2) = B_i(b'), $$ and hence, \begin{align} \begin{split} &\Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}} \Btil_i(b_1 \otimes b_2) \otimes b_3 & \IF A_2=A_5+1, \\ & \AND \beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2, \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} However, since $A_2 > A_4 > A_5$, it never happens that $A_2 = A_5+1$. Therefore, we obtain \begin{align} \begin{split} \Btil_i b' &= \Btil_i(b_1 \otimes b_2) \otimes b_3 \\ &= \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes \Ftil_i b_2) \otimes b_3 & \IF A_2 = A_4+1 \AND \vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(b_2)+1, \\ (b_1 \otimes \Ftil_i b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Thus, we conclude $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1 \leq A_2 > A_3,A_4,A_5$ and $A_4 \leq A_5$. In this case, we have \begin{align} \begin{split} &\Ftil_i(b_2 \otimes b_3) = \Ftil_i b_2 \otimes b_3, \end{split} \nonumber \end{align} and hence, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \IF A_2=A_5+1, \\ & \AND \vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(b_2 \otimes b_3)+1, \\ b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \OW. 
\end{cases} $$ Since $\vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2)+1$ and $\vphi_{\tau(i)}(b_2 \otimes b_3) = \vphi_{\tau(i)}(b_3)+\wt_{\tau(i)}(b_2)$, we always have $\vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(b_2 \otimes b_3)+1$. Therefore, we obtain $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \IF A_2=A_5+1, \\ b_1 \otimes (\Ftil_i b_2 \otimes b_3) & \OW. \end{cases} $$ On the other hand, since $A_2 > A_3,A_4$, we have $$ \beta_{\tau(i)}(b_1 \otimes b_2) = A_2-\wt_i(b_2) = B_i(b'), $$ and hence, \begin{align} \begin{split} &\Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}} \Btil_i(b_1 \otimes b_2) \otimes b_3 & \IF A_2=A_5+1, \\ & \AND \beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2, \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Since $A_2 > A_3,A_4$, we have $\Btil_i(b_1 \otimes b_2) \in \clB_1 \otimes \clB_2$ if and only if $\Btil_i(b_1 \otimes b_2) = b_1 \otimes \Ftil_i b_2$. In this case, we always have $\beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2$. Otherwise, we have either $\Ftil_i b_2 = 0$ or $A_2=A_4+1$ and $\vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(b_2)+1$. Noting that $A_2 = A_4+1$ implies $A_2 = A_5+1$, we obtain \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes \Ftil_i b_2) \otimes b_3 & \IF A_2 = A_5+1, \\ (b_1 \otimes \Ftil_i b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Thus, we conclude $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1, A_2 \leq A_3 > A_4,A_5$ and $\beta_{\tau(i)}(b_1) = A_3$. 
In this case, we have $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} \Btil_i b_1 \otimes (b_2 \otimes b_3) & \IF A_3 = \max(A_4,A_5)+1,\\ & \AND \beta_i(\Btil_i b_1) = \beta_i(b_1)-2, \\ \Btil_i b_1 \otimes (b_2 \otimes b_3) & \OW. \end{cases} $$ On the other hand, since $A_2 \leq A_3 > A_4$ and $\beta_{\tau(i)}(b_1)=A_3$, we have $$ \beta_{\tau(i)}(b_1 \otimes b_2) = A_3-\wt_i(b_2) = B_i(b'), $$ and hence, $$ \Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i(b_1 \otimes b_2) \otimes b_3) & \IF A_3 = A_5+1, \\ &\AND \beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2, \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} $$ Since $A_2 \leq A_3 > A_4$, we have $\Btil_i(b_1 \otimes b_2) \in \clB_1 \otimes \clB_2$ if and only if $\Btil_i(b_1 \otimes b_2) = \Btil_i b_1 \otimes b_2 \in \clB_1 \otimes \clB_2$. In this case, the equality $\beta_i(\Btil_i b_1 \otimes b_2) = \beta_i(b_1 \otimes b_2)-2$ is equivalent to the condition that $\beta_i(\Btil_i b_1) = \beta_i(b_1)-2$ and $A_3+1 \geq A_4+3$, since we have \begin{align}\label{calc} \begin{split} &F_i(\Btil_i b_1 \otimes b_2) = F_i(b_1 \otimes b_2), \\ &B_i(\Btil_i b_1 \otimes b_2) = B_i(b_1 \otimes b_2)+(\beta_i(\Btil_i b_1) - \beta_i(b_1))+3, \\ &E_i(\Btil_ib_1 \otimes b_2)=E_i(b_1 \otimes b_2)+3. \end{split} \end{align} Now, consider the case when $A_4 > A_5$. In this case, it never happens that $A_3 = A_5+1$. Hence, we obtain \begin{align} \begin{split} \Btil_i b' &= \Btil_i(b_1 \otimes b_2) \otimes b_3 \\ &= \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes b_2) \otimes b_3 & \IF A_3=A_4+1 \AND \beta_i(\Btil_i b_1) = \beta_i(b_1)-2, \\ (\Btil_i b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Next, consider the case when $A_4 \leq A_5$. 
Noting that $A_3 = A_4+1$ implies $A_3 = A_5+1$, we obtain \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes b_2) \otimes b_3 & \IF A_3 = A_5+1 \AND \beta_i(\Btil_i b_1) = \beta_i(b_1)-2, \\ (\Btil_i b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} In each case, we have $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1, A_2 \leq A_3 > A_4,A_5$ and $\beta_{\tau(i)}(b_1) \neq A_3$. In this case, it never happens that $\beta_i(\Btil_i b_1) = \beta_i(b_1)-2$, and hence, \begin{align} \begin{split} \Btil_i b &= \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes (b_2 \otimes b_3) + b_1 \otimes \Ftil_i (b_2 \otimes b_3)) & \IF \max(A_1,A_2) = A_3, \\ \Btil_i b_1 \otimes (b_2 \otimes b_3) & \OW \end{cases} \\ &= \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes (b_2 \otimes b_3) + b_1 \otimes (b_2 \otimes \Ftil_i b_3)) & \IF A_2 < A_1 = A_3, \\ \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes (b_2 \otimes b_3) + b_1 \otimes (\Ftil_i b_2 \otimes b_3)) & \IF A_1 \leq A_2 = A_3, \\ \Btil_i b_1 \otimes (b_2 \otimes b_3) & \OW. \end{cases} \end{split} \nonumber \end{align} On the other hand, since $A_2 \leq A_3 > A_4$ and $\beta_{\tau(i)}(b_1) \neq A_3$, we have $$ \beta_{\tau(i)}(b_1 \otimes b_2) = \max(A_3-1,A_2)-\wt_i(b_2) = \begin{cases} B_i(b') & \IF A_2 = A_3, \\ B_i(b')-1 & \IF A_2 < A_3, \end{cases} $$ and hence, $$ \Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i(b_1 \otimes b_2) \otimes b_3) & \IF A_2 = A_3 = A_5+1, \\ &\AND \beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2, \\ \frac{1}{\sqrt{2}}(\Btil_i(b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_3 > A_2, \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} $$ Since $\beta_{\tau(i)}(b_1) \neq A_3$, it never happens that $\beta_i(\Btil_i b_1) = \beta_i(b_1)-2$. Hence, by identities \eqref{calc}, it never happens that $\beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2$. 
Now, consider the case when $A_1 > A_2$. In this case, we have $A_2 < A_3$, and hence \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}((\Btil_i b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_3, \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \IF A_1 < A_3 \end{cases} \\ &= \begin{cases} \frac{1}{\sqrt{2}}((\Btil_i b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_3, \\ (\Btil_i b_1 \otimes b_2) \otimes b_3 & \IF A_1 < A_3. \end{cases} \end{split} \nonumber \end{align} Next, consider the case when $A_1 \leq A_2$. In this case, we have \begin{align} \begin{split} \Btil_i b' &= \Btil_i(b_1 \otimes b_2) \otimes b_3 \\ &= \begin{cases} \frac{1}{\sqrt{2}}((\Btil_i b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes \Ftil_i b_2) \otimes b_3) & \IF A_2 = A_3, \\ (\Btil_i b_1 \otimes b_2) \otimes b_3 & \IF A_2 < A_3. \end{cases} \end{split} \nonumber \end{align} In each case, we have $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1,A_2,A_3 \leq A_4 > A_5$ and $A_1 > A_2$. 
In this case, we have \begin{align} \begin{split} &\Etil_{\tau(i)}(b_2 \otimes b_3) = \Etil_{\tau(i)}b_2 \otimes b_3, \\ &\vphi_i(\Etil_{\tau(i)}b_2 \otimes b_3) = \vphi_i(b_3)+\wt_i(b_2)-1 = \vphi_i(b_2 \otimes b_3)-1 = A_1-1, \\ &\Ftil_i(b_2 \otimes b_3) = b_2 \otimes \Ftil_i b_3, \\ &\vphi_{\tau(i)}(b_2 \otimes \Ftil_i b_3) = \vphi_{\tau(i)}(b_2) = \vphi_{\tau(i)}(b_2 \otimes b_3) = A_4, \end{split} \nonumber \end{align} and hence, \begin{align} \begin{split} \Btil_i b &= \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \IF \beta_{\tau(i)}(b_1) = A_3 = A_4, \\ \frac{1}{\sqrt{2}}(b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) + b_1 \otimes (b_2 \otimes \Ftil_i b_3)) & \IF A_1 = A_4 > \beta_{\tau(i)}(b_1), \\ b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \OW \end{cases} \\ &= \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \IF \beta_{\tau(i)}(b_1) = A_3 = A_4, \\ \frac{1}{\sqrt{2}}(b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) + b_1 \otimes (b_2 \otimes \Ftil_i b_3)) & \IF A_1 = A_4 > A_3, \OR \\ &\ A_1 = A_4 = A_3 \neq \beta_{\tau(i)}(b_1), \\ b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \OW \end{cases} \end{split} \nonumber \end{align} On the other hand, we have $$ \beta_{\tau(i)}(b_1 \otimes b_2) = \begin{cases} B_i(b')-1 & \IF A_4 > A_3,A_2, \OR \\ &\ A_4 \leq A_3 > A_2 \AND \beta_{\tau(i)}(b_1) \neq A_3, \\ B_i(b') & \OW, \end{cases} $$ and hence, \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}\Btil_i(b_1 \otimes b_2) \otimes b_3 & \IF A_4 = A_5+1, \\ & \AND \beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2, \\ \frac{1}{\sqrt{2}}(\Btil_i(b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_4 > A_3, \OR \\ &\ A_1 = A_4 = A_3 \neq \beta_{\tau(i)}(b_1), \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW. 
\end{cases} \end{split} \nonumber \end{align} Since $A_2,A_3 \leq A_4$, we have $\Btil_i(b_1 \otimes b_2) \in \clB_1 \otimes \clB_2$ if and only if $\Btil_i(b_1 \otimes b_2) = b_1 \otimes \Etil_{\tau(i)} b_2 \neq 0$. In this case, it never happens that $\beta_i(b_1 \otimes \Etil_{\tau(i)}b_2) = \beta_i(b_1 \otimes b_2)-2$, since we have \begin{align} \begin{split} &F_i(b_1 \otimes \Etil_{\tau(i)}b_2) = \vphi_i(\Etil_{\tau(i)}b_2) \leq A_2, \\ &B_i(b_1 \otimes \Etil_{\tau(i)}b_2) = A_3, \\ &E_i(b_1 \otimes \Etil_{\tau(i)}b_2) = A_4+1. \end{split} \nonumber \end{align} Hence, we obtain \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}(\Btil_i(b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_4 > A_3, \OR \\ &\ A_1 = A_4 = A_3 \neq \beta_{\tau(i)}(b_1), \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW \end{cases} \\ &= \begin{cases} \frac{1}{\sqrt{2}}((b_1 \otimes \Etil_{\tau(i)}b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_4 > A_3, \OR \\ &\ A_1 = A_4 = A_3 \neq \beta_{\tau(i)}(b_1), \\ \frac{1}{\sqrt{2}}(b_1 \otimes \Etil_{\tau(i)}b_2) \otimes b_3 & \IF A_3 = A_4 = \beta_{\tau(i)}(b_1), \\ (b_1 \otimes \Etil_{\tau(i)}b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Thus, we conclude $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1,A_2,A_3 \leq A_4 > A_5$ and $A_1 \leq A_2$. 
In this case, we have \begin{align} \begin{split} &\Etil_{\tau(i)}(b_2 \otimes b_3) = \Etil_{\tau(i)}b_2 \otimes b_3, \\ &\vphi_i(\Etil_{\tau(i)}b_2 \otimes b_3) = \vphi_i(\Etil_{\tau(i)} b_2) = \begin{cases} \vphi_i(b_2 \otimes b_3) & \IF \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i(b_2), \\ \vphi_i(b_2 \otimes b_3)-1 & \IF \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i(b_2)-1, \end{cases} \\ &\Ftil_i(b_2 \otimes b_3) = \Ftil_i b_2 \otimes b_3, \\ &\vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(\Ftil_i b_2) = \begin{cases} \vphi_{\tau(i)}(b_2 \otimes b_3)+1 & \IF \vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(b_2)+1, \\ \vphi_{\tau(i)}(b_2 \otimes b_3) & \IF \vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(b_2). \end{cases} \end{split} \nonumber \end{align} and hence, \begin{align} \begin{split} \Btil_i b &= \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \IF A_2 = A_4 \AND \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i(b_2), \\ & \OR A_3 = A_4 = \beta_{\tau(i)}(b_1) \AND \vphi_i(\Etil_{\tau(i)} b_2) < A_4, \\ \frac{1}{\sqrt{2}}(b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \IF A_2 = A_4 > \beta_{\tau(i)}(b_1), \\ + b_1 \otimes (\Ftil_i b_2 \otimes b_3)) &\AND \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i(b_2)-1, \\ b_1 \otimes (\Etil_{\tau(i)}b_2 \otimes b_3) & \OW. \end{cases} \end{split} \nonumber \end{align} On the other hand, we have \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}\Btil_i(b_1 \otimes b_2) \otimes b_3 & \IF A_4 = A_5+1, \\ & \AND \beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2, \\ \frac{1}{\sqrt{2}}(\Btil_i(b_1 \otimes b_2) \otimes b_3 + (b_1 \otimes b_2) \otimes \Ftil_i b_3) & \IF A_1 = A_4 \AND \beta_{\tau(i)}(b_1 \otimes b_2) \neq B_i(b'), \\ \Btil_i(b_1 \otimes b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} As in the previous case, it never happens that $\beta_i(\Btil_i(b_1 \otimes b_2)) = \beta_i(b_1 \otimes b_2)-2$. 
Also, since $A_1 \leq A_2$, the condition that $A_1 = A_4$ implies $A_2 = A_4$, which, in turn, shows that $\beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b')$. Hence, we obtain \begin{align} \begin{split} \Btil_i b' &= \Btil_i(b_1 \otimes b_2) \otimes b_3 \\ &= \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes \Etil_{\tau(i)} b_2) \otimes b_3 & \IF A_2 = A_4 \AND \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i(b_2), \\ &\OR A_3 = A_4 = \beta_{\tau(i)}(b_1) \AND \vphi_i(\Etil_{\tau(i)} b_2) < A_4, \\ \frac{1}{\sqrt{2}}((b_1 \otimes \Etil_{\tau(i)} b_2) \otimes b_3 & \IF A_2 = A_4 > \beta_{\tau(i)}(b_1), \\ + (b_1 \otimes \Ftil_i b_2) \otimes b_3) &\AND \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i(b_2)-1, \\ (b_1 \otimes \Etil_{\tau(i)} b_2) \otimes b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Thus, we conclude $$ \Btil_i b = \Btil_i b'. $$ \item When $A_1,A_2,A_3,A_4 \leq A_5$ and $A_1 > A_2$. In this case, we have \begin{align} \begin{split} &\Etil_{\tau(i)}(b_2 \otimes b_3) = b_2 \otimes \Etil_{\tau(i)} b_3, \\ &\vphi_i(b_2 \otimes \Etil_{\tau(i)} b_3) = \vphi_i(\Etil_{\tau(i)}b_3)+\wt_i(b_2) = \begin{cases} \vphi_i(b_2 \otimes b_3) & \IF \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3), \\ \vphi_i(b_2 \otimes b_3)-1 & \IF \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3)-1, \end{cases} \\ &\Ftil_i(b_2 \otimes b_3) = b_2 \otimes \Ftil_i b_3, \\ &\vphi_{\tau(i)}(b_2 \otimes \Ftil_i b_3) = \vphi_{\tau(i)}(\Ftil_i b_3) = \begin{cases} \vphi_{\tau(i)}(b_2 \otimes b_3)+1 & \IF \vphi_{\tau(i)}(\Ftil_i b_3) = \vphi_{\tau(i)}(b_3)+1, \\ \vphi_{\tau(i)}(b_2 \otimes b_3) & \IF \vphi_{\tau(i)}(\Ftil_i b_3) = \vphi_{\tau(i)}(b_3), \end{cases} \end{split} \nonumber \end{align} and hence, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (b_2 \otimes \Etil_{\tau(i)} b_3) & \IF A_1 = A_5 \AND \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3), \\ & \OR A_3 = A_5 = \beta_{\tau(i)}(b_1) \AND \vphi_i(\Etil_{\tau(i)} b_3)+\wt_i(b_2) < A_5, \\ \frac{1}{\sqrt{2}}(b_1 \otimes (b_2 \otimes \Etil_{\tau(i)} 
b_3) & \IF A_1 = A_5 > \beta_{\tau(i)}(b_1), \\ + b_1 \otimes (b_2 \otimes \Ftil_i b_3)) &\AND \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3)-1, \\ b_1 \otimes (b_2 \otimes \Etil_{\tau(i)} b_3) & \OW. \end{cases} $$ On the other hand, we have $$ \Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}} (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_1 = A_5 \AND \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3), \\ & \OR A_5 = \max(A_2,A_3,A_4),\ \beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b'), \\ & \AND \vphi_i(\Etil_{\tau(i)} b_3) < A_5-\wt_i(b_2), \\ \frac{1}{\sqrt{2}}((b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_1 = A_5,\ \beta_{\tau(i)}(b_1 \otimes b_2) < A_5-\wt_i(b_2), \\ + (b_1 \otimes b_2) \otimes \Ftil_i b_3) &\AND \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3)-1, \\ (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \OW. \end{cases} $$ Since $A_2 < A_1$, we have $A_5 = \max(A_2,A_3,A_4)$ and $\beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b')$ if and only if $A_3 = A_5$ and $\beta_{\tau(i)}(b_1) = A_3$. Also, we have $\beta_{\tau(i)}(b_1 \otimes b_2) < A_5-\wt_i(b_2)$ if and only if either $A_3 < A_5$, or $A_3 = A_5$ and $\beta_{\tau(i)}(b_1 \otimes b_2) \neq B_i(b')$. This condition is equivalent to saying that either $A_3 < A_5$, or $A_3 = A_5$ and $\beta_{\tau(i)}(b_1) < A_3$. This is, in turn, equivalent to $\beta_{\tau(i)}(b_1) < A_5$. Therefore, we obtain $$ \Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}} (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_1 = A_5 \AND \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3), \\ & \OR A_3 = A_5 = \beta_{\tau(i)}(b_1) \AND \vphi_i(\Etil_{\tau(i)} b_3) < A_5-\wt_i(b_2), \\ \frac{1}{\sqrt{2}}((b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_1 = A_5 > \beta_{\tau(i)}(b_1), \\ + (b_1 \otimes b_2) \otimes \Ftil_i b_3) &\AND \vphi_i(\Etil_{\tau(i)} b_3) = \vphi_i(b_3)-1, \\ (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \OW. \end{cases} $$ Thus, we conclude $$ \Btil_i b = \Btil_i b'.
$$ \item When $A_1,A_2,A_3,A_4 \leq A_5$ and $A_1 \leq A_2$. In this case, we have \begin{align} \begin{split} &\Etil_{\tau(i)}(b_2 \otimes b_3) = b_2 \otimes \Etil_{\tau(i)} b_3, \\ &\vphi_i(b_2 \otimes \Etil_{\tau(i)} b_3) = \vphi_i(b_2) = \vphi_i(b_2 \otimes b_3), \\ &\Ftil_i(b_2 \otimes b_3) = \Ftil_i b_2 \otimes b_3, \\ &\vphi_{\tau(i)}(\Ftil_i b_2 \otimes b_3) = \vphi_{\tau(i)}(b_3)+1 = \vphi_{\tau(i)}(b_2 \otimes b_3)+1, \end{split} \nonumber \end{align} and hence, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes (b_2 \otimes \Etil_{\tau(i)} b_3) & \IF A_2 = A_5, \OR \\ &\ A_3 = A_5 = \beta_{\tau(i)}(b_1) \AND A_2 < A_5, \\ b_1 \otimes (b_2 \otimes \Etil_{\tau(i)} b_3) & \OW. \end{cases} $$ On the other hand, since $A_1 = A_5$ implies $A_2 = A_5$, and hence $\beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b')$, we have $$ \Btil_i b' = \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_1 = A_5 \AND \vphi_i(\Etil_{\tau(i)}b_3) = \vphi_i(b_3), \OR \\ & \ A_5 = \max(A_2,A_3,A_4),\ \beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b'), \\ & \AND \vphi_i(\Etil_{\tau(i)} b_3) < A_5-\wt_i(b_2), \\ (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \OW. \end{cases} $$ Now, consider the case when $A_1 = A_5$. In this case, we automatically have $A_2 = A_5$ and $\beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b')$. Then, regardless of the value of $\vphi_i(\Etil_{\tau(i)} b_3)$, we obtain $$ \Btil_i b' = \frac{1}{\sqrt{2}}(b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 = \Btil_i b. $$ Next, consider the case when $A_1 < A_5$. In this case, we always have $\vphi_i(\Etil_{\tau(i)} b_3) \leq A_1-\wt_i(b_2) < A_5-\wt_i(b_2)$.
Therefore, we obtain \begin{align} \begin{split} \Btil_i b' &= \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_5 = \max(A_2,A_3,A_4) \AND \beta_{\tau(i)}(b_1 \otimes b_2) = B_i(b'), \\ (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \OW \end{cases} \\&= \begin{cases} \frac{1}{\sqrt{2}}(b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \IF A_2 = A_5, \OR \\ &\ A_2 < A_3 = A_5 \AND \beta_{\tau(i)}(b_1) = A_3, \\ (b_1 \otimes b_2) \otimes \Etil_{\tau(i)} b_3 & \OW. \end{cases} \end{split} \nonumber \end{align} Thus, we conclude $$ \Btil_i b = \Btil_i b'. $$ \end{enumerate} Now, we have exhausted all the cases. Hence, the proof is complete. \section{$\imath$Crystal basis of the modified $\imath$quantum group}\label{Section: stability} In this section, we construct projective systems of $\imath$crystals and very strict morphisms, and describe their projective limits explicitly. Then, we lift this result to based $\Ui$-modules. \subsection{$\imath$Crystal basis of $\Uidot$} Fix $\sigma \in X^+$ such that $$ \la h_i,\sigma \ra = \begin{cases} |s_i| & \IF a_{i,\tau(i)} = 2, \\ 0 & \IF a_{i,\tau(i)} = 0, \\ \max(s_i,0) & \IF a_{i,\tau(i)} = -1 \AND i \in I_\tau, \\ \max(-s_{\tau(i)},0) & \IF a_{i,\tau(i)} = -1 \AND i \notin I_\tau. \end{cases} $$ Note that we have $$ \la h_i-h_{\tau(i)},\sigma \ra = \begin{cases} s_i & \IF i \in I_\tau, \\ -s_{\tau(i)} & \IF i \notin I_\tau \end{cases} $$ for all $i \in I$ with $a_{i,\tau(i)} = -1$. Recall from Section \ref{Section: iquantum groups and icrystals} the $1$-dimensional $\Ui$-module $V(0)^\sigma = \bbK v_0^\sigma$ and its $\bbK_\infty$-lattice $\clL(0)^\sigma = \bbK_\infty v_0^\sigma$. Set $b_0^\sigma := \ev_\infty(v_0^\sigma)$, $\clB(0)^\sigma := \{ b_0^\sigma \}$. \begin{lem}\label{icrystral structure of B(0)sigma} The $1$-dimensional $\Ui$-module $V(0)^\sigma$ has $\clB(0)^\sigma = \{ b_0^\sigma \}$ as its $\imath$crystal basis.
Furthermore, its $\imath$crystal structure is given as follows: $$ \wti(b_0^\sigma) = \ol{\sigma}, \qu \beta_i(b_0^\sigma) = 0, \qu \Btil_i b_0^\sigma = 0. $$ \end{lem} \begin{proof} The assertion is clear from the $\Ui$-module structure of $V(0)^\sigma$. \end{proof} \begin{lem}\label{existence of gamma nu} Let $\nu \in X^+$. Then, there exists a very strict $\imath$crystal morphism $\gamma_\nu : \clB(\sigma+\nu+\tau(\nu)) \rightarrow \clB(0)^\sigma$ such that $$ \gamma_\nu(b) = \delta_{b,b_{\sigma+\nu+\tau(\nu)}} b_0^\sigma $$ for all $b \in \clB(\sigma+\nu+\tau(\nu))$. \end{lem} \begin{proof} By Corollary \ref{icrystal structure on crystal}, we have $$ \wti(b_{\sigma+\nu+\tau(\nu)}) = \ol{\sigma}, \qu \beta_i(b_{\sigma+\nu+\tau(\nu)}) = 0, \qu \Btil_i b_{\sigma+\nu+\tau(\nu)} = 0. $$ This, together with Lemma \ref{icrystral structure of B(0)sigma}, shows that $\{ b_{\sigma+\nu+\tau(\nu)} \}$ forms an $\imath$crystal isomorphic to $\clB(0)^\sigma$. Thus, the assertion follows. \end{proof} Recall that for each $\lm \in X^+$, we set $$ V(\lm)^\sigma := V(0)^\sigma \otimes V(\lm), \qu \clB(\lm)^\sigma := \clB(0)^\sigma \otimes \clB(\lm), $$ and $$ v^\sigma := v_0^\sigma \otimes v, \qu b^\sigma := b_0^\sigma \otimes b $$ for all $v \in V(\lm)$ and $b \in \clB(\lm)$. \begin{lem}\label{existence of rho lm} Let $\lm \in X^+$. Then, there exists a very strict $\imath$crystal morphism $\rho_\lm : \clB(\sigma+\lm) \rightarrow \clB(\lm)^\sigma$ such that $$ \rho_\lm(\pi_{\sigma+\lm}(b)) = \pi_\lm(b)^\sigma $$ for all $b \in \clB(\infty)$. Consequently, there exists an injective very strict $\imath$crystal morphism $\clB(\lm)^\sigma \hookrightarrow \clB(\sigma+\lm)$ which sends $\pi_\lm(b)^\sigma$ to $\pi_{\sigma+\lm}(b)$ for all $b \in \clB(\infty;\lm)$.
\end{lem} \begin{proof} By \cite[Proposition 25.1.2]{L10}, there exists an injective strict crystal morphism $\eta_{\sigma,\lm} : \clB(\sigma+\lm) \rightarrow \clB(\sigma) \otimes \clB(\lm)$ such that $$ \eta_{\sigma,\lm}(\pi_{\sigma+\lm}(b)) = b_\sigma \otimes \pi_\lm(b) \qu \Forall b \in \clB(\infty;\lm), $$ and $$ \eta_{\sigma,\lm}(\pi_{\sigma+\lm}(b)) \notin b_\sigma \otimes \clB(\lm) \qu \Forall b \in \clB(\infty) {\setminus} \clB(\infty;\lm). $$ By Lemma \ref{existence of gamma nu} and Proposition \ref{tensor product of morphisms}, there exists a very strict $\imath$crystal morphism $$ \rho_\lm : \clB(\sigma+\lm) \xrightarrow[]{\eta_{\sigma,\lm}} \clB(\sigma) \otimes \clB(\lm) \xrightarrow[]{\gamma_0 \otimes 1} \clB(0)^\sigma \otimes \clB(\lm) = \clB(\lm)^\sigma. $$ Then, it is clear that this morphism has the required property. \end{proof} \begin{prop}\label{existence of pii lm nu} Let $\lm,\nu \in X^+$. Then, there exists a very strict $\imath$crystal morphism $$ \pi^\imath_{\lm,\nu} : \clB(\lm+\nu+\tau(\nu))^\sigma \rightarrow \clB(\lm)^\sigma $$ such that $$ \pi^\imath_{\lm,\nu}(\pi_{\lm+\nu+\tau(\nu)}(b)^\sigma) = \pi_\lm(b)^\sigma $$ for all $b \in \clB(\infty)$. \end{prop} \begin{proof} As in the proof of Lemma \ref{existence of rho lm}, there exists a very strict $\imath$crystal morphism $$ \clB(\sigma+\lm+\nu+\tau(\nu)) \xrightarrow[]{\eta_{\sigma+\nu+\tau(\nu),\lm}} \clB(\sigma+\nu+\tau(\nu)) \otimes \clB(\lm) \xrightarrow[]{\gamma_\nu \otimes 1} \clB(0)^\sigma \otimes \clB(\lm) = \clB(\lm)^\sigma $$ which sends $\pi_{\sigma+\lm+\nu+\tau(\nu)}(b)$ to $\pi_\lm(b)^\sigma$. On the other hand, by Lemma \ref{existence of rho lm}, we have a very strict $\imath$crystal morphism $$ \clB(\lm+\nu+\tau(\nu))^\sigma \hookrightarrow \clB(\sigma+\lm+\nu+\tau(\nu)). $$ Combining these two morphisms, we obtain a very strict $\imath$crystal morphism $$ \pi^\imath_{\lm,\nu} : \clB(\lm+\nu+\tau(\nu))^\sigma \rightarrow \clB(\lm)^\sigma. 
$$ Then, it is clear that this morphism satisfies the required property. \end{proof} Now, for each $\zeta \in X^\imath$, we obtain a projective system $\{ \clB(\lm)^\sigma \}_{\lm \in X^+, \ol{\sigma+\lm} = \zeta}$ of $\imath$crystals with very strict morphisms $\pi^\imath_{\lm,\nu} : \clB(\lm+\nu+\tau(\nu))^\sigma \rightarrow \clB(\lm)^\sigma$. We shall describe its projective limit in the category of $\imath$crystals and very strict morphisms. To do so, we prepare three lemmas. \begin{lem}\label{lamma; B(lm)sigma to Tsigma tensor B(lm)} Let $\lm \in X^+$. Then, there exists an $\imath$crystal isomorphism $\clB(\lm)^\sigma \rightarrow \clT_{\ol{\sigma}} \otimes \clB(\lm)$ which sends $b^\sigma$ to $t_{\ol{\sigma}} \otimes b$ for all $b \in \clB(\lm)$. \end{lem} \begin{proof} Let us compare the $\imath$crystal structures of $\clB(\lm)^\sigma$ and $\clT_{\ol{\sigma}} \otimes \clB(\lm)$. To do so, recall from Lemma \ref{Deduction from condition S's} \eqref{Deduction from condition S's 4} that $\vphi_i(\Etil_{\tau(i)} b) = \vphi_i(b)-1$ is equivalent to $\vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}(b)$. Then, the $\imath$crystal structure of $\clB(\lm)^\sigma$ is described as follows: Let $b \in \clB(\lm)$ and $i \in I$. \begin{enumerate} \item $\wti(b^\sigma) = \ol{\sigma} + \ol{\wt(b)}$. \item When $a_{i,\tau(i)} = 2$. \begin{align} \begin{split} &\beta_i(b^\sigma) = \begin{cases} \vep_i(b)+1 & \IF \ol{\vphi_i(b)} = \ol{1}, \\ \vep_i(b) & \IF \ol{\vphi_i(b)} = \ol{0}, \end{cases} \\ &\Btil_i b^\sigma = \begin{cases} (\Ftil_i b)^\sigma & \IF \ol{\vphi_i(b)} = \ol{1}, \\ (\Etil_i b)^\sigma & \IF \ol{\vphi_i(b)} = \ol{0}. \end{cases} \end{split} \nonumber \end{align} \item When $a_{i,\tau(i)} = 0$.
\begin{align} \begin{split} &\beta_i(b^\sigma) = \begin{cases} \vphi_i(b)-\wt_{\tau(i)}(b) & \IF \vphi_i(b) > \vphi_{\tau(i)}(b), \\ \vep_{\tau(i)}(b) & \IF \vphi_i(b) \leq \vphi_{\tau(i)}(b), \end{cases} \\ &\Btil_i b^\sigma = \begin{cases} (\Ftil_i b)^\sigma & \IF \vphi_i(b) > \vphi_{\tau(i)}(b), \\ (\Etil_{\tau(i)} b)^\sigma & \IF \vphi_i(b) \leq \vphi_{\tau(i)}(b). \end{cases} \end{split} \nonumber \end{align} \item When $a_{i,\tau(i)} = -1$. Setting $s'_i := \begin{cases} 0 & \IF i \in I_\tau, \\ 1 & \IF i \notin I_\tau, \end{cases}$ we have \begin{align} \begin{split} &F_i(b^\sigma) = \vphi_i(b), \\ &B_i(b^\sigma) = s'_i, \\ &E_i(b^\sigma) = \vphi_{\tau(i)}(b)+s'_i. \end{split} \nonumber \end{align} This implies that $B_i(b^\sigma) \leq E_i(b^\sigma)$. Therefore, we obtain the following: \begin{enumerate} \item When $\vphi_i(b) > \vphi_{\tau(i)}(b)+s'_i$. In this case, we have \begin{align} \begin{split} \beta_i(b^\sigma) &= \vphi_i(b)-s'_i-\wt_{\tau(i)}(b), \\ \Btil_i b^\sigma &= \begin{cases} \frac{1}{\sqrt{2}} (\Ftil_i b)^\sigma & \IF \vphi_i(b) = \vphi_{\tau(i)}(b)+s'_i+1 \AND \vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}(b)+1, \\ (\Ftil_i b)^\sigma & \OW. \end{cases} \end{split} \nonumber \end{align} \item When $\vphi_i(b) \leq \vphi_{\tau(i)}(b)+s'_i$. In this case, we have \begin{align} \begin{split} \beta_i(b^\sigma) &= \vep_{\tau(i)}(b), \\ \Btil_i b^\sigma &= \begin{cases} \frac{1}{\sqrt{2}} (\Etil_{\tau(i)} b)^\sigma & \IF \vphi_i(b) = \vphi_{\tau(i)}(b)+s'_i \AND \vphi_i(\Etil_{\tau(i)}b) = \vphi_i(b), \\ \frac{1}{\sqrt{2}}((\Etil_{\tau(i)}b)^\sigma + (\Ftil_i b)^\sigma) & \IF \vphi_i(b) = \vphi_{\tau(i)}(b)+s'_i > 0 \AND \vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}(b), \\ (\Etil_{\tau(i)}b)^\sigma & \OW. 
\end{cases} \end{split} \nonumber \end{align} \end{enumerate} \end{enumerate} On the other hand, the $\imath$crystal structure of $\clT_{\ol{\sigma}} \otimes \clB(\lm)$ is described as follows: \begin{enumerate} \item $\wti(t_{\ol{\sigma}} \otimes b) = \ol{\sigma} + \ol{\wt(b)}$. \item When $a_{i,\tau(i)} = 2$. \begin{align} \begin{split} &\beta_i(t_{\ol{\sigma}} \otimes b) = \begin{cases} \vep_i(b)+1 & \IF \ol{\vphi_i(b)} = \ol{1}, \\ \vep_i(b) & \IF \ol{\vphi_i(b)} = \ol{0}, \end{cases} \\ &\Btil_i t_{\ol{\sigma}} \otimes b = \begin{cases} t_{\ol{\sigma}} \otimes \Ftil_i b & \IF \ol{\vphi_i(b)} = \ol{1}, \\ t_{\ol{\sigma}} \otimes \Etil_i b & \IF \ol{\vphi_i(b)} = \ol{0}. \end{cases} \end{split} \nonumber \end{align} \item When $a_{i,\tau(i)} = 0$. \begin{align} \begin{split} &\beta_i(t_{\ol{\sigma}} \otimes b) = \begin{cases} \vphi_i(b)-\wt_{\tau(i)}(b) & \IF \vphi_i(b) > \vphi_{\tau(i)}(b), \\ \vep_{\tau(i)}(b) & \IF \vphi_i(b) \leq \vphi_{\tau(i)}(b), \end{cases} \\ &\Btil_i t_{\ol{\sigma}} \otimes b = \begin{cases} t_{\ol{\sigma}} \otimes \Ftil_i b & \IF \vphi_i(b) > \vphi_{\tau(i)}(b), \\ t_{\ol{\sigma}} \otimes \Etil_{\tau(i)} b & \IF \vphi_i(b) \leq \vphi_{\tau(i)}(b). \end{cases} \end{split} \nonumber \end{align} \item When $a_{i,\tau(i)} = -1$. Setting $s'_i := \begin{cases} 0 & \IF i \in I_\tau, \\ 1 & \IF i \notin I_\tau, \end{cases}$ we have \begin{align} \begin{split} &F_i(t_{\ol{\sigma}} \otimes b) = \vphi_i(b), \\ &B_i(t_{\ol{\sigma}} \otimes b) = -\infty, \\ &E_i(t_{\ol{\sigma}} \otimes b) = \vphi_{\tau(i)}(b)+s'_i. \end{split} \nonumber \end{align} This implies that $B_i(t_{\ol{\sigma}} \otimes b) < E_i(t_{\ol{\sigma}} \otimes b)$. Therefore, we obtain the following: \begin{enumerate} \item When $\vphi_i(b) > \vphi_{\tau(i)}(b)+s'_i$. 
In this case, we have \begin{align} \begin{split} \beta_i(t_{\ol{\sigma}} \otimes b) &= \vphi_i(b)-s'_i-\wt_{\tau(i)}(b), \\ \Btil_i(t_{\ol{\sigma}} \otimes b) &= \begin{cases} \frac{1}{\sqrt{2}} t_{\ol{\sigma}} \otimes \Ftil_i b & \IF \vphi_i(b) = \vphi_{\tau(i)}(b)+s'_i+1 \AND \vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}(b)+1, \\ t_{\ol{\sigma}} \otimes \Ftil_i b & \OW. \end{cases} \end{split} \nonumber \end{align} \item When $\vphi_i(b) \leq \vphi_{\tau(i)}(b)+s'_i$. In this case, we have \begin{align} \begin{split} \beta_i(t_{\ol{\sigma}} \otimes b) &= \vep_{\tau(i)}(b), \\ \Btil_i(t_{\ol{\sigma}} \otimes b) &= \begin{cases} \frac{1}{\sqrt{2}} t_{\ol{\sigma}} \otimes \Etil_{\tau(i)}b & \IF \vphi_i(b) = \vphi_{\tau(i)}(b)+s'_i \AND \vphi_i(\Etil_{\tau(i)}b) = \vphi_i(b), \\ \frac{1}{\sqrt{2}} t_{\ol{\sigma}} \otimes (\Etil_{\tau(i)}b + \Ftil_i b) & \IF \vphi_i(b) = \vphi_{\tau(i)}(b)+s'_i \AND \vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}(b), \\ t_{\ol{\sigma}} \otimes \Etil_{\tau(i)}b & \OW. \end{cases} \end{split} \nonumber \end{align} \end{enumerate} \end{enumerate} Thus, the proof is complete (note that $\vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}(b)$ implies that $\Ftil_i b \in \clB(\lm)$, and hence, $\vphi_i(b) > 0$). \end{proof} \begin{lem}\label{Tzeta otimes Tmu} Let $\zeta \in X^\imath$ and $\mu \in X$. Then, we have an $\imath$crystal isomorphism $\clT_\zeta \otimes \clT_\mu \simeq \clT_{\zeta + \ol{\mu}}$. \end{lem} \begin{proof} For each $i \in I$ with $a_{i,\tau(i)} = 2$, let $\zeta_i \in \Z/2\Z$ denote the value of $\zeta$ at $i$. Then, we have \begin{align} \begin{split} &\wti(t_\zeta \otimes t_\mu) = \zeta + \ol{\mu}, \\ &\beta_i(t_\zeta \otimes t_\mu) = \begin{cases} -\infty_\ev & \IF a_{i,\tau(i)} = 2 \AND \zeta_i + \ol{\la h_i,\mu \ra} = \ol{s_i}, \\ -\infty_\odd & \IF a_{i,\tau(i)} = 2 \AND \zeta_i + \ol{\la h_i,\mu \ra} \neq \ol{s_i}, \\ -\infty & \IF a_{i,\tau(i)} \neq 2, \end{cases} \\ &\Btil_i (t_\zeta \otimes t_\mu) = 0.
\end{split} \nonumber \end{align} This datum coincides with that of $t_{\zeta + \ol{\mu}} \in \clT_{\zeta + \ol{\mu}}$. Hence, the assertion follows. \end{proof} \begin{lem}\label{lemma: B(lm)sigma to T tensor Binfty} Let $\lm \in X^+$. Then, there exists an injective very strict $\imath$crystal morphism $\clB(\lm)^\sigma \hookrightarrow \clT_{\ol{\sigma+\lm}} \otimes \clB(\infty)$ which sends $\pi_\lm(b)^\sigma$ to $t_{\ol{\sigma+\lm}} \otimes b$ for all $b \in \clB(\infty;\lm)$. \end{lem} \begin{proof} By Lemmas \ref{lamma; B(lm)sigma to Tsigma tensor B(lm)} and \ref{Tzeta otimes Tmu}, we have an injective $\imath$crystal morphism $$ \clB(\lm)^\sigma \simeq \clT_{\ol{\sigma}} \otimes \clB(\lm) \hookrightarrow \clT_{\ol{\sigma}} \otimes (\clT_\lm \otimes \clB(\infty)) \simeq \clT_{\ol{\sigma+\lm}} \otimes \clB(\infty) $$ which sends $\pi_\lm(b)^\sigma$ to $t_{\ol{\sigma+\lm}} \otimes b$ for all $b \in \clB(\infty;\lm)$. Hence, to prove the assertion, it suffices to show that for each $i \in I$, $b \in \clB(\infty;\lm)$, and $b' \in \clB(\infty)$ with $(\Btil_i (t_{\ol{\sigma+\lm}} \otimes b), t_{\ol{\sigma+\lm}} \otimes b') \neq 0$, we have $b' \in \clB(\infty;\lm)$. Let $i \in I$, $b \in \clB(\infty;\lm)$. Since there are at most two $b' \in \clB(\infty)$ satisfying $(\Btil_i (t_{\ol{\sigma+\lm}} \otimes b),t_{\ol{\sigma+\lm}} \otimes b') \neq 0$, we may take $\nu \in X^+$ such that $b' \in \clB(\infty;\lm+\nu+\tau(\nu))$ for all such $b'$. As before, we have an injective $\imath$crystal morphism $$ \clB(\lm+\nu+\tau(\nu))^\sigma \hookrightarrow \clT_{\ol{\sigma+\lm+\nu+\tau(\nu)}} \otimes \clB(\infty) = \clT_{\ol{\sigma+\lm}} \otimes \clB(\infty). 
$$ Therefore, if we write $\Btil_i(t_{\ol{\sigma+\lm}} \otimes b) = \sum_{b' \in \clB(\infty;\lm+\nu+\tau(\nu))} c_{b'} t_{\ol{\sigma+\lm}} \otimes b'$ for some $c_{b'} \in \C$, we obtain $$ \Btil_i \pi_{\lm+\nu+\tau(\nu)}(b)^\sigma = \sum_{b' \in \clB(\infty;\lm+\nu+\tau(\nu))} c_{b'} \pi_{\lm+\nu+\tau(\nu)}(b')^\sigma. $$ On the other hand, by Lemma \ref{existence of pii lm nu}, we have an injective very strict $\imath$crystal morphism $$ \clB(\lm)^\sigma \hookrightarrow \clB(\lm+\nu+\tau(\nu))^\sigma $$ whose image is $\{ b^\sigma \mid b \in \clB(\lm+\nu+\tau(\nu);\lm) \}$. Therefore, we obtain $\pi_{\lm+\nu+\tau(\nu)}(b') \in \clB(\lm+\nu+\tau(\nu);\lm)$ for all $b'$ with $c_{b'} \neq 0$. In other words, we have $b' \in \clB(\infty;\lm)$ for all $b'$ with $c_{b'} \neq 0$. This proves our claim, and hence, the assertion follows. \end{proof} \begin{theo}\label{main at q=infty} Let $\zeta \in X^\imath$. For each $\lm \in X^+$ such that $\ol{\sigma+\lm} = \zeta$, there exists a very strict $\imath$crystal morphism $$ \pi^\imath_\lm : \clT_\zeta \otimes \clB(\infty) \rightarrow \clB(\lm)^\sigma;\ t_{\zeta} \otimes b \mapsto \pi_\lm(b)^\sigma. $$ Moreover, $\clT_\zeta \otimes \clB(\infty)$ and $\pi^\imath_\lm$'s form the projective limit of $\{ \clB(\lm)^\sigma \}_{\lm \in X^+, \ol{\sigma+\lm} = \zeta}$ in the category of $\imath$crystals and very strict morphisms. \end{theo} \begin{proof} Let $\lm \in X^+$ be such that $\ol{\sigma+\lm} = \zeta$, and $b \in \clB(\lm)$. By Lemma \ref{lemma: B(lm)sigma to T tensor Binfty}, we have an injective very strict $\imath$crystal morphism $$ \clB(\lm)^\sigma \hookrightarrow \clT_{\zeta} \otimes \clB(\infty) $$ which sends $\pi_\lm(b)^\sigma$ to $t_{\zeta} \otimes b$ for all $b \in \clB(\infty;\lm)$. Therefore, there exists a very strict $\imath$crystal morphism $$ \pi^\imath_\lm : \clT_{\zeta} \otimes \clB(\infty) \rightarrow \clB(\lm)^\sigma;\ t_{\zeta} \otimes b \mapsto \pi_\lm(b)^\sigma. 
$$ Then, we have $$ \pi^\imath_{\lm,\nu} \circ \pi^\imath_{\lm+\nu+\tau(\nu)} = \pi^\imath_\lm $$ for all $\nu \in X^+$. Let us prove the universality. Let $\clB$ be an $\imath$crystal and $\mu_\lm : \clB \rightarrow \clB(\lm)^\sigma$ a very strict $\imath$crystal morphism such that $\pi^\imath_{\lm,\nu} \circ \mu_{\lm+\nu+\tau(\nu)} = \mu_\lm$ for all $\lm,\nu \in X^+$ with $\ol{\sigma+\lm} = \zeta$. Define a map $\mu : \clB \rightarrow \clT_\zeta \otimes \clB(\infty)$ as follows. Let $b \in \clB$. If $\mu_\lm(b) = 0$ for all $\lm \in X^+$ with $\ol{\sigma+\lm} = \zeta$, then set $\mu(b) := 0$. If $\mu_\lm(b) = m_\lm(b)^\sigma$ for some $\lm \in X^+$ with $\ol{\sigma+\lm} = \zeta$ and $m_\lm(b) \in \clB(\lm)$, then set $\mu(b) := t_\zeta \otimes m(b)$, where $m(b) \in \clB(\infty)$ is such that $\pi_\lm(m(b)) = m_\lm(b)$. To see that $\mu$ is well-defined, let $\lm' \in X^+$ be such that $\ol{\sigma+\lm'} = \zeta$ and $\mu_{\lm'}(b) = m_{\lm'}(b)^\sigma$ for some $m_{\lm'}(b) \in \clB(\lm')$. Let $m'(b) \in \clB(\infty)$ be such that $\pi_{\lm'}(m'(b)) = m_{\lm'}(b)$. We want to show that $m(b) = m'(b)$. Since $\ol{\lm-\lm'} = \ol{0}$, there exist $\nu,\nu' \in X^+$ such that $\lm-\lm' = \nu'+\tau(\nu')-(\nu+\tau(\nu))$. Set $\lm'' := \lm+\nu+\tau(\nu) = \lm'+\nu'+\tau(\nu')$. Since $$ \pi^\imath_{\lm,\nu}(\mu_{\lm''}(b)) = \mu_\lm(b) = m_\lm(b)^\sigma, $$ we see that $\mu_{\lm''}(b) \neq 0$. Hence, there exists $b'' \in \clB(\infty)$ such that $\mu_{\lm''}(b) = \pi_{\lm''}(b'')^\sigma$. This implies that $$ \pi_\lm(b'')^\sigma = \pi^\imath_{\lm,\nu}(\pi_{\lm''}(b'')^\sigma) = \pi^\imath_{\lm,\nu}(\mu_{\lm''}(b)) = m_\lm(b)^\sigma = \pi_\lm(m(b))^\sigma $$ and hence, $$ b'' = m(b). $$ Similarly, we obtain $b'' = m'(b)$. Thus, we obtain $m(b) = m'(b)$, as desired. Hence, $\mu$ is well-defined.
For each $\lm \in X^+$ such that $\ol{\sigma+\lm} = \zeta$ (where we set $m(b) := 0$ if $\mu(b) = 0$), we have $$ \pi^\imath_\lm(\mu(b)) = \pi^\imath_\lm(t_\zeta \otimes m(b)) = \pi_\lm(m(b))^\sigma = m_\lm(b)^\sigma = \mu_\lm(b). $$ Therefore, we obtain $$ \pi^\imath_\lm \circ \mu = \mu_\lm. $$ It remains to show that $\mu$ is a very strict $\imath$crystal morphism. Let $b \in \clB$ and $i \in I$. Let us write $\Btil_i b = \sum_{b' \in \clB} c_{b'} b'$ and $\Btil_i \mu(b) = \sum_{b'' \in \clB(\infty)} d_{b''} t_\zeta \otimes b''$ for some $c_{b'},d_{b''} \in \C$. We can take $\lm \in X^+$ such that $\ol{\sigma+\lm} = \zeta$ and $\mu_\lm(b'), \pi_\lm(b'') \in \clB(\lm)$ for all $b' \in \clB$ and $b'' \in \clB(\infty)$ with $c_{b'},d_{b''} \neq 0$ and $\mu(b') \neq 0$. Then, we have \begin{align} \begin{split} \sum_{\substack{c_{b'} \neq 0 \\ \mu(b') \neq 0}} c_{b'} \mu_\lm(b') &= \mu_\lm(\Btil_i b) = \Btil_i \mu_\lm(b) = \Btil_i \pi^\imath_\lm(\mu(b)) = \pi^\imath_\lm(\Btil_i \mu(b)) = \sum_{d_{b''} \neq 0} d_{b''} \pi_\lm(b'')^\sigma. \end{split} \nonumber \end{align} This implies that $c_{b'} = d_{b''}$ if $\mu_\lm(b') = \pi_\lm(b'')^\sigma$. Therefore, noting that $\mu_\lm(b') = \pi_\lm(b'')^\sigma$ implies that $\mu(b') = t_\zeta \otimes b''$, we obtain $$ \mu(\Btil_i b) = \sum_{\substack{c_{b'} \neq 0 \\ \mu(b') \neq 0}} c_{b'} \mu(b') = \sum_{d_{b''} \neq 0} d_{b''} t_\zeta \otimes b'' = \Btil_i \mu(b). $$ Thus, the proof is complete. \end{proof} \begin{ex}\label{Tzeta otimes Binfty for diagonal type}\normalfont Suppose that our Satake diagram is of diagonal type. We retain the notation of Example \ref{set up for diagonal type}. Let us describe the $\imath$crystal structure of $\clT_\zeta \otimes \clB(\infty)$, $\zeta \in X^\imath$.
For each $i \in I_\tau = I_2$ and $b \in \clB(\infty)$, we have \begin{align} \begin{split} &F_i(t_\zeta \otimes b) = \vphi_i(b), \qu B_i(t_\zeta \otimes b) = -\infty, \qu E_i(t_\zeta \otimes b) = \vphi_{\tau(i)}(b)-\zeta_i, \\ &F_{\tau(i)}(t_\zeta \otimes b) = \vphi_{\tau(i)}(b), \qu B_{\tau(i)}(t_\zeta \otimes b) = -\infty, \qu E_{\tau(i)}(t_\zeta \otimes b)= \vphi_i(b)+\zeta_i, \end{split} \nonumber \end{align} where $\zeta_i := \la h_i-h_{\tau(i)}, \zeta \ra$. Then, we obtain \begin{align} \begin{split} &\beta_i(t_\zeta \otimes b) = \max(\vphi_i(b)+\zeta_i-\wt_{\tau(i)}(b), \vep_{\tau(i)}(b)), \\ &\beta_{\tau(i)}(t_\zeta \otimes b) = \max(\vphi_{\tau(i)}(b)-\zeta_i-\wt_i(b), \vep_i(b)), \\ &\Btil_i(t_\zeta \otimes b) = \begin{cases} t_\zeta \otimes \Ftil_i b & \IF \vphi_i(b) > \vphi_{\tau(i)}(b)-\zeta_i, \\ t_\zeta \otimes \Etil_{\tau(i)} b & \IF \vphi_i(b) \leq \vphi_{\tau(i)}(b)-\zeta_i, \end{cases} \\ &\Btil_{\tau(i)}(t_\zeta \otimes b) = \begin{cases} t_\zeta \otimes \Ftil_{\tau(i)} b & \IF \vphi_{\tau(i)}(b) > \vphi_i(b)+\zeta_i, \\ t_\zeta \otimes \Etil_i b & \IF \vphi_{\tau(i)}(b) \leq \vphi_i(b)+\zeta_i. \end{cases} \end{split} \nonumber \end{align} Under the identification $\U \simeq \U_{I'} \otimes \U_{I'}$, the crystal $\clB(\infty)$ is identified with $\clB(-\infty)_{I'} \otimes \clB(\infty)_{I'}$, where $\clB(-\infty)_{I'}$ denotes the crystal basis of the positive part of $\U_{I'}$. If we write $b \in \clB(\infty)$ as $b_1 \otimes b_2 \in \clB(-\infty)_{I'} \otimes \clB(\infty)_{I'}$, the $\U$-crystal structure of $\clB(\infty)$ and the $\U_{I'} \otimes \U_{I'}$-crystal structure of $\clB(\infty) = \clB(-\infty)_{I'} \otimes \clB(\infty)_{I'}$ are related as follows: Let $k \in I'$. 
\begin{align} \begin{split} &\vphi_{k_1}(b) = \vep_k(b_1), \qu \vphi_{k_2}(b) = \vphi_k(b_2), \qu \vep_{k_1}(b) = \vphi_k(b_1), \qu \vep_{k_2}(b) = \vep_k(b_2), \\ &\Ftil_{k_1} b = \Etil_k b_1 \otimes b_2, \qu \Ftil_{k_2} b = b_1 \otimes \Ftil_k b_2, \qu \Etil_{k_1} b = \Ftil_k b_1 \otimes b_2, \qu \Etil_{k_2} b = b_1 \otimes \Etil_k b_2. \end{split} \nonumber \end{align} Also, $X^\imath$ is identified with $X_{I'}$ in a way such that $\zeta_k := \la h_k,\zeta \ra = \la h_{k_2}-h_{k_1}, \zeta \ra = \zeta_{k_2}$. In Example \ref{icrystal of diagonal type is crystal}, we see that an $\imath$crystal can be thought of as a crystal over $\U_{I'}$. Then, the $\U_{I'}$-crystal structure of the $\imath$crystal $\clT_\zeta \otimes \clB(\infty)$ is described as follows: Let $k \in I'$ and $\clB(\infty) \ni b = b_1 \otimes b_2 \in \clB(-\infty)_{I'} \otimes \clB(\infty)_{I'}$. \begin{align} \begin{split} &\vphi_k(t_\zeta \otimes b) = \beta_{k_2}(t_\zeta \otimes b) = \max(\vphi_k(b_2)+\zeta_k+\wt_k(b_1),\vphi_k(b_1)), \\ &\vep_k(t_\zeta \otimes b) = \beta_{k_1}(t_\zeta \otimes b) = \max(\vep_k(b_1)-\zeta_k-\wt_k(b_2), \vep_k(b_2)), \\ &\Ftil_k(t_\zeta \otimes b) = \Btil_{k_2}(t_\zeta \otimes b) = \begin{cases} t_\zeta \otimes (b_1 \otimes \Ftil_k b_2) & \IF \vphi_k(b_2) > \vep_k(b_1)-\zeta_k, \\ t_\zeta \otimes (\Ftil_k b_1 \otimes b_2) & \IF \vphi_k(b_2) \leq \vep_k(b_1)-\zeta_k, \end{cases} \\ &\Etil_k(t_\zeta \otimes b) = \Btil_{k_1}(t_\zeta \otimes b) = \begin{cases} t_\zeta \otimes (\Etil_k b_1 \otimes b_2) & \IF \vep_k(b_1) > \vphi_k(b_2)+\zeta_k, \\ t_\zeta \otimes (b_1 \otimes \Etil_k b_2) & \IF \vep_k(b_1) \leq \vphi_k(b_2)+\zeta_k. \end{cases} \end{split} \nonumber \end{align} On the other hand, the $\U_{I'}$-crystal structure of $\clB(-\infty)_{I'} \otimes \clT_\zeta \otimes \clB(\infty)_{I'}$ is described as follows: Let $k \in I'$, $b_1 \in \clB(-\infty)_{I'}$, and $b_2 \in \clB(\infty)_{I'}$. 
First, we have $$ \vep_k(b_1 \otimes t_\zeta) = \vep_k(b_1)-\zeta_k, \qu \vphi_k(b_1 \otimes t_\zeta) = \vphi_k(b_1). $$ Hence, we obtain \begin{align} \begin{split} &\vphi_k(b_1 \otimes t_\zeta \otimes b_2) = \max(\vphi_k(b_1), \vphi_k(b_2)+\zeta_k+\wt_k(b_1)), \\ &\vep_k(b_1 \otimes t_\zeta \otimes b_2) = \max(\vep_k(b_1)-\zeta_k-\wt_k(b_2), \vep_k(b_2)), \\ &\Ftil_k(b_1 \otimes t_\zeta \otimes b_2) = \begin{cases} b_1 \otimes t_\zeta \otimes \Ftil_k b_2 & \IF \vep_k(b_1)-\zeta_k < \vphi_k(b_2), \\ \Ftil_k b_1 \otimes t_\zeta \otimes b_2 & \IF \vep_k(b_1)-\zeta_k \geq \vphi_k(b_2), \end{cases} \\ &\Etil_k(b_1 \otimes t_\zeta \otimes b_2) = \begin{cases} \Etil_k b_1 \otimes t_\zeta \otimes b_2 & \IF \vep_k(b_1)-\zeta_k > \vphi_k(b_2), \\ b_1 \otimes t_\zeta \otimes \Etil_k b_2 & \IF \vep_k(b_1)-\zeta_k \leq \vphi_k(b_2). \end{cases} \end{split} \nonumber \end{align} Therefore, the $\imath$crystal $\clT_\zeta \otimes \clB(\infty)$ is essentially the same as the $\U_{I'}$-crystal $\clB(-\infty)_{I'} \otimes \clT_\zeta \otimes \clB(\infty)_{I'}$. \end{ex} \subsection{Stability of the $\imath$canonical bases} In this subsection, we lift the results obtained in the previous subsection to based $\Ui$-modules. Lemmas \ref{cyclicity}--\ref{characterization of highest weight element} are preparations for this purpose. Given a sequence $\bfi = (i_1,\ldots,i_r) \in I^r$, set $$ F_{\bfi} := F_{i_1} \cdots F_{i_r}, \qu B_{\bfi} := B_{i_1} \cdots B_{i_r}. $$ We understand that $F_{\bfi} = 1 = B_{\bfi}$ when $r = 0$. Let $\clI \subset \bigsqcup_{r \geq 0} I^r$ be such that $\{ F_{\bfi} \mid \bfi \in \clI \}$ forms a basis of $\U^-$. By \cite[Proposition 6.2]{Ko14}, the set $\{ B_{\bfi} K_h \mid \bfi \in \clI,\ h \in Y^\imath \}$ forms a basis of $\Ui$. For each $r \in \Z_{\geq 0}$, set $\clI_r := \clI \cap I^r$, $\clI_{< r} := \bigsqcup_{s < r} \clI_s$. For each $\bfi \in \clI_r$, set $|\bfi| := r$. 
Then, for each $\bfi \in \clI$, we have \begin{align}\label{leading term of Bbfi} \begin{split} B_{\bfi} - F_{\bfi} \in \sum_{\bfi' \in \clI_{< |\bfi|}} F_{\bfi'} \U^{\geq 0}, \end{split} \end{align} where $\U^{\geq 0}$ denotes the subalgebra of $\U$ generated by $K_h$ and $E_i$ for $h \in Y$ and $i \in I$. \begin{lem}\label{cyclicity} Let $M$ be a weight $\U$-module, $\lm \in X$, and $v \in M_\lm$. Suppose that $E_i v = 0$ for all $i \in I$. Then, we have $$ \Ui v = \U v. $$ \end{lem} \begin{proof} The submodule $\Ui v$ is spanned by vectors of the form $B_{\bfi} v$, $\bfi \in \clI$. By equation \eqref{leading term of Bbfi}, we have $$ B_{\bfi} v = F_{\bfi} v + \sum_{\bfi' \in \clI_{< |\bfi|}} c_{\bfi',\bfi} F_{\bfi'} v $$ for some $c_{\bfi',\bfi} \in \bbK$. By induction on $|\bfi|$, this implies that $F_{\bfi} v \in \Ui v$. Since $\{ F_{\bfi} v \mid \bfi \in \clI \}$ spans $\U v$, the assertion follows. \end{proof} \begin{lem}\label{presentation of Verma as quotient of Ui} Let $\lm \in X$. Then, as a $\Ui$-module, we have $$ M(\lm) \simeq \Ui/\sum_{h \in Y^\imath} \Ui(K_h-q^{\la h,\lm \ra}). $$ \end{lem} \begin{proof} By Lemma \ref{cyclicity}, we have $M(\lm) = \Ui v_\lm$. Hence, there exists a surjective $\Ui$-module homomorphism $f : \Ui \rightarrow M(\lm)$ such that $f(1) = v_\lm$. It is clear that $\sum_{h \in Y^\imath} \Ui(K_h - q^{\la h,\lm \ra}) \subset \Ker f$. Let us prove the opposite inclusion. Let $x \in \Ker f$. We can write $x = \sum_{(\bfi,h) \in \clI \times Y^\imath} c_{\bfi,h} B_{\bfi} K_h$. Then, we have \begin{align}\label{expansion of xv} \begin{split} 0 = xv_\lm = \sum_{(\bfi,h)} c_{\bfi,h} q^{\la h,\lm \ra} B_{\bfi} v_\lm. \end{split} \end{align} On the other hand, by equation \eqref{leading term of Bbfi}, we have $$ B_{\bfi} v_\lm = F_{\bfi} v_\lm + \sum_{\bfi' \in \clI_{< |\bfi|}} c_{\bfi',\bfi} F_{\bfi'} v_\lm $$ for some $c_{\bfi',\bfi} \in \bbK$.
Since $\{ F_{\bfi} v_\lm \mid \bfi \in \clI \}$ forms a basis of $M(\lm)$, we see that $\{ B_{\bfi} v_\lm \mid \bfi \in \clI \}$ forms a basis of $M(\lm)$. This, together with identity \eqref{expansion of xv}, implies that $$ \sum_{h \in Y^\imath} c_{\bfi,h} q^{\la h,\lm \ra} = 0 \qu \Forall \bfi \in \clI. $$ Let $\U^{\imath,0}$ denote the subalgebra of $\Ui$ generated by $K_h$, $h \in Y^\imath$, and consider the algebra homomorphism $g : \U^{\imath,0} \rightarrow \bbK$ which sends $K_h$ to $q^{\la h,\lm \ra}$. Then, we have $\sum_{h \in Y^\imath} c_{\bfi,h} K_h \in \Ker g$. Since the ideal of $\U^{\imath,0}$ generated by $K_h-q^{\la h,\lm \ra}$, $h \in Y^\imath$, is contained in the kernel of $g$, and the quotient algebra is one-dimensional, we see that this ideal coincides with the kernel of $g$. Therefore, we obtain $$ \sum_h c_{\bfi,h} K_h \in \sum_{h' \in Y^\imath}\U^{\imath,0}(K_{h'} - q^{\la h',\lm \ra}) \qu \Forall \bfi \in \clI. $$ Since $x = \sum_{\bfi \in \clI} B_{\bfi}(\sum_{h \in Y^\imath} c_{\bfi,h} K_h)$, we conclude that $$ x \in \sum_{h \in Y^\imath} \Ui(K_h - q^{\la h,\lm \ra}), $$ as desired. This completes the proof. \end{proof} \begin{lem}\label{defining relation of V(sigma)} Let $\lm \in X$. Set $N(\lm)$ to be the $\Ui$-submodule of $M(\lm)$ generated by $b_i^{\la h_i,\lm \ra+1} v_\lm$, $i \in I$, where for each $n \geq 0$, we set $b_i^{n+1} := B_i^{n+1}$ if $a_{i,\tau(i)} \neq 2$, and \begin{align} \begin{split} &b_i^{n+1} := \begin{cases} \prod_{l=0}^{n}(B_i-\sgn(s_i)[|s_i|-n+2l]_i) & \IF n < |s_i|, \\ B_i \prod_{l=1}^{\frac{n-|s_i|}{2}}(B_i^2-[2l]_i^2) & \\ \cdot \prod_{l=n-|s_i|+1}^{n}(B_i-\sgn(s_i)[|s_i|-n+2l]_i) & \IF n \geq |s_i| \AND \ol{n} = \ol{s_i}, \\ \prod_{l=1}^{\frac{n-|s_i|+1}{2}}(B_i^2-[2l-1]_i^2) & \\ \cdot \prod_{l=n-|s_i|+1}^{n}(B_i-\sgn(s_i)[|s_i|-n+2l]_i) & \IF n \geq |s_i| \AND \ol{n} \neq \ol{s_i} \end{cases} \end{split} \nonumber \end{align} if $a_{i,\tau(i)} = 2$. Then, we have $V(\lm) = M(\lm)/N(\lm)$.
\end{lem} \begin{proof} Set $n_i := \la h_i,\lm \ra$. Since $$ V(\lm) = M(\lm)/\sum_{i \in I} \U F_i^{n_i + 1}v_\lm, $$ and $E_i F_j^{n_j+1} v_\lm = 0$ for all $i,j \in I$, it suffices, by Lemma \ref{cyclicity}, to show that $b_i^{n_i+1} v_\lm = F_i^{n_i+1} v_\lm$ for all $i \in I$. Let us first consider the case when $a_{i,\tau(i)} \neq 2$. Since $B_i = F_i + q_i^{s_i} E_{\tau(i)} K_i\inv$, $E_{\tau(i)} F_i = F_i E_{\tau(i)}$, and $E_{\tau(i)} v_\lm = 0$, we have $$ B_i^{n_i+1} v_\lm = F_i^{n_i+1} v_\lm, $$ as desired. Next, let us consider the case when $a_{i,\tau(i)} = 2$. From Example \ref{icrystal structure of V(lm) for AI} and the definitions of $\Btil_i$ and $\beta_i$, we see that $B_i$ acts on the $(n_i+1)$-dimensional irreducible $\U_i$-module diagonally with eigenvalues \begin{itemize} \item $\{ \sgn(s_i)[|s_i|-n_i+2l]_i \mid 0 \leq l \leq n_i \}$ when $n_i < |s_i|$, \item $\{ 0 \} \sqcup \{ \pm[2l]_i \mid 1 \leq l \leq \frac{n_i-|s_i|}{2} \} \sqcup \{ \sgn(s_i)[|s_i|-n_i+2l]_i \mid n_i-|s_i|+1 \leq l \leq n_i \}$ when $n_i \geq |s_i|$ and $\ol{n_i} = \ol{s_i}$, \item $\{ \pm[2l-1]_i \mid 1 \leq l \leq \frac{n_i-|s_i|+1}{2} \} \sqcup \{ \sgn(s_i)[|s_i|-n_i+2l]_i \mid n_i-|s_i|+1 \leq l \leq n_i \}$ when $n_i \geq |s_i|$ and $\ol{n_i} \neq \ol{s_i}$. \end{itemize} This implies that \begin{align}\label{vanishing} \begin{split} b_i^{n_i+1} v_\lm = 0 \qu \text{ in } V(\lm). \end{split} \end{align} On the other hand, $b_i^{n_i+1}$ is of the form $$ b_i^{n_i+1} = B_i^{n_i+1} + \sum_{k = 0}^{n_i} c_k B_i^k $$ for some $c_k \in \bbK$. Then, equation \eqref{leading term of Bbfi} implies that $$ b_i^{n_i+1} v_\lm = F_i^{n_i+1} v_\lm + \sum_{k= 0}^{n_i} c'_k F_i^k v_\lm $$ for some $c'_k \in \bbK$, and hence, $b_i^{n_i+1} v_\lm = \sum_{k=0}^{n_i} c'_k F_i^k v_\lm$ in $V(\lm)$. Since $\{ F_i^k v_\lm \mid 0 \leq k \leq n_i \}$ forms a linearly independent subset of $V(\lm)$, identity \eqref{vanishing} implies that $c'_k = 0$ for all $k$.
Therefore, we obtain $$ b_i^{n_i+1} v_\lm = F_i^{n_i+1} v_\lm, $$ as desired. This completes the proof. \end{proof} \begin{lem}\label{characterization of highest weight element} Let $\nu \in X^+$ and $b \in \clB(\sigma+\nu+\tau(\nu))$. Then, we have $$ \beta_i(b) = 0, \qu \Btil_i b = 0 \qu \Forall i \in I $$ if and only if $b = b_{\sigma+\nu+\tau(\nu)}$. \end{lem} \begin{proof} By Corollary \ref{icrystal structure on crystal}, we see that $\beta_i(b) = 0$ and $\Btil_i b = 0$ if and only if \begin{itemize} \item $|s_i| \leq \vphi_i(b)$, $\ol{s_i} = \ol{\vphi_i(b)}$, and $\vep_i(b) = 0$ when $a_{i,\tau(i)} = 2$, \item $\vphi_i(b) \leq \vphi_{\tau(i)}(b)+s_i$ and $\vep_{\tau(i)}(b) = 0$ when $a_{i,\tau(i)} \neq 2$. \end{itemize} It is easily verified that $b_{\sigma+\nu+\tau(\nu)}$ satisfies these conditions. Conversely, if $b$ satisfies these conditions, we have $\vep_i(b) = 0$ for all $i \in I$. This implies that $b = b_{\sigma+\nu+\tau(\nu)}$. This completes the proof. \end{proof} Now, recall from Proposition \ref{V(lm)sigma is based} (see also Example \ref{A-forms}) that $V(\lm)^\sigma$ is a based $\Ui$-module for all $\lm \in X^+$. \begin{prop}\label{gamma_sigma} Let $\nu \in X^+$. Then, there exists a based $\Ui$-module homomorphism $\gamma_\nu : V(\sigma+\nu+\tau(\nu)) \rightarrow V(0)^\sigma$ such that $\gamma_\nu(v_{\sigma+\nu+\tau(\nu)}) = v_0^\sigma$. \end{prop} \begin{proof} Recall that $V(0)^\sigma$ is isomorphic to the quotient of $\Ui$ by the left $\Ui$-submodule generated by $B_i$, $i \in I$ and $K_h - q^{\la h,\sigma \ra}$, $h \in Y^\imath$. On the other hand, by Lemmas \ref{presentation of Verma as quotient of Ui} and \ref{defining relation of V(sigma)}, $V(\sigma+\nu+\tau(\nu))$ is isomorphic to the quotient of $\Ui$ by the left $\Ui$-submodule generated by $b_i^{\la h_i,\sigma+\nu+\tau(\nu) \ra+1}$, $i \in I$ and $K_h-q^{\la h,\sigma+\nu+\tau(\nu) \ra}$, $h \in Y^\imath$.
Noting that $b_i^{\la h_i,\sigma+\nu+\tau(\nu) \ra + 1} \in \Ui B_i$ and $\la h,\sigma+\nu+\tau(\nu) \ra = \la h,\sigma \ra$ for all $i \in I$ and $h \in Y^\imath$, we see that there exists a surjective $\Ui$-module homomorphism $\gamma_\nu : V(\sigma+\nu+\tau(\nu)) \rightarrow V(0)^\sigma$ such that $\gamma_\nu(v_{\sigma+\nu+\tau(\nu)}) = v_0^\sigma$. Let us show that $\gamma_\nu$ is a based $\Ui$-module homomorphism. Set $K := \Ker \gamma_\nu$. Since $\wp^*$ preserves $\Ui$, the complement $K^\perp \subset V(\sigma+\nu+\tau(\nu))$ of $K$ is isomorphic to $V(0)^\sigma$. Hence, there exists $v'_0 \in V(\sigma+\nu+\tau(\nu))$ such that $K^\perp = \bbK v'_0$. We may assume that $v'_0 \in \clL(\sigma+\nu+\tau(\nu))$ and $b'_0 := \ev_\infty(v'_0) \neq 0$. Since $K^\perp \simeq V(0)^\sigma$, we must have $$ \beta_i(b'_0) = 0, \ \Btil_i b'_0 = 0 \qu \Forall i \in I. $$ This implies that $\beta_i(b) = 0$, $\Btil_i b = 0$ for all $b \in \clB(\sigma+\nu+\tau(\nu))$ with $(b'_0,b) \neq 0$. By Lemma \ref{characterization of highest weight element}, we obtain $b = b_{\sigma+\nu+\tau(\nu)}$ for all such $b$, and hence, $$ b'_0 = b_{\sigma+\nu+\tau(\nu)}. $$ Let $b \in \clB(\sigma+\nu+\tau(\nu))$. Then, we have $$ (G^\imath(b),v'_0) \equiv (b,b'_0) = (b,b_{\sigma+\nu+\tau(\nu)}) = \delta_{b,b_{\sigma+\nu+\tau(\nu)}} \qu \pmod{q\inv \bbK_\infty}. $$ Hence, we see that $$ \gamma_\nu(G^\imath(b)) = \gamma_\nu(\frac{(G^\imath(b),v'_0)}{(v'_0,v'_0)} v'_0) = \frac{(G^\imath(b),v'_0)}{(v'_0,v'_0)} v_0^\sigma \in (\delta_{b,b_{\sigma+\nu+\tau(\nu)}} + q\inv \bbK_\infty) v_0^\sigma. $$ On the other hand, since $G^\imath(b) \in V(\sigma+\nu+\tau(\nu))_{\bfA} = \Uidot_{\bfA} v_{\sigma+\nu+\tau(\nu)}$, we have $\gamma_\nu(G^\imath(b)) \in \Uidot_{\bfA} v_0^\sigma = V(0)^\sigma_{\bfA}$. Similarly, since $G^\imath(b)$ is bar-invariant, so is $\gamma_\nu(G^\imath(b))$. Thus, we obtain $$ \gamma_\nu(G^\imath(b)) = 0 $$ if $b \neq b_{\sigma+\nu+\tau(\nu)}$. This completes the proof.
\end{proof} \begin{prop}\label{rholm at module level} Let $\lm \in X^+$. Then, there exists a based $\Ui$-module homomorphism $\rho_\lm : V(\sigma+\lm) \rightarrow V(\lm)^\sigma$ such that $\rho_\lm(G^\imath(\pi_{\sigma+\lm}(b))) = G^\imath(\pi_\lm(b)^\sigma)$ for all $b \in \clB(\infty)$. \end{prop} \begin{proof} Let $\eta_{\sigma,\lm} : V(\sigma+\lm) \rightarrow V(\sigma) \otimes V(\lm)$ denote the $\U$-module homomorphism such that $\eta_{\sigma,\lm}(v_{\sigma+\lm}) = v_\sigma \otimes v_\lm$. By \cite[Proposition 25.1.2]{L10}, we have $$ \eta_{\sigma,\lm}(G(\pi_{\sigma+\lm}(b))) \in v_\sigma \otimes G(\pi_{\lm}(b)) + q\inv \clL(\sigma) \otimes \clL(\lm) \qu \Forall b \in \clB(\infty;\lm), $$ and $$ (\eta_{\sigma,\lm}(G(\pi_{\sigma+\lm}(b))), v_\sigma \otimes G(b')) \in q\inv \bbK_\infty \qu \Forall b \in \clB(\infty) {\setminus} \clB(\infty;\lm),\ b' \in \clB(\lm). $$ Composing with $\gamma_0$ from Proposition \ref{gamma_sigma} on the first factor, we obtain a $\Ui$-module homomorphism $$ \rho_\lm := (\gamma_0 \otimes \id) \circ \eta_{\sigma,\lm} : V(\sigma+\lm) \rightarrow V(\lm)^\sigma. $$ Then, we see that $\rho_\lm(G^\imath(b))$ is $\imath$bar-invariant, and belongs to the intersection of the $\bbK_\infty$-form and the $\bfA$-form. Moreover, for each $b \in \clB(\infty)$, we have $$ \ev_\infty(\rho_\lm(G^\imath(\pi_{\sigma+\lm}(b)))) = \pi_\lm(b)^\sigma. $$ By the above, we conclude that $$ \rho_\lm(G^\imath(\pi_{\sigma+\lm}(b))) = G^\imath(\pi_\lm(b)^\sigma) $$ as desired. This completes the proof. \end{proof} \begin{prop} Let $\lm,\nu \in X^+$. Then, there exists a based $\Ui$-module homomorphism $\pi^\imath_{\lm,\nu} : V(\lm+\nu+\tau(\nu))^\sigma \rightarrow V(\lm)^\sigma$ such that $$ \pi^\imath_{\lm,\nu}(G^\imath(\pi_{\lm+\nu+\tau(\nu)}(b)^\sigma)) = G^\imath(\pi_{\lm}(b)^\sigma) $$ for all $b \in \clB(\infty)$.
\end{prop} \begin{proof} Consider the linear map $\pi^\imath_{\lm,\nu} : V(\lm+\nu+\tau(\nu))^\sigma \rightarrow V(\lm)^\sigma$ given by $$ \pi^\imath_{\lm,\nu}(G^\imath(\pi_{\lm+\nu+\tau(\nu)}(b)^\sigma)) = G^\imath(\pi_{\lm}(b)^\sigma), \qu b \in \clB(\infty). $$ In order to prove the assertion, it suffices to show that $\pi^\imath_{\lm,\nu}$ is a $\Ui$-module homomorphism. Note that the following diagram $$ \xymatrix@C=50pt{ V(\sigma+\lm+\nu+\tau(\nu)) \ar[r]^-{\eta_{\sigma+\nu+\tau(\nu),\lm}} \ar[d]_-{\rho_{\lm+\nu+\tau(\nu)}} & V(\sigma+\nu+\tau(\nu)) \otimes V(\lm) \ar[d]^-{{\gamma_\nu} \otimes \id} \\ V(\lm+\nu+\tau(\nu))^\sigma \ar[r]_-{\pi^\imath_{\lm,\nu}} & V(\lm)^\sigma } $$ commutes. For each $b \in \clB(\infty)$ and $x \in \Ui$, we have \begin{align} \begin{split} \pi^\imath_{\lm,\nu}(x \cdot G^\imath(\pi_{\lm+\nu+\tau(\nu)}(b)^\sigma)) &= \pi^\imath_{\lm,\nu}(x \cdot \rho_{\lm+\nu+\tau(\nu)}(G^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b)))) \\ &=(\pi^\imath_{\lm,\nu} \circ \rho_{\lm+\nu+\tau(\nu)})(x \cdot G^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b))) \\ &=((\gamma_\nu \otimes \id) \circ \eta_{\sigma+\nu+\tau(\nu),\lm})(xG^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b))) \\ &=x((\gamma_\nu \otimes \id) \circ \eta_{\sigma+\nu+\tau(\nu),\lm})(G^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b))) \\ &=x(\pi^\imath_{\lm,\nu} \circ \rho_{\lm+\nu+\tau(\nu)})(G^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b))) \\ &=x \cdot G^\imath(\pi_\lm(b)^\sigma) \\ &=x \cdot \pi^\imath_{\lm,\nu}(G^\imath(\pi_{\lm+\nu+\tau(\nu)}(b)^\sigma)). \end{split} \nonumber \end{align} This shows that $\pi^\imath_{\lm,\nu}$ is a $\Ui$-module homomorphism. Hence, the proof completes. \end{proof} Now, for each $\zeta \in X^\imath$, we obtain a projective system $\{ V(\lm)^\sigma \}_{\lm \in X^+, \ol{\lm} = \zeta}$ of based $\Ui$-modules and based homomorphisms $\pi^\imath_{\lm,\nu}$. \begin{theo} Let $\zeta \in X^\imath$. 
Then, for each $\lm \in X^+$ such that $\ol{\sigma+\lm} = \zeta$, there exists a based $\Ui$-module homomorphism $\pi^\imath_\lm : \Uidot \mathbf{1}_\zeta \rightarrow V(\lm)^\sigma$ such that $$ \pi^\imath_\lm(G^\imath_\zeta(b)) = G^\imath(\pi_\lm(b)^\sigma) $$ for all $b \in \clB(\infty)$. Moreover, $\Uidot \mathbf{1}_\zeta$ is the projective limit of $\{ V(\lm)^\sigma \}_{\lm \in X^+, \ol{\sigma+\lm} = \zeta}$ in the category of based $\Ui$-modules and based homomorphisms. \end{theo} \begin{proof} Let $\lm \in X^+$ be such that $\ol{\sigma+\lm} = \zeta$. There exists a $\Ui$-module homomorphism $\Uidot \mathbf{1}_\zeta \rightarrow V(\sigma+\lm)$ which sends $\mathbf{1}_\zeta$ to $v_{\sigma+\lm}$. Composing with $\rho_\lm$, we obtain a $\Ui$-module homomorphism $\pi^\imath_\lm : \Uidot \mathbf{1}_\zeta \rightarrow V(\lm)^\sigma$ which sends $\mathbf{1}_\zeta$ to $v_\lm^\sigma$. Then, for each $\nu \in X^+$, we have $$ \pi^\imath_{\lm,\nu} \circ \pi^\imath_{\lm+\nu+\tau(\nu)} = \pi^\imath_\lm. $$ Let us show that $\pi^\imath_\lm$ is based. Let $b \in \clB(\infty)$. By Theorem \ref{asymptotical limit}, there exists $\nu \in X^+$ such that $$ G^\imath_{\zeta}(b) v_{\sigma+\lm+\nu+\tau(\nu)} = G^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b)). $$ Then, we have \begin{align} \begin{split} \pi^\imath_\lm(G^\imath_\zeta(b)) &= \pi^\imath_{\lm,\nu} \circ \pi^\imath_{\lm+\nu+\tau(\nu)}(G^\imath_\zeta(b)) \\ &= \pi^\imath_{\lm,\nu} \circ \rho_{\lm+\nu+\tau(\nu)}(G^\imath(\pi_{\sigma+\lm+\nu+\tau(\nu)}(b))) \\ &= \pi^\imath_{\lm,\nu}(G^\imath(\pi_{\lm+\nu+\tau(\nu)}(b)^\sigma)) \\ &= G^\imath(\pi_\lm(b)^\sigma), \end{split} \nonumber \end{align} as desired. The universality can be proved in a similar way to Theorem \ref{main at q=infty}. This completes the proof.
\end{proof} The very strict $\imath$crystal morphism $\pi^\imath_\lm : \clT_\zeta \otimes \clB(\infty) \rightarrow \clB(\lm)^\sigma$ can be thought of as the crystal limit of the based $\Ui$-module homomorphism $\pi^\imath_\lm : \Uidot \mathbf{1}_\zeta \rightarrow V(\lm)^\sigma$. Hence, it is reasonable to denote $\bigsqcup_{\zeta \in X^\imath} \clT_\zeta \otimes \clB(\infty)$ by $\clBidot$, and call it the $\imath$crystal basis of $\Uidot$. \begin{ex}\normalfont Suppose that our Satake diagram is of diagonal type. As we have seen in Example \ref{Tzeta otimes Binfty for diagonal type}, the $\imath$crystal $\clT_\zeta \otimes \clB(\infty)$ is essentially the same as the $\U_{I'}$-crystal $\clB(-\infty)_{I'} \otimes \clT_\zeta \otimes \clB(\infty)_{I'}$. Therefore, our description $\clBidot = \bigsqcup_{\zeta \in X^\imath} \clT_\zeta \otimes \clB(\infty)$ of the $\imath$crystal basis of the modified $\imath$quantum group $\Uidot$ is essentially the same as Kashiwara's description \cite[Theorem 3.1.1]{Ka94} $\clBdot = \bigsqcup_{\zeta \in X_{I'}} \clB(-\infty)_{I'} \otimes \clT_\zeta \otimes \clB(\infty)_{I'}$ of the crystal basis of the modified quantum group $\Udot_{I'}$. \end{ex} \section{Tensor product rule for $\imath$crystal}\label{Section: tensor product rule for icrystal} In this section, we shall construct an $\imath$crystal from an $\imath$crystal $\clB_1$ and a crystal $\clB_2$ satisfying certain conditions, whose underlying set is $\clB_1 \times \clB_2$. The construction is motivated by the representation theoretic results in the previous section. \subsection{Statements} Given an $\imath$crystal $\clB_1$ and a crystal $\clB_2$, consider the direct product $\clB := \clB_1 \times \clB_2$, and identify $\ol{\clL} := \C\clB$ with $\clL_1 \otimes_{\C} \clL_2$, where $\clL_i := \C\clB_i$ for $i = 1,2$.
Then, $\{ b_1 \otimes b_2 \mid b_1 \in \clB_1,\ b_2 \in \clB_2 \}$ forms an orthonormal basis of $\ol{\clL}$ with respect to the Hermitian inner product induced by those on $\clL_1$ and $\clL_2$ making $\clB_1$ and $\clB_2$ orthonormal bases; $$ (b_1 \otimes b_2, b'_1 \otimes b'_2) := (b_1,b'_1) (b_2,b'_2). $$ We identify $(b_1,b_2) \in \clB$ with $b_1 \otimes b_2$, and write $\clB = \clB_1 \otimes \clB_2$. For $i \in I$ and $b = b_1 \otimes b_2 \in \clB$, set \begin{align} \begin{split} &F_i(b) := \begin{cases} \vphi_i(b_2) + \delta_{\ol{\beta_i(b_1)+1},\ol{\vphi_i(b_2)}} & \IF a_{i,\tau(i)} = 2, \\ \vphi_i(b_2) & \IF a_{i,\tau(i)} \neq 2, \end{cases} \\ &B_i(b) := \begin{cases} \beta_i(b_1) & \IF a_{i,\tau(i)} = 2, \\ \beta_i(b_1)-\wti_i(b_1)+s_i & \IF a_{i,\tau(i)} \neq 2, \end{cases} \\ &E_i(b) := \begin{cases} \vphi_i(b_2) & \IF a_{i,\tau(i)} = 2, \\ \vphi_{\tau(i)}(b_2)-\wti_i(b_1)+s_i & \IF a_{i,\tau(i)} \neq 2, \end{cases} \\ \end{split} \nonumber \end{align} where we set $s_i = 0$ for all $i \in I$ with $a_{i,\tau(i)} = 0$. Note that we have $B_i(b) = \beta_{\tau(i)}(b_1)$ if $a_{i,\tau(i)} = 0$, and $$ B_i(b) = \begin{cases} \beta_{\tau(i)}(b_1) & \IF \beta_i(b_1) = \beta_{\tau(i)}(b_1)+\wti_i(b_1)-s_i, \\ \beta_{\tau(i)}(b_1)+1 & \IF \beta_i(b_1) \neq \beta_{\tau(i)}(b_1)+\wti_i(b_1)-s_i \end{cases} $$ if $a_{i,\tau(i)} = -1$. Also, note that when $a_{i,\tau(i)} = 2$, we have $$ F_i(b) = \begin{cases} E_i(b) & \IF \ol{\beta_i(b_1)} = \ol{\vphi_i(b_2)}, \\ E_i(b)+1 & \IF \ol{\beta_i(b_1)} \neq \ol{\vphi_i(b_2)}. \end{cases} $$ \begin{prop}\label{tensor product of icrystal and crystal}\normalfont Let $\clB_1$ be an $\imath$crystal, $\clB_2$ a crystal. Assume that $\clB_2$ satisfies conditions {\rm (S1)}--{\rm (S3)'} in Subsection \ref{subsection: cyrstal} for all $i,\tau(i) \in I$ with $a_{i,\tau(i)} \neq 2$. 
For each $b_1 \in \clB_1$, $b_2 \in \clB_2$, and $i \in I$, set $b := b_1 \otimes b_2$, $\beta_i := \beta_i(b_1)$, $\wti := \wti(b_1)$, $\wti_i := \wti_i(b_1)$, $\vep_i := \vep_i(b_2)$, $\vphi_i := \vphi_i(b_2)$, $\wt := \wt(b_2)$, $\wt_i := \wt_i(b_2)$. Then, $\clB := \clB_1 \otimes \clB_2$ is equipped with an $\imath$crystal structure as follows: Let $b_1 \in \clB_1$, $b_2 \in \clB_2$, and $i \in I$. \begin{enumerate} \item $\wti(b) = \wti + \ol{\wt}$. \item If $a_{i,\tau(i)} = 2$, then \begin{align} \begin{split} \beta_i(b) &= \max(F_i(b),B_i(b),E_i(b))-\wt_i \\ &= \begin{cases} \vep_i+1 & \IF F_i(b) > B_i(b), E_i(b), \\ \beta_i-\wt_i & \IF F_i(b) \leq B_i(b) > E_i(b), \\ \vep_i & \IF F_i(b), B_i(b) \leq E_i(b), \end{cases} \\ \Btil_i b &= \begin{cases} b_1 \otimes \Ftil_i b_2 & \IF F_i(b) > B_i(b),E_i(b), \\ \Btil_i b_1 \otimes b_2 & \IF F_i(b) \leq B_i(b) > E_i(b), \\ b_1 \otimes \Etil_i b_2 & \IF F_i(b),B_i(b) \leq E_i(b). \\ \end{cases} \end{split} \nonumber \end{align} Here, we understand that $-\infty < -\infty_\ev,-\infty_\odd < a$ for all $a \in \Z$. \item If $a_{i,\tau(i)} = 0$, then \begin{align} \begin{split} \beta_i(b) &= \max(F_i(b),B_i(b),E_i(b))+\wti_i-\wt_{\tau(i)} \\ &= \begin{cases} \vphi_i+\wti_i-\wt_{\tau(i)} & \IF F_i(b) > B_i(b),E_i(b), \\ \beta_i-\wt_{\tau(i)} & \IF F_i(b) \leq B_i(b) > E_i(b), \\ \vep_{\tau(i)} & \IF F_i(b),B_i(b) \leq E_i(b), \end{cases} \\ \Btil_i b &= \begin{cases} b_1 \otimes \Ftil_i b_2 & \IF F_i(b) > B_i(b),E_i(b), \\ \Btil_i b_1 \otimes b_2 & \IF F_i(b) \leq B_i(b) > E_i(b), \\ b_1 \otimes \Etil_{\tau(i)}b_2 & \IF F_i(b),B_i(b) \leq E_i(b). \end{cases} \end{split} \nonumber \end{align} \item If $a_{i,\tau(i)} = -1$, then we have \begin{align} \begin{split} \beta_i(b) &= \max(F_i(b),B_i(b),E_i(b))+\wti_i-s_i-\wt_{\tau(i)}, \\ &=\begin{cases} \vphi_i+\wti_i-s_i-\wt_{\tau(i)} & \IF F_i(b) > B_i(b), E_i(b), \\ \beta_i-\wt_{\tau(i)} & \IF F_i(b) \leq B_i(b) > E_i(b), \\ \vep_{\tau(i)} & \IF F_i(b), B_i(b) \leq E_i(b). 
\end{cases} \end{split} \nonumber \end{align} Also, the following hold: \begin{enumerate} \item When $F_i(b) > B_i(b), E_i(b)$, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes \Ftil_i b_2 & \IF F_i(b)=E_i(b)+1 \AND \vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}+1, \\ b_1 \otimes \Ftil_i b_2 & \OW. \end{cases} $$ Here and below, any condition involving $\vphi_i(b_2)$ for $b_2 \in \clB_2 \sqcup \{0\}$ is understood to include the requirement that $b_2 \in \clB_2$. \item When $F_i(b) \leq B_i(b) > E_i(b)$, \begin{align} \begin{split} &\Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} \Btil_i b_1 \otimes b_2 & \IF B_i(b) = E_i(b)+1 \AND \beta_i(\Btil_i b_1) = \beta_i-2, \\ \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes b_2 + b_1 \otimes \Ftil_i b_2) & \IF F_i(b) = B_i(b) \neq \beta_{\tau(i)}, \\ \Btil_i b_1 \otimes b_2 & \OW. \end{cases} \end{split} \nonumber \end{align} Here and below, any condition involving $\beta_i(b_1)$ for $b_1 \in \clL_1$ is understood to include the requirement that $b_1 \in \clB_1$. \item When $F_i(b), B_i(b) \leq E_i(b)$, \begin{align} \begin{split} &\Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} b_1 \otimes \Etil_{\tau(i)}b_2 & \IF E_i(b) = F_i(b) \AND \vphi_i(\Etil_{\tau(i)}b_2) = \vphi_i, \\ &\OR E_i(b) = B_i(b) = \beta_{\tau(i)} \AND \vphi_i(\Etil_{\tau(i)}b_2) < E_i(b), \\ \frac{1}{\sqrt{2}}(b_1 \otimes \Etil_{\tau(i)}b_2 + b_1 \otimes \Ftil_i b_2) & \IF E_i(b) = F_i(b) > \beta_{\tau(i)} \AND \vphi_i(\Etil_{\tau(i)} b_2) = \vphi_i-1, \\ b_1 \otimes \Etil_{\tau(i)}b_2 & \OW. \end{cases} \end{split} \nonumber \end{align} \end{enumerate} \end{enumerate} \end{prop} Since the proof of this proposition is lengthy and independent of the later argument, we put it in Subsection \ref{Subsection: proof of tensor product rule}.
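The two expressions for $\beta_i(b)$ in case (2) can be reconciled directly from the definitions; the following remark records the computation, assuming for simplicity that $\beta_i(b_1)$ and $\vphi_i(b_2)$ are integers (rather than one of the formal values $-\infty$, $-\infty_\ev$, $-\infty_\odd$). \begin{rem}\normalfont Suppose that $a_{i,\tau(i)} = 2$, and write $\delta := \delta_{\ol{\beta_i(b_1)+1},\ol{\vphi_i(b_2)}}$, so that $F_i(b) = \vphi_i(b_2)+\delta$, $B_i(b) = \beta_i(b_1)$, and $E_i(b) = \vphi_i(b_2)$. Recall also the identity $\vphi_i(b_2)-\vep_i(b_2) = \wt_i(b_2)$. If $F_i(b) > B_i(b), E_i(b)$, then $F_i(b) > E_i(b)$ forces $\delta = 1$, and hence $$ \beta_i(b) = F_i(b)-\wt_i(b_2) = \vphi_i(b_2)+1-\wt_i(b_2) = \vep_i(b_2)+1. $$ If $F_i(b) \leq B_i(b) > E_i(b)$, then $$ \beta_i(b) = B_i(b)-\wt_i(b_2) = \beta_i(b_1)-\wt_i(b_2). $$ Finally, if $F_i(b), B_i(b) \leq E_i(b)$, then $$ \beta_i(b) = E_i(b)-\wt_i(b_2) = \vphi_i(b_2)-\wt_i(b_2) = \vep_i(b_2). $$ \end{rem}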
\begin{rem}\label{coincidence at Vnatural}\normalfont The tensor product rule above for $I = \{ i,\tau(i) \}$ with $a_{i,\tau(i)} \neq -1$ coincides with the one given in Propositions \ref{deg and Btil on tensor of crystal bases} and \ref{Btil on tensor of crystal bases for 0}. When $a_{i,\tau(i)} = -1$ and $\clB_2 = \clB_\natural$, it coincides with the one given in Proposition \ref{tensor rule in terms of icrystal data}. \end{rem} \begin{prop}\label{associativity for icrystal} The tensor product of an $\imath$crystal and a crystal is associative. Namely, let $\clB_1$ be an $\imath$crystal, and $\clB_2,\clB_3$ crystals such that $\clB_2,\clB_3,\clB_2 \otimes \clB_3$ satisfy conditions {\rm (S1)}--{\rm (S3)'} for all $i,\tau(i) \in I$ with $a_{i,\tau(i)} \neq 2$. Then, the canonical map $$ \clB_1 \otimes (\clB_2 \otimes \clB_3) \rightarrow (\clB_1 \otimes \clB_2) \otimes \clB_3;\ b_1 \otimes (b_2 \otimes b_3) \mapsto (b_1 \otimes b_2) \otimes b_3 $$ gives rise to an isomorphism of $\imath$crystals. \end{prop} Since the proof of this proposition is lengthy and independent of the later argument, we put it in Subsection \ref{Subsection: proof of associativity}. \begin{rem}\label{icrystal on tensor power}\normalfont Suppose that $I = \{ i,\tau(i) \}$ for some $i \in I$. Let $V_\natural$ denote the natural representation of $\U$, and $\clB_\natural$ its crystal basis. By Remark \ref{coincidence at Vnatural} and Proposition \ref{associativity for icrystal}, the $\beta_i$ and $\Btil_i$ on $\clB_\natural^{\otimes N} = \clB(0) \otimes \clB_\natural^{\otimes N}$ combinatorially defined in this section coincide with the ones representation theoretically defined in the previous section. \end{rem} \subsection{Consequences} In this subsection, we list some consequences of Propositions \ref{tensor product of icrystal and crystal} and \ref{associativity for icrystal}. 
\begin{cor}\label{icrystal structure on crystal 1} Let $\clB$ be a crystal satisfying conditions {\rm (S1)}--{\rm (S3)'} for all $i,\tau(i) \in I$ with $a_{i,\tau(i)} \neq 2$. For each $b \in \clB$ and $i \in I$, set $\vphi_i := \vphi_i(b)$, $\vep_i := \vep_i(b)$, $\wt := \wt(b)$, and $\wt_i := \wt_i(b)$. Then, it has an $\imath$crystal structure as follows: Let $b \in \clB$ and $i \in I$. \begin{enumerate} \item $\wti(b) = \ol{\wt}$. \item If $a_{i,\tau(i)} = 2$, then \begin{align} \begin{split} &\beta_i(b) = \begin{cases} \vep_i+1 & \IF |s_i| \leq \vphi_i \AND \ol{s_i} \neq \ol{\vphi_i}, \\ |s_i|-\wt_i & \IF |s_i| > \vphi_i, \\ \vep_i & \IF |s_i| \leq \vphi_i \AND \ol{s_i} = \ol{\vphi_i}, \end{cases} \\ &\Btil_i b = \begin{cases} \Ftil_i b & \IF |s_i| \leq \vphi_i \AND \ol{s_i} \neq \ol{\vphi_i}, \\ \sgn(s_i) b & \IF |s_i| > \vphi_i, \\ \Etil_i b & \IF |s_i| \leq \vphi_i \AND \ol{s_i} = \ol{\vphi_i}, \end{cases} \end{split} \nonumber \end{align} \item If $a_{i,\tau(i)} = 0$, then \begin{align} \begin{split} \beta_i(b) &= \max(\vphi_i, 0, \vphi_{\tau(i)})-\wt_{\tau(i)} \\ &= \begin{cases} \vphi_i-\wt_{\tau(i)} & \IF \vphi_i > 0,\vphi_{\tau(i)}, \\ -\wt_{\tau(i)} & \IF \vphi_i \leq 0 > \vphi_{\tau(i)}, \\ \vep_{\tau(i)} & \IF \vphi_i,0 \leq \vphi_{\tau(i)}, \end{cases} \\ \Btil_i b &= \begin{cases} \Ftil_i b & \IF \vphi_i > 0,\vphi_{\tau(i)}, \\ 0 & \IF \vphi_i \leq 0 > \vphi_{\tau(i)}, \\ \Etil_{\tau(i)}b & \IF \vphi_i,0 \leq \vphi_{\tau(i)}. \end{cases} \end{split} \nonumber \end{align} \item If $a_{i,\tau(i)} = -1$, then \begin{align} \begin{split} \beta_i(b) &= \max(\vphi_i,\max(0,s_i),\vphi_{\tau(i)}+s_i)-s_i-\wt_{\tau(i)}, \\ &= \begin{cases} \vphi_i-s_i-\wt_{\tau(i)} & \IF \vphi_i > \max(s_i,0),\vphi_{\tau(i)}+s_i, \\ \max(-s_i,0)-\wt_{\tau(i)} & \IF \vphi_i \leq \max(s_i,0) > \vphi_{\tau(i)}+s_i, \\ \vep_{\tau(i)} & \IF \vphi_i, \max(s_i,0) \leq \vphi_{\tau(i)}+s_i. 
\end{cases} \end{split} \nonumber \end{align} \begin{enumerate} \item When $\vphi_i > \max(s_i,0), \vphi_{\tau(i)}+s_i$, $$ \Btil_i(b) = \begin{cases} \frac{1}{\sqrt{2}} \Ftil_i b & \IF \vphi_i = \vphi_{\tau(i)}+s_i+1 \AND \vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}+1, \\ \Ftil_i b & \OW. \end{cases} $$ \item When $\vphi_i \leq \max(s_i,0) > \vphi_{\tau(i)}+s_i$, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} \Ftil_i b & \IF \vphi_i = s_i > 0, \\ 0 & \OW. \end{cases} $$ \item When $\vphi_i,\max(s_i,0) \leq \vphi_{\tau(i)}+s_i$, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} \Etil_{\tau(i)} b & \IF \vphi_i = \vphi_{\tau(i)}+s_i \AND \vphi_i(\Etil_{\tau(i)} b) = \vphi_i, \\ & \OR \vphi_{\tau(i)}+s_i = 0 \geq s_i \AND \vphi_i(\Etil_{\tau(i)} b) < 0, \\ \frac{1}{\sqrt{2}}(\Etil_{\tau(i)} b + \Ftil_i b) & \IF \vphi_i = \vphi_{\tau(i)}+s_i > \max(0,-s_{\tau(i)}) \AND \vphi_i(\Etil_{\tau(i)} b) = \vphi_i-1, \\ \Etil_{\tau(i)} b & \OW. \end{cases} $$ \end{enumerate} \end{enumerate} \end{cor} \begin{proof} Recall from Example \ref{examples of icrystals} \eqref{examples of icrystals 1} that the crystal $\clB(0) = \{ b_0 \}$ has an $\imath$crystal structure. By Proposition \ref{tensor product of icrystal and crystal}, the tensor product $\clB(0) \otimes \clB$ has an $\imath$crystal structure. For each $b \in \clB$, we have \begin{align} \begin{split} F_i(b_0 \otimes b) &= \begin{cases} \vphi_i+\delta_{\ol{s_i+1},\ol{\vphi_i}} & \IF a_{i,\tau(i)} = 2, \\ \vphi_i & \IF a_{i,\tau(i)} \neq 2, \end{cases} \\ B_i(b_0 \otimes b) &= \begin{cases} |s_i| & \IF a_{i,\tau(i)} = 2, \\ \max(0,s_i) & \IF a_{i,\tau(i)} \neq 2, \end{cases} \\ E_i(b_0 \otimes b) &= \begin{cases} \vphi_i & \IF a_{i,\tau(i)} = 2, \\ \vphi_{\tau(i)}+s_i & \IF a_{i,\tau(i)} \neq 2. \end{cases} \end{split} \nonumber \end{align} Now, the assertion follows from Proposition \ref{tensor product of icrystal and crystal} by identifying $b_0 \otimes b$ with $b$.
\end{proof} \begin{cor}\label{icrystal structure on crystal} Let $\clB$ be a seminormal crystal satisfying conditions {\rm (S1)}--{\rm (S3)'} for all $i,\tau(i) \in I$ with $a_{i,\tau(i)} \neq 2$. For each $b \in \clB$ and $i \in I$, set $\vphi_i := \vphi_i(b)$, $\vep_i := \vep_i(b)$, $\wt := \wt(b)$, and $\wt_i := \wt_i(b)$. Then, it has an $\imath$crystal structure as follows: Let $b \in \clB$ and $i \in I$. \begin{enumerate} \item $\wti(b) = \ol{\wt}$. \item If $a_{i,\tau(i)} = 2$, then \begin{align} \begin{split} &\beta_i(b) = \begin{cases} \vep_i+1 & \IF |s_i| \leq \vphi_i \AND \ol{s_i} \neq \ol{\vphi_i}, \\ |s_i|-\wt_i & \IF |s_i| > \vphi_i, \\ \vep_i & \IF |s_i| \leq \vphi_i \AND \ol{s_i} = \ol{\vphi_i}, \end{cases} \\ &\Btil_i b = \begin{cases} \Ftil_i b & \IF |s_i| \leq \vphi_i \AND \ol{s_i} \neq \ol{\vphi_i}, \\ \sgn(s_i) b & \IF |s_i| > \vphi_i, \\ \Etil_i b & \IF |s_i| \leq \vphi_i \AND \ol{s_i} = \ol{\vphi_i}, \end{cases} \end{split} \nonumber \end{align} \item If $a_{i,\tau(i)} = 0$, then \begin{align} \begin{split} \beta_i(b) &= \max(\vphi_i, \vphi_{\tau(i)})-\wt_{\tau(i)} \\ &= \begin{cases} \vphi_i-\wt_{\tau(i)} & \IF \vphi_i > \vphi_{\tau(i)}, \\ \vep_{\tau(i)} & \IF \vphi_i \leq \vphi_{\tau(i)}, \end{cases} \\ \Btil_i b &= \begin{cases} \Ftil_i b & \IF \vphi_i > \vphi_{\tau(i)}, \\ \Etil_{\tau(i)}b & \IF \vphi_i \leq \vphi_{\tau(i)}. \end{cases} \end{split} \nonumber \end{align} \item If $a_{i,\tau(i)} = -1$, then \begin{align} \begin{split} \beta_i(b) &= \max(\vphi_i,\vphi_{\tau(i)}+s_i)-s_i-\wt_{\tau(i)}, \\ &= \begin{cases} \vphi_i-s_i-\wt_{\tau(i)} & \IF \vphi_i > \vphi_{\tau(i)}+s_i, \\ \vep_{\tau(i)} & \IF \vphi_i \leq \vphi_{\tau(i)}+s_i. \end{cases} \end{split} \nonumber \end{align} \begin{enumerate} \item When $\vphi_i > \vphi_{\tau(i)}+s_i$, $$ \Btil_i(b) = \begin{cases} \frac{1}{\sqrt{2}} \Ftil_i b & \IF \vphi_i = \vphi_{\tau(i)}+s_i+1 \AND \vphi_{\tau(i)}(\Ftil_i b) = \vphi_{\tau(i)}+1, \\ \Ftil_i b & \OW. 
\end{cases} $$ \item When $\vphi_i \leq \vphi_{\tau(i)}+s_i$, $$ \Btil_i b = \begin{cases} \frac{1}{\sqrt{2}} \Etil_{\tau(i)} b & \IF \vphi_i = \vphi_{\tau(i)}+s_i \AND \vphi_i(\Etil_{\tau(i)} b) = \vphi_i, \\ \frac{1}{\sqrt{2}}(\Etil_{\tau(i)} b + \Ftil_i b) & \IF \vphi_i = \vphi_{\tau(i)}+s_i > \max(0,-s_{\tau(i)}) \AND \vphi_i(\Etil_{\tau(i)} b) = \vphi_i-1, \\ \Etil_{\tau(i)} b & \OW. \end{cases} $$ \end{enumerate} \end{enumerate} \end{cor} \begin{proof} Noting that we have $\vphi_i(b) \geq 0$ for all $i \in I$, the assertion is immediate from Corollary \ref{icrystal structure on crystal 1}. \end{proof} \begin{theo} Let $M$ be an integrable $\U$-module with a crystal basis $\clB_M$. Then, $M$ has $\clB_M$ as its $\imath$crystal basis whose $\imath$crystal structure is given by Corollary \ref{icrystal structure on crystal}. \end{theo} \begin{proof} First, we show that the trivial module $V(0)$ has its crystal basis $\clB(0)$ as its $\imath$crystal basis. By Lemmas \ref{icrystal basis of the trivial module}, \ref{icrystal basis of the trivial module a=0}, and \ref{icrystal basis of the trivial module a=-1}, we have $$ \beta_i(b_0) = \begin{cases} |s_i| & \IF a_{i,\tau(i)} = 2, \\ \max(-s_i,0) & \IF a_{i,\tau(i)} \neq 2, \end{cases} \qu \Btil_i b_0 = \begin{cases} \sgn(s_i) b_0 & \IF a_{i,\tau(i)} = 2, \\ 0 & \IF a_{i,\tau(i)} \neq 2. \end{cases} $$ This shows that $\clB(0)$ is an $\imath$crystal basis of $V(0)$ whose $\imath$crystal structure is as in Example \ref{examples of icrystals} \eqref{examples of icrystals 1}. Next, let us investigate how $\beta_i$ and $\Btil_i$ act on $M \simeq V(0) \otimes M$ for each $i \in I$. Since $M$ is integrable, as a $\U_{i,\tau(i)}$-module, it can be embedded into a direct sum of tensor powers $V_\natural^{\otimes N}$ of the natural representation of $\U_{i,\tau(i)}$ for various $N > 0$. Therefore, we may assume that $I = \{ i,\tau(i) \}$ for some $i \in I$ and $M = V_\natural^{\otimes N}$ for some $N > 0$. 
Now, the assertion follows from Remark \ref{icrystal on tensor power}. \end{proof} \begin{prop}\label{tensor product of morphisms} Let $\clB_1,\clB_3$ be $\imath$crystals, $\clB_2,\clB_4$ crystals, $\mu_1 : \clB_1 \rightarrow \clB_3$ an $\imath$crystal morphism, and $\mu_2 : \clB_2 \rightarrow \clB_4$ a crystal morphism. Let $\mu_1 \otimes \mu_2 : \clB_1 \otimes \clB_2 \rightarrow \clB_3 \otimes \clB_4$ denote the $\C$-linear map given by $(\mu_1 \otimes \mu_2)(b_1 \otimes b_2) = \mu_1(b_1) \otimes \mu_2(b_2)$. Then, the following hold: \begin{enumerate} \item $\mu_1 \otimes \mu_2$ is an $\imath$crystal morphism. \item If $\mu_1$ and $\mu_2$ are strict, then so is $\mu_1 \otimes \mu_2$. \item If $\mu_1$ is very strict and $\mu_2$ is strict, then $\mu_1 \otimes \mu_2$ is very strict. \item If $\mu_1$ is an equivalence and $\mu_2$ is an isomorphism, then $\mu_1 \otimes \mu_2$ is an equivalence. \item If $\mu_1$ and $\mu_2$ are isomorphisms, then so is $\mu_1 \otimes \mu_2$. \end{enumerate} \end{prop} \begin{proof} First, we confirm Definition \ref{Def: icrystal morphism} \eqref{Def: icrystal morphism 1}. Let $b_i \in \clB_i$ be such that $(\mu_1(b_1) \otimes \mu_2(b_2), b_3 \otimes b_4) \neq 0$. Then, we have $(\mu_1(b_1), b_3) \neq 0$ and $\mu_2(b_2) = b_4$. Setting $b := b_1 \otimes b_2$ and $b' := b_3 \otimes b_4$, we obtain \begin{align} \begin{split} \wti(b_3 \otimes b_4) &= \wti(b_3)+\ol{\wt(b_4)} = \wti(b_1)+\ol{\wt(b_2)} = \wti(b_1 \otimes b_2), \\ \beta_i(b_3 \otimes b_4) &= \begin{cases} \max(F_i(b'), B_i(b'), E_i(b'))-\wt_i(b_4) & \IF a_{i,\tau(i)} = 2, \\ \max(F_i(b'), B_i(b'), E_i(b'))+\wti_i(b_3)-s_i-\wt_{\tau(i)}(b_4) & \IF a_{i,\tau(i)} \neq 2 \end{cases} \\ &= \begin{cases} \max(F_i(b), B_i(b), E_i(b))-\wt_i(b_2) & \IF a_{i,\tau(i)} = 2, \\ \max(F_i(b), B_i(b), E_i(b))+\wti_i(b_1)-s_i-\wt_{\tau(i)}(b_2) & \IF a_{i,\tau(i)} \neq 2 \end{cases} \\ &= \beta_i(b_1 \otimes b_2).
\end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal morphism} \eqref{Def: icrystal morphism 1}. Next, let us confirm Definition \ref{Def: icrystal morphism} \eqref{Def: icrystal morphism 2}. Suppose that $\Btil_i(b_1 \otimes b_2) \in \clB_1 \otimes \clB_2$. Then, we have $$ \Btil_i(b_1 \otimes b_2) = \begin{cases} b_1 \otimes \Ftil_i b_2 & \IF F_i(b) > B_i(b), E_i(b), \\ \Btil_i b_1 \otimes b_2 & \IF F_i(b) \leq B_i(b) > E_i(b), \\ b_1 \otimes \Etil_{\tau(i)} b_2 & \IF F_i(b), B_i(b) \leq E_i(b). \end{cases} $$ We claim that $$ \Btil_i(b_3 \otimes b_4) = \begin{cases} b_3 \otimes \Ftil_i b_4 & \IF F_i(b) > B_i(b), E_i(b), \\ \Btil_i b_3 \otimes b_4 & \IF F_i(b) \leq B_i(b) > E_i(b), \\ b_3 \otimes \Etil_{\tau(i)} b_4 & \IF F_i(b), B_i(b) \leq E_i(b). \end{cases} $$ First, consider the case when $F_i(b') > B_i(b'),E_i(b')$. If our claim fails, then we have $F_i(b') = E_i(b')+1$ and $\vphi_{\tau(i)}(\Ftil_i b_4) = \vphi_{\tau(i)}(b_4)+1$. Since $\Ftil_i b_2 \neq 0$, we have $\mu_2(\Ftil_i b_2) = \Ftil_i \mu_2(b_2) = \Ftil_i b_4$, and hence, $$ \vphi_{\tau(i)}(\Ftil_i b_2) = \vphi_{\tau(i)}(\Ftil_i b_4) = \vphi_{\tau(i)}(b_4)+1 = \vphi_{\tau(i)}(b_2)+1. $$ This, together with $F_i(b) = F_i(b') = E_i(b')+1 = E_i(b)+1$, implies that $\Btil_i b = \frac{1}{\sqrt{2}} b_1 \otimes \Ftil_i b_2$, which is a contradiction. Thus, we obtain $\Btil_i b' = b_3 \otimes \Ftil_i b_4$. In the same way, our claim follows in the case when $F_i(b'),B_i(b') \leq E_i(b')$. Next, consider the case when $F_i(b') \leq B_i(b') > E_i(b')$. Since $\Btil_i b \in \clB_1 \otimes \clB_2$, we have $\Btil_i b_1 \in \clB_1$. If it holds that $F_i(b') = B_i(b') \neq \beta_{\tau(i)}(b_3)$, then we obtain $F_i(b) = B_i(b) \neq \beta_{\tau(i)}(b_1)$, and hence, $\Btil_i b = \frac{1}{\sqrt{2}}(\Btil_i b_1 \otimes b_2 + b_1 \otimes \Ftil_i b_2)$. This is a contradiction.
On the other hand, if $B_i(b') = E_i(b')+1$ and $\beta_i(\Btil_i b_3) = \beta_i(b_3)-2$, then it follows that $\mu_1(\Btil_i b_1) = \Btil_i \mu_1(b_1) = \Btil_i b_3$. Therefore, we have $B_i(b)=E_i(b)+1$ and $\beta_i(\Btil_i b_1) = \beta_i(b_1)-2$, and hence, $\Btil_i b = \frac{1}{\sqrt{2}} \Btil_i b_1 \otimes b_2$. This causes a contradiction, too. Thus, we obtain $\Btil_i b' = \Btil_i b_3 \otimes b_4$. Now, we compute as follows: \begin{align} \begin{split} \Btil_i(\mu_1(b_1) \otimes \mu_2(b_2)) &= \sum_{b_3 \in \clB_3} (\mu_1(b_1),b_3) \Btil_i(b_3 \otimes b_4) \\ &= \begin{cases} \sum_{b_3 \in \clB_3} (\mu_1(b_1),b_3) b_3 \otimes \Ftil_i b_4 & \IF F_i(b) > B_i(b), E_i(b), \\ \sum_{b_3 \in \clB_3} (\mu_1(b_1),b_3) \Btil_i b_3 \otimes b_4 & \IF F_i(b) \leq B_i(b) > E_i(b), \\ \sum_{b_3 \in \clB_3} (\mu_1(b_1),b_3) b_3 \otimes \Etil_{\tau(i)} b_4 & \IF F_i(b), B_i(b) \leq E_i(b) \end{cases} \\ &=\begin{cases} \mu_1(b_1) \otimes \Ftil_i b_4 & \IF F_i(b) > B_i(b), E_i(b), \\ \Btil_i \mu_1(b_1) \otimes b_4 & \IF F_i(b) \leq B_i(b) > E_i(b), \\ \mu_1(b_1) \otimes \Etil_{\tau(i)} b_4 & \IF F_i(b), B_i(b) \leq E_i(b) \end{cases} \\ &= (\mu_1 \otimes \mu_2)(\Btil_i(b_1 \otimes b_2)). \end{split} \nonumber \end{align} This confirms Definition \ref{Def: icrystal morphism} \eqref{Def: icrystal morphism 2}. Thus, the first assertion of the proposition follows. Let us prove the second assertion. By the first assertion, it suffices to show that $\Btil_i(\mu_1(b_1) \otimes \mu_2(b_2)) = (\mu_1 \otimes \mu_2)(\Btil_i(b_1 \otimes b_2))$ for all $b_1 \otimes b_2 \in \clB_1 \otimes \clB_2$ and $i \in I$. Let $b_3 \otimes b_4 \in \clB_3 \otimes \clB_4$ be such that $(\mu_1(b_1) \otimes \mu_2(b_2), b_3 \otimes b_4) \neq 0$. 
By the definition of $\Btil_i$ on tensor products and the strictness of $\mu_1,\mu_2$, we have \begin{align} \begin{split} &\Btil_i(b_1 \otimes b_2) = c_F b_1 \otimes \Ftil_i b_2 + c_B \Btil_i b_1 \otimes b_2 + c_E b_1 \otimes \Etil_{\tau(i)} b_2, \\ &\Btil_i(b_3 \otimes b_4) = c_F b_3 \otimes \Ftil_i b_4 + c_B \Btil_i b_3 \otimes b_4 + c_E b_3 \otimes \Etil_{\tau(i)} b_4 \end{split} \nonumber \end{align} for some $c_F,c_B,c_E \in \C$. Therefore, we compute \begin{align} \begin{split} \Btil_i(\mu_1(b_1) \otimes \mu_2(b_2)) &= \sum_{b_3 \in \clB_3} (\mu_1(b_1),b_3) \Btil_i(b_3 \otimes b_4) \\ &= \sum_{b_3 \in \clB_3} (\mu_1(b_1), b_3)(c_F b_3 \otimes \Ftil_i b_4 + c_B \Btil_i b_3 \otimes b_4 + c_E b_3 \otimes \Etil_{\tau(i)} b_4) \\ &= c_F \mu_1(b_1) \otimes \Ftil_i b_4 + c_B \Btil_i \mu_1(b_1) \otimes b_4 + c_E \mu_1(b_1) \otimes \Etil_{\tau(i)} b_4 \\ &= c_F \mu_1(b_1) \otimes \mu_2(\Ftil_i b_2) + c_B \mu_1(\Btil_i b_1) \otimes \mu_2(b_2) + c_E \mu_1(b_1) \otimes \mu_2(\Etil_{\tau(i)} b_2) \\ &= (\mu_1 \otimes \mu_2)(\Btil_i(b_1 \otimes b_2)). \end{split} \nonumber \end{align} This proves the second assertion. The remaining assertions are now obvious. This completes the proof. \end{proof} \section{Quantum groups and crystals}\label{Section: quantum groups and crystals} In this section, we review basic results concerning the representation theory of quantum groups associated to a symmetrizable Kac-Moody algebra, abstract crystals, and canonical and crystal bases. \subsection{Quantum groups} Let $A = (a_{i,j})_{i,j \in I}$ be a symmetrizable generalized Cartan matrix with symmetrizing matrix $D = \diag(d_i)_{i \in I}$, i.e., the $d_i$ are pairwise coprime positive integers satisfying $d_i a_{i,j} = d_j a_{j,i}$ for all $i,j \in I$. Let $X,Y$ be free abelian groups of finite rank equipped with a perfect pairing $\la h,\lm \ra : Y \times X \rightarrow \Z$.
Let $\{ \alpha_i \mid i \in I \} \subset X$ and $\{ h_i \mid i \in I \} \subset Y$ be linearly independent sets such that $$ \la h_i,\alpha_j \ra = a_{i,j} \qu \Forall i,j \in I. $$ When $I$ is of finite type, each $\lm \in X$ is uniquely determined by the values $\la h_i,\lm \ra \in \Z$, $i \in I$. We often identify $\lm \in X$ with $(\la h_i,\lm \ra)_{i \in I} \in \Z^I$. The quantum group $\U$ is defined to be a unital associative algebra over $\bbK$ generated by $E_i,F_i,K_h$, $i \in I$, $h \in Y$ subject to the following relations: Let $i,j \in I$, $h,h' \in Y$. \begin{align} \begin{split} &K_{0} = 1, \qu K_{h} K_{h'} = K_{h+h'}, \\ &K_h E_i = q^{\la h,\alpha_i \ra}E_i K_h, \qu K_h F_i = q^{-\la h,\alpha_i \ra}F_i K_h, \\ &E_iF_j-F_jE_i = \delta_{i,j} \frac{K_i-K_i\inv}{q_i-q_i\inv}, \\ &\sum_{r=0}^{1-a_{i,j}} (-1)^r E_i^{(r)} E_j E_i^{(1-a_{i,j}-r)} = 0 \qu \IF i \neq j, \\ &\sum_{r=0}^{1-a_{i,j}} (-1)^r F_i^{(r)} F_j F_i^{(1-a_{i,j}-r)} = 0 \qu \IF i \neq j, \end{split} \nonumber \end{align} where $$ q_i := q^{d_i}, \ K_i := K_{d_i h_i}, \ E_i^{(a)} := \frac{1}{[a]_i!} E_i^a, \ F_i^{(a)} := \frac{1}{[a]_i!} F_i^a, \ [a]_i := [a]_{q_i}, \ [a]_i! := [a]_{q_i}!. $$ The quantum group $\U$ is equipped with a Hopf algebra structure with comultiplication $\Delta$ given by $$ \Delta(E_i) := E_i \otimes 1 + K_i \otimes E_i, \ \Delta(F_i) := 1 \otimes F_i + F_i \otimes K_i\inv,\ \Delta(K_h) := K_h \otimes K_h. $$ There is an anti-algebra involution $\wp$ on $\U$ such that $$ \wp(E_i) = q_i\inv F_iK_i,\ \wp(F_i) = q_i\inv E_i K_i\inv,\ \wp(K_h) = K_h. $$ The complex conjugate $\C \ni z \mapsto z^*$ can be extended to an $\R(q)$-algebra automorphism on $\U$ by requiring $$ E_i^* = E_i, \ F_i^* = F_i, \ K_h^* = K_h. $$ Set $\wp^* := \wp \circ * = * \circ \wp$. Let $\ol{\cdot} : \bbK \rightarrow \bbK$ denote the $\C$-algebra automorphism given by $\ol{q} = q\inv$. 
This can be extended to a $\C$-algebra automorphism $\psi$ on $\U$, called the bar-involution, by $$ \psi(E_i) = E_i,\ \psi(F_i) = F_i,\ \psi(K_h) = K_{-h}. $$ For each $J = \{ j_1,\ldots,j_k \} \subset I$ of finite type, let $\U_J = \U_{j_1,\ldots,j_k}$ denote the subalgebra of $\U$ generated by $E_j,F_j,K_j^{\pm 1}$, $j \in J$. Let $\U^-$ denote the subalgebra of $\U$ generated by $F_i$, $i \in I$. Also, let $\bfB(\infty)$ and $\clB(\infty)$ denote the canonical and crystal bases of $\U^-$, with $b_\infty \in \clB(\infty)$ the highest weight element. Let $\Udot = \bigoplus_{\lm \in X} \U \mathbf{1}_\lm$ denote the modified quantum group, and $\Udot_{\bfA}$ its $\bfA$-form. Also, let $\bfBdot$ and $\clBdot$ denote the canonical and crystal bases of $\Udot$, respectively. For each $\lm \in X$, let $M(\lm)$ denote the Verma module of highest weight $\lm$, and $V(\lm)$ its irreducible quotient. Let $v_\lm$ denote the highest weight vector of both $M(\lm)$ and $V(\lm)$. When $\lm \in X^+$, let $\bfB(\lm)$ and $\clB(\lm)$ denote the canonical and crystal bases of $V(\lm)$. Let $b_\lm \in \clB(\lm)$ denote the highest weight element. \subsection{Crystal}\label{subsection: cyrstal} A crystal is a set $\clB$ equipped with the following structure: \begin{itemize} \item $\wt : \clB \rightarrow X$: map, \item $\vep_i,\vphi_i : \clB \rightarrow \Z \sqcup \{ -\infty \}$: maps, $i \in I$, where $-\infty$ is a formal symbol, \item $\Etil_i,\Ftil_i : \clB \rightarrow \clB \sqcup \{0\}$: maps, $i \in I$, where $0$ is a formal symbol, \end{itemize} satisfying the following axioms: Let $b \in \clB$ and $i \in I$. \begin{enumerate} \item\label{crystal 2} If $\vphi_i(b) = -\infty$, then $\Etil_i b = 0 = \Ftil_i b$. \item\label{crystal 1} $\vphi_i(b) = \vep_i(b)+\wt_i(b)$, where $\wt_i(b) := \la h_i, \wt(b) \ra$; we understand that $-\infty+a = -\infty$ for all $a \in \Z$.
\item\label{crystal 3} If $\Etil_i b \neq 0$, then $\wt(\Etil_i b) = \wt(b) + \alpha_i$, $\vep_i(\Etil_i b) = \vep_i(b)-1$, and $\Ftil_i \Etil_i b = b$. \item\label{crystal 4} If $\Ftil_i b \neq 0$, then $\wt(\Ftil_i b) = \wt(b) - \alpha_i$, $\vphi_i(\Ftil_i b) = \vphi_i(b)-1$, and $\Etil_i \Ftil_i b = b$. \end{enumerate} Let $\clB$ be a crystal. The crystal graph of $\clB$ is an $I$-colored directed graph whose vertex set is $\clB$, and for each $b,b' \in \clB$ and $i \in I$, there exists an arrow from $b$ to $b'$ labeled by $i$ if and only if $\Ftil_i b = b'$. \begin{ex}\label{crystal B(n)}\normalfont Suppose that $I = \{ i \}$. For each $n \in \Z_{\geq 0}$, let $\clB(n) := \{ b_k \mid 0 \leq k \leq n \}$ denote the crystal given by $$ \wt_i(b_k) = n-2k, \qu \vep_i(b_k) = k, \qu \vphi_i(b_k) = n-k, \qu \Etil_i b_k = b_{k-1}, \qu \Ftil_i b_k = b_{k+1}, $$ where $b_{-1} = b_{n+1} := 0$. The crystal graph of $\clB(n)$ is as follows: $$ \xymatrix{ b_0 \ar[r]^-i & b_1 \ar[r]^-i & \cdots \ar[r]^-i & b_{n-1} \ar[r]^-i & b_n. } $$ \end{ex} Let $\clB_1,\clB_2$ be crystals. A morphism $\mu : \clB_1 \rightarrow \clB_2$ of crystals is a map $\mu : \clB_1 \rightarrow \clB_2 \sqcup \{0\}$ satisfying the following: Let $b \in \clB_1$ and $i \in I$. \begin{enumerate} \item If $\mu(b) \neq 0$, then $\wt(\mu(b)) = \wt(b)$, $\vep_i(\mu(b)) = \vep_i(b)$, and $\vphi_i(\mu(b)) = \vphi_i(b)$. \item If $\Etil_i b, \mu(b), \mu(\Etil_i b) \neq 0$, then $\mu(\Etil_i b) = \Etil_i \mu(b)$. \item If $\Ftil_i b, \mu(b), \mu(\Ftil_i b) \neq 0$, then $\mu(\Ftil_i b) = \Ftil_i \mu(b)$. \end{enumerate} A crystal morphism $\mu : \clB_1 \rightarrow \clB_2$ is said to be strict if $\mu(\Etil_i b) = \Etil_i \mu(b)$ and $\mu(\Ftil_i b) = \Ftil_i \mu(b)$ for all $i \in I$, $b \in \clB_1$; here, we set $\mu(0) = 0$. 
A strict crystal morphism $\mu : \clB_1 \rightarrow \clB_2$ is said to be an isomorphism if the underlying map $\mu : \clB_1 \rightarrow \clB_2$ is bijective; in this case, we denote $\clB_1 \simeq \clB_2$. Let $\clB_1,\clB_2$ be crystals. The tensor product $\clB_1 \otimes \clB_2$ of $\clB_1$ and $\clB_2$ is a crystal whose underlying set is $\clB_1 \times \clB_2$ and whose structure maps are given as follows: \begin{align} \begin{split} &\wt(b_1 \otimes b_2) = \wt(b_1) + \wt(b_2), \\ &\vep_i(b_1 \otimes b_2) = \max(\vep_i(b_1) - \wt_i(b_2), \vep_i(b_2)) = \begin{cases} \vep_i(b_1) - \wt_i(b_2) & \IF \vep_i(b_1) > \vphi_i(b_2), \\ \vep_i(b_2) & \IF \vep_i(b_1) \leq \vphi_i(b_2), \end{cases} \\ &\vphi_i(b_1 \otimes b_2) = \max(\vphi_i(b_1), \vphi_i(b_2) + \wt_i(b_1)) = \begin{cases} \vphi_i(b_2)+\wt_i(b_1) & \IF \vep_i(b_1) < \vphi_i(b_2), \\ \vphi_i(b_1) & \IF \vep_i(b_1) \geq \vphi_i(b_2), \end{cases} \\ &\Etil_i(b_1 \otimes b_2) = \begin{cases} \Etil_i b_1 \otimes b_2 \qu & \IF \vep_i(b_1) > \vphi_i(b_2), \\ b_1 \otimes \Etil_i b_2 \qu & \IF \vep_i(b_1) \leq \vphi_i(b_2), \end{cases} \\ &\Ftil_i(b_1 \otimes b_2) = \begin{cases} b_1 \otimes \Ftil_i b_2 \qu & \IF \vep_i(b_1) < \vphi_i(b_2), \\ \Ftil_i b_1 \otimes b_2 \qu & \IF \vep_i(b_1) \geq \vphi_i(b_2). \end{cases} \end{split} \nonumber \end{align} Here, we understand that $-\infty < a$ for all $a \in \Z$, and $-\infty \leq -\infty$. \begin{rem}\normalfont The tensor product for crystals is associative \cite[Proposition 2.3.2]{BS17}. \end{rem} \begin{ex}\label{tensor rule example}\normalfont Suppose that $I = \{ i \}$. Recall from Example \ref{crystal B(n)} the crystal $\clB(n)$. 
The crystal graph $\clB(2) \otimes \clB(3)$ is described as follows: $$ \xymatrix@R=10pt{ & b_0 \ar[r]^-i & b_1 \ar[r]^-i & b_2 \ar[r]^-i & b_3 \\ b_0 \ar[d]_i & \bullet \ar[r]^-i & \bullet \ar[r]^-i & \bullet \ar[r]^-i & \bullet \ar[d]^-i \\ b_1 \ar[d]_i & \bullet \ar[r]^-i & \bullet \ar[r]^-i & \bullet \ar[d]^-i & \bullet \ar[d]^-i \\ b_2 & \bullet \ar[r]^-i & \bullet & \bullet & \bullet } $$ \end{ex} Let us introduce some terminologies. Let $\clB$ be a crystal. \begin{enumerate} \item $\clB$ is said to be seminormal if $$ \vep_i(b) = \max\{ m \geq 0 \mid \Etil_i^m b \neq 0 \}, \qu \vphi_i(b) = \max\{ m \geq 0 \mid \Ftil_i^m b \neq 0 \} $$ for all $b \in \clB$ and $i \in I$. \item $\clB$ is said to be upper seminormal if $$ \vep_i(b) = \max\{ m \geq 0 \mid \Etil_i^m b \neq 0 \} $$ for all $b \in \clB$ and $i \in I$. \end{enumerate} \begin{ex}\normalfont \ \begin{enumerate} \item For each $\lm \in X^+$, the crystal basis $\clB(\lm)$ of $V(\lm)$ is a seminormal crystal. \item The crystal basis $\clB(\infty)$ of $\U^-$ is an upper seminormal crystal. Also, we have $\Ftil_i b \neq 0$ for all $i \in I$ and $b \in \clB(\infty)$. \item For each $\lm \in X$, let $\clT_\lm = \{ t_\lm \}$ denote the crystal given by $$ \wt(t_\lm) = \lm, \qu \vep_i(t_\lm) = \vphi_i(t_\lm) = -\infty, \qu \Etil_i t_\lm = \Ftil_i t_\lm = 0, \qu \Forall i \in I. $$ \item For each $\lm,\mu \in X$, we have $\clT_\lm \otimes \clT_\mu \simeq \clT_{\lm+\mu}$. \item For each $\lm \in X^+$, there exists an injective crystal morphism $\iota_\lm : \clB(\lm) \rightarrow \clT_\lm \otimes \clB(\infty)$ such that $\iota_\lm(b_\lm) = t_\lm \otimes b_\infty$. This morphism is not strict. \end{enumerate} \end{ex} For each $\lm \in X^+$, set $$ \clB(\infty;\lm) := \{ b \in \clB(\infty) \mid t_\lm \otimes b \in \im \iota_\lm \}. 
$$ Let $\pi_\lm : \clB(\infty) \rightarrow \clB(\lm) \sqcup \{0\}$ be the map defined by $$ \pi_\lm(b) := \begin{cases} b' \qu & \IF b' \in \clB(\lm) \AND \iota_\lm(b') = t_\lm \otimes b, \\ 0 \qu & \IF b \notin \clB(\infty;\lm). \end{cases} $$ Note that for each $b \in \clB(\infty;\lm)$ and $i \in I$, we have \begin{align}\label{crystal structure of B(infty;lm)} \wt(b) = \wt(\pi_\lm(b)) - \lm, \qu \vep_i(b) = \vep_i(\pi_\lm(b)), \qu \vphi_i(b) = \vphi_i(\pi_\lm(b)) - \la h_i,\lm \ra. \end{align} \begin{lem}\label{Kashiwara operators on B(infty;lm)} Let $\lm \in X^+$, $b \in \clB(\infty;\lm)$, and $i \in I$. \begin{enumerate} \item\label{Kashiwara operators on B(infty;lm) 1} We have $\Etil_i b \in \clB(\infty;\lm)$ if and only if $\vep_i(b) > 0$. \item\label{Kashiwara operators on B(infty;lm) 2} We have $\Ftil_i b \in \clB(\infty;\lm)$ if and only if $\vphi_i(b) > -\la h_i,\lm \ra$. \end{enumerate} \end{lem} \begin{proof} Set $b' := \pi_\lm(b) \in \clB(\lm)$. Let us prove the first assertion. Since $\clB(\infty)$ is upper seminormal, the ``only if'' part is obvious. Hence, let us assume that $\vep_i(b) > 0$. Then, by identity \eqref{crystal structure of B(infty;lm)}, $$ \vep_i(b') = \vep_i(b) > 0. $$ Since $\clB(\lm)$ is seminormal, we obtain $\Etil_i b' \neq 0$. Therefore, we have $\iota_\lm(\Etil_i b') \neq 0$, and hence, $$ \iota_\lm(\Etil_i b') = \Etil_i \iota_\lm(b') = \Etil_i(t_\lm \otimes b) = t_\lm \otimes \Etil_i b. $$ This implies that $\Etil_i b \in \clB(\infty;\lm)$, as desired. Next, we prove the second assertion. Noting that $\Ftil_i(t_\lm \otimes b) = t_\lm \otimes \Ftil_i b \neq 0$, we see that we have $\Ftil_i b \in \clB(\infty;\lm)$ if and only if $\Ftil_i b' \neq 0$. Since $\clB(\lm)$ is seminormal, the latter condition is equivalent to $\vphi_i(b') > 0$. By identity \eqref{crystal structure of B(infty;lm)}, this condition is, in turn, equivalent to $$ \vphi_i(b) > -\la h_i,\lm \ra. $$ Thus, the assertion follows.
\end{proof} \begin{lem}\label{embedding of B(mu) in B(lm)} Let $\lm,\mu \in X^+$ be such that $\la h_i,\mu \ra \leq \la h_i,\lm \ra$ for all $i \in I$. Then, we have $\clB(\infty;\mu) \subset \clB(\infty;\lm)$. \end{lem} \begin{proof} Let $b \in \clB(\infty;\mu)$. Then, we can write $$ b = \Ftil_{i_r} \cdots \Ftil_{i_1} b_\infty $$ for some $i_1,\ldots,i_r \in I$. We show that $b \in \clB(\infty;\lm)$ by induction on $r \geq 0$. When $r = 0$, we have $b = b_\infty \in \clB(\infty;\lm)$. Hence, assume that $r > 0$. By the induction hypothesis, we have $b' := \Etil_{i_r} b = \Ftil_{i_{r-1}} \cdots \Ftil_{i_1} b_\infty \in \clB(\infty;\lm)$. By Lemma \ref{Kashiwara operators on B(infty;lm)} \eqref{Kashiwara operators on B(infty;lm) 2}, we obtain $$ \vphi_{i_r}(b') > -\la h_{i_r},\mu \ra \geq -\la h_{i_r},\lm \ra. $$ This implies, again by Lemma \ref{Kashiwara operators on B(infty;lm)} \eqref{Kashiwara operators on B(infty;lm) 2}, that $\Ftil_{i_r} b' \in \clB(\infty;\lm)$. This completes the proof. \end{proof} For each $\lm,\mu \in X^+$ such that $\la h_i,\mu \ra \leq \la h_i,\lm \ra$ for all $i \in I$, set $$ \clB(\lm;\mu) := \pi_\lm(\clB(\infty;\mu)). $$ Note that by Lemma \ref{embedding of B(mu) in B(lm)}, $\pi_\lm(\clB(\infty;\mu))$ does not contain $0$. By identity \eqref{crystal structure of B(infty;lm)}, for each $b \in \clB(\lm;\mu)$, $i \in I$, and $b' \in \clB(\infty;\mu)$ with $\pi_\lm(b') = b$, we obtain \begin{align}\label{crystal structure of B(lm;mu)} \wt(b) = \wt(\pi_\mu(b')) + (\lm-\mu), \qu \vep_i(b) = \vep_i(\pi_\mu(b')), \qu \vphi_i(b) = \vphi_i(\pi_\mu(b')) + \la h_i,\lm-\mu \ra. \end{align} \begin{lem}\label{Kashiwara operators on B(lm;mu)} Let $\lm,\mu \in X^+$ be such that $\la h_i,\mu \ra \leq \la h_i,\lm \ra$ for all $i \in I$, $b \in \clB(\lm;\mu)$, and $i \in I$. \begin{enumerate} \item We have $\Etil_i b \in \clB(\lm;\mu)$ if and only if $\vep_i(b) > 0$. \item We have $\Ftil_i b \in \clB(\lm;\mu)$ if and only if $\vphi_i(b) > \la h_i,\lm-\mu \ra$.
\end{enumerate} \end{lem} \begin{proof} The assertions follow from Lemma \ref{Kashiwara operators on B(infty;lm)} and identity \eqref{crystal structure of B(lm;mu)}. \end{proof} Let $\clB$ be a crystal, and $i,j \in I$ with $a_{i,j} = a_{j,i} \in \{ 0,-1 \}$. Consider the following conditions ({\it cf. } \cite[Chapter 4]{BS17}): \begin{enumerate} \item[{\rm (S1)}] If $b,\Etil_i b \in \clB$, then $\vep_j(\Etil_i b)-\vep_j(b) \in \{ 0,-a_{i,j} \}$. \item[{\rm (S2)}] If $b,\Etil_j \Etil_i b \in \clB$ and $\vep_j(\Etil_i b) = \vep_j(b)$, then $\Etil_i \Etil_jb = \Etil_j \Etil_i b$ and $\vphi_i(\Etil_j b) = \vphi_i(b)$. \item[{\rm (S3)}] If $b,\Etil_i\Etil_j b, \Etil_j \Etil_i b \in \clB$, $\vep_j(\Etil_i b) = \vep_j(b)+1$, and $\vep_i(\Etil_jb) = \vep_i(b)+1$, then $\Etil_i \Etil_jb \neq \Etil_j \Etil_i b$. \item[{\rm (S2)'}] If $b,\Ftil_j \Ftil_i b \in \clB$ and $\vphi_j(\Ftil_i b) = \vphi_j(b)$, then $\Ftil_i \Ftil_jb = \Ftil_j \Ftil_i b$ and $\vep_i(\Ftil_j b) = \vep_i(b)$. \item[{\rm (S3)'}] If $b,\Ftil_i \Ftil_j b, \Ftil_j \Ftil_i b \in \clB$, $\vphi_j(\Ftil_i b) = \vphi_j(b)+1$, and $\vphi_i(\Ftil_jb) = \vphi_i(b)+1$, then $\Ftil_i \Ftil_jb \neq \Ftil_j \Ftil_i b$. \end{enumerate} \begin{ex} The crystals $\clB(\lm)$, $\clT_\lm$, and $\clB(\infty)$ satisfy the conditions above. \end{ex} \begin{lem}\label{Deduction from condition S's} Let $\clB$ be a crystal, and $i,j \in I$ with $a_{i,j} = a_{j,i} \in \{ 0,-1 \}$. Assume that $\clB$ satisfies conditions {\rm (S1)}--{\rm (S3)'}. Then, for each $b \in \clB$, the following hold. \begin{enumerate} \item\label{Deduction from condition S's 1} If $\Ftil_i b \neq 0$, then $\vphi_j(\Ftil_i b) - \vphi_j(b) \in \{ 0,-a_{i,j} \}$. \item\label{Deduction from condition S's 2} If $\Ftil_j \Ftil_i b \neq 0$ and $\vphi_j(\Ftil_i b) = \vphi_j(b)+1$, then $\vphi_i(\Ftil_j\Ftil_i b) = \vphi_i(b)-1$. 
\item\label{Deduction from condition S's 3} If $\Etil_i \Etil_j b \neq 0$ and $\vphi_i(\Etil_jb) = \vphi_i(b)$, then $\vphi_j(\Etil_i \Etil_jb) = \vphi_j(b)$. \item\label{Deduction from condition S's 4} If $\Ftil_i b, \Etil_j b \neq 0$, then we have $\vphi_i(\Etil_j b) = \vphi_i(b)-1$ if and only if $\vphi_j(\Ftil_i b) = \vphi_j(b)$. \end{enumerate} \end{lem} \begin{proof} Let us prove the first assertion. We compute \begin{align} \begin{split} \vphi_j(\Ftil_i b) - \vphi_j(b) &= (\vep_j(\Ftil_i b) + \la h_j,\wt(b)-\alpha_i \ra)-(\vep_j(b)+\la h_j,\wt(b) \ra) \\ &=-(\vep_j(\Etil_i \Ftil_i b) - \vep_j(\Ftil_i b)) - a_{i,j}. \end{split} \nonumber \end{align} By condition {\rm (S1)}, applied to $\Ftil_i b$, the last line of the identity above is either $0$ or $-a_{i,j}$. This proves the assertion. Let us prove the second assertion. Assume to the contrary that $\vphi_i(\Ftil_j \Ftil_i b) \neq \vphi_i(b)-1$. Since $\vphi_i(b)-1 = \vphi_i(\Ftil_i b)$, the first assertion implies that $a_{i,j} = -1$ and \begin{align}\label{cond 1} \vphi_i(\Ftil_j \Ftil_i b) = \vphi_i(\Ftil_i b)+1 = \vphi_i(b). \end{align} Then, we have $$ \vep_i(\Ftil_j \Ftil_i b) = \vphi_i(\Ftil_j \Ftil_i b)-\wt_i(\Ftil_j \Ftil_i b) = \vphi_i(b)-(\wt_i(b)-1) = \vep_i(\Ftil_i b) = \vep_i(\Etil_j \Ftil_j \Ftil_i b). $$ By condition {\rm (S2)}, this implies that $$ \Etil_j \Etil_i \Ftil_j \Ftil_i b = \Etil_i \Etil_j \Ftil_j \Ftil_i b = b, $$ which, in turn, implies that \begin{align}\label{cond 2} \Ftil_j \Ftil_i b = \Ftil_i \Ftil_j b. \end{align} Then, using identity \eqref{cond 1}, we compute $$ \vphi_i(\Ftil_j b) = \vphi_i(\Ftil_i \Ftil_j b)+1 = \vphi_i(\Ftil_j \Ftil_i b)+1 = \vphi_i(b)+1. $$ By condition {\rm (S3)'}, this, together with our assumption that $\vphi_j(\Ftil_i b) = \vphi_j(b)+1$, implies that $$ \Ftil_i \Ftil_j b \neq \Ftil_j \Ftil_i b, $$ which contradicts identity \eqref{cond 2}. Thus, the assertion follows. The third assertion can be proved in a similar way to the second one.
The fourth assertion follows from the second and third ones. \end{proof} \subsection{Based modules} A $\U$-module is said to have a bar-involution $\psi_M : M \rightarrow M$ if $$ \psi_M(xv) = \psi(x) \psi_M(v) \qu \Forall x \in \U,\ v \in M. $$ For example, each $V(\lm)$, $\lm \in X^+$ has a unique bar-involution $\psi_\lm$ such that $\psi_\lm(v_\lm) = v_\lm$. A weight module is a $\U$-module $M$ which possesses a weight space decomposition $$ M = \bigoplus_{\lm \in X} M_\lm, \qu M_\lm := \{ v \in M \mid K_h v = q^{\la h,\lm \ra} v \Forall h \in Y \}. $$ Each weight module $M$ admits a natural $\Udot$-module structure. An $\bfA$-form of $M$ is an $\bfA$-lattice $M_{\bfA}$ such that $\Udot_{\bfA} M_{\bfA} \subset M_{\bfA}$. For example, $V(\lm)$ is a weight module, and $V(\lm)_{\bfA} := \Udot_{\bfA} v_\lm$ is an $\bfA$-form. Let $M$ be a weight module, and set $X(M) := \{ \lm \in X \mid M_\lm \neq 0 \}$. We say that the weights of $M$ are bounded from above (resp., below) if there exist finitely many weights $\lm_1,\ldots,\lm_n \in X$ such that $X(M) \subset \bigcup_{k=1}^n C(\lm_k)$ (resp., $-X(M) \subset \bigcup_{k=1}^n C(\lm_k)$), where $$ C(\lm_k) := \{ \mu \in X \mid \lm_k-\mu \in \sum_{i \in I} \Z_{\geq 0} \alpha_i \}. $$ An integrable module is a weight module $M$ on which $E_i$ and $F_i$ act locally nilpotently for all $i \in I$. In this paper, we further impose that the weights of $M$ are bounded from above or below. For example, $V(\lm)$ is integrable. Let $M$ be an integrable $\U$-module. Then, for each $J \subset I$ of finite type, as a $\U_J$-module, $M$ decomposes into a direct sum of finite-dimensional irreducible $\U_J$-modules. Let $M$ be a $\U$-module equipped with a $\bbK$-valued Hermitian inner product $(\cdot,\cdot)_M$. It is said to be contragredient if $$ (x u, v)_M = (u, \wp^*(x) v)_M \qu \Forall x \in \U,\ u,v \in M.
$$ For example, $V(\lm)$ possesses a unique contragredient Hermitian inner product $(\cdot,\cdot)_\lm$ such that $(v_\lm,v_\lm)_\lm = 1$. Given a $\bbK_\infty$-lattice $\clL_M$ of a $\bbK$-vector space $M$, set $$ \ol{\clL}_M := \clL_M/q\inv \clL_M, $$ and let $\ev_\infty : \clL_M \rightarrow \ol{\clL}_M$ denote the quotient map. For example, when $M$ is a $\U$-module equipped with a contragredient Hermitian inner product $(\cdot,\cdot)_M$, we can take $$ \clL_M := \{ v \in M \mid (v,v)_M \in \bbK_\infty \}. $$ The inner product $(\cdot,\cdot)_M$ induces a $\C$-valued Hermitian inner product $(\cdot,\cdot)_M$ on $\ol{\clL}_M$. For example, the canonical basis $\bfB(\lm)$ of $V(\lm)$ forms an almost orthonormal basis of $V(\lm)$, and hence a free basis of $\clL(\lm) := \clL_{V(\lm)}$. Furthermore, the crystal basis $\clB(\lm) = \ev_\infty(\bfB(\lm))$ forms an orthonormal basis of $\ol{\clL}(\lm) := \ol{\clL}_{V(\lm)}$. Given an integrable module $M$, let $\Etil_i,\Ftil_i$, $i \in I$ denote the Kashiwara operators acting on it. If $M$ possesses a contragredient Hermitian inner product, then the Kashiwara operators preserve $\clL_M$, and hence induce $\C$-linear operators on $\ol{\clL}_M$. Furthermore, $\Etil_i$ and $\Ftil_i$ on $\ol{\clL}_M$ are adjoint to each other. An integrable $\U$-module $M$ is said to have a crystal basis $\clB_M$ if it possesses a $\bbK_\infty$-lattice $\clL_M$ which is compatible with the weight space decomposition of $M$ and is preserved by the Kashiwara operators, together with a $\C$-basis $\clB_M$ of $\ol{\clL}_M$ which is compatible with the weight space decomposition of $M$ and forms a seminormal crystal with respect to the Kashiwara operators.
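As a basic illustration of this definition, we record the well-known rank-one instance, which realizes the crystal $\clB(n)$ of Example \ref{crystal B(n)}. \begin{ex}\normalfont Suppose that $I = \{ i \}$, and let $\lm \in X^+$ with $n := \la h_i,\lm \ra$. Then, $V(\lm) = \bigoplus_{k=0}^{n} \bbK F_i^{(k)} v_\lm$, and one can take $$ \clL(\lm) = \bigoplus_{k=0}^{n} \bbK_\infty F_i^{(k)} v_\lm, \qu \clB(\lm) = \{ \ev_\infty(F_i^{(k)} v_\lm) \mid 0 \leq k \leq n \}. $$ The Kashiwara operators satisfy $\Etil_i F_i^{(k)} v_\lm = F_i^{(k-1)} v_\lm$ and $\Ftil_i F_i^{(k)} v_\lm = F_i^{(k+1)} v_\lm$, where $F_i^{(-1)} v_\lm = F_i^{(n+1)} v_\lm := 0$. Hence, $\ev_\infty(F_i^{(k)} v_\lm) \mapsto b_k$ defines an isomorphism from $\clB(\lm)$ to the seminormal crystal $\clB(n)$ of Example \ref{crystal B(n)}. \end{ex}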
A based $\U$-module is a $\U$-module $M$ with a bar-involution $\psi_M$, a $\bbK_\infty$-lattice $\clL_M$, and an $\bfA$-form $M_{\bfA}$ satisfying the following: \begin{enumerate} \item The quotient map $\ev_\infty : \clL_M \rightarrow \ol{\clL}_M$ restricts to an isomorphism $\clL_M \cap M_{\bfA} \cap \psi_M(\clL_M) \rightarrow \ol{\clL}_M$ of $\C$-vector spaces; let $G$ denote its inverse. \item For each $b \in \ol{\clL}_M$, it holds that $\psi_M(G(b)) = G(b)$. \end{enumerate} Given based $\U$-modules $M,N$ and $\C$-linear bases $\clB_M,\clB_N$ of $\ol{\clL}_M,\ol{\clL}_N$, a $\U$-module homomorphism $f : M \rightarrow N$ is said to be a based module homomorphism if $f(G(\clB_M)) \subset G(\clB_N) \sqcup \{0\}$. In most cases, which bases $\clB_M,\clB_N$ are taken is clear from the context. \begin{ex}\normalfont Let $\lm \in X^+$. As we have seen above, the irreducible highest weight module $V(\lm)$ possesses a bar-involution $\psi_\lm$, a $\bbK_\infty$-lattice $\clL(\lm)$, and an $\bfA$-form $V(\lm)_{\bfA}$. With respect to these structures, $V(\lm)$ is a based module, and we have $\bfB(\lm) = G(\clB(\lm))$. \end{ex} \section{$\imath$Quantum groups and $\imath$crystals of quasi-split types}\label{Section: iquantum groups and icrystals} In this section, we recall what the $\imath$quantum group of quasi-split type is, and formulate the notion of based $\Ui$-modules in a similar way to based $\U$-modules. Also, we introduce the notion of $\imath$crystals, which is the fundamental tool in this paper. \subsection{$\imath$Quantum groups of quasi-split types} Let $\tau$ be a Dynkin diagram involution on $I$, i.e., $\tau$ is a permutation on $I$ such that $\tau^2 = \id$ and $a_{\tau(i),\tau(j)} = a_{i,j}$ for all $i,j \in I$.
We further assume that there exist automorphisms (also denoted by $\tau$) on $X,Y$ such that $\tau(\alpha_i) = \alpha_{\tau(i)}$ and $\tau(h_i) = h_{\tau(i)}$ for all $i \in I$, and $\la \tau(h), \tau(\lm) \ra = \la h,\lm \ra$ for all $h \in Y$ and $\lm \in X$. For each $i \in I$, fix $\varsigma_i \in \C(q)^\times$ and $\kappa_i \in \C(q)$ satisfying the following conditions: \begin{itemize} \item $\kappa_i = 0$ unless $\tau(i) = i$ and $a_{j,i} \in 2\Z$ for all $j \in I$ with $\tau(j) = j$. \item $\varsigma_i = \varsigma_{\tau(i)}$ if $a_{i,\tau(i)} = 0$. \end{itemize} Then, the associated $\imath$quantum group $\Ui = \Ui_{\bfvarsigma,\bfkappa}$ is defined to be a subalgebra of $\U$ generated by $B_i,K_h$, $i \in I$, $h \in Y^\imath$, where $$ B_i := F_i + \varsigma_i E_{\tau(i)} K_i\inv + \kappa_i K_i\inv, \qu Y^\imath := \{ h \in Y \mid \tau(h) = -h \}. $$ \begin{ex}\label{set up for diagonal type}\normalfont Suppose that our Satake diagram is of diagonal type, i.e., $a_{i,\tau(i)} = 0$ for all $i \in I$. Then, one can choose $I_1,I_2 \subset I$ in a way such that $I = I_1 \sqcup I_2$, $a_{i_1,i_2} = 0$ for all $i_1 \in I_1$, $i_2 \in I_2$, and $\tau(I_1) = I_2$. According to this decomposition, we obtain $X = X_{I_1} \oplus X_{I_2}$, $Y = Y_{I_1} \oplus Y_{I_2}$. For each $\lm \in X$ and $h \in Y$, we write $\lm = \lm_1+\lm_2$, $h = h_1+h_2$, where $\lm_i \in X_{I_i}$, $h_i \in Y_{I_i}$. Let $I'$ be a copy of $I_1$, and let $I_1 \rightarrow I':\ i \mapsto i'$ denote the isomorphism. This induces isomorphisms $X_{I_1} \rightarrow X_{I'};\ \lm \mapsto \lm'$ and $Y_{I_1} \rightarrow Y_{I'};\ h \mapsto h'$. For each $i \in I'$, set $i_1 \in I_1$ to be the preimage of $i$, and $i_2 := \tau(i_1) \in I_2$. 
Then, there exists an isomorphism $\U \rightarrow \U_{I'} \otimes \U_{I'}$ such that $$ E_i \mapsto \begin{cases} F_{i'} \otimes 1 & \IF i \in I_1, \\ 1 \otimes E_{\tau(i)'} & \IF i \in I_2, \end{cases} \ F_i \mapsto \begin{cases} E_{i'} \otimes 1 & \IF i \in I_1, \\ 1 \otimes F_{\tau(i)'} & \IF i \in I_2, \end{cases} \ K_h \mapsto K_{-h'_1} \otimes K_{\tau(h_2)'}. $$ For each $i \in I$, set $\varsigma_i = 1$ and $\kappa_i = 0$. Then, the associated $\imath$quantum group is a subalgebra of $\U_{I'} \otimes \U_{I'}$ generated by $$ B_{i_1} = E_i \otimes 1 + K_i \otimes E_i, \qu B_{i_2} = 1 \otimes F_i + F_i \otimes K_i\inv, \qu K_{h} \otimes K_h $$ for $i \in I'$, $h \in Y_{I'}$. Therefore, we have $$ \Ui = \Delta(\U_{I'}). $$ \end{ex} For each $J = \{ j_1,\ldots,j_k \} \subset I$ of finite type such that $\tau(J) = J$, let $\Ui_J = \Ui_{j_1,\ldots,j_k}$ denote the subalgebra of $\Ui$ generated by $B_j,K_jK_{\tau(j)}\inv$, $j \in J$. A set of defining relations is known \cite[Theorem 3.1]{CLW18}: For $h,h' \in Y^\imath$, $i \neq j \in I$, $\ol{p} \in \Z/2\Z$, \begin{align}\label{defining relation for Ui} \begin{split} &K_0 = 1, \qu K_h K_{h'} = K_{h+h'}, \\ &K_h B_i = q^{\la h,-\alpha_i \ra} B_i K_h, \\ &\sum_{n=0}^{1-a_{i,j}} (-1)^n B_i^{(n)} B_j B_i^{(1-a_{i,j}-n)} = \delta_{\tau(i),j} \frac{(-1)^{a_{i,\tau(i)}}}{q_i-q_i\inv}B_i^{(-a_{i,\tau(i)})} \\ &\qu \cdot (q_i^{a_{i,\tau(i)}}(q_i^{-2};q_i^{-2})_{-a_{i,\tau(i)}}\varsigma_{\tau(i)}k_i - (q_i^2;q_i^2)_{-a_{i,\tau(i)}}\varsigma_ik_i\inv) \qu \IF \tau(i) \neq i, \\ &\sum_{n=0}^{1-a_{i,j}} (-1)^n B_{i,\ol{a_{i,j}+p}}^{(n)} B_j B_{i,\ol{p}}^{(1-a_{i,j}-n)} = 0 \qu \IF \tau(i) = i, \end{split} \end{align} where $$ k_i := K_iK_{\tau(i)}\inv, \qu (x;x)_n := \prod_{k=1}^n (1-x^k), \qu B_i^{(n)} := \frac{1}{[n]_i!}B_i^n, $$ and \begin{align} \begin{split} &B_{i,\ol{0}}^{(n)} := \begin{cases} \frac{1}{[2k+1]_i!} B_i \prod_{j=1}^k (B_i^2 - [2j]_i^2) \qu & \IF n = 2k+1, \\ \frac{1}{[2k]_i!} \prod_{j=1}^k (B_i^2 - 
[2j-2]_i^2) \qu & \IF n = 2k, \end{cases} \\ &B_{i,\ol{1}}^{(n)} := \begin{cases} \frac{1}{[2k+1]_i!} B_i \prod_{j=1}^k (B_i^2 - [2j-1]_i^2) \qu & \IF n = 2k+1, \\ \frac{1}{[2k]_i!} \prod_{j=1}^k (B_i^2 - [2j-1]_i^2) \qu & \IF n = 2k. \end{cases} \end{split} \nonumber \end{align} Let us introduce a family of $1$-dimensional $\Ui$-modules which will be one of the key ingredients in later arguments. \begin{prop}\label{existence of V(0)sigma} Let $\sigma \in X$ be such that $\la h_i-h_{\tau(i)},\sigma \ra = 0$ for all $i \in I$ with $a_{i,\tau(i)} = 0$. Then, there exists a $1$-dimensional $\Ui$-module $V(0)^\sigma = \bbK v_0^\sigma$ such that $$ B_i v_0^\sigma = 0, \qu K_h v_0^\sigma = q^{\la h,\sigma \ra} v_0^\sigma \qu \Forall i \in I, h \in Y^\imath. $$ \end{prop} \begin{proof} The assertion follows from relations \eqref{defining relation for Ui}. \end{proof} \begin{prop} Let $\sigma \in X$ with $\la h_i-h_{\tau(i)},\sigma \ra = 0$ for all $i \in I$ with $a_{i,\tau(i)} = 0$, and $M$ a $\U$-module. Then, for each $\lm \in X$, $v \in M_\lm$, $i \in I$, and $h \in Y^\imath$, we have \begin{align} \begin{split} &K_h (v_0^\sigma \otimes v) = q^{\la h,\sigma+\lm \ra} v_0^\sigma \otimes v, \\ &B_i(v_0^\sigma \otimes v) = v_0^\sigma \otimes (F_i + q_i^{-\la h_i-h_{\tau(i)},\sigma \ra}\varsigma_i E_{\tau(i)} K_i\inv)v. \end{split} \nonumber \end{align} \end{prop} \begin{proof} The assertion follows from the following calculation: \begin{align} \begin{split} &\Delta(K_h) = K_h \otimes K_h, \\ &\Delta(B_i) = B_i \otimes K_i\inv + 1 \otimes F_i + k_i\inv \otimes (\varsigma_i E_{\tau(i)} K_i\inv). \end{split} \nonumber \end{align} \end{proof} This result shows that $V(0)^\sigma \otimes M$ behaves like $M$ viewed as a $\Ui_{\bfvarsigma',\bfkappa'}$-module with weights shifted by $\sigma$, where $\varsigma'_i = q_i^{-\la h_i-h_{\tau(i)},\sigma \ra} \varsigma_i$ and $\kappa'_i = 0$.
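In more detail, applying $\Delta(B_i)$ to $v_0^\sigma \otimes v$ and using that $B_i v_0^\sigma = 0$ while $k_i\inv$ acts on $v_0^\sigma$ by the scalar $q_i^{-\la h_i-h_{\tau(i)},\sigma \ra}$, we obtain \begin{align} \begin{split} B_i(v_0^\sigma \otimes v) &= B_i v_0^\sigma \otimes K_i\inv v + v_0^\sigma \otimes F_i v + k_i\inv v_0^\sigma \otimes \varsigma_i E_{\tau(i)} K_i\inv v \\ &= v_0^\sigma \otimes (F_i + q_i^{-\la h_i-h_{\tau(i)},\sigma \ra}\varsigma_i E_{\tau(i)} K_i\inv)v, \end{split} \nonumber \end{align} which is precisely the second formula in the proposition above.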
\subsection{Based $\Ui$-modules}\label{Subsection: based Uimodules} Let us further assume the following conditions on the parameters $\varsigma_i,\kappa_i$: \begin{align}\label{axiom for sigma and kappa} \begin{split} &\varsigma_i, \kappa_i \in \Z[q,q\inv], \\ &\ol{\kappa_i} = \kappa_i, \\ &\varsigma_{\tau(i)} = q_i^{-a_{i,\tau(i)}} \ol{\varsigma_i}. \end{split} \end{align} This assumption ensures the existence of the $\imath$bar-involution $\psii$ on $\Ui$ and the $\imath$canonical basis $\bfB^\imath(\lm)$ of $V(\lm)$, $\lm \in X^+$ (see \cite{BW21}). A $\Ui$-module $M$ is said to have an $\imath$bar-involution $\psii_M : M \rightarrow M$ if $$ \psii_M(xv) = \psii(x) \psii_M(v) \Forall x \in \Ui,\ v \in M. $$ \begin{ex}\normalfont \ \begin{enumerate} \item The irreducible highest weight module $V(\lm)$ possesses a unique $\imath$bar-involution $\psii_\lm$ such that $\psii_\lm(v_\lm) = v_\lm$. \item The $1$-dimensional $\Ui$-module $V(0)^\sigma$ in Proposition \ref{existence of V(0)sigma} possesses a unique $\imath$bar-involution $\psi_0^{\imath, \sigma}$ such that $\psi_0^{\imath,\sigma}(v_0^\sigma) = v_0^\sigma$. \end{enumerate} \end{ex} Set $X^\imath := X/\{ \lm+\tau(\lm) \mid \lm \in X \}$, and $\ol{\cdot} : X \rightarrow X^\imath$ the quotient map. The perfect pairing $\la \cdot,\cdot \ra : Y \times X \rightarrow \Z$ induces a bilinear pairing $\la \cdot,\cdot \ra : Y^\imath \times X^\imath \rightarrow \Z$. For each $\zeta \in X^\imath$ and $i \in I$ with $\tau(i) = i$, the parity of $\la h_i,\lm \ra$ is independent of $\lm \in X$ satisfying $\ol{\lm} = \zeta$. We call $\ol{\la h_i,\lm \ra} \in \Z/2\Z$ the value of $\zeta$ at $i$. \begin{rem}\normalfont When $I$ is of finite type, $\zeta \in X^\imath$ is uniquely determined by the values $\la h_i-h_{\tau(i)},\zeta \ra \in \Z$ for $i \in I$ with $\tau(i) \neq i$, and the values of $\zeta$ at $i \in I$ with $\tau(i) = i$. In this way, we often identify $\zeta$ with an element of $(\Z \sqcup (\Z/2\Z))^I$. 
\end{rem} A $\Ui$-module $M$ is said to be an $X^\imath$-weight module if it has a decomposition $M = \bigoplus_{\zeta \in X^\imath} M_\zeta$ satisfying the following: \begin{itemize} \item $K_h v = q^{\la h,\zeta \ra} v$ for all $h \in Y^\imath$, $\zeta \in X^\imath$, $v \in M_\zeta$. \item $B_i M_\zeta \subset M_{\zeta-\ol{\alpha_i}}$ for all $i \in I$, $\zeta \in X^\imath$. \end{itemize} Such a decomposition is called an $X^\imath$-weight space decomposition. \begin{ex}\normalfont \ \begin{enumerate} \item Let $M = \bigoplus_{\lm \in X} M_\lm$ be a weight $\U$-module. Then, it is an $X^\imath$-weight module with $X^\imath$-weight space decomposition $$ M = \bigoplus_{\zeta \in X^\imath} M_\zeta, \qu M_\zeta := \bigoplus_{\substack{\lm \in X \\ \ol{\lm} = \zeta}} M_\lm. $$ We call it the canonical $X^\imath$-weight module structure of $M$. \item The $1$-dimensional $\Ui$-module $V(0)^\sigma$ possesses an $X^\imath$-weight module structure given by $V(0)^\sigma = (V(0)^\sigma)_{\ol{\sigma}}$. \end{enumerate} \end{ex} Let $\Uidot = \bigoplus_{\zeta \in X^\imath} \Ui \mathbf{1}_\zeta$ denote the modified $\imath$quantum group, and $\Uidot_{\bfA}$ its $\bfA$-form. Let $M$ be an $X^\imath$-weight module. Then it has a natural $\Uidot$-module structure (\cite[Subsection 3.3]{W21b}). An $\bfA$-form of $M$ is an $\bfA$-lattice $M_{\bfA}$ such that $\Uidot_{\bfA} M_{\bfA} \subset M_{\bfA}$. \begin{ex}\label{A-forms}\normalfont \ \begin{enumerate} \item $V(\lm)_{\bfA}$ is an $\bfA$-form of $V(\lm)$ as a $\Ui$-module with the canonical $X^\imath$-weight module structure. \item If the $1$-dimensional $\Ui$-module $V(0)^\sigma$ has an $\bfA$-form, it must be the subspace $V(0)^\sigma_{\bfA} := \bfA v_0^\sigma$. This is the case when, for example, $\varsigma_i = q_i\inv$ and $\kappa_i = [s_i]_i$ for some $s_i \in \Z$ for all $i \in I$ with $a_{i,\tau(i)} = 2$ (in this case, an explicit generating set of $\Uidot_{\bfA}$ is known \cite{BeW18,BeW18b}). 
\end{enumerate} \end{ex} A based $\Ui$-module is an $X^\imath$-weight module $M$ with an $\imath$bar-involution $\psii_M$, a $\bbK_\infty$-lattice $\clL_M$, and an $\bfA$-form $M_{\bfA}$ satisfying the following: \begin{enumerate} \item The quotient map $\ev_\infty : \clL_M \rightarrow \ol{\clL}_M$ restricts to an isomorphism $\clL_M \cap M_{\bfA} \cap \psii_M(\clL_M) \rightarrow \ol{\clL}_M$ of $\C$-vector spaces; let $G^\imath$ denote its inverse. \item For each $b \in \ol{\clL}_M$, it holds that $\psii_M(G^\imath(b)) = G^\imath(b)$. \end{enumerate} Given based $\Ui$-modules $M,N$ and $\C$-linear bases $\clB_M,\clB_N$ of $\ol{\clL}_M,\ol{\clL}_N$, a $\Ui$-module homomorphism $f : M \rightarrow N$ is said to be a based module homomorphism if $f(G^\imath(\clB_M)) \subset G^\imath(\clB_N) \sqcup \{0\}$. In most cases, which bases $\clB_M,\clB_N$ are taken is clear from the context. \begin{ex}\normalfont \ \begin{enumerate} \item As we have seen so far, $V(\lm)$ possesses an $\imath$bar-involution $\psii_\lm$, a $\bbK_\infty$-lattice $\clL(\lm)$, and an $\bfA$-form $V(\lm)_{\bfA}$. With respect to these structures, $V(\lm)$ is a based $\Ui$-module, and we have $\bfB^\imath(\lm) = G^\imath(\clB(\lm))$. \item If the $1$-dimensional $\Ui$-module $V(0)^\sigma$ has an $\bfA$-form, then it is a based module with respect to the $\imath$bar-involution $\psi_0^{\imath, \sigma}$, the $\bbK_\infty$-lattice $\clL(0)^\sigma := \bbK_\infty v_0^\sigma$, and the $\bfA$-form $V(0)^\sigma_{\bfA}$. Set $b_0^\sigma := \ev_\infty(v_0^\sigma)$ and $\clB(0)^\sigma := \{ b_0^\sigma \}$. Then, $G^\imath(b_0^\sigma) = v_0^\sigma$. \end{enumerate} \end{ex} \begin{prop}\label{V(lm)sigma is based} Let $\sigma \in X$ be such that $\la h_i-h_{\tau(i)},\sigma \ra = 0$ for all $i \in I$ with $a_{i,\tau(i)} = 0$. Let $\lm \in X^+$. Suppose that $V(0)^\sigma$ has an $\bfA$-form.
Then, $V(\lm)^\sigma := V(0)^\sigma \otimes V(\lm)$ is a based $\Ui$-module with respect to an $\imath$bar-involution fixing $v_\lm^\sigma$, a $\bbK_\infty$-lattice $\clL(\lm)^\sigma := \clL(0)^\sigma \otimes \clL(\lm)$, and an $\bfA$-form $V(\lm)^\sigma_{\bfA} := V(0)^\sigma_{\bfA} \otimes V(\lm)_{\bfA}$. \end{prop} \begin{proof} The assertion follows from \cite[Theorem 6.15]{BW21}. \end{proof} Let $\lm,\nu \in X^+$. By \cite[Proposition 7.1]{BW21}, there exists a $\Ui$-module homomorphism $$ \pi = \pi_{\lm,\nu} : V(\lm+\nu+\tau(\nu)) \rightarrow V(\lm) $$ such that $$ \pi(v_{\lm+\nu+\tau(\nu)}) = v_\lm. $$ For each $\zeta \in X^\imath$, these homomorphisms form a projective system $\{ V(\lm) \}_{\lm \in X^+, \ol{\lm} = \zeta}$ which is asymptotically stable in the following sense: \begin{theo}[{\cite[Theorem 7.2]{BW21}}]\label{asymptotical limit} Let $\zeta \in X^\imath$ and $b \in \clB(\infty)$. Then, there exists a unique $G^\imath_\zeta(b) \in \Uidot \mathbf{1}_\zeta$ such that $$ G^\imath_\zeta(b) v_\lm = G^\imath(\pi_\lm(b)) $$ for all $\lm \gg 0$ with $\ol{\lm} = \zeta$. Here, $\lm \gg 0$ means $\la h_i,\lm \ra$ is sufficiently large for all $i \in I$. Moreover, $\Bidot := \{ G^\imath_\zeta(b) \mid \zeta \in X^\imath,\ b \in \clB(\infty) \}$ forms a basis of $\Uidot$. \end{theo} The basis $\Bidot$ is called the $\imath$canonical basis of $\Uidot$. Although each $V(\lm)$ is a based $\Ui$-module, the homomorphisms in the projective system above are not necessarily based. \subsection{$\imath$Crystal} From now on, we assume the following: \begin{itemize} \item $a_{i,\tau(i)} \in \{ 2,0,-1 \}$ for all $i \in I$. \item $\varsigma_i \in \{ q_i^a \mid a \in \Z \}$ for all $i \in I$. \item $\kappa_i \in \{ [a]_i \mid a \in \Z \}$ for all $i \in I$. \end{itemize} Note that the first condition is satisfied for all $I$ of finite or affine type, except for type $A_1^{(1)}$ with nontrivial $\tau$.
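To see why type $A_1^{(1)}$ with nontrivial $\tau$ is excluded, recall that its generalized Cartan matrix is $$ (a_{i,j})_{i,j \in I} = \begin{pmatrix} 2 & -2 \\ -2 & 2 \end{pmatrix}, $$ so that $a_{i,\tau(i)} = -2 \notin \{ 2,0,-1 \}$ when $\tau$ interchanges the two vertices.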
The second condition, together with axiom \eqref{axiom for sigma and kappa} in the beginning of Subsection \ref{Subsection: based Uimodules}, forces $\varsigma_i$ to satisfy the following: \begin{itemize} \item $\varsigma_i = q_i\inv$ if $a_{i,\tau(i)} = 2$. \item $\varsigma_i = 1$ if $a_{i,\tau(i)} = 0$. \item $\varsigma_i \varsigma_{\tau(i)} = q_i$ if $a_{i,\tau(i)} = -1$. \end{itemize} Therefore, we have $$ B_i = \begin{cases} F_i + q_i\inv E_iK_i\inv + [s_i]_i \qu & \IF a_{i,\tau(i)} = 2, \\ F_i + E_{\tau(i)}K_i\inv \qu & \IF a_{i,\tau(i)} = 0, \\ F_i + q_i^{s_i} E_{\tau(i)} K_i\inv \qu & \IF a_{i,\tau(i)} = -1 \end{cases} $$ for some $s_i \in \Z$ such that $s_i + s_{\tau(i)} = 1$ for all $i \in I$ with $a_{i,\tau(i)} = -1$. \begin{rem}\label{wp preserves Ui}\normalfont Our assumption on the parameters $\varsigma_i,\kappa_i$ ensures that $\wp^*(\Ui) = \Ui$ by \cite[Proposition 4.6]{BW18}. In particular, we can talk about contragredient Hermitian inner product on $\Ui$-modules. \end{rem} \begin{defi}\label{Def: icrystal}\normalfont An $\imath$crystal is a set $\clB$ equipped with the following structure: \begin{itemize} \item $\wti : \clB \rightarrow X^\imath$: map. \item $\beta_i : \clB \rightarrow \Z \sqcup \{ -\infty,-\infty_\ev,-\infty_\odd \}$: map, $i \in I$, where $-\infty,-\infty_\ev$, and $-\infty_\odd$ are formal symbols. \item $\Btil_i \in \End_{\C}(\ol{\clL})$, $i \in I$, where $\ol{\clL} := \C \clB$. \item $(\cdot,\cdot)$ : Hermitian inner product on $\ol{\clL}$ making $\clB$ an orthonormal basis. \end{itemize} satisfying the following axioms: Let $b,b' \in \clB$, $i \in I$. \begin{enumerate} \item\label{Def: icrystal 1} If $\beta_i(b) \notin \Z$, then $\Btil_i b = 0$. \item\label{Def: icrystal 2} If $(\Btil_i b,b') \neq 0$, then $\wti(b') = \wti(b) - \ol{\alpha_i}$. \item\label{Def: icrystal 2.5} If $(\Btil_i b,b') \neq 0$, then $(\Btil_i b,b') = (b, \Btil_{\tau(i)} b')$. 
\item\label{Def: icrystal 2.6} If $\Btil_i b \in \clB$, then $\Btil_{\tau(i)} \Btil_i b = b$. \item\label{Def: icrystal 3} If $a_{i,\tau(i)} = 2$, then \begin{enumerate} \item\label{Def: icrystal 3a} $\beta_i(b) \in \Z \sqcup \{ -\infty_\ev, -\infty_\odd \}$. \item\label{Def: icrystal 3b} $\ol{\beta_i(b)+s_i} = \wti_i(b)$, where $\wti_i(b)$ denotes the value of $\wti(b)$ at $i$. We understand that $$ -\infty_\ev + a = \begin{cases} -\infty_\ev & \IF \ol{a} = \ol{0}, \\ -\infty_\odd & \IF \ol{a} = \ol{1}, \end{cases} \ -\infty_\odd + a = \begin{cases} -\infty_\odd & \IF \ol{a} = \ol{0}, \\ -\infty_\ev & \IF \ol{a} = \ol{1} \end{cases} $$ for all $a \in \Z$, and $\ol{-\infty_\ev} = \ol{0}$, $\ol{-\infty_\odd} = \ol{1}$. \item\label{Def: icrystal 3c} If $(\Btil_i b,b') \neq 0$, then $\beta_i(b') = \beta_i(b)$. \end{enumerate} \item\label{Def: icrystal 4} If $a_{i,\tau(i)} = 0$, then \begin{enumerate} \item\label{Def: icrystal 4a} $\beta_i(b) \in \Z \sqcup \{ -\infty \}$. \item\label{Def: icrystal 4b} $\beta_i(b) = \beta_{\tau(i)}(b) + \wti_i(b)$, where $\wti_i(b) := \la h_i-h_{\tau(i)}, \wti(b) \ra$. \item\label{Def: icrystal 4c} If $(\Btil_i b,b') \neq 0$, then $b' = \Btil_i b$ and $\beta_i(b') = \beta_i(b)-1$. \end{enumerate} \item\label{Def: icrystal 5} If $a_{i,\tau(i)} = -1$, then \begin{enumerate} \item\label{Def: icrystal 5a} $\beta_i(b) \in \Z \sqcup \{ -\infty \}$. \item\label{Def: icrystal 5b} $\beta_i(b) = \beta_{\tau(i)}(b) + \wti_i(b)-s_i$ or $\beta_i(b) = \beta_{\tau(i)}(b) + \wti_i(b)-s_i+1$, where $\wti_i(b) := \la h_i-h_{\tau(i)}, \wti(b) \ra$. \item\label{Def: icrystal 5c} If $\beta_i(b) \neq \beta_{\tau(i)}(b) + \wti_i(b)-s_i$ and $(\Btil_i b,b') \neq 0$, then $b' = \Btil_i b$ and $\beta_i(b') \neq \beta_{\tau(i)}(b')+\wti_i(b')-s_i$. \item\label{Def: icrystal 5d} If $(\Btil_i b, b') \neq 0$ and $\beta_i(b') \neq \beta_{\tau(i)}(b') + \wti_i(b')-s_i$, then $\beta_i(b') = \beta_i(b)-1$. 
\end{enumerate} \end{enumerate} \end{defi} Let us fix a complete set $I_\tau$ of representatives for the $\tau$-orbits on $I$. \begin{defi}\normalfont Let $\clB$ be an $\imath$crystal. The crystal graph of $\clB$ is an $(I_\tau \times \C^\times)$-colored directed graph whose vertex set is $\clB$, and for each $b,b' \in \clB$, $i \in I_\tau$, and $z \in \C^\times$, there exists an arrow from $b$ to $b'$ labeled by $(i,z)$ if and only if $(\Btil_i b,b') = z$. We often omit the label $z$ when $z = 1$. \end{defi} \begin{ex}\label{icrystal of diagonal type is crystal}\normalfont Suppose that our Satake diagram is of diagonal type. We retain the notation in Example \ref{set up for diagonal type}. If we set $I_\tau := I_2$, then an $\imath$crystal and its crystal graph are nothing but a crystal and its crystal graph associated to the Dynkin diagram $I'$. Under this identification, $\Btil_{i_1},\Btil_{i_2}, \beta_{i_1}, \beta_{i_2}$ correspond to $\Etil_i,\Ftil_i, \vep_i,\vphi_i$, respectively, for each $i \in I'$. \end{ex} \begin{rem}\label{Btili is recovered from graph}\normalfont Since $\clB$ is an orthonormal basis, we have $$ \Btil_i b = \sum_{b' \in \clB} (\Btil_i b,b')b' $$ for all $b \in \clB$. Hence, the maps $\Btil_i$, $i \in I_\tau$ can be recovered from the crystal graph of $\clB$. \end{rem} \begin{lem}\label{Btil is Hermite} Let $\clB$ be an $\imath$crystal, $b,b' \in \clB$, and $i \in I$. Then, we have $$ (\Btil_i b,b') = (b, \Btil_{\tau(i)} b'). $$ \end{lem} \begin{proof} By Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5}, we have $(\Btil_i b,b') = (b, \Btil_{\tau(i)}b')$ if $(\Btil_i b,b') \neq 0$. Now, suppose that $(\Btil_i b,b') = 0$. Assume to the contrary that $(b,\Btil_{\tau(i)} b') \neq 0$. Then, by Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.5} again, we obtain $$ (b',\Btil_i b) = (\Btil_{\tau(i)}b',b) \neq 0, $$ which is a contradiction. This completes the proof.
\end{proof} \begin{rem}\normalfont By Lemma \ref{Btil is Hermite} (see also Remark \ref{Btili is recovered from graph}), the maps $\Btil_{\tau(i)}$, $i \in I_\tau$ can be recovered from the crystal graph. \end{rem} \begin{defi}\label{Def: icrystal morphism}\normalfont Let $\clB_1,\clB_2$ be $\imath$crystals. A morphism $\mu : \clB_1 \rightarrow \clB_2$ of $\imath$crystals is a linear map $\mu : \ol{\clL}_1 \rightarrow \ol{\clL}_2$, where $\ol{\clL_j} := \C\clB_j$, satisfying the following: Let $b_1 \in \clB_1$, $b_2 \in \clB_2$, and $i \in I$. \begin{enumerate} \item\label{Def: icrystal morphism 1} If $(\mu(b_1),b_2) \neq 0$, then $\wti(b_2) = \wti(b_1)$ and $\beta_i(b_2) = \beta_i(b_1)$. \item\label{Def: icrystal morphism 2} If $\Btil_i b_1 \in \clB_1$ and $\mu(b_1), \mu(\Btil_i b_1) \neq 0$, then $\mu(\Btil_i b_1) = \Btil_i \mu(b_1)$. \end{enumerate} An $\imath$crystal morphism $\mu : \clB_1 \rightarrow \clB_2$ is said to be strict if $\mu(\Btil_i b) = \Btil_i \mu(b)$ for all $i \in I$, $b \in \clB_1$. A strict $\imath$crystal morphism is said to be an equivalence if the underlying map $\mu : \ol{\clL}_1 \rightarrow \ol{\clL}_2$ is a linear isomorphism; in this case, we write $\clB_1 \sim \clB_2$. A strict $\imath$crystal morphism is said to be very strict if $\mu(\clB_1) \subset \clB_2 \sqcup \{0\}$. An $\imath$crystal equivalence is said to be an isomorphism if it is very strict; in this case, we write $\clB_1 \simeq \clB_2$. \end{defi} \begin{rem}\normalfont Although an $\imath$crystal isomorphism induces an isomorphism of crystal graphs, it can happen that two equivalent $\imath$crystals have non-isomorphic crystal graphs. \end{rem} \begin{ex}\label{examples of icrystals}\normalfont \ \begin{enumerate} \item\label{examples of icrystals 1} Consider the crystal basis $\clB(0) = \{ b_0 \}$ of the trivial module $V(0)$. It has an $\imath$crystal structure given as follows: \begin{enumerate} \item $\wti(b_0) = \ol{0}$. 
\item $\beta_i(b_0) = \begin{cases} |s_i| \qu & \IF a_{i,\tau(i)} = 2, \\ 0 \qu & \IF a_{i,\tau(i)} = 0, \\ \max(-s_i,0) \qu & \IF a_{i,\tau(i)} = -1. \end{cases}$ \item $\Btil_i b_0 = \begin{cases} \sgn(s_i)b_0 \qu & \IF a_{i,\tau(i)} = 2, \\ 0 \qu & \OW. \end{cases}$ \end{enumerate} \item For each $\zeta \in X^\imath$, let $\clT_\zeta = \{ t_\zeta \}$ denote the $\imath$crystal given by $$ \wti(t_\zeta) = \zeta, \qu \beta_i(t_\zeta) = \begin{cases} -\infty_\ev & \IF a_{i,\tau(i)} = 2 \AND \zeta_i = \ol{s_i}, \\ -\infty_\odd & \IF a_{i,\tau(i)} = 2 \AND \zeta_i \neq \ol{s_i}, \\ -\infty & \IF a_{i,\tau(i)} \neq 2, \end{cases} \qu \Btil_i t_\zeta = 0, $$ where $\zeta_i \in \Z/2\Z$ denotes the value of $\zeta$ at $i$. \item\label{examples of icrystals 3} Suppose that $I = \{i\}$. For each $n \in \Z$, let $\clB^\imath(n) = \{ b \}$ denote the $\imath$crystal given by $$ \wti_i(b) = \ol{n+s_i}, \qu \beta_i(b) = |n|, \qu \Btil_i b = \sgn(n) b. $$ The crystal graph of $\clB^\imath(n)$ is as follows: $$ \xymatrix{ b \ar@(ur,dr)^-{(i,\sgn(n))} } $$ \item Suppose that $I = \{ i \}$. For each $n \in \Z_{> 0}$, let $\clB^\imath(n;-n) = \{ b_+,b_- \}$ denote the $\imath$crystal given by $$ \wti_i(b_\pm) = \ol{n+s_i}, \qu \beta_i(b_\pm) = n, \qu \Btil_i b_\pm = b_\mp $$ The crystal graph of $\clB^\imath(n;-n)$ is as follows: $$ \xymatrix{ b_+ \ar@<0.5ex>[r]^-i & b_- \ar@<0.5ex>[l]^-i } $$ There exists an $\imath$crystal equivalence $\clB^\imath(n) \sqcup \clB^\imath(-n) \rightarrow \clB^\imath(n;-n)$ which sends $b \in \clB^\imath(\pm n)$ to $\frac{1}{\sqrt{2}}(b_+ \pm b_-)$. Note that the crystal graphs of $\clB^\imath(n) \sqcup \clB^\imath(-n)$ and $\clB^\imath(n;-n)$ are not isomorphic. \item\label{examples of icrystals 5} Suppose that $I = \{ i,\tau(i) \}$ and $a_{i,\tau(i)} = 0$. 
For each $n \in \Z_{\geq 0}$, let $\clB^\imath(n) = \{ b_k \mid 0 \leq k \leq n \}$ denote the $\imath$crystal given by \begin{align} \begin{split} &\wti_i(b_k) = n-2k, \qu \beta_i(b_k) = n-k, \qu \Btil_i b_k = b_{k+1}, \\ &\wti_{\tau(i)}(b_k) = -n+2k, \qu \beta_{\tau(i)}(b_k) = k, \qu \Btil_{\tau(i)} b_k = b_{k-1}, \end{split} \nonumber \end{align} where $b_{-1} = b_{n+1} = 0$. The crystal graph of $\clB^\imath(n)$ (with $I_\tau = \{i\}$) is as follows: $$ \xymatrix{ b_0 \ar[r]^-i & b_1 \ar[r]^-i & \cdots \ar[r]^-i & b_{n-1} \ar[r]^-i & b_n } $$ \item\label{examples of icrystals 6} Suppose that $I = \{ i,\tau(i) \}$ and $a_{i,\tau(i)} = -1$. For each $n_- \in \Z_{\geq 0}$ and $n_+ \in \Z$, let $\clB^\imath(n_-,n_+) = \{ b_k \mid 0 \leq k \leq n_- \}$ denote the $\imath$crystal given by \begin{align} \begin{split} &\wti_i(b_k) = n_-+n_+-3k, \\ &\beta_i(b_k) = n_--k+\max(n_+-s_i-k,0) = \begin{cases} n_-+n_+-s_i-2k & \IF 0 \leq k \leq n_+-s_i, \\ n_--k & \IF n_+-s_i < k \leq n_-, \end{cases} \\ &\Btil_i b_k = b_{k+1}, \\ &\wti_{\tau(i)}(b_k) = -n_--n_++3k, \\ &\beta_{\tau(i)}(b_k) = k+\max(-n_+-s_{\tau(i)}+k,0) = \begin{cases} k & \IF 0 \leq k \leq n_+-s_i, \\ -n_+-s_{\tau(i)}+2k & \IF n_+-s_i < k \leq n_-, \end{cases} \\ &\Btil_{\tau(i)} b_k = b_{k-1}, \end{split} \nonumber \end{align} where $b_{-1} = b_{n_-+1} = 0$. The crystal graph of $\clB^\imath(n_-,n_+)$ (with $I_\tau = \{i\}$) is as follows: $$ \xymatrix{ b_0 \ar[r]^-i & b_1 \ar[r]^-i & \cdots \ar[r]^-i & b_{n_--1} \ar[r]^-i & b_{n_-} } $$ \item\label{examples of icrystals 7} Suppose that $I = \{ i,\tau(i) \}$ and $a_{i,\tau(i)} = -1$. 
For each $n_- \in \Z_{> 0}$ and $n_+ \in \Z$ with $-1 < n_+-s_i < n_-$, let $\clB^\imath(n_-,n_+;\vee) = \{ b_{k,\pm} \mid 0 \leq k \leq n_+-s_i \} \sqcup \{ b_{k} \mid n_+-s_i < k \leq n_- \}$ denote the $\imath$crystal given by \begin{align} \begin{split} &\wti_i(b_{k,\pm}) = n_-+n_+-3k, \qu \wti_i(b_k) = n_-+n_+-3k, \\ &\beta_i(b_{k,\pm}) = n_-+n_+-s_i-2k, \qu \beta_i(b_k) = n_--k, \\ &\Btil_i b_{k,\pm} = \begin{cases} b_{k+1,\pm} & \IF k \neq n_+-s_i, \\ \frac{1}{\sqrt{2}} b_{k+1} & \IF k = n_+-s_i, \end{cases} \qu \Btil_i b_{k} = b_{k+1}, \\ &\wti_{\tau(i)}(b_{k,\pm}) = -n_--n_++3k, \qu \wti_{\tau(i)}(b_k) = -n_--n_++3k, \\ &\beta_{\tau(i)}(b_{k,\pm}) = k, \qu \beta_{\tau(i)}(b_k) = -n_+-s_{\tau(i)}+2k, \\ &\Btil_{\tau(i)} b_{k,\pm} = b_{k-1,\pm}, \qu \Btil_{\tau(i)}b_k = \begin{cases} b_{k-1} & \IF k \neq n_+-s_i+1, \\ \frac{1}{\sqrt{2}} (b_{k-1,+}+b_{k-1,-}) & \IF k = n_+-s_i+1, \end{cases} \end{split} \nonumber \end{align} where $b_{-1,\pm} = b_{n_-+1} = 0$. The crystal graph of $\clB^\imath(n_-,n_+;\vee)$ (with $I_\tau = \{i\}$) is as follows: $$ \xymatrix{ b_{0,+} \ar[r]^-i & b_{1,+} \ar[r]^-i & \cdots \ar[r]^-i & b_{n_+-s_i,+} \ar[dr]^-{(i,\frac{1}{\sqrt{2}})} \\ &&&&b_{n_+-s_i+1} \ar[r]^-i & \cdots \ar[r]^-i & b_{n_-} \\ b_{0,-} \ar[r]_-i & b_{1,-} \ar[r]_-i & \cdots \ar[r]_-i & b_{n_+-s_i,-} \ar[ur]_-{(i,\frac{1}{\sqrt{2}})} } $$ There exists an $\imath$crystal equivalence $\clB^\imath(n_-,n_+) \sqcup \clB^\imath(n_+-s_i,n_-+s_i) \rightarrow \clB^\imath(n_-,n_+;\vee)$ which sends $b_k \in \clB^\imath(n_+-s_i,n_-+s_i)$ to $\frac{1}{\sqrt{2}}(b_{k,+} - b_{k,-})$, and $b_k \in \clB^\imath(n_-,n_+)$ to $\frac{1}{\sqrt{2}}(b_{k,+} + b_{k,-})$ when $k \leq n_+-s_i$, and to $b_{k}$ when $k > n_+-s_i$. \item\label{examples of icrystals 8} Suppose that $I = \{ i,\tau(i) \}$ and $a_{i,\tau(i)} = -1$.
For each $n_- \in \Z_{> 0}$ and $n_+ \in \Z$ with $-1 < n_+-s_i < n_-$, let $\clB^\imath(n_-,n_+;\wedge) = \{ b_{k,\pm} \mid n_++s_{\tau(i)} \leq k \leq n_- \} \sqcup \{ b_{k} \mid 0 \leq k < n_++s_{\tau(i)} \}$ denote the $\imath$crystal given by \begin{align} \begin{split} &\wti_i(b_{k,\pm}) = n_-+n_+-3k, \qu \wti_i(b_k) = n_-+n_+-3k, \\ &\beta_i(b_{k,\pm}) = n_--k, \qu \beta_i(b_k) = n_-+n_+-s_i-2k, \\ &\Btil_i b_{k,\pm} = b_{k+1,\pm}, \qu \Btil_i b_{k} = \begin{cases} b_{k+1} & \IF k \neq n_++s_{\tau(i)}-1, \\ \frac{1}{\sqrt{2}}(b_{k+1,+} + b_{k+1,-}) & \IF k = n_++s_{\tau(i)}-1, \end{cases} \\ &\wti_{\tau(i)}(b_{k,\pm}) = -n_--n_++3k, \qu \wti_{\tau(i)}(b_k) = -n_--n_++3k, \\ &\beta_{\tau(i)}(b_{k,\pm}) = -n_+-s_{\tau(i)}+2k, \qu \beta_{\tau(i)}(b_k) = k, \\ &\Btil_{\tau(i)} b_{k,\pm} = \begin{cases} b_{k-1,\pm} & \IF k \neq n_++s_{\tau(i)}, \\ \frac{1}{\sqrt{2}} b_{k-1} & \IF k = n_++s_{\tau(i)}, \end{cases} \qu \Btil_{\tau(i)} b_k = b_{k-1}, \end{split} \nonumber \end{align} where $b_{-1} = b_{n_-+1,\pm} = 0$. The crystal graph of $\clB^\imath(n_-,n_+;\wedge)$ is as follows: $$ \xymatrix@C=20pt{ &&&& b_{n_++s_{\tau(i)},+} \ar[r]^-i & \cdots \ar[r]^-i & b_{n_-,+}\\ b_{0} \ar[r]^-i & b_{1} \ar[r]^-i & \cdots \ar[r]^-i & b_{n_++s_{\tau(i)}-1} \ar[dr]_-{(i,\frac{1}{\sqrt{2}})} \ar[ur]^-{(i,\frac{1}{\sqrt{2}})} \\ &&&& b_{n_++s_{\tau(i)},-} \ar[r]_-i & \cdots \ar[r]_-i & b_{n_-,-} } $$ There exists an $\imath$crystal equivalence $\clB^\imath(n_-,n_+) \sqcup \clB^\imath(n_--n_+-s_{\tau(i)},-n_+-2s_{\tau(i)}) \rightarrow \clB^\imath(n_-,n_+;\wedge)$ which sends $b_k \in \clB^\imath(n_--n_+-s_{\tau(i)},-n_+-2s_{\tau(i)})$ to $\frac{1}{\sqrt{2}}(b_{k+n_++s_{\tau(i)},+} - b_{k+n_++s_{\tau(i)},-})$, and $b_k \in \clB^\imath(n_-,n_+)$ to $b_k$ when $k < n_++s_{\tau(i)}$, and to $\frac{1}{\sqrt{2}}(b_{k,+} + b_{k,-})$ when $k \geq n_++s_{\tau(i)}$.
\end{enumerate} \end{ex} \begin{prop}\label{property for a = 0} Let $\clB$ be an $\imath$crystal, $i \in I$ with $a_{i,\tau(i)} = 0$, and $b \in \clB$ with $\Btil_i b \neq 0$. Then, we have $\wti_i(\Btil_i b) = \wti_i(b)-2$ and $\beta_{\tau(i)}(\Btil_i b) = \beta_{\tau(i)}(b)+1$. \end{prop} \begin{proof} Set $b' := \Btil_i b \in \clB$. Then, by Definition \ref{Def: icrystal} \eqref{Def: icrystal 2}, we have $$ \wti_i(b') = \la h_i-h_{\tau(i)}, \wti(b)-\ol{\alpha_i} \ra = \wti_i(b)-2. $$ This proves the first assertion. Next, by Definition \ref{Def: icrystal} \eqref{Def: icrystal 4c}, we have $\beta_i(b') = \beta_i(b)-1$, and hence, $$ \beta_{\tau(i)}(b') = \beta_i(b') - \wti_i(b') = \beta_i(b)-\wti_i(b)+1 = \beta_{\tau(i)}(b)+1. $$ This proves the second assertion. \end{proof} \begin{prop}\label{basic property for a = -1} Let $\clB$ be an $\imath$crystal, $i \in I$ with $a_{i,\tau(i)} = -1$, and $b,b',b'' \in \clB$ with $(\Btil_i b,b'), (\Btil_{\tau(i)}b,b'') \neq 0$. Then, the following hold: \begin{enumerate} \item $\wti_i(b') = \wti_i(b)-3$. \item If $\beta_i(b) \neq \beta_{\tau(i)}(b)+\wti_i(b)-s_i$, then $b' = \Btil_i b$, $\beta_i(b') = \beta_i(b)-1$, and $\beta_{\tau(i)}(b') = \beta_{\tau(i)}(b)+2$. \item If $\beta_i(b) \neq \beta_{\tau(i)}(b)+\wti_i(b)-s_i$ and $\beta_i(b'') \neq \beta_{\tau(i)}(b'')+\wti_i(b'')-s_i$, then $b'' = \Btil_{\tau(i)} b$, $\beta_i(b'') = \beta_i(b)+1$, and $\beta_{\tau(i)}(b'') = \beta_{\tau(i)}(b)-2$. \item If $\beta_i(b) \neq \beta_{\tau(i)}(b)+\wti_i(b)-s_i$ and $\beta_i(b'') = \beta_{\tau(i)}(b'')+\wti_i(b'')-s_i$, then $\beta_i(b'') = \beta_i(b)+1$ and $\beta_{\tau(i)}(b'') = \beta_{\tau(i)}(b)-1$. \item $\wti_i(b'') = \wti_i(b)+3$. \item If $\beta_i(b) = \beta_{\tau(i)}(b)+\wti_i(b)-s_i$, then $b'' = \Btil_{\tau(i)} b$, $\beta_{\tau(i)}(b'') = \beta_{\tau(i)}(b)-1$, and $\beta_i(b'') = \beta_i(b)+2$. 
\item If $\beta_i(b) = \beta_{\tau(i)}(b)+\wti_i(b)-s_i$ and $\beta_i(b') = \beta_{\tau(i)}(b')+\wti_i(b')-s_i$, then $b' = \Btil_i b$, $\beta_{\tau(i)}(b') = \beta_{\tau(i)}(b)+1$, and $\beta_i(b') = \beta_i(b)-2$. \item If $\beta_i(b) = \beta_{\tau(i)}(b)+\wti_i(b)-s_i$ and $\beta_i(b') \neq \beta_{\tau(i)}(b')+\wti_i(b')-s_i$, then $\beta_{\tau(i)}(b') = \beta_{\tau(i)}(b)+1$ and $\beta_i(b') = \beta_i(b)-1$. \end{enumerate} \end{prop} \begin{proof} The first assertion follows from Definition \ref{Def: icrystal} \eqref{Def: icrystal 2} as in Proposition \ref{property for a = 0}. The second assertion follows from Definition \ref{Def: icrystal} \eqref{Def: icrystal 5b} -- \eqref{Def: icrystal 5d} and the first assertion of the proposition. Let us prove the third assertion. By Lemma \ref{Btil is Hermite}, we have $(\Btil_i b'',b) = (b'', \Btil_{\tau(i)}b) \neq 0$. Then, the second assertion of the proposition implies that $b = \Btil_i b''$, $\beta_i(b) = \beta_i(b'')-1$, and $\beta_{\tau(i)}(b) = \beta_{\tau(i)}(b'')+2$. Now, by Definition \ref{Def: icrystal} \eqref{Def: icrystal 2.6}, we obtain $$ \Btil_{\tau(i)} b = \Btil_{\tau(i)} \Btil_i b'' = b''. $$ Thus, the third assertion follows. Let us prove the fourth assertion. Again, we have $(\Btil_i b'', b) \neq 0$. Then, the first assertion of the proposition and Definition \ref{Def: icrystal} \eqref{Def: icrystal 5d} imply that $\wti_i(b) = \wti_i(b'')-3$ and $\beta_i(b) = \beta_i(b'')-1$, respectively. Now, we compute as \begin{align} \begin{split} \beta_{\tau(i)}(b'') &= \beta_i(b'')-\wti_i(b'')+s_i \\ &= (\beta_i(b)+1)-(\wti_i(b)+3)+s_i \\ &= (\beta_i(b)-\wti_i(b)+s_i-1)-1 \\ &= \beta_{\tau(i)}(b)-1, \end{split} \nonumber \end{align} where the first equality uses the assumption $\beta_i(b'') = \beta_{\tau(i)}(b'')+\wti_i(b'')-s_i$, and the last one uses $\beta_i(b) = \beta_{\tau(i)}(b)+\wti_i(b)-s_i+1$, which follows from Definition \ref{Def: icrystal} \eqref{Def: icrystal 5b} and the assumption $\beta_i(b) \neq \beta_{\tau(i)}(b)+\wti_i(b)-s_i$. Thus, the fourth assertion follows. The remaining assertions follow from the first four assertions by interchanging the roles of $i$ and $\tau(i)$.
\end{proof} \section{Modified action of $B_i$}\label{Section: modified action of Bi} In this section, we shall define linear operators $\Btil_i$, $i \in I$, acting on certain $\Ui$-modules by modifying the action of $B_i$ on them. Also, we define maps $\beta_i$, $i \in I$, on certain vectors of $\Ui$-modules. We say that an $X^\imath$-weight $\Ui$-module $M$ has an $\imath$crystal basis $\clB_M$ if $M$ possesses a $\bbK_\infty$-lattice $\clL_M$ satisfying the following: \begin{itemize} \item $\clB_M$ forms a $\C$-basis of $\ol{\clL}_M$. \item $\clL_M = \bigoplus_{\zeta \in X^\imath} \clL_{M,\zeta}$, where $\clL_{M,\zeta} := \clL_M \cap M_\zeta$. \item $\clB_M = \bigsqcup_{\zeta \in X^\imath} \clB_{M,\zeta}$, where $\clB_{M,\zeta} := \clB_M \cap \ev_\infty(\clL_{M,\zeta})$; this enables us to define a map $\wti : \clB_M \rightarrow X^\imath$. \item $\clL_M$ is stable under $\Btil_i$ for all $i \in I$; this induces a linear map $\Btil_i$ on $\ol{\clL}_M$. \item $M = \bigoplus_{n \in \Z} M_{i,n}$, $\clL_M = \bigoplus_{n \in \Z} (\clL_M \cap M_{i,n})$, and $\clB_M = \bigsqcup_{n \in \Z} (\clB_M \cap \ev_\infty(\clL_M \cap M_{i,n}))$ for all $i \in I$, $n \in \Z$, where $M_{i,n} := \bbK\{ v \in M \mid \beta_i(v) = n \}$; this enables us to define a map $\beta_i : \clB_M \rightarrow \Z$. \item $\clB_M$ forms an $\imath$crystal with respect to the structure maps above. \end{itemize} Since $\Btil_i$ and $\beta_i$ are defined in terms of $\Ui_{i,\tau(i)}$-modules, we assume, until the end of this section, that $I = \{ i,\tau(i) \}$ for some $i \in I_\tau$. \subsection{The $a_{i,\tau(i)} = 2$ case} Suppose that $a_{i,\tau(i)} = 2$. In this case, we have $\U = U_q(\frsl_2)$, and $\Ui = \bbK[B_i]$. Hence, for each $n \in \Z$, there exists a $1$-dimensional irreducible $\Ui$-module $V^\imath(n) = \bbK v$ such that $$ B_i v = [n]_i v.
$$ This $\Ui$-module has an $X^\imath$-weight module structure such that $V^\imath(n) = V^\imath(n)_{\ol{n}}$; note that $X^\imath = X/2X \simeq \Z/2\Z$. Let $M$ be an $X^\imath$-weight module isomorphic to a direct sum of $V^\imath(n)$'s. For each $n \in \Z$, let $M[n]$ denote the isotypic component of $M$ of type $V^\imath(n)$. Then, we have $$ M = \bigoplus_{n \in \Z} M[n]. $$ For each $v \in M[n]$, we set $$ \beta_i(v) := |n|, \qu \Btil_i v := \sgn(n)v. $$ \begin{lem}\label{icrystal basis of the trivial module} For each $n \in \Z$, the $\Ui$-module $V^\imath(n)$ has an $\imath$crystal basis isomorphic to $\clB^\imath(n)$. In particular, the trivial $\U$-module $V(0)$ has its crystal basis $\clB(0)$ as its $\imath$crystal basis. \end{lem} \begin{proof} It is immediately verified that $\bbK_\infty v$ forms a $\bbK_\infty$-lattice of $V^\imath(n)$, and $\{ \ev_\infty(v) \}$ forms an $\imath$crystal basis of $V^\imath(n)$ isomorphic to $\clB^\imath(n)$ (see also Example \ref{examples of icrystals} \eqref{examples of icrystals 3}). The second assertion follows from the fact that $V(0) \simeq V^\imath(s_i)$. \end{proof} \begin{prop}\label{deg and Btil on tensor of crystal bases} Let $M$ be a $\Ui$-module with an $\imath$crystal basis $\clB_M$, and $N$ an integrable $\U$-module with a crystal basis $\clB_N$. 
Then, for each $b_1 \in \clB_M$, $b_2 \in \clB_N$, and $i \in I$, we have \begin{align} \begin{split} &\beta_i(b_1 \otimes b_2) = \begin{cases} \beta_i(b_1) - \wt_i(b_2) \qu & \IF \beta_i(b_1) > \vphi_i(b_2), \\ \vep_i(b_2) \qu & \IF \beta_i(b_1) \leq \vphi_i(b_2) \AND \ol{\beta_i(b_1)} = \ol{\vphi_i(b_2)}, \\ \vep_i(b_2)+1 \qu & \IF \beta_i(b_1) \leq \vphi_i(b_2) \AND \ol{\beta_i(b_1)} \neq \ol{\vphi_i(b_2)}, \end{cases} \\ &\Btil_i(b_1 \otimes b_2) = \begin{cases} \Btil_i b_1 \otimes b_2 \qu & \IF \beta_i(b_1) > \vphi_i(b_2), \\ b_1 \otimes \Etil_i b_2 \qu & \IF \beta_i(b_1) \leq \vphi_i(b_2) \AND \ol{\beta_i(b_1)} = \ol{\vphi_i(b_2)}, \\ b_1 \otimes \Ftil_i b_2 \qu & \IF \beta_i(b_1) \leq \vphi_i(b_2) \AND \ol{\beta_i(b_1)} \neq \ol{\vphi_i(b_2)}. \end{cases} \end{split} \nonumber \end{align} \end{prop} \begin{proof} The assertion can be proved essentially in the same way as \cite[Proposition 5.1.4]{W21b}. \end{proof} \begin{cor}\label{icrystal structure of a crystal; a = 2} Let $M$ be an integrable $\U$-module with a crystal basis $\clB_M$. Then, for each $b \in \clB_M$ and $i \in I$, we have \begin{align} \begin{split} &\beta_i(b) = \begin{cases} |s_i| - \wt_i(b) \qu & \IF |s_i| > \vphi_i(b), \\ \vep_i(b) \qu & \IF |s_i| \leq \vphi_i(b) \AND \ol{s_i} = \ol{\vphi_i(b)}, \\ \vep_i(b)+1 \qu & \IF |s_i| \leq \vphi_i(b) \AND \ol{s_i} \neq \ol{\vphi_i(b)}, \end{cases} \\ &\Btil_i(b) = \begin{cases} \sgn(s_i) b \qu & \IF |s_i| > \vphi_i(b), \\ \Etil_i(b) \qu & \IF |s_i| \leq \vphi_i(b) \AND \ol{s_i} = \ol{\vphi_i(b)}, \\ \Ftil_i(b) \qu & \IF |s_i| \leq \vphi_i(b) \AND \ol{s_i} \neq \ol{\vphi_i(b)}. \end{cases} \end{split} \nonumber \end{align} \end{cor} \begin{proof} The assertion follows from Lemma \ref{icrystal basis of the trivial module} and Proposition \ref{deg and Btil on tensor of crystal bases} by identifying $\clB_M$ with $\clB(0) \otimes \clB_M$. \end{proof} \begin{ex}\label{icrystal structure of V(lm) for AI}\normalfont Let $n \in \Z_{\geq 0}$. 
Then, the $(n+1)$-dimensional irreducible $\U$-module $V(n)$ has $\clB(n)$ in Example \ref{crystal B(n)} as its crystal basis. Let us illustrate $\Btil_i$ and $\beta_i$ on it. In the following, the $\Btil_i$ is described in the same way as the crystal graph (the label ``$i$'' is omitted): \begin{enumerate} \item When $n < |s_i|$. $$ \xymatrix@R=5pt{ & b_0 \ar@(ul,ur)^-{\sgn(s_i)} & b_1 \ar@(ul,ur)^-{\sgn(s_i)} & \cdots & b_n \ar@(ul,ur)^-{\sgn(s_i)} \\ \beta_i :& |s_i|-n & |s_i|-n+2 & \cdots & |s_i|+n } $$ \item When $n \geq |s_i|$ and $\ol{n} = \ol{s_i}$. $$ \xymatrix@C=10pt@R=5pt{ &b_0 & b_1 \ar@<0.5ex>[r] & b_2 \ar@<0.5ex>[l] & \cdots & b_{n-|s_i|-1} \ar@<0.5ex>[r] & b_{n-|s_i|} \ar@<0.5ex>[l] & b_{n-|s_i|+1} \ar@(ul,ur)^-{\sgn(s_i)} & \cdots & b_n \ar@(ul,ur)^-{\sgn(s_i)} \\ \beta_i : & 0 & 2 & 2 & \cdots & n-|s_i| & n-|s_i| & n-|s_i|+2 & \cdots & |s_i|+n } $$ \item When $n \geq |s_i|$ and $\ol{n} \neq \ol{s_i}$. $$ \xymatrix@C=10pt@R=5pt{ & b_0 \ar@<0.5ex>[r] & b_1 \ar@<0.5ex>[l] & \cdots & b_{n-|s_i|-1} \ar@<0.5ex>[r] & b_{n-|s_i|} \ar@<0.5ex>[l] & b_{n-|s_i|+1} \ar@(ul,ur)^-{\sgn(s_i)} & \cdots & b_n \ar@(ul,ur)^-{\sgn(s_i)} \\ \beta_i :& 1 & 1 & \cdots & n-|s_i| & n-|s_i| & n-|s_i|+2 & \cdots & |s_i|+n } $$ \end{enumerate} \end{ex} \subsection{The $a_{i,\tau(i)} = 0$ case} Suppose that $a_{i,\tau(i)} = 0$. In this case, we have $\U \simeq U_q(\frsl_2) \otimes U_q(\frsl_2)$, and there exists an algebra isomorphism $U_q(\frsl_2) \rightarrow \Ui$ which sends $E,F,K^{\pm 1}$ to $B_{\tau(i)},B_i, k_i^{\pm 1}$. Hence, for each $n \in \Z_{\geq 0}$, there exists an $(n+1)$-dimensional irreducible $\Ui$-module $V^\imath(n) = \bigoplus_{k=0}^n \bbK v_k$ such that $$ B_{\tau(i)} v_0 = 0, \qu B_i^{(k)} v_0 = v_k, \qu k_i v_0 = q_i^n v_0. $$ Here, we understand that $v_{-1} = v_{n+1} = 0$.
This $\Ui$-module has an $X^\imath$-weight module structure such that $$ V^\imath(n) = \bigoplus_{k=0}^n V^\imath(n)_{\zeta_k}, \qu V^\imath(n)_{\zeta_k} = \bbK v_k, $$ where $\zeta_k \in X^\imath$ is such that $\la h_i-h_{\tau(i)}, \zeta_k \ra = n-2k$. Let $M$ be an $X^\imath$-weight module isomorphic to a direct sum of $V^\imath(n)$'s. For each $n \in \Z_{\geq 0}$, let $M[n]$ denote the isotypic component of $M$ of type $V^\imath(n)$. For each $n \in \Z_{\geq 0}$ and $0 \leq k \leq n$, set $$ M[n;k] := B_i^{k}(M[n] \cap \Ker B_{\tau(i)}). $$ Then, we have $$ M = \bigoplus_{\substack{n \in \Z_{\geq 0} \\ 0 \leq k \leq n}} M[n;k]. $$ For each $v \in M[n;k]$, we set $$ \beta_i(v) := n-k, \qu \beta_{\tau(i)}(v) := k, \qu \Btil_i v := \frac{1}{[k+1]_i} B_i v, \qu \Btil_{\tau(i)} v := \frac{1}{[n-k+1]_i} B_{\tau(i)} v. $$ Note that $M[n;k] \subset M_{\zeta_k}$, where $\zeta_k \in X^\imath$ is as before, and $\Btil_i,\Btil_{\tau(i)}$ define $\bbK$-linear endomorphisms on $M$. \begin{rem}\normalfont If we interchange $i$ and $\tau(i)$, then $V^\imath(n)$ becomes $V^\imath(n)$ with $v_k$ being replaced by $v_{n-k}$. Hence, our definition of $\beta_j,\Btil_j$ for $j \in \{ i,\tau(i) \}$ is independent of the choice of $I_\tau$. \end{rem} \begin{lem}\label{icrystal basis of the trivial module a=0} For each $n \geq 0$, the $\Ui$-module $V^\imath(n)$ has an $\imath$crystal basis isomorphic to $\clB^\imath(n)$. In particular, the trivial module $V(0)$ has its crystal basis $\clB(0)$ as its $\imath$crystal basis. \end{lem} \begin{proof} It is immediately verified that $\bigoplus_{k = 0}^n \bbK_\infty v_k$ forms a $\bbK_\infty$-lattice of $V^\imath(n)$, and $\{ \ev_\infty(v_k) \mid 0 \leq k \leq n \}$ forms an $\imath$crystal basis of $V^\imath(n)$ isomorphic to $\clB^\imath(n)$ (see also Example \ref{examples of icrystals} \eqref{examples of icrystals 5}). The second assertion follows from the easily verified fact that $V(0) \simeq V^\imath(0)$.
\end{proof} \begin{prop}\label{Btil on tensor of crystal bases for 0} Let $M$ be a $\Ui$-module with an $\imath$crystal basis $\clB_M$, and $N$ an integrable $\U$-module with a crystal basis $\clB_N$. Then, for each $b_1 \in \clB_M$, $b_2 \in \clB_N$ and $j \in \{ i,\tau(i) \}$, we have \begin{align} \begin{split} \beta_j(b_1 \otimes b_2) &= \max(\vphi_j(b_2)+\wti_j(b_1)-\wt_{\tau(j)}(b_2), \beta_j(b_1)-\wt_{\tau(j)}(b_2), \vep_{\tau(j)}(b_2)), \\ \Btil_j(b_1 \otimes b_2) &= \begin{cases} b_1 \otimes \Ftil_j b_2 \qu & \IF \vphi_j(b_2) > \beta_{\tau(j)}(b_1), \vphi_{\tau(j)}(b_2)-\wti_j(b_1), \\ \Btil_j b_1 \otimes b_2 \qu & \IF \vphi_j(b_2) \leq \beta_{\tau(j)}(b_1) > \vphi_{\tau(j)}(b_2)-\wti_j(b_1), \\ b_1 \otimes \Etil_{\tau(j)} b_2 \qu & \IF \vphi_j(b_2), \beta_{\tau(j)}(b_1) \leq \vphi_{\tau(j)}(b_2)-\wti_j(b_1). \end{cases} \end{split} \nonumber \end{align} \end{prop} \begin{proof} The assertion can be proved essentially in the same way as \cite[Theorem 6.3.5]{W17}. \end{proof} \begin{cor}\label{icrystal structure of a crystal; a = 0} Let $M$ be an integrable $\U$-module with a crystal basis $\clB_M$. Then, for each $b \in \clB_M$ and $j \in \{ i,\tau(i) \}$, we have \begin{align} \begin{split} &\beta_j(b) = \max(\vphi_j(b)-\wt_{\tau(j)}(b),\vep_{\tau(j)}(b)), \\ &\Btil_j b = \begin{cases} \Ftil_j b \qu & \IF \vphi_j(b) > \vphi_{\tau(j)}(b), \\ \Etil_{\tau(j)}b \qu & \IF \vphi_j(b) \leq \vphi_{\tau(j)}(b). \end{cases} \end{split} \nonumber \end{align} \end{cor} \begin{proof} The assertion follows from Lemma \ref{icrystal basis of the trivial module a=0} and Proposition \ref{Btil on tensor of crystal bases for 0} by identifying $\clB_M$ with $\clB(0) \otimes \clB_M$. 
\end{proof} \begin{ex}\normalfont Let $m,n \in \Z_{\geq 0}$, and consider the $(m+1)(n+1)$-dimensional irreducible $\U$-module $V(m,n) = \bigoplus_{\substack{0 \leq k \leq m \\ 0 \leq l \leq n}} \bbK v_{k,l}$ given by $$ E_i v_{0,0} = E_{\tau(i)} v_{0,0} = 0, \qu F_{\tau(i)}^{(k)} F_i^{(l)} v_{0,0} = v_{k,l}, \qu K_i v_{0,0} = q_i^n v_{0,0}, \qu K_{\tau(i)} v_{0,0} = q_i^m v_{0,0}. $$ It has a crystal basis $\clB(m,n) := \{ b_{k,l} \mid 0 \leq k \leq m,\ 0 \leq l \leq n \}$ given by \begin{align} \begin{split} &\wt_i(b_{k,l}) = n-2l, \ \vep_i(b_{k,l}) = l, \ \vphi_i(b_{k,l}) = n-l, \ \Etil_i b_{k,l} = b_{k,l-1}, \ \Ftil_i b_{k,l} = b_{k,l+1}, \\ &\wt_{\tau(i)}(b_{k,l}) = m-2k, \ \vep_{\tau(i)}(b_{k,l}) = k, \ \vphi_{\tau(i)}(b_{k,l}) = m-k, \ \Etil_{\tau(i)} b_{k,l} = b_{k-1,l}, \ \Ftil_{\tau(i)} b_{k,l} = b_{k+1,l}. \end{split} \nonumber \end{align} Hence, we have $$ \Btil_i b_{k,l} = \begin{cases} b_{k,l+1} & \IF n-l > m-k, \\ b_{k-1,l} & \IF n-l \leq m-k. \end{cases} $$ For example, when $(m,n) = (2,3)$, the $\Btil_i$ on $\clB(m,n)$ is described as follows: $$ \xymatrix@R=8pt{ b_{2,0} \ar[r] & b_{2,1} \ar[r] & b_{2,2} \ar[r] & b_{2,3} \ar[d] \\ b_{1,0} \ar[r] & b_{1,1} \ar[r] & b_{1,2} \ar[d] & b_{1,3} \ar[d] \\ b_{0,0} \ar[r] & b_{0,1} & b_{0,2} & b_{0,3} } $$ From above, we see that the $\Btil_i$ (resp., $\Btil_{\tau(i)}$) on a crystal basis of an integrable $\U$-module coincides with the $\Ftil_{\tau(i)'}$ (resp., $\Etil_{\tau(i)'}$) on the tensor product of crystal bases of two integrable $\U_{I'}$-modules (see Examples \ref{tensor rule example}, \ref{set up for diagonal type}, and \ref{icrystal of diagonal type is crystal}). \end{ex} \subsection{The $a_{i,\tau(i)} = -1$ case} Suppose that $a_{i,\tau(i)} = -1$. 
In this case, we have $\U \simeq U_q(\frsl_3)$, and \begin{align} \begin{split} &k_i B_i = q_i^{-3} B_i k_i, \qu k_i B_{\tau(i)} = q_i^3 B_{\tau(i)} k_i, \\ &B_i^2B_{\tau(i)} - [2]_i B_iB_{\tau(i)}B_i + B_{\tau(i)}B_i^2 = -[2]_i B_i \{ k_i;-1-s_i \}_i, \\ &B_{\tau(i)}^2B_i - [2]_i B_{\tau(i)}B_iB_{\tau(i)} + B_iB_{\tau(i)}^2 = -[2]_i \{ k_i;-1-s_i \}_i B_{\tau(i)}, \end{split} \nonumber \end{align} where $$ \{ k_i;a \}_i := q_i^ak_i + q_i^{-a}k_i\inv $$ for each $a \in \Z$. Set $$ t := B_{\tau(i)}B_i - q_iB_iB_{\tau(i)} - [k_i;-s_i]_i, $$ where $$ [k_i;a]_i := \frac{q_i^a k_i - q_i^{-a}k_i\inv}{q_i-q_i\inv}. $$ Then, for each $k \in \Z_{\geq 0}$, we have \begin{align}\label{Btaui Bik} \begin{split} B_{\tau(i)} B_i^{(k)} = B_i^{(k-1)}(t + [k_i;-s_i-2(k-1)]_i) + q_i^k B_i^{(k)} B_{\tau(i)}. \end{split} \end{align} For each $n_- \in \Z_{\geq 0}$ and $n_+ \in \Z$, let $V^\imath(n_-,n_+) = \bigoplus_{k=0}^{n_-} \bbK v_k$ be a $\Ui$-module given by $$ B_{\tau(i)} v_0 = 0, \qu B_i^{(k)} v_0 = v_k, \qu k_i v_0 = q_i^{n_-+n_+} v_0, \qu t v_0 = [n_--n_++s_i]_i v_0, $$ where $v_{-1}=v_{n_-+1} = 0$. Then, $V^\imath(n_-,n_+)$ is an irreducible $X^\imath$-weight module (see \cite[Theorem 4.4.7]{W21a}) such that $$ V^\imath(n_-,n_+) = \bigoplus_{k=0}^{n_-} V^\imath(n_-,n_+)_{\zeta_k}, \qu V^\imath(n_-,n_+)_{\zeta_k} = \bbK v_k, $$ where $\zeta_k \in X^\imath$ is such that $\la h_i-h_{\tau(i)}, \zeta_k \ra = n_-+n_+-3k$. Furthermore, $V^\imath(n_-,n_+)$ admits a contragredient Hermitian inner product $(\cdot,\cdot)$ such that $(v_0,v_0) = 1$ (see Remark \ref{wp preserves Ui}). \begin{rem}\label{effect of changing i taui}\normalfont Note that we have $$ B_i v_{n_-} = 0, \qu B_{\tau(i)}^{(k)} v_{n_-} \in \bbK^\times v_{n_--k}, \qu k_{\tau(i)} v_{n_-} = q_i^{2n_--n_+} v_{n_-}, \qu t' v_{n_-} = [n_++s_{\tau(i)}]_i v_{n_-}, $$ where $$ t' := B_iB_{\tau(i)} - q_iB_{\tau(i)}B_i - [k_{\tau(i)};-s_{\tau(i)}]_i.
$$ This shows that if we exchange the roles of $i$ and $\tau(i)$, then $n_+,k$ are replaced by $n_--n_+,n_--k$, respectively. \end{rem} \begin{lem} For each $0 \leq k \leq n_-$, we have $$ (v_k,v_k) = q_i^{k(-n_--n_++ \frac{3}{2}k +s_i - \frac{1}{2})} { n_- \brack k }_i \prod_{l=1}^k \{ n_+-s_i-l+1 \}_i \in \Z[q,q\inv], $$ where $$ \{ a \}_i := q_i^a + q_i^{-a} $$ for each $a \in \Z$. Consequently, if we write $(v_k,v_k) = \lt(v_k)^2 + \text{lower terms}$ for some $\lt(v_k) \in \R_{> 0} q^{\frac{1}{2} \Z}$, then we obtain \begin{align} \begin{split} \lt(v_k) = \begin{cases} 1 & \IF n_+-s_i \geq n_- \OR, \\ & \ -1 < n_+-s_i < n_- \AND k < n_+-s_i+1, \\ \sqrt{2} q_i^{\frac{1}{2}(k-n_++s_i-1)(k-n_++s_i)} & \IF -1 < n_+-s_i < n_- \AND k \geq n_+-s_i+1, \\ q_i^{\frac{1}{2}k(k-2n_++2s_i-1)} & \IF n_+-s_i \leq -1. \end{cases} \end{split} \nonumber \end{align} \end{lem} \begin{proof} Let us see that $$ \wp^*(B_i) = q_i\inv E_iK_i\inv + q_i^{s_i} K_i\inv q_i\inv F_{\tau(i)} K_{\tau(i)} = q_i^{s_i-2} B_{\tau(i)} k_i\inv. $$ Also, by identity \eqref{Btaui Bik}, we have \begin{align} \begin{split} B_{\tau(i)} v_k &= B_{\tau(i)} B_i^{(k)} v_0 \\ &= ([n_--n_++s_i]_i + [n_-+n_+-s_i-2k+2]_i) v_{k-1} \\ &= [n_--k+1]_i\{ n_+-s_i-k+1 \}_i v_{k-1}. \end{split} \nonumber \end{align} Now, we compute as \begin{align} \begin{split} (v_k,v_k) &= \frac{1}{[k]_i} (v_{k-1}, q_i^{s_i-2} B_{\tau(i)} k_i\inv v_k) \\ &= \frac{q_i^{-n_--n_++3k+s_i-2}[n_--k+1]_i\{ n_+-s_i-k+1 \}_i }{[k]_i} (v_{k-1}, v_{k-1}) \\ &= \prod_{l=1}^k \frac{q_i^{-n_--n_++3l+s_i-2}[n_--l+1]_i\{ n_+-s_i-l+1 \}_i }{[l]_i} (v_0,v_0) \\ &= q_i^{k(-n_--n_++\frac{3}{2}k+s_i-\frac{1}{2})} { n_- \brack k }_i \prod_{l=1}^k \{ n_+-s_i-l+1 \}_i. \end{split} \nonumber \end{align} This proves the first half of the assertion. The remaining assertion follows from this identity by noting that the leading term of $\{ a \}_i$ is $2^{\delta_{a,0}} q_i^{|a|}$. \end{proof} For each $k$, set $$ \vtil_k := \lt(v_k)\inv v_k. 
$$ Such $\vtil_k$ is characterized up to a factor in $1+q\inv \bbK_\infty$ by the conditions that $\vtil_k \in \bbK^\times v_k$ and $(\vtil_k,\vtil_k) \in 1 + q\inv \bbK_\infty$. Now, we define linear operators $\Btil_i,\Btil_{\tau(i)}$ by \begin{align} \begin{split} &\Btil_i \vtil_k = \begin{cases} \vtil_{k+1} & \IF 0 \leq k < n_-, \\ 0 & \IF k = n_-, \end{cases} \\ &\Btil_{\tau(i)} \vtil_k = \begin{cases} \vtil_{k-1} & \IF 0 < k \leq n_-, \\ 0 & \IF k = 0. \end{cases} \end{split} \nonumber \end{align} Note that $\Btil_i,\Btil_{\tau(i)}$ are independent of the choice of $I_\tau$ up to $1+q\inv \bbK_\infty$ (see Remark \ref{effect of changing i taui}). In particular, they are actually independent at $q = \infty$. Let $M$ be an $X^\imath$-weight module isomorphic to a direct sum of $V^\imath(n_-,n_+)$'s. The linear operators $\Btil_i,\Btil_{\tau(i)}$ on $V^\imath(n_-,n_+)$'s can be extended to $M$. For each $n_-,n_+$, let $M[n_-,n_+]$ denote the isotypic component of $M$ of type $V^\imath(n_-,n_+)$. For each $0 \leq k \leq n_-$, set $$ M[n_-,n_+;k] := B_i^{k}(M[n_-,n_+] \cap \Ker B_{\tau(i)}). $$ Then, we have $$ M = \bigoplus_{\substack{n_- \in \Z_{\geq 0},\ n_+ \in \Z \\ 0 \leq k \leq n_-}} M[n_-,n_+;k]. $$ For each $v \in M[n_-,n_+;k]$, we set $$ \beta_i(v) = n_--k+\max(n_+-s_i-k,0), \qu \beta_{\tau(i)}(v) = k+\max(-n_+-s_{\tau(i)}+k,0). $$ Again, note that $\beta_i,\beta_{\tau(i)}$ are independent of the choice of $I_\tau$ (see Remark \ref{effect of changing i taui}). \begin{lem}\label{icrystal basis of the trivial module a=-1} For each $n_- \in \Z_{\geq 0}$ and $n_+ \in \Z$, the $\Ui$-module $V^\imath(n_-,n_+)$ has an $\imath$crystal basis isomorphic to $\clB^\imath(n_-,n_+)$. In particular, the trivial module $V(0)$ has its crystal basis $\clB(0)$ as its $\imath$crystal basis.
\end{lem} \begin{proof} It is immediately verified that $\bigoplus_{k = 0}^{n_-} \bbK_\infty v_k$ forms a $\bbK_\infty$-lattice of $V^\imath(n_-,n_+)$, and $\{ \ev_\infty(v_k) \mid 0 \leq k \leq n_- \}$ forms an $\imath$crystal basis of $V^\imath(n_-,n_+)$ isomorphic to $\clB^\imath(n_-,n_+)$ (see also Example \ref{examples of icrystals} \eqref{examples of icrystals 6}). The second assertion follows from the easily verified fact that $V(0) \simeq V^\imath(0,0)$. \end{proof} Below, we aim to describe the tensor product rule for $\Btil_i$. Recall that $\U \simeq U_q(\frsl_3)$. Let us take this isomorphism in such a way that the natural representation $V_\natural = \bbK u_{-1} \oplus \bbK u_0 \oplus \bbK u_1$ of $U_q(\frsl_3)$ has the following $\U$-module structure: $$ E_i u_{-1} = E_{\tau(i)} u_{-1} = 0,\ K_i u_{-1} = q_i u_{-1},\ K_{\tau(i)} u_{-1} = u_{-1},\ F_i u_{-1} = u_0,\ F_{\tau(i)}u_0 = u_1. $$ Let $(\cdot,\cdot)$ denote the contragredient Hermitian inner product on $V_\natural$ such that $(u_{-1},u_{-1}) = 1$. Also, set $\clL_\natural := \clL_{V_\natural}$, $\ol{\clL}_\natural := \ol{\clL}_{V_\natural}$, $\clB_\natural := \{ b_j := \ev_\infty(u_j) \mid j = -1,0,1 \}$. Then, $\clB_\natural$ is the crystal basis of $V_\natural$. Set $V := V^\imath(n_-,n_+) \otimes V_\natural$, $\clL := \clL_{V^\imath(n_-,n_+)} \otimes \clL_\natural$, and $\clB := \clB^\imath(n_-,n_+) \otimes \clB_\natural$. The following propositions describe $\Btil_i$ on $\clL$ modulo $q\inv \clL$. The proofs are based on straightforward calculation, hence we omit them. \begin{prop} If $n_- = 0$, then we have $$ V \simeq V^\imath(1,n_+) \oplus V^\imath(0,n_++1), $$ with highest weight vectors \begin{align} \begin{split} &v_{+0} := v_0 \otimes u_{-1} + q_i^{-(n_+-s_i)} v_0 \otimes u_1, \\ &v_{0+} := v_0 \otimes u_{-1} - q_i^{+(n_+-s_i)} v_0 \otimes u_1.
\end{split} \nonumber \end{align} Furthermore, we have $$ B_i v_{+0} = q_i^{-(n_+-s_i)} \{ n_+-s_i \}_i v_{0} \otimes u_0. $$ Consequently, $\clB$ forms an $\imath$crystal basis of $V$ whose crystal graph is described as follows. \begin{enumerate} \item When $n_+-s_i > 0$, we have $\clB \simeq \clB^\imath(1,n_+) \sqcup \clB^\imath(0,n_++1)$: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 & \bullet \ar[r] & \bullet & \bullet } $$ \item When $n_+-s_i = 0$, we have $\clB \simeq \clB^\imath(1,n_+;\vee)$: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 & \bullet \ar[r]^-{\frac{1}{\sqrt{2}}} & \bullet & \bullet \ar[l]_-{\frac{1}{\sqrt{2}}} } $$ \item When $n_+-s_i < 0$, we have $\clB \simeq \clB^\imath(1,n_+) \sqcup \clB^\imath(0,n_++1)$: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 & \bullet & \bullet & \bullet \ar[l] } $$ \end{enumerate} \end{prop} \begin{prop} If $n_- > 0$, then we have $$ V \simeq V^\imath(n_-+1,n_+) \oplus V^\imath(n_-,n_++1) \oplus V^\imath(n_--1,n_+-1), $$ with highest weight vectors \begin{align} \begin{split} &v_{+0} := v_0 \otimes u_{-1} + q_i^{n_--(n_+-s_i)} v_0 \otimes u_1, \\ &v_{0+} := v_0 \otimes u_{-1} - q_i^{-n_-+(n_+-s_i)} v_0 \otimes u_1, \\ &v_{--} := v_1 \otimes u_{-1} + q_i^{-n_--(n_+-s_i)-1} v_1 \otimes u_1 - q_i^{-n_--(n_+-s_i)}[n_-]_i\{ n_+-s_i \}_i v_0 \otimes u_0. \end{split} \nonumber \end{align} Furthermore, for each $k \geq 0$, we have \begin{align} \begin{split} B_i^{(k)} v_{+0} &= q_i^{-k} v_k \otimes u_{-1} + q_i^{n_--(n_+-s_i)} v_k \otimes u_1 + q_i^{-(n_+-s_i)+k-1} \{ (n_+-s_i)-k+1 \}_i v_{k-1} \otimes u_0, \\ B_i^{(k)} v_{0+} &= q_i^{-k} v_k \otimes u_{-1} - q_i^{-n_-+(n_+-s_i)} v_k \otimes u_1 + (1-q_i^{-2(n_--k+1)}) v_{k-1} \otimes u_0, \\ B_i^{(k)} v_{--} &= q_i^{-k}[k+1]_i v_{k+1} \otimes u_{-1} + q_i^{-n_--(n_+-s_i)-1}[k+1]_i v_{k+1} \otimes u_1 \\ &\qu- q_i^{-n_--(n_+-s_i)+k}[n_--k]_i \{ (n_+-s_i)-k \}_i v_k \otimes u_0.
\end{split} \nonumber \end{align} \end{prop} \begin{prop}\label{Bi(n-,n+) otimes Bnatural 1} Suppose that $0 < n_- < n_+-s_i$. Set $$ \vtil_{+0} := v_{+0}, \qu \vtil_{0+} := -q_i^{n_--(n_+-s_i)} v_{0+}, \qu \vtil_{--} := v_{--}. $$ Then, modulo $q\inv \clL$, we have \begin{align} \begin{split} &\Btil_i^k \vtil_{+0} \equiv \begin{cases} \vtil_0 \otimes u_{-1} \qu & \IF k = 0, \\ \vtil_{k-1} \otimes u_0 \qu & \IF 1 \leq k \leq n_-+1, \end{cases} \\ &\Btil_i^k \vtil_{0+} \equiv \vtil_k \otimes u_1 \qu \IF 0 \leq k \leq n_-, \\ &\Btil_i^k \vtil_{--} \equiv \vtil_{k+1} \otimes u_{-1} \qu \IF 0 \leq k \leq n_--1. \end{split} \nonumber \end{align} Consequently, $\clB$ forms an $\imath$crystal basis of $V$ isomorphic to $\clB^\imath(n_-+1,n_+) \sqcup \clB^\imath(n_-,n_++1) \sqcup \clB^\imath(n_--1,n_+-1)$ whose crystal graph is described as follows: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 \ar[d] & \bullet \ar[r] & \bullet \ar[d] & \bullet \ar[d] \\ b_1 \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] \\ b_{n_--1} \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ b_{n_-} & \bullet & \bullet & \bullet } $$ \end{prop} \begin{prop}\label{Bi(n-,n+) otimes Bnatural 2} Suppose that $0 < n_- = n_+-s_i$. Set $$ \vtil_{+0} := \frac{1}{\sqrt{2}}v_{+0}, \qu \vtil_{0+} := \frac{1}{\sqrt{2}} v_{0+}, \qu \vtil_{--} := v_{--}. 
$$ Then, modulo $q\inv \clL$, we have \begin{align} \begin{split} &\Btil_i^k \vtil_{+0} \equiv \begin{cases} \frac{1}{\sqrt{2}}(\vtil_0 \otimes u_{-1}+\vtil_0 \otimes u_1) \qu & \IF k = 0, \\ \frac{1}{\sqrt{2}}(\vtil_{k-1} \otimes u_0 + \vtil_k \otimes u_1) \qu & \IF 1 \leq k \leq n_-, \\ \vtil_{n_-} \otimes u_0 \qu & \IF k = n_-+1, \end{cases} \\ &\Btil_i^k \vtil_{0+} \equiv \begin{cases} \frac{1}{\sqrt{2}}(\vtil_0 \otimes u_{-1} - \vtil_0 \otimes u_1) \qu & \IF k = 0, \\ \frac{1}{\sqrt{2}}(\vtil_{k-1} \otimes u_0 - \vtil_k \otimes u_1) \qu & \IF 1 \leq k \leq n_-, \end{cases} \\ &\Btil_i^k \vtil_{--} \equiv \vtil_{k+1} \otimes u_{-1} \qu \IF 0 \leq k \leq n_--1, \end{split} \nonumber \end{align} Consequently, $\clB$ forms an $\imath$crystal basis of $V$ isomorphic to $\clB^\imath(n_-+1,n_+;\vee) \sqcup \clB^\imath(n_--1,n_+-1)$ whose crystal graph is described as follows: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 \ar[d] & \bullet \ar[r] & \bullet \ar[d] & \bullet \ar[d] \\ b_1 \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] \\ b_{n_--1} \ar[d] & \bullet \ar[d] & \bullet \ar[d]_-{\frac{1}{\sqrt{2}}} & \bullet \ar[d] \\ b_{n_-} & \bullet & \bullet & \bullet \ar[l]^-{\frac{1}{\sqrt{2}}} } $$ \end{prop} \begin{prop}\label{Bi(n-,n+) otimes Bnatural 3} Suppose that $-1 < n_+-s_i < n_-$. Set $$ \vtil_{+0} := q_i^{-n_-+(n_+-s_i)}v_{+0}, \qu \vtil_{0+} := v_{0+}, \qu \vtil_{--} := v_{--}. $$ Then, modulo $q\inv \clL$, we have \begin{align} \begin{split} &\Btil_i^k \vtil_{+0} \equiv \begin{cases} \vtil_k \otimes u_{1} \qu & \IF 0 \leq k \leq n_-, \\ \vtil_{n_-} \otimes u_0 \qu & \IF k = n_-+1, \end{cases} \\ &\Btil_i^k \vtil_{0+} \equiv \begin{cases} \vtil_0 \otimes u_{-1} \qu & \IF k = 0, \\ \vtil_{k-1} \otimes u_0 \qu & \IF 1 \leq k \leq n_-, \end{cases} \\ &\Btil_i^k \vtil_{--} \equiv \vtil_{k+1} \otimes u_{-1} \qu \IF 0 \leq k \leq n_--1. 
\end{split} \nonumber \end{align} Consequently, $\clB$ forms an $\imath$crystal basis of $V$ isomorphic to $\clB^\imath(n_-+1,n_+) \sqcup \clB^\imath(n_-,n_++1) \sqcup \clB^\imath(n_--1,n_+-1)$ whose crystal graph is described as follows: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 \ar[d] & \bullet \ar[r] & \bullet \ar[d] & \bullet \ar[d] \\ b_1 \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] \\ b_{n_--1} \ar[d] & \bullet \ar[d] & \bullet & \bullet \ar[d] \\ b_{n_-} & \bullet & \bullet & \bullet \ar[l] } $$ \end{prop} \begin{prop}\label{Bi(n-,n+) otimes Bnatural 4} Suppose that $n_- > 0$ and $n_+-s_i = -1$. Set $$ \vtil_{+0} := q_i^{-n_--1} v_{+0}, \qu \vtil_{0+} := v_{0+}, \qu \vtil_{--} := v_{--}. $$ Then, modulo $q\inv \clL$, we have \begin{align} \begin{split} &\Btil_i^k \vtil_{+0} \equiv \begin{cases} \vtil_k \otimes u_{1} \qu & \IF 0 \leq k \leq n_-, \\ \vtil_{n_-} \otimes u_0 \qu & \IF k = n_-+1, \end{cases} \\ &\Btil_i^k \vtil_{0+} \equiv \begin{cases} \vtil_0 \otimes u_{-1} \qu & \IF k = 0, \\ \frac{1}{\sqrt{2}}(\vtil_k \otimes u_{-1} + \vtil_{k-1} \otimes u_0) \qu & \IF 1 \leq k \leq n_-, \end{cases} \\ &\Btil_i^k \vtil_{--} \equiv \frac{1}{\sqrt{2}}(\vtil_{k+1} \otimes u_{-1} - \vtil_k \otimes u_0) \qu \IF 0 \leq k \leq n_--1. 
\end{split} \nonumber \end{align} Consequently, $\clB$ forms an $\imath$crystal basis of $V$ isomorphic to $\clB^\imath(n_-+1,n_+) \sqcup \clB^\imath(n_-,n_++1;\wedge)$ whose crystal graph is described as follows: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 \ar[d] & \bullet \ar[r]^-{\frac{1}{\sqrt{2}}} \ar[d]_-{\frac{1}{\sqrt{2}}} & \bullet \ar[d] & \bullet \ar[d] \\ b_1 \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] \\ b_{n_--1} \ar[d] & \bullet \ar[d] & \bullet & \bullet \ar[d] \\ b_{n_-} & \bullet & \bullet & \bullet \ar[l] } $$ \end{prop} \begin{prop}\label{Bi(n-,n+) otimes Bnatural 5} Suppose that $n_- > 0$ and $n_+-s_i < -1$. Set $$ \vtil_{+0} := q_i^{-n_-+(n_+-s_i)}v_{+0}, \qu \vtil_{0+} := v_{0+}, \qu \vtil_{--} := -v_{--}. $$ Then, modulo $q\inv \clL$, we have \begin{align} \begin{split} &\Btil_i^k \vtil_{+0} \equiv \begin{cases} \vtil_k \otimes u_{1} \qu & \IF 0 \leq k \leq n_-, \\ \vtil_{n_-} \otimes u_0 \qu & \IF k = n_-+1, \end{cases} \\ &\Btil_i^k \vtil_{0+} \equiv \vtil_{k} \otimes u_{-1} \qu \IF 0 \leq k \leq n_-, \\ &\Btil_i^k \vtil_{--} \equiv \vtil_{k} \otimes u_{0} \qu \IF 0 \leq k \leq n_--1. 
\end{split} \nonumber \end{align} Consequently, $\clB$ forms an $\imath$crystal basis of $V$ isomorphic to $\clB^\imath(n_-+1,n_+) \sqcup \clB^\imath(n_-,n_++1) \sqcup \clB^\imath(n_--1,n_+-1)$ whose crystal graph is described as follows: $$ \xymatrix@R=12pt{ & b_{-1} \ar[r]^i & b_0 \ar[r]^{\tau(i)} & b_1 \\ b_0 \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ b_1 \ar[d] & \bullet \ar[d] & \bullet \ar[d] & \bullet \ar[d] \\ \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] \\ b_{n_--1} \ar[d] & \bullet \ar[d] & \bullet & \bullet \ar[d] \\ b_{n_-} & \bullet & \bullet & \bullet \ar[l] } $$ \end{prop} Since $\beta_i(b), \beta_{\tau(i)}(b)$ for $b \in \clB^\imath(n_-,n_+)$, $\clB^\imath(n_-,n_+;\vee)$, $\clB^\imath(n_-,n_+;\wedge)$ can be read from the crystal graphs, Propositions \ref{Bi(n-,n+) otimes Bnatural 1} -- \ref{Bi(n-,n+) otimes Bnatural 5} can be reformulated as follows: \begin{prop}\label{tensor rule in terms of icrystal data} Let $M$ be a $\Ui$-module with an $\imath$crystal basis $\clB_M$, and $b \in \clB_M$. For each $j \in \{ i,\tau(i) \}$, set $\wti_j := \wti_j(b)$ and $\beta_j := \beta_j(b)$.
Then, we have the following: \begin{align} \begin{split} &\beta_i(b \otimes b_{-1}) = \begin{cases} \beta_i+1 & \IF \beta_{\tau(i)} = 0 \AND \beta_i = \beta_{\tau(i)}+\wti_i-s_i, \\ \beta_i & \OW, \end{cases} \\ &\beta_{\tau(i)}(b \otimes b_{-1}) = \begin{cases} 0 & \IF \beta_{\tau(i)} = 0, \\ \beta_{\tau(i)}-1 & \IF \beta_{\tau(i)} > 0, \end{cases} \\ &\Btil_i(b \otimes b_{-1}) = \begin{cases} b \otimes b_0 & \IF \beta_{\tau(i)} = 0 < \beta_i = \beta_{\tau(i)}+\wti_i-s_i, \\ \frac{1}{\sqrt{2}} b \otimes b_0 & \IF \beta_{\tau(i)} = 0 = \beta_i = \beta_{\tau(i)}+\wti_i-s_i, \\ \frac{1}{\sqrt{2}}(\Btil_i b \otimes b_{-1} + b \otimes b_0) & \IF \beta_{\tau(i)} = 0 < \beta_i \neq \beta_{\tau(i)}+\wti_i-s_i, \\ \Btil_i b \otimes b_{-1} & \OW, \end{cases} \\ &\Btil_{\tau(i)}(b \otimes b_{-1}) = \begin{cases} 0 & \IF \beta_{\tau(i)} \leq 1, \\ \frac{1}{\sqrt{2}} \Btil_{\tau(i)} b \otimes b_{-1} & \IF \beta_{\tau(i)} = 2 \AND \beta_{\tau(i)}(\Btil_{\tau(i)} b) = 0, \\ \Btil_{\tau(i)} b \otimes b_{-1} & \OW, \end{cases} \end{split} \nonumber \end{align} \begin{align} \begin{split} &\beta_i(b \otimes b_0) = \begin{cases} 0 & \IF \beta_i = 0, \\ \beta_i-1 & \IF \beta_i > 0, \end{cases} \\ &\beta_{\tau(i)}(b \otimes b_0) = \begin{cases} \beta_{\tau(i)}+2 & \IF \beta_i = 0 \AND \beta_i \neq \beta_{\tau(i)}+\wti_i-s_i, \\ \beta_{\tau(i)}+1 & \OW, \end{cases} \\ &\Btil_i(b \otimes b_0) = \begin{cases} 0 & \IF \beta_i \leq 1, \\ \frac{1}{\sqrt{2}} \Btil_i b \otimes b_0 & \IF \beta_i = 2 \AND \beta_i(\Btil_i b) = 0, \\ \Btil_i b \otimes b_0 & \OW, \end{cases} \\ &\Btil_{\tau(i)}(b \otimes b_0) = \begin{cases} \frac{1}{\sqrt{2}}(b \otimes b_{-1} + b \otimes b_1) & \IF \beta_i = 0 = \beta_{\tau(i)} \AND \beta_i=\beta_{\tau(i)}+\wti_i-s_i, \\ \frac{1}{\sqrt{2}}(\Btil_{\tau(i)}b \otimes b_{0} + b \otimes b_1) & \IF \beta_i = 0 < \beta_{\tau(i)} \AND \beta_i=\beta_{\tau(i)}+\wti_i-s_i, \\ b \otimes b_1 & \IF \beta_i = 0 \AND \beta_i \neq \beta_{\tau(i)}+\wti_i-s_i, \\ \frac{1}{\sqrt{2}} 
b \otimes b_{-1} & \IF \beta_{\tau(i)} = 0 < \beta_i \neq \beta_{\tau(i)}+\wti_i-s_i, \\ b \otimes b_{-1} & \IF \beta_{\tau(i)} = 0 < \beta_i=\beta_{\tau(i)}+\wti_i-s_i, \\ \Btil_{\tau(i)} b \otimes b_0 & \OW, \end{cases} \end{split} \nonumber \end{align} \begin{align} \begin{split} &\beta_i(b \otimes b_1) = \beta_i+1, \\ &\beta_{\tau(i)}(b \otimes b_1) = \beta_{\tau(i)}, \\ &\Btil_i(b \otimes b_1) = \begin{cases} \frac{1}{\sqrt{2}} b \otimes b_0 & \IF \beta_i = 0 \AND \beta_i=\beta_{\tau(i)}+\wti_i-s_i, \\ b \otimes b_0 & \IF \beta_i = 0 \AND \beta_i \neq \beta_{\tau(i)}+\wti_i-s_i, \\ \Btil_i b \otimes b_1 & \OW, \end{cases} \\ &\Btil_{\tau(i)}(b \otimes b_1) = \Btil_{\tau(i)}b \otimes b_1. \end{split} \nonumber \end{align} \end{prop} \begin{cor}\label{icrystal structure of a crystal; a = -1} Let $M$ be an integrable $\U$-module with a crystal basis $\clB_M$. Then, the values $\beta_i(b), \beta_{\tau(i)}(b)$ are defined for all $b \in \clB_M$. Also, $\Btil_i,\Btil_{\tau(i)}$ preserve $\clL_M$. \end{cor} \begin{proof} Since $M$ can be embedded into a direct sum of $V_{\natural}^{\otimes N}$ for various $N > 0$, the assertion follows inductively from Lemma \ref{icrystal basis of the trivial module a=-1} and Proposition \ref{tensor rule in terms of icrystal data} by identifying $\clB(0) \otimes \clB_M$ with $\clB_M$. \end{proof} \begin{rem}\normalfont Let $M$ be as before. The $\beta_j(b),\Btil_j b$ for $j \in \{ i,\tau(i) \}$, $b \in \clB_M$ can be inductively calculated by Proposition \ref{tensor rule in terms of icrystal data}. Explicit formulas for them will be given later. \end{rem}
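As an elementary sanity check of the combinatorics above, the closed formula of Corollary \ref{icrystal structure of a crystal; a = 0} can be evaluated mechanically. The following Python sketch is purely illustrative (it is ours, not part of any cited implementation): it encodes the rule $\Btil_i b = \Ftil_i b$ if $\vphi_i(b) > \vphi_{\tau(i)}(b)$ and $\Btil_i b = \Etil_{\tau(i)} b$ otherwise, on the crystal $\clB(m,n)$ of the example in the $a_{i,\tau(i)} = 0$ subsection, where $\vphi_i(b_{k,l}) = n-l$ and $\vphi_{\tau(i)}(b_{k,l}) = m-k$.

```python
# Illustrative sketch (ours): the action of B~_i on the crystal B(m,n),
# following the rule of Corollary "icrystal structure of a crystal; a = 0":
#   B~_i b_{k,l} = F~_i b_{k,l} = b_{k,l+1}      if phi_i = n-l > phi_{tau(i)} = m-k,
#   B~_i b_{k,l} = E~_{tau(i)} b_{k,l} = b_{k-1,l}  otherwise (0 if k-1 < 0).
def btil_i(m, n, k, l):
    """Return (k', l') with B~_i b_{k,l} = b_{k',l'}, or None if the result is 0."""
    if n - l > m - k:                       # phi_i(b) > phi_{tau(i)}(b): apply F~_i
        return (k, l + 1) if l + 1 <= n else None
    else:                                   # otherwise: apply E~_{tau(i)}
        return (k - 1, l) if k - 1 >= 0 else None

# Tabulate the (m,n) = (2,3) case displayed in the example:
graph = {(k, l): btil_i(2, 3, k, l) for k in range(3) for l in range(4)}
```

Running this reproduces the displayed $(m,n)=(2,3)$ graph, e.g.\ $\Btil_i b_{2,3} = b_{1,3}$ and $\Btil_i b_{0,1} = 0$.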
\section{Introduction} In their seminal paper \citet{abrams:mtd03} developed a simple ODE model, \begin{equation}\label{e:i1} \dot{u}=(1-u)u^p-Au(1-u)^p, \end{equation} to help understand language competition and the decline in the number of people who speak such historic languages as Welsh, Quechua, and Scottish Gaelic. We will henceforth label \eref{e:i1} as the AS model (see \autoref{f:ASCompartmentModel} for a cartoon representation of this compartment model). The underlying assumptions for this model are that all speakers are monolingual, and the population is highly connected with no spatial or social structure. In equation \eref{e:i1} $u$ represents the proportion of the population which speak language $U$. If $v$ is the proportion which speak language $V$, since all speakers are monolingual, $v=1-u$. The parameter $p>0$ measures \textit{volatility}. The case $p=1$ is a neutral situation, where transition probabilities from one language to another depend linearly on local language densities. If $p>1$ there is a larger than neutral resistance to changing the language (low volatility), and if $p<1$ there is a lower than neutral resistance to changing the language (high volatility). Experimentally, it is estimated that $p=1.31\pm0.25$. The parameter $A>0$ represents the affinity of the general population towards one language or the other. In linguistics terminology the parameter $A$ can be used to represent the \textit{prestige} associated with a particular language. Assume $p>1$, so the volatility is low. The fixed points $u=0$ (language $V$ is preferred) and $u=1$ (language $U$ is preferred) are stable, while $u=B/(1+B)$ with $B=A^{1/(p-1)}$ is unstable. If $A<1$ and $u(0)=0.5$ (both languages are initially equally preferred), then $u(t)\to1$ as $t\to+\infty$, so the population has an affinity for language $U$. Or, language $U$ has more prestige in the general population. 
On the other hand, if $A>1$ and $u(0)=0.5$, then $u(t)\to0$ as $t\to+\infty$, so language $V$ has more prestige in the whole population. \begin{figure}[ht] \begin{center} \includegraphics{ASCompartmentModel} \caption{(color online) The compartment model associated with AS model \eref{e:i1}. The variable $u$ represents the proportion of the population which speaks language $U$, and $v$ is the proportion which speaks language $V$. It is assumed $u+v=1$.} \label{f:ASCompartmentModel} \end{center} \end{figure} As pointed out by \citet{mira:isa05}, the monolingual assumption implies the two languages are so dissimilar that conversation is practically impossible between the two competing language groups. These authors extend the AS model to allow for languages which are similar enough for there to be bilingual speakers. The bilingual population subgroup satisfies $b=1-u-v$, with $0\le b\le 1$. The model becomes, \begin{equation}\label{e:i2} \begin{split} \dot{u}&=(1-k)(1-u)(1-v)^p-Au(1-u)^p\\ \dot{v}&=(1-k)A(1-v)(1-u)^p-v(1-v)^p, \end{split} \end{equation} where $0<k<1$ represents the ease of bilingualism. In particular, $k=0$ means that conversation is not possible between monolingual speakers, and $k=1$ implies $U=V$. The larger the value of $k$, the more similar are the two languages. If $k=b=0$, then model \eref{e:i2} reduces to model \eref{e:i1}. An analysis of the model \eref{e:i2} is provided in \citep{otero:aas13,colucci:cie14}. An agent-based model associated with the AS model \eref{e:i1} when $p=1$, \begin{equation}\label{e:i1a} \dot{u}=(1-A)u(1-u), \end{equation} is considered by \citet{stauffer:mas07}. In particular, on a lattice of dimension $d$ an individual is assumed to feel the influence of $2d$ nearest neighbors. When $A>1$ the agent-based model results are qualitatively similar to those associated with the solution of \eref{e:i1a}. As expected, the results differ when the macroscopic model fails, $A=1$.
It is not clear how the two models compare when $A<1$. An agent-based model is also considered by \citet{vazquez:abm10}. In the fully connected case the dynamics of the associated mean-field model are equivalent to those for the model \eref{e:i1}. The AS model has been extended to networks. Each node of the network corresponds to a group whose dynamics are governed by the AS model, and then the dynamics between groups satisfy some other rule. \citet{amano:gda14} collected and analyzed world-wide data taking into account such things as geographical range size, speaker population size, and speaker growth rate (i.e., changes in the number of speakers) of the world's languages, and assessed interrelations among these three components to understand how they contribute to shaping extinction risk in languages. The role of population density and how it affects the interaction rates among groups is discussed by \citet{juane:uat19} in the context of language shift in Galicia, which is a bilingual community in northwest Spain. They model the problem by looking at equations \eref{e:i2} on a network, with the strength of the interactions between nodes depending on the population density. The model for $j=1,\dots,n$ is, \begin{equation}\label{e:i3} \begin{split} \dot{u}_j&=(1-k_j)(1-u_j)(1-v_j)^p-A_ju_j(1-u_j)^p+K_j\left(\overline{u}-u_j\right)\\ \dot{v}_j&=(1-k_j)A_j(1-v_j)(1-u_j)^p-v_j(1-v_j)^p+K_j\left(\overline{v}-v_j\right). \end{split} \end{equation} Here $\overline{y}$ represents the average of the set $\{y_j\}$. The positive parameter $K_j$ is assumed to be a strictly increasing function of the population density. \citet{franco:sme17} follow a similar strategy, except they assume the nonlinearities are of Lotka-Volterra type. Taking a different approach, \citet{yun:tpo16} assume a diffusion process to take into account spatial effects. \citet{fujie:amo13} and \citet{zhou:mce19} consider the problem of competition among more than two languages.
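Returning to the single-group model \eref{e:i1}: its bistable low-volatility behavior is easy to confirm numerically. The following Python sketch is ours (illustrative only; the forward-Euler step size and time horizon are arbitrary choices, not taken from any of the cited works).

```python
# Forward-Euler integration of the AS model (illustrative sketch):
#   du/dt = (1 - u) u^p - A u (1 - u)^p,  with low volatility p = 2.
def as_final_state(u0, A, p=2.0, dt=0.01, steps=20_000):
    """Integrate from u(0) = u0 and return u at time t = dt * steps."""
    u = u0
    for _ in range(steps):
        u += dt * ((1.0 - u) * u**p - A * u * (1.0 - u)**p)
    return u

# With u(0) = 0.5, language U wins for A < 1 and dies out for A > 1,
# since the unstable fixed point B/(1+B), B = A^{1/(p-1)}, crosses 1/2 at A = 1.
u_low_prestige_V = as_final_state(0.5, A=0.8)    # close to 1
u_high_prestige_V = as_final_state(0.5, A=1.25)  # close to 0
```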
In this paper we consider the language competition problem on a network under the assumption of low volatility, $p>1$. For ease we will primarily work with $p=2$, but our experience is that other values of $p>1$ do not affect the results qualitatively. We will assume there is no bilingual subpopulation (see \citep{juane:uat19,mira:tio11,otero:aas13} for some work in this area under the assumption of a single group). This may be an unrealistic assumption in terms of language; however, it is less so if one assumes language $U$ actually refers to those who have some type of religious affiliation, and language $V$ represents those who do not \citep{abrams:dos11}. We will assume the existence of $n$ distinct population groups, and let $0\le u_j\le 1$ represent the proportion of those in group $j$ who speak language $U$ ($v_j=1-u_j$ speak language $V$). For each $j=1,\dots,n$ our model equation is a natural extension of the compartment model illustrated in \autoref{f:ASCompartmentModel}, \begin{equation}\label{e:i4} \dot{u}_j=\left(\sum_{k=1}^n\vI_{jk}u_k^p\right)\cdot(1-u_j) -A_j\left(\sum_{k=1}^n\vI_{jk}(1-u_k)^p\right)\cdot u_j, \end{equation} where $\vI_{jk}\ge0$. We call the matrix $\vI=\left(\vI_{jk}\right)$ the influence matrix, and the term $\vI_{jk}$ represents the influence group $k$ has on group $j$ through the between-group reaction rate. If we think of the system \eref{e:i4} as a compartment model, then the term $\vI_{jk}u_k^p$ is the rate constant associated with the influence that the $U$ speakers in group $k$ have on the $V$ speakers in group $j$, and $\vI_{jk}(1-u_k)^p$ is the rate constant associated with the influence that the $V$ speakers in group $k$ have on the $U$ speakers in group $j$. Clearly, when $n=1$ the system \eref{e:i4} collapses to the AS model \eref{e:i1}. We now compare the systems \eref{e:i3} and \eref{e:i4}.
In the case of no bilingual speakers the system \eref{e:i3} collapses to, \begin{equation}\label{e:i5} \dot{u}_j=(1-u_j)u_j^p-A_ju_j(1-u_j)^p+K_j\left(\overline{u}-u_j\right). \end{equation} The systems \eref{e:i4} and \eref{e:i5} have the feature that the on-site dynamics are the same as those for the AS model. However, the coupling between groups is different; in particular, the model \eref{e:i5} assumes that group $j$ is influenced by all of the other groups, whereas the model \eref{e:i4} allows for each group to be isolated from some of the other groups. Under the assumption that each external group has an equal influence on a given group, $\vI_{jj}=1$ and $\vI_{jk}=K_j/(n-1)$ for all $k\neq j$, the system \eref{e:i4} becomes, \begin{equation}\label{e:i5aa} \dot{u}_j=(1-u_j)\left[u_j^p+K_j\overline{u_{\neq j}^p}\right]-A_ju_j\left[(1-u_j)^p+ K_j\overline{(1-u)_{\neq j}^p}\right], \end{equation} where we use the notation, \[ \overline{f_{\neq j}}=\frac1{n-1}\sum_{k\neq j}f_k. \] The nonlinear coupling term for the model \eref{e:i5aa} is clearly very different from the linear coupling term associated with the model \eref{e:i5}. It is an open question whether this functional difference leads to a qualitative difference in the dynamics. A simple model such as \eref{e:i1} can also be used to model opinion propagation in a population in which it is assumed that people have either opinion $U$ or opinion $V$, where we think of $V$ as being ``not $U$''. \citet{marvel:emc12}, hereafter referred to as MS, provide a model similar to \eref{e:i2} in which it is assumed there are three distinct groups: those who hold opinion $U$, those who hold opinion $V$, and the remaining who are undecided, \[ \begin{split} \dot{u}&=(1-u-v)u-uv\\ \dot{v}&=(1-u-v)v-uv. \end{split} \] The underlying assumption in this model is that in order for one who initially holds opinion $U$ to eventually hold opinion $V$ (or vice-versa), the person first must become undecided.
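For concreteness, the right-hand side of the influence-matrix model \eref{e:i4} can be evaluated in vectorized form. This is an illustrative sketch (the function and variable names are our own); the checks below confirm that $u=0$ and $u=1$ are equilibria for any influence matrix, and that $n=1$ with $\vI_{11}=1$ recovers the single-group AS dynamics:

```python
import numpy as np

def rhs_influence(u, I, A, p=2):
    """Right-hand side of the network model (e:i4).

    u : (n,) array of proportions speaking language U
    I : (n, n) influence matrix with I[j, k] >= 0
    A : scalar, or (n,) array of prestige parameters A_j
    """
    gain = (I @ u**p) * (1.0 - u)                   # V speakers converting to U
    loss = np.asarray(A) * (I @ (1.0 - u)**p) * u   # U speakers converting to V
    return gain - loss
```

The gain term collects the rate constants $\vI_{jk}u_k^p$ acting on the $V$ speakers in group $j$, and the loss term collects $\vI_{jk}(1-u_k)^p$ acting on the $U$ speakers, exactly as in the compartment interpretation of \eref{e:i4}.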
\citet{wang:bam16} extended the MS model to allow for several competing opinions. The MS model was extended to networks by \citet{bujalski:cac18}, and the extended model was studied using dynamical systems techniques. \citet{tanabe:cdo13} proposed and analyzed an interesting opinion formation model (hereafter labelled TM) in which it was assumed that the population itself breaks down into two groups: congregators, and contrarians. In contrast, the MS model implicitly assumes the entire population is filled with congregators. One conclusion of the TM model is that if a large enough proportion of the population is contrarian, then no majority opinion will be achieved. This is in contrast to the conclusion of those models in which it is assumed there are only congregators, as here a majority opinion is always obtained. The TM model was later refined by \citet{eekhoff:ofd19}, and the new model allowed for the effects of peer pressure, and incorporated the influence of zealots. From a qualitative perspective the mean-field models used for opinion dynamics and language death have many similarities. Thus, although we frame our results using the formulation associated with language death, they are also directly applicable to mean-field opinion formation models. In this paper we are primarily interested in the existence and stability of spatial structures for the network system \eref{e:i4}. We assume the groups have been arranged on a square lattice. The interactions on this lattice are nearest-neighbor (NN) only. Our experience is that from a qualitative perspective the NN interactions can be expanded without substantively changing the solution behavior as long as the interactions are still somewhat spatially localized (the Implicit Function Theorem provides the theoretical justification). Moreover, there will be no preferential distinction in the reaction rates, $\vI_{jk}=\vI_{kj}$. This is a case study, so we have not fully explored a large set of networks. 
That work will be left for a future paper. Our goal here is not to do an exhaustive study for all types of influence matrices. Instead, we simply want to get a sense of what is possible for a given type of network. For this lattice configuration we start by considering the existence and stability of fronts and pulses for the system \eref{e:i4}. A front is a solution for which $u_{jk}=U_j$, and $U_j=0$ (or $U_j=1$) for $1\le j\le n_0$, and $U_j=1$ (or $U_j=0$) for $j\ge n_0+\ell$ and some $\ell\ge1$. In other words, to the left of $n_0$ language $V$ is spoken, and to the right of $n_0+\ell$ language $U$ is spoken. A pulse is a solution for which $U_j=0$ for $j\le n_0$ and $j\ge n_0+\ell$, and $U_j>0$ for $n_0<j<n_0+\ell$. In other words, on the full lattice there is a stripe of language $U$ speakers who are surrounded by a group of $V$ speakers. We will consider when fronts can travel, which implies that language $U$ is invading language $V$, or vice-versa. We will also consider when pulses can grow or shrink. A growing pulse can be thought of as the concatenation of two fronts traveling in opposing directions, which implies that language $U$ eventually takes over the entire network. A shrinking pulse eventually disappears, which means that language $U$ has gone extinct. As we will see, the prestige associated with speaking $U\,(A<1)$ or $V\,(A>1)$ plays a central role in the analysis. We will conclude with a case study for a spot, which is a contiguous group of sites with $u_{jk}>0$ surrounded by $u_{jk}=0$: an island of $U$ in a sea of $V$. \vspace{2mm}\noindent\textbf{Acknowledgements.} This material is based upon work supported by the US National Science Foundation under Grant No. DMS-1809074 (PGK). \section{The model on a square lattice} As already stated, we consider the dynamics of a square lattice with nearest-neighbor interactions only. Here $u_{jk}$ will represent the proportion of the population at site $(j,k)$ who speak language $U$.
We will henceforth assume that the prestige associated with language $U$ is uniform throughout the lattice, $A_{jk}=A$. It is an interesting problem in its own right to allow for a spatially inhomogeneous distribution of the prestige and see how it affects the prevalent dynamics. Moreover, we will assume $p=2$. Our numerical experiments indicate that from a qualitative perspective the results presented herein only need $p>1$. Under these assumptions the model \eref{e:i4} is, \[ \begin{split} \dot{u}_{jk}&=\left[\epsilon_0u_{jk}^2+ \epsilon_1\left(u_{j+1,k}^2+u_{j-1,k}^2+u_{j,k+1}^2+u_{j,k-1}^2\right)\right](1-u_{jk})\\ &\quad -A\left[\epsilon_0(1-u_{jk})^2+ \epsilon_1\left((1-u_{j+1,k})^2+(1-u_{j-1,k})^2+(1-u_{j,k+1})^2+(1-u_{j,k-1})^2\right)\right]u_{jk}. \end{split} \] Here $1\le j,k\le n$, and we assume in the model that at the edge of the square there are Neumann boundary conditions, e.g., $u_{n+1,k}=u_{nk}$. The parameter $\epsilon_0>0$ is the on-site interaction rate, and the parameter $\epsilon_1>0$ is the nearest-neighbor interaction rate. Using the notation for the discrete Laplacian, \[ \Delta_{\rmd\rmi\rms}f_{jk}=f_{j+1,k}+f_{j-1,k}+f_{j,k+1}+f_{j,k-1}-4f_{jk}, \] the above ODE takes the more compact form, \begin{equation}\label{e:2a2d} \begin{split} \dot{u}_{jk}&=\left(\epsilon_0+4\epsilon_1\right)u_{jk}(1-u_{jk})\left[(1+A)u_{jk}-A\right]\\ &\quad+2A\epsilon_1u_{jk}\Delta_{\rmd\rmi\rms}u_{jk}+ \epsilon_1\left[1-(1+A)u_{jk}\right]\Delta_{\rmd\rmi\rms}u_{jk}^2. \end{split} \end{equation} If we assume that the interactions between neighbors are strong, i.e., $\epsilon_1\gg1$, then upon setting $R=\epsilon_0+4\epsilon_1\gg1$ we have the limiting continuum model, \begin{equation}\label{e:42d} \partial_tu=R u(1-u)\left[(1+A)u-A\right]+(1+A)u(1-u)\Delta u+\left[1-(1+A)u\right]\left|\nabla u\right|^2. \end{equation} Here $\Delta$ represents the Laplacian, and $\nabla$ is the gradient operator.
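The algebraic reduction to the compact form \eref{e:2a2d} is easy to get wrong, so it is worth verifying numerically. The sketch below (helper names are our own; Neumann boundaries are implemented via edge padding, so each site always sees four neighbors) evaluates both the expanded nearest-neighbor form and the compact form on a random lattice and checks that they agree:

```python
import numpy as np

def lap(f):
    """Discrete Laplacian with Neumann boundary conditions (edge padding)."""
    g = np.pad(f, 1, mode='edge')
    return g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:] + g[1:-1, :-2] - 4.0 * f

def rhs_expanded(u, e0, e1, A):
    """Expanded nearest-neighbor form of the lattice model."""
    g = np.pad(u, 1, mode='edge')
    S = g[2:, 1:-1]**2 + g[:-2, 1:-1]**2 + g[1:-1, 2:]**2 + g[1:-1, :-2]**2
    Sc = ((1 - g[2:, 1:-1])**2 + (1 - g[:-2, 1:-1])**2
          + (1 - g[1:-1, 2:])**2 + (1 - g[1:-1, :-2])**2)
    return (e0 * u**2 + e1 * S) * (1 - u) - A * (e0 * (1 - u)**2 + e1 * Sc) * u

def rhs_compact(u, e0, e1, A):
    """Compact form (e:2a2d) written with the discrete Laplacian."""
    return ((e0 + 4 * e1) * u * (1 - u) * ((1 + A) * u - A)
            + 2 * A * e1 * u * lap(u)
            + e1 * (1 - (1 + A) * u) * lap(u**2))
```

The two forms agree pointwise, including at the boundary, because with exactly four neighbor terms the identity $\sum_k(1-u_k)^2=4-2\sum_ku_k+\sum_ku_k^2$ holds at every site.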
The continuum model incorporates the expected temporal dynamics associated with the original ODE model, but the coupling dynamics between sites are dictated by an effective nonlinear diffusion. The PDE is physical in the following sense: $u(x,y,t)=0$ implies $\partial_tu(x,y,t)\ge0$, and $u(x,y,t)=1$ implies $\partial_tu(x,y,t)\le0$. Note the diffusion coefficient vanishes when the entire population supports one language, $u=0$ or $u=1$. When studying the solution structure to the ODE \eref{e:2a2d}, or the accompanying PDE \eref{e:42d}, we will first focus on the existence and spectral stability of time-independent patterns which vary in one direction only. For the ODE \eref{e:2a2d} we will set $u_{j,k}(t)=U_{j}$ for all $j,k$, and $U_j$ will solve the 1D discrete model, \begin{equation}\label{e:2a1d} 0=\left(\epsilon_0+4\epsilon_1\right)U_j(1-U_j)\left[(1+A)U_j-A\right]+ 2\epsilon_1AU_j\Delta_jU_j+ \epsilon_1\left[1-(1+A)U_j\right]\Delta_jU_j^2, \end{equation} where $\Delta_jf_j=f_{j+1}+f_{j-1}-2f_j$. For the PDE \eref{e:42d} we will set $u(x,y,t)=U(x)$, and $U(x)$ will solve the nonlinear ODE, \begin{equation}\label{e:42dode} 0=R U(1-U)\left[(1+A)U-A\right]+(1+A)U(1-U)U''+\left[1-(1+A)U\right](U')^2,\quad {}^\prime=\frac{\rmd}{\rmd x}. \end{equation} In both cases we will be looking for fronts/pulses, which for the full system will correspond to stripes. These solutions act as transitions between regions where language $U$ is dominant and language $V$ is dominant. \begin{remark} Even though the derivation is dissimilar, the continuum model \eref{e:42d} is remarkably similar to the mean-field model associated with the square lattice as provided in \citep[equation~(48)]{vazquez:abm10}. The model \eref{e:42d} has the additional term, $\left[1-(1+A)u\right]\left|\nabla u\right|^2$; however, both models have the important feature that the diffusion coefficient is singular.
Dynamically, both systems have the feature that small domains tend to shrink, and large domains tend to grow, and the domains tend to evolve in a way that reduces the curvature of the boundary; see also further relevant discussion regarding the dynamics below. \end{remark} \section{Existence and spectral stability of stripes for the discrete model} A front solution to \eref{e:2a1d} satisfies $U_j=0\,(1)$ for $j\le\ell$, and $U_j=1\,(0)$ for $j\ge k$, where $1<\ell<k<n$. A pulse solution will satisfy $U_j=0\,(1)$ for $j\le\ell$ and $j\ge k$, and $U_j\sim1\,(0)$ for $\ell<j<k$. The transition between the states 0 and 1 will be monotone. A stripe solution to the full 2D model will be a pulse, or a concatenation of two fronts. As we will see, the concatenation of two fronts provides for a ``thicker'' stripe. In the same spirit, we can also discuss multi-stripes, which are the concatenation of pulses and/or fronts. \subsection{Existence: fronts} If $\epsilon_1=0$, the system uncouples, so a front can be constructed analytically. In this limit, for a front we set $U_j=0\,(1)$ for $j=1,\dots,\ell$, and $U_j=1\,(0)$ for $j=\ell+1,\dots,n$. We will refer to this front as the off-site front. Since each of the fixed points is stable for the scalar AS model, the front will be stable for the full system. By the Implicit Function Theorem the front will persist and be stable for $0<\epsilon_1\ll1$. We can concatenate these fronts when $\epsilon_1=0$ to form stable stripes, and then again apply the Implicit Function Theorem to show the existence and stability for small $\epsilon_1$. When $\epsilon_1=0$ we can construct another front by setting $U_j=0\,(1)$ for $j=1,\dots,\ell,\,U_{\ell+1}=A/(1+A)$, and $U_j=1\,(0)$ for $j=\ell+2,\dots,n$. Since all of the fixed points but the one at $j=\ell+1$ are stable for the scalar AS model, the front will be unstable for the full system with the linearization having one positive eigenvalue.
By the Implicit Function Theorem the front will persist and be unstable with one positive eigenvalue for $0<\epsilon_1\ll1$. We will refer to this front as the on-site front. When $\epsilon_1=0$ the off-site and on-site fronts exist for any value of $A$. However, once there is nontrivial coupling, we expect there will be an interval of $A$ values which contains $A=1$ for which the fronts will exist. In order to determine this interval we will do numerical continuation using the MATLAB package, Matcont \citep{dhooge:mam03}. Using this package will also allow us to numerically continue bifurcation points in parameter space. Setting \[ R_1=\frac{\epsilon_1}{\epsilon_0}, \] we will numerically explore the $(R_1,A)$-parameter space. Since we analytically know what happens for $R_1=0$, we are in a good position to use numerical continuation. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \includegraphics{SnakingR06}& \includegraphics{SnakingBoundary2DFront} \end{tabular} \caption{(color online) Numerically generated existence curves for stationary $V\to U$ fronts, i.e., $U_j=0$ to the left, and $U_j=1$ to the right. The curve is given for $R_1=0.6$ in the left panel. The solid (blue) curves denote a stable front, and the dashed (red) curves denote an unstable front. The saddle-node bifurcation points are given by black circles. The vertical axis is the $\ell^2$-norm. Regarding the boundary in the right panel, inside the two curves there is a stable stationary front, and outside the curves the front travels. The invading language is provided in the figure.} \label{f:2DFront} \end{center} \end{figure} For each fixed $R_1>0$ there will be an associated snaking diagram in the parameter $A$. For a particular example, consider the left figure in \autoref{f:2DFront}. The horizontal axis is $A$, and the vertical axis is the $\ell^2$-norm of the front.
In this figure the solid (blue) curve corresponds to a stable front (which is off-site when $A=1$), and the dashed (red) curve corresponds to an unstable front (which is on-site when $A=1$). These two curves meet at a saddle-node bifurcation point, which is denoted by an open black circle. We see there is an $A_-<1<A_+$ for which there are stable fronts for $A_-<A<A_+$, and no stationary fronts (at least as seen via numerical continuation) outside this interval. The values of $A_\pm$ depend on $R_1$. Each of the upward shifts of the stable and unstable branches corresponds to waveforms that are shifted by an integer number of lattice nodes to the left (hence the growth in norm). The right panel in \autoref{f:2DFront} shows the functions $A_\pm$ as a function of $R_1$. While we do not show it here, even in the limit $R_1\to+\infty$ the two curves do not converge to $1$; instead, we have $A_+(+\infty)\sim1.0082$, and $A_-(+\infty)\sim0.9918$. Inside the two curves, and for fixed $R_1$, there is a stable stationary front. \begin{figure}[ht] \begin{center} \includegraphics{WaveSpeed2DFrontNew} \caption{(color online) The numerically generated wave speed when $R_1=0.6$ is given by the solid (blue) curve. The (red) diamonds mark the boundary for the existence of the stationary front, $A_-\sim0.9395$ and $A_+\sim1.0644$, at which the wave speed is zero. The dashed black line is the wave-speed prediction of \eref{e:cpred} provided by the PDE model.} \label{f:2DFrontTravel} \end{center} \end{figure} \subsection{Existence: traveling waves}\label{s:32aa} Outside the two curves, $A_\pm(R_1)$, there is a traveling front. Traveling waves will be written as $U(x+ct)$, so $U_j(t)=U(j+ct)$.
Setting $\xi=x+ct$, the traveling wave satisfies the forward-backward difference equation, \[ \begin{split} cU'&= \left(\epsilon_0+4\epsilon_1\right)U(1-U)\left[(1+A)U-A\right]\\ &\qquad+ 2A\epsilon_1U\left[U(\xi+1)+U(\xi-1)-2U(\xi)\right]+ \epsilon_1\left[1-(1+A)U\right]\left[U(\xi+1)^2+U(\xi-1)^2-2U(\xi)^2\right]. \end{split} \] This system is solved using a variant of Newton's method (see \citep{hupkes:pfi11,hupkes:aon05,elmer:avo02} for the details). We consider in detail the case of $R_1=0.6$. Our experience is that from a qualitative perspective the value of $R_1$ is not particularly important. The numerical result is plotted in \autoref{f:2DFrontTravel}. The points $A_\pm$ are marked with a (red) diamond. It should be the case that at these points $c=0$; unfortunately, the fact that the linearization becomes singular at $A=A_\pm$ precludes good convergence of the algorithm near these points. Away from these bifurcation points there is good convergence of the numerical algorithm. Assuming $U_j=0$ to the left and $U_j=1$ to the right, if $c<0$ language $V$ invades language $U$, whereas if $c>0$ language $U$ invades language $V$. We see here that if $A>A_+\sim1.0644$, i.e., language $V$ has more prestige, then language $V$ invades language $U$. On the other hand, if $A<A_-\sim0.9395$, i.e., language $U$ has more prestige, then language $U$ invades language $V$. Note that the speed increases as the preferred language becomes more prestigious. Indeed, up to a small correction, and sufficiently far away from $A_\pm$, the wave speed follows the formal prediction of the continuum model, equation \eref{e:cpred}. The predicted curve, which is associated with the limit $R_1\to+\infty$, is given by the black dashed line. This result has been numerically verified for several different values of $R_1$. One can observe the nontrivial effect of discreteness in establishing an interval where the fronts can be stationary.
Indeed, the continuum model is found to possess vanishing speed at the isolated point of prestige balance, namely at $A=1$, while the discrete variant requires a detuning from this value in order to enable such a depinning from the vanishing speed setting. \begin{remark} It is an interesting exercise to consider the scaling law for the wave speed as $A\to A_\pm$; however, we have not pursued this. The interested reader should consult \citet{anderson:pau16,kevrekidis:pfu01} and the references therein for details as to how such a law may be derived. \end{remark} \subsection{Existence: pulses}\label{s:33aa} As is the case for fronts, if $\epsilon_1=0$ a pulse can be constructed analytically by setting $U_j=0\,(1)$ for $1\le j\le\ell$ and $k\le j\le n$, and $U_j=1\,(0)$ for $\ell<j<k$. Since each of the fixed points is stable for the scalar AS model, the pulse will be stable for the full system. By the Implicit Function Theorem the pulse will persist and be stable for $0<\epsilon_1\ll1$. We can concatenate these pulses when $\epsilon_1=0$ to form stable stripes, and then again apply the Implicit Function Theorem to show the existence and stability for small $\epsilon_1$. If so desired, we can also construct unstable pulses by setting $U_{\ell+1}=A/(1+A)$ when $\epsilon_1=0$, and then using the Implicit Function Theorem for small $\epsilon_1$. Assuming the background supports language $V$, the size of the pulse is the number of adjacent groups which support language $U$. For small $\epsilon_1$ the size is $k-\ell-1$. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \includegraphics{PulseLimitPointNew}& \includegraphics{PulsePic} \end{tabular} \caption{(color online) The left panel provides the numerically generated boundary of pulses of size 1 through 3. The boundary is given by a solid (blue) curve for the pulse of size 1, a (red) dashed curve for a pulse of size 2, and a (green) dashed-dotted curve for a pulse of size 3.
For a given pulse size, the pulse exists inside the two curves, and ceases to exist outside. The right panel gives an example of each pulse for $R_1=0.05$ and $A=1$. The pulse of size 1 is shown in the upper right panel, the pulse of size 2 in the middle right panel, and the pulse of size 3 in the lower right panel.} \label{f:PulseLimitPoint} \end{center} \end{figure} Numerically it is seen that if a pulse is of size 4 or larger, then it is realized as a concatenation of a $V\to U$ and a $U\to V$ stationary front. Consequently, the front dynamics completely determine the pulse dynamics. If the front is stationary, so is the pulse. If the front moves, so will the edge of the pulse. On the other hand, if the pulse is of size 1, 2, or 3, then the dynamics are not related to front dynamics. From a dynamics perspective the pulse ceases to exist after a saddle-node bifurcation occurs. Using Matcont, the bifurcation point can be traced in $(R_1,A)$-space. The results are presented in \autoref{f:PulseLimitPoint}. The pulse will exist inside the boundary curve. The cusp point is $(R_1,A)\sim(0.1996,0.6936)$ for the pulse of size 1, and $(R_1,A)\sim(0.6352,0.9246)$ for the pulse of size 2. For a pulse of size 3 the cusp point satisfies $R_1>31.77$ with $0<1-A\ll1$, and is not shown in the figure. Note that the cusp point converges to $A=1$ as the size of the pulse increases, and satisfies $A<1$. This is due to the fact that language $U$ has more prestige for $A<1$. If the background were language $U$ instead of language $V$, then the cusp point would satisfy $A>1$. From a dynamics perspective, if $R_1$ is less than the cusp point value, and if $A$ is small enough so that $(R_1,A)$ is below the bottom boundary curve, then the pulse will grow until it can be thought of as a concatenation of two fronts. Once this occurs the edges of the pulse will move according to the front dynamics. The pulse grows because the prestige for language $U$ is sufficiently large.
On the other hand, if $(R_1,A)$ is above the top boundary curve, then language $V$ has sufficient prestige so that the background language prevails, and the pulse simply disappears in finite time. See \autoref{f:PulseGrowDecay} for the corroborating results of a particular simulation. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \includegraphics{SizeOneDecay}& \includegraphics{SizeOneGrow}\\ \includegraphics{SizeTwoDecay}& \includegraphics{SizeTwoGrow} \end{tabular} \caption{(color online) The results of a numerical simulation of the full ODE \eref{e:2a2d} where the initial condition satisfies $u_{jk}(0)=u_{j\ell}(0)$ for all $k,\ell$. The color white represents language $V$, and the color black represents language $U$. In the top two figures $R_1=0.15$. For the top left figure $A=0.9$ (so the point is above the boundary for a pulse of size 1), and for the top right figure $A=0.6$ (so the point is below the boundary for a pulse of size 1). In both figures the initial condition for fixed $k$ is a small perturbation of a pulse of size 1. In the bottom two figures $R_1=0.5$. For the bottom left figure $A=1.0$ (so the point is above the boundary for a pulse of size 2), and for the bottom right figure $A=0.8$ (so the point is below the boundary for a pulse of size 2). In both figures the initial condition for fixed $k$ is a small perturbation of a pulse of size 2.} \label{f:PulseGrowDecay} \end{center} \end{figure} \begin{remark} If we assume a pulse of language $V$ sits on a background of language $U$, then we will get the same curves as in \autoref{f:PulseLimitPoint}. However, the dynamical interpretation leading to \autoref{f:PulseGrowDecay} will be reversed. In particular, if $A$ is too small the pulse will disappear, whereas if $A$ is sufficiently large it will grow. \end{remark} \subsection{Multiple stripes via pulse concatenation} We now consider the problem of concatenating individual pulses to form multi-pulses.
For the sake of convenience and without loss of generality we assume that the background consists of language $V$. As with the single pulses, each of the multi-pulses will be stable when $\epsilon_1=0$, and they will persist as stable structures for sufficiently small $\epsilon_1$. Typically, the construction of multi-pulses would involve a discussion of tail-tail interactions between individual pulses, and an application of the Hale-Lin-Sandstede method (e.g., see \citep{bramburger:lpi20,bramburger:sls20,sandstede:som98,hupkes:sop13,promislow:arm02,moore:rgr05,sandstede:sot02,parker:eas20} and the references therein). However, for the system under consideration this is less relevant, as the nonlinear coupling between adjacent sites renders the transition from one state to another super-exponential, instead of the exponential rates associated with linear coupling (see \autoref{f:SuperExponentialDecay} for a representative demonstration of this phenomenon). Consequently, to leading order one can think of pulses as being compactons (compactly supported structures), and fronts as being compactly supported transitions between two states.\footnote{We will return to this aspect in more detail in the continuum limit analysis, see \autoref{s:compacton}.} In this light, to leading order, and as long as the individual pulses are initially sufficiently separated, the dynamics associated with a concatenation of $k$ pulses is really just the dynamics of $k$ uncoupled pulses, each of which evolves according to the rules presented in \autoref{s:33aa}. \begin{figure}[ht] \begin{center} \includegraphics{SuperExponentialDecay} \end{center} \caption{(color online) The top panel provides the numerically generated pulse of size 3, say $u_3$, for $(R_1,A)=(0.9,1.0)$. The bottom panel shows $\ln(u_3)$. For $j\le 12$ and $j\ge22$ the numerically determined value of $\ln(u_3)$ is $-\infty$. If the decay to $u=0$ were exponential, the bottom panel would be linear in $j$.
Instead, it is concave down.} \label{f:SuperExponentialDecay} \end{figure} Since this is only a case study, we will focus on the example of the two-pulse, which in the $\epsilon_1=0$ limit we label as $j$-$k$-$\ell$. Here $j$ and $\ell$ refer to the sizes of the pulses which support language $U$, and the intervening pulse of size $k$ supports language $V$. For example, a 2-1-2 can be thought of when $\epsilon_1=0$ as the sequence of $u$-values, $\cdots00\mathbf{11}0\mathbf{11}00\cdots$. \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \includegraphics{MultiPulseLimitPoint}& \includegraphics{MultipulsePic} \end{tabular} \caption{(color online) The left panel provides the numerically generated boundary of the two-pulse 2-1-2 (solid (blue) curve) and 2-2-2 (dashed (red) curve). The two-pulse exists inside the two curves, and ceases to exist outside. The right panel gives an example of each pulse when $R_1=0.05$ and $A=1.0$. The 2-1-2 pulse is upper right, and the 2-2-2 pulse is lower right.} \label{f:MultiplePulseLimitPoint} \end{center} \end{figure} First consider the 2-1-2 pulse. The boundary for which this solution exists is presented as a solid (blue) curve in \autoref{f:MultiplePulseLimitPoint}. The cusp point is $(R_1,A)\sim(0.1478,1.2954)$. For $(R_1,A)$ values inside the curve the pulse will exist as a stationary solution and be stable, whereas outside the curve it does not exist. From a dynamical perspective, if $R_1<0.1478$, and $A$ is chosen so that the point lies below the lower boundary curve, then the solution will quickly become a single pulse of size 5 (i.e., the internal $0$ becomes a $1$), see the center panel of \autoref{f:Size212} with $(R_1,A)=(0.1,1.0)$. As discussed previously, a pulse of this size can be thought of as the concatenation of two fronts.
If the value of $A$ is such that the point is also below the lower boundary of the curve presented in the right panel of \autoref{f:2DFront}, so that $U$ invades $V$, then both fronts will travel, i.e., expand until the entire lattice is overtaken by language $U$ (see the left panel of \autoref{f:Size212} with $(R_1,A)=(0.1,0.6)$). On the other hand, if $R_1<0.1478$, and $A$ is chosen so that the point lies above the upper boundary curve, then the solution will quickly decay to a pulse of size zero, i.e., language $V$ is spoken over the entire lattice (see the right panel of \autoref{f:Size212} with $(R_1,A)=(0.1,1.7)$). \begin{remark} If $R_1>0.1478$, then the pulse no longer exists, and the fate of the perturbation is a more difficult question to answer. This task will be left for a future paper. \end{remark} \begin{figure}[ht] \begin{center} \begin{tabular}{ccc} \includegraphics{Size212Grow}& \includegraphics{Size212Static}& \includegraphics{Size212Decay} \end{tabular} \caption{(color online) The results of a numerical simulation of the full ODE \eref{e:2a2d} with $R_1=0.1$ where the initial condition satisfies $u_{jk}(0)=u_{j\ell}(0)$ for all $k,\ell$. The color white represents language $V$, and the color black represents language $U$. In all three panels the initial condition is a small perturbation of a 2-1-2 pulse. For the left panel $A=0.6$, for the middle panel $A=1.0$, and for the right panel $A=1.7$.} \label{f:Size212} \end{center} \end{figure} Next consider the 2-2-2 pulse. The boundary for which this solution exists is presented as a dashed (red) curve in \autoref{f:MultiplePulseLimitPoint}. The cusp point is $(R_1,A)\sim(0.3515,0.9879)$. The dynamics associated with $(R_1,A)$ points chosen outside of the domain bounded by the curve are exactly as outlined above. For points below the curve the solution quickly becomes a single pulse of size 6, which again is the concatenation of two fronts.
Each front will travel, and $U$ will grow, if $A$ is sufficiently small. For points above the curve the solution again quickly decays to a pulse of size zero. Finally, consider the 2-$k$-2 pulse for any $k\ge3$. Here we find this is a true concatenation of two pulses of size 2, so the boundary curve is given by the dashed (red) curve in \autoref{f:PulseLimitPoint}. Moreover, the dynamics of this pulse are initially governed by the dynamics associated with a pulse of size 2 (see the bottom two panels of \autoref{f:PulseGrowDecay}). While we do not present the corroborating details here, we now have the following rule-of-thumb. If we start with a two-pulse of size $j$-$k$-$\ell$, and if $k\ge3$, then the resulting dynamics will initially be independently governed by those associated with the pulse of size $j$ and pulse of size $\ell$. The individual pulses ``see'' each other only if the gap between the two is one or two adjacent sites. Indeed, this rule holds for any concatenation of pulses. As long as the distance between adjacent pulses is at least 3 sites, the existence boundary curve is exactly that associated with each individual pulse which makes up the entire multi-pulse. Moreover, the dynamics are governed by those associated with the single pulse until the distance between individual pulses is reduced to one or two sites. \subsection{Spectral stability} We have shown that stable fronts and pulses exist for small $\epsilon_1$ for the 1D model \eref{e:2a1d}. We now remove the assumption that $\epsilon_1$ is small, and assume that a stable front/pulse exists for \eref{e:2a1d}. The spectrum for the associated linearized self-adjoint operator, $\calL_{1\rmD}$, is then strictly negative, so \begin{equation}\label{e:2a11} \langle\calL_{1\rmD}v_j,v_j\rangle<0. \end{equation} We now consider the spectral stability for the original 2D model \eref{e:2a2d}. The self-adjoint linearized operator has the form, \[ \calL_{2\rmD}=\calL_{1\rmD}+2(1+A)\epsilon_1U_j(1-U_j)\Delta_k.
\] Using a Fourier decomposition for the eigenfunctions in the transverse direction, \[ v_{jk}\mapsto v_j\rme^{\rmi\xi k},\quad -\pi\le\xi<\pi, \] we find, \[ \calL_{2\rmD}v_{jk}=\left[\calL_{1\rmD} -4(1+A)\epsilon_1\left(1-\cos(\xi)\right)U_j(1-U_j)\right]v_j\rme^{\rmi\xi k}. \] Since the second term in the sum is a nonpositive operator, by using the inequality \eref{e:2a11} we can conclude that \[ \langle\calL_{2\rmD}v_{jk},v_{jk}\rangle<0. \] Consequently, all the eigenvalues must be strictly negative, so the stable front/pulse for the 1D problem is transversely stable for the 2D problem. \section{Existence and spectral stability of stripes for the continuum model} We now consider the existence and spectral stability of solutions to the continuum model \eref{e:42d}. \subsection{Existence: compactons}\label{s:compacton} The existence problem is settled by finding solutions to the nonlinear ODE \eref{e:42dode}. Recalling $R=\epsilon_0+4\epsilon_1$, under the assumption that neither language is more prestigious, $A=1$, there exists the exact compacton solution, \[ U_\rmc(x)=\frac12\left[1+\cos\left(\sqrt{\frac{R}2}\,x\right)\right]. \] In writing this solution there is the implicit understanding that the compacton is continuous with $U_\rmc(x)\equiv0$ or $U_\rmc(x)\equiv1$ outside some finite spatial interval. Of course, any spatial translation of the compacton is also a solution. Not only do these compactons define compactly supported pulses, they also define fronts connecting $u=0$ to $u=1$. One front satisfies $U_\rmc(x)=0$ for $x\le-\pi\sqrt{2/R}$, and $U_\rmc(x)=1$ for $x\ge0$ (of course, this front can be translated). Another front satisfies $U_\rmc(x)=1$ for $x\le0$, and $U_\rmc(x)=0$ for $x\ge\pi\sqrt{2/R}$ (again, this front can be translated). Note that the width of the front/pulse depends upon the reaction rate, $R$. 
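The compacton profile can be checked symbolically against the steady-state form of the 1D equation \eref{e:4}. The following is a brief sketch (not the paper's own code) using \texttt{sympy}:

```python
# Sketch: verify symbolically that the compacton solves the steady-state
# 1D equation u_t = R u(1-u)[(1+A)u - A] + (1+A)u(1-u)u_xx + [1-(1+A)u](u_x)^2
# when A = 1 (neither language is more prestigious).
import sympy as sp

x = sp.symbols('x', real=True)
R = sp.symbols('R', positive=True)
A = 1

U = sp.Rational(1, 2)*(1 + sp.cos(sp.sqrt(R/2)*x))  # the compacton profile
Ux, Uxx = sp.diff(U, x), sp.diff(U, x, 2)

# steady-state residual on the interior interval
res = R*U*(1 - U)*((1 + A)*U - A) + (1 + A)*U*(1 - U)*Uxx + (1 - (1 + A)*U)*Ux**2
print(sp.simplify(res))  # simplifies to 0
```

The same computation with $U_\rmc(x)=\frac12[1+\cos(\sqrt{2R/3}\,x)]$ verifies the explicit solution quoted in the remark below for $p=3$.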
\begin{remark} There is also an explicit compacton solution when $p=3$, \[ U_\rmc(x)=\frac12\left[1+\cos\left(\sqrt{\frac{2R}3}\,x\right)\right]. \] Numerically, we see compactons for any $p>1$. \end{remark} \subsection{Traveling waves} If $A\neq1$, numerical simulations indicate that the compacton fronts will travel at a constant speed which depends upon $A$. Moreover, the simulations suggest that the shape of the front at a fixed time is roughly that of the compacton for $A=1$. In order to derive an approximate analytic expression for the wavespeed we plug $U_\rmc(x+ct)$ into the PDE \eref{e:4}, multiply the resultant equation by $\partial_xU_\rmc(x+ct)$, and then integrate over the domain where the front is nonconstant. Doing all this leads to the following predictions for the wave-speed, \begin{equation}\label{e:cpred} V\to U,\,\,c=-\frac{\sqrt{2R}}{\pi}(A-1);\quad U\to V,\,\,c=\frac{\sqrt{2R}}{\pi}(A-1). \end{equation} The notation $j\to k$ corresponds to the front which has value $j$ for $x\ll0$ and value $k$ for $x\gg0$. See \autoref{f:WaveSpeed} for the comparison of the theoretical prediction with the results of a numerical simulation of the PDE \eref{e:4}. Numerical simulations indicate that these are good predictions for a relatively large range of $A$ for the 1D PDE model; recall the relevant discussion also in~\autoref{f:2DFrontTravel}. Moreover, we find that for $R$ sufficiently large, and away from the saddle-node bifurcation points, these are also good predictions for the wave-speed for the discrete model. \begin{remark} If $A<1$, so that language $U$ is preferred, the front will move so that language $U$ invades language $V$. On the other hand, if $A>1$, so that $V$ is preferred, $V$ will invade $U$. The standing compacton which exists for $A=1$ is then seen as a transition between these two invasion fronts. 
\end{remark} \begin{figure}[ht] \begin{center} \includegraphics{WaveSpeed} \caption{(color online) The numerically generated wave speed for the $V\to U$ front. The solid (red) curve corresponds to the analytic prediction, and the (blue) circles are the approximate wave speed derived from a numerical simulation of the PDE \eref{e:4} with $R=8$ using the standard second-order finite difference schemes to approximate the spatial derivatives.} \label{f:WaveSpeed} \end{center} \end{figure} \subsection{Spectral stability: one dimension}\label{s:ssod} Let us now consider the spectral stability of these compactons. The 1D version of the PDE \eref{e:42d} is, \begin{equation}\label{e:4} \partial_tu=R u(1-u)\left[(1+A)u-A\right]+(1+A)u(1-u)\partial_x^2u+\left[1-(1+A)u\right](\partial_xu)^2. \end{equation} Writing $u=U_\rmc+v$, when $A=1$ the linearized problem for $v$ is, \begin{equation}\label{e:4a} \partial_tv=2\partial_x\left[U_\rmc(1-U_\rmc)\partial_xv\right]+g(U_\rmc)v, \end{equation} where, \[ g(U_\rmc)=R(-6U_\rmc^2+6U_\rmc-1)+ 2(1-2U_\rmc)\partial_x^2U_\rmc-2(\partial_xU_\rmc)^2. \] Without loss of generality assume the solution in question is the $V\to U$ front, i.e., $U_\rmc(x)=0$ for $x\le-\pi\sqrt{2/R}$, and $U_\rmc(x)=1$ for $x\ge0$. Outside the interval $[-\pi\sqrt{2/R},0]$ the linearized PDE \eref{e:4a} becomes an ODE, \[ \partial_tv=-Rv. \] The associated spectral problem is, \[ \lambda v=-Rv\quad\leadsto\quad\lambda=-R,\,\,\mathrm{or}\,\,v\equiv0. \] Because of the degeneracy associated with the diffusion coefficient, the essential spectrum for the operator comprises a single point. On the other hand, on the interval $[-\pi\sqrt{2/R},0]$, upon using the expression for the compacton, the associated spectral problem is the singular Sturm-Liouville problem, \begin{equation}\label{e:4b} \frac12\partial_x\left[\sin^2\left(\sqrt{\frac{R}2}\,x\right)\partial_xv\right]- \frac{R}4\left(3\cos^2\left(\sqrt{\frac{R}2}\,x\right)-1\right)v=\lambda v. 
\end{equation} If $\lambda\neq-R$, then for the sake of continuity we need Dirichlet boundary conditions at the endpoints, \[ v\left(-\sqrt{\frac2{R}}\,\pi\right)=v(0)=0. \] Regarding the interior problem, $x\in[-\pi\sqrt{2/R},0]$, due to spatial translation a solution when $\lambda=0$ is $v_0(x)=\partial_xU_\rmc$. Since the front is monotone, this eigenfunction is of one sign. Consequently, by classical Sturmian theory $\lambda=0$ is the largest eigenvalue, so the wave is spectrally stable. Now consider the concatenation of fronts. Since each front is a compacton, there will be no tail-tail interaction leading to small eigenvalues. Consequently, each front will add another eigenvalue associated with the eigenvalue of the original front. The associated eigenfunction will simply be a spatial translation of the original front's eigenfunction. In particular, if there are $N$ fronts, then $\lambda=0$ will be a semi-simple eigenvalue with geometric multiplicity $N$. The multiplicity follows from the fact that each front can be spatially translated without affecting any of the other fronts. Suppose we have two fronts, so the solution is a flat-topped compacton. As the size of the top is nonzero, there will be two zero eigenvalues, and the rest of the spectrum will be negative. At the limit of a zero length top we have the pulse compacton, \[ U_\rmc(x)=\frac12\left[1+\cos\left(\sqrt{\frac{R}2}\,x\right)\right],\quad -\sqrt{\frac2{R}}\,\pi\le x\le\sqrt{\frac2{R}}\,\pi. \] Since the diffusion is zero at $x=0$, so that the eigenvalue problem is still degenerate, we can still think of this solution as the concatenation of two fronts, a left front and a right front. The eigenvalue at zero will have geometric multiplicity two. One eigenfunction will be $\partial_xU_\rmc$ of the left front, and zero elsewhere, while another will be $\partial_xU_\rmc$ of the right front, and zero elsewhere. 
Using linearity, we note that one eigenfunction is the sum of these two, which is precisely the expected spatial translation eigenfunction of the full compacton, $\partial_xU_\rmc$. \subsection{Spectral stability: two dimensions} A steady-state front solution to the 2D model \eref{e:42d} when $A=1$ is the compacton, $u(x,y)=U_\rmc(x)$. As we saw in \autoref{s:ssod}, for the 1D model \eref{e:4} the original front is spectrally stable with a simple zero eigenvalue, and a concatenation of $N$ fronts is spectrally stable with a semi-simple zero eigenvalue of multiplicity $N$. Let $U(x)$ represent a spectrally stable concatenation of $N$ fronts, which is a stripe pattern. Consider the spectral stability of the stripes for the full 2D problem. Denote the 1D self-adjoint linearization in \eref{e:4a} about the concatenation as $\calL_1$. The linearization about this striped pattern for \eref{e:42d} is, \[ \calL_2=\calL_1+2U(1-U)\partial_y^2, \] which is also self-adjoint. Using the Fourier transform to write candidate eigenfunctions, \[ w(x,y)=v(x)\rme^{\rmi\xi y}, \] we have, \[ \calL_2w=\left(\calL_1-2\xi^2U(1-U)\right)v\rme^{\rmi\xi y}. \] We already know $\calL_1$ is a nonpositive self-adjoint operator. Since $\xi^2U(1-U)\ge0$, we can therefore conclude $\calL_2$ is a nonpositive self-adjoint operator. Consequently, there are no positive eigenvalues, so the stripe pattern inherits the spectral stability of the concatenation. In particular, it is spectrally stable. \section{Spots: a case study} We now consider the existence and spectral stability of spots. A spot is a contiguous set of sites on the lattice which all share language $U$ (or $V$). All other sites share language $V$ (or $U$). For example, a $2\times3$ spot will be a rectangle of height 2 and length 3, so there will be 6 total sites which share language $U$. When $\epsilon_1=0$ a stable spot of any size and shape can be formed. 
By the Implicit Function Theorem the spot will persist and be spectrally stable for small $\epsilon_1$. Our goal here is to construct a snaking diagram for this spot, and then briefly discuss the dynamics associated with small perturbations of a spot. \subsection{Existence} \begin{figure}[ht] \begin{center} \begin{tabular}{cc} \includegraphics{SnakingDiagram1x1New}& \includegraphics[trim = 0 -25 0 0]{SnakingDiagram1x1Solns} \end{tabular} \caption{(color online) The numerically generated snaking diagram for a square lattice of size $20\times20$ when $R_1=0.1$ and starting with a $1\times1$ spot. The figure on the left is the snaking diagram, and the figures on the right provide stable solutions arising from the diagram. The notation on the vertical axis, $|u|^2$, represents the square of the $\ell^2$-norm of the solution. The upper right panel has $(A,|u|^2)\sim(0.5,0.6139)$, the next one down has $(A,|u|^2)\sim(0.9276,7.0127)$, the third one down has $(A,|u|^2)\sim(1.0449,16.8568)$, and the bottom panel on the right has $(A,|u|^2)\sim(1.0139,32.2287)$. Each of these points is marked by a large (blue) filled dot on the snaking diagram. For the snaking diagram stable solutions are marked by a (blue) square, and unstable solutions are marked with a (red) dot. While we do not show it here, the growth in terms of the total number of contiguous groups holding language $U$ appears to have no upper bound.} \label{f:SnakingDiagram1x1-3x1} \end{center} \end{figure} First consider the snaking diagram associated with a steady-state solution. We will start with the configuration at $R_1=0$ of a $1\times1$ square of $U$ sitting on a background of $V$. The results are plotted in \autoref{f:SnakingDiagram1x1-3x1}. The figure on the left gives the snaking diagram, and some stable solutions arising from the snaking are given on the right. For the snaking diagram stable solutions are marked with a (blue) square, and unstable solutions are marked with a (red) dot. 
The initial $1\times1$ configuration grows seemingly without bound. While we do not provide all the pictures here, as the norm of the solution grows the shape of the contiguous $U$ speakers for a stable solution is either a square or something that has roughly a circular geometry. Regarding the transition from stable to unstable solutions, it is generally not a saddle-node bifurcation, e.g., at the transition point the number of unstable eigenvalues will go from zero to two. Moreover, within the curve of unstable solutions there are additional bifurcations where the number of positive eigenvalues either increases or decreases. The solution structure is rich, but we leave a detailed look at it for a different paper. \begin{remark} We should point out that as in the case of single stripes being concatenated to form more complicated stripe patterns, we can concatenate single spots to form more complicated structures. All that is required for each spot to essentially be an isolated structure is for the spots to be sufficiently separated. Our experience is that a minimal separation distance between two adjacent spots of three sites is enough. \end{remark} \subsection{Dynamics} \begin{figure}[ht] \begin{center} \includegraphics{TwoClumpEvolveA060} \caption{(color online) The time evolution of an $A=0.8165$ solution when $A=0.6$. The panel on the far right shows the evolution of the square of the $\ell^2$-norm of the solution.} \label{f:TwoClumpEvolveA060} \end{center} \end{figure} Now let us consider the dynamical implications of the snaking diagram. In particular, we shall look at the effect of varying $A$ for fixed $R_1=0.1$. Recall that for stripes we saw in \autoref{s:32aa} that outside the snaking diagram traveling waves would appear; in particular, if $A<A_-$ then language $U$ would invade language $V$, whereas if $A>A_+$, then language $V$ would invade language $U$. 
Consequently, we expect a similar behavior for spots; in particular, a spot will grow or die as a function of the prestige. For a particular example we start with a stable solution arising from the $1\times1$ initial configuration when $A=0.8165$. The square of the $\ell^2$-norm of this solution is roughly $11$. This solution is contained in the small stable branch shown in \autoref{f:SnakingDiagram1x1-3x1} with $A_-\sim0.8139$ and $A_+\sim0.8192$. \begin{figure}[ht] \begin{center} \includegraphics{TwoClumpEvolveA075} \caption{(color online) The time evolution of an $A=0.8165$ solution when $A=0.75$. The panel on the far right shows the evolution of the square of the $\ell^2$-norm of the solution.} \label{f:TwoClumpEvolveA075} \end{center} \end{figure} First suppose that $A=0.6$. When looking at the snaking diagram, we see that there are no stable steady-state solutions with this value of $A$. The time evolution associated with this initial condition is provided in \autoref{f:TwoClumpEvolveA060}. Of particular interest is the evolution of the square of the norm in the far right panel. We see that the norm is growing up to at least $t=50$. While we do not show it here, the norm continues to grow until all the nodes share the common language $U$. The growth in language $U$ is manifested in the square becoming larger and larger as those nodes containing $V$ at the boundary between $U$ and $V$ switch to language $U$. \begin{figure}[ht] \begin{center} \includegraphics{TwoClumpEvolveA090} \caption{(color online) The time evolution of an $A=0.8165$ solution when $A=0.9$. The panel on the far right shows the evolution of the square of the $\ell^2$-norm of the solution.} \label{f:TwoClumpEvolveA090} \end{center} \end{figure} Next suppose that $A=0.75$. When looking at the snaking diagram, we see there is a (stable) steady-state solution with this value of $A$ and which also has a larger norm. 
The time evolution associated with this initial condition is provided in \autoref{f:TwoClumpEvolveA075}. Of particular interest is the evolution of the square of the norm in the far right panel, which in this case achieves a steady-state. The final state at $t=50$ corresponds to the first stable solution on the snaking diagram where $A=0.75$, and whose norm is greater than $11$. Language $U$ invades language $V$ until a steady-state configuration is reached. \begin{figure}[ht] \begin{center} \includegraphics{TwoClumpEvolveA110} \caption{(color online) The time evolution of an $A=0.8165$ solution when $A=1.1$. The panel on the far right shows the evolution of the square of the $\ell^2$-norm of the solution.} \label{f:TwoClumpEvolveA110} \end{center} \end{figure} For the next example suppose that $A=0.9$. When looking at the snaking diagram, we see there is a steady-state solution with this value of $A$ and which also has a smaller norm. The time evolution associated with this initial condition is provided in \autoref{f:TwoClumpEvolveA090}. Of particular interest is the evolution of the square of the norm in the far right panel, which in this case also achieves a steady-state. The final state at $t=50$ corresponds to the first stable solution on the snaking diagram where $A=0.9$, and whose norm is less than $11$. Language $V$ invades language $U$ until a steady-state configuration is reached. For the last example suppose that $A=1.1$. When looking at the snaking diagram, we see there is no steady-state solution with this value of $A$ and which also has a smaller norm. The time evolution associated with this initial condition is provided in \autoref{f:TwoClumpEvolveA110}. Of particular interest is the evolution of the square of the norm in the far right panel, which in this case goes to zero. Language $V$ invades language $U$ until the entire lattice shares the common language $V$. 
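The invasion mechanism seen in these lattice experiments can also be illustrated directly with the 1D continuum model \eref{e:4}. The following is a minimal explicit finite-difference sketch (the grid, time step, and the choice $A=0.9$ are our own illustrative assumptions, not the scheme or parameters behind the figures): starting from the $V\to U$ compacton front, for $A<1$ the front moves left, i.e., language $U$ invades language $V$.

```python
# Sketch (illustrative parameters only): evolve the 1D model
#   u_t = R u(1-u)[(1+A)u - A] + (1+A)u(1-u)u_xx + [1-(1+A)u](u_x)^2
# from the V -> U compacton front and track the front position.
import numpy as np

R, A = 8.0, 0.9          # A < 1: language U is preferred
N = 401
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]
dt = 2e-4                # well below the explicit-diffusion stability limit

k = np.sqrt(R/2.0)       # V -> U front: u = 0 left of -pi/k, u = 1 right of 0
u = np.where(x <= -np.pi/k, 0.0,
             np.where(x >= 0.0, 1.0, 0.5*(1.0 + np.cos(k*x))))

def front_position(u):
    """x-value where u first crosses 1/2 (linear interpolation)."""
    i = int(np.argmax(u >= 0.5))
    return x[i-1] + dx*(0.5 - u[i-1])/(u[i] - u[i-1])

front_t0 = front_position(u)
for _ in range(10000):   # integrate to t = 2 with explicit Euler
    ux = np.gradient(u, dx)
    uxx = (np.roll(u, -1) - 2.0*u + np.roll(u, 1))/dx**2
    uxx[0] = uxx[-1] = 0.0                     # constant far-field states
    u = u + dt*(R*u*(1.0 - u)*((1.0 + A)*u - A)
                + (1.0 + A)*u*(1.0 - u)*uxx
                + (1.0 - (1.0 + A)*u)*ux**2)
front_t1 = front_position(u)
print(front_t0, front_t1)  # the front moves left: U invades V for A < 1
```

Taking $A>1$ instead reverses the direction of motion, consistent with the rule-of-thumb discussed next.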
In conclusion, we have the following rule-of-thumb if the initial configuration is near a steady-state solution. If the value of $A$ is decreased, so that the prestige of language $U$ increases, then a spot of $U$ in a sea of $V$ will grow until a stable steady-state associated with that value of $A$ is achieved. If no such steady-state exists, then eventually the entire lattice will share language $U$. On the other hand, if the value of $A$ is increased, so that the prestige of language $V$ increases, then a spot of $U$ in a sea of $V$ will shrink in size until a stable steady-state associated with that value of $A$ is achieved. If no such steady-state exists, then eventually the entire lattice will share language $V$. While we do not show it here, this rule was manifested in every numerical simulation that we performed. It would be most interesting to translate this observation into a precise mathematical statement. This is left as an interesting direction for future work. \section{Conclusions \& Future Challenges} We have derived an ODE model of language dynamics on a square lattice which is a natural generalization of the AS language model on one lattice site. The model can also be used to discuss, e.g., the spread of an opinion through the lattice, or the growth/decay of religious observance on the lattice. We also looked at the continuum limit of the ODE, which is a PDE which features a degenerate diffusion term. We numerically studied the existence of special spatial structures on the lattice; primarily, stripes and spots. Through a combination of numerics and analysis we analyzed the dynamics associated with small perturbations of these spatial structures. Finally, we provided rules-of-thumb to help understand how languages die and grow in terms of their prestige, and interaction with neighboring communities. As is already evident from the discussion above, there are numerous directions in this emerging field that are worthy of further study. 
Some already concern the model at hand. As highlighted earlier, features such as the bifurcation of traveling solutions from standing ones and their scaling laws, or the more precise identification of the discrete solutions and their tails from a mathematical analysis perspective, would be of interest. While it is unclear whether something analytical can be said about the bifurcation diagram of genuinely two-dimensional states such as spots, our numerical observations regarding the model dynamics formulate a well-defined set of conjectures regarding the fate of a spot when the prestige is decreased or increased that may be relevant to further explore mathematically. However, it would also be relevant to consider variations of the model. Here, as a first step, we chose to explore an ordered two-dimensional square lattice. However, it may be relevant to generalize the $\vI_{jk}$ to more complex networks and modified (influence or) ``adjacency matrices'' to explore their impact on the findings presented herein. As indicated herein, the role of near-neighbor interactions is expected to maintain some of the key features we considered; yet in a progressively connected world, the consideration of nonlocal, long-range interactions may be of interest in its own right. Another possibility is to insert a spatially heterogeneous prestige $A_{jk}$ and examine how its spatial variation may influence standing and traveling structures. There are numerous variants that can be considered thereafter, e.g., how does a local prestige variation interact with the traveling wave patterns explored herein? Such queries have been considered in other contexts where the interactions bear a linear component recently, e.g., see \citet{hoffman:esf17}, but have yet to be considered in a fully nonlinear setting such as the one herein. Such studies, as applicable, will be reported in future publications. \bibliographystyle{plainnat}
\section{Introduction} \label{intro} The classical Schwarz-Pick lemma \cite{Pic15} states that every holomorphic map from the unit disc $D$ of $\mathbb{C}$ into itself is distance-decreasing with respect to the Poincar\'e metric. This was later generalized by Ahlfors \cite{Ahl38} to holomorphic maps from the unit disc $D$ into a Riemannian surface with curvature bounded above by a negative constant. This lemma plays an important role in complex analysis and differential geometry, and has been extended to holomorphic maps between higher dimensional complex manifolds (cf. \cite{CCL79}, \cite{Che68}, \cite{Lu68}, \cite{Roy80}, \cite{Yau78}, etc.) and harmonic maps between Riemannian manifolds (cf. \cite{GH77}, \cite{She84}, etc.). An extremely useful generalization is Yau's Schwarz lemma \cite{Yau78}, which says that every holomorphic map from a complete K\"ahler manifold with Ricci curvature bounded from below by a constant $-K_1\leq 0$ into a Hermitian manifold with holomorphic bisectional curvature bounded from above by a constant $-K_2<0$ is distance-decreasing up to a constant $K_1 /K_2$. Later, Tosatti \cite{Tos07} generalized Yau's result to the Hermitian case. Recently, the authors in \cite{DRY19} extended this lemma to generalized holomorphic maps between pseudo-Hermitian manifolds. The main purpose of this paper is to generalize Yau's Schwarz lemma to two classes of generalized holomorphic maps between pseudo-Hermitian manifolds and Hermitian manifolds, which are called $(J, J^N)$-holomorphic maps (see Definition \ref{definition 3}) and $(J^N,J)$-holomorphic maps (see Definition \ref{definition 4}) respectively. By computing the Bochner formulas for these maps and using the maximum principle, we derive the following Schwarz type lemmas. 
\begin{theorem}\label{theorem 1.1} Let $(M^{2m+1},HM,J,\theta)$ be a complete pseudo-Hermitian manifold with pseudo-Hermitian Ricci curvature bounded from below by $-K_1\leq0$ and $\|A\|_{C^1}$ bounded from above where $A$ is the pseudo-Hermitian torsion. Let $(N^n,J^N,h)$ be a Hermitian manifold with holomorphic bisectional curvature bounded from above by $-K_2<0$. Then for any $(J,J^N)$-holomorphic map $f: M\rightarrow N$, we have \[f^*h\leq\frac{K_1}{K_2}G_\theta.\] In particular, if $K_1=0$, every $(J,J^N)$-holomorphic map from $M$ into $N$ is constant. \end{theorem} \begin{theorem}\label{theorem 1.2} Let $(N^n,J^N,h)$ be a complete K\"ahler manifold with Ricci curvature bounded from below by $-K_1\leq0$. Let $(M^{2m+1},HM,J,\theta)$ be a Sasakian manifold with pseudo-Hermitian bisectional curvature bounded from above by $-K_2<0$. Then for any $(J^N, J)$-holomorphic map $g: N\rightarrow M$, we have \[ g^*G_\theta \leq\frac{K_1}{K_2}h.\] In particular, if $K_1=0$, any $(J^N, J)$-holomorphic map is horizontally constant. \end{theorem} We remark that from Theorem \ref{theorem 1.1}, we can deduce a Liouville theorem and a little Picard theorem for basic CR functions (see Corollaries \ref{corollary 4.2} and \ref{corollary 4.3} for details). In Theorem \ref{theorem 1.2}, when $\dim_{\mathbb{C}} N=1$, the hypothesis of pseudo-Hermitian bisectional curvature on $M$ can be replaced by pseudo-Hermitian sectional curvature (see Corollary \ref{corollary 5.3}). As an application of Schwarz lemmas, we introduce the CR Carath\'eodory pseudodistance on CR manifolds, which is invariant under CR isomorphisms. Making use of the relationships between the Carath\'eodory pseudodistance and the CR Carath\'eodory pseudodistance, as well as the Schwarz lemma, we can give another Liouville theorem for $(J,J^N)$-holomorphic maps. \section{Preliminaries}\label{sec2} In this section, we will present some notations and facts of pseudo-Hermitian geometry and Hermitian geometry (cf. 
\cite{BG08}, \cite{DT07} for details). \begin{definition}\label{definition 1} Let $M$ be a real $2m+1$ dimensional orientable $C^\infty$ manifold. A CR structure on $M$ is a complex subbundle $T_{1,0}M$ of complex rank $m$ of the complexified tangent bundle $TM\otimes\mathbb{C}$ satisfying \begin{enumerate}[(i)] \item $T_{1,0}M\cap T_{0,1}M=\{0\}, T_{0,1}M=\overline{T_{1,0}M}$ \item $[\Gamma(T_{1,0}M),\Gamma(T_{1,0}M)]\subseteq\Gamma(T_{1,0}M)$ \label{definition 1 (ii)} \end{enumerate} Then the pair $(M,T_{1,0}M)$ is called a CR manifold. \end{definition} The complex subbundle $T_{1,0}M$ corresponds to a real subbundle of $TM$: \begin{align} HM=Re\{T_{1,0}M\oplus T_{0,1}M\} \end{align} which is called the Levi distribution and carries a natural complex structure $J$ defined by $J(X+\bar{X})=i(X-\bar{X})$ for any $X\in T_{1,0}M$. The CR structure may thus also be described by the pair $(HM,J)$. Let $(M, HM, J)$ and $(\tilde{M}, H\tilde{M}, \tilde{J})$ be two CR manifolds. A smooth map $f: M\rightarrow \tilde{M}$ is a CR map if it preserves the CR structures, that is, $df(HM)\subseteq H\tilde{M}$ and $df\circ J= \tilde{J}\circ df$ on $HM$. Moreover, if $f$ is a $C^\infty$ diffeomorphism, it is referred to as a CR isomorphism. Since $M$ is orientable and $HM$ is oriented by its complex structure $J$, it follows that there exists a global nowhere-vanishing 1-form $\theta$ such that $HM=\ker(\theta)$. Such a $\theta$ is called a pseudo-Hermitian structure on $M$. The Levi form $L_\theta$ of a given pseudo-Hermitian structure $\theta$ is defined by \begin{align} L_\theta(X,Y)=d\theta(X,JY) \end{align} for any $X,Y\in HM$. The integrability condition \ref{definition 1 (ii)} in Definition \ref{definition 1} implies that $L_\theta$ is $J$-invariant and symmetric. If $L_\theta$ is positive definite, then $(M,HM,J)$ is said to be strictly pseudoconvex. The quadruple $(M,HM,J,\theta)$ is called a pseudo-Hermitian manifold. 
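A standard example to keep in mind (cf. \cite{DT07}; we recall it here only as an illustration of the definitions above) is the Heisenberg group:

```latex
% Example: the Heisenberg group (cf. [DT07]).
Let $\mathbb{H}^m=\mathbb{C}^m\times\mathbb{R}$ with coordinates
$(z^1,\dots,z^m,t)$, endowed with the CR structure
\[
T_{1,0}\mathbb{H}^m=\mathrm{span}\left\{\frac{\partial}{\partial z^j}
+\sqrt{-1}\,\bar{z}^j\frac{\partial}{\partial t}\ :\ 1\le j\le m\right\}
\]
and the pseudo-Hermitian structure
\[
\theta=dt+\sqrt{-1}\sum_{j=1}^m\left(z^j\,d\bar{z}^j-\bar{z}^j\,dz^j\right).
\]
Then $\theta$ annihilates $T_{1,0}\mathbb{H}^m\oplus T_{0,1}\mathbb{H}^m$,
$d\theta=2\sqrt{-1}\sum_j dz^j\wedge d\bar{z}^j$, and the Levi form is
positive definite, so $\mathbb{H}^m$ is a strictly pseudoconvex
pseudo-Hermitian manifold. Its Reeb vector field is
$\xi=\partial/\partial t$ and its pseudo-Hermitian torsion vanishes,
so it is Sasakian.
```

This flat model plays the role in pseudo-Hermitian geometry that $\mathbb{C}^m$ plays in Hermitian geometry.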
Since $L_\theta$ is positive definite, there is a step-2 sub-Riemannian structure $(HM, L_\theta)$ on $M$, and all sections of $HM$ together with their Lie brackets span $T_xM$ at each point $x\in M$. We say that a Lipschitz curve $\gamma: [0, l]\rightarrow M$ is horizontal if $\gamma'(t)\in HM$ a.e. $t\in[0, l]$. For any two points $p, q\in M$, by the theorem of Chow-Rashevsky (cf. \cite{Cho02}, \cite{Ras38}), there always exist such horizontal curves joining $p$ and $q$. Consequently, we may define the so-called Carnot-Carath\'eodory distance: \begin{align*} d^M_{cc}(p, q)=\inf\{\int_0^l \sqrt{L_\theta(\gamma', \gamma')}\, dt |\ \ &\gamma: [0, l]\rightarrow M \ \text{is a horizontal curve with}\\ &\ \gamma(0)=p \ \text{and} \ \gamma(l)=q \} \end{align*} which induces a metric space structure on $(M, HM, L_\theta)$. For a pseudo-Hermitian manifold $(M,HM,J,\theta)$, there is a unique globally defined nowhere zero tangent vector field $\xi$ on $M$, called the Reeb vector field, such that $\theta(\xi)=1$, $\xi\rfloor d\theta=0$. Consequently there is a splitting of the tangent bundle $TM$ \begin{align} TM=HM\oplus L, \end{align} where $L$ is the trivial line bundle generated by $\xi$. Let $\pi_H:TM\rightarrow HM$ denote the natural projection morphism. Set \begin{align} G_\theta(X,Y)=L_\theta(\pi_HX, \pi_HY) \end{align} for any $X, Y\in TM$. We extend $J$ to an endomorphism of $TM$ by requiring that \begin{align} J\xi=0. \end{align} The $J$-invariance of $L_\theta$ implies that $G_\theta$ is also $J$-invariant. Since $G_\theta$ coincides with $L_\theta$ on $HM\times HM$, the Levi form $L_\theta$ can be extended to a Riemannian metric $g_\theta$ on $M$ by \begin{align} g_\theta=G_\theta+\theta\otimes\theta. \end{align} This metric is usually called the Webster metric. On a pseudo-Hermitian manifold, there is a canonical connection preserving the CR structure and the Webster metric, which is usually called the Tanaka-Webster connection. \begin{theorem}[(cf. 
\cite{DT07})]\label{theorem 2.1} Let $(M,HM,J,\theta)$ be a pseudo-Hermitian manifold with the Reeb vector field $\xi$ and Webster metric $g_\theta$. Then there is a unique linear connection $\nabla$ on $M$ satisfying the following axioms: \begin{enumerate}[(i)] \item $HM$ is parallel with respect to $\nabla$. \item $\nabla J=0$, $\nabla g_\theta=0$. \item The torsion $T_\nabla$ of $\nabla$ satisfies \begin{align*} T_\nabla(X,Y)=2d\theta(X,Y)\xi\ \text{and}\ T_\nabla(\xi,JX)+JT_\nabla(\xi,X)=0 \end{align*} for any $X,Y\in HM$. \end{enumerate} \end{theorem} The pseudo-Hermitian torsion, denoted by $\tau$, is a $TM$-valued $1$-form defined by $\tau(X)=T_\nabla(\xi,X)$ for any $X\in TM$. Set \begin{align} A(X,Y)=g_\theta(\tau(X),Y) \end{align} for any $X,Y\in TM$. A pseudo-Hermitian manifold is called a Sasakian manifold if $\tau\equiv0$. Note that the properties of $\nabla$ in Theorem \ref{theorem 2.1} imply that $\tau(T_{1,0}M)\subset T_{0,1}M$ and $A$ is a trace-free symmetric tensor field. Suppose that $(M^{2m+1},HM,J,\theta)$ is a real $2m+1$ dimensional pseudo-Hermitian manifold with the Webster metric $g_{\theta}$. Let $(e_i)^m_{i=1}$ be a unitary frame of $T_{1,0}M$ with respect to $g_{\theta}$ and $(\theta^i)^m_{i=1}$ be its dual frame. Then the ``horizontal component of $g_\theta$'' may be expressed as \begin{align} G_\theta=\sum_{i=1}^m\theta^i\theta^{\bar{i}}. 
\end{align} From \cite{W+78}, we know the following structure equations for the Tanaka-Webster connection: \begin{gather} d\theta=2\sqrt{-1}\sum_j\theta^j\wedge\theta^{\bar{j}}\\ d\theta^i=\sum_j\theta^j\wedge\theta^i_j+\theta\wedge\tau^i\label{2.10}\\ d\theta^i_j=\sum_k\theta^k_j\wedge\theta^i_k+\Pi^i_j\label{2.11}\\ \theta^i_j+\theta^{\bar{j}}_{\bar{i}}=0 \end{gather} where $\theta^i_j$ is the Tanaka-Webster connection 1-form with respect to $(e_i)_{i=1}^n$, $\tau^i=\sum_jA^i_{\bar{j}}\theta^{\bar{j}}=\sum_j g_\theta(\tau e_i, e_j)\theta^{\bar{j}}$, and \begin{align} \Pi^i_j=2\sqrt{-1}&\theta^i\wedge\tau^{\bar{j}}-2\sqrt{-1}\tau^i\wedge\theta^{\bar{j}}+\sum_{k,l}R^i_{jk\bar{l}}\theta^k\wedge\theta^{\bar{l}}\notag\\ &+\sum_k(W^i_{jk}\theta^k\wedge\theta-W^i_{j\bar{k}}\theta^{\bar{k}}\wedge\theta )\end{align} where $W^i_{jk}=A^{\bar{k}}_{j,\bar{i}}$, $W^i_{j\bar{k}}=A^i_{\bar{k},j}$, and $R^i_{jk\bar{l}}$ are the components of curvature tensor with respect to Tanaka-Webster connection. Set $R_{i\bar{j}k\bar{l}}=R^j_{ik\bar{l}}$, then we know that \begin{align} R_{i\bar{j}k\bar{l}}&=-R_{\bar{j}ik\bar{l}}=-R_{i\bar{j}\bar{l}k}\notag\\ R_{i\bar{j}k\bar{l}}&=R_{k\bar{j}i\bar{l}}=R_{k\bar{l}i\bar{j}} \end{align} Suppose $X=\sum_i X^i e_i$ and $Y=\sum_j Y^j e_j$ are two nonzero vectors in $T_{1,0}M$, then the pseudo-Hermitian bisectional curvature determined by $X$ and $Y$ is defined by \begin{align} \frac{\sum_{i, j, k, l}R_{i\bar{j}k\bar{l}}X^iX^{\bar{j}}Y^kY^{\bar{l}}}{(\sum_{i}X^iX^{\bar{i}})(\sum_{j}Y^jY^{\bar{j}})}. \end{align} If $X=Y$, the above quantity is referred to as the pseudo-Hermitian sectional curvature in the direction $X$ (cf. \cite{W+78}). The pseudo-Hermitian Ricci tensor is defined as \begin{align} R_{i\bar{j}}=\sum_kR_{k\bar{k}i\bar{j}}, \end{align} and thus the pseudo-Hermitian scalar curvature is given by \begin{align} R=\sum_iR_{i\bar{i}}. 
\end{align} Analogous to the Laplace operator in Riemannian geometry, there is a degenerate elliptic operator in CR geometry called the sub-Laplace operator. For a $C^2$ function $u:M\rightarrow \mathbb{R}$, $du$ is a smooth section of $T^*M$. Let $\nabla du$ be the covariant derivative of $du\in\Gamma(T^*M)$ with respect to the Tanaka-Webster connection. Then the sub-Laplace operator can be defined by \begin{align} \Delta_b u=tr_H(\nabla du)=\sum_i(u_{i\bar{i}}+u_{\bar{i}i}), \end{align} where $u_{i\bar{i}}=(\nabla du)(e_i, e_{\bar{i}})$. In \cite{CDRZ18}, the authors give a sub-Laplacian comparison theorem in pseudo-Hermitian geometry, which plays a role similar to that of the Laplacian comparison theorem in Riemannian geometry. \begin{lemma}\label{lemma 2.2} Let $(M^{2m+1},HM,J,\theta)$ be a complete pseudo-Hermitian manifold with pseudo-Hermitian Ricci curvature bounded from below by $-k\leq0$ and $\|A\|_{C^1}\leq k_1$ $(k_1\geq 0)$. Let $x_0$ be a fixed point in $M$. Then for any $x\in M$ which is not in the cut locus of $x_0$, there exists $C=C(m)$ such that \[\Delta_b \gamma(x)\leq C(1/\gamma+\sqrt{1+k+k_1+k_1^2})\] where $\gamma(x)$ is the Riemannian distance of $g_\theta$ between $x_0$ and $x$ in $M$, and $\|A\|_{C^1}=\max_{y\in M}\{\|A\|(y), \|\nabla A\|(y)\}$. Moreover, $\Delta_b \gamma$ is uniformly bounded from above when $\gamma\geq1$. \end{lemma} Let $N$ be a Hermitian manifold of complex dimension $n$. Let $(\eta_\alpha)_{\alpha=1}^n$ be a unitary frame field of $N$ and $(\omega^\alpha)_{\alpha=1}^n$ be its coframe field. Then the Hermitian metric $h$ of $N$ is given by \begin{align} h=\sum_\alpha\omega^\alpha\omega^{\bar{\alpha}}.
\end{align} It is well-known that there are connection 1-forms $(\omega^\alpha_\beta)$ such that \begin{gather} d\omega^\alpha=\sum_\beta\omega^\beta\wedge\omega^\alpha_\beta+\Omega^\alpha\label{2.20}\\ \omega^\alpha_\beta+\omega^{\bar{\beta}}_{\bar{\alpha}}=0\label{2.21} \end{gather} where \begin{gather} \Omega^\alpha=\frac{1}{2}\sum_{\beta, \gamma}T^\alpha_{\beta\gamma}\omega^\beta\wedge\omega^\gamma\label{2.22}\\ T^\alpha_{\beta\gamma}=-T^\alpha_{\gamma\beta}. \end{gather} Note that this connection is usually called the Chern connection. The curvature forms $(\Omega^\alpha_\beta)$ are defined by \begin{gather} \Omega^\alpha_\beta=d\omega^\alpha_\beta-\sum_\gamma\omega^\gamma_\beta\wedge\omega^\alpha_\gamma\label{2.24} \end{gather} and according to \eqref{2.21}, we have \begin{gather} \Omega^\alpha_\beta=-\Omega^{\bar{\beta}}_{\bar{\alpha}}=\sum_{\gamma, \delta}R^\alpha_{\beta\gamma\bar{\delta}}\omega^\gamma\wedge\omega^{\bar{\delta}}. \end{gather} We set $R_{\alpha\bar{\beta}\gamma\bar{\delta}}=R^\beta_{\alpha\gamma\bar{\delta}}$, then the skew-Hermitian symmetry of $\Omega^\alpha_\beta$ is equivalent to $R_{\alpha\bar{\beta}\gamma\bar{\delta}}=R_{\bar{\beta}\alpha\bar{\delta}\gamma}$. If $Z=\sum_\alpha Z^\alpha \eta_\alpha$ and $W=\sum_{\beta} W^\beta \eta_\beta$ are two tangent vectors in $T_{1,0}N$, then the holomorphic bisectional curvature determined by $Z$ and $W$ is defined by \begin{align} \frac{\sum_{\alpha\beta\gamma\delta}R_{\alpha\bar{\beta}\gamma\bar{\delta}}Z^\alpha Z^{\bar{\beta}} W^\gamma W^{\bar{\delta}}}{(\sum_{\alpha}Z^\alpha Z^{\bar{\alpha}})(\sum_{\beta}W^\beta W^{\bar{\beta}})}. \end{align} If $Z=W$, the above quantity is called the holomorphic sectional curvature in the direction $Z$. The Ricci tensor is defined as \begin{align} R_{\alpha\bar{\beta}}=\sum_\gamma R_ {\gamma\bar{\gamma}\alpha\bar{\beta}} \end{align} and the scalar curvature is given by \begin{align} S=\sum_{\alpha}R_{\alpha\bar{\alpha}}. 
\end{align} For a $C^2$ function $v: N\rightarrow\mathbb{R}$ on the Hermitian manifold $N$, the Laplacian of $v$ is defined as the trace of the Hessian of $v$ with respect to the Chern connection $\nabla^c$. In terms of the local frame fields $(\eta_\alpha)_{\alpha=1}^n$, it is a well-known fact that \begin{align} \Delta v=\sum_{\alpha}(v_{\alpha\bar{\alpha}}+v_{\bar{\alpha}\alpha})=2\sum_\alpha v_{\alpha\bar{\alpha}}, \end{align} where $ v_{\alpha\bar{\alpha}}=(\nabla^c dv)(\eta_\alpha, \eta_{\bar{\alpha}}).$ \section{Bochner formulas} This section derives the Bochner formulas for generalized holomorphic maps between pseudo-Hermitian manifolds and Hermitian manifolds. In pseudo-Hermitian geometry, there is an analogue of the holomorphic function on a complex manifold, called the CR function. \begin{definition}[(cf. \cite{DT07})] Let $(M,HM,J,\theta)$ be a pseudo-Hermitian manifold. A smooth function $f:M\rightarrow \mathbb{C}$ is said to be a CR function if $Zf=0$ for all $Z\in T_{0,1}M$. Moreover, $f$ is called a basic CR function if the CR function $f$ satisfies $df(\xi)=0$, where $\xi$ is the Reeb vector field. \end{definition} Recall that a smooth map between two complex manifolds is called holomorphic if its differential commutes with the complex structures at each point. Following this idea, one may define generalized holomorphic maps from pseudo-Hermitian manifolds to Hermitian manifolds as follows. \begin{definition}[(cf. \cite{GIP01} or \cite{CDRY17})] \label{definition 3} Let $(M,HM,J,\theta)$ be a pseudo-Hermitian manifold and $(N,J^N)$ be a complex manifold. A smooth map $f: (M,HM,J,\theta)\rightarrow(N,J^N)$ is called a $(J,J^N)$-holomorphic map if it satisfies \begin{align} df\circ J=J^N\circ df.\label{3.1} \end{align} \end{definition} \begin{remark} \begin{enumerate}[(i)] \item Suppose the target manifold $N$ is the complex plane $\mathbb{C}$ endowed with the canonical complex structure $J^{\mathbb{C}}$.
A smooth function $f:M\rightarrow \mathbb{C}$ is a $(J,J^{\mathbb{C}})$-holomorphic map if and only if it is a basic CR function, that is, $df(\xi)=0$ and $Zf=0$ for any $Z\in \Gamma(T_{0,1}M)$. \item The authors in \cite{CDRY17} discussed a Siu-Sampson type theorem for $(J,J^N)$-holomorphic maps. \end{enumerate} \end{remark} Let $f: (M,HM,J,\theta)\rightarrow(N,J^N,h)$ be a smooth map with $\dim_{\mathbb{R}} M=2m+1$ and $\dim_{\mathbb{C}}N=n$. Then, in terms of the local frames in Section \ref{sec2}, we can write $df$ as \begin{align} df=\sum_{A, B}f^A_B\theta^B\otimes \eta_A,\label{3.2} \end{align} where \[A=1, 2,\ldots, n, \bar{1}, \bar{2},\ldots, \bar{n}\] \[B=0, 1, 2,\ldots, m, \bar{1}, \bar{2},\ldots, \bar{m},\] \[\theta^0=\theta.\] Assume $f$ is $(J, J^N)$-holomorphic. Clearly, the condition \eqref{3.1} leads to $f^\alpha_{\bar{i}}=f^{\bar{\alpha}}_i=f^\alpha_0=f^{\bar{\alpha}}_0=0$ and \begin{align} f^*\omega^\alpha=\sum_if^\alpha_i\theta^i.\label{3.3} \end{align} For simplicity, we denote $\hat{\omega}^\alpha_\beta=f^*\omega^\alpha_\beta, \hat{T}^\alpha_{\beta\gamma}=f^*T^\alpha_{\beta\gamma}, \hat{\Omega}^\alpha_\beta=f^*\Omega^\alpha_\beta$, etc.
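For completeness, here is a short sketch of why \eqref{3.1} forces these components to vanish; it assumes the common convention (not stated explicitly above) that $J$ is extended to $TM$ by $J\xi=0$:

```latex
% Apply (3.1) first to the Reeb field, then to (1,0)-vectors:
\begin{align*}
  &J^N\big(df(\xi)\big)=df(J\xi)=0
   \ \Longrightarrow\ df(\xi)=0
   \ \Longrightarrow\ f^\alpha_0=f^{\bar{\alpha}}_0=0,\\
  &J^N\big(df(Z)\big)=df(JZ)=\sqrt{-1}\,df(Z)\quad (Z\in T_{1,0}M)
   \ \Longrightarrow\ df(Z)\in T_{1,0}N
   \ \Longrightarrow\ f^{\bar{\alpha}}_i=0,
\end{align*}
% and f^\alpha_{\bar{i}}=0 follows by complex conjugation, since df is real.
```

The first implication uses the injectivity of $J^N$ on $TN$; the second identifies $df(Z)$ as a $+\sqrt{-1}$-eigenvector of $J^N$.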
Taking the exterior derivative of \eqref{3.3} and using \eqref{2.10}, \eqref{2.20}, \eqref{2.22}, we get \begin{align} \sum_iDf^\alpha_i\wedge\theta^i=\frac{1}{2}\sum_{\beta,\gamma,i,j}\hat{T}^\alpha_{\beta\gamma}f^\beta_if^\gamma_j\theta^i\wedge\theta^j-\sum_if^\alpha_i\theta\wedge\tau^i\label{3.4} \end{align} where \begin{align} Df^\alpha_i=df^\alpha_i+\sum_\beta f^\beta_i\hat{\omega}^\alpha_\beta-\sum_jf^\alpha_j\theta^j_i=f^\alpha_{i0}\theta+\sum_k(f^\alpha_{ik}\theta^k+f^\alpha_{i\bar{k}}\theta^{\bar{k}}).\label{3.5} \end{align} From \eqref{3.4}, it follows that \begin{gather} f^\alpha_{i0}=f^\alpha_{i\bar{j}}=0\label{3.6}\\ f^\alpha_{ij}=f^\alpha_{ji}-\sum_{\beta,\gamma}\hat{T}^\alpha_{\beta\gamma}f^\beta_if^\gamma_j.\label{3.7} \end{gather} By \eqref{3.5} and \eqref{3.6}, we have \begin{align} df^\alpha_i=\sum_kf^\alpha_{ik}\theta^k+\sum_jf^\alpha_j\theta^j_i-\sum_\beta f^\beta_i\hat{\omega}^\alpha_\beta.\label{3.8} \end{align} Taking the exterior derivative of \eqref{3.8} and applying the structure equations \eqref{2.11} and \eqref{2.24}, one finds that \begin{align} \sum_k Df^\alpha_{ik}\wedge\theta^k=\sum_\beta f^\beta_i\hat{\Omega}^\alpha_\beta-\sum_j f^\alpha_j\Pi^j_i-\sum_k f^\alpha_{ik}\theta\wedge\tau^k\label{3.9} \end{align} where \[Df^\alpha_{ik}=df^\alpha_{ik}-\sum_j(f^\alpha_{ij}\theta^j_k-f^\alpha_{jk}\theta^j_i)+\sum_\beta f^\beta_{ik}\hat{\omega}^\alpha_\beta=f^\alpha_{ik0}\theta+\sum_j(f^\alpha_{ikj}\theta^j+f^\alpha_{ik\bar{j}}\theta^{\bar{j}}).\] Hence, \begin{align} f^\alpha_{ik\bar{j}}=\sum_lf^\alpha_lR^l_{ik\bar{j}}-\sum_{\beta,\gamma,\delta}f^\beta_if^\gamma_kf^{\bar{\delta}}_{\bar{j}}\hat{R}^\alpha_{\beta\gamma\bar{\delta}}.\label{3.10} \end{align} Set \begin{align} u=\sum_{\alpha,i}f^\alpha_if^{\bar{\alpha}}_{\bar{i}}.\label{3.11} \end{align} By \eqref{3.6}, the horizontal differential of $u$ is given by \begin{align*}
d_Hu=\sum_{\alpha,i,k}(f^\alpha_{ik}f^{\bar{\alpha}}_{\bar{i}}\theta^k+f^{\bar{\alpha}}_{\bar{i}\bar{k}}f^\alpha_i\theta^{\bar{k}}), \end{align*} thus, \[u_k=\sum_{\alpha,i}f^\alpha_{ik}f^{\bar{\alpha}}_{\bar{i}}.\] By computing $d_Hu_k$, we obtain \begin{align*} u_{k\bar{k}}=\sum_{\alpha,i}(f^\alpha_{ik\bar{k}}f^{\bar{\alpha}}_{\bar{i}}+|f^\alpha_{ik}|^2). \end{align*} Using \eqref{3.10}, we get \begin{align*} u_{k\bar{k}}=\sum_{\alpha,i}|f^\alpha_{ik}|^2+\sum_{\alpha,i,l}f^\alpha_lf^{\bar{\alpha}}_{\bar{i}}R^l_{ik\bar{k}}-\sum_{\alpha,\beta,\gamma,\delta,i}f^{\bar{\alpha}}_{\bar{i}}f^\beta_if^\gamma_kf^{\bar{\delta}}_{\bar{k}}\hat{R}^\alpha_{\beta\gamma\bar{\delta}}. \end{align*} Hence, we have the following lemma. \begin{lemma} Let $f: (M,HM,J,\theta)\rightarrow(N,J^N,h)$ be a $(J,J^N)$-holomorphic map. Then we have \begin{align} f^*h\leq u G_\theta,\label{2.12} \end{align} and \begin{align} \frac{1}{2}\Delta_bu=\sum_{\alpha,i,k}|f^\alpha_{ik}|^2+\sum_{\alpha,i,l}f^\alpha_lf^{\bar{\alpha}}_{\bar{i}}R_{i\bar{l}}-\sum_{\alpha,\beta,\gamma,\delta,i,k}f^{\bar{\alpha}}_{\bar{i}}f^\beta_if^\gamma_kf^{\bar{\delta}}_{\bar{k}}\hat{R}_{\beta\bar{\alpha}\gamma\bar{\delta}}.\label{3.13} \end{align} \end{lemma} In the rest of this section, we turn to generalized holomorphic maps from Hermitian manifolds to pseudo-Hermitian manifolds. \begin{definition} \label{definition 4} Let $(M, HM, J, \theta)$ be a pseudo-Hermitian manifold and $(N, J^N)$ be a complex manifold. A smooth map $g: N\rightarrow M$ is called a $(J^N, J)$-holomorphic map if it satisfies \begin{align} dg_H\circ J^N=J\circ dg_H,\label{3.14} \end{align} where $dg_H=\pi_H\circ dg$, $\pi_H: TM\rightarrow HM$ is the natural projection. Moreover, the $(J^N, J)$-holomorphic map $g$ is said to be horizontally constant if $dg_H\equiv 0$.
\end{definition} \begin{remark} Every $(J^N, J)$-holomorphic map $g: N\rightarrow M$ satisfying $dg(TN)\subseteq HM$, i.e., $dg\circ J^N=J\circ dg$, is constant. Indeed, if not, there exists a local vector field $e\in T^{1,0}N$ such that $dg(e)\neq 0$, so $HM \ni dg[e, \bar{e}]=[dg(e), dg(\bar{e})]\notin HM$, which leads to a contradiction. \end{remark} Suppose $(N, J^N)$ is a complex manifold and $(M, HM, J, \theta)$ is a pseudo-Hermitian manifold. Let $g : N\rightarrow M$ be a $(J^N, J)$-holomorphic map. Under the local frames in Section \ref{sec2}, we can express the differential of $g$ as \begin{align} dg=\sum_{A,B}g^B_A\omega^A\otimes e_B, \end{align} where $e_0=\xi$ and the values of $A, B$ are the same as those in \eqref{3.2}. Clearly, the condition \eqref{3.14} in Definition \ref{definition 4} is equivalent to \begin{align} g^i_{\bar{\alpha}}=g^{\bar{i}}_\alpha=0.\label{3.16} \end{align} From \eqref{3.14} and \eqref{3.16}, we have \begin{gather} g^*\theta^i=\sum_{\alpha} g^i_\alpha\omega^\alpha\label{3.17}\\ g^*\theta=\sum_{\alpha}(g^0_\alpha\omega^\alpha+g^0_{\bar{\alpha}}\omega^{\bar{\alpha}}). \end{gather} To simplify the notations, we set $\hat{\theta}^i_j=g^*\theta^i_j, \hat{A}^i_j=g^*A^i_j, \hat{R}^i_{jk\bar{l}}=g^*R^i_{jk\bar{l}}$, etc.
By taking the exterior derivative of \eqref{3.17} and using the structure equations in $M$ and $N$, we get \begin{align} \sum_{\alpha}Dg^i_\alpha\wedge\omega^\alpha+\sum_\alpha g^i_\alpha\Omega^\alpha-\sum_{\alpha,\beta, j}g^0_\alpha g^{\bar{j}}_{\bar{\beta}}\hat{A}^i_{\bar{j}}\omega^\alpha\wedge\omega^{\bar{\beta}}-\sum_{\alpha,\beta, j}g^0_{\bar{\alpha}}g^{\bar{j}}_{\bar{\beta}}\hat{A}^i_{\bar{j}}\omega^{\bar{\alpha}}\wedge\omega^{\bar{\beta}}=0,\label{3.19} \end{align} where \begin{align} Dg^i_\alpha=dg^i_\alpha-\sum_\gamma g^i_\gamma\omega^\gamma_\alpha+\sum_jg^j_\alpha\hat{\theta}^i_j=\sum_\beta (g^i_{\alpha\beta}\omega^{\beta}+ g^i_{\alpha\bar{\beta}}\omega^{\bar{\beta}}).\label{3.20} \end{align} Then \eqref{3.19} gives \begin{gather} g^i_{\alpha\beta}=g^i_{\beta\alpha}-\sum_\gamma g^i_\gamma T^\gamma_{\beta\alpha}\notag\\ g^i_{\alpha\bar{\beta}}=-\sum_j g^0_\alpha g^{\bar{j}}_{\bar{\beta}}\hat{A}^i_{\bar{j}}\label{3.21}\\ \sum_j (g^0_{\bar{\alpha}}g^{\bar{j}}_{\bar{\beta}}-g^0_{\bar{\beta}}g^{\bar{j}}_{\bar{\alpha}})\hat{A}^i_{\bar{j}}=0\notag \end{gather} Taking the exterior derivative of \eqref{3.20} and using the structure equations again, we obtain \begin{align} \sum_\beta Dg^i_{\alpha\beta}\wedge\omega^{\beta}+\sum_\beta Dg^i_{\alpha\bar{\beta}}\wedge\omega^{\bar{\beta}}=-\sum_\gamma g^i_{\gamma}\Omega^\gamma_\alpha+\sum_j g^j_\alpha g^*\Pi^i_j,\label{3.22} \end{align} where \begin{gather} Dg^i_{\alpha\beta}=dg^i_{\alpha\beta}-\sum_\gamma (g^i_{\alpha\gamma}\omega^\gamma_\beta+g^i_{\gamma\beta}\omega^\gamma_\alpha)+\sum_j g^j_{\alpha\beta}\hat{\theta}^i_j=\sum_\gamma (g^i_{\alpha\beta\gamma}\omega^\gamma+g^i_{\alpha\beta\bar{\gamma}}\omega^{\bar{\gamma}})\\ Dg^i_{\alpha\bar{\beta}}=dg^i_{\alpha\bar{\beta}}-\sum_\gamma (g^i_{\alpha\bar{\gamma}}\omega^{\bar{\gamma}}_{\bar{\beta}}+g^i_{\gamma\bar{\beta}}\omega^\gamma_\alpha)+\sum_j 
g^j_{\alpha\bar{\beta}}\hat{\theta}^i_j=\sum_\gamma(g^i_{\alpha\bar{\beta}\gamma}\omega^\gamma+g^i_{\alpha\bar{\beta}\bar{\gamma}}\omega^{\bar{\gamma}}). \end{gather} Comparing the $(1,1)$-components of \eqref{3.22}, we get \begin{align} g^i_{\alpha\beta\bar{\gamma}}=g^i_{\alpha\bar{\gamma}\beta}+\sum_\delta g^i_\delta R^\delta_{\alpha\beta\bar{\gamma}}-\sum_{j,k,l}g^j_\alpha g^k_\beta g^{\bar{l}}_{\bar{\gamma}}\hat{R}^i_{jk\bar{l}}-\sum_{j,k}(g^j_\alpha g^k_\beta g^0_{\bar{\gamma}}\hat{W}^i_{jk}+g^j_\alpha g^0_\beta g^{\bar{k}}_{\bar{\gamma}}\hat{W}^i_{j\bar{k}}).\label{3.25} \end{align} Set \begin{align} v=\sum_{i, \alpha}g^i_\alpha g^{\bar{i}}_{\bar{\alpha}}.\label{3.26} \end{align} By direct computation, we have \begin{align} \frac{1}{2}\Delta v=\sum_{i,\alpha,\gamma}(|g^i_{\alpha\gamma}|^2+|g^i_{\alpha\bar{\gamma}}|^2)+\sum_{i,\alpha,\gamma}(g^{\bar{i}}_{\bar{\alpha}}g^i_{\alpha\gamma\bar{\gamma}}+g^i_\alpha g^{\bar{i}}_{\bar{\alpha}\gamma\bar{\gamma}}).\label{3.28} \end{align} Using \eqref{3.21} and \eqref{3.25}, we compute \begin{align} g^i_{\alpha\gamma\bar{\gamma}}&=g^i_{\gamma\alpha\bar{\gamma}}-(\sum_\beta g^i_{\beta}T^\beta_{\gamma\alpha})_{\bar{\gamma}}\notag\\ &=g^i_{\gamma\bar{\gamma}\alpha}+\sum_\delta g^i_\delta R^\delta_{\gamma\alpha\bar{\gamma}}-\sum_{j,k,l}g^j_\gamma g^k_\alpha g^{\bar{l}}_{\bar{\gamma}}\hat{R}^i_{jk\bar{l}}-(\sum_\beta g^i_\beta T^\beta_{\gamma\alpha})_{\bar{\gamma}}\notag\\ &\ \ \ -\sum_{j,k}(g^j_\gamma g^k_\alpha g^0_{\bar{\gamma}}\hat{W}^i_{jk}+g^j_\gamma g^0_\alpha g^{\bar{k}}_{\bar{\gamma}}\hat{W}^i_{j\bar{k}})\label{3.29} \end{align} and \begin{align} g^{\bar{i}}_{\bar{\alpha}\gamma\bar{\gamma}}=-(\sum_j g^0_{\bar{\alpha}}g^j_\gamma \hat{A}^{\bar{i}}_j)_{\bar{\gamma}}.\label{3.30} \end{align} From \eqref{3.21}, \eqref{3.28}, \eqref{3.29}, \eqref{3.30}, we obtain the Bochner formula as follows.
\begin{lemma} Suppose $g: (N, J^N, h)\rightarrow (M, HM, J, \theta)$ is a $(J^N,J)$-holomorphic map. Then we have \begin{align} g^*G_\theta \leq vh, \label{3.30'} \end{align} and \begin{align} \frac{1}{2}\Delta v=\sum_{i,\alpha,\gamma}&(|g^i_{\alpha\gamma}|^2+|g^i_{\alpha\bar{\gamma}}|^2)+\sum_{i,\alpha,\delta}g^{\bar{i}}_{\bar{\alpha}}g^i_\delta R_{\alpha\bar{\delta}}-\sum_{i,j,k,l,\alpha,\gamma}g^{\bar{i}}_{\bar{\alpha}}g^j_\gamma g^k_\alpha g^{\bar{l}}_{\bar{\gamma}}\hat{R}^i_{jk\bar{l}}\notag\\ -&\sum_{i,j,k,\alpha,\gamma}(g^{\bar{i}}_{\bar{\alpha}}g^j_\gamma g^k_\alpha g^0_{\bar{\gamma}}\hat{W}^i_{jk}+g^{\bar{i}}_{\bar{\alpha}}g^j_\gamma g^0_\alpha g^{\bar{k}}_{\bar{\gamma}}\hat{W}^i_{j\bar{k}})\notag\\ -\sum_{i,\alpha,\gamma}&\{g^{\bar{i}}_{\bar{\alpha}}(\sum_j g^0_\gamma g^{\bar{j}}_{\bar{\gamma}}\hat{A}^i_{\bar{j}})_\alpha+g^i_\alpha(\sum_j g^0_{\bar{\alpha}}g^j_\gamma \hat{A}^{\bar{i}}_j)_{\bar{\gamma}}\}-\sum_{i,\alpha,\gamma}g^{\bar{i}}_{\bar{\alpha}}(\sum_\beta g^i_\beta T^\beta_{\gamma\alpha})_{\bar{\gamma}}.\label{3.31} \end{align} \end{lemma} \section{Schwarz type lemma for $(J,J^N)$-holomorphic maps}\label{sec4} In this section, we will establish the Schwarz type lemma for $(J,J^N)$-holomorphic maps. As corollaries of this lemma, a Liouville theorem and a little Picard theorem for basic CR functions are given. Suppose that $(M,HM,J,\theta)$ is a complete pseudo-Hermitian manifold and $(N,J^N,h)$ is a Hermitian manifold. Let $f: M\rightarrow N$ be a $(J,J^N)$-holomorphic map. To obtain the Schwarz type lemma for $f$, it suffices, by \eqref{2.12}, to establish an upper bound for $u$ defined by \eqref{3.11}. Define \begin{align} \phi(x)=(a^2-\gamma^2(x))^2u \end{align} where $\gamma(x)$ is the Riemannian distance from a fixed point $z$ to $x$ in $M$. Let $B_a(z)$ be an open geodesic ball in $M$ centered at $z$ with radius $a$. Since $\phi$ is non-negative and vanishes on the boundary of $B_a(z)$, it attains its maximum in $B_a(z)$. Suppose $x_0$ is a maximum point.
To apply the maximum principle, $\gamma$ is required to be twice differentiable near $x_0$. This may be arranged by the following consideration (cf. \cite{CCL79}): Let $\tau:[0, \gamma(x_0)]\rightarrow M$ be a minimizing geodesic joining $z$ and $x_0$ such that $\tau(0)=z$ and $\tau(\gamma(x_0))=x_0$. If $x_0$ is a cut point of $z$, then for a small number $\varepsilon>0$, $x_0$ is not a conjugate point of $\tau(\varepsilon)$ along $\tau$. It is well-known that there is a cone $\mathfrak{C}$ with its vertex at $\tau(\varepsilon)$ and containing a neighborhood of $x_0$. Let $\gamma_\varepsilon(x)$ denote the Riemannian distance from $\tau(\varepsilon)$ to $x$; then $\gamma_\varepsilon$ is smooth near $x_0$. Let $\tilde{\gamma}(x)=\varepsilon+\gamma_\varepsilon(x)$; then $\gamma\leq\tilde{\gamma}$, with equality at $x_0$. So we may instead consider the function $(a^2-\tilde{\gamma}^2)^2u$, which also attains its maximum at $x_0$. Letting $\varepsilon\rightarrow 0$, we may assume that $\gamma$ is smooth near $x_0$. Therefore, applying the maximum principle to $\phi$, at $x_0$, we have \begin{gather} \frac{\nabla^H u}{u}=-2\frac{\nabla^H(a^2-\gamma^2)}{a^2-\gamma^2}\label{4.2}\\ \frac{\Delta_bu}{u}+4\frac{\nabla^H(a^2-\gamma^2)}{(a^2-\gamma^2)}\cdot\frac{\nabla^Hu}{u}+2\frac{\Delta_b(a^2-\gamma^2)}{a^2-\gamma^2}+2\frac{\|\nabla^H(a^2-\gamma^2)\|^2}{(a^2-\gamma^2)^2}\leq0\label{4.3} \end{gather} where the inner product $\cdot$ and the norm $\|\cdot\|$ are induced by the Webster metric $g_\theta$. Substituting \eqref{4.2} into \eqref{4.3}, we obtain \begin{align} \frac{\Delta_bu}{u}-6\frac{\|\nabla^H(a^2-\gamma^2)\|^2}{(a^2-\gamma^2)^2}-2\frac{\Delta_b\gamma^2}{a^2-\gamma^2}\leq0.\label{4.4} \end{align} Let $-K_1$ be the greatest lower bound of the pseudo-Hermitian Ricci curvature of $M$, and $-K_2$ be the least upper bound of the holomorphic bisectional curvature of $N$.
Then it follows from \eqref{3.13} that \begin{align} \frac{1}{2}\Delta_bu\geq-K_1u+K_2u^2.\label{4.5} \end{align} Using $\|\nabla^H\gamma\|\leq1$, we get \begin{align} \|\nabla^H(a^2-\gamma^2)\|^2=\|2\gamma\nabla^H\gamma\|^2\leq 4a^2.\label{4.6} \end{align} If $\|A\|_{C^1}$ is bounded from above on $M$, then by Lemma \ref{lemma 2.2}, we have \begin{align} \Delta_b\gamma^2=2\gamma\Delta_b\gamma+2\|\nabla^H\gamma\|^2\leq C(1+a)\label{4.7} \end{align} where $C$ is a positive constant independent of $a$. Suppose that $K_1\geq0$ and $K_2>0$. Then substituting \eqref{4.5}, \eqref{4.6} and \eqref{4.7} into \eqref{4.4}, we obtain \[u(x_0)\leq\frac{K_1}{K_2}+\frac{12a^2}{K_2(a^2-\gamma^2(x_0))^2}+\frac{C(1+a)}{K_2(a^2-\gamma^2(x_0))}. \] Thus, \begin{align*} (a^2-\gamma^2(x))^2u(x)&\leq(a^2-\gamma^2(x_0))^2u(x_0)\\ &\leq\frac{K_1}{K_2}(a^2-\gamma^2(x_0))^2+\frac{12a^2}{K_2}+\frac{C(1+a)}{K_2}(a^2-\gamma^2(x_0))\\ &\leq\frac{K_1}{K_2}a^4+\frac{12a^2}{K_2}+\frac{C(1+a)}{K_2}a^2 \end{align*} for any $x\in B_a(z)$. It follows that \[u(x)\leq\frac{K_1a^4}{K_2(a^2-\gamma^2(x))^2}+\frac{12a^2}{K_2(a^2-\gamma^2(x))^2}+\frac{C(1+a)a^2}{K_2(a^2-\gamma^2(x))^2}.\] Letting $a\rightarrow\infty$, we deduce that \[\sup_M u\leq \frac{K_1}{K_2}.\] Due to \eqref{2.12}, we obtain \[f^*h\leq\frac{K_1}{K_2}G_\theta.\] Therefore, we have the Schwarz type lemma as follows: \begin{theorem}\label{theorem 4.1} Let $(M^{2m+1},HM,J,\theta)$ be a complete pseudo-Hermitian manifold with pseudo-Hermitian Ricci curvature bounded from below by $-K_1\leq0$ and $\|A\|_{C^1}$ bounded from above. Let $(N^n,J^N,h)$ be a Hermitian manifold with holomorphic bisectional curvature bounded from above by $-K_2<0$. Then for any $(J,J^N)$-holomorphic map $f: M\rightarrow N$, we have \begin{align} f^*h\leq\frac{K_1}{K_2}G_\theta.\label{4.8} \end{align} In particular, if $K_1=0$, every $(J,J^N)$-holomorphic map from $M$ into $N$ is constant. \end{theorem} \begin{remark}\label{rk3} Using Royden's lemma (cf.
\cite{Roy80}), we can weaken the hypothesis on $N$ by assuming that the holomorphic sectional curvature of $N$ is bounded above by $-K_2<0$, but the constant $\frac{K_1}{K_2}$ in \eqref{4.8} will be replaced by $\frac{2v}{v+1}\frac{K_1}{K_2}$, where $v$ is the maximal rank of $df$. \end{remark} Since the unit disk in the complex plane equipped with the Poincar\'{e} metric is a K\"{a}hler manifold with constant negative holomorphic curvature, we have the following Liouville theorem: \begin{corollary}\label{corollary 4.2} Let $(M,HM,J,\theta)$ be a complete pseudo-Hermitian manifold with non-negative pseudo-Hermitian Ricci curvature and $\|A\|_{C^1}$ bounded from above. Any bounded basic CR function on $M$ is constant. \end{corollary} Using the fact that the complex plane $\mathbb{C}$ minus two distinct points admits a complete Hermitian metric with holomorphic curvature less than a negative constant \cite{Kob05}, we derive the little Picard theorem for basic CR functions. \begin{corollary}\label{corollary 4.3} Let $(M,HM,J,\theta)$ be a complete pseudo-Hermitian manifold with non-negative pseudo-Hermitian Ricci curvature and $\|A\|_{C^1}$ bounded from above. Any basic CR function $u: M\rightarrow \mathbb{C}$ missing more than one point in its image is constant. \end{corollary} \section{Schwarz type lemma for $(J^N, J)$-holomorphic maps} In this section, we will establish the Schwarz type lemma for $(J^N, J)$-holomorphic maps. In \cite{Yau78}, Yau proved the following: \begin{proposition}\label{proposition 5.1} Let $N$ be a complete K\"ahler manifold with Ricci curvature bounded below. Suppose a non-negative smooth function $u$ on $N$ satisfies the inequality \[\Delta u\geq -k_1u+ k_2u^2,\] where $k_1\geq 0, k_2>0$. Then $\sup_N u \leq \frac{k_1}{k_2}.$ \end{proposition} It is notable that Tosatti has generalized this proposition to almost Hermitian manifolds in \cite{Tos07}. Let $g:(N^n,J^N,h)\rightarrow (M^{2m+1},HM,J,\theta) $ be a $(J^N, J)$-holomorphic map.
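To see why an estimate of the type in Proposition \ref{proposition 5.1} is plausible, it may help to recall the compact case, where the ordinary maximum principle suffices; the complete non-compact case is handled by Yau's generalized maximum principle.

```latex
% If N is compact and u attains its maximum at x_0, then \Delta u(x_0)\leq 0, so
\begin{align*}
  0\ \geq\ \Delta u(x_0)\ \geq\ -k_1u(x_0)+k_2u(x_0)^2
  \quad\Longrightarrow\quad
  \sup_N u=u(x_0)\ \leq\ \frac{k_1}{k_2},
\end{align*}
% using u\geq 0 and k_2>0.
```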
If $N$ is a complete K\"ahler manifold and $M$ is a Sasakian manifold, by \eqref{3.31}, we obtain \begin{align} \frac{1}{2}\Delta v=\sum_{i,\alpha,\gamma}(|g^i_{\alpha\gamma}&|^2+|g^i_{\alpha\bar{\gamma}}|^2)+\sum_{i,\alpha,\delta}g^{\bar{i}}_{\bar{\alpha}}g^i_\delta R_{\alpha\bar{\delta}}-\sum_{\alpha,\gamma,i,j,k,l}g^{\bar{i}}_{\bar{\alpha}}g^j_\gamma g^k_\alpha g^{\bar{l}}_{\bar{\gamma}}\hat{R}_{j\bar{i}k\bar{l}}\label{61} \end{align} where $v$ is defined by \eqref{3.26}. Let $-K_1$ be the greatest lower bound of the Ricci curvature of $N$, and $-K_2$ be the least upper bound of the pseudo-Hermitian bisectional curvature of $M$, where $K_1\geq 0, K_2>0$. It follows from \eqref{61} that \begin{align} \frac{1}{2}\Delta v\geq -K_1v+K_2v^2. \end{align} By Proposition \ref{proposition 5.1}, we deduce that $\sup v\leq \frac{K_1}{K_2}$, which, combined with \eqref{3.30'}, yields the Schwarz type lemma for $(J^N, J)$-holomorphic maps as follows. \begin{theorem} Let $(N^n,J^N,h)$ be a complete K\"ahler manifold with Ricci curvature bounded from below by $-K_1\leq0$. Let $(M^{2m+1},HM,J,\theta)$ be a Sasakian manifold with pseudo-Hermitian bisectional curvature bounded from above by $-K_2<0$. Then for any $(J^N, J)$-holomorphic map $g: N\rightarrow M$, we have \begin{align} g^*G_\theta \leq\frac{K_1}{K_2}h.\label{5.3} \end{align} In particular, if $K_1=0$, any $(J^N, J)$-holomorphic map is horizontally constant. \end{theorem} \begin{remark} By Royden's lemma (cf. \cite{Roy80}), we can weaken the hypothesis on $M$ by assuming that the pseudo-Hermitian sectional curvature of $M$ is bounded above by $-K_2<0$, but the constant $\frac{K_1}{K_2}$ in \eqref{5.3} will be replaced by $\frac{2v}{v+1}\frac{K_1}{K_2}$, where $v$ is the maximal rank of $dg_H$. \end{remark} If $\dim_{\mathbb{C}} N=1$, one can also weaken the hypothesis on $N$.
\begin{corollary} \label{corollary 5.3} Let $(N,J^N,h)$ be a complete 1-dimensional K\"ahler manifold with Ricci curvature bounded from below by $-K_1\leq0$. Let $(M^{2m+1},HM,J,\theta)$ be a Sasakian manifold with pseudo-Hermitian sectional curvature bounded from above by $-K_2<0$. Then for any $(J^N, J)$-holomorphic map $g: N\rightarrow M$, \eqref{5.3} holds. \end{corollary} \section{Invariant pseudodistance on CR manifolds} In this section, we will give an invariant pseudodistance on pseudo-Hermitian manifolds, which is analogous to the Carath\'eodory pseudodistance on complex manifolds (cf. \cite{Ca26}). Note that this notion extends to more general CR manifolds. Let $(M^{2m+1},HM,J,\theta)$ be a pseudo-Hermitian manifold. Let $D$ denote the unit disk in the complex plane and $\rho$ denote the Bergman distance of $D$. Analogous to the Carath\'eodory pseudodistance on complex manifolds, we may also define the CR Carath\'eodory pseudodistance on $M$ by \begin{align} c_M(p, q)= \sup_f \rho(f(p), f(q)) \end{align} where the supremum is taken over all CR functions $f: M\rightarrow D$. Note that $f: M\rightarrow D$ is a CR function if and only if it satisfies $df\circ J=J^D\circ df$ on $HM$. It is easy to verify the following axioms for the pseudodistance: \begin{align} c_M(p, q)\geq 0, \ c_M(p, q)=c_M(q, p), \ c_M(p, r)+c_M(r, q)\geq c_M(p, q). \end{align} The most important property of $c_M$ is given as follows; its proof is immediate. \begin{proposition} Let $M, \tilde{M}$ be two pseudo-Hermitian manifolds and let $f:M\rightarrow \tilde{M}$ be a CR map.
Then \[c_{\tilde{M}}(f(p), f(q))\leq c_M(p, q). \] \end{proposition} \begin{corollary} If $f:M\rightarrow \tilde{M}$ is a CR isomorphism, then \[c_{\tilde{M}}(f(p), f(q))=c_M(p, q). \] \end{corollary} In order to apply the Schwarz lemma in Section \ref{sec4}, we may define the basic pseudodistance by \begin{align} c'_M(p, q)= \sup_f \rho(f(p), f(q)) \end{align} where the supremum is taken over all $(J, J^D)$-holomorphic maps $f: M\rightarrow D$. It is clear that $c'_M\leq c_M$. \begin{example} Let $D$ be the unit disc in $\mathbb{C}$. Set $\theta=dt+i(\bar{\partial}-\partial)\log(1-|z|^2)^{-1}$, $\xi=\frac{\partial}{\partial t}$, and $H=\ker \theta$. We define an almost complex structure $J$ on $H$ to be the horizontal lift of the complex structure $J^D$ on $D$. Then $D^3(-1)= (D\times \mathbb{R}, H, J, \theta)$ is the 3-dimensional Sasakian space form with pseudo-Hermitian sectional curvature $-1$ (cf. \cite{BG08} or \cite{DRY19}). Choose $T=\frac{\partial}{\partial z}+i\frac{\bar{z}}{1-|z|^2}\frac{\partial}{\partial t}$ as the frame field of $T_{1,0}D^3(-1)$. It is easy to see that $f=f(z,t): D^3(-1)\rightarrow D$ is a basic CR function if and only if $\frac{\partial f}{\partial t}=\frac{\partial f}{\partial \bar{z}}=0$. Thus, for any $(z_1,t_1), (z_2,t_2)\in D^3(-1)$, $c'_{D^3(-1)}((z_1,t_1), (z_2,t_2))=c'_{D^3(-1)}((z_1,0), (z_2,0))=c^D(z_1, z_2)$, where $c^D$ is the Carath\'eodory distance of the unit disk $D$. For $c_{D^3(-1)}$, since $f_1(z,t)=z$ and $f_2(z,t)=\frac{t-i(\log{(1-|z|^2)}+1)}{t-i(\log(1-|z|^2)-1)}$ are CR functions from $D^3(-1)$ to $D$, we have $c_{D^3(-1)}((z_1,t_1), (z_2,t_2))\geq \rho(z_1, z_2)>0$ for any $z_1\neq z_2$ in $D$, and $c_{D^3(-1)}((z,t_1), (z,t_2))\geq \rho(f_2(z,t_1), f_2(z,t_2))>0$ for $z\in D, t_1\neq t_2$. Therefore, $c_{D^3(-1)}((z_1,t_1), (z_2,t_2))=0$ if and only if $(z_1,t_1)=(z_2,t_2)$. Consequently, for two distinct points $p$ and $q$ in $D^3(-1)$, there is a bounded CR function $f$ on $D^3(-1)$ such that $f(p)\neq f(q)$.
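As a small numerical aside, the distance on the unit disk appearing in this example can be computed explicitly. The sketch below uses the standard pseudo-hyperbolic formula $\rho(z_1,z_2)=\operatorname{artanh}\left|\frac{z_1-z_2}{1-\bar{z}_1z_2}\right|$ (normalizations of the Poincar\'e/Bergman-type metrics on $D$ differ only by constant factors) and checks its invariance under a disk automorphism; the function names are ours, not from any particular library.

```python
import numpy as np

def poincare_distance(z1, z2):
    # Distance on the unit disk for the metric |dz|^2/(1-|z|^2)^2;
    # other normalizations rescale this by a constant factor.
    w = (z1 - z2) / (1 - np.conj(z1) * z2)
    return np.arctanh(abs(w))

# Invariance under the disk automorphism z -> (z - a)/(1 - conj(a) z),
# which is why this distance is natural for Schwarz-type lemmas:
a = 0.3 + 0.2j
def mobius(z):
    return (z - a) / (1 - np.conj(a) * z)

z1, z2 = 0.1 + 0.4j, -0.5 + 0.1j
assert np.isclose(poincare_distance(z1, z2),
                  poincare_distance(mobius(z1), mobius(z2)))
```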
\end{example} Making use of Theorem \ref{theorem 4.1}, we have \begin{theorem}\label{theorem 6.3} Let $(M,HM,J,\theta)$ be a complete pseudo-Hermitian manifold with pseudo-Hermitian Ricci curvature bounded from below by a constant $-K\leq 0$ and $\|A\|_{C^1}$ bounded from above. Then \[c'_M(p,q)\leq \sqrt{K}d^M_{cc}(p, q)<\infty\] for $p,q\in M$. In particular, if $K=0$, $c'_M\equiv0$. \end{theorem} \begin{proof} Let $ds^2_D$ denote the Poincar\'e metric on $D$, whose curvature is $-1$. For any $(J, J^D)$-holomorphic map $f: M\rightarrow D$, by Theorem \ref{theorem 4.1}, we have \begin{align} f^*ds^2_D\leq KG_\theta.\label{6.4} \end{align} Assume that $p, q$ are two points in $M$ and $\tau: [0, 1]\rightarrow M$ is a horizontal Lipschitz curve between them. Then, \begin{align} \rho(f(p), f(q))\leq\int_0^1\sqrt{ds^2_D(f_*(\tau'), f_*(\tau'))}\, dt\leq\sqrt{K}\int_0^1\sqrt{L_\theta(\tau', \tau')}\, dt, \end{align} where the second inequality follows from \eqref{6.4}. Taking the supremum with respect to $f$ and the infimum with respect to $\tau$, the theorem follows. \end{proof} By the definitions of the Carath\'eodory and CR Carath\'eodory pseudodistances, it is easy to see the relationship between them. \begin{proposition}\label{proposition 6.4} Let $(M^{2m+1},HM,J,\theta)$ be a pseudo-Hermitian manifold and $(N, J^N)$ a Hermitian manifold. \begin{enumerate}[(i)] \item For any $(J, J^N)$-holomorphic map $f: M\rightarrow N$, we have \begin{align} c^N(f(p), f(q))\leq c'_M(p, q)\leq c_M(p, q) \end{align} for any $p, q\in M$, where $c^N$ is the Carath\'eodory pseudodistance on $N$.\label{proposition 6.4 (1)} \item For any $(J^N, J)$-holomorphic map $g:N\rightarrow M$, we have \begin{align} c'_M(g(x), g(y))\leq c^N(x, y) \end{align} for any $x, y\in N$. \end{enumerate} \end{proposition} Combining Theorem \ref{theorem 6.3} and Proposition \ref{proposition 6.4} \eqref{proposition 6.4 (1)}, we derive another Liouville theorem for $(J, J^N)$-holomorphic maps.
\begin{theorem}\label{theorem 6.5} Let $(M,HM,J,\theta)$ be a complete pseudo-Hermitian manifold with non-negative pseudo-Hermitian Ricci curvature and $\|A\|_{C^1}$ bounded from above. Let $(N, J^N)$ be a complex manifold whose Carath\'eodory pseudodistance is a distance. Then any $(J, J^N)$-holomorphic map $f: M\rightarrow N$ is constant. \end{theorem} \begin{remark} We can also deduce Corollary \ref{corollary 4.2} from Theorem \ref{theorem 6.5}, since the Carath\'eodory pseudodistance of unit disc $D$ is a distance. \end{remark}
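As a closing sanity check, the fact used in the proof of Theorem \ref{theorem 6.3} that the Poincar\'e metric $ds^2_D$ has curvature $-1$ can be verified symbolically. The sketch below is ours; it uses the standard formula $K=-\Delta\log\lambda/(2\lambda)$ for the Gaussian curvature of a conformal metric $\lambda\,(dx^2+dy^2)$ and the curvature $-1$ normalization $\lambda=4/(1-|z|^2)^2$.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Poincaré metric on the unit disk, normalized to curvature -1:
# ds^2 = lam * (dx^2 + dy^2) with lam = 4 / (1 - |z|^2)^2.
lam = 4 / (1 - x**2 - y**2)**2

# Gaussian curvature of a conformal metric: K = -Delta(log lam) / (2 lam),
# where Delta is the flat Laplacian d^2/dx^2 + d^2/dy^2.
log_lam = sp.log(lam)
K = sp.simplify(-(sp.diff(log_lam, x, 2) + sp.diff(log_lam, y, 2)) / (2 * lam))
assert K == -1
```

Rescaling the metric by a constant $c>0$ rescales the curvature by $1/c$, which is how the other normalizations mentioned above arise.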
\section{Introduction} Millimeter wave (mmWave) communication systems, operating in frequency bands of 30-300 GHz, are considered a promising technology for 5G and beyond cellular systems to achieve high data rates thanks to their wide frequency bands \cite{6G}. Due to the large isotropic path loss, large antenna arrays are typically deployed at the base station (BS) and/or user terminals (UEs) in order to form narrow beams between BS-UE pairs. The classical beam sweeping mechanism is highly inefficient, as its complexity increases substantially with the number of antenna elements, and its practical implementation using quantized phases may limit its accuracy. Therefore, a number of low-complexity schemes for initial beam alignment have been proposed in the literature (see e.g. a compressed sensing approach in \cite{X.Song_OSPS} and references therein). However, the existing schemes cannot be adapted directly to the high-mobility scenario, where the displacement of UEs may significantly increase the probability of beam misalignment. In such a scenario, the angle of each UE must be estimated continuously through beam tracking methods (e.g. \cite{BeamTracking,Va, Love, PF} and references therein). For example, \cite{Va, Love, PF} proposed variants of the Kalman filter (KF) to address this problem. Since a restricted channel model together with high complexity makes the methods of \cite{Va, Love} impractical in high-mobility environments, \cite{PF} considered the use of particle filtering (PF) in a time-varying channel and demonstrated improved performance over other KF variants by modeling the non-linearities of the channel. Further, \cite{Palacios} attempted to overcome the computational burden. In this work, by focusing on the beamforming at the BS, we propose a beam tracking approach based on Recurrent Neural Networks (RNN) that can be applied to an arbitrary UE mobility pattern.
The proposed method takes advantage of the temporal correlations within measurement data and learns the amount of \textit{adjustment} required for the beam direction from the sequence of measurements by approaching the problem as a classification task. Based on the proposed scheme, we define a modified frame structure with variable length which can be adapted to reduce overhead. The numerical examples in a highly non-uniform, non-linear motion scenario using the Quadriga channel generator demonstrate that our proposed scheme can track the AoD accurately and achieve a higher communication rate compared to the PF. Finally, we remark that a number of recent works have adopted machine/deep learning for beamforming design over mmWave channels (see e.g. \cite{Lim, Guo,Elbir1, Sim,Huang}). In \cite{Elbir1}, a convolutional neural network (NN) is proposed for an optimal beamformer design. A deep learning framework was proposed for beam selection by using the channel state information (CSI) in \cite{Sim}, while \cite{Liu} demonstrates an auto-encoder/decoder architecture for robust angle estimation. Although \cite{Lim, Guo} also addressed beam tracking using machine learning tools, the proposed method differs from these approaches. In \cite{Lim}, only fully connected layers, unable to capture time correlations of input data, are considered. The work of \cite{Guo} aims to infer the correct beam index by predicting (i.e. a regression problem) the channel vector under the assumption of a linear motion path. More recently, the work in \cite{Qlearning} applied a Reinforcement Learning principle to UAV beam tracking, where the authors deal with the design of the reward function by introducing thresholds for the reward. These threshold values need to be optimized depending on the operational SNR. In contrast, our model does not require any SNR-specific design parameter. 
While a comparison of the NN based methods is an interesting topic for future work, due to the limited space, we have restricted our comparison to the well assessed and understood PF scheme. \section{System Model} \subsection{Frame Structure and Signaling Scheme}\label{frame_structure} Fig.~\ref{fig:frame_strc} illustrates the proposed frame structure consisting of two different types depending on the operating mode: (a) initialization frame and (b) secondary frame. Frame (a) is used at the initialization of the system. During the initial beam alignment (BA) phase, the BS transmits a number of pilots to sweep the beam space. The BS receives a feedback (FB) signal from the UE on the previously transmitted pilots on the uplink (UL) channel, in the \textit{UL-FB} phase, as explained in Section \ref{initial_spectrum}. During the \textit{Data Transmission} phase, the data is transmitted on the downlink (DL) channel. The final phase is the Secondary Probing (\textit{SP}) phase of variable length, dedicated to transmitting a reduced number of pilots for further channel probing. Frame (b) of length $T_{F'}<T_F$ comes into operation once a reliable connection between a BS-UE pair is established after the initialization frame (a). During the first phase \textit{UL-FB}, the BS receives feedback on the previous probing (SP) from UEs. The second and third phases are identical to those of the initialization frame (a). \begin{figure}[h] \vspace{-0.3cm} \centering \scalebox{.5}{\input{FrameTimeSlots}} \caption{Frame structures depending on operating mode. 
Note, the UL-FB slot at the beginning receives feedback from the previous SP.} \label{fig:frame_strc} \end{figure} \vspace{-0.3cm} \subsection{Transmission Scheme and Measurement Equation}\label{measurement_model} Suppose the transmitter (BS), with $N_{\msub{TX}}$ uniformly and linearly positioned elements, sends pilot symbols to the receiver (UE), with $N_{\msub{RX}}$ elements, over the time-varying MIMO channel at frame $k$ given by \begin{align} {\bf H}_k(t, \tau) = \alpha_k{\bf a}_{\msub{RX}}(\varphi_k){\bf a}_{\msub{TX}}^{H}(\theta_k) \delta(t-\tau_k) e^{j2\pi\nu_k t} \end{align} where $\alpha_k, \tau_k, \nu_k$ denote the attenuation coefficient, the delay, and the Doppler shift, respectively, $\{d_i\}$ are the antenna element positions with $d_1=0$, $\lambda$ is the wavelength, and ${\bf a}_{\msub{TX}}(\theta_k)$ is the steering vector at the Tx given by \begin{equation} {\bf a}_{\msub{TX}}(\theta_k) = [ e^{j\frac{2\pi}{\lambda}d_{1}\cos(\theta_k)},... , e^{j\frac{2\pi}{\lambda}d_{N_{\msub{TX}}}\cos(\theta_k)}]^{T} \end{equation} for the angle-of-departure (AoD) denoted by $\theta_k$; ${\bf a}_{\msub{RX}}(\varphi_k)$ is defined similarly for the angle-of-arrival (AoA) denoted by $\varphi_k$. Since we consider a LoS path between the UE/BS, we have $\varphi_k=\theta_k$. The model above can be easily extended to multiple paths. The parameters $(\theta_k, \alpha_k, \tau_k, \nu_k)$ remain constant over a frame duration. We consider the beamforming vector ${\bf f}$ and the combining vector ${\bf c}$ according to a predefined beam codebook. Namely, we focus on the beamforming codebook at the BS side, denoted by $\mathcal{CB}=\{{\bf f}_1, \dots, {\bf f}_G\}$, where ${\bf f}_i ={\bf f}(\tilde{\theta}_i)= \frac{1}{\sqrt{N_{\msub{TX}}}}{\bf a}_{\msub{TX}} (\tilde{\theta}_i)$, $i\in \{1, \dots, G\}$, for some angles $\tilde{\theta}_i$. 
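As a concrete illustration, the steering vector and codebook above can be generated in a few lines; half-wavelength spacing $d_i=(i-1)\lambda/2$ and the uniform angular grid are assumptions made for this sketch only:

```python
import numpy as np

def steering_vector(theta, n_tx):
    """ULA steering vector a_TX(theta) with half-wavelength spacing,
    so element i has phase pi * (i - 1) * cos(theta) and d_1 = 0."""
    return np.exp(1j * np.pi * np.arange(n_tx) * np.cos(theta))

def make_codebook(n_tx, g):
    """Codebook CB = {f_1, ..., f_G}: unit-norm beamformers
    f_i = a_TX(theta_i) / sqrt(N_TX) on a uniform angular grid."""
    grid = (np.arange(g) + 0.5) * np.pi / g              # G angles in (0, pi)
    cb = np.stack([steering_vector(t, n_tx) for t in grid]) / np.sqrt(n_tx)
    return grid, cb

grid, cb = make_codebook(n_tx=64, g=128)
print(cb.shape)                                          # (128, 64)
print(np.allclose(np.linalg.norm(cb, axis=1), 1.0))      # True
```

The $1/\sqrt{N_{\msub{TX}}}$ scaling keeps every codeword at unit norm, matching the definition of ${\bf f}_i$ above.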
Then, the observed signal at the UE for frame $k$ after the combining vector is given by \begin{equation}\label{measurements} z_k(t) = \alpha_k{\bf c}^{H}(\tilde{\varphi}){\bf a}_{\msub{RX}}(\varphi_k){\bf a}_{\msub{TX}}^{H}(\theta_k){\bf f}(\tilde{\theta}) x(t-\tau_k) e^{j2\pi\nu_k t} +\eta(t) \end{equation} where $\eta(t) \sim {\cal N}(0,\sigma_{\eta}^{2} )$ is the additive Gaussian noise. The objective of initial BA is to acquire an accurate AoD estimate $\tilde{\theta}\in \{\tilde{\theta}_1, \dots, \tilde{\theta}_G\}$ so that the corresponding beamforming vector ${\bf f}(\tilde{\theta})$ is used for the DL data transmission. We also assume that the codebooks available for UL and DL operation at the BS are identical. \subsection{Initial Channel State Estimation}\label{initial_spectrum} In order to focus on tracking the UE's angular location at the BS side, we further assume that the UE is equipped with a single antenna. The UE initially listens to the channel during the initial BA period, where the BS sends pilots through $T$ probing directions according to the predefined codebook $\mathcal{CB}$. By replacing ${\bf f}(\tilde{\theta})$ with ${\bf f}_i$ for $i=1, \dots, T$ in \eqref{measurements}, followed by suitable sampling every $T_{ACQ}/T$, we obtain $T$ discrete observations or measurements, denoted by ${\bf z}= \{z_{\tilde{\theta}_{1}}[1],...,z_{\tilde{\theta}_{T}}[T]\}$, where $z_{\tilde{\theta}_{i}}[i]$ is the received signal when probing in the direction $\tilde{\theta}_{i}$ at the $i^{th}$ sampling instant. After this initial BA phase, the UE selects the best beam direction $\tilde{\theta}_m$ corresponding to $\text{max}_{i=1,...,T}(\vert z_{\tilde{\theta}_{i}}[i]\vert)$ and feeds back ${\bf z}$ to the BS during the UL-FB phase. \section{Beam Tracking Methods} \subsection{Classical Beam Tracking}\label{Classical} Beam tracking based on Bayesian statistical inference principles has been extensively investigated in the literature \cite{Va, Love, PF}. 
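The sweep-and-select procedure of the initial BA phase described above can be sketched as follows; the uniform probing grid, unit pilot, single-antenna UE, and noise level are illustrative assumptions of the sketch:

```python
import numpy as np

def a_tx(theta, n_tx):
    # ULA steering vector with half-wavelength spacing (d_1 = 0)
    return np.exp(1j * np.pi * np.arange(n_tx) * np.cos(theta))

def initial_beam_alignment(theta_true, n_tx=64, t_probes=128,
                           snr_db=10.0, seed=0):
    """Probe T codebook directions once; the UE keeps the measurement
    vector z and the index m of the strongest beam to feed back."""
    rng = np.random.default_rng(seed)
    grid = (np.arange(t_probes) + 0.5) * np.pi / t_probes
    f = np.stack([a_tx(t, n_tx) for t in grid]) / np.sqrt(n_tx)
    sigma = 10.0 ** (-snr_db / 20)
    # z_i = |a_TX^H(theta) f_i| + noise  (unit pilot, combiner c = 1)
    z = np.abs(f.conj() @ a_tx(theta_true, n_tx)) \
        + sigma * rng.standard_normal(t_probes)
    m = int(np.argmax(z))
    return grid[m], m, z

theta_hat, m, z = initial_beam_alignment(theta_true=np.pi / 3)
```

With 64 elements, the selected grid angle lands within a fraction of the array beamwidth of the true AoD.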
In this subsection, we provide a brief overview of the PF technique similar to that described in \cite{PF}. The filtering problem consists of estimating the internal states of a dynamical system when partial observations are available in the presence of random perturbations. The objective is to compute the posterior distributions of the states of a Markov process, given some noisy and/or partial observations. Based on the system model described in (\ref{measurements}), the state is sequentially estimated by using the measurement $z_{\tilde{\theta}_m}[k]$ (for brevity $z_k$) at each time frame. Due to the nonlinear dependency between the state space and the measurements in the problem at hand, the PF is particularly well suited among Bayesian filtering methods. By letting $\tilde{\theta}[k], z_k $ denote the state and the observation at frame $k$ obtained from (\ref{measurements}) after a suitable sampling\footnote{It is also possible to consider multiple measurements for a given frame.}, we model the transition of the state as $\tilde{\theta}[k] =\tilde{\theta}[k-1] + u_k$, where $u_k\sim {\cal N}(0, \sigma_{\tilde{\theta}}^2)$ is the process noise (i.e. perturbation). We can modify \cite{PF} by focusing on particles (beams) within a reduced region of interest of the beam space. These regions can be selected adaptively; for instance, in the case of static or slow moving users, the space in proximity of the main beam direction of the previous step will have a denser distribution of particles. \subsection{Proposed method with Recurrent Neural Networks} The particle filter discussed previously incurs increased complexity and overhead in the massive MIMO regime and in high mobility channels. In this section we consider a RNN approach for the aforementioned tracking problem by exploiting the time correlation between measurement data. 
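One update of the particle filter outlined above (propagate the particles, reweight by the measurement likelihood, resample) might be sketched as follows; the Gaussian likelihood on the beam-response magnitude and the noise levels are illustrative assumptions rather than the exact design of \cite{PF}:

```python
import numpy as np

def pf_step(particles, weights, z_obs, f_beam, n_tx,
            sigma_state=0.01, sigma_meas=0.1, rng=None):
    """One particle-filter iteration for theta[k] = theta[k-1] + u_k."""
    rng = rng or np.random.default_rng()
    # 1) propagate each particle through the random-walk state transition
    particles = particles + sigma_state * rng.standard_normal(particles.size)
    # 2) reweight by the likelihood of the observed measurement; the
    #    predicted measurement is |f^H a_TX(theta)| for each particle
    phases = np.outer(np.arange(n_tx), np.cos(particles))
    z_pred = np.abs(f_beam.conj() @ np.exp(1j * np.pi * phases))
    weights = weights * np.exp(-0.5 * ((z_obs - z_pred) / sigma_meas) ** 2)
    weights = weights / weights.sum()
    # 3) resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * particles.size:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights
```

The AoD estimate at each frame is then the weighted mean `np.sum(weights * particles)`.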
Since the underlying channel parameters such as the AoD, the range, and the Doppler shift in Section \ref{measurement_model} evolve in time with some memory, a recurrent NN architecture is a natural candidate for this problem. In recent years, Long Short-Term Memory (LSTM) networks have been used to create large recurrent networks that in turn can be used to address complicated sequence problems in machine learning and achieve state-of-the-art results. The LSTM network is trained using backpropagation through time and thereby overcomes the vanishing gradient problem. LSTM networks contain memory blocks, connected through layers, in place of neurons. A block contains gates that manage the block's state and output, thus developing a memory state for recent sequences. A block operates upon an input sequence, and each gate within a block uses a sigmoid activation unit to control whether it is triggered or not, making the change of state and the addition of information flowing through the block conditional. \noindent The input data sequence of interest consists of the observed signal values $z_k$ at frame $k$. These input values, referred to as input features, are used to train the network to output estimates of the UE's future AoD under a supervised learning framework. The input features of the NN are generated based on the \textit{windowed} input method, where features corresponding to the previous time steps are inputs to the current step. Since the measurements in (\ref{measurements}) are equivalent to the beam space representation of the channel at discrete angles, they can be viewed as a \textit{pseudo-spectrum}. On this basis, we define our input features using a \textit{sliding window} technique consisting of these values. 
As the initial feature set, we take the pseudo-spectrum $\mathcal{S} \equiv {\bf z}$ described in Section \ref{initial_spectrum} and define the window length parameter $L$ such that $L$ bins around the current main beam direction $\tilde{\theta}_m[k]$ are selected at each time step (a total of $L'=2L+1$ bins, as in Fig. \ref{fig:sliding_wind}). These inputs are updated at every measurement frame. With reference to Fig.~\ref{fig:frame_strc}, the updates are made at interval $T_{\msubl{SP,FB}}$, which relates to the secondary probe transmitted during interval $T_{\msubl{SP}}$ of the previous frame. The number of directions to be scanned during the secondary probing can be adaptively selected, leading to a trade-off between overhead and improved channel exploration. If the UE has moved out of the span of the current window at the next time instance (this can happen, for example, if the window length is very small or the grid points are very fine), the initial BA shall be performed using the initialization frame (a), and the label generated for the current time frame will be the last element of the label vector (i.e. the farthest from the current position) in order to minimize the distance to the next user position (note that this can only happen during inference mode). An exemplary graphical representation of this input is depicted in Fig. \ref{fig:input_feature}. The labels generated for the supervised learning task consist of a \textit{one-hot-encoded} vector with the same dimension as the sliding window, such that the vector contains all zeros except at the position which corresponds to the next angular position (i.e. AoD) of the UE. The network outputs a value between '0' and '1' for each position in the window, which can be interpreted as the probability of the target AoD in the next instant. 
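The window extraction and one-hot label construction described above can be sketched as follows; zero padding at the grid edges and 0-based indexing (position $L+d$ rather than $L+1+d$) are conventions of this sketch only:

```python
import numpy as np

def window_features_and_label(spectrum, m_now, m_next, L):
    """Extract the 2L+1 pseudo-spectrum bins centred on the current main
    beam index m_now, and build a one-hot label marking the UE's next
    grid position m_next, clipped to the window edge when the UE leaves
    the window."""
    padded = np.pad(spectrum, (L, L))            # zero-pad the grid edges
    feat = padded[m_now : m_now + 2 * L + 1]     # bins m_now-L .. m_now+L
    d = int(np.clip(m_next - m_now, -L, L))      # displacement in grid points
    label = np.zeros(2 * L + 1)
    label[L + d] = 1.0                           # 0-based analogue of L+1+d
    return feat, label

spectrum = np.arange(10.0)                       # toy pseudo-spectrum, G = 10
feat, label = window_features_and_label(spectrum, m_now=5, m_next=6, L=2)
print(feat)    # [3. 4. 5. 6. 7.]
print(label)   # [0. 0. 0. 1. 0.]
```

The clipping branch reproduces the else-case of the label-generation algorithm: when $\vert d \vert > L$, the label is placed at the window edge nearest the UE's next position.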
The significance of this technique is that the NN output becomes invariant to the value of the AoD itself, rather the predicted value is the amount of correction (in discrete grid points) needed for the next beam. This classification-type solution is an alternative to works attempting to regress values \cite{Guo}. It is well known that for limited data inputs and training, classification outperforms regression. It's also worth noting that by defining the problem as a classification task, we directly bypass the problem of dealing with ambiguities within the regressed values (e.g. regressing $370^{\circ}$ and $10^{\circ}$ for a desired value of $10^{\circ}$). \begin{figure}[h] \centering \scalebox{.45}{\input{schematicRNN_sliding_window}} \caption{Schematic of the sliding window and the corresponding label vector. Here the 1 indicates that the UE moves one grid point (discrete angle from grid) in the next time frame. } \label{fig:sliding_wind} \end{figure} \begin{algorithm} \algsetup{linenosize=\small} \scriptsize \SetKwInOut{Parameter}{parameter} \SetKwData{Left}{left} \SetKwData{This}{this} \SetKwData{Up}{up} \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{measurements $z_k$ and $\tilde{\theta}_m[k]$ as in (\ref{measurements}) for time frames $k \in \{0,1,..., K\}$ and sliding window size $2L+1$} \Output{feature set $\mathcal{F}$, label set $\mathcal{L}$} \BlankLine initialization\; set $\mathcal{W}=\mathcal{S}$ and $\mathcal{F}_k=[w_{k-L},....,w_{k+L}]$\ \While{$k \leq K-1$}{ insert $z(\tilde{\theta}_m[k])$ at $m^{th}$ position of $\mathcal{W}$ \\ center sliding window of length 2$L+1$ at position $m$\\ acquire $\tilde{\theta}_{m}[k+1]$ and calculate $d_{\tilde{\theta}} = \tilde{\theta}_{m}[k+1] - \tilde{\theta}_m[k] $ \; \eIf{$\vert d_{\tilde{\theta}} \vert\leq L $}{ set position $(L+1+d_{\tilde{\theta}})$ of $\mathcal{L}_k$ to 1 and the rest 0\; }{ set position $(L+1+ 
\text{sgn}(d_{\tilde{\theta}})L)$ of $\mathcal{L}_k$ to 1 and the rest 0\; } } \caption{Algorithm for feature and label generation from dataset. $m$ denotes the index of the main beam.} \label{alg: feature_gen} \end{algorithm} \begin{figure} \vspace{-0.5cm} \centering \resizebox{1.25\linewidth}{!}{\input{plotRNNfeatureSet.tex}} \caption{Graphical representation of the input features for the RNN in a dynamic scenario where the UE has a cyclic motion. The window length parameter $L$ is set to 8 for a total of 90 time frames. } \label{fig:input_feature} \end{figure} \subsection{Neural Network Architecture} The recurrent network architecture is depicted in Fig.~\ref{RNN_architecture} consisting of two LSTM layers followed by a fully connected layer before the final classification layer. Here we have used a \textit{bi-directional LSTM} (biLSTM) layer in order to preserve the information contained within the time series from the future samples. For conciseness we refrain from discussing different recurrent implementations here. An important remark here is that as the final layer of the classifier we have chosen the sigmoid activation function as opposed to a softmax due to the fact that when angular grids of the labels are smaller than the beam width of the array, it is possible that a certain beam can cover multiple grid points. \begin{figure}[h] \centering \scalebox{.45}{\input{schematicRNN}} \caption{RNN architecture with the LSTM layer directly fed by the input data. Note that it is also possible to replace the sigmoid activation at the final layer with a softmax layer.} \label{RNN_architecture} \end{figure} \section{Simulations and Numerical Results} \label{simulations} In this section we use the Fraunhofer Quadriga \cite{quadriga} channel generator with a mobile UE to train a network to predict the UE's AoD. The parameters generated by Quadriga at each time instance include channel coefficients, delay values and position coordinates (trajectory) among others. 
In the following we assume a ULA at the BS with $N_{\msubl{TX}}=64$. Since this ULA is not available in Quadriga, we defined a custom 64-element ULA to obtain the channel parameters. These values are then used to simultaneously generate multiple realizations of the measurement values for the NN and state variables for the PF along the same trajectory. As can be seen in Fig.~\ref{quadriga_traj}, the UE follows a trajectory with variable speed along a circular path (start: S) followed by a linear path. The circular path can be a representation of a likely scenario for a BS installed at the corner of an urban round-about. The significant challenge of a circular path is the non-linearity in AoD values with time. The dataset contains 26 realizations of the same trajectory, where in each case the parameters vary. \begin{figure}[h] \vspace{-5cm} \centering \scalebox{.5}{\input{CRv/plotQuadrigaTrajectoriesERP}} \vspace{-1cm} \hspace{1.5cm} \caption{Quadriga simulated trajectory of a mobile UE. The green markings on the trajectory imply that the linear speed of the UE changes from there onward. The notation BERLIN-UMa-LOS describes the scenario defined by the Quadriga software to generate the coefficients (in this case a LoS environment in Berlin, Germany).} \label{quadriga_traj} \end{figure} Fig.~\ref{prediction} shows the performance of the PF compared to the proposed method. The number of particles is set to 100 as in \cite{PF} and the other parameters have been adjusted correspondingly to the array size. The \textit{Oracle} denotes the real AoD; \textit{GT} denotes the Ground Truth value, which is the rounded (within 1 degree) value of the corresponding real value, included in the plots to signify that the network has been trained to predict up to a pre-chosen \textit{gridded} accuracy. Fig. \ref{prediction} shows the estimated AoD values associated with the UE along the marked circular section (A-D) of the trajectory from Fig.~\ref{quadriga_traj}. 
A few remarks are in order. 1) The section of the path used for inference has not been seen by the network during training and validation. 2) Note that when the AoD progression becomes highly nonlinear along the path, the PF deviates significantly from the true value. This could be attributed to the channel model considered in \cite{PF}, which ignores the Doppler induced time-variance of the channel. Additionally, even though the PF outperforms other EKF variants, the particles are not fully able to capture the channel dynamics. For both estimators, at \textit{each} time index the reported values are the average estimated value (binned avg. for RNN) over 26 independent runs. In Tab.~\ref{table_mse}, the MSE of the RNN-estimated angles is shown for various window lengths ($L'$) and probe lengths. It is observed that with a larger window (more memory) and a larger number of probes, the estimation error decreases. \begin{figure*}[t] \centering \begin{subfigure}[h]{0.3\linewidth} \scalebox{0.3}{\input{CRv/predictions_ERP_1}} \end{subfigure} \begin{subfigure}[h]{0.3\linewidth} \hspace{0.5cm} \scalebox{0.3}{\input{CRv/predictions_ERP_2}} \end{subfigure} \begin{subfigure}[h]{0.3\linewidth} \scalebox{0.3}{\input{CRv/predictions_ERP_3}} \end{subfigure} \vspace*{-1.0cm} \caption{The performance of the PF compared to the proposed method. The overall MSE is reported in Tab.~\ref{table_mse} for other training window lengths and transmitted numbers of secondary probes. The results are segmented into multiple plots for better visualization.} \label{prediction} \end{figure*} In Fig.~\ref{rate_comparison}, a comparison of the achievable rate between the proposed RNN scheme, the PF from \cite{PF} and an Oracle estimator is provided. The reported values correspond to the predictions from Fig.~\ref{prediction}. The plotted values at each time index are averaged over 26 runs. 
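The rates in Fig.~\ref{rate_comparison} follow the overhead-discounted beamforming rate of (\ref{eq: rate}); a minimal sketch of its evaluation, where the single-antenna UE (${\bf c}=1$), $\vert\alpha\vert=1$, $P=\sigma_\eta=1$, and the overhead ratio $T_X/T_{FR}$ are illustrative assumptions:

```python
import numpy as np

def achievable_rate(theta_hat, theta, n_tx=64, p=1.0, sigma_eta=1.0,
                    t_x=0.1, t_fr=1.0):
    """Overhead-discounted rate with a mismatched beam:
    (1 - T_X/T_FR) * log2(1 + P |c^H H f|^2 / sigma^2)."""
    i = np.arange(n_tx)
    a = np.exp(1j * np.pi * i * np.cos(theta))                 # channel direction
    f = np.exp(1j * np.pi * i * np.cos(theta_hat)) / np.sqrt(n_tx)
    gain = np.abs(f.conj() @ a) ** 2                           # beamforming gain
    return (1.0 - t_x / t_fr) * np.log2(1.0 + p * gain / sigma_eta ** 2)

# perfect alignment attains the full array gain N_TX = 64
r_aligned = achievable_rate(np.pi / 3, np.pi / 3)
r_mismatch = achievable_rate(np.pi / 3 + 0.2, np.pi / 3)
print(r_aligned > r_mismatch)   # True
```

Even a small angular mismatch collapses the gain of a 64-element array, which is why tracking accuracy translates directly into rate.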
It is worth mentioning that the change in the Oracle rate occurs as a result of the UE moving closer to the BS, leading to a higher received signal power. The achievable rate is calculated according to: \begin{equation} \begin{split} r(\tilde{\theta},\theta)&=\left(1-\frac{T_{\msubl{X}}}{T_{\msubl{FR}}}\right) \cdot \log_2 \left(1+\frac{P\vert {\bf c}^{H}(\tilde{\theta})\matr{H}(\theta){\bf f}(\tilde{\theta})\vert^{2}}{\sigma_{\eta}^2}\right)\\ \end{split} \label{eq: rate} \end{equation} where $\theta$ and $\tilde{\theta}$ are the actual and estimated AoD of the UE. The channel is defined as $\matr{H}(\theta)=\alpha {\bf a}_{\msub{RX}}(\theta){\bf a}_{\msub{TX}}^{H}(\theta)e^{j2\pi\nu T_s}$, where $T_s$ is the sampling time. $ T_{\msubl{X}}$ and $T_{\msubl{FR}} \in \{T_F, T_{F'}\}$ denote the intervals in each frame type of Fig.~\ref{fig:frame_strc} defined as: \begin{equation*} T_{\msubl{X}}= \left\{ \begin{array}{ll} T_{\msubl{ACQ}}+T_{\msubl{FB,I}}+T_{\msubl{SP}} & \text{initialization frame} \\ T_{\msubl{SP,FB}}+T_{\msubl{SP}} & \text{secondary frame} \end{array} \right. \end{equation*} Note that the values for the proposed estimator in Fig.~\ref{rate_comparison} are obtained with a combination of $T_{\msubl{X}}$, depending on whether any future values move out of the current frame (in Fig.~\ref{prediction} this is not the case, only the secondary $T_{\msubl{X}}$ is used). However, for the Oracle we use the initial $T_{\msubl{X}}$, since a full BA must take place for the exact AoD to be discovered. The above results have been obtained with $\sigma_{\eta}=1$ and we assume $P=1$. \vspace{-1.7cm} \begin{figure}[h] \centering \scalebox{0.5}{\input{CRv/rates_ERP}} \caption{Comparison of achievable instantaneous rate.} \label{rate_comparison} \end{figure} \begin{table} \centering \caption{RNN Estimation MSE vs. 
Window/Probe Length } \label{table_mse} \begin{tabular}{lSSSS} \toprule \multirow{2}{*}{} & \multicolumn{2}{c}{\# probes $=1$} & \multicolumn{2}{c}{\# probes $=3$} \\ & {$L'$=17} & {$L'$=29} & {$L'$=17} & {$L'$=29} \\ \midrule MSE($^{\circ}$) &0.058&0.051&0.048&0.044 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} In this paper, we proposed a RNN based approach for beam tracking at the BS side. This model is trained based on past channel measurements to predict the amount on alignment correction for the next time frame. Furthermore, this model places no restriction on the mobility on the UE. Based on this model, we proposed a frame structure which can adapted be to channel conditions. Our simulations demonstrate that our proposed scheme outperforms PF both in terms of prediction and communication rates, especially at the high mobility scenarios. \vspace{-0.5cm} \section{Introduction} Millimeter wave (mmWave) communication systems, operating in frequency bands of 30-300 GHz, are considered as a promising technology for 5G and beyond cellular systems to achieve a high data rate thanks to wide frequency bands \cite{6G}. Due to the large isotropic path loss, large antenna arrays are typically deployed at base station (BS) and/or user terminals (UEs) in order to form narrow beams between BS-UE pairs. The classical beam sweeping mechanism is highly inefficient as its complexity increases substantially with the number of antenna arrays and its practical implementation using quantized phases may limit its accuracy. Therefore, a number of low-complexity schemes for initial beam alignment have been proposed in the literature (see e.g. a compressed sensing approach in \cite{X.Song_OSPS} and references therein). However, the existing schemes cannot be adapted directly to the high-mobility scenario, where the displacement of UEs may significantly increase the probability of beam misalignment. 
In such a scenario, the angle of each UE shall be estimated continuously through beam tracking methods (e.g. \cite{BeamTracking,Va, Love, PF} and references therein). For example, \cite{Va, Love, PF} proposed variants of Kalman filter (KF) to address this problem. Since their restricted channel model together with high complexity makes these methods \cite{Va, Love} impractical in high mobility environments, \cite{PF} considered the use of particle filtering (PF) in a time-varying channel and demonstrated an improved performance by modeling the non-linearities of the channel compared to other KF variants. Further, \cite{Palacios} attempted to overcome the computational burden. In this work, by focusing on the beamforming at the BS, we propose a beam tracking approach based on Recurrent Neural Networks (RNN) that can be applied to any arbitrary UE mobility pattern. The proposed method takes advantage of the temporal correlations within measurement data and learns the amount of \textit{adjustment} required for the beam direction from the sequence of measurements by approaching the problem as a classification task. Based on the proposed scheme, we define a modified frame structure with variable length which can be adapted to reduce overhead. The numerical examples in a highly non uniform linear motion scenario using the Quadriga channel generator demonstrate that our proposed scheme can track the AoD accurately and achieve higher communication rate compared to the PF. Finally, we remark that a number of recent works extensively adapted machine/deep learning for beamforming design over the mmWave channels (see e.g. \cite{Lim, Guo,Elbir1, Sim,Huang}). In \cite{Elbir1}, a convolutional neural network (NN) is proposed for an optimal beamformer design. A deep learning framework was proposed for beam selection by using the channel state information (CSI) in \cite{Sim}, while \cite{Liu} demonstrates an auto-encoder/decoder architecture for robust angle estimation. 
Although \cite{Lim, Guo} also addressed beam tracking using machine learning tools, the proposed method differs from these approaches. In \cite{Lim}, only fully connected layers, unable to capture time correlations of input data, are considered. The work of \cite{Guo} aims to infer the correct beam index by predicting (i.e. regression problem) the channel vector under the assumption of a linear motion path. More recently, the work in \cite{Qlearning} applied a Reinforcement Learning principle for UAV beam tracking, where the authors deal with the design of the reward function by introducing thresholds for the reward. These threshold values need to be optimized depending on the operational SNR. In contrast, our model does not require any SNR specific design parameter. While a comparison of the NN based methods is an interesting topic for future work, due to the limited space, we have restricted our comparison with the well assessed and understood PF scheme. \section{System Model} \subsection{Frame Structure and Signaling Scheme}\label{measurement_model} Fig.~\ref{fig:frame_strc} illustrates the proposed frame structure consisting of two different types depending on the operating mode: (a) initialization frame and (b) secondary frame. Frame (a) is used at the initialization of the system. During initial beam alignment (BA) phase, a beam alignment takes place where the BS transmits a number of pilots to sweep the beam space. The BS receives feedback (FB) signal from the UE on the previously transmitted pilots on the uplink (UL) channel, in the \textit{UL-FB} phase, as explained in Section \ref{initial_spectrum}. During the \textit{Data Transmission} phase, the data is transmitted in downlink (DL) channel. The final phase refers to Secondary Probing \textit{SP} phase of variable length dedicated to transmit a reduced number of pilots for further channel probing. 
Frame (b) of length $T_{F'}<T_F$ comes into operation once a reliable connection between a BS-UE pair is established after the initialization frame (a). During the first phase \textit{UL-FB}, the BS receives feedback on the previous probing (SP) from UEs. The second and third phases are identical to that of the initialization frame (a). \begin{figure}[h] \vspace{-0.3cm} \centering \scalebox{.5}{\input{FrameTimeSlots}} \caption{Frame structures depending on operating mode. Note, the UL-FB slot at the beginning receives feedback from the previous SP.} \label{fig:frame_strc} \end{figure} \vspace{-0.3cm} \subsection{Transmission Scheme and Measurement Equation}\label{measurement_model} Suppose the transmitter (BS), with $N_{\msub{TX}}$ uniformly and linearly positioned elements, sends pilot symbols to the receiver receiver (UE), with $N_{\msub{RX}}$ elements over the time-varying MIMO channel at frame $k$ by \begin{align} {\bf H}_k(t, \tau) = \alpha_k{\bf a}_{\msub{RX}}(\varphi_k){\bf a}_{\msub{TX}}^{H}(\theta_k) \delta(t-\tau_k) e^{j2\pi\nu_k t} \end{align} where $\alpha_k, \tau_k, \nu_k$ denotes the attenuation coefficient, the delay, and the Doppler shift, respectively, $\{d_i\}$ is the antenna spacing with $d_1=0$, $\lambda$ is the wavelength, ${\bf a}_{\msub{TX}}(\theta_k)$ is a steering vector at the Tx given by \begin{equation} {\bf a}_{\msub{TX}}(\theta_k) = [ e^{j\frac{2\pi}{\lambda}d_{1}\cos(\theta_k)},... , e^{j\frac{2\pi}{\lambda}d_{N_{\msub{TX}}}\cos(\theta_k)}]^{T}\ \end{equation} for the angle-of-departure (AoD) denoted by $\theta_k$, ${\bf a}_{\msub{RX}}(\varphi_k)$ is defined similarly for the angle-of-arrival (AoA) denoted by $\varphi_k$. Since we consider a LoS path between the UE/BS, we have $\varphi_k=\theta_k$. The model above can be easily extended to multiple paths. The parameters $(\theta_k, \alpha_k, \tau_k, \nu_k)$ remain constant over a frame duration. 
We consider the beamforming vector ${\bf f}$ and the combining vector ${\bf c}$ according to a predefined beam codebook. Namely, by focusing on the beamforming codebook at the BS side, denoted by $\mathcal{CB}=\{{\bf f}_1, \dots, {\bf f}_G\}$, where we let ${\bf f}_i ={\bf f}(\tilde{\theta}_i)= \frac{1}{\sqrt{N_{\msub{TX}}}}{\bf a}_{\msub{TX}} (\tilde{\theta}_i), i\in \{1, \dots, G\}$ for some angles $\tilde{\theta}_i$. Then, the observed signal at the UE for frame $k$ after the combining vector is given by \begin{equation}\label{measurements} z_k(t) = \alpha_k{\bf c}^{H}(\tilde{\varphi}){\bf a}_{\msub{RX}}(\varphi_k){\bf a}_{\msub{TX}}^{H}(\theta_k){\bf f}(\tilde{\theta}) x(t-\tau_k) e^{j2\pi\nu_k t} +\eta(t) \end{equation} where $\eta(t) \sim {\cal N}(0,\sigma_{\eta}^{2} )$ is the additive Gaussian noise. The objective of initial BA is to acquire an accurate AoA estimate $\tilde{\theta}\in \{\tilde{\theta}_1, \dots, \tilde{\theta}_G\}$ so that the corresponding beamforming vector ${\bf f}(\tilde{\theta})$ is used for the DL data transmission. We also assume that the codebooks available for UL and DL operation at the BS are identical. \subsection{Initial Channel State Estimation}\label{initial_spectrum} In order to focus on tracking the UE's angular location at the BS side, we further assume that the UE is equipped with a single antenna. The UE initially listens to the channel during the initial BA period where the BS sends pilots through $T$ probing directions according to the predefined codebook $\mathcal{CB}$. By replacing ${\bf f}(\tilde{\theta})$ with ${\bf f}_i$ for $i=1, \dots, T$ in \eqref{measurements} followed by suitable sampling every $T_{ACQ}/T$, we obtain $T$ discrete observations or measurements, denoted by ${\bf z}= \{z_{\tilde{\theta}_{1}}[1],...,z_{\tilde{\theta}_{T}}[T]\}$, where $z_{\tilde{\theta}_{i}}[i]$ is the received signal when probing in the direction $\tilde{\theta}_{i}$ at the $i^{th}$ sampling instant. 
After this initial BA phase, the UE selects the best beam direction $\tilde{\theta}_m$ corresponding to $\text{max}_{i=1,...,T}(z_{\tilde{\theta}_{i}}[i])$ and feeds back ${\bf z}$ to the BS during the UL-FB phase. \section{Beam Tracking Methods} \subsection{Classical Beam Tracking}\label{Classical} Beam tracking based on Bayesian statistical inference principles has been extensively investigated in the literature \cite{Va, Love, PF}. In this subsection, we provide a brief overview of the PF technique similar to that described in \cite{PF}. The filtering problem consists of estimating the internal states of a dynamical system from partial observations in the presence of random perturbations. The objective is to compute the posterior distributions of the states of a Markov process, given some noisy and/or partial observations. Based on the system model described in (\ref{measurements}), the state is sequentially estimated by using the measurement $z_{\tilde{\theta}_m}[k]$ (for brevity $z_k$) at each time frame. Due to the nonlinear dependency between the state space and the measurements in the problem at hand, the PF is particularly well suited, in contrast to Kalman filtering methods. By letting $\tilde{\theta}[k], z_k $ denote the state and the observation at frame $k$ obtained from (\ref{measurements}) after a suitable sampling\footnote{It is also possible to consider multiple measurements for a given frame.}, we model the transition of the state as $\tilde{\theta}[k] =\tilde{\theta}[k-1] + u_k$, where $u_k\sim {\cal N}(0, \sigma_{\tilde{\theta}}^2)$ is the process noise (i.e. perturbation). We can modify \cite{PF} by focusing on particles (beams) within a reduced region of interest of the beam space. These regions can be selected adaptively; for instance, in the case of static or slow-moving users, the space in proximity of the main beam direction of the previous step will have a denser distribution of particles.
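One predict/update/resample cycle of a bootstrap PF for this scalar random-walk state can be sketched as follows (a numpy sketch under simplifying assumptions: a scalar, noise-free measurement map `measure` stands in for the full model in (\ref{measurements}), and all names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, z, measure, sigma_state=0.01, sigma_eta=0.1):
    """One bootstrap-PF cycle for the state theta[k] = theta[k-1] + u_k."""
    n = len(particles)
    # predict: propagate each particle through the random-walk transition
    particles = particles + rng.normal(0.0, sigma_state, size=n)
    # update: weight by the Gaussian likelihood of the scalar measurement z
    weights = weights * np.exp(-0.5 * ((z - measure(particles)) / sigma_eta) ** 2)
    weights = weights / weights.sum()
    # systematic resampling to fight particle degeneracy
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0  # guard against floating-point round-off
    idx = np.searchsorted(cdf, (rng.random() + np.arange(n)) / n)
    return particles[idx], np.full(n, 1.0 / n)
```

Restricting the initial particle spread to a region around the previous main beam direction implements the adaptive region-of-interest idea mentioned above.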
\subsection{Proposed method with Recurrent Neural Networks} The particle filter discussed previously incurs increased complexity and overhead in the massive MIMO regime and in high mobility channels. In this section we consider a RNN approach for the aforementioned tracking problem by exploiting the time correlation between measurement data. Since the underlying channel parameters such as the AoD, the range, and the Doppler shift in (\ref{measurement_model}) evolve in time with some memory, a recurrent NN architecture seems a viable solution for this problem. In recent years, Long Short-Term Memory (LSTM) networks have been used to build large recurrent networks that address complicated sequence problems in machine learning and achieve state-of-the-art results. The LSTM network is trained using backpropagation through time and thereby overcomes the vanishing gradient problem. LSTM networks have memory blocks, connected through layers, that take the place of neurons. A block contains gates that manage the block's state and output, thus developing a memory state for recent sequences. A block operates upon an input sequence, and each gate within a block uses sigmoid activation units to control whether it is triggered, making the change of state and the addition of information flowing through the block conditional. \noindent The input data sequence of interest consists of the observed signal values $z_k$ at frame $k$. These input values, referred to as input features, are used to train the network to output estimates of the UE's future AoD under a supervised learning framework. The input features of the NN are generated based on the \textit{windowed} input method, where features corresponding to the previous time steps are inputs to the current step. Since the measurements in (\ref{measurements}) are equivalent to the beam space representation of the channel at discrete angles, they can be viewed as a \textit{pseudo-spectrum}.
On this basis, we define our input features using a \textit{sliding window} technique consisting of these values. As the initial feature set, we take the pseudo-spectrum $\mathcal{S} \equiv {\bf z}$ described in Section \ref{initial_spectrum} and define the window length parameter $L$ such that $L$ bins around the current main beam direction $\tilde{\theta}_m[k]$ are selected at each time step (a total of $L'=2L+1$ bins, as in Fig. \ref{fig:sliding_wind}). These inputs are updated at every measurement frame. With reference to Fig.~\ref{fig:frame_strc}, the updates are made at interval $T_{\msubl{SP,FB}}$, which relates to the secondary probe transmitted during interval $T_{\msubl{SP}}$ of the previous frame. The number of directions to be scanned during the secondary probing can be adaptively selected, leading to a trade-off between overhead and improved channel exploration. If the user has moved out of the span of the current window at the next time instant (this can happen if, for example, the window length is very small or the grid points are very fine), the initial BA shall be performed using the initialization frame (a), and the label generated for the current time frame will be the last element of the label vector (i.e. farthest from the current position) to minimize the distance to the next user position (note that this will only happen during inference mode). An exemplary graphical representation of this input is depicted in Fig. \ref{fig:input_feature}. The labels generated for the supervised learning task consist of a \textit{one-hot-encoded} vector with the same dimension as the sliding window, such that the vector contains all zeros except at the position which corresponds to the next angular position (i.e. AoD) of the UE. The network outputs a value between '0' and '1' for each position in the window, which can be interpreted as the probability of the target AoD in the next instant.
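The one-hot label construction, including the edge case of a UE leaving the window, can be sketched in plain Python as follows (the function name and signature are our own; it mirrors the clipping rule used in our feature/label generation algorithm):

```python
def window_label(d_theta, L):
    """One-hot label over a window of 2L+1 bins. The '1' marks the grid
    offset d_theta (in grid points) of the next AoD relative to the
    current main beam; offsets beyond the window saturate at its edge."""
    label = [0] * (2 * L + 1)
    if abs(d_theta) <= L:
        label[L + d_theta] = 1  # position L+1+d_theta in 1-based notation
    else:
        # UE left the window: place the '1' at the nearest edge bin
        label[L + (L if d_theta > 0 else -L)] = 1
    return label
```

For example, `window_label(0, L)` puts the `1` in the center bin, meaning the UE keeps its current beam.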
The significance of this technique is that the NN output becomes invariant to the value of the AoD itself; rather, the predicted value is the amount of correction (in discrete grid points) needed for the next beam. This classification-type solution is an alternative to works attempting to regress values \cite{Guo}. It is well known that for limited data inputs and training, classification outperforms regression. It is also worth noting that by defining the problem as a classification task, we directly bypass the problem of dealing with ambiguities within the regressed values (e.g. regressing $370^{\circ}$ versus $10^{\circ}$ for a desired value of $10^{\circ}$). \begin{figure}[h] \centering \scalebox{.45}{\input{schematicRNN_sliding_window}} \caption{Schematic of the sliding window and the corresponding label vector. Here the 1 indicates that the UE moves one grid point (discrete angle from grid) in the next time frame. } \label{fig:sliding_wind} \end{figure} \begin{algorithm} \algsetup{linenosize=\small} \scriptsize \SetKwInOut{Parameter}{parameter} \SetKwData{Left}{left} \SetKwData{This}{this} \SetKwData{Up}{up} \SetKwFunction{Union}{Union} \SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{measurements $z_k$ and $\tilde{\theta}_m[k]$ as in (\ref{measurements}) for time frames $k \in \{0,1,..., K\}$ and sliding window size $2L+1$} \Output{feature set $\mathcal{F}$, label set $\mathcal{L}$} \BlankLine initialization\; set $\mathcal{W}=\mathcal{S}$ and $\mathcal{F}_k=[w_{k-L},\ldots,w_{k+L}]$\ \While{$k \leq K-1$}{ insert $z(\tilde{\theta}_m[k])$ at $m^{th}$ position of $\mathcal{W}$ \\ center sliding window of length 2$L+1$ at position $m$\\ acquire $\tilde{\theta}_{m}[k+1]$ and calculate $d_{\tilde{\theta}} = \tilde{\theta}_{m}[k+1] - \tilde{\theta}_m[k] $ \; \eIf{$\vert d_{\tilde{\theta}} \vert\leq L $}{ set position $(L+1+d_{\tilde{\theta}})$ of $\mathcal{L}_k$ to 1 and the rest 0\; }{ set position $(L+1+
\text{sgn}(d_{\tilde{\theta}})L)$ of $\mathcal{L}_k$ to 1 and the rest 0\; } } \caption{Algorithm for feature and label generation from dataset. $m$ denotes the index of the main beam.} \label{alg: feature_gen} \end{algorithm} \begin{figure} \vspace{-0.5cm} \centering \resizebox{1.25\linewidth}{!}{\input{plotRNNfeatureSet.tex}} \caption{Graphical representation of the input features for the RNN in a dynamic scenario where the UE has a cyclic motion. The window length parameter $L$ is set to 8 for a total of 90 time frames. } \label{fig:input_feature} \end{figure} \subsection{Neural Network Architecture} The recurrent network architecture is depicted in Fig.~\ref{RNN_architecture} consisting of two LSTM layers followed by a fully connected layer before the final classification layer. Here we have used a \textit{bi-directional LSTM} (biLSTM) layer in order to preserve the information contained within the time series from the future samples. For conciseness we refrain from discussing different recurrent implementations here. An important remark here is that as the final layer of the classifier we have chosen the sigmoid activation function as opposed to a softmax due to the fact that when angular grids of the labels are smaller than the beam width of the array, it is possible that a certain beam can cover multiple grid points. \begin{figure}[h] \centering \scalebox{.45}{\input{schematicRNN}} \caption{RNN architecture with the LSTM layer directly fed by the input data. Note that it is also possible to replace the sigmoid activation at the final layer with a softmax layer.} \label{RNN_architecture} \end{figure} \section{Simulations and Numerical Results} \label{simulations} In this section we use the Fraunhofer Quadriga \cite{quadriga} channel generator with a mobile UE to train a network to predict the UE's AoD. The parameters generated by Quadriga at each time instance include channel coefficients, delay values and position coordinates (trajectory) among others. 
In the following we assume a ULA at the BS with $N_{\msubl{TX}}=64$. Since this ULA is not available in Quadriga, we have self-defined a 64-element ULA to obtain the channel parameters. These values are then used to simultaneously generate multiple realizations of the measurement values for the NN and state variables for the PF along the same trajectory. As can be seen in Fig.~\ref{quadriga_traj}, the UE follows a trajectory with variable speed along a circular path (start: S) followed by a linear path. The circular path can be a representation of a likely scenario for a BS installed at the corner of an urban round-about. The significant challenge of a circular path is the non-linearity in AoD values with time. The dataset contains 26 realizations of the same trajectory, where in each case the parameters vary. \begin{figure}[h] \vspace{-5cm} \centering \scalebox{.5}{\input{CRv/plotQuadrigaTrajectoriesERP}} \vspace{-1cm} \hspace{1.5cm} \caption{Quadriga simulated trajectory of a mobile UE. The green markings on the trajectory imply that the linear speed of the UE changes from there onward. The notation BERLIN-UMa-LOS describes the scenario defined by the Quadriga software to generate the coefficients (in this case a LoS environment in Berlin, Germany).} \label{quadriga_traj} \end{figure} Fig.~\ref{prediction} shows the performance of the PF compared to the proposed method. The number of particles is set to 100 as in \cite{PF} and the other parameters have been accommodated correspondingly to the array size. The \textit{Oracle} denotes the real AoD; \textit{GT} denotes the Ground Truth value, which is the rounded (within 1 degree) value of the corresponding real value, included in the plots to signify that the network has been trained to predict up to a pre-chosen \textit{gridded} accuracy. Fig. \ref{prediction} shows the estimated AoD values associated with the UE along the marked circular section (A-D) of the trajectory from Fig.~\ref{quadriga_traj}.
A few remarks are in order. 1) The section of the path used for inference has not been seen by the network during training and validation. 2) When the AoD progression becomes highly nonlinear along the path, the PF deviates significantly from the true value. This could be attributed to the channel model considered in \cite{PF}, which ignores the Doppler-induced time-variance of the channel. Additionally, even though the PF outperforms other EKF variants, the particles are not fully able to capture the channel dynamics. For both estimators, at \textit{each} time index the reported values are the average estimated value (binned avg. for RNN) over 26 independent runs. In Tab.~\ref{table_mse}, the MSE of the RNN-estimated angles for various window lengths ($L'$) and probe lengths is shown. It is observed that with a larger window (more memory) and a larger number of probes, the estimation error decreases. \begin{figure*}[t] \centering \begin{subfigure}[h]{0.3\linewidth} \scalebox{0.3}{\input{CRv/predictions_ERP_1}} \end{subfigure} \begin{subfigure}[h]{0.3\linewidth} \hspace{0.5cm} \scalebox{0.3}{\input{CRv/predictions_ERP_2}} \end{subfigure} \begin{subfigure}[h]{0.3\linewidth} \scalebox{0.3}{\input{CRv/predictions_ERP_3}} \end{subfigure} \vspace*{-1.0cm} \caption{The performance of the PF compared to the proposed method. The overall MSE is reported in Tab.~\ref{table_mse} for other training window lengths and transmitted number of secondary probes. The results are segmented into multiple plots for better visualization.} \label{prediction} \end{figure*} In Fig.~\ref{rate_comparison}, a comparison of the achievable rate between the proposed RNN scheme, the PF from \cite{PF} and an Oracle estimator is provided. The reported values correspond to the predictions from Fig.~\ref{prediction}. The plotted values at each time index are averaged over 26 runs.
It is worth mentioning that the change in the Oracle rate occurs as a result of the UE moving closer to the BS, leading to a higher received signal power. The achievable rate is calculated according to: \begin{equation} \begin{split} r(\tilde{\theta},\theta)&=\left(1-\frac{T_{\msubl{X}}}{T_{\msubl{FR}}}\right) \cdot \log_2 \left(1+\frac{P\vert {\bf c}^{H}(\tilde{\theta})\matr{H}(\theta){\bf f}(\tilde{\theta})\vert^{2}}{\sigma_{\eta}^2}\right)\\ \end{split} \label{eq: rate} \end{equation} where $\theta$ and $\tilde{\theta}$ are the actual and estimated AoD of the UE. The channel is defined as $\matr{H}(\theta)=\alpha {\bf a}_{\msub{RX}}(\theta){\bf a}_{\msub{TX}}^{H}(\theta)e^{j2\pi\nu T_s}$, where $T_s$ is the sampling time. $ T_{\msubl{X}}$ and $T_{\msubl{FR}} \in \{T_F, T_{F'}\}$ denote the intervals in each frame type of Fig.~\ref{fig:frame_strc} defined as: \begin{equation*} T_{\msubl{X}}= \left\{ \begin{array}{ll} T_{\msubl{ACQ}}+T_{\msubl{FB,I}}+T_{\msubl{SP}} & \text{initialization frame} \\ T_{\msubl{FD,SP}}+T_{\msubl{SP}} & \text{secondary frame} \end{array} \right. \end{equation*} Note that the values for the proposed estimator in Fig.~\ref{rate_comparison} are obtained with a combination of $T_{\msubl{X}}$, depending on whether any future values move out of the current frame (in Fig.~\ref{prediction} this is not the case; only the secondary $T_{\msubl{X}}$ is used). However, for the Oracle we use the initial $T_{\msubl{X}}$ since a full BA must take place for the exact AoD to be discovered. The above results have been obtained with $\sigma_{\eta}=1$ and we assume $P=1$. \vspace{-1.7cm} \begin{figure}[h] \centering \scalebox{0.5}{\input{CRv/rates_ERP}} \caption{Comparison of achievable instantaneous rate.} \label{rate_comparison} \end{figure} \begin{table} \centering \caption{RNN Estimation MSE vs.
Window/Probe Length } \label{table_mse} \begin{tabular}{lSSSS} \toprule \multirow{2}{*}{} & \multicolumn{2}{c}{\# probes $=1$} & \multicolumn{2}{c}{\# probes $=3$} \\ & {$L'$=17} & {$L'$=29} & {$L'$=17} & {$L'$=29} \\ \midrule MSE($^{\circ}$) &0.058&0.051&0.048&0.044 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} In this paper, we proposed a RNN based approach for beam tracking at the BS side. The model is trained on past channel measurements to predict the amount of alignment correction for the next time frame. Furthermore, the model places no restriction on the mobility of the UE. Based on this model, we proposed a frame structure which can be adapted to channel conditions. Our simulations demonstrate that the proposed scheme outperforms the PF in terms of both prediction accuracy and communication rate, especially in high mobility scenarios. \vspace{-0.5cm}
\section{Introduction} \label{sect:intro} Suppose that we observe $(x_1, y_1), \ldots, (x_n, y_n)$ where $x_i=(x_{i1}, \ldots, x_{ip})^t$ is a $p$-dimensional predictor and $y_i$ is the response variable. We consider a standard linear model for each of $n$ observations $$ y _i= \sum_{j=1}^p \beta_j x_{ij} +\epsilon_i, \mbox{ for } i =1, \ldots, n, $$ with E$(\epsilon_i)=0$ and Var$(\epsilon_i) =\sigma^2$. We also assume the predictors are standardized and the response variable is centered, $$ \sum_{i=1}^n y_i =0, \quad \sum_{i=1}^n x_{ij} = 0 \mbox{ and } \sum_{i=1}^n x_{ij}^2 =1 \quad \mbox{ for } j =1, \ldots, p. $$ With the dramatic increase in the amount of data collected in many fields comes a corresponding increase in the number of predictors $p$ available in data analyses. For simpler interpretation of the underlying processes generating the data, it is often desired to have a relatively parsimonious model. It is often a challenge to identify important predictors out of the many that are available. This becomes more so when the predictors are correlated. As a motivating example, consider a study involving near infrared (NIR) spectroscopy data measurements of cookie dough \citep{Osborne1984}. Near infrared reflectance spectral measurements were made at 700 wavelengths from 1100 to 2498 nanometers (nm) in steps of 2nm for each of 72 cookie doughs made with a standard recipe. The study aims to predict dough chemical composition using the spectral characteristics of NIR reflectance wavelength measurements. Here, the number of wavelengths $p$ is much bigger than the sample size $n$. Many methods have been developed to address this issue of high dimensionality. Section \ref{sect:review} contains a brief review. Most of these methods involve minimizing an objective function, like the negative log-likelihood, subject to certain constraints, and the methods in Section \ref{sect:review} mainly differ in the constraints used. 
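The centering and scaling conventions above can be written as a short numerical routine (a minimal numpy sketch; the function name and interface are our own):

```python
import numpy as np

def standardize(X, y):
    """Center y and each column of X, and scale each column of X to unit
    Euclidean norm, so that sum_i y_i = 0, sum_i x_ij = 0, sum_i x_ij^2 = 1."""
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    Xs = Xc / np.linalg.norm(Xc, axis=0)
    return Xs, yc
```

With this convention, the inner product $x_k^T x_l$ of two standardized columns is the sample correlation $\rho_{kl}$ used later in Theorem 1.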
In this paper, we propose a variable selection procedure that can cluster predictors using the positive correlation structure and is also applicable to data with $p >n$. The constraints we use balance between an $L_1$ norm of the coefficients and an $L_1$ norm for pairwise differences of the coefficients. We call this procedure a {\em Hexagonal Operator for Regression with Shrinkage and Equality Selection}, HORSES for short, because the constraint region can be represented by a hexagon. The hexagonal shape of the constraint region focuses selection of groups of predictors that are positively correlated. The goal is to obtain a homogeneous subgroup structure within the high dimensional predictor space. This grouping is done by focusing on spatial and/or positive correlation in the predictors, similar to supervised clustering. The benefits of our procedure are a combination of variance reduction and higher predictive power. The remainder of the paper is organized as follows. We introduce the HORSES procedure and its geometric interpretation in Section \ref{sect:model}. We provide an overview of some other methods in Section \ref{sect:review}, relating our procedure with some of these methods. In Section \ref{sect:compute} we describe the computational algorithm that we constructed to apply HORSES to data and address the issue of selection of the tuning parameters. A simulation study is presented in Section \ref{sect:simstudy}. Two data analyses using HORSES are presented in Section \ref{sect:analysis}. We conclude the paper with discussion in Section \ref{sect:conclusion}. \section{Model} \label{sect:model} In this section we describe our method for variable selection for regression with {\it positively} correlated predictors. Our penalty terms involve a linear combination of an $L_1$ penalty for the coefficients and another $L_1$ penalty for pairwise differences of coefficients. Computation is done by solving a constrained least-squares problem. 
Specifically, estimates for the HORSES procedure are obtained by solving \begin{eqnarray} \hat \beta &=& \argmin_{\beta} \|y - \sum_{j=1}^p \beta_j x_j \|^2 \mbox { subject to} \nonumber \\ & & \alpha \sum_{j=1}^p | \beta_j| + (1-\alpha)\sum_{j < k} | \beta_j - \beta_k| \le t, \label{eqn:horses} \end{eqnarray} with $d^{-1} \le \alpha \le 1$, where $d$ is a thresholding parameter. \begin{figure}[t] \centering \subfigure[Elastic Net]{ \includegraphics[scale=0.25,angle=0]{./Elasticnet.ps} \label{fig:elastic} } \subfigure[OSCAR]{ \includegraphics[scale=0.25, angle=0]{./OSCAR.ps} \label{fig:oscar1} } \subfigure[HORSES]{ \includegraphics[scale=0.25, angle=0]{./HORSES.ps} \label{fig:horses1} } \caption{Graphical representation of the constraint region in the $(\beta_1, \beta_2)$ plane for \subref{fig:elastic} Elastic Net, \subref{fig:oscar1} OSCAR, and \subref{fig:horses1} HORSES.} \label{fig:region1} \end{figure} As we describe in Section \ref{sect:review}, some methods like Elastic Net and OSCAR can group correlated predictors, but they can also put negatively correlated predictors into the same group. Our method's novelty is its grouping of {\it positively} correlated predictors in addition to achieving a sparse solution. Figure \ref{fig:region1}(c) shows the hexagonal shape of the constraint region induced by (\ref{eqn:horses}), showing schematically the tendency of the procedure to equalize coefficients only in the direction of $y=x$. The lower bound $d^{-1}$ of $\alpha$ prevents the estimates from being a solution only via the second penalty function, so the HORSES method always achieves sparsity. We recommend $d=\sqrt{p}$, where $p$ is the number of predictors. This ensures that the constraint parameter region lies between that of the $L_1$ norm and of the Elastic Net method, i.e.\ the set of possible estimates for the HORSES procedure is a subset of that of Elastic Net. In other words, HORSES accounts for positive correlations up to the level of Elastic Net.
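For concreteness, the Lagrangian form of this criterion (used for computation in Section \ref{sect:compute}) can be evaluated as follows (a numpy sketch of the objective only, not a solver; the function name is ours):

```python
import numpy as np

def horses_objective(beta, X, y, lam, alpha):
    """Lagrangian HORSES criterion: 0.5*||y - X beta||^2
    + lam*alpha*sum_j |beta_j| + lam*(1-alpha)*sum_{j<k} |beta_j - beta_k|."""
    rss = 0.5 * np.sum((y - X @ beta) ** 2)
    l1 = np.sum(np.abs(beta))
    # sum over ordered pairs, halved, gives the sum over j < k
    pairwise = np.sum(np.abs(beta[:, None] - beta[None, :])) / 2.0
    return rss + lam * (alpha * l1 + (1 - alpha) * pairwise)
```

Note that a coefficient vector with equal entries incurs no fusion penalty at all, which is exactly the grouping behavior the hexagonal constraint region encourages.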
With $d=p$, the HORSES parameter region lies within that of the OSCAR method. \begin{figure}[t] \label{fig:likelihood} \centering \subfigure[]{ \includegraphics[width=1in,angle=0]{./OSCAR_c.ps} \label{fig:oscar2} } \subfigure[]{ \includegraphics[width=1in, angle=0]{./OSCAR_b.ps} \label{fig:oscar3} } \subfigure[]{ \includegraphics[width=1in,angle=0]{./HORSES_c.ps} \label{fig:horses2} } \subfigure[]{ \includegraphics[width=1in, angle=0]{./HORSES_b.ps} \label{fig:horses3} } \caption{ Graphical representation in the $(\beta_1, \beta_2)$ plane. HORSES solutions are the first time the contours of the sum of squares function hit the hexagonal constraint region. \subref{fig:horses2} Contours centered at OLS estimate with a negative correlation. Solution occurs at $\hat \beta_1 =0$; \subref{fig:horses3} Contours centered at OLS estimate with a positive correlation. Solution occurs at $\hat \beta_1 =\hat \beta_2$.} \end{figure} In a graphical representation in the $(\beta_1, \beta_2)$ plane, the solution is the first time the contours of the sum of squares loss function hit the constraint regions. Figure 2 gives a schematic view. Figure \ref{fig:horses2} shows the solution for HORSES when there is negative correlation between predictors. HORSES treats them separately by making $\hat\beta_1 = 0$. On the other hand, HORSES yields $\hat\beta_1 = \hat\beta_2$ when predictors are positively correlated, as in Figure \ref{fig:horses3}. The following theorem shows that HORSES has the {exact} grouping property. As the correlation between two predictors increases, the predictors are more likely to be grouped together. Our proof follows closely the proof of Theorem 1 in \citet{Bondell08} and is hence relegated to an Appendix. \begin{Thm} Let $\lambda_1= \lambda \alpha$ and $\lambda_2=\lambda ( 1- \alpha)$ be the two tuning parameters in the HORSES criterion. 
Given data $\big({ y}, { X} \big)$ with centered response ${y}$ and standardized predictors ${X}=(x_1, \ldots, x_p)^t$, let $\widehat{\beta} \big(\lambda_1,\lambda_2\big)$ be the HORSES estimate using the tuning parameters $\big(\lambda_1,\lambda_2\big)$. Let $\rho_{kl} ={x}_k^{\rm T} {x}_l$ be the sample correlation between covariates $x_k$ and $x_l$. For a given pair of predictors ${x}_k$ and ${x}_l$, suppose that both $\widehat{\beta}_k (\lambda_1,\lambda_2)$ and $\widehat{\beta}_l(\lambda_1,\lambda_2)$ are distinct from the other $\widehat{\beta}_m$. Then there exists $\lambda_0 \ge 0$ such that if $\lambda > \lambda_0$ then \begin{equation} \nonumber \widehat{\beta}_k \big(\lambda_1,\lambda_2\big) = \widehat{\beta}_l \big( \lambda_1,\lambda_2\big), \quad \mbox{for all $\alpha \in [d^{-1} ,1]$.} \end{equation} Furthermore, it must be that \begin{equation} \nonumber \lambda_0 \le \|{ y}\| \sqrt{2 (1 -\rho_{kl} )} \big/ (1- \alpha). \end{equation} \end{Thm} \medskip The strength with which the predictors are grouped is controlled by $\lambda_2$. If $\lambda_2$ is increased, any two coefficients are more likely to be equal. When $x_i$ and $x_j$ are positively correlated, Theorem 1 implies that predictors $i$ and $j$ will be grouped and their coefficient estimates almost identical. \section{Related work} \label{sect:review} This brief review cannot do justice to the many variable selection methods that have been developed. We highlight several of them, especially those that have links to our HORSES procedure. While variable selection in regression is an increasingly important problem, it is also very challenging, particularly when there is a large number of highly correlated predictors. Since the important contribution of the least absolute shrinkage and selection operator (LASSO) method by \citet{Tibshirani96}, many other methods based on regularized or penalized regression have been proposed for parsimonious model selection, particularly in high dimensions, e.g.
Elastic Net, Fused LASSO, OSCAR and Group Pursuit methods \citep{Zou05, Tibshirani05, Bondell08, Shen10}. Briefly, these methods involve penalization to fit a model to data, resulting in shrinkage of the estimators. Many methods have focused on addressing various possible shortcomings of the LASSO method, for instance when there is dependence or collinearity between predictors. In the LASSO, a bound is imposed on the sum of the absolute values of the coefficients: \begin{eqnarray*} \hat \beta &=& \argmin_{\beta} \|y - \sum_{j=1}^p \beta_j x_j \|^2 \mbox { subject to} \sum_{j=1}^p | \beta_j| \le t, \end{eqnarray*} where $y = (y_1, \dots, y_n)$ and $x_j = (x_{1j}, \dots, x_{nj})$. The LASSO method is a shrinkage method like ridge regression \citep{Hoerl70}, with automatic variable selection. Due to the nature of the $L_1$ penalty term, LASSO shrinks each coefficient and selects variables simultaneously. However, a major drawback of LASSO is that if there exists collinearity among a subset of the predictors, it usually only selects one to represent the entire collinear group. Furthermore, LASSO cannot select more than $n$ variables when $p >n$. One possible approach is to cluster predictors based on the correlation structure and to use averages of the predictors in each cluster as new predictors. \citet{Park07} used this approach for gene expression data analysis and introduced the concept of a {\it super gene}. However, it is sometimes desirable to keep all relevant predictors separate while achieving better predictive performance, rather than to use an average of the predictors. The hierarchical clustering used in \citet{Park07} for grouping does not account for the correlation structure of the predictors. Other penalized regression methods have also been proposed for grouped predictors \citep{Bondell08, Tibshirani05, Zou05, Shen10}.
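LASSO's tendency to pick a single representative from a collinear group can be seen directly in a toy coordinate-descent implementation (a numpy sketch for illustration, not the algorithm of any cited paper; names are ours, and $X$ is assumed standardized so that $x_j^T x_j = 1$):

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator sign(z) * (|z| - g)_+ ."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO for unit-norm columns of X."""
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding predictor j
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam)  # since x_j^T x_j = 1
    return beta
```

Running it on two identical columns shows the drawback described above: all weight lands on the first copy and the duplicate is zeroed out.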
All these methods except Group Pursuit work by introducing a new penalty term in addition to the $L_1$ penalty term of LASSO to account for correlation structure. For example, based on the fact that ridge regression tends to shrink the correlated predictors toward each other, Elastic Net \citep{Zou05} uses a linear combination of ridge and LASSO penalties for group predictor selection and can be computed by solving the following constrained least squares optimization problem, \begin{eqnarray*} \hat \beta &=& \argmin_{\beta} ||y - \sum_{j=1}^p \beta_j x_j ||^2 \mbox { subject to} \quad \alpha \sum_{j=1}^p | \beta_j| + (1-\alpha) \sum_{j =1}^p \beta_j^2 \le t. \end{eqnarray*} The second term forces highly correlated predictors to be averaged while the first term leads to a sparse solution of these averaged predictors. \citet{Bondell08} proposed OSCAR (Octagonal Shrinkage and Clustering Algorithm for Regression), which is defined by \begin{eqnarray*} \hat \beta &=& \argmin_{\beta} ||y - \sum_{j=1}^p \beta_j x_j||^2 \mbox { subject to} \quad \sum_{j=1}^p | \beta_j| + c \sum_{j < k}^p \max\{| \beta_j |, |\beta_k| \}\le t. \end{eqnarray*} By using a pairwise $L_{\infty}$ norm as the second penalty term, OSCAR encourages equality of coefficients. The constraint region for the OSCAR procedure is represented by an octagon (see Figure \ref{fig:region1}\subref{fig:oscar1}). Unlike the hexagonal shape of the HORSES procedure, the octagonal shape of the constraint region allows for grouping of negatively as well as positively correlated predictors. While this is not necessarily an undesirable property, there may be instances when a separation of positively and negatively correlated predictors is preferred. Unlike Elastic Net and OSCAR, Fused LASSO \citep{Tibshirani05} was introduced to account for {\it spatial} correlation of predictors. A key assumption in Fused LASSO is that the predictors have a certain type of ordering. 
Fused LASSO solves \begin{eqnarray*} \hat \beta &=& \argmin_{\beta} ||y - \sum_{j=1}^p \beta_j x_j||^2 \mbox { subject to} \quad \sum_{j=1}^p | \beta_j| \le t_1 \mbox{ and } \sum_{j =2}^p | \beta_j - \beta_{j-1}| \le t_2. \end{eqnarray*} The second constraint, called a {\it fusion penalty}, encourages sparsity in the differences of coefficients. The method can theoretically be extended to multivariate data, although with a corresponding increase in computational requirements. Note that the Fused LASSO signal approximator (FLSA) in \citet{Friedman07} can be considered as a special case of HORSES with design matrix $X=I$. We also want to point out that our penalty function is a convex combination of the $L_1$ norm of the coefficients and the $L_1$ norm of the pairwise differences of coefficients. Therefore, it is not a straightforward extension of Fused LASSO, in which each penalty function is constrained separately. \citet{She10} extended Fused LASSO by considering all possible pairwise differences and called it Clustered LASSO. However, the constraint region of Clustered LASSO does not have a hexagonal shape. As a result, Clustered LASSO does not have the {\it exact} grouping property of OSCAR. Consequently, \citet{She10} suggested to use a data-augmentation modification such as Elastic Net to achieve exact grouping. Finally, the Group Pursuit method of \citet{Shen10} is a kind of supervised clustering. With a regularization parameter $t$ and a threshold parameter $\lambda_2$, they define $$ G(z) = \left\{ \begin{array}{rl} \lambda_2 & \mbox{ if $|z| > \lambda_2$} \\ |z| &\mbox{ otherwise,} \end{array} \right. $$ and estimate $\beta$ using \begin{eqnarray*} \hat \beta &=& \argmin_{\beta} \| y - \sum_{j=1}^p \beta_j x_j \|^2 \mbox { subject to} \quad \sum_{j < k }^p G ( \beta_j - \beta_{k}) \le t. \end{eqnarray*} HORSES is a hybrid of the Group Pursuit and Fused LASSO methods and addresses some limitations of the various methods described above.
For example, OSCAR cannot handle high-dimensional data while Elastic Net does not have the exact grouping property. \section{Computation and Tuning} \label{sect:compute} A crucial component of any variable selection procedure is an efficient algorithm for its implementation. In this section we describe how we developed such an algorithm for the HORSES procedure. The Matlab code for this algorithm is available upon request. We also discuss here the choice of optimal tuning parameters for the algorithm. \subsection{Computation} Solving the equations for the HORSES procedure (\ref{eqn:horses}) is equivalent to solving its Lagrangian counterpart \begin{equation} \label{lagr_obj} f(\beta) = \frac{1}{2} \|y - \sum_{j=1}^p \beta_j x_j \|^2 + \lambda_1 \sum_{j=1}^p |\beta_j| + \lambda_2 \sum_{j<k} |\beta_j - \beta_k|, \end{equation} where $\lambda_1 = \lambda \alpha$ and $\lambda_2 = \lambda(1-\alpha)$ with $\lambda>0$. To solve (\ref{lagr_obj}) and obtain estimates for the HORSES procedure, we modify the pathwise coordinate descent algorithm of \citet{Friedman07}. The pathwise coordinate descent algorithm is an adaptation of the coordinate-wise descent algorithm for solving the 2-dimensional Fused LASSO problem with a non-separable penalty (objective) function. Our extension involves modifying the pathwise coordinate descent algorithm to solve the regression problem with a fusion penalty. As shown in \citet{Friedman07}, the proposed algorithm is much faster than a general quadratic program solver. Furthermore, it allows the HORSES procedure to run in situations where $p > n$. Our modified pathwise coordinate descent algorithm has two steps, the descent and the fusion steps. In the descent step, we run an ordinary coordinate-wise descent procedure to sequentially update each parameter $\beta_k$ given the others. The fusion step is considered when the descent step fails to improve the objective function.
In the fusion step, we add an equality constraint on pairs of $\beta_k$s to take into account potential fusions and do the descent step along with the constraint. In other words, the fusion step moves given pairs of parameters together under equality constraints to improve the objective function. The details of the algorithm are as follows: \begin{itemize} \item Descent step: The derivative of (\ref{lagr_obj}) with respect to $\beta_k$ given $\beta_j=\tilde{\beta}_j$, $j \neq k$, is \begin{eqnarray} \frac{\partial f(\beta)}{\partial \beta_k} &=& x_k^T x_k \beta_k - \big(y - \sum_{j \neq k } \tilde{\beta}_j x_j \big)^T x_k \nonumber \\ && \quad + \lambda_1 sgn(\beta_k) + \lambda_2 \sum_{j=1}^{k-1} sgn(\tilde{\beta}_j - \beta_k) + \lambda_2 \sum_{j=k+1}^{p} sgn(\beta_k - \tilde{\beta}_j ), \label{eqn:deriv_obj} \end{eqnarray} where the $\tilde{\beta}_j$'s are current estimates of the $\beta_j$'s and $sgn(x)$ is a subgradient of $|x|$. The derivative (\ref{eqn:deriv_obj}) is piecewise linear in $\beta_k$, with breaks at $\{ 0, \tilde{\beta}_j, j \neq k\}$; it is linear in $\beta_k$ whenever $\beta_k \notin \{ 0, \tilde{\beta}_j, j \neq k\}$. \begin{itemize} \item If there exists a solution to $\big( \partial f(\beta) \big/ \partial \beta_k \big) = 0$, we can find an interval $(c_1, c_2)$ which contains it, and further show that the solution is \begin{eqnarray} \tilde{\beta}_k &=& sgn\bigg\{ \tilde{y}^T x_k - \lambda_2 (\sum_{j<k} s_{jk} + \sum_{j>k} s_{kj}) \bigg\} \nonumber\\ && \qquad \times \frac{ \Big( \big| \tilde{y}^T x_k - \lambda_2 (\sum_{j<k} s_{jk} + \sum_{j>k} s_{kj}) \big| - \lambda_1 \Big)_{+} }{x_k^T x_k}, \nonumber \end{eqnarray} where $\tilde{y} = y - \sum_{j \neq k } \tilde{\beta}_j x_j$, and $s_{jk} = sgn(\tilde{\beta}_j - \frac{c_1+c_2}{2})$.
\item If there is no solution to $\big( \partial f(\beta) \big/ \partial \beta_k \big) = 0$, we let \begin{equation} \nonumber \tilde{\beta}_k = \left\{ \begin{array}{ll} \tilde{\beta}_l &\quad \mbox{if }~~ f(\tilde{\beta}_l) = \min \big\{ f(0), f(\tilde{\beta}_j ), \mbox{ for }j \neq k \big\} \\ 0 &\quad \mbox{if }~~ f(0) \le f(\tilde{\beta}_j ), \mbox{ for every }j \neq k. \end{array} \right. \end{equation} \end{itemize} \item Fusion step: If the descent step fails to improve the objective function $f(\beta)$, we consider the fusion of pairs of $\beta_k$s. For every single pair $(k,l), l \neq k$, we consider the equality constraint $\beta_k = \beta_l = \gamma$ and try a descent move in $\gamma$. The derivative of (\ref{lagr_obj}) with respect to $\gamma$ becomes \begin{equation} \nonumber \begin{array}{lll} \frac{\partial f(\beta)}{\partial \gamma} &=& (x_k^T x_k + x_l^T x_l) \gamma - \tilde{y}^T (x_k + x_l)+ 2 \lambda_1 sgn(\gamma)\\ & & \qquad + 2 \lambda_2 \sum_{j< k,l} sgn(\tilde{\beta}_j - \gamma) + 2\lambda_2 \sum_{j>k,l} sgn(\gamma - \tilde{\beta}_j ), \end{array} \end{equation} where $\tilde{y} = y - \sum_{j \neq k,l } \tilde{\beta}_j x_j$. { If the optimal value of $\gamma$ obtained from the descent step improves the objective function, we accept the move $\beta_k = \beta_l = \gamma$.} \end{itemize} \subsection{Choice of Tuning Parameters} Estimation of the tuning parameters $\alpha$ and $t$ used in the algorithm above is very important for its successful implementation, as it is for the other methods of penalized regression. Several methods have been proposed in the literature, and any of these can be used to tune the parameters of the HORSES procedure. $K$-fold cross-validation (CV) randomly divides the data into $K$ roughly equally sized and disjoint subsets $D_k$, $k=1,\ldots,K$; $\bigcup_{k=1}^K D_k =\{1,2,\ldots,n \}$. 
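The random partition into $K$ folds just described can be sketched as follows; this is a minimal numpy illustration with a hypothetical helper name, not part of our Matlab implementation.

```python
import numpy as np

def kfold_indices(n, K, seed=0):
    """Randomly split the indices {0, ..., n-1} into K roughly
    equally sized, disjoint folds (the sets D_k in the text)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    # Taking every K-th element of a random permutation gives
    # disjoint folds whose sizes differ by at most one.
    return [perm[k::K] for k in range(K)]
```

Each index appears in exactly one fold, so the folds form a partition of $\{1,\ldots,n\}$ as required.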
The CV error is defined by \begin{equation} \nonumber {\rm CV}(\alpha, t) = \sum_{k=1}^K \sum_{i \in D_k } \left( y_i - \sum_{j=1}^p \widehat{\beta}_j^{(-k)}(\alpha,t) x_{ij} \right)^2, \end{equation} where $\widehat{\beta}_j^{(-k)}(\alpha,t)$ is the estimate of $\beta_j$ for a given $\alpha$ and $t$ using the data set without $D_k$. {Generalized cross-validation (GCV) and Bayesian information criterion (BIC) \citep{Tibshirani96, Tibshirani05, Zou07} are other popular methods. These are defined by} \begin{eqnarray*} \nonumber {\rm GCV} (\alpha,t) &=& \frac{{\rm RSS}(\alpha,t)}{ n - {\rm df}},\\ {\rm BIC} (\alpha, t) &=& n \times \log \big({\rm RSS}(\alpha,t) \big) + \log n \times {\rm df} \end{eqnarray*} where $\widehat{\beta}_j\big(\alpha,t \big)$ is the estimate of $\beta_j$ for a given $\alpha$ and $t$, ${\rm df}$ is the degrees of freedom and \begin{equation} \nonumber {\rm RSS} ( \alpha, t ) = \sum_{i=1}^n \left( y_i - \sum_{j=1}^p \widehat{\beta}_j (\alpha,t)x_{ij} \right)^2. \end{equation} Here, the degrees of freedom is a measure of model complexity. To apply these methods, one must estimate the degrees of freedom \citep{Efron04}. Following \citet{Tibshirani05} for Fused LASSO, we use the number of distinct groups of non-zero regression coefficients as an estimate of the degrees of freedom. \section{Simulations} \label{sect:simstudy} We numerically compare the performance of HORSES and several other penalized methods: ridge regression, LASSO, Elastic Net, and OSCAR. We do this by generating data based on six models that differ on the number of data points $n$, number of predictors $p$, the correlation structure $\Sigma$ and the true values of the coefficients $\beta$. The parameters for these six models are given in Table \ref{table:models}. 
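As an illustration of the simulation setup, data from a model with the AR(1)-type correlation $\Sigma_{i,j}=0.7^{|i-j|}$ (as in models 1-3 and 6) can be generated as follows; this is a sketch assuming numpy, with model 1's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 20, 8, 3.0
beta = np.array([3, 2, 1.5, 0, 0, 0, 0, 0])  # model 1 coefficients

# AR(1)-type correlation matrix: Sigma_{i,j} = 0.7^{|i-j|}
idx = np.arange(p)
Sigma = 0.7 ** np.abs(idx[:, None] - idx[None, :])

# Predictors from N(0, Sigma), response from y = X beta + eps
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
y = X @ beta + rng.normal(scale=sigma, size=n)
```

The unit diagonal of `Sigma` matches the requirement $\Sigma_{j,j}=1$ stated below for models 1-4.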
\begin{table} \begin{tabular}{|cccccl|} \hline Model & $n$ & $p$ & $\Sigma_{i,j}$ & $\sigma$ & $\beta$ \\ \hline 1 & 20 & 8 & $0.7^{|i-j|}$ & 3 & $(3,2,1.5,0,0,0,0,0)^T$ \\ 2 & 20 & 8 & $0.7^{|i-j|}$& 3 & $(3,0,0,1.5,0,0,0,2)^T$ \\ 3 & 20 & 8 & $0.7^{|i-j|}$ & 3 & $(0.85,0.85,0.85,0.85,0.85,0.85,0.85,0.85)^T$ \\ 4& 100 & 40 & 0.5 & 15 & $(\underbrace{0,\ldots,0}_{10},\underbrace{2,\ldots,2}_{10}, \underbrace{0,\ldots,0}_{10},\underbrace{2,\ldots,2}_{10})^T$ \\ 5 & 50 & 40 & 0.5 & 15 & $(\underbrace{3,\ldots,3}_{15},\underbrace{0,\ldots,0}_{25})^T$ \\ 6 & 50 & 100 & $0.7^{|i-j|}$ & 3 & see text \\ \hline \end{tabular} \caption{Parameters for the models used in the simulation study.} \label{table:models} \end{table} The first five models are very similar to those in \citet{Zou05} and \citet{Bondell08}. Specifically, the data are generated from the model \begin{equation} \nonumber {y} = {\rm X} {\bf \beta} + \epsilon, \end{equation} where $\epsilon \sim N\big(0,\sigma^2\big)$. For models 1-4, we generate predictors $x_i =(x_{i1}, \ldots, x_{ip})^{T}$ from a multivariate normal distribution with mean 0 and covariance $\Sigma$ where $\Sigma_{j,j} =1$ for $j=1, \ldots,p$. For model 5, the predictors are generated as follows: \begin{eqnarray*} &&x_i = Z_1 + \eta_i^x, Z_1 \sim N(0,1), \quad i \in G_1 =\{1, \ldots, 5\}\\ &&x_i = Z_2 + \eta_i^x, Z_2 \sim N(0,1), \quad i \in G_2 =\{6, \ldots, 10\}\\ &&x_i = Z_3 + \eta_i^x, Z_3 \sim N(0,1), \quad i \in G_3=\{11, \ldots, 15\}\\ &&x_i \sim N(0,1), i=16,\ldots, 40, \end{eqnarray*} where $\eta_i^x \sim N(0,0.16), i=1,\ldots, 15$. Then Corr$(x_i,x_j) \approx 0.85$ for $i, j \in G_k$ for $k=1,2,3$. For model 6, we consider the scenario where $p>n$. We choose $p=100$ because this is the maximum number of predictors that can be handled by the quadratic programming used in OSCAR.
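The grouped-predictor scheme used for model 5 can be sketched as follows; this numpy sketch uses a large sample size only to check the implied within-group correlation of roughly $1/1.16 \approx 0.86$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # large only to verify the implied correlation numerically
groups = [range(0, 5), range(5, 10), range(10, 15)]

X = rng.normal(size=(n, 40))  # x_16, ..., x_40 are independent N(0,1)
for g in groups:
    z = rng.normal(size=n)  # shared group factor Z_k ~ N(0,1)
    for i in g:
        # x_i = Z_k + eta_i, with eta_i ~ N(0, 0.16), i.e. sd = 0.4
        X[:, i] = z + rng.normal(scale=0.4, size=n)

# within-group correlation: Cov = 1, Var = 1.16, so Corr ~ 0.86
r = np.corrcoef(X[:, 0], X[:, 1])[0, 1]
```

The empirical correlation `r` between two predictors in the same group is close to the value $\approx 0.85$ quoted in the text.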
The vector of coefficients $\beta$ for model 6 is given by \begin{equation} \nonumber \beta = (\underbrace{3,\ldots,3}_{5},\underbrace{0,\ldots,0}_{10}, \underbrace{2,\ldots,2}_{5},\underbrace{0,\ldots,0}_{10}, \underbrace{-1.5,\ldots,-1.5}_{5},\underbrace{0,\ldots,0}_{10}, \underbrace{1,\ldots,1}_{5},\underbrace{0,\ldots,0}_{50})^T \end{equation} We generate 100 data sets of size $2n$ for each of the 6 models. In each data set, the final model is estimated as follows: (i) For each $(\alpha,t)$, we use the first $n$ observations as a training set to estimate the model and use the other $n$ observations as a validation set to compute the prediction error ${\rm PE}(\alpha,t)$; (ii) We set the tuning parameters to be the values $(\alpha^*,t^*)$ that minimize the prediction error ${\rm PE}(\alpha,t)$; (iii) The final model is estimated using the training set with $(\alpha,t)=(\alpha^*,t^*)$. \begin{table}[b] \begin{center} \caption{The number of groups in each model used in the simulation study.} \label{table:groups} \begin{tabular}{lcccccc}\\\hline Model & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline Number of groups & 3 & 3 & 1 & 1 & 3 & 4 \\\hline \end{tabular} \end{center} \end{table} We compare the mean square error (MSE) and the model complexity of the five penalized methods. The MSE is calculated as in \citet{Tibshirani96} via \begin{equation} \nonumber {\rm MSE} = (\hat{\beta} - \beta)^T {\rm V} (\hat{\beta} - \beta), \end{equation} where ${\rm V}$ is the population covariance matrix for $X$. The model complexity is measured by the number of groups. Based on the coefficient values and correlation structure, Table \ref{table:groups} shows the true number of groups for each of the six scenarios. Note that the true number of groups is not always the same as the degrees of freedom. For example, we note that the true number of groups in model 5 is three based on the correlation structure although all nonzero coefficients have the same value.
On the other hand, model 4 assumes a compound symmetric covariance structure; therefore, the number of groups depends only on the coefficient values. Hence, the order of the coefficients does not matter and we can consider model 4 as having only one group of non-zero coefficients. We take the model complexity of model 6 to be four, based on the coefficient values. However, it is possible that some of the zero coefficients might be included as signals because of strong correlations and relatively small differences in coefficient values in this case. For example, the correlation between the predictors corresponding to $\beta_{50}=1$ and $\beta_{51}=0$ is 0.7. Therefore it is possible that the true model complexity in this case may be bigger than four. The simulation results are summarized in Table 2. The HORSES procedure reports the smallest {\rm df}s except for models 1 and 6. In both scenarios, the difference in {\rm df} between the least complex model and HORSES is marginal (4 vs 5 in model 1 and 30 vs 33.5 in model 6). The HORSES procedure is also very competitive in the MSE comparison. Its MSE is the smallest in models 2-4 and 6 and the second or third smallest in models 1 and 5. It is interesting to observe that HORSES is the best in model 2, but third in model 1, although the differences in MSE and {\rm df} of Elastic Net and HORSES in model 1 are minor. The values of the parameters are the same in both scenarios, but variables with similar coefficients are highly correlated in model 1, while these variables have little correlation with each other in model 2. Hence we can consider the grouping of predictors as mainly determined by coefficient values in model 2 while in model 1, the correlation structure may have an important role in the grouping. This can be confirmed by comparing the median MSEs of each method in the two models 1 and 2. As expected, the median MSE in model 1 is always smaller than the median MSE in model 2.
The difference in the median MSEs can be interpreted as the gain achieved by using the correlation structure when grouping. Because of the explicit form of the fusion penalty in HORSES, our procedure seems to give more weight to differences among the coefficient values while still accounting for correlations. As a result, HORSES effectively groups in model 2. Not surprisingly, the HORSES procedure is much more successful than the other procedures in finding the correct model in model 3, where it may give higher weight to the fusion penalty ($\alpha$ close to $1$). It also has the smallest MSE among the methods. In this case, the true model is not sparse and the LASSO and Elastic Net methods fail. HORSES outperforms the other methods again in model 4. Since the model assumes the compound symmetric covariance structure, the grouping is solely based on the coefficient values. Because of the fusion penalty, the HORSES procedure is very effective in grouping and produces 3.5 as the median {\rm df} while the second smallest {\rm df} is 15 with OSCAR. In model 5, HORSES has the second smallest median MSE (46.1), with Elastic Net's the smallest at 40.7. However, HORSES chooses the least complex model and shows better grouping compared to Elastic Net. Model 6 considers a large $p$ and small $n$ case. The HORSES procedure reports the smallest MSE while the Elastic Net chooses the least complex model. However, we notice that all methods report at least 30 as the {\rm df}. This might be due to the fact that the true model complexity in this case is not clear, as we point out above. In summary, the HORSES procedure outperforms the other methods in choosing the least complex model and attaining the best grouping, while also providing competitive results in terms of MSE. \begin{table}[ht] \begin{center} \caption{MSE and model complexity.} \begin{tabular}{llcccccc}\\\hline Case & Method & MSE & MSE & MSE &DF &DF &DF \\ & &Med. & 10th perc. & 90th perc. &Med. & 10th perc.
& 90th perc.\\\hline {\bf C1} & Ridge & 2.31 & 0.98 & 4.25&8 &8 &8 \\ & LASSO & 1.92 & 0.68 & 4.02 & 5& 3&8 \\ & Elastic Net & 1.64& 0.49 & 3.26& 5&3 &7.5 \\ & OSCAR & 1.68 & 0.52 & 3.34 &4 &2 &7 \\ & HORSES & 1.85 & 0.74 & 4.40&5&3 &8 \\\hline {\bf C2} & Ridge & 2.94 & 1.36 & 4.63 &8 &8 &8 \\ & LASSO & 2.72 & 0.98 & 5.50&5 &3.5 &8 \\ & Elastic Net & 2.59 & 0.95 & 5.45 &6 &4 &8 \\ & OSCAR & 2.51 & 0.96 & 5.06&5 &3 &8 \\ & HORSES & 2.21 & 1.03 & 4.70 &5 &2 &8 \\ \hline {\bf C3} & Ridge & 1.48 & 0.56 & 3.39 &8 &8 &8 \\ & LASSO & 2.94 & 1.39 & 5.34&6 &4 &8 \\ & Elastic Net & 2.24& 1.02& 4.05&7 &5 & 8\\ & OSCAR & 1.44 & 0.51& 3.61&5 &2 &7 \\ & HORSES & 0.50 & 0.02 & 2.32 &2 &1 &5.5 \\ \hline {\bf C4} & Ridge & 27.4 & 21.2 & 36.3&40 &40 &40 \\ & LASSO & 45.4&32 &56.4 & 21&16 &25 \\ & Elastic Net & 34.4& 24 & 45.3&25 &21 &28 \\ & OSCAR & 25.9 & 19.1 & 38.1&15 &5 &19 \\ & HORSES & 21.2&19.3 & 33.0 &3.5 &1 &19.5 \\\hline {\bf C5} & Ridge & 70.2 & 41.8 & 103.6&40 &40 &40 \\ & LASSO & 64.7&27.6 &116.5 &12 &9 &18 \\ & Elastic Net & 40.7& 17.3 & 94.2&17 &13 &25 \\ & OSCAR & 51.8 & 14.8 & 96.3&12 &9 &18 \\ & HORSES & 46.1 &18.1 & 92.8 &11 &5.5 &19.5 \\\hline {\bf C6} & Ridge & 27.71 & 19.53 & 38.53 & 100 & 100 & 100 \\ & LASSO & 13.36 & 7.89 & 20.18 & 31 & 24 & 39.1 \\ & Elastic Net & 13.57 & 8.49 & 25.33 & 30 & 23.9 & 37 \\ & OSCAR & 13.16 & 8.56 & 19.16 & 50.00 & 35.9 & 83.7 \\ & HORSES & 12.20 & 7.11 & 22.02 & 33.5 & 24 & 66.3 \\\hline \end{tabular} \end{center} \end{table} \section{Data Analysis} \label{sect:analysis} \subsection{Cookie dough data} \begin{figure}[t] \begin{center} \includegraphics[scale=0.5,angle=0]{./dough_corr.eps} \end{center} \caption{Graphical representation of the correlation matrix of the 300 wavelengths of the cookie dough data} \label{fig:doughcorr} \end{figure} In this case study, we consider the cookie dough dataset from \cite{Osborne1984}, which was also analyzed by \cite{Brown2011}, \cite{Griffin2007}, \citet{Caron08}, and \cite{Hans2011}. 
\cite{Brown2011} consider four components as response variables: percentage of fat, sucrose, flour and water associated with each dough piece. Following \citet{Hans2011}, we attempt to predict only the flour content of cookies with the 300 NIR reflectance measurements at equally spaced wavelengths between 1200 and 2400 nm as predictors (out of the 700 in the full data set). {Also following \citet{Hans2011} we remove the 23rd and 61st observations as outliers. Then} we split the dataset randomly into a training set with 39 observations and a test set with 31 observations. Figure \ref{fig:doughcorr} shows the correlations between NIR reflectance measurements based on all observations. There are very strong correlations between any pair of predictors in the range of 1200-2200 and 2200-2400. Note however that strong correlations do not necessarily imply strong signals in this case since the correlations can be due to the measurement errors. With the training data set, the tuning parameters of HORSES are computed to be $\alpha=0.999$ and $\lambda=0.1622$ (equivalently, $\lambda_1=0.1620$ and $\lambda_2=0.00016$). Since the $L_1$ penalty dominates the penalty function, we expect that both HORSES and LASSO will yield very similar results. We compare HORSES, LASSO and Elastic Net via the prediction mean squared error and degrees of freedom on the test data. The OSCAR method is not included in the comparison because we are not able to apply it due to the high dimension of the data. Table 3 presents the prediction mean squared error and degrees of freedom of each method. The Elastic Net has the smallest MSE, but the differences in MSE across the three methods are small. On the other hand, the LASSO and HORSES methods provide parsimonious models with small degrees of freedom. The estimated coefficients for the LASSO, Elastic Net and HORSES methods are presented in Figure \ref{fig:biscuit}. Elastic Net produces 11 peaks while both LASSO and HORSES have 7 peaks.
The estimated spikes from LASSO and HORSES are consistent with the results obtained in \citet{Caron08}. The main difference between the two methods is at wavelengths 1832 and 1836, where the LASSO estimates are 0.204 and 0 while the HORSES estimates are 0.0853 at both wavelengths. The Elastic Net has peaks at wavelengths 1784 and 1804 but the other two methods do not provide a peak at those wavelengths. We observe a reverse pattern at wavelength 2176. \begin{figure}[t] \centering \subfigure[Elastic Net]{ \includegraphics[scale=0.2,angle=-90]{./biscuit_net.ps} \label{fig:biscuit_net} } \subfigure[LASSO]{ \includegraphics[scale=0.2, angle=-90]{./biscuit_lasso.ps} \label{fig:biscuit_lasso} } \subfigure[HORSES]{ \includegraphics[scale=0.2, angle=-90]{./biscuit_horses.ps} \label{fig:biscuit_horses} } \caption{Coefficient estimates for the 300 predictors of the cookie dough data.} \label{fig:biscuit} \end{figure} \begin{table}[ht] \caption{Biscuit dough data results} \begin{center} \begin{tabular}{cccc}\\\hline & Elastic Net & HORSES & LASSO\\\hline Mean Squared Error & 2.442 & 2.586 & 2.556 \\ Degrees of Freedom & 11 & 7 & 7 \\\hline \end{tabular} \end{center} \end{table} \subsection{Appalachian Mountains Soil Data} Our next example is the Appalachian Mountains Soil Data from \citet{Bondell08}. Figure \ref{fig:soilcorr} shows a graphical representation of the correlation matrix of 15 soil characteristics computed from measurements made at twenty 500-$m^2$ plots located in the Appalachian Mountains of North Carolina. The data were collected as part of a study on the relationship between rich-cove forest diversity and soil characteristics. Forest diversity is measured as the number of different plant species found within each plot. The values in the soil data set are averages of five equally spaced measurements taken within each plot and are standardized before the data analysis. These soil characteristics serve as predictors with forest diversity as the response.
\begin{figure}[t] \begin{center} \includegraphics[scale=0.5,angle=0]{./Corr_map.eps} \end{center} \caption{Graphical representation of the correlation matrix of the 15 predictors of the Appalachian soil data} \label{fig:soilcorr} \end{figure} As can be seen from Figure \ref{fig:soilcorr}, there are several highly correlated predictors. Note that our correlation graphic shows the signed correlation values and is thus different from the one in \citet{Bondell08} showing the {\it absolute} value of correlation. {The first seven covariates are closely related. Specifically they concern positively charged ions (cations). The predictors named ``calcium'', ``magnesium'', ``potassium'', and ``sodium'' are all measurements of cations of the corresponding chemical elements, while ``\% Base Saturation'', ``Sum Cations'' and ``CEC'' (cation exchange capacity) are all summaries of cation abundance. The correlations between these seven covariates fall in the range (0.360, 0.999). There is a very strong positive correlation between percent base saturation and calcium ($r=0.98$), but the correlation between potassium and sodium ($r=0.36$) is not quite as high as the others. Of the remaining eight variables, the strongest negative correlation is between soil pH and exchangeable acidity ($r=-0.93$). Since both of these are measures of acidity, this appears surprising. 
However, exchangeable acidity measures only a subset of the acidic ions measured in pH, this subset being of more significance only at low pH values.} {Note that because ``Sum Cations'' is the sum of the other four cation measurements, the design matrix for these predictors is not full rank.} \begin{table}[t] \begin{center} \caption{Results of analyzing the Appalachian soil data using OSCAR and HORSES, and two different methods for choosing the tuning parameters.} \begin{tabular}{l|cccc}\hline Variable & OSCAR & OSCAR & HORSES & HORSES \\ & (5-fold CV) & (GCV) & (5-fold CV) & (GCV) \\\hline \% Base saturation & 0 & -0.073 & 0 & {-0.1839} \\ Sum cations & {-0.178} & {-0.174} & {-0.1795} & {-0.1839} \\ CEC & {-0.178} &{-0.174} & {-0.1795} & {-0.1839} \\ Calcium & {-0.178} & {-0.174} & {-0.1795} & {-0.1839} \\ Magnesium & 0 & 0 & 0 & 0 \\ Potassium & {-0.178} & {-0.174} & {-0.1795} & {-0.1839} \\ Sodium & 0 & 0 & 0 & 0 \\ Phosphorus & 0.091 & {0.119} & 0.0803 & {0.2319} \\ Copper & 0.237 & {0.274} & 0.2532 & {0.3936} \\ Zinc & 0 & 0 & 0 & -0.0943 \\ Manganese & 0.267 & {0.274} & 0.2709 & {0.3189} \\ Humic matter & -0.541 & -0.558 & -0.5539 & -0.6334 \\ Density & 0 & 0 & 0 & 0 \\ pH & 0.145 & {0.174} & 0.1276 & {0.2319} \\ Exchangeable acidity & 0 & 0 & 0 & 0.0185 \\\hline\hline Degrees of Freedom & 6 & 5 & 6 & 7 \\\hline \end{tabular} \end{center} \end{table} We analyze the data with the HORSES and OSCAR procedures and report the results in Table 4. Although OSCAR and HORSES use the same definition of df, the OSCAR procedure groups predictors based on the {\it absolute} values of the coefficients. Therefore the number of groups is not the same as the df in OSCAR. The results for LASSO using 5-fold cross-validation and GCV can be found in \citet{Bondell08}. The 5-fold cross-validation OSCAR and HORSES solutions are similar. They select the exact same variables, but with slightly different coefficient estimates.
Since the sample size is only 20 and the number of predictors is 15, the 5-fold cross-validation method may not be the best choice for selecting tuning parameters. However, using GCV, OSCAR and HORSES provide different answers. Compared to the 5-fold cross-validation solutions, the OSCAR solution has one more predictor (\% Base saturation) while the HORSES solution has 3 additional predictors (\% Base saturation, Zinc, Exchangeable acidity). More interestingly, in the OSCAR solution, \% Base saturation is not in the group measuring {\it abundance of cations}, while pH is. In the HORSES solution, on the other hand, the \% Base saturation variable is included in the {\it abundance of cations} group. The HORSES solution also produces an additional group of variables consisting of Phosphorus and pH. \section{Conclusion} \label{sect:conclusion} {We proposed a new group variable selection procedure in regression that produces a sparse solution and also groups positively correlated variables together. We developed a modified pathwise coordinate optimization for applying the procedure to data. Our algorithm is much faster than a quadratic program solver and can handle cases with $p>n$. Such a procedure is useful relative to other available methods in a number of ways. First, it selects groups of variables, rather than randomly selecting one variable in the group as the LASSO method does. Second, it groups positively correlated rather than both positively and negatively correlated variables. This can be useful when studying the mechanisms underlying a process, since the variables within each group behave similarly, and may indicate that they measure characteristics that affect a system through the same pathways. Third, the penalty function used ensures that the positively correlated variables do not need to be spatially close.
This is particularly relevant in applications where spatial contiguity is not the only indicator of functional relation, such as brain imaging or genetics.} {A simulation study comparing the HORSES procedure with ridge regression, LASSO, Elastic Net and OSCAR methods over a variety of scenarios showed its superiority in terms of sparsity, effective grouping of predictors and MSE.} It is desirable to achieve theoretical optimality, such as the oracle property of \citet{Fan01}, in high dimensional cases. One possibility is to extend the idea of the adaptive Elastic Net \citep{Zou09} to the HORSES procedure. Then we may consider the following penalty form: \begin{eqnarray*} \hat \beta &=& \mbox{argmin}_{\beta} \|y - \sum_{j=1}^p \beta_j x_j \|^2 \mbox { subject to} \\ &&\alpha \sum_{j=1}^p \hat w_j | \beta_j| + (1-\alpha)\sum_{j < k} | \beta_j - \beta_k| \le t, \end{eqnarray*} where $\hat w_j$ are the adaptive data-driven weights. Investigating theoretical properties of the above estimator will be a topic of future research. \section{Appendix} \label{sect:appendix} {\bf Proof of Theorem 1:} Note that one can write the HORSES optimization problem in the equivalent Lagrangian form \begin{equation} \label{eqn:HORSES-obj} \argmin_{\beta} \left\{ \|{ y} - \sum_{j=1}^p \beta_j { x}_j \|^2 + \lambda \left( \alpha\sum_{j=1}^p | \beta_j | + (1-\alpha) \sum_{j < k} | \beta_j - \beta_k | \right) \right\}. \end{equation} Suppose the covariates $({x}_1,{ x}_2,\ldots,{ x}_p)$ are ordered such that their corresponding coefficient estimates satisfy \begin{equation} \nonumber \widehat{\beta}_1 \le \widehat{\beta}_2\le \cdots \le \widehat{\beta}_{\rm L}< 0 < \widehat{\beta}_{{\rm L}+1} \le \cdots \le \widehat{\beta}_Q \end{equation} and $\widehat{\beta}_{Q+1}=\cdots=\widehat{\beta}_p=0$. Let $\widehat{\theta}_1, \ldots,\widehat{\theta}_G$ denote the $G$ unique nonzero values of the set of $\widehat{\beta}_j$, so that $G \le Q$.
For each $g=1,2,\ldots,G$, let \begin{equation} \nonumber \mathcal{G}_g=\{j : \widehat{\beta}_j = \widehat{\theta}_g\} \end{equation} denote the set of indices of the covariates whose estimates of regression coefficients are $\widehat{\theta}_g$. Let also $w_g=|\mathcal{G}_g|$ be the number of elements in the set $\mathcal{G}_g$. Suppose that $\widehat{\beta}_k (\lambda_1,\lambda_2) \neq \widehat{\beta}_l (\lambda_1,\lambda_2)$ and both are non-zero. In addition, assume $k \in \mathcal{G}_g$ and $l \in \mathcal{G}_h$ for $h>g$ without loss of generality. The differentiation of the objective function (\ref{eqn:HORSES-obj}) with respect to $\beta_k$ gives \begin{equation} \nonumber -2 {x}_k^{\rm T} \big( y - \sum_{j=1}^p \widehat{\beta}_j x_j \big)+ \lambda \kappa_k =0, \end{equation} where $u_{+,g}=\sum_{g1<g} w_{g1}$ and $u_{g,+} =\sum_{g<g2} w_{g2}$, and \begin{equation} \kappa_k =\alpha ~sgn(\widehat{\beta}_k) + \big( 1- \alpha \big) \big( u_{+,g} - u_{g,+} \big). \end{equation} In the same way, the differentiation of (\ref{eqn:HORSES-obj}) with respect to $\beta_l$ is \begin{equation} \nonumber -2 {x}_l^{\rm T} \big( y - \sum_{j=1}^p \widehat{\beta}_j x_j \big)+ \lambda \Big\{ \alpha~ sgn(\widehat{\beta}_l) + \big( 1- \alpha\big) \big( u_{+,h} - u_{h,+} \big) \Big\}=0, \end{equation} and we have, by taking their differences, \begin{equation} \nonumber -2 \big( {x}_k^{\rm T} - {x}_l^{\rm T} \big) \big( y - \sum_{j=1}^p \widehat{\beta}_j x_j \big)+ \lambda \big( \kappa_k - \kappa_l\big) = 0. \end{equation} Since ${X}$ is standardized, $\|{ x}_k^{\rm T} - { x}_l^{\rm T} \|^2=2 (1- \rho_{kl} )$. This together with the fact that $\| { y} - { X} \widehat{\beta} \|^2 \le \|{ y} \|^2$ gives \begin{equation} \nonumber |\kappa_k -\kappa_l| \le 2 \lambda^{-1} \|{y}\| \sqrt{2 (1-\rho_{kl})}.
\end{equation} However, we find that \begin{eqnarray} \kappa_l - \kappa_k &=& \alpha \big\{ sgn(\widehat{\beta}_l) - sgn(\widehat{\beta}_k)\big\} \nonumber\\ && \qquad + \big( 1- \alpha \big) \Big\{ \big(u_{+,h}-u_{h,+} \big) - \big( u_{+,g} - u_{g,+} \big) \Big\}, \end{eqnarray} is always larger than or equal to $2(1-\alpha)$. Thus, if $2 \lambda^{-1} \|{y}\| \sqrt{2 (1-\rho_{kl})}<2(1-\alpha)$, or equivalently $ \|{y}\| \sqrt{2 (1-\rho_{kl})} \big/ (1-\alpha)< \lambda$, then we encounter a contradiction.
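As a small numerical illustration of the Lagrangian objective in (\ref{eqn:HORSES-obj}), the penalized criterion can be evaluated directly; this is a numpy sketch with a hypothetical function name, not the Matlab implementation described in Section \ref{sect:compute}.

```python
import numpy as np

def horses_objective(y, X, beta, lam, alpha):
    """Lagrangian HORSES criterion: 0.5 * ||y - X beta||^2
    + lam * alpha * sum_j |beta_j|
    + lam * (1 - alpha) * sum_{j<k} |beta_j - beta_k|."""
    resid = y - X @ beta
    l1 = np.sum(np.abs(beta))
    pairwise = sum(abs(beta[j] - beta[k])
                   for j in range(len(beta))
                   for k in range(j + 1, len(beta)))
    return 0.5 * resid @ resid + lam * (alpha * l1 + (1 - alpha) * pairwise)
```

The pairwise term vanishes whenever all coefficients are fused to a common value, which is what drives the grouping behavior analyzed in the proof.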
\section{Introduction} Recent progress in nano-technology has attracted much interest in studying quantum transport in mesoscopic systems. Among others, a quantum dot (QD) \cite{Kouwen} has played an important role to reveal correlation effects in the nanoscale systems. In particular, the observation \cite{Gold,Cronen} of the Kondo effect in QD systems \cite{Glaz,Ng,MW,Kawa,Oguri1} opened a systematic way to investigate strongly correlated electrons, which has encouraged further theoretical and experimental studies in this field. Besides this substantial progress, transport properties of a mesoscopic system with hybrid normal(metal)-superconductor junctions have been also investigated extensively. In this system, the Andreev reflection \cite{ARreview} plays a key role in physics, in which an incoming electron from normal side can be reflected as a hole, therefore transferring a Cooper pair into superconductor. The above interesting topics in nanoscale systems naturally stimulated the research on the Andreev reflection for a QD coupled to normal and superconducting leads (N-QD-S) \cite{Beenakker,Zhao1,Sun1,Loss,Zhao2,Chen}. In this system, the Andreev reflection (the proximity effect) at the QD-S interface induces the superconducting correlation in the QD, which has a tendency to form the spin-singlet state. On the other hand, for large Coulomb interactions, the Kondo effect is enhanced and therefore the Kondo spin-singlet state is stabilized between the spin moment in the QD and the conduction electrons in the leads. Thus, the competition between these two distinct spin-singlet states occurs in the N-QD-S system. 
In order to clarify how this competition affects the transport in this system, theoretical analyses \cite{Fazio,Schwab,Clerk,Cuevas,Sun2,Cho,Aono,Avishai1,Avishai2,Krawiec, Splett,Domanski} as well as experimental investigations \cite{Graber} have been carried out intensively, addressing \textit{e.g.} the linear \cite{Fazio,Schwab,Clerk,Cuevas,Krawiec,Graber} or nonlinear \cite{Fazio,Cho,Avishai1,Aono,Krawiec,Domanski,Graber} conductance, the excess Kondo resonance coming from a novel co-tunneling process (Andreev-normal co-tunneling) \cite{Sun2}, the Andreev reflection through a QD embedded in an Aharonov-Bohm ring \cite{Avishai2}, and adiabatic electron pumping \cite{Splett}. In particular, some theoretical studies on the linear conductance have clarified that the Coulomb interaction suppresses the Andreev reflection at the QD-S interface, which leads to a decrease of the linear conductance \cite{Fazio,Schwab,Clerk,Krawiec}. However, these studies were done on the assumption that the Coulomb interaction in the QD is sufficiently large. On the other hand, Cuevas \textit{et al.} analyzed the conductance over the range from the noninteracting case to the strong-correlation limit, using a modified second-order perturbation theory \cite{Cuevas}. They found that increasing the coupling between the QD and the superconducting lead can restore the conductance up to the maximum value $4e^2/h$, although the physical implications of the conductance maximum in the presence of the Coulomb interaction were not discussed in detail. In an N-QD-S system, moreover, the total number of electrons is not conserved because of the superconducting correlation. In such a system, it is not obvious whether the low-energy properties are described by the local Fermi liquid theory.
In this paper, we theoretically investigate the transport in an N-QD-S system with the use of the numerical renormalization group (NRG) method \cite{Wilkins,Hewson1}, which has been applied successfully to a Josephson current through a QD \cite{Choi,Oguri2,Oguri3}. Applying the Bogoliubov transformation, we first show that the low-energy properties of an N-QD-S system are described by the local Fermi liquid theory. Using the NRG method, we calculate the conductance due to the Andreev reflection with high accuracy and thus confirm Cuevas \textit{et al.}'s results. To understand the behavior of the conductance, the renormalized parameters, which characterize the Andreev bound states around the Fermi energy, are calculated. From the analysis of the ground state properties, we demonstrate that the conductance maximum clearly characterizes a crossover between the superconducting singlet state and the Kondo singlet state. This paper is organized as follows. In the next section, we introduce the model and describe the low-energy properties in terms of the local Fermi liquid theory. Then in \S \ref{sec:result}, we calculate the conductance and the renormalized parameters. We discuss how the interplay between the Kondo effect and the superconducting correlation is reflected in the transport, by systematically changing the Coulomb interaction, the tunneling amplitude and the gate voltage. A brief summary is given in the last section. \section{Model and Formulation} \label{sec:model} \subsection{Model} The Hamiltonian of a QD coupled to normal (N) and superconducting (S) leads is given by \begin{eqnarray} H = H_d^0 + H_d^U + H_S + H_N + H_{TS} + H_{TN , } \label{Hami1} \end{eqnarray} where $H_d^0 + H_d^U$ and $H_{S(N)}$ represent the QD part and the superconducting (normal) lead part, respectively. $H_{TS}$ and $H_{TN}$ are the mixing terms between the QD and the leads. The explicit form of each part reads \begin{eqnarray} H_d^0 \!\!\!\!\!&=&\!\!\!\!\! 
\left(\varepsilon_d+\frac{U}{2}\right)(n_d-1), \,\,\,\, H_d^U = \frac{U}{2}(n_d-1)^2, \nonumber\\ H_S \!\!\!\!\!&=&\!\!\!\!\! \sum_{q,\sigma}\varepsilon _{q} s_{q\sigma}^\dag s_{q\sigma}^{} +\sum_{q}(\Delta s_{q\uparrow }^\dag s_{-q\downarrow }^\dag + \textrm{H.c.}), \nonumber\\ H_N \!\!\!\!\!&=&\!\!\!\!\! \sum_{k,\sigma}\varepsilon _{k} c_{k\sigma}^\dag c_{k\sigma}^{}, \,\,\,\, H_{TN} = \sum_{k,\sigma} \frac{V_N}{\sqrt{M_N}} (c_{k\sigma}^\dag d_{\sigma}^{} + \textrm{H.c.} ), \nonumber\\ H_{TS} \!\!\!\!\!&=&\!\!\!\!\! \sum_{q,\sigma} \frac{V_S}{\sqrt{M_S}} (s_{q\sigma}^\dag d_{\sigma}^{} + \textrm{H.c.} ). \label{Hamipart} \end{eqnarray} The operator $d^{\dag}_{\sigma}$ creates an electron with energy $\varepsilon _{d}$ and spin $\sigma$ at the QD, and $n_d=\sum_{\sigma}d^{\dag}_{\sigma}d^{}_{\sigma}$. Following ref. \citen{Wilkins}, we here write down the QD part $H_d^0 + H_d^U$ in such a way that the energy of the one-electron occupied state ($n_d=1$) is zero. Note that $H_d^0+H_d^U=\varepsilon_d n_d + Un_{d\uparrow}n_{d\downarrow} + const.$, because $(n_d-1)^2=2n_{d\uparrow}n_{d\downarrow}-n_d+1$. In this representation, $H_d^0$ gives the energy shift due to the deviation from the electron-hole symmetric case ($\varepsilon_d+U/2=0 $). In the Hamiltonian for leads, $s_{q\sigma}^\dag(c_{k\sigma}^\dag)$ is the creation operator of an electron with the energy $\varepsilon _{q}(\varepsilon _{k})$ in the superconducting (normal) lead. In $H_{TS}(H_{TN})$, $V_S(V_N)$ is the tunneling amplitude between the QD and the superconducting (normal) lead, and $M_S(M_N)$ is the number of lattice sites in the superconducting (normal) lead. We assume that the superconducting lead is well described by the BCS theory with a superconducting gap $\Delta=|\Delta|e^{i\phi_S}$, where $\phi_S$ is the phase of the superconducting gap. In what follows, we consider the limiting case of $|\Delta| \to \infty $. 
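As a quick numerical cross-check of the identity $(n_d-1)^2=2n_{d\uparrow}n_{d\downarrow}-n_d+1$ quoted above, the two forms of the dot Hamiltonian can be compared on the four Fock states of the QD. The following sketch is illustrative only (the sample values of $\varepsilon_d$ and $U$ are arbitrary); working out the constant gives $const.=-\varepsilon_d$, which is written explicitly here:

```python
def dot_energy_shifted(n_up, n_dn, eps_d, U):
    """H_d^0 + H_d^U in the shifted form (eps_d + U/2)(n_d - 1) + (U/2)(n_d - 1)^2."""
    n = n_up + n_dn
    return (eps_d + U/2) * (n - 1) + (U/2) * (n - 1)**2

def dot_energy_standard(n_up, n_dn, eps_d, U):
    """Equivalent standard form eps_d*n_d + U*n_up*n_dn + const, with const = -eps_d."""
    n = n_up + n_dn
    return eps_d * n + U * n_up * n_dn - eps_d
```

Both forms agree on all four occupation states, and the shifted form makes manifest that the singly occupied state ($n_d=1$) has zero energy.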
The essential physics of the Andreev reflection, which occurs inside the superconducting gap, is still captured in this limit. In this situation, the Hamiltonian \eqref{Hami1} can be reduced exactly to an effective single-channel Hamiltonian (see Appendix \ref{sec:Delta}) \cite{Oguri2,Affleck} \begin{eqnarray} H^\mathrm{eff} = H_d^0 + H_d^U + H_d^\mathrm{SC} + H_N + H_{TN}\,, \label{Hamieff} \end{eqnarray} where \begin{eqnarray} H_d^\mathrm{SC} = \Delta_d\, d^{\dag}_{\uparrow }d^{\dag}_{\downarrow }+\textrm{H.c.} \label{HamidSC} \end{eqnarray} $H_d^\mathrm{SC}$ denotes the effective onsite superconducting gap at the QD, \begin{eqnarray} \Delta_d\equiv \Gamma_S e^{i\phi_S} _{\quad.} \label{eq:delta_d} \end{eqnarray} Notice here that the resonance strength between the QD and the superconducting lead is given by $\Gamma_{S}(\varepsilon)= \pi \sum_q V_S^{2}\delta(\varepsilon-\varepsilon_{q})/M_{S}$, which is reduced to an energy-independent constant $\Gamma_{S}$ in the wide band limit. For simplicity, we set $\phi_S=0$ in what follows, so that $\Delta_d(\equiv \Gamma_S)$ becomes real. The reduction in the number of channels gives us a practical advantage in the NRG calculations, because this method works with high accuracy for single-channel systems, while the accuracy becomes considerably worse for multi-channel systems. \subsection{Effective Anderson Hamiltonian with normal leads} In the NRG method, the normal lead part is transformed into a linear chain after carrying out the standard logarithmic-discretization procedure \cite{Wilkins}. Then, a sequence of the Hamiltonians is obtained as \begin{eqnarray} \mathcal{H}_\mathrm{NRG}^\mathrm{eff} \!\!\!\!\!&=&\!\!\!\!\! \Lambda^{(N-1)/2} \, \left(\, H_d^0 + H_d^U + H_d^\mathrm{SC} + \mathcal{H}_{N} + \mathcal{H}_{TN} \,\right)_, \nonumber\\ &&\!\!\!\!\! \label{eq:NRG_SC_cond} \end{eqnarray} where \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!
\mathcal{H}_{N} \,+\, \mathcal{H}_{TN} \nonumber\\ \,\, &&\,\, = \sum_{n=-1}^{N-1} \sum_{\sigma} t_n\, \Lambda^{-n/2} \, \left(\, f^{\dagger}_{n+1\,\sigma}\,f^{\phantom{\dagger}}_{n \sigma} \, + \, f^{\dagger}_{n \sigma}\, f^{\phantom{\dagger}}_{n+1\,\sigma} \,\right)_. \nonumber\\ \label{eq:NRG_H_NT} \end{eqnarray} In eq. \eqref{eq:NRG_H_NT}, $f_{-1\sigma} = d_{\sigma}$, and $f_{n\sigma}$ for $n\geq 0$ is an operator for the conduction electron in the normal lead. The hopping factor $t_n$ is defined by $t_{-1} \equiv \widetilde{v} \,\Lambda^{-1/2}$ for $n=-1$, where \begin{eqnarray} \widetilde{v} =\sqrt{ \frac{2\,\Gamma_N D\,A_{\Lambda}}{\pi} }, \,\, A_{\Lambda}=\frac{1}{2}\, \left(\, {1+1/\Lambda \over 1-1/\Lambda }\,\right) \,\log \Lambda , \label{eq:t0} \end{eqnarray} and for the conduction band ($n\geq 0$), \begin{eqnarray} t_n = D\, \frac{1+1/\Lambda}{2} { 1-1/\Lambda^{n+1} \over \sqrt{1-1/\Lambda^{2n+1}} \sqrt{1-1/\Lambda^{2n+3}} }_. \label{eq:tn} \end{eqnarray} Here, $D$ is the half-width of the conduction band, and $\Gamma_N$ ($=\pi \sum_k V_N^{2}\delta(\varepsilon-\varepsilon_{k})| _{\varepsilon=\varepsilon_{F}}/M_{N}$) is the resonance strength between the QD and the normal lead. The factor $A_{\Lambda}$ is introduced to compare the discretized model with the original Hamiltonian \eqref{Hamieff} precisely, and it behaves as $A_{\Lambda}\to 1$ in the continuum limit $\Lambda\to 1$ \cite{Wilkins,Sakai}. In principle, we can carry out the NRG calculation for the Hamiltonian \eqref{eq:NRG_SC_cond}. However, we note that $H_d^\mathrm{SC}$ in eq. \eqref{HamidSC}, which represents the effective onsite superconducting gap at the QD, mixes states with different particle numbers. This means that the eigenstates of the Hamiltonian \eqref{eq:NRG_SC_cond} can not be classified in terms of the total number of electrons. To avoid this inconvenience, we perform the Bogoliubov transformation \cite{Satori}, which is summarized in Appendix \ref{sec:Bogo}. 
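The discretization parameters above lend themselves to a direct numerical check. The sketch below (illustrative only, with $D=1$ by default) evaluates $A_\Lambda$ and $t_n$ from eqs. \eqref{eq:t0} and \eqref{eq:tn}; one can verify that $A_\Lambda\to 1$ as $\Lambda\to 1$ and that $t_n$ approaches the constant $D(1+1/\Lambda)/2$ for large $n$:

```python
import math

def A_Lambda(Lam):
    """Correction factor A_Lambda of eq. (eq:t0); A_Lambda -> 1 as Lambda -> 1."""
    return 0.5 * ((1 + 1/Lam) / (1 - 1/Lam)) * math.log(Lam)

def t_n(n, Lam, D=1.0):
    """Hopping t_n of the discretized conduction chain, eq. (eq:tn), for n >= 0."""
    num = 1 - Lam**-(n + 1)
    den = math.sqrt(1 - Lam**-(2*n + 1)) * math.sqrt(1 - Lam**-(2*n + 3))
    return D * (1 + 1/Lam) / 2 * num / den

def t_minus1(Lam, Gamma_N, D=1.0):
    """QD-chain coupling t_{-1} = v~ * Lambda^{-1/2}, with v~ from eq. (eq:t0)."""
    v = math.sqrt(2 * Gamma_N * D * A_Lambda(Lam) / math.pi)
    return v / math.sqrt(Lam)
```

For the value $\Lambda=3$ used in the calculations below, $t_n$ saturates at $2D/3$ within a few sites.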
In the present case the superconducting gap is absent in $\mathcal{H}_{N} + \mathcal{H}_{TN}$, so that the Hamiltonian \eqref{eq:NRG_SC_cond} can be mapped onto the Anderson model without the onsite superconducting gap, \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\!\!\!\! \Lambda^{-(N-1)/2}\, \mathcal{H}_\mathrm{NRG}^\mathrm{eff} \nonumber \\ &=&\!\!\!\! E_d\left(\sum_{\sigma}\gamma_{-1\sigma}^\dag \gamma_{-1\sigma}^{} -1\right) +\frac{U}{2}\left(\sum_{\sigma}\gamma_{-1\sigma}^\dag \gamma_{-1\sigma}^{}-1\right)^2 \nonumber\\ &&\!\!\! +\sum_{n=-1}^{N-1}\sum_{\sigma} t_n\, \Lambda^{-n/2} (\gamma_{n+1\sigma}^\dag \gamma_{n\sigma}^{} + \textrm{H.c.} ) , \label{Hamitil} \end{eqnarray} where \begin{eqnarray} E_d=\sqrt{\left(\varepsilon_d+\frac{U}{2}\right)^2+\Delta_d^2}\,\;. \label{eq:E_d} \end{eqnarray} The important point is that the total number of Bogoliubov quasiparticles $\sum_{n=-1}^{N}\sum_{\sigma} \gamma_{n\sigma}^\dag \gamma_{n\sigma}^{}$ is conserved. Moreover, the Hamiltonian \eqref{Hamitil} is identical to the ordinary Anderson model (without superconductivity). Therefore, the low-energy properties can be described by the local Fermi liquid theory, even though the original model of eq. \eqref{eq:NRG_SC_cond} has the onsite superconducting gap. Equation \eqref{Hamitil} has been obtained on the assumption that $|\Delta| \to \infty $. Also for finite $|\Delta|$, the coupling to the normal lead via $\Gamma_N$ could allow the low-lying energy states at $|\varepsilon| \lesssim \min(|\Delta|,T_K )$ to be described by the Fermi liquid, where $T_K$ is the Kondo temperature in the case of $\Gamma_S=0$. There is another point to be mentioned here. By comparing the first term of the Hamiltonian \eqref{Hamitil} with $H_d^0$ in eq. \eqref{Hamipart}, we notice that the parameter $\left(\varepsilon_d+U/2\right)$ is replaced by $E_d$.
This means that the Hamiltonian \eqref{Hamitil} corresponds to the Anderson model with the energy level of the impurity site $\bar{\varepsilon}_d=E_d-U/2$, while the term $(U/2)(n_d-1)^2$ due to the Coulomb interaction remains unchanged. We discuss the case of $\Gamma_S\ne0$ ($\Delta_d\ne0$) in this paper, so that we treat the reduced Hamiltonian \eqref{Hamitil} with $E_d>0$ in eq. \eqref{eq:E_d} (so-called asymmetric Anderson model). \subsection{Local Fermi-liquid description} We now introduce the Green function to formulate the density of states (DOS) of the QD. Following eq. \eqref{eq:Gd_B} in Appendix \ref{sec:Bogo}, we can write down the retarded Green function of the QD, after the Bogoliubov transformation, as follows, \begin{eqnarray} G_{d\uparrow,d\uparrow}^r (\varepsilon ) = u_d^2 G_{\gamma_{-1}\uparrow,\gamma_{-1}\uparrow}^r (\varepsilon ) + v_d^2 \bar{G}_{\gamma_{-1}\downarrow,\gamma_{-1}\downarrow}^r (\varepsilon ). \label{eq:Green_d} \end{eqnarray} Note that the coherent factors $u_d, v_d$ are real because of $\phi_S=0$. Using eq. \eqref{eq:Green_d}, the DOS of the QD is given by \begin{eqnarray} \rho_{d} (\varepsilon ) \!\!\!\!&=&\!\!\!\! -\frac{1}{\pi} \textrm{Im}G_{d\uparrow,d\uparrow}^r (\varepsilon ) \nonumber\\ \!\!\!\!&=&\!\!\!\! -\frac{1}{\pi} \left\{ u_d^2 \textrm{Im} G_{\gamma_{-1}\uparrow,\gamma_{-1}\uparrow}^r (\varepsilon ) + v_d^2 \textrm{Im} \bar{G}_{\gamma_{-1}\downarrow,\gamma_{-1}\downarrow}^r (\varepsilon ) \right\}_. \nonumber\\ \label{eq:DOS_d} \end{eqnarray} The point we wish to stress is that $G_{\gamma_{-1}\uparrow,\gamma_{-1}\uparrow}^r (\varepsilon )$ and $\bar{G}_{\gamma_{-1}\downarrow,\gamma_{-1}\downarrow}^r (\varepsilon )$ are derived from the generalized Anderson model \eqref{Hamitil}, in which the superconductivity does not show up explicitly. By using the formula \eqref{eq:DOS_d}, we can describe the Andreev bound states in the QD, which are induced by the Andreev reflection at the QD-S interface. 
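The mapping onto the asymmetric Anderson model can be summarized in a few lines. The function below (an illustrative sketch with $\phi_S=0$, so that $\Delta_d=\Gamma_S$) returns $E_d$ of eq. \eqref{eq:E_d} together with the effective impurity level $\bar{\varepsilon}_d=E_d-U/2$:

```python
import math

def effective_level(eps_d, U, Gamma_S):
    """E_d of eq. (eq:E_d) and the impurity level eps_bar = E_d - U/2 of the
    mapped (asymmetric) Anderson model; Delta_d = Gamma_S since phi_S = 0."""
    E_d = math.sqrt((eps_d + U/2)**2 + Gamma_S**2)
    return E_d, E_d - U/2
```

In the electron-hole symmetric case ($\varepsilon_d+U/2=0$) this reduces to $E_d=\Gamma_S$, and $E_d>0$ whenever $\Gamma_S\neq 0$, as assumed in the text.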
We here focus on these states around the Fermi energy ($\varepsilon \simeq 0$), where the self-energy due to the Coulomb interaction $\Sigma(\varepsilon)$ is approximately given by $\Sigma(\varepsilon) \simeq \Sigma(0) + \varepsilon \left. \partial \Sigma(\varepsilon)/\partial \varepsilon \right|_{\varepsilon=0 \; }$. Then, the retarded Green function for electrons after the Bogoliubov transformation reads \begin{eqnarray} G_{\gamma_{-1}\uparrow,\gamma_{-1}\uparrow}^r (\varepsilon ) \!\!\!\!&=&\!\!\!\! \frac{1}{\varepsilon-E_d+i\Gamma_N -\Sigma(\varepsilon)} \nonumber\\ \!\!\!\!&\simeq&\!\!\!\! \frac{z}{\varepsilon-\widetilde{E}_d+i\widetilde{\Gamma}_N}_{\;,} \label{eq:Green_ene0} \end{eqnarray} where \begin{eqnarray} \widetilde{E}_d=z(E_d+\Sigma(0)) \;,\quad \widetilde{\Gamma}_N=z\Gamma_N , \label{eq:Ed_til} \\ \qquad z= \left( 1-\left. \frac{\partial \Sigma(\varepsilon)}{\partial \varepsilon} \right|_{\varepsilon=0} \right)^{-1}_{.} \label{eq:Gamma_til} \end{eqnarray} As the retarded Green function for a hole, $\bar{G}_{\gamma_{-1}\downarrow,\gamma_{-1}\downarrow}^r (\varepsilon )$, is given by $\bar{G}_{\gamma_{-1}\downarrow,\gamma_{-1}\downarrow}^r(\varepsilon ) = -\left( G_{\gamma_{-1}\uparrow,\gamma_{-1}\uparrow}^r (-\varepsilon ) \right)^*,$ we end up with the DOS around the Fermi energy, \begin{eqnarray} \rho_{d} (\varepsilon ) \simeq \frac{z}{\pi} \left\{ \frac{u_d^2 \widetilde{\Gamma}_N} {(\varepsilon-\widetilde{E}_d)^2+\widetilde{\Gamma}_N^2} + \frac{v_d^2 \widetilde{\Gamma}_N} {(\varepsilon+\widetilde{E}_d)^2+\widetilde{\Gamma}_N^2} \right\}_. \label{eq:DOS_d_ene0} \end{eqnarray} We next consider the transport properties in the small bias regime, which are governed by the Andreev reflection induced inside the superconducting gap. 
According to Appendix \ref{sec:deri_Con}, the linear conductance $G_{V=0}=dI/dV|_{V=0}$ at zero temperature is given by \begin{eqnarray} G_{V=0} = \frac{4e^2}{h}\; 4\frac{\Delta_d^2}{E_d^2}\; \frac { (\widetilde{E}_d/\widetilde{\Gamma}_N)^2 } { \{ 1+(\widetilde{E}_d/\widetilde{\Gamma}_N)^2 \}^2 }_. \label{eq:Condu} \end{eqnarray} We see that the conductance is determined by the ratio of the renormalized parameters $\widetilde{E}_{d}/\widetilde{\Gamma}_{N}$, which are obtained from the eigenvalues of the Hamiltonian \eqref{Hamitil} at the fixed point \cite{Hewson2}. \section{Numerical Results} \label{sec:result} In this section, we discuss transport properties for the N-QD-S system at zero temperature. As mentioned in the previous section, we assume that the superconducting gap $|\Delta|$ is sufficiently large ($|\Delta| \to \infty $). In this case the excited states in the continuum outside the superconducting gap can be neglected. This describes a situation where the superconducting gap is much larger than the characteristic energies of the Andreev reflection. \subsection{Influence of the Coulomb interaction $U$} Let us first discuss how the Coulomb interaction $U$ affects the transport properties at zero temperature. To this end, we explore the detailed properties of the Andreev bound states, which can be obtained from the renormalized parameters computed by means of the NRG method with high accuracy. These renormalized parameters also determine the conductance, so that we can clarify how the conductance is controlled by the Andreev bound states formed around the Fermi energy. We observe how the Andreev bound states change their character with the increase of the Coulomb interaction $U$. As discussed in \S \ref{sec:model}, the local DOS of the QD around the Fermi energy is given by eq. \eqref{eq:DOS_d_ene0}. In particular, in the electron-hole symmetric case ($\varepsilon_d+U/2=0 $), $E_d=\Gamma_S(\equiv \Delta_d)$ and $u_d^2=v_d^2=1/2$ follow from eqs.
\eqref{eq:delta_d}, \eqref{eq:E_d} and \eqref{eq:Bogo_factor_B}. Then, the DOS of the QD, $\rho_{d} (\varepsilon )$, is rewritten as \begin{eqnarray} \rho_{d} (\varepsilon ) \simeq \frac{z}{2\pi} \left\{ \frac{\widetilde{\Gamma}_N} {(\varepsilon-\widetilde{\Gamma}_S)^2+\widetilde{\Gamma}_N^2} + \frac{\widetilde{\Gamma}_N} {(\varepsilon+\widetilde{\Gamma}_S)^2+\widetilde{\Gamma}_N^2} \right\}_, \label{eq:DOS_d_ene0_sym} \end{eqnarray} where \begin{eqnarray} \widetilde{\Gamma}_S=z(\Gamma_S+\Sigma(0)). \label{eq:GammaS_til} \end{eqnarray} It is seen from eq. \eqref{eq:DOS_d_ene0_sym} that $\rho_{d} (\varepsilon )$ in the low-energy region is determined by the renormalized parameters $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$. In the noninteracting case ($U=0$), they are reduced to the bare ones $\widetilde{\Gamma}_{N}=\Gamma_{N}$ and $\widetilde{\Gamma}_{S}=\Gamma_{S}$. In this case, the Andreev bound states are formed at $\varepsilon =\pm\Gamma_S$, which are broadened (finite width $\Gamma_N$) by the coupling between the QD and the normal lead, as schematically shown in Fig. \ref{sketch}(a). \begin{figure}[h] \begin{center} \includegraphics[scale=0.4]{fig1.eps} \end{center} \caption{Andreev resonances around the Fermi energy ($E_F=0$) in the symmetric case ($\varepsilon_d+U/2=0$). } \label{sketch} \end{figure} We shall refer to these Andreev bound states with finite widths as the Andreev resonances in the following. Here we would like to comment on $\Gamma_S$, which gives the position of the Andreev resonances shown in Fig. 1(a). The Andreev reflection at the QD-S interface gives rise to the superconducting correlation in the QD. Taking into account eqs. (4) and (5), in the case of $|\Delta| \to \infty $, we see that the amplitude of the superconducting correlation in the QD is given by the resonance strength $\Gamma_S(\equiv \Delta_d)$.
Then, $\Gamma_S$ becomes a parameter indicating the strength of the Andreev reflection at the QD-S interface, while it originally represents the position of the Andreev resonances in the QD. When the Coulomb interaction $U$ is introduced, the effective tunneling of electrons between the QD and the leads is modified, so that the Andreev resonances are renormalized. Namely, the position and the width of these resonances, which correspond to the renormalized parameters $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$ respectively, become smaller with the increase of $U$ (see Fig. \ref{sketch}(b)). Note that when the effect of the Coulomb interaction $U$ is included, the strength of the Andreev reflection at the QD-S interface is given by $\widetilde{\Gamma}_S$ instead of $\Gamma_S$. Since we are concerned with the DOS around the Fermi energy, the Andreev resonances away from the Fermi energy are not shown in Fig. \ref{sketch}(b). The overall structure including the high energy region can be found in the literature \cite{Fazio,Clerk,Cuevas,Sun2,Krawiec}. To observe the formation of the Andreev resonances in our model, we calculate the renormalized parameters $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$. Figure \ref{reconUr1}(a) shows the renormalized parameters $\widetilde{\Gamma}_{N}$ and $\widetilde{\Gamma}_{S}$ as a function of the Coulomb interaction $U$ for $\Gamma_S=\Gamma_N$, where the parameters $\widetilde{\Gamma}_{N(S)}$ are normalized by the bare resonance strength, $\Gamma_{N}$. \begin{figure}[h] \begin{center} \includegraphics[scale=0.4]{fig2.eps} \end{center} \caption{ (Color online) (a) Renormalized parameters $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$, (b) expectation value of the pair correlation in the QD and (c) linear conductance as a function of the Coulomb interaction $U$ for $\Gamma_S=\Gamma_N$ in the symmetric case ($\varepsilon_d+U/2=0$). 
$\widetilde{\Gamma}_{N(S)}$ and $U$ are normalized by the bare resonance strength, $\Gamma_{N}$. The NRG calculations have been carried out for $\Lambda=3.0$ and $\Gamma_N/D=1.0\times 10^{-3}$. Inset of (c): Conductance in the noninteracting case ($U=0$) as a function of the ratio $\Gamma_S/\Gamma_N$, where we set $\varepsilon_d=0$. } \label{reconUr1} \end{figure} As shown in Fig. \ref{reconUr1}(a), both $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$ decrease monotonically with the increase of $U$. The decrease of $\widetilde{\Gamma}_N$ signals a crossover from the charge-fluctuation regime to the Kondo regime, as is the case for general N-QD-N systems. On the other hand, the decrease of $\widetilde{\Gamma}_S$ results in the suppression of the Andreev reflection at the QD-S interface, implying that the superconducting correlation is suppressed in the QD. To confirm this implication, we also calculate the expectation value of the pair correlation in the QD, which is given by \begin{eqnarray} \left< d^{\dag}_{\uparrow }d^{\dag}_{\downarrow }+ d^{}_{\downarrow}d^{}_{ \uparrow } \right> =-\frac{2}{\pi} \textrm{tan}^{-1} (\widetilde{\Gamma}_S/\widetilde{\Gamma}_N). \label{pairC} \end{eqnarray} Note that the sign of $\langle d^{\dag}_{\uparrow }d^{\dag}_{\downarrow }+ d^{}_{\downarrow}d^{}_{ \uparrow } \rangle$ depends on the definition of the superconducting gap $\Delta$ in $H_S$: namely, $\langle d^{\dag}_{\uparrow }d^{\dag}_{\downarrow }+ d^{}_{\downarrow}d^{}_{ \uparrow } \rangle$ becomes negative because the second term on the right-hand side of eq. (2) is assumed to be positive. As shown in Fig. 2(b), we see that the absolute value $|\langle d^{\dag}_{\uparrow }d^{\dag}_{\downarrow }+ d^{}_{\downarrow}d^{}_{ \uparrow } \rangle |$ approaches 0 with the increase of $U$, following the decrease of the ratio $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N$ in Fig. 2(a). The Andreev resonances around the Fermi energy are thus renormalized with the increase of $U$, as shown in Fig. \ref{sketch}.
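Equation \eqref{pairC} is simple enough to evaluate directly. The sketch below (illustrative only) reproduces the limiting values: the pair correlation vanishes for $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N\to 0$, equals $-1/2$ at $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N=1$, and saturates at $-1$ for $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N\to\infty$:

```python
import math

def pair_correlation(GammaS_t, GammaN_t):
    """On-dot pair correlation of eq. (pairC); monotonically decreasing
    from 0 to -1 as the ratio GammaS_t/GammaN_t grows."""
    return -(2 / math.pi) * math.atan(GammaS_t / GammaN_t)
```
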
It is to be noticed that in the large $U$ region where the Kondo effect is dominant, these two resonances merge around the Fermi energy, forming a single sharp Kondo resonance, in accordance with refs. \citen{Fazio,Clerk,Cuevas,Sun2} and \citen{Krawiec}. This is indeed seen from the tendency that $\widetilde{\Gamma}_S$ decreases more rapidly than $\widetilde{\Gamma}_N$ in Fig. \ref{reconUr1}(a). We next discuss how the conductance changes with the increase of $U$. In the symmetric case ($\varepsilon_d+U/2=0$), the conductance of eq. \eqref{eq:Condu} is rewritten as \begin{eqnarray} G_{V=0}^{\varepsilon_d+\frac{U}{2}=0} = \frac{4e^2}{h}\; \frac {4 (\widetilde{\Gamma}_S/\widetilde{\Gamma}_N)^2 } { \{ 1+(\widetilde{\Gamma}_S/\widetilde{\Gamma}_N)^2 \}^2 }_. \label{eq:Condu-half} \end{eqnarray} The conductance of eq. \eqref{eq:Condu-half} is a function of the ratio $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N$ and has a maximum at $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N=1$, as pointed out in the previous studies \cite{Schwab,Cuevas}. Here, we would like to mention the relation between the conductance and the Andreev resonances shown in Fig. \ref{sketch}. In the case of $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N=1$, which corresponds to the condition for the maximum of the conductance, the distance of the Andreev resonances measured from the Fermi energy ($\widetilde{\Gamma}_S$) is equal to the width of these resonances ($\widetilde{\Gamma}_N$). We also consider the case of $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N\neq1$. When $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N>1$, the distance $\widetilde{\Gamma}_S$ becomes larger than the width $\widetilde{\Gamma}_N$. Then, the DOS of the QD around the Fermi energy gets smaller, which results in the decrease of the conductance. On the other hand, when $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N<1$, the Andreev resonances get closer to the Fermi energy, which leads to the enhancement of the DOS around the Fermi energy. 
At the same time, however, this means that the transport due to the Andreev reflection is suppressed, so that the conductance decreases just as in the case of $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N>1$. Figure \ref{reconUr1}(c) shows the conductance computed as a function of the Coulomb interaction $U$ for $\Gamma_S=\Gamma_N$. For reference, we also plot the conductance in the noninteracting case ($U=0$) as a function of the ratio $\Gamma_S/\Gamma_N$ (inset of Fig. \ref{reconUr1}(c)), which is given by replacing $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N$ in eq. \eqref{eq:Condu-half} with $\Gamma_S/\Gamma_N$. From the inset, we see that the conductance for $U=0$ has the maximum at $\Gamma_S/\Gamma_N=1$, where the couplings to normal and superconducting leads have the same amplitude. In this case, the maximum value of the conductance reaches the unitary limit $4e^2/h$ because of the electron-hole symmetry. With the increase of $U$, the conductance decreases monotonically from $4e^2/h$, as shown in Fig. \ref{reconUr1}(c). This behavior is in agreement with Cuevas \textit{et al.}'s results, who treated a system with a finite gap $\Delta$ of the superconducting lead \cite{Cuevas}. As stated in ref. \citen{Cuevas}, this is due to the reduction of $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N$, which corresponds to the case of $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N<1$ stated above. Similar behavior of monotonic reduction is observed for smaller $\Gamma_S$ ($\Gamma_S/\Gamma_N<1$). Note, however, that somewhat different behavior appears in the region of $\Gamma_S/\Gamma_N>1$. Figure \ref{reconUr5} shows similar plots of the renormalized parameters and the conductance for $\Gamma_S/\Gamma_N=5$.
\begin{figure}[h] \begin{center} \includegraphics[scale=0.4]{fig3.eps} \end{center} \caption{ (Color online) (a) Renormalized parameters $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$, (b) expectation value of the pair correlation in the QD and (c) linear conductance as a function of the Coulomb interaction $U$ for $\Gamma_S/\Gamma_N=5$. The other parameters are the same as in Fig. \ref{reconUr1}. } \label{reconUr5} \end{figure} Let us focus on the small $U$ region in Fig. \ref{reconUr5}(a) and (b). In this region, $\widetilde{\Gamma}_S$ decreases monotonically, which reduces the absolute value $|\langle d^{\dag}_{\uparrow }d^{\dag}_{\downarrow }+ d^{}_{\downarrow}d^{}_{ \uparrow } \rangle |$, as is the case of $\Gamma_S/\Gamma_N=1$. On the other hand, $\widetilde{\Gamma}_N$ remains almost unchanged unlike the case of $\Gamma_S/\Gamma_N=1$. This result indicates that the position of the Andreev resonances approaches the Fermi energy, keeping its resonance width unchanged. However, when the value of $\widetilde{\Gamma}_S$ gets close to that of $\widetilde{\Gamma}_N$, $\widetilde{\Gamma}_N$ also begins to decrease. For larger $U$, both $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$ approach $0$, as is the case of $\Gamma_S/\Gamma_N=1$. Comparing $\widetilde{\Gamma}_S$ and $\widetilde{\Gamma}_N$ in Fig. \ref{reconUr5}(a) with the conductance shown in Fig. \ref{reconUr5}(c), we see that the conductance has the maximum when the condition $\widetilde{\Gamma}_S=\widetilde{\Gamma}_N$ is satisfied. Summarizing the above results, we can say that the conductance shows the characteristic $U$-dependence accompanied by the maximum structure, around which the strength of two effective resonances is exchanged: we have $\widetilde{\Gamma}_S > \widetilde{\Gamma}_N$ in the small $U$ region ($U/\Gamma_N<10$), while $\widetilde{\Gamma}_S < \widetilde{\Gamma}_N$ in the large $U$ region ($U/\Gamma_N>10$). 
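The behavior summarized above follows directly from eq. \eqref{eq:Condu-half}: the conductance is invariant under $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N \to \widetilde{\Gamma}_N/\widetilde{\Gamma}_S$ and is maximal, $G=4e^2/h$, exactly at $\widetilde{\Gamma}_S=\widetilde{\Gamma}_N$. A minimal numerical sketch (in units of $G_0=4e^2/h$):

```python
def conductance_symmetric(r, G0=1.0):
    """G of eq. (eq:Condu-half) in units of G0 = 4e^2/h, with r = GammaS_t/GammaN_t.
    Symmetric under r -> 1/r, with a maximum G = G0 at r = 1."""
    return G0 * 4 * r**2 / (1 + r**2)**2
```

The $r\to 1/r$ symmetry makes explicit why the conductance decreases on both sides of the maximum, whether the Andreev resonances move away from or toward the Fermi energy.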
In the following, we demonstrate that the maximum of the conductance characterizes a crossover between the two distinct singlet regions where either the superconducting correlation or the Kondo correlation is dominant. \subsection{Implications of the conductance maximum} To discuss the condition that the conductance takes the maximum value in more detail, we would like to observe how the conductance changes as a function of the resonance strength $\Gamma_S$ that gives a measure of the superconducting correlation in the QD, as has been done in ref. \citen{Cuevas}. We find it more instructive to summarize our results in two-dimensional plots in the plane of $\Gamma_S$ and $U$, which indeed allows us to elucidate how the electron correlations affect the conductance maximum. Figure \ref{maxplot} (a) is the color-scale representation of the conductance as a function of $U/\Gamma_N$ and $\Gamma_S/\Gamma_N$. We also show the enlarged picture in small $U$ and $\Gamma_S$ region in Fig. \ref{maxplot}(b). \begin{figure}[h] \begin{center} \includegraphics[scale=0.68]{fig4.eps} \end{center} \caption{ (Color online) (a) Color-scale representation of the conductance $G/G_0$, where $G_0=4e^2/h$, as a function of $U/\Gamma_N$ and $\Gamma_S/\Gamma_N$. The dashed line, $\Gamma_S=U/2$, represents the boundary of the singlet-doublet transition for $\Gamma_N=0$. (b) Enlarged picture in the region with small $U/\Gamma_N$ and $\Gamma_S/\Gamma_N$. } \label{maxplot} \end{figure} As shown in Fig. \ref{maxplot}(b), the conductance takes the maximum at $\Gamma_S/\Gamma_N=1$ when $U/\Gamma_N=0$, in accordance with the inset of Fig. \ref{reconUr1}. As $U/\Gamma_N$ increases, the value of $\Gamma_S/\Gamma_N$ giving the maximum of the conductance increases linearly along the line of $\Gamma_S=U/2$ (see Fig. \ref{maxplot}). As mentioned above, the maximum of the conductance reaches the unitary limit ($G_0=4e^2/h$) because of the electron-hole symmetry. 
For instance, around $U/\Gamma_N=10$ and $\Gamma_S/\Gamma_N=5$, the conductance reaches the unitary limit, as already shown in Fig. \ref{reconUr5}(c). Here we consider the physical implication of $\Gamma_S=U/2$, which coincides with the condition that the conductance reaches the unitary limit for large values of $\Gamma_S/\Gamma_N$ and $U/\Gamma_N$. To this end, let us examine the limit of $\Gamma_N\to 0$. When $\Gamma_N=0$, the QD is disconnected from the normal lead, and only connected to the superconducting lead (QD-S). Recall here that the QD-S system is equivalent to a magnetic impurity model embedded in a superconductor, which has been studied for dilute magnetic alloys \cite{Soda,Shiba,Muller,Matsuura}. As discussed in refs. \citen{Soda,Shiba,Muller,Matsuura}, the ground state of the QD-S system is a nonmagnetic singlet or a magnetic doublet. Specifically in the limit of $|\Delta| \to \infty $, the ground state only depends on the ratio of $\Gamma_S/U$. Namely, the ground state is a superconducting spin-singlet state for $\Gamma_S/U>0.5$ or a spin-doublet state for $\Gamma_S/U<0.5$. A transition between these ground states occurs at $\Gamma_S/U=0.5$. We now observe what happens for finite $\Gamma_N$. When the coupling between the QD and the normal lead is introduced, conduction electrons in the normal lead screen the free spin moment to form the Kondo singlet state in the region of $ \Gamma_S/U<0.5$. Therefore, the ground state of the N-QD-S system is always singlet. There are, however, two distinct singlets, i.e. one with superconducting-singlet character for $\Gamma_S/U>0.5$ and the other with Kondo-singlet character for $\Gamma_S/U<0.5$. From the above discussion, we see that the maximum of the conductance in Fig. \ref{maxplot} for large values of $\Gamma_S/\Gamma_N$ and $U/\Gamma_N$ clearly characterizes the crossover between these two different spin-singlet states. 
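For $\Gamma_N=0$ and $|\Delta|\to\infty$, the transition quoted above can be checked by inspecting the local Hamiltonian $H_d^0+H_d^U+H_d^\mathrm{SC}$: in the even-parity sector $\{|0\rangle,\,|\!\uparrow\downarrow\rangle\}$ the eigenvalues are $U/2\pm E_d$, while the doublet ($n_d=1$) has energy $0$, so the singlet is the ground state for $E_d>U/2$ (i.e. $\Gamma_S/U>0.5$ in the symmetric case). The following sketch of this criterion is our own consistency check, not part of the NRG analysis:

```python
import math

def ground_state(eps_d, U, Gamma_S):
    """Atomic-limit (Gamma_N = 0, |Delta| -> inf) ground state of the QD-S system.
    Even-parity eigenvalues are U/2 +- E_d; the doublet (n_d = 1) sits at 0."""
    E_d = math.sqrt((eps_d + U/2)**2 + Gamma_S**2)
    return 'singlet' if U/2 - E_d < 0 else 'doublet'
```
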
We have drawn this conclusion on the assumption that the gap is sufficiently large ($|\Delta| \to \infty $). We believe that this conclusion also holds for an N-QD-S system with a finite gap $\Delta$ of the superconducting lead, though the onset of the transition of the ground state for $\Gamma_N=0$ depends also on $\Delta$ \cite{Oguri2,Oguri3}. Before closing this subsection, we display the conductance in Fig. \ref{congs} in a slightly different way to make the comparison with experiments easier; it is plotted as a function of the ratio $\Gamma_S/U$ for several values of $\Gamma_N/U$, which are both controllable by changing the voltage between the QD and the leads experimentally \cite{Gold,Cronen}. \begin{figure}[h] \begin{center} \includegraphics[scale=0.38]{fig5.eps} \end{center} \caption{ (Color online) Conductance as a function of the ratio $\Gamma_S/U$ for several values of $\Gamma_N/U$, where the Coulomb interaction is fixed as $U/D=2.0\times10^{-2}$. } \label{congs} \end{figure} From Fig. \ref{congs}, we see that as $\Gamma_N/U$ gets smaller, the peak structure becomes sharper and its position approaches $\Gamma_S/U=0.5$. Thus, the peak structure in the conductance for $\Gamma_N/U=0.05$ and $0.005$ in Fig. \ref{congs} clearly characterizes the crossover between the superconducting and Kondo spin-singlet states, as discussed above. On the other hand, as $\Gamma_N/U$ becomes larger, the peak structure is somewhat broadened and its position shifts toward $\Gamma_S/U>0.5$, as shown for $\Gamma_N/U=0.5$ in Fig. \ref{congs}. Note here that the increase of $\Gamma_N/U$ enhances charge fluctuations in the QD, making the Kondo correlation weak. Therefore, the crossover between the superconducting and Kondo singlet states becomes smeared, and accordingly the position of the conductance maximum deviates from $\Gamma_S/U=0.5$. 
\subsection{Away from the symmetric case} Finally, we discuss the conductance as a function of the energy level in the QD, $\varepsilon_d$, away from the electron-hole symmetric case ($\varepsilon_d+U/2\neq 0$). Note that the energy level in the QD can be changed by the gate-voltage control. A deviation from the electron-hole symmetric case is represented by $H_d^0$ in eq. \eqref{Hamipart}, as stated in \S \ref{sec:model}. In this case, the position of the Andreev resonances is given by $\widetilde{E}_d$ in eq. \eqref{eq:Ed_til} instead of $\widetilde{\Gamma}_S$. Following the way outlined in the symmetric case, we discuss the characteristics of the conductance in connection with the ground-state nature of the system. \begin{figure}[h] \begin{center} \includegraphics[scale=0.75]{fig6.eps} \end{center} \caption{ (Color online) Color-scale representation of the conductance $G/G_0$ ($G_0=4e^2/h$) as a function of $\varepsilon_d/U$ and $\Gamma_S/U$. In this calculation, we set $\Gamma_N/U=0.05$ and $U/D=2.0\times10^{-2}$. The dashed line is $\{\left(\varepsilon_d/U+1/2\right)^2+ (\Gamma_S/U)^2\}^{1/2}=1/2$, which gives the boundary of the singlet-doublet transition for $\Gamma_N=0$. } \label{maxploted} \end{figure} In Fig. \ref{maxploted} we show the conductance in the color-scale representation as a function of $\varepsilon_d/U$ and $\Gamma_S/U$. We also draw a half circle given by $\{\left(\varepsilon_d/U+1/2\right)^2+(\Gamma_S/U)^2\}^{1/2}=1/2$, which denotes the boundary of the singlet-doublet transition for $\Gamma_N=0$. The inside of the half circle is the spin-doublet region at $\Gamma_N=0$, which is replaced with the Kondo spin-singlet region in the presence of any finite $\Gamma_N$. On the other hand, the outside is the superconducting spin-singlet region. From Fig. \ref{maxploted}, we see that the conductance for $\Gamma_S/U<0.5$ has large values along the half circle. 
This means that the conductance as a function of $\varepsilon_d$ has two peaks \cite{Cuevas}. Note here that a deviation from the electron-hole symmetric case ($\varepsilon_d/U \neq -0.5$) makes the Kondo correlation weaker. Therefore, the two-peak structure of the conductance for $\Gamma_S/U<0.5$ indicates the crossover of superconducting-Kondo-superconducting singlet states. On the other hand, for $\Gamma_S/U>0.5$, the ground state is the superconducting singlet state for any values of $\varepsilon_d/U$. As seen in eqs. \eqref{eq:E_d} and \eqref{eq:Ed_til}, a deviation from the symmetric case drives the position of the Andreev resonances ($\widetilde{E}_d$) away from the Fermi energy. In the superconducting singlet region ($\Gamma_S/U>0.5$), $\widetilde{E}_d/\widetilde{\Gamma}_{N}>1$ is satisfied, so that the deviation leads to the reduction of the conductance, as is the case of $\widetilde{\Gamma}_S/\widetilde{\Gamma}_N>1$ in the symmetric case. Thus, the conductance as a function of $\varepsilon_d$ has a single maximum at $\varepsilon_d/U=-0.5$, although this peak is not a sign of the crossover between two singlet states unlike the peaks for $\Gamma_S/U<0.5$. \section{Summary} We have investigated transport properties of an N-QD-S system using the NRG method. Especially, we have focused on the limiting case of $|\Delta| \to \infty $ with particular emphasis on the Andreev reflection which arises inside the superconducting gap. Adapting the Bogoliubov transformation to the simplified model, we have first demonstrated that our system with superconductivity can be mapped on an effective Anderson impurity model without superconductivity, which enables us to describe the low-energy properties in terms of the local Fermi liquid theory. To clarify the influence of the Coulomb interaction on the transport due to the Andreev reflection, we have calculated the conductance as a function of the Coulomb interaction $U$. 
For the ratio $\Gamma_S/\Gamma_N>1$, the conductance has the maximum in its $U$-dependence while for $\Gamma_S/\Gamma_N \le 1$ it decreases monotonically, which is in accordance with Cuevas \textit{et al.}'s results. We have also calculated the renormalized parameters to discuss the conductance in connection with the Andreev resonances around the Fermi energy. Through the analysis using the renormalized parameters, we have found that the maximum of the conductance gives an indication of the crossover between two distinct types of singlet ground states. To observe the nature of the crossover in detail, we have studied the transport properties by focusing on the changes of the Coulomb interaction $U$ and the resonance strength $\Gamma_S$. In particular, starting from the special case with $\Gamma_N=0$, i.e. the QD system only coupled to the superconducting lead, we have shown that the conductance maximum clearly characterizes the crossover between the Kondo singlet state and the superconducting singlet state. It has been further elucidated that the gate-voltage dependence of the conductance shows different behavior depending on the value of $\Gamma_S$; there are two peaks in the gate-voltage dependence characterizing the crossover of superconducting-Kondo-superconducting singlet regions for $\Gamma_S/U < 0.5$, whereas only a single maximum appears in the superconducting singlet region for $\Gamma_S/U >0.5$. In this paper, we have clarified several characteristic transport properties on the assumption that the superconducting gap is sufficiently large. Nevertheless we believe that our main conclusion, {\it i.e.} the conductance maximum clearly characterizes the crossover between two distinct types of singlet ground states, holds even for general N-QD-S systems with a finite gap $\Delta$. In this case, however, the crossover between two distinct types of singlet states depends also on $\Delta$, as stated in \S \ref{sec:result}. 
In the near future, we expect that the characteristic maximum structure found in the conductance due to the Andreev reflection will be observed experimentally. \section*{Acknowledgment} We would like to thank Y. Nisikawa and T. Suzuki for valuable discussions. A part of the computations was performed at the Supercomputer Center at the Institute for Solid State Physics, University of Tokyo. The work was partly supported by a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science and Technology of Japan. Y. Tanaka is supported by JSPS Research Fellowships for Young Scientists. A. Oguri is supported by a Grant-in-Aid from JSPS.
\section{Introduction} The theory of the strong interaction, quantum chromodynamics (QCD), has a global chiral $U(N_{f})_{R} \times U(N_{f})_{L}$ symmetry for $N_{f}$ flavors of massless quarks. This symmetry is spontaneously broken in the vacuum, which has important consequences for hadron phenomenology. Due to the confinement of color charges, low-energy hadronic properties, such as masses, decay widths, and scattering lengths, cannot be inferred from perturbative QCD calculations. Therefore, effective chiral models are widely used to study the vacuum properties of hadrons. Viable candidates should obey a well-defined set of low-energy theorems \cite{meissner,gasioro,pisarski}, but they may still differ in some interesting aspects, such as the generation of the nucleon mass and the behavior at non-zero temperature, $T$, and chemical potential, $\mu$. A nucleon mass term $\sim m_{N} \bar{\Psi} \Psi$ explicitly breaks the chiral $U(N_{f})_{R} \times U(N_{f})_{L}$ symmetry and thus should not occur in a chiral linear sigma model. Therefore, in the standard linear sigma model of Refs.\ \cite{gasioro,lee}, the nucleon mass is (mostly) generated by the chiral condensate, $\langle\overline{q}q\rangle$. (A small contribution also arises from the explicit breaking of chiral symmetry due to the non-zero current quark masses.) Similarly, in the framework of QCD sum rules, Ioffe \cite{Ioffe:1981kw} formulated a connection between the quark condensate and the nucleon mass, now called the Ioffe formula: $m_{N}\sim-4\pi^{2} \Lambda_{B}^{-2} \, \langle\overline{q}q\rangle$, where $\Lambda_{B} \simeq 1$ GeV. However, other condensates also exist, e.g.\ the gluon condensate, and it is not yet known to what extent they contribute to the nucleon mass \cite{othersumrules}.
This problem can be studied in a chiral model via the so-called mirror assignment for the chiral partner of the nucleon, which was first discussed in Ref.\ \cite{lee} and extensively analyzed in Refs.\ \cite{DeTar:1988kn,jido}. In this assignment, there exists a chirally invariant mass term $\sim m_{0}$ which does \emph{not} originate from the quark condensate. The mirror assignment has subsequently been used in Ref.\ \cite{zschiesche} to study the properties of cold and dense nuclear matter. In this work we consider a linear sigma model with global chiral $U(2)_{R}\times U(2)_{L}$ symmetry which includes scalar and pseudoscalar mesons as well as vector and axial-vector mesons \cite{denis}. We extend this model by including the nucleon and its chiral partner in the mirror assignment. The most natural candidate for the chiral partner of the nucleon is the resonance $N(1535)$, which is the lightest state with the correct quantum numbers ($J^{P}=\frac{1}{2}^{-}$) listed in the PDG \cite{PDG}. We also investigate two other possibilities: the well-identified resonance $N(1650)$ and a speculative, very broad, and not yet discovered resonance with a mass of about $1.2$ GeV, which has been proposed in Ref.\ \cite{zschiesche}. We first study their axial charges, which have been the focus of interest in recent studies of hadron phenomenology [see Ref.\ \cite{Glozman} and refs.\ therein]. We show that, in the present model, including (axial-) vector mesons drastically changes the relations of the original model \cite{DeTar:1988kn}. Without (axial-) vector mesons, $N$ and $N^{\ast}$ have opposite axial charges, $g_{A}^{N}=-g_{A}^{N^{\ast}}\leq1$. [We recall that, in the so-called \textquotedblleft naive assignment\textquotedblright, where the partner of the nucleon transforms in the same way as the nucleon, one has $g_{A}^{N}=g_{A}^{N^{\ast}}=1$ \cite{jido}].
With (axial-) vector mesons, this is no longer true and we are free to adjust the two axial charges independently, employing experimental knowledge about $g_{A}^{N}$ and recent lattice QCD data for $g_{A}^{N^{\ast}}$ \cite{Takahashi}. Using the decays $N^{\ast}\rightarrow N \pi$ and $a_{1} \rightarrow\pi\gamma$ to determine the other parameters of the model, we find a mass parameter $m_{0} \sim 500$ MeV. This value lies between the one derived in Ref.\ \cite{DeTar:1988kn} and the one from Ref.\ \cite{zschiesche}. We then test our model by studying the decay $N^{\ast}\rightarrow N \eta$ and pion-nucleon scattering. For $N(1535)$ as the chiral partner of the nucleon, the decay width $N^{\ast}\rightarrow N \eta$ comes out too small, while for $N(1650)$ it agrees well with experimental data. Pion-nucleon scattering has been studied in a large variety of approaches [see Refs.\ \cite{piN,mojzis,ellis,matsui} and refs.\ therein]. Here, we evaluate the scattering lengths in the framework of the mirror assignment. We find that the isospin-odd $s$-wave scattering length $a_{0}^{(-)}$ is in good agreement with experimental data, while the isospin-even scattering length $a_{0}^{(+)}$ depends strongly on the value of the sigma meson mass. Finally, we discuss two possible extensions of our work. The first is an enlarged mixing scenario. A second pair of chiral partners is added, e.g.\ $N(1440)$ and $N(1650)$, which also mix with $N(939)$ and $N(1535)$. The second is the generalization of the chirally invariant mass term $\sim m_{0}$ to a dilatation-invariant mass term. In this case, we argue that $m_{0}$ is a sum of two contributions, arising from the tetraquark and the gluon condensates, respectively. The dilatation-invariant mass term also couples a tetraquark state to the nucleon. We discuss possible implications for nuclear physics and the behavior of the nucleon mass at non-zero temperature. This paper is organized as follows.
In Sec.\ \ref{II} we present the Lagrangian of our model and the expressions for the axial charges, the decay widths $N^{*} \rightarrow N \pi$ and $N^{*} \rightarrow N \eta$, and the $s$-wave scattering lengths. Section \ref{III} contains our results. In Sec.\ \ref{IV}, we present a short summary of our work and discuss the two possible extensions mentioned above, i.e., the enlarged mixing scenario and the dilatation-invariant mass term. Details of our calculations are relegated to the Appendices. Our units are $\hbar = c = 1$; the metric tensor is $g^{\mu\nu} = \mathrm{diag}(+,-,-,-)$. \section{The model and its implications} \label{II} \subsection{The Lagrangian} In this section we present the chirally symmetric linear sigma model considered in this work. It contains scalar, pseudoscalar, vector, and axial-vector fields, as well as nucleons and their chiral partners, including all globally symmetric terms up to fourth order, see Refs.\ \cite{denis,Urban:2001ru}. While higher-order terms are in principle possible, we do not consider them here. In fact, one can argue that they should be absent in dilatation-invariant theories, cf.\ the discussion in Sec.\ \ref{IV}. The scalar and pseudoscalar fields are included in the matrix \begin{equation} \label{scalars}\Phi= \sum_{a=0}^{3} \phi_{a} t_{a} = (\sigma+i\eta_{N})\,t^{0} +(\vec{a}_{0}+i\vec{\pi}) \cdot\vec{t}\;, \end{equation} where $\vec{t}=\vec{\tau}/2$, with the vector of Pauli matrices $\vec{\tau}$, and $t^{0}=\mathbf{1}_{2}/2$. Under the global $U(2)_{R} \times U(2)_{L}$ chiral symmetry, $\Phi$ transforms as $\Phi\rightarrow U_{L} \Phi U_{R}^{\dagger}$. The vector and axial-vector fields are represented by the matrices \begin{subequations} \label{vectors} \begin{align} V^{\mu} & = \sum_{a=0}^{3} V_{a}^{\mu}t_{a} = \omega^{\mu}\, t^{0} +\vec{\rho}^{\mu} \cdot\vec{t}\;, \\ A^{\mu} & = \sum_{a=0}^{3} A_{a}^{\mu}t_{a} = f_{1}^{\mu}\, t^{0} +\vec{a}_{1}^{\mu} \cdot\vec{t}\;.
\end{align} \end{subequations} From these fields, we define right- and left-handed vector fields $R^{\mu}\equiv V^{\mu}- A^{\mu}$, $L^{\mu}\equiv V^{\mu}+ A^{\mu}$. Under global $U(2)_{R} \times U(2)_{L}$ transformations, these fields behave as $R^{\mu}\rightarrow U_{R} R^{\mu}U_{R}^{\dagger}\, , \; L^{\mu}\rightarrow U_{L} L^{\mu}U_{L}^{\dagger}$. The identification of mesons with particles listed in Ref.\ \cite{PDG} is straightforward in the pseudoscalar and (axial-) vector sectors, as already indicated in Eqs.\ (\ref{scalars}), (\ref{vectors}): the fields $\vec{\pi}$ and $\eta_{N}$ correspond to the pion and the $SU(2)$ counterpart of the $\eta$ meson, $\eta_{N} \equiv(\overline{u}u+\overline{d}d)/\sqrt{2}$, with a mass of about $700$ MeV. This value can be obtained by ``unmixing'' the physical $\eta$ and $\eta^{\prime}$ mesons, which also contain $\overline{s} s$ contributions. The fields $\omega^{\mu}$ and $\vec{\rho}^{\mu}$ represent the $\omega(782)$ and $\rho(770)$ vector mesons, respectively, and the fields $f_{1}^{\mu}$ and $\vec{a}_{1}^{\mu}$ represent the $f_{1}(1285)$ and $a_{1}(1260)$ axial-vector mesons, respectively. (In principle, the physical $\omega$ and $f_{1}$ states also contain $\overline{s} s$ contributions; however, their admixture is negligibly small.) Unfortunately, the identification of the $\sigma$ and $\vec{a}_{0}$ fields is controversial, the possibilities being the pairs $\{f_{0}(600),a_{0}(980)\}$ and $\{f_{0}(1370),a_{0}(1450)\}$. In Sec.\ \ref{IVb} a more detailed discussion of this problem is presented. In the present work, the scalar assignment affects only the isospin-even $\pi N$ scattering length and we study its dependence on the sigma mass.
The Lagrangian describing the meson fields reads \begin{align} \mathcal{L}_{\mathrm{mes}} & = \mathrm{Tr}\left[ (D_{\mu}\Phi)^{\dagger}(D^{\mu}\Phi) -\mu^{2}\Phi^{\dagger}\Phi-\lambda_{2}\left( \Phi^{\dagger}\Phi\right)^{2}\right] -\lambda_{1}\left( \mathrm{Tr}[\Phi^{\dagger}\Phi]\right)^{2} +c\,(\det\Phi^{\dagger}+\det\Phi)\nonumber\\ & +h_{0}\,\mathrm{Tr}[(\Phi^{\dagger}+\Phi)] -\frac{1}{4}\mathrm{Tr}\left[ (L^{\mu\nu})^{2}+(R^{\mu\nu})^{2}\right] +\frac{m_{1}^{2}}{2}\,\mathrm{Tr}\left[ (L^{\mu})^{2}+(R^{\mu})^{2}\right] \nonumber\\ & +\frac{h_{1}}{2}\,\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right] \mathrm{Tr}\left[ (L^{\mu})^{2}+(R^{\mu})^{2}\right] +h_{2}\,\mathrm{Tr}\left[ \Phi^{\dagger} L_{\mu}L^{\mu}\Phi+\Phi R_{\mu}R^{\mu}\Phi^{\dagger}\right] +2h_{3}\,\mathrm{Tr}\left[ \Phi R_{\mu}\Phi^{\dagger} L^{\mu}\right] \nonumber\\ & +\mathcal{L}_{3}+\mathcal{L}_{4}\;, \label{meslag} \end{align} where $D^{\mu}\Phi=\partial^{\mu} \Phi+ig_{1}(\Phi R^{\mu}-L^{\mu}\Phi)$, and $R^{\mu\nu} =\partial^{\mu}R^{\nu}-\partial^{\nu}R^{\mu}$, $L^{\mu\nu}=\partial^{\mu}L^{\nu}-\partial^{\nu}L^{\mu}$ are the field-strength tensors of the vector fields. The terms $\mathcal{L}_{3}$ and $\mathcal{L}_{4}$ describe three- and four-particle interactions of the (axial-) vector fields \cite{denis}, which are not important for this work. We list them in Appendix \ref{appendixC}. For $c = h_{0} =0$, the Lagrangian $\mathcal{L}_{\mathrm{mes}}$ is invariant under global $U(2)_{R} \times U(2)_{L}$ transformations. For $c \neq0$, the $U(1)_{A}$ symmetry, where $A=L-R$, is explicitly broken, thus parametrizing the $U(1)_{A}$ anomaly of QCD. For $h_{0} \neq0$, the $U(2)_{R} \times U(2)_{L}$ symmetry is explicitly broken to the vectorial subgroup $U(2)_{V}$, where $V=L+R$. The chiral condensate $\varphi= \left\langle 0\left\vert \sigma\right\vert 0\right\rangle =Zf_{\pi}$ emerges upon spontaneous chiral symmetry breaking in the mesonic sector.
The parameter $f_{\pi}=92.4$ MeV is the pion decay constant and $Z$ is the wavefunction renormalization constant of the pseudoscalar fields \cite{denis,Strueber}, which is also related to $\pi$-$a_{1}$ mixing; see Appendix \ref{appendixA} for more details. We now turn to the baryon sector, which involves the baryon doublets $\Psi_{1}$ and $\Psi_{2}$, where $\Psi_{1}$ has positive parity and $\Psi_{2}$ negative parity. In the mirror assignment they transform as follows: \begin{equation} \Psi_{1R}\longrightarrow U_{R}\Psi_{1R}\, ,\; \Psi_{1L}\longrightarrow U_{L}\Psi_{1L}\, ,\; \Psi_{2R}\longrightarrow U_{L}\Psi_{2R}\, ,\; \Psi_{2L}\longrightarrow U_{R}\Psi_{2L}\;, \label{mirror} \end{equation} i.e., $\Psi_{2}$ transforms in a ``mirror way'' under chiral transformations \cite{lee,DeTar:1988kn}. These field transformations allow us to write down a baryonic Lagrangian with a chirally invariant mass term for the fermions, parametrized by $m_{0}$: \begin{align} \mathcal{L}_{\mathrm{bar}} & = \overline{\Psi}_{1L}i\gamma_{\mu}D_{1L}^{\mu}\Psi_{1L} +\overline{\Psi}_{1R}i\gamma_{\mu}D_{1R}^{\mu}\Psi_{1R} +\overline{\Psi}_{2L}i\gamma_{\mu}D_{2R}^{\mu}\Psi_{2L} +\overline{\Psi}_{2R}i\gamma_{\mu}D_{2L}^{\mu}\Psi_{2R}\nonumber\\ & -\widehat{g}_{1} \left( \overline{\Psi}_{1L}\Phi\Psi_{1R} +\overline{\Psi}_{1R}\Phi^{\dagger}\Psi_{1L}\right) -\widehat{g}_{2} \left( \overline{\Psi}_{2L}\Phi^{\dagger}\Psi_{2R} +\overline{\Psi}_{2R}\Phi\Psi_{2L}\right) \nonumber\\ & -m_{0}(\overline{\Psi}_{1L}\Psi_{2R} -\overline{\Psi}_{1R}\Psi_{2L} -\overline{\Psi}_{2L}\Psi_{1R}+\overline{\Psi}_{2R}\Psi_{1L})\;, \label{nucl lagra} \end{align} where $D_{1R}^{\mu}=\partial^{\mu}-ic_{1}R^{\mu}$, $D_{1L}^{\mu}=\partial^{\mu}-ic_{1}L^{\mu}$, and $D_{2R}^{\mu}=\partial^{\mu}-ic_{2}R^{\mu}$, $D_{2L}^{\mu}=\partial^{\mu}-ic_{2}L^{\mu}$ are the covariant derivatives for the nucleonic fields, with the coupling constants $c_{1}$ and $c_{2}$. (Note that in the case of local chiral symmetry one has $c_{1}=c_{2}=g_{1}$.)
The interaction of the baryonic fields with the scalar and pseudoscalar mesons is parametrized by $\widehat{g}_{1}$ and $\widehat{g}_{2}$. The term proportional to $m_{0}$ generates a mixing between the fields $\Psi_{1}$ and $\Psi_{2}$. The physical fields $N$ and $N^{\ast}$, referring to the nucleon and its chiral partner, arise by diagonalizing the corresponding mass matrix in the Lagrangian (\ref{nucl lagra}): \begin{equation} \left( \begin{array}[c]{c} N\\ N^{\ast} \end{array} \right) =\frac{1}{\sqrt{2\cosh\delta}}\left( \begin{array}[c]{cc} e^{\delta/2} & \gamma_{5}e^{-\delta/2}\\ \gamma_{5}e^{-\delta/2} & -e^{\delta/2} \end{array} \right) \left( \begin{array}[c]{c} \Psi_{1}\\ \Psi_{2} \end{array} \right) \;. \label{mixing} \end{equation} The masses of the nucleon and its partner are obtained as: \begin{equation} m_{N,N^{\ast}}= \sqrt{m_{0}^{2} +\left[ \frac{1}{4}(\widehat{g}_{1}+\widehat{g}_{2})\varphi\right]^{2}} \pm\frac{1}{4}(\widehat{g}_{1}-\widehat{g}_{2})\varphi\;. \label{nuclmasses} \end{equation} The coupling constants $\widehat{g}_{1,2}$ are uniquely determined by the values of $m_{N}$, $m_{N^{\ast}}$, and the parameter $m_{0}$: \begin{equation} \label{g12}\widehat{g}_{1,2}=\frac{1}{\varphi} \left[ \pm(m_{N}-m_{N^{\ast}})+\sqrt{(m_{N}+m_{N^{\ast}})^{2} - 4m_{0}^{2}} \right] \;. \end{equation} From Eq.\ (\ref{nuclmasses}) one observes that, in the chirally restored phase where $\varphi\rightarrow0$, the masses of the nucleon and its partner become degenerate, $m_{N}=m_{N^{\ast}}=m_{0}$. The mass splitting is generated by the breaking of chiral symmetry, $\varphi\neq0$. Note that the nucleon mass \emph{cannot} be expressed as $m_{N}=m_{0} +\lambda\,\varphi$; thus, $m_{0}$ should not be interpreted as a linear contribution to the nucleon mass. Such a linearization is possible only when either $m_{0}$ or the chiral condensate dominates. As we shall see, this does not happen and both quantities are sizable.
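Equations (\ref{nuclmasses}) and (\ref{g12}) can be cross-checked numerically: the couplings reconstructed from the physical masses must reproduce those masses. The sketch below is our own consistency check; the inputs (in GeV) use $m_{0}=0.5$ GeV as obtained later in the text, while $\varphi = Zf_{\pi} = 0.15$ GeV is an illustrative placeholder, not a fitted value:

```python
import math

def couplings(m_n, m_ns, m0, phi):
    """g-hat_{1,2} from Eq. (g12), given m_N, m_N*, m_0, and phi = Z f_pi."""
    root = math.sqrt((m_n + m_ns) ** 2 - 4.0 * m0 ** 2)
    g1 = ((m_n - m_ns) + root) / phi   # upper sign
    g2 = (-(m_n - m_ns) + root) / phi  # lower sign
    return g1, g2

def masses(g1, g2, m0, phi):
    """m_{N,N*} from Eq. (nuclmasses); upper sign gives m_N."""
    avg = math.sqrt(m0 ** 2 + ((g1 + g2) * phi / 4.0) ** 2)
    split = (g1 - g2) * phi / 4.0
    return avg + split, avg - split

# illustrative inputs (GeV): nucleon, N(1535), m_0 ~ 0.5 GeV
g1, g2 = couplings(0.939, 1.535, 0.5, 0.15)
m_n, m_ns = masses(g1, g2, 0.5, 0.15)
```

One also sees directly that $\widehat{g}_{1}<\widehat{g}_{2}$ for $m_{N}<m_{N^{\ast}}$, and that sending $\varphi\to0$ at fixed couplings collapses both masses to $m_{0}$.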
The parameter $\delta$ in Eq.\ (\ref{mixing}) is related to the masses and the parameter $m_{0}$ by the expression: \begin{equation} \cosh\delta=\frac{m_{N}+m_{N^{\ast}}}{2m_{0}}. \end{equation} When $\delta\rightarrow\infty$, corresponding to $m_{0}\rightarrow0$, there is no mixing and $N=\Psi_{1}\,, \;N^{\ast}=-\Psi_{2}$. In this case, $m_{N}=\widehat{g}_{1}\varphi/2$ and $m_{N^{\ast}}=\widehat{g}_{2}\varphi/2$, thus the nucleon mass is solely generated by the chiral condensate as in the standard linear sigma model of Refs.\ \cite{gasioro,lee} with the naive assignment for the baryons. \subsection{Axial coupling constants} The expressions for the axial coupling constants of the nucleon and the partner are derived in Appendix \ref{appendixB}. The result is: \begin{equation} \label{axialcoupl}g_{A}^{N}=\frac{1}{2\cosh\delta} \left( g_{A}^{(1)}\,e^{\delta}+ g_{A}^{(2)}\,e^{-\delta} \right) \;,\;\; g_{A}^{N^{\ast}}=\frac{1}{2\cosh\delta} \left( g_{A}^{(1)}\,e^{-\delta} + g_{A}^{(2)}\,e^{\delta}\right) \;, \end{equation} where \begin{equation} g_{A}^{(1)}=1-\frac{c_{1}}{g_{1}}\left( 1-\frac{1}{Z^{2}}\right) ,\;\; g_{A}^{(2)}=-1+\frac{c_{2}}{g_{1}}\left( 1-\frac{1}{Z^{2}}\right) \label{gas} \end{equation} are the axial coupling constants of the bare fields $\Psi_{1}$ and $\Psi_{2}$. At this point, it should be emphasized that the interaction with the (axial-) vector mesons generates additional contributions to $g_{A}^{N}$ and $g_{A}^{N^{\ast}}$, proportional to $c_{1}$ and $c_{2}$. We now discuss several limiting cases, using the fact that $Z$ is required to be larger than 1, cf.\ Eq.\ (\ref{g1}): \begin{enumerate} \item[(i)] \emph{Local chiral symmetry:} In this case, the coupling constants $c_{1}=c_{2}=g_{1}$. This implies $g_{A}^{N}=- g_{A}^{N^{\ast}} =Z^{-2}\tanh\delta<1$, which is at odds with the experimental value $g_{A}^{N}=1.267\pm0.004$ \cite{PDG}.
\item[(ii)] \emph{Decoupling of vector mesons:} Here, $Z=1$ and $c_{1}=c_{2}=0$, and we obtain the results of Ref.\ \cite{DeTar:1988kn}: $g_{A}^{N}=-g_{A}^{N^{\ast}}=\tanh\delta$. In the limit $\delta\rightarrow\infty$, this reduces to $g_{A}^{N}=1$ and $g_{A}^{N^{\ast}}=-1.$ Also in this case the experimental value for $g_{A}^{N}$ cannot be obtained for any choice of the parameters. Moreover, a positive value of $g_{A}^{N^{\ast}}$, as found in the lattice simulation of Ref.\ \cite{Takahashi}, is also impossible. \item[(iii)] \emph{Decoupling of the chiral partner:} This is achieved in the limit $\delta\rightarrow\infty$, where $N=\Psi_{1}$ and $N^{\ast}=-\Psi_{2}$. One has $g_{A}^{N}=g_{A}^{(1)}$ and $g_{A}^{N^{\ast}}=g_{A}^{(2)}$. Since $Z>1$, it is evident that the ratio $c_{1}/g_{1}$ must be negative in order to obtain the experimental value $g_{A}^{N}=1.267\pm0.004$ \cite{PDG}. \end{enumerate} Note that, in the case of local chiral symmetry, the axial charge of the nucleon can also be correctly reproduced when introducing dimension-6 terms in the Lagrangian $\mathcal{L}_{\mathrm{bar}}$, cf.\ Refs.\ \cite{meissner,gasioro,ellis,Ko,Wilms}, because the coefficients of these so-called Weinberg-Tomozawa (WT) terms \cite{weinberg,tomozawa} can be adjusted accordingly. However, such WT terms naturally arise when integrating out the axial-vector mesons from our Lagrangian, just as in chiral perturbation theory \cite{mojzis}. In this sense, it would be double-counting to simultaneously consider axial-vector mesons and WT terms. Our generalization to a \emph{global\/} chiral symmetry allows a description of the axial charge without explicitly introducing WT terms. \subsection{Decay widths} We now turn to the decays $N^{\ast}\rightarrow N \pi$ and $N^{\ast}\rightarrow N \eta$. The calculation of the tree-level decay width for $N^{\ast}\rightarrow N \pi$ from the Lagrangian (\ref{nucl lagra}) is straightforward.
However, the decay $N^{\ast}\rightarrow N\eta$ cannot be directly evaluated because of the absence of the $s$ quark. In order to proceed, we have to take into account that \begin{equation} \eta=\eta_{N}\cos\phi_{P}+\eta_{S}\sin\phi_{P}\;, \end{equation} where $\eta_{N}\equiv(\overline{u}u+\overline{d}d)/\sqrt{2}$, $\eta_{S}\equiv\overline{s}s$ and $\phi_{P}$ lies between $-32^{\circ}$ and $-45^{\circ}$ \cite{mixingangle}. Then, the decay amplitude $\mathcal{A}_{N^{\ast}\rightarrow N\eta}$ can be expressed as \begin{equation} \mathcal{A}_{N^{\ast}\rightarrow N\eta}= \mathcal{A}_{N^{\ast}\rightarrow N\eta_{N}}\, \cos\phi_{P} +\mathcal{A}_{N^{\ast}\rightarrow N\eta_{S}}\, \sin\phi_{P}\;. \end{equation} In the following, we assume that the OZI-suppressed amplitude $\mathcal{A}_{N^{\ast}\rightarrow N\eta_{S}}$ is small, so that to good approximation the decay width $\Gamma_{N^{\ast}\rightarrow N \eta} \simeq\cos^{2} \phi_{P} \, \Gamma_{N^{\ast}\rightarrow N \eta_{N}}$. Note that the \emph{physical\/} $\eta$ meson mass, $m_{\eta}=547$ MeV, enters $\Gamma_{N^{\ast}\rightarrow N \eta}$. Therefore, also the decay width $\Gamma_{N^{\ast}\rightarrow N \eta_{N}}$ has to be evaluated for the physical mass $m_{\eta}$, not for $m_{\eta_{N}}$. The expression for the decay width $N^{\ast}\rightarrow NP$, where $P=\pi,\eta$, is (for details, see Appendix \ref{appendixB}) \begin{align} \Gamma_{N^{\ast}\rightarrow NP} & =\lambda_{P}\,\frac{k_{P}}{2\pi}\,\frac{m_{N}}{m_{N^{\ast}}}\,\frac{Z^{2}}{32\,\cosh^{2}\delta}\,\left\{ w^{2}\,(c_{1}+c_{2})^{2}\,\left[ (m_{N^{\ast}}^{2}-m_{N}^{2}-m_{P}^{2})\,\frac{E_{P}}{m_{N}}+m_{P}^{2}\,\left( 1-\frac{E_{N}}{m_{N}}\right) \right] \right. \nonumber\\ & +\left. (\widehat{g}_{1}-\widehat{g}_{2})^{2}\,\left( \frac{E_{N}}{m_{N}}+1\right) +2w\,(\widehat{g}_{1}-\widehat{g}_{2})(c_{1}+c_{2})\,\left( \frac{m_{N^{\ast}}^{2}-m_{N}^{2}-m_{P}^{2}}{2m_{N}}+E_{P}\right) \right\} \;.
\label{npidecay} \end{align} Here, $\lambda_{\pi}=3$, $\lambda_{\eta}=\cos^{2}\phi_{P}$, and $w\equiv g_{1}\varphi/m_{a_{1}}^{2}$; the momentum of the pseudoscalar particle is given by \begin{equation} k_{P}=\frac{1}{2m_{N^{\ast}}}\sqrt{(m_{N^{\ast}}^{2}-m_{N}^{2}-m_{P}^{2})^{2}-4\,m_{N}^{2}m_{P}^{2}}\;. \label{kpion} \end{equation} The energies are $E_{P}=\sqrt{k_{P}^{2}+m_{P}^{2}}$ and $E_{N}=\sqrt{k_{P}^{2}+m_{N}^{2}}$, because the momenta of the nucleon and the pseudoscalar particle are equal in magnitude in the rest frame of $N^{\ast}$. It is important to stress that, in the mirror assignment, the only way to obtain a nonzero $N^{\ast}N\pi$ coupling is a nonzero value of the parameter $m_{0}$. In fact, the coupling is proportional to $\cosh^{-1}\delta\propto m_{0}$, i.e., when $m_{0}$ increases, the decay width also increases. In the naive assignment, in which the field $\Psi_{2}$ transforms just like the field $\Psi_{1}$, a term proportional to $m_{0}$ is not possible, because it would break chiral symmetry. In this case a mixing term of the form $\propto\overline{\Psi}_{2}\gamma^{5}\Phi\Psi_{1}+$ h.c.\ is allowed. This leads to a term $\propto\overline{\Psi}_{2}\gamma^{5}(\sigma+i\gamma^{5}\vec{\pi}\cdot\vec{t})\Psi_{1}$, where the pion is coupled to $\Psi_{1}$ and $\Psi_{2}$ in a chirally symmetric way. However, the very same term also generates a mixing of $\Psi_{2}$ and $\Psi_{1}$ due to the nonzero vacuum expectation value of the field $\sigma=\sigma_{0}$. When performing the diagonalization one obtains two physical fields $N$ and $N^{\ast}$, to be identified with the nucleon and a negative-parity state such as $N^{\ast}(1535)$. In terms of the physical fields $N^{\ast}$ and $N$ the coupling $\overline{N}^{\ast}i\vec{\pi}\cdot\vec{t}N$ vanishes; for the explicit calculation see Ref.\ \cite{jido}.
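As a sanity check of Eq.\ (\ref{kpion}), the momentum $k_{P}$ must satisfy energy conservation in the $N^{\ast}$ rest frame, $E_{N}+E_{P}=m_{N^{\ast}}$. The following short sketch is our own; the masses (in GeV) are illustrative round numbers:

```python
import math

def decay_momentum(m_star, m_n, m_p):
    """k_P from Eq. (kpion) for the two-body decay N* -> N + P."""
    arg = (m_star**2 - m_n**2 - m_p**2) ** 2 - 4.0 * m_n**2 * m_p**2
    return math.sqrt(arg) / (2.0 * m_star)

m_star, m_n, m_pi = 1.535, 0.939, 0.138
k = decay_momentum(m_star, m_n, m_pi)
e_p = math.sqrt(k**2 + m_pi**2)   # pseudoscalar energy
e_n = math.sqrt(k**2 + m_n**2)    # nucleon energy
# in the N* rest frame the two momenta balance, so E_N + E_P = m_N*
```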
Thus, in the naive assignment and in the minimal framework with only one multiplet of scalar and pseudoscalar fields the decay $N^{\ast}\rightarrow N\pi$ vanishes. One could go beyond this minimal set-up: a possibility is to include the (axial-) vector mesons into the Lagrangian of the naive assignment. In this way a nonzero derivative coupling $\propto\overline{N}^{\ast}\gamma^{\mu}\partial_{\mu}\vec{\pi}\cdot\vec{t}\,N$ survives. A complete study of this scenario, involving also the scattering lengths, is in preparation. Alternatively, the inclusion of a second (or more) multiplet(s) of (pseudo-)scalar mesons, see Refs.\ \cite{cohen,dmh}, coupled to the baryon fields also leads to a nonvanishing coupling between $N^{\ast}$, the nucleon, and the pion. \subsection{$\pi N$ scattering lengths} The general form of the $\pi N$ scattering amplitude is \cite{matsui}: \begin{equation} T_{ab}=\left[ A^{(+)}+\frac{1}{2}(q_{1}^{\mu}+q_{2}^{\mu}) \gamma_{\mu}\,B^{(+)}\right] \, \delta_{ab} + \left[ A^{(-)}+\frac{1}{2}(q_{1}^{\mu}+q_{2}^{\mu}) \gamma_{\mu}\,B^{(-)}\right] \,i\epsilon_{bac}\tau_{c}\;, \label{Tab} \end{equation} where the subscripts $a$ and $b$ refer to the isospin of the initial and final states and the superscripts $(+)$ and $(-)$ denote the isospin-even and isospin-odd amplitudes, respectively. The $\pi N$ scattering amplitudes, $A^{(\pm)}$ and $B^{(\pm)}$, evaluated from the Lagrangian (\ref{nucl lagra}) at tree-level, involve exchange of $\sigma$ and $\rho$ mesons in the $t$-channel and intermediate $N$ and $N^{\ast}$ states in the $s$- and $u$-channels, cf.\ Fig.\ \ref{piN}. \begin{figure}[h] \begin{center} \includegraphics[width=15cm]{piNscattering.eps} \end{center} \caption{Tree-level diagrams contributing to $\pi N$ scattering. Dashed lines represent the pion, the bold dashed line the $\sigma$ meson, the wavy line the $\rho$ meson, full lines the nucleon, and double full lines the $N^{\ast}$, respectively.
\label{piN}}
\end{figure} The $s$-wave scattering lengths, $a_{0}^{(\pm)}$, are given by: \begin{equation} \label{apm}a_{0}^{(\pm)}=\frac{1}{4\pi(1+m_{\pi}/m_{N})} \, \left( A_{0}^{(\pm)}+m_{\pi}B_{0}^{(\pm)}\right) \;, \end{equation} where the subscript $0$ at the amplitudes $A^{(\pm)}$, $B^{(\pm)}$ indicates that they are taken at threshold, i.e., for the following values of the Mandelstam variables $s,t,u$: $s=(m_{N}+m_{\pi})^{2}$, $t=0$, $u=(m_{N}-m_{\pi})^{2}$. The explicit expression for the isospin-even scattering length can be obtained from Eq.\ (\ref{apm}) by applying the Feynman rules resulting from the Lagrangians (\ref{meslag}) and (\ref{nucl lagra}) to the diagrams shown in Fig.\ \ref{piN}. The result is: \begin{align} a_{0}^{(+)} & = \frac{1}{4\pi(1+\frac{m_{\pi}}{m_{N}})} \left( \frac{Z}{2\cosh\delta}\right)^{2} \left( - \frac{1}{2}\left[ \widehat{g}_{1}-\widehat{g}_{2} + \frac{Zf_{\pi}}{2}w(c_{1}+c_{2}) (\widehat{g}_{2}-\widehat{g}_{1})\right]^{2}\, \frac{(m_{N}+m_{N^{\ast}})(m_{N}^{2}+m_{\pi}^{2} -m_{N^{\ast}}^{2})}{(m_{N}^{2}+m_{\pi}^{2}-m_{N^{\ast}}^{2})^{2} -4m_{N}^{2}m_{\pi}^{2}} \right. \nonumber\\ & - w(c_{1}+c_{2})(\widehat{g}_{1} - \widehat{g}_{2}) +\frac{Z f_{\pi}}{4}\,(\widehat{g}_{1}-\widehat{g}_{2})\, w^{2}(c_{1}+c_{2})^{2} - w (c_{1}e^{\delta}-c_{2}e^{-\delta}) (\widehat{g}_{1}e^{\delta}+\widehat{g}_{2}e^{-\delta})\nonumber\\ & + w^{2}m_{N} (c_{1}e^{\delta}-c_{2}e^{-\delta})^{2} +(\widehat{g}_{1}e^{\delta}-\widehat{g}_{2}e^{-\delta}) \frac{\cosh\delta}{ Z f_{\pi}} \left\{ 1 + \frac{m_{\pi}^{2}}{m_{\sigma}^{2}}\, \frac{1}{Z^{4}} \left[ Z^{2}-2 +2 (Z^{2} - 1) \left( 1 - \frac{Z^{2}m_{1}^{2} }{m_{a_{1}}^{2}} \right) \right] \right\} \nonumber\\ & + m_{\pi} \left\{ \left[ \widehat{g}_{1}-\widehat{g}_{2} +\frac{Zf_{\pi}}{2}w(c_{1}+c_{2}) (\widehat{g}_{2}-\widehat{g}_{1})\right]^{2} \, \frac{m_{N}m_{\pi}}{(m_{N}^{2}+m_{\pi}^{2} -m_{N^{\ast}}^{2})^{2}-4m_{N}^{2}m_{\pi}^{2}} \right. \nonumber\\ & \left. \left.
+\, \left[ \widehat{g}_{1}e^{\delta}+\widehat{g}_{2}e^{-\delta} -2m_{N}w(c_{1}e^{\delta}-c_{2}e^{-\delta})\right]^{2} \, \frac{m_{N}}{m_{\pi}} \, \frac{1}{m_{\pi}^{2}-4m_{N}^{2}} \right\} \right) \; . \end{align} Similarly, the expression for the isospin-odd scattering length is given by: \begin{align} a_{0}^{(-)} & =\frac{1}{4\pi(1+\frac{m_{\pi}}{m_{N}})} \left( \frac{Z}{2\cosh\delta}\right)^{2} \left( \left[ \widehat{g}_{1}-\widehat{g}_{2} +\frac{Zf_{\pi}}{2}w(c_{1}+c_{2}) (\widehat{g}_{2}-\widehat{g}_{1})\right]^{2} \, \frac{(m_{N}+m_{N^{\ast}})m_{N}m_{\pi}}{(m_{N}^{2} +m_{\pi}^{2}-m_{N^{\ast}}^{2})^{2}-4m_{N}^{2}m_{\pi}^{2}} \right. \nonumber\\ & + \frac{m_{\pi}}{2} \left\{ \left[ \widehat{g}_{1}-\widehat{g}_{2} +\frac{Z f_{\pi}}{2} w (c_{1}+c_{2})(\widehat{g}_{2} -\widehat{g}_{1})\right]^{2} \, \frac{m_{N}^{2}+m_{\pi}^{2}-m_{N^{\ast}}^{2}}{ (m_{N}^{2}+m_{\pi}^{2}-m_{N^{\ast}}^{2})^{2}-4m_{N}^{2} m_{\pi}^{2}} \right. \nonumber\\ & - \left[ \widehat{g}_{1}e^{\delta} +\widehat{g}_{2}e^{-\delta}-2m_{N}w(c_{1}e^{\delta}-c_{2} e^{-\delta})\right]^{2}\, \frac{1}{m_{\pi}^{2}-4m_{N}^{2}}\nonumber\\ & - \left. \left. w^{2} \left[ (c_{1}+c_{2})^{2}- (c_{1}e^{\delta}-c_{2}e^{-\delta})^{2}\right] +\frac{g_{1}}{m_{\rho}^{2}} \frac{4 \cosh\delta}{Z^{2}} (c_{1}e^{\delta}-c_{2} e^{-\delta})\right\} \right) \; . \end{align} Although it is not obvious from these expressions, one can show that the $s$-wave scattering lengths $a_{0}^{(\pm)}$ vanish in the chiral limit, as required by low-energy theorems for theories with spontaneously broken chiral symmetry. \section{Results and discussion} \label{III} In this section we present our results. We first discuss the case where the resonance $N(1535)$ is interpreted as the chiral partner of the nucleon. This is the most natural assignment because this resonance is the lightest with the correct quantum numbers. We then consider some important limiting cases.
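At threshold, the quoted Mandelstam values must satisfy the constraint $s+t+u=2m_{N}^{2}+2m_{\pi}^{2}$. The short sketch below verifies this and also converts a scattering length from $\mathrm{MeV}^{-1}$ to the conventional $m_{\pi}^{-1}$ units; the masses are illustrative inputs, not the fitted values of this work:

```python
import math

m_N, m_pi = 939.0, 138.0  # MeV; illustrative PDG-like values

# Threshold kinematics quoted below Eq. (apm)
s = (m_N + m_pi)**2
t = 0.0
u = (m_N - m_pi)**2

# Mandelstam constraint: s + t + u equals the sum of squared external masses
assert math.isclose(s + t + u, 2.0 * m_N**2 + 2.0 * m_pi**2)

# Convert a scattering length from MeV^-1 to m_pi^-1 units
a0_minus = 6.04e-4                # MeV^-1, the model value quoted below
print(round(a0_minus * m_pi, 3))  # -> 0.083, i.e. about 0.083 m_pi^-1
```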
Finally, we also discuss two different assignments: the resonance $N(1650)$, which is the next heavier state with the correct quantum numbers listed in Ref.\ \cite{PDG}, and a \emph{speculative\/} candidate $N(1200)$ with a mass $M_{N^{\ast}}\sim1200$ MeV and a very large width $\Gamma_{N^{\ast}\rightarrow N\pi} \gtrsim800$ MeV, such as to have avoided experimental detection up to now \cite{zschiesche}. \subsection{$N(1535)$ as partner} The resonance $N(1535)$ has a mass $m_{N^{\ast}}=(1535 \pm10)$ MeV \cite{PDG}. The theoretical expressions for $g_{A}^{N}$, $g_{A}^{N^{\ast}}$, $\Gamma_{N^{\ast}\rightarrow N\pi}$, $\Gamma_{a_{1}\rightarrow\pi\gamma}$ depend on the four parameters $c_{1}$, $c_{2}$, $Z$, and $m_{0}$. Here, $Z$ is the only parameter entering from the meson sector, see Appendix \ref{appendixA}. We determine the parameters $c_{1}$, $c_{2}$, $Z$, and $m_{0}$ by using the experimental results \cite{PDG} for the decay width $\Gamma_{N^{\ast }\rightarrow N\pi}^{\exp}=(67.5\pm23.6)$ MeV, the radiative decay of the $a_{1}(1260)$ meson, $\Gamma_{a_{1}\rightarrow\pi\gamma}^{\exp}=(0.640\pm 0.246)$ MeV, and the axial coupling constant $g_{A}^{N,\exp} =1.267\pm0.004$, as well as the lattice result $g_{A}^{N^{\ast},\text{latt}}=0.2\pm0.3$ \cite{Takahashi}. With the help of a standard $\chi^{2}$ procedure it is also possible to determine the errors for the obtained parameters: \begin{equation} c_{1}=-3.0\pm0.6 \; , \;\; c_{2}=11.6\pm3.6\; , \;\; Z=1.67\pm0.2\;, \end{equation} and \begin{equation} m_{0}=(460\pm136)\, \mathrm{MeV}\;. \end{equation} The coupling constants $\widehat{g}_{1}$ and $\widehat{g}_{2}$ can be deduced from Eq.\ (\ref{g12}), \begin{equation} \widehat{g}_{1}=11.0\pm1.5\;,\;\; \widehat{g}_{2}=18.8\pm2.4\;. \end{equation} The value obtained for $m_{0}$ is larger than the one originally found in Ref.\ \cite{DeTar:1988kn} and points to a sizable contribution of other condensates to the nucleon mass. 
However, because of the non-linear relation (\ref{nuclmasses}) between the nucleon mass, $m_{0}$, and the chiral condensate, switching off $m_{0}$ does not simply lower the nucleon mass by an amount $m_{0}$ below its physical value; instead, one finds $m_{N} = \widehat{g}_{1}\varphi/2 \simeq 850$ MeV, only slightly smaller than 939 MeV. The Ioffe formula is thus still approximately justified also in this context. On the other hand, when $\varphi$ is varied from 0 to its physical value $Zf_{\pi}$, the nucleon mass rises from $m_{0}=460$ MeV to 939 MeV. Interestingly, the coupling constant $c_{2}$, which parametrizes the interaction of the nucleon's partner with the (axial-) vector mesons, is larger than the constant $c_{1}$, which parametrizes the interaction of the nucleon with the (axial-) vector mesons. Nevertheless, when compared with the coupling $g_{1}\sim6$ (similar in all models with vector mesons and pions), one finds $\left\vert c_{1}\right\vert \sim g_{1}/2$ and $c_{2}\sim2g_{1}$, i.e., they are related to $g_{1}$ by numerical factors of order one. A direct comparison of $c_{1}$ and $c_{2}$ gives $\left\vert c_{1}\right\vert \sim c_{2}/4$. We now test the validity of our model by considering the $\pi N$ scattering lengths [some preliminary results were already presented in Ref.\ \cite{Gallas:2009yr}]. The quantity $a_{0}^{(-)}$ depends on $c_{1}$, $c_{2}$, $Z$, and $m_{0}$, and in addition on $m_{\rho}$ and $g_{1}$. The latter is a function of $Z$ and $m_{a_{1}}$, cf.\ Eq.\ (\ref{g1}). The values of $m_{\rho}$ and $m_{a_{1}}$ are known to reasonably good precision \cite{PDG}, and thus our uncertainty in determining $a_{0}^{(-)}$ is small. (This will be different for $a_{0}^{(+)}$, which also depends on the poorly known value of the $\sigma$ meson mass, $m_{\sigma}$.)
We obtain \begin{equation} a_{0}^{(-)}=(6.04 \pm0.63) \cdot10^{-4}\, \mathrm{MeV}^{-1}\;, \label{a0mst} \end{equation} in agreement with the experimental value measured by the ETH Z\"urich-Neuch\^{a}tel-PSI collaboration in pionic hydrogen and deuterium X-ray experiments \cite{schroder}: \begin{equation} a_{0,\exp}^{(-)}=(6.4\pm0.1) \cdot10^{-4}\, \mathrm{MeV}^{-1}\;. \end{equation} An even better agreement is expected when including the $\Delta$ resonance \cite{ellis}. The scattering length $a_{0}^{(+)}$ depends also on $c_{1}$, $c_{2}$, $Z$, and $m_{0}$, but in addition on $m_{1}$ and $m_{\sigma}$. The former parametrizes the contribution to the $\rho$ mass which does not originate from the chiral condensate: $m_{\rho}^{2}=m_{1}^{2}+\frac{\varphi^{2}}{2}(h_{1}+h_{2}+h_{3})$. Notice that in the present theoretical framework with global chiral symmetry the KSFR relation \cite{ksfr} is obtained for $m_{1}=0$, $h_{1}+h_{2}+h_{3}=g_{1}^{2}/Z^{2}$. A physically reasonable range of values for $m_{1}$ is between $0$ and $m_{\rho}$. For the lower boundary, the mass of the $\rho$ meson is exclusively generated by chiral symmetry breaking, and thus it becomes massless when $\varphi\rightarrow0$. This is similar to Georgi's vector limit \cite{Georgivectorlimit} or Brown-Rho scaling \cite{BrownRho}. In principle, the mass of the $\sigma$ meson varies over a wide range of values; we could choose $m_{\sigma}\sim0.4$ GeV or $1.37$ GeV, depending on whether the $\sigma$ is assigned to $f_{0}(600)$ or to $f_{0}(1370)$. Since the allowed range of values for $m_{1}$ and $m_{\sigma}$ is large, we choose to plot the scattering length $a_{0}^{(+)}$ as a function of $m_{1}$ for different choices of $m_{\sigma}$; the result is shown in Fig.\ \ref{a0plus}. The experimental result \cite{schroder} \begin{equation} a_{0,\exp}^{(+)}=(-8.8\pm7.2) \cdot10^{-6}\; \mathrm{MeV}^{-1} \end{equation} is shown as grey (online: yellow) band.
One observes that for small values of $m_{\sigma}$ one requires a large value of $m_{1}$ in order to reproduce the experimental data. For increasing $m_{\sigma}$, the required values of $m_{1}$ decrease. For $m_{\sigma} \agt 1.37$ GeV, $a_{0,\exp}^{(+)}$ cannot be reproduced for any value of $m_{1}$. This, however, does not exclude a heavy $\sigma$ meson; rather, it indicates that an additional light scalar-isoscalar resonance needs to be included, as discussed in Sec.\ \ref{IVb}. \begin{figure}[h] \begin{center} \includegraphics[scale = 0.80]{fig2.eps} \end{center} \caption{The isospin-even scattering length $a_{0}^{(+)}$ as a function of $m_{1}$ for fixed values of $m_{\sigma}$, for the assignment $N^{\ast}=N(1535)$ (left panel) and $N^{\ast}=N(1650)$ (right panel). The experimental range is shown by the grey (online: yellow) band. \label{a0plus}}
\end{figure} For the decay $N^{\ast}\rightarrow N\eta$, we obtain with Eq.\ (\ref{npidecay}) the result \begin{equation} \Gamma_{N^{\ast}\rightarrow N\eta}= \left( 10.9 \pm3.8\right) \, \mathrm{MeV}\;, \end{equation} where the error also takes into account the uncertainty in the pseudoscalar mixing angle $\phi_{P}=-38.7^{\circ} \pm6^{\circ}$. We observe that $\Gamma_{N^{\ast}\rightarrow N\eta}$ is about a factor 7 smaller than $\Gamma_{N^{\ast}\rightarrow N\pi}$, which is in reasonable agreement with the naive expectation based on the relation $\lambda_{\eta}/\lambda_{\pi}= \cos^{2} \phi_{P}/3 \simeq0.097$. However, it is clearly smaller than the experimental value $\Gamma_{N^{\ast}\rightarrow N\eta}^{\exp}=(78.7 \pm24.3)$ MeV \cite{PDG}. The agreement could be improved if one generalizes our discussion to the $SU(3)$ case and includes a large OZI-violating contribution, or if one considers an enlarged mixing scenario as discussed in Sec.\ \ref{IVa}. \subsection{Limiting cases} We now consider three important limiting cases. In all of these $N(1535)$ is taken as the chiral partner of the nucleon.
\begin{enumerate} \item[(i)] \emph{Local chiral symmetry:} This case is obtained by setting $g_{1}=c_{1}=c_{2}$ and $h_{1}=h_{2}=h_{3}=0$. As a consequence, $m_{\rho}=m_{1}$, $m_{a_{1}}^{2} = m_{\rho}^{2} + (g_{1} \varphi)^{2}$, and $Z=m_{a_{1}}/m_{\rho}$. Using the experimental values for $\Gamma_{N^{\ast}\rightarrow N\pi}$ and $\Gamma_{a_{1}\rightarrow\gamma\pi}$ one obtains \begin{equation} m_{0}=\left( 730 \pm229\right) \, \mathrm{MeV}\;. \end{equation} As a consequence, $g_{A}^{N}=-g_{A}^{N^{\ast}} \equiv Z^{-2} \tanh\delta= 0.33 \pm0.02$, both at odds with experimental and lattice data. The scattering length $a_{0}^{(-)}$ is in the range of the experimental data, $a_{0}^{(-)}= (4.9 \pm1.7) \cdot10^{-4}$ MeV$^{-1}$. Since $m_{1}= m_{\rho}$ is fixed, the isospin-even scattering length only depends on $m_{\sigma}$. Thus, for a given value of $m_{\sigma}$, we obtain a single value with theoretical errors: $a_{0}^{(+)}= \left( 7.06 \pm3.12\right) \cdot10^{-6}$ MeV$^{-1}$ for $m_{\sigma}=1.37$ GeV and $a_{0}^{(+)}=\left( 4.46 \pm0.11\right) \cdot10^{-5}$ MeV$^{-1}$ for $m_{\sigma}=0.44$ GeV, which is outside the range of the experimental error band. As already argued in Refs.\ \cite{denis,Wilms}, we conclude that the case of local chiral symmetry (in the present model without higher-order terms) is not capable of properly reproducing low-energy phenomenology. \item[(ii)] \emph{Decoupling of vector mesons:} This corresponds to $g_{1}=c_{1}=c_{2}=h_{1}=h_{2}=h_{3}=0$, and thus $Z=1$ and $w=0$. Using the decay width $\Gamma_{N^{\ast}\rightarrow N\pi}=(67.5\pm23.6)$ MeV one obtains \begin{equation} m_{0}=\left( 262\pm46\right) \,\mathrm{MeV}\;, \end{equation} in agreement with Ref.\ \cite{DeTar:1988kn}. As a result $g_{A}^{N}=-g_{A}^{N^{\ast}}=0.97\pm0.01$, in disagreement with both experimental and lattice data.
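The relations of the locally symmetric limit can be illustrated numerically. The sketch below uses PDG-like meson masses (assumed inputs; no values are quoted in the text for this step) to extract the combination $g_{1}\varphi$ and $Z=m_{a_{1}}/m_{\rho}$, and checks that the two relations are mutually consistent:

```python
import math

# PDG-like meson masses in MeV (assumed inputs for this sketch)
m_rho, m_a1 = 775.0, 1230.0

# In the local limit: m_a1^2 = m_rho^2 + (g1*varphi)^2 and Z = m_a1/m_rho
g1_phi = math.sqrt(m_a1**2 - m_rho**2)
Z = m_a1 / m_rho

# Consistency: reconstructing m_a1 from (m_rho, g1*varphi) must be exact
assert math.isclose(math.hypot(m_rho, g1_phi), m_a1)
print(round(Z, 2), round(g1_phi))  # -> 1.59 955, i.e. g1*varphi ~ 955 MeV
```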
The description of the scattering lengths also becomes worse: the isospin-odd scattering length is $a_{0}^{(-)}=(5.7\pm0.47)\cdot10^{-4}$ MeV$^{-1}$, which is just outside the experimental error band. Also in this case, the isospin-even scattering length assumes a single value (with theoretical errors) for given $m_{\sigma}$: $a_{0}^{(+)}=\left( 1.08\pm0.05\right) \cdot10^{-4}$ MeV$^{-1}$ for $m_{\sigma}=1.37$ GeV and $a_{0}^{(+)}=\left( -7.55\pm0.19\right) \cdot10^{-4}$ MeV$^{-1}$ for $m_{\sigma}=0.44$ GeV, i.e., two orders of magnitude away from the experimental value. We thus conclude that vector mesons cannot be omitted for a correct description of the pion-nucleon scattering lengths. \item[(iii)] \emph{Decoupling of the chiral partner:} This is obtained by sending $m_{0}\rightarrow0$ or $\delta\rightarrow\infty$. The partner decouples and we are left with a linear $\sigma$ model with vector and axial-vector mesons. The decay width $\Gamma_{N^{\ast}\rightarrow N\pi}$ vanishes in this case, which is obviously at odds with experiment. Using the experimental values for $g_{A}^{N}$ and $\Gamma_{a_{1}\rightarrow\gamma\pi}$ to fix the parameters $c_{1}$ and $Z$ ($c_{2}$ and $\widehat{g}_{2}$ play no role here because of the decoupling of the partner), we obtain \begin{equation} c_{1}=-2.59\pm0.51\;,\;\;Z=1.66\pm0.2\;. \end{equation} The scattering lengths are: \begin{equation} a_{0}^{(-)}=(5.99\pm0.66)\cdot10^{-4}\,\mathrm{MeV}^{-1}\;, \end{equation} and $a_{0}^{(+)}$ shows a behavior similar to that in Fig.\ \ref{a0plus}. We conclude that the role of the partner in improving the scattering lengths is marginal. It could be omitted, unless one wants to consider, in the framework of the mirror assignment, its decay into nucleon and pseudoscalar particles. \end{enumerate} \subsection{Other candidates} In this subsection, we discuss two more exotic possibilities for the chiral partner of the nucleon.
\begin{enumerate} \item[(i)] \emph{$N(1650)$ as partner.} The resonance $N(1650)$ has a mass $m_{N^{\ast}}=(1655\pm15)$ MeV and a decay width $\Gamma_{N^{\ast}\rightarrow N\pi}^{\exp}=(128\pm44)$ MeV \cite{PDG}. The axial coupling constant measured in the lattice simulation of Ref.\ \cite{Takahashi} reads $g_{A}^{N^{\ast},\text{latt}}=0.55\pm0.2$. By following the previous steps we obtain \begin{equation} c_{1}=-3.3\pm0.7\;,\;\; c_{2}=14.8\pm3.4\;,\;\;Z=1.67\pm0.2\;, \end{equation} and \begin{equation} m_{0}=\left( 709\pm157\right) \, \mathrm{MeV}\;. \end{equation} This leads to the coupling constants \begin{equation} \widehat{g}_{1}=9.45\pm1.81\;,\;\; \widehat{g}_{2}=18.68\pm2.68\;. \end{equation} In this case $m_{0}$ is even larger than before. However, as in the case of $N(1535)$, the quantity $\widehat{g}_{1}\varphi/2\simeq730$ MeV is still sizable and similar to $m_{0}$. The result for the isospin-odd scattering length, \begin{equation} a_{0}^{(-)}=(5.90\pm0.46) \cdot10^{-4} \,\mathrm{MeV}^{-1}\;, \end{equation} is similar to the case of $N(1535)$. The isospin-even scattering length behaves similarly to before, cf.\ Fig.\ \ref{a0plus}; however, slightly smaller values of $m_{\sigma}$ are required in order to reproduce the experimental data. The decay width into $N\eta$ is $\Gamma_{N^{\ast}\rightarrow N\eta}=(18.3\pm8.5)$ MeV, which should be compared to $\Gamma_{N^{\ast}\rightarrow N\eta}^{\mathrm{exp}}=(10.7\pm6.7)$ MeV. Thus, in this case the decay width is in agreement with the experimental value. However, we then face the problem of how to describe the $N(1535)$ resonance, cf.\ the discussion in Sec.\ \ref{IVa}. \item[(ii)] \emph{Speculative candidate $N(1200)$ as partner.} We consider a \emph{speculative\/} candidate $N(1200)$ with a mass $m_{N^{\ast}}\sim1200$ MeV and a very large width $\Gamma_{N^{\ast}\rightarrow N\pi}\agt 800$ MeV, such as to have avoided experimental detection up to now.
Its introduction was motivated by properties of nuclear matter \cite{zschiesche}, and the scenario was further investigated in Ref.\ \cite{dexheimer} in the context of the asymmetric nuclear matter present in a neutron star. Regardless of the precise value of the axial coupling constant of the partner (which is unknown for this hypothetical resonance) one obtains $m_{0}>1$ GeV. This, in turn, implies a large interaction of $N$ and $N^{\ast}$. As a consequence, both scattering lengths turn out to be off by two orders of magnitude: $a_{0}^{(-)}\sim 10^{-2}$ MeV$^{-1}$ and $a_{0}^{(+)}\sim10^{-4}$ MeV$^{-1}$. Thus, we are led to \emph{discard\/} the possibility that a hypothetical, not yet discovered $N(1200)$ exists. \end{enumerate} \section{Summary and outlook} \label{IV} In this paper, we investigated a linear sigma model with global chiral $U(2)_{R}\times U(2)_{L}$ symmetry, where the mesonic degrees of freedom are the standard scalar and pseudoscalar mesons and the vector and axial-vector mesons. In addition to the mesons, we included baryonic degrees of freedom, namely the nucleon and its chiral partner, which is incorporated in the model in the so-called mirror assignment. We used this model to study the origin of the mass of the nucleon, the assignment and decay properties of its chiral partner, and the pion-nucleon scattering lengths. The mass of the nucleon results from an interplay of the chiral condensate and a chirally invariant baryonic mass term, proportional to the parameter $m_{0}$. When the chiral partner of the nucleon is identified with the resonance $N^{\ast}\equiv N(1535)$, the value $m_{0}\simeq500$ MeV is obtained as the result of a fitting procedure which involves the three experimentally measured quantities $\Gamma_{N^{\ast}\rightarrow N\pi}$, $\Gamma_{a_{1}\rightarrow\pi\gamma}$, and $g_{A}^{N}$, as well as the quantity $g_{A}^{N^{\ast}}$ evaluated on the lattice.
The isospin-odd scattering length $a_{0}^{(-)}$ is then fixed and found to be in good agreement with experimental data. The isospin-even scattering length depends, in addition, strongly on the mass of the $\sigma$ meson, see Fig.\ \ref{a0plus} and the discussion in Sec.\ \ref{IVb}. The decay width $N^{\ast}\rightarrow N\eta$ turns out to be a factor of eight smaller than the experimental value. The obtained value $m_{0}\simeq500$ MeV implies that a sizable amount of the nucleon mass does not originate from the chiral condensate. As this result is subject to the assumptions and the validity of the employed chiral model, most notably due to the identification of the chiral partner with $N(1535)$ and to the mathematical properties of the mirror assignment, future studies of other scenarios, incorporating new results both from the experiment and the lattice, are necessary to further clarify this important issue of hadron physics. It should also be noted that the results presented in this work are based on a tree-level calculation. The inclusion of loops represents a task for the future. Nevertheless, we expect that the results will not change qualitatively: On the one hand, while the dimensionless couplings of the model $g_{1}$, $c_{1},$ $c_{2},$ $\widehat{g}_{1},$ $\widehat{g}_{2}$ are large, the contribution of loops is suppressed according to large--$N_{c}$ arguments \cite{largenc}. On the other hand, in our model we have included from the very beginning the relevant resonances which contribute as virtual states to processes, thus reducing the effects of loops in the model. To clarify the latter point, consider the $\rho$ meson exchange in $\pi N$ scattering. In an approach in which the $\rho$ meson is not directly included, its contribution could only be obtained after a corresponding loop resummation, while in our approach it is taken directly into account by a tree-level exchange diagram. 
We studied three important limiting cases: (i) In the framework of local chiral symmetry it is not possible to correctly reproduce low-energy phenomenology. (ii) It is not admissible to neglect (axial-) vector mesons. They are crucial in order to obtain a correct description of the axial coupling constants and the $\pi N$ scattering lengths. (iii) The partner $N^{\ast}$ has only a marginal influence on the scattering lengths. We have also tested other assignments for the partner of the nucleon: a broad, not yet discovered partner with a mass of about $1.2$ GeV must be excluded on the basis of scattering data. The well-established resonance $N(1650)$ provides qualitatively similar results to $N(1535)$ and, in this case, the theoretical value of the decay width $N(1650)\rightarrow N\eta$ is in agreement with the experimental one. However, in this scenario it is not clear how $N(1535)$ fits into the baryonic resonance spectrum. This issue is discussed in Sec.\ \ref{IVa} below. In Sec.\ \ref{IVb} we discuss the origin of $m_{0}$ in terms of tetraquark and gluon condensates and the implications for future studies. \subsection{Outlook 1: enlarged mixing scenario} \label{IVa} In this section we briefly describe open problems of the previous results and present a possible outlook to improve the theoretical description. A simultaneous description of both resonances $N(1535)$ and $N(1650)$ requires an extension of the model. In the framework of the mirror assignment, instead of only two bare nucleon fields $\Psi_{1}$ and $\Psi_{2}$ one should include two additional bare fields $\Psi_{3}$ and $\Psi_{4}$ with positive and negative parity, respectively. The latter two are assumed to transform like $\Psi_{1}$ and $\Psi_{2}$ in Eq.\ (\ref{mirror}).
The interesting part of the enlarged Lagrangian is the set of bilinear chirally invariant mass terms: \begin{align} \mathcal{L}_{\text{mass}} & =m_{0}^{(1,2)}\left( \overline{\Psi}_{2}\gamma^{5}\Psi_{1}-\overline{\Psi}_{1}\gamma^{5}\Psi_{2}\right) +m_{0}^{(3,4)}\left( \overline{\Psi}_{4}\gamma^{5}\Psi_{3}-\overline{\Psi}_{3}\gamma^{5}\Psi_{4}\right) \nonumber\\ & +m_{0}^{(1,4)}\left( \overline{\Psi}_{4}\gamma^{5}\Psi_{1}-\overline{\Psi}_{1}\gamma^{5}\Psi_{4}\right) +m_{0}^{(2,3)}\left( \overline{\Psi}_{2}\gamma^{5}\Psi_{3}-\overline{\Psi}_{3}\gamma^{5}\Psi_{2}\right) \;. \end{align} In the limit $m_{0}^{(1,4)}=m_{0}^{(2,3)}=0$ the bare fields $\Psi_{1}$ and $\Psi_{2}$ do not mix with the fields $\Psi_{3}$ and $\Psi_{4}$. The fields $\Psi_{1}$ and $\Psi_{2}$ generate the states $N(939)$ and $N(1535)$, just as described in this paper with $m_{0}^{(1,2)}=m_{0}$, while the fields $\Psi_{3}$ and $\Psi_{4}$ generate the states $N(1440)$ and $N(1650)$, which are regarded as chiral partners. The term proportional to $m_{0}^{(3,4)}$ induces a decay of the form $N(1650)\rightarrow N(1440)\pi$ (or $\eta$), but still $N(1650)$ and $N(1440)$ do not decay into $N\pi(\eta)$. When in addition the coefficients $m_{0}^{(1,4)}$ and $m_{0}^{(2,3)}$ are non-zero, a more complicated mixing scenario involving four bare fields arises. As a consequence, it is possible to account for the decay of both resonances $N(1535)$ and $N(1650)$ into $N\pi(\eta)$. Moreover, it is well conceivable that the anomalously small value of the decay width $N(1535)\rightarrow N\eta$ arises because of destructive interference. Interestingly, a mixing of the bare configurations generating $N(1535)$ and $N(1650)$ is necessary also at the level of the quark model \cite{isgur}. Note that in the framework of the generalized mixing scenario, the fields $N(1535)$ and $N(1650)$ are chiral partners of $N(939)$ and $N(1440)$.
Due to mixing phenomena, it is not possible to isolate the chiral partner of the nucleon, which is present in both resonances $N(1535)$ and $N(1650)$. However, also in this case the nonzero decay widths of both fields $N(1535)$ and $N(1650)$ are obtained as a result of non-vanishing $m_{0}$-like parameters. The mixing scenario outlined above may at first sight not look very useful, because it involves many new parameters. However, a quick counting shows that this is not the case. In addition to the four mass parameters $m_{0}^{(i,j)}$, we have the already discussed parameters $c_{1}$, $c_{2}$, $\widehat{g}_{1}$, and $\widehat{g}_{2}$, plus similar parameters $c_{3}$, $c_{4}$, $\widehat{g}_{3}$, and $\widehat{g}_{4}$ which describe the interactions of $\Psi_{3,4}$ with mesons. These twelve parameters can be used to describe the following 14 quantities: the masses of the states $N\equiv N(939)$, $N(1535)$, $N(1440)$, $N(1650)$, the decay widths $N(1535)\rightarrow N\pi$, $N(1535)\rightarrow N\eta$, $N(1650)\rightarrow N\pi$, $N(1650)\rightarrow N\eta$, $N(1440)\rightarrow N\pi$, $N(1440)\rightarrow N\eta$ (the latter by taking into account the non-zero width of the $N(1440)$ resonance), and the four axial coupling constants $g_{A}^{N}$, $g_{A}^{N(1535)}$, $g_{A}^{N(1440)}$, and $g_{A}^{N(1650)}$. A detailed study of this enlarged scenario, in which the four lightest $J^{P}=\frac{1}{2}^{\pm}$ baryonic resonances are simultaneously included, will be performed in the future. \subsection{Outlook 2: origin of $m_{0}$} \label{IVb} The scattering length $a_{0}^{(+)}$ shows a strong dependence on the mass of the $\sigma$ meson. A similar situation occurs for $\pi\pi$ scattering at low energies \cite{denis}.
While a light $\sigma$ is favoured by the scattering data, many other studies show that the $\sigma$ meson -- as the chiral partner of the pion in the linear sigma model -- should be placed above $1$ GeV and identified with the resonance $f_{0}(1370)$ rather than the light $f_{0}(600)$ [see Refs.\ \cite{klempt,dynrec} and refs.\ therein]. Indeed, also in the framework of the linear sigma model used in this paper, the decay width $f_{0}(600)\rightarrow\pi\pi$ turns out to be too small when the latter is identified with the chiral partner of the pion \cite{denis}. When identifying $\sigma$ with $f_{0}(1370)$, two possibilities are left for $f_{0}(600)$: (i) It is a dynamically generated state arising from the pion-pion interaction. The remaining scalar states below 1 GeV, $f_{0}(980)$, $a_{0}(980)$, and $K_{0}^{*}(800)$, can be interpreted similarly. (ii) The state $f_{0}(600)$ is predominantly composed of a diquark $[u,d]$ (in the flavor and color antitriplet representation) and an antidiquark $[\overline{u},\overline{d}]$, i.e., $f_{0}(600)\simeq[\overline{u},\overline{d}][u,d]$. In this case the light scalar states $f_{0}(600)$, $f_{0}(980)$, $a_{0}(980)$, and $K_{0}^{*}(800)$ form an additional tetraquark nonet \cite{jaffeorig,maiani,tq,tqmix,fariborz}. Note that in both cases the resonance $f_{0}(600)$ -- which is needed to explain $\pi\pi$ and $\pi N$ scattering experiments and also to understand the nucleon-nucleon interaction potential -- is \emph{not} the chiral partner of the pion. In the following we concentrate on the implications of scenario (ii) at a qualitative level, leaving a more detailed study for the future. First, a short digression on the dilaton field is necessary. Dilatation invariance of the QCD Lagrangian in the chiral limit is broken by quantum effects. This situation can be taken into account in the framework of a chiral model by introducing the dilaton field $G$ \cite{salomone}.
The corresponding dilaton potential reflects the trace anomaly of QCD as the underlying theory and has the form $V(G)\propto G^{4}\left( \log\frac{G}{\Lambda_{G}}-\frac{1}{4}\right)$, where $\Lambda_{G}\sim\Lambda_{QCD}$ is the only dimensionful quantity which appears in the full effective Lagrangian in the chiral limit. Due to the non-zero expectation value of $G$, a shift $G\rightarrow G_{0}+G$ is necessary: the fluctuations around the minimum correspond to the scalar glueball, whose mass is placed at $M_{G}\sim1.7$ GeV by lattice QCD calculations \cite{lattglue} and by various phenomenological studies \cite{gluephen}. (Beyond the chiral limit, also the parameter $h_{0}$ in Eq.\ (\ref{meslag}), which describes explicit symmetry breaking due to the non-zero valence quark masses, appears as an additional dimensionful quantity.) We assume that, in the chiral limit, the full interaction potential $V(\Phi,L_{\mu},R_{\mu},\Psi_{1},\Psi_{2},G,\chi)$ is dilatation invariant up to the term $\propto\log\frac{G}{\Lambda_{G}}$ and that it is finite for any finite value of the fields, i.e., only terms of the kind $G^{2}\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right]$, $\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right]^{2}, \ldots$ are retained. By performing the shift $G\rightarrow G_{0}+G$, the term $G^{2}\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right]$ becomes $G_{0}^{2}\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right] + \ldots$, where the dots refer to glueball-meson interactions. Identifying $\mu^{2} \sim G_{0}^{2}$, a term $G_{0}^{2}\mathrm{Tr}\left[ \Phi^{\dagger}\Phi\right]$ is already present in our Lagrangian (\ref{meslag}), but the glueball-hadron interactions are neglected. Note that a term of the kind $G^{-4}\mathrm{Tr}\left[ \partial_{\mu}\Phi^{\dagger} \partial^{\mu}\Phi\right]^{2}$ is not allowed because of our assumption that the potential is finite.
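The stated form of the dilaton potential has its minimum exactly at $G_{0}=\Lambda_{G}$, since $\mathrm{d}V/\mathrm{d}G\propto 4G^{3}\log(G/\Lambda_{G})$. The short sketch below confirms this numerically; the overall constant and the value of $\Lambda_{G}$ are arbitrary placeholders, as only the ratio $G/\Lambda_{G}$ matters:

```python
import math

Lam = 400.0  # placeholder scale in MeV; only the ratio G/Lam matters

def V(G):
    # Dilaton potential with the overall constant set to 1
    return G**4 * (math.log(G / Lam) - 0.25)

def dV(G):
    # Analytic derivative: dV/dG = 4 G^3 log(G/Lam)
    return 4.0 * G**3 * math.log(G / Lam)

# The derivative vanishes at G = Lam, i.e. the condensate is G0 = Lam ...
assert abs(dV(Lam)) < 1e-9

# ... and it is a minimum: V is larger slightly to either side
eps = 1e-3 * Lam
assert V(Lam) < V(Lam - eps) and V(Lam) < V(Lam + eps)
```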
Following this line of arguments, our Lagrangian (\ref{meslag}) cannot contain operators of order higher than four \cite{dynrec}, because such operators would have to be generated from terms with inverse powers of $G$. E.g., upon shifting $G$, the above-mentioned term would generate an order-eight operator of the kind $G_{0}^{-4}\mathrm{Tr}\left[ \partial_{\mu}\Phi^{\dagger} \partial^{\mu}\Phi\right]^{2}$. Let us now turn to the mass term $\sim m_{0}$ in Eq.\ (\ref{nucl lagra}), \begin{equation} \label{massterm}m_{0}(\overline{\Psi}_{1L}\Psi_{2R} -\overline{\Psi}_{1R}\Psi_{2L} -\overline{\Psi}_{2L}\Psi_{1R} +\overline{\Psi}_{2R}\Psi_{1L})\;. \end{equation} The parameter $m_{0}$ has the dimension of mass and multiplies the only term in the baryon sector that is not dilatation invariant. In order to render this term dilatation invariant while simultaneously preserving chiral symmetry, we can couple it to the chirally invariant dilaton field $G$. Moreover, in the framework of $U(2)_{R}\times U(2)_{L}$ chiral symmetry also the above-mentioned tetraquark field, denoted as $\chi\equiv\frac{1}{2} \lbrack \overline{u},\overline{d}][u,d]$, is invariant under chiral transformations. We then write the following dilatation-invariant interaction term: \begin{equation} \label{tqbar}\left( a\chi+bG\right) \, (\overline{\Psi}_{1L}\Psi_{2R}-\overline{\Psi}_{1R}\Psi_{2L} -\overline{\Psi}_{2L}\Psi_{1R}+\overline{\Psi}_{2R}\Psi_{1L})\;, \end{equation} where $a$ and $b$ are dimensionless coupling constants. When shifting both fields around their vacuum expectation values, $\chi\rightarrow\chi_{0}+\chi$ and $G\rightarrow G_{0}+G$, we recover the term (\ref{massterm}) of our Lagrangian by identifying \begin{equation} m_{0}=a\chi_{0}+bG_{0}\;, \end{equation} where $\chi_{0}$ and $G_{0}$ are the tetraquark and gluon condensates, respectively.
Note that the present discussion holds true also in the highly excited part of the baryon sector: as described in Ref.\ \cite{Glozman,cohen}, the heavier the baryons, the less important becomes the quark condensate $\varphi$: for two heavy chiral partners $B$ and $B^{*}$, one expects a mass degeneracy of the form $m_{B} \simeq m_{B^{\ast}}\simeq m_{0}$. We expect the gluon condensate $G_{0}$ to be the dominant term in this sector, $m_{0} \simeq b G_{0}$. In fact, the tetraquark condensate is also related to the chiral condensate in the vacuum \cite{tqmix,Heinz:2008cv} and -- while potentially important for low-lying states like the nucleon and its partner -- its role should also diminish when considering very heavy baryons. We now return to the nucleon and its partner and concentrate on their interaction with the tetraquark field $\chi$. From the point of view of low-energy phenomenology, the tetraquark field $\chi$ is very interesting because the corresponding excitation is expected to be lighter than the gluonium and the scalar quarkonium states, for instance $m_{\chi}\sim M_{f_{0}(600)}\sim0.6$ GeV. A nucleon-tetraquark interaction of the kind $a\chi(\overline{\Psi}_{1L}\Psi_{2R} -\overline{\Psi}_{1R}\Psi_{2L} -\overline{\Psi}_{2L}\Psi_{1R} +\overline{\Psi}_{2R}\Psi_{1L})$ arising from Eq.\ (\ref{tqbar}) would then contribute to pion-pion and nucleon-pion scattering and possibly improve the agreement with experimental data. Moreover, there is also another interesting consequence: in virtue of Eq.\ (\ref{tqbar}) the state $\chi$ appears as an intermediate state in nucleon-nucleon interactions and, due to its small mass, is likely to play an important role in the one-meson exchange picture for the nucleon-nucleon potential. This raises the interesting question whether a \emph{tetraquark\/} is the scalar state which mediates the middle-range attraction among nucleons, in contrast to the standard picture where this task is performed by a quark-antiquark state. 
Let us further elucidate this picture by a simple and intuitive example. Let us consider the nucleon as a quark-diquark bound state. The standard picture of one-boson exchange in the nucleon-nucleon interaction consists of exchanging the two quarks between the nucleons. However, one could well imagine that instead of the quarks one exchanges the two diquarks between the nucleons. Note that these diquarks are in the correct color and flavor antitriplet representations in order to form a tetraquark of the type suggested by Jaffe \cite{jaffeorig}, such as the meson $\chi$ discussed here. A full analysis must include a detailed study of mixing between all scalar states. As a last subject we discuss how the nucleon mass might evolve at non-zero temperature and density. In particular, in the high-density region of the so-called ``quarkyonic phase'' \cite{quarkyonic} hadrons are confined but chiral symmetry is (almost) restored, i.e., the chiral condensate (approximately) vanishes. What are the properties of the nucleon in this phase? In the framework of the Lagrangian (\ref{nucl lagra}), when $\varphi\rightarrow0$, the masses of both the nucleon and its partner approach the constant value $m_{0}$. A first naive answer is therefore that we expect a nucleon mass of about $500$ MeV in this phase. The situation is, however, more complicated than this. In fact, as discussed in this section, the term $m_{0}$ is not simply a constant but is related to other condensates. The behavior of these condensates at non-zero $T$ and $\mu$ is then crucial for the determination of the nucleon mass. Interestingly, in Ref.\ \cite{Heinz:2008cv} it is shown that the tetraquark condensate does \emph{not} vanish but rather increases with increasing $T$. A future study at non-zero $T$ and $\mu$ must include both the tetraquark and the gluon condensate in the same framework. 
\section*{Acknowledgements} The authors thank L.\ Glozman, T.\ Kunihiro, S.\ Leupold, D.\ Parganlija, R.\ Pisarski, and T.\ Takahashi for useful discussions. The work of S.G.\ was supported by GSI Darmstadt under the F\&E program. The work of F.G.\ was partially supported by BMBF. The work of D.H.R.\ was supported by the ExtreMe Matter Institute EMMI. This work was (financially) supported by the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse.
\makeatletter \renewcommand{\section}{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus -.2ex}{2.3 ex plus .2ex}{\centering\large\bf}} \makeatother \theoremstyle{plain} \newtheorem{theorem}{\noindent\bf Theorem}[section] \newtheorem{lemma}[theorem]{\noindent\bf Lemma} \newtheorem{corollary}[theorem]{\noindent\bf Corollary} \newtheorem{proposition}[theorem]{\noindent\bf Proposition} \newtheorem{claim}[theorem]{\noindent\bf Claim} \theoremstyle{definition} \newtheorem{definition}[theorem]{\noindent\bf Definition} \newtheorem{remark}[theorem]{\noindent\bf Remark} \newtheorem{example}[theorem]{\noindent\bf Example} \newtheorem{notation}[theorem]{\noindent\bf Notation} \newtheorem{assertion}[theorem]{\noindent\bf Assertion} \newtheorem*{remark0}{\noindent\sc Remark} \newcommand{\Vol}[0]{\operatorname{Vol}} \newcommand{\Pic}[0]{\operatorname{Pic}} \newcommand{\Sym}[0]{\operatorname{Sym}} \newcommand{\End}[0]{\operatorname{End}} \newcommand{\rank}[0]{\operatorname{rank}} \newcommand{\codim}[0]{\operatorname{codim}} \newcommand{\sign}[0]{\operatorname{sign}} \newcommand{\Coker}[0]{\operatorname{Coker}} \newcommand{\Ker}[0]{\operatorname{Ker}} \newcommand{\Image}[0]{\operatorname{Im}} \newcommand{\Supp}[0]{\operatorname{Supp}} \newcommand{\I}[1]{\mathcal{J}(#1)} \newcommand{\lla}[0]{{\langle\!\hspace{0.02cm} \!\langle}} \newcommand{\rra}[0]{{\rangle\!\hspace{0.02cm}\!\rangle}} \makeatletter \renewcommand{\theequation}{% \thesection.\arabic{equation}} \@addtoreset{equation}{section} \makeatother \renewcommand{\proofname}{\noindent\bf Proof.} \makeatletter \def\address#1#2{\begingroup 
\noindent\parbox[t]{7.8cm}{ \small{\scshape\ignorespaces#1}\par\vskip1ex \noindent\small{\itshape E-mail address} \/: #2\par\vskip4ex}\hfill \endgroup} \makeatother \title{Logarithmic Chow semistability of polarized toric manifolds} \author{Satoshi Nakamura} \date{} \begin{document} \maketitle \footnote{ 2010 \textit{Mathematics Subject Classification}. Primary 53C56; Secondary 14L24. } \footnote{ \textit{Key words and phrases}. Toric manifolds, Weight polytopes, Log Chow semistability, Log K-semistability, Conical K\"{a}hler Einstein metric. } \begin{abstract} Logarithmic Chow semistability is a notion of Geometric Invariant Theory for pairs consisting of a variety and a divisor on it. In this paper we introduce an obstruction to semistability for polarized toric manifolds and their toric divisors. As an application, we show the implication from asymptotic log Chow semistability to log K-semistability by combinatorial arguments. Furthermore we give a non-semistable example which admits a conical K\"{a}hler Einstein metric. \end{abstract} \section{Introduction} Let $X$ be a compact complex manifold and $L\to X$ an ample line bundle. We call the pair $(X,L)$ a polarized manifold. The well-known Donaldson-Tian-Yau conjecture claims that $X$ admits a constant scalar curvature K\"{a}hler metric in $c_1(L)$ if and only if $(X,L)$ is stable in the sense of Geometric Invariant Theory (GIT for short). This conjecture has a logarithmic generalization, which claims that $X$ admits a constant scalar curvature conical K\"{a}hler metric in $c_1(L)$ with cone angle $2\pi\beta$ along a divisor $D$ if and only if $(X,D,L,\beta)$ is stable in the sense of GIT. One of the difficulties of this problem is that there are various notions of stability. Therefore it is important to see the relations among the various stabilities and the relation between each stability and the existence of such metrics. 
In this paper, we mainly consider logarithmic Chow semistability and logarithmic K-semistability for pairs consisting of a polarized toric manifold and its toric divisors. It is well known that there is a one-to-one correspondence between $n$-dimensional integral Delzant polytopes and $n$-dimensional compact toric manifolds with $(\mathbb{C}^*)^n$-equivariant very ample line bundles. Furthermore, every facet of an integral Delzant polytope corresponds to a divisor, called a toric divisor, which is invariant under the $(\mathbb{C}^*)^n$-action. The following is the main theorem of this paper. Set $E_P(i):=\#\{P\cap(\mathbb{Z}/i)^n\}$ and let $T_{iP}$ be the standard complex torus in $SL(E_P(i))$. (We refer to later sections for notations.) \begin{theorem}\label{main} Let $P\subset\mathbb{R}^n$ be an $n$-dimensional integral Delzant polytope, $F$ a facet of $P$ and $\beta \in (0,1]$ a cone angle. For a positive integer $i$, the Chow form of $(X_P, D_F, L_P^i, \beta)$ is $T_{iP}$-semistable if and only if \[ E_P(i) \biggl( 2i \int_P g d\nu + (1-\beta)\int_F g d\sigma \biggr) \geq \biggl( 2i \mathrm{Vol}(P) + (1-\beta)\mathrm{Vol}(F) \biggr) \sum_{\mathbf{a} \in P \cap (\mathbb{Z}/i)^n} g(\mathbf{a}) \] holds for all concave piecewise linear functions $g :P \to \mathbb{R}$ in $\mathrm{PL}(P,i)$. \end{theorem} The organization of this paper is as follows. In section 2, we review some fundamentals of GIT for later use. In section 3, we define logarithmic Chow semistability in the general setting. In section 4, we consider logarithmic Chow semistability for polarized toric manifolds and their toric divisors and prove Theorem \ref{main}. In section 5, we compare asymptotic logarithmic Chow semistability with logarithmic K-semistability as an application of Theorem \ref{main}. In section 6, as another application of Theorem \ref{main}, we give an example of an asymptotically log Chow unstable manifold which admits conical K\"{a}hler Einstein metrics. 
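The inequality in the main theorem can be made concrete in the simplest toric case. The following script (an illustration added here, not part of the paper) evaluates the difference of the two sides for $P=[0,1]$, which corresponds to $(\mathbb{C}P^1,\mathcal{O}(1))$ with $F=\{1\}$ the toric divisor given by a point; it is restricted to affine test functions $g(x)=ax+b$ so that all integrals are exact rationals, and the $\sigma$-measure of the vertex facet is taken to be unit mass.

```python
from fractions import Fraction as Fr

def obstruction(i, beta, a, b):
    """LHS - RHS of the semistability inequality for P = [0,1], F = {1},
    evaluated on the affine function g(x) = a*x + b (exact arithmetic)."""
    pts = [Fr(k, i) for k in range(i + 1)]   # P ∩ (Z/i)
    E = len(pts)                              # E_P(i) = i + 1
    int_P = Fr(a, 2) + b                      # ∫_0^1 (a x + b) dν
    int_F = a + b                             # g(1), unit mass at the vertex
    vol_P, vol_F = 1, 1
    s = sum(a * p + b for p in pts)           # Σ g(a) over P ∩ (Z/i)
    return E * (2 * i * int_P + (1 - beta) * int_F) \
        - (2 * i * vol_P + (1 - beta) * vol_F) * s

# For beta = 1 every affine g gives equality, while for beta < 1 the sign
# flips between g(x) = x and g(x) = -x; since both are concave, a single
# weighted endpoint already obstructs semistability.
for i in (1, 2, 5):
    assert obstruction(i, Fr(1), 1, 0) == 0
    assert obstruction(i, Fr(1, 2), 1, 0) > 0
    assert obstruction(i, Fr(1, 2), -1, 0) < 0
```

This is consistent with the interval example in the final section, where semistability forces the weights at the two endpoints to agree.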
{\bf Acknowledgements.} The author would like to thank Professor Shigetoshi Bando and Doctor Ryosuke Takahashi for several helpful comments and constant encouragement. He also would like to thank Professor Naoto Yotsutani for carefully reading the draft of this paper. \section{GIT stability} We recall some facts on GIT stability; see for example \cite{Mum2}. Let $V$ be a finite dimensional complex vector space with a linear action of a reductive Lie group $G$. We say a nonzero vector $v \in V$ is $G$-semistable if the closure of the orbit $G\cdot v$ does not contain $0\in V$. The following criterion for semistability is well known. \begin{proposition}\label{Hilbert-Mumford} $v\in V$ is $G$-semistable if and only if $v$ is $T$-semistable for all maximal tori $T \subset G$. \end{proposition} Therefore it is important to study torus actions. Let $T \subset G$ be an algebraic torus $(\mathbb{C}^*)^N$ for some $N$. Then $V$ can be decomposed as \[ V = \sum_{\chi\in\chi(T)}\Set{v \in V | t\cdot v = \chi(t)v, \forall t \in T}, \] where $\chi(T)$ is the character group of $T$. Note that $\chi(T)$ is isomorphic to $\mathbb{Z}^N$ since we can express each $\chi \in \chi(T)$ as a Laurent monomial $\chi(t_1, \dots, t_N)=t_1^{a_1}\cdots t_N^{a_N}$ where $t_i \in \mathbb{C}^*$ and $a_i\in \mathbb{Z}$. \begin{definition} Let $v=\sum_{\chi\in\chi(T)}v_{\chi}$ be a nonzero vector in $V$. The weight polytope $\mathrm{Wt}_T(v)$ of $v$ is the convex hull of $\Set{\chi \in \chi(T) | v_{\chi} \neq 0}$ in $\chi(T)\otimes_{\mathbb{Z}}\mathbb{R} \cong \mathbb{R}^N$. \end{definition} $T$-semistability can be visualized via the weight polytope. \begin{proposition} $v$ is $T$-semistable if and only if the weight polytope $\mathrm{Wt}_T(v)$ of $v$ contains the origin. \end{proposition} For later arguments we consider the following situation. 
Let $H = (\mathbb{C}^*)^{N+1}$ act on $V$ and let $T$ be the subtorus \begin{equation} \label{subtorus} T:= \Set{ \bigl(t_1, \dots , t_{N}, (t_1\cdots t_{N})^{-1}\bigr) | (t_1, \dots , t_{N})\in (\mathbb{C}^*)^{N} } \cong (\mathbb{C}^*)^{N}. \end{equation} Then the weight polytope $\mathrm{Wt}_T(v)$ coincides with $\pi (\mathrm{Wt}_H(v))$ where $\pi : \mathbb{R}^{N+1} \to \mathbb{R}^{N}$ is the linear map defined by $(x_1, \dots , x_{N+1}) \mapsto (x_1-x_{N+1}, \dots, x_{N}-x_{N+1})$. Therefore we see the following. \begin{proposition}\label{weight polytope} $v$ is $T$-semistable if and only if there exists $t\in \mathbb{R}$ such that $(t, \dots , t) \in \mathrm{Wt}_H(v)$. \end{proposition} \section{Log Chow semistability} \subsection{General definition} First we recall the definition of the Chow form of projective varieties. See \cite{Mum1, Mum2} for more details. Let $V$ be a finite dimensional complex vector space and $X \subset \mathbb{P}(V)$ be an $n$-dimensional irreducible projective variety of degree $d$. Then \[ Z_X :=\Set{ (H_0, \dots, H_n) \in \mathbb{P}(V^*)^{n+1} | (\cap_{i=0}^n H_i)\cap X \neq \emptyset} \] has codimension 1 in $\mathbb{P}(V^*)^{n+1}$ and multi-degree $(d,\dots, d)$, so there exists the {\it Chow form} $R_X \in (\mathrm{Sym}^d V)^{\otimes (n+1)}$, unique up to scalar multiplication, which vanishes on $Z_X$. Although $(\mathrm{Sym}^d V)^{\otimes (n+1)}$ has the natural $SL(V)$-action induced from the natural $SL(V)$-action on $\mathbb{P}(V)$, we introduce a twisted action $SL(V) \times (\mathrm{Sym}^d V)^{\otimes (n+1)} \to (\mathrm{Sym}^d V)^{\otimes (n+1)}$ in order to define log Chow semistability. For a fixed $\alpha \in \mathbb{R}$, we define it by $(\exp u, v) \mapsto \exp(\alpha u)v$ where $u \in \mathrm{Lie}(SL(V))$ and the action of $\exp(\alpha u)$ on $v$ is the natural one. 
Then we define $R_X^{(\alpha)}$ as the Chow form of $X$ with the $\alpha$-twisted $SL(V)$-action. Note that $R_X^{(1)}$ is the ordinary Chow form $R_X$ with the natural $SL(V)$-action. Next we define log Chow semistability (\cite{WX}). Let $X$ be as above, $D$ an irreducible divisor of degree $d'$ on $X$ and $\beta \in (0,1]$. \begin{definition} The pair $(X,D,\beta)$ is {\it log Chow semistable} if $R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)}$ is $SL(V)$-semistable in $(\mathrm{Sym}^d V)^{\otimes (n+1)} \otimes (\mathrm{Sym}^{d'} V)^{\otimes n}$. The pair $(X,D,\beta)$ is {\it log Chow unstable} if it is not log Chow semistable. \end{definition} \begin{definition} Let $(X,L)$ be a polarized variety, $D$ an irreducible divisor on $X$ and $\beta \in (0,1]$. For $k\gg 0$, the pair $(X,D,L^k,\beta)$ is {\it log Chow semistable} if $(\varphi_k(X), \varphi_k(D), \beta)$ is log Chow semistable where $\varphi_k : X \to \mathbb{P}(H^0(X,L^k)^*)$ is the Kodaira embedding. Further, $(X,D,L,\beta)$ is {\it asymptotically log Chow semistable} if $(X,D,L^k,\beta)$ is log Chow semistable for all $k\gg 0$. \end{definition} \begin{remark} (1) If we take $\beta =1$ then log Chow semistability reduces to the Chow semistability of $X$ defined in \cite{Mum1, Mum2}. (2) The ratio $2n!:(1-\beta)(n+1)!$ is chosen so that asymptotic log Chow semistability implies log K-semistability as discussed in Proposition \ref{K-semistability}. (3) Log Chow semistability can be extended to an $\mathbb{R}$-divisor $D=\sum_i (1-\beta_i)D_i$, where the $D_i$ are different irreducible divisors and $\beta_i \in (0,1]$, by considering the tensor product $R_X^{(2n!)}\otimes ( \otimes_i R_{D_i}^{((1-\beta_i)(n+1)!)})$. \end{remark} \subsection{A necessary condition} We introduce a necessary condition for log Chow semistability of $(X,D,\beta)$ where $X \subset \mathbb{C}P^N$ is an $n$-dimensional irreducible subvariety and $D \subset X$ is an irreducible divisor. 
Let $H :=(\mathbb{C}^*)^{N+1} \subset GL(N+1, \mathbb{C})$ be the torus consisting of invertible diagonal matrices. Then the subtorus $T$ defined by \eqref{subtorus} is a maximal torus of $SL(N+1,\mathbb{C})$. By Proposition \ref{Hilbert-Mumford}, $T$-semistability of $R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)}$ is a necessary condition for log Chow semistability of $(X,D,\beta)$. Proposition \ref{weight polytope} implies the following. \begin{proposition}\label{T-semistability} $R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)}$ is $T$-semistable if and only if there exists $t\in\mathbb{R}$ such that \[ (t,\dots , t) \in \mathrm{Wt}_H(R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)}) \subset \mathrm{Aff}_{\mathbb{R}}\bigl(\mathrm{Wt}_H(R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)})\bigr). \] Here $\mathrm{Aff}_{\mathbb{R}}\bigl(\mathrm{Wt}_H(R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)})\bigr)$ is the affine hull of the weight polytope $\mathrm{Wt}_H(R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)})$ in $\chi(H)\otimes_{\mathbb{Z}}\mathbb{R} \cong \mathbb{R}^{N+1}$. \end{proposition} \begin{lemma}\label{decomposition} \[ \mathrm{Wt}_H(R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)}) = 2n! \mathrm{Wt}_H(R_X) + (1-\beta)(n+1)!\mathrm{Wt}_H(R_D). \] Here for any polytopes $P,Q \subset \mathbb{R}^{N+1}$ and for any $a, b \in \mathbb{R}$, we define the polytope $a P + b Q$ as $\Set{a x + b y | x \in P, y \in Q}$, the Minkowski sum of $aP$ and $bQ$. \end{lemma} \begin{proof} By \cite[Example 1.6]{KSZ}, \[ \mathrm{Wt}_H(R_X^{(2n!)}\otimes R_D^{((1-\beta)(n+1)!)}) = \mathrm{Wt}_H(R_X^{(2n!)}) + \mathrm{Wt}_H(R_D^{((1-\beta)(n+1)!)}). \] Further, since $R_X^{(2n!)}$ carries the $2n!$-twisted $SL(N+1,\mathbb{C})$-action, we see \[\mathrm{Wt}_H(R_X^{(2n!)}) = 2n!\mathrm{Wt}_H(R_X)\] by definition of the weight polytope. Similarly, $\mathrm{Wt}_H(R_D^{((1-\beta)(n+1)!)})=(1-\beta)(n+1)!\mathrm{Wt}_H(R_D)$. 
\end{proof} \section{Log Chow semistability for polarized toric manifolds} \subsection{On varieties defined by finite sets} First we consider projective varieties defined as follows. Let $A:=\{a_0, \dots , a_N \} \subset \mathbb{Z}^n$ be a finite subset. Suppose $A$ affinely generates $\mathbb{Z}^n \subset \mathbb{R}^n$ over $\mathbb{Z}$. Then \begin{equation}\label{variety} X_A := \text{closure of } \Set{ [x^{a_0} : \cdots : x^{a_{N}}]\in \mathbb{C}P^N | x \in (\mathbb{C}^*)^n } \end{equation} is an $n$-dimensional subvariety of $\mathbb{C}P^N$. Again let $H:=(\mathbb{C}^*)^{N+1} \subset GL(N+1, \mathbb{C})$ be the torus of invertible diagonal matrices. \begin{proposition}{\rm (\cite[Chapter 7, Proposition 1.11]{GKZ} and \cite{KSZ}.)}\label{GKZ} When we set $P$ as the convex hull of $A$ in $\mathbb{Z}^n\otimes\mathbb{R}$, we have \[ \mathrm{Aff}_{\mathbb{R}}(\mathrm{Wt}_H(R_{X_{A}})) = \Set{ \varphi : A \to \mathbb{R} | \sum_i \varphi(a_i) = (n+1)!\mathrm{Vol}(P), \sum_i \varphi(a_i)a_i = (n+1)!\int_P x d\nu }. \] Here $d\nu$ is the Euclidean volume form of $\mathbb{R}^n$ normalized so that $\nu(\Delta_n)=1/n!$ for the $n$-dimensional standard simplex $\Delta_n$. \end{proposition} \begin{remark} In the above proposition, there is the canonical identification $\chi(H) \cong \{ \varphi : A \to \mathbb{Z}\}$. \end{remark} \subsection{Toric setting} Let $P\subset \mathbb{R}^n$ be an $n$-dimensional integral Delzant polytope and let $(X_P, L_P)$ be the corresponding toric manifold and very ample line bundle. Furthermore, for any $i \in \mathbb{N}$, it is well known that the rescaled polytope $iP$ corresponds to $(X_P, L_P^i)$. Then the image of $X_{P}$ under the Kodaira embedding by $L_P^i$ is of the form \eqref{variety}, defined by the finite set $iP\cap\mathbb{Z}^n$. Hereafter we identify $(X_{P},L_P^i)$ with its Kodaira embedding image in $\mathbb{P}(H^0(X_P, L_P^i)^*)$. 
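The counting function $E_P(i)=\#\{iP\cap\mathbb{Z}^n\}$ used below is the Ehrhart polynomial of $P$. As a quick illustration (a brute-force sketch added here, not taken from the paper), one can count lattice points of the dilated unit square, the Delzant polytope of $\mathbb{C}P^1\times\mathbb{C}P^1$ with the line bundle $\mathcal{O}(1,1)$:

```python
from itertools import product

def lattice_points(i, inequalities, box):
    """Lattice points of the dilated polytope iP, where P is cut out by
    linear inequalities a.x + b >= 0; `box` bounds the search region.
    A point x lies in iP iff a.x + b*i >= 0 for every inequality."""
    pts = []
    for p in product(*(range(lo * i, hi * i + 1) for lo, hi in box)):
        if all(sum(a * x for a, x in zip(av, p)) + b * i >= 0
               for av, b in inequalities):
            pts.append(p)
    return pts

# P = [0,1]^2: x >= 0, y >= 0, 1 - x >= 0, 1 - y >= 0
square = [((1, 0), 0), ((0, 1), 0), ((-1, 0), 1), ((0, -1), 1)]
for i in (1, 2, 3):
    E = len(lattice_points(i, square, [(0, 1), (0, 1)]))
    assert E == (i + 1) ** 2   # Ehrhart polynomial E_P(i) = (i+1)^2
```

The leading coefficient of $E_P(i)$ recovers $\mathrm{Vol}(P)$ and the second coefficient half the boundary volume, which is exactly the expansion used in the proof of the K-semistability comparison below.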
Now we investigate log Chow semistability of pairs consisting of polarized toric manifolds and toric divisors. Set $E_P(i) := \#\{iP\cap\mathbb{Z}^n\}$. By Proposition \ref{Hilbert-Mumford}, $T$-semistability of the Chow form for every maximal torus $T\subset SL(E_P(i))$ is essential. We take the following maximal torus as a special one: \[ T_{iP}:=(\mathbb{C}^*)^{E_P(i)}\cap SL(E_P(i)). \] Here $(\mathbb{C}^*)^{E_P(i)} \subset GL(E_P(i))$ is the torus of invertible diagonal matrices and $T_{iP}$ is the subtorus given as \eqref{subtorus}. Proposition \ref{GKZ} implies the following. Set $\mathrm{Ch}(iP)$ as the weight polytope of the Chow form of $X_P \subset \mathbb{P}(H^0(X_P, L_P^i)^*)$ for the $(\mathbb{C}^*)^{E_P(i)}$-action. \begin{proposition}\label{chow form P} The affine hull of $\mathrm{Ch}(iP)$ in $\{\varphi : iP\cap\mathbb{Z}^n \to \mathbb{R}\} \cong \{\varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{R}\}$ is \[ \Set{ \varphi : P\cap (\mathbb{Z}/i)^n \to \mathbb{R} | \sum_{\mathbf{a}\in P\cap (\mathbb{Z}/i)^n} \varphi(\mathbf{a}) = (n+1)!\mathrm{Vol}(iP), \sum_{\mathbf{a}\in P\cap (\mathbb{Z}/i)^n} \varphi(\mathbf{a})\mathbf{a} = (n+1)!\int_{iP} \mathbf{x} d\nu }. \] \end{proposition} Let $F\subset P$ be a facet. Then $F$ defines the toric divisor $D_{F} \subset X_{P}$, which is invariant under the $(\mathbb{C}^*)^n$-action on $X_P$. If we write $iP\cap\mathbb{Z}^n=\{a_0, \dots , a_N\}$, then the image of $D_F$ under the Kodaira embedding by $L_P^i$ is given as \[ \Set{[u_{a_0}:\cdots :u_{a_N}] \in X_P | u_{a_k}\equiv 0 \text{ iff } a_k \notin iF\cap\mathbb{Z}^n} \subset \mathbb{P}(H^0(X_P, L_P^i)^*). \] There is a canonical Euclidean volume form $d\sigma$ on $\partial P$ induced by $d\nu$ as follows: on a facet $F = P\cap\{h_F=c_F\}$, where $h_F$ is a primitive linear form, $dh_F \wedge d\sigma$ equals the Euclidean volume form $d\nu$ on $\mathbb{R}^n$. Therefore Proposition \ref{GKZ} again implies the following. 
Set $\mathrm{Ch}(iF)$ as the weight polytope of the Chow form of $D_F \subset \mathbb{P}(H^0(X_P, L_P^i)^*)$ for the $(\mathbb{C}^*)^{E_P(i)}$-action. \begin{proposition}\label{chow form F} The affine hull of $\mathrm{Ch}(iF)$ in $\{\varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{R}\}$ is \[ \Set{ \varphi : P\cap (\mathbb{Z}/i)^n \to \mathbb{R} | \begin{array}{l} \displaystyle \varphi \equiv 0 \text{ on } (P\setminus F)\cap(\mathbb{Z}/i)^n,\\ \displaystyle \sum_{\mathbf{a}\in P\cap (\mathbb{Z}/i)^n} \varphi(\mathbf{a}) = n!\mathrm{Vol}(iF),\\ \displaystyle \sum_{\mathbf{a}\in P\cap (\mathbb{Z}/i)^n} \varphi(\mathbf{a})\mathbf{a} = n!\int_{iF} \mathbf{x} d\sigma . \end{array} }. \] \end{proposition} Combining Proposition \ref{T-semistability}, Lemma \ref{decomposition}, Proposition \ref{chow form P} and Proposition \ref{chow form F}, we have the following. \begin{proposition} The Chow form of $(X_P, D_F, L_P^i, \beta)$ is $T_{iP}$-semistable if and only if \begin{equation} \label{iff} \frac{n!(n+1)!}{E_P(i)}\Bigl(2\Vol(iP)+(1-\beta)\Vol(iF)\Bigr)d_{iP} \in 2n!\, \mathrm{Ch}(iP)+ (1-\beta)(n+1)!\,\mathrm{Ch}(iF), \end{equation} where $d_{iP}(\mathbf{a}) = 1$ for all $\mathbf{a}\in P\cap(\mathbb{Z}/i)^n$. \end{proposition} To state the main theorem, we introduce the set of concave piecewise linear functions on $P$ induced from the functions $\varphi : P\cap (\mathbb{Z}/i)^n \to \mathbb{R}$. For such a fixed $\varphi$, we set \[ G_{\varphi} = \text{the convex hull of } \bigcup_{\mathbf{a}\in P \cap (\mathbb{Z}/i)^n} \Set{(\mathbf{a}, t) | t\leq \varphi(\mathbf{a})} \subset \mathbb{R}^n \times \mathbb{R}, \] and define a piecewise linear function $g_{\varphi} : P \to \mathbb{R}$ by \[ g_{\varphi}(\mathbf{x}) = \max\Set{t | (\mathbf{x},t) \in G_{\varphi}}. \] We then set \[ \mathrm{PL}(P,i) = \Set{g_{\varphi} | \varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{R}}. 
\] \begin{remark}\label{Q} For each $\varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{Q}$, the same construction defines $g_{\varphi} : P \to \mathbb{Q}$. We also set \[ \mathrm{PL}_{\mathbb{Q}}(P,i) = \Set{g_{\varphi} | \varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{Q}}. \] \end{remark} We further set \[\langle \varphi, \psi \rangle = \sum_{\mathbf{a}\in P\cap(\mathbb{Z}/i)^n} \varphi(\mathbf{a})\psi(\mathbf{a}) \] for every $\varphi, \psi : P\cap (\mathbb{Z}/i)^n \to \mathbb{R}$ as a scalar product. Then $g_{\varphi}$ has the following properties. \begin{lemma}{\rm (\cite[Chapter 7, Lemma 1.9]{GKZ})} \label{g} For any $\varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{R}$, \begin{enumerate} \item the induced function $g_{\varphi}$ is concave. \item we have the equality \[ \max\Set{\langle \varphi, \psi \rangle | \psi \in \mathrm{Ch}(iP)} = i^n(n+1)!\int_P g_{\varphi} d\nu. \] \end{enumerate} \end{lemma} Now we are ready to prove Theorem \ref{main}. \begin{proof} Since $2n! \mathrm{Ch}(iP)+ (1-\beta)(n+1)!\mathrm{Ch}(iF)$ is convex, the condition \eqref{iff} holds if and only if \begin{equation*} \begin{split} &\max\Set{\langle \varphi, \psi \rangle | \psi \in 2n! \mathrm{Ch}(iP)+ (1-\beta)(n+1)!\mathrm{Ch}(iF)}\\ &\geq \frac{n!(n+1)!}{E_P(i)}\Bigl(2\Vol(iP)+(1-\beta)\Vol(iF)\Bigr) \langle \varphi , d_{iP} \rangle \end{split} \end{equation*} for all $\varphi : P\cap(\mathbb{Z}/i)^n \to \mathbb{R}$. By the definition of the Minkowski sum and Lemma \ref{g}, we have \begin{equation*} \begin{split} &\max\Set{\langle \varphi, \psi \rangle | \psi \in 2n! \mathrm{Ch}(iP)+ (1-\beta)(n+1)!\mathrm{Ch}(iF)}\\ &= 2n! \max\Set{\langle \varphi, \psi \rangle | \psi \in \mathrm{Ch}(iP)} +(1-\beta)(n+1)! \max\Set{\langle \varphi, \psi \rangle | \psi \in \mathrm{Ch}(iF)}\\ &= i^{n-1}n!(n+1)!\Bigl(2i\int_P g_{\varphi}d\nu + (1-\beta)\int_F g_{\varphi}d\sigma \Bigr). 
\end{split} \end{equation*} On the other hand, by the definition of $g_{\varphi}$, we have \begin{equation*} \begin{split} &\frac{n!(n+1)!}{E_P(i)}\Bigl(2\Vol(iP)+(1-\beta)\Vol(iF)\Bigr) \langle \varphi , d_{iP} \rangle \\ &=\frac{i^{n-1}n!(n+1)!}{E_P(i)}\Bigl(2i\Vol(P)+(1-\beta)\Vol(F)\Bigr) \sum_{\mathbf{a}\in P\cap(\mathbb{Z}/i)^n} g_{\varphi}(\mathbf{a}). \end{split} \end{equation*} Therefore we have the desired result. \end{proof} Since $T_{iP}$-semistability is a necessary condition for asymptotic log Chow semistability, applying the above theorem to linear functions we obtain the following. \begin{corollary}\label{cor} If $(X_P, D_F, L_P, \beta)$ is asymptotically log Chow semistable then \begin{equation}\label{obstruction} E_P(i) \biggl( 2i \int_P \mathbf{x} d\nu + (1-\beta)\int_F \mathbf{x} d\sigma \biggr) = \biggl( 2i \mathrm{Vol}(P) + (1-\beta)\mathrm{Vol}(F) \biggr) \sum_{\mathbf{a} \in P \cap (\mathbb{Z}/i)^n} \mathbf{a} \end{equation} holds for all $i\gg 0$. \end{corollary} \begin{remark}\label{extension} (1) The reader should bear in mind that $E_P(i)$ (resp.\ $\sum_{\mathbf{a}\in P\cap (\mathbb{Z}/i)^n}\mathbf{a}$) is a polynomial in $i$ (resp.\ an $\mathbb{R}^n$-valued polynomial in $i$). Hence the identity theorem shows that \eqref{obstruction} must hold for every integer $i$ (cf.\ \cite[Theorem 1.4]{Ono1}). (2) We can extend Theorem \ref{main} and Corollary \ref{cor} to any formal sum of different toric divisors $\sum_t (1-\beta_t)D_{F_t}$. For instance, in the case of Corollary \ref{cor}, the equality (\ref{obstruction}) is replaced by \[ E_P(i) \biggl( 2i \int_P \mathbf{x} d\nu + \sum_t(1-\beta_t)\int_{F_t} \mathbf{x} d\sigma \biggr) = \biggl( 2i \mathrm{Vol}(P) + \sum_t (1-\beta_t)\mathrm{Vol}(F_t) \biggr) \sum_{\mathbf{a} \in P \cap (\mathbb{Z}/i)^n} \mathbf{a}. \] \end{remark} \begin{remark} For $\beta=1$, Theorem \ref{main} and Corollary \ref{cor} are Ono's results in \cite{Ono1, Ono2}. 
\end{remark} \section{Relation to the log K-semistability} As an application of Theorem \ref{main}, we show that asymptotic log Chow semistability for polarized toric manifolds and their toric divisors implies log K-semistability for toric degenerations. Let us first recall the definitions of the test configuration, the log Futaki invariant and log K-semistability (see for example \cite{L} for more details). Let $(X,L)$ be a polarized manifold. \begin{definition} A {\it test configuration} $(\mathcal{X}, \mathcal{L})$ for $(X,L)$ consists of a $\mathbb{C}^*$-equivariant flat family $\mathcal{X} \to \mathbb{C}$ (where $\mathbb{C}^*$ acts on $\mathbb{C}$ by multiplication) and a $\mathbb{C}^*$-equivariant ample line bundle $\mathcal{L}$ over $\mathcal{X}$. In addition we require that the fibers $(\mathcal{X}_t, \mathcal{L}|_{\mathcal{X}_t})$ over $t \neq 0$ are isomorphic to $(X,L)$. \end{definition} Note that any test configuration is equivariantly embedded into $\mathbb{C}P^N \times \mathbb{C}$. Here the $\mathbb{C}^*$-action on $\mathbb{C}P^N$ is given by a one parameter subgroup of $SL(N+1,\mathbb{C})$. If $D\subset X$ is any irreducible divisor, the one parameter subgroup of $SL(N+1,\mathbb{C})$ associated to any test configuration of $(X,L)$ also induces a test configuration $(\mathcal{D},\mathcal{L}|_{\mathcal{D}})$ of $(D,L|_{D})$. To define the log Futaki invariant, fix a test configuration $(\mathcal{X}, \mathcal{L})$ for $(X,L)$ and an irreducible divisor $D\subset X$. Let $d_k$ and $\tilde{d}_k$ be the dimensions of $H^0(\mathcal{X}_0, \mathcal{L}^k|_{\mathcal{X}_0})$ and $H^0(\mathcal{D}_0, \mathcal{L}^k|_{\mathcal{D}_0})$ respectively, and let $w_k$ and $\tilde{w}_k$ be the total weights of the $\mathbb{C}^*$-actions on these cohomologies respectively. 
For $k\gg 0$, we then have asymptotic expansions (set $n = \dim X$) \begin{eqnarray*} d_k &=& a_0k^n + a_1k^{n-1} + \mathcal{O}(k^{n-2}), \quad \tilde{d}_k= \tilde{a}_0k^{n-1} + \mathcal{O}(k^{n-2}),\\ w_k &=& b_0k^{n+1} + b_1k^n + \mathcal{O}(k^{n-1}) \quad\text{and}\quad \tilde{w}_k = \tilde{b}_0k^n + \mathcal{O}(k^{n-1}). \end{eqnarray*} \begin{definition} The {\it log Futaki invariant} for $(\mathcal{X}, \mathcal{D}, \mathcal{L})$ and $\beta \in (0,1]$ is defined as \[ \mathrm{Fut}(\mathcal{X}, \mathcal{D}, \mathcal{L}, \beta) = 2\Bigl(\frac{a_1}{a_0}b_0-b_1\Bigr) + (1-\beta)\Bigl(\tilde{b}_0-\frac{\tilde{a}_0}{a_0}b_0\Bigr). \] \end{definition} \begin{definition} $(X,D,L,\beta)$ is {\it log K-semistable} if $ \mathrm{Fut}(\mathcal{X}, \mathcal{D}, \mathcal{L}, \beta) \leq 0 $ for every test configuration $(\mathcal{X}, \mathcal{L})$ for $(X,L)$. \end{definition} For every toric variety $(X_P, L_P)$, Donaldson \cite{D2} showed that every rational convex piecewise linear function $h : P \to \mathbb{R}$ induces a test configuration for $(X_P, L_P)$ with \begin{eqnarray*} d_k &=& k^n\mathrm{Vol}(P) + \frac{k^{n-1}}{2}\mathrm{Vol}(\partial P) + \mathcal{O}(k^{n-2}),\\ w_k &=& k^{n+1}\int_P (R-h)d\nu + \frac{k^n}{2} \int_{\partial P} (R-h)d\sigma + \mathcal{O}(k^{n-1}) \end{eqnarray*} where $R$ is an integer such that $h\leq R$. For any toric divisor $D_F \subset X_P$ defined by a facet $F$ of $P$, every such function $h$ induces a test configuration for $(D_F, L_P)$ with \begin{eqnarray*} \tilde{d}_k &=& k^{n-1}\mathrm{Vol}(F) + \mathcal{O}(k^{n-2}),\\ \tilde{w}_k &=& k^n \int_F (R-h)d\sigma + \mathcal{O}(k^{n-1}). \end{eqnarray*} By analogy with Donaldson \cite{D2}, we define log K-semistability for toric degenerations. 
\begin{definition} $(X_P, D_F, L_P, \beta)$ is {\it log K-semistable for toric degenerations} if for every rational convex piecewise linear function $h : P \to \mathbb{R}$, the induced log Futaki invariant is nonpositive: \[ \frac{\mathrm{Vol}(\partial P)}{\mathrm{Vol}(P)}\int_P h d\nu -\int_{\partial P} h d\sigma +(1-\beta) \Bigl( \int_F h d\sigma - \frac{\mathrm{Vol}(F)}{\mathrm{Vol}(P)} \int_P h d\nu \Bigr) \leq 0. \] \end{definition} Then we show the following as an application of Theorem \ref{main}. \begin{proposition}\label{K-semistability} If the Chow form of $(X_P, D_F, L_P^i, \beta)$ is $T_{iP}$-semistable for all $i\gg 0$ then this pair is log K-semistable for toric degenerations. \end{proposition} \begin{remark} In particular, asymptotic log Chow semistability of $(X_P, D_F, L_P, \beta)$ implies log K-semistability for toric degenerations by Proposition \ref{Hilbert-Mumford}. \end{remark} \begin{proof} We first fix a rational convex piecewise linear function $h : P \to \mathbb{R}$. Then there is a positive integer $k$ such that $g:=-h \in \mathrm{PL}_{\mathbb{Q}}(P,k)$ (Remark \ref{Q}). By Theorem \ref{main}, $T_{ikP}$-semistability of the Chow form of $(X_P, D_F, L_P^{ik}, \beta)$ implies that \begin{equation}\label{weight} E_P(ik) \biggl( 2ik \int_P g d\nu + (1-\beta)\int_F g d\sigma \biggr) - \biggl( 2ik \mathrm{Vol}(P) + (1-\beta)\mathrm{Vol}(F) \biggr) \sum_{\mathbf{a} \in P \cap (\mathbb{Z}/ik)^n} g(\mathbf{a}) \geq 0 \end{equation} holds for all $i\gg 0$. By Lemma 3.3 in \cite{ZZ} we note that \begin{eqnarray*} E_P(ik) &=& (ik)^n\mathrm{Vol}(P) + \frac{(ik)^{n-1}}{2}\mathrm{Vol}(\partial P) + \mathcal{O}((ik)^{n-2}),\\ \sum_{\mathbf{a} \in P \cap (\mathbb{Z}/ik)^n} g(\mathbf{a}) &=& (ik)^{n}\int_P gd\nu + \frac{(ik)^{n-1}}{2} \int_{\partial P} gd\sigma + \mathcal{O}((ik)^{n-2}). 
\end{eqnarray*} Therefore the left hand side of (\ref{weight}) is equal to \[ \frac{(ik)^n \mathrm{Vol}(P)}{2} \biggl\{ \frac{\mathrm{Vol}(\partial P)}{\mathrm{Vol}(P)}\int_P g d\nu -\int_{\partial P} g d\sigma +(1-\beta) \Bigl( \int_F g d\sigma - \frac{\mathrm{Vol}(F)}{\mathrm{Vol}(P)} \int_P g d\nu \Bigr) \biggr\} + \mathcal{O}((ik)^{n-1}). \] This completes the proof. \end{proof} \section{Relation to the existence of conical K\"{a}hler-Einstein metrics} In view of recent progress on the Donaldson-Tian-Yau conjecture, it is natural to ask whether the existence of conical K\"{a}hler-Einstein metrics is equivalent to log Chow stability. In this section we check the obstruction of Corollary \ref{cor} for a few toric Fano manifolds admitting conical K\"{a}hler-Einstein metrics. See \cite{L} for the definition of conical metrics and more details. Let $X_P$ be the toric Fano manifold associated with a reflexive polytope $P$ (that is, a Delzant polytope satisfying $\mathrm{Int}(P)\cap\mathbb{Z}^n = \{0\}$), and let $D=\sum_t (1-\beta_t)D_{F_t}$ be a formal sum of distinct toric divisors associated with distinct facets $F_t$ of $P$, where $\beta_t \in (0,1].$ Let us consider \[ Q_i(X_P, D) := E_P(i) \biggl( 2i \int_P \mathbf{x} d\nu + \sum_t(1-\beta_t)\int_{F_t} \mathbf{x} d\sigma \biggr) - \biggl( 2i \mathrm{Vol}(P) + \sum_t (1-\beta_t)\mathrm{Vol}(F_t) \biggr) \sum_{\mathbf{a} \in P \cap (\mathbb{Z}/i)^n} \mathbf{a}. \] By Corollary \ref{cor} and Remark \ref{extension}, if the pair $(X_P, D)$ is asymptotically log Chow semistable then $Q_i$ must vanish for every integer $i$. \begin{example} Let $P$ be the interval $[-1,1]$. The corresponding toric Fano manifold $X_P$ is $\mathbb{C}P^1$. Let $D=(1-\beta_0)[0]+(1-\beta_{\infty})[\infty]$ be a formal sum of toric divisors, where $[0]$ and $[\infty]$ are the divisors of $X_P$ corresponding to $-1, 1 \in P$ respectively.
By a simple calculation, we have \[ Q_i(X_P, D)=\frac{1}{2}(i+1)(\beta_0-\beta_{\infty}). \] Thus if the pair $(X_P,D)$ is asymptotically log Chow semistable, then $\beta_0=\beta_{\infty}$ must hold. The condition $\beta_0=\beta_{\infty}$ is equivalent to the existence of a conical K\"{a}hler-Einstein metric whose cone angles along $[0]$ and $[\infty]$ are both $\beta_0$. \end{example} \begin{example} Let $P$ be the polygon defined as \[ \Set{(x,y) \in \mathbb{R}^2 | -x-y+1\geq 0, \quad x+1\geq 0,\quad x+y+1\geq 0,\quad y+1\geq 0}. \] The corresponding toric Fano manifold $X_P$ is the first Hirzebruch surface $\mathbb{P}(\mathcal{O}_{\mathbb{C}P^1}\oplus \mathcal{O}_{\mathbb{C}P^1}(-1)).$ Let $D_1, D_2$ and $D_{\infty}$ be the toric divisors corresponding to $P\cap\Set{x+1=0}, P\cap\Set{y+1=0}$ and $P\cap\Set{-x-y+1=0}$ respectively. Note that $D_1, D_2$ and $D_{\infty}$ are the divisors of the fibers over $0,\infty\in\mathbb{C}P^1$ and the $\infty$-section respectively. As part of \cite[Theorem 3.14]{SW}, Song and Wang proved the existence of a conical K\"{a}hler-Einstein metric whose cone angles are 13/14, 13/14 and 5/7 along $D_1, D_2$ and $D_{\infty}$ respectively. Set \[D=(1-\frac{13}{14})D_1+(1-\frac{13}{14})D_2+(1-\frac{5}{7})D_{\infty}.\] By a simple calculation we have \[ Q_i(X_P,D)=(\frac{21}{10}i-\frac{1}{7})(1,1)^T. \] Since this does not vanish, the pair $(X_P,D)$ is asymptotically log Chow unstable. \end{example}
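As a sanity check on the interval example, the obstruction can be evaluated directly from the definition of $Q_i$. The sketch below uses a naive normalization of the measures (lattice-point count for $E_P(i)$ and counting measure on the two point facets), so the overall positive factor differs from the $\frac{1}{2}(i+1)$ above, but the vanishing condition $\beta_0=\beta_{\infty}$ is the same:

```python
from fractions import Fraction

def Q_interval(i, beta0, beta_inf):
    """Evaluate Q_i for P = [-1, 1] directly from its definition, with a
    naive normalization: E_P(i) = #(P ∩ Z/i) = 2i + 1 and counting measure
    on the two point facets.  Differs from the paper's convention by an
    overall positive factor; the vanishing condition is unchanged."""
    pts = [Fraction(k, i) for k in range(-i, i + 1)]    # P ∩ (Z/i)
    E = len(pts)                                        # = 2i + 1
    int_P_x = Fraction(0)                               # ∫_P x dν = 0 by symmetry
    # facet contributions: [0] <-> {-1} with angle beta0, [∞] <-> {1} with beta_inf
    facet_term = (1 - beta0) * (-1) + (1 - beta_inf) * 1
    vol_P = 2
    vol_facets = (1 - beta0) + (1 - beta_inf)
    sum_a = sum(pts)                                    # = 0 by symmetry
    return E * (2 * i * int_P_x + facet_term) - (2 * i * vol_P + vol_facets) * sum_a
```

With exact rational arithmetic, `Q_interval(i, b0, binf)` is zero exactly when $\beta_0=\beta_{\infty}$ and has the sign of $\beta_0-\beta_{\infty}$ otherwise, matching the conclusion of the example.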
\section{Introduction} \label{sec:intro} With the development of observational tools and techniques, we are gaining a better understanding of the gas surrounding galaxies, known as the circumgalactic medium (CGM; see \citealt{2017ARA&A..55..389T} for a recent review). The CGM and the intracluster medium (ICM) in galaxy clusters hold the majority of baryons as a result of multiple processes that distribute (and redistribute) them as a diffuse gas within dark matter halos over cosmic time. The baryons in the CGM, both those suspended there due to accretion-shock heating and those expelled from galaxies and their interstellar medium (ISM), are believed to constitute the ``missing baryons'' \citep[e.g.,][]{1992MNRAS.258P..14P, 1998ApJ...503..518F, 2007ARA&A..45..221B}. Therefore, the thermodynamic properties of the CGM afford useful information for understanding the physical processes governing galaxy formation and evolution. Until recently, CGM studies have mostly relied on ultraviolet (UV) and optical absorption line observations, probing warm and cold components of the CGM \citep[e.g.,][]{2017MNRAS.465.2966S, 2018MNRAS.477..450N}. The Sunyaev-Zel'dovich (SZ) effect (\citealt{1970Ap&SS...7....3S, 1972CoASP...4..173S, 1980MNRAS.190..413S}; see \citealt{2019SSRv..215...17M} for a recent review) and soft X-ray emission have become powerful new tools to study the multiphase CGM's hot gas component, whose temperature exceeds $\sim10^6$~K. These tools provide a calorimetric view of the ionized gas that complements absorption line observations. The thermal SZ (tSZ) effect is a spectral distortion of the cosmic microwave background (CMB) radiation caused by inverse Compton scattering of CMB photons by free electrons. The signal is proportional to the electron thermal pressure of the halo gas, integrated along the line-of-sight. 
Individual tSZ detections have been limited to massive clusters because of limited sensitivity and angular resolution, but stacked measurements of galaxy samples (e.g., \citealt{2013A&A...557A..52P}, \citetalias{2013A&A...557A..52P} hereafter; \citealt{2015ApJ...808..151G}) presented the mass scaling relation down to halo masses below $\sim 10^{13}\ \textrm{M}_\odot$. Notably, the \citetalias{2013A&A...557A..52P} results suggested that the integrated tSZ flux within a radius $R_{500}$\footnote{The radius within which the mean overdensity is 500 times the critical density at a given redshift.} followed a self-similar dependence on halo mass over more than two orders of magnitude in mass. The mass scaling derived from a self-similar relation between the gas pressure profile and halo mass \citep{1986MNRAS.222..323K, 1991ApJ...383..104K} implied that the total thermal energy was governed primarily by the gravitational potential. This self-similarity suggested by \citetalias{2013A&A...557A..52P} appears counter-intuitive because the non-gravitational feedback mechanisms that regulate star formation and black hole growth in galaxies are expected to inject energy into and heat the CGM \citep[e.g.,][]{2009MNRAS.399.1773C, 2010MNRAS.406.2325O, 2010MNRAS.402.1536S}. Specifically, stellar winds and supernovae (`stellar feedback' due to star formation) and active galactic nuclei (AGN; `AGN feedback') are expected to eject gas from the disks of galaxies into the CGM or out past it into the intergalactic medium (IGM) in a manner that depends on various halo properties such as halo mass, physics of star formation, and black hole mass accretion efficiency. However, the \citetalias{2013A&A...557A..52P} result was driven by the extrapolation of results measured at large radii ($5R_{500}$) to $R_{500}$, assuming the fixed pressure profile model of \cite{2010A&A...517A..92A}, which is based on the profile of \cite{2007ApJ...668....1N}.
Also, \cite{2018PhRvD..97h3501H} pointed out that neglecting the signal from nearby halos (`two-halo term') in the modeling could also have biased the inference toward a self-similar model. Therefore, a careful comparison between the tSZ observation and models, with an emphasis on the distribution of the hot gas and its properties, is required. Observations using tSZ have already proven their worth in probing feedback \citep[e.g.,][]{verdier2016, 2016MNRAS.458.1478C, 2018ApJ...865..109S, 2021ApJ...913...88M}. \cite{2017JCAP...11..040B} furthermore demonstrated that combined measurements of the tSZ and the kinetic SZ (kSZ) effects provide constraints on thermodynamic processes affecting the gas. Recently, \cite{2021PhRvD.103f3513S} and \cite{2021PhRvD.103f3514A} cross-correlated the Atacama Cosmology Telescope (ACT) CMB survey maps \citep{2020JCAP...12..046N} with the Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{2014ApJS..211...17A} CMASS (``constant stellar mass'') galaxy catalog to obtain stacked tSZ and kSZ measurements. Comparing the radial thermodynamic profiles from the measurements to the predictions from cosmological simulations, they found that the simulations tend to underpredict the gas pressure and density, notably around the halos' outer regions. Future millimeter-wave SZ instruments with improved angular resolution and sensitivity will provide data from halos down to galaxies and group scales \citep{2019BAAS...51c.124M}, enabling us to effectively probe feedback mechanisms in these lower mass halos \citep{2019BAAS...51c.297B}. Such instruments include next-generation CMB experiments \citep[e.g., CMB-S4, Simons Observatory;][]{2016arXiv161002743A, 2019arXiv190704473A, 2017arXiv170602464A, 2019JCAP...02..056A} and single large-aperture telescopes \citep[e.g. CSST, CMB-HD, AtLAST;][]{2018alas.confE..46G, 2019BAAS...51g...6S, 2019BAAS...51g..58K}. 
While instrumentation and telescope development have enabled higher-sensitivity and higher-resolution observations, computational simulations have made significant progress as well. Current cosmological simulations have been successful in reproducing many observed galaxy properties, improving our understanding of galaxy formation and evolution \citep[see][for a recent review]{2020NatRP...2...42V}. However, these simulations often do not resolve the physical scales necessary to compute many astrophysical processes and instead resort to `sub-grid' prescriptions. The sub-grid prescriptions are coarse-level approximations of what physical mechanisms are occurring on finer resolution scales than are self-consistently modeled in the simulation. While these sub-grid models are calibrated against key observables, they are still a major source of uncertainty in computational models of galaxy evolution, so it behooves us to analyze multiple simulations with different sub-grid prescriptions. These physically motivated recipes give predictions for CGM properties that vary significantly depending on the models \citep[e.g.,][]{2012MNRAS.423.2991V, 2013MNRAS.430.1548H}. In particular, a comparison of tSZ simulations to data can distinguish which prescriptions are correct and thereby inform our understanding of the CGM and feedback mechanisms that affect it. In this work, we study the tSZ effect and relevant gas properties using state-of-the-art simulations. These include two large-volume simulations, Illustris-TNG \citep[][The Next Generation; TNG hereafter]{2018MNRAS.475..624N, 2018MNRAS.475..648P, 2018MNRAS.475..676S, 2018MNRAS.477.1206N, 2018MNRAS.480.5113M} and Evolution and Assembly of GaLaxies and their Environments \citep[EAGLE,][]{2015MNRAS.446..521S, 2016A&C....15...72M, 2017arXiv170609899T}, and one zoom-in simulation, Feedback In Realistic Environments-2 \citep[FIRE-2,][]{2018MNRAS.480..800H, 2020MNRAS.496.1620O}. 
In Section~\ref{sec:sims_methods}, we describe the simulations and how we calculate the thermodynamic properties of the CGM and the tSZ signal. In Section~\ref{sec:tsz}, we compare the tSZ signal predicted by different models to the observations and perform internal comparisons between simulations. In particular, we (1) study scaling relations for integrated tSZ flux using TNG and EAGLE, separated by galaxy type, (2) compare EAGLE's predictions to the radial profiles inferred recently from ACT data, and (3) discuss radial profiles of Milky Way-sized ($\sim 10^{12} \textrm{M}_\odot$) halos at $z=0$ from TNG, EAGLE, and FIRE-2. For the latter, we include the recent FIRE-2 simulations that model cosmic-rays \citep{2020MNRAS.492.3465H, 2020MNRAS.496.4221J}. We then (4) discuss the effect of AGN feedback on the tSZ flux by comparing EAGLE simulations with and without the AGN feedback prescription. Finally, we summarize the implications of our results in Section~\ref{sec:conclusion} and discuss possible synergies of tSZ observations with other CGM probes in Section~\ref{sec:futurework}. \section{Simulations and methods}\label{sec:sims_methods} \subsection{Thermal SZ effect} The tSZ effect is a distortion of the thermal frequency spectrum of the CMB caused by inverse Compton scattering of CMB photons with free electrons in any hot gas (including the CGM). The net effect is a shift of photons from lower to higher frequencies. 
The resultant spectrum is unique and has the same frequency dependence for all objects in the limit that the electrons are nonrelativistic.\footnote{Relativistic corrections cause the frequency dependence to depend on the electron temperature for temperatures approaching the electron rest energy ($T\gtrsim 10^8\,$K; e.g., \citealt{2012MNRAS.426..510C}), but the effect is negligible for typical CGM temperatures.} The amplitude of the distortion is proportional to the electron pressure integrated along the line-of-sight, as characterized by the Compton-$y$ parameter, \begin{equation} \begin{split} y &\equiv \frac{\sigma_T}{m_e c^2} \int P_e dl \\ &= \frac{\sigma_T}{m_e c^2} \int n_e k_B T_e dl, \end{split} \end{equation} where $n_e$ is the electron number density, $\sigma_T$ is the Thomson cross-section, $k_B$ is the Boltzmann constant, and $T_e$ is the electron temperature; $m_e$ is the electron rest mass and $c$ is the speed of light. We describe the details of computing the tSZ signal using the simulation datasets in Appendix~\ref{sec:appendix_tszcalc}. The literature, including \citetalias{2013A&A...557A..52P}, often quotes the parameter integrated within a sphere of radius $R_{500}$: \begin{equation} \begin{split} Y_{R_{500}} {D^2_A (z)} &\equiv \int_{0}^{R_{500}} n_e \sigma_T \frac{k_B T_e}{m_e c^2} dV\\ &= \int_{0}^{R_{500}} n_e \sigma_T \frac{k_B T_e}{m_e c^2} 4 \pi r^2 dr, \end{split} \end{equation} where $D_A (z)$ is the angular diameter distance to the given redshift.
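For a particle-based simulation snapshot, the spherical integral above reduces to a sum over gas particles, $Y D_A^2 = (\sigma_T k_B / m_e c^2) \sum_i N_{e,i} T_{e,i}$. The following is a minimal sketch with mock inputs, not the actual pipeline of Appendix~\ref{sec:appendix_tszcalc}; the electron count per particle assumes a fully ionized primordial mix with $\mu_e \approx 1.14$:

```python
import numpy as np

# Physical constants (SI)
SIGMA_T = 6.6524587e-29   # Thomson cross-section [m^2]
K_B     = 1.380649e-23    # Boltzmann constant [J/K]
M_E_C2  = 8.1871057e-14   # electron rest energy m_e c^2 [J]
M_P     = 1.6726219e-27   # proton mass [kg]
MU_E    = 1.14            # mean mass per electron, ionized primordial mix (assumption)

def y_sph(mass_kg, temp_K, r_m, r500_m):
    """Spherically integrated Y_{R500} * D_A^2 in m^2 for mock gas particles:
    (sigma_T k_B / m_e c^2) * sum over particles inside R500 of N_e * T_e."""
    inside = r_m < r500_m
    n_electrons = mass_kg[inside] / (MU_E * M_P)   # electron count per particle
    return SIGMA_T * K_B / M_E_C2 * np.sum(n_electrons * temp_K[inside])
```

Dividing by $D_A^2$ and converting steradians to arcmin$^2$ then gives the observable quantity.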
In order to put sources at different redshifts on the same footing (the equivalent of converting from flux to luminosity for more traditional measurements), we scale to $z=0$ and a fixed angular diameter distance of 500~Mpc, \begin{equation} \label{eq:y_r500} \tilde{Y}_{R_{500}} = Y_{R_{500}} E^{-2/3}(z) \left[D_\textrm{A} (z) / 500 \textrm{Mpc}\right]^2, \end{equation} where $E(z)$ is the Hubble parameter $H(z)$ normalized by $H_0$, which is approximated by $E^2(z) = \Omega_\textrm{m} (1+z)^3 + \Omega_\Lambda$. We express $\tilde{Y}_{R_{500}}$ in units of arcmin$^2$.\footnote{Some authors do not scale to $D_A = 500$~Mpc, resulting in a quantity in Mpc$^2$ rather than arcmin$^2$.} Since the temperature is proportional to $M^{2/3}$ for a virialized halo, the tSZ flux depends only on $M$, scaling as $\sim M^{5/3}$, in the self-similar limit, in which gravity and thus mass determines all the halo parameters. However, the spherically integrated $Y_{R_{500}}$ is not a direct observable due to the instrumental beam size and the contribution of all gas along the line-of-sight. Because of {\em Planck}'s several-arcmin beam, \citetalias{2013A&A...557A..52P} actually measured the signal within a cylindrical volume of projected radius $5\times R_{500}$ and then extrapolated to $Y_{R_{500}}$ assuming a universal electron pressure radial profile \citep{2010A&A...517A..92A}. In addition, the line-of-sight integration can be affected by the emission from nearby halos (`two-halo term'), thermal emission from dust in the host galaxies, the viewing angle, or other effects. The effect of the two-halo term on calculating the radial density and pressure profiles has been discussed in several previous works \citep[e.g.,][]{2017MNRAS.467.2315V, 2018PhRvD..97h3501H, 2021ApJ...919....2M}, and it becomes non-negligible beyond a few times the virial radius.
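The rescaling in Equation~(\ref{eq:y_r500}) only needs $E(z)$ and a flat-$\Lambda$CDM angular diameter distance. A self-contained sketch (the cosmological parameter values are illustrative, not those adopted by any specific survey or simulation):

```python
import numpy as np

def E_of_z(z, om=0.31, ol=0.69):
    # E(z)^2 = Omega_m (1+z)^3 + Omega_Lambda  (flat LCDM; illustrative values)
    return np.sqrt(om * (1.0 + z) ** 3 + ol)

def angular_diameter_distance(z, h=0.68, om=0.31, ol=0.69, n=10000):
    """Flat-LCDM angular diameter distance in Mpc:
    D_A = D_C / (1+z) with D_C = (c/H0) * integral of dz'/E(z')."""
    c_over_H0 = 299792.458 / (100.0 * h)          # c/H0 in Mpc
    zs = np.linspace(0.0, z, n)
    f = 1.0 / E_of_z(zs, om, ol)
    d_c = c_over_H0 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))  # trapezoid
    return d_c / (1.0 + z)

def y_tilde(y_r500, z):
    """Scale Y_{R500} to z = 0 and D_A = 500 Mpc, as in Eq. (eq:y_r500)."""
    d_a = angular_diameter_distance(z)
    return y_r500 * E_of_z(z) ** (-2.0 / 3.0) * (d_a / 500.0) ** 2
```

For a source at $z\sim0.1$ the distance factor dominates the rescaling, since $E(z)$ is still close to unity there.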
Following their approach, we include gas particles that do not belong to a central subhalo for the tSZ calculation in this work and discuss this issue in Appendix~\ref{sec:appendix_twohalo}. \subsection{Simulations} We use three simulation datasets, namely TNG, EAGLE, and FIRE-2, to compare tSZ signatures and gas radial profiles among galaxy samples. The simulations were tuned to reproduce various observed galaxy properties, such as the stellar mass function and galaxy size, employing different sub-grid prescriptions \citep{2015MNRAS.446..521S, 2018MNRAS.473.4077P, 2018MNRAS.480..800H}. Each of the simulations offers predictions for the distribution of the CGM in and around halos and for its thermodynamic properties that are highly dependent on its assumed model. We mainly use publicly available TNG100-2\footnote{http://www.tng-project.org/} \citep{2019ComAC...6....2N} and EAGLE\footnote{http://icc.dur.ac.uk/Eagle/} Ref-L100N1504 in similar comoving volumes ($\sim$100$^3$~cMpc$^3$; cMpc: comoving Mpc). We focus our analysis on the present-day ($z=0$) snapshots from both simulations. To test the impact of AGN feedback, we also use lower-resolution, smaller-volume EAGLE simulations with and without feedback (Ref-L0050N0752 and NoAGN-L0050N0752), which use a comoving volume of 50$^3$~cMpc$^3$ with 752$^3$ smoothed particle hydrodynamics (SPH) and dark matter (DM) particles. TNG and EAGLE provide halo (group) and subhalo (galaxy) catalogs identified using the friends-of-friends \citep[FoF;][]{1982ApJ...257..423H, 1985ApJ...292..371D} and \texttt{subfind} algorithms \citep{2001MNRAS.328..726S}. We only identify central `subhalos,' which are the equivalent of central galaxies, as galaxies for the analysis so that we can be assured that each galaxy's properties are driven by its own evolution rather than being dominated by a nearby larger galaxy. 
We do, however, include all nearby particles within the simulation volume to analyze gas properties, even if they are not bound to the specific halo/subhalo, because the CGM of a central galaxy and its satellites are not separate. We will describe the rationale for including these `two-halo' effects in detail in Appendix~\ref{sec:appendix_twohalo}. We complement these recent state-of-the-art cosmological simulations with the `zoom-in' FIRE-2 simulation, which explores much smaller physical scales than the other two large-volume simulations and focuses on individual halos, because its finer physical resolution has already provided additional useful understanding of CGM properties \citep[e.g.,][]{2020arXiv200613976S}. In particular, in Section~\ref{subsec:radial}, we compare the gas radial profiles of the Milky Way-sized galaxies in FIRE-2 to average profiles of galaxies of similar mass from TNG and EAGLE. We assess the impact of cosmic-rays by also comparing to the recent FIRE-2 run with cosmic-ray treatment \citep{2020MNRAS.496.4221J}. CGM properties in these simulations have been analyzed in several prior works. \cite{2018MNRAS.477..450N} analyzed TNG simulation data to study the distribution of the high-ionization species O VI, O VII, and O VIII and their physical properties, which can be probed by UV and X-ray spectroscopy. \cite{2019MNRAS.485.3783D} studied the relation between the gas fraction and the halo mass of present-day $\sim\!L^{\star}$ galaxies in EAGLE. They showed that the central black hole mass and the halo gas fraction are strongly negatively correlated at a fixed halo mass and that the soft X-ray luminosity and the tSZ flux display similar correlations with the black hole mass and the star-formation rate (SFR).
\cite{2020MNRAS.491.4462D} extended this work to TNG, finding that the scatter in CGM mass fraction is strongly correlated with the specific SFR (sSFR) in both EAGLE and TNG, while the CGM fractions in low-mass halos are considerably higher in TNG than EAGLE. This result implies that tSZ observations of galaxy samples separated by their physical properties will give useful information about the halo gas and distinguish models. \cite{2016MNRAS.463.4533V} studied the tSZ signal and the soft X-ray emission in the original FIRE \citep[FIRE-1;][]{2014MNRAS.445..581H} simulation to understand the effect of stellar feedback. (The FIRE-1 simulation did not implement AGN feedback.) They found that the tSZ flux (integrated only out to $R_{500}$ but including all the gas along the line-of-sight inside that radius in the simulation volume) of halos whose masses are below $10^{13}~\textrm{M}_\odot$ deviates from the \citetalias{2013A&A...557A..52P} self-similar scaling relation. They were able to explain this suppression in these low-mass objects by their reduced hot gas ($T > 10^{4}$~K) fraction, which is also observed to be redshift-independent, unlike the baryon and total gas fractions. They attributed this weak redshift dependence to the fact that high-redshift halos contained more cold gas than hot gas but lost a smaller fraction of their baryons than low-redshift halos, with the two effects canceling in the hot gas fraction.
The observational effort includes experiments such as {\em Planck}\ \citep{2011A&A...536A..10P, 2013A&A...557A..52P}, ACT \citep{2016JLTP..184..772H}, the South Pole Telescope \citep[SPT;][]{2014SPIE.9153E..1PB}, and will continue in the future with the Simons Observatory \citep[SO;][]{2019JCAP...02..056A}, and CMB-S4 \citep{2016arXiv161002743A, 2019arXiv190704473A, 2017arXiv170602464A}. A key observational diagnostic is the scaling relation between tSZ flux and halo mass. Using a matched multi-filter \citep{2006A&A...459..341M}, \citetalias{2013A&A...557A..52P} reported a nearly self-similar scaling relation for $\sim$260,000 locally brightest galaxies (LBGs) extracted from the Sloan Digital Sky Survey \citep[SDSS;][]{2009ApJS..182..543A}. To convert observable stellar mass to halo mass, they generated a mock catalog using the Millennium simulation, selected LBG samples following the SDSS selection criteria, and derived the stellar mass-halo mass relation. The tSZ signal was measured down to a stellar mass of $\sim 2 \times 10^{11}~\textrm{M}_\odot$, spanning a mass range of almost two orders of magnitude ($M_{500}$ between $\sim 10^{13}$ and $\sim 10^{15}$~$\textrm{M}_\odot$). \cite{2015ApJ...808..151G} later analyzed the data applying aperture photometry instead of the matched filter, taking the stacking bias into account, and obtained results consistent with \citetalias{2013A&A...557A..52P}. \cite{2020ApJ...890..156P} found a self-similar scaling relation for X-ray selected groups and clusters with masses above $10^{13.4} \textrm{M}_\odot$. This self-similarity might seem to contradict the intuition that non-gravitational processes, such as stellar and AGN feedback, should cause a mass-dependent deviation from self-similarity with greater deviation at lower mass \citep[e.g.,][]{2015MNRAS.451.3868L, 2016MNRAS.463.4533V}.
We will see below that simulations can exhibit self-similarity or deviate from it depending on the radial scale over which quantities are defined. It is worth noting in this context that, because of the large instrumental beam, the {\em Planck}\ result actually measured the tSZ flux on scales out to $5R_{500}$ and extrapolated back to $R_{500}$ using the filter template based on the pressure profile from \cite{2010A&A...517A..92A}. In parallel with the observations, the tSZ effect has been studied with several hydrodynamic simulations \citep[e.g.,][]{2015MNRAS.451.3868L, 2017MNRAS.465.2936M, 2017MNRAS.471.1088B, 2018MNRAS.480.4017L, 2021MNRAS.504.5131L, 2019MNRAS.485.3783D}. For example, \cite{2015MNRAS.451.3868L} performed mock tSZ observations using the cosmo-OWLS cosmological simulations. They studied several variations from the `reference' model that included metal-dependent radiative cooling and stellar feedback. Their `non-radiative' model only invoked the UV and X-ray background, without cooling and feedback prescriptions, while the AGN models included AGN feedback with varying temperature treatment. They found that both the tSZ signal and radial pressure profile change substantially among the models. An important implication is that extrapolation from $5R_{500}$ to $R_{500}$ based on a fixed pressure profile, as in the {\em Planck}\ analysis, can significantly bias the results. Higher angular resolution (arcmin or better), or full forward modeling from simulation to observation, is therefore critical for probing the CGM in galaxy-scale halos. \cite{2021MNRAS.504.5131L} compared the tSZ measurements from the {\em Planck}\ data to a number of hydrodynamical simulations.
They analyzed the samples within the mass range of $M_{500} \sim 10^{12-14.5}~\textrm{M}_\odot$ and showed that the integrated tSZ flux $\tilde{Y}_{R_{500}}$ of the models (Illustris, TNG-300, EAGLE, and Magneticum) starts to deviate from the self-similar relation below $M_{500} \sim 10^{13} \textrm{M}_\odot$. Their comparison showed that the simulated tSZ flux deviates to a greater extent as halo mass $M_{500}$ decreases, and each simulation gives a different amount of discrepancy from the self-similar relation due to the feedback model it adopts. These preliminary studies demonstrate the potential of SZ observations to add a new kind of constraint on galaxy formation models. In this section, we analyze the tSZ signal of the halos in TNG, EAGLE, and FIRE-2 simulations. Since the large-volume cosmological simulations provide sufficient samples for a statistical study, we first explore the mass scaling relation with TNG and EAGLE, especially by separating galaxy types. Then, we compare the radial profiles of the thermodynamic properties for $\sim 10^{12}~\textrm{M}_\odot$ galaxies from the three simulations, including the zoom-in FIRE-2 simulation. Equation~\ref{eq:y_r500} indicates that the tSZ flux $\tilde{Y}_{R_{500}}$ is proportional to $(\Omega_\textrm{b}/\Omega_\textrm{m}) h^{2/3}$ once the self-similar pressure profile is assumed \citep{2010A&A...517A..92A, 2021MNRAS.504.5131L}. Since the \citetalias{2013A&A...557A..52P} analysis and the simulations we use in this analysis adopted different cosmological parameters \citep{2011ApJS..192...18K, 2014A&A...571A..16P, 2016A&A...594A..13P}, we apply a correction to scale the measurements to the \citetalias{2013A&A...557A..52P} result. \subsection{Sample selection in TNG and EAGLE}\label{subsec:sample} We use simulated galaxies to compare the mass scaling relation with tSZ flux, depending on galaxy type.
The previous observational and simulation studies of tSZ flux and halo mass imply the existence of a reliable self-similar relation down to $\sim 10^{13} \textrm{M}_\odot$. To see whether the self-similarity extends to lower masses, we limit our galaxy samples to the halo mass range of $10^{11.0}\textrm{M}_\odot < M_{500} < 10^{13.5}\textrm{M}_\odot$ in order to explore the effect of feedback in the simulations. (Note that the simulation study of \citealt{2021MNRAS.504.5131L} used the mass range of $M_{500} \sim 10^{12-14.5}~\textrm{M}_\odot$.) To understand how galaxy type affects the gas distribution around halos, and thus the tSZ signal, we categorize our galaxy samples from the TNG and EAGLE simulations according to their SFR. In general, we divide galaxies into star-forming and passive (quenched or quiescent galaxies with little SFR in the local universe) systems. Star-forming galaxies tend to reside in halos less massive than passive galaxies at a given stellar mass \citep{2016MNRAS.457.3200M}. Our analysis uncovers the dependence of the scaling relation on galaxy type, as well as the relations expected in the low-mass regime, which future tSZ observations will explore. Furthermore, the SFR is an indicator of stellar feedback under the assumption that the energy from galactic winds is proportional to the instantaneous SFR \citep[e.g.,][]{2018MNRAS.473.4077P, 2018MNRAS.479.4056W}. Our analysis tests the expectation that the SFR affects the tSZ signal close to the host galaxy. The relation between stellar mass and SFR has been studied for both TNG and EAGLE \citep{2015MNRAS.450.4486F, 2019MNRAS.485.4817D}. Star-forming galaxies follow a relation in the stellar-mass ($M_\star$)$-$SFR plane \citep[e.g.,][]{2011ApJ...730...61K} known as the star-forming main sequence (SFMS). \citet{2019MNRAS.485.4817D} identified the SFMS in the $M_\star-$SFR plane for the TNG simulations.
They fit the SFR of the star-forming samples as a function of the stellar mass, \begin{equation} \log\left(\frac{\langle \textrm{SFR} \rangle_\textrm{sf-ing galaxies}}{\textrm{M}_\odot \textrm{yr}^{-1}}\right) = \alpha (z) \log \left( \frac{M_\star}{\textrm{M}_\odot} \right) + \beta (z), \end{equation} where they defined the SFR of a halo as an integration of individual SFR within twice the spherical stellar half-mass radius, and the stellar mass as a sum of all the stellar particles within the same radius. To make a coherent comparison, we use the SFR and the $M_\star$ measured within a 30 (physical) kpc aperture for both TNG and EAGLE, following the common EAGLE analysis \citep[e.g.,][]{2020MNRAS.491.4462D}. \begin{figure}[b!] \plottwo{f1a.png}{f1b.png} \caption{Star-forming and quiescent galaxy samples used in our analysis for TNG100-2 (left) and EAGLE Ref-L100N1504 (right) at $z=0$. We use well-resolved central galaxies with $10^{11.0}\textrm{M}_\odot < M_{500} < 10^{13.5}\textrm{M}_\odot$. We designate galaxies above the SFMS fit as star-forming (blue), and those with sSFR $< 10^{-11} \textrm{yr}^{-1}$ as quiescent (red). The galaxies below the SFMS fit but with sSFR $> 10^{-11} \textrm{yr}^{-1}$ are not taken into account to better contrast the effect of galaxy type. The green line shows a fit to the observations \citep{2012ApJ...754L..29W}. The orange line is the SFMS fit to the TNG300, presented in \cite{2019MNRAS.485.4817D} with $\alpha(0) = 0.80 \pm 0.01$ and $\beta(0) = -8.15 \pm 0.11$.} \label{fig:tsz_sfr_samples} \end{figure} We divide galaxies from TNG and EAGLE into star-forming and passive samples. First, we identify the SFMS for central galaxies, whereas \citet{2019MNRAS.485.4817D} did not distinguish centrals from satellites. We consider only systems with more than 50 gas particles and stellar particles each to ensure that the galaxies are well-resolved. 
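This type of selection (the resolution cut, a sSFR threshold for quiescence, and a linear SFMS fit to binned median SFRs, with the threshold, fitting range, and bin width as used in this section) can be sketched in a few lines of \texttt{numpy}; the input arrays are mock placeholders, and resolution and centrals-only cuts are assumed to have been applied already:

```python
import numpy as np

def classify(mstar, sfr, ssfr_quiescent=1e-11, fit_range=(1e9, 1e10), dex=0.2):
    """Split galaxies into quiescent / star-forming: flag sSFR below the
    threshold as quiescent, fit log SFR = alpha*log M* + beta to binned
    median log SFRs of the remaining galaxies, and call galaxies above the
    fit star-forming.  Mock-input sketch, not the actual analysis code."""
    ssfr = sfr / mstar
    quiescent = ssfr < ssfr_quiescent
    # fit sample: non-quiescent galaxies inside the stellar-mass fitting range
    sel = ~quiescent & (mstar >= fit_range[0]) & (mstar < fit_range[1]) & (sfr > 0)
    logm, logsfr = np.log10(mstar[sel]), np.log10(sfr[sel])
    edges = np.arange(np.log10(fit_range[0]), np.log10(fit_range[1]) + dex, dex)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (logm >= lo) & (logm < hi)
        if m.any():
            centers.append(0.5 * (lo + hi))
            medians.append(np.median(logsfr[m]))
    alpha, beta = np.polyfit(centers, medians, 1)
    star_forming = ~quiescent & (np.log10(np.clip(sfr, 1e-30, None))
                                 > alpha * np.log10(mstar) + beta)
    return star_forming, quiescent, (alpha, beta)
```

Galaxies that are neither above the fit nor below the sSFR threshold are left unclassified, mirroring the intermediate population excluded from the analysis.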
Then, we classify as quiescent those galaxies with sSFR below $10^{-11} \textrm{yr}^{-1}$ \citep[e.g.,][]{2013MNRAS.432..336W}, where $\textrm{sSFR} = \textrm{SFR} / M_\star$. We fit the SFMS using the rest of the galaxies, performing a linear fit to the median SFRs in $M_\star$ bins between $10^{9}$ and $10^{10}~\textrm{M}_\odot$, with a bin width of 0.2 dex. We define the galaxies above the SFMS fit as star-forming samples in this analysis to better contrast the results depending on galaxy types. Figure~\ref{fig:tsz_sfr_samples} shows the $M_\star-$SFR plane populated with the present-day central galaxies in TNG (snapshot 99) and EAGLE (snapshot 28), and our aforementioned samples. The criteria we use give 4,178 star-forming and 1,590 quiescent galaxies for TNG, and 3,355 star-forming and 907 quiescent galaxies for EAGLE. Since TNG has $\sim$30\% larger volume than EAGLE, it accordingly contains more galaxies. However, the sample sizes are of similar order, and this difference will not affect the interpretation. \subsection{Integrated tSZ signal: Mass scaling relation}\label{subsec:tsz_integrated} In this section, we examine the tSZ mass scaling relation ($Y\!-\!M$ relation) for the TNG and EAGLE galaxy samples, divided by SFR. The aims of showing the $Y\!-\!M$ relation using the simulated galaxy samples are to: (1) compare results to the \citetalias{2013A&A...557A..52P} data, where the relation shows little departure from a single power-law with a slope of 5/3, and (2) investigate how the relation depends on the type of galaxy (star-forming vs. quiescent) and understand how the feedback physics included in each simulation can affect the relation. \begin{figure}[b!] \plottwo{f2a.png}{f2b.png} \caption{tSZ flux as a function of stellar mass for TNG (left) and EAGLE (right). Blue and red crosses indicate the star-forming and quiescent samples as defined in Figure~\ref{fig:tsz_sfr_samples}.
Black circles show the \citetalias{2013A&A...557A..52P} measurements, and the error bars indicate the statistical error. Note that our samples do not cover the entire range of the stellar mass from the \citetalias{2013A&A...557A..52P} observation since we limited our galaxy mass range to $10^{11.0}\textrm{M}_\odot < M_{500} < 10^{13.5}\textrm{M}_\odot$.} \label{fig:tsz_mstar} \end{figure} We spherically integrate the tSZ signal out to $R_{500}$ to calculate $Y_{R_{500}}$ for each of the galaxy samples. The $Y\!-\!M$ relation is often presented as a function of the halo mass $M_{500}$ to relate the observable tSZ flux to the dark matter halo properties. However, stellar mass is the more directly measurable property, and observations determine the tSZ to stellar mass relation. The stellar mass$-$halo mass conversion is highly dependent on galaxy properties, especially on color. Figure~\ref{fig:tsz_mstar} shows the spherically integrated $Y_{R_{500}}$ as a function of $M_\star$, with the \citetalias{2013A&A...557A..52P} measurements overlaid. The simulations and the \citetalias{2013A&A...557A..52P} observation overlap around $10^{11.25}\textrm{M}_\odot < M_{*} < 10^{11.5}\textrm{M}_\odot$. Note that the \citetalias{2013A&A...557A..52P} points in their work flatten out below $M_\star \sim 10^{11}~\textrm{M}_\odot$ because of dust contamination. The $M_\star = 10^{11.25}\textrm{M}_\odot$ bin of the {\em Planck}\ measurement was made at 3.5$\sigma$ detection, and the lower mass bins show weaker detection significance, therefore we only include the points above $M_\star = 10^{11.25}\textrm{M}_\odot$ in Figure~\ref{fig:tsz_m500}. \begin{figure}[t!] \plottwo{f3a1.png}{f3b1.png}\\\vspace{0.2in} \plottwo{f3a2.png}{f3b2.png} \caption{tSZ flux as a function of halo mass $M_{500}$, integrated out to $R_{500}$ (upper), and to $5 R_{500}$ (lower) in TNG (left column) and EAGLE (right column). 
The shaded green region shows the mass range of the galaxy samples used in the \citetalias{2013A&A...557A..52P} analysis, and their best-fit scaling relation is shown as the dotted line. The relation in the lower panels is rescaled by multiplying the original relation in the upper panels by 1.796 (the ratio from the universal pressure profile).} \label{fig:tsz_m500} \end{figure} \begin{figure}[t!] \plottwo{f4a.png}{f4b.png} \caption{$Y_{5R_{500}} / Y_{R_{500}}$ of the TNG and EAGLE galaxies. The dashed line shows the conversion factor 1.796 corresponding to the universal pressure profile. The red and blue dots connected by the solid lines indicate the median value of the ratios in each $M_{500}$ mass bin. The mass bins are separated by 0.5 dex between $10^{10.5} \textrm{M}_\odot$ and $10^{13.5} \textrm{M}_\odot$. Higher ratios of $Y_{5R_{500}} / Y_{R_{500}}$ indicate that the hot gas has been pushed out to the outskirts of the galaxy halos by feedback.} \label{fig:tsz_yratio} \end{figure} Figure~\ref{fig:tsz_m500} shows the simulated tSZ flux as a function of halo mass $M_{500}$ and compares it to the \citetalias{2013A&A...557A..52P} scaling relation. As described earlier, \citetalias{2013A&A...557A..52P} measured the tSZ signal out to $5 R_{500}$ and converted it to $Y_{R_{500}}$ assuming the universal pressure profile (UPP; Equation~22 in \citealt{2010A&A...517A..92A}). This rescaling of the tSZ flux from $Y_{R_{500}}$ to $Y_{5R_{500}}$ corresponds to a factor of 1.796. We present the scaling relation in terms of both $Y_{R_{500}}$ and $Y_{5R_{500}}$. The simulated galaxies lie below the \citetalias{2013A&A...557A..52P} scaling relation for $Y_{R_{500}}\!-\!M_{500}$, and the discrepancy grows as the mass decreases for both types of galaxies and for both simulations. If, on the other hand, we consider the scaling relation in $Y_{5R_{500}}\!-\!M_{500}$, we find good agreement.
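The factor of 1.796 can be recovered by integrating the UPP directly. The sketch below is our own check, using the GNFW parameters of Equation~22 in \citet{2010A&A...517A..92A}, and compares the spherically integrated flux within $5R_{500}$ and $R_{500}$:

```python
import numpy as np
from scipy.integrate import quad

# GNFW parameters of the universal pressure profile (Arnaud et al. 2010, Eq. 22)
P0, c500, gamma, alpha, beta = 8.403, 1.177, 0.3081, 1.0510, 5.4905

def p_upp(x):
    """Dimensionless UPP as a function of x = r / R500."""
    cx = c500 * x
    return P0 / (cx**gamma * (1.0 + cx**alpha) ** ((beta - gamma) / alpha))

def y_sph(x_max):
    """Spherically integrated tSZ flux out to x_max (arbitrary normalization)."""
    return quad(lambda x: p_upp(x) * x**2, 0.0, x_max)[0]

print(y_sph(5.0) / y_sph(1.0))  # ~1.80, the conversion factor quoted above
```

The ratio depends only on the shape of the profile, so the overall normalization drops out.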
To test the explanation of this `aperture size' effect in the tSZ flux measurement, we show the ratio $Y_{5R_{500}} / Y_{R_{500}}$ as a function of halo mass and separated by galaxy type in Figure~\ref{fig:tsz_yratio}. The stark difference between these two scaling relations ($Y_{R_{500}}\!-\!M_{500}$ and $Y_{5R_{500}}\!-\!M_{500}$) could be explained qualitatively by feedback driving gas from small radii ($\sim R_{500}$) out to larger radii such that all the gas and energy one expects are contained inside $5R_{500}$ but not inside $R_{500}$. We observe two important trends. The first is that, for all galaxies, the ratio converges to the universal pressure profile value as the halo mass increases. The second is that the discrepancy with the UPP is larger for quiescent galaxies. Both of these trends support the feedback explanation. The trend with mass suggests that, as the halo mass grows, gravity wins out over feedback effects driving gas outward, resulting in better agreement with the UPP (derived from massive clusters, see below), or, equivalently, feedback is increasingly effective in overcoming gravity at $R_{500}$ as the halo mass decreases. The observation of a larger discrepancy for quiescent galaxies is consistent with the idea that feedback has had its full effect in that class, blowing the bulk of the gas fuel for star formation out into the far CGM, while star-forming galaxies are still somewhere along that evolutionary trend, with less of the gas blown out past $R_{500}$, making it possible for them to continue forming stars at a high rate. These deviations from self-similarity at $R_{500}$ occur simply because the UPP, on which the extrapolation from $Y_{5R_{500}}$ to $Y_{R_{500}}$ is based, is not correct for galaxies. One must remember that it was obtained from a sample of massive clusters, in which feedback plays a much smaller role (fractionally). 
In the mass range for which they overlap, $10^{11.5} < M_{500}/\textrm{M}_\odot < 10^{12.5}$, the relative locations of star-forming and quiescent galaxies in Figures~\ref{fig:tsz_m500} and \ref{fig:tsz_yratio} are also illuminating. The much larger $Y_{5R_{500}} / Y_{R_{500}}$ ratio for quiescent galaxies suggests that a much larger fraction of their gas has been heated and pushed out to large radii than for star-forming galaxies. This is consistent with the idea that star formation in quiescent galaxies was quenched by feedback from star formation and AGN activity that caused this heating and pushing of the gas. (The details of this model, including the timescale required to set up this difference in the profiles, need further study, which is outside the scope of this work.) Additionally, the growth of the ratio and of the difference between star-forming and quiescent galaxies at smaller masses indicates that the effect is fractionally larger at lower masses. While the details are quite dependent on the simulation, the effect is clearly present in both simulations and is consistent with expectations from the basic picture of feedback quenching star-forming galaxies by driving their gas out to large radii and heating it. \subsection{Comparison with the ACT stacked profile} Recently, the ACT collaboration presented stacked tSZ and kSZ measurements by cross-correlating the ACT DR5 dataset and the BOSS CMASS galaxy catalog \citep{2021PhRvD.103f3513S, 2021PhRvD.103f3514A}. From the SZ measurements, they inferred the gas mass density and temperature, hence thermal pressure, profiles by fitting a parameterized generalized Navarro-Frenk-White (GNFW) profile \citep{2012ApJ...758...74B}. They compared the results to the NFW profile \citep{1997ApJ...490..493N}, hydrodynamical simulations by \cite{2010ApJ...725...91B}, and TNG galaxy samples as shown in Figure~\ref{fig:eagle_act2020tsz} (Figure~6 in \citealt{2021PhRvD.103f3514A}).
Although the simulations matched the profiles deprojected from the observations well at radii smaller than $\sim 2 R_{200}$ ($\sim$ 1~Mpc), the discrepancy was significant at larger radii, especially for the thermal pressure profile and hence the temperature. They claimed the simulations under-predicted the gas density and thermal pressure, and explained that it was due to the CGM gas in the outer region being less heated by the sub-grid models the simulations employed. In this section, we present the stacked radial profiles using the EAGLE galaxies and compare them to the ACT best-fit results and the TNG profiles in \cite{2021PhRvD.103f3514A}. \begin{figure}[t!] \plottwo{f5a.png}{f5b.png} \caption{Deprojected radial gas mass density (left) and thermal pressure (right) profiles from the ACT and CMASS cross correlation \citep[][blue line and band]{2021PhRvD.103f3514A} compared to TNG (orange and green), EAGLE (red), the \cite{2010ApJ...725...91B} simulation (magenta), and the NFW profile \citep[][black]{1997ApJ...490..493N}. The dotted line in the thermal pressure plot shows the best-fit GNFW pressure profile derived from the galaxy cluster observations in \cite{2013A&A...550A.131P}. We show both the original pressure profile without a two-halo term (dotted blue) and with a two-halo term (dotted maroon), as presented by \cite{2021PhRvD.103f3514A}. The upper axis shows the radial distance in units of $R_{500} = 350$~kpc for a $M_\textrm{200} \sim 3 \times 10^{13} \textrm{M}_\odot$ halo. The dashed vertical lines show $R=R_{500}$ and $R=5R_{500}$.
The vertical grey bars indicate the radial ranges where the ACT SZ measurements were made \citep{2021PhRvD.103f3513S}.} \label{fig:eagle_act2020tsz} \end{figure} The CMASS galaxies are distributed within the redshift range $0.4 < z < 0.7$, and the galaxy samples used in the ACT analysis have median redshift $z=0.55$ and mean stellar mass of $3 \times 10^{11} \textrm{M}_\odot$ (from \citealt{2013MNRAS.435.2764M} stellar mass estimates) that corresponds to a halo mass $M_\textrm{200} \sim 3 \times 10^{13} \textrm{M}_\odot$ using the \cite{2018AstL...44....8K} stellar-to-halo mass conversion. Note that \cite{2021PhRvD.103f3514A} used $R_{200}$ and $M_{200}$ as virial radius and mass, whereas we use $R_{500}$ and $M_{500}$ throughout this paper. For their $M_\textrm{200} \sim 3 \times 10^{13} \textrm{M}_\odot$ halo, $R_{200}$ and $R_{500}$ correspond to $\sim 500$~kpc and $\sim 350$~kpc, respectively. They selected red galaxies in the TNG simulation and calculated the average profile by weighting the samples using the stellar and halo mass distributions of the observed galaxies (`TNG S' and `TNG H' in Figure~\ref{fig:eagle_act2020tsz}). For example, they used nine log-spaced bins between $10^{11.53}$ and $10^{13.98}~\textrm{M}_\odot$ for the halo mass distribution. The mass catalog can be downloaded as part of the \texttt{Mop-c GT} (``Model-to-observable projection code for Galaxy Thermodynamics'') package. We use the identical color cut ($g - r \geq 0.6$)\footnote{$g$ and $r$ are the Sloan Digital Sky Survey (SDSS) magnitudes.} for the EAGLE central galaxies, using snapshots 22 ($z=0.62$) and 23 ($z=0.50$) to match the median redshift. We calculate the weighted average of the gas mass and thermal pressure profiles using the halo mass ($M_{200}$) probability density function of \cite{2021PhRvD.103f3514A}. 
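The weighting step described above can be sketched as follows. This is our own schematic (the array names and shapes are hypothetical, not the actual \texttt{Mop-c GT} data): each simulated galaxy is weighted so that the stacked sample reproduces the observed halo-mass histogram.

```python
import numpy as np

def stacked_profile(profiles, log_masses, bin_edges, bin_weights):
    """Weighted mean radial profile.

    profiles:    (N_gal, N_r) individual radial profiles
    log_masses:  (N_gal,) log10 halo masses of the simulated galaxies
    bin_edges:   log10 mass bin edges of the observed distribution
    bin_weights: (len(bin_edges) - 1,) observed halo-mass histogram weights
    """
    idx = np.digitize(log_masses, bin_edges) - 1          # mass bin of each galaxy
    counts = np.bincount(idx, minlength=len(bin_weights))
    # per-galaxy weight: the target bin weight, shared among the
    # simulated galaxies that fall in that bin
    w = np.asarray(bin_weights)[idx] / np.maximum(counts[idx], 1)
    return np.average(profiles, axis=0, weights=w)
```

With the nine log-spaced bins between $10^{11.53}$ and $10^{13.98}~\textrm{M}_\odot$ mentioned above, \texttt{bin\_edges} would contain ten entries.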
Since the gas mass density and the pressure are proportional to $M$ and $M^{5/3}$, respectively, it is necessary to use the correct mass distribution to estimate the stacked profiles \citep{2021ApJ...919....2M}. As described earlier, we include the two-halo term in calculating the radial profiles by taking the nearby particles within a volume into account. Figure~\ref{fig:eagle_act2020tsz} shows the gas mass density and thermal pressure profiles inferred from the EAGLE data, compared to the ACT best-fit profiles as well as the TNG results in \cite{2021PhRvD.103f3514A}. The EAGLE density and pressure profiles are nearly consistent with the TNG profiles. Both the EAGLE and TNG density profiles are within 2$\sigma$ of the ACT profile at most radial distances. The TNG and EAGLE thermal pressure profiles show similar discrepancies with the ACT profile. We could interpret the comparison of the simulation data (EAGLE, TNG) to the ACT profiles in two ways. First, the EAGLE profiles are approximately consistent with the TNG profiles. This supports the discussion in \cite{2021PhRvD.103f3514A} that the sub-grid models in the cosmological simulations under-predict the gas density and pressure at large radii due to insufficient heating. However, as we will see in Section~\ref{subsec:radial}, the FIRE-2 simulation with the cosmic-ray (CR) treatment predicts even lower thermal pressure than TNG and EAGLE in general, primarily because of the temperature rather than the gas mass density, so the most recent FIRE-2 sub-grid model is unlikely to resolve this issue. Another possible interpretation is that feedback in TNG and EAGLE is too effective, in that it blows too much gas out to large radii. We argued earlier that feedback in TNG and EAGLE was not as effective in star-forming galaxies as in quiescent galaxies (Section~\ref{subsec:tsz_integrated}).
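The sensitivity to the mass distribution is easy to quantify: because the self-similar signal scales as $M^{5/3}$, the mean signal of a broad mass distribution exceeds the signal evaluated at the mean mass. A minimal numeric illustration (the log-normal width of 0.35~dex here is an arbitrary choice for demonstration, not the CMASS mass distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical halo-mass sample, log-normal in mass with 0.35 dex scatter
m = 10.0 ** rng.normal(loc=13.5, scale=0.35, size=100_000)

y_at_mean_mass = np.mean(m) ** (5.0 / 3.0)  # signal evaluated at the mean mass
mean_signal = np.mean(m ** (5.0 / 3.0))     # mean of the per-halo signals
print(mean_signal / y_at_mean_mass)         # > 1: the stack is biased high
```

The wider the mass distribution, the larger this bias, which is why the stacked profiles must be weighted by the observed halo-mass probability density function.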
As we will see in Figure~\ref{fig:mw_radial}, the density and pressure for star-forming galaxies are higher than for quiescent galaxies in both simulations. This implies that perhaps the low gas mass density and pressure profiles of TNG and EAGLE are the result of very efficient feedback mechanisms. We can also relate this comparison to the integrated tSZ flux and mass scaling relation in Section~\ref{subsec:tsz_integrated}. Figure~\ref{fig:tsz_m500} shows that the simulated tSZ flux integrated out to $5R_{500}$ more or less agrees with the \citetalias{2013A&A...557A..52P} fit around the mean halo mass of the galaxy samples used in the ACT analysis ($M_\textrm{200} \sim 3 \times 10^{13} \textrm{M}_\odot$ or $M_\textrm{500} \sim 2 \times 10^{13} \textrm{M}_\odot$). However, if we use the ACT pressure profile in Figure~\ref{fig:eagle_act2020tsz} to calculate the tSZ flux out to $5R_{500}$, it is a few times higher than the \citetalias{2013A&A...557A..52P} tSZ flux measurement. The discrepancy is dependent on the two-halo term and could be due to the sample selection. For example, \citetalias{2013A&A...557A..52P} explicitly selected `locally brightest galaxies', defined as galaxies with no brighter neighbor within 1~Mpc, to construct a sample of central galaxies. An investigation into this effect would require fully modeling the selection functions of the \citetalias{2013A&A...557A..52P} and CMASS samples, which is beyond the scope of this work. \subsection{Gas radial profiles of Milky Way sized galaxies}\label{subsec:radial} The study of integral quantities in Section~\ref{subsec:tsz_integrated} strongly suggests that there is a great deal of information about the impact of feedback on the CGM in the tSZ radial profile. Future SZ observations will be able to characterize the tSZ signal from Milky Way-sized or even less massive halos through stacking if individual detections are not possible.
The CGM properties of Milky Way-mass halos have been studied with cosmological simulations. For example, \cite{2016MNRAS.462.3751K, 2019MNRAS.486.4686K} selected Milky Way-mass galaxies at $z=0$ in the Illustris and TNG simulations, based on both halo and stellar masses, and explored the CGM structure, including the radial distribution of the gas. In this section, we explore the radial profiles of TNG and EAGLE galaxies in the mass range $10^{11.75} < M_{500}/\textrm{M}_\odot < 10^{12.25}$, separated by galaxy type, and we also include FIRE-2 simulations of individual Milky Way-sized galaxies (see Table~\ref{table:fire2_list}). \begin{deluxetable}{cccccc}[b!] \tablecolumns{6} \tablewidth{0pc} \tablecaption{FIRE-2 Simulation Summary} \tablehead{\colhead{Name} & \colhead{$M^\textrm{vir}_\textrm{halo} (\textrm{M}_\odot)$} & \colhead{$M_* (\textrm{M}_\odot)$} & \colhead{$R_\textrm{vir}$ (kpc)} & \colhead{$M_{500} (\textrm{M}_\odot)$} & \colhead{$R_{500}$ (kpc)}} \startdata m12f & $1.6 \times 10^{12}$ & $8.0 \times 10^{10}$ & 306 & $1.2 \times 10^{12}$ & 163\\ m12i & $1.2 \times 10^{12}$ & $6.5 \times 10^{10}$ & 275 & $9.1 \times 10^{11}$ & 150\\ m12m & $1.5 \times 10^{12}$ & $1.2 \times 10^{11}$ & 301 & $1.1 \times 10^{12}$ & 159\\ \enddata \tablecomments{List of the FIRE-2 zoom-in simulations used in this work. The physical quantities are measured in the reference simulation dataset \citep[see][for details]{2018MNRAS.480..800H}. We used both the reference and CR runs for each of the models.} \label{table:fire2_list} \end{deluxetable} \begin{figure}[t!]
\begin{center} \includegraphics[width=.33\textwidth]{f6a1.png}\hfill \includegraphics[width=.33\textwidth]{f6b1.png}\hfill \includegraphics[width=.33\textwidth]{f6c1.png}\\ \includegraphics[width=.33\textwidth]{f6a2.png}\hfill \includegraphics[width=.33\textwidth]{f6b2.png}\hfill \includegraphics[width=.33\textwidth]{f6c2.png}\\ \includegraphics[width=.33\textwidth]{f6a3.png}\hfill \includegraphics[width=.33\textwidth]{f6b3.png}\hfill \includegraphics[width=.33\textwidth]{f6c3.png}\\ \includegraphics[width=.33\textwidth]{f6a4.png}\hfill \includegraphics[width=.33\textwidth]{f6b4.png}\hfill \includegraphics[width=.33\textwidth]{f6c4.png} \caption{Mean radial profiles of the TNG (left column) and EAGLE (middle column) galaxies separated by SFR in the mass range $10^{11.75} < M_{500} < 10^{12.25} \textrm{M}_\odot$: Blue and red curves show median radial profiles of the star-forming and quiescent samples, respectively. We show cumulative tSZ flux, thermal pressure, gas mass density, and temperature from top to bottom. The faint dotted lines are the radial profiles of the individual galaxies. The black crosses in the tSZ radial profile plots (top row) denote the tSZ signal for $10^{12} \textrm{M}_\odot$ mass halo at $R_{500}$ ($\sim 160$~kpc) and $5 R_{500}$, inferred from the self-similar relation of \citetalias{2013A&A...557A..52P}, whose measurements are based on the halos at least an order of magnitude more massive than the MW-sized halo. The solid black lines in those plots show spherically integrated tSZ flux as a function of radius, derived from the universal pressure profile \citep{2010A&A...517A..92A}, normalized by the \citetalias{2013A&A...557A..52P} measurement. Radial profiles of the FIRE-2 galaxies (right column): The solid and dashed lines show the reference models and the CR included simulations, respectively. 
The FIRE-2 m12i, m12m, and m12f are all `star-forming galaxies' as opposed to quiescent.} \label{fig:mw_radial} \end{center} \end{figure} Figure~\ref{fig:mw_radial} shows the mass-weighted radial profiles, including cumulative tSZ signal, gas mass density, temperature, and thermal pressure. We selected the star-forming and quiescent samples in TNG and EAGLE based on the sample selection in Section~\ref{subsec:sample}, and limited the halo mass ($M_{500}$) range to $10^{11.75 - 12.25}\textrm{M}_\odot$. This selection leaves 602 star-forming and 699 quiescent galaxies for the TNG and 392 star-forming and 226 quiescent galaxies for the EAGLE dataset. Each of the faint dotted lines in the background shows an individual galaxy radial profile, and the solid, thick curves display the median profiles. When calculating the radial profiles, we include all the gas particles around the center of each halo. However, a distinction between ISM and CGM gas particles can be made by using the SFR of the fluid elements \citep{2019MNRAS.485.3783D, 2020MNRAS.491.4462D}. Since we mainly focus on the thermodynamic profiles around and beyond the virial radius, the separation of the ISM particles has a negligible effect on our results. As shown in Section~\ref{subsec:tsz_integrated}, star-forming galaxies tend to exhibit higher tSZ fluxes than quiescent galaxies when integrated out to $R_{500}$, although the difference is smaller or even negligible at $5R_{500}$ in both TNG and EAGLE (Figure~\ref{fig:tsz_m500}). This is consistent with the tSZ flux inferred for the $10^{12}\textrm{M}_\odot$ halo at $R=5 R_{500}$ from the \citetalias{2013A&A...557A..52P} measurement (Figure~\ref{fig:mw_radial}). Again, the radial profiles reflect the intuition that the heating and expulsion of gas far out into the CGM has been much more extensive for quiescent galaxies than for star-forming ones.
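The cumulative tSZ profiles in the top row follow from a per-particle sum, $Y_\textrm{sph}(<r) = \sigma_\textrm{T}/(m_e c^2)\sum_i N_{e,i}\,k_\textrm{B} T_i$. A minimal sketch of this accumulation (our own illustration; it assumes a fully ionized primordial composition with mean mass per electron $\mu_e m_p \approx 1.14\,m_p$, and it omits the unit conversions involving the angular-diameter distance):

```python
import numpy as np
from scipy import constants as const

SIGMA_T = const.physical_constants["Thomson cross section"][0]  # [m^2]
ME_C2 = const.m_e * const.c**2    # electron rest energy [J]
MU_E_MP = 1.14 * const.m_p        # assumed mean mass per electron [kg]

def cumulative_y(r, m_gas, temp):
    """Cumulative spherical Compton Y(<r) in m^2, from per-particle
    radius [m], gas mass [kg], and electron temperature [K]."""
    order = np.argsort(r)
    n_e = m_gas[order] / MU_E_MP                        # electrons per particle
    dy = SIGMA_T / ME_C2 * n_e * const.k * temp[order]  # each particle's N_e * kT
    return r[order], np.cumsum(dy)
```

Evaluating the cumulative sum at $r = R_{500}$ or $5R_{500}$ gives the integrated fluxes compared in Figure~\ref{fig:tsz_m500}.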
The gas density is larger, and the temperature is lower close in ($\log (R/R_{500}) < -0.5$) for star-forming galaxies compared to quiescent galaxies, which supplies a reservoir of fresh fuel for star formation. Of course, because the thermal pressure (thus electron pressure) is proportional to the product of the gas density and temperature, these two effects can partially or completely cancel each other. At radii smaller than $\sim 2R_{500}$ in TNG, and at all radii in EAGLE, the gas density wins out over the temperature, yielding higher pressure for star-forming galaxies. At radii larger than $\sim 2R_{500}$ in TNG, the gas density of the quiescent galaxies surpasses that of star-forming galaxies. This leads to higher thermal pressure for $R \gtrsim 2R_{500}$, and the tSZ fluxes of the quiescent and star-forming samples become comparable at $3R_{500}$. The zoom-in FIRE-2 simulations \citep{2018MNRAS.480..800H} have higher spatial and mass resolution than the TNG and EAGLE datasets, focusing on the physical processes of individual halos. \cite{2016MNRAS.463.4533V} used the FIRE-1 simulation, which included stellar feedback but no AGN feedback, to study the tSZ effect and the X-ray emission properties of halos. Using the redshift-dependent snapshots of the galaxy simulations, they found that the hot gas fraction of a halo depends on the halo mass, with little redshift dependence. They pointed out that tSZ measurements could detect this, with increasing suppression of the tSZ signal relative to the \citetalias{2013A&A...557A..52P} self-similar relation as the halo mass decreases below $10^{13} \textrm{M}_\odot$, as a result of stellar feedback. Recently, the FIRE project presented the FIRE-2 simulations with a CR treatment \citep{2020MNRAS.492.3465H, 2020MNRAS.496.4221J}.
\cite{2020MNRAS.496.4221J} showed that the non-thermal CR pressure could dominate over thermal pressure in the CGM, supporting the gas, especially for Milky Way-sized halos. In these simulations, gas cooling is more effective than CR heating, and the gas temperatures are expected to decrease considerably compared to the reference simulations, while the gas mass density profiles are not strongly affected. Since the tSZ directly measures the integrated thermal pressure, comparing the FIRE-2 reference and CR runs illustrates how the tSZ flux changes with the different sub-grid models. The radial profiles of the $z=0$ (snapshot number 600) reference and CR simulation models are shown in the right column of Figure~\ref{fig:mw_radial}. We first observe that the reference simulations and CR runs show substantially different radial profiles, except for the gas mass density profile, as already discussed in \cite{2020MNRAS.496.4221J}. This implies that these CR-driven differences should be considered meaningful. CRs cause a substantial reduction in pressure and temperature out to $3R_{500}$ because they transport energy and momentum created by star formation out to $3R_{500}$ before slowing down and depositing their energy. We also see that the CRs greatly suppress the thermal pressure due to the decreased temperature, resulting in much lower tSZ flux than the reference runs at $R_{500}$. When we compare the FIRE-2 to TNG and EAGLE, the FIRE-2 reference simulation's radial profiles in all four quantities ($Y$, gas mass density, pressure, and temperature) are generally in qualitative agreement with TNG and EAGLE. However, we cannot directly compare the FIRE-2 radial profiles to TNG and EAGLE at radii beyond $\sim 1.5R_{500}$ because the zoom-in FIRE-2 simulations do not include the two-halo term from the gas particles outside their simulation volume. We observe that the tSZ radial profiles of the reference simulations flatten for $R \gtrsim 2R_{500}$. 
These tSZ profiles are similar to the TNG and EAGLE tSZ profiles when the two-halo term is not considered (Figure~\ref{fig:appendix_twohalo}). \subsection{Effect of AGN feedback in the EAGLE simulation} We compare the EAGLE simulations Ref-L0050N0752 (`Reference') and NoAGN-L0050N0752 (`NoAGN' simulation) at $z=0$ to explore the effect of AGN feedback. Both use a smaller $50^3$~cMpc$^3$ volume and the same particle resolution as Ref-L100N1504. The NoAGN simulation does not include the sub-grid model describing AGN feedback. \cite{2019MNRAS.485.3783D} already compared the simulations by looking at the present-day halo baryon fraction, the sum of the gas and stellar mass divided by the total mass within the radius $R_{200}$. As expected, the halos more massive than $\sim10^{12}~M_\odot$ have higher baryon fractions in the NoAGN simulation because the stellar feedback cannot efficiently push out the gas in massive halos. \begin{figure}[b!] \plottwo{f7a.png}{f7b.png} \caption{The tSZ flux integrated out to $R_{500}$ (left) and $5R_{500}$ (right), for the galaxy samples in EAGLE Ref-L0050N0752 (blue crosses) and NoAGN-L0050N0752 (orange crosses) simulations. The \citetalias{2013A&A...557A..52P} best-fit scaling relation is shown as a dotted line. The shaded green region shows the mass range of the galaxy samples used in the \citetalias{2013A&A...557A..52P} analysis.} \label{fig:tsz_eagle_refl0050} \end{figure} \begin{figure}[t!] \epsscale{0.5} \plotone{f8.png} \caption{$Y_{5R_{500}} / Y_{R_{500}}$ of the EAGLE galaxies in Ref-L0050N0752 and NoAGN-L0050N0752. The dashed line shows the conversion factor 1.796 for the UPP. The blue (Ref-L0050N0752) and orange (NoAGN-L0050N0752) dots connected by the solid lines indicate the median value of the ratios in each $M_{500}$ mass bin.
The mass bins are separated by 0.75 dex between $10^{11.0} \textrm{M}_\odot$ and $10^{13.25} \textrm{M}_\odot$.} \label{fig:tsz_eagle_refl0050_yratio} \end{figure} In Figure~\ref{fig:tsz_eagle_refl0050}, we compare the tSZ flux measured within $R_{500}$ and $5R_{500}$. We do not separate the galaxy types here since we are only interested in the behavior depending on the AGN feedback prescription. As in Figure~\ref{fig:tsz_m500}, the Compton-$y$ parameters integrated within the radius $5R_{500}$ better reproduce the scaling relation of \citetalias{2013A&A...557A..52P}. The tSZ fluxes within $R_{500}$ of the reference simulation seem to be lower than in the NoAGN run, in particular in the mid-mass range between $10^{12}$ and $10^{12.5} \textrm{M}_\odot$ (Figure~\ref{fig:tsz_eagle_refl0050}, left panel). This becomes more evident in Figure~\ref{fig:tsz_eagle_refl0050_yratio}, where we show the ratio of the tSZ fluxes within $R_{500}$ and $5R_{500}$. The solid blue and orange lines indicate the median ratio of the galaxy samples within each mass bin, separated by 0.75 dex between $10^{11.0}$ and $10^{13.25} \textrm{M}_\odot$. For both of the simulations, the ratios approach the value from the UPP as the halo mass increases. The median tSZ flux ratios of the reference run are greater than the NoAGN run, indicating that the hot gas has been pushed out beyond $R_{500}$ by the AGN feedback. The median ratios of the NoAGN simulation samples between $10^{12.5}$ and $10^{13.25} \textrm{M}_\odot$ are nearly consistent with the self-similar value. As the halo mass decreases, the stellar feedback becomes efficient in both simulations, and the ratios deviate rapidly from the self-similar value. \begin{figure}[t!] 
\begin{center} \includegraphics[width=.33\textwidth]{f9a1.png}\hfill \includegraphics[width=.33\textwidth]{f9b1.png}\hfill \includegraphics[width=.33\textwidth]{f9c1.png}\\ \includegraphics[width=.33\textwidth]{f9a2.png}\hfill \includegraphics[width=.33\textwidth]{f9b2.png}\hfill \includegraphics[width=.33\textwidth]{f9c2.png}\\ \includegraphics[width=.33\textwidth]{f9a3.png}\hfill \includegraphics[width=.33\textwidth]{f9b3.png}\hfill \includegraphics[width=.33\textwidth]{f9c3.png}\\ \includegraphics[width=.33\textwidth]{f9a4.png}\hfill \includegraphics[width=.33\textwidth]{f9b4.png}\hfill \includegraphics[width=.33\textwidth]{f9c4.png}\\ \caption{Radial profiles of $10^{11.0}\textrm{M}_\odot < M_{500} < 10^{11.75}\textrm{M}_\odot$ (left column), $10^{11.75}\textrm{M}_\odot < M_{500} < 10^{12.5}\textrm{M}_\odot$ (middle column), and $10^{12.5}\textrm{M}_\odot < M_{500} < 10^{13.25}\textrm{M}_\odot$ (right column) galaxies in EAGLE Ref-L0050N0752 (orange) and NoAGN-L0050N0752 (blue) simulations. We show cumulative tSZ flux, thermal pressure, gas mass density, and temperature from top to bottom. The faint dotted lines are the radial profiles of the individual galaxies. The black crosses in the tSZ radial profile plots (top row) denote the tSZ signal at $R_{500}$ and $5 R_{500}$ for $10^{11.375}$, $10^{12.125}$, and $10^{12.875} \textrm{M}_\odot$ mass halos (left to right, respectively), inferred from the self-similar relation of \citetalias{2013A&A...557A..52P}. The mean $R_{500}$ of each mass bin corresponds to 90, 160, 300 physical kpc, respectively. 
The solid black lines in those plots show spherically integrated tSZ flux as a function of radius, derived from the universal pressure profile \citep{2010A&A...517A..92A}, normalized by the \citetalias{2013A&A...557A..52P} measurement.} \label{fig:tsz_eagle_refl0050_radial} \end{center} \end{figure} In Figure~\ref{fig:tsz_eagle_refl0050_radial}, we present the average radial thermodynamic profiles of the $10^{11.0}\textrm{M}_\odot < M_{500} < 10^{11.75}\textrm{M}_\odot$, $10^{11.75}\textrm{M}_\odot < M_{500} < 10^{12.5}\textrm{M}_\odot$ and $10^{12.5}\textrm{M}_\odot < M_{500} < 10^{13.25}\textrm{M}_\odot$ samples in both of the EAGLE simulations shown above. First of all, we see that the tSZ flux of the two simulations at both $R_{500}$ and $5R_{500}$ is reduced compared to the \citetalias{2013A&A...557A..52P} interpolation as the halo mass decreases, although the flux at $5R_{500}$ better agrees with the UPP profile than at $R_{500}$. As the halo mass increases, the tSZ radial profiles of the Reference and NoAGN runs display a larger discrepancy, especially at $R \lesssim R_{500}$, due to the enhanced AGN feedback. At the highest mass bin, the tSZ profile of the NoAGN simulation is nearly consistent with the UPP profile, implying that gravity is able to overcome feedback and retain the gas close in without AGN feedback. Comparing the gas mass density and the temperature profiles in the middle and the right columns, we observe that the reduction in the flux at $R \lesssim R_{500}$ for the Reference simulation comes from the reduced gas mass density (temperature is comparable or even higher), which is a result of the feedback processes pushing the gas out to larger radii. In the lowest mass bin, all the radial profiles of the NoAGN and Reference simulations are comparable because stellar feedback dominates over AGN feedback in this mass range. 
\section{Conclusions}\label{sec:conclusion} We studied the thermodynamic properties of the CGM using the tSZ signal in TNG, EAGLE, and FIRE-2 simulations. We find that the level of agreement of simulations over the mass range $M_{500} = 10^{11}$ to $3 \times 10^{13} \textrm{M}_\odot$ with a self-similar fit inferred from {\em Planck}\ data down to $M_{500} \sim 10^{13} \textrm{M}_\odot$ ($M_\star \sim 10^{11} \textrm{M}_\odot$) is dependent on the aperture size used for the tSZ measurement. We interpret this dependence as evidence that feedback has a significant impact on CGM thermodynamic properties at $R_{500}$ (significant deviations from self-similarity) but that these effects have subsided by $5R_{500}$ (self-similarity is recovered). We also find that the effect is more pronounced in quiescent than in star-forming galaxies, suggesting that the impact of feedback at $R_{500}$ is still in process for star-forming galaxies while it is complete for quiescent ones. We find that agreement on radial profiles of gas density and thermal pressure between simulations and ACT analysis at $M_{200} \sim 3 \times 10^{13} \textrm{M}_\odot$ ($z \sim 0.55$) is also radius-dependent. The disagreement at large radius is consistent with prior suggestions that sub-grid heating in the simulation at large radius is insufficient. Still, it could also be a sign that feedback in the simulations is too effective at these radii. We also compare the simulated radial profiles of gas density, temperature, and thermal pressure for Milky-way-mass galaxies by separating star-forming and quiescent galaxies. We find support from all three simulations for the above interpretation. We also find from FIRE-2 simulations that the inclusion of cosmic-rays further enhances these effects, presumably because of the non-thermal pressure supplied by the CR. Lastly, we compare EAGLE simulations with and without AGN feedback. 
We find that the impact of AGN on the $Y\!-\!M_{500}$ scaling relation and on the radial profiles is most significant near $M_{500} \sim 2 \times 10^{12} \textrm{M}_\odot$, vanishing at $10^{11} \textrm{M}_\odot$ and present but smaller above $10^{13} \textrm{M}_\odot$. At lower masses, the effect is likely due to the decreasing significance of AGN feedback relative to star-formation feedback with decreasing mass, even while star-formation feedback remains sufficient to cause a deviation from self-similarity. At higher masses, gravity has greater ability to counter the effect of feedback. \section{Future Work}\label{sec:futurework} We will extend this work to simulation studies of the CGM using other multi-wavelength probes. In addition to tSZ measurements, X-ray observations are another tool to characterize the hot gas properties around galaxy groups and clusters \citep[e.g.,][]{2015MNRAS.449.3806A, 2016MNRAS.463.4533V, 2020AN....341..177L}. Similar to the $Y\!-\!M$ relation, the soft X-ray luminosity of galaxy clusters is expected to follow a mass scaling relation. Several observations suggest that the emission is affected by non-gravitational heating, such as AGN feedback \citep[e.g.,][]{2009A&A...498..361P, 2011A&A...536A..10P, 2015MNRAS.449.3806A}. Using the soft X-ray luminosity allows us to study the mass-dependent heating mechanisms around galaxies. Besides tSZ and X-ray imaging, several observational tools have been emerging to explore the hot halo across the electromagnetic spectrum. They include X-ray spectroscopy, which can probe the gas metallicity and thus the chemical composition of the CGM \citep{2021arXiv210211510V}, and fast radio bursts (FRBs), which provide the line-of-sight free electron density through their frequency-dependent dispersion measure \citep[e.g.,][]{2019ApJ...872...88R, 2019MNRAS.485..648P, 2020Natur.581..391M, 2021arXiv210713692C}.
Traditional UV absorption line spectroscopy can also trace the hot gas via high-energy ions such as O VI and Ne VIII. However, none of these methods can individually link the corresponding observation to a complete set of physical properties of the CGM, such as density, temperature, and metallicity. The best opportunity to gather the spatial and thermodynamic information on the hot CGM will come from combining observations across multiple wavelengths. This comprehensive approach will allow us to compare the relative importance of the feedback mechanisms that affect the regulation of star formation at various length and mass scales. In future work, we will address how the CGM can be used to distinguish different feedback models and how the relative importance of those models changes with the physical properties of galaxies. We will also study what combinations of the observational probes give the best understanding of the gas properties and break the degeneracies among them. \software{\texttt{astropy} \citep{2013A&A...558A..33A}, \texttt{numpy} \citep{2020NumPy-Array}, \texttt{matplotlib} \citep{2007CSE.....9...90H}, \texttt{Mop-c GT} (``Model-to-observable projection code for Galaxy Thermodynamics'')\footnote{https://github.com/samodeo/Mop-c-GT}, \texttt{scipy} \citep{2020SciPy-NMeth}}\\ \begin{acknowledgements} We thank Lee Armus for useful discussion and for carefully reading the manuscript. JK is supported by a Robert A. Millikan Fellowship from the California Institute of Technology (Caltech). JK, JGB, SG, NB, and JCH acknowledge support from the Research and Technology Development fund at the Jet Propulsion Laboratory through the project entitled ``Mapping the Baryonic Majority''. NB, EM, and SA acknowledge support from NSF grant AST-1910021. Support for PFH was provided by NSF Research Grants 1911233 \&\ 20009234, NSF CAREER grant 1455342, and NASA grants 80NSSC18K0562 and HST-AR-15800.001-A.
Numerical calculations were run on the Caltech compute cluster ``Wheeler,'' allocations FTA-Hopkins/AST20016 supported by the NSF and TACC, and NASA HEC SMD-16-7592. JCH acknowledges support from NSF grant AST-2108536. We acknowledge the Virgo Consortium for making their simulation data available. The EAGLE simulations were performed using the DiRAC-2 facility at Durham, managed by the ICC, and the PRACE facility Curie based in France at TGCC, CEA, Bruy\`{e}res-le-Ch\^{a}tel. A portion of the research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (NASA). This research has made use of NASA's Astrophysics Data System Bibliographic Services. \end{acknowledgements} \newpage
\section{Introduction} Nano-mechanical resonators are actively and widely investigated \cite{prm,rew_nm}. Their importance for ultra-light mass detection, ultra-small displacement or very small force measurements is now well recognized \cite{rew_nm} (and references therein). Significant progress has been made towards macroscopic quantum effects in these systems and in their combinations with existing ones. For instance, superconducting qubit storage and entanglement with nano-mechanical resonators was investigated in \cite{qb1}, while feedback-enhanced parametric squeezing of mechanical motion was investigated in Ref.~\cite{sq1}. Observation of quantum motion of a nano-mechanical resonator was reported in \cite{exp_m}. Coherent phonon manipulation in coupled mechanical resonators was experimentally demonstrated in \cite{man1}, and this may help in entangling two distinct macroscopic mechanical objects \cite{ent1}. A nano-mechanical interface between optical photons and microwave electrical signals was recently demonstrated as well \cite{inter}. Furthermore, the dipole-dipole coupling within a small ensemble of Rydberg atoms allows for quantum control of macroscopic mechanical oscillators via their mutual interactions \cite{mey}, whereas stationary continuous-variable entanglement between an optical cavity and a nano-mechanical resonator beam was demonstrated in Ref.~\cite{ent2}. Finally, phonon lasing was experimentally achieved in an electromechanical resonator \cite{las1}. Here, we investigate a system composed of a two-level quantum dot fixed on a suspended nano-mechanical resonator inside an optical cavity. The quantum dot is externally laser pumped. Spontaneous decay as well as the damping of the optical and vibrational modes are taken into account. We are interested in a regime where the quantum dot dynamics is faster than the dynamics of the other subsystems involved, i.e. in the good cavity limit.
In this way, one can reduce the whole system to the quantum dynamics of the two modes, vibrational and optical. We find quantum correlations between the optical and vibrational quanta for sufficiently strong qubit-resonator coupling strengths. In particular, we demonstrate the violation of the Cauchy-Schwarz inequality, defined as the product of the photon-photon and phonon-phonon second-order correlation functions divided by the photon-phonon cross-correlation function squared. Moreover, the environmental temperature, which explicitly affects only the mechanical subsystem, also modifies the mean photon number in the optical resonator mode, demonstrating the existence of correlations between the photon and phonon modes. The article is organized as follows. In Sec. II we describe the model as well as the analytical approach used. The master equation characterizing the induced correlations between the optical and phonon modes is derived there. Sec. III deals with the corresponding equations of motion and a discussion of the obtained results. The Summary is given in the last section, i.e. Sec. IV. \section{Analytical approach} The investigated model is described as follows: A two-level quantum dot of frequency $\omega_{0}$ is fixed on a semiconductor beam structure inside an optical cavity. A coherent laser field with wave-vector $\vec k_{L}$ interacts resonantly with the two-level quantum dot, leading to correlations between mechanical vibrations and photon scattering. If the thickness of the beam is smaller than its width, the lowest-energy resonance corresponds to the fundamental flexural mode, with a frequency of the order of $\rm{GHz}$. Flexions induce extensions and compressions in the structure, modifying the deformation potential coupling of the embedded quantum dot and, hence, its energy levels \cite{prm1}. Concomitantly, the artificial two-level emitter interacts with the optical cavity mode.
The vibrational and optical mode frequencies are denoted by $\omega$ and $\omega_{c}$, respectively. The model Hamiltonian describing the whole system is: \begin{eqnarray} H &=& \hbar\omega_{c}a^{\dagger}a + \hbar\omega b^{\dagger}b + \hbar\omega_{0}S_{z} + \hbar g(a^{\dagger}S^{-} + aS^{+}) \nonumber \\ &+& \hbar \Omega(S^{+}e^{-i\omega_{L}t} + S^{-}e^{i\omega_{L}t}) + \hbar\lambda S_{z} (b^{\dagger} + b). \label{Hm} \end{eqnarray} Here, the first three terms describe the free energies of the optical and mechanical modes as well as of the artificial two-level system. The fourth and the fifth terms characterize the interaction of the quantum dot with the optical resonator mode and the laser field, respectively. The last term takes into account the interaction of the vibrational degrees of freedom with the radiator \cite{prm1}. $g$ and $\lambda$ denote the interaction strengths between the two-level emitter and the involved optical and mechanical modes, while $\Omega$ is the corresponding Rabi frequency due to the external laser pumping. The qubit operators $S_{z}$ and $S^{\pm}$ have the usual meaning and satisfy the standard commutation relations. $\{a^{\dagger}, b^{\dagger}\}$ and $\{a,b\}$ are the creation and annihilation operators for the photon and phonon subsystems, respectively, and obey the boson commutation relations \cite{kmek,book}. The entire system will be described in the dressed-state representation \cite{book,DM}: $|g\rangle=\sin{\theta}|+\rangle + \cos{\theta}|-\rangle$ and $|e\rangle=\cos{\theta}|+\rangle - \sin{\theta}|-\rangle$, where $\cot{2\theta}=\Delta/2\Omega$ with $\Delta=\omega_{0}-\omega_{L}$ being the detuning of the laser frequency $\omega_{L}$ from the two-level transition frequency. $|e\rangle$ and $|g\rangle$ are the excited and the ground bare states of the quantum dot, while $|+\rangle$ and $|-\rangle$ are the corresponding states in the dressed-state picture.
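As a quick cross-check, the mixing angle and the dressed energies follow from diagonalizing the qubit part of the Hamiltonian~(\ref{Hm}) in the frame rotating at the laser frequency $\omega_{L}$; assuming the usual convention $S_{z}=\bigl(|e\rangle\langle e|-|g\rangle\langle g|\bigr)/2$, one has, in the bare basis $\{|e\rangle,|g\rangle\}$, $$ H_{q}=\hbar\Delta S_{z}+\hbar\Omega\bigl(S^{+}+S^{-}\bigr)\doteq\frac{\hbar}{2}\left(\begin{array}{cc}\Delta & 2\Omega\\ 2\Omega & -\Delta\end{array}\right), $$ whose eigenvalues are $\pm\hbar\sqrt{(\Delta/2)^{2}+\Omega^{2}}=\pm\hbar\Omega_{R}$ and whose eigenstates $|\pm\rangle$ are mixed at an angle $\theta$ obeying $\tan{2\theta}=2\Omega/\Delta$, i.e. $\cot{2\theta}=\Delta/2\Omega$ as quoted above. In the dressed basis, $H_{q}$ thus reduces to the term $\hbar\Omega_{R}R_{z}$ of Eq.~(\ref{H0I}) below.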
In the interaction picture, the master equation describing our model in the rotating-wave and dipole approximations, as well as in the Born-Markov approximation, is: \begin{eqnarray} \dot{\rho}&+&\frac{i}{\hbar}[H_{d},\rho]=-\gamma_{0}[R_{z},R_{z}\rho]-\gamma_{+} [R_{\pm},R_{\mp}\rho]\nonumber\\ &-&\gamma_{-}[R_{\mp},R_{\pm}\rho] - \kappa_{a}[a^\dagger,a\rho] - \kappa_{b}(1+\bar{n})[b^\dagger,b\rho]\nonumber\\ &-&\kappa_{b}\bar{n}[b,b^\dagger\rho] + H.c.,\label{me_pm} \end{eqnarray} where an overdot denotes differentiation with respect to time. Here, the dressed-state Hamiltonian is $ H_{d}=H_{0}+H_{i}$ with: \begin{eqnarray} H_{0}&=&\hbar\Omega_R R_z-\hbar\Delta_{1}a^{\dagger}a + \hbar\omega b^{\dagger}b, \nonumber \\ H_{i}&=&{A} R_z+{B}^{\dagger}R_{\mp}e^{-2i\Omega_{R}t}+{B}R_{\pm}e^{2i\Omega_{R}t}, \label{H0I} \end{eqnarray} where $\Delta_{1}=\omega_{L} - \omega_{c}$ and $\Omega_{R} = \sqrt{(\Delta/2)^2 + \Omega^2}$, whereas \begin{eqnarray} {A}&=&\frac{\hbar}{2}\bigl( g(ae^{i\Delta_{1}t}+a^{\dagger}e^{-i\Delta_{1} t})\sin{2\theta}\nonumber\\ &+&\lambda(be^{-i\omega t}+b^{\dagger}e^{i\omega t})\cos{2\theta}\bigr), \nonumber\\ {B}&=&\hbar g(ae^{i\Delta_{1}t}\cos^{2}{\theta}-a^{\dagger}e^{-i\Delta_{1}t}\sin^{2}{\theta})\nonumber\\ &-&\frac{\hbar}{2}\lambda \sin{2 \theta}(be^{-i\omega t}+b^{\dagger}e^{i\omega t}). \end{eqnarray} The dressed-state quantum dot operators are defined as follows: $R_{z}=|+\rangle \langle+| - |-\rangle \langle-|$, $R_{\pm}=|+\rangle \langle-|$ and $R_{\mp}=|-\rangle \langle+|$, satisfying the standard commutation relations of the su(2) algebra.
Further, $\gamma_{0}$=$\frac{1}{4}(\gamma \sin^2{2\theta} +\gamma_{c}\cos^2{2\theta})$, $\gamma_{+}$=$\gamma \cos^4{\theta} + \frac{\gamma_{c}}{4}\sin^{2}{2\theta}$ and $\gamma_{-}=\gamma \sin^4{\theta} + \frac{\gamma_{c}}{4}\sin^{2}{2\theta}$ describe the spontaneous decay processes among the involved dressed states, while $\gamma$ and $\gamma_c$ are the single-qubit spontaneous decay and dephasing rates, respectively. $\kappa_{a}(\kappa_{b})$ is the photon (phonon) resonator damping rate, whereas $\bar n$ is the mean phonon number corresponding to the vibrational frequency $\omega$ and the environmental temperature $T$. The master equation (\ref{me_pm}) is quite complex. However, for our purposes one can significantly simplify it. In particular, we are interested in a regime where the pumped quantum-dot dynamics is faster than that of the cavity photon and vibrational phonon subsystems. Consequently, the quantum dot variables can be eliminated from the whole quantum dynamics, an approximation valid for $\Omega \gg \gamma \gg \kappa_{a,b}$ as well as $\Omega \gg \{ g, \lambda \}$. Notice that this approach is widely used in other, closely related systems \cite{zb1,sc_zb,chk,gxl}. In the following, we write down the master equation (\ref{me_pm}) after tracing over the quantum-dot degrees of freedom: \begin{eqnarray} \dot{\rho}_f&=&-i[{A},\rho_{++}-\rho_{--}]-i[{B}^\dagger,\rho_{+-}]e^{-2i\Omega_R t}\nonumber\\ &-&i[{B},\rho_{-+}]e^{2i\Omega_R t} - \kappa_{a}[a^\dagger,a\rho_{f}]\nonumber\\ &-&\kappa_{b}(1+\bar{n})[b^\dagger,b\rho_{f}] - \kappa_{b}\bar{n}[b,b^\dagger\rho_{f}] + H.c., \label{rf} \end{eqnarray} where $\rho_{\alpha\beta}=\langle \alpha|\rho_{q}|\beta\rangle \rho_{f}$, with $\alpha,\beta \in \{+,-\}$.
The quantum dot variables can be found from Eq.~(\ref{me_pm}), namely: \begin{eqnarray} \dot{\rho}_{+-}&=&-\Gamma_{\perp}\rho_{+-}-i({A}\rho_{+-} + \rho_{+-}{A})\nonumber\\ &-&i({B}\rho_{--}-\rho_{++}{B})e^{2i\Omega_{R}t},\nonumber\\ \dot{\rho}_{++}&=&-2\gamma_{+}\rho_{++}+2\gamma_{-}\rho_{--}-i({A}\rho_{++}-\rho_{++}{A})\nonumber\\ &-&i({B}\rho_{-+}e^{2i\Omega_R t}-\rho_{+-}{B}^{\dagger}e^{-2i\Omega_{R}t}),\nonumber\\ \dot{\rho}_{--}&=&2\gamma_{+}\rho_{++} -2\gamma_{-}\rho_{--} + i({A}\rho_{--} - \rho_{--}{A})\nonumber\\ &+&i(\rho_{-+}{B} e^{2i\Omega_{R}t} - {B}^{\dagger}\rho_{+-}e^{-2i\Omega_{R}t})\label{rho+-}, \label{eks} \end{eqnarray} with $\Gamma_{\perp}=4\gamma_0+\gamma_{+}+\gamma_{-}$. In the secular approximation and to first order in the interaction parameters $\{g,\lambda\}$ the solutions of (\ref{eks}) are: \begin{eqnarray} &&\rho_{+-}=-i e^{2i\Omega_R t}(\bar{B}\rho_{--}-\rho_{++}\bar{B}), \nonumber\\ &&\rho_{++}-\rho_{--}=- i(\bar{A}\rho_f-\rho_f\bar{A}), \label{esol} \end{eqnarray} where \begin{eqnarray*} \bar{A}&=&\frac{g}{2}\biggl(\frac{a^\dagger e^{-i\Delta_1 t}}{\Gamma_{\shortparallel} -i\Delta_{1}}+\frac{a e^{i\Delta_1 t}} {\Gamma_{\shortparallel}+i\Delta_{1}} \biggr)\sin{2\theta}\nonumber\\ &+&\frac{\lambda}{2}\biggl(\frac{b^\dagger e^{i\omega t}}{\Gamma_{\shortparallel} +i\omega}+\frac{b e^{-i\omega t}} {\Gamma_{\shortparallel}-i\omega}\biggr)\cos{2\theta},\nonumber\\ \bar{B}&=&-\frac{\lambda}{2}\biggl(\frac{\sin{2\theta}b^\dagger e^{i\omega t}}{\Gamma_{\perp}+i(2 \Omega_{R} + \omega)} + \frac{\sin{2\theta}b e^{-i\omega t}}{\Gamma_{\perp} + i(2 \Omega_{R} - \omega)}\biggr)\nonumber\\ &+&g\biggl( \frac{a \cos^2{\theta} e^{i\Delta_1 t}}{\Gamma_{\perp}+i(2 \Omega_R+\Delta_1)}-\frac{a^\dagger\sin^2{\theta} e^{-i\Delta_1 t}}{\Gamma_{\perp}+i(2 \Omega_R-\Delta_1)} \biggr). \end{eqnarray*} Here $\Gamma_{\shortparallel} = \gamma(1 + \cos^{2}{2\theta})+\gamma_{c}\sin^2{2\theta}$. 
Inserting the solutions given by (\ref{esol}) in Eq.~(\ref{rf}) and using the identity ${\rm Tr}\{\dot{\rho}(t)Q\}={\rm Tr}\{\dot Q(t)\rho\}$, one can obtain the final master equation describing the quantum dynamics of the photon and phonon subsystems in the good cavity limit: \begin{eqnarray} &{}&\langle\dot{Q}\rangle + \frac{i}{2}(\Delta_1-\omega)\langle[a^\dagger a+b^\dagger b,Q]\rangle=\nonumber\\ &=&\langle[Q,a](-A^{*}_{1}a^{\dagger}+D^{*}_{2}b)+(B^{*}_{1}a^{\dagger}-C^{*}_{2}b)[Q,a]\rangle\nonumber\\ &+&\langle[Q,a^{\dagger}](-B_{1}a+C_{2}b^{\dagger})+(A_{1}a-D_{2}b^{\dagger})[Q,a^{\dagger}]\rangle\nonumber\\ &+&\langle[Q,b](D^{*}_{1}a-A^{*}_{2}b^{\dagger})+(B^{*}_{2}b^{\dagger}-C^{*}_{1}a)[Q,b]\rangle\nonumber\\ &+&\langle[Q,b^{\dagger}](C_{1}a^{\dagger}-B_{2}b)+(A_{2}b - D_{1}a^{\dagger})[Q,b^{\dagger}]\rangle. \label{Qq} \end{eqnarray} Here $Q$ denotes any operator belonging to the photon and phonon subsystems, while \begin{eqnarray*} A_{1}&=&\frac{1}{4}\frac{g^2 \sin ^2{2 \theta}}{\Gamma_{\shortparallel} +i \Delta_{1} }+\frac{g^2 P_{-} \sin ^4{\theta}}{\Gamma_{\perp} -i (2 \Omega_{R} -\Delta_{1} )}\nonumber\\ &+&\frac{g^2 P_{+} \cos ^4{\theta}}{\Gamma_{\perp} + i (2 \Omega_{R}+\Delta_{1} )}, \nonumber\\ A_{2}&=&\frac{1}{4}\bigg(\frac{\lambda^2 \cos^{2}{2 \theta}}{\Gamma_{\shortparallel} -i \omega }+\frac{\lambda^2 P_{-} \sin^2{2\theta}}{\Gamma_{\perp} -i (2 \Omega_{R} +\omega )}\nonumber\\ &+&\frac{\lambda^2 P_{+} \sin ^2{2 \theta}}{\Gamma_{\perp} + i(2 \Omega_{R}-\omega )}\bigg)+\kappa_{b}\bar{n}, \nonumber\\ C_{1}&=&\frac{P_{+}}{2}\frac{g\lambda \sin{2 \theta}\cos^2{\theta}}{\Gamma_{\perp} -i (2\Omega_{R}+\Delta_{1}) } -\frac{P_{-}}{2}\frac{g\lambda \sin{2 \theta}\sin^2{\theta}}{\Gamma_{\perp}+ i (2\Omega_{R}-\Delta_{1})} \nonumber\\ &-&\frac{1}{4}\frac{g\lambda \sin{2 \theta}\cos{2\theta}}{\Gamma_{\shortparallel} -i\Delta_{1}}, \nonumber\\ C_{2}&=&\frac{P_{-}}{2}\frac{g\lambda \sin{2 \theta}\cos^2{\theta}}{\Gamma_{\perp} +i
(2\Omega_{R}+\omega) } -\frac{P_{+}}{2}\frac{g\lambda \sin{2 \theta}\sin^2{\theta}}{\Gamma_{\perp}- i (2\Omega_{R}-\omega)}\nonumber\\ &-&\frac{1}{4}\frac{g\lambda \sin{2 \theta}\cos{2\theta}}{\Gamma_{\shortparallel} + i\omega }, \end{eqnarray*} with \begin{eqnarray*} P_{+} = \frac{\gamma_-}{\gamma_++\gamma_-}, ~~{\rm and}~~ P_{-}=\frac{\gamma_+}{\gamma_++\gamma_-}. \end{eqnarray*} $B_{i}$ can be obtained from $A_{i}$ via $P_{\mp} \leftrightarrow P_{\pm}$ as well as by adding $\kappa_{a}$ to $B_{1}$ and $\kappa_{b}$ to $B_{2}$, correspondingly. Similarly, $D_{i}$ can be obtained from $C_{i}$ through $P_{\mp} \leftrightarrow P_{\pm}$, with $i \in \{1,2\}$. Notice that in obtaining Eq.~(\ref{Qq}) we have ignored rapidly oscillating terms at the frequencies $\pm 2\Delta_{1}$, $\pm (\Delta_{1}+ \omega)$ and $\pm 2\omega$. We are interested in a regime where the absorption of a laser photon is accompanied by the generation of a phonon and an optical cavity photon, that is, when $\Delta_{1} \approx \omega$. In the following section, we shall describe the induced quantum correlations among the optical and mechanical degrees of freedom. \section{Quantum correlations} Eq.~(\ref{Qq}) allows us to obtain the equations of motion for the variables of interest. We can define $Q=a^{\dagger j} a^{k} b^{\dagger l} b^{m}$ (where $j,k,l,m$ are non-negative integers) as a general operator belonging to both the photon and phonon subsystems. The equations of motion for the mean values of the photon and phonon numbers etc.
can be then obtained from the following main equation: \begin{eqnarray} &&\frac{d}{dt}\langle a^{\dagger j} a^k b^{\dagger l} b^m \rangle=\langle a^{\dagger j} a^k b^{\dagger l} b^m \rangle \nonumber\\ &\times&\bigr((A_{1}^*-B_{1}^*)j+(A_{1}-B_{1})k+(A_{2}^*-B_{2}^*)l\nonumber\\ &+&(A_{2}-B_{2})m-\frac{i}{2}(\Delta_1-\omega)(j-k+l-m)\big)\nonumber\\ &+&\langle a^{\dagger j+1} a^k b^{\dagger l} b^{m-1}\rangle(C_{1}-D_{1})m\nonumber\\ &+&\langle a^{\dagger j-1} a^k b^{\dagger l} b^{m+1}\rangle(C_{2}^*-D_{2}^*)j\nonumber\\ &+&\langle a^{\dagger j} a^{k+1} b^{\dagger l-1} b^{m}\rangle(C_{1}^*-D_{1}^*)l\nonumber\\ &+&\langle a^{\dagger j} a^{k-1} b^{\dagger l+1} b^{m}\rangle(C_{2}-D_{2})k\nonumber\\ &+&\langle a^{\dagger j-1} a^k b^{\dagger l-1} b^{m}\rangle(C_{1}^*+C_{2}^*)jl\nonumber\\ &+&\langle a^{\dagger j-1} a^{k-1} b^{\dagger l} b^{m}\rangle(A_{1}+A_{1}^*)jk\nonumber\\ &+&\langle a^{\dagger j} a^{k-1} b^{\dagger l} b^{m-1}\rangle(C_{1}+C_{2})km\nonumber\\ &+&\langle a^{\dagger j} a^k b^{\dagger l-1} b^{m-1}\rangle(A_{2}+A_{2}^*)lm. 
\label{aabb} \end{eqnarray} For instance, by selecting instead of $\{j,k,l,m\}$ the following sets: $\{1,1,0,0\},\ \{0,0,1,1\}, \{0,1,0,1\},\{1,0,1,0\}$ one arrives at the equations of motion for the mean-values of the photon and phonon numbers and their first-order correlations, namely: \begin{eqnarray} \frac{d}{dt}\langle a^{\dagger} a \rangle& = &\langle a^{\dagger} a\rangle(A_{1} - B_{1} + A_{1}^{*} - B_{1}^{*}) + \langle ab\rangle(C_{2}^{*} - D_{2}^{*}) \nonumber\\ &+&\langle a^{\dagger} b^{\dagger}\rangle (C_{2} - D_{2}) + A_{1} + A_{1}^{*}, \nonumber\\ \frac{d}{dt}\langle b^{\dagger} b\rangle& = &\langle b^{\dagger} b\rangle(A_{2} - B_{2} + A_{2}^{*} - B_{2}^{*}) + \langle ab\rangle(C_{1}^{*} - D_{1}^{*})\nonumber\\ &+&\langle a^{\dagger}b^{\dagger}\rangle (C_{1} - D_{1}) + A_{2} + A_{2}^{*},\nonumber\\ \frac{d}{dt}\langle ab\rangle &=& \langle ab\rangle \bigl(A_{1} - B_{1} + A_{2} - B_{2} + i(\Delta_{1} - \omega)\bigr)\nonumber\\ &+& \langle a^{\dagger}a\rangle(C_{1} - D_{1}) + \langle b^{\dagger}b\rangle (C_{2} - D_{2}) + C_{1} + C_{2}, \nonumber\\ \frac{d}{dt}\langle a^{\dagger}b^{\dagger}\rangle &=& \langle a^{\dagger} b^{\dagger}\rangle\bigl(A_{1}^{*} - B_{1}^{*} + A_{2}^{*} - B_{2}^{*} - i(\Delta_{1} - \omega)\bigr)\nonumber\\ &+&\langle a^{\dagger} a\rangle(C_{1}^{*} - D_{1}^{*}) + \langle b^{\dagger} b\rangle (C_{2}^{*} - D_{2}^{*}) + C_{1}^{*} + C_{2}^{*}. 
\label{eksm} \end{eqnarray} Additionally, the equations of motion for the second-order correlation functions and their cross-correlations, that is: \begin{eqnarray} g_{1}^{(2)}(0)&=&\frac{\langle{a^{\dagger} a^{\dagger} a a }\rangle}{\langle a^{\dagger}a\rangle^2}, ~~~ g_{2}^{(2)}(0)=\frac{\langle{b^{\dagger} b^{\dagger} b b }\rangle}{\langle b^{\dagger}b\rangle^2}, \nonumber\\ g_{3}^{(2)}(0)&=&\frac{\langle{a^{\dagger} a b^{\dagger} b }\rangle}{\langle a^{\dagger}a\rangle \langle b^{\dagger}b\rangle}, \label{gg} \end{eqnarray} can be obtained again from Eq.~(\ref{aabb}) by considering, respectively, the following sets of numbers: $\{2,2,0,0\}$, $\{0,0,2,2\}$, $\{1,1,1,1\}$, $\{1,0,2,1\}$, $\{0,1,1,2\}$, $\{2,1,1,0\}$, $\{1,2,0,1\}$, $\{2,0,2,0\}$, $\{0,2,0,2\}$ instead of $\{j,k,l,m\}$. An efficient tool to investigate the quantum features of the correlations between photons and phonons is the Cauchy-Schwarz inequality, CSI, which is defined as follows \cite{csi}: \begin{eqnarray} {\rm CSI} = g^{(2)}_{1}(0)g^{(2)}_{2}(0)/[g^{(2)}_{3}(0)g^{(2)}_{3}(0)]. \label{csi} \end{eqnarray} In particular, the induced correlations are of quantum nature if ${\rm CSI} < 1$. \begin{figure}[t] \centering \includegraphics[height=5.3cm]{Fig1.eps} \caption{\label{fig-1} (color online) The mean value of the phonon number $\langle b^{\dagger}b\rangle$ as a function of $\Delta_{1}/\gamma$. Here, $\gamma_{c}/\gamma=0.3$, $g/\gamma=3$, $\lambda/\gamma=5$, $\Omega/\gamma=50$, $\omega/\gamma=50$, $\Delta/(2\Omega)=-0.263$, $\kappa_{a}/\gamma=0.09$ and $\kappa_{b}/\gamma=0.009$. The solid line corresponds to $\bar n=2$, whereas the dashed one to $\bar n=0.5$. The inset shows the same but for the mean photon number $\langle a^{\dagger}a\rangle$.} \end{figure} \begin{figure}[t] \centering \includegraphics[height=5.4cm]{Fig2.eps} \caption{\label{fig-2} (color online) The steady-state value of the CSI versus $\Delta_{1}/\gamma$. The solid line corresponds to $\bar n=2$, whereas the long-dashed one to $\bar n=0.5$. Violation of the CSI occurs below the horizontal dashed line. Other parameters are the same as in Fig.~{(\ref{fig-1})}.} \end{figure} Figure~(\ref{fig-1}) shows the steady-state values of the mean phonon and photon numbers for particular parameters close to those already considered in other setups \cite{prm,rew_nm,ent2,prm1,DM,prm2}. For stronger qubit-cavity couplings one can observe a peak of these quantities around $\Delta_{1} \approx \omega$, i.e. where the absorption of a laser photon is accompanied by the generation of a phonon and an optical cavity photon, while $\Delta_{1} \not = \Delta$. A frequency shift, occurring due to nonlinear effects existing in the dispersive limit applied here (see, also, \cite{zb1,sc_zb,chk,gxl}), is responsible for the resonance lying around $\omega$ and not exactly at this value. Furthermore, the environmental temperature, which explicitly influences the vibrational degrees of freedom, implicitly affects also the mean photon number in the optical resonator mode (compare the solid and dashed curves), thus demonstrating the existence of correlations among them. The quantum nature of these correlations can be demonstrated via the violation of the CSI given by Eq.~(\ref{csi}). Therefore, the steady-state behavior of the CSI versus $\Delta_{1}/\gamma$ is shown in Fig.~(\ref{fig-2}). The CSI is violated, i.e. ${\rm CSI} <1$, near $\Delta_{1} \approx \omega$, where the mean values of the photon and phonon numbers are maximal. Notice that the quantum features of these correlations disappear for $|\Delta_{1} - \omega| \gg \gamma$. This can also be seen from the master equation (\ref{Qq}), where the terms responsible for the cross-correlations among the two different modes simply vanish in this regime, i.e. when $|\Delta_{1} - \omega|/\gamma \gg 1$.
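To illustrate how the closed linear system (\ref{eksm}) is handled in practice, the short sketch below casts the four first-order moments into the form $\dot{v}=Mv+c$ and solves for the steady state, $v_{ss}=-M^{-1}c$. The numerical values assigned to $A_{i}$, $B_{i}$, $C_{i}$, $D_{i}$ are placeholders chosen only so that the system is stable; they are not the parameter set used for the figures.

```python
import numpy as np

# Placeholder (assumed) coefficients standing in for A_i, B_i, C_i, D_i of
# Eq. (eksm); these are NOT the values behind the figures, only a stable toy choice.
A1, B1 = 0.02 + 0.01j, 0.10 + 0.0j
A2, B2 = 0.05 + 0.02j, 0.12 + 0.0j
C1, D1 = 0.03 - 0.01j, 0.01 + 0.0j
C2, D2 = 0.02 + 0.01j, 0.01 - 0.01j
delta = 0.0  # Delta_1 - omega (exact two-mode resonance)

# Variable ordering: v = (<a^+a>, <b^+b>, <ab>, <a^+b^+>)
M = np.array([
    [2 * (A1 - B1).real, 0.0, np.conj(C2 - D2), C2 - D2],
    [0.0, 2 * (A2 - B2).real, np.conj(C1 - D1), C1 - D1],
    [C1 - D1, C2 - D2, (A1 - B1) + (A2 - B2) + 1j * delta, 0.0],
    [np.conj(C1 - D1), np.conj(C2 - D2), 0.0,
     np.conj((A1 - B1) + (A2 - B2)) - 1j * delta],
], dtype=complex)
c = np.array([2 * A1.real, 2 * A2.real, C1 + C2, np.conj(C1 + C2)])

# Steady state of dv/dt = M v + c:
v_ss = -np.linalg.solve(M, c)
n_a, n_b = v_ss[0].real, v_ss[1].real  # mean photon and phonon numbers
```

With this toy choice all eigenvalues of $M$ have negative real parts, so $v_{ss}$ is the unique attractor of the dynamics; by construction $\langle a^{\dagger}b^{\dagger}\rangle_{ss}$ comes out as the complex conjugate of $\langle ab\rangle_{ss}$, and the mean occupations are real.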
\section{Summary} In summary, we have studied the quantum nature of the induced correlations among mechanical and optical degrees of freedom. A two-level quantum dot leads to correlated vibrations when fixed on a nano-mechanical resonator beam while interacting with an external coherent laser field as well as with an optical mode of a leaking resonator. When the variables describing the pumped quantum dot are faster than those of the other involved subsystems, and the detuning of the laser frequency from the cavity frequency lies near the vibrational mode frequency, we have found quantum correlations between the photon and phonon subsystems. Notably, stronger qubit-cavity coupling strengths are required; these can also be achieved by involving additional two-level artificial qubits. \acknowledgments We are grateful for financial support via the research Grant No. 13.820.05.07/GF.
\section{Introduction} F-theory \cite{Vafa:1996xn} is a powerful tool to engineer gauge theories appearing in type IIB string theory and in the heterotic string theory using the geometry of elliptic fibrations \cite{Morrison:1996na, Bershadsky:1996nh}. When the structure of an elliptic fibration is seen through the eyes of string theory, the constraints coming from physics lead to surprising new mathematical results on the structure of elliptic fibrations \cite{FMW, KLRY, AE1,AE2,EY, EFY, GM1, GM2,Braun:2011ux, Park:2011ji, Fullwood:2011bf, Morrison:2011mb, Morrison:2012js,Taylor:2012dr,Morrison:2012ei}. Although F-theory is defined for elliptically fibered Calabi-Yau varieties, many of the mathematical results obtained in F-theory apply in a larger setup, without restrictions on the dimension of the elliptic fibration and without assuming the Calabi-Yau condition \cite{AE1,AE2,GM2}. One fascinating aspect of the F-theory approach is that it provides a window into non-perturbative aspects of type IIB string theory. This is because the elliptic fibration implements geometrically several non-perturbative aspects of S-duality. In type IIB string theory, S-duality changes the value of the string coupling constant and can relate weak and strong couplings. Understanding the connection between the strongly coupled regime of F-theory and weakly coupled type IIB string theory is a central theme in F-theory \cite{Denef.LH}. Sen has provided a beautiful description of a limit of an elliptic fibration which results in a type IIB orientifold theory at weak coupling \cite{Sen:1996vd,Sen.Orientifold}. The orientifold theory is defined on the double cover $X\rightarrow B$ of the base over which the elliptic curve is fibered: $$ X:\xi^2 =h. $$ Sen's limit was originally defined essentially for elliptic fibrations in the Weierstrass form.
A systematic way to describe the links between the orientifold theory obtained in Sen's weak coupling limit and the geometry of the elliptic fibration was described in \cite{CDE}. One would like to be able to uplift a compactification in type IIB to F-theory in order to understand its strong coupling behavior. Such an effort relies on exploiting properties of a given weak coupling limit. Progress on this has been made in the past few years, based essentially on Sen's limit \cite{Collinucci:2008zs}, \cite{Collinucci:2009uh,Blumenhagen:2009up}. Generalizations of the weak coupling limit to other models of elliptic fibrations were later obtained in \cite{AE2, EFY}. These generalizations are based on a geometric reformulation of Sen's limit in terms of a transition from semi-stable to unstable singular fibers \cite{AE2}. These new limits illustrate, among other things, the non-uniqueness of the weak coupling limit in F-theory \cite{AE2, EFY}. Different limits of the same F-theory model can lead to very different configurations at weak coupling. Reciprocally, a given type IIB model can admit several nonequivalent uplifts to F-theory; see \cite{Braun:2009wh} for explicit examples. Recently, F-theory has been used to obtain local models of Grand Unified Theories (GUTs), starting from the papers \cite{Donagi:2008ca,Beasley:2008dc,Beasley:2008kw,Hayashi:2009ge}. Global completions of GUT models have also been intensively studied \cite{Andreas:2009uf,Blumenhagen:2009yv,Grimm:2009yu, Marsano:2009gv, Cvetic:2010rq,Marsano:2011hv,Collinucci:2009uh, Tatar:2012tm, Marsano:2012yc}, with special focus on $\mathop{\rm SU}(5)$ configurations. See \cite{Weigand:2010wm} for a review on the subject and a more complete list of references.
In F-theory, there are specific ans\"atze that provide a realization of a particular gauge group using a generalized Weierstrass model with coefficients vanishing along the divisor of interest up to certain multiplicities directly inspired by Tate's algorithm \cite{Bershadsky:1996nh, Katz:2011qp}. For that reason, such ans\"atze are usually called {\em Tate forms} in the F-theory literature. The restrictions to put a given elliptic fibration with a given singularity type into a Tate form have been analyzed recently in \cite{Katz:2011qp}, where some obstructions have been noticed for certain groups ($\mathop{\rm SU}(m)$ with $6\leq m\leq 9$, $\mathop{\rm {}Sp}(3)$, $\mathop{\rm {}Sp}(4)$, $\mathop{\rm SO}(13)$, $\mathop{\rm SO}(14)$) or in the presence of certain matter representations (such as the 2-symmetric representation of $\mathop{\rm SU}(m)$). Normal forms for local equations for classical groups were also given in \cite{Katz:2011qp}. Donagi and Wijnholt have proposed a realization of Sen's limit for elliptic fibrations in Tate forms \cite{Donagi:2009ra}. For a generalized Weierstrass model: $$ y^2 z + a_1 xyz + a_3 y z^2 = x^3 + a_2 x^2 z + a_4 x z^2 + a_6 z^3, $$ Donagi and Wijnholt use the following ansatz: $$ \text{Donagi-Wijnholt} \begin{cases} a_3\rightarrow \epsilon a_3\\ a_4\rightarrow \epsilon a_4\\ a_6\rightarrow \epsilon^2 a_6 \end{cases} $$ We will refer to this limit as the {\em DW limit}. It provides a simple realization of Sen's limit for a generalized Weierstrass model. As we will see in section \ref{DWAnsatz}, the DW limit reduces to Sen's limit when the generalized Weierstrass equation is reduced to a short Weierstrass form. The DW limit is consistent with the Tate form of a nodal curve, as it has $a_3=a_4=a_6=0$ in the limit $\epsilon\rightarrow 0$. It applies in particular to models in which a gauge group is implemented using a Tate form. For every nonzero value of $\epsilon$, it preserves the Tate form and therefore the gauge group.
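To see why the DW ansatz reproduces Sen's limit, one can track the standard combinations $b_2=a_1^2+4a_2$, $b_4=a_1a_3+2a_4$ and $b_6=a_3^2+4a_6$; the following is only a sketch of this well-known computation. Under the DW rescaling, $$ b_2\rightarrow b_2\, , \qquad b_4\rightarrow \epsilon\, b_4\, , \qquad b_6\rightarrow \epsilon^2\, b_6\, , $$ so the discriminant of the fibration behaves as $$ \Delta = -\tfrac{1}{4}\,\epsilon^2\, b_2^2\left( b_2 b_6-b_4^2\right) + O(\epsilon^3)\, , $$ while $c_4=b_2^2-24\,\epsilon\, b_4$ stays finite. Hence the $j$-invariant $j\propto c_4^3/\Delta$ diverges as $\epsilon\rightarrow 0$ away from $b_2=0$, which is the statement that the string coupling becomes weak there. The leading discriminant displays the orientifold plane at $b_2=0$, the branch locus of the double cover $\xi^2=b_2=a_1^2+4a_2$, and a D7-brane locus at $b_2 b_6-b_4^2=0$.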
However, as we take the limit $\epsilon\rightarrow 0$, the gauge group can change, as we will see. In the DW limit, the double cover of the base used to describe the orientifold theory in type IIB takes the form \begin{equation} X:\quad \xi^2 =a_1^2+ 4a_2. \end{equation} $X$ is nonsingular when $a_2=0$ describes a nonsingular divisor. However, the Tate form for groups other than $\mathop{\rm {}Sp}(n)$ will require $a_2$ to factorize and therefore $X$ to admit singularities. Typically, if $a_1, a_2$ factorize as $a_2=\sigma^{m_2} a_{2,m_2}$ and $a_1=\sigma^{m_1} a_{1,m_1}$ for a given Tate form, we get \begin{equation} X:\quad \xi^2 =\sigma^{2 m_1}a_{1,m_1}^2+4 \sigma^{m_2} a_{2,m_2}, \end{equation} which is singular when $m_2>0$. The presence of singularities is acceptable if one can obtain a crepant resolution $$\mu:\tilde{X}\rightarrow X$$ compatible with the involution $\xi\mapsto -\xi$, which represents the $\mathbb{Z}_2$ orientifold symmetry in the weakly coupled type IIB picture. The crepant condition means that the first Chern class of the resolved variety $\tilde{X}$ is the pullback of the first Chern class of $X$: $$c_1(\tilde{X})=\mu^\star c_1(X).$$ This is important in order to make sure that the string target space is still Calabi-Yau. One would also want the resolution to be a double cover in order to still have an orientifold theory in the weak coupling limit. The compatibility condition requires that the involution of $\tilde{X}$ naturally reduces to the one of $X$. We will call a resolution of a double cover an {\em admissible crepant resolution} if it is crepant and compatible with the double cover. Double covers and their admissible crepant resolutions are well studied in mathematics. We review the main results in appendix \ref{AdmissibleResolutions}.
We point out an important subtlety involving the preservation of the D7 tadpole after a resolution of singularities in section \ref{PhysicalProperties} and discuss it further in the conclusion (section \ref{Outlook}). In the case of unitary gauge groups, the DW weak coupling limit gives a singular orientifold theory which does not admit an admissible crepant resolution. Indeed, for the Tate form of a unitary gauge group we have $a_2=\sigma a_{2,1}$. This implies that the double cover admits conifold-like singularities in codimension-3: \begin{equation}\label{ConifoldGeometry} X:\quad \xi^2 =a_1^2+ 4\sigma a_{2,1}, \end{equation} where $\sigma=0$ defines the divisor on which the fiber $I^{s}_{n}$ is located and gives the $\mathop{\rm SU}(n)$ gauge group. Such a singularity is a serious problem, especially for GUT model building, as it makes the theory at weak coupling ill-defined for any kind of computation. These conifold singularities can be avoided at the cost of working with elliptic fibrations for which the conifold points $a_1=\sigma=a_{2,1}=0$ do not actually occur due to the intersection theory of the base \cite{Krause:2012he}. Such geometries can be algorithmically constructed starting from type IIB, as explained in \cite{Collinucci:2009uh}. In this paper, we will consider a systematic way of solving the conifold problem in the weak coupling limit of $\mathop{\rm SU}(n)$ theories. If one is willing to give up the choice of having a gauge theory in eight dimensions with unitary gauge group, an immediate solution of the conifold problem would be to deform the singularity by adding on the right hand side of \eqref{ConifoldGeometry} a generic polynomial of the appropriate degree. This breaks the unitary gauge group we started with by giving an expectation value to fields in its antisymmetric representation \cite{Donagi:2009ra}.
However, this would only make us move from a split to a non-split singularity of the Calabi-Yau fourfold, by rendering the latter slightly more generic. As a result, monodromies of the fiber are introduced and the unbroken gauge group is of the symplectic type. In contrast, in this paper, we construct new limits that lead to a double cover allowing for an admissible crepant resolution. The limits we consider are specializations of the DW limit. The idea behind them is very simple: knowing the conditions to have an admissible crepant resolution (see appendix \ref{AdmissibleResolutions}), we improve the DW limit in order to replace the conifold points by a better-behaved singularity. In particular, there is one choice of singularity which even allows us to preserve the unitary gauge group of the starting F-theory configuration. We consider the limit $$ \begin{cases} a_{2,1}\rightarrow \epsilon a_{2,1}+\sigma a_{2,2}\\ a_3\rightarrow \epsilon a_3\\ a_4\rightarrow \epsilon a_4\\ a_6\rightarrow \epsilon^2 a_6 \end{cases} $$ In this new limit, the conifold singularities are replaced by the singularities of suspended pinch points: $$ X: \xi^2=a_1^2 + \sigma^2 a_{2,2}. $$ Such a double cover admits an admissible crepant resolution obtained by blowing up the codimension-two locus $\xi=a_1=\sigma=0$. The presence of brane and image-brane stacks at $\sigma=0$ is also evident. We discuss this model at length and propose some applications of it to GUT model building. In particular we analyze the structure of matter curves before and after resolution and comment on specific suspended pinch point geometries which contain all the expected matter spectrum. We also raise a few questions, especially regarding the realization of Yukawa couplings, which we hope to address in the near future. The paper is organized as follows.
We begin in section \ref{Notations} by briefly introducing the geometries of Weierstrass elliptic fibrations and by fixing our notations, whereas in section \ref{SenWeakCoupling} we review the Sen weak coupling limit and its orientifold interpretation in type IIB string theory. In section \ref{DWAnsatz} we present a general overview of the Donagi-Wijnholt limit and in subsection \ref{SolvingConifold} propose two alternatives which solve the conifold problem and exhibit different physical features. Section \ref{GeneralitiesWCL} gives a broader overview of weak coupling limits and section \ref{BraneContentDW} provides a detailed analysis of the brane content after taking the DW limit for each kind of Kodaira singularity. Section \ref{resCY3spp} is instead devoted to a more in-depth analysis of one of our alternative proposals, i.e. the suspended pinch point geometry: We blow up the singularity and discuss the features of the resolved geometry from both a mathematical and a physical perspective. We draw our conclusions in section \ref{Outlook}. Finally some technical details are provided in the appendices: Some mathematical theorems relevant for our analysis are presented in appendix \ref{AdmissibleResolutions}, where we also propose an equivalent small resolution of the suspended pinch point geometry; in appendix \ref{BlowUpQuadricCone} a blow-up is given for the quadric cone geometry. \section{Elliptic curves and Weierstrass models: a quick review}\label{Notations} \begin{definition}[Elliptic curve and Weierstrass normal equation] An elliptic curve over a field $K$ is an irreducible nonsingular projective algebraic curve of genus $1$ with a choice of a $K$-rational point, the origin of the group law.
It follows from the Riemann-Roch theorem that an elliptic curve over a field $K$ is isomorphic to a plane cubic curve, cut out in $\mathbb{P}^2_K$ by the following {\em generalized Weierstrass form}: \begin{equation}\label{wt} E:\quad zy^2 + a_1 x y z+ a_3 y z^2 = x^3 + a_2 x^2 z + a_4 x z^2 + a_6 z^3,\quad a_i \in K. \end{equation} Geometrically, the marked point of the Weierstrass form of an elliptic curve is its intersection point with the line at infinity $z=0$, namely the point $[x:y:z]=[0:1:0]$, which is a point of inflection and the only point at infinity of the curve. The curve \eqref{wt} is called a Weierstrass normal form since (in characteristic different from 2 and 3) after the change of variables $\wp=x+\frac{1}{12}(a_1^2 + 4 a_2)$, $\wp'=2y + a_1 x + a_3$ it reduces to the traditional cubic equation satisfied by the Weierstrass $\wp$-function and its derivative: $E_\Lambda:(\wp')^2=4 \wp^3- g_2 \wp-g_3$. \end{definition} \subsection{Elliptic fibration} An elliptic fibration over a base $B$ can be seen as an elliptic curve over the function field of $B$. We will define elliptic fibrations by a Weierstrass model. The Weierstrass model over a base $B$ is written in a projective bundle $\mathbb{P}(\mathscr{E})\rightarrow B$ where $\mathscr{E}=\mathscr{O}_B\oplus \mathscr{L}^2\oplus \mathscr{L}^3$. \begin{notation} The sheaf of regular functions of a variety $B$ is denoted as usual by $\mathscr{O}_B$. We use the classical convention for projective bundles $\mathbb{P}(\mathscr{E})$: At each point they are defined by the set of lines of $\mathscr{E}$. We denote the tautological line bundle of the projective bundle $\mathbb{P}(\mathscr{E})$ by $\mathscr{O}_B(1)$. When the context is clear, we just denote it $\mathscr{O}(1)$. We denote by $\mathscr{O}(n)$ for $n>0$ the $n$th tensor product of $\mathscr{O}(1)$. Its dual is $\mathscr{O}(-n)$. \end{notation} The coefficients $a_i$ of \eqref{wt} are sections of $\mathscr{L}^i$.
The projective coordinates $[x:y:z]$ of this projective bundle are such that $x$ is a section of $\mathscr{O}(1)\otimes \pi^\star\mathscr{L}^2$, $y$ is a section of $\mathscr{O}(1)\otimes \pi^\star\mathscr{L}^3$ and $z$ is a section of $\mathscr{O}(1)$. The elliptic fibration is a section of $\mathscr{O}(3)\otimes \pi^\star\mathscr{L}^6$ in the bundle $\mathbb{P}(\mathscr{E})$. The elliptic fibration $\varphi:Y\rightarrow B$ has vanishing Chern class if $ c_1(B)=c_1(\mathscr{L})$. For most of this paper, except for the physical applications in section \ref{resCY3spp}, we will not need to impose the Calabi-Yau condition and we will also not restrict the dimension of the base. \subsection{Formulaire} An elliptic curve given by a Weierstrass equation is singular if and only if its discriminant $\Delta$ is zero. If we denote by $\bar{K}$ the algebraic closure of $K$, two smooth elliptic curves are isomorphic over $\bar{K}$ if and only if they have the same $j$-invariant. We recall the formulaire of Deligne and Tate which is useful to express the discriminant $\Delta$, the $j$-invariant and to reduce the Weierstrass equation into simpler forms: \begin{align}\label{eq.formulaire} b_2=& a_1^2+ 4 a_2,\ \ b_4= a_1 a_3 + 2 a_4 ,\ \ b_6 = a_3^2 + 4 a_6 , \ \ b_8 =b_2 a_6 -a_1 a_3 a_4 + a_2 a_3^2-a_4^2,\\ c_4=& b_2^2 -24 b_4 (=12 g_2), \quad c_6 = -b_2^3+ 36 b_2 b_4 -216 b_6 (=216 g_3).\\ \intertext{The coefficients $b_i$ and $c_i$ are sections of $\mathscr{L}^i$. The discriminant and the $j$-invariant are given by: } \Delta=&-b_2^2 b_8 -8 b_4^3 -27 b_6^2 + 9 b_2 b_4 b_6 (=g_2^3 -27 g_3^2),\qquad j=\frac{c_4^3}{\Delta}(=1728 J). \end{align} These quantities are related by the following relations: \begin{equation} 4 b_8 =b_2 b_6 -b_4^2 \quad \text{and}\quad 1728 \Delta=c_4^3 -c_6^2. 
\end{equation} The variables $b_2, b_4, b_6$ are used to express the Weierstrass equation after completing the square in $y$ by the redefinition $y\mapsto y-\frac{1}{2}(a_1 x +a_3 z)$: $$ z y^2= x^3 + \frac{1}{4}b_2 x^2 z+ \frac{1}{2} b_4 x z^2+ \frac{1}{4} b_6 z^3. $$ The variables $c_4$ and $c_6$ are then obtained after eliminating the term in $x^2$ by the redefinition $x\mapsto x-\frac{1}{12} b_2 z $, in order to have the short form of the Weierstrass equation: $z y^2=x^3 - \tfrac{1}{48} c_4 x z^2 -\tfrac{1}{864} c_6 z^3.$ We will use the following normalization of the short Weierstrass equation (obtained by introducing $f= - \frac{1}{48} c_4$ and $ g=-\frac{1}{864} c_6 $): \begin{equation} E:z y^2=x^3 + f x z^2 +g z^3, \quad \Delta=-16(4f^3 + 27 g^2) , \quad j=1728\frac{4f^3 }{4f^3 + 27 g^2}. \end{equation} \section{Weak coupling limit of elliptic fibrations}\label{SenWeakCoupling} In Type IIB string theory, when taking into account the back-reaction of space-time-filling 7-brane sources, the string coupling $g_s$ is not a constant, but varies along the internal space. It is defined by (the expectation value of) the exponential of the dilaton $\phi$. Together with the axion $C_0$, the dilaton forms a complex scalar field called the {\em axio-dilaton}: \begin{equation} \tau=C_0+\mathrm{i}\, e^{-\phi}, \end{equation} which transforms under the S-duality group $\mathop{\rm {}SL} (2,\mathbb{Z})$ as the complex modulus of a torus under the action of the modular group: \begin{equation} \tau \mapsto \frac{a \tau + b}{c\tau + d}, \quad a d-b c =1,\quad a, b,c,d\in \mathbb{Z}. \end{equation} F-theory \cite{Vafa:1996xn} is a non-perturbative approach to type IIB string theory that implements geometrically several non-trivial constraints of S-duality using the geometry of elliptic fibrations.
In F-theory, the axio-dilaton field is geometrically modeled as the modulus of the fiber of an elliptic fibration $$\varphi:Y\rightarrow B.$$ The base $B$ of the fibration is the space over which type IIB string theory is compactified. The size of the elliptic curve is not physical and therefore the elliptic fiber is only defined modulo homothety. It follows that each regular fiber can be expressed as a quotient: $$\mathscr{E}_\tau:=\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z}), \quad \Im (\tau)>0,$$ depending on the modulus $\tau$ living in the complex upper half-plane. Since much of our understanding of string theory is based on perturbative calculations that make sense only for small string coupling $g_s$, it is useful to understand how the strongly coupled physics described by F-theory flows to a weakly coupled type IIB description. The limit $g_s\rightarrow 0$ is known as the weak coupling limit of F-theory. In terms of the elliptic fiber, a vanishing string coupling $g_s$ corresponds to an infinite $j$-invariant. This can be seen by considering the Laurent expansion of the $j$-invariant in terms of the variable $q:=\exp(2\pi \mathrm{i} \tau)$ parametrizing the punctured unit disk: \begin{equation}\label{jinvariant.Laurent} j(q)=\frac{1}{q}+744+\sum_{n>0} c_n q^n, \quad q:=\exp(2\pi \mathrm{i} \tau). \end{equation} In particular, the absolute value of $q$ is related to the inverse of the string coupling as: $$ |q|=\exp(-\frac{2\pi}{ g_s}). $$ It follows that the weak coupling limit $g_s\rightarrow 0$ is equivalent to approaching the center of the unit disk ($|q|\rightarrow 0$) and therefore to an infinite $j$-invariant: \begin{equation} (g_s\rightarrow 0) \iff ( j\rightarrow \infty). \end{equation} To make connection with the perturbative regime of IIB string theory, one can consider certain degenerations of the elliptic fibration such that the string coupling becomes small almost everywhere over the base $B$.
This is called a {\em weak coupling limit of the elliptic fibration}. In the simplest set-up, such degenerations can be expressed in terms of a family of elliptic fibrations $\varphi_\epsilon: Y_\epsilon \rightarrow B$ parametrized by a deformation parameter $\epsilon$ for which the general fiber of the fibration $\varphi_\epsilon: Y_\epsilon \rightarrow B$ becomes a nodal curve (or more generally a semi-stable curve) as $\epsilon$ approaches zero. \subsection{Sen's limit of Weierstrass models} Sen \cite{Sen:1996vd,Sen.Orientifold} has proposed a simple realization of the weak coupling limit for an elliptic fibration defined by a short Weierstrass model \begin{equation}\label{ShortWeierstrass} y^2=x^3+ f x + g. \end{equation} Geometrically, the main idea of Sen's limit is to express the Weierstrass model as a deformation of a fibration of nodal curves. The coefficients $f$ and $g$ are then polynomials in the deformation parameter $\epsilon$ so that the general fiber is a nodal curve at $\epsilon=0$. This ensures that the $j$-invariant goes to infinity as $\epsilon$ goes to zero. Sen's limit is explicitly given in terms of the following expression for $f$ and $g$: \begin{equation}\label{Sen.original} \text{Sen's limit} \ \begin{cases} f=-3 h^2+ \epsilon\eta \\ g=-2h^3+ \epsilon h \eta + \epsilon^2 \chi \end{cases} \end{equation} For every fixed value of $\epsilon$, the variables $\eta$ and $\chi$ ensure that the Weierstrass model is as general as possible. The elliptic fibration is then \begin{equation} Y_{(\epsilon)}: y^2=(x + h)^2 (x - 2 h)+\epsilon (\eta x + h \eta )+ \epsilon^2 \chi. \end{equation} At $\epsilon=0$, we recognize a fibration of nodal curves: $$ Y_{(0)}: \quad y^2=(x+h)^2(x-2h). $$ Since a nodal curve has an infinite $j$-invariant, this ensures that the string coupling vanishes over a generic point of the base as $\epsilon$ approaches zero. 
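The factorization at $\epsilon=0$ rests on the cubic identity $$ x^3-3h^2x-2h^3=(x+h)^2(x-2h), $$ so that the coefficients $f=-3h^2$ and $g=-2h^3$ indeed describe a fibration of nodal curves, with the node located at $(x,y)=(-h,0)$ over a generic point of the base.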
We can reach the same conclusion by computing the leading terms of the Laurent expansion of the $j$-invariant as a function of $\epsilon$: \begin{equation} \Delta=-9 \epsilon^2 h^2(\eta^2 +12 h\chi) +O(\epsilon^3), \quad j= 1728 \frac{12 h^4}{\epsilon^2(\eta^2+12 h\chi)}+\sum_{k\geq -1} u_k(h,\eta,\chi) \epsilon^k.\end{equation} In particular, the $j$-invariant has a pole of order two at $\epsilon=0$. \begin{remark} In Sen's limit, $g_s$ goes to zero nearly everywhere. More precisely, away from $h=0$. Over $h=0$, we have to be more careful as the leading order of $j$ vanishes for non-zero values of $\epsilon$. By first imposing $h=0$ and then taking the limit, one can show that $j=1728$ over a generic point of $h=0$ in the limit $\epsilon\rightarrow 0$. \end{remark} \subsection{The orientifold interpretation of Sen's limit} The monodromy of the axio-dilaton field around $h=0$ due to the behavior of the $j$-invariant $j\sim h^4/[\epsilon^2 (\eta^2+12 h\chi)]$ indicates that $h=0$ is the location of an O7-plane. We recall that $h$ is a section of $\mathscr{L}^2$ and the discriminant is a section of $\mathscr{L}^{12}$. Since $h$ is a section of an even line bundle, it can describe the branch locus of a double cover of the base. Explicitly, the double cover $\rho:X\rightarrow B$ of the base $B$ branched along the divisor $\underline{O}\subset B: h=0$ is given by the canonical equation of a double cover: \begin{equation} X:\quad \xi^2=h, \end{equation} which is automatically a Calabi-Yau $n$-fold if the elliptic fibration $\varphi:Y\rightarrow B$ we started with is a Calabi-Yau $(n+1)$-fold. The weak coupling limit, described as a geometrical construction, does not require the Calabi-Yau condition and can be defined for an elliptic fibration over a base of arbitrary dimension. The branch divisor $\underline{O}$ corresponds to the orientifold locus $O:\xi=0$ in the double cover $X$.
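The Calabi-Yau property of the double cover follows from the standard first Chern class formula for a double cover $\rho:X\rightarrow B$ branched along a divisor $\underline{O}$: \begin{equation} c_1(X)=\rho^\star\Big(c_1(B)-\tfrac{1}{2}[\underline{O}]\Big). \end{equation} Since $h$ is a section of $\mathscr{L}^2$, the branch class is $[\underline{O}]=2c_1(\mathscr{L})$ and therefore $c_1(X)=\rho^\star\big(c_1(B)-c_1(\mathscr{L})\big)$, which vanishes exactly when the Calabi-Yau condition $c_1(B)=c_1(\mathscr{L})$ holds for the elliptic fibration.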
The leading term in the discriminant pulls back to the double cover $X$ as follows \begin{equation} \rho^\star\Delta=-9\epsilon^2 \xi^4 ( \eta^2+12 \xi^2 \chi)+ O(\epsilon^3). \end{equation} The corresponding $j$-invariant is \begin{equation} j \propto 1728 \frac{12\xi^{8}}{\epsilon^2 ( \eta^2+12 \xi^2 \chi)}. \end{equation} This is physically described as an orientifold at $O:\xi=0$ and a D7 Whitney-brane at $D_w:\eta^2+12 \xi^2 \chi=0$. The divisor $D_w$ has the singularity of a Whitney umbrella: A double line in codimension-2 that enhances to a locus of pinch points in codimension-3. \subsection{Sen's limit for a Weierstrass model in Tate form} When an elliptic fibration is given by a Weierstrass equation in Tate form: \begin{equation}\label{gen.Weierstrass} y^2+ a_1 xy + a_3 y =x^3+ a_2 x^2 + a_4 x +a_6, \end{equation} Sen's limit can still be defined by remembering that the previous equation can be put into the short Weierstrass form \eqref{ShortWeierstrass} $$y^2=x^3-\frac{c_4}{48} x -\frac{c_6}{864},$$ with $c_4$ and $c_6$ defined as in equation \eqref{eq.formulaire}. Sen's limit as expressed in equation \eqref{Sen.original} corresponds for a Weierstrass equation in Tate form \eqref{gen.Weierstrass} to the requirement: \begin{equation} \text{Sen's limit} \begin{cases} b_4\rightarrow\epsilon \eta \\ b_6\rightarrow \epsilon^2 \chi\;. \end{cases} \label{sen} \end{equation} The branch divisor is given by $\underline{O}: b_2=0$ in the base $B$. It follows that the orientifold is defined by the double cover: \begin{equation} X: \xi^2=b_2, \quad b_2:=a_1^2+4 a_2. \end{equation} At leading order, the discriminant reads: \begin{equation} \Delta= \epsilon^2 b_2^2 b_8+O(\epsilon^3). \end{equation} It is composed of a factor $b_2^2$ which is the contribution from the branch locus and a factor $b_8:=b_2 b_6-b_4^2$. When pulled back to the double cover we have \begin{equation} \rho^*\Delta= \epsilon^2 \xi^4 (\xi^2 b_6 - b_4^2)+O(\epsilon^3).
\end{equation} The factor $\xi^4$ is the contribution from the orientifold and the second factor gives a Whitney brane $\xi^2 b_6 - b_4^2=0$. Since $b_4=a_1a_3+2a_4$ and $b_6=a_3^2+4a_6$, it is important to realize that the limit \eqref{sen} can be implemented in many different ways if we start from Tate's general form of the Weierstrass equation. In the F-theory literature, the realization which is normally used is the ansatz of Donagi and Wijnholt \cite{Donagi:2009ra}. We will analyze it in some detail in the next section. \section{The Donagi-Wijnholt ansatz}\label{DWAnsatz} Donagi and Wijnholt \cite{Donagi:2009ra} have proposed the following realization of Sen's limit \eqref{sen} valid for elliptic fibrations given by the Tate form of a Weierstrass model: \begin{equation}\label{sen.Donagi} \text{Donagi-Wijnholt} \begin{cases} a_3\rightarrow \epsilon a_3, \\ a_4\rightarrow \epsilon a_4, \\ a_6\rightarrow \epsilon^2 a_6. \end{cases} \end{equation} Geometrically, this limit is a degeneration of a Weierstrass model to a fibration of nodal curves when $\epsilon=0$. In the limit $\epsilon=0$, we have $a_3=a_4=a_6=0$, which gives the nodal curve: \begin{equation} (y+\frac{1}{2}a_1 x)^2=x^2 ( x+\frac{1}{4} b_2 )\,. \end{equation} The nodal curve specializes to a cusp over the divisor in the base $\underline{O}: b_2=0$. Since $b_2$ is a section of the line bundle $\mathscr{L}^2$, we can define a double cover $\rho: X\rightarrow B$ branched at $b_2=0$: \begin{equation} X:\quad \xi^2=a_1^2+4a_2, \end{equation} which is used to define an orientifold theory in type IIB, understood as the weak coupling limit of the F-theory model given by the Weierstrass equation with coefficients $(a_1, a_2, a_3, a_4, a_6)$. The double cover $X$ is nonsingular as long as $a_2=0$ defines a nonsingular divisor in the base.
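The singularities of $X$ can be located with the Jacobian criterion: the hypersurface $\xi^2=b_2$ is singular exactly along \begin{equation} \xi=b_2=\mathrm{d}b_2=0, \end{equation} that is, over the singular points of the branch divisor $b_2=a_1^2+4a_2=0$.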
As we will see in the following subsections, $X$ is usually singular when $Y$ admits a gauge group implemented by an ansatz coming from Tate's algorithm, except in the case of symplectic gauge groups. \subsection{Gauge groups and singularities in the weak coupling limit} In F-theory, non-Abelian gauge groups appear when the elliptic fibration admits reducible singular fibers over a component of the discriminant locus \cite{Morrison:1996na, Bershadsky:1996nh}. Such singular fibers located over a codimension-one locus in the base of an elliptic fibration were classified by Kodaira and N\'eron and admit dual graphs that are extended ADE Dynkin diagrams \cite{Kodaira,Neron}. Non-simply laced gauge groups can also be obtained by taking into account the monodromy action by an outer automorphism on the nodes of the dual graph of the singular fiber \cite{Bershadsky:1996nh}. When working with a Weierstrass model, non-Abelian gauge groups can occur only when the Weierstrass model becomes singular, as a smooth Weierstrass model admits only irreducible fibers (smooth elliptic curves, nodal curves and cuspidal curves). To implement a certain non-Abelian gauge group over a divisor $\sigma=0$ of the base, one can use an ansatz inspired directly by Tate's algorithm \cite{Bershadsky:1996nh}. These ans\"atze are now familiarly called {\em Tate forms}. The original list of Tate forms in \cite{Bershadsky:1996nh} was corrected by Grassi and Morrison \cite{GM1}. A Weierstrass model admitting a certain gauge group is not necessarily realized by one of the Tate forms. A careful analysis was done recently to see when it is possible to achieve these forms, and more general ans\"atze were presented when it was not possible to do so \cite{Katz:2011qp}. We will refer to the classification in table 2 of \cite{Katz:2011qp} throughout the paper. It is reproduced in table \ref{Table.TateForm}.
The ansatz requires that each of the coefficients $(a_1,a_2,a_3,a_4,a_6)$ of the Weierstrass equation vanishes with a certain multiplicity over $\sigma=0$, as stipulated by Tate's algorithm. The conditions for non-simply laced gauge groups are factorization conditions on a quadratic or cubic polynomial defined from the coefficients $a_k$. Following the notation familiar from Tate's algorithm, we denote \begin{equation} a_i=a_{i,m_i} \sigma^{m_i}, \end{equation} where $m_i$ denotes the order of vanishing of $a_i$ along the divisor $\sigma=0$. We assume that $a_{i,m_i}$ is nonzero at a generic point of $\sigma=0$. When such a gauge group is implemented in this way, we can realize the Donagi-Wijnholt ansatz as conditions on $(a_{3,m_3}, a_{4,m_4}, a_{6,m_6})$: \begin{equation}\label{DWLimitNonAbelian} \begin{cases} a_{3, m_3}\rightarrow \epsilon a_{3,m_3}, \\ a_{4, m_4}\rightarrow \epsilon a_{4,m_4}, \\ a_{6, m_6}\rightarrow \epsilon^2 a_{6,m_6}. \end{cases} \end{equation} This leads to an orientifold theory defined by the following double cover at weak coupling: \begin{equation} X:\quad \xi^2=h, \quad \text{where}\quad h:=\sigma^{2m_1}a_{1,m_1}^2+4 \sigma^{m_2} a_{2,m_2}. \end{equation} We see immediately that this double cover is singular whenever $m_2>0$. This implies that it will be singular for all gauge groups realized through Tate forms with the exception of the symplectic gauge groups $\mathop{\rm {}Sp}(\lfloor \frac{k}{2} \rfloor)$ obtained from the Tate form for a fiber of type $I^{ns}_{k}$. When the double cover is singular, we have to determine if it admits a crepant resolution compatible with the $\mathbb{Z}_2$ involution of the double cover.
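As a concrete example, consider the $\mathop{\rm SU}(5)$ Tate form, i.e. the fiber $I_5^{s}$ of table \ref{Table.TateForm}, for which $(m_1,m_2,m_3,m_4,m_6)=(0,1,2,3,5)$. The double cover obtained in the DW limit is then \begin{equation} X:\quad \xi^2=a_{1,0}^2+4\,\sigma\, a_{2,1}, \end{equation} which is singular along $\xi=a_{1,0}=\sigma=a_{2,1}=0$: this is the conifold case relevant for GUT model building.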
\begin{table}[bht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline type & group & \quad $ a_1$\quad & \quad $a_2$\quad & \quad $a_3$ \quad & \quad $ a_4 $ \quad& \quad $ a_6$ \quad & $\Delta$ \\ \hline \hline $I_0 $ & --- & $ 0 $ & $ 0 $ & $ 0 $ & $ 0 $ & $ 0$ & $0$ \\ $I_1 $ & --- & $0 $ & $ 0 $ & $ 1 $ & $ 1 $ & $ 1 $ & $1$ \\ $I_2 $ & $SU(2)$ & $ 0 $ & $ 0 $ & $ 1 $ & $ 1 $ & $2$ & $ 2 $ \\ $I_{3}^{ns} $ & $Sp(1)$ & $0$ & $0$ & $2$ & $2$ & $3$ & $3$ \\ $I_{3}^{s}$ & \small $\mathop{\rm SU}(3)$ & $0$ & $1$ & $1$ & $2$ & $3$ & $3$ \\ $I_{2k}^{ns}$ & $ Sp(k)$ & $0$ & $0$ & $k$ & $k$ & $2k$ & $2k$ \\ $I_{2k}^{s}$ & $SU(2k)$ & $0$ & $1$ & $k$ & $k$ & $2k$ & $2k$ \\ $I_{2k+1}^{ns}$ & $Sp(k)$ & $0$ & $0$ & {\footnotesize $k+1$} & {\footnotesize $k+1$} & {\footnotesize $2k+1$} & {\footnotesize $2k+1$} \\ $I_{2k+1}^s$ & \footnotesize $SU(2k+1)$ & $0$ & $1$ & $k$ & {\footnotesize $k+1$} &{\footnotesize $2k+1$} & {\footnotesize $2k+1$} \\ $II$ & --- & $1$ & $1$ & $1$ & $1$ & $1$ & $2$ \\ $III$ & $SU(2)$ & $1$ & $1$ & $1$ & $1$ & $2$ & $3$ \\ $IV^{ns} $ & $Sp(1)$ & $1$ & $1$ & $1$ & $2$ & $2$ & $4$ \\ $IV^{s}$ & $SU(3)$ & $1$ & $1$ & $1$ & $2$ & $3$ & $4$ \\ $I_0^{*\,ns} $ & $G_2$ & $1$ & $1$ & $2$ & $2$ & $3$ & $6$ \\ $I_0^{*\,ss}$ & $SO(7)$ & $1$ & $1$ & $2$ & $2$ & $4$ & $6$ \\ $I_0^{*\,s} $ & $SO(8)^*$ & $1$ & $1$ & $2$ & $2$ & $4$ & $6$ \\ $I_{1}^{*\,ns}$ & $SO(9)$ & $1$ & $1$ & $2$ & $3$ & $4$ & $7$ \\ $I_{1}^{*\,s}$ & $SO(10) $ & $1$ & $1$ & $2$ & $3$ & $5$ & $7$ \\ $I_{2}^{*\,ns}$ & $SO(11)$ & $1$ & $1$ & $3$ & $3$ & $5$ & $8$ \\ $I_{2}^{*\,s}$ & $SO(12)^*$ & $1$ & $1$ & $3$ & $3$ & $5$ & $8$\\ $I_{2k-3}^{*\,ns}$ &\footnotesize $SO(4k+1)$ & $1$ & $1$ & $k$ & {\footnotesize $k+1$} & $2k$ & \footnotesize $2k+3$ \\ $I_{2k-3}^{*\,s}$ &\footnotesize $SO(4k+2)$ & $1$ & $1$ & $k$ & {\footnotesize $k+1$} & {\footnotesize $2k+1$} &\footnotesize $2k+3$ \\ $I_{2k-2}^{*\,ns}$ & \footnotesize $SO(4k+3)$ & $1$ & $1$ & \footnotesize $k+1$ & \footnotesize $k+1$ 
&\footnotesize $2k+1$ & \footnotesize $2k+4$ \\ $I_{2k-2}^{*\,s}$ & \footnotesize $SO(4k+4)^*$ & $1$ & $1$ & \footnotesize $k+1$ & \footnotesize $k+1$ & \footnotesize $2k+1$ & \footnotesize $2k+4$ \\ $IV^{*\,ns}$ & $F_4 $ & $1$ & $2$ & $2$ & $3$ & $4$ & $8$\\ $IV^{*\,s} $ & $E_6$ & $1$ & $2$ & $2$ & $3$ & $5$ & $8$\\ $III^{*} $ & $E_7$ & $1$ & $2$ & $3$ & $3$ & $5$ & $9$\\ $II^{*} $ & $E_8\,$ & $1$ & $2$ & $3$ & $4$ & $5$ & $10$ \\ \footnotesize non-min & --- & $ 1$ & $2$ & $3$ & $4$ & $6$ & $12$ \\ \hline \end{tabular} \end{center} \caption{ {\bf Tate forms in F-theory}. The superscript (s/ns/ss) stands for (split/non-split/semi-split), meaning that (there is no/there is/there is a partial) monodromy action by an outer automorphism on the vanishing cycles along the singular locus. }\label{Table.TateForm} \end{table} \subsection{Singular double covers from the Donagi-Wijnholt limit} We would like to classify the types of singularities that occur when the weak coupling limit is reached through the Donagi-Wijnholt realization of Sen's limit in the presence of gauge groups implemented by Tate forms. First we note that the different Tate forms for singular fibers can be organized into four groups characterized by the vanishing multiplicity $(m_1,m_2)$ of the coefficients $a_1=\sigma^{m_1} a_{1,m_1}$ and $a_2=\sigma^{m_2} a_{2,m_2}$: \begin{itemize} \item $(m_1,m_2)=(0,0)$ for symplectic groups realized by fibers $I_k^{ns}$ and $\mathop{\rm SU}(2)$ realized by a fiber $I_2$.\footnote{We may realize SU(2) with slightly less generic $I_2$ fibers, i.e. with $a_2$ having order of vanishing $1$ along $\sigma=0$. Its weakly coupled physics is very different from the more generic realization and it belongs to the category $(m_1,m_2)=(0,1)$ (see \cite{Collinucci:2010gz} about this distinction).} They lead to smooth double covers at weak coupling. \item $(m_1,m_2)=(0,1)$ for unitary groups realized by fibers $I^s_k$. They lead to conifold singularities in the double cover.
\item $(m_1,m_2)=(1,1)$ for fibers $I^* _k$ (orthogonal groups $SO(r)$ and the exceptional group $G_2$) and fibers $III$ and $IV$ (leading to $\mathop{\rm {}Sp}(1)$ and $\mathop{\rm SU}(3)$). They lead to quadric cone singularities. \item $(m_1,m_2)=(1,2)$ for exceptional groups $F_4$, $E_6$, $E_7$ and $E_8$. They lead to Whitney umbrella singularities. \end{itemize} This is summarized in table \ref{table.DW}. The case $(m_1,m_2)=(0,1)$ is special in the sense that, in contrast to the quadric cone singularity and the Whitney umbrella, the conifold singularities do not admit crepant resolutions compatible with the $\mathbb{Z}_2$ involution. This is a serious problem for phenomenological model building based on SU(5) Grand Unified Theories. We will explain how to resolve that problem in section \ref{sppLimit} and get the right GUT group on a D7-stack. In section \ref{resCY3spp}, we will instead address several other features (and issues) of the GUT theories so obtained. \begin{table}[htb] {\footnotesize \begin{tabular}{|c|lcl|c|} \hline $(m_1,m_2)$ & \multicolumn{3}{c|}{Double cover} & { \begin{tabular}{l} group over $\sigma=0$ \end{tabular} } \\ \hline $(0,0)$ & $\xi^2 = a_{1,0}^2 + 4 a_{2,0}$ &:& smooth & Symplectic \\ \hline $(0,1)$ & $\xi^2 = a_{1,0}^2 + 4 \sigma a_{2,1} $ &:& conifold & Unitary\\ \hline $(1,1)$ & $\xi^2 = \sigma r_{1,1}$ &:& quadric cone & Orthogonal \\ \hline $(1,2)$ & $\xi^2 = \sigma^2 r_{1,2}$ &:& Whitney umbrella & $F_4$, $E_6,E_7$ and $E_8$\\ \hline \end{tabular}} \caption{ \footnotesize Singular orientifolds for the weak coupling limit $(a_3, a_4, a_6)\rightarrow (\epsilon a_3, \epsilon a_4, \epsilon^2 a_6)$. This weak coupling limit is the ansatz used by Donagi-Wijnholt \cite{Donagi:2009ra}. In the first column $(m_1,m_2)$ are such that $a_1=\sigma^{m_1} a_{1,m_1}$ and $a_2=\sigma^{m_2} a_{2,m_2}$. In the second column $r_{m_1,m_2}:=\sigma^{2m_1-m_2} a_{1,1}^2 + 4 a_{2,m_2}$.
}\label{table.DW} \end{table} \subsubsection{ $SO(k)$ , $G_2$, $\mathop{\rm {}Sp}(1)$ and $\mathop{\rm SU}(3)$ and quadric cone singularities} When $(m_1, m_2)=(1,1)$, we have the double cover \begin{equation} X:\quad \xi^2= \sigma r_{1,1}, \quad \text{where}\quad r_{1,1}=\sigma a_{1,1}^2+4 a_{2,1}. \end{equation} It has the singularity of a {\em quadric cone}. The singular locus is the codimension-2 locus $\xi=\sigma=r_{1,1}=0$. The double cover $X$ admits a crepant resolution which is also a double cover. This geometry characterizes the orthogonal gauge groups obtained in F-theory by a Tate form. The exceptional gauge group $G_2$ is obtained from a non-split fiber $I^*_0$ and therefore also leads to such a singular double cover. For small rank groups, the Tate forms for $\mathop{\rm {}Sp}(1)$ (with a fiber $III$ or $IV^{ns}$) and $\mathop{\rm SU}(3)$ (with a fiber $IV^s$) all have $(m_1, m_2)=(1,1)$. \subsubsection{Exceptional groups $F_4,E_6, E_7, E_8$ and Whitney umbrella } The Tate forms for the exceptional groups $F_4, E_6, E_7, E_8$ have $(m_1,m_2)=(1,2)$ and at weak coupling, using the Donagi-Wijnholt ansatz, the orientifold is defined through a double cover with the singularities of a Whitney umbrella: \begin{equation} X:\quad \xi^2= \sigma^2 r_{1,2}, \quad \text{where}\quad r_{1,2}=a_{1,1}^2+4 a_{2,2}. \end{equation} As the singularity can be resolved by blowing up the codimension-2 locus $\sigma=\xi=0$ of multiplicity 2, the double cover admits a crepant resolution compatible with the $\mathbb{Z}_2$ involution. \subsubsection{Unitary groups and conifold singularities.} Unitary groups require fibers of type $I^s_k$. In the usual ansatz from Tate's algorithm, they are implemented with the conditions $(m_1,m_2)=(0,1)$, which implies that the double cover obtained at weak coupling, using the Donagi-Wijnholt ansatz, is: \begin{equation}\label{ConifoldSingularity} X:\quad \xi^2= a_{1,0}^2 + 4 \sigma a_{2,1}\,.
\end{equation} This admits conifold singularities in codimension-3 at $\xi=a_{1,0}=\sigma=a_{2,1}=0$. Such singularities admit crepant resolutions. However, these crepant resolutions are not compatible with the double cover: There are in fact two small resolutions which are exchanged by the orientifold action (see appendix \ref{AppConifold}). In contrast, the standard blow-up of the conifold is non-crepant. \subsection{Solving the conifold problem for the Tate form of unitary gauge groups}\label{SolvingConifold} We can solve the conifold problem appearing in the Donagi-Wijnholt ansatz for unitary gauge groups in at least two different ways, each leading to a very different physical picture at weak coupling. This is done by slightly modifying the original Donagi-Wijnholt ansatz. \subsubsection{Replacing the conifolds by quadric cones} The conifold singularities can be removed from the weak coupling limit of F-theory with $SU(n)$ gauge groups by supplementing the Donagi-Wijnholt ansatz with the additional condition $a_1\rightarrow \epsilon a_1$. This gives the following limit: \begin{equation}\label{OrthogonalAnsatz} \begin{cases} a_1\ \rightarrow \epsilon a_1,\\ a_{3, m_3} \rightarrow \epsilon a_{3,m_3}, \\ a_{4, m_4}\rightarrow \epsilon a_{4,m_4}, \\ a_{6, m_6}\rightarrow \epsilon^2 a_{6,m_6}\,. \end{cases} \end{equation} This limit is in fact equivalent to the Donagi-Wijnholt original one, \eqref{DWLimitNonAbelian}, for most Kodaira singularities\footnote{We are grateful to Andr\'es Collinucci for having pointed out this equivalence to us.}. Indeed, the equivalence breaks down precisely when $\sigma$ divides $a_2$ but not $a_1$. By inspecting table \ref{Table.TateForm}, we find that this circumstance is realized for the $SU(n)$ tower only, which is our focus here. 
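The contrast between the conifold and the quadric cone can be made concrete with a quick symbolic check. The sketch below (using sympy; the local coordinates $u,v$ standing for the residual polynomials $a_{1,0}, a_{2,1}$ are our own simplification, with the standard genericity assumption that they can be treated as independent coordinates) verifies that the conifold is singular only at an isolated point, while the quadric cone is singular along a codimension-2 locus:

```python
import sympy as sp

xi, u, s, v = sp.symbols('xi u sigma v')

def singular_locus(F, coords):
    """Points where the gradient of F vanishes and F = 0: the singular locus."""
    sols = sp.solve([sp.diff(F, x) for x in coords], coords, dict=True)
    return [sol for sol in sols if F.subs(sol) == 0]

# Conifold local model xi^2 = u^2 + 4 sigma v (u, v stand for a_{1,0}, a_{2,1}):
# singular only at the isolated point xi = u = sigma = v = 0, codimension 3.
conifold = xi**2 - u**2 - 4*s*v
assert singular_locus(conifold, [xi, u, s, v]) == [{xi: 0, u: 0, s: 0, v: 0}]

# Quadric cone xi^2 = sigma r: the gradient (2 xi, -r, -sigma) vanishes on the
# codimension-2 locus xi = sigma = r = 0, as for the (1,1) Tate forms above.
r = sp.Symbol('r')
cone = xi**2 - s*r
assert singular_locus(cone, [xi, s, r]) == [{xi: 0, s: 0, r: 0}]
```

The isolated conifold point is what obstructs an involution-compatible crepant resolution, whereas the curve of quadric cone singularities admits one.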
The discriminant locus of the Weierstrass model becomes \begin{equation} \Delta \propto h^2 (h \sigma^s b_{6,s} -\sigma^{2m_4} a_{4,m_4}^2)\,\epsilon^2, \end{equation} with $h= 4 \sigma^{m_2} a_{2,m_2}$. For symplectic gauge groups this gives a smooth double cover as $m_2=0$. For Tate form with $(m_1,m_2)=(1,1)$ or $(1,2)$ we recover the same geometry at weak coupling as with the original Donagi-Wijnholt ansatz. However, for unitary gauge groups implemented using Tate forms, the conifold singularities are replaced by the quadric cone singularities as the double cover is now: \begin{equation}\label{QuadricConeSing} X:\quad \xi^2= 4 \sigma a_{2,1}. \end{equation} Such singularities admit a crepant resolution compatible with the double cover. The orientifold locus splits into two components, namely $\xi=\sigma=0$ and $\xi=a_{2,1}=0$. One of these components happens to be the divisor on which the group is defined. It follows that we expect orthogonal groups at weak coupling. At leading order for a group $\mathop{\rm SU}(2n)$, the discriminant is \begin{equation} \Delta \propto\epsilon^2 a_{2,1}^2 \sigma^{2n+2} (4 \sigma a_{2,1} b_{6,2n} - a_{4,n}^2), \end{equation} and for $\mathop{\rm SU}(2n+1)$ \begin{equation} \Delta \propto \epsilon^2 a_{2,1}^2 \sigma^{2n+3} (4 a_{2,1} b_{6,2n} -\sigma a_{4,n}^2). \end{equation} In this limit, the gauge group $\mathop{\rm SU}(k)$ seen at strong coupling becomes an orthogonal group $\mathop{\rm SO}(2k)$ at weak coupling as $\sigma=0$ is a component of the branch locus of the double cover. In the case of $\mathop{\rm SU}(k)$, the power $\sigma^{k+2}$ in the discriminant can be understood as composed of a factor $\sigma^2$ contributing to the orientifold and the leftover $\sigma^{k}$ corresponding to $k$ bibranes\footnote{By a {\em bibrane} we mean a brane-image-brane pair.} on top of the component $\sigma=0$ of the orientifold. This leads to a gauge group $\mathop{\rm SO}(2k)$. 
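The leading behavior of the discriminant under this modified limit can be cross-checked symbolically. The sketch below (sympy, with the standard Weierstrass/Tate discriminant written in terms of the usual $b_i$ combinations; generic coefficients, before imposing any $\sigma$-multiplicities) verifies that the $\epsilon^2$ coefficient is proportional to $a_2^2$, i.e. to $h^2$ with $h=4a_2$, and vanishes on the branch locus:

```python
import sympy as sp

a1, a2, a3, a4, a6, eps = sp.symbols('a1 a2 a3 a4 a6 epsilon')

def tate_discriminant(a1, a2, a3, a4, a6):
    # discriminant of y^2 + a1 x y + a3 y = x^3 + a2 x^2 + a4 x + a6
    b2 = a1**2 + 4*a2
    b4 = a1*a3 + 2*a4
    b6 = a3**2 + 4*a6
    b8 = (b2*b6 - b4**2)/4
    return -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

# orthogonal ansatz: a1 -> eps a1 on top of the Donagi-Wijnholt scaling
Delta = sp.expand(tate_discriminant(eps*a1, a2, eps*a3, eps*a4, eps**2*a6))

# the expansion starts at order eps^2, with a coefficient proportional to a2^2
lead = Delta.coeff(eps, 2)
assert Delta.coeff(eps, 0) == 0 and Delta.coeff(eps, 1) == 0
assert sp.expand(lead + 16*a2**2*(a2*(a3**2 + 4*a6) - a4**2)) == 0
```

Up to overall normalization this reproduces the factorized form quoted above, with the $h^2$ prefactor making the orthogonal interpretation at weak coupling manifest.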
For $\mathop{\rm SU}(2n)$ there is also a singular brane \begin{equation} D:4 \sigma a_{2,1} b_{6,2n} - a_{4,n}^2=0, \end{equation} which becomes \begin{equation} D:4 a_{2,1} b_{6,2n} -\sigma a_{4,n}^2=0, \end{equation} for $\mathop{\rm SU}(2n+1)$. \subsubsection{Replacing the conifold by a suspended pinch point}\label{sppLimit} For many applications, one would like to retrieve a unitary gauge group at weak coupling. Preserving the unitary gauge group in the presence of a $\mathbb{Z}_2$ orientifold requires the presence at weak coupling of a stack of branes not coinciding with its image stack under the orientifold involution. This happens in the conifold geometry because over $\sigma=0$ we get two divisors in the double cover which are images of each other under the involution, namely: \begin{equation} \xi\pm a_1=\sigma=0. \end{equation} We can keep that property while modifying the singularity so that we can have a crepant resolution compatible with the double cover. This would be the case of a double cover of the type \begin{equation} X:\quad \xi^2=u^2+4\sigma^s v,\quad s=2 \quad \text{or}\quad s=3. \end{equation} The simplest choice is $s=2$, which describes a suspended pinch point, also known as a suspended Whitney umbrella. The suspended pinch point can be obtained by using the following modification of the Donagi-Wijnholt ansatz: \begin{equation}\label{SppLimitNonAbelian} \begin{cases} a_{2,1}\rightarrow \epsilon a_{2,1} + \sigma a_{2,2},\\ a_{3, m_3} \rightarrow \epsilon a_{3,m_3}, \\ a_{4, m_4}\rightarrow \epsilon a_{4,m_4}, \\ a_{6, m_6}\rightarrow \epsilon^2 a_{6,m_6}. \end{cases} \end{equation} This leads to a double cover with the singularities of a suspended pinch point: \begin{equation}\label{SppSing} X:\quad \xi^2 = a_1^2 + 4\sigma^2 a_{2,2}. \end{equation} This double cover is singular along the codimension-two locus $\xi=a_1=\sigma=0$. 
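A minimal sympy check (with $u,v$ standing for $a_1$ and $a_{2,2}$, treated as independent local coordinates, which is our own simplifying assumption) confirms that the suspended pinch point is singular along a whole curve, in contrast with the isolated conifold point:

```python
import sympy as sp

xi, u, s, v = sp.symbols('xi u sigma v')

# suspended pinch point local model: xi^2 = u^2 + 4 sigma^2 v
F = xi**2 - u**2 - 4*s**2*v
grad = [sp.diff(F, x) for x in (xi, u, s, v)]

# F and its gradient vanish on xi = u = sigma = 0 for EVERY value of v:
# the singular locus is the codimension-two curve xi = a_1 = sigma = 0.
locus = {xi: 0, u: 0, s: 0}
assert F.subs(locus) == 0 and all(g.subs(locus) == 0 for g in grad)

# by contrast, for the conifold xi^2 = u^2 + 4 sigma v the sigma-derivative
# is -4v, which survives on that locus: the singularity sits only at v = 0.
G = xi**2 - u**2 - 4*s*v
assert sp.diff(G, s).subs(locus) != 0
```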
A viable resolution in this case does exist: Indeed there are three small resolutions of the suspended pinch point, two of which are exchanged by the orientifold involution, while the third one, absent for the conifold, is orientifold invariant (see appendix \ref{AppSpp}). It turns out that the latter is equivalent to the standard ``large'' blow-up of the suspended pinch point singularity. \subsection{Generalities on weak coupling limits for Tate forms}\label{GeneralitiesWCL} In F-theory, the non-Abelian part of the gauge group is completely controlled by the Kodaira type of the singular fiber over components of the discriminant locus and the monodromy around them. As we take the weak coupling limit, the discriminant locus can be deformed and provides a very different spectrum of branes than what is seen in the full F-theory regime. When the weak coupling limit is an orientifold theory, a stack of branes gives a gauge group that can be symplectic, orthogonal or unitary depending on the behavior of the stack with respect to the orientifold symmetry. There are three cases to consider\footnote{We will not discuss the presence of $U(1)$ factors.}: \begin{enumerate} \item Symplectic groups: the stack of D7 branes is supported on a divisor invariant under the involution but not pointwise invariant. \item Orthogonal groups: the stack of D7 branes is supported on a divisor pointwise invariant under the orientifold involution. \item Unitary groups: the stack of D7 branes is supported on a divisor which admits a distinct orientifold image. 
\end{enumerate} Assuming that at weak coupling we have a stack of $r$ branes over the divisor $\underline{\Sigma}:\sigma=0$ in the base, the discriminant locus is of the following form at leading order in the deformation parameter of the weak coupling limit: \begin{equation} \Delta\propto h^2 \sigma^r (\dots), \end{equation} where $h=0$ is the branch locus of the double cover that defines the orientifold theory: \begin{equation} X:\quad \xi^2=h, \quad f=-3 h^2 +\dots \end{equation} Here $h$ could contain factors of $\sigma$ as well. We denote by $h_\sigma$ the restriction of $h$ to the divisor $\sigma=0$: \begin{equation} h_\sigma:=h\Big|_{\sigma=0}. \end{equation} The gauge group associated with the stack depends on $r$ and on the factorization properties of $\xi^2=h_\sigma$. This is reviewed in table \ref{table.stackPB}. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|} \hline Property of $h_\sigma$ & Gauge group \\ \hline $h_\sigma$ {is identically zero} & $\mathop{\rm SO}(2r)$ \\ \hline $h_\sigma$ {is a perfect square} & $\mathop{\rm SU}(r)$ \\ \hline $h_\sigma$ {is not a perfect square} & $\mathop{\rm {}Sp}\Big(\big\lfloor \frac{r}{2}\big\rfloor\Big)$\\ \hline \end{tabular} \end{center} \caption{ The discriminant locus at weak coupling is $\Delta\propto h^2 \sigma^r (\cdots)$ and $h_\sigma:=h\Big|_{\sigma=0}.$}\label{table.stackPB} \end{table} Supposing that $\Delta\propto h^2 \sigma^r (\cdots)$, we get a unitary, orthogonal or symplectic gauge group by the following ans\"atze for $h$: \begin{subequations}\label{wclr} \begin{align} h:= u^2 + \sigma^k v & \quad \Longrightarrow\quad \mathop{\rm SU}(r), \\ h := \sigma^k v & \quad \Longrightarrow\quad \mathop{\rm SO}(2r),\\ h\ \text{generic} & \quad \Longrightarrow \quad \mathop{\rm {}Sp}\Big(\big\lfloor \tfrac{r}{2}\big\rfloor\Big), \end{align} \end{subequations} where $\lfloor x \rfloor$ denotes the integral part of $x$. 
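The rules of table \ref{table.stackPB} can be phrased as a small algorithm. The sketch below (sympy; the helper names and the rational-square test are our own illustrative choices, not part of the text) classifies $h_\sigma$ and returns the corresponding weak coupling gauge group:

```python
import sympy as sp

u, v, s = sp.symbols('u v sigma')

def is_perfect_square(p):
    """True if the polynomial p is a square (up to a rational square constant):
    every irreducible factor must occur with even multiplicity."""
    const, factors = sp.factor_list(sp.expand(p))
    return bool(sp.sqrt(const).is_rational) and all(m % 2 == 0 for _, m in factors)

def stack_group(h, r):
    """Gauge group of a stack of r branes on sigma = 0, from h_sigma = h|_{sigma=0}."""
    h_sigma = sp.expand(h.subs(s, 0))
    if h_sigma == 0:                 # sigma = 0 lies inside the branch locus
        return "SO(%d)" % (2*r)
    if is_perfect_square(h_sigma):   # stack splits from its image stack
        return "SU(%d)" % r
    return "Sp(%d)" % (r // 2)       # generic, invariant but not pointwise

assert stack_group(u**2 + s**2*v, 4) == "SU(4)"   # h = u^2 + sigma^k v
assert stack_group(s*v, 4) == "SO(8)"             # h = sigma^k v
assert stack_group(u**2 + v, 4) == "Sp(2)"        # h generic
```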
\subsection*{Symplectic case: $\xi^2=h$} The double cover $\xi^2=h$ with $h$ general leads to a symplectic gauge group and is smooth. This is the generic case. \subsection*{Orthogonal case: $\xi^2=\sigma^k v$} The double cover $\xi^2=\sigma^k v$ leads to an orthogonal gauge group over $\sigma=0$. The rank of the orthogonal group depends on the multiplicity of the discriminant locus. The double cover is singular whenever $k>0$. The singularities are in codimension-1 if $k>1$ and codimension-2 if $k=1$. They admit an admissible crepant resolution for $0 \leq k\leq 3$. This is reviewed in appendix \ref{AdmissibleResolutions}. \subsection*{Unitary case: $\xi^2=u^2+\sigma^k v$} The double cover $X: \xi^2=u^2+\sigma^k v$ will give a unitary gauge group for the stack over $\sigma=0$ since the divisor $\sigma=0$ of the base pulls back to two distinct divisors in the double cover, namely $D_{\sigma_\pm}: \sigma=\xi\pm u=0$. The group at weak coupling will be $\mathop{\rm SU}(r)$ if the leading term of the discriminant is of the type $\Delta\propto h^2 \sigma^r (\cdots)$ where $h=u^2+\sigma^k v$. The question is whether $X$ admits an admissible crepant resolution. If $k>1$, the singularities are in codimension-2. If $k=1$, they jump to codimension-3 and correspond to conifold singularities. Such conifold singularities do not admit a crepant resolution compatible with the involution of the double cover. If $k=2$ or $k=3$, the double cover admits an admissible crepant resolution. \subsection{Brane spectrum at weak coupling for the DW and the orthogonal limit}\label{BraneContentDW} For a weak coupling limit compatible with Sen's limit, the discriminant locus at leading order is \begin{equation}\label{LeadingDiscriminantDW} \Delta\propto\epsilon^2 h^2 b_8= \epsilon^2 h^2 (h\sigma^s b_{6, s}-\sigma^{2q} b_{4,q}^2)+O(\epsilon^3). 
\end{equation} Taking $r=min(s, 2q)$, we have the following brane content: $$ \begin{cases} \text{an orientifold at $h=0$,}\\ \text{a stack of $r$ branes over $\sigma=0$,}\\ \text{a singular brane $ (h\sigma^{s-r} b_{6, s}-\sigma^{2q-r} b_{4,q}^2)=0$.} \end{cases} $$ \begin{table}[htb] \begin{tabular}{|c|c|c|r c|} \hline Type & $j$ &F theory & \multicolumn{2}{c|}{DW limit / Quadric cone } \\ \hline $I_2$ & $\infty$ & $\mathop{\rm SU}(2)$ & \multicolumn{2}{c|}{$\mathop{\rm SU}(2)$ } \\ \hline $I_3^{ns}$ & $\infty$ & $\mathop{\rm {}Sp}(1)$ & \multicolumn{2}{c|}{ $\mathop{\rm {}Sp}(1)$} \\ \hline $ I^s_3$ & $\infty$ & $\mathop{\rm SU}(3)$ & \multicolumn{1}{|c|}{\small $\mathop{\rm SU}(3)$} & \multicolumn{1}{|c|}{\small $\mathop{\rm SO}(6)$} \\ \hline $I^{ns}_{2n}$ &$\infty$ & $\mathop{\rm {}Sp}(n)$ & \multicolumn{2}{c|}{$\mathop{\rm {}Sp}(n)$ } \\ \hline $I^s_{2n}$ & $\infty$ & $\mathop{\rm SU}(2n)$ & \multicolumn{1}{|c|}{\small $\mathop{\rm SU}(2n)$} & \multicolumn{1}{|c|}{\small $\mathop{\rm SO}(4n)$}\\ \hline $I^{ns}_{2n+1}$ & $\infty$ & $\mathop{\rm {}Sp}(n)$ & \multicolumn{2}{c|}{$\mathop{\rm {}Sp}(n)$ }\\ \hline $I^{s}_{2n+1}$ & $\infty$ &\small $\mathop{\rm SU}(2n+1)$ & \multicolumn{1}{|c|}{\small $\mathop{\rm SU}(2n+1)$} & \multicolumn{1}{|c|}{\small $\mathop{\rm SO}(4n+2)$}\\ \hline $II$ & $0$ & $-$ & \multicolumn{2}{c|}{$\mathop{\rm SO}(4)$ } \\ \hline $III$ & \footnotesize $1728$ & $\mathop{\rm SU}(2)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(4)$ }\\ \hline $IV^{ns}$ & $0$ & $\mathop{\rm {}Sp}(1)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(6)$ } \\ \hline $IV^s$ & $0$ & $\mathop{\rm SU}(3)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(6)$ }\\ \hline $I^{*ns}_0$ & $\infty$ & $G_2$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(8)$ } \\ \hline $I^{*ss}_0$ & $\infty$ & $\mathop{\rm SO}(7)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(8)$ }\\ \hline $I^{*s}_0$ & $\infty$ & $\mathop{\rm SO}(8)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(8)$ } \\ \hline $I^{*ns}_1$ &$\infty$ & 
$\mathop{\rm SO}(9)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(10)$ } \\ \hline $I^{*s}_1$ & $\infty$ &$\mathop{\rm SO}(10)$ &\multicolumn{2}{c|}{ $\mathop{\rm SO}(10)$ }\\ \hline $I^{*ns}_2$ &$\infty$ & $\mathop{\rm SO}(11)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(12)$ }\\ \hline $I^{*s}_2$ & $\infty$ &$\mathop{\rm SO}(12)$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(12)$}\\ \hline $I^{*ns}_{2n-3}$ &$\infty$ & \small $\mathop{\rm SO}(4n+1)$& \multicolumn{2}{c|}{\small $\mathop{\rm SO}(4n+2)$ } \\ \hline $I^{*s}_{2n-3}$ & $\infty$ & \small $\mathop{\rm SO}(4n+2)$ & \multicolumn{2}{c|}{\footnotesize$SO(4n+2)$ } \\ \hline $I^{*ns}_{2n-2}$ &$\infty$ & \small $SO(4n+3)$ & \multicolumn{2}{c|}{\small $\mathop{\rm SO}(4n+4)$ } \\ \hline $I^{*s}_{2n-2}$ & $\infty$ &\small $SO(4n+4)$ & \multicolumn{2}{c|}{\small $\mathop{\rm SO}(4n+4)$ } \\ \hline $IV^{*ns}$ &$0$ & $F_4$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(12)$ } \\ \hline $IV^{*s}$ & $0$ &$E_6$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(12)$ } \\ \hline $III^*$ &\footnotesize $1728$ &$E_7$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(12)$ } \\ \hline $II^*$ &$0$ & $E_8$ & \multicolumn{2}{c|}{ $\mathop{\rm SO}(14)$ } \\ \hline \end{tabular} \caption{Groups at weak coupling using the DW ansatz or the orthogonal ansatz. There is a difference only for fibers $I^s_k$ ($k>2$) which gives $\mathop{\rm SU}(k)$ in F-theory and also in the DW limit, but $\mathop{\rm SO}(2k)$ in the orthogonal limit. \label{table.groups.DW} } \end{table} Using the rules explained in \eqref{wclr}, we compute in table \ref{table.groups.DW} the gauge group in the weak coupling limit for each Weierstrass model with a given singularity over a divisor $\sigma=0$ implemented by the Tate form. We use and compare the Donagi-Wijnholt ansatz \eqref{DWLimitNonAbelian} and the orthogonal ansatz \eqref{OrthogonalAnsatz} to take the weak coupling limit. 
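The general leading behavior \eqref{LeadingDiscriminantDW}, on which the entries of table \ref{table.groups.DW} ultimately rely, can itself be cross-checked symbolically. The sketch below (sympy, generic coefficients, no $\sigma$-multiplicities imposed; with the standard conventions for the $b_i$, in which $b_8$ carries a factor $\tfrac14$) verifies $\Delta = -\epsilon^2 h^2 b_8 + O(\epsilon^3)$ with $h=b_2$ under the Donagi-Wijnholt scaling:

```python
import sympy as sp

a1, a2, a3, a4, a6, eps = sp.symbols('a1 a2 a3 a4 a6 epsilon')

def tate_discriminant(a1, a2, a3, a4, a6):
    # discriminant of y^2 + a1 x y + a3 y = x^3 + a2 x^2 + a4 x + a6
    b2 = a1**2 + 4*a2
    b4 = a1*a3 + 2*a4
    b6 = a3**2 + 4*a6
    b8 = (b2*b6 - b4**2)/4
    return -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

# Donagi-Wijnholt scaling (a3, a4, a6) -> (eps a3, eps a4, eps^2 a6)
Delta = sp.expand(tate_discriminant(a1, a2, eps*a3, eps*a4, eps**2*a6))

h = a1**2 + 4*a2                                  # branch locus of xi^2 = h
b8 = (h*(a3**2 + 4*a6) - (a1*a3 + 2*a4)**2)/4     # reduced (eps-stripped) b8
assert Delta.coeff(eps, 0) == 0 and Delta.coeff(eps, 1) == 0
assert sp.expand(Delta.coeff(eps, 2) + h**2*b8) == 0
```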
By definition of the weak coupling limit, the $j$-invariant is generically going to infinity over the base so that the string coupling goes to zero. However, the $j$-invariant can still be finite over certain sub-loci of the base. A natural question is whether the coupling is actually small over the divisor $\sigma=0$ on which the gauge group is implemented by the Tate form. The appearance of orthogonal gauge groups in the Donagi-Wijnholt weak coupling limit of exceptional singularities (II, III, IV and their duals) deserves some comment. In particular, the gauge groups of the E series fall into this category. These cases are indeed special with respect to all the others, as the value of the $j$-invariant is finite on the locus $\sigma=0$ where the singularity is implemented in F-theory. This fact makes open strings ending there intrinsically strongly coupled, and therefore the presence of the listed gauge symmetries is questionable. We have deduced them first by looking at the order of vanishing of the $b_8$ factor in the leading term of the discriminant \eqref{LeadingDiscriminantDW}; second, by remembering that for these cases $h=\sigma^2(a_{1,1}^2+4a_{2,2})$ and thus $\sigma=0$ is a branch of the O7-plane $h=0$. The same situation actually arises when taking the orthogonal limit of the fibers $I_2$ and $I_3^s$. This is not surprising, because the orthogonal limit makes the stack and the image-stack degenerate onto the O7-plane and the $I_2, I_3^s$ fibers have special enhancements on the orientifold, namely to $III, IV^s$ respectively. The results we derive in table \ref{table.groups.DW} for the gauge groups at weak coupling do not actually depend on the dimension of the F-theory compactification manifold and are purely based on geometrical facts. Their string interpretation at weak coupling, though, is puzzling. However, there is one case where we do have a reliable open string picture to compare those predictions with, and this is when we compactify F-theory on K3. 
Here, indeed, 7-branes are not intersecting and we have the technology of the so-called ``A-B-C'' branes \cite{Johansen:1996am,Gaberdiel:1997ud} at hand to identify the BPS states responsible for the gauge symmetry. In particular, the group E$_8$ is realized \cite{Johansen:1996am} via the bound state $A^7BC^2$, where the group $BC$ has the monodromy of an O7-plane. By higgsing one of the C-branes, we immediately realize the appearance of the $SO(14)$ group in perturbation theory using only the open fundamental strings ending on the A-branes, possibly winding around the O7-plane $BC$ (this is the result of the perturbative enhancement of the manifest $SU(7)$ group). We can repeat the same reasoning for E$_7$, realized as $A^6BC^2$, and we deduce the perturbative group $SO(12)$, in agreement with what we found here. However, the agreement does not seem to exist for E$_6$, realized as $A^5BC^2$, where a perturbative subgroup $SO(10)$ appears, rather than the $SO(12)$ deduced from the rank of the discriminant. This might be due to the fact that the two weak coupling limits to type IIB on $T^2/Z_2$ we are comparing are inequivalent. Moreover, for compactifications of F-theory on higher dimensional manifolds, the interpretation in terms of A-B-C branes is no longer possible in a globally well-defined way, due to 7-brane intersections. Yet the computation done for table \ref{table.groups.DW} is still valid, as it does not depend on the dimension. Therefore a comparison analogous to the one above can only be done locally\footnote{See \cite{Bonora:2010bu}, where this kind of local analysis has been used to identify the string-junction states in the adjoint of non-simply-laced gauge groups, which are not realizable in F-theory on K3 due to the absence of monodromies.} (away from the loci of symmetry enhancement). We hope to come back to these issues in a future work. 
\section{The type IIB Calabi-Yau threefold: Suspended pinch point case}\label{resCY3spp} In this section we discuss the smooth background where type IIB strings are actually living at weak coupling. We address here the case where the weak coupling limit gives us a singular geometry of the suspended pinch point type for the type IIB Calabi-Yau threefold (see section \ref{sppLimit}). We first provide a mathematical description of the resolution procedure and afterwards discuss the physics of the ensuing smooth geometry. From now on we restrict our attention to elliptic fibrations which are Calabi-Yau and thus impose $c_1(B)=c_1(\mathscr{L})$. We will write $c_1$ for $c_1(B)$. We also restrict the base to complex dimension 3. \subsection{Description of the resolution}\label{MathematicalProperties} In order to blow up the singular Calabi-Yau threefold \eqref{SppSing} along the curve $\sigma=\xi=a_1=0$, we proceed using toric methods (see \cite{Collinucci:2010gz,Collinucci:2012as} for the same methods applied to elliptic fourfolds). We first add to the ambient four-dimensional manifold two homogeneous coordinates, $s$ and $a$, together with two new equations. The singular Calabi-Yau threefold will thus be expressed as the following system of equations \begin{equation} X_3: \begin{cases} \xi^2&= a^2+s^2\,a_{2,2}\\ s&=\sigma \\ a&=a_1\,, \end{cases} \end{equation} where we have reabsorbed the irrelevant factor of $4$ into $a_{2,2}$. We then introduce yet another homogeneous coordinate, $w$, together with the following projective weight assignment \begin{equation}\label{projectWeights} \begin{array}{cccc}s&a&\xi&w\\ \hline 1&1&1&-1\end{array}\,. \end{equation} This will produce an element in the Stanley-Reisner ideal of the ambient six-fold of the form $\xi\,s\,a$: Now these three coordinates cannot simultaneously vanish. 
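The role of this new Stanley-Reisner element can be made concrete with a short symbolic computation. This is a sketch (sympy; $v$ stands for $a_{2,2}$, treated as an independent local coordinate, and the factor of $4$ is reabsorbed as in the text): the only locus where the double cover equation fails to be smooth is precisely the set removed by $\xi\,s\,a$, and the blow-down substitution factors out an overall $w^2$:

```python
import sympy as sp

xi, a, s, w, v = sp.symbols('xi a s w v')

# strict transform of the double cover: xi^2 = a^2 + s^2 v
F = xi**2 - a**2 - s**2*v
grad = [sp.diff(F, x) for x in (xi, a, s, v)]

# F and its gradient vanish (for every v) only on xi = a = s = 0,
# which is exactly the set removed by the Stanley-Reisner element xi*s*a.
locus = {xi: 0, a: 0, s: 0}
assert F.subs(locus) == 0 and all(g.subs(locus) == 0 for g in grad)

# blow-down consistency: substituting sigma = w*s, a_1 = w*a into the
# singular equation gives w^2 times the resolved one, so xi_sing = w*xi.
total = (w*xi)**2 - (w*a)**2 - (w*s)**2*v
assert sp.expand(total - w**2*F) == 0
```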
The resolved Calabi-Yau threefold will then appear as the following complete intersection in the ambient six-dimensional manifold \begin{equation}\label{resSPP} \tilde{X}_3 : \begin{cases} \xi^2 &=a^2+s^2 a_{2,2}\\ ws &=\sigma \\ w a &=a_1 \end{cases} \end{equation} This is now a perfectly smooth manifold, still invariant under the orientifold involution\footnote{In this case one can alternatively define the orientifold involution by reversing the sign of $s,a,w$ at the same time. This is clearly gauge-equivalent to sending $\xi\to-\xi$.}. This is where type IIB strings are supposed to live at weak coupling. Let us study some of the properties of this new geometry. The stack of D7-branes and its orientifold image are described by the following systems \begin{equation} {\rm D7}_\pm: \begin{cases} \xi &= \pm1\\ s&=0 \\ \sigma&=0\\ w&=a_1 \end{cases} \end{equation} where we have fixed the gauge associated with the projective scaling \eqref{projectWeights} by putting $a=1$. While in the singular geometry the D7-stack was intersecting its image in a curve, in the resolved geometry they clearly do not touch each other. They also do not touch the orientifold plane, which is the surface \begin{equation} \label{O-plane} {\rm O7}: \begin{cases} \xi&=0\\ a^2&=-a_{2,2}\\ w\,a&=a_1 \\ w&=\sigma \end{cases} \end{equation} where again we have conveniently fixed the gauge by putting $s=1$. We easily recognize from \eqref{O-plane} a surface wrapping the divisor $\{a_1^2+\sigma^2a_{2,2}=0\}$ of the original base $B_3$, as it should be. On the other hand a new divisor appears, which replaces the former curve of singularities. This is the exceptional divisor \begin{equation}\label{ExceptionalSpp} E: \begin{cases} w&=0\\ a_1&=0 \\ \sigma&=0\\ \xi^2&=a^2+s^2\,a_{2,2} \end{cases} \end{equation} which is orientifold-invariant. It corresponds to a $\mathbb{P}^1$ with homogeneous coordinates $a,s$, fibered over the locus $\{\sigma=0\}\cap\{a_1=0\}\subset B_3$. 
It interpolates between the D7-stack and the O-plane by intersecting both of them respectively in the following \emph{distinct} points of the fiber $\mathbb{P}^1_{s a}$: $(s,a)=(0,1)$ and $(s,a)=(1,p)$ such that $p^2=-a_{2,2}$. The base $\tilde{B_3}$ onto which the resolved Calabi-Yau threefold $\tilde{X_3}$ projects is the original base $B_3$ blown up along the curve $\{\sigma=0\}\cap\{a_1=0\}$. In other words, $\tilde{X}_3$ can be seen as the double cover of the blown-up base $\tilde{B_3}$ defined by the set of equations \begin{equation}\label{resolvedBase} \tilde{B}_3: \begin{cases} w\,s&=\sigma \\ w a&=a_1\,.\end{cases} \end{equation} in an ambient five-dimensional manifold given by adding to $B_3$ the following three homogeneous coordinates with a projective weight assignment \begin{equation}\label{projectWeightsBase} \begin{array}{ccc}s&a&w\\ \hline 1&1&-1\end{array}\,. \end{equation} Hence $s$ and $a$ cannot vanish at the same time and the exceptional locus is $\mathbb{P}^1_{s a}$ fibered over the curve $\{\sigma=a_1=0\}$. Before discussing the features of the suspended pinch point from the physics perspective, a comment is in order. The first Chern class of the new base can be expressed in terms of the one of the old base as follows \begin{equation}\label{nonCrepant} c_1(\tilde{B}_3)=\phi^*c_1(B_3)-E\,, \end{equation} where $\phi:\tilde{B}_3\to B_3$ is the blow-down map and $E$ is the class of $\{w=0\}$. This map is not crepant, even though the corresponding map between the double covers is. Therefore it may happen that starting from a spin base we end up having a non-spin base after the blow-up. Let us show this circumstance in a concrete example. Take $B_3=\mathbb{P}^3$, which is clearly spin, and call $H$ its hyperplane class. We want to blow up this manifold along the curve $\{x_1=0\}\cap\{a_1=0\}$ where $x_1$ is one of the homogeneous coordinates of $\mathbb{P}^3$ and $a_1$ is a polynomial in $\mathbb{P}^3$ of class $4H$. 
Hence, the blown-up threefold $\tilde{B}_3$ will be given by the hypersurface of class $4H$ \begin{equation}\label{EqBTilde} wa=a_1(wx_1,x_2,x_3,x_4)\,, \end{equation} in the ambient four-dimensional toric manifold defined by \begin{equation} X_{4}:\begin{array}{cccccc}x_1&x_2&x_3&x_4&a&w\\ \hline 1&1&1&1&4&0\\ 1&0&0&0&1&-1\end{array} \end{equation} The Stanley-Reisner ideal of this ambient variety is generated by the two elements $x_1\,a$ and $x_2\,x_3\,x_4\,w$. We now want to prove that $\tilde{B}_3$ is non-spin. To this end, we have to find at least one 2-cycle on which the first Chern class integrates to an odd number. It turns out that this 2-cycle is not manifest at a generic point of the moduli space of $\tilde{B}_3$. But if we constrain the complex structure moduli in a suitable way we are able to write this 2-cycle as a set of three algebraic equations in the ambient fourfold which automatically satisfy \eqref{EqBTilde} (see \cite{Braun:2011zm,Collinucci:2009uh}, where similar techniques are used for elliptic fourfolds). One possible constraint which works is the following \begin{equation} a_1=x_2\,\hat{a}_1+x_3\,\tilde{a}_1\,, \end{equation} where $\hat{a}_1,\tilde{a}_1$ are both polynomials of class $3H$. Now consider the following non-complete intersection 2-cycle \begin{equation} C_{(2)}:\left\{\begin{array}{rcl}x_2&=&0\\ x_3&=&0 \\ w&=&0\,.\end{array}\right. \end{equation} The integral of the first Chern class of $\tilde{B}_3$ on $C_{(2)}$ is \begin{equation} \int_{C_{(2)}}c_1(\tilde{B}_3)=\int_{X_4}(4H-E)\,E\,H^2\;=\;1\,. \end{equation} Here we have used the following intersection numbers of the ambient fourfold \begin{equation} H^4=\frac{1}{4}\;,\;H^3E=0\;,\;H^2E^2=-1\;,\;HE^3=-5\;,\;E^4=-21\,. \end{equation} \subsection{Physical properties}\label{PhysicalProperties} The hope is now to use the resolved Calabi-Yau geometry \eqref{resSPP} as the target space for weakly coupled type IIB strings. 
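Incidentally, the non-spin computation above reduces to a one-line lookup once the intersection numbers are tabulated; a sympy sketch:

```python
import sympy as sp

H, E = sp.symbols('H E')

# intersection numbers of the ambient fourfold, as quoted in the text
inters = {H**4: sp.Rational(1, 4), H**3*E: 0, H**2*E**2: -1, H*E**3: -5, E**4: -21}

def integrate(expr):
    """Evaluate a degree-4 intersection class by looking up each monomial."""
    return sum(c*inters[m] for m, c in sp.expand(expr).as_coefficients_dict().items())

# c1(B3~) = 4H - E integrated over the 2-cycle C(2), of class E*H^2:
assert integrate((4*H - E)*E*H**2) == 1   # odd, hence B3~ is not spin
```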
Their perturbative and non-perturbative dynamics should effectively reproduce the strongly coupled physics of the corresponding F-theory configuration at each codimension in the base: Gauge degrees of freedom at codimension one, matter degrees of freedom at codimension two and Yukawa-type interactions at codimension three, in the spirit of the paradigm of model building in F-theory \cite{Vafa:2009se}. In order to see to what extent the new geometry we have obtained realizes all that, let us specify two unitary F-theory configurations and work with them throughout the rest of the section. We choose an $SU(4)$ and an $SU(5)$ model, since they display different properties of Yukawa couplings, as will become clear in a moment. Also, they are the lowest-rank representatives of the even and odd unitary series with more familiar enhancements: $SU(2)$ enhances to the Kodaira singularity III and $SU(3)$ to IV$^s$ on the O7-plane \cite{GM1}. \subsubsection{SU(4)} In order to identify the relevant objects at weak coupling, we have to study the behavior of the discriminant of the elliptic fibration as $\epsilon$ goes to $0$. Let us do that for both the Donagi-Wijnholt limit \eqref{DWLimitNonAbelian} and the new limit \eqref{SppLimitNonAbelian} and compare the two situations. We use the Tate form of $SU(4)$ as expressed in table \ref{Table.TateForm} for a fiber of type $I^s_4$: That means $(a_1,a_2, a_3, a_4, a_6)$ have multiplicities $(0,1,2,2,4)$ along $\sigma=0$: \begin{equation} a_2=\sigma a_{2,1}, \quad a_3=a_{3,2}\sigma^2, \quad a_4=a_{4,2} \sigma^2, \quad a_6=a_{6,4}\sigma^4. 
\end{equation} By applying the Donagi-Wijnholt limit, we obtain \begin{equation}\label{LeadingSen} \Delta|_{\rm DW}\sim\left[a_1^2+4\sigma a_{2,1}\right]^2\,\sigma^4\,\left[a_{4,2}^2+a_1(a_{3,2}a_{4,2}-a_1a_{6,4})-\sigma a_{2,1}b_{6,4}\right]\;\epsilon^2\,. \end{equation} The discriminant is factorized into three pieces whose vanishing loci respectively represent the O7-plane, the D7-stack hosting the $SU(4)$ gauge group and the Whitney umbrella D7-brane. We have two relevant matter curves here\footnote{To be more precise, one has to look at the full discriminant, which has the form $\sigma^4\,I_1$, the latter factor being a recombined 7-brane with $U(1)$ gauge group, responsible for canceling the tadpole. On $\sigma=0$, $I_1$ factorizes into two branches which are the matter curves discussed below.\label{FullDelta}}: \begin{equation} {\rm\bf 6}\;:\quad\begin{cases}\sigma=0\\ a_1=0\end{cases}\qquad,\qquad{\rm\bf 4}\;:\quad\begin{cases} \sigma=0\\ a_{4,2}^2+a_1(a_{3,2}a_{4,2}-a_1a_{6,4})=0\,.\end{cases} \end{equation} This is also consistent with the result of \cite{GM1}. The first hosts matter fields transforming in the {\bf 6}, the antisymmetric representation of $SU(4)$, which originates from the symmetry enhancement to the $SO(7)$ group\footnote{This enhancement is slightly more generic than the $SO(8)$ we would expect from string theory, as the latter would require an additional factorization condition, which kills the monodromies. As usual, the ${\rm\bf 6}$ of $SU(4)$ arises from the decomposition of the adjoint of $SO(7)$, i.e. ${\rm\bf 21}={\rm\bf 15}+{\rm\bf 6}$.}. The second accommodates matter fields transforming in the {\bf 4}, the fundamental representation of $SU(4)$, which originates from the symmetry enhancement to $SU(5)$. 
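Equation \eqref{LeadingSen} can be verified directly with sympy, expanding the Tate discriminant for the $I^s_4$ multiplicities and extracting the $\epsilon^2$ term. This is a sketch, with $\sigma$ written as `sigma`; in the standard discriminant conventions the match is exact:

```python
import sympy as sp

a1, a21, a32, a42, a64, s, eps = sp.symbols('a1 a21 a32 a42 a64 sigma epsilon')

def tate_discriminant(a1, a2, a3, a4, a6):
    # discriminant of y^2 + a1 x y + a3 y = x^3 + a2 x^2 + a4 x + a6
    b2 = a1**2 + 4*a2
    b4 = a1*a3 + 2*a4
    b6 = a3**2 + 4*a6
    b8 = (b2*b6 - b4**2)/4
    return -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

# I_4^s multiplicities (0,1,2,2,4) with the Donagi-Wijnholt scaling applied
Delta = sp.expand(tate_discriminant(
    a1, s*a21, eps*s**2*a32, eps*s**2*a42, eps**2*s**4*a64))

h = a1**2 + 4*s*a21
b64 = a32**2 + 4*a64
target = h**2 * s**4 * (a42**2 + a1*(a32*a42 - a1*a64) - s*a21*b64)
assert sp.expand(Delta.coeff(eps, 2) - target) == 0
```

The three factors of `target` are the O7-plane, the $SU(4)$ stack and the remaining 7-brane, as in the text.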
These two matter curves intersect in points of the D7-stack worldvolume where the further symmetry enhancement is expected to occur to accommodate the coupling \begin{equation}\label{YukawaSU4} {\rm\bf 6}\,\bar{\rm\bf 4}\,\bar{\rm\bf 4}:\begin{cases}\sigma=0\\ a_1=0\\ a_{4,2}=0 \end{cases} \end{equation} Actually, since \emph{all} the enhancements of $SU(4)$ are either of unitary or of orthogonal type, we should have good chances of finding an effective description of this physics within the realm of weakly coupled type IIB string theory, possibly including D-instanton effects in order to reproduce certain perturbatively forbidden Yukawa interactions. The situation is different for $SU(5)$, as we will see shortly. Let us now use the new limit \eqref{SppLimitNonAbelian} and expand the discriminant accordingly \begin{equation}\label{LeadingSpp} \Delta|_{\rm spp}\sim \left[a_1^2+\sigma^2a_{2,2}\right]^2\,\sigma^4\,\left[a_{4,2}^2+a_1(a_{3,2}a_{4,2}-a_1a_{6,4})-\sigma^2a_{2,2}b_{6,4}\right]\;\epsilon^2\,. \end{equation} As one immediately sees, all the relevant features of the $SU(4)$ model are kept intact, since the pattern of intersections and enhancements is unchanged. However, what we should really be looking at is the discriminant after the blow-up of $B_3$, namely the proper transform of \eqref{LeadingSpp}. Recall that the proper transform of the polynomial defining the D7-stack $\sigma$ is $s$ and the class of the divisor $\{s=0\}$ is $\mathcal{D}-E$ with $\mathcal{D}$ the class of $\{\sigma=0\}$ and $E$ the exceptional class. The proper transform of $a_1$ is $a$, of class given by \eqref{nonCrepant}. 
Thus we have \begin{equation}\label{LeadingSPPpt} \hat{\Delta}|_{\rm spp}\sim \left[a^2+s^2a_{2,2}\right]^2\,s^4\,\left[a_{4,2}^2+a w(a_{3,2}a_{4,2}-a w a_{6,4})-w^2s^2a_{2,2}b_{6,4}\right]\;\epsilon^2\,. \end{equation} Notice that we are providing here a smooth target space for type IIB strings at weak coupling: This is the resolved Calabi-Yau threefold defined in \eqref{resSPP}. This manifold projects onto the base \eqref{resolvedBase}, which is \emph{different} from the base of the elliptic fibration we started with (it is connected to it by blow-down). As a consequence, \eqref{LeadingSPPpt} is not to be regarded as the discriminant of a Calabi-Yau elliptic fibration. Indeed, the $U(1)$ D7-brane which is not touched by the blow-up and is supposed to cancel the total charge of O7-plane and D7-stack (last piece in \eqref{LeadingSPPpt}) no longer has the right degree to do so. This means that to preserve the D7 tadpole in the type IIB theory on the resolved $\tilde{X}$, the spectrum of branes is not given by the proper transform of the discriminant. There is an additional contribution to the D7 charge required to satisfy the D7 tadpole. This contribution is $4 E$. Since the D7 tadpole is equivalent to the Calabi-Yau condition for an elliptic fibration that would admit $\tilde{X}$ as its weak coupling limit, we can also consider the following scenario. We will now construct a Calabi-Yau elliptic fibration with base $\tilde{B}_3$. We also impose that at weak coupling it admits the double cover $\tilde{X}$ as in equation \eqref{resSPP} and an $\mathop{\rm SU}(4)$ stack over $\tilde{\mathcal{D}}:s=0$, the proper transform of $\mathcal{D}$. However, the $U(1)$ brane will not coincide with the one in \eqref{LeadingSpp}; it will instead have the proper degree to ensure that the elliptic fibration is Calabi-Yau and therefore automatically satisfies the D7 tadpole. 
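Both \eqref{LeadingSpp} and the extraction of the overall $w^8$ that produces \eqref{LeadingSPPpt} can be checked with sympy. This is a sketch under two stated simplifications: the $\epsilon\,\sigma a_{2,1}$ deformation is dropped, since it only enters at higher order in $\epsilon$ in the terms checked here, and the factor of $4$ multiplying $a_{2,2}$, which the text reabsorbs, is kept explicit:

```python
import sympy as sp

a1, a22, a32, a42, a64, s, w, a, eps = sp.symbols(
    'a1 a22 a32 a42 a64 sigma w a epsilon')

def tate_discriminant(a1, a2, a3, a4, a6):
    b2 = a1**2 + 4*a2
    b4 = a1*a3 + 2*a4
    b6 = a3**2 + 4*a6
    b8 = (b2*b6 - b4**2)/4
    return -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6

# spp scaling with the a21 deformation already dropped: a2 = sigma^2 a22
Delta = sp.expand(tate_discriminant(
    a1, s**2*a22, eps*s**2*a32, eps*s**2*a42, eps**2*s**4*a64))

h = a1**2 + 4*s**2*a22
b64 = a32**2 + 4*a64
lead = h**2 * s**4 * (a42**2 + a1*(a32*a42 - a1*a64) - s**2*a22*b64)
assert sp.expand(Delta.coeff(eps, 2) - lead) == 0      # reproduces (LeadingSpp)

# proper transform: sigma = w*s, a1 = w*a extracts exactly w^8,
# leaving the hatted discriminant of (LeadingSPPpt)
pt = sp.expand(lead.subs(a1, w*a).subs(s, w*s))
hat = (a**2 + 4*s**2*a22)**2 * s**4 * (
    a42**2 + a*w*(a32*a42 - a*w*a64) - w**2*s**2*a22*b64)
assert sp.expand(pt - w**8*hat) == 0
```

The mismatch between the class of `hat` and that of a Calabi-Yau discriminant on $\tilde B_3$ is the degree deficit discussed above, compensated by the $4E$ contribution.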
In order to impose the proper stack, we use the Tate form for a fiber $I^s_4$ over the divisor $\tilde{\mathcal D}=\mathcal{D}-E: s=0$. The coefficients of the Tate form are given by: \begin{equation} \tilde{a}_1=\tilde a_{1,0}, \quad \tilde{a}_2 =\tilde a_{2,1} s + \tilde a_{2,2} s^2 , \quad \tilde{a}_3= \tilde a_{3,2} s^2, \quad\tilde{a}_4=\tilde a_{4,2} s^2, \quad \tilde{a}_6=\tilde a_{6,4} s^4, \end{equation} where $\tilde{a}_p$ is by definition a section of a line bundle of class $p c_1(\tilde{B}_3)=p(c_1-E)$ and therefore $\tilde a_{p,q}$ is a section of a line bundle of class $p c_1(\tilde{B}_3)-q\tilde{\mathcal{D}}=p c_1-q \mathcal{D}-(p-q)E$. The coefficient $\tilde a_2$ has a deformation $\tilde a_{2,2}$ compatible with a fiber $I^s_4$ and useful to define a suspended pinch point weak coupling limit. If we take $\tilde a_{2,1}$ identically vanishing, the DW-weak coupling limit will coincide with the suspended pinch point weak coupling limit. If we keep $\tilde a_{2,1}$, then we will rescale it as $\tilde a_{2,1}\rightarrow \epsilon \tilde a_{2,1}$ in the weak coupling limit. Either way, we get an spp $\xi^2= \tilde a_1^2 + 4 \tilde a_{2,2}s^2$ in the weak coupling limit. It will coincide with the double cover of the resolved $\tilde{X}$ if we impose $\tilde a_1=a$ and $\tilde a_{2,2}=a_{2,2}$. 
The other Tate coefficients can be realized as follows: \begin{equation}\label{PT} \begin{array}{lllll} \ a_1&\longrightarrow&a&&c_1-E\\ \\ a_{2,1}&\longrightarrow& s\, a_{2,1}^{(1,0)} + a\, a_{2,1}^{(0,1)} & &2 c_1-\mathcal{D}-E\\ \\ a_{2,2}&\longrightarrow&a_{2,2}& &2 c_1-2\mathcal{D}\\ \\ a_{3,2}&\longrightarrow&s\,a_{3,2}^{(1,0)}+a\,a_{3,2}^{(0,1)}&&3 c_1-2\mathcal{D}-E\\ \\ a_{4,2}&\longrightarrow&s^2\,a_{4,2}^{(2,0)}+s\,a\,a_{4,2}^{(1,1)}+a^2\,a_{4,2}^{(0,2)}&&4 c_1-2\mathcal{D}-2E \\ \\ a_{6,4}&\longrightarrow&s^2\,a_{6,4}^{(2,0)}+s\,a\,a_{6,4}^{(1,1)}+a^2\,a_{6,4}^{(0,2)}&&6 c_1-4\mathcal{D}-2E\;, \end{array} \end{equation} where the last column indicates the divisor class of $\tilde a_{p,q}$. The superscript $(n,m)$ just means that we have $n$ powers of $s$ and $m$ powers of $a$ in front of that coefficient. Depending on the details of the model some of them may identically vanish. By taking the DW limit of this new fibration we get the following discriminant at leading order: \begin{equation}\label{LeadingSppPT} \Delta|_{\rm spp}\sim\left[a^2+s^2a_{2,2}\right]^2\,s^4\,\left\{a^4\,\left[(a^{(0,2)}_{4,2})^2+a^{(0,1)}_{3,2}a^{(0,2)}_{4,2}-a^{(0,2)}_{6,4}\right]+\mathcal{O}(s)\right\}\;\epsilon^2\,. \end{equation} The last bracket in \eqref{LeadingSppPT} defines the recombined, tadpole-canceling $U(1)$ D7-brane. The geometry of the Calabi-Yau uplift $\tilde{Y}$ successfully reproduces the $SU(4)$ gauge degrees of freedom on the D7-stack at $s=0$. Let us now go to codimension two. From \eqref{LeadingSppPT} we deduce the two matter curves \begin{equation} \cancel{{\rm\bf 6}\;:\quad\begin{cases}s=0\\ a=0\end{cases}}\qquad,\qquad{\rm\bf 4}\;:\quad\begin{cases}s=0\\ (a^{(0,2)}_{4,2})^2+a^{(0,1)}_{3,2}a^{(0,2)}_{4,2}-a^{(0,2)}_{6,4}=0\,.\end{cases} \end{equation} We obtain again a matter curve accommodating fields in the fundamental representation ${\rm\bf 4}$. However, matter arising from the intersection of the D7-stack with the O7-plane loses its support. 
This curve of intersection is in fact precisely the center of our blow-up. This is an issue for those models, like F-theory inspired GUTs \cite{Donagi:2008ca, Beasley:2008dc}, which require such a curve for phenomenological reasons. We do not have a convincing solution yet. One may think that the ${\rm\bf 6}$ is ``higgsed'' and split in two parts. Fix a divisor which intersects both the O7-plane and the D7-stack: The two branches of the ${\rm\bf 6}$-matter curve would then arise from the fixed divisor intersecting the O7-plane on one hand and the D7-stack on the other. This would then imply having a 7-brane source wrapping the fixed divisor, which from the structure of the discriminant \eqref{LeadingSppPT} is generically not the case for $SU(4)$. Therefore, one is led to constrain the complex structure of the fourfold in order to achieve this further factorization of the discriminant. Note, however, that a natural candidate divisor which in our resolved geometry interpolates between the D7-stack and the O7-plane is the exceptional divisor \eqref{ExceptionalSpp}. One can easily work out the same analysis for $SU(2k)$ with $k>2$, and realize that for $k\geq5$ factors of $w$ start appearing in eq. \eqref{LeadingSppPT}. They are needed for consistency with 7-brane tadpole cancellation, expressed by the fundamental relation \begin{equation}\label{7-braneTadpole} [\Delta]_{\rm spp}=12\,c_1(\tilde{B}_3)=12\,(c_1(B_3)-E)\,, \end{equation} where we used eq. \eqref{nonCrepant} and $[\Delta]$ means the divisor class of the discriminant. This provides, at least for high rank gauge groups, a natural playground for putting the conjecture of ${\rm\bf 6}$-matter curve higgsing on a solid basis. We have not explored the details of this idea yet and hope to come back to this problem in a future publication. 
Meanwhile, in section \ref{secondApproach}, we will propose a slight modification of limit \eqref{SppLimitNonAbelian}, which, though having a smaller range of validity, does not affect the matter curve in question. \subsubsection{SU(5)} Let us now come to the odd series of unitary groups and focus on the basic case of $SU(5)$. By applying the Donagi-Wijnholt limit \eqref{DWLimitNonAbelian} to the discriminant, we obtain \begin{equation}\label{LeadingSenOdd} \Delta|_{\rm DW}\sim\left[a_1^2+4\sigma a_{2,1}\right]^2\,\sigma^5\,\left[a_1(a_{3,2}a_{4,3}-a_1a_{6,5})-a_{2,1}a_{3,2}^2+\mathcal{O}(\sigma)\right]\;\epsilon^2\,. \end{equation} The relevant matter curves now are (the same observation as in footnote \ref{FullDelta} applies here) \begin{equation} {\rm\bf 10}\;:\quad \begin{cases} \sigma=0\\ a_1=0 \end{cases} \qquad,\qquad{\rm\bf 5}\;:\quad \begin{cases} \sigma=0\\ a_1(a_{3,2}a_{4,3}-a_1a_{6,5})-a_{2,1}a_{3,2}^2=0 \end{cases} \end{equation} Matter in the antisymmetric representation ${\rm\bf 10}$ arises from the enhancement to $SO(10)$ along the O7-plane, while matter in the fundamental representation ${\rm\bf 5}$ arises from the enhancement to $SU(6)$ along the remaining, invariant D7-brane. Here we readily see that there are two kinds of Yukawa couplings arising from the intersections of the matter curves \cite{Marsano:2011hv} \begin{equation}\label{YukawaSU5} {\rm\bf 10}\,\bar{\rm\bf 5}\,\bar{\rm\bf 5}\;:\quad \begin{cases} \sigma=0\\ a_1=0\\ a_{3,2}=0 \end{cases} \qquad,\qquad{\rm\bf 10}\,{\rm\bf 10}\,{\rm\bf 5}\;: \begin{cases} \sigma=0\\ a_1=0\\ a_{2,1}=0\,. \end{cases} \end{equation} The first is analogous to the one for $SU(4)$ \eqref{YukawaSU4}, and it comes from the enhancement to $SO(12)$. The second is peculiar to $SU(5)$ and is exactly localized on the conifold points of the singular geometry \eqref{ConifoldSingularity}. It comes from the enhancement to `E$_6$'. 
Hence, the enhancement to an exceptional gauge group occurring for $SU(5)$ tells us that in this case the physics hidden in the conifold points may well be intrinsically strongly coupled and thus impossible to reproduce just by means of fundamental strings\footnote{One would need string-junctions, which are believed to be the ``fundamental'' objects of F-theory.}. If we now use the new limit \eqref{SppLimitNonAbelian} to expand the discriminant, we get \begin{equation}\label{LeadingSppOdd} \Delta|_{\rm spp}\sim\left[a_1^2+\sigma^2a_{2,2}\right]^2\,\sigma^5\,a_1\,\left[a_{3,2}a_{4,3}-a_1a_{6,5}+\mathcal{O}(\sigma)\right]\;\epsilon^2\,, \end{equation} where we see that the previous pattern of intersections and enhancements is not respected. The conifold points are scaled away, as they should be, while the $SO(12)$ points are kept. However, the meaningful quantity is the discriminant after the blow-up of the suspended pinch point. In table \eqref{PT} we have to constrain the complex structure in order to extract a further factor of $s$ from $\tilde a_{4,2}$ and $\tilde a_{6,4}$, as required by the Tate prescription for $SU(5)$. This leads us to the following discriminant \begin{equation}\label{LeadingSppPTOdd} \Delta|_{\rm spp}\sim\left[a^2+s^2a_{2,2}\right]^2\,s^5\,a^3\,\left[a^{(0,1)}_{3,2}a^{(0,1)}_{4,3}-a^{(0,1)}_{6,5}+\mathcal{O}(s)\right]\;\epsilon^2\,, \end{equation} where now the last polynomial in square brackets has class $5c_1-5\mathcal{D}$. Our new geometry successfully reproduces the $SU(5)$ gauge degrees of freedom on the D7-stack at $s=0$. As for codimension two, we have again \begin{equation} \cancel{{\rm\bf 10}\;:\quad\left\{\begin{array}{l}s=0\\ a=0\end{array}\right.}\qquad,\qquad{\rm\bf 5}\;:\quad\left\{\begin{array}{l}s=0\\ a^{(0,1)}_{3,2}a^{(0,1)}_{4,3}-a^{(0,1)}_{6,5}=0\,.\end{array}\right. 
\end{equation} While the curve accommodating matter in the fundamental representation is successfully reproduced, the one hosting matter in the antisymmetric representation disappears. One may think of ``higgsing'' the antisymmetric matter curve using the exceptional divisor, in analogy to what was proposed for $SU(4)$. Here factors of $w$ appear in \eqref{LeadingSppPTOdd} for $SU(2k+1)$ with $k\geq4$. However, we defer a more accurate analysis of this issue to future work. \subsection{An alternative spp}\label{secondApproach} As stressed in the previous section, the suspended pinch point geometry after resolution does not reproduce the antisymmetric matter curve. In this section we propose a way out of this problem, by slightly modifying the definition of the new weak coupling limit \eqref{SppLimitNonAbelian}. We focus our attention on $SU(4)$ F-theory configurations and only say a few words about other cases towards the end of the section. Assume the D7-stack wraps a \emph{spin} manifold. This hypothesis is necessary for this alternative limit to work. Then consider the following weak coupling limit \begin{equation}\label{sppLimitSpin} \begin{cases} a_{2,1} & \longrightarrow \epsilon \,a_{2,1}+\tfrac14P^2\\ a_{3,2} &\longrightarrow \epsilon \,a_{3,2}\\ a_{4,2} &\longrightarrow \epsilon \,a_{4,2}\\ a_{6,4} &\longrightarrow \epsilon^2 \,a_{6,4} \end{cases} \end{equation} where $P$ is a section of a line bundle of class $c_1-\mathcal{D}/2$, which makes sense since $\mathcal{D}$ is by assumption an even class. The new Calabi-Yau threefold geometry is still singular of the suspended pinch point type \begin{equation}\label{SPPspin} X_3:(\xi-a_1)\,(\xi+a_1)=\sigma\,P^2\,. \end{equation} But this time the curve of singularities is \emph{not} the intersection of the D7-stack with the O7-plane. 
This different improvement of the Donagi-Wijnholt limit is still harmless from the point of view of the discriminant, as we get \begin{equation}\label{LeadingSppSpin} \Delta|_{\rm spp}\sim\left[a_1^2+\sigma P^2\right]^2\,\sigma^4\,\left[a_{4,2}^2+a_1(a_{3,2}a_{4,2}-a_1a_{6,4})+\mathcal{O}(\sigma)\right]\;\epsilon^2\,. \end{equation} Let us now blow-up \eqref{SPPspin} and convince ourselves that limit \eqref{sppLimitSpin} successfully reproduces the physics of the $SU(4)$ models in codimension one and two. The resolution procedure goes exactly as in the previous case. The resolved Calabi-Yau threefold is the complete intersection \begin{equation}\label{resSPPspin} \tilde{X}_3:\left\{\begin{array}{rcl}\xi^2&=&a^2+\sigma\,p^2\\ w\,p&=&P \\ w\,a&=&a_1\,,\end{array}\right. \end{equation} and $\xi\,p\,a$ is an element of the Stanley-Reisner ideal of the ambient six-dimensional manifold. Again it can be viewed as the double cover of the manifold \begin{equation}\label{resolvedBaseSpin} \tilde{B}_3\;:\quad\left\{\begin{array}{rcl}w\,p&=&P \\ w\,a&=&a_1\,.\end{array}\right.\qquad,\qquad\begin{array}{ccc}p&a&w\\ \hline 1&1&-1\end{array}\,. \end{equation} which is the blow-up of $B_3$ along the curve $\{a_1=P=0\}$. The stack of D7-branes and its orientifold image are described by the following systems \begin{equation} {\rm D7}_\pm:\left\{\begin{array}{rcl}\xi&=&\pm a\\ \sigma&=&0 \\ wp&=&P\\ wa&=&a_1\,.\end{array}\right. \end{equation} They now intersect on a curve which lies on the O7-plane. The latter is the surface \begin{equation}\label{O-planeSpin} {\rm O7}:\left\{\begin{array}{rcl} \xi&=&0\\ a^2&=&-\sigma \\ w\,a&=&a_1 \\ w&=&P\,,\end{array}\right. \end{equation} where we have fixed the gauge by taking $p=1$. We easily recognize from \eqref{O-planeSpin} a surface wrapping the divisor $\{a_1^2+\sigma P^2=0\}$ of the original base $B_3$, as it should be. 
Finally, the exceptional divisor is \begin{equation} E:\begin{cases} w &=0\\ a_1 &=0 \\ P &=0\\ \xi^2 &=a^2+\sigma\,p^2 \end{cases} \end{equation} which has the geometry of an orientifold-invariant, quadratic $\mathbb{P}^1$ with homogeneous coordinates $a,p$, fibered over the locus $\{P=0\}\cap\{a_1=0\}\subset B_3$. On the location of the D7-stack, $\{\sigma=0\}$, the fiber of the exceptional divisor splits into two linear spheres, $\mathbb{P}^1_{pa}|_{\xi=\pm a}$, exchanged by the orientifold involution. This last geometry may turn out to be useful, as will be clear below. Let us now look at the proper transform of the discriminant \eqref{LeadingSppSpin}. It is not difficult to understand that the Tate coefficients $a_{3,2}, a_{6,4}$ cannot be constrained as imposed by 7-brane tadpole cancellation. In contrast, $a_{4,2}$ must be replaced by the monomial $p^4$, which is the most generic form of the appropriate degree. Therefore the discriminant simply reads \begin{equation}\label{LeadingSppSpinPT} \Delta|_{\rm spp}\sim \left[a^2+\sigma p^2\right]^2\,\sigma^4\,p^8\;\epsilon^2\,. \end{equation} Here we see that the U(1) D7-brane has undergone a drastic change and, due to its high degree, has given rise to a stack of eight D7-branes plus separated orientifold images (much like the $SU(4)$ stack in the resolved suspended pinch point geometry of section \ref{resCY3spp}). This new stack accommodates an $SU(8)$ flavor symmetry. There are two possible matter curves, described by the following intersections \begin{equation} {\rm\bf 6\;:\quad\left\{\begin{array}{l}\sigma=0\\ a=0\end{array}\right.}\qquad,\qquad{\rm\bf 4}\;:\quad\left\{\begin{array}{l}\sigma=0\\ p=0\,.\end{array}\right. \end{equation} The first is the curve where matter in the antisymmetric representation of $SU(4)$ is localized, which arises from the ordinary enhancement to $SO(8)$ along the O7-plane. Notice that it now survives the resolution. 
The second is the curve where matter in the fundamental representation of $SU(4)$ lives. Since this curve is the intersection of the gauge stack with an $SU(8)$ flavor stack, there is an enhancement to $SU(12)$ along it and the matter fields localized there transform in the fundamental representation of the flavor group. If we now look at the intersection of these two matter curves to search for the Yukawa couplings, we readily see that it is empty, because it is part of the locus which has been blown-up. However, we see that the triple intersection we are looking for is replaced by the curve $E\cap {\rm D7}_+\cup E\cap {\rm D7}_-$ \begin{equation} \begin{cases} w&=0\\ a_1&=0 \\ P&=0\\ \sigma&=0\\ \xi&=a\end{cases} \quad \cup \quad \begin{cases} w&=0\\ a_1&=0 \\ P&=0\\ \sigma&=0\\ \xi&=-a \end{cases} \end{equation} whose typical fiber, as already mentioned, is a pair of $\mathbb{P}^1$s, one the orientifold image of the other, touching at a point. One is now tempted to argue that we have an effective, non-perturbative type IIB description of the ${\rm\bf 6}\,\bar{\rm\bf 4}\,\bar{\rm\bf 4}$ Yukawa coupling by means of D1-instantons wrapping one $\mathbb{P}^1$ and anti-D1-instantons wrapping the image $\mathbb{P}^1$. However, we have not performed an accurate analysis of this system: Besides proving that it is actually stable, one has to make sure that the instantons in question have the right number of neutral zero-modes so as to contribute to the superpotential and generate the desired Yukawa coupling. We hope to clarify all this in a future work. To conclude this section, let us stress that limit \eqref{sppLimitSpin} does not properly work beyond $SU(4)$, i.e. for F-theory configurations with an $SU(N>4)$ singularity. This is because there is no way in the geometry of $\tilde{B}_3$ to satisfy the 7-brane tadpole. 
Therefore the validity of the weak coupling limit presented in this section is limited to $SU(4)$ F-theory configurations with the gauge stack wrapping a spin manifold. \section{Conclusions}\label{Outlook} In this paper, we have discussed different realizations of Sen's weak coupling limit \cite{Sen.Orientifold} which are alternatives or specializations of the traditional Donagi-Wijnholt ansatz \cite{Donagi:2009ra}. The main purpose has been to provide a systematic way of solving the conifold problem afflicting the DW ansatz when applied to singular fibrations with unitary gauge groups: These singularities do not admit admissible crepant resolutions\footnote{An admissible crepant resolution of a double cover $X$ is a resolution of $X$ that preserves the first Chern class and is compatible with the structure of the double cover (and therefore with the orientifold involution).}. We have also analyzed the weak coupling limits of all gauge groups implemented by Tate forms, including the exceptional ones. The properties of these limits are somewhat surprising: \begin{enumerate} \item The gauge group seen at weak coupling is not necessarily the same as the one observed in F-theory. This is expected in certain cases, like for example for exceptional gauge groups in F-theory, as they are not present at weak coupling. The groups seen in the weak coupling limit can be orthogonal, unitary or symplectic. An orthogonal gauge group appears when the locus of the brane coincides with a component of the orientifold locus. \item The gauge group seen at weak coupling is not necessarily a subgroup of the F-theory group. For example, for $E_6$, we get a group $\mathop{\rm SO}(12)$. One could argue that this $\mathop{\rm SO}(12)$ should reduce to $\mathop{\rm SO}(10)$ if it is generated by a perturbative subset of the open strings that generate $E_6$ in F-theory. This would match the description of $E_6$ in F-theory using monodromies of ``$ABC$'' branes. 
A clear, string-based deduction of the gauge group at weak coupling for exceptional singularities (II, III, IV and duals thereof) of elliptic fibrations of dimension greater than two is still missing. This is due to the fact that there is no way of getting a weak string coupling on the would-be gauge stack\footnote{This problem does not happen for exceptional singular fibers in the weak coupling limits considered in \cite{AE2,EFY}, which are not based on Tate forms.}. \item In the DW-weak coupling limit, the same gauge group is obtained for fibers regardless of whether they are split/non-split/semi-split. This is true with the exception of the $I_n$ fibers, which lead to unitary and symplectic gauge groups. \item For unitary gauge groups, we can maintain the group and its rank and get a double cover with an admissible crepant resolution if we use the suspended pinch point (spp) limit. However, it requires introducing a term ($a_{2,2}$) which is a section of a line bundle of class $2L-2\mathcal{D}$. The existence of such a section is a non-trivial topological constraint. There is an alternative limit also leading to a double cover with an admissible crepant resolution and which is free of such a topological constraint, but which leads to orthogonal gauge groups at weak coupling. \item We also note a possible tension between a (crepant) resolution $\tilde{X}\rightarrow X$ of a Calabi-Yau and the D7 tadpole cancellation condition, which requires the vanishing of the total D7 charge. Indeed, after a resolution, the branes are expected to wrap the proper transforms of the cycles they used to wrap in $X$. However, if some of these cycles intersect the center of the blow-up with multiplicities, their classes will get a contribution from the exceptional divisor. It follows that the D7 tadpole can be in jeopardy, as it is based on a delicate equilibrium between the classes of the D7 branes and the orientifold plane. We give an important example below. 
\begin{example}[Tadpole requirements for the typical configuration] Consider the typical situation that occurs for the weak coupling limits considered in this paper: A singular double cover is resolved by blowing-up a codimension-two locus of multiplicity 2. The spectrum consists of an orientifold $\underline{O}$, a stack of $r$ D7 branes on $\underline{D}$ and a spectator $U(1)$ brane $\underline{D}'$, all described in the base. We assume that the stack intersects the center of the blow-up with multiplicity 2. Before the resolution, we have the tadpole $$8[\underline{O}]-(r [\underline{D}]+[\underline{D}'])=0.$$ After the blow-up, we can evaluate the D7 charge mismatch: $$ 8[\underline{O}-E]-(r [\underline{D}-E]+[\underline{D}'])=(r-8)E. $$ If $r=8$, the proper transform of the spectrum does satisfy the tadpole. If $r<8$, the tadpole charge would require a negative contribution proportional to the exceptional divisor $E$. This can be for example a stack of $8-r$ anti-D7 branes, which would break supersymmetry. If $r>8$, the tadpole can be canceled by wrapping $(r-8)$ D7-branes on the exceptional divisor. We presented a solution to the problem when $r<8$: we cannot keep the spectrum of the proper transforms, but if we would like to keep the orientifold and the stack unchanged, we can modify the remaining brane in such a way that the tadpole is preserved. We can think of it as a supersymmetric brane recombination of the stack of anti-branes and the spectator brane. We have obtained a natural description of the final result of such a recombination using a Calabi-Yau elliptic fibration over the base of the resolution of $X$. \end{example} \end{enumerate} In the second part of the paper, we have focused on questions relevant for phenomenological GUT model building. 
In particular we have explored the possibility of realizing in an effective way as much as we could of the physics of F-theory $SU(N)$ configurations using only the weakly coupled dynamics of type IIB strings. Therefore we started by requiring that we have (on an arbitrary divisor $\{\sigma=0\}$) an $SU(N)$ stack of D7-branes and its orientifold image in a smooth Calabi-Yau threefold. This one condition (together with the fact that the Calabi-Yau threefold is the double cover of the base of the elliptic fibration) already constrains the hypersurface equation to have the following conifold form \begin{equation} \xi^2=a_1^2+\sigma\,B\,, \end{equation} where $B$ is a polynomial on the base of the appropriate degree. $B$ is the only factor we can play with in order to achieve a more tractable singularity. Now, if we impose that $\sigma$ divides $B$, as we did in subsection \ref{PhysicalProperties}, the antisymmetric matter curve becomes the singular locus and it is blown-up in the resolved picture. Alternatively, we can deform $B$ to be a perfect power. Here two sub-cases are possible. The power is even (the basic case of power two has been explored in subsection \ref{secondApproach}), which only works with the assumption of spin-ness of the gauge divisor; the case of $SU(4)$ seems to work with this strategy, but higher rank gauge groups seem incompatible with 7-brane tadpole cancellation. The power is odd; this case reduces to the original conifold, after a series of resolutions. More work is needed in the investigation of a full effective description within the realm of type IIB string theory of the strongly coupled physics of unitary F-theory configurations. In particular, suitable instanton effects will be required in order to reproduce certain expected Yukawa interactions. We hope to come back to all these issues in a future work. \subsection*{Acknowledgements} We would like to thank Andr\'es Collinucci for initial collaboration and for many fruitful conversations. 
We would also like to acknowledge useful discussions with Frederik Denef, I\~naki Garc\'ia-Etxebarria, Thomas Grimm, Hirotaka Hayashi, Stefan Hohenegger, Shamit Kachru, Timo Weigand, Martijn Wijnholt and Shing-Tung Yau. We would like to acknowledge the hospitality of the Simons Center for Geometry and Physics in Stony Brook, where this project was born. M.E. is very grateful to the members of the Taida Institute for their hospitality. He would also like to thank Imran Esole for his joyful cooperation at different stages of this project.
\section{Introduction} Lattice QCD is a powerful method to study Quantum chromodynamics (QCD) in a nonperturbative way. In lattice QCD, a path integral is directly evaluated on a discrete space-time lattice by means of the Monte Carlo method. As computer technology advances, PC clusters as well as a number of commercial supercomputers can be used for lattice QCD simulations. Since lattice QCD simulations demand huge computer power, it is very important to optimize the simulation codes so as to exploit the full potential of the processor. Thus we optimize the hot spots of the codes, such as the operation of a Dirac operator on a spinor (referred to as $Q\phi$ hereafter) and the linear algebra of spinors, e.g. \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item $\langle\psi,\phi\rangle$: scalar product of two spinors \item $\langle\psi,\psi\rangle$: norm square of a spinor \item $\psi=\psi+c\phi$: add two spinors with a constant factor and assign the result to one of the source spinors \end{itemize} In the simulation, a spinor is defined as a vector with 12 complex components on each grid point (site) of the lattice, and a gauge field is defined as a complex $3\times 3$ matrix on a link, which connects nearest neighbor sites. Numerically the operation $Q\phi$ is a combination of complex $3\times 3$ matrix times complex vector multiplications. From a technical point of view, we should optimize these two applications separately, since $Q\phi$ is processor limited as it needs more than one thousand floating point operations per site, while the linear algebra routines demand more data transfer than actual computation (memory limited). In this note we report our experience of optimizing the lattice QCD codes for the new PC cluster with the AMD Opteron processor~\cite{man} which was recently installed at DESY Hamburg. 
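As a point of reference for the optimizations discussed below, the two kinds of memory-limited kernels can be sketched in plain, unoptimized C. The data layout (24 interleaved re/im floats per site) and the function names are our illustrative assumptions, not the actual simulation code:

```c
#include <assert.h>

/* One lattice site of a spinor: 12 complex components stored as
   24 interleaved re/im floats (an assumed layout, for illustration). */
typedef struct { float c[24]; } spinor;

/* psi = psi + a*phi for a real constant a, over vol sites. */
static void spinor_axpy(spinor *psi, const spinor *phi, float a, int vol) {
    for (int x = 0; x < vol; x++)
        for (int i = 0; i < 24; i++)
            psi[x].c[i] += a * phi[x].c[i];
}

/* <psi,psi>: accumulate the squared components in double precision. */
static double spinor_norm_sq(const spinor *psi, int vol) {
    double s = 0.0;
    for (int x = 0; x < vol; x++)
        for (int i = 0; i < 24; i++)
            s += (double)psi[x].c[i] * psi[x].c[i];
    return s;
}
```

Both loops touch every byte of their operands while performing at most two floating point operations per value, which is why these routines are memory limited rather than processor limited.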
Of course parallel computing with multiple processors is of interest to us; however, as a first step we have optimized only the single-processor version here. We also compare benchmarks with an older PC cluster with Intel Xeon processors (see Table~\ref{tbl:spec} for the specifications of these two processors). \begin{table} \caption{Comparison of the specifications of the two processors which we use} \small \begin{tabular}{ccc} \hline & AMD Opteron & Intel Xeon \\ \hline Processor speed & 2.4 GHz & 1.7 GHz \\ L1 Data Cache & 64 kbytes & --- \\ L2 Data Cache & 1024 kbytes & 512 kbytes \\ \# of SSE registers & 16 & 8 \\ Cache line size & 64 bytes & 128 bytes \\ \hline \end{tabular} \label{tbl:spec} \end{table} \section{SSE instructions} To achieve the full performance, it is important to make use of the special features of the processor. The Streaming SIMD Extensions instruction sets (SSE) are the case for us~\cite{sse}. SSE is designed to process 128-bit long data which may contain multiple elements of vectors and is suitable for vector operations like $Q\phi$ and linear algebra. Both Opteron and Xeon processors support SSE and SSE2 (the first extension of SSE) and have special 128-bit long registers for the SSE instructions (SSE registers), as summarized in Table~\ref{tbl:spec}. Note that neither of them supports SSE3, the latest extension of SSE. In our group, simulation codes written in the standard C language with SSE/SSE2 have already been developed for the Xeon processor. These SSE instructions are embedded in the C codes by defining macros using GCC inline assembly~\cite{inline}. A macro contains the SSE instructions that load data from system memory to an SSE register, operate on vectors in the SSE registers, store data to system memory, etc. Technically it is also important to consider the latency of the SSE instruction itself as well as the latency caused by the data transfer. Usually an instruction needs several processor cycles to finish its operation. 
Although the Opteron processor can execute up to three instructions per cycle, the processor has to wait until the previous operation is finished if two operations are interdependent. One can avoid this by performing independent operations on several SSE registers. For instance, in a macro with SSE instructions, we first load $512$-bit data into four SSE registers per function call. Then the operation on the first SSE register starts while data loading proceeds on the third and the fourth registers. In the same way, the data on the first SSE register can be stored to the system memory while the calculations on the third and the fourth registers are going on. This is how one can hide the latency of the SSE instructions. In addition to this, of course, reducing the number of instructions and/or using lower latency instructions improves performance. The Opteron's doubled number of SSE registers compared to the legacy count (see Table~\ref{tbl:spec}) can eliminate substantial memory access overhead by keeping intermediate results in SSE registers and can hide the latency of each instruction more effectively. \section{Prefetch instruction} The mismatch between the memory bandwidth and the processor throughput is one of the main sources of latency. A typical difference between them is a factor of ten. The prefetch instructions, which read data from system memory into the level~1 (L1) data cache, take advantage of the high bus bandwidth of the Opteron processor to hide latencies when fetching the data from the system memory. Data transfer is processed in the background and eight prefetch instructions can be ``in flight'' at a time. As a prefetch instruction initiates a read request of a specified address and reads one entire cache line that contains the requested address, prefetch instructions can improve performance in situations where sequential addresses are read. This is the case for lattice QCD simulations. 
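The production macros are written in GCC inline assembly, but the interleaving idea can be sketched with SSE2 intrinsics. The following toy kernel is a simplified two-stream version of the four-register pattern described above, our own illustration rather than the actual macro:

```c
#include <emmintrin.h>  /* SSE2 intrinsics, available on Opteron and Xeon */
#include <assert.h>

/* v[i] *= a over n doubles, two elements per 128-bit register.  Two
   independent register streams are kept in flight, so the load for the
   second pair can proceed while the multiply of the first pair runs. */
static void scale_sse2(double *v, double a, int n) {
    const __m128d va = _mm_set1_pd(a);
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128d x0 = _mm_loadu_pd(v + i);      /* stream 0: load    */
        __m128d x1 = _mm_loadu_pd(v + i + 2);  /* stream 1: load    */
        x0 = _mm_mul_pd(x0, va);               /* stream 0: compute */
        x1 = _mm_mul_pd(x1, va);               /* stream 1: compute */
        _mm_storeu_pd(v + i, x0);              /* stream 0: store   */
        _mm_storeu_pd(v + i + 2, x1);          /* stream 1: store   */
    }
    for (; i < n; i++)                         /* scalar remainder  */
        v[i] *= a;
}
```

Because the two streams are independent, the processor can overlap the load of stream 1 with the multiply of stream 0, which is exactly the latency-hiding pattern of the hand-written macros.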
Once the data is in the L1 cache, there is almost no latency, as 128-bit data can be loaded into an SSE register within two processor cycles. Note that the number of prefetch instructions depends on the length of one cache line, which is 64 bytes for the Opteron processor and 128 bytes for the Xeon processor. One of the important parameters for the effective use of prefetch instructions is the ``prefetch distance,'' which denotes how far ahead the prefetch request is made. In principle, the prefetch distance should be long enough so that the data is in the cache by the time it is needed by the processor. The actual distance depends on the application. \begin{figure}[t] \includegraphics[width=7.9cm]{prefetch.eps} \caption{Prefetch distance dependence of the performance of the macro which computes $3\times 3$ complex matrix times complex vector on each site of a $12^{3}\times 24$ lattice with 16 SSE registers, optimized for the Opteron processor. The computation time is shown in units of seconds for 500 applications. Prefetch distance ``1'' means that a spinor on the next site and a gauge field on the next link are prefetched before the computation on a given site. } \label{fig:prefetch} \end{figure} We show the prefetch distance dependence of the performance of the macro which computes a $3\times 3$ complex matrix times a complex vector on each site in Fig.~\ref{fig:prefetch}. This operation can be regarded as a prototype of $Q\phi$. We find that the macro with prefetch distance three is more than twice as fast as that without prefetch instructions. It is also interesting to note that a prefetch distance shorter than the optimal one (distances 1 or 2 in Fig.~\ref{fig:prefetch}) gives worse performance than a longer prefetch distance such as 5 or 6. This result suggests that the prefetch distance should not be chosen too small. 
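The role of the prefetch distance can be made explicit in a small C sketch. We use the GCC builtin \texttt{\_\_builtin\_prefetch}, which on x86 compiles to the prefetch instructions discussed here; the kernel and its parameter names are illustrative assumptions, and the best distance must be found by benchmarking, as in Fig.~\ref{fig:prefetch}:

```c
#include <assert.h>
#include <stddef.h>

/* Sum of squares with a software prefetch issued `dist` array elements
   ahead of the current read.  `dist` plays the role of the prefetch
   distance tuned in the text; a prefetch is only a hint, so the result
   is independent of the chosen distance. */
static double sumsq_prefetch(const float *v, size_t n, size_t dist) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)
            __builtin_prefetch(&v[i + dist], 0 /* read */, 3 /* keep cached */);
        s += (double)v[i] * v[i];
    }
    return s;
}
```

In the real macros the distance is counted in sites or cache lines rather than single elements, but the tuning question is the same: the request must be issued early enough that the cache line has arrived when the computation reaches it.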
Another important parameter is the amount of data to be processed in each ``prefetch-computation'' iteration. In general, for an optimal use of the prefetch instruction, the data stride per iteration should be longer than the length of a cache line. Although it is easier to hide latency in an SSE macro with more data per iteration, too large a data stride, say more than four cache lines per iteration, may reduce performance. The optimum also depends on the number of source fields in the application, since the number of simultaneous prefetch requests is limited; this number is eight for the Opteron processor. We perform several tests for each application and determine the prefetch distance and data stride one by one. For instance, in the routine to compute $\langle\psi,\phi\rangle$, the data stride is the length of two cache lines and both source fields are prefetched six cache lines ahead. In the $\psi + c\phi$ routine, the data stride is the length of a cache line and the prefetch distance is five cache lines. \begin{figure}[t] \includegraphics[width=7.9cm]{norm.eps} \caption{The lattice size dependence of the benchmark results for the 32-bit $\langle\psi,\psi\rangle$ calculation written in C language without SSE ($\Box$), with SSE instructions optimized for the Xeon ($\nabla$) and the new version optimized for the Opteron ($\circ$). All calculations are done on the Opteron cluster. } \label{fig:norm} \end{figure} \begin{figure}[t] \includegraphics[width=7.9cm]{mulc_withd.eps} \caption{Benchmark results for 32-bit $ \psi+c\phi$. The slowing down observed in the Opteron version ($\circ$) at $L=8,16,24$ is cured by adopting two prefetch distances ($\bullet$).} \label{fig:mulc} \end{figure} \section{Benchmarks} To see the effectiveness of the implementation of SSE instructions and of the optimization, we show benchmarks of the linear algebra routines with and without SSE instructions in Figs.~\ref{fig:norm}, \ref{fig:mulc} and \ref{fig:spinor}. 
We also plot the result of the original code using 8 SSE registers, which was written for the Xeon processor and is compatible with the Opteron processor. The use of SSE instructions improves the performance even before the optimization for the Opteron processor. In addition, we obtain another considerable gain by tuning the prefetch distance and by making use of 16 SSE registers. The lattice size dependence of the performance is rather large for the linear algebra routines. The good performance (6.4 GFlops) of the $\langle\psi,\psi\rangle$ calculation at $L=8$ in Fig.~\ref{fig:norm} is due to the cache effect, where a spinor field (72 bytes $\times 8^{4}\sim 0.3 $ Mbytes) can fit entirely into the L2 cache. We also observed a slowing down in the $ \psi+c\phi$ calculation at $L=8, 16, 24$ as shown in Fig.~\ref{fig:mulc} and in the $ \langle \psi,\phi\rangle$ calculation at $L=16, 24$ as shown in Fig.~\ref{fig:spinor}. These are caused by bank conflicts in multiple prefetch requests, which occur for certain lattice volumes when two source fields are loaded, as in $\langle\psi,\phi\rangle$. However, this slowing down can be cured by extending the prefetch distance of the second source by four, the maximal number of prefetch requests divided by the number of source fields, $8/2=4$. This treatment is based on the observation in Fig.~\ref{fig:prefetch} that a longer prefetch distance does not degrade performance. The final version is roughly twice as fast as the original C code without SSE instructions. In Table \ref{tbl:comp}, we compare the throughput of the Opteron processor to that of the Intel Xeon processor. For the Dirac operator $Q\phi$, the ratio of the throughput is almost the same as that of the processor clock speeds (2.4 GHz/1.7 GHz =1.41) except for the 32-bit version at $L=8$, where the ratio is 1.64 $>$ 1.41. This can again be attributed to the cache effect, since a spinor and a gauge field can fit entirely into the L2 cache ((72+96) bytes $\times 8^{4} \sim 0.7$ Mbytes). 
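The arithmetic quoted above can be checked directly; a minimal sketch in which every number is taken from the text and Table~\ref{tbl:comp}, none being a new measurement:

```python
# Second-source prefetch distance offset: maximal number of in-flight
# prefetch requests divided by the number of source fields.
MAX_PREFETCH_REQUESTS = 8
N_SOURCE_FIELDS = 2
offset = MAX_PREFETCH_REQUESTS // N_SOURCE_FIELDS
assert offset == 4

# Clock-speed ratio vs. measured throughput ratio for Q*phi (32-bit, L=8).
clock_ratio = 2.4 / 1.7
assert round(clock_ratio, 2) == 1.41
assert round(2462 / 1503, 2) == 1.64  # exceeds the clock ratio: cache effect

# Cache footprints on an 8^4 sublattice (32-bit): a spinor field alone,
# and a spinor plus a gauge field, in Mbytes.
sites = 8 ** 4
spinor_mb = 72 * sites / 2 ** 20
both_mb = (72 + 96) * sites / 2 ** 20
assert abs(spinor_mb - 0.28) < 0.01   # "~0.3 Mbytes"
assert abs(both_mb - 0.66) < 0.01     # "~0.7 Mbytes"
```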
\begin{figure}[t] \includegraphics[width=7.9cm]{spinor_withd.eps} \vspace{-0.5cm} \caption{Benchmark results for 32-bit $ \langle \psi,\phi\rangle$. The slowing down observed in the Opteron version ($\circ$) at $L=16,24$ is cured by adopting two prefetch distances ($\bullet$).} \label{fig:spinor} \end{figure} \newcommand{\pmb}[1]{{#1}} \begin{table} \caption{Benchmarks of the AMD Opteron processor (2.4 GHz) and of the Intel Xeon processor (1.7 GHz) for the linear algebra routines and the Dirac operator routine in units of Mflops. The ratio of performance is shown in the last column, which can be compared to the processor clock speed ratio, 2.4/1.7=1.41.} \begin{tabular}{lccc} \hline & Opteron & Xeon & ratio \\ \hline $\pmb{ Q\phi}$ (32-bit, L=8) & 2462 & 1503 & 1.64 \\ $\pmb{ Q\phi}$ (32-bit, L=12) & 2037 & 1421 & 1.43 \\ $\pmb{ Q\phi}$ (64-bit, L=8) & 1140 & 799 & 1.43 \\ $\pmb{ Q\phi}$ (64-bit, L=12) & 1131 & 796 & 1.42 \\ $\pmb{\langle\psi,\phi\rangle}$ (32-bit, L=16) & 1370 & 617 & 2.22 \\ $\pmb{\langle\psi,\phi\rangle}$ (64-bit, L=12) & 1090 & 554 & 1.97 \\ $\pmb{\langle\psi,\phi\rangle}$ (64-bit, L=16) & 661 & 353 & 1.87 \\ \hline \end{tabular} \label{tbl:comp} \end{table} For the linear algebra routines we gain more than expected from the difference in processor clock speed. The larger data cache and the improved memory bandwidth may explain this result, since the linear algebra routines are rather memory limited. \section{Summary} In this report, we discuss the details of the optimization of the lattice QCD codes for the AMD Opteron processor. Tuning the prefetch distance for each application can improve performance by more than a factor of two. The prefetch instruction, however, may cause significant slowing down in some linear algebra routines at certain lattice sizes due to bank conflicts. We found that this slowing down can be cured by adopting two different prefetch distances, one for each source field. 
The effect of the modification of the SSE instructions can be observed only after the adjustment of the prefetch instructions. A large cache effect is observed not only for the linear algebra routines but also for the $Q\phi$ routines when the source fields fit into the relatively large L2 data cache of the Opteron processor. This suggests that a PC cluster of Opteron processors can achieve high performance when the sublattice size becomes small enough. As a next step of this work, we will develop a parallel version of the codes. \section{Acknowledgement} The author would like to thank H. Wittig and Y. Koma for a number of useful discussions. The author appreciates the original codes and the benchmark programs written by L.~Giusti, M.~L\"uscher and H.~Wittig, which are the basis of the improvements reported here. The author would also like to thank U. Ensslin of the DESY IT group for technical support.
\section{\label{sec:level1}Introduction} Turbulent flows are known to be significantly affected by rotation. Many geophysicists, astrophysicists, meteorologists, and engineers are interested in the effects of rotation on turbulence. In a rapidly rotating fluid, the well-known Taylor-Proudman theorem suggests that a flow tends to become two-dimensional. Although an exactly two-dimensional flow is not always established, especially in wall-confined flows, the tendency toward quasi-two-dimensionalization is observed in homogeneous turbulence under rotation, using the eddy-damped quasi-normal Markovian (EDQNM) approximation \cite{cj1989}, large-eddy simulation (LES) \cite{cambonetal1997}, and direct numerical simulation (DNS) \cite{mnr2001,ymk2011}. These results indicate that the inter-scale energy transfer in wavenumber space is altered by the system rotation. In the case of decaying homogeneous turbulence, system rotation suppresses the energy cascade to small scales, and the decay rate of the turbulent energy is reduced \cite{bfr1985,mnr2001}. Since the energy cascade is an inter-scale energy transfer, it is difficult to model within a one-point closure. In order to avoid this difficulty, the reduction of the dissipation rate of the turbulent energy is modeled in the Reynolds-averaged Navier-Stokes (RANS) modeling instead of the reduction of the cascade rate. As a result, a rotation-dependent term is added to the transport equation for the turbulent energy dissipation rate \cite{bfr1985,okamoto1995}. Although this is an indirect modeling of the phenomenon, the reduction of the decay rate of the turbulent energy can be predicted by a one-point closure model of the RANS equation. The RANS models are more often applied to inhomogeneous turbulence. In the previous studies of the RANS modeling, the effects of the system rotation on inhomogeneous turbulence were mainly discussed in terms of the Reynolds stress. 
In these studies, the effects of the system rotation on the Reynolds stress are expressed in the form of the nonlinear eddy-viscosity models with rotation-dependent coefficients \cite{syb2000,wj2000} and with the turbulent helicity \cite{yy1993,yb2016,inagakietal2017}. Here, the turbulent helicity is defined as $H = \langle u_i' \omega_i' \rangle$ where $\langle \rangle$ denotes the ensemble average, and $u_i'$ and $\omega_i'$ are the velocity and vorticity fluctuations, respectively. Note that the turbulent helicity is not the total amount in volume but the statistically averaged value at one point, so that it can vary in time and space. On the other hand, the effects of the system rotation on the turbulent energy transport were not discussed. This might be because the Coriolis force does not perform work on the fluid, and the turbulent energy transport equation is not altered. In order to focus on the effects of rotation on the turbulent energy transport rather than on the Reynolds stress, it is useful to consider the inhomogeneous flow fields with zero mean velocity. A simple example of such a flow is oscillating-grid turbulence \cite{ht1976,dl1978}. Its schematic flow configuration is shown in Fig.~\ref{fig:1}, in which the system rotation rate $\mathbf{\Omega}^\mathrm{F}$ is zero. In the experiment involving this flow, the turbulent energy is generated by an oscillating grid in a tank and spatially transferred in one direction perpendicular to the grid plane. Dickinson and Long \cite{dl1978} experimentally suggested that the diffusion of the turbulent energy can be represented by the `eddy viscosity,' and the width of the turbulence region around the grid $d$ grows as $d \sim t^{1/2}$. This suggestion indicates that the diffusion in oscillating-grid turbulence can be predicted using the conventional gradient-diffusion approximation. Matsunaga \textit{et al}. 
\cite{matsunagaetal1999} revealed that the spatial distribution of the turbulent energy in a steady state of the oscillating-grid turbulence can be predicted by the conventional $K$-$\varepsilon$ model. These results suggest that the gradient-diffusion approximation with the eddy viscosity is suitable for the description of inhomogeneous turbulence without rotation. However, this is not the case for rotating turbulence. Dickinson and Long \cite{dl1983} performed an experiment involving oscillating-grid turbulence with system rotation where the axis was perpendicular to the grid plane (Fig.~\ref{fig:1}). They revealed that the width of the turbulence region grows as $d \sim t$; the growth is faster than in the non-rotating case, where $d \sim t^{1/2}$. The same result was obtained by experiments \cite{dsd2006,kolvinetal2009} and a numerical simulation \cite{rd2014}. This fact suggests that for rotating inhomogeneous turbulence, the diffusion of the turbulent energy cannot simply be described by the gradient-diffusion approximation, since the time dependence of the width of the turbulence region is notably different from that of the well-known diffusion problem. In other words, the phenomenon cannot be predicted by the conventional RANS models using the gradient-diffusion approximation with the eddy viscosity. Moreover, Godeferd and Lollini \cite{gl1999} performed a DNS which mimics the rotating oscillating-grid turbulence and showed that the spatial distribution of the turbulent energy is significantly affected by system rotation. Although Yoshizawa \cite{yoshizawa2002} proposed a model for the pressure-velocity correlation associated with the mean rotational motion of a fluid, this model represents the energy flux in the direction perpendicular to the rotation axis. Hence, it cannot account for the energy transfer enhanced in the direction parallel to the rotation axis. 
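The contrast between the two growth laws can be sketched as follows (an illustrative estimate only; the $O(1)$ prefactors and the precise form of the group-velocity magnitude are schematic). A constant eddy diffusivity $\nu^{\mathrm{T}}$ yields a self-similar diffusive spreading, whereas energy carried at the group velocity of inertial waves spreads ballistically:

```latex
% Diffusive spreading under a constant eddy diffusivity \nu^{T}:
\frac{\partial K}{\partial t}
  = \nu^{\mathrm{T}} \frac{\partial^2 K}{\partial z^2}
\quad \Longrightarrow \quad
K(z,t) \propto \frac{1}{\sqrt{\nu^{\mathrm{T}} t}}
  \exp\!\left( -\frac{z^2}{4 \nu^{\mathrm{T}} t} \right),
\qquad d \sim \left( \nu^{\mathrm{T}} t \right)^{1/2} .
% Ballistic spreading at the inertial-wave group velocity,
% whose magnitude scales as \Omega^{F}/k for wavenumber k:
d \sim |c_{\mathrm{g}}| \, t \sim \frac{\Omega^{\mathrm{F}}}{k} \, t .
```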
\begin{figure}[htp] \centering \includegraphics[scale=0.32]{fig1.eps} \caption{Schematic diagram for oscillating-grid turbulence. $\mathbf{\Omega}^\mathrm{F}$ denotes the angular velocity of the system rotation.} \label{fig:1} \end{figure} Ranjan and Davidson \cite{rd2014} discussed in detail the relationship between the growth of the width of the turbulence region and the inertial waves observed in rotating oscillating-grid turbulence. The group velocity of an inertial wave is mostly directed along the rotation axis and its sign corresponds to the sign of the instantaneous helicity, $u_i' \omega_i'$ \cite{moffatt1970}; wave packets with negative (positive) instantaneous helicity propagate in the positive (negative) direction of the rotation axis. In fact, it was shown that the instantaneous helicity is spatially segregated in a statistical sense in their simulation \cite{rd2014}, which is consistent with the inertial wave propagation. These facts imply that in the RANS modeling, the energy transport enhanced in rotating oscillating-grid turbulence can be described in terms of the turbulent helicity. Inagaki \textit{et al}. \cite{inagakietal2017} showed that in the case of inhomogeneous helical turbulence subject to system rotation, the pressure diffusion term significantly contributes to the Reynolds stress transport. They also confirmed that the correlation between the velocity and the pressure fluctuation can be expressed by the product of the turbulent helicity and the absolute vorticity vector. In contrast to the model proposed by Yoshizawa \cite{yoshizawa2002}, the model proposed by Inagaki \textit{et al}. \cite{inagakietal2017} represents the energy flux in the direction parallel to the rotation axis. Thus, this model is expected to account for the enhanced energy transport in the direction parallel to the rotation axis in rotating inhomogeneous turbulence. 
Helical flow structures are often seen in engineering applications, such as the swirling flow in a straight pipe \cite{kitoh1991,steenbergen} or the swirling jet \cite{stepanovetal2018}, and in meteorological flows, including supercells \cite{lilly1986,nn2010}. It is possible that the turbulent helicity affects the energy flux in such flows. The model suggested by Inagaki \textit{et al}. \cite{inagakietal2017} may be useful for predicting such helical flows with rotation. In the context of magnetohydrodynamics (MHD), the turbulent helicity is known to be essential for the alpha dynamo effect \cite{moffattbook,krauseradler,bs2005}. The model of the energy flux associated with the turbulent helicity and rotation suggests that the turbulent helicity affects not only the mean magnetic field, but also the turbulent kinetic energy. The prediction of the turbulent kinetic energy is significant for the estimation of the turbulent time scale in the alpha coefficient in the RANS models of the MHD turbulence \cite{hamba2004,wby2016,wby2016-2}. In this study, we assess the validity of the model proposed by Inagaki \textit{et al}. \cite{inagakietal2017}, which is associated with the turbulent helicity, by using a DNS of freely decaying inhomogeneous turbulence with and without system rotation, whereby the rotation axis is parallel to the inhomogeneous direction of turbulence. The simulation configuration is similar to that proposed by Ranjan and Davidson \cite{rd2014}, in which the simulation of rotating turbulence is performed starting from a spatially confined homogeneous isotropic turbulence. We focus on the RANS modeling of the phenomenon in low to moderate rotation cases. This is because under such circumstances, neither the conventional gradient-diffusion approximation nor the linear inviscid solution of the Navier-Stokes equation is suitable for the description of the flow. 
Low to moderate rotation cases correspond to fully developed turbulent flows, and they are the target of the RANS modeling. In this study, the transport equation for the turbulent energy is examined, and the validity of the newly proposed model is assessed through comparison with the simulation results. Finally, the helical Rossby number is proposed as a criterion for judging the relative importance of the enhanced energy flux due to the turbulent helicity and the rotation in general turbulent flows. The organization of this paper is as follows. In Sec.~\ref{sec:level2}, the turbulent energy transport and the conventional model approach are described. In addition, a new model expression is proposed for the energy flux enhanced by the turbulent helicity and the rotation. In Sec.~\ref{sec:level3}, the numerical setup and the simulation results are presented. A discussion of the validity of the new model is presented in Sec.~\ref{sec:level4}, where the helical Rossby number is proposed as a criterion for judging the relative importance of the energy flux enhanced by the turbulent helicity and rotation in general turbulent flows. Finally, a summary is provided and conclusions are discussed in Sec.~\ref{sec:level5}. 
\section{\label{sec:level2}Turbulent energy transport enhanced by the turbulent helicity and rotation} The Navier-Stokes equation and the continuity equation for an incompressible fluid in a rotating system are given, respectively, by \begin{align} \frac{\partial u_i}{\partial t} & = - \frac{\partial}{\partial x_j} u_i u_j - \frac{\partial p}{\partial x_i} + \nu \nabla^2 u_i + 2 \epsilon_{ij\ell} u_j \Omega^\mathrm{F}_\ell + f^\mathrm{ex}_i, \label{eq:1} \\ \frac{\partial u_i}{\partial x_i} & = 0, \label{eq:2} \end{align} where $u_i$ is the $i$th component of velocity, $p$ the pressure divided by the fluid density with the centrifugal force included, $\nu$ the kinematic viscosity, $\nabla^2 (=\partial^2/\partial x_j \partial x_j)$ the Laplacian operator, $\Omega^\mathrm{F}_i$ the angular velocity of the system rotation, $\epsilon_{ij\ell}$ the alternating tensor, and $f^\mathrm{ex}_i$ the external forcing. The pressure is determined by the following Poisson equation: \begin{align} \nabla^2 p = - s_{ij} s_{ij} + \frac{1}{2} \omega_i \omega_i + 2 \omega_i \Omega^\mathrm{F}_i, \label{eq:3} \end{align} where $s_{ij} [ = (\partial u_i/\partial x_j + \partial u_j/\partial x_i )/2]$ is the strain rate of velocity and $\omega_i ( = \epsilon_{ij\ell} \partial u_\ell/\partial x_j )$ is the vorticity. In a non-rotating frame, the first two terms on the right-hand side of Eq.~(\ref{eq:3}) remain. The third term on the right-hand side of Eq.~(\ref{eq:3}) denotes the effect of the system rotation on the pressure. In order to directly evaluate the effects of rotation on the pressure, we decompose the pressure into a nonlinear component and a rotational component as $p = p^\mathrm{N} + p^\Omega$ where they are respectively defined as \begin{subequations} \begin{align} \nabla^2 p^\mathrm{N} & = - s_{ij} s_{ij} + \frac{1}{2} \omega_i \omega_i, \label{eq:4a} \\ \nabla^2 p^\Omega & = 2 \omega_i \Omega^\mathrm{F}_i. 
\label{eq:4b} \end{align} \end{subequations} Hereafter, we refer to $p^\mathrm{N}$ as the nonlinear pressure and refer to $p^\Omega$ as the rotational pressure. \subsection{\label{sec:level2a}Turbulent energy transport equation} In this study, the energy transport phenomena observed in both non-rotating and rotating oscillating-grid turbulence are discussed in terms of the RANS equation. We decompose the physical quantities $q [= (u_i, p, \omega_i)]$ into the mean and the fluctuation parts as \begin{align} q = Q + q', \ \ & Q = \left< q \right>, \label{eq:5} \end{align} where $\left< \right>$ denotes the Reynolds or ensemble averaging. Substituting Eq.~(\ref{eq:5}) into Eqs.~(\ref{eq:1}) and (\ref{eq:2}), we can derive the equations for the mean velocity and the velocity fluctuation. Then, the equation for the turbulent energy $K (= \langle u_i' u_i' \rangle/2)$ is written as \begin{align} \frac{\partial K}{\partial t} + \frac{\partial}{\partial x_i} U_i K = P^K - \varepsilon + T^K + \Pi^K + D^K + F^K. \label{eq:6} \end{align} Here, $P^K$ is the production rate, $\varepsilon$ the dissipation rate, $T^K$ the turbulent diffusion, $\Pi^K$ the pressure diffusion, $D^K$ the viscous diffusion, and $F^K$ the work done by the external forcing. They are respectively defined as \begin{subequations} \begin{align} P^K & = - R_{ij} S_{ij}, \label{eq:7a} \\ \varepsilon & = \nu \left< \frac{\partial u_i'}{\partial x_j} \frac{\partial u_i'}{\partial x_j} \right>, \label{eq:7b} \\ T^K & = -\frac{\partial}{\partial x_i} \left< u_i' \frac{1}{2} u_j' u_j' \right>, \label{eq:7c} \\ \Pi^K & = -\frac{\partial}{\partial x_i} \left< u_i' p' \right>, \label{eq:7d} \\ D^K & = \nu \nabla^2 K, \label{eq:7e} \\ F^K & = \left< u_i' f^\mathrm{ex}_i{}' \right>, \label{eq:7f} \end{align} \end{subequations} where $S_{ij} [ = (\partial U_i/\partial x_j + \partial U_j/\partial x_i )/2]$ denotes the strain rate of the mean velocity and $R_{ij} (= \langle u_i' u_j' \rangle)$ is the Reynolds stress. 
It should be noted that the angular velocity of the system rotation does not appear explicitly in Eqs.~(\ref{eq:6}) and (\ref{eq:7a})--(\ref{eq:7f}) since the Coriolis force does not perform work. However, the effect of the system rotation on the turbulent energy transport should be incorporated through the rotational pressure [Eq.~(\ref{eq:4b})]. Then, we decompose the pressure diffusion term as \begin{align} \Pi^K = \Pi^\mathrm{N} + \Pi^\Omega, \label{eq:8} \end{align} where $\Pi^\mathrm{N}$ and $\Pi^\Omega$ are respectively defined as \begin{subequations} \begin{align} \Pi^\mathrm{N} & = -\frac{\partial}{\partial x_i} \left< u_i' p^\mathrm{N}{}' \right>, \label{eq:9a} \\ \Pi^\Omega & = -\frac{\partial}{\partial x_i} \left< u_i' p^\Omega{}' \right>. \label{eq:9b} \end{align} \end{subequations} Hereafter, we refer to $\Pi^\mathrm{N}$ as the nonlinear pressure diffusion and refer to $\Pi^\Omega$ as the rotational pressure diffusion. \subsection{\label{sec:level2b}Diffusion problem described in terms of the RANS equation} In the case of oscillating-grid turbulence, there is no mean velocity and the flow is homogeneous in two directions and inhomogeneous in one direction. Hereafter, we take the direction of the flow inhomogeneity and the rotation axis to be $z$. Then, the equation for $K$ is written as \begin{align} \frac{\partial K}{\partial t} = - \varepsilon - \frac{\partial}{\partial z} \left( \left< u_z' \frac{1}{2} u_i' u_i' \right> + \left< u_z' p^\mathrm{N}{}' \right> + \left< u_z' p^\Omega{}' \right> \right) + \nu \frac{\partial^2 K}{\partial z^2} + F^K. \label{eq:10} \end{align} In this case, $F^K$ represents the energy injection due to the grid oscillation. Here, all terms except for the viscous diffusion term are unknown variables and need to be modeled. 
In the conventional RANS modeling, $\varepsilon$ is usually obtained by solving its transport equation, and the diffusion terms are modeled by the gradient-diffusion approximation as \begin{align} \left< u_i' \frac{1}{2} u_j' u_j' \right> + \left< u_i' p^\mathrm{N}{}' \right> + \left< u_i' p^\Omega{}' \right> = - \frac{\nu^\mathrm{T}}{\sigma_K} \frac{\partial K}{\partial x_i}, \label{eq:11} \end{align} where $\nu^\mathrm{T}$ is the eddy-viscosity coefficient expressed by $\nu^\mathrm{T} = C_\nu K^2/\varepsilon$ in which $C_\nu$ and $\sigma_K$ are model constants. For high-Reynolds-number turbulence, the diffusion by the kinematic viscosity is negligible and the model is given by \begin{align} \frac{\partial K}{\partial t} = - \varepsilon + \frac{\partial}{\partial z} \left( \frac{C_\nu}{\sigma_K} \frac{K^2}{\varepsilon} \frac{\partial K}{\partial z} \right) + F^K. \label{eq:12} \end{align} This model accurately predicts a non-rotating oscillating-grid turbulence since the eddy viscosity represents the energy diffusion due to turbulent mixing. The gradient-diffusion approximation for the energy flux is consistent with the experimentally observed growth of the width of the turbulence region $d$, where $d \sim t^{1/2}$ \cite{dl1978}. Moreover, Matsunaga \textit{et al}. \cite{matsunagaetal1999} revealed that the spatial distribution of the turbulent energy in a steady state of the oscillating-grid turbulence can be predicted using the RANS model described by the gradient-diffusion approximation. However, the model given by Eq.~(\ref{eq:11}) does not contain the effects of system rotation. As such, this model cannot account for the enhancement of the energy transport observed in rotating oscillating-grid turbulence \cite{dl1983,dsd2006,rd2014,kolvinetal2009,gl1999}. There are some elaborate RANS models in which the effects of system rotation are incorporated. 
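The diffusive scaling implied by Eq.~(\ref{eq:12}) can be illustrated with a minimal numerical sketch. Here the eddy diffusivity is frozen at a constant value and the dissipation and forcing terms are dropped, a strong simplification of the actual model; the second moment of the $K$ profile then grows linearly in time, i.e., the width grows as $t^{1/2}$:

```python
import math

# Minimal sketch of the gradient-diffusion model, Eq. (12), with the eddy
# diffusivity frozen at a constant value and dissipation/forcing dropped;
# this only illustrates the d ~ t^{1/2} scaling, not the full model.
nu_t = 1.0                      # frozen eddy diffusivity
N, dz, dt = 201, 0.1, 0.002     # grid size, spacing, stable time step
z = [dz * (i - N // 2) for i in range(N)]
K = [math.exp(-zi**2 / (2 * 0.5**2)) for zi in z]   # initial Gaussian profile

def variance(K):
    """Second moment of the energy profile, a measure of the squared width."""
    return sum(Ki * zi**2 for Ki, zi in zip(K, z)) / sum(K)

var0 = variance(K)
steps = 250                     # integrate to t = 0.5
for _ in range(steps):
    lap = [0.0] * N
    for i in range(1, N - 1):
        lap[i] = (K[i + 1] - 2 * K[i] + K[i - 1]) / dz**2
    K = [Ki + nu_t * dt * li for Ki, li in zip(K, lap)]

# For pure diffusion the variance grows as 2*nu_t*t, hence d ~ t^{1/2}.
t = steps * dt
assert abs(variance(K) - var0 - 2 * nu_t * t) < 1e-3
```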
For example, the effect of the reduction of the energy cascade is considered by modifying the transport equation for $\varepsilon$ \cite{bfr1985,okamoto1995}, and rotation-dependent model coefficients are proposed with the aid of the algebraic Reynolds stress model procedure \cite{syb2000,wj2000}. Although these effects are essential for describing some effects of rotation on turbulence, they are insufficient to predict the energy transport enhanced in rotating oscillating-grid turbulence. This is because these models are based on the gradient-diffusion approximation, so they cannot account for the rapid growth of the width of the turbulence region as $d \sim t$, which was confirmed by several previous works \cite{dl1983,dsd2006,kolvinetal2009,rd2014}. Yoshizawa \cite{tsdia} developed a statistical closure theory for inhomogeneous turbulence which is called the two-scale direct-interaction approximation (TSDIA). Yoshizawa \cite{yoshizawa2002} proposed a model for the pressure diffusion term containing the effects of the mean shear and the mean rotation with the aid of the TSDIA. The model expression is written as \begin{align} \left< u_i' p' \right> = C_{KPS} \frac{K^3}{\varepsilon^2} S_{ij} \frac{\partial K}{\partial x_j} + C_{KP\Omega} \frac{K^3}{\varepsilon^2} W_{ij} \frac{\partial K}{\partial x_j}, \label{eq:13} \end{align} where $W_{ij} [ = (\partial U_i/\partial x_j - \partial U_j/\partial x_i )/2 - \epsilon_{ij\ell} \Omega^\mathrm{F}_\ell]$ is the mean absolute vorticity tensor, and $C_{KPS}$ and $C_{KP\Omega}$ are model constants. In the case where there is no mean velocity and the system is rotating, the model is given by \begin{align} \left< u_i' p' \right> = - C_{KP\Omega} \frac{K^3}{\varepsilon^2} \epsilon_{ij \ell} \Omega^\mathrm{F}_\ell \frac{\partial K}{\partial x_j}. 
\label{eq:14} \end{align} In this model, the relationship $\langle u_i' p' \rangle \Omega^\mathrm{F}_i = 0$ holds; that is, the velocity-pressure correlation is orthogonal to the angular velocity vector of the rotation. The model given by Eq.~(\ref{eq:14}) represents the energy flux in the direction perpendicular to the rotation axis. Hence, it cannot account for the energy transport enhanced in the direction parallel to the rotation axis. In summary, previous models cannot account for the enhancement of the energy transport in the direction parallel to the rotation axis. \subsection{\label{sec:level2c}A model for energy flux enhanced by the turbulent helicity and system rotation} Inagaki \textit{et al}. \cite{inagakietal2017} showed that the pressure diffusion term significantly contributes to the Reynolds stress transport in rotating inhomogeneous turbulence accompanied with the turbulent helicity. In their work, the effect of the system rotation on the velocity-pressure fluctuation correlation was analytically obtained with the aid of the TSDIA \cite{tsdia}. Detailed calculations are provided in Appendix~\ref{sec:a}. As a result, we obtain the following model: \begin{align} \left< u_i' p^\Omega{}' \right> = - C_\Omega \frac{K^3}{\varepsilon^2} H 2 \Omega^\mathrm{F}_i, \label{eq:15} \end{align} where $H (= \langle u_i' \omega_i' \rangle)$ is the turbulent helicity and $C_\Omega$ is a model constant. Since the turbulent helicity is not the total amount in volume but the statistically averaged value at one point, it can vary in time and space. Therefore, the pressure diffusion due to the rotational pressure is expressed as \begin{align} \Pi^\Omega = \frac{\partial}{\partial x_i} \left( C_\Omega \frac{K^3}{\varepsilon^2} H 2 \Omega^\mathrm{F}_i \right). \label{eq:16} \end{align} As seen from Eqs.~(\ref{eq:6}) and (\ref{eq:7d}), the correlation between the velocity and the pressure fluctuation is interpreted as the energy flux due to the pressure. 
Hereafter, we refer to $\langle u_i' p^\Omega{}' \rangle$ as the rotational pressure flux. Equation~(\ref{eq:15}) indicates that the negative turbulent helicity, $H<0$, invokes an energy flux parallel to the rotation axis, while the positive turbulent helicity, $H>0$, invokes a flux anti-parallel to the rotation axis. This property corresponds to the group velocity of inertial waves; the wave packets with negative instantaneous helicity, $u_i' \omega_i' < 0$, propagate upward, while the packets with positive instantaneous helicity, $u_i' \omega_i' > 0$, propagate downward. Thus, this model of the rotational pressure flux plays a similar role to that of group velocity of inertial waves. The detailed expression of the group velocity of an inertial wave is given in Appendix~\ref{sec:b}. This model is expected to account for the enhancement of the energy transport observed in a rotating oscillating-grid turbulence \cite{dl1983,dsd2006,rd2014,kolvinetal2009,gl1999}. In fact, in the simulation of Ranjan and Davidson \cite{rd2014}, negative helicity is dominant in the upper side of the turbulent cloud, while positive helicity is dominant in the lower side, so that energy is transferred outward from the cloud. Moreover, the rotational pressure flux can be interpreted as the energy flux due to the inertial waves in linear inviscid limit. This point is discussed in Appendix~\ref{sec:c}. It should be noted that the effects of rotation should appear not only in the case of solid body rotation of the system, but also in the case in which the non-trivial mean vorticity exists. 
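For reference, the link between the sign of the helicity and the propagation direction can be sketched with the standard inertial-wave relations (cf. Appendix~\ref{sec:b}; the polarization convention below is an assumption of this sketch, and other sign conventions appear in the literature):

```latex
% Plane wave u_i' \propto \exp[\mathrm{i}(k_j x_j - \sigma t)] with helical
% polarization \epsilon_{ij\ell} \mathrm{i} k_j u_\ell' = s k u_i'
% (s = \pm 1, k the wavevector magnitude), so that the helicity density
% u_i' \omega_i' has the sign of s. The dispersion relation and the
% z component of the group velocity are then
\sigma_s = - \frac{2 s \Omega^\mathrm{F} k_z}{k}, \qquad
c_{\mathrm{g}z} = \frac{\partial \sigma_s}{\partial k_z}
  = - \frac{2 s \Omega^\mathrm{F}}{k}
    \left( 1 - \frac{k_z^2}{k^2} \right).
```

A packet with $s = -1$ (negative helicity) thus has $c_{\mathrm{g}z} > 0$ and propagates upward, consistent with the segregation described above.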
From the viewpoint of the covariance of the model expression \cite{ariki2015}, Eq.~(\ref{eq:15}) should be written as \begin{align} \left< u_i' p^\Omega{}' \right> = - C_\Omega \frac{K^3}{\varepsilon^2} H \Omega^\mathrm{A}_i, \label{eq:17} \end{align} where $\Omega^\mathrm{A}_i (= \Omega_i + 2\Omega^\mathrm{F}_i)$ denotes the mean absolute vorticity vector and $\Omega_i = \epsilon_{ij\ell} \partial U_\ell /\partial x_j$. Hence, this effect of the turbulent helicity on the energy flux must be important in predicting the turbulent energy distribution in helical flows such as the swirling flow in a straight pipe \cite{kitoh1991,steenbergen}, the swirling jet \cite{stepanovetal2018}, and supercells \cite{lilly1986,nn2010}, and also in the RANS model of the MHD turbulence \cite{hamba2004,wby2016,wby2016-2}. \section{\label{sec:level3}Numerical simulation} \subsection{\label{sec:level3a}Numerical setup} In order to assess the validity of the model given by Eq.~(\ref{eq:15}), a DNS of inhomogeneous turbulence subject to system rotation is performed. The flow configuration is similar to that proposed by Ranjan and Davidson \cite{rd2014}. The computational domain is $L_x \times L_y \times L_z = 2\pi \times 2 \pi \times 2\pi$ and the number of grid points is $512^3$. The pseudo-spectral method is used and the aliasing error is eliminated by using the phase-shift method. For the time integration, the 3rd-order Runge-Kutta scheme is adopted for the nonlinear term, while the viscous and the Coriolis terms are solved exactly using the integrating factor technique \cite{integraltechnique}. The initial velocity field is given by a homogeneous isotropic turbulence confined around the $z=0$ plane, and the rotation axis is directed along the $z$ axis. The parameters for the simulations are shown in Table \ref{tb:1}. 
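As a minimal illustration of the pseudo-spectral treatment, a Poisson equation such as Eqs.~(\ref{eq:4a}) and (\ref{eq:4b}) reduces in Fourier space to division by $-k^2$. The following one-dimensional periodic sketch is illustrative only; the production solver is three-dimensional and FFT-based, and the mode count and source below are arbitrary choices:

```python
import math

# 1-D periodic Poisson solve by Fourier projection: d^2 p / dx^2 = src
# on [0, 2*pi). In spectral space, -k^2 p_hat = src_hat.
N = 64
x = [2 * math.pi * i / N for i in range(N)]

def poisson_1d(src, modes=16):
    """Invert the Laplacian for a zero-mean periodic source."""
    p = [0.0] * N
    for k in range(1, modes + 1):
        # Fourier coefficients of the source; the rectangle rule is exact
        # for band-limited periodic data on a uniform grid.
        a = 2.0 / N * sum(s * math.cos(k * xi) for s, xi in zip(src, x))
        b = 2.0 / N * sum(s * math.sin(k * xi) for s, xi in zip(src, x))
        for i, xi in enumerate(x):
            # p_hat = -src_hat / k^2
            p[i] += -(a * math.cos(k * xi) + b * math.sin(k * xi)) / k**2
    return p

src = [math.sin(3 * xi) for xi in x]
p = poisson_1d(src)
exact = [-math.sin(3 * xi) / 9 for xi in x]   # since (sin 3x)'' = -9 sin 3x
assert max(abs(pi - ei) for pi, ei in zip(p, exact)) < 1e-12
```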
Here, the Reynolds number $\mathrm{Re}$ and the Rossby number $\mathrm{Ro}$ are respectively defined as \begin{align} \mathrm{Re} = \frac{K_0^2}{\nu \varepsilon_0}, \ \ \mathrm{Ro} = \frac{\varepsilon_0/K_0}{2 \Omega^\mathrm{F}} , \label{eq:18} \end{align} where $K_0 = K |_{z=0,t=0} (= 0.704)$, $\varepsilon_0 = \varepsilon |_{z=0,t=0} (= 1.24)$, and $\Omega^\mathrm{F}$ denotes the absolute value of the angular velocity of the system rotation. \begin{table}[t] \centering \caption{Simulation parameters. In all runs, the kinematic viscosity is set to $\nu = 10^{-3}$ and the Reynolds number is $\mathrm{Re} = 400$.} \begin{ruledtabular} \begin{tabular}{ccc} Run & $\Omega^\mathrm{F}$ & $\mathrm{Ro}$ \\ \hline 0 & $0$ & $\infty$ \\ 08 & $0.8$ & $1.10$ \\ 1 & $1$ & $0.880$ \\ 2 & $2$ & $0.440$ \\ 5 & $5$ & $0.176$ \\ \end{tabular} \end{ruledtabular} \label{tb:1} \end{table} In order to generate the solenoidal initial velocity, we perform a pre-computation of decaying homogeneous isotropic turbulence. For the initial condition of the pre-computation, the energy spectrum is set to $E(k) \propto k^4 \exp [ -2(k/k^\mathrm{p})^2]$, where $k^\mathrm{p} = 6$ and $K = \int_0^\infty \mathrm{d}k \ E(k) = 2$. Here, we use the velocity field $u^\mathrm{hit}_i$ at the time $\sqrt{2K/3}|_{t=0} k^\mathrm{p} t = 7.97$, at which the energy has been transferred to the high-wavenumber region and the energy dissipation rate starts to decay. The stream function $\psi_i$ is introduced, which satisfies $u^\mathrm{hit}_i = \epsilon_{ij\ell} \partial \psi_\ell / \partial x_j$ and $\nabla^2 \psi_i = 0$. Then, the initial velocity field for the main computation of the inhomogeneous turbulence, $u^\mathrm{ini}_i$, is given by \begin{align} u^\mathrm{ini}_i = \epsilon_{ij\ell} \frac{\partial}{\partial x_j} \left[ g(z) \psi_\ell \right]. 
\label{eq:19} \end{align} Here, $g(z)$ is a weighting function which confines the velocity field around $z=0$; it is set to $g(z) = \exp [ - (z/\sigma)^4]$ with $\sigma = L_z /8 = 0.785$. The integral length scale obtained from the $u^\mathrm{hit}_i$ field is $L^\mathrm{int} [= 3\pi/(4K) \int_0^\infty \mathrm{d} k \ k^{-1} E(k)] = 0.516$. Therefore, the width of the confined turbulent region, $2\sigma = 1.57$, is three times as wide as the integral length scale $L^\mathrm{int}$. \subsection{\label{sec:level3b}Results} In the following results, statistical quantities are obtained by averaging over the $x$-$y$ plane. Thus, the statistical quantities depend only on $z$ and $t$. \subsubsection{\label{sec:level3b1}Spatial distribution of the turbulent energy} Figure \ref{fig:2} shows the spatial distribution of the turbulent energy at each time for runs 0 and 1. In both cases, the energy at $|z| < 1$ decreases while that at $|z| > 1$ increases as time progresses. This represents the outward energy transfer. In the rotating case (run 1), the energy increase at $|z| > 1$ is faster and the resulting energy transfer is rapid compared with the non-rotating case (run 0). The spatial distribution of the turbulent energy at $2\Omega^\mathrm{F}t = 2$ for the four runs of the rotating cases is shown in Fig.~\ref{fig:3}(a). Here, `linear inv.' denotes the solution of the linear inviscid equation given by Eq.~(\ref{eq:b1}). In the outer region at $|z| > 1$, all lines almost overlap, while in the center region at $|z| < 1$, the values are quite different from each other. It is seen that the spatial distribution of $K$ asymptotically approaches the linear inviscid solution as the rotation rate increases. In Fig.~\ref{fig:3}(b), we compare the numerical solution of the Navier-Stokes equation, the linear inviscid solution, and the linear viscous solution obtained from the Navier-Stokes equation without the nonlinear term for run 1, whose Rossby number is moderate. 
It is clearly seen that the linear viscous solution does not overlap with the numerical solution of the nonlinear Navier-Stokes equation. This result indicates that the nonlinearity is not negligible in the center region at $|z| < 1$ for the moderate-Rossby-number case. \begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig2.eps} \caption{Spatial distribution of turbulent energy at each time for (a) run 0 and (b) run 1.} \label{fig:2} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig3.eps} \caption{Spatial distribution of the turbulent energy at $2\Omega^\mathrm{F}t = 2$: (a) comparison between four runs of the rotating cases with the linear inviscid solution (linear inv.) and (b) comparison between the numerical solution of run 1, the linear inviscid solution, and the solution of run 1 obtained from the Navier-Stokes equation without the nonlinear term (linear vis.).} \label{fig:3} \end{figure} Dickinson and Long \cite{dl1983} suggested that the growth of the width of the turbulence region $d$ varies according to the rotation rate. Here, we define the width of the turbulence region as $d = |z (K = 0.02 K_0)|$; that is, the location of the turbulence edge where the turbulent energy takes the value $K = 0.02K_0$, which is similar to the method of previous studies \cite{dl1983,dsd2006,kolvinetal2009,rd2014}. The time evolution of $d$ is shown in Fig.~\ref{fig:4}(a). The width of the turbulence region for the rotating cases appears to grow linearly, while that for the non-rotating case (run 0) saturates at $t = 1.5$. The growth for run 0 is not exactly the same as the experimental result of $d \sim t^{1/2}$ \cite{dl1978} because of the absence of energy injection in this simulation. For the rotating cases, the growth rate increases as the rotation rate increases. The flux of the turbulent energy in the direction parallel to the rotation axis depends on the rotation rate. 
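The edge detection used for $d$ can be sketched in a few lines; the confined profile below is a hypothetical stand-in for the DNS data, used only to exercise the threshold crossing and linear interpolation:

```python
import math

def edge_width(z, K, K0, frac=0.02):
    # Half-width d = |z| where K(z) first drops below frac*K0 on the
    # z >= 0 side, located by linear interpolation between grid points.
    thr = frac * K0
    for i in range(len(z) - 1):
        if K[i] >= thr > K[i + 1]:
            w = (K[i] - thr) / (K[i] - K[i + 1])
            return z[i] + w * (z[i + 1] - z[i])
    return z[-1]

z = [0.01 * n for n in range(400)]               # grid on 0 <= z <= 4
K0 = 0.704
K = [K0 * math.exp(-(x / 0.8) ** 4) for x in z]  # confined energy profile
d = edge_width(z, K, K0)
assert abs(K0 * math.exp(-(d / 0.8) ** 4) - 0.02 * K0) < 1e-3
```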
Previous studies showed that, for rotating cases, the growth curves of the width of the turbulence region overlap when time is normalized by the angular velocity of the system rotation \cite{dl1983,dsd2006,kolvinetal2009,rd2014}. Figure \ref{fig:4}(b) shows the growth of the width of the turbulence region against time normalized by the angular velocity of the system rotation. The linear inviscid solution is also plotted. In all of the rotating cases except for run 08, the lines overlap and are similar to the linear inviscid solution. This result suggests that the growth of the width of the turbulence region of rapidly rotating flows can be estimated by the linear inviscid equation given by Eq.~(\ref{eq:b1}). On the other hand, the gradient of the growth for run 08 is not as steep as that of the other runs in Fig.~\ref{fig:4}(b). This result indicates that not only the energy transport due to the rotation, but also the diffusion due to the nonlinearity of the turbulence is important for moderate-Rossby-number flows. \begin{figure}[htp] \centering \includegraphics[scale=0.63]{fig4.eps} \caption{Growth of the width of the turbulence region against time. In (b), time is normalized by the angular velocity of the system rotation. The result of the linear inviscid case is also plotted.} \label{fig:4} \end{figure} \subsubsection{\label{sec:level3b2}Budget of the turbulent energy transport equation} The budget of the turbulent energy transport equation (\ref{eq:10}) for runs 0, 1, and 5 at $2\Omega^\mathrm{F}t = 2$ ($t=1$ for run 0) is shown in Fig.~\ref{fig:5}. Note that $F^K = 0$; that is, the work done by the external forcing is zero in the simulation. In Fig.~\ref{fig:5}(a) for run 0, the turbulent energy dissipation is dominant at $|z|<1$ and the energy is transferred mainly by the turbulent diffusion given by Eq.~(\ref{eq:7c}) near $z = \pm 1$. 
On the other hand, in Fig.~\ref{fig:5}(b) for run 1, the rotational pressure diffusion also contributes to the energy transfer at $z = \pm 1$. In Fig.~\ref{fig:5}(c) for run 5, the intensity of the rotational pressure diffusion is much larger than that of run 1 at the same time, where the time is normalized by the angular velocity of the system rotation. The turbulent energy increases at $1 < |z| < 2$ solely owing to the rotational pressure diffusion. As the Rossby number decreases, the contribution of the rotational pressure diffusion increases. \begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig5.eps} \caption{Budget of the turbulent energy transport equation for (a) run 0 at $t=1$, (b) run 1 at $t=1$ ($2\Omega^\mathrm{F}t=2$), and (c) run 5 at $t = 0.2$ ($2\Omega^\mathrm{F}t=2$).} \label{fig:5} \end{figure} \section{\label{sec:level4}Discussion} \subsection{\label{sec:level4a}Evaluation of the model for the rotational pressure flux} As seen in Fig.~\ref{fig:5}, the rotational pressure diffusion term significantly contributes to the budget of the turbulent energy transport. This result indicates that the energy flux due to turbulence is enhanced by the system rotation. First, we examine whether the enhancement of the energy flux can be predicted by the conventional gradient-diffusion approximation. Figure~\ref{fig:6} shows the comparison between the total energy flux due to turbulence, $\langle u_z' p' \rangle + \langle u_z' u_i' u_i'/2 \rangle$, and the conventional gradient-diffusion approximation given by Eq.~(\ref{eq:11}) for runs 0 and 1 at $t=1$ and $2$ ($2\Omega^\mathrm{F}t=2$ and $4$ for run 1). The model constant is chosen as $C_\nu/\sigma_K = 0.22$ so that the agreement is good for run 0. This value of the model constant is more than twice as large as the conventional value $C_\nu/\sigma_K = 0.09$ \cite{matsunagaetal1999,yoshizawabook}. 
For run 0, the spatial distribution of the total energy flux can be predicted by the gradient-diffusion approximation, although the model constant is large. On the other hand, for run 1, the energy flux is under-predicted by the gradient-diffusion approximation. In particular, the broad spatial distribution of the energy flux at $-3 < z < 3$ at $2\Omega^\mathrm{F} t = 4$ is not predicted. Figure~\ref{fig:7} shows the comparison between the energy flux due to the nonlinearity, $\langle u_z' p^\mathrm{N}{}' \rangle + \langle u_z' u_i' u_i'/2 \rangle$, and the gradient-diffusion approximation for runs 08 and 1 at $2\Omega^\mathrm{F}t=2$ and $4$. The model constant is the same as in Fig.~\ref{fig:6}. In contrast to Fig.~\ref{fig:6}(b), the energy flux due to the nonlinearity agrees fairly well with the gradient-diffusion approximation for both runs. It is thus clearly shown that the conventional gradient-diffusion approximation can predict the energy flux due to the nonlinearity, but cannot account for the rotational pressure flux, $\langle u_z' p^\Omega{}' \rangle$, enhanced by the system rotation. Therefore, a new model is required to account for the rotational pressure flux. 
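For reference, the down-gradient character of the closure can be made explicit in a small sketch, assuming the standard eddy-viscosity form $-(C_\nu/\sigma_K)(K^2/\varepsilon)\,\partial K/\partial z$ for Eq.~(\ref{eq:11}); the input values are illustrative:

```python
def gradient_diffusion_flux(K, eps, dKdz, C_over_sigma=0.22):
    # Eddy-viscosity closure: flux = -(C_nu/sigma_K) * (K^2/eps) * dK/dz.
    return -C_over_sigma * K ** 2 / eps * dKdz

# The modelled flux is strictly down-gradient: it vanishes wherever the
# energy profile is flat, so it cannot carry energy far outside the
# turbulent cloud, whatever the rotation rate.
assert gradient_diffusion_flux(0.7, 1.2, dKdz=0.0) == 0.0
assert gradient_diffusion_flux(0.7, 1.2, dKdz=-1.0) > 0.0
```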
\begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig6.eps} \caption{Comparison between the total energy flux due to turbulence, $\langle u_z' p' \rangle + \langle u_z' u_i' u_i'/2 \rangle$, and the gradient-diffusion approximation given by Eq.~(\ref{eq:11}) for (a) run 0 and (b) run 1 at $t=1$ and $2$ ($2\Omega^\mathrm{F}t=2$ and $4$ for run 1).} \label{fig:6} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig7.eps} \caption{Comparison between the energy flux due to the nonlinearity, $\langle u_z' p^\mathrm{N}{}' \rangle + \langle u_z' u_i' u_i'/2 \rangle$, and the gradient-diffusion approximation given by the right-hand side of Eq.~(\ref{eq:11}) for (a) run 08 and (b) run 1 at $2\Omega^\mathrm{F} t = 2$ and $4$.} \label{fig:7} \end{figure} Next, we examine the newly proposed model given by Eq.~(\ref{eq:15}). Figures~\ref{fig:8}(a) and \ref{fig:8}(b) respectively show the spatial distribution of the turbulent helicity for run 0 and run 1 at each time. For run 1, it is clearly seen that negative turbulent helicity is dominant at $z > 0$, while positive turbulent helicity is dominant at $z < 0$. Although the initial condition also has a negative turbulent helicity at $z > 0$ owing to insufficient statistical averaging, most of the turbulent helicity shown in Fig.~\ref{fig:8}(b) is not due to the initial condition but is generated by the system-rotation effect at $t>0$. This is suggested by the fact that the turbulent helicity simply decays for run 0 with no system rotation [Fig.~\ref{fig:8}(a)]. The same sign segregation of the turbulent helicity as in run 1 was observed in previous simulations \cite{rd2014,gl1999}, and this result may be expected from the viewpoint of inertial-wave propagation, as stated in Appendix~\ref{sec:b}. 
In the context of the RANS equation, this spatial distribution of the turbulent helicity, antisymmetric about $z=0$, can also be explained by considering the transport equation for the turbulent helicity. It is given by \cite{yy1993} \begin{align} \frac{\partial H}{\partial t} = 2\Omega^\mathrm{F}_z \frac{\partial K}{\partial z} + \cdots, \label{eq:20} \end{align} where only the production term is written on the right-hand side for simplicity. Since the turbulent energy is confined near $z=0$ in the initial condition, $\partial K/\partial z$ is negative at $z>0$, while it is positive at $z < 0$. Because $\Omega^\mathrm{F}_z > 0$, a negative $H$ is generated at $z>0$, while a positive $H$ is generated at $z<0$ for the rotating cases, as observed in Fig.~\ref{fig:8}(b). The spatial distribution of the rotational pressure flux is shown in Fig.~\ref{fig:8}(c). At each time, it is similar to that of the turbulent helicity with a negative coefficient. The same tendency is seen for the other runs with system rotation. This result suggests that the model expression of the energy flux in terms of the turbulent helicity given by Eq.~(\ref{eq:15}) is qualitatively good. Figure~\ref{fig:9} shows the comparison between the rotational pressure flux and its model given by Eq.~(\ref{eq:15}) with $C_\Omega = 0.03$ for runs 08 and 1 at $2\Omega^\mathrm{F}t=2$ and $4$. The present model predicts the broad spatial distribution of the rotational pressure flux, which cannot be reproduced by the gradient-diffusion approximation. Therefore, the proposed expression is potentially a good candidate for the model of the rotational pressure flux. 
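The sign chain described above (confinement makes $\partial K/\partial z$ antisymmetric, rotation then produces antisymmetric $H$, and the model converts this into an outward flux on both sides of the cloud) can be checked mechanically. The sketch takes Eq.~(\ref{eq:15}) in the form $-C_\Omega (K^3/\varepsilon^2) H\, 2\Omega^\mathrm{F}_z$ and uses illustrative magnitudes only:

```python
def helicity_production(dKdz, Omega_z):
    # Leading production term of Eq. (20): dH/dt ~ 2 * Omega_z * dK/dz.
    return 2.0 * Omega_z * dKdz

def model_flux(K, eps, H, Omega_z, C_Omega=0.03):
    # Model (15): <u_z' p^Omega'> = -C_Omega * (K^3/eps^2) * H * 2*Omega_z.
    return -C_Omega * K ** 3 / eps ** 2 * H * 2.0 * Omega_z

Omega = 1.0
H_up = helicity_production(dKdz=-1.0, Omega_z=Omega)   # z > 0 side
H_dn = helicity_production(dKdz=+1.0, Omega_z=Omega)   # z < 0 side
assert H_up < 0 < H_dn
# The induced flux is outward on both sides of the turbulent cloud:
assert model_flux(0.7, 1.2, H_up, Omega) > 0           # upward at z > 0
assert model_flux(0.7, 1.2, H_dn, Omega) < 0           # downward at z < 0
```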
\begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig8.eps} \caption{Spatial distribution of the turbulent helicity for (a) run 0 and (b) run 1, and (c) the rotational pressure flux for run 1 at each time.} \label{fig:8} \end{figure} \begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig9.eps} \caption{Comparison between the rotational pressure flux and its model given by Eq.~(\ref{eq:15}) for (a) run 08 and (b) run 1 at $2\Omega^\mathrm{F} t = 2$ and $4$.} \label{fig:9} \end{figure} \subsection{\label{sec:level4b}Consistency of the model from the analytical viewpoint} In Fig.~\ref{fig:9}, the prediction by the proposed model at $2\Omega^\mathrm{F} t = 2$ is good, but its accuracy decreases at $2\Omega^\mathrm{F}t = 4$ in the sense that the present model overestimates the DNS value, especially at $1 < |z| < 2$. The disagreement at $1 < |z| < 2$ at $2\Omega^\mathrm{F} t = 4$ arises partly because the dissipation rate $\varepsilon$ is not adequate for expressing the coefficient in Eq.~(\ref{eq:15}). The coefficient $K^3/\varepsilon^2$ in Eq.~(\ref{eq:15}) represents the square of the turbulent length scale for the rotational pressure flux. In RANS modeling, $\varepsilon$ is often interpreted as the energy cascade rate from the large scales to the small scales in addition to the dissipation rate \cite{yoshizawabook}. In the case of fully developed turbulence or a statistical-equilibrium state, the energy cascade rate related to the large scales and the dissipation rate related to the small scales are considered to be almost equal; thus, the dissipation rate can be used to express the turbulent length scale. However, as seen in Figs.~\ref{fig:5}(b) and \ref{fig:5}(c), the dissipation rate is much less than the rotational pressure diffusion at $1 < |z| < 2$ in this simulation, and the turbulent field is not in an equilibrium state. The dissipation rate is not balanced by the energy cascade rate, which is closely related to the turbulent length scale. 
Therefore, it is possible that the expression $K^3/\varepsilon^2$ overestimates the coefficient of the model given by Eq.~(\ref{eq:15}). In order to correct the model expression (\ref{eq:15}), we analyze the rotational pressure flux theoretically. Since the flow is homogeneous in the $x$ and $y$ directions, the Fourier transformation is applicable in these directions. Here, the Fourier transformation of $q (\mathbf{x})$ in the homogeneous directions is defined as \begin{subequations} \begin{align} q (\mathbf{x}) & = \int \mathrm{d} \mathbf{k}_\perp \hat{q} (\mathbf{k}_\perp,z) \mathrm{e}^{i \mathbf{k}_\perp \cdot \mathbf{x}_\perp}, \label{eq:21a} \\ \hat{q} (\mathbf{k}_\perp, z) & = \frac{1}{(2\pi)^2} \int \mathrm{d} \mathbf{x}_\perp q (\mathbf{x}) \mathrm{e}^{-i \mathbf{k}_\perp \cdot \mathbf{x}_\perp}, \label{eq:21b} \end{align} \end{subequations} where $\mathbf{k}_\perp = (k_x, k_y)$ and $\mathbf{x}_\perp = (x, y)$. When there is no solid wall, the Poisson equation for the rotational pressure given by Eq.~(\ref{eq:4b}) is solved as \begin{align} \hat{p}^\Omega (\mathbf{k}_\perp,z)= - 2 \Omega^\mathrm{F}_z \int_{-\infty}^\infty \mathrm{d} z' \frac{1}{2k_\perp} \mathrm{e}^{-k_\perp |z-z'|} \hat{\omega}_z (\mathbf{k}_\perp, z'). 
\label{eq:22} \end{align} The rotational pressure flux can be calculated as \begin{align} \left< u_z' p^\Omega{}' \right> & = \int \mathrm{d} \mathbf{k}_\perp \int \mathrm{d} \mathbf{k}_\perp' \left< \hat{u}_z' (\mathbf{k}_\perp, z) \hat{p}^\Omega{}' (\mathbf{k}_\perp', z) \right> \mathrm{e}^{i (\mathbf{k}_\perp + \mathbf{k}_\perp')\cdot \mathbf{x}_\perp} \nonumber \\ & = \int \mathrm{d} \mathbf{k}_\perp \Re \left[ \left< \hat{u}_z' (\mathbf{k}_\perp, z) \hat{p}^\Omega{}'{}^* (\mathbf{k}_\perp, z) \right> \right] \nonumber \\ & = - 2 \Omega^\mathrm{F}_z \int \mathrm{d} \mathbf{k}_\perp \int_{-\infty}^\infty \mathrm{d} z' \frac{1}{2k_\perp} \mathrm{e}^{-k_\perp |z-z'|} \Re \left[ \left< \hat{u}_z' (\mathbf{k}_\perp,z) \hat{\omega}_z'{}^* (\mathbf{k}_\perp, z') \right> \right], \label{eq:23} \end{align} where the homogeneity in the $\mathbf{x}_\perp$ direction is used and Eq.~(\ref{eq:22}) is substituted. Equation~(\ref{eq:23}) suggests that the rotational pressure flux can be expressed by using $H_{zz}$($=\langle u_z' \omega_z' \rangle$) with the factor $M^H$ as follows: \begin{gather} \left< u_z' p^\Omega{}' \right> = - M^H H_{zz} 2 \Omega^\mathrm{F}_z, \label{eq:24} \\ M^H = \frac{1}{H_{zz}} \int \mathrm{d} \mathbf{k}_\perp \int_{-\infty}^\infty \mathrm{d} z' \frac{1}{2k_\perp} \mathrm{e}^{-k_\perp |z-z'|} \Re \left[ \left< \hat{u}_z' (\mathbf{k}_\perp,z) \hat{\omega}_z'{}^* (\mathbf{k}_\perp, z') \right> \right]. \label{eq:25} \end{gather} Here, $M^H$ has the dimension of the square of the length scale. Modeling $M^H$ requires information on the integral length scales of the turbulent helicity spectrum in the $\mathbf{k}_\perp$ space and the correlation between the velocity and vorticity along the $z$ direction. 
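The reduction of $M^H$ below relies on the elementary identity $\int_{-\infty}^{\infty}\mathrm{d}z'\, \mathrm{e}^{-k_\perp|z-z'|} = 2/k_\perp$; a quick midpoint-rule check on a truncated domain confirms it:

```python
import math

def kernel_integral(k, L=40.0, n=100000):
    # Midpoint rule for the kernel integral of Eqs. (23)-(25) at z = 0,
    # truncated to [-L, L]; the exact value is 2/k (up to exp(-k*L)).
    h = 2.0 * L / n
    return sum(math.exp(-k * abs(-L + (i + 0.5) * h)) * h for i in range(n))

for k in (0.5, 1.0, 4.0):
    assert abs(kernel_integral(k) - 2.0 / k) < 1e-3
```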
If the two-point correlation between the velocity and vorticity in the $z$ direction is almost constant within the region $|z-z'| < 1/k_\perp$, the integral $\int_{-\infty}^\infty \mathrm{d} z' \exp [ - k_\perp |z-z'| ]$ can be calculated separately; $M^H$ can then be written as \begin{align} M^H & = \frac{1}{H_{zz}} \int \mathrm{d} \mathbf{k}_\perp k_\perp^{-2} \Re \left[ \left< \hat{u}_z' (\mathbf{k}_\perp,z) \hat{\omega}_z'{}^* (\mathbf{k}_\perp, z) \right> \right]. \label{eq:26} \end{align} If the integral length scale of the turbulent helicity is comparable to the integral length scale of the energy, $M^H$ can be expressed as $M^H \propto (L^K)^2$, where \begin{align} L^K & = \frac{1}{2K} \int \mathrm{d} \mathbf{k}_\perp k_\perp^{-1} \left< \hat{u}_i' (\mathbf{k}_\perp,z) \hat{u}_i'{}^* (\mathbf{k}_\perp,z) \right>. \label{eq:27} \end{align} Then, the rotational pressure flux can be expressed as \begin{align} \left< u_z' p^\Omega{}' \right> = - C_{\Omega L} \left(L^K\right)^2 H 2\Omega^\mathrm{F}_z, \label{eq:28} \end{align} where $C_{\Omega L}$ is a constant. Here, it is also assumed that the turbulent helicity is almost isotropic, $H_{zz} \simeq H/3$. Figure \ref{fig:10} shows the comparison between the rotational pressure flux and the expression (\ref{eq:28}) for runs 08, 1, and 2 at $2\Omega^\mathrm{F} t = 2$ and $4$. Here, $C_{\Omega L} = 0.16$ is adopted. In contrast to the model given by Eq.~(\ref{eq:15}) (Fig.~\ref{fig:9}), the expression (\ref{eq:28}) does not overestimate the exact value at $1 < |z| < 2$ at both times $2\Omega^\mathrm{F} t=2$ and $4$. The same tendency is also shown for run 2 in Fig.~\ref{fig:10}(c). In the case of fully developed turbulence, the energy cascade rate to the small scales is comparable to the dissipation rate, as previously discussed. In such cases, the integral length scale of the energy can be expressed in terms of $K$ and $\varepsilon$ as $L^K \sim K^{3/2}/\varepsilon$. 
Thus, the expression (\ref{eq:28}) can be rewritten as the model given by Eq.~(\ref{eq:15}). However, the model given by Eq.~(\ref{eq:15}) is not good enough in a non-equilibrium case. In this sense, non-equilibrium effects for the model coefficient $(L^K)^2$ should be incorporated in order to improve the model given by Eq.~(\ref{eq:15}) in future work. Nevertheless, it should be emphasized that the proposed model associated with the turbulent helicity and the system rotation can account for the energy flux enhanced in the direction parallel to the rotation axis, which is not expressed by previous models. \begin{figure}[htp] \centering \includegraphics[scale=0.65]{fig10.eps} \caption{Comparison between the rotational pressure flux and the expression (\ref{eq:28}) for (a) run 08, (b) run 1, and (c) run 2 at $2\Omega^\mathrm{F} t = 2$ and $4$.} \label{fig:10} \end{figure} \subsection{\label{sec:level4c}Helical Rossby number} In this simulation, the energy flux due to the nonlinearity can be predicted using the conventional gradient-diffusion approximation, while the energy flux enhanced by the rotation needs to be predicted by the newly proposed model. However, it is not clear in advance whether the new model is required for simulating general turbulent flows. It would be useful if there existed a criterion for judging the relative importance of the rotational pressure flux in general flows. The conventional Rossby number given by Eq.~(\ref{eq:18}) cannot be used for this purpose because it involves the system rotation, but not the turbulent helicity. Hence, a new non-dimensional parameter which involves both the absolute vorticity and the turbulent helicity is needed as a criterion. In this study, we define the helical Rossby number $\mathrm{Ro}^H$ as the ratio of the energy flux described by the gradient-diffusion approximation to the energy flux due to the turbulent helicity and the absolute vorticity. 
By using the expressions (\ref{eq:11}) and (\ref{eq:17}), the helical Rossby number can be defined as \begin{align} \mathrm{Ro}^H = \left |\frac{(K^2/\varepsilon) \nabla_\parallel K}{(K^3/\varepsilon^2) H \Omega^\mathrm{A}} \right| = \left |\frac{\varepsilon \nabla_\parallel K}{K H \Omega^\mathrm{A}} \right|, \label{eq:29} \end{align} where $\nabla_\parallel$ denotes the spatial derivative in the direction of the absolute vorticity and $\Omega^\mathrm{A}$ denotes the magnitude of the absolute vorticity, which was defined in connection with Eq.~(\ref{eq:17}). A major difference between the helical Rossby number and the conventional Rossby number given by Eq.~(\ref{eq:18}) is that the former contains the turbulent helicity. Hence, for non-helical rotating homogeneous turbulence \cite{mnr2001,ymk2011}, the helical Rossby number is infinite, although the conventional Rossby number can be small. On the other hand, for inhomogeneous turbulence with finite turbulent helicity and a mean absolute vorticity, the helical Rossby number has a finite value. In the case of inhomogeneous turbulence accompanied by rotation, the turbulent helicity is often generated \cite{rd2014,gl1999,stepanovetal2018}, and one of its generation mechanisms is seen in Eq.~(\ref{eq:20}). Figure~\ref{fig:11} shows the distribution of the helical Rossby number given by Eq.~(\ref{eq:29}) for the four runs of the rotating cases at $2\Omega^\mathrm{F} t = 2$ in the present simulation. As shown in Fig.~\ref{fig:8}(c), the rotational pressure flux assumes its maximum value near $z = \pm1$. It is clearly seen in Fig.~\ref{fig:11} that $\mathrm{Ro}^H$ near $z = \pm1$ decreases as the rotation rate increases. 
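The algebraic reduction inside Eq.~(\ref{eq:29}), i.e.\ cancelling the common factors of the two modelled fluxes, is easy to verify numerically (the values below are arbitrary and illustrative):

```python
def Ro_H(K, eps, dK_par, H, Omega_A):
    # Helical Rossby number, reduced form of Eq. (29).
    return abs(eps * dK_par / (K * H * Omega_A))

K, eps, dK, H, Om = 0.7, 1.2, -0.3, -0.2, 2.0
# Ratio of the two modelled fluxes, (K^2/eps)*dK over (K^3/eps^2)*H*Omega:
ratio_of_models = abs((K ** 2 / eps * dK) / (K ** 3 / eps ** 2 * H * Om))
assert abs(ratio_of_models - Ro_H(K, eps, dK, H, Om)) < 1e-12
```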
In the present simulation, $C_\nu/\sigma_K = 0.22$ is appropriate for the model constant in the gradient-diffusion approximation given by Eq.~(\ref{eq:11}) (see Fig.~\ref{fig:7}), while $C_\Omega = 0.03$ is appropriate in the model for the rotational pressure flux given by Eq.~(\ref{eq:17}) (see Fig.~\ref{fig:9}). Thus, the ratio of the nonlinear energy flux to the rotational energy flux is estimated as $0.22/0.03 \times \mathrm{Ro}^H \sim 7 \times \mathrm{Ro}^H$. In this sense, $\mathrm{Ro}^H < 1/7$ is a criterion indicating that the energy flux enhanced by the turbulent helicity and the rotation exceeds the energy flux expressed by the gradient-diffusion approximation. \begin{figure}[htp] \centering \includegraphics[scale=0.72]{fig11.eps} \caption{Spatial distribution of the helical Rossby number given by Eq.~(\ref{eq:29}) for each run at $2\Omega^\mathrm{F}t = 2$.} \label{fig:11} \end{figure} Although the definition of the helical Rossby number given by Eq.~(\ref{eq:29}) has a physically clear interpretation, it is complex, since it contains the spatial derivative in the direction of the rotation axis. We therefore define the simplified helical Rossby number $\mathrm{Ro}^H_\mathrm{s}$ as \begin{align} \mathrm{Ro}^H_\mathrm{s} = \left |\frac{\varepsilon^2}{K^{3/2} H \Omega^\mathrm{A}} \right|. \label{eq:30} \end{align} It should be noted that the simplified helical Rossby number retains the key feature of the helical Rossby number given by Eq.~(\ref{eq:29}): it has a finite value only when the turbulent helicity is non-zero. Figure~\ref{fig:12} shows the distribution of the simplified helical Rossby number given by Eq.~(\ref{eq:30}) for the four runs of the rotating cases at $2\Omega^\mathrm{F} t = 2$ in the present simulation. Although the overall profile is different from Fig.~\ref{fig:11}, it is seen that $\mathrm{Ro}^H_\mathrm{s}$ near $z = \pm1$ decreases as the rotation rate increases. 
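A minimal sketch of the simplified number and of the factor $\sim 7$ quoted above (input values illustrative):

```python
def Ro_H_s(K, eps, H, Omega_A):
    # Simplified helical Rossby number, Eq. (30).
    return abs(eps ** 2 / (K ** 1.5 * H * Omega_A))

# The ratio of the fitted model constants sets the criterion Ro^H < 1/7:
ratio = 0.22 / 0.03
assert 7.0 < ratio < 7.5

# Like Ro^H, the simplified number blows up as the helicity vanishes:
assert Ro_H_s(0.7, 1.2, H=-1e-6, Omega_A=2.0) > Ro_H_s(0.7, 1.2, H=-0.2, Omega_A=2.0)
```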
Hence, the simplified helical Rossby number is another candidate criterion for judging the relative importance of the energy flux enhanced by the turbulent helicity and the rotation. \begin{figure}[htp] \centering \includegraphics[scale=0.72]{fig12.eps} \caption{Spatial distribution of the simplified helical Rossby number given by Eq.~(\ref{eq:30}) for each run at $2\Omega^\mathrm{F}t = 2$.} \label{fig:12} \end{figure} \section{\label{sec:level5}Conclusions} In rotating inhomogeneous turbulence, it is observed that the turbulent energy is transferred rapidly in the direction of the rotation axis in comparison with the non-rotating case \cite{dl1983,dsd2006,kolvinetal2009,rd2014}. The conventional gradient-diffusion approximation of the turbulent energy flux cannot account for this enhancement of the energy flux in the direction parallel to the rotation axis. A new model of the energy flux that represents the energy transport enhanced in the direction parallel to the rotation axis was proposed. The model involves the turbulent helicity and the mean absolute vorticity. Its property is similar to that of the group velocity of inertial waves governed by the linear inviscid equation: negative instantaneous helicity invokes an energy flux parallel to the rotation axis, while positive instantaneous helicity invokes a flux anti-parallel to the rotation axis. In order to assess the validity of the proposed model, a DNS of inhomogeneous turbulence subject to system rotation was performed, whose flow configuration is similar to that proposed by Ranjan and Davidson \cite{rd2014}. It was shown that the rotational pressure diffusion significantly contributes to the diffusion of the turbulent energy. The spatial distribution of the turbulent helicity is similar to that of the rotational pressure flux with a negative coefficient, where the rotational pressure flux represents the energy transport enhanced by the system rotation. 
This result suggests that the new model expressed in terms of the turbulent helicity is qualitatively good. The proposed model agrees well with the exact value at an early stage, while it overestimates the exact value at a later stage. This overestimation arises partly because the turbulent length scale is not adequately expressed by the dissipation rate in a non-equilibrium state. Theoretical analysis revealed that the overestimation can be reduced by using the integral length scale $L^K$ instead of $K^{3/2}/\varepsilon$ to express the model coefficient. In future work, the performance of the model should be assessed in statistically stationary helical turbulence, such as the swirling flow in a straight pipe \cite{kitoh1991,steenbergen} or the swirling jet \cite{stepanovetal2018}. Moreover, we need to improve the model by incorporating non-equilibrium effects through the integral length scale. In the last section, we introduced the helical Rossby number, which represents the ratio of the energy flux described by the gradient-diffusion approximation to that enhanced by the turbulent helicity and the absolute vorticity. The helical Rossby number is different from the conventional Rossby number: the former has a finite value only when the turbulent helicity is non-zero, while the latter can have a finite value even when the turbulent field is non-helical. Turbulent flows associated with turbulent helicity and large-scale vortices are often encountered in engineering \cite{kitoh1991,steenbergen,stepanovetal2018} and meteorological \cite{lilly1986,nn2010} applications, and in MHD turbulence \cite{moffattbook,krauseradler,bs2005,hamba2004,wby2016,wby2016-2}. We expect that the helical Rossby number can potentially be utilized as a criterion for judging the relative importance of the energy flux enhanced by the turbulent helicity and the rotation in general turbulent flows. \begin{acknowledgments} We wish to acknowledge Dr. 
Nobumitsu Yokoi for valuable comments and discussion. This work was supported by JSPS KAKENHI Grant Number JP17K06143. \end{acknowledgments}
\section{Introduction} Stochastic interacting particle systems with cyclic structure, sometimes called stochastic Lotka-Volterra (LV) systems or rock-paper-scissors games, play an important role in modelling in a large variety of different fields: ecology and population dynamics \cite{BM15,BRSF09,CFM06,M10}, evolutionary game theory \cite{F10,SMJSRP14}, dynamics of Bose-Einstein condensates \cite{KWKF15}, chemical reaction networks \cite{SNSWS17}, etc. Describing the long-term dynamics of these models is therefore an important challenge in the theoretical and mathematical sciences. In the idealised large population limit, these stochastic systems are usually well-approximated by systems of ordinary differential equations (of LV type), see for instance \cite{AL84,F10,GKF18,KWKF15}; thus considerations about deterministic dynamics could suffice in principle. Yet, in the more realistic case of finite populations, the deterministic approximations are valid only on short time scales. Stochasticity must be integrated into the analysis in order to capture the long-term behaviours. In fact, dramatic finite-size effects can generate various phenomena, such as extinction events and other averaging phenomena, which are not captured by the deterministic limit, see e.g.\ \cite{BRSF09,CT08,DF12,IP13,RMF06} for examples in the physics literature. From a rigorous mathematical viewpoint, extinction events have been described in examples of particle systems, in particular in population dynamics \cite{D08,E04}. However, to the best of our knowledge, no mathematical characterisation of an emerging averaging phenomenon in stochastic LV systems has been given in the literature. Of note, averaging is ubiquitous in particle systems without cyclic behaviour, when the slow-fast time scale separation naturally materializes in the original variables. 
Various mathematical results have been obtained in this context, see e.g.\ multiscale chemical reaction and gene networks \cite{BKPR06,CDMR12,KK13,KKP14} and structured population dynamics \cite{C16,MT12}. In order to mathematically address averaging in stochastic LV systems as it emerges from oscillatory behaviours that involve all the original variables, we consider in this paper a simple example of particle system with cyclic state space. In a few words, the system is a Markov process that can be defined as follows (see section \ref{S-SYST} for details). The state space is $\mathbb{Z}_3=\mathbb{Z}/3\mathbb{Z}$ and a particle in state $i\in \mathbb{Z}_3$ can only jump to state $i+1$. The jumps are independent and, for each particle in state $i$, occur with rate $a+N_{i+1}$, where $a\in\mathbb{R}^+$ is an intrinsic rate and $N_{i+1}\in\mathbb{N}$ is the number of particles in state $i+1$. The population size $N=\sum_{i\in\mathbb{Z}_3} N_i$ is constant so that the phase space is the two-dimensional simplicial grid. \begin{figure}[ht] \begin{center} \includegraphics*[width=60mm]{TrajecZa02.pdf} \hspace{2cm} \includegraphics*[width=60mm]{TrajecZa13.pdf} \end{center} \caption{Trajectories of the particle system in the two-dimensional simplicial grid ({\sl Left}\ $a=0.2$, {\sl Right}\ $a=1.3$, $N=2000$ in the main pictures, $N=200$ in the right insets). The initial condition is located at the centre of the grid. The colors stand for the time in $[0,1]$, from blue ($t=0$) to red ($t=1$). {\sl Left inset:} Time series of the slow variable $z(t)=\prod_{i\in\mathbb{Z}_3}x_i(t)$ for $N=2000$, where $x_i(t)=\tfrac{N_i(t)}{N}$.} \label{NUMERICS} \end{figure} As intended, such extensive transition rates promote rapid oscillations in phase space when $N$ is large (see illustrations in Fig.\ \ref{NUMERICS}; in particular, compare the main pictures ($N=2000$) with the corresponding right insets ($N=200$)). 
In fact, the deterministic flow in the large population limit, which turns out to approximate the short time scale dynamics when $N$ is large (see Proposition \ref{APPROXFAST} below), consists of an integrable Hamiltonian system whose two-dimensional simplex phase space is foliated by periodic trajectories (on which the slow variable $z=\prod_{i\in\mathbb{Z}_3}x_i$ remains constant). This suggests considering the one-dimensional transverse dynamics of the variable $z$ that results from averaging the fast motions on the periodic loops. The main result of this paper (Theorem \ref{MAINRES}) states that for large $N$, the slow-time scale transverse dynamics is indeed approximated by a diffusion process with $a$-dependent drift. In short, a slow-fast dynamics emerges in this system in the large population limit. Technically speaking, the stochastic process that governs the dynamics of the particle system can be regarded as a random perturbation of a dynamical system with a conservation law. Yet, the oscillation period diverges at the phase space boundary (independently of the population size) and this prevents us from applying the standard techniques in this setting \cite{FW04,PS08}. Instead, our proof follows the Stroock-Varadhan approach to martingale problems \cite{SV79} and relies on the compactness-uniqueness argument in this context. The core argument (section \ref{S-INDENT}) is a proof of the $L^1$-convergence of martingales which is tailored to the specific nature of the process and, in particular, to its behaviour close to the boundary. Remarkably, for this particle system, the averaging phenomenon is further complemented by the large $N$ convergence of stationary measures. Indeed, for every $N$, the (unique) stationary measure of the process on the simplicial grid is a product measure which converges to a Dirichlet distribution in the large population limit (Proposition \ref{MU_INV_N_INFINI}). 
Moreover, the push-forward measure on the transverse variable $z$ induced by this distribution turns out to be stationary for the semi-group associated with the diffusion process (Proposition \ref{INVARIANT_MEASURE}). Together with the specification of the nature of the boundary points of this process (Lemma \ref{pro:boundary}), these properties indicate that the visits of the particle system to the boundary of the simplex are frequent for $a<1$ and become sparse when $a\geq 1$, as illustrated in Fig.\ \ref{NUMERICS}. Our results are limited here to a simple model with three states playing symmetric roles, but the ideas and techniques can be useful in a broader context. For instance, the analysis can be extended to state-dependent transition rates, which may provide clues to answer the important question: when extinction is possible, which species survives? The answer is counter-intuitive, as shown in the physics literature \cite{BRSF09}. The extension to more than three states is more challenging, as the deterministic dynamics may not be periodic any more; however, when it is periodic, the techniques of this article may allow one to prove the convergence of the slow dynamics to a multi-dimensional diffusion process. \section{Definitions and preliminary considerations} \subsection{The stochastic particle system}\label{S-SYST} We consider the two-dimensional simplex $S$ defined by \[ S=\left\{\mathrm{x}:=(x_1,x_2)\in (\mathbb{R}^+)^2\ \text{such that}\ x_1+x_2\leq 1\right\}, \] and given $N\in\mathbb{N}$ (where $\mathbb{N}=\{0,1,2,\cdots\}$), let $S_N=S\cap \tfrac1{N}\mathbb{N}^2$ be the two-dimensional simplicial grid, whose vertices $\mathrm{v}_i$ are the points with coordinates $(\mathrm{v}_i)_j=\delta_{ij},\,j=1,2$ (where $\delta_{ij}$ is the Kronecker symbol) (see Fig.\ \ref{SIMPLGRID}). 
\begin{figure}[ht] \begin{center} \includegraphics*[width=60mm]{SimplicialGrid.pdf} \end{center} \caption{Illustration of the simplicial grid $S_N$ of step size $\tfrac1{N}$.} \label{SIMPLGRID} \end{figure} The time evolution of the particle system in $S_N$ is governed by the (jump) Markov process $\{P_N^\mathrm{x}\}_{\mathrm{x}\in S_N}$ induced by the generator $L_N$ defined by\footnote{Throughout the paper, the notations $i+1$ and $i-1$ mean respectively $i+1\ \text{mod}\ 3$ and $i-1\ \text{mod}\ 3$.} \[ L_Nf(\mathrm{x})=N\sum_{i\in\mathbb{Z}_3}x_i(a+Nx_{i+1})\left(f(\mathrm{x}+\frac{\mathrm{u}_i}{N})-f(\mathrm{x})\right)\quad \forall \mathrm{x}\in S_N,\ f:S_N\to \mathbb{R} \] where $x_3:=1-x_1-x_2$, $a\in\mathbb{R}^+$ and $\mathrm{u}_i=\mathrm{v}_{i+1}-\mathrm{v}_i$ for $i\in\mathbb{Z}_3:=\mathbb{Z}/3\mathbb{Z}$. As mentioned in the introduction, this process represents the stochastic time evolution of a population of individuals with cyclic state space and extensive transition rates and is inspired by the modelling in various fields \cite{FT08,RMF06}. In particular, the definition above suggests various natural extensions of this process, such as increasing the number of states from three to an arbitrary $d\in\mathbb{N}$, or allowing any particle at a site $i$ to jump to a site $j$. Notice that most of the approach and considerations in this paper can be adapted to these extensions without additional conceptual difficulties. A nice feature of this process is that it is ergodic for every $N\in\mathbb{N}$ and $a>0$, and its invariant measure turns out to be the following product measure \cite{FT08} \[ \mu_{N,a}(\mathrm{x}) = C_{N,a} \prod_{i\in\mathbb{Z}_3}\frac{\Gamma(Nx_i+a)}{\Gamma(Nx_i+1)},\quad \forall \mathrm{x}\in S_N, \] where $\Gamma$ stands for the Gamma function and $C_{N,a}$ is the normalisation constant. For $a=0$, the three vertices $\{\mathrm{v}_i\}_{i\in\mathbb{Z}_3}$ are absorbing states. The measure $\mu_{N,a}$ can be regarded as an atomic measure in $S$. 
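For readers who wish to experiment with this process, it can be simulated by a standard Gillespie (kinetic Monte Carlo) scheme: in terms of the occupation numbers $n_i=Nx_i$, the channel $i\to i+1$ fires with total rate $n_i(a+n_{i+1})$. The following sketch is a plain-Python illustration with function names of our own choosing; it is not part of the formal development.

```python
import random

def gillespie_step(n, a, rng):
    """One jump of the cyclic process: each particle in state i jumps to
    state i+1, so the channel i -> i+1 fires with rate n[i]*(a + n[(i+1)%3])."""
    rates = [n[i] * (a + n[(i + 1) % 3]) for i in range(3)]
    total = sum(rates)
    if total == 0.0:             # only possible for a = 0 on a vertex
        return float("inf")
    dt = rng.expovariate(total)  # waiting time before the next jump
    u, acc = rng.random() * total, 0.0
    for i in range(3):           # pick the channel proportionally to its rate
        acc += rates[i]
        if u < acc:
            break
    n[i] -= 1
    n[(i + 1) % 3] += 1
    return dt

def simulate(N, a, t_max, seed=0):
    """Run the particle system up to time t_max from a near-central state."""
    rng = random.Random(seed)
    n = [N // 3, N // 3, N - 2 * (N // 3)]
    t = 0.0
    while t < t_max:
        dt = gillespie_step(n, a, rng)
        if dt == float("inf"):
            break
        t += dt
    return n
```

Tracking $z(t)=\prod_i n_i(t)/N$ along such runs reproduces the behaviour shown in Fig.\ \ref{NUMERICS}: rapid rotation in the simplex with a slowly varying $z$.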
Under this viewpoint, this measure can be shown to weakly converge to the Dirichlet measure, namely the absolutely continuous measure $\mu_a$ on $S$ with density $\rho_a$ defined by \[ \rho_a(\mathrm{x})=C_a z(\mathrm{x})^{a-1},\ \forall \mathrm{x}\in S\ \text{where}\ z(\mathrm{x})=\prod_{i\in\mathbb{Z}_3}x_i, \] and again $C_a$ is the normalisation constant. The convergence is stated in the following proposition, whose proof is given in Appendix \ref{A-MU_INV_N_INFINI}. \begin{Pro} For every $a>0$, we have \[ \lim_{N\to\infty} \mu_{N,a}=\mu_a, \] in the weak sense. \label{MU_INV_N_INFINI} \end{Pro} \subsection{The deterministic approximation on short time scales} When interested in the temporal process associated with $L_N$ for large $N$, the Taylor theorem applied to the first order expansion of $f(\mathrm{x}+\frac{\mathrm{u}_i}{N})$ at $\mathrm{x}$ suggests considering the operator ${\cal L}_\text{fast}$ defined by (NB: $f'$ denotes the Fr\'echet derivative of $f$.) \[ {\cal L}_\text{fast}f(\mathrm{x})=\sum_{i\in\mathbb{Z}_3}x_ix_{i+1}f'(\mathrm{x})\mathrm{u}_i =\sum_{j=1,2}x_j(x_{j-1}-x_{j+1})\partial_{x_j}f(\mathrm{x}),\ f\in C^1(S), \] so that we have $\lim_{N\to\infty}\tfrac1{N}L_Nf(\mathrm{x})= {\cal L}_\text{fast}f(\mathrm{x})$. Let $F$ be the vector field on $S$ defined by \[ (F(\mathrm{x}))_j=x_j(x_{j-1}-x_{j+1}),\ j=1,2. \] This vector field defines a semi-flow on $S$, under which this simplex is invariant. Let $t\mapsto X^{\rm fast}_{\mathrm{x}_0}(t)$ be the solution of $\dot{\mathrm{x}}=F(\mathrm{x})$ with initial condition $\mathrm{x}_0\in S$. Then for any $f\in C^1(S)$, we have \[ \tfrac{d}{dt}f(X^{\rm fast}_{\mathrm{x}_0}(t))={\cal L}_\text{fast} f(X^{\rm fast}_{\mathrm{x}_0}(t)),\ t>0. \] The convergence $\tfrac1{N}L_Nf(\mathrm{x})\to {\cal L}_\text{fast}f(\mathrm{x})$ suggests that the process associated with the particle system can be approximated on time scales of order $\frac1{N}$ by the deterministic semi-flow. 
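The convergence $\tfrac1{N}L_Nf\to{\cal L}_\text{fast}f$ can be probed numerically on a smooth test function: both sides are explicit, and for fixed interior $\mathrm{x}$ the discrepancy is of order $1/N$. The sketch below is our own illustrative code (not part of the formal argument); the vertex and edge vectors are those defined above, with $\mathrm{v}_3=(0,0)$ in the $(x_1,x_2)$ coordinates.

```python
def LN_over_N(f, x, a, N):
    """Evaluate (1/N) * L_N f at x = (x1, x2), with x3 = 1 - x1 - x2."""
    x1, x2 = x
    xs = [x1, x2, 1.0 - x1 - x2]
    v = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]          # vertices v_1, v_2, v_3
    u = [(v[(i + 1) % 3][0] - v[i][0],
          v[(i + 1) % 3][1] - v[i][1]) for i in range(3)]
    s = 0.0
    for i in range(3):
        shifted = f(x1 + u[i][0] / N, x2 + u[i][1] / N)
        s += xs[i] * (a + N * xs[(i + 1) % 3]) * (shifted - f(x1, x2))
    return s

def L_fast(df, x):
    """Evaluate L_fast f at x, given the two partial derivatives of f."""
    x1, x2 = x
    x3 = 1.0 - x1 - x2
    return x1 * (x3 - x2) * df[0](x1, x2) + x2 * (x1 - x3) * df[1](x1, x2)
```

For instance, with $f(\mathrm{x})=x_1^2x_2$ and $N=10^5$, the two evaluations at an interior point agree to a few parts in $10^5$.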
In order to formalize this approximation, given $T>0$, let $\mathbb{D}([0,T],S_N)$ (resp.\ $\mathbb{D}([0,T],S)$) be the set of c\`adl\`ag functions from $[0,T]$ into $S_N$ (resp.\ $S$) and let ${\cal F}_{T,N}$ (resp.\ ${\cal F}_{T}$) be the natural filtration associated with $\mathbb{D}([0,T],S_N)$ (resp.\ $\mathbb{D}([0,T],S)$). The set $\mathbb{D}([0,T],S)$ is endowed with the Skorokhod metric. We denote by $X_N(t)$, where $t\in [0,T]$ and $X_N\in \mathbb{D}([0,T],S_N)$, the stochastic process on $(\mathbb{D}([0,T],S_N),{\cal F}_{T,N})$ associated with the Markov process and the initial measure $\delta_{X_N(0)}$. Clearly, $X_N(t)$ can be seen as a process taking values in $S$. \begin{Pro} Assume that the sequence of initial conditions $\{X_N(0)\}_{N\in\mathbb{N}}$ converges in law to some $\mathrm{x}_0\in S$. Then, for every $T>0$, the sequence of time-scaled processes $\{X_N(\frac{t}{N}) : t\in [0,T]\}_{N\in\mathbb{N}}$ converges in law in $\mathbb{D}([0,T],S)$ to the trajectory arc $\{X^{\rm fast}_{\mathrm{x}_0}(t) : t\in [0,T]\}$ of the solution of $\dot{\mathrm{x}}=F(\mathrm{x})$ with initial condition $\mathrm{x}_0$. \label{APPROXFAST} \end{Pro} \noindent This statement can be proved using a compactness-uniqueness argument just as in the proof of Theorem 3.1, Chap.\ 3 in \cite{BM15}. See also the proof of Theorem \ref{MAINRES} below for the details of a similar argument. \subsection{Analysis of the deterministic dynamics} According to the expression of $F$, the semi-flow associated with $\dot{\mathrm{x}}=F(\mathrm{x})$ is an instance of a Lotka-Volterra system. Actually, this dynamics can be regarded as a Hamiltonian system with Hamiltonian function $(x_1,x_2)\mapsto x_1x_2(1-x_1-x_2)$. The dynamics can be analysed in full detail and its essential features have already been identified \cite{AL84,RMF06}. In particular, there are four stationary points, namely the centre $(\tfrac13,\tfrac13,\tfrac13)$ and the vertices $\mathrm{v}_i$ of $S$. 
Each boundary edge of $S$ is invariant under the semi-flow and the dynamics on each edge consists of heteroclinic trajectories between the two corresponding vertices. In addition, in $\text{Int}(S)\setminus (\tfrac13,\tfrac13,\tfrac13)$, the interior of the simplex except the centre, the level sets of the functional $z$ - which takes values in $I:=(0,\tfrac1{27})$ - constitute a foliation by invariant loops on which the trajectories $t\mapsto X^{\rm fast}_{\mathrm{x}_0}(t)$ are periodic, with period, say, $T(z(\mathrm{x}_0))$, and counterclockwise motion (see Fig.\ \ref{CONTOURS}). The periodic trajectories and their period can be semi-explicitly computed, see Appendix \ref{A-DYNAML0-1} for the corresponding computations. In particular, the period diverges when approaching the boundary edges of $S$. \begin{figure}[ht] \begin{center} \includegraphics*[width=70mm]{Contours.pdf} \end{center} \caption{Color plot and level sets of the function $\mathrm{x}\mapsto z(\mathrm{x})$ in the simplex $S$.} \label{CONTOURS} \end{figure} In the sequel, we shall need the following additional properties of the period function. Of note, we use the symbol $z$ for the variable in $I$ and write $T(z)$ as a shorthand for the period. Moreover, given two real functions $u$ and $v\neq 0$ and $x_0\in\mathbb{R}$, we write $u(x)\sim v(x)$ as $x\to x_0^\pm$ if $\lim_{x\to x_0^\pm}\frac{u(x)}{v(x)}=1$. \begin{Lem} (i) The function $z\mapsto T(z)$ is $C^\infty$ on $I$ and $T(\frac1{27}^-)=2\pi\sqrt{3}$. \noindent (ii) We have $T(z) \sim -3\ln z$ as $z\to 0^+$. \label{PROPERIOD} \end{Lem} \noindent The proof is given in Appendix \ref{A-PROPERIOD}. 
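These features of the fast dynamics are easy to check numerically: along any accurate trajectory of $\dot{\mathrm{x}}=F(\mathrm{x})$ the value of $z$ is conserved up to integration error, and for loops close to the centre the return time approaches $2\pi\sqrt{3}$. A minimal sketch (illustrative Python with a naive winding-angle detection of the period; all names are ours, and this is not part of the formal development):

```python
import math

def F(x):
    """LV vector field on the simplex, with x3 = 1 - x1 - x2."""
    x1, x2 = x
    x3 = 1.0 - x1 - x2
    return (x1 * (x3 - x2), x2 * (x1 - x3))

def rk4_step(x, h):
    """One classical Runge-Kutta step of size h."""
    k1 = F(x)
    k2 = F((x[0] + 0.5 * h * k1[0], x[1] + 0.5 * h * k1[1]))
    k3 = F((x[0] + 0.5 * h * k2[0], x[1] + 0.5 * h * k2[1]))
    k4 = F((x[0] + h * k3[0], x[1] + h * k3[1]))
    return (x[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            x[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def z_of(x):
    return x[0] * x[1] * (1.0 - x[0] - x[1])

def period(x0, h=1e-3, t_max=100.0):
    """Crude period estimate: accumulate the winding angle of the orbit
    around the centre (1/3, 1/3) until one full turn is completed."""
    c = 1.0 / 3.0
    x, t, angle = x0, 0.0, 0.0
    prev = math.atan2(x0[1] - c, x0[0] - c)
    while t < t_max:
        x = rk4_step(x, h)
        t += h
        cur = math.atan2(x[1] - c, x[0] - c)
        d = cur - prev
        if d > math.pi:
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        angle += d
        prev = cur
        if abs(angle) >= 2.0 * math.pi:
            return t
    return t
```

For a small loop around the centre, `period` returns a value close to $2\pi\sqrt{3}\approx 10.88$, in agreement with item (i) of the Lemma.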
In addition, we shall need some properties of the (signed) area enclosed in the loop $z(\mathrm{x})=z$ and defined by\footnote{Lemma \ref{AVGOP} below shows that in fact $A(z)=\int_0^{T(z)}x_{i+1}\dot{x}_idt$ for every $i\in\mathbb{Z}_3$.} \[ A(z)=\int_0^{T(z)}x_2\dot{x}_1dt,\quad z\in I, \] which can be regarded as the action of the Hamiltonian system. The desired properties of this function are listed in the next statement, whose proof is given in Appendix \ref{DYNRAP-A}. \begin{Lem} (i) The function $z\mapsto A(z)$ is $C^\infty$ and negative on $I$. Moreover, we have $A'(z)=T(z)$ for all $z\in I$. \noindent (ii) $A(0^+)=-\frac12$ and $A(z)\sim -2\pi\sqrt{3}(\frac1{27}-z)$ as $z\to \frac1{27}^-$. \label{pro:dyn_rapide} \end{Lem} \subsection{The averaged generator: definition and explicit expression} Following the approach to Anosov averaging \cite{PS08}, Proposition \ref{APPROXFAST} and the periodic motions of the system $\dot{\mathrm{x}}=F(\mathrm{x})$ suggest averaging the dynamics associated with the next-order approximation of $L_N$. To that end, consider first the operator ${\cal L}_\text{slow}$ that collects the $N$-independent terms in the (second-order) expansion of $L_N$, defined for $f\in C^2(S)$ as follows \begin{align*} {\cal L}_\text{slow}f(\mathrm{x})&=a\sum_{i\in\mathbb{Z}_3}x_if'(\mathrm{x})\mathrm{u}_i+\tfrac12\sum_{i\in\mathbb{Z}_3}x_ix_{i+1}f''(\mathrm{x})(\mathrm{u}_i,\mathrm{u}_i)\\ &=a\sum_{j=1,2}(x_{j+1}-x_j)\partial_{x_j} f(\mathrm{x})+ \tfrac12\sum_{j=1,2}x_j(x_{j-1}+x_{j+1})\partial_{x_j}^2f(\mathrm{x})-\sum_{j=1,2}x_jx_{j+1}\partial_{x_j,x_{j+1}}^2f(\mathrm{x}). \end{align*} Given $z\in I$ and a function $f$ defined on the loop of period $T(z)$, let the time average $\langle f\rangle_z$ be defined by \[ \langle f\rangle_z=\frac1{T(z)}\int_0^{T(z)}f(X^{\rm fast}_{\mathrm{x}_0}(t))dt, \] (where $\mathrm{x}_0$ is any point on the loop). 
Then, using the notation $f_I$ for functions defined on $I$ (or $\bar{I}$), the averaged operator ${\cal L}_\text{avg}$ is defined by \[ {\cal L}_\text{avg}f_I(z)=\langle {\cal L}_\text{slow}(f_I\circ z)\rangle_z,\ z\in I,\ f_I\in C^2(I). \] An explicit expression of this operator can be obtained based on the analysis of the dynamics generated by $F$. The results are summarized in the following statement. \begin{Lem} The average $\langle x_{i+1}\dot x_i\rangle_z$ does not depend on $i\in\mathbb{Z}_3$. Letting $m(z):=-\langle x_{i+1}\dot x_i\rangle_z$, the averaged operator can be expressed as the following second-order differential operator \[ {\cal L}_\text{\rm avg}f_I(z)=3(a m(z)-z)f_I'(z)+3z m(z)f_I''(z),\quad z\in I,\ f_I\in C^2(I). \] \label{AVGOP} \end{Lem} \noindent Given the definition of $A(z)$ above, we have $m(z)=-\frac{A(z)}{T(z)}> 0$ for all $z\in I$ and Lemma \ref{pro:dyn_rapide} implies that $m \in C^\infty(I)$, which yields that ${\cal L}_\text{avg}f_I \in C^0(I)$. Moreover, Lemma \ref{pro:dyn_rapide} and Lemma \ref{AVGOP} together imply that, for $f_I\in C^2(\overline{I})$, ${\cal L}_\text{avg}f_I$ can be extended by continuity to the boundary of $I$ as follows \[ {\cal L}_\text{avg}f_I(0) = 0\quad\text{and}\quad {\cal L}_\text{avg}f_I(\tfrac{1}{27})=-\tfrac19f_I'(\tfrac1{27}), \] which in particular yields the following match \begin{equation} {\cal L}_\text{avg}f_I(\tfrac{1}{27})={\cal L}_\text{slow}(f_I\circ z)(\tfrac13,\tfrac13,\tfrac13). 
\label{MATCH} \end{equation} \noindent {\sl Proof of the Lemma.} For $f_I\in C^2(I)$ and $\mathrm{x}\in \text{Int}(S)$, direct computations yield the following expression \begin{align*} {\cal L}_\text{slow}(f_I\circ z)&=\Big(a\sum_{i\in\mathbb{Z}_3}\big(x^2_{i-1}x_{i+1}-z\big)-3z\Big)f_I'\circ z+\tfrac{1}{2}\sum_{i\in\mathbb{Z}_3}z\Big(x_ix^2_{i-1}+x_ix^2_{i+1}-2z\Big)f_I''\circ z \end{align*} Together with the relations $z(\mathrm{x})\tfrac{x_{i-1}}{x_i}=x_{i-1}^2x_{i+1}$ and $z(\mathrm{x})\tfrac{x_{i+1}}{x_i}=x_{i-1}x_{i+1}^2$, this suggests considering the averages $\langle x_ix_{i+1}^2\rangle_z$ and $\langle x_ix_{i-1}^2\rangle_z$ in order to compute the expression of ${\cal L}_\text{avg}$. We have the following statement. \begin{Claim} For every $z\in I$, we have $\langle x_ix_{i+1}^2\rangle_z=\langle x_ix_{i-1}^2\rangle_z$ for all $i\in\mathbb{Z}_3$ and these quantities do not depend on $i$. \end{Claim} \noindent {\sl Proof:} By periodicity of the trajectories, we have for every $f\in C^1(S)$ and $z\in I$ \[ 0=\langle \tfrac{d}{dt}f\rangle_z=\langle {\cal L}_\text{fast}f\rangle_z=\langle \sum_{j=1,2}x_j(x_{j-1}-x_{j+1})\partial_{x_j}f\rangle_z \] In particular, for $f(\mathrm{x})=x_ix_{i+1}$ for some $i\in \mathbb{Z}_3$, we get $\langle x_{i-1}x_{i+1}^2-x_{i+1}x_{i-1}^2\rangle_z=0$, which immediately yields the desired equality. In order to prove that the quantities do not depend on $i$, apply the equality above with $f(\mathrm{x})=x_{i+1}$, which combined with the previous one yields $\langle x_ix_{i+1}^2\rangle_z=\langle x_{i+1}x_{i-1}^2\rangle_z$. \hfill $\Box$ The expression of ${\cal L}_\text{avg}$ then immediately follows from the relation \[ \langle x_ix_{i+1}^2\rangle_z-z=\langle x_ix_{i+1}(x_{i+1}-x_{i-1})\rangle_z=-\langle x_{i+1}\dot x_i\rangle_z \] Lemma \ref{AVGOP} is proved. 
\hfill $\Box$ \subsection{The stochastic differential equation and its solutions} Consider the (one-dimensional) stochastic differential equation associated with ${\cal L}_\text{avg}$, namely \begin{equation} dZ(t)=b(Z(t))dt+\sigma(Z(t))dW(t),\quad Z(0)=z_0\in I \label{eq:eds_avg} \end{equation} where $b(z)=3(a m(z)-z)$, $\sigma(z)=\sqrt{6z m(z)}$ and $W(t)$ is some Brownian motion. Lemma \ref{pro:dyn_rapide} and the fact that $m(z)=-\frac{A(z)}{T(z)}$ for all $z\in I$ imply that both functions $b$ and $\sigma$ are smooth on $I$ and $\sigma^2>0$. These conditions ensure the existence and uniqueness of a solution $Z_{z_0}(t)$ for $t$ up to the so-called explosion time $T_\text{ex}(z_0)$, namely the time it takes for the solution to reach the boundary of $I$, see for instance \cite{IW89,KS91}. Of course, we have $T_\text{ex}>0$ a.s. The explosion time depends on the nature of the boundary points, which can be evaluated using Feller's test \cite{IW89,KS91}. This nature depends on the parameter $a$ and is given in the following statement, which uses Feller's classification in chapter 8.1 of \cite{EK86}. \begin{Lem} \label{pro:boundary} In the SDE \eqref{eq:eds_avg}, the boundary point $z=\frac{1}{27}$ is entrance for all $a\in \mathbb{R}^+$. Moreover the boundary point $z=0$ is entrance for $a\geq 1$, regular for $a\in (0,1)$ and exit for $a=0$. \end{Lem} \noindent {\sl Proof:} We follow the arguments in chapter 8.1 of \cite{EK86}. Using $A'(z)=T(z)$ in Lemma \ref{pro:dyn_rapide}, the averaged generator can be recast as \begin{equation} \mathcal{L}_{\rm avg} =\tfrac{d}{ds(z)}\left(\tfrac{d}{d p(z)}\right), \label{Lavg_scale_speed} \end{equation} where the scale function $p$ and the speed function $s$ are the positive functions whose differentials are respectively given by the following equalities \[ dp(z)=-\frac{1}{z^a A(z)}dz\quad \text{and}\quad ds(z)=\frac{z^{a-1}T(z)}3dz,\quad z\in I. 
\] The approximations in Lemmas \ref{PROPERIOD} and \ref{pro:dyn_rapide} imply the following ones \[ \tfrac{dp(z)}{dz}\sim 2 z^{-a}\quad \text{and}\quad \tfrac{ds(z)}{dz}\sim - z^{a-1}\ln z\quad \text{as}\ z\to 0^+, \] and \[ \tfrac{dp(z)}{dz}\sim \tfrac{27^a}{2\pi\sqrt{3}}\left(\tfrac1{27}-z\right)^{-1}\quad \text{and}\quad \tfrac{ds(z)}{dz}\sim \tfrac{18\pi\sqrt{3}}{27^a}\quad \text{as}\ z\to \tfrac1{27}^-. \] Explicit computations then yield the following estimates for every $r\in I$ (recall that $a\in\mathbb{R}^+$) \[ \left|\int_r^{0} s(z) dp(z)\right| <+\infty\ \text{iff}\ a\in [0,1)\quad \text{and}\quad \left|\int_r^0 p(z)ds(z)\right| <+\infty\ \text{iff}\ a>0, \] and \[ \int_r^{\frac{1}{27}} s(z) dp(z) =+\infty\quad \text{and}\quad \int_r^{\frac{1}{27}} p(z)ds(z) <+\infty, \] from which the characterization of the boundary points $z=0$ and $z=\frac1{27}$ immediately follows. \hfill $\Box$ \medskip As a consequence of the Lemma, for $a\geq 1$, we have $T_\text{\rm ex}=+\infty$ a.s., and hence existence and uniqueness of solutions of the SDE \eqref{eq:eds_avg} for all $t>0$, a.s. For $a\in [0,1)$, the trajectory hits the boundary point $z=0$ a.s. Yet, for $a=0$, existence and uniqueness of solutions for all $t>0$ a.s.\ follows by letting $Z(t)=0$ for $t>T_{\rm ex}$. For $a\in (0,1)$, the existence of solutions of the SDE \eqref{eq:eds_avg} extends for $t$ beyond $T_\text{ex}$ but uniqueness is not granted in general and requires specifying the behaviour at the boundary point $z=0$ \cite{H96}. These features, as well as those for $a\geq 1$, can be expressed through the semi-group associated with ${\cal L}_\text{avg}$. 
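As an illustration of the transverse dynamics, the SDE \eqref{eq:eds_avg} can be discretised by an Euler-Maruyama scheme with instantaneous reflection at $z=0$. Note that evaluating the true coefficient $m(z)=-\frac{A(z)}{T(z)}$ requires the period and action of the fast system; in the sketch below we deliberately pass a constant stand-in for $m$ (a hypothetical placeholder, labelled as such in the test), so the code illustrates the scheme rather than the actual process.

```python
import math
import random

def euler_maruyama(z0, a, m_fn, t_max, dt=1e-4, seed=0):
    """Euler-Maruyama scheme for dZ = 3(a*m(Z) - Z) dt + sqrt(6 Z m(Z)) dW,
    with instantaneous reflection at z = 0 and at z = 1/27 (the latter is an
    entrance boundary, so reflections there are essentially never triggered)."""
    rng = random.Random(seed)
    zmax = 1.0 / 27.0
    z, t = z0, 0.0
    path = [z]
    while t < t_max:
        m = m_fn(z)
        drift = 3.0 * (a * m - z)
        diff = math.sqrt(max(6.0 * z * m, 0.0))
        z = z + drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        z = abs(z)                    # reflection at the boundary z = 0
        if z > zmax:
            z = 2.0 * zmax - z        # mirror back below z = 1/27
        t += dt
        path.append(z)
    return path
```

By construction the discretised path stays in $[0,\frac1{27}]$; plugging in a numerically computed $m(z)$ instead of the placeholder reproduces the qualitative boundary behaviour of Lemma \ref{pro:boundary}.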
Following the definitions in \cite{EK86}, given $a>0$ consider the domain $\mathcal{D}_\text{avg}$ defined by \[ \mathcal{D}_\text{avg}=\left\{\begin{array}{ccl} \left\{f_I \in C^2(I)\cap C^0(\bar{I})~:~ \mathcal{L}_{\rm avg}f_I \in C^0(\bar{I}) \right\}&\text{if}&a\geq 1,\\ \left\{f_I \in C^2(I)\cap C^0(\bar{I})~:~ \mathcal{L}_{\rm avg}f_I \in C^0(\bar{I}),~\lim_{z\to 0^+} z^aA(z)f_I'(z)=0 \right\}&\text{if}&a\in (0,1). \end{array}\right. \] In particular, the choice of $\mathcal{D}_\text{avg}$ for $a\in (0,1)$ corresponds to an instantaneous reflection at $z=0$. Theorem 1.1, Chap.\ 8 in \cite{EK86} states that, with these definitions of $\mathcal{D}_\text{avg}$, the operator ${\cal L}_\text{avg}$ generates a Feller semi-group on $C^0(\overline{I})$, for every $a>0$ (NB: In particular, in the case $a\in (0,1)$ for which $z=0$ is regular, the set $\mathcal{D}_0$ in \cite{EK86} is defined with $q_0=0$.) Our next result states that the measure on $\overline{I}$ with density $z\mapsto z^{a-1} T(z)$ - which, thanks to the equality $A'(z)=T(z)$, turns out to be the push-forward under $\mathrm{x}\mapsto z(\mathrm{x})$ of the limit measure $\mu_a$ in Proposition \ref{MU_INV_N_INFINI} - is a stationary measure of the Feller semi-group (NB: Recall that for $a=0$, the point $z=0$ is exit). \begin{Pro} \label{INVARIANT_MEASURE} For every $a>0$, we have \[ \int_0^{\frac{1}{27}} z^{a-1} T(z) \mathcal{L}_{\rm avg}f_I (z) dz =0,\ \forall f_I \in \mathcal{D}_{\rm avg}. \] \end{Pro} \noindent {\sl Proof:} For $z_0<z_1\in I$, we obtain after direct integration \[ \int_{z_0}^{z_1} z^{a-1} T(z) \mathcal{L}_{\rm avg}f_I (z) dz=\left[\tfrac{df_I(z)}{dp(z)}\right]_{z_0}^{z_1}. \] Therefore, all we have to show is $\lim_{z\to 0^+}\tfrac{df_I(z)}{dp(z)}=\lim_{z\to \frac1{27}^-}\tfrac{df_I(z)}{dp(z)}=0$. We focus on the first limit; the second one follows from a similar argument. When $z=0$ is a regular boundary ($a\in (0,1)$), this is a consequence of the choice of $\mathcal{D}_{\text{avg}}$. 
When $z=0$ is entrance ($a\geq 1$), the Taylor formula implies that we have \[ \tfrac{df_I(z)}{dp(z)}=\frac{1}{p(z)-p(r)}\left(f_I(z)-f_I(r) +\int_{r}^{z}\mathcal{L}_{\rm avg}f_I(y)\,p(y)\,\mathrm{d}s(y)\right),\ r\in (z,\tfrac1{27}). \] The term inside the brackets converges when $z\to 0^+$ because we have $f_I,\mathcal{L}_{\rm avg}f_I \in C^0(\bar{I})$ and $|\int_{r}^{0}p(y) ds(y)|<+\infty$. The result then follows from the limit $\lim_{z\to 0^+}|p(z)|=+\infty$. \hfill $\Box$ \section{Main result: Averaging} Given $T>0$, let $\mathbb{D}([0,T],\overline{I})$ be the set of c\`adl\`ag functions from $[0,T]$ into $\overline{I}=[0,\frac1{27}]$ and let ${\cal G}_T$ be the natural filtration associated with $\mathbb{D}([0,T],\overline{I})$. The process $X_N(t)$ induces a stochastic process $Z_N(t):=z(X_N(t))$ on $(\mathbb{D}([0,T],\overline{I}),{\cal G}_T)$. We are now in position to formulate the main result of the paper. \begin{Thm} Assume that the sequence of initial conditions $\{Z_N(0)\}_{N\in\mathbb{N}}$ converges in law to some $z_0\in I$. \\ If $a\geq 1$ or $a=0$, then for any $T>0$, the sequence of processes $\{Z_N(t) : t\in [0,T]\}_{N\in\mathbb{N}}$ converges in law to the weak solution of the SDE \eqref{eq:eds_avg} with initial condition $z_0$. \\ If $0<a<1$, then for any $T>0$, the sequence $\{Z_N(t) : t\in [0,T]\}_{N\in\mathbb{N}}$ is relatively compact. Furthermore, any limit point is a weak solution of the SDE \eqref{eq:eds_avg}, with initial condition $z_0$. \label{MAINRES} \end{Thm} \noindent The proof of this statement follows the Stroock-Varadhan approach to martingale problems \cite{SV79}. The first step is a standard compactness argument (see the presentation in \cite{JM86}, especially Corollary 2.3.3 therein) for the semi-martingale structure associated with a stochastic process, the process $Z_N$ in our case. Then we prove that every limit point of a subsequence must satisfy the martingale problem associated with the SDE. 
That part of the proof is specific to the particle system under consideration as it relies in particular on various features of the short-time deterministic dynamics. The convergence follows suit when the SDE has a unique solution, i.e.\ for $a\geq 1$ or $a=0$. In the other case ($a\in (0,1)$), while the SDE solutions are not unique beyond $T_\text{ex}$, Proposition \ref{MU_INV_N_INFINI} and Proposition \ref{INVARIANT_MEASURE} suggest that $\{Z_N\}$ should converge to the solution of the SDE with instantaneous reflection at the boundary $z=0$, provided it is defined and unique. This property remains to be proved. \medskip The rest of this section is devoted to the proof of Theorem \ref{MAINRES} which, for the sake of clarity, is decomposed into three subsections. \subsection{Proof of compactness}\label{S-COMPACT} Recall that $\{P_N^\mathrm{x}\}_{\mathrm{x}\in S_N}$ denotes the Markov process generated by $L_N$. For every $\mathrm{x}\in S_N$, the probability measure $P_N^\mathrm{x}$ solves the martingale problem associated with $L_N$ and the initial condition $\mathrm{x}$. In particular, the stochastic process $M_N$ on $(\mathbb{D}([0,T],S_N),{\cal F}_{T,N})$ with values in $\mathbb{R}$, defined by \[ t\mapsto M_N(t)=Z_N(t)-A_N(t)\, ,\quad \text{where}\quad A_N(t):=\int_0^t(L_Nz)(X_N(s))ds\, , \] is a martingale relative to $P_N^{X_N(0)}$. Moreover, since $L_Nz$ is bounded on $S_N$, the process $A_N$ is of finite variation. Consequently, $M_N$ is bounded and hence locally square integrable. According to Corollary 2.3.3 in \cite{JM86}, in order to prove that the laws associated with $\{Z_N\}$ form a tight family, it suffices to show that \begin{itemize} \item the sequences $\{A_N\}$ and $\{\langle M_N\rangle\}$ satisfy the Aldous condition and \item the sequence of the laws of $\sup_{t\in [0,T]} |A_N(t)|$ (resp.\ $\sup_{t\in [0,T]} |\langle M_N(t)\rangle|$) is tight in $\mathbb{R}$. \end{itemize} Below we focus on proving the Aldous condition. 
The proof of the other condition is similar and left to the reader. In order to prove the Aldous condition for $A_N$, we observe that the Markov inequality implies that for every $0\leq t< t'$, we have \[ P_N^\mathrm{x}(|A_N(t')-A_N(t)|\geq \eta)\leq \frac1{\eta}\mathbb{E}_\mathrm{x}\left(\int_{t}^{t'}|L_Nz(X_N(s))|ds\right). \] Explicit calculations using that ${\cal L}_{\rm fast}z=0$ imply that the integrand $|L_Nz(X_N(s))|$ is uniformly bounded in $N$, and so the probability $P_N^\mathrm{x}(|A_N(t')-A_N(t)|\geq \eta)$ can be made arbitrarily small, uniformly in $N$, by taking $t'-t$ sufficiently small. Moreover, the increasing process $\langle M_N\rangle$ is given by \[ \langle M_N(t)\rangle = \int_0^t q_Nz(X_N(s))ds \] where $q_N$ is the quadratic operator defined for any function $f:S_N\to \mathbb{R}$ by \begin{equation} q_Nf(\mathrm{x})=\sum_{i\in\mathbb{Z}_3}x_i(\frac{a}{N}+x_{i+1})N^2 \left(f(\mathrm{x}+\frac{\mathrm{u}_i}{N})-f(\mathrm{x})\right)^2. \label{QUADOP} \end{equation} An argument similar to the one above proves the Aldous condition for $\langle M_N\rangle$, using the mean value theorem for $z$. \subsection{Extending the time scale of the convergence to the deterministic approximation} Let $\|\cdot\|_1$ denote the $\ell^1$-norm in $S$. \begin{Lem} Let $\{T_N\}_{N\in\mathbb{N}}$ be a sequence in $\mathbb{R}^+$ such that $T_N\leq C\frac{\log \log N}{N}$ for all $N\in \mathbb{N}$, for some $C>0$. Then we have \[ \lim_{N\to\infty}\sup_{t\in [0,T_N]}\sup_{X_N(0)\in S_N}\mathbb{E}_{P_N^{X_N(0)}} \left\| X_N(t)-X^{\rm fast}_{X_N(0)}(Nt)\right\|_1=0. 
\] \label{LIMITSHORTTIMES} \end{Lem} \noindent {\sl Proof.} Given $f\in C^2(S)$, consider the martingale $M_N^f$ relative to $P_N^{X_N(0)}$ and defined by \begin{equation} t\mapsto M_N^f(t):=f(X_N(t))-\int_0^tL_Nf(X_N(s))ds \label{MARTF} \end{equation} Using also the relation $f(X^{\rm fast}_{X_N(0)}(Nt))=f(X_{N}(0))+\int_0^tN{\cal L}_\text{fast}f(X^{\rm fast}_{X_N(0)}(s))ds$, we then get the estimate \begin{align*} \mathbb{E}_{P_N^{X_N(0)}} \left|f(X_N(t))-f(X^{\rm fast}_{X_N(0)}(Nt))\right|\leq & \mathbb{E}_{P_N^{X_N(0)}} \left|M_N^f(t)-f(X_{N}(0))\right|\\ &+\int_0^t\mathbb{E}_{P_N^{X_N(0)}} \left|L_Nf(X_N(s))-N{\cal L}_\text{fast}f(X_N(s))\right|ds\\ &+N\int_0^t\mathbb{E}_{P_N^{X_N(0)}} \left|{\cal L}_\text{fast}f(X_N(s))-{\cal L}_\text{fast}f(X^{\rm fast}_{X_N(0)}(s))\right|ds. \end{align*} We estimate each term in the RHS separately. By the H\"older inequality, we have \[ \mathbb{E}_{P_N^{X_N(0)}} \left|M_N^f(t)-f(X_{N}(0))\right|\leq \left(\mathbb{E}_{P_N^{X_N(0)}} \left| M_N^f(t)-f(X_{N}(0))\right|^2\right)^{\tfrac12}= \left(\mathbb{E}_{P_N^{X_N(0)}} \langle M_N^f(t)\rangle\right)^{\tfrac12}, \] where $\langle M_N^{f}(t)\rangle = \int_0^t q_Nf(X_N(s))ds$ and the quadratic operator $q_N$ is defined in \eqref{QUADOP}. This definition implies that for every $f\in C^1(S)$, there exists $K_f>0$ such that \[ \sup_{\mathrm{x}\in S,N\in\mathbb{N}}|q_Nf(\mathrm{x})|\leq K_f \] from which we get the upper bound $\mathbb{E}_{P_N^{X_N(0)}} \left|M_N^f(t)-f(X_{N}(0))\right|\leq K_f\sqrt{t}$. Similarly, given $f\in C^2(S)$, the Taylor theorem applied to the first order expansion of $f(\mathrm{x}+\frac{\mathrm{u}_i}{N})$ at $\mathrm{x}$ implies the inequality \[ \int_0^t\mathbb{E}_{P_N^{X_N(0)}} \left|L_Nf(X_N(s))-N{\cal L}_\text{fast}f(X_N(s))\right|ds\leq K_f t, \] provided that $K_f$ is chosen sufficiently large. 
By choosing $K_f$ even larger if necessary so that the following inequality holds \[ |{\cal L}_\text{fast}f(\mathrm{x})-{\cal L}_\text{fast}f(\mathrm{y})|\leq K_f\|\mathrm{x}-\mathrm{y}\|_1,\quad \forall \mathrm{x},\mathrm{y}\in S, \] and by collecting all the estimates, we obtain \[ \mathbb{E}_{P_N^{X_N(0)}} \left|f(X_N(t))-f(X^{\rm fast}_{X_N(0)}(Nt))\right|\leq K_f\left(\sqrt{t}+t+N\int_0^tE_{P_N^{X_N(0)}} \left\|X_N(s)-X^{\rm fast}_{X_N(0)}(Ns)\right\|_1ds\right). \] Applying this inequality to the functions $f(\mathrm{x})=x_i$ ($i\in \mathbb{Z}_3$), and using the Gronwall inequality, we finally get \[ \sup_{X_N(0)\in S_N}\mathbb{E}_{P_N^{X_N(0)}} \left\|X_N(t)-X^{\rm fast}_{X_N(0)}(Nt)\right\|_1\leq K\left(\sqrt{t}+t\right) e^{NKt}, \] for some $K>0$. The Lemma then immediately follows from the fact that $T_N\leq C\frac{\log \log N}{N}$ implies $\lim_{N\to\infty} (\sqrt{T_N}+T_N)e^{NKT_N}=0$. \hfill $\Box$ \subsection{Identification of the limits, end of the proof of Theorem \ref{MAINRES}}\label{S-INDENT} The tightness of the sequence $\{Z_N\}$ (section \ref{S-COMPACT}) implies that, up to passing to a subsequence, this sequence converges in law to a process $Z$ with values in $\overline{I}$. To complete the proof of Theorem \ref{MAINRES}, it remains to prove that $Z$ must be a solution of the SDE \eqref{eq:eds_avg}. The core argument is to establish that $Z$ must solve the martingale problem associated with ${\cal L}_\text{avg}$ for a sufficiently large set of functions. To that end, we first invoke a slight extension of the Skorokhod representation theorem, see Theorem \ref{THMSKO} in Appendix \ref{A-SKO}, according to which there exists a common probability space $(\mathrm{E},\Omega,\mathrm{P})$ on which the random variables $z(X_N)$ converge pointwise to $Z$. Then, we consider the martingales $M_N^{f_I\circ z}$ defined by \[ t\mapsto M_N^{f_I\circ z}(t)=f_I(Z_N(t))-\int_0^tL_N(f_I\circ z)(X_N(s))ds. 
\] The key argument of the proof of Theorem \ref{MAINRES} is the following $L^1$-convergence of the martingales. \begin{Pro} For every $f_I\in C^3(\overline{I})$ and $T>0$, we have \[ \lim_{N\to \infty}\mathbb{E}_{\mathrm{P}}\left|M_N^{f_I\circ z}(t)-f_I(Z(t))+\int_0^t{\cal L}_\text{\rm avg}f_I(Z(s))ds\right|=0,\quad \forall t\in [0,T]. \] \end{Pro} \noindent The end of the proof of Theorem \ref{MAINRES} again uses standard arguments. The Proposition implies in particular that the process $t\mapsto f_I(Z(t))-\int_0^t{\cal L}_\text{\rm avg}f_I(Z(s))ds$ equipped with the probability $\mathrm{P}$ is a martingale for every $f_I\in C^3(\overline{I})$, see e.g.\ Lemma 3.6, Chap.\ 2 in \cite{D96}, and hence for $f_I(Z)=Z$ and $f_I(Z)=Z^2$. Moreover, the process is continuous (proved in the proof of the proposition below). Hence by (an adaptation to $\overline{I}$ of the) Proposition 4.6, Chap.\ 5 in \cite{KS91}, one defines a Brownian motion $W$ such that $Z$ is a solution of the SDE \eqref{eq:eds_avg}. \medskip \noindent {\sl Proof of the Proposition.} We are going to expand the difference $M_N^{f_I\circ z}(t)-f_I(Z(t))+\int_0^t{\cal L}_\text{avg}f_I(Z(s))ds$ into a telescopic sum whose elements can be controlled using characteristic features of the various processes involved in the approximation. To that end, let $k_N=\lfloor\tfrac{tN}{\log \log N}\rfloor$ and $T_N=\tfrac{t}{k_N}$.
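As a numerical aside (not part of the proof), one can evaluate how fast the short-time bound of Lemma \ref{LIMITSHORTTIMES} decays along this mesh. Since $NT_N\approx \log\log N$, the factor $(\sqrt{T_N}+T_N)e^{NKT_N}$ behaves roughly like $\sqrt{T_N}\,(\log N)^K$; the sketch below uses $t=1$ and an arbitrary placeholder constant $K=1$:

```python
import math

def mesh(t, N):
    """Mesh of the telescopic sum: k_N = floor(t*N/loglog(N)), T_N = t/k_N."""
    k_N = math.floor(t * N / math.log(math.log(N)))
    return k_N, t / k_N

def short_time_factor(N, t=1.0, K=1.0):
    """(sqrt(T_N) + T_N) * exp(N*K*T_N): the bound appearing in the Lemma."""
    _, T_N = mesh(t, N)
    return (math.sqrt(T_N) + T_N) * math.exp(N * K * T_N)

# The factor decreases towards 0 as N grows, as required in the proof.
factors = [short_time_factor(N) for N in (10**3, 10**6, 10**9, 10**12)]
```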
Using the definition of $M_N^{f_I\circ z}$ above, we write \begin{equation} \mathbb{E}_{\mathrm{P}}\left|M_N^{f_I\circ z}(t)-f_I(Z(t))+\int_0^t{\cal L}_\text{avg}f_I(Z(s))ds\right|\leq \mathbb{E}_{\mathrm{P}}\left|f_I(Z_N(t))-f_I(Z(t))\right|+\sum_{\ell=1}^5\mathbb{E}_{\mathrm{P}}\left|Q_{N,\ell}^{f_I}(t)-Q_{N,\ell+1}^{f_I}(t)\right|, \label{INEQMART} \end{equation} where the boundary terms \[ Q_{N,1}^{f_I}(t)=\displaystyle\int_0^tL_N(f_I\circ z)(X_N(s))ds\quad \text{and}\quad Q_{N,6}^{f_I}(t)=\int_0^t{\cal L}_\text{avg}f_I(Z(s))ds \] in the telescopic sum are already known (NB: While $Q_{N,6}^{f_I}(t)$ actually does not depend on $N$, using this notation simplifies the expression \eqref{INEQMART}) and the interior terms $Q_{N,\ell}^{f_I}(t)$ are defined by \[ Q_{N,\ell}^{f_I}(t)= \begin{cases} {\displaystyle\int_0^t{\cal L}_\text{slow}(f_I\circ z)(X_N(s))ds}\quad\text{if}\quad \ell=2\\ {\displaystyle\sum_{k=0}^{k_N-1}\int_{kT_N}^{(k+1)T_N}{\cal L}_\text{slow}(f_I\circ z)(X^{\rm fast}_{X_N(kT_N)}(Ns))ds}\quad\text{if}\quad \ell=3\\ {\displaystyle T_N\sum_{k=0}^{k_N-1}{\cal L}_\text{avg}f_I(Z_N(kT_N))}\quad\text{if}\quad \ell=4\\ {\displaystyle T_N\sum_{k=0}^{k_N-1}{\cal L}_\text{avg}f_I(Z(kT_N))}\quad\text{if}\quad \ell=5\\ \end{cases} \] We now prove that each term in the RHS of \eqref{INEQMART} vanishes in the limit of large $N$. \medskip \noindent {\bf Proof of convergence of $\mathbb{E}_{\mathrm{P}}\left|f_I(Z_N(t))-f_I(Z(t))\right|$ and $\mathbb{E}_{\mathrm{P}}\left|Q_{N,4}^{f_I}(t)-Q_{N,5}^{f_I}(t)\right|$}. We first write \[ \mathbb{E}_{\mathrm{P}}\left|f_I(Z_N(t))-f_I(Z(t))\right|\leq \mathbb{E}_{\mathrm{P}}\left\|f_I(Z_N)-f_I(Z)\right\|_\infty,\ t\in [0,T], \] where $\|\cdot\|_\infty$ denotes the uniform norm on $[0,T]$. Similarly, we have \[ \mathbb{E}_{\mathrm{P}}\left|Q_{N,4}^{f_I}(t)-Q_{N,5}^{f_I}(t)\right|\leq t \mathbb{E}_{\mathrm{P}}\left\|{\cal L}_\text{avg}f_I(Z_N)-{\cal L}_\text{avg}f_I(Z)\right\|_\infty,\ t\in [0,T]. 
\] The amplitude of the jumps of $Z_N$ is at most $\tfrac1{N}$ and the mapping $(Z(t))_{t\in\mathbb{R}^+}\mapsto \sup_{t\in [0,T]}|Z(t)-Z(t^-)|$ is continuous in the Skorokhod topology of $\mathbb{D}([0,T],\overline{I})$; hence the limit $Z$ must be continuous a.s.\ and the convergence $Z_N\to Z$ a.s.\ must occur in the sense of $\|\cdot\|_\infty$. The desired convergences then follow from the dominated convergence theorem using that $f_I$ and ${\cal L}_\text{avg}f_I$ are uniformly continuous over $\overline{I}$. \medskip \noindent {\bf Proof of convergence of $\mathbb{E}_{\mathrm{P}}\left|Q_{N,1}^{f_I}(t)-Q_{N,2}^{f_I}(t)\right|$}. Given $f\in C^3(S)$, Taylor's theorem applied to the second-order expansion of $f(\mathrm{x}+\frac{\mathrm{u}_i}{N})$ at $\mathrm{x}$ implies the existence of $K_f>0$ such that we have \[ \left|L_Nf(\mathrm{x})-N{\cal L}_\text{fast}f(\mathrm{x})-{\cal L}_\text{slow}f(\mathrm{x})\right|\leq \frac{K_f}{N}. \] Applying this inequality to $f_I\circ z$ with $f_I\in C^3(\overline{I})$, and using the property ${\cal L}_\text{fast}(f_I\circ z)=0$ (which follows from the fact that the function $\mathrm{z}$ is invariant under the flow generated by $F$), we immediately obtain the desired convergence \[ \lim_{N\to\infty}\mathbb{E}_{\mathrm{P}}\left|Q_{N,1}^{f_I}(t)-Q_{N,2}^{f_I}(t)\right| =0,\ \forall t\in [0,T]. \] \medskip \noindent {\bf Proof of convergence of $\mathbb{E}_{\mathrm{P}}\left|Q_{N,2}^{f_I}(t)-Q_{N,3}^{f_I}(t)\right|$}. Given $f_I\in C^2(\overline{I})$, let $K_s$ be the Lipschitz constant of ${\cal L}_\text{slow}(f_I\circ z)$ with respect to the $\|\cdot\|_1$-norm.
We have \begin{align*} \mathbb{E}_{\mathrm{P}}\left|Q_{N,2}^{f_I}(t)-Q_{N,3}^{f_I}(t)\right|&\leq K_s\sum_{k=0}^{k_N-1}\int_{kT_N}^{(k+1)T_N}\mathbb{E}_{\mathrm{P}}\|X_N(s)-X^{\rm fast}_{X_N(kT_N)}(Ns)\|_1ds\\ &\leq K_s t \sup_{u\in [0,T_N]}\sup_{X_N(0)\in S_N}\mathbb{E}_{P_N^{X_N(0)}} \left\| X_N(u)-X^{\rm fast}_{X_N(0)}(Nu)\right\|_1, \end{align*} and Lemma \ref{LIMITSHORTTIMES} then immediately implies \[ \lim_{N\to\infty}\mathbb{E}_{\mathrm{P}}\left|Q_{N,2}^{f_I}(t)-Q_{N,3}^{f_I}(t)\right| =0,\ \forall t\in [0,T]. \] \medskip \noindent {\bf Proof of convergence of $\mathbb{E}_{\mathrm{P}}\left|Q_{N,3}^{f_I}(t)-Q_{N,4}^{f_I}(t)\right|$}. The proof of convergence for this term relies on localisation considerations in $S$ and related dynamical estimates. Writing \begin{equation} \mathbb{E}_{\mathrm{P}}\left|Q_{N,3}^{f_I}(t)-Q_{N,4}^{f_I}(t)\right|\leq T_N\sum_{k=0}^{k_N-1}\mathbb{E}_{\mathrm{P}}\left|\frac1{T_N}\int_{kT_N}^{(k+1)T_N}{\cal L}_\text{slow}(f_I\circ z)(X^{\rm fast}_{X_N(kT_N)}(Ns))ds-{\cal L}_\text{avg}f_I(Z_N(kT_N))\right| \label{ESTQ34} \end{equation} and given $r\in I$ (to be specified later on), for each term in the sum on the RHS, we consider separately the cases $Z_N(kT_N) \in [0,r)$ and $Z_N(kT_N)\in [r,\tfrac1{27}]$.
In the second case, we use that given $t>0$ and $\mathrm{x}\in S$ such that $z(\mathrm{x})\geq r$, $\mathrm{x}\neq (\tfrac13,\tfrac13,\tfrac13)$, the definition of the averaged generator ${\cal L}_\text{avg}$ and the periodicity $X^{\rm fast}_{\mathrm{x}}(s+T(z(\mathrm{x})))= X^{\rm fast}_{\mathrm{x}}(s)$ imply \begin{align*} &\left|\tfrac1{NT_N}\int_0^{NT_N}{\cal L}_\text{slow}(f_I\circ z)(X^{\rm fast}_{\mathrm{x}}(s))ds-{\cal L}_\text{avg}f_I(z(\mathrm{x}))\right| \\ &\leq \tfrac1{NT_N}\left|\int_0^{NT_N}{\cal L}_\text{slow}(f_I\circ z)(X^{\rm fast}_{\mathrm{x}}(s))ds-\lfloor\frac{NT_N}{T(z(\mathrm{x}))}\rfloor\int_0^{T(z(\mathrm{x}))}{\cal L}_\text{slow}(f_I\circ z)(X^{\rm fast}_{\mathrm{x}}(s))ds\right|\\ &+\left|\left(\frac{T(z(\mathrm{x}))}{NT_N}\lfloor\frac{NT_N}{T(z(\mathrm{x}))}\rfloor-1\right){\cal L}_\text{avg}f_I(z(\mathrm{x}))\right|\\ &\leq \tfrac1{NT_N}\left|\int_{T(z(\mathrm{x}))\lfloor\frac{NT_N}{T(z(\mathrm{x}))}\rfloor}^{NT_N}{\cal L}_\text{slow}(f_I\circ z)(X^{\rm fast}_{\mathrm{x}}(s))ds\right|+\frac{T(z(\mathrm{x}))}{NT_N}\left|{\cal L}_\text{avg}f_I(z(\mathrm{x}))\right|. \end{align*} Accordingly, and using also that the period $T(z(\mathrm{x}))$ is bounded over those points $\mathrm{x}\in S$ for which $z(\mathrm{x})\geq r$ (see Lemma \ref{PROPERIOD} {\em (i)}), we conclude that the RHS vanishes in the limit $N\to\infty$, since $\lim_{N\to\infty}NT_N=+\infty$. For $\mathrm{x}=(\tfrac13,\tfrac13,\tfrac13)$, the result immediately follows from the equality \eqref{MATCH}. The first case $Z_N(kT_N) \in [0,r)$ corresponds to the neighborhood of the simplex boundaries, where the control of averaging is more delicate. Recall from the comments after Lemma \ref{AVGOP} that ${\cal L}_\text{avg}f_I(0^+)=0$.
Hence by taking $r$ sufficiently small, for each (putative) term ${\cal L}_\text{avg}f_I(Z_N(kT_N))$ in \eqref{ESTQ34}, we can make its contribution arbitrarily small (and the same comment applies to the maximal total contribution $T_N\sum_{k=0}^{k_N-1}\left|{\cal L}_\text{avg}f_I(Z_N(kT_N))\right|$). In order to address the remaining integral term in \eqref{ESTQ34}, we observe that when $Z_N(t) \in [0,r)$, we have $z(X^{\rm fast}_{X_N(t)}(Ns))\in [0,r)$ for all $s\in\mathbb{R}^+$. Therefore, at any $s$, the point $X^{\rm fast}_{X_N(t)}(Ns)$ may be close to one of the vertices $\mathrm{v}_i$ of $S$. An explicit calculation shows that \[ \lim_{\mathrm{x}\to \mathrm{v}_i}{\cal L}_\text{slow}(f_I\circ z)(\mathrm{x})=0,\ i\in\mathbb{Z}_3. \] As before, choosing neighbourhoods $V_{i,r}$ of the $\mathrm{v}_i$ with sufficiently small radius $r$ ensures that the contribution of the integral terms in \eqref{ESTQ34}, for those $X^{\rm fast}_{X_N(kT_N)}(Ns)\in \bigcup_{i\in\mathbb{Z}_3}V_{i,r}$, can be made arbitrarily small, uniformly in $N$. W.l.o.g.\ we may assume that $\bigcup_{i\in\mathbb{Z}_3}V_{i,r}$ is invariant under the cyclic permutation of coordinates $(x_i)\mapsto (x_{i+1})$. Then, if a trajectory $X^{\rm fast}_{X_N(t)}(Ns)$ leaves a set $V_{i,r}$, it must travel to $V_{i+1,r}$. In the intermediate region between $V_{i,r}$ and $V_{i+1,r}$, the norm $\|F(\mathrm{x})\|$ of the vector field is bounded below (indeed, one easily checks that this is the case when restricted to the segment $\mathrm{v}_i\mathrm{v}_{i+1}$, part of an edge in the boundary $\partial S$; a continuity argument then extends the bound). Therefore, the transit time of $X^{\rm fast}_{X_N(t)}(Ns)$ between these sets must be bounded from above by, say, $\tfrac{t_r}{N}$ for some $t_r>0$. It follows that, in the interval $[kT_N,(k+1)T_N]$, the total time the trajectory spends outside $\bigcup_{i\in\mathbb{Z}_3}V_{i,r}$ cannot exceed $3T_N\tfrac{t_r}{N}$.
The corresponding total contribution of the integral terms in \eqref{ESTQ34}, for those $X^{\rm fast}_{X_N(kT_N)}(Ns)$ in the transit regions between the $V_{i,r}$, then cannot exceed $3\tfrac{t_rk_N}{N}$, which vanishes when $NT_N\to\infty$. This completes the proof that \[ \lim_{N\to\infty}\mathbb{E}_{\mathrm{P}}\left|Q_{N,3}^{f_I}(t)-Q_{N,4}^{f_I}(t)\right| =0,\ \forall t\in [0,T]. \] \medskip \noindent {\bf Proof of convergence of $\mathbb{E}_{\mathrm{P}}\left|Q_{N,5}^{f_I}(t)-Q_{N,6}^{f_I}(t)\right|$}. The quantity ${\displaystyle T_N\sum_{k=0}^{k_N-1}{\cal L}_\text{avg}f_I(Z(kT_N))}$ can be regarded as a Riemann sum for the integral $\int_0^t{\cal L}_\text{avg}f_I(Z(s))ds$; hence the desired convergence follows since ${\cal L}_\text{avg}f_I\circ Z$ is \textit{a.s.}\ continuous over $[0,T]$. \noindent The proof of the Proposition is complete. \hfill $\Box$ \medskip \noindent {\large \bf Acknowledgements} \noindent This work has been supported by the ANR-19-CE40-0023 (PERISTOCH). We are grateful to Nils Berglund, Nicolas Fournier and Luc Hillairet for fruitful discussions and relevant comments.
\section{Introduction} In the field of spintronics \cite{Igor,Hirohata,Taniyama,Yuasa}, spin-based logic devices using semiconductors have so far been proposed theoretically \cite{Tanaka,Saito_TSF,Saito_JEC,Dery_Nature}. To realize these concepts, electrical spin injection, transport, and detection in semiconductors have been explored by using nonlocal magnetoresistance measurements in lateral spin-valve (LSV) devices with GaAs \cite{Lou_NatPhys,Ciorga_PRB,Uemura_APEX,Saito_APEX,Salis_PRB,Bruski_APL}, Si \cite{Jonker_APL,Suzuki_APEX,Saito_IEEE,Ishikawa_PRB,Jansen_PRAP}, Ge \cite{Zhou_PRB,Kasahara_APEX,Fujita_PRAP,Yamada_APEX}, GaN \cite{Bhattacharya_APL,Park_NC}, SiGe \cite{Naito_APEX}, and so forth. Although almost all of these studies have used single-crystalline semiconductor layers as the pure-spin-current transport channels, there is still a lack of information on the influence of the crystal orientation on the spin injection, transport, and detection in semiconductors. To date, Li {\it et al}. have clarified the influence of the $g$-factor anisotropy in the Ge conduction band on the spin relaxation of electrons by combining a ballistic hot-electron spin injection-detection technique with changing in-plane applied magnetic field directions \cite{Appelbaum_PRL111}. Unfortunately, that study did not demonstrate pure spin current transport, and the anisotropic phenomena were observed only at low temperatures. Very recently, Park {\it et al}. reported crystallographic-dependent pure spin current transport in GaN-based LSVs with nanowire channels at room temperature \cite{Park_NC}. They discussed the influence of the spontaneous polarization, interface-specific spin filtering, or the strength of the spin-orbit coupling on the pure spin current transport in GaN nanowires. However, the detailed mechanism is still an open question. Also, there is no information on the crystallographic effect on the pure spin current transport in other semiconductors.
In this letter, we experimentally demonstrate efficient pure spin current transport in Si$\langle$100$\rangle$ LSV devices at room temperature. The enhancement in the spin injection/detection efficiency is related to the valley structures of the conduction band in Si. This study experimentally shows the importance of the crystallographic relationship between the magnetization direction of the ferromagnetic contacts and the orientation of the conduction-band valleys in Si. To explore the influence of the crystal orientation of the Si spin-transport channel, we designed two kinds of devices along $\langle$100$\rangle$ and $\langle$110$\rangle$, as shown in Figs. 1(a) and 1(b), on a phosphorous-doped ($n$ $\sim$ 1.3 $\times$ 10$^{19}$ cm$^{-3}$) (001)-SOI ($\sim$ 61 nm) layer. As a tunnel barrier, an MgO (1.1 nm) layer was deposited by electron-beam evaporation at 200 $^\circ$C on the SOI layer \cite{Ishikawa_APL}. Then, CoFe (10 nm) and Ru cap (7 nm) layers were sputtered on top of it under a base pressure better than 5.0 $\times$ 10$^{-7}$ Pa. The MgO and CoFe layers were epitaxially grown on (001)-SOI, where the (001)-textured MgO layer was grown on Si(001) owing to the insertion of a thin Mg layer at the MgO/Si interface \cite{Saito_AIP}. From the detailed characterizations, the CoFe(001)$\langle$100$\rangle$/MgO(001)$\langle$110$\rangle$/Si(001)$\langle$110$\rangle$ heterostructures were confirmed \cite{Saito_AIP}. Conventional processes with electron beam lithography and Ar ion milling were used to fabricate LSV devices \cite{Ishikawa_PRB,Fujita_PRAP,Yamada_APEX}. Next, the Ru/CoFe/MgO contacts, FM1 and FM2, were patterned into 2.0 $\times$ 5.0 $\mu$m$^{2}$ and 0.5 $\times$ 5.0 $\mu$m$^{2}$ in size, respectively, and the width of the Si spin-transport channel was 7.0 $\mu$m. Finally, Au/Ti ohmic pads were formed for all the contacts.
Note that there was no difference in the size of the spin-injector contact between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices \cite{Jansen_PRAP}. Furthermore, the current-voltage characteristics of the FM1 (FM2) contact in Si$\langle$100$\rangle$ LSV devices were identical with those in the Si$\langle$110$\rangle$ LSV ones. The resistivity and Hall mobility ($\mu_{\rm Hall}$) of the Si spin-transport channel were evaluated from Hall-effect measurements for Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ Hall-bar devices. \begin{figure}[t] \begin{center} \includegraphics[width=7.5 cm]{Fig1.eps} \caption{(Color online) Schematic diagrams of (a) the lateral four-terminal device with the Si spin-transport layer and (b) the relationship between the crystal orientations, $\langle$100$\rangle$ or $\langle$110$\rangle$, and the fabricated Si spin-transport channels. (c) and (d) are nonlocal magnetoresistance curves for Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices ($d =$ 1.75 $\mu$m), respectively, at 303 K.} \end{center} \end{figure} Figures 1(c) and 1(d) show four-terminal nonlocal magnetoresistance signals for Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices, respectively, at a bias current ($I$) of -0.5 mA at room temperature (303 K), where the center-to-center distance ($d$) in the LSV device was 1.75 $\mu$m. Here in-plane external magnetic fields ($B_{\rm y}$) were applied along the directions shown in Fig. 1(b) for each Si$\langle$100$\rangle$ or Si$\langle$110$\rangle$ LSV device. First, we can see differences in the shape and magnitude of the signals between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices. From the magnetization measurements of the epitaxial CoFe layer on MgO/Si(001), we have confirmed the presence of the magnetocrystalline easy axis along Si$\langle$110$\rangle$ (CoFe$\langle$100$\rangle$). 
Namely, the magnetic fields along Si$\langle$100$\rangle$ ([100] or [010]) can contribute to the magnetization rotation because these are hard axes. Indeed, when we applied the in-plane magnetic fields to Si$\langle$100$\rangle$ LSV devices along Si[010] for nonlocal measurements, their magnetization switching fields were larger than those of the Si$\langle$110$\rangle$ LSVs, as shown in Figs. 1(c) and 1(d). This behavior indicates that the magnetization of the narrower CoFe contact was pinned along a certain direction between Si$\langle$110$\rangle$ and the direction of $B_{\rm y}$ (Si$\langle$100$\rangle$). That is, although there is a shape-induced anisotropy along Si[010] in the Si$\langle$100$\rangle$ LSVs, the magnetization tends to be pinned along the magnetocrystalline easy axis along Si[110], leading to the enhancement in the magnetization switching field. In addition, the magnetization rotation of the narrower CoFe contact in the Si$\langle$100$\rangle$ LSV devices gives rise to the gradual changes in the nonlocal magnetoresistance observed from $B_{\rm y} =$ 10 to 50 mT. Here we define the magnitude of the spin signal, $|\Delta R_{\rm NL}|$, as the change in $\Delta R_{\rm NL}$ at the magnetization switching field from antiparallel to parallel magnetization states of the CoFe contacts. A representative $|\Delta R_{\rm NL}|$ is shown in Fig. 1(c). Note that $|\Delta R_{\rm NL}|$ for the Si$\langle$100$\rangle$ LSV device is nearly twice as large as that for the Si$\langle$110$\rangle$ one. These tendencies were observed reproducibly for many LSV devices with $d =$ 1.75 $\mu$m at room temperature. \begin{figure}[t] \begin{center} \includegraphics[width= 7.5 cm]{Fig2.eps} \caption{(Color online) (a) Plots of $|\Delta R_{\rm NL}|$ versus $d$ at 303 K. The dashed line shows a fit to Eq. (1).
(b) Four-terminal nonlocal Hanle-effect curves of Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices ($d =$ 2.25 $\mu$m) for the parallel and antiparallel magnetization states at 303 K.} \end{center} \end{figure} To understand the above phenomena, we measured the $d$ dependence of $|\Delta R_{\rm NL}|$ for many LSV devices, as shown in Fig. 2(a). Here each data point in Fig. 2(a) represents the average of the $|\Delta R_{\rm NL}|$ values obtained from five LSV devices. For both Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices, the value of $|\Delta R_{\rm NL}|$ decreases with increasing $d$, indicating an exponential decay of $|\Delta R_{\rm NL}|$. We note that the difference between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ becomes small in LSV devices with large $d$. In general, $|\Delta R_{\rm NL}|$ in LSVs with sufficiently large contact resistance can be expressed by the following equation: \cite{Saito_JAP,Maekawa_PRB,Fert_PRB64,Fert_PRB82} \begin{equation} |\Delta R_{\rm NL}| ={\frac{4|{P_{\rm inj}|}|{P_{\rm det}|}{r_{\rm Si}}{r_{\rm b}^{2}}\exp\left(-\frac{d}{\lambda_{\rm Si}}\right)} {{S_{\rm N}}\{\left(2{r_{\rm b}} + {r_{\rm Si}}\right)^{2} - {r_{\rm Si}^{2}}{\exp\left(-\frac{2d}{\lambda_{\rm Si}}\right)\}}}}, \end{equation} where $P_{\rm inj}$ and $P_{\rm det}$ are the spin polarizations of the electrons in Si created by the spin injector and detector, respectively, and ${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ represents the spin injection/detection efficiency of the spin injector and detector contacts. $r_{\rm b}$ ($\sim$ 10 k$\Omega$ $\mu$m$^{2}$) and $r_{\rm Si}$ ($=$ 0.0054 $\Omega$ cm $\times$ $\lambda_{\rm Si}$) are the spin resistances of the CoFe/MgO interface and the $n$-Si layer, respectively.
$\lambda_{\rm Si}$ ($=$ $\sqrt{D\tau_{\rm Si}}$, where $D$ and $\tau_{\rm Si}$ are the diffusion constant and the spin lifetime, respectively) is the spin diffusion length in Si, and $S_{\rm N}$ ($=$ 0.305 $\mu$m$^{2}$) is the cross-sectional area of the Si spin-transport layer. Using Eq. (1), we can fit the experimental data in Fig. 2(a) and extract $\lambda_{\rm Si}$ for both Si$\langle$100$\rangle$ ($\lambda_{\rm Si} $ $\sim$ 0.80 $\mu$m) and Si$\langle$110$\rangle$ ($\lambda_{\rm Si} $ $\sim$ 0.88 $\mu$m). Also, using $D$ values of 5.03 cm$^{2}$/s for Si$\langle$100$\rangle$ ($\mu_{\rm Hall} =$ 87.0 cm$^{2}$/Vs) and 5.17 cm$^{2}$/s for Si$\langle$110$\rangle$ ($\mu_{\rm Hall} =$ 89.5 cm$^{2}$/Vs), estimated from the Hall mobility \cite{Flatte_PRL}, we can roughly estimate $\tau_{\rm Si}$ values of 1.3 ns and 1.5 ns for Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$, respectively. This implies that the difference in the spin relaxation between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ is small compared with the differences in the other parameters. On the other hand, the obtained ${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ value of $\sim$ 0.16 for Si$\langle$100$\rangle$ is considerably larger than that (${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ $\sim$ 0.11) for Si$\langle$110$\rangle$. Thus, we can infer that the spin injection/detection efficiency of the contacts depends on the crystal orientation of the spin-transport channel in Si.
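For concreteness, Eq. (1) can be evaluated with the parameters quoted above. The following sketch is ours (the function and variable names are not from the text); it assumes the unit conversion 1 $\Omega$ cm $= 10^{4}$ $\Omega$ $\mu$m, and small rounding differences with respect to the quoted values are expected:

```python
import math

def delta_R_NL(P, lam_um, d_um, rho_ohm_cm=0.0054, r_b=1.0e4, S_N=0.305):
    """Eq. (1): nonlocal spin signal (ohm) of a tunnel-contact LSV.
    P      : spin injection/detection efficiency sqrt(|P_inj||P_det|)
    lam_um : spin diffusion length lambda_Si (um)
    d_um   : injector-detector separation d (um)
    r_b    : interface spin resistance (ohm um^2), here ~10 kOhm um^2
    S_N    : channel cross-sectional area (um^2)"""
    r_si = rho_ohm_cm * 1.0e4 * lam_um      # spin resistance of n-Si, ohm um^2
    e1 = math.exp(-d_um / lam_um)
    num = 4.0 * P**2 * r_si * r_b**2 * e1
    den = S_N * ((2.0 * r_b + r_si)**2 - (r_si * e1)**2)
    return num / den

# Fitted values from the text, evaluated at d = 1.75 um
sig_100 = delta_R_NL(P=0.16, lam_um=0.80, d_um=1.75)   # Si<100>
sig_110 = delta_R_NL(P=0.11, lam_um=0.88, d_um=1.75)   # Si<110>
```

With these inputs the Si$\langle$100$\rangle$ signal comes out clearly larger than the Si$\langle$110$\rangle$ one, consistent with Fig. 2(a).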
\begin{table}[b] \begin{center} \caption{Comparison of the extracted parameters at room temperature between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSVs.} \vspace{2mm} \begin{tabular}{l|c|c} \hline & Si$\langle$100$\rangle$ & Si$\langle$110$\rangle$ \\ \hline $D$ (cm$^{2}$/s) & 5.03 & 5.17 \\ ${\tau_{\rm Si}}$ (ns) & $\sim$1.4 & $\sim$1.4 \\ $\lambda_{\rm Si}$ ($\mu$m) & $\sim$0.84 & $\sim$0.85 \\ ${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ & 0.14 & 0.10 \\ \hline \end{tabular} \end{center} \end{table} By using the nonlocal Hanle analysis \cite{Jedema_Nat}, we can also confirm whether or not the spin relaxation behavior differs between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$. Figure 2(b) displays room-temperature four-terminal nonlocal Hanle-effect curves in the parallel and antiparallel magnetization states for Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices ($d =$ 2.25 $\mu$m). Here, using the following one-dimensional spin drift-diffusion model \cite{Jedema_Nat}, we obtain the best-fit curves shown as solid lines. \begin{equation} \Delta R_{\rm NL}(B_{\rm z}) = \pm A{ {\int_0^{\infty}}{\phi(t)}{\rm cos}({\omega}_{\rm L}t){\exp\left(-\frac{t}{\tau_{\rm Si}}\right)}dt}, \end{equation} where $A =$ ${\frac{{P_{\rm inj}}{P_{\rm det}}{\rho_{\rm Si}}D}{S_{\rm N}}}$, $\phi(t) =$ $\frac{1}{\sqrt{4{\pi}Dt}}{\exp\left(-\frac{d^{2}}{4Dt}\right)}$, $\omega_{\rm L}$ (= $g\mu_{\rm B}B_{\rm z}/\hbar$) is the Larmor frequency, $g$ is the electron $g$-factor ($g$ = 2) in Si, $\mu_{\rm B}$ is the Bohr magneton, and ${\rho_{\rm Si}}$ is the resistivity of Si. From the fitting results, the values of ${\tau_{\rm Si}}$ were estimated to be 1.4 ns and 1.3 ns for Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$, respectively, implying that there is almost no difference in the spin relaxation between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$.
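As an illustration, the Hanle integral of Eq. (2) can be evaluated numerically. This is a rough sketch of ours in assumed consistent units ($\mu$m and ns, so that 5.03 cm$^{2}$/s $\approx$ 0.5 $\mu$m$^{2}$/ns), with the prefactor $A$ dropped so that only the line shape is meaningful:

```python
import math

def hanle_shape(B_mT, d_um=2.25, D=0.503, tau=1.4, n=4000, t_max_factor=20.0):
    """Shape of Eq. (2) with the prefactor A dropped:
    integral of phi(t)*cos(w_L*t)*exp(-t/tau) dt,
    with phi(t) = exp(-d^2/(4*D*t)) / sqrt(4*pi*D*t).
    Units: d in um, D in um^2/ns, tau in ns; w_L = g*mu_B*B_z/hbar with
    g = 2, i.e. about 0.17588 rad/(ns mT). Midpoint rule on (0, t_max]."""
    w = 0.17588 * B_mT                      # Larmor frequency, rad/ns
    t_max = t_max_factor * tau
    dt = t_max / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt                  # midpoint avoids the t = 0 singularity
        phi = math.exp(-d_um**2 / (4.0 * D * t)) / math.sqrt(4.0 * math.pi * D * t)
        total += phi * math.cos(w * t) * math.exp(-t / tau) * dt
    return total
```

The resulting curve is even in $B_{\rm z}$ and maximal at $B_{\rm z}=0$, where no precession dephases the diffusing spins.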
On the other hand, since the amplitude of the Hanle curve is clearly different between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$, we can conclude that the value of ${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ for Si$\langle$100$\rangle$ is clearly larger than that for Si$\langle$110$\rangle$. Using the above two methods, we compare the average values of $D$, ${\tau_{\rm Si}}$, $\lambda_{\rm Si}$, and ${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices in Table I. From the evaluated data in Table I, we conclude that the spin injection/detection efficiency (${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$) depends on the crystal orientation of the spin-transport layer in Si-based LSV devices. \begin{figure}[t] \begin{center} \includegraphics[width= 8.5 cm]{Fig3.eps} \caption{(Color online) (a) Schematics of the Brillouin zone of bulk Si. (b) Schematics of the crystallographic relationship between the magnetization direction of the ferromagnetic contacts and the orientation of the conduction-band valleys in Si$\langle$100$\rangle$ (upper) and Si$\langle$110$\rangle$ (lower) LSV devices.} \end{center} \end{figure} We discuss a possible origin of the large difference in the spin injection/detection efficiency between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$. First, the $g$-factor anisotropy in Si is negligibly small compared to that in Ge because of the weak spin-orbit interaction \cite{Wilson_PR,Giorgioni_NatCom}. This fact means that, unlike Ge, we cannot see a change in the spin transport data merely by changing the direction of the applied magnetic fields \cite{Appelbaum_PRL111}. In fact, we performed oblique Hanle measurements for a Si LSV device and confirmed the negligible change in the Hanle curves upon changing the applied field directions (not shown here). Also, the interface quality of the CoFe/MgO contacts is the same between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices.
Here we focus on the crystallographic orientation of the conduction-band valleys in Si. Figure 3(a) illustrates the conduction-band valley positions in the ${\bf k}$-space in Si; six valleys are located close to the $X$ point along $\langle$100$\rangle$. In the actual LSV devices used here [see Fig. 1(a)], since the width of the Si channel (7.0 $\mu$m) is larger than the distance ($d \le$ 4.2 $\mu$m) between the spin injector and spin detector, we should regard the pure spin current transport in Si as an effectively two-dimensional phenomenon even though we used two different spin-transport channels along $\langle$100$\rangle$ and $\langle$110$\rangle$. Thus, anisotropic spin relaxation could not be detected between $\langle$100$\rangle$ and $\langle$110$\rangle$, as discussed in Table I. On the other hand, there is a difference in the configuration between the magnetization direction of the ferromagnetic contacts and the crystal orientation of the conduction-band valleys in the Si channel, as depicted in Fig. 3(b). In this situation, we can expect a difference in the spin-related electronic band structures at the CoFe/MgO/Si interface between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices. Thus, we speculate that the difference in the above crystallographic relationship is one of the origins of the difference in ${\sqrt{|{P_{\rm inj}|}|{P_{\rm det}|}}}$ between Si$\langle$100$\rangle$ and Si$\langle$110$\rangle$ LSV devices. To elucidate the detailed mechanism, further experimental and theoretical studies are required. In summary, we experimentally demonstrated efficient pure spin current transport in Si$\langle$100$\rangle$ LSV devices at room temperature. We infer that the enhancement in the spin injection/detection efficiency is related to the crystallographic relationship between the magnetization direction of the ferromagnetic contacts and the orientation of the conduction-band valleys in Si.
This study indicates the importance of considering the crystallographic orientation of the spin-transport channel in spintronic devices. \vspace{5mm} The authors acknowledge Dr. H. Sugiyama of Toshiba Corporation for useful discussions about the magnetocrystalline anisotropy of the CoFe layer on MgO/(001)SOI. This work was partly supported by a Grant-in-Aid for Scientific Research (A) (No. 16H02333) from the Japan Society for the Promotion of Science (JSPS), and a Grant-in-Aid for Scientific Research on Innovative Areas "Nano Spin Conversion Science" (No. 26103003) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT).
\section*{Introduction} The last decade has seen an increasing interest in the study of out-of-equilibrium thermodynamics at the micro- and nanoscale. Such interest is impelled by the development of quantum technology and experimental control methods at small scales. In this scenario energy fluctuations play an important role, thermodynamic quantities such as work and entropy production are defined by their mean values, and the laws of thermodynamics still hold on average. The thermodynamic description of quantum many-body systems is significant for understanding the limits of the emerging quantum technology~\cite{Goold2016,Vinjanampathy2015,Millen2016,Parrondo2015,Liuzzo-Scorpo2016,Girolami2015}. Fluctuation theorems~\cite{Jarzynski1997,Crooks1999,Esposito2009, Campisi2011,Jarzynski2011, Haenggi2015,Sagawa2012,Sagawa2013} have been key tools to describe the unavoidable fluctuations in non-equilibrium dynamics, and related experiments have been performed for small, non-interacting systems~\cite{Liphardt2002,Collin2005,Douarche2005,Toyabe2010,Saira2012,Batalhao2014,An2014,Batalhao2015,Auffeves2015,Koski2015,Peterson2016,Vidrighin2016,Rossnagel2016,Camati2016,Goold2016b,Cottet2017}, in both the classical and quantum domain. When we turn to the case of systems composed of interacting particles ('many-body systems'), complex behavior arises as a consequence of the interactions. In general, dealing with a quantum many-body system entails a tremendous effort, and it is usually necessary to employ approximations to tackle the problem. Recently, some efforts have been directed to the study of out-of-equilibrium thermodynamics in many-body systems such as chains of quantum harmonic oscillators and spin chains \cite{Silva2008,Dorner2012,Joshi2013,Mascarenhas2014,Sindona2014,Fusco2014,Zhong2015,Eisert2015,Bayat2016,Solano2016}; however, a method that can tackle a general system, and systems of increasing complexity, is still lacking.
In particular, a Ramsey-like interferometry experimental protocol has been proposed in Refs. \cite{Dorner2013,Mazzola2013} to explore the statistics of energy fluctuations and work in quantum systems. This protocol has so far been implemented for single-spin systems \cite{Batalhao2014,Batalhao2015}, while its application to a quantum many-body system in fact entails considerable experimental (and theoretical) difficulties. This protocol requires not only experimental control of each individual part of the system and of the interactions between its components, but also the ability to construct conditional quantum operations where a single ancilla should be used as a control qubit for the collective, coherent behavior of the whole quantum many-body system. This is experimentally extremely challenging, and even the theoretical simulation of the dynamics generated by the circuit would become prohibitive as the number of particles in the interacting system increases. In this paper, we propose a new method to accurately describe some thermodynamic quantities (such as the mean work) in out-of-equilibrium quantum systems which can be applied to small, medium, and large interacting many-body systems. In addition, this method could be used to make the Ramsey-like interferometry experimental protocol accessible when considering interacting many-body systems. Our method is inspired by density-functional theory (DFT) \cite{Jones1989,Dreizler1990,Argaman2000,Capelle2006,Jones2015} and will use some of the tools developed within it. DFT is one of the most efficient non-perturbative methods for studying the properties of interacting quantum systems \cite{Hohenberg1964}, and has produced tools widely used to describe diverse properties of many-body correlated systems, such as band insulators, metals, semiconductors, nanostructures, etc. \cite{Capelle2006,Jones2015}.
At the core of DFT is the mapping of the interacting many-body system onto a noninteracting one, the Kohn-Sham (KS) system\cite{Kohn1965}, which is characterised by the same ground-state density as the interacting system. In principle, from the density of such a noninteracting system it is then possible to calculate exactly the ground-state (and even excited-state) properties of the interacting system, but in practice some approximations are required to obtain the so-called exchange-correlation potential, a crucial quantity in DFT. Despite being a very popular and successful approach to quantum many-body physics, to our knowledge DFT and related methods have not so far been applied to quantum thermodynamics. Here we will consider tools from a particular flavour of DFT, lattice DFT\cite{Capelle2013}, but our method could be straightforwardly extended to other DFT flavours. Lattice DFT applies to model Hamiltonians such as the Hubbard or Heisenberg Hamiltonians, which are particularly important for quantum information processing and which describe well the type of spin systems of interest to the quantum-thermodynamics community. As a test-bed example we will study the out-of-equilibrium dynamics of the Hubbard dimer, which provides a variety of interesting regimes and behaviours to benchmark our method, since it can be solved exactly. We will compare exact and approximate results and show that our method provides very accurate estimates in most regimes of interest, opening attractive further possibilities for applications. \section*{The Hubbard Model} The Hubbard model was conceived in 1963 separately by Gutzwiller \cite{Gutzwiller1963}, Kanamori \cite{Kanamori1963}, and Hubbard \cite{Hubbard1963} to describe the interaction of electrons in solids.
This model gives a microscopic understanding of the transition between Mott-insulating and conducting systems \cite{Hubbard1964}, and allows for tunnelling of particles between adjacent sites of a lattice and interactions between particles occupying the same site. Here we consider the time-dependent one-dimensional Hubbard Hamiltonian \cite{Essler2016} described by \begin{eqnarray} \hat{H}(t)=-J\sum_{i=1,L;\sigma=\uparrow,\downarrow}(\hat{c}_{i,\sigma}^\dagger \hat{c}_{i+1,\sigma} +h.c.)+ \sum_{i=1,L}\Delta_i(t)\hat{n}_{i}+U\sum_{i=1,L} \hat{n}_{i,\uparrow}\hat{n}_{i,\downarrow},\label{Hubbard_Ham} \end{eqnarray} where $\hbar=1$ (atomic units), $L$ is the number of sites, $\hat{c}_{i,\sigma}^\dagger$ ($\hat{c}_{i,\sigma}$) are creation (annihilation) operators for fermions of spin $\sigma$, $\hat{n}_{i,\sigma}= \hat{c}_{i,\sigma}^\dagger \hat{c}_{i,\sigma}$ is the site-$i$ number operator for the $\sigma$ spin component, $\hat{n}_{i}=\hat{n}_{i,\uparrow}+\hat{n}_{i,\downarrow}$, $J$ is the hopping parameter, $\Delta_i(t)$ is the time-dependent amplitude of the spin-independent external potential at site $i$, and $U$ is the on-site particle-particle interaction parameter. Exact or numerically exact solutions for static many-body problems are rare. For the Hubbard model, there exists an exact solution for the homogeneous one-dimensional case; however, numerically exact solutions for the non-uniform case quickly become problematic as the number of particles and sites increases. Solutions to time-dependent many-body problems are even more difficult to achieve, and the Hubbard model is no exception. However, the quantum work due to an external driving of a many-body system is one of those problems in which including time dependence and the effects of many-body interactions is essential to capture -- even qualitatively -- the system behavior.
In the following, we propose a relatively simple approach to this problem which allows us to include the key features stemming from the many-body interactions and dynamics, while maintaining the simplicity of simulating a non-interacting dynamics. We will show that, for a wide range of parameters, this simple approach reproduces {\it quantitatively} the exact results. \section*{DFT-inspired estimate of quantum thermodynamic quantities} \subsection*{The Kohn-Sham system} The KS system is a fictitious, non-interacting, quantum system defined as having the same ground-state particle density as the original interacting physical system \cite{Kohn1965}. The two systems then have the same number of particles. For each physical system, and a given many-body interaction, the KS system is uniquely defined \cite{Note1} and, in the limit in which the physical system is non-interacting, the Hohenberg-Kohn theorem \cite{Hohenberg1964} ensures that the KS and the physical system coincide. Given an interacting system of Hamiltonian $\hat{H}=\hat{K}+\hat{V}+\hat{V}_{ee}$, where $\hat{K}$ is the kinetic energy operator, $\hat{V}$ is the one-body external potential, and $\hat{V}_{ee}$ is the electron–electron repulsion, the corresponding KS system is described by the non-interacting Hamiltonian $\hat{H}_{KS}=\hat{K}+\hat{V}_s$, where $\hat{V}_s$ is the one-body potential $\hat{V}_s = \hat{V}+ \hat{V}_H + \hat{V}_{xc}$. Here $\hat{V}_H$ is the Hartree potential, corresponding to the classical electrostatic interaction, and $\hat{V}_{xc}$ is the exchange-correlation potential, the functional derivative of the exchange-correlation energy $E_{xc}$, which contains additional contributions from the many-body interactions as well as the many-body contributions to the kinetic energy.
Usually, the exchange-correlation potential is written as a sum of exchange and correlation contributions, $V_{xc}=V_x+V_c$, and, due to its unknown functional dependence on the ground-state particle density, approximations have to be used to calculate it. \subsection*{Work in a quantum system} In a non-equilibrium driven quantum system, work is defined as the mean value of a work probability distribution, $\langle W\rangle=\int WP(W)dW$, which takes into account the energy-level transitions (system histories) that can occur in a quantum dynamics \cite{Talkner2007}. In this scenario an external agent performs (extracts) work on (from) a quantum system, and the concept of work takes into account both the intrinsic nondeterministic nature of quantum mechanics and the effects of non-equilibrium fluctuations. The work probability distribution $P(W)$ contains all the information about the possible transitions in the Hamiltonian energy spectrum produced by an external potential (field) that drives the system out of equilibrium between the initial time $t=0$ and the final time $t=\tau$. This distribution is defined as $P(W)=\sum_{n,m}p_n p_{m|n} \delta\left(W-\Delta\epsilon_{m,n}\right)$, where $p_n$ is the probability of finding the system in the eigenstate $|n\rangle$ of the initial Hamiltonian $\hat{H}(t=0)$, and $p_{m|n}$ is the transition probability for the system to evolve to the eigenstate $|m\rangle$ of the final Hamiltonian $\hat{H}(t=\tau)$ given the initial state $|n\rangle$. We will consider the work done on a system which starts in the thermal equilibrium state $\hat{\rho}_0=\exp\left[-\beta \hat{H}(t=0)\right]/Z_0$, where $\beta= 1/K_BT$ is the inverse temperature, $K_B$ is the Boltzmann constant, $T$ is the absolute temperature, and $Z_t=\text{Tr}\,e^{-\beta \hat{H}_t}$ is the partition function for the instantaneous Hamiltonian $\hat{H}_t$ at time $t$.
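As an illustration of the two-point-measurement definition above, the following Python sketch computes $P(W)$ and $\langle W\rangle$ for a small driven system; the qubit matrices, the inverse temperature, and the sudden-quench choice of propagator are illustrative, not taken from the paper.

```python
import numpy as np

def work_stats(H0, Ht, U, beta):
    """Two-point-measurement work statistics.

    H0, Ht : initial and final Hamiltonians (Hermitian matrices)
    U      : unitary propagator from t = 0 to t = tau
    beta   : inverse temperature of the initial thermal state
    Returns the list of (W, probability) pairs and the mean work <W>.
    """
    E0, V0 = np.linalg.eigh(H0)              # initial spectrum E_n, |n>
    Et, Vt = np.linalg.eigh(Ht)              # final spectrum E_m, |m>
    p_n = np.exp(-beta * E0)
    p_n /= p_n.sum()                         # thermal occupations p_n
    p_mn = np.abs(Vt.conj().T @ U @ V0)**2   # p_{m|n} = |<m|U|n>|^2
    dist = [(Et[m] - E0[n], p_n[n] * p_mn[m, n])
            for n in range(len(E0)) for m in range(len(Et))]
    mean_W = sum(w * p for w, p in dist)
    return dist, mean_W

# sudden quench of a qubit: the propagator is the identity
H0 = np.diag([0.0, 1.0])
Ht = np.array([[0.2, 0.5], [0.5, 1.3]])
dist, mean_W = work_stats(H0, Ht, np.eye(2), beta=2.5)
```

For a sudden quench, $\langle W\rangle$ reduces to $\text{Tr}[\hat{\rho}_0(\hat{H}_\tau-\hat{H}_0)]$, which provides a quick consistency check on the implementation.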
Then the system evolves up to time $\tau$ according to some driven protocol described by the time-evolution operator generated by $\hat{H}(t)$, at the constant inverse temperature $\beta$. The final state of the system will not, in general, be an equilibrium state. This is the non-equilibrium dynamics that we are interested in, and it is the typical scenario explored in quantum thermodynamic protocols where fluctuation theorems can be applied. \subsection*{DFT-inspired methods} We consider the Hubbard system of Eq. \eqref{Hubbard_Ham} and propose methods to accurately, quantitatively estimate the average quantum work $\langle W\rangle$ produced by its driven dynamics. The key idea is to use the KS Hamiltonian $\hat{H}_{KS}$ as the zeroth-order Hamiltonian in a perturbation expansion scheme which converges to the exact many-body Schrödinger equation\cite{Goerling1993,Goerling1994,Coe2008}. Using this expansion at zeroth order gives a simple method of including interactions within a {\em formally non-interacting} scheme through the DFT exchange-correlation ($\hat{V}_{xc}$) and Hartree ($\hat{V}_{H}$) terms. For the static case, a similar scheme has proven very effective, largely improving on standard perturbation schemes in the description of entanglement over a wide range of parameters\cite{Coe2008}. The formulation of DFT for the Hubbard model that we employ is site-occupation functional theory (SOFT) \cite{Gunnarsson1986,Lima2003}, where the traditional density in real space, $n(\textbf{r})=\langle\sum_{\sigma}\hat{\psi}_{\sigma}^\dagger(\textbf{r}) \hat{\psi}_{\sigma}(\textbf{r})\rangle$, is replaced by the site occupation $n_i=\langle\sum_{\sigma}\hat{c}_{i,\sigma}^\dagger \hat{c}_{i,\sigma}\rangle$.
Therefore, using SOFT we can write the KS Hamiltonian for the Hubbard model as \begin{equation} \hat{H}_{KS}=-J\sum_{i=1,L;\sigma=\uparrow,\downarrow}(\hat{c}_{i,\sigma}^\dagger \hat{c}_{i+1,\sigma} +h.c.)+\sum_{i=1,L}\Delta_{KS,i}(t)\hat{n}_{i}\label{H_0}, \end{equation} where \begin{equation} \Delta_{KS,i}(t)=\Delta_{i}(t)+V_{H,i}+V_{xc,i}, \label{Delta_KS} \end{equation} and the Hartree potential is $V_{H,i}=U n_{i}/2$. The time-dependent potential $\Delta_{i}(t)$ defines the driven protocol, and the exchange-correlation potential $V_{xc,i}$ reduces, for the Hubbard model, to the correlation potential $V_{c,i} ={\delta E_{c}}/{\delta n_{i}}$~\cite{Capelle2013}. However, as shown in the Methods section, not all approximations to the exchange-correlation potential for the Hubbard model respect this property. Using the KS Hamiltonian of Eq.~\eqref{H_0}, we write the full many-body Hamiltonian as $ \hat{H}=\hat{H}_{KS}+\Delta\hat{H}$, where the perturbative term is then defined by \begin{equation} \Delta\hat{H}=-\sum_{i=1,L}\left({V}_{H,i} +V_{xc,i}\right) \hat{n}_{i} + U\sum_{i=1,L} \hat{n}_{i,\uparrow}\hat{n}_{i,\downarrow}.\label{H'} \end{equation} To obtain the mean value $\langle W\rangle$, we propose three related protocols (as illustrated in Figure~\ref{scheme}). \subsubsection*{`Zero-order' approximation protocol} We write the interacting Hamiltonian as $\hat{H}=\hat{H}_{0}+\Delta\hat{H}$, where $\hat{H}_{0}$ is a (formally) non-interacting Hamiltonian and $\Delta\hat{H}\equiv \hat{H}-\hat{H}_{0}$. We will consider the case in which $\hat{H}_{0}=\hat{H}_{KS}$ and compare it with the case in which $\hat{H}_{0}$ corresponds to the standard non-interacting approximation to $\hat{H}$; see the Results section.
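The splitting $\hat{H}=\hat{H}_{KS}+\Delta\hat{H}$ of Eqs.~(\ref{H_0})--(\ref{H'}) can be checked explicitly on a two-site system; in the Python sketch below the occupations and the exchange-correlation values fed to the splitting are illustrative placeholders (any approximation to $V_{xc,i}$ could be plugged in), and the operators are written directly in the half-filling dimer basis used later in the paper.

```python
import numpy as np

# Dimer basis {|ud,0>, |u,d>, |d,u>, |0,ud>}: site occupations and
# double occupancies as diagonal operators.
N1 = np.diag([2.0, 1.0, 1.0, 0.0])
N2 = np.diag([0.0, 1.0, 1.0, 2.0])
D1 = np.diag([1.0, 0.0, 0.0, 0.0])        # n_{1,up} n_{1,down}
D2 = np.diag([0.0, 0.0, 0.0, 1.0])        # n_{2,up} n_{2,down}

def hopping(J):
    return np.array([[0.0, -J,  J, 0.0],
                     [-J, 0.0, 0.0, -J],
                     [ J, 0.0, 0.0,  J],
                     [0.0, -J,  J, 0.0]])

def full_H(J, U, d1, d2):
    """Interacting Hamiltonian: hopping + on-site potential + interaction."""
    return hopping(J) + d1 * N1 + d2 * N2 + U * (D1 + D2)

def ks_split(J, U, d1, d2, n, vxc):
    """Return (H_KS, Delta_H) for given occupations n and potentials vxc.

    n   : site occupations (n_1, n_2)
    vxc : exchange-correlation potentials (v_1, v_2); any approximation
          (PLDA, parametrized E_c, ...) can be plugged in here.
    """
    n, vxc = np.asarray(n), np.asarray(vxc)
    v_H = 0.5 * U * n                      # Hartree potential U n_i / 2
    dks = np.array([d1, d2]) + v_H + vxc   # Delta_KS,i as in Eq. (3)
    H_KS = hopping(J) + dks[0] * N1 + dks[1] * N2
    dH = (-(v_H[0] + vxc[0]) * N1 - (v_H[1] + vxc[1]) * N2
          + U * (D1 + D2))                 # perturbation, Eq. (4)
    return H_KS, dH
```

By construction $\hat{H}_{KS}+\Delta\hat{H}$ recovers the full Hamiltonian exactly, whatever occupations and potentials are supplied.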
Then we approximate the initial thermal state of the system by the non-interacting one, $\hat{\rho}_0=\exp\left[-\beta \hat{H}_{0}(t=0)\right]/Z_0$, and the time-dependent evolution is calculated according to the (formally) non-interacting $\hat{H}_0(t)$ up to the final time $\tau$. The quantum work will then be estimated based on this time evolution and on the spectra of the initial and final zero-order Hamiltonians. We note that in this protocol the time dependence of $\hat{H}_0(t)$ enters only through the (explicitly) time-dependent term appearing in $\hat{H}$. For $\hat{H}_{0}=\hat{H}_{KS}$, we expect this method to reduce the magnitude of the perturbation $\Delta\hat{H}$, as many-body interactions are already partially accounted for in the zero-order term $ \hat{H}_{KS}$; see Eqs. (\ref{H_0}), (\ref{Delta_KS}), and (\ref{H'}). We note that the zero-order term $ \hat{H}_{KS}$ of this approximation indeed already reproduces a very important property of the many-body system, namely the ground-state site-occupation distribution. We therefore expect it to produce more accurate results than the standard perturbation expansion. \subsubsection*{Protocol with first-order correction to eigenenergies} Here we propose to use the same formally non-interacting dynamics as in the previous protocol, but now to include the first-order correction in the estimate of the eigenenergies associated with the initial and final Hamiltonians of the system, with the corresponding correction to the approximation of the initial thermal state. As we will show, a better estimate of the eigenenergies may be important to preserve agreement with the exact results in certain regimes, especially when the zero-order and exact Hamiltonians present qualitatively different eigenstate degeneracies, as in the case study below.
While in this paper we will consider first-order corrections only, it should be possible to further improve accuracy by including higher-order corrections to the eigenenergies. \subsubsection*{Protocol including time-evolution effects of many-body interactions} This third protocol applies to the case in which $\hat{H}_{0}=\hat{H}_{KS}$. Here we include an implicit time dependence in ${V}_{H}$ and ${V}_{xc}$. These quantities are functionals of the site occupation, so time dependence can be included via the time dependence of $n_i$. This yields a time dependence which is local in time, i.e. which has no memory and hence cannot accurately describe non-Markovian processes. Nevertheless, it allows us to mimic, at least in part, the variation in time of the interaction effects due to the particle dynamics. As we will show, this significantly improves results in certain regimes. The time-dependent site occupation $n_i(t)$ will be obtained by solving the system self-consistently. This protocol takes inspiration from the adiabatic LDA approximation of time-dependent DFT \cite{Runge1984,Verdozzi2008,Ullrich2012}. It may be further enhanced by improving the approximation for the eigenenergies of the initial- and final-time Hamiltonians, as described in the previous subsection. We will consider two approximations for the KS exchange-correlation potential (see the Results and Methods sections). We will compare the results from these protocols to the exact results and to the results obtained from same-order standard perturbation theory. \begin{figure} \centering \includegraphics[width=11.5cm]{FIG1.jpg} \caption{Illustration of the application of the protocols to the Hubbard dimer. This results in four possible approximations to obtain the out-of-equilibrium dynamics and related thermodynamic quantities for the Hubbard dimer. These approximations can be used depending on the desired precision and the regime of interest.
The worst-case scenario is to use a standard zero-order non-interacting Hamiltonian to describe the system and the dynamics. For more precision, and depending on the regime of interest, we can either use the Kohn-Sham Hamiltonian to describe system and dynamics, or use the standard zero-order non-interacting Hamiltonian for the dynamics but refine the approximation of the initial and final energy spectra (here done up to first-order perturbation, FOP): the implementations of both options have similar degrees of difficulty. Finally, we can combine the Kohn-Sham Hamiltonian for the dynamics with the FOP (or higher-order precision) for the initial and final spectra.} \label{scheme} \end{figure} \section*{Results} We will focus on a Hubbard dimer with two electrons of opposite spin (half filling), which is known to display a rich physical behavior\cite{natphyseditoral2013,Murmann2015,Carrascal2015}, including an analogue of the Mott metal-insulator transition \cite{Murmann2015,Carrascal2015}. Because of its non-trivial dynamics, this model is ideal as a test bed for assessing the accuracy of approximations in reproducing quantities related to quantum fluctuations and quantum thermodynamics. When the system is reduced to a dimer at half filling, Eq.~(\ref{Hubbard_Ham}) can be written in the subspace basis set $\{|\uparrow\downarrow,0\rangle, |\uparrow,\downarrow\rangle, |\downarrow,\uparrow\rangle, |0,\uparrow\downarrow\rangle\}$ as \begin{equation}\label{Hubbard_Ham_dimer} H=\left( \begin{array}{cccc} U+\Delta_1 & -J & J & 0 \\ -J & 0 & 0 & -J \\ J & 0 & 0 & J \\ 0 & -J & J & U+\Delta_2 \\ \end{array} \right). \end{equation} We will calculate the average work along the dynamics driven by the linear time-dependent on-site potentials $\Delta_1=-\Delta_2=\Delta_0-(\Delta_0-\Delta_{\tau})t/\tau$, with initial and final values $\Delta_0=0.5J$ and $\Delta_{\tau}=5J$, at the temperature $T=J/(0.4 K_B)$.
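The matrix of Eq.~(\ref{Hubbard_Ham_dimer}) and the linear driving protocol can be written down numerically in a few lines; the parameter values in this Python sketch are illustrative.

```python
import numpy as np

def dimer_H(J, U, delta1, delta2):
    """Hubbard-dimer Hamiltonian of Eq. (5) in the half-filling basis
    {|ud,0>, |u,d>, |d,u>, |0,ud>} (ud = doubly occupied site)."""
    return np.array([[U + delta1, -J,   J,   0.0],
                     [-J,          0.0, 0.0, -J],
                     [ J,          0.0, 0.0,  J],
                     [ 0.0,       -J,   J,   U + delta2]])

def delta1_of_t(t, tau, d0=0.5, dtau=5.0):
    """Linear ramp Delta_1(t) = -Delta_2(t), in units of J."""
    return d0 - (d0 - dtau) * t / tau

H = dimer_H(J=1.0, U=5.0, delta1=0.5, delta2=-0.5)
E = np.linalg.eigvalsh(H)   # instantaneous spectrum at t = 0
```

A useful sanity check: the triplet combination $(|\uparrow,\downarrow\rangle+|\downarrow,\uparrow\rangle)/\sqrt{2}$ is annihilated by this matrix for any $U$, $J$, and $\Delta_i$, so zero is always an instantaneous eigenvalue.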
The combined values of $U$ and $\tau$ then determine the dynamical regime (from sudden quench, through intermediate, to adiabatic; see the discussion of the exact results). Due to the small Hilbert space associated with (\ref{Hubbard_Ham_dimer}), the quantum dynamics it generates can be solved exactly by directly integrating the time-dependent Schr\"{o}dinger equation. The Hamiltonian (\ref{Hubbard_Ham_dimer}) can describe various physical systems, including, but not limited to, two gate-defined quantum dots \cite{Coe2010,Kikoin2012,Barthelemy2013} and cold atoms\cite{Murmann2015}. A schematic description of how we apply the proposed protocols to the Hubbard dimer is provided in Figure~\ref{scheme}. \subsection*{Exact results} \subsubsection*{Average work} In panel (a) of Figure~\ref{HD_exact} we display the exact work that can be extracted from an interacting Hubbard dimer at half filling when it is operated according to the dynamics described by the Hamiltonian (\ref{Hubbard_Ham_dimer}). By varying the interaction $U$ and the time length of the dynamics $\tau$, we can access very different regimes: from non-interacting to very strongly interacting, and from sudden-quench, to intermediate, to adiabatic dynamics. \begin{figure} \centering \includegraphics[width=14.5cm]{FIG2.jpg} \caption{Mean extracted work, $-\left\langle W \right\rangle$, in units of $J$, and entropy production of a Hubbard dimer at half filling. (a) Contour plot of the mean work and (b) contour plot of the entropy production for different values of the dimensionless particle-particle interaction strength $U/J$ and the dimensionless evolution time $\tau\times J$. The red line in (a) marks the change towards the adiabatic regime, where the transition probability between different instantaneous Hamiltonian eigenstates becomes negligible along the dynamics.
} \label{HD_exact} \end{figure} {\bf Crossover between non-adiabatic and adiabatic dynamics.} This system has three characteristic energy scales: $U$, $1/\tau$, and $J$. In the parameter region for which $U\stackrel{>}{\sim}J$, the crossover between non-adiabatic and adiabatic dynamics depends mostly on the interplay between $U$ and $1/\tau$, and occurs when the two energy scales become comparable, that is, for $U\propto 1 /\tau$; see the behavior of the dashed red curve in Figure~\ref{HD_exact}(a). For each given $U$, when $\tau \gg 1/U$ the work does not depend on the time length of the dynamics, showing that, for that particular $U$, the work has converged to its adiabatic value. For $\tau\stackrel{<}{\sim} 1/U$ and a given inter-particle interaction, the work instead strongly depends on $\tau$, increasing with $\tau$ up to the maximum allowed for that particular inter-particle interaction. For $U\stackrel{<}{\sim}J$, the energy scale $J$ starts to influence the crossover between the non-adiabatic and adiabatic regimes, so that the simple relation between $U$ and $\tau$ described above breaks down in Figure~\ref{HD_exact}(a). The contour curves of equal average work are strikingly different between the adiabatic and non-adiabatic regimes: in the $(U,\tau)$ plane, they can be well approximated by $U=\mathrm{constant}$ in the adiabatic regime, while they increase rapidly and almost linearly with $\tau$ for non-adiabatic dynamics.
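The exact reference values discussed in this section come from direct integration of the time-dependent Schr\"odinger equation; a minimal Python sketch of such a computation is given below (the step count, the value of $U$, and the chosen $\tau$ are illustrative).

```python
import numpy as np

def propagate(H_of_t, tau, steps=400):
    """Time-ordered propagator for H(t), built from midpoint-rule
    step exponentials exp(-i H dt) evaluated via diagonalization."""
    U_prop = np.eye(H_of_t(0.0).shape[0], dtype=complex)
    dt = tau / steps
    for k in range(steps):
        E, V = np.linalg.eigh(H_of_t((k + 0.5) * dt))
        U_prop = V @ np.diag(np.exp(-1j * E * dt)) @ V.conj().T @ U_prop
    return U_prop

def mean_work(H_of_t, tau, beta, steps=400):
    """<W> = Tr[U rho_0 U^dag H(tau)] - Tr[rho_0 H(0)], thermal start."""
    H0, Ht = H_of_t(0.0), H_of_t(tau)
    E0, V0 = np.linalg.eigh(H0)
    p = np.exp(-beta * E0)
    p /= p.sum()
    rho0 = V0 @ np.diag(p) @ V0.conj().T
    U_prop = propagate(H_of_t, tau, steps)
    return float(np.real(np.trace(U_prop @ rho0 @ U_prop.conj().T @ Ht)
                         - np.trace(rho0 @ H0)))

# Hubbard dimer with the linear ramp Delta_1 = -Delta_2 of the text
J, U_int, beta, tau = 1.0, 2.0, 0.4, 3.0      # beta = 0.4/J, as in the text
def H_of_t(t):
    d1 = 0.5 - (0.5 - 5.0) * t / tau
    return np.array([[U_int + d1, -J,   J,   0.0],
                     [-J,          0.0, 0.0, -J],
                     [ J,          0.0, 0.0,  J],
                     [ 0.0,       -J,   J,   U_int - d1]])

W = mean_work(H_of_t, tau, beta)
```

Two checks follow immediately from the construction: the propagator is unitary, and a time-independent Hamiltonian performs no work on a thermal state.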
The behavior of these contour curves mirrors the fact that in the non-adiabatic regime the final state of the system, and hence the work, is strongly influenced by the details of the dynamics, and is therefore strongly dependent on the time scale on which the time-dependent driven protocol occurs. By definition, however, in an adiabatic (energy-level transitionless) process the system remains at all times with the same energy-level occupations of the instantaneous Hamiltonian as in the initial thermal state, which means that the final state of the system -- and hence the work -- is completely defined, independently of the time taken by the system to go from the initial to the final state. We note that, as the Hamiltonian changes due to the driven protocol, the final state in the adiabatic regime is not an equilibrium state at the inverse temperature $\beta$. {\bf Transition to `Mott insulator'.} The very strongly interacting parameter region $U\stackrel{>}{\sim} 5J$ corresponds to the two-particle equivalent of the Mott insulator~\cite{Murmann2015}, where double site occupation becomes energetically very costly. Hence, for the Hubbard dimer, in this regime both double and zero site occupation are highly suppressed. As the Hilbert space available to the system dynamics shrinks across the transition, we observe a corresponding decrease of the average work that can be extracted from the system. \subsubsection*{Entropy production and irreversibility} The entropy production $\langle\Sigma\rangle$ is defined in terms of the dissipated work in the out-of-equilibrium dynamics, $\langle\Sigma\rangle= \beta\left(\langle W\rangle-\Delta F\right)$, where $\Delta F=-\left( 1/\beta\right)\ln\left( Z_{\tau}/Z_{0}\right)$ is the free-energy variation in the protocol. It is related to the uncompensated heat, that is, the energy that must be dissipated to the environment in order for the out-of-equilibrium system to return to thermal equilibrium.
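Given the two spectra, the free-energy variation and the entropy production defined above follow directly; a minimal Python sketch (the example matrix and inverse temperature are illustrative):

```python
import numpy as np

def free_energy_change(H0, Ht, beta):
    """Delta F = -(1/beta) ln(Z_tau / Z_0), from the two spectra."""
    Z0 = np.sum(np.exp(-beta * np.linalg.eigvalsh(H0)))
    Zt = np.sum(np.exp(-beta * np.linalg.eigvalsh(Ht)))
    return -np.log(Zt / Z0) / beta

def entropy_production(mean_W, H0, Ht, beta):
    """<Sigma> = beta (<W> - Delta F)."""
    return beta * (mean_W - free_energy_change(H0, Ht, beta))
```

A rigid shift $\hat{H}_\tau=\hat{H}_0+c\,\mathbb{1}$ gives $\Delta F=c$ exactly, which makes a convenient unit test for the implementation.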
In this sense $\langle\Sigma\rangle$ quantifies the degree of irreversibility of the dynamical process at hand~\cite{Batalhao2015,Velasco2011}. It is then instructive to look at it as the system goes through the different dynamical regimes. In general, the degree of irreversibility will be related to the size of the Hilbert space available to the dynamics. For the system at hand, {\it both} the change to the adiabatic regime {\it and} the Mott metal-insulator transition contribute to the reduction of the available Hilbert space, and hence to the decrease of entropy production. This phenomenon shows clearly in the data plotted in panel (b) of Figure~\ref{HD_exact}, where we observe a combined decrease of the entropy production as $\tau$ increases and the adiabatic regime is entered, {\it and} as $U$ increases and the system tends to `freeze' towards the $n_1=n_2=1$ Mott-insulator configuration. \subsection*{Results from `zero-order' approximations (standard non-interacting and Kohn-Sham based)} In this section we compare results from the protocol which uses a {\it zero-order} approximation, where the dynamics is propagated according to the Hamiltonian $\hat{H}_0(t)$ and time dependence is included {\it only} through the actual driving term $\Delta_i(t)$. As the {\it zero-order} Hamiltonian $\hat{H}_0 (t)$ is always {\it formally} a sum of single-particle Hamiltonians, the dynamics it generates corresponds to that of a non-interacting system. Being formally non-interacting, this dynamics is easier to calculate numerically, and would be easier to simulate and measure experimentally (in a quantum simulator), than the one generated by the many-body Hamiltonian $\hat{H}(t)$.
We underline once more that the formally non-interacting systems in $\hat{H}_0 (t)$ represent physical systems (non-interacting particles) in the case of standard zero-order perturbation, $\hat{H}_0 (t)=\hat{H}_{NI} (t)\equiv -J\sum_{i=1,L;\sigma=\uparrow,\downarrow}(\hat{c}_{i,\sigma}^\dagger \hat{c}_{i+1,\sigma} +h.c.)+ \sum_{i=1,L}\Delta_i(t)\hat{n}_{i}$, while they represent fictitious systems (`Kohn-Sham particles') when $\hat{H}_0=\hat{H}_{KS}$. \begin{figure} \centering \includegraphics[width=17.0cm]{FIG3.jpg} \caption{Relative error of the mean extracted work for the `zero-order' perturbation protocol. The contour plot of the relative error is shown as a function of the dimensionless particle-particle interaction strength $U/J$ and the dimensionless evolution time $\tau\times J$. (a) Standard zero-order perturbation theory. (b) KS-based zero-order perturbation theory using the PLDA approximation for the exchange-correlation potential. (c) KS-based zero-order perturbation theory using an accurate parametrization of the exact $E_{c}$. }\label{LDA_no_FOP} \end{figure} In Figure~\ref{LDA_no_FOP} we present contour plots of the relative error, with respect to the exact average extractable work, for all the `zero-order' approximations. In panel (a) we show the results for standard perturbation theory, where the dynamics is propagated according to the non-interacting part of the many-body Hamiltonian, $\hat{H}_{NI}(t)$. In panels (b) and (c) of Figure~\ref{LDA_no_FOP} we present our results for the case where the Hamiltonian is approximated at zero order by $\hat{H}_{KS}(t)$. We approximate the exchange-correlation potential in two ways: using the pseudo-local-density approximation (PLDA) proposed by Gunnarsson and Sch\"onhammer\cite{Gunnarsson1986} (Figure~\ref{LDA_no_FOP}, panel (b)), and using the recent parametrization of the exact correlation energy $E_c$ proposed by Carrascal \textit{et al.}~\cite{Carrascal2015} (PAR) (Figure~\ref{LDA_no_FOP}, panel (c)).
The two approximations, PLDA and PAR, are described in the Methods section. It is known that the PLDA is not a particularly good approximation for the Hubbard model~\cite{Capelle2013}, but we chose it because we wish to show that even this already provides a good improvement over standard zero-order perturbation theory (compare panels (a) and (b) of Figure~\ref{LDA_no_FOP}). This is true especially for small and intermediate values of $\tau$: for example, for $\tau\approx 0$ (sudden quench), using the PLDA instead of standard zero-order perturbation doubles the $U$-interval for which the relative error in the average work is below $10\%$, and makes it almost four times larger for $\tau\sim 2/J$ (the non-adiabatic/adiabatic crossover region). The parametrization of $E_c$ by Carrascal \textit{et al.}~\cite{Carrascal2015} reproduces the exact correlation potential $V_{c}$ for the Hubbard dimer quite accurately; as such, Figure~\ref{LDA_no_FOP}, panel (c), may be taken to represent the best results we can expect for this system from the proposed approximation with $\hat{H}_0=\hat{H}_{KS}$. We see that now {\em the average work is reproduced to high accuracy in most of the $(U,\tau)$ parameter region}, even in parameter regions with strong many-body interactions and/or corresponding to a dynamics very far from adiabaticity. For very small $\tau$ (sudden-quench dynamics) we can reproduce the extractable work within 10\% accuracy for {\it interaction strengths as large as $U=4J$}. This is an enormous improvement over the results obtained from the non-interacting zero-order dynamics (Figure~\ref{LDA_no_FOP}, panel (a)), where, for the same values of $\tau$, the exact extractable work could be reproduced within 10\% error only for $U\stackrel{<}{\sim} 0.5J$.
In the non-adiabatic/adiabatic crossover region, $\tau\approx 2/J$, we reproduce very well the exact average extractable work {\it up to interactions $U\approx 6J$}, while standard perturbation theory does poorly, accounting for interactions only up to $U\approx 1J$ for $\tau\approx 2$ and up to $U\approx 3J$ for $\tau\approx 3$. In this respect we wish to remark that even the `lighter patch' occurring in panel (c) of Figure~\ref{LDA_no_FOP}, within the region $3\stackrel{<}{\sim}U\stackrel{<}{\sim}5$, $1.5\stackrel{<}{\sim}\tau\stackrel{<}{\sim}3.5$, still corresponds to very good accuracy, with a maximum relative error of 12\% for the $\tau=2$ cut and of 14\% for the $U=4$ cut. Finally, in the adiabatic regime, results from Carrascal's parametrization still substantially outperform standard perturbation, almost doubling the 10\% relative-error accuracy region, which now extends to interaction strengths of $U\approx 4.5J$, against the limit of $U\approx 2.5J$ for standard perturbation. We note that, at least for the Hubbard dimer, a `zero-order'-type approximation will always start to deteriorate as the system enters the Mott metal-insulator-type transition, and that this is independent of how well many-body interactions are accounted for in $\hat{H}_0$. In fact $\hat{H}_0$, and hence $\hat{H}_{KS}$, by definition does not include a many-body interaction term formally written as $U\sum_{i=1,L} \hat{n}_{i,\uparrow}\hat{n}_{i,\downarrow}$: this leads to a spectrum where the singlet and triplet eigenstates, $\frac{1}{\sqrt{2}}[|\uparrow,\downarrow\rangle- |\downarrow,\uparrow\rangle]$ and $\frac{1}{\sqrt{2}}[|\uparrow,\downarrow\rangle + |\downarrow,\uparrow\rangle]$, are {\it always non-degenerate}. However, in the Mott-insulator-type regime described by the actual many-body system of Hamiltonian $\hat{H}$, only the two aforementioned states remain energetically accessible and, most importantly, {\it they become degenerate}. While the first feature may be mimicked (e.g.
this is done by the exact $\hat{H}_{KS}$ to reproduce the exact ground-state site-occupation profile), the {\it intrinsic qualitative difference} in degeneracy between the interacting and the formally non-interacting spectra determines the failure of any `zero-order'-type approximation in the Mott-insulator-type region, which is what we observe in Figure~\ref{LDA_no_FOP}. We note that the large improvement provided by the `zero-order', KS-based approximations comes at no additional computational cost with respect to standard perturbation, as in both cases we are propagating formally non-interacting Hamiltonians. \subsection*{Adding first-order perturbation corrections to the initial and final energy spectra} At the end of the previous section we discussed how, in regimes where the extent of extractable work is dominated by the spectrum and the details of the system dynamics become less relevant, the KS-based `zero-order' approximation protocol may be seriously limited, especially so when the exact and `zero-order' spectra present different degeneracy patterns. For the Hubbard dimer this happens in the Mott-insulator-type parameter region. In this section we explore whether a potential solution to this issue could be to lift this degeneracy by applying higher-order perturbation theory to the initial ($t=0$) and final ($t=\tau$) spectra, while performing the system dynamics according to the `zero-order' Hamiltonian. In this paper we have considered first-order perturbation (FOP) corrections, and results are provided in Figure~\ref{LDA_FOP}. FOP corrections to the energy spectra can be considered accurate only for relatively low interactions, $U\stackrel{<}{\sim}1J$. For these values of $U$ we indeed see either an improvement in accuracy or, where results were already within 10\% of the exact ones, that this accuracy is maintained.
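The FOP correction used here is the textbook first-order shift $E_n \approx E_n^{(0)}+\langle n|\Delta\hat{H}|n\rangle$; a minimal Python sketch follows (the matrices are illustrative, and degenerate zero-order levels would require diagonalizing $\Delta\hat{H}$ within the degenerate subspace).

```python
import numpy as np

def fop_energies(H0, dH):
    """Eigenenergies corrected to first order, E_n^(0) + <n|dH|n>.

    H0 : zero-order (e.g. Kohn-Sham) Hamiltonian
    dH : perturbation Delta_H = H - H0
    Note: valid as-is only for non-degenerate zero-order levels.
    """
    E0, V = np.linalg.eigh(H0)
    # diagonal matrix elements of dH in the zero-order eigenbasis
    shifts = np.real(np.einsum('in,ij,jn->n', V.conj(), dH, V))
    return E0 + shifts

H0 = np.diag([0.0, 1.0, 3.0])
dH = np.array([[0.1, 0.02, 0.0],
               [0.02, -0.1, 0.0],
               [0.0, 0.0, 0.2]])
E_fop = fop_energies(H0, dH)
```

For a weak perturbation like the one above, the first-order energies track the exact eigenvalues closely; as the text notes, the quality degrades as the perturbation grows.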
For larger values of the interaction, we can give a qualitative explanation of the influence of modifying the energy spectra. Let us first consider the adiabatic regime ($\tau \gg 1/U$): here the results for the average work are dominated by the accuracy of the spectrum, as the system -- in a perfectly adiabatic case -- would remain at any $t$ in a thermal state characterized by the same occupation probabilities determined at time $t=0$. So for $2J\stackrel{<}{\sim}U\stackrel{<}{\sim}5J$, as the spectra provided by the FOP become increasingly {\it quantitatively} worse, we see that this correction {\it reduces} the accuracy of the results. However, for $U\stackrel{>}{\sim}5J$ the system undergoes the analogue of the Mott metal-insulator transition, which, as discussed in the previous section, cannot be properly accounted for by `zero-order' protocols because there is a {\it qualitatively} different degeneracy between the zero-order and the exact spectra. In this parameter region, the protocol using the FOP spectrum, quantitatively inaccurate but qualitatively correct, then provides a substantial improvement over the `zero-order' protocol, as can be seen in Figure~\ref{LDA_FOP}. In particular, FOP corrections are very important at very large particle-particle interaction strength, $U\approx 10J$: here, as long as $U$ is accounted for in the eigenenergy splitting, the system freezes in the ground state, and the FOP approximation is then enough to reproduce the exact result $W=0$ shown in Figure~\ref{HD_exact}. In the same parameter region, without the FOP correction, the standard non-interacting zero-order approximation, completely independent of $U$, would predict maximum average work ($W\ge 3.3 J$ for $\tau \stackrel{>}{\sim} 2.5/J$), while the KS-based zero-order protocols provide some improvement over this result but still predict a much-too-high average work ($W\ge 2.1 J$).
For $U\stackrel{>}{\sim}1J$, and for the non-adiabatic and transition regimes ($\tau \stackrel{<}{\sim} 1/U$), both spectrum and dynamics contribute to the average work. Here results from the FOP corrections seem to depend on how well the `zero-order' protocol was already reproducing the exact dynamics. In particular, for the KS-based zero-order protocol which uses the accurate parametrization of the exact $E_{c}$ (panel (c) of Figure \ref{LDA_FOP}), the contribution of the quantitatively incorrect spectra from the FOP worsens the results. \begin{figure} \centering \includegraphics[width=17.cm]{FIG4.jpg} \caption{Relative error of the mean extractable work for the first-order correction to the eigenenergies' protocol. Contour plot of the relative error is shown as function of the dimensionless particle-particle interaction strength $U/J$ and the dimensionless evolution time $\tau\times J$. (a) Dynamics is generated by standard zero-order perturbation theory. (b) Dynamics is generated by KS-based zero-order perturbation theory using the PLDA approximation for the exchange-correlation potential. (c) Dynamics is generated by KS-based zero-order perturbation theory using an accurate parametrization for the exact correlation energy $E_{c}$. }\label{LDA_FOP} \end{figure} \subsection*{Including implicit time-dependency in many-body interaction terms (without and with FOP)} So far we have considered zero-order Hamiltonians $\hat{H}_0$ where particle-particle interactions were included at most through time-independent functionals of the initial site-occupation. However, a more accurate representation of the driven system evolution should be expected by including time-dependent functionals. In this subsection we take inspiration from the adiabatic LDA and propose to include a time-dependence in these functionals by considering the same functional forms as for the static DFT but calculated at every time using the instantaneous site-occupation.
The time-dependence considered is then local in time. To implement this protocol numerically, a self-consistent cycle to obtain the time-dependent site-occupation $n_{i}(t)$, and from there the $V_{H,i}[n_{i}(t)]$ and $V_{xc,i}[n_{i}(t)]$ functionals, is necessary. We illustrate this by applying the protocol to the PLDA exchange-correlation functional and focusing on the non-adiabatic and crossover regimes, $0\le\tau \le 4/J$. We use as a starting point the exact density at the initial time, i.e., $n_{i}^{(0)}(t)=n_{i}^{(exact)}(0)$. From this density we obtain the exchange-correlation energy $E_{xc,i}^{(1)}(t)=E_{xc,i}^{(1)}[n_{i}^{(0)}(t)]$ and therefore the Kohn-Sham Hamiltonian $\hat{H}_{KS}^{(1)}(t)=\hat{H}_{KS}^{(1)}[n_{i}^{(0)}(t),t]$. We evolve the system using this Hamiltonian and we obtain the state of the system $\hat{\rho}^{(1)}(t)$. From this state we can calculate the next iteration for the site-occupation $n_{i}^{(1)}(t)=\text{Tr}\left[\hat{\rho}^{(1)}(t)\hat{n}_{i}\right]$. Using this, we restart the same cycle calculating the $E_{xc,i}^{(2)}(t)$ and consequently a new Kohn-Sham Hamiltonian $\hat{H}_{KS}^{(2)}(t)$. This cycle is repeated until the convergence criterion $\sum_{0<t<\tau}\lvert n_{i}^{(k-1)}(t)- n_{i}^{(k)}(t)\rvert/N \le 10^{-6}$ is satisfied, where the time interval $[0,\tau]$ is discretized in $N$ different values of $t$. Results are shown in Figure~\ref{ALDA}, panel (a), to be compared with panel (c) of Figure~\ref{LDA_no_FOP} for $0\le\tau \le 4/J$. As the system exits the sudden-quench regime ($\tau\stackrel{>}{\sim}1/J$), and the site-occupation starts to respond to the dynamics, we notice a marked improvement over using time-independent functionals. For $\tau>1.5/J$ we now achieve an accuracy of at least 10\% up to interaction strengths $5J\le U\le 6J$, while in Figure~\ref{LDA_no_FOP} it was only up to $U\le 4J$.
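The self-consistent cycle just described is, schematically, a fixed-point iteration on the whole occupation trajectory $n_i(t)$. The sketch below mimics only its structure: the propagator is a toy damped charge flow driven by the instantaneous potential (a stand-in for the true Kohn-Sham evolution, not the real dimer dynamics), and \texttt{v\_hxc} combines the pseudo-LDA $v_{xc}$ with a hypothetical Hartree term $U n_i/2$; both choices are ours, for illustration only.

```python
import numpy as np

def v_hxc(n, U=1.0):
    # pseudo-LDA v_xc = dE_xc/dn_i = -(4/3) 2^(-4/3) U n_i^(1/3), plus a
    # hypothetical Hartree term U n_i / 2 (our stand-in, not the paper's form)
    return U * n / 2 - (4.0 / 3.0) * 2.0 ** (-4.0 / 3.0) * U * n ** (1.0 / 3.0)

def propagate(n0, n_ref, alpha=0.02):
    # Toy stand-in for evolving under H_KS[n^(k-1)(t)]: at each time step the
    # potential is built from the previous iteration's occupations n_ref,
    # and charge flows from the higher- to the lower-potential site.
    n = np.empty_like(n_ref)
    n[0] = n0
    for k in range(1, len(n_ref)):
        dv = v_hxc(n_ref[k - 1, 0]) - v_hxc(n_ref[k - 1, 1])
        n[k] = n[k - 1] + alpha * dv * np.array([-1.0, 1.0])
    return n

def self_consistent_cycle(n0, steps=50, tol=1e-6, max_iter=80):
    # n^(0)(t) = n(0) at all t, as in the text; iterate trajectory -> H_KS ->
    # trajectory until the time-averaged occupation change drops below tol.
    n0 = np.asarray(n0, dtype=float)
    n_old = np.tile(n0, (steps, 1))
    for _ in range(max_iter):
        n_new = propagate(n0, n_old)
        err = np.abs(n_old - n_new).sum() / steps
        n_old = n_new
        if err < tol:
            break
    return n_old, err

n_t, err = self_consistent_cycle([1.2, 0.8])
```

Because each time step depends only on earlier times of the previous iterate, the toy cycle converges to the self-consistent trajectory in at most as many iterations as there are time steps; the real cycle converges for the same structural reason.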
The wavy pattern in the contour lines for $\tau>1.5/J$ reflects the system's charge transfer dynamics between the two sites: the PLDA functional is unable to reproduce correctly the Mott-insulator transition, so some charge transfer dynamics persists at large values of $U$. When including first order corrections to the initial and final energy spectra (Figure~\ref{ALDA}, panel (b)), we recover, and for analogous reasons, a behavior similar to that observed in Figure \ref{LDA_FOP}. \begin{figure} \centering \includegraphics[width=14.cm]{FIG5.jpg} \caption{Contour plot of the relative error for the average extractable work when including time dependence of the functionals (TPF) describing many-body interactions. Data are presented with respect to the dimensionless coupling strength $U/J$ and the dimensionless evolution time $\tau\times J$. (a) panel: without FOP; (b) panel: with FOP.}\label{ALDA} \end{figure} \section*{Conclusion} We have proposed a new method which uses tools and concepts from density functional theory to study the non-equilibrium thermodynamics of driven quantum many-body systems, and illustrated it by the calculation of the average extractable work in a driven protocol. The method has the advantage of considering appropriate formally non-interacting systems (Kohn-Sham systems) to approximate the system dynamics, circumventing the theoretical and experimental problems of dealing with actual many-body interactions. It is easily scalable to large systems and can be used at different levels of sophistication, with increasing accuracy. We have tested it on the Hubbard dimer, a two-spin system with a rich dynamics which includes the precursor to a quantum phase transition (the Mott metal-insulator transition), and which can be embodied by various physical systems, including coupled quantum dots and cold atom lattices.
Our results show that the proposed method reproduces the average extractable work to high accuracy for a very large region of parameter space: for all dynamical regimes (from sudden quench, to the non-adiabatic to adiabatic crossover region, to the adiabatic regime) and up to quite strong particle-particle interactions ($U\stackrel{<}{\sim} 6 J$) our results are within 10\% of the exact results. These very encouraging results, together with the simplicity of the method, make for a breakthrough in the calculation of non-equilibrium thermodynamic quantities, such as the quantum work, in complex many-body systems. Future developments include the possibility of combining the method with quantum simulation techniques and an experimental implementation of quantum simulators based on this method. \section*{Methods}\label{methods} \subsubsection*{Approximations for the exchange-correlation energy} The exchange-correlation energy is a functional of the site-occupation density, but its functional form is unknown and must be approximated. In this work we consider and compare results from two different types of approximations to the exchange-correlation energy for the Hubbard model. The first is the pseudo-LDA expression\cite{Gunnarsson1986} \[ E_{xc}^{P-LDA}(n)=-2^{-4/3}U\sum_{i}n_{i}^{4/3}. \] Here the homogeneous reference system for the LDA is the three-dimensional electron gas, and so exchange is non-zero (for a related discussion see \cite{Capelle2013}). In the second, $E_{xc}=E_c$: this is the accurate parametrization of the exact correlation energy recently proposed in \cite{Carrascal2015,Carrascal2015a}, given by \begin{equation} E_{c}^{par}(\delta,u)=2J\left[f_{k}(\delta,u)-t_{s}(\delta)-e_{HX}(\delta,u)\right],\label{soft exc} \end{equation} where $u=U/2J$, $\delta=\left|n_{1}-n_{2}\right|/2$, $f_{k}(\delta,u)=-g_{k}(\delta,u)+uh_{k}(\delta,u)$, $t_{s}(\delta)=-\sqrt{1-\delta^{2}}$, and $e_{HX}(\delta,u)=\frac{u\left(1+\delta^{2}\right)}{2}$.
The function $g_{k}(\delta,u)$ can be obtained iteratively from the equation \begin{equation} g_{k}(\delta,u)=g_{k-1}(\delta,u)+\left[udh_{k-1}(\delta,u)-1\right]dg_{k-1}(\delta,u), \end{equation} using as a starting point \begin{equation} g_{0}(\delta,u)=\sqrt{\frac{\left(1-\delta\right)\left\{ 1+\delta\left[1+\left(1+\delta\right)^{3}ua_{1}(\delta,u)\right]\right\} }{\left[1+\left(1+\delta\right)^{3}ua_{2}(\delta,u)\right]}}.\label{g0} \end{equation} Here the coefficients $a_{1}(\delta,u)$ and $a_{2}(\delta,u)$ are given by \begin{eqnarray} a_{1}(\delta,u) & = & a_{11}(\delta)+ua_{12}(\delta),\\ a_{2}(\delta,u) & = & a_{21}(\delta)+ua_{22}(\delta),\nonumber \end{eqnarray} where $a_{21}(\delta)=\frac{1}{2}\sqrt{\frac{\left(1-\delta\right)\delta}{2}}$, $a_{11}(\delta)=a_{21}(\delta)\left(1+\frac{1}{\delta}\right)$, $a_{12}(\delta)=\frac{1-\delta}{2}$, and $a_{22}(\delta)=\frac{a_{12}(\delta)}{2}$. The functions $h_{k}(\delta,u)$, $dg_{k}(\delta,u)$, and $dh_{k}(\delta,u)$ are defined as \begin{equation} h_{k}(\delta,u)=\frac{g_{k}^{2}(\delta,u)\left[1-\sqrt{1-g_{k}^{2}(\delta,u)-\delta^{2}}\right]+2\delta^{2}}{2\left[g_{k}^{2}(\delta,u)+\delta^{2}\right]}, \end{equation} \begin{equation} dg_{k}(\delta,u)=\frac{\left(1-\delta\right)\left(1+\delta\right)^{3}u^{2}\left\{ a_{12}(\delta)\left[\frac{3\delta}{2}-1+\delta u\left(1+\delta\right)^{3}a_{2}(\delta,u)\right]-\delta a_{22}(\delta)\left[1+\left(1+\delta\right)^{3}ua_{1}(\delta,u)\right]\right\} }{2g_{k}(\delta,u)\left[1+\left(1+\delta\right)^{3}ua_{2}(\delta,u)\right]^{2}}, \end{equation} \begin{equation} dh_{k}(\delta,u)=\frac{g_{k}(\delta,u)\left\{ g_{k}^{4}(\delta,u)+3g_{k}^{2}(\delta,u)\delta^{2}+2\delta^{2}\left[\delta^{2}-1-\sqrt{1-g_{k}^{2}(\delta,u)-\delta^{2}}\right]\right\} }{2\left[g_{k}^{2}(\delta,u)+\delta^{2}\right]^{2}\sqrt{1-g_{k}^{2}(\delta,u)-\delta^{2}}}.
\end{equation} In our calculations we used $g_{1}(\delta,u)$ to obtain the exchange-correlation energy: this already provides good accuracy, as shown in \cite{Carrascal2015}.
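For completeness, the parametrization above translates directly into code. The sketch below evaluates $E_{c}^{par}$ of Eq.~\eqref{soft exc} with a single iteration ($g_{1}$), as used in our calculations; energies are in units of $J$, and the inputs are assumed to stay in the domain $0<\delta<1$ with $1-g_{k}^{2}-\delta^{2}>0$.

```python
import math

def ec_par(delta, u, J=1.0):
    # Parametrized correlation energy E_c^par(delta, u) of Carrascal et al.,
    # evaluated with one iteration (g_1). Here u = U/2J and delta = |n1 - n2|/2.
    a21 = 0.5 * math.sqrt((1 - delta) * delta / 2)
    a11 = a21 * (1 + 1 / delta)
    a12 = (1 - delta) / 2
    a22 = a12 / 2
    a1 = a11 + u * a12
    a2 = a21 + u * a22
    c = (1 + delta) ** 3 * u                       # recurring factor (1+d)^3 u
    g0 = math.sqrt((1 - delta) * (1 + delta * (1 + c * a1)) / (1 + c * a2))

    def h(g):                                      # h_k as a function of g_k
        s = math.sqrt(1 - g * g - delta ** 2)
        return (g * g * (1 - s) + 2 * delta ** 2) / (2 * (g * g + delta ** 2))

    # first-order updates dg_0 and dh_0
    dg0 = ((1 - delta) * (1 + delta) ** 3 * u ** 2
           * (a12 * (1.5 * delta - 1 + delta * c * a2)
              - delta * a22 * (1 + c * a1))
           ) / (2 * g0 * (1 + c * a2) ** 2)
    s0 = math.sqrt(1 - g0 * g0 - delta ** 2)
    dh0 = (g0 * (g0 ** 4 + 3 * g0 ** 2 * delta ** 2
                 + 2 * delta ** 2 * (delta ** 2 - 1 - s0))
           ) / (2 * (g0 ** 2 + delta ** 2) ** 2 * s0)
    g1 = g0 + (u * dh0 - 1) * dg0                  # one step of the iteration

    f1 = -g1 + u * h(g1)                           # f_k = -g_k + u h_k
    ts = -math.sqrt(1 - delta ** 2)                # t_s(delta)
    ehx = u * (1 + delta ** 2) / 2                 # e_HX(delta, u)
    return 2 * J * (f1 - ts - ehx)
```

As a sanity check, the correlation energy is small and negative at moderate coupling and vanishes as $u\to 0$.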
\section{Introduction} \label{sec:intro} Recently, neural transducer based end-to-end (E2E) models \cite{Graves-RNNSeqTransduction, he2019streaming, attentionisallyouneed, yeh2019transformer, TT, Li2019RNNT, battenberg2017exploring,chiu2018state, Li2020comparison, xiechen, E2EOverview}, such as recurrent neural network transducer (RNN-T) \cite{Graves-RNNSeqTransduction}, transformer-transducer (T-T) \cite{yeh2019transformer, TT} and conformer-transducer (C-T) \cite{gulati2020conformer}, have become the dominant models for automatic speech recognition (ASR) in industry due to their natural streaming property, as well as competitive accuracy with traditional hybrid speech recognition systems \cite{watanabe2017hybrid, sainath2020streaming, Li2020Developing}. However, one of the main challenges for neural transducer models is adaptation using only text data. This is because in neural transducer models there are no separate acoustic and language models as in traditional hybrid models. Although the prediction network could be considered an internal language model (LM) because its input is the previously predicted token, it is not a real LM, since the prediction output needs to be combined with the acoustic encoder in a non-linear way to generate posteriors over the vocabulary augmented with a \textit{blank} token. Adapting the prediction network using text-only data is not as straightforward or effective as adapting the LM in hybrid systems. Paired audio and text data is needed to adapt a neural transducer model; however, collecting labeled audio data is costly in both time and money. Several types of methods have been proposed to address this issue. One is to generate artificial audio for the adaptation text instead of collecting real audio. Audio generation methods can be based on a multi-speaker neural text to speech (TTS) model \cite{Li2020Developing, sim2019personalization, deng2020ttsrnnt, zheng2021ttsasr, ttsjasha} or the spliced data method \cite{spliced}.
The neural transducer model can then be fine-tuned with the artificial paired audio and text data. A major drawback of these methods is their high computational cost. TTS-based methods take a long time to generate audio even with GPU machines, while the spliced data method has a very small cost for generating audio. However, the adaptation step for both methods involves updating part of the encoder, the full prediction network, and the joint network with the RNN-T loss. The result is high computational cost for training, the need for GPUs, and too much delay for scenarios in which rapid adaptation is necessary. Another class of text-only adaptation methods is LM fusion \cite{kannan2018shallowfusion, 2020fusion, 2021fusion, amazonilm, triebiasing}, such as shallow fusion \cite{kannan2018shallowfusion}, where an external LM trained on target-domain text is incorporated during the neural transducer model decoding. However, there is already an internal LM in the neural transducer model, so directly adding an external LM is not mathematically grounded. To solve this issue, the density ratio method \cite{mcdermott2019densityratio}, the hybrid autoregressive transducer model \cite{variani2020hybrid}, and internal LM estimation \cite{meng2021ilme,ibmilm} were proposed to remove the influence of the internal LM contained in the neural transducer model. However, the performance is often sensitive to the interpolation weight of the external LM for different tasks, and it needs to be well tuned on development data to get optimal results \cite{meng202ilmt}. Different from the aforementioned methods, the factorized neural transducer model (FNT) \cite{fnt} modifies the neural transducer architecture by factorizing the blank and vocabulary prediction so that a standalone LM can be used for the vocabulary prediction. In this way, various language model adaptation techniques [32, 33, 34] could be applied to FNT.
However, based on the results in \cite{fnt}, FNT degrades the accuracy on general testing sets compared with the standard neural transducer model, although it significantly improves the accuracy in the new domain after adaptation. Besides, it still needs significant GPU time to fine-tune FNT with text-only data for the adaptation, which may not meet the fast adaptation requirement of some real applications. In this paper, we propose several methods to advance FNT for effective and efficient adaptation with text-only data. These methods include: 1) Adding a Connectionist Temporal Classification (CTC) \cite{Graves-CTCFirst} criterion for the encoder network during training to make it work more like an acoustic model. Then, the combination of the encoder output and the vocabulary predictor output is similar to the combination of acoustic and language models in hybrid models. 2) Adding a Kullback-Leibler (KL) divergence between the outputs of the adapted model and the baseline model to avoid overfitting to the adaptation data. 3) Initializing the vocabulary predictor with a neural LM trained with more text data. 4) Replacing the network fine-tuning with a more efficient adaptation method using n-gram interpolation. Experimental results showed that on general testing sets, these methods help the modified FNT achieve even slightly better accuracy than the baseline neural transducer model. On adaptation sets, the word error rate (WER) after the adaptation of the modified FNT is reduced by 9.48\% relative from the standard FNT model, and by 29.21\% relative from the baseline C-T model. Besides, n-gram interpolation results in much faster adaptation. The rest of this paper is organized as follows: Section \ref{sec:rnnt} introduces the neural transducer model and the FNT model. Section \ref{sec:refinefnt} presents the proposed methods for the modified FNT in detail. Section \ref{sec:exp} shows the experimental results and analysis. Section \ref{sec:conclusion} gives the conclusions.
\section{standard neural transducer and FNT model} \label{sec:rnnt} \subsection{Standard neural transducer} \label{ssec:strnnt} A neural transducer model \cite{Graves-RNNSeqTransduction} consists of encoder, prediction, and joint networks. The encoder network is analogous to the acoustic model in hybrid models; it converts the acoustic feature $x_t$ into a high-level representation $f_t$, where $t$ is the time index. The prediction network works like a neural LM, producing a high-level representation $g_u$ by conditioning on the previous non-blank target $y_{u-1}$ predicted by the RNN-T model, where $u$ is the output label index. The joint network combines the encoder network output $f_t$ and the prediction network output $g_u$ to compute the output probability with \begin{eqnarray} z_{t, u} = W *\text{relu}(f_t+g_u)+b \nonumber \\ P(\hat{y}_{t+1}|x_1^t, y_1^{u}) = \text{softmax}(z_{t, u}) \end{eqnarray} To address the length differences between the acoustic feature sequence $\textbf{x}_1^T$ and the label sequence $\textbf{y}_1^U$, a special blank symbol, $\phi$, is added to the output vocabulary. Therefore the output set is $\{{\phi} \cup \mathcal{V}\}$, where $\mathcal{V}$ is the vocabulary set. \subsection{Factorized neural transducer} \label{ssec:fnt} Two prediction networks are used in FNT \cite{fnt}, as shown in figure \ref{fig:fnt}. One ($Predictor\_b$) is for the prediction of the blank label $\phi$, and the other ($Predictor\_v$) is for vocabulary prediction (rightmost orange part in figure \ref{fig:fnt}). The vocabulary predictor can be considered a standard LM. The combination with the encoder output $f_t$ is different for these two prediction outputs. For the blank prediction, it is the same as in standard neural transducer models. \begin{align} z^{b}_{t, u} = W^b *\text{relu}(f_t+g_u^b)+b^b \end{align} For the vocabulary prediction, the output is first projected to the vocabulary size and converted to the log-probability domain by a log-softmax operation.
After this, it is added to the encoder output. \begin{eqnarray} d_t^v &=& W_{enc}^v * \text{relu}(f_t)+b_{enc}^v \nonumber \\ d_u^v &=& W_{pred}^v* \text{relu}(g_u^v)+b_{pred}^v \nonumber \\ z_u^v &=& \text{log\_softmax}(d_u^v) \nonumber \\ z^{v}_{t, u} &=& d_t^v + z_u^v \label{eqn:fnt} \end{eqnarray} The two combination outputs are concatenated and a softmax is applied to get the final label probability \begin{align} P(\hat{y}_{t+1}|x_1^t, y_1^{u}) = \text{softmax}([z^b_{t, u}; z^{v}_{t,u}] ) \end{align} The loss function of FNT is \begin{equation} \mathcal{J}_f = \mathcal{J}_t - \lambda \log P(\textbf{y}_1^U) \label{eqn:4} \end{equation} where the first term is the standard neural transducer loss and the second term is the LM loss with cross entropy (CE). $\lambda$ is a hyper-parameter to tune the effect of the LM loss. \begin{figure}[htb] \centering \includegraphics[width=5.5cm]{./fnt_7.png} \caption{Flowchart of factorized neural transducer} \label{fig:fnt} \vspace{-0.2cm} \end{figure} \section{Improvement of FNT} \label{sec:refinefnt} In this section, we propose several methods to improve the accuracy and efficiency of FNT. \subsection{Adding CTC criterion in training} \label{ssec:ctc} As shown in section \ref{ssec:fnt}, the encoder output and the vocabulary predictor output are combined by a sum operation. The predictor output is a log probability, but the encoder output is not. According to Bayes' theory, the acoustic and language model scores should be combined by a weighted sum in the log-probability domain. Therefore we refine FNT by converting the encoder output to a log probability with a log-softmax operation. Furthermore, to force the encoder part to act more like an acoustic model, a CTC criterion is added for the encoder output, as shown in the blue frame part in figure \ref{fig:fnt_refine}.
The reason we choose CTC instead of CE is that it is not easy to get sentence-piece-level alignments for the training data, while sentence pieces are commonly used as the output units of neural transducer E2E models. With these changes, the combination of the encoder output and the vocabulary predictor output is shown in the equations below. \begin{eqnarray} z_t^v &=& \text{log\_softmax}(d_t^v) \nonumber \\ z^{v}_{t, u} &=& z_t^v[:-1] + \gamma * z_u^v \label{eq:merge} \label{eqn:ctc} \end{eqnarray} where $\gamma$ is a trainable parameter, which can be taken as the LM weight. One thing that needs to be mentioned is that, after adding CTC, the dimension of $d_t^v$ and $z_t^v$ becomes vocabulary\_size+1 because CTC needs one extra output, ``blank''. Here we put ``blank'' as the last dimension, and it is excluded when $z_t^v$ is added to $z_u^v$. The final loss function can be written as \begin{equation} \mathcal{J}_f = \mathcal{J}_t - \lambda \log P(\textbf{y}_1^U)+\beta \mathcal{J}_{ctc} \label{eqn:loss} \end{equation} where $\mathcal{J}_{ctc}$ is the CTC loss and $\beta$ is a hyper-parameter to be tuned in the experiments. \begin{figure}[htb] \centering \includegraphics[width=8cm]{./fnt_refine_2.png} \caption{BLUE FRAME: adding CTC criterion for FNT training. ORANGE FRAME: adding KL divergence loss for FNT adaptation } \label{fig:fnt_refine} \vspace{-0.2cm} \end{figure} \subsection{Adding KL divergence in adaptation} \label{ssec:kl} To adapt the FNT model with text data, the most straightforward way is to fine-tune the vocabulary predictor on the adaptation text with a cross-entropy loss. But this may degrade model performance on the general domain. To avoid this, a KL divergence between the vocabulary predictor outputs of the adapted model and the baseline model is added during the adaptation, as shown in the orange frame part in figure \ref{fig:fnt_refine}.
The adaptation loss with KL divergence is \begin{equation} \mathcal{J}_{adapt} = \text{CE}(Z_u^v,Y_{adapt}) + \alpha \text{KL}(Z_u^v,Z_u^{'v}) \label{eqn:kl} \end{equation} where $Y_{adapt}$ is the adaptation text data, $Z_u^{v}$ is the log softmax of the adaptation text from the adapted model and $Z_u^{'v}$ is the log softmax of the adaptation text from the baseline model. $\alpha$ is the KL divergence weight to be tuned in the experiments. \subsection{External language model} \label{ssec:lm} Since the vocabulary predictor in FNT is designed to be an LM, we explore the possibility of training it independently on a much larger text corpus than the transcriptions in the FNT training data. The parameters of this pre-trained external LM could be further updated to potentially improve accuracy. In principle, we could choose a variety of architectures for the external LM. In this paper, we limit ourselves to an architecture that is very close to that of the standard prediction network for a fair comparison with the baseline system. The vocabulary of the external LM is the same as that of the FNT system, and it is trained using the conventional cross-entropy loss. \begin{comment} In FNT, the vocabulary predictor is a standalone LM. In theory, it can be replaced by a well trained external LM. In this paper, we investigate whether the model accuracy could be improved with a external LM trained with more text data. We also explore whether the external LM should be fixed or not during the FNT model training. \end{comment} Experimental results in Section \ref{sec:exp} show that an external LM trained with more data improves model accuracy, and updating the external LM parameters during FNT model training further improves the results. \subsection{N-gram interpolation} \label{ssec:ngram} As noted before, fine-tuning the vocabulary predictor is a straightforward adaptation method for FNT.
Although updating the vocabulary predictor is much faster than updating the whole FNT network, it cannot meet the immediate adaptation requirement of some applications. In this paper, we propose to use n-gram interpolation for fast adaptation of FNT. In this method, an n-gram language model is first trained with the adaptation text data. It is then interpolated with the probability output from the vocabulary predictor during decoding. The vocabulary log probability after interpolation with the n-gram probability $P(y_u|y_1^{u-1})_{ngram}$ is calculated as \begin{align} z_u^v = \text{log}((1-w)*P(y_u|y_1^{u-1})_{pred}+w*P(y_u|y_1^{u-1})_{ngram}) \label{eqn:ngram} \end{align} where $P(y_u|y_1^{u-1})_{pred} = \text{softmax}(d_u^v)$ is the label probability output from the vocabulary predictor. Then $z_u^v$ is plugged into Equation \eqref{eq:merge} to calculate $z^{v}_{t, u}$, which is used to generate the final output of FNT. In this method, no neural network training is involved, and n-gram LM training is very fast with the adaptation text data. Experimental results in section \ref{ssec:result_ngram} show it has a much lower computational cost compared to the fine-tuning based method. \section{experiments} \label{sec:exp} In this section, the effectiveness of the proposed methods is evaluated based on the conformer-transducer (C-T) model \cite{gulati2020conformer} for several adaptation tasks with different amounts of adaptation text data. In the baseline C-T model, the encoder network contains 18 conformer layers. The prediction network contains 2 LSTM \cite{hochreiter1997long} layers, with 1024 nodes per layer. The output label size is 4000 sentence pieces. We use the low-latency streaming implementation in \cite{xiechen} with 160 milliseconds (ms) encoder lookahead. The standard FNT model has the same encoder structure and output label inventory as the baseline C-T model. The blank and vocabulary predictors each consist of 2 LSTM layers, also with 1024 nodes per layer.
The acoustic feature is an 80-dimension log Mel filter bank for every 10 ms of speech. The training data contains 30 thousand (K) hours of transcribed Microsoft data, mixed with 8 kHz and 16 kHz sampled data \cite{li2012improving}. All the data are anonymized with personally identifiable information removed. The general testing set covers different application scenarios including dictation, conversation, far-field speech, call center, etc., consisting of a total of 6.4 million (M) words. For the adaptation testing sets, we selected 2 real application tasks with different sizes of adaptation text data, as well as the Librispeech sets for better reference. The data sizes of the testing sets are listed in table \ref{tab:testingdata}. The model training never observes the data from these adaptation tasks. The external LM has the same model structure as the vocabulary predictor in the FNT and was trained with text data containing about 1.5 billion words, which includes the transcriptions of the 30K hours of training data mentioned above for C-T and FNT model training. We first evaluate the FNT model's accuracy on the general testing set with the proposed methods, including adding the CTC criterion and initializing the vocabulary predictor from a well-trained external LM. Then we examine the performance of the above FNT models on the adaptation sets by fine-tuning the vocabulary predictor with text adaptation data. Finally, the n-gram interpolation adaptation method is evaluated based on the best FNT model from the above experiments. \begin{table}[t] \centering \begin{tabular}{c|c|c} \hline testing set & adaptation data & testing data \\ \hline task1 & 6,135 & 6,269 \\ \hline task2 & 193,047 & 21,960 \\ \hline Librispeech & 18,740,565 & 210,246 \\ \hline \end{tabular} \caption{Word count for testing sets.} \vspace{-0.0cm} \label{tab:testingdata} \end{table} \subsection{Results on general testing set} \label{ssec:generalset} All results on the general testing set are given in table \ref{tab:generalset}.
The baseline model is a standard C-T model. The standard FNT model has the same structure as in \cite{fnt}, and it is trained from scratch with the 30K hours of training data. Compared with the baseline C-T model, the standard FNT model shows a 1.29\% relative WER increase. The FNT model is then refined by adding the CTC criterion based on equations \ref{eqn:ctc} and \ref{eqn:loss}. The CTC loss weight $\beta$ is 0.1. We can see that adding CTC decreases the WER from 11.01 to 10.97, but it is still worse than the baseline C-T model. The accuracy is improved further by initializing the vocabulary predictor from an external language model. Two recipes are examined: in one, the external LM is fixed during the FNT model training; in the other, the external LM is updated together with the other parts of the FNT model. Updating the external LM gives the best result, which is even better than the baseline C-T model. \begin{table}[th] \centering \begin{tabular}{l|c} \hline Model & General set \\ \hline Baseline C-T (B0) & 10.87 \\ \hline Standard FNT (F0) & 11.01 \\ \quad+CTC (F1) & 10.97 \\ \quad\quad +ext. LM fix (F2) & 10.89 \\ \quad\quad\quad+ext. LM update (F3) & 10.70 \\ \hline \end{tabular} \caption{WER(\%) on the general testing set.} \vspace{-0.0cm} \label{tab:generalset} \end{table} \subsection{Results on adaptation testing sets} \label{ssec:adaptation} In this section, we first evaluated the impact of KL divergence on the Librispeech set based on the standard FNT model. The results are given in table \ref{tab:kl}. The results show that the adapted model without KL divergence degrades the accuracy on the general testing set considerably. FNT adaptation with KL divergence helps to recover the loss on the general testing set, with only a very small WER increase on the adaptation set compared to the standard FNT adaptation. In the following adaptation experiments, the KL divergence weight $\alpha$ is always set to 0.1.
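As a concrete reference, the KL-regularized adaptation objective of Eq.~\eqref{eqn:kl}, used in these experiments with $\alpha=0.1$, can be sketched in a few lines of numpy. The toy inputs, array shapes, and the direction of the KL term are our assumptions for illustration; the actual implementation operates on the vocabulary predictor's log-softmax outputs.

```python
import numpy as np

def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def adapt_loss(logits_adapt, logits_base, targets, alpha=0.1):
    # CE of the adapted vocabulary predictor on the adaptation text, plus
    # alpha * KL(adapted || frozen baseline) as a regularizer; both terms use
    # the predictors' log-softmax outputs (Z_u^v and Z_u'^v in the text).
    logp = log_softmax(logits_adapt)
    logq = log_softmax(logits_base)
    ce = -logp[np.arange(len(targets)), targets].mean()
    kl = (np.exp(logp) * (logp - logq)).sum(axis=-1).mean()
    return ce + alpha * kl

rng = np.random.default_rng(0)
la = rng.normal(size=(4, 10))   # toy adapted-model logits (4 tokens, vocab 10)
lb = rng.normal(size=(4, 10))   # toy frozen-baseline logits
y = np.array([1, 3, 5, 7])      # toy adaptation-text token ids
```

Since the KL term is non-negative and vanishes when the adapted model matches the baseline, it penalizes only genuine drift away from the general-domain distribution.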
Table \ref{tab:adaptationset} shows the adaptation results for the different FNT models on all adaptation sets. For each task, the results in the ``base'' column are the WER before adaptation, and the results in the ``adapt'' column are the WER after adaptation. Simple average WERs are also reported by averaging the WERs from all three tasks. Comparing the ``base'' results for B0 and F0, we find that the accuracy gap between the baseline C-T model and the standard FNT model on these adaptation sets is much larger than that on the general set, especially for task1 and task2. The possible reason is that the domains in the training data may have some coverage of the general testing set, but they are totally irrelevant to these adaptation sets. With the proposed refinements of FNT, this gap is decreased step by step; the best FNT model, F3, which combines all of the proposed methods, achieves accuracy similar to the baseline C-T model. The same trend can also be observed for the ``adapt'' results. Each method contributes an accuracy improvement to the adapted FNT model. Compared with the adapted standard FNT model (F0), the adapted model F3 reduces the WER on the three adaptation sets by 9.48\% relative on average (from 12.80\% to 11.59\%). And compared with the baseline C-T model, the adapted model F3 gets a 29.21\% relative WER reduction (from 16.37\% to 11.59\%).
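For reference, the n-gram interpolation of Eq.~\eqref{eqn:ngram}, whose results are reported below, reduces at decoding time to a simple probability-domain mixture. In this minimal numpy sketch the n-gram distribution is a random placeholder, not a trained LM, and the vocabulary size is a toy value:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def interpolated_log_prob(d_u_v, p_ngram, w=0.3):
    # z_u^v = log((1 - w) * softmax(d_u^v) + w * P_ngram(y_u | y_1^{u-1}))
    return np.log((1 - w) * softmax(d_u_v) + w * p_ngram)

rng = np.random.default_rng(0)
d = rng.normal(size=6)               # toy vocabulary-predictor projection d_u^v
p_ng = softmax(rng.normal(size=6))   # placeholder for the n-gram distribution
z = interpolated_log_prob(d, p_ng)
```

The mixture of two normalized distributions is itself normalized, so $z_u^v$ can be plugged into Equation \eqref{eq:merge} exactly as the un-interpolated log softmax would be.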
\newcommand{\specialcell}[2][c]{% \begin{tabular}[#1]{@{}c@{}}#2\end{tabular}} \begin{table}[h] \centering \begin{tabular}{l|c|c|c|c|c} \hline \multirow{2}{*}{} & Standard & \multicolumn{4}{c}{KLD weight} \\\cline{3-6} & FNT & 0.0 & 0.1 & 0.2 & 0.3 \\ \hline General set & 10.87 & 12.78 & 11.69 & 11.52 & 11.4 \\ \hline Librispeech & 8.32 & 7.17 & 7.17 & 7.25 & 7.33 \\ \hline \end{tabular} \caption{WER(\%) for Librispeech with different KLD weights.} \vspace{-0.0cm} \label{tab:kl} \end{table} \begin{table}[th] \centering \resizebox{0.99\columnwidth}{!}{ \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Task1} & \multicolumn{2}{|c|}{Task2} & \multicolumn{2}{|c|}{Librispeech} & \multicolumn{2}{c}{Simple average} \\\cline{2-9} & Base&Adapt & Base&Adapt & Base&Adapt & Base&Adapt \\ \hline B0 & 16.59 & - & 24.19 & - & 8.32 & - & 16.37 & - \\ \hline F0 & 17.45 & 11.8 & 25.21 & 19.43 & 8.44 & 7.17 & 17.03 & 12.80 \\ F1 & 17.21 & 10.89 & 25.05 & 19.04 & 8.43 & 7.07 & 16.90 & 12.33 \\ F2 & 17.63 & 10.8 & 24.96 & 18.55 & 8.41 & 7.16 & 17.00 & 12.17 \\ F3 & 16.38 & 10.19 & 24.44 & 17.7 & 8.33 & 6.87 & 16.38 & 11.59 \\ F3+n-gram & 16.38 & 11.49 & 24.44 & 19.89 & 8.33 & 7.39 & 16.38 & 12.92 \\ \hline \end{tabular} } \caption{WER(\%) on adaptation testing sets. } \vspace{-0.0cm} \label{tab:adaptationset} \end{table} \subsection{Results of n-gram interpolation} \label{ssec:result_ngram} In this section, we evaluated the n-gram interpolation performance based on the best FNT model (F3) for the three adaptation tasks. For each task, a 5-gram LM is trained with the adaptation text. To make the interpolation simple and efficient, sentence pieces instead of words are used as the basic units for the 5-gram LM. The interpolation weight $w$ is always set to 0.3. The results are shown in the last row of table \ref{tab:adaptationset}.
Compared with adaptation by fine-tuning the vocabulary predictor, n-gram interpolation yields a slightly higher WER, but the relative WER reduction over the baseline C-T model is still substantial at 21.04\% (from 16.37\% to 12.92\%). More importantly, the adaptation speed improves dramatically. Experiments show that with the fine-tuning method, adaptation costs about 10 seconds per 1,000 words on a GPU, and the cost is prohibitive on a CPU. In contrast, n-gram LM training needs only about 0.002 seconds per 1,000 words on a CPU. This is very useful for application scenarios that need immediate adaptation. \section{conclusions} \label{sec:conclusion} In this paper, several methods are proposed to improve the accuracy and efficiency of FNT adaptation with text-only data: 1) during FNT model training, adding a CTC criterion to make the encoder act more like an acoustic model, and initializing the vocabulary predictor with a well-trained external LM to exploit more text data; 2) during FNT model adaptation, adding a KL divergence term to avoid overfitting to the adaptation data; 3) using n-gram interpolation with the vocabulary-prediction LM module inside FNT, instead of fine-tuning that module, to improve adaptation speed. The experimental results show that, compared with the standard FNT, the proposed methods achieve better accuracy on the general testing set and decrease the adaptation WER by 9.48\% relative. In total, compared with the baseline C-T model, the adaptation WER is decreased by 29.21\% relative. Besides, n-gram interpolation achieves much faster adaptation than the fine-tuning method, enabling scenarios that require immediate adaptation. \bibliographystyle{IEEEbib}
\section*{Author Summary} The key to the genesis of migraine with aura is a traveling wave phenomenon called cortical spreading depression (SD). Migraine is characterized by recurrent episodes; the aura phase usually lasts only about 30$\,$min. Thus, SD is a transient state. During its course, SD massively perturbs the brain's ion homoeostasis. We resolve the puzzling problem of why SD does not engulf all of the densely packed excitable neurons by suggesting a well-established pattern formation mechanism of long-range inhibitory feedback. Furthermore, we use cortical feature maps to create plausible initial conditions as perturbations of the homogeneous state. The statistics of occurrences of the different classes have the potential to reproduce epidemiological statistics of the different diagnostic forms of migraine, such as migraine with or without aura, and provide simple answers to some currently very controversially discussed questions in migraine research. \section*{Introduction} The most fundamental example of transient dynamics is undoubtedly the phenomenon of excitability, that is, all-or-none behavior. Shortly after the transient response properties of excitable membranes were classified into two classes\cite{HOD48}, the ground-breaking work by Hodgkin and Huxley\cite{HOD52} explained in a detailed mathematical model how excitability emerges from the electrophysiological properties of such membranes. Two features are central, and they are by no means exclusive to biological membranes but shared by all excitable elements. Firstly, the inevitable threshold in any all-or-none behavior requires nonlinear dynamics. Secondly, the transient response of the system to a super-threshold stimulation eventually has to lead back to a globally stable steady state after some large phase space excursion.
This indicates global dynamics, that is, dynamics involving not only fixed points and their local bifurcations but larger invariant sets, for instance periodic orbits that collide with fixed points. An excitable element is in some sense the washed-up brother of the relaxation oscillator: when the threshold vanishes, a single excitable element usually becomes a simpler behaved---and much longer known---relaxation oscillator\cite{POL26}. In this study, we propose a model for wave patterns with a characteristic shape, size, and duration. These waves are transient responses to confined, spatially structured perturbations of the homogeneous steady state. The homogeneous steady state is globally stable in our proposed reaction-diffusion model because we also introduce an effective inhibitory mean field feedback control. This leads to a new type of local excitability in a spatially extended medium involving disappearing traveling wave solutions as larger invariant sets. Both the model and the initial conditions are motivated by the pathophysiology of migraine and clinical observations \cite{DAH00a,HAD01,DAH08d}, and the results are applied to some currently controversial topics \cite{AYA10,EIK10,AKE11}. We will briefly introduce the concepts of excitable elements and excitable media in two-variable reaction-diffusion systems, and also the idea of an additional long-range inhibitory feedback that is studied in various other systems outside the neurosciences and also in neural field models. While we also briefly introduce migraine, the view of migraine as a dynamical disease is elaborated further in the Discussion. The original conductance-based membrane model by Hodgkin and Huxley, and the more refined versions to date, contain many variables, but fortunately such complexity is not essential for excitable elements.
In fact, it turned out that the two classes of excitability are actually amenable to direct analysis in a two-dimensional phase plane by identifying fast and slow processes in the conductance-based model and grouping these into dynamics of just two lumped variables\cite{BON53,FIT69}. Using such a geometrical approach and partly analytical theory, the original empirical classification of excitability was further pursued with bifurcation analysis\cite{RIN89}, explaining class I by identifying its threshold as the stable manifold of a saddle point on an invariant cycle, and that of class II as a trajectory from which nearby trajectories diverge sharply (called a canard trajectory). Extensions of these principal mechanisms involve codimension 2 bifurcations and lead also to bursting in three-variable models, which have been investigated in great detail\cite{IZH00a}. However, the two-variable models of a fast activator and a slow inhibitor and their phase portraits of class I and II became qualitative prototypes for excitable systems in various biological\cite{KEE98}, chemical\cite{KAP95a} and physical contexts\cite{SCH87}. Distinct from these excitable elements are spatially extended systems, called excitable media. (We use the word ``system'' as a general term, ``medium'' only for spatially extended systems, and ``element'' for point-like systems.) Already the original work by Hodgkin and Huxley\cite{HOD52} described extended tube-like membranes (axons) and introduced the cable equation as a parabolic partial differential equation, which is in the same class as the diffusion equation. Even in reaction-diffusion media with infinite-dimensional phase space, we can again apply geometrical approaches, simply because excitable media are defined---in contrast to excitable elements---not by transient dynamics but by traveling wave solutions. The quiescent state is the non-excited homogeneous steady state.
And like the quiescent state, excited states of a medium are usually stationary states in some appropriate comoving frame with $\xi=x-ct$. Furthermore, the threshold is related to an unstable stationary state, the critical nucleation solution, usually in another comoving frame including $c=0$. The existence of the nucleation solution is a simple consequence of multistability, see Fig.~\ref{fig:phase_space_sketch}A, but note that even monostable excitable elements have similar unstable stationary states in class I. \begin{figure}[t] \includegraphics[width=0.95\columnwidth]{phase_space_sketch} \caption{\label{fig:phase_space_sketch} Schematic sketch of the phase space of (A) the uncontrolled system, (B) the system with mean field control adjusted such that the nucleation solution is stabilized and (C) the system with mean field control and control parameters such that the ghost of the saddle-node bifurcation is still influencing the dynamics.} \end{figure} Central to our approach is a distinct subexcitable medium in which localized traveling waves occur only transiently. Reaction-diffusion waves would engulf all of the medium if formed in a two-variable system with only one activator and one inhibitor with the system's parameters in the appropriate regime. In contrast, localized traveling waves indicate a demand-controlled excitability. Similar ideas to obtain localized traveling, though not transient, waves have been introduced in various contexts, for instance an integral negative feedback or a third, fast-diffusing inhibitory component for moving spots in semiconductor materials, gas discharge phenomena, and chemical systems\cite{OHT89a,KRI94,SCH97a,SAK02}. Furthermore, in neural field models\cite{BRE12}, localized two-dimensional bumps are studied \cite{LU11a,BRE11} in integrodifferential equations (without diffusion) in the context, for example, of memory formation\cite{KIL13}.
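The all-or-none behavior of a single two-variable excitable element can be illustrated with a minimal numerical sketch. The kinetics used below, $\dot u = u - u^3/3 - v$, $\dot v = \varepsilon(u+\beta)$, are our assumption of a canonical FitzHugh-Nagumo form with a cubic activator rate and an inhibitor rate linear in $u$, not a verbatim reproduction of the model equations:

```python
def fhn_response(kick, beta=1.3, eps=0.04, dt=0.01, t_max=200.0):
    """Integrate a single FitzHugh-Nagumo element (explicit Euler) after an
    instantaneous perturbation `kick` of the activator away from rest.
    Returns the maximal activator value reached (all-or-none indicator)."""
    u = -beta                  # resting state lies on the v-nullcline u = -beta
    v = u - u**3 / 3.0         # ... and on the u-nullcline v = u - u^3/3
    u += kick                  # sub- or super-threshold stimulation
    u_max = u
    for _ in range(int(t_max / dt)):
        du = u - u**3 / 3.0 - v
        dv = eps * (u + beta)
        u += dt * du
        v += dt * dv
        u_max = max(u_max, u)
    return u_max

# a sub-threshold kick decays, a super-threshold kick triggers a full excursion
small = fhn_response(0.3)   # activator stays negative
large = fhn_response(1.0)   # large phase space excursion toward the excited branch
```

The sharp separation between the two outcomes is the threshold behavior discussed above; for $\beta > 1$ the rest state is the only attractor, so both responses are transient.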
Localized structures have also been discussed in the context of cortical spreading depression (SD) in migraine before, in particular a model with narrowly tuned parameters that shows transient waves \cite{DAH03a,DAH07a,DAH08d} and a model with mean field feedback control that allows for localized waves \cite{DAH09a}. But it is for the first time now that a model is presented in which wave phenomena occur that are both localized and transient, so that a variety of new questions that are controversially discussed in migraine research\cite{AYA10,AKE11,FIO11b,LEV12} can be addressed. Migraine is characterized by recurrent episodes of head pain, often throbbing and unilateral. In migraine without aura (MO), attacks are usually associated with nausea, vomiting, or sensitivity to light, sound, or movement \cite{GOA02}. Attacks of migraine with aura (MA) involve, in addition (and only rarely exclusively), neurologic symptoms (aura) that are associated with waves of cortical SD \cite{OLE81,HAD01}. SD is a reaction-diffusion process, although, clearly, the originally proposed mechanism of a simple one-variable model for SD front propagation triggered by diffusion of elevated extracellular potassium with bistable kinetics, known to date as the Hodgkin-Huxley-Grafstein model\cite{GRA63}, does not capture the complex chain of involved reactions\cite{SOM01,STR05,HER05}. However, migraine aura symptoms manifest themselves on a macroscopic scale over several minutes up to one hour and extend over several centimeters when mapped onto the corresponding cortical areas\cite{HAD01,VIN07}, see Fig.~\ref{fig:had01_fig4}. In this study, we are interested in these clinically relevant properties of migraine with aura. To this end, we exploit the concept of nucleation, growth, and subsequent shrinking of SD in a canonical reaction-diffusion model of activator-inhibitor type with mean field feedback control.
\section*{Results} \label{sec:results} We first present a model constructed by adding a mean field inhibitory feedback to a well-known reaction-diffusion system; we then present the statistical properties of the transient behavior this model exhibits. \begin{figure*} \begin{center} \fbox{\includegraphics[width=\textwidth]{had01fig4Copyright}} \end{center} \caption{\label{fig:had01_fig4} Source localization of the magnetic resonance (MR) data signal of SD (from \cite{HAD01}). Color code: time from onset; locations showing the first MR signals of SD are coded in red, later times are coded by green and blue (see color scale to the right). Signals from the first 975 seconds were not recorded because the migraine attack was triggered outside the MR imaging facility. (A) The data on the folded right posterior pole hemispheric cortex; (B) the same data on the inflated cortical surface; (C and D) the same data shown on the entire hemisphere from a posterior-medial view (oblique forward facing), folded and inflated, respectively. As described in the original study \cite{HAD01}, MR data were not acquired from the extreme posterior tip of the occipital pole (rearmost portion). (E) A fully flattened view of the cortical surface. The aura-related changes are localized wave segments. Note that the flattened cortex was cut along the steep calcarine sulcus to avoid large area distortions induced by the flattening process. The colored border to the left is the cut edge, which should be considered connected such that the colors match up as seen in (B and D). (Copyright permission from authors granted, from PNAS is requested.)
} \end{figure*} \subsection*{Before infinity and beyond by mean field inhibitory feedback control} The diversity of the behavior of traveling waves in two spatial dimensions was studied in canonical models (see Discussion) depending on the two generic parameters $\beta$ and $\varepsilon$ in Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}), which determine the parameter plane of excitability \cite{WIN91}. Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}) determine an excitable medium without feedback control. In those media, patterns of discontinuous (open-ended) spiral-shaped waves are used to probe excitability, and these patterns are closely related to the discontinuous, localized transient waves we propose in our model. We make use of the fact that at a low critical excitability, called the rotor boundary $\partial R_{\infty}$, spiral waves do not curl in anymore but become half plane waves\cite{MIK91,HAK99}. Beyond the rotor boundary lies the subexcitable regime, in which discontinuous waves start to retract at their open ends and any discontinuous wave is transient and will eventually disappear. The border $\partial R_{\infty}$ marks a saddle-node bifurcation at which discontinuous waves collide with their corresponding nucleation solution. This leads to the key idea of our model. A linear mean field feedback control moves this saddle-node bifurcation towards distinct localized wave segments with a characteristic form (shape, size), and beyond this bifurcation these waves become transient objects (see Fig.~\ref{fig:phase_space_sketch}). Before we introduce the effect of mean field feedback control, we have to consider the behavior of continuous waves (closed wave fronts without open ends) when the excitability is decreased. This will be important if we want to understand the fate of any solution, discontinuous or not, under mean field feedback control.
Unbroken plane waves propagate persistently even if the parameters are chosen in the subexcitable regime, until the propagation boundary $\partial P$ is reached. At this border, the medium's excitability becomes too weak for continuous plane waves to propagate persistently. The border $\partial P$ in parameter space again indicates a saddle-node bifurcation, at which a planar traveling wave solution collides with its corresponding nucleation solution. Note that the planar wave is essentially a pulse solution in 1D, and the nucleation solution in 1D is called the slow wave \cite{KRU97}. In Fig.~\ref{fig:control_plane}A, both the rotor boundary $\partial R_{\infty}$ and the propagation boundary $\partial P$ are shown in a bifurcation diagram for the excitable medium described by Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}). We choose $\beta$ as the bifurcation parameter and follow (see Methods) the branch of the unstable nucleation solution (NS), whose stable manifold separates the basins of attraction of the homogeneous state and a spiral wave (with two counter-rotating open ends). The unstable manifold of NS consists of the two heteroclinic connections, one to the stable homogeneous state and the other to the traveling wave solution (see Fig.\,\ref{fig:phase_space_sketch}). The order parameter on the ordinate in Fig.~\ref{fig:control_plane} is the surface area $S$ inside the isoclines at $u=0$ of the traveling wave solutions, see Eq.\,(\ref{eq:S}). We call $S$ the \emph{wave size}. \begin{figure*}[t] \begin{center} \fbox{\includegraphics[width=1.0\textwidth]{control_plane}} \end{center} \caption{\label{fig:control_plane}{\bf (left) The $S$ - $\beta$ plane} with nucleation solution NS, propagation boundary $\partial P$, rotor boundary $\partial R_{\infty}$ and control lines for the used values of $\beta_0$.
{\bf (right) The $S$ - $\beta_0$ plane} with the same quantities.} \end{figure*} The mean field control that we introduce by Eqs.~(\ref{eq:S})-(\ref{eq:control}) establishes a linear feedback signal of the wave size $S$ to the threshold $\beta$. With this linear relation, we introduce two new parameters, the coupling constant $K$ and $\beta_0$, the threshold parameter for the medium without an excited state ($S=0$). Note that the parameter $\beta_0$ can also be seen as the sum of two threshold values, the former $\beta$ in Eq.\,(\ref{eq:fhn2}) and an offset coming from the new control scheme. While the control introduces the two new parameters $\beta_0$ and $K$, at the same time $\beta$ becomes dependent upon the control, so that we have a total of three parameters. In this study, we kept $K$ fixed at $K\!=\!0.003$ and varied $\beta_0$, with a particular focus on the statistics for $\beta_0\in\{1.32, 1.33, 1.34\}$. We chose $\beta_0$ as the new bifurcation parameter in the bifurcation diagram for the full reaction-diffusion model with mean field coupling described by Eqs.\,(\ref{eq:fhn1})-(\ref{eq:control}), see Fig.~\ref{fig:control_plane}B. This diagram is a sheared version of the one without mean field coupling in Fig.~\ref{fig:control_plane}A. While it is a trivial fact that the linear relation in Eq.~(\ref{eq:control}) describes an affine shear of the axes $(\beta,S)$ of the bifurcation diagram in A to the axes $(\beta_0,S)$ in B, the fact that the branch of the NS solution can be mapped this way is not. Firstly, this relies on the way we introduce the feedback term. It just adds a constant value to the old bifurcation parameter $\beta$ if the solution under consideration is stationary. Therefore, any stationary solution must exist in both diagrams, the branches being just sheared versions of each other. The same holds true for traveling wave solutions that are stationary in some appropriate comoving frame $\xi=x-ct$ with speed $c$.
However, not much can be said about the stability of such solutions when we introduce the mean field feedback term. \begin{figure}[b] \begin{center} \fbox{\includegraphics[width=\columnwidth]{example_bestia}} \end{center} \caption{ An example of a transient solution. Initial conditions, i.e., activator concentration $u$ at $0\,s$ (upper left); snapshots of the activator concentration $u$ after $30\,s$ (upper right) and after $180\,s$ (lower right). Time of first passing through the threshold value $u_0=0$ from below, i.e., passage of the wave front (lower left). \label{fig:example_bestia}} \end{figure} The branch of the formerly unstable nucleation solution NS (Fig.~\ref{fig:control_plane}A) folds in Fig.~\ref{fig:control_plane}B such that two solutions coexist for a given value of $\beta_0$ until they collide and annihilate each other at a finite value of $S\approx5.5$ for $K\!=\!0.003$. For the fixed value of $K\!=\!0.003$, the upper branch is a stable traveling wave solution in the shape of a wave segment, while the lower branch is the corresponding nucleation solution of these wave segments, as schematically shown in Fig.~\ref{fig:phase_space_sketch}B. The fact that the upper branch is stable was confirmed by numerical simulations. A larger $K$, that is, a less steep control line in Fig.~\ref{fig:control_plane}A, can be seen as a ``harder'' control, because a given small change in $S$ leads to larger variations in the effective parameter $\beta$. As a consequence, it is difficult to continue the lower part of the branch, corresponding to small traveling wave segments, by means of this control in numerical simulations. The choice of a parameter regime for this model that shows transient localized waves and is globally stable with the homogeneous state as the only attractor is now straightforward. Transient localized waves occur due to a bottleneck---or ghost behavior---after the saddle-node bifurcation.
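The control mechanism can be sketched numerically. The fragment below is a one-dimensional caricature of the controlled model: FitzHugh-Nagumo kinetics (our assumed canonical form, $u_t = u - u^3/3 - v + D u_{xx}$, $v_t = \varepsilon(u+\beta)$) together with the linear mean field control $\beta = \beta_0 + K S$, where $S$ is the size of the excited region $\{u>0\}$. The paper's simulations are two-dimensional; 1-D with periodic boundaries is used here only to keep the sketch short:

```python
import numpy as np

def simulate_fhn_meanfield(beta0=1.32, K=0.003, eps=0.04, D=1.0,
                           L=100.0, N=256, dt=0.005, T=50.0):
    """Explicit-Euler 1-D sketch of the reaction-diffusion model with
    linear mean field feedback beta(t) = beta0 + K * S(t).
    Returns the time series of the wave size S and the effective beta."""
    dx = L / N
    u = np.full(N, -beta0)            # homogeneous steady state u* = -beta0
    v = u - u**3 / 3.0                # ... on the u-nullcline
    u[N // 2 - 10 : N // 2 + 10] = 1.5   # localized super-threshold perturbation
    sizes, betas = [], []
    for _ in range(int(T / dt)):
        S = dx * np.count_nonzero(u > 0.0)   # wave "size": excited length
        beta = beta0 + K * S                  # linear mean field feedback
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # periodic BCs
        u = u + dt * (u - u**3 / 3.0 - v + D * lap)
        v = v + dt * eps * (u + beta)
        sizes.append(S)
        betas.append(beta)
    return np.array(sizes), np.array(betas)
```

Since $S \ge 0$, the effective threshold never drops below $\beta_0$; a growing excited region raises the threshold, which is the demand-controlled, inhibitory character of the feedback.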
\subsection*{Statistical properties} \label{sec:statistics} To examine the typical transient patterns that the system generates, we want to know how the system responds to arbitrary initial conditions, with the notable constraint that the system should initially be in the homogeneous steady state almost everywhere and the arbitrary perturbations from it are localized. As it is not possible to formulate an analytical solution to the equations for arbitrary initial conditions, the idea is to simulate the dynamics for (many) different initial conditions. Ideally, these initial conditions should be ``equally spaced'' in phase space, in order to obtain relevant statistics about the different evolution possibilities. The problem that arises at this point is that an initial condition of this system not only lives in an infinite-dimensional space (an initial condition is given by two $C^2$-functions $\mathbb{R}^2 \rightarrow \mathbb{R}$) but, because of the nonlinearity of the equations, the set of all solutions is not even a vector space. To our knowledge, there is no helpful mathematical structure that could guide us in choosing our initial conditions. To attack this problem, we take a set of patterns which are parameterized by a finite number of parameters and scan through these parameters. For details on the patterns see Methods. Of course, in characterizing the solutions, the same problem arises, and appropriate characteristic parameters for the solutions have to be defined. \begin{figure} \begin{center} \fbox{\includegraphics[width=\columnwidth]{ID4}} \end{center} \caption{ {\bf Distribution of solutions} for the control close to $\partial R_{\infty}$. \label{fig:distr_near}} \end{figure} \begin{figure} \begin{center} \fbox{\includegraphics[width=\columnwidth]{ID6}} \end{center} \caption{ {\bf Distribution of solutions} for the control line at an intermediate distance to $\partial R_{\infty}$.
\label{fig:distr_middle}} \end{figure} \begin{figure} \begin{center} \fbox{\includegraphics[width=\columnwidth]{ID5}} \end{center} \caption{ {\bf Distribution of solutions} for the control far away from $\partial R_{\infty}$. \label{fig:distr_far}} \end{figure} \begin{figure*} \begin{center} \fbox{\includegraphics[width=\textwidth]{cumulative_statistics}} \end{center} \caption{ {\bf Cumulative distribution functions} for the different classification parameters.\label{fig:cum_distr}} \end{figure*} To explain the three parameters we have chosen to characterize the solutions, and why they suit this problem, it is helpful to have a look at the lower left part of Fig.~\ref{fig:example_bestia}, in which an example solution is displayed. The first parameter we choose is the maximal area in which such a solution has an activator concentration above a certain threshold level at one instant of time, termed the maximal instantaneous area (MIA). The threshold level is taken to be $u=0$; although this is the same threshold as used to define $S$, this is a matter of convenience rather than necessity. The second parameter is the total area that has experienced an activator concentration above this level at some time during the course of the solution, termed the total affected area (TAA). The third parameter is the time during which the area of activator concentration above threshold is non-zero, termed the excitation duration (ED). Of course, the exact values of all these parameters for one single solution depend on the choice of threshold. For one, the threshold value has to be chosen such that after the activator concentration has fallen below it, no secondary excitation will be generated. The example solution depicted in Fig.~\ref{fig:example_bestia} is a comparatively long-lived solution.
It starts out very symmetrically (circularly) shaped; at one instant of time it breaks open into a discontinuous wave, and the front develops a shape similar to that of a particle-like wave, but because of the chosen control parameters it shrinks in time and vanishes in the end. Because a comparatively large area is affected at the point when the circle breaks open, it takes some time until the solution vanishes, and the resulting TAA is relatively large. So this example solution has a large ED, large MIA and large TAA. If the circle had not broken open at all, the control would have made the threshold value very large and the solution would have collapsed very quickly because of the propagation boundary $\partial P$, such that the ED would have been short and the TAA small, whereas the MIA would have been large. Other prototypical courses of solutions take place, for instance, when the initial conditions affect the activator over a larger area but only in the middle of that area is the value high enough to start a solution. In the surrounding area, the activator level is not high enough for that, but the increased activator concentration leads to a rise in inhibitor concentration by the time the front reaches those parts; as a consequence, the solution vanishes early, with small ED, small TAA and small MIA. We did the simulation for three different adjustments of the control force, successively going farther and farther away from the bifurcation point. Each of these simulations was started using approximately 8000 initial conditions generated in a manner that is described in Methods. Each of these initial conditions resulted in a solution that was classified according to the three parameters mentioned above. The solutions that did not result in any excitation at all (ED=MIA=TAA=0) were discarded. The density plots according to the classification parameters are shown in Figs.~\ref{fig:distr_near},~\ref{fig:distr_middle},~\ref{fig:distr_far}.
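The three classification parameters can be computed directly from a thresholded space-time array of the activator; a minimal sketch (array layout and helper name are ours):

```python
import numpy as np

def classify_solution(u, dt, dx, threshold=0.0):
    """Compute the three classification parameters of a solution u(t, x, y):
    MIA: maximal instantaneous area above threshold,
    TAA: total area that was above threshold at any time,
    ED : duration during which the above-threshold area is non-zero.
    `u` is a (time, nx, ny) array; dt and dx are the grid spacings (dy = dx)."""
    above = u > threshold                        # boolean (time, nx, ny)
    inst_area = above.sum(axis=(1, 2)) * dx**2   # excited area at each instant
    mia = float(inst_area.max())
    taa = float(above.any(axis=0).sum() * dx**2)  # union over time
    ed = float(np.count_nonzero(inst_area > 0) * dt)
    return mia, taa, ed
```

The union over time (TAA) can exceed the maximum at any instant (MIA) exactly when the excited region moves, which is why the two parameters separate the clusters discussed below.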
First of all, though the distribution of the solutions varies significantly, the number of solutions that represent an excitation hardly varies at all (4171 for small, 4183 for intermediate, 4182 for large distance from the bifurcation point); the symmetric difference between the sets of initial conditions that lead to an excitation contains between 5 and 19 solutions. From this we can also deduce that the set of initial conditions that lead to an excitation does not significantly depend on the choice of control parameters. When looking at Fig.~\ref{fig:distr_near}, one notices a clustering of the solutions in certain regions of the classification parameters. In the section that depicts the TAA against the MIA, we notice three coarse clusters: cluster I, the largest, with high MIA and comparatively low TAA; cluster II, less populated, with low MIA and low TAA; and cluster III, very sparsely populated, with intermediate MIA and high TAA. The boundaries between these clusters are not very sharp. One could think that a solution that affects an overall large area (high TAA) will also affect a large area at one instant of time (high MIA). From looking at the mentioned clusters, one sees that this is not the case: the solutions with the highest MIA all have comparatively low TAA (clusters I and II), and the ones that have a high TAA only achieve an intermediate MIA (cluster III). A partial explanation for this can be read off from the depiction of TAA against ED. All solutions that have a large TAA also have a large ED, i.e., cluster III is distinct in this plane as well. More than that, the dependence seems to be almost linear. This is reminiscent of the localized particle-like wave solutions. For these, the affected area grows linearly in time because the area that these solutions occupy at one instant of time is constant.
The two clusters I and II that we observed merge into one in this plane of projection because they differ only very little in ED. This can also be noted when comparing the planes MIA vs.\ TAA and MIA vs.\ ED; here, too, cluster III with high TAA translates to a cluster with high ED, and cluster I with high MIA and comparatively low TAA moves closer to cluster II with both low ED and MIA. When varying the $\beta_0$ parameter of the control force, the distribution of solutions in MIA-TAA-ED-space changes drastically. Upon raising the $\beta_0$ parameter from $\beta_0=1.32$ over $\beta_0=1.33$ to $\beta_0=1.34$, the system is put more and more into the subexcitable regime and the solutions are less and less affected by the ghost of the saddle-node bifurcation, see Fig.~\ref{fig:control_plane}. This is apparent in the fact that the cluster with high TAA / high ED becomes less pronounced and vanishes almost completely for $\beta_0=1.34$. This can be understood as an interplay between the mean value of MIA in cluster III at about 25 and $S$ at the propagation boundary (at $\partial P$, $S\approx24$, $S\approx20.75$, and $S\approx17.5$ for $\beta_0=1.32$, $\beta_0=1.33$, and $\beta_0=1.34$, respectively). For the control line farthest away from the saddle-node bifurcation ($\beta_0=1.34$), $\partial P$ lies below even the smallest values of MIA in cluster III. Note that the value of $S$ at the ghost is about 6, well below the propagation boundary. The other two clusters also merge; though there still exist solutions with high and with low MIA, the transition is much fuzzier than before. In Figs.~\ref{fig:distr_near},~\ref{fig:distr_middle}~and~\ref{fig:distr_far}, we have included a little `bestiary' to illustrate the typical courses of solutions in the respective clusters and their change upon varying the parameter $\beta_0$; the initial conditions for solutions 1-4 in these figures are always the same.
From this arbitrarily chosen selection, we see that the MIA of each solution hardly changes between the $\beta_0$ values, whereas the changes of TAA and ED always go hand in hand and---depending on the cluster---can be up to fourfold for the chosen range of $\beta_0$. One could argue that the formation of clusters is an artefact of the choice of initial conditions. There is no simple answer to this. As mentioned, it is not possible to examine the complete set of initial conditions. Neither does this set carry a helpful structure which would allow a sensible `equidistant' sampling. This is the reason why we made the mentioned choice of initial conditions. For testing purposes, we also tried different schemes for the generation of initial conditions and found qualitatively the same distribution of clusters. In Fig.~\ref{fig:cum_distr}, we have plotted the cumulative distribution functions for the three classification parameters and the three choices of mean field control. From this picture we see that the distribution of the MIA is hardly influenced by the choice of control. This is very different for TAA and ED. For the TAA, for example, there are values (around 75) where for one choice of control the majority of solutions is below and for another choice the majority is above. For example, the fraction of values below TAA=80 is 0.995 for $\beta_0=1.34$ and 0.216 for $\beta_0=1.32$. Also, we see that the cumulative distribution function for the TAA converges to 1 much more slowly the closer the control is to the saddle-node bifurcation. This means that more solutions with high TAA exist for these choices of control. \section*{Discussion} \label{sec:discussion} In this section we discuss three subjects related to the intended application to migraine pathophysiology.
Firstly, the possible congruence between the prevalence of migraine subforms and the statistical properties of the wave patterns we observed; secondly, the possible physiological origin of the inhibitory feedback control; and thirdly, novel therapeutic approaches. We start with a discussion of the approach of using a canonical model. \subsection*{Canonical model and generic parameters for weakly excitable media} More realistic models of SD are given by conductance-based ion models of SD\cite{KAG00,SHA01,MIU07} with up to 29 dynamic variables. The fast inhibitory feedback that we suggest can, in addition, be modeled by neural field models\cite{BRE12}. Such large-scale models of brain structure, including lateral connectivity, are available but still require enormous computer capabilities. We argue here in which sense our model is canonical for the problem we attack. Generally speaking, an excitable medium is a spatially extended system with a stable homogeneous steady state being the quiescent state and one or many excited states that develop after a sufficient perturbation from the quiescent state (Fig.~\ref{fig:phase_space_sketch}A). The excited states are traveling wave solutions that propagate with a stable profile of permanent shape (possibly with some temporal modulation, such as breathing or meandering). To study generic features of an excitable medium, simulations are often carried out in the reaction-diffusion system given by Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}), the popular FitzHugh-Nagumo kinetics.
Originally, the FitzHugh-Nagumo kinetics were a caricature of the electrophysiological properties of excitable membranes \cite{FIT61,NAG62}, but these equations with $D=0$ became a canonical model of {\em local} excitability of type II (based on a Hopf bifurcation, either supercritical with a subsequent extremely fast transition to a large-amplitude limit cycle, named canard explosion\cite{WEC05}, or subcritical \cite{ERM98}), and also, for $D\neq0$, of {\em spatial} excitability \cite{WIN91}, sometimes including diffusion in the second inhibitory species, which we do not consider here. Because we are considering transient behavior originating from a high-threshold regime (towards weak excitability), the classification of local excitability into types I and II (based on the transition at vanishing threshold, i.e., into the oscillatory regime) is not relevant. Furthermore, it is not clear whether this classification carries over in a meaningful way to the dynamics of spatially extended systems. We consider the set of Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}) as canonical for two reasons. First, because the activator Eq.\,(\ref{eq:fhn1}) has the simplest polynomial form of bistability. Note that Eq.\,(\ref{eq:fhn1}) was for this reason originally suggested by Hodgkin and Huxley as the first mathematical model of the potassium dynamics in SD. It was published by Grafstein, who also provided experimental data supporting such a simple reaction-diffusion scheme for the front dynamics \cite{GRA63}. Second, the inhibitor Eq.\,(\ref{eq:fhn2}) has a linear rate function; in fact, the rate function depends only on the activator $u$. This is the simplest inhibitor dynamics needed for pulse propagation. By neglecting an additional linear term $-\gamma v$ in the inhibitor rate function, we limit the origin of excitability to the case of a supercritical Hopf bifurcation with subsequent canard explosion and avoid the bistable regime that exists in the subcritical case.
The subcritical Hopf bifurcation occurs only in a narrow regime when $\gamma$ is close to 1 and $\beta$ close to 0. We have tested some simulations with $\gamma=0.5$ with similar results. As a consequence of our assumptions about the model being in this canonical form, only two parameters exist: $\beta$, which is associated with the threshold, and $\varepsilon$, the time-scale separation of activator and inhibitor dynamics. Of course, the choice of parameters can be quite different; a common choice is $\alpha$ in the cubic rate function $f(u)=u(u-\alpha)(1-u)$, but there are only two free parameters or two equivalent groups of parameters. So there are the same bifurcations in the parameter planes $(\varepsilon,\beta)$ or $(\varepsilon,\alpha)$, but mapping the dynamics between equivalent groups of parameters might involve changes in time, space and concentration scales. In particular, the question of how the incidence of MA is reflected in the distance to the saddle-node bifurcation involves a measure on the parameter space, which we have suggested obtaining from pharmacokinetic-pharmacodynamic models\cite{DAH07a}. \subsection*{Application to migraine pathophysiology} The cause of the neurological symptoms in migraine with aura (MA) is the phenomenon of cortical spreading depression (SD)\cite{LEA45,OLE81,LAU87,HAD01}. Whether SD is also a key to the subsequent headache phase is an open question, in particular in cases of migraine without aura (MO). If SD occurs in MO, it must remain clinically silent \cite{AYA10,EIK10} or---by definition of diagnostic criteria---neurological symptoms must last less than 5\,min. Of course, the transient nature of SD poses challenging problems in clinical observability, in particular for objective measures by means of non-invasive imaging when clinical symptoms do not even indicate the aura phase with SD. The aura usually, though not always, precedes the headache phase.
Attacks observed with non-invasive imaging are usually triggered, which could also cause a trigger-specific bias. One well-documented case of a spontaneous migraine headache supports the contested notion of a `silent aura', because blood-flow changes were observed that were most likely the result of SD\cite{WOO94}. We suggest a qualitative congruence between the prevalence of MO and MA and the statistical properties we found in the transient responses. We do not suggest that all MO attacks are related to SD, nor that pain formation in MA is exclusively caused by SD; rather, SD is one pathway of pain formation in MO and MA. We refer to this pathway as the ``spreading depression'' theory of migraine\cite{LAU87}. The ``migraine generator'' theory (MG), a dysfunction in a central pattern generator in the brainstem that modulates the perception of pain, is for various reasons no less plausible\cite{AKE11}. Some of the seemingly conflicting and controversially discussed evidence is probably resolved when one considers the basis of the classification of migraine. We currently have a symptom-based classification of migraine with possibly overlapping etiologies for individual subforms. In the light of an etiology-based classification with possibly overlapping symptoms, the conflicts seem less puzzling to us. We also need to investigate the interplay of SD and MG, namely to which degree MG modulates pain traffic from SD generated in the intracranial tissues. The cortex is not pain sensitive. There are detailed investigations of how SD in the cortex can cause pain via pain-sensitive intracranial tissues and subsequent activation in the trigeminal nucleus caudalis in the brainstem\cite{MOS93a,BOL02}, but cf. \cite{ING97,MOS98}.
The qualitative congruence between the prevalence of MO and MA and the statistical properties we found in the transient responses is based on the following assumption on the geometrical layout: in the initial phase of cortical SD, with increased blood flow (hyperemic phase), locally released noxious substances (ATP, glutamate, K$^+$, H$^+$) are thought to diffuse outward in the direction perpendicular to the cortex into ``the leptomeninges resulting in activation of pial nociceptors, local neurogenic inflammation and the persistent activation of dural nociceptors which triggers the migraine headache''\cite{ZHA10}, but for issues concerning the blood-brain barrier system cf. \cite{TFE11}. If diffusion perpendicular to the affected area is critical, the size and shape of this area should play a critical role, see Fig.~\ref{fig:cortex_meninges_mia}. This suggests that SD waves activate nociceptive mechanisms dependent upon a sufficiently large instantaneously affected cortical area, i.e., large MIA. \begin{figure} \begin{center} \fbox{\includegraphics[width=\columnwidth]{cortexMeningesSDBestia}} \end{center} \caption{{\bf Schematic representation of cross section of cortex, meninges and skull.} The leptomeninges refer to the pia mater and arachnoid membrane. SD releases noxious substances with increased blood flow thought to diffuse outward. Activation of pain pathways can depend on MIA. \label{fig:cortex_meninges_mia}} \end{figure} The aura phase, on the other hand, must clearly correlate with cortical tissue being affected long and widely enough for the neurological deficits to be noticed. In particular, the very noticeable visual symptoms often start where the cortical magnification factor is large, so that only when they move into regions of lower magnification are they magnified by the reversed topographic mapping \cite{DAH03a}. The seemingly contested notion of MO (migraine without aura) with silent aura is also resolved.
The connection between MIA and TAA, as well as between MIA and ED, in our model is shown in Fig.~\ref{fig:statistical_evaluation}. It shows that in the range of high MIA the average values of TAA and ED become smaller. From Fig.~\ref{fig:statistical_evaluation} we can also read off that the range with the most events lies in the regime of relatively high MIA (around 30), well past the peaks of ED and TAA. Moreover, in the range with the most events, the correlation coefficient r(MIA, ED) is always negative and the correlation coefficient r(MIA, TAA) is mostly negative. All these effects are stronger the closer the control line is located to the saddle-node bifurcation. From these statistical correlations of MIA with TAA and ED and the distribution of the number of events, one could speculate that cases of MA are rarer and that the quality of the headache in these cases might be less severe. This is exactly what has been reported in the medical literature\cite{RAS92}. While the number of events with high ED and high TAA is influenced by the distance of the control line to the saddle-node bifurcation, the number of events with high MIA is much less affected. So, in a way, the distance to the saddle-node bifurcation controls the prevalence of MA in our model, while the prevalence of MO is not much affected. \subsection*{Inhibitory feedback and neurovascular coupling} This naturally raises the question of the physiological origin of the inhibitory feedback control. The hyperemic phase engulfs large regions of the human cortex\cite{OLE81}, while we suggest in this study that the homeostatic breakdown directly due to SD is much more limited in extent. A fast-spreading increase of neural activation in adjacent cortical areas could represent synaptic activation through feed-forward and feedback circuitry. This was suggested by Wilkinson \cite{WIL04}.
This would in turn extend the area of the hyperemic phase towards tissue that is not yet recruited into the SD state. The mechanism therefore has a neuroprotective effect through increased blood flow, which we mimic by the inhibitory mean field feedback. The coupling between neural activity and subsequent changes in cerebral blood flow, called neurovascular coupling, has a significant time delay on the order of seconds, which we ignore for the sake of simplicity in our model. \begin{figure} \begin{center} \fbox{\includegraphics[width=\columnwidth]{statistical_evaluation}} \end{center} \caption{{\bf Statistical analysis of output data.} For all four pictures we took all data points with MIA in the interval [MIA${}_\text{low}$, MIA${}_\text{low}$+10] (``sliding window'') and analyzed the connection with TAA {\bf (left column)} and ED {\bf(right column)}. In the {\bf(upper row)}, the average value is plotted with \emph{solid lines}; the area of one standard deviation around this value is shaded. In the {\bf(lower row)}, the correlation coefficient between MIA and the respective quantity is plotted. In all plots, the \emph{dotted lines} indicate the number of events for the respective interval with MIA${}_\text{low}$. \label{fig:statistical_evaluation}} \end{figure} \subsection*{Model-based control by neuromodulation} We briefly discuss model-based control and means by which neuromodulation techniques may affect pathways of pain formation and the aura phase. The emerging transient patterns and their classification according to size and duration offer a model-based analysis of phase-dependent stimulation protocols for non-invasive neuromodulation devices, e.g.\ utilizing transcranial magnetic stimulation (TMS)\cite{LIP10}, to intelligently target migraine. For instance, noise is a very effective means of driving the system back into the homogeneous steady state more quickly.
In general, responses of nonlinear systems to noise applied just before or past a saddle-node bifurcation are well studied. Before a saddle-node bifurcation on a limit cycle, the phenomenon of coherence resonance (CR) describes that a certain amount of noise makes the responses most coherent\cite{LIN04}. Beyond the saddle-node bifurcation on a limit cycle, the time the flow spends in the bottleneck region of the ghost is shortened\cite{STR94a}. However, according to our model, noise would mainly have a beneficial effect on ED and TAA, that is, on the aura, while it could even worsen the headache if applied early during the nucleation and growth process. Therefore, TMS using noise stimulation protocols, which is currently considered, should be applied only some time after first noticing aura symptoms. Headaches are not generally considered appropriate for invasive neurosurgical therapy, but when all else fails---preventives, abortives, and pain management---invasive brain stimulation techniques are also considered, e.g., occipital nerve stimulation (ONS)\cite{SIL12,DIE12}. So model-based control will become increasingly important. The importance of modeling related epileptic seizure dynamics as spatio-temporal transient patterns has also been suggested in a recent paper\cite{BAI12}. Model-based control of Parkinson's disease is already considered, yet Schiff remarks quite correctly\cite{SCH10h}: ``It seems incredible that the tremendous body of skill and knowledge of model-based control engineering has had so little impact on modern medicine. The timing is now propitious to propose fusing control theory with neural stimulation for the treatment of dynamical brain disease.'' We suggest considering migraine a dynamical disease that could benefit from model-based control therapies.
\section*{Methods} \label{sec:methods} As a generic model for our excitable medium we use the well-known FitzHugh-Nagumo equations \cite{IZH06}, augmented by a diffusion term for the activator variable: \begin{align} \varepsilon \frac{\partial u}{\partial t} &= u -\frac{1}{3} u^3 -v + \nabla^2u \label{eq:fhn1} \\ \frac{\partial v}{\partial t} &= u+ \beta. \label{eq:fhn2} \end{align} The parameter $\varepsilon$ separates the timescales of the dynamics of the activator $u$ and the inhibitor $v$ and is taken to be small; in the present work, we use $\varepsilon=0.04$. The parameter $\beta$ is a threshold value which determines the activator level above which the inhibitor concentration rises. The local dynamics of Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}) (i.e., without the diffusion term) are oscillatory for $\left|\beta \right|<1$ and excitable for $\left|\beta \right|>1$. At $\left|\beta \right|=1$ the local dynamics undergo a supercritical Hopf bifurcation. We choose $\beta=1.1$ throughout this work. To integrate Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}), we used a simulation based on spectral methods \cite{CRA06} and adaptive timestepping. We define the (instantaneous) wave size as the area with activator level $u$ above a certain threshold $u_\text{threshold}$: \begin{equation} S(t) := \int\!\!\!\int \Theta\left(u(x,y,t)-u_\text{threshold}\right)\,\mbox{d}x\mbox{d}y \label{eq:S}, \end{equation} where $\Theta$ is the Heaviside function and $u_\text{threshold}=0$. Equations (\ref{eq:fhn1})-(\ref{eq:fhn2}) are a paradigmatic model of an excitable medium \cite{MIK90}. The system possesses a stable homogeneous solution as well as stable excited states (pulses, spirals or double spirals), cf.~\cite{DAH08,DAH09a}. The boundary separating the basins of attraction of these types of solution consists of unstable so-called `nucleation solutions', which are areas of excitation traveling at uniform speed.
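For illustration only (our sketch, not the scheme used for the reported results, which relied on spectral methods and adaptive timestepping), the model and the size measure \eqref{eq:S} can be prototyped with a plain explicit finite-difference scheme on a small periodic grid:

```python
# Minimal explicit Euler sketch of the FitzHugh-Nagumo reaction-diffusion
# system and of the wave-size measure S(t); grid size, dx and dt are
# illustrative placeholders, not the values of the production runs.

def step(u, v, eps=0.04, beta=1.1, dx=0.5, dt=0.001):
    """One explicit Euler step with a 5-point periodic Laplacian."""
    n = len(u)
    un = [[0.0] * n for _ in range(n)]
    vn = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            lap = (u[(i + 1) % n][j] + u[(i - 1) % n][j]
                   + u[i][(j + 1) % n] + u[i][(j - 1) % n]
                   - 4.0 * u[i][j]) / dx ** 2
            un[i][j] = u[i][j] + dt / eps * (u[i][j] - u[i][j] ** 3 / 3.0
                                             - v[i][j] + lap)
            vn[i][j] = v[i][j] + dt * (u[i][j] + beta)
    return un, vn

def wave_size(u, dx=0.5, u_threshold=0.0):
    """Discrete version of S(t): area where the activator exceeds the threshold."""
    return sum(dx * dx for row in u for val in row if val > u_threshold)
```

Starting from the homogeneous rest state $(u^*,v^*)=(-\beta,\,-\beta+\beta^3/3)$ the right-hand sides vanish, so $S(t)$ remains zero until a perturbation is added to the activator.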
The size in the sense of \eqref{eq:S} of these solutions is plotted against the parameter $\beta$ in Fig.~\ref{fig:control_plane}. To measure this line $\partial_\text{R}$, called the `rotor boundary', we used a pseudo-continuation procedure, which is described below. Making the parameter $\beta$ dependent on the wave size $S$ adds a mean field control to the system: \begin{equation} \beta = \beta(t) = \beta_0 + K \cdot S(t) \label{eq:control}, \end{equation} where $K$ and $\beta_0$ are control parameters. If the control line defined by \eqref{eq:control} intersects $\partial_\text{R}$, the point of intersection with higher $S$ is stabilized, cf.~\cite{KRI94}. The aim of the present work is to shed light on the transient behavior occurring when the control line~\eqref{eq:control} is close to $\partial_\text{R}$ but does not intersect it. To account for the discrepancy between the rotor boundary of the simulation and the exact rotor boundary $\partial_\text{R}$, we measured the rotor boundary in our simulation using a pseudo-continuation procedure. For this, we set the control such that it intersects $\partial_\text{R}$, thus stabilizing an otherwise unstable solution on it. Letting the simulation run until the system has stopped fluctuating, saving the ($\beta$,$S$)-pair, changing the control slightly and repeating the procedure yields points of $\partial_\text{R}$ in our system. From this measured $\partial_\text{R}$ and the propagation boundary, inferred from continuation in 1D, we chose three suitable control lines which were used for the simulations in this work: \begin{align} K &= 0.003 \nonumber \\ \beta_0 &\in \left[ 1.32,\! 1.33,\!
1.34 \right] \label{eq:control_pars} \end{align} For the purpose of visualizing the saddle-node bifurcation occurring in the bifurcation diagram of the system with mean field control (right panel of Fig.~\ref{fig:control_plane}), we fitted the measured branch of nucleation solutions to a function of the form $\beta = a + \frac {b}{c S + S^2}$, which also allows us to obtain an approximation for the $\beta$ value of the rotor boundary in the limit of large $S$, $\partial_{\text{R},\infty}$. As mentioned in section~\ref{sec:results}, we need an appropriate sampling of initial conditions for Eqs.\,(\ref{eq:fhn1})-(\ref{eq:fhn2}), ideally sampled equidistantly with respect to some distribution. As was also mentioned, the set of all initial conditions for this system does not---to our knowledge---carry a helpful mathematical structure which would allow us to achieve this aim easily. In order to attack this problem, we turned to the physiological setting for which we chose this model. \begin{figure} \begin{center} \fbox{\includegraphics[width=\columnwidth]{ini_con_generation_schematic}} \caption{To construct initial conditions from artificially generated pinwheel maps, we first took such a pinwheel map with a certain \emph{scaling} (upper left); then we chose a selection of excited orientations by means of a gaussian, whose width gives the selection \emph{depth} (upper right). After that, we masked the result spatially with another, radially symmetric gaussian (lower right), whose width gives the third parameter, the \emph{size} of the pattern. Finally, the result is scaled, giving rise to the fourth parameter, which we call the \emph{excess}, and added to the activator variable in the homogeneous state (lower left). The inhibitor variable is left in the homogeneous state. \label{fig:ini_con_schematic}} \end{center} \end{figure} A set of initial conditions should naturally reflect plausible spatial perturbations of the homogeneous steady state of the cortex.
This can be achieved by defining localized but spatially structured activity states on large scales, i.e., on the order of millimeters. Such patterns are obtained from cortical feature maps (see Fig.~\ref{fig:ini_con_schematic}) by sampling three parameters (\emph{scaling}, \emph{depth}, and \emph{size}) that define patches of lateral coupling in these maps. A fourth parameter (\emph{excess}) determines the amplitude of the perturbation. In the following, we first describe the rationale behind using a cortical feature map and then the sampling. We focus on a cortical feature map in the primary visual cortex (V1) called the pinwheel map. V1 is located at the occipital pole of the cerebral cortex and is the first region to process visual information from the eyes. Migraine aura symptoms often start there or nearby, where similar feature maps exist. In V1, neurons within vertical columns (through the cortical layers) represent by their activity pattern edges, elongated contours, and whole textures ``seen'' in the visual field. This representation has a distinct, periodically microstructured pattern: the pinwheel map. Neurons preferentially fire for edges with a given orientation, and the preference changes continuously as a function of cortical location, except at singularities, the pinwheel centers, where all the different orientations meet \cite{ROJ90,BON91b}. Iso-orientation domains form continuous bands or patches around pinwheels and, on average, a region of about 1\,mm$^2$ (hypercolumn) will contain all possible orientation preferences. This topographical arrangement allows one hypercolumn to analyze all orientations coming from a small area in the visual field, but, as a consequence, the cortical representation of continuous contours in the visual field would be depicted in a patchy, discontinuous fashion \cite{EYS99}. In general, spatially separated elements are bound together by short- and long-range lateral connections.
While the strength of the local short-range connection within one hypercolumn is a graded function of cortical distance, mostly independent of relative orientation \cite{DAS99}, long-range connections over several hypercolumns connect only iso-orientation domains of similar orientation preference \cite{GIL92a,GIL96b}. Even nearby regions, which are connected by direct excitatory projections, have an inhibitory component through local inhibitory interneurons, and this is likely used to analyze angular visual features such as corners or T junctions \cite{DAS99}. Given the arguments above, we can now obtain localized yet spatially structured activity states on the scale we aim for as initial conditions by using iso-orientation domains that form continuous patches around pinwheels and extend in a discontinuous fashion over larger areas. In~\cite{NIE94c} the authors analyzed the design principles that lie behind the columnar organization of the visual cortex. The precise design principles of this cortical organization are governed by an annulus-like spectral structure in the Fourier domain \cite{ROJ90,NIE94c}, which is determined mainly by one parameter (\emph{scaling}), namely the annulus width. The parameter \emph{depth} reflects the tuning properties of orientation preference; we can also interpret it as the range of orientation angles that we consider within the iso-orientation domain. The third parameter reflects the distance over which long-range coupling extends before it significantly attenuates. These design principles can be exploited, and a procedure can be designed to construct maps with the same properties. The constructed maps come very close to the maps found in brains of macaque monkeys (see~\cite{NIE94c} and references therein). To construct initial conditions from these maps we used a procedure that uses four control parameters and is visualized in Fig.~\ref{fig:ini_con_schematic}.
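In outline, the generator can be sketched as follows (a deliberately reduced illustration: a single analytic pinwheel replaces the maps constructed via \cite{NIE94c}, and all parameter values are placeholders); the full details follow below.

```python
# Sketch of the four-parameter initial-condition generator.  A single
# synthetic pinwheel stands in for the full feature map; `depth`, `size`
# and `excess` follow the naming in the text.
import math

def make_initial_condition(n=64, dx=0.5, theta0=0.0,
                           depth=0.3, size=4.0, excess=5.0):
    cx = cy = (n - 1) / 2.0
    pattern = [[0.0] * n for _ in range(n)]
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i - cx) * dx, (j - cy) * dx
            # orientation-preference angle of one pinwheel in (-pi/2, pi/2]
            theta = 0.5 * math.atan2(y, x)
            # gaussian selection of excited orientations ("depth")
            w_angle = math.exp(-(theta - theta0) ** 2 / (2.0 * depth ** 2))
            # radially symmetric gaussian mask ("size")
            w_space = math.exp(-(x * x + y * y) / (2.0 * size ** 2))
            pattern[i][j] = w_angle * w_space
            total += pattern[i][j] * dx * dx
    # rescale so that the integral over the plane equals "excess"
    scale = excess / total
    return [[val * scale for val in row] for row in pattern]
```

The returned pattern is what gets added to the activator variable $u$ on top of the homogeneous state.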
The details are as follows: a pinwheel map is a function that maps the two-dimensional plane to the interval $(-\pi/2, \pi/2]$. We construct such a map using the procedure in \cite{NIE94c}. During construction, we can choose the \emph{scaling} of the map; this is our first parameter. After constructing this map, we choose, by means of a gaussian, a range of orientations that is excited. Mathematically speaking, this is the composition of the gaussian with the pinwheel map. This gives the next parameter, namely the width of the gaussian that selects the angles; we call this parameter the \emph{depth}. The next step is to constrain the generated pattern spatially by multiplication with another gaussian, which is defined on the plane and chosen to be rotationally symmetric. The width of this gaussian gives rise to the third parameter, the \emph{size} of the pattern. Finally, we multiply the pattern by an amplitude chosen such that the integral of the pattern over the plane equals a prescribed number, which constitutes the fourth parameter, the \emph{excess}. Initial conditions are then generated by setting the plane to the (stable) homogeneous state and adding the generated pattern to the activator variable $u$. In a first run, we scanned the space spanned by the four parameters coarsely. We used the marginal distributions of the number of solutions with ED$>0$ with respect to the parameters to decide how densely to sample the parameter space in the final run. \section*{Acknowledgments} The authors kindly acknowledge the support from the Deutsche Forschungsgemeinschaft (DFG) in the frameworks of SFB910 and GRK1558, and from the Bundesministerium f\"ur Bildung und Forschung under grant BMBF 01GQ1109. MD has been supported in part also by the Mathematical Biosciences Institute at the Ohio State University and the National Science Foundation under Grant No. DMS 0931642.
The authors would also like to thank Gerold Baier, Michael Guevara, Zachary Kilpatrick, and Eckehard Sch\"oll for helpful discussions and advice. \section*{References}
\section{Introduction} The Brascamp--Lieb inequality is a far-reaching common generalisation of well-known multilinear functional inequalities on euclidean spaces, such as the H\"older, Loomis--Whitney and Young convolution inequalities. It is typically written in the form \begin{equation}\label{BL} \int_{H} \prod_{j=1}^m (f_j \circ L_j )^{p_j} \leq C \prod_{j=1}^m \left(\int_{H_j} f_j\right)^{p_j}, \end{equation} where $m\in\mathbb{N}$, $H$ and $H_j$ denote euclidean spaces of finite dimensions $n$ and $n_j$ where $n_j\leq n$, equipped with Lebesgue measure for each $1\leq j\leq m$. The functions $f_j:H_j\to\mathbb{R}$ are assumed to be nonnegative. The maps $L_j:H\to H_j$ are surjective linear transformations, and the exponents $p_j$ satisfy $0\leq p_j\leq 1$. Following the notation in \cite{BCCT1} we denote by $\operatorname{BL}(\mathbf{L},\mathbf{p})$ the smallest constant $C$ for which \eqref{BL} holds for all nonnegative input functions $f_j\in L^1(H_j)$, $1\leq j\leq m$. Here $\mathbf{L}$ and $\mathbf{p}$ denote the $m$-tuples $(L_j)_{j=1}^m$ and $(p_j)_{j=1}^m$ respectively. We refer to $(\mathbf{L},\mathbf{p})$ as the \textit{Brascamp--Lieb datum}, and $\operatorname{BL}(\mathbf{L},\mathbf{p})$ as the \textit{Brascamp--Lieb constant}. The Brascamp--Lieb inequality has been studied extensively by many authors, and the delicate questions surrounding the finiteness and attainment of the constant have found some useful answers. Most notably, it was shown by Lieb \cite{Lieb} that in order to compute $\mbox{BL}(\mathbf{L},\mathbf{p})$ it is enough to restrict attention to gaussian inputs $f_j$, leading to the expression \begin{equation}\label{gauss} \mbox{BL}(\mathbf{L},\mathbf{p})=\sup \;\frac{\prod_{j=1}^m(\det A_j)^{p_j/2}}{\det(\sum_{j=1}^m p_jL_j^*A_jL_j)^{1/2}}, \end{equation} where the supremum is taken over all positive definite linear transformations $A_j$ on $H_j$, $1\leq j\leq m$. 
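To fix ideas, we record the simplest special case (a standard observation, included here only for orientation): taking $H_j=H$ and $L_j=\operatorname{id}$ for every $j$, with $\sum_{j}p_j=1$, the inequality \eqref{BL} is exactly H\"older's inequality and the constant is $1$:

```latex
% H\"older's inequality as a Brascamp--Lieb datum: H_j = H, L_j = \operatorname{id}.
\int_{H} \prod_{j=1}^m f_j^{p_j}
  \leq \prod_{j=1}^m \Big( \int_{H} f_j \Big)^{p_j},
  \qquad \operatorname{BL}(\mathbf{L},\mathbf{p}) = 1.
% Consistency with \eqref{gauss}: the choice A_j = A for all j gives the ratio
% (\det A)^{\sum_j p_j/2} \Big/ \det\Big(\textstyle\sum_j p_j A\Big)^{1/2}
%   = (\det A)^{1/2} / (\det A)^{1/2} = 1.
```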
However, this expression (which of course involves a supremum over a non-compact set) retains considerable complexity in the context of general data. Nevertheless, a concise characterisation of finiteness is available, specifically $ \mbox{BL}(\mathbf{L},\mathbf{p})<\infty$ if and only if \begin{equation}\label{scaling} \sum_{j=1}^mp_jn_j=n \end{equation} and \begin{equation}\label{char} \dim(V)\leq\sum_{j=1}^mp_j\dim(L_j V) \end{equation} hold for all subspaces $V\subseteq H$; see \cite{BCCT1} where a proof of this is given based on \eqref{gauss}, or \cite{BCCT2} for an alternative. In recent years a variety of generalisations of the Brascamp--Lieb inequality have emerged in harmonic analysis, and have found surprising and diverse applications in areas ranging from combinatorial incidence geometry, to dispersive PDE and number theory -- see for example \cite{BCT,BD}, and perhaps most strikingly \cite{BDG}. These generalisations, which include the multilinear Fourier restriction and Kakeya inequalities (see for example \cite{BBFL} for further discussion), may be viewed as ``perturbations'' of the classical Brascamp--Lieb inequality, and naturally raise questions about the local behaviour of $\operatorname{BL}(\mathbf{L},\mathbf{p})$ as a function of $\mathbf{L}$. While seemingly quite innocuous given the formula \eqref{gauss}, very little is known about the regularity of $\operatorname{BL}(\cdot,\mathbf{p})$ in general. In \cite{BBFL} it was shown that the function $\operatorname{BL}(\cdot,\mathbf{p})$ is at least \emph{locally bounded} -- a fact that already has applications in multilinear harmonic analysis and beyond; see \cite{BBFL,BD,BDG}. We note that the analysis in \cite{BBFL} also reveals the relatively simple fact that the finiteness set $\{\mathbf{L} : \operatorname{BL}(\mathbf{L},\mathbf{p})<\infty\}$ is open.
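As an illustration of how \eqref{scaling} and \eqref{char} are verified in practice (the standard Loomis--Whitney datum, recorded here for orientation), take $H=\mathbb{R}^n$, $H_j=\mathbb{R}^{n-1}$, $L_j$ the projection that deletes the $j$th coordinate, and $p_j=1/(n-1)$ for $1\leq j\leq n$:

```latex
% Scaling condition \eqref{scaling}:
\sum_{j=1}^{n} p_j n_j = n \cdot \frac{n-1}{n-1} = n.
% Subspace condition \eqref{char}: since \ker L_j = \mathbb{R}e_j, we have
% \dim(L_j V) = \dim V - \dim(V \cap \mathbb{R}e_j), and because the e_j are
% linearly independent, \sum_{j} \dim(V \cap \mathbb{R}e_j) \le \dim V.  Hence
\sum_{j=1}^{n} p_j \dim(L_j V)
  \ge \frac{n \dim V - \dim V}{n-1} = \dim V,
% so \operatorname{BL}(\mathbf{L},\mathbf{p}) < \infty for this datum.
```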
The main theorem in this paper is the following: \begin{theorem}\label{main} For each $\mathbf{p}$, the Brascamp--Lieb constant $\operatorname{BL}(\cdot,\mathbf{p})$ is a continuous function. \end{theorem} For $\mathbf{L}_0$ such that $\operatorname{BL}(\mathbf{L}_0,\mathbf{p})=\infty$, Theorem \ref{main} should be interpreted as ``For any sequence approaching $\mathbf{L}_0$, the associated Brascamp--Lieb constants approach infinity''. Simple, yet instructive examples reveal that this continuity conclusion cannot in general be improved to differentiability. In contrast, for so-called ``simple'' data -- that is, data for which \eqref{char} holds with \emph{strict} inequality for all nontrivial proper subspaces $V$ of $H$ -- Valdimarsson showed in \cite{Vdiff} that $\operatorname{BL}(\cdot,\mathbf{p})$ is \emph{differentiable}. We conclude this section by stating one of the conjectural generalisations of the Brascamp--Lieb inequality that inspired our work. The so-called \emph{nonlinear Brascamp--Lieb inequality} replaces the linear surjections $L_j:\mathbb{R}^n\rightarrow\mathbb{R}^{n_j}$ with \emph{local submersions} $B_j:U\rightarrow\mathbb{R}^{n_j}$, defined on a neighbourhood $U$ of a point $x_0\in\mathbb{R}^n$. In \cite{BB} (see also \cite{BBFL}) it is tentatively conjectured that if $dB_j(x_0)=L_j$ for linear maps $\mathbf{L}$ such that $\operatorname{BL}(\mathbf{L},\mathbf{p})<\infty$, then for some $U'\subseteq U$, there exists $C$ such that \begin{equation}\label{nlblconj} \int_{U'}\prod_{j=1}^m (f_j\circ B_j)^{p_j}\leq C \prod_{j=1}^m\left(\int_{\mathbb{R}^{n_j}}f_j\right)^{p_j}. \end{equation} For somewhat restrictive classes of data $(\mathbf{L},\mathbf{p})$ this can indeed be achieved (see \cite{BCW,BB,BH} for details and applications), while for general data \eqref{nlblconj} is known to hold if the input functions $f_j$ are assumed to have an arbitrarily small amount of regularity (see \cite{BBFL}).
Global versions of this inequality are also of interest, see \cite{BBG,KS}. It is natural to formulate a more quantitative version of this conjecture as follows: \begin{conjecture}\label{conject} Given $\varepsilon>0$ there exists $\delta>0$ (depending on the maps $B_j$) such that \[ \int_{B(x_0,\delta)}\prod_{j=1}^m (f_j\circ B_j)^{p_j}\leq (1+\varepsilon)\operatorname{BL}(\mathbf{L},\mathbf{p})\prod_{j=1}^m\left(\int_{\mathbb{R}^{n_j}}f_j\right)^{p_j}. \] \end{conjecture} This conjecture is intimately related to the continuity of the general Brascamp--Lieb constant. Indeed an elementary scaling and limiting argument (as in \cite{BBG}) shows that, if it were true, then for every $\varepsilon>0$ there would exist $\delta>0$ such that \begin{equation}\label{imp} |\operatorname{BL}(\mathbf{L}(x),\mathbf{p})-\operatorname{BL}(\mathbf{L}(x_0),\mathbf{p})|<\varepsilon \end{equation} whenever $|x-x_0|<\delta$; here $\mathbf{L}(x)$ denotes the family of linear surjections $(dB_j(x))_j$. (We clarify that one of the two implicit inequalities in \eqref{imp} is a consequence of the elementary \textit{lower} semicontinuity of the Brascamp--Lieb constant.) One possible application of Conjecture \ref{conject} would be in establishing best constants for local versions of Young's inequality for convolution on noncommutative Lie groups, which is intimately related with best constants for the Hausdorff--Young inequality; this topic has been studied by several authors (see, for example, \cite{Beck1,KR,GCMP,CMMP}). The Baker--Campbell--Hausdorff formula suggests that, for functions supported on small sets, convolution resembles convolution in $\mathbb{R}^n$. If true, Conjecture \ref{conject} would put this on a firm footing. We refer the reader to \cite{BEsc,BBFL,Zhang} for further discussion and a description of some rather different Kakeya-type and Fourier-analytic generalisations of the Brascamp--Lieb inequality. 
Since the writing of this paper we have learnt that Garg, Gurvits, Oliveira and Wigderson have recently discovered a link between Brascamp--Lieb constants and the capacity of quantum operators (as defined in \cite{G04}) which, using ideas from \cite{GGOW}, leads to another proof of the continuity of the Brascamp--Lieb constant in the case of rational data, as well as a quantitative estimate in this case \cite{GGOW2}. We thank Kevin Hughes for bringing this to our attention, and Avi Wigderson for clarifying the connection with this independent work. Lastly, we thank the referee for many thoughtful comments on the original manuscript. \subsubsection*{Structure of the paper.\;} In Section \ref{sec:rank1} we prove Theorem \ref{main} in the relatively straightforward situation where the surjections $L_j$ have rank one (that is, when $n_j=1$ for each $j$) using a well-known formula of Barthe \cite{Barthe}. In Section \ref{sec:genBarthe} we establish a certain general-rank extension of Barthe's formula, which we then combine with the rank-one ideas to conclude Theorem \ref{main} in full generality. Finally, in Section \ref{sec:count}, we present a simple counterexample to the claim that the Brascamp--Lieb constant is everywhere differentiable. \section{Structure of the proof and the rank-1 case } \label{sec:rank1} In this section we provide a simple proof of Theorem \ref{main} in the case of rank one maps $L_j$. Our argument is based on the availability of a simpler formula for $\operatorname{BL}(\mathbf{L},\mathbf{p})$ due to Barthe \cite{Barthe} in that case. This is a natural starting point since our proof in the general-rank case proceeds by first establishing a suitable Barthe-type formula which holds in full generality. \begin{proof} We begin by setting up some notation. As each $L_j$ has rank one, there exists a vector $v_j$ such that $L_j(x) = \langle v_j,x\rangle $. 
We denote by $\mathbf{v}$ the $m$-tuple $(v_j)_{j=1}^m$, and identify $\mathbf{v}$ with $\mathbf{L}$. Set \[ \mathcal{I}=\{I: I\subseteq\{1,2,...,m\}, |I|=n\}.\] For each $I\in\mathcal{I}$, set $p_I= \prod_{i\in I} p_i$ and \[ d_I=\det((v_i)_{i\in I})^2,\] where the determinant of a sequence of $n$ vectors in $\mathbb{R}^n$ is the determinant of the $n\times n$ matrix whose $i$th column is the $i$th term in the sequence. Finally, let $\vec{d}=(d_I)_{\mathcal{I}}$. We view $\vec{d}$ as an element in $\mathbb{R}^N$ where $N={{m}\choose{n}}$. Barthe's formula \cite{Barthe} for the best constant is \begin{equation}\label{rhs} \operatorname{BL}(\mathbf{v},\mathbf{p})^2 = \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}} d_I p_I \lambda_I}, \end{equation} where $\lambda_I= \prod_{i\in I} \lambda_i$. By definition, for each $I$, $d_I$ is a continuous function of $\mathbf{L}$, and so it is enough to show that the right-hand side of \eqref{rhs} is a continuous function of $\vec{d}$. First, lower semicontinuity is immediate from the definition, as a supremum of lower semicontinuous functions is itself lower semicontinuous. It thus suffices to prove upper semicontinuity. Fix a point $\widetilde{\vec{d}}\in\mathbb{R}^N$. If for each $I$, $\widetilde{d}_I=0$, then the right-hand side of \eqref{rhs} is infinite and upper semicontinuity is immediate. If not, then set $D=\min_{I:\widetilde{d}_I\neq 0} \widetilde{d}_I>0$. 
For $\delta\in(0, D)$ and $\vec{d}\in\mathbb{R}^N$ such that $|\vec{d}-\widetilde{\vec{d}}|\leq\delta$, \[ \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}} d_I p_I\lambda_I} \leq \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}: \widetilde{d}_I\neq0} d_I p_I\lambda_I} \leq \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}: \widetilde{d}_I\neq0} (\widetilde{d}_I-\delta) p_I\lambda_I},\] and so \[ \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}} d_I p_I\lambda_I} \leq \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}: \widetilde{d}_I\neq0}\widetilde{d}_Ip_I\lambda_I}\left( 1- \delta \frac{\sum_{\mathcal{I}:\widetilde{d}_I\neq0}p_I\lambda_I}{\sum_{\mathcal{I}:\widetilde{d}_I\neq0} \widetilde{d}_Ip_I\lambda_I} \right)^{-1}.\] Focusing on the second term in the product, for each $I$ such that $\widetilde{d}_I\neq 0$, \[ \frac{p_I\lambda_I}{\widetilde{d}_Ip_I\lambda_I} \leq \frac{1}{D}.\] Thus the second term is bounded by $ (1-\delta/D)^{-1}$, and so \[ \lim_{\delta\to0} \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}} d_I p_I\lambda_I} \leq \sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}: \widetilde{d}_I\neq0}\widetilde{d}_Ip_I\lambda_I}\leq\sup_{\lambda_i>0} \frac{ \prod_{i=1}^m \lambda_i^{p_i} }{ \sum_{\mathcal{I}}\widetilde{d}_Ip_I\lambda_I}.\] This proves the required upper semicontinuity. \end{proof} \section{A generalisation of Barthe's formula and the proof of Theorem \ref{main}}\label{sec:genBarthe} To extend the proof from Section \ref{sec:rank1} to the general case we first need an analogue of Barthe's formula \eqref{rhs} for the best constant \eqref{gauss}. A key step in obtaining such a formula is the parametrisation of positive definite matrices by a rotation matrix and a diagonal matrix of their (positive) eigenvalues. 
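Before developing the general-rank formula, we remark that the rank-one formula \eqref{rhs} can be probed numerically on concrete data. The sketch below (ours; the data, grid, and names are purely illustrative) evaluates the supremum for the Young-type vectors $v_1=e_1$, $v_2=e_2$, $v_3=e_1-e_2$ in $\mathbb{R}^2$ with $\mathbf{p}=(2/3,2/3,2/3)$, for which every $d_I=1$:

```python
import math
from itertools import product

# Sanity check (ours, not part of the argument): evaluate the right-hand
# side of Barthe's rank-one formula,
#   BL(v,p)^2 = sup_{lambda>0} prod_i lambda_i^{p_i} / sum_I d_I p_I lambda_I,
# for v1 = e1, v2 = e2, v3 = e1 - e2 in R^2 and p = (2/3, 2/3, 2/3).
# Every d_I = det(v_i, v_j)^2 = 1 here, and the ratio is homogeneous of
# degree zero in lambda, so we may fix lambda_3 = 1 and search over the rest.

p = 2.0 / 3.0

def ratio(l1, l2, l3=1.0):
    num = (l1 * l2 * l3) ** p
    den = p * p * (l1 * l2 + l1 * l3 + l2 * l3)  # d_I = 1, p_I = p^2
    return num / den

grid = [i / 10 for i in range(5, 21)]  # includes lambda = 1.0
best = max(ratio(l1, l2) for l1, l2 in product(grid, grid))

# AM-GM shows the supremum 3/4 is attained at equal lambda_i, so BL = sqrt(3)/2.
print(best, math.sqrt(best))
```

The value $\operatorname{BL}=\sqrt{3}/2$ obtained here is consistent with the sharp constant in the trilinear form of Young's convolution inequality with exponents $(3/2,3/2,3/2)$.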
In the rank one case this is equivalent to Barthe's parametrisation. In \cite{Vdiff} Valdimarsson uses a related approach, although he parametrises positive definite matrices by symmetric matrices, rather than by diagonal matrices and rotations. It will be helpful to simplify notation so as to avoid double sums of the form $\sum_{i=1}^m \sum_{j = 1}^{n_i} a_{ij}$. To do so, we define $K = \sum_{i=1}^m n_i$, and write $a_{ij}$ as $a_{k}$, where $k=n_0+\cdots +n_{i-1}+j$, $n_0=0$, and $1\leq k\leq K$. Given this relationship between $(i,j)$ and $k$, define $q_k=p_i$, so that $(q_1,\hdots,q_K)$ is a $K$-tuple whose first $n_1$ entries are $p_1$, next $n_2$ entries are $p_2$, and so on. We write $\mathcal{I}$ for the set of all subsets $I$ of $\left\lbrace 1, \dots, K\right\rbrace$ of cardinality $n$. For each $I\in\mathcal{I}$ define $q_I=\prod_{k\in I} q_k$. Similarly, given a family $\lambda_1,\hdots,\lambda_K>0$ we define $\lambda_I=\prod_{k\in I} \lambda_k$. In what follows $R_i$ will denote a rotation on $\mathbb{R}^{n_i}$ for each $1\leq i\leq m$, and we denote by $\mathbf{R}$ the $m$-tuple $(R_i)_{i=1}^m$. \begin{theorem} \label{thm:MGCformula} \[ \operatorname{BL}(\mathbf{L},\mathbf{p})^{2} = \sup\left\lbrace \frac{\prod_{k=1}^K \lambda_k^{q_k} }{ \sum_{I\in \mathcal{I}} \lambda_I q_I d_I } : \lambda_k \in \mathbb{R}^+, R_i \in \group{SO}(n_i) \right\rbrace, \] where $d_I=d_I(\mathbf{L},\mathbf{R})$ is a nonnegative continuous function for each $I\in\mathcal{I}$. \end{theorem} \begin{remark*} The functions $d_I(\mathbf{L},\mathbf{R})$ will be specified in the proof. In the rank-1 case, each $R_i=1$ and $d_I=\det((v_i)_{i\in I})^2$ as before. \end{remark*} \begin{proof} Recall from \eqref{gauss}, \[ \mbox{BL}(\mathbf{L},\mathbf{p})=\sup \;\frac{\prod_{i=1}^m(\det A_i)^{p_i/2}}{\det\left(\sum_{i=1}^m p_iL_i^*A_iL_i\right)^{1/2}}.\] Here $A_i$ is a positive definite $n_i \times n_i$ matrix. 
Further, $A_i$ is of the form $R_i^* D_i R_i$, where $R_i$ is a rotation matrix, with transpose $R_i^*$, and $D_i$ is a diagonal matrix, with positive diagonal entries $\lambda_i^1, \dots, \lambda_i^{n_i}$. Using the notation introduced above, \[ \prod_{i=1}^m(\det A_i)^{p_i/2} = \prod_{k=1}^K \lambda_k^{q_k/2} . \] Let the column vectors $\{ \vec{e}_i^j : j = 1, \dots, n_i \}$ be the standard basis for $\mathbb{R}^{n_i}$, so that $D_i = \sum_{j=1}^{n_i} \lambda_i^j \vec{e}_i^j({\vec{e}_i^j})^*$, and let $v_i^j = L_i^* R_i^* \vec{e}_i^j$. Then, \begin{align*} \sum_{i=1}^m p_iL_i^*A_iL_i & = \sum_{i=1}^m p_iL_i^*R_i^* D_i R_iL_i \\ & = \sum_{i=1}^m p_iL_i^*R_i^*\left( \sum_{j=1}^{n_i} \lambda_i^j \vec{e}_i^j(\vec{e}_i^j)^* \right) R_iL_i \\ & = \sum_{i=1}^m p_i \sum_{j=1}^{n_i} \lambda_i^j L_i^* R_i^* \vec{e}_i^j \left(L_i^* R_i^* \vec{e}_i^j\right)^* \\ & = \sum_{k=1}^K q_k \lambda_k v_k v_k^* \end{align*} where the $v_k$ are the $v_i^j$ in our chosen order, the $\lambda_k$ are the corresponding $\lambda_i^j$, and the $q_k$ are the corresponding $p_i$ (as described above). Set \[ T = \sum_{k=1}^K q_k \lambda_k v_k v_k^* .\] To compute $\det(T)$, we follow Barthe and use the Cauchy--Binet formula. Define the $n \times K$ matrices $A$ and $B$ to be the matrices whose $k$th columns are the vectors $\lambda_k q_k v_k$ and $v_k$, respectively, and the $K \times n$ matrix $C$ to be $B^*$, the matrix whose $k$th row is the vector $v_k^*$. Recall that $\mathcal{I}$ denotes the set of all subsets of $\{1, \dots, K\}$ of cardinality $n$; write $A_I$ and $B_I$ for the $n \times n$ matrices whose columns are the vectors $\lambda_k q_k v_k$ and $v_k$, respectively, where $k \in I$, and $C_I=B_I^*$. Then \[ \det(T)=\det(AC) = \sum_{I\in \mathcal{I}} \det (A_I C_I) = \sum_{I\in \mathcal{I}} \left( \prod_{k \in I} \lambda_k q_k \right) \det ({B_I} C_I) = \sum_{I\in \mathcal{I}} \lambda_I q_I d_I. 
\] Here $\lambda_I = \prod_{k \in I} \lambda_k$, $q_I = \prod_{k \in I} q_k$ and $d_I = \det (B_I C_I)$. Evaluating $d_I$ using the definition of $B_I$ and $C_I$ yields $ d_I= \det((v_k)_{k\in I})^2$ where $v_k=v_{i}^j= L_i^* R_i^* \vec{e}_i^j$. We conclude that \begin{equation*} \operatorname{BL}(\mathbf{L},\mathbf{p})^{2} = \sup\left\lbrace \frac{\prod_{k=1}^K \lambda_k^{q_k} }{ \sum_{I\in \mathcal{I}} \lambda_I q_I d_I } : \lambda_k \in \mathbb{R}^+, R_i \in \group{SO}(n_i) \right\rbrace, \end{equation*} where each $d_I$ is manifestly nonnegative and continuous as a function of $\mathbf{L}$ and $\mathbf{R}$. \end{proof} The argument presented in the rank-1 case in Section \ref{sec:rank1}, combined with Theorem \ref{thm:MGCformula}, quickly leads to Theorem \ref{main}. Define the function $F(\vec{d})$ by \[ F(\vec{d})=\sup_{\lambda_k>0}\frac{ \prod_{k = 1}^K \lambda_k^{q_k} }{ \sum_{I \in \mathcal{I}} d_I q_I \lambda_I }. \] Then \[ \operatorname{BL}(\mathbf{L},\mathbf{p})^{2} = \sup\left\lbrace F(\vec{d}(\mathbf{L},\mathbf{R})) : R_i \in \group{SO}(n_i) \right\rbrace. \] Now, $F(\vec{d})$ is continuous by the rank-1 argument in Section \ref{sec:rank1}. As the supremum of a family of continuous functions continuously parametrised by a compact set is continuous, the Brascamp--Lieb constant is as well. \section{A non-differentiable example}\label{sec:count} In general, Brascamp--Lieb constants are surprisingly difficult to compute. However, consider the general 4-linear rank-1 case when $\mathbf{p}=(1/2,1/2,1/2,1/2)$: \[ \int \prod_{i=1}^4 f_i^{1/2}(\left\langle x,v_i\right\rangle)\;dx\leq C \prod_{i=1}^4 \left(\int f_i \right)^{1/2}. 
\] Here we can exploit symmetry to compute: \begin{equation}\label{gennondiff} \mbox{BL}(\mathbf{v}, \mathbf{p})^2 = 2\left({|\det(v_1v_2)\det(v_3v_4)| + |\det(v_1v_3)\det(v_2v_4)|+|\det(v_1v_4)\det(v_2v_3)| }\right)^{-1}, \end{equation} where again the determinant of a collection of $2$ vectors in $\mathbb{R}^2$ is the determinant of the $2\times2$ matrix with columns $v_i$. Indeed, by a change of variables and a rescaling, it is enough to consider the following case: \[ \int\int f_1^{1/2}(x)f_2^{1/2}(y)f_3^{1/2}(x-y)f_4^{1/2}(x+ay) \;dxdy \leq C \prod_{i=1}^4 \left(\int f_i \right)^{1/2}. \] Using the Cauchy--Schwarz inequality and a change of variables, \begin{multline*} \int\int f_1^{1/2}(x)f_2^{1/2}(y)f_3^{1/2}(x-y)f_4^{1/2}(x+ay)\;dxdy\\ \leq \left(\int\int f_1(x)f_2(y)\;dxdy\right)^{1/2}\left( \int\int f_3(x-y)f_4(x+ay)\;dxdy\right)^{1/2}\\ \leq |a+1|^{-1/2} \prod_{i=1}^4 \left( \int f_i \right)^{1/2}. \end{multline*} This inequality is sharp whenever the application of the Cauchy--Schwarz inequality is sharp, in this case when $a>0$, as may be seen by considering suitable gaussians. Repeating this argument for each possible pairing of functions yields the following formula for the sharp constant when $a\neq 0,-1$: \[ \min\{ 1, |a|^{-1/2}, |a+1|^{-1/2} \}\] or, equivalently, \[\left( \frac{2}{|a|+|a+1|+1}\right)^\frac{1}{2} .\] By Theorem \ref{main} this formula must hold for all $a$. Changing variables back to the general setting yields the general formula \eqref{gennondiff}. \begin{remark*} We suspect that in general the dependence of $\operatorname{BL}(\mathbf{L},\mathbf{p})$ on $\mathbf{L}$ is at least locally H\"older continuous, and we hope to return to this in a subsequent paper. \end{remark*}
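The closed form above makes the non-differentiability explicit: $a\mapsto\left(2/(|a|+|a+1|+1)\right)^{1/2}$ has corners at $a=0$ and $a=-1$. The following sketch (ours; the sample points are arbitrary) checks numerically that the two expressions for the sharp constant agree:

```python
import math

# Quick check (ours, not part of the paper) that the two expressions for
# the sharp constant agree away from a = 0, -1:
#   min{1, |a|^{-1/2}, |a+1|^{-1/2}}  versus  (2 / (|a| + |a+1| + 1))^{1/2}.
# Written in the second form, the kinks at a = 0 and a = -1 are evident.

def c_min(a):
    return min(1.0, abs(a) ** -0.5, abs(a + 1.0) ** -0.5)

def c_closed(a):
    return math.sqrt(2.0 / (abs(a) + abs(a + 1.0) + 1.0))

samples = [-3.0, -1.5, -0.75, -0.25, 0.5, 1.0, 4.0]
max_dev = max(abs(c_min(a) - c_closed(a)) for a in samples)
print(max_dev)  # the two expressions agree to machine precision
```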
\section{Introduction} Many-body systems with dipolar interactions have attracted a lot of attention recently. Unlike the properties of ultracold atomic alkali vapors, which can be described to a very good approximation by a single scattering quantity (the $s$-wave scattering length), those of dipolar gases additionally depend on the dipole moment. This dipole moment can be magnetic, as in the case of atomic Cr~\cite{grie05,stuh05}, or electric, as in the case of heteronuclear molecules such as OH~\cite{meer05,boch04}, KRb~\cite{wang04a} or RbCs~\cite{kerm04}. Furthermore, dipolar interactions are long-ranged and anisotropic, giving rise to a host of novel many-body effects in confined dipolar gases such as roton-like features~\cite{dell03,sant03,rone06a} and rich stability diagrams~\cite{sant00,yi00,gora00,mart01,yi01,gora02,rone06,bort06}. The physics of dipolar gases loaded into optical lattices promises to be particularly rich. For example, this setup constitutes the starting point for a range of quantum computing schemes~\cite{bren99,jaks00,demi02,bren02}. Additionally, a variety of novel quantum phases have already been predicted to arise~\cite{gora02a,dams03,barn06,mich06}. Currently, a number of experimental groups are working towards loading dipolar gases into optical lattices. This paper investigates the physics of doubly-occupied optical lattice sites in the regime where the tunneling between neighboring sites and the interactions with dipoles located in other lattice sites can be neglected. In this case, the problem reduces to treating the interactions between two dipoles in a single lattice site. Assuming that the lattice potential can be approximated by a harmonic potential, the center of mass motion separates and the problem reduces to solving the Schr\"odinger equation for the relative distance vector $\vec{r}$ between the two dipoles. 
The interaction between the two aligned dipoles is angle-dependent and falls off as $1/r^3$ at large interparticle distances. In this work, we replace the shape-dependent interaction potential by an angle-dependent zero-range pseudo-potential, which is designed to reproduce the scattering properties of the full shape-dependent interaction potential, and derive an implicit eigenequation for two interacting identical bosonic dipoles and two interacting identical fermionic dipoles analytically. Replacing the full interaction potential or a shape-dependent pseudo-potential by a zero-range pseudo-potential~\cite{ferm34,huan57,busc98,blum02,bold02,kanj04,stoc04} often allows for an analytical description of ultracold two-body systems in terms of a few key physical quantities. Here we show that the eigenequation for appropriately chosen zero-range pseudo-potentials reproduces the energy spectrum of two dipoles under harmonic confinement interacting through a shape-dependent model potential; that the applied zero-range treatment readily leads to an approximate classification scheme of the energy spectrum in terms of angular momentum quantum numbers; and that the proposed pseudo-potential treatment breaks down when the characteristic length of the dipolar interaction becomes comparable to the characteristic length of the external confinement. The detailed understanding of two interacting dipoles obtained in this paper will guide optical lattice experiments and the search for novel many-body effects. Section~\ref{sec_pp} introduces the Hamiltonian under study and discusses the anisotropic zero-range pseudo-potential that is used to describe the scattering between two interacting dipoles. In Sec.~\ref{sec_ho}, we derive an implicit eigenequation for two dipoles under external spherical harmonic confinement interacting through the zero-range pseudo-potential and show that the resulting eigenenergies agree well with those obtained for a shape-dependent model potential. 
Finally, Sec.~\ref{sec_conclusion} concludes. \section{System under study and anisotropic pseudo-potential} \label{sec_pp} Within the mean-field Gross-Pitaevskii formalism, the interaction between two identical bosonic dipoles, aligned along the space-fixed $\hat{z}$-axis by an external field, has been successfully modeled by the pseudo-potential $V_{pp}(\vec{r})$~\cite{yi00}, \begin{eqnarray} \label{eq_dipole} V_{pp}(\vec{r})= \frac{2 \pi \hbar^2}{\mu} a_{00} \delta(\vec{r})+ d^2 \frac{1-3 \cos^2 \theta}{r^3}. \end{eqnarray} Here, $\mu$ denotes the reduced mass of the two-dipole system, $d$ the dipole moment, and $\theta$ the angle between $\hat{z}$ and the relative distance vector $\vec{r}$. The $s$-wave scattering length $a_{00}$ depends on both the short- and long-range parts of the true interaction potential. The second term on the right hand side of Eq.~(\ref{eq_dipole}) couples angular momentum states with $l=l'$ ($l >0$) and $|l - l'| = 2$ (any $l,l'$). For identical fermions, $s$-wave scattering is absent and the interaction is described, assuming the long-range dipole-dipole interaction is dominant, by the second term on the right hand side of Eq.~(\ref{eq_dipole}). Our goal in this paper is to determine the eigenequation of two identical bosonic dipoles and two identical fermionic dipoles under external spherical harmonic confinement with angular trapping frequency $\omega$ analytically. The Schr\"odinger equation for the relative position vector $\vec{r}$ reads \begin{eqnarray} \label{eq_se} [H_0 + V_{int}(\vec{r}) ] \psi(\vec{r}) = E \psi (\vec{r}), \end{eqnarray} where the Hamiltonian $H_0$ of the non-interacting harmonic oscillator is given by \begin{eqnarray} \label{eq_ham} H_0 = -\frac{\hbar^2}{2 \mu} \nabla^2 _{\vec{r}} +\frac{1}{2} \mu \omega^2 r^2. \end{eqnarray} In Eq.~(\ref{eq_se}), $V_{int}(\vec{r})$ denotes the interaction potential. 
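As a quick illustration (ours; the quadrature scheme and names are arbitrary choices), one can verify numerically that the angular factor $1-3\cos^2\theta$ in Eq.~(\ref{eq_dipole}) is proportional to $P_2(\cos\theta)$ and averages to zero over the sphere, consistent with the selection rules just stated:

```python
# Small numerical illustration (ours): the angular factor 1 - 3 cos^2(theta)
# of the dipolar term averages to zero over the sphere and is proportional
# to the Legendre polynomial P_2(cos theta).  This is consistent with the
# stated selection rules: the 1/r^3 term carries no l = l' = 0 matrix
# element, so the contact term is needed to carry a_00.

def sphere_average(f, n=200_000):
    # midpoint rule in u = cos(theta); the azimuthal integral is trivial,
    # so the solid-angle average reduces to (1/2) * integral_{-1}^{1} f(u) du
    h = 2.0 / n
    return 0.5 * h * sum(f(-1.0 + (k + 0.5) * h) for k in range(n))

aniso = lambda u: 1.0 - 3.0 * u * u
p2 = lambda u: 0.5 * (3.0 * u * u - 1.0)

avg = sphere_average(aniso)                           # ~ 0: no s-wave piece
overlap = sphere_average(lambda u: aniso(u) * p2(u))  # nonzero l = 2 piece
print(avg, overlap)
```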
The pseudo-potential $V_{pp}(\vec{r})$ cannot be used directly in Eq.~(\ref{eq_se}) since both parts of the pseudo-potential lead to divergencies. The divergence of the $\delta$-function potential arises from the singular $1/r$ behavior at small $r$ of the spherical Neumann function $n_0(r)$, and can be cured by introducing the regularization operator $\frac{\partial}{\partial r} r$~\cite{huan57}. Curing the divergence of the long-ranged $1/r^3$ term of $V_{pp}$ is more involved, since it couples an infinite number of angular momentum states, each of which gives rise to a singularity in the $r \rightarrow 0$ limit. The nature of each of these singularities depends on the quantum numbers $l$ and $l'$ coupled by the pseudo-potential, and hence the divergence has to be cured separately for each $l$ and $l'$ combination. In this work, we follow Derevianko~\cite{dere03,dere05} and cure the divergencies by replacing $V_{pp}(\vec{r})$ with a regularized zero-range potential $V_{pp,reg}(\vec{r})$, which contains {\em{infinitely}} many terms, \begin{eqnarray} \label{eq_ppreg} V_{pp,reg}(\vec{r}) = \sum_{ll'} V_{ll'}(\vec{r}). \end{eqnarray} The sum in Eq.~(\ref{eq_ppreg}) runs over $l$ and $l'$ even for identical bosons, and over $l$ and $l'$ odd for identical fermions. For $l \ne l'$, $V_{ll'}$ and $V_{l'l}$ are different and both terms have to be included in the sum. In Sec.~\ref{sec_ho}, we apply the pseudo-potential to systems under spherically symmetric external confinement. For these systems, the projection quantum number $m$ is a good quantum number, i.e., the energy spectrum for two interacting dipoles under spherically symmetric confinement can be determined separately for each allowed $m$ value. Consequently, a separate pseudo-potential can be constructed for each $m$ value. In the following, we restrict ourselves to systems with vanishing projection quantum number $m$; the generalization of the pseudo-potential to general $m$ is discussed at the end of this section. 
The $V_{ll'}$ are defined through their action on an arbitrary $\vec{r}$-dependent function $\Phi(\vec{r})$~\cite{dere03,dere05}, \begin{eqnarray} \label{eq_llprime} V_{ll'}(\vec{r}) \Phi(\vec{r}) = g_{ll'} \frac{\delta(r)}{r^{l'+2}} Y_{l'0}(\theta,\phi) \times \nonumber \\ \left[ \frac{\partial^{2l+1}}{\partial r^{2l+1}} r^{l+1} \int Y_{l0}(\theta,\phi) \Phi(\vec{r}) d \Omega \right]_{r \rightarrow 0} \end{eqnarray} with \begin{eqnarray} \label{eq_ppregstrength} g_{ll'} = \frac{\hbar^2}{2 \mu} \frac{a_{ll'}}{k^{l+l'}} \frac{(2l+1)!! (2l' +1)!!}{(2l+1)!}, \end{eqnarray} where $k$ denotes the relative wave vector, $k=\sqrt{2 \mu E/\hbar^2}$, and the $a_{ll'}$ denote the generalized scattering lengths. Since we are restricting ourselves to $m=0$, the $V_{ll'}$ are written in terms of the spherical harmonics $Y_{lm}$ with $m=0$. When applying the above pseudo-potential, we treat a large number of terms in Eq.~(\ref{eq_ppreg}), and do not terminate the sum after the first three terms as done in Refs.~\cite{dere03,dere05,yi04}. We note that the non-Hermiticity of $V_{pp,reg}$ does not lead to problems when determining the energy spectrum; however, great care has to be taken when calculating, e.g., structural expectation values~\cite{reic06}. To understand the functional form of the zero-range pseudo-potential defined in Eqs.~(\ref{eq_ppreg}) through (\ref{eq_ppregstrength}), let us first consider the piece of Eq.~(\ref{eq_llprime}) in square brackets. If we decompose the incoming wave $\Phi(\vec{r})$ into partial waves, \begin{eqnarray} \Phi(\vec{r}) = \sum_{n_il_im_i} c_{n_il_im_i} Q_{n_il_i}(r) Y_{l_im_i}(\theta,\phi), \end{eqnarray} where the $c_{n_il_im_i}$ denote expansion coefficients and the $Q_{n_il_i}$ radial basis functions, the spherical harmonic $Y_{l0}$ in the integrand of $V_{ll'}$ acts as a projector or filter. After the integration over the angles, only those components of $\Phi(\vec{r})$ that have $l_i=l$ and $m_i=0$ survive. 
The operator $\frac{\partial^{2l+1}}{\partial r^{2l+1}} r^{l+1}$ in Eq.~(\ref{eq_llprime}) is designed to first cure the $r^{-l-1}$ divergencies of the $Q_{n_il}$, which arise in the $r \rightarrow 0$ limit, and to then ``extract'' the coefficients of the regular part of the $Q_{n_il}(r)$ that go as $r^{l}$~\cite{huan57}. Altogether, this shows that the square bracket in Eq.~(\ref{eq_llprime}) reduces to a constant when the $r \rightarrow 0$ limit is taken. To understand the remaining pieces of the pseudo-potential, we multiply Eq.~(\ref{eq_llprime}) from the left with $Q_{n_ol_o}^* Y^*_{l_om_o}$ and integrate over all space. The spherical harmonic $Y_{l'0}$ in Eq.~(\ref{eq_llprime}) then ensures that the integral is only non-zero when $l'=l_o$ and $m_o=0$. When performing the radial integration, the $\delta(r)/r^{l'}$ term ensures that the coefficients of the regular part of the $Q_{n_ol_o}$ that go as $r^{l_o}$ are being extracted (note that the remaining $1/r^2$ term cancels the $r^2$ in the volume element). Altogether, the analysis outlined in the previous paragraph shows that the functional form of $V_{ll'}$ ensures that the divergencies of the radial parts of the incoming and outgoing waves are cured in the $r \rightarrow 0$ limit and that the $l$th component of the incoming wave is scattered into the $l'$th partial wave. The sum over all $l$ and $l'$ values in Eq.~(\ref{eq_ppreg}) guarantees that any state with quantum number $l$ can be coupled to any state with quantum number $l'$, provided the corresponding generalized scattering length $a_{ll'}$ is non-zero. We note that the regularized pseudo-potential given by Eqs.~(\ref{eq_ppreg}) through (\ref{eq_ppregstrength}) is only appropriate if the external confining potential in Eq.~(\ref{eq_ham}) has spherical symmetry~\cite{idzi06}. 
Generalizations of the above zero-range pseudo-potential, aimed at treating interacting dipoles under elongated confinement, require the regularization scheme to be modified to additionally cure divergencies of cylindrically symmetric wave functions. These extensions will be the subject of future studies. We now discuss the generalized scattering lengths $a_{ll'}$, which determine the scattering strengths of the $V_{ll'}$. The $a_{ll'}$ have units of length and are defined through the K-matrix elements $K_{lm}^{l'm'}$~\cite{newt}, \begin{eqnarray} \label{eq_scatt} a_{ll'} = \lim_{k \rightarrow 0} \frac{-K_{l0}^{l'0}(k)}{k} \end{eqnarray} for $m=0$. The scattering lengths $a_{ll'}$ and $a_{l'l}$ are identical because the K-matrix is symmetric. In general, the scattering lengths $a_{ll'}$ have to be determined from the K-matrix elements for the ``true'' interaction potential, which contains the long-range dipolar and a short-ranged repulsive part, of two interacting dipoles. As discussed further in Sec.~\ref{sec_ho}, an approach along these lines is used to obtain the squares shown in Fig.~\ref{fig3}. Alternatively, it has been shown that the K-matrix elements (except for $K_{00}^{00}$, see below) for realistic potentials, such as for the Rb-Rb potential in a strong electric field~\cite{yi00} or an OH-OH model potential~\cite{rone06}, are approximated with high accuracy by the K-matrix elements for the dipolar potential only, calculated in the first Born approximation. Applying the Born approximation to the second term on the right hand side of Eq.~(\ref{eq_dipole}), we find for $m=0$ and $l=l'$ ($l \ge 1$) \begin{eqnarray} \label{eq_born1} a_{ll}= -\frac{2D_*}{(2l-1)(2l+3)}, \end{eqnarray} and for $m=0$ and $l=l' + 2$ \begin{eqnarray} \label{eq_born2} a_{l,l-2} = -\frac{D_*}{(2l-1) \sqrt{(2l+1)(2l-3)}}. \end{eqnarray} For $l=2$ and $l'=0$, e.g., Eq.~(\ref{eq_born2}) reduces to $a_{20}=-D_*/(3 \sqrt{5})$, in agreement with Ref.~\cite{dere03}. 
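The Born-approximation scattering lengths of Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}) are straightforward to tabulate. The sketch below (ours; function names are illustrative, and $D_*$ is set to $1$ so that all lengths are in units of the dipole length) reproduces $a_{20}=-D_*/(3\sqrt{5})$:

```python
import math

# Sketch (ours) of the Born-approximation generalized scattering lengths,
# in units of the dipole length D_* = mu d^2 / hbar^2, for m = 0.

def a_diag(l, dstar=1.0):
    # a_{ll} = -2 D_* / ((2l - 1)(2l + 3)),  l >= 1
    return -2.0 * dstar / ((2 * l - 1) * (2 * l + 3))

def a_offdiag(l, dstar=1.0):
    # a_{l,l-2} = -D_* / ((2l - 1) sqrt((2l + 1)(2l - 3))),  l >= 2
    return -dstar / ((2 * l - 1) * math.sqrt((2 * l + 1) * (2 * l - 3)))

# a_20 = -D_*/(3 sqrt(5)), as quoted in the text
print(a_offdiag(2))
# the couplings decay only slowly with increasing l
print([round(a_diag(l), 4) for l in range(1, 6)])
```

The slow decay of $|a_{ll}|$ with $l$ visible in the second line is the reason, noted above, that convergence with the angular momentum cutoff $l_{max}$ has to be assessed carefully.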
The scattering lengths $a_{l-2,l}$ are equal to $a_{l,l-2}$, and all other generalized scattering lengths are zero. In Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}), $D_*$ denotes the dipole length, $D_* = \mu d^2/\hbar^2$. All non-zero scattering lengths $a_{ll'}$ are negative, depend on $l$ and $l'$, and are directly proportional to $d^2$. Furthermore, for fixed $D_*$, the absolute value of the non-zero $a_{ll'}$ decreases with increasing angular momentum quantum number $l$, indicating that the coupling between different angular momentum channels decreases with increasing $l$. However, this decrease is quite slow and, in general, an accurate description of the two-dipole system requires that the convergence with increasing $l_{max}$ be assessed carefully. One can now readily show that the K-matrix elements $K_{l0}^{l'0}$ of $V_{pp,reg}$, calculated in the first Born approximation, with $a_{ll'}$ given by Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}) coincide with the K-matrix elements $K_{l0}^{l'0}$ of $V_{pp}$. This provides a simple check of the zero-range pseudo-potential construction and proves that the prefactors of $V_{ll'}$ are correct. In turn, this suggests that the applicability regimes of $V_{pp}$ and $V_{pp,reg}$ are comparable, if the generalized scattering lengths $a_{ll'}$ used to quantify the scattering strengths of $V_{ll'}$ are approximated by Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}). The applicability regime of $V_{pp,reg}$ may, however, be larger than that of $V_{pp}$ if the full energy-dependent K-matrix of a realistic potential is used instead. To generalize the zero-range pseudo-potential defined in Eqs.~(\ref{eq_ppreg}) through (\ref{eq_ppregstrength}) for projection quantum numbers $m=0$ to any $m$, only a few changes have to be made. In Eq.~(\ref{eq_llprime}), the spherical harmonics $Y_{l0}$ have to be replaced by $Y_{lm}$, and the generalized scattering lengths have to be defined through $\lim_{k \rightarrow 0} -K_{lm}^{l'm'}/k$. 
Correspondingly, Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}) become $m$-dependent. \section{Two dipoles under external confinement} \label{sec_ho} Section~\ref{sec_hoA} derives the implicit eigenequation for two dipoles interacting through the pseudo-potential under external harmonic confinement and Section~\ref{sec_hoB} analyzes the resulting eigenspectrum. \subsection{Derivation of the eigenequation} \label{sec_hoA} To determine the eigenenergies of two aligned dipoles with $m=0$ under spherical harmonic confinement interacting through the zero-range potential $V_{pp,reg}$, we expand the eigenfunctions $\Psi(\vec{r})$ in terms of the orthonormal harmonic oscillator eigenfunctions $R_{n_il_i}Y_{l_i0}$, \begin{eqnarray} \label{eq_expansion} \Psi(\vec{r}) = \sum_{n_il_i} c_{n_il_i} R_{n_il_i}(r) Y_{l_i0}(\theta,\phi). \end{eqnarray} The pseudo-potential $V_{pp,reg}$ enforces the proper boundary condition of $\Psi(\vec{r})$ at $r=0$, and thus determines the expansion coefficients $c_{n_il_i}$. To introduce the key ideas we first consider $s$-wave interacting particles~\cite{busc98}, for which the pseudo-potential reduces to a single term, and then consider the general case, in which the pseudo-potential contains infinitely many terms. Including only the term with $l=l'=0$ in Eq.~(\ref{eq_ppreg}), the Schr\"odinger equation becomes, \begin{eqnarray} \label{eq_swave} \sum_{n_il_i} c_{n_il_i} (E_{n_il_i} - E + V_{00}) R_{n_il_i}(r) Y_{l_i0}(\theta,\phi) =0, \end{eqnarray} where the $E_{n_il_i}$ denote the eigenenergies of the non-interacting harmonic oscillator, \begin{eqnarray} \label{eq_hoen} E_{n_il_i}=\left( 2 n_i + l_i + \frac{3}{2} \right) \hbar \omega. \end{eqnarray} In what follows, it is convenient to express the energy $E$ of the interacting system in terms of a non-integer quantum number $\nu$, \begin{eqnarray} \label{eq_nu} E = \left( 2 \nu + \frac{3}{2} \right) \hbar \omega. 
\end{eqnarray} Multiplying Eq.~(\ref{eq_swave}) from the left with $R^*_{n_ol_o}Y^*_{l_o0}$ with $l_o>0$ and integrating over all space, we find that the $c_{n_il_i}$ with $l_i>0$ vanish. This can be understood readily by realizing that the $s$-wave pseudo-potential $V_{00}$, as discussed in detail in Sec.~\ref{sec_pp}, only couples states with $l=l'=0$. To determine the expansion coefficients $c_{n_i0}$, we multiply Eq.~(\ref{eq_swave}) from the left with $R^*_{n_o0}Y^*_{00}$ and integrate over all space. This results in \begin{eqnarray} \label{eq_swave1} c_{n_o0} (2 n_o - 2 \nu) \hbar \omega + R_{n_o0}^*(0) g_{00} B_0 =0, \end{eqnarray} where $B_0$ denotes the result of the square bracket in Eq.~(\ref{eq_llprime}), \begin{eqnarray} \label{eq_swave2} B_0 = \left[ \frac{\partial}{\partial r} \left( r \sum_{n_i=0}^{\infty} c_{n_i0} R_{n_i0}(r) \right) \right]_{r \rightarrow 0}. \end{eqnarray} Note that $B_0$ is constant and independent of $n_i$. In Eq.~(\ref{eq_swave1}), the $r$-independent term $R_{n_o0}^*(0)$ arises from the radial integration over the $\delta$-function of the pseudo-potential. If we solve Eq.~(\ref{eq_swave1}) for $c_{n_o0}$ and plug the result into Eq.~(\ref{eq_swave2}), the unknown constant $B_0$ cancels and we obtain an implicit eigenequation for $\nu$, \begin{eqnarray} 1 = g_{00} \left[ \frac{\partial}{\partial r} \left( r \sum_{n_i=0}^{\infty} \frac{R_{n_i0}^*(0) R_{n_i0}(r)} {(2 \nu - 2 n_i) \hbar \omega} \right) \right]_{r \rightarrow 0}. \end{eqnarray} Using Eqs.~(\ref{eq_app1}) and (\ref{eq_app6}) from the Appendix to simplify the term in square brackets, we obtain the well-known implicit eigenequation for two particles interacting through the $s$-wave pseudo-potential under spherical harmonic confinement~\cite{busc98}, \begin{eqnarray} \label{eq_swavefinal} \frac{\Gamma \left( \frac{-E}{2 \hbar \omega}+\frac{1}{4} \right) } {2 \Gamma \left( \frac{-E}{2 \hbar \omega} + \frac{3}{4} \right)} - \frac{a_{00}}{a_{ho}} =0. 
\end{eqnarray} Here, $a_{ho}$ denotes the harmonic oscillator length, $a_{ho}=\sqrt{\hbar/(\mu \omega)}$. The derivation of the implicit eigenequation for two dipoles under external harmonic confinement interacting through the pseudo-potential with infinitely many terms proceeds analogously to that outlined above for the $s$-wave system. The key difference is that each $V_{ll'}$ term in Eq.~(\ref{eq_llprime}) with $l \ne l'$ couples states with different angular momenta, resulting in a set of coupled equations for the expansion coefficients $c_{n_il_i}$. However, since $V_{pp,reg}$ for dipolar systems couples only angular momentum states with $|l-l'| \le 2$ [see, e.g., the discussion at the beginning of Sec.~\ref{sec_pp} and around Eqs.~(\ref{eq_born1}) and (\ref{eq_born2})], the coupled equations can, as we outline in the following, be solved analytically by including successively more terms in $V_{pp,reg}$. To start with, we plug the expansion given in Eq.~(\ref{eq_expansion}) into Eq.~(\ref{eq_se}), where the interaction potential $V_{int}$ is now taken to be the pseudo-potential $V_{pp,reg}$ with infinitely many terms. To obtain the general equation for the expansion coefficients $c_{n_il_i}$, we multiply as before from the left with $R^*_{n_ol_o}Y^*_{l_o0}$ and integrate over all space, \begin{eqnarray} \label{eq_general1} c_{n_ol_o} (2 n_o + l_o - 2\nu) \hbar \omega + \left [\frac{R^*_{n_ol_o}(r)}{r^{l_o}} \right]_{r \rightarrow 0} \times \nonumber \\ \left[ g_{l_o-2,l_o} B_{l_o-2} + g_{l_ol_o} B_{l_o}+ g_{l_o+2,l_o} B_{l_o+2} \right]=0. \end{eqnarray} Here, the $B_{l_o-2}$, $B_{l_o}$ and $B_{l_o+2}$ denote constants that are independent of $n_i$, \begin{eqnarray} \label{eq_general2} B_{l_o} = \left[ \frac{\partial^{2l_o+1}}{\partial r^{2l_o+1}} \left\{ r^{l_o+1} \left( \sum_{n_i=0}^{\infty} c_{n_il_o} R_{n_il_o}(r) \right) \right\} \right]_{r \rightarrow 0}. 
\end{eqnarray} The three terms in the square bracket in the second line of Eq.~(\ref{eq_general1}) arise because the $V_{l'-2,l'}$, $V_{l'l'}$ and $V_{l'+2,l'}$ terms in the pseudo-potential $V_{pp,reg}$ couple the state $R^*_{n_ol_o}Y^*_{l_o0}$, for $l'=l_o$, with three components of the expansion for $\Psi$, Eq.~(\ref{eq_expansion}). Importantly, the constants $B_{l_o-2}$, $B_{l_o}$ and $B_{l_o+2}$, defined in Eq.~(\ref{eq_general2}), depend on the quantum numbers $l_o-2$, $l_o$ and $l_o+2$, respectively, which implies that Eq.~(\ref{eq_general1}) defines a set of infinitely many coupled equations that determine, together with Eq.~(\ref{eq_general2}), the expansion coefficients $c_{n_il_i}$. Notice that Eqs.~(\ref{eq_general1}) and (\ref{eq_general2}) coincide with Eqs.~(\ref{eq_swave1}) and (\ref{eq_swave2}) if we set $l_o=0$ and $g_{ll'}=0$ if $l$ or $l' > 0$. We now illustrate how Eqs.~(\ref{eq_general1}) and (\ref{eq_general2}) can be solved for identical bosons, i.e., in the case where $l$ and $l'$ are even (the derivation for identical fermions proceeds analogously). Our strategy is to solve these equations by including successively more terms in the coupled equations, or equivalently, in the pseudo-potential. As discussed above, if $a_{00}$ is the only non-zero scattering length, the eigenenergies are given by Eq.~(\ref{eq_swavefinal}). Next, we also allow for non-zero $a_{20}$, $a_{02}$ and $a_{22}$, i.e., we consider $l$ and $l' \le 2$ in Eq.~(\ref{eq_ppreg}). In this case, the coefficients $c_{n_i0}$ and $c_{n_i2}$ are non-zero and coupled, but all $c_{n_il_i}$ with $l_i>2$ are zero. Using the expressions for $B_0$ and $B_2$ given in Eq.~(\ref{eq_general2}), we decouple the equations. 
Finally, using Eqs.~(\ref{eq_app1}) and (\ref{eq_app6}) from the Appendix, the eigenequation can be compactly written as \begin{eqnarray} \label{eq_uptotwo} t_0+ \frac{q_2}{t_2}=0, \end{eqnarray} where \begin{eqnarray} \label{eq_tl} t_l = \frac{\Gamma( \frac{-E}{2 \hbar \omega} + \frac{1}{4} - \frac{l}{2})} {2^{2l+1} \Gamma(\frac{-E}{2 \hbar \omega} +\frac{3}{4} + \frac{l}{2} )} - (-1)^l \frac{a_{ll}}{k^{2l} a_{ho}^{2l+1}}, \end{eqnarray} and \begin{eqnarray} \label{eq_ql} q_l = -\frac{a_{l-2,l}^2}{k^{4l-4}a_{ho}^{4l-2}}. \end{eqnarray} Equation~(\ref{eq_uptotwo}) can be understood as follows. If only $a_{00}$ is non-zero, it reduces to $t_0=0$, in agreement with Eq.~(\ref{eq_swavefinal}). If only $a_{00}$, $a_{02}$ and $a_{20}$ are non-zero, Eq.~(\ref{eq_uptotwo}) remains valid if $a_{22}$ in $t_2$ is set to zero. This shows that the term $q_2$ and the first term on the right hand side of $t_2$ arise due to the coupling between states with angular momenta $0$ and $2$. The second term of $t_2$, in contrast, arises due to a non-zero $a_{22}$. Finally, for non-zero $a_{00}$ and $a_{22}$ but vanishing $a_{20}$ and $a_{02}$, Eq.~(\ref{eq_uptotwo}) reduces to $t_0t_2=0$. In this case, we recover the eigenequations $t_0=0$ for $s$-wave interacting particles~\cite{busc98} and $t_2=0$ for $d$-wave interacting particles~\cite{stoc04}. We now consider $l$ and $l'$ values with up to $l_{max}=4$ in Eq.~(\ref{eq_ppreg}), i.e., we additionally allow for non-zero $a_{24}$, $a_{42}$ and $a_{44}$, and discuss how the solution changes compared to the $l_{max}=2$ case. The equation for the expansion coefficients $c_{n_i0}$ remains unchanged while that for $c_{n_i2}$ is modified. Furthermore, the expansion coefficients $c_{n_i4}$ are no longer zero. Consequently, we have three coupled equations, which can be decoupled, resulting in the following implicit eigenequation, $t_0+q_2/(t_2+q_4/t_4)=0$. 
In analogy to the $l_{max}=2$ case, the $q_4$ term and the first part on the right hand side of the $t_4$ term arise due to the ``off-diagonal'' scattering lengths $a_{24}$ and $a_{42}$, and the second term of $t_4$ arises due to the ``diagonal'' scattering length $a_{44}$. Next, let us assume that we have found the implicit eigenequation for the case where we include terms in Eq.~(\ref{eq_ppreg}) with $l$ and $l'$ up to $l_{max}-2$. If we now include terms with $l$ and $l'$ up to $l_{max}$, only the equations for the expansion coefficients $c_{n_o l_o}$ with $l_o = l_{max}-2$ and $l_{max}$ change; those for the expansion coefficients $c_{n_ol_o}$ with $l_o \le l_{max}-4$ remain unchanged. This allows the $l_{max}/2+1$ coupled equations for the expansion coefficients to be decoupled analytically using the results already determined for the case where $l$ and $l'$ go up to $l_{max}-2$. Following this procedure, we find the following implicit eigenequation \begin{eqnarray} \label{eq_eigen} T_{l_{max}}=0, \end{eqnarray} where $T_{l_{max}}$ itself can be written as a continued fraction. For identical bosons we find, \begin{eqnarray} \label{eq_tlmax} T_{l_{max}} = t_0 + \frac{q_2}{t_2 + \frac{q_4}{t_4 + \cdots +\frac{q_{l_{max}}}{t_{l_{max}}}}}. \end{eqnarray} Taking $l_{max} \rightarrow \infty$ gives the eigenequation for two identical bosons under spherical harmonic confinement interacting through $V_{pp,reg}$ with infinitely many terms. For two identical fermions, Eqs.~(\ref{eq_tl}) through (\ref{eq_tlmax}) remain valid if the subscripts $0,2,\cdots$ in Eq.~(\ref{eq_tlmax}) are replaced by $1,3,\cdots$. The derived eigenequation reproduces the eigenenergies in the known limits. 
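As an illustration, the continued fraction of Eqs.~(\ref{eq_tl})--(\ref{eq_tlmax}) is straightforward to evaluate and root-solve numerically. The following is a minimal sketch in oscillator units ($\hbar\omega = a_{ho} = 1$); the dictionaries `abar` and `qbar`, standing for the dimensionless combinations $a_{ll}/(k^{2l}a_{ho}^{2l+1})$ and $q_l$, are assumed inputs, and the function names are ours, not part of the text.

```python
# Sketch: evaluate T_{lmax} of Eq. (eq_tlmax) for identical bosons and find
# its roots.  Oscillator units: hbar*omega = a_ho = 1.  `abar[l]` stands for
# the dimensionless combination a_ll/(k^{2l} a_ho^{2l+1}) and `qbar[l]` for
# q_l; both are assumed to be supplied by the user.
from scipy.special import gamma
from scipy.optimize import brentq

def t_l(E, l, abar_ll):
    # Eq. (eq_tl) in oscillator units
    return gamma(-E/2 + 0.25 - l/2) / (2**(2*l + 1) * gamma(-E/2 + 0.75 + l/2)) \
        - (-1)**l * abar_ll

def T_lmax(E, abar, qbar, lmax):
    # continued fraction t_0 + q_2/(t_2 + q_4/(t_4 + ...)), evaluated bottom-up
    ls = list(range(0, lmax + 1, 2))
    val = t_l(E, ls[-1], abar.get(ls[-1], 0.0))
    for l in reversed(ls[:-1]):
        val = t_l(E, l, abar.get(l, 0.0)) + qbar.get(l + 2, 0.0) / val
    return val

# Non-interacting check: with all strengths zero, the lowest root is E = 3/2.
E0 = brentq(lambda E: T_lmax(E, {}, {}, 4), 1.4, 1.6)
```

For two identical fermions, the same sketch applies with the index list running over odd $l$.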
For the non-interacting case (all $a_{ll'}=0$), the eigenenergies coincide with the eigenenergies of the harmonic oscillator, i.e., $E_{nl}= ( 2n + l + 3/2 ) \hbar \omega$, where $n=0,1,2,\cdots$ and $l=0,2,4,\cdots$ (in the case of identical bosons) and $l=1,3,\cdots$ (in the case of identical fermions). The $k$th level, with energy $(2k+ 3/2) \hbar \omega$ for bosons and $(2k+5/2) \hbar \omega$ for fermions, has a degeneracy of $k+1$, $k=0,1,\cdots$. Non-vanishing $a_{ll'}$ lead to a splitting of degenerate energy levels but leave the number of energy levels unchanged. If $a_{ll}$ is the only non-zero scattering length, the eigenequation reduces to that obtained for spherically symmetric pseudo-potentials with partial wave $l$~\cite{stoc04}. \subsection{Analysis of the energy spectrum} \label{sec_hoB} This section analyses the implicit eigenequation, Eq.~(\ref{eq_eigen}), derived in the previous section for the zero-range pseudo-potential for $m=0$ and compares the resulting energy spectrum with that obtained for a shape-dependent model potential. The implicit eigenequation, Eq.~(\ref{eq_eigen}), can readily be solved numerically by finding its roots in different energy regions. The solutions of the Schr\"odinger equation for the shape-dependent model potential are obtained by expanding the eigenfunctions on a B-spline basis. Lines in Figs.~\ref{fig1}(a) and (b) \begin{figure} \centering \includegraphics[angle=270,width=8cm]{fig1.ps} \caption{ Relative eigenenergies $E$ for (a) two identical bosonic dipoles and (b) two identical fermionic dipoles interacting through $V_{pp,reg}$ [using $a_{00}=0$ in (a)] under spherical harmonic confinement as a function of $D_*/a_{ho}$. The line style indicates the predominant character of the corresponding eigenstates.
In (a), a solid line refers to $l \approx 0$, a dashed line to $l \approx 2$, and a dotted line to $l \approx 4$; in (b), a solid line refers to $l \approx 1$, a dashed line to $l \approx 3$, and a dotted line to $l \approx 5$. } \label{fig1} \end{figure} show the eigenenergies obtained by solving Eq.~(\ref{eq_eigen}) for two identical bosons and two identical fermions, respectively, interacting through $V_{pp,reg}(\vec{r})$ under external spherically symmetric harmonic confinement as a function of the dipole length $D_*$. In both panels, we assume that the interaction between the two dipoles is purely dipolar, i.e., in Fig.~\ref{fig1}(a) we set $a_{00}=0$. The other scattering lengths $a_{ll'}$ are approximated by Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}). Interestingly, for identical bosons, the lowest gas-like level, which starts at $E=1.5 \hbar \omega$ for $D_*=0$, increases with increasing $D_*$. For identical fermions, in contrast, the lowest gas-like state decreases with increasing $D_*$. In addition to obtaining the eigenenergies themselves, the pseudo-potential treatment allows the spectrum to be classified in terms of angular momentum quantum numbers. To this end, we solve the implicit eigenequation, Eq.~(\ref{eq_eigen}), for increasing $l_{max}$, and monitor how the energy levels shift as additional angular momenta are included in $V_{pp,reg}$. Since a level with approximate quantum number $l$ changes only slightly as larger angular momentum values are included in the pseudo-potential, this analysis reveals the predominant character of each energy level. In Fig.~\ref{fig1}(a), the eigenfunctions of energies shown by solid, dashed and dotted lines have predominantly $l=0$, 2 and 4 character, respectively. In Fig.~\ref{fig1}(b), the eigenfunctions of energies shown by solid, dashed and dotted lines have predominantly $l=1$, 3 and 5 character, respectively.
We find that the lowest excitation frequency between states with predominantly $l=0$ [$l=1$] character increases [decreases] for identical bosons [fermions] with increasing $D_*$. These predictions can be tested directly in experiments. To assess the accuracy of the developed zero-range pseudo-potential treatment, we consider two interacting bosons with non-vanishing $s$-wave scattering length $a_{00}$. We imagine that the dipole moment of two identical polarized bosonic polar molecules is tuned by an external electric field. As the dipole moment $d$ is tuned, the $s$-wave scattering length $a_{00}$, which depends on the short-range and the long-range physics of the ``true'' interaction potential, changes. To model this situation, we solve the two-body Schr\"odinger equation, Eq.~(\ref{eq_se}), numerically for a shape-dependent model potential with hardcore radius $b$ and long-range dipolar tail. In this case, $V_{int}$ is given by \begin{eqnarray} \label{eq_model} V_{model}(\vec{r}) = \left\{ \begin{array}{ll} d^2\frac{1 - 3 \cos^2 \theta} {r^3} & \mbox{if $r \ge b$}\\ \infty & \mbox{if $r < b$} \end{array} \right. . \end{eqnarray} For $d=0$, the $s$-wave scattering length $a_{00}$ for $V_{model}$ is given by $b$. As the dipole length $D_*$ increases, $a_{00}$ goes through zero and becomes negative. Each time the two-body potential acquires a new bound state, $a_{00}$ goes through a resonance and becomes large and positive. As $D_*$ increases further, $a_{00}$ decreases. This resonance structure repeats itself with increasing $D_*$ (see Fig.~1 of Ref.~\cite{bort06}; note, however, that the lengths $a_{ho}$ and $D_*$ defined throughout the present work differ from those defined in Ref.~\cite{bort06}). For the model potential $V_{model}$, $a_{00}$ depends on the ratio between the short-range and long-range length scales, i.e., on $b/D_*$.
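The model potential of Eq.~(\ref{eq_model}) is simple to code up directly; a minimal sketch follows (the naming is ours).

```python
import numpy as np

def V_model(r, theta, d, b):
    # Hardcore-plus-dipolar-tail model potential, Eq. (eq_model):
    # r and theta are the relative distance and the polar angle with
    # respect to the polarization axis, d the dipole moment, b the
    # hardcore radius.
    if r < b:
        return np.inf
    return d**2 * (1.0 - 3.0 * np.cos(theta)**2) / r**3
```

For $\theta=0$ (head-to-tail) the tail is attractive, while for $\theta=\pi/2$ (side-by-side) it is repulsive, consistent with the angle dependence of the dipolar interaction discussed above.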
To compare the pseudo-potential energies and the energies for the model potential, we fix $b$ and calculate $a_{00}$ for each $D_*$ considered. The dipole-dependent $s$-wave scattering length is then used in the zero-range pseudo-potential $V_{pp,reg}$. The other scattering lengths are, as before, approximated by the expressions given in Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}). Solid lines in Fig.~\ref{fig2}(a) and (b) show the eigenenergies \begin{figure} \centering \includegraphics[angle=270,width=8cm]{fig2.ps} \caption{ Panel~(a) shows the relative energies $E$ for two aligned identical bosonic dipoles under external spherical harmonic confinement as a function of $D_*/a_{ho}$. Solid lines show the numerically determined energies obtained using $V_{model}$ with $b=0.0097 a_{ho}$. Crosses show the energies obtained using $V_{pp,reg}$ with essentially infinitely many terms, and $a_{00}$ calculated for $V_{model}$. Panel~(b) shows a blow-up of the energy region around $E \approx 5.5 \hbar \omega$. Note that the horizontal axes in (a) and (b) are identical. } \label{fig2} \end{figure} obtained for $V_{model}$ as a function of $D_*$. Crosses show the eigenenergies obtained for $V_{pp,reg}$ using a value of $l_{max}$ that results in converged eigenenergies. The overview spectrum in Fig.~\ref{fig2}(a) shows that one of the energy levels dives down to negative energies close to that $D_*$ value at which the two-body potential $V_{model}$ supports a new bound state. The blow-up, Fig.~\ref{fig2}(b), around $E \approx 5.5 \hbar \omega$ shows excellent agreement between the energies obtained using $V_{pp,reg}$ (crosses) and those obtained using $V_{model}$ (solid lines); the maximum deviation for the energy range shown is 0.05~\%. As before, we can assign approximate quantum numbers to each energy level. At $D_* \ll a_{ho}$, the three energy levels around $E \approx 5.5 \hbar \omega$ have, from bottom to top, approximate quantum numbers $l=2$, 4 and 0.
After two closely spaced avoided crossings around $D_* \approx 0.025 a_{ho}$, the assignment changes to $l=0$, 2 and 4 (again, from bottom to top). If the maximum angular momentum $l_{max}$ of the pseudo-potential were set to 2, the energy level with approximate quantum number $l=4$ would be absent entirely. This illustrates that a complete and accurate description of the energy spectrum requires the use of a zero-range pseudo-potential with infinitely many terms. The energy of a state with approximate quantum number $l$ requires $l_{max}$ to be at least $l$ for the correct degeneracy to be obtained and at least $l+2$ for a quantitative description. The sequence of avoided crossings at $D_* \approx 0.025 a_{ho}$ suggests an interesting experiment. Assume that the system is initially, at small electric field (i.e., small $D_*/a_{ho}$), prepared in the excited state with angular momentum $l \approx 0$ and $E \approx 5.52 \hbar \omega$. The electric field is then slowly swept across the first broad avoided crossing at $D_* \approx 0.019 a_{ho}$ to transfer the population from the state with $l \approx 0$ to the state with $l \approx 2$. We then suggest sweeping quickly across the second narrower avoided crossing at $D_* \approx 0.028 a_{ho}$ (the ramp speed must be chosen so as to minimize population transfer from the state with $l \approx 2$ to the state with $l \approx 4$). As in the case of $s$-wave scattering only~\cite{dunn04}, the time-dependent field sequence has to be optimized to obtain maximal population transfer. The proposed scheme promises to provide an efficient means for the transfer of population between states with different angular momenta and for quantum state engineering. Figure~\ref{fig2} illustrates that the pseudo-potential treatment reproduces the eigenenergies of the shape-dependent model potential $V_{model}$.
To further assess the validity of the pseudo-potential treatment, we now consider two interacting bosonic dipoles for which the dipolar interaction is dominant, i.e., we consider $a_{00}=0$. For $V_{model}$ with $b = 0.0031 a_{ho}$, we determine a set of $D_*$ values at which $a_{00}=0$. Note that the number of bound states with predominantly $s$-wave character increases by one for each successively larger $D_*$. Crosses in Figs.~\ref{fig3}(a)-(c) show the eigenenergies \begin{figure} \centering \includegraphics[angle=270,width=8cm]{fig3.ps} \caption{ Crosses show the relative eigenenergies $E$ as a function of $D_*/a_{ho}$ for two identical bosons with $a_{00}=0$ interacting through $V_{model}$ with $b = 0.0031 a_{ho}$ in three different energy regions. Lines show $E$ for two identical bosons with $a_{00}=0$ interacting through $V_{pp,reg}$ with $a_{ll'}$ given by Eqs.~(\protect\ref{eq_born1}) and (\protect\ref{eq_born2}). As in Fig.~\protect\ref{fig1}(a), solid, dashed and dotted lines show the energies of levels characterized by approximate quantum numbers $l \approx 0$, 2 and 4. The agreement between the crosses and the lines is good at small $D_*/a_{ho}$ but less good at larger $D_*/a_{ho}$. Squares show the eigenenergies obtained for the energy-dependent pseudo-potential at $D_*=0.242a_{ho}$; the agreement between the squares and the crosses is excellent, illustrating that usage of the energy-dependent K-matrix greatly enhances the applicability regime of $V_{pp,reg}$. } \label{fig3} \end{figure} for $V_{model}$ with $a_{00}=0$ as a function of $D_*$ in the energy ranges around $1.5$, $3.5$ and $5.5 \hbar \omega$. For comparison, lines show the eigenenergies obtained for the regularized pseudo-potential with $a_{00}=0$. As in Fig.~\ref{fig1}, the linestyle indicates the predominant character of the energy levels (solid line: $l \approx 0$; dashed line: $l \approx 2$; and dotted line: $l \approx 4$).
The agreement between the energies obtained for the pseudo-potential with $a_{ll'}$ given by Eqs.~(\ref{eq_born1}) and (\ref{eq_born2}) and for the model potential for small $D_*$ is very good, thus validating the applicability of the pseudo-potential treatment. The agreement becomes less good, however, as $D_*$ increases. This can be explained readily by realizing that the dipole length $D_*$ approaches the harmonic oscillator length $a_{ho}$. In general, the description of confined particles interacting through zero-range pseudo-potentials is justified if the characteristic lengths of the two-body potential are smaller than the characteristic length of the confining potential. For example, in the case of $s$-wave interactions only, the van der Waals length has to be smaller than the oscillator length~\cite{blum02,bold02}. The model potential $V_{model}$ is characterized by a short-range length scale, the hardcore radius $b$, and the dipole length $D_*$; in Fig.~\ref{fig3}, it is the relatively large value of $D_*/a_{ho}$ that leads, eventually, to a break-down of the pseudo-potential treatment. As in the case of spherical interactions, the break-down can be pushed to larger $D_*$ values by introducing energy-dependent generalized scattering lengths $a_{ll'}(k)$, defined through $-K_{l0}^{l'0}(k)/k$ for $m=0$, and by then solving the eigenequation, Eq.~(\ref{eq_eigen}), self-consistently~\cite{blum02,bold02}. Figure~\ref{fig4} shows three selected scattering lengths $a_{ll'}(k)$ for the model potential $V_{model}$ with $D_*=78.9b$ as a function of energy. This two-body potential supports eight bound states with projection quantum number $m=0$, which have predominantly $s$-wave character. Both energy and length in Fig.~\ref{fig4} are expressed in oscillator units to allow for direct comparison with the data shown in Fig.~\ref{fig3}. 
The scattering length $a_{00}(k)$, shown by a solid line in Fig.~\ref{fig4}, is zero at zero energy and increases with increasing energy. Both $a_{20}(k)$ (dashed line) and $a_{22}(k)$ (dash-dotted line) are negative. Their zero-energy values coincide with those calculated in the Born approximation (horizontal dotted lines). Using these energy-dependent $a_{ll'}(k)$ to parametrize the strengths of the pseudo-potential and solving the eigenequation, Eq.~(\ref{eq_eigen}), self-consistently, we obtain the squares in Fig.~\ref{fig3}. The energies for $V_{pp,reg}$ with {\em{energy-dependent}} $a_{ll'}$ (squares) are in much better agreement with the energies obtained for the model potential (crosses) than the energies obtained using the {\em{energy-independent}} $a_{ll'}$ to parametrize the pseudo-potential (lines). \begin{figure} \centering \includegraphics[angle=270,width=8cm]{fig4.ps} \vspace*{-0.6in} \caption{Energy-dependent scattering lengths $a_{00}(k)$ (solid line), $a_{20}(k)$ (dashed line) and $a_{22}(k)$ (dash-dotted line) for the model potential $V_{model}$ with $D_*=78.9b$ as a function of the relative energy $E$. In oscillator units, $V_{model}$ is characterized by $b=0.0031 a_{ho}$ and $D_*=0.242 a_{ho}$. For comparison, horizontal dotted lines show the energy-independent scattering lengths $a_{22}$, Eq.~(\protect\ref{eq_born1}), and $a_{20}$, Eq.~(\protect\ref{eq_born2}), calculated in the first Born approximation. } \label{fig4} \end{figure} This suggests that the applicability regime of the regularized zero-range pseudo-potential can be extended significantly by introducing energy-dependent scattering lengths. Since the proper treatment of resonant interactions within the regularized zero-range pseudo-potential requires that the energy-dependence of the generalized scattering lengths be included, future work will address this issue in more depth. 
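A minimal sketch of this self-consistent procedure, restricted for clarity to the $s$-wave channel of Eq.~(\ref{eq_swavefinal}) and written in oscillator units, is shown below; the function `a00_of_E` is a hypothetical stand-in for the energy-dependent scattering length extracted from the K-matrix of a realistic potential.

```python
# Self-consistency loop (sketch): evaluate the scattering length at the
# current relative energy, re-solve the eigenequation, and iterate until
# the energy no longer changes.  Energies in units of hbar*omega, a_ho = 1.
from scipy.special import gamma
from scipy.optimize import brentq

def t0(E, a00):
    # s-wave eigenequation, Eq. (eq_swavefinal)
    return gamma(-E/2 + 0.25) / (2.0 * gamma(-E/2 + 0.75)) - a00

def solve_self_consistent(a00_of_E, bracket, tol=1e-10, max_iter=100):
    E = 0.5 * (bracket[0] + bracket[1])
    for _ in range(max_iter):
        E_new = brentq(t0, bracket[0], bracket[1], args=(a00_of_E(E),))
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    raise RuntimeError("self-consistency loop did not converge")
```

For the full dipolar problem, the same loop would be wrapped around the continued-fraction eigenequation, with all $a_{ll'}(k)$ re-evaluated at each iteration.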
\section{Summary} \label{sec_conclusion} This paper applies a zero-range pseudo-potential treatment to describe two interacting dipoles under external spherical harmonic confinement. Section~\ref{sec_pp} introduces the regularized zero-range pseudo-potential $V_{pp,reg}$ used in this work, which was first proposed by Derevianko~\cite{dere03,dere05}. Particular emphasis is put on developing a simple interpretation of the individual pieces of the pseudo-potential. Furthermore, we clearly establish the connection between $V_{pp,reg}$ and the pseudo-potential $V_{pp}$, which is typically employed within a mean-field framework. We argue that the applicability regime of these two pseudo-potentials is comparable if the scattering strengths of $V_{pp,reg}$, calculated in the first Born approximation, are chosen so as to reproduce those of $V_{pp}$. We then use the regularized zero-range pseudo-potential to derive an implicit eigenequation for two dipoles under external confinement, a system which can be realized experimentally with the aid of optical lattices. In deriving the implicit eigenequation, we again put emphasis on a detailed understanding of how the solution arises, thus developing a greater understanding of the underlying physics. The implicit eigenequation can be solved straightforwardly, and allows for a direct classification scheme of the resulting eigenspectrum. By additionally calculating the eigenenergies for two dipoles interacting through a finite-range model potential numerically, we assess the applicability of the developed zero-range pseudo-potential treatment. We find good agreement between the two sets of eigenenergies for small $D_*$ and quantify the deviations as $D_*$ increases. Finally, we show that the validity regime of $V_{pp,reg}$ can be extended by parametrizing the scattering strengths of $V_{pp,reg}$ in terms of the energy-dependent K-matrix calculated for a realistic model potential.
This may also prove useful when describing resonantly interacting dipoles. At first sight it may seem counterintuitive to replace the {\em{long-range}} dipolar interaction by a {\em{zero-range}} pseudo-potential. However, if the length scales of the interaction potential, i.e., the van der Waals length scale characterizing the short-range part and the dipole length characterizing the long-range part of the potential, are smaller than the characteristic length of the trap $a_{ho}$, this approach is justified since the zero-range pseudo-potential is designed to reproduce the K-matrix elements of the ``true'' interaction potential. This is particularly true if the pseudo-potential is taken to contain infinitely many terms, as done in this work. In summary, this paper determines the eigenenergies of two interacting dipoles with projection quantum number $m=0$. The applied zero-range pseudo-potential treatment is validated by comparing the resulting eigenenergies with those obtained numerically for a shape-dependent model potential. The analysis presented sheds further light on the intricate properties of angle-dependent scattering processes and their description through a regularized zero-range pseudo-potential with infinitely many terms. The calculated energy spectrum may aid ongoing experiments on dipolar Bose and Fermi gases. Acknowledgements: KK and DB acknowledge support by the NSF through grant PHY-0555316 and JLB by the DOE. \section{Appendix} In this Appendix, we evaluate the following infinite sum, \begin{eqnarray} \label{eq_app1} C_l = \left[ \frac{\partial^{2l+1}}{\partial r^{2l+1}} \left\{ r^{l+1} \sum_{n=0}^{\infty} \frac{ \left[\frac{R^*_{nl}(r)}{r^l}\right]_{r\rightarrow0} R_{nl}(r)} {2 \left(\nu-n-\frac{l}{2} \right) \hbar \omega} \right\} \right]_{r \rightarrow 0}.
\end{eqnarray} Writing the radial harmonic oscillator functions $R_{nl}(r)$ in terms of the Laguerre polynomials $L_n^{(l+1/2)}$, \begin{eqnarray} \label{eq_app2} R_{nl}(r)= \sqrt{\frac{2^{l+2}}{(2l+1)!!\pi^{1/2}L_n^{(l+1/2)}(0) a_{ho}^{3}}} \times \nonumber \\ \exp \left(-\frac{r^2}{2a_{ho}^2} \right) \left( \frac{r}{a_{ho}} \right)^l L_n^{(l+1/2)}(r^2/a_{ho}^2), \end{eqnarray} we find \begin{eqnarray} \label{eq_app3} \left[\frac{R_{nl}(r)}{r^l}\right]_{r\rightarrow0}= \sqrt{\frac{2^{l+2}L_n^{(l+1/2)}(0)}{(2l+1)!!\pi^{1/2} a_{ho}^{2l+3}}}. \end{eqnarray} Using Eqs.~(\ref{eq_app2}) and (\ref{eq_app3}), the $C_l$ can be rewritten as \begin{eqnarray} \label{eq_app4} C_l= \frac{2^{l+1}}{(2l+1)!!\pi^{1/2} a_{ho}^{2l+3}} \times \nonumber \\ \left[\frac{\partial^{2l+1}}{\partial r^{2l+1}} \left( \exp\left(\frac{-r^2}{2a_{ho}^2}\right) r^{2l+1}\sum_{n=0}^{\infty}{\frac{L_n^{(l+1/2)}\left((\frac{r}{a_{ho}})^2 \right)} {\left(\nu -n-\frac{l}{2} \right) \hbar \omega}} \right) \right]_{r\rightarrow0}. \end{eqnarray} We evaluate the infinite sum in Eq.~(\ref{eq_app4}) using the properties of the generating function~\cite{abranote1}, \begin{eqnarray} \label{eq_app5} \sum_{n=0}^{\infty}{\frac{L_n^{(l+1/2)}((r/a_{ho})^2)}{\nu -n-\frac{l}{2}}}= \nonumber \\ -\Gamma(-\nu + l/2)U(-\nu + l/2, l+3/2, (r/a_{ho})^2). \end{eqnarray} Using Eq.~(\ref{eq_app5}) together with the small $r$ behavior of the hypergeometric function $U$~\cite{abranote2}, the expression for the $C_l$ reduces to \begin{eqnarray} \label{eq_app6} C_l = \frac{(-1)^l 2^{2l+2}(2l)!!}{(2l+1)!!} \frac{\Gamma \left(-\nu + \frac{l}{2} \right)} {\Gamma \left(-\nu-\frac{l+1}{2} \right)} \frac{1}{\hbar \omega a_{ho}^{2l+3}}. \end{eqnarray}
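As a consistency check, the normalization of the radial functions written in Eq.~(\ref{eq_app2}) can be verified numerically; the sketch below works in oscillator units ($a_{ho}=1$) and the naming is ours.

```python
import numpy as np
from scipy.special import eval_genlaguerre
from scipy.integrate import quad

def double_factorial(n):
    # (2l+1)!! appearing in the prefactor of Eq. (eq_app2)
    return 1 if n <= 0 else n * double_factorial(n - 2)

def R_nl(r, n, l):
    # Radial harmonic oscillator function as written in Eq. (eq_app2), a_ho = 1
    L0 = eval_genlaguerre(n, l + 0.5, 0.0)
    norm = np.sqrt(2.0**(l + 2) /
                   (double_factorial(2*l + 1) * np.sqrt(np.pi) * L0))
    return norm * np.exp(-r**2 / 2.0) * r**l * eval_genlaguerre(n, l + 0.5, r**2)

# int_0^infty R_nl(r)^2 r^2 dr should equal 1 for every (n, l)
norms = [quad(lambda r: R_nl(r, n, l)**2 * r**2, 0.0, np.inf)[0]
         for (n, l) in [(0, 0), (2, 1), (1, 2)]]
```

The same function, together with Eq.~(\ref{eq_app3}), fixes the $r \rightarrow 0$ prefactors entering the sum $C_l$.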
\section{Introduction} \label{sec:intro} Let $\ground$ be a field and fix an integer $m\geq 1$. Following \cite{ST}, we consider the path algebra $\ground\quiv$ of the quiver $\quiv$ with $m$ vertices $\{0,1,\ldots, m-1\}$ and $2m$ arrows given by $(a_i: i \rightarrow i+1)$ and $(\ol{a}_i: i+1 \rightarrow i)$ for $i\in\{0,1,\ldots,m-1\}$. The composition of arrows is ordered from left to right, for example $a_ia_{i+1}$ denotes the path $i \xrightarrow{a_i} i+1 \xrightarrow{a_{i+1}} i+2$. For the rest of the paper, we take all indices modulo $m$, that is, we identify the labeling set of the vertices with $\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$. \begin{figure}[H]\label{fig:quiver} \begin{center} \begin{tikzpicture}[scale=0.6] \begin{scope}[xshift=0cm] \path[use as bounding box] (-3.5,-1.5) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{50-\x*45}:2.35cm); \draw[thick, dotted] ({85-3*45}:2.5cm) arc ({85-3*45}:{50-4*45}:2.5cm); \draw ({90-1*45}:3cm) node {$i$}; \draw ({90-2*45}:3.35cm) node {$i+1$}; \draw (22:1.9cm) node {$\ol{a}_i$}; \draw (22:3.1cm) node {$a_i$}; \end{scope} \end{tikzpicture} \end{center} \vskip 5pt \caption{The quiver $\quiv$} \end{figure} For any integer $N \geq1$, we consider the quotient algebra $$\alg_N:=\ground\quiv/ \brac {a_i a_{i+1},\ \ol{a}_{i+1}\ol{a}_i,\ (a_i\ol{a}_i)^N - (\ol{a}_{i-1}a_{i-1})^N \mid i \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}}.$$ We note that the algebra $\alg_N$ is self-injective and special biserial. 
Special biserial algebras play an important role in the representation theory of algebras; for example, the study of Hochschild cohomology for blocks of group algebras with cyclic or dihedral defect groups is achieved using the fact that the basic algebras that appear are special biserial \cite{Ho1, Ho2}. As a consequence, several articles have been devoted in recent years to the computation of the Hochschild cohomology of self-injective special biserial algebras, see for example \cite{ES, X} and the references therein. The case $N=1$ has been studied in many different contexts, revealing interesting connections. In the special case where $m=1$, $\alg_1$ provides a counterexample to Happel's question, see \cite{BGSS}. When $m$ is even, $\alg_1$ appears in the presentation by quiver and relations of the Drinfeld double $\mathcal{D}(\Lambda_{h,d})$ of the Hopf algebra $\Lambda_{h,d}$, where $d \mid h$, $h$ is even and $\Lambda_{h,d}$ is the algebra given by an oriented cycle with $h$ vertices such that all paths of length $d$ are zero, see~\cite{EGST}. The quiver of $\mathcal{D}(\Lambda_{h,d})$ consists of $\frac{h^2}{d}$ isolated vertices and $\frac{h(d-1)}{2}$ copies of the quiver~$\quiv$ with $m=\frac{2h}{d}$ vertices, corresponding to the algebra $\alg_1$. For general $m$, the algebra $\alg_1$ also occurs in the study of the representation theory of $U_q(\mathfrak{sl}_2)$; see~\cite{ChK,P, Su}. Moreover, Gadbled, Thiel and Wagner describe three different gradings of $\alg_1$ in~\cite{GTW} -- we comment on these gradings in Section~\ref{brackets HH1} -- and prove that the homotopy category $\mathcal{C}_m$ of finitely generated trigraded projective modules over $\alg_1$ carries an action of the extended affine braid group of Dynkin type $\widehat{\mathsf{A}}_m$. The induced action of this braid group on the Grothendieck group of $\mathcal{C}_m$ is a $2$-parameter homological representation of the braid group of Dynkin type $\widehat{\mathsf{A}}_m$.
As mentioned in~\cite{GTW}, the algebra $\alg_1$ is a particular case of the general construction of algebras associated to graphs given in~\cite{HK}. For general $m$ and $N$, the algebra $\alg_N$ occurs in work of Farnsteiner and Skowro\'nski, where they determine the Hopf algebras associated to infinitesimal groups whose principal blocks are tame when $\ground$ is an algebraically closed field with characteristic greater than or equal to~$3$, see~\cite{FS, FS2}. Finally, Snashall and Taillefer prove in~\cite{ST} that the Hochschild cohomology $\hh^{\bullet}(\alg_N)$ of $\alg_N$ is finitely generated as an algebra over $\ground$. Their result allows the study of this graded commutative algebra with geometrical tools. In particular, they prove that the Hochschild cohomology algebra modulo nilpotents is, in this case, a finitely generated $\ground$-algebra and a commutative ring of Krull dimension $2$. \\ In this article, we provide a complete description of $\hh^{\bullet}(\alg_N)$ as a Gerstenhaber algebra. We assume that $m\geq 3$ and that the characteristic of $\ground$ does not divide $2$, $N$ or~$m$. The Gerstenhaber structure of an associative algebra was first considered in \cite{G}, and has been widely studied since then. However, the Gerstenhaber structure on the Hochschild cohomology of an algebra is very difficult to compute in concrete examples, since it is defined in terms of the bar resolution of the algebra. Recently, new computational methods have emerged, see for example \cite{NW, S, V}. In particular, the method proposed by Su\'arez-\'Alvarez in \cite{S} allows us to compute the Gerstenhaber brackets of two special elements $\varphi$ and $\psi$ in $\hh^{1}(\alg_N)$ with a $\ground$-basis of $\hh^{\bullet}(\alg_N)$. Specifically, we show that the derivations $[\varphi,-]$ and $[\psi,-]$ act diagonally on $\hh^{\bullet}(\alg_N)$ with respect to the basis given in~\cite{ST}.
By the Jacobi identity, brackets of elements in the eigenbasis remain eigenvectors, and we can compute the corresponding eigenvalues. This fact, together with the Poisson identity, allows us to compute the complete Gerstenhaber structure on $\hh^{\bullet}(\alg_N)$. Moreover, we believe this method could be useful for other families of algebras with similar properties, providing an innovative approach to the computation of Gerstenhaber brackets on Hochschild cohomology. Our computations provide a complete description of the Lie algebra structure of $\hh^{1}(\alg_N)$ and its Lie-action on $\hh^{\bullet}(\alg_N)$. Indeed, we find that: \begin{theorem*} Let $N \geq 1$ and $m\geq 3$ be integers. Suppose the characteristic of a field $\ground$ does not divide $2$, $N$ or~$m$. Let $\alg_N$ be the quotient algebra defined over $\ground$ as above. Then the first Hochschild cohomology space $\hh^1(\alg_N)$ is embedded as a Lie algebra into a direct sum of a one-dimensional central Lie algebra $\langle c \rangle$ and $m$ copies of a subquotient $\mathfrak{a}_{N-1}$ of the Virasoro algebra, which share Virasoro degree 0 and commute otherwise, as follows: \begin{align*} \hh^1(\alg_N)\ &\hookrightarrow \ \langle c \rangle \oplus (\mathfrak{a}_{N-1})^{\oplus m}. \end{align*} \end{theorem*} See Theorem~\ref{th:vir} for a precise statement. Note that when $N=1$, the Lie algebra $\hh^1(\alg_1)$ is commutative with two generators. It is only when $N>1$ that the Lie algebra becomes more complicated. It would be interesting to relate this Lie algebra structure to the representation theory of $\alg_N$, as has been done for blocks of group algebras in the modular case, see \cite{BKL}. \\ The paper is structured as follows: we recall some basic facts about the Gerstenhaber structure on Hochschild cohomology in~Section~\ref{sec:Hochschild}, and describe Su\'arez-\'Alvarez's method from~\cite{S} in~Section~\ref{sec:approach}. 
In~Sections~\ref{sec:resolution} and~\ref{cohomology}, we remind the reader of the description of the $\ground$-algebra $\hh^{\bullet}(\alg_N)$ given in~\cite{ST}. We compute the brackets of $\varphi$ and $\psi$ in $\hh^1(\alg_N)$ with the elements of a $\ground$-basis of $\hh^{\bullet}(\alg_N)$ in~Section~\ref{brackets HH1}. Section~\ref{general brackets} is devoted to the computation of all brackets among $\ground$-algebra generators of~$\hh^{\bullet}(\alg_N)$, providing the complete Gerstenhaber structure of $\hh^{\bullet}(\alg_N)$. In Section~\ref{E brackets}, we compute the Lie algebra structure of $\hh^{1}(\alg_N)$ and its Lie-action on $\hh^{n}(\alg_N)$ for all $n\geq 0$. We gather all these results in Section~\ref{sec:lie-structure}, where we describe the Lie algebra $\hh^{1}(\alg_N)$ in terms of the Virasoro algebra, and provide a decomposition of $\hh^{n}(\alg_N)$ into indecomposable modules as a Lie module over~$\hh^{1}(\alg_N)$. \\ \textbf{Acknowledgement:} This project was started at the ``Women In Algebra and Representation Theory'' workshop at Banff in March 2016. The authors thank Banff International Research Station, National Science Foundation, and the workshop organizers for this collaboration opportunity. The authors also thank Volodymyr Mazorchuk for helpful remarks. \section{Hochschild cohomology} \label{sec:Hochschild} Let $\alg$ be any associative algebra over $\ground$ and $\alg^e = \alg \otimes_\ground \alg\ensuremath{{}^\text{op}}$ be its enveloping algebra, where $\alg\ensuremath{{}^\text{op}}$ is $\alg$ with opposite multiplicative structure. The $n$-fold tensor product $A^{\otimes n}$ is a left $A^e$-module under left and right multiplication; equivalently, it is an $\alg$-$\alg$-bimodule. For $n\ge2$, it is free. 
There is an exact sequence of free left $\alg^e$-modules, called the \textbf{bar resolution} $\barcx(\alg)=(\alg^{\otimes(n+2)},\bard_n)_{n \geq 0}$, see for example~\cite{M}: $$(\barcx(\alg) \rightarrow A): \qquad \cdots \xrightarrow{\bard_3} \alg^{\otimes 4} \xrightarrow{\bard_2} \alg^{\otimes 3} \xrightarrow{\bard_1} \alg \otimes \alg \xrightarrow{\bard_0} \alg \rightarrow 0,$$ where $\bard_0$ is multiplication, and the differential maps $\bard_n$ are given by: $$\bard_n(a_0 \otimes a_1 \otimes \cdots \otimes a_{n+1}) = \sum_{i=0}^n (-1)^i a_0 \otimes a_1 \otimes \cdots \otimes a_ia_{i+1} \otimes \cdots \otimes a_{n+1},$$ for all $a_0, \ldots, a_{n+1} \in \alg$. Applying the functor $\hom{\alg^e}(-,\alg)$ to this bar resolution, one obtains a complex $\hom{\alg^e}(\barcx(\alg),\alg)$ with differentials $\bard_n^*(f) = f \circ \bard_n$, for any $f \in \hom{\alg^e}(\barcx_{n-1}(\alg),\alg)$, and $\bard_0^*$ is taken to be the zero map. The \textbf{$n$-th Hochschild cohomology} of $\alg$ is the $n$-th homology of this new complex $$\hh^n(\alg) := \coho_n(\hom{\alg^e}(\barcx(\alg),\alg)) = \ker(\bard_{n+1}^*)/ \ensuremath{\textnormal{im}}(\bard_n^*),$$ for all $n \geq 0$. It is well known that the Hochschild cohomology $\hh^{\bullet}(\alg) := \bigoplus_{n \geq 0} \hh^n(\alg)$ is a graded commutative ring via the cup product, that is, $xy = (-1)^{|x||y|} yx$. Here, $|x|=n$ denotes the homological degree of an element $x \in \hh^n(\alg)$. Moreover, in low degrees, it is well known that: \begin{itemize} \item $\hh^0(\alg) = \ker(\bard_1^*) \cong Z(\alg)$ is the center of the algebra $\alg$; \item $\hh^1(\alg)$ is the space of derivations of $\alg$ modulo its inner derivations; \item $\hh^2(\alg)$ is the space of equivalence classes of infinitesimal deformations of $\alg$, which plays an important role in the study of deformation theory of the algebra $\alg$ \cite{G2}. 
\end{itemize} Besides the associative product structure, $\hh^{\bullet}(\alg)$ also has a bracket operation $[-,-]$, called the Gerstenhaber bracket, of degree $(-1)$. This bracket gives $\hh^{\bullet}(\alg)$ the structure of a graded Lie ring and makes $\hh^{\bullet}(\alg)$ into a Gerstenhaber algebra \cite{G}. The bracket operation is defined at the chain level on the bar resolution as follows. Let $f \in \hom{\alg^e}(\barcx_n(\alg), \alg)$ and $g \in \hom{\alg^e}(\barcx_q(\alg), \alg)$ represent elements in $\hh^n(\alg)$ and $\hh^q(\alg)$ respectively. Their bracket $[f, g]$ is defined as an element of $\hom{\alg^e}(\barcx_{n+q-1}(\alg), \alg)$ given by \[ [f, g] = f \circ g - (-1)^{(n-1)(q-1)} g \circ f, \] where the circle product $f \circ g$ is defined by \begin{align*} (f & \circ g) (1 \otimes a_1 \otimes \dots \otimes a_{n+q-1}\otimes 1) = \\ &\sum^n_{i=1} (-1)^{(q-1)(i-1)} f(1 \otimes a_1 \otimes \dots \otimes a_{i-1}\otimes g(1 \otimes a_i \otimes \cdots \otimes a_{i+q-1} \otimes 1) \otimes a_{i+q} \otimes \cdots \otimes a_{n+q-1} \otimes 1), \end{align*} and similarly for $g \circ f$. The Gerstenhaber bracket satisfies the following properties: \begin{enumerate} \item Antisymmetry: $[x,y] = -(-1)^{(|x|-1)(|y|-1)} [y,x].$ \item Poisson identity: $[xy,z] = [x,z]y + (-1)^{|x|(|z|-1)} x[y,z].$ \item Jacobi identity: $$(-1)^{(|x|-1)(|z|-1)}[x,\, [y,z]] + (-1)^{(|y|-1)(|x|-1)}[y,\, [z,x]] + (-1)^{(|z|-1)(|y|-1)}[z,\, [x,y]] = 0.$$ \end{enumerate} Gerstenhaber brackets are in general difficult to compute due to the complexity of the bar resolution. One traditional approach is to construct explicit comparison maps to translate the brackets from the bar resolution to a more computationally friendly resolution. Such comparison maps are, in general, rather complicated to find. Recent progress has been made to define the Gerstenhaber brackets directly on \textit{any} projective resolution, see e.g.~\cite{NW, S, V}. 
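In the lowest degrees the bracket recovers a familiar operation; as a quick illustration of the formulas above, take $n=q=1$ and identify a $1$-cocycle $f$ with the derivation $a\mapsto f(1\otimes a\otimes 1)$. The circle product then reduces to composition of derivations, and
$$[f,g](1\otimes a\otimes 1)\ =\ f(g(a))-g(f(a)),$$
so on $\hh^1(\alg)$ the Gerstenhaber bracket is the usual commutator of derivations; in particular, $\hh^1(\alg)$ is a Lie algebra in the ordinary (ungraded) sense.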
We will investigate the Gerstenhaber structure of the Hochschild cohomology $\hh^{\bullet}(\alg)$ for our algebra $\alg:=\alg_N$ defined in~Section~\ref{sec:intro}. In order to compute the Gerstenhaber brackets $[\hh^1(\alg),-]$, we use a technique introduced in \cite{S} by Su\'{a}rez-\'{A}lvarez, which we discuss in the next section. \section{The Gerstenhaber bracket: an approach by Su\'{a}rez-\'{A}lvarez} \label{sec:approach} In this section, we consider a $\ground$-algebra $B$ with a derivation $\delta: B\rightarrow B$. If $W$ is a left $B$-module, a \textbf{$\delta$-operator} on $W$ is a $\ground$-linear map $f:W\rightarrow W$ such that $$f(bw)=\delta(b)w+bf(w),$$ for all $b\in B$ and $w\in W$. If $\epsilon:P_{\bullet}\twoheadrightarrow W$ is a projective resolution of $W$, $$\begin{tikzcd} \cdots \arrow{r} & P_2 \arrow{r}{d_2} & P_1 \arrow{r}{d_1} & P_0 \arrow{r}{\epsilon} & W \arrow{r} & 0, \end{tikzcd}$$ then a \textbf{$\delta$-lifting} of $f$ to $P_{\bullet}$ is a sequence $f_{\bullet}=(f_n)_{n\geq 0}$ of $\delta$-operators $f_n:P_n\rightarrow P_n$, such that the following diagram commutes: $$\begin{tikzcd} \cdots \arrow{r} & P_2 \arrow{r}{d_2}\ \arrow{d}{f_2} & P_1 \arrow{r}{d_1} \arrow{d}{f_1} & P_0 \arrow{r}{\epsilon} \arrow{d}{f_0} & W \arrow{r} \arrow{d}{f} & 0\\ \cdots \arrow{r} & P_2 \arrow{r}{d_2} & P_1 \arrow{r}{d_1} & P_0 \arrow{r}{\epsilon} & W \arrow{r} & 0. \end{tikzcd}$$ In~\cite{S}, Su\'{a}rez-\'{A}lvarez proves that $\delta$-liftings exist and are unique up to an equivalence: \begin{lemma}\cite[Lemma 1.4]{S} Let $W$ be a left $B$-module and $\epsilon:P_{\bullet}\twoheadrightarrow W$ be a projective resolution of $W$. If $f$ is a $\delta$-operator on~$W$, there exists a $\delta$-lifting $f_{\bullet}:P_{\bullet}\rightarrow P_{\bullet}$ of $f$ to $P_{\bullet}$. Moreover, if $f_{\bullet}$ and $f'_{\bullet}$ are both $\delta$-liftings of $f$ to $P_{\bullet}$, then $f_{\bullet}$ and $f'_{\bullet}$ are $B$-linearly homotopic. 
\end{lemma} Now, suppose $f:W\rightarrow W$ is a $\delta$-operator with $\delta$-lifting $f_{\bullet}:P_{\bullet}\rightarrow P_{\bullet}$ of $f$ to $P_{\bullet}$. Given $n\geq 0$ and $\phi \in \hom{B}(P_n,W)$, there is a $B$-linear map $f_n^{\#}(\phi):P_n\rightarrow W$ defined by setting $$f_n^{\#}(\phi)(p)=f(\phi(p))- \phi(f_n(p)),$$ for $p\in P_n$. The resulting morphism $$f_n^{\#}:\hom{B}(P_n,W)\rightarrow \hom{B}(P_n,W)$$ is an endomorphism of the complex $\hom{B}(P_{\bullet},W)$. In fact, the induced map on cohomology $$\Delta_{f,P_{\bullet}}^{\bullet}: \coho(\hom{B}(P_{\bullet},W))\rightarrow\coho(\hom{B}(P_{\bullet},W))$$ only depends on $f$ and not on the choice of the lifting $f_{\bullet}$: \begin{thm}\label{thm:Delta}\cite[Theorem A]{S} Let $W$ be a left $B$-module and $f$ be a $\delta$-operator on~$W$. There is a canonical morphism $$\Delta_f^{\bullet}:\ext{B}^{\bullet}(W,W)\rightarrow \ext{B}^{\bullet}(W,W)$$ of graded vector spaces, such that for every projective resolution $\epsilon:P_{\bullet}\twoheadrightarrow W$ and each $\delta$-lifting $f_{\bullet}:P_{\bullet}\rightarrow P_{\bullet}$ of $f$ to $P_{\bullet}$, the following diagram commutes: $$\begin{tikzcd} \coho(\hom{B}(P_{\bullet},W))\arrow{r}{\Delta_{f,P_{\bullet}}^{\bullet}} \arrow{d}{\cong} &\coho(\hom{B}(P_{\bullet},W))\arrow{d}{\cong}\\ \ext{B}^{\bullet}(W,W)\arrow{r}{\Delta_f^{\bullet}} &\ext{B}^{\bullet}(W,W). \end{tikzcd}$$ \end{thm} In what follows, let us consider an algebra $\alg$ and take $B=\alg^e$ and $W=A$ as a left $\alg^e$-module. In particular, $\alg$ can be taken to be the special biserial algebra $\alg_N$ defined in~Section~\ref{sec:intro}. It is well known that any element $f\in \hh^1(\alg)$ can be represented by a derivation $f:\alg \rightarrow \alg$. We can use the above results to compute the Gerstenhaber bracket $[f, -]$ on $\hh^{\bullet}(\alg)$. 
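To illustrate these maps in the simplest case, take $B=\alg^e$, $W=\alg$, let $f$ be a derivation of $\alg$, set $\delta=f\otimes\mathrm{id}+\mathrm{id}\otimes f$ on $\alg^e$, and (as one checks directly from the Leibniz rule) take $f_0:=f\otimes\mathrm{id}+\mathrm{id}\otimes f$ as a $\delta$-operator on $P_0=\alg\otimes\alg$ lifting $f$ through the augmentation. A central element $z\in Z(\alg)\cong\hh^0(\alg)$ is represented by the cocycle $\phi:a\otimes b\mapsto azb$, and
$$f_0^{\#}(\phi)(a\otimes b)\ =\ f(azb)-\phi\bigl(f(a)\otimes b + a\otimes f(b)\bigr)\ =\ af(z)b,$$
so $\Delta_f^{0}(z)=f(z)$, recovering the familiar formula $[f,z]=f(z)$ for the bracket of a derivation with a central element.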
The first step is to construct a derivation on $\alg^e$, $$f^{e}:=f\otimes\mathrm{id}_\alg+\mathrm{id}_\alg\otimes f:\alg^e\longrightarrow \alg^e,$$ and note that $f:\alg\rightarrow \alg$ is an $f^{e}$-operator on the $\alg^e$-module~$\alg$. We can thus consider the $f^{e}$-lifting of $f$ to the bar resolution $\barcx(\alg)$ of~$\alg$ and get a morphism of complexes $$f_{\bullet}^{\#}:\hom{\alg^e}(\barcx(\alg),\alg )\rightarrow \hom{\alg^e}(\barcx(\alg),\alg ).$$ \begin{lemma}\cite[\S2.2]{S}\label{lem:SA} The morphism $f_{\bullet}^{\#}$ describes the action of the bracket $[f, -]$ on the complex $\hom{\alg^e}(\barcx(\alg),\alg)$. In particular, the Gerstenhaber bracket $[f, -]$ on $\hh^{\bullet}(\alg)$ is given by $$\Delta_{f,\barcx}^{\bullet}:\hh^{\bullet}(\alg)\longrightarrow\hh^{\bullet}(\alg).$$ \end{lemma} \begin{rem}\label{rem:SA} In later sections, we will make use of the following two general observations: \begin{enumerate} \item By~Theorem~\ref{thm:Delta}, we can use any projective $A^{e}$-resolution $P_{\bullet}$ of $\alg$ to compute the bracket $[f,-]$, provided that we are able to construct an $f^e$-lifting of $f$ to~$P_{\bullet}$. \item Let $A$ be a $\ensuremath{{\mathbb Z}}$-graded $\ground$-algebra with $\ensuremath{{\mathbb Z}}$-grading $\mathsf{deg}$, and denote the induced grading on $A^e$ also by $\mathsf{deg}$. Write $\delta_{\mathsf{deg}}$ for the Eulerian derivation on $A$ defined by setting $\delta_{\mathsf{deg}}(a)=\mathsf{deg}(a)a$ for any homogeneous element $a\in A$. Suppose $P_\bullet \twoheadrightarrow A$ is a graded projective resolution of $A$ by $A^e$-modules; that is, every $P_n$ is a graded $A^e$-module and the differential maps preserve the grading. Then there is a $(\delta_{\mathsf{deg}})^e$-lifting $(\delta_{\mathsf{deg}})_{\bullet}$ of $\delta_{\mathsf{deg}}$, defined by setting $$ (\delta_{\mathsf{deg}})_n(p)\ =\ \mathsf{deg}_n(p)p$$ for homogeneous elements $p\in P_n$, where we denote the grading on $P_n$ by $\mathsf{deg}_n$. 
\end{enumerate} \end{rem} \section{A minimal projective bimodule resolution for $\alg$} \label{sec:resolution} \begin{notation} Throughout the paper, we fix integers $N\geq 1$ and $m\geq 3$. We consider a field $\ground$ and assume the characteristic of $\ground$ does not divide $2$, $N$ or~$m$. We let $$A:=\alg_N=\ground\quiv/ \brac {a_i a_{i+1},\ \ol{a}_{i+1}\ol{a}_i,\ (a_i\ol{a}_i)^N - (\ol{a}_{i-1}a_{i-1})^N \mid i \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}},$$ where $\quiv$ is the quiver described in the~Introduction (Figure~\ref{fig:quiver}). Note that we identify the labeling set of the vertices of $\quiv$ with $\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$. \end{notation} In \cite{ST}, Snashall and Taillefer construct a minimal projective $A^{e}$-resolution $(P_{\bullet},d_{\bullet})$ for $\alg$, and use it to compute the Hochschild cohomology $\hh^{\bullet}(\alg)$. In what follows, we describe~$P_{\bullet}$. For every $n\geq 0$, $i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ and $0\leq r\leq n$, we define an element $\g{r,i}{n}\in \ground\quiv$. We set $\g{0,i}{0}=e_i$, the trivial path at vertex $i$, and for $n\geq 1$ we define $$\g{r,i}{n}= \begin{cases} \g{r,i}{n-1}a + (-1)^n \g{r-1,i}{n-1}\ol{a}(a\ol{a})^{N-1} & \mbox{ if } n-2r>0, \\ \g{r,i}{n-1}a(\ol{a}a)^{N-1} + (-1)^n \g{r-1,i}{n-1}\ol{a} & \mbox{ if } n-2r<0, \\ \g{r,i}{n-1}a(\ol{a}a)^{N-1} + \g{r-1,i}{n-1}\ol{a}(a\ol{a})^{N-1} & \mbox{ if } n=2r, \end{cases}$$ where $\g{-1,i}{n}=\g{n,i}{n-1}=0$ by convention. In the above formulas, the indices of the $a$ and $\ol{a}$ arrows are chosen uniquely such that $g^n_{r,i}$ is nonzero in $\ground\quiv$. We also write $\bas{n}:=\{\g{r,i}{n}\mid i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},\ 0\leq r\leq n\}$. 
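To make the recursion concrete, the first two steps read as follows (the arrow indices being forced by nonvanishing, and the convention $\g{-1,i}{n}=\g{n,i}{n-1}=0$ suppressing the missing terms):
\begin{align*}
\g{0,i}{1}&=e_ia=a_i, & \g{1,i}{1}&=-e_i\ol{a}=-\ol{a}_{i-1},\\
\g{0,i}{2}&=\g{0,i}{1}a=a_ia_{i+1}, & \g{2,i}{2}&=\g{1,i}{1}\ol{a}=-\ol{a}_{i-1}\ol{a}_{i-2},\\
\g{1,i}{2}&=\g{1,i}{1}a(\ol{a}a)^{N-1}+\g{0,i}{1}\ol{a}(a\ol{a})^{N-1}=(a_i\ol{a}_i)^N-(\ol{a}_{i-1}a_{i-1})^N, &&
\end{align*}
so that, up to reindexing and signs, $\bas{2}$ consists precisely of the relations defining $\alg$, as expected for the second term of a minimal resolution.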
In particular, $\ground \bas{0}=E$, the subalgebra of $\ground\quiv$ generated by all the trivial paths $e_i$'s, and $$\begin{array}{lcl} \bas{1} & = & \{a_i,\ -\ol{a}_i\mid i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}} \}, \\ \bas{2} & = & \{a_i a_{i+1},\ -\ol{a}_{i+1}\ol{a}_{i},\ (a_i\ol{a}_i)^N - (\ol{a}_{i-1}a_{i-1})^N\mid i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}} \} . \end{array}$$ Now, $\ground \bas{n}$ is an $E^e$-module for every $n\geq 0$, and we can consider the $A^{e}$-module $$P_n:=\alg\otimes_E \ground \bas{n}\otimes_E \alg.$$ As usual, the augmentation map $\epsilon:P_0=\alg\otimes_E \alg\rightarrow \alg$ is induced by multiplication on~$\alg$. For $n\geq 1$, the differential $d_n:P_n\rightarrow P_{n-1}$ is the $\alg^e$-linear map that sends $1\otimes\g{r,i}{n}\otimes 1$ to $$\begin{cases} 1 \otimes \g{r,i}{n-1}\otimes a + (-1)^{n+r} a \otimes \g{r,i+1}{n-1}\otimes 1 & \\ \quad + (-1)^{n+r} \ol{a}(a\ol{a})^{N-1}\otimes \g{r-1,i-1}{n-1}\otimes 1 + (-1)^{n} 1 \otimes \g{r-1,i}{n-1}\otimes \ol{a}(a\ol{a})^{N-1} & \mbox{if } n-2r>0, \\ \ \\ 1 \otimes \g{r,i}{n-1}\otimes a(\ol{a}a)^{N-1} + (-1)^{n+r} a(\ol{a}a)^{N-1} \otimes \g{r,i+1}{n-1}\otimes 1 & \\ \quad + (-1)^{n+r} \ol{a}\otimes \g{r-1,i-1}{n-1}\otimes 1 + (-1)^{n} 1 \otimes \g{r-1,i}{n-1}\otimes \ol{a} & \mbox{if } n-2r<0,\\ \ \\ \sum_{k=0}^{N-1} \; \Big( (\ol{a}a)^{k} \otimes\g{r,i}{n-1}\otimes a (\ol{a}a)^{N-1-k} +(-1)^{\frac{n}{2}}(a\ol{a})^{k}a\otimes\g{r,i+1}{n-1}\otimes (a\ol{a})^{N-1-k} & \\ \quad + (-1)^{\frac{n}{2}}(\ol{a}a)^{k}\ol{a}\otimes\g{r-1,i-1}{n-1}\otimes (\ol{a}a)^{N-1-k} +(a\ol{a})^{k}\otimes \g{r-1,i}{n-1}\otimes \ol{a}(a\ol{a})^{N-1-k} \Big) & \mbox{if } n=2r. 
\end{cases}$$ In particular, the differential $d_1:P_1\rightarrow P_0=A\otimes_E A$ is defined by sending $$\left\lbrace \begin{array}{lll} 1\otimes\g{0,i}{1}\otimes 1=1\otimes a_i\otimes 1 & \mbox{to} & 1\otimes a_i-a_i\otimes 1,\\ 1\otimes\g{1,i}{1}\otimes 1=-1\otimes \ol{a}_{i-1}\otimes 1 & \mbox{to} & -1\otimes \ol{a}_{i-1}+\ol{a}_{i-1}\otimes 1. \end{array}\right. $$ \begin{thm}\cite[Theorem 1.6]{ST} Let $N\geq 1$. The complex $(P_{\bullet},d_{\bullet})$ is a minimal projective resolution for $\alg$ as an $\alg^e$-module. \end{thm} \begin{rem}\label{rem:N1} We observe that when $N=1$, every $\g{r,i}{n} \in \ground\quiv$ is a $\ground$-linear combination of all the paths $p$ of length $n$ that start at the vertex $i$, such that $p$ contains exactly $r$ arrows of the form $\ol a$ (see Figure~2). In this case, the differential $d_n:P_n\rightarrow P_{n-1}$ maps $1\otimes\g{r,i}{n}\otimes 1$ to $$1\otimes \g{r,i}{n-1}\otimes a_{i+n-2r-1} +(-1)^{n+r} a_i\otimes \g{r,i+1}{n-1}\otimes 1 +(-1)^{n+r} \ol{a}_{i-1}\otimes \g{r-1,i-1}{n-1}\otimes 1 +(-1)^n 1\otimes \g{r-1,i}{n-1}\otimes \ol{a}_{i+n-2r}.$$ \end{rem} \begin{rem} To better illustrate the notation $\g{r,i}{n}$ in the resolution $P_{\bullet}$ given above, we provide the terms appearing in $\g{1,2}{4}$ when $N=1$ and when $N=2$ in Figures~2 and 3, respectively. 
\end{rem} \begin{figure}[H]\label{fig:paths} \begin{center} \begin{tikzpicture}[scale=1] \begin{scope}[xshift=-6.2cm, scale=0.55] \path[use as bounding box] (-3.5,-3.5) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-2*45}:3cm) arc ({85-2*45}:{50-4*45}:3cm); \draw[thick, red, <-] ({85-4*45}:3.3cm) arc ({85-4*45}:{50-4*45}:3.3cm); \draw[thick, red] ({50-4*45}:3.3cm) arc ({410-4*45}:{230-4*45}:0.15cm); \draw[red] (270:4.4cm) node {$a_2 a_3 a_4 \ol{a}_4$}; \end{scope} \begin{scope}[xshift=-2.0cm, scale=0.55] \path[use as bounding box] (-3.5,-3.8) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-2*45}:3cm) arc ({85-2*45}:{50-3*45}:3cm); \draw[thick, red] ({85-3*45}:3.3cm) arc ({85-3*45}:{50-3*45}:3.3cm); \draw[thick, red, ->] ({85-3*45}:3.6cm) arc ({85-3*45}:{50-3*45}:3.6cm); \draw[thick, red] ({50-3*45}:3.3cm) arc ({410-3*45}:{230-3*45}:0.15cm); \draw[thick, red] ({85-3*45}:3.6cm) arc ({85-3*45}:{265-3*45}:0.15cm); \draw[red] (270:4.4cm) node {$a_2 a_3 \ol{a}_3 a_3 $}; \end{scope} \begin{scope}[xshift=2.0cm, scale=0.55] \path[use as bounding box] (-3.5,-3.5) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in 
{5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-2*45}:3cm) arc ({85-2*45}:{50-2*45}:3cm); \draw[thick, red] ({85-2*45}:3.3cm) arc ({85-2*45}:{50-2*45}:3.3cm); \draw[thick, red, ->] ({85-2*45}:3.6cm) arc ({85-2*45}:{50-3*45}:3.6cm); \draw[thick, red] ({50-2*45}:3.3cm) arc ({410-2*45}:{230-2*45}:0.15cm); \draw[thick, red] ({85-2*45}:3.6cm) arc ({85-2*45}:{265-2*45}:0.15cm); \draw[red] (270:4.4cm) node {$a_2 \ol{a}_2 a_2 a_3 $}; \end{scope} \begin{scope}[xshift=6.2cm, scale=0.55] \path[use as bounding box] (-3.5,-3.5) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-1*45}:3cm) arc ({85-1*45}:{50-1*45}:3cm); \draw[thick, red, ->] ({85-1*45}:3.3cm) arc ({85-1*45}:{50-3*45}:3.3cm); \draw[thick, red] ({85-1*45}:3.3cm) arc ({85-1*45}:{265-1*45}:0.15cm); \draw[red] (270:4.4cm) node {$\ol{a}_1 a_1 a_2 a_3 $}; \end{scope} \end{tikzpicture} \end{center} \vskip 7pt \caption{The paths that appear in $\g{1,2}{4}= a_2 a_3 a_4 \ol{a}_4 - a_2 a_3 \ol{a}_3 a_3 + a_2\ol{a}_2 a_2 a_3 - \ol{a}_1 a_1 a_2 a_3\in \ground\quiv$ when $N=1$, $m=8$.} \end{figure} \begin{figure}[H]\label{fig:pathsN} \begin{center} \begin{tikzpicture}[scale=1] \begin{scope}[xshift=-6.2cm, scale=0.55] \path[use as bounding box] (-3.5,-3.5) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc 
({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-2*45}:3cm) arc ({85-2*45}:{50-4*45}:3cm); \draw[thick, red] ({85-4*45}:3.3cm) arc ({85-4*45}:{50-4*45}:3.3cm); \draw[thick, red, dotted] ({85-4*45}:3.6cm) arc ({85-4*45}:{50-4*45}:3.6cm); \draw[thick, red, dotted, <-] ({85-4*45}:3.9cm) arc ({85-4*45}:{50-4*45}:3.9cm); \draw[thick, red] ({50-4*45}:3.3cm) arc ({410-4*45}:{230-4*45}:0.15cm); \draw[thick, red, dotted] ({50-4*45}:3.9cm) arc ({410-4*45}:{230-4*45}:0.15cm); \draw[thick, red, dotted] ({85-4*45}:3.6cm) arc ({85-4*45}:{265-4*45}:0.15cm); \draw[red] (270:4.9cm) node {$a_2 a_3 a_4 \ol{a}_4 (a_4 \ol{a}_4)$}; \end{scope} \begin{scope}[xshift=-2.0cm, scale=0.55] \path[use as bounding box] (-3.5,-3.8) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-2*45}:3cm) arc ({85-2*45}:{50-3*45}:3cm); \draw[thick, red] ({85-3*45}:3.3cm) arc ({85-3*45}:{50-3*45}:3.3cm); \draw[thick, red, dotted] ({85-3*45}:3.6cm) arc ({85-3*45}:{50-3*45}:3.6cm); \draw[thick, red, dotted] ({85-3*45}:3.9cm) arc ({85-3*45}:{50-3*45}:3.9cm); \draw[thick, red, ->] ({85-3*45}:4.2cm) arc ({85-3*45}:{50-3*45}:4.2cm); \draw[thick, red] ({50-3*45}:3.3cm) arc ({410-3*45}:{230-3*45}:0.15cm); \draw[thick, red] ({85-3*45}:3.6cm) arc ({85-3*45}:{265-3*45}:0.15cm); \draw[thick, red, dotted] ({50-3*45}:3.9cm) arc ({410-3*45}:{230-3*45}:0.15cm); \draw[thick, red, dotted] ({85-3*45}:4.2cm) arc ({85-3*45}:{265-3*45}:0.15cm); \draw[red] (270:4.9cm) node {$a_2 a_3 \ol{a}_3 (a_3 \ol{a}_3)a_3 $}; \end{scope} \begin{scope}[xshift=2.0cm, scale=0.55] \path[use as bounding box] (-3.5,-3.5) rectangle 
(3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-2*45}:3cm) arc ({85-2*45}:{50-2*45}:3cm); \draw[thick, red] ({85-2*45}:3.3cm) arc ({85-2*45}:{50-2*45}:3.3cm); \draw[thick, red, dotted] ({85-2*45}:3.6cm) arc ({85-2*45}:{50-2*45}:3.6cm); \draw[thick, red, dotted] ({85-2*45}:3.9cm) arc ({85-2*45}:{50-2*45}:3.9cm); \draw[thick, red, ->] ({85-2*45}:4.2cm) arc ({85-2*45}:{50-3*45}:4.2cm); \draw[thick, red] ({50-2*45}:3.3cm) arc ({410-2*45}:{230-2*45}:0.15cm); \draw[thick, red] ({85-2*45}:3.6cm) arc ({85-2*45}:{265-2*45}:0.15cm); \draw[thick, red, dotted] ({50-2*45}:3.9cm) arc ({410-2*45}:{230-2*45}:0.15cm); \draw[thick, red, dotted] ({85-2*45}:4.2cm) arc ({85-2*45}:{265-2*45}:0.15cm); \draw[red] (270:4.9cm) node {$a_2 \ol{a}_2 (a_2\ol{a}_2) a_2 a_3 $}; \end{scope} \begin{scope}[xshift=6.2cm, scale=0.55] \path[use as bounding box] (-3.5,-3.5) rectangle (3.5,2.5); \foreach \x in {6,7,0,1,2,3,4,5} \fill ({90-\x*45}:2.5cm) circle (1.2mm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, ->] ({85-\x*45}:2.65cm) arc ({85-\x*45}:{50-\x*45}:2.65cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw[thick, <-] ({85-\x*45}:2.35cm) arc ({85-\x*45}:{52-\x*45}:2.35cm); \foreach \x in {5,6,7,0,1,2,3,4} \draw ({90-\x*45}:2cm) node {$\x$}; \draw[thick, red] ({85-1*45}:3cm) arc ({85-1*45}:{50-1*45}:3cm); \draw[thick, red] ({85-1*45}:3.3cm) arc ({85-1*45}:{50-1*45}:3.3cm); \draw[thick, red, dotted] ({85-1*45}:3.6cm) arc ({85-1*45}:{50-1*45}:3.6cm); \draw[thick, red, dotted] ({85-1*45}:3.9cm) arc ({85-1*45}:{50-1*45}:3.9cm); \draw[thick, red, ->] ({50-1*45}:3.9cm) arc ({50-1*45}:{50-3*45}:3.9cm); \draw[thick, red] ({85-1*45}:3.3cm) arc ({85-1*45}:{265-1*45}:0.15cm); 
\draw[thick, red, dotted] ({50-1*45}:3.6cm) arc ({410-1*45}:{230-1*45}:0.15cm); \draw[thick, red, dotted] ({85-1*45}:3.9cm) arc ({85-1*45}:{265-1*45}:0.15cm); \draw[red] (270:4.9cm) node {$\ol{a}_1 a_1 (\ol{a}_1 a_1) a_2 a_3 $}; \end{scope} \end{tikzpicture} \end{center} \vskip 15pt \caption{The paths that appear in $\g{1,2}{4}= a_2 a_3 a_4 \ol{a}_4 (a_4 \ol{a}_4)^{N-1} - a_2 a_3 \ol{a}_3 (a_3 \ol{a}_3)^{N-1} a_3 + a_2\ol{a}_2 (a_2\ol{a}_2)^{N-1} a_2 a_3 - \ol{a}_1 a_1 (\ol{a}_1 a_1)^{N-1} a_2 a_3\in \ground\quiv$ when $N=2$, $m=8$. All paths start at vertex $2$ and end at vertex $4$.} \end{figure} \section{The Hochschild cohomology ring for $\alg$} \label{cohomology} \numberwithin{equation}{subsubsection} In this section, we describe the Hochschild cohomology $\hh^{\bullet}(\alg)$ computed by Snashall and Taillefer. All results in this section appear in~\cite{ST}. For every $n\geq 0$, the $\ground$-module $\hh^{n}(\alg)$ is finite dimensional, and we describe its basis in terms of cocycles in $\hom{\alg^e}(P_n,\alg)$. As usual, we identify $\hom{\alg^e}(P_n,\alg)$ with $\hom{E^e}(\ground \bas{n},\alg)$, whose elements are the $\ground$-linear functions $\ground\bas{n}\rightarrow\alg$ that map every $g\in \bas{n}$ to a linear combination of paths in $\alg$ parallel to~$g$, that is, paths that share the same source and target with $g$. For $g\in \bas{n}$ and $u\in A$ parallel to $g$, we write $(g\parallel u)\in \hom{E^e}(\ground \bas{n},\alg)$ to denote the $\ground$-linear function that sends $g$ to $u$ and which maps all the other elements in $\bas{n}$ to zero. 
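Before describing $\hh^0(\alg)$, we record a routine check, using only the relations of $\alg$, that the socle elements $(a_i\ol{a}_i)^N$ are central. Rewriting with the relation $(a_i\ol{a}_i)^N=(\ol{a}_{i-1}a_{i-1})^N$, every product with an arrow contains one of the zero relations, for instance
$$(a_i\ol{a}_i)^N a_i = a_i(\ol{a}_i a_i)^N = a_i(a_{i+1}\ol{a}_{i+1})^N = 0, \qquad (a_i\ol{a}_i)^N\ol{a}_{i-1} = (\ol{a}_{i-1}a_{i-1})^N\ol{a}_{i-1} = \ol{a}_{i-1}(\ol{a}_{i-2}a_{i-2})^N = 0,$$
since $a_ia_{i+1}=0$ and $\ol{a}_{i-1}\ol{a}_{i-2}=0$; the products with arrows on the left vanish in the same way. As $(a_i\ol{a}_i)^N$ is a loop at vertex $i$, it also commutes with the idempotents $e_j$, hence lies in $Z(\alg)$.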
\subsection{The center $\hh^{0}(\alg)$ of $\alg$.}\label{ss:0} Writing $\varepsilon_i:=(a_i\ol{a}_i)^N\in \alg$ and $f_i:=(a_i\ol{a}_i + \ol{a}_i a_i)\in \alg$, the set $$\{1,\, \varepsilon_i,\, f_i^s \,\mid \, i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, \, 1\leq s\leq N-1\}$$ is a basis for the $\ground$-module $\hh^{0}(\alg)$ under the usual identification of $\hh^0(\alg)$ with the center of $A$, see~\cite[Theorem 3.1]{ST}. \subsection{The $\ground$-module $\hh^{n}(\alg)$ for $n\geq 1$ \cite[Propositions 4.1 and 5.1]{ST}} \label{ss:n} For $m \geq 3$ and each $n \geq 1$, we write $n=pm +t$ for some integers $p,\,t$ with $p\geq 0$ and $0\leq t\leq m-1$. \subsubsection{When $m$ is even and $n$ is even, the $\ground$-module $\hh^{n}(\alg)$ has basis:}\label{subsec:m even n even} \begin{align*} \chi_{n,\alpha} &:= \sum_{i=0}^{m-1} \left( g^n_{\frac{n-\alpha m}{2},i} \parallel (-1)^{\frac{n-\alpha m}{2}i}\,e_i \right) && \text{ for } -p \leq \alpha \leq p; \\ \pi_{n,\alpha}&:= \left( g^n_{\frac{n-\alpha m}{2},0} \parallel (a_0\ol{a}_0)^N \right) && \text{ for } -p \leq \alpha \leq p; \\ F_{n,j,s}&:= \left( g^n_{\frac{n}{2},j} \parallel (a_j\ol{a}_j)^s \right) + \left( g^n_{\frac{n}{2},j+1} \parallel (-1)^{\frac{n}{2}}\,(\ol{a}_j a_j)^s \right) && \text{ for } j \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}} \text{ and } 1\leq s\leq N-1. 
\end{align*} \subsubsection{When $m$ is even and $n$ is odd, the $\ground$-module $\hh^{n}(\alg)$ has basis:}\label{subsec:m even n odd} \begin{align*} \varphi_{n,\gamma}&:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\gamma m-1}{2},i} \parallel (-1)^{\frac{n-\gamma m-1}{2}i} \, a_i (\ol{a}_i a_i)^{N-1} \right) && \text{ for } -p \leq \gamma < 0, \\ &&& \text{ and for } \gamma =-(p+1) \text{ in case } t=m-1; \\ \varphi_{n,\gamma}&:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\gamma m-1}{2},i} \parallel (-1)^{\frac{n-\gamma m-1}{2}i} \, a_i \right) && \text{ for } 0 \leq \gamma \leq p; \\ \psi_{n,\beta}&:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\beta m+1}{2},i} \parallel (-1)^{\frac{n-\beta m-1}{2}i} \, \ol{a}_{i-1} (a_{i-1} \ol{a}_{i-1})^{N-1} \right) && \text{ for } 0 < \beta \leq p, \\ &&& \text{ and for } \beta =p+1 \text{ in case } t=m-1; \\ \psi_{n,\beta}&:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\beta m+1}{2},i} \parallel (-1)^{\frac{n-\beta m-1}{2}i} \, \ol{a}_{i-1} \right) && \text{ for } -p \leq \beta \leq 0; \\ E_{n,j,s}&:= \left( g^n_{\frac{n-1}{2},j} \parallel a_j (\ol{a}_j a_j)^s \right) && \text{ for } j \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}} \text{ and } 1\leq s\leq N-1. 
\end{align*} \subsubsection{When $m$ is odd and $n$ is even, the $\ground$-module $\hh^{n}(\alg)$ has basis:} \label{subsec:m odd n even} \begin{align*} \chi_{n,\delta} &:= \sum_{i=0}^{m-1} \left( g^n_{\frac{n-\delta m}{2},i} \parallel e_i \right) && \text{ for } \delta =\begin{cases} p-2\alpha-1, &\text{if $t$ is odd, $0 \leq \alpha < p$ and $\alpha + \frac{m-t}{2}$ is odd}, \\ p-2\alpha, &\text{if $t$ is even, $ 0 \leq \alpha \leq p$ and $\alpha + \frac{t}{2}$ is even}; \end{cases} \\ \pi_{n,\delta} &:= \left( g^n_{\frac{n-\delta m}{2},0} \parallel (a_0\ol{a}_0)^N \right) && \text{ for } \delta =\begin{cases} p-2\alpha-1, & \text{if $t$ is odd, $0 \leq \alpha < p$ and $\alpha + \frac{m-t}{2}$ is even}, \\ p-2\alpha, & \text{if $t$ is even, $0 \leq \alpha \leq p$ and $\alpha + \frac{t}{2}$ is odd}; \end{cases} \end{align*} \begin{align*} F_{n,j,s} &:= \left( g^n_{\frac{n}{2},j} \parallel (a_j\ol{a}_j)^s \right) + \left( g^n_{\frac{n}{2},j+1} \parallel (-1)^{\frac{n}{2}}\,(\ol{a}_j a_j)^s \right) && \text{ for } j \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}} \text{ and } 1\leq s\leq N-1; \\ \varphi_{n,\sigma} &:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\sigma m-1}{2},i} \parallel (a_i \ol{a}_i)^{N-1}a_i \right) && \text{ for } t=m-1 \text{ and } \sigma=-(p+1); \\ \psi_{n,\tau} &:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\tau m+1}{2},i} \parallel (\ol{a}_{i-1}a_{i-1})^{N-1}\ol{a}_{i-1} \right) && \text{ for } t=m-1 \text{ and } \tau=p+1. \end{align*} \begin{rem}\label{rem:indexchi} We note that the value $\delta=0$ appears in the above index set for $\chi_{n,\delta}$ if and only if $n\equiv 0 \text{ (mod 4)}$. This follows by considering the possible values of $p,\ m$ and $t$ mod~$4$ for $\delta=0$. 
\end{rem} \subsubsection{When $m$ is odd and $n$ is odd, the $\ground$-module $\hh^{n}(\alg)$ has basis:} \label{subsec:m odd n odd} \begin{align*} \varphi_{n,\sigma} &:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\sigma m-1}{2},i} \parallel (a_i \ol{a}_i)^{N-1}a_i \right)&&\\ &\text{ for } \sigma = \begin{cases} p-2\gamma & \text{if $t$ is odd, $\gamma \leq p<2 \gamma$ and $\gamma + \frac{t-1}{2}$ is even}, \\ p-2\gamma-1 & \text{if $t$ is even, $\gamma <p \leq 2\gamma$ and $\gamma + \frac{m+t-1}{2}$ is even, $t \neq m-1$, } \\ p-2\gamma-1 & \text{if $t=m-1$, $\gamma \leq p\leq 2\gamma$ and $\gamma$ is even}; \end{cases} &&\\ \varphi_{n,\sigma} &:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\sigma m-1}{2},i} \parallel a_i \right)&&\\ &\text{ for } \sigma = \begin{cases} p-2\gamma & \text{if $t$ is odd, $0\leq 2\gamma \leq p$ and $\gamma + \frac{t-1}{2}$ is even}, \\ p-2\gamma-1 & \text{if $t$ is even, $0\leq 2\gamma < p$ and $\gamma + \frac{m+t-1}{2}$ is even}; \end{cases} &&\\ \psi_{n,\tau} &:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\tau m+1}{2},i} \parallel (\ol{a}_{i-1}a_{i-1})^{N-1}\ol{a}_{i-1} \right)&&\\ &\text{ for } \tau = \begin{cases} p-2\beta & \text{ if $t$ is odd, $0\leq 2\beta <p$ and $\beta + \frac{t-1}{2}$ is even}, \\ p-2\beta-1 & \text{ if $t$ is even, $0\leq 2\beta <p-1$ and $\beta + \frac{m+t-1}{2}$ is even, $t \neq m-1$,}\\ p-2\beta-1 & \text{ if $t=m-1$, $-2\leq 2\beta < p-1$ and $\beta$ is even}; \end{cases} &&\\ \psi_{n,\tau} &:=\sum_{i=0}^{m-1} \left( g^n_{\frac{n-\tau m+1}{2},i} \parallel \ol{a}_{i-1} \right)&&\\ &\text{ for } \tau = \begin{cases} p-2\beta & \text{ if $t$ is odd, $\beta\leq p\leq 2\beta$ and $\beta + \frac{t-1}{2}$ is even,} \\ p-2\beta-1 & \text{ if $t$ is even, $\beta \leq p-1\leq 2\beta$ and $\beta + \frac{m+t-1}{2}$ is even}; \end{cases}&&\\ E_{n,j,s} &:= \left( g^n_{\frac{n-1}{2},j} \parallel a_j (\ol{a}_j a_j)^s \right) \text{ for } j \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}} \text{ and } 1\leq s\leq N-1;\\ 
\pi_{n,\delta} &:= \left( g^n_{\frac{n-\delta m}{2},0} \parallel (a_0\ol{a}_0)^N \right) \text{ for } t=0 \text{ and } \delta = \pm p. \end{align*} \begin{rem}\label{rem:indexphi} We note that the value $\sigma=0$ appears in the above index set for $\varphi_{n,\sigma}$ if and only if $n\equiv 1 \text{ (mod 4)}$. Again, this follows by considering the possible values of $p,\ m$ and $t$ mod~$4$ for $\sigma=0$. Similarly, the value $\tau=0$ appears in the above index set for $\psi_{n,\tau}$ if and only if $n\equiv 1 \text{ (mod 4)}$. \end{rem} \subsection{For $m \geq 3$ even, the $\ground$-algebra $\hh^{\bullet}(\alg)$ has generators~\cite[Theorems 4.4 and 4.8]{ST}:} \label{m even ring} \begin{align*} & 1, \, \varepsilon_i, && \text{ in degree } 0, \text{ for } N=1, \,i \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},\\ & 1, \, \varepsilon_i, \, f_i, && \text{ in degree } 0, \text{ for } N>1, \,i \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, \\ & \varphi_{1,0}, \, \psi_{1,0} && \text{ in degree } 1,\\ & \chi_{2,0} && \text{ in degree } 2,\\ & \varphi_{m-1,-1}, \, \psi_{m-1,1} && \text{ in degree } m-1,\\ & \chi_{m,1}, \, \chi_{m,-1} && \text{ in degree } m. 
\end{align*} We will compute the Gerstenhaber brackets for each of these generators in the next sections, so we describe them explicitly here: \begin{align*} \varepsilon_i&=(a_i\ol{a}_i)^N, & f_i=&a_i\ol{a}_i + \ol{a}_i a_i,\\ \varphi_{1,0} &= \sum_{i=0}^{m-1} \left(g^1_{0,i} \parallel a_i\right), & \psi_{1,0} =& \sum_{i=0}^{m-1} \left(g^1_{1,i} \parallel \ol{a}_{i-1}\right), \\ \chi_{2,0} &= \sum_{i=0}^{m-1} \left(g^2_{1,i} \parallel (-1)^ie_i\right), && \\ \varphi_{m-1,-1} &= \sum_{i=0}^{m-1} \left(g^{m-1}_{m-1,i} \parallel (-1)^{i} \, a_i (\ol{a}_{i}a_{i})^{N-1}\right), & \psi_{m-1,1} =& \sum_{i=0}^{m-1} \left(g^{m-1}_{0,i} \parallel (-1)^i \, \ol{a}_{i-1} (a_{i-1} \ol{a}_{i-1})^{N-1}\right), \\ \chi_{m,1} &= \sum_{i=0}^{m-1} \left(g^m_{0,i} \parallel e_i\right), & \chi_{m,-1} =& \sum_{i=0}^{m-1} \left(g^m_{m,i} \parallel e_i\right). \end{align*} \subsection{For $m \geq 3$ odd, the $\ground$-algebra $\hh^{\bullet}(\alg)$ has generators~\cite[Theorems 5.2 and 5.4]{ST}:} \label{m odd ring} \begin{align*} & 1, \, \varepsilon_i, && \text{ in degree } 0, \text{ for } N=1, \,i \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},\\ & 1, \, \varepsilon_i, \, f_i, && \text{ in degree } 0, \text{ for } N>1, \,i \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, \\ & \varphi_{1,0}, \, \psi_{1,0} && \text{ in degree } 1,\\ & F_{2,j,1} && \text{ in degree } 2, \text{ for } N>1, \,j \in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},\\ & \chi_{4,0} && \text{ in degree } 4,\\ & \varphi_{m-1,-1}, \, \psi_{m-1,1} && \text{ in degree } m-1,\\ & \chi_{2m,2}, \, \chi_{2m,-2} && \text{ in degree } 2m. 
\end{align*} Explicitly, these generators are: \begin{align*} \varepsilon_i&=(a_i\ol{a}_i)^N, & f_i=&a_i\ol{a}_i + \ol{a}_i a_i,\\ \varphi_{1,0} &= \sum_{i=0}^{m-1} \left(g^1_{0,i} \parallel a_i\right), & \psi_{1,0} =& \sum_{i=0}^{m-1} \left(g^1_{1,i} \parallel \ol{a}_{i-1}\right), \\ F_{2,j,1}&= \left( g^2_{1,j} \parallel a_j\ol{a}_j \right) + \left( g^2_{1,j+1} \parallel - \ol{a}_j a_j \right), &&\\ \chi_{4,0} &= \sum_{i=0}^{m-1} \left(g^4_{2,i} \parallel e_i\right), && \\ \varphi_{m-1,-1} &= \sum_{i=0}^{m-1} \left(g^{m-1}_{m-1,i} \parallel (a_i \ol{a}_i)^{N-1} a_i\right), & \psi_{m-1,1} =& \sum_{i=0}^{m-1} \left(g^{m-1}_{0,i} \parallel ( \ol{a}_{i-1} a_{i-1})^{N-1} \ol{a}_{i-1}\right), \\ \chi_{2m,2} &= \sum_{i=0}^{m-1} \left(g^{2m}_{0,i} \parallel e_i\right), & \chi_{2m,-2} =& \sum_{i=0}^{m-1} \left(g^{2m}_{2m,i} \parallel e_i\right). \end{align*} We list several algebra relations on the generators for $m$ even and odd in Method~\ref{methodc}. Some of the relations given there differ from the results in \cite[Theorem 4.8]{ST}. \section{Gerstenhaber brackets with $\varphi_{1,0}$ and $\psi_{1,0}$} \label{brackets HH1} \numberwithin{equation}{section} In this section, we compute the brackets with basis elements $\varphi_{1,0}$ and $\psi_{1,0}$ of $\hh^{1}(\alg)$, using Su\'{a}rez-\'{A}lvarez's approach as described in Section~\ref{sec:approach}. More precisely, we show that the basis for $\hh^n(\alg)$ described in~Section~\ref{cohomology} is an eigenbasis for the endomorphisms $[\varphi_{1,0},-]$ and $[\psi_{1,0},-]$. We proceed as follows. First we show that $\varphi_{1,0}$, $\psi_{1,0}$ correspond to Eulerian derivations on $\alg$ coming from gradings $d$, $\ol{d}$ on $\alg$, respectively. Then we check that the projective resolution $P_\bullet\twoheadrightarrow \alg$ given in~Section~\ref{sec:resolution} is graded with respect to both gradings. 
By Remark~ \ref{rem:SA}, we can then describe the brackets $[\varphi_{1,0},-]$ and $[\psi_{1,0},-]$ on $\hh^{n}(\alg)$ for all $n\ge 0$ in terms of these gradings. Furthermore, we will show that the basis for $\hh^n(\alg)$ given in~Section~\ref{cohomology} consists of homogeneous elements with respect to both gradings. In this case $[\varphi_{1,0},-]$ and $[\psi_{1,0},-]$ act diagonally with eigenvalues given by the degree of the homogeneous basis elements. Recall that $$ \varphi_{1,0} = \sum_{i=0}^{m-1}(\g{0,i}{1} \parallel a_i)=\sum_{i=0}^{m-1}(a_i \parallel a_i)\quad\mbox{and} \quad \psi_{1,0} = \sum_{i=0}^{m-1}(\g{1,i}{1} \parallel \ol{a}_{i-1})=\sum_{i=0}^{m-1}(-\ol{a}_{i} \parallel \ol{a}_{i}). $$ Consider the grading $d$ on $\ground\quiv$ such that the arrows $a_i$ are in degree $1$ and the arrows $\ol{a}_i$ are in degree $0$. Similarly, consider the grading $\ol{d}$ on $\ground\quiv$ such that the arrows $a_i$ are in degree $0$ and the arrows $\ol{a}_i$ are in degree $-1$. We will write $d(p)$ and $\ol{d}(p)$ for the degrees of the path $p\in\ground \quiv$ under the gradings $d$ and $\ol{d}$, respectively. Moreover, since $\alg$ is defined as a quotient of $\ground \quiv$ by an ideal which is homogeneous under both gradings, we get two corresponding gradings $d$ and $\ol{d}$ on $\alg$. The algebra $\alg^e$ inherits these gradings as well, by setting $d(a\otimes a\ensuremath{^{\prime}})=d(a)+d(a\ensuremath{^{\prime}})$ and $\ol{d}(a\otimes a\ensuremath{^{\prime}})=\ol{d}(a)+\ol{d}(a\ensuremath{^{\prime}})$ for homogeneous elements $a,a\ensuremath{^{\prime}}\in\alg$. Now, the elements $\varphi_{1,0}$ and $\psi_{1,0}$ in $\hh^1(\alg)$ correspond to the derivations $\delta_{d}, \delta_{\ol{d}}:\alg\rightarrow\alg$ sending a path $p$ in $\alg$ to $d(p)p$ and $\ol{d}(p)p$, respectively. 
In other words, $\delta_{d}$ is the Eulerian derivation associated to the grading $d$ on $\alg$, while $\delta_{\ol{d}}$ is the Eulerian derivation associated to the grading $\ol{d}$ on $\alg$. Observe furthermore that: \begin{lemma}\label{lem:deg} The elements $g_{r,i}^n$ in $\ground\quiv$ are homogeneous with respect to the gradings $d$ and $\ol{d}$, with degree given by \[ d( \g{r,i}{n} ) = \begin{cases} rN + n-2r & \mbox{if } n-2r \geq 0\\ (n-r) N & \mbox{if } n-2r<0 \end{cases}\quad \text{and} \quad \ol{d} ( \g{r,i}{n} ) = \begin{cases} -rN & \mbox{if } n-2r \geq 0\\ -(n-r) N +n-2r & \mbox{if } n-2r<0. \end{cases}\] Furthermore, the projective $\alg^e$-module $P_n= \alg\otimes_E \langle g_{r,i}^n\ |\ i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},0\le r\le n\rangle\otimes_E \alg$ inherits both gradings (denoted again by $d$, $\ol{d}$) by setting \begin{align*} d(a\otimes g_{r,i}^n \otimes a\ensuremath{^{\prime}}) &=d(a)+d(g_{r,i}^n)+ d(a\ensuremath{^{\prime}}),\\ \ol{d}(a\otimes g_{r,i}^n \otimes a\ensuremath{^{\prime}})&=\ol{d}(a)+\ol{d}(g_{r,i}^n)+\ol{d}(a\ensuremath{^{\prime}}), \end{align*} for homogeneous elements $a,a\ensuremath{^{\prime}}\in \alg$. Then the minimal projective resolution $P_\bullet\twoheadrightarrow \alg$ from Section \ref{sec:resolution} is a graded projective resolution with respect to these two gradings. \end{lemma} \begin{proof} To compute the degree of $g_{r,i}^n$, recall the recursive definition of $g_{r,i}^n$ given in~Section~\ref{sec:resolution}, and use induction on $n$. For the graded projective resolution, observe that the differential maps $d_n$, defined in Section~\ref{sec:resolution}, preserve the gradings: a case-by-case analysis shows that all the summands appearing in the image of the differential $d_n(1\otimes \g{r,i}{n}\otimes 1)$ are again of degree $d(\g{r,i}{n})$ and $\ol{d}(\g{r,i}{n})$, respectively. Here, we use the explicit degree formulas for $\g{r,i}{n}$.
\end{proof} Now we can apply the observations from Remark~\ref{rem:SA} to both $d$ and $\ol{d}$ and obtain the following result: \begin{prop}\label{prop:lift} The $(\delta_d)^e$-operator $(\delta_d)_n:P_n\rightarrow P_n$ given by $$(\delta_d)_n (1\otimes\g{r,i}{n}\T1) = d(\g{r,i}{n})(1\otimes\g{r,i}{n}\T1 )$$ defines a $(\delta_d)^e$-lifting $(\delta_d)_{\bullet}$ of $\delta_d$ to $P_{\bullet}$. Similarly, the $(\delta_{\ol{d}})^{e}$-operator $(\delta_{\ol{d}})_n:P_n\rightarrow P_n$ given by $$(\delta_{\ol{d}})_n (1\otimes\g{r,i}{n}\T1) = \ol{d}(\g{r,i}{n})(1\otimes\g{r,i}{n}\T1 )$$ defines a $(\delta_{\ol{d}})^e$-lifting $(\delta_{\ol{d}})_{\bullet}$ of $\delta_{\ol{d}}$ to $P_{\bullet}$. \end{prop} \begin{rem}\label{rem:gradhh} Since $P_{\bullet}\twoheadrightarrow A$ is a graded projective resolution of $A$ with respect to the grading $d$, the cohomology $\hh^n(\alg)$ inherits a grading $d$ for every $n\geq 0$ as usual. More precisely, the $\ground$-module $\hom{\alg^e}(P_n,\alg) \cong \hom{E^e}(\ground \bas{n},\alg)$ inherits a grading $d$ by setting $$ d((\g{r,i}{n}\parallel u))=d(u)-d(\g{r,i}{n}), $$ for $u\in\alg$ homogeneous and parallel to $g^n_{r,i}$. This grading on $\hom{\alg^e}(P_{n},\alg)$ behaves well with respect to the differential, so the cohomology $\hh^n(\alg)$ acquires a grading $d$ for every $n\geq 0$. Similarly, since $P_{\bullet}\twoheadrightarrow A$ is graded with respect to the grading $\ol{d}$, the $\ground$-module $\hom{\alg^e}(P_n,\alg)$ and the cohomology $\hh^n(\alg)$ inherit a grading $\ol{d}$ for every $n\geq 0$. We note that the basis for $\hh^n(\alg)$ given in~Section~\ref{cohomology} consists of homogeneous elements with respect to both gradings. \end{rem} The gradings on $\alg$ and on $\hh^0(\alg)$, considered as the center of $\alg$, agree. In this case we can compute the brackets of $\varphi_{1,0}$ and $\psi_{1,0}$ with generators of $\hh^0(\alg)$ directly as follows.
\begin{align*} &[\varphi_{1,0},1]=0 && \text{and} && [\psi_{1,0},1]=0, \\ &[\varphi_{1,0},\varepsilon_i]=\delta_d((a_i\ol{a}_i)^N)=N(a_i\ol{a}_i)^N && \text{and} && [\psi_{1,0},\varepsilon_i]=\delta_{\ol{d}}((a_i\ol{a}_i)^N)=-N(a_i\ol{a}_i)^N ,\\ &[\varphi_{1,0},f_i]=\delta_d(a_i\ol{a}_i+\ol{a}_ia_i)=a_i\ol{a}_i+\ol{a}_ia_i && \text{and} && [\psi_{1,0},f_i]=\delta_{\ol{d}}(a_i\ol{a}_i+\ol{a}_ia_i)=-(a_i\ol{a}_i+\ol{a}_ia_i). \end{align*} The following proposition is an immediate consequence of Remark~\ref{rem:gradhh}. \begin{prop}\label{diagonal} Let $n\geq 0$. The elements $\varphi_{1,0}$ and $\psi_{1,0}$ in $\hh^1(\alg)$ act diagonally on~$\hh^n(\alg)$. That is, the $\ground$-basis for $\hh^n(\alg)$, given in Sections~\ref{ss:0} and~\ref{ss:n}, is an eigenbasis for the endomorphisms $[\varphi_{1,0},-]$ and $[\psi_{1,0},-]$ of $\hh^n(\alg)$. The corresponding eigenvalues are given by the degree under $d$ and $\ol{d}$ respectively, as listed in Table~\ref{tab:diagonaleven} and~Table~\ref{tab:diagonalodd} below. \end{prop} \begin{proof} For $u\in\alg$ homogeneous, using~Lemma~\ref{lem:SA} and Proposition~\ref{prop:lift}, we can compute \begin{align*} [\varphi_{1,0}, (\g{r,i}{n}\parallel u)] & =\Delta_{\varphi_{1,0},P_{\bullet}}^{n}((\g{r,i}{n}\parallel u)) =\delta_d (\g{r,i}{n}\parallel u)- (\g{r,i}{n}\parallel u)(\delta_d)_n \\ &= d(u) (\g{r,i}{n}\parallel u) - d(\g{r,i}{n})(\g{r,i}{n}\parallel u)\\ & = d((\g{r,i}{n}\parallel u))(\g{r,i}{n}\parallel u) \end{align*} and \begin{align*} [\psi_{1,0}, (\g{r,i}{n}\parallel u)] &=\Delta_{\psi_{1,0},P_{\bullet}}^{n}((\g{r,i}{n}\parallel u))=\delta_{\ol{d}} (\g{r,i}{n}\parallel u)- (\g{r,i}{n}\parallel u)(\delta_{\ol{d}})_n \\ &= \ol{d}(u) (\g{r,i}{n}\parallel u) - \ol{d}(\g{r,i}{n})(\g{r,i}{n}\parallel u)\\ & = \ol{d}((\g{r,i}{n}\parallel u))(\g{r,i}{n}\parallel u). \end{align*} The result now follows from Remark~\ref{rem:gradhh}.
\end{proof} Let us now illustrate this statement by some explicit computations: \begin{rem} The $[\varphi_{1,0},-]$-action on $(\g{r,i}{n}\parallel u)$ for a nonzero path $u\in \alg$ can be obtained as follows: \begin{align*} [\varphi_{1,0}, (\g{r,i}{n}\parallel u)] = \begin{cases} (d(u)- n+2r-rN ) (\g{r,i}{n}\parallel u) & \mbox{if } n-2r \geq 0\\ (d(u)- (n-r)N ) (\g{r,i}{n}\parallel u) & \mbox{if } n-2r<0, \end{cases} \end{align*} where $d(u)$ is the number of clockwise arrows that appear in the path $u$. Similarly, \begin{align*} [\psi_{1,0}, (\g{r,i}{n}\parallel u)] =\begin{cases} (\ol{d}(u)+ rN) (\g{r,i}{n}\parallel u) & \mbox{if } n-2r \geq 0\\ (\ol{d}(u)+ (n-r) N -n +2r) (\g{r,i}{n}\parallel u) & \mbox{if } n-2r<0, \end{cases} \end{align*} where $-\ol{d}(u)$ is the number of counterclockwise arrows that appear in the nonzero path $u\in \alg$. \end{rem} \begin{rem} We observe the following: \begin{enumerate} \item If $f,g \in \{\varphi_{1,0}, \psi_{1,0}\}$ then $[f,g]=0$. \item The sum $\varphi_{1,0} + \psi_{1,0}$ is in the center of $\hh^1(\alg)$, as $[\varphi_{1,0} + \psi_{1,0},\hh^1(\alg)]=0$. Moreover, we compute \[ [\varphi_{1,0} + \psi_{1,0}, (\g{r,i}{n}\parallel u)] = \left( d(u) + \ol{d}(u) - n+2r \right) (\g{r,i}{n}\parallel u),\] and \[ [\varphi_{1,0} - \psi_{1,0}, (\g{r,i}{n}\parallel u)] = \begin{cases} \left( d(u) - \ol{d}(u) - n+2r- 2r N \right) (\g{r,i}{n}\parallel u) & \mbox{if } n-2r \geq 0 \\ \left( d(u) - \ol{d}(u) -2nN+2rN+n-2r \right) (\g{r,i}{n}\parallel u) & \mbox{if } n-2r<0. \end{cases} \] \end{enumerate} \end{rem} \begin{rem} In the case $N=1$, the authors of \cite{GTW} define three gradings on $\alg$: the first one is obtained by setting the degree of $a_i$ equal to $1$ for all $i$, while the degree of any other generator is zero; the second grading is simply the path length grading.
They note that with the given relations, any path is at most of length $2$ in the algebra, hence they consider this second grading as a $\mathbb{Z}/2\mathbb{Z}$ grading. The third grading is defined by setting the degree of $a_{m-1}$ equal to $1$, the degree of $\ol{a}_{m-1}$ equal to $-1$, and the degree of any other arrow equal to zero. The first grading mentioned in \cite{GTW} is in fact $d$, while the second one corresponds to $d-\overline{d}$. The third grading corresponds to an Eulerian derivation which is a linear combination of the preceding two and coboundaries. \end{rem} In the following Table~\ref{tab:diagonaleven} and~Table~\ref{tab:diagonalodd}, we record the eigenvalues of the endomorphisms $[\varphi_{1,0},\ ]$, $[\psi_{1,0}, \ ]$, $[\varphi_{1,0} + \psi_{1,0}, \ ]$ and $[\varphi_{1,0} - \psi_{1,0},\ ]$ of $\hh^n(\alg)$. The basis elements of $\hh^n(\alg)$, with explicit conditions on the indices, are given in Section~\ref{ss:n}. We let $i \in \mathbb{Z}/m\mathbb{Z}$, $1 \leq s \leq N-1$, and recall that we write $n=pm +t$ with $p\geq 0$ and $0\leq t\leq m-1$.
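To illustrate how the entries of these tables are obtained, we verify the row for $E_{n,i,s}$ with $n$ odd; the remaining rows are checked in the same way. By definition, $E_{n,j,s}=\left( g^n_{\frac{n-1}{2},j} \parallel a_j (\ol{a}_j a_j)^s \right)$, so here $r=\frac{n-1}{2}$ and $n-2r=1\geq 0$. Lemma~\ref{lem:deg} gives $d(g^n_{\frac{n-1}{2},j})=\frac{n-1}{2}N+1$ and $\ol{d}(g^n_{\frac{n-1}{2},j})=-\frac{n-1}{2}N$, while the path $u=a_j (\ol{a}_j a_j)^s$ contains $s+1$ clockwise arrows and $s$ counterclockwise arrows, so that $d(u)=s+1$ and $\ol{d}(u)=-s$. By Remark~\ref{rem:gradhh} and Proposition~\ref{diagonal}, the corresponding eigenvalues are
\[ d(E_{n,j,s})=d(u)-d(g^n_{\frac{n-1}{2},j})=s-\tfrac{n-1}{2}N \quad\text{and}\quad \ol{d}(E_{n,j,s})=\ol{d}(u)-\ol{d}(g^n_{\frac{n-1}{2},j})=\tfrac{n-1}{2}N-s, \]
in agreement with the entries recorded for $E_{n,i,s}$ in both tables.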
\newcommand\MyLBrace[2]{% \text{#2}\left\{\rule{0pt}{#1}\right.} \begin{landscape} \renewcommand{\arraystretch}{1.3} \FloatBarrier \begin{table}[!htp] \begin{center} \caption{Eigenvalues when $m \geq 3$ is \textbf{even}.} \label{tab:diagonaleven} \vspace{0.1cm} \begin{tabular}{c@{}l} $\begin{array}{r} \vspace{0.4ex}\\ \MyLBrace{5ex}{$n=0$} \vspace{3ex}\\ \vspace{3ex} \MyLBrace{9.5ex}{$n$ even} \\ \MyLBrace{10ex}{$n$ odd} \end{array}$ & \begin{tabular}{| p{0.9cm} p{2.5cm} | p{3.6cm} | p{3.6cm} | p{2.5cm} | p{4.2cm} |} \hline $\mathbf{\hh^n}$ & & $\mathbf{[\varphi_{1,0},\ \ ]}$ & $\mathbf{[\psi_{1,0},\ \ ]}$ & $\mathbf{[\varphi_{1,0}+\psi_{1,0},\ \ ]}$ & $\mathbf{[\varphi_{1,0}-\psi_{1,0},\ \ ]}$ \\ \hline \hline $1$ & & $0$ & $0$ & $0$ & $0$ \\ \hline $\varepsilon_i$ & & $N$ & $-N$ & $0$ & $2N$ \\ \hline $f_i^s$ & & $s$ & $-s$ & $0$ & $2s$ \\ \hline \hline \hline $\chi_{n,\alpha}$ & $\begin{cases} \mbox{with } \alpha \geq 0 \\ \mbox{with } \alpha<0 \end{cases} $ & $\begin{cases} -\alpha m- \left(\frac{n-\alpha m}{2}\right)N \\ -\left(\frac{n+\alpha m}{2}\right) N \end{cases} $ & $\begin{cases} \left(\frac{n-\alpha m}{2}\right)N \\ - \alpha m + \left(\frac{n+\alpha m}{2}\right) N \end{cases}$ & $-\alpha m$ & $\begin{cases} -\alpha m -\left(n-\alpha m\right)N \\ \alpha m -\left(n+\alpha m\right)N \end{cases}$ \vspace{0.5em} \\ \hline $\pi_{n,\alpha}$ & $\begin{cases} \mbox{with } \alpha \geq 0 \\ \mbox{with } \alpha<0 \end{cases} $ & $\begin{cases} -\alpha m- \left(\frac{n-\alpha m - 2}{2}\right)N \\ -\left(\frac{n+\alpha m-2}{2}\right) N \end{cases}$ & $\begin{cases} \left(\frac{n-\alpha m-2}{2}\right)N \\ -\alpha m + \left(\frac{n+\alpha m-2}{2}\right) N \end{cases}$ & $-\alpha m$ & $\begin{cases} -\alpha m -\left(n-\alpha m-2\right)N \\ \alpha m -\left(n+\alpha m-2\right)N \end{cases}$ \vspace{0.3em}\\ \hline $F_{n,i,s}$ & & $s-\frac{n}{2}N$ & $\frac{n}{2}N - s $ & $0$ & $2s-nN$ \\ \hline \hline \hline $\varphi_{n,\gamma}$ & $\begin{cases} \mbox{with } \gamma \geq 0
\\ \mbox{with } \gamma < 0 \end{cases} $ & $\begin{cases} -\gamma m-\left(\frac{n-\gamma m-1}{2}\right)N \\ -\left(\frac{n+\gamma m-1}{2}\right)N \end{cases}$ & $\begin{cases} \left(\frac{n-\gamma m-1}{2}\right)N \\ -\gamma m + \left(\frac{n + \gamma m-1}{2}\right)N \end{cases}$ & $-\gamma m$ & $\begin{cases} -\gamma m -\left(n-\gamma m-1\right)N \\ \gamma m -\left(n+\gamma m-1\right)N \end{cases}$ \vspace{0.3em} \\ \hline $\psi_{n,\beta}$ & $\begin{cases} \mbox{with } \beta >0\\ \mbox{with } \beta \leq 0 \end{cases}$ & $\begin{cases} -\beta m-\left(\frac{n-\beta m-1}{2}\right)N\\ -\left(\frac{n+\beta m-1}{2}\right)N \end{cases}$ & $\begin{cases} \left(\frac{n-\beta m-1}{2}\right)N \\ -\beta m+\left(\frac{n+\beta m-1}{2}\right)N\end{cases}$ & $-\beta m$ & $\begin{cases} -\beta m -\left(n-\beta m-1\right)N \\ \beta m -\left(n+\beta m-1\right)N \end{cases}$ \vspace{0.3em} \\ \hline $E_{n,i,s}$ & & $s-\frac{n-1}{2}N$ & $\frac{n-1}{2}N - s $ & $0$ & $2s-(n-1)N$ \\ \hline \end{tabular} \end{tabular} \end{center} \end{table} \end{landscape} \begin{landscape} \renewcommand{\arraystretch}{1.3} \FloatBarrier \begin{table}[!htp] \begin{center} \caption{Eigenvalues when $m \geq 3$ is \textbf{odd}.} \label{tab:diagonalodd} \vspace{0.1cm} \begin{tabular}{c@{}l} $\begin{array}{r} \vspace{0.4ex}\\ \MyLBrace{5ex}{$n=0$} \vspace{3ex}\\ \vspace{3ex} \MyLBrace{13ex}{$n$ even} \\ \MyLBrace{13.6ex}{$n$ odd} \end{array}$ & \begin{tabular}{| p{0.9cm} p{3.6cm} | p{3.6cm} | p{3.6cm} | p{2.5cm} | p{4.2cm} |} \hline $\mathbf{\hh^n}$ & & $\mathbf{[\varphi_{1,0},\ \ ]}$ & $\mathbf{[\psi_{1,0},\ \ ]}$ & $\mathbf{[\varphi_{1,0}+\psi_{1,0},\ \ ]}$ & $\mathbf{[\varphi_{1,0}-\psi_{1,0},\ \ ]}$ \\ \hline \hline $1$ & & $0$ & $0$ & $0$ & $0$ \\ \hline $\varepsilon_i$ & & $N$ & $-N$ & $0$ & $2N$ \\ \hline $f_i^s$ & & $s$ & $-s$ & $0$ & $2s$ \\ \hline \hline \hline $\chi_{n,\delta}$ & $\begin{cases} \mbox{with } \delta \geq 0 \\ \mbox{with } \delta<0 \end{cases} $ & $\begin{cases} -\delta m-
(\frac{n-\delta m}{2})N\\ -(\frac{n+\delta m}{2}) N \end{cases}$ & $\begin{cases} (\frac{n-\delta m}{2})N \\ -\delta m + (\frac{n+\delta m}{2}) N \end{cases}$ & $-\delta m$ & $\begin{cases} -\delta m- (n-\delta m)N \\ \delta m - (n+\delta m)N\end{cases}$ \vspace{0.5em}\\ \hline $\pi_{n,\delta}$ & $\begin{cases} \mbox{with } \delta \geq 0 \\ \mbox{with } \delta<0 \end{cases} $ & $\begin{cases} -\delta m- (\frac{n-\delta m - 2}{2})N\\ -(\frac{n+\delta m-2}{2}) N \end{cases}$ & $\begin{cases} (\frac{n-\delta m-2}{2})N \\ - \delta m + (\frac{n+\delta m-2}{2}) N \end{cases}$ & $-\delta m$ & $\begin{cases} -\delta m- (n-\delta m-2)N \\ \delta m - (n+\delta m-2)N \end{cases}$ \vspace{0.3em}\\ \hline $F_{n,i,s}$ & & $s-\frac{n}{2}N$ & $\frac{n}{2}N - s $ & $0$ & $2s-nN$ \\ \hline $\varphi_{n,\sigma}$ & & $N$ & $-\sigma m -N $ & $-\sigma m$ & $2N+\sigma m$ \\ \hline $\psi_{n,\tau}$ & & $-\tau m+N$ & $-N$ & $-\tau m$ & $2N-\tau m$ \\ \hline \hline \hline $\pi_{n,\delta}$ & $\begin{cases} \mbox{with } t=0, \ \delta= p \\ \mbox{with } t=0, \ \delta=- p \ \end{cases}$ & $\begin{cases} -\delta m +N\\ N \end{cases}$ & $\begin{cases} -N \\ - \delta m -N \end{cases}$ & $-\delta m$ & $\begin{cases} 2N-\delta m \\ 2N+\delta m \end{cases}$ \vspace{0.3em} \\ \hline $\varphi_{n,\sigma}$ & $\begin{cases} \mbox{with } \sigma \geq 0 \\ \mbox{with } \sigma < 0 \end{cases} $ & $\begin{cases} -\sigma m-\left(\frac{n-\sigma m-1}{2}\right)N \\ -\left(\frac{n+\sigma m-1}{2}\right)N \end{cases}$ & $\begin{cases} \left(\frac{n-\sigma m-1}{2}\right)N \\ -\sigma m + \left(\frac{n + \sigma m-1}{2}\right)N \end{cases}$ & $-\sigma m$ & $\begin{cases} -\sigma m - (n-\sigma m-1)N \\ \sigma m- (n+\sigma m-1)N \end{cases}$ \vspace{0.3em} \\ \hline $\psi_{n,\tau}$ & $\begin{cases} \mbox{with } \tau >0\\ \mbox{with } \tau \leq 0 \end{cases}$ & $\begin{cases} -\tau m-\left( \frac{n-\tau m-1}{2}\right) N\\ -\left(\frac{n+\tau m-1}{2}\right)N \end{cases}$ & $\begin{cases} \left(\frac{n-\tau m-1}{2}\right)N \\ 
-\tau m+\left(\frac{n+\tau m-1}{2}\right)N\end{cases}$ & $-\tau m$ & $\begin{cases} -\tau m- (n-\tau m-1)N \\ \tau m - (n+\tau m-1)N \end{cases}$ \vspace{0.3em} \\ \hline $E_{n,i,s}$ & & $s-\frac{n-1}{2}N$ & $\frac{n-1}{2}N - s $ & $0$ & $2s-(n-1)N$ \\ \hline \end{tabular} \end{tabular} \end{center} \end{table} \end{landscape} \section{Gerstenhaber brackets on $\hh^n(A)$} \label{general brackets} Let $X \in \hh^n(\alg)$ and $Y \in \hh^q(\alg)$ be elements in the $\ground$-basis for $\hh^{\bullet}(\alg)$ listed in~Sections~\ref{ss:0} and~\ref{ss:n}. In this section, we describe methods to compute the bracket $[X,Y]$, and we provide these brackets explicitly when $X,Y$ are algebra generators of $\hh^{\bullet}(\alg)$, as listed in~Sections~\ref{m even ring} and~\ref{m odd ring}. \begin{method}\label{method0} In Section~\ref{brackets HH1}, we showed that we can write \begin{align*} [\varphi_{1,0},X]&=aX, &[\varphi_{1,0},Y]&=bY,\\ [\varphi_{1,0}+\psi_{1,0},X]&=a'X, &[\varphi_{1,0}+\psi_{1,0},Y]&=b'Y, \end{align*} for some scalars $a,a',b,b' \in \ground$. Gerstenhaber brackets satisfy the Jacobi identity, so we have: $$ [\varphi_{1,0},[X,Y]]=[[\varphi_{1,0},X],Y]+[X,[\varphi_{1,0}, Y]]= (a+b)[X,Y], $$ and similarly, $$[\varphi_{1,0}+\psi_{1,0},[X,Y]]=(a'+b')[X,Y].$$ It follows that $[X,Y]$ is either zero or an eigenvector for $[\varphi_{1,0},-]$ and $[\varphi_{1,0}+\psi_{1,0},-]$, with eigenvalues $(a+b)$ and $(a'+b')$ respectively. If $[X,Y] \neq 0$, we can write $[X,Y] \in \hh^{n+q-1}(\alg)$ as a linear combination of basis elements with these exact eigenvalues. If there are no such basis elements, we know $[X,Y]=0$.\ \end{method} \begin{method}\label{methodc} Once we know that $[X,Y]$ is a linear combination of certain eigenvectors, we can often use the Poisson identity $$[xy,z] = [x,z]y + (-1)^{|x|(|z|-1)} x[y,z]$$ to compute the coefficients. For convenience, we list the cup products we use here, some of which already appear in~\cite[Theorem 4.8]{ST}. 
The starred $(*)$ cup products differ from the results given in~\cite[Theorem 4.8]{ST}. \\ \textbf{If $m$ is even, we have:} \begin{align*} \varepsilon_i\varphi_{1,0}&=0& \varphi_{1,0}^2&=0& E_{1,j,s}&=f_j^s \varphi_{1,0}& \chi_{n,0}\varphi_{1,0}&=\varphi_{n+1, 0} \\ \varepsilon_i\psi_{1,0}&=0& \psi_{1,0}^2&=0& E_{1,j,s}&=-f_j^s \psi_{1,0} (*) & \chi_{n,0}\psi_{1,0}&=\psi_{n+1,0}\\ \varepsilon_i\varphi_{m-1,-1}&=0& \varphi_{1,0}\varphi_{m-1,-1}&=0& E_{n+1,j,s}&=(-1)^{\frac{n}{2}j}E_{1,j,s}\chi_{n,0}& \chi_{n,0}\pi_{2,0} &=\pi_{n+2,0}\\ \varepsilon_i\psi_{m-1,1}&=0& \psi_{1,0}\psi_{m-1,1}&=0& F_{n,j,s}&=(-1)^{\frac{n}{2}j}f_j^s \chi_{n,0}& \chi_{n,0}\chi_{2,0}&=\chi_{n+2,0}\\ \varepsilon_if_j&=0& \varphi_{m-1, -1}\psi_{m-1,1}&=0& &&&\\ f_i f_j&=\delta_{i,j}f_i^2& \varphi_{1,0}\psi_{1,0}&=mN\pi_{2,0}& \pi_{2,0}&=(-1)^i\varepsilon_i\chi_{2,0} (*)& \chi_{m,1}\varphi_{m-1,-1} &=0\\ && \varphi_{1,0}\psi_{m-1,1}&=m\pi_{m,1} & \pi_{m, 1}&=(-1)^i\varepsilon_i\chi_{m,1}& \chi_{m,-1}\psi_{m-1,1} &=0\\ && \psi_{1,0}\varphi_{m-1,-1}&=-m\pi_{m,-1} (*)& \pi_{m, -1}&=(-1)^i\varepsilon_i\chi_{m,-1}& & \end{align*} for all $i,j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ and $1\leq s\leq N-1$, whenever both sides of the equation exist. \\ \textbf{If $m$ is odd, we have:} \begin{align*}\setlength{\columnsep}{5cm} f_i f_j&=\delta_{i,j}f_i^2 & E_{n,j,s} &=f_j^s \varphi_{n, 0} &F_{n,j,s}&=f_j^s\chi_{n,0}\\ \varepsilon_i\chi_{4,0}&=0 &\chi_{n, 0} \varphi_{1,0} & = \varphi_{n+1, 0} &F_{n,j,s} \varphi_{1,0} & = E_{n+1,j,s} \\ f_i\varphi_{1,0}&=-f_i\psi_{1,0} & \chi_{n, 0} \psi_{1,0} & = \psi_{n+1,0} & F_{n,j,s} \psi_{1,0} & = -E_{n+1,j,s} \end{align*} for all $i,j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ and $1\leq s\leq N-1$, whenever both sides of the equation exist. \end{method} In the following propositions, we describe the Gerstenhaber brackets among algebra generators of $\hh^{\bullet}(\alg)$, as listed in~Sections~\ref{m even ring} and~\ref{m odd ring}.
Recall that the brackets with the generators $\varphi_{1,0}$ and $\psi_{1,0}$ were already computed in~Section~\ref{brackets HH1}, Table~\ref{tab:diagonaleven} and~Table~\ref{tab:diagonalodd}. \begin{prop} \label{prop:bracket-m-even} Suppose $m \geq 3$ is even and let $i,j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$. Then, \begin{align*} [\varepsilon_i,Y] &=\begin{cases} \frac{(-1)^{i+1}}{m}(\varphi_{1,0}+\psi_{1,0}) & \text{ if } Y= \chi_{2,0} \\ (-1)^{i+1} \varphi_{m-1,-1} & \text{ if } Y = \chi_{m,-1} \\ (-1)^{i+1}\psi_{m-1,1} & \text{ if } Y = \chi_{m,1} \\ 0 & \text{ if } Y\in\{\varepsilon_j, f_j, \varphi_{m-1,-1}, \psi_{m-1,1}\} , \end{cases} \\ [f_i,Y]&=0 \text{ if } Y\in\{\varepsilon_j, f_j, \chi_{2,0}, \varphi_{m-1,-1}, \psi_{m-1,1}, \chi_{m,-1}, \chi_{m,1}\},\\ [\chi_{2,0},Y]&=\begin{cases} \chi_{m,-1} & \text{ if } Y= \varphi_{m-1,-1} \\ -\chi_{m,1} & \text{ if } Y= \psi_{m-1,1} \\ 0 & \text{ if } Y\in\{f_j, \chi_{2,0}, \chi_{m,-1}, \chi_{m,1}\} , \end{cases} \\ [\varphi_{m-1,-1},Y]&=0 \,\,\text{ if } Y\in\{\varepsilon_j, f_j, \varphi_{m-1,-1}, \psi_{m-1,1}, \chi_{m,-1}, \chi_{m,1}\}, \\ [\psi_{m-1,1},Y]&=0 \,\,\text{ if } Y\in\{\varepsilon_j, f_j, \varphi_{m-1,-1}, \psi_{m-1,1}, \chi_{m,-1}, \chi_{m,1}\},\\ [\chi_{m,-1},Y]&=0 \,\,\text{ if } Y\in\{f_j, \chi_{2,0}, \varphi_{m-1,-1}, \psi_{m-1,1}, \chi_{m,-1}, \chi_{m,1}\},\\ [\chi_{m,1},Y]&=0 \,\,\text{ if } Y\in\{f_j, \chi_{2,0}, \varphi_{m-1,-1}, \psi_{m-1,1}, \chi_{m,-1}, \chi_{m,1}\}. \end{align*} \end{prop} \begin{proof} All brackets in the proposition can be computed by using~Methods~\ref{method0} and~\ref{methodc}. We provide a few examples of the computations. \begin{itemize}[leftmargin=*] \item $[\varepsilon_i, \varphi_{m-1,-1}]=0$:\\ The eigenvalues for $\varepsilon_i$ and $\varphi_{m-1,-1}$ under $[\varphi_{1,0}+\psi_{1,0},-]$ are $0$ and $m$ respectively. Hence, $[\varepsilon_i, \varphi_{m-1,-1}]\in \hh^{m-2}(\alg)$ is either zero or an eigenvector with eigenvalue~$m$. 
Since $\hh^{m-2}(\alg)$ has basis elements $\{\chi_{m-2,0}, \pi_{m-2,0}, F_{m-2,j,s}\}_{j,s}$, all with eigenvalue $0$ under $[\varphi_{1,0}+\psi_{1,0},-]$, we conclude that $[\varepsilon_i, \varphi_{m-1,-1}]=0$. \\ \item $[\varepsilon_i,\chi_{2,0}]=\frac{(-1)^{i+1}}{m}(\varphi_{1,0}+\psi_{1,0})$:\\ The eigenvalues for $\varepsilon_i$ and $\chi_{2,0}$ under $[\varphi_{1,0},-]$ are $N$ and $-N$ respectively. Hence, $[\varepsilon_i,\chi_{2,0}]$ is either zero or an eigenvector with eigenvalue $0$. Since $\hh^{1}(\alg)$ has basis elements $\{\varphi_{1,0}, \psi_{1,0}, E_{1,j,s}\}_{j,s}$, and $E_{1,j,s}$ has eigenvalue $s\neq 0$ under $[\varphi_{1,0},-]$, we know that $[\varepsilon_i,\chi_{2,0}]$ is a linear combination of $\varphi_{1,0}$ and $\psi_{1,0}$, say $[\varepsilon_i,\chi_{2,0}]=a\varphi_{1,0}+b\psi_{1,0}$. \\ Using the cup products listed in~Method~\ref{methodc}, we can compute $$0=[\varepsilon_i\varphi_{1,0}, \chi_{2,0}]= [\varepsilon_i, \chi_{2,0}]\varphi_{1,0}+\varepsilon_i[\varphi_{1,0}, \chi_{2,0}] =-bmN\pi_{2,0}-\varepsilon_i N \chi_{2,0} =(-bm-(-1)^i)N\pi_{2,0},$$ and similarly, $0=[\varepsilon_i\psi_{1,0}, \chi_{2,0}] =(am+(-1)^i)N\pi_{2,0}.$ It follows that $a=b=\frac{(-1)^{i+1}}{m}$. \\ \item $[\varepsilon_i,\chi_{m,1}]=(-1)^{i+1}\psi_{m-1,1}$:\\ By Method~\ref{method0}, we find that $[\varepsilon_i,\chi_{m,1}]=a\psi_{m-1,1}$ for some $a\in\ground$. Moreover, we see that $a=(-1)^{i+1}$, using the cup products listed in~Method~\ref{methodc}: $$0=[\varepsilon_i\varphi_{1,0}, \chi_{m,1}]= [\varepsilon_i, \chi_{m,1}]\varphi_{1,0}+\varepsilon_i[\varphi_{1,0}, \chi_{m,1}] =-am\pi_{m,1}-\varepsilon_i m\chi_{m,1} =(-a-(-1)^i)m\pi_{m,1}.$$ \item $[\chi_{2,0},\varphi_{m-1,-1}]=\chi_{m,-1}$:\\ By Method~\ref{method0}, we find that $[\chi_{2,0},\varphi_{m-1,-1}]=a\chi_{m,-1}$ for some $a\in\ground$. 
Moreover, $a=1$ since \begin{align*} 0&=[\chi_{2,0},\varepsilon_0\varphi_{m-1,-1}]= \varphi_{m-1,-1}[\varepsilon_0,\chi_{2,0}] +\varepsilon_0[\chi_{2,0},\varphi_{m-1,-1}] \\ &=-\varphi_{m-1,-1}\frac{\varphi_{1,0}+\psi_{1,0}}{m} +a\varepsilon_0 \chi_{m,-1} =(-1+a)\pi_{m,-1}. \end{align*} \item $[\varphi_{m-1,-1},\chi_{m,1}]=0$:\\ By Method~\ref{method0}, we find that $[\varphi_{m-1,-1},\chi_{m,1}]=a\chi_{2m-2,0}$ for some $a\in\ground$. We see $a=0$ since \begin{align*} 0&=[\varphi_{1,0}\varphi_{m-1,-1},\chi_{m,1}]= [\varphi_{1,0},\chi_{m,1}]\varphi_{m-1,-1} -\varphi_{1,0}[\varphi_{m-1,-1},\chi_{m,1}] \\ &=-m\chi_{m,1}\varphi_{m-1,-1} -a\varphi_{1,0}\chi_{2m-2,0} =-a\varphi_{2m-1,0}. \end{align*} \item $[\varphi_{m-1,-1},\psi_{m-1,1}]=0$:\\ By Method~\ref{method0}, we find that $[\varphi_{m-1,-1},\psi_{m-1,1}]=a \varphi_{2m-3,0} + b \psi_{2m-3,0}$ for some $a,b\in\ground$. Using that $\varphi_{2m-3,0}=\chi_{2m-4,0}\varphi_{1,0}$ and $\psi_{2m-3,0}=\chi_{2m-4,0}\psi_{1,0}$, we find \begin{align*} 0&=[\varphi_{1,0}\varphi_{m-1,-1},\psi_{m-1,1}]= [\varphi_{1,0},\psi_{m-1,1}]\varphi_{m-1,-1} +\varphi_{1,0}[\varphi_{m-1,-1},\psi_{m-1,1}] \\ &=(N-m)\psi_{m-1,1}\varphi_{m-1,-1} +b \varphi_{1,0}\chi_{2m-4,0}\psi_{1,0} =b mN \pi_{2,0}\chi_{2m-4,0} =b mN \pi_{2m-2,0}. \end{align*} Similarly, $0=[\varphi_{m-1,-1},\psi_{1,0}\psi_{m-1,1}] =a mN \pi_{2m-2,0}$. It follows that $a=b=0$. \\ \item $[\chi_{m,-1}, \chi_{m,1}]=0$: \\ The eigenvalues for $\chi_{m,-1}$ and $\chi_{m,1}$ under $[\varphi_{1,0}+\psi_{1,0},-]$ are $m$ and $-m$ respectively. Hence, the bracket $[\chi_{m,-1}, \chi_{m,1}]\in \hh^{2m-1}(\alg)$ is either zero or an eigenvector with eigenvalue~$0$. The only basis elements in $\hh^{2m-1}(\alg)$ with eigenvalue $0$ under $[\varphi_{1,0}+\psi_{1,0},-]$ are $\left\{\varphi_{2m-1,0}, \psi_{2m-1,0}, E_{2m-1,j,s}\right\}_{j,s}$. \\ Now, the eigenvalues for $\chi_{m,-1}$ and $\chi_{m,1}$ under $[\varphi_{1,0},-]$ are $0$ and $-m$ respectively.
Hence, $[\chi_{m,-1}, \chi_{m,1}]$ is either zero or an eigenvector with eigenvalue $-m$. Since $\varphi_{2m-1,0}$ and $\psi_{2m-1,0}$ have eigenvalue $-(m-1)N\neq -m$ under $[\varphi_{1,0},-]$, and $E_{2m-1,j,s}$ has eigenvalue $s-(m-1)N\neq -m$ under $[\varphi_{1,0},-]$, we conclude that $[\chi_{m,-1}, \chi_{m,1}]$ is zero. \end{itemize} \end{proof} \begin{prop}\label{prop:bracket-m-odd} Suppose $m \geq 3$ is odd. The brackets $[X,Y]$ equal zero for the generators $$X,Y \in\left\{\varepsilon_i,f_i,F_{2,i,1}, \chi_{4,0}, \varphi_{m-1,-1}, \psi_{m-1,1}, \chi_{2m,2}, \chi_{2m,-2}\mid i\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}\right\}.$$ \end{prop} \begin{proof} All brackets in the proposition other than $[\varepsilon_i, F_{2,j,1}]$ for $i,j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ can be computed using~Method~\ref{method0}. To show $[\varepsilon_i, F_{2,j,1}]=0$, we first use Method~\ref{method0} to write $[\varepsilon_i, F_{2,j,1}]=\sum_{k\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}}a_k E_{1,k,1}$ for some~$a_k\in \ground$. Noting that $\varepsilon_i\chi_{4,0}=0$, we find $$0=[\varepsilon_i\chi_{4,0}, F_{2,j,1}] =\chi_{4,0}[\varepsilon_i, F_{2,j,1}] +\varepsilon_i[\chi_{4,0}, F_{2,j,1}] =\sum_{k\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}}a_k \chi_{4,0} E_{1,k,1} =\sum_{k\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}}a_k E_{5,k,1}.$$ It follows that $a_k=0$ for all $k\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$. Hence, $[\varepsilon_i, F_{2,j,1}]=0$. \end{proof} \section{Gerstenhaber brackets with $E_{1,j,s}$} \label{E brackets} In Section~\ref{cohomology}, we described a $\ground$-linear basis of $\hh^n(\alg)$ for every $n \geq 0$. 
In particular, the basis for $\hh^1(\alg)$ was given by $$\left\{\varphi_{1,0}, \,\psi_{1,0}, \,E_{1,j,s} \mid j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, \,1\leq s\leq N-1\right\}.$$ The brackets of $\varphi_{1,0}$ and $\psi_{1,0}$ with $\ground$-basis elements of $\hh^n(\alg)$ were computed in Section~\ref{brackets HH1}. In this section, we compute the brackets of $E_{1,j,s}\in\hh^{1}(\alg)$ with all basis elements of $\hh^n(\alg)$. This allows us to give a complete description of the Lie structure of $\hh^1(\alg)$ and its action on $\hh^n(\alg)$ in Section~\ref{sec:lie-structure}. \begin{prop}\label{prop:bracket-m-even-E} Suppose $m \geq 3$ is even. Let $i,j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, $1\leq s,r\leq N-1$, and let indices $\alpha, \,\gamma,\,\beta$ be as in Sections~\ref{subsec:m even n even} and~\ref{subsec:m even n odd}. We write $E_{n,j,r}:=0$ and $F_{n,j,r}:=0$ for~$r\geq N$. Then, \begin{align*} [E_{1,j,s}, \varepsilon_i]&=0\\ [E_{1,j,s}, f_i]&=\delta_{i,j} f_j^{s+1}\\ [E_{1,j,s},\chi_{n,\alpha}] &= \begin{cases} -(-1)^{\frac{n}{2}j}\frac{n}{2}N F_{n,j,s} & \mbox{ if } \alpha = 0\\ 0 & \mbox{ if } \alpha \neq 0, \end{cases} \\ [E_{1,j,s},\pi_{n,\alpha}]&= 0, \\ [E_{1,j,s},F_{n,i,r}] &=\delta_{i,j} \left(r-\frac{n}{2}N\right)F_{n,j,s+r}, \\ [E_{1,j,s}, \varphi_{n,\gamma}] &= \begin{cases} -(-1)^{\frac{n-1}{2}j}(s+\frac{n-1}{2}N) E_{n,j,s} & \mbox{ if } \gamma = 0\\ 0 & \mbox{ if } \gamma \neq 0, \end{cases} \\ [E_{1,j,s}, \psi_{n,\beta}] &= \begin{cases} (-1)^{\frac{n-1}{2}j}(s+\frac{n-1}{2}N) E_{n,j,s} & \mbox{ if } \beta = 0 \\ 0 & \mbox{ if } \beta \neq 0 ,\end{cases} \\ [E_{1,j,s}, E_{n,i,r}] &= \delta_{i,j}\left(r-s-\frac{n-1}{2}N\right)E_{n,j,s+r}. \end{align*} \end{prop} \begin{proof} The brackets of $E_{1,j,s}$ with $\chi_{n,\alpha}, \,\pi_{n,\alpha},\, \pi_{n,0},\,\varphi_{n,\gamma}, \,\psi_{n,\beta}$ for $\alpha,\,\gamma,\,\beta\neq 0$ can be computed by using~Method~\ref{method0}. 
The remaining brackets in the proposition follow by referring to the cup products in~Method~\ref{methodc}. Indeed, for $\alpha\in\hh^n(A)$, the Poisson identity tells us that \begin{align*} [E_{1,j,s}, \alpha]=[f_j^s \varphi_{1,0}, \alpha] = [f_j^s, \alpha]\varphi_{1,0} + f_j^s[\varphi_{1,0},\alpha], \end{align*} which we use in the first two computations below: \begin{itemize}[leftmargin=*] \item $[ E_{1,j,s},\chi_{n, 0}] =-(-1)^{\frac{n}{2}j}\frac{n}{2}N F_{n,j,s}:$\\ Since $[f_j^s , \chi_{n, 0}]=[f_j^s ,\chi^{n/2}_{2, 0}]=0$ and $f_j^s \chi_{n, 0} = (-1)^{\frac n 2 j} F_{n,j,s} $, we find $$[ E_{1,j,s},\chi_{n, 0}] =[ f_j^s , \chi_{n, 0}] \varphi_{1,0} + f_j^s [\varphi_{1,0},\chi_{n, 0}] =f_j^s [\varphi_{1,0},\chi_{n, 0}] =- \frac n 2 N f_j^s \chi_{n, 0} = -(-1)^{\frac n 2 j} \frac n 2 N F_{n,j,s}.$$ \item $[ E_{1,j,s},F_{n,i,r}] = \delta_{i,j} \left(r-\frac{n}{2}N\right)F_{n,j,s+r}$:\\ Note that $[ f_j^s , F_{n,i,r}] = [ f_j^s , (-1)^{\frac n 2 i} f_i^r \chi_{n,0}] = 0$ because $ [ f_j^s , f_i^r ] =0$ and $ [ f_j^s , \chi_{n,0}] =0$. Hence, $$[ E_{1,j,s},F_{n,i,r}] = [ f_j^s , F_{n,i,r}] \varphi_{1,0} + f_j^s[\varphi_{1,0},F_{n,i,r}] = \left(r-\frac{n}{2}N\right) f_j^sF_{n,i,r} =\delta_{i,j} \left(r-\frac{n}{2}N\right)F_{n,j,s+r}.$$ \item $[ E_{1,j,s},\varphi_{n, 0}] =-(-1)^{\frac{n-1}{2}j}\left(s+\frac{n-1}{2}N\right) E_{n,j,s}$:\\ Since $\varphi_{n,0}=\chi_{n-1,0}\varphi_{1,0}$, we see \begin{align*}[ E_{1,j,s},\varphi_{n, 0}] &=[ E_{1,j,s},\chi_{n-1,0}]\varphi_{1, 0} -\chi_{n-1,0}[ \varphi_{1, 0},E_{1,j,s}] \\ &=-(-1)^{\frac{n-1}{2}j}\left( \frac{n-1}{2}\right) N F_{n-1,j,s} \varphi_{1, 0} -s\chi_{n-1,0} E_{1,j,s}\\ &=-(-1)^{\frac{n-1}{2}j}\left( \frac{n-1}{2}\right) NE_{n,j,s} -(-1)^{\frac{n-1}{2}j}sE_{n,j,s}. 
\end{align*} \item $[ E_{1,j,s},\psi_{n, 0}] = (-1)^{\frac{n-1}{2} j }\left(s+\frac{n-1}{2}N \right) E_{n,j,s}$:\\ Since $\psi_{n,0}=\chi_{n-1,0}\psi_{1,0}$, we see \begin{align*}[ E_{1,j,s},\psi_{n, 0}] &=[ E_{1,j,s},\chi_{n-1,0}]\psi_{1, 0} -\chi_{n-1,0}[ \psi_{1, 0},E_{1,j,s}] \\ &=-(-1)^{\frac{n-1}{2}j}\left( \frac{n-1}{2}\right) N F_{n-1,j,s} \psi_{1, 0} +s\chi_{n-1,0} E_{1,j,s}\\ &=(-1)^{\frac{n-1}{2}j}\left( \frac{n-1}{2}\right) NE_{n,j,s} +(-1)^{\frac{n-1}{2}j}sE_{n,j,s}. \end{align*} \item $[ E_{1,j,s}, E_{n,i,r}] = \delta_{i,j}\left(r-s-\frac{n-1}{2}N\right)E_{n,j,s+r}$:\\ Since $E_{n,i,r}=F_{n-1,i,r}\varphi_{1,0}$, we see \begin{align*} [ E_{1,j,s}, E_{n,i,r}] &= [ E_{1,j,s}, F_{n-1,i,r}] \varphi_{1,0} - F_{n-1,i,r} [\varphi_{1,0}, E_{1,j,s}] \\ &=\delta_{i,j} \left(r-\frac{n-1}{2}N\right)F_{n-1,j,s+r}\varphi_{1,0} -sF_{n-1,i,r}E_{1,j,s}\\ &=\delta_{i,j} \left(r-\frac{n-1}{2}N\right) E_{n,j,s+r} -\delta_{i,j} s E_{n,j ,s+r} = \delta_{i,j} \left(r-s -\frac {n-1}{2} N \right) E_{n,j,s+r} . \end{align*} \end{itemize} This proves the proposition. \end{proof} \begin{prop}\label{prop:bracket-m-odd-E} Suppose $m \geq 3$ is odd. Let $i,j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, $1\leq s,\,r\leq N-1$ and let indices $\delta, \,\sigma,\,\tau$ be as in Sections~\ref{subsec:m odd n even} and~\ref{subsec:m odd n odd}. We write $E_{n,j,r}:=0$ and $F_{n,j,r}:=0$ for~$r\geq N$. 
Then, \begin{align*} [E_{1,j,s}, \varepsilon_i]&=0\\ [E_{1,j,s}, f_i]&=\delta_{i,j} f_j^{s+1}\\ [E_{1,j,s},\chi_{n,\delta}] &= \begin{cases} -\frac{n}{2}N F_{n,j,s} &\mbox{ if } \delta = 0\\ 0&\mbox{ if } \delta \neq 0, \end{cases} \\ [E_{1,j,s},\pi_{n,\delta}]&= 0, \\ [E_{1,j,s},F_{n,i,r}] &=\delta_{i,j} \left(r-\frac{n}{2}N\right)F_{n,j,s+r}, \\ [E_{1,j,s}, \varphi_{n,\sigma}] &= \begin{cases} -\left( s+\frac{n-1}{2}N\right) E_{n,j,s} &\mbox{ if } \sigma = 0 \\ 0& \mbox{ if } \sigma \neq 0 , \end{cases} \\ [E_{1,j,s}, \psi_{n,\tau}] &= \begin{cases} \left( s+\frac{n-1}{2}N\right) E_{n,j,s}&\mbox{ if } \tau = 0 \\ 0& \mbox{ if } \tau \neq 0 , \end{cases} \\ [E_{1,j,s}, E_{n,i,r}] &= \delta_{i,j}\left(r-s-\frac{n-1}{2}N\right)E_{n,j,s+r}. \end{align*} \end{prop} \begin{proof} The brackets of $E_{1,j,s}$ with $\chi_{n,\delta}, \,\pi_{n,\delta},\, \pi_{n,0}, \,\varphi_{n,\sigma}, \,\psi_{n,\tau}$ for $\delta,\,\sigma,\,\tau\neq 0$ can be computed using~Method~\ref{method0}. The remaining brackets in the proposition follow by referring to the cup products in~Method~\ref{methodc}. Again, for $\alpha\in\hh^n(A)$, the Poisson identity tells us that \begin{align*} [E_{1,j,s}, \alpha]=[f_j^s \varphi_{1,0}, \alpha] = [f_j^s, \alpha]\varphi_{1,0} + f_j^s[\varphi_{1,0},\alpha]. \end{align*} Moreover, note that $[f_j,\alpha]=0$ for all $\alpha\in \hh^n(\alg)$ with $n$ even. This follows because $[f_j,-]$ is zero on all generators of $\hh^\bullet(\alg)$ in even degree, and since $$[\varphi_{1,0}\psi_{1,0},f_j]=[\varphi_{1,0},f_j]\psi_{1,0}-\varphi_{1,0}[\psi_{1,0},f_j]=f_j\psi_{1,0}+f_j\varphi_{1,0}=0.$$ The rest of the proof follows as in the proof of~Proposition~\ref{prop:bracket-m-even-E}. \end{proof} We summarize the brackets $[E_{1,j,s},\hh^n(\alg)]$ in Table~\ref{tab:Ebrackets}. As before, we refer the reader to Section~\ref{ss:n} for explicit basis elements of $\hh^n(\alg)$ for $m$ even and $m$ odd, and we let $i \in \mathbb{Z}/m\mathbb{Z}$. 
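Not part of the paper's argument, but a useful consistency check: specializing the last bracket formula above to $n=1$ gives $[E_{1,j,s}, E_{1,i,r}] = \delta_{i,j}(r-s)E_{1,j,s+r}$, with $E_{1,j,r}:=0$ for $r\geq N$, so each family $\{E_{1,j,s}\}_s$ carries truncated Witt-type structure constants, and elements with different values of $j$ commute. The following sketch (with small hypothetical values of $m$ and $N$) verifies that these structure constants satisfy the Jacobi identity.

```python
# Consistency check (not from the paper): the structure constants
#   [E_{1,j,s}, E_{1,i,r}] = delta_{ij} (r - s) E_{1,j,s+r},
# truncated to zero when s + r > N - 1, satisfy the Jacobi identity.
from itertools import product

m, N = 3, 5  # small hypothetical parameters for the check
basis = [(j, s) for j in range(m) for s in range(1, N)]

def bracket(x, y):
    """Bracket of elements given as dicts {(j, s): coefficient}."""
    out = {}
    for ((j, s), a), ((i, r), b) in product(x.items(), y.items()):
        if i == j and s + r <= N - 1:  # truncation: E_{1,j,s+r} = 0 past N - 1
            out[(j, s + r)] = out.get((j, s + r), 0) + a * b * (r - s)
    return {k: v for k, v in out.items() if v}

def add(*terms):
    out = {}
    for t in terms:
        for k, v in t.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

jacobi_ok = all(
    not add(bracket({x: 1}, bracket({y: 1}, {z: 1})),
            bracket({y: 1}, bracket({z: 1}, {x: 1})),
            bracket({z: 1}, bracket({x: 1}, {y: 1})))
    for x, y, z in product(basis, repeat=3)
)
print(jacobi_ok)  # True
```

The truncation causes no failure of the Jacobi identity: whenever a pairwise sum of the $s$-indices exceeds $N-1$, so does the total, so the discarded terms on the two sides always match.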
\FloatBarrier {\small \begin{table}[!htp] \caption{Brackets $[E_{1,j,s},\hh^n(\alg)]$} \label{tab:Ebrackets} \vspace{.1cm} \begin{tabular}{c@{}l} $\begin{array}{r} \vspace{-0.5ex}\\ \MyLBrace{7.5ex}{$n=0$} \vspace{1.5ex}\\ \vspace{1.5ex} \MyLBrace{12.5ex}{$n$ even} \\ \MyLBrace{13ex}{$n$ odd} \end{array}$ & \begin{tabular}{| p{0.65cm} p{3.55cm} | p{4.55cm} | p{4.55cm} |} \hline $\mathbf{\hh^n}$ & & $[E_{1,j,s}, - ]$, $m$ even & $[E_{1,j,s}, - ]$, $m$ odd \\ \hline \hline $1$ & & $0$ & $0$ \\ \hline $\varepsilon_i$ & & $0$ & $0$ \\ \hline $f_i^r$ & $\begin{cases} \mbox{for } 1 \leq r \leq N-s \\ \mbox{otherwise } \end{cases} $ & $\begin{cases} \delta_{i,j}r f_i^{r+s} \\ 0 \end{cases} $ & $\begin{cases} \delta_{i,j} rf_i^{r+s} \\ 0 \end{cases} $ \\ \hline \hline \hline $\chi_{n,\delta}$ & $\begin{cases} \mbox{for }\delta = 0 \\ \mbox{otherwise } \end{cases} $ & $\begin{cases} -(-1)^{\frac n 2 j} \frac n 2 N F_{n,j,s} \\ 0 \end{cases}$ & $\begin{cases} - \frac n 2 N F_{n,j,s} \\ 0 \end{cases}$ \vspace{0.5em}\\ \hline $\pi_{n,\delta}$ & & 0 & 0 \vspace{0.3em}\\ \hline $F_{n,i,r}$ & $\begin{cases} \mbox{for } 1 \leq r \leq N-s-1 \\ \mbox{otherwise } \end{cases} $ & $\begin{cases} \delta_{i,j} ( r- \frac n 2 N) F_{n,j,r+s} \\ 0 \end{cases}$ & $\begin{cases} \delta_{i,j} ( r- \frac n 2 N) F_{n,j,r+s} \\ 0 \end{cases}$ \\ \hline $\varphi_{n,\sigma}$ & & -------------- & $0$ \\ \hline $\psi_{n,\tau}$ & & -------------- & 0 \\ \hline \hline \hline $\pi_{n,\delta}$ & & -------------- & 0 \vspace{0.3em} \\ \hline $\varphi_{n,\sigma}$ & $\begin{cases} \mbox{for } \sigma =0 \\ \mbox{otherwise } \end{cases} $ & $\begin{cases} - (-1)^{\frac{n-1}{2} j } (s +\frac {n-1}{ 2} N ) E_{n,j,s} \\ 0 \end{cases}$ & $\begin{cases} - (s +\frac {n-1}{ 2} N ) E_{n,j,s} \\ 0 \end{cases}$ \vspace{0.3em} \\ \hline $\psi_{n,\tau}$ & $\begin{cases} \mbox{for } \tau =0\\ \mbox{otherwise } \end{cases} $ & $\begin{cases} (-1)^{\frac{n-1}{2} j } (s + \frac {n-1}{ 2} N ) E_{n,j,s} \\ 0 \end{cases}$ &
$\begin{cases} (s +\frac {n-1}{ 2} N ) E_{n,j,s} \\ 0 \end{cases}$ \vspace{0.3em} \\ \hline $E_{n,i,r}$ & $\begin{cases} \mbox{for } 1 \leq r \leq N-s-1 \\ \mbox{otherwise } \end{cases} $ & $\begin{cases} \delta_{i,j} (-s + r -\frac {n-1}{ 2} N ) E_{n,j,r+s} \\ 0 \end{cases}$ & $\begin{cases} \delta_{i,j} (-s + r -\frac {n-1}{ 2} N ) E_{n,j,r+s} \\ 0 \end{cases}$ \\ \hline \end{tabular}. \end{tabular} \end{table} } \FloatBarrier \section{The Lie algebra $\hh^1(\alg)$ and the Lie modules $\hh^n(\alg)$} \label{sec:lie-structure} \numberwithin{equation}{subsection} In this section, we describe the Lie structure of the first Hochschild cohomology space $\hh^1(\alg)$ and the Lie module structure of $\hh^n(\alg)$ over $\hh^1(\alg)$. \subsection{The Lie structure of $\hh^1(\alg)$} Recall that $\hh^1(\alg)$ has a $\ground$-basis given by $$\left\{\varphi_{1,0}, \,\psi_{1,0}, \,E_{1,j,s} \mid j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, \,1\leq s\leq N-1\right\}.$$ We write $$C:=\frac12(\varphi_{1,0}+\psi_{1,0}) \qquad \text{ and } \qquad E_0:=\frac12(\varphi_{1,0}-\psi_{1,0}).$$ The brackets among these elements of $\hh^1(\alg)$ are given by \begin{align*} [C,E_0] & =0, \\ [C,E_{1,j,s}] & =0, \\ [E_0,E_{1,j,s}] & =sE_{1,j,s},\\ [E_{1,j,s},E_{1,i,r}] & = \begin{cases} \delta_{i,j}(r-s)E_{1,j,s+r} & \hbox{if } s+r \le N-1, \\ 0 & \hbox{ otherwise.} \end{cases} \end{align*} \begin{prop} The center of the Lie algebra $\hh^1(\alg)$ is given by the $\ground$-span of $C$. In particular, the center is one-dimensional. \end{prop} \begin{proof} Direct computations of the brackets $[C,-]$ on a basis of $\hh^1(\alg)$ imply that $C$ belongs to the center. In order to prove that $C$ generates the center as a vector space, we consider an element in the center and write it as a linear combination of $C$, $E_0$ and the $E_{1,j,s}$'s. Computing the bracket with $E_0$, we see that the coefficient of each $E_{1,j,s}$ must be zero. 
Computing the brackets with each $E_{1,j,s}$, we see that the coefficient of $E_0$ must also be zero. \end{proof} \begin{rem} From the bracket computations, we see that $\hh^1(\alg)$ is a solvable Lie algebra, since its derived series stops after at most $\lfloor N/2\rfloor$ steps. \end{rem} We note that $\hh^1(\alg)$ has a Lie subalgebra with $\ground$-basis $\{E_0, \, E_{1,j,s} \mid 1\le s\le N-1\}$ for each fixed $j \in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$. We will show that each of these Lie subalgebras is isomorphic to a subquotient of the Virasoro algebra. \begin{definition} Recall that the \textbf{Virasoro algebra} $\textnormal{Vir}$ is the universal central extension of the Witt algebra, with generators $L_s$ and $c$, where $c$ is central and $s \in \ensuremath{{\mathbb Z}}$, and brackets given by $$[L_s,L_r]=(r-s)L_{s+r}+\delta_{s+r,0}\frac{s^3-s}{12}c.$$ Denote the Lie subalgebra generated by all $L_s$ for $s \ge0$ by $\textnormal{Vir}_+$. For $q \ge0$, let $\textnormal{Vir}_{> q}$ be the Lie ideal of $\textnormal{Vir}_+$ generated by all $L_s$ with $s > q$, and write $\mathfrak{a}_q$ for the subquotient $\mathfrak{a}_q:=\textnormal{Vir}_+/\textnormal{Vir}_{>q}$. We denote the residue classes in $\mathfrak{a}_q$ of the generators of $\textnormal{Vir}$ again by $L_s$. \end{definition} This subquotient $\mathfrak{a}_q$ of the Virasoro algebra was studied in \cite{LLZ, Mazorchuk-Zhao}. It inherits a grading from the standard $\ensuremath{{\mathbb Z}}$-grading of the Virasoro algebra, in which $L_s$ has degree equal to $s \in\ensuremath{{\mathbb Z}}$ and $c$ has degree~$0$. In order to avoid confusion with the cohomological degree, we refer to this grading as the Virasoro grading. \begin{rem} A classification of the irreducible modules over $\mathfrak{a}_1$ was given in \cite{Block, Mazorchuk}, and a classification of the irreducible modules over $\mathfrak{a}_2$ was obtained in \cite{Mazorchuk-Zhao}.
For $\mathfrak{a}_r$ with $r\ge 3$, the classification problem is open. We note that the Lie algebras of dimensions $2$ and $3$ are completely classified up to isomorphism, see for example \cite{agaoka}. In this classification, $\mathfrak{a}_1$ is the unique -- up to isomorphism -- non-abelian Lie algebra of dimension $2$, denoted by $\mathfrak{aff}(2)$. In dimension~$3$, six types of algebras appear, one of which is an infinite family $\mathfrak{\tau}_{\alpha}$ depending on a complex nonzero parameter $\alpha$, and $\mathfrak{a}_2$ is isomorphic to the Lie algebra $\mathfrak{\tau}_2$, see~\cite{agaoka}. \end{rem} By mapping $L_0$ to $E_0$ and $L_s$ to $E_{1,j,s}$ for $1\le s\le N-1$, we obtain an isomorphism between the subquotient $\mathfrak{a}_{N-1}$ of the Virasoro algebra and a Lie subalgebra of $\hh^1(\alg)$: \begin{prop} For each $j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, there is an isomorphism of Lie algebras $$\mathfrak{a}_{N-1} \cong \langle E_0,E_{1,j,s}\ |\ 1\le s\le N-1\rangle.$$ \end{prop} Here and in what follows we use the notation $\langle \ldots \rangle$ to denote the $\ground$-linear span of the given elements. We conclude that $\hh^1(\alg)$ contains $m$ copies of the Lie algebra $\mathfrak{a}_{N-1}$, which share Virasoro degree $0$ and commute otherwise. We have the following result: \begin{thm} \label{th:vir} Let $N \geq 1$ and $m \geq 3$. There is an embedding of Lie algebras \begin{align*} \hh^1(\alg)\ &\hookrightarrow \ \langle c \rangle \oplus \left(\bigoplus_{j=0}^{m-1} \mathfrak{a}_{N-1}(j) \right) \\ E_{1,j,s}\ &\mapsto\ L_s(j)\\ E_0\ &\mapsto\ \sum_{j=0}^{m-1} L_0(j)\\ C\ &\mapsto\ c, \end{align*} where $L_0(j), L_s(j)$ are in $\mathfrak{a}_{N-1}(j) = \mathfrak{a}_{N-1}$ for all $j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, and $c$ is a central element. 
More precisely, $\hh^1(\alg)$ is the pullback of the following diagram of Lie algebras: $$(\langle c \rangle \oplus\mathfrak{a}_{N-1}(j) \twoheadrightarrow \langle c, L_0 \rangle)_{0\leq j \leq m-1},$$ where $\langle c, L_0 \rangle$ is a commutative Lie algebra. \end{thm} \subsection{Decomposition of $\hh^n(\alg)$ as an $\hh^1(\alg)$-module when $n\geq 1$} In~Proposition~\ref{diagonal}, we showed that the elements $C, E_0$ in $\hh^1(\alg)$ act diagonally on a $\ground$-basis of $\hh^n(\alg)$, see Table~\ref{tab:diagonaleven} and~Table~\ref{tab:diagonalodd}. The action of $E_{1,j,s}$ on this basis is given in Table~\ref{tab:Ebrackets}. We thus obtain the following decomposition of $\hh^n(\alg)$ into indecomposable summands as a module over $\hh^1(\alg)$. \begin{thm} Let $m \geq 3$ be even and $n\geq 1$. Write $n=pm+t$ with $p\geq 0$ and $0\le t\le m-1$ as before. We can decompose $\hh^n(\alg)$ into indecomposable summands over $\hh^1(\alg)$ as follows: If $n$ is even, $$\hh^n(A) = \displaystyle \bigoplus_{\substack{-p\le \alpha\le p\\ \alpha\neq 0}} \langle \chi_{n,\alpha}\rangle \oplus \bigoplus_{-p\le \alpha\le p} \langle \pi_{n,\alpha}\rangle \ \oplus\ \langle \chi_{n,0}, F_{n,j,s} \mid j, s \rangle,$$ where $j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ and $1\leq s\leq N-1$.\\ If $n$ is odd, $$ \hh^n(A) = \begin{cases} \displaystyle \bigoplus_{\substack{-p\le \beta\le p\\ \beta \neq 0}} \langle \psi_{n,\beta}\rangle \oplus \bigoplus_{\substack{-p\le \gamma\le p\\ \gamma\neq 0}} \langle \varphi_{n,\gamma}\rangle \ \oplus\ \langle \varphi_{n,0} + \psi_{n,0}\rangle \ \oplus\ \langle \varphi_{n,0}, E_{n,j,s} \mid j, s \rangle & \text{ if } t \neq m-1,\\ \\ \displaystyle \!\bigoplus_{\substack{-p\le \beta \le p+1\\ \beta\neq 0}} \!\!\!\langle \psi_{n,\beta}\rangle \oplus \!\!\!\bigoplus_{\substack{-p-1\le \gamma\le p\\ \gamma\neq 0}} \!\langle \varphi_{n,\gamma}\rangle \ \oplus\ \langle \varphi_{n,0} + \psi_{n,0} \rangle \ \oplus\ \langle \varphi_{n,0}, 
E_{n,j,s} \mid j, s \rangle & \text{ if } t=m-1, \end{cases} $$ where $j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ and $1\leq s\leq N-1$. \end{thm} When $m$ is odd, Remark~\ref{rem:indexchi} and Remark~\ref{rem:indexphi} show $\varphi_{n,0}$ and $\psi_{n,0}$ exist if and only if $n\equiv 1 \text{ (mod 4)}$, and $\chi_{n,0}$ exists if and only if $n\equiv 0 \text{ (mod 4)}$. We thus have to consider more cases in the description of $\hh^n(\alg)$ as an $\hh^1(\alg)$-module, but the resulting decompositions are similar in nature to the case when $m$ is even. \begin{thm} Let $m \geq 3$ be odd and $n\geq 1$. Write $n=pm+t$ with $p\geq 0$ and $0\le t\le m-1$ as before. We can decompose $\hh^n(\alg)$ into indecomposable summands over $\hh^1(\alg)$ as follows: \begin{enumerate} \item If $n$ is even and $n\equiv 0 \text{ (mod 4)}$, $$ \hh^n(\alg)\ =\ \begin{cases} \displaystyle \bigoplus_{\substack{\delta\in I_\chi^n\\\delta\neq 0}} \langle \chi_{n,\delta}\rangle \oplus \bigoplus_{\delta\in I_\pi^n} \langle \pi_{n,\delta}\rangle \oplus \langle \chi_{n,0}, F_{n,j,s}\mid j,s\rangle & \text{ if }t\neq m-1 ,\\ \ \\ \displaystyle \bigoplus_{\substack{\delta\in I_\chi^n\\\delta\neq 0}} \langle \chi_{n,\delta}\rangle \oplus \bigoplus_{\delta\in I_\pi^n} \langle \pi_{n,\delta}\rangle \oplus \langle \chi_{n,0}, F_{n,j,s} \mid j,s\rangle\oplus \langle \varphi_{n,-(p+1)}\rangle \oplus \langle \psi_{n,p+1}\rangle & \text{ if }t=m-1; \end{cases} $$ \item If $n$ is even and $n \not \equiv 0 \text{ (mod 4)}$, $$ \hh^n(\alg)\ =\ \begin{cases} \displaystyle \bigoplus_{\substack{\delta\in I_\chi^n\\\delta\neq 0}} \langle \chi_{n,\delta}\rangle \oplus \bigoplus_{\delta\in I_\pi^n} \langle \pi_{n,\delta}\rangle \oplus \bigoplus_{j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} \langle F_{n,j,s}\mid s\rangle & \text{ if }t\neq m-1 ,\\ \ \\ \displaystyle \bigoplus_{\substack{\delta\in I_\chi^n\\\delta\neq 0}} \langle \chi_{n,\delta}\rangle \oplus \bigoplus_{\delta\in 
I_\pi^n} \langle \pi_{n,\delta}\rangle \oplus \bigoplus_{j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} \langle F_{n,j,s}\mid s\rangle\oplus \langle \varphi_{n,-(p+1)}\rangle \oplus \langle \psi_{n,p+1}\rangle & \text{ if }t=m-1; \end{cases} $$ \item If $n$ is odd and $n \equiv 1 \text{ (mod 4)}$, $$ \hh^n(\alg)\ =\ \begin{cases} \displaystyle \bigoplus_{\substack{\tau\in I_\psi^n\\\tau\neq 0}} \langle \psi_{n,\tau}\rangle \oplus \bigoplus_{\substack{\sigma\in I_\varphi^n\\\sigma\neq 0}} \langle \varphi_{n,\sigma}\rangle \oplus \langle \varphi_{n,0} + \psi_{n,0}\rangle \oplus \langle \varphi_{n,0}, E_{n,j,s}\mid j,s\rangle & \text{ if }t\neq 0, \\ \ \\ \displaystyle \bigoplus_{\substack{\tau\in I_\psi^n\\\tau\neq 0}} \langle \psi_{n,\tau}\rangle \oplus \bigoplus_{\substack{\sigma\in I_\varphi^n\\\sigma\neq 0}} \langle \varphi_{n,\sigma}\rangle \oplus \langle \varphi_{n,0} + \psi_{n,0}\rangle \oplus \langle \varphi_{n,0}, E_{n,j,s}\mid j,s\rangle \oplus \bigoplus_{\delta=\pm p} \langle \pi_{n,\delta}\rangle & \text{ if }t=0; \end{cases} $$ \item If $n$ is odd and $n \not \equiv 1 \text{ (mod 4)}$, $$ \hh^n(\alg)\ =\ \begin{cases} \displaystyle \bigoplus_{\substack{\tau\in I_\psi^n\\\tau\neq 0}} \langle \psi_{n,\tau}\rangle \oplus \bigoplus_{\substack{\sigma\in I_\varphi^n\\\sigma\neq 0}} \langle \varphi_{n,\sigma}\rangle \oplus \bigoplus_{j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} \langle E_{n,j,s}\mid s\rangle & \text{ if }t\neq 0, \\ \ \\ \displaystyle \bigoplus_{\substack{\tau\in I_\psi^n\\\tau\neq 0}} \langle \psi_{n,\tau}\rangle \oplus \bigoplus_{\substack{\sigma\in I_\varphi^n\\\sigma\neq 0}} \langle \varphi_{n,\sigma}\rangle \oplus \bigoplus_{j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} \langle E_{n,j,s}\mid s\rangle \oplus \bigoplus_{\delta=\pm p} \langle \pi_{n,\delta}\rangle & \text{ if }t=0, \end{cases} $$ \end{enumerate} where $j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, $1\leq s\leq N-1$, and $I_\chi^n$, 
$I_\pi^n, I_\varphi^n$, $I_\psi^n$ are given by the respective basis index sets of $\hh^n(\alg)$ as described in~Section~\ref{subsec:m odd n even} and in~Section~\ref{subsec:m odd n odd}. \end{thm} \begin{rem} In the above decompositions, almost all the one-dimensional summands are nontrivial simple modules. The only exception is $\langle \pi_{2,0}\rangle$, on which $\hh^1(\alg)$ acts trivially in both the $m$ even and $m$ odd cases. \end{rem} \begin{rem} Table~\ref{tab:diagonaleven} and~Table~\ref{tab:diagonalodd} allow us to describe the central characters of the indecomposable summands of $\hh^n(\alg)$ for both $m$ even and $m$ odd. Indeed, we see that the central element $C$ acts by $-\frac{\alpha m}{2}$ on the indecomposable summands $\langle \chi_{n,\alpha}\rangle$, $\langle \pi_{n,\alpha}\rangle$, $\langle \psi_{n,\alpha}\rangle$ and $\langle \varphi_{n,\alpha}\rangle$ whenever $\alpha\neq 0$. Furthermore, $C$ acts trivially on the remaining indecomposable summands. \end{rem} \begin{rem} Let $n$ be odd, and suppose that either $m$ is even, or $m$ is odd and $n\equiv 1 \text{ (mod 4)}$. The $\hh^1(\alg)$-module $\langle \varphi_{n,0}, E_{n,j,s}\mid j,s\rangle$ has dimension $(1+ m(N-1))$ and is a weight module with respect to the action of $E_0$. It is generated as an $\hh^1(\alg)$-module by $\varphi_{n,0}$, on which $E_0$ acts by $\frac{-(n-1)N}{2}$. For every $1\le s\le N-1$, the weight space of weight $s-\frac{(n-1)}{2}N$ is given by the $m$-dimensional subspace $\langle E_{n,j,s}\mid j\rangle$. Note that for every $j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, the action of $E_{1,j,1}$ increases the weight by one. We note that any subspace of the form $$\langle E_{n,j,s} \mid j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},\; s_j\le s\le N-1 \rangle,$$ where $1\le s_j\le N-1$ for every $j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$, is an $\hh^1(\alg)$-submodule of $\langle \varphi_{n,0}, E_{n,j,s}\mid j,s\rangle$.
Hence, the simple subquotients of $\langle \varphi_{n,0}, E_{n,j,s}\mid j,s\rangle$ are all one-dimensional. A similar description applies to the $\hh^1(\alg)$-module $\langle \chi_{n,0}, F_{n,j,s}\mid j,s\rangle$, which is generated by the element $\chi_{n,0}$. \subsection{Decomposition of $\hh^0(\alg)$ as an $\hh^1(\alg)$-module} The $\ground$-module $\hh^0(\alg)$ is the same whether $m$ is even or odd. In both cases, it has a basis given by $$\{1,\varepsilon_i,f_i^s\ |\ i\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}},\ 1\le s\le N-1\}.$$ However, we treat the two cases separately, since the decompositions as an $\hh^1(\alg)$-module are different. In~Proposition~\ref{diagonal}, we showed that the elements $C, E_0$ in $\hh^1(\alg)$ act diagonally on $\hh^0(\alg)$ with respect to the above basis. The action of $E_{1,j,s}$ on this basis is given in Table~\ref{tab:Ebrackets}. In particular, we know that for any $i,j\in \ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ and $1\le r, s\le N-1$, $$[E_{1,j,r},f_i^s]\ =\ \delta_{i,j} sf_i^{r+s},$$ which is zero if $r+s> N$. For $r+s=N$, we can rewrite $f_i^N=\varepsilon_i+\varepsilon_{i+1}$ in terms of the above basis by~\cite[Theorem 4.8]{ST}. \begin{thm} For $m\ge 3$ even, $\hh^0(\alg)$ has the following decomposition into indecomposable $\hh^1(\alg)$-modules: $$\hh^0(\alg)\ =\ \langle 1\rangle\ \oplus\ \langle \varepsilon_0\rangle\ \oplus \ \langle f_i^s, \varepsilon_i+\varepsilon_{i+1} \mid i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, 1\le s\le N-1 \rangle.$$ \end{thm} \begin{proof} We first note that when $m$ is even, $$0= \sum_{j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} (-1)^{[j]}(\varepsilon_j+\varepsilon_{j+1}),$$ where $[j]\in\{0,\ldots,m-1\}$ denotes the unique representative of $j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$. Hence, the elements $\varepsilon_j+\varepsilon_{j+1}$ with $j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}$ are linearly dependent.
Next, we show that for any direct sum decomposition $V\oplus W=\langle f_i^s, \varepsilon_i+\varepsilon_{i+1} \mid i,s\rangle$, one of the submodules $V, W$ is zero. Note that $[E_{1,j,N-1},-]$ maps $V\oplus W$ onto $\langle \varepsilon_j+\varepsilon_{j+1}\rangle$. Since \begin{align*} [E_{1,j,N-1},V]\subset V\cap\langle \varepsilon_j+\varepsilon_{j+1}\rangle \ \ \text{and}\ \ [E_{1,j,N-1},W]\subset\ W\cap\langle \varepsilon_j+\varepsilon_{j+1}\rangle, \end{align*} $\varepsilon_j+\varepsilon_{j+1}$ is contained in $V$ or $W$. Since the set $\{\varepsilon_j+\varepsilon_{j+1}\ |\ j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}\}$ is linearly dependent, all $\varepsilon_j+\varepsilon_{j+1}$ have to be contained in the same summand, say in $V$. Hence the socles $$\text{soc}(V)\ =\ \text{soc}\left(\langle f_i^s, \varepsilon_i+\varepsilon_{i+1} \mid i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}, 1\le s\le N-1 \rangle\right)\ =\ \langle \varepsilon_j+\varepsilon_{j+1} \mid j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}\rangle$$ agree, which implies that $\text{soc}(W)=0$, thus $W=0$. Finally, we observe that none of the $\varepsilon_j$ is contained in $\langle 1\rangle\oplus \langle f_i^s, \varepsilon_i+\varepsilon_{i+1} \mid i,s \rangle$. By adding the simple module $\langle \varepsilon_j\rangle$ for some $j$, say $j=0$, we obtain the decomposition above. \end{proof} \begin{thm} For $m\ge 3$ odd, $\hh^0(\alg)$ has the following decomposition into indecomposable $\hh^1(\alg)$-modules: \begin{align*} \hh^0(\alg)\ =\ \langle 1\rangle\ \oplus\ \bigoplus\limits_{i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} \langle f_i^s, \varepsilon_i+\varepsilon_{i+1} \mid 1\le s\le N-1\rangle. 
\end{align*} \end{thm} \begin{proof} Indeed, we see that $$\varepsilon_0 = \frac12 \sum_{j\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} (-1)^{[j]}(\varepsilon_j+\varepsilon_{j+1})$$ is contained in the sum $\sum\limits_{i\in\ensuremath{{\mathbb Z}}/m\ensuremath{{\mathbb Z}}} \langle f_i^s, \varepsilon_i+\varepsilon_{i+1}\ |\ 1\le s\le N-1\rangle$. We conclude that this sum contains $\varepsilon_i$ for every $i$, hence $\hh^0(\alg)$ equals the sum of the submodules on the right hand side in the theorem. For dimension reasons, this sum is direct. Finally, it is clear that the summands are indecomposable. \end{proof}
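Both decomposition proofs above hinge on an alternating-sign identity among the elements $\varepsilon_j+\varepsilon_{j+1}$: the signed cyclic sum $\sum_{j} (-1)^{[j]}(\varepsilon_j+\varepsilon_{j+1})$ vanishes when $m$ is even (the linear dependence), and equals $2\varepsilon_0$ when $m$ is odd. A minimal check (not part of the paper), modelling $\varepsilon_j$ as the $j$-th standard basis vector:

```python
def signed_sum(m):
    """Coefficient vector of sum_j (-1)^[j] (eps_j + eps_{j+1}), indices mod m."""
    v = [0] * m
    for j in range(m):
        v[j] += (-1) ** j
        v[(j + 1) % m] += (-1) ** j
    return v

print(signed_sum(6))  # [0, 0, 0, 0, 0, 0] : m even, the sum vanishes
print(signed_sum(5))  # [2, 0, 0, 0, 0]    : m odd, the sum equals 2*eps_0
```

The only coefficient affected by the parity of $m$ is that of $\varepsilon_0$, which receives $(-1)^0$ from $j=0$ and $(-1)^{m-1}$ from $j=m-1$; these cancel exactly when $m$ is even.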
\section{Introduction} Let $P$ be a set of $m$ distinct points in $\reals^3$ and let $L$ be a set of $n$ distinct lines in $\reals^3$. Let $I(P,L)$ denote the number of incidences between the points of $P$ and the lines of $L$; that is, the number of pairs $(p,\ell)$ with $p\in P$, $\ell\in L$, and $p\in\ell$. If all the points of $P$ and all the lines of $L$ lie in a common plane, then the classical Szemer\'edi--Trotter theorem~\cite{SzT} yields the worst-case tight bound \begin{equation} \label{inc2} I(P,L) = O\left(m^{2/3}n^{2/3} + m + n \right) . \end{equation} This bound clearly also holds in three dimensions, by projecting the given lines and points onto some generic plane. Moreover, the bound remains worst-case tight in three dimensions, as one can place all the points and lines in a common plane, in a configuration that attains the planar lower bound. In the groundbreaking 2010 paper of Guth and Katz~\cite{GK2}, an improved bound was derived for $I(P,L)$, for a set $P$ of $m$ points and a set $L$ of $n$ lines in $\reals^3$, provided that not too many lines of $L$ lie in a common plane. Specifically, they showed:\footnote{We skip over certain subtleties in their bound: They also assume that no \emph{regulus} contains more than $s$ input lines, but then they are also able to bound the number of intersection points of the lines. Moreover, if one also assumes that each point is incident to at least three lines, then the term $m$ in the bound can be dropped.} \begin{theorem}[Guth and Katz~\cite{GK2}] \label{ttt} Let $P$ be a set of $m$ distinct points and $L$ a set of $n$ distinct lines in $\reals^3$, and let $s\le n$ be a parameter, such that no plane contains more than $s$ lines of $L$. Then $$ I(P,L) = O\left(m^{1/2}n^{3/4} + m^{2/3}n^{1/3}s^{1/3} + m + n\right).
$$ \end{theorem} This bound was a major step in the derivation of the main result of \cite{GK2}, which was to prove an almost-linear lower bound on the number of distinct distances determined by any finite set of points in the plane, a classical problem posed by Erd{\H o}s in 1946~\cite{Er46}. Their proof uses several nontrivial tools from algebraic and differential geometry, most notably the Cayley--Salmon theorem on osculating lines to algebraic surfaces in $\reals^3$, and additional properties of ruled surfaces. All this machinery comes on top of the main innovation of Guth and Katz, the introduction of the \emph{polynomial partitioning technique}; see below. In this paper, we provide a simple derivation of this bound, which bypasses most of the techniques from algebraic geometry that are used in the original proof. A recent related study by Guth~\cite{Gu14} provides another simpler derivation of a similar bound, but (a) the bound obtained in \cite{Gu14} is slightly worse, involving extra factors of the form $m^\eps$, for any $\eps>0$, and (b) the assumptions there are stronger, namely that no algebraic surface of degree at most $c_\eps$, a (potentially large) constant that depends on $\eps$, contains more than $s$ lines of $L$ (in fact, Guth considers in \cite{Gu14} only the case $s=\sqrt{n}$). It should be noted, though, that Guth also manages to derive a (slightly weaker but still) near-linear lower bound on the number of distinct distances. As in the classical work of Guth and Katz~\cite{GK2}, and in the follow-up study of Guth~\cite{Gu14}, here too we use the polynomial partitioning method, as pioneered in \cite{GK2}. The main difference between our approach and those of \cite{Gu14,GK2} is the choice of the degree of the partitioning polynomial. Whereas Guth and Katz \cite{GK2} choose a large degree, and Guth~\cite{Gu14} chooses a constant degree, we choose an intermediate degree. 
This reaps many benefits from both the high-degree and the constant-degree approaches, and pays a small price in the bound (albeit much better than in \cite{Gu14}). Specifically, our main result is a simple and fairly elementary derivation of the following result. \begin{theorem} \label {th:main} Let $P$ be a set of $m$ distinct points and $L$ a set of $n$ distinct lines in $\reals^3$, and let $s\le n$ be a parameter, such that no plane contains more than $s$ lines of $L$. Then \begin{equation} \label{st} I(P,L) \le A_{m,n} \left(m^{1/2}n^{3/4} + m\right) + B\left( m^{2/3}n^{1/3}s^{1/3} + n\right) , \end{equation} where $B$ is an absolute constant, and, for another suitable absolute constant $b>1$, \begin{equation} \label{amn} A_{m,n} = O\left( b^{\frac{\log (m^2n)}{\log (n^3/m^2)}} \right) ,\quad \text{for $m \le n^{3/2}$} , \quad\text{and}\quad\ O\left( b^{\frac{\log (m^3/n^4)}{\log (m^2/n^3)}} \right) ,\quad \text{for $m \ge n^{3/2}$} . \end{equation} \end{theorem} \noindent{\bf Remarks.} (1) Only the range $\sqrt{n} \le m \le n^2$ is of interest; outside this range, regardless of the dimension of the ambient space, we have the well known and trivial upper bound $O(m+n)$. \smallskip \noindent (2) The term $m^{2/3}n^{1/3}s^{1/3}$ comes from the planar Szemer\'edi--Trotter bound (\ref{inc2}), and is unavoidable, as it can be attained if we densely ``pack'' points and lines into planes, in patterns that realize the bound in (\ref{inc2}). \smallskip \noindent (3) Ignoring this term, the two terms $m^{1/2}n^{3/4}$ and $m$ ``compete'' for dominance; the former dominates when $m\le n^{3/2}$ and the latter when $m\ge n^{3/2}$. Thus the bound in (\ref{st}) is qualitatively different within these two ranges. \smallskip \noindent (4) The threshold $m=n^{3/2}$ also arises in the related problem of \emph{joints} (points incident to at least three non-coplanar lines) in a set of $n$ lines in 3-space; see \cite{GK}. 
A concise rephrasing of the bound in (\ref{st}) and (\ref{amn}) is as follows. We partition each of the ranges $m\le n^{3/2}$, $m > n^{3/2}$ into a sequence of subranges $n^{\alpha_{j-1}} < m \le n^{\alpha_j}$, $j=0,1,\ldots$ (for $m\le n^{3/2}$), or $n^{\alpha_{j-1}} > m \ge n^{\alpha_j}$, $j=0,1,\ldots$ (for $m \ge n^{3/2}$), so that within each range the bound asserted in the theorem holds for some fixed constant of proportionality (denoted as $A_{m,n}$ in the bound), where these constants vary with $j$, and grow, exponentially in $j$, as prescribed in (\ref{amn}), as $m$ approaches $n^{3/2}$ (from either side). Informally, if we keep $m$ ``sufficiently away'' from $n^{3/2}$, the bound in (\ref{st}) holds with a fixed constant of proportionality. Handling the ``border range'' $m\approx n^{3/2}$ is also fairly straightforward, although, to bypass the exponential growth of the constant of proportionality, it results in a slightly different bound; see below for details. Our proof is elementary to the extent that, among other things, it avoids any explicit handling of \emph{singular} and \emph{flat} points on the zero set of the partitioning polynomial. While these notions are relatively easy to handle in three dimensions (see, e.g., \cite{EKS,GK}), they become more complex notions in higher dimensions (as witnessed, for example, in our companion work on the four-dimensional setting~\cite{SS4d}), making proofs based on them harder to extend. Additional merits and features of our analysis are discussed in detail in the concluding section. In a nutshell, the main merits are: \noindent{\bf (i)} We use two separate partitioning polynomials. The first one is of ``high'' degree, and is used to prune away some points and lines, and to establish useful properties of the surviving points and lines. The second partitioning step, using a polynomial of ``low'' degree, is then applied, from scratch, to the surviving input, exploiting the properties established in the first step. 
This idea seems to have a potential for further applications. \noindent{\bf (ii)} Because of the way we use the polynomial partitioning technique, we need induction to handle incidences within the cells of the second partition. One of the nontrivial achievements of our technique is the ability to retain the ``planar'' term $O(m^{2/3}n^{1/3}s^{1/3})$ in the bound in (\ref{st}) through the inductive process. Without such care, this term does not ``pass well'' through the induction, which has been a sore issue in several recent works on related problems (see \cite{SSS,SSZ,surf-socg}). This is one of the main reasons for using two separate partitioning steps. \paragraph{Background.} Incidence problems have been a major topic in combinatorial and computational geometry for the past thirty years, starting with the aforementioned Szemer\'edi--Trotter bound \cite{SzT} back in 1983. Several techniques, interesting in their own right, have been developed, or adapted, for the analysis of incidences, including the crossing-lemma technique of Sz\'ekely~\cite{Sz}, and the use of cuttings as a divide-and-conquer mechanism (e.g., see~\cite{CEGSW}). Connections with range searching and related algorithmic problems in computational geometry have also been noted, and studies of the Kakeya problem (see, e.g., \cite{T}) indicate the connection between this problem and incidence problems. See Pach and Sharir~\cite{PS} for a comprehensive (albeit a bit outdated) survey of the topic. The landscape of incidence geometry has dramatically changed in the past six years, due to the infusion, in two groundbreaking papers by Guth and Katz~\cite{GK,GK2}, of new tools and techniques drawn from algebraic geometry.
Although their two direct goals have been to obtain a tight upper bound on the number of joints in a set of lines in three dimensions \cite{GK}, and a near-linear lower bound for the classical distinct distances problem of Erd{\H o}s \cite{GK2}, the new tools have quickly been recognized as useful for incidence bounds. See \cite{EKS,KMSS,KMS,SSZ,SoTa,Za1,Za2} for a sample of recent works on incidence problems that use the new algebraic machinery. The simplest instances of incidence problems involve points and lines, tackled by Szemer\'edi and Trotter in the plane~\cite{SzT}, and by Guth and Katz in three dimensions~\cite{GK2}. Other recent studies on incidence problems include incidences between points and lines in four dimensions (Sharir and Solomon~\cite{surf-socg,SS4d}), and incidences between points and circles in three dimensions (Sharir, Sheffer and Zahl~\cite{SSZ}), not to mention incidences with higher-dimensional surfaces, such as in \cite{BS,KMSS,SoTa,Za1,Za2}. That tools from algebraic geometry form the major key for the successful solution of difficult problems in combinatorial geometry came as a big surprise to the community. It has led to intensive research of the new tools, aiming to extend them and to find new applications. In a companion paper (with Sheffer)~\cite{SSS}, we study the general case of incidences between points and curves in any dimension, and derive reasonably sharp bounds (albeit weaker in several respects than the one derived here). A major purpose of this study, as well as of Guth~\cite{Gu14}, is to show that one can still tackle the problems successfully using less heavy algebraic machinery. This offers a new, simplified, and more elementary approach, which we expect to prove potent for other applications too, such as those just mentioned. Looking for simpler, yet effective techniques that would be easier to extend to more involved contexts (such as incidences in higher dimensions) has been our main motivation for this study.
A more detailed supplementary discussion (which would be premature at this point) of the merits and other issues related to our technique is given in a concluding section. \section{Proof of Theorem~\ref{th:main}} \label{sec:pf1} The proof proceeds by induction on $m$. As already mentioned, the bound in (\ref{st}) is qualitatively different in the two ranges $m\le n^{3/2}$ and $m\ge n^{3/2}$. The analysis bifurcates accordingly. While the general flow is fairly similar in both cases, there are many differences too. \paragraph{The case $m < n^{3/2}$.} We partition this range into a sequence of ranges $m\le n^{\alpha_0}$, $n^{\alpha_0} < m \le n^{\alpha_1},\ldots$, where $\alpha_0 = 1/2$ and the sequence $\{\alpha_j\}_{j\ge 0}$ is increasing and converges to $3/2$. More precisely, as our analysis will show, we can take $\alpha_j = \frac32 - \frac{2}{j+2}$, for $j\ge 0$. The induction is actually on the index $j$ of the range $n^{\alpha_{j-1}}<m \le n^{\alpha_j}$, and establishes (\ref{st}) for $m$ in this range, with a coefficient $A_j$ (written in (\ref{st}, \ref{amn}) as $A_{m,n}$) that increases with $j$. This paradigm has already been used in Sharir et al.~\cite{SSZ} and in Zahl~\cite{Za2}, for related incidence problems, albeit in a somewhat less effective manner; see the discussion at the end of the paper. The base range of the induction is $m\le \sqrt n$, where the trivial general upper bound on point-line incidences, in any dimension, yields $I=O(m^2+n)=O(n)$, so (\ref{st}) holds for a sufficiently large choice of the initial constant $A_0$. Assume then that (\ref{st}) holds for all $m \le n^{\alpha_{j-1}}$ for some $j \ge 1$, and consider an instance of the problem with $n^{\alpha_{j-1}} < m \le n^{3/2}$ (the analysis will force us to constrain this upper bound in order to complete the induction step, thereby obtaining the next exponent $\alpha_j$). 
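Before proceeding, we record a routine sanity check (included only for the reader's convenience): the sequence $\alpha_j = \frac32 - \frac{2}{j+2}$ indeed starts at the base value $1/2$ and increases towards $3/2$,
$$ \alpha_0 = \frac32 - \frac22 = \frac12 , \qquad \alpha_1 = \frac32 - \frac23 = \frac56 , \qquad \alpha_2 = \frac32 - \frac24 = 1 , \qquad \ldots , $$
so every $m < n^{3/2}$ eventually falls into some range $n^{\alpha_{j-1}} < m \le n^{\alpha_j}$.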
Fix a parameter $r$, whose precise value will be chosen later (in fact, and this is a major novelty of our approach, there will be two different choices for $r$---see below), and apply the polynomial partitioning theorem of Guth and Katz (see \cite{GK2} and \cite[Theorem 2.6]{KMS}), to obtain an $r$-partitioning trivariate (real) polynomial $f$ of degree $D=O(r^{1/3})$. That is, every connected component of $\reals^3\setminus Z(f)$ contains at most $m/r$ points of $P$, where $Z(f)$ denotes the zero set of $f$. By Warren's theorem~\cite{w-lbanm-68} (see also \cite{KMS}), the number of components of $\reals^3\setminus Z(f)$ is $O(D^3) = O(r)$. Set $P_1:= P\cap Z(f)$ and $P'_1:=P\setminus P_1$. A major recurring theme in this approach is that, although the points of $P'_1$ are more or less evenly partitioned among the cells of the partition, no nontrivial bound can be provided for the size of $P_1$; in the worst case, all the points of $P$ could lie in $Z(f)$. Each line $\ell\in L$ is either fully contained in $Z(f)$ or intersects it in at most $D$ points (since the restriction of $f$ to $\ell$ is a univariate polynomial of degree at most $D$). Let $L_1$ denote the subset of lines of $L$ that are fully contained in $Z(f)$ and put $L'_1 = L\setminus L_1$. We then have $$ I(P,L) = I(P_1,L_1) + I(P_1,L'_1) + I(P'_1,L'_1) . $$ We first bound $I(P_1,L'_1)$ and $I(P'_1,L'_1)$. As already observed, we have $$ I(P_1,L'_1) \le |L'_1|\cdot D \le nD . $$ We estimate $I(P'_1,L'_1)$ as follows. For each (open) cell $\tau$ of $\reals^3\setminus Z(f)$, put $P_\tau = P\cap \tau$ (that is, $P'_1\cap\tau$), and let $L_\tau$ denote the set of the lines of $L'_1$ that cross $\tau$; put $m_\tau = |P_\tau| \le m/r$, and $n_\tau = |L_\tau|$. Since every line $\ell\in L'_1$ crosses at most $1+D$ components of $\reals^3\setminus Z(f)$, we have $$ \sum_\tau n_\tau \le n(1+D) ,\quad\quad\text{and}\quad\quad I(P'_1,L'_1) = \sum_{\tau} I(P_\tau,L_\tau). 
$$ For each $\tau$ we use the trivial bound $I(P_\tau,L_\tau) = O(m_\tau^2+n_\tau)$. Summing over the cells, we get $$ I(P'_1,L'_1) = \sum_{\tau} I(P_\tau,L_\tau) = O\left(r\cdot(m/r)^2 + \sum_\tau n_\tau \right) = O\left(m^2/r + nD \right) = O(m^2/D^3 + nD) . $$ For the initial value of $D$, we take $D = m^{1/2}/n^{1/4}$ (which we get from a suitable value of $r=\Theta(D^3)$), and get the bound $$ I(P'_1,L'_1) + I(P_1,L'_1) = O(m^{1/2}n^{3/4}) . $$ This choice of $D$ is the one made in \cite{GK2}. It is sufficiently large to control the situation in the cells, by the bound just obtained, but requires heavy-duty machinery from algebraic geometry to handle the situation on $Z(f)$. We now turn to $Z(f)$, where we need to estimate $I(P_1,L_1)$. Since all the incidences involving any point in $P'_1$ and/or any line in $L'_1$ have already been accounted for, we discard these sets, and remain with $P_1$ and $L_1$ only. We ``forget'' the preceding polynomial partitioning step, and start afresh, applying a new polynomial partitioning to $P_1$ with a polynomial $g$ of degree $E$, which will typically be much smaller than $D$, but still non-constant. Before doing this, we note that the set of lines $L_1$ has a special structure, because all its lines lie on the algebraic surface $Z(f)$, which has degree $D$. We exploit this to derive the following lemmas. We emphasize, since this will be important later on in the analysis, that Lemmas~\ref{donpl}--\ref{incone} hold for any choice of ($r$ and) $D$. We note that in general the partitioning polynomial $f$ may be reducible, and apply some of the following arguments to each irreducible factor separately. Clearly, there are at most $D$ such factors. \begin{lemma} \label{donpl} Let $\pi$ be a plane which is not a component of $Z(f)$. Then $\pi$ contains at most $D$ lines of $L_1$. \end{lemma} \noindent{\bf Proof.} Suppose to the contrary that $\pi$ contains at least $D+1$ lines of $L_1$.
Every generic line $\lambda$ in $\pi$ intersects these lines in at least $D+1$ distinct points, all belonging to $Z(f)$. Hence $f$ must vanish identically on $\lambda$, and it follows that $f\equiv 0$ on $\pi$, so $\pi$ is a component of $Z(f)$, contrary to assumption. \proofend \begin{lemma} \label{incpl} The number of incidences between the points of $P_1$ that lie in the planar components of $Z(f)$ and the lines of $L_1$, is $O(m^{2/3}n^{1/3}s^{1/3} + nD)$. \end{lemma} \noindent{\bf Proof.} Clearly, $f$ can have at most $D$ linear factors, and thus $Z(f)$ can contain at most $D$ planar components. Enumerate them as $\pi_1,\ldots,\pi_k$, where $k\le D$. Let $\tilde{P}_1$ denote the subset of the points of $P_1$ that lie in these planar components. Assign each point of $\tilde{P}_1$ to the first plane $\pi_i$, in this order, that contains it, and assign each line of $L_1$ to the first plane that fully contains it; some lines might not be assigned at all in this manner. For $i=1,\ldots,k$, let $\tilde{P}_i$ denote the set of points assigned to $\pi_i$, and let $\tilde{L}_i$ denote the set of lines assigned to $\pi_i$. Put $m_i = |\tilde{P}_i|$ and $n_i = |\tilde{L}_i|$. Then $\sum_i m_i \le m$ and $\sum_i n_i \le n$; by assumption, we also have $n_i\le s$ for each $i$. Then $$ I(\tilde{P}_i,\tilde{L}_i) = O(m_i^{2/3}n_i^{2/3} + m_i + n_i) = O(m_i^{2/3}n_i^{1/3}s^{1/3} + m_i + n_i) . $$ Summing over the $k$ planes, we get, using H\"older's inequality, \begin{align*} \sum_i & I(\tilde{P}_i,\tilde{L}_i) = \sum_i O(m_i^{2/3}n_i^{1/3}s^{1/3} + m_i + n_i) \\ & = O\left( \left( \sum_i m_i \right)^{2/3} \left( \sum_i n_i \right)^{1/3}s^{1/3} + m + n\right) = O\left( m^{2/3}n^{1/3}s^{1/3} + m + n\right) . \end{align*} We also need to include incidences between points $p\in \tilde{P}_1$ and lines $\ell\in L_1$ not assigned to the same plane as $p$ (or not assigned to any plane at all). 
Any such incidence $(p,\ell)$ can be charged (uniquely) to the intersection point of $\ell$ with the plane $\pi_i$ to which $p$ has been assigned. The number of such intersections is $O(nD)$, and the lemma follows. \proofend \begin{lemma} \label{md2} Each point $p\in Z(f)$ is incident to at most $D^2$ lines of $L_1$, unless $Z(f)$ has an irreducible component that is either a plane containing $p$ or a cone with apex $p$. \end{lemma} \noindent{\bf Proof.} Fix any line $\ell$ that passes through $p$, and write its parametric equation as $\{p+tv \mid t\in\reals\}$, where $v$ is the direction of $\ell$. Consider the Taylor expansion of $f$ at $p$ along $\ell$ $$ f(p+tv) = \sum_{i=1}^D \frac{1}{i!}F_i(p;v)t^i , $$ where $F_i(p;v)$ is the $i$-th order derivative of $f$ at $p$ in direction $v$; it is a homogeneous polynomial in $v$ ($p$ is considered fixed) of degree $i$, for $i=1,\ldots,D$. For each line $\ell\in L_1$ that passes through $p$, $f$ vanishes identically on $\ell$, so we have $F_i(p;v)=0$ for each $i$. Assuming that $p$ is incident to more than $D^2$ lines of $L_1$, we conclude that the homogeneous system \begin{equation} \label{allf0} F_1(p;v) = F_2(p;v) = \cdots = F_D(p;v) = 0 \end{equation} has more than $D^2$ (projectively distinct) roots. The classical B\'ezout's theorem, applied in the projective plane where the directions $v$ are represented (e.g., see~\cite{CLO}), asserts that, since all these polynomials are of degree at most $D$, each pair of polynomials $F_i(p;v)$, $F_j(p;v)$ must have a common factor. The following slightly more involved inductive argument shows that in fact all these polynomials must have a common factor.\footnote{See also~\cite{RSZ} for a similar observation.} \begin {lemma} Let $f_1,\ldots, f_n \in \cplx[x,y,z]$ be $n$ homogeneous polynomials of degree at most $D$. If $|Z(f_1,\ldots, f_n)|>D^2$, then all the $f_i$'s have a nontrivial common factor. \end {lemma} \noindent{\bf Proof.} The proof is via induction on $n$.
The case $n=2$ is precisely the classical B\'ezout's theorem in the projective plane. Assume that the inductive claim holds for $n-1$ polynomials. By assumption, $|Z(f_1,\ldots, f_{n-1})|\ge |Z(f_1,\ldots, f_{n})|>D^2$, so the induction hypothesis implies that there is a polynomial $g$ that divides $f_i$, for $i=1,\ldots, n-1$; assume, as we may, that $g=GCD(f_1,\ldots, f_{n-1})$. If there are more than $\deg(g)\deg(f_{n})$ points in $Z(g,f_{n})$, then again, by the classical B\'ezout's theorem in the projective plane, $g$ and $f_n$ have a nontrivial common factor, which is then also a common factor of $f_i$, for $i=1,\ldots, n$, completing the proof. Otherwise, put $\tilde f_i = f_i/g$, for $i=1,\ldots, n-1$. Notice that $Z(f_1,\ldots, f_{n-1}) = Z(\tilde f_1,\ldots, \tilde f_{n-1})\cup Z(g)$, implying that each point of $Z(f_1,\ldots, f_n)$ belongs either to $Z(g) \cap Z(f_n)$ or to $Z(\tilde f_1,\ldots,\tilde f_{n-1})\cap Z(f_n)$. As $|Z(f_1,\ldots, f_n)|>D^2$ and $|Z(g,f_{n})| \le \deg(g)\deg(f_n)\le \deg(g)D$, it follows that $$ |Z(\tilde f_1,\ldots, \tilde f_{n-1})| \ge |Z(\tilde f_1,\ldots, \tilde f_{n-1}, f_n)| \ge (D-\deg(g))D>(D-\deg(g))^2. $$ Hence, applying the induction hypothesis to the polynomials $\tilde f_1,\ldots, \tilde f_{n-1}$ (all of degree at most $D-\deg(g)$), we conclude that they have a nontrivial common factor, contradicting the fact that $g$ is the greatest common divisor of $f_1,\ldots, f_{n-1}$. \proofend Continuing with the proof of Lemma~\ref{md2}, there is an infinity of directions $v$ that satisfy (\ref{allf0}), so there is an infinity of lines passing through $p$ and contained in $Z(f)$. The union of these lines can be shown to be a two-dimensional algebraic variety,\footnote{It is simply the variety given by the equations (\ref{allf0}), rewritten as $F_1(p;x-p) = F_2(p;x-p) = \cdots = F_D(p;x-p) = 0$.
It is two-dimensional because it is contained in $Z(f)$, hence at most two-dimensional, and it cannot be one-dimensional since it would then consist of only finitely many lines (see, e.g., \cite[Lemma 2.3]{SS4d}).} contained in $Z(f)$, so $Z(f)$ has an irreducible component that is either a plane through $p$ or a cone with apex $p$, as claimed. \proofend \begin{lemma} \label{incone} The number of incidences between the points of $P_1$ that lie in the (non-planar) conic components of $Z(f)$, and the lines of $L_1$, is $O(m + nD)$. \end{lemma} \noindent{\bf Proof.} Let $\sigma$ be such an (irreducible) conic component of $Z(f)$ and let $p$ be its apex. We observe that $\sigma$ cannot contain any line that is not incident to $p$, because such a line would span with $p$ a plane contained in $\sigma$, contradicting the assumption that $\sigma$ is irreducible and non-planar. It follows that the number of incidences between $P_\sigma:=P_1\cap\sigma$ and $L_\sigma$, consisting of the lines of $L_1$ contained in $\sigma$, is $O(|P_\sigma|+|L_\sigma|)$ ($p$ contributes $|L_\sigma|$ incidences, and every other point at most one incidence). Applying a similar ``first-come-first-serve'' assignment of points and lines to the conic components of $Z(f)$, as we did for the planar components in the proof of Lemma~\ref{incpl}, and adding the bound $O(nD)$ on the number of incidences between points and lines not assigned to the same component, we obtain the bound asserted in the lemma. \proofend \medskip \noindent{\bf Remark.} Note that in both Lemma~\ref{incpl} and Lemma~\ref{incone}, we bound the number of incidences between points on planar or conic components of $Z(f)$ and \emph{all} the lines of $L_1$. \paragraph{Pruning.} To continue, we remove all the points of $P_1$ that lie in some planar or conic component of $Z(f)$, and all the lines of $L_1$ that are fully contained in such components.
With the choice of $D=m^{1/2}/n^{1/4}$, we lose in the process $$ O(m^{2/3}n^{1/3}s^{1/3}+m+nD) = O(m^{1/2}n^{3/4}+m^{2/3}n^{1/3}s^{1/3}) $$ incidences (recall that the term $m$ is subsumed by the term $m^{1/2}n^{3/4}$ for $m<n^{3/2}$). Continue, for simplicity of notation, to denote the sets of remaining points and lines as $P_1$ and $L_1$, respectively, and their sizes as $m$ and $n$. Now each point is incident to at most $D^2$ lines (a fact that we will not use for this value of $D$), and no plane contains more than $D$ lines of $L_1$, a crucial property for the next steps of the analysis. That is, this allows us to replace the input parameter $s$, bounding the maximum number of coplanar lines, by $D$; this is a key step that makes the induction work. \paragraph{A new polynomial partitioning.} We now return to the promised step of constructing a new polynomial partitioning. We adapt the preceding notation, with a few modifications. We choose a degree $E$, typically much smaller than $D$, and construct a partitioning polynomial $g$ of degree $E$ for $P_1$. With an appropriate value of $r=\Theta(E^3)$, we obtain $O(r)$ open cells, each containing at most $m/r$ points of $P_1$, and each line of $L_1$ either crosses at most $E+1$ cells, or is fully contained in $Z(g)$. Set $P_2:= P_1\cap Z(g)$ and $P'_2:=P_1\setminus P_2$. Similarly, denote by $L_2$ the set of lines of $L_1$ that are fully contained in $Z(g)$, and put $L'_2:=L_1\setminus L_2$. We first dispose of incidences involving the lines of $L_2$. (That is, now we first focus on incidences within $Z(g)$, and only then turn to look at the cells.) By Lemma~\ref{incpl} and Lemma~\ref{incone}, the number of incidences involving points $P_2$ that lie in some planar or conic component of $Z(g)$, and all the lines of $L_2$, is $$ O(m^{2/3}n^{1/3}s^{1/3}+m+nE) = O(m^{1/2}n^{3/4}+m^{2/3}n^{1/3}s^{1/3}+n). $$ (For $E\ll D$, this might be a gross overestimation, but we do not care.) 
We remove these points from $P_2$, and remove all the lines of $L_2$ that are contained in such components; continue to denote the sets of remaining points and lines as $P_2$ and $L_2$. Now each point is incident to at most $E^2$ lines of $L_2$ (Lemma~\ref{md2}), so the number of remaining incidences involving points of $P_2$ is $O(mE^2)$; for $E$ suitably small, this bound will be subsumed by $O(m^{1/2}n^{3/4})$. Unlike the case of a ``large'' $D$, namely, $D=m^{1/2}/n^{1/4}$, here the difficult part is to treat incidences within the cells of the partition. Since $E\ll D$, we cannot use the naive bound $O(n^2+m)$ within each cell, because that would make the overall bound too large. Therefore, to control the incidence bound within the cells, we proceed in the following inductive manner. For each cell $\tau$ of $\reals^3\setminus Z(g)$, put $P_\tau := P'_2\cap\tau$, and let $L_\tau$ denote the set of the lines of $L'_2$ that cross $\tau$; put $m_\tau = |P_\tau| \le m/r$, and $n_\tau = |L_\tau|$. Since every line $\ell\in L_1$ (that is, of $L'_2$) crosses at most $1+E$ components of $\reals^3\setminus Z(g)$, we have $\sum_\tau n_\tau \le n(1+E)$. It is important to note that at this point of the analysis the sizes of $P_1$ and of $L_1$ might be smaller than the original respective values $m$ and $n$. In particular, we may no longer assume that $|P_1| > |L_1|^{\alpha_{j-1}}$, as we did assume for $m$ and $n$. Nevertheless, in what follows $m$ and $n$ will denote the original values, which serve as upper bounds for the respective actual sizes of $P_1$ and $L_1$, and the induction will work correctly with these values; see below for details. In order to apply the induction hypothesis within the cells of the partition, we want to assume that $m_\tau \le {n_\tau}^{\alpha_{j-1}}$ for each $\tau$. To ensure that, we require that the number of lines of $L'_2$ that cross a cell be at most $n/E^2$. 
Cells $\tau$ that are crossed by $\kappa n/E^2$ lines, for $\kappa>1$, are treated as if they occur $\lceil \kappa \rceil$ times, where each incarnation involves all the points of $P_\tau$, and at most $n/E^2$ lines of $L_\tau$. The number of subproblems remains $O(E^3)$. Arguing similarly, we may also assume that $m_\tau \le m/E^3$ for each cell $\tau$ (by ``duplicating'' each cell into a constant number of subproblems, if needed). We therefore require that ${\displaystyle \frac {m} {E^3} \le \left(\frac{n}{E^2}\right)^{\alpha_{j-1}}}$. (Note that, as already commented above, these are only upper bounds on the actual sizes of these subsets, but this will have no real effect on the induction process.) That is, we require \begin{equation} \label{Dlb} E\ge \left( \frac {m} {n^{\alpha_{j-1}}} \right)^{1/(3-2\alpha_{j-1})} . \end{equation} With these preparations, we apply the induction hypothesis within each cell $\tau$, recalling that no plane contains more than $D$ lines\footnote{This was the main reason for carrying out the first partitioning step, as already noted.} of $L'_2\subseteq L_1$, and get \begin{align*} I(P_\tau,L_\tau) & \le A_{j-1} \left( m_\tau^{1/2}n_\tau^{3/4} + m_\tau \right) + B\left( m_\tau^{2/3}n_\tau^{1/3}D^{1/3} + n_\tau \right) \\ & \le A_{j-1} \left( (m/E^3)^{1/2}(n/E^2)^{3/4} + m/E^3 \right) + B\left( (m/E^3)^{2/3}(n/E^2)^{1/3}D^{1/3} + n/E^2 \right) . \end{align*} Summing these bounds over the cells $\tau$, that is, multiplying them by $O(E^3)$, we get, for a suitable absolute constant $b$, $$ I(P'_2,L'_2) = \sum_{\tau} I(P_\tau,L_\tau) \le b A_{j-1} \left( m^{1/2}n^{3/4} + m\right) + B\left(m^{2/3}n^{1/3}E^{1/3}D^{1/3} + nE \right) . $$ We now require that $E=O(D)$. Then the last term satisfies $nE = O(nD) = O(m^{1/2}n^{3/4})$, and, as already remarked, the preceding term $m$ is also subsumed by the first term. The second term, after substituting $D=O(m^{1/2}/n^{1/4})$, becomes $O(m^{5/6}n^{1/4}E^{1/3})$.
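(To spell out this last substitution, which is a routine calculation:
$$ m^{2/3}n^{1/3}E^{1/3}D^{1/3} = m^{2/3}n^{1/3}E^{1/3}\cdot O\left(\frac{m^{1/6}}{n^{1/12}}\right) = O\left(m^{5/6}n^{1/4}E^{1/3}\right) , $$
since $m^{2/3}\cdot m^{1/6} = m^{5/6}$ and $n^{1/3}/n^{1/12} = n^{1/4}$.)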
Hence, with a slightly larger $b$, we have $$ I(P'_2,L'_2) \le bA_{j-1} m^{1/2}n^{3/4} + bB m^{5/6}n^{1/4}E^{1/3} . $$ Adding up all the bounds, including those for the portions of $P$ and $L$ that were discarded during the first partitioning step, we obtain, for a suitable constant $c$, $$ I(P,L) \le c\left(m^{1/2}n^{3/4} + m^{2/3}n^{1/3}s^{1/3} + n + mE^2\right) + bA_{j-1} m^{1/2}n^{3/4} + bB m^{5/6}n^{1/4}E^{1/3} . $$ We choose $E$ to ensure that the two $E$-dependent terms are dominated by the term $m^{1/2}n^{3/4}$. That is, \begin{align*} m^{5/6}n^{1/4}E^{1/3} & \le m^{1/2}n^{3/4} ,\quad\text{or}\quad E\le n^{3/2}/m , \\ \text{and} \quad mE^2 & \le m^{1/2}n^{3/4} ,\quad\text{or}\quad E\le n^{3/8}/m^{1/4} . \end{align*} Since $n^{3/2}/m = \left( n^{3/8}/m^{1/4} \right)^4$, and both sides are $\ge 1$, the latter condition is stricter, and we ignore the former. As already noted, we also require that $E=O(D)$; specifically, we require that $E\le m^{1/2}/n^{1/4}$. In conclusion, recalling~(\ref{Dlb}), the two constraints on the choice of $E$ are \begin{equation} \label{econ} \left( \frac {m} {n^{\alpha_{j-1}}} \right)^{1/(3-2\alpha_{j-1})} \le E \le \min\left\{ \frac{n^{3/8}}{m^{1/4}} , \frac{m^{1/2}}{n^{1/4}} \right\} , \end{equation} and, for these constraints to be compatible, we require that $$ \left( \frac {m} {n^{\alpha_{j-1}}} \right)^{1/(3-2\alpha_{j-1})} \le \frac{n^{3/8}}{m^{1/4}} ,\quad\quad\text{or}\quad\quad m \le n^{\frac{9+2\alpha_{j-1}}{2(7-2\alpha_{j-1})}} , $$ and that $$ \left( \frac {m} {n^{\alpha_{j-1}}} \right)^{1/(3-2\alpha_{j-1})} \le \frac{m^{1/2}}{n^{1/4}} , $$ which fortunately always holds, as is easily checked, since $m\le n^{3/2}$ and $\alpha_{j-1} \ge 1/2$. Note that we have not explicitly stated any concrete choice of $E$; any value satisfying (\ref{econ}) will do.
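For the record, the exponent in the first compatibility constraint is obtained by a routine comparison of exponents of $n$ (spelled out here only for convenience): writing $m=n^\mu$, the constraint reads
$$ \frac{\mu-\alpha_{j-1}}{3-2\alpha_{j-1}} \le \frac38 - \frac{\mu}{4} , $$
and, since $\alpha_{j-1}<3/2$ makes the denominator positive, collecting the terms involving $\mu$ gives
$$ \mu\cdot\frac{7-2\alpha_{j-1}}{4} \le \frac98 + \frac{\alpha_{j-1}}{4} , \quad\text{that is,}\quad \mu \le \frac{9+2\alpha_{j-1}}{2(7-2\alpha_{j-1})} , $$
as stated above.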
We put $$ \alpha_j := \frac{9+2\alpha_{j-1}}{2(7-2\alpha_{j-1})} , $$ and conclude that if $m\le n^{\alpha_j}$ then the bound asserted in the theorem holds, with $A_j = bA_{j-1} + c$ and $B=c$. This completes the induction step. Note that the recurrence $A_j = bA_{j-1}+c$ solves to $A_j = O(b^j)$. It remains to argue that the induction covers the entire range $m = O(n^{3/2})$. Using the above recurrence for the $\alpha_j$'s, with $\alpha_0 = 1/2$, it easily follows that $$ \alpha_j = \frac32 - \frac{2}{j+2} , $$ for each $j\ge 0$, showing that $\alpha_j$ converges to $3/2$, implying that the entire range $m = O(n^{3/2})$ is covered by the induction. To calibrate the dependence of the constant of proportionality on $m$ and $n$, we note that, for $n^{\alpha_{j-1}}\le m < n^{\alpha_j}$, the constant is $O(b^j)$. We have $$ \frac32 - \frac{2}{j+1} = \alpha_{j-1} \le \frac{\log m}{\log n} , \quad\quad\text{or}\quad\quad j \le \frac{\frac12 + \frac{\log m}{\log n}}{\frac32 - \frac{\log m}{\log n}} = \frac{\log (m^2n)}{\log (n^3/m^2)} . $$ This establishes the expression for $A_{m,n}$ given in the statement of the theorem. \paragraph{Handling the middle ground $m\approx n^{3/2}$.} Some care is needed when $m$ approaches $n^{3/2}$, because of the potentially unbounded growth of the constant $A_j$. To handle this situation, we simply fix a value $j$, in the manner detailed below, write $m=kn^{\alpha_j}$, solve $k$ separate problems, each involving $m/k=n^{\alpha_j}$ points of $P$ and all the $n$ lines of $L$, and sum up the resulting incidence bounds. We then get \begin{align*} I(P,L) & \le akb^j \left( (m/k)^{1/2}n^{3/4} + (m/k) \right) + kB \left( (m/k)^{2/3}n^{1/3}s^{1/3} + n \right ) \\ & = ak^{1/2}b^j m^{1/2}n^{3/4} + ab^j m + k^{1/3}B m^{2/3}n^{1/3}s^{1/3} + kB n , \end{align*} for a suitable absolute constant $a$. Recalling that $\alpha_j = \frac32 - \frac{2}{j+2}$, we have $$ k \le m/n^{\alpha_j} \le n^{3/2}/n^{\alpha_j} = n^{2/(j+2)} . 
$$ Hence the coefficient of the leading term in the above bound is bounded by $an^{1/(j+2)}b^j$, and we (asymptotically) minimize this expression by choosing $$ j = j_0 := \sqrt{\log n}/\sqrt{\log b} . $$ With this choice all the other coefficients are also dominated by the leading coefficient, and we obtain \begin{equation} \label{eq:mn32} I(P,L) = O\left( 2^{2\sqrt{\log b}\sqrt{\log n}} \left( m^{1/2}n^{3/4} + m^{2/3}n^{1/3}s^{1/3} + m + n \right) \right) . \end{equation} In other words, the bound in (\ref{st}) and (\ref{amn}) holds for any $m\le n^{3/2}$, but, for $m \ge n^{\alpha_{j_0}}$ one should use instead the bound in (\ref{eq:mn32}), which controls the exponential growth of the constants of proportionality within this range. \paragraph{The case $m > n^{3/2}$.} The analysis of this case is, in a sense, a mirror image of the preceding analysis, except for a new key lemma (Lemma~\ref{nd2}). For the sake of completeness, we repeat a sizeable portion of the analysis, providing many of the relevant (often differing) details. We partition this range into a sequence of ranges $m\ge n^{\alpha_0}$, $n^{\alpha_1} \le m < n^{\alpha_0},\ldots$, where $\alpha_0 = 2$ and the sequence $\{\alpha_j\}_{j\ge 0}$ is decreasing and converges to $3/2$. The induction is on the index $j$ of the range $n^{\alpha_{j}} \le m < n^{\alpha_{j-1}}$, and establishes (\ref{st}) for $m$ in this range, with a coefficient $A_j$ (written in (\ref{st},\ref{amn}) as $A_{m,n}$) that increases with $j$. The base range of the induction is $m\ge n^2$, where the trivial general upper bound on point-line incidences in any dimension, dual to the one used in the previous case, yields $I=O(n^2+m)=O(m)$, so (\ref{st}) holds for a sufficiently large choice of the initial constant $A_0$. 
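For completeness, here is the short standard argument behind this dual trivial bound: each point of $P$ incident to $k\ge 2$ lines of $L$ contributes $k \le k(k-1) = 2\binom{k}{2}$ incidences, and each pair of distinct lines meets in at most one point, so such points contribute at most $2\binom{n}{2} \le n^2$ incidences in total; points incident to at most one line contribute at most $m$ more. Hence
$$ I(P,L) \le m + 2\binom{n}{2} = O(n^2 + m) , $$
which is $O(m)$ for $m\ge n^2$.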
Assume then that (\ref{st}) holds for all $m \ge n^{\alpha_{j-1}}$ for some $j \ge 1$, and consider an instance of the problem with $n^{3/2} \le m < n^{\alpha_{j-1}}$ (again, the lower bound will increase, to $n^{\alpha_j}$, to facilitate the induction step). For a parameter $r$, to be specified later, apply the polynomial partition theorem to obtain an $r$-partitioning trivariate (real) polynomial $f$ of degree $D=O(r^{1/3})$. That is, every connected component of $\reals^3\setminus Z(f)$ contains at most $m/r$ points of $P$, and the number of components of $\reals^3\setminus Z(f)$ is $O(D^3) = O(r)$. Set $P_1:= P\cap Z(f)$ and $P'_1:=P\setminus P_1$. Each line $\ell\in L$ is either fully contained in $Z(f)$ or intersects it in at most $D$ points. Let $L_1$ denote the subset of lines of $L$ that are fully contained in $Z(f)$ and put $L'_1 = L\setminus L_1$. As before, we have $$ I(P,L) = I(P_1,L_1) + I(P_1,L'_1) + I(P'_1,L'_1) . $$ We have $$ I(P_1,L'_1) \le |L'_1|\cdot D \le nD , $$ and we estimate $I(P'_1,L'_1)$ as follows. For each cell $\tau$ of $\reals^3\setminus Z(f)$, put $P_\tau = P\cap \tau$ (that is, $P'_1\cap\tau$), and let $L_\tau$ denote the set of the lines of $L'_1$ that cross $\tau$; put $m_\tau = |P_\tau| \le m/r$, and $n_\tau = |L_\tau|$. As before, we have $\sum_\tau n_\tau \le n(1+D)$, so the average number of lines that cross a cell is $O(n/D^2)$. Arguing as above, we may assume, by possibly increasing the number of cells by a constant factor, that each $n_\tau$ is at most $n/D^2$. Clearly, we have $$ I(P'_1,L'_1) = \sum_{\tau} I(P_\tau,L_\tau). $$ For each $\tau$ we use the trivial dual bound, mentioned above, $I(P_\tau,L_\tau) = O(n_\tau^2+m_\tau)$. Summing over the cells, we get $$ I(P'_1,L'_1) = \sum_{\tau} I(P_\tau,L_\tau) = O\left(D^3\cdot(n/D^2)^2 + m \right) = O\left(n^2/D + m \right) . 
$$ For the initial value of $D$, we take $D = n^2/m$, noting that $1\le D^3\le m$ because $n^{3/2}\le m\le n^2$, and get the bound $$ I(P'_1,L'_1) + I(P_1,L'_1) = O(n^2/D+m+nD) = O(m + n^3/m) = O(m) , $$ where the latter bound follows since $m\ge n^{3/2}$. It remains to estimate $I(P_1,L_1)$. Since all the incidences involving any point in $P'_1$ and/or any line in $L'_1$ have been accounted for, we discard these sets, and remain with $P_1$ and $L_1$ only. As before, we forget the preceding polynomial partitioning step, and start afresh, applying a new polynomial partitioning to $P_1$ with a polynomial $g$ of degree $E$, which will typically be much smaller than $D$, but still non-constant. For this case we need the following lemma, which can be regarded, in some sense, as a dual (albeit somewhat more involved) version of Lemma~\ref{md2}. Unlike the rest of the analysis, the best way to prove this lemma is by switching to the complex projective setting. This is needed for one key step in the proof, where we need the property that the projection of a complex projective variety is a variety. Once this is done, we can switch back to the real affine case, and complete the proof. Here is a very quick review of the transition to the complex projective setup. A real affine algebraic variety $X$, defined by a collection of real polynomials, can also be regarded as a complex projective variety. (Technically, one needs to take the \emph{projective closure} of the \emph{complexification} of $X$; details about these standard operations can be found, e.g., in Bochnak et al.~\cite[Proposition 8.7.17]{BCR} and in Cox et al.~\cite[Definition 8.4.6]{CLO}.) If $f$ is an irreducible polynomial over $\reals$, it might still be reducible over $\cplx$, but then it must have the form $f=g\bar g$, where $g$ is an irreducible complex polynomial and $\bar g$ is its complex conjugate. 
(Indeed, if $h$ is any irreducible factor of $f$, then $\bar h$ is also an irreducible factor of $f$, and therefore $h\bar h$ is a real polynomial dividing $f$. As $f$ is irreducible over $\reals$, the claim follows.) In the following lemma, adapting a notation used in earlier works, we say that a point $p\in P_1$ is \emph{$1$-poor} (resp., \emph{$2$-rich}) if it is incident to at most one line (resp., to at least two lines) of $L_1$. Recall also that a \emph{regulus} is a doubly-ruled surface in $\reals^3$ or in $\cplx^3$. It is the union of all lines that pass through three fixed pairwise skew lines; it is a quadric, which is either a hyperbolic paraboloid or a one-sheeted hyperboloid. \begin{lemma} \label{nd2} Let $f$ be an irreducible polynomial in $\cplx[x,y,z]$, such that $Z(f)$ is not a complex plane nor a complex regulus, and let $L_1$ be a finite set of lines fully contained in $Z(f)$. Then, with the possible exception of at most two lines, each line $\ell\in L_1$ is incident to at most $O(D^3)$ $2$-rich points. \end{lemma} \noindent{\bf Proof.} The strategy of the proof is to charge each incidence of $\ell$ with some $2$-rich point $p$ to an intersection of $\ell$ with another line of $L_1$ that passes through $p$, and to argue that, in general, there can be only $O(D^3)$ such other lines. This in turn will be shown by arguing that the union of all the lines that are fully contained in $Z(f)$ and pass through $\ell$ is a one-dimensional variety, of degree $O(D^3)$, from which the claim will follow. As we will show, this will indeed be the case except when $\ell$ is one of at most two ``exceptional'' lines on $Z(f)$. Fix a line $\ell$ as in the lemma, assume for simplicity that it passes through the origin, and write it as $\{tv_0 \mid t\in\cplx\}$; since $\ell$ is a real line, $v_0$ can be assumed to be real. 
Consider the union $V(\ell)$ of all the lines that are fully contained in $Z(f)$ and are incident to $\ell$; that is, $V(\ell)$ is the union of $\ell$ with the set of all points $p\in Z(f)\setminus\ell$ for which there exists $t\in\cplx$ such that the line connecting $p$ to $tv_0\in\ell$ is fully contained in $Z(f)$. In other words, for such a $t$ and for each $s\in\cplx$, we have $f((1-s)p+stv_0)=0$. Regarding the left-hand side as a polynomial in $s$, we can write it as ${\displaystyle \sum_{i=0}^D G_i(p;t) s^i \equiv 0}$, for suitable (complex) polynomials $G_i(p;t)$ in $p$ and $t$, each of total degree at most $D$. In other words, $p$ and $t$ have to satisfy the system \begin{equation} \label{fsyst} G_0(p;t) = G_1(p;t) = \cdots = G_D(p;t) = 0 , \end{equation} which defines an algebraic variety $\sigma(\ell)$ in $\P^4(\cplx)$. Note that, substituting $s=0$, we have $G_0(p;t)\equiv f(p)$, and that the limit points $(tv_0,t)$ (corresponding to points on $\ell$) also satisfy this system, since in this case $f((1-s)tv_0+stv_0)=f(tv_0)=0$ for all $s$. In other words, $V(\ell)$ is the projection of $\sigma(\ell)$ into $\P^3(\cplx)$, given by $(p,t)\mapsto p$. For each $p\in Z(f)\setminus\ell$ this system has only finitely many solutions in $t$, for otherwise the plane spanned by $p$ and $\ell$ would be fully contained in $Z(f)$, contrary to our assumption. By the projective extension theorem (see, e.g., \cite[Theorem 8.6]{CLO}), the projection of $\sigma(\ell)$ into $\P^3(\cplx)$, in which $t$ is discarded, is an algebraic variety $\tau(\ell)$. We observe that $\tau(\ell)$ is contained in $Z(f)$, and is therefore of dimension at most two. Assume first that $\tau(\ell)$ is two-dimensional. As $f$ is irreducible over $\cplx$, we must have $\tau(\ell) = Z(f)$. This implies that each point $p\in Z(f)\setminus\ell$ is incident to a (complex) line that is fully contained in $Z(f)$ and is incident to $\ell$. In particular, $Z(f)$ is ruled by complex lines. 
By assumption, $Z(f)$ is neither a complex plane nor a complex regulus. We may also assume that $Z(f)$ is not a complex cone, for then each line in $L_1$ is incident to at most one 2-rich point (namely, the apex of $Z(f)$), making the assertion of the lemma trivial. It then follows that $Z(f)$ is an irreducible singly ruled (complex) surface. As argued in Guth and Katz~\cite{GK2} (see also our companion paper~\cite{SS3dvar} for an independent analysis of this situation, which caters more explicitly to the complex setting too), $Z(f)$ can contain at most two lines $\ell$ with this property. Excluding these (at most) two exceptional lines $\ell$, we may thus assume that $\tau(\ell)$ is (at most) a one-dimensional curve. Clearly, by definition, each point $(p,t)\in\sigma(\ell)$, except for $p\in\ell$, defines a line $\lambda$, in the original 3-space, that connects $p$ to $tv_0$, and each point $q\in\lambda$ satisfies $(q,t)\in\sigma(\ell)$. Hence, the line $\{(q,t) \mid q\in\lambda\}$ is fully contained in $\sigma(\ell)$, and therefore the line $\lambda$ is fully contained in $\tau(\ell)$. Since $\tau(\ell)$ is one-dimensional, this in turn implies (see, e.g., \cite[Lemma 2.3]{SS4d}) that $\tau(\ell)$ is a \emph{finite} union of (complex) lines, whose number is at most $\deg (\tau(\ell))$. This also implies that $\sigma(\ell)$ is the union of the same number of lines, and in particular $\sigma(\ell)$ is also one-dimensional, and the number of lines that it contains is at most $\deg (\sigma(\ell))$. We claim that this latter degree is at most $O(D^3)$. This follows from a well-known result in algebra (see, e.g., Schmid \cite[Lemma 2.2]{Schmid}), that asserts that, since $\sigma(\ell)$ is a one-dimensional curve in $\P^4(\cplx)$, and is the common zero set of polynomials, each of degree $O(D)$, its degree is $O(D^3)$. This completes the proof of the lemma. (The passage from the complex projective setting back to the real affine one is trivial for this property.) 
\proofend \begin{corollary} \label{nd3} Let $f$ be a real or complex trivariate polynomial of degree $D$, such that (the complexification of) $Z(f)$ does not contain any complex plane nor any complex regulus. Let $L_1$ be a set of $n$ lines fully contained in $Z(f)$, and let $P_1$ be a set of $m$ points contained in $Z(f)$. Then $I(P_1,L_1) = O(m+nD^3)$. \end{corollary} \noindent{\bf Proof.} Write $f=\prod_{i=1}^s f_i$ for its decomposition into irreducible factors, for $s\le D$. We apply Lemma~\ref{nd2} to each complex factor $f_i$ of $f$. By the observation preceding Lemma~\ref{nd2}, some of these factors might be complex (non-real) polynomials, even when $f$ is real. That is, regardless of whether the original $f$ is real or not, we carry out the analysis in the complex projective space $\P^3(\cplx)$, and regard $Z(f_i)$ as a variety in that space. Note also that, by focussing on the single irreducible component $Z(f_i)$ of $Z(f)$, we consider only points and lines that are fully contained in $Z(f_i)$. We thus shrink $P_1$ and $L_1$ accordingly, and note that the notions of being 2-rich or 1-poor are now redefined with respect to the reduced sets. All of this will be rectified at the end of the proof. Assign each line $\ell\in L_1$ to the first component $Z(f_i)$, in the above order, that fully contains $\ell$, and assign each point $p\in P_1$ to the first component that contains it. If a point $p$ and a line $\ell$ are incident, then either they are both assigned to the same component $Z(f_i)$, or $p$ is assigned to some component $Z(f_i)$ and $\ell$, which is assigned to a later component, is not contained in $Z(f_i)$. Each incidence of the latter kind can be charged to a crossing between $\ell$ and $Z(f_i)$, and the total number of these crossings is $O(nD)$. It therefore suffices to consider incidences between points and lines assigned to the same component. 
Moreover, if a point $p$ is 2-rich with respect to the entire collection $L_1$ but is 1-poor with respect to the lines assigned to its component, then all of its incidences except one are accounted for by the preceding term $O(nD)$, which thus takes care also of the single incidence within $Z(f_i)$. By Lemma~\ref{nd2}, for each $f_i$, excluding at most two exceptional lines, the number of incidences between a line assigned to (and contained in) $Z(f_i)$ and the points assigned to $Z(f_i)$ that are still 2-rich within $Z(f_i)$, is $O(\deg(f_i)^3) = O(D^3)$. Summing over all relevant lines, we get the bound $O(nD^3)$. Finally, each irreducible component $Z(f_i)$ can contain at most two exceptional lines, for a total of at most $2D$ such lines. The number of $2$-rich points on each such line $\ell$ is at most $n$, since each such point is incident to another line, so the total number of corresponding incidences is at most $O(nD)$, which is subsumed by the preceding bound $O(nD^3)$. The number of incidences with $1$-poor points is, trivially, at most $m$. This completes the proof of the corollary. \proofend \paragraph{Pruning.} In the preceding lemma and corollary, we have excluded planar and reguli components of $Z(f)$. Arguing as in the case of small $m$, the number of incidences involving points that lie on planar components of $Z(f)$ is $O(m^{2/3}n^{1/3}s^{1/3}+m)$ (see Lemma~\ref{incpl}), and the number of incidences involving points that lie on conic components of $Z(f)$ is $O(m+nD) = O(m)$ (see Lemma~\ref{incone}). A similar bound holds for points on the reguli components. Specifically, we assign each point and line to a regulus that contains them, if one exists, in the same first-come first-serve manner used above. 
Any point $p$ can be incident to at most two lines that are fully contained in the regulus to which it is assigned, and any other incidence of $p$ with a line $\ell$ can be uniquely charged to the intersection of $\ell$ with that regulus, for a total (over all lines and reguli) of $O(nD)$ incidences. We remove all points that lie in any such component and all lines that are fully contained in any such component. With the choice of $D=n^2/m$, we lose in the process $$ O(m^{2/3}n^{1/3}s^{1/3}+m+nD) = O(m + m^{2/3}n^{1/3}s^{1/3}) $$ incidences (recall that $nD\le m$ for $m\ge n^{3/2}$). For the remainder sets, which we continue to denote as $P_1$ and $L_1$, respectively, no plane contains more than $O(D)$ lines of $L_1$, as argued in Lemma~\ref{donpl}. \paragraph{A new polynomial partitioning.} We adapt the notation used in the preceding case, with a few modifications. We choose a degree $E$, typically much smaller than $D$, and construct a partitioning polynomial $g$ of degree $E$ for $P_1$. With an appropriate value of $r=\Theta(E^3)$, we obtain $O(r)$ cells, each containing at most $m/r$ points of $P_1$, and each line of $L_1$ either crosses at most $E+1$ cells, or is fully contained in $Z(g)$. Set $P_2:= P_1\cap Z(g)$ and $P'_2:=P_1\setminus P_2$. Similarly, denote by $L_2$ the set of lines of $L_1$ that are fully contained in $Z(g)$, and put $L'_2:=L_1\setminus L_2$. We first dispose of incidences involving the lines of $L_2$. By Lemma~\ref{incpl} and the preceding arguments, the number of incidences involving points of $P_2$ that lie in some planar, conic, or regulus component of $Z(g)$, and all the lines of $L_2$, is $$ O(m^{2/3}n^{1/3}s^{1/3}+m+nE) . $$ We remove these points from $P_2$, and remove all the lines of $L_2$ that are contained in such components. Continue to denote the sets of remaining points and lines as $P_2$ and $L_2$. By Corollary~\ref{nd3}, the number of incidences between $P_2$ and $L_2$ is $O(m+nE^3)$. 
To complete the estimation, we need to bound the number of incidences in the cells of the partition, which we do inductively, as before. Specifically, for each cell $\tau$ of $\reals^3\setminus Z(g)$, put $P_\tau := P'_2\cap\tau$, and let $L_\tau$ denote the set of the lines of $L'_2$ that cross $\tau$; put $m_\tau = |P_\tau| \le m/r$, and $n_\tau = |L_\tau|$. Since every line $\ell\in L'_2$ crosses at most $1+E$ components of $\reals^3\setminus Z(g)$, we have $\sum_\tau n_\tau \le n(1+E)$, and, arguing as above, we may assume that each $n_\tau$ is at most $n/E^2$, and each $m_\tau$ is at most $m/E^3$. To apply the induction hypothesis in each cell, we therefore require that ${\displaystyle \frac {m} {E^3} \ge \left(\frac{n}{E^2}\right)^{\alpha_{j-1}}}$. (As before, the actual sizes of $P_1$ and $L_1$ might be smaller than the respective original values $m$ and $n$. We use here the original values, and note, similar to the preceding case, that the fact that these are only upper bounds on the actual sizes is harmless for the induction process.) That is, we require \begin{equation} \label{Dlb2} E\ge \left( \frac {n^{\alpha_{j-1}}}{m} \right)^{1/(2\alpha_{j-1}-3)} . \end{equation} With these preparations, we apply the induction hypothesis within each cell $\tau$, recalling that no plane contains more than $D$ lines of $L'_2\subseteq L_1$, and get \begin{align*} I(P_\tau,L_\tau) & \le A_{j-1} \left( m_\tau^{1/2}n_\tau^{3/4}+ m_\tau \right) + B\left( m_\tau^{2/3}n_\tau^{1/3}D^{1/3} + n_\tau \right) \\ & \le A_{j-1} \left( (m/E^3)^{1/2}(n/E^2)^{3/4}+ m/E^3 \right) + B\left( (m/E^3)^{2/3}(n/E^2)^{1/3}D^{1/3} + n/E^2 \right) . \end{align*} Summing these bounds over the cells $\tau$, that is, multiplying them by $O(E^3)$, we get, for a suitable absolute constant $b$, $$ I(P'_2,L'_2) = \sum_{\tau} I(P_\tau,L_\tau) \le b A_{j-1} \left( m^{1/2}n^{3/4} + m\right) + bB\left(m^{2/3}n^{1/3}E^{1/3}D^{1/3} + nE \right) . 
$$ Requiring that $E\le m/n$, the last term satisfies $nE\le m$, and the first term is also at most $O(m)$ (because $m\ge n^{3/2}$). The second term, after substituting $D=O(n^2/m)$, becomes $O(m^{1/3}nE^{1/3})$. Hence, with a slightly larger $b$, we have $$ I(P'_2,L'_2) \le bA_{j-1} m + bB m^{1/3}nE^{1/3} . $$ Collecting all partial bounds obtained so far, we obtain $$ I(P,L) \le c\left(m^{2/3}n^{1/3}s^{1/3} + m + nE^3\right) + bA_{j-1} m + bB m^{1/3}nE^{1/3} , $$ for a suitable constant $c$. We choose $E$ to ensure that the two $E$-dependent terms are dominated by $m$. That is, $$ m^{1/3}nE^{1/3} \le m ,\quad\text{or}\quad E\le m^{2}/n^{3} , \quad\quad\text{and}\quad\quad nE^3 \le m ,\quad\text{or}\quad E\le m^{1/3}/n^{1/3} . $$ In addition, we also require that $E\le m/n$, but, as is easily seen, both of the above constraints imply that $E\le m/n$, so we get this latter constraint for free, and ignore it in what follows. As is easily checked, the second constraint $E\le m^{1/3}/n^{1/3}$ is stricter than the first constraint $E\le m^2/n^3$ for $m\ge n^{8/5}$, and the situation is reversed when $m\le n^{8/5}$. So in our inductive descent of $m$, we first consider the second constraint, and then switch to the first constraint. Hence, in the first part of this analysis, the two constraints on the choice of $E$ are $$ \left( \frac {n^{\alpha_{j-1}}}{m} \right)^{1/(2\alpha_{j-1}-3)} \le E \le \frac{m^{1/3}}{n^{1/3}} , $$ and, for these constraints to be compatible, we require that $$ \left( \frac {n^{\alpha_{j-1}}}{m} \right)^{1/(2\alpha_{j-1}-3)} \le \frac{m^{1/3}}{n^{1/3}} , \quad\quad\text{or}\quad\quad m \ge n^{\frac{5\alpha_{j-1}-3}{2\alpha_{j-1}}} . $$ We start the process with $\alpha_0=2$, and take ${\displaystyle \alpha_1 := \frac{5\alpha_{0}-3}{2\alpha_{0}} = 7/4}$. 
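To spell out the last equivalence, raise both sides of the compatibility constraint to the power $3(2\alpha_{j-1}-3)$, which is positive since $\alpha_{j-1} > 3/2$; it becomes
$$
\left( \frac {n^{\alpha_{j-1}}}{m} \right)^{3} \le \left( \frac{m}{n} \right)^{2\alpha_{j-1}-3} ,
\quad\quad\text{or}\quad\quad
n^{3\alpha_{j-1}}\cdot n^{2\alpha_{j-1}-3} \le m^{3}\cdot m^{2\alpha_{j-1}-3} ,
$$
that is, $n^{5\alpha_{j-1}-3} \le m^{2\alpha_{j-1}}$, which is exactly the stated condition $m \ge n^{(5\alpha_{j-1}-3)/(2\alpha_{j-1})}$. In particular, for $\alpha_0 = 2$ this gives $m \ge n^{7/4}$, matching the value of $\alpha_1$.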
As this is still larger than $8/5$, we perform two additional rounds of the induction, using the same constraints, leading to the exponents $$ \alpha_2 = \frac{5\alpha_1-3}{2\alpha_1} = \frac{23}{14}, \quad\text{and}\quad \alpha_3 = \frac{5\alpha_2-3}{2\alpha_2} = \frac{73}{46} < \frac{8}{5} . $$ To play it safe, we reset $\alpha_3:=8/5$, and establish the induction step for $m\ge n^{8/5}$. We can then proceed to the second part, where the two constraints on the choice of $E$ are $$ \left( \frac {n^{\alpha_{j-1}}}{m} \right)^{1/(2\alpha_{j-1}-3)} \le E \le \frac{m^{2}}{n^{3}} , $$ and, for these constraints to be compatible, we require that $$ \left( \frac {n^{\alpha_{j-1}}}{m} \right)^{1/(2\alpha_{j-1}-3)} \le \frac{m^{2}}{n^{3}} , \quad\quad\text{or}\quad\quad m \ge n^{\frac{7\alpha_{j-1}-9}{4\alpha_{j-1}-5}} . $$ We define, for $j\ge 4$, ${\displaystyle \alpha_j = \frac{7\alpha_{j-1}-9}{4\alpha_{j-1}-5}}$. Substituting $\alpha_3=8/5$ we get $\alpha_4=11/7$, and in general a simple calculation shows that $$ \alpha_j = \frac32 + \frac{1}{4j-2} , $$ for $j\ge 3$. This sequence does indeed converge to $3/2$ as $j\to\infty$, implying that the entire range $m = \Omega(n^{3/2})$ is covered by the induction. In both parts, we conclude that if $m\ge n^{\alpha_j}$ then the bound asserted in the theorem holds with $A_j = bA_{j-1}+c$, and $B=c$. This completes the induction step. Finally, we calibrate the dependence of the constant of proportionality on $m$ and $n$, by noting that, for $n^{\alpha_{j}}\le m < n^{\alpha_{j-1}}$, the constant is $O(b^j)$. We have $$ \frac32 + \frac{1}{4j-6} = \alpha_{j-1} \ge \frac{\log m}{\log n} , \quad\text{or}\quad j \le \frac{3 \frac{\log m}{\log n}-4}{2\frac{\log m}{\log n}-3} = \frac{\log\left( m^3/n^4\right)}{\log\left(m^2/n^3\right)} . $$ (Technically, this only handles the range $j\ge 3$, but, for an asymptotic bound, we can extend it to $j=1,2$ too.) 
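For the record, the simple calculation behind the closed form is an induction on $j$: the base case is $\alpha_3 = \frac32 + \frac{1}{10} = \frac85$, and, assuming $\alpha_{j-1} = \frac32 + \frac{1}{4j-6}$, the recurrence gives
$$
\alpha_j = \frac{7\alpha_{j-1}-9}{4\alpha_{j-1}-5}
= \frac{\frac32 + \frac{7}{4j-6}}{1 + \frac{4}{4j-6}}
= \frac{6j-2}{4j-2} = \frac{3j-1}{2j-1} = \frac32 + \frac{1}{4j-2} ,
$$
where the third equality is obtained by multiplying the numerator and denominator by $4j-6$.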
This establishes the explicit expression for $A_{m,n}$ for this range, as stated in the theorem, and completes its proof. \proofend Again, as in the case of a small $m$, we need to be careful when $m$ approaches $n^{3/2}$. Here we can fix a $j$, assume that $n^{3/2} \le m < n^{\alpha_j}$, and set $k:= m/n^{\alpha'_j}$, where $\alpha'_j = 3/2 - 2/(j+2)$ is the $j$-th index in the hierarchy for $m\le n^{3/2}$. That is, $$ k\le n^{\alpha_j-\alpha'_j} = n^{\frac{1}{4j-2} + \frac{2}{j+2}} . $$ As before, we now solve $k$ separate subproblems, each with $m/k$ points of $P$ and all the lines of $L$, and sum up the resulting incidence bounds. The analysis is similar to the one used above, and we omit its details. It yields almost the same bound as in (\ref{eq:mn32}), where the slightly larger upper bound on $k$ leads to the slightly larger bound $$ I(P,L) = O\left( 2^{\sqrt{4.5}\sqrt{\log b}\sqrt{\log n}} \left( m^{1/2}n^{3/4} + m^{2/3}n^{1/3}s^{1/3} + m + n \right) \right) , $$ with a slightly different absolute constant $b$. \section{Discussion} In this paper we derived an asymptotically tight bound for the number of incidences between a set $P$ of points and a set $L$ of lines in $\reals^3$. This bound has already been established by Guth and Katz~\cite{GK2}, where the main tool was the use of partitioning polynomials. As already mentioned, the main novelty here is to use two separate partitioning polynomials of different degrees; the one with the higher degree is used as a pruning mechanism, after which the maximum number of coplanar lines of $L$ can be better controlled (by the degree $D$ of the polynomial), which is a key ingredient in making the inductive argument work. The second main tool of Guth and Katz was the Cayley--Salmon theorem. This theorem says that a surface in $\reals^3$ of degree $\Deg$ cannot contain more than $11\Deg^2-24\Deg$ lines, unless it is \emph{ruled by lines}. 
This is an ``ancient'' theorem, from the 19th century, combining algebraic and differential geometry, and its re-emergence in recent years has kindled the interest of the combinatorial geometry community in classical (and modern) algebraic geometry. New proofs of the theorem were obtained (see, e.g., Terry Tao's blog~\cite{tt:blog}), and generalizations to higher dimensions have also been developed (see Landsberg~\cite{Land}). However, the theorem only holds over the complex field, and using it over the reals requires some care. There is also an alternative way to bound the number of point-line incidences using flat and singular points. However, as already remarked, these two, as well as the Cayley--Salmon machinery, are non-trivial constructs, especially in higher dimensions, and their generalization to other problems in combinatorial geometry (even incidence problems with curves other than lines or incidences with lines in higher dimensions) seems quite difficult (and is mostly open). It is therefore of considerable interest to develop alternative, more elementary interfaces between algebraic and combinatorial geometry, which is a primary goal of the present paper (as well as of Guth's recent work~\cite{Gu14}). In this regard, one could perhaps view Lemma~\ref{md2} and Corollary~\ref{nd3} as certain weaker analogs of the Cayley--Salmon theorem, which are nevertheless easier to derive, without having to use differential geometry. Some of the tools in Guth's paper~\cite{Gu14} might also be interpreted as such weaker variants of the Cayley--Salmon theory. It would be interesting to see suitable extensions of these tools to higher dimensions. Besides the intrinsic interest in simplifying the Guth--Katz analysis, the present work has been motivated by our study of incidences between points and lines in four dimensions. 
This has begun in a year-old companion paper~\cite{surf-socg}, where we have used the polynomial partitioning method, with a polynomial of constant degree. This, similarly to Guth's work in three dimensions~\cite{Gu14}, has resulted in a slightly weaker bound and considerably stricter assumptions concerning the input set of lines. In a more involved follow-up study~\cite{SS4d}, we have managed to improve the bound, and to get rid of the restrictive assumptions, using two partitioning steps, with polynomials of non-constant degrees, as in the present paper. However, the analysis in \cite{SS4d} is not as simple as in the present paper, because, even though there are generalizations of the Cayley--Salmon theorem to higher dimensions (due to Landsberg, as mentioned above), it turns out that a thorough investigation of the variety of lines fully contained in a given hypersurface of non-constant degree is a fairly intricate and challenging problem, raising many deep questions in algebraic geometry, some of which are still unresolved. One potential application of the techniques used in this paper, mainly the interplay between partitioning polynomials of different degrees, is to the problem, recently studied by Sharir, Sheffer and Zahl~\cite{SSZ}, of bounding the number of incidences between points and circles in $\reals^3$. That paper uses a partitioning polynomial of constant degree, and, as a result, the term that caters to incidences within lower-dimensional spaces (such as our term $m^{2/3}n^{1/3}s^{1/3}$) does not go well through the induction mechanism, and consequently the bound derived in \cite{SSZ} was weaker. We believe that our technique can improve the bound of \cite{SSZ} in terms of this ``lower-dimensional'' term. A substantial part of the present paper (half of the proof of the theorem) was devoted to the treatment of the case $m>n^{3/2}$. 
However, under the appropriate assumptions, the number of points incident to at least two lines was shown by Guth and Katz~\cite{GK2} to be bounded by $O(n^{3/2})$. A recent note by Koll\'ar~\cite{Kol} gives a simplified proof, including an explicit multiplicative constant. In his work, Koll\'ar does not use partitioning polynomials, but employs more advanced algebraic geometric tools, like the \emph{arithmetic genus} of a curve, which serves as an upper bound for the number of singular points. If we accept (pedagogically) the upper bound $O(n^{3/2})$ for the number of 2-rich points as a ``black box'', the regime in which $m>n^{3/2}$ becomes irrelevant, and can be discarded from the analysis, thus greatly simplifying the paper. A challenging problem is thus to find an elementary proof that the number of points incident to at least two lines is $O(n^{3/2})$ (e.g., without the use of the Cayley--Salmon theorem or the tools used by Koll\'ar). Another challenging (and probably harder) problem is to improve the bound of Guth and Katz when the bound $s$ on the maximum number of mutually coplanar lines is $\ll n^{1/2}$: In their original derivation, Guth and Katz~\cite{GK2} consider mainly the case $s=n^{1/2}$, and the lower bound construction in \cite{GK2} also has $s=n^{1/2}$. Another natural further research direction is to find further applications of partitioning polynomials of intermediate degrees.
\section{Introduction} In \cite{AnotherPresentation} W. van der Kallen proved that the linear Steinberg group \(\stlin(n, K)\) over a commutative ring \(K\) is a central extension of the elementary linear group \(\elin(n, K)\) if \(n \geq 4\). More precisely, he showed that the Steinberg group admits a more invariant presentation and actually is a crossed module over the general linear group \(\glin(n, K)\). This was generalized by M. Tulenbaev in \cite{Tulenbaev} for linear groups over almost commutative rings. For symplectic groups the same result was proved by A. Lavrenov in \cite{CentralityC}. He used essentially the same approach: there is another presentation of the symplectic Steinberg group \(\mathrm{StSp}(2\ell, K)\) over a commutative ring \(K\) for \(\ell \geq 3\) such that it is obvious that this group is a central extension of the elementary symplectic group \(\mathrm{ESp}(2\ell, K)\). Together with S. Sinchuk he also proved centrality of the corresponding \(K_2\)-functors for the Chevalley groups of types \(\mathsf D_\ell\) for \(\ell \geq 3\) and \(\mathsf E_\ell\) in \cite{CentralityD, CentralityE} using a different method. Another presentation for orthogonal groups was proved by S. B\"oge in \cite{AnotherPresentationOrth}, but only for fields of characteristic not \(2\). She also considered sufficiently isotropic orthogonal groups instead of the split ones. In \cite{LinK2} we reproved that \(\stlin(n, K)\) is a crossed module over \(\glin(n, K)\) using pro-groups. This more powerful method allowed us to generalize the result to isotropic linear groups over almost commutative rings and to matrix linear groups over non-commutative rings satisfying a local stable rank condition. Together with Lavrenov and Sinchuk we applied the pro-group approach to the simple simply connected Chevalley groups of rank at least \(3\) in \cite{ChevK2}. 
The same result for isotropic odd unitary groups (including the Chevalley groups of type \(\mathsf A_\ell\), \(\mathsf B_\ell\), \(\mathsf C_\ell\), and \(\mathsf D_\ell\)) was proved in \cite{UnitK2}. For the Chevalley groups of rank \(2\) there are counterexamples, see \cite{Wendt}. The pro-group approach does not give any ``another presentation'' of the corresponding Steinberg group by itself. In this paper we prove the following: \begin{theorem*} Let \(K\) be a unital commutative ring, \(M\) be a finitely presented \(K\)-module with a quadratic form \(q\) and pairwise orthogonal hyperbolic pairs \((e_{-1}, e_1), \ldots, (e_{-\ell}, e_\ell)\) for \(\ell \geq 3\). Then the orthogonal Steinberg group \(\storth(M, q)\) is isomorphic to the abstract group \(\storth^*(M, q)\) generated by symbols \(X^*(u, v)\), where \(u, v \in M\) are vectors, \(u\) lies in the orbit of \(e_1\) under the action of the orthogonal group, and \(u \perp v\). The relations on these symbols are \begin{itemize} \item \(X^*(u, v + v') = X^*(u, v)\, X^*(u, v')\); \item \(X^*(u, v)\, X^*(u', v')\, X^*(u, v)^{-1} = X^*(T(u, v)\, u', T(u, v)\, v')\), where \(T(u, v)\) is the corresponding ESD-transvection; \item \(X^*(u, va) = X^*(v, -ua)\) if \((u, v)\) lies in the orbit of \((e_1, e_2)\); \item \(X^*(u, ua) = 1\). \end{itemize} \end{theorem*} Using this variant of ``another presentation'', it is obvious that \(\storth(M, q)\) is a crossed module over \(\orth(M, q)\). During the proof we also give a general definition of ESD-transvections \(X(u, v) \in \storth(M, q)\), where \(u\) is a member of a hyperbolic pair and \(v \perp u\). These elements lift ordinary ESD-transvections \(T(u, v) \in \orth(M, q)\) and satisfy various identities, but their existence for all such \(u\) is unclear. The author wants to express his gratitude to Nikolai Vavilov, Sergey Sinchuk and Andrei Lavrenov for motivation and helpful discussions. 
\section{Orthogonal Steinberg pro-groups} We use the group-theoretical notation \(\up gh = ghg^{-1}\) and \([g, h] = ghg^{-1}h^{-1}\). If a group \(G\) acts on a group \(H\) by automorphisms, we denote the action by \(\up gh\). Let \(K\) be a commutative unital ring, \(M\) be a \(K\)-module of finite presentation with a quadratic form \(q \colon M \to K\) and the associated symmetric bilinear form \(\langle m, m' \rangle = q(m + m') - q(m) - q(m')\). The orthogonal group \(\orth(M, q)\) consists of linear automorphisms \(g \in \glin(M)\) such that \(q(gm) = q(m)\) for all \(m\). A hyperbolic pair \((u, v)\) in \(M\) is a pair of vectors such that \(q(u) = q(v) = 0\) and \(\langle u, v \rangle = 1\). Consider vectors \(u, v \in M\) such that \(q(u) = \langle u, v \rangle = 0\). In this case the operator \[T(u, v) \colon M \to M, m \mapsto m + u\, \langle v, m \rangle - v\, \langle u, m \rangle - u\, q(v)\, \langle u, m \rangle\] is called an Eichler -- Siegel -- Dickson transvection (or an ESD-transvection) with parameters \(u\) and \(v\), see \cite{OddPetrov} for details in a more general context. The following lemma summarizes the well-known properties of these operators; all of them may be checked by direct calculations. \begin{lemma}\label{esd-facts} Each ESD-transvection lies in the orthogonal group \(\orth(M, q)\). Moreover, \begin{itemize} \item \(T(u, v)\, T(u, v') = T(u, v + v')\) for \(q(u) = \langle u, v \rangle = \langle u, v' \rangle = 0\); \item \(T(ua, v) = T(u, va)\) for \(q(u) = \langle u, v \rangle = 0\) and \(a \in K\); \item \(\up g{T(u, v)} = T(gu, gv)\) for \(q(u) = \langle u, v \rangle = 0\) and \(g \in \orth(M, q)\); \item \(T(u, v) = T(v, -u)\) for \(q(u) = q(v) = \langle u, v \rangle = 0\); \item \(T(u, ua) = 1\) for \(q(u) = 0\) and \(a \in K\). \end{itemize} \end{lemma} From now on suppose that there are pairwise orthogonal hyperbolic pairs \((e_{-1}, e_1), \ldots, (e_{-\ell}, e_\ell)\) in \(M\) for \(\ell \geq 3\). 
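To illustrate the kind of direct calculation involved, here is a verification of the first assertion of the lemma. Writing \(b = \langle u, m \rangle\) and \(c = \langle v, m \rangle - q(v)\, b\), we have \(T(u, v)\, m = m + uc - vb\), hence
\[
q(T(u, v)\, m) = q(m) + c^2 q(u) + b^2 q(v) + c\, \langle m, u \rangle - b\, \langle m, v \rangle - bc\, \langle u, v \rangle .
\]
Since \(q(u) = \langle u, v \rangle = 0\), this reduces to
\[
q(m) + b^2 q(v) + b \bigl( \langle v, m \rangle - q(v)\, b \bigr) - b\, \langle v, m \rangle = q(m) ,
\]
so \(T(u, v)\) indeed preserves \(q\).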
We denote by \(M_0\) the orthogonal complement to the span of the vectors \(e_{\pm 1}, \ldots, e_{\pm \ell}\). Recall that the elementary orthogonal transvections of long root type are the elements \(t_{ij}(a) = T(e_i, e_{-j} a) = T(e_{-j}, -e_i a)\) for \(i \neq \pm j\) and \(a \in K\), and the elementary orthogonal transvections of short root type are the elements \(t_j(m) = T(e_{-j}, -m)\) for \(m \in M_0\). These elements are precisely the elementary transvections in \(\orth(M, q)\) considered as an isotropic odd unitary group, but with slightly different parametrizations. See \cite{OddPetrov} or \cite{StabVor} for details. Clearly, \[T(e_i, m) = t_{-i}(-m_0)\, \prod_{j \neq \pm i} t_{i, -j}(m_j)\] if \(\langle e_i, m \rangle = 0\), where \(m = m_0 + \sum_{1 \leq |j| \leq \ell} e_j m_j\) and \(m_0 \in M_0\). The orthogonal Steinberg group \(\storth(M, q)\) is the abstract group generated by elements \(x_{ij}(a)\) and \(x_j(m)\) for \(a \in K\), \(m \in M_0\), and \(i \neq \pm j\). The relations on these elements are the following: \begin{itemize} \item \(x_{ij}(a + b) = x_{ij}(a)\, x_{ij}(b)\); \item \(x_{ij}(a) = x_{-j, -i}(-a)\); \item \(x_i(m + m') = x_i(m)\, x_i(m')\); \item \([x_{ij}(a), x_{kl}(b)] = 1\) for \(j \neq k \neq -i \neq -l \neq j\); \item \([x_{ij}(a), x_{j, -i}(b)] = 1\); \item \([x_{ij}(a), x_{jk}(b)] = x_{ik}(ab)\) for \(i \neq \pm k\); \item \([x_i(m), x_j(m')] = x_{-i, j}(-\langle m, m' \rangle)\) for \(i \neq \pm j\); \item \([x_i(m), x_{jk}(a)] = 1\) for \(j \neq i \neq -k\); \item \([x_i(m), x_{ij}(a)] = x_{-i, j}(-q(m)\, a)\, x_j(ma)\). \end{itemize} There is a group homomorphism \(\stmap \colon \storth(M, q) \to \orth(M, q), x_{ij}(a) \mapsto t_{ij}(a), x_j(m) \mapsto t_j(m)\). Its image is the elementary orthogonal group \(\eorth(M, q)\), i.e. the subgroup of \(\orth(M, q)\) generated by the elementary orthogonal transvections. We would like to find analogs of ESD-transvections in the Steinberg group \(\storth(M, q)\).
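Concretely, with the normalization \(\langle e_{-j}, e_j \rangle = 1\) for hyperbolic pairs, the long root type transvection acts on the hyperbolic basis like an elementary matrix:
\[
t_{ij}(a)\, e_j = e_j + e_i a, \qquad t_{ij}(a)\, e_{-i} = e_{-i} - e_{-j} a,
\]
while \(t_{ij}(a)\) fixes \(M_0\) and all \(e_k\) with \(k \neq j, -i\). This follows directly from the definition of \(T(e_i, e_{-j} a)\), since \(q(e_{-j} a) = q(e_{-j})\, a^2 = 0\) kills the last summand.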
In order to construct such analogs, we use results from \cite{UnitK2} (if \(q\) is split of even rank, then it suffices to use the case of \(\mathsf D_\ell\) considered in \cite{ChevK2}). Recall that there is a ``forgetful'' functor from the category of pro-groups \(\Pro(\Group)\) to the category of group objects in the category of pro-sets \(\Pro(\Set)\). This functor is fully faithful, so we identify pro-groups with the corresponding pro-sets. The projective limits in \(\Pro(\Set)\) are denoted by \(\varprojlim^{\Pro}\) and various pro-sets are labeled with the upper index \((\infty)\), such as \(X^{(\infty)}\). Both categories \(\Pro(\Group)\) and \(\Pro(\Set)\) are complete and cocomplete. We also use the following convention from \cite{ChevK2, LinK2, UnitK2}: if a morphism between pro-sets is given by a first order term (possibly many-sorted), then we add the upper index \((\infty)\) for the formal variables. For example, \([g^{(\infty)}, h^{(\infty)}]\) denotes the commutator morphism \(G^{(\infty)} \times G^{(\infty)} \to G^{(\infty)}\) for a pro-group \(G^{(\infty)}\), and \(m^{(\infty)} a^{(\infty)}\) denotes the product morphism \(M^{(\infty)} \times K^{(\infty)} \to M^{(\infty)}\) for a pro-ring \(K^{(\infty)}\) and its pro-module \(M^{(\infty)}\). The domains of such variables are usually clear from the context. For any \(s \in K\), an \(s\)-homotope of \(K\) is the commutative non-unital \(K\)-algebra \(K^{(s)} = \{a^{(s)} \mid a \in K\}\). The operations are given by \[ a^{(s)} + b^{(s)} = (a + b)^{(s)}, \quad a^{(s)} b = (ab)^{(s)}, \quad a^{(s)} b^{(s)} = (abs)^{(s)}. \] Similarly, \(M^{(s)} = \{m^{(s)} \mid m \in M\}\) is a non-unital \(K^{(s)}\)-module and a unital \(K\)-module with the operations \[ m^{(s)} + {m'}^{(s)} = (m + m')^{(s)}, \quad m^{(s)} a = (ma)^{(s)}, \quad m^{(s)} a^{(s)} = (mas)^{(s)}.
\] The forms on \(M\) may be extended to \(M^{(s)}\) by \[ q\bigl(m^{(s)}\bigr) = (q(m)\, s)^{(s)}, \quad \bigl\langle m^{(s)}, {m'}^{(s)} \bigr\rangle = (\langle m, m' \rangle\, s)^{(s)}. \] Note that in the definition of \(\storth(M, q)\) we do not need the unit of \(K\). Hence we define \(\storth^{(s)}(M, q) = \storth(M^{(s)}, q)\) by the same generators and relations, but with parameters in \(K^{(s)}\) and \(M_0^{(s)}\). If \(s, s' \in K\), then there are homomorphisms \begin{align*} K^{(ss')} \to K^{(s)}, a^{(ss')} \mapsto (as')^{(s)},\\ M^{(ss')} \to M^{(s)}, m^{(ss')} \mapsto (ms')^{(s)}, \end{align*} and the obvious homomorphism \(\storth^{(ss')}(M, q) \to \storth^{(s)}(M, q)\) of their Steinberg groups. Fix a multiplicative subset \(S \subseteq K\). The formal projective limit \(K^{(\infty, S)} = \varprojlim^{\Pro}_{s \in S} K^{(s)}\) is actually a commutative non-unital pro-\(K\)-algebra, \(M^{(\infty, S)} = \varprojlim^{\Pro}_{s \in S} M^{(s)}\) is a pro-\(K\)-module. Similarly, the Steinberg pro-group \(\storth^{(\infty, S)}(M, q) = \varprojlim^{\Pro}_{s \in S} \storth^{(s)}(M, q)\) is indeed a pro-group. If \(S = K \setminus \mathfrak p\) for a prime ideal \(\mathfrak p\), then we write \(K^{(\infty, \mathfrak p)}\), \(M^{(\infty, \mathfrak p)}\), and \(\storth^{(\infty, \mathfrak p)}(M, q)\) instead of \(K^{(\infty, S)}\), \(M^{(\infty, S)}\), and \(\storth^{(\infty, S)}(M, q)\). Similarly, if \(S = \{1, f, f^2, \ldots\}\) for \(f \in K\), then we write \(f\) instead of \(S\) in the indices. Finally, the constructions \(K^{(\infty, S)}\) and \(\storth^{(\infty, S)}(M, q)\) are contravariant functors on \(S\), in the case \(S = \{1\}\) we get \(K\), \(M\), and \(\storth(M, q)\) up to canonical isomorphisms. 
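For instance, the transition map \(K^{(ss')} \to K^{(s)}\) above is indeed multiplicative:
\[
a^{(ss')}\, b^{(ss')} = (ab\, ss')^{(ss')} \mapsto (ab\, s\, s'^2)^{(s)} = (as')^{(s)}\, (bs')^{(s)},
\]
and it is clearly additive and \(K\)-linear, so the induced homomorphism of Steinberg groups respects the defining relations.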
There are well-defined morphisms \(x_{ij} \colon K^{(\infty, S)} \to \storth^{(\infty, S)}(M, q)\) and \(x_j \colon M_0^{(\infty, S)} \to \storth^{(\infty, S)}(M, q)\) of pro-groups; they generate the Steinberg pro-group in the categorical sense by \cite[lemma 5]{UnitK2}. In order to apply the main results from \cite{UnitK2}, we need a lemma. \begin{lemma}\label{quasi-finite} Let \((R, \Delta)\) be the odd form \(K\)-algebra constructed from \((M, q)\) as in \cite{StabVor}. Then \(R\) is locally finite, i.e. any finite subset of \(R\) is contained in a finite subalgebra. The construction of \((R, \Delta)\) commutes with localizations. \end{lemma} \begin{proof} Here we use that \(M\) is of finite presentation. Since \(M\) is of finite type, the \(K\)-algebra \(\End_K(M)\) is locally finite as a factor-algebra of the subalgebra \(\{x \in \mat(n, K) \mid xN \leq N\}\) of \(\mat(n, K)\), where \(0 \to N \to K^n \to M \to 0\) is a short exact sequence. Then \(R\) is also locally finite; by definition it is the algebra \[\{(x^\op, y) \in \End_K(M)^\op \times \End_K(M) \mid \langle xm, m' \rangle = \langle m, ym' \rangle \text{ for all } m, m' \in M\}.\] Now recall that \(M\) is of finite presentation. This implies that the construction \(\End_K(M)\) commutes with localizations, i.e. the map \(S^{-1} \End_K(M) \to \End_{S^{-1} K}(S^{-1} M)\) is an isomorphism for all multiplicative subsets \(S \subseteq K\). Hence the construction of \(R\) also commutes with localizations. Finally, \[\Delta = \{(x^\op, y; z^\op, w) \in R \times R \mid xy + z + w = 0, q(ym) + \langle m, wm \rangle = 0 \text{ for all } m \in M\}.\] Its localization with respect to \(S \subseteq K\) is \[\textstyle S^{-1} \Delta = \{(\frac rs, \frac{r'}{s^2}) \mid (r, r') \in \Delta\} \subseteq S^{-1} R \times S^{-1} R\] by definition from \cite{UnitK2}, so the construction of \(\Delta\) also commutes with localizations.
Note that in the definition of \(\Delta\) it suffices to take \(m\) from a generating set of \(M\). \end{proof} Now take a multiplicative subset \(S \subseteq K\). Each \(\frac b{s^n} \in S^{-1} K\) gives endomorphisms of pro-groups \(K^{(\infty, S)}\) and \(M^{(\infty, S)}\) by \(a^{(s^n s')} \mapsto (ab)^{(s')}\) and \(m^{(s^n s')} \mapsto (mb)^{(s')}\). Similarly, each \(\frac{m'}{s^n} \in S^{-1} M\) gives the morphisms \(K^{(\infty, S)} \to M^{(\infty, S)}, a^{(s^n s')} \mapsto (m' a)^{(s')}\) and \(M^{(\infty, S)} \to K^{(\infty, S)}, m^{(s^n s')} \mapsto \langle m, m' \rangle^{(s')}\). These morphisms are denoted by the terms \(a^{(\infty)} \frac b{s^n} = \frac b{s^n} a^{(\infty)}\), \(m^{(\infty)} \frac b{s^n}\), \(\frac{m'}{s^n} a^{(\infty)}\), and \(\bigl\langle m^{(\infty)}, \frac{m'}{s^n} \bigr\rangle = \bigl\langle \frac{m'}{s^n}, m^{(\infty)} \bigr\rangle\) with the free variables \(a^{(\infty)}\) and \(m^{(\infty)}\). By \cite[beginning of section 10]{UnitK2}, the local Steinberg group \(\storth(S^{-1} M, q)\) acts on the corresponding Steinberg pro-group \(\storth^{(\infty, S)}(M, q)\) by automorphisms.
The action is given by the obvious formulas on generators when there is a corresponding Steinberg relation: \begin{itemize} \item \(\up{x_{ij}(a / s^n)}{x_{kl}\bigl(b^{(\infty)}\bigr)} = x_{kl}\bigl(b^{(\infty)}\bigr)\) for \(i \neq l \neq -j \neq -k \neq i\); \item \(\up{x_{ij}(a / s^n)}{x_{j, -i}\bigl(b^{(\infty)}\bigr)} = x_{j, -i}\bigl(b^{(\infty)}\bigr)\); \item \(\up{x_{ij}(a / s^n)}{x_{jk}\bigl(b^{(\infty)}\bigr)} = x_{ik}\bigl(\frac a{s^n} b^{(\infty)}\bigr)\, x_{jk}\bigl(b^{(\infty)}\bigr)\) for \(i \neq \pm k\); \item \(\up{x_{ij}(a / s^n)}{x_k\bigl(m^{(\infty)}\bigr)} = x_k\bigl(m^{(\infty)}\bigr)\) for \(i \neq k \neq -j\); \item \(\up{x_{ij}(a / s^n)}{x_i\bigl(m^{(\infty)}\bigr)} = x_{-i, j}\bigl(q\bigl(m^{(\infty)}\bigr)\, \frac a{s^n}\bigr)\, x_j\bigl(-m^{(\infty)} \frac a{s^n}\bigr)\, x_i\bigl(m^{(\infty)}\bigr)\); \item \(\up{x_i(m / s^n)}{x_j\bigl({m'}^{(\infty)}\bigr)} = x_{-i, j}\bigl(-\bigl\langle \frac m{s^n}, {m'}^{(\infty)} \bigr\rangle\bigr)\, x_j\bigl({m'}^{(\infty)}\bigr)\) for \(i \neq \pm j\); \item \(\up{x_i(m / s^n)}{x_{jk}\bigl(a^{(\infty)}\bigr)} = x_{jk}\bigl(a^{(\infty)}\bigr)\) for \(j \neq i \neq -k\); \item \(\up{x_i(m / s^n)}{x_{ij}\bigl(a^{(\infty)}\bigr)} = x_{-i, j}\bigl(-q\bigl(\frac m{s^n}\bigr)\, a^{(\infty)}\bigr)\, x_j\bigl(\frac m{s^n} a^{(\infty)}\bigr)\, x_{ij}\bigl(a^{(\infty)}\bigr)\). \end{itemize} Note that it is not a morphism \(\storth(S^{-1} M, q) \times \storth^{(\infty, S)}(M, q) \to \storth^{(\infty, S)}(M, q)\) of pro-sets, but a homomorphism \(\storth(S^{-1} M, q) \to \Aut_{\Pro(\Group)}\bigl(\storth^{(\infty, S)}(M, q)\bigr)\) of abstract groups. This action is extranatural on \(S\). 
More precisely, if \(S \subseteq S'\) are two multiplicative subsets, \(g \in \storth(S^{-1} M, q)\) and \(g'\) is its image in \(\storth({S'}^{-1} M, q)\), then the following diagram with canonical vertical arrows commutes: \[\xymatrix@R=30pt@C=120pt@!0{ \storth^{(\infty, S')}(M, q) \ar[r]^{\up {g'}{(-)}} \ar[d] & \storth^{(\infty, S')}(M, q) \ar[d]\\ \storth^{(\infty, S)}(M, q) \ar[r]^{\up g{(-)}} & \storth^{(\infty, S)}(M, q). }\] By lemma \ref{quasi-finite} and \cite[theorem 2]{UnitK2}, for any maximal ideal \(\mathfrak m\) of \(K\) the action of \(\storth(M_{\mathfrak m}, q)\) on \(\storth^{(\infty, \mathfrak m)}(M, q)\) extends to an action of \(\orth(M_{\mathfrak m}, q)\). Also there is an action of \(\orth(M, q)\) on \(\storth(M, q)\) such that \(\stmap \colon \storth(M, q) \to \orth(M, q)\) is a crossed module, see \cite[theorem 3]{UnitK2}. This action is also extranatural: if \(g \in \orth(M, q)\), then the induced automorphisms of \(\storth(M, q)\) and \(\storth^{(\infty, \mathfrak m)}(M, q)\) give a commutative square as above. \section{Another presentation} Recall the following well-known result. \begin{lemma}\label{local-isotropy} Suppose that \(K\) is local. Then the group \(\orth(M, q)\) acts transitively on the set of members of hyperbolic pairs and on the set of pairs \((u, v)\), where \(u, v\) are members of orthogonal hyperbolic pairs. \end{lemma} \begin{proof} Actually, the orthogonal group acts transitively on the set of hyperbolic pairs and on the set of couples of orthogonal hyperbolic pairs. This is Witt's cancellation theorem, see \cite[corollary 8.3]{ASR} for a proof in our generality. \end{proof} Let \(S \subseteq K\) be a multiplicative subset and \(u \in S^{-1} M\) be a member of a hyperbolic pair. There is a well-defined split epimorphism \(\langle u, v^{(\infty)} \rangle \colon M^{(\infty, S)} \to K^{(\infty, S)}\) of pro-groups, i.e. its kernel \(M_u^{(\infty, S)}\) has a direct complement isomorphic to \(K^{(\infty, S)}\).
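The splitting can be written down explicitly: if \(w \in S^{-1} M\) is a hyperbolic partner of \(u\) (so \(q(w) = 0\) and, with the usual normalization, \(\langle u, w \rangle = 1\)), then
\[
m^{(\infty)} = \bigl(m^{(\infty)} - w\, \langle u, m^{(\infty)} \rangle\bigr) + w\, \langle u, m^{(\infty)} \rangle
\]
decomposes \(M^{(\infty, S)}\) as \(M_u^{(\infty, S)} \oplus w K^{(\infty, S)}\), the first summand being killed by \(\langle u, v^{(\infty)} \rangle\).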
If \(g \in \orth(S^{-1} M, q)\), then it naturally acts on \(M^{(\infty, S)}\) and maps the direct summand \(M_u^{(\infty, S)}\) to \(M_{gu}^{(\infty, S)}\). Our goal is to construct morphisms \(X(u, v^{(\infty)}) \colon M_u^{(\infty, S)} \to \storth^{(\infty, S)}(M, q)\) of pro-groups satisfying various natural conditions. \begin{lemma}\label{esd-local} Let \(\mathfrak m\) be a maximal ideal of \(K\). Then for all members \(u\) of hyperbolic pairs in \(M_{\mathfrak m}\) there are unique morphisms \(X(u, v^{(\infty)}) \colon M_u^{(\infty, \mathfrak m)} \to \storth^{(\infty, \mathfrak m)}(M, q)\) of pro-groups such that \begin{enumerate} \item \(\up g{X(u, v^{(\infty)})} = X(gu, gv^{(\infty)})\) for \(g \in \orth(M_{\mathfrak m}, q)\); \item \(X(u, u a^{(\infty)}) = 1\); \item \(X(ua, v^{(\infty)}) = X(u, v^{(\infty)} a)\) for \(a \in K_{\mathfrak m}^*\); \item \(X(u, v a^{(\infty)}) = X(v, -u a^{(\infty)})\) if \(u, v\) are members of orthogonal hyperbolic pairs; \item \(X(e_i, e_j a^{(\infty)}) = x_{i, -j}(a^{(\infty)})\) for \(i \neq \pm j\); \item \(X(e_i, m^{(\infty)}) = x_{-i}(-m^{(\infty)})\) for \(m^{(\infty)}\) with the domain \(M_0^{(\infty, \mathfrak m)}\). \end{enumerate} \end{lemma} \begin{proof} Note that \(M_{e_\ell}^{(\infty, \mathfrak m)} = M_0^{(\infty, \mathfrak m)} \oplus \bigoplus_{i \neq -\ell} e_i K^{(\infty, \mathfrak m)}\). Let \[X(e_\ell, v^{(\infty)}) = x_{-\ell}(v_0^{(\infty)})\, \prod_{0 < i < \ell} \bigl(x_{\ell, -i}(v^{(\infty)}_i)\, x_{\ell i}(v^{(\infty)}_{-i}) \bigr) \colon M_{e_\ell}^{(\infty, \mathfrak m)} \to \storth^{(\infty, \mathfrak m)}(M, q),\] where the variables \(v_0^{(\infty)}\) and \(v_i^{(\infty)}\) are the components of \(v^{(\infty)}\) in the decomposition of \(M_{e_\ell}^{(\infty, \mathfrak m)}\). This is the only possible choice satisfying the conditions \((2)\), \((5)\), and \((6)\). 
The parabolic subgroup \(P = \{g \in \orth(M_{\mathfrak m}, q) \mid ge_\ell \in e_\ell K_{\mathfrak m}\}\) is clearly generated by the diagonal subgroup \[\diag(M_{\mathfrak m}, q) = \{g \in \orth(M_{\mathfrak m}, q) \mid ge_i \in e_i K^*_{\mathfrak m} \text{ for all } 0 < |i| \leq \ell\},\] the unipotent radical \[U = \langle t_{i\ell}(K_{\mathfrak m}), t_\ell(M_{0, \mathfrak m}) \mid 0 < |i| < \ell \rangle,\] and the smaller orthogonal group \[\textstyle\orth(M_{0, \mathfrak m} \oplus \bigoplus_{0 < |i| < \ell} e_i K_{\mathfrak m}, q)\] trivially acting on \(e_{-\ell}\) and \(e_\ell\). By \cite[theorem 5]{OddPetrov}, this smaller orthogonal group is generated by its elementary orthogonal transvections, its intersection with the diagonal subgroup, and the orthogonal group \[O = \orth(M_{0, \mathfrak m} \oplus e_{-1} K_{\mathfrak m} \oplus e_1 K_{\mathfrak m}, q)\] trivially acting on \(e_i\) for \(|i| > 1\). Now we show that \[\textstyle \up g{X(e_\ell, v^{(\infty)})} = X(e_\ell, g v^{(\infty)} \frac{ge_\ell}{e_\ell})\] for every such generator \(g \in P\), where \(\frac{ge_\ell}{e_\ell}\) is the element of \(K_{\mathfrak m}^*\) such that \(g e_\ell = e_\ell \frac{ge_\ell}{e_\ell}\). Indeed, the diagonal subgroup acts by roots on both sides (see \cite[formulas (Ad1)--(Ad6)]{UnitK2}), for \(O\) this follows from the same argument using \cite[proposition 2]{UnitK2} to eliminate the hyperbolic pair \((e_{-1}, e_1)\) and enlarge \(M_0\), and for the elementary transvections this may be checked directly by the definitions. Let \(u \in M_{\mathfrak m}\) be a member of a hyperbolic pair. By lemma \ref{local-isotropy}, there is \(g \in \orth(M_{\mathfrak m}, q)\) such that \(u = ge_\ell\). Let \(X(u, v^{(\infty)}) = \up g{X(e_\ell, g^{-1} v^{(\infty)})}\); this morphism is independent of \(g\) and satisfies \((1)\) and \((2)\). To prove \((3)\), note that there is \(h \in \diag(M_{\mathfrak m}, q)\) such that \(h e_\ell = e_\ell a\) and \(h\) lies in the maximal torus.
Hence \[\textstyle X(e_\ell a, v^{(\infty)}) = \up h{X(e_\ell, h^{-1} v^{(\infty)})} = X(e_\ell, v^{(\infty)} \frac{he_\ell}{e_\ell}) = X(e_\ell, v^{(\infty)} a),\] and \((3)\) for other vectors \(u\) follows from the definition of \(X(u, v^{(\infty)})\). The properties \((5)\) and \((6)\) follow by applying various permutation matrices from \(\orth(M_{\mathfrak m}, q)\) to \(x_{\ell, \ell - 1}(a^{(\infty)})\) and \(x_\ell(m^{(\infty)})\). Finally, in order to prove \((4)\) we again use lemma \ref{local-isotropy}. Without loss of generality, \(u = e_\ell\) and \(v = e_{\ell - 1}\). Then \[X(e_\ell, e_{\ell - 1} a^{(\infty)}) = x_{\ell, 1 - \ell}(a^{(\infty)}) = x_{\ell - 1, -\ell}(-a^{(\infty)}) = X(e_{\ell - 1}, -e_\ell a^{(\infty)}).\qedhere\] \end{proof} In order to globalize such transvections, we need a lemma. \begin{lemma}\label{module-costalks} Let \(S \subseteq K\) be a multiplicative subset and \(N\) be an arbitrary \(K\)-module. Consider the family \(\pi_{\mathfrak m} \colon N^{(\infty, \mathfrak m)} \to N^{(\infty, S)}\) of canonical pro-group morphisms, where \(\mathfrak m\) runs over all maximal ideals of \(K\) disjoint with \(S\). Then this family generates \(N^{(\infty, S)}\) in the following sense: for every pro-group \(G^{(\infty)} \in \Pro(\Group)\) the map \[\Pro(\Group)\bigl(N^{(\infty, S)}, G^{(\infty)}\bigr) \to \prod_{\mathfrak m} \Pro(\Group)\bigl(N^{(\infty, \mathfrak m)}, G^{(\infty)}\bigr)\] is injective. \end{lemma} \begin{proof} It suffices to consider an ordinary group \(G\). Take group homomorphisms \(f, g \colon N^{(s)} \to G\) for some \(s \in S\) such that \(f \circ \pi_{\mathfrak m} = g \circ \pi_{\mathfrak m} \colon N^{(\infty, \mathfrak m)} \to G\) for all maximal ideals \(\mathfrak m\) disjoint with \(S\). The set \[\mathfrak a = \{a \in K \mid f|_{N^{(s)} a} = g|_{N^{(s)} a}\}\] is an ideal of \(K\). By assumption it is not contained in any such \(\mathfrak m\), so \(S^{-1} \mathfrak a = S^{-1} K\).
In other words, \(s' \in \mathfrak a\) for some \(s' \in S\). This means that \(f|_{N^{(ss')}} = g|_{N^{(ss')}}\). \end{proof} \begin{theorem}\label{esd-identities} Let \(K\) be a unital commutative ring, \(M\) be a finitely presented \(K\)-module with a quadratic form \(q\) and pairwise orthogonal hyperbolic pairs \((e_{-1}, e_1), \ldots, (e_{-\ell}, e_\ell)\) for \(\ell \geq 3\). Let also \(S \subseteq K\) be a multiplicative subset and \(u \in S^{-1} M\) be a member of a hyperbolic pair. Then there exists at most one morphism \(X(u, v^{(\infty)}) \colon M^{(\infty, S)}_u \to \storth^{(\infty, S)}(M, q)\) of pro-groups such that for any maximal ideal \(\mathfrak m\) of \(K\) disjoint with \(S\) the following diagram with canonical vertical arrows is commutative: \[\xymatrix@R=30pt@C=120pt@!0{ M_u^{(\infty, \mathfrak m)} \ar[r]^(0.4){X(u, v^{(\infty)})} \ar[d] & \storth^{(\infty, \mathfrak m)}(M, q) \ar[d]\\ M_u^{(\infty, S)} \ar[r]^(0.4){X(u, v^{(\infty)})} & \storth^{(\infty, S)}(M, q). }\] These morphisms also satisfy the following identities: \begin{enumerate} \item \(\up g{X(u, v^{(\infty)})} = X(gu, gv^{(\infty)})\) for \(g \in \storth(S^{-1} M, q)\) if the left hand side is defined; \item \(X(u, u a^{(\infty)}) = 1\) if the left hand side is defined; \item \(X(ua, v^{(\infty)}) = X(u, v^{(\infty)} a)\) for \(a \in S^{-1} K^*\) if both sides are defined; \item \(X(u, va^{(\infty)}) = X(v, -ua^{(\infty)})\) if \(u, v \in S^{-1} M\) are members of orthogonal hyperbolic pairs; \item \(X(e_i, e_j a^{(\infty)}) = x_{i, -j}(a^{(\infty)})\) for \(i \neq \pm j\); \item \(X(e_i, m^{(\infty)}) = x_{-i}(-m^{(\infty)})\). \end{enumerate} \end{theorem} \begin{proof} By lemma \ref{module-costalks} applied to \(M\) the pro-group \(M_u^{(\infty, S)}\) is generated by the canonical morphisms \(M_u^{(\infty, \mathfrak m)} \to M_u^{(\infty, S)}\) for all maximal ideals \(\mathfrak m\) of \(K\) disjoint with \(S\). 
Hence the morphisms \(X(u, v^{(\infty)})\) are unique whenever they exist. Clearly, such a morphism exists and satisfies \((5)\), \((6)\) for \(u = e_i\). The set of hyperbolic vectors \(u \in S^{-1} M\) such that \(X(u, v^{(\infty)})\) exists is closed under the action of \(\storth(S^{-1} M, q)\) by extranaturality of the action. All other properties follow from lemmas \ref{esd-local} and \ref{module-costalks}. \end{proof} \begin{theorem}\label{esd-existence} Let \(K\) be a unital commutative ring, \(M\) be a finitely presented \(K\)-module with a quadratic form \(q\) and pairwise orthogonal hyperbolic pairs \((e_{-1}, e_1), \ldots, (e_{-\ell}, e_\ell)\) for \(\ell \geq 3\). Let also \(S \subseteq K\) be a multiplicative subset and \(u \in S^{-1} M\) be a hyperbolic vector. The morphism \(X(u, v^{(\infty)}) \colon M_u^{(\infty, S)} \to \storth^{(\infty, S)}(M, q)\) from theorem \ref{esd-identities} exists in the following cases: \begin{enumerate} \item \(S = K \setminus \mathfrak m\) for a maximal ideal \(\mathfrak m\); \item \(u\) lies in the orbit of \(e_1\) under the action of \(\eorth(S^{-1} M, q)\); \item \(S = \{1\}\) and \(u\) lies in the orbit of \(e_1\) under the action of \(\orth(M, q)\). \end{enumerate} In the last case \(\up g{X(u, v)} = X(gu, gv) \in \storth(M, q)\) for \(g \in \orth(M, q)\), \(u \in \orth(M, q)\, e_1\), and \(u \perp v\). Also \(X(u, v) \in \storth(M, q)\) maps to \(T(u, v) \in \orth(M, q)\). \end{theorem} \begin{proof} The first case is lemma \ref{esd-local}, the second is theorem \ref{esd-identities}. If \(S = \{1\}\) and \(u = ge_1\) for some \(g \in \orth(M, q)\), then let \(X(u, v) = \up g{X(e_1, g^{-1} v)}\) for all vectors \(v \perp u\). This morphism satisfies the definition from theorem \ref{esd-identities} by extranaturality of the action. The remaining claims follow from the uniqueness of \(X(u, v)\) and the definitions.
\end{proof} Finally, we prove that the orthogonal Steinberg group admits ``another presentation'' in the sense of van der Kallen. Note that the third property of \(X(u, v)\) from theorem \ref{esd-identities} is not needed. \begin{theorem}\label{another-presentation} Let \(K\) be a unital commutative ring, \(M\) be a finitely presented \(K\)-module with a quadratic form \(q\) and pairwise orthogonal hyperbolic pairs \((e_{-1}, e_1), \ldots, (e_{-\ell}, e_\ell)\) for \(\ell \geq 3\). Let \(\storth^*(M, q)\) be the abstract group generated by symbols \(X^*(u, v)\) for vectors \(u, v \in M\) such that \(u \in \orth(M, q)\, e_1\) and \(v \perp u\). The relations are the following: \begin{itemize} \item \(X^*(u, v + v') = X^*(u, v)\, X^*(u, v')\); \item \(X^*(u, v)\, X^*(u', v')\, X^*(u, v)^{-1} = X^*(T(u, v)\, u', T(u, v)\, v')\); \item \(X^*(u, va) = X^*(v, -ua)\) if \((u, v)\) lies in the orbit of \((e_1, e_2)\) under the action of \(\orth(M, q)\); \item \(X^*(u, ua) = 1\). \end{itemize} Then the canonical morphism \(\storth(M, q) \to \storth^*(M, q), x_{ij}(a) \mapsto X^*(e_i, e_{-j} a), x_j(m) \mapsto X^*(e_{-j}, -m)\) is an isomorphism, the preimage of \(X^*(u, v)\) is the element \(X(u, v)\) from theorem \ref{esd-existence}. Also we may use everywhere \(\eorth(M, q)\) instead of \(\orth(M, q)\). \end{theorem} \begin{proof} Let \(F \colon \storth(M, q) \to \storth^*(M, q)\) be the homomorphism from the statement. By theorems \ref{esd-identities} and \ref{esd-existence}, there is a homomorphism \(G \colon \storth^*(M, q) \to \storth(M, q), X^*(u, v) \mapsto X(u, v)\). Clearly, \(G \circ F = \id\). The group \(\orth(M, q)\) (or \(\eorth(M, q)\)) acts on \(\storth^*(M, q)\) by \(\up g{X^*(u, v)} = X^*(gu, gv)\). Clearly, \(\storth^*(M, q)\) is generated by the image of \(F\) and its conjugates under this action (here we need that \(u\) lies in the orbit of \(e_1\)). Since \(\storth(M, q)\) is perfect, it follows that \(\storth^*(M, q)\) is perfect. 
The canonical homomorphism \(\storth^*(M, q) \to \orth(M, q), X^*(u, v) \mapsto T(u, v)\) has central kernel by the second relation; hence the kernel of \(G\) is also central. Now \(G \colon \storth^*(M, q) \to \storth(M, q)\) is a split perfect central extension. It is well known that such a homomorphism is necessarily an isomorphism, with its splitting \(F\) as the inverse. \end{proof} It follows that in the last theorem the third axiom holds for all \(u, v\) such that both of them lie in the orbit of \(e_1\) and they are members of orthogonal hyperbolic pairs. \bibliographystyle{plain}
\section{Introduction} A quantitative description of the nature of strongly bound systems is of great importance for an improved understanding of the fundamental structure and origin of matter. One of the most promising ways to access information on the dynamical structure of the nucleon is through exclusive reactions at high momentum transfer, in which the deep interior of the nucleon is probed with a highly-energetic photon or electron probe and all final-state particles are detected~\cite{Hand63,Kugler71}. Even though the scattering probability of such reactions is extremely small, it has become clear that such reactions offer a promising route to imaging of the elusive 3-D nucleon substructure. Indeed, there have been increasingly sophisticated theoretical efforts to exploit the richness of exclusive reactions at short resolution scales~\cite{Goeke01}. Exclusive measurements with high-energy electron and photon beams form the core of the new paradigm within sub-atomic science termed ``nuclear femtography''. In both photon and electron scattering experiments, the scale of the associated imaging that can be performed is set by the invariant squared four-momentum transferred to the proton target, $-t$, and the total centre-of-mass energy squared, $s$. Measurements over a wide range of $s$ and $-t$ with these probes allow for the disentangling of four functions representing the vector, axial, tensor, and pseudo-scalar response of the nucleon. Simultaneous experimental access to all of these functions is most readily achieved with a spin polarized nuclear or nucleon target. Much progress in imaging nucleon structure can be made with electron-scattering reactions, yet experiments utilizing high-energy photons play a unique complementary role. Measurements involving the small scattering probabilities associated with exclusive reactions demand high-intensity photon beams.
Further, our basic understanding will be much strengthened by imaging longitudinally-polarized and transversely-polarized nucleons. It is on this combination that the proposed concept is primarily focused: with a newly-developed compact photon source (CPS) \cite{SABW2014, BWGN2015} and a dynamically-nuclear polarized target system, for example in Hall C at Jefferson Lab, a gain of a factor of 30 in the figure-of-merit (as defined by the photon intensity and the average target polarization over the experiment) can be achieved. The net gain makes it possible to measure the very small scattering cross sections associated with a new suite of high-energy photon scattering experiments to image and understand the dynamical nucleon structure~\cite{CPSWS}. The concept of a CPS also enables other science possibilities, like enriching the hadron spectroscopy program in Hall D at Jefferson Lab and at other facilities. Hall D is a newly-built experimental hall, with a large acceptance spectrometer and a tagged, linearly polarized photon beam of low to moderate intensity. The addition of a CPS to this hall opens the door to increased sensitivity to rare processes through a higher intensity photon beam or the production of secondary beams of other particles, such as a $K_L$ beam~\cite{Klong}. Although there are fewer physical limitations on the size of the CPS in Hall D, allowing for additional flexibility in the optimization of the shielding, most of the other requirements are similar to CPS running in the other halls. The radiation shielding requirements are similar in order to ensure safe operation and to prevent radiation damage to the tagger detectors and their associated electronics located upstream of the planned CPS location. For operation of the proposed $K_L$ facility, the electron beam has been proposed to have a power of up to 60~kW, running at an energy of 12 GeV with a 64 ns beam bunch spacing.
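As a cross-check of these numbers (a back-of-the-envelope sketch, not taken from the proposal; the elementary charge is the only extra input), the quoted power, energy, and bunch spacing fix the average beam current and the bunch charge:

```python
# Proposed Hall D K_L beam parameters quoted above
beam_power_w = 60e3        # 60 kW electron beam power
beam_energy_ev = 12e9      # 12 GeV beam energy
bunch_spacing_s = 64e-9    # 64 ns bunch spacing
e_charge_c = 1.602e-19     # elementary charge

avg_current_a = beam_power_w / beam_energy_ev            # power / energy per electron
charge_per_bunch_c = avg_current_a * bunch_spacing_s
electrons_per_bunch = charge_per_bunch_c / e_charge_c

print(f"average current = {avg_current_a*1e6:.1f} uA")
print(f"bunch charge    = {charge_per_bunch_c*1e12:.2f} pC "
      f"(~{electrons_per_bunch:.2g} electrons)")
```

This gives a 5~$\mu$A average current and about 0.32~pC (roughly $2\times10^6$ electrons) per bunch.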
Initial estimates suggest that the default CPS configuration can handle the power deposition, and sufficient cooling water is available, as the electron dump for the nominal Hall D photon beam is designed to absorb at least 60 kW of power. A major difference, as compared to Hall C, is that the Hall D CPS is located in a separate section of the hall from the target and main spectrometer, and is separated by $\sim80$~m of pipe under vacuum surrounded by soil. The size of the photon beam generated by the CPS is dominated by multiple scattering in the radiator, and has been estimated to be 2~cm after traveling 80~m. This is well within the size of the 15~cm-diameter beam pipe, and the 6~cm-diameter Be $K_{L}$ target. Finally, if the CPS radiator is retracted, then the current Hall D photon beam can be used without moving the CPS or making any other modification to the beamline. Taking all of these factors into account, the CPS design is well matched for experiments in Hall D requiring a high-intensity untagged photon beam. \section{Science Opportunities with CPS} \label{sec:science-gain} Investigating the three-dimensional structure of the nucleon has historically been an active and productive field of research, especially so during the last two decades since the introduction of the generalized parton distribution (GPD) formalism. Research focused on this three-dimensional structure continues to be central to the hadron physics program at facilities like Jefferson Lab. The GPD formalism provides a unified description of many important reactions including elastic electron scattering, deep-inelastic scattering (DIS), deeply-virtual and timelike Compton scattering (DVCS and TCS), deeply-virtual meson production (DVMP), and wide-angle real Compton scattering (RCS) and meson production.
All of these can be described by a single set of four functions $H$, $\tilde{H}$, $E$ and $\tilde{E}$, which need to be modeled and constrained with parameters extracted from experimental data~\cite{Diehl03,Burkardt02,Goeke01,Belitsky05,Ji97,Radyushkin96,Mueller94,Collins97,Collins99}. The CPS science program as proposed for Jefferson Lab enables studies of the three-dimensional structure of the nucleon and features one fully approved and two conditionally approved experiments~\cite{Klong,E12-17-008,C12-18-005}. Jefferson Lab Experiment E12-17-008~\cite{E12-17-008} will measure polarization observables in real Compton scattering (RCS). This is a fundamental process, yet its mechanism in the center-of-mass energy regime of $\sqrt{s}$ = 5-10 GeV remains poorly understood. Existing measurements cannot be described by perturbative calculations involving the scattering of three valence quarks. Rather, the dominant mechanism is the so-called ``handbag model'', where the photon scatters from a single active quark and the coupling of this struck quark to the spectator system is described by GPDs~\cite{Rad98,Diehl99}. It is this latter conceptual mechanism that lies at the root of the worldwide efforts toward 3D (spatial) imaging of the proton's quark-gluon substructure, as the GPDs contain information about the transverse spatial distribution of quarks and their longitudinal momenta inside the proton. The RCS experimental observables provide several constraints for GPDs which are complementary to other exclusive reactions due to an $e_a^2$ factor and an additional $1/x$ weighting in the corresponding GPD integrals. For example, the elastic form factor $F_1(t)$ is related to the RCS vector form factor $R_V(t)$, both of which are based on the same underlying GPD $H(x,0,t)$.
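In the handbag picture these statements take an explicit form. Schematically (summing over quark flavors $a$ with charges $e_a$; see \cite{Rad98,Diehl99} for the precise conventions),
\[
F_1(t) = \sum_a e_a \int_{-1}^{1} dx \, H^a(x,0,t), \qquad
R_V(t) = \sum_a e_a^2 \int_{-1}^{1} \frac{dx}{x} \, H^a(x,0,t),
\]
which makes the extra factor of $e_a$ and the $1/x$ weighting explicit; an analogous integral of $\tilde H$ gives $R_A(t)$.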
Similarly, polarized observables in RCS uniquely provide high $-t$ constraints on $\tilde H(x,0,t)$ via extraction of the RCS axial form factor $R_A(t)$ in a kinematic regime where precise data on the nucleon axial form factor are not available~\cite{Kroll17,Kroll18}. A measurement of the spin asymmetry in RCS with the proton target longitudinally polarized can further disentangle the various reaction mechanism models. If consistent with the measurement of the spin transfer from the photon to the scattered proton, the asymmetry can be surprisingly large and stable with respect to the photon center-of-mass scattering angle. Investigations into the mechanisms behind RCS will provide crucial insight into the nature of exclusive reactions and proton structure and are ideally suited for the facilities provided by the Jefferson Lab 12-GeV upgrade~\cite{Dudek12,NPSWP14,DOE15,Mont17}. Jefferson Lab Experiment C12-18-005~\cite{C12-18-005} will probe 3D nucleon structure through timelike Compton scattering, where a real photon is scattered off a quark in the proton and a high-mass (virtual) photon is emitted, which then decays into a lepton pair~\cite{Berger02,Anikin18}. Using a transversely polarized proton target and a circularly polarized photon beam allows access to several independent observables, directly sensitive to the GPDs, and in particular the $E$ GPD, which is poorly constrained and of great interest due to its relation to the orbital angular momentum of the quarks~\cite{Kroll10,Kroll11,Gold12}. The experiment involves measurements of the unpolarized scattering probabilities or cross section, the cross section using a circularly polarized photon beam, and the cross section using transversely-polarized protons.
This will provide a first fundamental test of the universality of the GPDs, as the GPDs extracted from TCS should be comparable with those extracted from the analogous spacelike (electron) scattering process -- deeply virtual Compton scattering, a flagship program of the 12-GeV Jefferson Lab upgrade~\cite{Dudek12,NPSWP14,DOE15,Mont17}. A separate window on the nature of strongly bound systems is provided through the hadron spectrum. The spectrum allows study of the properties of QCD in its strong-coupling domain, which gives rise to the most striking feature of QCD: the confinement of quarks and gluons within hadrons (mesons and baryons). Experimental investigation of the baryon spectrum provides one obvious avenue to understand QCD in this region, since the location and properties of the excited states depend on the confining interaction and the relevant degrees of freedom of hadrons. Understanding the constituent degrees of freedom in hadrons requires identifying a spectrum of states and studying their relationships, both between states with the same quark content but different quantum numbers, and vice versa. The hadrons containing strange quarks are particularly interesting to study because they lie in a middle ground between the nearly massless up \& down quarks and the heavy charm \& bottom quarks, and while a rich spectrum of strange quark hadrons is predicted, comparatively little is known about them. Over the past two decades, meson photo- and electroproduction data of unprecedented quality and quantity have been measured at facilities worldwide, leading to a revolution in our understanding of baryons consisting of the lightest quarks, while the corresponding meson beam data are mostly outdated or non-existent~\cite{meson_beams}. For the study of strange quark hadrons, a kaon beam~\cite{KL2016_proc} has the advantage over photon or pion beams of having strange quarks in the initial state, which leads to enhanced production of these states.
A secondary $K_L$ beam provides a unique probe for such studies, and by using a primary high-intensity photon beam, a high-quality $K_L$ beam with low neutron background can be generated. In conjunction with a large acceptance spectrometer, this enables the measurement of cross sections and polarizations of a range of hyperon production reactions, allows identification of the quantum numbers of the observed states, and provides the opportunity to make a similar leap forward in our understanding of the strange hadron spectrum. The study of strange hadron spectroscopy using an intense $K_L$ beam is the topic of Jefferson Lab Experiment C12-19-001~\cite{Klong}. \section{Science Method} \label{sec:science-method} One of the traditional experimental techniques for producing a beam of high-energy photons is to allow an electron beam to strike a radiator, most commonly copper, producing a cone of bremsstrahlung photons which remain mixed with the electron beam (see Fig.~\ref{fig-photon-sources}a). The spread in the photon and outgoing electron beams is dominated by electron multiple scattering, and for electron beam energies of a few GeV is typically less than 1~mrad. Accompanying this mixed photon and electron beam are secondary particles produced in the electron-nuclei shower and characterized by a much larger angular distribution (the extent of these secondary cones is highlighted in the figure). For example, the cone of secondary particles that survive filtering through a heavy absorber material of one nuclear interaction length ($\approx$140-190~g/cm$^2$ or $\approx$15~cm) has an angular spread of 100-1000~mrad.
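The sub-mrad multiple-scattering spread quoted above can be checked with the standard Highland estimate; the beam energy (5~GeV) and radiator thickness (10\% of a radiation length) used below are illustrative assumptions, not parameters of a specific setup:

```python
import math

# Highland estimate of the multiple-scattering angle theta_0 for an
# electron traversing a thin radiator:
#   theta_0 = (13.6 MeV / p) * sqrt(x/X0) * (1 + 0.038 ln(x/X0))
# Assumed inputs: a 5 GeV beam and a radiator of 10% of a radiation length.

def highland_theta0_mrad(p_MeV=5000.0, x_over_X0=0.10):
    t = x_over_X0
    return 1e3 * (13.6 / p_MeV) * math.sqrt(t) * (1.0 + 0.038 * math.log(t))

print(highland_theta0_mrad())  # ~0.8 mrad, consistent with the quoted < 1 mrad
```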
Although this is the preferred technique for producing the largest flux of photons, drawbacks include the fact that the beam is a mix of both photons and electrons, that the photon beam energy is not a priori known, and that the method is accompanied by the potential for large radiation background dose due to the large spread of secondary particles produced. \begin{figure} \includegraphics[width=3.0in]{./FIGURE1.pdf} \caption{\label{fig-photon-sources} \it Different schemes to produce high-energy photon beams. Scheme a) is the traditional bremsstrahlung technique where a copper radiator is placed in an electron beam resulting in a mixed photon and electron beam. In scheme b) a deflection magnet and beam dump are used to peel off the electrons and produce a photon-only beam. Scheme c) is the new CPS technique, with a compact hermetic magnet-electron dump and a narrow pure photon beam.} \end{figure} An alternative technique for producing a photon beam involves the use of a radiator, a deflection magnet and a beam dump for the undeflected electrons, augmented for energy-tagged photon beams with a set of focal plane detectors covering a modest to large momentum acceptance (see Fig.~\ref{fig-photon-sources}b). A configuration like this requires significant space along the beam direction and heavy shielding around the magnet and the beam dump, which have large openings due to the large angular and energy spread of the electrons after interactions in the radiator. In addition, without tight collimation the traditional scheme leads to a large transverse size of the photon beam at the target due to divergence of the photon beam and the long path from the radiator to the target. This can be an issue as the beam spot size contributes to the angular and momentum reconstruction resolution of the resultant reaction products due to uncertainty in the transverse vertex position. 
The advantage of this method is that one has a pure photon beam, and if augmented with a set of focal-plane tagging detectors, the exact photon energies can be determined. A significant drawback is that in order to keep focal-plane detector singles rates at a manageable level (typically less than a few MHz), the flux of incident electrons must be modest ($\approx$100~nA) and, correspondingly, the photon flux is less than might otherwise be possible. The proposed CPS concept (see Fig.~\ref{fig-photon-sources}c) addresses the shortcomings of these two traditional widely-used experimental techniques. The concept takes advantage of the modest spread of the photon beam relative to the angular distribution of the secondary particles produced in the electron-nuclei shower. It does so by combining in a single shielded assembly all elements necessary for the production of the intense photon beam and ensures that the operational radiation dose rates around it are acceptable (see Ref.~\cite{CPS-Concept}). Much of this is achieved by keeping the overall dimensions of the apparatus limited, and by careful choice and placement of materials. The CPS conceptual design features a magnet, a central copper absorber to handle the power deposition, and tungsten powder and borated plastic to hermetically shield the induced radiation dose as close to the source as possible. The magnet acts as a dump for the electrons, with a cone of photons escaping through a small collimator. The size of the collimator can be chosen to be as narrow as the photon beam size, taking into account natural divergence plus the size of the electron beam raster. The concept of a combined magnet-dump allows us to dramatically reduce the magnet aperture and length, as well as the weight of the radiation shield, due to the compactness and hermeticity (with minimized openings) of the system, thus significantly reducing the radiation leakage.
This conceptual approach opens a practical way forward for a CPS, provided one can manage both the radiation environment in the magnet and the power deposition density in the copper absorber. \begin{figure} \includegraphics[width=3.0in]{./FIGURE2.pdf} \caption{\label{fig-cps-fom} \it The figure-of-merit (FOM) of photon beam experiments with dynamically nuclear polarized targets, defined as the logarithm of the effective photon beam intensity multiplied by the averaged target polarization squared, as a function of time. Note the large gain enabled by the CPS. The FOM values indicated for 1972, 1977, 1995, 2007 and 2008 are based on actual experiments at Daresbury, Bonn, Jefferson Lab and Mainz~\cite{Court1980,Crabb2997,Others}. The FOM values noted for 2000 and 2005 are based upon proposed setups at SLAC and Jefferson Lab, with the latter closest in concept to the CPS. We also add the projected FOM of approved future experiments at HiGS/Duke and Jefferson Lab.} \end{figure} Compared to the more traditional bremsstrahlung photon sources (Figs.~\ref{fig-photon-sources}a and \ref{fig-photon-sources}b and {\sl e.g.}~Refs.~\cite{Anderson1970,Tait1969}), the proposed solution offers several advantages, including an intense and narrow pure photon beam and much lower radiation levels, both prompt and post-operational from radio-activation of the beam line elements. The drawbacks are a somewhat reduced photon flux as compared to the scheme of Fig.~\ref{fig-photon-sources}(a), and not having the ability to directly measure the photon energy as in the scheme of Fig.~\ref{fig-photon-sources}(b). The primary gain of the CPS, and the reason for much of the initial motivation, is for experiments using dynamically nuclear polarized (DNP) targets, with an estimated gain in figure-of-merit of a factor of 30 (see Fig.~\ref{fig-cps-fom}).
Dynamic nuclear polarization is an effective technique to produce polarized protons, whereby a material containing a large fraction of protons is cooled to low temperatures, $<$1~K, and placed in a strong magnetic field, typically about 5~Tesla~\cite{Averett99,Pierce14}. The material is first doped, either chemically or through irradiation, to introduce free radicals (electrons). The low-temperature and high-field conditions cause the electrons to self-polarize, and their polarization is then transferred to the protons using microwave techniques. These conditions, however, impose a serious limitation: beams traversing the polarized target material will produce ionization energy losses that simultaneously heat and depolarize the target. They also produce other harmful free radicals which allow further pathways for proton polarization to decay. This limits the local beam intensities the polarized target material can handle. Conventional target cells have diameters much larger than the desirable beam spot size, and one must prevent rapid degradation of the target polarization caused by the beam dwelling at a single location on the target. The traditional solution for minimizing such localized polarization degradation is fast movement of the beam spot, which avoids overheating of the material and ensures that the depolarizing effects of the beam are spread uniformly over the target volume. A beam raster magnet, which moves the beam with a frequency of several Hz, was used in past experiments at Jefferson Lab~\cite{Averett99,Zhu01,Pierce14}. However, this does not work for very small collimation apertures, e.g.~a collimation cone of a few mm by a few mm, which limits the possible beam motion. The CPS solution for the beam-target raster thus includes a combination of target rotation around the horizontal axis and $\pm$10~mm vertical motion of the target ladder. Such a raster method effectively moves the motion complexity out of the high radiation area of the absorber.
The same effect can be achieved by vertical displacement of the beam spot, i.e.~by a small variation of the vertical incident angle of the electron beam at the radiator. With a $\pm$5~mrad vertical angle variation and 200~cm distance between the radiator and the target, the displacement of the beam spot is equal to $\pm$1~cm, about the size of the conventional target cells. Traditionally, such photon beam experiments have been performed using the scheme indicated in Fig.~\ref{fig-photon-sources}a. This limits the electron beam current to less than 100~nA to prevent rapid target polarization damage. With the CPS scheme, we anticipate use of an electron beam current of up to 2.7~$\mu$A to provide the photon flux for an equivalent heat load in the DNP target. Hence, we gain a factor of about 30. The history of the figure-of-merit of bremsstrahlung photon beam experiments with DNP targets is further illustrated in Fig.~\ref{fig-cps-fom}. \section{The Compact Photon Source - Description of Instrumentation} \label{sec:cps} The physics program described above requires a high-intensity and narrow polarized photon beam and a polarized target to access the exclusive photoproduction reactions in order to extract the relevant experimental observables. The CPS provides a compact solution with a photon flux of $1.5 \times 10^{12}$ equivalent photons/s. \subsection{Conceptual Design} \label{sec:concept} \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE3.pdf} \caption{\label{fig:CPS} \it The CPS cut-out side view. Deflected electrons strike a copper absorber, surrounded by a W-Cu insert inside the magnet yoke. The outer rectangular region in this view is the tungsten-powder shield.} \end{figure} The main elements of the CPS are shown in Fig.~\ref{fig:CPS}. Without loss of photon intensity, a channel (a collimator for the secondary radiation) around the photon beam can be as narrow as the photon beam size. 
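Several of the flux figures quoted above are simple to verify. The sketch below, with assumed values noted in the comments, reproduces the factor-of-$\sim$30 current ratio, the $\pm$1~cm raster displacement, and (using the standard equivalent-photon convention, with mean radiated energy fraction $1-e^{-x/X_0}$) the quoted $1.5\times 10^{12}$ equivalent photons/s:

```python
import math

E_CHARGE_C = 1.602e-19  # elementary charge [C]

def gain_factor(i_cps_uA=2.7, i_direct_nA=100.0):
    """Ratio of usable beam currents behind the quoted factor-of-~30 gain."""
    return (i_cps_uA * 1e-6) / (i_direct_nA * 1e-9)

def equivalent_photon_flux(i_uA=2.7, radiator_x_over_X0=0.10):
    """Equivalent photons/s: electron rate times the mean fraction of the
    beam energy radiated in the radiator, 1 - exp(-x/X0)."""
    electrons_per_s = (i_uA * 1e-6) / E_CHARGE_C
    return electrons_per_s * (1.0 - math.exp(-radiator_x_over_X0))

def raster_displacement_cm(angle_mrad=5.0, lever_arm_cm=200.0):
    """Beam-spot displacement at the target for a small incident-angle tilt."""
    return lever_arm_cm * angle_mrad * 1e-3

print(gain_factor())             # 27.0: "a factor of about 30"
print(equivalent_photon_flux())  # ~1.6e12, close to the quoted 1.5e12 /s
print(raster_displacement_cm())  # 1.0 cm for a 5 mrad tilt at 200 cm
```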
After passing through the radiator, the electron beam should be separated from the photon beam by means of deflection in a magnetic field. The length, aperture and field strength of the magnet are very different in the proposed source compared to the traditional tagging technique. In the traditional source the magnet is needed to direct the electrons to the dump. Because of the large momentum spread of electrons which have interacted in the radiator, the magnet aperture needs to be large and the dump entrance even larger: 13\% of the beam power is therefore lost before the beam dump, even with a 10\% momentum acceptance of the beam line. In contrast, in the proposed source the magnet acts as a dump for the electrons, with a cone of photons escaping through a small collimator. The dumping of the electron beam starts in the photon beam channel, so even a small magnetic deflection of the electron trajectory by just 1-3~mm is already sufficient to induce a shower. At the same time, such a deflection needs to be accomplished at a relatively short distance (much shorter than the size of the radiation shielding) after the beam passes through the radiator to keep the source compact. Indeed, in the proposed CPS magnet design the trajectory radius is about 10~m for 11~GeV electrons, the channel size is 0.3~cm, and the raster size is 0.2~cm, so the mean distance travelled by an electron in the magnetic field is around 17~cm, with a spread of around 12~cm (see the scheme in Fig.~\ref{fig:beam}). Therefore, a total field integral of 1000~kG-cm is adequate for our case, which requires a 50~cm long iron-dominated magnet.
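The deflection geometry described above can be sketched numerically; the 2~T average field used in the field-integral check is an assumed round number consistent with the tapered 3.2~T peak field:

```python
import math

# Electron-dump geometry using the numbers quoted in the text: an ~10 m
# bending radius for 11 GeV electrons, and a 1-3 mm deflection needed to
# pull the beam out of the 0.3 cm photon channel.

def field_for_radius_T(p_GeV=11.0, R_m=10.0):
    """B [T] = p [GeV/c] / (0.3 R [m]) for a singly charged particle."""
    return p_GeV / (0.3 * R_m)

def sagitta_mm(L_cm, R_m=10.0):
    """Small-angle transverse displacement s = L^2/(2R) after path length L."""
    L_m = L_cm * 1e-2
    return (L_m ** 2 / (2.0 * R_m)) * 1e3

def field_integral_kG_cm(B_avg_T, L_cm):
    """Field integral in the kG-cm units used in the text (1 T = 10 kG)."""
    return B_avg_T * 10.0 * L_cm

print(field_for_radius_T())             # ~3.7 T, near the 3.2 T pole-tip field
print(sagitta_mm(17.0))                 # ~1.4 mm after the mean 17 cm path
print(field_integral_kG_cm(2.0, 50.0))  # 1000 kG-cm for a 2 T average over 50 cm
```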
\begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE4.pdf} \caption{\label{fig:beam} \it The scheme of beam deflection in the magnetic field to the absorber/dump.} \end{figure} \subsection{Magnet} \label{sec:magnet} Normal conducting magnets for operation in high levels of radiation have been constructed at several hadron facilities, including the neutron spallation source at ORNL and the proton accelerator complex J-PARC~\cite{Tanaka11,Petrov16}. The magnet designed for the CPS has Permendur poles tapered in two dimensions, which allows for a strong magnetic field at the upstream end of the magnet (3.2~T), with the coils located 20~cm from the source of radiation. The resulting radiation level at the coil location was calculated to be sufficiently low (below 1~Mrem/hr) to allow the use of relatively inexpensive Kapton-tape-based insulation of the coils~\cite{Kapton}. As discussed above, the length of the magnet was selected to be 50~cm and the field integral 1000~kG-cm. Fig.~\ref{fig:field} shows the longitudinal profile of the magnetic field obtained from OPERA calculations. \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE5.png} \caption{\label{fig:field} \it Magnetic field ($B_x$) profile along the beam direction, as a function of distance from the radiator position.} \end{figure} \subsection{Central Absorber} \label{sec:absorber} The beam power from the deflected electron beam and subsequent shower is deposited in an absorber made of copper, whose high heat conductivity helps to manage the power density. An absorber made of aluminum would help to reduce the power density by a factor of 2-3 compared with copper due to its longer radiation length, which spreads the shower over a greater depth, but it would also increase the length of the CPS by about 50~cm, so it is not preferred. The heat removal from the copper absorber is arranged via heat conduction to the wider area where water cooling tubes are located. Fig.~\ref{fig:power} shows the simulated longitudinal profile of the power density.
\begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE6.pdf} \caption{\label{fig:power} \it Longitudinal profile of the energy distribution (integrated over one-cm copper slabs) for an 11~GeV incident electron beam. The maximum power density occurs at a distance of 18~cm from the radiator. The blue dots show the energy deposition for the electron beam centered in a 3~mm by 3~mm channel, while the red dots show the same for the beam rastered with a radius of 1~mm.} \end{figure} The transverse distribution of power is also very important to take into account because, for a high energy incident beam, it has a narrow peak. Simulation of the deposited power density and 2-dimensional heat flow analysis were performed to evaluate the maximum temperature in the absorber. Fig.~\ref{fig:temperature} (left panel) shows the layout of materials in the model used for the temperature analysis. The calculation was performed for an 11~GeV, 30~kW beam and a radiator with 10\% radiation length thickness. The resultant temperature was found to be below 400$^\circ$C, which is well within the acceptable range for copper. Fig.~\ref{fig:temperature} (right panel) shows the temperature profile in the transverse plane at the longitudinal location of maximum power deposition. Cooling of the core will require about four gallons of water per minute at 110~psi pressure (at 30$^\circ$C temperature rise), which is easy to provide. \begin{figure} \centering \includegraphics[width=3.5in]{./FIGURE7.pdf} \caption{\label{fig:temperature} \it Left panel: the cross section of the absorber with the water cooling channels (the copper is shown in light blue and the W-Cu(20\%) is shown in gold).
Right panel: the temperature map for 1 cm by 1 cm elements at the longitudinal coordinate of the power deposition maximum.} \end{figure} \subsection{Tungsten-powder Shield} \label{cps-w-powder-shield} The amount of material needed for radiation shielding is primarily defined by the neutron attenuation length, which is 30~g/cm$^2$ for neutrons with energy below 20~MeV and 125~g/cm$^2$ for high-energy neutrons. The neutron production rate by an electron beam in copper is $1\times 10^{12}$ per kW of beam power according to Ref.~\cite{Swanson77} (see Fig.~\ref{fig:n-yield}). At a distance of 16~meters from the unshielded source for a 30~kW beam, the neutron flux would be $1\times 10^7$~n/cm$^2$/s, which would produce a radiation level of 110~rem/hr. The proposed conceptual design has a total shield mass of 850~g/cm$^2$ and will result in a reduction in these radiation levels by a factor of around 1000. \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE8.png} \caption{\label{fig:n-yield} \it The neutron yield and dose rate for a 500~MeV incident electron beam as a function of atomic number (based on an original figure from SLAC~\cite{Swanson77}).} \end{figure} The space inside the magnet between the poles and coils is filled by an inner copper absorber and an outer W-Cu(20\%) insert, which provides a good balance between effective beam power absorption and radiation shielding. For the shield outside the magnet, the current design employs tungsten powder, whose high density (16.3~g/cm$^3$)\footnote{The density of tungsten is 19.25~g/cm$^3$, but more commonly admixtures of tungsten and Cu/Ni, or in this case tungsten powder, are used with somewhat lower densities.} helps to reduce the total weight of the device. A thickness of 50~cm was used as a first iteration for the outer shield of the CPS, but we have investigated the impact of varying this amount of outer shielding and adding borated plastic (as discussed later).
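Two numbers from this and the preceding subsection are easy to cross-check: the cooling-water flow for the 30~kW absorber at a 30$^\circ$C temperature rise (US gallons assumed) and the factor-of-$\sim$1000 neutron attenuation of the 850~g/cm$^2$ shield:

```python
import math

# Water flow needed to remove the full 30 kW beam power at a 30 C rise.
C_WATER = 4186.0    # specific heat of water [J/(kg K)]
US_GALLON_L = 3.785

def cooling_flow_gal_per_min(power_W=30e3, delta_T=30.0):
    kg_per_s = power_W / (C_WATER * delta_T)  # ~0.24 kg/s
    return kg_per_s * 60.0 / US_GALLON_L       # 1 kg of water ~ 1 L

# Exponential attenuation of high-energy neutrons in the shield.
def shield_attenuation(mass_thickness=850.0, atten_length=125.0):
    return math.exp(-mass_thickness / atten_length)

print(cooling_flow_gal_per_min())  # ~3.8, i.e. "about four gallons" per minute
print(1.0 / shield_attenuation())  # ~900, the quoted factor of around 1000
```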
\subsection{Impact on Polarized Target} \label{cps-target-field} The most significant gain associated with deployment of the CPS is for experiments using dynamically polarized targets. However, such polarized targets operate with strong polarizing fields themselves. In addition, dynamically polarized target operation imposes strict requirements on the field quality at the target location, where fields and gradients need to be compensated at the 10$^{-4}$ level. This necessitates studies of the mutual forces associated with the 2-3~Tesla CPS dipole magnet and the 5~Tesla polarized target solenoid, in terms of both the design of the support structures and the experimental operation. The fields associated with the combination of these two magnetic systems were calculated using the model shown in Fig.~\ref{fig:modelT}, with the following results obtained: \begin{itemize} \item When the CPS is on but the polarized target magnet is off, the (total) field at the target location is 0.1~Gauss. \item When the polarized target magnet is on and the CPS is off or removed, the field at the CPS location is about 130~Gauss. \item When both the CPS and the polarized target magnet are on, the field gradient at the polarized target center is about 2 Gauss/cm (Fig.~\ref{fig:gradient}). \end{itemize} \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE9.png} \caption{\label{fig:modelT} \it The TOSCA model used in the field and force calculations for longitudinal orientation of the coils/target polarization.} \end{figure} These results show that, for the CPS, the induced field is mainly due to the CPS magnet yoke becoming magnetized by the target field. For the target, the field gradient at the target location is sufficiently low for routine dynamically polarized NH$_3$ or ND$_3$ operation, with a relative value of around $0.4 \times 10^{-4}$ per cm.
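The quoted field quality follows directly from the numbers in the list above: a 2~Gauss/cm gradient against the 5~T holding field gives a relative inhomogeneity of $0.4 \times 10^{-4}$ per cm:

```python
# Relative field inhomogeneity at the target: a 2 Gauss/cm gradient
# against the 5 T (= 50,000 Gauss) polarizing field.

def relative_gradient_per_cm(grad_G_cm=2.0, holding_field_T=5.0):
    return grad_G_cm / (holding_field_T * 1e4)  # 1 T = 10^4 Gauss

print(relative_gradient_per_cm())  # 4e-05, the quoted ~0.4e-4 per cm
```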
\begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE10.png} \caption{\label{fig:gradient} \it The field at the target center. The insert shows the field zoomed by a factor of 10.} \end{figure} \section{Radiation Requirements} \label{sec:radiation-requirements} As discussed previously, most of the proposed Jefferson Lab experiments with the CPS will utilize a dynamically nuclear polarized target. Electron beam currents for use with such targets are typically limited to 100~nA or less, to reduce both heat load and radiation damage effects. The equivalent heat load for a pure photon beam impinging on such a target corresponds to a photon flux originating from a 2.7~$\mu$A electron current striking a 10\% copper radiator. The radiation calculations presented in this section therefore assume a CPS able to absorb 30~kW of beam power (corresponding to a beam of 11~GeV electrons with a current of 2.7~$\mu$A). In addition, the beam time assumed for a typical experiment is 1000 hours. For such an experiment at Jefferson Lab, the following radiation requirements must be fulfilled: \begin{itemize} \item{The prompt dose rate in the experimental hall must be $\le$ several rem/hr at a distance of 30 feet from the CPS.} \item{The activation dose outside the CPS envelope at a distance of one foot must be $\le$ several mrem/hr one hour after the end of a 1000 hour run.} \item{The activation dose at the centre of the experimental target area, where operational maintenance tasks may be required at a distance of one foot from the scattering chamber, must be $\le$ several mrem/hr one hour after the end of a 1000 hour run.} \end{itemize} The CPS conceptual design has been established with the aid of several extensive simulations. As validation of the simulation tools used, benchmark comparisons were made with GEANT3, GEANT4, FLUKA and DINREG.
These simulation packages are described in Refs.~\cite{Geant4,Fluka}.\footnote{Note that these codes calculate particle yields/s/cm$^2$, which have to be converted into the effective dose rate (in rem/hr) using fluence-to-effective-dose conversion factors~\cite{ICRP116}, taking into account an energy-dependence factor.} After benchmark validation, a series of radiation calculations were performed in order to: \begin{itemize} \item{Determine the size and layout of the shielding around the magnet, and the choice of materials (copper, Cu-W alloy, concrete, borated plastic, etc.).} \item{Determine the magnet field requirements in terms of peak field, gap size, and field length.} \item{Determine the radiation levels on the magnet coils, and based on these results, to identify radiation-hardened materials that might be used in building the coils.} \item{Determine the radiation levels on the polarized target electronics.} \item{Determine the radiation levels directly adjacent to the CPS as well as at the experimental hall boundary.} \end{itemize} \section{Radiation Studies and Shielding Design} \label{sec:radstudies-shieldingdesign} In this section we will describe studies performed for several different experimental configurations in order to identify the various sources of radiation and make direct comparisons of the calculated dose rates. \subsection{Prompt Radiation Dose Rates} In order to provide a baseline, the prompt radiation dose originating from a 2.7~$\mu$A electron beam hitting a 10\% copper radiator located at a distance of 2.15~m upstream of the centre of the experimental target was calculated. As the geometry of the target system and CPS is not included in this simulation, all prompt radiation originates from the interaction between the primary electron beam and the radiator. The prompt radiation dose is calculated by summing over all azimuthal angles in a radial range between 5 and 10~cm from the beam line.
Fig.~\ref{fig:prompt_no_cps} shows two-dimensional dose rates originating from photons only (top left), from neutrons only (top right), from all particles (bottom left), and the one-dimensional prompt radiation dose along the beam direction (bottom right). With the exception of the neutron contribution, most of the prompt radiation is created along the beam direction, as expected. The prompt radiation levels reach roughly 40~rem/hr, of which only around 200~mrem/hr is in the form of gamma radiation and 10~mrem/hr from neutrons. The remaining and clearly dominant contribution is from charged electron- and positron-induced showers. \begin{figure*} \centering \includegraphics[width=5.0in]{./FIGURE11.pdf} \caption{\label{fig:prompt_no_cps} \it Prompt radiation dose rate as a function of position in the experimental hall for the case of a 2.7~$\mu$A electron beam hitting a 10\% copper radiator. Two-dimensional plots are shown for the dose from photons only (top left), from neutrons only (top right) and from all particle types (bottom left). Also shown is a one-dimensional plot of prompt dose rate along the beam direction (bottom right).} \end{figure*} The second scenario considered is that of a 2.7~$\mu$A electron beam incident on a 10\% copper radiator as before, but with the radiator located within the CPS geometry. Fig.~\ref{fig:prompt_cps} illustrates the prompt radiation dose along the beam direction for this case (note that the y-axis scale on this figure is the same as in Fig.~\ref{fig:prompt_no_cps}). One can clearly see that the prompt radiation levels within the CPS are much higher than before (around 300 times higher because the full power of the beam is now being deposited in the CPS). Crucially, however, the prompt radiation dose rate outside the CPS is only around 15~mrem/hr. 
Comparing this value for prompt dose rate to the one obtained above for the baseline scenario highlights the effect of the CPS shielding: there is a reduction by a factor of over 1000. This reduction is consistent with the factor estimated previously in section~\ref{cps-w-powder-shield}. \begin{figure*} \centering \includegraphics[width=5.0in]{./FIGURE12.pdf} \caption{\label{fig:prompt_cps} \it Prompt radiation dose rate as a function of upstream distance from the target for the case of a 2.7~$\mu$A electron beam hitting a 10\% copper radiator inside the CPS. The dose includes contributions from all particles. The large reduction factor of $>$1000 as a result of the CPS shielding is apparent.} \end{figure*} This is a very important result, which is further illustrated in Fig.~\ref{fig:prompt_cps-1D}. In contrast with the baseline scenario, there are now no contributions to the overall prompt dose rate in the experimental hall from photons, electrons and positrons, as these are all contained within the CPS shielding -- the neutron-only dose rate is nearly identical to the all-radiation rate. The bottom-right panel in Fig.~\ref{fig:prompt_cps-1D} illustrates how well optimized the CPS shielding concept is for absorbing prompt radiation. Outside the CPS the prompt radiation dose rate on the surface (indicated by the outer black rectangular lines on the figure) is reduced to a maximum level of roughly 10~rem/hr. This is due to the fact that the development of showers generated by interactions of the primary beam is highly suppressed and the resultant secondary charged particles and photons are fully contained. This confirms that with a CPS the following requirement can be met: the prompt dose rate in the experimental hall $\le$ several~rem/hr at a distance of 30 feet from the device.
\begin{figure*} \centering \includegraphics[width=5.0in]{./FIGURE13.pdf} \caption{\label{fig:prompt_cps-1D} \it Prompt radiation dose rate as a function of position in the experimental hall for the case of a 2.7~$\mu$A electron beam hitting a 10\% copper radiator inside the CPS. One-dimensional plots are shown for the dose from photons only (top left), from neutrons only (top right) and from all particle types (bottom left). Also shown is a two-dimensional plot of prompt dose rate (bottom right), which shows the effectiveness of the CPS shielding concept.} \end{figure*} \subsection{Impact of Boron and Shielding Optimization} It is well known that the neutron flux through a surface can be drastically reduced by the addition of boron as a result of the very high capture cross section of $^{10}$B. This effect was simulated by calculating the neutron flux at the CPS boundary assuming various thicknesses of tungsten shielding (65, 75 and 85~cm), and then adding 10~cm of borated (30\%) plastic. The result can be seen in Fig.~\ref{fig:boron}, which shows the neutron flux as a function of neutron energy. Increasing the tungsten thickness clearly reduces the neutron flux as expected, but a much more drastic reduction is seen when the 10~cm of borated plastic is added. Thus, the baseline conceptual shielding design of the CPS is assumed to be 85~cm thick tungsten surrounded by 10~cm of borated plastic. \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE14.pdf} \caption{\label{fig:boron} \it Neutron flux escaping the CPS for different shielding configurations, including the use of borated plastic.} \end{figure} The outer dimensions of the tungsten-powder shielding as outlined for optimized shielding above are 1.7~m by 1.7~m by 1.95~m, or a volume of 5.63~m$^3$. One needs to subtract from this total volume the inner box including the magnet, which amounts to 0.26~m$^3$, leaving a net volume of 5.37~m$^3$, or 88~tons of W-powder.
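The quoted shield mass follows from the dimensions and the powder density given above; the sketch also evaluates the reduced-geometry option (5~cm trimmed from each side) discussed in the text:

```python
# Mass of the W-powder shield from the outer box dimensions, the inner
# (magnet) box volume, and the 16.3 g/cm^3 powder density quoted above.

W_POWDER_DENSITY_T_PER_M3 = 16.3  # 16.3 g/cm^3 == 16.3 tonnes/m^3

def shield_mass_tons(x_m=1.7, y_m=1.7, z_m=1.95, inner_box_m3=0.26):
    return (x_m * y_m * z_m - inner_box_m3) * W_POWDER_DENSITY_T_PER_M3

print(round(shield_mass_tons()))                # 88 tons, baseline design
print(round(shield_mass_tons(1.6, 1.6, 1.85)))  # 73 tons, 5 cm off each side
```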
There are various options to reduce the weight and therefore cost, if needed. One could reduce the overall size of the W-powder shielding by 5~cm on each side. This would result in a reduction of the shield weight to 73 tons, but would also lead to an increase of the radiation levels by about 50\%. If an additional 10~cm were removed from the bottom side only, there would be a further increase of a factor of two in radiation level in the direction of the floor, but a further reduction in shielding weight to 68~tons. Alternatively, one could round the W-powder corners, as illustrated in Fig.~\ref{fig:FLUKA-oval}. This would complicate modular construction, but would allow for similar radiation levels as with the optimized design, while reducing the shielding weight to $\approx$66 tons. \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE15.pdf} \caption{\label{fig:FLUKA-oval} \it An alternative shielding design used in FLUKA radiation calculations with reduced W-powder overall, on the bottom-side and with rounded corners.} \end{figure} \subsection{Dose Rates due to Activation} Dose rates due to the decay of activation products produced in the CPS during beam-on conditions have been calculated. Fig.~\ref{fig:activation-cps} shows the calculated activation dose one hour after a 1000-hour experiment has been completed with the same conditions as before (2.7~$\mu$A, 10\% copper radiator, with shielded CPS). Fig.~\ref{fig:activation-cps-radial} shows the activation dose rate as a function of radial distance from the CPS. The activation dose outside the CPS is 2~mrem/hr at the surface and reduces radially outward. At a distance of one foot it is reduced to about 1.5~mrem/hr. This therefore demonstrates that the current design meets the requirement that the activation dose outside the device envelope at one foot distance is $\le$ several mrem/hr after one hour following the end of a 1000 hour run.
Note that these estimates do not depend much on the assumption of 1000 hours of continuous running, as similar dose rates are seen in a calculation for a 100-hour continuous run, reflecting the fact that many of the activation products are relatively short-lived. Furthermore, activation dose rates do not drop appreciably after one hour or even one day. On the other hand, after one month the activation dose rates at the CPS surface are reduced by up to a factor of ten. Inside the CPS the activation dose rate can be up to 1~krem/hr, which is why the CPS will be moved laterally to the side after an experiment rather than disassembled. \begin{figure}[!htbp] \centering \subfigure[\label{fig:activation-cps} \it ] { \includegraphics[width=2.8in]{./FIGURE16A.pdf} } \hspace{0.25cm} \subfigure[\label{fig:activation-cps-radial} \it ] { \includegraphics[width=2.8in]{./FIGURE16B.pdf} } \caption{\label{fig:activation-figures} Activation radiation dose rate one hour after a 1000-hour experiment as a function of position in the experimental hall for the case of a 2.7~$\mu$A electron beam hitting a 10\% copper radiator inside the CPS.} \end{figure} \subsection{Comparison with Dose Rates from the Target} \begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE17.pdf} \caption{\label{fig:prompt-at-target} \it Prompt dose at the target for different configurations. Distance $R$ is radial distance from the target centre, with the radius of the scattering chamber boundary located at 50 cm.} \end{figure} Fig.~\ref{fig:prompt-at-target} shows the prompt dose at the target for different experimental configurations as a function of radial distance from the target centre. It is worth commenting on the results for three of these configurations: the {\it 100 nA electron beam}, the {\it 2.7 $\mu$A photon beam} and the {\it CPS with polarized target}.
At the boundary of the scattering chamber in the {\it 100 nA electron beam} configuration, the default operating mode for polarized beam experiments with dynamically nuclear polarized targets at Jefferson Lab to date, the prompt dose at the target is roughly 1~rem/hr. In the {\it 2.7 $\mu$A photon beam} scenario it is roughly 30~rem/hr, which simply reflects the fact that even if a 2.7 $\mu$A pure photon beam deposits the same heat load in a target as a 100 nA electron beam, the radiation rate is much higher. The {\it CPS with polarized target} scenario is identical to the pure photon beam case, further demonstrating that no additional radiation in the target area is created due to the presence of the CPS. \begin{figure*} \centering \includegraphics[width=5.0in]{./FIGURE18.pdf} \caption{\label{fig:activation-at-target} \it Activation dose rate at the target for different configurations. Distance $R$ is radial distance from the target centre, with the radius of the scattering chamber boundary located at 50 cm.} \end{figure*} Similarly, Fig.~\ref{fig:activation-at-target} shows the activation dose rates for the same three configurations. One can see that the {\it 2.7 $\mu$A photon beam} configuration has a much higher activation dose rate at the target than the {\it 100 nA electron beam} case. This again reflects what was seen in the previous figure for the prompt radiation dose rate, as there are many more photons coming from a 2.7~$\mu$A electron beam on a 10\% copper radiator than there are from a 100~nA electron beam on a roughly 3\% dynamically nuclear polarized target. The effect of the CPS on the activation rate at the target is, as before, negligible. 
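The gap between the two configurations can be made concrete with a rough scaling estimate. The linear dependence of photon flux on beam current times material thickness (in radiation lengths) is an illustrative thin-radiator assumption, not a number taken from the simulations:

```python
# Rough photon-flux comparison, assuming the flux scales linearly with
# beam current x material thickness in radiation lengths (an illustrative
# thin-radiator assumption; the actual FLUKA results are more detailed).
cps_source = 2.7e-6 * 0.10       # 2.7 uA beam on a 10% copper radiator
target_source = 100e-9 * 0.03    # 100 nA beam on a ~3% polarized target
ratio = cps_source / target_source
print(f"approximate photon-flux ratio: {ratio:.0f}x")
```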
\begin{figure} \centering \includegraphics[width=3.0in]{./FIGURE19.pdf} \caption{\label{fig:activation-rates} \it Activation radiation dose rate one hour after a 1000-hour experiment as a function of position in the experimental hall for the case of a 2.7~$\mu$A electron beam hitting a 10\% copper radiator inside the CPS, with the target geometry included. The 1~mrem/hr contour is indicated.} \end{figure} Fig.~\ref{fig:activation-rates} shows a two-dimensional plot of the activation dose rate in the experimental hall one hour after a 1000 hour run with the CPS, a 2.7~$\mu$A, 11~GeV beam on a 10\% radiator and the polarized target system (at z = 0). The 1~mrem/hour contour is indicated, and demonstrates that with the current CPS baseline design, the activation dose at the target centre in the experimental target area, where operational maintenance tasks may be required, is dominated by the dose induced by a pure photon beam. At a distance of one foot from the scattering chamber it is $\le$ several mrem/hr one hour after a 1000 hour run, as required. \subsection{Material Considerations} The level of radiation of the CPS experiments is well below what is typical for many high-luminosity experiments at Jefferson Lab using regular cryogenic target systems and/or radiators. However, the radiation level on the polarized target coils, due to the interaction of the photon beam with the polarized target material, will be higher than in previous experiments (around 500~rem/hr as illustrated in Fig.~\ref{fig:prompt-and-neutron-1meveq-damage}). This is not expected to pose any significant issues. Furthermore, the radiation levels in the CPS magnet coils at a distance of 20~cm from the radiation source are around 1~Mrem/hr (see {\sl e.g.} Fig.~\ref{fig:prompt_cps-1D}, bottom right). This relatively moderate level will allow the use of a modest-cost Kapton tape-based insulation of the coils~\cite{Kapton}. 
\begin{figure*} \centering \includegraphics[width=5.0in]{./FIGURE20.pdf} \caption{\label{fig:prompt-and-neutron-1meveq-damage} \it The prompt radiation dose (left) and the resulting 1~MeV neutron equivalent damage to silicon (right) in the target area, assuming the conditions described above. The polarized target system is centred at $R = 0$, the nominal target chamber radius is 50~cm and the target coils are at about 20~cm from the beam line. The dose at the target coils is $5\times 10^5$~rem and the 1~MeV neutron equivalent damage is $5 \times 10^{12}$~neutrons/cm$^2$.} \end{figure*} \section{Engineering and Safety Aspects} \label{sec:safety-and-engineering} As stated earlier, cooling of the CPS core will require four gallons of water per minute at 110 psi pressure, which will result in a 30$^\circ$C rise in coolant temperature. Activation of this coolant water and beam dump is anticipated, meaning a closed-cycle cooling system will be needed. Activation inside the CPS will be confined to a very small volume and in the event of a leak, external contamination will be minimized. A leak pan under the device could easily be included to catch and confine any leakage up to and including a total loss of primary coolant. A modular pallet mounted design would be efficient and would include primary coolant pumps, DI resin beds, heat exchanger, surge tank, controls instrumentation and manifolds. The combination of placing a high-power bremsstrahlung radiator, a magnet and a beam dump inside a shielded box imposes significant reliability and remote handling considerations. The primary engineering control involves making the design as robust as possible, including large safety margins and avoiding the need for disassembly for maintenance or any other reason. The CPS should be heavily instrumented for early detection of problems such as low coolant flow, leaks, low pressure, high temperature, and high conductivity. 
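The cooling parameters quoted at the start of this section are mutually consistent: a quick heat-balance check (using standard handbook values for water, which are not given in the text) recovers roughly the 30~kW the CPS must absorb.

```python
# Heat removed by the CPS cooling water: Q = mdot * c_p * dT.
# Water properties are standard handbook values (assumptions).
GALLON_LITRES = 3.785                 # US gallon
mdot = 4 * GALLON_LITRES / 60         # 4 gal/min -> ~0.25 kg/s (1 L ~ 1 kg)
c_p = 4186                            # specific heat of water, J/(kg K)
dT = 30                               # coolant temperature rise, K
Q = mdot * c_p * dT                   # removed power, W
print(f"Q = {Q / 1000:.1f} kW")       # ~32 kW, consistent with ~30 kW absorbed
```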
The two areas where conservative safety design is most needed are in the magnetic coil and dump cooling systems. A low magnet coil current density design is envisioned, which is not expected to exceed 500 A/cm$^2$. In order to allow easy access, individual coil pancake leads should be extended to an area outside of the magnet and shielding. There should be no electrical or coolant joints inside the shielding, and each separate sub-coil of the CPS magnet should have thermometers, thermal circuit breakers, voltage and coolant flow monitors to avoid any possibility that one of the separate current paths can overheat due to lack of sufficient coolant or a bad electrical contact. Extra insulation between sub-coils and between the coil and ground should be added to prevent ground faults. Lastly, a commercial power supply is assumed that will come with a wide array of internal interlock protections. The available interlocks and signals can be fed into the electron beam Fast Shutdown (FSD) system. To protect equipment in the experimental hall from the beam striking the CPS shielding, a dual protection scheme using both a beam position monitoring system and direct instrumentation of the fast raster magnet is proposed. The beam diagnostics systems would monitor beam position and motion in close to real time and monitor coil voltage on the raster coils, which would provide ample early warning of raster problems. Both of these independent signals would be fed into the FSD system. Radiator temperature could be monitored to provide a third independent protection system, and if implemented, thermocouples mounted on the radiator should be robust against radiation damage and provide fast enough protection against radiator overheating. \section{Summary} \label{sec:summary} The Compact Photon Source (CPS) design features a magnet, a central copper absorber and hermetic shielding consisting of tungsten powder and borated plastic.
The addition of the latter has a considerable impact on reducing the neutron flux escaping the CPS. The ultimate goal in this design process is that radiation from the source should be a few times less than from a photon beam interacting with the material of a polarized target. The equivalent heat load for a pure photon beam impinging such targets corresponds to a photon flux originating from a 2.7 $\mu$A electron beam current striking a 10\% copper radiator. Detailed simulations of the power density and heat flow analysis show that the maximum temperature in the absorber is below 400 degrees, which is well within the acceptable range of copper, and thus demonstrates that the CPS can absorb 30 kW in total, {\sl e.g.} corresponding to an 11-GeV electron beam energy and a 2.7 $\mu$A electron beam current. The CPS also fulfills the requirements on operational dose rates at Jefferson Lab, which have been established with extensive and realistic simulations. The projected prompt dose rate at the site boundary is less than 1 $\mu$rem/hr (to be compared with 2.4 $\mu$rem/hr, which corresponds to a typical JLab experiment that does not require extra shielding). The activation dose outside the device envelope at one foot distance is less than several mrem/hr after one hour following the end of a 1000 hour run ($\sim$ 3 months). The activation dose at the target centre in the experimental target area, where operational maintenance tasks may be required, is dominated by the dose induced by the pure photon beam. At a distance of one foot from the scattering chamber it is less than several mrem/hr one hour after the end of a 1000 hour run (i.e. the additional activation dose induced by absorption of the electron beam in the Compact Photon Source is negligible). \section{Acknowledgements} This work is supported in part by the National Science Foundation grants PHY-1306227, 1913257, and PHY-1714133, the U.S. 
Department of Energy, Office of Science, Office of Nuclear Physics award DE-FG02-96ER40950, and the United Kingdom’s Science and Technology Facilities Council (STFC) from Grant No. ST/P004458/1. We would like to thank Paul Brindza for helpful discussions and providing valuable input for the writing of this document. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177.
\section{Introduction} The sharp increase in the use of video-conferencing creates both a need and an opportunity to better understand these conversations \citep{kim2019review}. In post-event applications, analyzing conversations can give feedback to improve communication skills \citep{hoque2013mach, naim2015automated}. In real-time applications, such systems can be useful in legal trials, public speaking, e-health services, and more \citep{poria2019emotion, tanveer2015rhema}. Analyzing conversations requires both human expertise and a lot of time. However, to build automated analysis systems, analysts often require a training set annotated by humans \citep{poria2019emotion}. The annotation process is costly, thereby limiting the amount of labeled data. Moreover, third-party annotations on emotions are often noisy. Deep networks coupled with limited noisy labeled data increase the chance of overfitting \citep{james2013introduction, zhang2016understanding}. Could data be used more productively? From the perspective of feature engineering to analyze video-conferences, analysts often employ pre-built libraries \citep{baltruvsaitis2016openface, vokaturi2019} to extract multimodal features as inputs to training. This preprocessing phase is often computationally heavy, and the resulting features are only used as inputs. In this paper, we investigate how the preprocessed data can be re-used as auxiliary tasks that provide inductive bias through multiple noisy supervision \citep{caruana1997multitask, lipton2015learning, ghosn1997multi} and, consequently, promote a more productive use of data. Specifically, our main contributions are (1) the identification of beneficial auxiliary tasks, (2) studying the method of distributing learning capacity between the primary and auxiliary tasks, and (3) studying the relative supervision hierarchy between the primary and auxiliary tasks.
We demonstrate the value of our approach through predicting emotions on two publicly available datasets, IEMOCAP \citep{busso2008iemocap} and SEMAINE \citep{mckeown2011semaine}. \section{Related Works and Hypotheses} \label{section:related_works} Multitask learning has a long history in machine learning \citep{caruana1997multitask}. In this paper, we focus on transfer learning within MTL, a less commonly discussed subfield \citep{mordan2018revisiting}. We are concerned with the performance on one (primary) task -- the sole motivation of adding auxiliary tasks is to improve the primary task performance. In recent years, this approach has been gaining attention in computer vision \citep{yoo2018deep, fariha2016automatic, yang2018multitask, mordan2018revisiting, sadoughi2018expressive}, speech recognition \citep{krishna2018hierarchical, chen2015multitask, tao2020end, bell2016multitask, chen2014joint}, and natural language processing (NLP) \citep{arora2019multitask, yousif2018citation, zalmout2019adversarial, yang2019information, du2017novel}. Adding multiple tasks, however, increases the risk of negative transfer \citep{torrey2010transfer, lee2016asymmetric, lee2018deep,liu2019loss, simm2014tree}, which leads to many design considerations. Three such considerations are identifying (a) what tasks are beneficial, (b) how many of the model parameters to share between the primary and auxiliary tasks, and (c) whether we should prioritize primary supervision by giving it a higher hierarchy than the auxiliary supervision. In contrast with previous MTL works, our approach (a) identifies sixteen beneficial auxiliary targets, (b) dedicates a primary-specific branch within the network, and (c) investigates the efficacy and generalization of prioritizing primary supervision across eight primary tasks.
Since our input representation is fully text-based, we dive deeper into MTL model architecture designs in NLP. \citet{sogaard2016deep} found that lower-level tasks like part-of-speech tagging are better kept at the lower layers, enabling the higher-level tasks like Combinatory Categorial Grammar tagging to use these lower-level representations. In our approach, our model hierarchy is not based on the difficulty of the tasks, but more simply, we prioritize the primary task. Regarding identifying auxiliary supervisors in NLP, existing works have included tagging the input text \citep{zalmout2019adversarial,yang2019information, sogaard2016deep}. Text classification with auxiliary supervisors has included research article classification \citep{du2017novel, yousif2018citation}, and tweet classification \citep{arora2019multitask}. Multimodal analysis of conversations has been gaining attention in deep learning research \citep{poria2019emotion}. The methods of the past three years have intelligently fused numeric vectors from the text, audio, and video modalities before feeding them to downstream layers. This approach is seen in MFN \citep{zadeh2018memory}, MARN \citep{zadeh2018multi}, CMN \citep{hazarika2018conversational}, ICON \citep{hazarika2018icon}, DialogueRNN \citep{majumder2019dialoguernn}, and M3ER \citep{mittal2020m3er}. Our approach is different in two ways. (1) Our audio and video information is encoded within text before feeding only the text as input. Having only text as input has the benefits of interpretability, and the ability to present the conversational analysis on paper \citep{kim2019detecting}, similar to how the linguistics community performs manual conversational analysis using the Jefferson transcription system \citep{jefferson2004glossary}, where the transcripts are marked up with symbols indicating how the speech was articulated.
(2) Instead of using the audio and video information as only inputs, we demonstrate how to use multimodal information in both input and as auxiliary supervisors. \textbf{Hypothesis H1: The introduced set of auxiliary supervision features improves primary task performance.} We introduce and motivate the full set of sixteen auxiliary supervisions, all based on existing literature: these are grouped into four families, each with four auxiliary targets. The four families are (1) facial action units, (2) prosody, (3) historical labels, and (4) future labels: \newline (1) Facial action units, from the facial action coding system identifies universal facial expressions of emotions \citep{ekman1997face}. Particularly, AU 05 (upper lid raiser), 17 (chin raiser), 20 (lip stretcher), 25 (lips part) have been shown to be useful in detecting depression \citep{yang2016decision, kim2019detecting} and rapport-building \citep{kim2021monah}. \newline (2) Prosody, the tone of voice -- happiness, sadness, anger, and fear -- can project warmth and attitudes \citep{hall2009observer}, and has been used as inputs in emotions detection \citep{garcia2017emotion}. \newline (3 and 4) Using features at different historical time-points is a common practice in statistical learning, especially in time-series modelling \citep{christ2018time}. Lastly, predicting future labels as auxiliary tasks can help in learning \citep{caruana1996using, cooper2005predicting, trinh2018learning, zhu2020vision, shen2020auxiliary}. We propose using historical and future (up to four talkturns ago or later) target labels as auxiliary targets. \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{Figures/reuse.png} \caption{Reusing components (dotted lines) of the feature engineering process as auxiliary targets (in blue). 
The MONAH framework \citep{kim2021monah} is introduced in section \ref{ssect:input}} \label{fig:reuse} \end{figure*} Given that we are extracting actions and prosody families as inputs, we propose to explore whether they can be reused as supervisors (see Fig. \ref{fig:reuse}). Our hypothesis \textbf{H1} is that re-using them as auxiliary supervision improves primary task performance. This is related to using hints in the existing MTL literature, where the auxiliary tasks promote the learning of the feature \citep{cheng2015open, yu2016learning}. \textbf{Hypothesis H2: When the primary branch is given maximum learning capacity, it would not be outperformed by models with primary branch having less than the maximum learning capacity.} Deeper models with higher learning capacity produce better results \citep{huang2019gpipe, nakkiran2019deep, menegola2017knowledge, blumberg2018deeper, romero2014fitnets}. Also, since the auxiliary branch is shared with the primary supervision, the auxiliary capacity should be limited to improve primary task performance \citep{wu2020understanding} because limiting the auxiliary capacity will force the branch to learn common knowledge (instead of auxiliary specific knowledge) across the auxiliary tasks \citep{arpit2017closer}. Therefore, given a fixed learning capacity budget, our hypothesis \textbf{H2} implies that we should allocate the maximum learning capacity to the primary branch because we care only about the primary task performance. \textbf{Hypothesis H3: Auxiliary supervision at the lower hierarchy yields better primary task performance as compared to flat-MTL.} Having the auxiliary tasks at the same supervisory level as the primary task is inherently sub-optimal because we care only about the performance of the primary task \citep{mordan2018revisiting}. 
Information at the lower hierarchy learns basic structures that are easy to transfer, whilst the upper hierarchy learns more semantic information that is less transferable \citep{zeiler2014visualizing}. Therefore, we propose that the auxiliary supervision be at a lower hierarchy than the primary supervision. \section{Model Architecture} \subsection{Flat-MTL Hierarchical Attention Model} \label{ssect:flat_mtl_han} We start with an introduction of the Hierarchical Attention Model (HAN) \citep{yang2016hierarchical}. We chose HAN because of its easy interpretability as it only uses single-head attention layers. There are four parts to the HAN model: (1) text input, (2) word encoder, (3) talkturn encoder, and (4) the predictor. In our application, we perform our predictions at the talkturn-level for both IEMOCAP and SEMAINE. For notation, let $s_i$ represent the \textit{i}-th talkturn and $w_{it}$ represent the \textit{t}-th word in the \textit{i}-th talkturn. Each single talkturn can contain up to \textit{T} words, and each input talkturn can contain up to \textit{L} past talkturns to give content context (see section \ref{ssect:input}). Given a talkturn of words, we first convert the words into vectors through an embedding matrix $W_e$, and the word selection one-hot vector, $w_{it}$. The word encoder comprises bidirectional GRUs \citep{bahdanau2014neural} and a single-head attention layer to aggregate word embeddings into talkturn embeddings. Given the vectors $x_{it}$, the bidirectional GRU reads the words from left to right as well as from right to left (as indicated by the direction of the GRU arrows) and concatenates the two hidden states together to form $h_{it}$. We then aggregate the hidden states into one talkturn embedding through the attention mechanism. $u_{it}$ is the hidden state from feeding $h_{it}$ into a one-layer perceptron (with weights $W_{w}$ and biases $b_{w}$).
The attention weight ($\alpha_{it}$) given to $u_{it}$ is the softmax-normalized weight of the similarity between itself ($u_{it}$) and $u_w$, which are all randomly initialized and learnt jointly. \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{Figures/han.png} \caption{Forward pass of the Flat-MTL HAN architecture. Auxiliary tasks (yellow) are added at the same level of the primary task (orange).} \label{fig:han} \end{figure*} \begin{figure*}[ht] \includegraphics[width=1.0\textwidth]{Figures/rock.png} \caption{Forward pass of the HAN-ROCK architecture. There is a primary branch (orange) that auxiliary supervision (yellow) cannot influence. The fusion module (blue) aggregates the talkturn embeddings from all tasks into one.} \label{fig:rock} \end{figure*} \begin{gather*} x_{it} = W_e w_{it}, t \in [1,T]. \\ \overrightarrow{h}_{it} = \overrightarrow{GRU}(x_{it}), t \in [1,T]. \\ \overleftarrow{h}_{it} = \overleftarrow{GRU}(x_{it}), t \in [T,1]. \\ {h}_{it} = (\overrightarrow{h}_{it}, \overleftarrow{h}_{it}). \\ u_{it} = relu(W_w h_{it} + b_{w}). \end{gather*} \begin{gather*} \alpha_{it} = \frac{exp(u_{it}^{\top} u_w)}{\Sigma_t exp(u_{it}^{\top} u_w)}. \\ s_i = \Sigma_t \alpha_{it} u_{it}. \end{gather*} With the current and past talkturn embeddings (content context, discussed in section \ref{ssect:input}), the talkturn encoder aggregates them into a single talkturn representation ($v$) in a similar fashion, as shown below. \begin{gather*} \overrightarrow{h}_{i} = \overrightarrow{GRU}(s_{i}), i \in [1,L]. \\ \overleftarrow{h}_{i} = \overleftarrow{GRU}(s_{i}), i \in [L,1]. \\ {h}_{i} = (\overrightarrow{h}_{i}, \overleftarrow{h}_{i}). \\ u_{i} = relu(W_s h_{i} + b_{s}). \\ \alpha_{i} = \frac{exp(u_{i}^{\top} u_s)}{\Sigma_i exp(u_{i}^{\top} u_s)}. \\ v = \Sigma_i \alpha_{i} u_{i}. \end{gather*} The simplest way of adding the sixteen auxiliary task predictors would be to append them to where the primary task predictor is, as illustrated in Fig.
\ref{fig:han}. That way, all predictors use the same representation $v$. We refer to this architecture as flat-MTL, but we are unable to test \textbf{H2} and \textbf{H3} using this architecture. Therefore, we introduce HAN-ROCK next. \subsection{HAN-ROCK} \label{ssect:han_rock} We adapted\footnote{Implementation will be available on GitHub; please see attached supplementary material during the review phase.} the ROCK architecture \citep{mordan2018revisiting}, which was built for Convolutional Neural Networks \citep{lecun1995convolutional} found in ResNet-SSD \citep{he2016deep, liu2016ssd}, to suit GRUs \citep{bahdanau2014neural} found in HAN \citep{yang2016hierarchical} (see Fig. \ref{fig:rock}). To study \textbf{H3}, we bring the auxiliary task predictors forward (see Fig. \ref{fig:rock}), so that the back-propagation from the primary supervision is able to temper the back-propagation from the auxiliary supervision but not vice-versa. This also sets us up to study \textbf{H2}. Each of the auxiliary tasks has its own talkturn encoder but shares one word encoder in the auxiliary branch (to keep the network small). Subscript $a$ indicates whether the word encoder is for the primary or auxiliary branch: \begin{gather*} x_{it} = W_e w_{it}, t \in [1,T] \\ \overrightarrow{h}_{ait} = \overrightarrow{GRU_a}(x_{it}), t \in [1,T], a \in \{pri, aux\} \\ \overleftarrow{h}_{ait} = \overleftarrow{GRU_a}(x_{it}), t \in [T,1], a \in \{pri, aux\} \\ {h}_{ait} = (\overrightarrow{h}_{ait}, \overleftarrow{h}_{ait}) \\ u_{ait} = relu(W_{aw} h_{ait} + b_{aw}) \\ \alpha_{ait} = \frac{exp(u_{ait}^{\top} u_{aw})}{\Sigma_t exp(u_{ait}^{\top} u_{aw})} \\ s_{ai} = \Sigma_{t} \alpha_{ait} u_{ait} \end{gather*} \begin{figure*}[ht] \centering \includegraphics[width=0.75\textwidth]{Figures/verbatim.png} \caption{Example of a MONAH transcript.} \label{fig:verbatim} \end{figure*} Each task has its own talkturn encoder.
Subscript $b$ indicates which of the seventeen tasks -- the primary talkturn task or one of the sixteen auxiliary tasks -- the talkturn encoder is dedicated to: \begin{gather*} \overrightarrow{h}_{abi} = \overrightarrow{GRU_{ab}}(s_{ai}), i \in [1,L], a \in \{pri, aux\}, \\ b \in \{pri, aux1, aux2, ..., aux16 \} \\ \overleftarrow{h}_{abi} = \overleftarrow{GRU_{ab}}(s_{ai}), i \in [L,1], a \in \{pri, aux\}, \\ b \in \{pri, aux1, aux2, ..., aux16 \} \\ {h}_{abi} = (\overrightarrow{h}_{abi}, \overleftarrow{h}_{abi}) \\ u_{abi} = relu(W_b h_{abi} + b_{b}) \\ \alpha_{abi} = \frac{exp(u_{abi}^{\top} u_b)}{\Sigma_i exp(u_{abi}^{\top} u_b)} \\ v_{ab} = \Sigma_i \alpha_{abi} u_{abi} \end{gather*} The seventeen talkturn embeddings ($v_{ab}$) go through concatenation and then a single-head attention layer, aggregating talkturn embeddings across seventeen tasks into one talkturn embedding for the primary task predictor. Subscript $c$ pertains to the fusion module. \begin{gather*} \text{concatenation: } v_c = (v_{ab}), a \in \{pri, aux\}, \\ b \in \{pri, aux1, aux2, ..., aux16 \} \\ \text{attention: } \alpha_{c} = \frac{exp(v_c^{\top} u_c)}{\Sigma_c exp(v_{c}^{\top} u_c)} \\ \text{overall primary talkturn vector: } v = \Sigma_c \alpha_{c} v_{c} \end{gather*} \section{Experiments} \subsection{Data and Primary Tasks} We validate our approach using two datasets with a total of eight primary tasks: the IEMOCAP \citep{busso2008iemocap} and the SEMAINE \citep{mckeown2011semaine} datasets. Both datasets are used in multimodal emotions detection research \citep{poria2019emotion}. We divided the datasets into train, development, and test sets in an approximate 60/20/20 ratio such that the sets do not share any speaker (Appendix \ref{appendix:partition} details the splits). The target labels of the eight primary tasks are all at the talkturn-level.
The four primary tasks of IEMOCAP consist of the four-class emotions classification (angry, happy, neutral, sad), and three regression problems -- valence (1-negative, 5-positive), activation (1-calm, 5-excited), and dominance (1-weak, 5-strong). The four-class emotions classification target is common for IEMOCAP \citep{latif2020multi, xia2015multi, li2019improved, hazarika2018conversational, mittal2020m3er}. For SEMAINE, there are four regression problems -- activation, intensity, power, valence. We use two standard evaluation metrics, mean absolute error (MAE), and 4-class weighted mean classification accuracy, MA(4). \subsection{Input} \label{ssect:input} Multimodal feature extraction is computed using the MONAH framework \citep{kim2021monah}. This framework uses a variety of pre-trained models to extract nine multimodal features, associated with the prosody of the speech and the actions of the speaker, and weaves them into a multimodal text narrative. We refer the reader to \citet{kim2021monah} for the details and efficacy of the MONAH framework. The benefit of the created narrative is that it describes what is said together with how it is said for each talkturn, giving richer nonverbal context to the talkturn (see Fig. \ref{fig:verbatim} for an example). Being fully text-based means that the analysis product can be printed out on paper, without the need for speakers or monitors to replay the conversation on a computer. In addition to nonverbal context, we concatenated a variable number of preceding talkturns to the current talkturn as content context. Content context has been proven to be useful in CMN \citep{hazarika2018conversational}, ICON \citep{hazarika2018icon}, and DialogueRNN \citep{majumder2019dialoguernn}. The content-context size is tuned as a hyperparameter. The resulting multimodal text narrative, consisting of both nonverbal and content context, is used as the sole input to the model.
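The content-context construction described above can be sketched as follows; the function and variable names are illustrative, not taken from the MONAH codebase.

```python
def build_model_input(talkturns, i, context_size):
    """Return the current MONAH talkturn narrative together with up to
    `context_size` preceding talkturns (the content context).
    Illustrative sketch; names are not from the MONAH codebase."""
    start = max(0, i - context_size)
    return " ".join(talkturns[start:i + 1])

turns = ["the woman sadly said no",
         "the man loudly asked why",
         "the woman slowly said because"]
print(build_model_input(turns, 2, context_size=1))
# joins the previous talkturn with the current one
```

In the full model, `context_size` plays the role of the content-context size tuned as a hyperparameter.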
\subsection{Auxiliary Targets} We first clarify the method of extraction for the auxiliary families. The OpenFace algorithm \citep{baltruvsaitis2016openface} is used to extract the four continuous facial action units (AU) -- AU 05, 17, 20, 25. The Vokaturi algorithm \citep{vokaturi2019} is used to extract the four continuous dimensions in the tone of voice -- happiness, sadness, anger, and fear. As for historical and future features, we simply look up the target label for the past four talkturns and future four talkturns. Any label that is not available (for example, the label four talkturns ago is not available for the third talkturn) is substituted with the next nearest non-missing label. All auxiliary targets that reused the input features (actions and prosody) are converted into a percentile rank that has the range [0,1] using the values from the train partition. This is a subtle but noteworthy transformation. When reusing an input as an auxiliary target, it would be trivial if the input could easily predict the target. For example, given the following MONAH transcript as input, ``The woman sadly and slowly said no.'' It would be trivial to use a binary (quantized) auxiliary target of ``was the tone sad?'' because we would only be training the model to look for the word ``sadly''. However, if the auxiliary target is a percentile rank (less quantized) of the sadness in tone, then the presence of the word ``sadly'' increases the predicted rank, but the model could still use the rest of the nonverbal cues (``slowly'') and what is being said (``no'') to predict the degree of sadness. That way, representations learnt for the auxiliary tasks use more of the input. Percentile rank also has the convenient property of having the range [0,1]. We scaled the percentile ranks so that they all have the same range as the primary task (see appendix \ref{appendix:scalingaux} for transformation details).
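A minimal sketch of this transformation, assuming IEMOCAP's 1-5 regression scale as the primary range purely for illustration:

```python
import numpy as np

def percentile_rank(train_values, x):
    """Percentile rank of x in [0, 1], computed against the train
    partition only (a minimal sketch of the transformation above)."""
    train = np.sort(np.asarray(train_values, dtype=float))
    return np.searchsorted(train, x, side="right") / train.size

# Rescale the [0, 1] rank to the primary-task range; [1, 5] is used
# here for illustration (IEMOCAP's regression targets).
lo, hi = 1.0, 5.0
rank = percentile_rank([0.1, 0.4, 0.4, 0.9], 0.4)   # 0.75
scaled = lo + rank * (hi - lo)                       # 4.0
print(rank, scaled)
```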
This ensures that if we assigned equal loss weights to all tasks, the contribution of every task is of the same order of magnitude \citep{gong2019comparison, hassani2019unsupervised, sener2018multi}. \begin{table*}[ht] \centering \caption{\textbf{H1} Results. *: the model performance has a statistically significant difference with the baseline model. a: action, p: prosody, h: historical labels, f: future labels.} \label{tab:h1} \input{Tables/h1} \end{table*} \begin{table*}[ht] \centering \caption{\textbf{H2} Results. *: the model performance has a statistically significant difference with the baseline model (\textit{P} = 256). $\wedge$: Assigning 1 GRU to the auxiliary task talkturn encoder yields a statistically significant difference with assigning 0 GRU.} \label{tab:h2} \input{Tables/h2} \end{table*} \subsection{Models, training, and hyperparameters tuning} \label{ssect:models} The overall loss is calculated as the weighted average across all seventeen tasks: (1) we picked a random weight for the primary task from the range [0.50, 0.99]; this ensures that the primary task has the majority weight. (2) For the remaining weight (1 - primary weight), we allocated it to the sixteen auxiliary tasks by: (a) random, (b) linearly-normalized mutual information, or (c) softmax-normalized mutual information. (a) is self-explanatory. As for (b) and (c), mutual information has been shown to be the best predictor -- compared to entropy and conditional entropy -- of whether the auxiliary task would be helpful \citep{bjerva2017will}. We computed the mutual information (vector $m$) of each auxiliary variable with the primary target variable \citep{kraskov2004estimating, ross2014mutual} using scikit-learn \citep{scikit-learn}. Then, we linearly-normalized or softmax-normalized $m$ to sum up to 1. Finally, we multiplied the normalized $m$ with the remaining weight from (2); this ensures that the primary weight and the sixteen auxiliary weights sum up to one.
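The weight-allocation step can be sketched as follows (illustrative; the mutual-information vector $m$ is passed in precomputed, and the helper name is ours):

```python
import numpy as np

def task_weights(m, primary_weight, scheme="linear", rng=None):
    """Split the remaining weight (1 - primary_weight) across the
    auxiliary tasks by (a) random draw, (b) linearly-normalized MI,
    or (c) softmax-normalized MI."""
    m = np.asarray(m, dtype=float)
    rng = rng or np.random.default_rng(0)
    raw = {"random": rng.random(len(m)),
           "linear": m,
           "softmax": np.exp(m - m.max())}[scheme]
    aux = (1.0 - primary_weight) * raw / raw.sum()
    return primary_weight, aux  # together these sum to one

p, aux = task_weights([0.20, 0.05, 0.10], primary_weight=0.7, scheme="linear")
```

With the `linear` scheme, each auxiliary weight is proportional to its mutual information with the primary target.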
(a), (b), and (c) have ten trials each during hyper-parameters tuning. Two variants of the HAN architectures are used (Figs. \ref{fig:han} and \ref{fig:rock}). For hypothesis testing, we bootstrapped confidence intervals (appendix \ref{appendix:bootstrap}). \section{Results and Discussion} The key takeaways are: (\textbf{H1}) The introduced set of auxiliary supervision improves primary task performance significantly in six of the eight primary tasks. (\textbf{H2}) Maximum learning capacity should be given to the primary branch as a default. (\textbf{H3}) HAN-ROCK degrades primary task performance significantly in only one of the eight tasks, and sometimes significantly improves it (in four of the eight tasks). (\textbf{H1}): To test \textbf{H1} (whether the introduced set of auxiliary supervision improves primary task performance), we first train the model with all sixteen auxiliary targets (from families: actions, prosody, historical, and future). Then, to differentiate the effect from the historical and future supervision, we set the loss weights from historical and future targets to be zero; effectively, there is only supervision from eight auxiliary targets (actions and prosody). Lastly, for the baseline model (no auxiliary supervision), we set the loss weights from all sixteen auxiliary targets to zero. \begin{table*}[ht] \centering \caption{\textbf{H3} Results. *: the model performance has a significant difference with the baseline.} \label{tab:h3} \input{Tables/h3} \end{table*} \begin{table*}[ht] \centering \caption{Class-wise classification F1 score on IEMOCAP. Baseline (challenger) refers to the HAN-ROCK architecture under the three hypotheses.
*: the challenger performance has a statistically significant difference with the baseline model.} \label{tab:low_resource} \input{Tables/low_resource} \end{table*} Given auxiliary supervision, the model significantly outperforms the baseline of not having auxiliary supervision in six out of the eight primary tasks (Table \ref{tab:h1}). The model with two auxiliary target families (actions and prosody) significantly outperformed the baseline model in five out of eight primary tasks. The addition of two auxiliary target families (historical and future labels) sometimes significantly improved primary task performance (valence in IEMOCAP), but it also sometimes significantly made it worse (activation and intensity in SEMAINE). This shows that the value of auxiliary tasks, and the associated risk of negative transfer, depends on the auxiliary task. (\textbf{H2}): To test \textbf{H2} (whether maximum learning capacity should be given to the primary branch), we let \textit{P} represent the number of GRUs assigned to the primary talkturn encoder, and \textit{A} represent the number of GRUs assigned to each of the sixteen auxiliary talkturn encoders. We constrained \textit{P} + \textit{A} to be equal to 257. During our experiments, we set \textit{P} to 1, 64, 128, 192, and 256. We set 256 as the baseline model because it is the maximum learning capacity we can give to the primary branch while giving 1 GRU ($=257-256$) to each of the sixteen auxiliary talkturn encoders. In all primary tasks, the baseline model of assigning 256 GRUs to the primary branch is not significantly outperformed by models that assigned 1, 64, 128, 192 GRUs (Table \ref{tab:h2}). Generally, the performance decreased as the number of GRUs assigned to the primary talkturn encoder decreased from 256 to 1. We observed significantly worse performance in two out of eight tasks -- in power and valence in SEMAINE.
Also, assigning 256 GRUs to the primary talkturn encoder and 1 to each of the sixteen auxiliary talkturn encoders yields the smallest model\footnote{As opposed to assigning 1 GRU to the primary talkturn encoder and 256 GRUs to each of the sixteen auxiliary encoders.}, and thus trains the fastest. Therefore, we recommend that the maximum capacity be given to the primary branch as a default. That said, the presence of an auxiliary branch is still important. The baseline of \textbf{H1} (no auxiliary supervision, Table \ref{tab:h1}) can be approximated\footnote{Same model architecture except that the loss weights of all auxiliary tasks are zero} as \textit{P}=$256 + 16 \times 1$, \textit{A}=0. We compared this configuration to the baseline in Table \ref{tab:h2}, and found that four out of eight primary tasks show significant improvements when the number of GRUs assigned to each auxiliary talkturn encoder changes from zero to \textit{one}. (\textbf{H3}): To test \textbf{H3} (whether auxiliary supervision should be given a lower hierarchy), we compare the results from the flat-MTL HAN architecture (baseline) against the HAN-ROCK architecture (Table \ref{tab:h3}). Placing auxiliary supervision at the lower hierarchy significantly improves primary task performance in four out of eight tasks. In only one out of eight tasks (power in SEMAINE), auxiliary supervision significantly degrades primary task performance. Future research on the fusion module could yield further improvements. \subsection{Class-wise Performance and SoTA} Generally, we found that all hypothesis effects are stronger in lower-resource labels (sad and anger, Table \ref{tab:low_resource}). We also present the performance of M3ER \citep{mittal2020m3er}, a previous state-of-the-art (SoTA) approach. We do not expect the performance of our text-only input to match the SoTA approach, which is confirmed in Table \ref{tab:low_resource}.
Because SoTA approaches \citep{zadeh2018memory, zadeh2018multi, hazarika2018conversational, hazarika2018icon, majumder2019dialoguernn, mittal2020m3er} fuse numerical vectors from the three modalities, their inputs are of a much higher granularity compared to our approach of describing the multimodal cues using discrete words. Although the text-based input is likely to constrain model performance, the multimodal transcription could help a human analyze the conversation; we could also overlay the model's perspective on the multimodal transcription to augment human analysis (see Appendix \ref{appendix:visualization}). \section{Conclusion} We proposed to re-use feature engineering pre-processing data as auxiliary tasks to improve performance and transfer learning. Three hypotheses were tested. The experimental results confirm \textbf{H1} -- Introducing our set of sixteen auxiliary supervisors resulted in better performance in most primary tasks. For \textbf{H2}, maximum learning capacity should be given to the primary branch. Lastly, for \textbf{H3}, placing the auxiliary supervision in a lower hierarchy is unlikely to hurt performance significantly, and it sometimes significantly improves performance. This is encouraging news for multi-modal conversational analysis systems as we have demonstrated how pre-processed data can be used \textit{twice} to improve performance, once as inputs, and again as auxiliary tasks. The first limitation of our paper is that the solutions are evaluated on eight tasks in the conversational analysis domain, and it is not clear if these would generalize outside of this domain. The second limitation is that we have evaluated on HAN, but not on other network architectures. A challenge to be addressed is the a priori selection of the auxiliary targets.
Future research could investigate target selection, including how to use a much larger range of auxiliary targets, how to decide the optimum number of auxiliary targets, and whether it is possible to perform these steps automatically. \newpage \bibliographystyle{acl_natbib}
\section{Introduction} \label{Sec:Introduction} Transversity quark distributions in the nucleon remain among the least known leading twist hadronic observables. This is mostly due to their chiral odd character which enforces their decoupling in most hard amplitudes. After the pioneering studies \cite{tra}, much work \cite{Barone} has been devoted to the exploration of many channels but experimental difficulties have challenged the most promising ones. On the other hand, tremendous progress has been recently witnessed in the QCD description of hard exclusive processes, in terms of generalized parton distributions (GPDs) describing the 3-dimensional content of hadrons. Numerous experimental and theoretical reviews~\cite{review} exist now on this quickly developing subject. It is not an overstatement to stress that this activity is very likely to shed light on the confinement dynamics of QCD through the detailed understanding of the quark and gluon structure of hadrons. Access to the chiral-odd transversity generalized parton distributions~\cite{defDiehl}, denoted $H_T$, $E_T$, $\tilde{H}_T$, $\tilde{E}_T$, has however turned out to be even more challenging~\cite{DGP} than the usual transversity distributions: one photon or one meson electroproduction leading twist amplitudes are insensitive to transversity GPDs. A possible way out is to consider higher twist contributions to these amplitudes \cite{liuti}, which however are beyond the factorization proofs and often plagued with end-point singularities. The strategy which we follow here, as initiated in Ref.~\cite{IPST,eps}, is to study the leading twist contribution to processes where more mesons are present in the final state; the hard scale which allows one to probe the short distance structure of the nucleon is now the invariant mass of the meson pair, related to the large transverse momentum transmitted to each final meson.
In the example developed previously~\cite{IPST,eps}, the process under study was the high energy photo (or electro) diffractive production of two vector mesons, the hard probe being the virtual ``Pomeron'' exchange (and the hard scale being the virtuality of this pomeron), in analogy with the virtual photon exchange occurring in the deep inelastic electroproduction of a meson. A similar strategy has also been advocated recently in Ref.~\cite{kumano} to enlarge the number of processes which could be used to extract information on chiral-even GPDs. The process we study here (\cite{DSPIN}) \begin{equation} \gamma + N \rightarrow \pi^+ + \rho^0_T + N'\,, \label{process1} \end{equation} is a priori sensitive to chiral-odd GPDs because of the chiral-odd character of the leading twist distribution amplitude of the transversely polarized $\rho$ meson. Its detailed study should not present major difficulties to modern detectors such as those developed for the 12 GeV upgrade of JLab or for the Compass experiment at CERN. The estimated rate depends of course strongly on the magnitude of the chiral-odd generalized parton distributions. Not much is known about them, but model calculations have been developed in~\cite{eps} for the ERBL part and in Refs.~\cite{Sco,Pasq,othermodels}; moreover, a few moments have been computed on the lattice~\cite{lattice}. To supplement this and use the recent phenomenological knowledge acquired on the transversity quark distributions through single inclusive deep inelastic data, we propose in this paper a parametrization of the (dominant) transversity GPD $H_T^q$ based on the concept of double distributions. Let us now explain how we factorize the amplitude of this process and what is the rationale for this extension of the existing factorization proofs in the framework of QCD. The basis of our argument is twofold.
\begin{figure}[h] \begin{center} \psfrag{z}{\begin{small} $z$ \end{small}} \psfrag{zb}{\raisebox{0cm}{ \begin{small}$\bar{z}$\end{small}} } \psfrag{gamma}{\raisebox{+.1cm}{ $\,\gamma$} } \psfrag{pi}{$\,\pi$} \psfrag{rho}{$\,\rho$} \psfrag{TH}{\hspace{-0.2cm} $T_H$} \psfrag{tp}{\raisebox{.5cm}{\begin{small} $t'$ \end{small}}} \psfrag{s}{\hspace{.6cm}\begin{small}$s$ \end{small}} \psfrag{Phi}{ \hspace{-0.3cm} $\phi$} \hspace{-0.7cm} $\begin{array}{cc} \hspace{.4cm} \raisebox{.7cm}{\includegraphics[width=7cm]{FigIntroa.eps}~~\hspace{0.6cm}} & \psfrag{piplus}{$\,\pi^+$} \psfrag{rhoT}{$\,\rho^0_T$} \psfrag{M}{\hspace{-0.3cm} \begin{small} $M^2_{\pi \rho}$ \end{small}} \psfrag{x1}{\hspace{-0.5cm} \begin{small} $x+\xi $ \end{small}} \psfrag{x2}{ \hspace{-0.2cm}\begin{small} $x-\xi $ \end{small}} \psfrag{N}{ \hspace{-0.4cm} $N$} \psfrag{GPD}{ \hspace{-0.6cm} $GPDs$} \psfrag{Np}{$N'$} \psfrag{t}{ \raisebox{-.1cm}{ \hspace{-0.5cm} \begin{small} $t$ \end{small} }} \includegraphics[width=7cm]{FigIntrob.eps} \\ \\ \hspace{-.5cm}(a) & \hspace{-1.5cm} (b) \end{array} $ \caption{a) Factorization of the amplitude for the process $\gamma + \pi \rightarrow \pi + \rho $ at large $s$ and fixed angle (i.e. fixed ratio $t'/s$); b) replacing one DA by a GPD leads to the factorization of the amplitude for $\gamma + N \rightarrow \pi + \rho +N'$ at large $M_{\pi\rho}^2$\,.} \label{feyndiag} \end{center} \end{figure} \begin{itemize} \item We use the now classical proof of the factorization of exclusive scattering at fixed angle and large energy~ \cite{LB}. The amplitude for the process $\gamma + \pi \rightarrow \pi + \rho $ is written as the convolution of mesonic distribution amplitudes and a hard scattering subprocess amplitude $\gamma +( q + \bar q) \rightarrow (q + \bar q) + (q + \bar q) $ with the meson states replaced by collinear quark-antiquark pairs. This is described in Fig.~\ref{feyndiag}a. 
The absence of any pinch singularities (which is the weak point of the proof for the generic case $A+B\to C+D$) has been demonstrated in the case of interest here~\cite{FSZ}. \item We extract from the factorization procedure of the deeply virtual Compton scattering amplitude near the forward region the right to replace in Fig.~\ref{feyndiag}a the lower left meson distribution amplitude by a $N \to N'$ GPD, and thus get Fig.~\ref{feyndiag}b. Indeed the same collinear factorization property underlies the validity of the leading twist approximation which either replaces the meson wave function by its distribution amplitude or describes the $N \to N'$ transition by its GPDs. A slight difference is that light cone fractions ($z, 1- z$) leaving the DA are positive, but the corresponding fractions ($x+\xi,\xi-x$) may be positive or negative in the case of the GPD. The calculation will show that this difference does not ruin the factorization property, at least at the order at which we are working here. \end{itemize} One may adopt another, instructive point of view based on an analogy with the timelike Compton scattering process \begin{equation} \gamma N \to \gamma^* N' \to \mu^+ \mu^- N' \,, \label{process2} \end{equation} where the lepton pair has a large squared invariant mass $Q^2$. This process has been thoroughly discussed~\cite{TCS} in the framework of the factorization of GPDs, and it has been proven that its amplitude is quite similar to the deeply virtual Compton scattering one, being dominated at lowest order by the handbag diagram amplitude convoluted with generalized quark distributions in the nucleon. There is no ambiguity in this case for the definition of the hard scale, the photon virtuality $Q$ being the only scale present. Although the meson pair in the process (\ref{process1}) has a more complex momentum flow, we feel justified in drawing on this analogy to ascribe the role of the hard scale to the meson pair invariant squared mass.
However, to describe the final state mesons by their distribution amplitudes (DAs), one needs in addition a large transverse momentum (and thus a large Mandelstam $t'$, see Fig.~\ref{feyndiag}b). Practically, we consider kinematics in which $|u'| \sim |t'| \sim |p_T^2| \sim M_{\pi \rho}^2 = (p_\pi +p_\rho)^2.$ We cannot prove, at the level of our study, that $M_{\pi \rho}^2$ is the most adequate hard scale. Indeed, applying a definite strategy to define a factorization scale requires at least a next-to-leading order (in the strong coupling) analysis \cite{BLM} and this is clearly a major work to be undertaken. For both points of view, in order for the factorization of a partonic amplitude to be valid, and the leading twist calculation to be sufficient, one should avoid the dangerous kinematical regions where a small momentum transfer is exchanged in the upper blob, namely small $t' =(p_\pi -p_\gamma)^2$ or small $u'=(p_\rho-p_\gamma)^2$, and the regions where strong interactions between two hadrons in the final state are non-perturbative, namely where the invariant masses, $M^2_{\pi N'} = (p_\pi +p_{N'})^2$, $M^2_{\rho N'} = (p_\rho +p_{N'})^2$ and $M^2_{\pi\rho}$, are not large enough to suppress final state interactions. We will discuss the necessary minimal cuts to be applied to data before any attempt to extract the chiral odd GPDs. However, although the ultimate proof of the validity of the factorization scheme proposed in this paper is based on comparison of the predictions with experimental data, on the theoretical side it requires going beyond the Born approximation considered here, which is beyond the scope of the present work. Our paper is organized as follows. In section \ref{Sec:Kinematics}, we clarify the kinematics we are interested in and set our conventions. Then, in section \ref{Sec:Scattering_Amplitude}, we describe the scattering amplitude of the process under study in the framework of QCD factorization.
Section \ref{Sec:DD} is devoted to the presentation of our model chiral-odd GPDs. Section \ref{Sec:Cross Section and Rates} presents our results for the unpolarized differential cross section in the kinematics of two specific experiments : quasi-real photon beams at JLab where $S_{\gamma N} \sim$ 14-20 GeV$^2$ and Compass at CERN where $S_{\gamma N} \sim$ 200 GeV$^2$. As a final remark in this introduction, let us stress that our discussion applies as well to the case of electroproduction where a moderate virtuality of the initial photon may help to access the perturbative domain with a lower value of the hard scale $M_{\pi\rho}$. \section{Kinematics} \label{Sec:Kinematics} We study the exclusive photoproduction of a transversely polarized vector meson and a pion on a polarized or unpolarized proton target \begin{equation} \gamma(q) + N(p_1,\lambda) \rightarrow \pi(p_\pi) + \rho_T(p_\rho) + N'(p_2,\lambda')\,, \label{process} \end{equation} in the kinematical regime of large invariant mass $M_{\pi\rho}$ of the final meson pair and small momentum transfer $t =(p_1-p_2)^2$ between the initial and the final nucleons. Roughly speaking, these kinematics mean a moderate to large, and approximately opposite, transverse momentum of each meson. Our conventions are the following. We decompose momenta on a Sudakov basis as \begin{equation} \label{sudakov1} k^\mu = a \, n^\mu + b \, p^\mu + k_\bot^\mu \,, \end{equation} with $p$ and $n$ the light-cone vectors \begin{equation} \label{sudakov2} p^\mu = \frac{\sqrt{s}}{2}(1,0,0,1)\qquad n^\mu = \frac{\sqrt{s}}{2}(1,0,0,-1) \qquad p\cdot n = \frac{s}{2}\,, \end{equation} and \begin{equation} \label{sudakov3} k_\bot^\mu = (0,k^x,k^y,0) \,, \qquad k_\bot^2 = -\vec{k}_t^2\,. 
\end{equation} The particle momenta read \begin{equation} \label{impini} p_1^\mu = (1+\xi)\,p^\mu + \frac{M^2}{s(1+\xi)}\,n^\mu~, \quad p_2^\mu = (1-\xi)\,p^\mu + \frac{M^2+\vec{\Delta}^2_t}{s(1-\xi)}n^\mu + \Delta^\mu_\bot\,, \quad q^\mu = n^\mu ~, \end{equation} \begin{eqnarray} \label{impfinc} p_\pi^\mu &=& \alpha \, n^\mu + \frac{(\vec{p}_t-\vec\Delta_t/2)^2+m^2_\pi}{\alpha s}\,p^\mu + p_\bot^\mu -\frac{\Delta^\mu_\bot}{2}~,\nonumber \\ p_\rho^\mu &=& \alpha_\rho \, n^\mu + \frac{(\vec{p}_t+\vec\Delta_t/2)^2+m^2_\rho}{\alpha_\rho s}\,p^\mu - p_\bot^\mu-\frac{\Delta^\mu_\bot}{2}\,, \end{eqnarray} with $\bar{\alpha} = 1 - \alpha$ and $M$, $m_\pi$, $m_\rho$ the masses of the nucleon, the pion and the $\rho$ meson. From these kinematical relations it follows \begin{equation} \label{2xi} 2 \, \xi = \frac{(\vec{p}_t -\frac{1}2 \vec{\Delta}_t)^2 + m_\pi^2}{s \, \alpha} + \frac{(\vec{p}_t +\frac{1}2 \vec{\Delta}_t)^2 + m_\rho^2}{s \, \alpha_\rho} \end{equation} and \begin{equation} \label{exp_alpha} 1-\alpha-\alpha_\rho = \frac{2 \, \xi \, M^2}{s \, (1-\xi^2)} + \frac{\vec{\Delta}_t^2}{s \, (1-\xi)}\,. \end{equation} The total center-of-mass energy squared of the $\gamma$-N system is \begin{equation} \label{energysquared} S_{\gamma N} = (q + p_1)^2 = (1+\xi)s + M^2\,. \end{equation} $\xi$ is the skewedness parameter which can be written in terms of the $\tau$ variable used in lepton pair production, as \begin{equation} \label{skewness} \xi = \frac{\tau}{2-\tau} ~~~~,~~~~\tau = \frac{M^2_{\pi\rho}-t}{S_{\gamma N}-M^2}\,. \end{equation} On the nucleon side, the transferred squared momentum is \begin{equation} \label{transfmom} t = (p_2 - p_1)^2 = -\frac{1+\xi}{1-\xi}\vec{\Delta}_t^2 -\frac{4\xi^2M^2}{1-\xi^2}\,. 
\end{equation} The other various Mandelstam invariants read \begin{eqnarray} \label{M_pi_rho} s'&=& ~(p_\pi +p_\rho)^2 = ~M_{\pi\rho}^2= 2 \xi \, s \left(1 - \frac{ 2 \, \xi \, M^2}{s (1-\xi^2)} \right) - \vec{\Delta}_t^2 \frac{1+\xi}{1-\xi}\,, \\ \label{t'} - t'&=& -(p_\pi -q)^2 =~\frac{(\vec p_t-\vec\Delta_t/2)^2+\bar\alpha\, m_\pi^2}{\alpha} \;,\\ \label{u'} - u'&=&- (p_\rho-q)^2= ~\frac{(\vec p_t+\vec\Delta_t/2)^2+(1-\alpha_\rho)\, m_\rho^2}{\alpha_\rho} \; , \end{eqnarray} and \begin{eqnarray} \label{M_pi_N} M_{\pi N'}^2 = s\left(1-\xi+ \frac{(\vec{p}_t-\vec{\Delta}_t/2)^2+ m_\pi^2}{s\, \alpha}\right) \left(\alpha + \frac{M^2 + \vec{\Delta}_t^2}{s \, (1-\xi)} \right) - \left(\vec{p}_t + \frac{1}2 \vec{\Delta}_t \right)^2\,, \end{eqnarray} \begin{eqnarray} \label{M_rho_N} M_{\rho N'}^2 = s\left(1-\xi+ \frac{(\vec{p}_t+\vec{\Delta}_t/2)^2+ m_\rho^2}{s\, \alpha_\rho}\right) \left(\alpha_\rho + \frac{M^2 + \vec{\Delta}_t^2}{s \, (1-\xi)} \right) - \left(\vec{p}_t - \frac{1}2 \vec{\Delta}_t \right)^2\,. \end{eqnarray} The hard scale $M^2_{\pi\rho}$ is the invariant squared mass of the ($\pi^+$, $\rho^0$) system. The leading twist calculation of the hard part only involves the approximated kinematics in the generalized Bjorken limit: neglecting $\vec\Delta_\bot$ in front of $\vec p_\bot$ as well as hadronic masses, it amounts to \begin{eqnarray} \label{skewness2} M^2_{\pi\rho} &\approx & \frac{\vec{p}_t^2}{\alpha\bar{\alpha}} \, \\ \alpha_\rho &\approx& 1-\alpha \equiv \bar{\alpha} \,,\\ \tau &\approx & \frac{M^2_{\pi\rho}}{S_{\gamma N}-M^2}\,,\\ -t' & \approx & \bar\alpha\, M_{\pi\rho}^2 \quad \mbox{ and } \quad -u' \approx \alpha\, M_{\pi\rho}^2 \,. \end{eqnarray} The typical cuts that one should apply are $-t', -u' > \Lambda^2$ and $M_{\pi N'}^2 = (p_\pi +p_{N'})^2 > M_R^2$, $M_{\rho N'}^2= (p_\rho +p_{N'})^2 > M_R^2$ where $\Lambda \gg \Lambda_{QCD}$ and $M_R$ is a typical baryonic resonance mass. 
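As a purely illustrative numerical check of the approximate relations above (our own sketch, with arbitrary example values for $\alpha$ and $p_t$; it is not part of the analysis):

```python
import numpy as np

# Collinear-limit relations with hadron masses and Delta_t neglected.
alpha, p_t = 0.4, 1.5                 # example momentum fraction and p_t [GeV]
abar = 1.0 - alpha
M2 = p_t**2 / (alpha * abar)          # M_pi-rho^2 ~ p_t^2 / (alpha * abar)
t_p = -abar * M2                      # -t' ~ abar * M_pi-rho^2
u_p = -alpha * M2                     # -u' ~ alpha * M_pi-rho^2
assert np.isclose(t_p + u_p, -M2)     # t' + u' = -M_pi-rho^2 in this limit

# The cuts -t', -u' > Lambda^2 translate into a window for alpha:
Lam2 = 1.0                            # Lambda^2 [GeV^2]
alpha_min, alpha_max = Lam2 / M2, 1.0 - Lam2 / M2
assert alpha_min < alpha < alpha_max
```

With these example values, $M^2_{\pi\rho} \approx 9.4$ GeV$^2$ and the cuts keep $0.11 \lesssim \alpha \lesssim 0.89$.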
These cuts amount to restrictions on $\alpha$ and $\bar\alpha$ at fixed $M_{\pi\rho}^2$, which can be translated in terms of $u'$ at fixed $M_{\pi\rho}^2$ and $t$. These conditions boil down to a safe kinematical domain $(-u')_{min} \leq -u' \leq (-u')_{max} $ which we will discuss in more detail in Section \ref{Sec:Cross Section and Rates}. In the following, we will choose as independent kinematical variables $t, u', M^2_{\pi \rho}\,.$ \section{The Scattering Amplitude} \label{Sec:Scattering_Amplitude} We now concentrate on the specific process \begin{equation} \gamma(q) + p(p_1,\lambda) \rightarrow \pi^+(p_\pi) + \rho^0_T(p_\rho) + n(p_2,\lambda')\,. \label{process_pn} \end{equation} Let us start by recalling the non-perturbative quantities which enter the scattering amplitude of our process (\ref{process_pn}). The transversity generalized parton distribution of a parton $q$ (here $q = u,\ d$) in the nucleon target at zero momentum transfer is defined by~\cite{defDiehl} \begin{eqnarray} &&<n(p_2,\lambda')|\, \bar{d}\left(-\frac{y}{2}\right)\sigma^{+j}\gamma^5 u \left(\frac{y}{2}\right)|p(p_1,\lambda)> \nonumber \\ &&= \bar{u}(p_2,\lambda')\,\sigma^{+j}\gamma^5u(p_1,\lambda)\int_{-1}^1dx\ e^{-\frac{i}{2}x(p_1^++p_2^+)y^-}H_T^{ud}(x,\xi,t)\,, \label{defGPD} \end{eqnarray} where $\lambda$ and $\lambda'$ are the light-cone helicities of the nucleons $p$ and $n$. Here $H^{ud}_T$ is the flavor non-diagonal GPD \cite{Mankiewicz:1997aa} which can be expressed in terms of diagonal ones as \begin{equation} \label{defHud} H^{ud}_T = H^{u}_T-H^{d}_T\,.
\end{equation} The chiral-odd light-cone DA for the transversely polarized vector meson $\rho^0_T$ is defined, in leading twist 2, by the matrix element~\cite{defrho} \begin{equation} \langle 0|\bar{u}(0)\sigma^{\mu\nu}u(x)|\rho^0(p,\epsilon_\pm) \rangle = \frac{i}{\sqrt{2}}(\epsilon^\mu_{\pm}(p)\, p^\nu - \epsilon^\nu_{\pm}(p)\, p^\mu)f_\rho^\bot\int_0^1du\ e^{-iup\cdot x}\ \phi_\bot(u), \label{defDArho} \end{equation} where $\epsilon^\mu_{\pm}(p_\rho)$ is the $\rho$-meson transverse polarization and with $f_\rho^\bot$ = 160 MeV.\\ The light-cone DA for the pion $\pi^+$ is defined, in leading twist 2, by the matrix element (see for example \cite{defpi}) \begin{equation} \label{defDApion} \langle 0|\bar{d}(z)\gamma^\mu\gamma^5u(-z)|\pi^+(p) \rangle = ip^\mu f_\pi\int_0^1du\ e^{-i(2u-1)p\cdot z}\ \phi_\pi(u), \end{equation} with $f_\pi$ = 131 MeV. In our calculations, we use the asymptotic form of these DAs: $\phi_\pi(u) = \phi_\bot(u) = 6\,u\bar{u}$. We now turn to the computation of the scattering amplitude of the process (\ref{process_pn}). As the order of magnitude of the hard scale is greater than 1 GeV$^2$, it is possible to study it in the framework of QCD factorization, where the invariant squared mass of the ($\pi^+$, $\rho^0$) system $M^2_{\pi\rho}$ is taken as the factorization scale. The amplitude gets contributions from each of the four twist 2 chiral-odd GPDs $E_T\,, H_T\,, \tilde{E}_T\,, \tilde{H}_T$. However, all of them but $H_T$ are accompanied by kinematical factors which vanish at $\vec{\Delta}_t=0\,.$ The contribution proportional to $H_T$ is thus dominant in the small-$t$ domain in which we are interested. We will thus restrict our study to this contribution, so that the whole $t$-dependence will come from the $t$-dependence of $H_T$, as we model in Sec.~\ref{Sec:DD}. Note that within the collinear framework, the hard part is computed with $\vec{\Delta}_t=0$.
Thus we write the scattering amplitude of the process (\ref{process_pn}) in the factorized form : \begin{equation} \label{AmplitudeFactorized} \mathcal{A}(t,M^2_{\pi\rho},p_T) =\frac{1}{\sqrt{2}} \int_{-1}^1dx\int_0^1dv\int_0^1dz\ (T^u(x,v,z)-T^d(x,v,z)) \, H^{ud}_T(x,\xi,t)\Phi_\pi(z)\Phi_\bot(v)\,, \end{equation} where $T^u$ and $T^d$ are the hard parts of the amplitude where the photon couples respectively to a $u$-quark (Fig.~\ref{feyndiageued}a) and to a $d$-quark (Fig.~\ref{feyndiageued}b). This decomposition, with the $\frac{1}{\sqrt{2}}$ prefactor, takes already into account that the $\rho^0$-meson is described as $\frac{u\bar{u}-d\bar{d}}{\sqrt{2}}$. \begin{figure}[!h] $\begin{array}{cc} \psfrag{fpi}{$\,\phi_\pi$} \psfrag{fro}{$\,\phi_\rho$} \psfrag{z}{\begin{small} $z$ \end{small}} \psfrag{zb}{\raisebox{-.1cm}{ \begin{small}$\hspace{-.3cm}-\bar{z}$\end{small}} } \psfrag{v}{\begin{small} $v$ \end{small}} \psfrag{vb}{\raisebox{-.1cm}{ \begin{small}$\hspace{-.3cm}-\bar{v}$\end{small}} } \psfrag{gamma}{$\,\gamma$} \psfrag{pi}{$\,\pi^+$} \psfrag{rho}{$\,\rho^0_T$} \psfrag{N}{$N$} \psfrag{Np}{$\,N'$} \psfrag{H}{\hspace{-0.2cm} $H^{ud}_T(x,\xi,t_{min})$} \psfrag{hard}{\hspace{-0.2cm} $H^{ud}_T(x,\xi,t_{min})$} \psfrag{p1}{\begin{small} $p_1$ \end{small}} \psfrag{p2}{\begin{small} $p_2$ \end{small}} \psfrag{p1p}{\hspace{-0.4cm} \begin{small} $p_1'=(x+\xi) p$ \end{small}} \psfrag{p2p}{\hspace{-0.2cm} \begin{small} $p_2'=(x-\xi) p$ \end{small}} \psfrag{q}{\begin{small} $q$ \end{small}} \psfrag{ppi}{\begin{small} $p_\pi$\end{small}} \psfrag{prho}{\begin{small} $p_\rho$\end{small}} \includegraphics[width=7.8cm]{fig2a.eps}&\hspace{-0.5cm} \psfrag{fpi}{$\,\phi_\pi$} \psfrag{fro}{$\,\phi_\rho$} \psfrag{z}{\begin{small} $z$ \end{small}} \psfrag{zb}{\raisebox{-.1cm}{ \begin{small}$\hspace{-.3cm}-\bar{z}$\end{small}} } \psfrag{v}{\raisebox{-.2cm}{\begin{small} $\hspace{-.2cm}-\bar{v}$ \end{small}}} \psfrag{vb}{\raisebox{-.1cm}{ \begin{small}$v$\end{small}} } 
\psfrag{gamma}{$\,\,\,\gamma$} \psfrag{pi}{$\,\pi^+$} \psfrag{rho}{$\,\rho^0_T$} \psfrag{N}{$N$} \psfrag{Np}{$\,N'$} \psfrag{H}{\hspace{-0.2cm} $H^{ud}_T(x,\xi,t_{min})$} \psfrag{p1}{\begin{small} $p_1$ \end{small}} \psfrag{p2}{\begin{small} $p_2$ \end{small}} \psfrag{p1p}{\hspace{-0.4cm} \begin{small} $p_1'=(x+\xi) p$ \end{small}} \psfrag{p2p}{\hspace{-0.2cm} \begin{small} $p_2'=(x-\xi) p$ \end{small}} \psfrag{q}{\begin{small} $q$ \end{small}} \psfrag{ppi}{\begin{small} $p_\pi$\end{small}} \psfrag{prho}{\begin{small} $p_\rho$\end{small}} \includegraphics[width=7.8cm]{fig2b-1.eps}\\ \\ \hspace{-.5cm}(a) & \hspace{-.4cm}(b) \\ \end{array}$ \caption{Two representative diagrams with a photon $u$-quark coupling ($a$) and with a photon $d$-quark coupling ($b$).} \label{feyndiageued} \end{figure} \begin{figure}[!h] \begin{center} \psfrag{fpi}{$\,\phi_\pi$} \psfrag{fro}{$\,\phi_\rho$} \psfrag{z}{\begin{small} $z$ \end{small}} \psfrag{zb}{\raisebox{-.1cm}{ \begin{small}$\hspace{-.3cm}-\bar{z}$\end{small}} } \psfrag{v}{\begin{small} $v$ \end{small}} \psfrag{vb}{\raisebox{-.1cm}{ \begin{small}$\hspace{-.3cm}-\bar{v}$\end{small}} } \psfrag{gamma}{$\,\gamma$} \psfrag{pi}{$\,\pi^+$} \psfrag{rho}{$\,\rho^0_T$} \psfrag{N}{$N$} \psfrag{Np}{$\,N'$} \psfrag{H}{\hspace{-0.2cm} $H^{ud}_T(x,\xi,t_{min})$} \psfrag{p1}{\begin{small} $p_1$ \end{small}} \psfrag{p2}{\begin{small} $p_2$ \end{small}} \psfrag{p1p}{\hspace{-0.1cm} \begin{small} $p_1'=(x+\xi) p$ \end{small}} \psfrag{p2p}{\hspace{-0.2cm} \begin{small} $p_2'=(x-\xi) p$ \end{small}} \psfrag{q}{\begin{small} $q$ \end{small}} \psfrag{ppi}{\begin{small} $p_\pi$\end{small}} \psfrag{prho}{\begin{small} $p_\rho$\end{small}} \includegraphics[width=10cm]{fig3.eps} \caption{Representative diagram with a 3 gluon vertex.} \label{feyndiageu3g} \end{center} \end{figure} For this process, one has two kinds of Feynman diagrams : some without (Fig.~\ref{feyndiageued}) and some with a 3-gluon vertex (Fig.~\ref{feyndiageu3g}). 
In both cases, an interesting symmetry allows one to deduce the contribution of some diagrams from other ones. This is exemplified in Fig. \ref{feyndiageued}. The transformation rules \begin{equation} \label{symrules} x\ \to\ -x \qquad u\ \to\ \bar{u} \qquad v\ \to\ \bar{v} \qquad e_{u}\ \to\ e_{d} \end{equation} relate the hard amplitude of Fig.~\ref{feyndiageued}b to the one of Fig.~\ref{feyndiageued}a. This reduces our task to the calculation of half of the 62 diagrams involved in the process.\\ Let us sketch the main steps of the calculation on the specific example of the diagram of Fig.~\ref{feyndiageued}a, in the Feynman gauge. Using the notation $\slashchar{k} = k_\mu\gamma^\mu$, the amplitude reads : \begin{eqnarray} \label{hp2a_u} T_{2a}^u(x,v,z) &=& Tr[(if_\pi\slashchar{p}_\pi\gamma^5)(-ig\gamma^\mu)\slashchar{F}(p'_2+\bar{v}p_\rho+zp_\pi)(ie_u\slashchar{\epsilon}(q))\slashchar{F}(p'_1-vp_\rho-\bar{z}p_\pi) \nonumber \\ &&(-ig\gamma^\nu)(\sigma^{\alpha\beta}\gamma^5)(-ig\gamma_\mu)(2i\sigma^{\sigma^*_\rho p_\rho}f_\rho^\bot)(-ig\gamma_\nu)] \nonumber \\ &\times&Tr_C[t^at^bt^at^b] \frac{1}{(8N_c)^2} \frac{1}{4N_c} G(p'_2+\bar{v}p_\rho)G(vp_\rho+\bar{z}p_\pi)\,, \end{eqnarray} where the fermion propagator is (we set all quark masses to zero) : \begin{equation} \label{fermprop} i\slashchar{F}(k) = \frac{i\slashchar{k}}{k^2+i\epsilon}\,, \end{equation} and \begin{equation} \label{gluonprop} -i g^{\mu \nu}G(k) = \frac{-i g^{\mu \nu}}{k^2+i\epsilon} \end{equation} is the gluonic propagator. $Tr_C$ is the trace over color indices and the factors $\frac{1}{(8N_C)^2}$ and $\frac{1}{4N_C}$ come from Fierz decompositions.
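The color factor $Tr_C[t^at^bt^at^b]$ (summed over $a$ and $b$) can be cross-checked numerically: for $SU(N_c)$ it equals $-(N_c^2-1)/(4N_c)$, i.e. $-2/3$ for $N_c=3$. A minimal sketch, building $t^a = \lambda^a/2$ from the standard Gell-Mann matrices (the matrices themselves are textbook input, not taken from the text above):

```python
import math

I = 1j
s = 1 / math.sqrt(3)
# the eight Gell-Mann matrices lambda^a, as nested 3x3 lists
GELL_MANN = [
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -I, 0], [I, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -I], [0, 0, 0], [I, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -I], [0, I, 0]],
    [[s, 0, 0], [0, s, 0], [0, 0, -2 * s]],
]
# color generators t^a = lambda^a / 2
T = [[[x / 2 for x in row] for row in lam] for lam in GELL_MANN]

def mat_mul(a, b):
    # 3x3 complex matrix product
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def trace(a):
    return sum(a[k][k] for k in range(3))

# Tr_C[t^a t^b t^a t^b], summed over a and b; expected -(N_c^2-1)/(4 N_c) = -2/3
color_trace = sum(
    trace(mat_mul(mat_mul(T[a], T[b]), mat_mul(T[a], T[b])))
    for a in range(8) for b in range(8)
)
print(color_trace)
```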
The corresponding expression for the diagram \ref{feyndiageued}b \begin{eqnarray} \label{hp2a_d} T_{2b}^d(x,v,z) &=& Tr[(if_\pi\slashchar{p}_\pi\gamma^5)(-ig\gamma^\mu)(2i\sigma^{\sigma^*_\rho p_\rho}f_\rho^\bot)(-ig\gamma^\nu)(\sigma^{\alpha\beta}\gamma^5) \nonumber \\ &&(-ig\gamma_\mu)\slashchar{F}(p'_2+\bar{v}p_\rho+zp_\pi)(ie_d\slashchar{\epsilon}(q))\slashchar{F}(p'_1-vp_\rho-\bar{z}p_\pi)(-ig\gamma_\nu)] \nonumber \\ &\times&Tr_C[t^at^bt^at^b] \frac{1}{(8N_c)^2} \frac{1}{4N_c} G(-p'_1+vp_\rho)G(-\bar{v}p_\rho-zp_\pi)\nonumber \\ &=& \frac{iC_Fe_df^\bot_\rho f_\pi g^4\bar{z}}{32N_C^3s^2\bar{\alpha}[x-\xi-i\epsilon][x+\xi-i\epsilon]} \nonumber \\ &\times& \frac{\left[(\vec{N}_t\cdot\vec{\sigma}^*_{\rho t})(\vec{p}_t\cdot\vec{\epsilon}_{\gamma t})-(\vec{N}_t\cdot\vec{p}_t)(\vec{\epsilon}_{\gamma t}\cdot\vec{\sigma}^*_{\rho t}) + \frac{2\alpha\xi-\bar{\alpha}}{2\alpha\xi+\bar{\alpha}}(\vec{N}_t\cdot\vec{\epsilon}_{\gamma t})(\vec{p}_t\cdot\vec{\sigma}^*_{\rho t})\right]}{zv\bar{v}[(\alpha\bar{z}+\bar{\alpha}v)(x+\xi-i\epsilon)-2\xi\bar{z}v]} \end{eqnarray} justifies the symmetry we quoted a few lines above. Thus the hard part of the diagram \ref{feyndiageued}a is proportional to \begin{equation} \label{hp2a} T_{2a}^u \propto \frac{1}{[(\slashchar{p}'_2+\bar{v}\slashchar{p}_\rho+z\slashchar{p}_\pi)^2+i\epsilon][(\slashchar{p}'_1-v\slashchar{p}_\rho-\bar{z}\slashchar{p}_\pi)^2+i\epsilon][(p'_2+\bar{v}p_\rho)^2+i\epsilon][(vp_\rho+\bar{z}p_\pi)^2+i\epsilon]} \end{equation} and the $i\epsilon$ prescription in the 4 propagators implies that the scattering amplitude acquires both a real and an imaginary part. Integrations over $v$ and $z$ have been done analytically whereas numerical methods are used for the integration over $x$. The first integration is rather straightforward.
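The statement that the $i\epsilon$ prescription generates both a real and an imaginary part can be illustrated with the Sokhotski--Plemelj relation, $1/(x-i\epsilon) \to \mathrm{PV}(1/x) + i\pi\delta(x)$. The sketch below checks this numerically for a Gaussian test function; the test function and the values of $\epsilon$ and of the grid step are arbitrary choices.

```python
import math

def f(x):
    # smooth, even test function with f(0) = 1
    return math.exp(-x * x)

eps = 1e-2   # the i*epsilon regulator
h = 1e-3     # grid step; must be well below eps to resolve the peak
xs = [-5.0 + h * k for k in range(int(round(10 / h)) + 1)]

# integral of f(x) / (x - i*eps) = f(x) * (x + i*eps) / (x^2 + eps^2)
re_part = h * sum(f(x) * x / (x * x + eps * eps) for x in xs)
im_part = h * sum(f(x) * eps / (x * x + eps * eps) for x in xs)

# For an even f the principal value vanishes by symmetry, while the
# imaginary part tends to pi * f(0) as eps -> 0.
print(re_part, im_part, math.pi * f(0))
```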
The second integration is more involved because of the presence of $i \epsilon$ terms inside the integrand, in particular as arguments of logarithmic functions, which lead in the final result to the appearance of imaginary parts. For example, the integration over $z$ of $T_{2b}^d$ requires the evaluation of integrals of the type \begin{equation} \label{diffint} \int_0^1 dz\ \frac{1}{2\,\xi\, z-\bar{\alpha}\,X}\log\left[\frac{\alpha Xz}{\bar{\alpha}\,X+z\,(\alpha X-2\xi)}\right]\,, \end{equation} where $X = x-\xi+i\epsilon$ contains all the dependence of the integrand on $i \epsilon$.\\ Since the $x$-dependence of the propagators has been rewritten in terms of the new variable $X$, this integral can be calculated analytically without any problem. The expression (\ref{diffint}) then gives \begin{equation} \label{diffintresult} \frac{\pi^2}{12\xi}+\frac{1}{2\xi}\mathrm{Li}_2\left[\left(1-\frac{2\xi}{\alpha X}\right)\left(1-\frac{2\xi}{\bar{\alpha} X}\right)\right]-\frac{1}{2\xi}\mathrm{Li}_2\left[1-\frac{2\xi}{\alpha X}\right]-\frac{1}{2\xi}\mathrm{Li}_2\left[1-\frac{2\xi}{\bar{\alpha} X}\right]\,.
\end{equation} Lorentz invariance and the linearity of the amplitude with respect to the polarization vectors and with respect to the nucleons' spinors allow us to write the amplitude as : \begin{eqnarray} \mathcal{A} &=& (\epsilon^{*}_{\pm}(p_\rho)\cdot N^\bot_{\lambda_1\lambda_2})(\epsilon_{\gamma \bot}\cdot p_T)A' + (\epsilon^{*}_{\pm}(p_\rho)\cdot \epsilon_{\gamma \bot})(N^\bot_{\lambda_1\lambda_2}\cdot p_T)B' \nonumber\\ &+& (\epsilon^{*}_{\pm}(p_\rho)\cdot p_T)(N^\bot_{\lambda_1\lambda_2}\cdot \epsilon_{\gamma \bot})C' + (\epsilon^{*}_{\pm}(p_\rho)\cdot p_T)(N^\bot_{\lambda_1\lambda_2}\cdot p_T)(\epsilon_{\gamma \bot}\cdot p_T)D' \nonumber \\ &+& (\epsilon^{*}_{\pm}(p_\rho)\cdot p)(N^\bot_{\lambda_1\lambda_2}\cdot \epsilon_{\gamma \bot})E' + (\epsilon^{*}_{\pm}(p_\rho)\cdot p)(N^\bot_{\lambda_1\lambda_2}\cdot p_T)(\epsilon_{\gamma \bot}\cdot p_T)F', \end{eqnarray} where $A'$, $B'$, $C'$, $D'$, $E'$, $F'$ are scalar functions of $s$, $\xi$, $\alpha$ and $M^2_{\pi\rho}$, and the transverse polarization of $\rho$-meson \begin{equation} \label{polarho} \epsilon^{\mu}_{\pm}(p_\rho) = \left(\frac{\vec{p}_\rho\cdot\vec{e}_\pm}{m_\rho}\ ,\ \vec{e}_\pm + \frac{\vec{p}_\rho\cdot\vec{e}_\pm}{m_\rho(E_\rho + m_\rho)}\vec{p}_\rho \right) \end{equation} is expressed in terms of $\vec{e}_{\pm}=-\frac{1}{\sqrt{2}}(\pm 1,i,0)$. 
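As a sanity check of Eq. (\ref{polarho}), the polarization vector should satisfy $p_\rho \cdot \epsilon_\pm = 0$ and $\epsilon_\pm \cdot \epsilon^*_\pm = -1$ with the mostly-minus metric. A minimal numerical sketch; the 3-momentum below is an arbitrary illustrative choice.

```python
import math

M_RHO = 0.77526  # rho meson mass in GeV (PDG value)

def eps_pm(p3, sign):
    # Eq. (polarho) with e_{+-} = -(1/sqrt(2)) (+-1, i, 0)
    e = [-sign / math.sqrt(2), -1j / math.sqrt(2), 0.0]
    d = sum(pk * ek for pk, ek in zip(p3, e))              # vec(p_rho) . vec(e)
    E = math.sqrt(M_RHO ** 2 + sum(pk * pk for pk in p3))  # on-shell energy
    space = [ek + d * pk / (M_RHO * (E + M_RHO)) for ek, pk in zip(e, p3)]
    return [d / M_RHO] + space, [E] + list(p3)

def mink(a, b):
    # Minkowski product with metric (+, -, -, -)
    return a[0] * b[0] - sum(ak * bk for ak, bk in zip(a[1:], b[1:]))

p3 = [0.3, -0.2, 1.5]  # illustrative rho 3-momentum in GeV
eps, p4 = eps_pm(p3, +1)
eps_star = [c.conjugate() for c in eps]

print(mink(p4, eps))        # transversality: vanishes
print(mink(eps, eps_star))  # normalization: equals -1
```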
$\epsilon_{\gamma\bot}^\mu$ is the transverse polarization of the on-shell photon and \begin{equation} \label{polnucleon} N^{\bot\mu}_{\lambda_1\lambda_2} = \frac{2i}{p\cdot n}g_\bot^{\mu\nu}\bar{u}(p_2,\lambda_2)\slashchar{n}\gamma_\nu\gamma^5u(p_1,\lambda_1) \end{equation} is the spinor-dependent part which expresses the nucleon helicity flip with $g_\bot^{\mu\nu} = \mathrm{diag}(0,-1,-1,0)$.\\ To be more precise, the expressions of this $2-$dimensional transverse vector read \begin{eqnarray} \label{polprocess} N_{+\hat{x},+\hat{x}}^{\bot\mu} &=& -4i\sqrt{1-\xi^2}(0,1,0,0) \qquad N_{-\hat{x},+\hat{x}}^{\bot\mu} = 4\sqrt{1-\xi^2}(0,0,1,0) \\ N_{+\hat{x},-\hat{x}}^{\bot\mu} &=& -4\sqrt{1-\xi^2}(0,0,1,0) \qquad N_{-\hat{x},-\hat{x}}^{\bot\mu} = 4i\sqrt{1-\xi^2}(0,1,0,0)\,, \end{eqnarray} assuming that these nucleons are polarized along the $\hat{x}$ axis.\\ Since the DA of $\rho^0_T$ (Eq. (\ref{defDArho})) introduces the factor $\epsilon^\mu_{\pm}(p_\rho)\, p_\rho^\nu - \epsilon^\nu_{\pm}(p_\rho)\, p_\rho^\mu$, any term proportional to $p_\rho^\mu$ in its polarization does not contribute to the amplitude. One may then replace \begin{eqnarray} \label{polarho2} \epsilon^{\mu}_{\pm}(p_\rho) &\Rightarrow& 2 \bar \alpha\frac{ \vec p_t \cdot \vec e_\pm }{\bar \alpha^2 s + \vec p_t^{\,2}} \left( p^\mu + n^\mu \right) + (0,\vec e_\pm) \nonumber \\ &\Rightarrow& 2\bar{\alpha}\frac{\vec{p}_t\cdot\vec{e}_\pm}{\bar{\alpha}^2s + \vec{p}_t^{\,2}}\left[1 - \frac{\vec{p}_t^{\,2}}{\bar{\alpha}^2s}\right]p^\mu + 2\frac{\vec{p}_t\cdot\vec{e}_\pm}{\bar{\alpha}^2s + \vec{p}_t^{\,2}} p_T^\mu + (0,\vec{e}_\pm).
\end{eqnarray} Consequently, the amplitude of this process can be simplified as \begin{eqnarray} \label{ampl} \mathcal{A} &=& (\vec{N}_t\cdot \vec{e}^{\,*}_\pm)(\vec{p}_t\cdot \vec{\epsilon}_{\gamma t}) A + (\vec{N}_t\cdot\vec{\epsilon}_{\gamma t})(\vec{p}_t\cdot\vec{e}^{\,*}_\pm) B \nonumber \\ &+& (\vec{N}_t\cdot\vec{p}_t) (\vec{\epsilon}_{\gamma t}\cdot\vec{e}^{\,*}_\pm) C + (\vec{N}_t\cdot\vec{p}_t) (\vec{p}_t\cdot\vec{\epsilon}_{\gamma t}) (\vec{p}_t\cdot \vec{e}^{\,*}_\pm) D \end{eqnarray} where $A$, $B$, $C$, $D$ are also scalar functions of $s$, $\xi$, $\alpha$ and $M^2_{\pi\rho}$.\\ The final result for each particular diagram is rather lengthy, and because of that we do not present explicit final results for the scalar functions $A,$ $B$, $C$, $D$ of (\ref{ampl}). \section{Transversity GPD and Double Distribution} \label{Sec:DD} In order to get an estimate of the differential cross section of this process, we need to propose a model for the transversity GPD $H_T^q(x,\xi,t)$ ($q=u,\ d$). Contrary to what Enberg \textit{et al.} have done \cite{eps}, here we need a parametrization in both the ERBL ($]-\xi ; \xi[$) and the DGLAP ($[-1 ; -\xi]\ \bigcup\ [\xi ; 1]$) $x-$domains.\\ We use the standard description of GPDs in terms of double distributions \cite{Rad} \begin{equation} \label{DDdef} H_T^q(x,\xi,t=0) = \int_\Omega d\beta\, d\alpha\ \delta(\beta+\xi\alpha-x)f_T^q(\beta,\alpha,t=0) \,, \end{equation} where $f_T^q$ is the quark transversity double distribution and $\Omega = \{|\beta|+|\alpha| \leqslant 1\}$ is its support domain. Moreover, one may add a D-term contribution, which would be necessary to be completely general while fulfilling the polynomiality constraints. Since adding a D-term is quite arbitrary and unconstrained, we do not include it in our parametrization.
We thus propose a simple model for these GPDs, by writing $f_T^q$ in the form \begin{equation} \label{DD} f_T^q(\beta,\alpha,t=0) = \Pi(\beta,\alpha)\,\delta \, q(\beta)\Theta(\beta) - \Pi(-\beta,\alpha)\,\delta \bar{q}(-\beta)\,\Theta(-\beta)\,, \end{equation} where $ \Pi(\beta,\alpha) = \frac{3}{4}\frac{(1-\beta)^2-\alpha^2}{(1-\beta)^3}$ is a profile function and $\delta q$, $\delta \bar{q}$ are the quark and antiquark transversity parton distribution functions (PDF). The transversity GPD $H_T^q$ thus reads \begin{eqnarray} H_T^q(x,\xi,t=0) &=& \Theta(x>\xi)\int_{\frac{-1+x}{1+\xi}}^{\frac{1-x}{1-\xi}}dy\ \frac{3}{4}\frac{(1-x+\xi y)^2-y^2}{(1-x+\xi y)^3}\delta q(x-\xi y) \nonumber \\ &+& \Theta(\xi>x>-\xi)\left[\int_{\frac{-1+x}{1+\xi}}^{\frac{x}{\xi}}dy\ \frac{3}{4}\frac{(1-x+\xi y)^2-y^2}{(1-x+\xi y)^3}\delta q(x-\xi y) \right. \nonumber \\ &-& \left. \int_{\frac{x}{\xi}}^{\frac{1+x}{1+\xi}}dy\ \frac{3}{4}\frac{(1+x-\xi y)^2-y^2}{(1+x-\xi y)^3}\delta \bar{q}(-x+\xi y) \right] \nonumber \\ &-& \Theta(-\xi>x)\int_{-\frac{1+x}{1-\xi}}^{\frac{1+x}{1+\xi}}dy\ \frac{3}{4}\frac{(1+x-\xi y)^2-y^2}{(1+x-\xi y)^3}\delta \bar{q}(-x+\xi y)\,.
\end{eqnarray} For the transversity PDFs $\delta q$ and $\delta \bar{q}$, we use the parametrization proposed by Anselmino \textit{et al.}~\cite{Anselmino} \begin{eqnarray} \delta u(x) &=& 7.5 \cdot 0.5\cdot (1-x)^5\cdot(x\,u(x)+x\,\Delta u(x)) \,,\\ \delta \bar{u}(x) &=& 7.5 \cdot 0.5\cdot (1-x)^5\cdot(x \,\bar{u}(x)+x\,\Delta \bar{u}(x)) \,,\\ \delta d(x) &=& 7.5\cdot (-0.6)\cdot(1-x)^5\cdot(x\,d(x)+x\,\Delta d(x)) \,,\\ \delta \bar{d}(x) &=& 7.5 \cdot (-0.6) \cdot(1-x)^5\cdot(x \,\bar{d}(x)+x\,\Delta \bar{d}(x)) \,, \end{eqnarray} where the helicity-dependent PDFs $\Delta q(x)$, $\Delta\bar{q}(x)$ are parametrized with the help of the unpolarized PDFs $q(x)$ and $\bar{q}(x)$ by \cite{GRSV} \begin{eqnarray} \Delta u(x) &=& \sqrt{x}\cdot u(x) \,,\\ \Delta \bar{u}(x) &=& -0.3 \cdot x^{0.4} \cdot\bar{u}(x) \,,\\ \Delta d(x) &=& -0.7 \cdot\sqrt{x} \cdot d(x) \,,\\ \Delta \bar{d}(x) &=& -0.3\cdot x^{0.4}\cdot\bar{d}(x) \,, \end{eqnarray} and the PDFs $q(x)$, $\bar{q}(x)$ come from GRV parametrizations~\cite{GRV}. All these PDFs are calculated at the energy scale $\mu^2 = 10 $ GeV$^2$. Fig.~\ref{HTu_d_JLab} represents $H_T^u(x,\xi,t=0)$ and $H_T^d(x,\xi,t=0),$ respectively, for different values of $\xi$, which are determined through (\ref{skewness}) for $S_{\gamma N}=20$ GeV$^2$ of JLab and for $M^2_{\pi \rho}$ equal to 2, 4, 6 GeV$^2$. Similarly, Fig.~\ref{HTu_d_Compass} represents $H_T^u(x,\xi,t=0)$ and $H_T^d(x,\xi,t=0),$ respectively, for different values of $\xi$, which are determined through (\ref{skewness}) for $S_{\gamma N}=200$ GeV$^2$ of Compass and for $M^2_{\pi \rho}$ equal to 2, 4, 6 GeV$^2$.\\ These two GPDs share some common features: a peak when $x$ is near $\pm \xi$, a similar order of magnitude, and the fact that they both tend to zero when $x$ tends to $\pm 1$. The main difference is their opposite sign.
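The double-distribution model can be cross-checked numerically: in the DGLAP region $x>\xi$, the explicit integration limits quoted above for $H_T^q$ must reproduce a direct integration of Eq. (\ref{DDdef}) over the support $|\beta|+|\alpha|\leqslant 1$. A sketch with a toy transversity PDF (the functional form of $\delta q$ below is a placeholder, not the Anselmino parametrization, and the antiquark term is switched off):

```python
import math

def delta_q(beta):
    # toy quark transversity PDF (placeholder, only for this check);
    # the antiquark PDF is set to zero
    return math.sqrt(beta) * (1 - beta) ** 5 if 0 < beta < 1 else 0.0

def profile(beta, alpha):
    # profile function Pi(beta, alpha) = (3/4) [(1-beta)^2 - alpha^2] / (1-beta)^3
    return 0.75 * ((1 - beta) ** 2 - alpha ** 2) / (1 - beta) ** 3

def quad(f, a, b, n=4000):
    # simple midpoint rule
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

def H_T_explicit(x, xi):
    # DGLAP region x > xi, with the explicit limits quoted in the text
    lo, hi = (-1 + x) / (1 + xi), (1 - x) / (1 - xi)
    return quad(lambda y: profile(x - xi * y, y) * delta_q(x - xi * y), lo, hi)

def H_T_support(x, xi):
    # same quantity from the support condition |beta| + |alpha| <= 1,
    # with beta = x - xi*y after the delta-function integration
    def integrand(y):
        beta = x - xi * y
        return profile(beta, y) * delta_q(beta) if abs(beta) + abs(y) <= 1 else 0.0
    return quad(integrand, -1.0, 1.0)

x, xi = 0.4, 0.25
print(H_T_explicit(x, xi), H_T_support(x, xi))  # the two must agree
```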
The restricted analysis of Ref.~\cite{eps} based on a meson exchange is insufficient for this study since it only gives us the transversity GPDs in the ERBL region. The MIT bag model inspired method of Ref.~\cite{Sco} underestimates the value of $H_T(x,\xi)$ in the ERBL domain because this model does not take into account antiquark degrees of freedom. One can notice that these GPDs have the same order of magnitude but some differences with other models like light-front constituent quark models \cite{Pasq}, principally due to the fact that in \cite{Pasq}, parametrizations have been done at $\mu^2$ $\sim$ 0.1 GeV$^2$ whereas our model is calculated at $\mu^2$ $\sim$ 10 GeV$^2$. Other model-studies have been developed in the chiral quark soliton model and a QED-based overlap representation \cite{othermodels}.\\ The t-dependence of these chiral-odd GPDs - and its Fourier transform in terms of the transverse localization of quarks in the proton \cite{impact} - is very interesting but completely unknown. We will describe it in a simplistic way as: \begin{equation} \label{t-dep} H^q_T(x,\xi,t) = H^q_T(x,\xi,t=0)\times F_H(t), \end{equation} where \begin{equation} \label{dipoleFF} F_H(t) = \frac{C^2}{(t - C)^2} \end{equation} is a standard dipole form factor with $C=.71~$GeV$^2$. Let us stress that we have no phenomenological control of this assumption, since the tensor form factor of the nucleon (in fact even the tensor charge) has never been measured. \begin{figure}[h!] 
$\begin{array}{cc} \hspace{-.15cm} \includegraphics[width=8.5cm]{HTuM2468JLab.eps} & \hspace{-1.9cm} \includegraphics[width=8.5cm]{HTdM2468JLab.eps} \\ \\ (a) & \hspace{-1.5cm}(b) \\ \end{array}$ \caption{Transversity GPD $H_T^u(x,\xi,t=0)$ (a) and $H_T^d(x,\xi,t=0)$ (b) of the nucleon for $\xi$ = .111 (solid line), $\xi$ = .176 (dotted line), $\xi$ = .25 (dashed line), corresponding respectively to $M_{\pi \rho}^2/S_{\gamma N}$ equal to 4/20, 6/20 and 8/20.} \label{HTu_d_JLab} \end{figure} \newpage \begin{figure}[h!] $\begin{array}{cc} \hspace{-.15cm} \includegraphics[width=7.5cm]{HTuM2468Compass.eps} & \hspace{0cm} \raisebox{-.22cm}{\includegraphics[width=7.5cm]{HTdM2468Compass.eps}} \\ \\ \hspace{-.15cm} \includegraphics[width=7.5cm]{HTuM2468Compassb.eps} & \hspace{0cm} \raisebox{-.23cm}{\includegraphics[width=7.5cm]{HTdM2468Compassb.eps}} \\ \\ (a) & (b) \\ \end{array}$ \caption{Transversity GPD $H_T^u(x,\xi,t=0)$ (a) and $H_T^d(x,\xi,t=0)$ (b) of the nucleon for $\xi$ = .01 (solid line), $\xi$ = .015 (dotted line), $\xi$ = .02 (dashed line), corresponding respectively to $M_{\pi \rho}^2/S_{\gamma N}$ equal to 4/200, 6/200 and 8/200, the lower plots being blow-ups of the central part of the upper ones.} \label{HTu_d_Compass} \end{figure} \newpage \section{Unpolarized Differential Cross Section and Rate Estimates} \label{Sec:Cross Section and Rates} \psfrag{ds}{\raisebox{.2cm}{{\hspace{-.3cm}$\left.\frac{d \sigma}{dt \, du' \, d M_{\pi \rho}^2}\right|_{t=t_{min}}$ \hspace{-.3cm}{\footnotesize (nb.GeV$^{-6}$)}}}} \psfrag{mup}{\raisebox{-.7cm}{{\hspace{-2.5cm}$-u'$(GeV$^2$)}}} \begin{figure}[h!] 
$\begin{array}{cc} \\ \hspace{-.2cm} \includegraphics[width=8cm]{dsigma_up_M23_JLab_D_fit_cuts_new.eps} & \hspace{-.3cm} \includegraphics[width=8cm]{dsigma_up_M26_JLab_D_fit_cuts_new.eps} \\ \\ (a) & (b) \\ \end{array}$ \caption{Variation of the differential cross section (\ref{difcrosec}) (nb.GeV$^{-6}$) with respect to $|u'|$ at $M^2_{\pi\rho}$ = 3 GeV$^2$ (a) and $M^2_{\pi\rho}$ = 6 GeV$^2$ (b) with $S_{\gamma N}$ = 20 GeV$^2$. The lines on the left correspond to the constraints $-u' > 1$ GeV$^2$ and $M_{\pi N'}^2 > 2$ GeV$^2$ and the lines on the right correspond to the constraints $-t' > 1$ GeV$^2$ and $M_{\rho N'}^2 > 2$ GeV$^2$ (dashed line for $t = t_{min}$ and solid line for $t = -0.5$ GeV$^2$).} \label{resultS20} \end{figure} Starting with the expression of the scattering amplitude (\ref{ampl}) we now calculate the amplitude squared for the unpolarized process \begin{equation} \label{amplsqu} |\mathcal{M}|^2 = \left(\frac{1}{2}\right) \left(\frac{1}{2}\right)\sum_{\lambda_1 \lambda_2} \mathcal{A}\mathcal{A}^* \,. \end{equation} It may seem odd to study the chiral-odd quark content of the nucleon through the cross section of an unpolarized process, but this is sufficient for a first access to this unknown structure. It is of course possible to consider polarized observables, such as the spin density matrix, which will be done in a future work. We now present the cross-section as a function of $t$, $M^2_{\pi\rho}$ and $-u'$, which reads \begin{equation} \label{difcrosec} \left.\frac{d\sigma}{dt \,du' \, dM^2_{\pi\rho}}\right|_{\ t=t_{min}} = \frac{|\mathcal{M}|^2}{32S_{\gamma N}^2M^2_{\pi\rho}(2\pi)^3}. \end{equation} \noindent We show, in Fig.~\ref{resultS20}, the differential cross section (\ref{difcrosec}) as a function of $-u'$ at $S_{\gamma N}$ = 20 GeV$^2$ for $M^2_{\pi\rho}$ = 3 GeV$^2$ i.e. $\xi = 0.085$ and for $M^2_{\pi\rho}$ = 6 GeV$^2$ i.e.
$\xi = 0.186$ and, in Fig.~\ref{resultS200}, at $S_{\gamma N}$ = 200 GeV$^2$ respectively for $M^2_{\pi\rho}$ = 3 GeV$^2$ i.e. $\xi = 0.0076$ and for $M^2_{\pi\rho}$ = 6 GeV$^2$ i.e. $\xi = 0.015$. \begin{figure}[h!] $\begin{array}{cc} \hspace{-0cm} \includegraphics[width=8cm]{dsigma_up_M23_Cp_hE_fit_cuts_new.eps} & \hspace{-.45cm} \includegraphics[width=8cm]{dsigma_up_M26_Cp_hE_fit_cuts_new.eps} \\ \\ (a) & (b) \\ \end{array}$ \caption{Variation of the differential cross section (\ref{difcrosec}) (nb.GeV$^{-6}$) with respect to $|u'|$ at $M^2_{\pi\rho}$ = 3 GeV$^2$ (a) and $M^2_{\pi\rho}$ = 6 GeV$^2$ (b) with $S_{\gamma N}$ = 200 GeV$^2$. The solid line on the left corresponds to the constraints $-u' > 1$ GeV$^2$ and $M_{\pi N'}^2 > 2$ GeV$^2$ for any value of $t$ and the lines on the right correspond to the constraints $-t' > 1$ GeV$^2$ and $M_{\rho N'}^2 > 2$ GeV$^2$ (dashed line for $t = t_{min}$ and solid line for $t = -0.5$ GeV$^2$).} \label{resultS200} \end{figure} To get an estimate of the total rate of events of interest for our analysis, we first compute the $M^2_{\pi\rho}$ dependence of the differential cross section integrated over $u'$ and $t$, \begin{equation} \label{difcrosec2} \frac{d\sigma}{dM^2_{\pi\rho}} = \int_{(-t)_{min}}^{0.5} \ d(-t)\ \int_{(-u')_{min}}^{(-u')_{max}} \ d(-u') \ F^2_H(t)\times\left.\frac{d\sigma}{dt \, du' d M^2_{\pi\rho}}\right|_{\ t=t_{min}} \,. \end{equation} \psfrag{mup}{\hspace{-.2cm}$-u'$} \psfrag{mt}{$-t$} \psfrag{mtmin}{$(-t)_{min}$} \psfrag{mupmax}{\rotatebox{10}{{\hspace{0cm}$(-u')_{max}(t)$}}} \psfrag{mupmin}{\rotatebox{17}{{\hspace{-1cm}$(-u')_{min(res.)}(t)$}}} \begin{figure}[!h] \centerline{\includegraphics[width=8cm]{Prepfigure_3_20region.eps}} \caption{The phase space domain of integration in the $(-t,-u')$ variables. The upper limit in $-u'$ is given by the constraint $-u'(t)<(-u')_{max}(t)=-(-t')_{min}+M_{\pi \rho}^2 -t -m_\pi^2 -m_\rho^2$ (with $(-t')_{min}= 1$ GeV$^2$).
The lower limit in $-u'$ is given by $-u' > 1$ GeV$^2$ and $-u'(t)>(-u')_{min(res.)}(t)$ where $(-u')_{min(res.)}(t)$ is obtained from the constraint $M_{\pi N'}^2 >$ 2 GeV$^2$. The figure illustrates the case $S_{\gamma N}=20$ GeV$^2$ and $M_{\pi \rho}^2 = 3$ GeV$^2$.} \label{phase_space} \end{figure} The domain of integration over $-u'$ is deduced from the cuts we discussed at the end of section \ref{Sec:Kinematics}: $-t', -u' > 1$ GeV$^2$ and $M_{\pi N'}^2,\ M_{\rho N'}^2 > 2$ GeV$^2$, and is illustrated in Fig.~\ref{phase_space}. This yields two limits for the domain over $-u'$ : \begin{itemize} \item The cuts over $-u'$ and $M_{\pi N'}^2$ give the minimum value for $-u'$ ($-u'_{min}$ = 1 GeV$^2$ or $(-u')_{min(res.)}$) which depends on $t$, $S_{\gamma N}$ and $M^2_{\pi\rho}$. For instance, the lines on the left in Figs.~\ref{resultS20}-\ref{resultS200} represent that cut : solid lines for $t$ = -0.5 GeV$^2$ and dashed lines for $t = t_{min}$. One can notice that at high energy, the $\pi N'$ system is outside the baryonic resonance region so that $-u'_{min}$ is always equal to 1 GeV$^2$ for any value of $t$. \item The cuts over $-t'$ and $M_{\rho N'}^2$ give the maximum value for $-u'$ ($(-u')_{max}$) which depends on $t$, $S_{\gamma N}$ and $M^2_{\pi\rho}$. For instance, the lines on the right in Figs.~\ref{resultS20}-\ref{resultS200} represent that cut : solid lines for $t$ = -0.5 GeV$^2$ and dashed lines for $t = t_{min}$. It is interesting to stress that, for any value of the hard scale and of the energy, the $\rho N'$ system is always outside the baryonic resonance region. \end{itemize} Moreover, one notices that $(-u')_{min}$ decreases and $(-u')_{max}$ increases with $M^2_{\pi\rho}$ at fixed $S_{\gamma N}$, so that the width of the physical region $[(-u')_{min},(-u')_{max}]$ grows.\\ Thus, in Figs.~\ref{result20}, \ref{result100} and \ref{result200}, we show the $M^2_{\pi\rho}$ dependence of the differential cross section (\ref{difcrosec2}).
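The upper boundary of the $-u'$ integration region quoted in the caption of Fig.~\ref{phase_space} is simple enough to encode directly. The sketch below (PDG masses, cut values from the text) also illustrates that the physical $-u'$ window widens with $M^2_{\pi\rho}$ at fixed $t$:

```python
M_PI2 = 0.13957 ** 2   # m_pi^2 in GeV^2 (PDG charged pion mass)
M_RHO2 = 0.77526 ** 2  # m_rho^2 in GeV^2 (PDG rho(770) mass)

def u_prime_max(M2_pirho, t, tprime_min=1.0):
    # (-u')_max(t) = -(-t')_min + M^2_pirho - t - m_pi^2 - m_rho^2
    return -tprime_min + M2_pirho - t - M_PI2 - M_RHO2

# the physical window [(-u')_min, (-u')_max] widens with M^2_pirho at fixed t:
for M2 in (3.0, 6.0):
    print(M2, u_prime_max(M2, t=-0.5))
```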
\psfrag{ds}{\raisebox{.5cm}{{\hspace{-.6cm}$\displaystyle \frac{d \sigma}{d M_{\pi \rho}^2}$ \hspace{0cm}{ (nb.GeV$^{-2}$)}}}} \psfrag{M2}{\raisebox{-.7cm}{{\hspace{-2.5cm}$M_{\pi \rho}^2$(GeV$^2$)}}} \begin{figure}[!h] \begin{center} \includegraphics[width=12cm]{dsigma_M2_JLab_D_fit_new.eps} \vspace{.3cm} \caption{$M^2_{\pi\rho}$ dependence of the differential cross section (\ref{difcrosec2}) (nb.GeV$^{-2}$) at $S_{\gamma N}$ = 20 GeV$^2$.} \label{result20} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \vspace{.5cm} \includegraphics[width=12cm]{dsigma_M2_Cp_bE_fit_new.eps} \vspace{.3cm} \caption{$M^2_{\pi\rho}$ dependence of the differential cross section (\ref{difcrosec2}) (nb.GeV$^{-2}$) at $S_{\gamma N}$ = 100 GeV$^2$.} \label{result100} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \vspace{.5cm} \includegraphics[width=12cm]{dsigma_M2_Cp_hE_fit_new.eps} \vspace{.3cm} \caption{$M^2_{\pi\rho}$ dependence of the differential cross section (\ref{difcrosec2}) (nb.GeV$^{-2}$) at $S_{\gamma N}$ = 200 GeV$^2$.} \label{result200} \end{center} \end{figure} \newpage Let us first focus on the high energy domain, and discuss the specific case of muoproduction with the COMPASS experiment at CERN. Integrating differential cross sections on $t$, $u'$ and $M^2_{\pi\rho}$, with the cuts specified above and $M^2_{\pi\rho} > 3$ GeV$^2$, leads to an estimate of the cross sections for the photoproduction of a $\pi^+\rho^0_T$ pair at high energies such as : \begin{equation} \label{crossec1} \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 100\ GeV^2) \simeq 3\ \textrm{nb} \qquad \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 200\ GeV^2) \simeq 0.1\ \textrm{nb}. 
\end{equation} The virtuality $Q^2$ of the exchanged photon plays no crucial role in our process, and the virtual photoproduction cross section is almost $Q^2$-independent if we choose to select events in a sufficiently narrow $Q^2-$window (say $Q^2_{min}<Q^2<.5 $ GeV$^2$), which is legitimate since the effective photon flux is strongly peaked at very low values of $Q^2$. The quasi-real (transverse) photon flux $\Gamma_T^l(Q^2,\ \nu) $ reads \begin{equation} \label{virphoflux} \Gamma_T^l(Q^2,\ \nu) = \frac{\alpha \left(\nu - \frac{Q^2}{2M_p}\right)}{2\pi Q^2\nu^2}\left[\left(\frac{\nu}{E_l}\right)^2\left(1 - 2\frac{m^2_l}{Q^2}\right) + \left(1 - \frac{\nu}{E_l} - \frac{Q^2}{4E^2_l}\right)\frac{2}{1 + \frac{Q^2}{\nu^2}}\right]\,, \end{equation} with the fine structure constant $\alpha = 1/137$ and $E_l$ the lepton energy (in the laboratory frame). Consequently, the rate in a photon energy bin $[\nu_1,\nu_2]$ corresponding to $[\bar S_{\gamma N} - \Delta S,\bar S_{\gamma N} + \Delta S ]$ with $\bar S_{\gamma N} = 2\,\bar \nu \, M = (\nu_1 + \nu_2)M$ and $\Delta S = 2 \, \Delta \nu M = (\nu_2 - \nu_1)M$ is \begin{eqnarray} \label{virphoflux2} \sigma(l \,N\to l\,\pi^+\rho^0_TN') &=& \int_{Q^2_{min}}^{1} dQ^2 \int_{\nu_1}^{\nu_2}d\nu\ \Gamma_T^l(Q^2,\ \nu)\sigma_{\gamma^* N\to \pi^+\rho^0_TN'}(Q^2,\nu)\nonumber\\ &\simeq& \sigma_{\gamma^* N\to \pi^+\rho^0_TN'}(S_{\gamma N} =\bar S_{\gamma N})\times\int_{Q^2_{min}}^{1} dQ^2 \int_{\nu_1}^{\nu_2}d\nu\ \Gamma_T^l(Q^2,\ \nu).
\end{eqnarray} For muoproduction ($E_\mu = 160$ GeV), one gets the following cross section estimates, firstly for $S_{\gamma N}$ between 50 and 150 GeV$^2$ \begin{eqnarray} \label{ } \sigma(\mu \,N\to \mu \, \pi^+\rho^0_TN') &\simeq& \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 100\ GeV^2)\times\int_{0.02}^{1} dQ^2 \int_{25}^{75}d\nu\ \Gamma_T^\mu(Q^2,\ \nu)\nonumber\\ &\simeq& 10^{-2}\ \textrm{nb}, \end{eqnarray} which yields a rate equal to 3 10$^{-3}$ Hz with a lepton beam luminosity of 2.5 10$^{32}$ cm$^{-2}$.s$^{-1}$, and, for $S_{\gamma N}$ between 150 and 250 GeV$^2$ \begin{eqnarray} \label{ } \sigma(\mu \, N\to \mu \, \pi^+\rho^0_TN') &\simeq& \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 200\ GeV^2)\times\int_{0.02}^{1} dQ^2 \int_{75}^{125}d\nu\ \Gamma_T^\mu(Q^2,\ \nu)\nonumber\\ &\simeq& 5\ 10^{-4}\ \textrm{nb}, \end{eqnarray} which yields a rate equal to 1.3 10$^{-4}$ Hz with the same lepton beam luminosity. This looks sufficient to get an estimate of the transversity GPDs in the region of small $\xi$ of the order of 0.01. Let us now turn to the lower energy domain, which will be studied in detail at JLab. With the cuts discussed above and $M^2_{\pi\rho} > 3$ GeV$^2$, estimates of the cross sections for the photoproduction of a $\pi^+\rho^0_T$ pair at JLab energies are: \begin{equation} \label{crossec} \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 10\ GeV^2) \simeq 15\ \textrm{nb} \qquad \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 20\ GeV^2) \simeq 33\ \textrm{nb}. \end{equation} In electroproduction ($E_e = 11$ GeV), applying Eqs. (\ref{virphoflux}) and (\ref{virphoflux2}), one gets the total cross section \begin{equation} \sigma(e^- N\to e^-\pi^+\rho^0_TN') \simeq 0.1\ \textrm{nb}. \end{equation} Tagging the photons is however required if one aims at a detailed understanding of the reaction and at an extraction of the GPD.
This is possible at JLab and indeed is well documented for the future 12 GeV energy upgrade in \cite{HallDB}. More specifically, Hall D will be equipped with a crystal radiator, which through the technique of coherent bremsstrahlung will produce an intense photon beam of 8 - 9 GeV with an excellent degree of polarization. This leads to the following rate \begin{eqnarray} \label{rateJLabHallD} R^D &=& \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 17\ GeV^2)\times N^D_\gamma\times N^D_p \nonumber\\ &\simeq& 5\ \mathrm{Hz} \end{eqnarray} where $N^D_\gamma \sim\ 10^8$ photons/s is the photon flux for Hall D and $N^D_p = 1.27\ b^{-1}$ is the number of protons per unit area in the target (liquid hydrogen of 30 cm), assuming that the efficiency of the detector is 100$\%$.\\ With a different technique, CLAS12 in Hall B may be equipped with a photon tagger allowing an intense ($\approx 5\ 10^7 $ photons/s) flux of photons with energy 7 - 10.5 GeV. This will lead to slightly lower but still large enough rates.\\ Thanks to the high electron beam luminosity expected at JLab, a detailed analysis is possible.\\ Moreover, one can make an additional comment about the use of the Compass experiment with the kinematics of JLab, i.e. with photons at low energies. In this context, one gets the following cross section estimate for muoproduction for $S_{\gamma N}$ between 20 and 50 GeV$^2$ \begin{eqnarray} \label{ } \sigma(\mu N\to \mu\pi^+\rho^0_TN') &\simeq& \sigma_{\gamma N\to \pi^+\rho^0_TN'}(S_{\gamma N} = 35\ GeV^2)\times\int_{0.02}^{1} dQ^2 \int_{10}^{25}d\nu\ \Gamma_T^\mu(Q^2,\ \nu)\nonumber\\ &\simeq& 0.2\ \textrm{nb}\, , \end{eqnarray} which leads to the conclusion that muoproduction at low energy, at Compass, gives greater rates (5 10$^{-2}$ Hz) than with high energy photons.
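The numerical estimates of this section follow from elementary unit conversions. The sketch below evaluates the flux (\ref{virphoflux}) at one sample point of the COMPASS window, redoes the luminosity-based rate arithmetic $R = \sigma \mathcal{L}$ (1 nb $= 10^{-33}$ cm$^2$), and redoes the Hall D product $R^D = \sigma N^D_\gamma N^D_p$ (1 nb $= 10^{-9}$ b). The Hall D line uses the $S_{\gamma N} = 20$ GeV$^2$ estimate of $\sim 33$ nb as an illustrative input, since the $S_{\gamma N} = 17$ GeV$^2$ cross section behind the quoted 5 Hz is not given explicitly in the text.

```python
import math

ALPHA = 1 / 137.0
M_P = 0.938      # proton mass (GeV)
M_MU = 0.10566   # muon mass (GeV)

def flux_T(Q2, nu, E_l, m_l=M_MU):
    # quasi-real transverse photon flux Gamma_T^l(Q^2, nu) of Eq. (virphoflux)
    pref = ALPHA * (nu - Q2 / (2 * M_P)) / (2 * math.pi * Q2 * nu ** 2)
    term1 = (nu / E_l) ** 2 * (1 - 2 * m_l ** 2 / Q2)
    term2 = (1 - nu / E_l - Q2 / (4 * E_l ** 2)) * 2 / (1 + Q2 / nu ** 2)
    return pref * (term1 + term2)

gamma_sample = flux_T(Q2=0.1, nu=50.0, E_l=160.0)  # a point in the COMPASS window

def rate_lumi_Hz(sigma_nb, lumi_cm2_s):
    # R = sigma * L, converting 1 nb = 1e-33 cm^2
    return sigma_nb * 1e-33 * lumi_cm2_s

def rate_hallD_Hz(sigma_nb, n_gamma=1e8, n_p=1.27):
    # R^D = sigma * N_gamma * N_p, with sigma in nb and N_p in b^-1 (1 nb = 1e-9 b)
    return sigma_nb * 1e-9 * n_gamma * n_p

r_compass_1 = rate_lumi_Hz(1e-2, 2.5e32)  # ~2.5e-3 Hz (text: 3e-3 from the unrounded sigma)
r_compass_2 = rate_lumi_Hz(5e-4, 2.5e32)  # ~1.3e-4 Hz
r_hallD = rate_hallD_Hz(33.0)             # illustrative sigma, the S = 20 GeV^2 value
print(gamma_sample, r_compass_1, r_compass_2, r_hallD)
```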
\section{Conclusions} In this paper, we have advocated that the exclusive photoproduction of a meson pair with a large invariant mass gives access to generalized parton distributions through the factorization of a hard subprocess, provided all the kinematical invariants ($s',t',u'$) which characterize this subprocess are large enough. We applied this strategy to access the chiral-odd generalized quark distributions from the photoproduction of a $\pi^+ \rho^0_T$ meson pair with a large invariant mass. We modeled the dominant chiral-odd GPD $H^{ud}_T(x,\xi,t)$ through a double distribution constrained by the phenomenological knowledge of the transversity quark distributions $h_1^u(x)$ and $h_1^d(x)$. The calculation of the hard part at the leading order in the strong coupling $\alpha_s$ shows that no divergence nor end-point singularity plagues the validity of our approach. From our results, we conclude that the experimental search is promising, both at low real or almost real photon energies within the JLab@12GeV upgraded facility, with the nominal effective luminosity generally expected ($\mathcal{L} \sim 10^{35}$ cm$^{-2}$.s$^{-1}$) and at higher photon energies with the Compass experiment at CERN. These two energy ranges should give complementary information on the chiral-odd GPD $H_T(x,\xi,t)$. Namely, the large $\xi$ region may be scrutinized at JLab and the smaller $\xi$ region may be studied at COMPASS. It is obvious that our model for the chiral-odd GPD $H_T^q$ may be improved and refined in many ways, for instance by adding a D-term, which would make the double distribution parametrization complete. Our primary goal in this paper was to prove the feasibility of the study of this physics with this process. We believe that this task is achieved. The described framework opens the way to future studies. Firstly, the contributions proportional to the other three chiral-odd GPDs should be included in the calculation of the amplitude.
Although they are suppressed by kinematical factors at small $t$, they constitute an interesting addition to the transversity structure of the nucleon. Secondly, it will be interesting to study the polarized process and calculate more observables like beam or target spin asymmetries, or the spin density matrix of the vector meson. These two extensions will be discussed in a separate paper. Other two-meson channels may also be interesting; they deserve a thorough study. \section*{Acknowledgments} We are grateful to Igor Anikin, Markus Diehl, Samuel Friot, Franck Sabatie and Jean Philippe Lansberg for useful discussions and correspondence. This work is partly supported by the French-Polish scientific agreement Polonium 7294/R08/R09, the ECO-NET program, contract 18853PJ, the ANR-06-JCJC-0084-02, the Polish Grant N202 249235 and the DFG (KI-623/4).
2,877,628,088,818
arxiv
\section{Introduction} \begin{figure*} \begin{center} \includegraphics[width=0.75\textwidth, trim=3.5cm 3.5cm 6cm 3.2cm, ,clip]{images/schema.pdf} \end{center} \caption{\textbf{Conventional technology-mediated persuasion (left) compared to \emph{latent persuasion} by language models (right).} In conventional influence campaigns, a central persuader designs an influential message or choice architecture distributed to recipients. In \emph{latent persuasion}, language models produce some opinions more often than others, influencing what their users write, which is, in turn, read by others.} \label{fig:schema} \end{figure*} \x{Large language models like GPT-3~\cite{winata2021language, bommasani2021opportunities, vaswani2017attention} are increasingly becoming part of human communication. Enabled by developments in computer hardware and software architecture~\cite{vaswani2017attention}, large language models produce human-like language \cite{jakesch2022human} by iteratively predicting likely next words based on the sequence of preceding words. Applications like writing assistants \cite{dang2022beyond}, grammar support \cite{koltovskaia2020student}, and machine translation \cite{gaspari2014perception} inject the models' output into what people write and read \cite{Hancock2020}.} Using large language models \x{in our daily communication may} change how we form opinions and influence each other. In conventional forms of persuasion, a persuader crafts a compelling message and delivers it to recipients -- either face-to-face or mediated through contemporary technology~\cite{simons2011persuasion}. More recently, user researchers and behavioral economists have shown that technical choice architectures, \x{such as the order of options presented} affect people's behavior as well~\cite{leonard2008richard, fogg2002persuasive}. 
With the emergence of large language models that produce human-like language~\cite{jakesch2022human, buchanan2021truth}, interactions with technology may \x{influence} not only behavior but also opinions: when language models produce some views more often than others, they may persuade their users. We call this new paradigm of influence \emph{latent persuasion} by language models, illustrated in Figure \ref{fig:schema}. \emph{Latent persuasion} by language models extends the insight that choice defaults \x{affect} people's behavior \cite{leonard2008richard, fogg2002persuasive} to the field of language and persuasion. Where \emph{nudges} change behavior by making some \x{choices more convenient} than others, AI language technologies may shift opinions by making it easy to express certain views but not others. \x{Such influence could be \emph{latent} and hard to pinpoint:} choice architectures are visible, but opinion preferences built into language models may be opaque to users, policymakers, and even system developers. While in traditional persuasion, a central designer \x{intentionally creates a message to convince a specific audience, a language model may be opinionated by accident and its opinions may vary by user, product and context}. Prior research on the risks of generative language models has focused on conventional persuasion scenarios, where \x{a human persuader uses} language models to automate and optimize the production of content for advertising ~\cite{sunny, duerr2021persuasive} or misinformation~\cite{kreps2022all, buchanan2021truth, zellers2019defending}. \x{Initial audits also highlight} that language models reproduce stereotypes and biases~\cite{huang2019reducing, brown2020language, nozza2021honest} and support certain cultural values more than others~\cite{johnson2022ghost}. 
While emerging research on co-writing with large language models suggests that models become increasingly active partners in people's writing~\cite{Lee2022coauthor, Yang2022AIAA, Yuan2022wordcraft}, little is known about how the opinions produced by language models affect users' views. Work by~\citet{Arnold2018review_sentiment} and \citet{Bhat2021, Bhat2022} shows that a biased writing assistant may affect movie or restaurant reviews, but whether \x{co-writing with large language models} affects users' opinions on public issues remains an open and urgent question. This study investigates whether large language models that generate certain opinions more often than others \x{affect} what their users write and think. In an online experiment \x{(N=1,506)}, participants wrote a short statement discussing whether social media is good or bad for society. Treatment group participants were shown suggested text generated by a large language model. The model, GPT-3~\cite{winata2021language}, was configured to either generate text that argued that social media is good for society or text that argued the opposite. Following the writing task, we asked participants to assess social media's societal impact in a survey. A separate sample of human judges (N=500) evaluated the opinions expressed in participants' writing. Our quantitative analysis tests whether the interactions with the opinionated language model \x{shifted} participants' writing and survey opinions. We explore how this opinion \x{shift} may have occurred \x{in secondary analyses}. We find that both \x{participants' writing} \textit{and} their attitude towards social media in the survey were considerably \x{affected} by the model's preferred opinion.
We conclude by discussing how \x{researchers, AI practitioners, and policymakers can respond to the possibility of latent persuasion by AI language technologies.} \section{Related work} \x{Our study is informed by} prior research on social influence and persuasion, interactions with writing assistants, and the societal risks of large language models. \subsection{Social influence and persuasion} Social influence is defined as a shift in an individual's thoughts, feelings, attitudes, or behaviors as a result of interaction with others~\cite{rashotte2007social}. While social influence is integral to human collaboration, landmark studies have shown that it can also lead to unreasonable or unethical behavior. On a personal level, people may conform to majority views against their better judgment~\cite{asch1951effects} and obey an authority figure even if it means harming others~\cite{milgram1963behavioral}. On a societal level, researchers have shown that social influence drives speculative markets~\cite{shiller2015irrational}, affects voting patterns~\cite{lazarsfeld1968people}, and contributes to the spread of unhealthy behaviors such as smoking and obesity~\cite{christakis2007spread, christakis2008collective}. Following the rise of social media, how online interactions affect people's opinions and decisions has been studied extensively. Research has shown that a variety of sources influence users' attitudes and behaviors, including friends, family, experts, and internet celebrities~\cite{goel2012structure,marwick2011tweet}; the latter group was labeled \emph{influencers} due to their influence on a large group of `followers'~\cite{bakshy2011everyone}. Research has found that in online settings, users can be influenced by non-human entities such as brand pages, bots, and algorithms~\cite{ferrara2016rise}.
Studies have evaluated the influence that technical artifacts such as personalized recommendations, chatbots, and choice architectures have on users' decision-making~\cite{berkovsky2012influencing, leonard2008richard, cosley2003seeing, gunaratne2018persuasive}. The influence that algorithmic entities have on people depends on how people perceive the algorithm, for example, whether they attribute trustworthiness to its recommendations~\cite{logg2019algorithm, gunaratne2018persuasive}. The influence of algorithms on individuals tends to increase as the environment becomes more uncertain and decisions become more difficult~\cite{bogert2021humans}. With the public's growing awareness of developments in artificial intelligence, people may regard \emph{smart} algorithms as a source of authority ~\cite{kapania2022because, logg2019algorithm, araujo2020ai}. There is recent evidence that people may accept algorithmic advice even in simple cases when it is clearly wrong~\cite{liel2020if}. In the related field of automation, such over-reliance on machine output has been referred to as \emph{automation bias}~\cite{parasuraman1997humans,parasuraman2010complacency, wickens2015complacency}. \subsection{Interaction with writing assistants} Historically, HCI research for text entry has predominantly focused on efficiency~\cite{Kristensson2014}. Typical text entry systems attend to language context at the word~\cite{Vertanen2015velocitap, Bi2014} or sentence level~\cite{Arnold2016phrases_vs_words, Buschek2021emails}. They suggest one to three subsequent words based on underlying likelihood distributions~\cite{Dunlop2012, Fowler2015, Gordon2016, Quinn2016chi}. More recent systems also provide multiple short reply suggestions~\cite{Kannan2016smartreply} or a single long phrase suggestion~\cite{Chen2019smartcompose}. More extensive suggestions have traditionally been avoided because the time taken to read and select them might exceed the time required to enter that text manually. 
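As an illustration of the word-level prediction these text entry systems rely on (not any particular system's implementation), a toy bigram-frequency suggester might look like the following sketch; the corpus, function names, and ranking scheme are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count next-word frequencies for each word in a toy corpus."""
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
    return table

def suggest(table, prev_word, k=3):
    """Return up to k most likely next words, as in word-level predictive text entry."""
    return [w for w, _ in table[prev_word.lower()].most_common(k)]

corpus = [
    "social media is good for society",
    "social media is bad for society",
    "social media is good for staying in touch",
]
table = train_bigrams(corpus)
print(suggest(table, "is"))  # most frequent continuations of "is" in the toy corpus
```

Production systems use far richer language models than this frequency table, but the interaction pattern -- rank likely continuations, surface the top one to three -- is the one described above.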
Several studies indicate that features such as auto-correction and word suggestions can negatively impact typing performance and user experience~\cite{Banovic2019mobilehci, Dalvi2016, Buschek2018researchime, Palin2019}. However, with the emergence of larger and more powerful language models \cite{winata2021language, bommasani2021opportunities, vaswani2017attention}, there has been a growing interest in design goals beyond efficiency. Studies have investigated interface design factors and interactions with writing assistants that directly or indirectly support inspiration~\cite{Lee2022coauthor, Singh2022elephant, Yuan2022wordcraft, Bhat2022}, language proficiency~\cite{Buschek2021emails}, story writing~\cite{Singh2022elephant, Yuan2022wordcraft}, text revision~\cite{Cui2020, Zhang2019} or creative writing~\cite{Clark2018, Gero2019MetaphoriaAA}. Here, language models are framed as \textit{active writing partners} or \textit{co-authors}~\cite{Lee2022coauthor, Yang2022AIAA, Yuan2022wordcraft}, rather than tools for prediction or correction. There is evidence that a system that suggests phrases rather than words~\cite{Arnold2016phrases_vs_words} is more likely to be perceived as a collaborator and content contributor by users. The more writing assistants become \textit{active writing partners} rather than mere tools for text entry, the more the writing process and output may be affected by their ``co-authorship''. \citet{Bhat2022} discuss how writers evaluate the suggestions provided and integrate them into different cognitive writing processes. Work by \citet{Singh2022elephant} suggests that writers make active efforts or 'leaps' to integrate generated language into their writing. \citet{Buschek2021emails} conceptualized nine behavior patterns that indicate varying degrees of engagement with suggestions, from ignoring them to chaining multiple ones in a row. 
Writing with suggestions correlates with shorter and more predictable texts being written~\cite{Arnold2020image_captions}, along with increased use of standard phrases when writing with a language model~\cite{Buschek2021emails, Bhat2022}. Furthermore, the sentiment of the suggested text may \x{affect} the sentiment of the written text~\cite{Arnold2018review_sentiment, Hohenstein2020}. \subsection{Societal risks of large language models} Technical advances have given rise to a generation of language models~\cite{bommasani2021opportunities} that produces language so natural that humans can barely distinguish it from real human language~\cite{jakesch2022human}. Enabled by improvements in computer hardware and the transformer architecture~\cite{vaswani2017attention}, models like GPT-3~\cite{brown2020language, radford2019language} have attracted attention for their potential to impact a range of beneficial real-world applications~\cite{bommasani2021opportunities}. However, more cautious voices have also warned about the ethical and social risks of harm from large language models~\cite{weidinger2021ethical, weidinger2022taxonomy}, ranging from discrimination and exclusion~\cite{huang2019reducing, brown2020language, nozza2021honest} to misinformation~\cite{kreps2022all, lin2021truthfulqa, rae2021scaling, zellers2019defending} and environmental~\cite{strubell2019energy} and socioeconomic harms~\cite{bender2021dangers}. Comparatively little research has considered widespread shifts in opinion, attitude, and culture that may result from a comprehensive deployment of generative language models. It is known that language models work and perform better for the languages and contexts they are trained in (primarily English or Mandarin Chinese)~\cite{brown2020language, rae2021scaling, winata2021language}. 
Small-n audits have also suggested that the values embedded in models like GPT-3 were more aligned with reported dominant US values than those upheld in other cultures~\cite{johnson2022ghost}. Work by \citet{jakesch2022different} has highlighted that the values held by those developing AI systems differ from those of the broader populations interacting with the systems. The adjacent question of AI alignment -- how to build AI systems that act in line with their operators' goals and values -- has received comparatively more attention~\cite{askell2021general}, but primarily from a control and safety angle. A related topic, the political repercussions of social media and recommender systems~\cite{zhuravskaya2020political}, has received extensive research attention, however. After initial excitement about social media's democratic potential~\cite{khondker2011role}, it became evident that technologies that affect public opinion will be subject to powerful political and commercial interests~\cite{bradshaw2017troops}. Rather than mere technical platforms, algorithms become constitutive features of public life~\cite{gillespie2014relevance} that may change the political landscape~\cite{aral2019protecting}. Even without being designed to \x{shift} opinions, it has been found that algorithms may contribute to political polarization by reinforcing divisive opinions~\cite{bruns2019filter, cinelli2021echo, bail2018exposure}. \section{Methods} To investigate whether interacting with opinionated language models shifts people's writing and affects people's views, we conducted an online experiment asking participants \x{(N=1,506)} to respond to a social media post in a simulated online discussion using a writing assistant. The language model powering this writing assistant was configured to generate text supporting one or the other side of the argument. 
We compared the essays and opinions of participants to a control group that wrote their social media posts without writing assistance. \begin{figure*} \begin{center} \includegraphics[width=0.87\textwidth, trim=1cm 2.2cm 1cm 1cm]{images/screenshot.png} \end{center} \caption{\textbf{Screenshot of the writing task.} The task is shown on the top of the page, followed by usage instructions for the writing assistant. Below, participants read a Reddit-style discussion post to which they were asked to reply. The writing assistant displayed writing suggestions (shown in grey) extending participants' text. The participant in the screenshot wrote an argument critical of social media, but the model is configured to argue that social media is \textit{good} for society.} \label{fig:screenshot} \end{figure*} \subsection{Experiment design} To study interactions between model opinion and participant opinion in a realistic and relevant setting, we created the scenario of an opinionated discussion on social media platforms like Reddit. Such discussions have a large readership~\cite{medvedev2017anatomy}, pertain to political controversies, and are plausible application settings for writing assistants and language models. We searched ProCon.org\footnote{\url{https://www.procon.org/}}, an online resource for research on controversial issues, to identify a discussion topic. We selected ``Is Social Media Good for Society?'' as a discussion topic. We chose this topic because it is easily accessible and politically relevant but not considered so controversial that entrenched views may limit constructive debate. To run the experiment, we created a custom experimental platform combining a mock-up of a social media discussion page, a rich-text editor, and a writing assistant. The assistant was powered by a language generation server and included comprehensive logging tools.
To provide a realistic-looking social media mock-up, we copied the design of a Reddit discussion page and drafted a question based on the ProCon.org discussion topic. Figure~\ref{fig:screenshot} shows a screenshot of the experiment. We asked participants to write at least five sentences expressing their take on social media's societal impact. We randomly assigned participants to three different treatment groups: \begin{enumerate} \item \emph{Control group:} participants wrote their answers without a writing assistant. \item \emph{Techno-optimist language model treatment:} participants were shown suggestions from a language model configured to argue that social media is good for society. \item \emph{Techno-pessimist language model treatment:} participants received suggestions from a language model configured to argue that social media is bad for society. \end{enumerate} \subsection{Building the writing assistant} Similar to Google's \textit{Smart Compose}~\cite{Chen2019smartcompose} and Microsoft's predictive text in Outlook, the writing assistant in the treatment groups suggested possible continuations (sometimes called ``completions'') to text that participants had entered. We integrated the suggestions into a customized version of the rich-text editor Quill.js\footnote{\url{https://quilljs.com/}}. The client sent a generation request to the server whenever a participant paused their writing for a certain amount of time (750ms). Including round-trip and generation time, a suggestion appeared on participants' screens about 1.5 seconds after they paused their writing. When the editor client received a text suggestion from the server, it revealed the suggestion letter by letter with random delays calibrated to resemble a co-writing process (cf.~\cite{Lehmann2022muc}). Once the end of a suggested sentence was reached, the editor would pause and request from the server an extended generation until at least two sentences had been suggested. 
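The request-timing logic described above can be sketched with an injectable clock. The 750 ms pause threshold is the one reported; the class and method names are illustrative, and the real client's streaming letter-by-letter reveal and sentence-boundary continuation are omitted here:

```python
PAUSE_THRESHOLD = 0.75  # seconds of typing inactivity before requesting a suggestion


class SuggestionDebouncer:
    """Fire a generation request only after the writer pauses long enough."""

    def __init__(self, threshold=PAUSE_THRESHOLD):
        self.threshold = threshold
        self.last_keystroke = None
        self.pending = False

    def on_keystroke(self, now):
        # Any new keystroke resets the pause timer and re-arms the request.
        self.last_keystroke = now
        self.pending = True

    def should_request(self, now):
        # Request a suggestion once the pause has lasted past the threshold,
        # and only once per pause.
        if self.pending and now - self.last_keystroke >= self.threshold:
            self.pending = False
            return True
        return False


d = SuggestionDebouncer()
d.on_keystroke(0.0)
print(d.should_request(0.5))  # still typing recently: no request yet
print(d.should_request(0.8))  # pause exceeded 750 ms: request once
print(d.should_request(0.9))  # already requested for this pause
```

Passing the clock value in, rather than reading the system clock inside the class, keeps the timing logic deterministic and testable.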
Participants could accept each suggested word by pressing the tab key or clicking an accept button on the interface. In addition, they could reset the generation, requesting a new suggestion by pressing a button or key. We hosted the required cloud functions, files, and interaction logs on Google's Firebase platform. \subsection{Configuring an opinionated language model} In this study, we experimented with language models that \textit{strongly} favored one view over another. We chose a strong manipulation as we wanted to explore the \textit{potential} of language models to affect users' opinions and understand whether they could be used or abused to shift people's views~\cite{bagdasaryan2021spinning}. We used GPT-3~\cite{brown2020language} with manually designed prompts to generate text suggestions for the experiment in real-time. Specifically, we accessed OpenAI's most capable 175B parameter model (``text-davinci-002''). \x{We used temperature sampling, a method for choosing a specific next token from the set of likely next tokens inspired by statistical thermodynamics. We set the sampling temperature (randomness parameter) to 0.85 to generate suggestions that are varied and creative. We set the frequency and presence penalty parameters to 1 to reduce the chance that the model suggestions would become repetitive. We also prevented the model from producing new lines, placeholders, and lists by setting logit bias parameters that reduced the likelihood of the respective tokens being selected. } \x{We evaluated different techniques to create an opinionated model, i.e., a model that \emph{likely supports a certain side of the debate} when generating a suggestion. We used prompt design \cite{lester_constant_2022}, a technique for guiding frozen language models to perform a specific task. Rather than updating the weights of the underlying model, we concatenated an engineered prompt to the input text to increase the chance that the model generates a certain opinion.
Specifically, we inserted the prefix} \emph{``Is social media good for society? {Explain why social media is good/bad for society:}''} before participants' written texts when generating continuation suggestions. \x{The engineered prompt was not visible to participants in their editor UI; it was inserted in the backend before generation and removed from the generated text before showing it to participants. } \x{Initial experimentation and validation suggested that the prompt produced the desired opinion in the generated text, but when participants strongly argued for another opinion in their writing, the model's continuations would follow their opinion. In addition to the prefix prompt, we thus developed an infix prompt that would be inserted throughout participants' writing to reinforce the desired opinion.} We inserted the snippet (\emph{``{One sentence continuing the essay explaining why social media is good/bad:}''}) \x{right before the last sentence that participants had written. This additional prompt guided the model's continuation towards the target opinion even if participants had articulated a different opinion earlier in their writing. } Validation of the model opinion configuration is provided in section~\ref{sec:validation}. We also experimented with fine-tuning~\cite{howard2018universal} \x{to guide the models' opinion, but the fine-tuned models} did not consistently produce the intended opinion. \subsection{Outcome measures and covariates} We collected different types of outcome measures to investigate interactions between participants' opinions and the model opinion: \emph{Opinion expressed in the post:} To evaluate expressed opinion, we split participants' written texts into sentences and asked crowd workers to evaluate the opinion expressed in each sentence. Each crowd worker assessed 25~sentences, indicating whether each argued that social media is good for society, bad, or both good and bad.
A fourth label was offered for sentences that argued neither or were unrelated. For example, \emph{``Social media also promotes cyber bullying which has led to an increase in suicides'' (P\#421)} was labeled as arguing that social media is bad for society, while \emph{``Social media also helps to create a sense of community'' (P\#1169)} was labeled as \emph{social media is good for society}. We collected one to two labels for each sentence participants wrote, as well as labels for a sample of the writing assistant's suggestions. In sentences where we collected multiple labels, the labels provided by different raters agreed 84.1\% of the time (Cohen's $\kappa=0.76$). \emph{Real-time writing interaction data:} We gathered comprehensive keystroke-level logs of how participants interacted with the model's suggestions. We recorded which text the participant had written, what text the model had suggested, and what suggestions participants had accepted from the writing assistant. We measured how long they paused to consider suggestions and how many suggestions they accepted. \emph{Opinion survey (post-task):} After finishing the writing task, participants completed an opinion survey. The central question, ``Overall, would you say social media is good for society?'' was designed to assess \x{shifts} in participants' attitudes. This question was not shown immediately after the writing task to reduce demand effects.
Secondary questions were asked to understand participants' opinions in more detail: ``How does social media affect your relationships with friends and family?'', ``Does social media usage lead to mental health problems or addiction?'', ``Does social media contribute to the spread of false information and hate?'', ``Do you support or oppose government regulation of social media companies?'' The questions were partially adapted from Morning Consults' National Tracking Poll~\cite{consultnational}; answers were given on typical 3- and 5-point Likert scales. \emph{User experience survey (post-task):} Participants in the treatment groups completed a survey about their experience with the writing assistant following the opinion survey. They were asked, ``How useful was the writing assistant to you?'', whether ``The writing assistant understood what you wanted to say'' and whether ``The writing assistant was knowledgeable and had expertise.'' To explore participants' awareness of the writing assistant's opinion and \x{its effect on their own views}, we asked them whether ``The writing assistant's suggestions were reasonable and balanced'' and whether ``The writing assistant inspired or changed my thinking and argument.'' Answers were given on a 5-point Likert scale from ``strongly agree'' to ``strongly disagree.'' An open-ended question asked participants what they found most useful or frustrating about the writing assistant. \emph{Covariates:} We asked participants to self-report their age, gender, political leaning, and their highest level of education at the end of the study. We also constructed a ``model alignment'' covariate estimating whether the opinion the model supported was aligned with the participant's opinion. We did not ask participants to report their overall judgment before the writing task to avoid commitment effects. 
Instead, we asked them at the end of the study whether they believed social media was good for society before participating in the discussion. While imperfect, this provides a proxy for participants' pre-task opinions. It is biased by the treatment effect observed on this covariate, which amounts to 14\% of its standard deviation. \subsection{Participant recruitment} We recruited 1,506 participants (post-exclusion) for the writing task, corresponding to 507, 508, and 491 individuals in the control, techno-optimist, and techno-pessimist treatment groups, respectively. The sample size was calculated based on effect sizes observed in the pilot studies' post-task question, ``Overall, would you say social media is good for society?'' at a power of 80\%. The sample was recruited through Prolific~\cite{palan2018prolific}. The sample included US-based participants at least 18 years old (M=37.7, SD=14.2); 48.5\% self-identified as female, and 48.6\% identified as male. \x{38 participants identified as non-binary and eight preferred to self-describe or not disclose their gender identity. } Six out of ten indicated liberal leanings; 57.1\% had received at least a Bachelor's degree. Participants who failed the pre-task attention check (8\%) were excluded. Six percent of participants admitted to the task did not finish it. We paid participants \$1.50 for an average task time of 5.9 minutes based on an hourly compensation rate of \$15. For the labeling task, we recruited a similar sample of 500 participants through Prolific. The experimental protocols were approved by the Cornell University Institutional Review Board. \subsection{Data sharing} \x{The experiment materials, analysis code, and data collected are publicly available through an Open Science repository (\url{https://osf.io/upgqw/}}). A research assistant screened the data, and records with potentially privacy-sensitive information were removed before publication.
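The sample-size reasoning above can be sketched with a standard two-group, two-sided normal approximation. The effect size used here (d=0.22) is an illustrative placeholder, not necessarily the pilot value actually used, and the function name is our own:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison
    of means, using the normal approximation: n = 2 * ((z_a + z_b) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size_d) ** 2)

# Illustrative: a small-to-moderate effect of d = 0.22 at 80% power and
# alpha = 0.05 requires roughly a few hundred participants per group.
print(n_per_group(0.22))
```

Detecting smaller effects at the same power requires quadratically more participants, which is why the smallest effect of interest drives the recruitment target.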
\section{Results} We first analyze the opinions participants expressed in their social media posts. We then examine whether participants may have accepted the models' suggestions out of mere convenience and whether the model influenced participants' opinions in a later survey. Finally, we present data on participants' perceptions of the model's opinion and influence. The reported statistics are based on a logistic regression model. \subsection{Did the interactions with the language model affect participants' writing?} \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/written-opinion.pdf} \end{center} \caption{\textbf{Participants assisted by a model supportive of social media were more likely to argue that social media is good for society in their posts (and vice versa).} N\textsubscript{s}=9,223 sentences written by N\textsubscript{p}=\x{1,506} participants evaluated by N\textsubscript{j}=500 judges. The y-axis indicates whether participants wrote their social media posts with assistance from an opinionated language model that was supportive (top) or critical of social media (bottom). The x-axis shows how often participants argued that social media is bad for society (blue), good for society (orange), or both good and bad (white) in their writing.} \label{fig:written-opinion} \end{figure*} Figure~\ref{fig:written-opinion} shows how often participants in each of the treatment conditions (y-axis) argued that social media is good or bad for society (x-axis) in their writing. The social media posts written by participants in the control group (middle row) were slightly critical of social media: They argued that social media is bad for society in 38\% and that social media is good in 28\% of their sentences. In about 28\% of their sentences, control group participants argued that social media is both good and bad, and 11\% of their sentences argued neither or were unrelated. 
Participants who received suggestions from a language model supportive of social media (top row of Figure~\ref{fig:written-opinion}) were 2.04 times more likely than control group participants (p<0.0001, 95\% CI [1.83, 2.30]) to argue that social media is good. In contrast, participants who received suggestions from a language model that criticized social media (bottom row) were 2.0 times more likely (p<0.0001, 95\% CI [1.79, 2.24]) to argue that social media is bad than control group participants. We conclude that using an opinionated language model \x{affected} participants' writing such that the text they wrote was more likely to support the model's preferred view. \subsection{Did participants accept the model's suggestions out of mere convenience?} Participants may have accepted the models' suggestions out of convenience, even though the suggestions did not match what they would have wanted to say. Paid participants in online studies, in particular, may be motivated to accept suggestions to swiftly complete the task. \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/ux-accepts.pdf} \end{center} \caption{\textbf{Participants were more likely to accept suggestions if the model's opinion aligned with their own views.} N\textsubscript{s}=6,142 sentences by N\textsubscript{p}=1,000 participants. The x-axis shows how many of the sentences participants had written themselves (blue), together with the model (white), or fully accepted from the model's suggestions (orange). The y-axis disaggregates the data based on whether the model suggestions were in line with participants' likely pre-task opinion.} \label{fig:acceptance-ratio} \end{figure*} Our data shows that, across conditions and treatments, most participants did not blindly accept the model's suggestions but interacted with the model to co-write their social media posts.
On average, participants wrote 63\% of their sentences themselves without accepting suggestions from the model (compare Figure~\ref{fig:acceptance-ratio}). About 25\% of participants' sentences were written by both the participant and the model, which typically meant that the participant wrote some words and accepted the model's remaining sentence suggestion. Only 11.5\% of sentences were fully accepted from the model. Participants whose personal views were likely aligned with the model were more likely to accept suggestions, while participants with opposing views accepted fewer suggestions. About one in four participants did not accept any model suggestion, and one in ten participants had more than 75\% of their post written by the model. \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/written-opinion-time.pdf} \end{center} \caption{\textbf{The opinion \x{differences} in participants' writing were larger when they finished the task quickly.} \x{N=1,506}. The y-axis shows the mean opinion expressed in participants' social media posts based on aggregated sentence labels ranging from -1 for ``social media is bad for society'' to 1 for ``social media is good for society''. The x-axis indicates how much time participants took to write their posts. For reference, the left panel shows expressed opinions aggregated across writing times.} \label{fig:opinion-time} \end{figure*} \subsubsection{Did conveniently accepted suggestions increase the observed differences in written opinion?} The writing of participants who spent little time on the task was more \x{affected} by the model's opinion. We use the time participants took to write their posts to estimate to what extent they may have accepted suggestions without due consideration. For a concise statistical analysis, we treat the ordinal opinion scale as an interval scale. 
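Concretely, this interval-scale coding and a derived effect size can be sketched as follows. The sentence labels and groups are invented for illustration, and the pooled-standard-deviation Cohen's d shown is one common variant rather than necessarily the exact estimator used in this analysis:

```python
from statistics import mean, stdev

# Label-to-score coding on the -1..1 interval scale.
SCORE = {"bad": -1, "good": 1, "both": 0, "neither": 0}

def post_opinion(sentence_labels):
    """Mean opinion of one post, averaging its coded sentence labels."""
    return mean(SCORE[label] for label in sentence_labels)

def cohens_d(group_a, group_b):
    """Effect size using an equal-weight pooled standard deviation."""
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = ((sa ** 2 + sb ** 2) / 2) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Invented example posts: treatment group leaning positive, control mixed.
treatment = [post_opinion(p) for p in [["good", "good", "both"], ["good", "neither", "good"]]]
control = [post_opinion(p) for p in [["bad", "both", "good"], ["bad", "bad", "neither"]]]
print(cohens_d(treatment, control))  # positive: treatment wrote more favorably
```

A post arguing both sides equally lands at zero, so the score captures the net lean of a post rather than the strength of individual arguments.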
Since the opinion scale has comparable-size intervals and a zero point, continuous analysis is meaningful and justifiable~\cite{knapp1990treating}. We treat ``social media is bad for society'' as -1 and ``social media is good for society'' as 1. Sentences that argue both or neither are treated as zeros. Figure~\ref{fig:opinion-time} shows the mean opinion expressed in participants' social media posts depending on treatment group and writing time. The left panel shows participants' expressed opinions across times for reference, with a mean opinion difference of about 0.29 (p<0.001, 95\% CI [0.25, 0.33], SD=0.58) between each treatment group and the control group (corresponding to a large effect size of d=0.5). Participants who took little time to write their posts (less than 160 seconds, left-most data in right panel) were more affected by the opinion of the language model (0.38, p<0.001, 95\% CI [0.31, 0.45]). Our analysis shows that accepting suggestions out of convenience contributed to the differences in the written opinion. However, even for participants who took four to six minutes to write their posts, we observed significant differences in opinions across treatment groups (0.20, p<0.001, 95\% CI [0.13, 0.27], corresponding to a treatment effect of d=0.34). \subsection{Did the language model affect participants' opinions in the attitude survey?} \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/survey-opinion.pdf} \end{center} \caption{\textbf{Participants interacting with a model supportive of social media were more likely to say that social media is good for society in a later survey (and vice versa).} N\textsubscript{r}=\x{1,506} survey responses by N\textsubscript{p}=\x{1,506} participants. The y-axis indicates whether participants received suggestions from a model supportive or critical of social media during the writing task.
The x-axis shows how often they said that social media was good for society (orange) or not (blue) in a subsequent attitude survey. Undecided participants are shown in white. Brackets indicate significant opinion differences at the **p<0.005 and ***p<0.001 level.} \label{fig:survey-opinion} \end{figure*} The opinion differences in participants' writing may be due to \x{shifts} in participants' actual opinion caused by interacting with the opinionated model. We evaluate whether interactions with the language model \x{affected} participants' attitudes expressed in a post-task survey asking participants whether they thought social media was good for society. An overview of participants' answers is shown in Figure~\ref{fig:survey-opinion}. The figure shows the frequency of different survey answers (x-axis) for the participants in each condition (y-axis). Participants who did not interact with the opinionated models (middle row in Figure~\ref{fig:survey-opinion}) were balanced in their evaluations of social media: 33\% answered that social media is not good for society (middle, blue); 35\% said social media is good for society. In comparison, 45\% of participants who interacted with a language model supportive of social media (top row) answered that social media is good for society. Converting participants' answers to an interval scale, this \x{difference} in opinion corresponds to an effect size of d=0.22 (p<0.001). Similarly, participants that had interacted with the language model critical of social media (bottom row) were more likely to say that social media was bad for society afterward (d=0.19, p<0.005). \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/written-opinion-sentence.pdf} \end{center} \caption{\textbf{Participants' writing was affected by the model equally throughout the writing process.} N\textsubscript{s}=9,223 sentences by N\textsubscript{p}=1,506 participants. 
The y-axis shows the mean opinion expressed in participants' sentences. The x-axis indicates whether the sentence was positioned earlier or later in participants' social media posts. Since most participants wrote five sentences as requested, the quintiles roughly correspond to sentence numbers.} \label{fig:suggestions-position} \end{figure*} \subsubsection{Did the opinionated model gradually convince the participant?} While we cannot ascertain the mechanism of persuasion, our results provide further insight into how this process might have occurred. Figure~\ref{fig:suggestions-position} shows how participants' written opinions \x{evolved} throughout their writing process. In the control group (shown in black), participants tended to start their posts with two positive statements, followed by two more critical statements and an overall critical conclusion. Participants interacting with a model that evaluated social media positively (orange) consistently evaluated social media more favorably throughout their entire statement. Participants interacting with a model critical of social media (blue) also wrote sentences that were more critical of social media, starting with their first sentence. Similar to the control group, they were more positive at the beginning and more critical towards the end of their post, showing that the writing assistant augmented rather than replaced their narrative. \subsection{Were participants aware of the model's opinion and influence?} After the writing task, we asked treatment group participants about their experience with the writing assistant. We use their answers to estimate to what extent they were aware of the model's opinion and influence. \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/ux-expertise.pdf} \end{center} \caption{\textbf{Participants viewed the model as knowledgeable -- even if it did not share their opinion. } N\textsubscript{p}=1,000 treatment group participants. 
The x-axis indicates whether participants believed the language model had expertise. The y-axis indicates whether the model's opinion was aligned with participants' views. } \label{fig:ux-expertise} \end{figure*} The vast majority of participants thought the language model had expertise and was knowledgeable -- even if it contradicted their personal views. As shown in Figure \ref{fig:ux-expertise}, 84\% of participants said that the assistant was knowledgeable and had expertise when the language model supported their opinion. When the model contradicted their opinion, only 15\% of participants said that it was not knowledgeable or lacked expertise. \begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/ux-balanced.pdf} \end{center} \caption{\textbf{Participants were often unaware of the model's opinion.} N\textsubscript{p}=1,000 treatment group participants. The x-axis indicates whether participants found the model's suggestions balanced and reasonable. The y-axis indicates whether the model's opinion was aligned with participants' personal views. } \label{fig:ux-balanced} \end{figure*} While the language model was configured to support one specific side of the debate, the majority of participants said that the model's suggestions were balanced and reasonable. Figure~\ref{fig:ux-balanced} shows that, in the group of participants whose opinion was supported by the model, only 10\% noticed that its suggestions were imbalanced (top row in blue). When the model contradicted participants' opinions, they were more likely (30\%) to notice its skew, but still, more than half agreed that the model's suggestions were balanced and reasonable (bottom row in orange). 
\begin{figure*} \begin{center} \includegraphics[width=0.82\textwidth, trim=0cm 0cm 0cm 0cm]{figures/ux-influence.pdf} \end{center} \caption{\textbf{Participants interacting with a model that supported their opinion were more likely to indicate that the model \x{affected} their argument.} N\textsubscript{p}=1,000 treatment group participants. The x-axis indicates whether participants thought that the model \x{affected} their argument. The y-axis indicates whether the model's opinion was aligned with participants' personal views. } \label{fig:ux-influence} \end{figure*} Figure~\ref{fig:ux-influence} shows that the majority of participants were not aware of the model's \x{effect} on their writing. Participants using a model aligned with their view -- and accepting suggestions more frequently -- were slightly more aware of the model's effect (34\%, top row in orange). In comparison, only about 20\% of the participants who did not share the model's opinion believed that the model influenced them. Overall, we conclude that participants were often unaware of the model's opinion and influence. \subsection{Robustness and validation} \label{sec:validation} We finally validate that the experimental manipulation worked as intended and address potential concerns about experimenter demand effects. \subsubsection{Did manipulating the models' opinion work as intended?} To validate that the prompting technique produced model output that was opinionated as intended, we sampled a subset of all suggestions shown to participants and asked raters in the sentence labeling task to indicate the opinion expressed in each. We found that of the full sentences suggested by the model, 86\% were labeled as supporting the intended view, and 8\% were labeled as balanced.
For partially suggested sentences, that is, suggestions where the participants had already begun a sentence and the model completed it, the ratio of suggestions that were opinionated as intended dropped to 62\% (another 19\% argued that social media is both good and bad). Overall, these numbers indicate that the prompting technique guided the model to generate the target opinion with a high likelihood. \subsubsection{Could participants have accepted the model suggestion and \x{shifted} their opinion to satisfy the experimenters?} As in all subject-based research, there is a chance that participants \x{adapted} their behavior to fit their interpretation of the study's purpose. However, we have reason to believe that demand effects do not threaten the validity of our results. When participants were asked what they perceived as the purpose of the study, most thought we were studying what people think about social media or how they use writing assistants. Only about 14\% mentioned that we might be studying the assistants' effect on people's opinions. Further, based on our post-task survey, most participants were not aware of the model's opinion and believed that the model did not \x{affect} their argument. These results suggest that participants did not adapt their views because they felt the research team expected them to. \section{Discussion} \x{The findings show that opinionated AI language technologies can affect what users write and think. In our study, participants assisted by an opinionated language model were more likely to support the model's opinion in a simulated social media post} than control group participants who did not interact with a language model. Even participants who took five minutes to write their post -- ample time to write the five required sentences -- were significantly affected by the model's preferred view, showing that conveniently accepted suggestions do not explain the model's influence. 
Most importantly, the interactions with the opinionated model also led to opinion differences in a later attitude survey. The opinion shifts in the survey suggest that the differences \x{in written opinion were associated with a shift in personal attitudes. We attribute the shifts in written opinion and post-task attitude to a new form of technology-mediated influence that we call \emph{latent persuasion} by language models.} \subsection{Theoretical interpretation} \x{The literature on social influence and persuasion \cite{rashotte2007social} provides ample evidence that our thoughts, feelings, and attitudes shift due to interaction with others. Our results demonstrate that co-writing with an opinionated language model similarly shifted people's writing and attitudes. We discuss below how \emph{latent persuasion} by AI language technologies extends and differs from traditional social influence and conventional forms of technology-mediated persuasion \cite{simons2011persuasion}. We consider how the model's influence can be explained by discussing two possible vectors of influence inspired by social influence theory \cite{rashotte2007social}--informative and normative persuasion-- and a third vector of influence extending the nudge paradigm \cite{leonard2008richard, fogg2002persuasive} to the realm of opinions.} \subsubsection{Informational influence} \x{The language model may have influenced participants' opinions by providing new information or compelling arguments, that is, through \emph{informational influence} \cite{myers2008social}. Some of the suggestions the language model provided may have made participants think about benefits or drawbacks of social media that they would not have considered otherwise, thus influencing their thinking. While the language model may have provided new information to writers in some cases, our secondary findings indicate that \emph{informational influence} may not fully explain the observed shifts in opinion. 
First, the model influenced participants consistently throughout the writing process. Had the language models influenced participants' views through convincing arguments, one would expect a gradual or incremental change of opinion, as has been observed for human co-writers \cite{Kimmerle2012UsingCF}. Further, our participants were largely unaware of the language model's skewed opinion and influence. The lack of awareness of the models' influence supports the idea that the model's influence was not only through conscious processing of new information but also through the subconscious \cite{petty1986elaboration} and intuitive processes \cite{kahneman2011thinking}.} \subsubsection{Normative influence} The language model may have shifted participants' views through \emph{normative influence} \cite{myers2008social}. \x{Under normative influence, people adapt their opinions and behaviors based on a desire to fulfill others' expectations and gain acceptance. This explanation aligns with the \textit{computers are social actors} paradigm \cite{nass1994computers}, where the writing assistant may have been perceived as an independent social actor. People may have felt the need to reciprocate the language model, applying the social heuristics they apply in interactions with other humans. The \emph{normative influence} explanation is supported by the finding that} participants in our experiment attributed a high degree of expertise to the assistant (see Figure \ref{fig:ux-expertise}). The wider literature similarly suggests that people may regard AI systems as authoritative sources \cite{kapania2022because, logg2019algorithm, araujo2020ai}. \x{However, our experimental design presented the language model as a support tool and did not personify the assistant. 
An ad-hoc analysis of participants' comments on the assistant suggested that they did not feel obliged to reciprocate or comply with the models' suggestions, indicating that the strength of normative influence may have been limited. } \subsubsection{\x{Behavioral influence}} Large language models may \x{affect people's views by changing behaviors related to opinion formation.} \x{The suggestions may have interrupted participants' thought processes and driven them to spend time evaluating the suggested argument \cite{Bhat2022, Buschek2021emails}.} Similar to \emph{nudges}, the suggestions changed participants' behavior, prompting participants to consider the models' view and even accept it in their writing. According to self-perception theory \cite{bem1972self}, such changes in behavior may lead to changes in opinion. People who do not have strongly formed attitudes may infer their opinion from their own behavior. \x{Even participants with pre-formed opinions on the topic may have changed their attitudes by being encouraged to communicate a belief that runs counter to their own belief \cite{WAN2010162, becker2006peer}. The finding that the model strongly influenced participants who accepted the models' suggestions frequently corroborates that some of the opinion influence has been through behavioral routes. The \emph{behavioral influence} route implies that the user interface and interaction design of AI language systems mediate the model's influence as they determine when, where, and how the generated opinions are presented.} \x{We conclude that further research will be required to identify the mechanisms behind \emph{latent persuasion} by language models. Our secondary findings suggest that the influence was at least partly subconscious and not simply due to the convenience and new information that the language model provided. 
Rather, co-writing with the language model may have changed participants' opinion formation process on a behavioral level.} \subsection{Implications for research and industry} Our results caution that interactions with opinionated language models affect users' opinions, even if unintended. \x{The results also show how simple it is to make models highly opinionated using accessible methods like prompt engineering. How can researchers, AI practitioners, and policymakers respond to this finding? We believe that our results imply that we must be more careful about the opinions we build into AI language technologies like GPT-3.} Prior work on the societal risks of large language models has warned that models learn stereotypes and biases from their training data \cite{bender2021dangers, caliskan2017semantics, garrido2021survey} that may be amplified through widespread deployments \cite{Blodgett_power_2020}. Our work highlights the possibility that large language models reinforce not only stereotypes but all kinds of opinions -- from whether social media is good to whether people should be vegetarians and who should be the next president. Initial tools have been developed for monitoring and mitigating generated text that is discriminatory~\cite{huang2019reducing, brown2020language, nozza2021honest} or otherwise offensive~\cite{askell2021general}. We have no comparable tools for monitoring the opinions built into large language models and in the text they generate during use. A first exploration of the opinions built into GPT-3 by \citet{johnson2022ghost} suggests that the model's preferred views align with dominant US public opinion. In addition, a version of GPT trained on 4chan data led to controversy about the ideologies that training data should not contain. \x{We need theoretical advancements and a broader democratic discourse on what kind of opinions a well-designed model should ideally generate.
} \x{Beyond unintentional opinion shifts through carelessly calibrated models, our results raise concerns about new forms of targeted opinion influence. If large language models affect users' opinions, their influence could be used for beneficial social interventions, like reducing polarization in hostile debates or countering harmful false beliefs. However, the persuasive power of AI language technology may also be leveraged by commercial and political interest groups to amplify views of their choice, such as a favorable assessment of a policy or product. In our experiment, we have explored the scenario of influence through a language-model-based writing assistant in an online discussion, but opinionated language models could be embedded in other applications like predictive keyboards, smart replies, and voice assistants. Like search engine and social media network operators \citep{knoll2016advertising}, operators of these applications may choose to monetize the persuasive power of their technology.} \x{As researchers, we can advance an early understanding of the mechanisms and dangers of \emph{latent persuasion} through AI language technologies. Studies that investigate how \emph{latent persuasion} differs from other sorts of influence, how it is mediated by design factors and users' traits, and engineering work on how to measure and guide model opinions can support product teams in reducing the risk of misuse and legislators in drafting policies that preempt harmful forms of \emph{latent persuasion}.} \subsection{Limitations and generalizability} \x{As appropriate for an early study, our experiment has several limitations: We only tested whether a language model affected participants' views on a single topic. We chose this topic as people had mixed views on it and were willing to deliberate. Whether our findings generalize to other topics, particularly where people hold strong entrenched opinions, needs to be explored in future studies. 
Further, we only looked at one specific implementation of a writing assistant powered by GPT-3. Interacting with different language models through other applications, such as a predictive keyboard that only suggests single words or an email assistant that handles entire correspondences, may lead to different influence outcomes. } Our results provide initial evidence that language models in writing assistance tasks affect users' views. \x{How large this influence is compared to other types of influence, and to what extent its effects persist over time, will need to be explored in future studies. For this first experiment, we created a \textit{strongly opinionated} model. In most cases, model opinions in deployed applications will be less definite than in our study and subject to chance variation.} However, our design also underestimates the opinion shifts that even weakly opinionated models could cause: In the experiment, participants only interacted with the model once. In contrast, people will regularly interact with deployed models over an extended period. Further, in real-world settings, people will not interact with models individually, but millions will interact with the same model, and what they write with the model will be read by others. Finally, when language models insert their preferred views into people's writing, they increase the prevalence of their opinion in future training data, leading to even more opinionated future models.
Yet, given the weight of our research findings, we decided to share our results with all participants in a late educational debrief: In a private message, we invited crowdworkers who had participated in the experiment and pilot studies to a follow-up task explaining our findings. We reminded participants of the experiment, explained the experimental design, and presented our results in understandable language. We also provided them with a link to a website with a nonpartisan overview of the pros and cons of social media and asked them whether they had comments about the research. 1,469 participants completed the educational debrief in a median time of 109 seconds, for which they received a bonus payment of \$0.50. We asked participants for open-ended feedback on our experiment so they could voice potential concerns. 839 participants provided open-ended comments on our experiment and results. Their feedback was exceptionally positive and is included in the Open Science Repository.} \x{Considering the broader ethical implications of our results, we are concerned about misuse. We have shown how simple it is to create highly opinionated models. Our results might motivate some to develop technologies that exploit the persuasive power of AI language technology. In disclosing a new vector of influence, we face ethical tensions similar to cybersecurity researchers: On the one hand, publicizing a new vector of influence increases the chance that someone will exploit it; on the other hand, only through public awareness and discourse can effective preventive measures be taken at the policy and development level. While risky, decisions to share vulnerabilities have led to positive developments in computer safety \citep{macnish2020ethics}. We hope our results will contribute to an informed debate and early mitigation of the risks of opinionated AI language technologies. } \section{Appendix}
\section{\toolname: Algorithm} \label{sec:algo} \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{plots/our-architecture.pdf} \caption{\toolname's architecture.} \label{fig:our-architecture} \end{figure} \Cref{fig:our-architecture} shows the architecture of \toolname. We train a model (\emph{Auxiliary Model} in the diagram) that takes as input a user's embedding from the trained recommender model and an item's attribute vector. This attribute vector is multi-hot -- it has 1s for the attributes the item has and 0s otherwise (instead of binary values, the vector could also have continuous values). The model is trained for all ratings present in the training dataset, and the goal is to reproduce the rating as given by the user (\texorpdfstring{\emph{R\textsubscript{orig}}}{} in the diagram). The model is trained using a mean-squared error between the original rating and the predicted rating (\texorpdfstring{\emph{R\textsubscript{pred}}}{} in the diagram). The auxiliary model can be instantiated with any ML model. For experiments, we used three models: Linear, a 2-layer neural network, and a 4-layer neural network. We train the model using Adam and SGD optimizers until convergence. \subsection{Generating Explanations} Once the auxiliary model is trained, we use it to generate explanations for recommended items. For a given user-item pair for which an explanation needs to be generated, \toolname produces a sequence of that item's attributes ranked in decreasing order of that user's preference. We term this a user's \emph{specific preference} over an item's attributes. The preference of a particular attribute (for a given user-item pair) is determined as the loss in predicted rating if that attribute were removed from the item. For example, consider a movie whose genres are \emph{Crime}, \emph{Documentary}, and \emph{Horror}. For a user Alice, the auxiliary model predicts a rating of 4.2 for this movie.
Now, we can use the auxiliary model to predict this movie's rating if the genres are zeroed out one by one. \Cref{tab:specific-pref} shows this procedure. We zero out the attributes present in the item, one by one, and see the drop in rating (last column). The higher the drop, the higher the rank of the removed attribute. Our approach of attributing importance to attributes falls in a well-studied domain of explainability -- broadly termed as removal-based explanation methods. We provide more details in \Cref{sec:appendix}. In the example, the ranked order of preference is \emph{Crime}, \emph{Horror}, and \emph{Documentary}. This is our analog of \emph{local interpretability} in the context of classification in ML provided by popular methods like LIME \citep{ribeiro_why_2016} and SHAP \citep{shap-paper}. \begin{table}[h] \centering \caption{Illustration of how \toolname computes the specific preference of a user over an item's attributes (in this case, genres of a movie). } \label{tab:specific-pref} \begin{tabular}{ccc} \toprule Attribute zeroed & Predicted rating & $\Delta$(Predicted rating) \\ \midrule \textcolor{gray}{No attribute zeroed} & 4.2 & -- \\ \emph{Horror} & 3.7 & 0.5 \\ \emph{Documentary} & 3.9 & 0.3 \\ \emph{Crime} & 3.2 & 1.0 \\ \bottomrule \end{tabular} \end{table} \toolname also generates the \emph{general preference} of a user over all attributes in a dataset by averaging the specific preferences over all the items that a user `liked' in the past (note that this is not the same as all items that a user rated in the past). This is our analog of \emph{global interpretability} in the context of classification in ML. \section{Appendix} \label{sec:appendix} \textbf{AMCF Evaluation Metrics: } AMCF also evaluate their general and specific preference. 
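As a concrete illustration of the attribute-ablation procedure from \Cref{tab:specific-pref}, the following Python sketch ranks an item's attributes by the drop in predicted rating when each is zeroed out. This is a minimal sketch, not \toolname's actual implementation: \texttt{aux\_model} and its \texttt{predict(user\_emb, attr\_vec)} interface are hypothetical placeholders for the trained auxiliary model.

```python
import numpy as np

def specific_preference(aux_model, user_emb, attr_vec, attr_names):
    """Rank an item's attributes by the drop in predicted rating when
    each attribute present in the item is zeroed out, one at a time."""
    base = aux_model.predict(user_emb, attr_vec)
    drops = {}
    for i in np.flatnonzero(attr_vec):       # only attributes the item actually has
        ablated = attr_vec.copy()
        ablated[i] = 0                       # simulate removing this one attribute
        drops[attr_names[i]] = base - aux_model.predict(user_emb, ablated)
    # larger drop in predicted rating -> higher-ranked attribute
    return sorted(drops.items(), key=lambda kv: kv[1], reverse=True)
```

For the example above (drops of 1.0, 0.3, and 0.5 for \emph{Crime}, \emph{Documentary}, and \emph{Horror}), sorting yields the ranking \emph{Crime}, \emph{Horror}, \emph{Documentary}; averaging such specific preferences over a user's liked items gives the general preference.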
For general preference, similar to our evaluation metrics, their primary metric is \emph{Top M recall at K}, i.e., the intersection between the top M ground truth user preferences and the top K predicted user preferences. To realize this metric, they need the ground truth user preference -- which is not present in the dataset -- and hence, similar to our technique, they need to simulate it. The way they compute the ground truth preference is unjustified and inexplicable. The process involves computing the weight of each item by removing the user and item bias terms, and then the preference of an attribute is just the sum of the weights of the items it occurs in. The latter part is still reasonable; however, we are uncertain about the weight calculation part. For evaluating specific preferences, they use the sorted order of general preference for the attributes present in that specific item -- which does not resolve the concerns mentioned above. We also use proxies to simulate the users' ground truth attribute preference; however, there are key differences in our evaluation when compared to AMCF: \begin{itemize}[leftmargin=*,itemsep=2pt] \item The proxies are more generalizable and reasonable, like the conditional probability of liking and odds of liking. \item We use multiple proxies and report results on all of them to avoid cherry-picking. \end{itemize} \textbf{Justification of the removal-based explanation:} \citet{Ians-paper-xai-by-removal-survey} developed a framework to categorize such methods along three dimensions: \begin{itemize}[leftmargin=*,itemsep=2pt] \item Attribute removal: how the approach removes attributes from the model, \item Model behavior: what model behavior is it observing, and \item Summary technique: how does it summarize an attribute's impact?
\end{itemize} \toolname, when instantiated in this framework, removes attributes by setting them to zero, analyzes prediction as the model behavior, and summarizes an attribute's impact by removing attributes individually. Since our attribute input vector is binary and indicates whether an attribute is present or not, simulating an attribute's removal by setting it to zero is a natural choice. Previous removal-based explanation methods have used \emph{prediction} or \emph{prediction loss} or \emph{dataset loss} for model behavior analysis. We chose \emph{prediction} instead of \emph{prediction loss} for our analysis because we wanted to get specific preference (analog of local interpretability) without requiring the original rating. Previous removal-based explanation methods have used \emph{removing individual attributes} or \emph{Shapley values} or \emph{trained additive models} to get attribute impact value. Removing individual attributes assesses the impact of an attribute by measuring the loss in prediction when that one attribute is removed, and this is what we choose. On the other hand, Shapley values take all subsets of attributes and then use the cooperative game theoretic formulation to assign impact value to each attribute. This has two disadvantages: \begin{itemize}[leftmargin=*] \item It creates all subsets of attributes -- which is exponential in the number of attributes, making the process very expensive. \item For creating all the subsets, it simulates removing many features that can potentially create attribute vectors that the auxiliary model has not seen, and thereby its prediction cannot be trusted in that part of the data manifold. \end{itemize} For these reasons, we choose the method of removing individual attributes. There are potential downsides to this choice as well.
If the ground truth reason for recommending a movie is that it is either \emph{Horror} or \emph{Crime}, zeroing out one of them will not make a difference in prediction, and hence their assigned impact will not be correct. \textbf{Justification of using Rank-Biased Overlap (RBO): } Comparing ranked lists can be achieved using various rank correlation metrics like Kendall's Tau and Spearman's correlation~\citep{rank-correlation}. These metrics have restrictions that the ranked lists must be conjoint, i.e., they both must contain all the items that the entire universe of items contains. This is not suitable for our metrics because the top-$k$ attributes from either the conditional probability or the odds of liking proxy might not have any common element with the ranking produced by any technique. Hence we choose to compare the ranked lists using the rank-biased overlap (RBO) metric~\citep{RBO-paper}, which was recently proposed to overcome the limitation posed by correlation-based metrics. Specifically, RBO is better than correlation-based metrics as: \begin{itemize} \item RBO does not require the two ranked lists to be conjoint, i.e., a ranked list having items that do not occur in the other list is acceptable, e.g., the similarity between [1, 3, 7] and [1, 5, 8] can be measured using RBO. \item RBO does not require the two ranked lists to be of the same length. \item RBO provides weighted comparison, i.e., discordance at higher ranks is penalized more than discordance at lower ranks. \end{itemize} \textbf{AMCF Post-Hoc Adaptation: } For adapting AMCF~\citep{amcf-paper} to be post-hoc, we made a minor modification to the architecture mentioned in their paper (see \Cref{fig:AMCF-PH}). We froze the left-hand side of the architecture and only trained the right-hand side, which consists of the attention network that aims to reconstruct the item's embedding from its attributes. Since the user and item embeddings were not trained, this made the technique post-hoc.
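The RBO comparison discussed above is straightforward to sketch in Python. This is a minimal sketch of the prefix-truncated form of the RBO sum; the extrapolated estimator from the original RBO paper is omitted for brevity.

```python
def rbo(list_a, list_b, p=0.9):
    """Prefix-truncated rank-biased overlap: a weighted sum of the set
    overlap between the two rankings at every prefix depth d, with
    geometrically decaying weight p**(d-1)."""
    depth = max(len(list_a), len(list_b))
    score = 0.0
    for d in range(1, depth + 1):
        # overlap of the top-d prefixes; the lists need not be conjoint
        overlap = len(set(list_a[:d]) & set(list_b[:d]))
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score
```

Because the overlap is computed per prefix depth with geometrically decaying weights, the two lists need not be conjoint or equally long, and disagreements near the top of the lists cost more than disagreements lower down, matching the three properties listed above.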
\begin{figure} \centering \includegraphics[width=\columnwidth]{plots/AMCF-PH.png} \caption{The architecture of AMCF Post-Hoc (AMCF-PH). The part within the blue colored region was not trained, thereby making the technique post-hoc. } \label{fig:AMCF-PH} \end{figure} \textbf{\Cref{fig:genre_dist_ml-100k} shows the genre distribution for Movielens-100K. } \begin{figure} \centering \includegraphics[width=\columnwidth]{plots/genre_count.pdf} \caption{The genre distribution for Movielens-100K dataset. The top-3 most popular genres: \emph{Action}, \emph{Comedy}, and \emph{Drama} occur in over 88\% of the movies. } \label{fig:genre_dist_ml-100k} \end{figure} \section{Evaluation} \label{sec:eval} Our experiments characterize the quality of explanations for every user by the number of liked items in the user history and the number of top-$k$ recommendations that can be explained. \subsection{Experimental Methodology} \textbf{Metrics. } As mentioned in \Cref{sec:related}, there are only two previous methods that can generate attribute-based explanations for recommender systems: LIME-RS~\citep{limers-paper} and AMCF~\citep{amcf-paper}. However, LIME-RS did not quantitatively evaluate its attribute-based explanations, and the AMCF did not justify its metric of choice (see \Cref{sec:appendix}). Hence, we propose a set of \met generalized metrics to evaluate any attribute-based explanation method for recommender systems. We evaluate both general and specific preferences in these metrics. \begin{enumerate}[leftmargin=*] \item \emph{Test set coverage: } This metric finds if the general preferences of a user identified by a technique have any intersection with the attributes of the items in the test set that they liked. For example, suppose the identified general preferences for a user Adam are \emph{Anime}, \emph{Comedy}, and \emph{Sci-Fi} and there is a movie that Adam liked whose genres are \emph{Action} and \emph{Anime}. In that case, we could count this movie as covered. 
Had the genre of the movie only been \emph{Action}, then it would not have been covered. For Movielens-$100K$, we consider a movie `liked' if a user has rated it $4$ or $5$. We only consider the top-$3$ general preferences for a user when measuring coverage. We report the mean coverage over all the users for this metric. \item \emph{Top-$k$ recommendations coverage: } This metric checks whether the top-$3$ identified general preferences cover (i.e., have any intersection with) the attributes of the top-$20$ items recommended to a user. We report the mean coverage over all the users for this metric. \item \emph{Personalization of the explanations: } Since a few attributes in most recommender datasets are very popular, i.e., they occur in almost all items, identifying such an attribute as a user's preference will provide almost 100\% coverage for the metrics mentioned above -- however, this might be an inaccurate and unpersonalized explanation. To measure how personalized the explanations are, we need to know the ground-truth users' preferences over the set of attributes -- unfortunately, this is unknown to us. To overcome this limitation, we propose two methods that act as reasonable proxies for a user's attribute preference (the gold standard would be to conduct a user study; however, that would involve collecting a new recommender dataset and asking users to tell us about their preferences, which we leave to future work): \begin{enumerate}[leftmargin=*] \item \emph{Conditional Probability of Liking given a genre is present: } This measures the probability that a user likes a movie, given that the attribute is present in the movie. It is the ratio of the number of times a genre is present in the movies a user likes (rated 4 or 5) to the number of times it is present in all the movies the user rated (rated 1 through 5). \item \emph{Odds of Liking vs. 
Disliking given a genre is present: } This measures the ratio of the number of times a genre is present in a movie the user likes (rated 4 or 5) to the number of times it is present in a movie that the user dislikes (rated 1 or 2). We use the training set for calculating both these preference proxies. \end{enumerate} Both provide a reasonable proxy for a user's general preference over genres, i.e., weights over the set of genres. We report four metrics considering such weights as ground-truth user preference: \begin{enumerate}[leftmargin=*] \item \emph{General preferences coverage:} This metric measures whether there is any intersection between the top-$k$ general preferences identified for a user and the top-$k$ preferences identified by either of the two proxies mentioned above. \item \emph{General preferences ranking:} This metric measures the similarity between the ranking of the top-$k$ general genre preferences identified for a user and the top-$k$ preferences identified by either of the two proxies. We use rank-biased overlap (RBO) as a measure of similarity between the two ranked lists (we justify this choice in \Cref{sec:appendix}). \item \emph{Specific preferences coverage:} Similar to general preferences, we also measure whether there is any intersection between the top-$k$ genre preferences for each item that a user liked in the training set (the specific genre preference for a user-item pair) and the top-$k$ genre preferences identified for this user by either of the two proxies. \item \emph{Specific preferences ranking:} We also measure the similarity in the ranking of the top-$k$ genre preferences for items liked by a user in the training set and the top-$k$ genre preferences identified for this user by either of the two proxies. \end{enumerate} For all the above metrics, we report the mean over all the users. Hence, we have a total of 8 metrics for measuring the personalization of the explanations. We used $k=3$ in the experiments. \end{enumerate} \textbf{Dataset. 
} We used the popular recommender dataset Movielens-$100K$ for our experiments. Movielens-$100K$ has 100,000 ratings for 1682 movies provided by 943 users. Each rating is an integer from 1 to 5, with higher values indicating stronger preference. The dataset also contains the genres of each movie as a multi-hot vector of size 18; the 18 genres are: \emph{Action, Adventure, Animation, Children's, Comedy, Crime, Documentary, Drama, Fantasy, Film-Noir, Horror, Musical, Mystery, Romance, Sci-Fi, Thriller, War, Western}, and each movie is categorized as having one or more genres. These are the attributes over which a technique identifies a user's preference. We used 70\% of the dataset for training a matrix factorization-based recommender system and 30\% of the data as the test set (the data split is stratified by users). \textbf{Baselines. } We compare \toolname to six baseline methods: \begin{itemize}[leftmargin=*,itemsep=2pt] \item \emph{LIME-RS~\citep{limers-paper}: } The only previous post-hoc attribute-based explainability method. \item \emph{AMCF~\citep{amcf-paper}: } Another attribute-based explainability method -- this is not a post-hoc method. \item \emph{AMCF-PH: } The AMCF approach adapted to be post-hoc (details in \Cref{sec:appendix}). \item \emph{Global popularity: } The genres that are the most popular among all the movies that are liked across the entire dataset. For Movielens-$100K$, these were \emph{Action}, \emph{Comedy}, and \emph{Drama}. \item \emph{User-specific popularity: } The most popular genres for a particular user, calculated based on only the movies they liked (rated 4 or higher). \item \emph{Random: } This baseline serves as a control: for each metric, it selects a random set of genres for each user-item pair when computing specific preferences, and for each user when computing general preferences. 
\end{itemize} \input{table-3} \subsection{Discussion} \Cref{tab:results} reports all the metrics for all the baselines and \toolname. We use it to answer our research questions: \begin{enumerate}[leftmargin=*] \item \emph{LIME-RS: } The test set and top-$k$ recommendations coverage provided by LIME-RS is abysmally low, even worse than the \emph{random} baseline. It trails \toolname in all the \met metrics, barring the ranking of general preferences when the ground-truth preference is calculated using the conditional probability of liking. \item \emph{AMCF: } This approach trails \toolname in all metrics. \item \emph{AMCF-PH: } The post-hoc adapted version of AMCF also trails \toolname in all metrics. \item \emph{Global popularity: } It achieves the highest test set and top-$k$ recommendations coverage among all techniques, and understandably so -- the three most common genres \emph{Action}, \emph{Comedy}, and \emph{Drama} occur in over 88\% of the test set items (see the genre distribution plot in \Cref{fig:genre_dist_ml-100k}). However, it performs much worse than \toolname on all personalization metrics, many times even worse than the \emph{random} baseline. Therefore, we can conclude that the explanations served by this approach do not capture a user's preference over genres, and that they perform well on coverage-based metrics only because of the skewed genre distribution. \item \emph{User-specific popularity: } This popularity approach achieves the second highest test set and top-$k$ recommendations coverage and, similar to \emph{global popularity}, performs worse than \toolname on all personalization metrics, many times even worse than the \emph{random} baseline. \item \emph{Random: } We used this approach as a control and to ensure that no metric was trivial to perform well on. Not surprisingly, it performs worse than \toolname on all metrics. 
\item \emph{Ours: } \toolname has the third highest test set and top-$k$ recommendations coverage (approaching the performance of user-specific popularity for the latter) while performing the best on all personalization metrics except for the general preferences ranking when using the conditional probability of liking, where it performs second best. Therefore, we can conclude that \toolname serves the most personalized explanations while still being able to explain a large proportion of test set items and top-$k$ recommendations. \end{enumerate} \section{Conclusions} As our metrics indicate, \toolname strikes a good balance between coverage and personalization of the explanations. We would also like to note that neither of the previous techniques, LIME-RS or AMCF, considered popularity as a baseline. We show that popularity approaches perform well on the two coverage metrics (the metrics that were used in their papers). Even the item-based explainability papers \citep{Adbollahi-and-Nasraoui-constrain,Peake-and-Wang-data-mining2018} did not compare to popularity, even though the metrics they considered were precision and recall. Since recommender systems are known to have popularity bias \citep{Abdollahpouri2019-popularitybias1,Zhu-popularitybias2}, popular items and popular attributes can perform very well on these metrics. Hence, we consider the discussion of popularity as a potential explanation, for both item-based and attribute-based explanations, a contribution of our work. \section{Limitations} The main limitation of this approach is the cost incurred during training of the auxiliary model. In our experiments, the training of the three model architectures was fast: each was trained in less than 10 minutes on a laptop. \section{Introduction} \label{sec:intro} Recommender systems drive the modern discovery of subjects of interest. 
Examples are ubiquitous: Netflix and YouTube for entertainment, Yelp for restaurants, Amazon and Shein for fashion. Content-based and collaborative filtering (CF) are the two primary approaches to generating recommendations \citep{Adomavicius2005Toward,recobroad1}. Content-based recommender systems use the similarity between an item's attributes and a user's preference over them to recommend new items, for example recommending \textit{Top Gun} to a user who likes \emph{Action} movies \citep{content1,content2}. Collaborative filtering (CF) methods instead rely on the wisdom of the crowd to generate recommendations. Such methods are trained only using the ratings provided by the users of a recommendation platform like Netflix. The idea is that if two users have a similar rating pattern, then the items liked by one user can be recommended to the other (if they have not already interacted with them) \citep{collab-reco,colab1}. Hybrid recommender systems aim at combining the two approaches \citep{Burke2004HybridRS}. In recent years, CF has been widely chosen over content-based methods owing to 1) ease of training -- the labeling cost of new items is high, as their attributes need to be manually labeled \citep{pandora-content-based}, and 2) better recommendations in terms of serendipity and discovery \citep{google-cf-serendipitious}. However, this comes with one major limitation. Content-based methods are transparent and scrutable as they generate recommendations using a user's attribute preferences, but this is not the case with CF-based methods. CF methods map users and items to an embedding space, which is learned from the user-item interaction matrix (consisting of all users and all items) -- and proximity in this space is used to generate recommendations. Such an embedding space is difficult to interpret. A core challenge is understanding what a model learns about the user's preference over the items and explaining how it generates the recommendations. 
Previous research has established that providing reasons for a recommendation enhances the transparency, scrutability, trustworthiness, effectiveness, persuasiveness, efficiency, and satisfaction of recommender systems \citep{Tintarev2007-thesis,xai-reco-motivation2,old-xai-reco-paper-sinha-2002}. Providing explanations when using CF-based methods is nontrivial. This has spurred significant research in the broad field of ``explainability for CF recommender systems''. Most previous explanation-generating approaches provide explanations in the form of either user-based or item-based explanations. User-based explanations justify a recommendation on the basis of `similar' users liking it, and item-based explanations justify a recommendation on the basis of its closeness to other items that the user has liked in the past. Item-based explanations are usually easier to grasp as the user knows about the items they interacted with in the past. However, neither of these explanation formats captures a user's preference over the attributes of an item -- which is how users inherently think about a recommendation \citep{attribute-based-xai-reco2-radarchart-paper,attribute-based-xai-reco3-tagplanations-paper,attribute-based-xai-reco1,McAuley-and-Leskovec2013,McAuley2012LearningAttributes,rl-approach-xai-reco-MSFT}. (Providing attribute-based explanations is much easier when using content-based recommender systems, as users' preferences are what those systems directly use to generate recommendations.) This work proposes a novel approach, \toolname, that generates attribute-based explanations for CF-based recommender systems. A recommendation is explained in terms of a user's preference over the attributes of this item, e.g., `We are recommending you this movie because you like \emph{Action} movies.' Such explanations are personalized to the user and hence help further enhance the persuasiveness and trustworthiness of recommender systems. 
Previous research has also established the significance of post-hoc and model-agnostic explainability, citing two reasons: 1) it evades the accuracy-interpretability trade-off (simpler, interpretable models are usually less accurate), and 2) it generalizes to all recommender systems \citep{rl-approach-xai-reco-MSFT,Peake-and-Wang-data-mining2018,molnar2022-xaibook}. Aligning with this school of thought, our proposed approach is post-hoc and model-agnostic -- it can generate explanations for any recommender system that operates using user and item embeddings. We found this important intersection under-researched in the literature. We could only find two methods that generate attribute-based explanations, LIME-RS \citep{limers-paper} and AMCF \citep{amcf-paper}. LIME-RS did not evaluate its attribute-based explanations, and the metrics used by AMCF were neither justified nor generalizable (see \Cref{sec:appendix}). To this end, we also propose a set of generalizable and well-justified metrics to evaluate attribute-based explainability techniques for recommender systems (\Cref{sec:eval}). In summary, our contributions are: \begin{enumerate}[leftmargin=*,topsep=0pt,itemsep=2pt] \item We propose \toolname, a novel, post-hoc, and model-agnostic technique to provide attribute-based explainability for recommender systems, a largely neglected research area. \item We propose several metrics to evaluate attribute-based explainability methods for recommender systems that can be used to compare all such future techniques. \item We compare \toolname and other methods against the often-overlooked popularity-based baselines. \end{enumerate} \section{Related Work} \label{sec:related} Machine Learning (ML) is being increasingly used to automate decisions. Some of the applications where ML is used are highly critical and directly affect humans, for example, loan approval \citep{credit-risk-ml}, criminal justice \citep{parole-ml}, and hiring \citep{hiring-ml}. 
The nascent field of trustworthy ML aims to detect bias in ML models (and counteract it), understand the factors that an ML model uses in making predictions, ensure that models respect privacy and security, and frame policies and regulations that ML models should abide by \citep{barocas-hardt-narayanan,trustml1,trustml2}. In this work, we focus on explainability and refer the readers to other works for a broad discussion of trustworthy ML \citep{barocas-hardt-narayanan}, \citep{Ians-paper-xai-by-removal-survey}, \citep{varshney}. \subsection{Explainability in Recommender Systems} Explainable recommender systems provide recommendations along with a justification for doing so. The term `explainable recommendation' was introduced by \citet{explicit-factors-paper-yongfeng} in \citeyear{explicit-factors-paper-yongfeng}; however, papers discussing the benefits of providing explanations appeared much earlier \citep{Tintarev-cfe-xai-survey-2007,Tintarev2011,old-xai-reco-paper-sinha-2002}. As mentioned, most previous methods for explainability in recommender systems provide explanations in a user-based or item-based fashion \citep{survey-reco-xai}. The approaches that generate these explanations can be either model-specific or model-agnostic, the former constituting the majority of previous methods. Model-specific approaches train inherently interpretable models by constraining the embedding space \citep{Peake-and-Wang-data-mining2018}. \citet{explicit-factors-paper-yongfeng} proposed extracting user preferences over attributes from item reviews and using them to provide explanations. They train an explicit factor recommender model that takes the user's preference over attributes as an input during training. \citet{Adbollahi-and-Nasraoui-constrain} add constraints in the latent space of matrix factorization models to encourage explainability -- they provide user- and item-based explanations. 
There exist many other approaches in this category \citep{Chen2016,Chen2018,Bauman2017,Tao2019,Liu2019,McAuley-and-Leskovec2013,EX3-yongfeng-paper}. A major downside of these approaches is that they are \emph{not} post-hoc, i.e., they train a separate model for each recommendation use case, which is problematic for two primary reasons: 1) there is a potential accuracy-interpretability trade-off, and 2) industrial teams have fine-tuned their recommender models over large datasets and are rarely willing to train a new model from scratch in order to provide explanations. Hence, post-hoc explainability techniques attract a lot of attention, especially from an industry perspective \citep{verma2021amortized,molnar2022-xaibook}. \subsection{Post-hoc Explainability in Recommender Systems} Only a handful of explainability approaches exist in this category. \citet{Peake-and-Wang-data-mining2018} use a data-mining approach to provide item-based explanations in a post-hoc manner. \citet{limers-paper} propose LIME-RS, which is motivated by the LIME paper \citep{ribeiro_why_2016}. It samples items close to a recommended item, learns a linear regression model on these samples, and uses the regressor to provide an explanation that can be either item-based or attribute-based (the latter can be provided if the attributes of the items are appended to the item embeddings when training the regression model). \citet{FIA-paper-Cheng2019} propose a technique that uses influence functions to find the influence of each training rating on a particular recommendation; the ratings with the highest influence are served as explanations. This technique, therefore, provides item-based explanations. 
\subsection{Attribute-Based Explainability in Recommender Systems} Unlike user- and item-based explanations, attribute-based explanations capture a user's personalized attribute preferences, thereby increasing the system's persuasiveness and trustworthiness \citep{attribute-based-cfe-reco-RL,attribute-based-xai-reco1}. As mentioned earlier, there do exist works that provide explanations utilizing a user's preference over interpretable attributes of an item. Most of these works utilize the reviews provided by the users to capture this preference (and use that to train the recommender system itself \citep{explicit-factors-paper-yongfeng,McAuley-and-Leskovec2013,attribute-based-xai-reco2-radarchart-paper}). However, such reviews might not exist even in the presence of ratings; for example, Netflix recently scrapped its review section due to low user participation \citep{netflix-no-more-reviews}. Secondly, there is no guarantee that interpretable attributes are mentioned in the reviews, which would be needed to learn the user's preference. Even the datasets used in these papers had very sparse user reviews; over 77\% of the users had only \emph{one review} \citep{attribute-based-xai-reco4-trirank-paper}, making the inference of users' preferences questionable. On the other hand, our approach provides attribute-based explanations by utilizing the attributes of an item available in most recommender datasets (e.g., movie or game genres), and it \emph{does not} use reviews. Similar to ours, AMCF \citep{amcf-paper} learns users' preferences over item attributes by training an attention network. It thus provides attribute-based explanations like our approach; however, AMCF is not a post-hoc technique. Similarly, \citet{attribute-based-cfe-reco-RL} propose CERec to determine the attributes that are important for a user-item pair (in a counterfactual manner); however, CERec is also not a post-hoc technique. 
\citet{rl-approach-xai-reco-MSFT} propose a post-hoc RL-based approach that generates attribute-based explanations for a user; however, like most previous approaches, it requires access to the user reviews. Hence our approach sits at the intersection of attribute-based (\emph{without} using user reviews) and post-hoc explanation techniques. To the best of our knowledge, LIME-RS is the only previous approach that falls in this category.
2,877,628,088,820
arxiv
\section{Introduction} When a large population of limit cycle oscillators with slightly different natural frequencies are coupled, they often come to oscillate with an identical frequency. Such collective synchronization phenomena have been observed in various oscillatory systems in physics, biology, chemistry, and other sciences~\cite{Winfree,Walker,Eckhorn,Benz}, attracting much interest in recent years~\cite{Kura,Daido,Stro,Park,Choi,Arenas,Tanaka}. In the Kuramoto model for those oscillator systems, oscillators are coupled with each other via an interaction which depends on the phase difference between each pair~\cite{Kura}. It describes the emergence of phase coherence with the increase of the coupling strength, elucidating an interesting connection between collective synchronization and a phase transition. Here, as in the usual dynamics of many-particle systems, sufficient attention has not been paid to the effects of time retardation in the oscillator system. In biological systems such as pacemaker cells and neurons, however, temporal delay is natural, and the finite time interval required for information transmission between two elements may be important~\cite{Walker,Eckhorn}. Time delay in the interaction may drastically modify the dynamic behavior of the system, such as stability and ergodicity~\cite{Choi85}. In some types of systems of coupled oscillators, retarded interactions have been found to result in multistability and suppression of the collective frequency~\cite{Shuster,Luzyanina,Niebur,Nakamura}. In a system of two coupled oscillators, it has been found that time delay induces a multitude of synchronized solutions. Namely, in the system with finite time delay, more than one stable solution is possible at a given coupling strength. Among those, the most stable solution is the one with the largest synchronization frequency, as shown via linear stability analysis~\cite{Shuster}. 
Similar behaviors including frequency suppression and multistability have been observed in the neural network model, where peripheral oscillators with identical natural frequencies are coupled only with a central oscillator by forward and backward connections with time delay~\cite{Luzyanina}. The two-dimensional system of identical oscillators with time-delayed nearest-neighbor coupling has also been considered to reveal similar frequency suppression~\cite{Niebur}. The system of non-identical oscillators with delayed interactions has been studied recently~\cite{YS}: In the case of a Lorenzian distribution of natural frequencies with a nonzero mean, the stability boundary of the incoherent state has been obtained and coexistence of one or more coherent and/or incoherent states has been observed in appropriate regions~\cite{YS}. However, detailed behaviors such as frequency suppression and emergence of different coherent states have not been addressed fully. This paper investigates in detail the effects of time delay in the interaction on collective synchronization of coupled oscillators with different natural frequencies. For this purpose, we derive the self-consistency equations for the order parameter and examine how the characteristic features of the collective synchronization change due to time delay. This reveals a multitude of coherent states with nonzero synchronization frequencies, each separated from the incoherent state by a discontinuous transition. In particular, we show that the system with a nonzero average frequency can be reduced to the system with the vanishing average natural frequency, which allows us to focus on the latter system. As in the system without delay, there exists the critical coupling strength, at which the system undergoes the usual continuous transition from the incoherent state to the coherent state displaying collective synchronization (with zero synchronization frequency). 
In addition, at higher values of the coupling strength (beyond the critical value), coherent states with larger synchronization frequencies also appear via discontinuous transitions. Thus coherent states with different synchronization frequencies in general coexist in the appropriate regions, leading to multistability. The synchronization frequency of the oscillators in a coherent state is observed to decrease with the delay time, which is similar to the result of other systems with time delay~\cite{Luzyanina,Niebur,Nakamura}. There are five sections in this paper: Section II presents the system of globally coupled oscillators with time delay, as a generalization of the Kuramoto model. The stationary probability distribution for the system is obtained, and the self-consistency equations for the order parameter are derived. Section III is devoted to the analysis of the self-consistency equations, which reveals the characteristic behavior of the system as the coupling strength or the delay time is varied. In particular, the phase diagram is obtained on the plane of the coupling strength and the delay time, and ubiquity of multistability as well as suppression of the synchronization frequency is demonstrated. Numerical simulations are also performed and the results, which are in general consistent with the analytical ones, are presented in Sec. IV. Finally, Sec. V summarizes the main results, while some details of the calculations are presented in Appendices A and B. \section{System of coupled oscillators with time delay} The set of equations of motion for $N$ coupled oscillators, each described by its phase $\phi_i~ (i=1,2,\cdots,N)$, is given by \begin{equation} \label{model} \dot\phi_i (t) =\omega_i - \frac{K}{N} {\sum_j}^{'} \sin[\phi_i(t)-\phi_j(t{-}\tau)], \end{equation} where the prime restricts the summation such that $j\neq i$. 
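As a concrete illustration of the dynamics in Eq.~(\ref{model}), the following minimal sketch integrates the delayed system with a simple Euler scheme, storing the phase history over the delay interval. All numerical values ($N$, $K$, $\tau$, the step size, and the width of the frequency distribution) are illustrative choices, not values used elsewhere in this paper, and the $j\neq i$ restriction is dropped as an $O(1/N)$ effect:

```python
import numpy as np

# Euler integration of the delayed, globally coupled phase model above.
# The all-to-all sum is rewritten through the (delayed) mean field
# z = (1/N) sum_j exp(i * phi_j(t - tau)); all parameters are illustrative.
rng = np.random.default_rng(0)
N, K, tau, dt, steps = 200, 2.0, 0.5, 0.01, 2000
delay = int(round(tau / dt))
omega = 0.1 * rng.standard_cauchy(N)              # Lorentzian frequencies, width 0.1
phi = rng.uniform(0.0, 2.0 * np.pi, N)
history = [phi.copy() for _ in range(delay + 1)]  # phases on [t - tau, t]

for _ in range(steps):
    z = np.exp(1j * history[0]).mean()            # mean field at time t - tau
    phi = phi + dt * (omega - K * np.abs(z) * np.sin(phi - np.angle(z)))
    history.pop(0)
    history.append(phi.copy())

delta = np.abs(np.exp(1j * phi).mean())           # order parameter Delta
```

For a coupling strength this far above threshold, the run settles into a coherent state with a sizable $\Delta$; varying $\tau$ in such a simulation exhibits the frequency suppression and multistability analyzed in the following sections.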
The first term on the right-hand side represents the natural frequency of the $i$th oscillator, which is distributed according to the distribution function $g(\omega)$. Here $g(\omega)$ is assumed to be smooth and symmetric about $\omega_0$, which may be taken to be zero without loss of generality (see below), and also to be concave at $\omega=0$, i.e., $g''(0) < 0 $. The second term denotes the global coupling of strength $K/N$ between oscillators, with time delay, indicating that each oscillator interacts with other oscillators only after the retardation time $\tau$. Without time delay, Eq.~(\ref{model}) exactly reduces to the Kuramoto model. In order to describe collective synchronization of such an $N$ oscillator system, we define the complex order parameter, whose amplitude represents the degree of synchronization, to be \begin{equation} \label{order} \Psi\equiv \frac{1}{N}\sum_{j=1}^{N} e^{i \phi_j} =\Delta e^{i \theta}. \end{equation} Here it is convenient to introduce new variables $\psi_i$ defined by $\psi_{i} \equiv \phi_i-\Omega t$, where $\Omega $ is a constant. Note the existence of physical invariance due to the rotational symmetry of the total system. In terms of the new variables, Eq.~(\ref{model}) reads \begin{equation} \label{newmodel} \dot\psi_i =\tilde{\omega_i}-\frac{K}{N} {\sum_j}^{'} \sin[\psi_i(t)-\psi_j(t{-}\tau)+\Omega \tau], \end{equation} where $\tilde{\omega_i} \equiv \omega_i-\Omega $. Multiplying Eq.~(\ref{order}) by $e^{-i \Omega t}$, we also obtain the corresponding order parameter for the new variables \begin{equation} \label{neworder} \tilde{\Psi} \equiv \frac{1}{N}\sum_{j=1}^{N} e^{i \psi_j} =\Delta e^{i \tilde{\theta}}, \end{equation} where $\tilde{\theta} \equiv \theta - \Omega t$. 
Incidentally, the order parameter defined in Eq.~(\ref{neworder}) allows us to reduce Eq.~(\ref{newmodel}) into a single decoupled equation with time delay \begin{equation} \label{model1} \dot\psi_i =\tilde{\omega_i} - K \Delta \sin(\psi_i-\theta_0), \end{equation} where $\theta_0 \equiv \tilde{\theta}-\Omega \tau$. Although $\Delta$,~$\Omega$, and ~$\theta_{0}$ depend on the delay time $\tau$, they are assumed to be independent of time $t$, which is possible due to the symmetry. Considering the relation between the old order parameter and the new one \begin{equation} \Psi = \tilde{\Psi} e^{i \Omega t}, \end{equation} we understand that the collective synchronization can be described in terms of a giant oscillator rotating with the frequency $\Omega$ which is in general nonzero. For finite delay time, there exists a multitude of synchronized solutions with nonzero values of $\Omega$; this is in contrast to system without delay, where the rotational symmetry of the system allows us to set $\Omega = 0$. Instead of Eq.~(\ref{model1}), which may be regarded as a Langevin equation without noise, one may resort to the corresponding Fokker-Planck equation for the probability distribution $P(\psi,t)$ at zero temperature~\cite{Risken89}: \begin{equation} \label{fokker} \frac{\partial}{\partial t}P(\psi,t) = \frac{\partial}{\partial \psi} [K \Delta \sin (\psi-\theta_0)-\tilde{\omega}]P(\psi,t). 
\end{equation} The order parameter given by Eq.~(\ref{neworder}) then takes the form \begin{eqnarray} \label{self} \Delta e^{i \tilde{\theta}} &=& \frac{1}{N}\sum_{j=1}^{N} e^{i \psi_j} \nonumber\\ &=& \int_{-\infty}^{\infty} ~d\tilde{\omega} ~g(\tilde{\omega}{+}\Omega) ~\langle e^{i\psi}\rangle_{t-\tau; \tilde{\omega}}, \end{eqnarray} where $\tilde{\omega} =\omega-\Omega$ has been noted, and the average of $e^{i \psi}$ is to be taken over the distribution $P(\psi, t{-}\tau)$ of Eq.~(\ref{fokker}) with given $\tilde{\omega}$: $ \langle e^{i \psi}\rangle_{t-\tau; \tilde{\omega}} \equiv \int_{0}^{2 \pi} d\psi \,P(\psi, t{-}\tau) \,e^{i\psi} . $ In the stationary state, we take the average over the stationary distribution $P^{(0)}(\psi;\tilde{\omega})$ of Eq.~(\ref{fokker}). With the stationary solution \begin{equation} \label{statsol} P^{(0)}(\psi;\tilde{\omega}) = \left\{ \begin{array}{ll} \delta\left[ \psi-\theta_{0}-\sin^{-1}(\tilde{\omega}/K \Delta)\right] &{\rm for} \quad | \tilde{\omega} | \leq K \Delta \\ \displaystyle \frac{\sqrt{\tilde{\omega}^2-(K \Delta)^2}}{ 2\pi \left| \tilde{\omega}-K \Delta\sin(\psi-\theta_{0}) \right|} &{\rm otherwise}, \end{array} \right. \end{equation} it is easy to compute the average: \begin{eqnarray} \label{avstat} \langle e^{i \psi}\rangle_{\tilde{\omega}} & \equiv & \int_{0}^{2 \pi} d\psi \,P^{(0)}(\psi;\tilde{\omega})~e^{i\psi} \nonumber \\ & = & e^{i\theta_0} \left\{\begin{array}{ll} i(\tilde{\omega}/K\Delta) - i \sqrt{(\tilde{\omega}/K\Delta)^2-1}, & \mbox{~~~ $\tilde{\omega}> K \Delta$} \\ i(\tilde{\omega}/K\Delta) + \sqrt{1-(\tilde{\omega}/K\Delta)^2}, & \mbox{~~~ $ -K\Delta\leq \tilde{\omega}\leq K\Delta$}\\ i(\tilde{\omega}/K\Delta) + i \sqrt{(\tilde{\omega}/K\Delta)^2-1}, & \mbox{~~~ $\tilde{\omega}< -K\Delta$}. \end{array} \right. 
\end{eqnarray} It is thus natural to divide the system into two groups: one satisfying $|\tilde{\omega}| \leq K\Delta$, which is called the synchronization group, and the other $|\tilde{\omega}| > K\Delta$, the desynchronization group. Accordingly, we write \begin{equation} \label{magorder} \Delta = \Delta_{s} + \Delta_{d}, \end{equation} where $\Delta_{s/d}$ is the contribution from the synchronization/desynchronization group to the order parameter. The contribution from the synchronization group is given by \begin{equation} \label{synself1} \Delta_{s} \,e^{i \Omega\tau} = K\Delta \int_{-1}^{1} dx\, g(\Omega{+}K\Delta x)\,[\sqrt{1-x^{2}}+ix], \end{equation} where $x \equiv \tilde{\omega}/K\Delta$. Separating Eq.~(\ref{synself1}) into the real and imaginary parts, we obtain the two coupled nonlinear equations \begin{eqnarray} \label{synself2} \Delta_{s}\cos \Omega\tau &=& K\Delta \int_{-1}^{1} dx\,g(\Omega{+}K\Delta x)\,\sqrt{1-x^{2}},\nonumber\\ \Delta_{s}\sin \Omega\tau &=& K\Delta\int_{-1}^{1} dx\,g(\Omega{+}K\Delta x)\,x. \end{eqnarray} Similarly, the desynchronization group leads to the equation \begin{eqnarray} \label{desynself1} \Delta_{d}\, e^{i\Omega\tau } =&& iK\Delta \int_{1}^{\infty} dx \,g(\Omega{+}K\Delta x) \,\left(x-\sqrt{x^{2}-1}\right) \nonumber \\ && +iK\Delta \int_{-\infty}^{-1} dx \,g(\Omega{+}K\Delta x) \, \left(x+\sqrt{x^{2}-1}\right) \end{eqnarray} or \begin{eqnarray} \label{desynself2} \Delta_{d}\cos \Omega\tau &=& 0, \nonumber\\ \Delta_{d} \sin \Omega\tau &=& K\Delta \left[ \int_{1}^{\infty} dx \,g(\Omega{+}K\Delta x) \,\left(x-\sqrt{x^{2}-1}\right) \right. \nonumber \\ &&~~~~~\left.+\int_{-\infty}^{-1} dx \, g(\Omega{+}K\Delta x) \, \left(x+\sqrt{x^{2}-1}\right)\right]. 
\end{eqnarray} Note that, unlike the Kuramoto model, the imaginary parts of Eqs.~(\ref{synself1}) and (\ref{desynself1}) do not vanish, which arises from the fact that the nonzero collective frequency due to time delay breaks the symmetry of the integration interval of the distribution $g(\omega)$. It is obvious in Eq.~(\ref{desynself2}) that the contribution from the desynchronization group vanishes in the absence of time delay ($\tau =0$). Recalling that in the presence of time delay the total order parameter is given by the sum of the two contributions, one from the synchronization group and the other from the desynchronization group, we finally obtain the self-consistency equations from Eqs.~(\ref{synself2}) and (\ref{desynself2}): \begin{eqnarray} \label{self2} \Delta\cos \Omega\tau &=& K\Delta \int_{-1}^{1} dx \,g(\Omega{+}K\Delta x) \,\sqrt{1-x^{2}}, \nonumber\\ \Delta\sin \Omega\tau &=& K\Delta \left[\int_{-1}^{1} dx \,g(\Omega{+}K\Delta x)\,x + \int_{1}^{\infty} dx \,g(\Omega{+}K\Delta x)\, \left(x-\sqrt{x^{2}-1}\right) \right. \nonumber\\ &&~~~~~~~\left.~+ \int_{-\infty}^{-1} dx \,g(\Omega{+}K\Delta x) \,\left(x+\sqrt{x^{2}-1}\right)\right]. \end{eqnarray} When the average natural frequency is not zero ($\omega_0 \neq 0$), we define the variables $\psi_i \equiv \phi_i - (\Omega + \omega_0 )t$, and obtain exactly Eq.~(\ref{self2}) except for $\Omega$ replaced by $\tilde{\Omega} \equiv \Omega + \omega_0$. For example, the first equation is given by \begin{equation} \label{unsym} \Delta\cos \tilde{\Omega}\tilde{\tau} = K\Delta \int_{-1}^{1} dx \,\tilde{g}(\tilde{\Omega}{+}K\Delta x) \,\sqrt{1-x^{2}}, \end{equation} where $\tilde{\tau}$ is the delay time and the distribution $\tilde{g}(\omega)$ is symmetric about $\omega=\omega_0$. 
Since the distribution $g(\omega) \equiv \tilde{g}(\omega{+}\omega_0)$ is symmetric about $\omega =0$, we rewrite the above equation in the form \begin{equation} \label{unsym2} \Delta\cos [(\Omega{+}\omega_0)\tilde{\tau}] = K\Delta \int_{-1}^{1} dx \, g(\Omega{+}K\Delta x) \,\sqrt{1-x^{2}}, \end{equation} which, with the identification $(\Omega+\omega_0)\tilde{\tau}\equiv \Omega\tau$, just reproduces the first equation in Eq.~(\ref{self2}). Accordingly, the behavior of the system with $\omega_0 \neq 0$, which has been considered in Ref.~\cite{YS}, can be obtained from that of the system with $\omega_0=0$ via appropriate rescaling of parameters. \section{Analysis of the self-consistency equations} The non-vanishing imaginary part of the self-consistency equation given by Eq.~(\ref{self2}), arising from time delay, leads to a variety of behaviors which are not displayed by the system without delay. In this section, we solve the two coupled equations in Eq.~(\ref{self2}) to obtain the synchronization frequency $\Omega$ and the order parameter $\Delta$. We take the Gaussian distribution with zero mean and unit variance for the natural frequencies: $g(\omega) = (2\pi)^{-1/2} e^{-\omega^2/2}$, and first compute numerically the synchronization frequency and the order parameter for various values of the coupling strength $K$ and the delay time $\tau$. Figure~1 exhibits the dependence of the synchronization frequency on $K$ and $\tau$, which manifests multistability. At small values of $\tau$, only the nontrivial solution $(\Delta\neq 0)$ with $\Omega=0$ appears for $K>K_c \,(\approx 1.596)$, as in the system without delay. For large $\tau$, on the other hand, solutions with $\Omega\neq 0$ also emerge as $K$ is increased further. In Fig.~1(a), where the synchronization frequency is plotted as a function of the delay time $\tau$ at $K = 10$, it is also observed that the synchronization frequency is suppressed as time delay is increased. 
This is expected since the delay tends to disturb synchronization~\cite{Luzyanina}. In Fig.~1(b), we plot the synchronization frequency as a function of the coupling strength at $\tau = 5$. It shows that at given values of $\tau$ the synchronization frequency $\Omega$ depends rather weakly on $K$ after synchronization sets in. Among those solutions at given coupling strength $K$, the most stable solution is the one with the largest value of $\Omega$~\cite{Shuster} although the basin of attraction in general shrinks with $\Omega$. The phase boundaries separating the coherent states ($\Delta\neq 0$) with various synchronization frequencies from the incoherent state ($\Delta=0$) are shown in Fig.~2, where data have been taken with the step width $\delta\tau =0.06$. Below the lowest boundary, which is the straight line $K=K_c \approx 1.596$, only the incoherent state exists; above it, the coherent state with $\Omega=0$ also exists. Similarly, above each boundary in Fig.~2(a), an additional coherent state with a larger synchronization frequency emerges. Note here that the region of existence of coherent states constitutes two-dimensional (semi-infinite) surfaces in the three-dimensional $(K, \tau, \Omega)$ space. Whereas Figs.~1(a) and (b) may be regarded as the cross-sections of these surfaces at given values of $K$ and of $\tau$, respectively, Fig.~2(a) represents the projection of the {\em boundaries} of these surfaces onto the $K$-$\tau$ plane. In Fig.~2(b) the curves of Fig.~2(a) are redrawn, with the horizontal axis rescaled: $\tilde{\tau}$ in (b) corresponds to $\Omega\tau/(\Omega+3)$ in (a). In this new scale of the horizontal axis, unlike in Fig.~2(a), the boundaries intersect with each other, and only the envelope consisting of the curve segments with lowest values of $K$ at given $\tilde{\tau}$ is displayed in Fig.~2(b).
According to the discussion in Sec.~II, Fig.~2(b) describes the phase boundary below which only the incoherent state exists in the system with $\omega_0 =3$. Above the boundary, the coherent state with the appropriate (nonzero) synchronization frequency, depending on $\tilde{\tau}$, appears and can coexist with the incoherent state. Note that the lowest boundary ($K=K_c \approx 1.596$) in Fig.~2(a) has no counterpart in Fig.~2(b) since the zero synchronization frequency ($\Omega = 0$) corresponds to $\tilde{\tau}=0$. Similar boundaries have been obtained through numerical simulations for the Lorentzian as well as the delta-function distributions~\cite{YS}. It is of interest to compare Fig. 2(b) with Fig. 4 in Ref.~\cite{YS}, which indicates that the Gaussian distribution leads to smoother stability boundaries than the Lorentzian distribution. In Fig.~3, the obtained order parameter $\Delta$ is depicted as a function of $K$ at $\tau = 5$. Each line describes the transition for each synchronization frequency, and the critical coupling strength $K_c$ is shown to increase for the transition with larger $\Omega$. For example, the leftmost curve, which corresponds to $\Omega = 0$, gives $K_c \approx 1.596$, whereas the next one, corresponding to $\Omega \approx 1.09$, gives $K_c \approx 1.97$. Note that as $K$ approaches $K_c\,(\approx 1.596)$, the order parameter with $\Omega=0$ decreases continuously to zero, indicating that the leftmost curve describes a continuous transition at $K_c$. On the other hand, the rest of the curves with $\Omega > 0$ apparently display jumps in the order parameter, indicating discontinuous transitions. Accordingly, whereas the lowest boundary in Fig.~2(a) describes a continuous transition, the others as well as the boundary in Fig.~2(b) correspond to discontinuous transitions.
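The curves discussed above follow from solving the self-consistency equations~(\ref{self2}) numerically. For the $\Omega = 0$ branch the second equation is satisfied identically by the symmetry of $g$, and the first reduces to a single scalar condition for $\Delta$. The sketch below (a minimal illustration in Python; the function names and the midpoint/bisection scheme are our own choices, not the procedure actually used for the figures) reproduces the threshold $K_c=\sqrt{8/\pi}\approx 1.596$ and the order parameter on this branch.

```python
import math

def g(w):
    """Gaussian frequency distribution with zero mean and unit variance."""
    return math.exp(-w * w / 2.0) / math.sqrt(2.0 * math.pi)

def rhs(K, delta, n=2000):
    """Integral over x in [-1, 1] of g(K*delta*x)*sqrt(1 - x^2),
    i.e. the right-hand side of the Omega = 0 equation divided by K*Delta,
    evaluated with the midpoint rule."""
    s = 0.0
    for i in range(n):
        x = -1.0 + (2.0 * i + 1.0) / n          # midpoint of cell i
        s += g(K * delta * x) * math.sqrt(max(0.0, 1.0 - x * x))
    return s * (2.0 / n)

def order_parameter(K, tol=1e-10):
    """Solve 1/K = rhs(K, Delta) for Delta by bisection (Omega = 0 branch);
    the bracket [0, 1] suffices for moderate K."""
    if K * rhs(K, 0.0) <= 1.0:                  # below threshold: only Delta = 0
        return 0.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if K * rhs(K, mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

K_c = 2.0 / (math.pi * g(0.0))                  # threshold: K*g(0)*pi/2 = 1
delta = order_parameter(2.0)
print(K_c, delta)
```

For $K=2$, for instance, the solver returns $\Delta\approx 0.7$, in line with the continuous growth of the $\Omega=0$ branch above $K_c$.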
To understand the nature of these transitions analytically, we assume $K\Delta\ll 1$ near the transition to the coherent state and expand Eq.~(\ref{self2}) to the order of $(K\Delta)^3$, together with $\Omega$ also expanded accordingly: \begin{equation} \label{ao} \Omega\approx \Omega_{0}+\Omega_{1}K\Delta+\Omega_{2}{(K\Delta)}^2. \end{equation} We investigate two regimes, $\Omega \ll K\Delta$ and $\Omega \gg K\Delta$, which exhibit phase transitions of different types with each other, still taking the Gaussian distribution with zero mean and unit variance for $g(\omega)$. For $\Omega \ll K\Delta \,(\ll 1)$, the self-consistency equation for the order parameter takes the form \begin{equation} \label{asc1} \Delta = a_1 K\Delta+b_1 (K\Delta)^2+c_1 (K\Delta)^3+{\cal O}(K\Delta)^4, \end{equation} where the coefficients $a_1$, $b_1$, and $c_1$ depend on $\Omega_0$, $\Omega_1$, and $\Omega_2$ defined in Eq.~(\ref{ao}). Their specific forms as well as the details of the calculation are given in Appendix A. Since the condition $\Omega \ll K\Delta$ implies $\Omega_0 \ll 1$, we need to consider Eq.~(\ref{omega1}) only for the range $-\pi/2 < \tan^{-1} \left(\sqrt{2/\pi}\Omega_0 e^{\Omega_0^2/2}\right) < \pi/2$. It is then obvious that the desired solution of Eq.~(\ref{omega1}) is simply $\Omega_0 = \Omega_1 = \Omega_2 = 0$, regardless of $\tau$. Inserting these values into Eq.~(\ref{cof1}), we obtain the values of the coefficients: $a_1 = 0.626$, $b_1 = 0$, and $c_1 = -0.078$. Figure~4 illustrates the graphical solution of Eq.~(\ref{asc1}), displaying $f(\Delta)\equiv (a_1 K-1)\Delta+b_1 (K\Delta)^2+c_1 (K\Delta)^3$ versus $\Delta$ for $b_1 =0$. Note that the critical coupling strength is given by $K_c\,(\equiv {a_1}^{-1}) \approx 1.595$, which indeed agrees perfectly with the numerical value given by the leftmost curve in Fig.~3. 
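These values are easily verified: at $\Omega_0=\Omega_1=\Omega_2=0$ the coefficients reduce to $a_1=\frac{1}{2}\sqrt{\pi/2}\approx 0.627$, $b_1=0$, and $c_1=-\frac{1}{16}\sqrt{\pi/2}\approx -0.078$, so that $K_c=a_1^{-1}=\sqrt{8/\pi}$. A few lines of Python (our check, not part of the original calculation) confirm the numbers and the square-root growth of $\Delta$ just above $K_c$.

```python
import math

# Coefficients of the small-K*Delta expansion at Omega_0 = Omega_1 = Omega_2 = 0
# (the Omega << K*Delta regime).
a1 = 0.5 * math.sqrt(math.pi / 2.0)       # = sqrt(pi/8) ~ 0.627
b1 = 0.0                                  # every term carries a factor Omega_0
c1 = -math.sqrt(math.pi / 2.0) / 16.0     # ~ -0.078

K_c = 1.0 / a1                            # continuous transition point

def delta_cubic(K):
    """Nontrivial root of Delta = a1*K*Delta + c1*(K*Delta)**3 for b1 = 0:
    square-root growth characteristic of a continuous transition."""
    return math.sqrt(max(0.0, (a1 * K - 1.0) / (-c1 * K ** 3)))

print(a1, c1, K_c, delta_cubic(1.7))
```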
It is thus concluded that the system displays a continuous phase transition with $\Omega = 0$, which is consistent with the result of the Kuramoto model. In the opposite case of $\Omega \gg K\Delta \,(\ll 1)$, we can still obtain the self-consistency equation for $\Delta$ up to the order of $(K\Delta)^3$, in a manner similar to that for the previous small-$\Omega$ case: \begin{equation} \label{asc2} \Delta = a_2 K\Delta+b_2 (K\Delta)^2+c_2 (K\Delta)^3+{\cal O}(K\Delta)^4, \end{equation} where the coefficients $a_2$, $b_2$, and $c_2$ again depend on $\Omega_0$, $\Omega_1$, and $\Omega_2$ (see Appendix B). In this case, we need to obtain larger solutions, considering the regions $(n+1/2)\pi < \tan^{-1} \left(\sqrt{2/\pi}\Omega_0 e^{\Omega_0^2/2}\right) < (n+3/2)\pi$ with non-negative integer $n$. Interestingly, this in general yields nonzero values of $\Omega_0$ and accordingly, nonzero values of $b_2$, with which Eq.~(\ref{asc2}) displays a jump in $\Delta$ at $K = -4c_2({b_2}^2-4c_2a_2)^{-1}$, thus indicating a discontinuous transition~\cite{Choi}. Such discontinuous transitions are ubiquitous in the regions with higher values of $n$. Namely, the system with delay is in general characterized by nonzero values of the synchronization frequency together with discontinuous transitions, which is consistent with the numerical results displayed in Fig.~3. There the jumps in $\Delta$ displayed by the curves with $\Omega > 0$, associated with discontinuous transitions, may invalidate the assumption $K\Delta \ll 1$, so the expansion in Eq.~(\ref{asc2}) is not expected to yield quantitatively accurate results. Nevertheless the appearance of such discontinuous transitions has been revealed by the above expansion, which is concluded to give a qualitatively correct description of the nature of the transitions. For more accurate results, we investigate these transition phenomena by examining in detail the behaviors of the solutions of the self-consistency equations.
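The mechanism of the jump can be made explicit at the level of Eq.~(\ref{asc2}): dividing by $\Delta$ leaves a quadratic in $u\equiv K\Delta$, whose real positive root first appears, already at a finite value of $u$, when $K$ reaches $-4c_2({b_2}^2-4c_2a_2)^{-1}$. The snippet below illustrates this with sample coefficients chosen by us purely for illustration (they are not the actual $a_2$, $b_2$, $c_2$ of the Gaussian case).

```python
import math

# Hypothetical expansion coefficients with b != 0 and c < 0 (illustration only).
a, b, c = 0.55, 0.20, -0.15

def delta_branches(K):
    """Nontrivial solutions of Delta = a*K*Delta + b*(K*Delta)**2 + c*(K*Delta)**3,
    i.e. roots of 1/K = a + b*u + c*u**2 with u = K*Delta, returned as Delta = u/K."""
    disc = b * b - 4.0 * c * (a - 1.0 / K)
    if disc < 0.0:
        return []                              # no coherent solution yet
    roots = [(-b + s * math.sqrt(disc)) / (2.0 * c) for s in (+1.0, -1.0)]
    return sorted(u / K for u in roots if u > 0.0)

K_star = -4.0 * c / (b * b - 4.0 * c * a)      # tangent-bifurcation point

# Just below K_star there is no coherent solution; just above, a pair of
# solutions appears at finite Delta: a discontinuous (jump) transition.
print(K_star, delta_branches(0.99 * K_star), delta_branches(1.01 * K_star))
```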
Note that $\Delta =0$ is always a solution of Eq.~(\ref{self2}) for all values of $\Omega$. To seek other solutions, we divide Eq.~(\ref{self2}) by $\Delta$ and obtain \begin{eqnarray} \label{self20} K^{-1}\cos \Omega\tau &=& \int_{-1}^{1} dx \,g(\Omega{+}K\Delta x) \,\sqrt{1-x^{2}}, \nonumber\\ K^{-1}\sin \Omega\tau &=& \int_{-\infty}^{\infty} dx \,g(\Omega{+}K\Delta x)\,x - \int_{1}^{\infty} dx \,g(\Omega{+}K\Delta x)\, \sqrt{x^{2}-1} \nonumber \\ & &~~ + \int_{-\infty}^{-1} dx \,g(\Omega{+}K\Delta x)\,\sqrt{x^{2}-1}, \end{eqnarray} which may be solved numerically. The resulting values of $\Omega$ versus $\Delta$ are plotted in Figs.~5-7 for $\tau=5$. The solid and the dotted lines represent solutions of the first (real part) and the second (imaginary part) equations of Eq.~(\ref{self20}), respectively. In each figure, the point where the two lines meet with each other provides the synchronization frequency $\Omega$ and the order parameter $\Delta$. Figure~5(a) shows the absence of a meeting point for $K=1.58$, which implies that synchronization has not yet set in. In contrast, the meeting of the solid and dotted lines is obvious for $K = 1.60$ shown in (c); (b) reveals a continuous transition (for $\Omega = 0$) at $K = K_c \approx 1.596$, which coincides with the previous result. The value of $\Delta$ grows continuously as $K$ is increased beyond $K_c$ (see Fig.~6). When $K$ reaches the value $1.97$, as displayed in Fig.~6(b), there emerges via a tangent bifurcation an additional meeting point at finite values of $\Delta \,(\approx 0.08)$ and $\Omega \,(\approx 1.09)$, giving rise to a discontinuous transition, in agreement with the result shown in Fig.~3. As the coupling strength is increased further, there appear two meeting points, giving two values of the order parameter for the pair of lines (i.e., with almost the same value of $\Omega$), as shown in Fig.~6(c).
Such a tangent bifurcation in general produces a pair of stable and unstable solutions; here the solution with the smaller value of $\Delta$, decreasing with $K$, should be unstable. Figure~7(c) shows that the unstable solution becomes null ($\Delta =0$) at $\Omega \approx 1.257$ as $K$ approaches $K_0 \approx 3.515$. Figure~7 also reveals the occurrence of a third transition at $K_c\approx 3.46$, which is of the same nature as the second. The values of $K_c \approx 1.596$ and $K_0 \approx 3.515$ can also be obtained analytically since they are given by the solutions of Eq.~(\ref{self20}) in the limit $\Delta\rightarrow 0$. In this limit, the right-hand side of the second equation vanishes, yielding $\Omega = n\pi/\tau$ with $n$ integer. The first one, which reduces to $K^{-1}\cos\Omega\tau = (\pi/2) g(\Omega)$, requires $\cos\Omega\tau > 0$ for $K>0$ and thus selects even values of $n$, giving \begin{equation} \label{K} K = \frac{2}{\pi g(2n\pi/\tau)} \end{equation} with $\Omega = 2n\pi/\tau$. Taking $n=0$ and $n=1$ in Eq.~(\ref{K}), where $g(\omega)$ is given by the Gaussian distribution with unit variance, we obtain $K =K_c = \sqrt{8/\pi}\approx 1.596$ (with $\Omega=0$) and $K=K_0 = \sqrt{8/\pi} e^{2(\pi/\tau)^2} \approx 3.515$ (with $\Omega =2\pi/\tau \approx 1.257$) for $\tau=5$, respectively. To examine how the stability changes at these bifurcations, we now consider a small perturbation from the incoherent state, for which the stationary distribution in Eq.~(\ref{statsol}) is simply given by $1/2\pi$, and write \begin{equation} \label{perturb} P(\psi, t) = \frac{1}{2 \pi} + \epsilon \eta(\psi, t), \end{equation} where $\epsilon \ll 1$.
Upon substitution into Eq.~(\ref{fokker}) and with $\Delta = \Delta_1 \epsilon + {\cal O}(\epsilon^2)$, we obtain, to the lowest order in $\epsilon$, \begin{equation} \label{fokker1} \frac{\partial \eta}{\partial t} = - ~\tilde{\omega} \frac{\partial \eta}{\partial \psi} + \frac{K}{2 \pi} \Delta_1 \cos(\psi - \theta_0), \end{equation} and seek solutions of the form \begin{equation} \label{solution1} \eta(\psi,t) = c(t; \tilde{\omega}) e^{i \psi} + c^{*}(t; \tilde{\omega}) e^{-i \psi} , \end{equation} where higher harmonics have been neglected. Equations~(\ref{fokker1}) and (\ref{solution1}), together with Eq.~(\ref{self}), lead to the amplitude equation for $c(t; \tilde{\omega})$: \begin{equation} \label{fokker2} \frac{\partial c(t; \tilde{\omega})}{\partial t} = - i \tilde{\omega} ~c(t; \tilde{\omega}) + \frac{K}{2} e^{i \Omega \tau} \int_{-\infty}^{\infty} d\tilde{\omega}' \,g(\tilde{\omega}'{+}\Omega)\, c(t{-}\tau; \tilde{\omega}'), \end{equation} which in general possesses both discrete and continuous spectra. To find the discrete spectrum, we put \begin{equation} \label{solution2} c(t;\tilde{\omega}) = b(\tilde{\omega}) \,e^{\lambda t}, \end{equation} where the eigenvalue $\lambda$ is independent of $\tilde{\omega}$, and obtain the equation \begin{equation} \label{fokker3} e^{-(\lambda-i \Omega) \tau} ~\frac{K}{2} \int_{-\infty}^{\infty} d \omega ~\frac{g(\omega)}{\lambda + i ~(\omega - \Omega)} = 1 , \end{equation} which has been examined for a Lorentzian distribution~\cite{YS}. Here we investigate Eq.~(\ref{fokker3}) for a Gaussian distribution. The stability of the incoherent state depends on whether all roots of Eq.~(\ref{fokker3}) possess negative real parts, i.e., $\mbox{Re} \,\lambda < 0$. This is the case for $K$ less than $K_s$, where the incoherent state is neutrally stable (since the continuous spectrum is pure imaginary). Beyond $K_s$, there appears an eigenvalue with a positive real part, giving rise to instability.
The value of $K_s$ can be computed from Eq.~(\ref{fokker3}) with $\mbox{Re} \,\lambda = 0$ imposed; this yields the coupled equations for $K_s$ and $\mbox{Im}\,\lambda$ \begin{eqnarray} \label{stab} \cos \gamma\tau &=& K_s \sqrt{\frac{\pi}{8}} e^{-\gamma^2/2}, \nonumber \\ \sin \gamma\tau &=& - \frac{K_s}{2} \gamma e^{-\gamma^2/2} \sum_{k=0}^{\infty} \frac{\gamma^{2k}}{2^k (2k+1)k!}, \end{eqnarray} where $\gamma \equiv \Omega -\mbox{Im}\,\lambda$. Unlike the system without delay, Eq.~(\ref{stab}) has an infinite number of solutions, among which the lowest value of $K_s$ should be taken. It is obvious that $\gamma =0$ is the desired solution (regardless of time delay), leading to the lowest value $K_s = \sqrt{8/\pi} \approx 1.596$. Note also that this value of $K_s$ coincides exactly with that of $K_c$, implying that the incoherent state becomes unstable simultaneously with the appearance of the (stable) coherent state with $\Omega =0$. These results reveal that the order parameter exhibits a supercritical bifurcation at $K = K_c$ along the leftmost curve ($\Omega=0$) in Fig.~3. Namely, the emergence of a nontrivial solution ($\Delta > 0$) is accompanied by the loss of stability of the null solution ($\Delta =0$) at $K_c(\Omega=0)\approx 1.596$. For the rest ($\Omega >0$), on the other hand, the unstable solution, generated together with the stable one by a tangent bifurcation at $K_c(\Omega)$, decreases as $K$ is raised further and vanishes at a larger value, $K=K_0 (\Omega)$. For example, the unstable solution for $\Omega \approx 1.09$, emerging at $K\approx 1.97$, decreases to zero at $K\approx 3.515$ [see Fig.~7(c)]. It is thus concluded that for $\Omega >0$ the bifurcation at $K_0$ is subcritical: Between $K_c$ and $K_0$ there exists an unstable coherent state in addition to the stable coherent states (and the incoherent one) although the unstable states have not been displayed in Fig.~3.
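Both thresholds follow from one-line evaluations; the check below (our code, for the Gaussian distribution with unit variance and $\tau=5$) reproduces $K_c$ and $K_0$ from Eq.~(\ref{K}) and verifies that the $\gamma=0$ solution of Eq.~(\ref{stab}) gives $K_s=K_c$.

```python
import math

def g(w):
    """Gaussian frequency distribution with zero mean and unit variance."""
    return math.exp(-w * w / 2.0) / math.sqrt(2.0 * math.pi)

tau = 5.0

def K_threshold(n):
    """Eq. (K): K = 2 / (pi * g(2*n*pi/tau))."""
    return 2.0 / (math.pi * g(2.0 * n * math.pi / tau))

K_c = K_threshold(0)            # sqrt(8/pi) ~ 1.596, with Omega = 0
K_0 = K_threshold(1)            # sqrt(8/pi)*exp(2*(pi/tau)**2) ~ 3.515 for tau = 5

# Stability boundary of the incoherent state, Eq. (stab), at gamma = 0:
# cos(0) = 1 = K_s * sqrt(pi/8), so K_s coincides with K_c.
K_s = math.sqrt(8.0 / math.pi)

print(K_c, K_0, K_s)
```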
The general features of the synchronization behavior obtained here are similar to those in Ref.~\cite{YS}, and it is thus concluded that the difference in the distribution of natural frequencies does not change the results qualitatively. On the other hand, we have examined additional interesting phenomena such as frequency suppression and details of multistability. In particular, unlike in the system with $\omega_0 \neq 0$ considered mostly in Ref.~\cite{YS}, here the phase boundaries with different values of the synchronization frequency do not intersect with each other on the $K{-}\tau$ plane. [Compare Fig.~2(a) and (b).] Accordingly, the system with $\omega_0=0$ does not undergo a discontinuous transition directly from the {\em stable} incoherent state to a coherent one with a nonzero synchronization frequency, and the associated hysteresis may not be observed. Further, in order to confirm these results, we have also performed numerical simulations, the results of which are presented in the next section. \section{Numerical simulations} We have studied directly the equations of motion given by Eq.~(\ref{model}) via numerical simulations. The globally coupled system of size $N=5000$, where natural frequencies are distributed according to the Gaussian distribution with unit variance, has been considered, and the Euler method with discrete time steps of $\delta t=0.01$ has been employed. At each run, we have discarded the first $10^5$ time steps per oscillator to eliminate transient effects and taken the next $10^5$ time steps per oscillator to investigate synchronized solutions. Finally, independent runs with 30 different realizations of the natural frequency distribution and initial conditions have been performed, over which the averages have been taken.
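A stripped-down version of such a simulation is sketched below. It uses the exact mean-field identity $(1/N)\sum_j \sin[\phi_j(t{-}\tau)-\phi_i(t)] = {\rm Im}[Z(t{-}\tau)\,e^{-i\phi_i(t)}]$, with $Z$ the complex order parameter, so that one Euler step costs $O(N)$ rather than $O(N^2)$; the system size, time step, and run length chosen here are deliberately much smaller than those quoted above and are our own choices for a quick demonstration.

```python
import cmath
import math
import random

def simulate(K, tau, N=200, dt=0.02, steps=3000, seed=1):
    """Euler integration of the delayed Kuramoto dynamics in mean-field form,
    dphi_i/dt = omega_i + K*Im[Z(t - tau)*exp(-i*phi_i)], where
    Z(t) = (1/N)*sum_j exp(i*phi_j(t)).  Returns the order parameter
    Delta = |Z| averaged over the second half of the run."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(N)]       # unit-variance Gaussian
    phi = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    lag = max(1, int(round(tau / dt)))                    # delay in time steps
    Z = sum(cmath.exp(1j * p) for p in phi) / N
    history = [Z] * lag                                   # constant initial history
    acc = 0.0
    for t in range(steps):
        Zd = history[t % lag]                             # Z(t - tau)
        phi = [p + dt * (w + K * (Zd * cmath.exp(-1j * p)).imag)
               for p, w in zip(phi, omega)]
        Z = sum(cmath.exp(1j * p) for p in phi) / N
        history[t % lag] = Z                              # store Z(t) for later reuse
        if t >= steps // 2:
            acc += abs(Z)
    return acc / (steps - steps // 2)

delta_strong = simulate(K=4.0, tau=0.5)                   # well above K_c: coherent
delta_weak = simulate(K=0.5, tau=0.5)                     # below K_c: incoherent
print(delta_strong, delta_weak)
```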
In the simulations, the synchronization frequency is given by the average phase speed, i.e., the average rate of the phase change, and the obtained data at the coupling strength $K=10$ are represented by crosses in Fig.~8(a). Note that both the incoherent state and the coherent one are found to be stable at the same value of $\tau$, indicating multistability; frequency suppression with increasing delay is also manifested. For comparison, the results shown in Fig.~1(a), obtained from Eq.~(\ref{self2}), are also displayed, and perfect agreement is observed. Notice here that the basin of attraction shrinks rapidly with the synchronization frequency $\Omega$, which makes it quite difficult in numerical simulations to find the coherent-state solutions with large values of $\Omega$. Figure 8(b) shows the behavior of the order parameter $\Delta$ as a function of the coupling strength $K$ for $\tau=0$ (plus signs) and $\tau=5$ (crosses). In both cases the system displays a continuous transition to the coherent state (with zero synchronization frequency). Slight suppression of synchronization by time delay can be observed. The error bars have been estimated by the standard deviation and the lines are guides to the eye. To compare the analytical results obtained from Eq.~(\ref{self2}) with the simulation results, we have also included in Fig.~8(b) the analytical results for $\tau=5$, which are represented by the solid line. Good overall agreement between the two can be observed. \section{Summary} We have studied analytically and numerically the collective synchronization phenomena in a set of globally coupled oscillators with time retarded interaction. In order to understand the effects of time delay on the synchronization, we have derived the self-consistency equations for the order parameter, which describe synchronization in the system.
The detailed analysis of the self-consistency equations has revealed a multitude of coherent states with nonzero synchronization frequencies, each separated from the incoherent state by a discontinuous transition. At the critical coupling strength, the system exhibits the usual continuous transition from the incoherent state to the coherent one, displaying collective synchronization with zero synchronization frequency. As the coupling strength is increased further, coherent states with larger synchronization frequencies have also been shown to appear via discontinuous transitions from the incoherent state. Thus a multitude of coherent states with different synchronization frequencies have been found to coexist in the appropriate regions, leading to multistability. The synchronization frequency of the oscillators in a coherent state has been observed to decrease with the delay time. To confirm the analytical results, we have also performed numerical simulations, the results of which indeed display multistability and suppression of the synchronization frequency. For detailed comparison, however, one should search the solution space extensively, with varying initial conditions, to obtain solutions with various values of the synchronization frequency. This requires more extensive simulations, which is left for future study. Finally, one may also include stochastic noise in the system and study its effects on synchronization behavior. In particular, the interplay between the external driving and noise poses the possibility of stochastic resonance~\cite{Hong99}, and it is of interest to examine how the collective synchronization together with the time delay affects the possible resonance phenomena. \section*{Acknowledgments} We thank G. S. Jeon, M.-S. Choi, and K. Park for illuminating discussions and C. W. Kim for the hospitality during our stay at Korea Institute for Advanced Study, where part of this work was accomplished. 
This work was supported in part by the SNU Research Fund, by the Korea Research Foundation, and by the Korea Science and Engineering Foundation. \section*{Appendix A} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} In the case $\Omega \ll K\Delta \,(\ll 1)$, we approximate the integral appearing in Eq.~(\ref{self2}): \begin{eqnarray} \label{approx1} &&\int_{1}^{\infty} dx \,g(\Omega{+}K\Delta x)\, \left(x-\sqrt{x^{2}-1}\right) +\int_{-\infty}^{-1} dx \,g(\Omega{+}K\Delta x)\, \left(x+\sqrt{x^{2}-1}\right) \nonumber \\ &\quad& = \int_{1}^{\infty} dx [g(\Omega{+}K\Delta x) - g(\Omega{-}K\Delta x)] \left(x-\sqrt{x^{2}-1}\right) \nonumber \\ &\quad& \approx \int_{1}^{\infty} dx \,2\Omega \,g^{\prime}( K\Delta x)\, \left(x-\sqrt{x^{2}-1}\right), \end{eqnarray} and expand Eq.~(\ref{self2}) to the order $(K\Delta)^3$. This yields \begin{eqnarray} \label{small} &&\Delta \cos [(\Omega_{0}+\Omega_{1}(K\Delta)+\Omega_{2}(K\Delta)^2)\tau] \nonumber \\ &&~= \frac{1}{2}\sqrt{\frac{\pi}{2}} e^{-{\Omega_{0}}^2/2} \left\{K\Delta-\Omega_{0}\Omega_{1}(K\Delta)^2 + \left[({\Omega_{0}}^2-1)\left(\frac{{\Omega_{1}}^2}{2} +\frac{1}{8}\right)-\Omega_{0}\Omega_{2}\right](K\Delta)^3 \right\}, \nonumber \\ &&\Delta \sin[(\Omega_{0}+\Omega_{1}(K\Delta)+\Omega_{2}(K\Delta)^2)\tau] \nonumber \\ &&~= -\frac{\Omega_{0}}{2}K\Delta +\left[\frac{\Omega_{0}}{3}\sqrt{\frac{2}{\pi}} \left(1- e^{-{\Omega_{0}}^2/2}\right) -\frac{\Omega_{1}}{2}\right](K\Delta)^2 \nonumber \\ &&~ + \left\{\frac{\Omega_{0}}{8}+\frac{\Omega_{1}}{3}\sqrt{\frac{2}{\pi}} \left[ 1+({\Omega_{0}}^2-1)e^{-{\Omega_{0}}^2/2}\right] - \frac{\Omega_{2}}{2}\right\} (K\Delta)^3, \end{eqnarray} where Eqs.~(\ref{ao}) and the Gaussian distribution $g(\omega)$ have been used. 
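The antisymmetrization and small-$\Omega$ step in Eq.~(\ref{approx1}) is easy to check numerically; the snippet below (our code) compares the exact combined integral with its approximation $2\Omega\, g^{\prime}(K\Delta x)$ in the integrand, for the Gaussian distribution, using a large but finite upper cutoff in place of infinity.

```python
import math

def g(w):
    """Gaussian frequency distribution with zero mean and unit variance."""
    return math.exp(-w * w / 2.0) / math.sqrt(2.0 * math.pi)

def gprime(w):
    """Derivative of the unit Gaussian: g'(w) = -w*g(w)."""
    return -w * g(w)

def integrate(f, a, b, n=20000):
    """Midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def exact(Omega, KD, cut=40.0):
    """Integral over x in [1, cut] of [g(Omega + KD*x) - g(Omega - KD*x)]
    times (x - sqrt(x^2 - 1)); the cutoff stands in for infinity."""
    return integrate(lambda x: (g(Omega + KD * x) - g(Omega - KD * x))
                     * (x - math.sqrt(x * x - 1.0)), 1.0, cut)

def approx(Omega, KD, cut=40.0):
    """Small-Omega approximation with 2*Omega*g'(KD*x) in the integrand."""
    return integrate(lambda x: 2.0 * Omega * gprime(KD * x)
                     * (x - math.sqrt(x * x - 1.0)), 1.0, cut)

e, a = exact(0.05, 0.3), approx(0.05, 0.3)
print(e, a)
```

For $\Omega=0.05$ and $K\Delta=0.3$ the two integrals agree to a relative accuracy of order $\Omega^2$, as expected from the Taylor expansion of $g(\Omega\pm K\Delta x)$ in $\Omega$.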
After a tedious calculation, we obtain from Eq.~(\ref{small}) \begin{eqnarray} \label{omega1} \Omega_0\tau &=& -\tan^{-1} \left(\sqrt{\frac{2}{\pi}}\Omega_0 e^{\Omega_0^2/2}\right), \nonumber \\ \Omega_1\tau &=& \sqrt{\frac{8}{\pi}}e^{\Omega_0^2/2} \left(1+\frac{2}{\pi}\Omega_0^2 e^{\Omega_0^2}\right)^{-1} \left[\sqrt{\frac{2}{9\pi}}\Omega_0 \left(1- e^{-\Omega_0^2/2} \right) -{\Omega_1\over2}- {\Omega_0^2\Omega_1\over2} \right], \nonumber \\ \Omega_2\tau &=& \sqrt{\frac{2}{\pi}}e^{-\Omega_0^2/2}\left(1+\frac{2}{\pi}\Omega_0^2 e^{\Omega_0^2}\right)^{-1} \left\{(1+\Omega_0^2)\left({\Omega_0\over 8} -\Omega_2\right) \right. \nonumber \\ & &~ + \frac{8}{\pi} \Omega_0 e^{\Omega_0^2/2} \left(1+\frac{2}{\pi}\Omega_0^2 e^{\Omega_0^2}\right)^{-1} \left[\sqrt{\frac{2}{9\pi}}\Omega_0 \left(1- e^{-\Omega_0^2/2} \right) -{\Omega_1\over2}- {\Omega_0^2\Omega_1\over2}\right]^2 \nonumber \\ & &~ - \left. {1\over 2}\Omega_0 \Omega_1^2 (\Omega_0^2+3) + \sqrt{\frac{8}{9\pi}}\Omega_1 \left(1+ \Omega_0^2 - e^{-\Omega_0^2/2} \right) \right\} \end{eqnarray} together with \begin{equation} \label{sc1} \Delta = a_1 K\Delta+b_1 (K\Delta)^2+c_1 (K\Delta)^3+{\cal O}(K\Delta)^4, \end{equation} which is just Eq.~(\ref{asc1}). The coefficients depend on $\Omega_0$, $\Omega_1$, and $\Omega_2$ according to \begin{eqnarray} \label{cof1} a_1 &=& \left(\frac{\Omega_0^2}{4}+\frac{\pi}{8} e^{-\Omega_0^2}\right)^{1/2}, \nonumber \\ b_1 &=& \left(\Omega_0^2+{\pi\over2} e^{-\Omega_0^2}\right)^{-1/2} \left[-\sqrt{\frac{2}{9\pi}}\Omega_0^2 \left(1- e^{-\Omega_0^2/2}\right) + \frac{\Omega_0\Omega_1}{2}\left(1-{\pi\over2} e^{-\Omega_0^2}\right) \right], \nonumber \\ c_1 &=& \left(\Omega_0^2+{\pi\over2} e^{-\Omega_0^2}\right)^{-1/2} \left\{ \left({2\over9\pi}-{1\over8}\right)\Omega_0^2 + \frac{\Omega_0\Omega_1}{2} +{\Omega_1^2\over 4}-\sqrt{\frac{2}{9\pi}}\Omega_0\Omega_1 (2+\Omega_0^2) \right.
\nonumber \\ && ~- \left(\Omega_0^2+{\pi\over2} e^{-\Omega_0^2}\right)^{-1} \left[\sqrt{\frac{2}{9\pi}}\Omega_0^2 \left(1- e^{-\Omega_0^2/2}\right) - \frac{\Omega_0\Omega_1}{2}\left(1-{\pi\over2} e^{-\Omega_0^2}\right) \right]^2 \nonumber \\ && ~- \left({4\over 9\pi}\Omega_0^2-\sqrt{\frac{8}{9\pi}}\Omega_0 \Omega_1\right) e^{-\Omega_0^2/2} \nonumber \\ && ~\left. + \left[{2\over9\pi}\Omega_0^2+{\pi\over4}\left({\Omega_0^2\over8} -{1\over8}+ \Omega_0^2\Omega_1^2 - {\Omega_1^2\over 2}-\Omega_0\Omega_2\right)\right] e^{-\Omega_0^2} \right\}. \end{eqnarray} \section*{Appendix B} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} In the case $\Omega \gg K\Delta \,(\ll 1)$, we approximate the integral in Eq.~(\ref{self2}) as follows: \begin{eqnarray} \label{approx2} &&\int_{1}^{\infty} dx g(\Omega{+}K\Delta x)\, \left(x-\sqrt{x^{2}-1}\right) +\int_{-\infty}^{-1} dx \,g(\Omega{+}K\Delta x)\, \left(x+\sqrt{x^{2}-1}\right) \nonumber \\ &\quad& = \int_{1}^{\infty} dx \,[g(\Omega{+}K\Delta x) - g(\Omega{-}K\Delta x)] \left(x-\sqrt{x^{2}-1}\right) \nonumber \\ &\quad& \approx - \int_{1}^{\infty} dx \,g(\Omega{-}K\Delta x)\, \left(x-\sqrt{x^{2}-1}\right), \end{eqnarray} which, upon expansion to the order $(K\Delta)^3$, gives Eq.~(\ref{self2}) in the form \begin{eqnarray} \label{large} &&\Delta \cos [(\Omega_{0}+\Omega_{1}(K\Delta)+\Omega_{2}(K\Delta)^2)\tau] \nonumber \\ &&~~= \frac{1}{2}\sqrt{\frac{\pi}{2}} e^{-{\Omega_{0}}^2/2}\left\{K\Delta -\Omega_{0}\Omega_{1}(K\Delta)^2 + \left[({\Omega_{0}}^2-1)\left(\frac{{\Omega_{1}}^2}{2} +\frac{1}{8}\right)-\Omega_{0}\Omega_{2}\right](K\Delta)^3 \right\}, \nonumber \\ &&\Delta \sin [(\Omega_{0}+\Omega_{1}(K\Delta)+\Omega_{2}(K\Delta)^2)\tau] \nonumber \\ &&~~= -\frac{(1+{\Omega_{0}}^2)}{4{\Omega_{0}}^3} \left[1+\Phi\left(\Omega_0/\sqrt{2}\right)\right]K\Delta \nonumber \\ &&~~+ \frac{1}{12{\Omega_{0}}^4}\left\{-\sqrt{\frac{2}{\pi}} e^{-{\Omega_{0}}^2/2}(4 {\Omega_{0}}^5+ 3{\Omega_{0}}^3+3\Omega_{0}\Omega_{1}) + 
3\Omega_{1}(3+{\Omega_{0}}^2)\left[1+\Phi\left(\Omega_0/\sqrt{2}\right)\right] \right\}(K\Delta)^2 \nonumber \\ &&~~+ \frac{1}{48{\Omega_{0}}^5} \left\{\sqrt{\frac{2}{\pi}} e^{-{\Omega_{0}}^2/2} \left[16{\Omega_{0}}^7\Omega_{1}+2{\Omega_{0}}^5(3-14\Omega_{1}+3{\Omega_{1}}^2) -12{\Omega_{0}}^4\Omega_{2} \right.\right. \nonumber \\ &&~~~~\left. +6{\Omega_{0}}^3(-1-2\Omega_{1}+3{\Omega_{1}}^2) +9\Omega_{0}(1+4{\Omega_{1}}^2) - 12{\Omega_{0}}^2\Omega_{2}\right] \nonumber \\ &&~~~~\left. -3\left[6+24{\Omega_{1}}^2 +{\Omega_{0}}^2(1+4{\Omega_{1}}^2) -12\Omega_{0}\Omega_{2}-4{\Omega_{0}}^3\Omega_{2}\right] \left[1+\Phi\left(\Omega_0/\sqrt{2}\right)\right]\right\}(K\Delta)^3 \end{eqnarray} with the error function $\Phi(y)\equiv (2/\sqrt{\pi})\int_0^y dz~e^{-z^2}$. After a tedious calculation, we obtain from Eq.~(\ref{large}): \begin{eqnarray} \label{omega2} \Omega_0\tau &=& -\tan^{-1} \left\{ {1\over \sqrt{2\pi}} \left({1+\Omega_0^2\over\Omega_0^3}\right) e^{\Omega_0^2/2}\left[1+ \Phi\left(\Omega_0/\sqrt{2}\right) \right]\right\} + \pi, \nonumber \\ \Omega_1\tau &=& \left\{1+{1\over 2\pi}{(1+\Omega_0^2)^2 \over \Omega_0^6} e^{\Omega_0^2} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]^2 \right\}^{-1} \nonumber \\ & &~~~\times \left\{-{4\Omega_0\over 3\pi} -{\Omega_1-1\over\pi\Omega_0}- {\Omega_1\over \pi\Omega_0^3} + {1\over \sqrt{2\pi}}{\Omega_1\over \Omega_0^4} (3-\Omega_0^4) e^{\Omega_0^2/2} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right\}, \nonumber \\ \Omega_2\tau &=& \left\{ {1\over \sqrt{2\pi}} \left({1+\Omega_0^2\over\Omega_0^3}\right) e^{\Omega_0^2/2} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right\} \left\{1+{1\over 2\pi}{(1+\Omega_0^2)^2 \over \Omega_0^6} e^{\Omega_0^2} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]^2 \right\}^{-2} \nonumber \\ & &~~~\times \left\{-{4\Omega_0\over 3\pi} -{\Omega_1-1\over\pi\Omega_0} - {\Omega_1\over \pi\Omega_0^3} + {1\over \sqrt{2\pi}}{\Omega_1\over \Omega_0^4} (3-\Omega_0^4) e^{\Omega_0^2/2}\left[1+ 
\Phi\left(\Omega_0/\sqrt{2}\right)\right] \right\} \nonumber \\ & &~~- \left\{ {1\over\sqrt{2\pi}\Omega_0^3} \left[{1\over8} (1-\Omega_0^4)-{\Omega_1^2\over2}(5-\Omega_0^4)+\Omega_0\Omega_2(1+\Omega_0^2)\right] e^{\Omega_0^2/2} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right. \nonumber \\ & &~~+ {1\over 12\pi\Omega_0^4}[16\Omega_0^6\Omega_1+2\Omega_0^4(3-14\Omega_1 +3\Omega_1^2)-12\Omega_0^3\Omega_2-6\Omega_0^2(1+2\Omega_1-3\Omega_1^2) \nonumber \\ & &~~ -12 \Omega_0\Omega_2+9(1+4\Omega_1^2)]-{1\over 4\sqrt{2\pi}\Omega_0^5} [6+24\Omega_1^2+\Omega_0^2(1+4\Omega_1^2)-12\Omega_0\Omega_2-4\Omega_0^3 \Omega_2] \nonumber \\ & &~~~~\left.\times e^{\Omega_0^2/2} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right\} \end{eqnarray} and \begin{equation} \label{sc2} \Delta = a_2 K\Delta+b_2 (K\Delta)^2+c_2 (K\Delta)^3+{\cal O}(K\Delta)^4, \end{equation} which is Eq.~(\ref{asc2}). Again the coefficients depend on $\Omega_0$, $\Omega_1$, and $\Omega_2$ via \begin{eqnarray} \label{cof2} a_2 &=& \left\{\frac{\pi}{8} e^{-\Omega_0^2}+{(1+\Omega_0^2)^2\over 16\Omega_0^6} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]^2\right\}^{-1/2}, \nonumber \\ b_2 &=& \left\{\frac{\pi}{2} e^{-\Omega_0^2}+{(1+\Omega_0^2)^2\over 4\Omega_0^6} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]^2 \right\}^{-1/2} \left\{ -{\pi\over4}\Omega_0\Omega_1 e^{-\Omega_0^2}+{1+\Omega_0^2\over 8\Omega_0^6} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right. \nonumber \\ &&~~~\left.\times\left[{2\over\pi}e^{-\Omega_0^2/2} \left({4\over3}\Omega_0^4+\Omega_0^2(\Omega_1-1)+\Omega_1\right) -(3+\Omega_0^2)\Omega_1\left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]\right] \right\}, \nonumber \\ c_2 &=& \left\{\frac{\pi}{2} e^{-\Omega_0^2}+{(1+\Omega_0^2)^2\over 4\Omega_0^6} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]^2 \right\}^{-3/2} \left\{-{\pi\over4}\Omega_0\Omega_1 e^{-\Omega_0^2}+{1+\Omega_0^2\over 8 \Omega_0^6} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right. 
\nonumber \\ &&~~~\left.\times\left[\sqrt{\frac{2}{\pi}}e^{-\Omega_0^2/2} \left({4\over3}\Omega_0^4+\Omega_0^2(\Omega_1-1)+\Omega_1\right) -(3+\Omega_0^2)\Omega_1\left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \right] \right\}^2 \nonumber \\ &&~~+ \left\{\frac{\pi}{2} e^{-\Omega_0^2}+{(1+\Omega_0^2)^2\over 4\Omega_0^6} \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]^2 \right\}^{-1/2} \left\{ {\pi\over 8} e^{-\Omega_0^2}\left(2\Omega_0^2\Omega_1^2 +{\Omega_0^2\over4}-\Omega_1^2-2\Omega_0\Omega_2-{1\over4}\right) \right. \nonumber \\ &&~~+ {1\over 144\Omega_0^7}\left[\sqrt{\frac{2}{\pi}}e^{-\Omega_0^2/2} \left({4\over3}\Omega_0^4+\Omega_0^2(\Omega_1-1)+\Omega_1\right) -(3+\Omega_0^2)\Omega_1\left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right]\right]^2 \nonumber \\ &&~~-\sqrt{\frac{2}{\pi}}e^{-\Omega_0^2/2}{1+\Omega_0^2\over 96 \Omega_0^7} \left[16\Omega_0^6\Omega_1+2\Omega_0^4(3-14\Omega_1+3\Omega_1^2)-12\Omega_0^3\Omega_2 \right. \nonumber \\ &&~~\left.-6\Omega_0^2(1+2\Omega_1-3\Omega_1^2)-12 \Omega_0\Omega_2+9(1+4\Omega_1^2)\right] \left[1+ \Phi\left(\Omega_0/\sqrt{2}\right)\right] \nonumber \\ &&~~\left.-{1\over 4\sqrt{2\pi}\Omega_0^5} \left[6+24\Omega_1^2+\Omega_0^2(1+4\Omega_1^2)-12\Omega_0\Omega_2-4\Omega_0^3\Omega_2\right] \right\}. \end{eqnarray} \newpage
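The function $\Phi$ appearing throughout the coefficients above is the standard error function. As a quick stdlib-only Python sanity check (a sketch, not part of the derivation), the defining integral can be evaluated by simple quadrature and compared against the library implementation:

```python
import math

def Phi(y, n=10000):
    # Φ(y) = (2/√π) ∫_0^y e^{-z²} dz via the composite trapezoidal rule
    h = y / n
    s = 0.5 * (1.0 + math.exp(-y * y))          # endpoint values of e^{-z²}
    s += sum(math.exp(-(i * h) ** 2) for i in range(1, n))
    return 2.0 / math.sqrt(math.pi) * h * s

# agrees with math.erf to quadrature accuracy
for y in (0.5, 1.0, 2.0):
    assert abs(Phi(y) - math.erf(y)) < 1e-6
```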
\section{Introduction} There is increasing evidence that the Universe is expanding with acceleration and that the transition from decelerated to accelerated expansion happened in the recent past. The transition redshift is $z_T>0.2$; we use the subscript $T$ to denote the transition throughout this paper. These results suggest that there exists dark energy (DE) with negative pressure in the Universe, and that the DE was subdominant in the past and dominates the Universe now. The presence of DE has many interesting physical effects. For example, there exists an event horizon if the acceleration is eternal. The event horizon sets a causal limit on what observers can ever access. The existence of eternal acceleration also prevents us from ever measuring inflationary perturbations which originated before the ones currently observable \cite{nbound,nbound1}. The DE physics is still a challenging topic. The current supernova Ia (SN Ia) data are unable to distinguish different DE models and different DE parameterizations \cite{riess,alam,alam1,daly,daly1,hannestad,hkjbp,hkjbp1,hkjbp2,lazkoz,gong1,gong2,gong05,gong06,gong3,rcf}. Starkman, Trodden and Vachaspati (STV) addressed the problem of inflation in our patch of the Universe with the help of the concept of the minimal anti-trapped surface (MAS) \cite{stv}. They argued that if we can confirm the acceleration up to a redshift $z_c$ and observe the contraction of our MAS, then we are certain that our universe is inflating. If we see the contraction of our MAS, then we observe the onset of inflation. The immediate conclusion would be that our universe is undergoing inflation, because the cosmic acceleration is confirmed up to the redshift $1.755$ by the SN 1997ff. However, STV found that the period of acceleration has not lasted long enough for observations to confirm the onset of inflation for the $\Lambda$CDM model. 
The work of STV is based on the earlier work of Vachaspati and Trodden, who proved that in a homogeneous and isotropic universe, the necessary and sufficient condition for observing the contraction of the MAS is that the Universe is vacuum dominated in a region of radius greater than the Hubble size $H^{-1}$ \cite{vt}. The comoving contraction of our MAS is the essence of inflation. Thus only if a region of size greater than $H^{-1}$ remains vacuum dominated long enough for the MAS to begin collapsing can we be certain that the Universe is undergoing inflation \cite{stv}. Because the Hubble size is increasing with time in general, the later the transition time (the smaller the redshift $z_T$) is, the longer inflation needs to last (the larger $a_v/a_T$ is, where $a_v$ is the scale factor at the time when the onset of acceleration is first seen). Avelino, de Carvalho and Martins then replaced the Hubble size by the event horizon with some additional assumptions \cite{acm}. If the event horizon criterion is used, then the smaller $z_T$ is, the smaller $a_v/a_T$ we need. Huterer, Starkman and Trodden also analyzed general DE models and found that current observations are unable to confirm the onset of inflation \cite{hst}. In this paper, we discuss the holographic DE model \cite{hldark1,hsu,li,huang,gonghl,gongh2} and the generalized Chaplygin gas (GCG) model \cite{chap,chap1,chap2,gonggcg}. \section{The Hubble Size Criterion} Vachaspati and Trodden proved that inflationary models based on the classical Einstein equations, the weak energy conditions, and trivial topology, must assume homogeneity on super-Hubble scales. Based on this result, STV introduced the concept of the MAS to discuss the observability of the onset of inflation. The MAS is a sphere, centered on the observer, on which the velocity of comoving objects is the speed of light $c$ \cite{stv}. 
For light emitted directly toward the observer inside the MAS, the photons get closer to the observer with time, while all photons emitted by sources outside the MAS get farther away. For a homogeneous and isotropic universe, the physical radius of the MAS at time $t$ is the Hubble size $1/H(t)$. It was argued that the beginning of the comoving contraction of the MAS can be identified with the onset of inflation. Note that the condition of the onset of inflation is $\ddot{a}=0$, which is equivalent to $d(aH)^{-1}/dt=0$, where $a(t)$ is the scale factor. If light is emitted at time $t_e$ from a source located at a comoving distance $r$ and then received at time $t_v$ by the observer located at the origin, the physical distance between the source and the observer at $t_e$ is \begin{equation} \label{phydis} d(t_e,t_v)=a(t_e)\int^{t_v}_{t_e}\frac{dt}{a(t)}. \end{equation} In this paper, we consider a flat universe. We can see the contraction of the MAS at time $t_v$ if $d(t_e,t_v)=1/H(t_e)$ \cite{stv}; here $t_v$ ($a_v$) is the time (scale factor) when the turnaround of the MAS comes into view. Therefore, the onset of the late time inflation can be seen at time $t_v$ if $d(t_T,t_v)=1/H(t_T)$, where $t_T$ is the transition time when the Universe passed from the deceleration phase to the acceleration phase. To see the consequences of $d(t_e,t_v)=1/H(t_e)$ clearly, we use a simple DE model $p=w\rho$ with constant $w$ satisfying $w\ge -1$ as an example. The matter energy density is $\rho_m=\rho_{mr}(a_r/a)^3$, where the subscript $r$ means that the variable takes a value at an arbitrary reference time $t_r$. The DE density is $\rho_x=\rho_{xr}(a_r/a)^{3(1+w)}$. So we have \begin{equation} \label{hubeq1} H^2=H^2_r\left[\Omega_{mr}\left(\frac{a_r}{a}\right)^3+\Omega_{xr}\left(\frac{a_r}{a}\right)^{3(1+w)}\right], \end{equation} where $\Omega=8\pi G\rho/(3H^2)$. 
The transition time $t_T$ is determined from \begin{equation} \label{acc1} \Omega_{mT}+(1+3w)\Omega_{xT}=0, \end{equation} or \begin{equation} \label{acc2} 1+z_T=\frac{a_0}{a_T}=\left(-\frac{\Omega_{m0}}{\Omega_{x0}(1+3w)}\right)^{1/3w}, \end{equation} where $\Omega_{m0}=1-\Omega_{x0}$ and the subscript $0$ means that the variable takes its present value. For the $\Lambda$CDM model with $w=-1$ and $\Omega_{m0}=0.3$, we get $z_T=0.67$. Using Eq. (\ref{acc1}), the condition $d(t_T,t_v)=1/H(t_T)$ becomes \begin{equation} \label{wcond1} \int^1_{a_T/a_v}\frac{dx}{\sqrt{x^3(x^{3w}-1-3w)}}=\frac{1}{\sqrt{-3w}}. \end{equation} To be able to observe the onset of inflation at present, we require $a_v<a_0$. For the $\Lambda$CDM model, the solution to the above equation (\ref{wcond1}) is $a_v/a_T=3.59>1+z_T=1.67$, so $a_v>a_0$ and $\Omega_{\Lambda v}=0.96$. In addition, we require the confirmation of cosmic acceleration up to a redshift $z_c$, where $z_c$, determined from the condition $d(t_c,t_0)=1/H(t_c)$, satisfies the following equation \begin{equation} \label{wcond2} \int^{z_c}_0\frac{dz}{\sqrt{\Omega_{m0}(1+z)^3+\Omega_{x0}(1+z)^{3(1+w)}}}= \frac{1+z_c}{\sqrt{\Omega_{m0}(1+z_c)^3+\Omega_{x0}(1+z_c)^{3(1+w)}}}. \end{equation} Note that $z_c$ is the minimum redshift up to which the cosmic acceleration needs to be observed by current observations. For the $\Lambda$CDM model with $\Omega_{m0}=0.3$, we get $z_c=1.61$. Since the current SN Ia observations extend to redshift $1.755$, the cosmic acceleration is confirmed. But we still need to wait until $\Omega_\Lambda$ reaches a value of $0.96$ to observe the onset of inflation. The solutions to Eqs. (\ref{acc2}) and (\ref{wcond1}) for other choices of $\Omega_{x0}$ and $w$ are shown in Fig.~\ref{fig1}. From Fig.~\ref{fig1}, we see that we need to wait until $\Omega_{x}\sim 0.9$ for $w > -0.67$ to be able to observe the onset of inflation. For bigger $w$, we need smaller $\Omega_x$. 
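The $\Lambda$CDM numbers quoted above, $z_T=0.67$ from Eq. (\ref{acc2}) and $a_v/a_T=3.59$ from Eq. (\ref{wcond1}), are straightforward to reproduce numerically. The following stdlib-only Python sketch (our own check, not the authors' code) combines Simpson quadrature with bisection on the lower integration limit:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

w, om0, ox0 = -1.0, 0.3, 0.7                       # ΛCDM parameters
# Eq. (acc2): transition redshift
zT = (-om0 / (ox0 * (1 + 3 * w))) ** (1 / (3 * w)) - 1

# Eq. (wcond1): for w = -1 the integrand reduces to 1/sqrt(1 + 2x³)
f = lambda x: 1 / math.sqrt(x ** 3 * (x ** (3 * w) - 1 - 3 * w))
target = 1 / math.sqrt(-3 * w)
lo, hi = 1e-6, 1.0
for _ in range(60):                                # bisection on r = a_T/a_v
    r = 0.5 * (lo + hi)
    if simpson(f, r, 1.0) > target:
        lo = r                                     # integral too large: raise the lower limit
    else:
        hi = r
print(round(zT, 2), round(1 / r, 2))               # ≈ 0.67 and ≈ 3.59
```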
However, current observations strongly constrain $w\lesssim -0.8$. In other words, we are unable to confirm the onset of inflation now with current observations. \begin{figure} \centering \includegraphics[width=12cm]{zwsol.eps} \caption{The dependence of $a_T/a_0$ and $a_T/a_v$ on $w$ and $\Omega_{x0}$ for the DE model with constant equation of state parameter $w$. The line labeled ``Hubble Horizon" shows the dependence of $a_T/a_v$ on $w$ by using the Hubble scale criterion. The line labeled ``Event Horizon" shows the dependence of $a_T/a_v$ on $w$ by using the event horizon criterion.} \label{fig1} \end{figure} In Ref.~\refcite{press}, Lightman and Press introduced the concept of constant redshift surfaces to discuss the causal communication of comoving particles. In the definition of the constant redshift surfaces, the lower integration limit $t_e$ changes with the upper limit $t_v$ in equation (\ref{phydis}) by fixing $a(t_v)/a(t_e)=1+z_1$ to be a constant. The constant redshift surface or $z_1$-surface increases with time before inflation. After inflation, the $z_1$-surface will eventually decrease with time. For small redshift $z_1<z_T$, the $z_1$-surface is decreasing. For large redshift, the $z_1$-surface is increasing. So there exists a turnaround redshift $z_1$ at which the $z_1$-surface reaches its maximum at present. For the $\Lambda$CDM model, we find that the turnaround redshift is $z_1=2.09$ if we take $\Omega_{m0}=0.3$. Although the decrease of the $z_1$-surface for small $z_1$ is a characteristic feature of an accelerated universe, it does not mean that the Universe is inflating and the space-time will evolve into the de Sitter phase. Only if $z_1$ is big enough that the $z_1$-surface crosses the event horizon can we say that the Universe is inflating. We will discuss this in the next section. 
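The confirmation redshift $z_c=1.61$ quoted above for the $\Lambda$CDM model can likewise be found by bisection on Eq. (\ref{wcond2}); a minimal stdlib-only Python sketch (our own check, not taken from the paper):

```python
import math

E = lambda z: math.sqrt(0.3 * (1 + z) ** 3 + 0.7)   # ΛCDM H(z)/H_0 with Ω_m0 = 0.3

def dimless_distance(z, n=2000):
    # ∫_0^z dz'/E(z') by the composite Simpson rule (n even)
    h = z / n
    s = 1 / E(0) + 1 / E(z) + sum((4 if i % 2 else 2) / E(i * h) for i in range(1, n))
    return s * h / 3

# Eq. (wcond2): find z_c with d(t_c, t_0) = 1/H(t_c)
lo, hi = 0.1, 5.0
for _ in range(60):
    zc = 0.5 * (lo + hi)
    if dimless_distance(zc) < (1 + zc) / E(zc):
        lo = zc
    else:
        hi = zc
print(round(zc, 2))                                  # ≈ 1.61
```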
\subsection{Holographic DE model} Cohen, Kaplan and Nelson proposed that for any state with energy $E$ in the Hilbert space, the corresponding Schwarzschild radius $R_S\sim E$ is less than the infrared (IR) cutoff $L$ \cite{hldark1}. Therefore, the maximum entropy is $S_{BH}^{3/4}$. Under this assumption, a relationship between the ultraviolet cutoff $\rho_x^{1/4}$ and the IR cutoff is derived, i.e., $8\pi G L^3 \rho_x/3\sim L$. Hsu found that the model based on the Hubble scale as the IR cutoff would not give an accelerating universe \cite{hsu}. Li then showed that a plausible dark energy is possible by choosing the future event horizon as the IR cutoff \cite{li}. So the holographic DE density is \cite{li,huang} \begin{equation} \label{holorho} \rho_x=\frac{3d^2}{8\pi G R_{eh}^2}, \end{equation} where $R_{eh}(t)=d(t,\infty)$ is the event horizon. The equation of state of the holographic DE is $$w_x=-\frac{1}{3}\left(1+\frac{2\sqrt{\Omega_x}}{d}\right).$$ Because of some physical constraints on $d$, we take $d=1$ for simplicity \cite{gonghl}. Note that the weak energy condition is satisfied as long as $d^2\ge \Omega_x$. By using the Friedmann equations, we get \begin{equation} \label{holorho2} \frac{d\Omega_x}{d\ln a}=\Omega_x(1-\Omega_x)(1+2\sqrt{\Omega_x}). \end{equation} The solution to Eq. (\ref{holorho2}) is \begin{equation} \label{holorho3} \ln\Omega_x-\frac{1}{3}\ln(1-\sqrt{\Omega_x})+\ln(1+\sqrt{\Omega_x})-\frac{8}{3}\ln(1+2\sqrt{\Omega_x}) =\ln\left(\frac{a}{a_r}\right)+y_r, \end{equation} where $y_r$ is determined from the above equation by using $\Omega_{xr}$. From the definition of the holographic DE density (\ref{holorho}) and the Friedmann equations, we get \begin{equation} \label{ehor} R_{eh}(t)=\frac{1}{H_r\sqrt{1-\Omega_{xr}}}\left(\frac{a(t)}{a_r}\right)^{3/2}\left( \frac{1-\Omega_x(t)}{\Omega_x(t)}\right)^{1/2}. 
\end{equation} The transition time $t_T$ is determined from \begin{equation} \label{hacc1} \Omega_{xT}+2\Omega_{xT}\sqrt{\Omega_{xT}}=1. \end{equation} So $\Omega_{xT}=0.432$ and $y_T=-2.215$. Substituting these values into Eq. (\ref{holorho3}), setting $a_r=a_0$ and $\Omega_x=\Omega_{xT}$, and using the best fitting value $\Omega_{xr}=\Omega_{x0}=0.75$, we get $z_T=0.72$. Since $d(t_T,t_v)=R_{eh}(t_T)-a_T R_{eh}(t_v)/a_v$, the condition $d(t_T,t_v)=1/H(t_T)$ becomes \begin{equation} \label{hcond1} \frac{a_v(1-\Omega_{xv})}{a_T \Omega_{xv}}=\frac{(1-\sqrt{\Omega_{xT}})^2(1-\Omega_{xT})}{\Omega_{xT}}=0.154. \end{equation} Combining Eqs. (\ref{holorho3}) and (\ref{hcond1}), we get \begin{equation} \label{omxv} \ln\Omega_{xv}-\frac{1}{3}\ln(1-\sqrt{\Omega_{xv}})+\ln(1+\sqrt{\Omega_{xv}})- \frac{8}{3}\ln(1+2\sqrt{\Omega_{xv}}) =\ln\left(\frac{0.154\Omega_{xv}}{1-\Omega_{xv}}\right)+y_T. \end{equation} Combining Eqs. (\ref{hcond1}) and (\ref{omxv}), we get $a_v/a_T=3.46>1+z_T=1.72$, so $a_v>a_0$. The redshift $z_c$ satisfies the equation \begin{equation} \label{hcond2} \frac{(1-\sqrt{\Omega_{xc}})^2(1-\Omega_{xc})}{(1+z_c)\Omega_{xc}} =\frac{1-\Omega_{x0}}{\Omega_{x0}}. \end{equation} Combining Eqs. (\ref{holorho3}) and (\ref{hcond2}) with $\Omega_{x0}=0.75$, we get $z_c=1.64<1.755$. Therefore, although we see our MAS today, we are unable to observe the onset of inflation for the holographic DE model because the cosmic acceleration has not lasted long enough. \subsection{GCG Model} For the GCG model, we have $p_g=-A/\rho_g^\alpha$. By using the energy conservation equation, we get \begin{equation} \label{grho1} \rho_g=\rho_{gr}\left[-w_{gr}+(1+w_{gr})\left(\frac{a_r}{a}\right)^{3(1+\alpha)}\right]^{1/(1+\alpha)}, \end{equation} where the equation of state parameter $w_g=p_g/\rho_g$. Because $\rho_g\sim (a_0/a)^3$ when $a\ll a_0$ and $\rho_g\sim {\rm constant}$ when $a\gg a_0$, the GCG model can be thought of as a unified model of DE and dark matter. 
Therefore we assume that there is no matter present for simplicity and require $w_g\ge -1$ so that the weak energy condition is satisfied. As discussed in Ref. \refcite{gonggcg}, some reasonable physical constraints also require $\alpha\ge 0$. The Friedmann equation is \begin{equation} \label{gcos1} H^2=H^2_r\left[-w_{gr}+(1+w_{gr})\left(\frac{a_r}{a}\right)^{3(1+\alpha)}\right]^{1/(1+\alpha)}. \end{equation} At the transition time $t_T$, we have $w_{gT}=-1/3$. So the transition redshift satisfies \begin{equation} \label{gzt} 1+z_T=\left(-\frac{2w_{g0}}{1+w_{g0}}\right)^{1/3(1+\alpha)}. \end{equation} By using the best supernova fitting values $w_{g0}=-0.83$ and $\alpha=1.20$, we get $z_T=0.412$. The condition $d(t_T,t_v)=1/H(t_T)$ gives \begin{equation} \label{gcond1} \int^1_{a_T/a_v}\left[\frac{1}{3}+\frac{2}{3}x^{3(1+\alpha)}\right]^{-1/2(1+\alpha)} dx=1. \end{equation} If we take $\alpha=1.2$, we get $a_v/a_T=5.50$ and $a_v>a_0$. The condition $d(t_c,t_0)=1/H(t_c)$ gives \begin{eqnarray} \label{gcond2} \int^{z_c}_0\left[-w_{g0}+(1+w_{g0})(1+z)^{3(1+\alpha)}\right]^{-1/2(1+\alpha)} dz= \nonumber \\ \frac{1+z_c}{\left[-w_{g0}+(1+w_{g0})(1+z_c)^{3(1+\alpha)}\right]^{1/2(1+\alpha)}}. \end{eqnarray} By using the best supernova fitting values $w_{g0}=-0.83$ and $\alpha=1.20$, we get $z_c=1.424<1.755$. Again, present observations of cosmic acceleration extend beyond the required redshift $z_c=1.424$, but currently we are still unable to confirm the onset of inflation for the GCG model. For some other values of $w_{g0}$ and $\alpha$, the numerical solutions to Eqs. (\ref{gzt}) and (\ref{gcond1}) are shown in Fig. \ref{fig2}. From Fig. \ref{fig2}, we see that it is possible to confirm the onset of inflation for the GCG model only if $\alpha<0$, which is outside the physical parameter space. \begin{figure} \centering \includegraphics[width=12cm]{gcgzw.eps} \caption{The dependence of $a_T/a_0$ and $a_T/a_v$ on $w_{g0}$ and $\alpha$ for the GCG model. 
The line labeled ``Hubble Horizon" shows the dependence of $a_T/a_v$ on $\alpha$ by using the Hubble scale criterion. The line labeled ``Event Horizon" shows the dependence of $a_T/a_v$ on $\alpha$ by using the event horizon criterion.} \label{fig2} \end{figure} \section{The Event Horizon Criterion} In general, the Hubble size increases with time. Therefore, it will take a longer time to observe the onset of inflation if the transition happened at a later time. In order to avoid this situation, Avelino, de Carvalho and Martins replaced the Hubble scale criterion discussed in the previous section by requiring that the comoving distance equals the comoving event horizon at the time of reception. Of course, some additional assumptions on the content of the local universe and field dynamics are needed. The event horizon criterion is \begin{equation} \label{ereq} r=\int_{t_e}^{t_v}\frac{dt}{a(t)}=\int_{t_v}^\infty \frac{dt}{a(t)}. \end{equation} By using the notation of the constant redshift surface in \cite{press}, the above condition gives us the redshift $z_1$ when the $z_1$-surface crosses the event horizon. To illustrate the effect of this condition, we take the simple DE model with constant equation of state as an example. Applying the condition (\ref{ereq}) to the onset of inflation, Eq. (\ref{wcond1}) is replaced by \begin{equation} \label{wecond1} \int^1_{a_T/a_v}\frac{dx}{\sqrt{x^3(x^{3w}-1-3w)}}=\int^{a_T/a_v}_0\frac{dx}{\sqrt{x^3(x^{3w}-1-3w)}}, \end{equation} and Eq. (\ref{wcond2}) is replaced by \begin{equation} \label{wecond2} \int^{z_c}_0\frac{dz}{\sqrt{\Omega_{m0}(1+z)^3+\Omega_{x0}(1+z)^{3(1+w)}}} =\int^{0}_{-1}\frac{dz}{\sqrt{\Omega_{m0}(1+z)^3+\Omega_{x0}(1+z)^{3(1+w)}}}. \end{equation} For the $\Lambda$CDM model, the solution to Eq. (\ref{wecond1}) is $a_v/a_T=2.30>1+z_T=1.67$ or $\Omega_{\Lambda v}=0.86$, and the solution to Eq. (\ref{wecond2}) is $z_c=1.81$ if we take $\Omega_{m0}=0.3$. 
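The quoted $\Lambda$CDM value $a_v/a_T=2.30$ follows from Eq. (\ref{wecond1}) by equating the two integrals. A stdlib-only Python sketch of this check (our own quadrature-plus-bisection scheme, not the paper's code):

```python
import math

# for w = -1 the integrand of Eq. (wecond1) simplifies: x³(x^{-3} + 2) = 1 + 2x³
g = lambda x: 1 / math.sqrt(1 + 2 * x ** 3)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# event-horizon criterion: ∫_r^1 g dx = ∫_0^r g dx with r = a_T/a_v
lo, hi = 1e-6, 1.0
for _ in range(60):
    r = 0.5 * (lo + hi)
    if simpson(g, r, 1.0) > simpson(g, 0.0, r):
        lo = r
    else:
        hi = r
print(round(1 / r, 2))    # ≈ 2.30
```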
So we are unable to confirm the onset of inflation now with current observations. The numerical solutions to Eq. (\ref{wecond1}) for other values of $w$ are shown in Fig.~\ref{fig1}. From Fig.~\ref{fig1}, we see that we need to wait until $\Omega_{x}\sim 0.9$ for $w< -0.85$ to be able to observe the onset of inflation. The smaller $w$ is, the sooner we observe the onset of inflation. It is possible that we observe the onset of inflation with $\Omega_{\Lambda 0}=0.7$ if $w<-1$. However, the weak energy condition is violated if $w<-1$, and the criteria discussed in this paper do not apply. This situation needs to be investigated more carefully and is outside the scope of our discussion. \subsection{Holographic DE model} Applying the event horizon criterion (\ref{ereq}) to the holographic DE model discussed in the previous section, we replace Eq. (\ref{hcond1}) by \begin{equation} \label{hecond1} \frac{a_v(1-\Omega_{xv})}{a_T \Omega_{xv}}=\frac{1-\Omega_{xT}}{4\Omega_{xT}}=0.329, \end{equation} and Eq. (\ref{hcond2}) by \begin{equation} \label{hecond2} \frac{1-\Omega_{xc}}{4(1+z_c)\Omega_{xc}} =\frac{1-\Omega_{x0}}{\Omega_{x0}}. \end{equation} Combining Eqs. (\ref{holorho3}) and (\ref{hecond1}), we get $a_v/a_T=2.34>1+z_T=1.72$, so $a_v>a_0$. Combining Eqs. (\ref{holorho3}) and (\ref{hecond2}) with $\Omega_{x0}=0.75$, we get $z_c=1.84$. Therefore we are unable to observe the onset of inflation now for the holographic DE model. \subsection{GCG Model} Applying the event horizon criterion (\ref{ereq}) to the GCG model discussed in the previous section, we replace Eq. (\ref{gcond1}) by \begin{equation} \label{gecond1} \int^1_{a_T/a_v}\left[\frac{1}{3}+\frac{2}{3}x^{3(1+\alpha)}\right]^{-1/2(1+\alpha)} dx= \int^{a_T/a_v}_{0}\left[\frac{1}{3}+\frac{2}{3}x^{3(1+\alpha)}\right]^{-1/2(1+\alpha)} dx, \end{equation} and Eq. 
(\ref{gcond2}) by \begin{eqnarray} \label{gecond2} \int^{z_c}_0\frac{dz}{[-w_{g0}+(1+w_{g0})(1+z)^{3(1+\alpha)}]^{1/2(1+\alpha)}}=\nonumber\\ \int^0_{-1}\frac{dz}{[-w_{g0}+(1+w_{g0})(1+z)^{3(1+\alpha)}]^{1/2(1+\alpha)}}. \end{eqnarray} If we take $\alpha=1.2$, we get $a_v/a_T=2.08$, so $a_v>a_0$. By using $w_{g0}=-0.83$ and $\alpha=1.20$, the solution to Eq. (\ref{gecond2}) is $z_c=1.64$. Because $a_v>a_0$, currently we are unable to confirm the onset of inflation for the GCG model. For different values of $\alpha$, the numerical solutions to Eq. (\ref{gecond1}) are shown in Fig. \ref{fig2}. From Fig. \ref{fig2}, we see that it is possible to confirm the onset of inflation for the GCG model when $\alpha< 0.5$ and $w_{g0}\sim -0.95$. The smaller $\alpha$ is, the smaller $w_{g0}$ is required to observe the onset of inflation. \section{Discussion} For the Hubble size criterion, our results are: (1) Constant $w\ge -1$ model. It is possible to observe the onset of inflation when $\Omega_x\sim 0.9$ and $w> -0.67$. (2) The holographic DE model with $d=1$. We find that $z_T=0.72$ and $z_c=1.64$ if we use the best supernova fitting result $\Omega_{x0}=0.75$. We also get $a_v/a_T=3.46$ and $a_v>a_0$. (3) The GCG model. We find that $z_T=0.412$ and $z_c=1.424$ if we use the best supernova fitting results $w_{g0}=-0.83$ and $\alpha=1.20$. By using $\alpha=1.20$, we get $a_v/a_T=5.50$ and $a_v>a_0$. It is possible to observe the onset of inflation when $\alpha<0$ and $w_g\sim -0.75$. For the event horizon criterion, our results are: (1) Constant $w\ge -1$ model. It is possible to observe the onset of inflation when $\Omega_x\sim 0.9$ and $w< -0.85$. (2) The holographic DE model with $d=1$. We find that $z_c=1.84$ if we use the best supernova fitting result $\Omega_{x0}=0.75$. We also get $a_v/a_T=2.34$ and $a_v>a_0$. (3) The GCG model. We find that $z_c=1.64$ if we use the best supernova fitting results $w_{g0}=-0.83$ and $\alpha=1.20$. 
By using $\alpha=1.20$, we get $a_v/a_T=2.08$ and $a_v>a_0$. It is possible to observe the onset of inflation when $\alpha< 0.5$ and $w_g\sim -0.95$. In general, the event horizon criterion gives a bigger value of $z_c$ than the Hubble size criterion does. The reason is that today we may be able to observe a larger portion of the Universe than the one we can ever access. The event horizon criterion is not applicable for very low values of the dark energy density. However, the later the cosmic acceleration started, the sooner we are able to observe the onset of inflation by using the event horizon criterion. For all the three models discussed in this paper, we have $z_c>1$ and $a_v>a_0$. Therefore it is impossible to confirm the onset of inflation by current observations for all three models. However, this conclusion cannot be extended to phantom models because they violate the weak energy condition. Therefore, the conclusion cannot apply to the holographic DE model with phantom behavior \cite{nojiri}. However, it can be applied to the interacting holographic DE model discussed in Refs.~\refcite{intde} and \refcite{intde1} because $w_{tot}\ge -1$. Neither is the conclusion applicable to the GCG model with $w_g<-1$ discussed in Ref.~\refcite{ggcg}. When we parameterize the DE equation of state, we find that the supernova data might not be able to distinguish those parameterizations that have almost the same past behaviors but different future behaviors \cite{gong05}. For general dark energy models, it is impossible that cosmic acceleration started at a redshift $z_T>1$, so it is impossible to observe the onset of inflation up to a region of Hubble size. If the event horizon criterion is used, then it is possible to observe the onset of inflation and confirm the inflation of our universe for some general dynamic dark energy models with low transition redshift. 
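As a final numerical cross-check of the summary above (again a stdlib-only sketch, not the authors' code), the GCG values $z_T=0.412$ and the Hubble-criterion result $a_v/a_T=5.50$ follow from Eqs. (\ref{gzt}) and (\ref{gcond1}):

```python
import math

alpha, wg0 = 1.2, -0.83                            # best supernova fitting values
# Eq. (gzt): GCG transition redshift
zT = (-2 * wg0 / (1 + wg0)) ** (1 / (3 * (1 + alpha))) - 1

# integrand of Eq. (gcond1)
q = lambda x: (1 / 3 + 2 / 3 * x ** (3 * (1 + alpha))) ** (-1 / (2 * (1 + alpha)))

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# Eq. (gcond1): ∫_r^1 q dx = 1 with r = a_T/a_v
lo, hi = 1e-6, 1.0
for _ in range(60):
    r = 0.5 * (lo + hi)
    if simpson(q, r, 1.0) > 1.0:
        lo = r
    else:
        hi = r
print(round(zT, 3), round(1 / r, 2))               # ≈ 0.412 and ≈ 5.50
```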
In this paper, we find that the acceleration has not lasted long enough for observations to confirm that we are undergoing inflation. Therefore the future fate of the Universe is still unknown from current observations. We need to wait some time to be confident that we are undergoing inflation. On the other hand, if the cosmic acceleration never ends, less information about the early inflationary perturbations will be observed in the future. So we are living in a peculiar era in the history of the Universe. For the GCG model, the marginally allowed parameter space $\alpha \lesssim 0.5$ and $w_{g0}\gtrsim -0.75$ will make it possible to confirm that our universe is inflating by using the event horizon criterion. \section*{Acknowledgments} Y. Gong is supported by NNSFC under grant Nos. 10447008 and 10605042, SRF for ROCS, State Education Ministry, and CQMEC under grant No. KJ060502. Y.Z. Zhang's work was in part supported by NNSFC under Grant No. 90403032 and also by National Basic Research Program of China under Grant No. 2003CB716300. \section*{References}
\section{Introduction} A large number of systems in optics and atomic physics feature a dual-core (double-well) structure, based on an effective potential in the form of a symmetric set of two minima (wells). The wells are linked by the tunneling of optical waves or matter waves, trapped in them, across the barrier separating the wells. In cases when the trapped waves are subject to nonlinear self-interaction, a fundamental property of the symmetric double-well systems, which originates from the competition between the linear tunnel coupling between the wells and the nonlinearity acting inside of them, is the \textit{spontaneous symmetry breaking} (SSB). In particular, the SSB of solitons (solitary waves trapped in the double-well potential) was first studied in the model of dual-core optical fibers \cite{Skinner,Progress}. A similar analysis was carried out for Bose-Einstein condensates (BECs) loaded into double-well potentials \cite{mean-field}-\cite{mean-field5}. In the experiment, the SSB has been observed in the BEC with repulsive interactions between atoms \cite{Heidelberg}, and in an optical setting based on photorefractive crystals \cite{ZChen}. It has been demonstrated that, in the systems with attractive or repulsive intrinsic nonlinearities, stationary asymmetric modes are generated by \textit{symmetry-breaking} \textit{bifurcations} from symmetric or antisymmetric trapped solitons, respectively. SSB effects for solitons were also studied in the model with competing nonlinearities of both these types, \textit{viz}., self-focusing cubic and self-defocusing quintic nonlinear terms, which gives rise to closed \textit{bifurcation loops} \cite{CQ,CQ3}. 
The analysis of the SSB in matter-wave (BEC) settings and their photonic counterparts was extended to two-dimensional (2D) solitons, supported by the interplay of the self-attractive \cite{Warsaw}-\cite{Arik3} or self-repulsive \cite{Arik}-\cite{Markus} nonlinearity and spatially periodic potentials acting in each core. The periodic potential is necessary to stabilize 2D solitons (in each core separately) against the collapse in the case of the self-attraction \cite{review}, or support solitons of the gap type in the case of the self-repulsion \cite{gap1}-\cite{gap3}. The analysis of the SSB in a 2D localized configuration of a different type, based on the set of four potential wells forming a square pattern (in a single core), was reported in Ref. \cite{square}. Photonic and matter waves can be trapped not only in the usual linear potentials, but also in nonlinear \textit{pseudopotentials} (this term is often used in solid-state physics \cite{Harrison}), which may be induced by a spatial modulation of the local nonlinearity coefficient. In BEC, the local nonlinearity may be controlled, through the Feshbach-resonance effect, by an external magnetic field \cite{Feshbach}. Accordingly, the spatial nonlinearity modulation may be induced by an inhomogeneous magnetic field \cite{SU}. In optics, similar settings can be designed as all-solid or liquid-filled microstructured fibers, using combinations of materials with matched refractive indices but different Kerr coefficients, as proposed in Ref. \cite{Barcelona-OL}. The studies of soliton dynamics in purely nonlinear (pseudo)potentials, as well as in their combinations with usual linear potentials, have recently grown into a vast research area, see the review article \cite{Barcelona-RMP}. Models with various patterns of the spatial nonlinearity modulation were studied in detail in 1D geometries \cite{NL}-\cite{Barcelona-vector}, \cite{Barcelona-RMP}. 
In 2D, the analysis is essentially more challenging, as, in this case, it is difficult to stabilize localized modes against the spatiotemporal \textit{collapse} (catastrophic self-focusing induced by the cubic nonlinearity), using only the local modulation of the nonlinearity, without the support provided by a linear potential \cite{HS}, \cite{Hung}, \cite{Barcelona-OL}, \cite{Barcelona-RMP}. Nevertheless, 2D nonlinear structures capable of supporting stable solitons have been found. Basic types of such structures are represented by a circle or annulus, filled with the self-focusing material, which is embedded into a linear or defocusing medium \cite{HS} (as well as a lattice of such circles \cite{Barcelona-OL}), and a single or double stripe made of the same material \cite{Hung}. In both cases, it was concluded that the nonlinearity-modulation profiles with sharp edges provide for much more efficient stabilization of 2D solitons than similar smooth profiles. On the other hand, no pattern of 2D modulations of the self-focusing cubic nonlinearity was found that would support stable vortex solitons. Coming back to 1D, it was found that the simplest pseudopotential capable of sustaining stable localized modes is built as a symmetric set of two delta-functions multiplying the cubic self-focusing nonlinearity \cite{we}. Explicit analytical solutions are available for all stationary modes trapped in this setting -- symmetric, antisymmetric, and asymmetric ones. The analytical solution allows one to study the corresponding SSB bifurcation, which generates asymmetric modes from the symmetric ones. On the other hand, the exact solution of the 1D model with two delta-functions is partly degenerate, which, in particular, makes the asymmetric modes completely unstable. The replacement of the ideal delta-functions by their regularized counterparts lifts the degeneracy, making the asymmetric modes partly or fully stable \cite{we}. 
The model with a pair of ideal or regularized delta-functions may be used as a paradigm for the study of the SSB of localized modes supported by symmetric double-well pseudopotential structures. The objective of the present work is to introduce a natural generalization of this setting in the 2D geometry, with the nonlinearity concentrated in two separated or overlapping identical circles with sharp edges, see Fig. \ref{fig1} and insets to Figs. \ref{fig9}(a) and \ref{fig13}(a) below (ideal $\delta $-functions are irrelevant in the 2D case, see Eqs. (\ref{Poisson}), (\ref{ln}) and the related text in the next section). This setting can be realized in BEC and nonlinear optics alike, by means of the same techniques which were outlined above, i.e., respectively, the use of the Feshbach resonance, controlled by a spatially nonuniform magnetic field, or a crystalline structure formed by a pair of intermingled materials with equal refractive indices but different Kerr coefficients. The paper is organized as follows. The model is introduced in detail in Section 2. Then, in Section 3 we develop the variational approximation (VA), with the objective to describe patterns with different symmetries supported by the symmetric pair of well-separated circles. Results of the numerical analysis are collected in Section 4. The results describe the transition from double-peak symmetric and antisymmetric modes to asymmetric ones, with the decrease of the separation between the circles, which implies a strengthening of the interaction between them. The symmetric solitons perform the symmetry-breaking transition via an intermediate mode in the form of weakly asymmetric breathers, while antisymmetric solitons directly jump into strongly asymmetric single-peak modes. For touching or partly overlapping circles, only single-peak solitons exist.
These may be either symmetric ones, centered at the midpoint of the bi-circle configuration, or solitons spontaneously leaping into either circle and then featuring shuttle motion in that trap. Various manifestations of the SSB in the present system are summarized in Section 5, which concludes the paper. \section{The model} \subsection{The equations} The setting considered in this work is shown in Fig. \ref{fig1}, where $a$ is the radius of the two identical circles, and $L$ is the separation between them, $R\equiv L+2a$ being the distance between the centers of the circles, which are located at points $\left\{ x_{0},y_{0}\right\} =\left\{ \pm \left( L/2+a\right) ,0\right\} $. Along with the configuration displayed in Fig. \ref{fig1}, we will also consider the one with $R<2a$ (i.e., $L<0$), which corresponds to partly overlapping circles [see top left insets to Figs. \ref{fig9}(a) and \ref{fig13}(a) below]. \begin{figure}[tbph] \centering\includegraphics[width=3in]{Fig1.eps} \caption{(Color online) The geometry of the bi-circle system.} \label{fig1} \end{figure} The scaled form of the underlying Gross-Pitaevskii/nonlinear-Schr\"{o}dinger (GP/NLS) equation with the self-focusing ($g_{1}>0$) nonlinearity confined to the two circles is \begin{equation} i\psi _{t}=-\left( \psi _{xx}+\psi _{yy}\right) -g\left( x,y\right) \left\vert \psi \right\vert ^{2}\psi , \label{psi} \end{equation} where the simplest form of the localization may be chosen as \begin{equation} g(x,y)=g_{1}\left[ \exp \left( -\frac{\left( x-R/2\right) ^{2}+y^{2}}{a^{2}}\right) +\exp \left( -\frac{\left( x+R/2\right) ^{2}+y^{2}}{a^{2}}\right) \right] . \label{g} \end{equation} In the case of the GP equation, $t$ is normalized time, while in optics, $t$ is actually the propagation distance in the bulk waveguide. In either case, $x$ and $y$ are the transverse coordinates. In fact, a different form of the 2D modulation function will be used for numerical calculations, see Eqs. (\ref{g1(x,y)})-(\ref{g(x,y)}) below.
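For reference, Eq. (\ref{psi}) can be advanced in time by a standard split-step Fourier (Strang) scheme, of the same type as the split-step method used for the direct simulations reported below. The following Python sketch is a minimal illustration only: the Gaussian modulation of Eq. (\ref{g}), the grid, and all parameter values are chosen arbitrarily here, and the helper names are hypothetical.

```python
import numpy as np

def g_gauss(X, Y, g1, a, R):
    # Two-circle Gaussian modulation of Eq. (g)
    return g1 * (np.exp(-(((X - R / 2) ** 2 + Y ** 2) / a ** 2))
                 + np.exp(-(((X + R / 2) ** 2 + Y ** 2) / a ** 2)))

def split_step(psi, g, dx, dt, nsteps):
    """Strang splitting for i psi_t = -(psi_xx + psi_yy) - g |psi|^2 psi."""
    n = psi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    lin = np.exp(-1j * (KX ** 2 + KY ** 2) * dt)  # exact linear propagator
    for _ in range(nsteps):
        psi = psi * np.exp(0.5j * dt * g * np.abs(psi) ** 2)  # half nonlinear step
        psi = np.fft.ifft2(lin * np.fft.fft2(psi))            # full linear step
        psi = psi * np.exp(0.5j * dt * g * np.abs(psi) ** 2)  # half nonlinear step
    return psi
```

Both substeps are norm-preserving phase rotations (the linear one acting in Fourier space), so conservation of $N=\int \int \left\vert \psi \right\vert ^{2}dxdy$ provides a basic sanity check of such an implementation.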
Stationary solutions to Eq. (\ref{psi}), with chemical potential $\mu $ of the BEC (or propagation constant $-\mu $ in the optical waveguide) are sought for as $\psi \left( x,y\right) =e^{-i\mu t}\phi \left( x,y\right) $, where real function $\phi \left( x,y\right) $ obeys equation \begin{equation} \mu \phi +\left( \phi _{xx}+\phi _{yy}\right) +g\left( x,y\right) \phi ^{3}=0. \label{phi} \end{equation} Equation (\ref{phi}) will be solved below by means of a numerical method, and the VA will be used too. \subsection{The Lagrangian structure and variational ansatz} Equation (\ref{phi}) can be derived from the Lagrangian, $L=\int \int \mathcal{L}dxdy$, with density \begin{equation} 2\mathcal{L}=-\mu \phi ^{2}+\left( \phi _{x}\right) ^{2}+\left( \phi _{y}\right) ^{2}-(1/2)g\left( x,y\right) \phi ^{4}. \label{density} \end{equation} Obviously, localized solutions have $\mu <0$. The energy corresponding to Eq. (\ref{psi}) is \begin{equation} E=\int \int \left[ \left( \left\vert \psi _{x}\right\vert ^{2}+\left\vert \psi _{y}\right\vert ^{2}\right) -\frac{1}{2}g\left( x,y\right) \left\vert \psi \left( x,y\right) \right\vert ^{4}\right] dxdy. \label{E} \end{equation} The application of the VA is possible in the case when $a$ in Eq. (\ref{g}) is small, hence $g\left( x,y\right) $ may be approximated (only for the purpose of the development of the VA) by a combination of delta-functions: \begin{equation} g\left( x,y\right) =G\left[ \delta \left( x-R/2\right) \delta \left( y\right) +\delta \left( x+R/2\right) \delta (y)\right] ,~G\equiv \pi g_{1}a^{2}, \label{delta} \end{equation} where coefficient $G$ is determined by the integral-balance condition, $\int \int g\left( x,y\right) dxdy\equiv G\int \int \left[ \delta \left( x-R/2\right) \delta \left( y\right) +\delta \left( x+R/2\right) \delta (y)\right] dxdy$.
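The balance condition behind Eq. (\ref{delta}) is easy to check directly: each Gaussian in Eq. (\ref{g}) integrates to $\pi g_{1}a^{2}$, so the total integral of $g(x,y)$ equals $2G$ with $G=\pi g_{1}a^{2}$. A short numerical sketch (the parameter values below are arbitrary, chosen only to test the identity):

```python
import numpy as np

# Illustrative parameter values (not taken from the paper's figures)
g1, a, R = 1.0, 1.5, 3 * np.pi
x = np.linspace(-30.0, 30.0, 1201)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
g = g1 * (np.exp(-(((X - R / 2) ** 2 + Y ** 2) / a ** 2))
          + np.exp(-(((X + R / 2) ** 2 + Y ** 2) / a ** 2)))
total = g.sum() * dx * dx     # numerical int int g(x,y) dx dy
G = np.pi * g1 * a ** 2       # Eq. (delta): G = pi g1 a^2 per circle
```

The quadrature reproduces $2G$ to within the (negligible) discretization error of the rectangle rule for rapidly decaying smooth integrands.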
The VA developed below is based on the use of the following \textit{ansatz}: \begin{equation} \phi \left( x,y\right) =\exp \left( -\frac{2y^{2}}{R^{2}l^{2}}\right) \left\{ A\exp \left[ -\frac{2\left( x-R/2\right) ^{2}}{R^{2}l^{2}}\right] +B\exp \left[ -\frac{2\left( x+R/2\right) ^{2}}{R^{2}l^{2}}\right] \right\} , \label{ansatz} \end{equation} where $A,B$ and $l$ are variational parameters, the latter one being the width of the ansatz measured in units of $R/2$. The ansatz is isotropic in the vicinity of each attractive center (the variational equations for an anisotropic ansatz turn out to be very cumbersome). It is assumed that the asymmetry between the wave functions in vicinities of the two attractive centers may be accounted for by a difference in the amplitudes, $A\neq B$, while width $l$ is assumed to be the same near both centers. Comparing expression (\ref{ansatz}) with Eq. (\ref{g}) demonstrates that approximation (\ref{delta}), based on the delta-functions, may be valid for sufficiently small $a$ in Eq. (\ref{g})---namely, if the VA eventually yields \begin{equation} l\gg 2a/R. \label{a} \end{equation} On the other hand, it is easy to check that Eq. (\ref{phi}) with the nonlinearity-modulation function represented by the single 2D delta-function, i.e., $g\left( x,y\right) =G\delta (x)\delta (y)$, does not give rise to any soliton solution (unlike its 1D counterpart \cite{we}).
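The Gaussian integrals behind the norm of ansatz (\ref{ansatz}), quoted as Eq. (\ref{N}) in the next section, $N=\pi \left( Rl/2\right) ^{2}\left( A^{2}+B^{2}+2e^{-1/l^{2}}AB\right) $, are elementary and can be verified numerically. The following sketch (with arbitrarily chosen test values of $A$, $B$, $l$, $R$) compares a direct 2D quadrature of $\phi ^{2}$ with the closed form:

```python
import numpy as np

A, B, l, R = 1.0, 0.6, 0.7, 3.0   # arbitrary test values, for illustration only
x = np.linspace(-25.0, 25.0, 1251)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
# Ansatz (ansatz): common Gaussian in y times two shifted Gaussians in x
phi = np.exp(-2 * Y ** 2 / (R * l) ** 2) * (
    A * np.exp(-2 * (X - R / 2) ** 2 / (R * l) ** 2)
    + B * np.exp(-2 * (X + R / 2) ** 2 / (R * l) ** 2))
N_num = (phi ** 2).sum() * dx * dx   # direct quadrature of phi^2
# Closed form: N = pi (R l / 2)^2 (A^2 + B^2 + 2 exp(-1/l^2) A B)
N_va = np.pi * (R * l / 2) ** 2 * (A ** 2 + B ** 2
                                   + 2 * np.exp(-1 / l ** 2) * A * B)
```

The overlap factor $e^{-1/l^{2}}$ in the cross term comes from $\left( x-R/2\right) ^{2}+\left( x+R/2\right) ^{2}=2x^{2}+R^{2}/2$.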
Indeed, taking the corresponding equation \begin{equation} \mu \phi +\left( \phi _{xx}+\phi _{yy}\right) +G\delta (x)\delta (y)\phi ^{3}=0, \label{phi-delta} \end{equation} and setting, at first, $\mu =0$, one can use the commonly known fact that the Poisson equation in the form of \begin{equation} \phi _{xx}+\phi _{yy}=C\delta (x)\delta (y), \label{Poisson} \end{equation} with constant $C$, has the fundamental solution, \begin{equation} \phi (r)=\left( C/2\pi \right) \ln \left( r/r_{0}\right) , \label{ln} \end{equation} where $r\equiv \sqrt{x^{2}+y^{2}}$, and $r_{0}$ is an arbitrary positive constant. Temporarily introducing a small cutoff radius, $\rho $, formula (\ref{ln}) suggests that a solution to Eq. (\ref{phi-delta}) can be sought for as $\phi (r)=\phi _{0}\ln \left( r/r_{0}\right) $, which corresponds to $C=-G\left( \phi \left( r=\rho \right) \right) ^{3}=G\phi _{0}^{3}\left( \ln \left( r_{0}/\rho \right) \right) ^{3}\equiv C_{\rho }$ in Eq. (\ref{Poisson}). Then, the self-consistency condition ensuing from Eq. (\ref{ln}), $\phi _{0}=C_{\rho }/2\pi ,$ yields $\phi _{0}^{2}=\left( 2\pi /G\right) \left[ \ln \left( r_{0}/\rho \right) \right] ^{-3}.$ This relation shows that the limit of $\rho \rightarrow 0$ corresponds to $\phi _{0}\rightarrow 0$, which means that solely the trivial solution, $\phi \equiv 0$, is possible if the cutoff is removed. A similar conclusion [the nonexistence of nontrivial solutions to Eq. (\ref{phi-delta})] can be obtained for $\mu <0$, using an appropriate Hankel function, instead of solution (\ref{ln}). Thus, the use of approximation (\ref{delta}) in the framework of the VA is meaningful for the circles of a small but finite radius $a$, which obeys condition (\ref{a}).
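The self-consistency relation $\phi _{0}^{2}=\left( 2\pi /G\right) \left[ \ln \left( r_{0}/\rho \right) \right] ^{-3}$ indeed forces the amplitude to vanish with the cutoff; a two-line numerical check (with illustrative values of $G$ and $r_{0}$):

```python
import numpy as np

G, r0 = 1.0, 1.0                          # illustrative values only
rho = np.array([1e-2, 1e-4, 1e-8, 1e-16])  # shrinking cutoff radius
phi0_sq = (2 * np.pi / G) * np.log(r0 / rho) ** (-3.0)
# phi0^2 decreases monotonically and vanishes as rho -> 0,
# leaving only the trivial solution when the cutoff is removed
```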
\section{The variational analysis} The substitution of ansatz (\ref{ansatz}) into density (\ref{density}) and calculation of the integrals yields the following expression for the effective Lagrangian: \begin{gather} L_{\mathrm{eff}}=-\left( \pi /8\right) R^{2}\mu l^{2}\left( A^{2}+B^{2}+2e^{-1/l^{2}}AB\right) \notag \\ +\left( \pi /2\right) \left[ \left( A^{2}+B^{2}\right) +2\left( 1-l^{-2}\right) e^{-1/l^{2}}AB\right] \notag \\ -G\left( R/4\right) ^{2}\left[ \left( 1+e^{-8/l^{2}}\right) \left( A^{4}+B^{4}\right) \right. \notag \\ \left. +4e^{-2/l^{2}}\left( 1+e^{-4/l^{2}}\right) AB\left( A^{2}+B^{2}\right) +12e^{-4/l^{2}}A^{2}B^{2}\right] , \label{simple} \end{gather} the norm of the ansatz being \begin{equation} N\equiv \int \int \phi ^{2}(x,y)dxdy=\pi \left( Rl/2\right) ^{2}\left( A^{2}+B^{2}+2e^{-1/l^{2}}AB\right) . \label{N} \end{equation} In particular, for the symmetric and antisymmetric solitons, with $A=\pm B$, Eqs. (\ref{simple}) and (\ref{N}) give \begin{gather} L_{\mathrm{eff}}^{\left( \pm \right) }=\pi A^{2}\left[ -\left( R/2\right) ^{2}\mu l^{2}\left( 1\pm e^{-1/l^{2}}\right) +1\pm \left( 1-l^{-2}\right) e^{-1/l^{2}}\right] \notag \\ -\left( G/8\right) R^{2}A^{4}\left[ 1+e^{-8/l^{2}}\pm 4e^{-2/l^{2}}\left( 1+e^{-4/l^{2}}\right) +6e^{-4/l^{2}}\right] , \label{+-} \end{gather} \begin{equation} N=\pi l^{2}\left( R^{2}/2\right) \left( 1+e^{-1/l^{2}}\right) A^{2}~. \label{N-symm} \end{equation} Even after the simplifications, the Euler-Lagrange equations, $\partial L_{\mathrm{eff}}^{\left( \pm \right) }/\partial \left( A^{2}\right) =\partial L_{\mathrm{eff}}^{\left( \pm \right) }/\partial \left( l^{2}\right) =0$, following from Eq. (\ref{+-}) remain cumbersome. A tractable approximation can be developed if $l^{2}$ is small enough to neglect exponentially small factors $e^{-4/l^{2}}$ and $e^{-8/l^{2}}$ in comparison with $1$ in expression (\ref{+-}) (in fact, we will then have $l^{2}<1/2$, see below).
In this approximation, the variational equations for the symmetric solitons are \begin{equation} \mu \left[ 1+\left( l^{-2}+1\right) e^{-1/l^{2}}\right] =-\left( 2/R\right) ^{2}l^{-4}\left( l^{-2}-2\right) e^{-1/l^{2}}, \label{mu-symm} \end{equation} \begin{equation} A^{2}=\frac{\pi \left[ 1-\left( l^{-2}-1\right) e^{-1/l^{2}}-(R/2)^{2}\mu l^{2}\left( 1+e^{-1/l^{2}}\right) \right] }{G\left( R/2\right) ^{2}\left( 1+4e^{-2/l^{2}}\right) }. \label{A-symm} \end{equation} It follows from Eq. (\ref{mu-symm}) that the localization condition, $\mu <0$, entails $l^{2}<1/2$, as mentioned above. Then, both terms in the square brackets in Eq. (\ref{A-symm}) are positive, hence $A^{2}$ is positive too, as it should be. An explicit analysis is also possible for the limit case of very broad solitons, $l^{2}\gg 1$. In this case, the expansion of Lagrangian (\ref{simple}) in powers of $l^{-2}$ yields \begin{eqnarray} L_{\mathrm{eff}} &\approx &-\left( \pi /8\right) R^{2}\mu l^{2}\left( A+B\right) ^{2} \notag \\ &&+\left( \pi /2\right) \left( A+B\right) ^{2}-G\left( R^{2}/8\right) \left( A+B\right) ^{4} \notag \\ &&+l^{-2}\left[ -2\pi AB+G\left( R^{2}/2\right) \left( A+B\right) ^{4}\right] . \label{simplest} \end{eqnarray} A straightforward consideration of the Euler-Lagrange equations following from Eq. (\ref{simplest}) demonstrates that they give rise to no antisymmetric solutions, while the symmetric solution is found in the following form, valid at the lowest order in $l^{-2}$: \begin{eqnarray} A^{2} &=&B^{2}=\pi /\left( 2GR^{2}\right) ,~N=\left[ \pi /\left( 2G\right) \right] ^{2}l^{2}, \label{const} \\ N &=&\left( \pi ^{2}/GR\right) \left( -\mu \right) ^{-1/2}. \label{asympt} \end{eqnarray} In this approximation, Eq. (\ref{const}) demonstrates that the amplitude of the symmetric soliton is constant, while its norm diverges $\sim l^{2}$, and the entire soliton family is described by dependence $N(\mu )$ given by Eq. (\ref{asympt}).
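Equations (\ref{mu-symm}), (\ref{A-symm}), and (\ref{N-symm}) define the symmetric VA family parametrically, with $l^{2}$ as the internal parameter, while the broad-soliton branch is given by Eq. (\ref{asympt}). The sketch below (illustrative $R$ and $G$; the helper names are hypothetical) verifies the sign structure stated above: $\mu <0$ and $A^{2}>0$ for $l^{2}<1/2$, and $\mu >0$ (no localization) for $l^{2}>1/2$. It also checks the elementary fact that dependence (\ref{asympt}) has $dN/d\mu >0$ everywhere, a property relevant to the Vakhitov-Kolokolov stability criterion discussed in Section 4.

```python
import numpy as np

def va_symmetric(l2, R, G):
    """Symmetric-soliton VA family, Eqs. (mu-symm), (A-symm), (N-symm);
    l2 = l**2 is the free parameter of the family."""
    e = np.exp(-1.0 / l2)
    # Eq. (mu-symm), solved for mu
    mu = -(2.0 / R) ** 2 * l2 ** -2 * (1.0 / l2 - 2.0) * e / (
        1.0 + (1.0 / l2 + 1.0) * e)
    # Eq. (A-symm)
    A2 = np.pi * (1.0 - (1.0 / l2 - 1.0) * e
                  - (R / 2.0) ** 2 * mu * l2 * (1.0 + e)) / (
        G * (R / 2.0) ** 2 * (1.0 + 4.0 * np.exp(-2.0 / l2)))
    # Eq. (N-symm)
    N = np.pi * l2 * (R ** 2 / 2.0) * (1.0 + e) * A2
    return mu, A2, N

def n_broad(mu, R, G):
    """Broad-soliton branch, Eq. (asympt)."""
    return (np.pi ** 2 / (G * R)) * (-mu) ** -0.5
```

Scanning $l^{2}$ and plotting the resulting $(\mu ,N)$ pairs is exactly the parametric construction used for the VA curve in Fig. \ref{fig2}(a).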
\section{Numerical results} \subsection{The setting} For the numerical analysis, we return from approximation (\ref{delta}) to the configuration displayed in Fig. \ref{fig1}, and define the nonlinearity as follows [cf. Eq. (\ref{g})]: \begin{equation} g_{1}\left( x,y\right) =\exp \left[ -\left( r_{L}/a\right) ^{k}\right] +\exp \left[ -\left( r_{R}/a\right) ^{k}\right] , \label{g1(x,y)} \end{equation} \begin{equation} g_{2}\left( x,y\right) =\left\vert \exp \left[ -\left( r_{L}/a\right) ^{k}\right] -\exp \left[ -\left( r_{R}/a\right) ^{k}\right] \right\vert , \label{g2(x,y)} \end{equation} \begin{equation} g(x,y)=(1/2)\left[ g_{1}\left( x,y\right) +g_{2}\left( x,y\right) \right] , \label{g(x,y)} \end{equation} where $r_{L}\equiv \sqrt{y^{2}+\left( x+R/2\right) ^{2}}$ and $r_{R}\equiv \sqrt{y^{2}+\left( x-R/2\right) ^{2}}$. As mentioned above, the stabilization of 2D solitons supported by the single nonlinear circle is facilitated by making edges of the circles sharp \cite{HS,Barcelona-OL,Barcelona-RMP}. To implement this feature in the present setting, $k=100$ was used in Eqs. (\ref{g1(x,y)}) and (\ref{g2(x,y)}) [the full nonlinearity (\ref{g(x,y)}) is then almost identical to $g_{1}\left( x,y\right) $]. Localized stationary modes were found as solutions to Eq. (\ref{phi}) by means of the so-called Newton-conjugate-gradient method \cite{method} (the ordinary version of Newton's method did not lead to convergent solutions in the present setting). After finding the stationary solutions, their stability was tested by means of direct simulations of Eq. (\ref{psi}), performed by means of the usual split-step method. In addition to presenting the numerical results, we also include some predictions produced by the VA [chiefly, in Fig. \ref{fig2}(a)]. To this end, norm (\ref{N-symm}) was plotted vs. $\mu $, taking $A^{2}$ as per Eq. (\ref{A-symm}) and eliminating $l$, in favor of $\mu $, with the help of Eq.
(\ref{mu-symm}). \subsection{Separated circles ($L>0$)} \subsubsection{Symmetric solitons} We start by presenting results for symmetric modes, with $\phi \left( x,y\right) =\phi \left( -x,y\right) $, in the case of $L>0$, i.e., non-overlapping circles, see Fig. \ref{fig1}. Figures \ref{fig2}(a) and (b) display the overall characteristics of families of the symmetric modes, in the form of dependences $N\left( \mu \right) $ and $E(N)$, i.e., the norm versus the chemical potential and the energy versus the norm [see Eqs. (\ref{N}) and (\ref{E})], for different values of radius $a$ of the circles and a fixed separation between them, $L=3\pi $ (this value was selected to display generic results). The curve marked ``$a=0$" in Fig. \ref{fig2}(a) displays the prediction of the VA for the symmetric solitons, obtained from Eqs. (\ref{N-symm}), (\ref{A-symm}), and (\ref{mu-symm}), as outlined above [recall that the VA was developed for the nonlinearity modulation taken in the form of Eq. (\ref{delta}), which formally corresponds to $a=0$]. The shape of the VA-predicted curve is reasonably close to its numerical counterparts generated for finite $a$. The other tractable approximation, which amounts to Eq. (\ref{asympt}) for $\mu \rightarrow -0$ and $N\rightarrow \infty $, correctly predicts the asymptotic form of the numerically found curves in this limit. The opposite limit of $\mu \rightarrow -\infty $ corresponds to very narrow peaks pinned to the center of each circle. When the radius of these peaks is much smaller than the radius of the circles ($a$), they are nearly identical to the Townes solitons \cite{Berge}, whose norm (i.e., the collapse threshold in the uniform medium with the self-focusing cubic nonlinearity) is \begin{equation} N_{\mathrm{thr}}\approx 11.69. \label{Townes} \end{equation} This fact explains the limit value $N(\mu \rightarrow -\infty )\approx 23.5\simeq 2N_{\mathrm{thr}}$ observed in Fig. \ref{fig2}(a) for $a=1,~1.5,~2$, and $3$.
The form of the VA adopted above is irrelevant in this case, as approximation (\ref{delta}) implies that the radius of the peak is much larger than $a$. However, the isolated peak in the medium with the uniform cubic nonlinearity (i.e., the unstable Townes soliton) may be approximated by another version of the VA, which yields a reasonable approximation for the collapse threshold, $N_{\mathrm{thr}}^{\left( \mathrm{VA}\right) }=4\pi $ \cite{Anderson}. The plots displayed in Figs. \ref{fig2}(a) and \ref{fig2}(b) identify the stability of the families of symmetric solitons. A well-known necessary, but not sufficient, condition for the stability of solitons supported by the self-focusing nonlinearity amounts to the Vakhitov-Kolokolov (VK) criterion, $dN/d\mu <0$ \cite{Berge}. It is seen in Fig. \ref{fig2}(a) that this criterion is indeed necessary but not sufficient, as the portion of the $N(\mu )$ curve with the negative slope is stable only for $a=1$. It is worth noting that, as seen in Fig. \ref{fig2}, the stable portion of the latter curve corresponds to lower values of the energy than the unstable part of the same curve. \begin{figure}[tbph] \centering\subfigure[]{\includegraphics[width=3.5in]{Fig2a.eps}} \subfigure[]{\includegraphics[width=3.5in]{Fig2b.eps}} \caption{(Color online) Norm $N$ versus $\protect\mu $ (a) and energy $E$ versus $N$ (b) for symmetric solitons at different values of radius $a$ of the circles, with a fixed separation between them, $L=3\protect\pi $. The stability of the solution families is identified as follows: the magenta color pertains to the instability through decay; black -- the instability via a jump into a strongly asymmetric mode; red -- stability (it is a part of the curve for $a=1$ with the negative slope); blue -- spontaneous transformation of the stationary solitons into robust breathers.} \label{fig2} \end{figure} A typical example of a stable symmetric (\textit{double-peak}) soliton is displayed in Fig.
\ref{fig3} (it pertains to the separation between the circles $L\equiv R-2a=6\pi -3\approx 15.85$, which is different from $L=3\pi $ fixed in Fig. \ref{fig2}, but there is no essential difference in the shape of the stable solitons in these cases). The evolution of weakly unstable symmetric solitons leads to the formation of breathers, as shown in Fig. \ref{fig4}. In this case, the soliton is spontaneously transformed into a robust localized object featuring irregular small-amplitude intrinsic oscillations. It is worth noting that the transition from the stationary solitons to breathers leads to a relatively weak but tangible SSB effect, as seen in Fig. \ref{fig4}(b). Numerical data demonstrate that the breather keeps practically the entire initial norm, i.e., radiation losses are negligible in the course of the rearrangement of the symmetric soliton into the breather. \begin{figure}[tbph] \centering\includegraphics[width=4in]{Fig3.eps} \caption{(Color online) An example of a stable symmetric soliton, with norm $N=22$ ($\protect\mu =-0.23045$), $R=6\protect\pi $, and $a=1.5$. The evolution of the soliton is shown in the $x$-cross section drawn through $y=0$.} \label{fig3} \end{figure} \begin{figure}[tbp] \centering\subfigure[]{\includegraphics[width=4in]{Fig4a.eps}} \subfigure[]{\includegraphics[width=4in]{Fig4b.eps}} \caption{(Color online) (a) An example of the transformation of a weakly unstable symmetric soliton into a robust breather, with $N=22$, $R=4.\protect\pi $ and $a=1.5$. (b) The evolution of the largest values of the local density of the left (``L") and right (``R") peaks, which features a weak spontaneous breaking of the symmetry between the peaks.} \label{fig4} \end{figure} The instability mode designated as the decay into radiation in Fig. \ref{fig2} means that the soliton completely disintegrates (not shown here in detail).
On the other hand, the evolution outcome marked as the instability via a jump into a strongly asymmetric mode implies that the symmetric double-peak soliton suffers strong SSB, which leads to a sudden transition into a robust \textit{single-peak} mode nested in either circle, as shown in Fig. \ref{fig5}. Note that, in the case displayed in Fig. \ref{fig2}, the initial norm ($N=22$) is much higher than the critical value (\ref{Townes}). The numerical data demonstrate that the eventual value of the norm of the emerging single-peak mode is slightly smaller than the threshold value (\ref{Townes}), the surplus norm, $\Delta N\simeq 10$, being shed off in the form of radiation. \begin{figure}[tbp] \centering\includegraphics[width=4in]{Fig5.eps} \caption{(Color online) The spontaneous transformation of an unstable symmetric double-peak soliton into a single-peak one, at $N=22$, $R=4\protect\pi $ and $a=1.2$.} \label{fig5} \end{figure} The results of the numerical analysis of the stability of the symmetric solitons are summarized in the diagram in the plane of $\left( a,R/\pi \right) $, which is displayed in Fig. \ref{fig6}, for a fixed total norm, $N=22$. An obvious feature is that radius $a$ of the circles supporting stable symmetric solitons must not be too small (roughly speaking, all the solitons are unstable at $a<1$). In fact, stable symmetric solitons (as well as antisymmetric ones, dealt with in the next subsection, see Fig. \ref{fig8} below) may be considered as pairs of virtually non-interacting stable fundamental solitons supported by isolated nonlinear circles -- essentially the same fundamental modes which were reported in Refs. \cite{HS} and \cite{Barcelona-OL}. On the other hand, the further increase of $a$ and/or decrease of $R$ lead to the destabilization of the symmetric soliton -- first, through its conversion into the breather (see Fig.
\ref{fig4}), which is followed by the sudden transformation of the breather into a strongly asymmetric single-peak soliton (see Fig. \ref{fig5}). These transitions can be readily understood: the increase of $a$ and/or decrease of $R$ lead to the enhancement of the interaction between the two circles, which causes the destabilization of the double-peak soliton. As concerns the stability border of the stable symmetric solitons, it has been checked that it is precisely determined by the VK criterion, i.e., it coincides with the locus of points where $dN/d\mu =0$ [recall, however, that these points do not always determine a stability border, as soliton families which include portions with $dN/d\mu <0$ may be unstable, see Fig. \ref{fig2}(a)]. \begin{figure}[tbph] \centering\includegraphics[width=4in]{Fig6.eps} \caption{(Color online) The stability diagram for various localized modes developing from stationary symmetric solitons at a fixed value of the norm, $N=22$, in the plane of the radius of the circles ($a$) and normalized separation between their centers ($R/\protect\pi $). ``Unstable solitons" are those which spontaneously transform into the single-peak modes, as shown in Fig. \protect\ref{fig5}. The dashed line represents $L=0$, i.e., $R=2a$. Along this line, the two circles are tangent to each other, and below it the circles partly overlap. As shown in the next section, only single-peak solitons may be stable at $L\leq 0$.} \label{fig6} \end{figure} \subsubsection{Antisymmetric solitons} Figures \ref{fig7} and \ref{fig8} are counterparts of the above figures \ref{fig2} and \ref{fig6} for antisymmetric solitons, with $\phi \left( x,y\right) =-\phi \left( -x,y\right) $. Predictions of the VA for antisymmetric solitons are not included in Fig. \ref{fig7}(a), as, in contrast to the symmetric solitons, the agreement between the VA and the numerical findings is poor in this case. The diagram in Fig.
\ref{fig7} includes a region of stable antisymmetric solitons, as well as an area (``unstable solitons") where the unstable antisymmetric soliton spontaneously transforms itself into a single-peak mode trapped in either circle, shedding off about half of the initial norm. In terms of the evolution of the respective density profile, $\left\vert \psi \left( x,y,t\right) \right\vert ^{2}$, the latter outcome of the evolution is very similar to that shown in Fig. \ref{fig5}. \begin{figure}[tbp] \centering\subfigure[]{\includegraphics[width=3.5in]{Fig7a.eps}} \subfigure[]{\includegraphics[width=3.5in]{Fig7b.eps}} \caption{(Color online) The same as in Fig. \protect\ref{fig2} (for the fixed separation between the circles, $L=3\protect\pi $), but for families of antisymmetric stationary solitons.} \label{fig7} \end{figure} \begin{figure}[tbph] \centering\includegraphics[width=4in]{Fig8.eps} \caption{(Color online) The same as in Fig. \protect\ref{fig6} (with the fixed norm $N=22$), but for antisymmetric solitons and modes generated by the development of their instability.} \label{fig8} \end{figure} Note that only parts of the solution branches with the negative slope, $dN/d\mu <0$, which meet the VK criterion, are stable in Fig. \ref{fig7}(a) (see the curves pertaining to $a=1,1.5,~2$, and $3$). The stability and instability intervals for these four values of $a$ are reported in Table 1. Further, similar to the situation for the symmetric solitons displayed in Fig. \ref{fig2}(b), the $E(N)$ curves for the antisymmetric solutions displayed in Fig. \ref{fig7}(b) confirm the natural expectation that stable portions of the solution families have lower energy than their unstable counterparts, for $a=1$ and $1.5$. \begin{table}[th] \caption{Stability intervals, in terms of the total norm ($N$) and chemical potential ($\protect\mu $), for families of stationary antisymmetric solitons presented in Fig. \protect\ref{fig7}, for fixed $L=3\protect\pi $.
} \label{table:stability}\centering \addtolength{\tabcolsep}{5pt} \begin{tabular}{ccc} \hline\hline $a$ & $N$ & $\mu$ \\[0.5ex] \hline 1.0 & $21.8347 < N < 22.4702$ & $-1.5 <\mu< -0.68$ \\ 1.5 & $21.9289 < N < 23.0877$ & $-1.2 <\mu< -0.42$ \\ 2.0 & $22.1718 < N < 23.2130$ & $-0.8 <\mu< -0.3$ \\ 3.0 & $22.9914 < N < 23.3356$ & $-0.5 <\mu< -0.278$ \\[1ex] \hline \end{tabular} \end{table} \subsection{Overlapping and touching circles ($L\leq 0$)} As shown in Figs. \ref{fig6} and \ref{fig8}, stable double-peak solitons, symmetric or antisymmetric ones, cannot be supported by the pair of circles with a small separation between them. Further numerical analysis demonstrates that such solitons cannot be found either for touching ($L=0$) or overlapping ($L<0$) circles. Instead, in this case it is possible to find symmetric \emph{single-peak} solitons, centered at the midpoint of the configuration. Strongly asymmetric solitons, trapped near the center of either circle, while the other one is left almost empty, exist in this case too, and they may be stable. These modes are considered below. \subsubsection{Touching circles ($L=0$)} In the case of $L=0$, the two circles touch each other at a single point, featuring a ``figure of eight" [see insets in Fig. \ref{fig9}(a)]. As said above, in this case symmetric stationary modes are represented by single-peak solitons centered at the touch point, see examples of the soliton profiles in Fig. \ref{fig9}(a). These solitons are unstable, spontaneously jumping into one of the circles, as seen in Fig. \ref{fig9}(b). This instability can be easily understood, as the symmetric soliton is actually located at the position of an unstable equilibrium between two attractive pseudopotential wells corresponding to the circles. After the jump, the single-peak soliton keeps hopping in the circle, periodically hitting its border and bouncing back.
\begin{figure}[tbp] \centering\subfigure[]{\includegraphics[width=3.5in]{Fig9a.eps}} \subfigure[]{\includegraphics[width=3.5in]{Fig9b.eps}} \caption{(Color online) (a) Profiles of unstable symmetric single-peak solitons centered around the touch point of the set of two circles with zero separation, $L=0$ (see the insets). The top and bottom panels pertain to different radii of the circles, $a=2$ and $a=4$. In both cases, the norm of the soliton is $N=15.9$. (b) The ensuing hopping motion of the single-peak breather inside one circle, shown by means of the density contour plots in the $x$-cross section.} \label{fig9} \end{figure} Strongly asymmetric single-peak solitons trapped near the center of either circle can be found too at $L=0$. An example of such a \emph{stable} nearly isotropic soliton is shown in Fig. \ref{fig10}. The numerical data demonstrate that the ratio of local densities at the center of the circle, and at the midpoint where the two circles touch each other, is $\left[ \phi \left( 1.5,0\right) /\phi \left( 0,0\right) \right] ^{2}\simeq 25$ for this soliton. \begin{figure}[tbp] \centering\includegraphics[width=2.5in]{Fig10.eps} \caption{(Color online) An example of the \emph{stable} single-peak soliton, trapped in the right circle of the ``figure-of-eight" structure (with $L=0$), is shown by means of contour plots of the density, $\left( \protect\phi \left( x,y\right) \right) ^{2}$, for $\protect\mu =-1$, $N=11.4798$, and $a=1.5$.} \label{fig10} \end{figure} Figure \ref{fig11} displays $N(\mu )$ and $E(N)$ dependences for families of such single-peak asymmetric modes, obtained at different values of radius $a$. It is seen that stable subfamilies are found for $1\leq a\leq 3$. The stability of these solution families \emph{exactly} obeys the VK criterion, with unstable modes featuring decay into radiation (not shown here).
Extremely close to the critical value (\ref{Townes}), the stationary single-peak solitons develop a weak oscillatory instability and turn into breathers featuring intrinsic regular vibrations with a small amplitude (not shown here in detail). For instance, at $a=1.5$, the robust breathers are observed in an interval of $11.668<N<11.690$. \begin{figure}[tbph] \centering\subfigure[]{\includegraphics[width=3.5in]{Fig11a.eps}} \subfigure[]{\includegraphics[width=3.5in]{Fig11b.eps}} \caption{(Color online) The same as in Figs. \protect\ref{fig2}, but for the family of stationary single-peak solitons nested in either circle of the ``figure-of-eight" configuration (the one with $L=0$). As before, the red color designates stable solutions, while black branches represent modes which are unstable through the decay into radiation.} \label{fig11} \end{figure} \subsection{Partly overlapping circles ($L<0$)} Negative $L$ ($-2a<L<0$) implies that the two circles partly overlap [see the top left inset in Fig. \ref{fig13}(a)], merging into a single circle at $L=-2a$. Only single-peak solitons were found in this case. First, we continue the numerical analysis of the symmetric solitons centered at the midpoint, $x=y=0$. In a narrow interval corresponding to slightly overlapping circles, $-0.1a\leq L<0$, the symmetric solitons are unstable in essentially the same fashion as demonstrated above in Fig. \ref{fig9}, i.e., they jump from the unstable equilibrium position into either circle, performing the shuttle motion in it. As the degree of the overlap between the circles increases, in the interval of $-0.33a\leq L<-0.1a$ the unstable symmetric soliton simply decays into radiation (not shown here in detail). This outcome of the evolution may be interpreted as a consequence of the fact that the total norm of the soliton is too low in this case, not allowing it to maintain its integrity.
Further increasing the degree of overlap, in the interval of $-0.48a<L<-0.33a$ we observe that the gradual enhancement of the nonlinearity around the midpoint does not yet stabilize the symmetric soliton, but restores its integrity. In this case, the soliton again spontaneously leaps to the left or right circle, which is followed by its shuttle motion inside the circle. The symmetric solitons centered around the midpoint may be stable at $L\leq -0.48a$. For instance, at $L=-0.50a$, they are stable in the interval of $N_{\min }=11.2523<N<11.2665=N_{\max }$, which corresponds to $-0.55<\mu <-0.47$, cf. Table 1. Similarly, at $L=-0.55a$ the stability interval is $11.2375<N<11.2637$, or $-0.59<\mu <-0.47$. The stability interval for the symmetric solitons expands with the increase of $|L|$ towards $L=-2a$. The situation when the symmetric solitons may be stable is illustrated by Fig. \ref{fig13}. The right inset to panel \ref{fig13}(a) demonstrates that, at $\mu =-0.6$\ ($N=11.2592$), the $N(\mu )$\ curve splits into two branches, which then merge at $\mu =-1.65$\ ($N=11.5295$). The resulting bifurcation loop resembles those found in the 1D and 2D double-core models with the cubic-quintic nonlinearity \cite{CQ,CQ3}. The splitting means that two soliton solutions with different values of the norm can be found at a given $\mu $ (for instance, the pair of solitons corresponding to points ``c" and ``d" in Fig. \ref{fig13}). In the splitting region, the top branch in panel \ref{fig13}(a) represents the symmetric soliton placed at the midpoint, while the lower branch represents a soliton shifted to the left or right. In fact, the splitting and recombination of the two solution branches are explicit examples of the direct and inverse symmetry-breaking bifurcations.
\begin{figure}[tbp] \centering\subfigure[]{\includegraphics[width=3.5in]{Fig13a.eps}} \subfigure[]{\includegraphics[width=3.5in]{Fig13b.eps}} \caption{(Color online) (a) The $N(\protect\mu )$ curves for stationary single-peak solitons in the configuration with $L=-0.5720a$ and $a=1.1$. In part (b), subplots labeled ``a", ``b", ``c" and ``d" display cross sections along $x$ (the magenta dashed lines) and along $y$ (the blue solid lines) of the single-peak solitons corresponding to points ``a", ``b", ``c" and ``d" in panel (a), i.e., respectively, for $\protect\mu =-0.56$, $N=11.2544$; $\protect\mu =-0.6$, $N=11.2592$; $\protect\mu =-0.8$, $N=11.2606$; and $\protect\mu =-0.8$, $N=11.3421$.} \label{fig13} \end{figure} Note that the solitons of types ``a" and ``d", shown in Fig. \ref{fig13}(b), are centered exactly at the midpoint, while the solitons of types ``b" and ``c" show a small shift off the midpoint. Of course, together with the shifted soliton, it is possible to find its mirror-image counterpart, shifted in the opposite direction. The simulations demonstrate that broad symmetric solitons, corresponding to large values of $N$ [cf. Eq. (\ref{asympt})], suffer decay at $\mu >-0.47$\ (i.e., at $N>11.2361$), as indicated by the dashed black line in Fig. \ref{fig13}(a). Stable symmetric solitons were found in the interval of $-0.6<\mu <-0.47$\ (in terms of the norm, it is $11.2361<N<11.2654$), which corresponds to the short solid red segment in Fig. \ref{fig13}(a). At $\mu <-0.6$, the solitons marked by ``b" and ``c", which belong to the family indicated by the dashed-dotted magenta curve, spontaneously transform themselves into breathers which feature regular small-amplitude oscillations. Further, solitons pertaining to the dotted blue curve in Fig. \ref{fig13}(a) (for instance, the soliton corresponding to point ``d" in Fig.
\ref{fig13}) undergo a spontaneous transformation into breathers which feature strong irregular intrinsic oscillations, but remain robust localized modes. The splitting of the $N(\mu )$\ curve occurs in a small region of the plane of $\left( a,L/a\right) $, as shown in Fig. \ref{fig17}. As concerns the (narrow) stability area of the symmetric solitons, it is outlined by Table 2, for the overlapping parameter fixed with respect to the radius of the circles, $L=-0.5720a$ (the same relation between $L$ and $a$ as in Fig. \ref{fig13}). At values of $N$ exceeding the largest value indicated in Table 2, the symmetric solitons spontaneously turn into robust single-peak breathers. \begin{figure}[tbph] \centering\includegraphics[width=4in]{Fig17.eps} \caption{(Color online) The region in the plane of $\left( a,L/a\right) $, at fixed $\protect\mu =-0.8$, where curves $N(\protect\mu )$ for the family of single-peak solitons, supported by the overlapping circles, feature the splitting [see Fig. \protect\ref{fig13}(a)].} \label{fig17} \end{figure} \begin{table}[th] \caption{Stability intervals for families of stationary symmetric solitons centered around the midpoint of the configuration with partly overlapping circles, $L=-0.5720a$, and several values of radius $a$ of the overlapping circles. } \label{table:stability2}\centering \addtolength{\tabcolsep}{5pt} \begin{tabular}{ccc} \hline\hline $a$ & $N$ & $\mu$ \\[0.5ex] \hline 1.1 & $11.2361 < N < 11.2620 $ & $-0.588 < \mu < -0.47$ \\ 1.5 & $11.2633 < N < 11.3449 $ & $-0.47 < \mu < -0.33$ \\ 1.8 & $11.2025 < N < 11.2611 $ & $-0.25 < \mu < -0.17$ \\ $a\geq 2$ & no stability intervals & \\[1ex] \hline \end{tabular} \end{table} Finally, for strongly overlapping circles, with $-2a\leqslant L<-0.8a$, the results, including the stability region for the symmetric solitons (only the symmetric ones exist in this case) are nearly the same as reported in Ref. \cite{HS} for the single circle, which corresponds to $L=-2a$.
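For completeness, the two-circle geometry used throughout this section is simple to set up numerically. The sketch below (Python/NumPy) assumes the circle centers lie at $x=\pm(a+L/2)$, which is consistent with the circles touching at $L=0$, with the right circle being centered at $(1.5,0)$ for $a=1.5$ as in Fig. \ref{fig10}, and with the merger into a single circle at $L=-2a$:

```python
import numpy as np

def modulation(x, y, a, L):
    """Indicator of the nonlinear region: two identical circles of radius a
    with edge-to-edge separation L. The centers at x = +-(a + L/2) are an
    assumption consistent with touching at L = 0 and merging at L = -2a."""
    xc = a + L / 2.0
    left = (x + xc) ** 2 + y ** 2 <= a ** 2
    right = (x - xc) ** 2 + y ** 2 <= a ** 2
    return np.where(left | right, 1.0, 0.0)

def norm(phi, dx, dy):
    """Norm N = integral of |phi|^2 over a uniform grid."""
    return float(np.sum(np.abs(phi) ** 2) * dx * dy)

# Example: for a = 1.5, L = 0, the right circle is centered at (1.5, 0),
# the location of the density maximum of the stable soliton of Fig. 10.
print(modulation(1.5, 0.0, 1.5, 0.0))  # 1.0 (inside the right circle)
```

The `norm` helper corresponds to the quantity $N$ used throughout to label the soliton families.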
\section{Summary} In this work, we have introduced the 2D model based on the modulation function confining the self-focusing cubic nonlinearity, i.e., the corresponding nonlinear potential, to the two identical circles, which may be separated or overlapped. We aimed to study symmetric, antisymmetric, and asymmetric stationary 2D solitons in this setting, as well as breathers into which unstable solitons may be spontaneously transformed. Results of the analysis, obtained by means of numerical methods and the VA (variational approximation), can be summarized as follows. Well-separated circles support stable symmetric and antisymmetric solitons, as long as the interaction between the density peaks trapped in the two circles is negligible. The increase of the interaction strength (i.e., the decrease of separation $L$ between the circles, relative to their radii $a$) leads to the destabilization of the symmetric and antisymmetric solitons. As shown in Fig. \ref{fig6}, the symmetric solitons pass two SSB transitions. At first, they are transformed into robust breathers featuring small-amplitude irregular intrinsic vibrations, with a spontaneously emerging small asymmetry between the two peaks (Fig. \ref{fig4}). As the interaction between the two circles grows still stronger, the weakly asymmetric double-peak breathers lose their stability and are replaced by single-peak modes residing in one circle, while the other one remains nearly empty. On the other hand, Fig. \ref{fig8} shows that the antisymmetric solitons feature the single SSB transition, by a direct jump into the single-peak mode. The VA, based on the superposition of two Gaussians centered in the two circles, describes the symmetric solitons reasonably well, while it fails to accurately approximate antisymmetric ones. The situation is different in the case of touching ($L=0$) and overlapping ($L<0$) circles.
In this case, only single-peak solitons are found -- both asymmetric modes, trapped in one circle (Fig. \ref{fig10}), a part of which are stable (Fig. \ref{fig11}), and symmetric solitons centered around the midpoint ($x=y=0$). For the weak overlapping, $0\leq -L\leq 0.1a$, the symmetric single-peak soliton is situated at the unstable-equilibrium position. As a result, it spontaneously leaps into either circle and then performs shuttle motion therein (Fig. \ref{fig9}). For a stronger overlap, $0.1a<-L<0.33a$, the symmetric soliton decays into radiation. At a still larger degree of the overlap, $0.33a<-L<0.48a$, the enhancement of the local nonlinearity around the midpoint results in the reappearance of the SSB regime, in the form of the spontaneous leap followed by the shuttle motion. A region of the stability of the symmetric single-peak solitons appears at $-L>0.48a$, gradually expanding, with the growth of $|L|$, into the stability region for the soliton trapped in the single nonlinear circle \cite{HS} in the limit when the two circles merge into the single one ($-L\rightarrow 2a$). In the case of the moderately strong overlap, a noteworthy manifestation of the SSB was found in the form of the paired symmetry-breaking and symmetry-restoring bifurcations, which give rise to the loop in Fig. \ref{fig13}(a). This work may be naturally extended in other directions. In particular, a related dynamical problem is \textit{Josephson oscillations} in bosonic junctions formed by double-well potentials, cf. Refs. \cite{mean-field}, \cite{Salerno}-\cite{Mazzarella}. The present model calls for the study of Josephson oscillations in the double-well \textit{nonlinear} \textit{pseudopotentials}. Because the setting has no linear limit, the Josephson frequency is expected to strongly depend on the norm of the condensate. The models considered in Ref.
\cite{we} and here suggest that the corresponding \textit{nonlinear Josephson junctions} may be experimentally realized in photonic media, in addition to BEC. The nonlinear pseudopotentials may be also naturally generalized for two-component models, which give rise to vector solitons \cite{Barcelona-vector}. Accordingly, it may be relevant to study SSB effects in double-well pseudopotentials trapping two-component mixtures. \section*{Acknowledgment} We appreciate valuable discussions with M. Salerno. The work of T.M. was supported by the Thailand Research Fund under grant RMU5380005.
\section{Introduction} \label{secIntroduction} It was first proposed by Cachazo, He and Yuan (CHY)~\cite{Cachazo:2013iaa,Cachazo:2013gna,Cachazo:2013hca,Cachazo:2013iea} that the tree-level scattering amplitudes of many massless theories are supported by the solutions to the scattering equations \begin{equation} \label{eq:SE} f_i=\sum_{j=1, j\neq i}^{n}\frac{s_{ij}}{\sigma_{ij}}=0~,~~~~~~i=1,2\ldots n~,~~~ \end{equation} where $s_{ij}=(p_i+p_j)^2$ are Mandelstam variables, and $\sigma_{ij}=\sigma_{i}-\sigma_{j}$ are moduli space variables. The CHY formulation consists of an integrand $\mathcal{I}_n$ that specifies the theory, and a measure on the moduli space that fully localizes the integration to the solutions of the scattering equations~\eqref{eq:SE}: \begin{equation} \int d\mu_{\text{CHY}}=\int (\sigma_{rs}\sigma_{st}\sigma_{tr})^2\prod_{i\neq r,s,t}d\sigma_i\delta(f_i)~.~~~ \end{equation} The moduli space integration indicates that the CHY formulation should appear as a certain limit of the string amplitudes, which effectively reduces the path ordered string measure into the color ordered CHY measure. It has been shown that the CHY formulation naturally emerges as the infinite tension limit of ambitwistor strings~\cite{Mason:2013sva,Geyer:2014fka,Casali:2016atr}, chiral strings~\cite{Siegel:2015axg,Li:2017emw} and the pure spinor formalism of superstrings~\cite{Berkovits:2013xba,Gomez:2013wza}. In the context of conventional string theory, the CHY formulation can also appear as the zero tension limit of an alternative dual model~\cite{Bjerrum-Bohr:2014qwa}. The scattering equations~\eqref{eq:SE} have $(n-3)!$ solutions, and to obtain the amplitudes, naively one needs to find all the solutions and sum up their contributions. However, solving the equations becomes computationally infeasible as the number of particles grows. This difficulty can be circumvented by using integration rules~\cite{Cachazo:2015nwa,Baadsgaard:2015voa,Baadsgaard:2015ifa,Baadsgaard:2015hia}.
The idea behind this approach is that one can obtain the sum over algebraic combinations of the solutions in terms of the coefficients of the original polynomial equations without knowing each individual solution. Using the original integration rules, we can extract the correct amplitudes of those theories without the appearance of higher order poles, for example, the bi-adjoint scalar theory whose integrand consists of two Parke-Taylor (PT) factors, \begin{align} \label{eq:PTfactor} \mathcal{I}_n={\mbox{PT}}(\pmb\alpha)\times{\mbox{PT}}(\pmb\beta)&=\langle\alpha_1\alpha_2\cdots\alpha_n\rangle\times\langle\beta_1\beta_2\cdots\beta_n\rangle\nonumber\\ &=\frac{1}{\sigma_{\alpha_1\alpha_2}\sigma_{\alpha_2\alpha_3}\cdots\sigma_{\alpha_{n}\alpha_1}}\times\frac{1}{\sigma_{\beta_1\beta_2}\sigma_{\beta_2\beta_3}\cdots\sigma_{\beta_n\beta_1}}~,~~~ \end{align} where $\pmb\alpha=\{\alpha_1,\ldots,\alpha_n\}$ and $\pmb\beta=\{\beta_1,\ldots,\beta_n\}$ are two permutations of external particles. On the other hand, theories with more complicated integrands usually involve (spurious) higher order poles. For example, by expanding the Yang-Mills integrand following the way of Lam and Yao~\cite{Lam:2016tlk}, one has to develop various techniques to evaluate the higher order poles and show that they indeed cancel towards the end~\cite{Huang:2016zzb,Cardona:2016gon,Bjerrum-Bohr:2016axv,Huang:2017ydz,Zhou:2017mfj}. Alternatively, one can expand the integrand in terms of linear combinations of bi-adjoint scalar ones with local coefficients, such that the calculation of higher order poles can be avoided. This approach has succeeded in Yang-Mills, Yang-Mills-scalar and nonlinear sigma model~\cite{Stieberger:2016lng,Nandan:2016pya,Fu:2017uzt,Teng:2017tbo,Du:2017kpo,Du:2017gnh}. The latter approach has an extra benefit that the expansion coefficients are automatically the Bern-Carrasco-Johansson numerators~\cite{Bern:2008qj,Bern:2010ue,Cachazo:2013iea}. 
It is not surprising that the amplitudes of various theories land on the bi-adjoint scalar ones after a recursive expansion. The reason is that these bi-adjoint scalar amplitudes capture exactly the physical poles associated with various diagrams that are planar under certain color ordering, while different theories just dress these diagrams with different kinematic numerators. The above discussion shows the fundamental role of the bi-adjoint cubic scalar amplitudes in the understanding of the CHY-formulation, so it is not surprising that many different approaches have been proposed towards the evaluation and better understanding of the CHY-integrand with a product of two PT-factors. While many of them focus on the rational function of the complex variables $\sigma_i$, in paper~\cite{Arkani-Hamed:2017mur} the authors have related the PT-factors to the partial triangulations of a polygon with $n$ edges ($n$-gon), and the PT-factors are connected to the associahedron in a profound way. This also brings the permutation group $S_n$ into the story, since each PT-factor is accompanied by a color trace with a definite ordering. One should thus find a one-to-one correspondence between the action of $S_n$ on the PT-factors and the partial triangulations of the $n$-gon. It would be a very natural idea to understand the CHY-integrand from the knowledge of permutations. Certain progress along this direction has been made in~\cite{Lam:2015sqb} by investigating the pairing of external legs, whose results are presented in terms of illustrative objects like the \emph{crystal} and \emph{defect}. In fact, for the two PT-factors in the CHY-integrand of the bi-adjoint cubic scalar theory, if we set one PT-factor as the natural ordering $\vev{12\cdots n}$, corresponding to the identity element in the permutation group, then the other PT-factor can be interpreted as a permutation acting on the identity.
The physical information, i.e., the poles and vertices of the Feynman diagrams that this CHY-integrand evaluates to, should find its clue in the structure of permutations. In this note, we show that the aforementioned constructions of bi-adjoint scalar amplitudes have a unified description in terms of the cycle structure of the permutation group. We first demonstrate how the physical information is encoded in the so-called V-type and P-type cycle representations of a given permutation. We then explore how the relations between different PT-factors emerge from operations like merging and splitting of cycles. In terms of Feynman diagrams, this corresponds to fixing/unfixing a certain pole. We also show how the same pattern arises from the use of cross-ratio factors. Thus not only do cycle representations provide a new efficient way to evaluate amplitudes, they also make manifest the mathematical correspondence between amplitudes and cycle structures. This paper is organized as follows. In~\S \ref{secSetup}, we set our conventions and provide some necessary background. In \S \ref{secFeynman}, we show how the structure of those Feynman diagrams produced by a CHY-integrand can be extracted from the cycle representations of the corresponding PT-factor viewed as a permutation. In \S\ref{secPermutation}, we study the inverse problem of how to write out the PT-factor for an arbitrary given Feynman diagram. In the form of cycle representations, we propose a method to construct an $n$-point PT-factor recursively from lower-point PT-factors. In \S\ref{secRelation}, we investigate the relations among different PT-factors via the merging and splitting of cycle representations, as well as via multiplying cross-ratio factors. The conclusion is presented in \S\ref{secConclusion}, and in Appendix \ref{secAppendix}, we comment on an interesting interplay between the associahedron and cycle representations of permutations.
\section{The setup} \label{secSetup} In this section, we give the definitions of some important objects to be used later. \subsection{The canonical PT-factor} \label{secSetup1} Since the $2n$ PT-factors obtained by acting with cyclic rotations and reflections evaluate to the same amplitude, up to an overall sign $(-1)^n$ in the latter case, all these $2n$ PT-factors form an \emph{equivalent class}. Thus the number of independent PT-factors is $n!/(2 n)$. We can represent each independent PT-factor by a \emph{canonical} ordering $\vev{{\alpha}_1 {\alpha}_2\cdots {\alpha}_n}$ that satisfies two conditions: \begin{enumerate*}[label=(\arabic*)] \item the first element ${\alpha}_1$ is fixed to be $1$ to eliminate the cyclic ambiguity, \item the second element ${\alpha}_2$ should be smaller than the last element ${\alpha}_n$ to eliminate the reversing ambiguity. \end{enumerate*} The complete equivalent class can be generated from these independent PT-factors by acting with the cyclic rotation and reversing. For example, up to $n=5$, we can choose the independent PT-factors as follows, \begin{align} & n=3 & &\#=\frac{n!}{2n}=1: & &\vev{123}~,~~~\nonumber \\ & n=4 & &\#=\frac{n!}{2n}=3: & &\vev{1234}~~,~~\vev{1243}~~,~~\vev{1324}~,~~~\nonumber \\ & n=5 & &\#=\frac{n!}{2n}=12:& &\vev{12345}~~,~~\vev{12354}~~,~~\vev{12435}~~,~~\vev{12453}~~,~~\vev{12534}~~,~~\vev{12543}~,~~~\nonumber \\ & & & & &\vev{13245}~~,~~\vev{13254}~~,~~\vev{13425}~~,~~\vev{13524}~~,~~\vev{14235}~~,~~\vev{14325}~.~~~\nonumber \end{align} The CHY-integrand for bi-adjoint cubic scalar amplitudes is given by ${\mbox{PT}}(\pmb{{\alpha}})\times {\mbox{PT}}(\pmb{{\beta}})$, as shown in Eq.~\eqref{eq:PTfactor}. A simultaneous permutation acting on $\pmb{\alpha}$ and $\pmb{\beta}$ merely leads to the same result up to a relabeling of external legs.
Hence we can fix one of the PT-factors to be the natural ordering ${\mbox{PT}}(\pmb{{\alpha}})=\vev{12\cdots(n-1) n}$, and consider the other PT-factor ${\mbox{PT}}(\pmb{{\beta}})$ as a permutation acting on $\pmb{{\alpha}}$. Thus all the dynamical information is encoded in ${\mbox{PT}}(\pmb{{\beta}})$, from which one can read out the amplitude. \subsection{Permutation, cycle representation and PT-factor} \label{secSetup2} Permutations, as group elements of the $n$-point symmetric group $S_n$, can be defined by their action onto the space spanned by the elements of $S_n$ themselves, for example, \begin{eqnarray} \label{eq:groupaction} \pmb\beta|\pmb{e}\rangle=|\pmb\beta\rangle ~~~,~~~ \pmb{\gamma\beta}|\pmb{e}\rangle=\pmb{\gamma}|\pmb\beta\rangle=|\pmb{\gamma\beta}\rangle~,~~~ \end{eqnarray} where $\pmb\beta$ and $\pmb\gamma$ are two generic elements of $S_n$, and $\pmb{e}$ is the identity element defined as the natural ordering $\{1,2,\ldots, n\}$. Each permutation can be represented by a product of disjoint cycles $(i_1i_2\cdots i_s)$, which stands for the map $i_1\mapsto i_2$, $i_2\mapsto i_3$, $\ldots$, $i_s\mapsto i_1$. For example, \begin{equation} \label{eq:action123} (123)(4)(5)\cdots(n)|1234\cdots n\rangle=|2314\cdots n\rangle \end{equation} stands for the permutation in Cauchy's two-line notation \begin{eqnarray} \left(\begin{array}{cccccc} 1 & 2 & 3 & 4 & \cdots & n \\ 2 & 3 & 1 & 4 & \cdots & n \end{array}\right)~. \end{eqnarray} Cycles are defined up to a cyclic ordering, for example, $(123)=(231)=(312)$ gives the same permutation. It is also obvious that two disjoint cycles commute, i.e., $(123)(45)=(45)(123)$. Each permutation has a unique decomposition in terms of disjoint cycles, modulo the cyclicity of each cycle and the ordering of disjoint cycles.\footnote{In this statement, \emph{disjoint} is crucial to the uniqueness. 
Otherwise, we could have other decompositions like $(1)(2)=(12)(12)$, etc.} We call this unique decomposition the \emph{cycle representation} of a permutation in the rest of this paper. The number of disjoint cycles in a cycle representation is called the length of this cycle representation, and the number of elements in a cycle is called the length of cycle. In our CHY-integrand~\eqref{eq:PTfactor}, we can treat ${\mbox{PT}}(\pmb{\beta})=\langle {\beta}_1{\beta}_2\cdots {\beta}_n\rangle$ also as a permutation. The equivalent class of ${\mbox{PT}}(\pmb\beta)$ consists of all the $2n$ permutations obtained by cyclic rotations and reversing, \begin{eqnarray} \mathfrak{b}[\pmb{\beta}]:=\left\{\begin{array}{l} \langle \beta_1\beta_2\cdots\beta_{n-1}\beta_n\rangle\,,\,\langle \beta_2\beta_3\cdots\beta_n\beta_1\rangle\,,\ldots\,,\,\langle \beta_n\beta_{1}\cdots\beta_{n-2}\beta_{n-1}\rangle \\ \langle \beta_n \beta_{n-1}\cdots\beta_{2}\beta_1\rangle\,,\,\langle \beta_1\beta_n\cdots\beta_3\beta_2\rangle\,,\ldots\,,\,\langle \beta_{n-1}\beta_{n-2}\cdots\beta_{1}\beta_{n}\rangle \end{array}\right\}~.~~~\label{eq:PTequivclass} \end{eqnarray} We define the cyclic generator $\pmb{g}_c$ and reversing generator $\pmb{g}_r$ respectively as \begin{align} \label{eq:grgc} \pmb{g}_c=(\beta_1\beta_2\cdots\beta_n)~~~,~~~ \pmb{g}_r=\left\{\begin{array}{lll} (\beta_1\beta_n)(\beta_2\beta_{n-1})\cdots(\beta_{\frac{n}{2}}\beta_{\frac{n+2}{2}})&\quad &\text{for~even~}n \\ (\beta_1\beta_n)(\beta_2\beta_{n-1})\cdots (\beta_{\frac{n-1}{2}}\beta_{\frac{n+3}{2}})(\beta_{\frac{n+1}{2}})&\quad &\text{for~odd~}n \end{array}\right. ~,~~~ \end{align} which satisfy the relation $\pmb{g}_r\pmb{g}_c=\pmb{g}_c^{-1}\pmb{g}_r$ and $\pmb{g}_c^n=\pmb{g}_r^2=\pmb{e}$. Thus they generate the $n$-point dihedral group $D_n$. 
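The relations $\pmb{g}_r\pmb{g}_c=\pmb{g}_c^{-1}\pmb{g}_r$ and $\pmb{g}_c^n=\pmb{g}_r^2=\pmb{e}$, as well as the uniqueness of the disjoint-cycle decomposition, are easy to check mechanically. A minimal Python sketch (our own encoding: a permutation is stored as a dict $i\mapsto\sigma(i)$, and composition is taken to apply the right factor first), worked out for $\pmb\beta=\vev{1243}$ at $n=4$:

```python
def compose(p, q):
    """(p q)(i) = p(q(i)): apply q first, then p."""
    return {i: p[q[i]] for i in q}

def inverse(p):
    return {v: k for k, v in p.items()}

def from_cycles(cycles, n):
    """Build the permutation dict from a product of disjoint cycles."""
    perm = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for k, i in enumerate(cyc):
            perm[i] = cyc[(k + 1) % len(cyc)]
    return perm

def to_cycles(perm):
    """Unique disjoint-cycle representation, smallest element first in each cycle."""
    seen, out = set(), []
    for i in sorted(perm):
        if i in seen:
            continue
        cyc, j = [], i
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = perm[j]
        out.append(tuple(cyc))
    return out

n = 4
gc = from_cycles([(1, 2, 4, 3)], n)    # g_c = (beta_1 beta_2 beta_3 beta_4)
gr = from_cycles([(1, 3), (2, 4)], n)  # g_r = (beta_1 beta_4)(beta_2 beta_3)
e = {i: i for i in range(1, n + 1)}

assert compose(gr, gc) == compose(inverse(gc), gr)  # g_r g_c = g_c^{-1} g_r
p = e
for _ in range(n):
    p = compose(gc, p)
assert p == e and compose(gr, gr) == e              # g_c^n = g_r^2 = e

beta = from_cycles([(3, 4)], n)        # the permutation for <1243>
assert to_cycles(beta) == [(1,), (2,), (3, 4)]      # i.e. (1)(2)(34)
```

The assertions verify the dihedral relations stated above for this concrete example.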
For a permutation $\pmb\beta$, the equivalent class $\mathfrak{b}[\pmb\beta]$ is thus given by, \begin{eqnarray} \label{eq:equivperm} \mathfrak{b}[\pmb\beta]=\left\{~\pmb{\beta}~,~\pmb{\beta}\pmb{g}_c~,\ldots,~ \pmb{\beta}\pmb{g}_c^{n-1}~,~\pmb{\beta}\pmb{g}_r~,~\pmb{\beta}\pmb{g}_r\pmb{g}_c~,~ \ldots,~\pmb{\beta}\pmb{g}_r\pmb{g}_c^{n-1}~\right\}~.~~~ \end{eqnarray} The elements in $\mathfrak{b}$ are in one-to-one correspondence with the equivalent class of PT-factors (\ref{eq:PTequivclass}). There are in all $n!/(2n)$ non-equivalent permutations. However, they do not form a group in general, since $D_n$ is not a normal subgroup of $S_n$ for $n\geqslant 4$. Permutations in the same equivalent class of course have different cycle representations, since they are after all different group elements of $S_n$. In the next section, we are going to show that the set of Feynman diagrams to which ${\mbox{PT}}(\pmb\beta)$ corresponds is encoded collectively in the different cycle representations of the equivalent permutations in $\mathfrak{b}$. \section{From permutations to Feynman diagrams} \label{secFeynman} One of our motivations is to explore the information encoded in the PT-factors, described in the form of permutations. As mentioned in the previous section, in our setup the amplitude result is determined by the second PT-factor ${\mbox{PT}}(\pmb\beta)$, considered to be a permutation acting on the identity element. It determines an equivalent class containing $2n$ elements evaluating to the same amplitude. Thus we need to consider all the permutations in the equivalent class.
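The equivalence class~\eqref{eq:equivperm} can be enumerated mechanically. A minimal Python sketch (our own encoding: an ordering $\vev{\beta_1\cdots\beta_n}$ is stored as a tuple and viewed as the permutation $i\mapsto\beta_i$), illustrated on $\vev{1243}$:

```python
def equivalence_class(beta):
    """All 2n orderings obtained from beta by cyclic rotations and reversing."""
    n = len(beta)
    rots = [beta[k:] + beta[:k] for k in range(n)]
    return rots + [tuple(reversed(r)) for r in rots]

def to_cycles(ordering):
    """Cycle representation of the permutation i -> beta_i."""
    perm = {i + 1: ordering[i] for i in range(len(ordering))}
    seen, out = set(), []
    for i in sorted(perm):
        if i in seen:
            continue
        cyc, j = [], i
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = perm[j]
        out.append(tuple(cyc))
    return out

cls = equivalence_class((1, 2, 4, 3))
assert len(cls) == 8                                     # 2n members at n = 4
assert to_cycles((1, 2, 4, 3)) == [(1,), (2,), (3, 4)]   # (1)(2)(34)
assert to_cycles((4, 3, 1, 2)) == [(1, 4, 2, 3)]         # (1423)
```

Listing `to_cycles` over all members of `cls` reproduces the full set of cycle representations of the class.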
For example, when working with the CHY-integrand ${\mbox{PT}}(\pmb {\alpha})\times {\mbox{PT}}(\pmb {\beta})=\vev{1234}\times \vev{1243}$, we need the equivalent class of $\vev{1243}$, containing eight elements, \begin{eqnarray}\Big\{\vev{1243}~,~\vev{2431}~,~\vev{4312}~,~\vev{3124}~,~\vev{3421}~,~\vev{4213}~,~\vev{2134}~,~\vev{1342}\Big\}~,~~~\label{4p-F1-1-PermuOri}\end{eqnarray} or in the form of cycle representations, \begin{eqnarray} \Big\{(1)(2)(34)~,~(124)(3)~,~(1423)~,~(132)(4)~,~(1324)~,~(143)(2)~,~(12)(3)(4)~,~(1)(234)\Big\}~.~~~\label{4p-F1-1-Permu} \end{eqnarray} Our purpose is to relate these cycle representations to the Feynman diagrams contributing to the amplitude. All the above eight cycle representations in~\eqref{4p-F1-1-Permu} can be used to reconstruct the PT-factor ${\mbox{PT}}(\pmb{{\beta}})$ by acting with them on the natural ordering $\langle 1234\rangle$, so each one encodes the complete information for evaluation. However, they have different structures. In Eq.~\eqref{4p-F1-1-Permu}, two cycle representations are length-$1$, four are length-$2$, while the remaining two are length-$3$. How can we read useful information out of these different cycle structures? To answer this question, let us recall the integral result of the CHY-integrand $\vev{1234}\times \vev{1243}$, which is $-\frac{1}{s_{12}}$. It corresponds to a Feynman diagram with two cubic vertices, one connecting the legs $1$, $2$ and the internal propagator $P_{12}:= p_1+p_2$, while the other connecting the legs $3$, $4$, and the internal propagator $-P_{12}$. It is very plausible to conjecture that the two cycle representations $(1)(2)(34)$ and $(12)(3)(4)$ in fact describe respectively the two cubic vertices. In $(1)(2)(34)$, the two cycles $(1)$ and $(2)$ describe respectively the external legs $1$ and $2$ attached to a vertex, while the cycle $(34)$ describes the corresponding internal propagator of that vertex.
It also indicates that $\frac{1}{s_{34}}=\frac{1}{s_{12}}$ is an internal pole. Similar analysis can be carried out for the cycle representation $(12)(3)(4)$. The above discussion tells us that, although each cycle representation contains the complete information of the amplitude, the pole structure is manifest in some of them but not all. The complete picture of Feynman diagrams is determined collectively by all cycle representations whose pole and/or vertex structures are manifest. With this understanding, we only need to consider those good cycle representations, i.e., those in which the pole and/or vertex structure is manifest. For the bi-scalar theory, a physical pole appears only for external legs with consecutive ordering. So let us define the \emph{good cycle representations} as those satisfying the following criteria: \begin{itemize} \item the cycles in the considered cycle representation can be separated into at least two parts, while the union of cycles in each part is consecutive (later called a \emph{planar separation}). \item in case the cycle representation \emph{can only be} separated into two parts, each part should contain at least two elements. \end{itemize} Moreover, if the planar separation of a good cycle representation contains at least three parts, we call it a \emph{vertex type} (V-type) cycle representation. Otherwise, we call it a \emph{pole type} (P-type) cycle representation. Let us give a few more examples. At six points, both $(12)(34)(56)$ and $(12)(35)(46)$ are good cycle representations. The former is a V-type one since it can be separated into three parts $(12)$, $(34)$ and $(56)$, while the latter is a P-type one since it can only be separated into two parts $(12)$ and $(35)(46)$. On the other hand, both $(14)(25)(36)$ and $(1)(23456)$ are bad cycle representations according to the above criteria, since the former has no planar separation at all while the separated part $(1)$ in the latter contains only one element.
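These criteria are purely combinatorial, and can be checked by brute force over all partitions of the cycles into blocks. The following Python sketch implements our reading of the criteria (with ``consecutive'' taken to mean a consecutive arc in the cyclic order $1,\ldots,n$) and reproduces the six-point examples above:

```python
def is_arc(elems, n):
    """True if the labels form a consecutive run on the cyclically ordered set 1..n."""
    s = set(elems)
    m = len(s)
    return any(s == {(start + k - 1) % n + 1 for k in range(m)}
               for start in range(1, n + 1))

def set_partitions(items):
    """All partitions of a list into nonempty blocks (Bell-number many)."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def classify(cycles, n):
    """Classify a cycle representation as 'V-type', 'P-type' or 'bad'."""
    best, two_part_ok = 0, False
    for part in set_partitions(list(cycles)):
        if len(part) < 2:
            continue
        blocks = [sorted(x for c in block for x in c) for block in part]
        if all(is_arc(b, n) for b in blocks):   # this partition is a planar separation
            best = max(best, len(part))
            if len(part) == 2 and all(len(b) >= 2 for b in blocks):
                two_part_ok = True
    if best >= 3:
        return 'V-type'
    return 'P-type' if two_part_ok else 'bad'

# The six-point examples of the text:
print(classify([(1, 2), (3, 4), (5, 6)], 6))  # V-type
print(classify([(1, 2), (3, 5), (4, 6)], 6))  # P-type
print(classify([(1, 4), (2, 5), (3, 6)], 6))  # bad
print(classify([(1,), (2, 3, 4, 5, 6)], 6))   # bad
```

This brute-force scan is only a sketch for small $n$; it scales with the Bell number of the cycle count, which is adequate for the examples considered here.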
With the above definitions, we can give an answer to the question raised at the end of the first paragraph of this section: we can reconstruct the Feynman diagrams by considering all the good cycle representations in the equivalent class of a PT-factor. Now we illustrate how a good cycle representation reflects the vertex and pole structure of the corresponding Feynman diagram by an eight point example ${\mbox{PT}}(\pmb{\beta})=\vev{12846573}$, which gives four trivalent Feynman diagrams after the CHY integration, \begin{eqnarray} \frac{1}{s_{12} s_{56}s_{8123}} \left(\frac{1}{s_{812}}+\frac{1}{s_{123}}\right)\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)~.~~~\label{8p-bad-1-1}\end{eqnarray} The result can be summarized into a single effective Feynman diagram, \begin{equation*} \adjustbox{raise=0cm}{\begin{tikzpicture} \draw (135:0.5) node[left=0pt]{$2$} -- (0,0) -- (-135:0.5) node[left=0pt]{$1$}; \draw (0,0) -- (1.5,0) -- ++(45:0.5) node[right=0pt]{$5$} (1.5,0) -- ++(-45:0.5) node[right=0pt]{$6$}; \draw (0.5,-0.5) node[below=0pt]{$8$} -- (0.5,0.5) node[above=0pt]{$3$} (1,-0.5) node[below=0pt]{$7$} -- (1,0.5) node[above=0pt]{$4$}; \node at (2.5,0) {$.$}; \end{tikzpicture}} \end{equation*} There are $16$ equivalent cycle representations in $\mathfrak{b}[\pmb{\beta}]$, collected as, \begin{eqnarray} \text{good}: & ~~~ & \text{V-type}:~~ (1)(2)(38)(4)(56)(7)~~,~~(12)(3)(47)(5)(6)(8)~,~~~\nonumber \\ & & \text{P-type}:~~(132)(4875)(6)~~~,~~(128)(3467)(5)~~,~~(1)(2378)(456)~~,~~(1843)(2)(576)~,~~~\nonumber \\ \text{bad}: &~~~ & (15274)(3)(68)~~,~~(14726)(35)(8)~~,~~(16385)(24)(7)~~,~~(17)(25836)(4)~,~~~\label{8p-bad-1-2}\\ & & (176423)(58)~~,~~(182457)(36)~~,~~(135468)(27)~~,~~(14)(286753)~,~~~\nonumber \\ & & (1526)(3487)~~,~~(1625)(3784)~.~~~\nonumber\end{eqnarray} In general, P-type cycle representations manifest certain poles contained in some of the Feynman diagrams. 
For example, $(132)\pmb{|}(4875)(6)$ is P-type since it can only be divided into two parts, indicated by the vertical line. This separation indicates that the pole $s_{123}$ should appear in some Feynman diagrams. Similarly, the other three P-type cycle representations correspond respectively to the poles $s_{812}$, $s_{456}$ and $s_{567}$, as can be seen in Eq.~\eqref{8p-bad-1-1}. On the contrary, the V-type cycle representations contain both pole and vertex information. For example, the cycle representation $(1)(2)(38)(4)(56)(7)$ allows two different planar separations, \begin{subequations} \begin{eqnarray} \text{four parts}:&~~~ & (1)(2)(38)\pmb{|}(4)\pmb{|}(56)\pmb{|}(7)\,,\label{eq:4way}\\ \text{three parts}:&~~~ &(1)\pmb{|}(2)\pmb{|}(38)(4)(56)(7)\,.\label{eq:3way}\end{eqnarray} \end{subequations} These two separations indicate that the effective Feynman diagram contains one quartic vertex with legs $\{P_{8123}, 4, P_{56}, 7\}$, and one cubic vertex with legs $\{1, 2, P_{345678}\}$. Since the legs with more than one element also give pole information, we can read out the poles $s_{8123}$, $s_{56}$, and $s_{12}$ from this V-type cycle representation. Similarly, the cycle representation $(12)(3)(47)(5)(6)(8)$ gives one quartic vertex with legs $\{P_{12}, 3, P_{4567}, 8\}$ and one cubic vertex with legs $\{5, 6, P_{781234}\}$, and it gives the same pole structure as the previous one. Combining these two, we do produce all the vertices in the effective Feynman diagram. The above example shows that by collectively combining the information from all good cycle representations, we can read out the complete Feynman diagram result. This provides one method of analysis. On the other hand, we can arrive at the complete final result by relying on only one good cycle representation, since each one should contain the complete information of the PT-factor ${\mbox{PT}}(\pmb{{\beta}})$. Hence we should have another method of analysis.
Observations from practical computation show that: \begin{enumerate}[label=(\Alph*)] \item All V-type cycle representations together manifest the vertex structure of the corresponding effective Feynman diagram. \item It is also possible to reproduce the effective Feynman diagram from one V-type cycle representation if we recursively use the lower multiplicity results. \item One P-type cycle representation is not sufficient to reproduce the complete result, and in order to get the correct answer we should consider all P-type cycle representations. \end{enumerate} We again use the example~\eref{8p-bad-1-2} to demonstrate our observations. We start with the V-type cycle representation $(1)(2)(38)(4)(56)(7)$. In the four-part separation~\eqref{eq:4way}, we first coarse grain the parts $\{8123\}$ and $\{56\}$ by replacing each of them with a single propagator. This leads to an effective quartic vertex, \begin{align} &{\mbox{PT}}(\pmb{\beta})=(1)(2)(38)\pmb{|}(4)\pmb{|}(56)\pmb{|}(7)~\rightarrow~(P_{8123})(4)(P_{56})(7)~,~~~\nonumber\\ &{\mbox{PT}}(\pmb{\alpha})=(8)(1)(2)(3)\pmb{|}(4)\pmb{|}(5)(6)\pmb{|}(7)~\rightarrow~(P_{8123})(4)(P_{56})(7)~,~~~\nonumber \end{align} which gives the contribution \begin{equation} \adjustbox{raise=-0.65cm}{\begin{tikzpicture} \draw (-135:0.75) node[left=0pt]{$P_{8123}$} -- (45:0.75) node[right=0pt]{$P_{56}$}; \draw (135:0.75) node[left=0pt]{$4$} -- (-45:0.75) node[right=0pt]{$7$}; \end{tikzpicture}}=\frac{1}{s_{8123} s_{56}} \left( \frac{1}{s_{4P_{56}}}+\frac{1}{s_{P_{56} 7}}\right)=\frac{1}{s_{8123} s_{56}}\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)~.~~~\label{8p-bad-1-3} \end{equation} Next, we look into the substructures. In Eq.~\eqref{eq:4way}, the substructure $\{P_{56}, 5,6\}$ has the cycle representation $(P_{56})(56)$.
In its equivalent class, we have only one good cycle representation \begin{equation} \label{eq:3pVertex} \text{V-type}:\qquad (P_{56})(5)(6)\;\Longrightarrow\;\adjustbox{raise=-0.65cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$P_{56}$} -- (0,0) -- (45:0.75) node [right=0pt]{$5$} (-45:0.75) node[right=0pt]{$6$} -- (0,0); \end{tikzpicture}}~,~~~ \end{equation} which gives the familiar cubic vertex. The substructure $\{P_{8123},8,1,2,3\}$ has the cycle representation $(1)(2)(38)(P_{8123})$ in Eq.~\eqref{eq:4way}. By acting on it with the cyclic generator $\pmb{g}_c=(P_{8123}8123)$ and the reversing generator $\pmb{g}_r=(P_{8123})(83)(12)$, we can reproduce all the ten permutations in the equivalent class,\footnote{We omit the subscript in the propagator $P$ when there is no possible confusion.} \begin{eqnarray} \text{V-type}: &~~& (P)(8)(12)(3)~,~~~ \nonumber \\ \text{P-type}: &~~ & (P8)(132)~~,~~(P3)(812)~,~~~\label{8p-bad-1-4}\\ \text{Bad}: &~~& (P)(38)(1)(2)~~,~~(P231)(8)~~,~~(P182)(3)~,~~~\nonumber \\ & & (P823)(1)~~,~~(P318)(2)~~,~~(P1)(832)~~,~~(P2)(813)~.~~~\nonumber\end{eqnarray} From the V-type cycle representation $(P_{8123})(8)(12)(3)$, we see immediately the quartic vertex structure \begin{equation} (P_{8123})(8)(12)(3)\;\Longrightarrow\;\adjustbox{raise=-0.65cm}{\begin{tikzpicture} \draw (-135:0.75) node[left=0pt]{$P_{8123}$} -- (45:0.75) node[right=0pt]{$P_{12}$}; \draw (135:0.75) node[left=0pt]{$8$} -- (-45:0.75) node[right=0pt]{$3$}; \end{tikzpicture}}~,~~~ \end{equation} where the $\{P_{12},1,2\}$ part is another cubic vertex, following the analysis of~\eqref{eq:3pVertex}. Thus we get the contribution $\left(\frac{1}{s_{812}}+\frac{1}{s_{123}}\right)$. When combining with~\eref{8p-bad-1-3}, we do get the complete result~\eref{8p-bad-1-1}. The above calculation shows that by recursively looking into the V-type cycle representations of each substructure, we can reproduce the full Feynman diagram result. 
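The generation of the ten representations in~\eref{8p-bad-1-4} from $\pmb{g}_c$ and $\pmb{g}_r$ can be mimicked in a few lines. In the sketch below (helper names are ours) the generators act by composition on the right, which reproduces the listed representations; the symbol 'P' abbreviates $P_{8123}$:

```python
# Sketch: generate the substructure's class by closing under the
# cyclic generator g_c = (P8123) and the reversing generator
# g_r = (P)(83)(12), acting by right-composition. 'P' stands for P_{8123}.

SYMS = ['P', '8', '1', '2', '3']

def from_cycles(*cycles):
    perm = {x: x for x in SYMS}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            perm[a] = b
    return perm

def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in SYMS}

e = from_cycles(('3', '8'))                    # (1)(2)(38)(P)
g_c = from_cycles(('P', '8', '1', '2', '3'))   # cyclic generator
g_r = from_cycles(('8', '3'), ('1', '2'))      # reversing generator

cls = {tuple(e[x] for x in SYMS)}
changed = True
while changed:
    changed = False
    for t in list(cls):
        f = dict(zip(SYMS, t))
        for g in (g_c, g_r):
            image = tuple(compose(f, g)[x] for x in SYMS)
            if image not in cls:
                cls.add(image)
                changed = True
print(len(cls))   # 10 equivalent representations, as in the text
```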
Now we move to the P-type cycle representations, for example, the planar separation $(132)\pmb{|}(4875)(6)$. We need to analyze the two substructures given by cycle representations $(P_{123})(132)$ and $(P_{123})(4875)(6)$. Using the algorithm given in~\eqref{eq:equivperm}, the substructure $(P_{123})(132)$ gives the following eight equivalent cycle representations \begin{eqnarray*} \text{V-type}: &~~& (1)(2)(3P)~~,~~(12)(3)(P)~,~~~ \\ \text{Bad}: &~~& (132)(P)~~,~~(1P23)~~,~~(12P)(3)~~,~~(1P3)(2)~~,~~(132P)~~,~~(1)(23P)~,\end{eqnarray*} in which either $(1)(2)(3P_{123})$ or $(12)(3)(P_{123})$ manifests the pole structure $\frac{1}{s_{12} s_{123}}$. Similarly, the substructure $(P_{123})(4875)(6)$ gives the following $12$ equivalent cycle representations \begin{eqnarray*} \text{V-type}: &~~& (8P)(4)(56)(7)~~,~~(8)(P)(47)(5)(6)~,~~~ \\ \text{P-type}: &~~& (78P)(456)~~,~~(4P8)(765)~,~~~\\ \text{Bad}: &~~& (P)(6)(4875)~~,~~(P764)(58)~~,~~(P5)(47)(68)~~,~~(P6)(4578)~~,~~(P467)(5)(8)~,~~~\\ & & (P48675)~~,~~(P6)(4)(58)(7)~~,~~(P54687)~.~~~\end{eqnarray*} From both V-type cycle representations, we can read out the contribution $\frac{1}{s_{56}s_{1238}}\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)$. Putting the two substructures together, we get only two terms \begin{equation*} \frac{1}{s_{12}s_{56}s_{1238}}\frac{1}{s_{123}}\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)~, \end{equation*} compared to the full result~\eqref{8p-bad-1-1}. One can check that only by combining with another P-type cycle representation $(128)(3467)(5)$ can we obtain the full result. With the above example in mind, let us move on to the systematic investigation of four, five and six point PT-factors. For presentation purposes, we shall organize the independent PT-factors into categories according to the topology of the corresponding Feynman diagrams. In the same category, different PT-factors are related by group actions, and can be analyzed in the same manner.
Concretely, we can define the group action as follows. In the space of $\frac{n!}{2n}$ equivalent classes $\mathfrak{b}[\pmb{\beta}]$, we define the permutation action \begin{equation} \mathcal{C}(\mathfrak{b}[\pmb{\beta}])=\mathfrak{b}\left[\left.\pmb{\beta}\right|_{i\mapsto i+1}\right]~~~~,~~~~\mathcal{R}(\mathfrak{b}[\pmb{\beta}])=\mathfrak{b}\left[\left.\pmb{\beta}\right|_{i\mapsto n+1-i}\right]~,~~~ \end{equation} where $\mathcal{C}$ and $\mathcal{R}$ also generate a dihedral group $\mathcal{D}_n$.\footnote{We note that this $\mathcal{D}_n$ is different from the $D_n$ defined in \S\ref{secSetup} that generates the equivalent class $\mathfrak{b}[\pmb{\beta}]$, since their actions on the permutations are different.} The action of $\mathcal{D}_n$ further separates the space of $\mathfrak{b}[\pmb{\beta}]$ into different orbits. The number of elements in each orbit depends on the symmetry properties of that orbit. For example, the identity permutation ${\mbox{PT}}(\pmb{\beta})=\langle 12\cdots n\rangle$ is invariant under the $\mathcal{D}_n$ action, so it forms a one-element orbit by itself. A nontrivial example: acting with $\mathcal{D}_n$ on ${\mbox{PT}}(\pmb{\beta})=\langle 12846573\rangle$, we get an orbit with four elements: \begin{equation} \langle 12846573\rangle~~~,~~~\langle 13248675\rangle~~~,~~~\langle 15342687\rangle~~~,~~~\langle 17354628\rangle~. \end{equation} The above discussion is useful because all PT-factors in the same orbit of $\mathcal{D}_n$ share the same structure of the cycle representations. Most importantly, starting from the V-type and P-type cycle representations of one PT-factor in the orbit, we can get the V-type and P-type cycle representations of all the other PT-factors in this orbit simply by the mapping $i\mapsto i+1$ or $i\mapsto n+1-i$. Later on, we will study only one PT-factor from each orbit. After the above general discussion, we present more examples to further elaborate our algorithm.
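The four-element orbit quoted above can be verified by a short computation. The sketch below (helper names are ours) reduces each word to a canonical representative using the cyclic and reversal redundancy of the PT-factor itself, then closes the two relabelings $i\mapsto i+1$ and $i\mapsto n+1-i$ over it:

```python
# Sketch of the D_n action on PT-factor classes: relabel i -> i+1
# (mod n) or i -> n+1-i, then reduce each word to a canonical
# representative under the cyclic and reversal redundancy.

def canonical(word):
    """Rotate to start at label 1; pick the lexicographically smaller
    of the word and its reversal."""
    w = list(word)

    def rot1(u):
        k = u.index(1)
        return tuple(u[k:] + u[:k])

    return min(rot1(w), rot1(w[::-1]))

def dihedral_orbit(word):
    n = len(word)
    shift = lambda w: [i % n + 1 for i in w]    # i -> i+1
    mirror = lambda w: [n + 1 - i for i in w]   # i -> n+1-i
    orbit = {canonical(word)}
    frontier = [canonical(word)]
    while frontier:
        w = list(frontier.pop())
        for image in (canonical(shift(w)), canonical(mirror(w))):
            if image not in orbit:
                orbit.add(image)
                frontier.append(image)
    return orbit

print(sorted(dihedral_orbit([1, 2, 8, 4, 6, 5, 7, 3])))
# four words: <12846573>, <13248675>, <15342687>, <17354628>
```

The identity word is fixed by both relabelings, so `dihedral_orbit(list(range(1, n + 1)))` indeed returns a single element, matching the one-element orbit of $\langle 12\cdots n\rangle$.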
In appendix~\ref{secAppendix}, we will give some further discussion of the cycle structure of PT-factors and Feynman diagrams. \subsection{The three and four point cases} \label{secFeynmanSub1} At three point, there is only one independent PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{123}$. Among the six cycle representations of the equivalent class, only $(1)(2)(3)$ is a good one. The planar separation $(1)\pmb{|}(2)\pmb{|}(3)$ indicates pictorially that the three external legs are attached to a single cubic vertex. This Feynman diagram evaluates to $1$, which agrees with the CHY integration result. At four point, there are three independent PT-factors ${\mbox{PT}}(\pmb{\beta})$. Each of them corresponds to an equivalent class with $8$ permutations. Their good cycle representations are summarized in the following table as \begin{center} \begin{tabular}{|c|c|c|} \hline ${\mbox{PT}}(\pmb{{\beta}})$ & V-type & P-type \\ \hline $\vev{1234}$ & ~~$(1)(2)(3)(4)$~~& ~~$(41)(23)$~~,~~$(12)(34)$~~ \\ \hline $\vev{1243}$ & ~~$(1)(2)(34)$~~,~~$(12)(3)(4)$~~ & \\ \hline $\vev{1324}$ & ~~$(4)(1)(23)$~~,~~$(41)(2)(3)$~~ & \\ \hline \end{tabular}~~~~\label{4point-Table} \end{center} For the PT-factor $\vev{1234}$, we can read out the complete vertex information from the sole V-type cycle representation $(1)(2)(3)(4)$. This PT-factor gives all trivalent four point Feynman diagrams whose four external legs are connected at cubic vertices respecting the color ordering, and the result is simply $\frac{1}{s_{12}}+\frac{1}{s_{23}}$. In the language of planar separation, the V-type cycle representation can be separated into four parts $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(4)$, which can be explained as defining an effective quartic vertex with exactly the same meaning as mentioned above. This structure will be one of the building blocks for the analysis of higher point Feynman diagrams. However, each of the two P-type cycle representations alone provides only a partial result.
For example, the planar separation of cycle representation $(41)\pmb{|}(23)$ indicates that legs $2$ and $3$ are connected to the same cubic vertex while legs $4$ and $1$ are connected to another, resulting in a contribution of $\frac{1}{s_{23}}$. This is half of the complete answer, and the remaining part is given by the other P-type cycle representation $(12)\pmb{|}(34)$, leading to $\frac{1}{s_{12}}$. The other two PT-factors $\vev{1243}$ and $\vev{1324}$ are related through $i\mapsto i+1$, i.e., $\vev{1243}\mapsto \vev{2314}=\vev{1324}$. In other words, they belong to the same orbit under the $\mathcal{D}_4$ action. Thus by knowing one, we can obtain the other just by relabeling. For $\vev{1243}$, the V-type cycle representation with planar separation $(1)\pmb{|}(2)\pmb{|}(34)$ indicates a cubic vertex with three legs $\{1, 2 ,P_{34}\}$, while $(12)\pmb{|}(3)\pmb{|}(4)$ indicates the other cubic vertex with legs $\{P_{12}, 3 ,4\}$. Putting them together, we get the $s$-channel Feynman diagram, which evaluates to $\frac{1}{s_{12}}$. Alternatively, we can use just one V-type cycle representation to reproduce the complete result. For example, $(1)(2)(34)$ indicates a cubic vertex represented by $(1)(2)(P_{34})$, and a three point substructure $(P_{34})(34)$. Then by using the three point result, this substructure is nothing but another cubic vertex $(P_{34})(3)(4)$. Thus $(1)(2)(34)$ indeed gives the $s$-channel diagram. For the V-type cycle representation $(12)(3)(4)$, the analysis is exactly the same, and we can show that both V-type cycle representations give the same answer. \subsection{The five point case} \label{secFeynmanSub2} For the five point case, there are $\frac{5!}{10}=12$ independent PT-factors ${\mbox{PT}}(\pmb{{\beta}})$.
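The counting $\frac{n!}{2n}$ of independent PT-factors can be verified by a direct enumeration (a sketch; we simply identify words related by cyclic rotation and reversal):

```python
from itertools import permutations

# A direct check of the counting n!/(2n): fix the first label to 1,
# then identify each word with its reversal rotated back to start
# with 1, and count the distinct representatives.

def num_independent_pt(n):
    reps = set()
    for tail in permutations(range(2, n + 1)):
        w = (1,) + tail
        rev = w[::-1]
        k = rev.index(1)
        reps.add(min(w, rev[k:] + rev[:k]))
    return len(reps)

print([num_independent_pt(n) for n in (3, 4, 5, 6)])  # [1, 3, 12, 60]
```

For $n=3,4,5,6$ this reproduces the counts $1$, $3$, $12$ and $60$ used in this and the next subsection.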
They can be divided into the following four categories, \begin{enumerate}[label=(\arabic*)] \item The PT-factor $\vev{12345}$ has only one V-type cycle representation $(1)(2)(3)(4)(5)$ and five P-type cycle representations $(15)(24)(3)$, $(1)(25)(34)$, $(12)(35)(4)$, $(13)(2)(45)$ and $(14)(5)(23)$. The V-type one corresponds to all possible Feynman diagrams in which the legs $\{1, 2, 3, 4, 5\}$ are connected by cubic vertices respecting the color ordering. The result is simply % \begin{eqnarray} \adjustbox{raise=-0.9cm}{\begin{tikzpicture} \draw (0,0) -- (0,0.75) node[above=0pt]{$2$} (0,0) -- (18:0.75) node[right=0pt]{$3$} (0,0) -- (162:0.75) node[left=0pt]{$1$}; \draw (0,0) -- (-54:0.75) node[right=0pt]{$4$} (0,0) -- (-126:0.75) node[left=0pt]{$5$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12}s_{34}} +\frac{1}{s_{23}s_{45}}+\frac{1}{s_{34}s_{51}}+ \frac{1}{s_{45}s_{12}}+\frac{1}{s_{51}s_{23}}~.~~~\end{eqnarray} % In the language of planar separations, $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(4)\pmb{|}(5)$ indicates an effective five point vertex with the same meaning. As a comparison, each P-type cycle representation corresponds to only two trivalent Feynman diagrams. For example, the planar separation $(15)\pmb{|}(24)(3)$ fixes the pole $s_{15}$. The substructure $(P_{15})(24)(3)$ has a V-type cycle representation $(P)(2)(3)(4)$ in its equivalent class, indicating an effective quartic vertex, which gives two trivalent Feynman diagrams. Only after combining all the five P-type cycle representations do we get the complete answer, where each Feynman diagram appears twice.
\item The following five PT-factors $\vev{12354}$, $\vev{12435}$, $\vev{12543}$, $\vev{13245}$ and $\vev{14325}$ form an orbit under the $\mathcal{D}_5$ action, and are related by the cyclic permutation $i\mapsto i+1$, for example, $\vev{12354}\mapsto \vev{23415}=-\vev{14325}$.\footnote{One can easily check that the result of $i\mapsto n+1-i$ is also in this orbit.} Thus we only need to analyze one of them, say, $\vev{12354}$. In its equivalent class, there are two V-type cycle representations $(1)(2)(3)(45)$ and $(13)(2)(4)(5)$, together with two P-type cycle representations $(12)(345)$ and $(154)(23)$. For the first V-type cycle representation, the planar separation $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(45)$ indicates an effective quartic vertex, labeled as $V_1$ in Eq.~\eqref{eq:5pF2}, while for the second V-type cycle representation, the planar separation $(13)(2)\pmb{|}(4)\pmb{|}(5)$ indicates a cubic vertex labeled as $V_2$ in Eq.~\eqref{eq:5pF2}. Putting them together, we do reproduce the unique effective Feynman diagram with five external legs, % \begin{equation} \label{eq:5pF2} \adjustbox{raise=-1.3cm}{\begin{tikzpicture} \draw (0,0) node[left=0pt]{$2$} -- (2,0) -- ++(45:1) node[right=0pt]{$4$} (2,0) -- ++(-45:1) node[right=0pt]{$5$}; \draw (1,-1) node[below=0pt]{$1$} -- (1,1) node[above=0pt]{$3$}; \filldraw (1,0) circle (2pt) node[below left=0pt]{$V_1$}; \filldraw (2,0) circle (2pt) node[below left=0pt]{$V_2$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{45}}\left(\frac{1}{s_{12}}+\frac{1}{s_{23}}\right)~.~~~ \end{equation} Alternatively, we can obtain the same answer by using only one V-type cycle representation, for instance the planar separation $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(45)$. We first replace the cycle $(45)$ by the propagator $(P_{45})$. This gives the effective quartic vertex $V_1$ marked in Eq.~\eqref{eq:5pF2} and a three point substructure $(P_{45})(45)$.
Then following the three point analysis presented at the beginning of \S\ref{secFeynmanSub1}, we reproduce the vertex $V_2$ in Eq.~\eqref{eq:5pF2}. The complete result can be arrived at by combining the two vertices along the propagator $P_{45}$, which is just Eq.~\eqref{eq:5pF2}. \item The following five PT-factors $\vev{12453}$, $\vev{12534}$, $\vev{13254}$, $\vev{13425}$ and $\vev{14235}$ form another orbit under the $\mathcal{D}_5$ action, and are related by the cyclic permutation $i\mapsto i+1$. As before, we only consider the PT-factor $\vev{12453}$ as an example. It contains three V-type cycle representations, and the planar separations $(1)\pmb{|}(2)\pmb{|}(345)$, $(12)\pmb{|}(3)\pmb{|}(45)$ and $(321)\pmb{|}(4)\pmb{|}(5)$ manifest three cubic vertices. After combining them together, we get the effective Feynman diagram % % \begin{equation} \adjustbox{raise=-0.65cm}{\begin{tikzpicture} \draw (135:0.75) node[left=0pt]{$2$} -- (0,0) -- (-135:0.75) node[left=0pt]{$1$} (0,0) -- (1.5,0) -- ++(45:0.75) node[right=0pt]{$4$} (1.5,0) -- ++(-45:0.75) node[right=0pt]{$5$}; \draw (0.75,0) -- (0.75,0.75) node[above=0pt]{$3$}; \filldraw (0,0) circle (2pt) node[below=1pt]{$V_1$} (0.75,0) circle (2pt) node[below=1pt]{$V_2$} (1.5,0) circle (2pt) node[below=1pt]{$V_3$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12}s_{45}}~.~~~ \label{5point-re1} \end{equation} % Next, we show how to reproduce the above result by using only one V-type cycle representation, for example, $(12)(3)(45)$. The planar separation $(12)\pmb{|}(3)\pmb{|}(45)$ indicates a cubic vertex $(P_{12})(3)(P_{45})$, which is the vertex $V_2$ in~\eqref{5point-re1}. We also get two three point substructures $(P_{12})(12)$ and $(P_{45})(45)$, leading to the vertices $V_1$ and $V_3$, respectively, in~\eqref{5point-re1}. The Feynman diagram thus contains only cubic vertices, and there are exactly two internal propagators $P_{12}$ and $P_{45}$, evaluating to $\frac{1}{s_{12}s_{45}}$.
Also, the same result can be obtained by using the planar separation $(1)\pmb{|}(2)\pmb{|}(345)$, where the vertex $V_1$ in~\eqref{5point-re1} is manifest. For the substructure $(P_{345})(345)$, we need to look into its equivalent class. Now using~\eqref{eq:equivperm}, we find two V-type cycle representations $(P)(3)(45)$ and $(P3)(4)(5)$. According to the four point analysis in \S\ref{secFeynmanSub1}, they both lead to a pole $\frac{1}{s_{45}}$. Thus we get again the result $\frac{1}{s_{12}s_{45}}$. \item The cycle representations for the last PT-factor $\vev{13524}$ are \begin{eqnarray} \begin{array}{l} (1)(2354)~~,~~(2)(3415)~~,~~(3)(4521)~~,~~(4)(5132)~~,~~(5)(1243)~,~~~ \\ (1)(2453)~~,~~(2)(3514)~~,~~(3)(4125)~~,~~(4)(5231)~~,~~(5)(1342)~.~~~ \end{array}~~~\label{eq:5p0F} \end{eqnarray} There is no good cycle representation at all, so the contribution is zero, which is indeed the case. \end{enumerate} \subsection{The six point case} \label{secFeynmanSub3} There are in total $\frac{6!}{12}=60$ independent PT-factors for the six point case. According to the number of trivalent Feynman diagrams they evaluate to, we can distribute them into different groups, with the number of PT-factors in each group as \begin{equation} \setlength{\arraycolsep}{10pt} \begin{array}{|c|c|c|c|c|c|c|}\hline \text{\# of trivalent Feyn. diagrams} & 14 & 5 & 4 & 2 & 1 & 0 \\ \hline \text{\# of PT-factors} & 1 & 6 & 3 & 21 & 14 & 15 \\ \hline \end{array} \label{6pt-counting} \end{equation} We will study them group by group in the following paragraphs. \paragraph{With $14$ Feynman diagrams:} There is only one PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{123456}$ that evaluates to 14 Feynman diagrams. 
In the equivalent class, the good cycle representations are \begin{subequations} \begin{align} &\text{V-type}~:~~~(1)(2)(3)(4)(5)(6)~,~~~\\ \label{eq:6pPtype} &\text{P-type}~:~~~ (61)(25)(34)~~,~~(12)(36)(45)~~,~~ (23)(41)(56)~,~~~\nonumber\\ & ~~~~~~~~~~~~~~~(1)(26)(35)(4)~~,~~ (2)(31)(46)(5)~~,~~ (3)(42)(15)(6)~.~~~ \end{align} \end{subequations} The sole V-type one indicates that the six external legs form all possible cubic diagrams respecting the color ordering, contributing to $14$ terms, \begin{align} \adjustbox{raise=-0.7cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$1$} -- (0.75,0) node[right=0pt]{$4$}; \draw (120:0.75) node[left=0pt]{$2$} -- (-60:0.75) node[right=0pt]{$5$}; \draw (60:0.75) node[right=0pt]{$3$} -- (-120:0.75) node[left=0pt]{$6$}; \end{tikzpicture}}\;\Longrightarrow &\;\frac{1}{s_{16} s_{23} s_{45}}+\frac{1}{s_{12} s_{34} s_{56}}+\frac{1}{s_{12} s_{45} s_{123}}+\frac{1}{s_{23} s_{45} s_{123}}+\frac{1}{s_{12} s_{56} s_{123}}\nonumber\\ &+\frac{1}{s_{23} s_{56} s_{123}}+\frac{1}{s_{12} s_{34} s_{126}}+\frac{1}{s_{16} s_{34} s_{126}}+\frac{1}{s_{12} s_{45} s_{126}}+\frac{1}{s_{16} s_{45} s_{126}}\label{6p-re5}\\ &+\frac{1}{s_{16} s_{23} s_{156}}+\frac{1}{s_{16} s_{34} s_{156}}+\frac{1}{s_{23} s_{56} s_{156}}+\frac{1}{s_{34} s_{56} s_{156}}~.~~~\nonumber \end{align} Again, the planar separation $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(4)\pmb{|}(5)\pmb{|}(6)$ tells us that the above $14$ terms in~\eref{6p-re5} can be effectively represented by a six point vertex, which becomes a building block for higher point analysis. The P-type cycle representations have two different structures, collected respectively in the first and second row of~\eqref{eq:6pPtype}. Among the three cycle representations in the first row, we study $(61)(25)(34)$ as an example. First, the planar separation $(61)\pmb{|}(25)(34)$ gives a cubic vertex with legs $\{6, 1, P_{61}\}$, and a five point substructure $(P_{61})(25)(34)$. 
In its equivalent class, we have a V-type cycle representation $(P_{61})(2)(3)(4)(5)$, indicating that the substructure is just an effective five point vertex. Thus from the planar separation $(61)\pmb{|}(25)(34)$, we reproduce five terms in Eq.~\eqref{6p-re5} that contain the pole $s_{61}$. Similarly, the planar separation $(61)(25)\pmb{|}(34)$ gives a propagator $s_{34}$ and a substructure $(P_{34})(61)(25)$, which has a V-type cycle representation $(P_{34})(5)(6)(1)(2)$. Thus this substructure is another effective five point vertex, which means that the planar separation $(61)(25)\pmb{|}(34)$ gives another five terms in Eq.~\eqref{6p-re5} that contain the pole $s_{34}$. They have two terms in common with the result of the first planar separation, so the P-type cycle representation $(61)(25)(34)$ gives eight terms in Eq.~\eqref{6p-re5}. If we combine the contributions from all the three P-type ones in the first row of~\eqref{eq:6pPtype} and remove the overlaps, we get the complete answer~\eqref{6p-re5}. Finally, each of the three cycle representations in the second row of~\eqref{eq:6pPtype} gives four terms in Eq.~\eqref{6p-re5}. For example, the planar separation $(1)(26)\pmb{|}(35)(4)$ gives a propagator $s_{612}$, together with two four point substructures $(P)(1)(26)$ and $(P)(35)(4)$. In their equivalent classes, the former has a V-type cycle representation $(P)(6)(1)(2)$ while the latter has $(P)(3)(4)(5)$, both of which are effective quartic vertices, so their contribution is \begin{equation} (1)(26)\pmb{|}(35)(4)\;\Longrightarrow\;\frac{1}{s_{612}}\left(\frac{1}{s_{12}}+\frac{1}{s_{61}}\right)\left(\frac{1}{s_{34}}+\frac{1}{s_{45}}\right)~.~~~ \end{equation} Summing over the contributions of these three P-type cycle representations, we reproduce the twelve terms in Eq.~\eqref{6p-re5} that contain a three-particle pole $s_{ijk}$.
\paragraph{With $5$ Feynman diagrams:} There are six PT-factors in this category, \begin{eqnarray} \vev{123465}~~,~~\vev{123546}~~,~~\vev{124356}~~,~~\vev{126543}~~,~~\vev{132456}~~,~~\vev{154326}~.~~~ \end{eqnarray} The last five PT-factors can be generated from the first one by the cyclic permutation $i\mapsto i+1$. They actually form an orbit under the $\mathcal{D}_6$ action. In the equivalent class of $\vev{123465}$, the good cycle representations are \begin{eqnarray} \text{V-type}: &~~~& (1)(2)(3)(4)(56)~~,~~(14)(23)(5)(6)~,~~~\nonumber \\ \text{P-type}: &~~~ & (1526)(34)~~,~~(3546)(12)~~,~~(13)(2)(456)~~,~~(24)(3)(165)~.~~~\end{eqnarray} We first derive the result by combining the information from all the V-type cycle representations. From previous examples, we can easily tell that the planar separation $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(4)\pmb{|}(56)$ indicates an effective five point vertex while $(14)(23)\pmb{|}(5)\pmb{|}(6)$ indicates a cubic vertex. Thus we have fixed the effective Feynman diagram and obtain the following result, \begin{equation} \adjustbox{raise=-1.1cm}{\begin{tikzpicture} \draw (150:0.75) node[left=0pt]{$3$} -- (0,0) -- (0.75,0) -- ++(45:0.75) node[right=0pt]{$5$} (0.75,0) -- ++(-45:0.75) node[right=0pt]{$6$}; \draw (-150:0.75) node[left=0pt]{$2$} -- (0,0) -- (0,0.75) node[above=0pt]{$4$} (0,0) -- (0,-0.75) node[below=0pt]{$1$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{56}}\left(\frac{1}{s_{12}s_{34}}+\frac{1}{s_{12}s_{123}}+\frac{1}{s_{23}s_{123}}+\frac{1}{s_{23}s_{156}}+\frac{1}{s_{34}s_{156}}\right)~.~~~ \end{equation} As an alternative approach, we reproduce the result by using only one V-type cycle representation, recursively invoking those of the substructures. For example, besides the cubic vertex, the separation $(14)(23)\pmb{|}(5)\pmb{|}(6)$ also indicates a substructure $(P)(14)(23)$, which has the V-type cycle representation $(P)(1)(2)(3)(4)$ in its equivalent class generated by~\eqref{eq:equivperm}.
Thus we again recover the effective five point vertex. Alternatively, let us use the P-type cycle representations to find the result. There are two different structures: $(1526)\pmb{|}(34)$ and $(3546)\pmb{|}(12)$ manifest a two-particle pole, while $(13)(2)\pmb{|}(456)$ and $(24)(3)\pmb{|}(165)$ manifest a three-particle pole. In the first class, the substructure $(P)(1526)$ gives the V-type cycle representation $(P)(56)(1)(2)$ after using~\eqref{eq:equivperm}, so we have \begin{equation} (1526)\pmb{|}(34)\;\Longrightarrow\;\frac{1}{s_{34}s_{56}}\left(\frac{1}{s_{12}}+\frac{1}{s_{561}}\right)~.~~~ \end{equation} Similarly, we can derive that \begin{equation} (3546)\pmb{|}(12)\;\Longrightarrow\;\frac{1}{s_{12}s_{56}}\left(\frac{1}{s_{34}}+\frac{1}{s_{123}}\right)~.~~~ \end{equation} For $(13)(2)\pmb{|}(456)$ and $(24)(3)\pmb{|}(165)$, a similar analysis applies to each substructure, and we find that \begin{eqnarray} (13)(2)\pmb{|}(456)\;\Longrightarrow\;\frac{1}{s_{456}s_{56}}\left(\frac{1}{s_{12}}+\frac{1}{s_{23}}\right)~~~,~~~(24)(3)\pmb{|}(165)\;\Longrightarrow\;\frac{1}{s_{561}s_{56}}\left(\frac{1}{s_{23}}+\frac{1}{s_{34}}\right)~.~~~ \end{eqnarray} The complete result can only be recovered by combining all four P-type contributions. \paragraph{With $4$ Feynman diagrams:} There are three PT-factors $\vev{123654}$, $\vev{125436}$ and $\vev{143256}$ that evaluate to four Feynman diagrams. They are related by the cyclic permutation $i\mapsto i+1$, and form an orbit under the $\mathcal{D}_6$ action. Let us take $\vev{123654}$ as an example.
The good cycle representations are \begin{eqnarray*} \text{V-type}: &~~~ & (1)(2)(3)(46)(5)~~,~~(13)(2)(4)(5)(6)~,\\ \text{P-type}: &~~~& (1432)(56)~~,~~(1236)(45)~~,~~(12)(3456)~~,~~(23)(1654)~.\end{eqnarray*} In the V-type cycle representations, the planar separation $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(46)(5)$ indicates an effective quartic vertex with legs $\{1, 2, 3 ,P_{456}\}$, while the planar separation $(13)(2)\pmb{|}(4)\pmb{|}(5)\pmb{|}(6)$ indicates another effective quartic vertex with legs $\{4, 5, 6 ,P_{123}\}$. By combining them, we get an effective Feynman diagram with two quartic vertices, evaluated to \begin{equation} \adjustbox{raise=-1.1cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$2$} -- (1.5,0) node[right=0pt]{$5$}; \draw (0,-0.75) node[below=0pt]{$1$} -- (0,0.75) node[above=0pt]{$3$}; \draw (0.75,-0.75) node[below=0pt]{$6$} -- (0.75,0.75) node[above=0pt]{$4$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12}s_{123}s_{45}}+\frac{1}{s_{12}s_{123}s_{56}}+\frac{1}{s_{23}s_{123}s_{45}}+\frac{1}{s_{23}s_{123}s_{56}}~.~~~\label{6p-F4-1} \end{equation} We can also arrive at the above result by analyzing just one V-type cycle representation with its substructures. For $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(46)(5)$, the substructure $(P)(46)(5)$ has an equivalent V-type cycle representation $(P)(4)(5)(6)$ in the equivalent class generated by~\eqref{eq:equivperm}, so we recover the result~\eqref{6p-F4-1}. If we start from $(13)(2)\pmb{|}(4)\pmb{|}(5)\pmb{|}(6)$, the analysis is similar. For the P-type cycle representations, each one contributes two terms in~\eref{6p-F4-1}. For example, the planar separation $(12)\pmb{|}(3456)$ gives a pole $s_{12}$ and a substructure $(P_{12})(3456)$, and the substructure has an equivalent V-type cycle representation $(P_{12})\pmb{|}(3)\pmb{|}(5)(46)$ in the equivalent class. We can then read out the pole $s_{456}$ and another substructure $(P_{456})(5)(46)$.
This substructure further gives a V-type cycle representation $(P_{456})(4)(5)(6)$, indicating an effective quartic vertex. Putting them together, we get \begin{equation} (12)\pmb{|}(3456)\;\Longrightarrow\;\frac{1}{s_{12}s_{456}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)~.~~~ \end{equation} A similar analysis can be done for the other three P-type cycle representations, and the results are \begin{align} (1432)\pmb{|}(56) & \Longrightarrow \frac{1}{s_{56}s_{123}}\left(\frac{1}{s_{12}}+\frac{1}{s_{23}}\right)~,\nonumber \\ (1236)\pmb{|}(45) & \Longrightarrow \frac{1}{s_{45}s_{123}}\left(\frac{1}{s_{12}}+\frac{1}{s_{23}}\right)~,\nonumber \\ (1654)\pmb{|}(23) & \Longrightarrow \frac{1}{s_{23}s_{456}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)~. \end{align} Again, we see that a single P-type cycle representation is not sufficient to provide the complete information, and we need to combine all of them. \paragraph{With $2$ Feynman diagrams:} There are $21$ PT-factors that evaluate to two Feynman diagrams. According to the action of $\mathcal{D}_6$, we can divide them into three orbits as \begin{subequations} \label{eq:6p2F} \begin{align} &\mathcal{S}_1:& &\vev{125643}~~,~~\vev{126534}~~,~~\vev{132546}~~,~~\vev{145326}~~,~~\vev{124365}~~,~~\vev{154236}~,~~~\\ &\mathcal{S}_2:& &\vev{126453}~~,~~\vev{132465}~~,~~\vev{153426}~,~~~\\ &\mathcal{S}_3:& &\vev{123645}~~,~~\vev{143265}~~,~~\vev{134526}~~,~~\vev{124563}~~,~~\vev{142356}~~,~~\vev{125346}~,~~~\nonumber\\ & & &\vev{123564}~~,~~\vev{152346}~~,~~\vev{126345}~~,~~\vev{132654}~~,~~\vev{134256}~~,~~\vev{124536}~.~~~ \end{align} \end{subequations} For the category $\mathcal{S}_1$ of~\eqref{eq:6p2F}, we take $\vev{125643}$ as an example.
The good cycle representations are \begin{eqnarray*} \text{V-type}: &~~~& (12)(3)(4)(56)~~,~~(1)(2)(3546)~~,~~(1423)(5)(6)~,\\ \text{P-type}: &~~~& (132)(456)~~,~~(15)(26)(34)~.\end{eqnarray*} The planar separations in the V-type cycle representations indicate the following vertex structures, \begin{subequations} \begin{eqnarray} (12)\pmb{|}(3)\pmb{|}(4)\pmb{|}(56)\; &\Longrightarrow&\; \text{quartic vertex with legs }P_{12}\,,3\,,4\,,\text{ and }P_{56}~,~~~\\ (1)\pmb{|}(2)\pmb{|}(3546)\;&\Longrightarrow&\; \text{cubic vertex with legs }1\,,2\,,\text{ and }P_{12}~,~~~\\ (1423)\pmb{|}(5)\pmb{|}(6)\;&\Longrightarrow&\; \text{cubic vertex with legs }5\,,6\,,\text{ and }P_{56}~.~~~ \end{eqnarray} \end{subequations} Combining them together, we get an effective Feynman diagram with two cubic vertices and one quartic vertex, and the result is \begin{eqnarray} \adjustbox{raise=-0.65cm}{\begin{tikzpicture} \draw (135:0.75) node[left=0pt]{$2$} -- (0,0) -- (-135:0.75) node[left=0pt]{$1$}; \draw (0,0) -- (1.5,0) -- ++(45:0.75) node[right=0pt]{$5$} (1.5,0) -- ++(-45:0.75) node[right=0pt]{$6$}; \draw (0.75,0) -- ++(135:0.75) node[above=0pt]{$3$} (0.75,0) -- ++(45:0.75) node[above=0pt]{$4$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12}s_{56}}\left(\frac{1}{s_{123}}+\frac{1}{s_{34}}\right)~.~~~\label{6p-F2-A-1}\end{eqnarray} We can also reproduce the above result by considering only one V-type cycle representation and its substructures. For example, the planar separation $(1)\pmb{|}(2)\pmb{|}(3546)$ gives the substructure $(P)(3546)$, which has a V-type cycle representation $(P)(3)(4)(56)$ generated by~\eqref{eq:equivperm}. Thus the substructure contains a quartic vertex with legs $P_{12}$, $3$, $4$, $P_{56}$, and a cubic vertex with legs $5$, $6$, $P_{56}$. Alternatively, let us discuss the contributions from the P-type cycle representations. For $(132)\pmb{|}(456)$, we can derive two substructures with V-type cycle representations $(P)(12)(3)$ and $(P)(4)(56)$.
Thus we get the contribution $\frac{1}{s_{12}s_{56}s_{123}}$. Similarly, the P-type cycle representation $(15)(26)\pmb{|}(34)$ has a substructure $(P)(15)(26)$ that has a V-type cycle representation $(P)(12)(56)$. Thus we get the contribution $\frac{1}{s_{12}s_{34}s_{56}}$. Again, we need to combine the information of all the P-type cycle representations to get the full result. Next, we move to the category $\mathcal{S}_2$ of~\eqref{eq:6p2F}, and take $\vev{126453}$ as an example. The good cycle representations are \begin{eqnarray*} \text{V-type}: &~~~& (1)(2)(36)(4)(5)~~,~~(12)(3)(45)(6)~,\\ \text{P-type}: &~~~& (132)(465)~~,~~(126)(345)~.\end{eqnarray*} The V-type cycle representation $(1)(2)(36)(4)(5)$ allows two different planar separations $(1)\pmb{|}(2)\pmb{|}(36)(4)(5)$ and $(1)(2)(36)\pmb{|}(4)\pmb{|}(5)$, so it gives two cubic vertices. Meanwhile, $(12)(3)(45)(6)$ has only one planar separation $(12)\pmb{|}(3)\pmb{|}(45)\pmb{|}(6)$, so it gives an effective quartic vertex. Putting all these vertices together, we get the effective Feynman diagram \begin{eqnarray} \adjustbox{raise=-1.1cm}{\begin{tikzpicture} \draw (135:0.75) node[left=0pt]{$2$} -- (0,0) -- (-135:0.75) node[left=0pt]{$1$}; \draw (0,0) -- (1.5,0) -- ++(45:0.75) node[right=0pt]{$4$} (1.5,0) -- ++(-45:0.75) node[right=0pt]{$5$}; \draw (0.75,-0.75) node[below=0pt]{$6$} -- (0.75,0.75) node[above=0pt]{$3$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12} s_{45}} \left(\frac{1}{s_{123}}+\frac{1}{s_{612}}\right)~.~~~\label{6p-F2-B-1}\end{eqnarray} Now we follow another approach by considering only one V-type cycle representation to reproduce the result~\eref{6p-F2-B-1}. If we focus on the cycle representation $(12)(3)(45)(6)$, the planar separation $(12)\pmb{|}(3)\pmb{|}(45)\pmb{|}(6)$ gives the final result directly. 
If instead we focus on the V-type cycle representation $(1)(2)(36)(4)(5)$, the planar separation $(1)\pmb{|}(2)\pmb{|}(36)(4)(5)$ will manifest the cubic vertex with legs $1$, $2$, $P_{12}$, and the substructure $(P)(36)(4)(5)$ then generates a V-type cycle representation $(P)(3)(45)(6)$, which leads to a quartic and a cubic vertex. Hence we reproduce the result~\eqref{6p-F2-B-1}. Analysis of $(1)(2)(36)(4)(5)$ from the planar separation $(1)(2)(36)\pmb{|}(4)\pmb{|}(5)$ is exactly the same. We note that the P-type cycle representation $(132)\pmb{|}(465)$ gives $\frac{1}{s_{12}s_{45}s_{123}}$ while $(126)\pmb{|}(345)$ gives $\frac{1}{s_{12}s_{45}s_{612}}$. Thus by combining them, we get the complete result. Finally, we study the category $\mathcal{S}_3$ of~\eqref{eq:6p2F}, and take $\vev{123645}$ as an example. The good cycle representations are \begin{eqnarray*} \text{V-type}: &~~~& (1)(2)(3)(465)~~,~~(1236)(4)(5)~~,~~(13)(2)(45)(6)~,\\ \text{P-type}: &~~~& (12)(356)(4)~~,~~(23)(164)(5)~.\end{eqnarray*} Using the V-type cycle representations, we see that $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(465)$ gives an effective quartic vertex, while $(1236)\pmb{|}(4)\pmb{|}(5)$ and $(13)(2)\pmb{|}(45)\pmb{|}(6)$ give two cubic vertices. Putting all of them together, we get the effective Feynman diagram \begin{equation} \adjustbox{raise=-1.1cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$2$} -- (1.5,0) -- ++(45:0.75) node[right=0pt]{$4$} (1.5,0) -- ++(-45:0.75) node[right=0pt]{$5$}; \draw (0,-0.75) node[below=0pt]{$1$} -- (0,0.75) node[above=0pt]{$3$} (0.75,0) -- (0.75,-0.75) node[below=0pt]{$6$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{123} s_{45}} \left( \frac{1}{s_{12}}+\frac{1}{s_{23}}\right)~.~~~\label{6p-F2-C-1} \end{equation} Alternatively, let us take a single V-type cycle representation to reproduce the above result.
From the planar separation $(1)\pmb{|}(2)\pmb{|}(3)\pmb{|}(465)$, the substructure $(P)(465)$ generates a V-type cycle representation $(P)(45)(6)$ according to~\eqref{eq:equivperm}. Thus we get the same effective Feynman diagram as in~\eref{6p-F2-C-1}. From the planar separation $(1236)\pmb{|}(4)\pmb{|}(5)$, the substructure $(P)(1236)$ has a V-type cycle representation $(P6)(1)(2)(3)$, while from $(13)(2)\pmb{|}(45)\pmb{|}(6)$, the substructure $(P)(13)(2)$ gives a V-type cycle representation $(P)(1)(2)(3)$. Each of them recovers the result~\eqref{6p-F2-C-1}. For the P-type cycle representations, we note that $(12)\pmb{|}(356)(4)$ gives the partial result $\frac{1}{s_{12}s_{123}s_{45}}$ following our algorithm, while $(23)\pmb{|}(164)(5)$ gives another piece $\frac{1}{s_{23}s_{123}s_{45}}$. They combine to give the full result~\eqref{6p-F2-C-1}. \paragraph{With $1$ Feynman diagram:} There are 14 PT-factors that evaluate to one Feynman diagram. According to the action of $\mathcal{D}_6$, they can be distributed into three orbits, \begin{subequations} \label{eq:6p1F} \begin{align} &\mathcal{S}_1:& &\vev{132645}~~,~~\vev{134265}~~,~~\vev{135426}~~,~~\vev{124653}~~,~~\vev{153246}~~,~~\vev{126435}~,~~~\\ &\mathcal{S}_2:& &\vev{125463}~~,~~\vev{142365}~~,~~\vev{143526}~~,~~\vev{132564}~~,~~\vev{152436}~~,~~\vev{126354}~,~~~\\ &\mathcal{S}_3:& &\vev{125634}~~,~~\vev{145236}~.~~~ \end{align} \end{subequations} For the category $\mathcal{S}_1$ of~\eqref{eq:6p1F}, we analyze $\vev{132645}$ as an example. Its good cycle representations are \begin{eqnarray} \text{V-type}: &~~~& (1)(23)(465)~~,~~(123)(45)(6)~~,~~(136)(2)(4)(5)~~,~~(2)(3)(164)(5)~.~~~ \end{eqnarray} There is no P-type cycle representation. This can be understood as follows: since the final result contains only one term, there are no partial results.
From the V-type cycle representations, we see that each planar separation of $(1)\pmb{|}(23)\pmb{|}(465)$, $(123)\pmb{|}(45)\pmb{|}(6)$, $(136)(2)\pmb{|}(4)\pmb{|}(5)$ and $(2)\pmb{|}(3)\pmb{|}(164)(5)$ indicates a cubic vertex. Thus we get a trivalent Feynman diagram evaluated to \begin{eqnarray} \adjustbox{raise=-0.8cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$1$} -- (2.25,0) node[right=0pt]{$6$}; \draw (0,0) -- (0,0.75) -- ++(45:0.75) node[above=0pt]{$3$} (0,0.75) -- ++(135:0.75) node[above=0pt]{$2$}; \draw (1.5,0) -- (1.5,0.75) -- ++(45:0.75) node[above=0pt]{$5$} (1.5,0.75) -- ++(135:0.75) node[above=0pt]{$4$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{23} s_{45} s_{123}}~.~~~\label{6p-F1-A-1}\end{eqnarray} Of course, we can get the same result by considering a single V-type cycle representation and its substructures. For example, besides the cubic vertex with legs $\{1, P_{23}, P_{123}\}$, the planar separation $(1)\pmb{|}(23)\pmb{|}(465)$ also indicates a substructure $(P)(465)$, which contains a V-type cycle representation $(P)(45)(6)$, giving exactly the other two cubic vertices in Eq.~\eqref{6p-F1-A-1}. Analysis for the other V-type cycle representations is the same. For the category $\mathcal{S}_2$ of~\eqref{eq:6p1F}, we analyze $\vev{125463}$ as an example. Its good cycle representations are \begin{eqnarray} \text{V-type}: &~~~& (1)(2)(356)(4)~~,~~(132)(45)(6)~~,~~(1)(236)(4)(5)~~,~~(12)(3)(465)~.~~~\end{eqnarray} Each planar separation of $(1)\pmb{|}(2)\pmb{|}(356)(4)$, $(132)\pmb{|}(45)\pmb{|}(6)$, $(1)(236)\pmb{|}(4)\pmb{|}(5)$ and $(12)\pmb{|}(3)\pmb{|}(465)$ indicates a cubic vertex.
After combining them, we get the trivalent Feynman diagram evaluated to \begin{eqnarray} \adjustbox{raise=-0.8cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$1$} -- (2.25,0) node[right=0pt]{$6$}; \draw (0,0) -- (0,0.75) node[above=0pt]{$2$} (0.75,0) -- (0.75,0.75) node[above=0pt]{$3$}; \draw (1.5,0) -- (1.5,0.75) -- ++(45:0.75) node[above=0pt]{$5$} (1.5,0.75) -- ++(135:0.75) node[above=0pt]{$4$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12} s_{45} s_{123}}~.~~~\label{6p-F1-B-1}\end{eqnarray} Finally, for the category $\mathcal{S}_3$ of~\eqref{eq:6p1F}, we analyze $\vev{125634}$ as an example. Its good cycle representations are \begin{eqnarray} \text{V-type}: &~~~& (1)(2)(35)(64)~~,~~(13)(24)(5)(6)~~,~~(15)(26)(3)(4)~~,~~(12)(34)(56)~.~~~\end{eqnarray} Again, all the planar separations $(1)\pmb{|}(2)\pmb{|}(35)(64)$, $(13)(24)\pmb{|}(5)\pmb{|}(6)$, $(15)(26)\pmb{|}(3)\pmb{|}(4)$ and $(12)\pmb{|}(34)\pmb{|}(56)$ indicate cubic vertices, so that the final result is \begin{eqnarray} \adjustbox{raise=-0.8cm}{\begin{tikzpicture} \draw (-0.75,0) node[left=0pt]{$1$} -- (2.25,0) node[right=0pt]{$6$}; \draw (0,0) -- (0,0.75) node[above=0pt]{$2$} (1.5,0) -- (1.5,0.75) node[above=0pt]{$5$}; \draw (0.75,0) -- (0.75,0.75) -- ++(45:0.75) node[above=0pt]{$4$} (0.75,0.75) -- ++(135:0.75) node[above=0pt]{$3$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{12} s_{34} s_{56}}~.~~~\label{6p-F1-C-1}\end{eqnarray} Both Eq.~\eqref{6p-F1-B-1} and~\eqref{6p-F1-C-1} can be reproduced by a single V-type cycle representation, and the analysis is almost the same as that for the category $\mathcal{S}_1$ shown above. \paragraph{With no Feynman diagrams:} Finally, there are $15$ PT-factors that evaluate to zero.
According to the action of $\mathcal{D}_6$, they can be distributed into three orbits, \begin{subequations} \label{eq:6p0F} \begin{align} &\mathcal{S}_1:& & \vev{124635}~~,~~\vev{146235}~~,~~\vev{134625}~~,~~\vev{136245}~~,~~\vev{135624}~~,~~\vev{135246}~,\\ &\mathcal{S}_2:& & \vev{125364}~~,~~\vev{146325}~~,~~\vev{143625}~~,~~\vev{136254}~~,~~\vev{136524}~~,~~\vev{142536}~,\\ &\mathcal{S}_3:& & \vev{135264}~~,~~\vev{136425}~~,~~\vev{142635}~.~~~ \end{align} \end{subequations} The category $\mathcal{S}_3$ of~\eqref{eq:6p0F} does not have any good cycle representations, and indeed it evaluates to zero. For the category $\mathcal{S}_1$ of~\eqref{eq:6p0F}, we take $\vev{124635}$ as an example. The good cycle representations are \begin{align} \label{eq:catA6p} \text{V-type}:\quad(1)(2)(3465)~~~~~~,~~~~~~\text{P-type}:\quad(12)(3564)~.~~~ \end{align} Both of them manifest a pole $P_{12}$, together with a substructure $(P)(3465)$ and $(P)(3564)$ respectively. However, both substructures are members of~\eqref{eq:5p0F}, which give zero contribution. This can be seen clearly if we replace $P$ by $2$ and then relabel $i\rightarrow i-1$ for the rest. Thus we see that the category $\mathcal{S}_1$ evaluates to zero because it contains a substructure with zero contribution. Actually, the category $\mathcal{S}_2$ of~\eqref{eq:6p0F} evaluates to zero for the same reason. For example, $\vev{125364}$ has good cycle representations \begin{align} \label{eq:catB6p} \text{V-type}:\quad(1)(2)(3564)~~~~~~,~~~~~~\text{P-type}:\quad(12)(3465)~,~~~ \end{align} whose substructures are identical to the case of category $\mathcal{S}_1$. This result can also be understood from another point of view. As we have shown with many examples, the V-type cycle representations encode the vertex structure of the corresponding effective Feynman diagram. In the current case, there is only one V-type cycle representation, and it allows only one planar separation, which gives a single cubic vertex.
If we use $v_m$ to denote the number of $m$-point vertices, all valid effective Feynman diagrams should satisfy the constraint \begin{eqnarray} \sum_{m= 3}^{n} (m-2) v_m= n-2~.~~~\label{V-cond}\end{eqnarray} For the cases~\eqref{eq:catA6p} and~\eqref{eq:catB6p}, we only have $v_3=1$ and all the other $v_i=0$, while for category $\mathcal{S}_3$, all $v_i$'s are zero. Thus the identity~\eref{V-cond} is violated, indicating that no such effective Feynman diagram exists. One can check that for all the cases with nonzero results, the identity~\eref{V-cond} is satisfied. \subsection{Higher point examples} Let us now consider an eight point example with PT-factor $\vev{12347856}$. It has 16 cycle representations, and we can classify them as \begin{center} \begin{tabular}{|c|c|} \hline V-type & $(1)(2)(3)(4)(57)(68)~~,~~(153)(264)(7)(8)~~,~~(137)(248)(5)(6)~~,~~(14)(23)(56)(78)$\\ \hline P-type & $(12)(367458)~~,~~(13)(2)(468)(5)(7)~~,~~(175)(24)(3)(6)(8)~~,~~(34)(185276)$\\ \hline Bad & $(16785432)~~,~~(1874)(25)(36)~~,~~(1735)(2846)~~,~~(1456)(27)(38)$ \\ & $(12347658)~~,~~(1638)(2547)~~,~~(1)(2648)(357)~~,~~(1537)(286)(4)$\\ \hline\end{tabular} \end{center} Among them, four are V-type cycle representations, and each one encodes the vertex information.
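To make the bookkeeping concrete, the $2n$ cycle representations of a PT-factor can be generated mechanically. The following sketch (an illustration in Python, outside the paper's formalism) assumes, consistently with the dihedral orbits used throughout this section, that the cycle representations arise from composing the map $i\mapsto\beta_i$ with the $2n$ elements of the dihedral group; it reproduces the $16$ representations of $\vev{12347856}$ listed in the table.

```python
def cycles(perm):
    # Cycle decomposition of a permutation given as a dict {i: perm(i)}.
    seen, out = set(), []
    for i in sorted(perm):
        if i in seen:
            continue
        cyc, j = [], i
        while j not in seen:
            seen.add(j)
            cyc.append(j)
            j = perm[j]
        out.append(tuple(cyc))
    return out

def cycle_representations(beta):
    # All 2n cycle decompositions obtained by composing i -> beta_i
    # with the n rotations and n reflections of the dihedral group D_n.
    n = len(beta)
    sigma = {i + 1: beta[i] for i in range(n)}
    reps = []
    for k in range(n):
        rot = {i + 1: (i + k) % n + 1 for i in range(n)}   # rotation by k
        ref = {i + 1: (k - i) % n + 1 for i in range(n)}   # a reflection
        reps.append(cycles({i: sigma[rot[i]] for i in rot}))
        reps.append(cycles({i: sigma[ref[i]] for i in ref}))
    return reps

reps = cycle_representations([1, 2, 3, 4, 7, 8, 5, 6])     # PT-factor <12347856>
assert len(reps) == 16
# V-type representations appearing in the table above:
assert [(1,), (2,), (3,), (4,), (5, 7), (6, 8)] in reps
assert [(1, 4), (2, 3), (5, 6), (7, 8)] in reps
assert [(1, 5, 3), (2, 6, 4), (7,), (8,)] in reps
```

Classifying each decomposition as V-type, P-type, or bad then proceeds via the planar-separation criteria described in the text.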
For each cycle representation, there is only one planar separation, from which we can directly work out \begin{center} \begin{tikzpicture} \draw[](1.25,0.75)--(2,0)--(1.25,-0.75); \draw[](2,1)--(2,-1); \draw[](2,0)--(3,0); \node at (2,-1.15) []{{\footnotesize $1$}}; \node at (1.1,-0.75) []{{\footnotesize $2$}}; \node at (1.1,0.75) []{{\footnotesize $3$}}; \node at (2,1.15) []{{\footnotesize $4$}}; \node at (3,0.2) []{{\footnotesize $P_{5678}$}}; \node at (2.2,-1.7) []{{\footnotesize $(1){\color{red}\pmb{|}}(2){\color{red}\pmb{|}}(3){\color{red}\pmb{|}}(4){\color{red}\pmb{|}}(57)(68)$}}; \draw[](5,0)--(6,0); \draw[](6.75,0.75)--(6,0)--(6.75,-0.75); \node at (6.9,-0.75) []{{\footnotesize $8$}}; \node at (6.9,0.75) []{{\footnotesize $7$}}; \node at (5,0.2) []{{\footnotesize $-P_{78}$}}; \node at (6,-1.7) []{{\footnotesize $(153)(264){\color{red}\pmb{|}}(7){\color{red}\pmb{|}}(8)$}}; \draw[](9,0)--(10,0); \draw[](10.75,0.75)--(10,0)--(10.75,-0.75); \node at (10.9,-0.75) []{{\footnotesize $6$}}; \node at (10.9,0.75) []{{\footnotesize $5$}}; \node at (9,0.2) []{{\footnotesize $-P_{56}$}}; \node at (10,-1.7) []{{\footnotesize $(137)(248){\color{red}\pmb{|}}(5){\color{red}\pmb{|}}(6)$}}; \draw[](13,0)--(14,0); \draw[](14.75,0.75)--(14,0)--(14.75,-0.75); \node at (15,-0.75) []{{\footnotesize $P_{78}$}}; \node at (15,0.75) []{{\footnotesize $P_{56}$}}; \node at (13,0.2) []{{\footnotesize $-P_{5678}$}}; \node at (14,-1.7) []{{\footnotesize $(14)(23){\color{red}\pmb{|}}(56){\color{red}\pmb{|}}(78)$}}; \node at (15.5,0) {$.$}; \end{tikzpicture} \end{center} Collectively considering all these four V-type cycle representations, and gluing them via propagators, we get the Feynman diagram as \begin{equation} \label{eq:8p5F} \adjustbox{raise=-1.9cm}{\begin{tikzpicture} \draw (72:1) node[above=0pt]{$4$} -- (0,0) -- (144:1) node[above=0pt]{$3$}; \draw (-72:1) node[below=0pt]{$1$} -- (0,0) -- (-144:1) node[below=0pt]{$2$}; \draw (0,0) -- (0.75,0) -- ++(45:0.75) -- ++(0,1)
node[above=0pt]{$5$}; \draw (0.75,0) ++(45:0.75) -- ++(1,0) node[above=0pt]{$6$}; \draw (0.75,0) -- ++(-45:0.75) -- ++(1,0) node[below=0pt]{$7$}; \draw (0.75,0) ++(-45:0.75) -- ++(0,-1) node[below=0pt]{$8$}; \end{tikzpicture}}\;\Longrightarrow\;\frac{1}{s_{56}s_{78}s_{1234}}\left(\frac{1}{s_{12}s_{123}}+\frac{1}{s_{23}s_{123}}+\frac{1}{s_{23}s_{234}}+\frac{1}{s_{34}s_{234}}+\frac{1}{s_{12}s_{34}}\right)\,, \end{equation} which exactly produces the result of CHY-integrand $\vev{12345678}\times\vev{12347856}$. Alternatively, we can reproduce the same result from only one V-type cycle representation by going into its substructures. We use the V-type cycle representation $(14)(23)(56)(78)$ as our example. The planar separation $(14)(23)\pmb{|}(56)\pmb{|}(78)$ gives three substructures, namely $(14)(23)(P_{5678})$, $(56)(P_{56})$ and $(78)(P_{78})$. For each one, we can find a V-type cycle representation that manifests the vertex structure in the equivalent class, \begin{center} \begin{tikzpicture}[scale=1] \draw[](-2.75,0.75)--(-2,0)--(-2.75,-0.75) (-2,1)--(-2,-1) (-2,0)--(-1,0) (1,0)--(2,0) (2.25,0.25)--(2,0)--(2.25,-0.25) (3,2)--(3,1)--(4,1) (3,1)--(2.75,0.75) (3,-2)--(3,-1)--(4,-1) (3,-1)--(2.75,-0.75); \draw[dotted,thick](-1,0)--(1,0) (2.25,0.25)--(2.75,0.75) (2.25,-0.25)--(2.75,-0.75); \node at (-2,-1.15) []{{\footnotesize $1$}}; \node at (-2.9,-0.75) []{{\footnotesize $2$}}; \node at (-2.9,0.75) []{{\footnotesize $3$}}; \node at (-2,1.15) []{{\footnotesize $4$}}; \node at (3,2.15) []{{\footnotesize $5$}}; \node at (4.15,1) []{{\footnotesize $6$}}; \node at (4.15,-1) []{{\footnotesize $7$}}; \node at (3,-2.15) []{{\footnotesize $8$}}; \node at (0,1.5) []{{\footnotesize $(14)(23)~{\color{red}\pmb{|}}~(78)~{\color{red}\pmb{|}}~(56)$}}; \node at (-5,0) []{{\footnotesize $(1){\color{red}\pmb{|}}(2){\color{red}\pmb{|}}(3){\color{red}\pmb{|}}(4){\color{red}\pmb{|}}(P_{5678})$}}; \node at (5.5,-1) []{{\footnotesize 
$(7){\color{red}\pmb{|}}(8){\color{red}\pmb{|}}(-P_{78})$}}; \node at (5.5,1) []{{\footnotesize $(5){\color{red}\pmb{|}}(6){\color{red}\pmb{|}}(-P_{56})$}}; \draw[dashed, brown, thick](-1.5,1.3)--(-0.25,1.3)--(-0.25,1.7)--(-1.5,1.7)--(-1.5,1.3); \draw[dashed, brown, thick](-1.5,1.5)--(-5,1.5); \draw[dashed,brown, thick, ->,>=stealth](-5,1.5)--(-5,0.25); \draw[dashed, brown, thick](0.85,1.3)--(1.55,1.3)--(1.55,1.7)--(0.85,1.7)--(0.85,1.3); \draw[dashed,brown, thick](1.2,1.7)--(1.2,2.35)--(5.3,2.35); \draw[dashed, brown, thick, ->,>=stealth](5.3,2.35)--(5.3,1.3); \draw[dashed, brown, thick](-0.05,1.3)--(0.65,1.3)--(0.65,1.7)--(-0.05,1.7)--(-0.05,1.3); \draw[dashed, brown, thick](0.3,1.7)--(0.3,2.45)--(7,2.45)--(7,-1); \draw[dashed, brown, thick, ->,>=stealth](7,-1)--(6.65,-1); \end{tikzpicture} \end{center} By connecting the substructures together, we obtain the effective Feynman diagram as in~\eqref{eq:8p5F}. There are also four P-type cycle representations. As mentioned previously, they should be considered collectively in order to produce the complete result, while each one only contributes a partial result. From the planar separations of these cycle representations and their substructures, we can work out the contribution of each P-type cycle representation. 
We will not repeat the detailed analysis here, but only give the result as, \begin{center} \begin{tikzpicture}[scale=0.9] \draw[](1,0)--(2,0)--(2.5,0.5)--(2.5,1.5) (2.5,0.5)--(3.5,0.5) (2,0)--(2.5,-0.5)--(2.5,-1.5) (2.5,-0.5)--(3.5,-0.5); \draw[](0.25,0.75)--(1,0)--(0.25,-0.75) (1,1)--(1.5,0)--(2,1); \node at (0.1,-0.75) []{{\footnotesize $1$}}; \node at (0.1,0.75) []{{\footnotesize $2$}}; \node at (1,1.15) []{{\footnotesize $3$}}; \node at (2,1.15) []{{\footnotesize $4$}}; \node at (2.5,1.65) []{{\footnotesize $5$}}; \node at (3.65,0.5) []{{\footnotesize $6$}}; \node at (3.65,-0.5) []{{\footnotesize $7$}}; \node at (2.5,-1.65) []{{\footnotesize $8$}}; \draw[](5,0)--(6,0)--(6.5,0.5)--(6.5,1.5) (6.5,0.5)--(7.5,0.5) (6,0)--(6.5,-0.5)--(6.5,-1.5) (6.5,-0.5)--(7.5,-0.5); \draw[](4.25,0.75)--(5,0)--(4.25,-0.75) (5,-1)--(5.5,0)--(6,-1); \node at (4.1,-0.75) []{{\footnotesize $3$}}; \node at (4.1,0.75) []{{\footnotesize $4$}}; \node at (5,-1.15) []{{\footnotesize $2$}}; \node at (6,-1.15) []{{\footnotesize $1$}}; \node at (6.5,1.65) []{{\footnotesize $5$}}; \node at (7.65,0.5) []{{\footnotesize $6$}}; \node at (7.65,-0.5) []{{\footnotesize $7$}}; \node at (6.5,-1.65) []{{\footnotesize $8$}}; \draw[](9,0)--(10,0)--(10.5,0.5)--(10.5,1.5) (10.5,0.5)--(11.5,0.5) (10,0)--(10.5,-0.5)--(10.5,-1.5) (10.5,-0.5)--(11.5,-0.5); \draw[](8,0)--(9,0)--(9,1) (9,0)--(9,-1) (9.5,0)--(9.5,1); \node at (9,-1.15) []{{\footnotesize $1$}}; \node at (7.85,0) []{{\footnotesize $2$}}; \node at (9,1.15) []{{\footnotesize $3$}}; \node at (9.5,1.15) []{{\footnotesize $4$}}; \node at (10.5,1.65) []{{\footnotesize $5$}}; \node at (11.65,0.5) []{{\footnotesize $6$}}; \node at (11.65,-0.5) []{{\footnotesize $7$}}; \node at (10.5,-1.65) []{{\footnotesize $8$}}; \draw[](13,0)--(14,0)--(14.5,0.5)--(14.5,1.5) (14.5,0.5)--(15.5,0.5) (14,0)--(14.5,-0.5)--(14.5,-1.5) (14.5,-0.5)--(15.5,-0.5); \draw[](12,0)--(13,0)--(13,1) (13,0)--(13,-1) (13.5,0)--(13.5,-1); \node at (13,-1.15) []{{\footnotesize $2$}}; \node at 
(11.85,0) []{{\footnotesize $3$}}; \node at (13,1.15) []{{\footnotesize $4$}}; \node at (13.5,-1.15) []{{\footnotesize $1$}}; \node at (14.5,1.65) []{{\footnotesize $5$}}; \node at (15.65,0.5) []{{\footnotesize $6$}}; \node at (15.65,-0.5) []{{\footnotesize $7$}}; \node at (14.5,-1.65) []{{\footnotesize $8$}}; \node at (1.5,-2)[]{$\Uparrow$}; \node at (1.5,-2.5)[]{{\footnotesize $(12){\color{red}\pmb{|}}(367458)$}}; \node at (5.5,-2)[]{$\Uparrow$}; \node at (5.5,-2.5)[]{{\footnotesize $(34){\color{red}\pmb{|}}(185276)$}}; \node at (9.5,-2)[]{$\Uparrow$}; \node at (9.5,-2.5)[]{{\footnotesize $(13)(2){\color{red}\pmb{|}}(468)(5)(7)$}}; \node at (13.5,-2)[]{$\Uparrow$}; \node at (13.5,-2.5)[]{{\footnotesize $(24)(3){\color{red}\pmb{|}}(175)(6)(8)$}}; \end{tikzpicture} \end{center} We see that each P-type cycle representation gives a quartic vertex contained in the original five point vertex. Thus each P-type cycle representation produces two terms. Only after summing up the above four contributions and removing the duplicates can we recover the complete result. Our last example involves a nine point PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{123857649}$. There are in all $18$ cycle representations and we can classify them as \begin{center} \begin{tabular}{|c|c|} \hline V-type & $(1)(2)(3)(48)(5)(67)(9)~,~(12)(39)(4)(567)(8)~,~(13)(2)(4985)(6)(7)$ \\ \hline P-type & $(19432)(586)(7)~,~(12389)(457)(6)~,~(19)(247368)(5)~,~(1)(29)(346578)~,~(18764)(23)(59)$ \\ \hline Bad & $(142968753)~,~(163978524)~,~(173495)(26)(8)~,~(159836)(27)(4)~,~(182546937)$\\ & $(135647928)~,~(15)(286974)(3)~,~(1796)(25)(384)~,~(1627)(35)(489)~,~(1458)(2637)(9)$ \\ \hline \end{tabular} \end{center} There are three V-type cycle representations. Each planar separation of a V-type cycle representation tells us the vertex information. The V-type cycle representation $(1)(2)(3)(48)(5)(67)(9)$ allows two different planar separations, while each of the other two allows one planar separation.
Their vertex information is presented as follows, \begin{center} \begin{tikzpicture} \draw[](1.25,0.75)--(2,0)--(1.25,-0.75); \draw[](2,1)--(2,-1); \draw[](2,0)--(3,0); \node at (2,-1.15) []{{\footnotesize $9$}}; \node at (1.1,-0.75) []{{\footnotesize $1$}}; \node at (1.1,0.75) []{{\footnotesize $2$}}; \node at (2,1.15) []{{\footnotesize $3$}}; \node at (3,0.2) []{{\footnotesize $-P_{1239}$}}; \node at (2.2,-1.7) []{{\footnotesize $(1){\color{red}\pmb{|}}(2){\color{red}\pmb{|}}(3){\color{red}\pmb{|}}(48)(5)(67){\color{red}\pmb{|}}(9)$}}; \draw[](5,0)--(7,0); \draw[](6,0)--(6,1); \node at (5,0.2) []{{\footnotesize $-P_{567}$}}; \node at (7,0.2) []{{\footnotesize $P_{67}$}}; \node at (6,1.15) []{{\footnotesize $5$}}; \node at (6,-1.7) []{{\footnotesize $(9)(1)(2)(3)(48){\color{red}\pmb{|}}(5){\color{red}\pmb{|}}(67)$}}; \draw[](9,0)--(11,0); \draw[](10,-1)--(10,1); \node at (8.85,0) []{{\footnotesize $4$}}; \node at (11.15,0) []{{\footnotesize $8$}}; \node at (10,1.2) []{{\footnotesize $P_{567}$}}; \node at (10,-1.2) []{{\footnotesize $P_{1239}$}}; \node at (10,-1.7) []{{\footnotesize $(12)(39){\color{red}\pmb{|}}(4){\color{red}\pmb{|}}(567){\color{red}\pmb{|}}(8)$}}; \draw[](13,0)--(14,0); \draw[](14.75,0.75)--(14,0)--(14.75,-0.75); \node at (14.9,-0.75) []{{\footnotesize $7$}}; \node at (14.9,0.75) []{{\footnotesize $6$}}; \node at (13,0.2) []{{\footnotesize $-P_{67}$}}; \node at (14,-1.7) []{{\footnotesize $(13)(2)(4985){\color{red}\pmb{|}}(6){\color{red}\pmb{|}}(7)$}}; \end{tikzpicture} \end{center} The first planar separation indicates a five point vertex, which corresponds to five possible terms, while the second planar separation indicates a quartic vertex which corresponds to two possible terms.
Gluing them together, we get the nine point effective Feynman diagram as \begin{center} \begin{tikzpicture} \draw[](1.25,0.75)--(2,0)--(1.25,-0.75); \draw[](2,1)--(2,-1); \draw[](2,0)--(5,0); \draw[](3,1)--(3,-1); \draw[](4,0)--(4,1); \draw[](5.75,0.75)--(5,0)--(5.75,-0.75); \node at (2,-1.15) []{{\footnotesize $9$}}; \node at (1.1,-0.75) []{{\footnotesize $1$}}; \node at (1.1,0.75) []{{\footnotesize $2$}}; \node at (2,1.15) []{{\footnotesize $3$}}; \node at (3,1.15) []{{\footnotesize $4$}}; \node at (4,1.15) []{{\footnotesize $5$}}; \node at (5.9,0.75) []{{\footnotesize $6$}}; \node at (5.9,-0.75) []{{\footnotesize $7$}}; \node at (3,-1.15) []{{\footnotesize $8$}}; \end{tikzpicture} \end{center} which is a collection of 10 trivalent Feynman diagrams, agreeing with the result of the CHY-integrand $\vev{123456789}\times\vev{123857649}$. Alternatively, we derive the above result from one V-type cycle representation and its substructures. For instance, the planar separation $(1){\pmb{|}}(2){\pmb{|}}(3){\pmb{|}}(48)(5)(67){\pmb{|}}(9)$ indicates a five point vertex, while the substructure $(P_{1239})(48)(5)(67)$ has further structure. By working out the equivalence class of $(P_{1239})(48)(5)(67)$, we find a V-type cycle representation of this substructure that allows the planar separation $(4)\pmb{|}(567)\pmb{|}(8)\pmb{|}(P_{1239})$, indicating a quartic vertex and a four point substructure $(567)(P_{567})$. Then the V-type cycle representation of this four point substructure, with planar separation $(5)\pmb{|}(67)\pmb{|}(P_{567})$, indicates two cubic vertices with legs $5$, $P_{67}$, $P_{567}$ and $6$, $7$, $P_{67}$ respectively, according to our four point discussion in \S\ref{secFeynmanSub1}.
The above recursive process can be graphically represented by \begin{center} \begin{tikzpicture} \node at (1,1.5) []{{\footnotesize $(1){\color{red}\pmb{|}}(2){\color{red}\pmb{|}}(3){\color{red}\pmb{|}}~(48)(5)(67)~{\color{red}\pmb{|}}(9)$}}; \draw[](0.25,0.75)--(1,0)--(0.25,-0.75); \draw[](1,1)--(1,-1); \draw[](1,0)--(2,0)(4,0)--(5,0); \draw[dotted,thick](2,0)--(4,0); \draw[brown,dashed,thick](0.7,1.3)--(2.3,1.3)--(2.3,1.7)--(0.7,1.7)--(0.7,1.3); \draw[brown,dashed,thick](1.5,1.7)--(1.5,1.85)--(3.1,1.85)--(3.1,1.5); \draw[brown,dashed,thick,->,>=stealth](3.1,1.5)--(3.5,1.5); \node at (1,-1.15)[]{{\footnotesize $9$}}; \node at (0.1,-0.75)[]{{\footnotesize $1$}}; \node at (0.1,0.75)[]{{\footnotesize $2$}}; \node at (1,1.15)[]{{\footnotesize $3$}}; \node at (5,1.15)[]{{\footnotesize $4$}}; \node at (9,1.15)[]{{\footnotesize $5$}}; \node at (12.9,0.75)[]{{\footnotesize $6$}}; \node at (12.9,-0.75)[]{{\footnotesize $7$}}; \node at (5,-1.15)[]{{\footnotesize $8$}}; \node at (5.15,1.5) []{{\footnotesize $(4){\color{red}\pmb{|}}~(567)~{\color{red}\pmb{|}}(8){\color{red}\pmb{|}}(P_{1239})$}}; \draw[brown,dashed,thick](4.2,1.3)--(5.05,1.3)--(5.05,1.7)--(4.2,1.7)--(4.2,1.3); \draw[brown,dashed,thick](4.625,1.7)--(4.625,1.85)--(7,1.85)--(7,1.5); \draw[brown,dashed,thick,->,>=stealth](7,1.5)--(7.5,1.5); \draw[](5,1)--(5,-1); \draw[](5,0)--(6,0) (8,0)--(9,0); \draw[dotted,thick](6,0)--(8,0); \node at (8.85,1.5)[]{{\footnotesize $(5){\color{red}\pmb{|}}~(67)~{\color{red}\pmb{|}}(-P_{567})$}}; \draw[brown, dashed, thick](8.15,1.3)--(8.85,1.3)--(8.85,1.7)--(8.15,1.7)--(8.15,1.3); \draw[brown, dashed,thick](8.5,1.7)--(8.5,1.85)--(10.3,1.85)--(10.3,1.5); \draw[brown,dashed,thick,->,>=stealth](10.3,1.5)--(10.7,1.5); \draw[](9,0)--(9,1); \draw[](9,0)--(10,0) (11,0)--(12,0); \draw[dotted,thick](10,0)--(11,0); \node at (12,1.5)[]{{\footnotesize $(6){\color{red}\pmb{|}}~(7)~{\color{red}\pmb{|}}(-P_{67})$}}; \draw[](12.75,0.75)--(12,0)--(12.75,-0.75); \end{tikzpicture} \end{center} Hence 
a single V-type cycle representation suffices to obtain the complete result. By contrast, taking only one P-type cycle representation, we end up with a partial result. For instance, if we take the P-type cycle representation $(19432)(586)(7)$, it has only one planar separation $(19432)\pmb{|}(586)(7)$ that splits the external legs into two parts. For both the substructures $(19432)(P_{5678})$ and $(P_{5678})(586)(7)$, we can work out their contributions by recursively going into their substructures. We will not repeat the details but show the result as follows, \begin{center} \begin{tikzpicture} \draw[](4.25,0.75)--(5,0)--(4.25,-0.75) (5,1)--(5,-1) (5,0)--(6,0)--(6,1) (6,0)--(7,0); \draw[dotted,thick](7,0)--(9,0); \draw[](9,0)--(10,0)--(10,-1) (10,0)--(11,0)--(11,1) (11,0)--(12,0) (12.75,0.75)--(12,0)--(12.75,-0.75); \node at (5,-1.15) []{{\footnotesize $9$}}; \node at (4.1,-0.75) []{{\footnotesize $1$}}; \node at (4.1,0.75) []{{\footnotesize $2$}}; \node at (5,1.15) []{{\footnotesize $3$}}; \node at (6,1.15) []{{\footnotesize $4$}}; \node at (11,1.15) []{{\footnotesize $5$}}; \node at (12.9,0.75) []{{\footnotesize $6$}}; \node at (12.9,-0.75) []{{\footnotesize $7$}}; \node at (10,-1.15) []{{\footnotesize $8$}}; \node at (2,0) []{{\footnotesize $(19432)(P_{5678})\Longrightarrow$}}; \node at (15,0) []{{\footnotesize $\Longleftarrow (-P_{5678})(586)(7)$}}; \end{tikzpicture} \end{center} The corresponding effective Feynman diagram obtained by gluing these two subdiagrams contributes only half of the full result, since legs $4$ and $8$ are connected in just one of the two ways allowed by the quartic vertex $\{P_{9123},4,P_{567},8\}$ in the original Feynman diagram. We need to include all the P-type cycle representations to reproduce the complete result.
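As a final cross-check of this section's examples, the vertex-counting identity~\eqref{V-cond} can be tested numerically. The sketch below (illustrative Python, not part of the formalism; the vertex lists are read off from the effective diagrams discussed above) verifies it for the eight and nine point examples and for the vanishing six point categories:

```python
def vertex_identity_holds(n, vertex_sizes):
    # sum_m (m - 2) v_m == n - 2 for an n-point effective Feynman diagram,
    # where vertex_sizes lists the multiplicity m of every vertex.
    return sum(m - 2 for m in vertex_sizes) == n - 2

# 8-point <12347856>: one five-point vertex and three cubic vertices
assert vertex_identity_holds(8, [5, 3, 3, 3])
# 9-point <123857649>: a five-point vertex, a quartic, and two cubic vertices
assert vertex_identity_holds(9, [5, 4, 3, 3])
# 6-point zero categories: a single cubic vertex cannot saturate n - 2 = 4
assert not vertex_identity_holds(6, [3])
```

A violated identity signals, as in the six point zero categories, that no effective Feynman diagram with the encoded vertices exists.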
\section{From Feynman diagrams to permutations} \label{secPermutation} In \S\ref{secFeynman}, we addressed the problem of determining, given a PT-factor as a permutation acting on the identity element, the Feynman diagrams to which the CHY-integrand evaluates. In this section, we will consider the inverse problem, namely, given an effective Feynman diagram, how to obtain directly the corresponding good cycle representations. We will show that there is a recursive construction to produce the good cycle representations of a given Feynman diagram from the relation between subdiagrams and planar separations. Later, we will use the eight point example given in Fig.~\ref{Feyndiagram} to illustrate the general discussion. \begin{figure} \centering \subfloat[][]{\begin{tikzpicture} \draw[very thick](0,0)--(1,1)--(2,1)--(3,2)--(4,2)--(5,3); \draw[very thick](1,1)--(0,2); \draw[very thick](3,2)--(2,3); \draw[very thick](4,2)--(5,1); \draw[very thick](2,1)--(3,0)--(4,0); \draw[very thick](3,0)--(3,-1); \draw[very thick](4,2)--(5,2); \node (A1) at (0,-0.25)[]{$1$}; \node (A2) at (0,2.25)[]{$2$}; \node (A3) at (2,3.25)[]{$3$}; \node (A4) at (5,3.25)[]{$4$}; \node (A5) at (5.25,2)[]{$5$}; \node (A6) at (5,0.75)[]{$6$}; \node (A7) at (4.25,0)[]{$7$}; \node (A8) at (3,-1.25)[]{$8$}; \end{tikzpicture}\label{8pFeyn}}\qquad\qquad \subfloat[][]{\begin{tikzpicture}[vertex/.style={inner sep=0pt}] \coordinate (g1) at (22.5:2.5); \coordinate (g2) at (67.5:2.5); \coordinate (g3) at (112.5:2.5); \coordinate (g4) at (157.5:2.5); \coordinate (g5) at (-157.5:2.5); \coordinate (g6) at (-112.5:2.5); \coordinate (g7) at (-67.5:2.5); \coordinate (g8) at (-22.5:2.5); \draw [very thick,blue] (g2) -- (g7) (g7) -- (g3) (g3) -- (g5) (g5) -- (g7); \draw [very thick] (g1) -- (g2) -- (g3) -- (g4) -- (g5) -- (g6) -- (g7) -- (g8) -- cycle; \coordinate (V12) at (barycentric cs:g3=1,g4=1,g5=1); \coordinate (V1) at (barycentric cs:g3=-0.35,g4=1,g5=1); \coordinate (V2) at (barycentric
cs:g3=1,g4=1,g5=-0.35); \coordinate (V456) at (barycentric cs:g7=1,g8=1,g1=1,g2=1); \coordinate (V4) at (barycentric cs:V456=-0.5,g1=1,g2=1); \coordinate (V5) at (barycentric cs:V456=-0.5,g1=1,g8=1); \coordinate (V6) at (barycentric cs:V456=-0.5,g7=1,g8=1); \coordinate (V78) at (barycentric cs:g5=1,g6=1,g7=1); \coordinate (V7) at (barycentric cs:g5=-0.35,g6=1,g7=1); \coordinate (V8) at (barycentric cs:g5=1,g6=1,g7=-0.35); \coordinate (V7812) at (barycentric cs:g3=1,g5=1,g7=1); \coordinate (V3456) at (barycentric cs:g2=1,g3=1,g7=1); \coordinate (V3) at (barycentric cs:g2=1,g3=1,g7=-0.2); \draw [very thick,gray,dashed] (V1) -- (V12) -- (V2) (V12) -- (V7812) -- (V78) -- (V7) (V78) -- (V8) (V7812) -- (V3456) -- (V3) (V3456) -- (V456) -- (V4) (V456) -- (V5) (V456) -- (V6); \node at (V1) [above=2pt] {$1$}; \node at (V2) [left=2pt]{$2$}; \node at (V3) [above=-1pt] {$3$}; \node at (V4) [above=-1pt]{$4$}; \node at (V5) [right=-1pt]{$5$}; \node at (V6) [below=-1pt]{$6$}; \node at (V7) [left=5pt]{$7$}; \node at (V8) [below=1pt]{$8$}; \end{tikzpicture}\label{8pngon}} \caption{An eight point Feynman diagram and its dual $n$-gon diagram.}\label{Feyndiagram} \end{figure} We remind the readers that an $m$-point vertex in the Feynman diagram stands for the sum of all the $m$-point trivalent diagrams. For example, a quartic vertex gives the sum of $s$ and $t$ channel trivalent diagrams. The Feynman diagram in Fig.~\ref{8pFeyn} corresponds to the CHY-integrand $\vev{12345678}\times\vev{12783654}$, which evaluates to \begin{eqnarray} \frac{1}{s_{12}s_{78}s_{1278}s_{456}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)~.~~~\end{eqnarray} As we have mentioned in the previous section, the PT-factor ${\mbox{PT}}(\pmb{{\beta}})$ determines a permutation acting on the identity element, and it encodes the pole structure of the Feynman diagram. Conversely, the pole structure in the Feynman diagram also encodes the information of the permutation.
To describe the way of reading out the PT-factor and cycle representations, we find it more convenient to introduce the polygon with $n$ edges ($n$-gon) that is dual to the $n$-point Feynman diagram under inspection. An $n$-point effective Feynman diagram can be described as a \emph{partial triangulation} of the $n$-gon~\cite{Gao:2017dek,delaCruz:2017zqr,Arkani-Hamed:2017mur}, and the eight point Feynman diagram under consideration is presented in Fig.~\ref{Feyndiagram}. Each external leg is dual to an edge of the $n$-gon, and each vertex is dual to a subpolygon in the interior. A triangulation line inside the $n$-gon, which cuts it into two subpolygons, is dual to a propagator. If the Feynman diagram is a trivalent diagram with only cubic vertices, the corresponding $n$-gon is completely triangulated, while if it is an effective Feynman diagram that also contains higher point vertices, the $n$-gon is only partially triangulated. If we use $E_i$ to denote the number of edges of a subpolygon inside the original $n$-gon, then the number of terms in the final result is given by \begin{eqnarray} \prod_{i\in {\text{all~polygons}}}C(E_i)=\prod_{i\in {\text{all~polygons}}}\frac{2^{E_i-2}(2E_i-5)!!}{(E_i-1)!}~,~~~\end{eqnarray} where $C(n)$ equals the number of all possible $n$-point color ordered trivalent Feynman diagrams. The blue lines in Fig.~\ref{8pngon} represent the partial triangulation of our example, and the dashed gray lines give the Feynman diagram dual to this partial triangulation of the $n$-gon. The discussion of the Feynman diagram can equally well be applied to the $n$-gon diagram, and the latter arises naturally in the associahedron~\cite{delaCruz:2017zqr,Arkani-Hamed:2017mur}. \subsection{The zig-zag path and cycle-representation of permutation} \label{secPermutation1} Now let us return to the problem of reading out the PT-factor from a Feynman diagram.
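The subpolygon counting formula above admits a quick numerical check. The following sketch (illustrative Python, outside the paper's formalism) evaluates $C(E)$ for small subpolygons and for the partial triangulation of Fig.~\ref{8pngon}, which consists of four triangles and one quadrilateral:

```python
from math import factorial

def double_factorial(n):
    # n!! with the convention (-1)!! = 1
    return 1 if n <= 0 else n * double_factorial(n - 2)

def C(E):
    # C(E) = 2^(E-2) (2E-5)!! / (E-1)!,
    # the number of E-point color ordered trivalent diagrams.
    return 2 ** (E - 2) * double_factorial(2 * E - 5) // factorial(E - 1)

def num_terms(subpolygon_edges):
    # One factor C(E_i) per subpolygon of the partial triangulation.
    total = 1
    for E in subpolygon_edges:
        total *= C(E)
    return total

assert C(3) == 1            # triangle / cubic vertex: one diagram
assert C(4) == 2            # quadrilateral / quartic vertex: s and t channels
assert C(5) == 5            # pentagon / five-point vertex: five diagrams
assert num_terms([3, 3, 3, 3, 4]) == 2   # four triangles + one quadrilateral
```

The last assertion matches the two-term result quoted for the eight point example.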
A solution to this problem was provided in \cite{Baadsgaard:2015ifa}, where a pictorial method was proposed for writing down the PT-factor of a given Feynman diagram. Based on their discussion, we will rephrase it in the language of the \emph{zig-zag path}.\footnote{More discussions on the zig-zag path can be found in~\cite{Kenyon,Hanany:2005ss,Feng:2005gw}.} The basic idea is as follows. \begin{itemize} \item Any tree-level Feynman diagram can be drawn as a planar diagram, and the external legs lying in the plane define an ordering, identified as the PT-factor ${\mbox{PT}}(\pmb{\alpha})$. \item Starting from any external leg, we can draw a zig-zag path along the boundary of the diagram, which crosses each internal line it meets and closes at the starting point. The ordering of the legs along the direction of the zig-zag path is identified as the PT-factor ${\mbox{PT}}(\pmb{\beta})$. \end{itemize} The corresponding CHY-integrand ${\mbox{PT}}(\pmb{\alpha})\times{\mbox{PT}}(\pmb{\beta})$ then evaluates to the given Feynman diagram.
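Since a PT-factor is a cyclic ordering, readings of the zig-zag path that differ by the starting leg and direction give representatives of the same equivalence class (rotations, possibly combined with a reflection). As a quick check of this, here is a minimal membership test in Python (the helper names are ours, introduced only for illustration):

```python
def rotations(s):
    # All cyclic rotations of the ordering s.
    return {s[i:] + s[:i] for i in range(len(s))}

def same_pt_class(a, b):
    """Two cyclic orderings define the same PT-factor equivalence class
    if they differ by a rotation, possibly combined with a reflection."""
    return b in rotations(a) or b in rotations(a[::-1])

# Reading the zig-zag path of the example from leg 1 in the two possible
# directions gives <12783654> and <14563872>: the same class.
assert same_pt_class("12783654", "14563872")
# A genuinely different ordering is not in the class.
assert not same_pt_class("12783654", "12345678")
```

This check anticipates the two readings of the zig-zag path discussed below.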
\begin{figure} \centering \subfloat[][]{\begin{tikzpicture} \draw[very thick](0,0)--(1,1)--(2,1)--(3,2)--(4,2)--(5,3); \draw[very thick](1,1)--(0,2); \draw[very thick](3,2)--(2,3); \draw[very thick](4,2)--(5,1); \draw[very thick](2,1)--(3,0)--(4,0); \draw[very thick](3,0)--(3,-1); \draw[very thick](4,2)--(5,2); \node (A1) at (0,-0.25)[]{$1$}; \node (A2) at (0,2.25)[]{$2$}; \node (A3) at (2,3.25)[]{$3$}; \node (A4) at (5,3.25)[]{$4$}; \node (A5) at (5.25,2)[]{$5$}; \node (A6) at (5,0.75)[]{$6$}; \node (A7) at (4.25,0)[]{$7$}; \node (A8) at (3,-1.25)[]{$8$}; \tikzstyle{line}=[red,very thick, dashed] \tikzstyle{arrow}=[red,->,>=latex,shorten >=-4pt, very thick,dashed] \draw[arrow](0,-0.5)--(-0.25,-0.25); \draw[line](-0.25,-0.25)--(-0.5,0); \draw[arrow](-0.5,0)--(0,0.5); \draw[line](0,0.5)--(0.5,1); \draw[arrow](0.5,1)--(0,1.5); \draw[line](0,1.5)--(-0.5,2); \draw[arrow] (-0.5,2)--(-0.25,2.25); \draw[line](-0.25,2.25)--(0,2.5); \draw[arrow](0,2.5)--(1,1.5); \draw[line](1,1.5)--(2,0.5); \draw[arrow](2,0.5)--(3.5,0.5); \draw[line](3.5,0.5)--(4.5,0.5); \draw[arrow](4.5,0.5)--(4.5,0); \draw[line](4.5,0)--(4.5,-0.5); \draw[arrow](4.5,-0.5)--(4,-0.5); \draw[line](4,-0.5)--(3.5,-0.5); \draw[arrow](3.5,-0.5)--(3.5,-1); \draw[line](3.5,-1)--(3.5,-1.5); \draw[arrow](3.5,-1.5)--(3,-1.5); \draw[line](3,-1.5)--(2.5,-1.5); \draw[arrow](2.5,-1.5)--(2.5,-0.5); \draw[line](2.5,-0.5)--(2.5,0.5); \draw[arrow](2.5,0.5)--(2.5,1); \draw[line](2.5,1)--(2.5,2); \draw[arrow](2.5,2)--(2,2.5); \draw[line](2,2.5)--(1.5,3); \draw[arrow](1.5,3)--(1.75,3.25); \draw[line](1.75,3.25)--(2,3.5); \draw[arrow](2,3.5)--(2.75,2.75); \draw[arrow](2.75,2.75)--(4.25,1.25); \draw[line](4.25,1.25)--(5,0.5); \draw[arrow](5,0.5)--(5.25,0.75); \draw[line](5.25,0.75)--(5.5,1); \draw[arrow](5.5,1)--(5.125,1.375); \draw[line](5.125,1.375)--(4.75,1.75); \draw[arrow](4.75,1.75)--(5.25,1.75); \draw[line](5.25,1.75)--(5.5,1.75); \draw[arrow](5.5,1.75)--(5.5,2); \draw[line](5.5,2)--(5.5,2.25); 
\draw[arrow](5.5,2.25)--(5.25,2.25); \draw[line](5.25,2.25)--(4.75,2.25); \draw[arrow](4.75,2.25)--(5.125,2.625); \draw[line](5.125,2.625)--(5.5,3); \draw[arrow](5.5,3)--(5.25,3.25); \draw[line](5.25,3.25)--(5,3.5); \draw[arrow](5,3.5)--(4,2.5); \draw[line](4,2.5)--(3,1.5); \draw[arrow](3,1.5)--(2.25,1.5); \draw[line](2.25,1.5)--(2,1.5); \draw[arrow](2,1.5)--(0.75,0.25); \draw[line](0.75,0.25)--(0,-0.5); \end{tikzpicture}\label{pathFeyn}}\qquad\qquad \subfloat[][]{\begin{tikzpicture} \coordinate (g1) at (22.5:2.5); \coordinate (g2) at (67.5:2.5); \coordinate (g3) at (112.5:2.5); \coordinate (g4) at (157.5:2.5); \coordinate (g5) at (-157.5:2.5); \coordinate (g6) at (-112.5:2.5); \coordinate (g7) at (-67.5:2.5); \coordinate (g8) at (-22.5:2.5); \draw [very thick,blue] (g2) -- (g7) (g7) -- (g3) (g3) -- (g5) (g5) -- (g7); \draw [very thick] (g1) -- (g2) -- (g3) -- (g4) -- (g5) -- (g6) -- (g7) -- (g8) -- cycle; \path [name path=p12] ($(g1)!0.15cm!90:(g2)$) -- ++(135:2); \path [name path=p23] ($(g2)!0.15cm!90:(g3)$) -- ++(-2,0); \path [name path=p81] ($(g8)!0.15cm!90:(g1)$) -- ++(0,2); \path [name path=p78] ($(g7)!0.15cm!90:(g8)$) -- ++(45:2); \path [name path=p72a] ($(g2)!0.15cm!90:(g7)$) -- ++(0,-5); \path [name path=p72b] ($(g2)!-0.15cm!90:(g7)$) -- ++(0,-5); \path [name path=p73a] ($(g3)!0.15cm!90:(g7)$) -- ++(-67.5:5); \path [name path=p73b] ($(g3)!-0.15cm!90:(g7)$) -- ++(-67.5:5); \path [name path=p34] ($(g3)!0.15cm!90:(g4)$) -- ++(-135:2); \path [name path=p45] ($(g4)!0.15cm!90:(g5)$) -- ++(0,-2); \path [name path=p35a] ($(g3)!0.15cm!90:(g5)$) -- ++(-112.5:5); \path [name path=p35b] ($(g3)!-0.15cm!90:(g5)$) -- ++(-112.5:5); \path [name path=p57a] ($(g5)!0.15cm!90:(g7)$) -- ++(-22.5:5); \path [name path=p57b] ($(g5)!-0.15cm!90:(g7)$) -- ++(-22.5:5); \path [name path=p56] ($(g5)!0.15cm!90:(g6)$) -- ++(-45:2); \path [name path=p67] ($(g6)!0.15cm!90:(g7)$) -- ++(2,0); \path [name path=c72] ($(g2)!0.5!(g7)$) circle (0.3cm); \path [name path=c73] ($(g3)!0.5!(g7)$) 
circle (0.3cm); \path [name path=c35] ($(g3)!0.5!(g5)$) circle (0.3cm); \path [name path=c57] ($(g7)!0.5!(g5)$) circle (0.3cm); \path [name intersections={of=p12 and p81,name=a}] (a-1); \path [name intersections={of=p78 and p81,name=b}] (b-1); \path [name intersections={of=p78 and p72a,name=c}] (c-1); \path [name intersections={of=p12 and p72a,name=d}] (d-1); \path [name intersections={of=p23 and p72b,name=e}] (e-1); \path [name intersections={of=p23 and p73a,name=f}] (f-1); \path [name intersections={of=p72b and p73a,name=g}] (g-1); \path [name intersections={of=p73b and p35a,name=h}] (h-1); \path [name intersections={of=p34 and p35b,name=i}] (i-1); \path [name intersections={of=p34 and p45,name=j}] (j-1); \path [name intersections={of=p35b and p45,name=k}] (k-1); \path [name intersections={of=p57a and p35a,name=l}] (l-1); \path [name intersections={of=p57a and p73b,name=m}] (m-1); \path [name intersections={of=p57b and p56,name=n}] (n-1); \path [name intersections={of=p67 and p56,name=o}] (o-1); \path [name intersections={of=p57b and p67,name=p}] (p-1); \path [name intersections={of=c72 and p72a,name=q}] (q-1); \path [name intersections={of=c72 and p72a,name=q}] (q-2); \path [name intersections={of=c72 and p72b,name=r}] (r-1); \path [name intersections={of=c72 and p72b,name=r}] (r-2); \path [name intersections={of=c73 and p73a,name=s}] (s-1); \path [name intersections={of=c73 and p73a,name=s}] (s-2); \path [name intersections={of=c73 and p73b,name=t}] (t-1); \path [name intersections={of=c73 and p73b,name=t}] (t-2); \path [name intersections={of=c35 and p35a,name=u}] (u-1); \path [name intersections={of=c35 and p35a,name=u}] (u-2); \path [name intersections={of=c35 and p35b,name=v}] (v-1); \path [name intersections={of=c35 and p35b,name=v}] (v-2); \path [name intersections={of=c57 and p57a,name=w}] (w-1); \path [name intersections={of=c57 and p57a,name=w}] (w-2); \path [name intersections={of=c57 and p57b,name=x}] (x-1); \path [name intersections={of=c57 and 
p57b,name=x}] (x-2); \draw [red,dashed,thick,rounded corners=3pt] (a-1) -- (b-1) -- (c-1) -- (q-2) -- (r-1) -- (e-1) -- (f-1) -- (s-1) -- (t-2) -- (m-1) -- (w-1) -- (x-1) -- (n-1) -- (o-1) -- (p-1) -- (x-2) -- (w-2) -- (l-1) -- (u-2) -- (v-1) -- (i-1) -- (j-1) -- (k-1) -- (v-2) -- (u-1) -- (h-1) -- (t-1) -- (s-2) -- (g-1) -- (r-2) -- (q-1) -- (d-1) -- cycle; \node at ($(g1)!0.5!(g2)$) [above right=-1.5pt]{$4$}; \node at ($(g2)!0.5!(g3)$) [above=0pt]{$3$}; \node at ($(g3)!0.5!(g4)$) [above left=-1.5pt]{$2$}; \node at ($(g4)!0.5!(g5)$) [left=0pt]{$1$}; \node at ($(g5)!0.5!(g6)$) [below left=-1.5pt]{$8$}; \node at ($(g6)!0.5!(g7)$) [below=0pt]{$7$}; \node at ($(g7)!0.5!(g8)$) [below right=-1.5pt]{$6$}; \node at ($(g8)!0.5!(g1)$) [right=0pt]{$5$}; \node at ($(a-1)!0.5!(b-1)$) [rotate=0,red] {\tikz\draw[-Latex](0,0);}; \node at ($(b-1)!0.5!(c-1)$) [rotate=-45,red] {\tikz\draw[-Latex](0,0);}; \node at ($(c-1)!0.5!(q-2)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(q-1)!0.5!(d-1)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(d-1)!0.5!(a-1)$) [rotate=45,red] {\tikz\draw[-Latex](0,0);}; \node at ($(f-1)!0.5!(e-1)$) [rotate=-90,red] {\tikz\draw[-Latex](0,0);}; \node at ($(e-1)!0.5!(r-2)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(r-2)!0.5!(g-1)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(g-1)!0.5!(s-2)$) [rotate=22.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(s-1)!0.5!(f-1)$) [rotate=22.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(t-1)!0.5!(h-1)$) [rotate=22.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(h-1)!0.5!(u-1)$) [rotate=157.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(u-2)!0.5!(l-1)$) [rotate=157.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(l-1)!0.5!(w-2)$) [rotate=-112.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(w-1)!0.5!(m-1)$) [rotate=-112.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(m-1)!0.5!(t-2)$) [rotate=22.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(k-1)!0.5!(j-1)$) [rotate=0,red] 
{\tikz\draw[-Latex](0,0);}; \node at ($(j-1)!0.5!(i-1)$) [rotate=-45,red] {\tikz\draw[-Latex](0,0);}; \node at ($(p-1)!0.5!(o-1)$) [rotate=90,red] {\tikz\draw[-Latex](0,0);}; \node at ($(o-1)!0.5!(n-1)$) [rotate=45,red] {\tikz\draw[-Latex](0,0);}; \end{tikzpicture}\label{pathngon}} \caption{The zig-zag path in an eight point Feynman diagram and $n$-gon diagram.}\label{FigZigzag} \end{figure} The zig-zag path for our example is shown in Fig.~\ref{FigZigzag}, both in the Feynman diagram and in the $n$-gon diagram. In the Feynman diagram, the zig-zag path runs along the external legs, while in the $n$-gon diagram it runs along the interior of the edges. In both diagrams, the path crosses a line whenever that line is a propagator. It is easy to tell that, for the Feynman diagram shown in Fig.~\ref{FigZigzag}, we have ${\mbox{PT}}(\pmb{\alpha})=\vev{12345678}$, while along the arrows of the zig-zag path we can read out ${\mbox{PT}}(\pmb{\beta})=\vev{12783654}$, as it should be. However, there are two subtleties we should pay attention to. First, the PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{12783654}$ is obtained under the condition that we read out the zig-zag path starting from leg 1 in a specific direction. If we start from the same leg $1$ but in the opposite direction, we get ${\mbox{PT}}(\pmb{\beta})=\vev{14563872}$. If we start from another leg and a chosen direction, we get yet another PT-factor, though all of them belong to the same equivalence class. Second, special attention should be paid to the orientation of the zig-zag path. It is a local rather than global property, and it only makes sense with respect to a vertex. For example, around the cubic vertex to which legs $\{1, 2\}$ are attached, the zig-zag path is clockwise, while around the quartic vertex to which legs $\{4, 5, 6\}$ are attached, it is anti-clockwise. The orientation of the zig-zag path is more evident in the $n$-gon diagram, as shown in Fig.~\ref{pathngon}.
For each polygon inside the $n$-gon, the zig-zag path can be regarded as a closed loop with a definite orientation. If the zig-zag path in the triangle with edges 1, 2 is clockwise, then the zig-zag paths in the triangle with edges 7, 8 and in the one with edge 3 are also clockwise, while the zig-zag paths in the middle triangle and in the quadrangle are anti-clockwise. For the whole Feynman diagram or $n$-gon diagram, we do not need to worry about the ambiguity of the zig-zag path orientation, since our canonical definition of PT-factors in~\S\ref{secSetup} fixes which leg to start from and which direction to head in. However, as we will discuss shortly, the orientation of the zig-zag path is important in the recursive construction of PT-factors. \begin{figure} \centering \subfloat[][]{\begin{tikzpicture} \draw[very thick](0,0)--(1,1)--(2,1)--(2.5,1.5); \draw[very thick](1,1)--(0,2); \draw[very thick](2,1)--(3,0)--(4,0); \draw[very thick](3,0)--(3,-1); \draw[very thick](3.5,1.5)--(4,2)--(5,2)--(6,3); \draw[very thick](4,2)--(3,3); \draw[very thick](5,2)--(6,1); \draw[very thick](5,2)--(6,2); \node (A1) at (0,-0.25)[]{$1$}; \node (A2) at (0,2.25)[]{$2$}; \node (A3) at (3,3.25)[]{$3$}; \node (A4) at (6,3.25)[]{$4$}; \node (A5) at (6.25,2)[]{$5$}; \node (A6) at (6,0.75)[]{$6$}; \node (A7) at (4.25,0)[]{$7$}; \node (A8) at (3,-1.25)[]{$8$}; \tikzstyle{line}=[red,very thick, dashed] \tikzstyle{arrow}=[red,->,>=latex,shorten >=-4pt, very thick,dashed] \draw[arrow](0,-0.5)--(-0.25,-0.25); \draw[line](-0.25,-0.25)--(-0.5,0); \draw[arrow](-0.5,0)--(0,0.5); \draw[line](0,0.5)--(0.5,1); \draw[arrow](0.5,1)--(0,1.5); \draw[line](0,1.5)--(-0.5,2); \draw[arrow] (-0.5,2)--(-0.25,2.25); \draw[line](-0.25,2.25)--(0,2.5); \draw[arrow](0,2.5)--(1,1.5); \draw[line](1,1.5)--(2,0.5); \draw[arrow](2,0.5)--(3.5,0.5); \draw[line](3.5,0.5)--(4.5,0.5); \draw[arrow](4.5,0.5)--(4.5,0); \draw[line](4.5,0)--(4.5,-0.5); \draw[arrow](4.5,-0.5)--(4,-0.5); \draw[line](4,-0.5)--(3.5,-0.5);
\draw[arrow](3.5,-0.5)--(3.5,-1); \draw[line](3.5,-1)--(3.5,-1.5); \draw[arrow](3.5,-1.5)--(3,-1.5); \draw[line](3,-1.5)--(2.5,-1.5); \draw[arrow](2.5,-1.5)--(2.5,-0.5); \draw[line](2.5,-0.5)--(2.5,1); \draw[arrow](2.5,1)--(2.6875,1.1875); \draw[line](2.6875,1.1875)--(2.875,1.375); \draw[arrow](2.875,1.375)--(2.625,1.625); \draw[line](2.625,1.625)--(2.375,1.875); \draw[arrow](2.375,1.875)--(0.75,0.25); \draw[line](0.75,0.25)--(0,-0.5); \node (P1) at (2.5,1.25)[]{{\footnotesize $P$}}; \node (P2) at (3.4,1.75)[]{{\footnotesize $-P$}}; \node (V1) at (2,0.75)[]{{\footnotesize $V_1$}}; \node (V2) at (4.05,1.75)[]{{\footnotesize $V_2$}}; \draw[arrow](3.5,2)--(3,2.5); \draw[line](3,2.5)--(2.5,3); \draw[arrow](2.5,3)--(2.75,3.25); \draw[line](2.75,3.25)--(3,3.5); \draw[arrow](3,3.5)--(3.75,2.75); \draw[arrow](3.75,2.75)--(5.25,1.25); \draw[line](5.25,1.25)--(6,0.5); \draw[arrow](6,0.5)--(6.25,0.75); \draw[line](6.25,0.75)--(6.5,1); \draw[arrow](6.5,1)--(6.125,1.375); \draw[line](6.125,1.375)--(5.75,1.75); \draw[arrow](5.75,1.75)--(6.25,1.75); \draw[line](6.25,1.75)--(6.5,1.75); \draw[arrow](6.5,1.75)--(6.5,2); \draw[line](6.5,2)--(6.5,2.25); \draw[arrow](6.5,2.25)--(6.25,2.25); \draw[line](6.25,2.25)--(5.75,2.25); \draw[arrow](5.75,2.25)--(6.125,2.625); \draw[line](6.125,2.625)--(6.5,3); \draw[arrow](6.5,3)--(6.25,3.25); \draw[line](6.25,3.25)--(6,3.5); \draw[arrow](6,3.5)--(5,2.5); \draw[line](5,2.5)--(3.625,1.125); \draw[arrow](3.625,1.125)--(3.375,1.375); \draw[line](3.375,1.375)--(3.125,1.625); \draw[arrow](3.125,1.625)--(3.375,1.875); \draw[line](3.375,1.875)--(3.5,2); \node at (1.5,-0.75) {$L$}; \node at (4.55,3) {$R$}; \draw[dashed,very thick](2.5,2)--(3.5,1); \end{tikzpicture}\label{pathFeynFact}}\qquad\qquad \subfloat[][]{\begin{tikzpicture} \begin{scope}[xshift=0.75cm] \coordinate (g1) at (22.5:2.5); \coordinate (g2) at (67.5:2.5); \coordinate (g3) at (112.5:2.5); \coordinate (g8) at (-22.5:2.5); \coordinate (g7) at (-67.5:2.5); \draw [very thick,blue] (g2) -- 
(g7) (g7) -- (g3); \draw [very thick] (g7) -- (g8) -- (g1) -- (g2)-- (g3); \path [name path=p12] ($(g1)!0.15cm!90:(g2)$) -- ++(135:2); \path [name path=p23] ($(g2)!0.15cm!90:(g3)$) -- ++(-2,0); \path [name path=p81] ($(g8)!0.15cm!90:(g1)$) -- ++(0,2); \path [name path=p78] ($(g7)!0.15cm!90:(g8)$) -- ++(45:2); \path [name path=p72a] ($(g2)!0.15cm!90:(g7)$) -- ++(0,-5); \path [name path=p72b] ($(g2)!-0.15cm!90:(g7)$) -- ++(0,-5); \path [name path=p73a] ($(g3)!0.15cm!90:(g7)$) -- ++(-67.5:5); \path [name path=c72] ($(g2)!0.5!(g7)$) circle (0.3cm); \path [name intersections={of=p12 and p81,name=a}] (a-1); \path [name intersections={of=p78 and p81,name=b}] (b-1); \path [name intersections={of=p78 and p72a,name=c}] (c-1); \path [name intersections={of=p12 and p72a,name=d}] (d-1); \path [name intersections={of=p23 and p72b,name=e}] (e-1); \path [name intersections={of=p23 and p73a,name=f}] (f-1); \path [name intersections={of=p72b and p73a,name=g}] (g-1); \path [name intersections={of=c72 and p72a,name=q}] (q-1); \path [name intersections={of=c72 and p72a,name=q}] (q-2); \path [name intersections={of=c72 and p72b,name=r}] (r-1); \path [name intersections={of=c72 and p72b,name=r}] (r-2); \draw [red,dashed,thick,rounded corners=3pt] (a-1) -- (b-1) -- (c-1) -- (q-2) -- (r-1) -- (e-1) -- (f-1) -- (g-1) -- (r-2) -- (q-1) -- (d-1) -- cycle; \node at ($(g1)!0.5!(g2)$) [above right=-1.5pt]{$4$}; \node at ($(g2)!0.5!(g3)$) [above=0pt]{$3$}; \node at ($(g7)!0.5!(g8)$) [below right=-1.5pt]{$6$}; \node at ($(g8)!0.5!(g1)$) [right=0pt]{$5$}; \node at (barycentric cs:g2=1,g3=1,g7=1) {$V_2$}; \node at ($(a-1)!0.5!(b-1)$) [rotate=0,red] {\tikz\draw[-Latex](0,0);}; \node at ($(b-1)!0.5!(c-1)$) [rotate=-45,red] {\tikz\draw[-Latex](0,0);}; \node at ($(c-1)!0.5!(q-2)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(q-1)!0.5!(d-1)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(d-1)!0.5!(a-1)$) [rotate=45,red] {\tikz\draw[-Latex](0,0);}; \node at ($(f-1)!0.5!(e-1)$) 
[rotate=-90,red] {\tikz\draw[-Latex](0,0);}; \node at ($(e-1)!0.5!(r-2)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(r-2)!0.5!(g-1)$) [rotate=180,red] {\tikz\draw[-Latex](0,0);}; \node at ($(g-1)!0.5!(f-1)$) [rotate=22.5,red] {\tikz\draw[-Latex](0,0);}; \end{scope} \begin{scope}[xshift=-0.75cm] \coordinate (g3) at (112.5:2.5); \coordinate (g4) at (157.5:2.5); \coordinate (g5) at (-157.5:2.5); \coordinate (g6) at (-112.5:2.5); \coordinate (g7) at (-67.5:2.5); \draw [very thick,blue] (g7) -- (g3) (g3) -- (g5) (g5) -- (g7); \draw [very thick](g3) -- (g4) -- (g5) -- (g6) -- (g7); \path [name path=p73b] ($(g3)!-0.15cm!90:(g7)$) -- ++(-67.5:5); \path [name path=p34] ($(g3)!0.15cm!90:(g4)$) -- ++(-135:2); \path [name path=p45] ($(g4)!0.15cm!90:(g5)$) -- ++(0,-2); \path [name path=p35a] ($(g3)!0.15cm!90:(g5)$) -- ++(-112.5:5); \path [name path=p35b] ($(g3)!-0.15cm!90:(g5)$) -- ++(-112.5:5); \path [name path=p57a] ($(g5)!0.15cm!90:(g7)$) -- ++(-22.5:5); \path [name path=p57b] ($(g5)!-0.15cm!90:(g7)$) -- ++(-22.5:5); \path [name path=p56] ($(g5)!0.15cm!90:(g6)$) -- ++(-45:2); \path [name path=p67] ($(g6)!0.15cm!90:(g7)$) -- ++(2,0); \path [name path=c35] ($(g3)!0.5!(g5)$) circle (0.3cm); \path [name path=c57] ($(g7)!0.5!(g5)$) circle (0.3cm); \path [name intersections={of=p73b and p35a,name=h}] (h-1); \path [name intersections={of=p34 and p35b,name=i}] (i-1); \path [name intersections={of=p34 and p45,name=j}] (j-1); \path [name intersections={of=p35b and p45,name=k}] (k-1); \path [name intersections={of=p57a and p35a,name=l}] (l-1); \path [name intersections={of=p57a and p73b,name=m}] (m-1); \path [name intersections={of=p57b and p56,name=n}] (n-1); \path [name intersections={of=p67 and p56,name=o}] (o-1); \path [name intersections={of=p57b and p67,name=p}] (p-1); \path [name intersections={of=c35 and p35a,name=u}] (u-1); \path [name intersections={of=c35 and p35a,name=u}] (u-2); \path [name intersections={of=c35 and p35b,name=v}] (v-1); \path [name 
intersections={of=c35 and p35b,name=v}] (v-2); \path [name intersections={of=c57 and p57a,name=w}] (w-1); \path [name intersections={of=c57 and p57a,name=w}] (w-2); \path [name intersections={of=c57 and p57b,name=x}] (x-1); \path [name intersections={of=c57 and p57b,name=x}] (x-2); \draw [red,dashed,thick,rounded corners=3pt] (h-1) -- (u-1) -- (v-2) -- (k-1) -- (j-1) -- (i-1) -- (v-1) -- (u-2) -- (l-1) -- (w-2) -- (x-2) -- (p-1) -- (o-1) -- (n-1) -- (x-1) -- (w-1) -- (m-1) -- cycle; \node at ($(g3)!0.5!(g4)$) [above left=-1.5pt]{$2$}; \node at ($(g4)!0.5!(g5)$) [left=0pt]{$1$}; \node at ($(g5)!0.5!(g6)$) [below left=-1.5pt]{$8$}; \node at ($(g6)!0.5!(g7)$) [below=0pt]{$7$}; \node at (barycentric cs:g3=1,g5=1,g7=1) {$V_1$}; \node at ($(m-1)!0.5!(h-1)$) [rotate=22.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(h-1)!0.5!(u-1)$) [rotate=157.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(u-2)!0.5!(l-1)$) [rotate=157.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(l-1)!0.5!(w-2)$) [rotate=-112.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(w-1)!0.5!(m-1)$) [rotate=-112.5,red] {\tikz\draw[-Latex](0,0);}; \node at ($(k-1)!0.5!(j-1)$) [rotate=0,red] {\tikz\draw[-Latex](0,0);}; \node at ($(j-1)!0.5!(i-1)$) [rotate=-45,red] {\tikz\draw[-Latex](0,0);}; \node at ($(p-1)!0.5!(o-1)$) [rotate=90,red] {\tikz\draw[-Latex](0,0);}; \node at ($(o-1)!0.5!(n-1)$) [rotate=45,red] {\tikz\draw[-Latex](0,0);}; \end{scope} \draw [very thick,dashed] (-67.5:2.5) -- (112.5:2.5); \end{tikzpicture}} \caption{The zig-zag paths of the subdiagrams in the Feynman diagram and the $n$-gon diagram.}\label{Figzigzagbreak} \end{figure} Now we consider splitting the Feynman diagram into two subdiagrams (labeled $L$ and $R$) at the propagator $P_{7812}$ that connects the vertices $V_1$ and $V_2$, as shown in Fig.~\ref{Figzigzagbreak}. In each subdiagram, the zig-zag path forms a closed loop, from which we can read out the PT-factors.
In order to define the canonical ordering of PT-factors, we read out the $\pmb\alpha$-orderings from both subdiagrams in the clockwise direction as \begin{eqnarray} {\mbox{PT}}(\pmb{\alpha}_L)=\vev{7812P}~~~,~~~ {\mbox{PT}}(\pmb{\alpha}_R)=\vev{P3456}~.~~~\end{eqnarray} Next, we read out ${\mbox{PT}}(\pmb{\beta}_{L,R})$ from the zig-zag paths of the two subdiagrams, starting from the leg $P$. We emphasize that the zig-zag path of subdiagram $L$ is anti-clockwise with respect to the vertex $V_1$, while that of subdiagram $R$ is clockwise with respect to $V_2$. The orientations of the two zig-zag paths are always opposite with respect to the two vertices connected by the split propagator. This is a generic and important feature since, as mentioned before, the orientations of the zig-zag paths of two adjacent polygons in the $n$-gon diagram are always opposite. This feature will play a consequential role in determining the cycle representations of the subdiagrams. Now we write down the PT-factors of the subdiagrams according to the arrows in the zig-zag paths of Fig.~\ref{pathFeynFact} as \begin{eqnarray} {\mbox{PT}}(\pmb{\beta}_L)=\vev{P1278}~~~,~~~{\mbox{PT}}(\pmb{\beta}_R)=\vev{P3654}~.~~~\label{zigzagCycle}\end{eqnarray} The complete ${\mbox{PT}}(\pmb{\beta})$ is given by joining the two orderings in~\eqref{zigzagCycle} in a specific way: $\vev{1278P}\oplus \vev{P3654}\to \vev{12783654}$.
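At the level of orderings, the gluing $\vev{1278P}\oplus\vev{P3654}\to \vev{12783654}$ simply replaces the internal leg $P$ in one ordering by the sequence that follows $P$ in the other. A minimal sketch in Python (the function name \texttt{glue\_orderings} is ours, introduced only for illustration):

```python
def glue_orderings(left, right, P="P"):
    """Splice two cyclic orderings that share the internal leg P:
    rotate `right` so it starts with P, then substitute its tail
    for the occurrence of P in `left`."""
    i = right.index(P)
    rotated = right[i:] + right[:i]      # now rotated[0] == P
    j = left.index(P)
    return left[:j] + rotated[1:] + left[j + 1:]

# <1278P> + <P3654> -> <12783654>
glued = glue_orderings(list("1278P"), list("P3654"))
print("".join(glued))  # 12783654
```

The cyclic nature of the orderings is what allows `right` to be rotated freely before the splice.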
The pattern becomes clear if we use their cycle representations, \begin{align} {\mbox{PT}}(\pmb{\beta}_L)=\vev{P1278}\sim (P)(17)(28)~~~,~~~{\mbox{PT}}(\pmb{\beta}_R)=\vev{P3654}\sim (P)(3)(46)(5)~.~~~\label{eq:betaLRcycle} \end{align} Namely, we can obtain a V-type cycle representation of the complete Feynman diagram as the union of the above two after eliminating the single cycle $(P)$: $(17)(28)(P)\oplus(P)(3)(46)(5)=(17)(28)(3)(46)(5)$.\footnote{Now the operation $\oplus$ can be properly defined as follows: if $\pmb{\beta}_1$ and $\pmb{\beta}_2$ are two permutations that share only one element $P$, and in both of them $P$ sits in a single cycle, namely, $\pmb{\beta}_{1,2}=(P)\pmb{\beta}'_{1,2}$, we have \begin{equation*} \pmb{\beta}_1\oplus\pmb{\beta}_2=\pmb{\beta}'_1\pmb{\beta}'_2\,. \end{equation*}} To obtain the above result, we had to select two specific cycle representations for the subdiagrams out of the $10$ equivalent ones of $\pmb{\beta}_{L,R}$, \begin{subequations} \label{eq:betaLR} \begin{align} &\pmb{\beta}_{L}: & &\left\{ \setlength{\arraycolsep}{10pt}\begin{array}{lllll} (P)(17)(28) & (P8127) & (P78)(1)(2) & (P21)(7)(8) & (P1872)\\ (P)(78)(12) & (P17)(8)(2) & (P2718) & (P7281) & (P82)(7)(1) \end{array} \right\} \\ &\pmb{\beta}_{R}: & &\left\{ \setlength{\arraycolsep}{10pt}\begin{array}{lllll} (P)(3)(46)(5) & (P43)(56) & (P534)(6) & (P635)(4) & (P36)(45)\\ (P)(3456) & (P3)(4)(5)(6) & (P654)(3) & (P5)(364) & (P46)(35) \end{array} \right\}~.~~~ \end{align} \end{subequations} First of all, we choose those representations that leave $P$ in a single cycle $(P)$. This is reasonable since, when gluing the subdiagrams, $P$ should be eliminated without affecting the other cycles, which is possible only if $P$ sits in a single cycle. This limits us to the two entries in the first column of~\eqref{eq:betaLR}. Next, we need to decide which of the two to choose.
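The footnote's $\oplus$ operation, together with the action of the glued permutation on the identity ordering, can be checked mechanically: applying $(17)(28)(3)(46)(5)$ to $\vev{12345678}$ reproduces $\vev{12783654}$ up to rotation. A short sketch in Python (the helper names are ours, introduced only for illustration):

```python
def glue_cycles(beta1, beta2, P="P"):
    """The ⊕ of two cycle representations: drop the single cycle (P)
    from each factor and take the union of the remaining cycles."""
    assert (P,) in beta1 and (P,) in beta2  # P must sit in a single cycle
    return [c for c in beta1 if c != (P,)] + [c for c in beta2 if c != (P,)]

def apply_cycles(cycles, seq):
    """Act with a permutation in cycle notation on each element of seq."""
    sigma = {}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            sigma[a] = b
    return [sigma.get(x, x) for x in seq]

def rotate_to(seq, first):
    i = seq.index(first)
    return seq[i:] + seq[:i]

beta_L = [("P",), ("1", "7"), ("2", "8")]          # PT(beta_L) = <P1278>
beta_R = [("P",), ("3",), ("4", "6"), ("5",)]      # PT(beta_R) = <P3654>
beta = glue_cycles(beta_L, beta_R)                 # (17)(28)(3)(46)(5)

image = apply_cycles(beta, list("12345678"))
print("".join(rotate_to(image, "1")))  # 12783654
```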
Looking back at~\eqref{eq:betaLRcycle}, we find that $\pmb{\beta}_R$ is a V-type cycle representation that manifests the vertex $V_2$, while $\pmb{\beta}_L$ is not a good cycle representation. If we color the legs according to how they are separated by $V_1$ and $V_2$, then for ${\mbox{PT}}(\pmb{\beta}_R)$ we have $({P})({\color{red}3})({\color{cyan}46})({\color{cyan}5})$, namely, each cycle contains elements of the same color, i.e., elements from the same part of the $V_2$ splitting. We say for short that this cycle representation satisfies \emph{planar splitting}. On the contrary, for ${\mbox{PT}}(\pmb{\beta}_L)$, some cycles contain elements of different colors, i.e., the elements inside one cycle come from different parts of the $V_1$ splitting. We say for short that such a cycle representation satisfies \emph{non-planar splitting}. One notices that among the two subdiagrams of Fig.~\ref{pathFeynFact}, one cycle representation is planar splitting while the other is non-planar splitting. For instance, if the arrows in Fig.~\ref{pathFeynFact} are reversed globally, then for ${\mbox{PT}}(\pmb{{\beta}}_L)$ the resulting cycle representation is $({P})({\color{red}78})({\color{cyan}12})$ while for ${\mbox{PT}}(\pmb{{\beta}}_R)$ it is $({P})({\color{red}3}{\color{cyan}456})$. Again, one cycle representation is planar splitting and the other is non-planar splitting. By gluing them together, we get $(78)(12)(3456)$, which is another V-type cycle representation of the complete Feynman diagram. This feature results from the fact that the orientations of the subdiagram zig-zag paths are opposite with respect to the two vertices connected by the split propagator. It holds generally, no matter how we cut the complete Feynman diagram. The above discussion indicates clearly that the permutation representation of the PT-factor can be recursively constructed by breaking a complete Feynman diagram into subdiagrams along internal propagators.
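Both statements above, that a cycle representation is planar splitting exactly when every cycle stays inside one part of the splitting, and that globally reversing the zig-zag arrows yields the reversed reading $\vev{14563872}$, are easy to verify mechanically. A small sketch in Python (the helper names are ours, introduced only for illustration):

```python
def is_planar_splitting(cycles, parts):
    """True iff every cycle lies entirely inside a single part."""
    return all(any(set(c) <= p for p in parts) for c in cycles)

# Splitting at V_2: legs {3} versus {4,5,6} (P is kept in its own cycle).
assert is_planar_splitting([("3",), ("4", "6"), ("5",)],
                           [{"3"}, {"4", "5", "6"}])
# Splitting at V_1: legs {7,8} versus {1,2}; (17)(28) mixes the parts.
assert not is_planar_splitting([("1", "7"), ("2", "8")],
                               [{"7", "8"}, {"1", "2"}])

def apply_cycles(cycles, seq):
    # Act with a permutation in cycle notation; fixed points are implicit.
    sigma = {}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            sigma[a] = b
    return [sigma.get(x, x) for x in seq]

def canonical(seq):
    # Rotate the cyclic word so that it starts from leg 1.
    i = seq.index("1")
    return "".join(seq[i:] + seq[:i])

# Reversed arrows give the cycles (78)(12)(3456), whose action on the
# identity is the reversal of <12783654>, namely <14563872>.
fwd = apply_cycles([("1", "7"), ("2", "8"), ("4", "6")], list("12345678"))
rev = apply_cycles([("7", "8"), ("1", "2"), ("3", "4", "5", "6")], list("12345678"))
print(canonical(fwd), canonical(rev))  # 12783654 14563872
assert canonical(rev) == canonical(list(reversed(canonical(fwd))))
```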
We will provide a systematic and abstract construction in the next subsection. \subsection{The recursive construction of PT-factor via cycle representation} \label{secPermutation2} We have already learned that (1) when gluing two subdiagrams into one complete diagram, the cycle representation of one subdiagram should be planar splitting and that of the other non-planar splitting, and (2) the planar splitting cycle representation corresponds to a zig-zag path in the clockwise direction, while the non-planar splitting one corresponds to a zig-zag path in the anti-clockwise direction. The orientation of the zig-zag path is related to the planar or non-planar splitting of the cycle representations because we use the convention that ${\mbox{PT}}(\pmb{\alpha})$ is obtained by traversing the external legs in the clockwise direction. Let us now turn to a more systematic and abstract discussion of the recursive construction of the cycle representation of a Feynman diagram. Consider a generic $n$-point effective Feynman diagram, in which, besides cubic vertices, effective $(m>3)$-point vertices can also appear. An $m$-point vertex represents the collection of all possible $\frac{(2m-4)!}{(m-1)!(m-2)!}$ $m$-point trivalent subdiagrams. If we use $v_m$ to denote the number of $m$-point vertices appearing in the effective Feynman diagram, then they satisfy the constraint \begin{eqnarray} \sum_{m=3}^{n}(m-2)v_m=n-2~,~~~\label{eq:vertexrel}\end{eqnarray} where the total number of vertices $\sum v_m$ falls between 1 and $n-2$. An illustration of the $n$-point Feynman diagram as well as the dual $n$-gon diagram, together with the zig-zag path, is shown in Fig.~\ref{Figzigzagrecur}.
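The vertex multiplicity $\frac{(2m-4)!}{(m-1)!(m-2)!}$ and the constraint~\eqref{eq:vertexrel} are easy to check on the running eight point example, which has four cubic vertices and one quartic vertex; the product of the multiplicities over all vertices also reproduces the term count $\prod_i C(E_i)$, since $E_i$ equals the valence of the dual vertex. A short sketch in Python:

```python
from math import factorial

def trivalent_count(m):
    """Number of m-point color-ordered trivalent subdiagrams hidden
    inside an m-point effective vertex: (2m-4)! / ((m-1)!(m-2)!)."""
    return factorial(2 * m - 4) // (factorial(m - 1) * factorial(m - 2))

# A cubic vertex is a single diagram; a quartic vertex sums s and t channels.
assert trivalent_count(3) == 1 and trivalent_count(4) == 2

# Constraint sum_m (m-2) v_m = n - 2 for the 8-point example:
# v_3 = 4 cubic vertices and v_4 = 1 quartic vertex.
n, v = 8, {3: 4, 4: 1}
assert sum((m - 2) * vm for m, vm in v.items()) == n - 2
# The total number of vertices lies between 1 and n-2.
assert 1 <= sum(v.values()) <= n - 2

# Number of terms in the final result: 1*1*1*1*2 = 2, matching the
# two terms 1/s_45 + 1/s_56 inside the bracket of the example.
assert trivalent_count(3) ** v[3] * trivalent_count(4) ** v[4] == 2
print("constraint satisfied")
```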
\begin{figure} \centering \begin{tikzpicture} \fill (0,0) circle (0.15); \fill (-6,0) circle (0.15); \draw [very thick](0,0)--(0,2.5); \filldraw [color=black,fill=white,very thick] (0,3) circle (0.5); \draw[very thick](-0.354,3.354)--(-1,4); \draw[very thick](0.354,3.354)--(1,4); \draw [very thick](0,0)--(0,-2.5); \filldraw [color=black,fill=white,very thick] (0,-3) circle (0.5); \draw [very thick](0,0)--(2.5,0); \draw[very thick](-0.354,-3.354)--(-1,-4); \draw[very thick](0.354,-3.354)--(1,-4); \filldraw [color=black,fill=white,very thick] (3,0) circle (0.5); \draw[very thick](3.354,-0.354)--(4,-1); \draw[very thick](3.354,0.354)--(4,1); \draw [very thick](0,0)--(-4,0); \draw [very thick](0,0)--(1.768,1.768); \filldraw [color=black,fill=white,very thick] (2.121,2.121) circle (0.5); \draw [very thick](0,0)--(1.768,-1.768); \draw[very thick](2.121,2.621)--(2.121,3.5); \draw[very thick](2.621,2.121)--(3.5,2.121); \filldraw [color=black,fill=white,very thick] (2.121,-2.121) circle (0.5); \draw [very thick](0,0)--(-1.768,1.768); \draw[very thick](2.121,-2.621)--(2.121,-3.5); \draw[very thick](2.621,-2.121)--(3.5,-2.121); \filldraw [color=black,fill=white,very thick] (-2.121,2.121) circle (0.5); \draw [very thick](0,0)--(-1.768,-1.768); \draw[very thick](-2.621,2.121)--(-3.5,2.121); \draw[very thick](-2.121,2.621)--(-2.121,3.5); \filldraw [color=black,fill=white,very thick] (-2.121,-2.121) circle (0.5); \draw[very thick](-2.621,-2.121)--(-3.5,-2.121); \draw[very thick](-2.121,-2.621)--(-2.121,-3.5); \node at (-4.25,0)[]{$P_i$}; \draw[very thick](-4.5,0)--(-6,0); \draw[very thick](-6,0)--(-6,1.5); \filldraw [color=black,fill=white,very thick] (-6,2) circle (0.5); \draw[very thick](-6.354,2.354)--(-7,3); \draw[very thick](-5.646,2.354)--(-5,3); \draw[very thick](-6,0)--(-6,-1.5); \filldraw [color=black,fill=white,very thick] (-6,-2) circle (0.5); \draw[very thick](-6.354,-2.354)--(-7,-3); \draw[very thick](-5.646,-2.354)--(-5,-3);
\draw[very thick](-6,0)--(-8.5,0); \filldraw [color=black,fill=white,very thick] (-9,0) circle (0.5); \draw[very thick](-9.354,0.354)--(-10,1); \draw[very thick](-9.354,-0.354)--(-10,-1); \draw[very thick, blue](-0.5,1.5)--(0.5,1.5)--(1.5,0.5)--(1.5,-0.5)--(0.5,-1.5)--(-0.5,-1.5) --(-1.5,-0.5)--(-1.5,0.5)--(-0.5,1.5); \draw[very thick,blue](-7.5,0.5)--(-1.5,0.5); \draw[very thick,blue](-7.5,-0.5)--(-1.5,-0.5); \draw[very thick,blue](-7.5,0.5)--(-7.5,-0.5); \draw[very thick, dotted, blue](-1.5,0.5)--(-3.1,1.5); \draw[very thick, dotted, blue](-1.5,-0.5)--(-3.1,-1.5); \draw[very thick,dotted, blue](-0.5,1.5)--(-1.35,3.25); \draw[very thick,dotted, blue](-0.5,-1.5)--(-1.35,-3.25); \draw[very thick,dotted, blue](0.5,1.5)--(1.35,3.25); \draw[very thick,dotted, blue](0.5,-1.5)--(1.35,-3.25); \draw[very thick,dotted, blue](1.5,0.5)--(3.25,1.35); \draw[very thick,dotted, blue](1.5,-0.5)--(3.25,-1.35); \draw[very thick,dotted, blue](-7.5,0.5)--(-8.25,1.75); \draw[very thick,dotted, blue](-7.5,-0.5)--(-8.25,-1.75); \tikzstyle{line}=[red,very thick, dashed] \tikzstyle{arrowout}=[red,->,>=latex,shorten >=-4pt, very thick,dashed] \tikzstyle{arrowin}=[red,-<,>=latex,shorten >=1pt, very thick,dashed] \tikzstyle{arrow2}=[red,>-,>=latex,shorten >=0pt, very thick,dashed] \draw[line](-1.25,0.25)--(-1.75,-0.25); \draw[line](-1.25,-0.25)--(-1.75,0.25); \draw[arrowout](-1.25,0.25)--(-0.875,0.25); \draw[line](-0.875,0.25)--(-0.5,0.25)--(-0.75,0.5); \draw[arrow2](-0.75,0.5)--(-1,0.75); \draw[line](-1,0.75)--(-1,1.25); \draw[line](-1.25,1)--(-0.75,1); \draw[arrowout](-0.75,1)--(-0.5,0.75); \draw[line](-0.5,0.75)--(-0.25,0.5)--(-0.25,0.875); \draw[arrow2](-0.25,0.875)--(-0.25,1.25); \draw[arrowin](-1.25,1)--(-2.5,1.5); \draw[arrowout](-1,1.25)--(-1.5,2.5); \draw[line](-0.25,1.75)--(0.25,1.25); \draw[line](-0.25,1.25)--(0.25,1.75); \draw[arrowout](0.25,1.25)--(0.25,0.875); \draw[line](0.25,0.875)--(0.25,0.5)--(0.5,0.75);
\draw[arrow2](0.5,0.75)--(0.75,1); \draw[arrowin](-0.25,1.75)--(-0.75,3); \draw[arrowout](0.25,1.75)--(0.75,3); \draw[line](1,0.75)--(1,1.25); \draw[line](1.25,1)--(0.75,1); \draw[arrowout](1,0.75)--(0.75,0.5); \draw[line](0.75,0.5)--(0.5,0.25)--(0.875,0.25); \draw[arrow2](0.875,0.25)--(1.25,0.25); \draw[arrowout](1.25,1)--(2.5,1.5); \draw[arrowin](1,1.25)--(1.5,2.5); \draw[line](1.25,0.25)--(1.75,-0.25); \draw[line](1.25,-0.25)--(1.75,0.25); \draw[arrowout](1.25,-0.25)--(0.875,-0.25); \draw[line](0.875,-0.25)--(0.5,-0.25)--(0.75,-0.5); \draw[arrow2](0.75,-0.5)--(1,-0.75); \draw[arrowin](1.75,0.25)--(3,0.75); \draw[arrowout](1.75,-0.25)--(3,-0.75); \draw[line](1,-0.75)--(1,-1.25); \draw[line](1.25,-1)--(0.75,-1); \draw[arrowout](0.75,-1)--(0.5,-0.75); \draw[line](0.5,-0.75)--(0.25,-0.5)--(0.25,-0.875); \draw[arrow2](0.25,-0.875)--(0.25,-1.25); \draw[arrowin](1.25,-1)--(2.5,-1.5); \draw[arrowout](1,-1.25)--(1.5,-2.5); \draw[line](-0.25,-1.75)--(0.25,-1.25); \draw[line](-0.25,-1.25)--(0.25,-1.75); \draw[arrowout](-0.25,-1.25)--(-0.25,-0.875); \draw[line](-0.25,-0.875)--(-0.25,-0.5)--(-0.5,-0.75); \draw[arrow2](-0.5,-0.75)--(-0.75,-1); \draw[arrowout](-0.25,-1.75)--(-0.75,-3); \draw[arrowin](0.25,-1.75)--(0.75,-3); \draw[line](-1,-0.75)--(-1,-1.25); \draw[line](-1.25,-1)--(-0.75,-1); \draw[arrowout](-1,-0.75)--(-0.75,-0.5); \draw[line](-0.75,-0.5)--(-0.5,-0.25)--(-0.875,-0.25); \draw[arrow2](-0.875,-0.25)--(-1.25,-0.25); \draw[arrowout](-1.25,-1)--(-2.5,-1.5); \draw[arrowin](-1,-1.25)--(-1.5,-2.5); \draw[line](-6.25,-0.25)--(-5.75,-0.75); \draw[line](-6.25,-0.75)--(-5.75,-0.25); \draw[arrowout](-5.75,-0.75)--(-5.25,-2); \draw[arrowin](-6.25,-0.75)--(-6.75,-2); \draw[arrowout](-1.75,0.25)--(-3.75,0.25); \draw[line](-3.75,0.25)--(-5.75,0.25); \draw[arrowout](-5.75,-0.25)--(-3.75,-0.25); \draw[line](-3.75,-0.25)--(-1.75,-0.25); \draw[line](-7.75,-0.25)--(-7.25,0.25); \draw[line](-7.25,-0.25)--(-7.75,0.25);
\draw[arrowout](-7.25,-0.25)--(-6.75,-0.25); \draw[line](-6.75,-0.25)--(-6.25,-0.25); \draw[arrowout](-7.75,-0.25)--(-9,-0.75); \draw[arrowin](-7.75,0.25)--(-9,0.75); \draw[line](-6.25,0.25)--(-5.75,0.75); \draw[line](-6.25,0.75)--(-5.75,0.25); \draw[arrowout](-6.25,0.25)--(-6.75,0.25); \draw[line](-6.75,0.25)--(-7.25,0.25); \draw[arrowin](-5.75,0.75)--(-5.25,2); \draw[arrowout](-6.25,0.75)--(-6.75,2);
% zig-zag path in dotted form
\draw[dotted,red,very thick](-2.5,1.5) to [out=135,in=225](-2.625,2.625) to [out=45,in=135] (-1.5,2.5); \draw[dotted,red,very thick](-2.5,-1.5) to [out=225,in=135](-2.625,-2.625) to [out=315,in=225] (-1.5,-2.5); \draw[dotted,red,very thick](-0.75,3) to [out=135,in=180](0,3.75) to [out=0,in=45] (0.75,3); \draw[dotted,red,very thick](-0.75,-3) to [out=225,in=180](0,-3.75) to [out=0,in=315] (0.75,-3); \draw[dotted,red,very thick](1.5,2.5) to [out=45,in=135](2.625,2.625) to [out=315,in=45] (2.5,1.5); \draw[dotted,red,very thick](1.5,-2.5) to [out=315,in=225](2.625,-2.625) to [out=45,in=315] (2.5,-1.5); \draw[dotted,red,very thick](3,0.75) to [out=45,in=90](3.75,0) to [out=270,in=315] (3,-0.75); \draw[dotted,red,very thick](-9,0.75) to [out=135,in=90](-9.75,0) to [out=270,in=225] (-9,-0.75); \draw[dotted,red,very thick](-6.75,2) to [out=135,in=180](-6,2.75) to [out=0,in=45] (-5.25,2); \draw[dotted,red,very thick](-6.75,-2) to [out=225,in=180](-6,-2.75) to [out=0,in=315] (-5.25,-2);
% n-gon
\draw[gray,very thick](-3,1.7)--(-3,2.5); \draw[gray, very thick](-2.5,3)--(-1.7,3); \draw[dotted,gray, very thick](-3,2.5)--(-2.5,3); \draw[dotted,gray, very thick](-1.7,3)--(-1,3.5); \draw[gray,very thick](-1,3.5)--(-0.5,4); \draw[gray, very thick](0.5,4)--(1,3.5); \draw[dotted,gray, very thick](-0.5,4)--(0.5,4); \draw[dotted,gray, very thick](1,3.5)--(1.7,3); \draw[gray,very thick](3,1.7)--(3,2.5); \draw[gray, very thick](2.5,3)--(1.7,3); \draw[dotted,gray, very thick](2.5,3)--(3,2.5); \draw[dotted,gray, very thick](3,1.7)--(3.5,1);
\draw[gray,very thick](3.5,1)--(4,0.5); \draw[gray, very thick](4,-0.5)--(3.5,-1); \draw[dotted,gray, very thick](4,0.5)--(4,-0.5); \draw[dotted,gray, very thick](3.5,-1)--(3,-1.7); \draw[gray,very thick](3,-1.7)--(3,-2.5); \draw[gray, very thick](2.5,-3)--(1.7,-3); \draw[dotted,gray, very thick](3,-2.5)--(2.5,-3); \draw[dotted,gray, very thick](1.7,-3)--(1,-3.5); \draw[gray,very thick](-1,-3.5)--(-0.5,-4); \draw[gray, very thick](0.5,-4)--(1,-3.5); \draw[dotted,gray, very thick](0.5,-4)--(-0.5,-4); \draw[dotted,gray, very thick](-1,-3.5)--(-1.7,-3); \draw[gray,very thick](-3,-1.7)--(-3,-2.5); \draw[gray, very thick](-2.5,-3)--(-1.7,-3); \draw[dotted,gray, very thick](-2.5,-3)--(-3,-2.5);
% n-gon in suburban
\draw[gray,very thick](-5,-2.5)--(-5.5,-3); \draw[gray, very thick](-6.5,-3)--(-7,-2.5); \draw[dotted,gray, very thick](-5.5,-3)--(-6.5,-3); \draw[dotted,gray, very thick](-7,-2.5)--(-9.5,-1); \draw[dotted,gray,very thick](-3,-1.7) to [out=90,in=45] (-5,-2.5); \draw[gray,very thick](-9.5,-1)--(-10,-0.5); \draw[gray, very thick](-10,0.5)--(-9.5,1); \draw[dotted,gray, very thick](-10,-0.5)--(-10,0.5); \draw[dotted,gray, very thick](-9.5,1)--(-7,2.5); \draw[gray,very thick](-5,2.5)--(-5.5,3); \draw[gray, very thick](-6.5,3)--(-7,2.5); \draw[dotted,gray, very thick](-6.5,3)--(-5.5,3); \draw[dotted,gray,very thick](-3,1.7) to [out=270,in=315] (-5,2.5);
% Labels
\fill (0,3) circle (0.07); \fill (0.25,3) circle (0.07); \fill (-0.25,3) circle (0.07); \fill (0,-3) circle (0.07); \fill (0.25,-3) circle (0.07); \fill (-0.25,-3) circle (0.07); \fill (-9,0.25) circle (0.07); \fill (-9,0) circle (0.07); \fill (-9,-0.25) circle (0.07); \node at (-2.121,2.121) []{$\mathsf{A}_{i+1}$}; \node at (-2.121,-2.121) []{$\mathsf{A}_{i-1}$}; \node at (2.121,2.121) []{$\mathsf{A}_{m-1}$}; \node at (2.121,-2.121) []{$\mathsf{A}_{1}$}; \node at (3,0) []{$\mathsf{A}_{m}$}; \node at (-6,-2)[]{$\mathsf{a}_{i_1}$}; \node at (-6,2)[]{$\mathsf{a}_{i_s}$}; \end{tikzpicture} \caption{The
recursive construction of the $n$-point PT-factor. Black lines represent a general $n$-point Feynman diagram. Gray lines denote the dual $n$-gon. Blue lines represent the partial triangulation of the $n$-gon dual to the Feynman diagram, while red dashed lines denote the zig-zag path. The direction of the zig-zag path is labeled by arrows according to our convention. All dotted lines are abbreviations of detailed structures that are not explicitly shown in the diagram.}\label{Figzigzagrecur} \end{figure} Let us focus on an $m$-point vertex, marked as a black dot in the middle of the blue octagon\footnote{It should connect $m$ propagators, but we only sketch eight lines as illustration. The circles with dots inside represent many subdiagrams.} in Fig.~\ref{Figzigzagrecur}. This vertex connects $m$ subdiagrams via $m$ propagators $P_k$ with $k=1,2\ldots m$. Our goal is to write the cycle representation of the $n$-point PT-factor in a form that manifests its factorization into the cycle representations of the $m$ subdiagrams connected to the $m$-point vertex. Since, according to our convention, ${\mbox{PT}}(\pmb{{\alpha}}_m)$ is read in the clockwise direction, we have \begin{eqnarray} {\mbox{PT}}(\pmb{{\alpha}}_m)=\vev{P_1P_2\cdots P_m}\end{eqnarray} for the subdiagram inside the blue octagon. We intentionally choose the direction of the zig-zag path to be clockwise with respect to the considered $m$-point vertex, so that for this subdiagram we have \begin{eqnarray} {\mbox{PT}}(\pmb{{\beta}}_m)=\vev{P_1P_2\cdots P_m}\sim (P_1)(P_2)\cdots (P_m)~.~~~\end{eqnarray} Note that among the $2m$ equivalent cycle representations of ${\mbox{PT}}(\pmb{{\beta}}_m)$, this is the {\sl only} one that allows every $P_k$ to appear as a single element in a cycle. Now consider the $m$ subdiagrams on the other side of the $P_k$'s, denoted as $\mathsf{A}_1$, $\mathsf{A}_2$, $\ldots$, $\mathsf{A}_{m}$.
Since $P_k$ is an external leg of $\mathsf{A}_k$, we know {\sl a priori} that there must be a cycle representation $(P_k)\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_k}$ for this subdiagram, where $\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_k}$ is to be determined. So when gluing all the $m$ subdiagrams to the one inside the octagon by the propagators $P_k$, we obtain the following factorized form \begin{eqnarray} \pmb{\beta}^\text{cyc-rep}&=&(P_1)(P_2)\cdots (P_m)\oplus(P_1)\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_1}\oplus \cdots \oplus (P_m)\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_m}\nonumber \\ &=&\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_1}\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_2}\cdots \pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_{m}}~,~~~\label{cyclefactor}\end{eqnarray} namely, it admits a planar separation into $m$ parts. Once the cycle representations of the PT-factors for the subdiagrams are known, the complete cycle representation is simply a combination of them. Note that the factorization~\eqref{cyclefactor} is based upon a given vertex. Now we can go into each subdiagram $\mathsf{A}_i$ and perform the same construction, until we reach a subdiagram with only one vertex. A remaining problem is: supposing all the cycle representations of a subdiagram $\mathsf{A}_i$ are known, how do we choose the $\pmb{\beta}^{\text{cyc-rep}}_{\mathsf{A}_i}$ that is used as the building block of~\eqref{cyclefactor}? According to Fig.~\ref{Figzigzagrecur}, the subset $\mathsf{A}_i$ and the propagator $P_i$ form a $(|\mathsf{A}_i|+1)$-point subdiagram, connected to the remaining parts via the propagator $P_i$. In order to connect it with the subdiagram inside the octagon, we have already constrained the cycle representation of this subdiagram to be $(P_i)\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_{i}}$, where $(P_i)$ itself is a cycle. From the definition of the equivalence class~\eqref{eq:equivperm}, we know that there are two cycle representations satisfying this condition.
One of them can be constructed according to~\eqref{cyclefactor}: suppose the propagator $P_i$ is connected to an $(s+1)$-point vertex in the subdiagram $\mathsf{A}_i$, marked as a black dot in the interior of the blue rectangle in Fig.~\ref{Figzigzagrecur}. This $(s+1)$-point vertex splits the subdiagram $\mathsf{A}_i$ into $s$ disjoint sub-subsets $\mathsf{a}_{i_1}$ to $\mathsf{a}_{i_s}$ via $s$ propagators $P_{i_\ell}$ with $\ell=1,\ldots,s$. Then, following~\eqref{cyclefactor}, we should choose the zig-zag path inside the rectangle in the clockwise direction, from which we can obtain a cycle representation satisfying the planar splitting, \begin{equation} \label{eq:cycAi} \pmb{\beta}_{\mathsf{A}_i+1}^{\text{cyc-rep}}=(P_i)\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_i}=(P_i)~\pmb{\beta}^\text{cyc-rep}_{\mathsf{a}_{i_1}}\pmb{\beta}^\text{cyc-rep}_{\mathsf{a}_{i_2}}\cdots\pmb{\beta}^\text{cyc-rep}_{\mathsf{a}_{i_s}}~.~~~ \end{equation} However, since we have already set the zig-zag path around the octagon to be clockwise, the zig-zag path around the rectangle must be anti-clockwise, and what we should use is the cycle representation other than~\eqref{eq:cycAi} with $P_i$ in a single cycle. Thus we can obtain $(P_i)\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_i}$ by acting with reversal and cyclic rotation on~\eqref{eq:cycAi}. Moreover, the $\pmb{\beta}_{\mathsf{A}_i}^{\text{cyc-rep}}$ obtained this way must be non-planar splitting, namely, at least one cycle of $\pmb{\beta}^\text{cyc-rep}_{\mathsf{A}_i}$ contains elements from different subsets $\mathsf{a}_{i_\ell}$. In other words, there must exist at least one cycle that cannot be a part of any $\pmb{\beta}^\text{cyc-rep}_{\mathsf{a}_{i_\ell}}$. To prove this point, it suffices to study the following problem.
Consider an identity element in the permutation group, \begin{eqnarray} \vev{P_i,{\color{red}a_1,a_2,\ldots, a_i},{\color{cyan} b_1, b_2, \ldots , b_j}}~,~~~\label{permumapped}\end{eqnarray} which splits into three planar parts $\{P_i\}$, $\{a_1,\ldots,a_i\}$ and $\{b_1,\ldots,b_j\}$. Let us consider the planar splitting cycle representation $(P_i)\pmb{a}^\text{cyc-rep}\pmb{b}^\text{cyc-rep}$ that maps the identity element~\eqref{permumapped} into another permutation as \begin{equation} ({P_i})~{\color{red}\pmb{a}}^\text{cyc-rep}{\color{cyan}\pmb{b}}^\text{cyc-rep}=\begin{pmatrix} {P_i} & {\color{red}a_1} & {\color{red}a_2} & {\color{red}\cdots} & {\color{red}a_i} & {\color{cyan}b_1} & {\color{cyan}b_2 }& {\color{cyan}\cdots} & {\color{cyan}b_j} \\ \downarrow & \downarrow & \downarrow & \cdots & \downarrow & \downarrow & \downarrow & \cdots & \downarrow \\ {P_i} & {\color{red}a'_1} & {\color{red}a'_2} & {\color{red}\cdots }& {\color{red}a'_i} & {\color{cyan}b'_1} & {\color{cyan}b'_2} & {\color{cyan}\cdots} & {\color{cyan}b'_j} \end{pmatrix}~, \end{equation} where $\{a'_1,\ldots,a'_i\}$ is a permutation of $\{a_1,\ldots,a_i\}$ and $\{b'_1,\ldots,b'_j\}$ is a permutation of $\{b_1,\ldots,b_j\}$.
The other cycle representation with $(P_i)$ as a single cycle in the equivalence class is thus given by reversing the ordering and dragging $P_i$ to the first position as \begin{equation} \begin{pmatrix} {P_i} & {\color{red}a_1} & {\color{red}a_2} & {\color{red}\cdots} & {\color{red}a_i} & {\color{cyan}b_1} & {\color{cyan}\cdots} & {\color{cyan}b_{j-1}} &{\color{cyan} b_j }\\ \downarrow & \downarrow & \downarrow & \cdots & \downarrow & \downarrow & \cdots & \downarrow & \downarrow \\ {P_i} & {\color{cyan}b'_j} & {\color{cyan}b'_{j-1}} & {\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot} & {\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot} & {\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot} & {\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot} & {\color{red} a'_2} & {\color{red}a'_1} \end{pmatrix}=({P_i})({\color{red}a_1} {\color{cyan}b'_j}{\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot})({\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot})\cdots({\color{cyan}\cdot}{\color{red}\cdot}{\color{cyan}\cdot}{\color{red}\cdot})~.~~~\label{permumapto} \end{equation} Since $b'_j\in \{b_1,\ldots,b_j\}$ and $a_1\in \{a_1,\ldots,a_i\}$, it is clear in (\ref{permumapto}) that the legs in the two subsets $\{a_1,\ldots,a_i\}$, $\{b_1,\ldots,b_j\}$ must appear together in at least one cycle of the cycle representation. To recap, the cycle representation of a Feynman diagram can be written as a simple combination of the cycle representations of its subdiagrams, as presented in~\eqref{cyclefactor}. This leads to a recursive construction of the cycle representation from those of lower point Feynman subdiagrams.
Crucially, \emph{the cycle representation of subdiagram $\mathsf{A}_i$ that is used in the complete cycle representation~\eqref{cyclefactor} should be the one with $P_i$ as a single cycle $(P_i)$ and be non-planar splitting with respect to its vertex connected to $P_i$.} Then we can work out the permutation from the cycle representation and eventually the PT-factor ${\mbox{PT}}(\pmb{\beta})$. Note that it is possible to start the recursive construction from any vertex of a Feynman diagram, and different choices lead to different cycle representations, but they are all in the same equivalence class. We will show in Appendix~\ref{secAppendix} that different planar splittings characterize the shapes of the associahedron boundaries. \subsection{Examples} Let us now present some nontrivial examples to illustrate the recursive construction of cycle representation. First we consider the three point diagram. The cubic vertex splits the diagram into three subdiagrams, each of which is trivially a single external leg. This is also true for diagrams with only a single vertex. So, following~\eqref{cyclefactor}, we get \begin{align} \adjustbox{raise=-0.75cm}{\begin{tikzpicture} \draw [] (0,0) -- (1,0) node[above=0pt]{$3$} (0,0) -- (135:1) node[left=0pt]{$2$} (0,0) -- (-135:1) node[left=0pt]{$1$}; \fill[color=red] (0,0) circle (0.1); \end{tikzpicture}}\;\Longleftrightarrow\;\pmb{\beta}^\text{cyc-rep}=(1)(2)(3)~~~,~~~ \adjustbox{raise=-0.75cm}{\begin{tikzpicture} \draw [] (0,0) -- (1,0) node[above=0pt]{$n$} (0,0) -- (135:1) node[left=0pt]{$3$} (-1,0) node[left=0pt]{$2$} -- (0,0) -- (-135:1) node[left=0pt]{$1$}; \fill[color=red] (0,0) circle (0.1); \foreach \x in {15,30,45,60,75,90,105,120} { \fill (\x:0.7) circle (0.75pt); } \end{tikzpicture}}\;\Longleftrightarrow\;\pmb{\beta}^\text{cyc-rep}=(1)(2)(3)\ldots (n)~.~~~ \end{align} Note that $(12)(3)$ is in the same equivalence class as $(1)(2)(3)$ for this three point diagram.
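The base case and the gluing rule~\eqref{cyclefactor} can be sketched in a few lines of Python. This is a minimal illustration only; the list-of-tuples encoding of cycle representations and the string labels for propagators are our own conventions, not notation from the text. In the four point check, the single internal line is labeled $P_{12}$ on one side and $P_{34}$ on the other, as in the figure below.

```python
def base_case(legs):
    """Cycle representation of a single-vertex diagram: all singleton cycles."""
    return [(leg,) for leg in legs]

def glue(vertex_lines, sub_reps):
    """Glue the cycle representations of the subdiagrams meeting at one vertex.

    Each subdiagram representation must contain the singleton cycle of its
    connecting line; per the factorization, that singleton is dropped from
    the final result, since propagators are not external legs of the glued
    diagram."""
    full = []
    for line, rep in zip(vertex_lines, sub_reps):
        assert (line,) in rep, "connecting line must sit in a singleton cycle"
        full.extend(c for c in rep if c != (line,))
    return full

print(base_case([1, 2, 3]))  # [(1,), (2,), (3,)]

# four point diagram: subdiagram {1,2} glued to subdiagram {3,4} across the
# single propagator, giving the representation (12)(3)(4)
rep = glue(["P12", "P34"], [[("P12",), (1, 2)], [("P34",), (3,), (4,)]])
print(rep)  # [(1, 2), (3,), (4,)]
```
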
When the three point diagram appears as a subdiagram, one leg becomes an internal line $P$, namely \begin{equation} \label{eq:3psub} \adjustbox{raise=-0.75cm}{\begin{tikzpicture} \draw [] (0,0) -- (1,0) node[pos=0.5,above=0pt]{$P$} (0,0) -- (135:1) node[left=0pt]{$a_2$} (0,0) -- (-135:1) node[left=0pt]{$a_1$}; \fill (0,0) circle (2pt); \foreach \x in {-135,-90,-45,0,45,90,135} { \draw [] (1,0) -- ++(\x:0.3);} \fill (1,0) circle (2pt); \end{tikzpicture}}~.~~~ \end{equation} In this case, we should use the non-planar splitting cycle representation $(a_1a_2)(P)$ instead of $(a_1)(a_2)(P)$. To see this, let us proceed to a four point Feynman diagram as shown below. If we start from the vertex marked by the red dot, the diagram splits into three subdiagrams, two of which are single external legs and one of which is a three point subdiagram. The non-planar splitting for the three point subdiagram appears as $(12)(P_{12})$, so according to~\eqref{cyclefactor}, the recursive procedure is described as follows, \begin{equation} \label{eq:4pv1} \adjustbox{raise=-0.85cm}{\begin{tikzpicture}[every node/.style={font=\fontsize{8pt}{8pt}\selectfont,inner sep=1pt}] \draw [thick] (135:1) node[left=0pt]{$2$} -- (0,0) -- (-135:1) node[left=0pt]{$1$}; \draw [thick] (0,0) -- (1,0) -- ++(45:1) node[right=0pt]{$3$} (1,0) -- ++(-45:1) node[right=0pt]{$4$}; \filldraw [draw=red,fill=red] (1,0) circle (0.1); \node at (2.5,0) {$\longrightarrow$}; \begin{scope}[xshift=4cm] \draw [thick] (135:1) node[left=0pt]{$2$} -- (0,0) -- (-135:1) node[left=0pt]{$1$}; \node (A) at (0.75,0.5) [above=1pt]{$(12)(P_{12})$}; \node (B) at (0.75,1.5) {$(1)(2)(P_{12})$}; \node (C) at (1.25,0) {$P_{12}$}; \draw [-stealth] (B.south) -- (A.north); \draw [stealth-] (45:0.15) -- (A.south); \draw [thick] (0,0) -- (C.west) (C.east) -- (2.5,0) -- ++(45:1) node[right=0pt]{$3$} (2.5,0) -- ++(-45:1) node[right=0pt]{$4$}; \node (E) at (2.5,0) [circle,fill=red,inner sep=2pt] {}; \node (F) at (1.75,-0.5) [below=1pt] {$(P_{34})(3)(4)$}; \draw [stealth-] (2.5,0)
++(-135:0.15) -- (F.north) ; \end{scope} \end{tikzpicture}}\,\Longrightarrow\quad\pmb{\beta}^\text{cyc-rep}=(12)(3)(4)~.~~~ \end{equation} From the above results, we can recursively compute the cycle representation of ${\mbox{PT}}(\pmb{\beta})$ for a five point CHY-integrand. Here we present an example as follows, \begin{align} {\begin{tikzpicture}[baseline={(current bounding box.center)}] \draw [] (135:1) node[left=0pt]{$2$} -- (0,0) -- (1,0) -- ++(0,1) node[above=0pt]{$3$} (1,0) -- (2,0) -- ++(45:1) node [above=0pt]{$4$} (2,0) -- ++(-45:1) node[left=0pt]{$5$} (-135:1) node[left=0pt]{$1$} -- (0,0); \fill (0,0) circle (0.05); \fill[color=red] (1,0) circle (0.1) node [below=1pt,black]{$V_2$}; \fill[color=red] (2,0) circle (0.1) node [below=1pt,black]{$V_1$}; \node at (3,0) {$,$}; \end{tikzpicture} } \end{align} and construct the cycle representation starting from two different vertices respectively. If we start from vertex $V_1$, then the diagram is split into three parts: the external legs $4$, $5$ and a four point subdiagram. As mentioned above, $(12)(3)(P_{123})$ is a planar splitting cycle representation with respect to $V_2$, and we should take the non-planar splitting one in the equivalence class, namely $(132)(P_{123})$, to form the complete cycle representation; it is obtained from $(12)(3)(P_{123})$ by acting with the reversing permutation $(13)(2)$, namely, $[(12)(3)]\cdot[(13)(2)]=(132)$. Hence, the final result is $(132)(4)(5)$. Alternatively, we can start from vertex $V_2$. Then the diagram is split into another three parts, two of which are three point subdiagrams with non-planar splitting cycle representations $(12)(P_{12})$ and $(45)(P_{45})$, while the other is the single external leg $3$. Connecting them via the vertex $(P_{12})(3)(P_{45})$, we obtain $(12)(3)(45)$. We see that different splittings of the diagram lead to different cycle representations. However, all of them are in the same equivalence class.
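The compositions with the reversing permutation used above can be checked mechanically. The following is a minimal Python sketch (the dictionary encoding of permutations is an assumption made for illustration); composition is read right-to-left, matching $[(12)(3)]\cdot[(13)(2)]=(132)$.

```python
def perm(cycles):
    """Permutation (as a dict) built from a list of cycles."""
    p = {}
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b
    return p

def compose(f, g):
    """(f.g)(x) = f(g(x)): apply g first, then f (right-to-left)."""
    keys = set(f) | set(g)
    return {x: f.get(g.get(x, x), g.get(x, x)) for x in keys}

def cycles_of(p):
    """Cycle decomposition, dropping fixed points for readability."""
    seen, out = set(), []
    for x in sorted(p, key=str):
        if x in seen:
            continue
        c, y = [], x
        while y not in seen:
            seen.add(y)
            c.append(y)
            y = p[y]
        if len(c) > 1:
            out.append(tuple(c))
    return out

# [(12)(3)] . [(13)(2)] = (132): the non-planar splitting representative
print(cycles_of(compose(perm([(1, 2)]), perm([(1, 3)]))))  # [(1, 3, 2)]

# another instance of the same composition rule: [(7)(56)] . [(75)(6)] = (576)
assert cycles_of(compose(perm([(5, 6)]), perm([(7, 5)]))) == [(5, 7, 6)]
```
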
In fact, both $(132)(4)(5)$ and $(12)(3)(45)$ lead to the PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{12453}$. Next, we give a seven point example. The Feynman diagram is shown below, together with the resultant cycle representations when we carry out the recursive construction at different vertices, \begin{align} &{\begin{tikzpicture}[baseline={(current bounding box.center)},every node/.style={black}] \draw [] (135:1) node[left=0pt]{$2$} -- (0,0) -- (-135:1) node[left=0pt]{$1$} (0,0) -- (1,0) -- ++(0,1) node[above=0pt]{$3$} (1,0) -- (2,0) -- ++(0,1) node[above=0pt]{$4$} (2,0) -- (3,0) -- ++(0,-1) node[below=0pt]{$7$} (3,0) -- (4,0) -- ++(45:1) node[right=0pt]{$5$} (4,0) -- ++(-45:1) node[below=0pt]{$6$}; \fill [red](0,0) node [below=0.1cm] {$V_1$} circle (0.1) (1,0) node[below=0.1cm]{$V_2$} circle (0.1) ++(1,0) node[below=0.1cm]{$V_3$} circle (0.1) ++(1,0) node [above=0.1cm] {$V_4$} circle (0.1) ++(1,0) node[below=0.1cm] {$V_5$} circle (0.1); \end{tikzpicture}} & &\begin{array}{l} (V_1)\qquad\pmb\beta=(1)(2)(347)(5)(6) \\ (V_2)\qquad\pmb\beta=(12)(3)(467)(5) \\ (V_3)\qquad\pmb\beta=(132)(4)(576) \\ (V_4)\qquad\pmb\beta=(143)(2)(56)(7) \\ (V_5)\qquad\pmb\beta=(1)(2)(347)(5)(6) \end{array} \end{align} Some brief explanation is in order. For the vertex $V_4$, the planar splitting cycle representation is $(P)(56)(7)$, while its non-planar splitting one is $[(7)(56)]\cdot[(75)(6)]=(576)$. The former can be used in the recursive construction starting from vertex $V_4$, while the latter can be used in the recursive construction starting from vertex $V_3$. Similarly, for the vertex $V_3$, the planar splitting cycle representation is $(P)(4)(576)$, while the non-planar splitting one is $[(4)(576)]\cdot[(47)(56)]=(467)(5)$. The latter can be used in the recursive construction starting from vertex $V_2$. This recursive construction straightforwardly extends to higher points, and we have checked it extensively up to eight point diagrams.
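The statement that different starting vertices yield cycle representations in the same equivalence class can also be tested numerically. Below is a minimal Python sketch; the canonicalization used here (act with the permutation on the identity word $\vev{1\,2\cdots n}$, then quotient by cyclic rotation and reversal) is our reading of the equivalence relation, illustrated on the five point example.

```python
def perm_from_cycles(cycles, n):
    """Permutation on {1,...,n} (as a dict) from a list of cycles."""
    p = {i: i for i in range(1, n + 1)}
    for c in cycles:
        for a, b in zip(c, c[1:] + c[:1]):
            p[a] = b
    return p

def pt_word(cycles, n):
    """Word obtained by acting on <1 2 ... n>, canonicalized under cyclic
    rotation (start at 1) and reversal."""
    p = perm_from_cycles(cycles, n)
    w = [p[i] for i in range(1, n + 1)]

    def rot(word):
        k = word.index(1)
        return tuple(word[k:] + word[:k])

    return min(rot(w), rot(w[::-1]))

# five point example: both constructions give the PT-factor <12453>
assert pt_word([(1, 3, 2), (4,), (5,)], 5) == (1, 2, 4, 5, 3)
assert pt_word([(1, 2), (3,), (4, 5)], 5) == (1, 2, 4, 5, 3)
print(pt_word([(1, 3, 2), (4,), (5,)], 5))  # (1, 2, 4, 5, 3)
```
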
\section{Relations between different PT-factors} \label{secRelation} After clarifying the relations between permutations of PT-factors and the Feynman diagrams, we move on to the relations between different PT-factors in the language of permutations and cycle representations. This topic has been discussed from the \emph{associahedron} point of view \cite{Arkani-Hamed:2017mur}, and before proceeding let us briefly review that result. The major conclusion is that the canonical form of an $(n-3)$-dimensional associahedron is the $n$-particle tree-level amplitude of bi-adjoint scalar theory with identical ordering. A consequence is that the codimension $d$ faces of an associahedron are in one-to-one correspondence with the partial triangulations with $d$ diagonals, while the partial triangulations are dual to cuts on planar cubic diagrams, with each diagonal corresponding to a cut. Hence the faces of the associahedron are dual to the singularities of the cubic scalar amplitude. In this sense, PT-factors can also be related to the corresponding faces of the associahedron. For instance, a three point amplitude is dual to a triangle, allowing only one trivial triangulation. Thus the corresponding associahedron is just a zero dimensional point, on which sits the only independent PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{123}$. A four point amplitude is dual to a box, while the associahedron formed by its (partial) triangulations is a one dimensional line as shown in Fig.~\ref{4passo}. The vertices correspond to the complete triangulations of the box, while the edge corresponds to the partial triangulation. The edge is related to ${\mbox{PT}}(\pmb{\beta})=\vev{1234}$ with amplitude $\frac{1}{s_{12}}+\frac{1}{s_{23}}$, while the two ending vertices are related to ${\mbox{PT}}(\pmb{\beta})=\vev{1243}$ and $\vev{1324}$ respectively, with amplitudes $\frac{1}{s_{12}}$ and $\frac{1}{s_{23}}$. The relations between different PT-factors are manifest in this geometric picture.
\begin{figure}[t] \centering \begin{tikzpicture} \filldraw [very thick] (0,0) circle (2pt) -- (4,0) circle (2pt); \draw[blue] (-0.5,-0.5)--(0.5,-0.5)--(0.5,-1.5)--(-0.5,-1.5)--(-0.5,-0.5); \draw[blue] (0.5,-0.5)--(-0.5,-1.5); \draw[blue] (1.5,-0.5)--(2.5,-0.5)--(2.5,-1.5)--(1.5,-1.5)--(1.5,-0.5); \draw[blue] (3.5,-0.5)--(4.5,-0.5)--(4.5,-1.5)--(3.5,-1.5)--(3.5,-0.5); \draw[blue] (3.5,-0.5)--(4.5,-1.5); \begin{scope}[xshift=8cm,yshift=-0.75cm] \node at (0,0) [above=0.1cm] { \begin{tikzpicture}[scale=0.2] \draw (0,0) -- (0,1) -- (-1,2) (0,1) -- (1,2) (0,2) -- (0.5,1.5); \end{tikzpicture} }; \node at (4,0) [above=0.1cm]{ \begin{tikzpicture}[scale=0.2] \draw (0,0) -- (0,1) -- (-1,2) (0,1) -- (1,2) (0,2) -- (-0.5,1.5); \end{tikzpicture} }; \node at (2,0) [above=0.1cm]{ \begin{tikzpicture}[scale=0.2] \draw (0,0) -- (0,1) -- (-1,2) (0,1) -- (1,2) (0,2) -- (0,1); \end{tikzpicture} }; \filldraw [very thick] (0,0) circle (2pt) -- (4,0) circle (2pt); \node at (0,0) [below=0.1cm] {$-\frac{1}{s_{12}}$}; \node at (4,0) [below=0.1cm] {$-\frac{1}{s_{23}}$}; \node at (2,0) [below=0.1cm] {$\frac{1}{s_{12}}+\frac{1}{s_{23}}$}; \node at (0,0) [above=0.6cm] {\scriptsize$(12)(3)(4)$}; \node at (4,0) [above=0.6cm] {\scriptsize$(1)(23)(4)$}; \node at (2,0) [above=0.6cm] {\scriptsize$(1)(2)(3)(4)$}; \node at (5,0) {$.$}; \end{scope} \end{tikzpicture} \caption{The associahedron for four point amplitudes and PT-factors.}\label{4passo} \end{figure} A five point amplitude is dual to a pentagon, and the associahedron constructed from all its (partial) triangulations is also a pentagon, as shown in Fig.~\ref{5passo}, where the thick black lines form the associahedron and the blue lines are the triangulations of the pentagon. The face corresponds to $\vev{12345}$, while the edges correspond to $\vev{12543}$, $\vev{12354}$, $\vev{13245}$, $\vev{14325}$, $\vev{12435}$, and the vertices correspond to $\vev{12453}$, $\vev{13254}$, $\vev{14235}$, $\vev{13425}$, $\vev{12534}$.
The PT-factor of each vertex evaluates to a single Feynman diagram. An edge connects two vertices, which means that the PT-factor of the edge evaluates to two Feynman diagrams. Two edges share a common vertex, which means that their PT-factors share a common Feynman diagram. The face contains five vertices, so its PT-factor evaluates to five Feynman diagrams. Therefore the relations among different PT-factors are evident in this diagram. For higher point amplitudes, the associahedron is a much more complicated geometric object; however, the correspondence is similar. \begin{figure}[t] \centering \begin{tikzpicture} \filldraw [very thick] (0,0) circle (2pt) -- (1,-2) circle (2pt)-- (3,-2) circle (2pt)-- (4,0) circle (2pt)-- (2,1.5) circle (2pt)--(0,0); \draw [blue] (-1.25,0.25)--(-1.,-0.25)--(-0.5,-0.25)--(-0.25,0.25)--(-0.75,0.625)--(-1.25,0.25); \draw[blue](-1,-0.25)--(-0.75,0.625)--(-0.5,-0.25); \draw[blue](4.25,0.25)--(4.5,-0.25)--(5.,-0.25)--(5.25,0.25)--(4.75,0.625)--(4.25,0.25); \draw[blue] (4.25,0.25)--(5.25,0.25)--(4.5,-0.25); \draw[blue](0,-2.5)--(0.25,-3.)--(0.75,-3.)--(1,-2.5)--(0.5,-2.125)--(0,-2.5); \draw[blue] (0,-2.5) -- (0.75,-3) -- (0.5,-2.125); \draw[blue](1.5,-2.5)--(1.75,-3.)--(2.25,-3.)--(2.5,-2.5)--(2.,-2.125)--(1.5,-2.5); \draw[blue] (2.25,-3)--(1.5,-2.5); \draw[blue](3,-2.5)--(3.25,-3.)--(3.75,-3.)--(4,-2.5)--(3.5,-2.125)--(3,-2.5); \draw[blue](4,-2.5)--(3,-2.5)--(3.75,-3); \draw[blue](3.75,-1)--(4.,-1.5)--(4.5,-1.5)--(4.75,-1)--(4.25,-0.625)--(3.75,-1); \draw[blue](3.75,-1)--(4.75,-1); \draw[blue](3,1.25)--(3.25,0.75)--(3.75,0.75)--(4,1.25)--(3.5,1.625)--(3,1.25); \draw[blue](4,1.25)--(3.25,0.75); \draw[blue](1.5,2.25)--(1.75,1.75)--(2.25,1.75)--(2.5,2.25)--(2.,2.625)--(1.5,2.25); \draw[blue](2,2.625)--(1.75,1.75)--(2.5,2.25); \draw[blue](0,1.25)--(0.25,0.75)--(0.75,0.75)--(1,1.25)--(0.5,1.625)--(0,1.25); \draw[blue](0.25,0.75)--(0.5,1.625); \draw[blue](1.5,-0.25)--(1.75,-0.75)--(2.25,-0.75)--(2.5,-0.25)--(2.,0.125)--(1.5,-0.25);
\draw[blue](-0.75,-1)--(-0.5,-1.5)--(0.,-1.5)--(0.25,-1)--(-0.25,-0.625)--(-0.75,-1); \draw[blue](0,-1.5)--(-0.25,-0.625); \node at (2,-0.885) []{\scriptsize $1$}; \node at (2.5,-0.55) []{\scriptsize $5$}; \node at (2.375,0.05) []{\scriptsize $4$}; \node at (1.625,0.05) []{\scriptsize $3$}; \node at (1.5,-0.55) []{\scriptsize $2$}; \begin{scope}[xshift=10cm] \node at (0,0) { \begin{tikzpicture}[scale=0.2] \draw (0,0) -- (0,1) -- (1.5,2) (0,1) -- (0.5,2) (0,1) -- (-0.5,2) (0,1) -- (-1.5,2); \end{tikzpicture} }; \node at (0,0) [above=0.15cm] {\scriptsize$(1)(2)(3)(4)(5)$}; \node (A) at (0,3) [inner sep=0] {}; \node (B) at (18:3) [inner sep=0] {}; \node (C) at (-54:3) [inner sep=0] {}; \node (D) at (-126:3) [inner sep=0] {}; \node (E) at (162:3) [inner sep=0] {}; \node (A1) at ($(A)!0.5!(B)$) [inner sep=0] {}; \node (B1) at ($(B)!0.5!(C)$) [inner sep=0] {}; \node (C1) at ($(C)!0.5!(D)$) [inner sep=0] {}; \node (D1) at ($(D)!0.5!(E)$) [inner sep=0] {}; \node (E1) at ($(E)!0.5!(A)$) [inner sep=0] {}; \draw [very thick] (A) -- (B) -- (C) -- (D) -- (E) -- (A); \fill (A) circle (2pt) (B) circle (2pt) (C) circle (2pt) (D) circle (2pt) (E) circle (2pt); \node at (A) [above=0cm] { \begin{tikzpicture}[scale=0.2] \draw (0,0) -- (0,1) -- (1.5,2) (0.5,2) -- (-0.5,1.333) (-0.5,2) -- (-1,1.666) (0,1) -- (-1.5,2); \end{tikzpicture} }; \node at (A) [above=0.5cm] {\scriptsize$(15)(23)(4)$}; \node at (A) [below=0.25cm] {$\frac{1}{s_{23}s_{15}}$}; \node at (B) [right=-0.25cm] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(15)(2)(34)$}; \draw (0,0) -- (0,1) -- (1.5,2) (0.5,2) -- (-0.5,1.333) (-0.5,2) -- (0,1.666) (0,1) -- (-1.5,2); \end{tikzpicture} }; \node at (B) [left=1pt] {$\frac{1}{s_{15}s_{34}}$}; \node at (C) [right=0cm]{ \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(12)(34)(5)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (0.5,1.333) (0.5,2) -- (0,1.666) (0,1) -- (-1.5,2); \end{tikzpicture} }; \node at (C) [above left=0pt] 
{$\frac{-1}{s_{12}s_{34}}$}; \node at (D) [left=0cm] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(12)(3)(45)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (0.5,1.333) (0.5,2) -- (1,1.666) (0,1) -- (-1.5,2); \end{tikzpicture} }; \node at (D) [above right=0pt] {$\frac{-1}{s_{12}s_{45}}$}; \node at (E) [left=0pt] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(1)(23)(45)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (-1,1.666) (0.5,2) -- (1,1.666) (0,1) -- (-1.5,2); \end{tikzpicture} }; \node at (E) [right=0pt] {$\frac{1}{s_{23}s_{45}}$}; \node at (A1) [above right=-8pt] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(15)(2)(3)(4)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (-0.5,1.333) (0.5,2) -- (-0.5,1.333) (0,1) -- (-1.5,2); \node at (-5,-1) [right=1pt] {\tiny$\frac{1}{s_{15}}(\frac{1}{s_{23}}+\frac{1}{s_{34}})$}; \end{tikzpicture} }; \node at (B1) [right=-3pt] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(1)(2)(34)(5)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (0,1.666) (0.5,2) -- (0,1.666) -- (0,1) (0,1) -- (-1.5,2); \node at (-5,-1) [right=1pt] {\tiny$\frac{-1}{s_{34}}(\frac{1}{s_{12}}+\frac{1}{s_{15}})$}; \end{tikzpicture} }; \node at (C1) [below=0pt]{ \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(12)(3)(4)(5)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (0.5,1.333) (0.5,2) -- (0.5,1.333) (0,1) -- (-1.5,2); \node at (0,0) [below=0pt] {\tiny$\frac{1}{s_{12}}(\frac{1}{s_{34}}+\frac{1}{s_{45}})$}; \end{tikzpicture} }; \node at (D1) [left=-3pt] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] {\scriptsize$(1)(2)(3)(45)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (0,1) (0.5,2) -- (1,1.666) (0,1) -- (-1.5,2); \node at (5,-1) [left=1pt] {\tiny$\frac{1}{s_{45}}(\frac{1}{s_{12}}+\frac{1}{s_{23}})$}; \end{tikzpicture} }; \node at (E1) [above left=-8pt] { \begin{tikzpicture}[scale=0.2] \node at (0,2) [above=0pt] 
{\scriptsize$(1)(23)(4)(5)$}; \draw (0,0) -- (0,1) -- (1.5,2) (-0.5,2) -- (-1,1.666) (0.5,2) -- (0,1) (0,1) -- (-1.5,2); \node at (5,-1) [left=1pt] {\tiny$\frac{1}{s_{23}}(\frac{1}{s_{45}}+\frac{1}{s_{51}})$}; \end{tikzpicture} }; \end{scope} \end{tikzpicture} \caption{The associahedron for five point amplitudes and the PT-factors.}\label{5passo} \end{figure} \subsection{Relation analysis via cycle representation} From the $n$-gon picture, we see that by adding one more triangulation line (i.e., fixing one more propagator), we get the immediate child amplitude. Conversely, by removing one triangulation line (i.e., unfixing a propagator), we recover the mother amplitude. In terms of the zig-zag path, we have the following picture, \begin{equation} \label{eq:split} \adjustbox{raise=-1.8cm}{\begin{tikzpicture} \draw [thick] (0.5,-1) -- (0,0) -- (0.5,1); \draw [thick] (2.5,-1) -- (3,0) -- (2.5,1); \draw [line width=2pt,orange,draw opacity=0.5] (1.5,0) -- ++(158.2:2) (1.5,0) -- ++(-21.8:2) (1.5,0) -- ++(-158.2:2) (1.5,0) -- ++(21.8:2); \draw [line width=2pt,orange,draw opacity=0.5] (1.5,0) -- ++(75:2) (1.5,0) -- ++(105:2); \draw [line width=2pt,orange,draw opacity=0.5] (1.5,0) -- ++(-75:2) (1.5,0) -- ++(-105:2); \foreach \x in {80,85,90,95,100} {\filldraw (1.5,0) ++(\x:1.7) circle (0.7pt);} \foreach \x in {-80,-85,-90,-95,-100} {\filldraw (1.5,0) ++(\x:1.7) circle (0.7pt);} \draw [thick,blue] (1.5,1) circle (0.5cm) (1.5,-1) circle (0.5cm); \draw [thick,blue,rounded corners=4pt,postaction={decorate,decoration={markings,mark=at position 0.25 with {\arrowreversed{latex}}, mark=at position 0.65 with {\arrowreversed{latex}}}}] (1,1) -- (0.7,1) -- (0.2,0) -- (0.7,-1) -- (1,-1); \draw [thick,blue,rounded corners=4pt,postaction={decorate,decoration={markings,mark=at position 0.35 with {\arrow{latex}}, mark=at position 0.75 with {\arrow{latex}}}}] (2,1) -- (2.3,1) -- (2.8,0) -- (2.3,-1) -- (2,-1); \node at (1.5,1) [blue]{\Large$\pmb\circlearrowright$}; \node at (1.5,-1)
[blue]{\Large$\pmb\circlearrowright$}; \node at (4,0) {$\Longleftrightarrow$}; \begin{scope}[xshift=5cm] \draw [thick] (0.5,-1) -- (0,0) -- (0.5,1); \draw [thick] (2.5,-1) -- (3,0) -- (2.5,1); \draw [thick] (0,0) -- (3,0); \draw [thick,blue] (1.5,1) circle (0.5cm) (1.5,-1) circle (0.5cm); \draw [line width=2pt,orange,draw opacity=0.5] (1.5,-0.3) -- (1.5,0.3) (1.5,0.3) -- ++(170.91:2) (1.5,0.3) -- ++(9.09:2) (1.5,0.3) -- ++(70:1.7) (1.5,0.3) -- ++(110:1.7) (1.5,-0.3) -- ++(-9.09:2) (1.5,-0.3) -- ++(-170.91:2) (1.5,-0.3) -- ++(-70:1.7) (1.5,-0.3) -- ++(-110:1.7); \foreach \x in {80,85,90,95,100} {\filldraw (1.5,0) ++(\x:1.7) circle (0.7pt);} \foreach \x in {-80,-85,-90,-95,-100} {\filldraw (1.5,0) ++(\x:1.7) circle (0.7pt);} \draw [thick,blue,rounded corners=4pt,postaction={decorate,decoration={markings,mark=at position 0.15 with {\arrow{latex}}, mark=at position 0.9 with {\arrow{latex}}}}] (1,-1) -- (0.7,-1) -- (0.3,-0.2) -- (2.7,0.2) -- (2.3,1) -- (2,1); \draw [thick,blue,rounded corners=4pt,postaction={decorate,decoration={markings,mark=at position 0.15 with {\arrow{latex}}, mark=at position 0.9 with {\arrow{latex}}}}] (1,1) -- (0.7,1) -- (0.3,0.2) -- (2.7,-0.2) -- (2.3,-1) -- (2,-1); \node at (1.5,1) [red]{\Large$\pmb\circlearrowleft$}; \node at (1.5,-1) [blue]{\Large$\pmb\circlearrowright$}; \node at (3.5,0){.}; \end{scope} \end{tikzpicture}} \end{equation} The extra triangulation separates the $n$-gon into two subpolygons. Notably, in one of them we need to reverse the ordering. In this section, we study how the above picture is realized by good cycle representations, namely, how to merge or split certain parts in good cycle representations to fix or unfix a propagator. We start with the general discussion. 
The left hand side of Eq.~\eqref{eq:split} indicates that there exists a good cycle representation with the form \begin{equation} \label{eq:split_prop} \pmb{\beta}=\pmb{\beta}_{\text{lower}}\pmb{\beta}_{\text{upper}}=\underbrace{(\ldots)\pmb{|}(\ldots)(\ldots)\pmb{|}\ldots(\ldots)}_{\text{lower}}\,\pmb{\Bigg|}\,\underbrace{(\ldots)\pmb{|}(\ldots)(\ldots)\pmb{|}\ldots\ldots}_{\text{upper}}~,~~~ \end{equation} where subscripts $_{\text{lower}}$ and $_{\text{upper}}$ denote the external legs below and above the extra triangulation line in~\eqref{eq:split}. This cycle representation can either be a V-type or P-type one. Suppose the upper set consists of \begin{equation} \text{upper}=\{i,i+1\ldots j\}~,~~~ \end{equation} then the reversing process of~\eqref{eq:split} can be realized by \begin{equation} \label{eq:partreverse} \pmb{\beta}_{\text{upper}}^{\text{reversed}}=\pmb{\beta}_{\text{upper}}\pmb{\beta}_{r}~,~~~ \end{equation} where $\pmb{\beta}_{r}$ simply flips the ordering $\pmb{\beta}_r|i,i+1\ldots j\rangle=|j\ldots i+1,i\rangle$. Similar to Eq.~\eqref{eq:grgc}, $\pmb{\beta}_r$ has the cycle representation \begin{equation} \label{eq:beta_r} \pmb{\beta}_{r}=\left\{\begin{array}{lcl} (ij)(i+1,j-1)\ldots(\frac{i+j-1}{2},\frac{i+j+1}{2}) & \qquad\qquad &j-i=\text{odd} \\ (ij)(i+1,j-1)\ldots(\frac{i+j-2}{2},\frac{i+j+2}{2})(\frac{i+j}{2}) & \qquad\qquad & j-i=\text{even} \end{array}\right.~.~~~ \end{equation} Therefore, the process of~\eqref{eq:split} can be realized in an algebraic way as\footnote{Equivalently, we can reverse the lower part. The result will only differ by an overall reversing.} \begin{equation} \pmb{\beta}_{\text{lower}}\pmb{\beta}_{\text{upper}}\;\Longleftrightarrow\;\pmb{\beta}_{\text{lower}}\pmb{\beta}_{\text{upper}}\pmb{\beta}_{r}~.~~~ \end{equation} If the propagator manifested in~\eqref{eq:split_prop} is an \emph{overall} one, then the above process gives the immediate mother amplitude that has the original amplitude as a part. 
\emph{Otherwise}, the above process gives the immediate child amplitude that contains this propagator as an overall factor, and is a part of the original amplitude. Next, we use the example given in~\eref{8p-bad-1-1} with cycle representations~\eref{8p-bad-1-2} to demonstrate our idea, \begin{equation} \label{eq:example1} {\mbox{PT}}(\pmb{\beta})=\langle 12846573\rangle\;\Longrightarrow\;\frac{1}{s_{12}s_{56}s_{8123}}\left(\frac{1}{s_{812}}+\frac{1}{s_{123}}\right)\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)~.~~~ \end{equation} We first consider its immediate child amplitudes by fixing the poles $s_{812}$, $s_{123}$, $s_{456}$ and $s_{567}$ one by one. \paragraph{Fix the pole $s_{812}$:} To achieve this goal, we need to use the good cycle representations that manifest the separation into $\{8,1,2\}$ and $\{3,4,5,6,7\}$. In~\eref{8p-bad-1-2}, only $(12)(3)(47)(5)(6)(8)$ and $(128)(3467)(5)$ satisfy the condition. We can achieve the child amplitude by using either one. Starting with $(12)(3)(47)(5)(6)(8)$, we split all cycles into two parts according to the pole, namely, $(8)(12)\pmb{|}(3)(47)(5)(6)$. Now we can keep one part invariant and apply the prescription~\eqref{eq:partreverse} to the other. For example, keeping the part $(8)(12)$, we act on the other part as \begin{eqnarray} [(3)(47)(5)(6)]\cdot[(37)(46)(5)]=(3467)(5)~,~~~\end{eqnarray} where $(37)(46)(5)$ is the reversing permutation $\pmb{\beta}_r$ obtained from~\eqref{eq:beta_r}. Putting the two parts together, we get the cycle representation $(8)(12)(3467)(5)$, which corresponds to the PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{12837564}$.
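These cycle manipulations are mechanical enough to verify by computer. The following Python sketch (ours, not part of the original derivation) encodes permutations as dictionaries; the composition convention $(p\cdot q)(x)=p(q(x))$ is inferred from the example above:

```python
# Verify the cycle manipulation used when fixing the pole s_812.
# Assumptions (ours): permutations are dicts, missing keys are fixed points,
# and composition acts as (p.q)(x) = p(q(x)).

def from_cycles(cycles):
    """Build a permutation dict from cycles, e.g. [(4, 7)] -> {4: 7, 7: 4}."""
    perm = {}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            perm[a] = b
    return perm

def compose(p, q):
    """Composition (p.q)(x) = p(q(x))."""
    keys = set(p) | set(q)
    return {x: p.get(q.get(x, x), q.get(x, x)) for x in keys}

def reversing_perm(i, j):
    """The reversing permutation beta_r of Eq. (eq:beta_r): k -> i + j - k."""
    return {k: i + j - k for k in range(i, j + 1)}

# beta_r for the block {3,...,7} is (37)(46)(5), as quoted in the text
assert reversing_perm(3, 7) == from_cycles([(3, 7), (4, 6), (5,)])

# [(3)(47)(5)(6)] . [(37)(46)(5)] = (3467)(5)
result = compose(from_cycles([(4, 7)]), reversing_perm(3, 7))
assert result == from_cycles([(3, 4, 6, 7), (5,)])
print("cycle manipulation verified")
```

The same two helpers reproduce all the compositions appearing in the remaining examples of this subsection.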
It indeed gives the desired child amplitude, \begin{eqnarray} {\mbox{PT}}(\pmb{\beta})=\vev{12837564}\;\Longrightarrow\;\frac{1}{s_{12} s_{56}s_{8123}} \left(\frac{1}{s_{812}}\right)\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)~.~~~\label{8p-bad-1-daughter-812}\end{eqnarray} Alternatively, we keep the part $(3)(47)(5)(6)$ intact and apply the prescription~\eqref{eq:partreverse} to the part $(8)(12)$, \begin{eqnarray} [(8)(12)]\cdot[(82)(1)]=(812)~.~~~\end{eqnarray} Putting them together, we get another cycle representation $(812)(3)(47)(5)(6)$, corresponding to the same PT-factor as~\eqref{8p-bad-1-daughter-812}. If we use the other cycle representation $(128)\pmb{|}(3467)(5)$, we get the same result, \begin{eqnarray} [(128)]\cdot[(82)(1)](3467)(5) & = & (8)(12)(3467)(5)~,\nonumber\\ % (128)[(3467)(5)]\cdot[(37)(46)(5)] & = & (128)(3)(47)(5)(6)~.~~~\end{eqnarray} % \paragraph{Fix the pole $s_{123}$:} For this case, the cycle representations $(12)(3)(47)(5)(6)(8)$ and $(132)(4875)(6)$ from~\eref{8p-bad-1-2} can be used. By similar manipulations, we get \begin{subequations} \begin{eqnarray} (12)(3)(47)(5)(6)(8) & \Longrightarrow & \left\{\begin{array}{rcl} [(12)(3)]\cdot[(13)(2)](47)(5)(6)(8) & = & (132)(47)(5)(6)(8) \\ (12)(3)[(47)(5)(6)(8)]\cdot[(48)(57)(6)] & = & (12)(3)(4875)(6) \end{array}\right.
\,,\\ (132)(4875)(6) & \Longrightarrow & \left\{\begin{array}{rcl} [(132)]\cdot[(13)(2)](4875)(6) & = & (12)(3)(4875)(6) \\ (132)[(4875)(6)]\cdot[(48)(57)(6)] & = & (132)(47)(5)(6)(8) \end{array}\right.~.~~~ \end{eqnarray} \end{subequations} % Both results correspond to the PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{12756483}$, which is evaluated to \begin{eqnarray} \frac{1}{s_{12} s_{56}s_{8123}} \left(\frac{1}{s_{123}}\right)\left(\frac{1}{s_{456}}+\frac{1}{s_{567}}\right)~.~~~\label{8p-bad-1-daughter-123}\end{eqnarray} \paragraph{Fix the pole $s_{456}$:} For this case, the cycle representations $(1)(2)(38)(4)(56)(7)$ and $(1)(2378)(456)$ from~\eref{8p-bad-1-2} can be used. By similar manipulations we get \begin{subequations} \begin{eqnarray} (1)(2)(38)(4)(56)(7) & \Longrightarrow & \left\{\begin{array}{rcl} (1)(2)(38)(7)[(4)(56)]\cdot[(46)(5)] & = & (1)(2)(38)(7)(456) \\ \left[(1)(2)(38)(7)\right]\cdot[(73)(82)(1)](4)(56) & = & (1)(2378)(4)(56) \end{array}\right.\,,\\ (1)(2378)(456) & \Longrightarrow & \left\{\begin{array}{rcl} (1)(2378)[(456)]\cdot[(46)(5)] & = & (1)(2378)(4)(56) \\ \left[(1)(2378)\right]\cdot[(73)(82)(1)](456) & = & (1)(2)(38)(7)(456) \end{array}\right.~.~~~ \end{eqnarray} \end{subequations} Both results correspond to the PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{12856473}$, which is evaluated to \begin{eqnarray} \frac{1}{s_{12} s_{56}s_{8123}} \left(\frac{1}{s_{123}}+\frac{1}{s_{812}}\right)\left(\frac{1}{s_{456}}\right)~.~~~\label{8p-bad-1-daughter-456}\end{eqnarray} \paragraph{Fix the pole $s_{567}$:} For this case, the cycle representations $(1)(2)(38)(4)(56)(7)$ and $(1843)(2)(576)$ from~\eref{8p-bad-1-2} can be used.
By similar manipulations we get \begin{subequations} \begin{eqnarray} (1)(2)(38)(4)(56)(7) & \Longrightarrow &\left\{\begin{array}{rcl} (1)(2)(38)(4)[(56)(7)]\cdot[(57)(6)] & = & (1)(2)(38)(4) (576) \\ \left[(1)(2)(38)(4)\right]\cdot[(84)(13)(2)](56)(7) & = & (8431)(2)(56)(7) \end{array}\right.\,, \\ (1843)(2)(576) & \Longrightarrow & \left\{\begin{array}{rcl} (1843)(2)[(576)]\cdot[(57)(6)] & = & (1843)(2)(56)(7)\\ \left[(1843)(2)\right]\cdot[(84)(13)(2)](576) & = & (1)(2)(38)(4)(576) \end{array}\right.~.~~~ \end{eqnarray} \end{subequations} Both results correspond to the PT-factor ${\mbox{PT}}(\pmb{\beta})=\vev{12847563}$, which is evaluated to \begin{eqnarray} \frac{1}{s_{12} s_{56}s_{8123}} \left(\frac{1}{s_{123}}+\frac{1}{s_{812}}\right)\left(\frac{1}{s_{567}}\right)~.~~~\label{8p-bad-1-daughter-567}\end{eqnarray} After showing how to get the child amplitudes, we now discuss the case of mother amplitudes. This happens when we apply the prescription~\eqref{eq:partreverse} to a separation that manifests an \emph{overall} propagator. For the case~\eqref{eq:example1}, there are three common poles $s_{12}$, $s_{56}$ and $s_{8123}$. Relaxing any one of them, we can get a mother amplitude. The procedure is similar to the above discussion, and we again do it one by one. \paragraph{Unfix the pole $s_{12}$:} For this case, the following two cycle representations $(1)(2)(38)(4)(56)(7)$ and $(12)(3)(47)(5)(6)(8)$ manifest the pole $s_{12}$.
If we take $(1)(2)(38)(4)(56)(7)$, we have the following calculation according to~\eqref{eq:partreverse}, \begin{subequations} \begin{align} (1)(2)(38)(4)(56)(7) & \Longrightarrow & \left\{\begin{array}{rcl} [(1)(2)]\cdot[(12)](38)(4)(56)(7) & = & (12)(38)(4)(56)(7) \\ % (1)(2)[(38)(4)(56)(7)]\cdot[(38)(47)(56)] & = & (1)(2)(3)(47)(5)(6)(8) \end{array}\right.~,~~~ \\ (12)(3)(47)(5)(6)(8) & \Longrightarrow & \left\{\begin{array}{rcl} [(12)]\cdot[(12)](3)(47)(5)(6)(8) & = & (1)(2)(3)(47)(5)(6)(8) \\ (12)[(3)(47)(5)(6)(8)]\cdot[(38)(47)(56)] & = & (12)(38)(4)(56)(7) \end{array}\right.~,~~~ \end{align} \end{subequations} which correspond to ${\mbox{PT}}(\pmb{\beta})=\vev{12375648}$. It gives ten terms, among which four are just~\eqref{eq:example1}. \paragraph{Unfix the pole $s_{56}$:} For this case, the cycle representations $(1)(2)(38)(4)(56)(7)$ and $(12)(3)(47)(5)(6)(8)$ manifest the pole $s_{56}$. We have the following calculation according to~\eqref{eq:partreverse}, \begin{subequations} \begin{eqnarray} (1)(2)(38)(4)(56)(7) & \Longrightarrow & \left\{\begin{array}{rcl} [(56)]\cdot[(56)](1)(2) (38)(4)(7) & = & (1)(2) (38)(4)(5)(6)(7) \\ % (56)[(1)(2)(38)(4)(7)]\cdot[(47)(38)(12)] & = & (56)(12)(3)(8)(47) \end{array}\right.\,, \\ (12)(3)(47)(5)(6)(8) & \Longrightarrow &\left\{\begin{array}{rcl} [(5)(6)]\cdot[(56)](12)(3)(47)(8) & = & (12)(3)(47)(56)(8) \\ % (5)(6)[(12)(3)(47)(8)]\cdot[(47)(38)(12)] & = & (5)(6)(1)(2)(4)(7)(38) \end{array}\right.~.~~~ \end{eqnarray} \end{subequations} One can check that they correspond to ${\mbox{PT}}(\pmb{\beta})=\vev{12845673}$. It also gives ten terms, among which four are just~\eqref{eq:example1}. \paragraph{Unfix the pole $s_{8123}$:} For this case, we have two good cycle representations $(1)(2)(38)(4)(56)(7)$ and $(12)(3)(47)(5)(6)(8)$ that manifest the pole $s_{8123}$. 
We have the following calculation according to~\eqref{eq:partreverse}, \begin{subequations} \begin{eqnarray} (1)(2)(38)(4)(56)(7) & \Longrightarrow & \left\{\begin{array}{rcl} [(1)(2)(38)]\cdot[(12)(38)](4)(56)(7) & = & (12)(3)(8)(4)(56)(7) \\ % (1)(2)(38)[(4)(56)(7)]\cdot[(47)(56)] & = & (1)(2)(38)(47)(5)(6) \end{array}\right.\,,\\ (12)(3)(47)(5)(6)(8) & \Longrightarrow & \left\{\begin{array}{rcl} [(12)(3)(8)]\cdot[(12)(38)](47)(5)(6) & = &(1)(2)(38)(47)(5)(6)\\ % (12)(3)(8)[(47)(5)(6)]\cdot[(47)(56)] & = &(12)(3)(8)(4)(56)(7) \end{array}\right.~,~~~ \end{eqnarray} \end{subequations} which correspond to ${\mbox{PT}}(\pmb{\beta})=\vev{12875643}$. It gives 14 terms, among which four are just~\eqref{eq:example1}. \subsection{Relation analysis via cross-ratio factor} We can also study the relations of PT-factors from another approach. As we have seen, the relations between different PT-factors can be viewed as selecting terms corresponding to specific pole structures in the evaluated results. For instance, $\vev{1234}\vev{1234}$ evaluates to $\frac{1}{s_{12}}+\frac{1}{s_{23}}$ while $\vev{1234}\vev{1243}$ evaluates to $\frac{1}{s_{12}}$. This means that by selecting terms with pole $\frac{1}{s_{12}}$ in $\vev{1234}\vev{1234}$, we can reproduce the result of $\vev{1234}\vev{1243}$.
To achieve this goal at the CHY-integrand level, we can use the cross-ratio factor given in paper \cite{Feng:2016nrf}, which we will call the \emph{selecting factor} \begin{eqnarray} f^{\text{select}}[a,b,c,d]:=\frac{[ab][cd]}{[ac][bd]}~~~,~~~[ab]:=\sigma_{ab}~.~~~\end{eqnarray} To pick up the Feynman diagrams with a pole $\frac{1}{s_A}$ from a given CHY-integral result, where $A$ follows a certain color ordering, we propose to multiply the CHY-integrand by the selecting factor $f^{\text{select}}[\overline{A}_{-1},A_1,A_{-1},\overline{A}_1]$, where $A_1,A_{-1}$ are the first and last elements of the subset $A$ respectively, and $\overline{A}$ is the complement of $A$. In other words, to pick up terms with the pole $\frac{1}{s_A}$, the arguments $b,c$ of the selecting factor should be the two ending legs of the set $A$, while the arguments $a,d$ are the two legs adjacent to $b,c$ outside the set $A$, respectively. As an illustration, let us consider the above-mentioned PT-factors $\vev{1234}$ and $\vev{1243}$. We want to select terms with $\frac{1}{s_{12}}$ pole in the evaluated result of $\vev{1234}\vev{1234}$, which means that we need to take the selecting factor $f^{\text{select}}[4,1,2,3]$, \begin{eqnarray} \vev{1234}f^{\text{select}}[4,1,2,3]=\frac{1}{\sigma_{12}\sigma_{23}\sigma_{34}\sigma_{41}}\frac{\sigma_{41}\sigma_{23}}{\sigma_{42}\sigma_{13}}=-\vev{1243}~.~~~\end{eqnarray} It indeed produces the PT-factor $\vev{1243}$, up to an overall sign. There is a subtlety in the choice of the selecting factor. The bi-adjoint scalar theory has two color orderings, given by ${\mbox{PT}}(\pmb {\alpha})$ and ${\mbox{PT}}(\pmb {\beta})$. We should choose $A$ to follow one of the orderings. If $\pmb{\alpha}=\pmb{\beta}$, there is no ambiguity in defining the selecting factor, which is the situation discussed in~\cite{Feng:2016nrf}.
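The identity $\vev{1234}f^{\text{select}}[4,1,2,3]=-\vev{1243}$ above can also be confirmed numerically at a random configuration of punctures. A small sketch (ours), with $\sigma_{ab}=\sigma_a-\sigma_b$:

```python
# Numerical check of <1234> f_select[4,1,2,3] = -<1243> at random punctures.
import random

random.seed(0)
sigma = {i: random.uniform(-10.0, 10.0) for i in range(1, 5)}
ab = lambda a, b: sigma[a] - sigma[b]    # [ab] := sigma_a - sigma_b

def pt(order):
    """Parke-Taylor factor 1 / (sigma_{a1 a2} sigma_{a2 a3} ... sigma_{an a1})."""
    denom = 1.0
    for a, b in zip(order, order[1:] + order[:1]):
        denom *= ab(a, b)
    return 1.0 / denom

def f_select(a, b, c, d):
    """Cross-ratio selecting factor [ab][cd] / ([ac][bd])."""
    return ab(a, b) * ab(c, d) / (ab(a, c) * ab(b, d))

lhs = pt([1, 2, 3, 4]) * f_select(4, 1, 2, 3)
rhs = -pt([1, 2, 4, 3])
assert abs(lhs - rhs) < 1e-9 * abs(rhs)
print("identity holds at this kinematic point")
```

Since both sides are rational functions of the $\sigma_i$, agreement at a generic random point is strong evidence for the identity; the algebraic proof is the one-line cancellation displayed above.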
However if $\pmb {\alpha}\neq \pmb {\beta}$, the multiplication \begin{eqnarray} {\mbox{PT}}(\pmb {\alpha})\times {\mbox{PT}}(\pmb{\beta})\times f^{\text{select}}[\overline{A}_{-1},A_1,A_{-1},\overline{A}_1]\end{eqnarray} has two choices for a given set $A$. The arguments of $f^{\text{select}}$ depend on the color ordering of legs, and we have two color orderings to rely on. It can be shown that although we would get two different selecting factors, the resulting CHY-integrands are equivalent in the sense that the difference of the two CHY-integrands evaluates to zero. We note that this happens only when $\frac{1}{s_A}$ is indeed a physical pole of the integrated result of ${\mbox{PT}}(\pmb{\alpha})\times{\mbox{PT}}(\pmb{\beta})$, but not an \emph{overall} one. Now let us present a brief explanation of why the selecting factor $f^{\text{select}}$ is able to pick up terms with specific poles. It is known from \cite{Baadsgaard:2015voa,Baadsgaard:2015ifa,Baadsgaard:2015hia} that the order of the pole $\frac{1}{s_A}$ in the evaluated result is characterized by the \emph{pole index} \begin{eqnarray} \chi[A]=\mathbb{L}[A]-2(|A|-1)~,~~~\end{eqnarray} where $\mathbb{L}[A]$ is the linking number of subset $A$, and $|A|$ is the length of subset $A$.\footnote{The linking number $\mathbb{L}[A]$ can be read out from the so-called {\sl 4-regular} diagrams, which is discussed in detail in~\cite{Baadsgaard:2015voa,Baadsgaard:2015ifa,Baadsgaard:2015hia}.} If $\chi [A]<0$, there is no ${s_A}$ pole, while if $\chi[A]\geqslant 0$, the pole would appear in the result as $\frac{1}{s_A^{\chi+1}}$. For CHY-integrands with two PT-factors, we only have simple poles, namely, $\chi[A]\leqslant 0$ for any subset $A$. With this in mind, let us take a further look at the selecting factor $f^{\text{select}}$, assuming that $\frac{1}{s_A}$ is not an overall pole.
The combinations in both the numerator and denominator of $f^{\text{select}}$ represent lines connecting elements in subset $A$ and its complement $\overline{A}$, so that $f^{\text{select}}$ will not change the linking number of $A$ itself: after multiplying $f^{\text{select}}$, we still have $\chi[A]=0$ and the pole $\frac{1}{s_A}$ remains unchanged. Now suppose there is another pole $\frac{1}{s_B}$, where $B$ has nonempty overlap with both $A$ and $\overline{A}$; then it will be removed by $f^{\text{select}}$. The reason is that in the denominators of the PT-factors there are $[A_{-1}\overline{A}_1]$ and $[\overline{A}_{-1}A_{1}]$, while in the numerator of the selecting factor there are the same factors $[A_{-1}\overline{A}_1]$ and $[\overline{A}_{-1}A_1]$, which cancel them and reduce the linking number of subset $B$ by at least one, such that $\chi[B]$ is reduced from $0$ to $-1$. Thus all the poles that are not compatible with $\frac{1}{s_A}$ are removed. Finally, we need to show that the terms with $\frac{1}{s_A}$ are not altered by the selecting factor, namely, we should confirm that in a term with pole $\frac{1}{s_A}$, $f^{\text{select}}$ does not change the pole indices of any of the other poles. By the compatibility condition, these poles should correspond to subsets of either $A$ or $\overline{A}$. For the four factors $[\overline{A}_{-1}A_1]$, $[A_{-1}\overline{A}_1]$, $[\overline{A}_{-1}A_{-1}]$ and $[A_1\overline{A}_1]$ in $f^{\text{select}}$, each one contains an element from subset $A$ and another from $\overline{A}$, so that none of them contributes to the linking number of either $A$ or $\overline{A}$. In general, when using $f^{\text{select}}$ to pick up a pole $\frac{1}{s_A}$, we will encounter three situations. In the first situation, the original theory does not contain such a pole, so multiplying by the selecting factor does not make sense.
For instance, the CHY-integrand $\vev{123456}\vev{124563}$ evaluates to \begin{eqnarray} { \vev{123456}}\times { \vev{124563}}\to \frac{1}{s_{12}s_{123}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)~,~~~\label{selectExam}\end{eqnarray} which does not contain the pole $\frac{1}{s_{34}}$. If we insist on taking the selecting factor $f^{\text{select}}[2,3,4,5]$ following the color ordering of the first PT-factor, we get \begin{eqnarray} \Big({ \vev{123456}} f^{\text{select}}[2,3,4,5]\Big)\times { \vev{124563}}\to -\frac{1}{s_{56}s_{124}}\left(\frac{1}{s_{12}}+\frac{1}{s_{56}}\right)~,~~~\end{eqnarray} which is a completely irrelevant answer. In the second situation, the pole $\frac{1}{s_A}$ we pick is an overall pole of all the terms. By multiplying the selecting factor, we produce the mother amplitude, obtained by pinching the propagator $\frac{1}{s_A}$ in the Feynman diagram. For example, there are two overall poles $\frac{1}{s_{12}}$ and $\frac{1}{s_{123}}$ in~\eqref{selectExam}. If we follow the color ordering of the first PT-factor, and multiply $f^{\text{select}}[6,1,2,3]$ that corresponds to $\frac{1}{s_{12}}$, we get \begin{eqnarray} \Big({ \vev{123456}}f^{\text{select}}[6,1,2,3]\Big)\times { \vev{124563}} \to \frac{1}{s_{12}s_{123}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)+ \frac{1}{s_{13}s_{123}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)~,~~~\end{eqnarray} which is a mother amplitude with additional terms produced. Similarly, if we follow the color ordering of the second PT-factor, and multiply $f^{\text{select}}[3,1,2,4]$ that corresponds to $\frac{1}{s_{12}}$, we get \begin{eqnarray} { \vev{123456}}\times \Big({ \vev{124563}}f^{\text{select}}[3,1,2,4]\Big)\to \frac{1}{s_{12}s_{123}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)+ \frac{1}{s_{23}s_{123}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)~,~~~\end{eqnarray} which is another mother amplitude with different additional terms produced.
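The pole content of~\eqref{selectExam} can be cross-checked against the pole index $\chi[A]$ introduced above. A minimal sketch (ours), under the assumption that for a two-PT-factor integrand the linking number $\mathbb{L}[A]$ simply counts the denominator lines $\sigma_{ab}$ with both ends inside $A$, one set of lines per PT-factor:

```python
# Cross-check the pole content of <123456> x <124563> via the pole index
# chi[A] = L[A] - 2(|A| - 1).  Assumption (ours): L[A] counts cyclically
# adjacent pairs inside A in each of the two color orderings.

def linking_number(A, orderings):
    A = set(A)
    count = 0
    for order in orderings:
        for a, b in zip(order, order[1:] + order[:1]):
            if a in A and b in A:
                count += 1
    return count

def pole_index(A, orderings):
    return linking_number(A, orderings) - 2 * (len(A) - 1)

orderings = [[1, 2, 3, 4, 5, 6], [1, 2, 4, 5, 6, 3]]

assert pole_index([1, 2], orderings) == 0     # simple pole 1/s_12 present
assert pole_index([1, 2, 3], orderings) == 0  # simple pole 1/s_123 present
assert pole_index([3, 4], orderings) == -1    # no 1/s_34 pole
print("pole indices agree with the evaluated result")
```

The three assertions reproduce exactly the pole structure quoted in~\eqref{selectExam}.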
A similar calculation can be done for the overall pole $s_{123}$, and the two different mother amplitudes are given by \begin{align} \Big({ \vev{123456}}f^{\text{select}}[6,1,3,4]\Big)\times { \vev{124563}} & \to \frac{1}{s_{12} s_{123}}\left(\frac{1}{s_{45}}+\frac{1}{s_{56}}\right)+\frac{1}{s_{12} s_{124}}\left( \frac{1}{s_{36}}+\frac{1}{s_{56}}\right)+\frac{1}{s_{12} s_{36}s_{45}}~,~~~\nonumber\\ { \vev{123456}}\times \Big( { \vev{124563}}f^{\text{select}}[6,3,2,4]\Big) & \to \frac{1}{s_{12} s_{123}}\left( \frac{1}{s_{45}}+\frac{1}{s_{56}}\right)+\frac{1}{s_{12} s_{126}}\left( \frac{1}{s_{34}}+\frac{1}{s_{45}}\right)+\frac{1}{s_{12} s_{34} s_{56}}~,~~~\nonumber \end{align} indicating that~\eqref{selectExam} can be part of the mother amplitudes with different color orderings. In the third situation, the pole we pick is physical but not an overall one; thus by multiplying the selecting factor we get the immediate child amplitude. This is the case we have considered in detail at the beginning of this subsection. Consider an eight point CHY-integrand with ${\mbox{PT}}(\pmb {\alpha})=\vev{12345678}$ and ${\mbox{PT}}(\pmb{\beta})=\vev{12348765}$.
The amplitude is \begin{align} {\mbox{PT}}(\pmb{\alpha}){\mbox{PT}}(\pmb {\beta})&\to\frac{1}{s_{1234}}\left( \frac{1}{s_{12} s_{34}} +\frac{1}{s_{12} s_{123}}+\frac{1}{s_{23} s_{123}}+\frac{1}{s_{23} s_{234}}+\frac{1}{s_{34} s_{234}}\right)\nonumber\\ &~~~~~~~~~~~~\times\left( \frac{1}{s_{56} s_{78}} +\frac{1}{s_{56} s_{567}}+\frac{1}{s_{67} s_{567}}+\frac{1}{s_{67} s_{678}}+\frac{1}{s_{78} s_{678}}\right)\nonumber\\ &=\frac{1}{s_{1234}}\Bigg(\begin{array}{c} \begin{tikzpicture}[scale=0.5] \draw[](0,0)--(1,0) (0.5,1)--(0,0)--(0.5,-1) (-0.7,0.7)--(0,0)--(-0.7,-0.7); \node at (1.2,0.35) []{{\footnotesize $P_{1234}$}}; \node at (0.5,-1.35) []{{\footnotesize $1$}}; \node at (-0.75,-1.1) []{{\footnotesize $2$}}; \node at (-0.75,1.1) []{{\footnotesize $3$}}; \node at (0.5,1.35) []{{\footnotesize $4$}}; \end{tikzpicture} \end{array}\Bigg)\Bigg( \begin{array}{c} \begin{tikzpicture}[scale=0.5] \draw[](0,0)--(-1,0) (-0.5,1)--(0,0)--(-0.5,-1) (0.7,0.7)--(0,0)--(0.7,-0.7); \node at (-1.2,0.35) []{{\footnotesize $P_{5678}$}}; \node at (-0.5,-1.35) []{{\footnotesize $8$}}; \node at (0.75,-1.1) []{{\footnotesize $7$}}; \node at (0.75,1.1) []{{\footnotesize $6$}}; \node at (-0.5,1.35) []{{\footnotesize $5$}}; \end{tikzpicture} \end{array}\Bigg)~,~~~\label{selectExam1} \end{align} where we used the five point vertices to represent the terms in two parentheses. For instance, if we want to pick up terms with pole $\frac{1}{s_{78}}$, either $f^{\text{select}}[6,7,8,1]$ following the color ordering of ${\mbox{PT}}(\pmb {\alpha})$ or $f^{\text{select}}[4,8,7,6]$ following the color ordering of ${\mbox{PT}}(\pmb {\beta})$ can do the job. 
Indeed, by direct computation, we confirm that \begin{equation} \left.\renewcommand*{\arraystretch}{1.5}\begin{array}{l} \Big({\mbox{PT}}(\pmb {\alpha})f^{\text{select}}[6,7,8,1]\Big)\times {\mbox{PT}}(\pmb {\beta}) \\ {\mbox{PT}}(\pmb {\alpha}) \times \Big({\mbox{PT}}(\pmb {\beta})f^{\text{select}}[4,8,7,6]\Big) \end{array}\right\}\to \frac{1}{s_{1234}}\times\frac{1}{s_{78}}\left( \frac{1}{s_{56}}+\frac{1}{s_{678}}\right)\times \Bigg(\begin{array}{c} \begin{tikzpicture}[scale=0.5] \draw[](0,0)--(1,0) (0.5,1)--(0,0)--(0.5,-1) (-0.7,0.7)--(0,0)--(-0.7,-0.7); \node at (1.2,0.35) []{{\footnotesize $P_{1234}$}}; \node at (0.5,-1.35) []{{\footnotesize $1$}}; \node at (-0.75,-1.1) []{{\footnotesize $2$}}; \node at (-0.75,1.1) []{{\footnotesize $3$}}; \node at (0.5,1.35) []{{\footnotesize $4$}}; \end{tikzpicture} \end{array}\Bigg)~.~~~ \label{selectExam2} \end{equation} In fact, we can multiply more than one selecting factor to pick up terms with several specific poles. For example, if we want to pick up terms with pole $\frac{1}{s_{78}}\frac{1}{s_{678}}$, we can start from the result~\eqref{selectExam2} and take the selecting factor $f^{\text{select}}[5,6,8,1]$ following the color ordering of ${\mbox{PT}}(\pmb {\alpha})$ in the second row of~\eqref{selectExam2}, or $f^{\text{select}}[4,8,6,5]$ following the color ordering of ${\mbox{PT}}(\pmb {\beta})$ in the first row of~\eqref{selectExam2}.
It leads to four possible multiplications, and by direct computation, they produce the same result as \small \begin{eqnarray} \left.{\renewcommand*{\arraystretch}{1.5} \begin{array}{l} \big({\mbox{PT}}(\pmb {\alpha})f^{\text{select}}[6,7,8,1]f^{\text{select}}[5,6,8,1]\big)\times {\mbox{PT}}(\pmb {\beta})\\ \big({\mbox{PT}}(\pmb {\alpha})f^{\text{select}}[6,7,8,1]\big)\times \big({\mbox{PT}}(\pmb {\beta})f^{\text{select}}[4,8,6,5]\big)\\ \big({\mbox{PT}}(\pmb {\alpha})f^{\text{select}}[5,6,8,1]\big)\times \big({\mbox{PT}}(\pmb {\beta})f^{\text{select}}[4,8,7,6]\big)\\ {\mbox{PT}}(\pmb {\alpha})\times \big({\mbox{PT}}(\pmb {\beta})f^{\text{select}}[4,8,7,6]f^{\text{select}}[4,8,6,5]\big) \end{array}}\right\} \to \frac{1}{s_{1234}}\times\frac{1}{s_{78}s_{678}}\times \Bigg(\begin{array}{c} \begin{tikzpicture}[scale=0.5] \draw[](0,0)--(1,0) (0.5,1)--(0,0)--(0.5,-1) (-0.7,0.7)--(0,0)--(-0.7,-0.7); \node at (1.2,0.35) []{{\footnotesize $P_{1234}$}}; \node at (0.5,-1.35) []{{\footnotesize $1$}}; \node at (-0.75,-1.1) []{{\footnotesize $2$}}; \node at (-0.75,1.1) []{{\footnotesize $3$}}; \node at (0.5,1.35) []{{\footnotesize $4$}}; \end{tikzpicture} \end{array}\Bigg)~.~~~ \label{selecttwopoles} \end{eqnarray}\normalsize We note that the first and last of them do not end up directly with pure PT-factors. Nontrivial identities are required to further reduce the results. Thus, multiplying the selecting factor is a little bit broader than the situation discussed in the previous subsection. We can further pick up terms with, say, $\frac{1}{s_{12}}$ pole, from the previous result. 
It can be checked that the following eight multiplications of selecting factors \small \begin{align*} &\left({\mbox{PT}}(\pmb{{\alpha}})f[6,7,8,1]f[5,6,8,1]f[8,1,2,3]\right)\times {\mbox{PT}}(\pmb{{\beta}})& &\left({\mbox{PT}}(\pmb{{\alpha}})f[6,7,8,1]f[5,6,8,1]\right)\times \left({\mbox{PT}}(\pmb{{\beta}}) f[5,1,2,3]\right)\\ &\left({\mbox{PT}}(\pmb{{\alpha}})f[6,7,8,1]f[8,1,2,3]\right)\times \left({\mbox{PT}}(\pmb{{\beta}})f[4,8,6,5]\right)& &\left({\mbox{PT}}(\pmb{{\alpha}})f[6,7,8,1]\right)\times \left({\mbox{PT}}(\pmb{{\beta}})f[4,8,6,5]f[5,1,2,3]\right)\\ &\left({\mbox{PT}}(\pmb{{\alpha}})f[5,6,8,1]f[8,1,2,3]\right)\times \left({\mbox{PT}}(\pmb{{\beta}})f[4,8,7,6]\right)& &\left({\mbox{PT}}(\pmb{{\alpha}})f[5,6,8,1]\right)\times \left({\mbox{PT}}(\pmb{{\beta}})f[4,8,7,6]f[5,1,2,3]\right)\\ &\left( {\mbox{PT}}(\pmb{{\alpha}})f[8,1,2,3]\right)\times \left({\mbox{PT}}(\pmb{{\beta}})f[4,8,7,6]f[4,8,6,5]\right)& &\left( {\mbox{PT}}(\pmb{{\alpha}})\right)\times \left({\mbox{PT}}(\pmb{{\beta}})f[4,8,7,6]f[4,8,6,5]f[5,1,2,3]\right) \end{align*}\normalsize indeed evaluate to $\frac{1}{s_{1234}}\left( \frac{1}{s_{12} s_{34}} +\frac{1}{s_{12} s_{123}}\right)\times \frac{1}{s_{78} s_{678}}$. So we have extracted all terms with poles $\frac{1}{s_{12}}\frac{1}{s_{78}}\frac{1}{s_{678}}$ from the result~\eqref{selectExam1}. As another illustration, let us apply the above discussion to six point CHY-integrands with ${\mbox{PT}}(\pmb {\alpha})=\vev{123456}$ fixed, such that we can examine relations between the independent ${\mbox{PT}}(\pmb \beta)$'s. Starting from the identity PT-factor ${\mbox{PT}}(\pmb\beta)=\vev{123456}$, we can choose the selecting factors to correspond to the independent Mandelstam variables $\{s_{12}, s_{23},s_{34},s_{45},s_{56},s_{61}\}$ and $\{s_{123},s_{234},s_{345}\}$.
Explicitly, we have \begin{eqnarray} \vev{123456}\times \left\{ \begin{array}{c} f^{\text{select}}[6,1,2,3]_{s_{12}}\to \vev{126543}\\ f^{\text{select}}[1,2,3,4]_{s_{23}}\to \vev{132456}\\ f^{\text{select}}[2,3,4,5]_{s_{34}}\to \vev{124356}\\ f^{\text{select}}[3,4,5,6]_{s_{45}}\to \vev{123546}\\ f^{\text{select}}[4,5,6,1]_{s_{56}}\to \vev{123465}\\ f^{\text{select}}[5,6,1,2]_{s_{61}}\to \vev{154326}\\ \end{array}\right.~~~,~~~\vev{123456}\times \left\{ \begin{array}{c} f^{\text{select}}[6,1,3,4]_{s_{123}}\to \vev{123654}\\ f^{\text{select}}[1,2,4,5]_{s_{234}}\to \vev{143256}\\ f^{\text{select}}[2,3,5,6]_{s_{345}}\to \vev{125436}\\ \end{array}\right.~,~~~\label{cross6pt1}\end{eqnarray} where the subscripts indicate which pole is being picked up. The left column shows the results of picking up the terms with a common two-particle pole $\frac{1}{s_{i,i+1}}$, while the right column shows the results of picking up terms with a common three-particle pole $\frac{1}{s_{i,i+1,i+2}}$. It can be readily checked that the six resulting PT-factors in the left column are those evaluating to five diagrams, while the three in the right column are those evaluating to four diagrams, as presented in \S\ref{secFeynmanSub3}. Based on the above results, we can further consider multiplying one more selecting factor. For example, let us take the resulting PT-factor $\vev{123465}$ from the left column and $\vev{123654}$ from the right column. For $\vev{123465}$, the evaluated result has the overall pole $\frac{1}{s_{56}}$. From the compatibility condition, the selecting factors we can take are those corresponding to poles $\frac{1}{s_{12}}$, $\frac{1}{s_{23}}$, $\frac{1}{s_{34}}$, $\frac{1}{s_{123}}$ and $\frac{1}{s_{234}}$. For $\vev{123654}$, the evaluated result has the overall pole $\frac{1}{s_{123}}$, and from the compatibility condition the selecting factors we can take are those corresponding to poles $\frac{1}{s_{12}}$, $\frac{1}{s_{23}}$, $\frac{1}{s_{45}}$ and $\frac{1}{s_{56}}$.
Explicitly, we have \begin{eqnarray} \vev{123465}\times \left\{ \begin{array}{c} f^{\text{select}}[5,1,2,3]_{s_{12}}\to \vev{125643}\\ f^{\text{select}}[1,2,3,4]_{s_{23}}\to \vev{132465}\\ f^{\text{select}}[2,3,4,6]_{s_{34}}\to \vev{124365}\\ f^{\text{select}}[5,1,3,4]_{s_{123}}\to \vev{123564}\\ f^{\text{select}}[1,2,4,6]_{s_{234}}\to \vev{143265}\\ \end{array}\right.~~~,~~~\vev{123654}\times \left\{\begin{array}{c} f^{\text{select}}[4,1,2,3]_{s_{12}}\to \vev{124563}\\ f^{\text{select}}[1,2,3,6]_{s_{23}}\to \vev{132654}\\ f^{\text{select}}[6,5,4,1]_{s_{45}}\to \vev{123645}\\ f^{\text{select}}[3,6,5,4]_{s_{56}}\to \vev{123564}\\ \end{array}\right.~.~~~\label{cross6pt2}\end{eqnarray} It can be checked that all the resulting PT-factors evaluate to two Feynman diagrams. In fact, all the six resulting PT-factors in the left column of~\eqref{cross6pt1} can be treated in the same manner as $\vev{123465}$, which leads to $6\times 5=30$ PT-factors. All the three resulting PT-factors in the right column of~\eqref{cross6pt1} can be treated in the same manner as $\vev{123654}$, which in total produces $3\times 4=12$ PT-factors. However, we have counted each PT-factor twice, since the result is independent of the order in which the poles are picked up. Therefore, the number of independent PT-factors that can be produced from all resulting PT-factors in~\eqref{cross6pt1} by multiplying another selecting factor should be $\frac{6\times 5+3\times 4}{2!}=21$, which is exactly the number of independent PT-factors that evaluate to two Feynman diagrams.\footnote{Alternatively, we can pick up the two poles from $\vev{123456}$ at the same time, similar to our example~\eqref{selecttwopoles}.
The resultant integrands will be different at first glance, but still evaluate to the same amplitudes.} Indeed, it can be checked that the $21$ resulting PT-factors, as partly shown in~\eqref{cross6pt2}, together with the others not written down, are just those evaluating to two diagrams as presented in~\S\ref{secFeynmanSub3}. Again, based on the above results, we can multiply one more selecting factor that is compatible with the previous two. The resulting PT-factors from~\eqref{cross6pt2} can be shown as \small \begin{align} \begin{array}{l} \vev{125643}\times \left\{ \begin{array}{c} f^{\text{select}}[6,4,3,1]_{s_{34}}\to \vev{125634}\\ f^{\text{select}}[4,3,2,5]_{s_{123}}\to \vev{124653} \end{array}\right.\\ \vev{132465}\times \left\{\begin{array}{c} f^{\text{select}}[5,1,2,4]_{s_{123}}\to \vev{132564}\\ f^{\text{select}}[1,3,4,6]_{s_{234}}\to \vev{142365} \end{array}\right.\\ \vev{124365}\times \left\{\begin{array}{c} f^{\text{select}}[5,1,2,4]_{s_{12}}\to \vev{125634}\\ f^{\text{select}}[1,2,3,6]_{s_{234}}\to \vev{134265} \end{array}\right.\\ \vev{123564}\times \left\{\begin{array}{c} f^{\text{select}}[4,1,2,3]_{s_{12}}\to \vev{124653}\\ f^{\text{select}}[1,2,3,5]_{s_{23}}\to \vev{132564} \end{array}\right.\\ \vev{143265}\times \left\{\begin{array}{c} f^{\text{select}}[4,3,2,6]_{s_{23}}\to \vev{142365}\\ f^{\text{select}}[1,4,3,2]_{s_{34}}\to \vev{134265} \end{array}\right.\\ \end{array}~~~,~~~\begin{array}{l} \vev{124563}\times \left\{\begin{array}{c} f^{\text{select}}[2,4,5,6]_{s_{45}}\to \vev{125463}\\ f^{\text{select}}[4,5,6,3]_{s_{56}}\to \vev{124653} \end{array}\right.\\ \vev{132654}\times \left\{\begin{array}{c} f^{\text{select}}[6,5,4,1]_{s_{45}}\to \vev{132645}\\ f^{\text{select}}[2,6,5,4]_{s_{56}}\to \vev{132564} \end{array}\right.\\ \vev{123645}\times \left\{\begin{array}{c} f^{\text{select}}[5,1,2,3]_{s_{12}}\to \vev{125463}\\ f^{\text{select}}[1,2,3,6]_{s_{23}}\to \vev{132645} \end{array}\right.\\ \vev{123564}\times \left\{\begin{array}{c}
f^{\text{select}}[4,1,2,3]_{s_{12}}\to \vev{124653}\\ f^{\text{select}}[1,2,3,5]_{s_{23}}\to \vev{132564} \end{array}\right.\\ \end{array}~.~~~\label{cross6pt3} \end{align}\normalsize We notice that each PT-factor in~\eqref{cross6pt2} has two compatible selecting factors, leading to two new PT-factors, all of which evaluate to one Feynman diagram. Since from~\eqref{cross6pt1} to~\eqref{cross6pt2}, we get $42$ PT-factors (of which $21$ are independent), in this step we will get $42\times 2=84$ PT-factors. After excluding the double counting, we get $\frac{42\times 2}{3!}=14$ independent PT-factors, which are exactly the PT-factors that evaluated to one Feynman diagram. Our results~\eqref{cross6pt3}, together with those not written down, agree with the $14$ independent PT-factors that evaluated to one Feynman diagram as presented in \S\ref{secFeynmanSub3}, after deleting the double counting. Since now every pole is overall, if we multiply another selecting factor, we will either get an irrelevant result or return to the mother amplitude. Before closing, we give a criterion on whether a pole $s_A$ is overall with fixed ${\mbox{PT}}(\pmb{\alpha})$. First, $s_A$ is a physical pole {\sl iff} $A$ is consecutive with both ${\mbox{PT}}(\pmb{\alpha})$ and ${\mbox{PT}}(\pmb{\beta})$. Next, we define $\{\overline{A}_{-1},A_1,A_{-1},\overline{A}_{1}\}$ according to ${\mbox{PT}}(\pmb{\beta})$, and put ${\mbox{PT}}(\pmb{\alpha})$ into the unique form ${\mbox{PT}}(\pmb{\alpha})=\vev{A_1\ldots A_{-1}\ldots}$. Then $s_A$ is an overall pole {\sl iff} in ${\mbox{PT}}(\pmb{\alpha})$ we have $A_{-1}\prec \overline{A}_{-1}\prec \overline{A}_1$, namely, $\overline{A}_{-1}$ precedes $\overline{A}_1$. Otherwise, $s_A$ is not an overall pole. This can be easily understood by the zig-zag path in $n$-gon. Since an overall pole corresponds to a partial triangulation line in the $n$-gon, a reverse of ordering must happen when the zig-zag path cross this triangulation line. 
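As a side check, the double-counting bookkeeping used above is simple enough to verify mechanically. The following is a minimal sketch (not part of the original derivation); the counts $6$, $5$, $3$, $4$ and the symmetry factors $2!$, $3!$ are read off from~\eqref{cross6pt1}--\eqref{cross6pt3}:

```python
# Independent PT-factor counts for the 6-point example, as a sanity check.
from math import factorial

# One more selecting factor applied to the 9 PT-factors of (cross6pt1):
# 6 of them behave like <123465> (5 compatible choices each),
# 3 of them behave like <123654> (4 compatible choices each).
raw_two = 6 * 5 + 3 * 4                  # 42 raw PT-factors
indep_two = raw_two // factorial(2)      # order of picking the 2 poles is immaterial
assert indep_two == 21                   # PT-factors evaluating to two diagrams

# Each of the 42 raw PT-factors admits 2 compatible selecting factors,
# and the order of picking the 3 poles is immaterial.
raw_three = raw_two * 2                  # 84 raw PT-factors
indep_three = raw_three // factorial(3)
assert indep_three == 14                 # PT-factors evaluating to one diagram

print(indep_two, indep_three)            # 21 14
```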
\section{Conclusion} \label{secConclusion} The CHY-integrand of the bi-adjoint cubic scalar theory consists of two PT-factors as ${\mbox{PT}}(\pmb{{\alpha}})\times {\mbox{PT}}(\pmb{{\beta}})$. Once we fix the color ordering of the first PT-factor ${\mbox{PT}}(\pmb{{\alpha}})$ as the natural ordering $\pmb{e}=\vev{12\cdots n}$, the second PT-factor ${\mbox{PT}}(\pmb{{\beta}})$ can then be interpreted as a permutation acting on the identity element. It is shown in this paper that the pole structure and vertex information of the Feynman diagrams evaluated by a CHY-integrand are completely encoded in the permutations of the corresponding PT-factors. The cycle representation of a permutation, which neatly organizes the external legs into disjoint cycles, manifests the pole and vertex information. More concretely, since a PT-factor is invariant under cyclic rotations and gains at most a sign $(-)^{n}$ under reversal of the color ordering, we are actually considering $2n$ equivalent permutations of a PT-factor. We then write all the equivalent permutations of ${\mbox{PT}}(\pmb{\beta})$ in the cycle representation, and pick out the good ones. Those that can be separated into at least three consecutive parts with respect to ${\mbox{PT}}(\pmb{\alpha})$ are called V-type cycle representations. Those that can only be separated into two parts, with each part containing more than two elements, are called P-type cycle representations. We show that the CHY-integrand ${\mbox{PT}}(\pmb{e})\times {\mbox{PT}}(\pmb{{\beta}})$ gives nonzero contributions if and only if the ways of planar separations allowed by all the V-type representations satisfy the constraint~\eqref{V-cond}. The Feynman diagram of a CHY-integrand can be completely determined by one V-type cycle representation by going into its substructures, or collectively determined by all P-type cycle representations of a PT-factor. The vertex structure can be read off directly from the planar separation of V-type or P-type cycle representations.
We have presented the algorithm to read off the physical poles and vertices from them. On the other hand, given an effective Feynman diagram, with possible effective higher-point vertices, we have proposed a recursive algorithm to obtain directly the correct cycle representation of the corresponding PT-factor ${\mbox{PT}}(\pmb{{\beta}})$. We show that the cycle representations of any Feynman diagram admit a factorization as in Eq.~\eqref{cyclefactor} with respect to an arbitrary $m$-point vertex, called a planar splitting. We have found that the cycle representations of subdiagrams used in the factorization~\eqref{cyclefactor} are the non-planar splitting ones in the equivalence class of PT-factors of the subdiagrams. The same algorithm applies to the subdiagrams as well, so we can reconstruct the cycle representation of any $n$-point PT-factor essentially from the three-point PT-factor. We show that the whole discussion runs in parallel for the Feynman diagram and the $n$-gon diagram, while the latter also plays its role in the associahedron discussion. It is shown in~\cite{Arkani-Hamed:2017mur} that different PT-factors are neatly connected in the associahedron picture. In this paper, we also investigate the relations among different PT-factors via the reversing permutation on cycles, which corresponds to adding or removing a triangulation line in the $n$-gon diagrams. The merging and splitting of cycles in a cycle representation essentially select terms with the same poles in a result. In the same spirit, we further study the relations among PT-factors via the multiplication of a certain cross-ratio factor, which we call the selecting factor. They all give a similar topology of how the PT-factors are connected. Finally, since planar diagrams possess a natural interpretation as the vertices and boundaries of an associahedron, the structure of good cycle representations introduced in this paper can be used to characterize certain boundaries.
We have shown how this can be achieved by merging cycles: in the equivalence class of a ${\mbox{PT}}(\pmb\beta)$, the number of different factorizations into disjoint permutations describes the shape of the boundaries of the associahedron. \acknowledgments We thank Song He for helpful discussions. We also thank the Institute of Theoretical Physics at the Chinese Academy of Sciences, where this work was initiated, for its hospitality. B.F. is supported by Qiu-Shi Funding and the National Natural Science Foundation of China (NSFC) with Grant No.11575156 and No.11135006. R.H. is supported by the NSFC with Grant No.11575156 and the starting grant from Nanjing Normal University. F.T. is supported in part by the Knut and Alice Wallenberg Foundation under grant KAW 2013.0235 and the Ragnar S\"{o}derberg Foundation under grant S1/16.
\section{Introduction} Throughout the paper $R$ will denote a commutative ring. In \cite{Nee} Neeman gives a new description of the homotopy category $\mathbf{K}(\mathrm{Proj}(R))$ as a quotient of $\mathbf{K}(\mathrm{Flat}(R))$. The main advantage of the new description is that it does not involve projective objects, so it can be generalized to non-affine schemes (see \cite[Remark 3.4]{Nee2}). So, in his thesis \cite{Mur}, Murfet \emph{mocks} the homotopy category of projectives on a non-affine scheme by considering the category ${\bf D}(\mathrm{Flat}(X))$\footnote{\,The original terminology in \cite{Mur} for ${\bf D}(\mathrm{Flat}(X))$ was $\mathbf{K}_m(\mathrm{Proj} (X))$. This is referred to in \cite{MS} as the \emph{pure derived category of flat sheaves} on $X$ and denoted by ${\bf D}(\mathrm{Flat}(X))$.} defined as the Verdier quotient $${\bf D}(\mathrm{Flat}(X)):=\frac{{\bf K}(\mathrm{Flat}(X))}{\widetilde{\mathrm{Flat}\mkern 0mu}^{\scalebox{0.6}{\bf K}}\!\!(X)},$$ where $ \widetilde{\mathrm{Flat}\mkern 0mu}^{\scalebox{0.6}{\bf K}}(X)$ denotes the class of acyclic complexes in ${\bf K}(\mathrm{Flat}(X))$ with flat cycles. In the language of model categories, Gillespie showed in \cite{G} that ${\bf D}(\mathrm{Flat}(X))$ can be realized as the homotopy category of a Quillen model structure on the category $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ of unbounded chain complexes of quasi-coherent sheaves on a quasi-com\-pact and semi-separated scheme, and that, in fact, in case $X=\Spec(R)$ is affine, both homotopy categories ${\bf D}(\mathrm{Flat}(X))$ and $\mathbf{K}(\mathrm{Proj}(R))$ are triangle equivalent, the equivalence coming from a Quillen equivalence between the corresponding models. However, from a homological point of view, flat modules are much more complicated than projective modules. For instance, for a general commutative ring, the exact category of flat modules has infinite homological dimension.
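Spelled out in the affine case, Neeman's description amounts to the assertion that the composite of the canonical functors

```latex
\[
\mathbf{K}(\mathrm{Proj}(R))\hookrightarrow \mathbf{K}(\mathrm{Flat}(R))
\longrightarrow
\frac{{\bf K}(\mathrm{Flat}(R))}{\widetilde{\mathrm{Flat}\mkern 0mu}^{\scalebox{0.6}{\bf K}}\!\!(R)}
={\bf D}(\mathrm{Flat}(R)),
\]
```

the inclusion followed by the Verdier quotient, is a triangle equivalence (\cite[Theorem 1.2]{Nee}).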
In order to partially remedy these complications, Positselski has recently introduced in \cite{P} a refinement of the class of flat quasi-coherent sheaves, the so-called \emph{very flat} quasi-coherent sheaves (see Section \ref{section.veryflat} for the definition and main properties), and showed that this class shares many nice properties with the class of flat sheaves, while having potentially several advantages over it; for instance, it can be applied to matrix factorizations (see the introduction of the recent preprint \cite{PS} for a nice and detailed treatment of the advantages of very flat sheaves). Moreover, in the affine case $X=\Spec(R)$, the exact category of very flat modules has finite homological dimension (every very flat module has projective dimension $\leq 1$). Therefore one easily obtains in this case a triangulated equivalence between ${\bf D}(\mathcal{V}\!\mathcal{F}(R))$ and $\mathbf{K}(\mathrm{Proj}(R))$ (here $\mathcal{V}\!\mathcal{F}(R)$ denotes the class of very flat $R$-modules). In particular, this equivalence is much less involved than the aforementioned triangulated equivalence between ${\bf D}(\mathrm{Flat}(R))$ and $\mathbf{K}(\mathrm{Proj}(R))$ (\cite[Theorem 1.2]{Nee}). So, if we denote by $\mathcal{V}\!\mathcal{F}(X)$ the class of very flat quasi-coherent sheaves, one can also think of ``mocking'' the homotopy category of projectives over a non-affine scheme by defining the Verdier quotient $${\bf D}(\mathcal{V}\!\mathcal{F}(X)):=\frac{{\bf K}(\mathcal{V}\!\mathcal{F}(X))}{\widetilde{\mathcal{V}\!\mathcal{F}\mkern 0mu}^{\scalebox{0.6}{\bf K}}\!\!(X)}.$$ It is then natural to wonder whether or not the (indirect) triangulated equivalence between ${\bf D}(\mathrm{Flat}(R))$ and ${\bf D}(\mathcal{V}\!\mathcal{F}(R))$ still holds over a non-affine scheme. This was already proved to be the case for a semi-separated Noetherian scheme of finite Krull dimension in \cite[Corollary 5.4.3]{P}.
As a first consequence of the results in this paper, we extend this result in Corollary \ref{cor.triang.equiv.flatveryflat} to arbitrary (quasi-compact and semi-separated) schemes. \medskip\par\noindent {\bf Corollary 1.} For any scheme $X$, the categories ${\bf D}(\mathrm{Flat}(X))$ and ${\bf D}(\mathcal{V}\!\mathcal{F}(X))$ are triangle equivalent. \medskip\par Recall from Totaro \cite{Totaro} (see Gross \cite{Gross} for the general notion) that a scheme $X$ satisfies the \emph{resolution property} provided that $X$ has enough locally free sheaves, that is, for every quasi-coherent sheaf $\mathscr{M}$ there exists an exact sequence $\oplus_i \mathscr{V}_i\to \mathscr{M}\to 0$, for some family $\{\mathscr{V}_i: i\in I\}$ of vector bundles. In this case the class of infinite-dimensional vector bundles (in the sense of Drinfeld \cite{D}) constitutes the natural extension of the class of projective modules for non-affine schemes. And one can define the derived category of infinite-dimensional vector bundles again as the Verdier quotient $${\bf D}(\mathrm{Vect}(X)):=\frac{{\bf K}(\mathrm{Vect}(X))}{\widetilde{\mathrm{Vect}\mkern 0mu}^{\scalebox{0.6}{\bf K}}\!\!(X)}.$$ This definition trivially agrees with $\mathbf{K}(\mathrm{Proj}(R))$ in case $X=\Spec(R)$ is affine. By using the class of very flat sheaves we obtain in Corollary \ref{eq.vbundles} the following meaningful consequence, which does not seem to admit a direct proof (i.e. a proof without using very flat sheaves). \medskip\par\noindent {\bf Corollary 2.} Let $X$ be a quasi-compact and semi-separated scheme satisfying the resolution property (for instance if $X$ is divisorial \cite[Proposition 6(a)]{Mur2}). Murfet's and Neeman's derived category of flats, ${\bf D}(\mathrm{Flat}(X))$, is triangle equivalent to ${\bf D}(\mathrm{Vect}(X))$, the derived category of infinite-dimensional vector bundles. \medskip\par Indeed the methods developed in this paper go beyond the class of very flat quasi-coherent sheaves.
More precisely, we investigate which conditions a subclass $\mathcal{A}_{\rm qc}$ of flat quasi-coherent sheaves has to fulfil in order to yield a category triangle equivalent to ${\bf D}(\mathrm{Flat}(X))$. In fact, we show that the triangulated equivalence comes from a Quillen equivalence between the corresponding models. We point out that there are well-known examples of non-Quillen-equivalent models with equivalent homotopy categories. The precise statement of our main result is in Theorem \ref{t.mc.general case} (see the setup in Section \ref{section.q_equivalent} for unexplained terminology). \medskip\par\noindent {\bf Theorem.} Let $X$ be a quasi-compact and semi-separated scheme and let $\mathcal P$ be a property of modules and $\mathcal{A}$ its associated class of modules. Assume that $\mathcal{A}\subseteq \mathrm{Flat}$, and that the following conditions hold: \begin{enumerate} \item The class $\mathcal{A}$ is Zariski-local. \item For each $R=\OO_X(U)$, $U\in \U$, the pair $(\mathcal{A}_R,\mathcal{B}_R)$ is a hereditary cotorsion pair generated by a set. \item For each $R=\OO_X(U)$, $U\in \U$, every flat $\mathcal{A}_R$-periodic module is trivial. \item $j_*(\mathcal{A}_{{\rm qc}(U_{\alpha})})\subseteq \mathcal{A}_{{\rm qc}(X)}$, for each $\alpha\subseteq \{0,\ldots,m\}$. \end{enumerate} Then the class $\mathcal{A}_{{\rm qc}}$ defines an abelian model category structure in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ whose homotopy category ${\bf D}(\mathcal{A}_{{\rm qc}})$ is triangle equivalent to ${\bf D}(\mathrm{Flat}(X))$, the equivalence being induced by a Quillen equivalence between the corresponding model categories. \medskip\par It is interesting to observe that conditions (1), (2) and (3) in the previous theorem only involve properties of modules. Thus we find it useful and of independent interest to explicitly state in Theorem \ref{t.mc.affine case} the affine version of the previous theorem (and give an easy proof).
Section \ref{section.examples} is meant to make abundantly clear the variety of examples of classes of modules that fit into those conditions. Of particular interest is the class $\mathcal A(\kappa)$ of \emph{restricted flat Mittag-Leffler modules} considered in Theorem \ref{rest.fml}, which has been widely studied in the literature in recent years (see, for instance, \cite{EGPT,EGT,GT,HT,Sar}). Regarding this class, we obtain the following meaningful consequences: \medskip\par\noindent {\bf Corollary.} Let $\kappa$ be an infinite cardinal and $\mathcal{A}(\kappa)$ be the class of $\kappa$-restricted flat Mittag-Leffler modules (notice that $\mathcal{A}(\kappa)=\mathrm{Proj}(R)$ in case $\kappa=\aleph_0$). \begin{enumerate} \item Every pure acyclic complex with components in $\mathcal{A}(\kappa)$ has cycles in $\mathcal{A}(\kappa)$. \item The categories ${\bf D}(\mathcal{A}(\kappa))$ and ${\bf K}(\mathrm{Proj}(R))$ are triangle equivalent. \end{enumerate} \medskip\par The proof of (1) can be found in Theorem \ref{rest.fml}, whereas the proof of (2) is a particular instance of Theorem \ref{t.mc.affine case} with $\mathcal{A}=\mathcal{A}(\kappa)$. In the special case $\kappa=\aleph_0$, statement (1) recovers a well-known result due to Benson and Goodearl (\cite[Theorem 1.1]{BG}). \section{Preliminaries} \begin{ipg}\underline{Zariski-local classes of modules}. Let $\mathcal{P}$ be a property of modules and let $\mathcal{A}$ be the corresponding class of modules satisfying $\mathcal P$, i.e. for any ring $R$, the class $\mathcal{A}_R$ consists of those $M\in \Modl R$ such that $M$ satisfies $\mathcal P_R$. We define the class $\mathcal{A}_{{\rm qc}(X)}$ in $\mathfrak{Qcoh}(X)$ (or just $\mathcal{A}_{\rm qc}$ if the scheme is understood) as the class of all quasi-coherent sheaves $\mathscr M$ such that, for each open affine $U\subseteq X$, the module of sections $\mathscr M(U)\in \mathcal{A}_{\OO_X(U)}$.
We will only be interested in those properties of modules $\mathcal P$ such that the property of being in $\mathcal{A}_{{\rm qc}(X)}$ can be tested on an open affine covering of $X$. In this case we will say that the class $\mathcal{A}$ of modules (associated to $\mathcal P$) is \emph{Zariski-local}.\\ The following is a specialization of the \emph{ascent-descent} conditions (\cite[Definition 3.4]{EGT}) that suffices to prove Zariski locality (see Vakil \cite[Lemma 5.3.2]{Vakil} and also \cite[\S 27.4]{SP}): \begin{lem}\label{ZL} The class of modules $\mathcal{A}$ associated to the property of modules $\mathcal P$ is Zariski-local if and only if it satisfies the following: \begin{enumerate} \item If an $R$-module $M\in \mathcal{A}_R$, then $M_f\in \mathcal{A}_{R_f}$ for all $f\in R$. \item If $\left (f_1,\ldots,f_n\right )=R$, and $M_{f_i}=R_{f_i}\otimes_R M\in \mathcal{A}_{R_{f_i}}$, for all $i\in \{1,\ldots,n\}$, then $M\in \mathcal{A}_R$. \end{enumerate} \end{lem} It is easy to see that the class $\mathrm{Flat}$ of flat modules is Zariski-local. A module $M$ is \emph{Mittag-Leffler} provided that the canonical map $M\otimes_R\prod_{i\in I}M_i\to \prod_{i\in I}M\otimes_R M_i$ is monic for each family of left $R$-modules $(M_i|\, i\in I)$. The classes $\textrm{FlatML}$ (of flat Mittag-Leffler modules) and $\mathrm{Proj}$ (of projective modules) are also Zariski-local by \cite[Seconde partie, 3.1.4(3) and 2.5.2]{RG}. The class $\textrm{rFlatML}$ of \emph{restricted} flat Mittag-Leffler modules (in the sense of \cite[Example 2.1(3)]{EGT}) is also Zariski-local by \cite[Theorem 4.2]{EGT}. \end{ipg} \begin{ipg}\underline{Precovers, envelopes and complete cotorsion pairs}.\label{cotorsion-pairs} Throughout this section the symbol $\mathcal{G}$ will denote an abelian category. Let $\mathcal{C}$ be a class of objects in $\mathcal{G}$.
A morphism $C\stackrel{\phi}{\rightarrow}{M}$ in $\mathcal{G}$ is called a \emph{$\mathcal{C}$-precover} if $C$ is in $\mathcal{C}$ and $\operatorname{Hom}_{\mathcal{G}}(C',C) \to \operatorname{Hom}_{\mathcal{G}} (C',M) \to 0$ is exact for every $C' \in \mathcal{C}$. If every object in $\mathcal{G}$ has a $\mathcal{C}$-precover, then the class $\mathcal{C}$ is called \emph{precovering}. The dual notions are \emph{preenvelope} and \emph{preenveloping} class. A pair $(\mathcal{A},\mathcal{B})$ of classes of objects in $\mathcal{G}$ is a \emph{cotorsion pair} if $\rightperp{\mathcal{A}}=\mathcal{B}$ and $\mathcal{A} = \leftperp{\mathcal{B}}$, where, given a class $\mathcal{C}$ of objects in $\mathcal{G}$, the right orthogonal $\rightperp{\mathcal{C}}$ is defined to be the class of all $Y \in \mathcal{G}$ such that $\operatorname{Ext}^1_{\mathcal{G}}(C,Y) = 0$ for all $C \in \mathcal{C}$. The left orthogonal $\leftperp{\mathcal{C}}$ is defined similarly. A cotorsion pair $(\mathcal{A},\mathcal{B})$ is called \emph{hereditary} if $\operatorname{Ext}^i_{\mathcal{G}}(A,B) = 0$ for all $A \in \mathcal{A}$, $B \in \mathcal{B}$, and $i \geqslant 1$. A cotorsion pair $(\mathcal{A},\mathcal{B})$ is \emph{complete} if it has \emph{enough projectives} and \emph{enough injectives}, i.e.~for each $D \in \mathcal{G}$ there exist short exact sequences $0 \xrightarrow{} B \xrightarrow{} A \xrightarrow{} D \xrightarrow{} 0$ (enough projectives) and $0 \xrightarrow{} D \xrightarrow{} B' \xrightarrow{} A' \xrightarrow{} 0$ (enough injectives) with $A,A' \in \mathcal{A}$ and $B,B' \in \mathcal{B}$. It is then easy to observe that $A\xrightarrow{} D$ is an $\mathcal{A}$-precover of $D$ (such precovers are called \emph{special}). Analogously, $D\xrightarrow{} B'$ is a \emph{special} $\mathcal{B}$-preenvelope of $D$.
A cotorsion pair $(\mathcal{A},\mathcal{B})$ is \emph{generated by a set} provided that there exists a \emph{set} $\mathcal{S}\subseteq \mathcal{A}$ such that $\rightperp{\mathcal{S}}=\mathcal{B}$. In case $\mathcal{G}$ is, in addition, Grothendieck, it is known that a cotorsion pair generated by a set $\mathcal{S}$ which contains a generating set of $\mathcal{G}$ is automatically complete. \end{ipg} \begin{ipg}\underline{Exact model categories and Hovey triples}. In \cite{hovey} Hovey relates complete cotorsion pairs with abelian (or exact) model category structures. An \emph{abelian model structure} on $\mathcal{G}$, that is, a model structure on $\mathcal{G}$ which is compatible with the abelian structure in the sense of \cite[Definition~2.1]{hovey}, corresponds by \cite[Theorem 2.2]{hovey} to a triple $(\mathcal{C},\mathcal{W},\mathcal{F})$ of classes of objects in $\mathcal{G}$ for which $\mathcal{W}$ is thick\footnote{\,Recall that a class $\mathcal{W}$ in an abelian (or, more generally, in an exact) category $\mathcal{G}$ is \emph{thick} if it is closed under direct summands and satisfies that whenever two out of three of the terms in a short exact sequence are in $\mathcal{W}$, then so is the third.} and $(\mathcal{C} \cap \mathcal{W},\mathcal{F})$ and $(\mathcal{C},\mathcal{W} \cap \mathcal{F})$ are complete cotorsion pairs in $\mathcal{G}$. In the model structure on $\mathcal{G}$ determined by such a triple, $\mathcal{C}$ is precisely the class of cofibrant objects, $\mathcal{F}$ is precisely the class of fibrant objects, and $\mathcal{W}$ is precisely the class of trivial objects (that is, objects weakly equivalent to zero). Such a triple is often referred to as a \emph{Hovey triple}. Gillespie extends in \cite[Theorem~3.3]{G4} Hovey's correspondence, mentioned above, from the realm of abelian categories to the realm of weakly idempotent complete exact categories (\cite[Definition~2.2]{G4}).
More precisely, if $\mathcal{G}$ is a weakly idempotent complete exact category (not necessarily abelian), then an \emph{exact model structure} on $\mathcal{G}$ (i.e. a model structure on $\mathcal{G}$ which is compatible with the exact structure in the sense of \cite[Definition~3.1]{G4}) corresponds to a Hovey triple $(\mathcal{C},\mathcal{W},\mathcal{F})$ in $\mathcal{G}$. \end{ipg} \begin{ipg}\underline{Deconstructible classes}.\label{deconst} A well-ordered direct system, $(M_{\alpha}:\, \alpha\le \lambda)$, of objects in $\mathcal{G}$ is called \emph{continuous} if $M_0=0$ and, for each limit ordinal $\beta\leq \lambda$, we have $M_{\beta} = \varinjlim_{\alpha<\beta } M_{\alpha}$. If all morphisms in the system are monomorphisms, then the system is called a \emph{continuous directed union}. Let $\mathcal{S}$ be a class of objects in $\mathcal{G}$. An object $M$ in $\mathcal{G}$ is called \emph{$\mathcal{S}$-filtered} if there is a continuous directed union $(M_{\alpha}:\, \alpha\le \lambda)$ of subobjects of $M$ such that $M = M_{\lambda}$ and for every $\alpha<\lambda$ the quotient $M_{\alpha+1}/M_{\alpha}$ is isomorphic to an object in $\mathcal{S}$. We denote by $\Filt(\mathcal{S})$ the class of all $\mathcal{S}$-filtered objects in $\mathcal{G}$. A class $\mathcal{C}$ is called \emph{deconstructible} provided that there exists a set $\mathcal{S}$ such that $\mathcal{C}=\Filt(\mathcal{S})$ (see \cite[Definition 1.4]{Sto}). It is then known by \cite[Theorem pg.195]{Sto} that any deconstructible class is precovering. \end{ipg} \begin{ipg}\underline{Chain complexes of modules}. We denote by $\mathrm{Ch}(\mathcal{G})$ the category of unbounded chain complexes of objects in $\mathcal{G}$, i.e. complexes $G_\bullet$ of the form $$\cdots\to G_{n+1}\xrightarrow{d^G_{n+1}}G_n\xrightarrow{d^G_{n}} G_{n-1}\to\cdots.$$ We will denote by $Z_n G_\bullet$ the \emph{$n$-cycle of} $G$, i.e. $Z_nG=\operatorname{Ker}(d^G_n)$.
Given a chain complex $G$, the \emph{$n^{th}$-suspension of $G$}, $\Sigma^n G$, is the complex defined as $(\Sigma^n G)_k=G_{k-n}$ and $d^{\Sigma^n G}_k=(-1)^n d_{k-n}$. And for a given object $A\in \mathcal{G}$, the \emph{$n$-disk} complex $D^n(A)$ is the complex with the object $A$ in the components $n$ and $n-1$, $d_n$ as the identity map, and 0 elsewhere. We denote by ${\bf K}(\mathcal{G})$ the homotopy category of $\mathcal G$, i.e. ${\bf K}(\mathcal{G})$ has the same objects as $\mathrm{Ch}(\mathcal{G})$ and the morphisms are the homotopy classes of morphisms of chain complexes. In case $\mathcal{G}=\Modl R$, we will denote $\mathrm{Ch}(\mathcal{G})$ (resp. ${\bf K}(\mathcal{G})$) simply by $\mathrm{Ch}(R)$ (resp. ${\bf K}(R)$). Given a class $\mathcal{C}$ in $\mathcal{G}$, we shall consider the following classes of chain complexes: \begin{itemize} \item $\mathrm{Ch}(\mathcal{C})$ (resp. ${\bf K}(\mathcal{C})$) is the full subcategory of $\mathrm{Ch}(\mathcal{G})$ (resp. of ${\bf K}(\mathcal{G})$) of all complexes $C_\bullet\in\mathrm{Ch}(\mathcal{G})$ such that $C_n\in\mathcal{C}$ for all $n\in \mathbb{Z}$. \item $\mathrm{Ch}_{\textrm{ac}}(\mathcal{C})$ (resp. ${\bf K}_{\textrm{ac}}(\mathcal{C})$) is the class of all acyclic complexes in $\mathrm{Ch}(\mathcal{C})$ (resp. in ${\bf K}(\mathcal{C})$). \item $\widetilde{\mathcal{C}}$ (resp. $\widetilde{\mathcal{C}\mkern 0mu}^{\scalebox{0.6}{\bf K}}$) is the class of all complexes $C_\bullet\in \mathrm{Ch}_{\textrm{ac}}(\mathcal{C})$ (resp. $C_\bullet\in {\bf K}_{\textrm{ac}}(\mathcal{C})$) with the cycles $Z_nC_\bullet$ in $\mathcal{C}$ for all $n\in \mathbb{Z}$. A complex in $\widetilde{\mathcal{C}}$ is called a \emph{$\mathcal C$ complex}. \item If $(\mathcal{A}, \mathcal{B})$ is a cotorsion pair in $\mathcal{G}$, then $dg(\mathcal{A})$ is the class of all complexes $A_\bullet\in \mathrm{Ch}(\mathcal{A})$ such that every morphism $f\colon\, A_\bullet\to B_\bullet$, with $B_\bullet$ a $\mathcal{B}$ complex, is null homotopic.
Since $\operatorname{Ext}^1_{\mathcal{G}}(A_n, B_n)=0$ for every $n\in \mathbb{Z}$, a standard formula allows one to infer that $dg(\mathcal{A})={}^{\perp}\widetilde{\mathcal{B}}$. Analogously, $dg(\mathcal{B})$ is the class of all complexes $B_\bullet \in \mathrm{Ch}(\mathcal{B})$ such that every morphism $f\colon\, A_\bullet\to B_\bullet$, with $A_\bullet$ an $\mathcal{A}$ complex, is null homotopic. Hence $dg(\mathcal{B})=\widetilde{\mathcal{A}}{}^\perp{}$. \end{itemize} \end{ipg} \section{Very flat modules and sheaves}\label{section.veryflat} One of the main applications of the results in this paper concerns the classes of very flat modules and very flat quasi-coherent sheaves, as defined by Positselski in \cite{P}. In the present section we summarize the definitions and properties regarding this class that will be relevant in the sequel. \begin{ipg} \underline{Very flat and contraadjusted modules}. Let us consider the set $\mathcal S=\{R[r^{-1}]:\ r\in R\}$ and let $(\mathcal{V}\!\mathcal{F}(R),\,\mathcal{C}\!\!\mathcal{A}(R))$ be the complete cotorsion pair generated by $\mathcal S$. The modules in the class $\mathcal{V}\!\mathcal{F}(R)$ are called \emph{very flat} and the modules in the class $\,\mathcal{C}\!\!\mathcal{A}(R)$ are called \emph{contraadjusted}. It is then clear that every projective module is very flat, and that every very flat module is, in particular, flat. In fact it is easy to observe that every very flat module has projective dimension $\leq 1$. Thus, the complete cotorsion pair $(\mathcal{V}\!\mathcal{F},\,\mathcal{C}\!\!\mathcal{A})$ is automatically hereditary and $\,\mathcal{C}\!\!\mathcal{A}$ is closed under quotients. We finally notice that $L$ is very flat in any short exact sequence $0\to L\to V\to M\to 0$ in which $V$ is very flat and $\mathrm{pd}_R(M)\leq 1$ (where $\mathrm{pd}_R(M)$ is the projective dimension of $M$). \end{ipg} \begin{prop}[Positselski] The class of very flat modules is Zariski-local.
\end{prop} \begin{proof} Condition $(1)$ of Lemma \ref{ZL} holds by \cite[Lemma 1.2.2(b)]{P}.\\ Condition $(2)$ of Lemma \ref{ZL} follows from \cite[Lemma 1.2.6(a)]{P}. \end{proof} \begin{ipg} \underline{Very flat and contraadjusted quasi-coherent sheaves}. Let $X$ be any scheme. A quasi-coherent sheaf $\mathscr{M}$ is \emph{very flat} if there exists an open affine covering $\U$ of $X$ such that $\mathscr{M}(U)$ is a very flat $\mathcal O_X(U)$-module for each $U\in \U$. By the previous proposition, the definition of very flat quasi-coherent sheaf is independent of the choice of the open affine covering. A quasi-coherent sheaf $\mathscr{N}$ is \emph{contraadjusted} if $\operatorname{Ext}^n(\mathscr{M},\mathscr{N})=0$ for each very flat quasi-coherent sheaf $\mathscr{M}$ and every integer $n\geq 1$. Since the class of very flat modules is resolving (i.e. closed under kernels of epimorphisms) we infer that the class of very flat quasi-coherent sheaves is also resolving. \end{ipg} \begin{ipg} \underline{Very flat generators in $\mathfrak{Qcoh}(X)$}. Let $X$ be a quasi-compact and semi-separated sche\-me, with $\U=\{U_0,\cdots, U_d\}$ a semi-separated finite affine covering of $X$. Let $U=U_{i_0}\cap\cdots\cap~U_{i_p}$ be any intersection of open sets in the cover $\mathfrak U$ and let $j:U\hookrightarrow X$ be the inclusion of $U$ in $X$. The inverse image functor $j^*$ is just the restriction, so it is exact and preserves quasi-coherence. The direct image functor $j_*$ is exact and preserves quasi-coherence because $j:U\hookrightarrow X$ is an affine morphism, due to the semi-separated assumption. Thus we have an adjunction $(j^*,j_{*})$ with $j_{*}:\mathfrak{Qcoh}(U)\to \mathfrak{Qcoh}(X)$ and $j^*:\mathfrak{Qcoh}(X)\to \mathfrak{Qcoh}(U)$. 
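The way this adjunction produces very flat generators can be sketched as follows (an informal outline of the construction behind the next proposition, not its verbatim proof; we write $j_U$ for the inclusion of each $U\in \U$, an indexing introduced here for illustration). Given $\mathscr M\in \mathfrak{Qcoh}(X)$, choose on each affine open $U\in \U$ an epimorphism $\mathscr V_U\to j_U^*\mathscr M$ with $\mathscr V_U$ very flat. Since each $j_{U*}$ is exact and the counits $j_{U*}j_U^*\mathscr M\to \mathscr M$ are jointly epimorphic (each restricts to an isomorphism over the corresponding $U$), one obtains an exact sequence

```latex
\[
\bigoplus_{U\in \U} j_{U*}\mathscr{V}_U
\longrightarrow
\bigoplus_{U\in \U} j_{U*}\,j_U^{*}\mathscr{M}
\longrightarrow
\mathscr{M}\longrightarrow 0,
\]
```

whose first term is very flat because, as recalled next, the direct image functors preserve very flatness.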
The proof of the next proposition is implicit in \cite[Proposition 1.1]{Leo} (see also Murfet \cite[Proposition 3.29]{Mur} for a very detailed treatment) by noticing that the direct image functor $j_{*}$ preserves not just flatness but in fact \emph{very} flatness (by \cite[Corollary 1.2.5(b)]{P}). The reader can find a short and direct proof in \cite[Lemma 4.1.1]{P}. \begin{prop}\label{prop.vf.generators} Let $X$ be a quasi-compact and semi-separated scheme. Every quasi-co\-he\-rent sheaf is a quotient of a very flat quasi-coherent sheaf. Therefore $\mathfrak{Qcoh}(X)$ possesses a family of very flat generators. \end{prop} \end{ipg} \begin{ipg}\underline{The very flat cotorsion pair in $\mathfrak{Qcoh}(X)$}. For any scheme $X$, the class $\mathcal{V}\!\mathcal{F}(X)$ of very flat quasi-coherent sheaves is deconstructible (by \cite[Corollary 3.14]{EGPT}). Therefore the class of very flat quasi-coherent sheaves is a precovering class (see \ref{deconst}). If, in addition, the scheme $X$ is quasi-compact and semi-separated we infer from \cite[Corollary 3.15]{EGPT} and \cite[Corollary 4.1.2]{P} that the pair $(\mathcal{V}\!\mathcal{F}(X),\,\mathcal{C}\!\!\mathcal{A}(X))$ is a complete hereditary cotorsion pair in $\mathfrak{Qcoh}(X)$ (where $\,\mathcal{C}\!\!\mathcal{A}(X)$ denotes the class of all contraadjusted quasi-coherent sheaves on $X$). By \cite[Lemma 1.2.2(d)]{P} the class of very flat modules (and hence the class of very flat quasi-coherent sheaves) is closed under tensor products. Thus, in case $X$ is quasi-compact and semi-separated, \cite[Theorem 4.5]{EGPT} yields a cofibrantly generated and monoidal model category structure in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ where the weak equivalences are the homology isomorphisms. The cofibrations (resp. trivial cofibrations) are monomorphisms whose cokernels are dg-very flat complexes (resp. very flat complexes). The fibrations (resp. 
trivial fibrations) are epimorphisms whose kernels are dg-contraadjusted complexes (resp. contraadjusted complexes). Therefore the corresponding triple is $$(dg(\mathcal{V}\!\mathcal{F}(X)),\mathrm{Ch}_{\rm ac}(\mathfrak{Qcoh}(X)),dg(\,\mathcal{C}\!\!\mathcal{A}(X))).$$ \end{ipg} \section{The property of modules involved. Examples}\label{section.examples} As we will see in the next sections, we are mainly concerned with deconstructible classes of modules that are closed under certain periodic modules. We start by recalling the notion of $\mathcal{C}$-periodic module with respect to a class $\mathcal{C}$ of modules. \begin{defn} Let $\mathcal C$ be a class of modules. A module $M$ is called $\mathcal C$-periodic if there exists a short exact sequence $0\to M\to C\to M\to 0$, with $C\in \mathcal C$. \end{defn} The following proposition relating flat $\mathcal A$-periodic modules and acyclic complexes with components in $\mathcal A$ is standard, but relevant for our purposes. The reader can find a proof in \cite[Proposition 1 and Proposition 2]{EFI}. \begin{prop}\label{prop.periodic} Let $\mathcal A$ be a class of modules closed under direct sums and direct summands. The following are equivalent: \begin{enumerate} \item Every cycle of an acyclic complex with flat cycles and with components in $\mathcal A$ belongs to $\mathcal A$. \item Every flat $\mathcal A$-periodic module belongs to $\mathcal A$. \end{enumerate} \end{prop} We are interested in deconstructible classes of modules $\mathcal{A}$ satisfying condition $(2)$ in the previous proposition. Of course, the first trivial example is the class $\mathrm{Flat}(R)$ of flat modules itself. Since the class of all flat Mittag-Leffler modules is closed under pure submodules, this class also trivially yields an example of a class $\mathcal{A}$ satisfying that every flat $\mathcal A$-periodic module is in $\mathcal{A}$.
However this class has an important drawback: it is only deconstructible in the trivial case of a perfect ring (see Herbera and Trlifaj \cite[Corollary 7.3]{HT}). This setback can be remedied by considering the \emph{restricted} flat Mittag-Leffler modules, in the sense of \cite[Example 2.1(3)]{EGT}, as we will show in Theorem \ref{rest.fml} below. Now we will provide other interesting non-trivial examples of such classes $\mathcal{A}$ satisfying condition (2) above, which will be relevant in the applications of our main results in the next sections. The first example is the class $\mathcal{A}=\mathrm{Proj}(R)$ of projective $R$-modules and goes back to Benson and Goodearl \cite[Theorem 1.1]{BG}. \begin{prop}\label{bg} Let $\mathrm{Proj}(R)$ be the class of all projective $R$-modules. Every flat $\mathrm{Proj}(R)$-periodic module is projective. As a consequence every pure acyclic complex of projectives is contractible (i.e. has projective cycles). \end{prop} The second example is the class $\mathcal{A}=\mathcal{V}\!\mathcal{F}(R)$ of very flat modules (this is due to \v S\v t'ov\' \i\v cek, personal communication). \begin{prop} Every flat $\mathcal{V}\!\mathcal{F}(R)$-periodic module is very flat. As a consequence every pure acyclic complex of very flat modules has very flat cycles. \end{prop} \begin{proof} Let $0\to F\to G\to F\to 0$ be an exact sequence with $F$ flat and $G$ very flat. Let $0\to F_1\to P\to F\to 0$ be an exact sequence with $P$ projective; then $F_1$ is flat. An application of the horseshoe lemma gives the following commutative diagram \[\xymatrix{ &0 \ar[d] & 0 \ar[d] & 0 \ar[d] \\ 0 \ar[r] & F_1 \ar[d] \ar[r] &Q\ar[r] \ar[d] & F_1\ar[d] \ar[r] & 0 \\ 0 \ar[r] & P \ar[d] \ar[r] &P\oplus P\ar[r] \ar[d] & P\ar[d] \ar[r] & 0 \\ 0 \ar[r] & F \ar[d]\ar[r] &G\ar[r] \ar[d] & F\ar[d] \ar[r] & 0 \\ &0 & 0 & 0. } \] where $Q$ is projective, since $\mathrm{pd}_R(G)\leq 1$.
Thus, by Proposition \ref{bg}, $F_1$ is projective and therefore $\mathrm{pd}_R(F)\leq 1$. Let $C\in \,\mathcal{C}\!\!\mathcal{A}(R)$. Then applying $\operatorname{Hom}_R(-,C)$ to the short exact sequence yields $0=\operatorname{Ext}^1_R(G, C)\to \operatorname{Ext}^1_R(F, C)\to \operatorname{Ext}^2_R(F, C)=0$, hence $F\in \mathcal{V}\!\mathcal{F}(R)$. Finally, the consequence follows from Proposition \ref{prop.periodic} (with $\mathcal{A}=\mathcal{V}\!\mathcal{F}(R)$). \end{proof} The last example is the announced deconstructible class of \emph{restricted} flat Mittag-Leffler modules as defined in \cite[Example 2.1(3)]{EGT}. \begin{thm}\label{rest.fml} Let $\kappa$ be an infinite cardinal and $\mathcal{A}(\kappa)$ be the class of $\kappa$-restricted flat Mittag-Leffler modules. Every flat $\mathcal{A}(\kappa)$-periodic module is in $\mathcal{A}(\kappa)$. As a consequence every pure acyclic complex with components in $\mathcal{A}(\kappa)$ has cycles in $\mathcal{A}(\kappa)$. \end{thm} \begin{proof} The proof mostly follows the pattern outlined in \cite{BG}; the main difference is that instead of direct sum decompositions, we work with filtrations and the Hill Lemma (cf.\ \cite[Theorem 7.10]{GT}). Given a short exact sequence \begin{equation}\label{BG-ses} 0 \to F \to G \stackrel f\to F \to 0 \end{equation} with $F$ flat and $G \in \mathcal{A}(\kappa)$, we fix a Hill family $\mathcal{H}$ for $G$. The goal is to pick a filtration $(G_\alpha \mid \alpha \leq \sigma)$ from $\mathcal{H}$ such that for each $\alpha < \sigma$, $f(G_\alpha) = F \cap G_\alpha$, $f(G_\alpha) \subseteq_* F$, and $G_{\alpha+1}/G_\alpha$ is a $\leq \kappa$-presented flat Mittag-Leffler module. Once this is achieved, we obtain a filtration of the whole short exact sequence \eqref{BG-ses} by short exact sequences of the form \[ 0 \to F_{\alpha+1}/F_\alpha \to G_{\alpha+1}/G_\alpha \to F_{\alpha+1}/F_\alpha \to 0 \] (putting $F_{\alpha} = f(G_\alpha) = F \cap G_\alpha$).
Since the property of being flat Mittag-Leffler passes to pure submodules (cf.\ \cite[Corollary 3.20]{GT}), this would make $F_{\alpha+1}/F_\alpha$ a $\leq\kappa$-presented flat Mittag-Leffler module and hence imply $F \in \mathcal{A}(\kappa)$. (Note that by \cite[Lemma 2.7 (1)]{EGPT}, each $\leq \kappa$-generated flat Mittag-Leffler module is (even strongly) $\leq \kappa$-presented.) Put $G_0 = 0$. For limit ordinals $\alpha$, it suffices to take unions of already constructed submodules $G_\beta$, $\beta<\alpha$; note that then $G_\alpha \in \mathcal{H}$ by property (H2) of the Hill Lemma. Having constructed modules up to $G_\alpha$ (and assuming $G_\alpha \neq G$), we construct $G_{\alpha+1}$ as follows: we pass to the quotient short exact sequence \[ 0 \to F/F_\alpha \to G/G_\alpha \stackrel{\overline f}\to F/F_\alpha \to 0, \] which, by assumption, satisfies that $F/F_\alpha$ is flat and $G/G_\alpha \in \mathcal{A}(\kappa)$. Note that $F/F_\alpha$, being (identified with) a pure submodule of $G/G_\alpha$, is flat Mittag-Leffler. The Hill family $\mathcal{H}$ gives rise to a family $\mathcal{H}'$ for $G/G_\alpha$, which consists of factors of modules from $\mathcal{H}$ (containing $G_\alpha$) by $G_\alpha$. Let us first show that any $\leq\kappa$-generated submodule $Y$ of $G/G_\alpha$ can be enlarged to a $\leq\kappa$-generated $\overline G \in \mathcal{H}'$ with the property that $\overline f(\overline G) \subseteq_* F/F_\alpha$ and $\overline G \cap (F/F_\alpha)$ is $\leq\kappa$-generated. To this end, we construct inductively a chain of submodules $\overline G_n \in \mathcal{H}'$ with union $\overline G$ (utilizing property (H2)). Let $\overline G_0 \in \mathcal{H}'$ be an arbitrary $\leq\kappa$-generated module containing $Y$ (obtained via (H4)).
Assuming we have constructed $\overline G_n$, we get $\overline G_{n+1}$ by taking these steps: \begin{enumerate} \item Enlarge $\overline f(\overline G_n)$ to a $\leq\kappa$-generated pure submodule $X_n$ of $F/F_\alpha$; this is possible by \cite[Lemma 2.7 (2)]{EGPT} once we notice that $F/F_\alpha$, being a pure submodule of $G/G_\alpha$, is flat Mittag-Leffler. \item Take a $\leq\kappa$-generated $\overline G_{n+1} \in \mathcal{H}'$ such that $X_n \subseteq \overline f(\overline G_{n+1})$; this is again possible by property (H4) of the Hill Lemma. \end{enumerate} We have $\overline f(\overline G) = \bigcup_{n \in \mathbb{N}} \overline f(\overline G_n) = \bigcup_{n \in \mathbb{N}} X_n \subseteq_* F/F_\alpha$. This also shows that $\overline f(\overline G)$ is flat Mittag-Leffler, hence $\leq\kappa$-presented. The short exact sequence \[ 0 \to \overline G \cap (F/F_\alpha) \to \overline G \to \overline f(\overline G) \to 0 \] now shows that $\overline G \cap (F/F_\alpha)$ is indeed $\leq\kappa$-generated. Now iterate the claim as follows: Start with an arbitrary $\leq\kappa$-generated non-zero $Y_0 \subseteq G/G_\alpha$ and obtain $\overline G_0$ from the claim. Enlarge it to $\overline G_1 \in \mathcal{H}'$ satisfying $\overline G_0 \cap (F/F_\alpha) \subseteq \overline f(\overline G_1)$ (which we may do using (H4), since $\overline G_0 \cap (F/F_\alpha)$ is $\leq\kappa$-generated). Taking $Y_1 = \overline G_1 + \overline f(\overline G_1)$ and applying the claim, we get $\overline G_2$ etc. This way we obtain a chain \[ \overline G_0 \cap (F/F_\alpha) \subseteq \overline f(\overline G_1) \subseteq \overline G_2 \cap (F/F_\alpha) \subseteq \overline f(\overline G_3) \subseteq \ldots, \] so for $\overline G = \bigcup_{n \in \mathbb{N}} \overline G_n \in \mathcal{H}'$ we have $\overline G \cap (F/F_\alpha) = \overline f(\overline G)$. The purity of $\overline f(\overline G)$ in $F/F_\alpha$, as well as its being $\leq\kappa$-generated, is ensured as above.
The desired module $G_{\alpha+1}$ is now the one satisfying $G_{\alpha+1}/G_\alpha = \overline G$. \end{proof} Note that in the case $\kappa = \aleph_0$, $\mathcal{A}(\kappa)$ is just the class of projective modules by \cite[Seconde partie, Section 2.2]{RG}, so this also covers the case of \cite{BG}. \section{Quillen equivalent models for ${\bf K}(\mathrm{Proj}(R))$} It is known (see Bravo, Gillespie and Hovey \cite[Corollary 6.4]{BGH}) that the homotopy category of projectives ${\bf K}(\mathrm{Proj}(R))$ can be realized as the homotopy category of the model $\mathcal{M}_{\rm proj}=(\mathrm{Ch}(\mathrm{Proj}(R)), \mathrm{Ch}(\mathrm{Proj}(R))^{\perp}, \mathrm{Ch}(R))$ in $\mathrm{Ch}(R)$. Now, by \cite[Remark 4.2]{G}, the class $\mathrm{Ch}(\mathrm{Flat}(R))$ induces a model category structure in $\mathrm{Ch}(R)$ given by the triple $$(\mathrm{Ch}(\mathrm{Flat}(R)),\mathrm{Ch}(\mathrm{Proj}(R))^{\perp}, dg(\mathrm{Cot}(R))).$$ The last model is thus Quillen equivalent to $\mathcal{M}_{\rm proj}$. Therefore, its homotopy category, the derived category of flats ${\bf D}(\mathrm{Flat}(R))$, is triangulated equivalent to ${\bf K}(\mathrm{Proj}(R))$. The next theorem gives sufficient conditions on a class of modules $\mathcal{A}$ for ${\bf D}(\mathcal{A})$ and ${\bf D}(\mathrm{Flat}(R))$ to be triangulated equivalent. For concrete examples of such classes the reader should have in mind the classes of modules considered in Section \ref{section.examples}. \begin{thm}\label{t.mc.affine case} Let $\mathcal{A}\subseteq \mathrm{Flat}(R)$ be a class of modules such that: \begin{enumerate} \item The pair $(\mathcal{A},\mathcal{B})$ is a hereditary cotorsion pair generated by a set. \item Every flat $\mathcal{A}$-periodic module belongs to $\mathcal{A}$. \end{enumerate} Then there is an abelian model category structure $\mathcal M=(\mathrm{Ch}(\mathcal{A}),\mathrm{Ch}(\mathrm{Proj}(R))^{\perp},dg\widetilde{\mathcal{B}})$ in $\mathrm{Ch}(R)$.
If we denote by $\mathbf{D}(\mathcal{A})$ the homotopy category of $\mathcal{M}$, then ${\bf D}(\mathrm{Flat}(R))$, $\mathbf{D}(\mathcal{A})$ and ${\bf K}(\mathrm{Proj}(R))$ are triangulated equivalent, the equivalences being induced by Quillen equivalences between the corresponding model categories. \end{thm} \begin{proof} Let $\mathcal M=(\mathrm{Ch}(\mathcal{A}),\mathcal{W},dg\widetilde{\mathcal{B}})$ be the model associated to the complete hereditary cotorsion pairs $(\mathrm{Ch}(\mathcal{A}), \mathrm{Ch}(\mathcal{A})^\perp)$ and $(\tilclass{\mathcal{A}},dg\tilclass{\mathcal{B}})$ in $\mathrm{Ch}(R)$. To get the claim it suffices to show that $\mathcal{W}=\mathrm{Ch}(\mathrm{Proj}(R))^\perp$. To this end we will use \cite[Lemma 4.3(1)]{G}, i.e. we need to prove: \begin{enumerate} \item[(i) ] $\tilclass{\mathcal{A}}=\mathrm{Ch}(\mathcal{A})\cap \mathrm{Ch}(\mathrm{Proj}(R))^\perp$. \item[(ii) ] $ \mathrm{Ch}(\mathcal{A})^\perp\subseteq \mathrm{Ch}(\mathrm{Proj}(R))^\perp$. \end{enumerate} Condition (ii) is clear because $\mathrm{Proj}(R)\subseteq \mathcal{A}$. Now, by Neeman \cite[Theorem 8.6]{Nee}, $\mathrm{Ch}(\mathcal{A})\cap \mathrm{Ch}(\mathrm{Proj}(R))^\perp=\tilclass{\mathrm{Flat}}(R)\cap \mathrm{Ch}(\mathcal{A})$. But, by assumption (2), it follows that $\tilclass{\mathrm{Flat}}(R)\cap \mathrm{Ch}(\mathcal{A})=\tilclass{\mathcal{A}}$. \end{proof} \begin{remark} Starting with a class $\mathcal{A}$ in the assumptions of Theorem \ref{t.mc.affine case}, we may construct, for each integer $n\geq 0$, the class $\mathcal{A}^{\leq n}$ of modules $M$ possessing an exact sequence $$0\to A_n\to A_{n-1}\to \ldots \to A_0\to M\to 0$$ with $A_i\in \mathcal{A}$, $i=0,\ldots, n$. The derived categories $\mathbf{D}(\mathcal A^{\leq n})$ and $\mathbf{D}(\mathcal A)$ are triangulated equivalent (see Positselski \cite[Proposition A.5.6]{P}). In particular we can infer from this a triangulated equivalence between ${\bf K}(\mathrm{Proj}(R))$ and ${\bf D}(\mathcal{V}\!\mathcal{F}(R))$.
By using a standard argument of totalization one can also check that $\mathbf{D}(\mathcal A^{\leq n})$ and $\mathbf{D}(\mathcal A)$ can be realized as the homotopy categories of two models $\mathcal{M}_1$ and $\mathcal{M}_2$ and that these models are Quillen equivalent without using Neeman \cite[Theorem 8.6]{Nee}. From this point of view it seems that the triangulated equivalence between ${\bf K}(\mathrm{Proj}(R))$ and ${\bf D}(\mathcal{V}\!\mathcal{F}(R))$ is much less involved than the one between ${\bf K}(\mathrm{Proj}(R))$ and ${\bf D}(\mathrm{Flat}(R))$. \end{remark} \section{Quillen equivalent models for ${\bf D}(\mathrm{Flat}(X))$}\label{section.q_equivalent} \noindent {\bf Setup:} Throughout this section $X$ will denote a quasi-compact and semi-separated scheme. If $\U=\{U_0,\ldots,U_m\}$ is an affine open cover of $X$ and $\alpha=\{i_0,\ldots, i_k\}$ is a finite sequence of indices in the set $\{0,\ldots,m\}$ (with $i_0<\cdots<i_k$), we write $U_{\alpha}=U_{i_0}\cap \cdots \cap U_{i_k}$ for the corresponding affine intersection. \medskip In \cite{Mur} Murfet shows that the derived category of flat quasi-coherent sheaves on $X$, ${\bf D}(\mathrm{Flat}(X))$, constitutes a good replacement of the homotopy category of projectives for non-affine schemes, because in case $X=\Spec(R)$ is affine, the categories ${\bf D}(\mathrm{Flat}(X))$ and ${\bf K}(\mathrm{Proj}(R))$ are triangulated equivalent. There is a model for ${\bf D}(\mathrm{Flat}(X))$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ given by the triple $$\mathcal{M}_{\rm flat}=(\mathrm{Ch}(\mathrm{Flat}(X)),\mathcal{W}, dg(\mathrm{Cot}(X)))$$ (see \cite[Corollary 4.1]{G}). We devote this section to providing a general method to produce model categories $\mathcal{M}$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ which are Quillen equivalent to $\mathcal{M}_{\rm flat}$. In particular this implies that the homotopy category $\operatorname{Ho}(\mathcal{M})$ and ${\bf D}(\mathrm{Flat}(X))$ are triangulated equivalent.
\begin{theorem}\label{t.mc.general case} Let $X$ be a scheme, let $\mathcal P$ be a property of modules, and let $\mathcal{A}$ be its associated class of modules. Assume that $\mathcal{A}\subseteq \mathrm{Flat}$, and that the following conditions hold: \begin{enumerate} \item The class $\mathcal{A}$ is Zariski-local. \item For each $R=\OO_X(U)$, $U\in \U$, the pair $(\mathcal{A}_R,\mathcal{B}_R)$ is a hereditary cotorsion pair generated by a set. \item For each $R=\OO_X(U)$, $U\in \U$, every flat $\mathcal{A}_R$-periodic module belongs to $\mathcal{A}_R$. \item $j_*(\mathcal{A}_{{\rm qc}(U_{\alpha})})\subseteq \mathcal{A}_{{\rm qc}(X)}$, for each $\alpha\subseteq \{0,\ldots,m\}$. \end{enumerate} Then there is an abelian model category structure $ \mathcal{M}_{\mathcal{A}_{{\rm qc}}}$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ given by the triple $(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\mathcal{W},dg(\mathcal{B}))$. If we denote by $\mathbf{D}(\mathcal{A}_{{\rm qc}})$ the homotopy category of $\mathcal{M}_{\mathcal{A}_{{\rm qc}}}$, then the categories ${\bf D}(\mathrm{Flat}(X))$ and $\mathbf{D}(\mathcal{A}_{{\rm qc}})$ are triangulated equivalent, the equivalence being induced by a Quillen equivalence between the corresponding model categories. In case $X=\Spec(R)$ is affine, $\mathbf{D}(\mathcal{A}_{R})$ is triangulated equivalent to $\mathbf{K}(\mathrm{Proj}(R))$. \end{theorem} Before proving the theorem, let us focus on one particular instance of it: if we take $\mathcal{A}=\mathcal{V}\!\mathcal{F}$ (the class of very flat modules) the theorem gives us that ${\bf D}(\mathrm{Flat}(X))$ and $\mathbf{D}(\mathcal{V}\!\mathcal{F}(X))$ are triangulated equivalent. This generalizes \cite[Corollary 5.4.3]{P}, where such a triangulated equivalence is obtained for semi-separated Noetherian schemes of finite Krull dimension, to arbitrary schemes.
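Let us indicate, as a sketch rather than a complete verification, how the hypotheses of Theorem \ref{t.mc.general case} read for the class $\mathcal{A}=\mathcal{V}\!\mathcal{F}$:
\begin{itemize}
\item[(1)] amounts to very flatness being a Zariski-local property of modules;
\item[(2)] holds because $(\mathcal{V}\!\mathcal{F}(R),\,\mathcal{C}\!\!\mathcal{A}(R))$ is the cotorsion pair generated by the set $\{R[r^{-1}]: r\in R\}$, and it is hereditary since each $R[r^{-1}]$ has projective dimension at most $1$;
\item[(3)] is precisely the proposition of Section \ref{section.examples} due to \v S\v t'ov\' \i\v cek: every flat $\mathcal{V}\!\mathcal{F}(R)$-periodic module is very flat;
\item[(4)] is the preservation of very flatness by the direct image functor $j_*$ recalled above (\cite[Corollary 1.2.5(b)]{P}).
\end{itemize}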
\begin{cor}\label{cor.triang.equiv.flatveryflat} For any scheme $X$, the categories ${\bf D}(\mathrm{Flat}(X))$ and $\mathbf{D}(\mathcal{V}\!\mathcal{F}(X))$ are triangulated equivalent. \end{cor} Let us prove Theorem \ref{t.mc.general case}. We first require the following useful lemma. \begin{lemma}\label{l.keylemma} Suppose $\mathcal{A}$ is as in Theorem \ref{t.mc.general case} (possibly without satisfying condition (3)). Then for any $\mathscr M_\bullet\in \mathrm{Ch}(\mathrm{Flat}(X))$ there exists a short exact sequence $$0\to \mathscr K _\bullet\to \mathscr F_\bullet\to \mathscr M_\bullet\to 0,$$ where $\mathscr F_\bullet\in \mathrm{Ch}(\mathcal{A}_{{\rm qc}(X)})$ and $\mathscr K_\bullet\in\tilclass\mathrm{Flat}(X)$. \end{lemma} \begin{proof} We essentially follow the proof of \cite[Lemma 4.1.1]{P1}; the main difference is that instead of sheaves, we are dealing with complexes of sheaves. Starting with the empty set, we gradually construct such a short exact sequence with the desired properties manifesting on larger and larger unions of sets from $\U$, reaching $X$ in a finite number of steps. Assume that for an open subscheme $T$ of $X$ we have constructed a short exact sequence $0 \to \shcplx L \to \shcplx G \to \shcplx M \to 0$ such that the restriction $h^*(\shcplx G)$ belongs to $\mathrm{Ch}(\mathcal{A}_{{\rm qc}(T)})$ ($h\colon T \hookrightarrow X$ being the inclusion map) and $\shcplx L \in \tilclass\mathrm{Flat}(X)$. Let $U \in \U$ (with inclusion map $j\colon U \hookrightarrow X$); our goal is to construct a short exact sequence $0 \to \shcplx L' \to \shcplx G' \to \shcplx M \to 0$ with the same property with respect to the set $U \cup T$. Let us note that the adjoint pairs of functors on sheaves $(j^*, j_*)$, $(h^*, h_*)$ yield corresponding adjoint pairs of functors on complexes of sheaves.
Pick a short exact sequence $0 \to \shcplx K' \to \shcplx Z \to j^*(\shcplx G) \to 0$ of complexes of sheaves over the affine subscheme $U$, where $\shcplx Z \in \mathrm{Ch}(\mathcal{A}_{{\rm qc}(U)}) = \mathrm{Ch}(\mathcal{A}_{\OO_U(U)})$ and $\shcplx K' \in \mathrm{Ch}(\mathcal{A}_{\OO_U(U)})^{\perp}$, i.e.\ a special precover in the category of complexes of $\OO_U(U)$-modules. In this (affine) setting we know from \cite{Nee} that $\shcplx K' \in \tilclass\mathrm{Flat}(U)$, since $\shcplx K' \in \mathrm{Ch}(\mathrm{Flat}(U)) \cap \mathrm{Ch}(\mathcal{A}_{\OO_U(U)})^{\perp} \subseteq \mathrm{Ch}(\mathrm{Flat}(U)) \cap \mathrm{Ch}(\mathrm{Proj}(U))^{\perp}$. Using the direct image functor, we get $0 \to j_*(\shcplx K') \to j_*(\shcplx Z) \to j_*j^*(\shcplx G) \to 0$ over $X$. Since $U \in \U$ is affine, $j_*$ is an exact functor taking flats to flats and also preserving $\mathcal{A}$ by condition (4), so $j_*(\shcplx K') \in \tilclass\mathrm{Flat}(X)$ and $j_*(\shcplx Z)$ stays in $\mathrm{Ch}(\mathcal{A}_{{\rm qc}})$. Now considering the pull-back with respect to the adjunction morphism $\shcplx G \to j_*j^*(\shcplx G)$, one gets a new short exact sequence ending in $\shcplx G$; let $\shcplx G'$ be its middle term: \[\xymatrix{ 0 \ar[r] & j_*(\shcplx K') \ar[r]\ar@{=}[d] & \shcplx G' \ar[r]\ar[d] & \shcplx G \ar[r]\ar[d] & 0 \\ 0 \ar[r] & j_*(\shcplx K') \ar[r] & j_*(\shcplx Z) \ar[r] & j_*j^*(\shcplx G) \ar[r] & 0 }\] Let us now check that the restriction of $\shcplx G'$ to $U \cup T$ lies in $\mathrm{Ch}(\mathcal{A}_{{\rm qc}(U \cup T)})$; it suffices to check this on $U$ and $T$ separately. First, $j^*(\shcplx G') \cong \shcplx Z$, which is in $\mathrm{Ch}(\mathcal{A}_{{\rm qc}(U)})$ by construction.
On the other hand, the complex $j^*(\shcplx G)$, when further restricted to $U \cap T$, is in $\mathrm{Ch}(\mathcal{A}_{{\rm qc}(U \cap T)})$ ($\mathcal{A}$ being a Zariski-local class), and the same holds for the complex $\shcplx K'$ by the resolving property of $\mathcal{A}$. The embedding $U \cap T \hookrightarrow T$ is an affine morphism (by semi-separatedness) and preserves $\mathcal{A}$ by (4), so the restriction of $j_*(\shcplx K')$ to $T$ belongs to $\mathrm{Ch}(\mathcal{A}_{{\rm qc}(T)})$. Therefore the restriction of $\shcplx G'$ to $T$, being an extension of the corresponding restrictions of $j_*(\shcplx K')$ and $\shcplx G$, belongs to $\mathrm{Ch}(\mathcal{A}_{{\rm qc}(T)})$, too. Finally, the kernel $\shcplx K$ of the composition of morphisms $\shcplx G' \to \shcplx G \to \shcplx M$ is an extension of $\shcplx L$ and $j_*(\shcplx K')$, hence a complex from $\tilclass\mathrm{Flat}(X)$. This proves the existence of the short exact sequence from the statement. \end{proof} \begin{proof}[Proof of Theorem \ref{t.mc.general case}] First of all we notice that the class $\mathcal{A}_{{\rm qc}}$ contains a family of generators for $\mathfrak{Qcoh}(X)$; this is just a variation of the idea used in the proof of \cite[Lemma 4.1.1]{P}, where we replace the class of very flats by $\mathcal{A}$ (and do not care about the kernel of the morphisms), which is possible thanks to property (4). Then, by \cite[Corollary 3.15]{EGPT} we get in $\mathfrak{Qcoh}(X)$ the complete hereditary cotorsion pair $(\mathcal{A}_{{\rm qc}},\mathcal{B})$ generated by a set.
Thus by \cite[Theorem 4.10]{G} we get the abelian model structure $\mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\rm qc}=(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),{\mathcal{W}_1}, dg(\mathcal{B}))$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X)) $ given by the two complete hereditary cotorsion pairs: $$(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp})\quad \text{and}\quad (\tilclass{\mathcal{A}}_{{\rm qc}}, dg(\mathcal{B})).$$ Since $\mathcal{A}_{{\rm qc}}\subseteq \mathrm{Flat}(X) $, we get the corresponding induced cotorsion pairs in $\mathrm{Ch}(\mathrm{Flat}(X))$ (with the induced exact structure from $\mathrm{Flat}(X)$): $$(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}\cap \mathrm{Ch}(\mathrm{Flat}(X)))\quad \text{and}\quad (\tilclass{\mathcal{A}}_{{\rm qc}}, dg(\mathcal{B})\cap \mathrm{Ch}(\mathrm{Flat}(X))).$$ To see that e.g.\ the former one is indeed a cotorsion pair, we have to check that $\mathrm{Ch}(\mathcal{A}_{{\rm qc}}) = {}^\perp(\mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}\cap \mathrm{Ch}(\mathrm{Flat}(X))) \cap \mathrm{Ch}(\mathrm{Flat}(X))$. The inclusion ``$\subseteq$'' is clear. To see the other one, pick $\mathscr X_\bullet \in {}^\perp(\mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}\cap \mathrm{Ch}(\mathrm{Flat}(X))) \cap \mathrm{Ch}(\mathrm{Flat}(X))$ and consider a short exact sequence $0 \to \mathscr B_\bullet \to \mathscr A_\bullet \to \mathscr X_\bullet \to 0$ with $\mathscr A_\bullet \in \mathrm{Ch}(\mathcal{A}_{{\rm qc}})$ and $\mathscr B_\bullet \in \mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}$. As $\mathcal{A}_{{\rm qc}} \subseteq \mathrm{Flat}(X)$ and $\mathrm{Ch}(\mathrm{Flat}(X))$ is a resolving class, we infer that $\mathscr B_\bullet \in \mathrm{Ch}(\mathrm{Flat}(X))$. Thus the sequence splits and $\mathscr X_\bullet$ is a direct summand of $\mathscr A_\bullet$, hence an element of $\mathrm{Ch}(\mathcal{A}_{{\rm qc}})$. The proof for the latter cotorsion pair goes in a similar way. 
Now we will apply \cite[Lemma 4.3]{G} to these two complete cotorsion pairs in the category $\mathrm{Ch}(\mathrm{Flat}(X))$ and to the thick class $\mathcal{W}=\tilclass{\mathrm{Flat}}(X)$ in $\mathrm{Ch}(\mathrm{Flat}(X))$. So we need to check that the following conditions hold: \begin{enumerate} \item[(i)] $\tilclass{\mathcal{A}}_{{\rm qc}}=\mathrm{Ch}(\mathcal{A}_{{\rm qc}})\cap \tilclass{\mathrm{Flat}}(X)$. \item[(ii)] $\mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}\cap \mathrm{Ch}(\mathrm{Flat}(X))\subseteq \tilclass{\mathrm{Flat}}(X)$. \end{enumerate} Since every flat $\mathcal{A}_R$-periodic module belongs to $\mathcal{A}_R$ and the classes $\mathcal{A}$ and $\mathrm{Flat}$ are Zariski-local, we immediately infer that every flat $\mathcal{A}_{{\rm qc}}$-periodic quasi-coherent sheaf belongs to $\mathcal{A}_{{\rm qc}}$. Thus, from Proposition \ref{prop.periodic}, we get condition (i). Let us now check condition (ii). Let $\mathscr L_\bullet\in\mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}\cap \mathrm{Ch}(\mathrm{Flat}(X)) $. Since the pair $(\mathrm{Ch}(\mathrm{Flat}(X)),\mathrm{Ch}(\mathrm{Flat}(X))^\perp)$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ has enough injectives, there exists an exact sequence, $$0\to \mathscr L_\bullet\to \mathscr P_\bullet\to \mathscr M_\bullet\to 0, $$ with $\mathscr P_\bullet\in \mathrm{Ch}(\mathrm{Flat}(X))^\perp$ and $\mathscr M_\bullet\in \mathrm{Ch}(\mathrm{Flat}(X))$. Now, since $\mathscr L_\bullet\in\mathrm{Ch}(\mathrm{Flat}(X))$, we get that $\mathscr P_\bullet\in \mathrm{Ch}(\mathrm{Flat}(X))\cap \mathrm{Ch}(\mathrm{Flat}(X))^\perp=\tilclass{\mathrm{Flat}\mathrm{Cot}}(X)$. By Lemma \ref{l.keylemma}, there exists an exact sequence $$0\to \mathscr K _\bullet\to \mathscr F_\bullet\to \mathscr M_\bullet\to 0,$$ where $\mathscr F_\bullet\in \mathrm{Ch}(\mathcal{A}_{{\rm qc}})$ and $\mathscr K_\bullet\in\tilclass\mathrm{Flat}(X)$.
Now, we take the pull-back of $\mathscr P_\bullet\to \mathscr M_\bullet$ and $\mathscr F_\bullet \to \mathscr M_\bullet$, so we get a commutative diagram: \begin{equation*} \xymatrix{ & & 0\ar[d] & 0\ar[d] & & \\ & & \mathscr K_\bullet \ar[d] \ar@{=}[r] &\mathscr K_\bullet \ar[d]\\ 0\ar[r] & \mathscr L_\bullet\ar@{=}[d]\ar[r] & \mathscr Q_\bullet \ar[r] \ar[d] &\mathscr F_\bullet \ar[r]\ar[d] &0\\ 0\ar[r] & \mathscr L_\bullet\ar[r] & \mathscr P_\bullet \ar[r]\ar[d] &\mathscr M_\bullet\ar[r]\ar[d] &0 \\ & & 0 & 0 } \end{equation*} In the middle column, the complexes $\mathscr K_\bullet$ and $\mathscr P_\bullet$ belong to $\tilclass{\mathrm{Flat}}(X)$. Therefore, the complex $\mathscr Q_\bullet$ also belongs to $\tilclass{\mathrm{Flat}}(X)$. Since $\mathscr F_\bullet\in \mathrm{Ch}(\mathcal{A}_{{\rm qc}})$ and $\mathscr L_{\bullet}\in \mathrm{Ch}(\mathcal{A}_{{\rm qc}})^{\perp}$, the exact sequence in the middle row splits. So, $\mathscr L_\bullet \in \tilclass{\mathrm{Flat}}(X)$ as desired. Therefore by \cite[Lemma 4.3]{G} we have the exact model structure in $\mathrm{Ch}(\mathrm{Flat}(X))$ given by the triple $$\mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\rm flat}}=(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\tilclass{\mathrm{Flat}}(X), dg(\mathcal{B})\cap \mathrm{Ch}(\mathrm{Flat}(X))).$$ Since it has the same class of trivial objects, this model is Quillen equivalent to the flat model in $\mathrm{Ch}(\mathrm{Flat}(X))$, $$\mathcal{M}_{\rm flat}^{\mathrm{Ch}_{\rm flat}}=(\mathrm{Ch}(\mathrm{Flat}(X)),\tilclass{\mathrm{Flat}}(X),dg(\mathrm{Cot}(X))\cap \mathrm{Ch}(\mathrm{Flat}(X)) ).$$ This is, in turn, the restricted model of the model $$\mathcal{M}_{\rm flat}=(\mathrm{Ch}(\mathrm{Flat}(X)),\mathcal{W}, dg(\mathrm{Cot}(X)))$$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X))$ with respect to the exact category $\mathrm{Ch}(\mathrm{Flat}(X))$ of cofibrant objects.
Thus, $\mathcal{M}_{\rm flat}$ and $\mathcal{M}_{\rm flat}^{\mathrm{Ch}_{\rm flat}}$ are canonically Quillen equivalent. To finish the proof, let us show that the model $\mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\rm flat}}$ is Quillen equivalent to $$\mathcal{M}_{\mathcal{A}_{{\rm qc}}}=(\mathrm{Ch}(\mathcal{A}_{{\rm qc}(X)}),\mathcal{W}_1, dg(\mathcal{B})).$$ But this model is canonically Quillen equivalent to its restriction to the cofibrant objects, i.e. $$\mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\mathcal{A}_{\rm qc}}}=(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\tilclass{\mathcal{A}}_{{\rm qc}}, dg(\mathcal{B})\cap \mathrm{Ch}(\mathcal{A}_{{\rm qc}})).$$ Finally the Quillen equivalent cofibrant restricted model of $$\mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\rm flat}}=(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\tilclass{\mathrm{Flat}}(X), dg(\mathcal{B})\cap \mathrm{Ch}(\mathrm{Flat}(X)))$$ is given by the triple $$(\mathrm{Ch}(\mathcal{A}_{{\rm qc}}),\tilclass{\mathrm{Flat}}(X)\cap\mathrm{Ch}(\mathcal{A}_{{\rm qc}}), dg(\mathcal{B})\cap \mathrm{Ch}(\mathcal{A}_{{\rm qc}})), $$ which by condition (i) above is precisely the previous model $\mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\mathcal{A}_{\rm qc}}}$. In summary, we have the following chain of Quillen equivalences among the several models, $$ \mathcal{M}_{\rm flat}\simeq \mathcal{M}_{\rm flat}^{\mathrm{Ch}_{\rm flat}}\simeq \mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\rm flat}}\simeq \mathcal{M}_{\mathcal{A}_{{\rm qc}}}^{\mathrm{Ch}_{\mathcal{A}_{\rm qc}}}\simeq \mathcal{M}_{\mathcal{A}_{{\rm qc}}}. $$ The first and the last models give our desired Quillen equivalence. \end{proof} Recall from \cite{D} that $\mathscr M\in \mathfrak{Qcoh}(X)$ is an \emph{infinite-dimensional vector bundle} if, for each $U\in \mathcal U$, the $\mathscr O_X(U)$-module $\mathscr M(U)$ is projective. We will denote by $\mathrm{Vect}(X)$ the class of all infinite-dimensional vector bundles on $X$.
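As an illustration (a sketch): in the affine case the notion reduces to projectivity. Since projectivity of modules is a Zariski-local property by Raynaud and Gruson \cite{RG}, for $X=\Spec(R)$ one has
\[
\mathscr M\in \mathrm{Vect}(X) \iff \mathscr M(X) \text{ is a projective } R\text{-module},
\]
so in particular every affine scheme has enough infinite-dimensional vector bundles.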
In case $\mathrm{Vect}(X)$ contains a generating set of $\mathfrak{Qcoh}(X)$, we know from \cite[Corollaries 3.15 and 3.16]{EGPT} that the pair $(\mathrm{Vect}(X),\mathcal B)$ (where $\mathcal B:=\mathrm{Vect}(X)^{\perp}$) is a complete cotorsion pair generated by a set. It is hereditary, because the class $\mathrm{Vect}(X)$ is resolving. Thus by \cite[Theorem 4.10]{G} we get the abelian model structure $\mathcal{M}_{\textrm{vect}}=(\mathrm{Ch}(\mathrm{Vect}(X)),{\mathcal{W}_1}, dg(\mathcal{B}))$ in $\mathrm{Ch}(\mathfrak{Qcoh}(X)) $ given by the two complete hereditary cotorsion pairs: $$(\mathrm{Ch}(\mathrm{Vect}(X)),\mathrm{Ch}(\mathrm{Vect}(X))^{\perp})\quad \text{and}\quad (\tilclass{\mathrm{Vect}}(X), dg(\mathcal{B})).$$ We will denote by ${\bf D}(\mathrm{Vect}(X))$ its homotopy category. We are now in a position to prove Corollary 2 in the Introduction. \begin{cor}\label{eq.vbundles} Let $X$ be a scheme with enough infinite-dimensional vector bundles. Then the categories ${\bf D}(\mathrm{Flat}(X))$ and $\mathbf{D}(\mathrm{Vect}(X))$ are triangle equivalent, the equivalence being induced by a Quillen equivalence between the corresponding model categories. \end{cor} \begin{proof} The proof will follow by showing that the models underlying ${\bf D}(\mathrm{Vect}(X))$ and $\mathbf{D}(\mathcal{V}\!\mathcal{F}(X))$ are Quillen equivalent, and then by applying Corollary \ref{cor.triang.equiv.flatveryflat}. To this end, we will prove that the model structures $\mathcal{M}_{\mathrm{vect}}$ and $\mathcal{M}_{\mathcal{V}\!\mathcal{F}}$ have the same trivial objects. To achieve this, by \cite[Theorem 1.2]{G2}, it suffices to show that the trivial fibrant and cofibrant objects of one structure are trivial also in the other structure.
This assertion is clearly satisfied by the trivial cofibrants of $\mathcal{M}_{\mathrm{vect}}$ and trivial fibrants of $\mathcal{M}_{\mathcal{V}\!\mathcal{F}}$, as \[ \widetilde{\mathrm{Vect}}(X) \subseteq \widetilde{\mathcal{V}\!\mathcal{F}}(X) \quad \text{and} \quad \mathrm{Ch}(\mathcal{V}\!\mathcal{F}(X))^\perp \subseteq \mathrm{Ch}(\mathrm{Vect}(X))^\perp.\] Now let $\shcplx V \in \widetilde{\mathcal{V}\!\mathcal{F}}(X)$; since there are enough infinite-dimensional vector bundles, the cotorsion pair $(\tilclass{\mathrm{Vect}}(X), dg(\mathcal{B}))$ has enough projectives, hence there is a short exact sequence \[ 0 \to \shcplx Q \to \shcplx P \to \shcplx V \to 0 \] with $\shcplx P \in \tilclass{\mathrm{Vect}}(X)$. Restricting this to an open affine subset of $X$, we obtain a short exact sequence with a complex of projective modules in the middle, ending in a complex of very flat modules, and whose cycles also belong to the respective classes. Since the projective dimension of very flat modules does not exceed $1$, it follows that $\shcplx Q$ also has projective cycles after this restriction, hence $\shcplx Q \in \tilclass{\mathrm{Vect}}(X)$. We conclude that $\shcplx V$, being a factor of two trivial objects, is itself trivial in $\mathcal{M}_{\mathrm{vect}}$. Finally, pick $\shcplx M \in \mathrm{Ch}(\mathrm{Vect}(X))^\perp$. Using the completeness of the cotorsion pair $(\mathrm{Ch}(\mathcal{V}\!\mathcal{F}(X)), \mathrm{Ch}(\mathcal{V}\!\mathcal{F}(X))^\perp)$, we obtain a short exact sequence \[ 0 \to \shcplx K \to \shcplx V \to \shcplx M \to 0 \] with $\shcplx V \in \mathrm{Ch}(\mathcal{V}\!\mathcal{F}(X))$ and $\shcplx K \in \mathrm{Ch}(\mathcal{V}\!\mathcal{F}(X))^\perp$. As $\shcplx K$ is trivial in $\mathcal{M}_{\mathrm{vect}}$, it suffices to show that $\shcplx V$ is trivial, too.
Furthermore, $\mathrm{Ch}(\mathcal{V}\!\mathcal{F}(X))^\perp \subseteq \mathrm{Ch}(\mathrm{Vect}(X))^\perp$ implies that, in fact, $\shcplx V \in \mathrm{Ch}(\mathrm{Vect}(X))^\perp$. So as above, construct a short exact sequence \[ 0 \to \shcplx Q \to \shcplx P \to \shcplx V \to 0, \] this time with $\shcplx P \in \mathrm{Ch}(\mathrm{Vect}(X))$ and $\shcplx Q \in \mathrm{Ch}(\mathrm{Vect}(X))^\perp$. The same local argument as above shows that $\shcplx Q \in \mathrm{Ch}(\mathrm{Vect}(X))$, and we also have $\shcplx P \in \mathrm{Ch}(\mathrm{Vect}(X))^\perp$ (being an extension of two objects from the class). Hence $\shcplx V$ is a factor of two complexes from the class $\mathrm{Ch}(\mathrm{Vect}(X)) \cap \mathrm{Ch}(\mathrm{Vect}(X))^\perp$, which is a subclass of $\tilclass{\mathrm{Vect}}(X)$ and consequently of $\tilclass{\mathcal{V}\!\mathcal{F}}(X)$, therefore consisting of trivial objects of $\mathcal{M}_{\mathcal{V}\!\mathcal{F}}$. \end{proof} Finally, the last consequence is also an application of Theorem \ref{t.mc.general case} for the class of very flat quasi-coherent sheaves. It follows from Gillespie \cite[Theorem 4.10]{G}. \begin{cor} There is a recollement $$\xymatrix{ \mathbf D_{\mathrm{ac}}(\mathcal{V}\!\mathcal{F}(X)) \ar[rr]|j && \mathbf D(\mathcal{V}\!\mathcal{F}(X)) \ar@/_1pc/[ll]\ar@/^1pc/[ll]\ar[rr]|w &&\mathbf D(X) \ar@/_1pc/[ll]\ar@/^1pc/[ll] }$$ \end{cor} \medskip\par\noindent \begin{remark} Murfet and Salarian deal in \cite{MS} with a suitable generalization of total acyclicity for sche\-mes.
Namely, they define the category ${\bf D}_{{\rm F}\textrm{-}{\rm tac}}(\mathrm{Flat}(X))$\footnote{\,The terminology used in \cite{MS} is ${\bf D}_{{\rm tac}}(\mathrm{Flat}(X))$.} of \emph{F-totally acyclic complexes} in ${\bf D}(\mathrm{Flat}(X))$ and prove that, in the case where $X=\Spec(R)$ is affine and $R$ is Noetherian of finite Krull dimension, ${\bf D}_{{{\rm F}\textrm{-}{\rm tac}}}(\mathrm{Flat}(X))$ is triangle equivalent to ${\bf K}_{\rm tac}({\mathrm{Proj}}(R))$ (the homotopy category of totally acyclic complexes of projective modules), showing that ${\bf D}_{{\rm F}\textrm{-}{\rm tac}}(\mathrm{Flat}(X))$ also constitutes a good replacement of ${\bf K}_{\rm tac}({\mathrm{Proj}}(R))$ in a non-affine context. An analogous version of Theorem \ref{t.mc.general case} allows us to restrict the equivalence between ${\bf D}(\mathrm{Flat}(X))$ and $\mathbf D(\mathcal{A}_{{\rm qc}})$ to their corresponding categories of F-totally acyclic complexes ${\bf D}_{{\rm F}\textrm{-}{\rm tac}}(\mathcal{A}_{\rm qc})$ and ${\bf D}_{{\rm F}\textrm{-}{\rm tac}}(\mathrm{Flat}(X))$. In particular, the full subcategory ${\bf D}_{{\rm F}\textrm{-}{\rm tac}}(\mathcal{V}\!\mathcal{F}(X))$ of F-totally acyclic complexes of very flat quasi-coherent sheaves in ${\bf D}(\mathcal{V}\!\mathcal{F}(X))$ is triangle equivalent to Murfet and Salarian's derived category of F-totally acyclic complexes of flats. \end{remark} \section*{Acknowledgements} We would like to thank Jan \v S\v t'ov\' \i\v cek for many useful comments and discussions during the preparation of this manuscript. We would also like to thank Leonid Positselski for his inspiring work \cite{P} and for sharing his knowledge on the subject of very flat modules.
\section{Introduction}\label{sec:introduction} \IEEEPARstart{D}{omain} adaptation (DA) aims to learn a discriminative classifier in the presence of a shift between training data in the source domain and test data in the target domain \cite{Ben_DA_bounds,GaninL15,XiaoG15,Kun_zhang_multi_source,Kun_zhang_TCS,stojanov2019data}. Currently, DA can be divided into three categories: \emph{supervised DA} \cite{Kate_SDA}, \emph{semi-supervised DA} \cite{ICML_SSDA,Li2014,Duan2012b,Xiao2015,yan2017learning} and \emph{unsupervised DA} (UDA) \cite{KSaito_ICML17,Gong2016,Long_DAN_journal,Gopalan2014,zhang2019self,liu2018TFS,ghifary2016deep,deng2019cluster}. When only a few labeled data are available in the target domain, supervised DA is also known as \emph{few-shot DA} \cite{FSDA17}. Since unlabeled data in the target domain can be easily obtained, UDA exhibits the greatest potential in the real world \cite{li2017visual,DANN_JMLR,GFK_CVPR,Saito:MCD,DIRT-T,Kang_cvpr19,GCAN_cvpr19,Ziser_ACL19,Xu2014,gong2019dlow,li2018unsupervised}. UDA methods train with clean labeled data in a source domain (i.e., clean source data) and unlabeled data in a target domain (i.e., unlabeled target data) to obtain classifiers for the \emph{target domain} (TD), which mainly consist of three orthogonal techniques: \emph{integral probability metrics} (IPM) \cite{Ghifary2017,Gong2016,Gretton2012,Wasserstein_distances,Long_DAN,Long_JAN,Xiyu_TL_noise}, \emph{adversarial training} \cite{DANN_JMLR,Mingming_CGDAB,Judy_adversarial,Kun_zhang_CIAN,Saito:MCD,Judy_ADDA,zhang2018collaborative} and \emph{pseudo labeling} \cite{KSaito_ICML17}. Compared to IPM- and adversarial-training-based methods, the pseudo-labeling-based method (i.e., \emph{asymmetric tri-training domain adaptation} (ATDA) \cite{KSaito_ICML17}) can construct a high-quality target-specific representation, providing a better classification performance.
\begin{figure}[!tp] \centering \includegraphics[width=0.4\textwidth]{NeurIPS2019.pdf} \caption{Wildly unsupervised domain adaptation (WUDA). The blue line denotes that UDA transfers knowledge from clean source data ($P_s$) to unlabeled target data ($P_{x_t}$). However, perfectly clean data is hard to acquire. This brings \emph{wildly unsupervised domain adaptation} (WUDA), namely transferring knowledge from noisy source data ($\widetilde{P}_s$) to unlabeled target data ($P_{x_t}$). Note that the label corruption process (black dashed line) is unknown in practice. To handle WUDA, a compromise solution is a two-step approach (green line), which sequentially combines label-noise algorithms ($\widetilde{P}_s \rightarrow \hat{P}_s$, label correction) and existing UDA ($\hat{P}_s \rightarrow P_{x_t}$). This paper proposes a robust one-step approach called Butterfly (red line, $\widetilde{P}_s \rightarrow P_{x_t}$ directly), which eliminates noise effects from $\widetilde{P}_s$.} \label{fig: our_solution} \vspace{-1em} \end{figure} \begin{figure*}[tp] \begin{center} \subfigure[Symmetry-flip noise: \emph{S}$\rightarrow$\emph{M} (left), \emph{M}$\rightarrow$\emph{S} (right)] {\includegraphics[width=0.24\textwidth]{Noisy_rate_S2M_S.png} \includegraphics[width=0.24\textwidth]{Noisy_rate_M2S_S.png}} \subfigure[Pair-flip noise: \emph{S}$\rightarrow$\emph{M} (left), \emph{M}$\rightarrow$\emph{S} (right)] {\includegraphics[width=0.24\textwidth]{Noisy_rate_S2M_P.png} \includegraphics[width=0.24\textwidth]{Noisy_rate_M2S_P.png}} \end{center} \vspace{-1em} \caption{{WUDA ruins representative UDA methods.} Representative UDA methods include \emph{deep adaptation network} (DAN, an IPM-based method \mbox{\cite{Long_DAN}}), \emph{domain-adversarial neural network} (DANN, an adversarial-training-based method \mbox{\cite{DANN_JMLR}}), \emph{asymmetric tri-training domain adaptation} (ATDA, a pseudo-label-based method \mbox{\cite{KSaito_ICML17}}) and \emph{transferable curriculum learning} (TCL, a
robust UDA method \mbox{\cite{TCL_long}}). B-Net is our proposed WUDA method. We report target-domain accuracy of all methods when the noise rate of the source domain changes (a) from $5\%$ to $70\%$ (symmetry-flip noise) and (b) from $5\%$ to $45\%$ (pair-flip noise). Clearly, as the noise rate of the source domain increases, the target-domain accuracy of representative UDA methods drops quickly while that of B-Net remains consistently stable.} \label{fig:UDA_failure} \vspace{-1em} \end{figure*} However, in the wild, the data volume of the source domain tends to be large \cite{tan2014towards}. To avoid the expensive labeling cost, labeled data in the source domain normally come from amateur annotators or the Internet \cite{cleanNet,Data_web,test_bed_dataset}. This brings us a new, more realistic and more challenging problem, \emph{wildly unsupervised domain adaptation} (abbreviated as WUDA, Figure~\ref{fig: our_solution}). This adaptation aims to transfer knowledge from noisy labeled data in the source domain ($\widetilde{P}_s$, i.e., noisy source data) to unlabeled target data ($P_{x_t}$). Unfortunately, existing UDA methods share an implicit assumption that \textit{there are no noisy source data} \cite{yu2017transfer,TCL_long}. Namely, these methods focus on transferring knowledge from clean source data ($P_s$) to unlabeled target data ($P_{x_t}$). Therefore, these methods cannot handle WUDA well (Figure~\ref{fig:UDA_failure}). To validate this fact, we empirically reveal the deficiency of existing UDA methods (Figure~\ref{fig:UDA_failure}, e.g., \emph{deep adaptation network} (DAN) \cite{Long_DAN} and \emph{domain-adversarial neural network} (DANN) \cite{DANN_JMLR}). To improve these methods, a straightforward solution is a two-step approach. In Figure~\ref{fig: our_solution}, we can first use label-noise algorithms to train a classifier on noisy source data, and then leverage this trained classifier to assign pseudo labels for noisy source data.
Via UDA methods, we can transfer knowledge from pseudo-labeled source data ($\hat{P}_s$) to unlabeled target data ($P_{x_t}$). Nonetheless, pseudo-labeled source data are still noisy, and such a two-step approach may not eliminate noise effects. To circumvent the issue of the two-step approach, we present a robust one-step approach called \textit{Butterfly}. At a high level, Butterfly directly transfers knowledge from $\widetilde{P}_s$ to $P_{x_t}$, and uses the transferred knowledge to construct target-specific representations. At a low level, Butterfly maintains four networks divided into two branches (Figure~\ref{fig: sketch_fig}): Two networks in Branch-I are jointly trained on noisy source data and pseudo-labeled target data (data in the \emph{mixture domain} (MD)); while two networks in Branch-II are trained on pseudo-labeled target data. Our ablation study (see Section~\ref{sec:abl_study}) confirms that the network design of Butterfly (see Section~\ref{sec:Butterfly_net}) is optimal. The robustness of Butterfly is rooted in the \textit{dual-checking principle} (DCP): Butterfly checks high-correctness data out not only from the data in MD but also from the pseudo-labeled target data. After cross-propagating these high-correctness data, Butterfly can obtain high-quality \emph{domain-invariant representations} (DIR) and \emph{target-specific representations} (TSR) simultaneously in an iterative manner. If we only check data in MD (i.e., B-Net-M in Section~\ref{sec:abl_study}), the errors in pseudo-labeled target data will accumulate, leading to low-quality DIR and TSR. We conduct experiments on simulated WUDA tasks, including $4$ \emph{MNIST-to-SYND} tasks, $4$ \emph{SYND-to-MNIST} tasks and $24$ human-sentiment tasks. Besides, we conduct experiments on $3$ real-world WUDA tasks. Empirical results demonstrate that Butterfly can robustly transfer knowledge from noisy source data to unlabeled target data.
Meanwhile, Butterfly performs much better than existing UDA methods when the \emph{source domain} (SD) suffers from extreme (e.g., $45\%$) noise. \section{{Literature Review}} {This section reviews the existing UDA methods in detail. UDA methods train with clean source data and unlabeled target data to classify target-domain data, which mainly consist of three orthogonal techniques: \textit{integral probability metrics} (IPM) \mbox{\cite{Ghifary2017,Gong2016,Gretton2012,Wasserstein_distances,Long_DAN}}, \textit{adversarial training} \mbox{\cite{DANN_JMLR,Mingming_CGDAB,Judy_adversarial,Kun_zhang_CIAN,Saito:MCD,Judy_ADDA}} and \textit{pseudo labeling} \mbox{\cite{KSaito_ICML17}}.} {IPMs (such as maximum mean discrepancy \mbox{\cite{Gretton2012,liu2020learning}} and Wasserstein distance \mbox{\cite{Wasserstein_distances}}) are used to measure the discrepancy between distributions of two domains. By minimizing the IPM between two domains, models trained with clean source data can classify unlabeled target data accurately \mbox{\cite{Gong2016,Ghifary2017,Long_DAN}}. In this line, representative methods include conditional transferable components \mbox{\cite{Gong2016}}, scatter component analysis \mbox{\cite{Ghifary2017}} and DAN \mbox{\cite{Long_DAN}}.} {Another technique is the adversarial training method inspired by the theory of domain adaptation \mbox{\cite{Ben_DA_bounds}}. This theory suggests that predictions must be based on features that cannot be used to discriminate between the source and target domains \mbox{\cite{DANN_JMLR,Judy_adversarial,Judy_ADDA}}. For example, DANN considers two deep networks: one constructs new features that predict labels in the TD, while the other makes the two domains indistinguishable based on these new features \mbox{\cite{DANN_JMLR}}.
DANN simultaneously trains two deep networks to find domain-invariant representations between two domains.} {The last technique is the pseudo-label method, which regards pseudo labels given by a classifier as true labels \mbox{\cite{KSaito_ICML17,Long_JDA}}. The \emph{joint domain adaptation} (JDA) matches joint distributions of two domains using these pseudo labels \mbox{\cite{Long_JDA}}. The \emph{asymmetric tri-training domain adaptation} (ATDA) leverages three networks asymmetrically \mbox{\cite{KSaito_ICML17}}. Specifically, two networks are used to annotate unlabeled target data, namely generating pseudo labels. The other network can obtain target-specific representations based on the pseudo-labeled data. Since pseudo-label UDA methods can effectively reduce the upper bound of expected risk in the TD \mbox{\cite{SourceDataFreeUDA20,KSaito_ICML17}}, we also consider using the pseudo-label technique to help address the WUDA problem (like ATDA \mbox{\cite{KSaito_ICML17}}).} \section{Preliminary} \label{Asec: noisy generation} In this section, we introduce notations used in this paper and two common label-noise generation processes \cite{Tongliang_TPAMI,jiang2017mentornet}. \subsection{Notations} \label{sec:notations} The following notations are used to present the theoretical results of this paper. \begin{itemize} \item a space $\mathcal{X}\subset \mathbb{R}^d$ and $\mathcal{Y}=\{1,2,\dots,K\}$ as a label set; \item $f_t(x_t)$ and $\tilde{f}_t(x_t)$ represent the ground-truth and pseudo labeling functions of the target domain, where $f_t,\tilde{f}_t: \mathcal{X} \rightarrow \mathcal{Y}$; \item $A = \{x:\tilde{f}_t(x)\neq f_t(x)\}$ and $B = \mathcal{X}\setminus A$ represent the area where $\tilde{f}_t(x)\neq f_t(x)$ (the set $A$) and the area where $\tilde{f}_t(x) = f_t(x)$ (the set $B$); \item $\tilde{p}_s(x_s,\tilde{y}_s)$, $p_s(x_s,y_s)$ and $q_s(x_s,y_s)$ represent probability densities of noisy, correct and incorrect \emph{multivariate random variables} (m.r.v.)
defined on $\mathcal{X}\times \mathcal{Y}$, respectively, and $\tilde{p}_{x_s}(x_s)$, $p_{x_s}(x_s)$ and $q_{x_s}(x_s)$ are their marginal densities on $\mathcal{X}$; \item $p_{x_t}(x_t)$ represents the probability density of m.r.v.~$X_t$ defined on $\mathcal{X}$; \item $q_{x_t}(x) = p_{x_t}(x)1_A(x)/P_{x_t}(A)$ represents the probability density of $X_t$ restricted in $A$; \item $p_{x_t}^{\prime}(x_t)=p_{x_t}(x_t)1_B(x_t)/P_{x_t}(B)$ represents the probability density of $X_t$ restricted in $B$; \item $\mathcal{H}$ is the class of arbitrary \emph{decision functions} $h: \mathcal{X} \rightarrow \mathbb{R}^K$; \item $\ell: \mathbb{R}^K\times \mathcal{Y} \rightarrow \mathbb{R}_{+}$ is the loss function. $\ell(t,y)$ means the loss incurred by predicting an output $t$ (e.g., $h(x)$) when the ground truth is $y$; \item $\mathbb{L}_\mathcal{H}=\{\ell(h(x),y)|h\in \mathcal{H},x\in \mathcal{X},y \in \mathcal{Y}\}$ is the class of loss functions associated with $\mathcal{H}$; \item {expected risks on the noisy m.r.v. and correct m.r.v.:} \begin{align*} \tilde{R}_s(h)&=\mathbb{E}_{\tilde{p}_s(x_s,\tilde{y}_s)}[\ell(h(x_s),\tilde{y}_s)], \\ {R}_s(h)&=\mathbb{E}_{ {p}_s(x_s,{y}_s)}[\ell(h(x_s),{y}_s)]; \end{align*} \item expected discrepancy (associated with $\ell$) between an arbitrary {decision function} $h$ and a ground-truth or pseudo labeling function $f$ ($f$ could be $f_t$ or $\tilde{f}_t$) under different marginal densities: \begin{align*} \tilde{R}_s(h,f)&=\mathbb{E}_{\tilde{p}_{x_s}(x_s)}[\ell(h(x_s),f(x_s))], \\ R_s(h,f)&=\mathbb{E}_{p_{x_s}(x_s)}[\ell(h(x_s),f(x_s))], \\ R_t(h,f)&=\mathbb{E}_{p_{x_t}(x_t)}[\ell(h(x_t),f(x_t))]. \end{align*} \end{itemize} \subsection{Generating label-noise via the transition matrix} \label{Asec: noisy generation_Q} {We assume that there are clean source data denoted by a m.r.v. ($X_s,Y_s$) defined on $\mathcal{X}\times \mathcal{Y}$ with the probability density $p_s(x_s,y_s)$. 
However, samples of ($X_s,Y_s$) cannot be directly obtained and we can only observe noisy source data (denoted by m.r.v. ($X_s,\tilde{Y}_s$)) with the probability density $\tilde{p}_s(x_s,\tilde{y}_s)$ \mbox{\cite{Tongliang_TPAMI}}. $\tilde{p}_s(x_s,\tilde{y}_s)$ is generated from ${p}_s(x_s,{y}_s)$ and a transition matrix $Q_{ij}=\textnormal{Pr}(\tilde{Y}_s=j|Y_s = i)$. Each element in $Q$, $\textnormal{Pr}(\tilde{Y}_s=j|Y_s = i)$, is a transition probability, i.e., the flip rate from a correct label $i$ to a noisy label $j$.} \subsection{Generating label-noise via the sample selection} \label{Asec: noisy generation_S} {The transition matrix $Q$ is easily estimated in certain situations \mbox{\cite{Tongliang_TPAMI}}. However, in more complex situations, such as clothing1M dataset \mbox{\cite{Clothing1M_xiaodong}}, noisy source data is directly generated by selecting data from a pool, which mixes correct data (data with correct labels) and incorrect data (data with incorrect labels). Namely, how the correct label $i$ is corrupted to $j$ ($i\neq j$) is unclear. Let $(X_s,Y_s,V_s)$ be a m.r.v. defined on $\mathcal{X}\times \mathcal{Y} \times \mathcal{V}$ with the probability density $p_s^{\text{po}}(x_s,y_s,v_s)$, where $\mathcal{V}=\{0,1\}$ is the \textit{perfect-selection random variable}. $V_s=1$ means ``correct'' and $V_s=0$ means ``incorrect''. Nonetheless, samples of $(X_s,Y_s,V_s)$ cannot be obtained and we can only observe $(X_s,\tilde{Y}_s)$ from a distribution with the following density.} \begin{align} \label{eq: noise_observ} \tilde{p}_s(x_s,\tilde{y}_s) = \sum_{v_s=0}^1p_{X_s,Y_s|V_s}^{\text{po}}(x_s,y_s|v_s)p_{V_s}^{\text{po}}(v_s), \end{align} {where $p_{V_s}^{\text{po}}(v_s) = \int_{\mathcal{X}}\sum_{y_s=1}^Kp_s^{\text{po}}(x_s,y_s,v_s)dx_s$. Eq.~({\ref{eq: noise_observ}}) means that we lose the information regarding $V_s$. If we uniformly draw samples from $\tilde{p}_s(x_s,\tilde{y}_s)$, the noise rate of these samples is $p_{V_s}^{\text{po}}(0)$. 
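As an illustration, this selection process can be simulated by mixing draws from a correct and an incorrect pool; below is a minimal NumPy sketch (the binary labels, the choice of flipped labels as the incorrect pool, and the noise rate are all illustrative assumptions, not part of the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.3            # Pr(V_s = 0): fraction drawn from the incorrect pool
n = 20_000

# Correct pool p_s: binary labels kept as-is.
y_correct = rng.integers(0, 2, size=n)
# Perfect-selection variable V_s: 1 = "correct", 0 = "incorrect".
v_s = rng.random(n) >= rho
# Incorrect pool q_s: here, simply the flipped label.
y_observed = np.where(v_s, y_correct, 1 - y_correct)

# The observer only sees y_observed; V_s is lost, as described above.
print(np.mean(y_observed != y_correct))   # empirical noise rate, close to rho
```

With a large sample, the empirical noise rate concentrates around $p_{V_s}^{\text{po}}(0)=\rho$, matching the statement in the text.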
It is clear that the m.r.v. $(X_s,Y_s|V_s=1)$ is the m.r.v. $(X_s,Y_s)$ mentioned in Section~{\ref{Asec: noisy generation_Q}}. Then, $q_s(x_s,y_s)$ is used to describe the density of incorrect m.r.v. $(X_s,Y_s|V_s=0)$. Using $p_s(x_s,y_s)$ and $q_s(x_s,y_s)$, $\tilde{p}_s(x_s,\tilde{y}_s)$ is expressed by} \begin{align} \label{eq: noise_observ_simple} \tilde{p}_s(x_s,\tilde{y}_s) = (1-\rho)p_s(x_s,y_s)+\rho q_s(x_s,y_s), \end{align} {where $\rho=p_{V_s}^{\text{po}}(0)$. To reduce noise effects from incorrect data, researchers aim to recover the information of $V_s$, i.e., to select the correct data \mbox{\cite{Co-teaching,jiang2017mentornet,DeCoupling}}.} \section{Wildly Unsupervised Domain Adaptation} In this section, we first define a new, more realistic and more challenging problem setting called \textit{wildly unsupervised domain adaptation (WUDA)}, and explain the nature of WUDA. Then, we empirically show that representative UDA methods cannot handle WUDA well, which motivates us to propose a novel method to address the WUDA problem (Section~\ref{sec:Butterfly_net}). \begin{mypro}[{Wildly Unsupervised Domain Adaptation}] \label{def-1} {Let $X_t$ be a m.r.v. defined on the space $\mathcal{X}$ with respect to the probability density $p_{x_t}$, $(X_s,\widetilde{Y}_s)$ be a m.r.v. defined on the space $\mathcal{X}\times \mathcal{Y}$ with respect to the probability density $\tilde{p}_{s}$, where $\tilde{p}_{s}$ is the probability density regarding noisy source data (generated in Section~{\ref{Asec: noisy generation_Q}} or {\ref{Asec: noisy generation_S}}), and $\mathcal{Y}=\{1,\dots, K\}$ is the label set. Let $p_{x_s}$ be the marginal density of $\tilde{p}_{s}$. Given i.i.d. 
data $\tilde{D}_s=\{(x_{si},\tilde{y}_{si})\}_{i=1}^{n_s}$ and $D_t=\{x_{ti}\}_{i=1}^{n_t}$ drawn from $\tilde{P_s}$ and $P_{x_t}$ separately, in wildly unsupervised domain adaptation, we aim to train with noisy source data $\tilde{D}_s$ and target data $D_t$ to accurately annotate data drawn from $P_{x_t}$, where $p_{x_s}\neq p_{x_t}$.} \end{mypro} \begin{myrem}\upshape In Problem~\ref{def-1}, $\tilde{D}_s$ is noisy source data, $D_t$ is unlabeled target data, and $\tilde{P_s}$ and $P_{x_t}$ are two probability measures corresponding to densities $\tilde{p}_s(x_s,\tilde{y}_s)$ and $p_{x_t}(x_t)$. \end{myrem} \begin{figure*}[!tp] \begin{center} {\includegraphics[width=0.7\textwidth]{Butterfly.pdf}} \end{center} \vspace{-1em} \caption{Butterfly Framework. Two networks ($F_1$ and $F_2$) in Branch-I are jointly trained on noisy source data and pseudo-labeled target data (mixture domain). Two networks in Branch-II ($F_{t1}$ and $F_{t2}$) are trained on pseudo-labeled target data. By using the \emph{dual-checking principle} (DCP), Butterfly checks high-correctness data out from both mixture and pseudo-labeled target data. After cross-propagating checked data, Butterfly can obtain high-quality \emph{domain-invariant representations} (DIR) and \emph{target-specific representations} (TSR) simultaneously in an iterative manner (Algorithms {\ref{alg: Cross_update}} and {\ref{alg: ButterNET}}). {Note that DIR interacts with TSR via the shared CNN. 
Besides, in the first training epoch, since we do not have any pseudo-labeled target data, we use noisy source data as the pseudo-labeled target data, which follows \mbox{\cite{KSaito_ICML17}}.}} \label{fig: sketch_fig} \end{figure*} \subsection{Nature of WUDA} Specifically, there are five distributions involved in the WUDA setting: 1) a marginal distribution on source data, i.e., $p_{x_s}$ in Problem~\ref{def-1}; 2) a marginal distribution on target data, i.e., $p_{x_t}$ in Problem~\ref{def-1}; 3) an incorrect conditional distribution of label given $x_s$, $q(y_s|x_s)$; 4) a correct conditional distribution of label given $x_s$, $p(y_s|x_s)$; and 5) a correct conditional distribution of label given $x_t$, $p(y_t|x_t)$. Based on Problem~\ref{def-1} and Section \ref{Asec: noisy generation_S}, noisy source data $\tilde{D}_s$ are drawn from $\tilde{p}_s(x_s,{y}_s)=p_{x_s}(x_s)((1-\rho)p(y_s|x_s)+\rho q(y_s|x_s))$, where $\rho$ is the noise rate in source data. Namely, source data $\tilde{D}_s$ are a mixture of correct source data from $p_{x_s}(x_s)p(y_s|x_s)$ and incorrect data from $p_{x_s}(x_s)q(y_s|x_s)$. Target data ${D}_t$ are drawn from $p_{x_t}$. In the WUDA setting, we aim to train a classifier with $\tilde{D}_s$ and ${D}_t$. This classifier is expected to accurately annotate data from $p_{x_t}$, i.e., to accurately simulate distribution 5). {This paper considers WUDA under the common assumption used in the label-noise field, i.e., the $i^{th}$ element in the diagonal of the noise transition matrix is greater than other elements in the $i^{th}$ row or $i^{th}$ column of the noise transition matrix, where $i=1,2,\dots,K$ \mbox{\cite{Tongliang_TPAMI}}. Therefore, the proposed approach is able to solve any WUDA problem under the above assumption in principle.} \subsection{WUDA ruins UDA methods} We take a simple example to illustrate the phenomenon that WUDA ruins representative UDA methods.
In Section~\ref{Asec:theory:WUDA_ruins_UDA}, we theoretically analyze the reason for this phenomenon. We corrupt source data using symmetry flipping \cite{Patrini_CVPR2017} and pair flipping \cite{Co-teaching}, which are two representative ways to corrupt true labels. {Precise definitions of symmetry flipping ($Q_S$) and pair flipping ($Q_P$) are presented below, where $\rho$ is the noise rate and $K$ is the number of labels.} \begin{align} \label{eq:QS} Q_S = \begin{bmatrix} 1-\rho & \frac{\rho}{K-1} & \dots & \frac{\rho}{K-1} & \frac{\rho}{K-1}\\ \frac{\rho}{K-1} & 1-\rho & \frac{\rho}{K-1} & \dots & \frac{\rho}{K-1}\\ \vdots & & \ddots & & \vdots\\ \frac{\rho}{K-1} & \dots & \frac{\rho}{K-1} & 1-\rho & \frac{\rho}{K-1}\\ \frac{\rho}{K-1} & \frac{\rho}{K-1} & \dots & \frac{\rho}{K-1} & 1-\rho \end{bmatrix}_{K\times K}, \end{align} \begin{align} \label{eq:QP} Q_P = \begin{bmatrix} 1-\rho & \rho & 0 & \dots & 0\\ 0 & 1-\rho & \rho & & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & & & 1-\rho & \rho \\ \rho & 0 & \dots & 0 & 1-\rho \\ \end{bmatrix}_{K\times K}. \end{align} {For example, if $\rho=0.4$ and $K=11$, for the symmetry flipping, the probability that label ``0'' is corrupted to label ``1'' is $\rho/(K-1)=0.04$. For the pair flipping, the probability that label ``0'' is corrupted to label ``1'' is $\rho=0.4$.} To instantiate noisy source data and target data, we leverage \emph{MNIST} and \emph{SYND} (see Figure~\ref{fig:V_dig}), respectively (i.e., $K=10$). We first construct two WUDA tasks with symmetry-flip noise: corrupted \emph{SYND}$\rightarrow$\emph{MNIST} (\emph{S}$\rightarrow$\emph{M}) and corrupted \emph{MNIST}$\rightarrow$\emph{SYND} (\emph{M}$\rightarrow$\emph{S}). In Figure~\ref{fig:UDA_failure}-(a), we report accuracy of representative UDA methods on unlabeled target data when the noise rate $\rho$ of SD changes from $5\%$ to $70\%$.
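For concreteness, the two transition matrices $Q_S$ and $Q_P$ can be built and applied directly; the following NumPy sketch is our own illustrative code (helper names are hypothetical), not the authors' implementation:

```python
import numpy as np

def symmetry_flip_matrix(K, rho):
    """Q_S: off-diagonal mass rho spread evenly over the K-1 wrong labels."""
    Q = np.full((K, K), rho / (K - 1))
    np.fill_diagonal(Q, 1 - rho)
    return Q

def pair_flip_matrix(K, rho):
    """Q_P: each label i flips only to its successor (i+1) mod K."""
    return (1 - rho) * np.eye(K) + rho * np.roll(np.eye(K), 1, axis=1)

def corrupt(labels, Q, rng):
    """Draw a noisy label for each clean label from row Q[y]."""
    K = Q.shape[0]
    return np.array([rng.choice(K, p=Q[y]) for y in labels])

rng = np.random.default_rng(1)
K, rho = 10, 0.4
y = rng.integers(0, K, size=5_000)
y_sym = corrupt(y, symmetry_flip_matrix(K, rho), rng)
print(round(np.mean(y_sym != y), 2))   # empirical noise rate, close to rho
```

Each row of either matrix sums to one, so `rng.choice` treats it as a valid conditional distribution over noisy labels.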
It is clear that the target-domain accuracy of these representative UDA methods drops quickly when $\rho$ increases. This means that WUDA ruins representative UDA methods. Then, we construct another two WUDA tasks with pair-flip noise. In Figure~\ref{fig:UDA_failure}-(b), we report target-domain accuracy when the noise rate $\rho$ of SD changes from $5\%$ to $45\%$. Again, WUDA still ruins representative UDA methods. Note that, in practice, pair-flip noise is much harder than symmetry-flip noise, and its noise rate cannot exceed $50\%$ \cite{Co-teaching}. However, the proposed Butterfly network (abbreviated as B-Net, Figure~\ref{fig: sketch_fig}) performs robustly when $\rho$ increases (blue lines in Figure~\ref{fig:UDA_failure}). In Section~\ref{sec:ana_WUDA}, we will analyze the WUDA problem in theory and show why WUDA provably ruins all UDA methods and why the two-step approach is a compromise solution. Then, Section~\ref{sec:ana_address_WUDA} presents how to address the WUDA problem in principle. \section{Analysis of WUDA problem} \label{sec:ana_WUDA} {In this section, we analyze the WUDA problem from a theoretical viewpoint and show its difficulty. Complete proofs of lemmas and theorems are given in the Appendix. In the main content, we provide the main ideas of proving these theoretical results (i.e., \emph{Proof (sketch)}).} \subsection{WUDA provably ruins UDA methods} \label{Asec:theory:WUDA_ruins_UDA} {Theoretically, we show that existing UDA methods cannot directly transfer useful knowledge from $\tilde{D}_s$ to $D_t$.
We first present the relation between $R_s(h)$ and $\tilde{R}_s(h)$.} \begin{mythm} \label{thm: risks relation} If $\tilde{p}_s(x_s,\tilde{y}_s)$ is generated by a transition matrix $Q$ as described in Section~\ref{Asec: noisy generation_Q}, we have \begin{equation} \label{eq: risks relation_Q_TM} \tilde{R}_s(h) = R_s(h) + \mathbb{E}_{p_{x_s}(x_s)}[\bm{\eta}^T(x_s)(Q-I)\bm{\ell}(h(x_s))], \end{equation} where $\bm{\ell}(h(x_s)) = [\ell(h(x_s),1),..., \ell(h(x_s),K)]^T$ and $\bm{\eta}(x_s)=[p_{Y_s|X_s}(1|x_s),..., p_{Y_s|X_s}(K|x_s)]^T$. If $\tilde{p}_s(x_s,\tilde{y}_s)$ is generated by sample selection as described in Section~\ref{Asec: noisy generation_S}, we have \begin{equation} \label{eq: risks relation} \tilde{R}_s(h) = (1-\rho)R_s(h) + \rho\mathbb{E}_{q_{x_s}(x_s)}[\bm{\eta_q}^T(x_s)\bm{\ell}(h(x_s))], \end{equation} where $\bm{\eta_q}(x_s)=[q_{Y_s|X_s}(1|x_s),..., q_{Y_s|X_s}(K|x_s)]^T$. \end{mythm} \begin{proofskt} For Eq.~(\ref{eq: risks relation_Q_TM}), we can prove it using the definition of the transition matrix in Section~\ref{Asec: noisy generation_Q} and the fact $\tilde{p}_s(x_s,\tilde{y}_s) = \tilde{p}_{\tilde{Y}_s|X_s}(\tilde{y}_s|x_s)p_{x_s}(x_s)$. For Eq.~\eqref{eq: risks relation}, we can prove it using Eq.~\eqref{eq: noise_observ_simple} and the definition of $R_s(h)$. \end{proofskt} \begin{myrem}\upshape \label{rem: tremondous noise assumption} In Eq.~(\ref{eq: risks relation}), $\mathbb{E}_{q_{x_s}(x_s)}[\bm{\eta_q}^T(x_s)\bm{\ell}(h(x_s))]$ represents the expected risk on the incorrect m.r.v. To ensure that we obtain useful knowledge from $\tilde{P}_s$, we need to avoid $\tilde{R}_s(h)\approx\mathbb{E}_{q_{x_s}(x_s)}[\bm{\eta_q}^T(x_s)\bm{\ell}(h(x_s))]$. Specifically, we assume: there is a constant $0<M_s<\infty$ such that $\mathbb{E}_{q_{x_s}(x_s)}[\bm{\eta_q}^T(x_s)\bm{\ell}(h(x_s))]\leq R_s(h)+M_s$.
\end{myrem} {Theorem {\ref{thm: risks relation}} shows that $\tilde{R}_s(h)$ equals ${R}_s(h)$ in only two cases: 1) $Q=I$ and $\rho=0$, or 2) some special combinations (e.g., special $p_{x_s}$, $q_{x_s}$, $Q$, $\bm{\eta}$ and $\ell$) make the second term in Eq.~({\ref{eq: risks relation_Q_TM}}) equal zero or make the second term in Eq.~({\ref{eq: risks relation}}) equal $\rho R_s(h)$. Case 1) means that source data are clean, which is unrealistic in the wild. Case 2) rarely happens, since it is difficult to find such special combinations when $p_{x_s}$, $q_{x_s}$, $Q$ and $\bm{\eta}$ are unknown. As a result, $\tilde{R}_s(h)$ differs essentially from $R_s(h)$. Then, following the proof techniques in \mbox{\cite{Ben_DA_bounds}}, we derive the upper bound of $R_t(h)$ below.} \begin{mythm} \label{thm:upper_bound_target} For any $h\in\mathcal{H}$, we have \begin{align} \label{eq:risk bound population} R_t(h,f_t) &\leq \underbrace{\tilde{R}_s(h)}_{(i)~\textbf {noisy-data risk}} +~~~~ \underbrace{|R_t(h,\tilde{f}_t) - \tilde{R}_s(h,\tilde{f}_t)|}_{(ii)~\textbf {discrepancy~between~distributions}}\nonumber \\ &~~~~+~ \underbrace{|R_s(h,\tilde{f}_t) - R_s(h)|}_{(iii)~\textbf {domain dissimilarity}} \nonumber \\ &~~~~+~\underbrace{|\tilde{R}_s(h)-R_s(h)|+|\tilde{R}_s(h,\tilde{f}_t) - R_s(h,\tilde{f}_t)|}_{(iv)~\textbf {noise~ effects~from~source~$\Delta_s$}}~\nonumber \\ &~~~~+~ \underbrace{|R_t(h,f_t) - R_t(h,\tilde{f}_t)|}_{(v)~\textbf {noise~ effects~from~target~$\Delta_t$}}. \end{align} \end{mythm} \begin{proofskt} For any $h\in\mathcal{H}$, we have \begin{align*} &~~~~~R_t(h,f_t) \nonumber \\ &= R_t(h,f_t) + \tilde{R}_s(h) - \tilde{R}_s(h) + R_s(h,f_t) - R_s(h,f_t) \nonumber \\ &=\tilde{R}_s(h) + R_t(h,f_t) - \tilde{R}_s(h,f_t) + R_s(h,f_t) - R_s(h) \nonumber \\ &~~~~~+R_s(h)- \tilde{R}_s(h)+ \tilde{R}_s(h,f_t) - R_s(h,f_t).
\end{align*} Since we do not know $f_t$, we substitute the following equations into the above equation, \begin{align*} R_t(h,f_t) = R_t(h,\tilde{f}_t) + R_t(h,f_t) - R_t(h,\tilde{f}_t), \\ \tilde{R}_s(h,f_t) = \tilde{R}_s(h,\tilde{f}_t) + \tilde{R}_s(h,f_t) - \tilde{R}_s(h,\tilde{f}_t), \\ R_s(h,f_t) = R_s(h,\tilde{f}_t) + R_s(h,f_t) - R_s(h,\tilde{f}_t), \end{align*} which proves this theorem. \end{proofskt} \begin{myrem}\upshape \label{rem: tremondous_noise_assumption_f} To ensure that we can gain useful knowledge from $\tilde{f}_t(x_t)$, we assume: there is a constant $0<M_t<\infty$ such that $\mathbb{E}_{q_{x_s}(x)}[\ell(h(x),\tilde{f}_t(x))]\leq R_s(h,\tilde{f}_t)+M_t$ and $\mathbb{E}_{q_{x_t}(x)}[\ell(h(x),\tilde{f}_t(x))]\leq R_t(h,f_t)+M_t$. {Since we do not have labels in the target domain, we also assume that there exists an $h\in\mathcal{H}$ such that $R_t(h,f_t)+{R}_s(h)$ is a small value. This assumption follows the common assumption of the UDA problem \mbox{\cite{Ben_DA_bounds}} and ensures that adaptation is possible.} \end{myrem} It is clear that the upper bound of $R_t(h,f_t)$, shown in Eq.~(\ref{eq:risk bound population}), has five terms. However, existing UDA methods only focus on minimizing $(i)$ + $(ii)$ \cite{DANN_JMLR,Ghifary2017,Long_DAN} or $(i)$ + $(ii)$ + $(iii)$ \cite{KSaito_ICML17}, which ignores terms $(iv)$ and $(v)$ (i.e., $\Delta = \Delta_s + \Delta_t$). Thus, existing UDA methods cannot handle WUDA well. \subsection{Two-step approach is a compromise solution} \label{sec:two-step} To reduce noise effects, a straightforward solution is a two-step approach. For example, in the first step, we can train a classifier with noisy source data using co-teaching \cite{Co-teaching} and use this classifier to annotate pseudo labels for source data. In the second step, we use ATDA \cite{KSaito_ICML17} to train a target-domain classifier with pseudo-labeled source data and pseudo-labeled target data.
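To make the two-step idea concrete, consider a toy stand-in in which a simple nearest-centroid relabeling plays the role of the label-noise algorithm of step one (everything below, including the synthetic Gaussian source data, is purely illustrative and not the methods cited above):

```python
import numpy as np

rng = np.random.default_rng(2)
n_per_class, rho = 500, 0.2

# Toy "source domain": two well-separated Gaussian classes.
y_true = np.repeat([0, 1], n_per_class)
means = np.where(y_true == 0, -2.0, 2.0)[:, None]
x = rng.normal(loc=means, scale=0.5, size=(2 * n_per_class, 2))

# Observed noisy source labels (symmetric flips at rate rho).
flip = rng.random(2 * n_per_class) < rho
y_noisy = np.where(flip, 1 - y_true, y_true)

# Step 1 stand-in: fit class centroids on the noisy labels, then
# relabel every source point by its nearest centroid (pseudo labels).
centroids = np.stack([x[y_noisy == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
y_pseudo = dists.argmin(axis=1)

# Step 2 would then feed (x, y_pseudo) to a UDA method such as ATDA.
print(np.mean(y_noisy == y_true), np.mean(y_pseudo == y_true))
```

Even in this favourable toy case, the pseudo labels produced by step one are cleaner but not guaranteed to be perfectly clean.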
Nonetheless, the pseudo-labeled source data are still noisy. Let the labels $\tilde{y}_s$ of noisy source data be replaced with pseudo labels $\tilde{y}^{\prime}_s$ after using co-teaching. The noise effects $\Delta$ then become the pseudo-label effects $\Delta_p$ as follows. \begin{equation} \setlength{\abovedisplayskip}{5pt} \setlength{\belowdisplayskip}{5pt} \label{eq: pseudo-label effects} \Delta_p = \underbrace{|\tilde{R}'_s(h)-R_s(h)|+|\tilde{R}'_s(h,\tilde{f}_t) - R_s(h,\tilde{f}_t)|}_{\textbf {pseudo-labeled-source~effects~$\Delta_s^{\prime}$}}+\Delta_t, \end{equation} where $\tilde{R}'_s(h)$ and $\tilde{R}'_s(h,\tilde{f}_t)$ correspond to $\tilde{R}_s(h)$ and $\tilde{R}_s(h,\tilde{f}_t)$ in $\Delta_s$. It is clear that the difference between $\Delta_p$ and $\Delta$ is $\Delta_s^{\prime}-\Delta_s$. {The left term in $\Delta_s^{\prime}$ may be less than that in $\Delta_s$ due to a label-noise algorithm (e.g., co-teaching \mbox{\cite{Co-teaching}}), but the right term in $\Delta_s^{\prime}$ may be higher than that in $\Delta_s$ since a label-noise algorithm does not consider minimizing it.} Thus, it is hard to say whether $\Delta_s^{\prime}<\Delta_s$ (i.e., $\Delta_p<\Delta$). This means that the two-step approach may not really reduce noise effects. \section{How to address WUDA in principle} \label{sec:ana_address_WUDA} \label{sec:butter_eliminate_noise} To eliminate noise effects $\Delta$, we aim to select correct data simultaneously from noisy source data and pseudo-labeled target data. In theory, we prove that noise effects will be eliminated if we can select correct data with a high probability. Let $\rho_{01}^s$ represent the probability that incorrect data are selected from noisy source data, and $\rho_{01}^t$ represent the probability that incorrect data are selected from pseudo-labeled target data.
Theorem \ref{thm: epsilon_effects} shows that $\Delta\rightarrow0$ if $\rho^s_{01}\rightarrow0$ and $\rho^t_{01}\rightarrow0$ and presents a new upper bound of $R_t(h,f_t)$. Before stating Theorem~\ref{thm: epsilon_effects}, we first present two m.r.v.s below. \begin{itemize} \item $(X_s,Y_s,V_s)$ defined on $\mathcal{X}\times \mathcal{Y} \times \mathcal{V}$ with the probability density $p_s^{\text{po}}(x_s,y_s,v_s)$, where $\mathcal{V}=\{0,1\}$; \item $(X_t,V_t)$ defined on $\mathcal{X} \times \mathcal{V}$ with the probability density ${p}_t^{\text{po}}(x_t,v_t)$, where $\mathcal{V}=\{0,1\}$. $p_{V_t}^{\text{po}}(v_t)$ is the marginal density of ${p}_t^{\text{po}}(x_t,v_t)$. \end{itemize} The variable $V_s$ has been introduced in Section~\ref{Asec: noisy generation_S}. Similar to $V_s$, $V_t$ is also a \emph{perfect-selection random variable}. Data drawn from the distribution of $(X_t,V_t)$ can be regarded as a pool that mixes the correct ($v_t=1$) and incorrect ($v_t=0$) pseudo-labeled target data. Namely, $V_t=1$ means $f_t(x_t)=\tilde{f}_t(x_t)$ and $V_t=0$ means $f_t(x_t)\neq \tilde{f}_t(x_t)$. It is clear that a higher value of $p_{V_t}^{\text{po}}(V_t=1)$ means that $\tilde{f}_t$ is more like $f_t$. In the following, we use $\rho_{v_t}$ to represent $p_{V_t}^{\text{po}}(v_t=0)$. Note that both perfect-selection random variables $V_s$ and $V_t$ cannot be observed and we can only observe the following m.r.v.s. \begin{itemize} \item $(X_s,Y_s,U_s)$ defined on $\mathcal{X}\times \mathcal{Y} \times \mathcal{V}$ with the probability density $\tilde{p}_s^{\text{po}}(x_s,y_s,u_s)$; \item $(X_t,U_t)$ defined on $\mathcal{X} \times \mathcal{V}$ with the probability density $\tilde{p}_t^{\text{po}}(x_t,u_t)$. $\tilde{p}_{U_t}^{\text{po}}(u_t)$ is the marginal density of $\tilde{p}_t^{\text{po}}(x_t,u_t)$. \end{itemize} The $U_s$ and $U_t$ are \textit{algorithm-selection random variables}.
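As a toy illustration of the gap between perfect selection and algorithm selection (our construction, not part of the paper), the following Monte Carlo sketch simulates a pool of pseudo-labeled target points with a hidden correctness flag $v$ and a noisy selection flag $u$, and estimates $\rho_{01}^t=\Pr(V_t=0\,|\,U_t=1)$; all probability values below are made up.

```python
# Toy simulation (illustrative only): estimate rho_01^t, the probability
# that a point selected by the algorithm (u = 1) is actually incorrect
# (v = 0).  All probabilities below are made-up toy values.
import random

def estimate_rho01(n=100_000, p_correct=0.8,
                   p_select_if_correct=0.9, p_select_if_incorrect=0.1,
                   seed=0):
    rng = random.Random(seed)
    n_sel = n_sel_incorrect = 0
    for _ in range(n):
        v = 1 if rng.random() < p_correct else 0           # hidden V_t
        p_sel = p_select_if_correct if v else p_select_if_incorrect
        u = 1 if rng.random() < p_sel else 0               # observed U_t
        if u:
            n_sel += 1
            n_sel_incorrect += v == 0
    return n_sel_incorrect / n_sel  # empirical Pr(V_t = 0 | U_t = 1)
```

With these toy values the exact conditional probability is $0.2\cdot0.1/(0.8\cdot0.9+0.2\cdot0.1)\approx0.027$, and the estimate approaches it as $n$ grows.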
Data drawn from the distribution of $(X_s,Y_s,U_s)$ can be regarded as a pool that mixes the selected ($u_s=1$) and unselected ($u_s=0$) noisy source data. Data drawn from the distribution of $(X_t,U_t)$ can be regarded as a pool that mixes the selected ($u_t=1$) and unselected ($u_t=0$) pseudo-labeled target data. We can obtain observations of $(X_s,Y_s,U_s)$ and $(X_t,U_t)$ using an algorithm that selects correct data. After executing the algorithm, we can obtain observations $\{x_{si},\tilde{y}_{si},u_{si}\}_{i=1}^{n_s}$ and $\{x_{ti},u_{ti}\}_{i=1}^{n_t}$. Based on $(X_s,Y_s,U_s)$ and $(X_t,U_t)$, we can define the following expected risks, \begin{align*} &\tilde{R}^{\text{po}}_s(h,u_s)=(1-\rho_{u_s})^{-1}\mathbb{E}_{\tilde{p}_s^{\text{po}}(x_s,y_s,u_s)}[u_s\ell(h(x_s),y_s)], \\ &\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,u_t)=(1-\rho_{u_t})^{-1}\mathbb{E}_{\tilde{p}_t^{\text{po}}(x_t,u_t)}[u_t\ell(h(x_t),\tilde{f}_t(x_t))], \\ &\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)=(1-\rho_{u_s})^{-1}\mathbb{E}_{\tilde{p}_s^{\text{po}}(x_s,y_s,u_s)}[u_s\ell(h(x_s),\tilde{f}_t(x_s))], \end{align*} where $\rho_{u_s}=\tilde{p}_{U_s}^{\text{po}}(u_s=0)$ and $\rho_{u_t}=\tilde{p}_{U_t}^{\text{po}}(u_t=0)$. Since we can observe $(X_s,Y_s,U_s)$ and $(X_t,U_t)$, the empirical estimators of these three risks can be easily computed. Then, we define the following probabilities to describe the relation between perfect-selection random variables and algorithm-selection random variables, where $i,j=0,1$. \begin{itemize} \item $\rho_{ji}^s=\textnormal{Pr}(V_s=j|U_s=i)$ represents the probability of the event: $V_s=j$ given $U_s=i$, \item $\rho_{ji}^t=\textnormal{Pr}(V_t=j|U_t=i)$ represents the probability of the event: $V_t=j$ given $U_t=i$.
\end{itemize} \begin{myrem}\upshape Based on the above definitions, we know that 1) $\rho_{01}^s$ is the probability that incorrect data is selected from noisy source data, and 2) $\rho_{01}^t$ is the probability that incorrect data is selected from pseudo-labeled target data. \end{myrem} Using $\rho_{ji}^s$ and $\rho_{ji}^t$, we can show the relation between the probability densities of $(X_s,Y_s|V_s)$ and $(X_s,Y_s|U_s)$, and the relation between the probability densities of $(X_t|V_t)$ and $(X_t|U_t)$ as follows. \begin{align*} &\tilde{p}_{X_s,Y_s|U_s}^{\text{po}}(x_s,y_s|i) =~ \rho_{0i}^s{p}_{X_s,Y_s|V_s}^{\text{po}}(x_s,y_s|0) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+ \rho_{1i}^s{p}_{X_s,Y_s|V_s}^{\text{po}}(x_s,y_s|1), \\ &\tilde{p}_{X_t|U_t}^{\text{po}}(x_t|i) =~\rho_{0i}^t{p}_{X_t|V_t}^{\text{po}}(x_t|0)+ \rho_{1i}^t{p}_{X_t|V_t}^{\text{po}}(x_t|1). \end{align*} Since \begin{align*} & {p}_{X_s,Y_s|V_s}^{\text{po}}(x_s,y_s|1) = p_s(x_s,y_s), \\ & {p}_{X_s,Y_s|V_s}^{\text{po}}(x_s,y_s|0) = q_s(x_s,y_s), \\ &{p}_{X_t|V_t}^{\text{po}}(x_t|0) =p_{x_t}(x_t)1_A(x_t)/P_{x_t}(A) = q_{x_t}(x_t), \\ &{p}_{X_t|V_t}^{\text{po}}(x_t|1) =p_{x_t}(x_t)1_B(x_t)/P_{x_t}(B) = p_{x_t}^{\prime}(x_t), \end{align*} we have \begin{align} \label{eq: transit_u_s} \tilde{p}_{X_s,Y_s|U_s}^{\text{po}}(x_s,y_s|i) = \rho_{0i}^sq_s(x_s,y_s)+ \rho_{1i}^sp_s(x_s,y_s), \end{align} \begin{align} \label{eq: transit_u_t} \tilde{p}_{X_t|U_t}^{\text{po}}(x_t|i) = \rho_{0i}^tq_{x_t}(x_t)+ \rho_{1i}^tp_{x_t}^{\prime}(x_t). \end{align} \begin{myrem}\upshape Eq.~\eqref{eq: transit_u_s} and Eq.~\eqref{eq: transit_u_t} show that, if $\rho_{01}^s\rightarrow 0$ and $\rho_{01}^t\rightarrow 0$, we have 1) $\tilde{p}_{X_s,Y_s|U_s}^{\text{po}}(x_s,y_s|1)\rightarrow p_s(x_s,y_s)$ and 2) $\tilde{p}_{X_t|U_t}^{\text{po}}(x_t|1)\rightarrow p_{x_t}^{\prime}(x_t)$. However, we cannot prove the main theorem (Theorem~\ref{thm: epsilon_effects}) using 1) and 2), since we only deal with risks rather than densities (as in 1) and 2)).
\end{myrem} Next, we present a lemma to show the relation between $\tilde{R}_s^{\text{po}}(h,u_s)$ and $R_s(h)$. \begin{mylem} \label{lem: selection discrepancy} Given the m.r.v. $(X_s,Y_s,U_s)$ with the probability density $\tilde{p}^{\text{po}}_s(x_s,y_s,u_s)$ and Eq.~(\ref{eq: transit_u_s}), we have \begin{align} \label{eq: selection_risk} &|\tilde{R}_s^{\text{po}}(h,u_s)-R_s(h)|\nonumber \\ \leq &\rho_{01}^s\max \{\mathbb{E}_{q_s(x_s,y_s)}[\ell(h(x_s),y_s)], R_s(h)\}. \end{align} \end{mylem} \begin{proofskt} Based on the definition of $\tilde{R}_s^{\text{po}}(h,u_s)$ and the fact $\tilde{p}_s^{\text{po}}(x_s,y_s,u_s) = \tilde{p}_{X_s,Y_s|U_s}^{\text{po}}(x_s,y_s|1)\tilde{p}_{U_s}^{\text{po}}(1)$, $\tilde{R}_s^{\text{po}}(h,u_s)$ equals \begin{align*} \frac{\int_\mathcal{X}\sum_{y_s=1}^K\ell(h(x_s),y_s)\tilde{p}_{X_s,Y_s|U_s}^{\text{po}}(x_s,y_s|1)\tilde{p}_{U_s}^{\text{po}}(1)dx_s}{1-\rho_{u_s}}. \end{align*} Then, we can use the definition of $\rho_{u_s}$ and Eq.~\eqref{eq: transit_u_s} to prove this lemma. \end{proofskt} Similarly to Lemma \ref{lem: selection discrepancy}, we can obtain \begin{align} \label{eq: selection_risk_ts} &|\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)-R_s(h,\tilde{f}_t)|\nonumber \\ \leq &\rho^s_{01}\max \{\mathbb{E}_{q_{x_s}(x_s)}[\ell(h(x_s),\tilde{f}_t(x_s))], R_s(h,\tilde{f}_t)\}. \end{align} Then, we give another lemma to show the relation between $\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)$ and $R_t(h,f_t)$. \begin{mylem} \label{lem: selection discrepancy_t} Given the m.r.v. $(X_t,U_t)$ with the probability density $\tilde{p}^{\text{po}}_t(x_t,u_t)$ and Eq.~(\ref{eq: transit_u_t}), if $\mathbb{E}_{p_{x_t}^{\prime}(x_t)}[\ell(h(x_t),f_t(x_t))]\leq R_t(h,f_t)+\rho_{01}^sM_t$, then we have \begin{align} \label{eq: selection_risk_t} &|\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)-R_t(h,{f}_t)|\nonumber \\ \leq &\rho^t_{01} \max\{\mathbb{E}_{q_{x_t}(x_t)}[\ell(h(x_t),\tilde{f}_t(x_t))],R_t(h,f_t)\}+\rho_{11}^t\rho_{01}^sM_t.
\end{align} \end{mylem} \begin{proofskt} According to the definition of $\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)$, we can unfold it to be \begin{align*} &~~~~~\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)\nonumber \\ & = (1-\rho_{u_t})^{-1}\int_\mathcal{X}\ell(h(x_t),\tilde{f}_t(x_t))\tilde{p}_{X_t|U_t}^{\text{po}}(x_t|1)\tilde{p}_{U_t}^{\text{po}}(1)dx_t. \end{align*} Then, using the definition of $\rho_{u_t}$, Eq.~(\ref{eq: transit_u_t}), the definition of $V_t$ ($f_t(x_t)=\tilde{f}_t(x_t)$ when $V_t=1$) and the assumption that $\mathbb{E}_{p_{x_t}^{\prime}(x_t)}[\ell(h(x_t),f_t(x_t))]\leq R_t(h,f_t)+\rho_{01}^sM_t$, we have \begin{align*} &~~~~\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)\nonumber \\ &\leq \rho_{01}^t\mathbb{E}_{q_{x_t}(x_t)}[\ell(h(x_t),\tilde{f}_t(x_t))] + \rho_{11}^t(R_t(h,f_t)+\rho_{01}^sM_t). \end{align*} Finally, we can upper bound $|\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)-R_t(h,{f}_t)|$ using the above inequality, which proves this lemma. \end{proofskt} \begin{myrem}\upshape \label{rem:last_assumption} In Lemma \ref{lem: selection discrepancy_t}, $\mathbb{E}_{p_{x_t}^{\prime}(x_t)}[\ell(h(x_t),f_t(x_t))]\leq R_t(h,f_t)+\rho_{01}^sM_t$ means that the expected risk restricted to $B$ (i.e., $\mathbb{E}_{p_{x_t}^{\prime}(x_t)}[\ell(h(x_t),f_t(x_t))]$) can represent the true risk $R_t(h,f_t)$ when $\rho_{01}^s$ is small. If this assumption fails, we cannot gain useful knowledge from $\tilde{f}_t$ even when we can select correct data from pseudo-labeled target data ($\rho_{01}^t=0$). \end{myrem} Inequalities (\ref{eq: selection_risk}), (\ref{eq: selection_risk_ts}) and (\ref{eq: selection_risk_t}) show that if we can perfectly avoid annotating incorrect data as ``correct'' (i.e., $\rho^s_{01}=0$ and $\rho^t_{01}=0$), we have $\tilde{R}_s^{\text{po}}(h,u_s) = R_s(h)$, $\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)=R_s(h,\tilde{f}_t)$ and $\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)=R_t(h,{f}_t)$.
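The Lemma-\ref{lem: selection discrepancy}-style bound can be sanity-checked numerically on a toy discrete instance (our construction, with made-up loss values): conditional on $U_s=1$ the data follow the mixture in Eq.~\eqref{eq: transit_u_s}, so the selected-data risk is the corresponding mixture of $R_s(h)$ and $\mathbb{E}_{q_s}[\ell]$.

```python
# Toy numerical check (our construction) of the Lemma-1-style bound
#   |R~_s^po(h, u_s) - R_s(h)| <= rho_01^s * max{E_q[loss], R_s(h)}.
# Conditional on U_s = 1 the data are the mixture
# rho_01^s * q + rho_11^s * p, so the selected-data risk is the same
# mixture of the clean risk R_s(h) and the noisy-part risk E_q[loss].

def check_selection_bound(losses_clean, losses_noisy, rho01):
    rho11 = 1.0 - rho01
    r_clean = sum(losses_clean) / len(losses_clean)    # R_s(h)
    r_noisy = sum(losses_noisy) / len(losses_noisy)    # E_q[loss]
    r_selected = rho01 * r_noisy + rho11 * r_clean     # R~_s^po(h, u_s)
    gap = abs(r_selected - r_clean)
    bound = rho01 * max(r_noisy, r_clean)
    return gap, bound

gap, bound = check_selection_bound([0.1, 0.2, 0.3], [1.5, 2.0, 2.5], 0.05)
assert gap <= bound  # the bound holds on this toy instance
```

On this instance the gap is $0.05\cdot|2.0-0.2|=0.09$, below the bound $0.05\cdot2.0=0.1$, and both shrink linearly as $\rho_{01}^s\rightarrow0$.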
Nonetheless, $\rho^s_{01}$ and $\rho^t_{01}$ never equal zero, and $\mathbb{E}_{q_s(x_s,y_s)}[\ell(h(x),y)]$, $\mathbb{E}_{q_{x_s}(x_s)}[\ell(h(x_s),\tilde{f}_t(x_s))]$ and $\mathbb{E}_{q_{x_t}(x_t)}[\ell(h(x_t),\tilde{f}_t(x_t))]$ may equal $+\infty$ for some $h\in \mathcal{H}$. Namely, even when $\rho^s_{01}$ and $\rho^t_{01}$ are very small, $\tilde{R}_s^{\text{po}}(h,u_s)$ may still be far from $R_s(h)$. Thus, without proper assumptions, it is useless to use $(X_s,Y_s,U_s)$ to represent $(X_s,Y_s|V_s=1)$. In Theorem \ref{thm: epsilon_effects}, we prove that, under the assumptions in Remarks \ref{rem: tremondous noise assumption}, \ref{rem: tremondous_noise_assumption_f} and Lemma~\ref{lem: selection discrepancy_t}, $\tilde{R}_s^{\text{po}}(h,u_s) \rightarrow R_s(h)$, $\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)\rightarrow R_s(h,\tilde{f}_t)$ and $\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)\rightarrow R_t(h,{f}_t)$ if $\rho^s_{01} \rightarrow 0$ and $\rho^t_{01} \rightarrow 0$. Moreover, we give a new upper bound of $R_t(h,f_t)$. In the new upper bound, we show that $\Delta\rightarrow0$ if $\rho^s_{01}\rightarrow0$ and $\rho^t_{01}\rightarrow0$. \begin{mythm} \label{thm: epsilon_effects} Given two m.r.v.s $(X_s,Y_s,U_s)$ defined on $\mathcal{X} \times \mathcal{Y} \times \mathcal{V}$ and $(X_t,U_t)$ defined on $\mathcal{X} \times \mathcal{V}$, under the assumptions in Remark \ref{rem: tremondous noise assumption}, Remark \ref{rem: tremondous_noise_assumption_f} and Lemma~\ref{lem: selection discrepancy_t}, $\forall \epsilon \in (0,1)$, there exist $\delta_s$ and $\delta_t$ such that, if $\rho^s_{01}<\delta_s$ and $\rho^t_{01}<\delta_t$, then for any $h\in\mathcal{H}$ we have \begin{equation} \setlength{\abovedisplayskip}{4pt} \setlength{\belowdisplayskip}{4pt} \label{eq:3epsilon} |\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s) -R_s(h,\tilde{f}_t)| + |\tilde{R}^{\text{po}}_s(h,u_s)-R_s(h)|< 2\epsilon.
\end{equation} Moreover, we will have \begin{align} \label{eq:risk bound NEW} R_t(h,f_t) &\leq \underbrace{\tilde{R}^{\text{po}}_s(h,u_s)}_{(i)~\textbf {noisy-data risk}} +~\underbrace{|\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,u_t) - \tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)|}_{(ii)~\textbf {discrepancy~between~distributions}} \nonumber \\ &~~~~+~ \underbrace{|R_s(h,\tilde{f}_t) - R_s(h)|}_{(iii)~\textbf {domain dissimilarity}}+~\underbrace{2\epsilon}_{(iv)~\textbf {noise~effects ~$\Delta_s$}} \nonumber \\ &~~~~+~\underbrace{2\epsilon}_{(v)~\textbf {noise~effects ~$\Delta_t$}}. \end{align} \end{mythm} \begin{proof} We first prove upper bounds of $|\tilde{R}_s^{\text{po}}(h,u_s)-R_s(h)|$, $|\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)- R_s(h,\tilde{f}_t)|$ and $|\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)-R_t(h,{f}_t)|$ under the assumptions in Theorem \ref{thm: epsilon_effects}. Based on Lemma \ref{lem: selection discrepancy}, \begin{align} \label{eq:first2_epsilon} &~~~~|\tilde{R}_s^{\text{po}}(h,u_s) - R_s(h)|\nonumber \\ & = |\rho_{01}^s\mathbb{E}_{q_s(x_s,y_s)}[\ell(h(x_s),y_s)] - (1-\rho_{11}^s)R_s(h)|\nonumber \\ & \leq |\rho_{01}^s(R_s(h)+M_s) - \rho_{01}^sR_s(h)|\nonumber \\ & = \rho_{01}^sM_s. \end{align} Similarly, we have \begin{align} \label{eq:second2_epsilon} |\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)- R_s(h,\tilde{f}_t)| \leq \rho_{01}^sM_t, \end{align} \begin{align} \label{eq:third2_epsilon} |\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)-R_t(h,{f}_t)| \leq \rho_{01}^tM_t + \rho_{11}^t\rho_{01}^sM_t. \end{align} Since $M_s$ and $M_t$ are positive constants, it is clear that $\tilde{R}_s^{\text{po}}(h,u_s) \rightarrow R_s(h)$, $\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)\rightarrow R_s(h,\tilde{f}_t)$ and $\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)\rightarrow R_t(h,{f}_t)$ when $\rho^s_{01} \rightarrow 0$ and $\rho^t_{01} \rightarrow 0$. Specifically, $\forall \epsilon \in (0,1)$, let $\delta_t=\epsilon/M_t$ and $\delta_s=\epsilon/\max\{M_s, M_t\}$.
When $\rho^s_{01}<\delta_s$ and $\rho^t_{01}<\delta_t$, we have \begin{align} \label{eq:first_epsilon} |\tilde{R}_s^{\text{po}}(h,u_s) - R_s(h)|+|\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)- R_s(h,\tilde{f}_t)| < 2\epsilon \end{align} \begin{align} \label{eq:second_epsilon} |\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)-R_t(h,{f}_t)| < 2\epsilon. \end{align} Hence, we prove Eq.~(\ref{eq:3epsilon}). In the following, we give a new upper bound of $R_t(h,f_t)$. Recalling Theorem \ref{thm:upper_bound_target}, we replace 1) $\tilde{R}_s(h)$ with $\tilde{R}_s^{\text{po}}(h,u_s)$, 2) $\tilde{R}_s(h,\tilde{f}_t)$ with $\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)$, 3) $R_t(h,\tilde{f}_t)$ with $\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)$. Then, we have \begin{align} \label{eq:risk_bound_NEW:basic} R_t(h,f_t) &\leq {\tilde{R}_s^{\text{po}}(h,u_s)} +{|\tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t) - \tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s)|} \nonumber \\ &~~~~+ {|R_s(h,\tilde{f}_t) - R_s(h)|+|\tilde{R}_s^{\text{po}}(h,u_s)-R_s(h)|} \nonumber \\ &~~~~+~{|\tilde{R}_s^{\text{po}}(h,\tilde{f}_t,u_s) - R_s(h,\tilde{f}_t)|}\nonumber \\ &~~~~+{|R_t(h,f_t) - \tilde{R}_t^{\text{po}}(h,\tilde{f}_t,u_t)|}. \end{align} Since $\rho^s_{01}<\delta_s$ and $\rho^t_{01}<\delta_t$, based on Eqs.~\eqref{eq:first_epsilon} and \eqref{eq:second_epsilon}, we have \begin{align*} R_t(h,f_t) &\leq \underbrace{\tilde{R}^{\text{po}}_s(h,u_s)}_{(i)~\textbf {noisy-data risk}} +~\underbrace{|\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,u_t) - \tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)|}_{(ii)~\textbf {discrepancy~between~distributions}} \nonumber \\ &~~~~+~ \underbrace{|R_s(h,\tilde{f}_t) - R_s(h)|}_{(iii)~\textbf {domain dissimilarity}}+~\underbrace{2\epsilon}_{(iv)~\textbf {noise~effects ~$\Delta_s$}} \nonumber \\ &~~~~+~\underbrace{2\epsilon}_{(v)~\textbf {noise~effects ~$\Delta_t$}}. \end{align*} Hence, we prove this theorem.
\end{proof} Theorem \ref{thm: epsilon_effects} shows that if the selected data have a high probability of being correct ($\rho^s_{01}\rightarrow0$ and $\rho^t_{01}\rightarrow0$), then $\Delta_s$ and $\Delta_t$ approach zero, meaning that noise effects are eliminated. This motivates us to find a reliable way to select correct data from noisy source data and pseudo-labeled target data, and to propose the Butterfly framework for the WUDA problem. \begin{myrem}\upshape {Note that, since Theorems~{\ref{thm:upper_bound_target}} and {\ref{thm: epsilon_effects}} hold for any hypothesis and any data distributions, the bounds in both theorems are loose and pessimistic. However, both theorems are proposed to show which factors we should take care of in the WUDA problem and both theorems point out the major difference between WUDA and UDA. From this perspective, both theorems are very important for positioning and understanding the WUDA problem.} \end{myrem} \vspace{-1.5em} \section{Butterfly: Towards robust one-step approach}\label{sec:Butterfly_net} This section presents Butterfly to solve the WUDA problem. \vspace{-0.5em} \subsection{What is the Principle-guided Solution?} \label{sec:principle_butterfly} \label{sec:principle_rule} Guided by Theorem \ref{thm: epsilon_effects}, a robust approach should check high-correctness data out (meaning $\rho^s_{01}\rightarrow0$ and $\rho^t_{01}\rightarrow0$). This checking process drives terms $(iv)$ and $(v)$ (i.e., $2\epsilon+2\epsilon$) to $0$. Then, we can obtain gradients of $\tilde{R}^{\text{po}}_s(h,u_s)$, $\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)$ and $\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,u_t)$ w.r.t. parameters of $h$ and use these gradients to minimize them, which minimizes $(i)$ and $(ii)$ as $(i)+(ii)\leq \tilde{R}^{\text{po}}_s(h,u_s)+\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)+\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,u_t)$. Note that $(iii)$ cannot be directly minimized since we cannot pinpoint clean source data.
However, following \cite{KSaito_ICML17}, we can indirectly minimize $(iii)$ via minimizing $\tilde{R}^{\text{po}}_s(h,u_s) + \tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)$, as $(iii)\leq R_s(h,\tilde{f}_t) + R_s(h) \leq \tilde{R}^{\text{po}}_s(h,u_s)+\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)+2 \epsilon$, where the last inequality follows Eq.~(\ref{eq:3epsilon}). This means that a robust approach guided by Theorem \ref{thm: epsilon_effects} can minimize all terms in the right side of inequality in Eq.~(\ref{eq:risk bound NEW}). \subsection{Dual-checking principle} \label{Sec:dualCheckingP} {\textbf{Memorization effects of deep networks.} Recently, an interesting observation for deep networks is that they can memorize easy samples first, and gradually adapt to hard samples as training epochs increase \mbox{\cite{arpit2017closer}}. Namely, although deep networks can fit everything (e.g., mislabeled data) in the end, they \emph{learn patterns first} \mbox{\cite{arpit2017closer}}: this suggests deep networks can gradually memorize the data, moving from regular data to irregular data such as outliers. To utilize these memorization effects, previous studies have shown that we can regard small-loss data as correct ones (also known as the \emph{small-loss trick}). Then we can obtain a good classifier that is trained with the small-loss data \mbox{\cite{jiang2017mentornet}}.} \vspace{-0.5em} \newline \newline {\textbf{Co-teaching learning paradigm.} However, if we only use the small-loss trick to select correct data (like \mbox{\cite{jiang2017mentornet}}), we will get accumulated errors caused by sample-selection bias \mbox{\cite{Co-teaching}}. Therefore, researchers also consider a new deep learning paradigm called \emph{co-teaching}, where we train two deep networks simultaneously, and let them \emph{teach each other} \mbox{\cite{Co-teaching}}.
Based on this novel learning paradigm, we can effectively reduce the negative effects from the accumulated errors caused by sample-selection bias.} \vspace{-0.5em} \newline \newline {\textbf{Dual-checking principle.} Motivated by Section~{\ref{sec:principle_rule}}, we propose the \emph{dual-checking principle} (DCP): we need to check high-correctness data out in the source and target domains simultaneously. According to the memorization effects of deep networks, we realize DCP based on deep networks, the small-loss trick and the co-teaching learning paradigm (i.e., the Butterfly introduced below).} \subsection{Principle-guided Butterfly} To realize the robust approach for addressing the WUDA problem, we propose a Butterfly framework, which trains four networks divided into two branches (Figure \ref{fig: sketch_fig}). By using DCP, Branch-I checks which data are correct in the mixture domain, while Branch-II checks which pseudo-labeled target data are correct. To ensure that these checked data are highly correct, we apply the small-loss trick based on memorization effects of deep learning \cite{arpit2017closer}. After cross-propagating these checked data \cite{bengio2014evolving}, Butterfly can obtain high-quality DIR and TSR simultaneously in an iterative manner. Theoretically, Branch-I minimizes $(i)+(ii)+(iii)+(iv)$, while Branch-II minimizes $(ii)+(v)$. This means that Butterfly can minimize all terms in the right side of inequality in Eq.~(\ref{eq:risk bound NEW}).
\begin{algorithm*}[t] \small {\bfseries 1: Input} networks $F_1$, $F_2$, mini-batch $D$, learning rate $\eta$, remember rate $\alpha$; {\bfseries 2: Obtain} ${{\bm{u}}}_1 = \arg\min_{{{\bm{u}}}^{\prime}_1:\bm{1}{{\bm{u}}}^{\prime}_1>\alpha|D|}\mathcal{L}(\theta_1,{{\bm{u}}}^{\prime}_1;F_1, D)$; \hfill // Check high-correctness data {\bfseries 3: Obtain} ${{\bm{u}}}_2 = \arg\min_{{{\bm{u}}}^{\prime}_2:\bm{1}{{\bm{u}}}^{\prime}_2>\alpha|D|}\mathcal{L}(\theta_2,{{\bm{u}}}^{\prime}_2;F_2, D)$; \hfill // Check high-correctness data {\bfseries 4: Update} $\theta_1 = \theta_1 - \eta\nabla \mathcal{L}(\theta_1,{{\bm{u}}}_2;F_1, D)$; \hfill // Update $\theta_1$ {\bfseries 5: Update} $\theta_2 = \theta_2 - \eta\nabla \mathcal{L}(\theta_2,{{\bm{u}}}_1;F_2, D)$; \hfill // Update $\theta_2$ {\bfseries 6: Output $F_{1}$ and $F_{2}$} \caption{Checking($F_1$, $F_2$, $D$, $\eta$, $\alpha$) }\label{alg: Cross_update} \end{algorithm*} \begin{algorithm*}[!tp] \small {\bfseries 1: Input} $\tilde{D}_s$, $D_t$, learning rate $\eta$, fixed $\tau$, fixed $\tau_t$, epoch $T_k$ and $T_{max}$, iteration $N_{max}$, \# of pseudo-labeled target data $n_{init}$, max of $n_{init}$ $n_{t,max}^l$; {\bfseries 2: Initial} $F_1$, $F_2$, $F_{t1}$, $F_{t2}$, $\tilde{D}_t^l=\tilde{D}_s$, $\tilde{D}=\tilde{D}_s$, $n_t^l = n_{init}$; \For{$T = 1,2,\dots,T_{max}$}{ {\bfseries 3: Shuffle} training set $\tilde{D}$; \hfill // Noisy dataset \For{$N = 1,\dots,N_{max}$}{ {\bfseries 4: Fetch} mini-batch $\check{D}$ from $\tilde{D}$; {\bfseries 5: Update} Branch-I: $F_1,F_2$ = Checking($F_1,F_2,\check{D},\eta,R(T)$); \hfill // Check data in MD using Algorithm~\ref{alg: Cross_update} {\bfseries 6: Fetch} mini-batch $\check{D}_t$ from $\tilde{D}_t^l$; {\bfseries 7: Update} Branch-II: $F_{t1},F_{t2}$ = Checking($F_{t1},F_{t2},\check{D}_t,\eta,R_t(T)$); \hfill // Check data in TD using Algorithm~\ref{alg: Cross_update} } {\bfseries 8: Obtain} $\tilde{D}_t^l$ = Labeling$(F_1,F_2,D_t,n_t^l)$; \hfill // Label $D_t$, 
following \cite{KSaito_ICML17} {\bfseries 9: Obtain} $\tilde{D}=\tilde{D}_s \cup \tilde{D}_t^l$; \hfill // Update MD {\bfseries 10: Update} $n_t^l = \min\{T/20*n_t,n_{t,max}^l\}$; {\bfseries 11: Update} $R(T) = 1 - \min\{\frac{T}{T_k} \tau,\tau\}$, $R_t(T) = 1 - \min\{\frac{T}{T_k} \tau_t,\tau_t\}$; } {\bfseries 12: Output} $F_{t1}$ and $F_{t2}$ \caption{Butterfly Framework: quadruple training for WUDA problem}\label{alg: ButterNET} \end{algorithm*} \subsection{Loss function in Butterfly} According to $\tilde{R}^{\text{po}}_s(h,u_s)$, $\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,u_t)$ and $\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,u_s)$ defined in Section~\ref{sec:ana_address_WUDA}, the four networks trained by Butterfly share the same loss function but with different inputs. \begin{equation} \setlength{\abovedisplayskip}{5pt} \setlength{\belowdisplayskip}{5pt} \label{eq: loss_butterfly} \mathcal{L}(\theta,\bm{u};F,D) =\frac{1}{\sum_{i=1}^n{u}_{i}}\sum_{i=1}^nu_{i}\ell(F(x_i),\check{y}_i), \end{equation} where $n$ is the batch size (i.e., $n=|D|$), and $F$ represents a network (e.g., $F_1,F_2,F_{t1}$ and $F_{t2}$). $D=\{(x_i,\check{y}_i)\}_{i=1}^n$ is a mini-batch for training a network, where $\{x_i,\check{y}_i\}_{i=1}^n$ could be data in MD or TD (Figure~\ref{fig: sketch_fig}), and $\theta$ represents parameters of $F$ and $\bm{u}=[u_1, ..., u_n]^T$ is an $n$-by-$1$ vector whose elements equal $0$ or $1$. For two networks in Branch-I, following \cite{KSaito_ICML17}, we also add a regularizer $|\theta_{f11}^T \theta_{f21}|$ in their loss functions, where $\theta_{f11}$ and $\theta_{f21}$ are weights of the first fully-connected layer of $F_1$ and $F_2$. With this regularizer, $F_1$ and $F_2$ will learn from different features. \vspace{-0.5em} \newline \newline \textbf{Nature of the loss $\mathcal{L}$.} {In the loss function $\mathcal{L}$, we have $n$ samples: $\{(x_i, \check{y}_i)\}_{i=1}^n$.
For the $i^{th}$ sample, we will compute its cross-entropy loss (i.e., $\ell(F(x_i),\check{y}_i)$), and we will denote this sample as ``selected'' if $u_i = 1$. Thus, the nature of $\mathcal{L}$ is actually the average value of the cross-entropy losses of these ``selected'' samples. Note that we need to set a constraint to prevent $\sum_{i=1}^nu_{i}=0$ in $\mathcal{L}$, which means that we should select at least one sample to compute $\mathcal{L}$.} \subsection{Training procedures of Butterfly} \label{sec:butterfly_net_train} {This subsection will first present the checking process in Butterfly (Algorithm~{\ref{alg: Cross_update}}). Then, the full training procedure of Butterfly (Algorithm~{\ref{alg: ButterNET}}) will be introduced in detail.} \subsubsection{Checking process in Butterfly (Algorithm~\ref{alg: Cross_update})} We first obtain four inputs: 1) networks $F_1$ and $F_2$, 2) a mini-batch $D$, 3) the learning rate $\eta$, and 4) the remember rate $\alpha$ (line 1). Then, we will obtain the best $\bm{u}_1$ by solving a minimization problem (line 2). $\mathcal{L}$ represents the loss function defined in Eq.~(\ref{eq: loss_butterfly}). $\theta_1$ represents the parameters of the network $F_1$. Similarly, we will obtain the best $\bm{u}_2$ (line 3). $\theta_2$ represents the parameters of the network $F_2$. Next, $\theta_1$ and $\theta_2$ are updated using gradient descent, where the gradients are computed using a given optimizer (lines 4-5). Finally, we substitute the updated $\theta_1$ into $F_1$ and the updated $\theta_2$ into $F_2$ and output $F_1$ and $F_2$ (line 6). Note that lines 2-3 correspond to the small-loss trick mentioned in Section~\ref{Sec:dualCheckingP}, and lines 4-5 correspond to the co-teaching paradigm in Section~\ref{Sec:dualCheckingP}.
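The checking process just described can be sketched abstractly; here networks appear only through their per-example loss vectors and the gradient steps of lines 4-5 are elided, so this is an illustration of the small-loss selection and cross-update, not the authors' TensorFlow implementation.

```python
# Sketch of the Checking step (Algorithm 1): each network marks its
# small-loss samples, and the masks are cross-propagated, so that F1 is
# updated on F2's selection and vice versa.  Networks appear here only
# through their per-example losses; gradient updates are elided.

def small_loss_mask(losses, remember_rate):
    """Return u in {0,1}^n with u_i = 1 for the k smallest-loss samples."""
    k = max(1, int(remember_rate * len(losses)))  # select at least one
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    u = [0] * len(losses)
    for i in order[:k]:
        u[i] = 1
    return u

def checking_step(f1_losses, f2_losses, remember_rate):
    u1 = small_loss_mask(f1_losses, remember_rate)  # line 2
    u2 = small_loss_mask(f2_losses, remember_rate)  # line 3
    # Lines 4-5: a gradient step on theta_1 would use mask u2, and a
    # gradient step on theta_2 would use mask u1 (cross-update).
    return u2, u1  # masks applied when updating (F1, F2) respectively
```

Exchanging the masks is the co-teaching ingredient: each network learns only from samples its peer considers clean, which limits the accumulated error of self-selection.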
\begin{myrem}\upshape {In line $2$ or $3$ in Algorithm~\ref{alg: Cross_update}, we need to solve a minimization problem: $\min_{{{\bm{u}}}^{\prime}:\bm{1}{{\bm{u}}}^{\prime}>\alpha|D|}\mathcal{L}(\theta,{{\bm{u}}}^{\prime};F, D)$ and return the best $\bm{u}^{\prime}$ as $\bm{u}$ ($\bm{u}_1$ in line $2$ and $\bm{u}_2$ in line $3$). In this paragraph, we will show how to quickly solve this problem using a sorting algorithm. Recalling the nature of the loss $\mathcal{L}$, we know that $\mathcal{L}$ is the average value of cross-entropy losses of ``selected'' samples, and $\bm{1}\bm{u}^{\prime}$ is the number of these ``selected'' samples. Therefore, this minimization problem is equivalent to ``given a fixed $F$ ($F_1$ or $F_2$) and $n$ samples in $D$, how to select at least $k$ samples such that $\mathcal{L}$ is minimized'', where $k=\lceil \alpha|D| \rceil$. To solve this problem, we first use a sorting algorithm (top\_k function in TensorFlow) to sort these $n$ samples according to their cross-entropy losses $\ell(F(x_i),\check{y}_i)$. Then, we select $k$ samples with the smallest cross-entropy losses. Finally, let $u_i$ of these $k$ samples be $1$ and $u_i$ of the other samples be $0$, and we can get the best $\bm{u}=[u_1, \dots, u_n]$. The average value of the cross-entropy losses of these $k$ samples is the minimized value of $\mathcal{L}(\theta,\bm{u}^{\prime};F, D)$ under the constraint $\bm{1}\bm{u}^{\prime}>\alpha|D|$. It is clear that this solving process is equivalent to finding small-loss samples.} \end{myrem} \subsubsection{Training procedures of Butterfly (Algorithm~\ref{alg: ButterNET})} \textbf{Update parameters of networks.} First, we initialize training data for two branches ($\tilde{D}$ for Branch-I and $\tilde{D}^l_t$ for Branch-II), four networks ($F_1,F_2,F_{t1}$ and $F_{t2}$) and the number of pseudo labels (line $2$).
{In the first epoch ($T=1$), following \mbox{\cite{KSaito_ICML17}}, $\tilde{D}^l_t$ is the same as $\tilde{D}_s$ (i.e., we use noisy source data as pseudo-labeled target data), since we cannot annotate pseudo labels for target data when $T=1$.} After mini-batch $\check{D}$ is fetched from $\tilde{D}$ (line $4$), $F_1$ and $F_2$ check high-correctness data out and update their parameters (line $5$) using Algorithm \ref{alg: Cross_update}. Using similar procedures, $F_{t1}$ and $F_{t2}$ also update their parameters using Algorithm \ref{alg: Cross_update} (lines $6$-$7$). \vspace{-0.5em} \newline \newline {\textbf{Assign pseudo labels.} In each epoch, after $N_{max}$ mini-batch updates, we randomly select $n_t^l$ unlabeled target data and assign them pseudo labels using the Labeling function \mbox{\cite{KSaito_ICML17}}, $F_1$ and $F_2$ (line $8$). Following \mbox{\cite{KSaito_ICML17}}, the Labeling function in Algorithm~{\ref{alg: ButterNET}} (line $8$) assigns pseudo labels to unlabeled target data when predictions of $F_1$ and $F_2$ agree and at least one of them is confident about its prediction (probability above $0.9$ or $0.95$).} Using this function, we can obtain the pseudo-labeled target data $\tilde{D}^l_t$ for training Branch-II in the next epoch. Then, we merge $\tilde{D}^l_t$ and $\tilde{D}_s$ to be $\tilde{D}$ for training Branch-I in the next epoch (line $9$). \vspace{-0.5em} \newline \newline \textbf{Update other parameters.} Finally, we update $n_t^l$, $R(T)$ and $R_t(T)$ in lines $10$-$11$. {Note that $R(T)$ and $R_t(T)$ are actually piecewise-defined linear functions:} \begin{equation*} R(T)=\left\{ \begin{aligned} 1-\tau,&~~~~~~~ T\geq T_k, \\ 1 - T/T_k \times \tau,&~~~~~~~T< T_k,\\ \end{aligned} \right. \end{equation*} \begin{equation*} R_t(T)=\left\{ \begin{aligned} 1-\tau_t,&~~~~~~~ T\geq T_k, \\ 1 - T/T_k \times \tau_t,&~~~~~~~T< T_k.\\ \end{aligned} \right.
\end{equation*} In Algorithm~\ref{alg: ButterNET}, we use $\tau$ to represent the noise rate (i.e., the ratio of data with incorrect labels) in MD and use $\tau_t$ to represent the noise rate in TD. However, in WUDA, we cannot obtain the ground-truth $\tau$ and $\tau_t$. Thus, we regard $\tau$ and $\tau_t$ as hyper-parameters. \subsection{Can we realize DCP using other models?} {Based on Theorem~{\ref{thm: epsilon_effects}}, if we check high-correctness source data and pseudo-labeled target data out, we can reduce the negative effects of noisy source data significantly. Thus, we propose the DCP to check correct data out, which is introduced in Section~{\ref{Sec:dualCheckingP}}. In Butterfly, we realize DCP using deep networks, since the memorization effects of deep networks ensure that we can check correct data out. For non-network models, if they also have memorization effects like deep networks, they can also be used in our approach. We also tried other models. Unfortunately, these models do not learn patterns first (as deep networks do when fitting training data), meaning that, currently, we can only realize our approach using deep networks.} \vspace{-1em} \subsection{A Generalization Bound for WUDA} In this subsection, we prove a generalization bound for the WUDA problem using the loss function Eq.~\eqref{eq: loss_butterfly} and Theorem~\ref{thm: epsilon_effects}\footnote{Please note that this is a generalization bound for the WUDA problem rather than for Butterfly. In Butterfly, we essentially have four classifiers ($F_1,F_2,F_{t1},F_{t2}$), which makes it very difficult to analyze. We will develop a generalization and estimation error bound for Butterfly in the future.}. Practitioners may safely skip it.
First, we introduce the Rademacher complexity of a class of vector-valued functions \cite{BartlettM02_Rader,Mansour2009,maurer2016vector,JianLi_NeurIPS19_local_RC,JianLi_IJCAI19_local_RC,Long19_ICML_theory}, which measures the degree to which a class can fit random noise. The Rademacher complexity of $\mathcal{H}$ is defined as follows. \begin{mydef}[Rademacher Complexity of $\mathcal{H}$] Given a sample $S=\{(x_i)\}_{i=1}^n$, the empirical Rademacher complexity of the set $\mathcal{H}$ is defined as follows. \begin{align*} \hat{\Re}_S(\mathcal{H})=\frac2n\underset{\sigma}{\mathbb{E}}\Big( \underset{h\in \mathcal{H}}{\sup}\sum_{i=1}^n\sum_{k=1}^K \sigma_{ik} h_k(x_{i}) \Big), \end{align*} where $h_k(\cdot)$ is the $k^{th}$ component of function $h\in\mathcal{H}$ and the $\sigma_{ik}$ form an $n\times K$ matrix of independent Rademacher variables \cite{maurer2016vector}. The Rademacher complexity of the set $\mathcal{H}$ is defined as the expectation of $\hat{\Re}_S(\mathcal{H})$ over all samples of size $n$: \begin{align*} {\Re}_n(\mathcal{H})=\underset{S}{\mathbb{E}}\Big(\hat{\Re}_S(\mathcal{H}) \Big| |S|=n \Big). \end{align*} \end{mydef} Then, using the Rademacher complexity, we can prove an upper bound of $\tilde{R}^{\text{po}}_s(h,u_s)$ to show the relation between $\tilde{R}^{\text{po}}_s(h,u_s)$ and the loss function Eq.~\eqref{eq: loss_butterfly}. Following common practice \cite{mohri2018foundations,kiryo2017positive}, we assume that 1) there exist $C_h>0$ and $C_L>0$ such that $\sup_{h\in\mathcal{H}}\|h\|_{\infty}\leq C_h$ and $\sup_{\|t\|_{\infty}\leq C_h}\max_y \ell(t,y) \leq C_L$, and 2) $\ell(t,y)$ is Lipschitz continuous in $\|t\|_{\infty}\leq C_h$ with a Lipschitz constant $L_\ell$.
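For intuition, the empirical Rademacher complexity in the definition above can be estimated by Monte Carlo for a small \emph{finite} class of vector-valued functions (a toy stand-in for the deep-network class $\mathcal{H}$; the sampling parameters are arbitrary):

```python
# Monte Carlo estimate of the empirical Rademacher complexity
#   (2/n) E_sigma[ sup_h sum_i sum_k sigma_ik h_k(x_i) ]
# for a finite class of vector-valued functions (toy stand-in for H).
import random

def empirical_rademacher(funcs, xs, K, n_draws=500, seed=0):
    rng = random.Random(seed)
    n = len(xs)
    total = 0.0
    for _ in range(n_draws):
        # sigma: n x K independent Rademacher (+1/-1) signs
        sigma = [[rng.choice((-1.0, 1.0)) for _ in range(K)]
                 for _ in range(n)]
        # sup over the finite class of the correlation with the signs
        total += max(sum(sigma[i][k] * h(xs[i])[k]
                         for i in range(n) for k in range(K))
                     for h in funcs)
    return (2.0 / n) * total / n_draws
```

For the class $\{x\mapsto(1,0),\,x\mapsto(-1,0)\}$ on $n=4$ points the supremum equals $|\sum_i\sigma_{i1}|$, whose expectation is $1.5$, so the estimate should be near $(2/4)\cdot1.5=0.75$; a richer class fits the random signs better and gets a larger value.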
\begin{mylem} \label{lem:R_s_po_est_err_bound} Given a sample $S_s=\{(x_{si},y_{si},u_{si})\}_{i=1}^n$ of size $n$ drawn from the probability density $\tilde{p}_s^{\text{po}}(x_s,y_s,u_s)$, with probability at least $1-\delta$ over the draw of $S_s$, the following inequality holds. \begin{align} \label{eq:Gbound_1st_risk} \tilde{R}^{\text{po}}_s(h,\bm{u}_s) \leq &~\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_s) + \frac{\sqrt{2}L_\ell\hat{\Re}_{D^x_s}(\mathcal{H})}{1-\tau_s} \nonumber \\ &~+\frac{3C_L}{1-\tau_s}\sqrt{\frac{\ln\frac{2}{\delta}}{2n}}, \end{align} where $\mathcal{L}$ is defined in Eq.~\eqref{eq: loss_butterfly}, $D^{xy}_s=\{x_{si},y_{si}\}_{i=1}^n$, $D^x_s=\{x_{si}\}_{i=1}^n$, $\bm{u}_s=[u_{s1},\dots,u_{sn}]^T$ and $\tau_s=\rho_{u_s}=1 - \sum_{i=1}^n u_{si}/n$. \end{mylem} \begin{proofskt} For simplicity, in this proof, we let $\mathcal{L}_{S_s}(\ell, h)=\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_s)$, $\tilde{R}^{\text{po}}_s(\ell, h) = \tilde{R}^{\text{po}}_s(h,u_s)$, and $\mathbb{E}_{S_s}[\cdot]=\mathbb{E}_{S_s\sim (\tilde{P}_s^{\text{po}})^n}[\cdot]$, where $\tilde{P}_s^{\text{po}}$ is the probability measure corresponding to the density $\tilde{p}_s^{\text{po}}$. We first prove that $\mathcal{L}_{S_s}(\ell, h)$ is an unbiased estimator of $\tilde{R}^{\text{po}}_s(\ell, h)$ based on the definition of $\tilde{R}^{\text{po}}_s(h,u_s)$ in Section~\ref{sec:butter_eliminate_noise}. Then, let $\Phi(S_s)=\sup_{\ell \in \mathbb{L}_{\mathcal{H}}}\big(\tilde{R}^{\text{po}}_s(\ell,h) - \mathcal{L}_{S_s}(\ell,h)\big)$. Changing a point of $S_s$ affects $\Phi(S_s)$ by at most $C_L/(n(1-\tau_s))$. Thus, by McDiarmid's inequality applied to $\Phi(S_s)$, for any $\delta>0$, with probability at least $1-\delta/2$, the following inequality holds. \begin{align*} \Phi(S_s)\leq \mathbb{E}_{S_s}[\Phi(S_s)] + \frac{C_L}{1-\tau_s}\sqrt{\frac{\ln(2/\delta)}{2n}}.
\end{align*} Then, we have \begin{align} \label{eq:Lemma3_skt1} &\mathbb{E}_{S_s}[\Phi(S_s)]=\mathbb{E}_{S_s}\Big[\sup_{\ell \in \mathbb{L}_{\mathcal{H}}}\big(\tilde{R}^{\text{po}}_s(\ell,h) - \mathcal{L}_{S_s}(\ell,h)\big)\Big] \nonumber \\ \leq~& \frac{2}{n(1-\tau_s)}\mathbb{E}_{\sigma,S_s}\Big[\sup_{\ell \in \mathbb{L}_{\mathcal{H}}}\sum_{i=1}^n{\sigma_iu_{si}\ell(h(x_{si}),y_{si})}\Big]. \end{align} Because of the presence of $u_{si}$, the right side of Eq.~\eqref{eq:Lemma3_skt1} is not the Rademacher complexity of $\mathbb{L}_{\mathcal{H}}$ (i.e., $\Re(\mathbb{L}_{\mathcal{H}})$). However, using the property of $\sup$, we can prove that it is bounded by $\Re(\mathbb{L}_{\mathcal{H}})/(1-\tau_s)$. Since changing a point of $S_s$ affects $\Re_n(\mathbb{L}_\mathcal{H})$ by at most $2C_L/n$, by McDiarmid's inequality, for any $\delta>0$, with probability at least $1-\delta/2$, the following inequality holds. \begin{align*} \Re_n(\mathbb{L}_{\mathcal{H}})\leq\hat{\Re}_{S_s}(\mathbb{L}_{\mathcal{H}}) + 2C_L\sqrt{\frac{\ln(2/\delta)}{2n}}. \end{align*} Since $\ell$ is Lipschitz continuous, according to \cite{maurer2016vector}, we have \begin{align*} \hat{\Re}_{S_s}(\mathbb{L}_{\mathcal{H}})\leq \sqrt{2}L_{\ell}\hat{\Re}_{D_s^x}(\mathcal{H}), \end{align*} which proves this lemma. \end{proofskt} Finally, we prove the generalization bound for the WUDA problem as follows.
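As a quick numerical illustration of the lemma before stating the theorem: the noise rate $\tau_s$ is computed directly from the checking vector, and the two deviation terms added to the empirical loss shrink as $n$ grows. The constants below are illustrative (ours), and the confidence term is implemented as $\sqrt{\ln(2/\delta)/(2n)}$ so the square root is real-valued:

```python
import math

def tau_from_checking(u):
    # tau_s = rho_{u_s} = 1 - sum_i u_si / n (fraction not checked out)
    return 1.0 - sum(u) / len(u)

def lemma_slack(n, tau_s, C_L, L_ell, rad_emp, delta):
    """Deviation terms of the lemma's bound:
    sqrt(2)*L_ell*Rad_hat/(1-tau_s) + (3*C_L/(1-tau_s))*sqrt(ln(2/delta)/(2n))."""
    complexity = math.sqrt(2.0) * L_ell * rad_emp / (1.0 - tau_s)
    confidence = 3.0 * C_L / (1.0 - tau_s) * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return complexity + confidence

tau_s = tau_from_checking([1, 1, 0, 1, 0])   # 3 of 5 examples checked out -> 0.4
# illustrative constants; in practice rad_emp itself also shrinks with n
s_small = lemma_slack(n=1_000,  tau_s=0.2, C_L=1.0, L_ell=1.0, rad_emp=0.05, delta=0.05)
s_large = lemma_slack(n=10_000, tau_s=0.2, C_L=1.0, L_ell=1.0, rad_emp=0.05, delta=0.05)
```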
\begin{mythm} \label{thm:generalization} Given a sample $S_s=\{(x_{si},y_{si},u_{si})\}_{i=1}^{n_s}$ drawn from the probability density $\tilde{p}_s^{\text{po}}(x_s,y_s,u_s)$ and a sample $S_t=\{(x_{ti},u_{ti})\}_{i=1}^{n_t}$ drawn from the probability density $\tilde{p}_t^{\text{po}}(x_t,u_t)$, under the assumptions in Remark \ref{rem: tremondous noise assumption}, Remark \ref{rem: tremondous_noise_assumption_f} and Lemma~\ref{lem: selection discrepancy_t}, for any $\epsilon \in (0,1)$, there exist $\delta_s$ and $\delta_t$ such that, if $\rho^s_{01}<\delta_s$ and $\rho^t_{01}<\delta_t$, then, with probability at least $1-3\delta$, for any $h\in \mathcal{H}$, the following inequality holds. \begin{align} \label{eq:empicial_bound_NEW} R_t(h,f_t) \leq & 2\Big(\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_s) + {\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_{\tilde{s}})}\Big) \nonumber \\ &+ \mathcal{L}(\theta,h;\bm{u}_t,D^{xy}_{\tilde{t}})+ \frac{4\sqrt{2}L_\ell\hat{\Re}_{D^x_s}(\mathcal{H})}{1-\tau_s} \nonumber \\ &+ \frac{\sqrt{2}L_\ell\hat{\Re}_{D^x_t}(\mathcal{H})}{1-\tau_t} +\frac{12C_L}{1-\tau_s}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_s}}\nonumber \\ &+ \frac{3C_L}{1-\tau_t}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_t}} + 6\epsilon, \end{align} where $\mathcal{L}$ is defined in Eq.~\eqref{eq: loss_butterfly}, $D^{xy}_s=\{x_{si},y_{si}\}_{i=1}^{n_s}$, $D^{xy}_{\tilde{s}}=\{x_{si},\tilde{f}_t(x_{si})\}_{i=1}^{n_s}$, $D^{xy}_{\tilde{t}}=\{x_{ti},\tilde{f}_t(x_{ti})\}_{i=1}^{n_t}$, $D^x_s=\{x_{si}\}_{i=1}^{n_s}$, $D^x_t=\{x_{ti}\}_{i=1}^{n_t}$, $\bm{u}_s=[u_{s1},\dots,u_{sn_s}]^T$, $\tau_s=\rho_{u_s}=1 - \sum_{i=1}^{n_s} u_{si}/{n_s}$, $\bm{u}_t=[u_{t1},\dots,u_{tn_t}]^T$ and $\tau_t=\rho_{u_t}=1 - \sum_{i=1}^{n_t} u_{ti}/{n_t}$.
\end{mythm} \begin{proof} We prove this theorem (i.e., Inequality \eqref{eq:empicial_bound_NEW}) according to Inequality \eqref{eq:risk_bound_NEW:basic}, where \eqref{eq:empicial_bound_NEW} has $7$ terms on the right side and \eqref{eq:risk_bound_NEW:basic} has $6$ terms on the right side. 1) For the last $3$ terms in \eqref{eq:risk_bound_NEW:basic}, according to \eqref{eq:first2_epsilon}, \eqref{eq:second2_epsilon} and \eqref{eq:third2_epsilon}, the sum of the last three terms of \eqref{eq:risk_bound_NEW:basic} is less than or equal to $4\epsilon$. 2) For the first $3$ terms in \eqref{eq:risk_bound_NEW:basic}, we have shown (in Section~\ref{sec:principle_butterfly}) that their sum is less than or equal to $(*)$: \begin{align*} 2\tilde{R}_s^{\text{po}}(h,u_s)+2\tilde{R}^{\text{po}}_s(h,\tilde{f}_t,\bm{u}_s)+\tilde{R}^{\text{po}}_t(h,\tilde{f}_t,\bm{u}_t)+2\epsilon. \end{align*} Then, similarly to Lemma~\ref{lem:R_s_po_est_err_bound}, we can prove that, with probability at least $1-\delta$, for any $h\in\mathcal{H}$, \begin{align} \label{eq:Gbound_2nd_risk} \tilde{R}^{\text{po}}_s(h,\tilde{f}_t,\bm{u}_s) \leq &~\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_{\tilde{s}}) + \frac{\sqrt{2}L_\ell\hat{\Re}_{D^x_s}(\mathcal{H})}{1-\tau_s} \nonumber \\ &~+\frac{3C_L}{1-\tau_s}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_s}}, \end{align} \begin{align} \label{eq:Gbound_3rd_risk} \tilde{R}^{\text{po}}_t(h,\tilde{f}_t,\bm{u}_t) \leq &~\mathcal{L}(\theta,h;\bm{u}_t,D^{xy}_{\tilde{t}}) + \frac{\sqrt{2}L_\ell\hat{\Re}_{D^x_t}(\mathcal{H})}{1-\tau_t} \nonumber \\ &~+\frac{3C_L}{1-\tau_t}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_t}}. \end{align} Combining \eqref{eq:Gbound_1st_risk}, \eqref{eq:Gbound_2nd_risk} and \eqref{eq:Gbound_3rd_risk} with $(*)$, based on 1), we prove this theorem. Note that $6\epsilon$ equals $4\epsilon$ (in 1)) $+$ $2\epsilon$ (in $(*)$).
\end{proof} \begin{mycor}[{Generalization Bound for WUDA}] \label{cor:generalization} {Given a sample $S_s$ and a sample $S_t$ defined in Theorem~{\ref{thm:generalization}}, under the assumptions in Remark {\ref{rem: tremondous noise assumption}}, Remark {\ref{rem: tremondous_noise_assumption_f}} and Lemma~{\ref{lem: selection discrepancy_t}}, if $\rho^s_{01}<C_\rho^s/\sqrt{n_sT}$ and $\rho^t_{01}<C_\rho^t/\sqrt{n_tT}$, then, with probability at least $1-3\delta$, for any $h\in \mathcal{H}$, the following inequality holds.} \begin{align} \label{eq:empicial_bound_NEW_cor} &R_t(h,f_t) \nonumber\\ \leq~& 2\Big(\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_s) + {\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_{\tilde{s}})}\Big) \nonumber \\ &+ \mathcal{L}(\theta,h;\bm{u}_t,D^{xy}_{\tilde{t}})+ \frac{4\sqrt{2}L_\ell\hat{\Re}_{D^x_s}(\mathcal{H})}{1-\tau_s} \nonumber \\ &+ \frac{\sqrt{2}L_\ell\hat{\Re}_{D^x_t}(\mathcal{H})}{1-\tau_t} +\frac{12C_L}{1-\tau_s}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_s}}+ \frac{3C_L}{1-\tau_t}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_t}}\nonumber \\ & + \frac{C_\rho^s(2M_s+M_t)}{\sqrt{n_sT}} + \frac{3C_\rho^tM_t}{\sqrt{n_tT}}, \end{align} {where $\mathcal{L}$, $D^{xy}_s$, $D^{xy}_{\tilde{s}}$, $D^{xy}_{\tilde{t}}$, $D^x_s$, $D^x_t$, $\bm{u}_s$, $\tau_s$, $\bm{u}_t$, $\tau_t$ are defined in Theorem~{\ref{thm:generalization}}, $T$ is the number of training epochs, and $C_\rho^s$ and $C_\rho^t$ are two finite constants.} \end{mycor} \begin{myrem}\upshape {In Corollary~{\ref{cor:generalization}}, we assume that $\rho_{01}^s$ and $\rho_{01}^t$ go to zero with convergence speeds of $O(1/\sqrt{n_sT})$ and $O(1/\sqrt{n_tT})$, respectively. In Section~{\ref{sec:verify_rho}}, we verify this assumption through our experiments.} \end{myrem} Corollary~\ref{cor:generalization} shows the empirical upper bound of the target risk (i.e., $R_t(h,f_t)$). Based on this bound, we can obtain the estimation error bound of $R_t(h,f_t)$ as follows.
First, let \begin{align} \hat{R}_t^{\mathcal{L}}(h,S_s,S_t)=&2\Big(\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_s) + {\mathcal{L}(\theta,h;\bm{u}_s,D^{xy}_{\tilde{s}})}\Big) \nonumber \\ &+ \mathcal{L}(\theta,h;\bm{u}_t,D^{xy}_{\tilde{t}}), \end{align} where $D^{xy}_s$, $D^{xy}_{\tilde{s}}$ and $D^{xy}_{\tilde{t}}$ are defined in Theorem~\ref{thm:generalization}. Let $\widetilde{h} = \arg\min_{h\in\mathcal{H}}\hat{R}_t^{\mathcal{L}}(h,S_s,S_t)$ denote the empirical minimizer of $\hat{R}_t^{\mathcal{L}}(h,S_s,S_t)$, let ${h^*} = \arg\min_{h\in\mathcal{H}}R_t(h,f_t)$ denote the true risk minimizer of $R_t(h,f_t)$, and let $\mathcal{H}' = \{h|\hat{R}_t^{\mathcal{L}}(h,S_s,S_t) \leq \epsilon'\}$. Then, we have \begin{align} &R_t(\widetilde{h},f_t) - R_t(h^*,f_t) \nonumber \\ =~& R_t(\widetilde{h},f_t) - \hat{R}_t^{\mathcal{L}}(\widetilde{h},S_s,S_t) + \hat{R}_t^{\mathcal{L}}(\widetilde{h},S_s,S_t) -R_t(h^*,f_t) \nonumber \\ ~~~~&+\hat{R}_t^{\mathcal{L}}(h^*,S_s,S_t)-\hat{R}_t^{\mathcal{L}}(h^*,S_s,S_t) \nonumber \\ =~& R_t(\widetilde{h},f_t) - \hat{R}_t^{\mathcal{L}}(\widetilde{h},S_s,S_t) + \hat{R}_t^{\mathcal{L}}(h^*,S_s,S_t) -R_t(h^*,f_t)\nonumber \\ ~~~~&+\hat{R}_t^{\mathcal{L}}(\widetilde{h},S_s,S_t) -\hat{R}_t^{\mathcal{L}}(h^*,S_s,S_t) \nonumber \\ \leq~& \sup_{h\in\mathcal{H'}}(R_t(h,f_t) - \hat{R}_t^{\mathcal{L}}(h,S_s,S_t)) + \epsilon' + 0, \end{align} where the last term is at most $0$ because $\hat{R}_t^{\mathcal{L}}(\widetilde{h},S_s,S_t) \leq \hat{R}_t^{\mathcal{L}}(h^*,S_s,S_t)$ by the definition of $\widetilde{h}$.
If all conditions in Theorem~\ref{thm:generalization} are satisfied, with probability at least $1-3\delta$, for any $h\in\mathcal{H}$, we have \begin{align} \label{eq:est_err_bound} &R_t(\widetilde{h},f_t) - R_t(h^*,f_t)\nonumber \\ \leq~& \frac{4\sqrt{2}L_\ell\hat{\Re}_{D^x_s}(\mathcal{H})}{1-\tau_s} + \frac{\sqrt{2}L_\ell\hat{\Re}_{D^x_t}(\mathcal{H})}{1-\tau_t} +\frac{12C_L}{1-\tau_s}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_s}}\nonumber \\ &+ \frac{3C_L}{1-\tau_t}\sqrt{\frac{\ln\frac{2}{\delta}}{2n_t}} + \frac{C_{\rho}^s(2M_s+M_t)}{\sqrt{n_sT}} + \frac{3C_{\rho}^tM_t}{\sqrt{n_tT}} + \epsilon'. \end{align} Inequality~\eqref{eq:est_err_bound} ensures that learning with $\hat{R}_t^{\mathcal{L}}(h,S_s,S_t)$ is consistent: as $n_s,n_t\rightarrow \infty$ and $\epsilon' \rightarrow 0$, $R_t(\widetilde{h},f_t) \rightarrow R_t(h^*,f_t)$. For a linear-in-parameter model with a bounded norm, $\hat{\Re}_{D^x_s}(\mathcal{H})=\mathcal{O}(1/\sqrt{n_s})$ and $\hat{\Re}_{D^x_t}(\mathcal{H})=\mathcal{O}(1/\sqrt{n_t})$, and thus $R_t(\widetilde{h},f_t) \rightarrow R_t(h^*,f_t)$ in $\mathcal{O}(1/\sqrt{n_s}+1/\sqrt{n_t})$. \section{Comparison to related works} In this section, we compare Butterfly with related works and show why they cannot handle the WUDA problem. \vspace{-1em} \newline \newline \textbf{Relations to co-teaching.} As Butterfly is related to co-teaching, we discuss their major differences here. Although co-teaching applies the small-loss trick and the cross-update technique to train deep networks against noisy data, it can only deal with the one-domain problem instead of the cross-domain problem. Besides, we argue that Butterfly is not a simple mixture of co-teaching and ATDA for two reasons. \vspace{-1em} \newline \newline First, the network structure of Butterfly is different from that of ATDA and co-teaching: Butterfly maintains four networks, while ATDA maintains three and co-teaching maintains two. We cannot simply combine ATDA and co-teaching to derive Butterfly.
Second, we have justified that the sequential mixture of co-teaching and ATDA (i.e., the two-step method) cannot eliminate noise effects caused by noisy source data (see Section~\ref{sec:two-step}). Specifically, two-step methods only take care of part of the noise effects, while Butterfly takes care of the whole noise effects. Thus, Butterfly is the first method to eliminate noise effects rather than merely alleviate them. \vspace{-1em} \newline \textbf{Relations to TCL.} Recently, \emph{transferable curriculum learning} (TCL) was proposed as a robust UDA method to handle noise \cite{TCL_long}. TCL uses the small-loss trick to train DANN \cite{DANN_JMLR}. However, TCL can only minimize $(i)+(ii)+(iv)$, while Butterfly can minimize all terms on the right side of Eq.~(\ref{eq:risk bound NEW}). \begin{figure*}[tp] \begin{center} \subfigure[\emph{MNIST}] {\includegraphics[width=0.25\textwidth]{MNIST.png}} \subfigure[\emph{SYND}] {\includegraphics[width=0.25\textwidth]{SYND.png}} \caption{Visualization of \emph{MNIST} and \emph{SYND}.} \label{fig:V_dig} \end{center} \vspace{-1em} \end{figure*} \begin{figure*}[h] \begin{center} \subfigure[\emph{Bing} provided by \cite{Bing_data}] {\includegraphics[width=0.24\textwidth]{Bing_horse.jpg}} \subfigure[\emph{Caltech256} provided by \cite{Caltech256}] {\includegraphics[width=0.24\textwidth]{Caltech256_horse.jpg}} \subfigure[\emph{ImageNet} provided by \cite{ImageNet}] {\includegraphics[width=0.24\textwidth]{ImageNet_horse.jpg}} \subfigure[\emph{SUN} provided by \cite{SUN_data}] {\includegraphics[width=0.24\textwidth]{SUN_horse.jpg} } \caption{Visualization of \emph{Bing}, \emph{Caltech256}, \emph{ImageNet} and \emph{SUN} (taking ``horse'' as the common class).} \label{fig:BCIS} \end{center} \vspace{-2em} \end{figure*} \section{Experiments} \label{sec:exp} We conduct experiments on $32$ simulated WUDA tasks and $3$ real-world WUDA tasks to verify the efficacy of Butterfly.
\subsection{Simulated WUDA tasks} We verify the effectiveness of our approach on three benchmark datasets (vision and text), including \textit{MNIST}, \textit{SYN-DIGITS (SYND)}\footnote{\emph{Digit} datasets (\emph{MNIST} and \emph{SYN Digit}) can be downloaded from the official code of ATDA. The link is \url{https://github.com/ksaito-ut/atda}.} and \textit{human-sentiment} analysis (i.e., \textit{Amazon products reviews} on \emph{book}, \emph{dvd}, \emph{electronics} and \emph{kitchen})\footnote{\emph{Sentiment} datasets (\emph{Amazon products reviews}) can be downloaded from the official code of the marginalized Stacked Denoising Autoencoder. The link is \url{https://www.cse.wustl.edu/~mchen/code/mSDA.tar}.}. They are used to construct $14$ basic tasks: \textit{MNIST}$\rightarrow$\textit{SYND} (\emph{M}$\rightarrow$\emph{S}), \textit{SYND}$\rightarrow$\textit{MNIST} (\emph{S}$\rightarrow$\emph{M}), \emph{book}$\rightarrow$\emph{dvd} (\emph{B}$\rightarrow$\emph{D}), \emph{book}$\rightarrow$\emph{electronics} (\emph{B}$\rightarrow$\emph{E}), $\ldots$~, and \emph{kitchen} $\rightarrow$ \emph{electronics} (\emph{K}$\rightarrow$\emph{E}). These tasks are often used for the evaluation of UDA methods \cite{DANN_JMLR,KSaito_ICML17,Saito:MCD}. Figure~\ref{fig:V_dig} shows the datasets \emph{MNIST} and \emph{SYND}. Since all source datasets are clean, we corrupt the source data using symmetry flipping \cite{Patrini_CVPR2017} and pair flipping \cite{Co-teaching} with noise rate $\rho$ chosen from $\{0.2,0.45\}$. {Note that there are other ways to generate noisy source data, such as asymmetry flipping. However, since asymmetry flipping can be regarded as a combination of symmetry flipping and pair flipping, we only use symmetry flipping and pair flipping to generate \emph{simulated} WUDA tasks.
In \emph{real-world} WUDA tasks, we have more complex noisy source data, where the noise type in the source domain is unknown.} Therefore, for each basic task, we have four kinds of noisy source data: \emph{Pair-}$45\%$ (P$45$), \emph{Pair-}$20\%$ (P$20$), \emph{Symmetry-}$45\%$ (S$45$), \emph{Symmetry-}$20\%$ (S$20$). Following \cite{Co-teaching,jiang2017mentornet}, we corrupt clean-label datasets manually using the noise transition matrices $Q_S$ and $Q_P$. Namely, we evaluate the performance of each method using $32$ simulated WUDA tasks: $8$ digit tasks and $24$ human-sentiment tasks. Since the human-sentiment task is a binary classification problem, pair flipping is equivalent to symmetry flipping, meaning that we have $24$ human-sentiment tasks. \vspace{-0.5em} \subsection{Real-world WUDA tasks} We also verify the efficacy of our approach on the ``cross-dataset benchmark'' including \emph{Bing}, \emph{Caltech256}, \emph{Imagenet} and \emph{SUN} \cite{test_bed_dataset} \footnote{\emph{Real-world} datasets (\emph{BCIS}) can be downloaded from the website of the project ``A Testbed for Cross-Dataset Analysis'': \url{https://sites.google.com/site/crossdataset/home/files} ("setup DENSE decaf7", 1.3GB, decaf7 features).}. In this benchmark, \emph{Bing}, \emph{Caltech256}, \emph{Imagenet} and \emph{SUN} share $40$ common classes. {Since the \emph{Bing} dataset was formed by collecting images retrieved by Bing image search, it contains rich noisy data, with presence of multiple objects in the same image and caricaturization \mbox{\cite{test_bed_dataset}}. We use \emph{Bing} as noisy source data}, and \emph{Caltech256}, \emph{Imagenet} and \emph{SUN} as unlabeled target data, which form three real-world WUDA tasks. Figure~\ref{fig:BCIS} shows datasets \textit{Bing}, \textit{Caltech256}, \textit{Imagenet} and \textit{SUN} (taking ``horse'' as the common class).
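The corruption step can be sketched with the standard transition-matrix construction (following the symmetry/pair-flipping recipes of \cite{Patrini_CVPR2017,Co-teaching}; the function names and the resampling helper are ours): symmetry flipping keeps a label with probability $1-\rho$ and flips it uniformly to the other $K-1$ classes, while pair flipping moves mass $\rho$ to a single "next" class.

```python
import numpy as np

def symmetry_Q(K, rho):
    """Q_S: keep a label w.p. 1 - rho, flip uniformly to the other K-1 classes."""
    Q = np.full((K, K), rho / (K - 1))
    np.fill_diagonal(Q, 1.0 - rho)
    return Q

def pair_Q(K, rho):
    """Q_P: keep a label w.p. 1 - rho, flip to the 'next' class w.p. rho."""
    Q = np.zeros((K, K))
    for i in range(K):
        Q[i, i] = 1.0 - rho
        Q[i, (i + 1) % K] = rho
    return Q

def corrupt(labels, Q, seed=0):
    """Resample each clean label y from the row Q[y]."""
    rng = np.random.default_rng(seed)
    K = Q.shape[0]
    return np.array([rng.choice(K, p=Q[y]) for y in labels])

# for binary tasks (K = 2), pair flipping coincides with symmetry flipping,
# matching the remark about the human-sentiment tasks above
assert np.allclose(pair_Q(2, 0.2), symmetry_Q(2, 0.2))
```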
\vspace{-0.5em} \subsection{Baselines} We realize Butterfly using four networks (B-Net) and compare B-Net with the following baselines: 1) ATDA: a representative pseudo-labeling-based UDA method \cite{KSaito_ICML17}; 2) DAN: a representative IPM-based UDA method \cite{Long_DAN}; 3) DANN: a representative adversarial-training-based UDA method \cite{DANN_JMLR}; 4) {\emph{Manifold embedded distribution alignment} (MEDA): a representative non-deep UDA method \mbox{\cite{MADA_ACMMM18}}}; 5) TCL: an existing robust UDA method; 6) co-teaching+ATDA (Co+ATDA): a two-step method (see Section~\ref{sec:two-step}); and {7) co-teaching+TCL (Co+TCL).} Since MEDA cannot extract features from images, we only compare with MEDA on human-sentiment tasks, where features are already given. \begin{figure*}[tp] \begin{center} {\includegraphics[width=0.75\textwidth]{Digit.pdf}} \caption{The architecture of B-Net for digit WUDA tasks \emph{SYND} $\leftrightarrow$ \emph{MNIST}. We added a BN layer in the last convolutional layer of the CNN and in the FC layers in $F_1$ and $F_2$. We also used dropout in the last convolutional layer of the CNN and in the FC layers in $F_1$, $F_2$, $F_{t1}$ and $F_{t2}$ (dropout probability is set to $0.5$).} \label{fig:dig_cnn} \end{center} \vspace{-2em} \end{figure*} \begin{figure*}[tp] \begin{center} \subfigure[Human-sentiment] {\includegraphics[width=0.4\textwidth]{Amazon.pdf}} \subfigure[Real-world] {\includegraphics[width=0.5\textwidth]{Object.pdf}} \caption{The architecture of B-Net for (a) human-sentiment WUDA tasks and (b) real-world WUDA tasks. We added a BN layer in the first FC layers in $F_1$ and $F_2$. We also used dropout in the first FC layers in $F_1$, $F_2$, $F_{t1}$ and $F_{t2}$ (dropout probability is set to $0.5$).} \label{fig: ama_cnn} \end{center} \vspace{-1em} \end{figure*} \vspace{-1em} \subsection{Network structure and optimizer} We implement all methods on Python 3.6 with an NVIDIA P100 GPU.
We use MomentumSGD for optimization in digit and real-world tasks, and set the momentum to $0.9$. We use Adagrad for optimization in human-sentiment tasks because of the sparsity of review data \cite{KSaito_ICML17}. $F_{1}$, $F_{2}$, $F_{t1}$ and $F_{t2}$ are $6$-layer CNNs ($3$ convolutional and $3$ fully-connected layers) for digit tasks, $3$-layer neural networks ($3$ fully-connected layers) for human-sentiment tasks, and $4$-layer neural networks ($4$ fully-connected layers) for real-world tasks. ReLU is used as the activation function of these networks. Besides, dropout and batch normalization are also used. The network topology is shown in Figures~\ref{fig:dig_cnn} and \ref{fig: ama_cnn}. As deep networks are highly nonconvex, even with the same network and optimization method, different initializations can lead to different local optima. Thus, following \cite{DeCoupling}, we take four networks with the same architecture but different initializations as four classifiers. \subsection{Experimental setup} {Since this paper deals with the challenging situation where no labeled data are available in the target domain, we follow the common protocol of setting the same hyperparameters for similar tasks \mbox{\cite{Long_JAN}}. For example, we set the same hyperparameters for all WUDA tasks regarding digit datasets (there are $8$ WUDA tasks regarding digit datasets). The selected hyperparameters are robust across many tasks rather than tuned to a specific task. Details can be found below.} \begin{figure*}[tp] \begin{center} \subfigure[S$20$] {\includegraphics[width=0.24\textwidth]{deep_S2M_S20_REG.png}} \vspace{-1em} \subfigure[S$45$] {\includegraphics[width=0.24\textwidth]{deep_S2M_S45_REG.png}} \subfigure[P$20$] {\includegraphics[width=0.24\textwidth]{deep_S2M_P20_REG.png}} \subfigure[P$45$] {\includegraphics[width=0.24\textwidth]{deep_S2M_P45_REG.png}} \caption{{\color{mydarkblue}Target-domain accuracy vs.
number of epochs on four \textit{SYND$\rightarrow$\textit{MNIST}} WUDA tasks.}} \label{fig: result-S2M-part} \end{center} \vspace{-2em} \end{figure*} \begin{figure*}[tp] \begin{center} \subfigure[S$20$] {\includegraphics[width=0.24\textwidth]{deep_M2S_S20_REG.png}} \vspace{-1em} \subfigure[S$45$]{ \includegraphics[width=0.24\textwidth]{deep_M2S_S45_REG.png}} \subfigure[P$20$] {\includegraphics[width=0.24\textwidth]{deep_M2S_P20_REG.png}} \subfigure[P$45$]{ \includegraphics[width=0.24\textwidth]{deep_M2S_P45_REG.png}} \caption{{\color{mydarkblue}Target-domain accuracy vs. number of epochs on four \textit{MNIST$\rightarrow$\textit{SYND}} WUDA tasks.}} \label{fig: result-M2S-part} \end{center} \vspace{-1em} \end{figure*} \begin{table*}[!tp] \centering \footnotesize \caption{Target-domain accuracy on $8$ digit WUDA tasks (\textit{SYND}$\leftrightarrow$\textit{MNIST}). Bold value represents the highest accuracy in each row.} \vspace{-1em} \begin{tabular}{ccllllllll} \toprule \multicolumn{1}{l}{Tasks} & \multicolumn{1}{l}{Type} & DAN & DANN & \multicolumn{1}{p{3.335em}}{ATDA} & \multicolumn{1}{p{3.28em}}{TCL} & \multicolumn{1}{p{3.92em}}{Co+TCL} & \multicolumn{1}{p{3.92em}}{Co+ATDA} & \multicolumn{1}{p{3.555em}}{B-Net} \\ \midrule \multicolumn{1}{l}{\multirow{4}[2]{*}{\textit{S}$\rightarrow$\textit{M}}} & \multicolumn{1}{l}{P$20$} & 90.17\% &79.06\% & 55.95\% & 80.81\% & {\color{mydarkblue}88.56\%} &\textbf{95.37\%} & 95.29\% \\ & \multicolumn{1}{l}{P$45$} & 67.00\% & 55.34\% & 53.66\% & 55.97\% & {\color{mydarkblue}73.27\%} & 75.43\% & \textbf{90.21\%} \\ & \multicolumn{1}{l}{S$20$} & 90.74\% & 75.19\% & 89.87\% & 80.23\% & {\color{mydarkblue}85.88\%} & 95.22\% & \textbf{95.88\%} \\ & \multicolumn{1}{l}{S$45$} & 89.31\% & 65.87\% & 87.53\% & 68.54\% & {\color{mydarkblue}75.69\%} & 92.03\% & \textbf{94.97\%} \\ \midrule \multicolumn{1}{l}{\multirow{4}[2]{*}{\textit{M}$\rightarrow$\textit{S}}} & \multicolumn{1}{l}{P$20$} & 40.82\% & 58.78\% & 33.74\% & 58.88\% & 
{\color{mydarkblue}59.08\%} & 58.02\% & \textbf{60.36\%} \\ & \multicolumn{1}{l}{P$45$} & 28.41\% & 43.70\% & 19.50\% & 45.31\% & {\color{mydarkblue}47.15\%} & 46.80\% & \textbf{56.62\%} \\ & \multicolumn{1}{l}{S$20$} & 30.62\% & 53.52\% & 49.80\% & 56.74\% & {\color{mydarkblue}56.91\%} & 56.64\% & \textbf{57.05\%} \\ & \multicolumn{1}{l}{S$45$} & 28.21\% & 43.76\% & 17.20\% & 49.91\% & {\color{mydarkblue}51.22\%} & 54.29\% & \textbf{56.18\%} \\ \midrule \multicolumn{2}{c}{Average} & 58.16\% & 58.01\% & 50.91\% & 62.05\% & {\color{mydarkblue}67.22\%} & 71.73\% & \textbf{75.82\%} \\ \bottomrule \end{tabular}% \label{tab: digit_results}% \vspace{-1.2em} \end{table*}% For all $35$ WUDA tasks, $T_k$ is set to $5$, $T_{max}$ is set to $30$, and $\ell(\cdot,\cdot)$ is the cross-entropy loss function. The learning rate is set to $0.01$ for simulated tasks and $0.05$ for real-world WUDA tasks, and $\tau_t$ is set to $0.05$ for simulated tasks and $0.02$ for real-world WUDA tasks. The confidence level of the labeling function in line $8$ of Algorithm \ref{alg: ButterNET} is set to $0.95$ for the $8$ digit tasks, $0.9$ for the $24$ human-sentiment tasks, and $0.8$ for the real-world WUDA tasks. $\tau$ is set to $0.4$ for digit tasks, $0.1$ for human-sentiment tasks, and $0.2$ for real-world WUDA tasks. $n_{t,max}^l$ is set to $15,000$ for digit tasks, $500$ for human-sentiment tasks, and $4000$ for real-world WUDA tasks. $N_{max}$ is set to $1000$ for digit tasks, and $200$ for human-sentiment and real-world tasks. The batch size is set to $128$ for digit and real-world WUDA tasks, and $24$ for human-sentiment tasks. The penalty parameter is set to $0.01$ for digit and real-world WUDA tasks, and $0.001$ for human-sentiment tasks. For a fair comparison, all methods have the same network structure; namely, ATDA, DAN, DANN, TCL and B-Net adopt the same network structure for each dataset. Note that DANN and TCL use the same structure for their discriminator networks.
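For reference, the per-task-family settings above can be collected in one place. The grouping and key names below are ours; the values are transcribed from the text:

```python
# Shared across all 35 WUDA tasks
SHARED = {"T_k": 5, "T_max": 30, "loss": "cross-entropy"}

# Per task family: learning rate, tau_t, confidence level of the labeling
# function, tau, n_{t,max}^l, N_max, batch size, and penalty parameter
HYPERPARAMS = {
    "digit":     {"lr": 0.01, "tau_t": 0.05, "conf": 0.95, "tau": 0.4,
                  "n_t_max_l": 15_000, "N_max": 1000, "batch": 128, "penalty": 0.01},
    "sentiment": {"lr": 0.01, "tau_t": 0.05, "conf": 0.90, "tau": 0.1,
                  "n_t_max_l": 500, "N_max": 200, "batch": 24, "penalty": 0.001},
    "real":      {"lr": 0.05, "tau_t": 0.02, "conf": 0.80, "tau": 0.2,
                  "n_t_max_l": 4_000, "N_max": 200, "batch": 128, "penalty": 0.01},
}
```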
All experiments are repeated $10$ times, and we report the average accuracy and the \emph{standard deviation} (STD) of accuracy over the $10$ runs. \subsection{Results on simulated WUDA tasks} This subsection presents accuracy on unlabeled target data (i.e., target-domain accuracy) in $32$ simulated WUDA tasks. \subsubsection{Results on digit WUDA tasks} Table~\ref{tab: digit_results} reports the target-domain accuracy in $8$ digit tasks. As can be seen, the average target-domain accuracy of B-Net is higher than those of all baselines. In the S$20$ case (the easiest case), most methods work well. ATDA has a satisfactory performance although it does not consider the noise effects explicitly. Then, when facing harder cases (i.e., P$20$ and P$45$), ATDA fails to transfer useful knowledge from noisy source data to unlabeled target data. When facing the hardest cases (i.e., \textit{M}$\rightarrow$\textit{S} with P$45$ and S$45$), DANN has higher accuracy than DAN and ATDA. However, when facing the easiest cases (i.e., \textit{S}$\rightarrow$\textit{M} with P$20$ and S$20$), the performance of DANN is worse than that of DAN and ATDA. {Although the two-step method Co+ATDA (or Co+TCL) outperforms ATDA (or TCL) in all $8$ tasks, it cannot beat the one-step method B-Net in terms of average target-domain accuracy.} This result is evidence for the claim in Section \ref{sec:two-step}. In the task \textit{S}$\rightarrow$\textit{M} with P$20$, Co+ATDA outperforms all methods (slightly higher than B-Net), since the pseudo-labeled source data are almost correct. Figures~\ref{fig: result-S2M-part} and \ref{fig: result-M2S-part} show the target-domain accuracy vs. number of epochs for ATDA, Co+ATDA and B-Net. Besides, we show the accuracy of ATDA trained with clean source data (ATDA-TCS) as a reference point. When the accuracy of a method is close to that of ATDA-TCS (red dashed line), this method successfully eliminates noise effects.
From our observations, it is clear that B-Net is very close to ATDA-TCS in $7$ out of $8$ tasks (except for the \textit{S$\rightarrow$M} task with P$45$, Figure~\ref{fig: result-S2M-part}-(d)), which is evidence that Butterfly can eliminate noise effects. {Since the P$45$ case is the hardest one and we only have finite samples, it is reasonable that B-Net cannot perfectly eliminate noise effects.} An interesting phenomenon is that B-Net outperforms ATDA-TCS in $2$ \textit{M$\rightarrow$S} tasks (Figure~\ref{fig: result-M2S-part}-(a), (c)). This means that B-Net transfers even more useful knowledge from noisy source data to unlabeled target data than ATDA-TCS transfers from clean source data. \begin{table*}[h] \centering \footnotesize \caption{Target-domain accuracy on $12$ {human-sentiment} WUDA tasks with the $20\%$ noise rate. Bold values mean the highest values in each row.} \vspace{-1em} \begin{tabular}{lllllllll} \toprule Tasks & \multicolumn{1}{p{3.5em}}{DAN} & \multicolumn{1}{p{3.5em}}{DANN} & \multicolumn{1}{p{3.22em}}{ATDA} & \multicolumn{1}{p{3.22em}}{TCL} & \multicolumn{1}{p{4.28em}}{{MEDA}} & \multicolumn{1}{p{4.28em}}{Co+TCL} & \multicolumn{1}{p{4.28em}}{Co+ATDA} & \multicolumn{1}{p{3.5em}}{B-Net} \\ \midrule \emph{B}$\rightarrow$\emph{D} & 68.28\% & 68.08\% & 70.31\% & 71.40\% & 74.81\% &{\color{mydarkblue}67.81\%} & 66.70\% & \bf{71.84\%} \\ \emph{B}$\rightarrow$\emph{E} & 63.78\% & 63.53\% & 72.79\% & 65.08\% & 65.18\% &{\color{mydarkblue}60.54\%} & 68.89\% & \bf{75.92\%} \\ \emph{B}$\rightarrow$\emph{K} & 65.48\% & 64.63\% & 71.79\% & 66.80\% & 68.65\% &{\color{mydarkblue}61.23\%} & 66.51\% & \bf{76.32\%} \\ \emph{D}$\rightarrow$\emph{B} & 64.63\% & 64.52\% & 70.25\% & 67.33\% & 67.63\% &{\color{mydarkblue}65.22\%} & 68.04\% & \bf{70.56\%} \\ \emph{D}$\rightarrow$\emph{E} & 65.33\% & 65.16\% & 69.99\% & 66.74\% & 69.51\% &{\color{mydarkblue}64.55\%} & 67.32\% & \bf{73.73\%} \\ \emph{D}$\rightarrow$\emph{K} & 65.68\% & 66.28\% &
74.53\% & 68.82\% & 72.24\% &{\color{mydarkblue}67.98\%} & 72.20\% & \bf{77.97\%} \\ \emph{E}$\rightarrow$\emph{B} & 60.41\% & 60.15\% & \bf{63.89\%} & 63.13\% & 63.36\% &{\color{mydarkblue}61.18\%} & 61.08\% & 62.22\% \\ \emph{E}$\rightarrow$\emph{D} & 62.35\% & 61.67\% & 62.30\% & 62.93\% & 66.18\% &{\color{mydarkblue}60.81\%} & 59.77\% & \bf{63.53\%} \\ \emph{E}$\rightarrow$\emph{K} & 72.05\% & 71.51\% & 74.00\% & 75.36\% & 75.42\% &{\color{mydarkblue}72.65\%} & 70.85\% & \bf{78.96\%} \\ \emph{K}$\rightarrow$\emph{B} & 59.94\% & 59.40\% & \bf{63.53\%} & 62.77\% & 65.13\% &{\color{mydarkblue}60.71\%} & 61.22\% & 63.36\% \\ \emph{K}$\rightarrow$\emph{D} & 61.46\% & 61.51\% & 64.66\% & 64.16\% & 66.87\% &{\color{mydarkblue}64.15\%} & 64.94\% & \bf{66.98\%} \\ \emph{K}$\rightarrow$\emph{E} & 70.60\% & 72.23\% & 74.75\% & 74.14\% & 75.99\% &{\color{mydarkblue}68.95\%} & 69.69\% & \bf{76.96\%} \\ \midrule Average & 65.00\% & 64.89\% & 69.40\% & 67.39\% & 69.25\% &{\color{mydarkblue}64.65\%} & 66.43\% & \textbf{71.53\%} \\ \bottomrule \end{tabular}% \label{tab: Amazon20}% \vspace{-1em} \end{table*}% \begin{table*}[h] \centering \footnotesize \caption{Target-domain accuracy on $12$ {human-sentiment} WUDA tasks with the $45\%$ noise rate. 
Bold values mean the highest values in each row.} \vspace{-1em} \begin{tabular}{lllllllll} \toprule Tasks & \multicolumn{1}{p{3.5em}}{DAN} & \multicolumn{1}{p{3.5em}}{DANN} & \multicolumn{1}{p{3.22em}}{ATDA} & \multicolumn{1}{p{3.22em}}{TCL} & \multicolumn{1}{p{4.28em}}{{MEDA}} & \multicolumn{1}{p{4.28em}}{Co+TCL} & \multicolumn{1}{p{4.28em}}{Co+ATDA} & \multicolumn{1}{p{3.5em}}{B-Net} \\ \midrule \emph{B}$\rightarrow$\emph{D} & 52.43\% & 52.98\% & 53.56\% & 54.44\% & 54.50\% &{\color{mydarkblue}53.21\%} & 54.32\% & \bf{56.59\%} \\ \emph{B}$\rightarrow$\emph{E} & 52.17\% & 53.50\% & 55.14\% & 54.14\% & 54.29\% &{\color{mydarkblue}53.98\%} & \bf{57.34\%} & 55.74\% \\ \emph{B}$\rightarrow$\emph{K} & 52.89\% & 51.84\% & 51.14\% & 53.32\% & 53.68\% &{\color{mydarkblue}51.77\%} & 53.28\% & \bf{57.00\%} \\ \emph{D}$\rightarrow$\emph{B} & 53.11\% & 53.04\% & 54.48\% & 53.27\% & 53.66\% &{\color{mydarkblue}54.85\%} & \bf{55.95\%} & 55.15\% \\ \emph{D}$\rightarrow$\emph{E} & 51.30\% & 53.04\% & 54.21\% & 53.77\% & 54.11\% &{\color{mydarkblue}55.63\%} & 56.08\% & \bf{58.91\%} \\ \emph{D}$\rightarrow$\emph{K} & 52.15\% & 53.17\% & 57.99\% & 52.45\% & 52.45\% &{\color{mydarkblue}58.10\%} & 59.94\% & \bf{66.20\%} \\ \emph{E}$\rightarrow$\emph{B} & 51.38\% & 51.08\% & 52.54\% & 52.14\% & 52.56\% &{\color{mydarkblue}54.88\%} & 53.30\% & \bf{54.93\%} \\ \emph{E}$\rightarrow$\emph{D} & 52.83\% & 51.24\% & 49.02\% & 52.57\% & 53.03\% &{\color{mydarkblue}50.03\%} & 49.62\% & \bf{52.88\%} \\ \emph{E}$\rightarrow$\emph{K} & 54.21\% & 53.58\% & 51.66\% & 55.04\% & 55.42\% &{\color{mydarkblue}56.15\%} & 52.10\% & \bf{56.12\%} \\ \emph{K}$\rightarrow$\emph{B} & 50.44\% & 51.77\% & \bf{51.96\%} & 51.50\% & 51.52\% &{\color{mydarkblue}53.81\%} & 52.59\% & 51.39\% \\ \emph{K}$\rightarrow$\emph{D} & 52.20\% & 51.45\% & 52.86\% & 53.19\% & 53.38\% &{\color{mydarkblue}55.69\%} & 54.52\% & \bf{53.53\%} \\ \emph{K}$\rightarrow$\emph{E} & \bf{54.72\%} & 53.33\% & 52.11\% & 53.46\% & 53.81\% 
&{\color{mydarkblue}51.26\%} & 52.62\% & 53.71\% \\ \midrule Average & 52.49\% & 52.50\% & 53.65\% & 53.27\% & 53.54\% &{\color{mydarkblue}54.11\%} & 54.31\% & \textbf{56.01\%} \\ \bottomrule \end{tabular}% \label{tab: Amazon45}% \vspace{-1em} \end{table*}% \vspace{-0.5em} \begin{table*}[!tp] \caption{Target-domain accuracy on $3$ real-world WUDA tasks. The source domain is the \emph{Bing} dataset that contains noisy information from the Internet. Bold value represents the highest accuracy in each row.} \vspace{-1em} \label{tab: Real}% \footnotesize \begin{center} \begin{tabular}{llllllll} \toprule Target & \multicolumn{1}{p{3.445em}}{DAN} & DANN & \multicolumn{1}{p{3.335em}}{ATDA} & \multicolumn{1}{p{3.28em}}{TCL} & \multicolumn{1}{p{4.22em}}{Co+TCL}& \multicolumn{1}{p{4.22em}}{Co+ATDA} & \multicolumn{1}{p{3.555em}}{B-Net} \\ \midrule \textit{Caltech256} & 77.83\% & 78.00\% & 80.84\% & 79.35\% &{\color{mydarkblue}79.27\%} & 79.89\% & \bf{81.71\%} \\ \textit{Imagenet} & 70.29\% & 72.16\% & 74.89\% & 72.53\% &{\color{mydarkblue}72.33\%} & 74.73\% & \bf{75.00\%} \\ \textit{SUN} & 24.56\% & 26.80\% & 26.26\% & 28.80\% &{\color{mydarkblue}29.15\%} & 26.31\% & \bf{30.54\%} \\ \midrule Average & 57.56\% & 58.99\% & 60.66\% & 60.23\% &{\color{mydarkblue}60.25\%} & 60.31\% & \bf{62.42\%} \\ \bottomrule \end{tabular}% \vspace{-1em} \end{center} \end{table*}% \vspace{-0.2em} \subsubsection{Results on human sentiment WUDA tasks} \label{Asec:Results_Amazon} Tables \ref{tab: Amazon20} and \ref{tab: Amazon45} report the target-domain accuracy of each method in $24$ human-sentiment WUDA tasks. For these tasks, B-Net has the highest average target-domain accuracy. It should be noted that the two-step method does not always perform better than existing UDA methods, e.g., in the $20\%$-noise setting. The reason is that co-teaching performs poorly at pinpointing clean source data among noisy source data.
Another observation is that the noise effects are not eliminated to the extent seen in the $8$ digit WUDA tasks. There are two main reasons: 1) these datasets only provide predefined features (i.e., we cannot extract better features from the original contents in the training process), and 2) we only have finite samples, and the number of samples in these datasets is smaller than in the digit datasets. \subsection{Results on real-world WUDA tasks} Table~\ref{tab: Real} reports the target-domain accuracy in $3$ tasks. B-Net enjoys the best performance on all tasks. It should be noted that, in the \textit{Bing}$\rightarrow$\textit{Caltech256} and \textit{Bing}$\rightarrow$\textit{ImageNet} tasks, ATDA is only slightly worse than B-Net. However, in the \textit{Bing}$\rightarrow$\textit{SUN} task, ATDA is much worse than B-Net. The reason is that the DIR between \textit{Bing} and \textit{SUN} is more affected by noisy source data. This is also observed when comparing DANN and TCL. Moreover, ATDA is slightly better than Co+ATDA. This abnormal phenomenon can be explained using $\Delta$ (see Section~\ref{sec:two-step}): after using co-teaching to assign pseudo labels to noisy source data, the second term in $\Delta_s$ may increase, which causes $\Delta$ to increase, i.e., the noise effects actually increase. This phenomenon is evidence that a two-step method may not really reduce noise effects. \begin{figure*}[tp] \begin{center} \subfigure[Values of $\rho_{01}^s$ and $\rho_{01}^t$] {\includegraphics[width=0.26\textwidth]{deep_S2M_P20_demo.pdf}} \vspace{-1em} \subfigure[Convergence speed of $\rho_{01}^s$]{ \includegraphics[width=0.26\textwidth]{deep_S2M_P20_demo_s.pdf}} \subfigure[Convergence speed of $\rho_{01}^t$] {\includegraphics[width=0.26\textwidth]{deep_S2M_P20_demo_t.pdf}} \caption{{The values of $\rho_{01}^s$ and $\rho_{01}^t$ on the \textit{S}$\rightarrow$\textit{M} task (P20), where $n_s=494,000$ is much larger than $n_t=10,000$.
Since we do not have pseudo-labeled target data in the first epoch, we illustrate $\rho_{01}^t$ from the second epoch.}} \label{fig: result-S2M-thm} \end{center} \vspace{-1em} \end{figure*} \begin{table*}[!t] \centering \caption{Results of ablation study. Average target-domain accuracy on $8$ simulated digit WUDA tasks (\emph{Digit}), $24$ simulated human-sentiment WUDA tasks (\emph{Sentiment}) and $3$ real-world WUDA tasks (\emph{Real-world}). Bold value represents the highest accuracy in each row.} \vspace{-1em} \label{tab: ablation}% \begin{tabular}{lllllllllll} \toprule Datasets & Tri-C-Net & B w/o C & DCP-D & DCP-M & B-Net-S & B-Net-T & B-Net-ST & B-Net-M & B-Net \\ \midrule \emph{Digit} & 59.80\% & 74.52\% & 59.19\% & 70.85\% & 71.93\% & 52.00\% & 72.27\% & 73.89\% & \bf{75.82\%} \\ \emph{Sentiment} & 61.25\% & 63.57\% & 61.37\% & 63.39\% & 61.49\% & 61.12\% & 61.73\% & 62.21\% & \bf{63.77\%} \\ \emph{Real-world} & 61.50\% & 62.27\% & 59.82\% & 62.34\% & 61.91\% & 60.87\% & 62.24\% & 62.17\% & \bf{62.42\%} \\ \bottomrule \end{tabular}% \vspace{-1em} \end{table*}% \vspace{-0.5em} \subsection{Can we check correct data out?} \label{sec:verify_rho} {This subsection verifies that $\rho_{01}^s$ and $\rho_{01}^t$ will go to zero with the convergence speed of $O(1/\sqrt{n_sT})$ and $O(1/\sqrt{n_tT})$, respectively. Figure~{\ref{fig: result-S2M-thm}} shows the values of $\rho_{01}^s$ and $\rho_{01}^t$. It can be seen that $\rho_{01}^s$ and $\rho_{01}^t$ go to zero as the number of training epochs increases. $\rho_{01}^s$ is always lower than $\rho_{01}^t$ because $n_s$ is much larger than $n_t$, indicating that we can check more correct data out when more samples are available. Figure~{\ref{fig: result-S2M-thm}}-(b) shows that we can always find a finite $C_\rho^s$ such that $\rho_{01}^s$ goes to zero with the convergence speed of $O(1/\sqrt{n_sT})$.
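This envelope check can be sketched numerically; the following is an illustrative computation rather than the code used in the experiments, with `rho` standing in for the recorded $\rho_{01}^s$ (or $\rho_{01}^t$) values per epoch and `n` for $n_s$ (or $n_t$):

```python
import numpy as np

def envelope_constant(rho, n):
    """Smallest C with rho[T-1] <= C / sqrt(n * T) for epochs T = 1..len(rho).

    If rho really decays at rate O(1/sqrt(nT)), this C stays finite and
    stable as more epochs are added; a steadily growing C would falsify
    the claimed rate.
    """
    T = np.arange(1, len(rho) + 1)
    return float(np.max(np.asarray(rho, dtype=float) * np.sqrt(n * T)))
```

A sequence that decays exactly as $1/\sqrt{nT}$ yields a constant envelope value regardless of how many epochs are included.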
The same holds for $C_\rho^t$ and $\rho_{01}^t$ in Figure~{\ref{fig: result-S2M-thm}}-(c).} \subsection{Ablation study} \label{sec:abl_study} Finally, we conduct thorough experiments to show the contribution of individual components in B-Net. We report average target-domain accuracy on $32$ simulated WUDA tasks ($8$ digit and $24$ human-sentiment WUDA tasks) and $3$ real-world WUDA tasks. We consider the following baselines: \begin{itemize} \item Tri-C-Net: \underline{tri}ply \underline{c}heck data in SD, MD and TD. Compared to B-Net, Tri-C-Net has another branch (denoted by Branch-III) to check data in SD. Namely, Tri-C-Net has three branches (i.e., six networks). The CNN parameters of Branch-III are the same as those of Branch-I and Branch-II. \item B w/o C: train \underline{B}-Net by Algorithm~\ref{alg: ButterNET}, \underline{without} adding $|\theta_{f11}^T \theta_{f21}|$ into the loss function of B-Net. \item DCP-D: realize \underline{DCP} via \underline{D}ecoupling \cite{DeCoupling} to check data in MD and TD. \item DCP-M: realize \underline{DCP} via \underline{M}entorNet \cite{jiang2017mentornet} to check data in MD and TD. \item B-Net-S: train \underline{B-Net} where the check is turned on for \underline{S}ource data in MD. \item B-Net-T: train \underline{B-Net} where the check is turned on for \underline{T}arget data in TD. \item B-Net-ST: train \underline{B-Net} where the checks are turned on for \underline{S}ource data in MD and \underline{T}arget data in TD. \item B-Net-M: train \underline{B-Net} where the check is turned on for all data in \underline{M}D. \end{itemize} Note that in the full B-Net, the checks are turned on for all data in MD and TD. Comparing B-Net with Tri-C-Net shows whether two branches (i.e., four networks) are the optimal design. Comparing B-Net with B w/o C reveals whether the constraint $|\theta_{f11}^T \theta_{f21}|$ takes effect. Comparing B-Net with DCP-D and DCP-M shows whether realizing DCP via co-teaching is the optimal way.
Comparing B-Net with B-Net-S, B-Net-T, B-Net-ST and B-Net-M reveals whether DCP is necessary. Table~\ref{tab: ablation} reports the average target-domain accuracy of the above baselines and B-Net. As can be seen, 1) maintaining $4$ networks (like B-Net) is better than maintaining $6$ networks (like Tri-C-Net) since B-Net outperforms Tri-C-Net in terms of average target-domain accuracy; 2) B-Net benefits from adding the constraint to the loss function $\mathcal{L}$; 3) realizing DCP by co-teaching is better than using Decoupling or MentorNet; and 4) DCP is necessary since the accuracy of B-Net is higher than those of B-Net-S, B-Net-T, B-Net-ST and B-Net-M. \vspace{-0.5em} \section{Conclusions} This paper opens a new problem called \emph{wildly unsupervised domain adaptation} (WUDA). However, existing UDA methods cannot handle WUDA well. To address this problem, we propose a robust one-step approach called \emph{Butterfly}. Butterfly maintains four deep networks simultaneously: two take care of the adaptation, while the other two focus on classification in the target domain. We compare Butterfly with existing UDA methods on $32$ simulated and $3$ real-world WUDA tasks. Empirical results demonstrate that Butterfly can robustly transfer knowledge from noisy source data to unlabeled target data. In the future, we will extend our Butterfly framework to address open-set WUDA, where the label space of the target domain is larger than that of the source domain. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi FL, JL and GZ were supported by the Australian Research Council (ARC) under FL190100149. BH was supported by the RGC Early Career Scheme No. 22200720 and NSFC Young Scientists Fund No. 62006202, HKBU Tier-1 Start-up Grant, HKBU CSD Start-up Grant, HKBU CSD Departmental Incentive Grant, and a RIKEN BAIHO Award. GN and MS were supported by JST AIP Acceleration Research Grant Number JPMJCR20U3, Japan.
MS was also supported by the Institute for AI and Beyond, UTokyo. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \small
\section*{Contents} \renewcommand{\labelenumii}{\Roman{enumii}} \begin{enumerate} \item Abstract \item Introduction \item CNN Architecture \item Implementation details \item Results \item References \end{enumerate} \section{Abstract} We propose novel methods of solving two tasks using Convolutional Neural Networks: firstly, generating the HDR map of a static scene using differently exposed LDR images of the scene captured with conventional cameras, and secondly, finding an optimal tone mapping operator that gives a better score on the TMQI metric compared to existing methods. We quantitatively show the performance of our networks and illustrate the cases where our networks perform well as well as poorly. \section{Introduction} Natural scenes have a large range of intensity values (thus a large dynamic range), and conventional non-HDR cameras cannot capture this range in a single image. By controlling various factors, one of them being the exposure time of the shot, we can capture a particular window in the total dynamic range of the scene. So we need multiple "low dynamic range" (LDR) images of the scene to get the complete information about the scene. Fig.~1 illustrates an example. The internal processing pipeline of the camera is highly non-linear, i.e. the pixel intensity value at location $(i,j)$, $Z_{ij}$, is equal to $f(E_{ij}\Delta t)$, where $\Delta t$ is the exposure time of the shot and $E_{ij}$ is the irradiance value at location $(i,j)$. $f(x)$ is known as the camera response function of that particular camera, which is a non-linear function. Given the values of $Z_{ij}$ for differently exposed images of the same scene, we can get a robust, noise-free estimate of the $E_{ij}$'s (methods like Debevec et al. (1997) use a weighted average of the $f^{-1}(Z_{ij})/\Delta t$ values to get a robust estimate of the corresponding $E_{ij}$).
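This weighted-average recovery can be sketched as follows; this is a minimal illustration assuming a known, invertible response, and the hat-shaped weighting is one common choice rather than the exact Debevec-Malik scheme:

```python
import numpy as np

def recover_irradiance(ldr_stack, exposures, f_inv, z_min=0, z_max=255):
    """Weighted-average irradiance estimate from differently exposed pixels.

    ldr_stack: (K, H, W) pixel values Z_ij for K exposures.
    exposures: length-K exposure times Delta_t.
    f_inv:     inverse camera response, mapping pixel values to E * Delta_t.
    """
    ldr = np.asarray(ldr_stack, dtype=float)
    # "Hat" weights: trust mid-range pixels, distrust under/over-exposed ones.
    w = np.clip(np.minimum(ldr - z_min, z_max - ldr), 1e-6, None)
    # Per-exposure estimates f^{-1}(Z)/Delta_t, then a weighted average.
    est = f_inv(ldr) / np.asarray(exposures, dtype=float)[:, None, None]
    return (w * est).sum(axis=0) / w.sum(axis=0)
```

With a linear response, every exposure yields the same per-pixel estimate and the weighted average simply reproduces it; the weighting matters once pixels saturate.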
We use a deep neural network that takes as input the LDR pixel intensities of 5 LDR images of a static scene and estimates the irradiance values, i.e. the HDR map of the scene. Fig.~1 shows the camera acquisition pipeline in modern cameras and the non-linear transforms involved. \\ \\ We further conduct experiments in which another, similar convolutional neural network approximates a tone mapping operator. Our training set includes HDR images of scenes and their corresponding tone mapped images, each generated by whichever tone mapping operator provided in MATLAB's HDR-Toolbox (Banterle et al. 2011) gives the highest value of the TMQI metric (Yeganeh et al. 2013). We try further experiments to improve the results, details of which are provided in the main report. \\ \\ \includegraphics[width=\textwidth]{debevec_malik_pipeline.png} \begin{center} \footnotesize This is the end-to-end pipeline involved in digital image acquisition in modern digital cameras, which indicates that the mapping from the scene radiance to the pixel values at a spatial location in the image is non-linear. \end{center} \section{CNN Architecture} \label{sec:cnn} \subsection{LDR2HDR network architecture} We have collected a dataset of 957 HDR images from the websites of various groups around the world working on HDR imaging, together with the camera response functions of the cameras with which those photos were taken. We use these to generate LDR images from the HDRs, with exposure time values taken from a list of exposure times (a geometric progression with first term $1$, common ratio $4$ and last term $4^9$). Our network takes as input a stack of 5 RGB LDR images of a particular scene, each having a different exposure.
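The LDR-generation step just described can be sketched as follows; the gamma-like response `crf` and the normalization constant are illustrative stand-ins for the real camera response functions collected with the dataset:

```python
import numpy as np

def make_ldr(hdr, exposure, crf=lambda x: x ** (1 / 2.2)):
    """Simulate an 8-bit LDR image from an HDR irradiance map.

    Applies the camera response `crf` to E * Delta_t (clipped to [0, 1])
    and quantizes, so long exposures saturate bright regions and short
    exposures crush dark ones.
    """
    x = np.clip(np.asarray(hdr, dtype=float) * exposure, 0.0, 1.0)
    return np.round(255.0 * crf(x)).astype(np.uint8)

def make_stack(hdr, exposures=(1, 8, 64, 512, 4096), scale=1.0 / 4096):
    """A 5-image stack using the fixed exposure ladder from the experiments."""
    return np.stack([make_ldr(hdr, t * scale) for t in exposures])
```

Each successive exposure brightens the rendered pixel, which is exactly the window-sliding effect the network exploits.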
In the initial experiments we fixed the exposures of these LDR images to be [1,8,64,512,4096], and then we moved on to an adaptive exposure-based method in which we first choose the LDR image of a scene which gives the maximum value of entropy (the entropy of a single-channel image is defined as $-\sum p\log{p}$, where $p$ is the histogram count of a particular intensity value in the image and the sum runs over all the possible intensities in the image) and then take the images at the two preceding and two following exposure values. We have 3 networks, one for each of the R, G and B channels of the inputs. We conducted many experiments with both the former and the latter approach, but we were able to obtain plausible results only in the first case, where the exposure times were fixed. The graphs of training error vs. epochs for 3 different models, which turned out to be the best after testing many different sets of hyperparameters, are shown below for the case where the exposure times are kept constant. The final test error for the best model is reported in the Results section. We conducted experiments on models with dropout in each layer ($p=0.4$) in order to improve generalization. We also added spatial batch normalization just before passing the activations of each layer through the ReLU non-linearity. The batch size was kept at 40 (decided by the memory limitations of the GPU). BatchNorm strictly improved the results, as the same training error was attained in fewer epochs. The architecture of the network is illustrated in Fig.~\ref{fig:tone_cnn}. \subsection{HDR2ToneMap network architecture} We first create a dataset using the existing 957 HDR images. We then use the tone mapping operators provided in the HDR-toolbox by Francesco Banterle et al. to create different tone maps of each HDR image, run the TMQI metric on each tone map, and choose the one which gives the highest TMQI score.
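The per-image label selection described above can be sketched as follows; `operators` and `score` are placeholders for the HDR-Toolbox tone mapping operators and the TMQI metric, not their actual implementations:

```python
def best_tone_map(hdr, operators, score):
    """Apply every tone mapping operator to an HDR image and keep the output
    with the highest quality score (TMQI in the report; any 'higher is
    better' metric works here).

    operators: dict name -> callable(hdr) -> tone-mapped image
    score:     callable(tone_map, hdr) -> quality value
    """
    results = {name: op(hdr) for name, op in operators.items()}
    best = max(results, key=lambda name: score(results[name], hdr))
    return best, results[best]
```

The winning output becomes the training target for that HDR image, so the network is trained against a per-image best operator rather than a single fixed one.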
Some of these are local and some are global tone mapping operators, so our approach is not fully justified. We then train a convolutional neural network whose input is the HDR image and whose corresponding ground truth is the best tone map according to the TMQI metric. We use a 3x3 conv in the first layer followed by 1x1 convs in the subsequent layers. We get this intuition from the work of Mertens et al., whose method gave the best TMQI score most of the time. In their work, the final intensity value at location $(x,y)$ depends only on the 3x3 neighborhood of radiance values at location $(x,y)$ in the corresponding HDR image. Further study of the other tone mapping works is required in order to improve the architecture beyond the pilot testing that we have done over the course of the summer. Preliminary results showed that the network was not able to deal with high-frequency inputs simultaneously with low-frequency ones. To tackle this problem, we first convert the input and output pairs to Lab space, apply a bilateral filter to the $L$ channel, create a new channel $L_{original} - L_{filtered}$, and train 4 networks: one each for the filtered $L$ channel, the difference channel, and the a and b channels. We obtain better results with this method. In order to obtain a good estimate of the hyperparameters of the network, we test out several values of them by training their corresponding architectures for 2 epochs and observing the validation error. Due to computational constraints, this could not be afforded for more epochs and only 4 sets of hyperparameters could be tested. \begin{figure}[h!]
\centering \begin{tikzpicture} \node at (0.2,-1){\begin{tabular}{c}$\uparrow$\\input LDR stack slice \\size 5*M*N \\with padding\end{tabular}}; \draw (0.0,0.3) -- (1.0,0.3) -- (1.0,1.3) -- (0.0,1.3) -- (0.0,0.3); \draw[fill=black] (0.4,0.7) circle (0.2ex); \draw[fill=black] (0.6,0.9) circle (0.2ex); \draw[fill=black] (0.8,1.1) circle (0.2ex); \draw (0.5,0.8) -- (1.5,0.8) -- (1.5,1.8) -- (0.5,1.8) -- (0.5,0.8); \node at (3,3.5){\begin{tabular}{c}3x3 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=60\\$\downarrow$\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55,1.05) -- (3.55,1.05) -- (3.55,2.05) -- (2.55,2.05) -- (2.55,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3,0.8) -- (3.3,0.8) -- (3.3,1.8) -- (2.3,1.8) -- (2.3,0.8); \draw[fill=black] (2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35,0.85) circle (0.2ex); \draw[fill=black] (2.45,0.95) circle (0.2ex); \draw[fill=black] (2.55,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75,0.25) -- (2.75,0.25) -- (2.75,1.25) -- (1.75,1.25) -- (1.75,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5,0) -- (2.5,0) -- (2.5,1) -- (1.5,1) -- (1.5,0); \node at (4.5,-1.5){\begin{tabular}{c}$\uparrow$\\1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=40\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55+2.25,1.05) -- (3.55+2.25,1.05) -- (3.55+2.25,2.05) -- (2.55+2.25,2.05) -- (2.55+2.25,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+2.25,0.8) -- (3.3+2.25,0.8) -- (3.3+2.25,1.8) -- (2.3+2.25,1.8) -- (2.3+2.25,0.8); \draw[fill=black] (2.25+2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35+2.25,0.85) circle (0.2ex); \draw[fill=black] (2.45+2.25,0.95) circle (0.2ex); \draw[fill=black] (2.55+2.25,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+2.25,0.25) -- (2.75+2.25,0.25) -- (2.75+2.25,1.25) -- (1.75+2.25,1.25) -- (1.75+2.25,0.25); \draw[fill=black,opacity=0.2,draw=black] (3.75,0) -- (4.75,0) -- (4.75,1) -- 
(3.75,1) -- (3.75,0); \node at (7,3.5){\begin{tabular}{c}1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=20\\$\downarrow$\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55+4.5,1.05) -- (3.55+4.5,1.05) -- (3.55+4.5,2.05) -- (2.55+4.5,2.05) -- (2.55+4.5,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+4.5,0.8) -- (3.3+4.5,0.8) -- (3.3+4.5,1.8) -- (2.3+4.5,1.8) -- (2.3+4.5,0.8); \draw[fill=black] (2.25+4.5,0.75) circle (0.2ex); \draw[fill=black] (2.35+4.5,0.85) circle (0.2ex); \draw[fill=black] (2.45+4.5,0.95) circle (0.2ex); \draw[fill=black] (2.55+4.5,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+4.5,0.25) -- (2.75+4.5,0.25) -- (2.75+4.5,1.25) -- (1.75+4.5,1.25) -- (1.75+4.5,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5+4.5,0) -- (2.5+4.5,0) -- (2.5+4.5,1) -- (1.5+4.5,1) -- (1.5+4.5,0); \node at (9.3,-1.5){\begin{tabular}{c}$\uparrow$\\1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=20\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55+4.5+2.25,1.05) -- (3.55+4.5+2.25,1.05) -- (3.55+4.5+2.25,2.05) -- (2.55+4.5+2.25,2.05) -- (2.55+4.5+2.25,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+4.5+2.25,0.8) -- (3.3+4.5+2.25,0.8) -- (3.3+4.5+2.25,1.8) -- (2.3+4.5+2.25,1.8) -- (2.3+4.5+2.25,0.8); \draw[fill=black] (2.25+4.5+2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35+4.5+2.25,0.85) circle (0.2ex); \draw[fill=black] (2.45+4.5+2.25,0.95) circle (0.2ex); \draw[fill=black] (2.55+4.5+2.25,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+4.5+2.25,0.25) -- (2.75+4.5+2.25,0.25) -- (2.75+4.5+2.25,1.25) -- (1.75+4.5+2.25,1.25) -- (1.75+4.5+2.25,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5+4.5+2.25,0) -- (2.5+4.5+2.25,0) -- (2.5+4.5+2.25,1) -- (1.5+4.5+2.25,1) -- (1.5+4.5+2.25,0); \node at (11.8,3.5){\begin{tabular}{c}1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=20\\$\downarrow$\end{tabular}}; 
\draw[fill=black,opacity=0.2,draw=black] (2.55+4.5+2.25+2.25,1.05) -- (3.55+4.5+2.25+2.25,1.05) -- (3.55+4.5+2.25+2.25,2.05) -- (2.55+4.5+2.25+2.25,2.05) -- (2.55+4.5+2.25+2.25,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+4.5+2.25+2.25,0.8) -- (3.3+4.5+2.25+2.25,0.8) -- (3.3+4.5+2.25+2.25,1.8) -- (2.3+4.5+2.25+2.25,1.8) -- (2.3+4.5+2.25+2.25,0.8); \draw[fill=black] (2.25+4.5+2.25+2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35+4.5+2.25+2.25,0.85) circle (0.2ex); \draw[fill=black] (2.45+4.5+2.25+2.25,0.95) circle (0.2ex); \draw[fill=black] (2.55+4.5+2.25+2.25,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+4.5+2.25+2.25,0.25) -- (2.75+4.5+2.25+2.25,0.25) -- (2.75+4.5+2.25+2.25,1.25) -- (1.75+4.5+2.25+2.25,1.25) -- (1.75+4.5+2.25+2.25,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5+4.5+2.25+2.25,0) -- (2.5+4.5+2.25+2.25,0) -- (2.5+4.5+2.25+2.25,1) -- (1.5+4.5+2.25+2.25,1) -- (1.5+4.5+2.25+2.25,0); \node at (14.3,2.25){\begin{tabular}{c}output layer\\$\downarrow$\end{tabular}}; \draw (1.75+4.5+2.25+2.25+2.25+1.125,0.25) -- (2.75+4.5+2.25+2.25+2.25+1.125,0.25) -- (2.75+4.5+2.25+2.25+2.25+1.125,1.25) -- (1.75+4.5+2.25+2.25+2.25+1.125,1.25) -- (1.75+4.5+2.25+2.25+2.25+1.125,0.25); \end{tikzpicture} \caption[Architecture of a traditional convolutional neural network.]{This is the network architecture which is used to generate the HDR image from the LDR stack. There are 3 networks for each of the R,G and B channels.} \label{fig:tone_cnn} \end{figure} \begin{figure}[h!] 
\centering \begin{tikzpicture} \node at (0.2,-1){\begin{tabular}{c}$\uparrow$\\input HDR slice \\size 1*M*N \\with padding\end{tabular}}; \draw (0.0,0.3) -- (1.0,0.3) -- (1.0,1.3) -- (0.0,1.3) -- (0.0,0.3); \node at (3,3.5){\begin{tabular}{c}3x3 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=100\\$\downarrow$\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55,1.05) -- (3.55,1.05) -- (3.55,2.05) -- (2.55,2.05) -- (2.55,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3,0.8) -- (3.3,0.8) -- (3.3,1.8) -- (2.3,1.8) -- (2.3,0.8); \draw[fill=black] (2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35,0.85) circle (0.2ex); \draw[fill=black] (2.45,0.95) circle (0.2ex); \draw[fill=black] (2.55,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75,0.25) -- (2.75,0.25) -- (2.75,1.25) -- (1.75,1.25) -- (1.75,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5,0) -- (2.5,0) -- (2.5,1) -- (1.5,1) -- (1.5,0); \node at (4.5,-1.5){\begin{tabular}{c}$\uparrow$\\1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=80\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55+2.25,1.05) -- (3.55+2.25,1.05) -- (3.55+2.25,2.05) -- (2.55+2.25,2.05) -- (2.55+2.25,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+2.25,0.8) -- (3.3+2.25,0.8) -- (3.3+2.25,1.8) -- (2.3+2.25,1.8) -- (2.3+2.25,0.8); \draw[fill=black] (2.25+2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35+2.25,0.85) circle (0.2ex); \draw[fill=black] (2.45+2.25,0.95) circle (0.2ex); \draw[fill=black] (2.55+2.25,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+2.25,0.25) -- (2.75+2.25,0.25) -- (2.75+2.25,1.25) -- (1.75+2.25,1.25) -- (1.75+2.25,0.25); \draw[fill=black,opacity=0.2,draw=black] (3.75,0) -- (4.75,0) -- (4.75,1) -- (3.75,1) -- (3.75,0); \node at (7,3.5){\begin{tabular}{c}1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=50\\$\downarrow$\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] 
(2.55+4.5,1.05) -- (3.55+4.5,1.05) -- (3.55+4.5,2.05) -- (2.55+4.5,2.05) -- (2.55+4.5,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+4.5,0.8) -- (3.3+4.5,0.8) -- (3.3+4.5,1.8) -- (2.3+4.5,1.8) -- (2.3+4.5,0.8); \draw[fill=black] (2.25+4.5,0.75) circle (0.2ex); \draw[fill=black] (2.35+4.5,0.85) circle (0.2ex); \draw[fill=black] (2.45+4.5,0.95) circle (0.2ex); \draw[fill=black] (2.55+4.5,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+4.5,0.25) -- (2.75+4.5,0.25) -- (2.75+4.5,1.25) -- (1.75+4.5,1.25) -- (1.75+4.5,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5+4.5,0) -- (2.5+4.5,0) -- (2.5+4.5,1) -- (1.5+4.5,1) -- (1.5+4.5,0); \node at (9.3,-1.5){\begin{tabular}{c}$\uparrow$\\1x1 conv $\rightarrow$ Spatial\\Batch Normalization\\$\rightarrow$ReLU\\d=10\end{tabular}}; \draw[fill=black,opacity=0.2,draw=black] (2.55+4.5+2.25,1.05) -- (3.55+4.5+2.25,1.05) -- (3.55+4.5+2.25,2.05) -- (2.55+4.5+2.25,2.05) -- (2.55+4.5+2.25,1.05); \draw[fill=black,opacity=0.2,draw=black] (2.3+4.5+2.25,0.8) -- (3.3+4.5+2.25,0.8) -- (3.3+4.5+2.25,1.8) -- (2.3+4.5+2.25,1.8) -- (2.3+4.5+2.25,0.8); \draw[fill=black] (2.25+4.5+2.25,0.75) circle (0.2ex); \draw[fill=black] (2.35+4.5+2.25,0.85) circle (0.2ex); \draw[fill=black] (2.45+4.5+2.25,0.95) circle (0.2ex); \draw[fill=black] (2.55+4.5+2.25,1.05) circle (0.2ex); \draw[fill=black,opacity=0.2,draw=black] (1.75+4.5+2.25,0.25) -- (2.75+4.5+2.25,0.25) -- (2.75+4.5+2.25,1.25) -- (1.75+4.5+2.25,1.25) -- (1.75+4.5+2.25,0.25); \draw[fill=black,opacity=0.2,draw=black] (1.5+4.5+2.25,0) -- (2.5+4.5+2.25,0) -- (2.5+4.5+2.25,1) -- (1.5+4.5+2.25,1) -- (1.5+4.5+2.25,0); \node at (11.3,2.25){\begin{tabular}{c}output layer\\$\downarrow$\end{tabular}}; \draw (1.75+4.5+2.25+2.25,0.25) -- (2.75+4.5+2.25+2.25,0.25) -- (2.75+4.5+2.25+2.25,1.25) -- (1.75+4.5+2.25+2.25,1.25) -- (1.75+4.5+2.25+2.25,0.25); \end{tikzpicture} \caption[Architecture of a traditional convolutional neural network.]{This is the network architecture which is used 
to approximate the tone mapping operator. There are four networks: one for the bilaterally filtered L channel, one for the original L minus the filtered L channel, and one each for the a and b channels of the HDR image} \label{fig:tonemap_cnn} \end{figure} \section{Implementation Details} For all the data processing tasks we use MATLAB, and for implementing and testing our neural networks we use the Torch framework in the Lua scripting language. Most of the models are trained on a single NVIDIA GeForce GT 730 graphics processor, although for a brief period during which we had access to an HPC node with 2 GPUs, a Tesla K20C and a Titan X, we did multi-GPU training of our models using the following algorithm: \begin{itemize} \item Have identical copies of the network on the 2 GPUs at the beginning of every iteration in an epoch. \item Independently process two different batches on the two GPUs, then copy the gradients accumulated during the backward pass on one of the GPUs to the other, adding them to the gradients accumulated on the other GPU during its backward pass. \item Update the parameters of the model on the GPU to which the gradients were copied. \item Process the next set of batches. \end{itemize} One drawback of this approach was that the inter-GPU communication overhead outweighed the almost 2X gain in the actual training time of the networks (the time required for the forward-backward pass). During our other experiments, in order to save time in loading the data, we implemented a multi-threaded approach to load our mini-batches.
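The gradient-summing scheme above can be sketched as follows; this is a plain-Python simulation rather than Torch/Lua code, and `grad_fn` together with the scalar model are illustrative stand-ins:

```python
import numpy as np

def data_parallel_step(theta, batches, grad_fn, lr):
    """One synchronous step of the two-GPU scheme (simulated).

    Each 'device' runs an independent backward pass on its own batch, the
    accumulated gradients are summed on one device, and a single update is
    applied, so every replica starts the next iteration identical.
    """
    grads = [grad_fn(theta, batch) for batch in batches]  # per-device backward
    total = np.sum(grads, axis=0)                         # copy + add gradients
    return theta - lr * total                             # shared update
```

Because gradients are additive over examples, the split-batch step produces exactly the same update as a single large-batch step, which is what makes the scheme correct.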
\\ Another important point is that we were not able to process even a single input example of dimensions 15 X M X N: our images were of quite high resolution, and since every individual module in Torch caches its local output during a forward pass, the GPU's memory turned out to be insufficient. We therefore broke each image into patches of 64 X 64, after which we were able to keep the minibatch size at 40 without overloading the GPU's memory. Due to computing power limitations we were not able to test our models very thoroughly. The code will shortly be made available at my github repository. \section{Results} \subsection{LDR2HDR Results} In the results we present the graph of the training error vs. number of epochs for the best three models (Fig.~3). The test error for the best model is 0.09345. Visual results are shown below. It is clear that further hyperparameter validation experiments are required to find the optimal architecture for the task. The network is clearly able to generate plausible results for some of the colors but not all (Figs.~4 and 5). It is also clear that the network is able to generate outputs without under- or over-saturation of regions that have high and low radiance values in the same image, which shows that the dynamic range of the output is quite high. \begin{figure}[bp!] \includegraphics[width = \textwidth]{err_plot.jpg} \caption{The training error plot for the three best architectures; blue and green share the same architecture, whereas the final one (red) has a larger number of activations in the first layer. The architecture of the final network is illustrated in Fig.~1.} \end{figure} \begin{figure}[bp!]
\begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.62]{752_result.PNG} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.62]{938_result.PNG} \end{subfigure} \caption{Some examples from the test set where the LDR2HDR network gives plausible results.} \end{figure} \begin{figure}[bp!] \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.62]{783_result.PNG} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.62]{795_result.PNG} \end{subfigure} \caption{Some examples from the test set where the LDR2HDR network does not give plausible results.} \end{figure} \begin{figure}[bp!] \includegraphics[width = \textwidth]{err_plot2.jpg} \caption{The training error plot for the final tone mapping network whose architecture is illustrated in Fig.~2.} \end{figure} \begin{figure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.5]{752_comparison.PNG} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.5]{947_comparison.PNG} \end{subfigure} \caption{Some examples from the test set where the tone mapping network gives plausible results.} \end{figure} \begin{figure}[bp!] \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.5]{757_comparison.PNG} \end{subfigure} \begin{subfigure}{0.5\textwidth} \includegraphics[scale = 0.5]{758_comparison.PNG} \end{subfigure} \caption{Some examples from the test set where the tone mapping network does not give plausible results.} \end{figure} \subsection{HDR2Tonemap Results} In the results we present the outputs of the final validated architecture for the cases where the network performs well as well as poorly (Figs.~7 and 8). The final test error for that model after 28 epochs of training is 0.002764 (Fig.~6). We also present the plot of training error vs. number of epochs for that model. \section{References} 1.) Debevec, Paul E., and Jitendra Malik. "Recovering high dynamic range radiance maps from photographs." ACM SIGGRAPH 2008 classes. ACM, 2008. \\ 2.)
Mitsunaga, Tomoo, and Shree K. Nayar. "Radiometric self calibration." Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on. Vol. 1. IEEE, 1999. \\ 3.) Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012. \\ 4.) Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE transactions on image processing 13.4 (2004): 600-612. \end{document}
\section{Introduction} Currently, several gravitational wave detectors such as LIGO~\cite{LIGO:2007kva,LIGO_web}, Virgo~\cite{VIRGO_FAcernese_etal2008,VIRGO_web} or GEO~\cite{GEO_web} are already operating, while several others are in the planning or construction phase~\cite{Schutz99}. Among the most promising sources for these detectors are the inspirals and mergers of binary black holes. In order to make predictions about the final phase of such inspirals and mergers, fully non-linear numerical simulations of the Einstein equations are required. To numerically evolve the Einstein equations, at least two ingredients are necessary. First, we need a specific formulation of the evolution equations. Second, initial data are needed to start such simulations. For the first ingredient most groups nowadays use the BSSNOK formulation~\cite{Nakamura87,Shibata95,Baumgarte:1998te} of the evolution equations. This formulation is usually evolved using finite differencing methods, but for single black holes there have been some attempts to use spectral methods~\cite{Tichy:2006qn,Tichy:2009yr,Tichy:2009zr}. For binary black holes the BSSNOK system is usually used together with the moving puncture approach~\cite{Campanelli:2005dd, Baker:2005vv}. This approach so far works only with finite differencing methods since certain evolved variables are not smooth inside the black holes (at the punctures). Almost all simulations using the BSSNOK formulation to date use standard puncture data~\cite{Brandt97b,Ansorg:2004ds} as initial data. These initial data are very flexible in that they contain free parameters for the position, momentum and spin of each black hole, and thus one can set up practically any kind of orbit. Note, however, that the emission of gravitational waves tends to circularize the orbits~\cite{Peters:1963ux,Peters:1964}.
Thus for realistic binary black hole systems that have been inspiraling already for a long time, we expect the two black holes to be in quasi-circular orbits around each other with a radius which shrinks on a timescale much larger than the orbital timescale. This means that the initial data should be such that the orbit has no or at least very small eccentricity. For our purposes here we follow the NRAR (Numerical Relativity - Analytical Relativity) collaboration~\cite{NRAR_web} guidelines which consider eccentricities of order a few times $10^{-3}$ acceptably small. There have been several previous works that have considered eccentricities for puncture initial data~\cite{Baker:2006ha,Husa:2007rh,Walther:2009ng}. However, the most successful approach in terms of achieving low eccentricities was implemented for excision type initial data~\cite{Pfeiffer:2007yz,Boyle:2007ft,Mroue:2010re}. The method discussed in this work aims at lowering the eccentricity for the kind of puncture initial data that is routinely used with the moving puncture approach. Throughout we will use units where $G=c=1$. The black hole masses are denoted by $m_1$ and $m_2$. We also introduce the total mass $M=m_1+m_2$, the reduced mass $\mu = m_1 m_2/M$ and $\nu=\mu/M$. The paper is organized as follows. Sec.~\ref{eccentricity} introduces and compares several eccentricity measures. In Sec.~\ref{numerics} we describe grid setups that can be used in numerical simulations aimed at measuring the eccentricity. Sec.~\ref{parchoices} discusses a simple method to pick initial momentum parameters. This is followed by Sec.~\ref{ecc_reduc} which describes how to iterate these parameters to arrive at a reduced eccentricity. We conclude with a discussion of our results in Sec.~\ref{discussion}. \section{Defining eccentricity for inspiral orbits} \label{eccentricity} Real binary black hole orbits can never be circular. They always follow spirals. 
So when we are aiming for low-eccentricity initial data, we really want data that result in trajectories that spiral in smoothly without oscillations in the black hole separation. Of course, this issue is further complicated by the fact that trajectories are coordinate dependent. There are several earlier eccentricity definitions for inspiral orbits in the literature~\cite{Baker:2006ha,Husa:2007rh, Pfeiffer:2007yz,Boyle:2007ft,Mroue:2010re,Walther:2009ng}. All of them define eccentricity as a deviation from an underlying smooth, secular trend in some specific quantity that is associated with the orbits. In~\cite{Baker:2006ha} the frequency of the dominant $l=m=2$ mode of the gravitational waves emitted is fitted to a fourth-order monotonic polynomial, and the deviation of the frequency from this fit is used to compute the eccentricity. This approach works for non-spinning binaries. Essentially the same method is also used in~\cite{Husa:2007rh}, but instead of the gravitational wave frequency, \cite{Husa:2007rh} uses the orbital frequency and also the coordinate separation to obtain two eccentricity measures. These same measures are also used in~\cite{Walther:2009ng}. The approaches in~\cite{Pfeiffer:2007yz,Boyle:2007ft} fit a linear function plus a sine function to the coordinate separation and also the proper separation. The eccentricity can then be obtained from the amplitude of the fitted sine function. In~\cite{Mroue:2010re} the same fitting approach as in~\cite{Husa:2007rh} is used, but the fitted quantities are coordinate separation, proper separation and also orbital frequency. All the approaches based on orbital parameters should in principle also work for systems with spin. Also note that all these eccentricity definitions are chosen such that they result in the correct value for Newtonian orbits.
Below we will introduce two new eccentricity definitions and compare them to the earlier definition based on fitting the orbital frequency~\cite{Husa:2007rh,Mroue:2010re}. The first eccentricity definition is based on the coordinate separation of the two black holes. It is given by \begin{equation} \label{e_r_def} e_{r}(t) = \frac{ \Delta r_{max}(t) - \Delta r_{min}(t) }{2r_{av}} \end{equation} where the average separation, and the maximum and minimum deviation from a smoothed value $r_{s}$ are given by \begin{eqnarray} r_{av} &=& \int_{t-T/2}^{t+T/2} r(t') dt' / T \\ \Delta r_{max}(t) &=& \max_{t'\in[t-T/2,t+T/2]} [r(t')-r_{s}(t',t)] \\ \Delta r_{min}(t) &=& \min_{t'\in[t-T/2,t+T/2]} [r(t')-r_{s}(t',t)] . \end{eqnarray} Here the period $T$ is defined using Kepler's law \begin{equation} T = 2\pi (r^3/M)^{1/2} . \end{equation} Notice that the actual orbital period may be slightly different, but this estimate suffices to get an approximate eccentricity measure. The smoothed value $r_{s}(t',t)$ is obtained from \begin{equation} \label{r_lin_fit} r_{s}(t',t) = r(t) + \frac{r(t+T/2)-r(t-T/2)}{T} (t'-t) , \end{equation} but different smoothings are possible (e.g.\ by performing a least-squares fit of a linear or quadratic function to $r(t)$ in the interval $[t-T/2,t+T/2]$). Essentially the definition in Eq.~(\ref{e_r_def}) measures how much the coordinate separation oscillates over the time $T$. For Newtonian orbits it coincides with the usual eccentricity definition for elliptic orbits. For orbits whose radius shrinks linearly in time (without any oscillations) $e_{r}(t)$ is zero. Another similar eccentricity measure can be obtained using the gravitational wave signal of the inspiraling binary. The idea is to determine the separation in a more gauge-invariant way from the amplitude of $\Psi_4$ instead of using the gauge-dependent coordinate separation.
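To make the definition concrete, here is a minimal numerical sketch of Eq.~(\ref{e_r_def}), assuming a uniformly sampled coordinate-separation time series; the function name and the sampling choices are ours, not part of any production code.

```python
import numpy as np

def e_r(t, r, M, t0):
    """Eccentricity estimate e_r(t0) from separation samples (t, r)."""
    r_of = lambda tt: np.interp(tt, t, r)
    r0 = r_of(t0)
    T = 2.0 * np.pi * np.sqrt(r0**3 / M)          # Keplerian period estimate
    tp = np.linspace(t0 - T / 2.0, t0 + T / 2.0, 512)
    rp = r_of(tp)
    r_av = rp.mean()                              # average separation over one period
    # linear smoothing r_s(t', t0): line through (t0, r0) with the secant slope
    slope = (r_of(t0 + T / 2.0) - r_of(t0 - T / 2.0)) / T
    dr = rp - (r0 + slope * (tp - t0))
    return (dr.max() - dr.min()) / (2.0 * r_av)
```

By construction, a separation that is constant or shrinks exactly linearly over the window gives zero, while a Newtonian-style radial oscillation of relative amplitude $e$ returns approximately $e$.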
In~\cite{Buonanno:2006ui} it is shown that for a non-precessing binary in the quadrupole approximation the amplitude of the $l=m=2$ spin-weighted spherical harmonic mode is given by \begin{equation} |C_{22}| = 32 \sqrt{\pi/5} \nu (M \omega)^{8/3} , \end{equation} where $\omega$ is the orbital angular velocity. Using Kepler's law we can define a separation \begin{equation} r_{22} = M^{1/3} \omega^{-2/3} = M [ |C_{22}|/(32 \sqrt{\pi/5} \nu) ]^{-1/4} \end{equation} which is directly related to the amplitude $|C_{22}|$ of the $l=m=2$ mode of $\Psi_4$. Replacing the coordinate separation in Eq.~(\ref{e_r_def}) by $r_{22}$ we define \begin{equation} \label{e_22_def} e_{22}(t) = \frac{ \Delta r_{22,max}(t) - \Delta r_{22,min}(t)}{2r_{22,av}} , \end{equation} which is an eccentricity definition that can be computed from $\Psi_4$ alone. Note that this definition needs to be extended for the case of precessing orbits, since in that case $|C_{22}|$ will oscillate even for spherical orbits (i.e. orbits with $r=const$). The extension could be achieved by instead using a $|C_{22}|$ that is computed in a coordinate system where the $z$-axis points along the instantaneous orbital angular momentum. We have also tested an eccentricity definition based on the coordinate angular velocity $\omega$. Here the eccentricity is defined by~\cite{Husa:2007rh,Mroue:2010re} \begin{equation} \label{e_om_def} e_{\omega}(t) = \frac{ \omega(t)-\omega_{fit}(t)}{2\omega_{fit}(t)} , \end{equation} where $\omega(t)$ is simply the coordinate angular velocity and $\omega_{fit}(t)$ is a polynomial fit of order 5 to $\omega(t)$ over a time interval corresponding to several complete orbits. The hope is that the fit will smooth out oscillations so that $e_{\omega}(t) \propto \omega(t)-\omega_{fit}(t)$ becomes a measure of how much $\omega$ oscillates. The actual eccentricity is the maximum magnitude of $e_{\omega}(t)$.
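The two conversions behind $e_{22}$ and $e_{\omega}$ are short enough to sketch directly. The helper names below are ours, and NumPy's degree-5 polynomial fit stands in for whatever fitting routine is actually used:

```python
import numpy as np

def r22(C22_amp, M, nu):
    """Separation inferred from the l=m=2 amplitude of Psi_4 via Kepler's law."""
    return M * (C22_amp / (32.0 * np.sqrt(np.pi / 5.0) * nu))**(-0.25)

def e_omega(t, omega):
    """Residual of a degree-5 polynomial fit to omega(t), as in Eq. (e_om_def)."""
    fit = np.polyval(np.polyfit(t, omega, 5), t)
    return (omega - fit) / (2.0 * fit)
```

As a consistency check, feeding the quadrupole amplitude $32\sqrt{\pi/5}\,\nu(M\omega)^{8/3}$ into `r22` returns the Keplerian separation $M^{1/3}\omega^{-2/3}$, and a smooth (polynomial) $\omega(t)$ gives a vanishing $e_{\omega}$.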
In order to compare these three eccentricity definitions we now present the eccentricities from an actual numerical simulation. \begin{figure} \includegraphics[scale=0.33,clip=true]{ep02.eps} \caption{\label{ecc_comp} This plot shows results from a numerical run (with parameters from row 6 of table~\ref{punc_par_tab}). The upper panel shows the coordinate separation $r$. The middle panel depicts the magnitude of the $l=m=2$ mode $|C_{22}|$ of $\Psi_4$ extracted at a separation of $70M$ from the center of mass. In the lower panel we plot the three different eccentricities obtained with the definitions given in Eqs.~(\ref{e_r_def}), (\ref{e_22_def}) and (\ref{e_om_def}). Note that $e_{r}$ and $e_{22}$ directly measure eccentricity, while for $e_{\omega}$ the eccentricity corresponds to the maximum values of $|e_{\omega}(t)|$. } \end{figure} As we can see in Fig.~\ref{ecc_comp} all three eccentricity definitions agree well for $300M \leq t \leq 3000M$. They yield a value of about $0.02$. Notice that $e_{r}$ and $e_{22}$ are direct eccentricity measures, while in the case of $e_{\omega}$ the eccentricity corresponds to the maximum value of the magnitude of $e_{\omega}$. The eccentricity definition $e_{r}$, which is calculated from the separation, has certain problems for $t<T/2\sim 135M$. These occur because we have no data for $t<0$, which is needed in the average over one complete period (centered around $t$) and also for the fitting in Eq.~(\ref{r_lin_fit}). These same problems also affect $e_{22}$. In $e_{22}$, however, they are exacerbated by the initial junk radiation that dominates $|C_{22}|$ until about $150M$ (see middle panel). We see that $e_{22}$ does not completely settle down until about $700M$. Notice also that during standard moving puncture evolutions the coordinates adjust quite rapidly initially.
This means that any eccentricity definition that is based on the coordinate separation or the coordinate angular velocity will not be completely reliable during the first $100M$ or so. Thus it comes as no surprise that the eccentricity definition $e_{\omega}$ has a different frequency and amplitude in the beginning. The curves for $e_{r}$ and $e_{22}$ in Fig.~\ref{ecc_comp} oscillate even at later times. This oscillation is due to the fact that the period $T$, which we get from Kepler's law, is not exactly equal to the actual orbital period. The magnitude of this oscillation can be used as an error estimate of our eccentricity measures. For a conservative eccentricity estimate we can use the maximum values of $e_{r}$ and $e_{22}$. Let us point out that $e_{r}$ and $e_{22}$ are easier to compute than $e_{\omega}$. The latter depends on a polynomial fit to the measured orbital angular velocity. The problem is that this fit has to be done over a certain time interval that must terminate well before the merger. Thus it requires a certain amount of fine-tuning and human intervention. On the other hand, $e_{r}$ and $e_{22}$ can be computed at any time in a very simple way. Recall, however, that $e_{22}$ is more sensitive to the initial junk radiation. Thus for short runs used to probe the eccentricity of a configuration we usually just use $e_{r}$. \section{Numerical evolutions} \label{numerics} The numerical results discussed in this paper have been obtained with the BAM code~\cite{Bruegmann:2003aw,Bruegmann:2006at,Marronetti:2007wz}. As already mentioned, the gravitational fields are evolved using the BSSNOK formalism~\cite{Nakamura87,Shibata95,Baumgarte:1998te} in the variation known as the ``moving punctures'' method~\cite{Campanelli:2005dd, Baker:2005vv}. The particulars of our BSSNOK implementation can be found in~\cite{Bruegmann:2006at, Marronetti:2007wz}.
For completeness we note that lapse and shift evolve according to \ba (\partial_t - \beta^i\partial_i) \alpha &=& -2\alpha K , \nonumber \\ (\partial_t - \beta^k\partial_k)\beta^i &=& \frac{3}{4} B^i , \nonumber \\ (\partial_t - \beta^k\partial_k) B^i &=& (\partial_t - \beta^k\partial_k)\tilde \Gamma^i - \eta B^i . \ea The shift driver parameter is set to $\eta = 2/M$ in all our runs. The BAM code is based on a method-of-lines approach using sixth-order finite differencing in space and explicit fourth-order Runge-Kutta time stepping. The time step size is chosen such that the Courant factor is either $0.25$ or $0.5$. For efficiency, Berger-Oliger type mesh refinement is used~\cite{Berger84}. The numerical domain is represented by a hierarchy of nested Cartesian boxes. The hierarchy consists of $L+1$ levels of refinement, indexed by $l = 0, \ldots, L$. A refinement level consists of one or two Cartesian boxes with a constant grid-spacing $h_l = h_0/2^l$ on level $l$. Here we have used $L=10$ or $11$ for the number of refinement levels, with the levels 0 through 5 each consisting of a single fixed box centered on the origin (the center of mass). On each of the finer levels 6 through $L$, we initially use two sets of moving boxes centered on each black hole. When the black holes get close enough that two of these boxes start touching, they are replaced by a single box. The position of each hole is tracked by integrating the shift vector. We use this same setup but with different resolutions depending on the purpose of each simulation. For an accurate simulation of the inspiral and merger of two non-spinning equal-mass black holes we might use $L=10$ with a resolution $h_{10}=M/96$ on the finest level, using 144 points on the fixed levels and 72 points on the moving levels.
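The bookkeeping of this hierarchy can be sketched in a few lines (function name ours), assuming the coarsest box is centered on the origin so that its half-width sets the outer boundary; with $L=10$, $M/h_{10}=96$, 144 points on the fixed levels and a Courant factor of $0.25$ it reproduces a finest time step of $M/384$ and an outer boundary at $768M$:

```python
# Sketch of the refinement-level bookkeeping: grid spacings h_l = h_0/2^l,
# the outer boundary implied by the coarsest fixed box, and the finest
# time step set by the Courant factor. All in units of M = 1.
def grid_hierarchy(L, pts_fixed, inv_hL, courant, M=1.0):
    h = [M / inv_hL * 2**(L - l) for l in range(L + 1)]  # h_l = h_0 / 2^l
    outer_boundary = 0.5 * pts_fixed * h[0]              # half-width of coarsest box
    dt_finest = courant * h[L]                           # Courant condition on level L
    return h, outer_boundary, dt_finest
```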
The notation we use to describe the grid setup for this simulation is \begin{equation} [5\times 72, 6\times 144][M/h_{10}=96, OB=768M][C=0.25] , \end{equation} which indicates that we have 5 moving levels with 72 points in each box and 6 fixed levels with 144 points each. The resolution is given by $M/h_{10}=96$ on the finest level, which results in an outer boundary at $768M$. The Courant factor here is chosen to be $0.25$, which implies a time step of $dt_{10}=0.25 h_{10}=M/384$ on the finest level. If the black holes have spins and/or unequal masses even more resolution is needed. For example, for a mass ratio of 3 and a dimensionless spin magnitude of $0.6$ on the larger hole we might use a setup described by \begin{equation} [6\times 72, 6\times 144][M/h_{11}=192, OB=768M][C=0.25] . \end{equation} This setup has twice the resolution on the finest level, so that the resolution in terms of the individual masses is now given by $m_1/h_{11}=48$ for the smaller and $m_2/h_{11}=144$ for the larger hole. Such runs are quite expensive. Until merger they take about one month on a Cray XT5 supercomputer like NICS Kraken if we use 96 cores. However, if our objective is to simply measure the orbital eccentricity of a binary (characterized by certain initial parameters), it is sufficient to evolve for only a few orbits. We have found that such an evolution does not need great accuracy. So if the goal is to simply determine the initial eccentricity we use the following setup: \begin{equation} \label{ecc_det_run} [5\times 48, 6\times 96][M/h_{10}=85.3, OB=576M][C=0.5] . \end{equation} With this grid setup it takes only two days to evolve up to $t=800M$ if we use 48 cores on NICS Cray XT5 Kraken. \begin{figure} \includegraphics[scale=0.33,clip=true]{er72_VS_48h12l10cfp5.eps} \caption{\label{er72_VS_48h12l10cfp5} This plot shows the eccentricity $e_r$ for two different grid setups. Both start with the parameters from row 7 of table~\ref{punc_par_tab}.
The solid line shows $e_r$ for a high-resolution grid, while the broken line shows the measured $e_r$ if we use a less accurate but computationally cheaper grid setup. Both yield similar orbital eccentricities. } \end{figure} In Fig.~\ref{er72_VS_48h12l10cfp5} we see that the eccentricity $e_r$ from this cheaper run agrees quite well with the $e_r$ from a more expensive run. As explained above, no eccentricity estimate is accurate at the start of a run. We need to wait until about $500M$ for $e_r$ to settle down to a regular oscillation. As mentioned before, we use the maxima of $e_r$ as a conservative eccentricity estimate. Thus the eccentricities of the cheap and expensive runs are $0.0013$ and $0.0010$. Hence we need to evolve until at least about $800M$ to get a reliable estimate. From the oscillations in the curves we see that the errors in both eccentricity estimates are about $0.0005$. \section{Choosing puncture parameters} \label{parchoices} In order to start our simulations we need initial data for binary black holes with arbitrary spins and masses at some given initial separation $r$. Since we will employ the moving punctures approach in our evolutions, we will use standard puncture initial data~\cite{Brandt97b}. Thus the 3-metric and extrinsic curvature are given by \begin{eqnarray} \label{punc-3-metric} g_{ij} &=& \psi^4 \delta_{ij} \\ K^{ij} &=& \psi^{-10} \sum_{A=1}^2 \Big\{ \frac{3}{r_A^2}\left[ p_A^{(i} n_A^{j)} - \frac{\delta_{kl}p_A^k n_A^l}{2}(\delta^{ij}-n_A^i n_A^j) \right] \nonumber \\ &&+\frac{6}{r_A^3}\left[\epsilon_{klm} S_A^l n_A^m \delta^{k(i} n_A^{j)} \right] \Big\} . \end{eqnarray} Here $p_A^i$ and $S_A^i$ are the momentum and spin parameters of black hole $A$, while $r_A$ and $n_A^i$ denote the distance and normal vector measured from hole $A$.
The conformal factor is \begin{equation} \label{psi_punc} \psi = 1 + \frac{m_{b_1}}{2r_1} + \frac{m_{b_2}}{2r_2} + u , \end{equation} where $m_{b_1}$ and $m_{b_2}$ denote the black hole bare mass parameters. The scalar $u$ is computed by numerically solving the Hamiltonian constraint. These initial data are very flexible since the position, momentum and spin parameters of each black hole can be chosen freely. Thus one can set up practically any kind of orbit. Note, however, that our goal is to set up data for black holes that are in quasi-circular orbit. This means that we need to choose our momentum parameters such that the eccentricity is as small as possible. Since we start our evolutions in a frame where the center of mass is at rest, both black holes have momenta that are equal in magnitude but opposite in direction. Thus we have to choose only two parameters: the tangential and radial components of the momentum of one of the black holes. To complete the definition of the initial data, we also need to specify initial values for the lapse $\alpha$ and shift vector $\beta^i$. At time $t=0$ we use \begin{eqnarray} \label{ini_lapse_shift} \alpha &=& \psi^{-2}, \nonumber\\ \beta^i &=& 0 . \end{eqnarray} \subsection{Finding the momentum parameters} \label{finding_p_pars} There have been various attempts to guess appropriate momentum parameters. Some have used the quasi-equilibrium approach~\cite{Baumgarte00a,Tichy:2003zg,Tichy:2003qi}, but most are based on post-Newtonian (PN) approximations (see e.g.~\cite{Marronetti:2007wz,Marronetti:2007ya,Brugmann:2008zz}). The latter have been taken to their extreme in~\cite{Husa:2007rh} and~\cite{Walther:2009ng}, which integrate PN equations in the ADMTT gauge~\cite{Schaefer85} in the hope that the momentum parameters obtained in this way will lead to orbits with less eccentricity.
However, as mentioned in~\cite{Walther:2009ng}, it is not certain that non-eccentric PN parameters in ADMTT gauge should produce non-eccentric orbits in full General Relativity. We want to point out here that standard puncture data are inconsistent with PN theory beyond $(v/c)^3$~\footnote{The standard PN jargon for a term of order $(v/c)^n$ in the equations of motion is to call it a $\frac{n}{2}$PN term. Notice that according to the virial theorem $v^2\sim M/r$, so that $\frac{n}{2}$PN terms are also $O(M/r)^{\frac{n}{2}}$.}~\cite{Tichy02,Yunes:2006iw,Yunes:2005nn,Marronetti:2007ya}, even in ADMTT gauge! The reasons for this inconsistency are as follows. First, the 3-metric of puncture data is always conformally flat (see Eq.~(\ref{punc-3-metric})), while the PN 3-metric contains deviations from conformal flatness at order $(v/c)^4$~\cite{Tichy02,JohnsonMcDaniel:2009dq}. This is true for both harmonic and ADMTT gauge. Second, the conformal factor in ADMTT gauge is given by~\cite{Tichy02} \begin{equation} \label{psi_PN} \psi_{PN} = 1 + \frac{E_1}{2r_1} + \frac{E_2}{2r_2} , \end{equation} where $E_A$ is the energy of each particle used to model the black holes. If we compare Eqs.~(\ref{psi_PN}) and (\ref{psi_punc}) we see that the conformal factor in ADMTT gauge is not identical to the conformal factor used in puncture initial data. One difference is that the ADMTT $\psi_{PN}$ contains the particle energies $E_A$ while the puncture $\psi$ contains the bare masses $m_{b_A}$. These two only agree for infinite separation. Furthermore, the puncture $\psi$ contains the additional piece $u$ that is obtained by numerically solving the Hamiltonian constraint, while PN data violate the Hamiltonian constraint. One might like to argue that the PN data should agree with puncture data up to the higher-order terms that are neglected in the PN approximation. This is, however, not true close to the black holes, because PN theory is not valid in regions of very strong gravity.
Thus near the black holes the ADMTT metric differs from the puncture metric by terms that are of low PN order. From the above explanation it is clear that the coordinate separation in ADMTT gauge does not have the same physical meaning as the coordinate separation in puncture initial data. Thus if by some method we arrive at the momentum parameters needed at a particular separation $r$ (in ADMTT coordinates), we should not simply use these same momentum parameters at the same puncture separation. So if (for some configurations) the more complicated approaches in~\cite{Husa:2007rh} or~\cite{Walther:2009ng} lead to less eccentric orbits than e.g. the simpler approaches in~\cite{Marronetti:2007wz,Marronetti:2007ya,Brugmann:2008zz} this should be considered a coincidence, since it is not due to the inclusion of higher order terms or due to the integration of post-Newtonian equations of motion. In fact, we find that (as anticipated by~\cite{Walther:2009ng}) for many configurations the approaches in~\cite{Husa:2007rh} or~\cite{Walther:2009ng} do not lead to less eccentricity than the ones in~\cite{Marronetti:2007wz,Marronetti:2007ya,Brugmann:2008zz}. For these reasons we take a different approach here. We will take a simple PN formula to obtain a reasonable guess for both the tangential and radial momentum components $p_t$ and $p_r$. Then we numerically evolve the resulting initial data for a short time (using the efficient grid setup in Eq.~(\ref{ecc_det_run})) to see how eccentric they really are. Afterward we adjust $p_t$ to reduce the eccentricity. In this way we can obtain low eccentricity orbits for any configuration. In order to come up with a reasonable guess for $p_t$ and $p_r$ we use 2PN accurate expressions of Kidder~\cite{Kidder:1995zr} in harmonic gauge. Specifically, we freely choose the two masses $m_1$, $m_2$, the six spin components of $\mathbf{S}_1$ and $\mathbf{S}_2$ and a separation $r$. 
Next, choosing $\hat{\mathbf{L}}_N$ in the $z$-direction, we use Eqs.~(2.8) and (4.7) of~\cite{Kidder:1995zr} to compute the total orbital angular momentum $\mathbf{L}$, and Eq.~(4.12) of~\cite{Kidder:1995zr} to compute $\dot{r}$. We rotate $\mathbf{L}$, $\mathbf{S}_1$ and $\mathbf{S}_2$ so that $\mathbf{L}$ points in the $z$-direction. Then we obtain the momentum in the $xy$-plane as \begin{eqnarray} \label{PN_mom_pt} p_t &=& |\mathbf{L}|/r , \\ \label{PN_mom_pr} p_r &=& \mu |\dot{r}| . \end{eqnarray} In all cases discussed in this paper we put the two punctures on the $y$-axis at $y_1 = m_2 r/M$ and $y_2 = -m_1 r/M$. The two initial black hole momenta are then $\mathbf{p}_{1}=(-p_t,-p_r,0)$ and $\mathbf{p}_{2}= (p_t,p_r,0)$. \subsection{Determining the bare mass parameters} The momenta $\mathbf{p}_{1}$, $\mathbf{p}_{2}$ and the spins $\mathbf{S}_1$, $\mathbf{S}_2$ for a coordinate separation $r$ directly enter the Bowen-York extrinsic curvature of standard puncture data. Note, however, that the bare mass parameters $m_{b_1}$ and $m_{b_2}$ which appear in the construction of standard puncture data are not equal to the individual black hole masses. As in~\cite{Tichy:2007hk,Marronetti:2007wz,Marronetti:2007ya,Tichy:2008du} we obtain the bare masses from the condition that the ADM masses \begin{equation} m^{ADM}_{A} = m_{b_A} (1+ u_A) + \frac{m_{b_1} m_{b_2}}{2r} , \end{equation} measured at the punctures~\cite{Tichy03a} should be equal to $m_1$ and $m_2$, where $u_A$ is the value of $u$ at puncture $A$. As in~\cite{Tichy03a,Tichy:2003qi,Ansorg:2004ds} we assume that the ADM masses measured at each puncture are a good approximation for the initial individual black hole masses. Numerically this condition is implemented as a root finder in the initial data solver that picks $m_{b_1}$ and $m_{b_2}$ such that the ADM masses at the punctures are equal to $m_1$ and $m_2$.
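A toy version of this root finding can be sketched as a simple fixed-point iteration. In the real solver $u_A$ comes from the numerically solved Hamiltonian constraint, so the callable `u_at_punctures` below is a hypothetical stand-in, and the plain iteration replaces the production root finder:

```python
# Sketch: pick bare masses m_b1, m_b2 so that the ADM masses measured at
# the punctures, m_bA (1 + u_A) + m_b1 m_b2 / (2 r), equal the target
# physical masses m1, m2. u_at_punctures(mb1, mb2) -> (u1, u2) is a
# stand-in for the Hamiltonian-constraint solve.
def bare_masses(m1, m2, r, u_at_punctures, tol=1e-12, itmax=200):
    mb1, mb2 = m1, m2                       # initial guess: the physical masses
    for _ in range(itmax):
        u1, u2 = u_at_punctures(mb1, mb2)
        new1 = (m1 - mb1 * mb2 / (2.0 * r)) / (1.0 + u1)
        new2 = (m2 - mb1 * mb2 / (2.0 * r)) / (1.0 + u2)
        if abs(new1 - mb1) + abs(new2 - mb2) < tol:
            return new1, new2
        mb1, mb2 = new1, new2
    raise RuntimeError("bare-mass iteration did not converge")
```

In the limit $u_A \to 0$ this reduces to the Brill-Lindquist-like relation $m_{b_A} + m_{b_1}m_{b_2}/(2r) = m_A$, so the bare masses come out slightly below the physical masses, consistent with the values listed in table~\ref{punc_par_tab}.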
\section{Reducing the eccentricity} \label{ecc_reduc} Now that we know how to generate initial data on approximately circular orbits for arbitrary spins, masses and separations, it is time to present some numerical results to demonstrate what eccentricities we get and how we can reduce them. For our purposes here, we consider the eccentricity small enough if it is $0.003$ or less, which agrees with the NRAR target of an eccentricity of a few times $10^{-3}$. \begin{table*} {\small \begin{tabular}{r|c|c|c|c|c|l|l|c|c|c} row & $\frac{r}{M}$ & $\frac{m_{b_1}}{M}$ & $\frac{m_{b_2}}{M}$ & $\frac{m_1}{M}$ & $\frac{m_2}{M}$ & $\frac{|\mathbf{S}_1|}{m_1^2},\theta_1,\phi_1$ & $\frac{|\mathbf{S}_2|}{m_2^2},\theta_2,\phi_2$ & $\frac{10^2 p_t}{M}$ & $\frac{10^4 p_r}{M}$ & $10^3 e_r$ \\ \hline 1 & 11.9718 & 0.488255 & 0.488255 & 1/2 & 1/2 & $0 $ & $0$ & 8.5018 & 3.88 & 5.0 \\ 2 & 11.9718 & 0.488249 & 0.488249 & 1/2 & 1/2 & $0$ & $0$ & 8.5168 & 3.88 & 1.5 \\ \hline 3 & 11.9694 & 0.404597 & 0.404687 & 1/2 & 1/2 &$0.6,60,0$ &$0.6,120,90$& 8.5013& 3.87& 4.0 \\ 4 & 11.9694 & 0.404592 & 0.404681 & 1/2 & 1/2 &$0.6,60,0$ &$0.6,120,90$& 8.5163& 3.87& 2.0 \\ 5 & 11.9694 & 0.404588 & 0.404677 & 1/2 & 1/2 &$0.6,60,0$ &$0.6,120,90$& 8.5280& 3.87& 1.5 \\ \hline 6 & 11.4546 & 0.2239\phantom{00} & 0.6145\phantom{00} & 1/4 & 3/4 & $0.4,0,0$ & $0.6,0,0$ & 6.4416 & 2.21 & 20 \\ 7 & 10.7271 & 0.223341 & 0.614005 & 1/4 & 3/4 & $0.4,0,0$ & $0.6,0,0$ & 6.6396 & 2.50 & 1.0 \\ \hline 8 & 12.0815 & 0.224174 & 0.614794 & 1/4 & 3/4 & $0.4,180,0$ & $0.6,180,0$ & 6.5900 & 2.50 & 1.0 \\ 9 & 11.7173 & 0.223891 & 0.614572 & 1/4 & 3/4 & $0.4,180,0$ & $0.6,180,0$ & 6.7497 & 4.08 & 5.0 \\ \hline 10 & 10.9876 & 0.223317 & 0.614121 & 1/4 & 3/4 & $0.4,0,0$ & $0.6,180,0$ & 6.9756 & 3.19 & 3.0 \\ 11 & 11.5010 & 0.300017 & 0.543886 & 1/3 & 2/3 & $0.4,60,0$ & $0.6,60,0$ & 7.6504 & 3.18 & 2.0 \\ 12 & 11.5033 & 0.300020 & 0.543926 & 1/3 & 2/3 & $0.4,60,0$ & $0.6,60,90$ & 7.6509 & 3.19 & 1.8 \\ 13 & 11.6182 & 0.300023 & 
0.543980 & 1/3 & 2/3 & $0.4,60,0$ & $0.6,120,90$& 7.7896 & 3.51 & 2.1 \\ 14 & 11.5502 & 0.310095 & 0.656229 & 1/3 & 2/3 & $0.3,0,0$ & $0$ & 7.7196 & 3.32 & 1.3 \\ \end{tabular} } \caption{\label{punc_par_tab} Initial data parameters. The black holes have coordinate separation $r$. We give both bare masses $m_{b_1}$, $m_{b_2}$ as well as physical masses $m_1$, $m_2$. The punctures are located on the $y$-axis at $y_1 = m_2 r/M$ and $y_2 = -m_1 r/M$. The spins are given in terms of their magnitudes and the usual polar angles of spherical coordinates measured in degrees. The linear momenta are $(\mp p_t, \mp p_r, 0)$. The last column shows the resulting eccentricity. } \end{table*} The first example we consider is an equal-mass binary without spin. The momentum parameters are picked according to Eqs.~(\ref{PN_mom_pt}) and (\ref{PN_mom_pr}). The values of all the initial parameters are given in the first line of table~\ref{punc_par_tab}. As we can see, the eccentricity is about 0.005 in this case. In order to test how it varies with $p_t$ we have also performed a run where we have increased $p_t$ by \begin{equation} \label{p_corr} \Delta p_t = \mu \sqrt{\frac{M}{r}} \left[ 11.29\left(\frac{M}{r}\right)^3 - 92.37\left(\frac{M}{r}\right)^4\right] . \end{equation} The results of this increase are shown in the second line of table~\ref{punc_par_tab} and yield an eccentricity that is reduced by more than a factor of 3. The expression in Eq.~(\ref{p_corr}) comes from fitting two of our older equal-mass runs that resulted in low eccentricity. Thus non-zero spins or unequal masses are not taken into account by this fit. Therefore Eq.~(\ref{p_corr}) certainly does not represent the optimal momentum correction. In fact we have found that adding $\Delta p_t$ to $p_t$ does not always reduce the eccentricity since the PN estimate of Eq.~(\ref{PN_mom_pt}) for $p_t$ is sometimes too large and sometimes too small for generic orbits.
The expression in Eq.~(\ref{p_corr}) simply gives us a rough estimate of how much we might have to raise or lower $p_t$ in order to reduce the eccentricity. So in order to really reduce the eccentricity we usually start one run on the coarse grid described by Eq.~(\ref{ecc_det_run}). This run normally uses $p_t$ given by Eq.~(\ref{PN_mom_pt}). We then look at the coordinate separation $r(t)$ for this run. Usually one can tell whether the initial tangential momentum $p_t$ was too large or too small. We then start a new run where we either increase or decrease $p_t$ by $\Delta p_t$. This run then gives a new $r(t)$ and $e_r$. From the two resulting simulations one can then extrapolate to zero eccentricity to obtain a more refined tangential momentum parameter. This procedure is illustrated in rows 3, 4, and 5 of table~\ref{punc_par_tab}. In row 3 we have used $p_t=8.5013\times 10^{-2} M$ from Eq.~(\ref{PN_mom_pt}) and obtained an eccentricity of 0.004. In row 4 we have increased this $p_t$ by $\Delta p_t$, which leads to a decrease of the eccentricity to 0.002. A further increase to $p_t=8.5280\times 10^{-2} M$ (as in row 5) then yields an eccentricity of 0.0015. Row 6 of table~\ref{punc_par_tab} shows the initial data that were used to produce Fig.~\ref{ecc_comp}. In order to produce an eccentricity which is clearly visible in the $r(t)$ curve we have used a $p_t$ that is deliberately chosen much larger than what is predicted by Eq.~(\ref{PN_mom_pt}). In row 7 we show the same configuration (at somewhat closer separation), but this time we choose $p_t$ according to Eq.~(\ref{PN_mom_pt}). We see that in this case the eccentricity is already small enough to satisfy e.g. the NRAR guidelines. In row 8 we have used our method to produce momentum parameters for a similar configuration, but this time both spins point in the negative $z$-direction. Again we can reach the NRAR target of an eccentricity of order 0.001.
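The adjustment step described above can be sketched as follows; `delta_pt` evaluates Eq.~(\ref{p_corr}) and `next_pt` performs the extrapolation to zero eccentricity, under the assumption (ours, for the sketch) that each measured eccentricity is given a sign according to whether $p_t$ was too large or too small:

```python
import math

def delta_pt(mu, M, r):
    """Empirical p_t step of Eq. (p_corr), fitted to two older equal-mass runs."""
    x = M / r
    return mu * math.sqrt(x) * (11.29 * x**3 - 92.37 * x**4)

def next_pt(pt0, e0, pt1, e1):
    """Secant-style extrapolation of signed eccentricity e(p_t) to zero."""
    return pt0 - e0 * (pt1 - pt0) / (e1 - e0)
```

For the equal-mass binary at $r=11.9718M$ (so $\mu = M/4$), `delta_pt` gives about $1.5\times 10^{-4}M$, which is exactly the step between the $p_t$ values in rows 1 and 2 of table~\ref{punc_par_tab}.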
In row 9 we also show the eccentricity resulting from the method introduced in~\cite{Walther:2009ng} for the same mass ratio and spin configuration. We can see that it is about 5 times larger, which supports our argument that integrating PN equations of motion does not necessarily lead to better results. \begin{figure} \includegraphics[scale=0.33,clip=true]{WTvsPMq3S04S06down.eps} \caption{\label{r_WTvsPM} This plot shows $r(t)$ for the two numerical runs with parameters from rows 8 (solid line) and 9 (broken line) of table~\ref{punc_par_tab}. The parameters for the solid line were picked according to the method explained in this paper, while the broken line is the result of setting the parameters according to the much more complicated method in~\cite{Walther:2009ng}. As we can see, the eccentricity difference of about a factor of 5 is quite noticeable in the $r(t)$ curves. } \end{figure} The results from these two runs are also shown in Fig.~\ref{r_WTvsPM}. The solid line corresponds to row 8 (i.e. our approach) and the broken line is generated using the approach in~\cite{Walther:2009ng}. As we can see, our method does give a noticeably less eccentric $r(t)$ curve. Notice that the initial dip in the coordinate separation $r(t)$ is due to the aforementioned initial coordinate adjustment. It does not mean the two holes really plunge toward each other initially. The last five rows of table~\ref{punc_par_tab} give a few more examples of parameters and resulting eccentricities. They show that we can obtain low eccentricities for generic mass ratios and spin orientations. Notice that our eccentricity reduction method is similar in spirit to the method described in~\cite{Pfeiffer:2007yz,Boyle:2007ft}, in that we also use short numerical runs to adjust certain parameters. There are, however, important differences. In~\cite{Pfeiffer:2007yz,Boyle:2007ft} excision-type initial data, not punctures, are used.
These data are constructed using an extension to the conformal thin sandwich formalism~\cite{York99}. Empirically it turns out that the tangential momentum that is achieved with the conformal thin sandwich method is quite close to what is needed for low eccentricity data. The main reason for eccentricity is the absence of a radial momentum in the original conformal thin sandwich method. In~\cite{Pfeiffer:2007yz} a method is developed that adds an arbitrary radial velocity parameter to the initial data. This radial velocity parameter together with the tangential momentum are then adjusted to reduce the eccentricity. The method introduced in~\cite{Pfeiffer:2007yz,Boyle:2007ft} is capable of producing eccentricities of order only $10^{-5}$. For standard puncture data the method in~\cite{Pfeiffer:2007yz,Boyle:2007ft} cannot be used directly. One problem is that standard moving puncture simulations start with the lapse and shift given in Eqs.~(\ref{ini_lapse_shift}). Thus the coordinates used are not well adapted to quasi-equilibrium. Hence oscillations in the hole separation $r$ are not due to real eccentricity alone, but also to the fact that the coordinates are still evolving. This problem is quite visible in Fig.~\ref{r_WTvsPM}. The broken line shows oscillations that have more than one frequency. Thus we cannot fit the curve very well with a straight line plus a single sine function as in~\cite{Pfeiffer:2007yz,Boyle:2007ft}. To summarize, reducing the eccentricity for puncture data is harder than for the extended conformal thin sandwich data in~\cite{Pfeiffer:2007yz}. Therefore, our results in both final eccentricities and reduction of eccentricity per iteration are not as good as in~\cite{Pfeiffer:2007yz,Boyle:2007ft}. In the cases we have studied so far, we have found it unnecessary to adjust $p_r$ (away from the PN value in Eq.~(\ref{PN_mom_pr})) in order to reach an eccentricity of order 0.001.
It is much more important to choose an appropriate $p_t$. However, we expect that adjusting $p_r$ will be necessary to reach even lower eccentricities. \section{Discussion} \label{discussion} We have introduced the two new eccentricity measures $e_r$ and $e_{22}$. Both are easy to compute since their calculation does not involve any free parameters, unlike earlier definitions akin to $e_{\omega}(t)$. To compute eccentricity measures such as $e_{\omega}(t)$ one needs to specify a time interval during the inspiral phase over which the orbital angular velocity is fit to a polynomial of some low degree. This degree is essentially another free parameter and is usually chosen to be 4 or 5. Note, however, that all eccentricity definitions are ambiguous to a certain extent since the entire concept of eccentricity is only rigorously defined for periodic orbits. All eccentricity definitions for inspiral orbits depend on how we split a function of time (like the separation) into a smooth and an oscillating piece. Thus we do not claim that our definitions $e_r$ and $e_{22}$ are less ambiguous than earlier ones. For example, our definitions depend on the period $T$, which we compute from Kepler's law, but other choices are possible (e.g. an estimate of the actual orbital period of the numerical simulation at each time). Since our definitions $e_r$ and $e_{22}$ do not require us to choose extra parameters such as fitting intervals, they are easier to use. We also show that certain low resolution grid setups (which are relatively inexpensive) can be used to estimate the initial eccentricity using our measure $e_r$. This gives us a relatively efficient way to measure the eccentricity of any initial data. Furthermore, we have explained why the coordinates of standard puncture data are not the same as in PN calculations in ADMTT or harmonic gauge.
Thus even if one arrives at a highly accurate estimate of the momentum parameters needed for low eccentricity orbits in ADMTT gauge, there is no easy way to incorporate this knowledge into simulations starting from standard puncture data. Indeed we find that using accurate parameters in ADMTT gauge~\cite{Husa:2007rh,Walther:2009ng} in standard puncture initial data can lead to relatively large eccentricities. We provide a simpler approach which starts from momentum parameters using the relatively short expressions given by Kidder~\cite{Kidder:1995zr}. After measuring the resulting eccentricity $e_r$ in a short inexpensive numerical simulation, these parameters can then be refined by changing $p_t$ by an amount of order $\Delta p_t$ (see Eq.~(\ref{p_corr})). In this way we can always arrive at eccentricities that are low enough for the purposes of the NRAR collaboration~\cite{NRAR_web}. In order to achieve this we usually do not need more than three numerical runs. \begin{acknowledgments} It is a pleasure to thank Doreen M\"uller for useful discussions about the approach in~\cite{Walther:2009ng}. This work was supported by NSF grant PHY-0855315. Computational resources were provided by the Ranger cluster at the Texas Advanced Computing Center (allocation TG-PHY090095) and the Kraken cluster (allocation TG-PHY100051) at the National Institute for Computational Sciences. \end{acknowledgments}
\section{Introduction} The nature of the obscuration that defines Seyfert-2 AGN remains a matter of study. The obscuring gas and dust may be associated with a distant, parsec-scale torus that covers about half of the sky, as seen from the central engine (e.g., Antonucci 1993). However, the obscuration could originate orders of magnitude closer to the black hole, at radii consistent with the optical broad line region (BLR). Some models of the BLR suggest that it is also dusty, and that the enhanced cross section of the dust is responsible for lifting the material above the plane of the disk (e.g., Czerny \& Hryniewicz 2011, Czerny et al.\ 2015). The light crossing time out to a parsec-scale torus is over three years, and Keplerian time scales at that radius are much longer. Thus, even variability on a time scale of months is not easy to reconcile with very distant obscuration. In a study of obscured AGN, Risaliti (2002) reported column density variations on the time scale of months in 23 of 24 cases, typically by factors of 2--3. NGC 4388 may represent one of the most extreme examples of such variability. In this AGN, Elvis et al.\ (2004) found significant absorption variability between observations separated by hours. That specific episode was an ``unveiling event'', wherein the obscuring column was dramatically reduced. It is reasonable to expect, then, that the source could also display periods of greatly enhanced obscuration. However, despite widespread study and strong variability, NGC 4388 has never been observed in a Compton-thick state ($N_{\rm H}\geq 1.5\times 10^{24}~{\rm cm}^{-2}$). This may suggest that special conditions, such as chance alignment of distant clouds, are unlikely to cause a source to be observed as Compton-thick with any regularity. X-ray emission line spectroscopy provides another angle on the nature and location of neutral absorption in obscured AGN. 
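The timescale argument above is simple to verify; the following sketch assumes a black hole mass of $8.4\times 10^{6}~M_{\odot}$ (the maser mass quoted later in this introduction) and a torus radius of 1 pc:

```python
import math

# Light-crossing and Keplerian timescales at r = 1 pc for a ~8.4e6
# solar-mass black hole (CGS constants).
G = 6.674e-8          # cm^3 g^-1 s^-2
c = 2.998e10          # cm s^-1
Msun = 1.989e33       # g
pc = 3.086e18         # cm
yr = 3.156e7          # s

M = 8.4e6 * Msun
r = 1.0 * pc

t_light = r / c / yr                                     # ~3.3 yr
t_kepler = 2 * math.pi * math.sqrt(r**3 / (G * M)) / yr  # ~3e4 yr
```

Both numbers bear out the text: even the light-crossing time exceeds three years, and the orbital time at a parsec is longer by four orders of magnitude, so month-scale column variations are hard to attribute to material at that radius.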
When cold gas is illuminated by hard X-rays, a characteristic ``reflection'' spectrum is produced (e.g., George \& Fabian 1991). The most prominent part of a reflection spectrum is an Fe K emission line. The strength of this line relative to the observed continuum can encode the fraction of the central engine that is covered by the reflector, and any velocity broadening to the line can be used to constrain the radial location of the obscuring and reflecting gas (for a recent general treatment of geometrical inferences, see Ramos Almeida \& Ricci 2017). In a study of {\it Chandra} gratings spectra of Seyfert-1 galaxies, Shu et al.\ (2010) found that the narrow Fe K emission line is consistent with being produced in the optical BLR in approximately half of the cases examined. This suggests that the BLR is plausibly the inner edge of the obscuration that is observed in Seyfert-2 galaxies. In a separate study of 13 Seyfert-1 AGN, Gandhi et al.\ (2015) found that the dust sublimation radius likely forms an outer envelope to the neutral Fe K line production zone. Recent focused studies of single sources also conclude that cold reflecting gas (that would be seen as obscuration in a Seyfert-2) is located within the optical BLR. The neutral Fe K line in the Seyfert-1 AGN NGC 7213 is resolved in a {\it Chandra} gratings spectrum, clearly linking the line to the optical broad line region (Bianchi et al.\ 2008). In the Seyfert-1.5 AGN NGC 4151, asymmetry in the Fe K emission line is observed in the {\it Chandra} grating spectrum (Miller et al.\ 2018), suggesting an origin within the BLR or potentially smaller radii. NGC 4388 is an Sb galaxy in the Virgo Cluster. Recent work has estimated its distance at $d=18.0$~Mpc (Sorce et al.\ 2014). In some obscured AGN, measurements of the black hole mass can be particularly difficult since broad emission lines are either weak or unobserved.
However, NGC 4388 has an H$_{2}$O megamaser in its disk; exploiting this gives an extremely precise black hole mass of $M_{BH} = (8.4\pm 0.2)\times 10^{6}~M_{\odot}$ (Kuo et al.\ 2011). This excellent measurement is of major help to efforts to understand the radial location and geometry of obscuration in this source. Owing to its bright, neutral Fe~K$\alpha$ line and prior evidence of dramatic absorption variability, NGC 4388 was targeted for early observations with {\it NICER} (Gendreau et al.\ 2016). In Section 2, the data reduction is described. Section 3 details the spectral analysis that was undertaken. In Section 4, the results are discussed. \section{Observations and Reduction} NGC 4388 was observed on numerous occasions through the early phase of the {\it NICER} mission. The first observation was obtained on 2017-12-03 (MJD 58090); this analysis includes all data accumulated on and before 2019-03-29 (MJD 58571). The reduction was accomplished using HEASOFT version 6.25, and the latest calibration files consistent with that software release. The data were processed using \texttt{NICERDAS}. Using \texttt{nimaketime}, the raw exposures were filtered to exclude times close to the South Atlantic Anomaly, times with high particle background (via the \texttt{COR\_SAX} parameter), times of high optical loading (via the \texttt{FPM\_UNDERONLY\_COUNT} parameter), and to avoid bad sun angles (we required a minimum of 30 and 40 degrees above the Earth limb and bright Earth limb, respectively). After filtering, the net exposure time is 105.6~ks. The individual event lists were then combined using the FTOOL \texttt{nicermergeclean}, and the total spectrum was extracted using the tool \texttt{XSELECT}. The ISS orbit takes {\em NICER} through a wide range of geomagnetic latitudes, each with its own background characteristics. At high latitudes, the particle background is dependent on space weather and the variability of the Sun.
Individual observations thus have different background levels that must be understood to maximize the science return. To calibrate the background, {\em NICER} has (to date) collected over 2.7 Ms of exposure on background fields used and characterized by {\it RXTE} (Jahoda 1996) and also a few select locations near some of the faint millisecond pulsars that are key to the {\em NICER} core science mission. For this paper, the background was estimated using the ``space weather'' method (Gendreau et al., in prep.), which uses environmental data to parse the background database. This tool uses a model of the magnetic cut-off rigidity (the \texttt{COR\_SAX} parameter) and the space weather index ``KP'' (Kennziffer Planetary; Bartels et al.\ 1939). KP is derived from a worldwide network of magnetometers that publishes data every 3 hours [ https://www.swpc.noaa.gov/products/planetary-k-index ]. KP values range from 0--9, where low values indicate calm space weather, while higher values indicate geomagnetic storms. The space weather background tool builds a library of background spectra divided amongst these environmental variables to predict a background spectrum for a given observation. Figure 1 plots the background-subtracted {\it NICER} light curve of NGC 4388, and the evolution of the source hardness. The hardness curve is the ratio of counts in the 8--12~keV band to counts in the 0.3--2.0~keV band. In the higher band, the source is dominant and relatively free of obscuration. In the lower band, contributions from the host galaxy are likely to be important and to give a measure of flux stability, though absorption is also significant there. The data show a concentration of low and negative flux points, and negative spectral hardness, around $t \simeq 1.2$~Ms. This episode is due to a brief period of high background that was not caught by the standard screening criteria. This interval was excised and not considered for spectral analysis.
The light curve shows rare intervals with high flux, but these are sporadic. The hardness ratio may be a better indicator of large changes in the source spectrum and/or obscuring column, but the errors on the hardness curve are large given the low count rates. The centroid values are fairly steady, however, with only a few points that deviate from the envelope. An apparent spike in the count rate about 3.3~Ms into the monitoring campaign is not highly significant, and a putative delayed reaction in the hardness ratio is within the errors. For these reasons, this initial analysis of NGC 4388 is confined to the time-averaged spectrum of the source. Because the spectrum is composed of very strong features, and because the modeling is not concerned with, e.g., separating relativistic disk reflection features from the continuum, we have not ``Crab-corrected'' the spectra (see, e.g., Ludlam et al.\ 2018, Miller et al.\ 2018). Prior to spectral fitting, the data were grouped to require a signal-to-noise ratio of 10, using the FTOOL \texttt{ftgrouppha}. \section{Analysis and Results} The spectra were analyzed using XSPEC version 12.10.1 (Arnaud 1996). A $\chi^{2}$ statistic was minimized in the fitting procedure. {\it NICER} spectra are nominally valid down to 0.2~keV, but we restricted our fits to the 0.6--10.0~keV range owing to the limitations of some models (see below). Initial fits were made using the standard default weighting within XSPEC; refinements were made using ``model'' weighting. All of the errors in this work reflect the value of the given parameter at its 1$\sigma$ confidence limits. Uncertainties on the physical models were obtained after running a Markov Chain Monte Carlo within XSPEC. The chains started from the best-fit model, used the Goodman-Weare algorithm (Goodman \& Weare 2010) with 100 walkers, had a length of $4\times 10^{6}$, and a burn-in of $6\times 10^{5}$.
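The Goodman \& Weare ``stretch move'' behind these XSPEC chains is compact enough to sketch outside XSPEC; the toy Gaussian target and short chain below are illustrative only (the real fits used 100 walkers, a length of $4\times 10^{6}$, and a burn-in of $6\times 10^{5}$):

```python
import numpy as np

def stretch_move_sampler(log_prob, p0, nsteps, a=2.0, seed=0):
    """Minimal serial Goodman & Weare (2010) affine-invariant ensemble
    sampler (the algorithm behind the XSPEC chains); p0 is (nwalkers, ndim)."""
    rng = np.random.default_rng(seed)
    walkers = np.array(p0, dtype=float)
    nwalkers, ndim = walkers.shape
    logp = np.array([log_prob(w) for w in walkers])
    chain = np.empty((nsteps, nwalkers, ndim))
    for t in range(nsteps):
        for k in range(nwalkers):
            j = int(rng.integers(nwalkers - 1))
            if j >= k:
                j += 1                             # complementary walker, j != k
            z = (1.0 + (a - 1.0) * rng.random())**2 / a   # g(z) ~ 1/sqrt(z)
            proposal = walkers[j] + z * (walkers[k] - walkers[j])
            lp = log_prob(proposal)
            # Accept with probability min(1, z^(ndim-1) * p(Y)/p(X_k)).
            if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp - logp[k]:
                walkers[k], logp[k] = proposal, lp
        chain[t] = walkers
    return chain

# Toy 1-D unit-Gaussian target, 100 walkers as in the text.
rng0 = np.random.default_rng(1)
chain = stretch_move_sampler(lambda th: -0.5 * float(th @ th),
                             rng0.normal(size=(100, 1)), 400)
samples = chain[100:].reshape(-1)                  # discard burn-in
```

The only tuning parameter is the stretch scale $a$ (here the conventional $a=2$), which is one reason this sampler is attractive for spectral-fit posteriors.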
\subsection{Phenomenological Models} Simple models, such as an absorbed power-law, fail to fit the full spectrum. However, it can be useful to characterize the strength of the neutral Fe~K$\alpha$ line and the sensitivity at which it is detected over a narrow band. Fitting with a simple Gaussian and power-law in the 5.5--6.9~keV band to avoid broad continuum trends and the Fe K edge, we measure a line centroid of ${\rm E} = 6.372^{+0.009}_{-0.003}$~keV (in the frame of NGC 4388). This is slightly below the laboratory value, but it is within the uncertainty in the instrumental energy calibration. The line is nominally resolved (measurements of its width statistically exclude zero) with $\sigma = 40\pm 10$~eV, or about three times the nominal separation of the Fe~K$\alpha_{1}$ and Fe~K$\alpha_{2}$ components (13 eV). This translates to a broadening of $v = 1200\pm 400~{\rm km}~ {\rm s}^{-1}$. If the broadening is Keplerian, it may not be the full velocity but rather a projected velocity. If the accretion plane is viewed at $\theta = 60^{\circ}$, for instance, the velocity should be twice as large, $v = 2300\pm 800~{\rm km}~ {\rm s}^{-1}$. This velocity would correspond to orbits at $r = 1.7_{-0.8}^{+1.8}\times 10^{4}~ GM/c^{2}$, consistent with the optical BLR in many AGN (see, e.g., Peterson et al.\ 2004). Finally, the flux of the line is measured to be $F = (8.3\pm 0.6)\times 10^{-5}~ {\rm ph} ~{\rm cm}^{-2}~ {\rm s}^{-1}$. This translates to an equivalent width of $W = 73\pm 5$~eV. In Figure 2, for instance, which is based on more physical modeling (see below), it is clear that the line is only a fraction of the local continuum flux. In contrast, the neutral Fe~K$\alpha$ line in Compton-thick AGN is often several times stronger than the local continuum (e.g., Kammoun et al.\ 2019; however, also see Boorman et al.\ 2018). \subsection{Modeling with Pexmon} We next pursued modeling with \texttt{pexmon}.
This model is built on \texttt{pexrav} (Magdziarz \& Zdziarski 1995), which treats cold, neutral reflection -- minus lines -- from a slab of infinite column density. Nandra et al.\ (2007) updated \texttt{pexrav} to include Fe and Ni K$\alpha$ and K$\beta$ lines, with strengths linked to the reflection in the manner dictated by atomic physics (see, e.g., George \& Fabian 1991). If a source is not Compton-thick, and if the inclination is not extreme, it is likely that the viewing angle to the central engine is obscured while reflected emission from the far side of the central engine is not as obscured. Similarly, diffuse plasma emission from the larger nuclear region should not be obscured. However, the direct emission from the central engine should be obscured, with only a fraction passing through to the observer. There is an angle dependence within \texttt{pexmon}; this is the inclination at which the {\em reflector} is viewed, not the scattering angle, nor the inner disk inclination. Within XSPEC, we built our model as follows:\\ \noindent \texttt{phabs[1]*(zphabs[2]*photoion[3]* cutoffpl[4] $+$ zphabs[5]*const[6]*cutoffpl[7] $+$ pexmon[8] $+$ mekal[9] $+$ mekal[10] $+$ mekal[11])}.\\ \noindent Here, \texttt{phabs[1]} is the line of sight absorption in the Milky Way (fixed to $N_{\rm H} = 4.0\times 10^{21}~ {\rm cm}^{-2}$ in all fits; this value is not high enough to affect any fits in the 0.6--10.0 keV band). Then, \texttt{zphabs[2]} is the neutral column density along the line of sight acting on the continuum emission from the central engine in the source frame, modeled as a cut-off power-law via \texttt{cutoffpl[4]}. In between the central engine and the neutral obscuration, the flux passes through an ionized absorber, \texttt{photoion[3]} (described in more detail below).
Some IR studies of Seyfert-2 AGN find evidence of dust in the polar regions (e.g., Honig et al.\ 2013), so the model allows for a fraction of the flux from the central engine to be scattered into our line of sight (via \texttt{const[6]*cutoffpl[7]}, where the parameters of \texttt{cutoffpl[7]} are linked to those of \texttt{cutoffpl[4]} and $0\leq \texttt{const[6]} \leq 1$). (In preliminary tests, we found that the cut-off was poorly constrained, so $E_{\rm cut} = 500$~keV was enforced in all fits.) This scattered emission is allowed to pass through a different column density than that measured along the line of sight (via \texttt{zphabs[5]}). Reflection -- nominally from distant material on the far side of the central engine -- is modeled with \texttt{pexmon[8]}. The power-law index, cut-off energy, and flux normalization within \texttt{pexmon[8]} were linked to the values in \texttt{cutoffpl[4]}. Finally, the low energy spectrum required three diffuse plasma components, modeled with \texttt{mekal}. These collisional plasma models are adopted for simplicity; fits to the {\it NICER} spectrum are equally good if one of the \texttt{mekal} components is replaced with a photoionized plasma model. The photoionized absorption component is a high-resolution XSTAR table model (e.g., Kallman \& Bautista 2001; also see Miller et al.\ 2015, 2016), generated assuming a standard $\Gamma = 1.7$ X-ray power-law and a UV blackbody disk. The UV/X-ray flux ratio was set to be 10:1, broadly consistent with bolometric corrections to Seyfert X-ray fluxes (Vasudevan \& Fabian 2009). A bolometric luminosity of $L = 10^{44}~ {\rm erg}~ {\rm s}^{-1}$ was assumed. Assuming a different luminosity would not directly affect the ionization parameter, since this measurement is driven by line flux ratios. Fits to the data without this model showed evidence of absorption in the 6.6--6.7~keV range, potentially consistent with He-like Fe~XXV seen in a number of Seyfert-1s.
The variable parameters that can be constrained with XSPEC include the absorption column density ($N_{\rm H}$), ionization (${\rm log}~\xi$, where $\xi = L/nr^{2}$), and the velocity shift. The fits did not measure a significant shift, so this parameter was fixed to have zero velocity. Comparing the $\chi^{2}$ fit statistic to models excluding the XSTAR component, a simple F-test suggests that the absorber is significant at the $4\sigma$ level of confidence. The parameters measured with this model are listed in Table 1, and the fit is shown in Figure 2. The confidence contours resulting from the MCMC error analysis are shown in Figure 3. The neutral obscuration is measured to be Compton-thin, with upper limits that exclude the Compton-thick regime. The ``reflection fraction'' is constrained to be much less than unity, again consistent with the source being Compton-thin. Emission that is scattered from the polar region, or another region out of the line of sight, is very small but nonzero. An iron abundance of $A_{Fe} = 2.0$ relative to solar was slightly preferred by the data, and fixed in all fits, but is not required when \texttt{mytorus} is used (see below). The model detailed in Table 1 is not formally acceptable in a statistical sense; this is largely driven by remaining calibration uncertainties in the Si~K and Au M-shell region near 2--3 keV. Finally, we note that after removing the neutral obscuring column, our best-fit \texttt{pexmon} model construction implies an unabsorbed flux of $F \sim 2.21(7)\times 10^{-10}~ {\rm erg}~ {\rm cm}^{-2}~ {\rm s}^{-1}$ in the 0.6--10~keV band. This corresponds to a 0.6--10.0~keV X-ray luminosity of $L = 8.6(3)\times 10^{42}~ {\rm erg}~ {\rm s}^{-1}$, or $L_{X}/L_{\rm Edd} = 8.1(3)\times 10^{-3}$. Bolometric corrections for Seyferts can range from 15 to 70 (Vasudevan \& Fabian 2007), but they tend to be higher at higher Eddington fractions.
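The flux-to-luminosity arithmetic here is easy to reproduce; a sketch using the distance and black hole mass from the introduction (the Eddington coefficient $1.26\times 10^{38}~{\rm erg~s^{-1}}~M_{\odot}^{-1}$ is the standard hydrogen value):

```python
import math

F = 2.21e-10                 # unabsorbed 0.6-10 keV flux, erg cm^-2 s^-1
d_cm = 18.0 * 3.086e24       # 18 Mpc in cm
M_bh = 8.4e6                 # black hole mass in solar masses

L_x = 4.0 * math.pi * d_cm**2 * F    # ~8.6e42 erg s^-1
L_edd = 1.26e38 * M_bh               # ~1.1e45 erg s^-1
edd_frac = L_x / L_edd               # ~8.1e-3
# Bolometric corrections of 15-70 then imply L_bol/L_edd ~ 0.12-0.57.
```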
For the lowest correction factor, an Eddington fraction of $L_{\rm bol}/L_{\rm Edd.} \simeq 0.12$ is implied; for the largest factor, $L_{\rm bol}/L_{\rm Edd.} \simeq 0.57$. \subsection{Modeling with Mytorus} We also made fits with the \texttt{mytorus} suite (Murphy \& Yaqoob 2009, Yaqoob et al.\ 2010), to add even more physical self-consistency than is possible with \texttt{pexmon}. The model is only valid above 0.6~keV. \texttt{Mytorus} does not include a cut-off energy in its power-law functions; we used versions of the obscuring, scattered, and line files that assume a simple power-law out to 100~keV (e.g., we used the line emission file \texttt{mytl\_V000010nEp000H100\_v00.fits}). We constructed a ``decoupled'' \texttt{mytorus} model (see, e.g., Yaqoob et al.\ 2012), wherein line-of-sight effects (defined to have an inclination of $\theta = 90^{\circ}$) and reflection effects (defined to have $\theta = 0^{\circ}$) are separated and independently constrained. In effect, the $\theta=90^{\circ}$ components account for phenomena on the near side of the torus (or, obscuring geometry), through which our line of sight passes, and the $\theta=0^{\circ}$ components account for irradiation of the face of the ``torus'' on the far side of the central engine. This model can be written as follows:\\ \noindent \texttt{phabs[1]*(${\rm MYTZ}_{90}$[2]*photoion[3]* zpowerlaw[4] $+$ zphabs[5]*const[6]* zpowerlaw[7] $+$ const[8]*(${\rm MYTS}_{90}$[9] $+$ ${\rm MYTL}_{90}$[10]) $+$ const[11]*(${\rm MYTS}_{0}$[12] $+$ ${\rm MYTL}_{0}$[13]) $+$ mekal[14] $+$ mekal[15] $+$ mekal[16])}\\ \noindent Here, the components follow logically from those in the \texttt{pexmon} model construction. The \texttt{${\rm MYTZ}_{90}[2]$} component is the neutral absorption component, with parameters including the column density, inclination angle, and redshift of the source. The \texttt{MYTS} and \texttt{MYTL} components are the scattered emission and line emission components.
For these components, the column density parameters are linked to the same parameter in \texttt{${\rm MYTZ}_{90}[2]$}, the inclination is set to the value indicated, and the power-law index and normalization are linked to the same parameters in \texttt{zpowerlaw[4]}. All of the components above have abundances fixed at solar values. The photoionized absorption model is exactly the same as that employed in the prior model, and is again required at the 4$\sigma$ level of confidence via a simple F-test. As with the prior \texttt{pexmon} model construction, this \texttt{mytorus} model yields a good description of the data, and physically reasonable parameter values (see Table 1, and Figures 4 and 5). It is nominally a better description of the data than the \texttt{pexmon} model, but the difference is not highly significant. The fact that an equally good or superior fit is possible with solar abundances suggests that super-solar abundances are not required. In keeping with the \texttt{pexmon} model results, scattered emission from a dusty polar region is found to be small. In the best-fit model, the normalization \texttt{const[8]} modulating the \texttt{${\rm MYTS}_{90}$} and \texttt{${\rm MYTL}_{90}$} components has a value of $0.62^{+0.07}_{-0.10}$, while \texttt{const[11]}, modulating the same components viewed face-on at $\theta=0^{\circ}$, has a very small value, $4.0^{+0.3}_{-0.4}\times 10^{-3}$. The face-on components thus make a small contribution, but the constant is well determined, and they are important to the fit. Nominally, this disparity would indicate that the reflection-like spectrum observed in NGC 4388 is dominated by scattering within the near side of the absorber (relative to the central engine), not reflection from the far side.
However, additional fits indicate that this is partly an artificial result, likely owing to a degeneracy that the current data cannot break: requiring these constants to be equal only results in a slightly worse fit ($\Delta \chi^{2} = 13$). Given that the models strongly require Compton-thin obscuration, making it likely that the far side of the torus is partly visible, a conservative interpretation of our results is that the nominal ``reflection spectrum'' is partly reflection in the traditional sense, and partly scattering within the near side of the absorber. A data set that sampled a larger range in column density, for which hardness may be a proxy, might be able to break this degeneracy. Fits with \texttt{mytorus} measure the same flux as the fits made with \texttt{pexmon}, in the 0.6--10.0~keV band. However, the best-fit \texttt{mytorus} model implies an unabsorbed flux that is slightly higher: $F_{\rm unabs.} = 2.63(2)\times 10^{-10}~ {\rm erg}~ {\rm cm}^{-2}~ {\rm s}^{-1}$. The corresponding X-ray luminosity is $L_{X} = 1.03(1)\times 10^{43}~ {\rm erg}~ {\rm s}^{-1}$, implying $L_{X}/L_{\rm Edd.} = 9.7(7)\times 10^{-3}$. The upper limit of the range of bolometric corrections determined for Seyferts by Vasudevan \& Fabian (2007), 70, would imply that the central engine in NGC 4388 is still just slightly sub-Eddington: $L_{\rm bol}/L_{\rm Edd.} \simeq 0.68$. It is worth noting that our model construction applies the ionized absorption column to flux from the central engine, but not to the scattered or line components within \texttt{mytorus}. This effectively assumes that the ionized gas -- perhaps a wind with an outflow velocity that is low (at least in projection) -- has the bulk of its column density interior to the neutral gas. However, if the ionized and neutral gas were cospatial, then the scattered and line flux would pass through the ionized gas.
The low column, optically thin ionized gas that we have observed would add negligible width to the Fe K$\alpha$ line, rendering this inconsistency negligible as well. For much higher ionized columns, however, the inconsistency may not be negligible. \subsection{The Inner Extent of the ``Torus''} Above, fits to the Fe~K$\alpha$ line with a simple Gaussian measured a width consistent with the optical BLR in most AGN. Indeed, studies of narrow Fe K$\alpha$ lines observed with the {\it Chandra}/HETGS find that approximately half have widths consistent with an origin in the optical BLR (Shu et al.\ 2010). In {\it Chandra} spectra of NGC 4151, weakly relativistic dynamical imprints on the Fe~K$\alpha$ line and its variability appear to directly constrain the line to originate in regions even closer to the black hole than the optical BLR (Miller et al.\ 2018, Zoghbi et al.\ 2019, in prep.). Direct measurements of the broadening expected from Keplerian orbital motion in the BLR are likely beyond the reach of {\it NICER}. However, we can investigate the greatest degree of broadening that is consistent with the {\it NICER} data, and interpret this as a lower limit on the innermost extent of the obscuring material. To this end, we added an \texttt{rdblur} relativistic blurring function to modify the \texttt{pexmon} model. This function is analytic, and based on the \texttt{diskline} model (e.g. Fabian et al.\ 1989), which describes the line profile expected due to gravitational red-shifts and strong Doppler-shifts around a Schwarzschild black hole. This model is not appropriate close to a spinning black hole, but is sufficient at the distances of interest in this work, and has the advantage of extending to arbitrarily large radii. The variable parameters of the \texttt{rdblur} model include the line emissivity, the inner and outer line production radii, and the inclination at which the geometry is observed.
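The mapping between a velocity width and a radius used in this exercise is the simple Keplerian scaling from Section 3.1 (\texttt{rdblur} adds the full Schwarzschild kinematics, but the order of magnitude follows from circular-orbit arguments); a sketch:

```python
C_KMS = 2.998e5   # speed of light in km/s

def kepler_radius(v_kms):
    """Radius, in units of GM/c^2, of a circular orbit whose Keplerian
    speed is v, from v/c = (r c^2 / GM)^(-1/2)."""
    return (C_KMS / v_kms)**2

# The deprojected velocity of ~2300 km/s from the Gaussian fit maps to
# r ~ 1.7e4 GM/c^2, the BLR-scale radius quoted in Section 3.1.
r_inner = kepler_radius(2300.0)
```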
In our fits, the emissivity index was fixed at a value of $q=3$ (where the emissivity is given by $J\propto r^{-q}$), and the outermost line production radius was fixed at $r = 10^{6}~GM/c^{2}$. We considered a set of fixed inclination values in order to be unbiased concerning the reflecting geometry; thus, the only variable parameter was the inner radius. The fit made assuming $\theta=5^{\circ}$ is consistent with that detailed in Table 1, but the fits made assuming higher inclination values are progressively worse. Table 2 details the inclinations, fit statistics, and lower radius limits obtained through this fitting experiment. \section{Discussion and Conclusions} We have analyzed a summed, time-averaged {\em NICER} spectrum of the Seyfert-2 AGN NGC 4388, with a total exposure of 105.6~ks. The data have excellent sensitivity across the 0.6--10.0~keV band, permitting strong constraints on the column density and covering factor of neutral obscuration along our line of sight, enabling a limit on the innermost extent of this obscuration, and facilitating the detection of ionized absorption like that found in many Seyfert-1 AGN. In this section, we compare our results to prior studies of NGC 4388, and suggest methods for further improving our knowledge of this source with future observatories. The values that we have measured for the equivalent neutral hydrogen column density within the nucleus of NGC 4388 (see Table 1) are in broad agreement with measurements made using {\it Suzaku} and {\it XMM-Newton}. The {\it Suzaku} data were analyzed by Shirai et al.\ (2008); using two related models, values of $N_{\rm H} = 2.36(6)\times 10^{23}~ {\rm cm}^{-2}$ and $N_{\rm H} = 2.51^{+0.04}_{-0.03}\times 10^{23}~ {\rm cm}^{-2}$ were measured. 
Fits to the spectra obtained in two {\it XMM-Newton}/EPIC observations of NGC 4388 measured column densities of $N_{\rm H} = 2.45^{+0.20}_{-0.21}\times 10^{23}~ {\rm cm}^{-2}$ and $N_{\rm H} = 2.79(7)\times 10^{23}~ {\rm cm}^{-2}$ (Beckmann et al.\ 2004). A {\it NuSTAR} observation of NGC 4388 made in late 2013 was analyzed by Kamraj et al.\ (2017). Two measurements of the absorbing column were obtained in separate fits to the {\it NuSTAR} data: $N_{\rm H} = 6.5(8)\times 10^{23}~ {\rm cm}^{-2}$, and $N_{\rm H} = 5.3(7)\times 10^{23}~ {\rm cm}^{-2}$; the former was made using \texttt{pexrav}, and the latter using \texttt{mytorus}. As noted above, \texttt{pexrav} does not include line emission, though the strength of the Fe~K$\alpha$ line relative to the edge is set by atomic physics and this scaling is key to accurately constraining the crucial parameters. The \texttt{mytorus} modeling undertaken by Kamraj et al.\ (2017) included only a single scattering and line component group, and a single inclination (fixed to the viewing angle of the accretion flow). Masini et al.\ (2016) had previously analyzed the {\it NuSTAR} spectra of NGC 4388 as part of a larger analysis of water maser AGN. They examined a small set of self-consistent \texttt{mytorus} models, including one that is essentially the same as our implementation. That treatment resulted in an obscuring column of $N_{\rm H} = 4.2(5) \times 10^{23}~{\rm cm}^{-2}$. The fact that a higher column was measured from the {\it NuSTAR} data using essentially the same model suggests that the difference could reflect a genuine reduction in the obscuring column between the {\it NuSTAR} observations in 2013, and our more recent {\it NICER} program. It is interesting to note that -- similar to our findings -- Masini et al.\ (2016) also report that scattering within the obscuring gas between the central engine and the observer dominates over forward-scattering from the same material on the far side of the central engine. 
In the absence of a direct density constraint, writing $N_{\rm H} = nr$ allows an upper limit on the absorption radius via the ionization parameter, $\xi = L/nr^{2} = L/(N_{\rm H} r)$. As noted above, and in Table 1, $L_{X}/L_{\rm Edd}$ is 0.0078--0.0104. A modest bolometric correction is likely more appropriate, based on the work of Vasudevan \& Fabian (2007). Adopting a correction factor of 20 then implies $L_{\rm bol} \simeq 2\times 10^{44}~ {\rm erg}~ {\rm s}^{-1}$. Using this value, and the absorption parameters listed in Table 1, the ionization parameter formalism implies a radius limit of $r\leq 5\times 10^{18}~{\rm cm}$, or $r\leq 4\times 10^{6}~GM/c^{2}$ for the mass of NGC 4388. However, assuming a density of $n=10^{8}~{\rm cm}^{-3}$ -- consistent with ionized absorber density constraints in some Seyferts (e.g., Krongold et al.\ 2007) -- a radius of $r \simeq 2\times 10^{4}~GM/c^{2}$ results, broadly consistent with an origin in the optical BLR. At that location, the ionized gas might help to pressure-confine cold, clumpy gas. We note that the photoionization model assumed a power-law index of $\Gamma = 1.7$ whereas values consistent with $\Gamma = 1.55$ are measured, but this difference is minor compared to uncertainties in the bolometric correction factor. Using the neutral reflected spectrum as a tracer of the innermost extent of the cold obscuration, we are able to estimate lower limits on the innermost extent of this absorption. The smallest value, $r\geq 270~GM/c^{2}$, is 1--2 orders of magnitude smaller than a typical optical BLR. If the reflector in NGC 4388 originates at such small radii, it would likely trace an X-ray BLR, warp, or other geometry. A larger limit that is not strongly rejected by an increased fit statistic, $r\geq 1600~GM/c^{2}$ (corresponding to the fit made assuming $\theta = 30^{\circ}$), is consistent with the inner extent of the optical BLR in many sources. 
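The radius and line-width estimates above follow from simple scaling arguments. The sketch below (Python) makes the arithmetic explicit, using the bolometric luminosity and the $GM/c^{2}$ scale for the $\sim 8.5\times 10^{6}~M_{\odot}$ black hole quoted in the text; the $\xi$ and $N_{\rm H}$ values passed to the functions are illustrative placeholders, not the Table 1 fit results, and the Keplerian width uses a simple Newtonian ring approximation.

```python
import math

# Quantities quoted in the text
L_BOL = 2e44        # erg/s, adopted bolometric luminosity (20 x L_X)
R_G_CM = 1.25e12    # cm, GM/c^2 for a ~8.5e6 Msun black hole
C_KMS = 2.998e5     # speed of light, km/s

def r_max_from_column(xi, n_h):
    """Upper limit on the absorber radius (cm): xi = L/(n r^2) with
    N_H = n*r gives r <= L / (xi * N_H)."""
    return L_BOL / (xi * n_h)

def r_from_density(xi, n):
    """Radius (cm) implied by an assumed density n (cm^-3):
    r = sqrt(L / (n * xi))."""
    return math.sqrt(L_BOL / (xi * n))

def keplerian_fwhm(r_g, incl_deg):
    """Rough full line width (km/s) from a Keplerian ring at r_g (in GM/c^2),
    Newtonian v = c / sqrt(r_g), projected at the given inclination."""
    return 2.0 * (C_KMS / math.sqrt(r_g)) * math.sin(math.radians(incl_deg))

# xi and N_H below are illustrative placeholders only
print(r_max_from_column(xi=100.0, n_h=1e21) / R_G_CM)  # radius limit in GM/c^2
print(r_from_density(xi=100.0, n=1e8) / R_G_CM)
print(keplerian_fwhm(1600.0, 30.0))                    # ~7500 km/s
```

For the $r \geq 1600~GM/c^{2}$ limit discussed above, the projected Keplerian width at $\theta = 30^{\circ}$ is of order several thousand km/s, which is the scale of broadening the \texttt{rdblur} fits are sensitive to.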
Especially in galaxies for which our line of sight passes close to the plane of the galactic disk, such as NGC 4388, the host galaxy itself can contribute significantly to the gaseous and dust obscuration detected in Seyferts and even some Compton-thick AGN (see, e.g., Shi et al.\ 2006, Gandhi et al.\ 2009, Goulding et al.\ 2012, Puccetti et al.\ 2014). It is therefore possible that the host galaxy contributes to the gaseous column density observed in NGC 4388, but the bulk of the neutral and ionized column likely arises on much smaller scales, based on the variability, line widths, and ionization arguments discussed above. However, Shi et al.\ (2006) measure a strong 9.7~$\mu$m silicate feature in NGC 4388, and {\em Hubble} images of NGC 4388 reveal strong spiral arms and prominent dust lanes (Greene et al.\ 2013); the host galaxy could contribute significantly to the dust obscuration. If the majority of the gaseous obscuration occurs on small scales within the nucleus, and if at least some part of the dust obscuration occurs within the host galaxy, is there a role for a conventional parsec-scale torus in NGC 4388? Dust reverberation mapping -- actually reverberation using the K-band continuum at 2.2$\mu$m because it is suitably close to the peak emissivity of hot dust grains -- finds clear evidence of dust on parsec scales (e.g., Koshida et al.\ 2014) in nearby Seyferts. However, mid-IR interferometry has shown that the bulk of the IR emission occurs in polar regions rather than in equatorial regions (e.g., Raban et al.\ 2009; Honig et al.\ 2012, 2013). This may indicate that in NGC 4388 and other Seyferts, a traditional torus geometry is only a minor factor in shaping the broadband SED; however, this does not mean that different obscuration zones are completely disconnected. Recent studies of several AGN link X-ray activity in the central engine to dust flows on galactic scales (e.g., Cicone et al.\ 2014). 
Future missions will combine spectral resolution and sensitivity, and may be able to build on the results achieved with {\it NICER}. Figure 6 shows simulated {\it XRISM/Resolve} (Tashiro et al.\ 2018) and {\it Athena/X-IFU} (Barret et al.\ 2018) spectra of NGC 4388, based on the results presented in this work. The {\it XRISM/Resolve} spectra were simulated using {\it Hitomi} responses with a resolution of 5~eV. Based on the best-fit \texttt{mytorus} model in Table 1, if the reflector traces the innermost extent of the obscuration and originates at $r = 1600~GM/c^{2}$ (see above), dynamical broadening would be detected in a 100~ks exposure. The X-IFU simulation was constructed using the current public response matrices, with a resolution of 2~eV. We find that the {\em X-IFU} will be able to detect small changes (factors of $\sim2$) in the high resolution features tied to the neutral and ionized absorbers on time scales as short as 10~ks, commensurate with the variability reported in Elvis et al.\ (2004) but based on low-resolution data. This would represent a very strong confirmation of the prior variability, and open a window into the nature of obscuration in Seyferts on dynamical time scales. Finally, it is worth noting some of the limitations of our spectral modeling. Both \texttt{pexmon} and \texttt{mytorus} assume cold, neutral gas. If the gas that drives obscuration along our line of sight (and, reflection from the far side of the central engine) is as close as the optical BLR, the gas may not be entirely neutral. A model such as \texttt{xillver} (Garcia et al.\ 2013) describes reflection from a broad range of gas ionization levels. Exploratory fits with \texttt{xillver} replacing \texttt{pexmon}, for instance, yield equally good fits. 
Whereas \texttt{pexmon} only includes lines from Fe and Ni, \texttt{xillver} includes a broad range of abundant elements, and contributes some line flux (particularly Si) that is described by the \texttt{mekal} components in our models. However, more self-consistent implementations of \texttt{xillver} that replace the neutral obscuration with ionized obscuration through \texttt{zxipcf} yield significantly worse fits. This may indicate that the absorption and reflection occur, at least in part, in a mixed medium, potentially with an ionization gradient. Future refinements in the calibration of {\em NICER}, and future missions, may be able to address such points. We thank the anonymous referee for comments that improved the paper. JMM acknowledges helpful conversations with Richard Mushotzky.
\section{Introduction} \noindent In the Earth's outer core, turbulent motions of the electrically conducting fluid sustain the geomagnetic field through dynamo action. Part of this field, the poloidal one, crosses the mantle and can be observed at the Earth's surface and above. Because of the low conductivity of the mantle (see \citet{Velimsky2010,Jault2015}), once measured and modeled, the poloidal field can be estimated everywhere outside and at the outer boundary of the core. Since at the core mantle boundary (CMB) the flow advects the magnetic field, a close examination of the geomagnetic field variations at the Earth's surface makes it possible to infer the velocity field responsible for them. \noindent Within the outer core, the evolution of the magnetic field is prescribed by the induction equation. At the CMB this equation can be simplified. Under the assumption that the mantle is a perfect electrical insulator, the toroidal field, which interacts with the poloidal field inside the outer core, vanishes at the CMB. In addition, since the fluid cannot penetrate the mantle, the velocity field at the CMB is purely two-dimensional. Finally, on short timescales, diffusion effects can be considered negligible in comparison to advection effects, as shown in \citet{Holme2007}. Altogether, the induction equation expressed at the CMB can be simplified into the so-called Frozen Flux (FF) approximation (see \citet{Backus1996}). By inverting this equation, which couples the velocity field to the radial component of the magnetic field and the secular variation, fluid motions at the CMB can be recovered. \noindent However, since the velocity field has two components for one equation, and since any flow scale can interact with the magnetic field to generate the large scale, observable, secular variation (SV), the inverse problem is ill-posed. 
To reduce the non-uniqueness of the velocity field, physical assumptions decreasing the dimension of the space of possible solutions can be made. Most of the constraints generally used in core flow inversions, such as quasi-geostrophy, columnar flow, tangential geostrophy or purely toroidal flow, are derived in \citet{Holme2007,Finlay2010}. Nevertheless, to obtain a unique solution, additional constraints on the velocity field need to be imposed. Typically, one enforces the energy associated with the small scale velocity field to decay rapidly, based on the so-called large scale assumption (see \citet{Holme2007,Finlay2010}). However, \citet{Baerenzung2016} has shown that although the flow is dominant at large scales, its total kinetic energy spectrum does not exhibit a strongly decaying slope. Recently, other strategies have been developed to bypass the issues raised by the non-uniqueness of the velocity field. In particular, \citet{Aubert2014} proposed to use the statistical properties of an Earth-like geodynamo simulation (the Coupled-Earth model of \citet{Aubert2013}) to characterize the flow and the magnetic field a priori. A major advantage of such an approach is that the correlations between the fields at the CMB and the fields within the outer core are available, allowing for imaging of the entire outer core state. \noindent Constraining a priori the temporal dependency of the velocity field is a more delicate operation than constraining it spatially. Optimally, one should account for the dynamics of the outer core fluid and magnetic field prescribed by the magnetohydrodynamic equations. Approaches such as variational data assimilation (see \citet{Canet2009,Li2014}) allow for physical modeling within an inversion framework. To do so, the method searches for the optimal initial conditions of the system in order for the deterministic trajectories of the different fields to explain the observations as well as possible. 
The drawback of the method is that all data are treated simultaneously, so whenever the dimension of the data or their amount is large, the algorithm becomes computationally expensive. An alternative to avoid such a block inversion is to operate sequentially. \citet{Kuang2008} were the first to adopt a sequential assimilation algorithm in the context of geomagnetic modeling. The optimal interpolation algorithm they used proceeds recursively in two steps. In the first one, the state variables are propagated in space and time with a given physical model, in the case of \citet{Kuang2008}, a three-dimensional geodynamo simulation. Once observations become available, the state variables are corrected accordingly, and then the prediction restarts. Since with this approach uncertainties are not modeled, they have to be specified in an ad hoc manner. Algorithms which permit simulating a mean model and its associated uncertainties exist; the Kalman filter (KF) approach (see \citet{Kalman1960,Talagrand1997,Cohn1997,Evensen2003}) is one of them. As for the optimal interpolation algorithm, the KF proceeds sequentially with forecast and analysis steps. The main difference is that the evolution of errors is also predicted, and whenever data become available, these errors are taken into account for the Bayesian update of the state variables. The KF can only be applied to systems exhibiting linear dynamics. When the dynamics of the system is nonlinear, as is the case here for the geomagnetic field, the propagation of errors cannot be analytically derived. However, it can be either approximated by linearization with the extended version of the KF, or represented through an ensemble of possible solutions in the ensemble Kalman filter (EnKF). As in \citet{Barrois2017}, the latter option is the one we chose in this study. \noindent In the EnKF, the different fields of interest are represented through an ensemble of possible states. 
For the prediction step, the dynamical model of the system prescribes the spatio-temporal evolution of each individual member of the ensemble. At the analysis, covariances deriving from the forecasted fields and data are combined to correct the state of the ensemble predicted at observation time. Due to the limitations in available computational power, a balance between the complexity of the dynamical model and the size of the ensemble has to be found. Here we decided to favor accuracy in the statistical representation. Therefore, the evolution of the core magnetic field and velocity field are only modeled at the level of the core-mantle boundary, through respectively the frozen flux equation and a first order autoregressive process. Extending the approach of \citet{Baerenzung2016} to the time domain, the parameters of the autoregressive process are assumed to derive from scale dependent power laws, and are directly estimated with the COV-OBS.x1 core magnetic field secular variation model of \citet{Gillet2013,Gillet2015b}. \noindent The article is organized as follows. In section \ref{governingEquations} the mathematical approach chosen to tackle the inverse problem is described. It is then applied to the real geophysical context, and the results are shown in section \ref{results}. Finally, some conclusions are drawn in section \ref{conclusion}. \section{Models and parameters}\label{governingEquations} \subsection{Quantities of interest and notations}\label{notations} \noindent In this study three fields are of particular interest: the radial component of the magnetic field $B_r(x,t)$, the secular variation $\partial_t B_r(x,t)$, and the velocity field $u(x,t)$, all expressed at the Earth's core mantle boundary. 
\noindent The spectral counterparts of $B_r(x,t)$ and $\partial_t B_r(x,t)$ are respectively given by the spherical harmonics coefficients $b_{l,m}$ and ${\gamma}_{l,m}$ such that: \begin{eqnarray} B_r(x,t) & = & -\sum_{l=1}^{l=+\infty} (l+1)\sum_{m=-l}^{m=+l} b_{l,m}(t)Y_{l,m}(x) \ , \\ \partial_t B_r(x,t) & = & -\sum_{l=1}^{l=+\infty} (l+1)\sum_{m=-l}^{m=+l} {\gamma}_{l,m}\left(t\right)Y_{l,m}(x) \ , \label{BrSHdecompostion} \end{eqnarray} with $Y_{l,m}(x)$ the Schmidt semi-normalized spherical harmonics (SH) of degree $l$ and order $m$. \noindent The velocity field at the CMB $u(x,t)$ is decomposed into a poloidal $\phi(x,t)$ and a toroidal $\psi(x,t)$ scalar field such that: \begin{equation} u(x,t) = {r}\times {\nabla_H} \psi(x,t) + {\nabla_H} (|r|\phi(x,t)) \label{poloToro} \ , \end{equation} where ${\nabla_H}$ corresponds to the horizontal gradient operator. In spectral space, the poloidal and toroidal fields derive respectively from the coefficients $\phi_{l,m}$ and $\psi_{l,m}$ through the formulation: \begin{eqnarray} \phi(x,t) & = & \sum_{l=1}^{l=+\infty} \sum_{m=-l}^{m=+l} \phi_{l,m}(t){Y}_{l,m}(x) \ ,\\ \psi(x,t) & = & \sum_{l=1}^{l=+\infty} \sum_{m=-l}^{m=+l} \psi_{l,m}(t){Y}_{l,m}(x) \ . \end{eqnarray} \noindent According to the spherical harmonics expansion of $B_r(x,t)$, $\partial_t B_r(x,t)$ and $u(x,t)$, we define the magnetic field and secular variation energy spectra as follows: \begin{eqnarray} E_b(l) &=& (l+1)\sum_{m=-l}^{m=l} b_{l,m}^2 \label{magneticSpectrum} \\ E_\gamma(l) &=& (l+1)\sum_{m=-l}^{m=l}\gamma_{l,m}^2 \label{svSpectrum} \ . \end{eqnarray} and the velocity field poloidal and toroidal energy spectra respectively as: \begin{eqnarray} E_\phi(l) &=& \frac{l(l+1)}{2l+1}\sum_{m=-l}^{m=l}\phi_{l,m}^2 \label{poloidalSpectrum} \\ E_\psi(l) &=& \frac{l(l+1)}{2l+1}\sum_{m=-l}^{m=l}\psi_{l,m}^2 \label{toroidalSpectrum} \ . 
\end{eqnarray} \noindent In the following, we will also adopt two types of notation, one with normal characters and another one with bold characters. Normal characters will be associated with single epoch quantities. This includes $\gamma$, $b$ and $u$, which will respectively contain the spherical harmonics coefficients of the secular variation $\gamma_{l,m}$, the magnetic field $b_{l,m}$, and both the poloidal and toroidal fields $\phi_{l,m}$ and $\psi_{l,m}$, but also any scalar, vector or matrix associated with a quantity expressed at a given time. \noindent Bold characters will be used for quantities depending on both space and time. As an example, a vector $\bf{a}$ will contain spatial vectors $a$ at $N$ different epochs, such that: \begin{equation} {\bf a} = \left(a_0,a_1, \cdots, a_{N-2},a_{N-1}\right)^T \end{equation} \noindent Finally, a consistent formalism for some statistical quantities will be used throughout the manuscript. The mean and maximum value of a distribution $p(a)$ will respectively be written with an over bar and a hat such that: \begin{eqnarray} \bar{a} & = & E[a] = \int a p(a) da \\ \hat{a} & = & \argmax_a \left(p\left(a\right)\right) \end{eqnarray} and the covariance associated with a random variable $a$ or between a random variable $a$ and a random variable $b$ will respectively be expressed as: \begin{eqnarray} \Sigma_a & = & E\left[\left( a - \bar{a}\right) \left( a - \bar{a}\right)^T\right] \\ \Sigma_{ab} & = & E\left[\left( a - \bar{a}\right) \left( b - \bar{b}\right)^T\right] \ . \end{eqnarray} Note that the use of bold characters for space time dependent variables also applies to the latter statistical quantities. 
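As an illustration of the spectral definitions above, the secular variation spectrum of equation (\ref{svSpectrum}) can be accumulated directly from a flat coefficient vector. The sketch below is a minimal Python version; the degree-by-degree ordering of the coefficient array is an assumption of the sketch, not a convention imposed by the text.

```python
import numpy as np

def sv_energy_spectrum(gamma, l_max):
    """E_gamma(l) = (l+1) * sum_m gamma_{l,m}^2.

    `gamma` is assumed to hold the SH coefficients ordered degree by degree:
    (l=1: m=-1..1), (l=2: m=-2..2), ..., up to l_max.
    """
    spectrum = np.zeros(l_max)
    idx = 0
    for l in range(1, l_max + 1):
        n_m = 2 * l + 1                      # number of orders at degree l
        spectrum[l - 1] = (l + 1) * np.sum(gamma[idx:idx + n_m] ** 2)
        idx += n_m
    return spectrum

# three l=1 coefficients equal to 1, five l=2 coefficients with one equal to 2
coeffs = np.array([1.0, 1.0, 1.0, 2.0, 0.0, 0.0, 0.0, 0.0])
print(sv_energy_spectrum(coeffs, 2))  # [ 6. 12.]
```

The same loop, with the $l(l+1)/(2l+1)$ prefactor, gives the poloidal and toroidal flow spectra of equations (\ref{poloidalSpectrum}) and (\ref{toroidalSpectrum}).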
\subsection{Characterization of the magnetic field dynamical behavior} \noindent As mentioned in the introduction, under the assumption that the mantle is a perfect electrical insulator, and supposing that the observed secular variation is mainly induced by advection of the magnetic field at the CMB, the dynamical evolution of the radial component of the magnetic field $B_r$ at the CMB is given by the frozen flux equation which reads: \begin{equation} \partial_t B_r\left(x,t\right) = -{\nabla_H}\cdot\left(u\left(x,t\right) B_r\left(x,t\right)\right) \ .\label{FF} \end{equation} \noindent Following the notations given in section \ref{notations}, this equation can be written in spectral space as: \begin{equation} \gamma = -A_b u = -A_u b = -A(ub)\ .\label{FFv} \end{equation} where the linear operators $A_b$ and $A_u$ and the third order tensor $A$ allow us to calculate the SH coefficients associated with the advection term ${\nabla_H}\cdot(u\left(x,t\right) B_r\left(x,t\right))$ when they are respectively applied to $u$, $b$ and $(ub)$. \subsection{Characterization of the flow spatio-temporal behavior} \noindent The dynamical evolution of the fluid within the Earth's outer core is prescribed by the Navier-Stokes equations. Numerically solving these equations is not only expensive, it remains out of reach at the parameter regime of the Earth's core. Since on short timescales, the observable magnetic field and secular variation only depend on the velocity field at the CMB, we propose to model the outer core flow only at this location. Following \citet{Gillet2015} we chose a first order autoregressive process to do so. 
With such a model, the velocity field $u(t+\Delta t)$ depends on the velocity field $u(t)$ through the relation: \begin{equation} u(t+\Delta t) = \Gamma(\Delta t) u(t) + \xi(\Delta t) \ , \end{equation} where $\Gamma$ is the matrix associated with the parameters of the autoregressive model, which we refer to as the memory term, and $\xi$ is a white noise characterized by the Gaussian distribution $\mathcal{N}(0,\tilde{\Sigma}_{u|_\mathcal{M}})$ with $\tilde{\Sigma}_{u|_\mathcal{M}}$ the covariance of the noise given a certain parametrization $\mathcal{M}$ of the AR process. If each singular value of $\Gamma$ is smaller than $1$, the process is stationary, a condition that the flow at the CMB certainly fulfills. Under such conditions, the time-averaged spatial covariance of the velocity field $\Sigma_{u|_\mathcal{M}}$ is related to the covariance of the white noise $\xi$ through the relation: \begin{equation} \tilde{\Sigma}_{u|_\mathcal{M}} = \Sigma_{u|_\mathcal{M}} - \Gamma \Sigma_{u|_\mathcal{M}} \Gamma^T \ . \end{equation} Defining the vector ${\bf u}= \left(u_0,u_1,\cdots,u_{N-2},u_{N-1}\right)^T$ containing the velocity fields at $N$ different epochs, its full covariance ${\bf \Sigma_{u|_\mathcal{M}}}$ is given by: \begin{equation} {\bf \Sigma_{u|_\mathcal{M}}} = \begin{pmatrix} \Sigma_{u|_\mathcal{M}} & \Gamma\Sigma_{u|_\mathcal{M}} & \cdots & \Gamma^{(N-1)}\Sigma_{u|_\mathcal{M}} \\ \Sigma_{u|_\mathcal{M}}\Gamma^T & \Sigma_{u|_\mathcal{M}} & \cdots & \Gamma^{(N-2)}\Sigma_{u|_\mathcal{M}} \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma_{u|_\mathcal{M}}{\Gamma^{(N-1)}}^T & \Sigma_{u|_\mathcal{M}}{\Gamma^{(N-2)}}^T & \cdots & \Sigma_{u|_\mathcal{M}} \end{pmatrix}\ . \end{equation} \noindent If $\Gamma$ is a scalar, this parametrization of the flow spatio-temporal behavior is similar to the one used by \citet{Gillet2015}. 
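The stationarity relation between $\tilde{\Sigma}_{u|_\mathcal{M}}$ and $\Sigma_{u|_\mathcal{M}}$ can be checked numerically. The scalar sketch below (with arbitrary, illustrative values of the memory term and target variance) draws a long AR(1) sequence whose noise variance is set to $(1-\Gamma^2)\Sigma_{u|_\mathcal{M}}$, and recovers the prescribed stationary variance.

```python
import numpy as np

rng = np.random.default_rng(0)

g = 0.9          # scalar memory term Gamma (|g| < 1 ensures stationarity)
var_u = 2.0      # target stationary variance Sigma_u
var_xi = (1.0 - g**2) * var_u   # noise variance from the stationarity relation

u = 0.0
samples = np.empty(200000)
for t in range(samples.size):
    u = g * u + rng.normal(0.0, np.sqrt(var_xi))
    samples[t] = u

print(samples.var())  # close to var_u = 2.0
```

The same check carries over to the matrix case, with the Cholesky factor of $\tilde{\Sigma}_{u|_\mathcal{M}}$ replacing the scalar noise standard deviation.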
Indeed, in their study they chose a temporal correlation of the velocity field depending on the characteristic time $\tau_u$ such that $\Gamma(\Delta t) = \mathrm{exp}(-\frac{\Delta t}{\tau_u})$, with $\Delta t$ the time stepping of the autoregressive process. \subsection{Parametrization of the autoregressive process}\label{flowParameters} \noindent To fully describe the autoregressive process, its two main parameters, the spatial covariance $\Sigma_{u|_\mathcal{M}}$ and the memory term of the process $\Gamma$, have to be characterized. Following the developments of \citet{Baerenzung2016}, $\Sigma_{u|_\mathcal{M}}$ is chosen to derive from the poloidal and toroidal stationary spectra of the flow, the latter being assumed to behave as power laws with different spectral ranges. $\Gamma$, which contains the information on the temporal correlations of the velocity field, is assumed to be scale dependent; we therefore choose it to derive from power laws presenting the same ranges as the flow energy spectra. Under such assumptions, the poloidal and toroidal stationary spectra of the flow and memory terms of the AR process are given by: \begin{eqnarray} E_\phi(l) &= C_{E_\phi}^i A_{E_\phi}^2 l^{\mbox{-}P_{E_\phi}^i} &\quad \text{for} \quad l \in \Delta_{\phi}^i \label{Ephi}\\ E_\psi(l) &= C_{E_\psi}^j A_{E_\psi}^2 l^{\mbox{-}P_{E_\psi}^j} &\quad \text{for} \quad l \in \Delta_{\psi}^j \label{Epsi} \\ \Gamma_\phi(l) &= C_{\Gamma_\phi}^i A_{\Gamma_\phi}^2 l^{\mbox{-}P_{\Gamma_\phi}^i} &\quad \text{for} \quad l \in \Delta_{\phi}^i \label{Gphi}\\ \Gamma_\psi(l) &= C_{\Gamma_\psi}^j A_{\Gamma_\psi}^2 l^{\mbox{-}P_{\Gamma_\psi}^j} &\quad \text{for} \quad l \in \Delta_{\psi}^j \label{Gpsi} \end{eqnarray} where the $A$'s are the magnitudes of the energy spectra and the memory terms, and the $P^k$'s are their slopes within the spherical harmonics ranges $\Delta^k$'s. 
The constants $C^k$'s are given by: \begin{equation} C^k = \prod_{a=2}^{a=k} \mathrm{exp}\left( \log\left(l_{a\mbox{-}1}\right)\left(P^a-P^{a\mbox{-}1}\right)\right) \end{equation} $l_{a}$ being the spherical harmonics degrees where transitions in slope occur. \noindent According to this parametrization, whereas the matrix $\Gamma$ is diagonal and contains both $\Gamma_\phi(l)$ and $\Gamma_\psi(l)$, the spatial covariance $\Sigma_{u|_\mathcal{M}}$ derives from the poloidal and toroidal energy spectra such that: \begin{eqnarray} \overline{\phi_{l,m} \phi_{l^\prime,m^\prime}} &=& \frac{E_\phi(l)}{l(l+1)}\delta_{l l^{\prime}} \delta_{m m^{\prime}} \label{covPhiPhi}\\ \overline{\psi_{l,m} \psi_{l^\prime,m^\prime}} &=& \frac{E_\psi(l)}{l(l+1)} \delta_{l l^{\prime}} \delta_{m m^{\prime}} \label{covPsiPsi}\\ \overline{\phi_{l,m} \psi_{l^\prime,m^\prime}} &=& 0 \quad \forall l,l^{\prime},m ,m^{\prime} \label{covPhiPsi} \ . \end{eqnarray} \noindent The total parameters of the autoregressive process, referred to as $\mathcal{M}$, can be divided into two categories, one containing the parametrization of the spatial covariances, namely $\mathcal{M}_\Sigma$, and the other associated with the memory terms of the process $\mathcal{M}_\Gamma$. $\mathcal{M}_\Sigma$ and $\mathcal{M}_\Gamma$ are respectively given by: \begin{eqnarray} \mathcal{M}_\Sigma &=& \left\{A_{E_\phi},P_{{E_\phi}}^i,\Delta_{\phi}^i, A_{E_\psi},P_{{E_\psi}}^j,\Delta_{\psi}^j\right\} \label{parametersSigma} \\ \mathcal{M}_\Gamma &=& \left\{A_{\Gamma_\phi},P_{{\Gamma_\phi}}^i,\Delta_{\phi}^i, A_{\Gamma_\psi},P_{{\Gamma_\psi}}^j,\Delta_{\psi}^j\right\} \label{parametersGamma} \ . \end{eqnarray} \noindent where the indices $i$ and $j$ are associated with the different poloidal and toroidal spherical harmonics ranges. 
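The role of the constants $C^k$ is to make each spectrum continuous across the slope transitions at the break degrees $l_a$. A small Python sketch of this piecewise power-law construction (the slope and break values in the check are made up for illustration):

```python
import numpy as np

def piecewise_powerlaw(l, A, slopes, breaks):
    """E(l) = C^k * A^2 * l^(-P_k) on the range containing degree l,
    with C^k = prod_a l_{a-1}^(P_a - P_{a-1}) enforcing continuity
    at the break degrees, as in the text's product formula."""
    k = int(np.searchsorted(breaks, l))   # index of the range containing l
    C = 1.0
    for a in range(1, k + 1):
        C *= breaks[a - 1] ** (slopes[a] - slopes[a - 1])
    return C * A**2 * l ** (-slopes[k])

# continuity check at a break degree l_1 = 4 (values are illustrative)
below = piecewise_powerlaw(3.9999, 1.0, [2.0, 3.0], [4.0])
above = piecewise_powerlaw(4.0001, 1.0, [2.0, 3.0], [4.0])
print(below, above)  # both ~ 4^-2 = 0.0625
```

Evaluating such a spectrum over each range $\Delta^k$ and inserting it into equations (\ref{covPhiPhi})--(\ref{covPsiPsi}) yields the diagonal of $\Sigma_{u|_\mathcal{M}}$.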
Once $\mathcal{M} = \left\{\mathcal{M}_\Sigma, \mathcal{M}_\Gamma \right\}$ is given, the prior distribution of the velocity field can be expressed and reads: \begin{equation} p({\bf u}|\mathcal{M}) = {\bf \mathcal{N}}({\bf 0},{\bf \Sigma_{u|_\mathcal{M}}}) \label{priorU} \ . \end{equation} \subsection{Posterior distribution of the autoregressive parameters} \noindent To forecast the evolution of the velocity field in the ensemble Kalman filter algorithm, the parameters of the autoregressive process have to be known. Their posterior distribution, $p(\mathcal{M}|{\bf \bar{\gamma}^o})$, can be expressed through $p({\bf u},{\bf b},\mathcal{M}|{\bf \bar{\gamma}^o})$, the joint posterior distribution of the AR parameters, the core mantle boundary velocity field and magnetic field, such that: \begin{eqnarray} p(\mathcal{M}|{\bf \bar{\gamma}^o}) &=& \iint p({\bf u},{\bf b},\mathcal{M}|{\bf \bar{\gamma}^o})\mathrm{d}{\bf u} \mathrm{d}{\bf b} \\ &=& \frac{1}{p({\bf \bar{\gamma}^o})}\iint p({\bf \bar{\gamma}^o}|{\bf u},{\bf b},\mathcal{M}) p({\bf b}) p({\bf u}|\mathcal{M})p(\mathcal{M})\label{marginalParameters} \mathrm{d}{\bf u} \mathrm{d}{\bf b} \nonumber\ , \end{eqnarray} where ${\bf \bar{\gamma}^o}$ is the observed secular variation. For this study, ${\bf \bar{\gamma}^o}$ is taken from the COV-OBS.x1 model of \citet{Gillet2013,Gillet2015b}, and the covariance matrix of the secular variation ${\bf \Sigma_{\gamma}^o}$, is derived from the $100$ ensemble members provided by the model. Because of the singular nature of the covariance matrix, only its diagonal part is kept. Under such conditions, the likelihood distribution reads: \begin{equation} p({\bf \bar{\gamma}^o}|{\bf u},{\bf b},\mathcal{M}) = \mathcal{N}(-{\bf A} ({\bf ub}),{\bf \Sigma_{\gamma}^o}) \ . \end{equation} \noindent The prior distribution of the magnetic field $p({\bf b})$ is decomposed into two parts. 
The first part provides the statistical properties of the large scale field ${\bf b}^<$, whereas the second part describes our prior knowledge on the small scale field ${\bf b}^>$. The large scale magnetic field is characterized by the prior distribution: \begin{equation} p({\bf b}^<) = \mathcal{N}({\bf \bar{b}^o},{\bf \Sigma_{b}^o}) \ , \end{equation} where ${\bf \bar{b}^o}$ and ${\bf \Sigma_{b}^o}$ are respectively the COV-OBS.x1 magnetic field and covariance matrix. As for the secular variation, ${\bf \Sigma_{b}^o}$ is evaluated with the $100$ ensemble members of the model, and only its diagonal part is kept. \noindent The small scale magnetic field is chosen to be at any time isotropically distributed, with a zero mean and a covariance ${\bf \Sigma_{b^>}}$ deriving from the extrapolation of the large scale field spectrum $E_{b^<}(l)$. Here we chose the formulation proposed by \citet{Buffett2007} to characterize the magnetic field spectrum at the CMB. It reads: \begin{equation} E_{b^>}(l) = C_1 \chi^l \label{buffettExt} \end{equation} where $\chi=0.99$. To determine the constant $C_1$, we used the COV-OBS.x1 magnetic field sampled every two years between $1900.0$ and $2014.0$, and performed a weighted least-squares fit of the associated energy spectra between SH degree $l=2$ and $l=13$. We obtained that $C_1 = 7.15 \times 10^9$ nT${}^2$. Note that another type of extrapolation has also been tried, assuming an exponential decay of the magnetic field spectrum. Although we do not show the results associated with this modeling, we observed that such an assumption would provide insufficient levels of energy at small scales, leading to suboptimal predictions of the magnetic field evolution. 
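A simplified version of this fit can be written in a few lines. The sketch below performs an unweighted log-space least-squares fit on synthetic data; the paper's actual fit is weighted and uses the COV-OBS.x1 spectra, which are not reproduced here.

```python
import numpy as np

CHI = 0.99  # fixed decay factor of the Buffett (2007) form E(l) = C1 * chi^l

def fit_c1(l, E):
    """Least-squares estimate of C1 in log space, chi held fixed:
    log E(l) - l*log(chi) should be constant and equal to log(C1)."""
    log_resid = np.log(E) - l * np.log(CHI)
    return np.exp(np.mean(log_resid))

# synthetic spectrum over l = 2..13 built from the model recovers C1 exactly
l = np.arange(2, 14)
E = 7.15e9 * CHI**l
print(fit_c1(l, E))  # ~7.15e9
```

With $\chi$ so close to unity, $E_{b^>}(l)$ is nearly flat over the fitted range, which is why the exponential-decay alternative mentioned above underestimates the small-scale energy.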
From the extrapolation given in equation (\ref{buffettExt}) we construct the covariance of the small scale magnetic field $\Sigma_{b^>}$ at a given time through the relation: \begin{equation} \overline{b_{l,m}^> b_{l^\prime,m^\prime}^>} = \frac{ E_{b^>}(l)}{(l+1)(2l+1)} \delta_{l l^{\prime}} \delta_{m m^{\prime}} \ .\label{SSBcov} \end{equation} Neglecting a priori the temporal correlations between the small scales of the magnetic field, the full covariance ${\bf \Sigma_{b^>}}$ is simply a block-diagonal matrix whose blocks are identical and given by $\Sigma_{b^>}$. \noindent As expressed in the previous section, the prior distribution of the velocity field conditioned by the AR parameters $p({\bf u}|\mathcal{M})$ is given by equation (\ref{priorU}). \noindent The last distribution entering equation (\ref{marginalParameters}) is the prior distribution of the AR parameters $p(\mathcal{M})$. These parameters depend on the magnitudes $A$'s, the slopes ${P}$'s, and the spherical harmonics ranges ${\Delta}$'s of the flow stationary spectra and the AR memory terms. Whereas the ranges $\Delta$'s will be a priori imposed, and therefore considered as known, the magnitudes and slopes are completely undetermined. To reflect this lack of knowledge we characterize them by uniform distributions such that: \begin{eqnarray} p(A) &=& \mathcal{U}(0, \infty) \label{priorMA} \\ p(P) &=& \mathcal{U}(-\infty, \infty) \label{priorMP} \ . \end{eqnarray} The full prior distribution of $\mathcal{M}$ is simply the product of the prior distributions of each individual AR parameter. 
\noindent Finally, following the development of \citet{Baerenzung2016}, the posterior distribution of the AR parameters given the secular variation of equation (\ref{marginalParameters}) is approximated by the following distribution: \begin{equation} p(\mathcal{M}|{\bf \bar{\gamma}^o}) = \frac{\mathrm{exp}\left[-\frac{1}{2} { \mathbf{ \bar{\gamma}} ^{\mathbf{o}T} } \bf{\Sigma_{\mathcal{M}_{|\bar{\gamma}^o}}^{-1}} \bf{\bar{\gamma}^o}\right]} {(2\pi)^{\frac{d}{2}}|\bf{\Sigma_{\mathcal{M}_{|\bar{\gamma}^o}}}|^{\frac{1}{2}}} \label{posteriorParameter} \end{equation} \noindent where $d$ is the dimension of the secular variation vector. To construct the matrix $\bf{\Sigma_{\mathcal{M}_{|\bar{\gamma}^o}}}$, the covariance between ${\bar{\gamma}^o}$ at a time $t_\alpha$ and ${\bar{\gamma}^o}$ at a time $t_\beta$, with respect to the distribution $p({\bf \bar{\gamma}^o}|{\bf u},{\bf b},\mathcal{M}) p({\bf b}) p({\bf u}|\mathcal{M})$, is calculated for every combination of epochs considered. The component at row index $i$ and column index $j$ of the resulting covariance matrix $\Sigma_{\mathcal{M}_{|\bar{\gamma}^o}}^{t_\alpha t_\beta}$ reads: \begin{eqnarray} \left(\Sigma_{\mathcal{M}_{|\bar{\gamma}^o}}^{t_\alpha t_\beta} \right)_{ij} & = & \left(\Sigma_{\gamma}^{o^{t_\alpha t_\beta}}\right)_{ij} + \left( A_{\bar{b}_{o}}(t_\alpha)\Sigma_{u_{|\mathcal{M}}}^{t_\alpha t_\beta} A_{\bar{b}_{o}}^T(t_\beta)\right)_{ij} \nonumber\\ & & + A_{imn} \left(\Sigma_{u_{|\mathcal{M}}}^{t_\alpha t_\beta}\right)_{mr} \left(\Sigma_{b}^{o^{t_\alpha t_\beta}}\right)_{ns} A_{jrs} \ , \end{eqnarray} where we recall that the third order tensor $A$ is defined such that $A_{ijk}(u)_j(b)_k = \left(A_{b} u \right)_i = \left(A_u b \right)_i$.
\subsection{Sequential assimilation of the core secular variation and magnetic field} \noindent To combine our dynamical model for the magnetic field and the velocity field at the CMB with a magnetic field and secular variation model derived from geomagnetic data, we implemented the Ensemble Kalman filter (EnKF) approach proposed by \citet{Evensen2003}. This method proceeds in two steps. In the first one, referred to as the prediction step, the spatio-temporal evolution of the state variables (the velocity and magnetic fields), represented through an ensemble, is forecasted until data become available. Then the second step, called the analysis, is initiated, and each member of the ensemble is corrected according to the data. \noindent As mentioned previously, while the dynamical behavior of the velocity field is prescribed by a first order auto regressive process, the evolution of the magnetic field is constrained by the frozen flux equation. Nevertheless, directly solving the FF equation is numerically unstable. Indeed, since this equation contains no diffusion mechanism, cascading magnetic energy tends to accumulate on the smallest simulated scales, and slowly contaminates the entire field through non linear interactions. To counter this effect an extra hyperdiffusion term is added to the FF equation. Under such conditions the prediction step of the EnKF for each member $k$ of the ensemble of velocity and magnetic fields $\left\{u,b \right\}$ is given by: \begin{eqnarray} u_k(t+\Delta_t) &=& \Gamma(\Delta_t)u_k(t) + \xi_k(\Delta_t) \label{forecastU}\\ \partial_t b_k(t) &=& -A({u_k}(t) b_k(t)) - \eta_D \Delta_H^4 b_k(t)\label{forecastB} \end{eqnarray} where the memory term $\Gamma$ and the random noise $\xi$ of the AR process are scaled according to the time step $\Delta_t$. The hyperdiffusivity is set to $\eta_D = 9\times 10^{13}$ km${}^8$.yr${}^{-1}$.
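The damping produced by this term can be quantified directly: a degree-$l$ coefficient decays as $\mathrm{exp}[-\eta_D (l(l+1)/r^2)^4 t]$, so the energy decays twice as fast. The short Python check below reproduces the losses quoted in the text, assuming a CMB radius of $3480$ km (a value not stated explicitly here):

```python
import math

# Energy removed by the hyperdiffusion term -eta_D * Delta_H^4 b after
# T years at SH degree l.  The horizontal Laplacian eigenvalue is
# -l(l+1)/r^2; the energy (quadratic in b) decays at twice the field rate.

ETA_D = 9e13      # km^8 / yr, value quoted in the text
R_CMB = 3480.0    # km, assumed core mantle boundary radius
T = 100.0         # yr

def energy_lost(l: int) -> float:
    """Fraction of degree-l magnetic energy removed after T years."""
    rate = ETA_D * (l * (l + 1) / R_CMB**2) ** 4
    return 1.0 - math.exp(-2.0 * rate * T)

for l in (13, 26, 39):
    print(f"l={l}: {100 * energy_lost(l):.2f}% of energy lost")
```

This reproduces the $0.09\%$, $18\%$ and $99\%$ figures given in the text: the term is negligible over the observed range and only bites near the truncation scales.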
If the magnetic field was only diffused over time with such a hyperdiffusion, in $100$ years it would lose $0.09\%$, $18\%$ and $99\%$ of its energy at spherical harmonics degrees $13$, $26$ and $39$ respectively. The numerical resolution of the FF equation is performed through an Euler scheme for the first iteration, and a second order Adams-Bashforth scheme for the following ones. The time step $\Delta_t$ has been set to half a year. \noindent For the analysis step of the EnKF, both the observed magnetic field and secular variation are assimilated simultaneously. To do so, the model state has to be augmented in order to take the secular variation into account. Therefore, from each pair $(u_k^f,b_k^f)$ of the forecasted ensemble, a prediction for the observables is built according to the following relations: \begin{eqnarray} \gamma_k^f &=& -A(u_k^f b_k^f) \label{gammaFor}\\ b^{f<}_k &=& H b_k \end{eqnarray} where the linear operator $H$ simply truncates the forecasted magnetic field at the level of the observed one. Normally equation (\ref{gammaFor}) should contain the hyperdiffusion term of equation (\ref{forecastB}). However, the effects of this hyperdiffusion are so weak on the range of scales describing the observed secular variation (the spectral expansion of the COV-OBS.x1 SV does not exceed SH degree $l=13$ in this study) that they are neglected. \noindent From the augmented ensemble $\left\{u^f,b^f,\gamma^f,Hb^f \right\}$ the covariances necessary for the analysis step of the EnKF are calculated.
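The update that follows is the standard perturbed-observation EnKF analysis; the toy Python sketch below shows its structure for a scalar state with an identity observation operator (illustrative only, far smaller than the augmented fields used here):

```python
import numpy as np

# Perturbed-observation EnKF analysis for a scalar state: each member is
# corrected with the Kalman gain built from the ensemble covariance, using
# one perturbed realization of the observation per member.

rng = np.random.default_rng(1)
N = 500                                 # ensemble size (toy value)
prior = rng.normal(0.0, 2.0, N)         # forecast ensemble, std 2
obs, obs_std = 1.0, 1.0                 # observation and its error

P = np.var(prior, ddof=1)               # ensemble (co)variance
K = P / (P + obs_std**2)                # Kalman gain

y = obs + rng.normal(0.0, obs_std, N)   # perturbed observations
analysis = prior + K * (y - prior)

print(f"spread: {prior.std():.2f} -> {analysis.std():.2f}")
```

The analysis spread contracts towards $\sqrt{PR/(P+R)}$, mirroring how each assimilation of COV-OBS.x1 data tightens the ensemble of flows and fields.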
Recalling that the covariance of a field $a^f$ is referred to as $\Sigma_a^f$ and the covariance between a field $a^f$ and a field $b^f$ is expressed as $\Sigma_{ab}^f$, each updated pair of velocity field and magnetic field $(u_k^a,b_k^a)$ is given by: \begin{eqnarray} \begin{pmatrix} u_k^a \\ b_k^a \end{pmatrix} &=& \begin{pmatrix} u_k^f \\ b_k^f \end{pmatrix} + \begin{pmatrix} \Sigma_{u\gamma}^f & \Sigma_{ub}^fH^T \\ \Sigma_{b\gamma}^f & \Sigma_{b}^f H^T \end{pmatrix}\\ &\times& \begin{pmatrix} \Sigma_{\gamma}^f+\Sigma_{\gamma}^o & \Sigma_{\gamma b}^fH^T \\ H\Sigma_{b\gamma}^f & H\Sigma_{b}^f H^T + \Sigma_{b}^o \end{pmatrix}^{-1} \begin{pmatrix} \gamma_{k}^o - \gamma_k^f\\ b_{k}^o - Hb_k^f \end{pmatrix}\nonumber \label{enfk} \end{eqnarray} where $\gamma_{k}^o$ and $b_{k}^o$ are random realizations from the distributions of the COV-OBS.x1 model for respectively the secular variation $p(\gamma^o)=\mathcal{N}(\bar{\gamma}^o,\Sigma_{\gamma^o})$ and the magnetic field $p(b^o)=\mathcal{N}(\bar{b}^o,\Sigma_{b^o})$. \section{Geophysical application}\label{results} \subsection{Numerical set up}\label{numericalSetUp} \noindent The time period considered in this study is $1900.0-2014.0$, and the magnetic field and secular variation data are taken from the COV-OBS.x1 model with a $2$ year sampling rate, corresponding to the knots of the model's B-spline expansion. Every simulation is performed through a pseudo spectral approach on the Gauss-Legendre grid provided by \citet{Schaeffer2013}. Both the poloidal and toroidal parts of the velocity field are expanded up to SH degree $l=26$, and the radial component of the magnetic field is expressed up to SH degree $l=39$ in order for the field to possess a large enough diffusion range. Whereas the COV-OBS.x1 magnetic field is always taken up to SH degree $l=13$, the expansion of the COV-OBS.x1 secular variation depends on the variance level associated with each scale.
If globally at a certain scale the standard deviation of the secular variation is larger than the absolute value of the mean field, the total field is truncated at this scale. Under such a condition, the COV-OBS.x1 secular variation is taken up to SH degrees $l=10$, $l=11$, $l=12$ and $l=13$ for the respective time windows $[1900-1923]$, $[1924-1943]$, $[1944-1963]$ and $[1964-2010]$. Finally, the state of the system is characterized by $40000$ pairs of magnetic field and velocity field at the core mantle boundary. \subsection{Estimation of the flow optimal auto regressive parameters}\label{AREstimation} \noindent To simulate the spatio-temporal evolution of the flow at the CMB, the parameters of the auto regressive process have to be estimated. We recall that their posterior distribution, $p(\mathcal{M}|{\bf \bar{\gamma}^o})$, is expressed in equation (\ref{posteriorParameter}). By maximizing this distribution, one should obtain the optimal parameters for the AR process. However, instead of estimating both temporal and spatial parameters simultaneously, we proceed in two steps. First, only the spatial covariance of the velocity field is evaluated, following the method proposed and tested by \citet{Baerenzung2016}. This approach consists in maximizing the distribution $p(\mathcal{M}|{\bf \bar{\gamma}^o})$ in which only the block diagonal part of the covariance matrix $\Sigma_{\mathcal{M}_{|\bar{\gamma}^o}}$ is kept. Once the spatial covariance is determined, it is assumed to be known, and the maximum of $p(\mathcal{M}_\Gamma|{\bf \bar{\gamma}^o},\hat{\mathcal{M}}_\Sigma)$ is calculated. \noindent Following the developments of section \ref{flowParameters}, the AR parameters are decomposed into scale dependent power laws exhibiting different spectral ranges.
\citet{Baerenzung2016} showed that if the stationary spectra of the flow are decomposed into two spectral ranges, the optimal scales where the transition in slope occurs are $l=3$ and $l=8$ for the toroidal and poloidal energy spectra respectively. Here, whereas we keep the same decomposition for the spectrum associated with the poloidal field, more degrees of freedom are allowed for the toroidal field spectrum. Since we wish to accurately determine the spatio-temporal evolution of the eccentric gyre, the toroidal field components at SH degrees $l=1$ and $l=2$, the main components of the gyre, are free to exhibit any variance level and characteristic time. Similarly, toroidal field components at SH degree $l=3$ are also assumed to be unconstrained by surrounding velocity field scales. This choice is motivated by the particularly low level of energy that these scales exhibit over recent epochs (see \citet{Baerenzung2016, Whaler2016}). Finally, one spectral range is used to characterize the toroidal field spatial variance and memory effects between SH degrees $l=4$ and $l=26$. All in all, the AR parameters associated with the toroidal field exhibit the four respective spectral ranges $\Delta_0 = [1]$, $\Delta_1 = [2]$, $\Delta_2 = [3]$ and $\Delta_3 = [4,26]$. \noindent As mentioned in the beginning of the section, the estimation of the stationary energy spectra parameters is performed between $1900.0$ and $2014.0$, taking the COV-OBS.x1 magnetic field and secular variation every $\Delta_t=2$ years. On figure \ref{priorSpectra} the resulting power law spectra are displayed with crosses. As already observed in \citet{Baerenzung2016}, the toroidal field (in black), and in particular its large scales (SH degrees $l=1$ and $l=2$), exhibits a much larger energetic level than the poloidal field (in gray). Nevertheless, the toroidal energy spectrum also presents a strong increase of energy towards its smallest scales.
This effect, which is in contradiction with the results of \citet{Baerenzung2016}, is very likely attributable to a slight underestimation of the COV-OBS.x1 secular variation uncertainties. Therefore, in order to better estimate the small scale energy spectra of the velocity field, we performed different estimations of the stationary spectra parameters by varying the time window over which the evaluation is computed. We found that the longest period over which the spectra did not exhibit an anomalous behavior was $1970.0-2014.0$. The resulting prior spectra are shown in figure \ref{priorSpectra} with circles. Combining the small scale spectra of the $1970.0-2014.0$ evaluation with the large scale ones of the $1900.0-2014.0$ estimation, we obtain the final prior spectra for both the toroidal and poloidal fields displayed in figure \ref{priorSpectra} with solid lines. The values associated with the spectra parameters are given in table \ref{spectraTable}. \begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth]{./Figure1-eps-converted-to.pdf} \caption{Prior kinetic energy spectra for the toroidal part of the velocity field (black) and for its poloidal part (gray). Estimations with the COV-OBS.x1 secular variation and magnetic field model between $1900.0$ and $2014.0$ (crosses) and between $1970.0$ and $2014.0$ (circles). The solid lines are the combination of the two evaluations, used as prior information to parametrize the autoregressive process for the flow.}\label{priorSpectra} \end{center} \end{figure} \begin{table} \caption{Combined optimal covariance parameters $\hat{\mathcal{M}}_\Sigma$ of the flow auto regressive process, within the $1970.0-2014.0$ and the $1900.0-2014.0$ periods.
$A_i$ correspond to the magnitudes and $P_i$ to the slopes of the prior stationary spectra, within the spectral ranges $\Delta_i$ (see equations (20-21) and (25-26)).} \centering \begin{tabular}{c c c c c } \hline Flow field & index $i$ & $\Delta_i$ & $A_{i}$ & $P_i$ \\ \hline \multirow{4}{*}{Toroidal } & $1$ & $1$ & $5.41$ & $0$ \\ & $2$ & $2$ & $4.56$ & $0$ \\ & $3$ & $3$ & $1.71$ & $0$ \\ & $4$ & $[4\ ,26]$ & $2.05$ & $5.8\times 10^{-2}$ \\ \hline \multirow{2}{*}{Poloidal } & $1$ & $[1\ ,8]$ & $1.52$ & $0.54$ \\ & $2$ & $[8\ ,26]$ & $663$ & $6.4$ \\ \hline \label{spectraTable} \end{tabular} \end{table} \noindent The spatial covariances of the AR process being characterized, the evaluation of the memory terms is now performed by maximizing the distribution $p(\mathcal{M}_\Gamma|{\bf \bar{\gamma}^o},\hat{\mathcal{M}}_\Sigma)$ within the $1900.0-2014.0$ time window. The results, expressed through the scale dependent characteristic time $\tau(l) = -\frac{\Delta_t}{\log(\Gamma(l))}$, are shown on figure \ref{timescale} and the parameters of the memory terms are given in table \ref{tableParameters}. The most striking feature one can observe is the very long memory time (of the order of a thousand years) associated with the main component of the eccentric gyre. This indicates that this structure is very persistent over time. Nevertheless, these values should be taken with care since their evaluation is performed on a comparatively short time window of $114$ years. In contrast with the large scale field, the toroidal field components at spherical harmonics degrees varying from $l=3$ to $l=26$ exhibit lower characteristic memory times, with a decaying behavior where $\tau(l=3)\sim 50$ yr and $\tau(l=26)\sim 30$ yr. Note that this limiting time is similar to the e-folding time of the Geodynamo as calculated by \citet{Hulot2010,Lhuillier2011}.
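The conversion from the fitted memory terms to the tabulated characteristic times can be checked numerically; the sketch below assumes $\Delta_t = 0.5$ yr, the numerical integration time step, with which the toroidal entries of table \ref{tableParameters} are recovered:

```python
import math

# Characteristic time tau = -dt / log(Gamma) of the AR(1) process, for the
# memory terms 1-Gamma listed in the table.  dt = 0.5 yr is an assumption
# (the integration time step quoted earlier).

DT = 0.5  # yr

def tau(one_minus_gamma: float) -> float:
    """Characteristic time in years for a memory term Gamma."""
    return -DT / math.log(1.0 - one_minus_gamma)

for one_minus_gamma, tabulated in [(1.43e-4, 3495), (5.03e-4, 994), (8.53e-3, 58)]:
    print(f"1-Gamma = {one_minus_gamma:.2e}: tau = {tau(one_minus_gamma):.0f} yr "
          f"(table: {tabulated} yr)")
```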
The characteristic times associated with the poloidal field indicate that the latter presents a slower dynamical behavior at large scales than at small scales, with $\tau(l)$ varying from $\tau(l=1)\sim 400$ yr to $\tau(l=8) \sim 40$ yr. \begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth]{./Figure2-eps-converted-to.pdf} \caption{Prior characteristic timescale $\tau(l)$ for the autoregressive process of the flow, associated with the toroidal part of the velocity field (in black) and its poloidal part (in gray). }\label{timescale} \end{center} \end{figure} \begin{table} \caption{Optimal parameters for the memory term of the auto regressive process $\hat{\mathcal{M}}_\Gamma$, evaluated within the $1900-2014$ time window. $A_i$ and $P_i$ are respectively the magnitudes and the slopes of the assumed power laws within the spectral ranges $\Delta_i$. Also given are the characteristic times of the AR process $\tau = -\frac{\Delta_t}{\log(\Gamma)}$, expressed in years. } \centering \begin{tabular}{c c c c c c c} \hline Flow field & index $i$ & $\Delta_i$ &$1-A_{i}$ & $P_i$ &$\tau(\Delta_i)$ \\ \hline \multirow{4}{*}{Toroidal } & $1$ & $1$ & $1.43\times 10^{-4}$ & $0$ & $3495$ \\ & $2$ & $2$ & $5.03\times 10^{-4}$ & $0$ & $994$ \\ & $3$ & $3$ & $8.53\times 10^{-3}$ & $0$ & $58$ \\ & $4$ & $[4\ ,26]$ & $5.42\times 10^{-3}$ & $5.72\times 10^{-3}$ & $[53\ ,34]$ \\ \cline{1-6} \multirow{2}{*}{Poloidal } & $1$ & $[1\ ,8]$ & $1.24\times 10^{-3}$ & $1.13\times 10^{-3}$ & $[403\ ,38]$ \\ & $2$ & $[8\ ,26]$ & $-1.47\times 10^{-2}$ & $2.65\times 10^{-2}$ & $[38\ ,17]$ \\ \hline \label{tableParameters} \end{tabular} \end{table} \subsection{General properties of the flow at the core mantle boundary}\label{flow} \noindent The AR process for the flow being parametrized, the estimation of the velocity field and magnetic field at the CMB through the EnKF algorithm can be performed.
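Between two analyses, each flow coefficient is advanced with the AR(1) step of equation (\ref{forecastU}). The Python sketch below (toy values, not the fitted ones) checks the key property that the noise scaling preserves the prescribed stationary variance:

```python
import numpy as np

# AR(1) forecast for a single flow coefficient: u(t+dt) = Gamma*u(t) + xi,
# with var(xi) = sigma^2 * (1 - Gamma^2) so that the stationary variance
# stays at the prescribed sigma^2.  Gamma and sigma^2 are illustrative.

rng = np.random.default_rng(0)
gamma = 0.99              # memory term per step (illustrative)
sigma2 = 4.0              # prescribed stationary variance (illustrative)
noise_std = np.sqrt(sigma2 * (1.0 - gamma**2))

u, samples = 0.0, []
for _ in range(200_000):
    u = gamma * u + rng.normal(0.0, noise_std)
    samples.append(u)

var_hat = np.var(samples[10_000:])     # discard burn-in
print(f"empirical variance = {var_hat:.2f} (target {sigma2})")
```

The closer $\Gamma$ is to one (long $\tau$), the slower the ensemble re-inflates towards this stationary variance between analyses, which is exactly the scale-dependent behavior discussed in the following.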
To initialize the fields in $1900.0$, we applied the Gibbs sampling algorithm proposed by \citet{Baerenzung2016}, which is detailed in appendix \ref{appA}. However, instead of sampling the joint posterior distribution of the flow and the magnetic field at the $1900.0$ epoch only, we sampled the distribution characterizing simultaneously the fields in $1900.0$, $1950.0$ and $2000.0$, in order to constrain the initial state with recent observations. \noindent Because of the sequential nature of the EnKF algorithm, the accuracy of the estimated fields is not constant over time, but increases whenever new data are assimilated. This effect is well illustrated by figure \ref{spectra}, displaying at two different epochs, $1900.0$ (gray) and $2000.0$ (black), the energy spectra of the toroidal (top) and poloidal (bottom) mean velocity fields (thick solid lines) and uncertainty fields (circles). Whereas in $1900.0$ the mean toroidal field exhibits a level of energy larger than or of the same order as the variance of the field up to SH degree $l=3$, in $2000.0$ reliable information becomes available up to SH degree $l=9$. Above these scales the posterior variance of the flow rapidly reaches its prior level. One can also notice that the larger the scale, the stronger the variance reduction over time. The latter observation, which is also valid for the poloidal field, is linked to the behavior of the characteristic times $\tau(l)$ associated with the different flow scales (see figure \ref{timescale}). The fact that $\tau(l)$ is a strictly decaying function of $l$ implies that small scale velocity field components will exhibit a higher randomization rate than large scale ones, and so between two analysis steps the prior variance will increase faster at small than at large scales.
\begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth]{./Figure3-eps-converted-to.pdf} \caption{Toroidal (top) and poloidal (bottom) energy spectra associated with the ensemble mean fields (thick lines) and standard deviation (circles), for the $1900.0$ (gray) and $2014.0$ (black) epochs. Thin lines correspond to the prior spectra. }\label{spectra} \end{center} \end{figure} \noindent In physical space, the gain of flow accuracy over time is particularly striking for the toroidal part of the velocity field, as shown on figure \ref{flows}. In this figure, the toroidal (left) and poloidal (right) mean velocity fields are displayed with black arrows for three different epochs, $1900.0$ (top), $1950.0$ (middle) and $2000.0$ (bottom). Color maps, representing the $90\%$ confidence interval on the velocity field orientation, provide information on locations where the mean flow direction can be reliably estimated (violet and blue) or not (red). In $1900.0$, very few parts of the eccentric gyre can be confidently estimated. Only the westward flow below Africa and the Atlantic ocean, the Southern branches of the gyre, and the Northern circulation around and partially inside the tangent cylinder (the cylinder tangent to the inner core and aligned with the axis of rotation of the Earth) appear as reliable patterns. At later times the gyre is well defined, and many of its small scale structures become visible. Globally, uncertainties on the toroidal part of the flow are decreasing with time. This does not seem to be the case for the poloidal field, where in $1950.0$ reliable patterns cover a larger surface of the CMB than in $2000.0$. Nevertheless, the r.m.s. velocities of the poloidal field and associated standard deviations, of $3.49\pm0.23$ in $1950.0$ and $2.49\pm0.21$ in $2000.0$, indicate that the global uncertainty level of the poloidal field, as well as its magnitude, decreased between $1950.0$ and $2000.0$.
\begin{figure}[h] \begin{center} \includegraphics[width=0.95\linewidth]{./Figure4-eps-converted-to.pdf} \caption{Toroidal (left) and poloidal (right) velocity fields in $1900.0$ (top), $1950.0$ (middle) and $2000.0$ (bottom). Arrows correspond to the mean fields (over the ensemble), and color maps display the $90\%$ confidence interval on their orientation. Note that the scaling for the velocity field is different between the toroidal and poloidal parts of the flow.}\label{flows} \end{center} \end{figure} \noindent Contrary to flow models constrained to be geostrophic (see \citet{Bloxham1991,Amit2013}), up and downwelling fluid motions, which convert into poloidal field when meeting the CMB, are not particularly located around the equator. Instead, a strong and persistent poloidal structure evolves below the Indian ocean and South Africa. According to \citet{Bloxham1986}, such a poloidal field could be at the origin, through the expulsion of magnetic flux from the outer core, of the intense reversed flux patch located there. Although the frozen flux equation cannot model the transport of magnetic structures between the core and its outer boundary, poloidal field spreading or concentrating magnetic patches around locations of intense up or downwelling motions can be detected. \noindent Other specific features of the velocity field are in apparent contradiction with a possible geostrophic state of the outer core flow. These include the reliable part of the toroidal field penetrating the tangent cylinder, and its component crossing the equator below India and South America, as already reported in other studies (see \citet{Barrois2017}). Particularly striking is the apparent violation by the eccentric gyre of the equatorial symmetry condition imposed by quasi-geostrophy (see \citet{Amit2004}).
Indeed, the flow responsible for the westward drift, together with the circulations around the tangent cylinder, exhibits a different level of intensity in the Northern hemisphere than in the Southern one. These visual observations are confirmed in figure \ref{rms}, where the r.m.s. velocity and associated standard deviation of the toroidal part of the flow are measured in different locations of the CMB. One can observe on the top right of figure \ref{rms} that the flow evolving below Africa and the Atlantic ocean, within the areas shown on the bottom right of the figure, is at any time more intense in the South than in the North. For the circulations around the tangent cylinder, both the southern and northern parts exhibit similar levels of energy between $1900.0$ and $1960.0$, as shown on the bottom left of figure \ref{rms}. However, in $1960.0$ the flow below Alaska and the Eastern part of Siberia starts to accelerate, and intensifies almost continuously over the last decades to reach an r.m.s. velocity around $23$ km.yr${}^{-1}$ in $2014.0$. This acceleration has already been observed during the satellite era by \citet{Livermore2017}. Here we observe that it originates at much earlier times. Although the toroidal part of the flow exhibits some clear deviations from geostrophy, comparison of its r.m.s. velocity over the entire CMB (black symbols on the top left of figure \ref{rms}) with the r.m.s. velocity of its equatorially symmetric part (gray symbols) shows that the latter component remains dominant at all times. \begin{figure*}[h] \begin{center} \includegraphics[width=0.85\linewidth]{./Figure5-eps-converted-to.pdf} \caption{r.m.s. velocity of the toroidal part of the flow and associated standard deviation in different locations of the core mantle boundary. Colored curves correspond to the r.m.s. velocity within the areas surrounded by the same colored contour on the bottom right of the figure. The black and gray symbols on the top left correspond to the r.m.s.
velocity of respectively the toroidal flow and its equatorially symmetric part over the entire surface of the CMB. The arrows on the bottom right are associated with the $1980.0-2014.0$ time averaged mean toroidal field. }\label{rms} \end{center} \end{figure*} \subsection{Variations in length of day}\label{lodSection} \noindent Many factors influence the Earth's rotation and thus the length of the day (LOD) $\Lambda$. Variations on short timescales are mostly caused by an exchange of angular momentum between the Earth's solid body and atmospheric and oceanic currents. \citet{Jault1988} have shown that the coupling between core and mantle induces decadal LOD changes. Nevertheless, the modification of the Earth's oblateness ($J_2$) through the melting of glaciers and ice caps and through the global sea level rise is also thought to have a non negligible influence on the LOD over such timescales (see \citet{Munk2002}). Over centennial and millennial timescales, two principal mechanisms modify the rotation rate of the Earth. The first one is tidal friction. By deforming the Earth's surface, tidal forces induce a dissipation of energy in the Earth-Moon system. As a consequence, the rotation rate of the Earth decreases, and the LOD increases at a rate of $\dot{\Lambda}_{\textsc{\tiny{MOON}}} \sim 2.4$ ms/cy as estimated by \citet{Williams2016}. The other phenomenon modifying the long term variations in LOD is the glacial isostatic adjustment (GIA). When accumulating on the polar caps over the last glaciation period, ice was compressing the mantle, inducing an increase of the Earth's oblateness. Once the ice melted, the mantle tended to regain its initial shape at a rate depending on its viscosity profile. \citet{Peltier2015} has shown that the associated decrease in oblateness would decrease the LOD at a rate of $\dot{\Lambda}_{\textsc{\tiny{GIA}}} \sim -0.6$ ms/cy.
\noindent The sum of the latter effects, $\dot{\Lambda}_{\textsc{\tiny{MOON}}} + \dot{\Lambda}_{\textsc{\tiny{GIA}}} \sim 1.8$ ms/cy, explains very well the millennial trend of increase in LOD of $1.78$ ms/cy observed by \citet{Stephenson2016}. Nevertheless, if one accounts for the influence of the global sea level rise and ice melting over the last century (see \citet{Church2011,Hay2015}), the trend in the observed variations in LOD (${\Lambda}_{\textsc{\tiny{OBS}}}$) should be slightly larger than $1.8$ ms/cy. More precisely, through measurements of the Earth's oblateness with satellite laser ranging, \citet{Cheng2013} have shown that the oblateness was decreasing before the 1990's, but not sufficiently to be explained by the GIA alone, and increasing after. Therefore, the trend in ${\Lambda}_{\textsc{\tiny{OBS}}}$ should lie between $1.8$ ms/cy and $2.4$ ms/cy before the 1990's, and it should be larger than $\dot{\Lambda}_{\textsc{\tiny{MOON}}} =2.4$ ms/cy after. \noindent As highlighted by \citet{Munk2002}, these estimations are in contradiction with the trend of $1.4$ ms/cy observed over almost the last $200$ years. Therefore, either a mechanism increasing the rotation rate of the Earth is missing, or GIA effects are underestimated. \citet{Mitrovica2015} showed that combining a GIA model exhibiting a lower $J_2$ rate than the one of \citet{Peltier2015} with lower estimates of the global sea level rise over the last century could explain the discrepancies between the observed and predicted trends in ${\Lambda}$. Nevertheless, for this model to be compatible with the millennial trend, a mechanism slowing down the rotation of the Earth over very long periods of time is necessary. They suggested that the outer core flow estimated by \citet{Dumberry2006} could be responsible for this. This means that the oscillations of the core angular momentum highlighted by \citet{Dumberry2006} would also be accompanied by a global decay.
\noindent Because of the uncertain nature of the trend in ${\Lambda}$, we estimated the optimal value it should take according to our ensembles of velocity fields. To compute the variations in LOD deriving from the flow at the core mantle boundary, we used the formula of \citet{Jault2015}, which reads: \begin{equation} {\Lambda}_{\textsc{\tiny{FLOW}}} = 1.232(\psi_1^0 + 1.776 \psi_3^0 + 0.08 \psi_5^0 + 0.002 \psi_7^0)\ . \label{Jault} \end{equation} \noindent Based on the previous arguments, we decompose the LOD into the core contribution ${\Lambda}_{\textsc{\tiny{FLOW}}}$ and an additional long-term linear trend with rate $a$ modeling the effects discussed above, such that: \begin{equation} \Lambda(t) = a t + b + {\Lambda}_{\textsc{\tiny{FLOW}}}(t) \end{equation} where $b$ is related to a reference observation at $t_0$: $b= \Lambda_{\textsc{\tiny{OBS}}}(t_0) - a t_0$. Instead of prescribing $a$ and $b$, we search for their optimal values according to our core flow model in a Bayesian approach. Assuming that $a$ and $b$ are a priori unknown (uniform priors over infinite ranges), the posterior distribution of $a$ can be expressed as: \begin{equation} p(a|{\Lambda}_{\textsc{\tiny{OBS}}}) \sim \int p({\Lambda}_{\textsc{\tiny{OBS}}}|a,b,{\Lambda}_{\textsc{\tiny{FLOW}}}) p({\Lambda}_{\textsc{\tiny{FLOW}}})p(b) \mathrm{d}b \mathrm{d}{\Lambda}_{\textsc{\tiny{FLOW}}} \ .\label{posteriorTrend} \end{equation} Because of the abrupt change in the Earth's oblateness during the $1990$'s, we restrict the analysis to pre-$1990.0$ epochs. Observed LOD variations ${\Lambda}_{\textsc{\tiny{OBS}}}$ are taken from \citet{Gross2001}, who also provides uncertainty estimates $\Sigma_{{\Lambda}_{\textsc{\tiny{OBS}}}}$.
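Equation (\ref{Jault}) is a simple linear combination of the zonal toroidal coefficients; for reference, it can be encoded as below (the $\psi_l^0$ values in the example are arbitrary placeholders, not model estimates):

```python
# Length-of-day variation induced by the core flow, equation (Jault):
# Lambda_FLOW = 1.232*(psi_1^0 + 1.776 psi_3^0 + 0.08 psi_5^0 + 0.002 psi_7^0).

COEFFS = {1: 1.0, 3: 1.776, 5: 0.08, 7: 0.002}

def lod_from_flow(psi: dict) -> float:
    """Lambda_FLOW for the given zonal toroidal coefficients psi[l]."""
    return 1.232 * sum(c * psi.get(l, 0.0) for l, c in COEFFS.items())

# Placeholder coefficients: the l=1 term dominates unless psi_3^0 is of
# comparable magnitude.
print(lod_from_flow({1: 1.0, 3: 0.1, 5: 0.1, 7: 0.1}))
```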
The likelihood distribution is approximated by a Gaussian such that \begin{equation} p({\Lambda}_{\textsc{\tiny{OBS}}}|a,b,{\Lambda}_{\textsc{\tiny{FLOW}}}) = \mathcal{N}\left({\Lambda}_{\textsc{\tiny{OBS}}} - a\textbf{t} - {b} - \bar{{\Lambda}}_{\textsc{\tiny{FLOW}}}, \Sigma_{{\Lambda}_{\textsc{\tiny{OBS}}}}\right). \end{equation} The prior distribution of ${\Lambda}_{\textsc{\tiny{FLOW}}}$ is also assumed to be Gaussian, with a mean and a covariance deriving from the ensemble of ${\Lambda}_{\textsc{\tiny{FLOW}}}$ time series calculated with equation (\ref{Jault}). The distribution $p(a|{\Lambda}_{\textsc{\tiny{OBS}}})$ is thus also a Gaussian distribution. Our computation suggests a mean of $\bar{a} = 2.2\,$ms/cy and a large standard deviation of $\sigma_a=2.2\,$ms/cy which encompasses the values discussed above. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\linewidth]{./Figure6-eps-converted-to.pdf} \caption{Observed variations in length of day (black curve) taken from \citet{Gross2001} and extended after $1997.0$ with a time series provided by the Earth orientation center. A trend of $2.2$ ms.cy${}^{-1}$ estimated in section 3.4 has been removed from the observed time series. The variations in length of day induced by the outer core flow and the associated standard deviation are shown with gray circles and error bars.}\label{LOD} \end{center} \end{figure} \noindent The mean estimated trend $\bar{a}$ lies well within the interval of $1.8$ ms/cy to $2.4$ ms/cy prescribed by tidal forces and by the measurements of the Earth's oblateness. Furthermore, once the optimal trend is removed from ${\Lambda}_{\textsc{\tiny{OBS}}}$, comparisons with the core flow contribution exhibit a good agreement on decadal time scales, as illustrated by figure \ref{LOD}.
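With flat priors on $a$ and $b$ and a Gaussian likelihood, the posterior mean of $(a,b)$ reduces to a (generalized) least squares fit of a line to $\Lambda_{\textsc{\tiny{OBS}}} - \bar{\Lambda}_{\textsc{\tiny{FLOW}}}$. The Python sketch below illustrates this on synthetic data with equal, uncorrelated errors, whereas the actual computation uses the full covariances:

```python
import numpy as np

# Posterior-mean trend under flat priors and Gaussian errors = ordinary
# least squares on the residual time series.  All numbers are synthetic.

rng = np.random.default_rng(2)
t = np.linspace(1900.0, 1990.0, 46)       # 2-yr sampling, pre-1990 epochs
a_true, b_true = 2.2e-2, -41.0            # ms/yr (i.e. 2.2 ms/cy) and offset
residual = a_true * t + b_true + rng.normal(0.0, 0.2, t.size)

A = np.column_stack([t, np.ones_like(t)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, residual, rcond=None)
print(f"estimated trend: {100 * a_hat:.2f} ms/cy (true {100 * a_true:.2f})")
```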
Although the large uncertainty levels associated with $\bar{a}$ and $\bar{{\Lambda}}_{\textsc{\tiny{FLOW}}}$ forbid any precise conclusions on the impact of the last century's sea level rise and ice melting on the variations in length of day, our results suggest that the scenario proposed by \citet{Mitrovica2015} is not the most likely one (otherwise the optimal trend would be close to $1.4$ ms/cy). Instead, the estimates of $\dot{\Lambda}_{\textsc{\tiny{GIA}}}$ given by \citet{Peltier2015} are probably more appropriate. This means that the discrepancies between the predicted and observed trends in ${\Lambda}$ over the last centuries could be compensated by a recent increase in core angular momentum, and that the outer core flow would not slow down the rotation of the Earth over very long periods of time. \subsection{Predictions}\label{prediction} \noindent The ability of a model to successfully predict the system evolution not only suggests that the model correctly captures the dynamics on the considered time scales, but also points towards useful applications. We test our model with so-called hindcast simulations, in which the analysis steps of the assimilation are only carried out until a time $T_0$, after which the model is integrated as a free run up to a time $T_F$. The free run corresponds to the forecast, or prediction, which can be compared with the data. \noindent Here we use six different $T_0$ values, $1940.0$, $1960.0$, $1980.0$, $1990.0$, $2000.0$, and $2010.0$, and compare predictions for $T_F=2015.0$ with the respective epoch in the CHAOS-6 magnetic field model by \citet{Finlay2016}. This means that we attempt predictions over periods ranging from $5$ to $55$ years. Figure \ref{predictionSpectra} presents the results in terms of energy spectra at the Earth's core mantle boundary. Thick black lines show the CHAOS-6 reference field $b^c$ while thin gray lines show the predicted ensemble mean $\bar{b}^f$.
Crosses illustrate the prediction error $b^c-\bar{b}^f$, which can be compared with the thick gray line depicting the predicted error, i.e. the standard deviation of the $40000$ ensemble members. As expected, the errors increase with the prediction period (decreasing $T_0$). The predicted error provides a good estimate of the prediction error, with the exception of degree $l=13$ for the $5$ year period. This is likely due to the fact that the CHAOS-6 core field contains the crustal field, which starts to contribute more significantly at this scale. The larger prediction error outweighs this deficiency for longer prediction periods. \noindent The comparison with two other, more trivial prediction methods further allows us to judge the advantage of our more sophisticated approach. 'No cast' refers to the assumption that the field remains identical to the field at $T_0$. 'Linear extrapolation' uses the secular variation at $T_0$ to linearly extrapolate the field from $T_0$ to $2015$: $b_l(2015) = b^o(T_0) + (2015-T_0)\gamma^o$. The prediction errors of these two trivial methods are shown as triangles and circles in figure \ref{predictionSpectra}. Linear extrapolation and assimilation prediction errors remain similar for the two shortest prediction periods. However, for predictions beyond the $10$ yr horizon, our assimilation formalism starts to pay off. For the longest prediction period of $55$ years, the errors of the two trivial methods already exceed the spectral energy at degree $6$, while the assimilation predictions remain appropriate up to degree $9$. \begin{figure}[h] \begin{center} \includegraphics[width=0.99\linewidth]{./Figure7-eps-converted-to.pdf} \caption{Results of the hindcast tests from $T_0$ to $T_F=2015.0$, expressed in terms of energy spectra at the core mantle boundary. Spectra of the observed magnetic field in $2015.0$ (black lines), its mean prediction (thin gray lines), the prediction error (crosses) and the predicted error (thick gray lines).
Triangles and circles respectively correspond to the energy spectra of the linear extrapolation error and the no cast error.}\label{predictionSpectra} \end{center} \end{figure} \noindent Working with an ensemble also allows the evaluation of statistical properties of quantities like inclination or declination that depend nonlinearly on the state variables. Figure \ref{declinationInclination} compares the inclination (left) and declination (right) for $1990$ (black lines) and $2015$ (red lines) with the ensemble mean predictions using $T_0=1990$ (yellow lines). The prediction errors are quantified by the absolute local difference in degrees and are shown as color maps in the top two panels. Color maps in the bottom two panels show the $90\%$ confidence interval of the ensemble prediction. \begin{figure*}[h] \begin{center} \includegraphics[width=0.8\linewidth]{./Figure8-eps-converted-to.pdf} \caption{$1990.0-2015.0$ hindcast. Isocontours of the observed inclination (left) and declination (right), in $1990.0$ (black) and $2015.0$ (yellow), and their predictions in $2015.0$ (red). Color maps correspond to the absolute value of the prediction error (top) and to the $90\%$ confidence interval on the prediction (bottom). The white symbols on the right are associated with the North magnetic dip pole, observed in $1990.0$ (square) and in $2015.0$ (circle) and predicted in $2015.0$ (triangle).} \label{declinationInclination} \end{center} \end{figure*} \noindent As expected, inclination errors are large for the small or vanishing values around the equator. This is illustrated in the top panel of the right column and is also well captured by the larger variance in the ensemble used to predict the error, shown in the lower right panel. We calculated the area where the prediction error remains within the $90$\% confidence interval defined by the prediction ensemble, which amounts to $89.9$\% of the total surface. This indicates that the inclination uncertainties provide a good estimate of the prediction error.
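The two reference predictions introduced above ('no cast' and linear extrapolation) can be sketched on a toy scalar coefficient; the quadratic test signal below is purely illustrative and not a real geomagnetic series:

```python
import numpy as np

# Toy "field coefficient" with slow curvature, standing in for one Gauss coefficient.
def field(t):
    return 1.0e-4 * t ** 2  # hypothetical test signal

T0, TF = 1990.0, 2015.0
b0 = field(T0)
sv0 = field(T0 + 0.5) - field(T0 - 0.5)   # centred estimate of the rate at T0, per yr

no_cast = b0                              # field frozen at its T0 value
linear = b0 + (TF - T0) * sv0             # linear extrapolation with the T0 rate

truth = field(TF)
err_no_cast, err_linear = abs(truth - no_cast), abs(truth - linear)
print(err_no_cast, err_linear)
```

For such a smooth signal the linear extrapolation error reduces to the curvature term, of order $(T_F-T_0)^2$, which is consistent with both baselines being competitive on short horizons while a dynamical model pays off beyond them.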
\noindent Since the declination becomes undefined at the North and South dip poles, the surrounding regions show larger prediction errors and ensemble variances, as demonstrated in the left column of figure \ref{declinationInclination}. The area where prediction errors remain within the $90\%$ confidence interval amounts to only $81.1\%$ of the surface, which suggests a somewhat inferior error prediction, likely related to the dip poles. The error prediction seems more reliable in the southern hemisphere alone, where the relative area increases to $92.2\%$. \noindent Although not clearly visible in figure \ref{declinationInclination} because the color scale has been saturated at $5^\circ$, the prediction error for the location of the North Magnetic Dip Pole (NMDP) is also particularly large. Its observed and predicted positions for $2015$ are marked by gray circles and triangles, respectively. According to \citet{Chulliat2010}, the rapid acceleration of the NMDP drift during the 1990's can be explained by the expulsion of magnetic flux below the New Siberian Islands. This may be the reason for the larger prediction error, since such expulsions are not modeled in the frozen flux approximation used here. Note that the NMDP drift is much better captured when the hindcast test starts at $T_0\geq 2000.0$. \noindent Finally, we note that our model accurately predicts the evolution of the inclination and declination associated with the South Atlantic anomaly despite the significant changes between $1990.0$ and $2015.0$. This highlights the potential usefulness of the method for forecasting features of the core's magnetic field. \subsection{Predictability}\label{predictability} In order to quantify the different sources of forecast errors, we analyze the secular variation, which is responsible for advancing the field from $T_0$ to $T_F$: $b(T_F) = b(T_0) + \int_{T_0}^{T_F}\gamma(s)\,ds$. Under the frozen flux approximation, $\gamma$ depends on the flow and the magnetic field through the relation $\gamma = -\nabla_H(ub)$.
When separating $b$ into the observable part $b^{<}$ and the non-observable small scale part $b^{>}$, we can distinguish three error sources: \begin{equation} \gamma^\prime = -\nabla_H(u^\prime b) -\nabla_H(u b^{<\prime}) -\nabla_H(u b^{>\prime}). \end{equation} Here the primed quantities on the right hand side indicate deviations from the ensemble expectation value, for example $u^\prime = u-\bar{u}$. \begin{figure}[h] \begin{center} \includegraphics[width=0.99\linewidth]{./Figure9-eps-converted-to.pdf} \caption{Sources of secular variation (SV) error in magnetic field predictions, at the initial time $T_0=1990.0$ (left) and at the forecast time $T_F=2015.0$ (right). Energy spectra at the Earth's surface of the secular variation (black lines) and of the different types of error (symbols) generated when estimating the SV. Triangles and circles correspond to the errors induced by the large scale and small scale variable parts of the magnetic field, respectively. Squares are associated with the uncertainties arising from the variable part of the velocity field.}\label{SVuncertainties} \end{center} \end{figure} Figure \ref{SVuncertainties} compares the spectra of the different error contributions with the secular variation spectrum at the Earth's surface for $T_0=1990$ (left) and $T_F=2015$ (right). The last analysis step, performed at $T_0$, directly constrains $b^{<}$ and leads to a small related variance, and thus a small error contribution that lies about two orders of magnitude below the $u$ and $b^{>}$ related errors. At the end of the forecast in $2015$, however, the $b^{<}$ related error has grown significantly but remains the smallest contribution. Since $b^{>}$ is mostly constrained by the a priori assumed statistics at all times, the related error changes only mildly. While the $u$ related error is smaller than the $b^{>}$ error in $1990$, it becomes the dominant contribution in $2015$ due to the increase in flow dispersion.
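This decomposition can be mimicked with a small ensemble experiment in one dimension, replacing $\nabla_H$ by a plain spatial derivative; the ensemble statistics and the wavenumber cut below are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
dx = x[1] - x[0]
n_ens, cut = 200, 8   # ensemble size; wavenumber cut between resolved and hidden scales

def lowpass(f, cut):
    """Keep only wavenumbers <= cut of a 1-D periodic field (toy stand-in for b^<)."""
    fk = np.fft.rfft(f, axis=-1)
    fk[..., cut + 1:] = 0.0
    return np.fft.irfft(fk, n=f.shape[-1], axis=-1)

def ddx(f):
    return np.gradient(f, dx, axis=-1)

# Synthetic ensemble of flow u and magnetic field b (hypothetical statistics).
u = 1.0 + 0.1 * rng.normal(size=(n_ens, x.size))
b = np.sin(x) + 0.3 * rng.normal(size=(n_ens, x.size))

b_lo = lowpass(b, cut)
b_hi = b - b_lo

# Deviations from the ensemble mean (the "primed" quantities).
up = u - u.mean(0)
blop = b_lo - b_lo.mean(0)
bhip = b_hi - b_hi.mean(0)

# The three secular-variation error sources of the decomposition above.
e_u = -ddx(up * b.mean(0))
e_lo = -ddx(u.mean(0) * blop)
e_hi = -ddx(u.mean(0) * bhip)

rms = lambda f: float(np.sqrt((f ** 2).mean()))
print(rms(e_u), rms(e_lo), rms(e_hi))
```

In this toy setup the small scale contribution dominates the large scale one simply because the derivative amplifies high wavenumbers; it is not meant to reproduce the relative amplitudes of figure \ref{SVuncertainties}.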
\noindent Thus, neither the observational error in the large scale magnetic field nor the lack of knowledge of the small scale contributions is the limiting factor for the predictions, but rather the randomization of the different velocity field scales over time. \noindent We also tested the predictability range for the magnetic field. Starting a forecast in $2014$, we let the system evolve until the scale per scale variance of the magnetic field exceeds the mean predicted energy. As in the forecast from 1990 to 2015 illustrated in the middle left panel of figure \ref{predictionSpectra}, after only $75$ yrs the variance of the contributions beyond degree $l=7$ exceeds the mean energy. The predictability limit further decreases to $l=5$, $l=3$, and $l=2$ after $160$ yrs, $400$ yrs and $640$ yrs, respectively. After $1950$ yrs, even the dipole energy is exceeded by the respective variance level. \section{Conclusion}\label{conclusion} \noindent We have employed a sequential data assimilation framework to model the dynamics of the geomagnetic field and the flow at the top of Earth's core in the 20th century, using the COV-OBS.x1 model of \citet{Gillet2015} as observations. The method extends the approach of \citet{Baerenzung2016} to the time domain, as a sequential propagation in time of flow and field inversions under weak prior constraints. The prior is a dynamical model that combines the induction equation in the frozen flux approximation with a simple AR1 process describing the flow evolution. The latter comprises a memory term and a stochastic forcing, which are both constrained by the secular variation observations following the ideas presented in \citet{Baerenzung2016}. \noindent We use an ensemble approach, the EnKF \citep{Evensen2003}, where the dynamical model uncertainties are characterized by statistically sound covariances.
Using the AR1 process falls short of integrating a proper Navier-Stokes equation, but allows us to forward a large ensemble of 40,000 members in time in order to characterize the errors and prior covariances. \noindent The optimal parameters characterizing the spatial and temporal properties of the flow prior point to particularly long time scales, of several centuries to millennia, for the toroidal field at SH degrees $l=1$ and $2$. This flow contribution can be attributed to a large scale, slowly evolving gyre which has also been identified in core flow inversions \citep{Pais2000,Gillet2015} and numerical simulations \citep{Aubert2013b,Schaeffer2017}. The most prominent feature of the gyre is the well documented pronounced westward drift at low and mid-latitudes of the Atlantic hemisphere. Smaller scale flows have characteristic time scales in the decadal range that are consistent with previous estimates \citep{Christensen2004,Hulot2010}. Typical related features are local modifications of the gyre in the southern hemisphere, or the acceleration of the westward flow at and around the tangent cylinder underneath Alaska and the eastern part of Siberia, which has already been reported by \citet{Livermore2017}. This points to an important contribution of ageostrophic motions to the dominantly geostrophic overall core flow. \noindent Predictions of decadal length of day (LOD) variations from changes in the angular momentum of our core flow model yield a remarkable resemblance to the corresponding independent observations. Furthermore, the recent increase in core angular momentum, which we attribute to an acceleration of the geostrophic contribution of the gyre, makes it possible to compensate for the difference between the recently observed trend in LOD changes and the expected one highlighted by \citet{Munk2002}. \noindent We further tested the capability of our model to forecast the evolution of the magnetic and flow fields through hindcast experiments.
Comparisons of the magnetic field evolution with linear extrapolations and no casts (in which the field is assumed static) show that our more sophisticated model significantly improves predictions beyond 10 or 15 years. Moreover, we inspected the reliability of the forecast errors predicted by the ensemble dispersion, which resulted in good agreement with the hindcast errors. Such a match reveals that the characteristic times estimated for the autoregressive process of the flow correspond to a realistic randomization rate of the fields. \noindent However, it is the dispersion of the velocity field itself that seems to dominate the uncertainties in the secular variation estimates, and that therefore limits the predictability of the geomagnetic field. Within this limitation, the scale dependent predictability corresponds to 1950, 640, 400, 160 and 75 years for degrees $l=1$, 2, 3, 5 and 7, respectively. A more realistic dynamical model, such as a geodynamo simulation, would possibly extend the predictability horizon. Nevertheless, the enormous numerical power required to perform dynamo simulations at more extreme parameters would preclude the type of ensemble Kalman filter approach followed here. Even with compromises in the dynamo model parameters and the ensemble size, the computational costs would still increase by orders of magnitude. Moreover, since dynamo simulations are strongly nonlinear, the system bears an intrinsic sensitivity to initial perturbations. This corresponds to an e-folding time estimated to be about 30 years when temporally rescaled with the secular variation time scale \citep{Hulot2010,Lhuillier2011}. Since this characteristic time is not so different from the one associated with the small length scales of our flow model, we expect that the rate at which information is lost in the system will be roughly equivalent in both modeling strategies.
\section{Introduction} The vector equilibrium problem is a unified model of several problems, for instance, vector variational inequalities and vector optimization problems. For further relevant information on this topic, the reader is referred to the following recent publications available in our bibliography: [1-7], [9-12], [14-17]. In this paper, we will suppose that $X$ is a nonempty, convex and compact set in a Hausdorff locally convex space $E$, $A:X\rightarrow 2^{X}$ and $F:X\times X\times X\rightarrow 2^{X}$ are correspondences and $C\subset X$ is a nonempty closed convex cone with int$C\neq \emptyset $. We consider the following generalized strong vector quasi-equilibrium problem (in short, GSVQEP):\newline find $x^{\ast }\in X$ such that $x^{\ast }\in \overline{A}(x^{\ast })$ and each $u\in A(x^{\ast })$ implies that $F(u,x^{\ast },z)\nsubseteq $int$C$ for each $z\in A(x^{\ast }).$ \section{Preliminary results} Let $X$, $Y$ be topological spaces and $T:X\rightarrow 2^{Y}$ be a correspondence. $T$ is said to be \textit{upper semicontinuous} if for each $x\in X$ and each open set $V$ in $Y$ with $T(x)\subset V$, there exists an open neighborhood $U$ of $x$ in $X$ such that $T(y)\subset V$ for each $y\in U$. $T$ is said to be \textit{lower semicontinuous} if for each $x\in X$ and each open set $V$ in $Y$ with $T(x)\cap V\neq \emptyset $, there exists an open neighborhood $U$ of $x$ in $X$ such that $T(y)\cap V\neq \emptyset $ for each $y\in U$. $T$ is said to have \textit{open lower sections} if $T^{-1}(y):=\{x\in X:y\in T(x)\}$ is open in $X$ for each $y\in Y.$ The following lemma will be crucial in the proofs. \begin{lemma} (Yannelis and Prabhakar, \cite{yan}). Let $X$ be a paracompact Hausdorff topological space and $Y$ be a Hausdorff topological vector space. Let $T:X\rightarrow 2^{Y}$ be a correspondence with nonempty convex values such that, for each $y\in Y$, $T^{-1}(y)$ is open in $X$. Then $T$ has a continuous selection, that is, there exists a continuous function $f:X\rightarrow Y$ such that $f(x)\in T(x)$ for each $x\in X$.\medskip \end{lemma} The correspondence $\overline{T}$ is defined by $\overline{T}(x):=\{y\in Y:(x,y)\in $cl$_{X\times Y}$ Gr $T\}$ (the set cl$_{X\times Y}$ Gr $(T)$ is called the adherence of the graph of $T$)$.$ It is easy to see that cl$\,T(x)\subset \overline{T}(x)$ for each $x\in X.$ If $X$ and $Y$ are topological vector spaces, $K$ is a nonempty subset of $X$, $C$ is a nonempty closed convex cone and $T:K\rightarrow 2^{Y}$ is a correspondence, then \cite{luc}, $T$ is called \textit{upper }$C$\textit{-continuous at} $x_{0}\in K$ if, for any neighborhood $U$ of the origin in $Y,$ there is a neighborhood $V$ of $x_{0}$ such that, for all $x\in V,$ $T(x)\subset T(x_{0})+U+C.$ $T$ is called \textit{lower }$C$\textit{-continuous at} $x_{0}\in K$ if, for any neighborhood $U$ of the origin in $Y,$ there is a neighborhood $V$ of $x_{0}$ such that, for all $x\in V,$ $T(x_{0})\subset T(x)+U-C.$ The property of proper $C-$quasi-convexity for correspondences is presented below. Let $X$ be a nonempty convex subset of a topological vector space $E,$ let $Y$ be a topological vector space, and let $C$ be a pointed closed convex cone in $Y$ with nonempty interior int$C\neq \emptyset .$ Let $T:X\rightarrow 2^{Y}$ be a correspondence with nonempty values. $T$ is said to be \textit{properly }$C-$\textit{quasi-convex on} $X$ (\cite{long}), if for any $x_{1},x_{2}\in X$ and $\lambda \in \lbrack 0,1],$ either $T(x_{1})\subset T(\lambda x_{1}+(1-\lambda )x_{2})+C$ or $T(x_{2})\subset T(\lambda x_{1}+(1-\lambda )x_{2})+C.\medskip $ In order to establish our main theorems, we need to prove some auxiliary results.
The starting point is the following statement: \begin{theorem} Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$ and let $\mathcal{C}$ be a nonempty subset of $X\times X.$ Assume that the following conditions are fulfilled: \end{theorem} \textit{a) $\mathcal{C}^{-}(y)=\{x\in X:(x,y)\in \mathcal{C}\}$ is open for each $y\in X;$} \textit{b) $\mathcal{C}^{+}(x)=\{y\in X:(x,y)\in \mathcal{C}\}$ is convex and nonempty for each $x\in X.$} \textit{Then, there exists $x^{\ast }\in X$ such that $(x^{\ast },x^{\ast })\in \mathcal{C}$.\medskip } \begin{proof} Let us define the correspondence $T:X\rightarrow 2^{X}$ by $T(x)=\mathcal{C}^{+}(x)$ for each $x\in X.$ The correspondence $T$ is nonempty and convex valued and it has open lower sections$.$ We apply Yannelis and Prabhakar's Lemma and we obtain that $T$ has a continuous selection $f:X\rightarrow X.$ According to the Tychonoff fixed point theorem \cite{is}, there exists $x^{\ast }\in X$ such that $f(x^{\ast })=x^{\ast }.$ Hence, $x^{\ast }\in T(x^{\ast })$ and, obviously, $(x^{\ast },x^{\ast })\in \mathcal{C}.\medskip \medskip $ \end{proof} The next two results are direct consequences of Theorem 1.\medskip \begin{theorem} Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $A:X\rightarrow 2^{X}$ and $P:X\times X\rightarrow 2^{X}$ be correspondences such that the following conditions are fulfilled: \end{theorem} \textit{a) $A$ has nonempty, convex values and open lower sections;} \textit{b) the set $\{y\in X:A(x)\cap P(x,y)=\emptyset \}\cap A(x)$ is nonempty for each $x\in X;$} \textit{c) $\{y\in X:A(x)\cap P(x,y)=\emptyset \}$ is convex for each $x\in X;$} \textit{d) $\{x\in X:A(x)\cap P(x,y)=\emptyset \}$ is open for each $y\in X.$} \textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $} \begin{proof} Let us define the set $\mathcal{C}=\{(x,y)\in X\times X:A(x)\cap P(x,y)=\emptyset \}\cap $Gr$A.$ Then, $\mathcal{C}^{+}(x)=\{y\in X:A(x)\cap P(x,y)=\emptyset \}\cap A(x)$ for each $x\in X$ and $\mathcal{C}^{-}(y)=A^{-1}(y)\cap \{x\in X:A(x)\cap P(x,y)=\emptyset \}$ for each $y\in X.$ Assumption b) implies that $\mathcal{C}$ is nonempty$.$ The set $\mathcal{C}^{-}(y)$ is open for each $y\in X$ since Assumptions a) and d) hold. According to Assumptions b) and c), $\mathcal{C}^{+}(x)$ is nonempty and convex for each $x\in X.$ All hypotheses of Theorem 1 are fulfilled, and then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$ \end{proof} We establish the following result as a consequence of Theorem 2. It will be used in order to prove the existence of solutions for the considered vector quasi-equilibrium problem.
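As a simple illustration of Theorem 1 (a toy instance assumed here, not taken from the cited literature): let $X=[0,1]\subset \mathbb{R}$ and $\mathcal{C}=\{(x,y)\in X\times X:\left\vert x-y\right\vert <\frac{1}{2}\}.$ Then $\mathcal{C}^{-}(y)=(y-\frac{1}{2},y+\frac{1}{2})\cap X$ is open for each $y\in X$ and $\mathcal{C}^{+}(x)=(x-\frac{1}{2},x+\frac{1}{2})\cap X$ is nonempty and convex for each $x\in X,$ so Theorem 1 applies; indeed, here every $x^{\ast }\in X$ satisfies $(x^{\ast },x^{\ast })\in \mathcal{C}.$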
\begin{theorem} Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $A:X\rightarrow 2^{X}$, $P:X\times X\rightarrow 2^{X}$ be correspondences such that the following conditions are fulfilled: \end{theorem} \textit{a) $A$ has nonempty, convex values and open lower sections;} \textit{b) the set $\{y\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}\cap A(x)$ is nonempty for each $x\in X;$} \textit{c) $\{y\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}$ is convex for each $x\in X;$} \textit{d) $\{x\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}$ is open for each $y\in X.$} \textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $} We note that, according to Theorem 2, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $\overline{A}(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .$ Obviously, $\overline{A}(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset $ implies $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $ If $A(x)=X$ for each $x\in X$, Theorem 2 implies the following corollary. \begin{corollary} Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $P:X\times X\rightarrow 2^{X}$ be a correspondence such that the following conditions are fulfilled: \end{corollary} \textit{a) $\{y\in X:P(x,y)=\emptyset \}$ is nonempty and convex for each $x\in X;$} \textit{b) $\{x\in X:P(x,y)=\emptyset \}$ is open for each $y\in X.$} \textit{Then, there exists $x^{\ast }\in X$ such that $P(x^{\ast },x^{\ast })=\emptyset .\medskip $} By applying an approximation method of proof, we can prove Theorem 4.
\begin{theorem} Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$ and let $\mathcal{C}$ be a nonempty, closed subset of $X\times X.$ Assume that there exists a sequence $(G_{k})_{k\in \mathbb{N}^{\ast }}$ of subsets of $X\times X$ such that the following conditions are fulfilled: \end{theorem} \textit{a) for each $k\in \mathbb{N}^{\ast },$ $G_{k}^{-}(y)=\{x\in X:(x,y)\in G_{k}\}$ is open for each $y\in X;$} \textit{b) for each $k\in \mathbb{N}^{\ast },$ $G_{k}^{+}(x)=\{y\in X:(x,y)\in G_{k}\}$ is convex and nonempty for each $x\in X;$} \textit{c) $G_{k}\supseteq G_{k+1}$ for each $k\in \mathbb{N}^{\ast };$} \textit{d) for every open set $G$ with $G\supset \mathcal{C},$ there exists $k\in \mathbb{N}^{\ast }$ such that $G_{k}\subseteq G.$} \textit{Then, there exists $x^{\ast }\in X$ such that $(x^{\ast },x^{\ast })\in \mathcal{C}$.\medskip } \begin{proof} For each $k\in \mathbb{N}^{\ast },$ we apply Theorem 1. Let $x^{k}\in X$ be such that $(x^{k},x^{k})\in G_{k}.$ Since $X$ is a compact set, we can consider that the sequence $(x^{k})_{k}$ converges to some $x^{\ast }\in X.$ We claim that $(x^{\ast },x^{\ast })\in \mathcal{C}.$ Indeed, let us suppose, by way of contradiction, that $(x^{\ast },x^{\ast })\notin \mathcal{C}.$ Since $\mathcal{C}$ is nonempty and compact, we can choose a neighborhood $V_{(x^{\ast },x^{\ast })}$ of $(x^{\ast },x^{\ast })$ and an open set $G$ such that $G\supset \mathcal{C}$ and $V_{(x^{\ast },x^{\ast })}\cap G=\emptyset .$ According to Assumptions d) and c), there exists $k_{1}\in \mathbb{N}^{\ast }$ such that $G_{k}\subseteq G$ for each $k\geq k_{1}.$ Since $V_{(x^{\ast },x^{\ast })}$ is a neighborhood of $(x^{\ast },x^{\ast }),$ there exists $k_{2}\in \mathbb{N}^{\ast }$ such that $(x^{k},x^{k})\in V_{(x^{\ast },x^{\ast })}$ for each $k\geq k_{2}.$ Hence, for $k\geq $max$(k_{1},k_{2}),$ $(x^{k},x^{k})\notin G_{k},$ which is a contradiction. Consequently, $(x^{\ast },x^{\ast })\in \mathcal{C}.$ \end{proof} Theorem 5 is a consequence of Theorem 4 and it will be used in Section 3 in order to prove the existence of solutions for GSVQEP.
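To illustrate the approximation scheme of Theorem 4 on a toy instance (an assumed example): let $X=[0,1],$ $\mathcal{C}=\{(x,y)\in X\times X:\left\vert x-y\right\vert \leq \frac{1}{2}\}$ and $G_{k}=\{(x,y)\in X\times X:\left\vert x-y\right\vert <\frac{1}{2}+\frac{1}{k}\}$ for each $k\in \mathbb{N}^{\ast }.$ The sections $G_{k}^{-}(y)$ are open, the sections $G_{k}^{+}(x)$ are nonempty and convex, the sequence $(G_{k})_{k}$ is decreasing and, by the compactness of $X\times X,$ every open set $G\supset \mathcal{C}$ contains some $G_{k}.$ Theorem 4 then yields a point $x^{\ast }$ with $(x^{\ast },x^{\ast })\in \mathcal{C},$ which here is every point of the diagonal.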
\begin{theorem} Let $X$ be a nonempty, convex and compact set in a Hausdorff locally convex space $E$, and let $A:X\rightarrow 2^{X}$ and $P:X\times X\rightarrow 2^{X}$ be correspondences such that the following conditions are fulfilled: \end{theorem} \textit{a) $A$ has nonempty, convex values and open lower sections;} \textit{b) the set $U=\{(x,y)\in X\times X:\overline{A}(x)\cap P(x,y)=\emptyset \}$ is closed and $U\cap $Gr$\overline{A}$ is nonempty;} \textit{c) there exists a sequence $(P_{k})_{k\in \mathbb{N}^{\ast }}$ of correspondences, where, for each $k\in \mathbb{N}^{\ast },$ $P_{k}:X\times X\rightarrow 2^{X},$ and let $U_{k}=\{(x,y)\in X\times X:\overline{A}(x)\cap P_{k}(x,y)=\emptyset \}$. Assume that:} \textit{c1) $U_{k}^{+}(x)=\{y\in X:\overline{A}(x)\cap P_{k}(x,y)=\emptyset \}$ is convex for each $x\in X$ and $U_{k}^{+}(x)\cap A(x)\neq \emptyset $ for each $x\in X;$} \textit{c2) $U_{k}^{-}(y)=\{x\in X:\overline{A}(x)\cap P_{k}(x,y)=\emptyset \}$ is open for each $y\in X;$} \textit{c3) for each $k\in \mathbb{N}^{\ast },$ $P_{k}(x,y)\subseteq P_{k+1}(x,y)$ for each $(x,y)\in X\times X;$} \textit{c4) for every open set $G$ with $G\supset U\cap $Gr$\overline{A},$ there exists $k\in \mathbb{N}^{\ast }$ such that $G\supseteq U_{k}\cap $Gr$A.$} \textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in \overline{A}(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $} \begin{proof} Let us define $\mathcal{C}=U\cap $Gr$\overline{A}.$ According to Assumptions b) and c), $\mathcal{C}$ is a nonempty and closed subset of $X\times X.$ Further, for each $k\in \mathbb{N}^{\ast },$ let us define $G_{k}=U_{k}\cap $Gr$A\subseteq X\times X.$ Then, for each $k\in \mathbb{N}^{\ast },$ $G_{k}^{+}(x)=\{y\in X:(x,y)\in G_{k}\}=U_{k}^{+}(x)\cap A(x)$ is nonempty and convex for each $x\in X,$ since Assumptions a) and c1) hold. For each $k\in \mathbb{N}^{\ast },$ $G_{k}^{-}(y)=\{x\in X:(x,y)\in G_{k}\}=U_{k}^{-}(y)\cap A^{-1}(y)$ is open for each $y\in X,$ since Assumptions a) and c2) hold. Assumption c3) implies that, for each $k\in \mathbb{N}^{\ast },$ $U_{k+1}\subseteq U_{k}$ and then $G_{k}\supseteq G_{k+1},$ and Assumption c4) implies that for every open set $G$ with $G\supset \mathcal{C},$ there exists $k\in \mathbb{N}^{\ast }$ such that $G_{k}\subseteq G.$ All hypotheses of Theorem 4 are verified. Therefore, there exists $x^{\ast }\in X$ such that $(x^{\ast },x^{\ast })\in \mathcal{C}$. Consequently, there exists $x^{\ast }\in X$ such that $x^{\ast }\in \overline{A}(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $ \end{proof} \section{Main results} This section is devoted to the study of the existence of solutions for the considered generalized strong vector quasi-equilibrium problem. We derive our main results by using the auxiliary theorems concerning correspondences, which have been established in the previous section. This new approach to solving GSVQEP is intended to provide new conditions under which solutions exist.\medskip The first theorem states that GSVQEP has solutions if $F(\cdot ,y,\cdot )$ is lower ($-C$)-semicontinuous for each $y\in X$ and $F(u,\cdot ,z)$ is properly $C-$quasi-convex for each $(u,z)\in X\times X.$ \begin{theorem} \textit{Let $F:X\times X\times X\rightarrow 2^{X}$ be a correspondence with nonempty values.
Suppose that:} \end{theorem} \textit{a) $A$ has nonempty, convex values and open lower sections;} \textit{b) for each $x\in X,$ there exists $y\in A(x)$ such that each $u\in \overline{A}(x)$ implies that $F(u,y,z)\nsubseteq $int$C$ for each $z\in \overline{A}(x);$} \textit{c) $F(\cdot ,y,\cdot ):X\times X\rightarrow 2^{X}$ is lower ($-C$)-semicontinuous for each $y\in X;$} \textit{d) for each $(u,z)\in X\times X,$ $F(u,\cdot ,z):X\rightarrow 2^{X}$ is properly $C-$quasi-convex.} \textit{Then, there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and each $u\in A(x^{\ast })$ implies that $F(u,x^{\ast },z)\nsubseteq $int$C$ for each $z\in A(x^{\ast }),$ that is, $x^{\ast }$ is a solution for GSVQEP.\medskip } \begin{proof} Let us define $P:X\times X\rightarrow 2^{X}$ by $P(x,y)=\{u\in X:\exists z\in \overline{A}(x)$ such that $F(u,y,z)\subseteq C\}$ for each $(x,y)\in X\times X.$ Assumption b) implies that the set $\{y\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}\cap A(x)$ is nonempty for each $x\in X.$ We claim that the set $E(x)$ is convex for each $x\in X,$ where $E(x)=\{y\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}=\{y\in X:$ each $u\in \overline{A}(x)$ implies that $F(u,y,z)\nsubseteq C$ for each $z\in \overline{A}(x)\}.$ Indeed, let us fix $x_{0}\in X$ and let us consider $y_{1},y_{2}\in E(x_{0}).$ This means that each $u\in \overline{A}(x_{0})$ implies that $F(u,y_{1},z)\nsubseteq C$ and $F(u,y_{2},z)\nsubseteq C$ for each $z\in \overline{A}(x_{0}).$ Let $y(\lambda )=\lambda y_{1}+(1-\lambda )y_{2}$ for each $\lambda \in \lbrack 0,1].$ We claim that $y(\lambda )\in E(x_{0})$ for each $\lambda \in \lbrack 0,1].$ Suppose, on the contrary, that there exist $\lambda _{0}\in \lbrack 0,1],$ $u^{\prime }\in \overline{A}(x_{0})$ and $z^{\prime }\in \overline{A}(x_{0})$ such that $F(u^{\prime },y(\lambda _{0}),z^{\prime })\subseteq C.$ Since $F(u^{\prime },\cdot ,z^{\prime }):X\rightarrow 2^{X}$ is properly $C-$quasi-convex$,$ we have that $F(u^{\prime },y_{1},z^{\prime })\subseteq F(u^{\prime },y(\lambda _{0}),z^{\prime })+C$ or $F(u^{\prime },y_{2},z^{\prime })\subseteq F(u^{\prime },y(\lambda _{0}),z^{\prime })+C.$ On the other hand, it is true that $F(u^{\prime },y(\lambda _{0}),z^{\prime })\subseteq C.$ We obtain that $F(u^{\prime },y_{j},z^{\prime })\subseteq C+C\subseteq C$ for $j=1$ or for $j=2$. This contradicts the assumption that $y_{1},y_{2}\in E(x_{0})$. Consequently, $E(x_{0})$ is convex and Assumption c) from Theorem 3 is fulfilled.
Now, we will prove that $D(y)=\{x\in X:\overline{A}(x)\cap P(x,y)=\emptyset \}$ is open for each $y\in X.$ In order to do this, we will show that $^{C}D(y)$ is closed for each $y\in X,$ where $^{C}D(y)=\{x\in X:\overline{A}(x)\cap P(x,y)\neq \emptyset \}=\{x\in X:$ there exist $u,z\in \overline{A}(x)$ such that $F(u,y,z)\subseteq C\}.$ Let $(x_{\alpha })_{\alpha \in \Lambda }$ be a net in $^{C}D(y)$ such that lim$_{\alpha }x_{\alpha }=x_{0}.$ Then, there exist $u_{\alpha },z_{\alpha }\in \overline{A}(x_{\alpha })$ such that $F(u_{\alpha },y,z_{\alpha })\subseteq C.$ Since $X$ is a compact set, we can suppose that $(u_{\alpha })_{\alpha \in \Lambda },(z_{\alpha })_{\alpha \in \Lambda }$ are convergent nets, and let lim$_{\alpha }u_{\alpha }=u_{0}$ and lim$_{\alpha }z_{\alpha }=z_{0}.$ The closedness of $\overline{A}$ implies that $u_{0},z_{0}\in \overline{A}(x_{0}).$ Now, we claim that $F(u_{0},y,z_{0})\subseteq C.$ Since $F(u_{\alpha },y,z_{\alpha })\subseteq C$ and $F(\cdot ,y,\cdot ):X\times X\rightarrow 2^{X}$ is lower ($-C$)-semicontinuous, for each neighborhood $U$ of the origin in $E,$ there exists a subnet $(u_{\beta },z_{\beta })_{\beta }$ of $(u_{\alpha },z_{\alpha })_{\alpha }$ such that $F(u_{0},y,z_{0})\subset F(u_{\beta },y,z_{\beta })+U+C.$ Hence, $F(u_{0},y,z_{0})\subset U+C.$ We will show that $F(u_{0},y,z_{0})\subset C.$ Suppose, by way of contradiction, that there exists $t\in F(u_{0},y,z_{0})\cap ^{C}C.$ We note that $B=C-t$ is a closed set which does not contain $0.$ It follows that $^{C}B$ is open and contains $0.$ Since $E$ is locally convex, there exists a convex neighborhood $U_{1}$ of the origin such that $U_{1}\subset E\backslash B$ and $U_{1}=-U_{1}$. Thus, $0\notin B+U_{1}$ and then $t\notin C+U_{1},$ which is a contradiction.
Therefore, $F(u_{0},y,z_{0})\subset C.$ We proved that there exist $u_{0},z_{0}\in \overline{A}(x_{0})$ such that $F(u_{0},y,z_{0})\subseteq C.$ It follows that $^{C}D(y)$ is closed. Then, $D(y)$ is an open set and Assumption d) from Theorem 3 is fulfilled. Consequently, all conditions of Theorem 3 are verified, so that there exists $x^{\ast }\in X$ such that $x^{\ast }\in A(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $ Obviously, $x^{\ast }$\textit{\ }is a solution for GSVQEP.\medskip \end{proof} In order to obtain a second result concerning the existence of solutions of GSVQEP, we use an approximation method and Theorem 5. We mention that this result does not require convexity properties for the correspondence $F.$ \begin{theorem} \textit{Let }$F:X\times X\times X\rightarrow 2^{X}$\textit{\ be a correspondence. Suppose that:} \end{theorem} \textit{a) }$A$\textit{\ has nonempty, convex values and open lower sections; }$\overline{A}$\textit{\ is lower semicontinuous;} \textit{b) }$F$\textit{\ is upper semicontinuous with nonempty, closed values;} \textit{c) }$U\cap $Gr$\overline{A}$\textit{\ is nonempty, where }$U=\{(x,y)\in X\times X:u\in \overline{A}(x)$\textit{\ implies that }$F(u,y,z)\nsubseteq $\textit{int}$C$\textit{\ for each }$z\in \overline{A}(x)\};$ \textit{d) there exists a sequence }$(F_{k})_{k\in \mathbb{N}}$\textit{\ of correspondences, such that, for each }$k\in \mathbb{N}^{\ast },$\textit{\ }$F_{k}:X\times X\times X\rightarrow 2^{X}$\textit{\ and let }$U_{k}=\{(x,y)\in X\times X:u\in \overline{A}(x)$\textit{\ implies that }$F_{k}(u,y,z)\nsubseteq $\textit{int}$C$\textit{\ for each }$z\in \overline{A}(x)\}$\textit{. 
Assume that:} \textit{d1) for each }$k\in \mathbb{N}^{\ast }$\textit{\ and for each }$x\in X,$\textit{\ there exists }$y\in A(x)$\textit{\ such that each }$u\in \overline{A}(x)$\textit{\ implies that }$F_{k}(u,y,z)\nsubseteq $\textit{int}$C$\textit{\ for each }$z\in A(x);$ \textit{d2) for each }$k\in \mathbb{N}^{\ast }$\textit{\ and for each }$(u,z)\in X\times X,$\textit{\ }$F_{k}(u,\cdot ,z):X\rightarrow 2^{X}$\textit{\ is properly }$C$\textit{-quasi-convex;} \textit{d3) for each }$k\in \mathbb{N}^{\ast }$ \textit{and} \textit{\ for each }$y\in X,$\textit{\ }$F_{k}(\cdot ,y,\cdot ):$\textit{\ }$X\times X\rightarrow 2^{X}$\textit{\ is lower (}$-C$\textit{)-semicontinuous}$;$ \textit{d4) for each }$k\in \mathbb{N}^{\ast },$\textit{\ for each }$(x,y)\in X\times X,$\textit{\ and for each }$u\in X$\textit{\ with the property that }$\exists z\in \overline{A}(x)$\textit{\ such that }$F_{k}(u,y,z)\subseteq C,$\textit{\ there exists }$z^{\prime }\in \overline{A}(x)$\textit{\ such that }$F_{k+1}(u,y,z^{\prime })\subseteq C;$ \textit{d5) for every open set }$G$\textit{\ with }$G\supset U\cap $Gr$\overline{A},$\textit{\ there exists }$k\in \mathbb{N}^{\ast }$\textit{\ such that }$G\supseteq U_{k}\cap $\textit{Gr}$A.$ \textit{Then, there exists }$x^{\ast }\in X$\textit{\ such that }$x^{\ast }\in \overline{A}(x^{\ast })$\textit{\ and each }$u\in A(x^{\ast })$\textit{\ implies that }$F(u,x^{\ast },z)\nsubseteq $\textit{int}$C$\textit{\ for each }$z\in A(x^{\ast }),$\textit{\ that is, }$x^{\ast }$\textit{\ is a solution for GSVQEP.\medskip } \begin{proof} Let us define $P:X\times X\rightarrow 2^{X},$ by $P(x,y)=\{u\in X:$ $\exists z\in \overline{A}(x)$ such that $F(u,y,z)\subseteq C\}$ for each $(x,y)\in X\times X.$ We claim that $U=\{(x,y)\in X\times X:u\in \overline{A}(x)$ implies that $F(u,y,z)\nsubseteq $int$C$ for each $z\in \overline{A}(x)\}$ is closed$.$ Let $(x^{0},y^{0})\in $cl$U.$ Then, there exists $(x^{\alpha },y^{\alpha })_{\alpha \in \Lambda 
}$ a net in $U$ such that $\lim_{\alpha }(x^{\alpha },y^{\alpha })=(x^{0},y^{0})\in X\times X.$ Let $u\in \overline{A}(x^{0})$ and $z\in \overline{A}(x^{0}).$ Since $\overline{A}$ is lower semicontinuous and $\lim_{\alpha }x^{\alpha }=x^{0},$ there exist nets $(u^{\alpha })_{\alpha \in \Lambda }$ and $(z^{\alpha })_{\alpha \in \Lambda }$ in $X$ such that $u^{\alpha },z^{\alpha }\in \overline{A}(x^{\alpha })$ for each $\alpha \in \Lambda $ and $\lim_{\alpha }u^{\alpha }=u,$ $\lim_{\alpha }z^{\alpha }=z.$ Since $(x^{\alpha },y^{\alpha })_{\alpha \in \Lambda }$ is a net in $U,$ then$,$ for each $\alpha \in \Lambda ,$ $F(u^{\alpha },y^{\alpha },z^{\alpha })\nsubseteq $int$C,$ that is, $F(u^{\alpha },y^{\alpha },z^{\alpha })\cap W\neq \emptyset ,$ where $W=X\backslash $int$C;$ hence, there exists a net $(t^{\alpha })_{\alpha \in \Lambda }$ in $X$ such that $t^{\alpha }\in F(u^{\alpha },y^{\alpha },z^{\alpha })\cap W$ for each $\alpha \in \Lambda .$ Since $X$ is compact, we can suppose that $\lim_{\alpha }t^{\alpha }=t^{0}.$ The closedness of $W$ implies that $t^{0}\in W.$ We invoke here the closedness of $F$ and we conclude that $t^{0}\in F(u,y^{0},z).$ Therefore, $F(u,y^{0},z)\cap W\neq \emptyset ,$ and, thus, $u\in \overline{A}(x^{0})$ implies $F(u,y^{0},z)\nsubseteq $int$C$ for each $z\in \overline{A}(x^{0}).$ Hence, $U$ is closed. 
For each $k\in \mathbb{N}^{\ast },$ let us define $P_{k}:X\times X\rightarrow 2^{X},$ by $P_{k}(x,y)=\{u\in X:$ $\exists z\in \overline{A}(x)$ such that $F_{k}(u,y,z)\subseteq C\}$ for each $(x,y)\in X\times X$ and $U_{k}=\{(x,y)\in X\times X:u\in \overline{A}(x)$ implies that $F_{k}(u,y,z)\nsubseteq $int$C$ for each $z\in \overline{A}(x)\}=\{(x,y)\in X\times X:$\textit{\ }$\overline{A}(x)\cap P_{k}(x,y)=\emptyset \}.$ \textit{\ }Let\textit{\ }$k\in \mathbb{N}^{\ast }.$ Assumption d1) implies that $U_{k}^{+}(x)\cap A(x)\neq \emptyset $ for each $x\in X$ and Assumption d2) implies that $U_{k}^{+}(x)$ is convex for each $x\in X$ (the proof is similar to that of Theorem 6). Since $F_{k}(\cdot ,y,\cdot ):X\times X\rightarrow 2^{X}$ is lower ($-C$)-semicontinuous for each $y\in X,$ by following an argument similar to the one from the proof of Theorem 6, we can prove that $U_{k}^{-}(y)=\{x\in X:\overline{A}(x)\cap P_{k}(x,y)=\emptyset \}$ is open for each\textit{\ }$y\in X.$ Assumption d4) implies that $P_{k}(x,y)\subseteq P_{k+1}(x,y)$\textit{\ }for each\textit{\ }$(x,y)\in X\times X.$ All conditions of Theorem 5 are verified, so that there exists $x^{\ast }\in X$\ such that $x^{\ast }\in \overline{A}(x^{\ast })$ and $A(x^{\ast })\cap P(x^{\ast },x^{\ast })=\emptyset .\medskip $ Obviously, $x^{\ast }$\textit{\ }is a solution for GSVQEP.\medskip \end{proof} \section{Concluding remarks} This paper developed a framework for discussing the existence of solutions for a generalized strong vector quasi-equilibrium problem. The results have been obtained under assumptions which differ from the existing ones in the literature. An approximation technique of proof has been developed.
\section{Introduction} The rearrangement of the multiplet structure of elementary fields in Gauge\-+Higgs models in the process of symmetry breaking is one of the most fundamental concepts upon which the Standard Model of elementary particle interactions is built \cite{higgs64,higgs66}. The transmutation of the pseudo-Goldstone field into the longitudinal polarisation state of massive vector bosons {\it at the minimum of the effective potential} has been successfully used in perturbative calculations. The absence of light (massless) fields from the spectra of the electroweak sector of the Standard Model was also tested non-perturbatively first at relatively strong coupling \cite{evertz86}, later close to the continuum \cite{csikor96,csikor99}. Real time investigations of classical gauge field dynamics mostly concentrated on topological aspects of the process of baryogenesis, e.g. measuring Chern-Simons densities \cite{ambjorn85,moore99}, and on the emergence and the evolution of gauged strings \cite{rajantie01,shellard01,bellido04}. The idea of far-from-equilibrium baryogenesis \cite{bellido99,copeland01,copeland02,kuzmin03,smit03} led to increasingly detailed studies of the real time mechanism of the reheating of the Universe at the end of inflation. Features of the parametric resonance and tachyonic instability were first studied in simplified inflaton+Higgs systems \cite{linde94,kofman97,kofman01,borsanyi03}, realising the hybrid inflation scenario. In these systems the breaking of the global symmetry leads to the generation of Goldstone fields and related global topological excitations (e.g. strings). Recent numerical studies of Higgs systems started the detailed quantitative exploration of the real time excitation process of the Higgs and massive vector particles \cite{smit02,smit03,bellido04}. J. 
Smit {\it et al.} \cite{smit02,smit03} developed methods for the identification of the different physical degrees of freedom in the process of excitation in Coulomb and unitary gauges, with the help of measuring the corresponding dispersion relations. They measured the time variation of the corresponding number densities during the tachyonic (spinodal) instability. In the unitary gauge they find significant differences between the early values taken by the number densities of the transversal and longitudinal degrees of freedom. The differences diminish as the system approaches equilibrium. Our final goal in the real time investigation of the Abelian Gauge+Higgs system is to measure the evolution of the equations of state of its physical components during the reheating period. To achieve this we apply methods developed in our earlier investigation of pure scalar (inflaton+Higgs) systems \cite{borsanyi02,borsanyi03}. In order to understand the nature of the final state to which the system eventually relaxes, in this paper a careful numerical study of the real time fluctuations in equilibrium systems with low energy densities is presented. We extract the equilibrium equations of state of the different quasi-particle species, and describe an attempt to establish the corresponding effective field-coordinates. Such dynamical variables are present in rather strongly interacting systems of condensed matter physics, even when the original particles might decay into each other. Of course, this investigation could also be performed in the Euclidean formulation of equilibrium field theories on the lattice. Already at this stage, however, the possibility of real time studies of the non-equilibrium evolution proved quite useful for the clarification of specific issues (see below). We solved numerically the field equations of the Abelian Higgs model, discretized on a spatial lattice in the axial $(A_0=0)$ gauge. 
The initial conditions were chosen with very low energy density in order to end up in the broken symmetry phase. In the present investigation mainly the couplings $e^2=1, \lambda=6$ were used. The influence of the variation of the couplings on our findings was systematically explored in a broad range. As a simple (global) signature of reaching near thermal equilibrium, we have chosen the equality of the energy density stored in the scalar field (defined as the part of the Hamiltonian density independent of the gauge fields) with one third of the energy density in gauge modes. It took a surprisingly long time to reach stationary equality: only after $t \approx 4 \times 10^5 |\textrm{m}|^{-1} $ did fluctuations in the difference of the energy densities settle at the one percent level. Then characteristic thermodynamical quantities were analysed for an extended time interval, sufficiently long to explore the thermal features. It is remarkable that the equations of state, to be discussed below, take their equilibrium form considerably before the true equilibrium is reached. This experience supports the findings of \cite{borsanyi03,berges04}. The equilibrium configurations were transformed to the unitary gauge and decomposed into a scalar (Higgs) field, plus transversal and longitudinal gauge fields. The full energy density and pressure of the system can be split up in a rather natural way into pieces associated with these fields. In section 2 we shall present the method of constructing spectral decompositions for these quantities without making any assumption about the nature of the statistically independent field variables. Still, we can convincingly argue that the quasi-particle interpretation of the thermodynamics works and the extracted mass degeneracy agrees with the result of the weak coupling analysis. In section 3, evidence will be shown that the transverse vector modes behave as independent Gaussian statistical variables. 
On the other hand, the longitudinal vector mode, defined with the usual projection, violates the expected Gaussian relation between the quadratic and quartic moments and does not average independently from the Higgs mode, even when the average energy per degree of freedom was one thousandth of the Higgs mass. The possible dependence of this characteristic difference on the Higgs vs. vector mass relation (that is, on the couplings $e^2, \lambda$) was carefully explored. Also the effect arising from the presence of vortex-antivortex pairs was understood with the help of studying the non-equilibrium time evolution of the system. The persistent interaction between the longitudinal gauge and the Higgs field motivates the search for the quasi-particle ``coordinates'' in terms of non-linear combinations of these fields. Section 4 presents the results of the first step made in this direction. In our search for a Gaussian field related to the longitudinal vector mode, and fluctuating independently from the other three fields, a new pair of canonical variables is proposed: \begin{equation} {\bf {\cal A}}_L^{(1)}({\bf x},t)=(1+\alpha\rho^2({\bf x},t)){\bf A}_L({\bf x},t), \quad {\bf {\cal P}}_L^{(1)}({\bf x},t)=\frac{1}{(1+\alpha\rho^2({\bf x},t))} {\bf \Pi}_L({\bf x},t). \label{nonlin} \end{equation} The value of the coefficient $\alpha$ was determined by the requirement that ${\cal A}_L$ (${\cal P}_L$) and $\rho$ have independent statistics. Having done this, the new variables are checked for Gaussianity. Also an alternative composite field of the form \begin{equation} {\cal A}_L^{(2)}({\bf x},t)=\rho({\bf x},t)^\alpha{\bf A}_L({\bf x},t), \qquad {\cal P}_L^{(2)}({\bf x},t)=\rho({\bf x},t)^{-\alpha} {\bf\Pi}_L({\bf x},t) \label{nonlin2} \end{equation} will be tested with satisfactory results. The change of variables influences only very mildly the Higgs field itself. 
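The rescaling in Eq. (\ref{nonlin}) is a point transformation, so the pointwise product of the new canonical pair equals that of the old one. This can be illustrated with a minimal numerical sketch (Python; the field snapshots are randomly generated stand-ins, and the value of $\alpha$ is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical stand-ins for equilibrium snapshots of the fields on a small lattice
rho  = 1.0 + 0.1 * rng.standard_normal((16, 16, 16))   # Higgs modulus
A_L  = 0.05 * rng.standard_normal((16, 16, 16))        # longitudinal vector potential
Pi_L = 0.05 * rng.standard_normal((16, 16, 16))        # conjugate momentum

alpha = 0.3  # illustrative; in the text alpha is fixed by an independence requirement

# Eq. (nonlin): A -> (1 + alpha*rho^2) * A,  Pi -> Pi / (1 + alpha*rho^2)
scale = 1.0 + alpha * rho**2
calA = scale * A_L
calP = Pi_L / scale

# a point transformation: the pointwise product A*Pi is left invariant
assert np.allclose(calA * calP, A_L * Pi_L)
```

The same check applies verbatim to the alternative transform of Eq. (\ref{nonlin2}) with `scale = rho**alpha`.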
This is rather fortunate since this variable, as well as the transverse gauge degrees of freedom, consistently obeys free quasi-particle thermodynamics. Further possible tests of the existence of a Gaussian composite field involving the longitudinal vector mode are discussed in the Conclusions. Technical details of the real time numerical investigation are summarized in an Appendix. \section{The thermodynamics of the Abelian Higgs model} In the first part of this section we review the continuum expressions of the relevant thermodynamical quantities of the Abelian Higgs model in the unitary gauge. A model-independent spectral test is performed which confirms that in the time interval where the analysis of the thermal features is realized the energy is equally partitioned in momentum space among the different modes. After checking that the potential energy density and the kinetic energy of the spectral modes are equal in equilibrium this fact is used to derive simple expressions for the spectral equations of state determining the thermodynamics of the Higgs, the longitudinal and the transverse vector modes. The mass degeneracy of the different vector polarisations is demonstrated. The corresponding effective masses are close to the tree level expectations. The Lagrangian of the Abelian Higgs model is given by the expression \begin{equation} L=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}+\frac{1}{2}D_\mu\Phi(D^\mu\Phi)^*-V(\Phi), \end{equation} where $F_{\mu\nu}=\partial_{[\mu}A_{\nu]}, D_\mu=\partial_\mu+ieA_\mu$. $V(\Phi)$ is the usual quartic potential of the complex Higgs field $\Phi$: \begin{equation} V(\Phi)=\frac{1}{2}m^2|\Phi|^2+\frac{\lambda}{24}|\Phi|^4,\qquad m^2<0. 
\end{equation} The result of the standard calculation of the energy-momentum tensor for the Abelian Higgs model leads in the unitary gauge to the following decomposition for the energy density: \begin{eqnarray} \epsilon&=&\epsilon_\rho+\epsilon_T+\epsilon_L,\nonumber\\ \epsilon_\rho&=&\frac{1}{2}\Pi_\rho^2+\frac{1}{2}(\nabla\rho)^2+V(\rho), \nonumber\\ \epsilon_T&=&\frac{1}{2}[{\bf \Pi}_T^2+(\nabla\times{\bf A}_T)^2+e^2\rho^2{\bf A}_T^2],\nonumber\\ \epsilon_L&=&\frac{1}{2}[{\bf \Pi}_L^2+e^2\rho^2({\bf A}_L^2+A_0^2)]. \label{energydensities} \eea Here the indices $T~(L)$ refer to the fact that the corresponding terms contain the transverse (longitudinal) component of the vector field and its canonically conjugated momentum. The energy density is written in Hamiltonian formulation employing the relations: \begin{equation} \Pi_\rho=\dot\rho,\quad {\bf\Pi}_T=\dot{\bf A}_T,\quad {\bf \Pi}_L=\dot{\bf A}_L+\nabla A_0. \end{equation} Also one has to understand $A_0$ as a dependent variable in view of the Gauss-condition: \begin{equation} \nabla{\bf \Pi}_L=e^2\rho^2A_0. \end{equation} The space-space diagonal components of the energy-momentum tensor give similar expressions for the partial pressures: \begin{eqnarray} p&=&p_\rho+p_T+p_L,\nonumber\\ p_\rho &=&\frac{1}{2}\Pi_\rho^2-\frac{1}{6}(\nabla\rho)^2-V(\rho),\nonumber\\ p_T&=&\frac{1}{6}[{\bf \Pi}_T^2+(\nabla\times{\bf A}_T)^2-e^2\rho^2{\bf A}_T^2],\nonumber\\ p_L&=&\frac{1}{6}[{\bf \Pi}_L^2-e^2\rho^2{\bf A}_L^2]+ \frac{1}{2}\frac{1}{e^2\rho^2}(\nabla{\bf\Pi}_L)^2. 
\label{pressures} \eea Spectral energy densities can be introduced for each of the three contributions defined in Eq.(\ref{energydensities}) by considering the square root of each $\epsilon_i({\bf x},t),~i=\rho, T, L$ and taking the absolute square of their Fourier-transform: \begin{eqnarray} \overline{\epsilon}_i(t)&=&\frac{1}{V}\int d^3x\epsilon_i({\bf x},t)=\int \frac{d^3k}{(2\pi)^3}|\epsilon_i^{(1/2)}({\bf k},t)|^2,\nonumber\\ \sqrt{\epsilon_i({\bf x},t)}&=&\int\frac{d^3k}{(2\pi)^3}e^{i{\bf kx}}\epsilon^{(1/2)}_i({\bf k},t). \eea There is some ambiguity in factoring the densities into two \textit{identical} powers. This choice is the most natural one for a free quasi-particle interpretation of the thermodynamics, since those expressions are quadratic in the effective field-coordinates. In the broken symmetry phase at equilibrium one expects $|\epsilon^{(1/2)}_i({\bf k},t)|^2$ to fluctuate around $\bf k$-independent values obeying the relation: \begin{equation} \overline{|\epsilon^{(1/2)}_L({\bf k})|^2}: \overline{|\epsilon^{(1/2)}_T({\bf k})|^2}: \overline{|\epsilon^{(1/2)}_\rho({\bf k})|^2}=1:2:1. \end{equation} The overline means an average over the different modes with the same length $k$, and over a certain time interval. This was found to be satisfied with 10\% relative fluctuation in the relevant time interval, as illustrated in Fig. \ref{kep:ekvi}. When below we refer to a sort of ``temperature'' we have in mind the average energy density in $\bf k$-space. If a free quasi-particle model works, then this temperature is twice the average kinetic energy per mode. \begin{figure} \begin{center} \includegraphics[width=12cm]{ekvi.eps} \end{center} \caption{ Energy densities of the Higgs and gauge fields in equilibrium, corresponding to $\epsilon_{total}= 3.77 |m|^4 $. The data points and the error bars were obtained by binning and forming the mean energy per mode and its fluctuation for each bin. 
} \label{kep:ekvi} \end{figure} A similar construction can also be performed for the partial pressure of the $i$-th constituent, providing a model-independent definition for its spectral decomposition: \begin{equation} \overline{p_i}({\bf k})\equiv \overline{|p^{(1/2)}_i({\bf k})|^2}. \end{equation} The central step in the analysis is the investigation of the \textit{spectral equation of state} defined as \begin{equation} w_i({\bf k})\equiv \frac{\overline{p_i}({\bf k})}{\overline{\epsilon_i}({\bf k})}. \label{specteqstate} \end{equation} This quantity can be measured for each configuration. In equilibrium the spectral densities of the kinetic and of the potential energy are equal on average. Using this feature as a hypothesis for the Fourier modes, a very simple functional form is predicted, for instance, for the Higgs mode: \begin{equation} w_\rho({\bf k})=\frac{1}{3}\frac{{\bf k}^2\overline{|\rho({\bf k})|^2}}{{\bf k}^2\overline{|\rho({\bf k})|^2}+2\overline{V_\rho({\bf k})}}. \end{equation} If the Higgs field is a ``pure quasi-particle coordinate'', then it performs small amplitude oscillations near its equilibrium value $\rho_0$: \begin{equation} V(\rho)=\frac{1}{2}M_{\rho,eff}^2(\rho-\rho_0)^2,\qquad V_\rho({\bf k})=\frac{1}{2}M_{\rho,eff}^2|\rho({\bf k})|^2,\quad k\neq 0. \end{equation} This leads to a one-parameter ``free-field'' form of its spectral equation of state: \begin{equation} w_\rho({\bf k})=\frac{1}{3}\frac{|{\bf k}|^2}{|{\bf k}|^2+M_{\rho,eff}^2}. \label{eqstatrho1} \end{equation} A similar line of thought can be followed in the spectral analysis of the transversal gauge energy density and pressure. The result is the following: \begin{equation} w_T=\frac{1}{3}\frac{{\bf k}^2}{{\bf k}^2+M_{T,eff}^2},\qquad M_{T,eff}^2=\frac{\overline{|(e\rho{\bf A}_T)({\bf k})|^2}} {\overline{|{\bf A}_T({\bf k})|^2}}. 
\label{eqstattrans1} \end{equation} If one finds a good description of the measured spectral equation of state by this formula with a ${\bf k}$-independent mass, this provides evidence for the free quasi-particle interpretation of the transverse part of the thermodynamics. For the longitudinal polarisation a very similar formula can be obtained when the ratio $w_L({\bf k})$ is defined by the scaled energy density $e^2\rho^2\epsilon_L$ and pressure $e^2\rho^2p_L$ as \begin{equation} w_L({\bf k})\equiv\frac{\overline{|(e\rho \sqrt{p_L})({\bf k})|^2}} {\overline{|(e\rho \sqrt{\epsilon_L})({\bf k})|^2}}. \label{eqstatlong1} \end{equation} It leads to the above quasi-particle form if the mode-by-mode equality of the scaled kinetic and potential energies is obeyed in the form \begin{equation} {\bf k}^2\overline{|{\Pi_L}({\bf k})|^2}+\overline{|(e\rho\Pi_L)({\bf k})|^2}= \overline{|(e^2\rho^2{\bf A}_L)({\bf k})|^2}, \label{equilong} \end{equation} with the following effective mass formula: \begin{equation} M_{L,eff}^2=\frac{\overline{|(e\rho\Pi_L)({\bf k})|^2}} {\overline{|\Pi_L({\bf k})|^2}}. \end{equation} The first task was therefore to check numerically whether the average equality of the kinetic and potential mode energies is fulfilled (for the longitudinal case, Eq.(\ref{equilong}), see Fig. \ref{kep:ekvipart}). Next, the measured $w_i({\bf k})$ functions were fitted with $k$-independent effective masses. Finally, their values were compared to the tree-level estimates: \begin{equation} M^2_{L,eff}=M^2_{T,eff}=e^2\overline{\rho^2},\quad M_{\rho,eff}^2=\frac{\lambda}{3}\overline{\rho^2}. \label{Lagr-mass} \end{equation} \begin{figure} \begin{center} \includegraphics[width=8.5cm]{abra2.eps} \end{center} \caption{Spectral equations of state computed from Eqs. 
(\ref{specteqstate}) and (\ref{eqstatlong1}) as applied to the three constituting terms of the energy density and pressure.} \label{kep:wekvi} \end{figure} Fig.\ref{kep:wekvi} displays the measured equations of state of the three types of fields. It is obvious that the longitudinal and the transverse gauge fields show degenerate behaviour, despite the rather different formal procedures used in the determination of $w_L$ and $w_T$. The fitted squared masses appear in Table \ref{tabl1}. \begin{table}[hbt!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & & \multicolumn{2}{c|}{Lagrangian mass squared} & \multicolumn{3}{c|}{equipartition mass squared} \\ \hline $ T_{kinetic}(|m|) $ & $\epsilon_{total} (|m|^4) $ & scalar & trans \& long & scalar & trans & long \\ \hline \input masstabl2.tab \hline \end{tabular} \end{center} \caption{The comparison of the averaged mass terms of the Lagrangian (e.g. $M_i^2\varphi_i^2, ~~ \varphi_i=\rho,{\bf A}_T,{\bf A}_L$) with the thermodynamically fitted effective squared masses $M_{i,eff}^2$.} \label{tabl1} \end{table} It is remarkable that for the Higgs field the measured values of the squared mass are 10-15\% higher than expected on the basis of (\ref{Lagr-mass}). This is the effect of the ultraviolet fluctuations, which modify its mass at one loop \cite{copeland01} in the following way: \begin{equation} M^2_{H,lattice}=m^2+\frac{\lambda}{2}\overline{\rho^2}+\left(3e^2+ \frac{\lambda}{2}\right)\frac{0.226}{a^2}. \end{equation} Indeed, the variation of the deviation from the tree level squared mass at very low energy densities was found to be inversely proportional to $a^2$. The non-perturbatively fitted coefficient is, however, much smaller than the one-loop estimate appearing above. The convergence of the measured Higgs mass to a well-defined value for $T\approx 0$ makes it possible to define the renormalised Higgs mass. If one chooses it to coincide with the curvature of the classical potential at its minimum (e.g. 
(\ref{Lagr-mass})), then with the same subtraction one can also define the temperature dependent mass. In this way a very good agreement of the renormalised squared masses with the predicted temperature dependence was found. The effective squared masses of the vector modes were found to be very close to $M^2_V=\overline{e^2\rho^2}$, without observable lattice spacing dependence. \section{The statistics of the elementary fields} \begin{figure} \begin{center} \includegraphics[width=8.5cm]{abra1a.eps} \includegraphics[width=8.5cm]{abra1b.eps} \end{center} \caption{Correlation between the scalar field and the longitudinal and transversal gauge fields. On the left the binned averages of (\ref{nonfactlong}) are displayed as a function of time (measured in units $|m|^{-1}$). The error bars show the actual amplitude of its fluctuations. The average values of the correlation coefficients are shown in the right hand figure as a function of the energy density (measured in units $|m|^4$). The variation is compatible with the vanishing of the correlation for vanishing energy density (its slope is: $ 1.04 \pm 0.04 $). } \label{kep:elso} \end{figure} In this section we shall analyse the data in further detail, focusing on the statistical correlations of the field variables $\rho, {\bf A}_L, {\bf A}_T$. Our aim is to find the field-coordinates of the quasi-particles emerging from the thermodynamical analysis of the previous section. There are two aspects of this search: \begin{itemize} \item{} Can one consider the above variables mutually independent in the statistical sense? \item{} Do they follow the Gaussian statistics expected for small amplitude, nearly free quasi-particle oscillations? 
\end{itemize} As a signature of the independence of two field variables, we shall consider the factorisability of the spatial average of their quadratic products by testing if, for instance, \begin{equation} \Delta[{\bf A}_T,\rho]\equiv \frac{\overline{\rho^2({\bf x},t){\bf A}_T^2({\bf x},t)}- \overline{\rho^2({\bf x},t)}~\overline{{\bf A}_T^2({\bf x},t)}} {\overline{\rho^2({\bf x},t){\bf A}_T^2({\bf x},t)}} = 0 \label{nonfactlong} \end{equation} is fulfilled (an analogous test is performed for $\Delta[{\bf A}_L,\rho]$). For variables obeying Gaussian statistics the equality \begin{equation} \Gamma[{\bf A}_T]\equiv\frac {\overline{\left({\bf A}_T^2({\bf x},t)\right)^2}-3\left( \overline{{\bf A}_T^2({\bf x},t)}\right)^2} {\overline{\left({\bf A}_T^2({\bf x},t)\right)^2}}= 0 \label{nonGauss} \end{equation} is obeyed. Clear quantitative evidence appears on the left hand side of Fig.\ref{kep:elso} for the stationary (time-independent) independence of ${\bf A}_T$ and $\rho$. On the other hand, ${\bf A}_L$ and $\rho$ show non-zero correlation, which also appears constant in time. The test of Gaussianity shows that from very early times both $\Gamma[{\bf A}_T]$ and $\Gamma[\rho]$ vanish, while $\Gamma[{\bf A}_L]$ violates the criterion concerning the Gaussian nature of its statistics. On the right hand side of Fig. \ref{kep:elso} we give the dependence of $\Delta[{\bf A}_L,\rho]$ on the average energy density of the system. (The corresponding temperatures, understood in the sense described in section 2, are listed in Table \ref{tabl1}.) The effect increases linearly with the energy density. Analogous behaviour is observed for $\Gamma[{\bf \Pi}_L]$ and $\Delta[{\bf \Pi}_L,\rho]$, where ${\bf \Pi}_L$ is the canonical momentum field conjugate to ${\bf A}_L$. The correlation coefficient $\Delta[{\bf A}_L,\rho]$ was also computed on lattices of the same physical size but smaller lattice spacing, providing essentially the same result. 
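The two tests of Eqs. (\ref{nonfactlong}) and (\ref{nonGauss}) are straightforward to evaluate on field snapshots. A minimal sketch (Python, applied to mock one-component fields; note that the factor 3 in the quartic-moment criterion holds per Gaussian component):

```python
import numpy as np

def delta(a_sq, r_sq):
    """Factorisation test of Eq. (nonfactlong): vanishes for independent fields."""
    return (np.mean(a_sq * r_sq) - np.mean(a_sq) * np.mean(r_sq)) / np.mean(a_sq * r_sq)

def gamma(a_sq):
    """Gaussianity test of Eq. (nonGauss): <(A^2)^2> = 3 <A^2>^2 per Gaussian component."""
    return (np.mean(a_sq**2) - 3.0 * np.mean(a_sq)**2) / np.mean(a_sq**2)

rng = np.random.default_rng(1)
n = 400_000
a_gauss    = rng.standard_normal(n)            # mock Gaussian field component
a_nongauss = rng.exponential(size=n) - 1.0     # zero mean, but non-Gaussian
r_fluct    = 0.1 * rng.standard_normal(n)      # independent mock Higgs fluctuation

d_indep    = delta(a_gauss**2, r_fluct**2)     # close to 0: the fields factorise
g_gauss    = gamma(a_gauss**2)                 # close to 0: Gaussian statistics
g_nongauss = gamma(a_nongauss**2)              # clearly nonzero: Gaussianity violated
```

Finite-sample fluctuations mean both indicators only vanish up to $O(1/\sqrt{N})$, which is why the text monitors their time histories rather than single snapshots.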
The lack of the $\rho - {\bf A}_L$ factorisation was demonstrated on a number of functions $f(\rho)$ replacing $\rho^2$, among them also $(\rho-\overline{\rho})^2$. In systems with large values of the couplings this strong correlation could naturally be expected. In this respect it is more surprising that the transversal gauge field behaves as a perfectly independent variable. In non-equilibrium time evolution, when, in addition to the small amplitude white noise initial conditions applied to the ${\bf k}\neq 0$ modes, the average Higgs field (${\bf k}=0$) starts from the unstable maximum at $\overline{\rho}=0$, we have frequently observed the generation of rather large nonzero values for $\Delta[{\bf A}_T,\rho]$. Some spectacular examples are presented in Fig.\ref{kep:transverse-korr}. One observes a rather abrupt vanishing of these values after a certain time elapses. It turns out that in the case of the transverse fields nonzero quasi-stationary values of $\Delta[{\bf A}_T,\rho]$ are very sensitive indicators of the presence of Nielsen-Olesen vortex-antivortex pairs \cite{olesen73}. A representative vortex-antivortex pair is displayed in the right hand figure of Fig.\ref{kep:transverse-korr}. The drop of the absolute value of $\Delta[{\bf A}_T,\rho]$ occurs when the pair annihilates. A similar, but less pronounced, drop is observed in the correlation of the Higgs field and the magnetic plaquette variable, which is strictly gauge invariant even at finite lattice spacing. It is clear that in the vortex solution presented in \cite{olesen73} the transversal vector potential and the real Higgs field vary in a correlated way. The detailed statistics of the vortex production during the spinodal instability will be discussed in a separate publication. 
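The vortex-core criterion used for the right panel of Fig.\ref{kep:transverse-korr}, looking for the points where $\rho$ dips below $0.3$, amounts to a simple clustering of sub-threshold lattice sites. A hypothetical sketch (Python; nearest-neighbour connectivity with periodic boundaries assumed, all names invented here):

```python
import numpy as np
from collections import deque

def core_sites(rho, threshold=0.3):
    """Lattice sites where the Higgs modulus dips below the threshold."""
    return np.argwhere(rho < threshold)

def count_cores(rho, threshold=0.3):
    """Count connected sub-threshold clusters (nearest-neighbour connectivity,
    periodic boundaries): candidate vortex/antivortex cores."""
    mask = rho < threshold
    seen = np.zeros(mask.shape, dtype=bool)
    clusters = 0
    for start in map(tuple, np.argwhere(mask)):
        if seen[start]:
            continue
        clusters += 1                      # found a new, unvisited cluster
        queue = deque([start])
        seen[start] = True
        while queue:                       # breadth-first flood fill of the cluster
            site = queue.popleft()
            for axis in range(mask.ndim):
                for step in (-1, 1):
                    nb = list(site)
                    nb[axis] = (nb[axis] + step) % mask.shape[axis]
                    nb = tuple(nb)
                    if mask[nb] and not seen[nb]:
                        seen[nb] = True
                        queue.append(nb)
    return clusters

# mock configuration: a pair of well-separated cores on a 2d slice
rho = np.ones((20, 20))
rho[2:4, 2:4] = 0.1
rho[10:12, 10:12] = 0.1
```

A vortex-antivortex pair would then show up as two such clusters persisting over many time steps until annihilation.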
Here we conclude, as a byproduct of this analysis, that the vanishing of $\Delta[{\bf A}_T,\rho]$ in equilibrium not only hints at the independence of the ${\bf A}_T$-statistics, but also represents evidence that no vortex lines are present in our equilibrium configurations. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{deltatr.eps} \includegraphics[width=8.5cm]{vortexpar.eps} \end{center} \caption{Left: Three examples of the non-equilibrium evolution of $\Delta[{\bf A}_T,\rho]$. In two time-histories stationary nonzero values of $\approx -0.6$ occurred for a restricted time interval. This signals the presence of a vortex-antivortex pair and can easily be distinguished from the fast and smooth relaxation to zero, when no vortices are generated. Right: The vortex-antivortex pair corresponding to non-zero $\Delta$ is found by looking for the points where $\rho < 0.3$.} \label{kep:transverse-korr} \end{figure} We have also investigated the dependence of $\Delta[{\bf A}_L,\rho]$ and $\Delta[{\bf A}_T,\rho]$ on the Higgs-gauge mass ratio, that is, on $\lambda/e^2$. The value of $\lambda$ was varied in the range $(0.75,75)$, keeping the energy density fixed and $e^2=1$. $\Delta[{\bf A}_L,\rho]$ increases linearly from $\sim 0.001$ up to $\sim 0.1$. Specifically, no effect on this linear variation can be observed when the decay channel of the Higgs into two ``photons'' opens around $\lambda/e^2\sim 12$. The time average of $\Delta[{\bf A}_T,\rho]$ stays zero; only the size of its fluctuations increases. We have checked that the correlative effects do not change for $e^2=0.01$. In concluding this section we elaborate on the consequences of the statistical independence of ${\bf A}_T$ and $\rho$ for the thermodynamics. The Gaussian nature of the Fourier modes of ${\bf A}_T$ is expressed by the following formula: \begin{equation} \overline{{\bf A}_T({\bf k}){\bf A}^*_T({\bf k}')}=\overline{|{\bf A}_T({\bf k})|^2}\delta_{{\bf k},{\bf k}'}. 
\label{at-statistics} \end{equation} Let us apply it to the statistical expectation of a weighted space-average of ${\bf A}_T({\bf x})$. One can make the observation that both $\epsilon_T$ and $p_T$ are quadratic expressions of ${\bf A}_T$, multiplied by some function of $\rho$. First one substitutes for ${\bf A}_T$ its Fourier expansion. When the independence of the ${\bf A}_T$ statistics from $\rho$ is used, the average can be written in a factorized way. Finally, (\ref{at-statistics}) is used and provides a simple general formula: \begin{eqnarray} \overline{\frac{1}{V}\int d^3xf[\rho({\bf x},t)]{\bf A}_T^2({\bf x},t)}&=& \overline {\frac{1}{V}\int d^3xf[\rho({\bf x},t)]\sum_{{\bf k},{\bf k}'}e^{i({\bf k}-{\bf k}'){\bf x}}~}\times\overline{{\bf A}_T({\bf k},t){\bf A}_T^*({\bf k}',t)}\nonumber\\ &=&\overline{\langle f[\rho({\bf x},t)]\rangle_V}\sum_{\bf k}\overline{|{\bf A}_T({\bf k},t)|^2}, \eea where the bracket with lower index $V$ was introduced to denote the spatial averaging. This representation allows for the following identification of the corresponding spectral density, with no ambiguity left in its definition: \begin{equation} \overline{\langle f[\rho]{\bf A}^2\rangle_V}({\bf k})\equiv \overline{\langle f[\rho({\bf x},t)]\rangle_V}~\overline{|{\bf A}({\bf k},t)|^2}. \end{equation} Direct consequences of this are the following expressions for $\overline{\epsilon}_T, \overline{p}_T$, where factorized averaging over ${\bf A}_T$ and $\rho$ appears: \begin{eqnarray} \overline{\epsilon_T}({\bf k})&=&\frac{1}{2}\left[\overline{|{\bf\Pi}_T({\bf k})|^2}+({\bf k}^2+\overline{\langle e^2\rho^2\rangle_V} ) \overline{|{\bf A}_T({\bf k})|^2}\right], \nonumber\\ \overline{p_T}({\bf k})&=&\frac{1}{6}\left [\overline{|{\bf\Pi}_T({\bf k})|^2}+({\bf k}^2-\overline{\langle e^2\rho^2\rangle_V}) \overline{|{\bf A}_T({\bf k})|^2}\right]. 
\eea They are equivalently given through the dispersion relation and the equation of state of the transversal quasi-particles: \begin{equation} \omega_T^2({\bf k})=\frac{\overline{|{\bf\Pi}_T({\bf k})|^2}} {\overline{|{\bf A}_T({\bf k})|^2}}={\bf k}^2+\overline{\langle e^2\rho^2\rangle_V},\qquad w_T({\bf k})\equiv\frac{\overline{p}_T({\bf k})} {\overline{\epsilon}_T({\bf k})}=\frac{1}{3}\frac{{\bf k}^2}{{\bf k}^2+\overline{\langle e^2\rho^2\rangle_V}}. \label{eqstattrans} \end{equation} (The dispersion relation emerges from the average kinetic-potential energy equality for each mode.) These expressions perfectly reproduce the original transversal spectral densities and the EoS extracted in the previous section. A similarly good representation is obtained for the thermodynamics of the Higgs sector when Gaussianity of $\rho$ is imposed on the averaging. In the case of ${\bf A}_L$, however, only a very poor representation of the spectral equation of state is obtained if independence and Gaussianity are forced upon the longitudinal part of the vector potential. The quality of the description can be characterized by the violation of the kinetic-potential energy equality (\ref{equilong}), which is displayed in Fig.~\ref{kep:ekvipart}. Since the characteristics of the longitudinal equation of state clearly possess a quasi-particle character, as was seen in Section 2, our conclusion is that ${\bf A}_L$ cannot be the corresponding quasi-particle coordinate. Instead, in the next section we search for an appropriate non-linear combination of ${\bf A}_L$ and $\rho$. \section{Search for the longitudinal quasi-particle} We search for nonlinear combinations of the longitudinal vector potential and $\rho$ for which the equalities in (\ref{nonfactlong}) and (\ref{nonGauss}) can be verified. This should support the general view that finite-temperature media are composed of almost free quasi-particles.
It should be checked whether for this new combination the equipartition and the spectral equation of state can be interpreted consistently under the assumption of the factorisation of their statistics. It will also be checked to what extent it follows Gaussian statistics. Eventually we show that the quality of the equation of state based on this independent new variable is almost as good as the result of Section 2. We shall work out the example of (\ref{nonlin}), and only comment on the performance of the alternative proposition (\ref{nonlin2}). The coefficient $\alpha$ in (\ref{nonlin}) could be estimated analytically by assuming that it is small. For a quantitative treatment, however, there is no need to apply approximations. By tuning $\alpha$ continuously we have measured the normalised reduced correlation functions $\Delta({\cal A}_L,\rho)$ and $\Delta({\cal P}_L,\rho)$. In Fig.~\ref{kep:alfamer} these two curves are displayed as functions of $\alpha$, together with the quantities testing the Gaussian nature of the new variables. It is remarkable that both the new coordinate and its conjugate momentum become uncorrelated with the Higgs field for the same $\alpha$. Although they are not perfectly Gaussian variables, the deviation from Gaussianity is minimal just for this $\alpha$ value; it is about 5 times smaller than at $\alpha=0$. The situation is very similar for the nonlinear mapping (\ref{nonlin2}), as can be seen from the twin figure of Fig.~\ref{kep:alfamer}. It is important to notice that in the latter case $\alpha\neq 1$, which would be the naive expectation on the basis of the way the kinetic-potential equipartition is fulfilled. Further on in the analysis we use the optimal $\alpha$ values found as the average over 40 independent configurations. It has to be emphasised that no unique $\alpha_{opt}$ exists for single configurations; the optimal values found from the four different criteria fluctuate somewhat.
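The $\alpha$-tuning procedure described above can be illustrated with a minimal numerical sketch. The data below are entirely synthetic: the "observed" field is built from an independent Gaussian coordinate multiplied by a $\rho$-dependent factor, and the mapping inverted in the scan mirrors this toy form, not the actual transformation (\ref{nonlin}) of the paper.

```python
import numpy as np

# Synthetic stand-in for the measurement: the "observed" longitudinal field
# is an independent Gaussian coordinate xi dressed by a rho-dependent factor.
# The functional form (1 + alpha*rho^2) is an illustrative assumption.
rng = np.random.default_rng(2)
n = 200_000
rho = 1.0 + 0.3 * rng.standard_normal(n)
xi = rng.standard_normal(n)
alpha_true = 0.4
a_l = xi * (1.0 + alpha_true * rho**2)

def delta(a, b):
    """Pearson-type reduced correlation of the squared fields."""
    a2, b2 = a**2, b**2
    cov = np.mean(a2 * b2) - np.mean(a2) * np.mean(b2)
    return cov / (np.std(a2) * np.std(b2))

# Tune alpha continuously and keep the value minimising |Delta| of the
# mapped variable with rho -- the criterion used in the text.
alphas = np.linspace(0.0, 1.0, 101)
deltas = [abs(delta(a_l / (1.0 + a * rho**2), rho)) for a in alphas]
alpha_opt = alphas[int(np.argmin(deltas))]
print(abs(alpha_opt - alpha_true) < 0.1)  # True: the scan recovers alpha_true
```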
It is the ensemble average which leads to the remarkable coincidence displayed in Fig.~\ref{kep:alfamer}. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{abra4a.eps} \includegraphics[width=8.5cm]{abra4b.eps} \end{center} \caption{Correlators of two versions of the nonlinearly mapped ``longitudinal'' field as a function of the parameter $\alpha$. On the left the results of the mapping (\ref{nonlin}), on the right those of the mapping (\ref{nonlin2}) are shown.} \label{kep:alfamer} \end{figure} Now one has to re-express the partial energy densities and pressures in terms of the new variables. One should not forget that, as a consequence of (\ref{nonlin}), also the canonical momentum conjugate to $\rho$ changes: \begin{equation} {\cal P}_\rho+\frac{2\alpha\rho}{1+\alpha\rho^2}{\cal A}_L{\cal P}_L={\bf \Pi}_\rho. \end{equation} Assuming factorisation of the statistical averaging over $\rho, \Pi_\rho$ and ${\cal A}_L, {\cal P}_L$, one obtains the following expressions for the averaged energy densities: \begin{eqnarray} \overline{\epsilon}_\rho&=&\frac{1}{2}\overline{{\cal P}_\rho^2}+ \overline{\frac{2\alpha^2\rho^2}{3(1+\alpha\rho^2)^2}}\overline{{\cal A}_L^2}~\overline{{\cal P}_L^2}+\frac{1}{2}\overline{(\nabla\rho)^2}+\frac{1}{2} M_{eff}^2\overline{\rho^2},\nonumber\\ \overline{\epsilon}_L&=&\frac{1}{2}\left(\overline{(1+\alpha\rho^2)^2}+ \overline{\frac{(\alpha\nabla\rho^2)^2}{e^2\rho^2}}\right) \overline{{\cal P}_L^2}+\frac{1}{2} \overline{\frac{(1+\alpha\rho^2)^2}{e^2\rho^2}}\overline{(\nabla{\cal P}_L)^2}\nonumber\\ &+&\frac{1}{2}\overline{\frac{e^2\rho^2}{(1+\alpha\rho^2)^2}} \overline{{\cal A}_L^2}. \label{energynonlin} \eea It is now worth checking whether the equality between the first two terms and the last term of Eq.~(\ref{energynonlin}) is fulfilled in the present factorised form. Fig.~\ref{kep:ekvipart} demonstrates no more than a 1\% deviation from the almost perfect equality found in Section 2.
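The factorised averages entering the expressions above rely on the statistical independence of $\rho, \Pi_\rho$ from ${\cal A}_L, {\cal P}_L$. For genuinely independent fields the factorisation of a volume average of a product can be checked numerically; the sketch below uses synthetic Gaussian data and an illustrative weight function, not the simulation output.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
rho = 1.0 + 0.05 * rng.standard_normal(n)   # Higgs-modulus-like field
p_l = rng.standard_normal(n)                # independent Gaussian momentum field

# An illustrative rho-dependent coefficient of the kind appearing in the
# averaged energy density; alpha is the mixing parameter of the mapping.
alpha = 0.4
w = (1.0 + alpha * rho**2) ** 2

direct = np.mean(w * p_l**2)               # direct average of the product
factorized = np.mean(w) * np.mean(p_l**2)  # factorized form, valid for
                                           # statistically independent fields

print(abs(direct - factorized) / factorized < 0.02)  # True within sampling error
```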
\begin{figure} \begin{center} \includegraphics[width=12cm]{abra5.eps} \end{center} \caption{ The quality of the mode-by-mode equality of the longitudinal kinetic and potential energies when i) the original ${\bf A}_L$, ii) the nonlinearly constructed variable ${\cal A}_L^{(i)}$ are proposed as statistically independent fields. For comparison the perfect fulfillment of (\ref{equilong}) is also presented. The region $k\geq 2|m|$ is shown, where the equipartition is very well fulfilled for the other degrees of freedom. } \label{kep:ekvipart} \end{figure} The corresponding expressions for the partial pressures are as follows: \begin{eqnarray} \overline{p}_\rho&=&\frac{1}{2}\overline{{\cal P}_\rho^2}+ \overline{\frac{2\alpha^2\rho^2}{3(1+\alpha\rho^2)^2}}\overline{{\cal A}_L^2}~\overline{{\cal P}_L^2}-\frac{1}{6}\overline{(\nabla\rho)^2}-\frac{1}{2} M_{eff}^2\overline{\rho^2},\nonumber\\ \overline{p}_L&=&\frac{1}{6}\left(\overline{(1+\alpha\rho^2)^2}~\overline{{\cal P}_L^2}-\overline{\frac{e^2\rho^2}{(1+\alpha\rho^2)^2}} \overline{{\cal A}_L^2}\right)\nonumber\\ &+&\frac{1}{2} \overline{\frac{(\alpha\nabla\rho^2)^2}{e^2\rho^2}}\overline{{\cal P}_L^2}+\frac{1}{2}\overline{\frac{(1+\alpha\rho^2)^2}{e^2\rho^2}} \overline{(\nabla{\cal P}_L)^2}. \label{pressurenonlin} \eea It is worth writing explicitly the spectral equation of state resulting for the new effective field variables, when the improved, but still approximate, equipartition is exploited: \begin{equation} \displaystyle w_{new}(k)=\frac{1}{3}\frac{\overline{\frac{(\alpha\nabla\rho^2)^2} {e^2\rho^2}}+{\bf k}^2\overline{\frac{(1+\alpha\rho^2)^2}{e^2\rho^2}}} {\overline{(1+\alpha\rho^2)^2}+\overline{\frac{(\alpha\nabla\rho^2)^2} {e^2\rho^2}}+{\bf k}^2\overline{\frac{(1+\alpha\rho^2)^2}{e^2\rho^2}}}. \label{eqstatnonlintheory} \end{equation} The correction in the energy density and the pressure of $\rho$ turns out to be negligible. In Fig.
\ref{kep:theory} we plot the ``longitudinal'' equation of state obtained from (\ref{energynonlin}) and (\ref{pressurenonlin}) by measuring the corresponding averages directly, together with the ``theoretical'' curve (\ref{eqstatnonlintheory}). A rather remarkable agreement of the curves is seen, which works best for low and moderate values of $k$. The systematic deviation experienced in the high-$k$ region would require a smaller mass, which however would lead to a large discrepancy in the low-$k$ region. Similar features are observed when the equation of state is constructed with the help of the mapping (\ref{nonlin2}). The curve corresponding to (\ref{eqstatnonlintheory}) agrees very well with the longitudinal equation of state appearing in Fig.~\ref{kep:wekvi}. \begin{figure} \begin{center} \includegraphics[width=12cm]{abra6.eps} \end{center} \caption{ Spectral equation of state with ${\bf {\cal A}}_L^{(1)}$ (continuous line) as compared with the curve (\ref{eqstatnonlintheory}) (``theory'', dotted line), where the $\rho$ dependence of the coefficients is averaged numerically. The equation of state obtained in Section 2 (``without factorisation'', dashed line) is also shown for comparison.} \label{kep:theory} \end{figure} \section{Conclusions} In the present investigation we searched for the quasi-particles of the classical Abelian Higgs model at finite (low) energy density. The validity of a quasi-particle representation of the thermodynamics is essential for cosmological applications. The intuitively expected statistical independence and Gaussian behaviour of the transversal gauge field and of the Higgs excitations in the unitary gauge were confirmed. The longitudinal component of the vector field is not statistically independent of the Higgs oscillations at finite energy density. This correlation and the deviation from Gaussian statistics grow with the temperature and also with the $\lambda/e^2$ ratio.
Still, its thermodynamical characteristics, represented in the form of a spectral equation of state, were shown to be degenerate with the features of the transverse vector modes. The first step of the search for an independent Gaussian pair of canonically conjugate field variables describing the non-transversal gauge fluctuations was put forward in this paper. We are confident that a fourth quasi-particle degree of freedom completes the thermodynamical characterisation, though there is no unique prescription for its systematic construction. One possibility is to abandon the idea of a local mapping in ${\bf x}$-space and try to use a ${\bf k}$-dependent mixing parameter $\alpha({\bf k})$. The present experience is also very useful for the investigation of the real-time evolution of the field excitations in an inflaton+Gauge+Higgs system starting from an unstable (symmetric) initial state and ending in the broken-symmetry phase. We are interested in the rate of excitation of the different field degrees of freedom immediately after the exit from inflation. Already at present it is clear that vortex-antivortex pair production will be rather frequent and can be studied also with the help of the correlation coefficients introduced in the present study. It will be interesting to see whether the energy transfer from the inflaton to the quasi-particles related to the transverse and the longitudinal vector fields is symmetrical. These questions are under current investigation. \section*{Appendix} The thermal equilibrium of the system was realised in our real-time simulations by long temporal evolution, starting from a noisy initial state of finite energy density. In order to implement the equations of motion in the temporal axial gauge, a spatial lattice was introduced.
On this lattice the gauge field and the covariant (forward, $F$) derivatives of the scalar field are approximated with the help of link variables: \begin{equation} U_i(x)=e^{-ie|a_i|A_i(x)},\quad D_i^F\Phi (x)=\frac{1}{|a_i|}\left(U_i(x)\Phi (x+a_i)- \Phi (x)\right). \end{equation} The compact nature of the $U$'s has no effect at the very low energy densities studied in the present investigation. The solution of the equations of motion was started with all conjugate momenta set to zero. Under this condition the Gauss constraint is trivially fulfilled in the temporal axial gauge, and it is preserved during the evolution. The Fourier modes of the canonical coordinate fields were given amplitudes with random phases, providing equal potential energy for each mode. The simulations were done mostly on $N=32$ three-dimensional lattices with a lattice constant $a=0.35|m|^{-1}$. In order to test the possible lattice-constant dependence of the effects, the system was solved also on an $N=64, a=0.174|m|^{-1}$ lattice. Notable lattice-spacing dependence was observed only for the Higgs mass (see the main discussion). The vector potential and its conjugate momenta are computed on an isotropic lattice $(a_i=a)$ as \begin{equation} A_i(x)=-\frac{1}{ea}{\rm Im}U_i(x),\quad \Pi_i=\frac{1}{ea\delta t}{\rm Im}\left(U_i(x+a\delta t)U^{-1}_i(x)\right), \end{equation} where the discrete time-step we use is $a_0=a\delta t$ with $\delta t\sim 0.1$. These fields were decomposed into transverse and longitudinal components, which we need in the formulae for the decomposition of the energy and pressure. The magnetic field strength is measured from the plaquette variables \begin{eqnarray} U_{ij}&=&U_j^{-1}(x)U_i^{-1}(x+a_j)U_j(x+a_i)U_i(x),\nonumber\\ {\rm Im}U_{ij}&=&-ea^2\epsilon_{ijk}B_k. \eea At the moments of measurement the actual configuration was transformed to the unitary gauge. \section*{Acknowledgements} The authors are grateful to J. Smit for valuable suggestions. Enjoyable discussions with Sz.
Bors\'anyi, A. Jakov\'ac and P. Petreczky are also acknowledged. This research was supported by the Hungarian Research Fund (OTKA), contract No. T-037689.
\section{Introduction}\label{intro} Let $M$ be an oriented $3$-manifold, $R=\mathbb{Z}[u^{\pm1},z^{\pm1}]$, $\mathcal{L}$ the set of all oriented links in $M$ up to ambient isotopy in $M$, and let $S$ be the submodule of $R\mathcal{L}$ generated by the skein expressions $u^{-1}L_{+}-uL_{-}-zL_{0}$, where $L_{+}$, $L_{-}$ and $L_{0}$ are oriented links that have identical diagrams, except in one crossing, where they are as depicted in Figure~\ref{skein}. \begin{figure}[!ht] \begin{center} \includegraphics[width=1.7in]{skein} \end{center} \caption{The links $L_{+}, L_{-}, L_{0}$ locally.} \label{skein} \end{figure} \noindent For convenience we allow the empty knot, $\emptyset$, and add the relation $u^{-1} \emptyset -u\emptyset =zT_{1}$, where $T_{1}$ denotes the trivial knot. Then the {\it Homflypt skein module} of $M$ is defined to be: \begin{equation*} \mathcal{S} \left(M\right)=\mathcal{S} \left(M;{\mathbb Z}\left[u^{\pm 1} ,z^{\pm 1} \right],u^{-1} L_{+} -uL_{-} -zL_{0} \right)= R\mathcal{L} / S. \end{equation*} \smallbreak Unlike the Kauffman bracket skein module, the Homflypt skein module of a $3$-manifold, also known as the \textit{Conway skein module} and the \textit{third skein module}, is very hard to compute (see [P-2] for the case of the product of a surface and the interval). \smallbreak Let ST denote the solid torus. In \cite{Tu}, \cite{HK} the Homflypt skein module of the solid torus has been computed using diagrammatic methods by means of the following theorem: \begin{thm}[Turaev, Kidwell--Hoste] \label{turaev} The skein module $\mathcal{S}({\rm ST})$ is a free, infinitely generated $\mathbb{Z}[u^{\pm1},z^{\pm1}]$-module isomorphic to the symmetric tensor algebra $SR\widehat{\pi}^0$, where $\widehat{\pi}^0$ denotes the conjugacy classes of non-trivial elements of $\pi_1(\rm ST)$.
\end{thm} A basic element of $\mathcal{S}({\rm ST})$ in the context of \cite{Tu, HK} is illustrated in Figure~\ref{tur}. In the diagrammatic setting of \cite{Tu} and \cite{HK}, ST is considered as ${\rm Annulus} \times {\rm Interval}$. The Homflypt skein module of ST is particularly important, because any closed, connected, oriented (c.c.o.) $3$-manifold can be obtained by surgery along a framed link in $S^3$ with unknotted components. \begin{figure} \begin{center} \includegraphics[width=1.3in]{basicelement} \end{center} \caption{A basic element of $\mathcal{S}({\rm ST})$.} \label{tur} \end{figure} \smallbreak A different basis of $\mathcal{S}({\rm ST})$, known as the Young idempotent basis, is based on the work of Morton and Aiston \cite{MA} and Blanchet \cite{B}. \smallbreak In \cite{La2}, $\mathcal{S}({\rm ST})$ has been recovered using algebraic means. More precisely, the generalized Hecke algebra of type B, $\textrm{H}_{1,n}(q)$, is introduced, which is isomorphic to the affine Hecke algebra of type A, $\widetilde{\textrm{H}_n}(q)$. Then a unique Markov trace is constructed on the algebras $\textrm{H}_{1,n}(q)$, leading to an invariant for links in ST, the universal analogue of the Homflypt polynomial for ST. This trace gives distinct values on distinct elements of the \cite{Tu, HK}-basis of $\mathcal{S}({\rm ST})$. The link isotopy in ST, which is taken into account in the definition of the skein module and which corresponds to conjugation and the stabilization moves on the braid level, is captured by the conjugation property and the Markov property of the trace, while the defining relation of the skein module is reflected in the quadratic relation of $\textrm{H}_{1,n}(q)$. In the algebraic language of \cite{La2} the basis of $\mathcal{S}({\rm ST})$ described in Theorem~\ref{turaev} is given in open braid form by the set $\Lambda^{\prime}$ in Eq.~\ref{Lpr}. Figure~\ref{els3} illustrates the basic element of Figure~\ref{tur} in braid notation.
Note that in the setting of \cite{La2} ST is considered as the complement of the unknot (the bold curve in the figure). The looping elements $t^{\prime}_i \in \textrm{H}_{1,n}(q)$ in the monomials of $\Lambda^{\prime}$ are all conjugates, so they are consistent with the trace property and they enable the definition of the trace via simple inductive rules. \smallbreak In this paper we give a new basis $\Lambda$ for $\mathcal{S}({\rm ST})$, conjectured by J.~H.~Przytycki, using the algebraic methods developed in \cite{La2}. The motivation for this work is the computation of $\mathcal{S} \left( L(p,q) \right)$ via algebraic means. The new basic set is described in Eq.~\ref{basis} in open braid form. The looping elements $t_i$ lie in the algebras $\textrm{H}_{1,n}(q)$ and they commute. For a comparative illustration and for the defining formulas of the $t_i$'s and the $t_i^{\prime}$'s, the reader is referred to Figure~\ref{genh} and Eq.~\ref{lgen}, respectively. Moreover, the $t_i$'s are consistent with the handle sliding move or band move used in the link isotopy in $L(p,q)$, in the sense that a braid band move can be described naturally with the use of the $t_i$'s (see for example \cite{DL} and references therein). Our main result is the following: \begin{thm}\label{mainthm} The following set is a $\mathbb{Z}[q^{\pm1}, z^{\pm1}]$-basis for $\mathcal{S}({\rm ST})$: \begin{equation}\label{basis} \Lambda=\{t^{k_0}t_1^{k_1}\ldots t_n^{k_n},\ k_i\in \mathbb{Z}\setminus\{0\}\ \forall i,\ n \in \mathbb{N} \}.
\end{equation} \end{thm} \begin{figure} \begin{center} \includegraphics[width=1.3in]{basicelmnt2} \end{center} \caption{An element of the new basis $\Lambda$.} \label{tbasis} \end{figure} Our method for proving Theorem~\ref{mainthm} is the following: \smallbreak $\bullet$ We define total orderings in the sets $\Lambda^{\prime}$ and $\Lambda$ and \smallbreak $\bullet$ we show that the two ordered sets are related via a lower triangular infinite matrix with invertible elements on the diagonal. \smallbreak More precisely, two analogous sets, $\Sigma_n$ and $\Sigma^{\prime}_n$, are given in \cite{La2} as linear bases for the algebra $\textrm{H}_{1,n}(q)$; see Theorem~\ref{basesH} in this paper. The set $\bigcup_n \Sigma_n$ includes $\Lambda$ as a proper subset and the set $\bigcup_n \Sigma^{\prime}_n$ includes $\Lambda^{\prime}$ as a proper subset. The sets $\Sigma_n$ come directly from the works of S.~Ariki and K.~Koike, and of M.~Brou\`{e} and G.~Malle, on the cyclotomic Hecke algebras of type B; see \cite{La2} and references therein. The sets $\Sigma^{\prime}_n$ appear naturally in the structure of the braid groups of type B, $B_{1,n}$; however, it is very complicated to show that they are indeed basic sets for the algebras $\textrm{H}_{1,n}(q)$. The sets $\Sigma_n$ play an intrinsic role in the proof of Theorem~\ref{mainthm}. Indeed, when trying to convert a monomial $\lambda^{\prime}$ from $\Lambda^{\prime}$ into a linear combination of elements in $\Lambda$, we pass through elements of the sets $\Sigma_n$. This means that in the converted expression of $\lambda^{\prime}$ we have monomials in the $t_i$'s, with possible gaps in the indices, followed by monomials in the braiding generators $g_i$.
So, in order to reach expressions in the set $\Lambda$ we need: \smallbreak $\bullet$ to manage the gaps in the indices of the $t_i$'s and \smallbreak $\bullet$ to eliminate the braiding `tails'. \smallbreak The paper is organized as follows. In Section~1 we recall the algebraic setting and the results needed from \cite{La2}. In Section~2 we define the orderings in the two sets $\Lambda$ and $\Lambda^{\prime}$ and prove that these sets are totally ordered. In Section~3 we prove a series of lemmas for converting elements of $\Lambda^{\prime}$ to elements of the sets $\Sigma_n$. In Section~4 we convert elements of $\Sigma_n$ to elements of $\Lambda$ using conjugation and the stabilization moves. Finally, in Section~5 we prove that the sets $\Lambda^{\prime}$ and $\Lambda$ are related through the lower triangular infinite matrix mentioned above. A computer program converting elements of $\Lambda^{\prime}$ to elements of $\Sigma_n$ has been developed by K. Karvounis and will soon be available on $http://www.math.ntua.gr/^{\sim}sofia$. \smallbreak The algebraic techniques developed here will serve as a basis for computing Homflypt skein modules of arbitrary c.c.o. $3$-manifolds using the braid approach. The advantage of this approach is that we have an already developed homogeneous theory of braid structures and braid equivalences for links in c.c.o. $3$-manifolds (\cite{LR1, LR2, DL}). In fact, these algebraic techniques are used and developed further in \cite{KL} for knots and links in $3$-manifolds represented by the $2$-unlink. \section{The Algebraic Settings} \subsection{Mixed Links in $S^3$} We now view ST as the complement of a solid torus in $S^3$. An oriented link $L$ in ST can be represented by an oriented \textit{mixed link} in $S^{3}$, that is, a link in $S^{3}$ consisting of the unknotted fixed part $\widehat{I}$, representing the complementary solid torus in $S^3$, and the moving part $L$ that links with $\widehat{I}$.
\smallbreak A \textit{mixed link diagram} is a diagram $\widehat{I}\cup \widetilde{L}$ of $\widehat{I}\cup L$ on the plane of $\widehat{I}$, where this plane is equipped with the top-to-bottom direction of $I$. \begin{figure} \begin{center} \includegraphics[width=1.1in]{mixedlink} \end{center} \caption{A mixed link in $S^3$.} \label{mlink} \end{figure} \smallbreak Consider now an isotopy of an oriented link $L$ in ST. As the link moves in ST, its corresponding mixed link changes in $S^{3}$ by a sequence of moves that keep the oriented $\widehat{I}$ pointwise fixed. This sequence of moves consists of isotopy in $S^{3}$ and the \textit{mixed Reidemeister moves}. In terms of diagrams we have the following result for isotopy in ST: \smallbreak The mixed link equivalence in $S^{3}$ includes the classical Reidemeister moves and the mixed Reidemeister moves, which involve the fixed and the standard part of the mixed link, keeping $\widehat{I}$ pointwise fixed. \subsection{Mixed Braids in $S^3$} \noindent By the Alexander theorem for knots in the solid torus, a mixed link diagram $\widehat{I}\cup \widetilde{L}$ of $\widehat{I}\cup L$ may be turned into a \textit{mixed braid} $I\cup \beta$ with isotopic closure. This is a braid in $S^{3}$ where, without loss of generality, its first strand represents $\widehat{I}$, the fixed part, and the other strands, $\beta$, represent the moving part $L$. The subbraid $\beta$ shall be called the \textit{moving part} of $I\cup \beta$.
\begin{figure} \begin{center} \includegraphics[width=2in]{closure} \end{center} \caption{The closure of a mixed braid to a mixed link.} \label{mbtoml} \end{figure} The sets of braids related to ST form groups, which are in fact the Artin braid groups of type B, denoted $B_{1,n}$, with presentation: \[ B_{1,n} = \left< \begin{array}{ll} \begin{array}{l} t, \sigma_{1}, \ldots ,\sigma_{n-1} \\ \end{array} & \left| \begin{array}{l} \sigma_{1}t\sigma_{1}t=t\sigma_{1}t\sigma_{1} \ \ \\ t\sigma_{i}=\sigma_{i}t, \quad{i>1} \\ {\sigma_i}\sigma_{i+1}{\sigma_i}=\sigma_{i+1}{\sigma_i}\sigma_{i+1}, \quad{ 1 \leq i \leq n-2} \\ {\sigma_i}{\sigma_j}={\sigma_j}{\sigma_i}, \quad{|i-j|>1} \\ \end{array} \right. \end{array} \right>, \] \noindent where the generators $\sigma _{i}$ and $t$ are illustrated in Figure~\ref{gen}. \begin{figure} \begin{center} \includegraphics[width=2.3in]{gen} \end{center} \caption{The generators of $B_{1,n}$.} \label{gen} \end{figure} Isotopy in ST is translated on the level of mixed braids by means of the following theorem. \begin{thm}[Theorem~3, \cite{La1}] \label{markov} Let $L_{1}, L_{2}$ be two oriented links in ST and let $I\cup \beta_{1}, I\cup \beta_{2}$ be two corresponding mixed braids in $S^{3}$. Then $L_{1}$ is isotopic to $L_{2}$ in ST if and only if $I\cup \beta_{1}$ is equivalent to $I\cup \beta_{2}$ in $\mathop{\cup }\limits_{n=1}^{\infty } B_{1,n}$ by the following moves: \[ \begin{array}{clll} (i) & Conjugation: & \alpha \sim \beta^{-1} \alpha \beta, & {\rm if}\ \alpha ,\beta \in B_{1,n}. \\ (ii) & Stabilization\ moves: & \alpha \sim \alpha \sigma_{n}^{\pm 1} \in B_{1,n+1}, & {\rm if}\ \alpha \in B_{1,n}. \\ \end{array} \] \end{thm} \subsection{The Generalized Iwahori-Hecke Algebra of type B} It is well known that $B_{1,n}$ is the Artin group of the Coxeter group of type B, which is related to the Hecke algebra of type B, $\textrm{H}_{n}{(q,Q)}$, and to the cyclotomic Hecke algebras of type B.
In \cite{La2} it has been established that all these algebras form a tower of B-type algebras and are related to the knot theory of ST. The basic one is $\textrm{H}_{n}{(q,Q)}$, a presentation of which is obtained from the presentation of the Artin group $B_{1,n}$ by adding the quadratic relations \begin{equation}\label{quad} {g_{i}^2=(q-1)g_{i}+q} \end{equation} \noindent and the relation $t^{2} =\left(Q-1\right)t+Q$, where $q,Q \in {\mathbb C}\backslash \{0\}$ are seen as fixed variables. The middle B-type algebras are the cyclotomic Hecke algebras of type B, $\textrm{H}_{n}(q,d)$, whose presentations are obtained by adding the quadratic relation~(\ref{quad}) and $t^d=(t-u_{1})(t-u_{2}) \ldots (t-u_{d})$. The topmost Hecke-like algebra in the tower is the \textit{generalized Iwahori--Hecke algebra of type B}, $\textrm{H}_{1,n}(q)$, which, as observed by T.~tom Dieck, is isomorphic to the affine Hecke algebra of type A, $\widetilde{\textrm{H}}_n(q)$ (cf. \cite{La2}). The algebra $\textrm{H}_{1,n}(q)$ has the following presentation: \[ \textrm{H}_{1,n}{(q)} = \left< \begin{array}{ll} \begin{array}{l} t, g_{1}, \ldots ,g_{n-1} \\ \end{array} & \left| \begin{array}{l} g_{1}tg_{1}t=tg_{1}tg_{1} \ \ \\ tg_{i}=g_{i}t, \quad{i>1} \\ {g_i}g_{i+1}{g_i}=g_{i+1}{g_i}g_{i+1}, \quad{1 \leq i \leq n-2} \\ {g_i}{g_j}={g_j}{g_i}, \quad{|i-j|>1} \\ {g_i}^2=(q-1)g_{i}+q, \quad{i=1,\ldots,n-1} \end{array} \right. \end{array} \right>. \] \noindent That is: \begin{equation*} \textrm{H}_{1,n}(q)= \frac{{\mathbb Z}\left[q^{\pm 1} \right]B_{1,n}}{ \langle \sigma_i^2 -\left(q-1\right)\sigma_i-q \rangle}. \end{equation*} Note that in $\textrm{H}_{1,n}(q)$ the generator $t$ satisfies no polynomial relation, making the algebra $\textrm{H}_{1,n}(q)$ infinite dimensional. Note also that in \cite{La2} the algebra $\textrm{H}_{1,n}(q)$ is denoted $\textrm{H}_{n}(q, \infty)$. \smallbreak In \cite{Jo} V.F.R.
Jones gives the following linear basis for the Iwahori-Hecke algebra of type A, $\textrm{H}_{n}(q)$: {\small $$ S =\left\{(g_{i_{1} }g_{i_{1}-1}\ldots g_{i_{1}-k_{1}})(g_{i_{2} }g_{i_{2}-1 }\ldots g_{i_{2}-k_{2}})\ldots (g_{i_{p} }g_{i_{p}-1 }\ldots g_{i_{p}-k_{p}})\right\}, \mbox{ for } 1\le i_{1}<\ldots <i_{p} \le n-1. $$ } \noindent The basis $S$ yields directly an inductive basis for $\textrm{H}_{n}(q)$, which is used in the construction of the Ocneanu trace, leading to the Homflypt or $2$-variable Jones polynomial. In $\textrm{H}_{1,n}(q)$ we define the elements: \begin{equation}\label{lgen} t_{i}:=g_{i}g_{i-1}\ldots g_{1}tg_{1} \ldots g_{i-1}g_{i}\quad \textrm{and}\quad t^{\prime}_{i}:=g_{i}g_{i-1}\ldots g_{1}tg_{1}^{-1}\ldots g_{i-1}^{-1}g_{i}^{-1}, \end{equation} as illustrated in Figure~\ref{genh}. \smallbreak In \cite{La2} the following result has been proved. \begin{thm}[Proposition~1, Theorem~1 \cite{La2}] \label{basesH} The following sets form linear bases for ${\rm H}_{1,n}(q)$: \[ \begin{array}{llll} (i) & \Sigma_{n} & = & t_{i_{1} } ^{k_{1} } t_{i_{2} } ^{k_{2} } \ldots t_{i_{r}}^{k_{r} } \cdot \sigma ,\ {\rm where}\ 1\le i_{1} <\ldots <i_{r} \le n-1,\\ & & & \\ (ii) & \Sigma^{\prime} _{n} & = & {t^{\prime}_{i_1}}^{k_{1}} {t^{\prime}_{i_2}}^{k_{2}} \ldots {t^{\prime}_{i_r}}^{k_{r}} \cdot \sigma ,\ {\rm where}\ 1\le i_{1} < \ldots <i_{r} \le n, \\ \end{array} \] \noindent where $k_{1}, \ldots ,k_{r} \in {\mathbb Z}$ and $\sigma$ is a basic element in $\textrm{H}_{n}(q)$. \end{thm} \begin{figure} \begin{center} \includegraphics[width=3.2in]{loop} \end{center} \caption{The elements $t^{\prime}_{i}$ and $t_{i}$.} \label{genh} \end{figure} \begin{remark}\label{conind}\rm \begin{itemize} \item[(i)] The indices of the $t^{\prime}_i$'s in the set $\Sigma^{\prime}_n$ are ordered but not necessarily consecutive, nor do they need to start from $t$.
\item[(ii)] A more straightforward proof that the sets $\Sigma_n^{\prime}$ form bases of $\textrm{H}_{1,n}(q)$ can be found in \cite{D}. \end{itemize} \end{remark} In \cite{La2} the basis $\Sigma^{\prime}_{n}$ is used for constructing a Markov trace on $\bigcup _{n=1}^{\infty }\textrm{H}_{1,n}(q)$. \begin{thm}[Theorem~6, \cite{La2}] \label{tr} Given $z,s_{k}$, with $k\in {\mathbb Z}$, specified elements in $R={\mathbb Z}\left[q^{\pm 1} \right]$, there exists a unique linear Markov trace function \begin{equation*} {\rm tr}:\bigcup _{n=1}^{\infty }{\rm H}_{1,n}(q) \to R\left(z,s_{k} \right),k\in {\mathbb Z} \end{equation*} \noindent determined by the rules: \[ \begin{array}{lllll} (1) & {\rm tr}(ab) & = & {\rm tr}(ba) & \quad {\rm for}\ a,b \in {\rm H}_{1,n}(q) \\ (2) & {\rm tr}(1) & = & 1 & \quad \textrm{for all}\ {\rm H}_{1,n}(q) \\ (3) & {\rm tr}(ag_{n}) & = & z{\rm tr}(a) & \quad \textrm{for}\ a \in {\rm H}_{1,n}(q) \\ (4) & {\rm tr}(a{t^{\prime}_{n}}^{k}) & = & s_{k}{\rm tr}(a) & \quad \textrm{for}\ a \in {\rm H}_{1,n}(q),\ k \in {\mathbb Z}. \\ \end{array} \] \end{thm} Note that the use of the looping elements $t_i^{\prime}$ enables the trace ${\rm tr}$ to be defined by just extending the three rules of the Ocneanu trace on the algebras ${\rm H}_n(q)$ \cite{Jo} by rule (4). Using $\textrm{tr}$, Lambropoulou constructed a universal Homflypt-type invariant for oriented links in ST. Namely, let $\mathcal{L}$ denote the set of oriented links in ST.
Then: \begin{thm} [Definition~1, \cite{La2}] \label{inv} The function $X:\mathcal{L} \rightarrow R(z,s_{k})$ \begin{equation*} X_{\widehat{\alpha}}=\left[-\frac{1-\lambda q}{\sqrt{\lambda } \left(1-q\right)} \right]^{n-1} \left(\sqrt{\lambda } \right)^{e} {\rm tr}\left(\pi \left(\alpha \right)\right), \end{equation*} \noindent where $\alpha \in B_{1,n}$ is a word in the $\sigma _{i}$'s and $t^{\prime}_{i} $'s, $e$ is the exponent sum of the $\sigma _{i}$'s in $\alpha $, and $\pi$ the canonical map of $B_{1,n}$ to ${\rm H}_{1,n}(q)$, such that $t\mapsto t$ and $\sigma _{i} \mapsto g_{i} $, is an invariant of oriented links in ST. \end{thm} The invariant $X$ satisfies a skein relation \cite{La1}. Theorems~\ref{basesH}, \ref{tr} and \ref{inv} hold also for the algebras $\textrm{H}_{n}(q,Q)$ and $\textrm{H}_{n}(q,d)$, giving rise to all possible Homflypt-type invariants for knots in ST. For the case of the Hecke algebra of type B, $\textrm{H}_{n}(q,Q)$, see also \cite{La1} and \cite{LG}. \subsection{The basis of $\mathcal{S}({\rm ST})$ in algebraic terms} Let us now see how $\mathcal{S}({\rm ST})$ is described in the above algebraic language. We note first that an element $\alpha$ in the basis of $\mathcal{S}({\rm ST})$ described in Theorem~\ref{turaev}, when ST is considered as ${\rm Annulus} \times {\rm Interval}$, can be illustrated equivalently as a mixed link in $S^3$ when ST is viewed as the complement of a solid torus in $S^3$. We then represent the element $\alpha$ by its minimal mixed braid representative, which has increasing order of twists around the fixed strand. Figure~\ref{els3} illustrates an example of this correspondence. Denoting \begin{equation}\label{Lpr} \Lambda^{\prime}=\{ {t^{\prime}_1}^{k_1}{t^{\prime}_2}^{k_2} \ldots {t^{\prime}_n}^{k_n}, \ k_i \in \mathbb{Z}\setminus\{0\} \ \forall i,\ n\in \mathbb{N} \}, \end{equation} \noindent we have that $\Lambda^{\prime}$ is a subset of $\bigcup_n{\textrm{H}_{1,n}}$. 
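As a small computational aside (our own illustration, not part of \cite{La2}): by rule~(4) of the trace of Theorem~\ref{tr}, applied repeatedly from the looping generator of highest index down, the trace of a monomial $t^{k_0}{t^{\prime}_1}^{k_1}\cdots{t^{\prime}_{n-1}}^{k_{n-1}}$ evaluates to the product $s_{k_{n-1}}\cdots s_{k_1}s_{k_0}$. A minimal sketch of this bookkeeping, with monomials encoded as exponent tuples (the function name and encoding are hypothetical):

```python
def trace_of_loop_monomial(exponents):
    # exponents = (k_0, k_1, ..., k_{n-1}) encodes t^{k_0} t'_1^{k_1} ... t'_{n-1}^{k_{n-1}}.
    # Rule (4) peels off the looping generator of highest index first, so the
    # trace is the product s_{k_{n-1}} ... s_{k_1} s_{k_0}.
    return ["s_{%d}" % k for k in reversed(exponents)]

# e.g. tr(t^2 t_1'^{-1} t_2'^3) = s_3 s_{-1} s_2
assert trace_of_loop_monomial((2, -1, 3)) == ["s_{3}", "s_{-1}", "s_{2}"]
```

In particular, distinct exponent tuples give distinct monomials in the $s_k$'s, which is the mechanism by which $X$ separates the elements of $\Lambda^{\prime}$.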
In particular $\Lambda^{\prime}$ is a subset of $\bigcup_n{\Sigma^{\prime}_n}$. \begin{figure} \begin{center} \includegraphics[width=3.3in]{basicelmnt} \end{center} \caption{An element in $\Lambda^{\prime}$.} \label{els3} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.5in]{basis21} \end{center} \caption{An element of $\Lambda$.} \label{els4} \end{figure} Applying the inductive trace rules to a word $w$ in $\bigcup_n\Sigma^{\prime}_n$ will eventually give rise to linear combinations of monomials in $R(z, s_k)$. In particular, for an element of $\Lambda^{\prime}$ we have: $${\rm tr}(t^{k_0}{t^{\prime}_1}^{k_1}\ldots {t^{\prime}_{n-1}}^{k_{n-1}})=s_{k_{n-1}}\ldots s_{k_{1}}s_{k_{0}}.$$ Further, the elements of $\Lambda^{\prime}$ are in bijective correspondence with increasing $n$-tuples of integers, $(k_0,k_1,\ldots,k_{n-1})$, $n \in \mathbb{N}$, and these are in bijective correspondence with monomials in $s_{k_0},s_{k_1}, \ldots, s_{k_{n-1}}$. \begin{remark} \rm The invariant $X$ recovers the Homflypt skein module of ST since it gives different values for different elements of $\Lambda^{\prime}$ by rule~4 of the trace. \end{remark} \section{An ordering in the sets $\Lambda$ and $\Lambda^{\prime}$} In this section we define an ordering relation in the sets $\Lambda$ and $\Lambda^{\prime}$. Before that, we will need the notion of the index of a word in $\Lambda^{\prime}$ or in $\Lambda$ . \begin{defn} \label{index} \rm The {\it index} of a word $w$ in $\Lambda^{\prime}$ or in $\Lambda$, denoted $ind(w)$, is defined to be the highest index of the $t_i^{\prime}$'s, resp. of the $t_i$'s, in $w$. Similarly, the \textit{index} of an element in $\Sigma_n^{\prime}$ or in $\Sigma_n$ is defined in the same way by ignoring possible gaps in the indices of the looping generators and by ignoring the braiding part in $\textrm{H}_{n}(q)$. Moreover, the index of a monomial in $\textrm{H}_{n}(q)$ is equal to $0$. 
\end{defn} \noindent For example, $ind(t^{k_0}{t^{\prime}_1}^{k_1}\ldots {t^{\prime}_n}^{k_n})=ind(t^{u_0}\ldots t_n^{u_n})=n$. \begin{defn} \label{order} \rm We define the following {\it ordering} in the set $\Lambda^{\prime}$. Let $w={t^{\prime}_{i_1}}^{k_1}{t^{\prime}_{i_2}}^{k_2}\ldots {t^{\prime}_{i_{\mu}}}^{k_{\mu}}$ and $\sigma={t^{\prime}_{j_1}}^{\lambda_1}{t^{\prime}_{j_2}}^{\lambda_2}\ldots {t^{\prime}_{j_{\nu}}}^{\lambda_{\nu}}$, where $k_t , \lambda_s \in \mathbb{Z}$, for all $t,s$. Then: \smallbreak \begin{itemize} \item[(a)] If $\sum_{i=0}^{\mu}k_i < \sum_{i=0}^{\nu}\lambda_i$, then $w<\sigma$. \vspace{.1in} \item[(b)] If $\sum_{i=0}^{\mu}k_i = \sum_{i=0}^{\nu}\lambda_i$, then: \vspace{.1in} \noindent (i) if $ind(w)<ind(\sigma)$, then $w<\sigma$, \vspace{.1in} \noindent (ii) if $ind(w)=ind(\sigma)$, then: \vspace{.1in} \noindent \ \ \ \ ($\alpha$) if $i_1=j_1, i_2=j_2, \ldots , i_{s-1}=j_{s-1}, i_{s}<j_{s}$, then $w>\sigma$, \vspace{.1in} \noindent \ \ \ \ ($\beta$) if $i_t=j_t\ \forall t$ and $k_{\mu}=\lambda_{\mu}, k_{\mu-1}=\lambda_{\mu-1}, \ldots, k_{i+1}=\lambda_{i+1}, |k_i|<|\lambda_i|$, then $w<\sigma$, \vspace{.1in} \noindent \ \ \ \ ($\gamma$) if $i_t=j_t\ \forall t$ and $k_{\mu}=\lambda_{\mu}, k_{\mu-1}=\lambda_{\mu-1}, \ldots, k_{i+1}=\lambda_{i+1}, |k_i|=|\lambda_i|$ and $k_i>\lambda_i,$ \vspace{.1in} \noindent \ \ \ \ \ \ \ \ \ then $w<\sigma$, \vspace{.1in} \noindent \ \ \ \ ($\delta$) if $i_t=j_t\ \forall t$ and $k_i=\lambda_i$, $\forall i$, then $w=\sigma$. \vspace{.1in} \item[(c)] In the general case where $w={t^{\prime}_{i_1}}^{k_1}{t^{\prime}_{i_2}}^{k_2}\ldots {t^{\prime}_{i_{\mu}}}^{k_{\mu}} \cdot \sigma_1$ and $\sigma={t^{\prime}_{j_1}}^{\lambda_1}{t^{\prime}_{j_2}}^{\lambda_2}\ldots {t^{\prime}_{j_{\nu}}}^{\lambda_{\nu}}\cdot \sigma_2$, where $\sigma_1, \sigma_2 \in \textrm{H}_n(q)$, the ordering is defined in the same way by ignoring the braiding parts $\sigma_1, \sigma_2$. 
\end{itemize} \end{defn} The same ordering is defined on the set $\Lambda$, where the $t^{\prime}_i$'s are replaced by the corresponding $t_i$'s. Moreover, the same ordering is defined on the sets $\Sigma_n$ and $\Sigma_n^{\prime}$ by ignoring the braiding parts. \begin{prop} The set $\Lambda^{\prime}$, equipped with the ordering given in Definition~\ref{order}, is a totally ordered set. \end{prop} \begin{proof} In order to show that the set $\Lambda^{\prime}$ is a totally ordered set when equipped with the ordering given in Definition~\ref{order}, we need to show that the ordering relation is antisymmetric, transitive and total. We only show that the ordering relation is transitive. The antisymmetry property follows similarly. Totality follows from Definition~\ref{order}, since all possible cases have been considered. \smallbreak Let $w={t}^{k_0}{t^{\prime}_{1}}^{k_1}\ldots {t^{\prime}_{m}}^{k_m}$, $\sigma={t}^{\lambda_0}{t^{\prime}_{1}}^{\lambda_1} \ldots {t^{\prime}_{n}}^{\lambda_n}$ and $v=t^{\mu_0} {t^{\prime}_{1}}^{\mu_1} \ldots {t^{\prime}_{p}}^{\mu_p}$ and let $w<\sigma$ and $\sigma<v$. \smallbreak Since $w<\sigma$, one of the following holds: \smallbreak \begin{itemize} \item[(a)] Either $\sum_{i=1}^{m}k_i<\sum_{i=1}^{n}\lambda_i$ and since $\sigma<v$, we have that $\sum_{i=1}^{n}\lambda_i\leq \sum_{i=1}^{p}\mu_i$ and so\\ \smallbreak \noindent $\sum_{i=1}^{m}k_i<\sum_{i=1}^{p}\mu_i$. Thus $w<v$.\\ \bigbreak \item[(b)] Either $\sum_{i=1}^{m}k_i=\sum_{i=1}^{n}\lambda_i$ and $ind(w)=m<n=ind(\sigma)$. Then, since $\sigma<v$ we have\\ \smallbreak \noindent that either $\sum_{i=1}^{n}\lambda_i<\sum_{i=1}^{p}\mu_i$ $\big(\textrm{same as in case (a)}\big)$ or $\sum_{i=1}^{n}\lambda_i=\sum_{i=1}^{p}\mu_i$ and \\ \smallbreak \noindent $ind(\sigma)\leq p = ind(v)$. Thus, $ind(w)=m<p=ind(v)$ and so we conclude that $w<v$.\\ \bigbreak \item[(c)] Either $\sum_{i=1}^{m}k_i=\sum_{i=1}^{n}\lambda_i$, $ind(w)=ind(\sigma)$ and $i_1=j_1, \ldots , i_{s-1}=j_{s-1}, i_s>j_s$. 
Then,\\ \smallbreak \noindent since $\sigma<v$, we have that either:\\ \smallbreak $\bullet$ $\sum_{i=1}^{n}\lambda_i<\sum_{i=1}^{p}\mu_i$, same as in case (a), or\\ \smallbreak $\bullet$ $\sum_{i=1}^{n}\lambda_i=\sum_{i=1}^{p}\mu_i$ and $ind(\sigma)<ind(v)$, same as in case (b), or\\ \smallbreak $\bullet$ $ind(\sigma)=ind(v)$ and $j_1=\varphi_1, \ldots , j_p>\varphi_p$. Then:\\ \smallbreak \ \ \ $(i)$ if $p=s$ we have that $i_s>j_s>\varphi_s$ and we conclude that $w<v$.\\ \smallbreak \ \ \ $(ii)$ if $p<s$ we have that $i_p=j_p>\varphi_p$ and thus $w<v$ and if $s<p$ we have that \\ \smallbreak \ \ \ \ \ \ \ \ \ $i_s>j_s=\varphi_s$ and so $w<v$.\\ \bigbreak \item[(d)] Either $\sum_{i=1}^{m}k_i=\sum_{i=1}^{n}\lambda_i$, $ind(w)=ind(\sigma)$, $i_n=j_n\ \forall n$ and $k_n=\lambda_n, \ldots, |k_q|<|\lambda_q|$.\\ \smallbreak \noindent Then, since $\sigma<v$, we have that either:\\ \smallbreak $\bullet$ $\sum_{i=1}^{n}\lambda_i<\sum_{i=1}^{p}\mu_i$, same as in case (a), or\\ \smallbreak $\bullet$ $\sum_{i=1}^{n}\lambda_i=\sum_{i=1}^{p}\mu_i$ and $ind(\sigma)<ind(v)$, same as in case (b), or\\ \smallbreak $\bullet$ $ind(\sigma)=ind(v)$ and $j_1=\varphi_1, \ldots, j_q>\varphi_q$, same as in case (c), or\\ \smallbreak $\bullet$ $j_n=\varphi_n$, for all $n$ and $|\mu_p| \geq |\lambda_p|$ for some $p$.\\ \smallbreak \ \ \ $(1)$ If $|\mu_p| > |\lambda_p|$, then:\\ \smallbreak \ \ \ \ \ \ \ \ $(i)$ If $p\geq q$ then $|k_p|=|\lambda_p|<|\mu_p|$ and thus $w<v$.\\ \smallbreak \ \ \ \ \ \ \ \ $(ii)$ If $p<q$ then $|k_q|<|\lambda_q|=|\mu_q|$ and thus $w<v$.\\ \smallbreak \ \ \ $(2)$ If $|\mu_p| = |\lambda_p|$, then:\\ \smallbreak \ \ \ \ \ \ \ \ $(i)$ If $p\geq q$ then $|k_p|=|\lambda_p|=|\mu_p|$ and $k_p=\lambda_p>\mu_p$. Thus $w<v$.\\ \smallbreak \ \ \ \ \ \ \ \ $(ii)$ If $p<q$ then $|k_q|<|\lambda_q|=|\mu_q|$ and thus $w<v$. \end{itemize} \bigbreak \noindent So, we conclude that the ordering relation is transitive. 
\end{proof} \begin{defn} \label{level} \rm We define the subset of \textit{level $k$}, $\Lambda_k$, of $\Lambda$ to be the set $$\Lambda_k:=\{t^{k_0}t_1^{k_1}\ldots t_{m}^{k_m} | \sum_{i=0}^{m}{k_i}=k \}$$ and similarly, the subset of \textit{level $k$} of $\Lambda^{\prime}$ to be $$\Lambda^{\prime}_k:=\{t^{k_0}{t^{\prime}_1}^{k_1}\ldots {t^{\prime}_{m}}^{k_m} | \sum_{i=0}^{m}{k_i}=k \}.$$ \end{defn} \begin{remark} \label{ordgap} \rm Let $w \in \Lambda_k$ be a monomial containing gaps in the indices and $u \in \Lambda_k$ be a monomial with consecutive indices such that $ind(w)=ind(u)$. Then, it follows from Definition~\ref{order} that $w<u$. \end{remark} \begin{prop} The sets $\Lambda_k$ are totally ordered and well-ordered for all $k$. \end{prop} \begin{proof} Since $\Lambda_k \subseteq \Lambda,\ \forall k$, $\Lambda_k$ inherits the property of being a totally ordered set from $\Lambda$. Moreover, $t^k$ is the minimum element of $\Lambda_k$ and so $\Lambda_k$ is a well-ordered set. \end{proof} We also introduce the notion of \textit{homologous words} as follows: \begin{defn} \rm We shall say that two words $w^{\prime}\in \Lambda^{\prime}$ and $w\in \Lambda$ are {\it homologous}, denoted $w^{\prime}\sim w$, if $w$ is obtained from $w^{\prime}$ by turning $t^{\prime}_i$ into $t_i$ for all $i$. \end{defn} With the above notion, the proof of Theorem~\ref{mainthm} is based on the following idea: Every element $w^{\prime}\in \Lambda^{\prime}$ can be expressed as a linear combination of monomials $w_i\in \Lambda$ with coefficients in $\mathbb{C}$, such that: \begin{itemize} \item[(i)] $\exists \ j$ such that $w_j:=w\sim w^{\prime}$, \item[(ii)] $w_j < w_i$, for all $i \neq j$, \item[(iii)] the coefficient of $w_j$ is an invertible element in $\mathbb{C}$. \end{itemize} \section{From $\Lambda^{\prime}$ to $\Sigma_n$} In this section we prove a series of lemmas relating elements of the two different basic sets $\Sigma_n$, $\Sigma^{\prime}_n$ of $\textrm{H}_{1,n}(q)$. 
In the proofs we underline expressions which are crucial for the next step. Since $\Lambda^{\prime}$ is a subset of $\Sigma^{\prime}_n$, all lemmas proved here apply also to $\Lambda^{\prime}$ and will be used in the context of the bases of $\mathcal{S}({\rm ST})$. \subsection{Some useful lemmas in $\textrm{H}_{1,n}(q)$} We will need the following results from \cite{La2}. The first lemma gives some basic relations of the braiding generators. \begin{lemma}[Lemma~1 \cite{La2}] \label{brrel} For $\epsilon \in \{ \pm 1 \}$ the following hold in $\textrm{H}_{1,n}(q)$: \noindent \textit{(i)}\ $ g_i^m \ \ \ = \ \left(q^{m-1}-q^{m-2}+ \ldots +(-1)^{m-1} \right)g_i + \left(q^{m-1}-q^{m-2}+ \cdots +(-1)^{m-2}q \right)$\\ \noindent \ \ \ \ \ $ g_i^{-m} \ = \ \left(q^{-m}-q^{1-m}+ \ldots + (-1)^{m-1}q^{-1} \right)g_i + \left(q^{-m}-q^{1-m} + \cdots +(-1)^{m-1}q^{-1}+(-1)^{m} \right)$\\ \noindent \textit{(ii)} $ {g_i}^\epsilon ({g_k}^{\pm 1}g_{k-1}^{\pm 1}\ldots {g_j}^{\pm 1}) \ = \ ({g_k}^{\pm 1}g_{k-1}^{\pm 1}\ldots {g_j}^{\pm 1}){g_{i+1}}^\epsilon, \ \mbox{ \rm for} \ k>i\geq j$, \\ \noindent \ \ \ \ \ ${g_i}^\epsilon ({g_j}^{\pm 1}g_{j+1}^{\pm 1}\ldots {g_k}^{\pm 1})\ = \ ({g_j}^{\pm 1}g_{j+1}^{\pm 1}\ldots {g_k}^{\pm1}){g_{i-1}}^\epsilon, \ \mbox{ \rm for} \ k\geq i> j$,\\ \noindent where the sign of the ${\pm 1}$ exponent is the same for all generators.\\ \noindent \textit{(iii)} $ g_ig_{i-1}\ldots g_{j+1}{g_j} g_{j+1}\ldots g_i \ \ \ \ \ =\ g_jg_{j+1}\ldots g_{i-1}{g_i} g_{i-1}\ldots g_{j+1}{g_j} $\\ \noindent \ \ \ \ \ \ ${g_i}^{- 1}g_{i-1}^{- 1}\ldots g_{j+1}^{- 1}{g_j}^\epsilon g_{j+1}\ldots g_i \ = \ g_jg_{j+1}\ldots g_{i-1}{g_i}^\epsilon g_{i-1}^{- 1}\ldots g_{j+1}^{-1}{g_j}^{- 1} $\\ \noindent \textit{(iv)} $ {g_i}^\epsilon \ldots {g_{n-1}}^\epsilon {g_n}^{2\epsilon} {g_{n-1}}^\epsilon\ldots {g_i}^\epsilon\ \ \ = \ \sum_{r=0}^{n-i+1} \, (q^\epsilon -1)^{\epsilon_r} q^{\epsilon r} \, ({g_i}^\epsilon \ldots {g_{n-r}}^\epsilon \ldots {g_i}^\epsilon)$,\\ \noindent 
where $\epsilon_r=1 \ \mbox{ if } \ r\leq n-i \ \mbox{ and } \ \epsilon_{n-i+1}=0.$ Similarly,\\ \noindent \textit{(v)} $ {g_i}^\epsilon \ldots {g_2}^\epsilon {g_1}^{2\epsilon} {g_2}^\epsilon \ldots {g_i}^\epsilon \ = \ \sum_{r=0}^{i}\, (q^\epsilon -1)^{\epsilon_r} q^{\epsilon r} \, ({g_i}^\epsilon \ldots{g_{r+2}}^\epsilon {g_{r+1}}^\epsilon {g_{r+2}}^\epsilon \ldots {g_i}^\epsilon),$\\ \noindent where $\epsilon_r=1 \ \mbox{ if } \ r\leq i-1 \ \mbox{ and } \ \epsilon_i=0$. \end{lemma} The next lemma comprises relations between the braiding generators and the looping generator $t$. \begin{lemma}[cf. Lemmas~1, 4, 5 \cite{La2}] \label{brlre1} For $\epsilon \in \{ \pm 1 \}$, $i, k\in \mathbb{N}$ and $\lambda \in \mathbb{Z}$ the following hold in ${\rm H}_{1,n}(q)$: \[ \begin{array}{llll} (i) & t^{\lambda} g_1tg_1 & = & g_1tg_1 t^{\lambda} \\ &&&\\ (ii) & t^\epsilon {g_1}^\epsilon t^{\epsilon k}{g_1}^\epsilon & = & {g_1}^\epsilon t^{\epsilon k} {g_1}^\epsilon t^\epsilon + (q^\epsilon -1) t^\epsilon {g_1}^\epsilon t^{\epsilon k} + (1-q^\epsilon) t^{\epsilon k}{g_1}^\epsilon t^\epsilon \\ &&&\\ & t^{-\epsilon} {g_1}^\epsilon t^{\epsilon k}{g_1}^\epsilon & = & {g_1}^\epsilon t^{\epsilon k} {g_1}^\epsilon t^{-\epsilon} + (q^\epsilon -1) t^{\epsilon (k-1)} {g_1}^\epsilon + (1-q^\epsilon) {g_1}^\epsilon t^{\epsilon (k-1)}\\ &&&\\ (iii) & t^{\epsilon i}{g_1}^{\epsilon} t^{\epsilon k}{g_1}^{\epsilon} & = & g_1^{\epsilon} t^{\epsilon k} g_1^{\epsilon} t^{\epsilon i} + (q^{\epsilon} -1) \sum_{j=1}^{i}{t^{\epsilon j} g_1^{\epsilon} t^{\epsilon (k+i-j)}} + (1-q^{\epsilon}) \sum_{j=0}^{i-1}{t^{\epsilon (k+j)} g_1^{\epsilon} t^{\epsilon (i-j)}} \\ &&&\\ & t^{-\epsilon i}{g_1}^\epsilon t^{\epsilon k}{g_1}^\epsilon & = & {g_1}^\epsilon t^{\epsilon k}{g_1}^\epsilon t^{-\epsilon i} + (q^\epsilon -1) \sum_{j=1}^{i}{t^{\epsilon (k-j)} g_1^{\epsilon} t^{-\epsilon (i-j)}} + (1-q^{\epsilon}) \sum_{j=1}^{i}{t^{\epsilon (i-j)} g_1^{\epsilon} t^{\epsilon (k-j)}} \\ \end{array} \] \end{lemma} The 
next lemma gives the interactions of the braiding generators and the loopings $t_i$s and $t^{\prime}_i$s. \begin{lemma}[Lemmas~1 and 2 \cite{La2}] \label{brlre2} The following relations hold in $\textrm{H}_{1,n}(q)$: \[ \begin{array}{llll} (i) & g_i{t_k}^\epsilon & = & {t_k}^\epsilon g_i \ \mbox{ \rm for} \ k>i, \, k<i-1 \\ &&&\\ & g_it_i & = & q t_{i-1}g_i+(q-1) t_i \\ &&&\\ & g_it_{i-1} & = & q^{-1} t_ig_i+(q^{-1}-1) t_i \ =\ t_ig_i^{-1} \\ &&&\\ & g_i{t^{-1}_{i-1}} & = & q {t_i}^{-1}g_i+(q-1) {t_{i-1}}^{-1} \\ &&&\\ & g_i{t^{-1}_i} & = & q^{-1} {t_{i-1}}^{-1}g_i+(q^{-1}-1) {t_{i-1}}^{-1}\ = \ t_{i-1}^{-1}g_i^{-1}\\ &&&\\ (ii) & t_n^kg_n & = & (q-1)\sum_{j=0}^{k-1}{q^jt_{n-1}^jt_n^{k-j}}+q^kg_nt_{n-1}^k, \mbox{ \rm if} \ k\in \mathbb{N} \\ &&&\\ & t_n^kg_n & = & (1-q)\sum_{j=0}^{k-1}{q^jt_{n-1}^jt_n^{k-j}}+q^kg_nt_{n-1}^k, \mbox{ \rm if} \ k\in \mathbb{Z} - \mathbb{N} \\ &&&\\ (iii) & {t_i}^k{t_j}^\lambda & = & {t_j}^\lambda {t_i}^k\ \mbox{ \rm for} \ i\neq j \ \mbox{ \rm and} \ k, \lambda\in {\mathbb Z} \\ &&&\\ (iv) & g_i{t^{\prime}_k}^\epsilon & = & {t^{\prime}_k}^\epsilon g_i \ \ \mbox{ \rm for} \ k>i, \ k<i-1 \\ &&&\\ & g_i{t^{\prime}_i}^\epsilon & = & {t^{\prime}_{i-1}}^\epsilon g_i + (q-1) {t^{\prime}_i}^\epsilon + (1-q) {t^{\prime}_{i-1}}^\epsilon \\ &&&\\ & g_i{t^{\prime}_{i-1}}^\epsilon & = & {t^{\prime}_i}^\epsilon g_i \\ &&&\\ (v) & {t_i^{\prime}}^k & = & g_i\ldots g_1 t^k {g_1}^{-1}\ldots {g_i}^{-1} \ \mbox{ \rm for} \ k \in {\mathbb Z}. \\ \end{array} \] \end{lemma} Using now Lemmas~\ref{brrel}, \ref{brlre1} and \ref{brlre2} we prove the following relations, which we will use for converting elements in $\Lambda^{\prime}$ to elements in $\Sigma_n$. Note that whenever a generator is overlined, this means that the specific generator is omitted from the word. 
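Because each braiding generator satisfies the quadratic relation $g_i^2=(q-1)g_i+q$, the closed formulas for the powers $g_i^{m}$ in Lemma~\ref{brrel}(i) are easy to machine-check: write $g^m=a(q)\,g+b(q)$ and iterate the relation. A small dependency-free sketch (our own sanity check, not part of the original argument):

```python
# Powers of a braiding generator: write g^m = a(q)*g + b(q), with a, b
# polynomials in q encoded as coefficient lists (index = power of q).

def padd(p1, p2):
    n = max(len(p1), len(p2))
    return [(p1[i] if i < len(p1) else 0) + (p2[i] if i < len(p2) else 0)
            for i in range(n)]

def pshift(p):
    # multiply a polynomial by q
    return [0] + p

def trim(p):
    # drop trailing zero coefficients so equal polynomials compare equal
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def g_power(m):
    # g^(m+1) = a*g^2 + b*g = (a*(q-1) + b)*g + a*q, by g^2 = (q-1)g + q
    a, b = [1], [0]  # g^1 = 1*g + 0
    for _ in range(m - 1):
        a, b = padd(padd(pshift(a), [-c for c in a]), b), pshift(a)
    return trim(a), trim(b)

def closed_form(m):
    # Lemma (i): a = q^(m-1) - q^(m-2) + ... + (-1)^(m-1),
    #            b = q^(m-1) - q^(m-2) + ... + (-1)^(m-2) q
    a = [(-1) ** (m - 1 - i) for i in range(m)]
    b = [0] + [(-1) ** (m - 1 - i) for i in range(1, m)]
    return trim(a), trim(b)

for m in range(1, 9):
    assert g_power(m) == closed_form(m)
```

The analogous check for the negative powers $g_i^{-m}$ can be run by iterating $g^{-1}=q^{-1}g+(q^{-1}-1)$ instead.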
\begin{lemma} \label{brlre3} The following relations hold in ${\rm H}_{1,n}(q)$ for $k \in \mathbb{N}$: \[ \begin{array}{llll} (i) & g_{m+1}t_m^k & = & q^{-(k-1)}t_{m+1}^k g_{m+1}^{-1}\ +\ \sum_{j=1}^{k-1}{q^{-(k-1-j)}(q^{-1}-1)t_m^j t_{m+1}^{k-j}},\\ &&&\\ (ii) & g_{m+1}^{-1}t_m^{-k} & = & q^{(k-1)}t_{m+1}^{-k} g_{m+1}\ +\ \sum_{j=1}^{k-1}{q^{(k-1-j)}(q-1)t_m^{-j} t_{m+1}^{-(k-j)}}. \end{array} \] \end{lemma} \begin{proof} We prove relations~(i) by induction on $k$. Relations~(ii) follow similarly. For $k=1$ we have that $g_{m+1}t_m=t_{m+1}g_{m+1}^{-1}$, which holds from Lemma~\ref{brlre2}~(i). Suppose that the relation holds for $k-1$. Then, for $k$ we have:\\ \noindent $g_{m+1}t_m^k=g_{m+1}t_m^{k-1} t_m \overset{ind.}{\underset{step}{=}} q^{-(k-2)}t_{m+1}^{k-1} \underline{g_{m+1}^{-1}t_m}\ +\ \sum_{j=1}^{k-2}{q^{-(k-2-j)}(q^{-1}-1)t_m^j t_{m+1}^{k-1-j}}t_m=$\\ \noindent $=\ q^{-(k-1)}t_{m+1}^{k}g_{m+1}^{-1}\ +\ q^{-(k-2)}(q^{-1}-1)t_m t_{m+1}^{k-1} \ + \sum_{j=1}^{k-2}{q^{-(k-2-j)}(q^{-1}-1)t_m^{j+1} t_{m+1}^{k-1-j}}\ =$\\ \noindent $=\ q^{-(k-1)}t_{m+1}^{k}g_{m+1}^{-1}\ +\ \sum_{j=1}^{k-1}{q^{-(k-1-j)}(q^{-1}-1)t_m^{j} t_{m+1}^{k-j}}$. 
\end{proof} \begin{lemma} \label{loopcycles1} In ${\rm H}_{1,n}(q)$ the following relations hold: \begin{itemize} \item[(i)] For the expression $A=\left(g_rg_{r-1} \ldots g_{r-s}\right)\cdot t_{k}$ the following hold for the different values of $k \in \mathbb{N}$: \[ \begin{array}{llll} (1) & A & = & t_k \left(g_r \ldots g_{r-s} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k>r\ {\rm or}\ k<r-s-1 \\ &&&\\ (2) & A & = & t_r \left(g_{r}^{-1} \ldots g_{r-s}^{-1} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k=r-s-1 \\ &&&\\ (3) & A & = & qt_{r-1} \left(g_r \ldots g_{r-s})+(q-1)t_r(g_{r-1} \ldots g_{r-s} \right) \ \ \ \ \ \qquad \ \ \ \ \ {\rm for}\ k=r \\ &&&\\ (4) & A & = & qt_{r-s-1} \left(g_{r} \ldots g_{r-s} \right)+(q-1)t_r \left(g_{r}^{-1} \ldots g_{r-s+1}^{-1} \right) \ \ \ \ \ \ \ \ \ \ {\rm for}\ k=r-s \\ &&&\\ (5) & A & = & t_{m-1} \left(g_{r} \ldots g_{r-s} \right)+(q-1)t_r \left(g_{r}^{-1} \ldots g_{m+1}^{-1} \right)(g_{m-1}\ldots g_{r-s})\\ &&&\\ & & & {\rm for} \ k=m\in \{r-s+1,\ldots, r-1\}. 
\\ \end{array} \] \item[(ii)] For the expression $A=\left(g_rg_{r-1} \ldots g_{r-s}\right)\cdot t_{k}^{-1}$ the following hold for the different values of $k \in \mathbb{N}$: \[ \begin{array}{llll} (1) & A & = & t_{k}^{-1} \left(g_r \ldots g_{r-s} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k>r\ {\rm or}\ k<r-s-1 \\ &&&\\ (2) & A & = & t_{r-s-1}^{-1} \left(g_r \ldots g_{r-s+1}g_{r-s}^{-1} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k=r-s \\ &&&\\ (3) & A & = & t_{m-1}^{-1} \left(g_rg_{r-1} \ldots g_{m+1}g_{m}^{-1}g_{m-1}\ldots g_{r-s} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k=m\in \{r-s+1, \ldots, r\} \\ &&&\\ (4) & A & = & q^{s+1}t_{r}^{-1} \left(g_{r} \ldots g_{r-s} \right)+(q-1)\sum_{j=1}^{s+1}q^{s-j+1}t_{r-j}^{-1} \left(g_r\ldots g_{r-j+2}g_{r-j}\ldots g_{r-s}\right) \\ &&&\\ & & & {\rm for}\ k=r-s-1. \\ \end{array} \] \end{itemize} \end{lemma} \begin{proof} We only prove relations~(ii) for $k=r-s-1$ by induction on $s$ (case~4). All other relations follow from Lemma~\ref{brlre2}~(i). \smallbreak \noindent For $s=1$ we have: \[ \begin{array}{lll} g_r \underline{g_{r-1}t_{r-2}^{-1}} & = & g_r[qt_{r-1}^{-1}g_{r-1}+(q-1)t_{r-2}^{-1}]\ =\ q\underline{g_rt_{r-1}^{-1}}g_{r-1}+(q-1)g_rt_{r-2}^{-1} \\ &&\\ & = & q[qt_r^{-1}g_r+(q-1)t_{r-1}^{-1}]g_{r-1}+(q-1)t_{r-2}^{-1}g_r\\ &&\\ & = & q^2t_r^{-1}(g_rg_{r-1})+(q-1)\left[qt_{r-1}^{-1}g_{r-1}\ + \ q^0t_{r-2}^{-1}g_r \right],\\ \end{array} \] \bigbreak \noindent and so the relation holds for $s=1$. Suppose that the relation holds for $s=n$. We will show that it holds for $s=n+1$. Indeed we have:\\ \noindent $(g_r \ldots g_{r-n-1})t_{r-n-2}^{-1} = (g_r \ldots g_{r-n})(\underline{g_{r-n-1}t_{r-n-2}^{-1}}) = (g_r \ldots g_{r-n})\left[qt_{r-n-1}^{-1}g_{r-n-1}+(q-1)t_{r-n-2}^{-1}\right] = $\\ \noindent $ = q(\underline{g_r \ldots g_{r-n}t_{r-n-1}^{-1}})g_{r-n-1} + (q-1)(g_r \ldots g_{r-n})t_{r-n-2}^{-1} \overset{\rm ind. 
step}{=}\ q^{n+2}t_r^{-1}(g_r \ldots g_{r-n-1})\ +$ \\ \noindent $ +\ (q-1)\sum_{j=1}^{n+1}{q^{n-j+2}t_{r-j}^{-1}(g_r\ldots g_{r-j+2}g_{r-j}\ldots g_{r-n-1})}\ +\ (q-1)t_{r-n-2}^{-1}(g_r\ldots g_{r-n})\ =$ \\ \noindent $=\ q^{n+2}t_r^{-1}(g_r \ldots g_{r-n-1}) +\ (q-1)\sum_{j=1}^{n+2}{q^{(n+1)-j+1}t_{r-j}^{-1}(g_r\ldots g_{r-j+2}g_{r-j}\ldots g_{r-n-1})}.$\\ \end{proof} Before proceeding with the next lemma we introduce the notion of length of $w\in \textrm{H}_n(q)$. For convenience we set $\delta_{k,r}:= g_kg_{k-1}\ldots g_{r+1}g_r$ for $k>r$ and by convention we set $\delta_{k,k}:= g_k$. \begin{defn}\label{length} \rm We define the \textit{length} of $\delta_{k,r} \in {\rm H}_n(q)$ as $l(\delta_{k,r}):=k-r+1$ and since every element of the Iwahori-Hecke algebra of type A can be written as $\prod_{i=1}^{n-1}{\delta_{k_i,r_i}}$ so that $k_j<k_{j+1}\ \forall j$, we define the \textit{length} of an element $w\in \textrm{H}_n(q)$ as $$l(w):=\sum_{i=1}^{n-1}{l_i(\delta_{k_i,r_i})} = \sum_{i=1}^{n-1}{k_i-r_i+1}.$$ \end{defn} Note that $l(g_k)=l(\delta_{k,k})=k-k+1=1$. \begin{lemma}\label{brlre4} For $k>r$ the following relations hold in ${\rm H}_{1,n}(q)$: $$t_k \delta_{k,r} = \sum_{i=0}^{k-r}{q^i(q-1)\delta_{k,\overline{k-i},r}t_{k-i}} + q^{l(\delta_{k,r})} \delta_{k,r} t_{r-1},$$ \noindent where $\delta_{k,\overline{k-i},r}:= g_kg_{k-1} \ldots g_{k-i+1} g_{k-i-1} \ldots g_r:= g_k \ldots \overline{g_{k-i}} \ldots g_r$. \end{lemma} \begin{proof} We prove relations by induction on $k$. For $k=1$ we have that $t_1 g_1\ =\ (q-1) t_1 + q g_1 t$, which holds. 
Suppose that the relation holds for $(k-1)$, then for $k$ we have: \[ \begin{array}{lll} t_k \delta_{k,r} & = & \underline{t_k g_k} \cdot \delta_{k-1,r} = (q-1) t_k \delta_{k-1,r} + q g_k \underline{t_{k-1} \delta_{k-1,r}} =\\ &&\\ & = & (q-1) \delta_{k-1,r} t_k + q g_k \sum_{i=0}^{k-1-r}{q^i(q-1)\delta_{k-1,\overline{k-1-i},r}t_{k-1-i}} + q^{l(\delta_{k-1,r})+1} g_k \delta_{k-1,r} t_{r-1}=\\ &&\\ & = & \sum_{i=0}^{k-r}{q^i(q-1)\delta_{k,\overline{k-i},r}t_{k-i}} + q^{l(\delta_{k,r})} \delta_{k,r} t_{r-1}.\\ \end{array} \] \end{proof} \begin{lemma} \label{loopcycles2} In ${ \rm H}_{1,n}(q)$ the following relations hold: \begin{itemize} \item[(i)] For the expression $A=\left(g_rg_{r+1} \ldots g_{r+s} \right)\cdot t_{k}$ the following hold for the different values of $k \in \mathbb{N}$: \[ \begin{array}{llll} (1) & A & = & t_k \left(g_r \ldots g_{r+s}\right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k\geq r+s+1 \ {\rm or} \ k<r-1 \\ &&&\\ (2) & A & = & t_{k+1} \left(g_{r} \ldots g_{k} g_{k+1}^{-1} g_{k+2} \ldots g_{r+s} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ r-1 \leq k < r+s \\ &&&\\ (3) & A & = & (q-1) \sum_{i=r}^{r+s}{q^{r+s-i}t_i \left(g_r\ldots \overline{g_i} \ldots g_{r+s} \right)} + q^{s+1}t_{r-1} \left(g_r\ldots g_{r+s} \right) \ \ {\rm for} \ \ \ \ \ \ \ k=r+s \\ \end{array} \] \item[(ii)] For the expression $A=\left(g_rg_{r+1} \ldots g_{r+s} \right)\cdot t_{k}^{-1}$ the following hold for the different values of $k \in \mathbb{N}$: \[ \begin{array}{llll} (1) & A & = & t_{k}^{-1}\left(g_rg_{r+1} \ldots g_{r+s} \right) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k \geq r+s+1\ {\rm or} \ k < r-1 \\ &&&\\ (2) & A & = & q\ t_{k+1}^{-1} \left(g_r \ldots g_{r+s}\right) + (q-1)\ t_{r-1}^{-1} \left(g_{r}^{-1} \ldots g_k^{-1} g_{k+2}\ldots g_{r+s} \right)\ \ \ {\rm for}\ r-1 \leq k < r+s \\ &&&\\ (3) & A & = & 
t_{r-1}^{-1} \left(g_r^{-1} \ldots g_{r+s}^{-1} \right)\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm for}\ k=r+s \\ \end{array} \] \end{itemize} \end{lemma} \begin{proof} We prove relation~(i) for $r+s=k$ by induction on $k$ (case~3). All other relations follow from Lemmas~\ref{brrel} and \ref{brlre2}. For $k=1$ we have: $g_1t_1 = \underline{g_1^2}tg_1 = qtg_1+(q-1)t_1$. Suppose that the relation holds for $k=n$. Then, for $k=n+1$ we have that: \noindent $g_r\ldots \underline{g_{n+1} t_{n+1}} = q(\underline{g_r \ldots g_n t_n}) g_{n+1} + (q-1) \underline{(g_r\ldots g_n)t_{n+1}} \overset{ind. step}{=}$\\ \noindent $=q\left[(q-1) \sum_{i=r}^{n}{q^{n-i}t_i(g_r \ldots \overline{g_i} \ldots g_n)} + q^{n-r+1}t_{r-1}(g_r\ldots g_n) \right]g_{n+1} + (q-1) t_{n+1}(g_r\ldots g_n)=$\\ \noindent $= \left( (q-1) \sum_{i=r}^{n}{q^{n-i+1}t_i(g_r \ldots \overline{g_i} \ldots g_ng_{n+1})} + (q-1) t_{n+1}(g_r\ldots g_n) \right) + q^{n+1-r+1}t_{r-1}(g_r\ldots g_ng_{n+1})= $\\ \noindent $= (q-1) \sum_{i=r}^{n+1}{q^{n+1-i}t_i(g_r \ldots \overline{g_i} \ldots g_{n+1})} + q^{n+1-r+1}t_{r-1}(g_r \ldots g_{n+1}).$ \end{proof} \begin{lemma} \label{loopbridge} The following relations hold in ${ \rm H}_{1,n}(q)$ for $k \in \mathbb{N}$: \[ \begin{array}{llll} (i) & \left(g_1\ldots g_{i-1}g_i^{2}g_{i-1}\ldots g_1\right)\cdot t & = & (q-1) \sum_{k=1}^{i}{q^{i-k} t_k \left(g_1\ldots g_{k-1}g_k^{-1}g_{k-1}^{-1}\ldots g_1^{-1}\right)}+q^it \\ &&&\\ (ii) & \left(g_1^{-1} \ldots g_{i-1}^{-1} g_i^{-2} g_{i-1}^{-1} \ldots g_1^{-1}\right) \cdot t^{-1} & = & (q^{-1}-1) \sum_{k=1}^{i}{q^{-(i-k)} t_k^{-1} \left(g_1^{-1} \ldots g_{k-1}^{-1} g_k g_{k-1} \ldots g_1\right)} + \\ &&&\\ & & + & q^{-i}\ t^{-1} \\ &&&\\ (iii) & \left(g_k^{-1} \ldots g_2^{-1}g_1^{-2}g_2^{-1} \ldots g_k^{-1}\right)\cdot t_k & = & (q^{-1}-1) \sum_{i=1}^{k-1}{q^{-k}t_i \left(g_k^{-1} \ldots g_{i+2}^{-1}g_{i+1}g_{i+2} \ldots g_k \right)}\ + \\ &&&\\ & & + 
& q^{-k}t_k \\ &&&\\ (iv) & \left(g_k^{-1} \ldots g_2^{-1}g_1^{-2}g_2^{-1} \ldots g_k^{-1}\right)\cdot t_k^{-1} & = & t^{-1} q^{-k}(q^{-1}-1)g_k^{-1}\ldots g_1^{-1}\ldots g_k^{-1}\ + \\ &&&\\ & & + & \sum_{i=0}^{k-1}{t_i^{-1}q^{-k+i}(q^{-1}-1)g_k^{-1}\ldots g_1^{-2}\ldots g_i^{-1}g_{i+2}^{-1}\ldots g_k^{-1}}\ + \\ &&&\\ & & + & t_k^{-1} \big[\sum_{i=2}^{k}{q^{-k+i}(q^{-1}-1)^2g_{i-1}^{-1}\ldots g_2^{-1}g_1^{-2}g_2^{-1}\ldots g_{i-1}^{-1}}\ +\\ &&&\\ & & + & q^{-(k+1)}(q^{2}-q+1)\big].\\ \end{array} \] \end{lemma} \begin{proof} We prove relations~(i) by induction on $i$. All other relations follow similarly. For $i=1$ we have: $g_1^2t=g_1g_1tg_1g_1^{-1}=g_1t_1g_1^{-1}=(q-1)t_1g_1^{-1}+qt$. Suppose that the relation holds for $i=n$. Then, for $i=n+1$ we have: \smallbreak \noindent $\left(g_1\ldots g_{n}g_{n+1}^{2}g_{n}\ldots g_1\right)\cdot t \ = \ (q-1)\left(g_1\ldots g_{n+1}g_n\ldots g_1\right)\cdot t\ + \ q\left(g_1\ldots g_{n-1}g_n^{2}g_{n-1}\ldots g_1\right)\cdot t \ =$\\ \noindent $=\ (q-1)g_1\ldots g_nt_{n+1}g_{n+1}^{-1}\ldots g_1^{-1}\ +\ q\sum_{k=1}^{n}{q^{n-k}(q-1)t_k\left(g_1\ldots g_{k-1}g_k^{-1}\ldots g_1^{-1}\right)}+q^{n+1}t\ =$\\ \noindent $=\ (q-1)t_{n+1}\left(g_1\ldots g_ng_{n+1}^{-1}\ldots g_1^{-1}\right)\ +\ \sum_{k=1}^{n}{q^{n+1-k}(q-1)t_k\left(g_1\ldots g_{k-1}g_k^{-1}\ldots g_1^{-1}\right)}+q^{n+1}t\ =$\\ \noindent $=\ \sum_{k=1}^{n+1}q^{n+1-k}(q-1)t_k\left(g_1\ldots g_{k-1}g_k^{-1}\ldots g_1^{-1}\right)+q^{n+1}t.$ \end{proof} \subsection{Converting elements in $\Lambda^{\prime}$ to elements in $\Sigma_n$} We are now in the position to prove a set of relations converting monomials of $t^{\prime}_i$'s to expressions containing the $t_i$'s. In \cite{D} we provide lemmas converting monomials of $t_i$'s to monomials of $t^{\prime}_i$'s in the context of giving a simple proof that the sets $\Sigma_n^{\prime}$ form bases of $\textrm{H}_{1,n}(q)$. 
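Before the conversion lemmas, the simplest instance can be checked by hand (our own verification, not part of the original argument): using $g_1^{-1}=q^{-1}g_1+(q^{-1}-1)$ and $g_1t=t_1g_1^{-1}$, one computes

```latex
t^{\prime}_1 \;=\; g_1\, t\, g_1^{-1}
           \;=\; g_1\, t \left( q^{-1} g_1 + (q^{-1}-1) \right)
           \;=\; q^{-1}\, t_1 \;+\; (q^{-1}-1)\, t_1\, g_1^{-1},
```

which is exactly the case $k=1$ of the positive-exponent relations below converting ${t^{\prime}_1}^{k}$ into terms in the $t_i$'s.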
\begin{lemma} \label{tpr1} The following relations hold in ${\rm H}_{1,n}(q)$ for $k \in \mathbb{N}$: \[ \begin{array}{llll} (i) & {t_1^{\prime}}^{-k} & = & q^k t_1^{-k} \ + \ \sum_{j=1}^{k}{q^{k-j}(q-1) t^{-j}t_1^{j-k}\cdot g_1^{-1}}, \\ &&&\\ (ii) & {t_1^{\prime}}^{k} & = & q^{-k} t_1^{k} \ + \ \sum_{j=1}^{k}{q^{-(k-j)}(q^{-1}-1) t^{j-1}t_1^{k+1-j}\cdot g_1^{-1}}. \\ \end{array} \] \end{lemma} \begin{proof} We prove relations~(i) by induction on $k$. Relations~(ii) follow similarly. For $k=1$ we have: $ {t^{\prime}_1}^{-1} = \underline{g_1}\ t^{-1}\ g_1^{-1}\ =\ q\ \underline{g_1^{-1}\ t^{-1}\ g_1^{-1}}\ +\ (q-1)\ t^{-1}\ g_1^{-1}\ =\ q\ t_1^{-1}\ +\ (q-1)\ t^{-1}\ g_1^{-1}.$ \smallbreak \noindent Suppose that the relation holds for $k-1$. Then, for $k$ we have: \[ \begin{array}{lll} {t_1^{\prime}}^{-k} & = & {t_1^{\prime}}^{-(k-1)} {t_1^{\prime}}^{-1}\ \overset{ind.}{{\underset{step}{=}}}\ q^{k-1}t_{1}^{-(k-1)}{t_1^{\prime}}^{-1} + \sum_{j=1}^{k-1}q^{k-1-j}(q-1)t^{-j}t_1^{j-(k-1)}g_1^{-1}{t_1^{\prime}}^{-1}\\ &&\\ & = & q^k t_1^{-k} + q^{k-1}(q-1) t^{-1} t_1^{-(k-1)} g_1^{-1} + \sum_{j=1}^{k-1}q^{k-1-j}(q-1)t^{-j}t_1^{j-(k-1)}t^{-1}g_1^{-1}\\ &&\\ & = & q^k t_1^{-k} + q^{k-1}(q-1) t^{-1} t_1^{-(k-1)} g_1^{-1} + \sum_{j=1}^{k-1}q^{k-1-j}(q-1)t^{-j-1}t_1^{j-(k-1)}g_1^{-1}\\ &&\\ & = & q^k t_1^{-k} + \sum_{j=1}^{k}q^{k-j}(q-1)t^{-j}t_1^{j-k}g_1^{-1}.\\ \end{array} \] \end{proof} \begin{lemma} \label{tprneg} The following relations hold in ${\rm H}_{1,n}(q)$ for $k \in \mathbb{N}$: $$ {t^{\prime}_k}^{-1}\ =\ q^k\ t_k^{-1}\ +\ (q-1)\ \sum_{i=0}^{k-1}{q^i\ t_i^{-1}\ (\ g_k\ g_{k-1}\ \ldots\ g_{i+2}\ g_{i+1}^{-1}\ \ldots \ g_{k-1}^{-1}\ g_k^{-1}\ )}. $$ \end{lemma} \begin{proof} We prove the relations by induction on $k$. For $k=1$ we have: \smallbreak \noindent $ {t^{\prime}_1}^{-1} = \underline{g_1}\ t^{-1}\ g_1^{-1}\ =\ q\ \underline{g_1^{-1}\ t^{-1}\ g_1^{-1}}\ +\ (q-1)\ t^{-1}\ g_1^{-1}\ =\ q\ t_1^{-1}\ +\ (q-1)\ t^{-1}\ g_1^{-1}$. 
\bigbreak \noindent Suppose that the relations hold for $k=n$. Then, for $k=n+1$ we have that: \noindent ${t^{\prime}_{n+1}}^{-1} = g_{n+1}\ \underline{{t^{\prime}_n}^{-1}}\ g_{n+1}^{-1} \overset{ind.\ step}{=} g_{n+1} \big[q^n t_n^{-1}\ +\ (q-1) \sum_{i=0}^{n-1}{q^i\ t_i^{-1} ( g_n \ldots g_{i+2} g_{i+1}^{-1} \ldots g_n^{-1} )} \big] g_{n+1}^{-1} = $\\ \noindent $ = \ q^n\ \underline{g_{n+1}\ t_n^{-1}}\ g_{n+1}^{-1}\ +\ (q-1) \sum_{i=0}^{n-1}{q^i \underline{g_{n+1} t_i^{-1}} ( g_n \ldots g_{i+2} g_{i+1}^{-1} \ldots g_n^{-1} g_{n+1}^{-1})}\ = $\\ \noindent $ =\ q^n \big[q t_{n+1}^{-1} g_{n+1} \ +\ (q-1) t_n^{-1} \big] g_{n+1}^{-1}\ +\ (q-1) \sum_{i=0}^{n-1}{q^i t_i^{-1} ( g_{n+1} \ldots g_{i+2} g_{i+1}^{-1} \ldots g_{n+1}^{-1})}\ = $ \\ \noindent $= \ q^{n+1} t_{n+1}^{-1}\ +\ q^{n} (q-1) t_{n}^{-1} g_{n+1}^{-1}\ +\ (q-1) \sum_{i=0}^{n-1}{q^i t_i^{-1} ( g_{n+1} \ldots g_{i+2} g_{i+1}^{-1} \ldots g_{n+1}^{-1})}\ =$ \\ \noindent $ = \ q^{n+1} t_{n+1}^{-1}\ +\ (q-1) \sum_{i=0}^{n}{q^i t_i^{-1}\ ( g_{n+1} \ldots\ g_{i+2} g_{i+1}^{-1} \ldots g_{n+1}^{-1} )}.$ \end{proof} \begin{lemma}\label{loops} The following relations hold in ${\rm H}_{1,n}(q)$ for $k \in \mathbb{Z}\backslash \{0 \}$: $$ {t_m^{\prime}}^{k}\ =\ q^{-m k}t_{m}^{k} \ +\ \sum_{i}{f_i(q) t_m^{k} w_i} \ +\ \sum_{i}{g_i(q) t^{\lambda_0}t_1^{\lambda_1}\ldots t_m^{\lambda_m}u_i}, $$ \noindent where $w_i, u_i \in {\rm H}_{m+1}(q),\ \forall i$, $\sum_{i=0}^{m}{\lambda_i}=k$ and $\lambda_i \geq 0$, if $k>0$ and $\lambda_i \leq 0$, if $k<0$. \end{lemma} \begin{proof} We prove the relations by induction on $m$. The case $m=1$ is Lemma~\ref{tpr1}. Suppose now that the relations hold for $m-1$. 
Then, for $m$ we have:\\ \noindent $ {t_m^{\prime}}^{k}\ =\ g_m {t_{m-1}^{\prime}}^{k} g_m^{-1}\ \overset{ind.}{\underset{step}{=}}\ q^{-(m-1) k}\underline{g_m t_{m-1}^{k}} g_m^{-1} \ + \sum_{i}{f_i(q) \underline{g_m t_{m-1}^{k}} w_i g_m^{-1}}\ +$\\ \noindent $ +\ \sum_{i}{g_i(q) t^{\lambda_0}t_1^{\lambda_1}\ldots t_{m-2}^{\lambda_{m-2}} \underline{g_m t_{m-1}^{\lambda_{m-1}}} u_i } g_m^{-1} \ \overset{(L.4)}{=}$\\ \noindent $=\ q^{-(m-1)k}q^{-(k-1)}t_m^k \underline{g_m^{-2}}\ +\ \sum_{j=1}^{k-1}{q^{-(k-1-j)}(q^{-1}-1)t_{m-1}^jt_m^{k-j}g_m^{-1}}\ +\ \sum \ +\ \sum \ =$\\ \noindent $=\ q^{-m k}t_{m}^{k} \ +\ \sum_{i}{f_i(q) t_m^{k} w_i} \ +\ \sum_{i}{g_i(q) t^{\lambda_0}t_1^{\lambda_1}\ldots t_m^{\lambda_m}u_i}$. \end{proof} \noindent Using now Lemma~\ref{loops} we have that every element $u \in \Lambda^{\prime}$ can be expressed as a linear combination of elements $v_i \in \Sigma_n$, where $\exists\ j : v_j \sim u$. More precisely: \begin{thm} \label{convert} The following relations hold in ${\rm H}_{1,n}(q)$ for $k \in \mathbb{Z}$: $$ t^{k_0}{t_1^{\prime}}^{k_1} \ldots {t_m^{\prime}}^{k_m} \ = \ q^{- \sum_{n=1}^{m}{nk_n}}\cdot t^{k_0}t_1^{k_1}\ldots t_m^{k_m}\ + \ \sum_{i}{f_i(q)\cdot t^{k_0}t_1^{k_1}\ldots t_m^{k_m}\cdot w_i} \ + \ \sum_{j}{g_j(q)\tau_j \cdot u_j}, $$ \noindent where $w_i, u_j \in {\rm H}_{m+1}(q), \forall i,j$, $\tau_j \in \Lambda$, such that $\tau_j < t^{k_0}t_1^{k_1}\ldots t_m^{k_m}, \forall j$. \end{thm} \begin{proof} We prove the relations by induction on $m$.
Let $k_1 \in \mathbb{N}$, then for $m=1$ we have:\\ \noindent $t^{k_0}{t_1^{\prime}}^{k_1}\ \overset{(L.9)}{=}\ q^{-k_1}t^{k_0}t_1^{k_1}\ +\ \sum_{j=1}^{k_1}{q^{-(k_1-j)}(q^{-1}-1)t^{k_0+j-1}t_1^{k_1+1-j}g_1^{-1}}\ =$\\ \noindent $=\ q^{-k_1}t^{k_0}t_1^{k_1}\ +\ q^{-(k_1-1)}(q^{-1}-1) t^{k_0}t_1^{k_1}g_1^{-1} \ + \sum_{j=2}^{k_1}{q^{-(k_1-j)}(q^{-1}-1)t^{k_0+j-1}t_1^{k_1+1-j}g_1^{-1}}$.\\ \noindent On the right-hand side we obtain a term which is the homologous word of $t^{k_0}{t_1^{\prime}}^{k_1}$ with scalar $q^{-k_1}\in \mathbb{C}$, the homologous word again followed by $g_1^{-1}\in \textrm{H}_2(q)$ and with scalar $q^{-(k_1-1)}(q^{-1}-1) \in \mathbb{C}$ and the terms $t^{k_0+j-1}t_1^{k_1+1-j}$, which are of less order than the homologous word $t^{k_0}t_1^{k_1}$, since $k_1 > k_1+1-j$,\ for all $j \in \{2, 3, \ldots, k_1 \}$. So the statement holds for $m=1$ and $k_1 \in \mathbb{N}$. The case $m=1$ and $k_1 \in \mathbb{Z} \backslash \mathbb{N}$ is similar. \smallbreak \noindent Suppose now that the relations hold for $m-1$. Then, for $m$ we have: \[ \begin{array}{lcl} t^{k_0}{t_1^{\prime}}^{k_1} \ldots {t_m^{\prime}}^{k_m} & \overset{ind.}{\underset{step}{=}} & q^{- \sum_{n=1}^{m-1}{nk_n}}\cdot t^{k_0} \ldots t_{m-1}^{k_{m-1}}\cdot {t_m^{\prime}}^{k_m}\ + \ \sum_{i}{f_i(q)\cdot t^{k_0}t_1^{k_1}\ldots t_{m-1}^{k_{m-1}}\cdot w_i \cdot {t_m^{\prime}}^{k_m}}\\ &&\\ &+& \sum_{j}{g_j(q)\tau_j \cdot u_j \cdot {t_m^{\prime}}^{k_m}}.\\ \end{array} \] \noindent Now, since $w_i, u_i \in \textrm{H}_{m}(q),\ \forall i$, we have that $w_i {t_m^{\prime}}^{k_m}\ =\ {t_m^{\prime}}^{k_m} w_i$ and $u_i {t_m^{\prime}}^{k_m}\ = \ {t_m^{\prime}}^{k_m} u_i, \ \forall i$. Applying now Lemma~\ref{loops} to ${t_m^{\prime}}^{k}$ we obtain the requested relation. \end{proof} \begin{example}\label{eg1}\rm We convert the monomial $t^{-1}{t_{1}^{\prime}}^{2}{t_2^{\prime}}^{-1} \in \Lambda^{\prime}$ to a linear combination of elements in $\Sigma_n$.
We have that: \[ \begin{array}{lllr} {t_{1}^{\prime}}^{2} & = & q^{-2}t_1^2+q^{-1}(q^{-1}-1)t_1^2g_1^{-1}+(q^{-1}-1)tt_1g_1^{-1},& {\rm (Lemma~\ref{tpr1}),}\\ &&&\\ {t_2^{\prime}}^{-1} & = & q^2t_2^{-1}+q(q-1)t^{-1}g_2^{-1}g_1^{-1}g_2^{-1}+q(q-1)t_1^{-1}g_2^{-1}+(q-1)^2t^{-1}g_1^{-1}g_2^{-1},& {\rm (Lemma~\ref{tprneg}),}\\ \end{array} \] \noindent and so: \[ \begin{array}{lll} t^{-1}{t_{1}^{\prime}}^{2}{t_2^{\prime}}^{-1} & = & t^{-1}t_{1}^{2}t_2^{-1}\cdot \left(1+ q^2(q^{-1}-1)g_1^{-1} \right)\ +\ t^{-2}t_1^{2} \cdot \left(q^{-1}(q-1) g_2^{-1}g_1^{-1}g_2^{-1}\right) \ +\\ &&\\ & + & t^{-1}t_1 \cdot \left(q^{-1}(q-1)g_2^{-1}\ +\ (q-1)(q^{-1}-1)g_2^{-1}g_1^{-1}\ +\ (q-1)(q^{-1}-1)g_1^{-1}g_2^{-1} \right)\ +\\ &&\\ & + & 1 \cdot \left( -(q-1)^2 g_2^{-1}g_1^{-1} \right)\ +\ t_1t_2^{-1} \left( q^2(q^{-1}-1) g_1^{-1} \right).\\ \end{array} \] We obtain the homologous word $w=t^{-1}t_{1}^{2}t_2^{-1}$, the homologous word again followed by the braiding generator $g_1^{-1}$, and all other terms are of less order than $w$, since either they contain gaps in the indices, such as the term $t_1t_2^{-1}$, or their index is less than $ind(w)$ (the terms $t^{-1}t_1$, $t^{-2}t_1^{2}$, $1$). \end{example} \section{From $\Sigma_n$ to $\Lambda$} \subsection{Managing the gaps} Before proceeding with the proof of Theorem~\ref{mainthm} we need to discuss the following situation. According to Lemma~\ref{tpr1}, for a word $w^{\prime}=t^{k}{t^{\prime}_1}^{-\lambda}\in\Lambda^{\prime}$, where $k,\lambda \in \mathbb{N}$ and $k<\lambda$, we have that: \[ \begin{array}{lllll} w^{\prime} & = & t^{k}{t^{\prime}_1}^{-\lambda} & = & t^{k-1}{t_1}^{-\lambda+1}\alpha_1+t^{k-2}{t_1}^{-\lambda+2}\alpha_2\ +\ \ldots \ +\ t^{k-(k-1)}{t_1}^{-\lambda+(k-1)}\alpha_{k-1}\ +\\ &&&&\\ & & & + & t^0{t_1}^{-\lambda+k}\alpha_{k}\ +\ t^{-1}{t_1}^{-\lambda+k+1}\alpha_{k+1}\ +\ \ldots \ +\ t^{-\lambda+k}\alpha_{\lambda},\\ \end{array} \] \noindent where $\alpha_i\in \textrm{H}_n(q),\ \forall i$.
We observe that in this particular case, on the right-hand side there are terms which do not belong to the set $\Lambda$. These are the terms of the form $t_1^m$. So these elements cannot be compared with the highest order term $w\sim w^{\prime}$. The point now is that a term $t_1^m$ is an element of the basis $\Sigma_n$ on the Hecke algebra level, but, when we are working in $\mathcal{S}({\rm ST})$, such an element must be considered up to conjugation by any braiding generator and up to stabilization moves. Topologically, conjugation corresponds to closing the braiding part of a mixed braid. Conjugating $t_1$ by $g_1^{-1}$ we obtain $tg_1^{2}$ (see Figure~\ref{conj2}) and similarly conjugating $t_1^m$ by $g_1^{-1}$ we obtain $tg_1^2tg_1^2\ldots tg_1^2$. Then, applying Lemma~\ref{brlre2} we obtain the expression $\sum_{k=1}^{m-1}{t^kt_1^{m-k}}v_k$, where $v_k\in \textrm{H}_n(q)$, for all $k$, that is, we now obtain elements in the $\bigcup_n\textrm{H}_n(q)$-module $\Lambda$. \begin{figure} \begin{center} \includegraphics[width=4.7in]{conj} \end{center} \caption{Conjugating $t_i$ by $g_1^{-1}\ldots g_{i}^{-1}$.} \label{conj2} \end{figure} \smallbreak We shall next treat this situation in general. For the expressions that we obtain after appropriate conjugations we shall use the notation $\widehat{=}$. We will call {\it gaps} in monomials of the $t_i$'s the gaps occurring in the indices, and \textit{size} of the gap $t_i^{k_i}t_j^{k_j}$ the number $s_{i,j}=j-i \in \mathbb{N}$.
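To illustrate, here is a small worked instance of this conversion for $m=2$, using only the quadratic relation $g_1^{2}=(q-1)g_1+q$:

\begin{example}\rm Conjugating $t_1^{2}$ by $g_1^{-1}$ we obtain $g_1^{-1}\ t_1^{2}\ g_1\ =\ (g_1^{-1}t_1g_1)(g_1^{-1}t_1g_1)\ =\ tg_1^{2}tg_1^{2}$, and so: $$ t_1^{2}\ \widehat{=}\ t\underline{g_1^{2}}tg_1^{2}\ =\ (q-1)\ t\underline{g_1tg_1}g_1\ +\ q\ t^{2}g_1^{2}\ =\ (q-1)\ tt_1g_1\ +\ q\ t^{2}g_1^{2}, $$ \noindent that is, a linear combination of elements of the $\bigcup_n\textrm{H}_n(q)$-module $\Lambda$. \end{example}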
\begin{lemma}\label{gapsimple} For $k_0,k_1 \ldots k_i \in \mathbb{Z}$, $\epsilon = 1$ or $\epsilon = -1$ and $s_{i,j}>1$ the following relation holds in ${\rm H}_{1,n}(q)$: $$ t^{k_0}t_1^{k_1}\ldots t_{i-1}^{k_{i-1}}t_i^{k_i}\cdot t^{\epsilon}_j \ \widehat{=} \ t^{k_0}t_1^{k_1}\ldots t_{i-1}^{k_{i-1}}t_i^{k_i}\cdot t^{\epsilon}_{i+1} \left(g^{\epsilon}_{i+2}\ldots g^{\epsilon}_{j-1}g_j^{2\epsilon}g^{\epsilon}_{j-1} \ldots g^{\epsilon}_{i+2} \right).$$ \end{lemma} \begin{proof} We have that $t_j^{\epsilon}\ =\ \left(g_j^{\epsilon} \ldots g_{i+2}^{\epsilon} \right)\ t_{i+1}^{\epsilon}\ \left(g_{i+2}^{\epsilon} \ldots g_{j}^{\epsilon} \right)$ and so: \[ \begin{array}{lcll} t^{k_0}t_1^{k_1}\ldots t_{i-1}^{k_{i-1}}t_i^{k_i}t_j^{\epsilon} & = & t^{k_0}t_1^{k_1}\ldots t_{i-1}^{k_{i-1}}t_i^{k_i} (g_j^{\epsilon} \ldots g_{i+2}^{\epsilon})\ t_{i+1}^{\epsilon}\ (g_{i+2}^{\epsilon} \ldots g_{j}^{\epsilon}) & =\\ &&&\\ & = & (g_j^{\epsilon} \ldots g_{i+2}^{\epsilon})\ t^{k_0}t_1^{k_1}\ldots t_{i-1}^{k_{i-1}}t_i^{k_i}t_{i+1}^{\epsilon} (g_{i+2}^{\epsilon} \ldots g_{j}^{\epsilon}) & \widehat{=}\\ &&&\\ & \widehat{=} & t^{k_0}\ldots t_{i-1}^{k_{i-1}}t_i^{k_i}t_{i+1}^{\epsilon} (g_{i+2}^{\epsilon}\ldots g_{j-1}^{\epsilon}g_j^{2{\epsilon}}g_{j-1}^{\epsilon} \ldots g_{i+2}^{\epsilon} ).&\\ \end{array} \] \end{proof} In order to arrive at a general method for managing gaps in monomials of the $t_i$'s, we first deal with gaps of size one. For this we have the following. \begin{lemma}\label{conj} For $k \in \mathbb{N}$, $\epsilon = 1$ or $\epsilon = -1$ and $\alpha \in {\rm H}_{1,n}(q)$ the following relations hold: $$t_i^{\epsilon k} \cdot \alpha\ \widehat{=}\ \sum_{u=1}^{k-1}{q^{\epsilon (u-1)}(q^{\epsilon}-1)t_{i-1}^{\epsilon u}t_i^{\epsilon (k-u)} (\alpha g_i^{\epsilon})}\ +\ q^{\epsilon (k-1)}t_{i-1}^{\epsilon k} ( g_i^{\epsilon} \alpha g_i^{\epsilon}). $$ \end{lemma} \begin{proof} We prove the relations by induction on $k$.
For $k=1$ we have $t_i^{\epsilon}\cdot \alpha \ =\ g_i^{\epsilon}t_{i-1}^{\epsilon} g_i^{\epsilon} \cdot \alpha \ \widehat{=}\ t_{i-1}^{\epsilon} g_i^{\epsilon} \cdot \alpha \cdot g_i^{\epsilon}$. Suppose that the relation holds for $k-1\geq 1$. Then for $k$ we have: \smallbreak \noindent $t_i^{\epsilon k} \cdot \alpha\ \widehat{=}\ t_i^{\epsilon (k-1)} (t_i^{\epsilon} \cdot \alpha) \ \overset{(t_i^{\epsilon}\cdot \alpha\ =\ \beta)}{=}\ t_i^{\epsilon (k-1)} \cdot \beta\ \underset{ind.\ step}{\widehat{=}}$ \noindent $=\ \sum_{u=1}^{k-2}{q^{\epsilon (u-1)}(q^{\epsilon}-1)t_{i-1}^{\epsilon u}t_i^{\epsilon (k-1-u)} (\beta g_i^{\epsilon})}\ +\ q^{\epsilon (k-2)}t_{i-1}^{\epsilon (k-1)} ( g_i^{\epsilon} \beta g_i^{\epsilon})\ \overset{(\beta\ =\ t_i^{\epsilon}\cdot \alpha)}{=}$\\ \noindent $=\ \sum_{u=1}^{k-2}{q^{\epsilon (u-1)}(q^{\epsilon}-1)t_{i-1}^{\epsilon u}t_i^{\epsilon (k-1-u)}t_i^{\epsilon} (\alpha g_i^{\epsilon})}\ +\ q^{\epsilon (k-2)}t_{i-1}^{\epsilon (k-1)} ( g_i^{\epsilon}t_i^{\epsilon} \alpha g_i^{\epsilon})\ =$\\ \noindent $=\ \sum_{u=1}^{k-2}{q^{\epsilon (u-1)}(q^{\epsilon}-1)t_{i-1}^{\epsilon u}t_i^{\epsilon (k-u)} (\alpha g_i^{\epsilon})}\ +\ q^{\epsilon (k-2)}(q^{\epsilon}-1)t_{i-1}^{\epsilon (k-1)} t_i^{\epsilon} \alpha g_i^{\epsilon}\ +\ q^{\epsilon (k-1)}t_{i-1}^{\epsilon k} ( g_i^{\epsilon} \alpha g_i^{\epsilon})\ =$\\ \noindent $=\ \sum_{u=1}^{k-1}{q^{\epsilon (u-1)}(q^{\epsilon}-1)t_{i-1}^{\epsilon u}t_i^{\epsilon (k-u)} (\alpha g_i^{\epsilon})}\ +\ q^{\epsilon (k-1)}t_{i-1}^{\epsilon k} ( g_i^{\epsilon} \alpha g_i^{\epsilon}) $. \end{proof} We now introduce the following notation. \begin{nt}\label{nt} \rm We set $\tau_{i,i+m}^{k_{i,i+m}}:=t_i^{k_i}t^{k_{i+1}}_{i+1}\ldots t^{k_{i+m}}_{i+m}$, where $m\in \mathbb{N}$ and $k_j\neq 0$ for all $j$ and \[ \delta_{i,j}:=\left\{\begin{matrix} g_ig_{i+1}\ldots g_{j-1}g_{j} & if \ i<j \\ &\\ g_ig_{i-1}\ldots g_{j+1}g_{j} & if \ i>j \end{matrix}\right.
,\ \ \delta_{i,\widehat{k},j}:=\left\{\begin{matrix} g_ig_{i+1} \ldots g_{k-1}g_{k+1} \ldots g_{j-1}g_{j} & if \ i<j \\ &\\ g_ig_{i-1} \ldots g_{k+1}g_{k-1} \ldots g_{j+1}g_{j} & if \ i>j \end{matrix}\right. \] \noindent We also denote by $w_{i,j}$ an element in $\textrm{H}_{j+1}(q)$ whose minimum index is $i$. \end{nt} Using now the notation introduced above, we apply Lemma~\ref{conj} $s_{i,j}$ times to 1-gap monomials of the form $\tau_{0,i}^{k_{0,i}}\cdot t_j^{k_j}$ and we obtain monomials with no gaps in the indices, followed by words in ${\rm H}_n(q)$. \begin{example} For $s_{i,j}>1$ and $\alpha \in {\rm H}_n(q)$ we have: \[ \begin{array}{lrcl} (i) & \tau_{0,i}^{k_i}\cdot t_j\cdot \alpha & \widehat{=} & \tau_{0,i}^{k_i}\cdot t_{i+1}\cdot \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\\ &&&\\ (ii) & \tau_{0,i}^{k_i}\cdot t^{2}_j\cdot \alpha & \widehat{=} & \tau_{0,i}^{k_i}\cdot t^{2}_{i+1}\cdot \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\ +\ \tau_{0,i}^{k_i}\cdot t_{i+1}t_{i+2}\cdot \beta,\ \rm{where}\\ &&&\\ & \beta & = & \left[(q-1)\sum_{s=i+2}^{j}{q^{j-s} \delta_{i+3,s} \delta_{i+2,s-1} \delta_{s+1,j}}\ \alpha\ \delta_{j,i+2}\delta_{s,i+3}\right]\\ &&&\\ (iii) & \tau_{0,i}^{k_i}\cdot t^{3}_j\cdot \alpha & \widehat{=} & \left[q^{j-(i+2)+1} \right]^2 \tau_{0,i}^{k_i}\cdot t^{3}_{i+1}\cdot \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\ +\ \tau_{0,i}^{k_i}\cdot t^{2}_{i+1}t_{i+2}\cdot \beta\ +\\ &&&\\ && + & \tau_{0,i}^{k_i}\cdot t_{i+1}t^{2}_{i+2}\cdot \gamma\ +\ \tau_{0,i}^{k_i}\cdot t_{i+1}t_{i+2}t_{i+3}\cdot \mu,\ \rm{where}\\ &&&\\ &\gamma & = & q^{j-(i+3)+1}(q-1) \delta_{i+3,j} \delta_{i+2,s-1} \delta_{s+1,j}\ \alpha\ \delta_{j,i+2} \delta_{s,i+3},\ \rm{and}\\ &&&\\ & \mu & = & \sum_{s=i+2}^{j} \sum_{r=s+1}^{j}q^{2j-r-s}\ (q-1)^2 \delta_{i+4,r} \delta_{i+2,s-1} \delta_{s+1,r-1} \delta_{r+1,j}\ \alpha\ \delta_{j,i+2} \delta_{s,i+3} \delta_{r,i+4} \\ &&&\\ && + & \sum_{s=i+2}^{j} \sum_{r=i+3}^{s}q^{2j-r-s}\ (q-1)^2 \delta_{i+4,r} \delta_{i+3,r-1} \delta_{r+1,s}
\delta_{i+2,s-1} \delta_{s+1,j}\ \alpha\ \delta_{j,i+2} \delta_{s,i+3} \end{array} \] \end{example} Applying Lemma~\ref{conj} to the 1-gap word $\tau^{k_{0,i}}_{0,i}\cdot t_{j}^{k_j}$, where $k_j \in \mathbb{Z}\backslash \{0\}$ and $\alpha \in {\rm H}_n(q)$ we obtain: $$\tau^{k_{0,i}}_{0,i}\cdot t_{j}^{k_j} \alpha \widehat{=}\left\{\begin{matrix} \sum_{\lambda}{\tau^{k_{0,i}}_{0,i}t_{i+1}^{\lambda_{i+1}}\ldots t_{i+k_j}^{\lambda_{i+k_j}}\alpha^{\prime}} & {\rm if} \ k_j<s_{i,j} \\ &\\ \sum_{\lambda}{\tau^{k_{0,i}}_{0,i}t_{i+1}^{\lambda_{i+1}}\ldots t_j^{\lambda_j}}\beta^{\prime} & {\rm if} \ k_j \geq s_{i,j} \end{matrix}\right. , $$ where $\alpha^{\prime}, \beta^{\prime} \in \textrm{H}_{n}(q)$, $\sum_{\mu=i+1}^{i+k_j}{\lambda_{\mu}}= k_j$, $\lambda_{\mu} \geq 0,\ \forall \mu$ and if $\lambda_u=0$, then $\lambda_v=0$, $\forall v \geq u$. \smallbreak More precisely: \begin{lemma}\label{gap} For the 1-gap word $A\ =\ \tau_{0,i}^{k_{0,i}}\cdot t_j^{k_j}\cdot \alpha$, where $\alpha \in {\rm H}_n(q)$ we have: \[ \begin{array}{lllcl} (i) & {\rm If}\ |k_j|<s_{i,j},\ {\rm then}: & A & \widehat{=} & (q^{k_j-1})^{j-(i+1)}\tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j}\ \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\ +\\ &&&&\\ &&&+& \sum_{\sum k_{i+1,i+k_j}=k_j} f(q,z) \tau_{0,i}^{k_{0,i}}\cdot \tau_{i+1,i+k_j}^{k_{i+1,i+k_j}} \cdot \beta\ \alpha\ \beta^{\prime}. \\ &&&&\\ (ii) & {\rm If}\ |k_j| \geq s_{i,j},\ {\rm then}: & A & \widehat{=} & (q^{k_j-1})^{j-(i+1)}\tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j}\ \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\ +\\ &&&&\\ &&&+& \sum_{\sum k_{i+1,j}=k_j} f(q) \tau_{0,i}^{k_{0,i}}\cdot \tau_{i+1,j}^{k_{i+1,j}} \cdot \beta \alpha \beta^{\prime}.\\ \end{array} \] \noindent where $\beta$ and $\beta^{\prime}$ are of the form $w_{i+1,j} \in {\rm H}_{j+1}(q)$ and $\sum k_{i+1,i+k_j} = k_j$ such that $|k_{i+1}|< |k_j|$ and if $k_{\mu}=0$, then $k_{s}=0,$ for all $s>\mu$. \end{lemma} \begin{proof} We prove the relations by induction on $k_j$. Let $0<k_j< j-i$.
\noindent For $k_j=1$ we have $A\ \widehat{=}\ \left[q^{(1-1)}\right]^{j-(i+1)}\ \tau_{0,i}^{k_{0,i}}\cdot t_{i+1} \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}$ (Lemma~\ref{gapsimple}). Suppose that the relation holds for $k_j-1\geq 1$. Then for $k_j$ we have: $$ \begin{array}{lllcl} A & = & \tau_{0,i}^{k_{0,i}}\cdot t^{k_j-1}_j\cdot (t_j\ \alpha) & \underset{ind.step}{\widehat{=}}& \underset{B}{\underbrace{\left[q^{k_j-2} \right]^{j-(i+1)} \tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j-1} \underline{\delta_{i+2,j}\ t_j}\ \alpha \ \delta_{j,i+2}}}\ +\\ &&&&\\ &&& + & \underset{C}{\underbrace{ \sum_{k_{i+1,i+k_j-1}} f(q,z) \tau_{0,i}^{k_{0,i}} \cdot \tau_{i+1,i+k_j-1}^{k_{i+1,i+k_j-1}} \underline{\beta \ t_j}\ \beta^{\prime}}}.\\ \end{array} $$ \noindent We now consider $B$ and $C$ separately and apply Lemma~\ref{brlre3} to both expressions: \[ \begin{array}{cl} B & \overset{(L.~\ref{brlre3})}{=} \\ &\\ = & \left[q^{k_j-2} \right]^{j-(i+1)} \tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j-1}\left[ (q-1) \sum_{k=i+2}^{j} q^{j-k} t_k \delta_{i+2,k-1} \delta_{k+1,j} + q^{j-(i+2)+1} t_{i+1} \delta_{i+2,j}\right] \alpha \delta_{j,i+2}\\ &\\ = & \left[q^{k_j-2} \right]^{j-(i+1)}(q-1) \tau_{0,i}^{k_{0,i}}t_{i+1}\cdot \sum_{k=i+2}^{j}q^{j-k}t_k \delta_{i+2,k-1}\delta_{k+1,j}\alpha \delta_{j,i+2}\ +\\ &\\ + & \left[q^{k_j-1} \right]^{j-(i+1)} \tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j} \delta_{i+2,j}\alpha \delta_{j,i+2}.\\ \end{array} \] \noindent We now do conjugation on the $\left(j-(i+3)\right)$-one gap words that occur and since $t_k\cdot \beta\ \widehat{=}\ t_{i+2}\cdot \delta_{i+3,k}\ \beta\ \delta_{k,i+3}$ we obtain: \[ \begin{array}{lcl} B & \widehat{=} & \left[q^{k_j-1} \right]^{j-(i+1)} \tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j} \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\ +\\ &&\\ & + & \tau_{0,i}^{k_{0,i}} t_{i+1}t_{i+2}\sum_{k=i+2}^{j} f(q,z) \delta_{i+3,k}\delta_{i+2,k-1}\delta_{k+1,j} \alpha \delta_{j,i+2}\delta_{k,i+3}\ =\\ &&\\ & = & \left[q^{k_j-1} \right]^{j-(i+1)} \tau_{0,i}^{k_{0,i}}\cdot
t_{i+1}^{k_j} \delta_{i+2,j}\ \alpha\ \delta_{j,i+2}\ +\ \tau_{0,i}^{k_i} t_{i+1}t_{i+2}\cdot \beta_1,\\ \end{array} \] \noindent where $\beta_1 \in \textrm{H}_{j+1}(q)$. \smallbreak \noindent Moreover, $C\ = \ \sum_{k_r} f(q) \tau_{0,i}^{k_{0,i}} \cdot \tau_{i+1,i+k_j-1}^{k_{i+1,i+k_j-1}} \beta \ t_j\ \beta^{\prime}$ and since $\beta\ =\ w_{i+k_j-1,j}$, we have that: $\beta\cdot t_j\ \overset{(L.~\ref{brlre3})}{=}\ \sum_{s=i+k_j-1}^{j} t_s\cdot \gamma_s$, where $\gamma_s \in \textrm{H}_{j+1}(q)$ and so: $C\ \widehat{=}\ \sum_{v_r}f(q) \tau_{0,i}^{k_{0,i}}\cdot \tau_{i+1, i+k_j}^{v_{i+1,i+k_j}}\cdot \beta_2$, where $\beta_2 \in \textrm{H}_{j+1}(q)$. \smallbreak This concludes the proof. \end{proof} We now pass to the general case of one-gap words. \begin{prop}\label{fp} For the $1$-gap word $B\ =\ \tau_{0,i}^{k_{0,i}} \cdot \tau_{j,j+m}^{k_{j,j+m}} \cdot \alpha$, where $\alpha \in {\rm H}_n(q)$ we have: \[ \begin{array}{lcl} B & \widehat{=} & \prod_{s=0}^{m}{(q^{k_{j+s}-1})^{j-(i+1)}}\cdot \tau_{0,i}^{k_{0,i}} \tau_{i+1,i+m}^{k_{j,j+m}} \\ & & \\ & \cdot & \prod_{s=0}^{m}(\delta_{i+m+2-s,j+s}) \cdot \alpha \cdot \prod_{s=0}^{m}(\delta_{j+s,i+m+2-s})\ +\\ & & \\ &+&\sum_{u_r} f(q) \tau_{0,i}^{k_{0,i}}\cdot (\tau_{i+1,i+m}^{u_{1,m}})\cdot \alpha^{\prime} \\ \end{array} \] \noindent where $\alpha^{\prime}\in {\rm H}_n(q)$, $\sum u_{1,m} = k_j$ such that $u_1< k_j$ and if $u_{\mu}=0$, then $u_{s}=0, \forall s>\mu$. \end{prop} \begin{proof} The proof follows from Lemma~\ref{gap}. The idea is to apply Lemma~\ref{gap} on the expression $\underline{\tau_{0,i}^{k_{0,i}}\cdot t_j^{k_j}} \cdot \rho_1$, where $\rho_1\ =\ \tau_{j+1,j+m}^{k_{j+1,j+m}}$ and obtain the terms $\tau_{0,i}^{k_{0,i}}\cdot t_{i+1}^{k_j}\cdot \rho_2$ and $\tau_{0,i}^{k_{0,i}}\cdot \tau_{i+1,i+q}^{k_{i+1,i+q}} \cdot \rho_2$ and follow the same procedure until there is no gap in the word. 
\end{proof} We are now ready to deal with the general case, that is, words with more than one gap in the indices of the generators. \begin{thm}\label{fth} For the $\phi$-gap word $C\ =\ \tau^{k_{0,i}}_{0,i}\cdot \tau^{k_{i+s_1,i+s_1+\mu_1}}_{i+s_1,i+s_1+\mu_1}\cdot \tau^{k_{i+s_2,i+s_2+\mu_2}}_{i+s_2,i+s_2+\mu_2} \ldots \tau^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}} \cdot \alpha$, where $k_i \in \mathbb{Z}\backslash \{0\}$ for all $i$, $\alpha \in {\rm H}_n(q)$, $s_j, \mu_j \in \mathbb{N}$, such that $s_1>1$ and $s_j>s_{j-1}+\mu_{j-1}$ for all $j$, we have: \[ \begin{array}{lcl} C & \widehat{=} & \prod_{j=1}^{\phi}{\left(q^{k_{i+s_j}-1} \right)^{s_j-j- \sum_{p=1}^{j-1}\mu_p}}\cdot \tau^{u_{0,i+\phi+ \sum_{p=1}^{\phi}{\mu_p}}}_{0,i+\phi+ \sum_{p=1}^{\phi}{\mu_p}}\cdot \left( \prod_{p=0}^{\phi -1}\alpha_{\phi-p} \right) \cdot \alpha \cdot \left( \prod_{p=1}^{\phi}\alpha^{\prime}_{p} \right)\ +\\ &&\\ & + & \sum_{v} f_v(q) \tau^{k_{0,v}}_{0,v}\cdot w_v, \ \mbox{\rm where} \\ \end{array} \] \begin{itemize} \item[(i)] $\alpha_j\ =\ \prod_{\lambda_j=0}^{\mu_j}{\delta_{i+j+1+\sum_{k=1}^{j}{\mu_k}-\lambda_j,\ i+s_j+ \mu_j- \lambda_j}},\ j\ =\ \{1,2,\ldots , \phi\},$ \vspace{.1in} \item[(ii)] $\alpha^{\prime}_j\ =\ \prod_{\lambda_j=0}^{\mu_j}{\delta_{i+j+1+\sum_{k=1}^{j-1}{\mu_k}+\lambda_j,\ i+s_j+ \lambda_j}},\ j\ =\ \{1,2,\ldots , \phi\},$ \vspace{.1in} \item[(iii)] $\tau^{u_{0,i+\phi+ \sum_{p=1}^{\phi}{\mu_p}}}_{0,i+\phi+ \sum_{p=1}^{\phi}{\mu_p}}\ = \ \tau_{0,i}^{k_{0,i}}\cdot \prod_{j=1}^{\phi}{\tau^{k_{i+s_j,i+s_j+\mu_j}}_{i+j+\sum_{p=1}^{j-1}{\mu_p},i+j+\sum_{p=1}^{j}{\mu_p}}},$ \vspace{.1in} \item[(iv)] $\tau^{u_{0,v}}_{0,v}< \tau^{u_{0,i+\phi+\sum_{p=1}^{\phi}\mu_p}}_{0,i+\phi+\sum_{p=1}^{\phi}\mu_p},$ for all $v$, \vspace{.1in} \item[(v)] $w_v$ of the form $w_{i+2,i+s_{\phi}+\mu_{\phi}} \in {\rm H}_{i+s_{\phi}+\mu_{\phi}+1}(q)$, for all $v$, \vspace{.1in} \item[(vi)] the scalars $f_v(q)$ are expressions in $q$ and lie in $\mathbb{C}$, for all
$v$. \end{itemize} \end{thm} \begin{proof} We prove the relations by induction on the number of gaps. For the $1$-gap word $A\ =\ \tau^{k_{0,i}}_{0,i}\cdot \tau^{k_{i+s,i+s+\mu}}_{i+s,i+s+\mu}\cdot \alpha$, where $\alpha \in \textrm{H}_n(q)$, we have: $$ \begin{array}{lcl} A & \widehat{=} & \left[ \prod_{\lambda=0}^{\mu}{\left(q^{k_{i+s+\lambda}-1} \right)^{s-1}}\right] \cdot \tau^{k_{0,i}}_{0,i}\cdot \tau^{k_{i+s,i+s+\mu}}_{i+1,i+1+\mu}\cdot \prod_{\lambda=0}^{\mu}{\delta_{i+2+\mu-\lambda, i+s+\mu - \lambda}}\cdot \alpha \cdot \prod_{\lambda=0}^{\mu}{\delta_{i+2+\mu+\lambda, i+s+ \lambda}}\ +\\ &&\\ & + & \sum_{v}{f_v(q)\cdot \tau^{u_{0,v}}_{0,v}\cdot w_v}, \mbox{ \rm which follows from Proposition~\ref{fp}.} \\ \end{array} $$ \smallbreak Suppose that the relation holds for $(\phi -1)$-gap words. Then for a $\phi$-gap word we have: \smallbreak \noindent $\left(\underline{\tau^{k_{0,i}}_{0,i}\cdot \tau^{k_{i+s_1,i+s_1+\mu_1}}_{i+s_1,i+s_1+\mu_1}\cdot \tau^{k_{i+s_2,i+s_2+\mu_2}}_{i+s_2,i+s_2+\mu_2} \ldots \tau^{k_{i+s_{\phi -1},i+s_{\phi -1}+\mu_{\phi -1}}}_{i+s_{\phi -1},i+s_{\phi -1}+\mu_{\phi -1}}}\right) \cdot \tau^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}} \cdot \alpha \underset{ind.step}{\widehat{=}}$\\ \noindent $\prod_{j=1}^{\phi-1}\left(q^{k_{i+s_j}-1} \right)^{s_j-j-\sum_{k=1}^{j-1}\mu_k}\cdot \tau_{0,i+\phi-1+\sum_{k=1}^{\phi-1}\mu_k}^{u_{0,i+\phi-1+\sum_{k=1}^{\phi-1}\mu_k}} \cdot \underline{\prod_{k=0}^{\phi-2}\alpha_{\phi-1-k}\cdot \tau_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}} \cdot \alpha \cdot \prod_{k=1}^{\phi-1}\alpha^{\prime}_k\ +$\\ \noindent $\sum_{v}f_v(q)\cdot \tau_{0,v}^{u_{0,v}}\cdot \underline{w \cdot \tau_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}}\ \overset{s_{\phi}>s_{\phi-1} + \mu_{\phi-1}}{=}$\\ \noindent $\prod_{j=1}^{\phi-1}\left(q^{k_{i+s_j}-1} \right)^{s_j-j-\sum_{k=1}^{j-1}\mu_k}\cdot
\underline{\tau_{0,i+\phi-1+\sum_{k=1}^{\phi-1}\mu_k}^{u_{0,i+\phi-1+\sum_{k=1}^{\phi-1}\mu_k}} \cdot \tau_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}} \cdot \prod_{k=0}^{\phi-2}\alpha_{\phi-1-k} \cdot \alpha \cdot \prod_{k=1}^{\phi-1}\alpha^{\prime}_k\ +$\\ \noindent $\sum_{v}f_v(q)\cdot {\tau_{0,v}^{u_{0,v}}\cdot \tau_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}}\cdot w \ \overset{(Prop.~\ref{fp})}{=}$\\ \noindent $\prod_{j=1}^{\phi-1}\left(q^{k_{i+s_j}-1} \right)^{s_j-j-\sum_{k=1}^{j-1}\mu_k}\cdot \prod_{p=0}^{\mu_{\phi}}\left(q^{k_{i+s_{\phi}+p}-1} \right)^{s_{\phi}-\phi-\sum_{k=1}^{\phi-1}\mu_k} \tau_{0,i+\phi-1+\sum_{k=1}^{\phi-1}\mu_k}^{u_{0,i+\phi-1+\sum_{k=1}^{\phi-1}\mu_k}} \cdot $\\ \noindent $\tau_{i+{\phi}+\sum_{k=1}^{\phi-1}{\mu_k},i+{\phi}+\sum_{k=1}^{\phi-1}{\mu_k}+\mu_{\phi}}^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}} \cdot \prod_{k=0}^{\phi-1}\alpha_{\phi-1-k} \cdot \alpha \cdot \prod_{k=1}^{\phi-1}\alpha^{\prime}_k + \sum_{v}f_v(q)\cdot \underline{\tau_{0,v}^{u_{0,v}}\cdot \tau_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}^{k_{i+s_{\phi},i+s_{\phi}+\mu_{\phi}}}}\cdot w \ \overset{(Prop.~\ref{fp})}{=}$\\ \noindent $ \left[ \prod_{\lambda=0}^{\mu}{\left(q^{k_{i+s+\lambda}-1} \right)^{s-1}}\right] \cdot \tau^{k_{0,i}}_{0,i}\cdot \tau^{k_{i+s,i+s+\mu}}_{i+1,i+1+\mu}\cdot \prod_{\lambda=0}^{\mu}{\delta_{i+2+\mu-\lambda, i+s+\mu - \lambda}}\cdot \alpha \cdot \prod_{\lambda=0}^{\mu}{\delta_{i+2+\mu+\lambda, i+s+ \lambda}}\ +\\$ \noindent $ \sum_{v}{f_v(q)\cdot \tau^{u_{0,v}}_{0,v}\cdot w_v}.$ \end{proof} All results are best demonstrated in the following example on a word with two gaps. 
\begin{example}\rm For the 2-gap word $t^{k_0}t_1^{k_1}t_3t_{5}^2t_6^{-1}$ we have:\\ \noindent $ t^{k_0}\underline{t_1^{k_1}t_3}t_{5}^2t_6^{-1}\ =\ t^{k_0}t_1^{k_1}\underline{g_3t_2g_3}t_{5}^2t_6^{-1}\ =\ g_3t^{k_0}t_1^{k_1}t_2t_{5}^2t_6^{-1}g_3 \ \widehat{=} \ t^{k_0}t_1^{k_1}\underline{t_2t_{5}^2}t_6^{-1}g_3^2\ = $\\ \noindent $=\ t^{k_0}t_1^{k_1}t_2\underline{t_{5}}t_{5}t_6^{-1}g_3^2 \ = \ t^{k_0}t_1^{k_1}t_2\underline{g_5g_4}t_3g_4g_5t_{5}t_6^{-1}g_3^2\ =\ \underline{g_5g_4}t^{k_0}t_1^{k_1}t_2t_3g_4g_5t_{5}t_6^{-1}g_3^2\ \widehat{=}\\$ \noindent $\widehat{=}\ t^{k_0}t_1^{k_1}t_2t_3\underline{g_4g_5t_{5}}t_6^{-1}g_3^2g_5g_4\ =\ t^{k_0}t_1^{k_1}t_2t_3 \left[q^2t_3g_4g_5+q(q-1)t_4g_5 \ + \ (q-1)t_5g_4 \right]t_6^{-1}g_3^2g_5g_4\ =$ \\ \noindent $=\ q^2t^{k_0}t_1^{k_1}t_2t_3^2\underline{g_4g_5t_6^{-1}}g_3^2g_5g_4\ +\ q(q-1)t^{k_0}t_1^{k_1}t_2t_3t_4\underline{g_5t_6^{-1}}g_3^2g_5g_4\ +\ (q-1)t^{k_0}t_1^{k_1}t_2t_3t_5\underline{g_4t_6^{-1}}g_3^2g_5g_4\ =$\\ \noindent $=\ q^2t^{k_0}t_1^{k_1}t_2t_3^2\underline{t_6^{-1}}g_4g_5g_3^2g_5g_4\ + \ (q-1)t^{k_0}t_1^{k_1}t_2t_3\underline{t_5}t_6^{-1}g_4g_3^2g_5g_4 \ + \ q(q-1)t^{k_0}t_1^{k_1}t_2t_3t_4\underline{t_6^{-1}}g_5g_3^2g_5g_4\ \widehat{=}$\\ \noindent $\widehat{=}\ q^2t^{k_0}t_1^{k_1}t_2t_3^2\underline{g_6^{-1}g_5^{-1}}t_4^{-1}g_5^{-1}g_6^{-1}g_4g_5g_3^2g_5g_4\ + \ q(q-1)t^{k_0}t_1^{k_1}t_2t_3t_4\underline{g_6^{-1}}t_5^{-1}g_6^{-1} g_5g_3^2g_5g_4$\\ \noindent $+\ (q-1)t^{k_0}t_1^{k_1}t_2t_3\underline{g_5}{t_4}\underline{g_5}t_6^{-1}g_4g_3^2g_5g_4 \ = \ q^2g_6^{-1}g_5^{-1}t^{k_0}t_1^{k_1}t_2t_3^2t_4^{-1}g_5^{-1}g_6^{-1}g_4g_5g_3^2g_5g_4\ +$\\ \noindent $+\ q(q-1)g_6^{-1}t^{k_0}t_1^{k_1}t_2t_3t_4t_5^{-1}g_6^{-1} g_5g_3^2g_5g_4 \ +\ (q-1){g_5}t^{k_0}t_1^{k_1}t_2t_3{t_4}t_6^{-1}{g_5}g_4g_3^2g_5g_4 \ \widehat{=}$\\ \noindent $\widehat{=}\ q^2t^{k_0}t_1^{k_1}t_2t_3^2t_4^{-1}g_5^{-1}g_6^{-1}g_4g_5g_3^2g_5g_4g_6^{-1}g_5^{-1}\ +\ q(q-1)t^{k_0}t_1^{k_1}t_2t_3t_4t_5^{-1}g_6^{-1} g_5g_3^2g_5g_4g_6^{-1}\ +$\\ \noindent $+\ 
(q-1)t^{k_0}t_1^{k_1}t_2t_3{t_4}\underline{t_6^{-1}}{g_5}g_4g_3^2g_5g_4{g_5} \ =\ q^2t^{k_0}t_1^{k_1}t_2t_3^2t_4^{-1}g_5^{-1}g_6^{-1}g_4g_5g_3^2g_5g_4g_6^{-1}g_5^{-1}\ +$\\ \noindent $+\ q(q-1)t^{k_0}t_1^{k_1}t_2t_3t_4t_5^{-1}g_6^{-1} g_5g_3^2g_5g_4g_6^{-1} \ +\ (q-1)t^{k_0}t_1^{k_1}t_2t_3{t_4}\underline{g_6^{-1}}t_5^{-1}g_6^{-1}{g_5}g_4g_3^2g_5g_4{g_5} \ \widehat{=}$\\ \noindent $\widehat{=}\ q^2t^{k_0}t_1^{k_1}t_2t_3^2t_4^{-1}g_5^{-1}g_6^{-1}g_4g_5g_3^2g_5g_4g_6^{-1}g_5^{-1}\ + \ q(q-1)t^{k_0}t_1^{k_1}t_2t_3t_4t_5^{-1}g_6^{-1} g_5g_3^2g_5g_4g_6^{-1} \ +$\\ \noindent $+\ (q-1)t^{k_0}t_1^{k_1}t_2t_3{t_4}t_5^{-1}g_6^{-1}{g_5}g_4g_3^2g_5g_4{g_5}{g_6^{-1}}.$ \end{example} \subsection{Eliminating the tails} So far we have seen how to convert elements of the basis $\Lambda^{\prime}$ to linear combinations of elements of $\Sigma_n$ and then, using conjugation, how these elements are expressed as linear combinations of elements of the $\bigcup_n\textrm{H}_n(q)$-module $\Lambda$. We will now show that, using conjugation and stabilization moves, all these elements of the $\bigcup_n\textrm{H}_n(q)$-module $\Lambda$ are expressed as linear combinations of elements in the set $\Lambda$ with scalars in the field $\mathbb{C}$. We will use the symbol $\simeq$ when a stabilization move is performed and $\widehat{\simeq}$ when both stabilization moves and conjugation are performed. \smallbreak Let us consider a generic word in $\textrm{H}_{1,n+1}(q)$. This is of the form $\tau_{0,n}^{k_{0,n}}\cdot w_{n+1}$, where $w_{n+1} \in \textrm{H}_{n+1}(q)$. Without loss of generality we consider the exponent of the braiding generator with the highest index to be $(-1)$ when the exponent of the corresponding loop generator is in $\mathbb{N}$ and $(+1)$ when the exponent of the corresponding loop generator is in $\mathbb{Z} \backslash \mathbb{N}$.
We then apply Lemmas~\ref{brlre2} and~\ref{brlre3} in order to interact $t_n^{\pm k_n}$ with $g_n^{\mp 1}$ and obtain words of the following form: \[ \begin{array}{llll} (1) & \tau_{0,p}^{\lambda_{0,p}}\cdot v, & \textrm{where} & \tau_{0,p}^{\lambda_{0,p}} < \tau_{0,n}^{k_{0,n}}\ \textrm{and}\ v \in \textrm{H}_{n+1}(q)\ \textrm{of\ any\ length,\ or}\\ &&&\\ (2) & \tau_{0,q}^{k_{0,q}}\cdot u, & \textrm{where} & \tau_{0,q}^{k_{0,q}} < \tau_{0,n}^{k_{0,n}}\ \textrm{and}\ u \in \textrm{H}_{n}(q)\ \textrm{such that}\ l(u)<l(w).\\ \end{array} \] In the first case we obtain monomials of $t_i$s of less order than the initial monomial, followed by a word in ${\rm H}_{n+1}(q)$ of any length. After at most $(k_n+1)$ interactions of $t_n$ with $g_n$, the exponent of $t_n$ will become zero and so, by applying a stabilization move, we obtain monomials of $t_i$s of less index, and thus of less order (Definition~\ref{order}), followed by a word in ${\rm H}_{n}(q)$. In the second case, we have monomials of $t_i$s of less order than the initial monomial followed by words $u \in \textrm{H}_{n}(q)$ such that $l(u) < l(w)$. We interact the generator with the maximum index of $u$, $g_m$, with the corresponding loop generator $t_m$ until the exponent of $t_m$ becomes zero. A gap in the indices of the monomials of the $t_i$s occurs and we apply Theorem~\ref{fth}. This leads to monomials of $t_i$s of less order followed by words of the braiding generators of any length. We then apply stabilization moves and repeat the same procedure until the braiding `tails' are eliminated. \begin{thm} \label{tails} Applying conjugation and stabilization moves on a word in the $\bigcup_{n}{\rm H}_n(q)$-module $\Lambda$, we have that: $$\tau_{0,m}^{k_{0,m}}\cdot w_n\ \widehat{\simeq}\ f(q,z)\cdot \sum_{j}{f_j(q,z)\cdot \tau_{0,u_j}^{v_{0,u_j}}},$$ \noindent such that $\sum{v_{0,u_j}}=\sum{k_{0,m}}$ and $\tau_{0,u_j}^{v_{0,u_j}} < \tau_{0,m}^{k_{0,m}}$, for all $j$.
\end{thm} The logic for the induction hypothesis is explained above. We shall now proceed with the proof of the theorem. \begin{proof} We prove the statement by double induction on the length of $w_n\in \textrm{H}_n(q)$ and on the order of $\tau_{0,m}^{k_{0,m}}\in \Lambda$, where order of $\tau_{0,m}^{k_{0,m}}$ denotes the position of $\tau_{0,m}^{k_{0,m}}$ in $\Lambda$ with respect to the total ordering. \smallbreak For $l(w)=0$, that is, for $w=e$, we have that $\tau_{0,m}^{k_{0,m}}\ \widehat{\simeq}\ \tau_{0,m}^{k_{0,m}}$ and there is nothing to show. Moreover, the minimal element in the set $\Lambda$ is $t^k$ and for any word $w\in \textrm{H}_n(q)$ we have that $t^k \cdot w\ \simeq\ f(q,z)\cdot t^k$, by the quadratic relation and stabilization moves. \smallbreak Suppose that the relation holds for all $\tau_{0,p}^{u_{0,p}}\cdot w^{\prime}$, where $\tau_{0,p}^{u_{0,p}} \leq \tau_{0,m}^{k_{0,m}}$ and $l(w^{\prime})=l$, and for all $\tau_{0,q}^{v_{0,q}}\cdot w$, where $\tau_{0,q}^{v_{0,q}} < \tau_{0,m}^{k_{0,m}}$ and $l(w)=l+1$. We will show that it holds for $\tau_{0,m}^{k_{0,m}}\cdot w$. Let the exponent $k_r$ of $t_r$ be in $\mathbb{N}$, and let $w\in \textrm{H}_{r+1}(q)$. Then, $w$ can be written as $w^{\prime} \cdot g_r^{-1} \cdot \delta_{r-1,d}$, where $w^{\prime} \in \textrm{H}_{r}(q)$ and $d<r$.
We have that: \[ \begin{array}{lcl} \tau_{0,m}^{k_{0,m}}\cdot w & = & \tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}} \cdot w^{\prime} \cdot t_r g_r^{-1} \delta_{r-1,d}\ =\\ &&\\ & = & \tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}} \cdot w^{\prime} \cdot g_r \underline{t_{r-1} \delta_{r-1,d}}\ \overset{L.~6}{=}\\ &&\\ & = & \tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}} \cdot w^{\prime} \cdot g_r\\ &&\\ && \cdot \left( \sum_{j=0}^{r-1-d}{q^j(q-1)\delta_{r-1,\widehat{r-1-j},d}t_{r-1-j}}\ +\ q^{l(\delta_{r-1,d})}\delta_{r-1,d}t_{d-1} \right) \widehat{=}\\ &&\\ & \widehat{=} & \sum_{j=0}^{r-1-d}{q^j(q-1) \tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}}\cdot t_{r-1-j}}\cdot w^{\prime}\cdot g_r \delta_{r-1,\widehat{r-1-j},d}\ +\\ &&\\ & + & q^{l(\delta_{r-1,d})} \tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}}\cdot t_{d-1} \cdot w.\\ \end{array} \] \bigbreak We have that $\left(\tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}}\cdot t_{r-1-j}\right) < \left(\tau_{0,m}^{k_{0,m}} \right)$, for all $j\in \{1,2,\ldots, r-1-d \}$ and $l\left(w^{\prime}\cdot g_r \delta_{r-1,\widehat{r-1-j},d}\right)=l$ and $ \left( \tau_{0,r-1}^{k_{0,r-1}} t_r^{k_r-1} \tau_{r+1,m}^{k_{r+1,m}}\cdot t_{d-1}\right) < \left(\tau_{0,m}^{k_{0,m}} \right)$. So, by the induction hypothesis, the relation holds. \end{proof} \begin{example}\rm In this example we demonstrate how to eliminate the braiding `tail' in a word in $\Sigma_n$.
\[ \begin{array}{lcl} t^{-1}\underline{t_1^2}t_{2}^{-1}g_1^{-1} & = & t^{-1}t_1t_{2}^{-1}\underline{t_1g_1^{-1}}\ = \ t^{-1}t_1t_{2}^{-1} g_1 \underline{t} \ \widehat{=} \ \underline{t_1}t_{2}^{-1} g_1\ =\ t_{2}^{-1}\underline{t_1 g_1}\ =\\ &&\\ & = & (q-1) \underline{t_1}t_{2}^{-1}\ +\ q t_{2}^{-1} g_1 \underline{t}\ \widehat{=}\ (q-1) t\underline{t_2^{-1}}g_1^{2}\ +\ qt \underline{t_{2}^{-1}} g_1\ =\\ &&\\ & = & (q-1) tt_1^{-1}g_2^{-1}g_1^{2}g_2^{-1}\ +\ q tt_1^{-1}g_2^{-1}g_1g_2^{-1}.\\ \end{array} \] \noindent We have that: \[ \begin{array}{lcl} g_2^{-1}g_1g_2^{-1} & = & q^{-2}g_1g_2g_1\ +\ q^{-1}(q^{-1}-1)g_2g_1\ +\ q^{-1}(q^{-1}-1) g_1g_2\ +\ (q^{-1}-1)^2g_1,\\ &&\\ g_2^{-1}g_1^2g_2^{-1} & = & q^{-2}(q-1)g_1g_2g_1\ -\ (q^{-1}-1)^2g_2g_1\ -\ (q^{-1}-1)^2 g_1g_2\ +\ (q-1)(q^{-1}-1)^2g_1\\ &&\\ & + & q(q^{-1}-1)g_2^{-1}\ +\ 1,\\ \end{array} \] \noindent and so \[ \begin{array}{rcl} (q-1) tt_1^{-1}g_2^{-1}g_1^{2}g_2^{-1} & \widehat{\simeq} & \left( (q-1)+ q^{-1}(q-1)^3 \right)\cdot tt_1^{-1}\ -\ q^{-3}(q^{-1}-1)^3z^2\cdot 1\ +\\ &&\\ & + & 3q^{-3}(q-1)^4z\cdot 1\ - \ q^{-1}(q-1)^2z\cdot 1\ -\ q^{-3}(q-1)^5\cdot 1,\\ &&\\ q tt_1^{-1}g_2^{-1}g_1g_2^{-1} & \widehat{\simeq} & z \cdot tt_1^{-1}\ +\ q^{-1}(q^{-1}-1)z^2\cdot 1\ +\ 2(q^{-1}-1)^2z\cdot 1\ +\ q(q^{-1}-1)^3\cdot 1.\\ \end{array} \] \end{example} \section{The basis $\Lambda$ of $\mathcal{S}({\rm ST})$} In this section we shall show that the set $\Lambda$ is a basis for $\mathcal{S}({\rm ST})$, given that $\Lambda^{\prime}$ is a basis of $\mathcal{S}({\rm ST})$. This is done in two steps: \smallbreak $\bullet$ We first relate the two sets $\Lambda$ and $\Lambda^{\prime}$ via an infinite lower triangular matrix with invertible elements in the diagonal. Since $\Lambda^{\prime}$ is a basis for $\mathcal{S}({\rm ST})$, the set $\Lambda$ spans $\mathcal{S}({\rm ST})$. 
\smallbreak $\bullet$ Then, we prove that the set $\Lambda$ is linearly independent and so we conclude that $\Lambda$ forms a basis for $\mathcal{S}({\rm ST})$. \subsection{The infinite matrix} With the orderings given in Definition~\ref{order} we shall show that the infinite matrix converting elements of the basis $\Lambda^{\prime}$ to elements of the set $\Lambda$ is a block diagonal matrix, where each block is an infinite lower triangular matrix with invertible elements in the diagonal. Note that applying conjugation and stabilization moves to an element of some $\Lambda_k$ followed by a braiding part does not alter the sum of the exponents of the loop generators and thus, the resulting terms will belong to the set of the same level $\Lambda_k$. Fixing the level $k$ of a subset of $\Lambda^{\prime}$, the proof of Theorem~\ref{mainthm} is equivalent to proving the following claims: \smallbreak \begin{itemize} \item[(1)] A monomial $w^{\prime} \in \Lambda_k^{\prime} \subseteq \Lambda^{\prime}$ can be expressed as a linear combination of elements $v_i$ of $\Lambda_k \subseteq \Lambda$, followed by monomials in $\textrm{H}_n(q)$, with scalars in $\mathbb{C}$, such that $\exists \ j: v_j=w\sim w^{\prime}$. \smallbreak \item[(2)] Applying conjugation and stabilization moves to all $v_i$'s results in elements $u_i$ of $\Lambda_k$ such that $u_i < v_i$ for all $i$. \smallbreak \item[(3)] The coefficient of $w$ is an invertible element in $\mathbb{C}$. \smallbreak \item[(4)] $ \Lambda_{k} \ni w < u \in \Lambda_{k+1}$. \end{itemize} \bigbreak Indeed we have the following: Let $w^{\prime} \in \Lambda^{\prime}_k \subseteq \Lambda^{\prime}$. Then, by Theorem~7 the monomial $w^{\prime}$ can be expressed as a linear combination of elements of $\Sigma_n$, where the only term that is not followed by a braiding part is the homologous monomial $w \in \Lambda$.
Other terms in the linear combinations involve terms of lower order than $w$ (with possible gaps in the indices) followed by a braiding part, and words of the form $w \cdot \beta$, where $\beta \in \textrm{H}_n(q)$. Then, by Theorem~8 elements of $\Sigma_n$ are expressed as linear combinations of elements of the $\textrm{H}_{n}(q)$-module $\Lambda$ (regularizing elements with gaps), yielding words of lower order than the initial word $w$. In Theorem~9 all elements that are followed by a braiding part are expressed as linear combinations of elements of $\Lambda$ with coefficients in $\mathbb{C}$. It is essential to mention that when applying Theorem~9 to a word of the form $w\cdot \beta$ one obtains elements in $\Lambda$ of lower order than $w$. Thus, we obtain a lower triangular matrix with entries in the diagonal of the form $q^{-A}$ (see Theorem~7), which are invertible elements in $\mathbb{C}$. The fourth claim follows directly from Definition~\ref{order}. \smallbreak If we denote by $[\Lambda_k]$ the block matrix converting elements in $\Lambda^{\prime}_k$ to elements in $\Lambda_k$ for some $k$, then the change of basis matrix will be of the form: $$ S=\left[ \begin{array}{ccccccc} \ddots & 0 & 0 & 0 & 0 & 0 & \\ & [\Lambda_{k-2}] & 0 & 0 & 0 & 0 & \\ & 0 & [\Lambda_{k-1}] & 0 & 0 & 0 & \\ & 0 & 0 & [\Lambda_{k}] & 0 & 0 & \\ & 0 & 0 & 0 & [\Lambda_{k+1}] & 0 & \\ & 0 & 0 & 0 & 0 & [\Lambda_{k+2}] & \\ & 0 & 0 & 0 & 0 & 0 & \ddots \\ \end{array}\right] $$ \smallbreak \begin{center} The infinite block diagonal matrix \end{center} \subsection{Linear independence of $\Lambda$} Consider an arbitrary subset of $\Lambda$ with finitely many elements $\tau_1, \tau_2, \ldots, \tau_k$. Without loss of generality we consider $\tau_1 < \tau_2 < \ldots < \tau_k$ according to Definition~\ref{order}. We now convert each element $\tau_i \in \Lambda$ into a linear combination of elements in $\Lambda^{\prime}$ according to the infinite matrix.
We have that $$\tau_i\ \widehat{\simeq}\ A_i \tau_i^{\prime}\ +\ \sum_{j}A_j \tau_j^{\prime}\ ,$$ \noindent where $\tau_i^{\prime} \sim \tau_i$, $A_i \in \mathbb{C}\setminus \{0\}$, $\tau_j^{\prime} < \tau_i^{\prime}$ and $A_j \in \mathbb{C}, \forall j$. \smallbreak So, we have that: \[ \begin{array}{ccl} \tau_1 & \widehat{\simeq} & A_1 \tau_1^{\prime} + \sum_{j}A_{1j} \tau_{1j}^{\prime}\\ &&\\ \tau_2 & \widehat{\simeq} & A_2 \tau_2^{\prime} + \sum_{j}A_{2j} \tau_{2j}^{\prime}\\ &&\\ \vdots && \ \ \ \ \ \ \ \ \vdots \\ &&\\ \tau_{k-1} & \widehat{\simeq} & A_{k-1} \tau_{k-1}^{\prime} + \sum_{j}A_{(k-1)j} \tau_{(k-1)j}^{\prime}\\ &&\\ \tau_k & \widehat{\simeq} & A_k \tau_k^{\prime} + \sum_{j}A_{kj} \tau_{kj}^{\prime}\\ \end{array} \] \smallbreak Note that each $\tau_i^{\prime}$ can occur as an element in the sum $\sum_{j}A_{pj} \tau_{pj}^{\prime}$ for $p > i$. We consider now the equation $\sum_{i=1}^{k}\lambda_i \cdot \tau_i\ =\ 0\ ,\ \lambda_i\in \mathbb{C}, \forall i$ and we show that this holds only when $\lambda_i=0, \forall i$. Indeed, we have: $$\sum_{i=1}^{k}\lambda_i \cdot \tau_i\ =\ 0\ \Leftrightarrow\ \lambda_k A_k \tau_k^{\prime}\ +\ \sum_{i=1}^{k}\sum_{j}\lambda_i A_{ij} \tau_{ij}^{\prime}\ =\ 0,$$ \noindent where $\tau_k^{\prime} > \tau_{ij}^{\prime}, \forall i, j$. So we conclude that $\lambda_k \ =\ 0$. Using the same argument we have that: $$\sum_{i=1}^{k}\lambda_i \cdot \tau_i\ =\ 0\ \Leftrightarrow\ \sum_{i=1}^{k-1}\lambda_i \cdot \tau_i\ =\ 0\ \Leftrightarrow\ \lambda_{k-1} A_{k-1} \tau_{k-1}^{\prime}\ +\ \sum_{i=1}^{k-1}\sum_{j}\lambda_i A_{ij} \tau_{ij}^{\prime}\ =\ 0,$$ \noindent where $\tau_{k-1}^{\prime} > \tau_{ij}^{\prime}, \forall i, j$. So, $\lambda_{k-1} \ =\ 0$. Continuing in this way we get: $$ \sum_{i=1}^{k}\lambda_i \cdot \tau_i\ =\ 0\ \Leftrightarrow\ \lambda_i\ =\ 0,\ \forall i,$$ \noindent and so an arbitrary finite subset of $\Lambda$ is linearly independent.
Thus, the set $\Lambda$ is linearly independent and it forms a basis for $\mathcal{S} (\rm ST)$. \smallbreak The proof of our main theorem is now concluded. \hfill QED \section{Conclusions} In this paper we gave a new basis $\Lambda$ for $\mathcal{S} (\rm ST)$, different from the Turaev-Hoste-Kidwell basis and the Morton-Aiston basis. This basis was conjectured by J.H.~Przytycki. The new basis is appropriate for describing the handle sliding moves, whilst the old basis $\Lambda^{\prime}$ is consistent with the trace rules \cite{La2}. In a sequel paper we shall use the bases $\Lambda^{\prime}$ and $\Lambda$ of $\mathcal{S}({\rm ST})$ and the change of basis matrix in order to compute the Homflypt skein module of the lens spaces $L(p,1)$.
\section{Introduction} Laakso spaces are introduced in \cite{Laakso2000} as a quotient space of the cartesian product of the unit interval with the middle thirds Cantor set, and in \cite{ST08} it is shown that such spaces can be constructed as the projective limit of quantum graphs, verifying a comment in \cite{BarlowEvans2004}. The motivation behind the projective limit is the construction of Markov processes and associated infinitesimal generators; moreover, the spectrum of the Laplacian generating the most natural of these processes is given in \cite{ST08} by using the quantum graph approximations to construct a complete set of eigenfunctions. Here, we build on these previous results by performing calculations motivated by analogy to Quantum Mechanics in the setting of Laakso spaces. Other authors have investigated similar questions on finitely ramified fractals in \cite{FKS09,S09,BajorinEtAl2008} and more directly in \cite{ADT2009,ADT2010}. After a preliminary discussion in Section \ref{PB}, which recalls the definition of Laakso spaces and the derivation of the Laplacian, $\Delta_L$, the body of the paper falls into three parts. The first is Section \ref{sect: SA} in which the spectrum of a Hamiltonian operator of the form $H=\Delta +V(x)$ is studied numerically in three cases (infinite square well, parabolic well, and Coulomb) and analytically in the infinite square well case, as well. Then in Section \ref{sect:CasimirForce}, we introduce analytic regularization techniques used in physics literature and Number Theory as a tool to calculate an analogue to the Casimir effect \cite{Casimir1948}, which describes a Quantum Mechanical force experienced by perfectly conducting uncharged plates. Then in Section \ref{sect: Zeta} the spectral zeta function associated to the Laplacian $\Delta_L$ is studied in greater detail, and, in circumstances where the Laakso space is strictly self-similar, specific values are calculated directly. 
\section{Preliminary Background} \label{PB} \subsection{Laakso Spaces} Laakso spaces are defined in \cite{Laakso2000} as a quotient space of $I \times K$ by an iteratively defined series of identifications, where \emph{K} is the Cantor set, $I$ is the unit interval, and $\iota: I \times K \rightarrow L$ is the quotient map. Such spaces are also constructable as projective limits of finite quantum graphs $\{F_{n}\}$, as in \cite{ST08}. The projective limit construction provides a convenient approximation of any Laakso space. According to \cite{ST08}, the construction of $F_{n}$ is specified by an equivalence relation that is encoded by a sequence of integers $\{j_{n}\}_{n=1}^\infty$, which give the number of identifications---or subdivisions of each cell---at the $n^{th}$ level of construction. \begin{figure}[htb] \begin{center} \epsfig{file=2.eps, width=3.5cm}\hspace{1cm} \epsfig{file=arrow.eps, width=1.5cm}\hspace{1cm} \epsfig{file=2x2newnodesconnect.eps, width=3.5cm}\vspace{1cm} \end{center} \end{figure} \vspace{-1.2cm} \hspace{2cm} $F_{1} \hspace{6.1cm} F_{2}$ \begin{figure}[htb] \begin{center} \epsfig{file=thiswontwork.eps, width=5cm}\\ $F_{2}$ \end{center} \caption[projective limit construction]{Construction of $F_{2}$ from $F_{1}$ with $j_{1}=j_{2}=2$.} \label{projective limit construction} \end{figure} The most elementary approximation is $F_{0}=[0,1]$. We construct $F_{n+1}$ by dividing each interval in $F_{n}$ into $j_{n+1}$ equal subintervals; the new nodes mark the boundaries of these newly-formed subintervals. Next, duplicate $F_{n}$ and connect each new node to the corresponding node in the other copy. For convenience in the counting arguments to come, align the nodes in columns. This ensures that each $F_{n}$ is vertically and horizontally symmetric. The following quantity will be used frequently: \begin{equation} I_{n}=\prod_{i=1}^n j_{i}, \end{equation} where $I_{0}=1$. (See Figure \ref{projective limit construction}.)
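For concreteness, here is a worked instance of this quantity (our own illustration) for the alternating sequence $j_{i}=2,3,2,3,\ldots$ that also appears in the numerical examples of Section~\ref{sect:NM}: \[ I_{1}=j_{1}=2, \qquad I_{2}=j_{1}j_{2}=2\cdot 3=6, \qquad I_{3}=I_{2}\,j_{3}=6\cdot 2=12, \qquad I_{4}=I_{3}\,j_{4}=12\cdot 3=36, \] so each horizontal line of $F_{n}$ is divided into subintervals of length $1/I_{n}$.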
The sequence of quantum graphs $\{F_{n}\}$, $n \geq 0$, approximates a specific Laakso space, where the depth of approximation increases as \emph{n} increases. A Laakso space has other important properties which can be expressed in terms of the sequence $\{ j_{n} \}$. For example, the Hausdorff dimension of a Laakso space is \begin{equation} Q_{L}=\lim_{n \rightarrow \infty}\left(1+\frac{\log(2^n)}{\log(I_n)} \right), \label{Hdimension} \end{equation} provided the limit exists. If $\{j_i\}_{i=1}^{\infty}$ is a repeating sequence with period \emph{T}, equation \ref{Hdimension} conveniently reduces to \begin{equation} Q_{L}=1+\frac{\log(2^T)}{\log(I_T)}. \end{equation} Furthermore, each $F_{n}$ can be decomposed into three distinct shapes---V's, loops, and crosses (see Figure \ref{A V, a loop, and a cross})---the counts of which are necessary in determining the spectrum of the square well Hamiltonian in Subsection \ref{subsect: SW} and an arrangement of conducting plates in Subsection \ref{sect:CasimirForce}. \begin{figure}[htb] \begin{center} \epsfig{file=singleV.eps, width=1.7cm} \hspace{1.5cm} \epsfig{file=singleloop.eps, width=1.75cm} \hspace{1.5cm} \epsfig{file=singlecross.eps, width=3.2cm} \end{center} \caption[A V, a loop, and a cross]{A V, a loop, and a cross} \label{A V, a loop, and a cross} \end{figure} \subsection{Laplacian} In \cite{ST08} the Laplacian $\Delta$ is constructed on a Laakso space as the minimal self-adjoint extension of a compatible sequence of operators $\left\{ A_{n} \right\}$, where each $A_{n}$ acts by $-\frac{d^{2}}{dx_{e}^{2}}$ along edges in the $F_{n}$ quantum graph approximation. 
Specifically, $\Delta$ acts by \begin{equation} \iota^{*}\Delta[f]=\left( -\frac{d^{2}}{dx^{2}} \right) \iota^{*}[f]~\text{where}~x \in \emph{I}~\text{and}~\iota: \emph{I} \times \emph{K} \rightarrow \emph{L}~ \text{is the quotient map.} \end{equation} Moreover, the spectrum of this self-adjoint operator can be decomposed into the union of eigenvalues in an orthogonal basis on each quantum graph, and in \cite{BDMS10} the eigenvalues of $\Delta$ on a Laakso space with associated sequence $\{ j_{n} \}$ are explicitly shown to be \begin{eqnarray} \label{eq: Lspectrum} \sigma(\Delta) = &\bigcup_{k=0}^\infty \left\{ \pi^{2} k^{2} \right\} \cup \bigcup_{n=1}^\infty \bigcup_{k=0}^\infty \left\{ (k+1/2)^{2} \pi^{2} I_{n}^{2} \right\} \cup \bigcup_{n=1}^\infty \bigcup_{k=1}^\infty \left\{k^{2} \pi^{2} I_{n}^{2} \right\} \notag \\ & \cup \bigcup_{n=2}^\infty \bigcup_{k=1}^\infty \left\{ k^{2} \pi^{2} I_{n}^{2} \right\} \cup \bigcup_{n=2}^\infty \bigcup_{k=1}^\infty \left\{ \frac{k^{2}\pi^{2} I_{n}^{2}}{4} \right\} \end{eqnarray} with respective multiplicities: \begin{equation*} 1, 2^{n}, 2^{n-1}(j_{n}-2)I_{n-1}, 2^{n-1}(I_{n-1}-1), 2^{n-2}(I_{n-1}-1). \end{equation*} \subsection{Quantum Mechanics} The mathematical formalism of Quantum Mechanics consists of two fundamental types of objects: (1) normalized state vectors commonly denoted by $\mid \psi \rangle$, which represent physical systems and reside in a complex separable Hilbert space; and (2) self-adjoint operators, which act on state vectors, represent various observable quantities, and whose spectra correspond to the possible values of the measurement of those observables \cite{Griffiths2005}. For example, the position and momentum operators act by $\hat{x}[f]=xf$ and $\hat{p}[f]=-i \hbar \frac{ d}{dx}f$, respectively. In accordance with classical mechanics, we formulate energy as a function of momentum and position, namely $E=\frac{p^{2}}{2m}+V(x)$, and then associate to each observable its corresponding operator.
In this way, we motivate the Hamiltonian energy operator $\hat{H}$, which acts by \begin{equation} \hat{H}[f]=\left(\frac{p^{2}}{2m}+V(x,t) \right)[f]=\left( -\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}} +V(x,t) \right)[f]. \end{equation} Despite the fact that the domain of each self-adjoint operator is a complex separable Hilbert space, the domains for different operators are, in general, distinct. In the case of the position and momentum spaces, the appropriate domain is $L^{2}(\mathbb{R}^{n})$ with natural inner product $\langle \phi \mid \psi \rangle=\int_{\mathbb{R}^{n}} \phi\psi^{*} ~dx$ and normalization condition $\langle \psi \mid \psi \rangle=\int_{\mathbb{R}^{n}} \left| \psi \right| ^{2} ~dx=1$. In fact, there is a probabilistic interpretation to the inner product on the position and momentum spaces. If $\mid \psi \rangle$ represents a particle in position or momentum space, then the probability that a measurement of that particle's position or momentum will fall in the interval $[A,B]$ is $ \int_{A}^{B} \left| \psi \right| ^{2} ~dx$ \cite{Griffiths2005}. The normalization condition then arises out of the necessity that a measurement of either the particle's position or momentum--- but not both simultaneously--- will be observed to have some well-defined value. Additionally, if $\left\{ \psi_{n} \right\} $ is an orthonormal basis of eigenfunctions with associated eigenvalues $\left\{\lambda_{n} \right\}$ for the self-adjoint operator $\emph{A}$ with corresponding observable $O_{A}$, then any normalized state vector $\mid \psi\rangle \in \Dom{(A)}$ can be written as a linear combination of orthonormal eigenfunctions, i.e. $\mid \psi \rangle=\sum_{i=1}^\infty a_{i} \psi_{i}$ with the following probabilistic interpretation: a measurement of $O_{A}$ for the system represented by $\mid\psi \rangle$ will yield the value $\lambda_{n}$ with probability $a_{n}^{2}$. 
If a particular eigenvalue of $\emph{A}$ is degenerate, then the total probability of observing that eigenvalue is given by adding all associated $a_{i}^{2}$. It follows from these considerations that the expected value of the observable $O_{A}$ is \begin{equation} \langle A \rangle= \langle \psi \mid A \mid \psi \rangle. \end{equation} In keeping with this discussion, we interpret the eigenvalues associated with the infinite square well, parabolic, and Coulomb potentials in Subsections \ref{subsect: SW} and \ref{subsect: CP} as the set of allowable energy measurements for a particle affected by those potentials. In Section \ref{sect:CasimirForce}, eigenvalues represent the permissible energy states for eigenfunctions in the presence of conducting plates. \section{Spectral Analysis of Hamiltonians} \label{sect: SA} In this section, we examine the spectrum of three different Hamiltonians. Theorem \ref{thrm: SWspectrum} gives the spectrum and associated multiplicities of the Laplacian with an infinite square well potential. The following two subsections consider the Laplacian with a parabolic potential and a Coulomb potential, respectively. In the last subsection, we discuss the numerical approximations of the spectra accompanied by some data. \subsection{Infinite Square Well Potential} \label{subsect: SW} Let $x$ be the coordinate on $L$ and on $F_n$ in the ``horizontal'' direction. The potential $V(x)$ on $L$ is chosen so that $\iota^{*}(V)(x,w)$ depends only on $x$ and not on $w$. We first discuss the infinite square well Hamiltonian $H_{SW}$, where \begin{equation} V_{SW}(x) = \left\{ \begin{array}{rl} \infty & : x \hspace{1 mm} \in \hspace{1 mm}[0, \frac{1}{4}) \cup (\frac{3}{4}, 1]\\ 0 & : x \hspace{1 mm} \in \hspace{1 mm}[\frac{1}{4}, \frac{3}{4}],\\ \end{array} \right. \end{equation} and \begin{equation} H_{SW}[f]=\left( \Delta + V_{SW} \right)[f].
\end{equation} \begin{definition} The differential operator $A_{n}$ acts on $F_{n}$ by \begin{equation*} A_{n}[f]= \left(-\frac{d^{2}}{dx_{e}^{2}}+V_{SW}\right) [f] \end{equation*} along each edge $e$ with $\Dom(A_{n})=\left\{ \right. f \in C(F_n) \mid f \in H^{2}(e)~\forall e$, except at the square well boundary where $f$ may have a discontinuous derivative, and $f(x)=0~\forall x$ outside of the square well $\left. \right\}$. \\ \end{definition} \begin{theorem} \label{SpectrumOrtho} Let $\Phi_{n}$ be the projection of the Laakso space onto $F_{n}$ and $D_{n}=\left\{ f \circ \Phi_{n}| f \in \Dom(A_{n}) \right\}$. Define $D_{0}^{\prime}=D_{0}=\Phi_{0}^{*} \Dom(A_{0})$ and $D_{n}^{\prime}=D_{n-1}^{\prime \perp} \cap D_{n}$. Since $(H_{SW}, \Dom(H_{SW}))$ is the minimal self-adjoint extension of the projective system $\left( A_{n}, \Dom(A_{n})\right) $, \begin{equation*} \sigma(H_{SW})=\bigcup_{n=0}^\infty \sigma(A_{n}|_{D_{n}^{\prime}}). \end{equation*} \end{theorem} \begin{proof} The proof of this theorem closely follows the free case proven in \cite{ST08}. \end{proof} Theorem \ref{thrm: SWspectrum} gives the spectrum and associated multiplicities of this Hamiltonian, which we prove in the remaining portion of this subsection. \begin{definition} The expression $w_{n}=\frac{1}{4}(I_{n})$ denotes the number of columns between $x=0$ and $x=\frac{1}{4}$. Let $d_{n}$ be the x-distance from the wall of the square well to the nearest column of nodes inside the square well; this is well defined by the symmetry of $L$. \end{definition} To distinguish one set of loops from another in an arbitrary row, we assign each set a number $m=\{1, 2, \cdots, I_{n-1} \}$, counting from left to right. Similarly, we distinguish one cross in an arbitrary row by assigning each a number $l=\{1, 2, \cdots, I_{n-1}-1 \}$.
\begin{theorem} \label{thrm: SWspectrum} Given any Laakso space, $L$, with associated sequence $\{j_i\}$, the spectrum of $H_{SW}$, $\sigma(H_{SW})$, is \begin{eqnarray}\small \bigcup_{k=1}^{\infty} \{4 \pi^2 k^2\}\cup \bigcup_{k=1}^{\infty} \left\{ \frac{ k^{2} \pi^{2}} {d_{1}^{2}} \right\} \cup \bigcup_{k=1}^{\infty} \left\{ 9 k^{2} \pi^{2} \right\} \cup \bigcup_{n=1}^\infty \bigcup_{k=1}^{\infty} \left\{ \frac{k^2 \pi^2}{d_{n}^{2}} \right\} \cup\bigcup_{n=1}^\infty \bigcup_{k=1}^\infty \{k^2\pi^2I_n^2\} \notag\\ \cup \bigcup_{n=2}^{\infty} \bigcup_{k=1}^{\infty} \left\{ \frac{ k^{2} \pi^{2}}{d_{n}^{2}} \right\} \cup \bigcup_{n=2}^{\infty} \bigcup_{k=1}^{\infty} \left\{ k^{2} \pi^{2} I_{n}^{2} \right\} \cup \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{ \frac{ k^{2} \pi^{2}}{(d_{n}+\frac{1}{I_{n}})^{2}} \right\} \notag\\ \cup \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \{k^2\pi^2I_n^2\} \cup \bigcup_{n=2}^\infty \bigcup_{k=1}^{\infty} \left\{ \frac{k^2\pi^2 I_n^2}{4} \right\} \notag \\ \end{eqnarray} \noindent Eigenvalues in these ten sets have the following respective multiplicities: \begin{enumerate} \item[1)] $1$; \item[2)] $2$~\text{if}~$\left( j_{1} \in \left\{ 2,3 \right\} \right)$~\text{and}\\ $0$~\text{otherwise}; \item[3)] $1$~\text{if}~$\left( j_{1}=3 \right)$~\text{and}\\ $0$~\text{otherwise}; \item[4)] $2^{n}$~\text{if}~$ \left( d_{n} \neq 0~\text{and}~(m-1)j_{n}+1 < w_{n} < mj_{n}-1 \right)$~\text{and}\\ $0$~\text{otherwise}; \item[5)] $2^{n-1}(j_{n}-2) I_{n-1} -2^{n} (1+ \lceil w_{n} \rceil -2m)$~\text{if}~$\left( (m-1)j_{n}+1 \leq w_{n} \leq mj_{n}-1 \right)$, \\ $2^{n-1}(j_{n}-2)I_{n-1}-m2^{n}(j_{n}-2)$~\text{if}~$ \left( mj_{n}-1 \leq w_{n} \leq mj_{n}+1 \right)$, ~\text{and} \\ $0$~\text{otherwise}; \item[6)] $2^{n-1}$~\text{if}~$\left( d_{n} \neq 0~\text{and}~mj_{n}-1 < w_{n} < mj_{n}+1 \right)$ ~\text{and}\\ $0$~\text{otherwise}; \item[7)] $2^{n-1}$~\text{if}~$\left( mj_{n}-1<w_{n} \leq mj_{n} \right)$ ~\text{and}\\ $0$~\text{otherwise}; 
\item[8)] $2^{n-1}$~\text{if}~$\left( mj_{n}-1< w_{n} < mj_{n} \right)$~\text{and}\\ $0$~\text{otherwise}; \item[9)] $2^{n-1}(I_{n-1}-1)-(m-1)2^{n}$~\text{if}~$\left( (m-1)j_{n}+1\leq w_{n} \leq mj_{n}-1 \right)$,\\ $2^{n-1}(I_{n-1}-1)-m2^{n}$~\text{if}~ $\left( mj_{n}-1<w_{n} \leq mj_{n}+1 \right)$, ~\text{and}\\ $0$~\text{otherwise}; \item[10)] $2^{n-2}(I_{n-1}-1)-(m-1)2^{n-1}$~\text{if}~$\left( (m-1)j_{n}+1\leq w_{n} \leq mj_{n}-1 \right)$, \\ $2^{n-2}(I_{n-1}-1)-m2^{n-1}$~\text{if}~$\left( mj_{n}-1<w_{n} \leq mj_{n}+1 \right)$, ~\text{and} \\ $0$~\text{otherwise}. \end{enumerate} \end{theorem} Theorem \ref{thrm: SWspectrum} is a consequence of Theorem \ref{SpectrumOrtho} and the following lemmas. \begin{lemma} \label{SWSpectrumF_{0}} For $n=0$, $\sigma(A_{0}|_{D_{0}^{\prime}})= \bigcup_{k=1}^\infty \left\{ 4 k^{2} \pi^{2} \right\}$ with multiplicity one for all $k$. \end{lemma} \begin{proof} We look for eigenfunctions of $A_{0}$ on $F_{0}$ in $\Dom(A_{0})$. The only functions that satisfy these restrictions are translations of $\left\{\sin(2 k \pi x ) \right\}~\forall k \in \mathbb{N}$ supported inside the square well. We immediately obtain the eigenvalues, each with multiplicity one. \end{proof} The set of eigenvalues in Lemma \ref{SWSpectrumF_{0}} comprises the first union in Theorem \ref{thrm: SWspectrum}. Now, we tackle the most complicated set, namely \begin{equation*} \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{ \frac{ k^{2} \pi^{2}}{(d_{n}+\frac{1}{I_{n}})^{2}} \right\} \end{equation*} with the understanding that the complete list of eigenvalues comes from an application of similar arguments in conjunction with Theorem \ref{SpectrumOrtho}. \begin{lemma} \label{SplitCrosses} The set of eigenvalues $\bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{ \frac{ k^{2} \pi^{2}}{(d_{n}+\frac{1}{I_{n}})^{2}} \right\}$ has multiplicity \begin{equation*} 2^{n-1}~\text{if}~\left( mj_{n}-1< w_{n} < mj_{n} \right)~\text{and}~0~\text{otherwise}~\forall~n\geq 2.
\end{equation*} \end{lemma} \begin{proof} In keeping with \cite{RS09} and \cite{ST08}, we can decompose each quantum graph approximation $F_{n}$ into V's, loops, and crosses, and then deduce from the orthogonality conditions required by Theorem \ref{SpectrumOrtho} that each shape found within the square well contributes once to the overall multiplicity. The eigenvalues in the lemma come from crosses that straddle the boundary and whose centers lie in the square well. In this case, the eigenfunctions in $\Dom(A_{n})$ must take opposite values along the two X's that comprise the cross so that the eigenfunction is determined by the value it takes on the upper X. Therefore, one solution is to construct eigenfunctions that assume the same value on the upper and lower parts of the X and which vanish outside the square well and vanish on the corners inside the square well, namely \begin{equation*} \sin\left( \frac{ k \pi } {d_{n} + \frac{1}{I_{n}}} (x-x_{0}) \right)~\text{for $x$ on the cross}~\text{and}~0~\text{otherwise}. \end{equation*} From this, we read off the associated eigenvalues \begin{equation*} \left\{ \frac{ k^{2} \pi^{2} }{\left( d_{n}+\frac{1}{I_{n}} \right)^{2}} \right\}~\forall~k\in \mathbb{N}. \end{equation*} Lastly, it is shown in Lemma \ref{CrossLemma} that the number of such split crosses in $F_{n}$ is \begin{equation*} 2^{n-1}~\text{if}~\left( mj_{n}-1< w_{n} < mj_{n} \right)~\text{and}~0~\text{otherwise}. \end{equation*} Since each split cross contributes one eigenvalue, we have the claimed multiplicity. \end{proof} The remaining portion of this section gives the counts and placements for the shapes which comprise $F_{n}$. Combining these results with arguments similar to the one in Lemma \ref{SplitCrosses} gives the eigenvalues and multiplicities in Theorem \ref{thrm: SWspectrum}.
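Before counting shapes, we record a quick consistency check of Theorem~\ref{thrm: SWspectrum} against the numerics of Section~\ref{sect:NM} (this is our own back-of-the-envelope computation, under the assumption that for $j_{1}=2$ the only interior node column of $F_{1}$ lies at $x=1/2$). In that case $w_{1}=I_{1}/4=1/2$ and \[ d_{1}=\frac{1}{2}-\frac{1}{4}=\frac{1}{4}, \qquad \frac{k^{2}\pi^{2}}{d_{1}^{2}}=16k^{2}\pi^{2}, \] so the second family in Theorem~\ref{thrm: SWspectrum} contributes $16\pi^{2}\approx 157.9$ with multiplicity $2$ (since $j_{1}=2$). Together with the $k=2$ member of the first family, $4\pi^{2}k^{2}=16\pi^{2}$ of multiplicity $1$, this accounts for the total multiplicity $3$ observed near $\lambda\approx 157$ in Table~\ref{table: SW}.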
We can derive from Lemmas 3.1, 3.2, and 3.3 in \cite{BDMS10} that V's occupy a total of two columns, loops occupy a total of $I_{n-1}(j_{n}-2)$ columns, and crosses occupy a total of $2(I_{n-1}-1)$ columns. Thus, since there are $I_{n}$ columns in the $F_{n}$ quantum graph approximation, \begin{equation} I_{n}=2+I_{n-1}(j_{n}-2)+2(I_{n-1}-1).\label{I_n short} \end{equation} Moreover, proposition 3.1 in \cite{BDMS10} implies that a V occupies the first column in $F_{n}$, loops occupy the next $j_{n}-2$ columns, and a cross occupies the next two columns. Loops and crosses continue to alternate across $F_{n}$---loops arising in clusters of $j_{n}-2$ and crosses covering two columns each. The last $j_{n}-1$ columns are occupied by loops and a V, respectively. Thus, we can expand equation \ref{I_n short} into \begin{equation} I_{n}=1+(j_{n}-2)+2+(j_{n}-2)+2+\cdots+2+(j_{n}-2)+1. \end{equation} \vspace{1mm} \begin{proposition} Let m be an integer. \begin{enumerate} \item[(a)] The column boundary of the V's is denoted by $[0, 1]$ and $[I_{n}-1, I_{n}]$.\\ \item[(b)] The column boundary of the $m^{th}$ set of loops in any row is denoted by $[(m-1)j_{n}+1$,\hspace{1.5mm} $mj_{n}-1]$, where $1 \leq m \leq I_{n-1}$.\\ \item[(c)] The column boundary of the $m^{th}$ cross in any row is denoted by $[mj_{n}-1$,\hspace{1.5mm} $mj_{n}+1]$, where $1 \leq m \leq I_{n-1}-1$ \end{enumerate} \label{cross and loop boundary} \end{proposition} \begin{proof} \begin{enumerate} \item[(a)] Since V's are found at the edges of a Laakso space, they occupy the first and last columns.\\ \item[(b)] We note that the left-most set of loops in $F_{n}$ has a column boundary of $[1$, $j_{n}-1]$. Suppose that the $m^{th}$ set of loops occupies the column $[(m-1)j_{n}+1$, $mj_{n}-1]$. We are going to show that the $(m-1)^{th}$ set of loops must occupy the column $[(m-2)j_{n}+1$, $(m-1)j_{n}-1]$. First, we subtract two from $(m-1)j_{n}+1$ to get the upper bound of the $(m-1)^{th}$ loop. 
Subtracting an additional $j_{n}-2$ gives us the lower bound. Thus, by induction, we have shown that the $m^{th}$ loop occupies the column $[(m-1)j_{n}+1$, $mj_{n}-1]$.\\ \item[(c)] We note that the left most cross in $F_{n}$ has a column boundary of $[j_{n}-1$, $j_{n}+1]$. Let's assume that the $m^{th}$ cross occupies the column $[mj_{n}-1$, $mj_{n}+1]$. We are going to show that the $(m-1)^{th}$ cross must occupy the column $[(m-1)j_{n}-1$, $(m-1)j_{n}+1]$. First, we subtract $j_{n}-2$ from $mj_{n}-1$ to get the upper bound of the $(m-1)^{th}$ cross. Subtracting an additional two columns gives us the lower bound. Thus, by induction, we have shown that the $m^{th}$ cross occupies the column $[mj_{n}-1$, $mj_{n}+1]$. \end{enumerate} \end{proof} \begin{lemma} \label{CrossLemma} Let $w_{n},m > 0$. \begin{enumerate} \item[(a)] When $(m-1)j_{n} < w_{n} \leq mj_{n}-1$, then there are \begin{equation} 2^{n-2}(I_{n-1}-1)-(m-1)2^{n-1} \label{crosses in SW1} \end{equation}\ \noindent full crosses on the interior of the square well. \item[(b)] When $mj_{n}-1 < w_{n} \leq mj_{n}$, there are \begin{equation} 2^{n-2}(I_{n-1}-1)-(m-1)2^{n-1} \label{crosses in SW2} \end{equation}\ \noindent full crosses on the interior of the square well and $2^{n-1}$ half crosses. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(a)] When $w_{n} \in ((m-1)j_{n}, mj_{n}-1]$, then the $(m-1)^{th}$ cross in any given row either straddles the wall of the square well or is outside the square well. There are $2^{n-2}$ rows of crosses in $F_{n}$, so we multiply this number by $2(m-1)$, which gives us the total number of crosses that are not in the square well. Finally, we subtract the resulting number from the total number of crosses in the graph, giving us the formula in \ref{crosses in SW1}. Note that if $x=\frac{1}{4}$ intersects the $(m-1)^{th}$ cross, then more than half of the cross will remain on the exterior of the square well. 
Thus, the square well will never contain a half-cross.\\ \item[(b)] When $w_{n} \in (mj_{n}-1, mj_{n}]$, then $x=\frac{1}{4}$ intersects the left half of the $m^{th}$ cross in any given row. Although the entire cross is not in the square well, the right half-cross is. So, there are $2^{n-2}(I_{n-1}-1)-m2^{n-1}$ intact crosses and $2^{n-1}$ half-crosses. \end{enumerate} \end{proof} \begin{lemma} Let $w_{n},m > 0$. \begin{enumerate} \item[(a)] When $(m-1)j_{n} < w_{n} \leq mj_{n}-1$, then there are \begin{equation} 2^{n-1}(j_{n}-2)(I_{n-1})-2^{n}(1+\lceil w_{n} \rceil -2m) \label{loops in SW1} \end{equation}\ \noindent loops on the interior of the square well. \item[(b)] When $mj_{n}-1 < w_{n} \leq mj_{n}$, there are \begin{equation} 2^{n-1}(j_{n}-2)(I_{n-1})-m2^{n}(j_{n}-2) \label{loops in SW2} \end{equation}\ \noindent loops on the interior of the square well. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(a)] Consider when $(m-1)j_{n} < w_{n} \leq mj_{n}-1$. In any given row, we note that $mj_{n}-1-\lceil w_{n} \rceil$ gives us the number of loops in the $m^{th}$ set of loops that falls to the right of $x=\frac{1}{4}$. We subtract this number from the total number of loops in that cluster, giving us $j_{n}-2-(mj_{n}-1-\lceil w_{n} \rceil)$. There are $(j_{n}-2)(m-1)$ remaining loops to the left of $x=\frac{1}{4}$, so we add these two terms together and multiply the sum by $2^{n-1}$ giving us $2^{n-1}(1+\lceil w_{n} \rceil-2m)$. Since we have only accounted for the total number of loops to the left or on $x=\frac{1}{4}$, we must multiply the previous value by two, giving us $2^{n}(1+\lceil w_{n} \rceil-2m)$. Finally, we subtract $2^{n}(1+\lceil w_{n} \rceil-2m)$ from the total number of loops in the graph, giving us \ref{loops in SW1}.\\ \item[(b)] When $mj_{n}-1 < w_{n} \leq mj_{n}$, then the number of loops on the exterior of the infinite square well is the same as the number of loops on the exterior when $w_{n}=mj_{n}-1$.
Part a implies that when $w_{n}=mj_{n}-1$, then the number of loops on the interior of the square well is $2^{n-1}(j_{n}-2)(I_{n-1})-m2^{n}(j_{n}-2)$. \end{enumerate} \end{proof} \subsection{Coulomb and Parabolic Potentials} \label{subsect: CP} The next two potentials we wish to discuss are the Coulomb Potential and the Parabolic Potential. The Hamiltonian with Coulomb potential is $H_{C}=\Delta + V(x)$, where $V(x)$ is given by \begin{equation} V(x) = \frac{-1}{(x-\frac{1}{2})^2} + \frac{1}{4}. \end{equation} \noindent In Section \ref{sect:NM}, we provide the bottom of $H_{C}$'s spectrum (Table \ref{table: CP}) and a graph of the eigenfunction corresponding to the smallest eigenvalue (Figure \ref{fig: CP}). As expected, the eigenfunction is zero at $x = \frac{1}{2}$. The Hamiltonian with parabolic potential is $H_{P}=\Delta + V(x)$, with $V(x)$ defined as \begin{equation} V(x) = \frac{1}{x(1-x)}. \end{equation} \noindent In Section \ref{sect:NM}, we provide the bottom of $H_{P}$'s spectrum (Table \ref{table: CP}) and a graph of the eigenfunction corresponding to the smallest eigenvalue (Figure \ref{fig: CP}). As expected, the eigenfunction is zero at $x = 0$ and $x=1$. In general there is no method for calculating closed-form solutions of a linear second-order differential equation. However, by assuming a Taylor expansion for such a solution in the presence of a locally linear potential, one can see that the eigenvalues depend not only on the constant term but also on the first-order term in the potential. \subsection{Numerical Methods} \label{sect:NM} We modified the MatLab script used in \cite{RS09} and \cite{ST08} to calculate the eigenvalues of the Hamiltonian with certain potentials of classical interest. In all three cases, $V(x)$ was represented by a diagonal matrix with large finite cut-offs to approximate infinity.
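As a minimal illustration of this kind of discretization (our own Python sketch, not the MatLab script of \cite{RS09}; the grid size, the second-order stencil, and the sample potentials are our choices), one can diagonalize a tridiagonal approximation of $-d^{2}/dx^{2}+V(x)$ on $(0,1)$ directly:

```python
import numpy as np

def dirichlet_eigs(V, n=1000, k=3):
    """Lowest k eigenvalues of -d^2/dx^2 + V(x) on (0,1) with
    Dirichlet boundary conditions; V(x) enters only through a
    diagonal matrix, as in the approach described above."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)              # interior grid points
    H = (np.diag(2.0 / h**2 + V(x))           # stencil diagonal + potential
         + np.diag(-np.ones(n - 1) / h**2, 1)
         + np.diag(-np.ones(n - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:k]

# Sanity check with V = 0: eigenvalues should approach (k*pi)^2.
free = dirichlet_eigs(lambda x: 0.0 * x)
# Parabolic potential V(x) = 1/(x(1-x)) from the subsection above.
parab = dirichlet_eigs(lambda x: 1.0 / (x * (1 - x)))
```

With $V=0$ the lowest computed value is very close to $\pi^{2}\approx 9.87$, and for the parabolic potential the lowest value should land near the $14.7$ reported in Table \ref{table: CP}.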
\begin{table}[t] \small \begin{center} \begin{tabular} {| c | c | c | c | c |} \hline \multicolumn{5}{|c|}{$H_{SW}$ Spectrum} \\ \hline \multicolumn{2}{|c|}{Bound: $10^{15}$} & \multicolumn{2}{|c|}{Bound: $10^{15}$} & Expected\\ \hline \multicolumn{2}{|c|}{$n = 6$} & \multicolumn{2}{|c|}{$n = 8$} & \\ \hline $\lambda$ & m & $\lambda$ & m & \\ \hline 38.1& 1 & 39 &1 & $(2\pi)^2$ = 39.48\\ \hline 88.8 & 1 & 89 & 1 & $(3\pi)^2$ = 88.83\\ \hline 152.2 & 3 & 157 & 3& $(4\pi)^2$ = 157.91 \\ \hline 342.3 & 1 & & & \\ \hline 355.1 & 9 & 353 & 10& $(6\pi)^2$ = 355.31\\ \hline 608.2 & 3 & 628 & 3& $(8\pi)^2$ = 631.65 \\ \hline 798.31 & 1 & 799 & 1& $(9\pi)^2$ = 799.44\\ \hline 949.8 & 1 & 981 & 1& $(10\pi)^2$ = 986.96\\ \hline 1272.2 & 4 & && \\ \hline 1366.7 & 3&1395.4& 4 &\\ \hline 1417.6 & 21&1412&24& $(12\pi)^2$ = 1421.22 \\ \hline 1858.8 & 1 &1922&1& $(14\pi)^2$ = 1934.44\\ \hline 2211.9 & 1& 2220&1&$(15\pi)^2$ = 2220.7 \\ \hline 2425.0 & 3 &2511&3& $(16\pi)^2$ = 2526.62\\ \hline 3065.6 & 1&3178&1&\\ \hline 3179.5 & 29&3197&29&$(18\pi)^2 = 3197.8$\\ \hline 3779.8 & 3 &3923&3& $(20\pi)^2$ = 3947.84\\ \hline 4318.8 &1&4253&1 & $(21\pi)^2 = 4352.50$\\ \hline 4567 & 1 &4746&1& $(22\pi)^2$ = 4776.89 \\ \hline 5055.9 & 4 & 5580&4&$(24\pi)^2 = 5684.89$\\ \hline \end{tabular} \end{center} \caption{The first 20 eigenvalues computed for $H_{SW}$ and $j_{n} = [2,3,2,3...]$.} \label{table: SW} \end{table} In Table \ref{table: SW}, we compare the MatLab calculations for $H_{SW}$ with eigenvalues found in Theorem \ref{thrm: SWspectrum}. These columns match for most values, but MatLab gives additional quantities as well. However, as the finite cut-off grows, these extraneous eigenvalues are lost, allowing for the MatLab calculations to coincide with the predicted spectrum. 
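The Expected column of the table is simply $(k\pi)^{2}$ rounded to two decimals for the indicated $k$; a quick arithmetic check of the printed values:

```python
import math

# (k, printed "Expected" value) pairs from the two-decimal rows of the table
expected = [(2, 39.48), (3, 88.83), (4, 157.91), (6, 355.31),
            (8, 631.65), (9, 799.44), (10, 986.96), (12, 1421.22),
            (14, 1934.44), (16, 2526.62), (20, 3947.84),
            (22, 4776.89), (24, 5684.89)]
computed = [round((k * math.pi) ** 2, 2) for k, _ in expected]
```

Each entry of `computed` reproduces the corresponding printed value.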
\begin{table}[t] \small \begin{center} \begin{tabular} {| c | c | c | c | } \hline \multicolumn{2}{|c|}{$H_{C}$ Spectrum} & \multicolumn{2}{|c|}{$H_{P}$ Spectrum}\\ \hline \multicolumn{2}{|c|}{Bound: $-10^{15}$} & \multicolumn{2}{|c|}{Bound: $10^{15}$}\\ \hline $\lambda$ & m & $\lambda$ & m \\ \hline -1391.7& 16 &14.7& 1\\ \hline -.6 & 4 & 45.7&3\\ \hline 64.9& 4 &92.9&1\\ \hline 87.8 & 4&95.8&1\\ \hline 227.5 & 4&165.4& 3\\ \hline 318.8 & 4&254.7&1\\ \hline 328.4 & 4&359.1&2\\ \hline 342.3 & 4&359.2&3\\ \hline 349.8 & 4&359.6&4\\ \hline 354.6& 8&360.5&4\\ \hline 478.9 & 4&362.5&4\\ \hline 796.4 & 4&363.5&3\\ \hline 816.5 & 4&370.7&4\\ \hline 1238.4& 4&491.9&1\\ \hline 1354.6 & 4& 639.9&3\\ \hline 1376.5 & 4&802.4&1 \\ \hline 1398.2 & 4 &807.5&1 \\ \hline 1404.0 & 4 &994.6&3 \\ \hline 1409.8 & 4 &1201.3&1\\ \hline 1412.1 & 4 &1421.7&6 \\ \hline \end{tabular} \end{center} \caption{The first 20 eigenvalues for $H_{C}$ and $H_{P}$ and $j_{n} = [2,3,2,3,2,3]$} \label{table: CP} \end{table} In Table \ref{table: CP}, we give the 20 eigenvalues closest to zero for $H_{C}$ and $H_{P}$. As expected, we have some negative values in the spectrum of $H_{C}$. For $H_{C}$, we approximated $V(x)$ at $x = \frac{1}{2}$ to be $-10^{15}$, and for $H_{P}$ we approximated $V(x)$ at $x = 0$ and $x = 1$ to be $10^{15}$. \begin{figure}[htbp] \begin{center} \includegraphics[scale=.42] {HCeig1} \includegraphics[scale=.42]{HPeig1} $H_{C} \hspace{6.5cm}$ $H_{P}$ \end{center} \caption[]{$H_{C}$ and $H_{P}$ with $j_{n}$ = [2,3,2,3]: Eigenfunction 1} \label{fig: CP} \end{figure} In Figure \ref{fig: CP}, we see the first eigenfunction of $H_{C}$ and $H_{P}$, respectively. Notice the position of zero for these eigenfunctions. For $H_{C}$, the eigenfunction is zero at $x = \frac{1}{2}$, and for $H_{P}$, it is zero at $x = 0$ and $x = 1$. This is exactly where the potentials are infinite. 
\begin{figure}[htbp] \begin{center} \includegraphics[scale=.5] {C232323-30-1} \includegraphics[scale=.5]{PW232323-15-1} $H_{C} \hspace{6.5cm}$ $H_{P}$ \end{center} \caption{$H_{C}$ and $H_{P}$ with $j_{n}$ = [2,3,2,3]: Eigenfunction 30 and 15 respectively} \end{figure} \section{Electromagnetic Fields and Conducting Plates in Laakso Spaces}\label{sect: Casimir} One of the simplest and most frequently mentioned cases in Quantum Mechanics is the harmonic oscillator, which we conceptualize as a block attached to a massless spring of some fixed equilibrium length \cite{Griffiths2005}. To describe the mechanics of such a system, we introduce a conserved Hamiltonian \begin{equation} H=E_{Kinetic}~+~E_{Potential}= \frac{1}{2} \left( \frac{p^{2}}{m}+m \omega^{2} q^{2} \right), \end{equation} where \begin{align*} m= &\textit{ mass ~of ~block ~attached ~to~ spring,} \\ q=& \textit{ displacement~ of ~block ~from ~equilibrium,} \\ p=& \textit{ momentum ~of~ block,} \\ \omega=& \textit{constant involving spring constant and mass}. \end{align*} \vspace{2mm} \noindent We can also consider the Hamiltonian and its components as operators: \begin{align*} p[f]= &-i\hbar \frac{d}{dx}f \\ q[f]=&qf \\ H[f]=&\frac{1}{2} \left( \frac{p^{2}}{m}+m\omega^{2} q^{2} \right)[f]=\frac{1}{2} \left(\frac{-\hbar^{2}}{m} \frac{d^{2}}{dx^{2}}f+ m \omega^{2}q^{2}f \right). 
\end{align*} The importance of this configuration becomes apparent in \cite{Bog1980} where the quantized Hamiltonian of the electromagnetic field is shown to be equivalent to the Hamiltonian of a set of independent harmonic oscillators, \begin{equation} H_{EM}=\frac{1}{2} \sum_{k,s} \left( \frac{ p_{k,s}^{2}(t) } {m} +m \omega_{k}^{2} q_{k,s}^{2}(t) \right), \end{equation} where $p_{k,s}(t)$ and $ q_{k,s} (t)$ are time-dependent functions of the electromagnetic field, $\left\{ \omega_{k} \right\} = \left\{ \sqrt{ \lambda}~ | ~\lambda \in \sigma{(\Delta)} \right\}$, \emph{k} is the wave vector of the radiation, and \emph{s} is the degree of freedom resulting from the different possible polarizations. Given this equivalence, one should not be surprised to find that, just as an isolated oscillator assumes a discrete set of allowable energies given by \begin{equation*} \left\{ \left.\hbar \omega \left( n+\frac{1}{2} \right) \right | n \in \mathbb{N} \cup{\left\{ 0 \right\}} \right\}, \end{equation*} the energies for the quantized electromagnetic field are \begin{equation*} \left\{ \left. \sum_{j} \hbar \omega_{j} \left(n_{j}+\frac{1}{2} \right) \right | n_{j} \in \mathbb{N}\cup{\left\{0 \right\}}~\forall j \right\}. \end{equation*} Similarly, the respective zero-point energies are \begin{eqnarray} E_{0}=&\left. \hbar \omega (n+\frac{1}{2}) \right |_{n=0} &=\frac{\hbar \omega}{2} \\ \label{Zero-Point} E_{0}=&\left. \sum_{j} \hbar \omega_{j} (n_{j}+\frac{1}{2}) \right |_{(n_{1},n_{2}, \dots)=0}&=\frac{ \hbar} {2} \sum_{n} \omega_{n}, \end{eqnarray} where the sum in Equation \ref{Zero-Point} is over $\left\{ \omega_{n} \right\} = \left\{ \sqrt{ \lambda}~ | ~\lambda \in \sigma{(\Delta)} \right\}$ \cite{Berkolaiko2009}. An immediate consequence of this equation is that free space has a non-zero minimum energy density. Another perhaps less obvious result is derived in a 1948 paper by H.B.G. 
Casimir, which claims that under appropriate boundary conditions, two uncharged conducting plates should experience a mutual force of attraction which varies with the inverse fourth power of their separation distance \cite{Casimir1948}. Specifically, \begin{equation} \left| F_{C} \right|=\frac{ \pi^{2} c \hbar}{240 a^{4}}. \end{equation} In the years following Casimir's original paper, experimentalists have made numerous attempts to verify an attractive force between two uncharged conducting plates. In 1957, M. J. Sparnaay found Casimir's expected value to be within his experimental margin of error in \cite{Sparnaay1957}, while in 2002 a research team at the University of Padua experimentally confirmed this formula in \cite{Bressi2002}. Here, we derive a general formula for the Casimir force in a fractal setting. Fix the following configuration: two uncharged conducting plates are symmetrically placed in a Laakso space and attached to all nodes that intersect the conductors in quantum graph approximations. To ease the calculation, we only consider conducting plates which attach to nodes in $F_{1}$ and require that eigenfunctions of the Laplacian satisfy Dirichlet conditions at conducting nodes and Kirchhoff conditions at non-conducting nodes. At non-conducting nodes of degree one, this last requirement is equivalent to the Neumann condition that requires nodal derivatives to vanish. Moreover, these two restrictions preserve the self-adjointness of the Laplacian operator and incorporate appropriate boundary conditions at the plate by analogy to the laws of classical electrostatics. Most importantly, the two conducting plates within the Laakso space are allowed to move as long as the symmetry between the two plates is preserved and the Laakso space is appropriately distorted.
If the plates move towards one another, intervals in the region between the plates of the quantum graph approximation are compressed, and intervals in the two exterior regions are stretched. Conversely, if the plates move apart, intervals in the interior region are stretched, and intervals in the exterior regions are compressed. \begin{definition} \begin{enumerate} \setlength{\itemsep}{2mm} \item The $j=N$ Laakso space has $j_{n}=N \hspace{1mm} ~\forall n$. \item $X_{0} =$ the distance between a conducting plate and the center of the Laakso space. \item $Z = $ the number of nodes lying between the two conducting plates in the $F_{1}$ quantum graph. \end{enumerate} \end{definition} \vspace{2mm} \begin{lemma} Let two uncharged conducting plates be symmetrically attached to nodes in the $F_{1}$ quantum graph approximation so that $Z$ nodes in $F_{1}$ lie between them. Then \begin{enumerate} \item The number of cells that lie between the plates in $F_{n}$ is $1$ if $n=0$ and $\frac{Z+1}{N}2^{n}I_{n}$ if $n \geq 1$. The number of exterior cells is $2$ if $n=0$, and $(1-\frac{Z+1}{N})2^{n}I_{n}$ if $n \geq 1$.\\ \item The number of interior loops in $F_{n}$ is $Z+1$ if $n=1$, and $\frac{Z+1}{N} 2^{n-1}I_{n-1}(j_{n}-2)$ if $n \geq 2$. The number of exterior loops in $F_{n}$ is $N-Z-3$ if $n=1$, and $(1-\frac{Z+1}{N})(2^{n-1}I_{n-1})(j_{n}-2)$ if $n \geq 2$.\\ \item All V's are located in the exterior region, and there are $2^{n}$ of them in $F_{n}$ for $n \geq 1$. \\ \item The number of interior crosses in $F_{n}$ is $\frac{Z+1}{N}2^{n-2}I_{n-1}$ $\forall n \geq 2$, while the number of exterior crosses is $2^{n-2}[(1-\frac{Z+1}{N})I_{n-1}-1]$ $\forall n \geq 2$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item \emph{Interior Region}. By the self-similarity of the $j=N$ Laakso space, we know that $\forall n \in \mathbb{N}\cup \left\{0 \right\}$ and $\forall k: 0 \leq k<N$, the number of cells with $x$-coordinate in $[k/N,(k+1)/N]$ is the same for all $k$.
We have already shown that the $F_{n}$ graph approximation contains $2^{n}I_{n}$ cells and see that $Z+1$ almost disjoint intervals of the form $[k/N,(k+1)/N]$ comprise the region between the conducting plates. Therefore, since each region has $2^{n}I_{n}/N$ cells in $F_{n}$ and $Z+1$ regions lie between the plates, $\frac{Z+1}{N} 2^{n} I_{n}$ cells lie between the conductors in $F_{n}$.\\ \indent \emph{Exterior Region}. Subtracting the number of interior cells in $F_{n}$ from the total yields the number of exterior cells: $(1-\frac{Z+1}{N})2^{n}I_{n}$. \\ \item \emph{Interior Region}. At the first level of construction, we have $Z+1$ loops. At the $(n-1)^{th}$ stage of construction, we have $\frac{Z+1}{N}2^{n-1}I_{n-1}$ cells in the interior region. Since every cell at the $(n-1)^{th}$ stage will yield exactly $j_{n}-2$ loops at level $n$, we compute that $\frac{Z+1}{N} 2^{n-1}I_{n-1}(j_{n}-2)$ loops lie in the interior region of $F_{n}$ for $n \geq 2$.\\ \indent \emph{Exterior Region}. Since $F_{1}$ has a total of $N-2$ loops and the interior region has only $Z+1$ of those loops, we are left with $N-Z-3$ loops in the exterior region for $n=1$. Since $(1-\frac{Z+1}{N})(2^{n-1}I_{n-1})$ cells inhabit the exterior region of $F_{n-1}$, we have by the same argument as before that $(1-\frac{Z+1}{N})(2^{n-1}I_{n-1})(j_{n}-2)$ loops are to be found in $F_{n}$ whenever $n \geq 2$. \\ \item This follows immediately from the Barlow-Evans construction of Laakso spaces. \\ \item \emph{Interior Region}. Since all V's are located in the exterior region, the only possible structures in the interior region are loops and crosses. Moreover, we know the total number of loops and the total number of cells in the interior region for any value of $n$, so letting $X_{n}=$ (number of interior $F_{n}$ cells in crosses) yields \begin{equation*} \frac{Z+1}{N}2^{n}I_{n-1}(j_{n}-2)~+~X_{n}=\frac{Z+1}{N}2^{n}I_{n} \end{equation*} \begin{equation*} \implies X_{n}=\frac{Z+1}{N}2^{n+1}I_{n-1}.
\end{equation*} Since every cross is comprised of exactly 8 cells, we simply divide by 8 to calculate the total number of crosses in the interior region: \begin{equation*} \frac{Z+1}{N}2^{n-2}I_{n-1} ~\forall n \geq 2. \end{equation*} The total counts two disjoint half-crosses as one whole cross.\\ \indent \emph{Exterior Region}. Every exterior cell in $F_{n}$ forms either part of a cross, loop, or V. Therefore, to find $X_{n}$=(number of exterior $F_{n}$ cells in crosses), we simply solve \begin{equation*} 2^{n+1}+\left\{ 1-\frac{Z+1}{N} \right\} 2^{n}I_{n-1}(j_{n}-2) +X_{n}= \left\{ 1-\frac{Z+1}{N} \right\} 2^{n}I_n \end{equation*} \begin{equation*} \implies X_{n}= 2^{n+1} \left\{ \left\{ 1-\frac{Z+1}{N} \right\} I_{n-1}-1 \right\}. \end{equation*} We divide the result by 8 to obtain the total number of exterior crosses: \begin{equation} 2^{n-2}\left\{ \left\{ 1-\frac{Z+1}{N} \right\} I_{n-1}-1 \right\} ~ \forall n \geq 2. \end{equation} \end{enumerate} \end{proof} When counting the total number of crosses in the interior and exterior regions, we also treat two disjoint half-crosses as one whole cross. \begin{proposition} The total number of crosses centered along the two conducting plates in $F_{n}$ is $2^{n-1} ~\forall n \geq 2$. \end{proposition} \begin{proof} In $F_{1}$ we have two conducting nodes. Since each of these two nodes is degree four, we will have at the second level of construction two crosses centered over the conductors. Each of these new crosses will contribute two additional conducting nodes, so that four degree four nodes lie across the conductors in $F_{2}$. By induction, we arrive at a general formula for the total number of crosses centered along the two conducting plates in $F_{n}$: $2^{n-1} ~\forall n \geq 2$. \end{proof} \begin{corollary} There are $2^{n-1}$ half-crosses lying along the conducting plates in each of the interior and exterior regions. 
\end{corollary} \begin{proof} By symmetry, we may deduce that the number of half-crosses along the conductors in the exterior region is always equal to the number of half-crosses in the interior region. \end{proof} \begin{lemma} If conducting plates are attached to two symmetric nodes of a $j=N$ Laakso space so that $Z$ nodes lie between the plates in $F_{1}$, and the plates are moved to a distance $X_{0}$ from the center, allowing interior and exterior regions of the Laakso space to expand and contract, then $~\forall n \geq 1$, the interior and exterior cell lengths in $F_{n}$ will be functions of $X_{0}$ given by $\frac{2NX_{0}}{Z+1}I_{n}^{-1}$ and $\frac{1-2X_{0}}{1-\frac{Z+1}{N}}I_{n}^{-1}$, respectively. \end{lemma} \begin{proof} Each cell in the unperturbed $F_{1}$ quantum graph approximation has length $1/N$. If we move the conducting plates to a distance $X_{0}$ from the center, every $F_{1}$ cell in the interior region will have length $\frac{2X_{0}}{Z+1}$. Therefore, the scaling factor is $\frac{2X_{0}N}{Z+1}$. Moreover, every $F_{1}$ cell in the exterior region will have length $\frac{1-2X_{0}}{N-(Z+1)}$ so that the scaling factor is $\frac{1-2X_{0}}{1-\frac{Z+1}{N}}$. Lastly, it is clear from the self-similarity of $j=N$ Laakso spaces that interior and exterior cells from more detailed quantum graph approximations will also be scaled by the same ratios. \end{proof} \begin{corollary} The interior and exterior metric diameters in $F_{n}$ $~\forall n \geq 1$ will be functions of $X_{0}$ given by $\frac{Z+1}{2NX_{0}}I_{n}$ and $\frac{1-\frac{Z+1}{N}}{1-2X_{0}}I_{n}$, respectively. \end{corollary} Cells in $F_{0}$ require separate treatment. \begin{proposition} The interior region in $F_{0}$ has metric diameter $\frac{1}{2X_{0}}$, while each of the two exterior regions has metric diameter $\frac{2}{1-2X_{0}}$. \end{proposition} \begin{proof} The metric diameter of a cell is the multiplicative inverse of cell length.
\end{proof} To demonstrate that the Laplacian operator on our modified space is self-adjoint, it suffices to prove the result on every quantum graph approximation and check for mutual compatibility. \begin{definition} \label{Bdef} The differential operator $B_{n}$ acts on $F_{n}$ by \begin{equation*} B_{n}[f]=-\frac{d^{2}}{dx_{e}^{2}} f \end{equation*} along each edge $e$ with $\Dom(B_{n})=\left\{ \right.$continuous $f \in H^{2}(e)~\forall e$ with Dirichlet vertex conditions at each conducting node and Kirchhoff vertex conditions at each non-conducting node$\left. \right\}$. \end{definition} \begin{theorem} $B_{n}$ is a self-adjoint operator on $F_{n}$. \end{theorem} \begin{proof} Let $v$ be a vertex of degree $d$ in $F_{n}$ and $f$ a function on $F_{n}$. Define $F_{v}=(f_{1}(v),f_{2}(v),...,f_{d}(v))^{T}$ and $F^{\prime}_{v}=(f_{1}^{\prime}(v),f_{2}^{\prime}(v),...,f_{d}^{\prime}(v))^{T}$. By Theorem 3 in \cite{Kuchment2008}, to prove self-adjointness it suffices to find for each vertex in a quantum graph approximation matrices $C_{v}$ and $D_{v}$ such that $C_{v}F_{v}+D_{v}F^{\prime}_{v}=0$ and the matrix $(C_{v}~D_{v})$ has maximal rank. Since this condition is local and both Dirichlet and Neumann vertex conditions produce self-adjoint operators on a quantum graph, the theorem follows. \end{proof} \begin{theorem} \label{Spectrum} Let $B_{n}$ be the operator in Definition \ref{Bdef}, and let $\Delta$ be the minimal self-adjoint extension of the sequence $\left\{ B_{n} \right\}$.
Then \begin{eqnarray*} \sigma(\Delta)=&& \bigcup_{k=1}^\infty \left\{\left[ \frac{ k \pi}{2X_{0}} \right]^{2}\right\} \cup \bigcup_{k=0}^\infty \left\{\left[ \frac{ (k+1/2) \pi}{(1-2X_{0})/2} \right]^{2}\right\}\\ &\cup& \bigcup_{k=0}^\infty \left\{\left[(k+1/2) \pi \frac{N-(Z+1)}{1-2X_{0}} \right]^{2}\right\}\cup \bigcup_{k=1}^\infty \left\{\left[k \pi \frac{N-(Z+1)}{1-2X_{0}} \right]^{2}\right\}\\ &\cup& \bigcup_{k=1}^\infty \left\{\left[ k \pi \frac{Z+1}{2X_{0}} \right]^{2}\right\} \cup \bigcup_{n=2}^{\infty} \bigcup_{k=0}^\infty \left\{\left[ I_{n}(k+1/2) \pi \frac{1-\frac{Z+1}{N}}{1-2X_{0}} \right]^{2}\right\} \\ &\cup& \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{\left[I_{n} k \pi \frac{1-\frac{Z+1}{N}}{1-2X_{0}} \right]^{2}\right\} \cup \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{\left[ I_{n} k \pi \frac{1-\frac{Z+1}{N}}{2(1-2X_{0})} \right]^{2}\right\} \\ &\cup& \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{\left[I_{n}k \pi \frac{Z+1}{2NX_{0}} \right]^{2}\right\} \cup \bigcup_{n=2}^{\infty} \bigcup_{k=1}^\infty \left\{\left[ I_{n}k \pi \frac{Z+1}{4NX_{0}} \right]^{2}\right\}. \end{eqnarray*} \noindent Eigenvalues in these ten sets have the following respective multiplicities: \begin{enumerate} \item[1)] $1$; \item[2)] $2$; \item[3)] $2$; \item[4)] $N-Z-3$; \item[5)] $Z+1$; \item[6)] $2^{n}$; \item[7)] $(1-\frac{Z+1}{N})I_{n-1}2^{n-1}(N-2)+2^{n-1}(1-\frac{Z+1}{N})I_{n-1}$; \item[8)] $2^{n-2}[(1-\frac{Z+1}{N})I_{n-1}-1]-2^{n-2}$; \item[9)] $\frac{Z+1}{N}I_{n-1}2^{n-1}(N-2)+2^{n-1}\frac{Z+1}{N}I_{n-1}+2^{n-1}$; \item[10)] $2^{n-2}[\frac{Z+1}{N}I_{n-1}-1]$. \end{enumerate} \end{theorem} Using Theorem \ref{SpectrumOrtho}, we break the proof of Theorem \ref{Spectrum} into separate lemmas.
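The closed-form spectrum above is straightforward to enumerate. The sketch below is our own; it assumes $I_{n}=N^{n}$ for the $j=N$ Laakso space and truncates each of the ten families, so it is only a finite sample of the spectrum:

```python
import math

def laakso_eigs(N, Z, X0, K=20, NMAX=4):
    """Truncated eigenvalue list from the ten families in the theorem
    above.  Assumes I_n = N**n (j = N Laakso space); a and b denote
    the interior width 2*X0 and the exterior width 1 - 2*X0."""
    pi, a, b = math.pi, 2 * X0, 1 - 2 * X0
    c = 1 - (Z + 1) / N
    vals = [(k * pi / a) ** 2 for k in range(1, K)]
    vals += [((k + 0.5) * pi / (b / 2)) ** 2 for k in range(K)]
    vals += [((k + 0.5) * pi * (N - Z - 1) / b) ** 2 for k in range(K)]
    vals += [(k * pi * (N - Z - 1) / b) ** 2 for k in range(1, K)]
    vals += [(k * pi * (Z + 1) / a) ** 2 for k in range(1, K)]
    for n in range(2, NMAX + 1):
        I = N ** n
        vals += [(I * (k + 0.5) * pi * c / b) ** 2 for k in range(K)]
        for k in range(1, K):
            vals += [(I * k * pi * c / b) ** 2,
                     (I * k * pi * c / (2 * b)) ** 2,
                     (I * k * pi * (Z + 1) / (N * a)) ** 2,
                     (I * k * pi * (Z + 1) / (2 * N * a)) ** 2]
    return sorted(vals)
```

For instance, with $N=4$, $Z=1$, and $X_{0}=0.2$, the smallest eigenvalue is $[\pi/(1-2X_{0})]^{2}\approx 27.4$, contributed by the $F_{0}$ exterior family.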
\begin{lemma} \label{SpectrumF_{0}} For $F_{0}, \sigma(B_{0}|_{D_{0}^{\prime}})= \bigcup_{k=0}^\infty \left\{ [\frac{ (k+1/2) \pi}{(1-2X_{0})/2}]^{2} \right\}\cup \bigcup_{k=1}^\infty \left\{ [\frac{ k \pi}{2X_{0}}]^{2} \right\}$ with multiplicities $2$ and $1$, respectively. \end{lemma} \begin{proof} We look for eigenfunctions of $B_{0}$ on $F_{0}$ in $\Dom(B_{0})$. The only functions that satisfy these restrictions are $\left\{ \cos(2(k+1/2)\pi x /(1-2X_{0})) \right\}$ on the two exterior regions and $\left\{\sin(k\pi x /(2X_{0})) \right\}$ in the interior region. We immediately obtain the eigenvalues. \end{proof} \begin{lemma} \label{SpectrumF_{1}} For $F_{1}$, \begin{eqnarray*} \sigma(B_{1}|_{D_{1}^{\prime}})=&& \bigcup_{k=0}^\infty \left\{ [(k+1/2) \pi (N-(Z+1))/(1-2X_{0}) ]^{2} \right\} \\ &\cup& \bigcup_{k=1}^\infty \left\{ [k \pi (N-(Z+1))/(1-2X_{0})]^{2} \right\}\\ &\cup& \bigcup_{k=1}^\infty \left\{ [k \pi (Z+1)/(2X_{0})]^{2} \right\} \end{eqnarray*} Eigenvalues in these three sets have the following respective multiplicities: \begin{enumerate} \item[1)] $2$; \item[2)] $N-Z-3$; \item[3)] $Z+1$. \end{enumerate} \end{lemma} \begin{proof} The only eigenfunctions on $F_{1}$ that are orthogonal to the pullback of functions in $D_{0}^{\prime}$ are given by functions that take opposite values in the two copies of $F_{0}$ that are glued together. For instance, the eigenfunctions on the V's must be those whose values on the top branch equal the negative values for the function on the bottom branch so that the function defined on the top branch determines the value on the lower branch. Looking for eigenfunctions with Neumann conditions and Dirichlet boundary conditions at opposite ends of an interval of length $(1-2X_{0})/(N-(Z+1))$ gives $\left\{\cos((k+1/2)\pi x (N-(Z+1))/(1-2X_{0})) \right\}$. 
Similarly, the only eigenfunctions for loops on $F_{1}$ that meet the orthogonality restrictions are those whose values on the top branch equal the negative values for the function on the bottom branch. Looking for eigenfunctions of $B_{1}$ that have Dirichlet boundary conditions at the ends of an interval of length $(1-2X_{0})/(N-(Z+1))$ and $(2X_{0})/ (Z+1)$ gives $\left\{\sin(k \pi x (N-(Z+1))/(1-2X_{0})) \right\}$ and $\left\{\sin(k \pi x (Z+1)/(2X_{0})) \right\}$, respectively. Since $F_{1}$ has two V's, $N-Z-3$ loops in the exterior region, and $Z+1$ loops in the interior, we have the corresponding multiplicities. \end{proof} \begin{lemma} \label{SpectrumF_{n}} For $F_{n} : n \geq 2$, \begin{eqnarray*} \sigma(B_{n}|_{D_{n}^{\prime}}) = && \bigcup_{k=0}^\infty \left\{ [I_{n}(k+1/2) \pi (1-(Z+1)/N)/(1-2X_{0}) ]^{2} \right\}\\ &\cup& \bigcup_{k=1}^\infty \left\{ [I_{n} k \pi (1-(Z+1)/N)/(1-2X_{0})]^{2} \right\} \\ &\cup& \bigcup_{k=1}^\infty \left\{ [I_{n} k \pi (1-(Z+1)/N)/(2(1-2X_{0}))]^{2} \right\} \\ &\cup& \bigcup_{k=1}^\infty \left\{ [I_{n}k \pi (Z+1)/(2NX_{0})]^{2} \right\} \\ &\cup& \bigcup_{k=1}^\infty \left\{ [I_{n}k \pi (Z+1)/(4NX_{0})]^{2} \right\} \end{eqnarray*} Eigenvalues in these five sets have the following respective multiplicities: \begin{enumerate} \item[1)] $2^{n}$; \item[2)] $(1-\frac{Z+1}{N})2^{n-1} I_{n-1}(N-2)+2^{n-1}(1-\frac{Z+1}{N})I_{n-1}$; \item[3)] $2^{n-2}[(1-\frac{Z+1}{N})I_{n-1}-1]-2^{n-2}$; \item[4)] $\frac{Z+1}{N}2^{n-1}I_{n-1} (N-2)+\frac{Z+1}{N} 2^{n-1} I_{n-1}+2^{n-1}$; \item[5)] $\frac{Z+1}{N} 2^{n-2}I_{n-1}-2^{n-2}$. \end{enumerate} \end{lemma} \begin{proof} \noindent \begin{enumerate} \item \emph{V's}. In keeping with the procedure introduced in \cite{RS09} and \cite{ST08}, we look for eigenfunctions of $B_{n}$ on $F_{n}$ in $\Dom(B_{n})$ which are orthogonal to the pullback of functions in $\Dom(B_{i})$ $\forall i < n$. Consider the V's in $F_{n}$.
The orthogonality condition requires---as before---that the value of the eigenfunction on the top branch of a V equal the negative of the value on the lower branch so that the function on one branch, which vanishes at the junction of the V, fully determines the eigenfunction in the spectrum. By the definition of $\Dom(B_{n})$, the function must have Neumann boundary conditions at the two degree one nodes and Dirichlet boundary conditions at the degree two node. Therefore, a basis for the eigenfunctions of $B_{n}$ on each V of $F_{n}$ in $\Dom(B_{n})$ and orthogonal to the pullback of functions in $\Dom(B_{i}) ~\forall i<n$ is given by $ \left\{\cos(I_{n} (k+1/2) \pi x (1-(Z+1)/N)/(1-2X_{0})) \right\}$ and yields the set of eigenvalues $\cup_{k=0}^\infty \left\{ [I_{n}(k+1/2) \pi \left(1-\frac{Z+1}{N} \right)/(1-2X_{0}) ]^{2} \right\}$. Clearly, the total multiplicity of each eigenvalue for one V is $2^{n}$ since $2^{n}$ V's live in the exterior region of $F_{n}$.\\ \item \emph{Exterior Loops}. Eigenfunctions must take opposite values on the two branches of the loop, and we must therefore only consider eigenfunctions on an interval of length $[I_{n} (1-\frac{Z+1}{N})/(1-2X_{0})]^{-1}$ with Dirichlet boundary conditions at the endpoints. We see immediately that the set of functions is $\left\{\sin(I_{n} k \pi x (1-\frac{Z+1}{N})/ (1-2X_{0})) \right\}$ which in turn yields a set of eigenvalues $\left\{ [I_{n} k \pi (1-(Z+1)/N)/(1-2X_{0})]^{2} \right\}$ each with multiplicity one. Since each loop will add the same set of eigenvalues, we use a previous result to see that each eigenvalue in the set $\left\{ [I_{n} k \pi \left( 1-\frac{Z+1}{N} \right)/(1-2X_{0})]^{2} \right\}$ will have total multiplicity $(1-\frac{Z+1}{N}) 2^{n-1} I_{n-1}(N-2)$.\\ \item \emph{Exterior Crosses}. The crosses are more complicated because $2^{n-1}$ of them are split along the conducting plates and so receive slightly different boundary conditions.
For each cross not centered along a conducting plate, we treat an intact cross as two overlapping X's joined together at the four corner nodes. By the orthogonality condition, any permissible eigenfunction takes opposite values on the two X's of the cross so that once a function is applied to one X, the values of that function on the other X are fully determined. From this, it is clear that the function must vanish at the four corners of the X. We can view this shape as an upper and lower V, joined at the central vertex, which is the $F_{1}$ quantum graph approximation for a $j=2$ Laakso space. Here, \begin{equation*} \sigma(B_{0}|_{D_{0}^{\prime}})= \left\{ \left( \frac{k \pi I_{n} (1-\frac{Z+1}{N})}{2(1-2X_{0})} \right) ^{2} \right\} \end{equation*} where $[I_{n} (1-(Z+1)/N)/(2(1-2X_{0}))]^{-1}$ is the metric length of the upper branch of the X. We also know that \begin{equation*} \sigma(B_{1}|_{D_{1}^{\prime}})= \left\{ \left( \frac{ k \pi I_{n} (1-\frac{Z+1}{N})}{(1-2X_{0})} \right) ^{2} \right\} \end{equation*} for each of the left and right side V's. Since this argument works only for those crosses which do not lie along the conducting plates, we have total multiplicities of $2^{n-2}[(1-(Z+1)/N)I_{n-1}-1]-2^{n-2}$ for $\left\{ \left(\frac{k \pi I_{n} (1-\frac{Z+1}{N})}{2(1-2X_{0})} \right)^{2} \right\}$ and $2^{n-1}[(1-(Z+1)/N)I_{n-1}-1]-2^{n-1}$ for $\left\{ \left( \frac{k \pi I_{n} (1-(Z+1)/N)}{(1-2X_{0})}\right)^{2} \right\}$. \\ \item \emph{Exterior Half-Crosses}. For each cross centered on the conducting plates, take the half-cross lying in the exterior region, on which all permissible eigenfunctions take opposite values on the two halves of the half-cross. Thus, the function on the upper half of the half-cross fully determines the eigenfunction.
Since we are imposing Dirichlet boundary conditions along the conductors, it is clear that the set of eigenvalues is given by $\left\{ [k \pi I_{n} (1-\frac{Z+1}{N})/(1-2X_{0})]^{2} \right\}$, with multiplicity two to account for the fact that two separate intervals of length $[I_{n} (1-\frac{Z+1}{N})/(1-2X_{0})]^{-1}$ form every half of the half-cross. Since there are exactly $2^{n-1}$ such half-crosses in the exterior region of $F_{n}$, we calculate a total multiplicity of $2^{n}$ for each of the eigenvalues listed in $\left\{ \left[ k \pi I_{n} \frac{1-\frac{Z+1}{N}}{(1-2X_{0})} \right]^{2} \right\}$. \\ \item \emph{Interior Region}. This case is handled by similar means. \end{enumerate} \end{proof} The proof of Theorem \ref{Spectrum} now follows from the results of Lemma \ref{SpectrumF_{0}}, Lemma \ref{SpectrumF_{1}}, Lemma \ref{SpectrumF_{n}}, and Theorem \ref{SpectrumOrtho}. Before calculating the Casimir Force in Subsection \ref{sect:CasimirForce}, we provide the following proviso. \subsection{Proviso} \label{Proviso} Analytic regularization is a method employed by modern physics and, in particular, Quantum Field Theory to grapple with divergent sums which often arise in calculations of vacuum energy. More specifically, a divergent sum is interpreted to be the analytic continuation of a function that converges only on a subregion of the complex plane. In practice, we take the formal sum $\sum_{n=0}^{\infty} z^{n}$, realize it converges everywhere inside the unit circle to the value $\frac{1}{1-z}$, and analytically continue the sum to a meromorphic function on the complex plane. So, we pair $\sum_{n=0}^\infty 2^{n}$ with $\frac{1}{1-2}=-1$. Of course, this last calculation becomes a formal equality if we are working in the $s$-adic number system where $\sum_{n=0}^{N} s^{n}=\frac{s^{N+1}-1}{s-1}$ and $\lim_{N \to \infty}~ \sum_{n=0}^{N} s^{n}=\lim_{N \to \infty}~ \frac{s^{N+1}-1}{s-1}=\frac{-1}{s-1}$.
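Such regularized values can also be reproduced numerically; here is a short sketch (our choice of method is Hasse's globally convergent series for the Riemann zeta function, which is not part of the authors' development):

```python
from math import comb, pi

def zeta(s, terms=40):
    """Riemann zeta function via Hasse's globally convergent series,
    valid for all s != 1 -- including s = -1, where it recovers the
    regularized value of 1 + 2 + 3 + ... ."""
    total = 0.0
    for n in range(terms):
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

values = zeta(-1), zeta(2)   # approximately (-1/12, pi**2/6)
```

Evaluating at $s=-1$ returns $-\frac{1}{12}$, the regularized value of $\sum_{n=1}^{\infty} n$ used in the discussion that follows.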
Moreover, we interpret $\sum_{n=1}^\infty n $ to be the value of the analytic continuation of the Riemann zeta function at $s=-1$. Regularizing $\sum_{n=1}^\infty n$ yields the value $\zeta(-1)=-\frac{1}{12}$. The expression $\sum_{n=1}^\infty (n+1/2)$ is recognized as a value of the analytic continuation of the Hurwitz zeta function and evaluated along similar lines. While calculations of the Casimir Effect using zeta regularization have been experimentally verified in \cite{Bressi2002} and \cite{Sparnaay1957}, the results do not imply that nature always chooses analytic regularization. Rather, we employ the method carefully in the interest of obtaining finite answers to questions which seem closely related to Casimir's original setup and in the hope that analogous reasoning applies here in the case of Laakso spaces. \subsection{Formulae for the Casimir Energy and Force on a General Laakso Space} \label{sect:CasimirForce} \begin{definition} \label{SpectrumDef} Let $\sigma(A) =\left\{ \lambda_{n} \right\}$ be the spectrum of the differential operator $A$ with respective multiplicities $\left\{ g_{n} \right\}$. Then the spectral zeta function is $\zeta(s)=\sum_{n=1}^\infty \frac{g_{n}}{\lambda_{n}^{s}}$.
\end{definition} \begin{corollary} \label{SpectralFunction} Given a Laakso space configuration specified by $j_{n}=N$ for all $n$, $X_{0} \in \left( 0, 1/2 \right)$, and $Z \in \mathbb{N}\cup\left\{0 \right\}$, the spectral zeta function of the self-adjoint operator $\Delta$ is \begin{align*} \zeta_{N,X_{0}, Z}(s) =&~ \sum _{k=0}^\infty \frac{2}{[(2k+1) \pi/(1-2X_{0})]^{2s}} +\sum_{k=1}^\infty \frac{1}{[k \pi/(2X_{0})]^{2s}}\\& +\sum _{k=1}^\infty \frac{(N-Z-3)}{[Nk \pi\frac{(1-(Z+1)/N)}{1-2X_{0}}]^{2s}}+\sum _{k=1}^\infty \frac{Z+1}{[k \pi(Z+1) / (2X_{0})]^{2s}} \\ & + \sum_{n=1}^\infty \sum_{k=0}^\infty \frac{2^{n}}{[I_{n}(k+1/2) \pi (1-\frac{Z+1}{N})/(1-2X_{0})]^{2s}} \\ & + \sum_{n=2}^\infty \sum_{k=1}^\infty \frac{(1-\frac{Z+1}{N})2^{n-1}I_{n-1}(N-2)+2^{n-1}(1-(Z+1)/N) I_{n-1}}{[I_{n}k \pi \frac{(1-(Z+1)/N)}{(1-2X_{0})}]^{2s}} \\& + \sum_{n=2}^\infty \sum_{k=1}^\infty \frac{ 2^{n-2}[(1-(Z+1)/N)I_{n-1}-1]-2^{n-2}}{ [ I_{n}k \pi(1-(Z+1)/N)/[2(1-2X_{0})]]^{2s}} \\ & +\sum_{n=2}^\infty \sum_{k=1}^\infty \frac{(Z+1)/N[2^{n-1}I_{n-1}(N-2)+2^{n-1}I_{n-1}]+2^{n-1}}{[k \pi I_{n}(Z+1)/(2NX_{0})]^{2s}} \\ & + \sum_{n=2}^\infty \sum_{k=1}^\infty \frac{(Z+1)/N[2^{n-2}I_{n-1}]-2^{n-2}}{[I_{n}k \pi / (4NX_{0})]^{2s}}. \end{align*} \end{corollary} \begin{proof} This follows immediately from an application of Definition \ref{SpectrumDef} to the results of Theorem \ref{Spectrum}. \end{proof} \begin{proposition} Let $\zeta_{N,X_{0}, Z}(s)$ be the spectral zeta function. Then $\frac{\hbar}{2}\zeta_{N,X_{0},Z}(-\frac{1}{2})$ gives the Casimir energy $E_{C}$ of the conducting plates in the Laakso space. \end{proposition} \begin{proof} This follows from the fact that $E_{0}=\frac{\hbar}{2} \sum_{n} \omega_{n}$ and $\left\{ \omega_{n} \right\}=\left\{ \sqrt{ \lambda_{n}} ~|~ \lambda_{n} \in \sigma(\Delta) \right\}$. 
\end{proof} \begin{proposition} \label{ForceProp} Let two conducting metal plates be situated a distance $X_{0}$ symmetrically about $x=\frac{1}{2}$ in a Laakso space with $j_{n}=N~\forall n$. Furthermore, let $Z$ nodes lie between the conductors in the $F_{1}$ quantum graph approximation. Then the Casimir Force $F_{C}$ experienced by each of the plates is given by $\frac{\hbar}{2} \frac{d}{dx} \zeta_{N,x,Z}(-\frac{1}{2}) |_{x=X_{0}}$, where a positive sign indicates an attractive force. \end{proposition} \begin{proof} Because of the bilateral symmetry of the arrangement, the forces experienced by each of the plates must be equal in magnitude and opposite in direction. The force experienced by a system is given by the negative energy gradient, so our expression for $F_{C}$ is correct up to sign. Lastly, if $\frac{d}{dx} \zeta_{N,x,Z}(-\frac{1}{2})|_{x=X_{0}}$ is positive, energy increases as the plates move apart so that it is energetically favorable for the plates to move closer together, which means the force is attractive as claimed. 
\end{proof} \begin{proposition} The generalized Casimir Force experienced by two uncharged conducting metal plates is \begin{align*} F_{C}&= \frac{2 \hbar \pi (N-(Z+1))}{24(1-2N)(1-2X_{0})^{2}} - \frac{\hbar (N-(Z+3))(N-(Z+1))}{12(1-2X_{0})^{2}} \\&- \frac{ 2 \hbar \pi N^{3} (N-2)}{12 (1-2N^{2})}\left\{ \frac{1-\frac{Z+1}{N}}{1-2X_{0}} \right\}^{2} -\frac{5 \hbar \pi (1-\frac{Z+1}{N})}{24 (1-2X_{0})^{2}} \left\{ \frac{N^{2}(N-(Z+1))}{1-2N^{2}}-\frac{N^{2}}{1-2N} \right\} \\&+ \frac{ \hbar \pi (Z+1)^{2}}{48 X_{0}^{2}} + \frac{ \hbar \pi N (Z+1)^{2} (N-2)}{24 X_{0}^{2} (1-2N^{2})} + \frac{5 \hbar \pi N (Z+1)^{2}}{96 (1-2N^{2}) X_{0}^{2}} + \frac{ \hbar \pi}{6 (1-2X_{0})^{2}}\\& + \frac{ \hbar \pi}{48 X_{0}^{2}} - \frac{\hbar \pi N^{2} (1-\frac{Z+1}{N})}{24 (1-2N) (1-2X_{0})^{2}}+\frac{ \hbar \pi N (Z+1)}{96 X_{0}^{2}(1-2N)}-\frac{\hbar \pi N^{2} (1-\frac{Z+1}{N})}{12 (1-2X_{0})^{2}(1-2N)}\\& + \frac{\hbar \pi (Z+1) N}{48 X_{0}^{2} (1-2N)}. \end{align*} \end{proposition} \begin{proof} Using the expression in Corollary~\ref{SpectralFunction}, take the derivative as instructed in Proposition~\ref{ForceProp} and then reduce the result using the analytic continuation techniques discussed in Subsection \ref{Proviso}. \end{proof} \subsection{The Spectral Zeta Function for Laakso Spaces} \label{sect: Zeta} \begin{theorem}\label{spectral} For a Laakso space defined by a repeating sequence of $j_n$ with period $T$, the spectral zeta function can be analytically continued to the following function: \begin{eqnarray}\label{spectralequation}\zeta_L(s)=&&\frac{\zeta_R(2s)}{\pi^{2s}}\left[\sum_{p=2}^{T+1} \left(\left(\frac{I_T^{2s}}{I_T^{2s}-I_T2^T}\right)\left(\frac{(2^{p-1})(I_{p-1})(2^{2s-1}+j_p-1)}{I_p^{2s}}\right)\right.\right.\nonumber\\ &&+\left.\left.\left(\frac{I_T^{2s}}{I_T^{2s}-2^T}\right)\left(\frac{(2^{p-1})(\frac{3}{2}2^{2s}-3)}{I_p^{2s}}\right)\right)+\frac{2^{2s+1}-4+j_1}{j_1^{2s}}+1\right]. 
\end{eqnarray} \end{theorem} \begin{proof} The following is from \cite[Chapter 6]{ST08}: \begin{eqnarray} \zeta_L(s)&=&\frac{\zeta_R(2s)}{\pi^{2s}}\left[\left(\sum_{n=2}^{\infty}\frac{2^{n-1}(I_{n-1})(2^{2s-1}+j_n-1)+2^{n-1}(\frac{3}{2}2^{2s}-3)}{I_n^{2s}}\right)\right.\nonumber \\ &&+\left.\frac{2^{2s+1}-4+j_1}{j_1^{2s}}+1\right]. \end{eqnarray} \noindent This gives the value of the spectral zeta function for a generalized Laakso space. Since we have the sequence of $j_n$ repeating with period $T$, we know that for any nonnegative integer $n$, and for any integer $p$, where $0\leq{p}<T$, $I_{p+nT}=(I_T^n)(I_p)$. For $s>1$, the series is absolutely convergent, so we can split up the terms by their remainders (mod $T$) and write the series as follows: \begin{eqnarray}\label{spectral1} \zeta_L(s)&=&\frac{\zeta_R(2s)}{\pi^{2s}}\left[\sum_{p=2}^{T+1}\left(\sum_{n=0}^{\infty}\left(\frac{2^{nT+p-1}(I_{nT+p-1})(2^{2s-1}+j_{nT+p}-1)}{I_{nT+p}^{2s}}+\right.\right.\right.\nonumber\\ &&\left.\left.\left.\frac{2^{nT+p-1}(\frac{3}{2}2^{2s}-3)}{I_{nT+p}^{2s}}\right)\right)+\frac{2^{2s+1}-4+j_1}{j_1^{2s}}+1\right]. \end{eqnarray} Since $I_{p+nT}=(I_T^n)(I_p)$ and $j_{p+nT}=j_p$, we are left with a finite sum of geometric series which are absolutely convergent when $\mathrm{Re}(s) > 1$ and meromorphically extendable to the rest of the complex plane. \end{proof} \begin{corollary}\label{1/2} For every Laakso space except the space where $j_n=2$ for all $n$, \begin{eqnarray} \lim_{s \rightarrow \frac{1}{2}} \zeta_L(s)&=&\frac{1}{2\pi}\left[\sum_{p=2}^{T+1}\left(\frac{2^p\ln2}{j_p(1-2^T)}-\frac{2^p\ln(I_p)}{1-2^T}+\frac{2^pI_T(3\ln2)}{I_p(I_T-2^T)}\right)\right.\\ &&\left.+\frac{2^{T+2}\ln(I_T)}{1-2^T}+\frac{8\ln2}{j_1}-2\ln(j_1)\right].\nonumber \end{eqnarray} \end{corollary} \begin{proof} This follows from Theorem \ref{spectral}, using L'Hospital's rule, since the Riemann zeta function has a simple pole at 1. 
\end{proof} \begin{corollary} The spectral zeta function of a repeating Laakso space has poles at \begin{equation}\label{poles} \bigcup_{m \in \mathbb{Z}}\left\{\frac{\ln (2^TI_T) +2T\pi im}{\ln (I_T^2)}\right\} \cup \bigcup_{m \in \mathbb{Z}}\left\{\frac{\ln (2^T) +2T\pi im}{\ln (I_T^2)}\right\}.\nonumber \end{equation} \end{corollary} \begin{proof} This follows from Theorem \ref{spectral} and Corollary \ref{1/2} and is consistent with the results in \cite{ST08}. \end{proof} \begin{proposition} The spectral dimension of the Laakso space with period T is \begin{equation}\label{eqdim} d_s=\frac{\ln\left(2^TI_T\right)}{\ln\left(I_T\right)}. \end{equation} \end{proposition} \begin{proof} In the periodic case we have a closed form for the meromorphic spectral zeta function. As in \cite{ST08} the spectral dimension is taken to be the largest real part of the poles of the zeta function. The proposition follows by inspection. \end{proof} From Theorem \ref{spectral} we can calculate the value of the spectral zeta function at specific values of $j$, where $j_n=j$ for all values of $n$. \begin{corollary}\label{constantj} The following expressions give values for the spectral zeta function on a Laakso space with constant $j_n$: {\small \begin{eqnarray}\label{>1/2} \label{<1/2}\zeta_L(s)&=&\frac{\zeta_R(2s)}{\pi^{2s}}\left[\frac{j^{4s}-j^{2s+1}+2(2^{2s}j^{2s})-6j^{2s}-3(2^{2s}j)+8j-2^{2s}+2}{(j^{2s}-2j)(j^{2s}-2)}\right]. 
\end{eqnarray} } \end{corollary} Notably, we have the following value for $\zeta_L(-\frac{1}{2})$: \begin{eqnarray}\zeta_L\left(-\frac{1}{2}\right)=\frac{-\pi}{12}\left(\frac{13}{8}+\frac{3j-2}{8j^2-4}+\frac{9}{16j-8}\right) \end{eqnarray} \begin{corollary}\label{alternatingj} The spectral zeta function for a sequence of $j_n$ with period $2$ is \begin{eqnarray}\zeta_L(s)&=&\frac{\zeta_R(2s)}{\pi^{2s}}\left[\left(\frac{2j_1}{I_2^{2s}-4I_2}\right)\left(2^{2s-1}+j_2-1+\frac{2j_2(2^{2s-1}+j_1-1)}{j_1^{2s}}\right)\right.\\ &&+\left.\left(\frac{3(2^{2s})-6}{I_2^{2s}-4}\right)\left(1+\frac{2}{j_1^{2s}}\right)+\frac{2^{2s+1}-4+j_1}{j_1^{2s}}+1\right].\nonumber \end{eqnarray} \end{corollary} \subsection*{Acknowledgments} The authors would like to acknowledge Matthew Begue, Levi DeValve, David Miller and Kevin Romeo for making available their MATLAB code which served as the basis for calculations presented in Section \ref{sect: SA}.
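As a numerical aside (our own sketch, not part of the original text), one can check that the quoted value of $\zeta_L(-\frac{1}{2})$ is consistent with Corollary \ref{constantj}: at $s=-\frac{1}{2}$ the prefactor $\zeta_R(2s)/\pi^{2s}$ equals $\zeta_R(-1)\,\pi=-\pi/12$, so it suffices to compare the bracketed rational function against the closed form.

```python
def bracket_direct(j, s):
    # bracketed term of zeta_L(s) from the constant-j corollary
    num = (j**(4*s) - j**(2*s + 1) + 2 * 2**(2*s) * j**(2*s)
           - 6 * j**(2*s) - 3 * 2**(2*s) * j + 8 * j - 2**(2*s) + 2)
    den = (j**(2*s) - 2 * j) * (j**(2*s) - 2)
    return num / den

def bracket_closed(j):
    # bracket implied by the quoted value of zeta_L(-1/2)
    return 13/8 + (3*j - 2) / (8*j**2 - 4) + 9 / (16*j - 8)

# The two expressions agree at s = -1/2 for several values of j.
for j in (2, 3, 4, 7):
    assert abs(bracket_direct(j, -0.5) - bracket_closed(j)) < 1e-12
```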
\section{Introduction} Strong gravitational lensing by galaxies enables the study of distant sources and luminous and dark matter in galaxies over a range of redshifts. When the source is a quasar, its multiple images give a wealth of information on: the source central engine and the stellar content of the lens, via microlensing by individual stars \citep{sch16,bra16}; substructure in the lens, via astrometric and flux-ratio `anomalies' \citep{dal02,nie14,agn17}; quasars and their hosts at $z_{s}\approx2$ \citep{rus14,agn16,din17}; and cosmological distances, from the time-delays between the light-curves of different images \citep{ref64,par09,suy14}. However, strong lenses are rare, as they require the precise alignment of a distant source with (at least) a galaxy. Since quasars are rare objects themselves and are less frequent at higher redshift, quasar lenses are even rarer. \citet{om10} estimate that $\approx0.2$ lensed quasars per square degree, brighter than $i=21,$ should be present in wide-field surveys, with a majority of doubly imaged quasars and $\approx20\%$ quadruples. Their predicted source redshifts are distributed at $z\approx3.0\pm0.3,$ higher than those of most quasar lenses discovered in wide-field searches, specifically in the SDSS Quasar Lens Search \citep[SQLS,][]{ogu06}, which targeted objects with quasar fibre-spectra, themselves based on UV excess (UVx) preselection. Extensions of lens searches to higher completeness must rely solely on photometric preselection with limited UVx information. In view of current and upcoming wide-field surveys, diverse techniques have been developed to this end \citep[e.g.][]{agn15,sch16,mor16,ost17,wil17,lin17}. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Atemlos_BVRi.png}\\ \includegraphics[width=0.3\textwidth]{pa0_spec.pdf} \includegraphics[width=0.3\textwidth]{pa90_spec.pdf} \includegraphics[width=0.3\textwidth]{pa0_ratio.pdf} \caption{{J1433+6007 discovery and follow-up data. 
\textit{Top:} ALFOSC $BVRi$ median coadds and colour-composites, $0.21^{{\prime\prime}}$/px, North up and East left. A faint arc, barely visible in $B-$band, emanates from the northern-most image. \textit{Bottom:} North-South (left) and East-West spectra (center), and flux-ratios of spectra in North-South slit (right). Image C leaks in the North-South slit and is significantly reddened (right, green line), whereas images A,B have almost indistinguishable spectra (right, black line). The source has $z_{s}=2.74,$ as evaluated from C~\textsc{iv}, where the wavelength calibration is most accurate. }} \label{fig:puppies} \end{figure*} Here, we report on the discovery and follow-up of a quadruply lensed quasar, J1433+6007, mined with a novel technique in the Sloan Digital Sky Survey \citep[SDSS,][]{aba09} DR12 footprint without spectroscopic or UVx information. The two brightest images are separated by $\approx3.6^{{\prime\prime}},$ the source redshift from the discovery spectra is $z_{s}=2.74,$ and the lens redshift is $z_{l}=0.407$ from follow-up Keck-ESI spectroscopy. We describe the target/candidate selection procedure in Section \ref{sect:cands}, the confirmation and follow-up of J1433 in Sect.~\ref{sect:fu}, and first lens models in Sect.~\ref{sect:mods}. We conclude in Section~\ref{sect:conc}. In what follows, SDSS magnitudes are in the AB system, WISE \citep{wri10} magnitudes in the Vega system, and where necessary we adopt concordance cosmological parameters $\Omega_{\Lambda}=0.7,$ $\Omega_{m}=0.3,$ $H_{0}=70$km/s/Mpc. \section{Candidate Selection} \label{sect:cands} Since half of the known quasar lenses in SDSS have extended morphology, due to the presence of the deflector \citep[see][for a discussion]{wil17}, objects with $\log_{10}\mathcal{L}_{star,i}<-11$ or \texttt{psf{\_}i}-\texttt{mod{\_}i}$>$0.075, \texttt{mod{\_}i}$<20.5,$ and WISE $W1-W2>0.55,$ $W2-W3<3.1+1.5(W1-W2-1.075)$ were pre-selected. Minimal $griz$ cuts were used. 
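For concreteness, the catalog preselection just described can be sketched as a simple filter function. This is a hypothetical illustration (the argument names and the example objects are ours); only the numerical thresholds come from the text above.

```python
def passes_preselection(log_lstar_i, psf_i, mod_i, w1, w2, w3):
    """Morphological i-band cuts plus the WISE colour wedge quoted above."""
    morphology = (log_lstar_i < -11) or (psf_i - mod_i > 0.075)
    bright = mod_i < 20.5
    wise = (w1 - w2 > 0.55) and (w2 - w3 < 3.1 + 1.5 * (w1 - w2 - 1.075))
    return morphology and bright and wise

# An extended, AGN-like object (W1-W2 = 0.8, W2-W3 = 2.5) passes:
assert passes_preselection(-12.0, 19.6, 19.4, 14.0, 13.2, 10.7)
# A blue point source with W1-W2 = 0.4 fails the WISE colour cut:
assert not passes_preselection(-12.0, 19.5, 19.4, 14.0, 13.6, 11.0)
```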
The $i-$band selection is used as a morphological preselection, whereas the WISE cuts are an extension of those by \citet{ass13} to exclude most quasars at $z<0.35$ and narrow-line galaxies. Quasar lens \textit{targets} were then selected based on their catalog magnitudes, then visually inspected to exclude obvious contaminants, yielding the final \textit{candidate} sample. Lensed quasars are rare among quasars, which in turn are rarer than blue galaxies, hence we used a novel outlier selection to mine targets (detailed elsewhere, Agnello 2017), retaining peculiar objects by excluding more common ones. When tested on the SQLS morphologically-selected lens candidates of \citet{ina12}, this procedure recovered 9 of the 10 lenses and excluded half of the 40 false positives, without any UVx or fibre-spectroscopy information. Four classes of `common' objects were defined, roughly corresponding to nearby ($z<0.75$) quasars, isolated quasars at higher ($z\approx2$) redshift, blue-cloud galaxies and faint ($W2\gtrsim15$) objects. Each class $k$ was represented by a single Gaussian with mean $\boldsymbol{\mu}_{k}$ and covariance $\mathbf{C}_{k}$ in a space given by $g-r,$ $g-i,$ $r-z,$ $i-W1,$ $W1-W2,$ $W2-W3,$ $W2$ and each object $\mathbf{f}$ was assigned pseudo-distances defined as $d_{k}=0.5\left\langle\mathbf{f}-\boldsymbol{\mu}_{k},\mathbf{C}_{k}^{-1}(\mathbf{f}-\boldsymbol{\mu}_{k})\right\rangle.$ Objects that were `far' enough from the four class centers, based on linear combinations of their $d_{k}$ values, were retained as targets. This yielded $\approx250$ candidates brighter than $i=20.0$ over the whole SDSS-DR12 footprint, of which $\approx40$ were known quasar lenses or pairs. J1433+6007, at r.a.= 14:33:22.8, dec.=+60:07:13.44 (J2000), showed two well-separated blue images on either side of two red objects, blended into three photometric components by the SDSS pipeline. \begin{table*} \centering \begin{tabular}{l|cc|cccc|cc} \hline img. 
& $\delta x(^{\prime\prime})$ & $\delta y(^{\prime\prime})$ & $B$ & $V$ & $R$ & $i$ & $\mu$ & $t-t_{A}$ \\ & $=-\cos(\mathrm{dec.})\delta\mathrm{r.a.}$ & $=\delta\mathrm{dec.}$ & (mag) & (mag) & (mag) & (mag) & & (days)\\ \hline A & $0.00\pm0.025$ & $0.00\pm0.025$ & $20.26\pm0.04$ & $19.78\pm0.01$ & $19.26\pm0.01$ & $19.32\pm0.01$ & $2.62$ & 0.00 \\ B & $-0.070\pm0.025$ & $-3.650\pm0.025$ & $20.09\pm0.03$ & $19.63\pm0.01$ & $19.13\pm0.01$ & $19.10\pm0.01$ & $3.57$ & 15.0 \\ C & $0.766\pm0.025$ & $-2.056\pm0.025$ & $20.50\pm0.05$ & $19.92\pm0.01$ & $19.30\pm0.01$ & $19.14\pm0.01$ & $-3.07$ & 25.0 \\ D & $-2.138\pm0.050$ & $-2.132\pm0.050$ & $22.00\pm0.14$ & $21.30\pm0.02$ & $20.63\pm0.02$ & $20.38\pm0.01$ & $-0.62$ & 113.0 \\ G & $-1.152\pm0.025$ & $-1.950\pm0.025$ & $21.87\pm0.15$ & $20.69\pm0.12$ & $19.39\pm0.01$ & $18.52\pm0.01$ & --- & ---\\ \hline \end{tabular} \caption{Image positions, $BVRi$ magnitudes, and model-predicted values of magnifications and time-delays. The nominal errors on some magnitudes would be smaller than quoted, but are limited by the accuracy of the ALFOSC zero-points and observing sky conditions.} \label{tab:bigtab} \end{table*} \section{Follow-up} \label{sect:fu} Long-slit discovery spectra were obtained on 2017/01/19,20 as part of a candidate lens follow-up program (P42-019, PI Grillo). We used the Andalucia Faint Object Spectrograph and Camera (ALFOSC) at the 2.5m Nordic Optical Telescope (NOT) in La Palma (Spain), and the $1^{\prime\prime}-$wide long-slit with the {\#}4 grism, covering a wavelength range $3200\rm{\AA} <\lambda< 9600\rm{\AA}$ with a dispersion of $3.3 \rm{\AA}/\rm{pixel}$. Standard \textsc{Iraf} routines were used for bias subtraction, flat-field corrections and wavelength calibration. From ALFOSC $BVRi$ imaging, we obtained the positions and magnitudes subsequently used for lens models. 
Deeper, high-resolution spectroscopy was obtained with the Echellette Spectrograph and Imager (ESI) at the Keck \textsc{II} telescope on 2017/01/20 (PI Jones), with a $1^{\prime\prime}$-wide slit, and reduced with ESIRedux\footnote{Available at \texttt{http://www2.keck.hawaii.edu/inst/esi/ESIRedux/}}. In what follows, quasar images are labeled as shown in Fig.~\ref{fig:puppies}, top-left, ordered by expected arrival time. ALFOSC pixels measure $0.21^{{\prime\prime}}$ per side. \subsection{NOT discovery and follow-up spectra} We took two 600s exposures with the slit aligned North-South, through 14:33:22.8+60:07:13.44, and one (900s) with East-West alignment, through 14:33:22.8+60:07:14.5. This enabled simultaneous spectroscopy of the two prominent quasar images and the two red objects, respectively. Arc (HeNe, Ar) and flat lamps were used for calibrations. The North-South spectra show three nearly identical traces corresponding to the same $z_{s}=2.74$ quasar (Fig.~\ref{fig:puppies}, lower-left panel). The bright, outer traces correspond to the blue images (A,B) visible in the SDSS. The central, fainter trace is given by a third quasar image C, corresponding to the West-most red object, just outside the slit, thus confirming J1433 as a multiply imaged quasar. The East-West spectra (Fig.~\ref{fig:puppies}, lower-middle panel) show clear traces corresponding to images C,D and the lens galaxy (G). Images A,B have almost indistinguishable spectra, with uniform flux ratio $\approx1.05,$ whereas image C undergoes substantial extinction bluewards of C~\textsc{iii}$\left.\right]$. Micro-lensing results in $\approx5-10\%$ flux-ratio differences between the continua and emission lines (Fig.~\ref{fig:puppies}, lower-right panel). \subsection{NOT imaging} Follow-up imaging data with good seeing ($\approx0.6^{{\prime\prime}}$ FWHM at Zenith) were obtained with ALFOSC in $B,$~$V,$~$R,$~$i$ bands, using multiple $60$s exposures per band. 
Standard \textsc{Python} routines were used for bias subtraction, flat-fielding and coadding. The median coadds and colour-composites are shown in Fig.~\ref{fig:puppies}, where Ly~$\alpha$ from the quasar images dominates in $B-$band and the lens brightens up in redder bands. The relative displacements are obtained both from imaging and spectroscopic data. Individual traces in the spectra are well described by Gaussians in the spatial direction, whose run with wavelength can be modeled with uncertainties as low as 0.125px$=25$mas. Uncertainties from imaging-only data, though nominally smaller, are dominated by systematics from different noise realizations. Residuals between imaging data and model, mostly due to faint features and PSF mismatch, are within the noise level. Table~1 gives the positions of the four images (A,B,C,D) and deflector (G), relative to image A, and $BVRi$ magnitudes. Images C and D are substantially reddened, and blending between D and G is significant in $B-$band. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{ESI.pdf}\\ \caption{{ESI follow-up spectra, with prominent absorption features at $z_{l}=0.407,$ which we associate with G. \textit{Red dashed} lines: Ca~HK, G-band $\lambda$4304, Mg~\textsc{i}, Na~D absorption at $z_l$. \textit{Blue solid} lines: quasar emission at $z_s$. The Ca~HK complex is visible, albeit at lower S/N, also in the ALFOSC spectra. }} \label{fig:ESI} \end{figure} \subsection{Keck-ESI follow-up} ESI spectra of G were taken in echellette mode, with $1.0^{{\prime\prime}}$ slit-width oriented East-West. The total integration time was 45 minutes split into 3 exposures of 900 seconds each. The combined 1D spectrum of G and D, shown in Figure~2, has distinctive absorption features at $z_{l}=0.407$ (Ca~HK, G-band, H$\beta,$ Mg~\textsc{i}/Fe complex, Na~D), which we associate with the lens galaxy. The same features could be seen in the ALFOSC discovery spectra, though not as clearly. 
\begin{table} \centering \begin{tabular}{l|ccccc} \hline & $\theta_{\rm E}$ & $q$ & $\phi_{l}$ & $\phi_{\rm s}$ & $\gamma_{\rm s}$ \\ \hline best & 1.80$^{\prime\prime}$ & 0.50 & 0.18[rad] & 1.12[rad] & 0.10 \\ 68\% low & 1.70$^{\prime\prime}$ & 0.43 & 0.13[rad] & 0.96[rad] & 0.08 \\ 68\% high & 1.90$^{\prime\prime}$ & 0.58 & 0.25[rad] & 1.22[rad] & 0.13 \\ \hline \end{tabular} \caption{Lens model parameters: best-fit values (first row) and 68\% confidence intervals (second and third rows), marginalized over the other parameters. Tight (and expected) degeneracies among the parameters are present, as given in eq.~(\ref{eq:degen}); the exception is the combination $\theta_{\rm E}$, corresponding to the Einstein radius, which is robustly determined. Angles $\phi_{l},\phi_{\rm s}$ are positive counter-clockwise from West.} \label{tab:lenstab} \end{table} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{GLEE_j1433_crit.pdf}\\ \caption{{Lens model results: input images (green circles) and lens center (cyan square), $0.2^{\prime\prime}-$radius circle around the inferred source position (cyan), critical curve (blue) and radial and astroid caustics (red). Purple lines show isophotes corresponding to the source-plane contour. The position of image D is slightly biased, due to blending with G, but is recovered within the adopted uncertainties. }} \label{fig:GLEE} \end{figure} \section{Lens models} \label{sect:mods} From the relative displacements in Table~1, we fit a simple lens model to obtain the enclosed (2D) mass and predicted magnifications and time-delays. We perform a conjugate-point analysis using \textsc{Glee} \citep{suy10,suy12}, adopting $25$mas uncertainties on the positions of A,B,C and G, and $50$mas uncertainties on the position of image D. 
The surface density of the lens is described by \begin{equation} \kappa(x,y)=\frac{\Sigma(x,y)}{\Sigma_{\rm cr}} =\frac{b}{2\sqrt{X^{2}/(1+e)^{2}\, +Y^{2}/(1-e)^{2}}} \end{equation} \citep{kas93}, along the principal axes $(X,Y)$ of G, where $\Sigma_{\rm cr}=\mathrm{c}^{2}D_{s}/(4\pi G\, D_{l}D_{ls})$ factors the dependence on angular-diameter distances. The Einstein radius is defined as the geometric mean of the major and minor axes of the critical curve, $\theta_{\rm E}=2b\sqrt{q}/(1+q).$ We include external shear, with amplitude $\gamma_{\rm s}$ and angle $\phi_{\rm s},$ and let all parameters free to vary, including the lens flattening $q=(1-e)/(1+e)$ and position angle $\phi_{l}.$ The results are summarized in Tables~1,2 and Figure~3. The Einstein radius is robustly determined to $\theta_{\rm E}=(1.80\pm0.10)^{\prime\prime}$, very close to half the A-B separation and independent of other inferred quantities. Flux-ratios from the best-fit model are comparable to those measured in $i-$band, accounting for differential extinction whose existence in lensing is well established \citep[e.g.][]{fal99,med05,agn17}. The predicted delays $t_D-t_A>100$d, $t_C-t_A\approx25$d can be accurately measured in one or two seasons of high-cadence monitoring. Monopole-quadrupole degeneracies \citep[e.g.][]{koc06} are present, in particular \begin{eqnarray} \nonumber \gamma_{\rm s} & \approx & 0.10+0.35(q-0.5),\\ \nonumber \phi_{\rm s} & \approx & 1.1-1.5(b-2.5),\\ \gamma_{\rm s} & \approx & 0.10+0.35(\phi_{l}-0.8) \label{eq:degen} \end{eqnarray} for this system. These may be broken with: observations of the line-of-sight environment, to characterize external contributions to the deflections; and higher-resolution imaging, of both the lensed quasar host and of G, to disentangle shear and lens flattening. The velocity dispersion $\sigma$ of stars in the lens is a useful observable that can be estimated from the lens model itself. 
In fact, a direct measurement of $\sigma$ and its comparison with predictions from lensing can be used to measure cosmological distances and to constrain the dark matter density profile of the lens \citep{tk04,gri08,par09,suy14,son15,jee16}. In the Singular Isothermal Sphere (SIS) limit $q\rightarrow1,$ $\sigma$ depends weakly on location\footnote{The details depend on the steepness of the starlight's profile and the orbital anisotropy \citep{bar11,agn13}.} and is well approximated by \begin{equation} \sigma_{\rm sis}^{2}=\frac{\mathrm{c}^{2}\theta_{\rm E}D_{s}}{4\pi D_{ls}}\ . \label{eq:SIS} \end{equation} With the above values, we then obtain $\sigma=(290\pm8)$ km/s, as expected for massive ellipticals \citep{tre05}. A direct comparison between the estimate in eq.~(\ref{eq:SIS}) and direct measurements will need dynamical models that encompass asphericity and inclination effects \citep{bar11}. \section{Discussion and Conclusions} \label{sect:conc} We have found a new, quadruply lensed quasar at RA= 14:33:22.8, DEC=+60:07:14.5, via an outlier-selection procedure applied to the SDSS-DR12 photometric footprint. Similar to other recent, wide-field photometric searches, this search did not rely on previous spectroscopic or UV excess information. This approach enables the discovery of systems with sources at higher redshifts than typically probed, and with appreciable differential reddening by the lens galaxy, similar to the recently discovered DES~J0408-5354 \citep{lin17,agn17}. Discovery NOT-ALFOSC data confirmed this system as a lens with $z_{s}=2.74$ and four images in a fold configuration. The lens redshift is $z_{l}=0.407$ from follow-up Keck-ESI spectroscopy, which together with the lens-model result $\theta_{\rm E}=(1.80\pm0.10)^{{\prime\prime}}$ predicts a velocity dispersion $\sigma=(290\pm8)$ km/s. 
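The SIS estimate in eq.~(\ref{eq:SIS}) can be checked numerically. The sketch below is our own illustration (a plain trapezoidal integrator in the flat concordance cosmology stated in the introduction stands in for a proper cosmology library); with $z_l=0.407$, $z_s=2.74$ and $\theta_{\rm E}=1.80^{\prime\prime}$ it reproduces $\sigma\approx290$ km/s.

```python
import math

C_KMS, H0 = 299792.458, 70.0   # speed of light [km/s], Hubble constant
OM, OL = 0.3, 0.7              # flat LCDM density parameters

def comoving_distance(z, n=4000):
    # trapezoidal integration of (c/H0) * int_0^z dz'/E(z'), in Mpc
    h = z / n
    f = lambda zp: 1.0 / math.sqrt(OM * (1 + zp)**3 + OL)
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * h) for i in range(1, n))
    return (C_KMS / H0) * h * s

zl, zs, theta_E_arcsec = 0.407, 2.74, 1.80
Dcl, Dcs = comoving_distance(zl), comoving_distance(zs)
Ds = Dcs / (1 + zs)                 # angular-diameter distance to source
Dls = (Dcs - Dcl) / (1 + zs)        # lens-source distance (flat universe)
theta_E = theta_E_arcsec * math.pi / (180 * 3600)  # radians

# eq. (SIS): sigma^2 = c^2 * theta_E * D_s / (4 pi D_ls)
sigma = math.sqrt(C_KMS**2 * theta_E * Ds / (4 * math.pi * Dls))
assert abs(sigma - 290) < 5   # ~290 km/s, as quoted in the text
```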
Saddle-point images C,D are significantly reddened by the lens galaxy, while the A/B flux ratios agree with predictions by the lens model within $\approx0.1$mag. Microlensing is evident in the differential magnification of lines and continua. The expected time-delays (tab.~1) can be measured accurately with high-cadence campaigns spanning one or two monitoring seasons. With current data, significant monopole-quadrupole degeneracies arise in the lens model, but they can be broken with deeper and higher-resolution follow-up. \section*{Acknowledgments} C.G. and M.B. acknowledge support by VILLUM FONDEN Young Investigator Programme through grant no. 10123. TJ acknowledges support provided by NASA through Program \# HST-HF2-51359 through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. TT acknowledges support by the Packard Foundation through a Packard Research Fellowship and by the National Science Foundation through grant AST-1450141. S.H.S. gratefully acknowledges support from the Max Planck Society through the Max Planck Research Group. The data presented here were obtained in part with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOTSA. We thank R.T.~Rasmussen and T.~Pursimo for support at the NOT, and H.~Fischer for being the lucky charm. The ESI data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation. 
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain, and we respectfully say mahalo.
\section{Introduction} The probabilistic notion of independence underlies several key concepts in ergodic theory as means for expressing randomness or indeterminism. Strong mixing, weak mixing, and ergodicity all capture the idea of asymptotic independence, the first in a strict sense and the second two in a mean sense. Moreover, positive entropy in ${\mathbb Z}$-systems is reflected via the Shannon-McMillan-Breiman theorem in independent behaviour along positive density subsets of iterates (see for example Section~3 of \cite{GW}). One can also speak of independence in topological dynamics, in which case the issue is not the size of certain intersections as in the probabilistic context but rather the simple combinatorial fact of their nonemptiness. Although the topological analogues of strong mixing, weak mixing, and ergodicity and their relatives form the subject of an extensive body of research (stemming in large part from Furstenberg's seminal work on disjointness \cite{Fur,CDSRP}), a systematic approach to independence as a unifying concept for expressing and analyzing recurrence and mixing properties seems to be absent in the topological dynamics literature. The present paper aims to establish such an approach, with a particular emphasis on combinatorial arguments. At the same time we propose a recasting of the theory in terms of tensor products. This alternative formulation has the advantage of being applicable to general $C^*$-dynamical systems, in line with the principle in operator space theory that it is the tensor product viewpoint which typically enables the quantization of concepts from Banach space theory \cite{ER,Pis}. In fact it is from within the theory of Banach spaces that the inspiration for our combinatorial approach to dynamical independence originates. 
In the Banach space context, independence at the dual level is associated with $\ell_1$ structure, as was strikingly demonstrated by Rosenthal in the proof of his characterization of Banach spaces containing $\ell_1$ isomorphically \cite{Ros}. Rosenthal's groundbreaking $\ell_1$ theorem initiated a line of research based in Ramsey methods that led to the work of Bourgain, Fremlin, and Talagrand on pointwise compact sets of Baire class one functions \cite{BFT} (see \cite{Gow,Tod} for general references). The transfer of these ideas to the dynamical realm was initiated by K{\"o}hler, who applied the Bourgain-Fremlin-Talagrand dichotomy for spaces of Baire class one functions to obtain a corresponding statement for enveloping semigroups of continuous interval maps \cite{K}. This dynamical Bourgain-Fremlin-Talagrand dichotomy was recently extended by Glasner and Megrelishvili to general metrizable systems \cite{GM} (see also \cite{tame}). The dichotomy hinges on the notion of tameness, which was introduced by K{\"o}hler (under the term regularity) and refers to the absence of orbits of continuous functions which contain an infinite subset equivalent to the standard basis of $\ell_1$. The link between topological entropy and $\ell_1$ structure via coordinate density was discovered by Glasner and Weiss, who proved using techniques from the local theory of Banach spaces that if a homeomorphism of a compact metric space $X$ has zero entropy then so does the induced weak$^*$ homeomorphism of the space of probability measures on $X$ \cite{GW}. This connection to Banach space geometry was pursued in \cite{EID} with applications to Voiculescu-Brown entropy in $C^*$-dynamics and then in \cite{DEBS} within a general Banach space framework. It was shown in \cite{DEBS} that functions in the topological Pinsker algebra can be described by the property that their orbits do not admit a positive density subset equivalent to the standard basis of $\ell_1$. 
While tameness is concerned with infinite subsets of orbits with no extra structure and thereby calls for the application of Ramsey methods, the positive density condition in the case of entropy reflects a tie to quantitative results in the local theory of Banach spaces involving the Sauer-Perles-Shelah lemma and Hilbertian geometry. What is common to both cases is the dynamical appearance of $\ell_1$ as a manifestation of combinatorial independence. How this link between linear geometry and combinatorial structure plays out in the study of local dynamical behaviour is in general not so straightforward however, and a major goal of this paper is to understand the local situation from a combinatorial standpoint. Over the last ten years a substantial local theory of topological entropy for ${\mathbb Z}$-systems has unfolded around the concept of entropy pair introduced by Blanchard in \cite{Disj}. Remarkably, every significant result to date involving entropy pairs has been obtained using measure-dynamical techniques by way of a variational principle. This has raised the question of whether more direct topological-combinatorial arguments can be found (see for example \cite{mu}). Applying a local variational principle, Huang and Ye have recently obtained a characterization of entropy pairs, and more generally of entropy tuples, in terms of an independence property \cite{LVRA}. We will give a combinatorial proof of this result in Section~\ref{S-entropy comb} with a key coordinate density lemma inspired by work of Mendelson and Vershynin \cite{MV}. Our argument has the major advantage of portability and provides the basis for a versatile combinatorial approach to the local analysis of entropy. It works equally well for noninvertible surjective continuous maps and actions of discrete amenable groups, applies to sequence entropy where no variational principle exists (Section~\ref{S-null}), and is potentially of use in the study of Banach space geometry. 
The tuples of points enjoying the independence property relevant to entropy we call IE-tuples, and in analogy we also define IN-tuples and IT-tuples as tools for the local study of sequence entropy and tameness, respectively. While positive entropy is keyed to independent behaviour along positive density subsets of iterates, what matters for positive sequence entropy and untameness are independence along arbitrarily large finite subsets and independence along infinite subsets, respectively (see Sections~\ref{S-null} and \ref{S-tame}). We investigate how these local independence properties are interrelated at the global level (Section~\ref{S-I-indep}), as well as how their quantizations are connected to various types of asymptotic Abelianness in $C^*$-dynamical systems (Section~\ref{S-NC}). We begin the main body of the paper in Section~\ref{S-prelim} by laying down the general notation and definitions that will form the groundwork for subsequent discussions. Our setting will be that of topological semigroups with identity acting continuously on compact Hausdorff spaces by surjective continuous maps, or strongly continuously on unital $C^*$-algebras by injective unital $^*$-endomorphisms, with notable specializations to singly generated systems in Section~\ref{S-entropy comb} (except for the last part on actions of discrete amenable groups) and Section~\ref{S-entropy tensor} and to actions of groups in Section~\ref{S-minimal} (where the main results are in fact for the Abelian case), the second half of Section~\ref{S-NC}, and Section~\ref{S-UHF}. In Section~\ref{S-entropy comb} we introduce the notion of IE-tuple and establish several basic properties in parallel with those of entropy tuples, including behaviour under taking products, which we prove by measure and density arguments in a $0$--$1$ product space. We then argue that entropy tuples are IE-tuples by applying the key coordinate density result which appears as Lemma~\ref{L-key}. 
In particular, we recover the result of Glasner on entropy pairs in products \cite{mu} without having invoked a variational principle. Using IE-tuples we give an alternative proof of the fact due to Blanchard, Glasner, Kolyada, and Maass \cite{BGKM} that positive entropy implies Li-Yorke chaos. Our arguments show moreover that the set of entropy pairs contains a dense subset consisting of Li-Yorke pairs. To conclude Section~\ref{S-entropy comb} we discuss how the theory readily extends to actions of discrete amenable groups. Note in contrast that the measure-dynamical approach as it has been developed for ${\mathbb Z}$-systems does not directly extend to the general amenable case, as one needs for example to find a substitute for the procedure of taking powers of a single automorphism (see \cite{LVRA} and Section~19.3 of \cite{ETJ}). We continue our discussion on entropy in Section~\ref{S-entropy tensor} by shifting to the tensor product perspective and determining how concepts like uniformly positive entropy translate, with a view towards formulating the notion of a $C^*$-algebraic K-system. In Section~\ref{S-null} we define IN-tuples as the analogue of IE-tuples for topological sequence entropy. We give a local description of nullness (i.e., the vanishing of sequence entropy over all sequences) at the Banach space level in terms of IN-pairs and show that nondiagonal IN-pairs are the same as sequence entropy pairs as defined in \cite{NSSEP}. Here our combinatorial approach is essential, as there is no variational principle for sequence entropy. Section~\ref{S-tame} concerns independence in the context of tameness. We define IT-tuples and establish several properties in relation to untameness. While nullness implies tameness, we illustrate with a WAP subshift example that the converse is false. Section~\ref{S-minimal} investigates tame extensions of minimal systems. 
Under the hypothesis of an Abelian acting group, we establish in Theorem~\ref{T-decomposition} a proximal-equicontinuous decomposition that answers a question of Glasner from \cite{tame} and show in Theorem~\ref{T-uniquely ergodic} that tame minimal systems are uniquely ergodic. These theorems generalize results from \cite{NSSEP} which cover the metrizable null case. In Section~\ref{S-I-indep} we define I-independence as a tensor product property that may be thought of as a $C^*$-dynamical analogue of measure-theoretic weak mixing. Theorem~\ref{T-equiv-indep-prod} asserts that, for systems on compact Hausdorff spaces, I-independence is equivalent to uniform untameness (resp.\ uniform nonnullness) of all orders and the weak mixing (resp.\ transitivity) of all of the $n$-fold product systems, and, in the case of an Abelian acting group, to untameness, nonnullness, and weak mixing. We also demonstrate that, for general $C^*$-dynamical systems, I-independence implies complete untameness. Section~\ref{S-NC} focuses on independence in the noncommutative context. For dynamics on simple unital nuclear $C^*$-algebras, we show that independence essentially amounts to Abelianness. It is also observed that in certain situations the existence of a faithful weakly mixing state implies independence along a thickly syndetic set. In the opposite direction, we prove in Section~\ref{S-UHF} that I-independence in the setting of a UHF algebra implies weak mixing for the unique tracial state. Moreover, for Bogoliubov actions on the even CAR algebra, I-independence is actually equivalent to weak mixing for the unique tracial state. Continuing with the theme of UHF algebras, we round out Section~\ref{S-UHF} by showing that, in the type $d^\infty$ case, I-independence for $^*$-automorphisms is point-norm generic. 
In the final two sections we construct an example of a tame nonnull Toeplitz subshift (Section~\ref{S-Toeplitz}) and prove that the action of a convergence group is null (Section~\ref{S-convergence}). After this paper was finished Wen Huang informed us that he has shown that every tame minimal action of an Abelian group on a compact metrizable space is a highly proximal extension of an equicontinuous system and is uniquely ergodic \cite{TSSP}. Corollary~\ref{C-HP over eq} and Theorem~\ref{T-uniquely ergodic} in our paper strengthen these results from the perspective of our geometric formulation of tameness. We also remark that our paper answers the last three questions in \cite{TSSP}, the first negatively and the second two positively. \medskip \noindent{\it Acknowledgements.} The initial stages of this work were carried out while the first author was visiting the University of Tokyo under a JSPS Postdoctoral Fellowship and the second author was at the University of Toronto. The first author is grateful to JSPS for support and to Yasuyuki Kawahigashi for hosting his stay at the University of Tokyo over the 2004--2005 academic year. We thank Eli Glasner for some useful comments. \section{General notation and basic notions}\label{S-prelim} By a {\it dynamical system} we mean a pair $(X,G)$ where $X$ is a compact Hausdorff space and $G$ is a topological semigroup with identity with a continuous action $(s,x)\mapsto sx$ on $X$ by surjective continuous maps. By a {\it $C^*$-dynamical system} we mean a triple $(A,G,\alpha )$ where $A$ is a unital $C^*$-algebra, $G$ is a topological semigroup with identity, and $\alpha$ is an action $(s,a)\mapsto \alpha_s (a)$ of $G$ on $A$ by injective unital $^*$-endomorphisms. The identity of $G$ will be written $e$. We denote by $G_0$ the set $G\setminus \{ e \}$. 
A dynamical system $(X,G)$ gives rise to an action $\alpha$ of the opposite semigroup $G^{\rm op}$ on $C(X)$ defined by $\alpha_s (f)(x) = (sf)(x) = f(sx)$ for all $s\in G^{\rm op}$, $f\in C(X)$, and $x\in X$, using the same notation for corresponding elements of $G$ and $G^{\rm op}$. Whenever we define a property for $C^*$-dynamical systems we will also speak of the property for dynamical systems and surjective continuous maps by applying the definition to this associated $C^*$-dynamical system. If a $C^*$-dynamical system is defined by a single $^*$-endomorphism then we will talk about properties of the system as properties of the $^*$-endomorphism whenever convenient, with a similar comment applying to singly generated dynamical systems. For a topological semigroup $G$ with identity, we write ${\mathcal{N}}$ for the collection of nonempty subsets of $G_0$, ${\mathcal{I}}$ for the collection of infinite subsets of $G_0$, and ${\mathcal{TS}}$ for the collection of thickly syndetic subsets of $G_0$. Recall that a subset $K$ of $G$ is {\it syndetic} if there is a finite subset $F$ of $G$ such that $FK = G$ and {\it thickly syndetic} if for every finite subset $F$ of $G$ the set $\bigcap_{s\in F} sK$ is syndetic. When $G={\mathbb Z}$ we say that a subset $I\subseteq G$ has {\it positive density} if the limit \[ \lim_{n\to\infty} \frac{|I\cap \{ -n,-n+1, \dots ,n \} |}{2n+1} \] exists and is nonzero. When $G={\mathbb Z}_{\geq 0}$ we say that a subset $I\subseteq G$ has {\it positive density} if the limit \[ \lim_{n\to\infty} \frac{|I\cap \{ 0,1, \dots ,n \} |}{n+1} \] exists and is nonzero. Given a $C^*$-dynamical system $(A,G,\alpha )$ and an element $a\in A$, a subset $I\subseteq G$ is said to be an {\it $\ell_1$-isomorphism set} for $a$ if the set $\{ \alpha_s (a) \}_{s\in I}$ is equivalent to the standard basis of $\ell_1^I$. If this is a $\lambda$-equivalence for some $\lambda\geq 1$ then we also refer to $I$ as an {\it $\ell_1$-$\lambda$-isomorphism set}.
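As an aside for readers who wish to experiment, the positive density condition above is a simple averaging statement and is easy to probe numerically. The following sketch (our own illustration; nothing in it is taken from the paper) computes the averages $|I\cap \{0,1,\dots ,n\}|/(n+1)$ for a subset $I\subseteq {\mathbb Z}_{\geq 0}$ specified by an indicator function.

```python
def density_estimate(indicator, n):
    # Empirical density |I ∩ {0, 1, ..., n}| / (n + 1) of a subset I
    # of the nonnegative integers given by its indicator function.
    count = sum(1 for j in range(n + 1) if indicator(j))
    return count / (n + 1)

# Multiples of 3: the averages settle at 1/3, witnessing positive density.
multiples_of_3 = lambda j: j % 3 == 0
estimates = [density_estimate(multiples_of_3, n) for n in (29, 299, 2999)]
```

For the multiples of $3$ every estimate here agrees with $1/3$, whereas for a sparse infinite set such as the perfect squares the averages tend to $0$, so that set fails the positive density condition despite being infinite.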
These definitions also make sense more generally for actions on Banach spaces by isometric endomorphisms. For a dynamical system $(X,G)$ we will speak of $\ell_1$-isomorphism sets for elements of $C(X)$ in reference to the induced action of $G^{\rm op}$ described above. For a dynamical system $(X,G)$ we denote by $M (X)$ the weak$^*$ compact convex set of Borel probability measures on $X$ and by $M (X,G)$ the weak$^*$ closed convex subcollection of $G$-invariant Borel probability measures. For subsets $Y$ and $Z$ of a metric space $(X,d)$ and $\varepsilon > 0$ we write $Y\subseteq_\varepsilon Z$ and say that $Z$ approximately includes $Y$ to within $\varepsilon$ if for every $y\in Y$ there is a $z\in Z$ with $d(z,y) < \varepsilon$. For a set $X$ and an $m\in{\mathbb N}$ we write $\Delta_m (X)=\{(x, \dots , x)\in X^m:x\in X\}$. An element of $X^m$ which is not contained in $\Delta_m (X)$ is said to be {\it nondiagonal}. The linear span of a set $\Omega$ of elements in a linear space will often be written $[\Omega ]$. A partition of unity $\{ f_1 , \dots , f_n \}$ of a compact Hausdorff space $X$ is said to be {\it effective} if $\max_{x\in X} f_i (x) = 1$ for each $i=1,\dots ,n$, which is equivalent to the existence of $x_1 , \dots , x_n \in X$ such that $f_i (x_j ) = \delta_{ij}$. In this case the linear map ${\rm span} \{ f_1 , \dots ,f_n \} \to {\mathbb C}^n$ given by evaluation at the points $x_1 , \dots , x_n$ is an isometric order isomorphism. See Section~8 of \cite{GILQ} for more information on order structure and finite-dimensional approximation in commutative $C^*$-algebras. An {\it operator space} is a closed subspace of a $C^*$-algebra, or equivalently of ${\mathcal B} ({\mathcal H} )$ for some Hilbert space ${\mathcal H}$. The distinguishing characteristic of an operator space is its collection of matrix norms, in terms of which one can formulate an abstract definition, the equivalence of which with the concrete definition is a theorem of Ruan. 
A linear map $\varphi : V\to W$ between operator spaces is said to be {\it completely bounded} if $\sup_{n\in{\mathbb N}} \| {\rm id}_{M_n} \otimes\varphi \| < \infty$, in which case we refer to this supremum as the c.b.\ (completely bounded) norm, written $\| \varphi \|_{\rm cb}$. We say that a map $\varphi : V\to W$ between operator spaces is a {\it $\lambda$-c.b.-isomorphism} if it is invertible and the c.b.\ norms of $\varphi$ and $\varphi^{-1}$ satisfy $\| \varphi \|_{\rm cb} \| \varphi^{-1} \|_{\rm cb} \leq\lambda$. The minimal tensor product of operator spaces $V\subseteq {\mathcal B} ({\mathcal H} )$ and $W\subseteq {\mathcal B} ({\mathcal K} )$, written $V\otimes W$, is the closure of the algebraic tensor product of $V$ and $W$ under its canonical embedding into ${\mathcal B} ({\mathcal H}\otimes_2 {\mathcal K} )$. When applied to closed subspaces of commutative $C^*$-algebras, the minimal operator space tensor product is the same as the Banach space injective tensor product (ignoring the matricial data). An {\it operator system} is a closed unital self-adjoint subspace of a unital $C^*$-algebra. Let $V$ be an operator system and $I,I'$ nonempty finite sets with $I \subseteq I'$. We regard $V^{\otimes I}$ as an operator subsystem of $V^{\otimes I'}$ under the complete order embedding given by $v\mapsto v\otimes 1\in V^{\otimes I} \otimes V^{\otimes I' \setminus I} = V^{\otimes I'}$. With respect to such inclusions we define $V^{\otimes J}$ for any index set $J$ as a direct limit over the finite subsets of $J$. For general references on operator spaces and operator systems see \cite{ER,Pis}. A collection $\{ (A_{i,0} , A_{i,1} )\}_{i\in I}$ of pairs of disjoint subsets of a set $X$ is said to be {\it independent} if for every finite set $F\subseteq I$ and $\sigma\in \{ 0,1 \}^F$ we have $\bigcap_{i\in F} A_{i,\sigma (i)} \neq\emptyset$. Our basic concept of dynamical independence and its quantization are given by the following definitions. 
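For finite collections, the combinatorial independence condition just stated can be checked by brute force over all patterns $\sigma \in \{0,1\}^F$. The following sketch (our own illustration, not part of the paper) verifies it for the coordinate cylinders $A_{i,\epsilon} = \{x\in \{0,1\}^3 : x_i = \epsilon\}$, which form an independent collection since each pattern $\sigma$ is realized by the point $\sigma$ itself.

```python
from itertools import product

def is_independent(pairs):
    # pairs: list of (A_{i,0}, A_{i,1}), each a set of points.
    # Independence: for every sigma in {0,1}^k the intersection of the
    # chosen sets A_{i,sigma(i)} is nonempty.
    for sigma in product((0, 1), repeat=len(pairs)):
        chosen = [pairs[i][eps] for i, eps in enumerate(sigma)]
        if not set.intersection(*chosen):
            return False
    return True

# Coordinate cylinders in {0,1}^3: A_{i,eps} = {x : x[i] == eps}.
points = list(product((0, 1), repeat=3))
cylinders = [({x for x in points if x[i] == 0},
              {x for x in points if x[i] == 1}) for i in range(3)]
```

Here `is_independent(cylinders)` returns `True`, while a collection in which some pattern of choices has empty intersection, such as two copies of the pair `({0}, {1})`, fails the test.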
\begin{definition}\label{D-indep} Let $(X,G)$ be a dynamical system. For a tuple ${\overrightarrow{A}} = (A_1, A_2, \dots, A_k )$ of subsets of $X$, we say that a set $J\subseteq G$ is an {\it independence set} for ${\overrightarrow{A}}$ if for every nonempty finite subset $I\subseteq J$ and function $\sigma : I\to \{1,2, \dots, k\}$ we have $\bigcap_{s\in I} s^{-1} A_{\sigma(s)} \neq \emptyset$, where $s^{-1} A$ for a set $A\subseteq X$ refers to the inverse image $\{ x\in X : sx \in A \}$. \end{definition} \begin{definition}\label{D-tensorindep} Let $(A,G,\alpha )$ be a $C^*$-dynamical system, and let $V$ be a finite-dimensional operator subsystem of $A$. Associated to every tuple $(s_1 , \dots , s_k )$ of elements of $G$ is the linear dynamical multiplication map $V^{\otimes [1,k]} \to A$ determined on elementary tensors by $a_1 \otimes\cdots\otimes a_k \mapsto \alpha_{s_1} (a_1 ) \cdots \alpha_{s_k} (a_k )$. For $\lambda\geq 1$, we say that a tuple of elements of $G$ is a {\it $\lambda$-contraction tuple} for $V$ if the associated multiplication map has c.b.\ norm at most $\lambda$, a {\it $\lambda$-expansion tuple} for $V$ if the multiplication map has an inverse with c.b.\ norm at most $\lambda$, and a {\it $\lambda$-independence tuple} for $V$ if the multiplication map is a $\lambda$-c.b.-isomorphism onto its image. A subset $J\subseteq G$ is said to be a {\it $\lambda$-contraction set}, {\it $\lambda$-expansion set}, or {\it $\lambda$-independence set} if every tuple of distinct elements in $J$ is of the corresponding type. \end{definition} \begin{remark}\label{R-comm} If $A$ is a commutative $C^*$-algebra, then for each of the linear maps in Definition~\ref{D-tensorindep} the norm and c.b.\ norm coincide. Moreover each such multiplication map $\varphi : V^{\otimes [1,k]} \to A$ is contractive, and thus $\varphi$ is a $\lambda$-isomorphism onto its image if and only if it has a bounded inverse of norm at most $\lambda$.
Thus in this case a $\lambda$-expansion tuple (resp.\ set) is the same as a $\lambda$-independence tuple (resp.\ set). \end{remark} Notice that if a tuple $(s_1 , \dots , s_k )$ of elements of $G$ is a $\lambda$-expansion tuple for a finite-dimensional operator subsystem $V\subseteq A$ with $\lambda$ close to one, then the associated multiplication map $\varphi$ gives rise to an operator space matrix norm on $V^{\otimes [1,k]}$ which is close to being a cross norm in the sense that for all $a_1 , \dots , a_k \in V$ we have \begin{align*} \lambda^{-1} \| a_1 \| \cdots \| a_k \| &= \lambda^{-1} \| a_1 \otimes\cdots\otimes a_k \| \leq \| \varphi (a_1 \otimes\cdots\otimes a_k ) \| \\ &= \| \alpha_{s_1} (a_1 ) \cdots \alpha_{s_k} (a_k ) \| \leq \| a_1 \| \cdots \| a_k \| . \end{align*} \begin{definition}\label{D-contr} Let $(A,G,\alpha )$ be a $C^*$-dynamical system, and let $V$ be a finite-dimensional operator subsystem of $A$. For $\varepsilon\geq 0$, we define ${\rm Con} (\alpha , V, \varepsilon )$ to be the set of all $s\in G_0$ such that $(e,s)$ is a $(1+\varepsilon )$-contraction tuple, ${\rm Exp} (\alpha , V, \varepsilon )$ to be the set of all $s\in G_0$ such that $(e,s)$ is a $(1+\varepsilon )$-expansion tuple, and ${\rm Ind} (\alpha , V, \varepsilon )$ to be the set of all $s\in G_0$ such that $(e,s)$ is a $(1+\varepsilon )$-independence tuple. For a collection ${\mathcal C}$ of subsets of $G_0$ which is closed under taking supersets, we say that the system $(A,G,\alpha )$ or the action $\alpha$ is {\it ${\mathcal C}$-contractive} if for every finite-dimensional operator subsystem $V\subseteq A$ and $\varepsilon > 0$ the set ${\rm Con} (\alpha , V, \varepsilon )$ is a member of ${\mathcal C}$, {\it ${\mathcal C}$-expansive} if the same criterion holds with respect to the set ${\rm Exp} (\alpha , V, \varepsilon )$, and {\it ${\mathcal C}$-independent} if the criterion holds with respect to the set ${\rm Ind} (\alpha , V, \varepsilon )$. 
\end{definition} We will mainly be interested in applying Definition~\ref{D-contr} to the collections ${\mathcal{N}}$, ${\mathcal{I}}$, and ${\mathcal{TS}}$ as defined above. Note that, by Remark~\ref{R-comm}, for dynamical systems ${\mathcal C}$-contractivity is automatic and ${\mathcal C}$-expansivity and ${\mathcal C}$-independence amount to the same thing. To verify ${\mathcal C}$-contractivity, ${\mathcal C}$-expansivity, or ${\mathcal C}$-independence it suffices to check the condition in question over a collection of operator subsystems with dense union. This fact is recorded in Proposition~\ref{P-dense} and rests on the following perturbation lemma, which is a slight variation on Lemma~2.13.2 of \cite{Pis} with essentially the same proof. \begin{lemma}\label{L-perturb} Let $V$ be a finite-dimensional operator space with Auerbach system\linebreak $\{ (v_i , f_i ) \}_{i=1}^n$ and let $\varepsilon > 0$ be such that $\varepsilon (1+\varepsilon ) < 1$. Let $W$ be an operator space, $\rho : V\to W$ a linear map which is an isomorphism onto its image with $\max (\| \rho \|_{\rm cb} , \| \rho^{-1} \|_{\rm cb} ) < 1+\varepsilon$, and $w_1 , \dots ,w_n$ elements of $W$ such that $\| \rho (v_i ) - w_i \| < \dim (V)^{-1} \varepsilon$ for each $i=1,\dots , n$. Then the linear map $\varphi : V\to W$ determined by $\varphi (v_i ) = w_i$ for $i=1,\dots ,n$ is an isomorphism onto its image with $\| \varphi \|_{\rm cb} < 1 + 2\varepsilon$ and $\| \varphi^{-1} \|_{\rm cb} < (1+\varepsilon )(1 - \varepsilon (1+\varepsilon ))^{-1}$. \end{lemma} \begin{proof} Define the linear map $\delta : V\to W$ by $\delta (v) = \sum_{i=1}^n f_i (v) (w_i - \rho (v_i ))$ for all $v\in V$. Since the norm and c.b.\ norm of a rank-one linear map coincide, we have $\| \delta \|_{\rm cb} \leq \sum_{i=1}^n \| f_i \| \| w_i - \rho (v_i ) \| < \varepsilon$, and thus since $\varphi = \rho + \delta$ we see that $\| \varphi \|_{\rm cb} < 1 + 2\varepsilon$. 
The bound $\| \varphi^{-1} \|_{\rm cb} < (1+\varepsilon )(1 - \varepsilon (1+\varepsilon ))^{-1}$ follows by applying Lemma~2.13.1 of \cite{Pis}. \end{proof} \begin{proposition}\label{P-dense} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. Let $\mathfrak{S}$ be a collection of finite-dimensional operator subsystems of $A$ with the property that for every finite set $\Omega\subseteq A$ and $\varepsilon > 0$ there is a $V\in\mathfrak{S}$ such that $\Omega\subseteq_\varepsilon V$. Let ${\mathcal C}$ be a collection of subsets of $G_0$ which is closed under taking supersets. Then $\alpha$ is ${\mathcal C}$-contractive if and only if for every $V\in\mathfrak{S}$ and $\varepsilon > 0$ the set ${\rm Con} (\alpha , V, \varepsilon )$ is a member of ${\mathcal C}$, ${\mathcal C}$-expansive if and only if for every $V\in\mathfrak{S}$ and $\varepsilon > 0$ the set ${\rm Exp} (\alpha , V, \varepsilon )$ is a member of ${\mathcal C}$, and ${\mathcal C}$-independent if and only if for every $V\in\mathfrak{S}$ and $\varepsilon > 0$ the set ${\rm Ind} (\alpha , V, \varepsilon )$ is a member of ${\mathcal C}$. \end{proposition} \begin{proof} We will show the third equivalence, the first two involving similar perturbation arguments. For the nontrivial direction, suppose that for every $V\in\mathfrak{S}$ and $\varepsilon > 0$ the set ${\rm Ind} (\alpha , V, \varepsilon )$ is a member of ${\mathcal C}$. Let $V$ be a finite-dimensional operator subsystem of $A$ and let $\varepsilon > 0$. Let $\delta$ be a positive real number to be further specified below. By assumption we can find a $W\in\mathfrak{S}$ such that the set ${\rm Ind} (\alpha , W, \delta )$ is a member of ${\mathcal C}$ and the unit ball of $V$ is approximately included to within $\delta$ in $W$. 
By Lemma~\ref{L-perturb}, if $\delta$ is sufficiently small then, taking an Auerbach basis ${\mathcal S} = \{ v_i \}_{i=1}^r$ for $V$ and choosing $w_1, \dots , w_r \in W$ with $\| w_i - v_i \| < \delta$ for each $i=1, \dots , r$, the linear map $\rho : V\to W$ determined on ${\mathcal S}$ by $\rho (v_i ) = w_i$ is an isomorphism onto its image with $\| \rho \|_{\rm cb} \| \rho^{-1} \|_{\rm cb} < \sqrt[4]{1 + \varepsilon}$. Now let $s\in {\rm Ind} (\alpha , W, \delta )$. Let $\varphi : W\otimes W \to A$ be the dynamical multiplication map determined on elementary tensors by $w_1 \otimes w_2 \mapsto w_1 \alpha_s (w_2 )$. Consider the linear map $\theta : [\rho (V)\alpha_s (\rho (V))] \to [V\alpha_s (V)]$ determined by $\rho (v_1 ) \alpha_s (\rho (v_2 )) \mapsto v_1 \alpha_s (v_2 )$ for $v_1 , v_2 \in V$, which is well defined by our choice of $\rho$ and $s$. Another application of Lemma~\ref{L-perturb} shows that if $\delta$ is small enough then the composition $\theta\circ\varphi$ is an isomorphism onto its image satisfying $\| \theta\circ\varphi \|_{\rm cb} \| (\theta\circ\varphi )^{-1} \|_{\rm cb} < \sqrt{1 + \varepsilon}$. Notice now that the dynamical multiplication map $\psi : V\otimes V \to A$ determined on elementary tensors by $v_1 \otimes v_2 \mapsto v_1 \alpha_s (v_2 )$ factors as $\theta\circ\varphi\circ (\rho\otimes\rho )$, and hence \[ \| \psi \|_{\rm cb} \| \psi^{-1} \|_{\rm cb} \leq \| \theta\circ\varphi \|_{\rm cb} \| (\theta\circ\varphi )^{-1} \|_{\rm cb} \| \rho \|_{\rm cb}^2 \| \rho^{-1} \|_{\rm cb}^2 \leq 1+\varepsilon . \] Thus $s\in {\rm Ind} (\alpha , V, \varepsilon )$, and we conclude that $\alpha$ is ${\mathcal C}$-independent. \end{proof} \section{Topological entropy and combinatorial independence}\label{S-entropy comb} The local theory of topological entropy based on entropy pairs is developed in the literature for ${\mathbb Z}$-systems, but here we will consider general continuous surjective maps.
In fact one of the novel features of our combinatorial approach is that it applies not only to singly generated systems but also to actions of any discrete amenable group, as we will indicate in the last part of the section. Thus, with the exception of the last part of the section, $G$ will be one of the additive semigroups ${\mathbb Z}$ and ${\mathbb Z}_{\geq 0}$ and we will denote the generating surjective endomorphism of $X$ by $T$. Recall that the topological entropy ${\text{\rm h}}_{\text{\rm top}} (T,{\mathcal U} )$ of an open cover ${\mathcal U}$ of $X$ with respect to $T$ is defined as $\lim_{n\to\infty} \frac{1}{n}\ln N({\mathcal U} \vee T^{-1} {\mathcal U} \vee\cdots\vee T^{-n+1} {\mathcal U} )$, where $N(\cdot )$ denotes the minimal cardinality of a subcover. The topological entropy ${\text{\rm h}}_{\text{\rm top}} (T)$ of $T$ is the supremum of ${\text{\rm h}}_{\text{\rm top}} (T,{\mathcal U} )$ over all open covers ${\mathcal U}$ of $X$. A pair $(x_1 ,x_2 )\in X^2\setminus \Delta_2(X)$ is said to be an {\it entropy pair} if whenever $U_1$ and $U_2$ are closed disjoint subsets of $X$ with $x_1 \in{\rm int} (U_1 )$ and $x_2 \in{\rm int} (U_2 )$, the open cover $\{ U_1^{\rm c} , U_2^{\rm c} \}$ has positive topological entropy. More generally, following \cite{TEE} we call a tuple ${\overrightarrow{x}} = (x_1 , \dots, x_k)\in X^k\setminus \Delta_k(X)$ an {\it entropy tuple} if whenever $U_1, \dots , U_l$ are closed pairwise disjoint neighbourhoods of the distinct points in the list $x_1 , \dots , x_k$, the open cover $\{ U_1^{\rm c} , \dots , U_l^{\rm c} \}$ has positive topological entropy. \begin{definition}\label{D-IE-pair} We call a tuple ${\overrightarrow{x}} = (x_1 , \dots, x_k )\in X^k$ an {\it IE-tuple} (or an {\it IE-pair} in the case $k=2$) if for every product neighbourhood $U_1 \times\cdots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1, \dots, U_k)$ has an independence set of positive density. 
We denote the set of ${\rm IE}$-tuples of length $k$ by ${\rm IE}_k (X,G)$. \end{definition} The argument in the second paragraph of the proof of Theorem~3.2 in \cite{GW} shows the following lemma, which will be repeatedly useful for converting finitary density statements to infinitary ones. \begin{lemma}\label{L-density} A tuple ${\overrightarrow{A}} = (A_1 , \dots , A_k )$ of subsets of $X$ has an independence set of positive density if and only if there exists a $d>0$ such that for any $M>0$ we can find an interval $I$ in $G$ with $|I|\geq M$ and an independence set $J$ for ${\overrightarrow{A}}$ contained in $I$ for which $|J|\geq d|I|$. \end{lemma} On page 684 of \cite{GW} a pair $(x_1 , x_2 )\in X^2\setminus \Delta_2(X)$ is defined to be an {\it E-pair} if for any neighbourhoods $U_1$ and $U_2$ of $x_1$ and $x_2$, respectively, there exists a $\delta > 0$ and a $k_0$ such that for every $k\geq k_0$ there exists a sequence $0\leq n_1 < n_2 < \cdots < n_k < k/\delta$ such that $\bigcap_{j=1}^k T^{-n_j} (U_{\sigma(j)} )\neq\emptyset$ for every $\sigma\in\{ 1,2 \}^k$. From Lemma~\ref{L-density} we see that E-pairs are the same as nondiagonal IE-pairs. We now proceed to establish some facts concerning the set of IE-pairs as captured in Propositions~\ref{P-basic E}, \ref{P-f to pair E}, and Theorem~\ref{T-product E}. For ${\mathbb Z}$-systems these can be proved via a local variational principle route by combining the known analogues for entropy pairs (see \cite{Disj,ETJ,DEBS}) with Huang and Ye's characterization (in different terminology) of entropy pairs as IE-pairs \cite{LVRA}, which itself will be reproved and extended to cover noninvertible surjective continuous maps in Theorem~\ref{T-E vs IE} (see also the end of the section for actions of discrete amenable groups). Let $k\ge 2$ and let $Z$ be a nonempty finite set. 
Let $\mathcal{U}$ be the cover of $\{0,1, \dots , k\}^Z=\prod_{z\in Z}\{0,1, \dots , k\}$ consisting of subsets of the form $\prod_{z\in Z}\{i_z\}^{\rm c}$, where $1\le i_z\le k$ for each $z\in Z$. For $S\subseteq \{0,1,\dots , k\}^Z$ we write $F_S$ to denote the minimal number of sets in $\mathcal{U}$ one needs to cover $S$. The following result plays a key role in our combinatorial approach to the study of ${\rm IE}$-tuples (and ${\rm IN}$-tuples in Section~\ref{S-null}). The idea of considering the property (i) in its proof comes from the proof of Theorem~4 in \cite{MV}. \begin{lemma}\label{L-key} Let $k\ge 2$ and let $b>0$ be a constant. There exists a constant $c>0$ depending only on $k$ and $b$ such that for every finite set $Z$ and $S\subseteq \{0, 1, \dots , k\}^{Z}$ with $F_S\ge k^{b|Z|}$ there exists a $W\subseteq Z$ with $|W|\ge c|Z|$ and $S|_W\supseteq \{1, \dots , k\}^W$. \end{lemma} \begin{proof} Pick a constant $0<\lambda<\frac{1}{3}$ such that $b_1:=b+\log_k(1-\lambda)>0$. Set $b_2:= \log_k \!\big( \frac{1-\lambda}{\lambda} \big) >0$ and $t=(2b_2 )^{-1} b_1\log_2 \!\big( \frac{k+1}{k} \big)$. Denote by $H_S$ the number of non-empty subsets $W$ of $Z$ such that $S|_W\supseteq \{1, \dots , k\}^W$. By Stirling's formula there is a constant $c>0$ depending only on $t$ (and hence depending only on $k$ and $b$) such that $\sum_{1\le j\le cn}\binom{n}{j}<2^{tn}$ for all $n$ large enough. If $H_S\ge 2^{t|Z|}$ and $|Z|$ is large enough, then there exists a $W\subseteq Z$ for which $|W|\ge c|Z|$ and $S|_W \supseteq \{1, \dots , k\}^W$. It thus suffices to show that $H_S\ge 2^{t|Z|}$. Set $S_0=S$ and $Z_0=Z$. 
We shall construct $Z_0\supseteq Z_1\supseteq \dots\supseteq Z_m$ for $m:=\lceil t|Z|/\log_2\frac{k+1}{k} \rceil$ and $S_j\subseteq \{0, 1, \dots , k\}^{Z_j}$ for all $1\le j\le m$ with the following properties: \begin{enumerate} \item[(i)] $H_{S_{j-1}}\ge \frac{k+1}{k}H_{S_j}$ for all $1\le j\le m$, \item[(ii)] $F_{S_j}\ge k^{b|Z|}(1-\lambda)^{|Z\setminus Z_j|-j}\lambda^j$ for all $0\le j\le m$. \end{enumerate} Suppose that we have constructed $Z_0, \dots , Z_j$ and $S_0, \dots , S_j$ with the above properties for some $0\le j<m$. If we have a $Q\subseteq Z_j$ and a $\sigma\in \{1,\dots , k\}^{Z_j\setminus Q}$ such that $F_{S_{j, \sigma}}\ge (1-\lambda)^{|Z_j\setminus Q|}F_{S_j}$, where $S_{j, \sigma}$ is the restriction of $\{f\in S_j:f(x)\neq \sigma(x) \mbox { for all } x\in Z_j\setminus Q\}$ to $Q$, then \begin{align*} |Q| &\ge \log_k(F_{S_{j, \sigma}}) \\ &\ge (|Z_j\setminus Q|)\log_k(1-\lambda)+\log_k(F_{S_j}) \\ &\ge (|Z_j\setminus Q|+|Z\setminus Z_j|-j) \log_k(1-\lambda)+b|Z|+j\log_k{\lambda} \\ &= (|Z|-|Q|-j)(b+\log_k(1-\lambda))+b(|Q|+j)+j\log_k{\lambda} \\ &= (|Z|-|Q|)b_1+b|Q|- jb_2 \\ &\ge (|Z|-|Q|)b_1+b|Q|-b_2 \Big( 1+t|Z|/\log_2 \!\Big( \frac{k+1}{k} \Big) \Big) \\ &= (|Z|-|Q|)b_1+b|Q|-b_2-\frac{b_1}{2}|Z| \end{align*} and hence $|Q|\ge \frac{|Z|b_1/2-b_2}{1+b_1-b}\ge 2$ when $|Z|$ is large enough. Take $Q$ and $\sigma$ as above such that $|Q|$ is minimal. Then $|Q|\ge 2$. Pick a $z\in Q$, and set $S_{j,i}$ to be the restriction of $\{f\in S_{j, \sigma}:f(z)=i\}$ to $Z_{j+1}:=Q\setminus \{z\}$ for $i=1,\dots , k$. Then \[F_{S_{j, i}}\ge \lambda(1-\lambda)^{|Z_j\setminus Q|}F_{S_j}\ge k^{b|Z|}(1-\lambda)^{|Z\setminus Z_{j+1}|-(j+1)}\lambda^{j+1}\] for $i=1,\dots , k$ (here one needs the fact that $|Q|\ge 2$). Now take $S_{j+1}$ to be one of the sets among $S_{j, 1}, \dots , S_{j, k}$ with minimal $H$-value, say $S_{j, l}$.
For each $1\le i\le k$ denote by $B_i$ the set of nonempty subsets $W\subseteq Z_{j+1}$ such that $S_{j, i}|_W\supseteq \{1,\dots , k\}^W$. Note that $H_{S_j}\ge |\bigcup^k_{i=1} B_i|+|\bigcap^k_{i=1} B_i|$. If $|\bigcup^k_{i=1} B_i|\ge \frac{k+1}{k}|B_l|$, then $H_{S_j}\ge \frac{k+1}{k}|B_l|=\frac{k+1}{k}H_{S_{j+1}}$. Suppose that $|\bigcup^k_{i=1} B_i|< \frac{k+1}{k}|B_l|$. Note that \begin{gather*} \big| {\textstyle\bigcap}^k_{i=1} B_i \big|\cdot k+ \big( \big| {\textstyle\bigcup}^k_{i=1} B_i \big| - \big| {\textstyle\bigcap}^k_{i=1} B_i \big| \big) (k-1) \ge \sum^k_{i=1}|B_i| \ge k|B_l|. \end{gather*} Thus \[ \big| {\textstyle\bigcap}^k_{i=1} B_i \big| \ge k|B_l|- (k-1)\big| {\textstyle\bigcup}^k_{i=1} B_i \big| \ge k|B_l|-(k-1)\cdot \frac{k+1}{k}|B_l|=\frac{1}{k}|B_l|.\] Therefore $H_{S_j}\ge |\bigcup^k_{i=1} B_i|+|\bigcap^k_{i=1} B_i|\ge |B_l|+\frac{1}{k}|B_l| = \frac{k+1}{k}H_{S_{j+1}}$. Hence the properties (i) and (ii) are also satisfied for $j+1$. A simple calculation shows that $k^{b|Z|}(1-\lambda)^{|Z\setminus Z_m|-m}\lambda^m\ge k^{b|Z|}(1-\lambda)^{|Z|-m}\lambda^m>1$ when $|Z|$ is large enough. Thus $F_{S_m}>1$ according to property (ii) and hence $H_{S_m}\ge 1$. By property (i) we have $H_S\ge (\frac{k+1}{k})^mH_{S_m}\ge (\frac{k+1}{k})^m\ge 2^{t|Z|}$. This completes the proof of the lemma. \end{proof} \noindent For a cover ${\mathcal U}$ of $X$ we denote by ${\text{\rm h}_\text{\rm c}}(T, {\mathcal U})$ the combinatorial entropy of ${\mathcal U}$ with respect to $T$, which is defined using the same formula as for the topological entropy of open covers. \begin{lemma}\label{L-E vs IE} Let $k\ge 2$. Let $U_1, \dots , U_k$ be pairwise disjoint subsets of $X$ and set ${\mathcal U}=\{U^{{\rm c}}_1, \dots , U^{{\rm c}}_k\}$. Then ${\overrightarrow{U}}:=(U_1, \dots , U_k)$ has an independence set of positive density if and only if ${\text{\rm h}_\text{\rm c}}(T, {\mathcal U})>0$. \end{lemma} \begin{proof} The ``only if'' part is trivial.
For the ``if'' part, set $b:={\text{\rm h}_\text{\rm c}}(T, \mathcal{U})$ and consider the map $\varphi_n: X\rightarrow \{0, 1, \dots , k\}^{\{1, \dots, n\}}$ defined by \[ (\varphi_n(x))(j) = \left\{ \begin{array}{l@{\hspace*{7mm}}l} i, & \mbox{if } T^{j}(x)\in U_i \mbox{ for some } 1\le i\le k , \\ 0, & \mbox{otherwise.} \end{array} \right. \] Then $N\big( \bigvee^n_{i=1}T^{-i}\mathcal{U} \big) =F_{\varphi_n(X)}$, and so $F_{\varphi_n(X)}>e^{\frac{b}{2}n}$ for all large enough $n$. By Lemma~\ref{L-key} there exists a constant $c>0$ depending only on $k$ and $b$ such that $\varphi_n(X)|_W\supseteq \{1,\dots , k\}^W$ for some $W\subseteq \{1, \dots, n\}$ with $|W|\ge cn$ when $n$ is large enough. Then $W$ is an independence set for the tuple ${\overrightarrow{U}}$. Thus by Lemma~\ref{L-density} ${\overrightarrow{U}}$ has an independence set of positive density. \end{proof} We will need the following consequence of Karpovsky and Milman's generalization of the Sauer-Perles-Shelah lemma \cite{Sauer,Shelah,KM}. It also follows directly from Lemma~\ref{L-key}. \begin{lemma}[\cite{KM}]\label{L-KM} Given $k\geq 2$ and $\lambda > 1$ there is a constant $c>0$ such that, for all $n\in {\mathbb N}$, if $S\subseteq \{1, 2, \dots , k \}^{\{1, 2, \dots , n\}}$ satisfies $|S|\geq ((k-1)\lambda )^n$ then there is an $I\subseteq \{1, 2, \dots , n\}$ with $|I|\geq cn$ and $S|_I = \{1, 2,\dots , k \}^I$. \end{lemma} The case $|Z|=1$ of the following lemma appeared in \cite{Pajor83}. \begin{lemma}\label{L-decomposition E combinatorics} Let $Z$ be a finite set such that $Z\cap \{1, 2, 3\} = \emptyset$.
There exists a constant $c>0$ depending only on $|Z|$ such that, for all $n\in {\mathbb N}$, if $S\subseteq (Z\cup \{ 1, 2\} )^{\{1, 2, \dots , n \}}$ is such that $\Gamma_n |_S : S\to (Z\cup \{3\})^{\{1, 2, \dots , n\}}$ is bijective, where $\Gamma_n : (Z\cup \{ 1, 2\})^{\{ 1, 2, \dots , n\}}\to (Z\cup \{3\})^{\{1, 2, \dots , n\}}$ converts the coordinate values $1$ and $2$ to $3$, then there is some $I\subseteq \{ 1, 2, \dots , n\}$ with $|I|\ge cn$ and either $S|_I \supseteq (Z\cup \{1\})^I$ or $S|_I\supseteq (Z\cup \{ 2\})^I$. \end{lemma} \begin{proof} The case $Z=\emptyset$ is trivial. So we assume that $Z$ is nonempty. Fix a (small) constant $0<t<\frac{1}{8}$ which we shall determine later. Denote by $S'$ the elements of $S$ taking value in $Z$ on at least $(1-4t)n$ many coordinates in $\{1, 2, \dots , n\}$. Then $|S'|\ge \binom{n}{3tn}|Z|^{(1-4t)n}$ when $n$ is large enough. Note that each $\sigma\in S'$ takes values $1$ or $2$ on at most $4tn$ many coordinates in $\{1, 2, \dots , n\}$. For $i=1, 2$, set $S'_i$ to be the elements in $S'$ taking value $i$ on at most $2tn$ many coordinates in $\{1, 2, \dots , n\}$. Then $\max(|S'_1|, |S'_2|)\ge \frac{1}{2}|S'|\ge \frac{1}{2}\binom{n}{3tn}|Z|^{(1-4t)n}$ when $n$ is large enough. Without loss of generality, we may assume that $|S'_1|\ge \frac{1}{2}\binom{n}{3tn}|Z|^{(1-4t)n}$. For each $\beta\subseteq \{1, 2, \dots , n\}$ with $|\beta|\le 2tn$ denote by $S^{\beta}$ the set of elements in $S'_1$ taking value $1$ exactly on $\beta$. The number of different $\beta$ is \[ \sum_{0\le m\le 2tn}\binom{n}{m}\le (2tn+1)\binom{n}{2tn} . \] By Stirling's formula we can find $M_1, M_2 > 0$ such that for all $n\in {\mathbb N}$ we have \[ \binom{n}{2tn}\le \frac{M_1}{\sqrt{tn}} \bigg( \frac{1}{(1-2t)^{1-2t}(2t)^{2t}} \bigg)^n \] and \[\binom{n}{3tn}\ge \frac{M_2}{\sqrt{tn}} \bigg( \frac{1}{(1-3t)^{1-3t}(3t)^{3t}} \bigg)^n . 
\] Therefore, when $n$ is large enough we can find some $\beta$ such that \[|S^{\beta}|\ge \frac{\frac{1}{2}\binom{n}{3tn}|Z|^{(1-4t)n}}{(2tn+1)\binom{n}{2tn}} \ge M|Z|^{(1-4t)n}(f(t))^n\frac{1}{2tn+1} , \] where $M:=\frac{M_2}{2M_1}>0$ and $f(t):=\frac{(1-2t)^{1-2t}(2t)^{2t}}{(1-3t)^{1-3t}(3t)^{3t}}$. Note that $\lim_{t\to 0^+}t^{-1} \ln{f(t)} = \infty$. Fix $t$ such that $f(t)\ge(2|Z|)^{4t}$. Then there is some $n_0 > 0$ such that \[|S^{\beta}|\ge M (|Z|2^{4t})^n\frac{1}{2tn+1}\ge (|Z|2^t)^n\] for all $n\ge n_0$. By Lemma~\ref{L-KM} there exists a constant $c>0$ depending only on $|Z|$ such that for all $n\ge n_0$ we can find an $I\subseteq \{1, 2, \dots , n\}\setminus \beta$ for which $|I|\ge c|\{1, 2, \dots , n\}\setminus \beta|\ge c(1-2t)n$ and $S^{\beta}|_I = (Z\cup \{2\})^I$. Now we may reset $c$ to be $\min(c(1-2t), 1/n_0)$. \end{proof} As an immediate consequence of Lemma~\ref{L-decomposition E combinatorics} we have: \begin{lemma}\label{L-decomposition indep} Let $c$ be as in Lemma~\ref{L-decomposition E combinatorics} for $|Z|=k-1$. Let ${\overrightarrow{A}}=(A_1, \dots, A_k)$ be a $k$-tuple of subsets of $X$ and suppose $A_1 = A_{1, 1}\cup A_{1,2}$. If $H$ is a finite independence set for ${\overrightarrow{A}}$, then there exists some $I\subseteq H$ such that $|I|\ge c|H|$ and $I$ is an independence set for $(A_{1,1}, \dots, A_k)$ or $(A_{1,2}, \dots, A_k)$. \end{lemma} The next lemma follows directly from Lemmas~\ref{L-density} and \ref{L-decomposition indep}. \begin{lemma}\label{L-decomposition E} Let ${\overrightarrow{A}}= (A_1, \dots, A_k )$ be a $k$-tuple of subsets of $X$ which has an independence set of positive density. Suppose that $A_1 = A_{1, 1}\cup A_{1, 2}$. Then at least one of the tuples $(A_{1,1},\dots, A_k)$ and $(A_{1,2}, \dots, A_k)$ has an independence set of positive density. 
\end{lemma} \begin{proposition}\label{P-basic E} The following are true: \begin{enumerate} \item Let $(A_1, \dots , A_k )$ be a tuple of closed subsets of $X$ which has an independence set of positive density. Then there exists an IE-tuple $(x_1,\dots , x_k)$ with $x_j\in A_j$ for all $1\le j\le k$. \item ${\rm IE}_2(X, T)\setminus \Delta_2(X)$ is nonempty if and only if ${\text{\rm h}}_{\text{\rm top}}(T)>0$. \item ${\rm IE}_k(X,T)$ is a closed $T\times\cdots\times T$-invariant subset of $X^k$. \item Let $\pi:(X, T)\rightarrow (Y,S)$ be a factor map. Then $(\pi\times\cdots\times \pi )({\rm IE}_k(X, T))={\rm IE}_k(Y, S)$. \item Suppose that $Z$ is a closed $T$-invariant subset of $X$. Then ${\rm IE}_k(Z, T|_Z )\subseteq {\rm IE}_k(X, T)$. \end{enumerate} \end{proposition} \begin{proof} Assertion (1) follows from Lemma~\ref{L-decomposition E} and a simple compactness argument. One can easily show that ${\text{\rm h}}_{\text{\rm top}}(T)>0$ if and only if there is some two-element open cover ${\mathcal U}=\{U_1, U_2\}$ of $X$ with positive topological entropy (see for instance the proof of Proposition~1 in \cite{Disj}). Then assertion (2) follows directly from assertion (1) and Lemma~\ref{L-E vs IE}. Assertions (3)--(5) either are trivial or follow directly from assertion (1). \end{proof} We remark that (2) and (4) of Proposition~\ref{P-basic E} show that the topological Pinsker factor (i.e., the largest zero-entropy factor) of $(X,T)$ is obtained from the closed invariant equivalence relation on $X$ generated by the set of IE-pairs (cf.\ \cite{ZEF}). In \cite{DEBS} we introduced the notion of ${\rm CA}$ entropy for isometric automorphisms of a Banach space. One can easily extend the definition to isometric endomorphisms of Banach spaces and check that Theorem~3.5 in \cite{DEBS} holds in this general setting. 
In particular, ${\text{\rm h}}_{\text{\rm top}}(T)>0$ if and only if there exists an $f\in C(X)$ with an $\ell_1$-isomorphism set (with respect to the induced $^*$-endomorphism $f\mapsto f\circ T$) of positive density. Thus if $f$ is a function in $C(X)$ with an $\ell_1$-isomorphism set of positive density then the dynamical factor of $(X,T)$ spectrally generated by $f$ has positive entropy and hence by Proposition~\ref{P-basic E}(2) has a nondiagonal ${\rm IE}$-pair, from which we infer using Proposition~\ref{P-basic E}(4) that $(X,T)$ has an ${\rm IE}$-pair $(x,y)$ such that $f(x) \neq f(y)$. This yields one direction of the following proposition. The other direction follows by a standard argument which appears in the proof of the Rosenthal-Dor $\ell_1$ theorem \cite{Dor}. \begin{proposition}\label{P-f to pair E} Let $f\in C(X)$. Then $f$ has an $\ell_1$-isomorphism set of positive density if and only if there is an IE-pair $(x, y)$ with $f(x)\neq f(y)$. \end{proposition} We next describe in Proposition~\ref{P-Xm} the set ${\rm IE}_1(X, T)$, which can also be identified with ${\rm IE}_k(X, T)\cap \Delta_k(X)$ for each $k$. The following lemma is mentioned on page 35 of \cite{BGH}. For completeness we provide a proof here. \begin{lemma}\label{L-closed positive density} Let $A$ be a closed subset of $X$. Then $A$ has an independence set of positive density if and only if there exists a $\mu \in M(X, T)$ with $\mu(A)>0$. \end{lemma} \begin{proof} Suppose $\mu(A)>0$ for some $\mu \in M(X, T)$. Let $H$ be a finite subset of $G$. Denote by $M$ the maximum over all $x\in X$ of the cardinality of the set of $s\in H$ such that $x\in s^{-1}A$. Then $\sum_{s\in H}1_{s^{-1}A}\le M$. Thus \[ M\ge \int \sum_{s\in H}1_{s^{-1}A}\, d\mu=|H|\mu(A).\] Therefore we can find a subset $I\subseteq H$ such that $|I|\ge |H|\mu(A)$ and $I$ is an independence set for $A$. By Lemma~\ref{L-density} $A$ has an independence set of positive density. This proves the ``if'' part. 
Conversely, suppose that $A$ has an independence set $H$ with density $d>0$. We consider the case $G={\mathbb Z}$. The case $G={\mathbb Z}_{\ge 0}$ is dealt with similarly. Let $y\in \bigcap_{s\in H}s^{-1}A$ (this intersection is nonempty by compactness, since $A$ is closed and every finite subintersection is nonempty by the definition of an independence set). Then \[ \liminf_{n\to \infty} \bigg( \frac{1}{2n+1}\sum^n_{j=-n}\delta_{T^j(y)} \bigg) (A)\ge d. \] Take an accumulation point $\mu$ of the sequence $\{(2n+1)^{-1} \sum^n_{j=-n}\delta_{T^j(y)}\}_{n\in {\mathbb N}}$ in $M(X)$. Then $\mu\in M(X, T)$ and $\mu(A)\ge d$. This proves the ``only if'' part. \end{proof} As a consequence of Lemma~\ref{L-closed positive density} we have: \begin{proposition}\label{P-Xm} The set ${\rm IE}_1(X, T)$ is the closure of the union of ${\rm supp}(\mu)$ over all $\mu \in M(X, T)$. \end{proposition} We describe next in Theorem~\ref{T-product E} the IE-tuples of a product system. The corresponding statement for entropy pairs was proved by Glasner in \cite[Theorem 3.(7)(9)]{mu} (see also \cite[Theorem 19.24]{ETJ}) for a product of metrizable systems using a local variational principle. For any tuple ${\overrightarrow{A}}=(A_1, \dots, A_k)$ of subsets of $X$, denote by ${\mathcal P}_{{\overrightarrow{A}}}$ the set of all independence sets for ${\overrightarrow{A}}$. Identifying subsets of $G$ with elements of $\Omega_2:=\{0, 1\}^{G}$ by taking indicator functions, we may think of ${\mathcal P}_{{\overrightarrow{A}}}$ as a subset of $\Omega_2$. Endow $\Omega_2$ with the shift induced from addition by $1$ on $G$. Clearly ${\mathcal P}_{{\overrightarrow{A}}}$ is closed and shift-invariant (i.e., the image of ${\mathcal P}_{{\overrightarrow{A}}}$ under the shift coincides with ${\mathcal P}_{{\overrightarrow{A}}}$). We say a closed shift-invariant subset ${\mathcal P}\subseteq \Omega_2$ has {\it positive density} if it has an element with positive density. Then by definition ${\overrightarrow{A}}$ has an independence set of positive density exactly when ${\mathcal P}_{{\overrightarrow{A}}}$ has positive density.
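(As a simple illustration, added here and not part of the original text: let $T$ be the full shift on $X=\{1, \dots , k\}^{G}$ and let ${\overrightarrow{A}}=(A_1, \dots , A_k)$ with $A_i=\{x\in X: x_0=i\}$. For every finite $J\subseteq G$ and every $\sigma\in \{1, \dots , k\}^J$ there is an $x\in X$ with $x_s=\sigma(s)$ for all $s\in J$, and such an $x$ lies in $\bigcap_{s\in J}s^{-1}A_{\sigma(s)}$. Hence every subset of $G$ is an independence set for ${\overrightarrow{A}}$, i.e., ${\mathcal P}_{{\overrightarrow{A}}}=\Omega_2$, which clearly has positive density.)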
We also say ${\mathcal P}$ is {\it hereditary} if any subset of any element in ${\mathcal P}$ is an element in ${\mathcal P}$. Note that ${\mathcal P}_{{\overrightarrow{A}}}$ is hereditary. For $s\in G$ we denote by $[s]$ the set of subsets of $G$ containing $s$. \begin{lemma}\label{L-positive density subsets} Let ${\mathcal P}$ be a closed shift-invariant subset of $\Omega_2$. Then ${\mathcal P}$ has positive density if and only if $\mu({\mathcal P}\cap [0])>0$ for some shift-invariant Borel probability measure $\mu $ on ${\mathcal P}$. \end{lemma} \begin{proof} Notice that ${\mathcal P}$ has positive density if and only if ${\mathcal P}\cap [0]$ has an independence set of positive density. The lemma then follows from Lemma~\ref{L-closed positive density}. \end{proof} \begin{lemma}\label{L-intersection of positive density} Let ${\mathcal P}$ and ${\mathcal Q}$ be hereditary closed shift-invariant subsets of $\Omega_2$ with positive density. Then ${\mathcal P}\cap {\mathcal Q}$ is also a hereditary closed shift-invariant subset of $\Omega_2$ with positive density. \end{lemma} \begin{proof} Clearly ${\mathcal P}\cap {\mathcal Q}$ is a hereditary closed shift-invariant subset of $\Omega_2$. By Lemma~\ref{L-positive density subsets} there is a shift-invariant Borel probability measure $\mu$ (resp.\ $\nu$) on ${\mathcal P}$ (resp.\ ${\mathcal Q}$) such that $\mu ({\mathcal P} \cap [0])>0$ (resp.\ $\nu ({\mathcal Q}\cap [0])>0$). Then $(\mu\times \nu )(U)>0$ for $U:=({\mathcal P}\cap [0])\times ({\mathcal Q}\cap [0])$. By Lemma~\ref{L-closed positive density}, $U$ has an independence set $H$ of positive density. Take a pair \[(x', y')\in \bigcap\nolimits_{s\in H}s^{-1}U= ({\mathcal P}\times {\mathcal Q})\cap \Big( \Big( \bigcap\nolimits_{s\in H}[s]\Big)\times\Big( \bigcap\nolimits_{s\in H}[s] \Big) \Big) .\] Then $H\subseteq x', y'$. Since both ${\mathcal P}$ and ${\mathcal Q}$ are hereditary, $H\in {\mathcal P}\cap {\mathcal Q}$. This finishes the proof. 
\end{proof} \begin{theorem}\label{T-product E} Let $(X, T)$ and $(Y, S)$ be dynamical systems. Then \[ {\rm IE}_k(X\times Y, T\times S)={\rm IE}_k(X, T)\times {\rm IE}_k(Y, S).\] \end{theorem} \begin{proof} By Proposition~\ref{P-basic E}.(4) the left hand side is included in the right hand side. The other direction follows from Lemma~\ref{L-intersection of positive density}. \end{proof} We next record the fact that, as an immediate consequence of Lemma~\ref{L-E vs IE}, nondiagonal ${\rm IE}$-tuples are the same as entropy tuples. This was established for ${\mathbb Z}$-systems in \cite{LVRA} using a local variational principle. \begin{theorem}\label{T-E vs IE} Let $(x_1, \dots , x_k )$ be a tuple in $X^k\setminus \Delta_k(X)$ with $k\ge 2$. Then $(x_1, \dots , x_k)$ is an entropy tuple if and only if it is an IE-tuple. \end{theorem} In \cite[Corollary 2.4(2)]{BGKM} Blanchard, Glasner, Kolyada, and Maass showed using the variational principle that positive entropy implies Li-Yorke chaos. We will next give an alternative proof of this fact using IE-tuples. The notion of Li-Yorke chaos was introduced in \cite{BGKM} and is based on ideas from \cite{LY}. In the case that $X$ is a metric space with metric $\rho$, a pair $(x_1, x_2)$ of points in $X$ is said to be a {\it Li-Yorke pair} (with modulus $\delta$) if \[ \limsup_{n\to \infty}\rho(T^nx_1, T^nx_2)=\delta>0 \, \, \mbox{ and } \, \, \liminf_{n\to \infty}\rho(T^nx_1, T^nx_2)=0 . \] A set $Z\subseteq X$ is said to be {\it scrambled} if all nondiagonal pairs of points in $Z$ are Li-Yorke. The system $(X, T)$ is said to be {\it Li-Yorke chaotic} if $X$ contains an uncountable scrambled set. We begin by establishing the following lemma, in which we use the notation established just before Lemma~\ref{L-positive density subsets}. 
For any subset ${\mathcal P}$ of $\Omega_2$, we say that a finite subset $J\subset G$ has {\it positive density with respect to ${\mathcal P}$} if there exists a $K\subseteq G$ with positive density such that $(K-K)\cap (J-J)=\{0\}$ and $K+J\in {\mathcal P}$. We say that a subset $J\subseteq G$ has {\it positive density with respect to ${\mathcal P}$} if every finite subset of $J$ has positive density with respect to ${\mathcal P}$. \begin{lemma}\label{L-positive density double} Let ${\mathcal P}$ be a hereditary closed shift-invariant subset of $\Omega_2$ with positive density. Then there exists a $J\subseteq {\mathbb Z}_{\ge 0}$ with positive density which also has positive density with respect to ${\mathcal P}$. \end{lemma} \begin{proof} Denote by ${\mathcal Q}$ the set of subsets of ${\mathbb Z}_{\ge 0}$ which have positive density with respect to ${\mathcal P}$. Then ${\mathcal Q}$ is a hereditary closed shift-invariant subset of $\{ 0,1 \}^{{\mathbb Z}_{\ge 0}}$. Take $H\in {\mathcal P}$ with density $d>0$. Fix $0<d'<\frac{d}{3}$. We claim that if $n$ is large enough so that $0<\frac{d+d'}{2d' n}-\frac{2}{n+2}=:b$ and $d'n\ge 2$, then there exists an $S\in {\mathcal Q}$ such that $|S|=c_n:=\lfloor d' n\rfloor$ and $S\subseteq [0, n]$. When $m$ is large enough, we have $|[-m, m]\cap H|>\frac{d+d'}{2}m$. Arrange the elements of $[-m, m]\cap H$ in increasing order as $a_1<a_2< \dots <a_k$, where $k>\frac{d+d'}{2}m$. Consider the numbers $a_{jc_n}-a_{(j-1)c_n+1}$ for $1\le j\le d_{n, m}:=\Big\lfloor \frac{\frac{d+d'}{2}m}{c_n}\Big\rfloor$. Set $M_{n, m}=|\{1\le j\le d_{n, m}: a_{jc_n}-a_{(j-1)c_n+1}\le n\}|.$ Then \[ 2m+1\ge a_k-(a_1-1)\ge (d_{n, m}-M_{n, m})(n+2).\] Thus \[M_{n, m}\ge d_{n, m}-\frac{2m+1}{n+2}\ge \frac{\frac{d+d'}{2}m}{d' n}-1-\frac{2m+1}{n+2}>\frac{b}{2}m \] when $m$ is large enough. Note that if $a_{jc_n}-a_{(j-1)c_n+1}\le n$, then $0<a_{(j-1)c_n+2}-a_{(j-1)c_n+1}<\dots <a_{jc_n}-a_{(j-1)c_n+1}$ are contained in $[0, n]$. 
Consequently, when $m$ is large enough, there exist an $S_m\subseteq [0, n]$ with $|S_m|=c_n$ and a $W_m\subseteq [-m, m]\cap H$ with $|W_m|\ge \frac{b}{2\binom{n}{c_n-1}}m$ such that $(W_m-W_m)\cap (S_m-S_m)=\{0\}$ and $W_m+S_m\subseteq H$. Since ${\mathcal P}$ is hereditary, $W_m+S_m\in {\mathcal P}$. Then there exists an $S\subseteq [0, n]$ which coincides with infinitely many of the $S_m$. Note that the collection ${\mathcal W}_S$ consisting of the sets $W\subseteq G$ such that $(W-W)\cap (S-S)=\{0\}$ and $W+S\in {\mathcal P}$ is a closed shift-invariant subset of $\Omega_2$. By Lemma~\ref{L-density} ${\mathcal W}_S\cap [0]$ has an independence set of positive density. Thus $S\in {\mathcal Q}$. By Lemma~\ref{L-density} again we see that ${\mathcal Q}\cap [0]$ has an independence set of positive density. In other words, ${\mathcal Q}$ has positive density. This finishes the proof. \end{proof} \begin{theorem}\label{T-LY} Suppose that $X$ is metrizable with a metric $\rho$. Suppose that $k\ge 2$ and ${\overrightarrow{x}}=(x_1, \dots, x_k)$ is an ${\rm IE}$-tuple in $X^k\setminus \Delta_k(X)$. For each $1\le j\le k$, let $A_j$ be a neighbourhood of $x_j$. Then there exist a $\delta>0$ and a Cantor set $Z_j\subseteq A_j$ for each $j=1,\dots ,k$ such that the following hold: \begin{enumerate} \item every nonempty finite tuple of points in $Z:=\bigcup_jZ_j$ is an ${\rm IE}$-tuple; \item for all $m\in {\mathbb N}$, distinct $y_1, \dots, y_m \in Z$, and $y'_1, \dots, y'_m \in Z$ one has \[ \liminf_{n\to \infty}\max_{1\le i\le m} \rho(T^ny_i, y'_i)=0.\] \end{enumerate} \end{theorem} \begin{proof} We may assume that the $A_j$ are closed and pairwise disjoint. 
We shall construct, via induction on $m$, closed nonempty subsets $A_{\sigma}$ for $\sigma\in \Sigma_m:=\{1, 2, \dots, k\}^{\{1, 2, \dots, m\}}$ with the following properties: \begin{enumerate} \item[(a)] when $m=1$, $A_\sigma = A_{\sigma (1)}$ for all $\sigma\in \Sigma_m$, \item[(b)] when $m\ge 2$, $A_{\sigma}\subseteq A_{\sigma |_{\{1, 2, \dots, m-1\}}}$ for all $\sigma\in \Sigma_m$, \item[(c)] when $m\ge 2$, for every map $\gamma:\Sigma_m\rightarrow \Sigma_{m-1}$ there exists an $h_{\gamma}\in G$ with $h_\gamma \geq m$ such that $h_{\gamma}(A_\sigma )\subseteq A_{\gamma(\sigma )}$ for all $\sigma\in \Sigma_m$, \item[(d)] when $m\ge 2$, ${\rm diam}(A_{\sigma})\le 2^{-m}$ for all $\sigma\in \Sigma_m$, \item[(e)] for every $m$, the collection $\{A_\sigma : \sigma\in \Sigma_m\}$, ordered into a tuple, has an independence set of positive density. \end{enumerate} Suppose that we have constructed such $A_\sigma$ over all $m$. Then the $A_\sigma$ for all $\sigma$ in a given $\Sigma_m$ are pairwise disjoint because of property (c). Set $\Sigma = \{1, 2, \dots, k\}^{{\mathbb N}}$. Properties (b) and (d) imply that for each $\sigma \in \Sigma$ we have $\bigcap_m A_{\sigma |_{\{1, 2, \dots, m\}}}= \{z_\sigma \}$ for some $z_\sigma \in X$ and that $Z_j=\{z_\sigma :\sigma\in \Sigma \text{ and } \, \sigma (1)=j\}$ is a Cantor set for each $j=1,\dots ,k$. Property (a) implies that $Z_j\subseteq A_j$. Condition (1) follows from properties (d) and (e). Condition (2) follows from properties (c) and (d). We now construct the $A_\sigma$. Define $A_\sigma$ for $\sigma\in \Sigma_1$ according to property (a). By assumption property (e) is satisfied for $m=1$. Assume that we have constructed $A_\sigma$ for all $\sigma\in \Sigma_j$ and $j=1,\dots ,m$ with the above properties. Set ${\overrightarrow{A}}_m$ to be $\{A_\sigma : \sigma\in \Sigma_m\}$ ordered into a tuple. 
By Lemma~\ref{L-positive density double} there exists an $H\subseteq {\mathbb Z}_{\ge 0}$ with positive density which also has positive density with respect to ${\mathcal P}_{{\overrightarrow{A}}_m}$. Then for any nonempty finite subset $J$ of $H$, the sets $A_{J, \omega}:=\bigcap_{h\in J}h^{-1} A_{\omega (h)}$ for all $\omega\in (\Sigma_m)^J$, taken together as a tuple, have an independence set of positive density. Replacing $H$ by $H-h$ for the smallest $h\in H$ we may assume that $0\in H$ and hence may require that $0\in J$. For each $\gamma\in (\Sigma_m )^{\Sigma_{m+1}}$ take an $h_{\gamma}\in J$ with $h_\gamma \geq m+1$. As we can take $|J|$ to be arbitrarily large, we may assume that $h_{\gamma}\neq h_{\gamma'}$ for $\gamma\neq \gamma'$ in $(\Sigma_m )^{\Sigma_{m+1}}$. Take a map $f:\Sigma_{m+1}\rightarrow (\Sigma_m)^J$ such that $(f(\sigma ))(0)=\sigma |_{\{1, \dots, m\}}$ and $(f(\sigma ))(h_{\gamma})=\gamma(\sigma )$ for all $\sigma\in \Sigma_{m+1}$ and $\gamma\in (\Sigma_m )^{\Sigma_{m+1}}$. Set $A_\sigma =A_{J, f(\sigma )}$ for all $\sigma\in \Sigma_{m+1}$. Then properties (b), (c), and (e) hold for $m+1$. For each $\sigma\in\Sigma_{m+1}$ write $A_\sigma$ as the union of finitely many closed subsets each with diameter no bigger than $2^{-(m+1)}$. Using Lemma~\ref{L-decomposition E} we may replace $A_\sigma$ by one of these subsets. Consequently, property (d) is also satisfied for $m+1$. This completes the induction procedure and hence the proof of the theorem. \end{proof} The set $Z$ in Theorem~\ref{T-LY} is clearly scrambled. As a consequence of Proposition~\ref{P-basic E}(2) and Theorem~\ref{T-LY} we obtain the following corollary. \begin{corollary}\label{C-LY} Suppose that $X$ is metrizable. If ${\text{\rm h}}_{\text{\rm top}}(T)>0$, then $(X, T)$ is Li-Yorke chaotic. \end{corollary} As mentioned above, Corollary~\ref{C-LY} was proved in \cite[Corollary 2.4(2)]{BGKM} using measure-dynamical techniques.
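To illustrate the definition with a standard example (added here; it does not appear in the original text), let $X=\{0, 1\}^{{\mathbb Z}_{\ge 0}}$ with the shift $T$ and the metric $\rho(u, v)=2^{-\min\{j\,:\, u_j\neq v_j\}}$ for $u\neq v$. Let $x$ be the constant sequence $000\cdots$ and let $y$ be given by $y_j=1$ exactly when $2^{2i}\le j<2^{2i+1}$ for some $i\ge 0$. For $n=2^{2i}$ the points $T^nx$ and $T^ny$ differ in the $0$th coordinate, so $\rho(T^nx, T^ny)=1$, while for $n=2^{2i+1}$ they agree on the first $2^{2i+1}$ coordinates, so $\rho(T^nx, T^ny)\le 2^{-2^{2i+1}}$. Thus $(x, y)$ is a Li-Yorke pair with modulus $1$.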
Denote by ${\rm LY}(X, T)$ the set of Li-Yorke pairs in $X\times X$. Employing a local variational principle, Glasner showed in \cite[Theorem 4.(3)]{mu} (see also \cite[Theorem 19.27]{ETJ}) that for ${\mathbb Z}$-systems the set of proximal entropy pairs is dense in the set of entropy pairs. As a consequence of Theorem~\ref{T-LY} we have the following improvement: \begin{corollary}\label{C-proximal} Suppose that $X$ is metrizable. Then ${\rm LY}(X, T)\cap {\rm IE}_2(X, T)$ is dense in ${\rm IE}_2(X, T)\setminus \Delta_2(X)$. \end{corollary} The next corollary is a direct consequence of Theorem~\ref{T-LY} in the case $X$ is metrizable and follows from the proof of Theorem~\ref{T-LY} in the general case. \begin{corollary}\label{C-no isolated} Let $W$ be a neighbourhood of an ${\rm IE}$-pair $(x_1, x_2)$ in $X^2\setminus \Delta_2(X)$. Then $W\cap {\rm IE}_2(X, T)$ is not of the form $\{x_1\}\times Y$ for any subset $Y$ of $X$. In particular, ${\rm IE}_2(X, T)$ does not have isolated points. \end{corollary} Corollary~\ref{C-no isolated}, as stated for entropy pairs, was proved by Blanchard, Glasner, and Host in \cite[Theorem 6]{BGH}. We round out this section by explaining how the facts about IE-tuples captured in Propositions~\ref{P-basic E}, \ref{P-f to pair E}, and \ref{P-Xm} and Theorems~\ref{T-product E} and \ref{T-E vs IE} apply to actions of any discrete amenable group. So for the remainder of the section $G$ will be a discrete amenable group. For a finite $K\subseteq G$ and $\delta>0$ we denote by $M(K, \delta)$ the set of all nonempty finite subsets $F$ of $G$ which are $(K, \delta)$-invariant in the sense that \[ |\{s\in F: Ks\subseteq F\}|\ge (1-\delta )|F| . \] According to the F{\o}lner characterization of amenability, $M(K, \delta)$ is nonempty for every finite set $K\subseteq G$ and $\delta > 0$. 
This is equivalent to the existence of a {\it F{\o}lner net}, i.e., a net $\{ F_\gamma \}_\gamma$ of nonempty finite subsets of $G$ such that $\lim_\gamma | sF_\gamma \Delta F_\gamma | / | F_\gamma | = 0$ for all $s\in G$. For countable $G$ we may take this net to be a sequence, in which case we speak of a {\it F{\o}lner sequence}. For countable $G$, a sequence $\{ F_n \}_{n\in{\mathbb N}}$ of nonempty finite subsets of $G$ is said to be {\it tempered} if for some $c > 0$ we have $| \bigcup_{k=1}^{n-1} F_k^{-1} F_n | \leq c | F_n |$ for all $n\in{\mathbb N}$. By \cite[Prop.\ 1.4]{PET}, every F{\o}lner sequence has a tempered subsequence. Below we will make use of the pointwise ergodic theorem of Lindenstrauss \cite{PET}, which applies to tempered F{\o}lner sequences. The following subadditivity result was established by Lindenstrauss and Weiss for countable $G$ \cite[Theorem 6.1]{LW}. We will reduce the general case to their result. Note that there is also a version for left invariance. \begin{proposition}\label{P-subadditive} If $\varphi$ is a real-valued function which is defined on the set of finite subsets of $G$ and satisfies \begin{enumerate} \item $0\le \varphi(A)<+\infty$ and $\varphi(\emptyset)=0$, \item $\varphi(A)\le \varphi(B)$ for all $A\subseteq B$, \item $\varphi(As)=\varphi(A)$ for all finite $A\subseteq G$ and $s\in G$, \item $\varphi(A\cup B)\le \varphi(A)+\varphi(B)$ if $A\cap B=\emptyset$, \end{enumerate} then $\frac{1}{|F|} \varphi(F)$ converges to some limit $b$ as the set $F$ becomes more and more invariant in the sense that for every $\varepsilon>0$ there exist a finite set $K\subseteq G$ and a $\delta>0$ such that $\big| \frac{1}{|F|} \varphi(F) -b \big| <\varepsilon$ for all $(K, \delta)$-invariant sets $F\subseteq G$. \end{proposition} \begin{proof} For a finite $K\subseteq G$ and $\delta>0$ we denote by $Z(K, \delta)$ the closure of $\big\{ \frac{1}{|F|}\varphi(F): F\in M(K, \delta)\big\}$. 
The collection of sets $Z(K, \delta)$ has the finite intersection property and clearly the conclusion of the proposition is equivalent to the set $Z:=\bigcap_{(K, \delta)}Z(K, \delta)$ containing only one point. Suppose that $x$ and $y$ are distinct points of $Z$. Then one can find a sequence $\{ (A_n, B_n) \}_{n\in{\mathbb N}}$ of pairs of finite subsets of $G$ such that $\{A_n\}_{n\in {\mathbb N}}$ and $\{B_n\}_{n\in {\mathbb N}}$ are both F{\o}lner sequences for the subgroup $H$ of $G$ generated by $\bigcup_{n\in {\mathbb N}} (A_n\cup B_n)$ and $\max\! \big( \big| x-\frac{1}{|A_n|}\varphi(A_n)\big| , \big| y-\frac{1}{|B_n|}\varphi(B_n) \big| \big) <1/n$ for each $n\in {\mathbb N}$. Since subgroups of discrete amenable groups are amenable \cite[Proposition 0.16]{Paterson}, $H$ is amenable. This contradicts the result of Lindenstrauss and Weiss. Thus $Z$ contains only one point. \end{proof} Let $(X, G)$ be a dynamical system. We define the topological entropy of $(X,G)$ by first defining the topological entropy of a finite open cover of $X$ using Proposition~\ref{P-subadditive} and then taking a supremum over all finite open covers (this was originally introduced in \cite{JMO} without the subadditivity result). For a finite tuple ${\overrightarrow{A}}=(A_1, \dots , A_k)$ of subsets of $X$, Proposition~\ref{P-subadditive} also applies to the function $\varphi_{{\overrightarrow{A}}}$ given by \[ \varphi_{{\overrightarrow{A}}}(F)=\max\{|F\cap J|: J \mbox{ is an independence set for } {\overrightarrow{A}}\} . \] This permits us to define the {\it independence density} $I({\overrightarrow{A}} )$ of ${\overrightarrow{A}}$ as the limit of $\frac{1}{|F|}\varphi_{{\overrightarrow{A}}}(F)$ as $F$ becomes more and more invariant, providing a numerical measure of the dynamical independence of ${\overrightarrow{A}}$. \begin{proposition}\label{P-ps} Let $(X, G)$ be a dynamical system. Let ${\overrightarrow{A}}=(A_1, \dots , A_k)$ be a tuple of subsets of $X$. Let $c>0$. 
Then the following are equivalent: \begin{enumerate} \item $I({\overrightarrow{A}} ) \geq c$, \item for every $\varepsilon>0$ there exist a finite set $K\subseteq G$ and a $\delta>0$ such that for every $F\in M(K, \delta)$ there is an independence set $J$ for ${\overrightarrow{A}}$ with $|J\cap F|\ge (c-\varepsilon)|F|$. \item for every finite set $K\subseteq G$ and $\varepsilon>0$ there exist an $F\in M(K, \varepsilon)$ and an independence set $J$ for ${\overrightarrow{A}}$ such that $|J\cap F|\ge (c-\varepsilon)|F|$. \end{enumerate} When $G$ is countable, these conditions are also equivalent to: \begin{enumerate} \item[4.] for every tempered F{\o}lner sequence $\{F_n\}_{n\in {\mathbb N}}$ of $G$ there is an independence set $J$ for ${\overrightarrow{A}}$ such that $\lim_{n\to \infty}\frac{|F_n\cap J|}{|F_n|}\ge c$. \item[5.] there are a tempered F{\o}lner sequence $\{F_n\}_{n\in {\mathbb N}}$ of $G$ and an independence set $J$ for ${\overrightarrow{A}}$ such that $\lim_{n\to \infty}\frac{|F_n\cap J|}{|F_n|}\ge c$. \end{enumerate} \end{proposition} \begin{proof} The equivalences (1)$\Leftrightarrow$(2)$\Leftrightarrow$(3) follow from Proposition~\ref{P-subadditive}. Assume now that $G$ is countable. Then the implications (4)$\Rightarrow$(5)$\Rightarrow$(3) are trivial. Suppose that (3) holds and let us show (4). Then one can easily show that there is a $G$-invariant Borel probability measure $\mu$ on ${\mathcal P}_{{\overrightarrow{A}}}\subseteq \{0, 1\}^G$ with $\mu([e]\cap {\mathcal P}_{{\overrightarrow{A}}})\ge c$, as in the proof of Lemma~\ref{L-closed positive density}. Here ${\mathcal P}_{{\overrightarrow{A}}}$ is defined as before Lemma~\ref{L-positive density subsets} and $\{0, 1\}^G$ is equipped with the shift given by $sx(t)=x(ts)$ for all $x\in \{0, 1\}^G$ and $s, t\in G$. Replacing $\mu$ by a suitable ergodic $G$-invariant Borel probability measure in the ergodic decomposition of $\mu$, we may assume that $\mu$ is ergodic. 
Let $\{F_n\}_{n\in {\mathbb N}}$ be a tempered F{\o}lner sequence for $G$. The pointwise ergodic theorem \cite[Theorem 1.2]{PET} asserts that $\lim_{n\to \infty} \frac{1}{| F_n |}\sum_{s\in F_n} f(sx)=\int f\, d\mu$ $\mu$-a.e.\ for every $f\in L^1(\mu)$. Setting $f$ to be the characteristic function of $[e]\cap {\mathcal P}_{{\overrightarrow{A}}}$ and taking $J$ to be the set $\{ s\in G : x(s)=1 \}$ for some $x$ satisfying the above equation, we get (4). \end{proof} Effectively extending Definition~\ref{D-IE-pair}, we call a tuple ${\overrightarrow{x}} = (x_1 , \dots, x_k )\in X^k$ an {\it IE-tuple} (or an {\it IE-pair} in the case $k=2$) if for every tuple ${\overrightarrow{U}} = (U_1,\dots, U_k)$ associated to a product neighbourhood $U_1 \times\cdots\times U_k$ of ${\overrightarrow{x}}$ the independence density $I({\overrightarrow{U}} )$ is nonzero. Denoting the set of ${\rm IE}$-tuples of length $k$ by ${\rm IE}_k (X,G)$ and replacing everywhere the existence of positive density independence sets for a tuple ${\overrightarrow{A}}$ by the nonvanishing of the independence density $I({\overrightarrow{A}} )$ in our earlier discussion for singly generated systems, we see that Propositions~\ref{P-basic E} and \ref{P-Xm} and Theorem~\ref{T-product E} continue to hold in our current setting. In particular, the topological Pinsker factor (i.e., the largest zero-entropy factor) of $(X,G)$ arises from the closed invariant equivalence relation on $X$ generated by the set of IE-pairs. Given an action $\alpha$ of $G$ by isometric automorphisms of a Banach space $V$ and an element $v\in V$, we can apply the left invariance version of Proposition~\ref{P-subadditive} to the function $\varphi_{v,\lambda}$ given by \[ \varphi_{v,\lambda} (F)=\max\{|F\cap J|: J \mbox{ is an }\ell_1 \mbox{-} \lambda \mbox{-isomorphism set for } v\} \] and define the {\it $\ell_1$-$\lambda$-isomorphism density} $I(v,\lambda )$ of $v$ as the limit of $\frac{1}{|F|}\varphi_{v,\lambda} (F)$ as $F$ becomes more and more invariant.
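As an aside for the reader who wishes to experiment, the combinatorial condition underlying independence sets (and hence IE-tuples) is easy to check by brute force on a toy model. The sketch below is our illustration only; none of its names come from the paper, and a cyclic shift on words of length $n$ merely stands in for the $G$-action:

```python
# Brute-force check of combinatorial independence on a finite toy
# system (our illustration, not part of the paper): J is an
# independence set for (A_1, ..., A_k) if every sigma: J -> {1,...,k}
# is realized, i.e. some x satisfies s.x in A_{sigma(s)} for all s in J.
from itertools import product

def is_independence_set(points, act, subsets, J):
    """act(s, x) implements the action s.x; subsets is a list of sets."""
    for sigma in product(range(len(subsets)), repeat=len(J)):
        if not any(all(act(s, x) in subsets[i] for s, i in zip(J, sigma))
                   for x in points):
            return False
    return True

# Toy stand-in for the full shift on {0,1}^Z: words of length n with
# the cyclic shift, and the two cylinder sets A_i = {x : x(0) = i}.
n = 6
points = list(product((0, 1), repeat=n))
act = lambda s, x: x[s % n:] + x[:s % n]
A = [set(x for x in points if x[0] == i) for i in (0, 1)]

print(is_independence_set(points, act, A, [0, 2, 5]))  # True
```

For the full shift every finite set of coordinates is an independence set for the pair of cylinder sets, in agreement with the independence density of this pair being maximal.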
Defining the CA entropy of $\alpha$ by taking a limit supremum of averages as in Section~2 of \cite{DEBS} but this time along a F{\o}lner net, one can check that the analogue of Theorem~3.5 of \cite{DEBS} holds. In particular, the topological entropy of $(X,G)$ is nonzero if and only if there exists an $f\in C(X)$ with nonvanishing $\ell_1$-$\lambda$-isomorphism density for some $\lambda\geq 1$. Then we obtain the analogue of Proposition~\ref{P-f to pair E}, i.e., a function $f\in C(X)$ has nonvanishing $\ell_1$-$\lambda$-isomorphism density for some $\lambda\geq 1$ if and only if there is an ${\rm IE}$-pair $(x,y)$ in $X\times X$ with $f(x) \neq f(y)$. Finally, if we define entropy tuples in the same way as for singly generated systems, then Theorem~\ref{T-E vs IE} still holds. \section{Topological entropy and tensor product independence}\label{S-entropy tensor} In this section we will see how combinatorial independence in the context of entropy translates into the language of tensor products, with a hint at how the theory might thereby be extended to noncommutative $C^*$-dynamical systems. As in the previous section, our dynamical systems here will have acting semigroup ${\mathbb Z}$ or ${\mathbb Z}_{\geq 0}$ with generating endomorphism $T$. To start with, we remark that, given a dynamical system $(X,T)$ and denoting by $\alpha_T$ the induced $^*$-endomorphism $f\mapsto f\circ T$ of $C(X)$, the following conditions are equivalent: \begin{enumerate} \item ${\text{\rm h}}_{\text{\rm top}} (T) > 0$, \item ${\rm IE}_2 (X,T)\setminus\Delta_2 (X) \neq \emptyset$, \item there is an $f\in C(X)$ with an $\ell_1$-isomorphism set of positive density, \item there is a 2-dimensional operator subsystem $V\subseteq C(X)$ and a $\lambda\geq 1$ such that $V$ has a $\lambda$-independence set of positive density for $\alpha_T$. 
\end{enumerate} The equivalence (1)$\Leftrightarrow$(3) was established in \cite{DEBS} for ${\mathbb Z}$-systems and is similarly seen to be valid for endomorphisms, and (1)$\Leftrightarrow$(2) is Proposition~\ref{P-basic E}(2) of Section~\ref{S-entropy comb}. To show (2)$\Rightarrow$(4) we simply need to take a pair $(A,B)$ of disjoint closed subsets of $X$ with an independence set of positive density and consider the operator system generated by a norm-one self-adjoint function in $C(X)$ taking the constant values $1$ and $-1$ on $A$ and $B$, respectively. Finally, the implication (4)$\Rightarrow$(3) is a consequence of the following lemma, which expresses in a form suited to our context a well-known phenomenon observed by Rosenthal in the proof of his $\ell_1$ theorem \cite{Ros,ell1}, namely that $\ell_1$ geometry ensues in a natural way from independence. \begin{lemma}\label{L-tensor-indep} Let $V$ be an operator system and let $v$ be a nonscalar element of $V$. For each $j\in{\mathbb N}$ set $v_j = 1\otimes v\otimes 1 \in V^{\otimes [1,j-1]} \otimes V\otimes V^{\otimes [j+1,\infty )} = V^{\otimes{\mathbb N}}$. Set $Z = \{ \sigma (v) : \sigma\in S(V) \}$ where $S(V)$ is the state space of $V$. Let $0 < \eta < \frac12 {\rm diam} (Z)$. Then for all $n\in{\mathbb N}$ and complex scalars $c_1 , \dots , c_n$ we have \[ \frac{\eta}{4} \sum_{j=1}^n | c_j | \leq \bigg\| \sum_{j=1}^n c_j v_j \bigg\| . \] \end{lemma} \begin{proof} Choose points $b_1 , b_2 \in Z$ such that $| b_1 - b_2 | = {\rm diam} (Z)$, and take disks $D_1$ and $D_2$ in the complex plane centred at $b_1$ and $b_2$, respectively, with common diameter $d$ and at distance greater than $\max (2\eta ,2d)$ from each other. For each $j\in{\mathbb N}$ define the subsets \begin{align*} U_j &= \{ \sigma\in S(V^{\otimes{\mathbb N}} ) : \sigma (v_j ) \in D_1 \} ,\\ V_j &= \{ \sigma\in S(V^{\otimes{\mathbb N}} ) : \sigma (v_j ) \in D_2 \} \end{align*} of the state space $S(V^{\otimes{\mathbb N}} )$. 
Then the collection of pairs $(U_j , V_j )$ for $j\in{\mathbb N}$ is independent, and so we obtain the result by the proof in \cite{Dor}. \end{proof} What can be said about the link between $\ell_1$ structure and independence in connection with the global picture of entropy production? This question is tied to the divergence in topological dynamics between the notions of completely positive entropy and uniformly positive entropy. Recall that the dynamical system $(X,T)$ is said to have {\it completely positive entropy} or {\it c.p.e.} if each of its nontrivial factors has positive topological entropy, i.e., if its Pinsker factor is trivial \cite{UPE}. The functions in $C(X)$ which lie in the topological Pinsker algebra (the $C^*$-algebraic manifestation of the Pinsker factor) are characterized by the fact that they lack an $\ell_1$-isomorphism set of positive density for the induced $^*$-endomorphism of $C(X)$ \cite{DEBS}. Thus $(X,T)$ has c.p.e.\ precisely when every nonscalar function in $C(X)$ has an $\ell_1$-isomorphism set of positive density. The system $(X,T)$ is said to have {\it uniformly positive entropy} or {\it u.p.e.} if every nondiagonal pair in $X\times X$ is an entropy pair \cite{UPE}. More generally, following \cite{TEE} we say that $(X,T)$ has {\it u.p.e.\ of order $n$} if every tuple in $X^n \setminus\Delta_n (X)$ is an entropy tuple (see the beginning of Section~\ref{S-entropy comb}). By Theorem~\ref{T-E vs IE}, this is equivalent to every $n$-tuple of nonempty open subsets of $X$ having an independence set of positive density. Finally, we say that $(X,T)$ has {\it u.p.e.\ of all orders} if it has u.p.e.\ of order $n$ for each $n\geq 2$. U.p.e.\ implies c.p.e., but the converse is false. Also, in \cite{LVRA} it is shown that, for every $n\geq 2$, u.p.e.\ of order $n$ does not imply u.p.e.\ of order $n+1$. 
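The $\ell_1$ phenomenon expressed by Lemma~\ref{L-tensor-indep} can be sanity-checked numerically in the simplest commutative situation. The sketch below is our illustration, not part of the paper: for an independent family of $\{-1,1\}$-valued functions and real scalars the lower bound even holds with constant $1$; the constant $\eta /4$ in the lemma accommodates complex scalars and general operator systems.

```python
# Numerical sanity check (our illustration, not from the paper) of the
# Rosenthal-type l^1 lower bound behind Lemma L-tensor-indep: the
# coordinate functions x -> x_j on {-1,1}^n form an independent family,
# and for real scalars the sup norm of sum_j c_j x_j equals sum_j |c_j|,
# since every sign pattern is realized by some x.
from itertools import product

def sup_norm_of_combination(c):
    """sup over x in {-1,1}^n of |sum_j c[j] * x[j]|."""
    return max(abs(sum(cj * xj for cj, xj in zip(c, x)))
               for x in product((-1, 1), repeat=len(c)))

c = [0.7, -1.3, 2.0, 0.25]
assert sup_norm_of_combination(c) >= sum(abs(v) for v in c) - 1e-12
```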
The following propositions supply functional-analytic characterizations of u.p.e.\ and u.p.e.\ of all orders, complementing the characterization of c.p.e.\ in terms of $\ell_1$-isomorphism sets. \begin{proposition}\label{P-equiv-D-indep} Let $X$ be a compact Hausdorff space and $T:X\to X$ a surjective continuous map. Then the following are equivalent: \begin{enumerate} \item $(X,T)$ has u.p.e.\ of all orders, \item for every finite set $\Omega\subseteq C(X)$ and $\delta > 0$ there is a finite-dimensional operator subsystem $V\subseteq C(X)$ which approximately includes $\Omega$ to within $\delta$ and has a $1$-independence set of positive density, \item for every finite set $\Omega\subseteq C(X)$ and $\delta > 0$ there is a finite-dimensional operator subsystem $V\subseteq C(X)$ which approximately includes $\Omega$ to within $\delta$ and has a $\lambda$-independence set of positive density for some $\lambda\geq 1$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow$(2). Let $\Omega$ be a finite subset of $C(X)$ and let $\delta > 0$. Then we can construct a partition of unity $\{ g_1 , \dots , g_k \}$ in $C(X)$ such that the operator system $V$ that it spans approximately includes $\Omega$ to within $\delta$ and for each $i=1,\dots ,k$ we have $g_i (x) = 1$ for all $x$ in some nonempty open set $U_i$. By (1) and Theorem~\ref{T-E vs IE}, the tuple $(U_1 , \dots , U_k )$ has an independence set $I$ of positive density. Let $(s_1 , \dots , s_n )$ be a tuple of distinct elements of $I$ and let $\varphi : V^{\otimes [1,n]} \to C(X)$ be the contractive linear map determined on elementary tensors by $f_1 \otimes\cdots\otimes f_n \mapsto (f_1 \circ T^{s_1} ) \cdots (f_n \circ T^{s_n} )$. Then the collection $\{ (g_{\sigma (1)} \circ T^{s_1} ) \cdots (g_{\sigma (n)} \circ T^{s_n} ) : \sigma\in \{ 1,\dots ,k\}^{\{ 1,\dots ,n\}} \}$ is an effective $k^n$-element partition of unity of $X$ and hence is isometrically equivalent to the standard basis of $\ell_\infty^{k^n}$.
Since the subset $\{ g_{\sigma (1)} \otimes\cdots\otimes g_{\sigma (n)} : \sigma\in \{ 1,\dots ,k\}^{\{ 1,\dots ,n\}} \}$ of $V^{\otimes [1,n]}$ is also isometrically equivalent to the standard basis of $\ell_\infty^{k^n}$, we conclude that $I$ is a $1$-independence set for $V$, yielding (2). (2)$\Rightarrow$(3). Trivial. (3)$\Rightarrow$(1). Let $k\geq 2$ and let $(U_1 , \dots , U_k )$ be a $k$-tuple of nonempty open subsets of $X$ with pairwise disjoint closures. To obtain (1) it suffices to show that $(U_1 , \dots , U_k )$ has an independence set of positive density. Since the sets $U_i$ have pairwise disjoint closures we can construct positive norm-one functions $g_1 , \dots , g_k \in C(X)$ such that, for each $i$, the set on which $g_i$ takes the value $1$ is a closed subset of $U_i$. By (3), given a $\delta > 0$ we can find an operator subsystem $V\subseteq C(X)$ such that $V$ has a $\lambda$-independence set $J$ of positive density for some $\lambda\geq 1$ and there exist $f_1 , \dots , f_k \in V$ for which $\| f_i - g_i \| < \delta$ for each $i=1, \dots ,k$. Since for each $i$ continuity implies that $g_i^{-1} ((\rho , 1]) \subseteq U_i$ for some $\rho\in (0,1)$, by taking $\delta$ small enough we may ensure that the norm-one self-adjoint elements $h_i = \| (f_i + f_i^* )/2 \|^{-1} (f_i + f_i^* )/2 \in V$ for $i=1, \dots , k$ are defined and for some $\theta\in (0,1)$ satisfy $h_i^{-1} ((\theta ,1]) \subseteq U_i$ for each $i$. Choose an $r\in{\mathbb N}$ large enough so that $\theta^r < \lambda^{-1}$ and a $b>0$ small enough so that $\frac{k}{k-1} 2^{-2b} > 1$. By Stirling's formula there is a $c\in (0,1/2)$ such that $\binom{n}{cn} \leq 2^{bn}$ for all $n\in{\mathbb N}$. 
By Lemma~\ref{L-KM} there is a $d>0$ such that, for all $n\in{\mathbb N}$, if $\Gamma\subseteq\{ 1,\dots ,k \}^{\{ 1,\dots ,n \}}$ and $| \Gamma | \geq (k2^{-2b} )^n = \big( (k-1)\big( \frac{k}{k-1} 2^{-2b} \big) \big)^n$ then there exists a set $I\subseteq \{ 1,\dots , n\}$ such that $|I| \geq dn$ and $\Gamma |_I = \{ 1,\dots ,k \}^I$. By shifting $J$ we may assume that it contains $0$, and so by positive density there is an $a>0$ such that for each $m\in{\mathbb Z}$ the set $J_m := J \cap \{ -m, -m+1 , \dots , m\}$ has cardinality at least $am$. Now suppose we are given an $m\in{\mathbb N}$ with $m\geq \max (r/ab , r/ac)$. Enumerate the elements of $J_m$ as $j_1 , \dots , j_n$. For each $\sigma\in\{ 1,\dots ,k \}^{\{ 1,\dots ,n \}}$ we define a set $K_\sigma \subseteq \{ 1,\dots ,n \}$ as follows. Pick an $x\in X$ such that $h_{\sigma (1)} (T^{j_1} x)\cdots h_{\sigma (n)} (T^{j_n} x) = \| (h_{\sigma (1)} \circ T^{j_1} )\cdots (h_{\sigma (n)} \circ T^{j_n} ) \|$. Since $\| (h_{\sigma (1)} \circ T^{j_1} )\cdots (h_{\sigma (n)} \circ T^{j_n} ) \| \geq \lambda^{-1} > \theta^r$, there exists a $K_\sigma \subseteq \{ 1, \dots , n\}$ with $|K_\sigma | = n-r$ such that $h_{\sigma (i)} (T^{j_i} x) > \theta$ for each $i\in K_\sigma$. Hence $x\in\bigcap_{i\in K_\sigma} T^{-j_i} U_{\sigma (i)}$, so that $\bigcap_{i\in K_\sigma} T^{-j_i} U_{\sigma (i)}$ is nonempty. Now since $r\leq cam \leq cn$, we have \[ \big| \big\{ K_\sigma : \sigma\in\{ 1,\dots ,k \}^{\{ 1,\dots ,n \}} \big\} \big| \leq \binom{n}{r} \leq 2^{bn} . \] We can thus find a $K\in \big\{ K_\sigma : \sigma\in \{ 1,\dots ,k \}^{\{ 1,\dots ,n \}} \big\}$ such that the set $\mathcal{R}$ of all $\sigma\in\{ 1,\dots ,k \}^{\{ 1,\dots ,n \}}$ for which $K_\sigma = K$ has cardinality at least $k^n / 2^{bn}$. Then the set of all restrictions of elements of $\mathcal{R}$ to $K$ has cardinality at least $| \mathcal{R} | / 2^{n-|K|} \geq 2^{-r} (k2^{-b} )^n \geq (k2^{-2b} )^n$.
It follows that there is a set $I_m \subseteq J_m$ with $|I_m | \geq d|J_m | \geq dam$ such that the set of all restrictions of elements of $\mathcal{R}$ to $I_m$ is $\{ 1,\dots ,k \}^{I_m}$. Since $\bigcap_{i\in I_m} T^{-j_i} U_{\sigma (i)} \neq\emptyset$ for every $\sigma\in\{ 1,\dots ,k \}^{I_m}$, $I_m$ is an independence set for $(U_1 , \dots , U_k )$. By Lemma~\ref{L-density} we conclude that $(U_1 , \dots , U_k )$ has an independence set of positive density, finishing the proof. \end{proof} We denote by ${\mathcal S}_2 (X)$ the collection of $2$-dimensional operator subsystems of $C(X)$ equipped with the metric given by the Hausdorff distance between unit balls. \begin{proposition}\label{P-upe} For a ${\mathbb Z}$-dynamical system $(X,T)$ the following are equivalent: \begin{enumerate} \item $(X,T)$ has u.p.e., \item the collection of $2$-dimensional operator subsystems of $C(X)$ which have a $1$-independence set of positive density is dense in ${\mathcal S}_2 (X)$, \item the collection of $2$-dimensional operator subsystems of $C(X)$ which have a $\lambda$-independence set of positive density for some $\lambda\geq 1$ is dense in ${\mathcal S}_2 (X)$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow$(2). Let $V$ be a $2$-dimensional operator subsystem of $C(X)$. Then $V$ has a linear basis of the form $\{ 1,g \}$ for some nonscalar $g\in C(X)$, which we may assume to be self-adjoint by replacing it with $g+g^*$ if necessary. By scaling and scalar-translating $g$, we may furthermore assume that the spectrum of $g$ is a subset of $[0,1]$ containing $0$ and $1$. Given $\delta > 0$, by a simple perturbation argument we can construct a positive norm-one $h\in C(X)$ such that $\| h-g \| < \delta$, $h(x) = 0$ for all $x$ in some nonempty open set $U_0$, and $h(x) = 1$ for all $x$ in some nonempty open set $U_1$. 
By taking $\delta$ small enough we can make the unit ball of the operator system $W = {\rm span} \{ 1,h \}$ as close as we wish to the unit ball of $V$, and so we will obtain (2) once we show that $W$ has a $1$-independence set of positive density, and this can be done as in the proof of the corresponding implication in Proposition~\ref{P-equiv-D-indep} using the partition of unity $\{ h , 1-h \}$. (2)$\Rightarrow$(3). Trivial. (3)$\Rightarrow$(1). Argue as in the proof of the corresponding implication in Proposition~\ref{P-equiv-D-indep}. \end{proof} In \cite{LVRA} Huang and Ye proposed u.p.e.\ of all orders as the most suitable topological analogue of a K-system. In view of Proposition~\ref{P-equiv-D-indep} we might then define a unital $^*$-endomorphism $\alpha$ of a unital $C^*$-algebra $A$ to be a $C^*$-algebraic K-system if for every finite set $\Omega\subseteq A$ and $\delta > 0$ there is a finite-dimensional operator subsystem $V\subseteq A$ which approximately includes $\Omega$ to within $\delta$ and has a $(1+\varepsilon )$-independence set of positive density for every $\varepsilon > 0$. This property holds prototypically for the shift on the infinite minimal tensor product $A^{\otimes{\mathbb Z}}$ for any unital $C^*$-algebra $A$, and it implies completely positive Voiculescu-Brown entropy \cite{toral}, as can be seen from Remark~3.10 of \cite{EID} and Lemma~\ref{L-tensor-indep}. In fact to deduce completely positive Voiculescu-Brown entropy all we need is that the collection of $2$-dimensional operator subsystems of $A$ which have a $\lambda$-independence set of positive density for some $\lambda\geq 1$ is dense in the collection of all $2$-dimensional operator subsystems of $A$ with respect to the metric given by Hausdorff distance between unit balls.
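The counting step in the proof of Proposition~\ref{P-equiv-D-indep} above, namely choosing $b>0$ with $\frac{k}{k-1}2^{-2b}>1$ and then, via Stirling's formula, a $c\in (0,1/2)$ with $\binom{n}{cn}\leq 2^{bn}$ for all $n$, can be sanity-checked numerically. The concrete values below are our own and are not taken from the paper:

```python
# Numerical sanity check (our values, not the paper's) of the
# Stirling-type counting step in the proof of Proposition
# P-equiv-D-indep: for k = 2, b = 0.2 one has (k/(k-1)) * 2^(-2b) > 1,
# and c = 0.02 gives binom(n, floor(c*n)) <= 2^(b*n) for n up to 2000
# (and for all n, since the binary entropy H(0.02) ~ 0.14 is below b).
from math import comb, floor

k, b, c = 2, 0.2, 0.02
assert (k / (k - 1)) * 2 ** (-2 * b) > 1
assert all(comb(n, floor(c * n)) <= 2 ** (b * n) for n in range(1, 2001))
```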
\begin{remark} Following up on the discussion in the last part of Section~\ref{S-entropy comb}, we point out that the results of this section hold more generally for any dynamical system $(X,G)$ with $G$ discrete and amenable if the relevant terms are interpreted or reformulated as follows. U.p.e., u.p.e.\ of order $n$, and u.p.e.\ of all orders are defined in the same way as for singly generated systems. For a finite-dimensional operator subsystem $V\subseteq C(X)$ and a $\lambda\geq 1$, the left invariance version of Proposition~\ref{P-subadditive} applies to the function $\varphi_{V,\lambda}$ given by $\varphi_{V,\lambda} (F)=\max\{|F\cap J|: J \mbox{ is a } \lambda\mbox{-independence set for } V\}$, so that we may define the {\it $\lambda$-independence density} $I(V,\lambda )$ of $V$ as the limit of $\frac{1}{|F|}\varphi_{V,\lambda} (F)$ as $F$ becomes more and more invariant. We can then replace ``$\lambda$-independence set of positive density'' everywhere above by ``nonvanishing $\lambda$-independence density''. \end{remark} \section{Topological sequence entropy, nullness, and independence}\label{S-null} In this section we will examine the local theory of topological sequence entropy and nullness from the viewpoint of independence. Sequence entropy is developed in the literature for single continuous maps, but here we will work in the framework of a general dynamical system $(X,G)$. Following Goodman \cite{SE}, for a sequence $\mathfrak{s} = \{ s_n \}_{n\in{\mathbb N}}$ in $G$ we define the topological sequence entropy of $(X,G)$ with respect to $\mathfrak{s}$ and a finite open cover ${\mathcal U}$ of $X$ by \[ {\text{\rm h}}_{\text{\rm top}} (X,{\mathcal U} ; \mathfrak{s} ) = \limsup_{n\to\infty} \frac1n \log N\bigg( \bigvee_{i=1}^n s_i^{-1} {\mathcal U} \bigg) \] where $N(\cdot )$ denotes the minimal cardinality of a subcover.
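As a toy instance of this definition (our illustration, not from the paper): for the full shift on $\{0,1\}^{\mathbb Z}$ with the clopen partition ${\mathcal U}$ given by the two cylinder sets at coordinate $0$, the join $\bigvee_{i=1}^n s_i^{-1}{\mathcal U}$ has exactly $2^n$ nonempty cells for any injective sequence $\mathfrak{s}$, so ${\text{\rm h}}_{\text{\rm top}}(X,{\mathcal U};\mathfrak{s})=\log 2$. This can be confirmed by counting patterns on a finite window:

```python
# Toy count (our illustration, not from the paper): for the full shift
# on {0,1}^Z and the partition U = {[0]_0, [1]_0}, each cell of the
# join over s_1, ..., s_n is determined by the pattern
# (x[s_1], ..., x[s_n]), and every pattern occurs, so the join has
# 2^n nonempty cells whenever the s_i are distinct.
from itertools import product

def num_cells(window, seq_prefix):
    """Number of distinct patterns x -> (x[s_1], ..., x[s_n])."""
    return len({tuple(x[s] for s in seq_prefix)
                for x in product((0, 1), repeat=window)})

s = [1, 2, 4, 8]  # first terms of an injective sequence of coordinates
assert num_cells(window=9, seq_prefix=s) == 2 ** len(s)
```

Repeating a coordinate in the sequence produces fewer cells, which is why injectivity is needed for the count $2^n$.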
Following Huang, Li, Shao, and Ye \cite{NSSEP}, we call a nondiagonal pair $(x,y)\in X\times X$ a {\it sequence entropy pair} if for any disjoint closed neighbourhoods $U$ and $V$ of $x$ and $y$, respectively, there exists a sequence $\mathfrak{s}$ in $G$ such that ${\text{\rm h}}_{\text{\rm top}}(X, \{U^{{\rm c}}, V^{{\rm c}} \};\mathfrak{s} )>0$. More generally, following Huang, Maass, and Ye \cite{HMY} we call a tuple ${\overrightarrow{x}} = (x_1 , \dots, x_k)\in X^k\setminus \Delta_k(X)$ a {\it sequence entropy tuple} if whenever $U_1, \dots , U_l$ are closed pairwise disjoint neighbourhoods of the distinct points in the list $x_1 , \dots , x_k$, the open cover $\{ U_1^{\rm c} , \dots , U_l^{\rm c} \}$ has positive topological sequence entropy with respect to some sequence in $G$. We say that $(X, G)$ is {\it null} if ${\text{\rm h}}_{\text{\rm top}} (X, {\mathcal U}; \mathfrak{s})=0$ for all open covers ${\mathcal U}$ of $X$ and all sequences $\mathfrak{s}$ in $G$. We say that $(X, G)$ is {\it nonnull} otherwise. Then the basic facts recorded in \cite[Proposition 2.1]{NSSEP} and \cite[Proposition 3.2]{HMY} also hold in our general setting. \begin{definition}\label{D-IN-pair} We call a tuple ${\overrightarrow{x}} = (x_1 , \dots, x_k )\in X^k$ an {\it IN-tuple} (or an {\it IN-pair} in the case $k=2$) if for any product neighbourhood $U_1 \times\cdots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1, \dots, U_k)$ has arbitrarily large finite independence sets. We denote the set of ${\rm IN}$-tuples of length $k$ by ${\rm IN}_k (X,G)$. \end{definition} We will show in Proposition~\ref{P-SE vs IN} that sequence entropy tuples are exactly nondiagonal ${\rm IN}$-tuples. First we record some basic facts pertaining to IN-tuples. 
For a cover ${\mathcal U}$ of $X$ and a sequence $\mathfrak{s}$ in $G$ we denote by ${\text{\rm h}_\text{\rm c}}(X, {\mathcal U}; \mathfrak{s})$ the combinatorial sequence entropy of ${\mathcal U}$ with respect to $\mathfrak{s}$, which is defined using the same formula as for the topological sequence entropy of open covers. From Lemma~\ref{L-key} we infer the following analogue of Lemma~\ref{L-E vs IE}. \begin{lemma}\label{L-SE vs IN} Let $k\ge 2$. Let $U_1, \dots , U_k$ be disjoint subsets of $X$ and set ${\mathcal U}=\{U^{{\rm c}}_1, \dots , U^{{\rm c}}_k\}$. Then ${\overrightarrow{U}}:=(U_1, \dots , U_k)$ has arbitrarily large finite independence sets if and only if\linebreak ${\text{\rm h}_\text{\rm c}}(X, {\mathcal U}; \mathfrak{s})>0$ for some sequence $\mathfrak{s}$ in $G$. \end{lemma} From Lemma~\ref{L-decomposition indep} we infer the following analogue of Lemma~\ref{L-decomposition E}. \begin{lemma}\label{L-decomposition N} Let ${\overrightarrow{A}}= (A_1, \dots, A_k )$ be a $k$-tuple of subsets of $X$ with arbitrarily large finite independence sets. Suppose that $A_1 = A_{1, 1}\cup A_{1, 2}$. Then at least one of the tuples $(A_{1,1},\dots, A_k)$ and $(A_{1,2}, \dots, A_k)$ has arbitrarily large finite independence sets. \end{lemma} Using Lemmas~\ref{L-SE vs IN} and~\ref{L-decomposition N} we obtain the following analogue of Proposition~\ref{P-basic E}. \begin{proposition}\label{P-basic N} The following are true: \begin{enumerate} \item Let $(A_1, \dots , A_k )$ be a tuple of closed subsets of $X$ which has arbitrarily large finite independence sets. Then there exists an IN-tuple $(x_1,\dots , x_k)$ with $x_j\in A_j$ for all $1\le j\le k$. \item ${\rm IN}_2(X, G)\setminus \Delta_2(X)$ is nonempty if and only if $(X, G)$ is nonnull. \item ${\rm IN}_k(X,G)$ is a closed $G$-invariant subset of $X^k$. \item Let $\pi:(X, G)\rightarrow (Y, G)$ be a factor map. Then $(\pi\times\cdots\times \pi )({\rm IN}_k(X, G))={\rm IN}_k(Y, G)$.
\item Suppose that $Z$ is a closed $G$-invariant subset of $X$. Then ${\rm IN}_k(Z, G)\subseteq {\rm IN}_k(X, G)$. \end{enumerate} \end{proposition} We remark that (2) and (4) of Proposition~\ref{P-basic N} show that the largest null factor of $(X,G)$ is obtained from the closed invariant equivalence relation on $X$ generated by the set of IN-pairs. \begin{corollary}\label{C-permanence N} For a fixed $G$, the class of null $G$-systems is preserved under taking factors, subsystems and products. \end{corollary} \begin{corollary}\label{C-se=1} Suppose that $(X,G)$ is nonnull. Then there exist an open cover $\mathcal{U}=\{U, V\}$ of $X$ and a sequence $\mathfrak{s}$ in $G$ such that ${\text{\rm h}}_{\text{\rm top}} (X, \mathcal{U};\mathfrak{s} )=\log{2}$. In the case $G={\mathbb Z}$ the sequence can be taken in ${\mathbb N}$. \end{corollary} \begin{definition}\label{D-null} We say that a function $f\in C(X)$ is {\it null} if there does not exist a $\lambda\geq 1$ such that $f$ has arbitrarily large finite $\ell_1$-$\lambda$-isomorphism sets. Otherwise $f$ is said to be {\it nonnull}. \end{definition} The proof of the following proposition is similar to that of Proposition~\ref{P-f to pair E}, where this time we use Theorem~5.8 in \cite{DEBS} as formulated for the more general context of isometric endomorphisms of Banach spaces. \begin{proposition}\label{P-f to pair N} Let $f\in C(X)$. Then $f$ is nonnull if and only if there is an IN-pair $(x, y)$ with $f(x)\neq f(y)$. \end{proposition} In analogy with the situation for entropy, nondiagonal IN-tuples turn out to be the same as sequence entropy tuples, as follows from Lemma~\ref{L-SE vs IN}. \begin{theorem}\label{P-SE vs IN} Let $(x_1, \dots , x_k )$ be a tuple in $X^k\setminus \Delta_k(X)$ with $k\ge 2$. Then $(x_1, \dots , x_k)$ is a sequence entropy tuple if and only if it is an IN-tuple. 
\end{theorem} In parallel with Blanchard's definition of uniform positive entropy \cite{UPE}, we say that $(X, G)$ is {\it uniformly nonnull} if ${\rm IN}_2(X, G)=X\times X$, or, equivalently, if any pair of nonempty open subsets of $X$ has arbitrarily large finite independence sets. By Theorem~\ref{P-SE vs IN} this is the same as s.u.p.e.\ as defined on page 1507 of \cite{NSSEP}. We say that $(X, G)$ is {\it completely nonnull} if the maximal null factor of $(X, G)$ is trivial. Uniform nonnullness implies complete nonnullness, but the converse is false. Blanchard showed in \cite[Example 8]{UPE} that the shift action on $X=\{a, b\}^{{\mathbb Z}}\cup \{a, c\}^{{\mathbb Z}}$ has completely positive entropy but is not transitive. Hence this action is completely nonnull but by Theorem~\ref{T-equiv-indep-prod} fails to be uniformly nonnull. We now briefly examine the tensor product viewpoint. In analogy with the case of entropy (see the beginning of Section~\ref{S-entropy tensor}), it can be shown that the system $(X,G)$ is nonnull if and only if there is a 2-dimensional operator subsystem $V\subseteq C(X)$ and a $\lambda\geq 1$ such that $V$ has arbitrarily large finite $\lambda$-independence sets for the induced $C^*$-dynamical system. Using arguments similar to those in the proof of Proposition~\ref{P-upe} (with ${\mathcal S}_2 (X)$ defined as in the discussion there) one can show: \begin{proposition}\label{P-unn} For a dynamical system $(X,G)$ the following are equivalent: \begin{enumerate} \item $(X,G)$ is uniformly nonnull, \item the collection of $2$-dimensional operator subsystems of $C(X)$ which have arbitrarily large finite $1$-independence sets is dense in ${\mathcal S}_2 (X)$, \item for every $2$-dimensional operator subsystem $V\subseteq C(X)$ there is a $\lambda\geq 1$ such that $V$ has arbitrarily large finite $\lambda$-independence sets. 
\end{enumerate} \end{proposition} For $n\geq 2$ we say that $(X, G)$ is {\it uniformly nonnull of order $n$} if ${\rm IN}_n(X, G)=X^n$, or, equivalently, if every $n$-tuple of nonempty open subsets of $X$ has arbitrarily large finite independence sets. We say that $(X, G)$ is {\it uniformly nonnull of all orders} if it is uniformly nonnull of order $n$ for every $n\geq 2$. The analogue of Proposition~\ref{P-unn} for uniform nonnullness of all orders is also valid and will be subsumed as part of Proposition~\ref{P-equiv-indep} and Theorem~\ref{T-equiv-indep-prod} in connection with I-independence, to which uniform nonnullness of all orders is equivalent. \section{Tameness and independence}\label{S-tame} Let $(X, G)$ be a dynamical system. We say that $(X, G)$ is {\it tame} if no element $f\in C(X)$ has an infinite $\ell_1$-isomorphism set and {\it untame} otherwise. The concept of tameness was introduced by K\"{o}hler in \cite{K} under the term regularity. Here we are following the terminology of \cite{tame}. Actually in \cite{tame} the system $(X, G)$ is defined to be tame if its enveloping semigroup is separable and Fr\'{e}chet, which is equivalent to our geometric definition when $X$ is metrizable. \begin{definition}\label{D-IT-pair} We call a tuple ${\overrightarrow{x}} = (x_1 , \dots, x_k )\in X^k$ an {\it IT-tuple} (or an {\it IT-pair} in the case $k=2$) if for any product neighbourhood $U_1 \times\cdots\times U_k$ of ${\overrightarrow{x}}$ the tuple $(U_1, \dots, U_k)$ has an infinite independence set. We denote the set of ${\rm IT}$-tuples of length $k$ by ${\rm IT}_k (X,G)$. \end{definition} In contrast to the density conditions in the context of entropy and sequence entropy, we are interested here in the existence of infinite sets along which independence (or, in the definition of tameness, equivalence to the standard $\ell_1$ basis) occurs. 
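The three regimes just contrasted — positive-density independence sets (IE), arbitrarily large finite independence sets (IN), and infinite independence sets (IT) — are genuinely different requirements on a subset of the acting group. A toy computation (ours, not from the paper) makes the point for subsets of ${\mathbb Z}$: a set such as $\{2^j : j\ge 0\}$ is infinite and contains finite subsets of every size, yet has upper density $0$.

```python
# Illustration (ours, not from the paper) of the gap between the
# density requirement for IE-tuples and the weaker conditions for
# IN- and IT-tuples: J = {2^j : j >= 0} is infinite, so it contains
# finite subsets of every size, yet its density in [0, m] tends to 0.
J = [2 ** j for j in range(20)]

def density_up_to(m):
    return sum(1 for s in J if s <= m) / m

assert density_up_to(2 ** 19) < 0.001          # density tends to 0
assert len([s for s in J if s <= 2 ** 19]) == 20   # yet J keeps growing
```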
This places us in the realm of Rosenthal's $\ell_1$ theorem \cite{Ros} and the Ramsey methods involved in its proof (see \cite{Gow}). Indeed we begin by observing the following immediate consequence of \cite[Theorem 2.2]{Ros}. Note that though Theorem 2.2 of \cite{Ros} is stated for sequences of pairs of disjoint subsets, the proof there works for sequences of tuples of (not necessarily disjoint) subsets. \begin{lemma}\label{L-infinite indep} Let ${\overrightarrow{A}}=(A_1, \dots , A_k)$ be a tuple of closed subsets of $X$. Then the tuple ${\overrightarrow{A}}$ has an infinite independence set if and only if there is an infinite set $H\subseteq G$ such that for every infinite set $W\subseteq H$ there exists an $x\in X$ for which the sets $(x, A_j, W)^{\perp}:=\{s\in W: sx\in A_j\}$ for $j=1, \dots , k$ are all infinite. \end{lemma} \begin{lemma}\label{L-decomposition R} Let ${\overrightarrow{A}}= (A_1, \dots, A_k )$ be a $k$-tuple of closed subsets of $X$ with an infinite independence set. Suppose that $A_1 = A_{1, 1}\cup A_{1, 2}$ and that both $A_{1, 1}$ and $A_{1, 2}$ are closed. Then at least one of the tuples $(A_{1,1}, \dots, A_k)$ and $(A_{1,2}, \dots, A_k)$ has an infinite independence set. \end{lemma} \begin{proof} Let $H\subseteq G$ be as in Lemma~\ref{L-infinite indep} for ${\overrightarrow{A}}$. Suppose that neither of $(A_{1,1}, \dots, A_k)$ and $(A_{1, 2}, \dots, A_k)$ has an infinite independence set. Then by Lemma~\ref{L-infinite indep} we can find infinite subsets $W_1\supseteq W_2$ of $H$ such that for every $x\in X$ and $i=1,2$ at least one of the sets $(x, A_{1, i}, W_i)^{\perp}$ and $(x, A_j, W_i)^{\perp}$ for $2\le j\le k$ is finite. But there exists an $x\in X$ such that $(x, A_j, W_2)^{\perp}$ is infinite for all $1\le j\le k$. Thus $(x, A_{1, 1}, W_1)^{\perp}$ and $(x, A_{1, 2}, W_2)^{\perp}$ are finite. Since $(x, A_1, W_2)^{\perp}\subseteq (x, A_{1, 1}, W_2)^{\perp}\cup (x, A_{1, 2}, W_2)^{\perp}$, we obtain a contradiction.
Therefore at least one of the tuples $(A_{1,1}, \dots, A_k)$ and $(A_{1,2}, \dots, A_k)$ has an infinite independence set. \end{proof} \begin{proposition}\label{P-basic R} The following are true: \begin{enumerate} \item Let $(A_1, \dots , A_k )$ be a tuple of closed subsets of $X$ which has an infinite independence set. Then there exists an IT-tuple $(x_1,\dots , x_k)$ with $x_j\in A_j$ for all $1\le j\le k$. \item ${\rm IT}_2(X, G)\setminus \Delta_2(X)$ is nonempty if and only if $(X, G)$ is untame. \item ${\rm IT}_k(X,G)$ is a closed $G$-invariant subset of $X^k$. \item Let $\pi:(X, G)\rightarrow (Y,G)$ be a factor map. Then $(\pi\times\cdots\times \pi )({\rm IT}_k(X, G))={\rm IT}_k(Y, G)$. \item Suppose that $Z$ is a closed $G$-invariant subset of $X$. Then ${\rm IT}_k(Z, G)\subseteq {\rm IT}_k(X, G)$. \end{enumerate} \end{proposition} The proof of Proposition~\ref{P-basic R} is similar to that of Proposition~\ref{P-basic E}, with (1) following from Lemma~\ref{L-decomposition R} and (2) following from Proposition~\ref{P-f to pair R} below. We remark that (2) and (4) of Proposition~\ref{P-basic R} show that the largest tame factor of $(X, G)$ is obtained from the closed invariant equivalence relation on $X$ generated by the set of IT-pairs. \begin{corollary}\label{C-permanence R} For a fixed $G$, the class of tame $G$-systems is preserved under taking factors, subsystems, and products. \end{corollary} \begin{proposition}\label{P-f to pair R} Let $f\in C(X)$. Then $f$ has an infinite $\ell_1$-isomorphism set if and only if there is an ${\rm IT}$-pair $(x, y)$ with $f(x)\neq f(y)$. \end{proposition} \begin{proof} The ``if'' part follows as in the analogous situation for entropy by the well-known Rosenthal-Dor argument. For the ``only if'' part, by Proposition~\ref{P-basic R}(1) it suffices to show the existence of a pair of disjoint closed subsets $A$ and $B$ of $X$ which have an infinite independence set and satisfy $f(A)\cap f(B)=\emptyset$.
Let $H=\{s_j:j\in {\mathbb N}\}\subseteq G^{\rm op}$ be an $\ell_1$-isomorphism set for $f$. Then the sequence $\{s_j f \}_{j\in {\mathbb N}}$ has no weakly convergent subsequence. Using Lebesgue's theorem one sees that $\{s_j f \}_{j\in {\mathbb N}}$ has no pointwise convergent subsequence. In Gowers' proof of Rosenthal's $\ell_1$ theorem \cite[pages 1079--1080]{Gow} it is shown that there exist disjoint closed subsets $Z_1, Z_2\subseteq {\mathbb C}$ and a subsequence $\{s_{n_k} f\}_{k\in {\mathbb N}}$ such that the sequence of pairs $\{((s_{n_k} f)^{-1}(Z_1), (s_{n_k} f)^{-1}(Z_2))\}_{k\in {\mathbb N}}$ is independent. Therefore $\{s_{n_k} : k\in {\mathbb N}\}$ is an infinite independence set for the pair $(f^{-1}(Z_1) , f^{-1}(Z_2))$, yielding the proposition. \end{proof} In parallel with the cases of entropy and nullness, we say that $(X, G)$ is {\it uniformly untame} if ${\rm IT}_2(X, G)=X\times X$, or, equivalently, if any pair of nonempty open subsets of $X$ has an infinite independence set. We say that $(X, G)$ is {\it completely untame} if the maximal tame factor of $(X, G)$ is trivial. Complete untameness is strictly weaker than uniform untameness, as illustrated by Blanchard's example mentioned in the paragraph following Proposition~\ref{P-SE vs IN}. We demonstrate in the following example that tame systems need not be null by constructing a ${\rm WAP}$ (weakly almost periodic) nonnull subshift. The proof of \cite[Corollary 5.7]{DEBS} shows that if $(X, G)$ is ${\rm HNS}$ (hereditarily nonsensitive \cite[Definition 9.1]{GM}) then it is tame. Since ${\rm WAP}$ systems are ${\rm HNS}$ \cite[Section 9]{GM}, our example is tame. \begin{example}\label{E-tame nonnull} Let $0<m_1<m_2<\cdots$ be a sequence in ${\mathbb N}$ with $m_j-m_i>m_i-m_k$ for all $j>i>k$ (for example $m_j=2^j$). Let $\{k_n\}_{n\in {\mathbb N}}$ be an unbounded sequence in ${\mathbb N}$ and let $S_j=\sum^j_{i=1}k_i$ for $j\in{\mathbb N}$ be the partial sums of $\{k_n\}_{n\in {\mathbb N}}$.
Denote by $A_j$ the set of all elements in $\{0, 1\}^{{\mathbb Z}}$ whose support is contained in $\{m_k:S_{j-1}<k\le S_j\}$. One checks easily that the union $X$ of the orbits of $\bigcup_j A_j$ under the shift $T$ is closed. Thus $(X, {\mathbb Z})$ is a subshift. Denote by $Z$ the set of elements in $X$ supported exactly at one point. It is easy to see that ${\rm IN}_2(X, {\mathbb Z})\setminus \Delta_2(X)=(Z\times \{0\})\cup (\{0\}\times Z)$. Since the set of elements in $C(X)$ whose ${\mathbb Z}$-orbit is precompact in the weak topology of $C(X)$ is a closed ${\mathbb Z}$-invariant algebra of $C(X)$, to see that $(X, {\mathbb Z})$ is ${\rm WAP}$ it suffices to check that the orbit of $f$ is precompact in the weak topology, where $f\in C(X)$ is defined by $f(x)=x(0)$ for $x\in X$. Indeed, the union of the zero function and the orbit of $f$ is evidently compact in ${\mathbb C}^X$ and thus is compact in the weak topology by a result of Grothendieck \cite{Gr} \cite[Theorem 1.43.1]{ETJ}. \end{example} In Section~\ref{S-Toeplitz} we will see that there exist minimal tame nonnull ${\mathbb Z}$-systems. Turning finally to tensor products, in analogy with entropy and sequence entropy one can show that the system $(X,G)$ is untame if and only if there is a 2-dimensional operator subsystem $V\subseteq C(X)$ with an infinite independence set for the induced $C^*$-dynamical system. With ${\mathcal S}_2 (X)$ defined as in the paragraph preceding Proposition~\ref{P-upe}, one has the following characterizations of uniform untameness (see the proof of Proposition~\ref{P-equiv-indep}).
\begin{proposition}\label{P-unr} For a dynamical system $(X,G)$ the following are equivalent: \begin{enumerate} \item $(X,G)$ is uniformly untame, \item the collection of $2$-dimensional operator subsystems of $C(X)$ which have an infinite $1$-independence set is dense in ${\mathcal S}_2 (X)$, \item the collection of $2$-dimensional operator subsystems of $C(X)$ which have an infinite $\lambda$-independence set for some $\lambda\geq 1$ is dense in ${\mathcal S}_2 (X)$. \end{enumerate} \end{proposition} For $n\geq 2$ we say that $(X,G)$ is {\it uniformly untame of order $n$} if ${\rm IT}_n(X, G)=X^n$, or, equivalently, if every $n$-tuple of nonempty open subsets of $X$ has an infinite independence set. If $(X,G)$ is uniformly untame of order $n$ for each $n\geq 2$ then we say that it is {\it uniformly untame of all orders}. For the analogue of Proposition~\ref{P-unr} for uniform untameness of all orders see Proposition~\ref{P-equiv-indep} and Theorem~\ref{T-equiv-indep-prod}. \section{Tame extensions of minimal systems}\label{S-minimal} An extension $\pi:X\rightarrow Y$ of dynamical systems with acting group $G$ is said to be {\it tame} if ${\rm IT}_{\pi}\setminus \Delta_2(X)=\emptyset$, where $R_{\pi}:=\{(x_1, x_2)\in X\times X:\pi(x_1)=\pi(x_2)\}$ and ${\rm IT}_{\pi}:={\rm IT}_2(X, G)\cap R_{\pi}$. In this section we will analyze the structure of tame extensions of minimal systems. Throughout $G$ will be a group, and we will frequently specialize to the Abelian case, to which the main results (Theorems~\ref{T-metric tame minimal to highly proximal}, \ref{T-decomposition}, and \ref{T-uniquely ergodic}) apply. Theorems~\ref{T-metric tame minimal to highly proximal} and \ref{T-decomposition} address the relation between proximality and equicontinuity within the frame of tame extensions, while Theorem~\ref{T-uniquely ergodic} asserts that tame minimal systems are uniquely ergodic. 
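As a quick sanity check (a direct unwinding of the definitions, not needed in the sequel), note that tameness of extensions subsumes tameness of systems: if $Y$ is the trivial one-point system and $\pi:X\rightarrow Y$ is the unique map, then
\begin{gather*}
R_{\pi}=X\times X \hspace*{3mm}\mbox{and}\hspace*{3mm} {\rm IT}_{\pi}={\rm IT}_2(X, G),
\end{gather*}
so that $\pi$ is tame if and only if ${\rm IT}_2(X, G)\setminus \Delta_2(X)=\emptyset$, i.e., if and only if $(X, G)$ is tame by Proposition~\ref{P-basic R}(2).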
We refer the reader to \cite{Aus,Vri} for general information on extensions of dynamical systems. As a direct consequence of Proposition~\ref{P-basic R}(4) we have: \begin{proposition}\label{P-basic R extension} The following are true: \begin{enumerate} \item Let $\psi:X\rightarrow Y$ and $\varphi:Y\rightarrow Z$ be extensions. Then $\varphi\circ \psi$ is tame if and only if both $\varphi$ and $\psi$ are tame. \item Let $\pi_j:X_j\rightarrow Y_j$ for $j\in J$ be extensions. If every $\pi_j$ is tame, then $\prod_{j\in J} \pi_j:\prod_{j\in J}X_j\rightarrow \prod_{j\in J}Y_j$ is tame. \item Let $\{\pi_{\alpha}:X_{\alpha}\rightarrow Y \}_{\alpha<\nu}$ be an inverse system of extensions, where $\nu$ is an ordinal. Then $\pi:=\varprojlim \pi_{\alpha}$ is tame if and only if every $\pi_{\alpha}$ is tame. \item For every commutative diagram \begin{gather*} \xymatrix{ {X'} \ar[d]_{\pi'} \ar[r]^{\sigma} & {X}\ar[d]^{\pi} \\ Y' \ar [r]^{\tau} & Y } \end{gather*} of extensions, if $X'$ is a subsystem of $Y'\times X$, $\pi'$ and $\sigma$ are the restrictions to $X'$ of the coordinate projections of $Y'\times X$ onto $Y'$ and $X$, respectively, and $\pi$ is tame, then $\pi'$ is tame. \end{enumerate} \end{proposition} A continuous map $f:A\rightarrow B$ between topological spaces is said to be {\it semi-open} if the image of every nonempty open subset of $A$ under $f$ has nonempty interior. For a dynamical system $(X, G)$, denote by ${\rm RP}(X, G)$ the {\it regionally proximal relation} of $X$, that is, ${\rm RP}(X, G)=\bigcap_{U\in \mathcal{U}_{X}}\overline{GU}$, where $\mathcal{U}_{X}$ is the collection of open neighbourhoods of the diagonal $\Delta_2(X)$ in $X^2$. Following \cite{NSSEP} we call a pair $(x_1,x_2)\in X^2\setminus\Delta_2(X)$ a {\it weakly mixing pair} if for any neighbourhoods $U_1$ and $U_2$ of $x_1$ and $x_2$, respectively, there exists an $s\in G$ such that $sU_1\cap U_1\neq \emptyset$ and $sU_1\cap U_2\neq \emptyset$. 
We denote the set of weakly mixing pairs by ${\rm WM}(X, G)$. When $(X, G)$ is minimal, ${\rm WM}(X, G)={\rm RP}(X, G)\setminus \Delta_2(X)$ \cite[Theorem 2.4(1)]{NSSEP}. The proofs of Lemma~3.1 and Corollary~4.1 in \cite{NSSEP} yield the following two results. \begin{lemma}\label{L-semiopen to IT} Suppose that $G$ is Abelian. Let $(x_1, x_2)\in X^2\setminus \Delta_2(X)$ and $A=\overline{G(x_1, x_2)}$. If $\pi_1:A\rightarrow X$ is semi-open, where $\pi_1$ is the projection to the first coordinate, then the following are equivalent: \begin{enumerate} \item $(x_1, x_2)\in {\rm IT}_2(X, G)$, \item $(x_1, x_2)\in {\rm IN}_2(X, G)$, \item $(x_1, x_2)\in {\rm WM}(X, G)$. \end{enumerate} \end{lemma} \begin{lemma}\label{L-distal ext} Suppose that $G$ is Abelian. Let $\pi:(X, G)\rightarrow (Y, G)$ be a distal extension of minimal systems. Let $(x_1, x_2)\in R_{\pi} \setminus \Delta_2(X)$. Then the following are equivalent: \begin{enumerate} \item $(x_1, x_2)\in {\rm IT}_2(X, G)$, \item $(x_1, x_2)\in {\rm IN}_2(X, G)$, \item $(x_1, x_2)\in {\rm WM}(X, G)$, \item $(x_1, x_2)\in {\rm RP}(X, G)$. \end{enumerate} \end{lemma} We call an extension $\pi:X\rightarrow Y$ a {\it Bron{\v s}tein extension} if $R_{\pi}$ has a dense subset of almost periodic points. Recall that $\pi$ is said to be {\it highly proximal} if for every nonempty open subset $U$ of $X$ and every point $z\in Y$ there exists an $s\in G$ such that $\pi^{-1}(z)\subseteq sU$. In this case $(X, G)$ and $(Y, G)$ are necessarily minimal and $\pi$ is proximal. We say that $\pi$ is {\it strictly PI} if it can be obtained by a transfinite succession of proximal and equicontinuous extensions, and {\it PI} if there exists a proximal extension $\psi:X'\rightarrow X$ such that $\pi\circ \psi$ is strictly PI. If in these definitions proximality is replaced by high proximality, then we obtain the notions of {\it strictly HPI} extensions and {\it HPI} extensions. \begin{lemma}\label{L-B+tame} Suppose that $G$ is Abelian. 
Let $\pi:X\rightarrow Y$ be a tame Bron{\v s}tein extension of minimal systems. Suppose that $\pi$ has no nontrivial equicontinuous factors. Then $\pi$ is an isomorphism. \end{lemma} \begin{proof} For an extension $\psi:X'\rightarrow Y'$ denote by ${\rm RP}_{\psi}$ the {\it relative regionally proximal relation} of $\psi$, that is, ${\rm RP}_{\psi}=\bigcap_{U\in \mathcal{U}_{X'}}\overline{GU\cap R_{\psi}}$, where $\mathcal{U}_{X'}$ is the collection of open neighbourhoods of the diagonal $\Delta_2(X')$ in $X'\times X'$. As $X$ is minimal, if we denote by $S_{\pi}$ the smallest closed $G$-invariant equivalence relation on $X$ containing ${\rm RP}_{\pi}$, then the induced extension $X/S_{\pi}\rightarrow Y$ is an equicontinuous factor of $\pi$ \cite{Ell} \cite[Theorem V.2.21]{Vri}. Since $\pi$ has no nontrivial equicontinuous factors, $S_{\pi}=R_{\pi}$. It is a theorem of Ellis that for any Bron{\v s}tein extension $\psi$ of minimal systems, ${\rm RP}_{\psi}$ is an equivalence relation \cite[Theorem 2.6.2]{Veech} \cite[Theorem VI.3.20]{Vri}. Therefore $R_{\pi}=S_{\pi}={\rm RP}_{\pi}\subseteq {\rm RP}(X, G)$. Since $X$ is minimal, ${\rm WM}(X, G)={\rm RP}(X, G)\setminus \Delta_2(X)$ \cite[Theorem 2.4(1)]{NSSEP}. Thus $R_{\pi}\setminus \Delta_2(X) \subseteq {\rm WM}(X, G)$. Suppose that $\pi$ is not an isomorphism. Since $\pi$ is a Bron{\v s}tein extension, we can find an almost periodic point $(x_1, x_2)$ in the nonempty open subset $R_{\pi}\setminus \Delta_2(X)$ of $R_{\pi}$. As extensions of minimal systems are semi-open, we conclude that $(x_1, x_2)\in {\rm IT}_2(X, G)$ by Lemma~\ref{L-semiopen to IT}. This is in contradiction to the tameness of $\pi$. Thus $\pi$ is an isomorphism. \end{proof} \begin{lemma}\label{L-tame to PI} Suppose that $G$ is Abelian. Then any tame extension $\pi:X\rightarrow Y$ of minimal systems is PI. \end{lemma} \begin{proof} Consider the canonical PI tower of $\pi$ \cite{EGS} \cite[Theorem VI.4.20]{Vri}. 
This is a commuting diagram of extensions of minimal systems of the form displayed in Proposition~\ref{P-basic R extension}(4), where $\sigma$ is proximal, $\tau$ is strictly PI, $\pi'$ is RIC (relatively incontractible), and $\pi'$ has no nontrivial equicontinuous factors. Furthermore, $X'$ is a subsystem of $Y'\times X$, and $\pi'$ and $\sigma$ are the restrictions to $X'$ of the coordinate projections of $Y'\times X$ onto $Y'$ and $X$, respectively (see for example \cite[VI.4.22]{Vri}). Since $\pi$ is tame, by Proposition~\ref{P-basic R extension}(4) so is $\pi'$. As every RIC-extension is a Bron{\v s}tein extension \cite[Corollary 5.12]{EGS} \cite[Corollary VI.2.8]{Vri}, $\pi'$ is a Bron{\v s}tein extension. Then $\pi'$ is an isomorphism by Lemma~\ref{L-B+tame}. Hence $\pi$ is PI. \end{proof} Lemma~\ref{L-tame to PI} generalizes a result of Glasner \cite[Theorem 2.3]{tame}, who proved it for tame metrizable $(X, G)$. A theorem of van der Woude asserts that an extension $\pi:X\rightarrow Y$ of minimal systems is HPI if and only if every topologically transitive subsystem $W$ of $R_{\pi}$ for which both of the coordinate projections $\pi_i:W\rightarrow X$ are semi-open is minimal \cite[Theorem 4.8]{HPI}. \begin{lemma}\label{L-tame metrizable to HPI} Suppose that $G$ is Abelian. Then any tame extension $\pi:X\rightarrow Y$ of minimal metrizable systems is HPI. \end{lemma} \begin{proof} Let $S_{\pi}$ be as in the proof of Lemma~\ref{L-B+tame}. Then we have the natural extensions $\psi:X\rightarrow X/S_{\pi}$ and $\varphi: X/S_{\pi}\rightarrow Y$ with $\pi=\varphi\circ \psi$, and $\varphi$ is equicontinuous. To show that $\pi$ is HPI, it suffices to show that $\psi$ is HPI. Let $W$ be a topologically transitive subsystem of $R_{\psi}=S_{\pi}$ for which both of the coordinate projections $\pi_i:W\rightarrow X$ are semi-open. By the theorem of van der Woude, the proof will be complete once we show that $W$ is minimal. 
Since $G$ is Abelian, $X$ has a $G$-invariant Borel probability measure by a result of Markov and Kakutani \cite[Theorem VII.2.1]{Dav}. Then ${\rm RP}(X, G)$ is an equivalence relation by \cite[Theorem 9.8]{Aus} \cite[Theorem V.1.17]{Vri}. Thus $S_{\pi}\subseteq {\rm RP}(X, G)$. Since $X$ is minimal, ${\rm WM}(X, G)={\rm RP}(X, G)\setminus \Delta_2(X)$ \cite[Theorem 2.4(1)]{NSSEP}. Then $R_{\psi}\setminus \Delta_2(X)=S_{\pi}\setminus \Delta_2(X) \subseteq {\rm WM}(X, G)$. As $X$ is metrizable and $W$ is topologically transitive, we can find a transitive point $(x_1, x_2)$ of $W$. Since $\pi$ is tame, by Lemma~\ref{L-semiopen to IT} we must have $x_1=x_2$. Therefore $W=\Delta_2(X)$ is minimal. \end{proof} \begin{lemma}\label{L-open tame to HPI} Suppose that $G$ is Abelian. Let $\pi:X\rightarrow Y$ be an open tame extension of minimal systems. If $Y$ is metrizable, then $\pi$ is HPI. \end{lemma} To prove Lemma~\ref{L-open tame to HPI} we need a variation of the ``reduction-to-the-metric-case construction'' in \cite{MW}, which is in turn a relativization of a method of Ellis \cite{Ellis}. Let $\pi:(X, G)\rightarrow (Y, G)$ be an open extension of $G$-systems. Denote by ${\rm CP}(X)$ the set of all continuous pseudometrics on $X$. For a $\rho\in {\rm CP}(X)$ and a countable subgroup $H$ of $G$, define $C_{\rho, H}:=\{(x_1, x_2)\in X\times X : \rho(sx_1, sx_2)=0 \mbox{ for all } s\in H\}$. Then $C_{\rho, H}$ is a closed $H$-invariant equivalence relation on $X$. Denote $X/C_{\rho, H}$ by $X_{\rho, H}$, and denote the quotient map $X\rightarrow X_{\rho, H}$ by $\psi_{\rho, H}$. Say $H=\{s_1, s_2, \dots\}$ with $s_1 = e$. Define a pseudometric $d_{\rho, H}$ on $X$ by \begin{gather}\label{E-d_H} d_{\rho, H}(x_1, x_2):=\sum^{\infty}_{i=1}2^{-i}\rho(s_ix_1, s_ix_2) \end{gather} for all $x_1, x_2\in X$. 
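The series in (\ref{E-d_H}) converges uniformly since $\rho$, being continuous on the compact space $X\times X$, is bounded, and so $d_{\rho, H}$ is again a continuous pseudometric. Moreover (a routine check which we record for the reader's convenience),
\begin{gather*}
d_{\rho, H}(x_1, x_2)=0 \hspace*{3mm}\Longleftrightarrow\hspace*{3mm} \rho(s_i x_1, s_i x_2)=0 \mbox{ for all } i\in {\mathbb N} \hspace*{3mm}\Longleftrightarrow\hspace*{3mm} (x_1, x_2)\in C_{\rho, H},
\end{gather*}
so that, by the triangle inequality, $d_{\rho, H}$ is constant on the equivalence classes of $C_{\rho, H}$ and descends to the quotient $X_{\rho, H}$.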
Then $X_{\rho, H}$ is a metrizable space with a metric $d'_{\rho, H}$ defined by \begin{gather}\label{E-d'_H} d'_{\rho, H}(\psi_{\rho, H}(x_1), \psi_{\rho, H}(x_2))=d_{\rho, H}(x_1, x_2) \end{gather} for all $x_1, x_2\in X$. Let $C'_{\rho, H}=\{(y_1, y_2)\in Y\times Y : \psi_{\rho, H}(\pi^{-1}(y_1)) =\psi_{\rho, H}(\pi^{-1}(y_2))\}$. Then $C'_{\rho, H}$ is a closed $H$-invariant equivalence relation on $Y$. Denote $Y/C'_{\rho, H}$ by $Y_{\rho, H}$, and denote the quotient map $Y\rightarrow Y_{\rho, H}$ by $\tau_{\rho, H}$. Define a map $\sigma_{\rho, H}:X\rightarrow X_{\rho, H}\times Y_{\rho, H}$ by $\sigma_{\rho, H}(x)=(\psi_{\rho, H}(x), \tau_{\rho, H}\circ \pi(x))$, and write $\sigma_{\rho, H}(X)$ as $X^*_{\rho, H}$. Denote by $\pi_{\rho, H}$ the restriction to $X^*_{\rho, H}$ of the projection of $X_{\rho, H}\times Y_{\rho, H}$ onto $Y_{\rho, H}$. \begin{lemma}\cite[Lemma V.3.3]{Vri}\label{L-open to quotient} For every $\rho\in {\rm CP}(X)$ and countable subgroup $H$ of $G$ the diagram \begin{gather}\label{D-open to quotient} \xymatrix{ {(X, H)} \ar[d]_{\sigma_{\rho, H}} \ar[r]^{\pi} & {(Y, H)}\ar[d]^{\tau_{\rho, H}} \\ (X^*_{\rho, H}, H) \ar [r]^{\pi_{\rho, H}} & (Y_{\rho, H}, H) } \end{gather} of extensions of $H$-systems commutes and $X^*_{\rho, H}$ is metrizable. \end{lemma} For $\rho_1, \rho_2\in {\rm CP}(X)$ we write $\rho_1\preceq \rho_2$ if $\rho_1(x_1, x_2)=0$ whenever $\rho_2(x_1, x_2)=0$. \begin{lemma}\label{L-non-minimal} Let $W$ be a nonminimal $G$-subsystem of $R_{\pi}$. Then there exists a $\rho_0\in {\rm CP}(X)$ such that $((\sigma_{\rho, H}\times \sigma_{\rho, H})(W), H)$ is nonminimal for every $\rho\in {\rm CP}(X)$ with $\rho_0\preceq \rho$ and every countable subgroup $H$ of $G$. \end{lemma} \begin{proof} Let $W'$ be a minimal $G$-subsystem of $W$, and let $(x_1, x_2)\in W\setminus W'$. 
As in the proof of \cite[Lemma VI.4.41]{Vri}, it can be shown that there exists a $\rho_0\in {\rm CP}(X)$ such that $\max(\rho_0(x'_1, x_1), \rho_0(x'_2, x_2))>0$ for all $(x'_1, x'_2)\in W'$. Then for every $\rho\in {\rm CP}(X)$ with $\rho_0\preceq \rho$ and every countable subgroup $H$ of $G$ we have $(\sigma_{\rho, H}\times \sigma_{\rho, H})(x_1 ,x_2 )\notin (\sigma_{\rho, H}\times \sigma_{\rho, H})(W')$ so that $((\sigma_{\rho, H}\times \sigma_{\rho, H})(W), H)$ is nonminimal. \end{proof} For all $\rho_1\preceq \rho_2$ in ${\rm CP}(X)$ and all countable subgroups $H_1\subseteq H_2$ of $G$ it is clear that there exist unique maps $\sigma_{21}:X^*_{\rho_2, H_2}\rightarrow X^*_{\rho_1, H_1}$ and $\tau_{21}:Y_{\rho_2, H_2}\rightarrow Y_{\rho_1, H_1}$ such that $\sigma_{21}\circ \sigma_{\rho_2, H_2}=\sigma_{\rho_1, H_1}$ and $\tau_{21}\circ \tau_{\rho_2, H_2}=\tau_{\rho_1, H_1}$. For $\rho_1\preceq \rho_2\preceq \cdots$ in ${\rm CP}(X)$ and countable subgroups $H_1\subseteq H_2\subseteq \cdots $ of $G$, we write $X^*_{\infty}$ and $Y_{\infty}$ for $\varprojlim X^*_{\rho_n, H_n}$ and $\varprojlim Y_{\rho_n, H_n}$, respectively. Then we have induced maps $\sigma_{\infty}:X\rightarrow X^*_{\infty}$, $\pi_{\infty}:X^*_{\infty}\rightarrow Y_{\infty}$, and $\tau_{\infty}:Y\rightarrow Y_{\infty}$. The diagram \begin{gather}\label{D-limit} \xymatrix{ X \ar[d]_{\sigma_{\infty}} \ar[r]^{\pi} & Y \ar[d]^{\tau_{\infty}} \\ X^*_{\infty} \ar [r]^{\pi_{\infty}} & Y_{\infty} } \end{gather} is easily seen to commute. Moreover, (\ref{D-limit}) can be identified in a natural way with (\ref{D-open to quotient}) taking \begin{gather}\label{E-rho H} \rho:=\sum^{\infty}_{n=1}2^{-n}\rho_n/({\rm diam}(\rho_n)+1) \hspace*{3mm}\mbox{and}\hspace*{3mm} H=\bigcup^{\infty}_{n=1}H_n . \end{gather} \begin{lemma}\label{L-semi-open} Suppose that $(X, G)$ is minimal. 
Let $W$ be a topologically transitive $G$-subsystem of $R_{\pi}$ such that the coordinate projections $\pi_i:W\rightarrow X$ are both semi-open. Then for every $\rho_0\in {\rm CP}(X)$ and countable subgroup $H_0$ of $G$ there exist a $\rho\in {\rm CP}(X)$ with $\rho_0\preceq \rho$ and a countable subgroup $H$ of $G$ with $H_0\subseteq H$ such that \begin{enumerate} \item[(a)] $(X^*_{\rho, H}, H)$ is a minimal $H$-system, \item[(b)] $((\sigma_{\rho, H}\times \sigma_{\rho, H})(W), H)$ is a topologically transitive $H$-system, \item[(c)] the coordinate projections $\pi^*_i:(\sigma_{\rho, H}\times \sigma_{\rho, H})(W)\rightarrow X^*_{\rho, H}$ for $i=1,2$ are both semi-open. \end{enumerate} \end{lemma} \begin{proof} We shall show by induction that there exist $\rho_0\preceq \rho_1 \preceq \cdots$ in ${\rm CP}(X)$, countable subgroups $H_0\subseteq H_1\subseteq \cdots $ of $G$ and a sequence $\{(x_n, x'_n)\}_{n\in {\mathbb Z}_{\ge 0}}$ of elements in $W$ such that for every $n\in {\mathbb Z}_{\ge 0}$ the following conditions are satisfied: \begin{enumerate} \item[(a')] $H_{n+1}\sigma^{-1}_{\rho_n, H_n}(U)=X$ for any nonempty open subset $U$ of $X^*_{\rho_n, H_n}$, \item[(b')] $(\sigma_{\rho_n, H_n}\times\sigma_{\rho_n, H_n})(H_{n+1}(x_n, x'_n))$ is dense in $(\sigma_{\rho_n, H_n}\times\sigma_{\rho_n, H_n})(W)$, \item[(c')] for any nonempty open subset $V$ of $(\sigma_{\rho_n, H_n}\times\sigma_{\rho_n, H_n})(W)$ there exist $t_1, t_2\in X$ and $\delta>0$ such that $B_{\rho_{n+1}}(t_i, \delta)\subseteq \pi_i((\sigma_{\rho_n, H_n}\times\sigma_{\rho_n, H_n})^{-1}(V)\cap W)$ for $i=1,2$, where $B_{\rho_{n+1}}(t_i, \delta):=\{x\in X : \rho_{n+1}(x, t_i)<\delta\}$. \end{enumerate} We first indicate how this can be used to prove the lemma. Define $\rho$ and $H$ via (\ref{E-rho H}). Then conditions (a) and (b) follow from (a') and (b'), respectively, as in the proof of \cite[Lemma VI.4.43]{Vri}.
We may identify $(\sigma_{\rho, H}\times\sigma_{\rho, H})(W)$ with $(\sigma_{\infty}\times \sigma_{\infty})(W)=\varprojlim (\sigma_{\rho_n, H_n}\times\sigma_{\rho_n, H_n})(W)$. Thus for any nonempty open subset $V'\subseteq (\sigma_{\rho, H}\times\sigma_{\rho, H})(W)$ we can find an $n\in {\mathbb N}$ and a nonempty open subset $V$ of $(\sigma_{\rho_n, H_n}\times\sigma_{\rho_n, H_n})(W)$ such that $V'\supseteq (\sigma_{\infty, n}\times \sigma_{\infty, n})^{-1}(V) \cap (\sigma_{\infty}\times \sigma_{\infty})(W) =(\sigma_{\infty}\times \sigma_{\infty})((\sigma_{\rho_n, H_n}\times \sigma_{\rho_n, H_n})^{-1}(V)\cap W)$, where $\sigma_{\infty,n}:X^*_{\infty}\rightarrow X^*_{\rho_n, H_n}$ is the natural map. Let $t_i$ and $\delta$ be as in (c'). Then \begin{align*} \pi^*_i(V')&\supseteq \pi^*_i((\sigma_{\infty}\times \sigma_{\infty}) ((\sigma_{\rho_n, H_n}\times \sigma_{\rho_n, H_n})^{-1}(V)\cap W)) \\ &= \sigma_{\rho, H}(\pi_i((\sigma_{\rho_n, H_n}\times \sigma_{\rho_n, H_n})^{-1}(V)\cap W))\\ &\supseteq \sigma_{\rho, H}(B_{\rho_{n+1}}(t_i, \delta))\supseteq \sigma_{\rho, H}(B_{\rho}(t_i, \delta'))\\ &\supseteq \tilde{\pi}^{-1}_{\rho, H}(B_{d'_{\rho, H}}(\psi_{\rho, H}(t_i), \delta')), \end{align*} where $\delta':=2^{-n-1}\delta /({\rm diam}(\rho_{n+1})+1)$, $d'_{\rho, H}$ is the metric on $X_{\rho, H}$ defined in (\ref{E-d'_H}), and $\tilde{\pi}_{\rho, H}: X^*_{\rho, H}\rightarrow X_{\rho, H}$ is the coordinate projection. Therefore $\pi^*_i(V')$ has nonempty interior and hence condition (c) follows from (c'). It remains to construct $\{\rho_n\}_{n\ge 0}$, $\{H_n\}_{n \ge 0}$, and $\{(x_n, x'_n)\}_{n\ge 0}$ satisfying (a'), (b') and (c'). Note that (a') and (b') do not depend on the choice of $\rho_{n+1}$ while (c') does not depend on the choice of $H_{n+1}$ and $(x_n, x'_n)$. Thus we can choose $H_{n+1}$ and $(x_n, x'_n)$ to satisfy (a') and (b') exactly as in the proof of \cite[Lemma VI.4.43]{Vri}. Now we explain how to choose $\rho_1$ in order to satisfy (c').
Repeating the procedure, we will obtain the desired $\rho_n$. Let $\{V_1, V_2, \dots\}$ be a countable base of nonempty open sets for the topology on the metrizable space $(\sigma_{\rho_0, H_0}\times\sigma_{\rho_0, H_0})(W)$. Then $Z_{m, i}:=\pi_i((\sigma_{\rho_0, H_0}\times\sigma_{\rho_0, H_0})^{-1}(V_m)\cap W)$ has nonempty interior for each $m\in {\mathbb N}$ and $i=1,2$. Pick a $t_{m,i}$ in the interior of $Z_{m, i}$, and take a continuous function $f_{m, i}$ on $X$ such that $0\le f_{m, i}\le 1$, $f_{m, i}(t_{m, i})=0$ and $f_{m, i}|_{Z^{{\rm c}}_{m, i}}=1$. Define $\rho_{m, i}\in {\rm CP}(X)$ by $\rho_{m,i}(z_1, z_2)=|f_{m, i}(z_1)-f_{m, i}(z_2)|$ for $z_1, z_2\in X$, and define $\rho_1\in {\rm CP}(X)$ by $\rho_1=\rho_0+\sum^{\infty}_{m=1}2^{-m}(\rho_{m, 1}+\rho_{m, 2})$. Then $B_{\rho_1}(t_{m, i}, 2^{-m})\subseteq Z_{m, i}$. One sees immediately that (c') holds for $n=0$. This completes the proof of Lemma~\ref{L-semi-open}. \end{proof} \begin{lemma}\label{L-injective} Suppose that $Y$ is metrizable. Then there exists a $\rho_1\in {\rm CP}(X)$ such that $\tau_{\rho, H}$ is an isomorphism for every $\rho\in {\rm CP}(X)$ with $\rho_1\preceq \rho$ and every countable subgroup $H$ of $G$. \end{lemma} \begin{proof} Let $d$ be a metric on $Y$ inducing its topology. Simply define $\rho_1\in {\rm CP}(X)$ by $\rho_1(x_1, x_2)=d(\pi(x_1), \pi(x_2))$ for all $x_1, x_2\in X$. \end{proof} \begin{proof}[Proof of Lemma~\ref{L-open tame to HPI}] Suppose that $\pi$ is not HPI. By the theorem of van der Woude there exists a topologically transitive nonminimal $G$-subsystem $W$ of $R_{\pi}$ such that the coordinate projections $W\rightarrow X$ are both semi-open. Let $\rho_0$ and $\rho_1$ be as in Lemmas~\ref{L-non-minimal} and \ref{L-injective}, respectively. Take $\rho$ and $H$ as in Lemma~\ref{L-semi-open} such that $\rho\succeq \rho_0+\rho_1$. Then $\tau_{\rho, H}$ is an isomorphism and $((\sigma_{\rho, H}\times\sigma_{\rho, H})(W), H)$ is nonminimal.
Since $\pi$ is tame as an extension of $G$-systems, it is also tame as an extension of $H$-systems. By Proposition~\ref{P-basic R extension}(1), $\pi_{\rho, H}$ is a tame extension of $H$-systems. By Lemma~\ref{L-tame metrizable to HPI} $\pi_{\rho, H}$ is HPI. Thus any topologically transitive $H$-subsystem of $R_{\pi_{\rho, H}}$ for which both of the coordinate projections to $X^*_{\rho, H}$ are semi-open must be minimal by the theorem of van der Woude. This is in contradiction to Lemma~\ref{L-semi-open}. Therefore $\pi$ is HPI. \end{proof} The proof of the next lemma follows essentially that of \cite[Theorem 4.3]{NSSEP}. \begin{lemma}\label{L-PI to proximal} Suppose that $G$ is Abelian. Let $\pi:X\rightarrow Y$ be a PI (resp.\ HPI) tame extension of minimal systems such that every equicontinuous factor of $(X, G)$ factors through $\pi$. Then $\pi$ is proximal (resp.\ highly proximal). \end{lemma} \begin{proof} Let $\varphi:X'\rightarrow X$ be a proximal (resp.\ highly proximal) extension such that $\pi\circ \varphi$ is strictly PI (resp.\ strictly HPI). Replacing $X'$ by a minimal $G$-subsystem we may assume that $X'$ is minimal. Denote by $\psi$ the canonical extension of $(X, G)$ over its maximal equicontinuous factor $(X_{\rm eq} , G)$. Then $\psi$ factors through $\pi$ by our assumption. Since $\varphi$ is proximal, $(X_{\rm eq} , G)$ is also the maximal equicontinuous factor of $(X', G)$. Now we need: \begin{lemma}\label{L-no equi} Let $(Y'_1, G)$ and $(Y'_2, G)$ be systems with extensions $X'\rightarrow Y'_1$, $\theta : Y'_1\rightarrow Y'_2$, and $Y'_2\rightarrow Y$ such that the composition $X'\rightarrow Y'_1\rightarrow Y'_2\rightarrow Y$ is $\pi\circ \varphi$. Suppose that $\theta$ is distal. Then $\theta$ is an isomorphism. \end{lemma} \begin{proof} Clearly $(X_{\rm eq} , G)$ is also the maximal equicontinuous factor of $(Y'_1, G)$. Suppose that $y'_1$ and $y'_2$ are distinct points in $Y'_1$ with $\theta(y'_1)=\theta(y'_2)$.
Since $G$ is Abelian, $Y'_1$ has a $G$-invariant Borel probability measure by a result of Markov and Kakutani \cite[Theorem VII.2.1]{Dav}. Then $(Y'_1/{\rm RP}(Y'_1, G), G)$ is the maximal equicontinuous factor of $(Y'_1, G)$ by \cite[Theorem 9.8]{Aus} \cite[Theorem V.1.17]{Vri}. Thus $(y'_1, y'_2)\in {\rm RP}(Y'_1, G)$. So $(y'_1, y'_2)\in {\rm IT}_2(Y'_1, G)$ by Lemma~\ref{L-distal ext}. By Proposition~\ref{P-basic R} we can find in $X'$ preimages $x'_1$ and $x'_2$ of $y'_1$ and $y'_2$, respectively, such that $(x'_1, x'_2)\in {\rm IT}_2(X', G)$. Then $(\varphi(x'_1), \varphi(x'_2))\in {\rm IT}_2(X, G)\cap R_{\pi}={\rm IT}_{\pi}$. Since $\theta$ is distal, the pair $(y'_1, y'_2)$ is distal. Then the pair $(x'_1, x'_2)$ is also distal. As $\varphi$ is proximal, $\varphi(x'_1)\neq \varphi(x'_2)$. Thus $\pi$ is not tame, which contradicts our assumption. Therefore $\theta$ is an isomorphism. \end{proof} Back to the proof of Lemma~\ref{L-PI to proximal}. Since equicontinuous extensions are distal, by Lemma~\ref{L-no equi} we conclude that $\pi\circ \varphi$ can be obtained by a transfinite succession of proximal (resp.\ highly proximal) extensions. Since proximal (resp.\ highly proximal) extensions are preserved under transfinite compositions, $\pi\circ\varphi$ is proximal (resp.\ highly proximal), and hence $\pi$ is proximal (resp.\ highly proximal). \end{proof} \begin{theorem}\label{T-metric tame minimal to highly proximal} Suppose that $G$ is Abelian. Let $\pi: X\rightarrow Y$ be a tame extension of minimal systems. Consider the following conditions: \begin{enumerate} \item $\pi$ is highly proximal, \item $\pi$ is proximal, \item every equicontinuous factor of $(X, G)$ factors through $\pi$. \end{enumerate} Then one has (1)$\Rightarrow$(2)$\Leftrightarrow$(3). Moreover, if $X$ is metrizable or $\pi$ is open and $Y$ is metrizable, then conditions (1) to (3) are all equivalent. 
\end{theorem} \begin{proof} The implications (1)$\Rightarrow$(2)$\Rightarrow$(3) are trivial, and (3)$\Rightarrow$(2) follows from Lemmas~\ref{L-tame to PI} and \ref{L-PI to proximal}. When $X$ is metrizable, (3)$\Rightarrow$(1) follows from Lemmas~\ref{L-tame metrizable to HPI} and \ref{L-PI to proximal}. When $\pi$ is open and $Y$ is metrizable, (3)$\Rightarrow$(1) follows from Lemmas~\ref{L-open tame to HPI} and \ref{L-PI to proximal}. \end{proof} \begin{theorem}\label{T-decomposition} Suppose that $G$ is Abelian. Let $\pi: X\rightarrow Y$ be a tame extension of minimal systems. Then, up to isomorphisms, $\pi$ has a unique decomposition as $\pi=\varphi\circ \psi$ such that $\psi$ is proximal and $\varphi$ is equicontinuous. If furthermore $X$ is metrizable or $\pi$ is open and $Y$ is metrizable, then $\psi$ is highly proximal. \end{theorem} \begin{proof} When such a decomposition exists, clearly $\varphi$ must be the maximal equicontinuous factor of $\pi$. This proves uniqueness. Let ${\rm RP}_{\pi}$ and $S_{\pi}$ be as in the proof of Lemma~\ref{L-B+tame}. Then we have the natural extensions $\psi:X\rightarrow X/S_{\pi}$ and $\varphi: X/S_{\pi}\rightarrow Y$ with $\pi=\varphi\circ \psi$, and $\varphi$ is equicontinuous. For any extension $\theta : X\rightarrow W$ with $(W, G)$ equicontinuous, since ${\rm RP}_{\pi}\subseteq {\rm RP}(X, G)\subseteq R_{\theta}$ we see that $\theta$ factors through $\psi$. As $\pi$ is tame, so is $\psi$. By Theorem~\ref{T-metric tame minimal to highly proximal} $\psi$ is proximal. If $X$ is metrizable, then by Theorem~\ref{T-metric tame minimal to highly proximal} $\psi$ is highly proximal. If $\pi$ is open and $Y$ is metrizable, then by Lemma~\ref{L-open tame to HPI} $\pi$ is HPI. Using van der Woude's characterization of HPI extensions one sees immediately that $\psi$ is also HPI. By Lemma~\ref{L-PI to proximal}, $\psi$ is highly proximal. \end{proof} \begin{corollary}\label{C-HP over eq} Suppose that $G$ is Abelian. 
Then every tame minimal system $(X, G)$ is a highly proximal extension of an equicontinuous system. \end{corollary} Corollary~\ref{C-HP over eq} answers a question of Glasner \cite[Problem 2.5]{tame}, who asked whether every metrizable tame minimal system $(X, G)$ with $G$ Abelian is a proximal extension of an equicontinuous system. It also generalizes \cite[Theorem 4.3]{NSSEP} in which the conclusion is established for metrizable null minimal systems $(X, {\mathbb Z})$. Recall that a subset $H\subseteq G$ is called {\it thick} if for any finite $F\subseteq G$, one has $H\supseteq sF$ for some $s\in G$. We say that $H\subseteq G$ is {\it Poincar{\'e}} if for any measure preserving action of $G$ on a finite measure space $(Y, \mathscr{B}, \mu)$ and any $A\in \mathscr{B}$ with $\mu(A)>0$, one has $\mu(sA\cap A)>0$ for some $s\in H$. The argument on page~74 of \cite{RCNT} shows that every thick set is Poincar{\'e}. For a dynamical system $(X, G)$ and a Borel subset $U\subseteq X$, denote by $N(U, U)$ the set $\{s\in G: sU\cap U\neq \emptyset\}$. If $\mu(U)>0$ for some $G$-invariant Borel probability measure $\mu$ on $X$, then $N(U, U)$ has nonempty intersection with every Poincar{\'e} set. In particular, in this case $N(U, U)$ has nonempty intersection with every thick set, or, equivalently, $N(U, U)$ is syndetic \cite[page 16]{ETJ}. Using this fact and Lemma~\ref{L-semiopen to IT}, one sees that the proof of case 1 in \cite[Theorem 3.1]{NSSEP} leads to: \begin{lemma}\label{L-E to untame} Suppose that $G$ is Abelian. Suppose that a metrizable system $(X, G)$ is nonminimal, has a unique minimal subsystem, and has a $G$-invariant Borel probability measure with full support. Then $(X, G)$ is untame. \end{lemma} Using Corollary~\ref{C-HP over eq} and Lemmas~\ref{L-semiopen to IT} and \ref{L-E to untame} one also sees that the proof of \cite[Theorem 4.4]{NSSEP} works in our context, so that we obtain: \begin{lemma}\label{L-uniquely ergodic} Suppose that $G$ is Abelian. 
Then any metrizable tame minimal system $(X, G)$ is uniquely ergodic. \end{lemma} \begin{theorem}\label{T-uniquely ergodic} Suppose that $G$ is Abelian. Then any tame minimal system $(X, G)$ is uniquely ergodic. \end{theorem} \begin{proof} Let $\mu_1$ and $\mu_2$ be two $G$-invariant Borel probability measures on $X$. We use the notation established after Lemma~\ref{L-open tame to HPI}. Denote by $I$ the set of all pairs $(\rho, H)$ such that $\rho\in {\rm CP}(X)$, $H$ is a countable subgroup of $G$, and $(X^*_{\rho, H}, H)$ is minimal. Take $(Y, G)$ to be the trivial system in Lemma~\ref{L-open to quotient}. For every $(\rho, H)\in I$ we have, by Lemma~\ref{L-uniquely ergodic}, $\sigma_{\rho, H, *}(\mu_1)= \sigma_{\rho, H, *}(\mu_2)$, where $\sigma_{\rho, H, *}:M (X)\rightarrow M (X^*_{\rho, H})$ is the map between spaces of Borel probability measures induced by $\sigma_{\rho, H}$. Define a partial order on $I$ by $(\rho_1, H_1)\le (\rho_2, H_2)$ if $\rho_1\le \rho_2$ and $H_1\subseteq H_2$. As mentioned right after Lemma~\ref{L-non-minimal}, when $(\rho_1, H_1)\le (\rho_2, H_2)$ there exists a unique map $\sigma_{21}:X^*_{\rho_2, H_2}\rightarrow X^*_{\rho_1, H_1}$ such that $\sigma_{21}\circ \sigma_{\rho_2, H_2}=\sigma_{\rho_1, H_1}$. By \cite[Lemma V.3.9]{Vri}, $I$ is directed. It is easily checked that $X=\varprojlim_{(\rho, H)\in I} X^*_{\rho, H}$. Thus $M (X)=\varprojlim_{(\rho, H)\in I} \ M (X^*_{\rho, H})$. Therefore $\mu_1=\mu_2$. \end{proof} Theorem~\ref{T-uniquely ergodic} generalizes \cite[Theorem 4.4]{NSSEP} in which the conclusion is established for metrizable null minimal systems $(X, {\mathbb Z})$. \section{I-independence}\label{S-I-indep} Here we tie together several properties via the notion of I-independence, which, as Theorem~\ref{T-equiv-indep-prod} suggests, can be thought of as an analogue of measure-theoretic weak mixing for $C^*$-dynamical systems (compare also Theorem~\ref{T-even-WM}). 
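In the commutative case, I-independence reduces (by Proposition~\ref{P-equiv-indep} below) to a purely combinatorial intersection property for open sets: for every finite collection $U_1, \dots, U_m$ of basic open sets there is a single $s\in G$ with $U_i \cap s^{-1}U_j \neq\emptyset$ for all $i,j$. As an illustrative numerical sketch only, not part of the formal development, one can test this property for the doubling map on the circle, a standard weakly mixing system; the intervals and sample count below are arbitrary choices made for the demonstration.

```python
# Illustrative sketch only (not from the paper): condition (6) of
# Proposition P-equiv-indep asks, for open sets U_1, ..., U_m, for a single
# time s with U_i \cap s^{-1} U_j nonempty for all pairs (i, j).  We test
# this numerically for the doubling map T(x) = 2x mod 1 on the circle.

def doubling(x, n):
    """Apply the doubling map T(x) = 2x mod 1 a total of n times."""
    for _ in range(n):
        x = (2 * x) % 1.0
    return x

def pairwise_intersecting(intervals, n, samples=10000):
    """Check (by sampling) that U_i intersects T^{-n} U_j for all pairs,
    i.e. that some sampled point of each U_i lands in every U_j after
    n steps of the map."""
    for a, b in intervals:
        images = [doubling(a + (b - a) * k / samples, n)
                  for k in range(1, samples)]
        for c, d in intervals:
            if not any(c < y < d for y in images):
                return False
    return True

U = [(0.0, 0.25), (0.5, 0.75), (0.3, 0.45)]
# Once 2^{-n} is smaller than the shortest interval, every U_i meets
# T^{-n} U_j, so a common time n exists; print the least such n here.
print(next(n for n in range(1, 12) if pairwise_intersecting(U, n)))
```

Since the preimage $T^{-n}U_j$ is a union of $2^n$ evenly spaced intervals, a common time is guaranteed as soon as $2^{-n}$ drops below the length of the shortest $U_i$, which is why the search above terminates quickly.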
\begin{definition}\label{D-I-indep} A $C^*$-dynamical system $(A,G,\alpha )$ is said to be {\it I-independent} if for every finite-dimensional operator subsystem $V\subseteq A$ and $\varepsilon > 0$ there is a sequence $\{ s_k \}_{k=1}^\infty$ in $G$ such that $(s_1, \dots , s_k )$ is a $(1+\varepsilon )$-independence tuple for $V$ for each $k\geq 1$. \end{definition} Note that I-independence is to be distinguished from ${\mathcal{I}}$-independence, although the two turn out to be equivalent, as the next proposition demonstrates. \begin{proposition}\label{P-C-equiv-indep} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. Let $\mathfrak{S}$ be a collection of finite-dimensional operator subsystems of $A$ with the property that for every finite set $\Omega\subseteq A$ and $\varepsilon > 0$ there is a $V\in\mathfrak{S}$ such that $\Omega\subseteq_\varepsilon V$. Then the following are equivalent: \begin{enumerate} \item $\alpha$ is I-independent, \item for every $V\in\mathfrak{S}$ and $\varepsilon > 0$ the set ${\rm Ind} (\alpha , V, \varepsilon )$ is infinite, \item for every $V\in\mathfrak{S}$ and $\varepsilon > 0$ the set ${\rm Ind} (\alpha , V, \varepsilon )$ is nonempty, \item $\alpha$ is ${\mathcal{I}}$-independent, \item $\alpha$ is ${\mathcal{N}}$-independent. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow$(2)$\Rightarrow$(3) and (4)$\Rightarrow$(5). Trivial. (2)$\Rightarrow$(4) and (3)$\Rightarrow$(5). Apply Proposition~\ref{P-dense}. (5)$\Rightarrow$(1). Let $V$ be a finite-dimensional operator subsystem of $A$ and let $\varepsilon > 0$. With the aim of verifying I-independence, we may assume that $V$ does not equal the scalars, so that $V$ has linear dimension at least two.
By recursion we will construct a sequence $\{ s_1 , s_2 ,\dots \}$ of distinct elements of $G$ such that for each $k\geq 1$ the linear map $\varphi_k : V^{\otimes [1,k]} \to V_k := [ \alpha_{s_1} (V) \cdots \alpha_{s_k} (V) ] \subseteq A$ determined on elementary tensors by $a_1 \otimes\cdots\otimes a_k \mapsto \alpha_{s_1} (a_1 ) \cdots \alpha_{s_k} (a_k )$ is a $(1+\varepsilon )^{1-2^{-k+1}}$-c.b.-isomorphism. To begin with set $s_1 = e$. Now let $k\geq 1$ and suppose that $s_1 , s_2 , \dots , s_k$ have been defined so that $\varphi_k$ is a $(1+\varepsilon )^{1-2^{-k+1}}$-c.b.-isomorphism onto its image. By (3) there is an $s_{k+1}\in G$ such that the linear map $\psi : V_k \otimes V_k \to [V_k \alpha_{s_{k+1}} (V_k )]$ determined by $a\otimes b \mapsto a\alpha_{s_{k+1}} (b)$ is a $(1+\varepsilon )^{2^{-k}}$-c.b.-isomorphism. Since $V$ has linear dimension at least two, we must have $s_{k+1} \notin\{ s_1 , \dots , s_k \}$, for otherwise $\psi$ would not be injective. Set $\gamma = \varphi_k \otimes {\rm id}_V : V^{\otimes [1,k]} \otimes V = V^{\otimes [1,k+1]} \to V_k \otimes V$. By the injectivity of the minimal tensor product, we may view $V_k \otimes V$ as a subspace of $V_k \otimes V_k$, in which case we have $\varphi_{k+1} = \psi\circ\gamma$. Then \begin{align*} \| \varphi_{k+1} \|_{\rm cb} \| \varphi_{k+1}^{-1} \|_{\rm cb} &\leq \| \psi \|_{\rm cb} \| \varphi_k \otimes {\rm id}_V \|_{\rm cb} \| \psi^{-1} \|_{\rm cb} \| \varphi_k^{-1} \otimes {\rm id}_V \|_{\rm cb} \\ &= \| \psi \|_{\rm cb} \| \psi^{-1} \|_{\rm cb} \| \varphi_k \|_{\rm cb} \| \varphi_k^{-1} \|_{\rm cb} \\ &\leq (1+\varepsilon )^{2^{-k}} (1+\varepsilon )^{1-2^{-k+1}} \\ &= (1+\varepsilon )^{1-2^{-k}} , \end{align*} so that $\varphi_{k+1}$ is a $(1+\varepsilon )^{1-2^{-k}}$-c.b.-isomorphism, as desired. Since for each $k\geq 1$ the map $\varphi_k$ is a $(1+\varepsilon )$-c.b.-isomorphism, we obtain (1). 
\end{proof} \begin{proposition}\label{P-I-indep-powers} A $C^*$-dynamical system $(A,G,\alpha )$ is I-independent if and only if the product system $(A^{\otimes [1,m]},G,\alpha^{\otimes [1,m]} )$ is I-independent for every $m\in{\mathbb N}$. \end{proposition} \begin{proof} For the nontrivial direction we can apply Proposition~\ref{P-C-equiv-indep} using the following two observations: (i) the collection of operator subsystems of $A^{\otimes [1,m]}$ of the form $\bigotimes_{i=1}^m V_i$ for finite-dimensional operator subsystems $V_1 , \dots , V_m \subseteq A$ satisfies the property required of $\mathfrak{S}$ in the statement of Proposition~\ref{P-C-equiv-indep}, and (ii) given finite-dimensional operator subsystems $V_1 , \dots , V_m \subseteq A$ and a $\lambda\geq 1$, if a tuple in $G$ is a $\lambda$-independence tuple for the linear span of $\bigcup_{i=1}^m V_i$ then it is a $\lambda^m$-independence tuple for $V_1 \otimes\cdots\otimes V_m$, as follows from the fact that for any c.b.\ isomorphisms $\varphi_i : E_i \to F_i$ between operator spaces for $i=1,\dots ,m$ the tensor product $\varphi = \bigotimes_{i=1}^m \varphi_i$ is a c.b.\ isomorphism with $\| \varphi \|_{\rm cb} \| \varphi^{-1} \|_{\rm cb} = \prod_{i=1}^m \| \varphi_i \|_{\rm cb} \| \varphi_i^{-1} \|_{\rm cb}$. \end{proof} \begin{proposition}\label{P-equiv-indep} Let $(X,G)$ be a dynamical system. Let ${\mathcal B}$ be a basis for the topology on $X$ which does not contain the empty set. 
Then the following are equivalent: \begin{enumerate} \item $(X,G)$ is I-independent, \item for every finite-dimensional operator subsystem $V\subseteq C(X)$ there are a sequence $\{ s_k \}_{k=1}^\infty$ in $G$ and a $\lambda\geq 1$ such that $(s_1, \dots , s_k )$ is a $\lambda$-independence tuple for $V$ for each $k\geq 1$, \item every finite-dimensional operator subsystem of $C(X)$ has arbitrarily long $\lambda$-independence tuples for some $\lambda\geq 1$, \item every nonempty finite subcollection of ${\mathcal B}$ has an infinite independence set, \item every nonempty finite subcollection of ${\mathcal B}$ has arbitrarily large finite independence sets, \item for every nonempty finite collection $\{ U_1 , \dots , U_m \} \subseteq{\mathcal B}$ there is an $s\in G$ such that $U_i \cap s^{-1} U_j \neq\emptyset$ for all $i,j=1,\dots ,m$. \end{enumerate} \end{proposition} \begin{proof} (1)$\Rightarrow$(2)$\Rightarrow$(3). Trivial. (2)$\Rightarrow$(4). Let $\{ U_1 , \dots , U_m \}$ be a nonempty finite subcollection of ${\mathcal B}$. By shrinking the $U_i$ if necessary, we may assume for the purpose of establishing (4) that $U_i \cap U_j = \emptyset$ for $i\neq j$. For each $i=1,\dots ,m$ take a nonempty closed set $W_i \subseteq U_i$. Let $\Theta = \{ g_0 , g_1 , \dots , g_m \}$ be a partition of unity of $X$ such that ${\rm supp} (g_i )\subseteq U_i$ for each $i$ and ${\rm supp} (g_0 ) \subseteq X\setminus \bigcup_{i=1}^m W_i$. Let $V$ be the linear span of $\Theta$. By (2) there are a sequence $\{ s_1 , s_2 , \dots \}$ in $G$ and a $\lambda\geq 1$ such that for each $k\geq 1$ the contractive linear map $V^{\otimes [1,k]} \to C(X)$ determined on elementary tensors by $f_1 \otimes\cdots\otimes f_k \mapsto \alpha_{s_1} (f_1 ) \cdots \alpha_{s_k} (f_k )$ is a $\lambda$-isomorphism onto its image. Now suppose we are given a $k\in{\mathbb N}$ and a $\sigma\in {\{ 1,\dots ,m \}}^{\{ 1 , \dots , k \}}$.
Then $\| \alpha_{s_1} (g_{\sigma (1)} )\cdots\alpha_{s_k} (g_{\sigma (k)} ) \| \geq \lambda^{-1}$. Choose $x\in X$ such that $g_{\sigma (1)} (s_1 x)\cdots g_{\sigma (k)} (s_k x) = \| \alpha_{s_1} (g_{\sigma (1)} )\cdots\alpha_{s_k} (g_{\sigma (k)} ) \|$. Then for each $i=1,\dots ,k$ we must have $g_{\sigma (i)} (s_i x) > 0$, which implies that $x\in s_i^{-1} U_{\sigma (i)}$. Hence $\bigcap_{i=1}^k s_i^{-1} U_{\sigma (i)} \neq\emptyset$, and so we obtain (4). (4)$\Rightarrow$(5)$\Rightarrow$(6). Trivial. (3)$\Rightarrow$(5). This can be proved along the lines of the argument used for (2)$\Rightarrow$(4). (6)$\Rightarrow$(1). Let $\Theta = \{ g_1 , \dots , g_m \}$ be a partition of unity of $X$ for which there are elements $U_1 , \dots , U_m$ of ${\mathcal B}$ such that $g_i |_{U_j} = \delta_{ij} \chi_{U_j}$, where $\chi_{U_j}$ is the characteristic function of $U_j$. Note that the collection of subspaces of $C(X)$ spanned by such partitions of unity has the property required of $\mathfrak{S}$ in the statement of Proposition~\ref{P-C-equiv-indep}. By (6) there is an $s\in G$ such that $U_i \cap s^{-1} U_j \neq\emptyset$ for all $i,j=1,\dots ,m$. Let $V$ be the subspace of $C(X)$ spanned by $\Theta$ and let $\varphi : V\otimes V \to C(X)$ be the linear map determined on elementary tensors by $f\otimes g \mapsto f\alpha_s (g)$. Then $\{ g_i \alpha_s (g_j ) \}_{1\leq i,j\leq m}$ is an effective partition of unity of $X$, and hence is isometrically equivalent to the standard basis of $\ell_\infty^{m^2}$. Since the subset $\{ g_i \otimes g_j \}_{1\leq i,j\leq m}$ of $V\otimes V$ is also isometrically equivalent to the standard basis of $\ell_\infty^{m^2}$, we conclude that $\varphi$ is an isometric isomorphism. In view of Proposition~\ref{P-C-equiv-indep}, this yields (1). \end{proof} \begin{remark} In the case $G={\mathbb Z}$, the proof of Proposition~\ref{P-C-equiv-indep} shows that the sequence in the definition of I-independence can be taken to be in ${\mathbb N}$.
This is because ${\rm Ind} (\alpha , V, \varepsilon )$ is closed in general under taking inverses (given a c.b.\ isomorphism $\varphi : V\otimes V \to [V\alpha_s (V)]$ with $\varphi (v\otimes w) = v\alpha_s (w)$, the map $V\otimes V \to [V\alpha_{s^{-1}} (V)]$ defined by $v\otimes w \mapsto v\alpha_{s^{-1}} (w) = \alpha_{s^{-1}} (\varphi (w^* \otimes v^* )^* )$ has the same c.b.\ isomorphism constant). Thus in Proposition~\ref{P-equiv-indep} the sequence in each of conditions (2) and (4) can be taken to be in ${\mathbb N}$. \end{remark} The next theorem extends \cite[Theorem 2.1]{NSSEP}. Recall that $(X, G)$ is said to be {\it (topologically) transitive} if every nonempty open invariant subset of $X$ is dense, and {\it (topologically) weakly mixing} if the product system $(X\times X, G)$ is transitive. For the definitions of uniform nonnullness and untameness of all orders see Sections~\ref{S-null} and \ref{S-tame}, respectively. \begin{theorem}\label{T-equiv-indep-prod} Let $(X,G)$ be a dynamical system and consider the following conditions: \begin{enumerate} \item $(X,G)$ is I-independent, \item $(X,G)$ is uniformly untame of all orders, \item $(X,G)$ is uniformly nonnull of all orders, \item for every $n\in{\mathbb N}$ the product system $(X^n ,G)$ is weakly mixing, \item for every $n\in{\mathbb N}$ the product system $(X^n ,G)$ is transitive, \item $(X,G)$ is uniformly untame, \item $(X,G)$ is uniformly nonnull, \item $(X,G)$ is weakly mixing. \end{enumerate} Then conditions (1) to (5) are equivalent, and (5)$\Rightarrow$(6)$\Rightarrow$(7). When $G$ is Abelian, conditions (1) to (8) are all equivalent. \end{theorem} \begin{proof} The implications (2)$\Rightarrow$(6)$\Rightarrow$(7) are trivial. In the case that $G$ is Abelian, (7)$\Rightarrow$(8) follows as in the proof of \cite[Theorem 2.1]{NSSEP} using the lemma in \cite{WM}, while (8)$\Rightarrow$(5) is \cite[Proposition II.3]{Fur}.
The implication (1)$\Rightarrow$(2) follows from Proposition~\ref{P-equiv-indep}, (2)$\Rightarrow$(3) and (4)$\Rightarrow$(5) are trivial, and (3)$\Rightarrow$(4) is readily seen. Finally, to show (5)$\Rightarrow$(1), let $\{ U_1 , \dots , U_n \}$ be a nonempty finite collection of nonempty open subsets of $X$. Let $\Lambda$ be the set of all integer pairs $(i,j)$ with $1\leq i,j\leq n$. Set $V_0 = \prod_{(i,j)\in\Lambda} U_i \subseteq X^\Lambda$ and $V_1 = \prod_{(i,j)\in\Lambda} U_j \subseteq X^\Lambda$. By (5) there is an $s\in G$ such that $V_0 \cap s^{-1} V_1 \neq\emptyset$, so that $U_i \cap s^{-1} U_j \neq\emptyset$ for all $i,j=1, \dots ,n$. It then follows by Proposition~\ref{P-equiv-indep} that $(X,G)$ is I-independent. \end{proof} By the above theorem, we can consider I-independence to be the analogue for noncommutative $C^*$-dynamical systems of, among other properties, uniform untameness of all orders (compare the discussion in the last paragraph of Section~\ref{S-entropy tensor}). On the other hand, the notions of tameness (resp.\ untameness) and complete untameness make sense for any $C^*$-dynamical system, the former meaning that no element (resp.\ some element) has an infinite $\ell_1$-isomorphism set and the latter that every nonscalar element has an infinite $\ell_1$-isomorphism set. We will end this section by observing that, in the general $C^*$-dynamical context, I-independence (in fact a weaker independence condition) implies complete untameness. To verify complete untameness it suffices to check the existence of an infinite $\ell_1$-isomorphism set over ${\mathbb R}$ for every self-adjoint nonscalar element, as the following lemma based on Rosenthal-Dor arguments demonstrates. \begin{lemma}\label{L-sa} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. Let $a\in A$, and suppose that at least one of ${\rm re} (a)$ and ${\rm im} (a)$ has an infinite $\ell_1$-isomorphism set over ${\mathbb R}$. Then $a$ has an infinite $\ell_1$-isomorphism set. 
\end{lemma} \begin{proof} Suppose that $\{ \alpha_{s_n} (a) \}_{n\in{\mathbb N}}$ is a sequence in the orbit of $a$ which converges weakly to some $b\in A$. Then $\lim_{n\to\infty} \sigma ({\rm re} (\alpha_{s_n} (a))) = \sigma ({\rm re} (b))$ and $\lim_{n\to\infty} \sigma ({\rm im} (\alpha_{s_n} (a))) = \sigma ({\rm im} (b))$ for all self-adjoint $\sigma\in A^*$, so that both $\{ \alpha_{s_n} ({\rm re} (a)) \}_{n\in{\mathbb N}}$ and $\{ \alpha_{s_n} ({\rm im} (a)) \}_{n\in{\mathbb N}}$ are weakly convergent over ${\mathbb R}$. It follows from this observation and \cite{Dor} that $a$ has an infinite $\ell_1$-isomorphism set. \end{proof} For a unital $C^*$-algebra $A$, we denote by ${\mathcal S}_2 (A)$ the collection of $2$-dimensional operator subsystems of $A$ equipped with the metric given by Hausdorff distance between unit balls. \begin{proposition}\label{P-I-indep-cnr} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. Let $\lambda\geq 1$, and suppose that in ${\mathcal S}_2 (A)$ there is a dense collection of $V$ for which there is a sequence $\{ s_1 , s_2 , \dots \}$ in $G$ such that $(s_1 , \dots , s_k )$ is a $\lambda$-independence tuple for $V$ for each $k\geq 1$. Then $\alpha$ is completely untame. In particular, an I-independent $C^*$-dynamical system is completely untame. \end{proposition} \begin{proof} Let $a$ be a nonscalar self-adjoint element of $A$, and denote by $V$ the $2$-dimensional operator system ${\rm span} \{ 1,a \}$. Suppose that there is a sequence $\{ s_1 , s_2 , \dots \}$ in $G$ such that for each $k$ the linear map $V^{\otimes [1,k]} \to A$ determined by $a_1 \otimes\cdots\otimes a_k \mapsto \alpha_{s_1} (a_1 ) \cdots \alpha_{s_k} (a_k )$ is a $\lambda$-c.b.-isomorphism onto its image. It then follows by Lemma~\ref{L-tensor-indep} that the set $\{ \alpha_{s_k} (a) \}_{k\in{\mathbb N}}$ is $\lambda'$-equivalent to the standard basis of $\ell_1^{\mathbb N}$ for some $\lambda'$ depending only on $\lambda$ and the spectral diameter of $a$.
By a straightforward perturbation argument and Lemma~\ref{L-sa} we conclude that $\alpha$ is completely untame. \end{proof} \section{Independence, Abelianness, and weak mixing in $C^*$-dynamical systems}\label{S-NC} Once we express independence in terms of minimal tensor products and move into the noncommutative realm, a close connection to Abelianness reveals itself. Indeed one of the goals of this section is to show that, in simple unital nuclear $C^*$-algebras, independence and Abelianness in the dynamical context essentially amount to the same thing (Theorem~\ref{T-nuclear}). This provides a conceptual basis for the sense gained from examples that concepts like hyperbolicity and topological K-ness should be interpreted in the simple nuclear case as certain types of asymptotic Abelianness. See for instance \cite{Narn,Nesh} and the discussion at the end of Section~\ref{S-entropy tensor}, and compare also \cite{Dis}, where a relationship between tensor product structure and asymptotic Abelianness is established as a tool in the study of derivations and dissipations. The second main result of this section concerns the implications for independence of the existence of weakly mixing invariant states (Theorem~\ref{T-WMcontrInd}). \begin{definition}\label{D-Abelian} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. Let $V$ be a finite-dimensional operator subsystem of $A$ and let $\varepsilon > 0$. We define ${\rm Ab} (\alpha , V, \varepsilon )$ to be the set of all $s\in G_0$ such that $\| [v,\alpha_s (w) ] \| \leq\varepsilon \| v \| \| w \|$ for all $v,w\in V$. For a collection ${\mathcal C}$ of subsets of $G_0$ which is closed under taking supersets, we say that the system $(A,G,\alpha )$ or the action $\alpha$ is {\it ${\mathcal C}$-Abelian} if for every finite-dimensional operator subsystem $V\subseteq A$ and $\varepsilon > 0$ the set ${\rm Ab} (\alpha , V, \varepsilon )$ is a member of ${\mathcal C}$. 
\end{definition} When $G$ is an infinite group, ${\mathcal{I}}$-Abelianness can be characterized as follows. \begin{proposition}\label{P-Abeliandense} Let $(A,G,\alpha )$ be a $C^*$-dynamical system with $G$ an infinite group. Then the following are equivalent: \begin{enumerate} \item $\alpha$ is ${\mathcal{I}}$-Abelian, \item $\alpha$ is ${\mathcal{N}}$-Abelian, \item there is a net $\{ s_\gamma \}_\gamma$ in $G$ (which can be taken to be a sequence if $A$ is separable) such that $\lim_\gamma \| [a,\alpha_{s_\gamma} (b) ] \| = 0$ for all $a,b\in A$. \end{enumerate} \end{proposition} \begin{proof} The implication (1)$\Rightarrow$(2) is trivial. Given a net $\{ s_\gamma \}_\gamma$ as in (3), if it has a subnet tending to infinity then we obviously obtain (1), and if not then it has a convergent subnet and hence $A$ is commutative, so that we again obtain (1). Suppose then that (2) holds and let us prove (3). Let $\mathfrak{D}$ be a set of finite-dimensional operator subsystems of $A$ directed by inclusion such that $\bigcup\mathfrak{D}$ is dense in $A$. Let $\Gamma$ be the directed set of all pairs $(V,\varepsilon )$ such that $V\in\mathfrak{D}$ and $\varepsilon > 0$, where $(V' ,\varepsilon' )\succ (V,\varepsilon )$ means that $V' \supseteq V$ and $\varepsilon' \leq\varepsilon$. For every $\gamma = (V,\varepsilon )\in\Gamma$ we can find by (2) an $s_\gamma \in G$ such that $\| [v,\alpha_{s_\gamma} (w) ] \| \leq\varepsilon \| v \| \| w \|$ for all $v,w\in V$. Then $\lim_\gamma \| [a,\alpha_{s_\gamma} (b) ] \| = 0$ for all $a,b\in A$, as desired. In the case that $A$ is separable we can use instead a sequence of pairs $(V_n ,1/n)$ where $V_1 \subseteq V_2\subseteq \dots$ is an increasing sequence of finite-dimensional operator subsystems of $A$ such that $\bigcup_{n\in{\mathbb N}} V_n$ is dense in $A$.
\end{proof} Condition (3) in Proposition~\ref{P-Abeliandense} in the case where $G$ is the entire $^*$-automorphism group of $A$ is of importance in $C^*$-algebra structure and classification theory. See for example Lemma~5.2.3 in \cite{Ror}. \begin{lemma}\label{L-cp} For every $\varepsilon > 0$ there is a $\delta > 0$ such that whenever $V$ is an operator system, ${\mathcal H}$ is a Hilbert space, and $\varphi : V\to{\mathcal B} ({\mathcal H} )$ is a unital c.b.\ map with $\| \varphi \|_{\rm cb} \leq 1 + \delta$, there is a completely positive map $\psi : V\to{\mathcal B} ({\mathcal H} )$ with $\| \psi - \varphi \| \leq \varepsilon$. \end{lemma} \begin{proof} Let $V$ be an operator system, ${\mathcal H}$ a Hilbert space, and $\varphi : V\to{\mathcal B} ({\mathcal H} )$ a unital c.b.\ map. We may assume $V$ to be an operator subsystem of a unital $C^*$-algebra $A$. Then by the Arveson-Wittstock extension theorem we can extend $\varphi$ to a c.b.\ map $\tilde{\varphi} : A\to{\mathcal B} ({\mathcal H} )$ with $\| \tilde{\varphi} \|_{\rm cb} = \| \varphi \|_{\rm cb}$. Using the representation theorem for c.b.\ maps as in the proof of Lemma~3.3 in \cite{PS}, we can then produce a completely positive map $\rho : A\to{\mathcal B} ({\mathcal H} )$ such that $\| \tilde{\varphi} - \rho \|_{\rm cb} \leq \sqrt{2 \| \tilde{\varphi} \|_{\rm cb} (\| \tilde{\varphi} \|_{\rm cb} - 1)}$, from which the lemma follows. \end{proof} \begin{lemma}\label{L-contrAb} For every $\varepsilon > 0$ there is a $\delta > 0$ such that whenever $A$ is a unital $C^*$-algebra, $\alpha$ is a unital $^*$-endomorphism of $A$, and $V$ is a finite-dimensional operator subsystem of $A$ for which the linear map $V\otimes V\to [V\alpha (V)]$ determined on elementary tensors by $v_1 \otimes v_2 \mapsto v_1 \alpha (v_2 )$ has c.b.\ norm at most $1+\delta$, we have $\| [v_1 , \alpha (v_2 ) ] \| \leq \varepsilon \| v_1 \| \| v_2 \|$ for all $v_1 , v_2 \in V$.
\end{lemma} \begin{proof} Let $\varepsilon > 0$, and take $\delta$ as given by Lemma~\ref{L-cp} with respect to $\varepsilon /2$. Let $A$ be a unital $C^*$-algebra, $\alpha$ a unital $^*$-endomorphism of $A$, and $V$ an operator subsystem of $A$, and suppose that the linear map $\varphi : V\otimes V\to [V\alpha (V)]$ determined on elementary tensors by $v_1 \otimes v_2 \mapsto v_1 \alpha (v_2 )$ has c.b.\ norm at most $1+\delta$. We regard $A$ as a unital $C^*$-subalgebra of ${\mathcal B} ({\mathcal H} )$ for some Hilbert space ${\mathcal H}$. Then by our choice of $\delta$ there is a completely positive map $\psi : V\otimes V \to{\mathcal B} ({\mathcal H} )$ with $\| \psi - \varphi \| \leq \varepsilon /2$. It follows that, for all $v_1 , v_2 \in V$, \begin{align*} \| v_1 \alpha (v_2 ) - \alpha (v_2 )v_1 \| &\leq \| \varphi (v_1 \otimes v_2 ) - \psi (v_1 \otimes v_2 ) \| + \| \psi (v_1^* \otimes v_2^* )^* - \varphi (v_1^* \otimes v_2^* )^* \| \\ &\leq 2 \| \varphi - \psi \| \| v_1 \| \| v_2 \| \\ &\leq \varepsilon \| v_1 \| \| v_2 \| , \end{align*} yielding the lemma. \end{proof} Lemma~\ref{L-contrAb} immediately yields: \begin{proposition}\label{P-contrAb} Let ${\mathcal C}$ be a collection of subsets of $G_0$ which is closed under taking supersets. Then for a $C^*$-dynamical system with acting semigroup $G$, ${\mathcal C}$-contractivity implies ${\mathcal C}$-Abelianness. \end{proposition} \begin{theorem}\label{T-nuclear} Let $(A,G,\alpha )$ be a $C^*$-dynamical system with $A$ nuclear. Let ${\mathcal C}$ be a collection of subsets of $G_0$ which is closed under taking supersets. Consider the following conditions: \begin{enumerate} \item $\alpha$ is ${\mathcal C}$-independent, \item $\alpha$ is ${\mathcal C}$-contractive, \item $\alpha$ is ${\mathcal C}$-Abelian. \end{enumerate} Then (1)$\Rightarrow$(2)$\Leftrightarrow$(3), and if $A$ is simple then all three conditions are equivalent.
\end{theorem} \begin{proof} The implication (1)$\Rightarrow$(2) is trivial, while (2)$\Rightarrow$(3) is a special case of Proposition~\ref{P-contrAb}. Assume that (3) holds and let us prove (2). Suppose that $\alpha$ is not ${\mathcal C}$-contractive. Then there are a finite-dimensional operator subsystem $V\subseteq A$ and an $\varepsilon > 0$ such that no $(1+\varepsilon )$-contraction set for $V$ is a member of ${\mathcal C}$. Denote by $\Lambda$ the set of pairs $(E,\delta )$ where $E$ is a finite-dimensional operator subsystem of $A$ containing $V$ and $\delta > 0$. For every $\lambda = (E,\delta )\in\Lambda$ let $H_\lambda$ be the set of all $s\in G$ such that $\| [v,\alpha_s (w)] \| \leq \delta \| v \| \| w \|$ for all $v,w\in E$ and the linear map $\varphi_s : V\otimes V \to [V\alpha_s (V)]$ defined on elementary tensors by $\varphi_s (v\otimes w) = v\alpha_s (w)$ has c.b.\ norm greater than $1+\varepsilon$. Since $\alpha$ is ${\mathcal C}$-Abelian, $H_\lambda$ is nonempty for every $\lambda\in\Lambda$, and for all $\lambda_1 = (E_1 , \delta_1 )$ and $\lambda_2 = (E_2 , \delta_2 )$ in $\Lambda$ we have $H_\lambda \subseteq H_{\lambda_1} \cap H_{\lambda_2}$ for $\lambda = (E_1 + E_2 , \min (\delta_1 , \delta_2 ))$. Thus the collection $\{ H_\lambda : \lambda\in\Lambda \}$ forms a filter base over $G$. Let $\omega$ be an ultrafilter containing this collection. We write $\ell_\infty^G (A) / c_\omega (A)$ for the ultrapower of $A$ with respect to $\omega$ (see \cite[Sect.\ 6.2]{Ror}). Denote by $\pi$ the quotient map $\ell_\infty^G (A) \to \ell_\infty^G (A) / c_\omega (A)$. Let $\Phi_1 , \Phi_2 : A \to \ell_\infty^G (A) / c_\omega (A)$ be the $^*$-homomorphisms given by $\Phi_1 (a) = \pi ((a)_{s\in G} )$ and $\Phi_2 (a) = \pi ((\alpha_s (a))_{s\in G} )$ for all $a\in A$. 
Then the images of $\Phi_1$ and $\Phi_2$ commute, and thus, since $A \otimes A = A \otimes_{\rm max} A$ by the nuclearity of $A$, we obtain a $^*$-homomorphism $\Phi : A\otimes A \to\ell_\infty^G (A) / c_\omega (A)$ such that $\Phi (a_1 \otimes a_2 ) = \Phi_1 (a_1 ) \Phi_2 (a_2 )$ for all $a_1 , a_2 \in A$ \cite[Prop.\ IV.4.7]{Tak1}. Using the description of nuclearity as a completely positive approximation property we can recursively construct a separable nuclear operator subsystem $W$ of $A$ containing $V$. By the Choi-Effros lifting theorem \cite{CE} there is a unital completely positive map $\psi : W\otimes W\to\ell_\infty^G (A)$ such that $\pi\circ\psi = \Phi |_{W\otimes W}$, viewing $W\otimes W$ as a subspace of $A\otimes A$. Since the unit ball of $V\otimes V$ is compact, $\omega$ contains the set $H$ of all $s\in G$ for which $\| \varphi_s - \pi_s \circ\psi |_{V\otimes V} \| \leq \dim (V)^{-2} \varepsilon$, where $\pi_s$ denotes the coordinate projection $\ell_\infty^G (A) \to A$ associated to $s$ and $V\otimes V$ is viewed as a subspace of $W\otimes W$. Appealing to \cite[Cor.\ 2.2.4]{ER}, for $s\in H$ we have \begin{align*} \| \varphi_s \|_{\rm cb} &\leq \| \pi_s \circ\psi |_{V\otimes V} \|_{\rm cb} + \| \varphi_s - \pi_s \circ\psi |_{V\otimes V} \|_{\rm cb} \\ &\leq 1 + \dim (V)^2 \| \varphi_s - \pi_s \circ\psi |_{V\otimes V} \| \\ &\leq 1+\varepsilon , \end{align*} which yields a contradiction since $H$ intersects every $H_\lambda$. We thus obtain (2). Now suppose that $A$ is simple and let us show (3)$\Rightarrow$(1). Suppose that $\alpha$ is not ${\mathcal C}$-independent. Then there are a finite-dimensional operator subsystem $V\subseteq A$ and an $\varepsilon > 0$ such that no $(1+\varepsilon )^2$-independence set for $V$ is a member of ${\mathcal C}$. As before define $\Lambda$ to be the set of pairs $(E,\delta )$ where $E$ is a finite-dimensional operator subsystem of $A$ containing $V$ and $\delta > 0$. 
For every $\lambda = (E,\delta )\in\Lambda$ let $H_\lambda$ be the set of all $s\in G$ such that $\| [v,\alpha_s (w)] \| \leq \delta \| v \| \| w \|$ for all $v,w\in E$ and the linear map $\varphi_s : V\otimes V \to [V\alpha_s (V)]$ defined on elementary tensors by $\varphi_s (v\otimes w) = v\alpha_s (w)$ either has c.b.\ norm greater than $1+\varepsilon$ or is not invertible or has an inverse with c.b.\ norm greater than $1+\varepsilon$. As in the previous paragraph we construct a separable nuclear operator subsystem $W$ of $A$ containing $V$ and apply the Choi-Effros lifting theorem to obtain a unital completely positive map $\psi : W\otimes W\to\ell_\infty^G (A)$ such that $\pi\circ\psi = \Phi |_{W\otimes W}$. Since $A$ is simple so is $A\otimes A$, and hence $\Phi$ is faithful. Consequently $\psi$ is a complete order isomorphism. Let $\delta$ be a positive number to be specified below. Since $A$ is nuclear, $V$ is $1$-exact, and thus $V\otimes V$ is $1$-exact (see \cite{ER,Pis}). By a result of Smith \cite[Prop.\ 2.2.2]{ER} and the characterization of $1$-exactness in terms of almost completely isometric embeddings into matrix algebras (see \cite[Lemma 17.8]{Pis}), it follows that there exists a $k\in{\mathbb N}$ such that whenever $E$ is an operator space and $\rho : E\to V\otimes V$ is a bounded linear map we have $\| \rho \|_{\rm cb} \leq \sqrt{1+\delta} \| {\rm id}_{M_k} \otimes \rho \|$. 
Under the canonical identification $(M_k \otimes \ell_\infty^G (A)) / (M_k \otimes c_\omega (A)) \cong M_k \otimes (\ell_\infty^G (A) / c_\omega (A))$, we regard the complete order embedding ${\rm id}_{M_k} \otimes \psi : M_k \otimes W\otimes W \to M_k \otimes \ell_\infty^G (A)$ as a lift of the restriction to $M_k \otimes W\otimes W$ of the $^*$-homomorphism ${\rm id}_{M_k} \otimes\Phi : M_k \otimes A \otimes A\to M_k \otimes (\ell_\infty^G (A) / c_\omega (A))$ with respect to the quotient map $M_k \otimes \ell_\infty^G (A) \to (M_k \otimes \ell_\infty^G (A)) / (M_k \otimes c_\omega (A))$. Since the unit ball of $M_k \otimes V\otimes V$ is compact, $\omega$ contains the set $H$ of all $s\in G$ for which (i) $\| \varphi_s - \pi_s \circ\psi |_{V\otimes V} \| \leq \dim (V)^{-2} \varepsilon$ and (ii) $\pi_s \circ\psi |_{V\otimes V} $ is invertible with $\| {\rm id}_{M_k} \otimes (\pi_s \circ\psi |_{V\otimes V} )^{-1} \| < \sqrt{1 + \delta}$ and $\| \varphi_s - \pi_s \circ\psi |_{V\otimes V} \| < \delta$. From (i) we obtain $\| \varphi_s \|_{\rm cb} \leq 1 + \varepsilon$ as before, while from (ii) we obtain $\| (\pi_s \circ\psi |_{V\otimes V} )^{-1} \|_{\rm cb} < 1 + \delta$ so that, by Lemma~\ref{L-perturb}, if $\delta$ is small enough as a function of $\dim (V)$ and $\varepsilon$ then $\varphi_s$ is invertible with $\| \varphi_s^{-1} \|_{\rm cb} \leq 1+\varepsilon$. This produces a contradiction since $H$ intersects every $H_\lambda$. Thus (3)$\Rightarrow$(1) in the simple case. \end{proof} For the remainder of this section $G$ will be a group. Let $(A,G,\alpha )$ be a $C^*$-dynamical system and $\sigma$ a $G$-invariant state on $A$. We write ${\mathcal H}_\sigma$ for the GNS Hilbert space of $\sigma$ and $\Omega_\sigma$ for the vector in ${\mathcal H}_\sigma$ associated to the unit of $A$. We denote by ${\mathcal H}_{\sigma ,0}$ the orthogonal complement in ${\mathcal H}_\sigma$ of the one-dimensional subspace spanned by $\Omega_\sigma$. 
Note that ${\mathcal H}_{\sigma ,0}$ is invariant under the action of $G$. We denote by $\mathfrak{m}$ the unique invariant mean on the space ${\rm WAP} (G)$ of weakly almost periodic bounded uniformly continuous functions on $G$. A {\it flight function} is a function $f\in{\rm WAP} (G)$ such that $\mathfrak{m} (|f|) = 0$, which is equivalent to the condition that for every $\varepsilon > 0$ the set $\{ s\in G : | f(s) | < \varepsilon \}$ is thickly syndetic (see \cite{BerRos,ETJ}). We write $C_\sigma$ for the norm closure of the linear span of the functions $s\mapsto \langle U_s \xi , \zeta \rangle$ on $G$ for all $\xi , \zeta\in{\mathcal H}_{\sigma ,0}$, which is a subspace of ${\rm WAP} (G)$. We can alternatively describe $C_\sigma$ as the norm closure of the linear span of the functions $s\mapsto\sigma (b\alpha_s (a))$ on $G$ for all $a,b\in A$ such that $\sigma (a) = \sigma (b) = 0$. Recall that a strongly continuous unitary representation $\pi$ of $G$ on a Hilbert space ${\mathcal H}$ is said to be {\it weakly mixing} if for all $\xi , \zeta\in{\mathcal H}$ the function $s\mapsto |\langle\pi (s)\xi , \zeta\rangle |$ on $G$ is a flight function. \begin{definition} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. A $G$-invariant state $\sigma$ on $A$ is said to be {\it weakly mixing} if the representation of $G$ on ${\mathcal H}_{\sigma ,0}$ is weakly mixing, i.e., if $f$ is a flight function for every $f\in C_\sigma$. \end{definition} \begin{definition} Let $(A,G,\alpha )$ be a $C^*$-dynamical system. A $G$-invariant state $\sigma$ on $A$ is said to be {\it T-mixing} if for all $a_1 , a_2 , b_1 , b_2 \in A$ and $\varepsilon > 0$ the set of all $s\in G$ such that $| \sigma (a_1 \alpha_s (b_1 ) a_2 \alpha_s (b_2 )) - \sigma (a_1 a_2 ) \sigma (b_1 b_2 ) | < \varepsilon$ is thickly syndetic.
\end{definition} \noindent Since a finite intersection of thickly syndetic sets is thickly syndetic, the $G$-invariant state $\sigma$ is T-mixing if and only if for every finite set $\Omega\subseteq A$ and $\varepsilon > 0$ the set of all $s\in G$ such that $| \sigma (a_1 \alpha_s (b_1 ) a_2 \alpha_s (b_2 )) - \sigma (a_1 a_2 ) \sigma (b_1 b_2 ) | < \varepsilon$ for all $a_1 , a_2 , b_1 , b_2 \in\Omega$ is thickly syndetic. Similarly, $\sigma$ is weakly mixing if and only if for every finite set $\Omega\subseteq A$ and $\varepsilon > 0$ the set of all $s\in G$ such that $| \sigma (a \alpha_s (b)) - \sigma (a) \sigma (b) | < \varepsilon$ for all $a,b \in\Omega$ is thickly syndetic. T-mixing is a weak form of a clustering property that has been studied in the context of quantum statistical mechanics (see for example \cite{BN}). Note that it implies weak mixing (take $a_2 = b_2 = 1$ in the definition). If the system $(A,G,\alpha )$ is ${\mathcal{TS}}$-Abelian in the sense of Definition~\ref{D-Abelian} (in particular if it is commutative) then T-mixing is equivalent to weak mixing (use Lemma~\ref{L-WM char}). This is not the case in general, however: if we take a $C^*$-probability space $(A,\sigma )$ and consider the shift $^*$-automorphism $\alpha$ on the infinite reduced free product $(B,\omega ) = (A,\sigma )^{*{\mathbb Z}}$, then $\omega$ is weakly mixing but not T-mixing. See also Example~\ref{E-Bog}. \begin{lemma}\label{L-TSexpansive} Let $(A,G,\alpha )$ be a $C^*$-dynamical system with $A$ exact. Let $\tau$ be a faithful T-mixing $G$-invariant tracial state on $A$. Then $\alpha$ is ${\mathcal{TS}}$-expansive. \end{lemma} \begin{proof} Let $V$ be a finite-dimensional operator subsystem of $A$ and let $\varepsilon > 0$. Let $(\pi , {\mathcal H} , \xi )$ be the GNS triple associated to $\tau$.
Since $\tau$ is faithful, $\pi$ is a faithful representation, and so via $\pi$ we can view $A$ as acting on ${\mathcal H}$ and $A\otimes A$ as acting on the Hilbertian tensor product ${\mathcal H}\otimes{\mathcal H}$. By the injectivity of the minimal operator space tensor product we can view $V\otimes V$ as an operator subsystem of $A\otimes A$. Let $\{ v_i \}_{i=1}^r$ be an Auerbach basis for $V$. Then ${\mathcal S} = \{ v_i \otimes v_j \}_{i,j=1}^r$ is an Auerbach basis for $V\otimes V$. By Lemma~\ref{L-perturb} there exists a $\delta > 0$ such that whenever $W$ is an operator space, $\rho : V\otimes V \to W$ is a linear isomorphism onto its image with $\max (\| \rho \|_{\rm cb} , \| \rho^{-1} \|_{\rm cb} ) \leq 1+\delta$, and $\{ w_{ij} \}_{i,j=1}^r$ is a subset of $W$ with $\| \rho (v_i \otimes v_j ) - w_{ij} \| < 4\delta$ for all $1\leq i,j \leq r$, the linear map $\psi : V\otimes V \to W$ determined on ${\mathcal S}$ by $\psi (v_i \otimes v_j ) = w_{ij}$ is an isomorphism onto its image with $\max (\| \psi \|_{\rm cb} , \| \psi^{-1} \|_{\rm cb} ) \leq 1 + \varepsilon$. Since $A$ is exact, $V$ is $1$-exact as an operator space, and so by Lemma~17.8 of \cite{Pis} we can find a finite-dimensional unital subspace $E\subseteq A$ such that, with $p$ denoting the orthogonal projection of ${\mathcal H}$ onto $E\xi$, the (completely contractive) compression map $\gamma : V\to pVp$ given by $v\mapsto pvp$ has an inverse with c.b.\ norm less than $\sqrt[4]{1 + \delta}$. Then the (completely contractive) compression map $\theta = \gamma\otimes\gamma : V\otimes V \to (p\otimes p)(V\otimes V) (p\otimes p)$ given by $z\mapsto (p\otimes p)z(p\otimes p)$ is invertible and $\| \theta^{-1} \|_{\rm cb} = \| \gamma^{-1} \otimes\gamma^{-1} \|_{\rm cb}\leq \| \gamma^{-1} \|^2_{\rm cb} < \sqrt{1 + \delta}$. Let $\{ a_i \xi \}_{i=1}^m$ be an orthonormal basis for $E\xi$. Then ${\mathcal T} = \{ a_i \xi \otimes a_j \xi \}_{i,j=1}^m$ is an orthonormal basis for $E\xi\otimes E\xi$. 
Set $\delta' = (\dim (E))^{-4} \delta$, and let $\eta$ be a positive real number to be further specified below. Define $K$ to be the set of all $s\in G\setminus \{ e \}$ such that \begin{enumerate} \item $| \tau (a_k^* a_i \alpha_s (a_j a_l^* )) - \tau (a_k^* a_i ) \tau (a_j a_l^* ) | < \eta$ for all $1\leq i,j,k,l \leq m$, and \item $| \tau (c^* v\alpha_s (w) a\alpha_s (bd^* )) - \tau (c^* va) \tau (wbd^* ) | < \delta'$ for all $v\otimes w\in{\mathcal S}$ and $a\xi\otimes b\xi , c\xi\otimes d\xi \in {\mathcal T}$. \end{enumerate} Since $\tau$ is T-mixing, the set $K$ is thickly syndetic. Let $s\in K$. Define $S : E\xi \otimes E\xi \to [E\alpha_s (E)]\xi$ to be the surjective linear map determined on the basis ${\mathcal T}$ by $S(a_i \xi \otimes a_j \xi ) = a_i \alpha_s (a_j ) \xi$. For $1\leq i,j,k,l\leq m$ we have \begin{align*} \langle S(a_i \xi \otimes a_j \xi ) , S(a_k \xi\otimes a_l \xi ) \rangle &= \langle a_i \alpha_s (a_j ) \xi , a_k \alpha_s (a_l ) \xi \rangle \\ &= \tau ((a_k \alpha_s (a_l ))^* a_i \alpha_s (a_j )) = \tau (a_k^* a_i \alpha_s (a_j a_l^* )) \\ &\approx_\eta \tau (a_k^* a_i ) \tau (a_j a_l^* ) = \langle a_i \xi \otimes a_j \xi , a_k \xi \otimes a_l \xi \rangle . \end{align*} We thus see by a simple perturbation argument that by taking $\eta$ small enough we can ensure that $S$ is invertible with $\| S \| \| S^{-1} \| \leq \min (\sqrt{1 + \delta} , 2)$ and $\| S^{-1} - S^* \| < \delta'$. Denote by $q$ the orthogonal projection of ${\mathcal H}$ onto $[E\alpha_s (E)]\xi$, and let $\rho : V\otimes V \to {\mathcal B} (q{\mathcal H} )$ be the linear map given by $\rho (z) = S\theta (z)S^{-1}$ for all $z\in V\otimes V$. Then $\rho$ is an isomorphism onto its image with inverse given by $\rho^{-1} (z) = \theta^{-1} (S^{-1} zS)$, and $\| \rho \|_{\rm cb} \leq \| S \| \| S^{-1} \| \leq \sqrt{1 + \delta}$ while $\| \rho^{-1} \|_{\rm cb} \leq \| \theta^{-1} \|_{\rm cb} \| S^{-1} \| \| S \| \leq 1+\delta$. 
Now suppose we are given $v\otimes w\in{\mathcal S}$. For all $a\xi \otimes b\xi , c\xi \otimes d\xi \in{\mathcal T}$, we have, using the bound $\| S^{-1} - S^* \| < \delta'$ and the definition of the set $K$, \begin{align*} \lefteqn{\langle S^{-1} qv\alpha_s (w)q S (a\xi \otimes b\xi ), c\xi \otimes d\xi \rangle \approx_{\delta'} \langle v\alpha_s (w) a\alpha_s (b) \xi , c\alpha_s (d) \xi \rangle} \hspace*{35mm} \\ &= \tau (\alpha_s (d^* ) c^* v\alpha_s (w) a\alpha_s (b)) = \tau (c^* v\alpha_s (w) a\alpha_s (bd^* )) \\ &\approx_{\delta'} \tau (c^* va) \tau (wbd^* ) = \tau (c^* va) \tau (d^* wb) \\ &= \langle va\xi , c\xi \rangle \langle wb\xi , d\xi \rangle = \langle va\xi \otimes wb\xi , c\xi \otimes d\xi \rangle \\ &= \langle \theta (v\otimes w) (a\xi \otimes b\xi ), c\xi \otimes d\xi \rangle . \end{align*} Thus $\| \theta (v\otimes w) - S^{-1} qv\alpha_s (w)q S \| < 2\delta' (\dim (E))^4 = 2\delta$, and so \[ \| \rho (v\otimes w ) - qv\alpha_s (w)q \| \leq \| S \| \| \theta (v\otimes w) - S^{-1} qv\alpha_s (w)q S \| \| S^{-1} \| < 4\delta . \] We conclude by our choice of $\delta$ that the linear map $\psi : V\otimes V \to q[V\alpha_s (V)]q$ determined on elementary tensors by $\psi (v \otimes w) = qv \alpha_s (w)q$ is an isomorphism onto its image with $\max (\| \psi \|_{\rm cb} , \| \psi^{-1} \|_{\rm cb} ) \leq 1 + \varepsilon$. Letting $\kappa : [V\alpha_s (V)] \to q[V\alpha_s (V)]q$ be the cut-down map $z\mapsto qzq$, we have a commuting diagram \begin{gather*} \xymatrix{ V\otimes V \ar[r]^(0.46){\varphi} \ar[dr]_(0.43){\psi} &[V\alpha_s (V)] \ar[d]^{\kappa} \\ & q[V\alpha_s (V)]q.} \end{gather*} where $\varphi$ is the linear map determined on elementary tensors by $v\otimes w \mapsto v\alpha_s (w)$. Since $\kappa$ is completely contractive, the map $\varphi$ is invertible and $\| \varphi^{-1} \|_{\rm cb} = \| \psi^{-1} \circ\kappa \|_{\rm cb} \leq \| \psi^{-1} \|_{\rm cb} \leq 1 + \varepsilon$. 
Thus $K$ is a $(1+\varepsilon )$-expansion set for $V$, and we obtain the result. \end{proof} \begin{theorem}\label{T-WMcontrInd} Let $(A,G,\alpha )$ be a ${\mathcal{TS}}$-contractive $C^*$-dynamical system with $A$ exact. Let $\tau$ be a faithful weakly mixing $G$-invariant tracial state on $A$. Then $\alpha$ is ${\mathcal{TS}}$-independent. \end{theorem} \begin{proof} By Proposition~\ref{P-contrAb}, $\alpha$ is ${\mathcal{TS}}$-Abelian. Hence $\tau$ is T-mixing, in which case we can apply Lemma~\ref{L-TSexpansive} to conclude that $\alpha$ is ${\mathcal{TS}}$-independent. \end{proof} Theorem~\ref{T-WMcontrInd} applies in particular to the commutative situation, where the ${\mathcal{TS}}$-contractivity and exactness hypotheses are automatic. In this case one can also obtain ${\mathcal{TS}}$-independence in a combinatorial fashion from the characterization of weak mixing in terms of thickly syndetic sets and a partition of unity argument (cf.\ the proof of Proposition~\ref{P-equiv-indep}(6)$\Rightarrow$(1)). It follows, for example, that if $X$ is a compact manifold, possibly with boundary, of dimension at least $2$ and $\mu$ is a nonatomic Borel probability measure on $X$ with full support which is zero on the boundary, then a generic member of the set $\mathfrak{H}_\mu (X)$ of $\mu$-preserving homeomorphisms from $X$ to itself equipped with the uniform topology is ${\mathcal{TS}}$-independent, since the elements of $\mathfrak{H}_\mu (X)$ which are weakly mixing for $\mu$ form a dense $G_\delta$ subset \cite{KaSt} (see also \cite{AP}). The assumption of ${\mathcal{TS}}$-contractivity in Theorem~\ref{T-WMcontrInd} cannot be dropped in general. Indeed, certain Bogoliubov automorphisms of the CAR algebra are strongly mixing with respect to the unique tracial state but fail to be ${\mathcal{I}}$-contractive, as the following example demonstrates.
\begin{example}\label{E-Bog} Let ${\mathcal H}$ be a separable infinite-dimensional Hilbert space over the complex numbers. We write $A({\mathcal H} )$ for the CAR algebra over ${\mathcal H}$. This is the unique, up to $^*$-isomorphism, unital $C^*$-algebra generated by the image of an antilinear map $\xi\mapsto a(\xi )$ from ${\mathcal H}$ to $A({\mathcal H} )$ for which the anticommutation relations \begin{align*} a(\xi ) a(\zeta )^* + a(\zeta )^* a(\xi ) &= \langle \xi ,\zeta \rangle 1_{A({\mathcal H} )} ,\\ a(\xi ) a(\zeta ) + a(\zeta ) a(\xi ) &= 0, \end{align*} hold for all $\xi ,\zeta \in{\mathcal H}$ (see \cite{BR2}). Every $U$ in the unitary group ${\mathcal U} ({\mathcal H} )$ gives rise to a $^*$-automorphism $\alpha_U$ of $A({\mathcal H} )$, called a Bogoliubov automorphism, by setting $\alpha_U (a(\xi )) = a(U\xi )$ for every $\xi\in{\mathcal H}$. The unique tracial state $\tau$ on $A({\mathcal H} )$ is given on products of the form $a(\zeta_n )^* \cdots a(\zeta_1 )^* a(\xi_1 ) \cdots a(\xi_m )$ by \[ \tau (a(\zeta_n )^* \cdots a(\zeta_1 )^* a(\xi_1 ) \cdots a(\xi_m )) = \delta_{nm} \det [ \langle {\textstyle\frac12} \xi_i , \zeta_j \rangle ] . \] Let $U$ be a unitary operator on ${\mathcal H}$ such that $\lim_{|n| \to\infty} \langle U^n \xi,\zeta\rangle = 0$ for all $\xi , \zeta\in{\mathcal H}$ (for example, the bilateral shift with respect to some orthonormal basis). It is well known that the corresponding Bogoliubov automorphism $\alpha_U$ of $A({\mathcal H} )$ is strongly mixing for $\tau$, i.e., $\lim_{n\to\infty} | \tau (a\alpha_U^n (b)) - \tau (a)\tau (b) | = 0$ for all $a,b\in A({\mathcal H} )$ (see Example~5.2.21 in \cite{BR2}). 
On the other hand, for every $\xi\in{\mathcal H}$ we have \begin{align*} \| [ a(U^n \xi ) , a(\xi )] \| = 2 \| a(U^n \xi )a(\xi ) \| &= 2 \| a(\xi)^* a(U^n \xi )^* a(U^n \xi ) a(\xi ) \|^{1/2} \\ &\geq 2| \tau (a(\xi)^* a(U^n \xi )^* a(U^n \xi ) a(\xi )) |^{1/2} \\ &= \sqrt{\| \xi \|^4 - | \langle U^n \xi ,\xi \rangle |^2} , \end{align*} and this last quantity converges to $\| \xi \|^2$ as $|n| \to \infty$. This shows that $\alpha_U$ fails to be ${\mathcal{I}}$-Abelian and hence by Proposition~\ref{P-contrAb} fails to be ${\mathcal{I}}$-contractive. We also remark that $\tau$ fails to be T-mixing with respect to $\alpha_U$. Indeed, for $\xi\in{\mathcal H}$ the quantity $\tau (a(\xi )^* a(U^n \xi )^* a(\xi )a(U^n \xi ))$ is equal to $\frac14 (|\langle U^n \xi ,\xi \rangle |^2 - \| \xi \|^4 )$, which converges to $-\frac14 \| \xi \|^4 = -\tau (a(\xi )^* a(\xi ))^2$ as $n\to\infty$. \end{example} \section{Independence and weak mixing in UHF algebras}\label{S-UHF} In the previous section we showed that, under certain conditions, the existence of a weakly mixing faithful invariant state implies ${\mathcal{TS}}$-independence. What can be said in the reverse direction? The transfer in dynamics from topology to measure theory is a subtle one in general; for example, in \cite{LVRA} Huang and Ye exhibited a u.p.e.\ ${\mathbb Z}$-system which lacks an ergodic invariant measure of full support. On the other hand, the presence of noncommutativity at the $C^*$-algebra level can give rise to a kind of rigidity that in the commutative setting is more characteristic of measure-theoretic structure. We will illustrate this phenomenon in one of its extreme forms by showing that, for actions on a UHF algebra, I-independence implies weak mixing for the unique tracial state (Theorem~\ref{T-UHF}) and, for single $^*$-automorphisms in the type $d^\infty$ case, is point-norm generic (Theorem~\ref{T-UHFdenseGdelta}).
We also obtain characterizations of I-independence for Bogoliubov actions on the even CAR algebra in terms of measure-theoretic weak mixing (Theorem~\ref{T-even-WM}). We begin by recording a lemma which reexpresses Corollary~1.6 of \cite{BerRos} in our $C^*$-dynamical context (see the discussion after Theorem~\ref{T-nuclear} for information on weak mixing). Note that although $G$ is generally taken to be $\sigma$-compact and locally compact in \cite{BerRos}, the arguments on which Corollary~1.6 of \cite{BerRos} are based do not require these assumptions. \begin{lemma}\label{L-WM char} Let $(A,G,\alpha )$ be a $C^*$-dynamical system and let $\sigma$ be a $G$-invariant state on $A$. Then $\sigma$ is weakly mixing if and only if for every finite set $\Omega\subseteq A$ and $\varepsilon > 0$ there is an $s\in G$ such that $| \sigma (a^* \alpha_s (a)) - \sigma (a^* )\sigma (a) | < \varepsilon$ for all $a\in\Omega$. \end{lemma} When $G$ is Abelian, it suffices in the above characterization of weak mixing to quantify over singletons in $A$ instead of finite subsets. \begin{lemma}\label{L-fdnear} For every $d\in{\mathbb N}$ and $\varepsilon > 0$ there is a $\delta > 0$ such that if $A_1$ and $A_2$ are finite-dimensional unital $C^*$-subalgebras of a unital $C^*$-algebra $A$ with $\dim (A_2 ) \leq d$ and $\| [a_1 ,a_2 ] \| \leq \delta \| a_1 \| \| a_2 \|$ for all $a_1 \in A_1$ and $a_2 \in A_2$, then there is a faithful $^*$-homomorphism $\gamma$ from $A_2$ to the commutant $A_1'$ such that $\| \gamma - {\rm id}_{A_2} \| < \varepsilon$. \end{lemma} \begin{proof} Let $d\in{\mathbb N}$ and $\varepsilon > 0$, and suppose that $A_1$ and $A_2$ are finite-dimensional unital $C^*$-subalgebras of a unital $C^*$-algebra $A$. 
By Lemma~2.1 of \cite{Bra} we can find a $\delta > 0$ depending on $\varepsilon$ and $d$ such that if the unit ball of $A_2$ is approximately included to within $\delta$ in $A_1'$ then there exists a faithful $^*$-homomorphism $\gamma : A_2 \to A_1'$ such that $\| \gamma - {\rm id}_{A_2} \| < \varepsilon$. Let $\mu$ be the normalized Haar measure on the unitary group ${\mathcal U} (A_1 )$, which is compact by finite-dimensionality. Then setting \[ E(a) = \int_{{\mathcal U} (A_1 )} uau^*\, d\mu (u) \] for $a\in A$ we obtain a conditional expectation $E$ from $A$ onto $A_1'$. Moreover, if $\| [a_1 ,a_2 ] \| \leq \delta \| a_1 \| \| a_2 \|$ for all $a_1 \in A_1$ and $a_2 \in A_2$, then $\| E(a) - a \| \leq \delta \| a \|$ for all $a\in A_2$, so that the unit ball of $A_2$ is approximately included to within $\delta$ in $A_1'$, yielding the lemma. \end{proof} Actually, by a result of Christensen \cite{near}, in the above lemma a $\delta$ not depending on $d$ can be found. \begin{theorem}\label{T-UHF} Let $(A,G,\alpha )$ be an I-independent $C^*$-dynamical system with $A$ a UHF algebra. Then $\alpha$ is weakly mixing for the unique (and hence $\alpha$-invariant) tracial state $\tau$ on $A$. \end{theorem} \begin{proof} By Propositions~\ref{P-C-equiv-indep} and \ref{P-contrAb}, $\alpha$ is ${\mathcal{I}}$-Abelian. Let $B$ be a simple finite-dimensional unital $C^*$-subalgebra of $A$, and let $\varepsilon > 0$. Suppose we are given an $s\in G$ and a $\delta > 0$ such that $\| [b_1 ,\alpha_s (b_2 )] \| \leq \delta \| b_1 \| \| b_2 \|$ for all $b_1 , b_2 \in B$. By Lemma~\ref{L-fdnear}, if we assume $\delta$ to be sufficiently small we can find a $^*$-homomorphism $\gamma : B\to A$ such that $\| \gamma - \alpha_s |_B \| < \varepsilon$ and the $C^*$-subalgebras $B$ and $\gamma (B)$ of $A$ commute. 
Denote by $\Phi$ the $^*$-homomorphism from $B \otimes B$ to the $C^*$-algebra generated by $B$ and $\gamma (B)$ determined on the factors by $b\otimes 1\mapsto b$ and $1\otimes b \mapsto \gamma (b)$. Since $B\otimes B$ is simple and finite-dimensional, it has a unique tracial state, and so $\tau\circ\Phi = \tau |_B \otimes \tau |_B$. For all $a$ and $b$ in the unit ball of $B$, we then have \[ \tau (a \alpha_s (b)) \approx_\varepsilon \tau (a \gamma (b)) = (\tau\circ\Phi )(a \otimes b) = \tau (a)\tau (b) , \] so that the set of all $s\in G$ with $| \tau (a \alpha_s (b)) - \tau (a)\tau (b) | < \varepsilon$ is nonempty. Thus, since $A$ is the closure of an increasing sequence of simple finite-dimensional unital $C^*$-subalgebras of $A$, we conclude by Lemma~\ref{L-WM char} that $\tau$ is weakly mixing. \end{proof} \noindent The converse of Theorem~\ref{T-UHF} is false, as Example~\ref{E-Bog} demonstrates. We will next examine Bogoliubov actions on the even CAR algebra. We refer the reader to Example~\ref{E-Bog} for notation and an outline of the context (see \cite{BR2} for a general reference). Let ${\mathcal H}$ be a separable Hilbert space. The even CAR algebra $A({\mathcal H} )_{\rm e}$ is the unital $C^*$-subalgebra of the CAR algebra $A({\mathcal H} )$ consisting of those elements which are fixed by the Bogoliubov automorphism arising from scalar multiplication by $-1$ on ${\mathcal H}$, and it is generated by even products of creation and annihilation operators. Both the CAR algebra and the even CAR algebra are $^*$-isomorphic to the type $2^\infty$ UHF algebra (see \cite[Thm.\ 5.2.5]{BR2} and \cite{even}). Every Bogoliubov automorphism $\alpha_U$ of $A({\mathcal H} )$ restricts to a $^*$-automorphism of $A({\mathcal H} )_{\rm e}$.
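The last assertion follows from the fact that Bogoliubov automorphisms commute with the grading automorphism; here is a short verification (our sketch, with $\theta$ denoting the Bogoliubov automorphism arising from scalar multiplication by $-1$):

```latex
Since $\xi\mapsto a(\xi )$ is antilinear, $\theta (a(\xi )) = a(-\xi ) =
-a(\xi )$, and hence for every unitary $U$ on ${\mathcal H}$ and every
$\xi\in{\mathcal H}$,
\[
(\alpha_U \circ\theta )(a(\xi )) = \alpha_U (-a(\xi )) = -a(U\xi )
  = \theta (a(U\xi )) = (\theta\circ\alpha_U )(a(\xi )) .
\]
Thus $\alpha_U \theta = \theta\alpha_U$ on the generators and so on all of
$A({\mathcal H} )$, whence $\alpha_U$ maps the fixed-point algebra
$A({\mathcal H} )_{\rm e} = A({\mathcal H} )^\theta$ into itself.
```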
A strongly continuous unitary representation $s\mapsto U_s$ of the group $G$ on ${\mathcal H}$ gives rise via Bogoliubov automorphisms to a $C^*$-dynamical system on each of $A({\mathcal H} )$ and $A({\mathcal H} )_{\rm e}$, in which case we will speak of a Bogoliubov action and write $\alpha_U$. \begin{theorem}\label{T-even-WM} Let ${\mathcal H}$ be a separable Hilbert space. Let $U$ be a strongly continuous unitary representation of $G$ on ${\mathcal H}$, and consider the corresponding Bogoliubov action $\alpha_U$ on $A({\mathcal H} )_{\rm e}$. The following are equivalent: \begin{enumerate} \item $U$ is a weakly mixing representation of $G$, \item the unique tracial state $\tau$ on $A({\mathcal H} )_{\rm e}$ is weakly mixing for $\alpha_U$, \item $\alpha_U$ is ${\mathcal{TS}}$-independent, \item $\alpha_U$ is I-independent, \item $\alpha_U$ is ${\mathcal{TS}}$-Abelian, \item $\alpha_U$ is ${\mathcal{N}}$-Abelian. \end{enumerate} \end{theorem} \begin{proof} (1)$\Rightarrow$(5). Since the representation $U$ of $G$ is weakly mixing, for every finite set $\Theta\subseteq{\mathcal H}$ and every $\varepsilon > 0$ the set of all $s\in G$ for which $| \langle U_s \xi , \zeta \rangle | < \varepsilon$ for all $\xi ,\zeta\in\Theta$ is thickly syndetic. It is then straightforward to check using the anticommutation relations and the fact that the even CAR algebra is generated by even products of creation and annihilation operators that for every finite set $\Omega\subseteq A({\mathcal H} )_{\rm e}$ and $\varepsilon > 0$ the set of all $s\in G$ for which $\| [ a,\alpha_{U_s} (b) ] \| < \varepsilon$ for all $a,b\in\Omega$ is thickly syndetic, i.e., $\alpha_U$ is ${\mathcal{TS}}$-Abelian. (5)$\Rightarrow$(3). Apply Theorem~\ref{T-nuclear}. (3)$\Rightarrow$(4). Apply Proposition~\ref{P-C-equiv-indep}. (4)$\Rightarrow$(2). Apply Theorem~\ref{T-UHF}. (2)$\Rightarrow$(1). 
Since $\tau$ is weakly mixing for $\alpha_U$, for all $\xi , \zeta\in{\mathcal H}$ the function \[ s\mapsto \tau (a(\xi )^* a(U_s\zeta )) - \tau (a(\xi )^* ) \tau (a(\zeta )) = \frac12 \langle U_s \zeta , \xi \rangle \] on $G$ is a flight function, i.e., $U$ is weakly mixing. (4)$\Leftrightarrow$(6). This is a consequence of Theorem~\ref{T-nuclear} and Proposition~\ref{P-C-equiv-indep}. \end{proof} We will now restrict our attention to UHF algebras of the form $M_d^{\otimes{\mathbb Z}}$ for some $d\geq 2$ and establish generic I-independence for $^*$-automorphisms. For a $C^*$-algebra $A$ we denote by ${\rm Aut} (A)$ its $^*$-automorphism group with the point-norm topology, which is a Polish space when $A$ is separable. \begin{lemma}\label{L-I-indGdelta} Let $A$ be a separable unital $C^*$-algebra. Then the I-independent $^*$-automorphisms of $A$ form a $G_\delta$ subset of ${\rm Aut} (A)$. \end{lemma} \begin{proof} For every finite-dimensional operator subsystem $V\subseteq A$ and $\varepsilon > 0$ define \begin{gather*} \Gamma (V , \varepsilon ) = \big\{ \alpha\in{\rm Aut} (A) : {\rm Ind} (\alpha,V,\varepsilon' ) \neq\emptyset \text{ for some } \varepsilon' \in (0,\varepsilon ) \big\} , \end{gather*} which is an open subset of ${\rm Aut} (A)$, as can be seen by a straightforward perturbation argument using Lemma~2.13.2 of \cite{Pis}. Take an increasing sequence $V_1 \subseteq V_2 \subseteq \dots$ of finite-dimensional operator subsystems of $A$ whose union is dense in $A$. Then by Proposition~\ref{P-C-equiv-indep} the set of I-independent $^*$-automorphisms of $A$ is equal to $\bigcap_{j=1}^\infty \bigcap_{k=1}^\infty \Gamma (V_j ,1/k)$, which is a $G_\delta$ set. \end{proof} For the definition of the Rokhlin property in the sense that we use it in the next lemma, see \cite{Kis}. \begin{lemma}\label{L-R-denseclass} Let $\gamma$ be a $^*$-automorphism of $M_d^{\otimes{\mathbb Z}}$ with the Rokhlin property.
Then $\gamma$ has dense conjugacy class in ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$. \end{lemma} \begin{proof} Let $\alpha\in{\rm Aut} (M_d^{\otimes{\mathbb Z}} )$, and let $\Omega$ be a finite subset of $M_d^{\otimes{\mathbb Z}}$ and $\varepsilon > 0$. To show that $\alpha$ can be norm approximated to within $\varepsilon$ on $\Omega$ by a conjugate of $\gamma$, we may assume that $\Omega\subseteq M_d^{\otimes I}$ for some finite set $I\subseteq{\mathbb Z}$. By the classification theory for AF algebras, $\alpha$ is approximately inner. Thus, by enlarging $I$ if necessary, we can find a unitary $u\in M_d^{\otimes I}$ such that the $^*$-automorphism ${\rm Ad}\, u \otimes {\rm id}$ of $M_d^{\otimes I} \otimes M_d^{\otimes {\mathbb Z} \setminus I} = M_d^{\otimes{\mathbb Z}}$ satisfies $\| \alpha (a) - ({\rm Ad}\, u \otimes {\rm id} )(a) \| < \varepsilon$ for all $a\in\Omega$. Pick a $^*$-automorphism $\gamma'$ of $M_d^{\otimes {\mathbb Z} \setminus I}$ conjugate to $\gamma$. Since $\Omega\subseteq M_d^{\otimes I}$, we have $\| \alpha (a) - ({\rm Ad}\, u \otimes \gamma' )(a) \| < \varepsilon$ for all $a\in\Omega$. Since $\gamma$ has the Rokhlin property, so does ${\rm Ad}\, u \otimes \gamma'$. Hence by Theorem~1.4 of \cite{Kis} there is a $^*$-automorphism $\beta$ of $M_d^{\otimes{\mathbb Z}}$ such that $\| {\rm Ad}\, u \otimes \gamma' - \beta\circ\gamma\circ\beta^{-1} \| < \varepsilon$. It follows that $\| \alpha (a) - (\beta\circ\gamma\circ\beta^{-1} )(a) \| < \varepsilon$ for all $a\in\Omega$, which establishes the lemma. \end{proof} \begin{theorem}\label{T-UHFdenseGdelta} The I-independent $^*$-automorphisms of $M_d^{\otimes{\mathbb Z}}$ form a dense $G_\delta$ subset of ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$. 
\end{theorem} \begin{proof} The two-sided tensor product shift $\alpha$ on $M_d^{\otimes{\mathbb Z}}$ satisfies the Rokhlin property \cite{BSKR,RPS} (note that the Rokhlin property for the one-sided shift implies the Rokhlin property for the two-sided shift) and thus has dense conjugacy class in ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$ by Lemma~\ref{L-R-denseclass}. It is also clearly I-independent, and so with an appeal to Lemma~\ref{L-I-indGdelta} we obtain the result. \end{proof} For systems on UHF algebras, I-independence is equivalent to ${\mathcal{I}}$-Abelianness by Proposition~\ref{P-C-equiv-indep} and Theorem~\ref{T-nuclear}. The following corollary thus ensues by applying Corollary~4.3.11 of \cite{BR1}. \begin{corollary} The invariant state space of a generic $^*$-automorphism in ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$ is a simplex. \end{corollary} Proposition~\ref{P-I-indep-cnr} yields the next corollary. \begin{corollary}\label{C-generic-cnr} A generic $^*$-automorphism of $M_d^{\otimes{\mathbb Z}}$ is completely untame. \end{corollary} \begin{corollary} The set of inner $^*$-automorphisms of $M_d^{\otimes{\mathbb Z}}$ is of first category in ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$. \end{corollary} \begin{proof} By Corollary~\ref{C-generic-cnr} it suffices to show that no inner $^*$-automorphism of $M_d^{\otimes{\mathbb Z}}$ is completely untame. Let $u$ be a unitary in $M_d^{\otimes{\mathbb Z}}$. If $u$ is scalar then ${\rm Ad}\, u$ is the identity map, which is obviously tame. If $u$ is not scalar then ${\rm Ad}\, u$ fails to be completely untame because $u$, being fixed by ${\rm Ad}\, u$, does not have an infinite $\ell_1$-isomorphism set. \end{proof} It is interesting to compare the generic I-independence in Theorem~\ref{T-UHFdenseGdelta} with generic behaviour for homeomorphisms of the Cantor set.
Kechris and Rosendal showed by model-theoretic means that the Polish group of homeomorphisms of the Cantor set has a dense $G_\delta$ conjugacy class \cite{KR}, and a description of the generic homeomorphism has been given by Akin, Glasner, and Weiss in \cite{AGW}. This generic homeomorphism can be seen to be null by examining the construction of Section~1 in \cite{AGW} as follows; we refer the reader there for notation. The ``special'' homeomorphism $T(D,C)$ of the Cantor set $X(D,C)$ is defined at the end of Section~1 in \cite{AGW} and represents the generic conjugacy class. It is a product of the homeomorphism $\tau_{(D,C)}$ of the Cantor set $Z(D,C)$ and an identity map, and so it suffices to show that $\tau_{(D,C)}$ is null. Now $Z(D,C)$ is a closed subset of $q(Z(D, C)) \times \Theta$, and $\tau_{(D,C)}$ is the restriction of the product of an obviously null homeomorphism of $q(Z(D,C))$ and the universal adding machine on $\Theta$, which is an inverse limit of finite systems and hence is also null. It follows that $\tau_{(D,C)}$ is null, as desired. As a final remark, we point out that the $^*$-automorphisms of $M_d^{\otimes{\mathbb Z}}$ which are weakly mixing for $\tau$ form a dense $G_\delta$ subset of ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$. This is easily seen using Lemma~\ref{L-WM char} and the fact that the tensor product shift on $M_d^{\otimes{\mathbb Z}}$ is weakly mixing for $\tau$ and has dense conjugacy class in ${\rm Aut} (M_d^{\otimes{\mathbb Z}} )$ (cf.\ the proof of Theorem~\ref{T-UHFdenseGdelta}). \section{A tame nonnull Toeplitz subshift}\label{S-Toeplitz} We construct in this section a tame nonnull Toeplitz subshift. Toeplitz subshifts were introduced by Jacobs and Keane in \cite{JK}. An element $x\in \Omega_m:=\{0, 1, \dots, m-1\}^{{\mathbb Z}}$ is called a {\it Toeplitz sequence} if for any $j\in {\mathbb Z}$ there exists an $n\in {\mathbb N}$ such that $x(j+kn)=x(j)$ for all $k\in {\mathbb Z}$.
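The defining condition above is easy to probe computationally. The following sketch (ours, purely for illustration) checks $x(j+kn)=x(j)$ on a finite window for a simple trailing-ones sequence of our own choosing; it is unrelated to the specific sequence constructed below.

```python
# Illustrative check of the Toeplitz condition x(j + k*n) = x(j).
# The sequence used here: x(s) = parity of the number of trailing 1-bits
# of s (two's complement).  That count depends only on s mod 2^(t+1),
# where t is the count itself, so n = 2^(t+1) works as the period at s.

def trailing_ones(s):
    """Number of trailing 1-bits of the integer s; finite for s != -1."""
    assert s != -1
    count = 0
    while s & 1:
        count += 1
        s >>= 1
    return count

def x(s):
    # Value of the illustrative sequence at position s (undefined at -1).
    return trailing_ones(s) % 2

# For each position j, n = 2^(trailing_ones(j) + 1) witnesses the
# Toeplitz condition, since j + k*n has the same lowest t+1 bits as j.
for j in range(-50, 50):
    if j == -1:
        continue
    n = 2 ** (trailing_ones(j) + 1)
    assert all(x(j + k * n) == x(j) for k in range(-20, 21))
print("Toeplitz condition verified on the window [-50, 50)")
```

Note that $j + kn = -1$ never occurs above: the lowest $t+1$ bits of $j$ end in a $0$, whereas $-1$ has all bits equal to $1$, so $j \not\equiv -1 \pmod{2^{t+1}}$.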
The subshift generated by $x$ is called a {\it Toeplitz subshift}. Note that every Toeplitz subshift is minimal. Set $m=2$. To construct our Toeplitz sequence $x$ in $\Omega_2$, we choose an increasing sequence $n_1<n_2<\dots$ in ${\mathbb N}$ with $n_j|n_{j+1}$ for each $j\in {\mathbb N}$ and $j\cdot2^j+1$ distinct elements $y_{j, 0}, y_{j, 1}, \dots, y_{j, j\cdot2^j}$ in ${\mathbb Z}/n_j{\mathbb Z}$ for each $j\in {\mathbb N}$ with the following properties: \begin{enumerate} \item[(i)] $y_{j+1, k}\equiv y_{j, 0} \!\mod{n_j}$ for all $j\in {\mathbb N}$ and all $0\le k\le (j+1)2^{j+1}$, \item[(ii)] for each $j\in {\mathbb N}$ and $1\le t\le 2^j$, setting $Y_{j, t}:=\{y_{j, k}: (t-1)j<k\le tj\}$, we have, for all $1\le t<2^j$, $Y_{j, t+1}=Y_{j, t}+z_{j, t}$ for some $z_{j, t}\in {\mathbb Z}/n_j{\mathbb Z}$, \item[(iii)] there does not exist any $z\in {\mathbb Z}$ with $z\equiv y_{j, 0} \!\mod{n_j}$ for all $j\in {\mathbb N}$, \item[(iv)] for each $j\in {\mathbb N}$ and any $1\le k_1, k_2\le j\cdot 2^j$ and any $0\le k_3\le j\cdot 2^j$ we have $y_{j, k_1}-y_{j, 0}\neq y_{j, k_3}-y_{j, k_2}$. \end{enumerate} Take a map $f:\bigcup_{j\in {\mathbb N}, 1\le t\le 2^j}Y_{j, t}\rightarrow \{0, 1\}$ such that, for each $j\in {\mathbb N}$, the maps $\{1, 2, \dots, j\}\rightarrow \{0, 1\}$ given by $p\mapsto f(y_{j, (t-1)j+p})$ yield exactly all of the elements in $\{0, 1\}^{\{1, 2, \dots, j\}}$ as $t$ runs through $1, \dots, 2^j$. Now we define our $x$ by \begin{gather*} x(s)= \begin{cases} f(y_{j, k}), \mbox{ if } s\equiv y_{j, k} \!\!\mod{n_j} \mbox{ for some } j\in {\mathbb N} \mbox{ and some } 1\le k\le j\cdot 2^j ,\\ 0, \mbox{ otherwise}. \end{cases} \end{gather*} Property (i) guarantees that $x$ is well defined. Property (iii) implies that $x$ is a Toeplitz sequence. Denote by $X$ the subshift generated by $x$. Also denote by $A$ (resp.\ $B$) the set of elements in $X$ taking value $1$ (resp.\ $0$) at $0$.
Property (ii) and the condition on $f$ imply that $\tilde{y}_{j, 1}, \dots, \tilde{y}_{j, j}$ is an independence set for $(A, B)$ for each $j\in {\mathbb N}$, where $\tilde{y}_{j, i}$ is any element in ${\mathbb Z}$ with $\tilde{y}_{j, i}\equiv y_{j, i} \!\mod n_j$. Thus $(A, B)$ has arbitrarily large finite independence sets and hence the subshift $X$ is nonnull by Proposition~\ref{P-basic N}. Since ${\rm IT}_2(X, {\mathbb Z})$ is ${\mathbb Z}$-invariant, to show that $X$ is tame, by Proposition~\ref{P-basic R}(2) it suffices to show that $(A, B)$ has no infinite independence set. For each $s\in {\mathbb Z}$ with $x(s)=1$ let $J(s)$ and $K(s)$ denote the positive integers such that $1\leq K(s) \leq J(s)2^{J(s)}$ and $s\equiv y_{J(s), K(s)} \!\mod n_{J(s)}$. \begin{lemma} \label{L-Toeplitz 1} Suppose that $x(s_1)=x(s_2)=1$ for some $s_1$ and $s_2$ in ${\mathbb Z}$ with $J(s_1)<J(s_2)$. Also suppose that $x(s_1+a)=0$ and $x(s_2+a)=1$ for some $a\in {\mathbb Z}$. Then $J(s_2+a)=J(s_1)$. \end{lemma} \begin{proof} If $J(s_2+a)>J(s_1)$, then \[s_1-s_2\equiv (s_1+a)-(s_2+a)\equiv (s_1+a)-s_2 \!\!\mod n_{J(s_1)}\] by property (i) and hence $s_1\equiv s_1+a \!\mod n_{J(s_1)}$. Consequently $x(s_1+a)=x(s_1)=1$, in contradiction to the fact that $x(s_1+a)=0$. Therefore $J(s_2+a)\le J(s_1)$. If $J(s_2+a)<J(s_1)$, then $s_1\equiv s_2 \!\mod n_{J(s_2+a)}$ by property (i) and hence $s_1+a\equiv s_2+a \!\mod n_{J(s_2+a)}$. Consequently, $x(s_1+a)=x(s_2+a)=1$, in contradiction to the fact that $x(s_1+a)=0$. Therefore $J(s_2+a)\ge J(s_1)$ and hence $J(s_2+a)=J(s_1)$, as desired. \end{proof} \begin{lemma}\label{L-Toeplitz 2} Suppose that $\{s_1, s_2, s_3\}$ is an independence set for $(A,B)$ and $x(s_i)=1$ for all $1\le i\le 3$. Then $J(s_1)=J(s_2)=J(s_3)$. \end{lemma} \begin{proof} Suppose that the $J(s_i)$ are not all the same. Without loss of generality, we may assume that $J(s_1)< \min (J(s_2),J(s_3))$ or $J(s_1)=J(s_3)<J(s_2)$. 
Consider the case $J(s_1)<\min (J(s_2), J(s_3))$ first. Since $\{s_1, s_2, s_3\}$ is an independence set for $(A, B)$, we have $x(s_1+a)=x(s_3+a)=0$ and $x(s_2+a)=1$ for some $a\in {\mathbb Z}$. By Lemma~\ref{L-Toeplitz 1} we have $J(s_2+a)=J(s_1)$. Note that $s_3\equiv s_2\mod n_{J(s_1)}$ by property (i). Consequently, $s_3+a\equiv s_2+a \mod n_{J(s_1)}$ and hence $x(s_3+a)=x(s_2+a)=1$, contradicting the fact that $x(s_3+a)=0$. This rules out the case $J(s_1)<\min (J(s_2),J(s_3))$. Consider now the case $J(s_1)=J(s_3)<J(s_2)$. Since $\{s_1, s_2, s_3\}$ is an independence set for $(A, B)$, we have $x(s_1+a)=0$ and $x(s_2+a)=x(s_3+a)=1$ for some $a\in {\mathbb Z}$. By Lemma~\ref{L-Toeplitz 1} we have $J(s_2+a)=J(s_1)$. Note that $s_3\equiv s_2 \!\mod n_{j}$ for any $j<J(s_1)\le \min (J(s_2), J(s_3))$ by property (i). If $J(s_3+a)<J(s_1)=J(s_2+a)$, then $s_3+a\not \equiv s_2+a \!\mod n_{J(s_3+a)}$ leading to a contradiction. Thus $J(s_3+a)\ge J(s_1)$. We then have \begin{align*} y_{J(s_3),K(s_3)}-y_{J(s_1), 0} &\equiv y_{J(s_3), K(s_3)}-y_{J(s_2), K(s_2)}\\ &\equiv y_{J(s_3+a), K(s_3+a)}-y_{J(s_2+a), K(s_2+a)}\!\!\mod n_{J(s_1)} \end{align*} in contradiction to property (iv). Therefore the case $J(s_1)=J(s_3)<J(s_2)$ is also ruled out, and we obtain the lemma. \end{proof} \begin{lemma}\label{L-Toeplitz 3} Suppose that $H$ and $H'$ are disjoint nonempty subsets of ${\mathbb Z}$ and that $H\cup H'$ is an independence set for $(A, B)$. Suppose also that $|H|\ge j\cdot (2^j+1)$. Then there exist an $H_1\subseteq H$ and an $a\in {\mathbb Z}$ such that $x(s+a)=1$ and $J(s+a)>j$ for all $s\in H'\cup (H\setminus H_1)$ and $|H_1|\le j$. \end{lemma} \begin{proof} We shall prove the assertion via induction on $j$. The case $j=0$ is trivial since $H'\cup H$ is an independence set for $(A, B)$. Assume that the assertion holds for $j=i$. Suppose that $|H|\ge (i+1)\cdot (2^{i+1}+1)$. 
By the assumption there exist an $H_1\subseteq H$ and an $a\in {\mathbb Z}$ such that $x(s+a)=1$ and $J(s+a)>i$ for all $s\in H'\cup (H\setminus H_1)$ and $|H_1|\le i$. Note that $|H\setminus H_1|\ge (i+1)2^{i+1}+1\ge 3$. By Lemma~\ref{L-Toeplitz 2}, $J(s+a)$ does not depend on $s\in H'\cup (H\setminus H_1)$. If $J(s+a)>i+1$ for $s\in H'\cup (H\setminus H_1)$, we are done with the induction step. Suppose then that $J(s+a)=i+1$ for all $s\in H'\cup (H\setminus H_1)$. Then $K(s_1+a)=K(s_2+a)$ for some distinct $s_1$ and $s_2$ in $H\setminus H_1$. In other words, $s_1\equiv s_2 \mod n_{i+1}$. Set $H_2:=H_1\cup \{s_1\}$. Since $H'\cup H$ is an independence set for $(A, B)$, there exists a $b\in {\mathbb Z}$ with $x(s+b)=1$ for all $s\in H'\cup (H\setminus H_2)$ and $x(s_1+b)=0$. Using property (i) one sees easily that $J(s_2+a)<J(s_2+b)$. By Lemma~\ref{L-Toeplitz 2} we have $J(s+b)=J(s_2+b)>i+1$ for all $s\in H'\cup (H\setminus H_2)$. This finishes the induction step and proves the lemma. \end{proof} Suppose that $H\subseteq {\mathbb Z}$ is an infinite independence set for $(A, B)$. Choose a nonempty finite subset $H'\subset H$. By Lemma~\ref{L-Toeplitz 3}, for any $j\in {\mathbb N}$ there exists an $a_j\in {\mathbb Z}$ such that $x(s+a_j)=1$ and $J(s+a_j)>j$ for all $s\in H'$. By property (i) we have $s+a_j\equiv s'+a_j \!\mod n_j$ for all $s, s'\in H'$. Thus $n_j | s-s'$. Since $n_j\to \infty$ as $j\to \infty$, we see that $H'$ cannot contain more than one element, which is a contradiction. Therefore $(A, B)$ has no infinite independence sets and we conclude that $X$ is tame. \section{Convergence groups}\label{S-convergence} Boundary-type actions of free groups are often described as exhibiting chaotic behaviour. The geometric features which account for this description are tied to complexity within the group structure and are suggestive of the free-probabilistic notion of independence. 
We will see here on the other hand that any convergence group (in particular, any hyperbolic group acting on its Gromov boundary) displays a strong lack of dynamical independence in our sense. Let $(X,G)$ be a dynamical system with $G$ a group and $X$ containing at least $3$ points. We call a net $\{s_i\}_{i\in I}$ in $G$ {\it wandering} if for any $s\in G$, $s_i \neq s$ for sufficiently large $i\in I$. Recall that $G$ is said to act as a {\it (discrete) convergence group} on $X$ if for any wandering net $\{s_i\}_{i\in I}$ in $G$ there exist $x, y\in X$ and a subnet $\{s_j\}_{j\in J}$ of $\{s_i\}_{i\in I}$ such that $s_j K\to y$ as $j\to\infty$ for every compact set $K\subseteq X\setminus \{x\}$ \cite{Bowditch}. When $X$ is metrizable, one can replace ``wandering net'' and ``subnet'' by ``sequence of distinct elements'' and ``subsequence'', respectively, in the above definition. \begin{lemma}\label{L-bound} Let $G$ act as a convergence group on $X$. Let $Z_1$ and $Z_2$ be two disjoint closed subsets of $X$. Then there exists a finite subset $F\subseteq G$ such that $HH^{-1}\subseteq F$ for every independence set $H$ of $Z_1$ and $Z_2$. \end{lemma} \begin{proof} Suppose that such an $F$ does not exist. Then we can find independence sets $H_1, H_2, \dots $ for the pair $(Z_1 , Z_2 )$ such that $H_nH^{-1}_n\not \subseteq \bigcup^{n-1}_{k=1}H_kH^{-1}_k$ for all $n\geq 2$. Choose an $s_n\in H_nH^{-1}_n\setminus \bigcup^{n-1}_{k=1}H_kH^{-1}_k$ for each $n$. Since independence sets are right translation invariant, $\{e, s_n\}$ is an independence set for $(Z_1 , Z_2 )$ when $n\ge 2$. As $G$ acts as a convergence group on $X$ we can find $x, y\in X$ and a subnet $\{s_j\}_{j\in J}$ of $\{s_n\}_{n\in {\mathbb N}}$ such that $s_j K\rightarrow y$ as $j\rightarrow \infty$ for every compact set $K\subseteq X\setminus \{x\}$. Without loss of generality we may assume that $x\notin Z_1$. Then $s_j Z_1 \rightarrow y$ as $j\rightarrow \infty$. 
We separate the cases $y\notin Z_1$ and $y\in Z_1$. If $y\notin Z_1$, then $s_j Z_1 \cap Z_1 = \emptyset$ when $j$ is large enough, contradicting the fact that $\{e, s_j\}$ is an independence set for $(Z_1 , Z_2 )$. If $y\in Z_1$, then $s_j Z_1 \cap Z_2 =\emptyset$ when $j$ is large enough, again contradicting the fact that $\{e, s_j\}$ is an independence set for $(Z_1 , Z_2 )$. Therefore there does exist an $F$ with the desired property. \end{proof} \begin{theorem}\label{T-convergence} Let $G$ act as a convergence group on $X$. Then $(X, G)$ is null. \end{theorem} \begin{proof} Given disjoint closed subsets $Z_1$ and $Z_2$ of $X$ let $F$ be as in Lemma~\ref{L-bound}. Then $|H|\le |HH^{-1}|\le |F|$ for any independence set $H$ for the pair $(Z_1 , Z_2 )$. Thus $Z_1$ and $Z_2$ do not have arbitrarily large finite independence sets. It follows by Proposition~\ref{P-basic N}(2) that $(X, G)$ is null. \end{proof} An action of $G$ on a locally compact Hausdorff space $Y$ is said to be {\it properly discontinuous} if, given any compact subset $Z\subseteq Y$, $sZ\cap Z\neq \emptyset$ for only finitely many $s\in G$. For a general reference on hyperbolic spaces and hyperbolic groups, see \cite{BH}. It is a theorem of Tukia that if a group $G$ acts properly discontinuously as isometries on a proper geodesic hyperbolic space $Y$, then the induced action of $G$ on the Gromov compactification $\overline{Y}=Y\cup \partial Y$ is a convergence group action \cite[Theorem 3A]{Tukia}. Thus we get: \begin{corollary}\label{C-boundary} If a group $G$ acts properly discontinuously as isometries on a proper geodesic hyperbolic space $Y$, then the induced action of $G$ on the Gromov compactification $\overline{Y}$ is null. In particular, the action of a hyperbolic group on its Gromov boundary is null. 
\end{corollary} It happens on the other hand that, as the following remark notes, nonelementary convergence group actions fail to be ${\rm HNS}$ (hereditarily nonsensitive \cite[Definition 9.1]{GM}), although ${\rm HNS}$ dynamical systems are tame (see the proof of \cite[Corollary 5.7]{DEBS}). \begin{remark}\label{R-convergence vs equicontinuous} For any dynamical system $(X, G)$, denote by $L(X, G)$ its {\it limit set}, defined as the set of all $x\in X$ such that $| \{ s\in G : U\cap sU \neq\emptyset\} | =\infty$ for every neighbourhood $U$ of $x$. Clearly $L(X, G)$ is a closed $G$-invariant subset of $X$. A convergence group action of $G$ on $X$ is {\it nonelementary} if $|L(X,G)|>2$, or, equivalently, there is no one- or two-point subset of $X$ fixed setwise by $G$ \cite[Theorem 2T]{Tukia}. In this event, $L(X, G)$ is an infinite perfect set, the action of $G$ on $L(X, G)$ is minimal, and there exists a {\it loxodromic} element $s\in G$ \cite[Theorem 2S, Lemmas 2Q, 2D]{Tukia}, i.e., $s$ has exactly two distinct fixed points $x$ and $y$, and $s^n K\to y$ as $n\to \infty$ for any compact set $K\subseteq X\setminus\{x\}$. Notice that $\{x, y\}\subseteq L(X, G)$. Clearly the action of $G$ on $L(X, G)$ is nonequicontinuous in this event. Since minimal ${\rm HNS}$ dynamical systems are equicontinuous and subsystems of ${\rm HNS}$ dynamical systems are ${\rm HNS}$, we see that no nonelementary convergence group action is ${\rm HNS}$. \end{remark}
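As a standard illustration of the preceding notions (this example is not drawn from the discussion above), consider the free group $F_2=\langle a,b\rangle$, which is hyperbolic, acting on its Gromov boundary $\p F_2$, the compact space of infinite reduced words in the letters $a^{\pm 1}, b^{\pm 1}$. The generator $a$ is loxodromic: its fixed points are $x=a^{-1}a^{-1}a^{-1}\cdots$ and $y=aaa\cdots$, and for every compact set $K\subseteq \p F_2\setminus\{x\}$ one has $a^nK\to y$ as $n\to\infty$, since the initial segments of the words $a^nz$ with $z\in K$ begin with arbitrarily long runs of the letter $a$, uniformly over $z\in K$. In particular, by Corollary~\ref{C-boundary} this boundary action is null, while by Remark~\ref{R-convergence vs equicontinuous} it fails to be ${\rm HNS}$.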
\section{Introduction}\label{section1} This paper is devoted to the study of the boundedness of the curl of solutions to a semilinear system in a bounded and convex domain $\Omega$ in $\mathbb{R}^3$: \begin{equation}\label{1.1} \begin{cases} \curl\left(|\curl\mathbf u|^{p-2}\curl\mathbf u\right)={\bm f} &\text{\rm in } \Omega,\\ \dv\mathbf u=0 &\text{\rm in } \Omega,\\ \mathbf u\times \bm\nu =0 &\text{\rm on } \p \Omega, \end{cases} \end{equation} where $\bm\nu$ denotes the outward unit normal to $\p \Omega$, $p\in (1,\infty)$ is fixed, $\mathbf{u}: \Omega \to \mathbb{R}^3$ is a vector-valued unknown function and ${\bm f} :\Omega \to \mathbb{R}^3$ is a given divergence-free vector field. The system \eqref{1.1} arises as a model of the magnetic induction in a high-temperature superconductor operating near its critical current (see \cite{Yin2006Regularity, Laforest}). More precisely, $\mathbf u$ and $\curl\mathbf u$ are the magnetic field and the total current density, respectively, and ${\bm f}$ denotes the internal magnetic current. The system \eqref{1.1} is also the steady-state approximation of Bean's critical-state model for type-$\mathrm{II}$ superconductors (see \cite{Chapman2000A,Yin2001,Yin2002}), where the magnetic field $\mathbf u$ is approximated by the solution of the $p$-curl evolution system. For the detailed physical background, we refer to \cite{Bean1964Magnetization, Chapman2000A, DeGennes}. We mention that the Cauchy problem for the $p$-curl evolution system in unbounded domains was studied by Yin \cite{Yin2001}, who established the existence, uniqueness, and regularity of solutions to this system. In \cite{Yin2002}, Yin considered the same problem for the $p$-curl evolution system in $\Omega\times(0,T]$, where $\Omega$ is a bounded domain in $\mathbb{R}^3$ with $C^{1,1}$ boundary and no holes in the interior. The corresponding results for domains in $\mathbb{R}^2$ were established in \cite{Barrett}.
Recently, the work of \cite{Yin2002} was extended to a more general resistivity term of the form $\rho=g(x,|\curl\mathbf{H}|)$ for some function $g$ by Aramaki (see \cite{Aramaki}). We refer to \cite{Antontsev} and \cite{FAntontsev} for further results on the parabolic version of the system \eqref{1.1}. Some recent results on this kind of problem can be found in \cite{Bartsch, Mederski, Tang}. Regardless of the time dimension, Laforest \cite{Laforest} studied the existence and uniqueness of the weak solution to the steady-state $p$-curl system \eqref{1.1} in a bounded domain whose boundary is sufficiently smooth. Yin \cite{Yin2006Regularity} studied the regularity of the weak solution to the system \eqref{1.1} in a bounded and simply-connected domain $\Omega$ with $\p\Omega\in C^2$ and showed that the optimal regularity of weak solutions is of class $C^{1+\alpha}$, where $\alpha\in(0,1)$. For such problems mentioned above, one usually works in spaces of divergence-free vector fields. In \cite{ChenJun}, Chen and Pan considered the existence of solutions to a quasilinear degenerate elliptic system with lower-order terms in a bounded domain under the Dirichlet boundary condition or the Neumann boundary condition, where the divergence-free subspaces are no longer suitable as admissible spaces. We also mention that further generalizations of $p$-curl systems, with $p(x)$ in place of $p$, are the subject of \cite{FAntontsev, XiangM} and the references therein. Moreover, system \eqref{1.1} is a natural generalization of the $p$-Laplacian system, for which a well-developed theory already exists (see \cite{Cianchi2014Global, Barrett1993Finite, Dibenedetto2010Degenerate, Yin1998On, Uralceva}).
In particular, the boundedness of the gradient up to the boundary for solutions to Dirichlet and Neumann problems for divergence-form elliptic systems with Uhlenbeck-type structure was established by Cianchi and Maz'ya in \cite{Cianchi2014Global}, where the $p$-Laplacian system is also included. Note that the system \eqref{1.1} is a curl-type elliptic system with Uhlenbeck-type structure. In parallel with the problem for $p$-Laplacian systems in \cite{Cianchi2014Global}, our study aims to establish the boundedness of the curl of solutions to the system \eqref{1.1}. This result reflects the elliptic character of system \eqref{1.1}: the system enjoys regularity properties typical of elliptic partial differential equations. Unlike the Dirichlet or Neumann boundary conditions for the divergence-type systems in \cite{Cianchi2014Global}, however, the difficulty of our work lies in handling the vanishing tangential component and the degeneracy of the operator $\curl\curl$ in system \eqref{1.1}. To state our result, we first need to introduce the Sobolev spaces $W_t^p(\Omega,\dv 0)$ with $1<p<\infty$ and the Lorentz spaces $L^{m,p}$ with $1\leq m<\infty$ and $1\leq p\leq\infty$. Define $$ W_t^p(\Omega,\dv 0):=\left\{\mathbf u\in L^p(\Omega) ~:~ \curl\mathbf u\in L^{p}(\Omega),\ \dv\mathbf u=0 \text{ in } \Omega \text{ and }\bm\nu\times\mathbf u=0 \text{ on }\p\Omega\right\} $$ with the norm $$ \|\mathbf u\|_{W_t^p(\Omega,\dv 0)} = \|\mathbf u\|_{L^p(\Omega)}+\|\curl\mathbf u\|_{L^p(\Omega)}. $$ Let $(X,S,\mu)$ be a $\sigma$-finite measure space and $f:X\to\mathbb{R}$ be a measurable function. We define the distribution function of $f$ as $$f_{*}(s)=\mu(\{|f|>s\}),\quad s>0, $$ and the non-increasing rearrangement of $f$ as \begin{equation}\label{7.2} f^{*}(t)=\inf\{s>0 : f_{*}(s)\leq t\},\quad t>0.
\end{equation} The Lorentz space is then defined by $$L^{m,p}(X)=\left\{f: X\to\mathbb{R} \text{ measurable} : \|f\|_{L^{m,p}}<\infty\right\}, \quad 1\leq m<\infty, $$ equipped with the quasi-norm $$\|f\|_{L^{m,p}}=\Big(\int_0^{\infty}\left(t^{1/m}f^{*}(t)\right)^p\frac{dt}{t}\Big)^{1/p}, \quad 1\leq p<\infty, $$ and $$\|f\|_{L^{m,\infty}}=\sup_{t>0} t^{1/m} f^{*}(t),\quad p=\infty. $$ We next introduce the notion of a weak solution to the system \eqref{1.1}: \begin{Def} We say that $\mathbf u\in W_t^p(\Omega,\dv 0)$ is a (weak) solution to the system \eqref{1.1} if $$ \int_{\Omega} |\curl\mathbf u|^{p-2}\curl\mathbf u\cdot\curl\mathbf\Phi dx=\int_{\Omega} {\bm f}\cdot\mathbf\Phi dx $$ for any $\mathbf\Phi\in W_t^p(\Omega,\dv 0).$ \end{Def} Our result now reads as follows: \begin{Thm}\label{Theorem1.1} Let $\Omega$ be a bounded convex domain in $\mathbb{R}^3$. Assume that ${\bm f} \in L^{3,1}(\Omega)$ with $\dv{\bm f}=0$ in the sense of distributions. Then there exists a unique (weak) solution $\mathbf u\in W_t^p(\Omega,\dv 0)$ to the system \eqref{1.1}. Moreover, we have the estimate \begin{equation}\label{1.2} \lVert\curl\mathbf u\rVert _{L^{\infty}(\Omega)}\leq C \|{\bm f}\|_{L^{3,1}(\Omega)}^{\frac{1}{p-1}}, \end{equation} where the constant $C$ depends on $p$ and $\Omega.$ \end{Thm} The proof of Theorem \ref{Theorem1.1} will be given in Section 2. Throughout this paper, a bold letter represents a three-dimensional vector or vector function. \section{Proof of the main result}\label{section2} We now give the proof of our main theorem.
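Before starting, we record an elementary illustration of the Lorentz quasi-norm, together with the form of the H\"older inequality that will be used. Take $f=\chi_E$ for a measurable set $E\subseteq X$ of finite measure. Then $f_{*}(s)=\mu(E)$ for $0<s<1$ and $f_{*}(s)=0$ for $s\geq 1$, so $f^{*}=\chi_{[0,\mu(E))}$ and $$ \|\chi_E\|_{L^{m,p}}=\Big(\int_0^{\mu(E)}t^{p/m-1}dt\Big)^{1/p}=\Big(\frac{m}{p}\Big)^{1/p}\mu(E)^{1/m}. $$ We shall also use the H\"older inequality in Lorentz spaces, $$ \Big|\int_X fg\, d\mu\Big|\leq \|f\|_{L^{m,p}}\|g\|_{L^{m',p'}}, \qquad \frac{1}{m}+\frac{1}{m'}=1,\quad \frac{1}{p}+\frac{1}{p'}=1, $$ which in particular pairs $L^{3,1}(\Omega)$ with $L^{3/2,\infty}(\Omega)$.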
\begin{proof}[Proof of Theorem \ref{Theorem1.1}] Note that the system \eqref{1.1} is the Euler-Lagrange equation of the strictly convex functional $$ J(\mathbf u)=\int_{\Omega}\left(\frac{1}{p}|\curl\mathbf u|^p -{\bm f}\cdot\mathbf u \right)dx $$ in the space $W^p_t(\Omega, \dv 0).$ It is easy to see that the functional is weakly lower semicontinuous and coercive, so the existence and uniqueness of the weak solution to the system \eqref{1.1} follow; see \cite[Theorem 1.6]{Str}. Since the solution minimizes $J$ and $J(0)=0$, we have $J(\mathbf u)\leq 0$, and hence, by the H\"older inequality in Lorentz spaces, $$ \frac{1}{p}\int_{\Omega}|\curl\mathbf u|^p dx\leq \|{\bm f}\|_{L^{3,1}(\Omega)}\|\mathbf u\|_{L^{3/2,\infty}(\Omega)}. $$ Since the space $W^p_t(\Omega, \dv 0)$ is continuously embedded into the space $L^{3/2, \infty}(\Omega),$ we obtain \begin{equation}\label{7.3} \|\curl\mathbf u\|_{L^p(\Omega)}\leq C(p,\Omega) \|{\bm f}\|_{L^{3,1}(\Omega)}^{\frac{1}{p-1}}. \end{equation} Next, we prove the inequality \eqref{1.2}. The proof is divided into three steps. Step 1. First, we make the following assumptions: \begin{equation}\label{2.3} \p\Omega\in C^{\infty}, \end{equation} and \begin{equation}\label{2.4} {\bm f}\in C^{\infty}(\bar{\Omega}). \end{equation} From the main theorem in \cite{Yin1998On}, we see that $\mathbf u\in C^{1,\alpha}(\bar{\Omega})$ for some $\alpha \in (0,1).$ Then classical regularity results show that the solution satisfies $$\mathbf u \in C^3\left(\bar{\Omega}\bigcap \overline{\{|{\curl\mathbf u}|>t\}}\right)$$ for $t\geq|\bm{\omega}|^*(|\Omega|/2),$ where $|\bm{\omega}|^*$ is defined by \eqref{7.2}. For simplicity, we now write $\bm{\omega}$ and $G(|\bm{\omega}|)$ instead of $\curl\mathbf u$ and $|\curl\mathbf u|^{p-2}$, respectively.
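For the reader's convenience, we note that the vector identity $\dv(\mathbf A\times\mathbf B)=\curl\mathbf A\cdot\mathbf B-\mathbf A\cdot\curl\mathbf B$, which is applied repeatedly below, can be checked in index notation, with $\epsilon_{ijk}$ denoting the Levi-Civita symbol and summation over repeated indices: $$ \p_i(\epsilon_{ijk}A_jB_k) =\epsilon_{kij}(\p_iA_j)B_k-A_j\,\epsilon_{jik}\p_iB_k =\curl\mathbf A\cdot\mathbf B-\mathbf A\cdot\curl\mathbf B, $$ since $(\curl\mathbf A)_k=\epsilon_{kij}\p_iA_j$ and $\epsilon_{ijk}=\epsilon_{kij}=-\epsilon_{jik}$.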
We first establish the following inequality: \begin{equation}\label{1.5} \aligned (p-1)t^{p-1}\Big(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS\Big)\leq & \int_{\{|\bm{\omega}|>t\}} \frac{1}{4(p-1)}\frac{1}{|\bm\omega|^{p-2}}|{\bm f}|^2 dx +t\int_{\{|\bm{\omega}|=t\}}|{\bm f}| dS\\ &+\int_{\p\Omega\bigcap \p\{|\bm{\omega}|>t\}} G(|\bm{\omega}|) \sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i dS. \endaligned \end{equation} Indeed, applying the formula $\dv(\mathbf A\times\mathbf B)=\curl\mathbf A\cdot\mathbf B-\mathbf A\cdot\curl\mathbf B$ to the system \eqref{1.1}, we see that \begin{equation}\label{1.6} -{\bm f}\cdot\curl\bm\omega=\dv(G(|\bm\omega|)\bm\omega\times\curl\bm\omega)+G(|\bm\omega|)\bm\omega\cdot\curl\curl\bm\omega. \end{equation} By simple computations, it follows that $$ \aligned G(|\bm\omega|)\bm\omega\cdot&\curl\curl\bm\omega =-\dv(G(|\bm\omega|)\nabla|\bm\omega||\bm\omega|) +G(|\bm\omega|)|\nabla|\bm\omega||^2 +G'(|\bm\omega|)|\nabla|\bm\omega||^2|\bm\omega|. \endaligned $$ Substituting the above equality into \eqref{1.6}, we have \begin{equation}\label{1.7} \aligned -{\bm f}\cdot\curl\bm\omega =\dv(G(|\bm\omega|)\bm\omega\times\curl\bm\omega) -\dv(G(|\bm\omega|)\nabla|\bm\omega||\bm\omega|)+(p-1)|\bm\omega|^{p-2}|\nabla|\bm\omega||^2. \endaligned \end{equation} By the Cauchy inequality, one has \begin{equation}\label{1.8} {\bm f}\cdot\curl\bm\omega\geq -\frac{|{\bm f}|^2}{4(p-1)|\bm\omega|^{p-2}}-(p-1)|\bm\omega|^{p-2}|\curl\bm\omega|^2. \end{equation} Combining \eqref{1.7} with \eqref{1.8}, we deduce that \begin{displaymath} \aligned &-\dv(G(|\bm\omega|)\bm\omega\times\curl\bm\omega)+\dv(G(|\bm\omega|)\nabla|\bm\omega||\bm\omega|)\\ &\geq-\frac{1}{4(p-1)}\frac{1}{|\bm\omega|^{p-2}}|{\bm f}|^2.
\endaligned \end{displaymath} Integrating the above inequality over the region $\{|\bm\omega|>t\}$ and applying Green's formula, we have \begin{equation}\label{1.9} \aligned &\int_{\p\{|\bm{\omega}|>t\}} \left[G(|\bm{\omega}|)\curl \bm{\omega}\times \bm{\omega} +G(|\bm{\omega}|)\nabla|\bm{\omega}||\bm{\omega}|\right]\cdot\bm\nu dS\\ &\geq -\int_{\{|\bm{\omega}|>t\}}\frac{1}{4(p-1)}\frac{1}{|\bm\omega|^{p-2}}|{\bm f}|^2 dx, \endaligned \end{equation} where $\bm\nu(x)$ represents the unit outer normal vector at $x\in \p\{|\bm{\omega}|>t\}.$ For almost every $t>0$, the boundary of the level set $\{|\bm{\omega}|>t\}$ decomposes as $$ \p\{|\bm{\omega}|>t\}=\left(\p\Omega\bigcap\p\{|\bm{\omega}|>t\}\right) \bigcup \{|\bm{\omega}|=t\}. $$ Also, for $x\in \{|\bm{\omega}|=t\}\bigcap\{|\nabla |\bm{\omega}||\neq 0\}$ we have \begin{equation}\label{1.10} \bm\nu(x)=-\frac{\nabla |\bm{\omega}|}{|\nabla |\bm{\omega}||}. \end{equation} From Sard's theorem, we know that the image $|\bm{\omega}|(X)$ has Lebesgue measure $0$, where $X=\{|\nabla |\bm{\omega}||=0\}$. Let us focus on the terms on the left-hand side of \eqref{1.9}. Note that, for every $x \in \p\Omega$, $$\left(\curl \bm{\omega}\times \bm{\omega} +\nabla|\bm{\omega}||\bm{\omega}|\right)\cdot\bm\nu(x) =\sum_{i,j=1}^3\nu_i \omega_j \p_j \omega_i.$$ Combining \eqref{1.1} and \eqref{1.10}, and making use of the above equality, yields \begin{equation}\label{1.11} \aligned &\int_{\p\{|\bm{\omega}|>t\}}[G(|\bm\omega|)\curl\bm\omega\times\bm\omega+G(|\bm\omega|)\nabla|\bm\omega||\bm\omega|]\cdot\bm\nu dS\\ \leq& \int_{\p\Omega\bigcap \p\{|\bm{\omega}|>t\}} G(|\bm{\omega}|)\sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i dS+t\int_{\{|\bm{\omega}|=t\}}|{\bm f}|dS\\ &-\int_{\{|\bm{\omega}|=t\}}(\bm\omega\times\bm\nu)\cdot(G'(|\bm\omega|)\nabla|\bm\omega|\times\bm\omega)dS-t\int_{\{|\bm{\omega}|=t\}}G(|\bm\omega|)|\nabla|\bm\omega||dS.
\endaligned \end{equation} Furthermore, direct computations show that $$ \aligned &-\int_{\{|\bm{\omega}|=t\}}(\bm\omega\times\bm\nu)\cdot(G'(|\bm\omega|)\nabla|\bm\omega|\times\bm\omega)dS-t\int_{\{|\bm{\omega}|=t\}}G(|\bm\omega|)|\nabla|\bm\omega||dS\\ =&-(p-1)t^{p-1}\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS. \endaligned $$ Consequently, \begin{equation}\label{1.12} \aligned &\int_{\p\{|\bm{\omega}|>t\}}[G(|\bm\omega|)\curl\bm\omega\times\bm\omega+G(|\bm\omega|)\nabla|\bm\omega||\bm\omega|]\cdot\bm\nu dS\\ \leq&\int_{\p\Omega\bigcap \p\{|\bm{\omega}|>t\}} G(|\bm{\omega}|)\sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i dS+t\int_{\{|\bm{\omega}|=t\}}|{\bm f}|dS\\ &-(p-1)t^{p-1}\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS. \endaligned \end{equation} Combining \eqref{1.9} and \eqref{1.12}, we obtain \eqref{1.5}. Now we consider the following two cases separately: (A) $p\geq2$, and (B) $p< 2$. In case (A), multiplying both sides of \eqref{1.5} by $t^{p-2}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}$ and then integrating the resulting inequality from $t_0$ to $T$, we have \begin{equation}\label{1.13} \aligned \int_{t_0}^{T}(p-1)t^{2p-3}dt\leq&\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}\int_{\{|\bm{\omega}|>t\}}\frac{1}{4(p-1)}|{\bm f}|^2dxdt\\ &+\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}t^{p-1}\int_{\{|\bm{\omega}|=t\}}|{\bm f}|dSdt\\ &+\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}t^{p-2}\int_{\p\Omega\bigcap \p\{|\bm{\omega}|>t\}} G(|\bm{\omega}|) \sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i dSdt.
\endaligned \end{equation} The estimates for the first and second integrals on the right-hand side of \eqref{1.13} can be obtained directly from \cite[(2.16), (2.17)]{xiangxf}, which imply that \begin{equation}\label{1.14} \aligned &\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}(\int_{\{|\bm{\omega}|>t\}}\frac{1}{4(p-1)}|{\bm f}|^2dx)dt \leq \frac{1}{4(p-1)}C(\Omega) \|{\bm f}\|_{L^{3,1}(\Omega)}^2, \endaligned \end{equation} and \begin{equation}\label{1.15} \aligned \int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}(\int_{\{|\bm{\omega}|=t\}}t^{p-1}|{\bm f}|dS)dt \leq T^{p-1}C(\Omega) \|{\bm f}\|_{L^{3,1}(\Omega)}, \endaligned \end{equation} where the constant $C$ depends only on the size of the domain $\Omega$ and its regularity (the cone condition for $\Omega$). Moreover, the last integral on the right-hand side of \eqref{1.13} is nonpositive. Indeed, according to \cite[p.135-137]{Grisvard1985Elliptic}, it follows that $$ \sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i =2\bm{\omega}_{\tau}\cdot\nabla_{\tau}\omega_n -\mathrm{Div}(\omega_n \bm{\omega}_{\tau} )+ \mathscr{B}(\bm{\omega}_{\tau},\bm{\omega}_{\tau})+ \mathrm{tr}\mathscr{B}\, \omega_n^2. $$ Since $\omega_n=0$ on $\p\Omega$ and the domain $\Omega$ is convex, for any $x\in\p\Omega$ we have $$\sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i =\mathscr{B}(\bm{\omega}_{\tau},\bm{\omega}_{\tau}) \leq 0.$$ Note that $$ \aligned t_0:=|\bm\omega|^*(|\Omega|/2)&\leq C(p, \Omega)\|\curl \mathbf u\|_{L^p(\Omega)}\\ &\leq C(p, \Omega)\|{\bm f}\|^{\frac{1}{p-1}}_{L^{3,1}(\Omega)}, \endaligned $$ where the last inequality follows from \eqref{7.3}. Substituting \eqref{1.14} and \eqref{1.15} into \eqref{1.13}, we have \begin{equation}\label{1.17} CT^{2p-2} \leq C(p, \Omega)\lVert {\bm f}\rVert ^2_{L^{3,1}(\Omega)}+C(\Omega)T^{p-1}\lVert {\bm f}\rVert _{L^{3,1}(\Omega)}.
\end{equation} Now letting $T\rightarrow \lVert\bm\omega\rVert _{L^{\infty}(\Omega)}$ in \eqref{1.17} and absorbing the term $T^{p-1}\lVert {\bm f}\rVert _{L^{3,1}(\Omega)}$ by Young's inequality, we obtain that there exists a constant $C$ depending on $p$ and $\Omega$ such that \begin{equation}\label{9.15} \lVert\curl\mathbf u\rVert _{L^{\infty}(\Omega)}\leq C \|{\bm f}\|_{L^{3,1}(\Omega)}^{\frac{1}{p-1}}.
\end{equation} In case (B), multiplying both sides of \eqref{1.5} by $(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}$ and then integrating the resulting inequality from $t_0$ to $T$, we obtain \begin{equation} \aligned \int_{t_0}^{T}(p-1)t^{p-1}dt\leq&\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}\int_{\{|\bm{\omega}|>t\}}\frac{1}{4(p-1)}T^{2-p}|{\bm f}|^2dxdt\\ &+\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}T\int_{\{|\bm{\omega}|=t\}}|{\bm f}|dSdt\\ &+\int_{t_0}^{T}(\int_{\{|\bm{\omega}|=t\}}|\nabla|\bm\omega||dS)^{-1}\int_{\p\Omega\bigcap \p\{|\bm{\omega}|>t\}} G(|\bm{\omega}|) \sum_{i,j=1}^3 \nu_i \omega_j \p_j \omega_i dSdt. \endaligned \end{equation} Similarly, we have $$CT^p\leq C(p,\Omega)T^{2-p}\|{\bm f}\|_{L^{3,1}(\Omega)}^2+C(\Omega)T\|{\bm f}\|_{L^{3,1}(\Omega)}+C(p,\Omega)\|{\bm f}\|_{L^{3,1}(\Omega)}^{\frac{p}{p-1}}.$$ Letting $T\rightarrow \lVert\bm\omega\rVert _{L^{\infty}(\Omega)}$, we again obtain the estimate \eqref{9.15} with a constant $C$ depending on $p$ and $\Omega$. Step 2. We remove the assumption \eqref{2.3}. We choose a sequence $\{\Omega_m\}_{m\in \mathbb{N}}$ of bounded domains $\Omega_m\supset \Omega$ such that $\p\Omega_m\in C^{\infty}$, $|\Omega_m\backslash \Omega|\to 0$, and $\Omega_m\to\Omega$ with respect to the Hausdorff distance (see \cite{Verchota1984}). Let $\mathbf u_m$ be the solutions to the system \eqref{1.1} with the domain $\Omega$ replaced by $\Omega_m$. From Step 1, we see that $$ \|\curl\mathbf u_m\|_{L^{\infty}(\Omega_m)}\leq C \|{\bm f}\|_{L^{3,1}(\Omega_m)}^{\frac{1}{p-1}}, $$ where the constant $C$ depends on $p$ and $\Omega$ (the Lipschitz character of the domain).
Based on this fact, by the $C^{1,\alpha}$ local regularity theory, for any compact subset $\mathcal {K}$ of $\Omega$ we have $$ \|\mathbf u_m\|_{C^{1,\alpha}(\mathcal {K})}\leq C \left(\mathcal {K}, \Omega\right); $$ see \cite{Lieberman1988}. By the Arzel\`a-Ascoli theorem, passing to a subsequence, we see that $$ \curl\mathbf u_m\to \curl\hat{\mathbf u} \qquad \text{a.e. in }\Omega $$ for some $\hat{\mathbf u}\in W_t^p(\Omega,\dv 0)$. For any $\psi\in W_t^p(\Omega,\dv 0)$, we have $$ \aligned &\int_{\Omega}|\curl\mathbf u_m|^{p-2}\curl\mathbf u_m \cdot \curl\psi dx =\int_{\Omega}{\bm f}\cdot\psi dx. \endaligned $$ Now by Lebesgue's dominated convergence theorem, we have $$ \aligned &\int_{\Omega}|\curl\hat{\mathbf u}|^{p-2}\curl\hat{\mathbf u}\cdot \curl\psi dx =\int_{\Omega}{\bm f}\cdot\psi dx, \endaligned $$ so that $\hat{\mathbf u}$ is a weak solution to \eqref{1.1}; by uniqueness, $\hat{\mathbf u}=\mathbf u$, and the estimate \eqref{1.2} passes to the limit. Step 3. We remove the assumption \eqref{2.4}. We first extend the vector ${\bm f}$ to a domain $\Omega_0$ with $\Omega\subset\Omega_0$. There exists an extension of ${\bm f}$, denoted by $\tilde{{\bm f}}$, such that $\dv\tilde{{\bm f}}=0$ in $\Omega_0$ and $$ \|\tilde{{\bm f}}\|_{L^{3,1}(\Omega_0)}\leq C\|{\bm f}\|_{L^{3,1}(\Omega)}. $$ There exists a sequence $\tilde{{\bm f}}_n\in C^{\infty}(\Omega_0)$ with $\dv\tilde{{\bm f}}_n=0$ and $$ \tilde{{\bm f}}_n\to \tilde{{\bm f}} \quad \text{in } L^{3,1}(\Omega_0); $$ for the proof we refer to \cite{Costabel1990A}. The rest of the proof is the same as Step 4 in the proof of Theorem 1.1 in \cite{Cianchi2011}. The proof is complete. \end{proof} \vspace{0.2in} \noindent{\bf ACKNOWLEDGMENTS} \vspace{0.1in} The author wishes to thank her supervisor Professor Baojun Bian for his persistent guidance and constant encouragement. The research work was partly supported by the National Natural Science Foundation of China grant No. 11771335. \vspace{0.5cm} \begin{thebibliography}{DUMA} \bibitem{Antontsev} S. Antontsev, F. Miranda, L. Santos, \newblock A class of electromagnetic p-curl systems: blow-up and finite time extinction, \newblock {Nonlinear Anal.} 75 (2012) 3916-3929. \bibitem{FAntontsev} S.
Antontsev, F. Miranda, L. Santos, \newblock Blow-up and finite time extinction for p(x,t)-curl systems arising in electromagnetism, \newblock {J. Math. Anal. Appl.} 440 (2016) 300-322. \bibitem{Aramaki} J. Aramaki, \newblock On a degenerate evolution system associated with the Bean critical-state for type II superconductors, \newblock {Abstr. Appl. Anal.} (2015) 1-13. \bibitem{Barrett1993Finite} J.W. Barrett, W.B. Liu, \newblock Finite element approximation of the p-Laplacian, \newblock {Math. Comp.} 61 (204)(1993) 523-537. \bibitem{Barrett} J.W. Barrett, L. Prigozhin, \newblock Bean's critical-state model as the $p\to\infty$ limit of an evolutionary p-Laplacian equation, \newblock{Nonlinear Anal.} 42 (2000) 977-993. \bibitem{Bartsch} T. Bartsch, J. Mederski, \newblock Ground and bound state solutions of semilinear time-harmonic Maxwell equations in a bounded domain, \newblock {Arch. Ration. Mech. Anal.} 215 (2014) 283-306. \bibitem{Bean1964Magnetization} C.P. Bean, \newblock Magnetization of high-field superconductors, \newblock {Rev. Mod. Phys.} 36 (1)(1964) 886-901. \bibitem{Chapman2000A} S.J. Chapman, \newblock A hierarchy of models for type-II superconductors, \newblock {SIAM Rev.} 42 (4)(2000) 555-598. \bibitem{ChenJun} J. Chen, X.B. Pan, \newblock Quasilinear systems involving curl, \newblock {Proc. Roy. Soc. Edinburgh Sect. A} 148 (2)(2018) 1-37. \bibitem{Cianchi2014Global} A. Cianchi, V.G. Maz'ya, \newblock Global boundedness of the gradient for a class of nonlinear elliptic systems, \newblock {Arch. Ration. Mech. Anal.} 212 (2014) 129-177. \bibitem{Cianchi2011} A. Cianchi, V.G. Maz'ya, \newblock Global Lipschitz regularity for a class of quasilinear elliptic equations, \newblock{Comm. Part. Differ. Eq.} 36 (2011) 100-133. \bibitem{Costabel1990A} M. Costabel, \newblock A remark on the regularity of solutions of Maxwell's equations on Lipschitz domains, \newblock {Math. Methods Appl. Sci.} 12 (4)(1990) 365-368.
\bibitem{DeGennes} P.G. De Gennes, \newblock Superconductivity of Metals and Alloys, \newblock {Benjamin, New York,} 1966. \bibitem{Dibenedetto2010Degenerate} E. DiBenedetto, \newblock Degenerate Parabolic Equations, \newblock {Springer-Verlag,} 2010. \bibitem{Grisvard1985Elliptic} P. Grisvard, \newblock Elliptic Problems in Nonsmooth Domains, \newblock {Pitman Advanced Pub. Program,} 1985. \bibitem{Laforest} M. Laforest, \newblock The p-curlcurl: spaces, traces, coercivity and a Helmholtz decomposition in $L_p$, \newblock preprint. \bibitem{Lieberman1988} G. Lieberman, \newblock Boundary regularity for solutions of degenerate elliptic equations, \newblock {Nonlinear Anal.} 12 (11)(1988) 1203-1219. \bibitem{Mederski} J. Mederski, \newblock Ground states of time-harmonic Maxwell equations in $\mathbb{R}^3$ with vanishing permittivity, \newblock {Arch. Ration. Mech. Anal.} 218 (2)(2015) 825-861. \bibitem{Str} M. Struwe, \newblock Variational Methods, \newblock 3rd edition, Springer-Verlag, Berlin, 2006. \bibitem{Tang} X. Tang, D. Qin, \newblock Ground state solutions for semilinear time-harmonic Maxwell equations, \newblock {J. Math. Phys.} 57 (4)(2016) 823-864. \bibitem{Uralceva} N.N. Ural'ceva, \newblock Degenerate quasilinear elliptic systems, \newblock {Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI)} 7 (1968) 184-222 (Russian). \bibitem{Verchota1984} G. Verchota, \newblock Layer potentials and regularity for the Dirichlet problem for Laplace's equation in Lipschitz domains, \newblock {J. Funct. Anal.} 59 (3)(1984) 572-611. \bibitem{xiangxf} X.F. Xiang, \newblock $L^{\infty}$ estimate for a limiting form of Ginzburg-Landau systems in convex domains, \newblock {J. Math. Anal. Appl.} 438 (1)(2016) 328-338. \bibitem{XiangM} M.Q. Xiang, F.L. Wang, B.L. Zhang, \newblock Existence and multiplicity of solutions for p(x)-curl systems arising in electromagnetism, \newblock {J. Math. Anal. Appl.} 448 (2)(2016) 1600-1617. \bibitem{Yin1998On} H.M.
Yin, \newblock On a singular limit problem for nonlinear Maxwell's equations, \newblock {J. Diff. Eqs.} 156 (2)(1998) 355-375. \bibitem{Yin2001} H.M. Yin, \newblock On a p-Laplacian type of evolution system and applications to the Bean model in the type-II superconductivity theory, \newblock {Quart. Appl. Math.} 59 (1)(2001) 47-66. \bibitem{Yin2002} H.M. Yin, B.Q. Li, J. Zou, \newblock A degenerate evolution system modeling Bean's critical-state type-II superconductors, \newblock {Discrete Contin. Dyn. Syst. Ser. A.} 8 (3)(2002) 781-794. \bibitem{Yin2006Regularity} H.M. Yin, \newblock Regularity of weak solution to an p-curl system, \newblock {Differ. Integral Equ.} 19 (4)(2006) 361-368. \end{thebibliography} \end {document} \] \end{document}
\section{Introduction} Multicomponent flows appear in many applications including sedimentation, dialysis, electrolysis, and ion transport \cite{WeKr99}. These flows may be described by Euler or Euler-Korteweg equations for the various species, coupled through interaction forces proportional to the difference of the partial velocities. The equations can be simplified when the interaction is strong, leading in the zeroth-order limit to the Euler equations for the partial particle densities and the common velocity and in the first-order correction to diffusive systems of Maxwell-Stefan type coupled with the momentum balance equation for the barycentric velocity. While such relaxation and high-friction limits are widely explored in mono-species situations, there are no results for multicomponent Euler-Korteweg flows. The aim of this paper is to compute the Chapman-Enskog expansion and to justify the expansion via a relative entropy approach, extending results for the mono-species case to fluid mixtures \cite{GLT17,LaTz13,LaTz17}. We consider the following Euler-Korteweg equations for multicomponent fluids, \begin{align} \pa_t\rho_i + \diver(\rho_i v_i) &= 0, \label{1.eq1} \\ \pa_t(\rho_i v_i) + \diver(\rho_i v_i\otimes v_i) &= -\rho_i\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}) - \frac{1}{\eps}\sum_{j=1}^nb_{ij}\rho_i\rho_j(v_i-v_j), \label{1.eq2} \end{align} where $i=1,\ldots,n$, $x\in\R^3$, $t>0$, and $\bm{\rho} =\bm{\rho}(x,t) = (\rho_1,\ldots,\rho_n)(x,t)$. The initial conditions are $$ \rho_i(\cdot,0)=\rho_{i}^0, \quad v_i(\cdot,0)=v_{i}^0 \quad\mbox{in }\R^3,\ i=1,\ldots,n. $$ The variables $\rho_i$ are the partial densities and $v_i$ the partial velocities. The parameters $b_{ij}\ge 0$ model the interaction of the $i$th and $j$th components, whose strength is measured by $1/\eps$ with $\eps>0$.
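Because $(b_{ij})$ is symmetric, the summand of the interaction term changes sign when $i$ and $j$ are exchanged, so the friction forces cancel in the sum over all species: friction redistributes momentum between components but conserves the total momentum. A minimal numerical sketch of this cancellation, with hypothetical random data and assuming `numpy` is available:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                    # number of species
b = rng.random((n, n))
b = b + b.T                              # symmetric friction coefficients b_ij >= 0
rho = rng.random(n) + 0.5                # partial densities, bounded away from vacuum
v = rng.standard_normal((n, 3))          # partial velocities in R^3

# friction force acting on species i: sum_j b_ij rho_i rho_j (v_i - v_j)
f = np.array([sum(b[i, j] * rho[i] * rho[j] * (v[i] - v[j]) for j in range(n))
              for i in range(n)])

total = f.sum(axis=0)                    # vanishes by the symmetry of (b_ij)
print(np.allclose(total, 0.0))           # True
```

The check passes for any symmetric $(b_{ij})$, independently of the chosen densities and velocities.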
Model \eqref{1.eq1}-\eqref{1.eq2} belongs to the general realm of multicomponent fluid mixtures whose thermodynamical structure has been extensively analyzed; see, e.g., \cite{BoDr15, MuRu93, RuSi09} and references therein. On the other hand, we adopt the mathematical structure espoused in \cite{GLT17}, in that the dynamics of the flow is determined by the functional ${\mathcal E}(\bm{\rho})$ of potential energy, with $\delta{\mathcal E}/\delta\rho_i$ standing for the variational derivatives with respect to the partial densities $\rho_i$. Several isothermal models fit into this framework. In this work, we consider energies of the form $$ {\mathcal E}(\bm{\rho}) = \int_{\R^3}\sum_{i=1}^n F_i(\rho_i,\na\rho_i)dx. $$ For instance, when $F_i=h_i(\rho_i)$ for some (convex) function $h_i$, we obtain the multicomponent system of gas dynamics with friction. When \begin{equation}\label{1.korte} F_i = h_i(\rho_i) + \frac12\kappa_i(\rho_i)|\na\rho_i|^2 \, , \end{equation} we obtain the multicomponent Euler-Korteweg system $$ \pa_t(\rho_iv_i) + \diver(\rho_iv_i\otimes v_i) = \diver S_i[\rho_i] - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j), $$ where $$ S_i[\rho_i] := \bigg(-p_i(\rho_i)-\frac12\big(\rho_i\kappa'_i(\rho_i)+\kappa_i(\rho_i)\big) |\na\rho_i|^2+\diver(\rho_i\kappa_i(\rho_i)\na\rho_i) \bigg){\mathbb I} - \kappa_i(\rho_i)\na\rho_i\otimes\na\rho_i $$ is the stress tensor associated with the $i$th component and $p_i(\rho_i) = \rho_i h_i'(\rho_i)-h_i(\rho_i)$ is the partial pressure.
A special case is the selection $\kappa_i(\rho_i)= k_i/(4 \rho_i)$ with $k_i =\mbox{const.}$, which yields the multicomponent quantum hydrodynamic system with friction, $$ \pa_t(\rho_iv_i) + \diver(\rho_iv_i\otimes v_i) + \nabla p_i(\rho_i) = \frac{1}{2} k_i \rho_i\na\bigg(\frac{\Delta\sqrt{\rho_i}}{\sqrt{\rho_i}}\bigg) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j), $$ used to describe quantum effects in semiconductors \cite{Jue09} or multicomponent quantum plasmas \cite{MSS08}. The dependence of $F_i$ on the density (and its gradient) of the $i$th component is crucial; the general case leads to mixed terms like $\pa F_i/\pa\rho_j$ that we cannot control. The interaction term (the last term in \eqref{1.eq2}) has an alignment effect on the partial velocities, and we expect that all partial velocities are the same in the high-friction limit $\eps\to 0$, leading to the zeroth-order limit system \begin{equation}\label{1.lim} \pa_t\bar\rho_i + \diver(\bar\rho_i\bar v) = 0, \quad \pa_t(\bar\rho\bar v) + \diver(\bar\rho\bar v\otimes\bar v) = -\sum_{i=1}^n\bar\rho_i\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bar\bm{\rho}) \end{equation} for $i=1,\ldots,n$, where $\bar\bm{\rho} = (\bar\rho_1,\ldots,\bar\rho_n)$, while $\bar\rho=\sum_{i=1}^n\bar\rho_i$ stands for the total density. 
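The reduction of the capillarity term to the Bohm potential for $\kappa_i(\rho_i)=k_i/(4\rho_i)$ can be verified symbolically in one space dimension: the variational derivative of the Korteweg part of \eqref{1.korte} then coincides with $-(k/2)(\sqrt{\rho})''/\sqrt{\rho}$, so that applying $-\rho\,\pa_x$ reproduces the quantum correction in the momentum equation. A sketch assuming `sympy` is available:

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)
rho = sp.Function('rho', positive=True)(x)

kappa = k / (4 * rho)
rho_x = sp.diff(rho, x)

# variational derivative of the Korteweg energy (1/2) kappa(rho) rho_x^2:
# dF/drho - d/dx (dF/drho_x) = (1/2) kappa'(rho) rho_x^2 - (kappa(rho) rho_x)'
korteweg = sp.Rational(1, 2) * sp.diff(kappa, rho) * rho_x**2 \
    - sp.diff(kappa * rho_x, x)

# 1D Bohm potential -(k/2) (sqrt(rho))'' / sqrt(rho)
bohm = -sp.Rational(1, 2) * k * sp.diff(sp.sqrt(rho), x, 2) / sp.sqrt(rho)

print(sp.simplify(korteweg - bohm))  # 0
```

Both expressions reduce to $k\rho_x^2/(8\rho^2)-k\rho_{xx}/(4\rho)$, confirming the identification.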
In the first-order correction, the solution $(\bm{\rho}^\eps,\bm{v}^\eps)=(\rho_i^\eps,v_i^\eps)_{i=1,\ldots,n}$ to the hyperbolic relaxation system \eqref{1.eq1}-\eqref{1.eq2} is expected to be close to the hyperbolic-diffusive system \begin{align} \pa_t\widehat\rho_i^\eps + \diver(\widehat\rho_i^\eps\widehat v^\eps) &= \eps\diver\sum_{j=1}^n D_{ij}^\eps(\widehat\bm{\rho}^\eps) \na\frac{\delta{\mathcal E}}{\delta\rho_j}(\widehat\bm{\rho}^\eps), \label{1.CE1} \\ \pa_t(\widehat\rho^\eps\widehat v^\eps) + \diver(\widehat\rho^\eps \widehat v^\eps\otimes\widehat v^\eps) &= -\sum_{i=1}^n\widehat\rho_i^\eps \na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}^\eps) \label{1.CE2} \end{align} for $i=1,\ldots,n$, where $\widehat\bm{\rho}^\eps = ( \widehat\rho^\eps_1,\ldots, \widehat\rho^\eps_n)$ and $\widehat\rho^\eps=\sum_{i=1}^n\widehat\rho_i^\eps$. When the barycentric velocity $\widehat v^\eps$ vanishes, we recover the Maxwell-Stefan equations analyzed in, e.g., \cite{Bot11,BGV17,JuSt13}. Before stating our main results, we review the state of the art. The structure of relaxation systems and their relaxation limits were first explored for examples \cite{CaPa79} and later for general systems \cite{Bou04,CLL94,Tza05,Yon04}. We call the limit $\eps\to 0$ a relaxation limit if the time scale is of order $O(1/\eps)$. Rigorous relaxation limits in the mono-species Euler equations towards the heat or porous-medium equation were proved, using energy estimates \cite{CoGo07}, the relative entropy approach \cite{LaTz13}, or convergence in Besov spaces \cite{XuKa14}. The relaxation limit in non-isentropic flows was analyzed in, e.g., \cite{Wu16,XuYo09}. When the potential energy ${\mathcal E}$ also depends on the gradient of the particle density, system \eqref{1.eq1}-\eqref{1.eq2} is of Euler-Korteweg type. 
The relaxation (or high-friction) limit in these equations for single species was studied in \cite{LaTz17} for monotone pressures (i.e.\ convex energies) and in \cite{GiTz17} for non-monotone pressures. Giesselmann et al.\ \cite{GLT17} proved stability theorems for the Euler-Korteweg system between a weak and a strong solution and for the Navier-Stokes-Korteweg system. All these results concern the mono-species case. Relaxation limits in multi-species systems were proved in the Euler-Poisson equations for electrons and positively charged ions in plasmas or semiconductors \cite{JuPe99}. At the zeroth order, such a limit leads to equations \eqref{1.lim}. First-order corrections can be derived by a Chapman-Enskog expansion or Maxwell-iteration technique. This was done in the Euler system with temperature \cite{RuSi09}, leading to equations for multitemperature mixtures in nonequilibrium thermodynamics. The Chapman-Enskog expansion was validated in \cite{YaYo15,YYZ15} in the isentropic case, proving an error estimate for the difference of the solutions of equations \eqref{1.eq1}-\eqref{1.eq2} and \eqref{1.CE1}-\eqref{1.CE2}. Another validation was recently presented by Boudin et al.\ \cite{BGP18} by applying the formalism of Chen, Levermore, and Liu \cite{CLL94}. However, no results seem to be available in the literature for high-friction limits in Euler-Korteweg systems. In this paper, we prove the convergence of solutions to \eqref{1.eq1}-\eqref{1.eq2} towards the limit system \eqref{1.lim} and the first-order correction system \eqref{1.CE1}-\eqref{1.CE2}. The main results can be sketched as follows: \begin{labeling}{1a.} \item[1.] We compute the Chapman-Enskog expansion leading to \eqref{1.CE1}-\eqref{1.CE2} and show that \eqref{1.CE1} has a gradient-flow structure (Lemma \ref{lem.diff}). Moreover, when the barycentric velocity $\widehat v^\eps$ vanishes, the system is proved to be parabolic in the sense of Petrovskii (Lemma \ref{lem.Estar}). \item[2.]
\label{re2} Assume that the functional \eqref{1.korte} satisfies some convexity conditions. For weak solutions to the relaxation system \eqref{1.eq1}-\eqref{1.eq2} and strong solutions to the approximate system \eqref{1.CE1}-\eqref{1.CE2} with uniform bounds on the velocities, assuming that the difference of the initial data is of order $O(\eps^2)$, we prove that \begin{align*} \chi(t) :&= \int_{\R^3}\sum_{i=1}^n \bigg(\frac12\rho^\eps_i|v_i^\eps-\widehat v_i^\eps|^2 + (\rho_i^\eps-\widehat\rho_i^\eps)^2 + \frac{1}{2\kappa_i(\rho_i^\eps)}|\kappa_i(\rho_i^\eps)\nabla \rho_i^\eps - \kappa_i(\widehat \rho_i^\eps) \nabla\widehat \rho_i^\eps|^2 \bigg)(t)dx \\&\le C(\chi(0) + \eps^2) \end{align*} uniformly in $t\in(0,T)$ for some constant $C>0$ independent of $\eps$, see Theorem \ref{thm.convCE}. \item[3a.] Isentropic case: Smooth solutions to \eqref{1.eq1}-\eqref{1.eq2} converge towards a smooth solution to the limit system \eqref{1.lim} in the sense $$ \sup_{0<t<T}\int_{\R^3}\sum_{i=1}^n\big((\rho_i^\eps-\bar\rho_i)^2 + |v_i^\eps-\bar v|^2\big)dx \to 0 \quad\mbox{as }\eps\to 0, $$ if the initial relative entropy converges to zero; see Theorem \ref{thm.isen}. \item[3b.] \label{re3a} Euler-Korteweg case with functional \eqref{1.korte}: Weak solutions to \eqref{1.eq1}-\eqref{1.eq2} converge towards a strong solution to the limit system \eqref{1.lim} in the sense \begin{align*} \chi(t) :&= \int_{\R^3}\sum_{i=1}^n \bigg(\frac12\rho^\eps_i|v_i^\eps-\bar v|^2 + (\rho_i^\eps-\bar\rho_i)^2 + \frac{1}{2\kappa_i(\rho_i^\eps)}|\kappa_i(\rho_i^\eps)\nabla \rho_i^\eps - \kappa_i(\bar \rho_i) \nabla \bar\rho_i|^2\bigg)(t)dx \\ &\le C(\chi(0) + \eps) \end{align*} uniformly in $t\in(0,T)$ for some constant $C>0$ independent of $\eps$; see Theorem \ref{thm.korte}. \end{labeling} For these results, we need that the functions $\rho_i^\eps$ are uniformly bounded away from vacuum as well as $h_i$ and $-1/\kappa_i$ are convex. 
The cases of the multicomponent quantum hydrodynamic system and of the system with constant capillarities are included. The idea of the proofs is to estimate the relative entropy between two solutions $$ {\mathcal E}_{\rm tot}(\bm{\rho},\bm{m}|\widehat\bm{\rho},\widehat{\bm{m}})(t) = \int_{\R^3}\sum_{i=1}^n\bigg(F_i(\rho_i,\na\rho_i|\widehat\rho_i, \na\widehat\rho_i) + \frac12\rho_i|v_i-\widehat v_i|^2\bigg)(t)dx, $$ where $\bm{m} = (m_1,\ldots,m_n)$ with $m_i=\rho_i v_i$, $\widehat{\bm{m}} = (\widehat m_1,\ldots,\widehat m_n)$ with $\widehat m_i=\widehat \rho_i \widehat v_i$, and $F_i(\rho_i,\na\rho_i|\widehat\rho_i,\na\widehat\rho_i)$ is the relative potential energy density, defined by $$ F_i(\rho_i,\na\rho_i|\widehat\rho_i,\na\widehat\rho_i) = F_i - \widehat F_i - \frac{\pa\widehat F_i}{\pa\rho_i}(\rho_i-\widehat\rho_i) - \frac{\pa \widehat F_i}{\pa \na \rho_i}\cdot\na(\rho_i-\widehat\rho_i), $$ with $F_i=F_i(\rho_i,\na\rho_i)$ and $\widehat F_i=F_i(\widehat\rho_i,\na\widehat\rho_i)$. This functional satisfies a relative entropy inequality, proved in Proposition \ref{prop.rei} for solutions to \eqref{1.eq1}-\eqref{1.eq2} and \eqref{1.CE1}-\eqref{1.CE2} and in Proposition \ref{prop.rei2} for solutions to \eqref{1.eq1}-\eqref{1.eq2} and \eqref{1.lim}. The relative entropy approach has the advantage of being very elementary and of being applicable to weak solutions to the original system \cite{GLT17,LaTz17}. For the proof of the high-friction limit in the isentropic case, we apply the general relaxation result in \cite{Tza05} which is also based on the relative entropy approach. We show that the framework is sufficiently general to include multicomponent Euler flows with friction. The paper is organized as follows. The formal Chapman-Enskog expansion as well as the proof of parabolicity of the first-order correction system are performed in section \ref{sec.formal}.
Section \ref{sec.rigorCE} is devoted to the rigorous proof of the Chapman-Enskog expansion in the Euler-Korteweg case. The high-friction limit in both the isentropic and Euler-Korteweg case is shown in section \ref{sec.relax}. \section{Formal asymptotics}\label{sec.formal} In this section we perform a Chapman-Enskog asymptotic analysis of system \eqref{1.eq1}-\eqref{1.eq2} as $\eps \to 0$. As a preparation, we analyze the solvability properties of the linear system \begin{equation}\label{1.linnh} \sum_{j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j) = d_i , \quad i=1,\ldots,n \, , \end{equation} and the associated homogeneous system \begin{equation}\label{1.lin} \sum_{j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j) = 0, \quad i=1,\ldots,n. \end{equation} The key hypothesis for \eqref{1.lin}, to be assumed in the whole manuscript, reads as follows: \begin{enumerate}[label=\bf (N)] \item \label{N} Let $(b_{ij})\in\R^{n\times n}$ be a symmetric matrix with nonnegative coefficients, $b_{i j} \ge 0$. For any $\rho_1,\ldots,\rho_n>0$, system \eqref{1.lin} has the one-dimensional null space $\operatorname{span}\{\mathbf{1}\}$, where $\mathbf{1}=(1,\ldots,1)\in\R^n$. \end{enumerate} By setting $B_{i j} = b_{i j} \rho_i \rho_j$, we rewrite \eqref{1.lin} in the form \begin{equation}\label{1p.lin} \sum_{j=1}^n B_{ij} (v_i - v_j ) = 0, \quad i=1,\ldots,n. \end{equation} If the coefficients $B_{i j}$ are symmetric and strictly positive, $B_{ij} > 0$ for $i \ne j$, then hypothesis \ref{N} is automatically satisfied. Indeed, due to the symmetry of $(B_{ij})$, \begin{align*} \sum_{i,j=1}^n B_{ij}(v_i-v_j)\cdot v_i &= \frac12\sum_{i,j=1}^n B_{ij}(v_i-v_j)\cdot v_i + \frac12\sum_{i,j=1}^n B_{ji}(v_j-v_i)\cdot v_j \\ &= \frac12\sum_{i,j=1}^n B_{ij}|v_i-v_j|^2. \end{align*} If \eqref{1p.lin} is satisfied, multiplying by $v_i$ and summing over $i$ shows that $v_i=v_j$ for all $i\neq j$, and the null space of system \eqref{1.lin} is the linear span of the vector $\mathbf{1}$.
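For scalar velocities, the map $v\mapsto(\sum_{j}B_{ij}(v_i-v_j))_i$ has the matrix $\operatorname{diag}(\sum_j B_{ij})-B$, a weighted graph Laplacian, and the argument above says that its kernel is exactly $\operatorname{span}\{\mathbf{1}\}$ when the off-diagonal weights are strictly positive. A quick numerical sketch with hypothetical data, assuming `numpy` is available:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.random((n, n)) + 0.1
B = (B + B.T) / 2                        # symmetric, strictly positive off-diagonal
np.fill_diagonal(B, 0.0)                 # diagonal plays no role in v_i - v_j

# matrix of the map v -> (sum_j B_ij (v_i - v_j))_i  (a weighted graph Laplacian)
L = np.diag(B.sum(axis=1)) - B

ones = np.ones(n)
print(np.allclose(L @ ones, 0.0))                    # True: 1 lies in the kernel
print(np.linalg.matrix_rank(L, tol=1e-10) == n - 1)  # True: kernel is span{1}
```

Both checks pass for any strictly positive off-diagonal weights; if some weights vanish, the rank can drop below $n-1$.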
This conclusion cannot be guaranteed if some $b_{ij}$ vanish, which makes assumption \ref{N} necessary. The assumption guarantees that there are no extraneous conservation laws associated to the frictional coefficients $b_{ij}$, beyond the conservation of mass and total momentum. \subsection{Solution of a linear system}\label{sec.lin} In the sequel, we will need to solve the linear system \begin{equation}\label{2.lin} -\sum_{j=1}^n b_{ij}\rho_i\rho_j(u_i-u_j) = d_i\quad\mbox{for }i=1,\ldots,n, \quad\mbox{subject to }\sum_{i=1}^n \rho_iu_i=0. \end{equation} We give a semi-explicit solution to such systems, recalling the notation $B_{ij}=b_{ij}\rho_i\rho_j$. \begin{lemma}\label{lem.linsys} Let $d_1,\ldots,d_n\in\R^3$ satisfy $\sum_{i=1}^n d_i=0$, $\rho_1,\ldots,\rho_n>0$, and $(B_{ij})\in\R^{n\times n}$ be a symmetric matrix satisfying $B_{ij}\ge 0$ for all $i,j=1,\ldots,n$. We suppose that all solutions to the homogeneous system \begin{equation}\label{2.hom} \sum_{j=1}^n B_{ij}(u_i-u_j)=0, \quad i=1,\ldots,n, \end{equation} lie in the space $\operatorname{span}\{\mathbf{1}\}$. Then the system \begin{equation}\label{2.linB} -\sum_{j=1}^n B_{ij}(u_i-u_j) = d_i\quad\mbox{for }i=1,\ldots,n, \quad\mbox{subject to }\sum_{i=1}^n \rho_iu_i=0, \end{equation} has the unique solution \begin{equation}\label{2.sol} \rho_iu_i = -\sum_{j,k=1}^{n-1}\bigg(\delta_{ij}\rho_i-\frac{\rho_i\rho_j}{\rho} \bigg)\tau^{-1}_{jk}d_k, \quad \rho_n u_n =-\sum_{j=1}^{n-1}\rho_ju_j, \end{equation} where $i=1,\ldots,n-1$, $\rho=\sum_{i=1}^n \rho_i>0$ and $(\tau_{ij}^{-1})\in\R^{(n-1)\times(n-1)}$ is the inverse of a regular submatrix, obtained from reordering the matrix $(\tau_{ij})\in\R^{n\times n}$ of rank $n-1$ with coefficients $$ \tau_{ij} = \delta_{ij}\sum_{k=1}^n B_{ik} - B_{ij}, \quad i,j=1,\ldots,n. $$ \end{lemma} \begin{proof} We proceed similarly as in \cite[Section 4]{YYZ15}.
The idea is to formulate the linear system in $n-1$ equations and to invert the resulting linear system semi-explicitly. First, we notice that we can write \eqref{2.hom} as $$ \sum_{j=1}^n \tau_{ij}u_j=0 \quad \mbox{ for }i=1,\ldots,n, $$ where \begin{equation}\label{eq:taudef} \tau_{ij}=-b_{ij}\rho_i\rho_j\mbox{ for } i \ne j\quad\text{and}\quad \tau_{ii}= -\sum_{j=1,\,j\neq i}^n \tau_{ij}. \end{equation} Since we assumed that all solutions to this system lie in the space $\operatorname{span}\{\mathbf{1}\}$, the matrix $(\tau_{ij})\in\R^{n\times n}$ has rank $n-1$. Thus, there exists an invertible submatrix $\tau=(\tau_{ij})\in\R^{(n-1)\times(n-1)}$ (possibly after reordering of the indices). The linear system \eqref{2.linB} can be formulated in terms of the first $n-1$ variables. Indeed, since $\sum_{j=1}^n\tau_{ij}=0$, we find that $$ -d_i = \sum_{j=1}^n\tau_{ij} u_j = \sum_{j=1}^{n-1}\tau_{ij} u_j + \tau_{in}u_n = \sum_{j=1}^{n-1}\tau_{ij}u_j - \sum_{j=1}^{n-1}\tau_{ij}u_n = \sum_{j=1}^{n-1}\tau_{ij}(u_j-u_n). $$ Using the property $\rho_nu_n=-\sum_{k=1}^{n-1}\rho_ku_k$, it follows that \begin{align} -d_i = \sum_{j=1}^{n-1}\tau_{ij}\bigg(u_j + \frac{1}{\rho_n}\sum_{k=1}^{n-1} \rho_ku_k\bigg) = \sum_{j,k=1}^{n-1}\tau_{ij}\bigg(\frac{1}{\rho_j}\delta_{jk} + \frac{1}{\rho_n} \bigg)\rho_ku_k = \sum_{j,k=1}^{n-1}\tau_{ij}Q_{jk}\rho_ku_k, \label{eq:taucal} \end{align} where $Q_{ij}=\delta_{ij}\rho_j^{-1}+\rho_n^{-1}$ for $i,j=1,\ldots,n-1$. The matrix $Q=(Q_{ij})\in\R^{(n-1)\times(n-1)}$ is invertible with inverse $(Q_{ij}^{-1})$, where $Q_{ij}^{-1}=\delta_{ij}\rho_j-\rho_i\rho_j/\rho$ for $i,j=1,\ldots,n-1$. 
Indeed, a straightforward computation shows that \begin{align*} \sum_{k=1}^{n-1}Q_{ik}Q^{-1}_{kj} &= \sum_{k=1}^{n-1}\bigg(\frac{1}{\rho_k}\delta_{ik} + \frac{1}{\rho_n}\bigg) \bigg(\delta_{kj}\rho_j-\frac{\rho_k\rho_j}{\rho}\bigg) \\ &= \delta_{ij} + \frac{\rho_j}{\rho_n} - \frac{\rho_j}{\rho} - \frac{\rho_j}{\rho_n\rho} \sum_{k=1}^{n-1}\rho_k = \delta_{ij}, \\ \sum_{k=1}^{n-1}Q^{-1}_{ik}Q_{kj} &= \sum_{k=1}^{n-1}\bigg(\delta_{ik}\rho_k-\frac{\rho_i\rho_k}{\rho}\bigg) \bigg(\frac{1}{\rho_k}\delta_{kj} + \frac{1}{\rho_n}\bigg) \\ &= \delta_{ij} - \frac{\rho_i}{\rho} + \frac{\rho_i}{\rho_n} - \frac{\rho_i}{\rho\rho_n} \sum_{k=1}^{n-1}\rho_k = \delta_{ij}. \end{align*} Thus, the matrix product $\tau Q$ is invertible with inverse $Q^{-1}\tau^{-1}$, and we infer that $$ \rho_i u_i = -\sum_{j,k=1}^{n-1}Q_{ij}^{-1}\tau_{jk}^{-1}d_k = -\sum_{j,k=1}^{n-1}\bigg(\delta_{ij}\rho_i-\frac{\rho_i\rho_j}{\rho}\bigg) \tau^{-1}_{jk}d_k, \quad i=1,\ldots,n-1. $$ This ends the proof. \end{proof} \subsection{Formal derivation of the Chapman-Enskog expansion}\label{sec.CE} We perform a formal Chapman-Enskog expansion of \eqref{1.eq1}-\eqref{1.eq2} in the high-friction regime, i.e.\ for small $\eps>0$. We introduce the moments $$ \rho = \sum_{i=1}^n\rho_i, \quad \rho v = \sum_{i=1}^n\rho_iv_i, $$ and the relative velocities $u_i=v_i-v$ for $i=1,\ldots,n$. This corresponds to a change of variables $(v_1,\ldots,v_n)\mapsto(v,u_1,\ldots,u_n)$. Then system \eqref{1.eq1}-\eqref{1.eq2} becomes \begin{align} \pa_t\rho_i + \diver(\rho_iu_i + \rho_i v) &= 0, \label{2.u1} \\ \pa_t(\rho_iu_i+\rho_iv) + \diver\big(\rho_i(u_i+v)\otimes(u_i+v)\big) &= -\rho_i\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\rho_i\rho_j(u_i-u_j), \label{2.u2} \end{align} subject to the constraint \begin{equation}\label{2.constr} \sum_{i=1}^n \rho_iu_i = \sum_{i=1}^n\rho_i(v_i-v) = \sum_{i=1}^n\rho_i v_i - \rho v = 0. 
\end{equation} The objective is to derive an effective equation in the spirit of the Chapman-Enskog expansion for the high-friction dynamics of system \eqref{2.u1}-\eqref{2.u2} subject to \eqref{2.constr}. For this, we introduce the Hilbert expansion \begin{align} \rho_i &= \rho_i^0 + \eps\rho_i^1+\eps^2\rho_i^2 + O(\eps^3), \nonumber \\ u_i &= u_i^0 + \eps u_i^1 + \eps^2 u_i^2 + O(\eps^3), \label{2.hilbert} \\ v &= v^0 + \eps v^1 + O(\eps^2). \nonumber \end{align} Inserting this expansion into $\rho=\sum_{i=1}^n\rho_i$, we find that \begin{equation}\label{2.rho} \rho = \rho^0 + \eps\rho^1 + O(\eps^2), \quad\mbox{where } \rho^0 := \sum_{i=1}^n\rho_i^0,\ \rho^1 := \sum_{i=1}^n\rho_i^1, \end{equation} and the constraint \eqref{2.constr} leads to $$ 0 = \sum_{i=1}^n\rho_iu_i = \sum_{i=1}^n\rho_i^0 u_i^0 + \eps\sum_{i=1}^n\big(\rho_i^0u_i^1+\rho_i^1u_i^0\big) + O(\eps^2). $$ Equating terms of the same order gives \begin{equation}\label{2.zero} \sum_{i=1}^n\rho_i^0 u_i^0=0, \quad \sum_{i=1}^n\big(\rho_i^0u_i^1+\rho_i^1u_i^0\big)=0. \end{equation} Next, we insert the Hilbert expansion \eqref{2.hilbert} into system \eqref{2.u1}-\eqref{2.u2} and identify terms of the same order: \\ $\bullet$ Terms of order $O(1/\eps)$: \begin{equation}\label{2.H-1} \sum_{j=1}^n b_{ij}\rho_i^0\rho_j^0(u_i^0-u_j^0) = 0, \quad i=1,\ldots,n. \end{equation} $\bullet$ Terms of order $O(1)$: \begin{align} &\pa_t\rho_i^0 + \diver(\rho_i^0 u_i^0+\rho_i^0v^0) = 0, \label{2.H01} \\ &\pa_t(\rho_i^0u_i^0+\rho_i^0v^0) + \diver\big(\rho_i^0(u_i^0+v^0)\otimes(u_i^0+v^0)\big) \label{2.H02} \\ &\qquad= -\rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) - \sum_{j=1}^nb_{ij}\rho_i^0\rho_j^0(u_i^1-u_j^1) - \sum_{j=1}^nb_{ij}(\rho_i^1\rho_j^0+\rho_i^0\rho_j^1)(u_i^0-u_j^0). 
\nonumber \end{align} $\bullet$ Terms of order $O(\eps)$: \begin{align} & \pa_t\rho_i^1 + \diver\big(\rho_i^1(u_i^0+v^0) + \rho_i^0(u_i^1+v^1)\big) = 0, \label{2.H11} \\ & \pa_t\big(\rho_i^1(u_i^0+v^0) + \rho_i^0(u_i^1+v^1)\big) + \diver\big(\rho_i^1(u_i^0+v^0)\otimes(u_i^0+v^0) \label{2.H12} \\ &\phantom{xxxx}{} + \rho_i^0(u_i^1+v^1)\otimes(u_i^0+v^0) + \rho_i^0(u_i^0+v^0)\otimes(u_i^1+v^1)\big) \nonumber \\ &\phantom{xx}{}= -\rho_i^1\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) - \rho_i^0\na\bigg(\sum_{j=1}^n\frac{\delta^2{\mathcal E}}{\delta\rho_i\delta\rho_j} (\bm{\rho}^0)\rho_j^1\bigg) \nonumber \\ &\phantom{xxxx}{}- \sum_{j=1}^n b_{ij}\Big(\rho_i^0\rho_j^0(u_i^2-u_j^2) + (\rho_i^1\rho_j^0+\rho_i^0\rho_j^1)(u_i^1-u_j^1) \nonumber \\ &\phantom{xxxx}{} + (\rho_i^0\rho_j^2+\rho_i^1\rho_j^1+\rho_i^2\rho_j^0)(u_i^0-u_j^0)\Big). \nonumber \end{align} First, we consider equations \eqref{2.H-1} of order $O(1/\eps)$. By assumption \ref{N} on page \pageref{N}, the first constraint in \eqref{2.zero}, and Lemma \ref{lem.linsys}, we deduce that $u_i^0=0$ for $i=1,\ldots,n$, which simplifies equations \eqref{2.H01}-\eqref{2.H12}. Then, summing \eqref{2.H02} from $i=1,\ldots,n$ and using the symmetry of $(b_{ij})$, $(\rho_1^0,\ldots,\rho_n^0,v^0)$ can be determined by solving the closed system \begin{align} \pa_t\rho_i^0 + \diver(\rho_i^0 v^0) &= 0, \label{2.I01} \\ \pa_t \bigg(\sum_{i=1}^n\rho_i^0v^0\bigg) + \diver\bigg(\sum_{i=1}^n\rho_i^0v^0\otimes v^0\bigg) &= -\sum_{i=1}^n\rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0). \label{2.I02} \end{align} It follows from \eqref{2.H02} that $u_1^1,\ldots,u_n^1$ satisfy the linear system \begin{align} & -\sum_{j=1}^n b_{ij}\rho_i^0\rho_j^0(u_i^1-u_j^1) = d_i^0, \label{2.linsys2} \\ & \mbox{where}\quad d_i^0 = \pa_t(\rho_i^0v^0) + \diver(\rho_i^0v^0\otimes v^0) + \rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0). 
\nonumber \end{align} Since $u_i^0=0$, the second constraint in \eqref{2.zero} becomes $\sum_{i=1}^n\rho_i^0 u_i^1=0$. Moreover, \eqref{2.I02} is equivalent to $\sum_{i=1}^n d_i^0=0$, which ensures the solvability of \eqref{2.linsys2}. By Lemma \ref{lem.linsys}, there exists a unique solution $(u_1^1,\ldots,u_n^1)$ to \eqref{2.linsys2}. Next, we focus on the terms \eqref{2.H11}-\eqref{2.H12} of order $O(\eps)$. We rewrite these equations using $u_i^0=0$ and the constraint $\sum_{i=1}^n\rho_i^0 u_i^1=0$ as \begin{align} & \pa_t\rho_i^1 + \diver(\rho_i^1v^0 + \rho_i^0v^1) = -\diver(\rho_i^0u_i^1), \label{2.I11} \\ & \pa_t\bigg(\sum_{i=1}^n\rho_i^1v^0 + \sum_{i=1}^n\rho_i^0v^1\bigg) + \diver\bigg(\sum_{i=1}^n\rho_i^1 v^0\otimes v^0 + \sum_{i=1}^n\rho_i^0(v^1\otimes v^0+v^0\otimes v^1)\bigg) \label{2.I12} \\ &\phantom{xx}{}= -\sum_{i=1}^n\bigg\{\rho_i^1\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) + \rho_i^0\na\bigg(\sum_{j=1}^n\frac{\delta^2{\mathcal E}}{\delta\rho_i\delta\rho_j} (\bm{\rho}^0)\rho_j^1\bigg)\bigg\}. \nonumber \end{align} This is a closed system providing $(\rho_1^1,\ldots,\rho_n^1,v^1)$. The last task is to reconstruct the effective equations that are valid asymptotically up to order $O(\eps^2)$. 
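Before carrying out this reconstruction, note that the semi-explicit formula of Lemma \ref{lem.linsys}, which was just used to solve \eqref{2.linsys2}, can be sanity-checked numerically: build $\tau_{ij}=\delta_{ij}\sum_k B_{ik}-B_{ij}$, invert its leading $(n-1)\times(n-1)$ block, assemble \eqref{2.sol}, and verify that the linear system and the constraint hold. A sketch with hypothetical scalar data, assuming `numpy` is available:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
rho = rng.random(n) + 0.5                      # densities bounded away from vacuum
b = rng.random((n, n)) + 0.1
b = (b + b.T) / 2                              # symmetric b_ij > 0
B = b * np.outer(rho, rho)                     # B_ij = b_ij rho_i rho_j

d = rng.standard_normal(n)
d -= d.mean()                                  # solvability: sum_i d_i = 0

tau = np.diag(B.sum(axis=1)) - B               # rank n-1 friction matrix
tau_inv = np.linalg.inv(tau[:n-1, :n-1])       # leading block is invertible

# Q^{-1}_ij = delta_ij rho_i - rho_i rho_j / rho
Qinv = np.diag(rho[:n-1]) - np.outer(rho[:n-1], rho[:n-1]) / rho.sum()

m = np.empty(n)                                # momenta m_i = rho_i u_i
m[:n-1] = -Qinv @ tau_inv @ d[:n-1]            # formula (2.sol)
m[n-1] = -m[:n-1].sum()                        # constraint sum_i rho_i u_i = 0

u = m / rho
residual = np.array([-(B[i] * (u[i] - u)).sum() for i in range(n)]) - d
print(np.allclose(residual, 0.0))              # True: -sum_j B_ij(u_i-u_j) = d_i
```

The $n$th equation holds automatically because the rows of $\tau$ sum to the negative of the others and $\sum_i d_i=0$.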
We are adding \eqref{2.I01} and $\eps$ times \eqref{2.I11} as well as \eqref{2.I02} and $\eps$ times \eqref{2.I12}: \begin{align*} &\pa_t(\rho_i^0+\eps\rho_i^1) + \diver\big(\rho_i^0v^0 + \eps(\rho_i^1v^0+\rho_i^0v^1)\big) = -\eps\diver(\rho_i^0 u_i^1), \\ & \pa_t\big(\rho^0v^0 + \eps(\rho^1v^0+\rho^0v^1)\big) + \diver\big({\rho}^0v^0\otimes v^0 + \eps(\rho^1v^0\otimes v^0 + \rho^0 v^1\otimes v^0 + \rho^0v^0\otimes v^1)\big) \\ &\phantom{xx}{}= -\sum_{i=1}^n\rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) - \eps\sum_{i=1}^n\bigg\{\rho_i^1\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) + \rho_i^0\na\bigg(\sum_{j=1}^n\frac{\delta^2{\mathcal E}}{\delta\rho_i\delta\rho_j} (\bm{\rho}^0)\rho_j^1\bigg)\bigg\}, \end{align*} where $\rho^0$ and $\rho^1$ are defined in \eqref{2.rho}. With the notation \begin{align*} & \rho_i^\eps = \rho_i^0 + \eps\rho_i^1 + O(\eps^2), \quad u_i^\eps = u_i^0 + \eps u_i^1 +O(\eps^2), \\ & v^\eps = v^0 + \eps v^1 + O(\eps^2), \quad \rho^\eps = \sum_{i=1}^n\rho_i^\eps, \end{align*} and recalling that $u_i^0=0$, we infer that $(\rho_1^\eps,\ldots,\rho_n^\eps,v^\eps)$ satisfies \begin{align*} \pa_t\rho_i^\eps + \diver(\rho_i^\eps v^\eps) &= -\diver(\rho_i^\eps u_i^\eps) + O (\eps^2), \\ \pa_t(\rho^\eps v^\eps) + \diver(\rho^\eps v^\eps\otimes v^\eps) &= -\sum_{i=1}^n\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps) + O(\eps^2). \end{align*} It remains to reconstruct the formula determining $(u_1^\eps,\ldots,u_n^\eps)$. We deduce from \eqref{2.linsys2} that \begin{equation}\label{2.d} -\sum_{j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps(u_i^\eps-u_j^\eps) = -\eps\sum_{j=1}^n b_{ij}\rho_i^0\rho_j^0(u_i^1-u_j^1) + O(\eps^2) = \eps d_i^0 + O(\eps^2). \end{equation} The variables $d_i^0$ can be expressed in terms of $\bm{\rho}^0$ only. 
Indeed, since $\pa_t\rho_i^0+\diver(\rho_i^0v^0)=0$ and $\pa_t\rho^0+\diver(\rho^0v^0)=0$, it follows that \begin{align*} d_i^0 &= \big(\pa_t\rho_i^0+\diver(\rho_i^0v^0)\big)v^0 + \rho_i^0\big(\pa_t v^0 + v^0\cdot\na v^0\big) + \rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) \\ &= \rho_i^0\big(\pa_t v^0 + v^0\cdot\na v^0\big) + \rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) \\ &= \frac{\rho_i^0}{\rho^0}\big(\pa_t(\rho^0v^0) + \diver(\rho^0v^0\otimes v^0)\big) + \rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0) \\ &= -\frac{\rho_i^0}{\rho^0}\sum_{j=1}^n\rho_j^0\na\frac{\delta{\mathcal E}}{\delta\rho_j} (\bm{\rho}^0) + \rho_i^0\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^0), \end{align*} where in the last step we have used \eqref{2.I02}. This motivates us to define \begin{equation}\label{2.deps} d_i^\eps := -\frac{\rho_i^\eps}{\rho^\eps}\sum_{j=1}^n\rho_j^\eps \na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) + \rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps). \end{equation} Hence, we can formulate \eqref{2.d} as $$ -\sum_{j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps(u_i^\eps-u_j^\eps) = \eps d_i^\eps + O(\eps^2). $$ The constraints $\sum_{i=1}^n\rho_i^0 u_i^0 = 0$ and $\sum_{i=1}^n\rho_i^0u_i^1=0$ from \eqref{2.zero} imply that $$ \sum_{i=1}^n\rho_i^\eps u_i^\eps = \sum_{i=1}^n\rho_i^0 u_i^0+ \eps\sum_{i=1}^n \rho_i^0u_i^1 + O(\eps^2) = O(\eps^2). $$ As the functions $\rho_i^\eps$, $v^\eps$, and $u_i^\eps$ are defined only up to order $O(\eps^2)$, we may set $\sum_{i=1}^n \rho_i^\eps u_i^\eps=0$ up to that order. We summarize our calculations.
The functions $(\bm{\rho}^\eps,v^\eps)=(\rho_1^\eps,\ldots,\rho_n^\eps,v^\eps)$ satisfy up to order $O(\eps^2)$ the effective equations \begin{align} \pa_t\rho_i^\eps + \diver(\rho_i^\eps v^\eps) &= -\diver(\rho_i^\eps u_i^\eps), \label{2.eff1} \\ \pa_t(\rho^\eps v^\eps) + \diver(\rho^\eps v^\eps\otimes v^\eps) &= -\sum_{i=1}^n\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps), \label{2.eff2} \end{align} where $\rho^\eps=\sum_{i=1}^n\rho^\eps_i$, and $\bm{u}^\eps=(u_1^\eps,\ldots,u_n^\eps)$ is the unique solution to \begin{equation}\label{2.effu} -\sum_{j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps(u_i^\eps-u_j^\eps) = \eps d_i^\eps, \quad \sum_{j=1}^n\rho_j^\eps u_j^\eps=0, \end{equation} for $i=1,\ldots,n$, where $d_i^\eps$ is defined in \eqref{2.deps}. \subsection{Gradient-flow structure and parabolicity}\label{sec.para} We show that the effective equations have a formal gradient-flow structure and, if the total mass is constant, a parabolic structure in the sense of Petrovskii \cite{Ama93}. First, we reformulate system \eqref{2.eff1}-\eqref{2.effu}. 
\begin{lemma}[Gradient-flow structure]\label{lem.diff} System \eqref{2.eff1}-\eqref{2.effu} can be rewritten as \begin{align*} \pa_t\rho_i^\eps + \diver(\rho_i^\eps v^\eps) &= \eps\diver\sum_{j=1}^n D_{ij}^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps), \\ \pa_t(\rho^\eps v^\eps) + \diver(\rho^\eps v^\eps\otimes v^\eps) &= -\sum_{i=1}^n \rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps), \end{align*} where $i=1,\ldots,n$, $\rho^\eps=\sum_{i=1}^n\rho_i^\eps$, and $$ D^\eps = G(Q^\eps)^{-1}(\tau^\eps)^{-1}(Q^\eps)^{-1} G^\top\in\R^{n\times n}, $$ where $(Q^\eps)^{-1}\in\R^{(n-1)\times(n-1)}$ has the coefficients $(Q^\eps)_{ij}^{-1}=\delta_{ij}\rho_i^\eps-\rho_i^\eps\rho_j^\eps/\rho^\eps$, $(\tau^\eps)^{-1}$ is the inverse of the $(n-1)\times(n-1)$ matrix introduced in Lemma \ref{lem.linsys}, and $G=(G_{ij})\in\R^{n\times(n-1)}$ is defined by $G_{ii}=1$, $G_{ni}=-1$ for $i=1,\ldots,n-1$, and $G_{ij}=0$ elsewhere. \end{lemma} \begin{proof} In view of Lemma \ref{lem.linsys}, the solution to \eqref{2.effu} can be expressed as \begin{equation}\label{2.aux1} \rho_i^\eps u_i^\eps = -\eps\sum_{j,k=1}^{n-1}\bigg(\delta_{ij}\rho_i^\eps - \frac{\rho_i^\eps\rho_j^\eps}{\rho^\eps}\bigg)({\tau}^\eps)_{jk}^{-1} d_k^\eps, \quad i=1,\ldots,n-1, \end{equation} where $({\tau}^\eps)^{-1}=(({\tau}^\eps)_{jk}^{-1})$ is the inverse of a regular matrix in $\R^{(n-1)\times(n-1)}$ whose coefficients only depend on $b_{ij}\rho_i^\eps\rho_j^\eps$. We wish to reformulate $d_i^\eps$ in terms of $\bm{\rho}^\eps$. 
For this, we compute, using $\rho^\eps_n = \rho^\eps - \sum_{j=1}^{n-1}\rho_j^\eps$, \begin{align*} d_i^\eps &= \rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps) - \frac{\rho_i^\eps}{\rho^\eps}\sum_{j=1}^n \rho_j^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) \\ &= \sum_{j=1}^{n-1}\bigg(\delta_{ij}\rho_i^\eps - \frac{\rho_i^\eps\rho_j^\eps}{\rho^\eps}\bigg) \na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) - \frac{\rho_i^\eps\rho_n^\eps}{\rho^\eps}\na\frac{\delta{\mathcal E}}{\delta\rho_n} (\bm{\rho}^\eps) \\ &= \sum_{j=1}^{n-1}(Q^\eps)_{ij}^{-1}\na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) - \frac{\rho_i^\eps}{\rho^\eps}\bigg(\rho^\eps - \sum_{j=1}^{n-1}\rho_j^\eps\bigg)\na\frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps) \\ &= \sum_{j=1}^{n-1}(Q^\eps)_{ij}^{-1}\na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) - \sum_{j=1}^{n-1}\bigg(\delta_{ij}\rho_i^\eps - \frac{\rho_i^\eps\rho_j^\eps}{\rho^\eps}\bigg) \na\frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps) \\ &= \sum_{j=1}^{n-1}(Q^\eps)_{ij}^{-1}\na\bigg(\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) - \frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps)\bigg). \end{align*} Inserting this expression into \eqref{2.aux1} gives \begin{align} \rho_i^\eps u_i^\eps &= -\eps\sum_{j,k,\ell=1}^{n-1}(Q^\eps)_{ij}^{-1} (\tau^\eps)_{jk}^{-1}(Q^\eps)_{k\ell}^{-1} \na\bigg(\frac{\delta{\mathcal E}}{\delta\rho_\ell}(\bm{\rho}^\eps) - \frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps)\bigg) \nonumber \\ &= -\eps\sum_{\ell=1}^{n-1} \widetilde D^\eps_{i\ell} \na\bigg(\frac{\delta{\mathcal E}}{\delta\rho_\ell}(\bm{\rho}^\eps) - \frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps)\bigg), \label{2.rhou} \end{align} with $\widetilde D_{ij}^\eps$ the elements of the invertible matrix $\widetilde D^\eps=(Q^\eps)^{-1}(\tau^\eps)^{-1}(Q^\eps)^{-1}\in\R^{(n-1)\times(n-1)}$. 
Finally, setting $D^\eps:=G\widetilde D^\eps G^\top$, we can formulate \eqref{2.rhou} as \begin{align}\label{eq:rhou} \rho_i^\eps u_i^\eps = -\eps\sum_{j=1}^n D_{ij}^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_j} (\bm{\rho}^\eps), \quad i = 1,\ldots, n. \end{align} Note that in this writing, the last row of the matrix expresses the constraint $\rho_n u_n = - \sum_{j=1}^{n-1} \rho_j u_j$. We finish the proof after inserting this expression into \eqref{2.eff1}. \end{proof} Let $v^\eps=0$. Then the sum of \eqref{2.eff1} over $i=1,\ldots,n$ yields, because of $\sum_{i=1}^n\rho_i^\eps u_i^\eps=0$, $\pa_t\rho^\eps = 0$. Thus, $\rho^\eps$ does not depend on time and is fixed by the initial total mass. It is sufficient to consider $\widetilde\bm{\rho}^\eps:=(\rho_1^\eps,\ldots,\rho_{n-1}^\eps)$ since the last component can be recovered from $\rho_n^\eps =\rho^\eps-\sum_{i=1}^{n-1}\rho_i^\eps$. Accordingly, the energy can be formulated as a function of the variable $\widetilde\bm{\rho}^\eps$: \begin{equation} \label{redenergy} \widetilde{\mathcal E}(\widetilde\bm{\rho}^\eps) := {\mathcal E}\bigg(\rho_1^\eps,\ldots,\rho_{n-1}^\eps, \rho^\eps-\sum_{i=1}^{n-1}\rho_i^\eps\bigg). \end{equation} \begin{lemma}[Parabolicity in the sense of Petrovskii]\label{lem.Estar} Let $(\bm{\rho}^\eps,v^\eps)$ be a solution to \eqref{2.eff1}-\eqref{2.eff2} with $v^\eps =0$ and let $\bm{u}^\eps$ be a solution to \eqref{2.effu}. Suppose that ${\mathcal E}(\bm{\rho}^\eps)$ is strictly convex. 
Then $\bm{\rho}^\eps$ solves \begin{equation}\label{2.Estar} \pa_t\rho_i^\eps = \eps\diver\sum_{j=1}^{n-1}\widetilde D_{ij}^\eps \na\frac{\delta\widetilde{\mathcal E}}{\delta\rho_j} (\widetilde\bm{\rho}^\eps), \quad i=1,\ldots,n-1, \end{equation} the matrix $\widetilde D^\eps=(\widetilde D_{ij}^\eps)$ is positive definite, and the energy $\widetilde{\mathcal E}$ is a Lyapunov functional along solutions to \eqref{2.Estar}: $$ \frac{d\widetilde{\mathcal E}}{dt} = -\eps\int_{\R^3}\sum_{i,j=1}^{n-1} \widetilde D_{ij}^\eps \na\frac{\delta\widetilde{\mathcal E}}{\delta\rho_i}\cdot \na\frac{\delta\widetilde{\mathcal E}}{\delta\rho_j}dx \le 0. $$ Moreover, if $\rho_i^\eps>0$ for $i=1,\ldots,n$, all eigenvalues of $\widetilde D^\eps\widetilde{\mathcal E}''$ are real and positive (here, $\widetilde{\mathcal E}''=d^2\widetilde{\mathcal E}/d\widetilde{\bm{\rho}}^2$ is the Hessian of the energy $\widetilde{\mathcal E}$). This means that \eqref{2.Estar} is parabolic in the sense of Petrovskii. \end{lemma} A second-order system is called {\it parabolic in the sense of Petrovskii} if the real parts of the eigenvalues of the diffusion matrix are positive; see \cite[Remark 4.2a]{Ama93}.
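For orientation, it may help to write out the two-species case $n=2$ of Lemma \ref{lem.diff} explicitly (an illustrative sketch, assuming $b_{12}>0$). Then $G=(1,-1)^\top$, and both $(Q^\eps)^{-1}=\rho_1^\eps\rho_2^\eps/\rho^\eps$ and $\tau^\eps=b_{12}\rho_1^\eps\rho_2^\eps$ are scalars, so that

```latex
% Two-species case n = 2: the matrices reduce to scalars.
\widetilde D^\eps = (Q^\eps)^{-1}(\tau^\eps)^{-1}(Q^\eps)^{-1}
  = \frac{\rho_1^\eps\rho_2^\eps}{b_{12}(\rho^\eps)^2} > 0, \qquad
D^\eps = G\widetilde D^\eps G^\top
  = \frac{\rho_1^\eps\rho_2^\eps}{b_{12}(\rho^\eps)^2}
    \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}.
```

In this case the positive definiteness of $\widetilde D^\eps$ is immediate, and \eqref{2.Estar} reduces to a single parabolic equation for $\rho_1^\eps$.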
\begin{proof} Since the variational derivative of $\widetilde{\mathcal E}$ equals $$ \frac{\delta\widetilde{\mathcal E}}{\delta\rho_i}(\widetilde{\bm{\rho}}^\eps) = \frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps) - \frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps), \quad i=1,\ldots,n-1, $$ expression \eqref{2.rhou} in the proof of Lemma \ref{lem.diff} shows that for $i=1,\ldots,n-1$, $$ \eps\sum_{j=1}^n D_{ij}^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) = \eps\sum_{j=1}^{n-1} \widetilde D^\eps_{ij} \na\bigg(\frac{\delta{\mathcal E}}{\delta\rho_j}(\bm{\rho}^\eps) - \frac{\delta{\mathcal E}}{\delta\rho_n}(\bm{\rho}^\eps)\bigg) = \eps\sum_{j=1}^{n-1} \widetilde D^\eps_{ij} \na\frac{\delta\widetilde{\mathcal E}}{\delta\rho_j}(\widetilde{\bm{\rho}}^\eps), $$ proving \eqref{2.Estar}. Next, we show that $\widetilde D^\eps$ is positive definite. As $(b_{ij})$ is a symmetric matrix with nonnegative entries (by assumption \ref{N} on page \pageref{N}), the matrix $$ \tau^\eps_{ij} = \delta_{ij}\sum_{k=1}^n b_{ik}\rho_i^\eps\rho_k^\eps - b_{ij}\rho_i^\eps\rho_j^\eps $$ is symmetric, diagonally dominant, and has nonnegative diagonal elements. Therefore, $(\tau_{ij}^\eps)$ is positive semidefinite, and so is every principal submatrix. We know from the proof of Lemma \ref{lem.linsys} that the $(n-1)\times(n-1)$ submatrix $\tau^\eps$ introduced there is invertible. Being symmetric, positive semidefinite, and invertible, it has only positive eigenvalues and is therefore positive definite, and the same is true for its inverse $(\tau^\eps)^{-1}$. Moreover, since $(Q^\eps)^{-1}$ is symmetric and regular, $\widetilde D^\eps=(Q^\eps)^{-1}(\tau^\eps)^{-1}(Q^\eps)^{-1}$ is positive definite. It remains to show that $\widetilde D^\eps\widetilde{\mathcal E}''$ has only real and positive eigenvalues. We claim that $\widetilde{\mathcal E}''$ is positive definite.
To see this, we calculate (dropping the superindex $\eps$) $$ \widetilde{\mathcal E}'' = \frac{d}{d\widetilde\bm{\rho}} \bigg(\frac{d{\mathcal E}}{d\bm{\rho}} \frac{d\bm{\rho}}{d\widetilde\bm{\rho}}\bigg) = \bigg(\frac{d\bm{\rho}}{d\widetilde\bm{\rho}}\bigg)^\top \frac{d^2{\mathcal E}}{d\bm{\rho}^2} \bigg(\frac{d\bm{\rho}}{d\widetilde\bm{\rho}}\bigg) + \frac{d{\mathcal E}}{d\bm{\rho}}\frac{d^2\bm{\rho}}{d\widetilde\bm{\rho}^2}. $$ Since $\bm{\rho}=(\rho_1,\ldots,\rho_{n-1},\rho-\sum_{i=1}^{n-1}\rho_i)$, we have $$ \frac{d\bm{\rho}}{d\widetilde\bm{\rho}} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & 0 & & 1 \\ -1 &-1 & \cdots &-1 \end{pmatrix}\in \R^{n\times (n-1)}, $$ and $d^2\bm{\rho}/d\widetilde\bm{\rho}^2$ vanishes since the transformation $\widetilde\bm{\rho}\mapsto\bm{\rho}$ is linear. By the strict convexity of ${\mathcal E}$, there exists $\kappa>0$ such that for any $z=(z_1,\ldots,z_{n-1})\in\R^{n-1}$, $$ z^\top\widetilde{\mathcal E}''z = z^\top\bigg(\frac{d\bm{\rho}}{d\widetilde\bm{\rho}}\bigg)^\top \frac{d^2{\mathcal E}}{d\bm{\rho}^2} \bigg(\frac{d\bm{\rho}}{d\widetilde\bm{\rho}}\bigg)z \ge \kappa\bigg|\frac{d\bm{\rho}}{d\widetilde\bm{\rho}}z\bigg|^2 = \kappa\sum_{i=1}^{n-1}z_i^2 + \kappa\bigg(\sum_{i=1}^{n-1}z_i\bigg)^2 \ge \kappa|z|^2. $$ This shows that $\widetilde{\mathcal E}''$ is symmetric and positive definite. Since also $\widetilde D^\eps$ is symmetric and positive definite, Proposition 6.1 of \cite{Ser10} implies that the eigenvalues of $\widetilde D^\eps\widetilde{\mathcal E}''$ are real and positive. \end{proof} \section{Justification of the Chapman-Enskog expansion}\label{sec.rigorCE} In this section, we justify the validity of the Chapman-Enskog expansion performed in section \ref{sec.CE}.
We recall that the energy is the sum of the partial energies depending on the partial densities and their gradients, \begin{equation}\label{3.assumptE} {\mathcal E}(\bm{\rho}) = \int_{\R^3}\sum_{i=1}^n F_i(\rho_i,\na\rho_i)dx. \end{equation} It includes Euler-Korteweg models with the partial energy density \eqref{1.korte}. Under this hypothesis, it is shown in \cite[formula (2.25)]{GLT17} that the force term in \eqref{1.eq2} can be written as the divergence of a stress tensor $S_i$: \begin{equation}\label{3.ES} -\rho_i\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}) = \diver S_i(\bm{\rho}), \quad i=1,\ldots,n, \end{equation} where \begin{align} & S_i(\bm{\rho}) = -s_i(\rho_i,\na\rho_i) \mathbb{I} + \diver r_i(\rho_i,\na\rho_i)\mathbb{I} - H_i(\rho_i,\na\rho_i), \quad\mbox{and} \label{3.S} \\ & s_i(\rho_i,q_i) = \rho_i\frac{\pa F_i}{\pa\rho_i}(\rho_i,q_i) + q_i\cdot\frac{\pa F_i}{\pa q_i}(\rho_i,q_i) - F_i(\rho_i,q_i), \nonumber \\ & r_i(\rho_i,q_i) = \rho_i\frac{\pa F_i}{\pa q_i}(\rho_i,q_i), \nonumber \\ & H_i(\rho_i,q_i) = q_i\otimes \frac{\pa F_i}{\pa q_i}(\rho_i,q_i), \nonumber \end{align} and $q_i=\nabla \rho_i$; here, $\mathbb{I}$ is the unit matrix in $\R^{3\times 3}$.
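For concreteness, these formulas can be evaluated for a quadratic-gradient partial energy density; the specific form $F_i(\rho_i,q_i)=h_i(\rho_i)+\frac{\kappa_i}{2}|q_i|^2$ with a constant capillarity $\kappa_i>0$ is an illustrative assumption here (the general case \eqref{1.korte} may allow a density-dependent $\kappa_i$). One then finds

```latex
% Worked example: stress components for
% F_i(rho_i, q_i) = h_i(rho_i) + (kappa_i/2)|q_i|^2, kappa_i > 0 constant.
\begin{align*}
  s_i(\rho_i,\na\rho_i) &= \rho_i h_i'(\rho_i) - h_i(\rho_i)
    + \frac{\kappa_i}{2}|\na\rho_i|^2
    = p_i(\rho_i) + \frac{\kappa_i}{2}|\na\rho_i|^2, \\
  r_i(\rho_i,\na\rho_i) &= \kappa_i\rho_i\na\rho_i, \qquad
  H_i(\rho_i,\na\rho_i) = \kappa_i\,\na\rho_i\otimes\na\rho_i,
\end{align*}
```

where $p_i(\rho_i)=\rho_i h_i'(\rho_i)-h_i(\rho_i)$ is the usual pressure; inserting these expressions into \eqref{3.S} recovers the classical Korteweg stress tensor.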
We consider weak solutions to the original system \eqref{1.eq1}-\eqref{1.eq2}, \begin{align} \pa_t\rho_i^\eps + \diver(\rho_i^\eps v_i^\eps) &= 0, \quad i=1,\ldots,n, \label{3.rho} \\ \pa_t(\rho_i^\eps v_i^\eps) + \diver(\rho_i^\eps v_i^\eps\otimes v_i^\eps) &= -\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}^\eps) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps (v_i^\eps-v_j^\eps), \label{3.rhov} \end{align} and strong solutions to the approximate system \eqref{2.eff1}-\eqref{2.eff2}, \begin{align} \pa_t\widehat\rho_i^\eps + \diver(\widehat\rho_i^\eps\widehat v^\eps) &= -\diver(\widehat\rho_i^\eps\widehat u_i^\eps), \quad i=1,\ldots,n, \label{3.rhoeps}\\ \pa_t(\widehat\rho^\eps \widehat v^\eps) + \diver(\widehat\rho^\eps \widehat v^\eps\otimes\widehat v^\eps) &= -\sum_{j=1}^n\widehat\rho_j^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_j} (\widehat\bm{\rho}^\eps), \quad \widehat\rho^\eps = \sum_{j=1}^n\widehat\rho_j^\eps, \label{3.rhoveps} \end{align} where $(\widehat u_1^\eps,\ldots,\widehat u_n^\eps)$ solves \eqref{2.effu}, \begin{equation}\label{3.u} -\sum_{j=1}^n b_{ij}\widehat\rho_i^\eps\widehat\rho_j^\eps (\widehat u_i^\eps-\widehat u_j^\eps) = \eps \widehat d_i^\eps, \quad \sum_{j=1}^n\widehat\rho_j^\eps\widehat u_j^\eps = 0, \end{equation} and $\widehat d_i^\eps$ is given by \eqref{2.deps}, $$ \widehat d_i^\eps = -\frac{\widehat\rho_i^\eps}{\widehat\rho^\eps}\sum_{j=1}^n \widehat\rho_j^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_j}(\widehat\bm{\rho}^\eps) + \widehat\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}^\eps). $$ Our aim is to show that the difference of the solutions of \eqref{3.rho}-\eqref{3.rhov} and \eqref{3.rhoeps}-\eqref{3.rhoveps} converges to zero as $\eps\to 0$ in a certain sense; see Theorem \ref{thm.convCE} below. Lemma \ref{lem.diff} shows that system \eqref{3.rhoeps}-\eqref{3.rhoveps} can be written without the variable $\widehat u_i^\eps$ as a diffusion system. 
However, the current formulation is more convenient to verify the convergence result. In the sequel, we replace $-\rho_i\na(\delta{\mathcal E}/\delta\rho_i)$ by $\diver S_i$ using \eqref{3.ES}. \subsection{Preparations}\label{sec.prep} We reformulate the approximate system \eqref{3.rhoeps}-\eqref{3.rhoveps} in a form that resembles the original system \eqref{3.rho}-\eqref{3.rhov} with an error term: \begin{lemma}\label{lem.reform} Setting $\widehat v_i^\eps=\widehat v^\eps+\widehat u_i^\eps$, system \eqref{3.rhoeps}-\eqref{3.rhoveps} is equivalent to \begin{align} \pa_t\widehat\rho_i^\eps + \diver(\widehat\rho_i^\eps\widehat v_i^\eps) &= 0, \label{3.rhoeps2} \\ \pa_t(\widehat\rho_i^\eps\widehat v_i^\eps) + \diver(\widehat\rho_i^\eps \widehat v_i^\eps\otimes\widehat v_i^\eps) &= -\widehat\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i} (\widehat\bm{\rho}^\eps) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_i^\eps\widehat\rho_j^\eps (\widehat v_i^\eps-\widehat v_j^\eps) + \widehat R_i^\eps, \label{3.rhoveps2} \end{align} where the remainder $\widehat R_i^\eps$ is given by \begin{equation}\label{3.Reps} \widehat R_i^\eps := -\widehat v^\eps \diver(\widehat\rho_i^\eps\widehat u_i^\eps) + \pa_t(\widehat\rho_i^\eps\widehat u_i^\eps) + \diver(\widehat\rho_i^\eps\widehat u_i^\eps\otimes\widehat v^\eps + \widehat\rho_i^\eps\widehat v^\eps\otimes\widehat u_i^\eps) + \diver(\widehat\rho_i^\eps\widehat u_i^\eps\otimes\widehat u_i^\eps). \end{equation} \end{lemma} \begin{proof} Equation \eqref{3.rhoeps2} follows directly from \eqref{3.rhoeps} and the definition $\widehat v_i^\eps=\widehat v^\eps+\widehat u_i^\eps$. 
We write the evolution of the momentum in a form similar to \eqref{3.rhov}, \begin{align*} \pa_t(\widehat\rho_i^\eps\widehat v_i^\eps) + \diver(\widehat\rho_i^\eps \widehat v_i^\eps\otimes\widehat v_i^\eps) = -\widehat\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}^\eps) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_i^\eps \widehat\rho_j^\eps (\widehat v_i^\eps-\widehat v_j^\eps) + \widehat R_i^\eps, \end{align*} where $\widehat R_i^\eps$ contains the remaining terms: \begin{align} \widehat R_i^\eps &= \pa_t(\widehat\rho_i^\eps\widehat v^\eps) + \diver(\widehat\rho_i^\eps\widehat v^\eps\otimes\widehat v^\eps) + \widehat\rho_i^\eps\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}^\eps) + \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_i^\eps \widehat\rho_j^\eps(\widehat v_i^\eps-\widehat v_j^\eps) \nonumber \\ &\phantom{xx}{}+ \pa_t(\widehat\rho_i^\eps\widehat u_i^\eps) + \diver(\widehat\rho_i^\eps\widehat u_i^\eps\otimes\widehat v^\eps + \widehat\rho_i^\eps\widehat v^\eps\otimes\widehat u_i^\eps) + \diver(\widehat\rho_i^\eps\widehat u_i^\eps\otimes\widehat u_i^\eps). \label{3.aux1} \end{align} It remains to show that this expression equals \eqref{3.Reps}. The last three terms are already in the desired form. By \eqref{3.u} and the identity $\widehat v_i^\eps-\widehat v_j^\eps=\widehat u_i^\eps-\widehat u_j^\eps$, we have $$ \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_i^\eps\widehat\rho_j^\eps (\widehat v_i^\eps-\widehat v_j^\eps) = \frac{\widehat\rho_i^\eps}{\widehat\rho^\eps}\sum_{j=1}^n\widehat\rho_j^\eps \nabla \frac{\delta{\mathcal E}}{\delta\rho_j}(\widehat\bm{\rho}^\eps) - \widehat\rho_i^\eps \nabla \frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}^\eps). $$ Therefore, we can replace the third and fourth terms in $\widehat R_i^\eps$ by \begin{equation}\label{3.aux2} \frac{\widehat\rho_i^\eps}{\widehat\rho^\eps}\sum_{j=1}^n\widehat\rho_j^\eps \na\frac{\delta{\mathcal E}}{\delta\rho_j}(\widehat\bm{\rho}^\eps). \end{equation} We reformulate the first and second terms in $\widehat R_i^\eps$.
Adding \eqref{3.rhoeps} over $i=1,\ldots,n$ and using $\sum_{j=1}^n\widehat\rho_j^\eps\widehat u_j^\eps=0$, we deduce that $\pa_t\widehat\rho^\eps + \diver(\widehat\rho^\eps\widehat v^\eps)=0$. This equation and \eqref{3.rhoeps}, \eqref{3.rhoveps} show that \begin{align*} \pa_t & (\widehat\rho_i^\eps\widehat v^\eps) + \diver(\widehat\rho_i^\eps \widehat v^\eps\otimes\widehat v^\eps) = \big(\pa_t\widehat\rho_i^\eps + \diver(\widehat\rho_i^\eps \widehat v^\eps) \big)\widehat v^\eps + \widehat\rho_i^\eps\big(\pa_t\widehat v^\eps + \widehat v^\eps\cdot\na\widehat v^\eps\big) \\ &= -\diver(\widehat\rho_i^\eps \widehat u_i^\eps)\widehat v^\eps + \frac{\widehat\rho_i^\eps}{\widehat\rho^\eps} \big(\pa_t(\widehat\rho^\eps\widehat v^\eps) - (\pa_t\widehat\rho^\eps)\widehat v^\eps + \widehat\rho^\eps\widehat v^\eps\cdot\na\widehat v^\eps\big) \\ &= -\diver(\widehat\rho_i^\eps \widehat u_i^\eps)\widehat v^\eps + \frac{\widehat\rho_i^\eps}{\widehat\rho^\eps} \big(\pa_t(\widehat\rho^\eps\widehat v^\eps) + \diver(\widehat\rho^\eps \widehat v^\eps\otimes\widehat v^\eps)\big) \\ &= -\diver(\widehat\rho_i^\eps \widehat u_i^\eps)\widehat v^\eps - \frac{\widehat\rho_i^\eps}{\widehat\rho^\eps}\sum_{j=1}^n\widehat\rho_j^\eps \na\frac{\delta{\mathcal E}}{\delta\rho_j}(\widehat\bm{\rho}^\eps), \end{align*} where we used \eqref{3.rhoveps} in the last step. The last term cancels with \eqref{3.aux2}, showing that \eqref{3.aux1} reduces to \eqref{3.Reps}. \end{proof} We need later the explicit expressions of the variational derivatives of ${\mathcal E}$ and $S_i$. \begin{lemma}[Variational derivatives of ${\mathcal E}$]\label{lem.second} Let ${\mathcal E}$ be given by \eqref{3.assumptE}. 
Then, for test functions $\psi_i$ and $\phi_i$, \begin{align*} \sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}),\psi_i\bigg\rangle &= \int_{\R^3}\sum_{i=1}^n\bigg(\frac{\pa F_i}{\pa\rho_i}(\rho_i,\na\rho_i)\psi_i + \frac{\pa F_i}{\pa q_i}(\rho_i,\na\rho_i)\cdot\na\psi_i\bigg)dx, \\ \sum_{i=1}^n\bigg\langle\!\!\!\bigg\langle\frac{\delta^2{\mathcal E}}{\delta\rho_i^2} (\bm{\rho}),(\psi_i,\phi_i)\bigg\rangle\!\!\!\bigg\rangle &= \sum_{i=1}^n\int_{\R^3}(\phi_i,\na\phi_i) \begin{pmatrix} \pa^2 F_i/\pa\rho_i^2 & \pa^2 F_i/\pa\rho_i\pa q_i \\ \pa^2 F_i/\pa\rho_i\pa q_i & \pa^2 F_i/\pa q_i^2 \end{pmatrix} \begin{pmatrix} \psi_i \\ \na\psi_i \end{pmatrix} dx, \end{align*} \end{lemma} \begin{proof} We compute the first variational derivative with respect to the test function $\bm\psi=(\psi_1,\ldots,\psi_n)$: \begin{align*} \sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\bm{\rho}),\psi_i\bigg\rangle &= \frac{d}{d\tau}{\mathcal E}(\bm{\rho}+\tau\bm\psi)\bigg|_{\tau=0} = \frac{d}{d\tau}\int_{\R^3}\sum_{i=1}^n F_i(\rho_i+\tau\psi_i, \na\rho_i+\tau\na\psi_i)dx\bigg|_{\tau=0} \\ &= \int_{\R^3}\sum_{i=1}^n\bigg(\frac{\pa F_i}{\pa\rho_i}(\rho_i,\na\rho_i)\psi_i + \frac{\pa F_i}{\pa q_i}(\rho_i,\na\rho_i)\cdot\na\psi_i\bigg)dx. 
\end{align*} Next, we calculate the second variational derivative, where $\bm\phi=(\phi_1,\ldots,\phi_n)$: \begin{align*} \sum_{i=1}^n&\bigg\langle\!\!\!\bigg\langle \frac{\delta^2{\mathcal E}}{\delta\rho_i^2} (\bm{\rho}),(\psi_i,\phi_i)\bigg\rangle\!\!\!\bigg\rangle = \frac{d}{d\tau}\bigg\langle\sum_{i=1}^n\frac{\delta{\mathcal E}}{\delta\rho_i} (\bm{\rho}+\tau\bm\phi), \psi_i\bigg\rangle \bigg|_{\tau=0} \\ &= \frac{d}{d\tau}\int_{\R^3}\sum_{i=1}^n\bigg(\frac{\pa F_i}{\pa\rho_i} \big(\rho_i+\tau\phi_i,\na(\rho_i+\tau\phi_i)\big)\psi_i \\ &\phantom{xx}{} + \frac{\pa F_i}{\pa q_i}\big(\rho_i+\tau\phi_i,\na(\rho_i+\tau\phi_i)\big) \cdot\na\psi_i\bigg)dx\bigg|_{\tau=0} \\ &= \sum_{i=1}^n\int_{\R^3}\bigg(\frac{\pa^2 F_i}{\pa\rho_i^2}(\rho_i,\na\rho_i)\phi_i\psi_i + \psi_i\frac{\pa^2 F_i}{\pa\rho_i\pa q_i}(\rho_i,\na\rho_i)\cdot\na\phi_i + \phi_i\frac{\pa^2 F_i}{\pa q_i\pa\rho_i}(\rho_i,\na\rho_i)\cdot\na\psi_i \\ &\phantom{xx}{}+ \frac{\pa^2 F_i}{\pa q_i^2}:(\na\phi_i\otimes\na\psi_i)\bigg)dx \\ &= \sum_{i=1}^n\int_{\R^3}(\phi_i,\na\phi_i) \begin{pmatrix} \pa^2 F_i/\pa\rho_i^2 & \pa^2 F_i/\pa\rho_i\pa q_i \\ \pa^2 F_i/\pa\rho_i\pa q_i & \pa^2 F_i/\pa q_i^2 \end{pmatrix} \begin{pmatrix} \psi_i \\ \na\psi_i \end{pmatrix} dx. \end{align*} This finishes the proof. \end{proof} Next, we define the relative potential energy \begin{align*} {\mathcal E}(\bm{\rho}|\widehat\bm{\rho}) &= {\mathcal E}(\bm{\rho}) - {\mathcal E}(\widehat\bm{\rho}) - \sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}), \rho_i-\widehat\rho_i\bigg\rangle. \end{align*} Taking $\psi_i=\phi_i = \rho_i - \widehat\rho_i$ in the above lemma leads to the formula \begin{align*} {\mathcal E}(\bm{\rho}|\widehat\bm{\rho}) &= \int_{\R^3}\sum_{i=1}^n F_i(\rho_i,\na\rho_i|\widehat\rho_i,\na\widehat\rho_i)dx.
\end{align*} We also define the total energy \begin{align}\label{3.deftot} {\mathcal E}_{\rm tot}(\bm{\rho},\bm{m}) = {\mathcal E}(\bm{\rho})+ \int_{\R^3} \sum_{i=1}^n\frac{1}{2} \rho_i |v_i|^2 dx = \int_{\R^3}\sum_{i=1}^n\bigg(F_i(\rho_i,\na\rho_i) + \frac12\rho_i|v_i|^2\bigg)dx \end{align} and the relative total energy \begin{align} {\mathcal E}_{\rm tot}&(\bm{\rho},\bm{m} | \widehat\bm{\rho}, \bm{\widehat m})= {\mathcal E}_{\rm tot}(\bm{\rho},\bm{m} ) - {\mathcal E}_{\rm tot}(\widehat\bm{\rho},\bm{\widehat m} ) - \sum_{i=1}^n\bigg\langle \frac{\delta{\mathcal E}_{\rm tot}}{\delta\rho_i}(\widehat\bm{\rho},\bm{\widehat m}), \rho_i-\widehat\rho_i\bigg\rangle \nonumber \\ & -\sum_{i=1}^n\bigg\langle \frac{\delta{\mathcal E}_{\rm tot}}{\delta m_i}(\widehat\bm{\rho},\bm{\widehat m}), \rho_i v_i-\widehat\rho_i\widehat v_i\bigg\rangle = \int_{\R^3}\sum_{i=1}^n \bigg(F_i(\rho_i,\na\rho_i |\widehat\rho_i,\na\widehat\rho_i) + \frac{1}{2} \rho_i |v_i-\widehat v_i|^2 \bigg)dx. \label{eq:Etrel} \end{align} \subsection{Relative energy inequality}\label{sec.relent} We compare a weak solution to the original system \eqref{3.rho}-\eqref{3.rhov} with a strong solution to the approximate system \eqref{3.rhoeps2}-\eqref{3.rhoveps2} via a relative energy inequality. First, we make precise the notion of weak solution to the original system. 
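Before doing so, it is worth noting what the relative total energy \eqref{eq:Etrel} measures in the simplest situation. For a quadratic partial energy density (an illustrative assumption, not the general case of \eqref{3.assumptE}), say $F_i(\rho_i,q_i)=\frac{a_i}{2}\rho_i^2+\frac{\kappa_i}{2}|q_i|^2$ with constants $a_i,\kappa_i>0$, the first-order Taylor remainder is an exact square:

```latex
% Quadratic energy: the relative energy density is an exact square.
F_i(\rho_i,\na\rho_i|\widehat\rho_i,\na\widehat\rho_i)
  = \frac{a_i}{2}(\rho_i-\widehat\rho_i)^2
  + \frac{\kappa_i}{2}|\na(\rho_i-\widehat\rho_i)|^2,
```

so that ${\mathcal E}_{\rm tot}(\bm{\rho},\bm{m}|\widehat\bm{\rho},\widehat{\bm{m}})$ controls $\|\rho_i-\widehat\rho_i\|_{H^1(\R^3)}^2$ together with the weighted velocity distance $\int_{\R^3}\rho_i|v_i-\widehat v_i|^2dx$; this is the sense in which the convergence in Theorem \ref{thm.convCE} below is measured.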
\begin{definition}[Weak and dissipative weak solutions]\label{def.weak} A function $(\bm{\rho}^\eps,\bm{v}^\eps)$ is called a {\em weak solution} to \eqref{3.rho}-\eqref{3.rhov} if for all $i=1,\ldots,n$, \begin{align*} & 0\le \rho_i^\eps\in C^0([0,\infty);L^1(\R^3)), \quad \rho_i^\eps v_i^\eps\in C^0([0,\infty);L^1(\R^3;\R^3)), \\ & \rho_i^\eps v_i^\eps\otimes v_i^\eps,\ H_i^\eps\in L_{\rm loc}^1([0,\infty)\times\R^3;\R^{3\times 3}), \\ & s_i^\eps\in L^1_{\rm loc}([0,\infty)\times\R^3), \quad r_i^\eps\in L_{\rm loc}^1([0,\infty)\times\R^3;\R^3), \end{align*} and $(\bm{\rho}^\eps,\bm{v}^\eps)$ solves for $\psi_i\in C^\infty_0([0,\infty);C^\infty(\R^3))$ and $\phi_i\in C^{2}_0([0,\infty);C^\infty(\R^3;\R^3))$, \begin{align*} & -\int_0^\infty\int_{\R^3}\sum_{i=1}^n(\rho_i^\eps\pa_t\psi_i + \rho_i^\eps v_i^\eps \cdot\na\psi_i)dxdt = \int_{\R^3}\sum_{i=1}^n\rho_i^\eps(x,0)\psi_i(x,0)dx, \\ & -\int_0^\infty\int_{\R^3}\sum_{i=1}^n\big(\rho_i^\eps v_i^\eps\cdot\pa_t\phi_i + \rho_i^\eps v_i^\eps \otimes v_i^\eps : \na \phi_i + s_i^\eps\diver\phi_i + r_i^\eps\cdot\na\diver\phi_i + H_i^\eps:\na\phi_i\big)dxdt \\ &\phantom{xxxx}{}= \int_{\R^3}\sum_{i=1}^n(\rho_i^\eps v_i^\eps)(x,0)\cdot\phi_i(x,0)dx - \frac{1}{\eps}\int_0^\infty\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps(v_i^\eps-v_j^\eps)\cdot\phi_i dxdt. \end{align*} Moreover, if additionally $\sum_{i=1}^n(F_i(\rho_i^\eps,\na\rho_i^\eps)+\frac12\rho_i^\eps|v_i^\eps|^2) \in C^0([0,\infty);L^1(\R^3))$ and the integrated energy inequality \begin{align} -\int_0^\infty{\mathcal E}_{\rm tot}(\bm{\rho}^\eps(t),\bm{m}^\eps(t))\theta'(t)dt &+ \frac{1}{2\eps}\int_0^\infty\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps|v_i^\eps-v_j^\eps|^2\theta(t)dxdt \nonumber \\ &\le {\mathcal E}_{\rm tot}(\bm{\rho}^\eps(0),\bm{m}^\eps(0))\theta(0) \label{3.dissweak} \end{align} holds for any $\theta\in W^{1,\infty}([0,\infty))$ compactly supported in $[0,\infty)$, then we call $(\bm{\rho}^\eps,\bm{v}^\eps)$ a {\em dissipative weak solution}.
\end{definition} We impose the following assumption: \begin{enumerate}[label=(\bf A\arabic*)] \item \label{A1} The dissipative weak solution $(\bm{\rho}^\eps,\bm{v}^\eps)$ to \eqref{3.rho}-\eqref{3.rhov} has finite total mass and finite total energy, i.e., for any $T>0$, there exists a constant $K>0$ independent of $\eps$ such that $$ \sup_{0<t<T}\int_{\R^3}\sum_{i=1}^n\rho_i^\eps dx \le K, \quad \sup_{0<t<T}\int_{\R^3}\sum_{i=1}^n\bigg(F_i(\rho_i^\eps, \na\rho_i^\eps)+\frac12\rho_i^\eps|v_i^\eps|^2\bigg)dx \le K. $$ \end{enumerate} We proceed by establishing the relative energy inequality. \begin{proposition}[Relative energy inequality]\label{prop.rei} Let $(\bm{\rho}^\eps,\bm{v}^\eps)$ be a dissipative weak solution to \eqref{3.rho}-\eqref{3.rhov} satisfying \ref{A1}, let $(\widehat\bm{\rho}^\eps,\widehat{v}^\eps)$ be a strong solution to \eqref{3.rhoeps}, \eqref{3.rhoveps}, \eqref{3.u} such that $\widehat\rho_i^\eps>0$ in $\R^3$, $t>0$, and let assumption \ref{N} on page \pageref{N} hold.
Then \begin{align} {\mathcal E}_{\rm tot}&(\bm{\rho}^\eps,\bm{m}^\eps|\widehat\bm{\rho}^\eps,\widehat{\bm{m}}^\eps)(t) + \frac{1}{2\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij} \rho_i^\eps\rho_j^\eps\big|(v_i^\eps-v_j^\eps)-(\widehat v_i^\eps-\widehat v_j^\eps) \big|^2 dxds \nonumber \\ &\le {\mathcal E}_{\rm tot}(\bm{\rho}^\eps,\bm{m}^\eps|\widehat\bm{\rho}^\eps,\widehat{\bm{m}}^\eps)(0) - \int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps (v_i^\eps-\widehat v_i^\eps)\otimes(v_i^\eps-\widehat v_i^\eps):\na\widehat v_i^\eps dxds \nonumber \\ &\phantom{xx}{}- \int_0^t\int_{\R^3}\sum_{i=1}^n\Big(s_i(\rho_i^\eps,\na\rho_i^\eps |\widehat\rho_i^\eps,\na\widehat\rho_i^\eps)\diver\widehat v_i^\eps + r_i(\rho_i^\eps,\na\rho_i^\eps|\widehat\rho_i^\eps,\na\widehat\rho_i^\eps) \cdot\na\diver\widehat v_i^\eps\nonumber\\ &\phantom{xx}{}+H_i(\rho_i^\eps,\na\rho_i^\eps |\widehat\rho_i^\eps,\na\widehat\rho_i^\eps):\na\widehat v_i^\eps\Big)dxds -\int_0^t\int_{\R^3}\sum_{i=1}^n\frac{\rho_i^\eps}{\widehat\rho_i^\eps} \widehat R_i^\eps\cdot(v_i^\eps-\widehat v_i^\eps)dxds \nonumber \\ &\phantom{xx}{}- \frac{1}{\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps (\rho_j^\eps-\widehat\rho_j^\eps)(v_i^\eps-\widehat v_i^\eps) \cdot(\widehat v_i^\eps-\widehat v_j^\eps)dxds, \label{3.rei} \end{align} where $s_i$, $r_i$, and $H_i$ are defined in \eqref{3.S} and $\widehat R_i^\eps$ is defined in \eqref{3.Reps}, and the relative stresses are given by \begin{align*} g_i(\rho_i^\eps,q_i^\eps|\widehat\rho_i^\eps,\widehat q_i^\eps) = g_i(\rho_i^\eps,q_i^\eps) - g_i(\widehat\rho_i^\eps,\widehat q_i^\eps) - \frac{\partial g_i}{\partial \rho_i} (\widehat\rho_i^\eps,\widehat q_i^\eps) (\rho_i^\eps - \widehat\rho_i^\eps) - \frac{\partial g_i}{\partial q_i} (\widehat\rho_i^\eps,\widehat q_i^\eps) \cdot(q_i^\eps - \widehat q_i^\eps), \end{align*} where $q_i^\eps = \nabla\rho_i^\eps$, $\widehat q_i^\eps=\nabla \widehat \rho_i^\eps$, and $g_i$ represents $s_i$, $r_i$, and $H_i$.
\end{proposition} \begin{proof} The proof is similar to the proof of Theorem 1 in \cite{GLT17}, but we need to take care of the friction terms. To simplify the notation, we drop the superscript $\eps$. Recall that the relative total energy ${\mathcal E}_{\rm tot}(\bm{\rho},\bm{m} | \widehat\bm{\rho}, \bm{\widehat m})$ defined by \eqref{eq:Etrel} has four parts, ${\mathcal E}_{\rm tot}(\bm{\rho},\bm{m})$, $-{\mathcal E}_{\rm tot}(\widehat\bm{\rho},\bm{\widehat m})$, $-\sum_{i=1}^n\langle(\delta{\mathcal E}_{\rm tot}/\delta\rho_i) (\widehat\bm{\rho},\bm{\widehat m}),\rho_i-\widehat\rho_i\rangle$, and $-\sum_{i=1}^n\langle(\delta{\mathcal E}_{\rm tot}/\delta m_i) (\widehat\bm{\rho},\bm{\widehat m}), \rho_i v_i-\widehat\rho_i\widehat v_i\rangle$. We first give the energy inequalities for the first two terms and then use the weak formulations to calculate the last two terms. {\em Step 1: The energy inequalities.} Introducing the test function \begin{equation}\label{3.theta} \theta(s) = \left\{\begin{array}{ll} 1 &\quad\mbox{for }0\le s<t, \\ (t-s)/\delta+1 &\quad\mbox{for }t\le s< t+\delta, \\ 0 &\quad\mbox{for }s>t+\delta, \end{array}\right. \end{equation} in the integrated energy inequality \eqref{3.dissweak} and passing to the limit $\delta\to 0$, we obtain \begin{align} {\mathcal E}_{\rm tot}(\bm{\rho}(t),\bm{m}(t)) + \frac{1}{2\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j |v_i-v_j|^2 dxds \le {\mathcal E}_{\rm tot}(\bm{\rho}(0),\bm{m}(0)). \label{eq:Energy} \end{align} To show the energy identity for the strong solution $(\widehat\bm{\rho}, \widehat{\bm{v}})$, we write \eqref{3.rhoveps2} in nonconservative form: $$ \pa_t\widehat v_i + \widehat v_i\cdot\na\widehat v_i = -\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_j (\widehat v_i-\widehat v_j) + \frac{\widehat R_i}{\widehat\rho_i}.
$$ We multiply this equation by $\widehat\rho_i\widehat v_i$, multiply \eqref{3.rhoeps2} by $\frac12|\widehat v_i|^2$, and add the resulting equations: \begin{equation}\label{3.aux3} \frac12\pa_t(\widehat\rho_i|\widehat v_i|^2) + \frac12\diver(\widehat\rho_i \widehat v_i|\widehat v_i|^2) = -\widehat\rho_i\widehat v_i\cdot\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}) - \frac{1}{\eps}\widehat v_i\cdot\sum_{j=1}^n b_{ij}\widehat\rho_i\widehat\rho_j (\widehat v_i-\widehat v_j) + \widehat v_i\cdot\widehat R_i. \end{equation} Furthermore, we deduce from \eqref{3.rhoeps2} that $$ \frac{d}{dt}{\mathcal E}(\widehat\bm{\rho}) = \sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}),\pa_t\widehat\rho_i \bigg\rangle = -\sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}), \diver(\widehat\rho_i\widehat v_i)\bigg\rangle = \int_{\R^3}\sum_{i=1}^n\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}) \cdot(\widehat\rho_i\widehat v_i)dx. $$ Integrating \eqref{3.aux3}, summing over $i=1,\ldots,n$, and inserting the previous identity yields \begin{equation*} \frac{d}{dt}\bigg({\mathcal E}(\widehat\bm{\rho}) + \frac12\int_{\R^3}\sum_{i=1}^n\widehat\rho_i|\widehat v_i|^2dx\bigg) = -\frac{1}{\eps}\int_{\R^3}\sum_{i,j=1}^n b_{ij}\widehat\rho_i\widehat\rho_j (\widehat v_i-\widehat v_j)\cdot\widehat v_i dx + \int_{\R^3} \sum_{i=1}^n\widehat R_i\cdot\widehat v_i dx. \end{equation*} The symmetry of $(b_{ij})$ and integration of the above equality over $(0,t)$ lead to the following energy equality: \begin{align}\label{eq:Energy2} & {\mathcal E}_{\rm tot}(\widehat\bm{\rho}(t),\widehat{\bm{m}}(t)) + \frac{1}{2\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\widehat\rho_i \widehat\rho_j|\widehat v_i-\widehat v_j|^2 dxds \nonumber\\ &\phantom{xx}{}= {\mathcal E}_{\rm tot}(\widehat\bm{\rho}(0),\widehat{\bm{m}}(0)) + \int_0^t\int_{\R^3}\sum_{i=1}^n\widehat R_i\cdot\widehat v_i dxds.
\end{align} {\em Step 2: Equation for the difference.} We proceed to calculate $$ -\sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}_{\rm tot}}{\delta\rho_i}(\widehat\bm{\rho}, \bm{\widehat m}),\rho_i-\widehat\rho_i\bigg\rangle \quad \mbox{and} \quad -\sum_{i=1}^n\bigg\langle\frac{\delta{\mathcal E}_{\rm tot}}{\delta m_i}(\widehat\bm{\rho},\bm{\widehat m}), \rho_i v_i-\widehat\rho_i\widehat v_i\bigg\rangle. $$ Following the definition of the weak solutions to \eqref{3.rho}-\eqref{3.rhov} and \eqref{3.rhoeps2}-\eqref{3.rhoveps2}, the differences of the solutions $(\rho_i-\widehat\rho_i, v_i - \widehat v_i)$ satisfy \begin{align*} -\int_0^\infty&\int_{\R^3}\sum_{i=1}^n\big((\rho_i-\widehat\rho_i)\pa_s\psi_i + (\rho_i v_i-\widehat\rho_i\widehat v_i)\cdot\na\psi_i\big)dxds \\ &= \int_{\R^3}\sum_{i=1}^n\big(\rho_i(x,0)-\widehat\rho_i(x,0)\big)\psi_i(x,0)dx, \\ -\int_0^\infty&\int_{\R^3}\sum_{i=1}^n\big((\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\pa_s\phi_i + (\rho_i v_i\otimes v_i - \widehat\rho_i\widehat v_i\otimes\widehat v_i):\na\phi_i \\ &\phantom{xx}{}+ (s_i-\widehat s_i)\diver\phi_i + (H_i-\widehat H_i):\na\phi_i + (r_i-\widehat r_i)\cdot\na\diver\phi_i\big)dxds \\ &= \int_{\R^3}\sum_{i=1}^n\big((\rho_i v_i)(x,0) - (\widehat\rho_i\widehat v_i)(x,0)\big)\cdot\phi_i(x,0)dx \\ &\phantom{xx}{}-\frac{1}{\eps}\int_0^\infty\int_{\R^3} \sum_{i,j=1}^n b_{ij}\big(\rho_i\rho_j (v_i-v_j) - \widehat\rho_i\widehat\rho_j(\widehat v_i-\widehat v_j)\big) \cdot\phi_i dxds \\ &\phantom{xx}{}- \int_0^\infty\int_{\R^3}\sum_{i=1}^n\widehat R_i\cdot\phi_i dxds, \end{align*} where $s_i=s_i(\rho_i,\na\rho_i)$, $\widehat s_i=s_i(\widehat\rho_i,\na\widehat\rho_i)$, and similarly for the other quantities.
Taking the test functions $$ \psi_i(s) = \theta(s)\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i} - \frac12|\widehat v_i|^2\bigg)(s), \quad \phi_i(s) = \theta(s)\widehat v_i(s), $$ where $\theta$ is defined in \eqref{3.theta} and $\widehat F_i=F_i(\widehat\rho_i,\na\widehat\rho_i)$, the sum of the above equations becomes \begin{align} \sum_{i=1}^n & \left(\left\langle\frac{\delta{\mathcal E}_{\rm tot}}{\delta\rho_i} (\widehat\bm{\rho},\bm{\widehat m}),\rho_i-\widehat\rho_i\right\rangle + \left\langle\frac{\delta{\mathcal E}_{\rm tot}}{\delta m_i}(\widehat\bm{\rho},\bm{\widehat m}), \rho_i v_i-\widehat\rho_i\widehat v_i\right\rangle\right) \bigg|_0^t \nonumber \\ &= \int_{\R^3}\sum_{i=1}^n\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i} - \frac12|\widehat v_i|^2\bigg) (\rho_i-\widehat\rho_i)\bigg|_{0}^t dx + \int_{\R^3}\sum_{i=1}^n(\rho_i v_i-\widehat\rho_i\widehat v_i)\cdot\widehat v_i \bigg|_{0}^t dx \nonumber \\ &= \int_0^t\int_{\R^3}\sum_{i=1}^n\bigg\{\pa_s\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i} - \frac12|\widehat v_i|^2\bigg) (\rho_i-\widehat\rho_i) \nonumber \\ &\phantom{xx}{}+ (\rho_i v_i-\widehat\rho_i\widehat v_i)\cdot\na \bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i} - \frac12|\widehat v_i|^2\bigg)\bigg\}dxds \nonumber \\ &\phantom{xx}{} + \int_0^t\int_{\R^3}\sum_{i=1}^n\big((\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\pa_s\widehat v_i + (\rho_i v_i\otimes v_i - \widehat\rho_i\widehat v_i\otimes \widehat v_i):\na\widehat v_i\big) dxds \nonumber \\ &\phantom{xx}{}+ \int_0^t\int_{\R^3}\sum_{i=1}^n\big( (s_i-\widehat s_i)\diver \widehat v_i + (H_i-\widehat H_i):\na\widehat v_i + (r_i-\widehat r_i)\cdot\na\diver\widehat v_i \big)dxds \nonumber \\ &\phantom{xx}{} - \frac{1}{\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\big(\rho_i\rho_j(v_i-v_j) - \widehat\rho_i\widehat\rho_j(\widehat v_i-\widehat v_j)\big)\cdot\widehat v_i 
dxds \nonumber \\ &\phantom{xx}{}- \int_0^t\int_{\R^3}\sum_{i=1}^n\widehat R_i\cdot\widehat v_i dxds \nonumber\\ &=: I_1 + I_2+I_3+I_4+I_5. \label{3.aux} \end{align} We reorganize the term $I_1$ as follows: \begin{align*} I_1 &= I_{11} + I_{12} + I_{13}, \quad\mbox{where} \\ I_{11} &= \int_0^t\int_{\R^3}\sum_{i=1}^n\pa_s\bigg( \frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg) (\rho_i-\widehat\rho_i)dxds, \\ I_{12} &= \int_0^t\int_{\R^3}\sum_{i=1}^n(\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg)dxds, \\ I_{13} &= \int_0^t\int_{\R^3}\sum_{i=1}^n\bigg(-\frac12\pa_s\big(|\widehat v_i|^2\big) (\rho_i-\widehat\rho_i) -\frac12 (\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\na\big(|\widehat v_i|^2\big) \bigg) dxds. \end{align*} {\em Step 3: Calculation of $I_{11}$ and $I_{12}$.} Using \eqref{3.rhoeps2}, we obtain: \begin{align} I_{11} &= \int_0^t\int_{\R^3}\sum_{i=1}^n \bigg\{\bigg(\frac{\pa^2\widehat F_i}{\pa\rho_i^2}\pa_s\widehat\rho_i + \frac{\pa^2\widehat F_i}{\pa\rho_i\pa q_i}\cdot\pa_s\na\widehat\rho_i\bigg) (\rho_i-\widehat\rho_i) \nonumber \\ &\phantom{xx}{}- \bigg(\diver \bigg(\frac{\pa^2\widehat F_i}{\pa q_i\pa\rho_i} \pa_s\widehat\rho_i \bigg) + \diver\bigg(\frac{\pa^2\widehat F_i}{\pa q_i^2}\cdot\pa_s\na\widehat\rho_i\bigg) \bigg)(\rho_i-\widehat\rho_i)\bigg\}dxds \nonumber \\ &= -\int_0^t\int_{\R^3}\sum_{i=1}^n\bigg\{\bigg(\frac{\pa^2\widehat F_i}{\pa\rho_i^2} \diver(\widehat\rho_i\widehat v_i) + \frac{\pa^2\widehat F_i}{\pa\rho_i\pa q_i}\cdot\na\diver(\widehat\rho_i\widehat v_i) \bigg)(\rho_i-\widehat\rho_i) \nonumber \\ &\phantom{xx}{}- \bigg(\diver\bigg(\frac{\pa^2\widehat F_i}{\pa q_i\pa\rho_i} \diver(\widehat\rho_i\widehat v_i)\bigg) + \diver\bigg(\frac{\pa^2\widehat F_i}{\pa q_i^2}\cdot\na \diver(\widehat\rho_i\widehat v_i)\bigg)\bigg)(\rho_i-\widehat\rho_i)\bigg\}dxds \nonumber \\ &= -\int_0^t\int_{\R^3}\sum_{i=1}^n\bigg(\frac{\pa^2\widehat F_i}{\pa\rho_i^2} \diver(\widehat\rho_i\widehat v_i)(\rho_i-\widehat\rho_i) + \frac{\pa^2\widehat F_i}{\pa\rho_i\pa q_i}\cdot\na\diver(\widehat\rho_i\widehat v_i) (\rho_i-\widehat\rho_i) \nonumber \\ &\phantom{xx}{}+ \frac{\pa^2\widehat F_i}{\pa q_i\pa\rho_i}\cdot \na(\rho_i-\widehat\rho_i)\diver(\widehat\rho_i\widehat v_i) + \frac{\pa^2\widehat F_i}{\pa q_i^2}:\big(\na\diver(\widehat\rho_i\widehat v_i) \otimes\na(\rho_i-\widehat\rho_i)\big)\bigg)dxds. \label{3.first} \end{align} We claim that the second-order derivatives of $F_i$ can be related to the functional derivative of $S_i$. Indeed, we take the variational derivative of the weak formulation of \eqref{3.ES}, $$ \bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}), \diver(\widehat\rho_i\phi_i)\bigg\rangle = -\int_{\R^3}\widehat\rho_i\na\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho})\cdot \phi_i dx = -\int_{\R^3}S_i(\widehat\bm{\rho}):\na\phi_i dx $$ for some test function $\phi_i$. Let $\bm\psi=(\psi_1,\ldots,\psi_n)$ be another test function.
Then the limit $\tau\to 0$ in \begin{align*} \frac{1}{\tau}\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i} (\widehat\bm{\rho}+\tau\bm\psi) &- \frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}), \diver(\widehat\rho_i\phi_i)\bigg\rangle + \frac{1}{\tau}\bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i} (\widehat\bm{\rho}+\tau\bm\psi),\diver\big((\widehat\rho_i+\tau\psi_i)\phi_i\big) -\diver(\widehat\rho_i\phi_i)\bigg\rangle \\ &= -\frac{1}{\tau}\int_{\R^3}\big(S_i(\widehat\rho_i+\tau\psi_i) - S_i(\widehat\rho_i)\big):\na\phi_i dx \end{align*} and summation over $i=1,\ldots,n$ leads to \begin{align*} \sum_{i=1}^n & \bigg\langle\!\!\!\bigg\langle\frac{\delta^2{\mathcal E}}{\delta\rho_i^2}(\widehat\bm{\rho}), \big(\diver(\widehat\rho_i\phi_i),\psi_i\big)\bigg\rangle\!\!\!\bigg\rangle + \sum_{i=1}^n \bigg\langle\frac{\delta{\mathcal E}}{\delta\rho_i}(\widehat\bm{\rho}),\diver(\psi_i\phi_i) \bigg\rangle \\ &= -\sum_{i=1}^n\int_{\R^3}\bigg\langle\frac{\delta S_i}{\delta\rho_i}(\widehat\bm{\rho}), \psi_i\bigg\rangle:\na\phi_i dx. 
\end{align*} Inserting the expressions for the variational derivatives from Lemma \ref{lem.second} and choosing $\phi_i=\widehat v_i$ and $\psi_i=\rho_i-\widehat\rho_i$, we deduce that \begin{align} \int_{\R^3}&\sum_{i=1}^n\bigg(\frac{\pa^2\widehat F_i}{\pa\rho_i^2} \diver(\widehat\rho_i\widehat v_i)(\rho_i-\widehat\rho_i) + \frac{\pa^2\widehat F_i}{\pa\rho_i\pa q_i} \cdot\na(\diver(\widehat\rho_i\widehat v_i))(\rho_i-\widehat\rho_i) \nonumber \\ &\phantom{xx}{} + \frac{\pa^2\widehat F_i}{\pa q_i\pa\rho_i}\cdot\na(\rho_i-\widehat\rho_i) \diver(\widehat\rho_i\widehat v_i) + \frac{\pa^2\widehat F_i}{\pa q_i^2}:\big(\na(\diver(\widehat\rho_i\widehat v_i)) \otimes\na(\rho_i-\widehat\rho_i)\big)\bigg)dx \nonumber \\ &\phantom{xx}{} - \int_{\R^3}\sum_{i=1}^n\na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg)\cdot((\rho_i-\widehat\rho_i) \widehat v_i)dx \nonumber \\ &= \int_{\R^3}\sum_{i=1}^n\bigg\{\bigg(\frac{\pa\widehat s_i}{\pa\rho_i} (\rho_i-\widehat\rho_i) + \frac{\pa\widehat s_i}{\pa q_i}\cdot\na(\rho_i-\widehat\rho_i)\bigg) \diver\widehat v_i \nonumber \\ &\phantom{xx}{} + \bigg(\frac{\pa\widehat r_i}{\pa\rho_i}(\rho_i-\widehat\rho_i) + \frac{\pa\widehat r_i}{\pa q_i}\cdot\na(\rho_i-\widehat\rho_i)\bigg) \cdot\na\diver\widehat v_i \nonumber \\ &\phantom{xx}{}+ \bigg(\frac{\pa\widehat H_i}{\pa\rho_i}(\rho_i-\widehat\rho_i) + \frac{\pa\widehat H_i}{\pa q_i}\cdot\na(\rho_i-\widehat\rho_i)\bigg) :\na\widehat v_i\bigg\}dx. \label{3.T1} \end{align} The first four terms on the left-hand side correspond, up to the sign, to the right-hand side of \eqref{3.first}. 
Using \begin{align*} -\int_{\R^3}&\sum_{i=1}^n\na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg)\cdot((\rho_i-\widehat\rho_i) \widehat v_i)dx +\int_{\R^3}\sum_{i=1}^n(\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg)dx \\ &= \int_{\R^3}\sum_{i=1}^n \na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg) \cdot \rho_i(v_i - \widehat v_i) dx, \end{align*} we find that \begin{align} I_{11} + I_{12} &= \int_0^t\int_{\R^3}\sum_{i=1}^n \na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg) \cdot \rho_i(v_i - \widehat v_i) dxds \nonumber \\ &\phantom{xx}{}- \int_0^t\int_{\R^3}\sum_{i=1}^n \bigg\{\bigg(\frac{\pa\widehat s_i}{\pa\rho_i}(\rho_i-\widehat\rho_i) + \frac{\pa\widehat s_i}{\pa q_i}\cdot\na(\rho_i-\widehat\rho_i)\bigg) \diver\widehat v_i \nonumber \\ &\phantom{xx}{} + \bigg(\frac{\pa\widehat r_i}{\pa\rho_i}(\rho_i-\widehat\rho_i) + \frac{\pa\widehat r_i}{\pa q_i}\cdot\na(\rho_i-\widehat\rho_i)\bigg) \cdot\na\diver\widehat v_i \nonumber \\ &\phantom{xx}{}+ \bigg(\frac{\pa\widehat H_i}{\pa\rho_i}(\rho_i-\widehat\rho_i) + \frac{\pa\widehat H_i}{\pa q_i}\cdot\na(\rho_i-\widehat\rho_i)\bigg) :\na\widehat v_i\bigg\}dxds. 
\label{3.I1I2} \end{align} {\em Step 4: Calculation of $I_{13}$ and $I_2$.} The sum of $I_{13}$ and $I_2$ is \begin{align} I_{13} + I_2 &= \int_0^t\int_{\R^3}\sum_{i=1}^n \bigg(-\frac12\pa_s\big(|\widehat v_i|^2\big)(\rho_i-\widehat\rho_i) -\frac12 (\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\na\big(|\widehat v_i|^2\big) \bigg) dxds \nonumber\\ &\phantom{xx}{}+ \int_0^t\int_{\R^3}\sum_{i=1}^n \big((\rho_i v_i-\widehat\rho_i\widehat v_i) \cdot\pa_s\widehat v_i + (\rho_i v_i\otimes v_i - \widehat\rho_i\widehat v_i\otimes \widehat v_i):\na\widehat v_i\big) dxds \nonumber \\ & = \int_0^t\int_{\R^3}\sum_{i=1}^n\big(- \widehat v_i \otimes (\rho_i v_i - \widehat \rho_i \widehat v_i) + (\rho_i v_i\otimes v_i -\widehat\rho_i \widehat v_i \otimes \widehat v_i)\big):\na\widehat v_i dxds \nonumber \\ &\phantom{xx}{}+\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i(v_i-\widehat v_i) \cdot\pa_s\widehat v_i dxds. \label{eq:I13I2} \end{align} Observing that \eqref{3.rhoveps2} reads in nonconservative form as $$ \pa_t\widehat v_i + \widehat v_i\cdot\na\widehat v_i = - \na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_j(\widehat v_i-\widehat v_j) + \frac{\widehat R_i}{\widehat\rho_i}, $$ it follows that \begin{align} \int_0^t&\int_{\R^3}\sum_{i=1}^n \rho_i(v_i-\widehat v_i)\cdot\pa_s\widehat v_i dxds \nonumber \\ &= \int_0^t\int_{\R^3}\sum_{i=1}^n \rho_i(v_i-\widehat v_i)\cdot\bigg(-\widehat v_i\cdot\na\widehat v_i - \na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg) \nonumber\\ &\phantom{xx}{}- \frac{1}{\eps}\sum_{j=1}^n b_{ij}\widehat\rho_j (\widehat v_i-\widehat v_j) + \frac{\widehat R_i}{\widehat\rho_i}\bigg)dxds \nonumber \\ &= \int_0^t\int_{\R^3}\sum_{i=1}^n\bigg(-\rho_i(v_i-\widehat v_i)\otimes\widehat v_i :\na\widehat v_i -\na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg)\cdot\rho_i(v_i-\widehat v_i) \nonumber\\ &\phantom{xx}{}-
\frac{1}{\eps}\sum_{j=1}^n b_{ij}\rho_i\widehat\rho_j (v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j) + \frac{\rho_i}{\widehat\rho_i}(v_i-\widehat v_i)\cdot \widehat R_i\bigg)dxds. \label{3.I3} \end{align} Substituting the above formula into \eqref{eq:I13I2} leads to \begin{align} I_{13}+I_2 &=\int_0^t\int_{\R^3}\sum_{i=1}^n \rho_i(v_i-\widehat v_i)\otimes (v_i-\widehat v_i):\na\widehat v_i dxds \nonumber \\ &\phantom{xx}{}- \int_0^t\int_{\R^3}\sum_{i=1}^n \na\bigg(\frac{\pa\widehat F_i}{\pa\rho_i} - \diver\frac{\pa\widehat F_i}{\pa q_i}\bigg) \cdot \rho_i(v_i-\widehat v_i)dxds \nonumber\\ &\phantom{xx}{}-\frac{1}{\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i\widehat\rho_j (v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j) dxds \nonumber \\ &\phantom{xx}{}+ \int_0^t\int_{\R^3}\sum_{i=1}^n \frac{\rho_i}{\widehat\rho_i}(v_i-\widehat v_i)\cdot \widehat R_i\,dxds. \label{eq:I13I2sum} \end{align} {\em Step 5: Calculation of $I_4$.} We collect the terms in $I_4$ and the friction term in \eqref{eq:I13I2sum}: \begin{align} \frac{1}{\eps}\sum_{i,j=1}^n &\Big(-b_{ij}\big(\rho_i\rho_j(v_i-v_j) - \widehat\rho_i\widehat\rho_j(\widehat v_i-\widehat v_j)\big)\cdot\widehat v_i - b_{ij}\rho_i\widehat\rho_j(v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j)\Big) \nonumber \\ &= \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j)\cdot(v_i-\widehat v_i) - \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j)\cdot v_i \nonumber \\ &\phantom{xx}{} + \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\widehat\rho_i\widehat\rho_j (\widehat v_i-\widehat v_j)\cdot\widehat v_i - \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\widehat\rho_j(v_i-\widehat v_i) \cdot(\widehat v_i-\widehat v_j).
\label{3.auxbij} \end{align} By the symmetry of $(b_{ij})$, the second and the third term on the right-hand side become \begin{align*} -\frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j)\cdot v_i &= -\frac{1}{2\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j|v_i-v_j|^2, \\ \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\widehat\rho_i\widehat\rho_j (\widehat v_i-\widehat v_j)\cdot\widehat v_i &= \frac{1}{2\eps}\sum_{i,j=1}^n b_{ij}\widehat\rho_i\widehat\rho_j |\widehat v_i-\widehat v_j|^2. \end{align*} We write the last term on the right-hand side of \eqref{3.auxbij} as \begin{align*} -\frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\widehat\rho_j(v_i-\widehat v_i)\cdot (\widehat v_i-\widehat v_j) &= \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i(\rho_j-\widehat\rho_j) (v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j) \\ &\phantom{xx}{}- \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j (v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j). \end{align*} The last term can be combined with the first term on the right-hand side of \eqref{3.auxbij}: \begin{align*} \frac{1}{\eps}&\sum_{i,j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j)\cdot(v_i-\widehat v_i) - \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j(v_i-\widehat v_i)\cdot (\widehat v_i-\widehat v_j) \\ &= \frac{1}{\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j \big((v_i-v_j)-(\widehat v_i-\widehat v_j)\big)\cdot(v_i-\widehat v_i) \\ &= \frac{1}{2\eps}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j |(v_i-v_j)-(\widehat v_i-\widehat v_j)|^2. 
\end{align*} Then, combining these results, we conclude from \eqref{3.auxbij} that \begin{align} I_4 &- \frac{1}{\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i\widehat\rho_j (v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j)dxds \nonumber \\ &= -\frac{1}{\eps}\int_0^t\int_{\R^3} \sum_{i,j=1}^n \Big(b_{ij}\big(\rho_i\rho_j(v_i-v_j) - \widehat\rho_i\widehat\rho_j(\widehat v_i-\widehat v_j)\big)\cdot\widehat v_i \nonumber \\ &\phantom{xx}{} + b_{ij}\rho_i\widehat\rho_j(v_i-\widehat v_i)\cdot (\widehat v_i-\widehat v_j)\Big) dxds \nonumber \\ &= \frac{1}{2\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j |(v_i-v_j)-(\widehat v_i-\widehat v_j)|^2 dxds \nonumber \\ &\phantom{xx}{}- \frac{1}{2\eps}\int_0^t\int_{\R^3} \sum_{i,j=1}^n b_{ij}\rho_i\rho_j|v_i-v_j|^2 dxds + \frac{1}{2\eps}\int_0^t\int_{\R^3} \sum_{i,j=1}^n b_{ij}\widehat\rho_i\widehat\rho_j |\widehat v_i-\widehat v_j|^2 dxds \nonumber \\ &\phantom{xx}{} +\frac{1}{\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i(\rho_j-\widehat\rho_j) (v_i-\widehat v_i)\cdot(\widehat v_i-\widehat v_j)dxds. \label{3.I7} \end{align} Finally, we insert \eqref{3.I1I2}, \eqref{eq:I13I2sum}, and \eqref{3.I7} into \eqref{3.aux} and then subtract the resulting identity \eqref{3.aux} and equation \eqref{eq:Energy2} from \eqref{eq:Energy} to arrive at \eqref{3.rei}. \end{proof} \subsection{Convergence of the Chapman-Enskog expansion}\label{sec.conv} We proceed to justify the Chapman-Enskog expansion using the relative entropy identity.
We impose a series of assumptions: \begin{enumerate}[label=\bf (A\arabic*)] \setcounter{enumi}{1} \item \label{A2} The strong solution $(\widehat \rho_i^\eps, \widehat v^\eps)$ to \eqref{3.rhoeps}-\eqref{3.rhoveps}, with $\widehat v_i^\eps = \widehat v^\eps + \widehat u_i^\eps$ and $\widehat u_i^\eps$ a solution of \eqref{3.u}, satisfies: there exists a constant $C>0$ such that for all $\eps>0$ and $i=1,\ldots,n$, $$ \|\nabla \widehat v_i^\eps\|_{L^\infty([0,T];L^\infty(\R^3))} + \|\na\diver \widehat v_i^\eps\|_{L^\infty([0,T];L^\infty(\R^3))} \le C. $$ \item \label{A3} The strong solution $\widehat\rho_i^\eps$ to \eqref{3.rhoeps}-\eqref{3.rhoveps} satisfies: there are constants $K>\kappa>0$ such that for all $\eps>0$, $x\in\R^3$, $t\in(0,T)$, and $i=1,\ldots,n$, $$ \kappa\le\widehat\rho_i^\eps(x,t)\le K. $$ \item \label{A4} Let $F_i(\rho_i,q_i)=h_i(\rho_i) + \frac12\kappa_i(\rho_i)|q_i|^2$, where $h_i$ and $\kappa_i$ are $C^3$ functions and there exists a constant $\alpha>0$ such that for all $i=1,\ldots,n$ and $\rho_i\ge 0$, $$ h_i''(\rho_i)\ge\alpha, \quad \kappa_i(\rho_i)\kappa_i''(\rho_i) - 2\kappa_i'(\rho_i)^2 \ge 0, \quad \kappa_i(\rho_i)>0. $$ \item \label{A5} The dissipative weak solution $(\bm{\rho}^\eps,\bm{v}^\eps)$ satisfies: $\rho_i^\eps$ is uniformly bounded in $L^\infty([0,T];L^\infty(\mathbb{R}^3))$, and there are constants $K>\kappa>0$ such that $$ \kappa \le \rho_i^\eps \le K\quad \mbox{in }\R^3,\ 0<t<T. $$ \end{enumerate} Hypothesis \ref{A1} concerns the family of dissipative weak solutions, which is assumed to satisfy the uniform bounds in \ref{A5}. Hypotheses \ref{A2} and \ref{A3} concern the family of strong solutions to the target system \eqref{3.rhoeps}-\eqref{3.rhoveps}. Hypothesis \ref{A4} is a structural hypothesis on the model. It is in particular satisfied for $\kappa_i(\rho_i)=\rho_i^s$ with $s \in [-1,0]$ for $\rho_i>0$.
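To make the last claim concrete, here is the short verification for the power law (a routine computation added for the reader's convenience): for $\kappa_i(\rho_i)=\rho_i^s$ with $\rho_i>0$, one has $\kappa_i'=s\rho_i^{s-1}$ and $\kappa_i''=s(s-1)\rho_i^{s-2}$, so that $$ \kappa_i(\rho_i)\kappa_i''(\rho_i) - 2\kappa_i'(\rho_i)^2 = \big(s(s-1)-2s^2\big)\rho_i^{2s-2} = -s(s+1)\rho_i^{2s-2}, $$ which is nonnegative precisely when $s(s+1)\le 0$, that is, for $s\in[-1,0]$.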
The important special cases $s = -1$ (corresponding to the quantum hydrodynamic system) and $s=0$ (corresponding to constant capillarity) are included. \begin{theorem}\label{thm.convCE} Let $(\bm{\rho}^\eps, \bm{v}^\eps)$ be a dissipative weak solution to \eqref{3.rho}-\eqref{3.rhov} satisfying assumptions \ref{A1} and \ref{A5}, and let $(\widehat\bm{\rho}^\eps,\widehat v^\eps)$ be a strong solution to \eqref{3.rhoeps}-\eqref{3.rhoveps} satisfying assumptions \ref{A2}-\ref{A4}. Furthermore, let assumption \ref{N} on page \pageref{N} hold and let $T>0$. We introduce $$ \chi(t) = \int_{\R^3}\sum_{i=1}^n\bigg( \frac12\rho_i^\eps|v_i^\eps-\widehat v_i^\eps|^2 + (\rho_i^\eps-\widehat\rho_i^\eps)^2 + \frac{1}{2\kappa_i(\rho_i^\eps)}|\kappa_i(\rho_i^\eps)\nabla \rho_i^\eps - \kappa_i(\widehat \rho_i^\eps)\na\widehat \rho_i^\eps|^2\bigg)(t)dx. $$ Then there exists a constant $C>0$ such that for all $\eps>0$ and $t\in(0,T)$, $$ \chi(t) \le C(\chi(0)+\eps^2). $$ In particular, if $\chi(0)\to 0$ as $\eps\to 0$, we have $$ \sup_{t\in(0,T)}\chi(t) \to 0\quad\mbox{as }\eps\to 0. $$ \end{theorem} \begin{proof} We apply the relative energy inequality \eqref{3.rei}. First, we relate the total relative entropy to $\chi(t)$. The superscript $\eps$ is dropped for simplicity of notation. The relative potential is \begin{align}\label{eq:relativeF} F_i(\rho_i,q_i|\widehat \rho_i,\widehat q_i) &= F_i(\rho_i,q_i) - F_i(\widehat \rho_i,\widehat q_i) - \frac{\pa F_i}{\pa\rho_i}(\widehat \rho_i,\widehat q_i)(\rho_i-\widehat \rho_i) - \frac{\pa F_i}{\pa q_i}(\widehat\rho_i,\widehat q_i)\cdot(q_i-\widehat q_i) \\ & = h_i(\rho_i|\widehat \rho_i) + \bigg(\frac{1}{2} \kappa_i(\rho_i) |q_i|^2\bigg)(\rho_i,q_i|\widehat \rho_i,\widehat q_i).
\nonumber \end{align} The second term on the right-hand side of the above equation is calculated in detail as follows: \begin{align} \bigg(\frac{1}{2} &\kappa_i(\rho_i) |q_i|^2\bigg) (\rho_i,q_i|\widehat \rho_i,\widehat q_i) \nonumber \\ &= \frac{1}{2} \kappa_i(\rho_i) |q_i|^2 - \frac{1}{2} \kappa_i(\widehat \rho_i) |\widehat q_i|^2 - \frac{1}{2} \kappa_i'(\widehat \rho_i) |\widehat q_i|^2 (\rho_i - \widehat\rho_i) - \kappa_i(\widehat\rho_i) \widehat q_i\cdot(q_i-\widehat q_i) \nonumber\\ &= \frac{1}{2\kappa_i(\rho_i)} \big(\kappa_i^2(\rho_i)|q_i|^2 - 2\kappa_i(\widehat \rho_i) \kappa_i(\rho_i) q_i \cdot \widehat q_i + \kappa_i^2(\widehat \rho_i)|\widehat q_i|^2\big) \nonumber\\ &\phantom{xx}{}+ \frac{1}{2} |\widehat q_i|^2 \bigg(-\frac{\kappa_i^2(\widehat\rho_i)}{\kappa_i(\rho_i)} + \kappa_i(\widehat \rho_i) - \kappa_i'(\widehat\rho_i)(\rho_i - \widehat \rho_i) \bigg) \nonumber\\ &= \frac{1}{2\kappa_i(\rho_i)}|\kappa_i(\rho_i)q_i-\kappa_i(\widehat\rho_i) \widehat q_i|^2 + \frac{\kappa_i^2(\widehat \rho_i)|\widehat q_i|^2}{2} \bigg(-\frac{1}{\kappa_i(\rho_i)} + \frac{1}{\kappa_i(\widehat \rho_i)} - \frac{\kappa_i'(\widehat\rho_i)}{\kappa_i^2(\widehat\rho_i)} (\rho_i-\widehat\rho_i)\bigg) \nonumber\\ &= \frac{1}{2\kappa_i(\rho_i)}|\kappa_i(\rho_i)q_i-\kappa_i(\widehat\rho_i) \widehat q_i|^2 + \frac{\kappa_i^2(\widehat \rho_i)|\widehat q_i|^2}{2} \bigg(-\frac{1}{\kappa_i}\bigg)(\rho_i|\widehat\rho_i).\label{eq:kq2} \end{align} Assumption \ref{A4} implies that \begin{align*} \bigg(-\frac{1}{\kappa_i}\bigg)(\rho_i|\widehat\rho_i) &= -\frac{1}{\kappa_i(\rho_i)} + \frac{1}{\kappa_i(\widehat \rho_i)} -\frac{\kappa_i'(\widehat\rho_i)}{\kappa_i^2(\widehat\rho_i)} (\rho_i-\widehat\rho_i) \\ &= \int_0^1\int_0^\tau \frac{\kappa_i\kappa_i''-2(\kappa_i')^2}{\kappa_i^3} (s\rho_i + (1-s)\widehat\rho_i) dsd\tau\, (\rho_i-\widehat\rho_i)^2 \ge 0.
\end{align*} Due to assumption \ref{A4}, the Taylor expansion of $h_i(\rho_i|\widehat \rho_i)$ gives \begin{align*} h_i(\rho_i|\widehat\rho_i) &= h_i(\rho_i) - h_i(\widehat \rho_i) - h'_i(\widehat \rho_i)(\rho_i - \widehat\rho_i) \\ &= \int_0^1\int_0^\tau h''_i(s\rho_i + (1-s)\widehat\rho_i) dsd\tau (\rho_i-\widehat\rho_i)^2 \ge \frac{\alpha}{2}|\rho_i-\widehat\rho_i|^2. \end{align*} It follows that, for some $C>0$ independent of $\eps$, \begin{align*} F_i(\rho_i,q_i|\widehat\rho_i,\widehat q_i) \ge C|\rho_i-\widehat\rho_i|^2 + \frac{1}{2\kappa_i(\rho_i)} |\kappa_i(\rho_i)q_i-\kappa_i(\widehat\rho_i)\widehat q_i|^2. \end{align*} We deduce that $$ {\mathcal E}_{\rm tot}(\bm{\rho}^\eps,\bm{m}^\eps|\widehat\bm{\rho}^\eps,\widehat{\bm{m}}^\eps) = \int_{\R^3}\sum_{i=1}^n\bigg(F_i(\rho_i^\eps,\na\rho_i^\eps |\widehat\rho_i^\eps,\na\widehat\rho_i^\eps) + \frac12\rho_i^\eps|v_i^\eps-\widehat v_i^\eps|^2\bigg)dx \ge C\chi(t). $$ We turn to the right-hand side of the energy inequality \eqref{3.rei}. We write $J_1,\ldots,J_4$ for the four integrals on the right-hand side of \eqref{3.rei}. Thanks to assumption \ref{A2}, \begin{align} J_1 &= - \int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps (v_i^\eps-\widehat v_i^\eps)\otimes(v_i^\eps-\widehat v_i^\eps):\na\widehat v_i^\eps dxds \nonumber\\ &\le C\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps|v_i^\eps -\widehat v_i^\eps|^2 dxds \le C\int_0^t\chi(s)ds.\label{eq:J1} \end{align} To estimate $J_2$, we first calculate the stress tensors using \eqref{3.S} and obtain \begin{align*} s_i(\rho_i,q_i) &= p_i(\rho_i) + \frac{1}{2}(\kappa_i(\rho_i)+\rho_i\kappa_i'(\rho_i)) |q_i|^2,\quad p_i(\rho_i) =\rho_i h'_i(\rho_i)-h_i(\rho_i), \\ r_i(\rho_i,q_i) &= \rho_i \kappa_i(\rho_i) q_i, \\ H_i(\rho_i,q_i) &= \kappa_i(\rho_i) q_i \otimes q_i.
\end{align*} For $s_i(\rho_i,q_i|\widehat\rho_i,\widehat q_i)$, we first split $s_i(\rho_i,q_i)$ as $$ s_i(\rho_i,q_i) = p_i(\rho_i) + \frac{1}{2}\kappa_i(\rho_i)|q_i|^2 + A_i(\rho_i,q_i),\quad A_i(\rho_i,q_i) = \frac{1}{2} \rho_i \kappa_i'(\rho_i) |q_i|^2. $$ Due to assumption \ref{A4}, $p_i''$ is a continuous function. Furthermore, thanks to assumptions \ref{A3} and \ref{A5}, $s\rho_i + (1-s)\widehat\rho_i$ is bounded for $s \in [0,1]$, so $p_i''(s\rho_i + (1-s)\widehat\rho_i)$ is bounded. The relative pressure becomes $$ p_i(\rho_i|\widehat \rho_i)= \int_0^1\int_0^\tau p_i''(s\rho_i + (1-s)\widehat\rho_i) dsd\tau (\rho_i-\widehat\rho_i)^2 \le C|\rho_i-\widehat\rho_i|^2. $$ For $A_i(\rho_i,q_i|\widehat\rho_i,\widehat q_i)$, we can replace $\kappa_i(\rho_i)$ in the calculations of $\big(\frac{1}{2} \kappa_i(\rho_i) |q_i|^2\big) (\rho_i,q_i|\widehat \rho_i,\widehat q_i)$ by $\rho_i \kappa_i'(\rho_i)$ to get \begin{align}\label{eq:A} A_i(\rho_i,q_i|\widehat\rho_i,\widehat q_i) = \frac{1}{2\rho_i \kappa_i'(\rho_i)} |\rho_i \kappa_i'(\rho_i) q_i - \widehat\rho_i \kappa_i'(\widehat\rho_i) \widehat q_i|^2 + \frac{|\widehat q_i|^2 \widehat{\rho_i}^2 (\kappa_i'(\widehat \rho_i))^2}{2} \bigg(-\frac{1}{\rho_i \kappa_i'}\bigg)(\rho_i|\widehat\rho_i).
\end{align} The first term on the right-hand side can be estimated as follows: \begin{align*} &\frac{1}{2\rho_i \kappa_i'(\rho_i)} |\rho_i \kappa_i'(\rho_i) q_i - \widehat\rho_i \kappa_i'(\widehat\rho_i) \widehat q_i|^2 \\ &\phantom{xx}{}=\frac{1}{2\rho_i \kappa_i'(\rho_i)} \Big| \frac{\rho_i \kappa_i'(\rho_i)}{\kappa_i(\rho_i)}(\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i) + \bigg(\frac{\rho_i \kappa_i'(\rho_i) \kappa_i(\widehat \rho_i)}{\kappa_i(\rho_i)} - \widehat\rho_i \kappa_i'(\widehat\rho_i)\bigg)\widehat q_i\Big|^2 \\ &\phantom{xx}{}\le \frac{\rho_i \kappa_i'(\rho_i)}{2\kappa_i^2(\rho_i)} |\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i|^2 + \frac{\kappa_i^2(\widehat\rho_i) |\widehat q_i|^2}{2\rho_i \kappa_i'(\rho_i)} \bigg|\frac{\rho_i\kappa_i'(\rho_i)}{\kappa_i(\rho_i)} - \frac{\widehat \rho_i \kappa_i'(\widehat\rho_i)}{\kappa_i(\widehat\rho_i)} \bigg|^2 \\ &\phantom{xx}{} \le \frac{C}{\kappa_i(\rho_i)} |\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i|^2 + C |\rho_i - \widehat\rho_i|^2. \end{align*} We use assumption \ref{A5} in the first term of the last inequality to obtain an upper bound on $\rho_i \kappa_i'(\rho_i)/\kappa_i(\rho_i)$. Assumptions \ref{A3} and \ref{A5} are used to estimate the second term. By the same assumptions, a Taylor expansion of the last term on the right-hand side of \eqref{eq:A} leads to $$ \bigg(-\frac{1}{\rho_i \kappa_i'}\bigg)(\rho_i|\widehat\rho_i) \le C |\rho_i - \widehat \rho_i|^2. $$ We thus have \begin{align}\label{eq:sest} s_i(\rho_i,q_i|\widehat\rho_i,\widehat q_i) \le C|\rho_i-\widehat\rho_i|^2 + \frac{1}{2\kappa_i(\rho_i)}|\kappa_i(\rho_i)q_i -\kappa_i(\widehat\rho_i)\widehat q_i|^2 .
\end{align} Observe that \begin{align} r_i&(\rho_i,q_i|\widehat \rho_i,\widehat q_i) \nonumber \\ &= \rho_i \kappa_i(\rho_i) q_i - \widehat \rho_i \kappa_i(\widehat \rho_i) \widehat q_i - (\kappa_i(\widehat\rho_i) + \widehat \rho_i \kappa_i'(\widehat \rho_i)) \widehat q_i (\rho_i - \widehat \rho_i) - \widehat \rho_i \kappa_i(\widehat \rho_i)(q_i - \widehat q_i) \nonumber\\ &= (\rho_i \kappa_i(\rho_i) - \widehat \rho_i \kappa_i(\widehat \rho_i)) q_i - \kappa_i(\widehat\rho_i) \widehat q_i(\rho_i - \widehat \rho_i) - \widehat \rho_i \kappa_i'(\widehat \rho_i) \widehat q_i (\rho_i - \widehat \rho_i) \nonumber\\ &= \frac{\rho_i \kappa_i(\rho_i) - \widehat\rho_i \kappa_i(\widehat\rho_i)}{ \kappa_i(\rho_i)}(\kappa_i(\rho_i)q_i - \kappa_i(\widehat\rho_i)\widehat q_i) \nonumber \\ &\phantom{xx}{} + \widehat \rho_i \kappa_i^2(\widehat\rho_i) \widehat q_i \bigg(-\frac{1}{\kappa_i(\rho_i)} + \frac{1}{\kappa_i(\widehat\rho_i)} - \frac{\kappa_i'(\widehat\rho_i)}{\kappa_i^2(\widehat\rho_i)} (\rho_i-\widehat\rho_i)\bigg) \nonumber\\ &\le \frac{C}{\kappa_i(\rho_i)}|\rho_i \kappa_i(\rho_i) - \widehat \rho_i \kappa_i(\widehat \rho_i)|^2 + \frac{C}{\kappa_i(\rho_i)} |\kappa_i(\rho_i)q_i - \kappa_i(\widehat\rho_i)\widehat q_i|^2 + \widehat \rho_i \kappa_i^2(\widehat\rho_i)\widehat q_i \big(-\frac{1}{\kappa_i}\big)(\rho_i|\widehat\rho_i) \nonumber\\ &\le\frac{C}{\kappa_i(\rho_i)} |\kappa_i(\rho_i)q_i - \kappa_i(\widehat\rho_i) \widehat q_i|^2 +C|\rho_i - \widehat\rho_i|^2, \label{eq:rest} \end{align} where we used assumptions \ref{A3} and \ref{A5} to show the boundedness of $1/\kappa_i(\rho_i)$ and $(-1/\kappa_i)(\rho_i|\widehat\rho_i)$. They are also used to estimate $|\rho_i \kappa_i(\rho_i) - \widehat\rho_i \kappa_i(\widehat\rho_i)|^2 \le C|\rho_i-\widehat\rho_i|^2$.
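This last bound is a consequence of the mean value theorem on the compact range of densities guaranteed by assumptions \ref{A3} and \ref{A5} (a short justification added here; $g_i$ and $\xi_i$ are auxiliary quantities): setting $g_i(\rho)=\rho\kappa_i(\rho)$, $$ |\rho_i \kappa_i(\rho_i) - \widehat\rho_i \kappa_i(\widehat\rho_i)| = |g_i'(\xi_i)|\,|\rho_i-\widehat\rho_i| \le \sup_{\kappa\le\rho\le K}|\kappa_i(\rho)+\rho\kappa_i'(\rho)|\,|\rho_i-\widehat\rho_i| $$ for some $\xi_i$ between $\rho_i$ and $\widehat\rho_i$, and the supremum is finite since $\kappa_i\in C^3$.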
Next, focusing on the term $H_i(\rho_i,q_i)$, we compute \begin{align} H_i(\rho_i,q_i|\widehat\rho_i,\widehat q_i) &= \kappa_i(\rho_i) q_i \otimes q_i - \kappa_i(\widehat\rho_i) \widehat q_i \otimes \widehat q_i - \kappa_i'(\widehat\rho_i) \widehat q_i \otimes \widehat q_i (\rho_i - \widehat \rho_i) \nonumber\\ &\phantom{xx}{} - \kappa_i(\widehat\rho_i)(q_i-\widehat q_i) \otimes \widehat q_i - \kappa_i(\widehat\rho_i)\widehat q_i \otimes (q_i-\widehat q_i) \nonumber\\ &= \frac{1}{\kappa_i(\rho_i)}(\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i) \otimes (\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i) \nonumber\\ &\phantom{xx}{} + \kappa_i^2(\widehat\rho_i)\widehat q_i \otimes \widehat q_i \bigg(-\frac{1}{\kappa_i(\rho_i)} + \frac{1}{\kappa_i(\widehat \rho_i)} -\frac{\kappa_i'(\widehat\rho_i)}{\kappa_i^2(\widehat\rho_i)} (\rho_i-\widehat\rho_i)\bigg) \nonumber\\ &\le \frac{1}{\kappa_i(\rho_i)} |\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i|^2 + \kappa_i^2(\widehat \rho_i)|\widehat q_i|^2 \big(-\frac{1}{\kappa_i}\big) (\rho_i|\widehat\rho_i)\nonumber \\ &\le \frac{1}{\kappa_i(\rho_i)} |\kappa_i(\rho_i) q_i - \kappa_i(\widehat \rho_i) \widehat q_i|^2 + C|\rho_i-\widehat \rho_i|^2.\label{eq:Hest} \end{align} Combining \eqref{eq:sest}, \eqref{eq:rest}, and \eqref{eq:Hest} and using assumption \ref{A2}, we deduce that \begin{align} J_2 &= - \int_0^t\int_{\R^3}\sum_{i=1}^n\Big(s_i(\rho_i^\eps,\na\rho_i^\eps |\widehat\rho_i^\eps,\na\widehat\rho_i^\eps)\diver\widehat v_i^\eps + r_i(\rho_i^\eps,\na\rho_i^\eps|\widehat\rho_i^\eps,\na\widehat\rho_i^\eps) \cdot\na\diver\widehat v_i^\eps \nonumber \\ &\phantom{xx}{}+H_i(\rho_i^\eps,\na\rho_i^\eps |\widehat\rho_i^\eps,\na\widehat\rho_i^\eps):\na\widehat v_i^\eps\Big) dxds \nonumber\\ & \le C\int_0^t\int_{\R^3}\sum_{i=1}^n\left((\rho_i^\eps-\widehat\rho_i^\eps)^2 +\frac{1}{\kappa_i(\rho_i^\eps)} |\kappa_i(\rho_i^\eps)\nabla\rho_i^\eps - \kappa_i(\widehat\rho_i^\eps)
\nabla\widehat\rho_i^\eps|^2\right)dxds \nonumber\\ &\le C\int_0^t\chi(s)ds. \label{eq:J2} \end{align} From equation \eqref{eq:rhou} we have $$ \widehat\rho_i^\eps\widehat u_i^\eps = -\eps\sum_{j=1}^n D_{ij}(\widehat\bm{\rho}^\eps) \na\frac{\delta{\mathcal E}}{\delta\rho_j}(\widehat\bm{\rho}^\eps). $$ Hence, by definition \eqref{3.Reps} and upon using assumptions \ref{A3} and \ref{A1}, we see that $\widehat R_i^\eps$ is of order $O(\eps)$ and that \begin{align*} J_3 &= -\int_0^t\int_{\R^3}\sum_{i=1}^n\frac{\rho_i^\eps}{\widehat\rho_i^\eps} \widehat R_i^\eps\cdot(v_i^\eps-\widehat v_i^\eps)dxds \\ &\le C\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps|v_i^\eps-\widehat v_i^\eps|^2 dxds + C\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps\bigg( \frac{\widehat R_i^\eps}{\widehat\rho_i^\eps}\bigg)^2 dxds \\ &\le C\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps|v_i^\eps-\widehat v_i^\eps|^2 dxds + C\eps^2 t. \end{align*} Also $\widehat v_i^\eps-\widehat v_j^\eps=\widehat u_i^\eps-\widehat u_j^\eps$ is of order $\eps$, so the last term $J_4$ is estimated using assumption \ref{A5} by \begin{align*} J_4 &= -\frac{1}{\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps (\rho_j^\eps-\widehat\rho_j^\eps)(v_i^\eps-\widehat v_i^\eps) \cdot(\widehat v_i^\eps-\widehat v_j^\eps)dxds\\ &\le C\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps |\rho_j^\eps-\widehat\rho_j^\eps||v_i^\eps-\widehat v_i^\eps|dxds \\ &\le C\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps|v_i^\eps-\widehat v_i^\eps|^2 dxds + C\int_0^t\int_{\R^3}\sum_{i,j=1}^n\rho_i^\eps|\rho_j^\eps-\widehat\rho_j^\eps|^2 dxds \\ &\le C\int_0^t\chi(s)ds. \end{align*} Putting these estimates together, we arrive at \begin{align*} \chi(t) &+ \frac{1}{2\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps \rho_j^\eps\big|(v_i^\eps-v_j^\eps)-(\widehat v_i^\eps-\widehat v_j^\eps)\big|^2 dxds \\ &\le C\chi(0) + C\int_0^t\chi(s)ds + C\eps^2 t.
\end{align*} Then Gronwall's inequality gives $\chi(t) \le C(\chi(0) + \eps^2)e^{CT}$, finishing the proof. \end{proof} \begin{remark} \rm The assumption $h_i''(\rho_i) \ge \alpha$ is not needed if we assume that $\kappa_i(\rho_i)\kappa_i''(\rho_i) - 2\kappa_i'(\rho_i)^2 \ge \alpha$ and $|\nabla \widehat \rho_i|$ is bounded away from zero for all $i=1,\ldots,n$, because then the second term on the right-hand side of \eqref{eq:kq2} controls $|\rho_i-\widehat \rho_i|^2$. The case of quantum hydrodynamics, $\kappa_i(\rho_i) = k_i/(4\rho_i)$, is included in the above proof. Indeed, $\chi(t)$ is taken to be $$ \chi(t) = \int_{\R^3}\sum_{i=1}^n\bigg( \frac12\rho_i^\eps|v_i^\eps-\widehat v_i^\eps|^2 + (\rho_i^\eps-\widehat\rho_i^\eps)^2 + \frac{2\rho_i^\eps}{k_i} \bigg|\frac{\nabla \rho_i^\eps}{\rho_i^\eps} - \frac{\nabla \widehat \rho_i^\eps}{\widehat\rho_i^\eps}\bigg|^2 \bigg)(t)dx. $$ The condition in assumption \ref{A4} becomes $$ \kappa_i(\rho_i)\kappa_i''(\rho_i) - 2\kappa_i'(\rho_i)^2 = 0, $$ but one needs the assumption $h''_i(\rho_i)\ge \alpha$ to derive the bounds for $|\rho_i^\eps-\widehat\rho_i^\eps|^2$. The use of the nonlinear quadratic term $(2\rho_i^\eps/k_i)|\nabla \rho_i^\eps/\rho_i^\eps - \nabla \widehat \rho_i^\eps/\widehat\rho_i^\eps|^2$ is crucial to obtain the estimate. Finally, for the case of constant capillarity, $\kappa_i(\rho_i)=k_i$, we also have $\kappa_i(\rho_i)\kappa_i''(\rho_i) - 2\kappa_i'(\rho_i)^2=0$, so that assumption \ref{A4} is satisfied. Thus, Theorem \ref{thm.convCE} also holds in this case.
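For the reader's convenience, both identities follow from one-line computations (added here for completeness): in the quantum case $\kappa_i(\rho_i)=k_i/(4\rho_i)$, we have $\kappa_i'=-k_i/(4\rho_i^2)$ and $\kappa_i''=k_i/(2\rho_i^3)$, so $$ \kappa_i\kappa_i'' - 2(\kappa_i')^2 = \frac{k_i}{4\rho_i}\cdot\frac{k_i}{2\rho_i^3} - 2\bigg(\frac{k_i}{4\rho_i^2}\bigg)^2 = \frac{k_i^2}{8\rho_i^4} - \frac{k_i^2}{8\rho_i^4} = 0, $$ while in the constant case $\kappa_i(\rho_i)=k_i$ both $\kappa_i'$ and $\kappa_i''$ vanish identically.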
\end{remark} \section{Justification of the high-friction limit}\label{sec.relax} We recall the original system \eqref{3.rho}-\eqref{3.rhov}: \begin{align} \pa_t\rho_i^\eps + \diver(\rho_i^\eps v_i^\eps) &= 0, \label{4.rho} \\ \pa_t(\rho_i^\eps v_i^\eps) + \diver(\rho_i^\eps v_i^\eps\otimes v_i^\eps) &= \diver S_i(\bm{\rho}) - \frac{1}{\eps}\sum_{j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps (v_i^\eps-v_j^\eps), \label{4.rhov} \end{align} where $\diver S_i = -\rho_i\na(\delta{\mathcal E}/\delta\rho_i)$. The limiting system for $\eps\to 0$ becomes \begin{align} \pa_t\bar\rho_i + \diver(\bar\rho_i\bar v) &= 0, \label{4.bar1} \\ \pa_t(\bar\rho\bar v) + \diver(\bar\rho\bar v\otimes\bar v) &= \diver\bar S, \label{4.bar2} \end{align} where $\bar S=\sum_{i=1}^n S_i(\bar\rho_i)$, $\bar\rho=\sum_{i=1}^n\bar\rho_i$, and $\bar\rho\bar v=\sum_{i=1}^n\bar\rho_i\bar v_i$. Indeed, system \eqref{4.bar1}-\eqref{4.bar2} corresponds to the zeroth-order Chapman-Enskog expansion \eqref{2.I01}-\eqref{2.I02}. In this section, we verify the limit $\eps\to 0$ rigorously, analyzing the isentropic case $F_i(\rho_i,q_i) = h_i(\rho_i)$ and the Korteweg case $F_i(\rho_i,q_i) = h_i(\rho_i) + \frac12\kappa_i(\rho_i)|q_i|^2$ separately. \subsection{High-friction limit in the isentropic case}\label{sec.isen} We consider the case when the energy density only depends on the particle density (and not on its gradients), $$ {\mathcal E}(\bm{\rho}) = \int_{\R^3}\sum_{i=1}^n F_i(\rho_i)dx, \quad F_i=h_i(\rho_i). $$ We prove the relaxation limit $\eps\to 0$ in \eqref{4.rho}-\eqref{4.rhov} by applying the general result of \cite{Tza05}. 
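Before reformulating the system, it is worth recording the one-line computation behind the pressure identity used in the next step (added for clarity): in the isentropic case $\delta{\mathcal E}/\delta\rho_i = h_i'(\rho_i)$, so with $p_i(\rho_i)=\rho_i h_i'(\rho_i)-h_i(\rho_i)$, $$ \rho_i\na h_i'(\rho_i) = \rho_i h_i''(\rho_i)\na\rho_i = \na\big(\rho_i h_i'(\rho_i) - h_i(\rho_i)\big) = \na p_i(\rho_i). $$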
Noting that $\rho_i\na(\delta{\mathcal E}/\delta\rho_i) = \na p_i(\rho_i)$, where $$ p_i(\rho_i) = \rho_i h_i'(\rho_i) - h_i(\rho_i) $$ is the partial pressure, we can formulate \eqref{4.rho}-\eqref{4.rhov} as the system of balance laws \begin{equation}\label{4.Ueps} \pa_t U^\eps + \diver F(U^\eps) = \frac{1}{\eps}R(U^\eps), \end{equation} where $U^\eps=(\bm{\rho}^\eps,\bm{m}^\eps)$, $\bm{m}^\eps=(\rho_i^\eps v_i^\eps)_{i=1,\ldots,n}$, \begin{align*} F(U^\eps) &= \begin{pmatrix} \rho_i^\eps v_i^\eps \\ \rho_i^\eps v_i^\eps\otimes v_i^\eps + p_i(\rho_i^\eps) \end{pmatrix}_{i=1,\ldots,n}\in\R^{2n}, \\ R(U^\eps) &= \begin{pmatrix} 0 \\ -\sum_{j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps(v_i^\eps-v_j^\eps) \end{pmatrix}_{i=1,\ldots,n}\in\R^{2n}. \end{align*} The (formal) relaxation limit $\eps\to 0$ leads to $R(U)=0$, where $U=\lim_{\eps\to 0}U^\eps$. This implies that all limit velocities are the same, $v:=v_i$ for $i=1,\ldots,n$. Thus, the limit equations are expected to be \begin{equation* \pa_t\rho_i + \diver(\rho_i v) = 0, \quad \pa_t(\rho v) + \diver(\rho v\otimes v) + \na p = 0, \end{equation*} for $i=1,\ldots,n$, where $\rho=\sum_{i=1}^n\rho_i$ and $p=\sum_{i=1}^n p_i$. This system can be written as the conservation law \begin{align}\label{4.lim1} \pa_t u + \diver f(u) = 0, \end{align} where $u=(\bm{\rho},m)$, $m=\rho v$, and $f(u)=(\rho_1 v,\ldots,\rho_n v,\rho v\otimes v + p)$. System \eqref{4.Ueps} has an entropy $$ \eta(U) = \sum_{i=1}^n\bigg(h_i(\rho_i) + \frac12\rho_i |v_i|^2\bigg), $$ satisfying $\partial_t \int_{\mathbb{R}^3} \eta(U) dx \le 0$. We introduce the relative entropy density $$ \eta(U^\eps|\bar U) = \sum_{i=1}^n\bigg(h_i(\rho_i^\eps|\bar\rho_i) + \frac12\rho_i^\eps|v_i^\eps-\bar v|^2\bigg), $$ where $h_i(\rho_i^\eps|\bar\rho_i)=h_i(\rho_i^\eps)-h_i(\bar\rho_i) - h_i'(\bar\rho_i)(\rho_i^\eps-\bar\rho_i)$ and $\bar{U} = (\bar \rho_1, \ldots,\bar \rho_n, \bar \rho_1 \bar v, \ldots,\bar \rho_n \bar v)$. 
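The kinetic part of this relative entropy is indeed quadratic in the velocity difference; writing $e(\rho,m)=|m|^2/(2\rho)$ with $m=\rho v$ (auxiliary notation introduced for this check), we have $\pa e/\pa\rho = -\frac12|\bar v|^2$ and $\pa e/\pa m = \bar v$ at $(\bar\rho,\bar m)$, and a direct computation (added for clarity) gives $$ e(\rho,m) - e(\bar\rho,\bar m) + \frac12|\bar v|^2(\rho-\bar\rho) - \bar v\cdot(m-\bar m) = \frac12\rho|v|^2 + \frac12\rho|\bar v|^2 - \rho v\cdot\bar v = \frac12\rho|v-\bar v|^2, $$ which explains the form of $\eta(U^\eps|\bar U)$.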
\begin{theorem}[Relaxation limit in the isentropic case]\label{thm.isen} Assume that \ref{N} on page \pageref{N} holds and that the function $h_i:[0,\infty)\to\R$ is uniformly convex on $(0,\infty)$ for all $i=1,\ldots,n$. Let $U^\eps=(\bm{\rho}^\eps,\bm{v}^\eps)$ be a smooth solution to \eqref{4.rho}-\eqref{4.rhov} or \eqref{4.Ueps} and let $\bar u=(\bar{\bm{\rho}},\bar\rho\bar v)$ be a smooth solution to \eqref{4.bar1}-\eqref{4.bar2} or \eqref{4.lim1}. We suppose that there exists $\kappa>0$ such that $\rho_i^\eps,\bar\rho_i\ge\kappa>0$ in $\R^3\times(0,T)$ for all $i=1,\ldots,n$. Then for any $r>0$, there exist $s>0$ and $C>0$ independent of $\eps$ such that for all $t\in(0,T)$, $$ \int_{\{|x|<r\}}\eta(U^\eps|\bar U)(x,t)dx \le C\bigg(\int_{\{|x|<r+st\}}\eta(U^\eps|\bar U)(x,0)dx + \eps\bigg). $$ In particular, if $$ \lim_{\eps\to 0}\int_{\{|x|<r+st\}}\eta(U^\eps|\bar U)(x,0)dx = 0 $$ then $$ \lim_{\eps\to 0}\sup_{0<t<T}\int_{\R^3}\sum_{i=1}^n\big((\rho_i^\eps-\bar\rho_i)^2 + |v_i^\eps-\bar v_i|^2\big)dx = 0. $$ \end{theorem} \begin{proof} As mentioned above, the result follows after applying Theorem 3.1 in \cite{Tza05}. To this end, we need to verify the structural conditions (h1)-(h7) of \cite{Tza05}. \begin{labeling}{h111} \item[(h1)] There exists a projection matrix $\mathbb{P}:\R^{2n}\to\R^{n+1}$ satisfying $\operatorname{rank}(\mathbb{P})=n+1$ and $\mathbb{P}(R(U))=0$ for all $U\in\R^{2n}$. This matrix relates the variables $u$ and $U$ and is given by $$ u = \mathbb{P}U, \quad \mathbb{P} = \begin{pmatrix} \mathbb{I}_n & \mathbb{O}_n \\ 0,\ldots,0 & 1,\ldots,1 \end{pmatrix}, $$ where $\mathbb{I}_n$ is the unit matrix of $\R^{n\times n}$ and $\mathbb{O}_n$ is the zero matrix in $\R^{n\times n}$. For all $U=(\bm{\rho},\bm{m})$, it holds that $(\mathbb{P}R(U))_i=0$ for $i=1,\ldots,n$ and $$ (\mathbb{P}R(U))_{n+1} = -\sum_{j,k=1}^n b_{jk}\rho_j\rho_k(v_j-v_k) = 0 $$ by the symmetry of $(b_{jk})$. \item[(h2)] The equilibrium solutions to $R(U)=0$, called $M(u)$, satisfy $\mathbb{P}M(u)=u$.
The equilibrium solutions are given by $M(u)=(\rho_1,\ldots,\rho_n,\rho_1 v,\ldots,\rho_n v)$, since $(\mathbb{P}M(u))_{i} = \rho_i$ for $i=1,\ldots,n$ and $(\mathbb{P}M(u))_{n+1}$ $=\sum_{j=1}^n\rho_j v=\rho v$. \item[(h3)] The nondegeneracy conditions $$ \dim\operatorname{ker}(R_U(M(u))) = n+1, \quad \dim\operatorname{ran}(R_U(M(u))) = n-1 $$ hold, where $R_U=dR/dU$. This can be verified by a straightforward computation. \item[(h4), (h5)] There exists an entropy density $\eta:\R^{2n}\to\R$ which is convex and satisfies $\eta_U F_U = J_U$ and $\eta_U\cdot R(U)\le 0$, where $J$ is the flux vector. We choose $$ \eta(U) = \sum_{i=1}^n\bigg(h_i(\rho_i) + \frac12\rho_i |v_i|^2\bigg), \quad J(U) = \sum_{i=1}^n\bigg(\rho_i h_i'(\rho_i)v_i + \frac12\rho_i|v_i|^2 v_i\bigg). $$ Then the inequality is a consequence of the energy inequality \eqref{eq:Energy}. \item[(h6)] The solution $u$ to \eqref{4.lim1} has the entropy-flux pair $$ \eta(M(u)) = \sum_{i=1}^n h_i(\rho_i) + \frac12\rho|v|^2, \quad J(M(u)) = \sum_{i=1}^n \rho_i h_i'(\rho_i)v + \frac12\rho|v|^2 v. $$ This follows from \eqref{eq:Energy2} with $\widehat\rho_i,\widehat v_i$ replaced by $\bar\rho_i,\bar v$. \item[(h7)] The following inequality holds: $$ -\big(\eta_U(U) - \eta_U(M(u)\big)\cdot\big(R(U)-R(M(u))\big) \ge \nu|U-M(u)|^2. $$ \end{labeling} The inequality in (h7) amounts to proving \begin{equation}\label{4.h7} \frac{1}{2}\sum_{i,j=1}^n b_{ij} \rho_i \rho_j |v_i-v_j|^2 \ge \nu \sum_{i=1}^n \rho_i^2|v_i-v|^2 \, . \end{equation} The proof of this statement is motivated by the analysis in \cite{YYZ15}. First, note that $\pa\eta/\pa\rho_i=h_i'(\rho_i)-\frac12|v_i|^2$ and $\pa\eta/\pa m_i=v_i$, where $m_i=\rho_iv_i$. Taking into account that $R(M(u))=0$, we have \begin{align*} -\big(&\eta_U(U) - \eta_U(M(u))\big)\cdot\big(R(U)-R(M(u))\big) \\ &= \sum_{i=1}^n(v_i-v)\cdot\sum_{j=1}^n b_{ij}\rho_i\rho_j(v_i-v_j) = \frac{1}{2} \sum_{i,j} b_{ij} \rho_i \rho_j |v_i-v_j|^2. 
\end{align*} For the proof of \eqref{4.h7}, let $v_i=v+u_i$, and we reformulate the left-hand side of the inequality in (h7) as \begin{align*} -\big(&\eta_U(U) - \eta_U(M(u))\big)\cdot\big(R(U)-R(M(u))\big)\\ &= \sum_{i,j=1}^n b_{ij}\rho_i\rho_j(u_i-u_j)\cdot u_i = \sum_{i,j=1}^n \tau_{ij}u_i\cdot u_j, \end{align*} where $\tau_{ij}=\delta_{ij}\sum_{k=1}^n b_{ik}\rho_i\rho_k - b_{ij}\rho_i\rho_j$ as in \eqref{eq:taudef}. Since $(\tau_{ij})$ is not positive definite, inequality \eqref{4.h7} does not follow directly. The idea is to use the fact that there exists a submatrix $(\tau_{ij})\in\R^{(n-1)\times(n-1)}$ that is positive definite; see the proof of Lemma \ref{lem.Estar}. Recalling the properties $Q_{ij}=\delta_{ij}/\rho_i + 1/\rho_n$, $\sum_{i=1}^n\rho_iu_i=0$, $\sum_{i=1}^n\tau_{ij}=0$, and \eqref{eq:taucal}, we compute \begin{align*} -\big(&\eta_U(U) - \eta_U(M(u))\big)\cdot\big(R(U)-R(M(u))\big) = \sum_{i=1}^n u_i\sum_{j,k=1}^{n-1}\tau_{ij}Q_{jk}\rho_k u_k \\ &= \sum_{i=1}^{n-1} u_i\sum_{j,k=1}^{n-1} \tau_{ij}Q_{jk}\rho_k u_k + u_n\sum_{j,k=1}^{n-1}\tau_{nj}Q_{jk}\rho_k u_k \\ &= \sum_{i=1}^{n-1} u_i\sum_{j,k=1}^{n-1} \tau_{ij}Q_{jk}\rho_k u_k - \sum_{\ell=1}^{n-1}\frac{\rho_\ell u_\ell}{\rho_n}\sum_{j,k=1}^{n-1} \bigg(-\sum_{m=1}^{n-1}\tau_{mj}\bigg)Q_{jk}\rho_k u_k \\ &= \sum_{i,j,k,\ell=1}^{n-1}\rho_\ell u_\ell\bigg(\frac{\delta_{i\ell}}{\rho_\ell} + \frac{1}{\rho_n}\bigg)\tau_{ij}Q_{jk}\rho_k u_k \\ &= \sum_{i,j,k,\ell=1}^{n-1}\rho_\ell u_\ell Q_{i\ell}\tau_{ij}Q_{jk}(\rho_k u_k) = W^TQ^\top\tau QW, \end{align*} where $W=(\rho_1u_1,\ldots,\rho_{n-1}u_{n-1})^\top$. Since $(\tau_{ij})\in\R^{(n-1)\times(n-1)}$ is positive definite and $Q$ is invertible, $Q^\top\tau Q$ is also positive definite. We infer that there exists a constant $\mu>0$ such that $$ -\big(\eta_U(U) - \eta_U(M(u))\big)\cdot\big(R(U)-R(M(u))\big) \ge \mu|W|^2 = \mu\sum_{i=1}^{n-1}|\rho_i u_i|^2. $$ We claim that we may sum from $i=1$ to $n$ using another constant. 
Indeed, we infer from $$ |\rho_n u_n|^2 = \bigg|-\sum_{i=1}^{n-1}\rho_iu_i\bigg|^2 \le (n-1)\sum_{i=1}^{n-1}|\rho_iu_i|^2 $$ that $$ \sum_{i=1}^n|\rho_iu_i|^2 = \sum_{i=1}^{n-1}|\rho_iu_i|^2 + |\rho_nu_n|^2 \le n\sum_{i=1}^{n-1}|\rho_iu_i|^2 $$ and therefore, $$ -\big(\eta_U(U) - \eta_U(M(u))\big)\cdot\big(R(U)-R(M(u))\big) \ge \frac{\mu}{n}\sum_{i=1}^n|\rho_iu_i|^2, $$ and the result follows with $\nu=\mu/n$. \end{proof} \subsection{High-friction limit in the Euler-Korteweg case}\label{sec.EK} We next justify the relaxation limit $\eps\to 0$ for energies $F_i$ depending on the particle density and its gradient. We impose the following assumption: \begin{enumerate}[label=\bf (A\arabic*)] \setcounter{enumi}{5} \item \label{A6} $\bar{u} = (\bar{\bm{\rho}},\bar{\rho} \bar{v})$ is a smooth solution to \eqref{4.bar1}-\eqref{4.bar2} satisfying $\bar{u}$, $\partial_t \bar{u}$, $\nabla \bar{u}$, $D^2 \bar{u}$, $D^3 \bar{\rho} \in L^\infty([0,T];L^\infty(\mathbb{R}^3))$. \end{enumerate} \begin{proposition}[Relative energy inequality]\label{prop.rei2} Let $(\bm{\rho}^\eps,\bm{v}^\eps)$ be a dissipative weak solution to \eqref{4.rho}-\eqref{4.rhov} satisfying assumption \ref{A1} on page \pageref{A1} and let $(\bar{\bm{\rho}},\bar v)$ be a smooth solution to \eqref{4.bar1}-\eqref{4.bar2} satisfying assumption \ref{A6}. Let assumption \ref{N} on page \pageref{N} hold.
Then \begin{align} {\mathcal E}_{\rm tot}&(\bm{\rho}^\eps,\bm{m}^\eps|\bar\bm{\rho},\bar{\bm{m}})(t) + \frac{1}{2\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps |v_i^\eps-v_j^\eps|^2 dxds \nonumber \\ &\le {\mathcal E}_{\rm tot}(\bm{\rho}^\eps,\bm{m}^\eps|\bar\bm{\rho},\bar{\bm{m}})(0) - \int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps(v_i^\eps-\bar v) \otimes(v_i^\eps-\bar v):\na\bar v dxds \nonumber \\ &\phantom{xx}{}- \int_0^t\int_{\R^3}\sum_{i=1}^n\Big(s_i(\rho_i^\eps,\na\rho_i^\eps |\bar\rho_i,\na\bar\rho_i)\diver\bar v + r_i(\rho_i^\eps,\na\rho_i^\eps|\bar\rho_i,\na\bar\rho_i) \cdot\na\diver\bar v \nonumber \\ &\phantom{xx}{}+ H_i(\rho_i^\eps,\na\rho_i^\eps |\bar\rho_i,\na\bar\rho_i):\na\bar v \Big)dxds \nonumber\\ &\phantom{xx}{}- \int_0^t\int_{\R^3}\sum_{i=1}^n \rho_i^\eps(v_i^\eps-\bar v) \cdot\bigg(\frac{\diver\bar S}{\bar\rho} - \frac{\diver \bar S_i}{\bar\rho_i} \bigg)dxds. \label{4.rei2} \end{align} \end{proposition} \begin{proof} The calculation is similar to the proof of Proposition \ref{prop.rei}. We can replace $\widehat\rho_i^\eps$, $\widehat v_i^\eps$ by $\bar \rho_i$, $\bar v$ in \eqref{3.rei}. To obtain the relative energy inequality, we further need to write the equation for $\bar\rho_i \bar v$ in the same form as \eqref{3.rhoveps2} and replace $\widehat R_i^\eps$ by $\bar{R}_i$, which is given by \begin{align*} \partial_t(\bar\rho_i \bar v) + \diver(\bar \rho_i \bar v \otimes \bar v) = - \bar{\rho}_i \nabla \frac{\delta {\mathcal E}}{\delta\bar \rho_i}(\bar\bm{\rho}) - \frac{1}{\eps} \sum_{j=1}^n b_{ij} \bar\rho_i\bar\rho_j(\bar v - \bar v) + \bar{R}_i.
\end{align*} Using \eqref{4.bar1} and \eqref{4.bar2}, $\bar{R}_i$ can be calculated as \begin{align*} \bar{R}_i &= (\partial_t \bar\rho_i + \diver(\bar\rho_i\bar v))\cdot \bar v + \bar \rho_i(\pa_t\bar v+\bar v \cdot \na\bar v) + \bar\rho_i \nabla \frac{\delta {\mathcal E}}{\delta\bar \rho_i}\\ &=\frac{\bar\rho_i}{\bar \rho}(\pa_t(\bar\rho \bar v) + \na \cdot (\bar\rho \bar v\otimes \bar v)) + \bar\rho_i \nabla \frac{\delta {\mathcal E}}{\delta\bar \rho_i} \\ & = \frac{\bar\rho_i}{\bar\rho} \diver \bar{S} - \diver \bar{S}_i. \end{align*} Replacing $\widehat R_i^\eps$ with the above equation, \eqref{3.rei} becomes \eqref{4.rei2}. Notice that $\widehat v_i^\eps-\widehat v_j^\eps$ reduces to $\bar v - \bar v=0$ and the last term in \eqref{3.rei} vanishes. \end{proof} \begin{theorem}[Relaxation limit in the Korteweg case]\label{thm.korte} Let $(\bm{\rho}^\eps,\bm{v}^\eps)$ be a dissipative weak solution to \eqref{4.rho}-\eqref{4.rhov} satisfying \ref{A1} on page \pageref{A1} and \ref{A5} on page \pageref{A5} and let $(\bar\bm{\rho},\bar v)$ be a strong solution to \eqref{4.bar1}-\eqref{4.bar2} satisfying \ref{A6} on page \pageref{A6}. Suppose that for some constants $K>\kappa>0$, we have the uniform bounds $\kappa\le\rho_i^\eps(x,t)\le K$ and $\bar\rho_i(x,t)\ge\kappa$ for all $(x,t)\in\R^3\times(0,T)$ and $i=1,\ldots,n$. Furthermore, let assumption \ref{N} hold. We fix $T>0$ and set, as in Theorem \ref{thm.convCE}, $$ \chi(t) = \int_{\R^3}\sum_{i=1}^n\bigg( \frac12\rho_i^\eps|v_i^\eps-\bar v|^2 + (\rho_i^\eps-\bar\rho_i)^2+ \frac{1}{2\kappa_i(\rho_i^\eps)}|\kappa_i(\rho_i^\eps)\nabla \rho_i^\eps - \kappa_i(\bar\rho_i) \na\bar \rho_i|^2\bigg)(t)dx. $$ Then there exists a constant $C>0$ such that for all $\eps>0$ and $t\in(0,T)$, $$ \chi(t) \le C(\chi(0)+\eps), \quad t\in(0,T). $$ In particular, if $\chi(0)\to 0$ as $\eps\to 0$, we have $$ \sup_{t\in(0,T)}\chi(t) \to 0\quad\mbox{as }\eps\to 0. 
$$ \end{theorem} \begin{proof} We estimate the integrals on the right-hand side of the relative energy inequality \eqref{4.rei2}. The second and third terms can be estimated in the same way as \eqref{eq:J1} and \eqref{eq:J2}, and they are bounded by $C\int_0^t \chi(s) ds$. We split the last term on the right-hand side of \eqref{4.rei2} into two parts: $$ -\int_0^t\int_{\R^3}\sum_{i=1}^n\rho_i^\eps(v_i^\eps-\bar v)\cdot \bigg(\frac{\diver \bar S}{\bar\rho} - \frac{\diver \bar S_i}{\bar\rho_i}\bigg)dxds = L_1 + L_2, $$ where \begin{align*} L_1 &= -\int_0^t\int_{\R^3}\sum_{i=1}^n \rho_i^\eps(v_i^\eps-v^\eps) \cdot\bigg(\frac{\diver \bar S}{\bar\rho} - \frac{\diver \bar S_i}{\bar\rho_i}\bigg)dxds, \\ L_2 &= -\int_0^t\int_{\R^3}\sum_{i=1}^n \rho_i^\eps(v^\eps-\bar v) \cdot\bigg(\frac{\diver \bar S}{\bar\rho} - \frac{\diver \bar S_i}{\bar\rho_i}\bigg)dxds. \end{align*} Using $\sum_{i=1}^n\rho_i^\eps(v_i^\eps-v^\eps)=0$, we infer that \begin{align*} L_1 &= \int_0^t\int_{\R^3}\sum_{i=1}^n \rho_i^\eps(v_i^\eps-v^\eps) \cdot\frac{\diver \bar S_i}{\bar\rho_i}dxds \\ &\le \frac{\nu}{2\eps}\int_0^t\int_{\R^3}\sum_{i=1}^n(\rho_i^\eps)^2 |v_i^\eps-v^\eps|^2 dxds + C\eps\int_0^t\int_{\R^3}\sum_{i=1}^n \bigg(\frac{\diver \bar S_i}{\bar\rho_i}\bigg)^2 dxds \\ &\le \frac{\nu}{2\eps}\int_0^t\int_{\R^3}\sum_{i=1}^n (\rho_i^\eps)^2 |v_i^\eps-v^\eps|^2 dxds + C\eps t. \end{align*} Using \eqref{4.h7}, we conclude that $$ L_1 \le \frac{1}{4\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps |v_i^\eps-v_j^\eps|^2 dxds + C\eps t.
$$ To estimate $L_2$, recall that $\bar S=\sum_{i=1}^n\bar S_i$ and $\rho^\eps=\sum_{i=1}^n\rho_i^\eps$, yielding \begin{align} L_2 &= -\int_0^t\int_{\R^3}(v^\eps-\bar v)\cdot\sum_{i=1}^n\bigg( \frac{\rho^\eps}{\bar\rho}\diver \bar S_i - \frac{\rho_i^\eps}{\bar\rho_i} \diver\bar S_i\bigg)dxds \nonumber \\ &= -\int_0^t\int_{\R^3}\sum_{i=1}^n\bigg(\frac{1}{\bar\rho} - \frac{\rho_i^\eps}{\bar\rho_i\rho^\eps}\bigg)\rho^\eps (v^\eps-\bar v)\cdot(\diver \bar S_i)dxds \nonumber \\ &\le \int_0^t\int_{\R^3}\rho^\eps|v^\eps-\bar v|^2 dxds + C\int_0^t\int_{\R^3}\rho^\eps\sum_{i=1}^n\bigg(\frac{1}{\bar\rho} - \frac{\rho_i^\eps}{\bar\rho_i\rho^\eps}\bigg)^2 dxds. \label{4.K2} \end{align} To estimate the first term on the right-hand side, we need the uniform lower and upper bounds for $\rho_i^\eps$: \begin{align*} \rho^\eps|v^\eps-\bar v|^2 &= \frac{1}{\rho^\eps}\bigg|\sum_{i=1}^n\rho_i^\eps(v_i^\eps-\bar v)\bigg|^2 \le \frac{n}{\rho^\eps}\sum_{i=1}^n(\rho_i^\eps)^2|v_i^\eps-\bar v|^2 \le \frac{nK}{\kappa}\sum_{i=1}^n\rho_i^\eps|v_i^\eps-\bar v|^2. \end{align*} The last term in \eqref{4.K2} can be estimated according to $$ \sum_{i=1}^n\bigg(\frac{1}{\bar\rho} - \frac{\rho_i^\eps}{\bar\rho_i\rho^\eps}\bigg)^2 = \sum_{i=1}^n\bigg(\frac{\rho^\eps-\bar\rho}{\rho^\eps\bar\rho} + \frac{\bar\rho_i-\rho_i^\eps}{\rho^\eps\bar\rho_i}\bigg)^2 \le C\sum_{i=1}^n(\rho_i^\eps-\bar\rho_i)^2. $$ Therefore, $$ L_2 \le C\int_0^t\int_{\R^3}\sum_{i=1}^n\big(\rho_i^\eps|v_i^\eps-\bar v|^2 + (\rho_i^\eps-\bar\rho_i)^2\big)dxds \le C\int_0^t\chi(s)ds. $$ Finally, as in the proof of Theorem \ref{thm.convCE}, ${\mathcal E}_{\rm tot}(\bm{\rho}^\eps,\bm{m}^\eps|\bar\bm{\rho},\bar{\bm{m}})(t)\ge C\chi(t)$. We conclude that \begin{equation}\label{4.estchi} \chi(t) + \frac{1}{4\eps}\int_0^t\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i^\eps\rho_j^\eps |v_i^\eps-v_j^\eps|^2 dxds \le \chi(0) + C\int_0^t\chi(s)ds + C\eps t. \end{equation} An application of Gronwall's lemma then finishes the proof.
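For completeness, the Gronwall step is as follows: dropping the nonnegative friction term in \eqref{4.estchi} leaves $\chi(t)\le\chi(0)+C\eps t+C\int_0^t\chi(s)ds$, and since $t\mapsto\chi(0)+C\eps t$ is nondecreasing, Gronwall's lemma yields $$ \chi(t) \le \big(\chi(0)+C\eps t\big)e^{Ct} \le e^{CT}\big(\chi(0)+CT\eps\big), \quad t\in(0,T), $$ which is the asserted bound $\chi(t)\le C(\chi(0)+\eps)$ with a constant depending on $T$ but not on $\eps$.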
\end{proof} \begin{remark}\rm In the previous proof, the interaction term involving $b_{ij}$ was crucial to estimate the term $L_1$. The symmetry of $(b_{ij})$ enables us to control the kinetic energy by the interaction energy, $$ \int_{\R^3}\sum_{i=1}^n\rho_i^2|v_i-v|^2dx \le \frac{1}{2\nu}\int_{\R^3}\sum_{i,j=1}^n b_{ij}\rho_i\rho_j|v_i-v_j|^2dx. $$ In the single-component case, the interaction energy vanishes, and we recover Theorem 3 in \cite{GLT17}. \end{remark}
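The kinetic-energy control above can be probed numerically: for randomly drawn densities, a symmetric positive matrix $(b_{ij})$, and velocities $v_i$, the friction dissipation is strictly positive whenever some $v_i$ deviates from the barycentric velocity. This is a sanity check of the sign structure only, not a proof of a uniform constant $\nu$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
ratios = []
for _ in range(200):
    rho = rng.uniform(0.5, 2.0, n)            # densities bounded away from zero
    B = rng.uniform(0.5, 2.0, (n, n))
    B = 0.5 * (B + B.T)                       # symmetric positive friction matrix b_ij
    v = rng.normal(size=(n, 3))               # component velocities
    vbar = (rho @ v) / rho.sum()              # barycentric velocity

    dissipation = 0.5 * sum(B[i, j] * rho[i] * rho[j] * np.sum((v[i] - v[j]) ** 2)
                            for i in range(n) for j in range(n))
    kinetic = sum(rho[i] ** 2 * np.sum((v[i] - vbar) ** 2) for i in range(n))
    ratios.append(dissipation / kinetic)

nu_empirical = min(ratios)                    # strictly positive in every trial
```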
\section{Introduction} Central compact objects (CCOs) constitute a group of X-ray-emitting, radio-quiet neutron stars found near the centers of supernova remnants (SNRs). To date, X-ray pulsations have been firmly detected from only three CCOs. They have relatively long spin periods ($0.1-0.4$ s), and long-term monitoring shows that their period derivatives ($\dot{P}\equiv {\rm d}P/{\rm d}t$) are remarkably small, suggesting weak surface magnetic fields\footnote{By assuming dipole spin-down, the surface field strength can be inferred from $B_{\rm surf} \equiv3.2\times10^{19}(P\dot{P})^{1/2}$\,G, where $P$ is in seconds.}. See \citet{Halpern10}, \citet{Got09,Got13}, and \citet{Ho13} for observations and an overview of related theory. Due to the limited sample of CCOs that exhibit X-ray pulsations, the physical mechanism responsible for their X-ray emission is not well understood, and their active lifetime and long-term evolution are poorly constrained. It remains unclear if CCOs are active radio pulsars beamed away from us or if the radio emission mechanism is intrinsically inoperative. Since CCOs are associated with very young SNRs, their nature and evolution are highly relevant to the neutron star production rate and the physics underlying the diversity of neutron stars produced by core collapse. The compact X-ray source CXOU J185238.6+004020 was discovered in the center of the SNR Kesteven 79 by \citet{Seward02}. Subsequently, \citet{Got05} discovered 105 ms pulsations from this CCO, now named PSR J1852+0040, establishing it as a young neutron star. A dedicated long-term X-ray timing campaign of PSR J1852+0040 facilitated the first definite measurement of the spin-down rate of a CCO pulsar by \citet{Halpern10}. The measurements of $P= 0.105$ s and $\dot{P}=8.7 \times10^{-18}$ s s$^{-1}$ imply, in the dipole spin-down formalism, a surface magnetic field strength of only $B_{\rm surf} = 3.1\times10^{10}$ G; on this basis, it has been termed an ``anti-magnetar''.
With a bolometric luminosity of $3.0\times10^{33}$ erg s$^{-1}$, an order of magnitude higher than its spin-down power $\dot{E} = 4\pi^2 I \dot{P} P^{-3} = 3.0 \times 10^{32}$ erg s$^{-1}$ (for a moment of inertia $I=10^{45}$ g cm$^2$), the X-ray radiation from PSR J1852+0040 is clearly not powered by the rotational kinetic energy of the star, thus requiring an additional energy source, such as residual cooling or low-level accretion. The thermal X-ray emission from PSR J1852+0040 is characterized by a single unusually broad pulse, with a very high pulsed fraction of $64\%\pm2\%$. X-ray observations spanning nearly five years are consistent with steady flux. Fitting of the X-ray spectrum to two blackbodies finds small emitting radii \citep[$R_1 = 1.9$ km and $R_2 = 0.45$ km, for components of $kT_1 = 0.30$ keV and $kT_2 = 0.52$ keV, respectively;][]{Halpern10}. Such small, hot regions are common among CCOs and are at odds with the inferred magnetic field strength since highly non-uniform surface temperature is usually attributed to the effects of much stronger magnetic fields. Thus it is unclear whether CCOs are intrinsically weakly magnetized neutron stars or whether they possess substantially stronger internal magnetic fields than the measured $\sim$$10^{10}$ G surface dipole field. This fundamental question regarding the nature of CCOs has generated a flurry of theoretical efforts aimed at constraining the key physics and evolutionary fate of these enigmatic objects \citep[see, e.g.,][]{Ho11,Shab12,Vig12,Ber13,Per13}. It is highly likely that the heat distribution on the stellar surface closely traces the magnetic field structure. Therefore, constraining the surface emission properties and heat distribution of PSR J1852+0040 can help resolve this essential mystery of CCOs. In this paper, I present modeling of the pulsed thermal X-ray emission from PSR J1852+0040 aimed at constraining key aspects of CCO physics based on the extensive set of archival \textit{XMM-Newton} observations.
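The quoted field strength and spin-down power follow directly from the measured timing parameters; in the sketch below the conventional fiducial moment of inertia $I=10^{45}$ g cm$^2$ is assumed:

```python
import math

P, Pdot = 0.105, 8.7e-18       # measured spin period [s] and period derivative [s/s]
I = 1e45                       # fiducial neutron-star moment of inertia [g cm^2] (assumed)

B_surf = 3.2e19 * math.sqrt(P * Pdot)       # dipole surface field [G], footnote formula
E_dot = 4.0 * math.pi**2 * I * Pdot / P**3  # spin-down power [erg/s]

L_bol = 3.0e33                 # bolometric luminosity [erg/s]
ratio = L_bol / E_dot          # X-ray output exceeds the spin-down power roughly tenfold
```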
The work is organized as follows. In \S2 I summarize the archival data set and the data reduction procedures. In \S3 I describe the numerical model employed in this study, while in \S4 I show the results of the modeling. In \S5 I present a pulse-phase-resolved spectroscopic analysis. I discuss the implications of the results in \S6 and offer conclusions in \S7. \begin{deluxetable}{rcc} \tablewidth{0pt} \tablecaption{\textit{XMM-Newton} X-ray Timing Observations of PSR J1852+0040.} \tablehead{ \colhead{ObsID} & \colhead{Date} & \colhead{Exposure\tablenotemark{a}} \\ \colhead{} & \colhead{(UT)} & \colhead{(ks)}} \startdata 0204970201 & 2004 Oct 18 & 30.6 \\ 0204970301 & 2004 Oct 23 & 30.5 \\ 0400390201 & 2006 Oct 08 & 29.7 \\ 0400390301 & 2007 Mar 20 & 30.5 \\ 0550670201 & 2008 Sep 19 & 21.2 \\ 0550670301 & 2008 Sep 21 & 31.0 \\ 0550670401 & 2008 Sep 23 & 34.8 \\ 0550670501 & 2008 Sep 29 & 33.0 \\ 0550670601 & 2008 Oct 10 & 36.0 \\ 0550671001 & 2009 Mar 16 & 27.0 \\ 0550670901 & 2009 Mar 17 & 26.0 \\ 0550671201 & 2009 Mar 23 & 27.3 \\ 0550671101 & 2009 Mar 25 & 19.9 \\ 0550671301 & 2009 Apr 04 & 26.0 \\ 0550671901 & 2009 Apr 10 & 30.5 \\ 0550671801 & 2009 Apr 22 & 28.0 \enddata \tablenotetext{a}{Total observing time not corrected for the 29\% dead time of the EPIC pn small window mode.} \label{logtable} \end{deluxetable} \section{Data Reduction} I have retrieved the set of 16 archival \textit{XMM-Newton} European Photon Imaging Camera (EPIC) pn \citep{struder01} observations of PSR J1852+0040, for a combined 327 kiloseconds of net exposure time (see Table 1). All exposures were obtained in small window mode, which affords a 5.7 ms time resolution but at a cost of 29\% dead time during which no X-ray events are recorded.
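As a cross-check, the quoted 327 ks of net exposure is consistent with the individual exposures in Table 1 once the 29\% small-window dead time is removed:

```python
# exposures from Table 1 [ks]; the listed values are total on-source times
exposures_ks = [30.6, 30.5, 29.7, 30.5, 21.2, 31.0, 34.8, 33.0,
                36.0, 27.0, 26.0, 27.3, 19.9, 26.0, 30.5, 28.0]

total_ks = sum(exposures_ks)       # 462.0 ks of on-source time
net_ks = total_ks * (1.0 - 0.29)   # live time after the 29% dead-time correction, ~328 ks
```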
Each ODF data set was reprocessed with the SAS\footnote{The \textit{XMM-Newton} SAS is developed and maintained by the Science Operations Centre at the European Space Astronomy Centre and the Survey Science Centre at the University of Leicester.} version xmmsas\_20120523\_1702-12.0.0 {\tt epchain} pipeline to ensure that the latest calibration products and clock corrections (including leap seconds) are applied. The data were then filtered using the recommended standard pattern, flag, and pulse invariant values. None of the observations exhibit instances of high background flares. For the purposes of the analyses presented below, the photon arrival times from each observation were translated to the solar system barycenter with the SAS {\tt barycen} tool assuming the DE405 solar system ephemeris. The corrected arrival times were folded coherently at the pulsar period using the ephemeris presented in \citet{Halpern10}. Relative to this previous analysis, which was based on the same data set but processed with SAS version xmmsas\_20060628\_1801-7.0.0, there is a systematic offset of $+$2.9 ms in all photon arrival times. This difference can be attributed to improvements in the \textit{XMM-Newton} clock corrections. For all practical purposes this discrepancy is negligible and does not affect the results and conclusions of \citet{Halpern10}. The X-ray events from the pulsar were extracted from a 12$''$ radius circle. This relatively small region was chosen so as to minimize the contribution of the diffuse emission from the supernova remnant. An important prerequisite for the pulse profile analysis described below is a reliable estimate of the background level at the source position. However, due to the relatively bright diffuse emission, coupled with the complicated morphology of the portion of the remnant that falls within the small-window mode \textit{XMM-Newton} images, there is no obvious choice for a background extraction region.
For this purpose, I take advantage of the sub-arcsecond resolution of the archival 30 ks \textit{Chandra} ACIS-S image of Kes 79 (ObsID 1982) to identify a representative background region. I estimated the background level at the pulsar position by extracting counts from an annulus with inner radius of $2''$, beyond which the point source emission becomes negligible, and outer radius of $12''$. The resulting value was used to identify a larger background region with a matching surface brightness (i.e., count rate per unit area). \section{The Numerical Model} \subsection{System Geometry and General Relativity} To study the pulsed X-rays from PSR J1852+0040, I employ a numerical model of surface emission from a neutron star assuming a Schwarzschild metric to describe the properties of the space-time in the vicinity of the star. It follows the basic formalism first presented by \citet{Pech83} and used in a host of subsequent works \citep[e.g.,][]{Ftac86,Riff88,Mill98,Crop01,Wein01,Belo02,Pou03,Vii04,Got10}. I represent the thermally-emitting pulsar by a neutron star of mass $M$, radius $R_{NS}$, spin period $P$, with different arrangements of X-ray-emitting surface elements on an otherwise cold neutron star. The surface normal of each element is at a position angle $\alpha$ relative to the spin axis, while the line of sight to the observer is at an angle $\zeta$ relative to the spin axis. The location of an X-ray-emitting surface element on a neutron star relative to the observer as a function of the time-varying rotational phase of the pulsar, $\phi(t)$, is then defined by the angle $\theta$ between the normal to the surface and the line of sight: \begin{equation} \cos\theta(t)=\sin \alpha \sin \zeta \cos \phi (t) + \cos \alpha \cos \zeta \end{equation} As the pulsar rotates, the varying projection of the emission area(s) causes flux variations (i.e. pulsations), with shape and amplitude determined in great part by the combination of $\alpha$ and $\zeta$. 
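Equation (1) is straightforward to evaluate numerically; for an illustrative geometry (the angles below are example values, not fitted ones) the extrema of $\cos\theta$ occur at $\phi=0$ and $\phi=\pi$, where the expression reduces to $\cos(\alpha-\zeta)$ and $\cos(\alpha+\zeta)$, respectively:

```python
import numpy as np

alpha, zeta = np.radians(70.0), np.radians(60.0)  # example angles, not fitted values
phi = np.linspace(0.0, 2.0 * np.pi, 721)          # rotational phase over one cycle

# equation (1): cosine of the angle between the surface normal and the line of sight
cos_theta = (np.sin(alpha) * np.sin(zeta) * np.cos(phi)
             + np.cos(alpha) * np.cos(zeta))

visible = cos_theta > 0.0   # spot visibility in flat spacetime; light bending widens this
```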
Note that by convention $\alpha$ is reckoned from the spin pole towards the equator. The observed flux per unit energy from an emitting region is given by \begin{equation} F(E)=I(E){\rm d}\Omega \end{equation} where $I(E)$ is the intensity of the radiation as measured by a distant observer and ${\rm d}\Omega$ is the apparent solid angle subtended by the emission region. Transforming both quantities to the rest frame of the emitting region yields \begin{equation} F(E)=(1-R_S/R_{NS})^{1/2} I'(E',\theta')\cos\theta\frac{{\rm d} \cos\theta}{{\rm d} \cos\psi} \frac{{\rm d}S'}{D^2} \end{equation} where the primed quantities are measured in the NS surface rest frame \citep{Pou03}, with ${\rm d}S\cos\theta={\rm d}S'\cos\theta'$. $R_S\equiv 2GM/c^2$ is the Schwarzschild radius, $I'(E',\theta')$ is the emergent intensity, which may be a function of emission angle in addition to energy, ${\rm d}S'$ is the emission area and $D$ is the distance. A photon emitted at an angle $\theta>0$ with respect to the local radial direction follows a curved trajectory and is observed at infinity at an angle $\psi>\theta$. The relation between these two angles is given by \citep{Pech83}: \begin{equation} \psi= \int_{R}^{\infty}\frac{{\rm d}r}{r^2}\left[\frac{1}{b^2}-\frac{1}{r^2}\left(1-\frac{R_S}{r}\right)\right]^{-1/2} \end{equation} where \begin{equation} b=\frac{R_{NS}}{\sqrt{1-R_S/R_{NS}}}\sin\theta \end{equation} is the impact parameter at infinity of a photon emitted from radius $R_{NS}$ at an angle $\theta$. For most real-world applications, including the analysis presented herein, a simplified approximate relation between $\psi$ and $\theta$ \citep{Belo02} can be used: \begin{equation} \cos\psi\approx\frac{\cos\theta-R_S/R_{NS}}{1-R_S/R_{NS}} \end{equation} which is valid for $R_{NS} > 2R_S$. 
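The accuracy of the approximate relation can be checked against the exact integral of equation (4); substituting $u=1/r$ makes the integration domain finite, and a simple trapezoidal quadrature suffices away from grazing emission angles. The sketch below assumes $R_{NS}=3R_S$:

```python
import numpy as np

def psi_exact(theta, R, Rs, num=200001):
    """Exact bending integral, equation (4), with the substitution u = 1/r."""
    b2 = R**2 * np.sin(theta)**2 / (1.0 - Rs / R)   # impact parameter squared, eq. (5)
    u = np.linspace(0.0, 1.0 / R, num)
    f = 1.0 / np.sqrt(1.0 / b2 - u**2 * (1.0 - Rs * u))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(u)))  # trapezoidal rule

def cos_psi_approx(theta, R, Rs):
    """Approximate relation of equation (6)."""
    return (np.cos(theta) - Rs / R) / (1.0 - Rs / R)

R, Rs = 3.0, 1.0   # radii in units of the Schwarzschild radius: R_NS = 3 R_S
errors = [abs(np.cos(psi_exact(np.radians(d), R, Rs)) - cos_psi_approx(np.radians(d), R, Rs))
          for d in (10.0, 30.0, 50.0, 70.0)]
```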
This approximation greatly boosts the computational speed of the model while still maintaining a high degree of accuracy ($\lesssim$3\% error for $R \ge 3R_S$), allowing a thorough exploration of the model phase space and implementation of more complex emission regions. Owing to the relatively long spin period, special relativistic effects, such as Doppler boosting and aberration, as well as travel time differences are completely negligible \citep{Pou03}. The total observed flux for a given rotational phase is found by relating $\phi$ and $\theta$ for a given emitting region through $\psi$ via equations (1) and (4), using the desired $I'(E',\theta')$ in equation (3), and summing the computed flux from all surface elements. This approach can be used to construct an arbitrary emission region on the NS surface by considering any number and arrangement of surface elements, provided they are sufficiently point-like so as not to introduce significant errors in the model \citep[see, e.g.,][]{Tur13}. \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{f1.eps} \end{center} \caption{The relative intensities as a function of angle with respect to the surface normal ($\theta$) for the three emission patterns considered in this analysis: isotropic (dotted), cosine beaming (dashed), and ``pencil plus fan'' beaming (solid).} \end{figure} \subsection{Surface Emission Model} The surface composition of PSR J1852+0040, and CCOs in general, is highly uncertain as there are multiple plausible possibilities. For instance, a light element atmosphere may, in principle, build up due to spallation of fallback material after the supernova explosion. Due to gravitational sedimentation, as the lightest elements, hydrogen or helium are expected to surface rapidly and thus dominate the surface emission if a layer thicker than $\sim$1 cm accumulates \citep{Chang04}.
On the other hand, if no accretion takes place or if thermonuclear reactions occur after accretion, a mid-Z element \citep[see, e.g.,][]{Mori07,Ho09,Chang10} or iron atmosphere may be present. The same is likely to be the case if any pulsar wind outflow is active that prevents accretion of material from the remnant. It is also quite possible that the stellar surface is devoid of an atmospheric layer, in which case the emission from the condensed neutron star surface may be reasonably well approximated by a blackbody \citep{Pot12}. Although based on the spin-down measurement the implied surface field at the magnetic equator of PSR J1852+0040 is $3.1\times10^{10}$ G, the strong X-ray pulsations may be a manifestation of a substantially stronger field at the location of the hot regions, although the value cannot be easily determined as there are no obvious absorption features (see \S5.1). This poses an additional difficulty in choosing the appropriate surface radiation model, considering that the emission characteristics of neutron star atmospheres can differ markedly between $\sim$$10^{10}$ G \citep{Sule12}, $\sim$$10^{12}$ G \citep{Pavlov94,Zavlin95}, and $\gtrsim$$10^{14}$ G \citep[e.g.,][]{vanAd06}. As a consequence, any inferences drawn from modeling the thermal emission are likely to be dependent on the true surface magnetic field strength and its orientation relative to the surface. For strongly magnetized atmospheres ($\gtrsim$$10^{12}$ G), a narrow ``pencil'' beam along the direction of the magnetic field can also appear in instances when the observer’s line of sight crosses the magnetic field lines, as well as a broad ``fan'' beam with peak intensity at intermediate angles with respect to the surface normal \citep[see, e.g.,][]{Pavlov94,Zavlin95}. For weakly magnetic models ($\lesssim$$10^{10}$ G), the emergent intensity declines with increasing angle with respect to the surface normal, resulting in a limb-darkening effect \citep{Rom87,Zavlin96}. 
For atmospheres with $\sim$$10^{10-11}$ G, the emission is strongly beamed at photon energies coincident with the harmonics of the cyclotron resonance frequency, with the strongest beaming occurring at the fundamental frequency and becoming progressively weaker with increasing harmonic number. Away from the cyclotron absorption lines, the emission generally declines with increasing angle away from the surface normal \citep[see][in particular their Figure 7]{Sul10}. Based on this information, to account for the variety of possible angle-dependent intensity patterns of the thermal radiation from PSR J1852+0040, I consider three possibilities: (i) a standard isotropically emitting Planck spectrum; (ii) an emission model with a cosine dependence of the intensity as a function of emission angle relative to the surface normal as a proxy for a weakly magnetic neutron star atmosphere, including a $\sim$$10^{10}$ G light-element atmosphere at photon energies away from the cyclotron harmonics\footnote{As shown in \S5, the phase-resolved spectra of PSR J1852+0040 exhibit no strong cyclotron harmonics so this assumption is appropriate.}; and (iii) a ``pencil plus fan'' beam pattern characteristic of strongly magnetic atmospheres for the case of a magnetic field perpendicular to the surface\footnote{As an approximation, I adopt the H atmosphere beaming pattern from \citet{Pavlov94} for $T_{\rm eff}=10^6$ K and $B=4.7\times10^{12}$ G at a photon energy of 2.28 keV.}. The three emission patterns are illustrated in Figure 1. Although none of the models account for the energy-dependence of the emergent intensity patterns of realistic atmosphere models \citep[][]{Rom87,Shib92,Pavlov94,Zavlin95}, for the purposes of this analysis the latter two provide an adequate representation of the angular dependence (i.e.~``beaming'') produced by a variety of neutron star atmospheres, while being substantially less computationally demanding than the full models.
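For concreteness, the three intensity patterns can be encoded as simple functions of the emission angle, normalized here via the emergent flux integral $F=2\pi\int_0^{\pi/2}I(\theta)\cos\theta\sin\theta\,{\rm d}\theta$. The pencil-plus-fan shape below is only an illustrative parametrization (a narrow Gaussian along the normal plus a broad lobe near $60^{\circ}$), not the tabulated \citet{Pavlov94} profile:

```python
import numpy as np

def isotropic(theta):
    """Angle-independent (blackbody-like) intensity."""
    return np.ones_like(theta)

def cosine_beaming(theta):
    """Limb-darkened proxy for a weakly magnetic atmosphere."""
    return np.cos(theta)

def pencil_plus_fan(theta, w_pencil=0.1, theta_fan=np.radians(60.0), w_fan=0.4):
    """Illustrative shape only: narrow pencil along the normal plus a broad fan lobe."""
    return np.exp(-(theta / w_pencil) ** 2) + 0.5 * np.exp(-((theta - theta_fan) / w_fan) ** 2)

theta = np.linspace(0.0, np.pi / 2.0, 2001)

def emergent_flux(intensity):
    """F = 2*pi * integral of I(theta) cos(theta) sin(theta) over the visible hemisphere."""
    y = intensity(theta) * np.cos(theta) * np.sin(theta)
    return float(2.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(theta)))
```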
Therefore, although the exact values of the parameters derived throughout this analysis may not correspond to the actual values, the general conclusions regarding the emission properties and heat distribution of the stellar surface should be robust. Neutron star atmospheres have the general property of producing continuum radiation with peak intensities at higher energies relative to a Planck spectrum for the same effective temperature \citep{Rom87,Shib92,Ho01}. As a consequence, when applied to thermal spectra they tend to yield lower temperatures and hence larger inferred emitting areas compared to a blackbody model. To account for this property while minimizing the additional computational cost, for the cosine beaming model, I use the empirical relation for non-magnetic H atmospheres given by McClintock et al.~(2004; see in particular their equations A17 and A18). For the pencil plus fan beam model, I implement a ``color correction'', obtained as follows. The spectrum of PSR J1852+0040 was fitted separately with a blackbody model and a magnetic {\tt nsa} atmosphere with $1\times10^{12}$ G. The ratio of the derived emitting areas from the two models as a function of temperature was used as a multiplicative factor to correct the flux normalization in the pulse profile fits in order to obtain emitting areas comparable to those of an actual magnetic atmosphere model. \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{j1852_lc_tot_final.ps} \includegraphics[width=0.45\textwidth]{j1852_2lc_resid_final.ps} \end{center} \caption{(\textit{Top}) \textit{XMM-Newton} EPIC pn background-subtracted pulse profile of PSR J1852+0040 in the $1-5$ keV interval. (\textit{Middle}) Pulse profiles in the $1-1.77$ and $1.78-5$ keV bands. The solid lines show the best fit for a rotating neutron star with two circular, antipodal hot spots for a ``pencil plus fan'' beam pattern.
The dashed lines represent the best fit models of a rotating NS with a longitudinally extended hot strip across the surface for a cosine beaming emission model. The dotted lines correspond to the best fit for a rotating neutron star with two circular, non-antipodal hot spots for a cosine beaming model. The cosine and isotropic models produce virtually identical pulse profiles and residuals for both cases so only the former is shown. In all cases, a neutron star with $M=1.4$ M$_{\odot}$ and $R_{NS}=12$ km is assumed. The bottom three panels show the best fit residuals for the circular polar cap and strip models. Two rotational cycles are shown for clarity.} \end{figure} \subsection{Emission Region Geometry} The spectra and pulse profiles accumulated from PSR J1852+0040 imply one or more multi-temperature hot emission regions that are significantly smaller than the full neutron star surface. For pulsars in general, the location and geometry of the heated regions are determined by the magnetic field structure at or beneath the stellar surface \citep[e.g.,][]{Heyl98,Heyl01,Pot01,Gep99,Gep06,Perez06,Pons09}, meaning that the surface emission can serve as a valuable tracer of the field topology. Previous studies by \citet{Shab12} and \citet{Per13} have attempted to reproduce the observed pulse properties of PSR J1852+0040 by computing the expected surface heat signature of various assumed magnetic field configurations. However, for the temperature distributions and emission models considered in these investigations, the broad pulse shape and the large pulse amplitude could not be simultaneously accounted for, hinting at a strongly anisotropic emission pattern and/or a non-standard arrangement of magnetic fields.
Herein, rather than start from an assumed magnetic field configuration, I adopt the converse approach and aim to deduce the surface emission properties and magnetic field topology based on the heat distributions that can reproduce the phenomenology of PSR J1852+0040 by fitting the synthetic pulse profiles directly to the X-ray data. For many thermally-emitting pulsars, a pair of circular hot spots, presumably corresponding to the pulsar magnetic polar caps, provides an adequate description of the observed thermal X-ray pulse profile. Based on this, I consider antipodal as well as non-antipodal polar caps (arising, for instance, due to an offset dipole), following both the treatment of point-like hot spots presented in \citet{Belo02} and of extended circular polar caps described in \citet{Got10} and \citet{Tur13}. The unusual pulse morphology offers qualitative insight regarding the possible atmosphere emission pattern as well as the heat distribution on the stellar surface. In particular, the broad and effectively flat pulse peak requires that the flux from the star appear essentially unchanged to the observer for $\sim$20--30\% of the rotation period. This can be produced by either a strongly anisotropic emission pattern or a region on the surface that is elongated in the direction of rotation ($\phi$). To explore the latter possibility, I focus on regions on the surface that are much more extended in longitude than in latitude. The simplest way to describe such a high aspect ratio region using $\alpha$ and $\phi$ is to consider a strip of emission at constant latitude, which can be parameterized by angular extents in longitude and latitude ($\Delta\phi$ and $\Delta\alpha$, respectively), and the values of $\alpha_o$ and $\phi_o$ of the geometric center of the emitting region. 
For such a longitudinal strip, the area is obtained by integrating over the region on a sphere, \begin{eqnarray} A_{\rm strip} &=& R_{NS}^2 \int_{\phi_o-\Delta\phi}^{\phi_o+\Delta\phi} \int_{\alpha_o-\Delta\alpha}^{\alpha_o+\Delta\alpha} \, \sin \alpha \, \mathrm{d}\alpha \, \mathrm{d}\phi \nonumber \\ &=& 2R_{NS}^2\Delta\phi[\cos(\alpha_o-\Delta\alpha)-\cos(\alpha_o+\Delta\alpha)]. \end{eqnarray} This heat distribution can be easily modeled using Equation 3 by dividing the emission region into a grid consisting of smaller surface elements, each with an area defined by Equation 7. Although the rectangular shape of the strips as defined is obviously not natural, given the available photon statistics such a geometry is indistinguishable from a more plausible one, such as an elliptical region or a strip with rounded corners or semicircular end caps. Moreover, the computational speed afforded by this simple parametrization allows a thorough exploration of the model phase space to identify the general type of heat distributions that can reproduce the data. \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{f3a.ps}\\ \includegraphics[width=0.45\textwidth]{f3b.ps}\\ \includegraphics[width=0.45\textwidth]{f3c.ps} \end{center} \caption{Hammer-Aitoff equal area projection of the surface of PSR J1852+0040 showing the likely temperature distribution inferred from the pulse profile modeling for an assumed neutron star radius of 12 km and mass $1.4$ M$_{\odot}$. The hot and warm regions are shown in blue and red, respectively. The isotropic, cosine, and pencil plus fan beam emission pattern results are shown from top to bottom, respectively.
For the last case, the secondary polar cap does not contribute to the observed emission so only its outline is shown for reference.} \end{figure} \section{Pulse Profile Modeling Results} To enable a direct comparison with observations, the synthetic pulse profiles generated using the model described above were first convolved with the EPIC pn detector response. As a consequence of the high hydrogen column density along the line of sight towards the pulsar, little useful spectral information is available below $\sim$1 keV. Based on this, I chose two energy bands that allow some sensitivity to the spectral shape of the radiation: the fits were performed simultaneously in the 1.0--1.77 and 1.78--5.0 keV bands, in which the warm and hot thermal components dominate, respectively. Cooler surface emission from the rest of the neutron star is likely negligible above $\sim$1 keV so it is not modeled. Throughout the analysis, I assume a neutron star with $M=1.4$ M$_{\odot}$ and $R_{NS}=12$ km at a distance of $D=7.1$ kpc and $N_{\rm H}=1.52\times10^{22}$ cm$^{-2}$ based on \citet{Gia09}. To assess the dependence of the results on the highly uncertain neutron star compactness, the analysis was repeated for other values of $R_{NS}$ in the range $9-15$ km. In the formal fits to the folded light curves I consider the allowed range of values for $\alpha$ and $\zeta$ ($0^{\circ}-180^{\circ}$), and the range of acceptable emission region areas and temperatures as deduced from spectral fits. In the case of the circular cap model, I also consider the radius of each hot spot, while in the longitudinal strip model, the angular extents in longitude and latitude, $\Delta\phi$ and $\Delta\alpha$, are additional free parameters. Constraints on these parameters were derived via Monte Carlo simulations of $5\times 10^3$ realizations for each combination of stellar mass, radius and one of the three emission models described in \S3.2.
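As a sanity check on the strip parametrization described above, the closed-form area of a constant-latitude strip (Equation 7) can be verified against a brute-force sum over small surface elements, mimicking the gridding used in the pulse profile modeling. The numerical values below are illustrative only.

```python
import math

def strip_area(R, alpha_o, d_alpha, d_phi):
    """Closed-form area (Equation 7) of a strip on a sphere of radius R,
    centered at colatitude alpha_o with half-widths d_alpha, d_phi (rad)."""
    return 2.0 * R**2 * d_phi * (math.cos(alpha_o - d_alpha)
                                 - math.cos(alpha_o + d_alpha))

def strip_area_numeric(R, alpha_o, d_alpha, d_phi, n=2000):
    """Midpoint Riemann sum of R^2 sin(alpha) dalpha dphi over the strip,
    i.e. a grid of small surface elements as in the fits."""
    da = 2.0 * d_alpha / n
    total = 0.0
    for i in range(n):
        alpha = alpha_o - d_alpha + (i + 0.5) * da
        total += math.sin(alpha) * da
    return R**2 * total * 2.0 * d_phi

R_NS = 12.0  # km, as assumed in the fits; strip angles below are arbitrary
```

For any choice of strip center and half-widths, the two expressions agree to the accuracy of the quadrature, confirming the closed form used in the modeling.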
In the pulse profile fits, the emission regions were adaptively resized based on the input values of the angular extent of the entire region in each direction. The number of surface elements ($90$ and $45$ in the $\phi$ and $\alpha$ directions, respectively) was chosen so as to ensure that the size of each is effectively point-like, which is the case for angular extents $\lesssim$$5^{\circ}$ \citep[see][]{Tur13}. For both the polar cap and strip geometries, the hot and warm regions were allowed to intersect such that in the overlap region the emission is solely due to the hot component. \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{f4.eps} \end{center} \caption{Summary of results of the pulse profile fits of PSR J1852+0040 derived from Monte Carlo simulations, assuming a neutron star with $M=1.4$ M$_{\odot}$ and $R_{NS}=12$ km. The solid lines and dotted lines correspond to the beamed and isotropic emission models, respectively. (\textit{Top}) The fraction of the stellar circumference subtended by the hot (blue) and warm (red) emission regions in longitude at the latitude of the centroid of the strip, $\Delta\phi/2\pi$. (\textit{Bottom}) Aspect ratio ($\Delta\alpha/\Delta\phi$) of the angular extent of the X-ray emitting regions in latitude ($\Delta\alpha$) and longitude ($\Delta\phi$).} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{f5.eps} \end{center} \caption{The constraints on the angle $\alpha$ of the centroid of the hot (\textit{red}) and warm (\textit{blue}) regions and the viewing angle $\zeta$ (\textit{magenta}). The results for the isotropic, cosine, and pencil plus fan beam patterns are shown from top to bottom, respectively. For the latter, the hot and warm regions are co-located so $\alpha$ is identical for both.
In all cases, a symmetric set of model solutions is obtained if the three angles are mirrored about the stellar equator, corresponding to 90$^{\circ}$ in this plot (\textit{green dashed line}).} \end{figure} Figure 2 shows the best fits to the \textit{XMM-Newton} EPIC pn pulse profile of PSR J1852+0040 with a model of a NS with a longitudinal heated strip, as well as a pair of circular, non-antipodal hot caps. For both the polar cap and strip geometries, the isotropic blackbody and cosine beaming models yield virtually identical best-fit model pulse profiles. For these emission patterns, it is apparent from the systematic residuals that the conventional polar cap model has difficulty simultaneously accounting for the wide and flattened peak and the narrow trough, even if the assumption of antipodal hot spots is relaxed. The best fit results in $\chi^2_{\nu}=1.48$ for 31 degrees of freedom. Therefore, if the surface emission exhibits an isotropic emission pattern or limb-darkening typical of weakly magnetized atmospheres, the heated regions on the surface of PSR J1852+0040 cannot be circular caps. In contrast, the ``pencil plus fan'' beam model can easily reproduce the pulse shape with the conventional antipodal polar cap model, yielding $\chi^2_{\nu}=0.90$ for 31 degrees of freedom. This is possible because of the fan component of the beam, which peaks at intermediate angles with respect to the surface normal (Figure 1) and is thus able to compensate for the decline in the projected area of the polar cap as the star rotates, resulting in a broad, flat-topped pulse. The best fit redshift-corrected temperatures are $T_h=(7.4\pm0.6)\times10^6$ K and $T_w=(3.2\pm0.6)\times10^6$ K, with the corresponding polar cap emission radii of $R_{h}=1.5\pm0.6$ km and $R_{w}=6.0\pm2.5$ km. The quoted uncertainties are at a 1$\sigma$ confidence level.
The elongated strip configuration is able to reproduce the pulse shape for both the isotropic and cosine beaming models, resulting in best fits with $\chi^2_{\nu}=0.95$ for 29 degrees of freedom in both instances. The isotropic emission model produces best fit temperatures (as measured at the neutron star surface) of $T_h=(7.0\pm0.6)\times10^6$ K and $T_w=(3.3\pm0.8)\times10^6$ K. The half-widths of the hot and warm regions in the latitudinal direction are $R_{\alpha_h}=0.7\pm0.3$ km and $R_{\alpha_w}=6.5\pm1.4$ km, and $R_{\phi_h}=18.9\pm4.9$ km and $R_{\phi_w}=28.2\pm7.1$ km in the longitudinal direction. For the cosine beaming model, the best fit parameters are $T_h=(3.2\pm0.5)\times10^6$ K and $T_w=(1.7\pm0.6)\times10^6$ K, $R_{\alpha_h}=0.56\pm0.11$ km and $R_{\alpha_w}=9.3\pm3.8$ km, and $R_{\phi_h}=20.9\pm6.3$ km and $R_{\phi_w}=24.2\pm8.4$ km. Note that for all three emission models, the quoted values of $R_{h}$ and $R_{w}$ correspond to arc lengths on the stellar surface. For the assumed $M=1.4$ M$_{\odot}$, the isotropic and cosine beaming emission pattern models produce no acceptable solutions for $R_{NS}\lesssim9$ km, while for the pencil plus fan beam, the same is true for $R_{NS}\lesssim8.5$ km. In the case of the isotropic model, this is expected since for more compact stars it is not possible to produce the remarkably large observed pulse amplitude because of the stronger gravitational bending of light effect, which acts to greatly diminish the amplitude of rotation-induced modulations \citep[see, e.g.,][]{Psa00}. For beamed emission, this effect is not as strong and much larger pulsed fractions can in principle be achieved for the same set of model parameters because the anisotropic emission pattern of the emergent intensity acts to counter the suppression of pulsations caused by light bending \citep[see, e.g.,][for the case of the millisecond pulsar PSR J0030+0451, with a $\sim$70\% thermal pulsed fraction]{Bog09}.
However, in the case of PSR J1852+0040 in particular, the area of the warm emission region required to produce a satisfactory fit exceeds the total surface area of a $9$ km and $8.5$ km neutron star, for the cosine and pencil plus fan beam patterns, respectively. Figure 3 illustrates the most probable geometric configurations of the emission regions deduced from the pulse profile modeling for a neutron star with radius of 12 km. Similar configurations are obtained for the range of plausible neutron star radii considered in the analysis. Figures 4 and 5 show summary plots of the various parameters of the fit based on the array of Monte Carlo simulations. Several noteworthy features of the inferred emission regions are evident. In particular, for the isotropic and cosine beaming patterns, the results favor emission regions that have substantial elongation in the longitudinal direction, with aspect ratios ranging from 3:1 for the warm emission region to nearly 100:1 for the hot component. The requirement for such extreme aspect ratios to reproduce the data explains why the conventional polar cap model cannot fit the pulse profile using these emission patterns. Even in the case of two polar caps that are adjacent and aligned in the $\phi$ direction, it is only possible to obtain an aspect ratio up to $\sim$2:1. As evident from Figure 3, for the elongated strip geometry the hot emission region tends to lie well away from the spin poles, which is a necessary condition for producing the large pulse amplitude at higher photon energies. The warm region is substantially more extended in both longitude and latitude, nearly wrapping around the star and covering up to $\sim$50\% of the entire stellar surface (see, e.g., the top two maps in Figure 3). In general, the fits favor co-located hot and warm regions, especially a thin hot strip enveloped entirely by a much larger warm region. 
This suggests that the thermal X-ray radiation originates from a single contiguous multi-temperature region. It is possible that the strips are not at constant latitude, but are instead inclined with respect to the spin equator. However, accounting for this would require the introduction of an additional free parameter, which, given the excellent fit of the current model, is not warranted by the data. Moreover, any such inclination is likely small (of order a few degrees) since a highly inclined hot strip would not reproduce the observed flat pulse. The best fit for the pencil plus fan beam model places the polar caps near the spin pole (which actually lies within the larger warm region; see bottom of Figure 3) but the highly anisotropic beaming pattern is still able to produce a large amplitude pulse. Although for the pencil plus fan beam model two identical, antipodal polar caps were assumed, for the best fit geometry, the second polar cap resides in the region not visible to the observer. As a result, identical results are obtained with a single polar cap and the properties of the second polar cap are poorly constrained. For a given surface emission model and assumed stellar radius, the geometry and the location of the emission regions are very tightly constrained, owing to the unique morphology and large amplitude of the X-ray pulsations. It should be noted, however, that if the possible values of the neutron star mass and radius, uncertain surface composition, and magnetic field strength are considered, the allowed range of the free parameters becomes quite large. In addition, although the X-ray emitting areas are assumed to be at two discrete temperatures, in reality, a smooth temperature gradient likely exists between the warm and hot regions.
\begin{figure}[!t] \begin{center} \includegraphics[width=0.45\textwidth]{j1852_4spec_final_v2.ps} \includegraphics[width=0.45\textwidth]{j1852_model_final_v3.ps} \includegraphics[width=0.45\textwidth]{j1852_spec_resid_rev2.ps} \includegraphics[width=0.45\textwidth]{j1852_4spec_bkg_final.ps} \end{center} \caption{(\textit{Top}) Pulse phase-resolved spectra of PSR J1852+0040 fitted with a two-temperature $4\times10^{10}$ Gauss H atmosphere ({\tt nsmaxg} model number 1060). The phase intervals are based on the pulse profiles in Figure 2. (\textit{Second from top}) The best fit absorbed (solid) and unabsorbed (dotted) model spectrum. The four panels show the best fit residuals expressed in terms of $\sigma$ with error bars of size one. (\textit{Bottom}) The background spectrum used in the spectroscopic analysis.} \end{figure} \section{Phase-resolved Spectroscopy} \subsection{A Search for Narrow Spectral Features} The available \textit{XMM-Newton} data provide a sufficient harvest of source photons to allow an investigation of any phase-dependent spectral features. This is of particular relevance for CCOs given that 1E1207.4--5209 at the center of the supernova remnant PKS 1209--51/52 shows two distinct harmonically-related features at $0.7$ and $1.4$ keV \citep{Big03,DeL04}, plus two features at $2.1$ and $2.8$ keV whose existence is questionable \citep{San02,Mori05}. These absorption features exhibit remarkable variability as a function of spin phase. The most plausible interpretation is that they arise due to resonant cyclotron absorption, with the $0.7$ keV feature corresponding to the fundamental frequency. If the absorption arises from electrons near the neutron star surface, the relation between the fundamental cyclotron energy (corrected for gravitational redshift) and magnetic field, $E_c=\hbar e B/mc=0.116(B/10^{10}~{\rm G})$ keV, implies a field strength of $8 \times 10^{10}$ G.
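The quoted field estimate can be reproduced in a few lines; the $1.4$ M$_{\odot}$, 10 km star adopted here for the gravitational redshift correction is an assumption of this sketch.

```python
import math

# Electron-cyclotron estimate: correct the observed ~0.7 keV fundamental of
# 1E1207.4-5209 for gravitational redshift, then convert to a field via
# E_c = 0.116 (B / 1e10 G) keV.
G = 6.674e-8          # gravitational constant (cgs)
c = 2.998e10          # speed of light (cm/s)
M = 1.4 * 1.989e33    # assumed stellar mass (g)
R = 10.0e5            # assumed stellar radius (cm)

redshift_factor = 1.0 / math.sqrt(1.0 - 2.0 * G * M / (R * c**2))  # 1 + z
E_obs = 0.7                       # keV, observed fundamental
E_surf = E_obs * redshift_factor  # keV at the stellar surface
B = E_surf / 0.116 * 1e10         # Gauss
```

For these assumed stellar parameters, $1+z\approx1.31$ and the inferred field is $\approx$$8\times10^{10}$ G, consistent with the value quoted in the text.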
An alternative interpretation focuses on helium-like oxygen or neon in a magnetic field of $\sim$$10^{12}$ G \citep[see, e.g.,][]{Hailey02,Mori05}. Motivated by this, I extracted phase-resolved spectra from the archival data in Table 1 using four equal pulse phase intervals: $0.125<\phi_1<0.375$, $0.375<\phi_2<0.625$, $0.625<\phi_3<0.875$, and $0.875<\phi_4<1.125$. As shown in Figure 2, phase zero as defined coincides with the pulse minimum. For each spectrum, the counts were grouped so as to ensure at least 30 counts per energy bin. All four spectra were fitted jointly in {\tt XSPEC} 12.7.1 \citep{Arnaud96} with the same temperatures for all four phases but with independent flux normalizations. In \citet{Halpern10}, only blackbody and non-magnetic {\tt nsa} models were considered. However, in the standard vacuum dipole radiation formalism the expected magnetic field strength at the magnetic poles of PSR J1852+0040 is approximately $2B_{\rm surf} \simeq 6.1\times10^{10}$ G. A more realistic treatment \citep[see, e.g.,][]{Spit06} yields $(4-5)\times10^{10}$ G, depending on the magnetic inclination. Based on this, I consider a model that is more appropriate for this pulsar -- a two-temperature {\tt nsmaxg} neutron star hydrogen atmosphere model with $B=4\times10^{10}$ G \citep{Ho08}. Since the results from \S4 for the pencil plus fan beam model suggest a substantially stronger field at the polar cap, I also employ the {\tt nsa} model with $B=1\times10^{12}$ G \citep{Pavlov94}. In both cases, the models have been computed for $M=1.4$ M$_{\odot}$ and $R_{NS}=10$ km. Statistically acceptable, equally good fits are obtained for both models (see Table 2). As evident from Figure 6, which shows the best {\tt nsmaxg} model fit, several narrow-band residuals are apparent. However, the energies of these features coincide with features seen in the background spectrum (bottom panel of Figure 6).
Based on this, I conclude that they arise due to imperfect background subtraction. As noted previously, PSR J1852+0040 is situated in a relatively X-ray-bright supernova remnant with significant spatially-dependent variations in brightness and spectral shape, causing difficulty in obtaining a representative background. Aside from these features, no statistically significant phase-dependent residuals that could be plausibly associated with cyclotron absorption/scattering are seen in the spectrum of PSR J1852+0040. This is not surprising given that the weaker magnetic field derived from spin-down relative to 1E1207.4--5209 only produces weak features above $\sim$1 keV from the higher order cyclotron harmonics, as evident from the second panel from the top in Figure 6 \citep[see also][]{Sul10}. The most prominent absorption features of the $B=4\times10^{10}$ G model, corresponding to the fundamental (at $\sim$0.35 keV) and first overtone ($\sim$0.7 keV) of the cyclotron resonance, would be severely attenuated by interstellar absorption, while the remaining features are too shallow to be identified in the present data given the limited energy resolution and insufficient photon statistics. In the $\sim$$10^{12}$ G scenario, no narrow-band spectral features are expected in the energy range under consideration. The absence of any phase-dependent absorption features intrinsic to the pulsar in the spectrum indicates that the observed rotation-induced flux variations are unlikely to be due to phase-dependent resonant cyclotron scattering of the surface thermal X-rays from a uniformly emitting neutron star, as recently proposed for ordinary pulsars \citep[see, e.g.,][]{Kar12}. 
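For reference, the fundamental and first-overtone energies quoted above for the $B=4\times10^{10}$ G model follow directly from the cyclotron relation of \S5.1, redshifted for the $1.4$ M$_{\odot}$, 10 km star assumed by the atmosphere models; this is a sketch, not the atmosphere calculation itself.

```python
import math

# Observed energy of the n-th electron-cyclotron harmonic for a field B (G),
# E_n = n * 0.116 (B/1e10 G) keV at the surface, divided by 1+z for the
# assumed 1.4 Msun, 10 km star.
G = 6.674e-8
c = 2.998e10
M = 1.4 * 1.989e33
R = 10.0e5
one_plus_z = 1.0 / math.sqrt(1.0 - 2.0 * G * M / (R * c**2))

def harmonic_energy(n, B):
    """Observed (redshifted) energy in keV of the n-th cyclotron harmonic."""
    return n * 0.116 * (B / 1e10) / one_plus_z

E1 = harmonic_energy(1, 4e10)  # fundamental, ~0.35 keV
E2 = harmonic_energy(2, 4e10)  # first overtone, ~0.7 keV
```

Both features fall where interstellar absorption is severe, consistent with the conclusion that they would not be detectable in the present data.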
Even in the case of 1E1207.4--5209, which unambiguously shows absorption features, the underlying cause for the pulsations is likely the changing view of the hot regions on the star due to rotation, with the resonant scattering only enhancing the rotation-induced flux modulations rather than being the sole cause. \subsection{A Carbon Atmosphere?} For the CCO in Cas A, the derived effective radii of the emission region for H and He atmosphere models ($4-6$ km) are much smaller than the expected NS radius. Based on this and the apparent lack of X-ray pulsations, \citet{Ho09} have argued that the Cas A CCO needs to be covered by a non-magnetic C atmosphere in order to produce an emission size $R=15.6^{+1.3}_{-2.7}$ km, assuming $M=1.4$ M$_{\odot}$ and $D=3.4$ kpc, that is consistent with the theoretical prediction for the radii of NSs. More recently, \citet{Klo13} have applied a C atmosphere to the X-ray spectra of the CCO in the HESS J1731--347/G353.6--0.7 remnant, obtaining good fits for plausible values of the neutron star mass and radius as well. In light of these results, it is interesting to compare the C atmosphere fits to the phase-resolved spectra of PSR J1852+0040. For this purpose, I have applied the recently published {\tt carbatm} model \citep{Klo13,Sule14} to the four phase-resolved spectra described in \S5.1. Table 3 summarizes the best fit parameters for fixed $D=7.1$ kpc and $M=1.4$ M$_{\odot}$ and three assumed stellar radii: 9, 12, and 14 km. It is apparent that for 9 and 12 km, the inferred area at pulse maximum exceeds the total surface area of the star. Even in the case of 14 km, the implied emission area is equivalent to $\approx$95\% of the NS surface. However, although a single-temperature C atmosphere model produces a statistically acceptable fit with $R_{\rm eff} \approx R_{NS}$, this finding cannot be reconciled with the strongly pulsed X-rays from PSR J1852+0040, which indicate emission from a much smaller portion of the stellar surface.
After an age of about 1000 yr, a NS should cool enough to allow a light element atmosphere to accumulate \citep{Chang10}. Based on this, as Cas A is only $\sim$330 yr old \citep{Fes06}, a C atmosphere may in fact be present on its surface \citep[see, however,][]{Poss13}. On the other hand, the ages of PSR J1852+0040 and the CCO in G353.6--0.7 have been estimated to be $5400-7500$ yr \citep{Sun04} and $\sim$27,000 yr \citep{Tian08}, respectively. Therefore, there is no reason to expect a C atmosphere to dominate the surface emission in these older CCOs. Combined with this theoretical argument, the incongruity of the C atmosphere result with the strong X-ray pulsations from PSR J1852+0040 suggests that caution should be exercised when applying such spectroscopic models to other CCOs as it could lead to specious conclusions. This is especially true in cases where no pulsations have been detected, meaning that no information regarding the actual surface heat distribution can be gained. \begin{deluxetable}{lc} \tablewidth{0pt} \tablecaption{Hydrogen atmosphere spectral fits for PSR J1852+0040.} \tablehead{ \colhead{Parameter\tablenotemark{a}} & \colhead{Value}} \startdata \multicolumn{2}{c}{{\tt nsmaxg}\tablenotemark{b} ($B=4\times10^{10}$ G)}\\ \hline $N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $1.52$ \\ $T_{\rm eff,1}$ ($10^6$ K) & $3.46^{+0.11}_{-0.07}$ \\ $T_{\rm eff,2}$ ($10^6$ K) & $1.48^{+0.48}_{-0.35}$ \\ $R_{\rm eff,1}$ (km)\tablenotemark{c} & $3.2^{+3.3}_{-2.9}$ \\ $R_{\rm eff,2}$ (km)\tablenotemark{c} & $4.9^{+9.4}_{-4.9}$ \\ $\chi^2_{\nu}$/dof & $1.05/530$ \\ \hline \multicolumn{2}{c}{{\tt nsa}\tablenotemark{b} ($B=1\times10^{12}$ G)}\\ \hline $N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $1.52$ \\ $T_{\rm eff,1}$ ($10^6$ K) & $8.27^{+0.29}_{-0.31}$ \\ $T_{\rm eff,2}$ ($10^6$ K) & $3.07^{+0.24}_{-0.25}$ \\ $R_{\rm eff,1}$ (km)\tablenotemark{c} & $0.34^{+0.13}_{-0.11}$ \\ $R_{\rm eff,2}$ (km)\tablenotemark{c} & $3.5^{+1.8}_{-1.4}$ \\ $\chi^2_{\nu}$/dof & $1.06/530$ \enddata
\tablenotetext{a}{Quoted uncertainties are at a 1$\sigma$ confidence level for one interesting parameter.} \tablenotetext{b}{A neutron star of mass 1.4 M$_{\odot}$ and radius 10 km is assumed for both models.} \tablenotetext{c}{Redshift-corrected effective emitting radius assuming $D=7.1$ kpc.} \label{hatmtable} \end{deluxetable} \begin{deluxetable}{lccc} \tablewidth{0pt} \tablecaption{Carbon atmosphere spectral fits for PSR J1852+0040.} \tablehead{ \colhead{} & \multicolumn{3}{c}{$R_{\rm NS}$ (km)\tablenotemark{a}} \\ \cline{2-4} \colhead{Parameter\tablenotemark{b}} & \colhead{9 km} & \colhead{12 km} & \colhead{14 km}} \startdata $N_{\rm H}$ ($10^{22}$ cm$^{-2}$) & $1.55\pm0.04$ & $1.52\pm0.04$ & $1.50\pm0.04$ \\ $T_{\rm eff}$ ($10^6$ K) & $2.11 \pm 0.05$ & $1.91\pm0.05$ & $1.84\pm0.05$ \\ $A_{\rm eff}/A_{\rm NS}$\tablenotemark{c} & $1.61^{+0.29}_{-0.24}$ & $1.16^{+0.21}_{-0.18}$ & $0.95^{+0.18}_{-0.15}$\\ $\chi^2_{\nu}$/dof & $1.11/534$ & $1.11/534$ & $1.11/534$ \enddata \tablenotetext{a}{A neutron star mass of 1.4 M$_{\odot}$ is assumed in all cases.} \tablenotetext{b}{Quoted uncertainties are at a 1$\sigma$ confidence level for one interesting parameter.} \tablenotetext{c}{Effective emitting area expressed as a fraction of the total NS surface area assuming $D=7.1$ kpc.} \label{catmtable} \end{deluxetable} \section{Discussion} \subsection{Comparison with Other CCOs} The pulse profile shape of PSR J1852+0040, especially the very broad pulse, differs substantially from that of other thermally-emitting neutron stars, including other CCOs such as PSR J0821--4300 in Puppis A and 1E1207.4--5209 in PKS 1209--51/52, suggesting substantial differences in temperature distribution and/or viewing geometry. \citet{Got10} conducted detailed modeling of the X-ray pulsations and spectra of PSR J0821--4300, the CCO in the SNR Puppis A.
The analysis demonstrated that a pair of thermal, diametrically opposite hot spots on the surface is able to fully account for the observed two-component thermal spectrum and energy-dependent pulse profile, including the remarkable $180^{\circ}$ phase reversal at $\approx$1.2 keV. However, the phase reversal requires that the temperatures of the two emission spots differ by a factor of two and their areas by a factor of $\sim$20. In contrast, the markedly non-sinusoidal pulse profile of PSR J1852+0040 exhibits no energy dependent phase shift. This could indicate that, unlike PSR J0821--4300, the emission regions of different temperatures are either co-located on the surface or their centroids are effectively at the same longitude. Alternatively, this may be the direct result of a surface heat map comparable to PSR J0821--4300 but with a different combination of magnetic inclination and viewing angle. For PSR J1852+0040, in the best-fit antipodal hot spot model obtained with the pencil plus fan beam emission model (see bottom panel of Figure 3), the secondary polar cap does not contribute significantly to the observed emission, which, when combined with the severe interstellar absorption of emission below $\sim$1 keV, would not produce a pulse phase reversal due to a much larger, cooler antipodal cap. The CCO 1E 1207.4--5209 exhibits much less pronounced X-ray pulsations, reaching a maximum $\sim$14\% pulsed fraction in the fundamental cyclotron absorption feature at $\sim$0.7 keV \citep{deluca08}. Aside from the enhancement in pulsations at energies coinciding with the absorption features, the low-amplitude and approximately sinusoidal pulsations suggest emission from a conventional hot-spot configuration. The evidence for a slight phase shift of the pulsations at energies below $\sim$0.5 keV could be interpreted using the same heat distribution found for PSR J0821--4300 but with a different combination of $\alpha$ and $\zeta$.
\subsection{A Strongly Magnetized Hot Spot?} \citet{Shab12} have attempted to account for the X-ray properties of PSR J1852+0040 by analyzing the expected heat distribution and resulting X-ray light curves of a neutron star with a weak centered dipole plus a strong ($\sim$$10^{14}$ G) toroidal crustal magnetic field. The resulting heat distribution, characterized by small hot spots, is capable of achieving a high X-ray pulsed fraction \citep[see Figures 2, 4, and 5 in][]{Shab12} but not a broad pulse shape that closely resembles that of PSR J1852+0040. A likely explanation for this is the assumption of a toroidal field that is large everywhere except near the magnetic polar caps. This results in a weak field ($\sim$$10^{10}$ G) at the polar caps, which (away from the lower order cyclotron harmonics) produces an emission pattern that is well approximated by a cosine beaming function. As shown in \S4, such an emission pattern cannot account for the observed pulsations for the standard antipodal hot spot model. The spin-down measurement of PSR J1852+0040 implies a magnetic field at the magnetic poles of $\sim$$4\times10^{10}$ G (for $R=12$ km and moment of inertia $I=10^{45}$ g cm$^{2}$). However, the excellent fits to the pulse profile with the polar cap model for the pencil plus fan beam emission model suggest that the magnetic field needs to be substantially higher ($\gtrsim$$10^{12}$ G) to produce such a highly anisotropic emission pattern. This contradicts the weak surface dipole field implied by the measured spin-down. One way to accommodate both findings is to displace the dipole field in the axial direction such that at the magnetic pole closer to the magnetic moment the field is significantly stronger, while at large distances from the stellar surface the field still appears weak ($\sim$$10^{10}$ G). In this sense, the implied heat distribution would be very similar to that inferred for PSR J0821--4300 in Puppis A (see Figure 7a).
As noted in \S6.1, in this case the markedly different pulse properties of the two CCOs can then be easily accounted for by different combinations of $\alpha$ and $\zeta$. \subsection{An Extremely Offset Dipole?} The surface emission maps deduced using the isotropic and cosine beaming patterns are quite peculiar, as they imply a lack of discernible polar caps and the absence of azimuthal symmetry in the surface emission. This could, in principle, arise due to large deviations from a conventional centered magnetic field model. \citet{Per13} have investigated the surface temperature profiles for young, strongly magnetized ($10^{13-15}$ G) neutron stars by considering magnetic fields that are either purely poloidal or a mixture of poloidal and toroidal components. Surprisingly, this analysis revealed that for $\sim$5 kyr-old neutron stars (comparable to the age of PSR J1852+0040) with both $10^{14}$ G poloidal and $5\times10^{15}$ G toroidal fields, the highest surface temperature is situated not at the magnetic poles but in circumferential bands at intermediate magnetic colatitudes, reminiscent of the strips illustrated in Figure 3. However, a key feature of the X-ray-emitting regions shown in Figure 3 is the azimuthal asymmetry, namely partial hot bands that do not completely encircle the star. Indeed, the inherently axisymmetric magnetic field configurations assumed by \citet{Per13} and similar studies \citep[e.g.,][]{Gep06} cannot simultaneously account for the broad pulse, narrow trough, and anomalously high pulsed fraction if blackbody or emission patterns characteristic of weakly magnetized atmospheres are assumed. In principle, the necessary heat asymmetry can be achieved by displacing the magnetic moment from the center of the star or introducing a strong quadrupole component (provided that the associated sub-surface field is of sufficient strength to preferentially channel the interior heat to only a fraction of the surface; see \S6.3). 
A large displacement (of order $R_{NS}$) in a direction orthogonal to the dipole axis would cause the magnetic polar caps to become greatly elongated (as illustrated in Figure 7b). This strip may not in fact be contiguous, with a gap between the two ``polar strips'', but at the phase resolution afforded by the photon statistics of the presently available data such a gap is not discernible. \subsection{Submerged Strong Magnetic Fields?} As noted by \citet{Halpern10} and numerous subsequent works, the existence of hot areas covering only a fraction of the total surface of CCOs is difficult to reconcile with an intrinsically weakly magnetized neutron star (i.e.~an ``anti-magnetar'') as it requires a mechanism to confine the heat to a small region. For strong fields, the heat conductivity is enhanced in the direction parallel to the magnetic field, while it is reduced in the perpendicular direction \citep[e.g.,][]{Heyl98,Heyl01,Pot01,Gep99,Gep06,Perez06,Pons09}. Hence, PSR J1852+0040 needs to possess a much larger ``hidden'' magnetic field in the crust than the dipole field inferred from the spin-down measurement of \citet{Halpern10}. This field acts as an insulator, thus restricting the surface heat to a portion of the surface. One plausible way to simultaneously account for the low apparent field as measured from spin-down and the strong sub-surface field required to explain the highly non-uniform surface heat distribution is to consider the fallback of the debris of the supernova explosion onto the newborn neutron star. In particular, shortly after the violent explosion, the neutron star is believed to accrete material from the reverse shock at a rate greatly exceeding the Eddington limit \citep[e.g.,][]{Blondin86,Chev89, Houck91}. 
This episode of so-called ``hyper-critical'' accretion could bury the magnetic field into the crust of the nascent neutron star, resulting in an apparent surface field substantially weaker than the internal ``hidden'' magnetic field \citep{Vig12,Ber13}. A post-supernova accretion episode of $10^{-4}-10^{-3}$ M$_{\odot}$ over a large region of the surface is necessary to bury the magnetic field into the inner crust. This burial process can, in principle, result in crustal magnetic fields of $\sim$$10^{14}$ G, which, in turn, produce a high temperature contrast across the stellar surface, while still maintaining a low apparent surface field. The details of the current magnetic field topology presumably depend on the particular geometry of the supernova explosion ejecta, and may be the product of non-uniform fallback and/or a low accretion rate \citep[see][]{Ber13}. In this scenario, the peculiar heat distributions of PSRs J1852+0040 and J0821--4300 may be the direct result of the configuration of the fallback material. Alternatively, it is possible that the natal magnetic field of the neutron star deviated significantly from a centered dipole field in the first place, possibly due to an off-center explosion \citep{Bur96,Lai00}, and the fallback uniformly submerged the field while still preserving the initial global configuration but with a much weaker surface field. \begin{figure}[!t] \begin{center} \includegraphics[width=0.4\textwidth]{f7a.eps}\\ \includegraphics[width=0.4\textwidth]{f7b.eps} \end{center} \caption{Schematic illustration of the possible offset magnetic dipole field configurations for PSR J1852+0040 discussed in the text: (a) a dipole offset in the direction of the magnetic axis and (b) a dipole offset perpendicular to the magnetic axis. Scenario (a) is also applicable to PSR J0821--4300. The red regions show the inferred X-ray-emitting areas on the stellar surface. 
The dotted lines show the dipole magnetic field lines, while the vertical and horizontal lines show the magnetic axis and equator, respectively.} \end{figure} An alternative interpretation for the restricted heat on the surface is ongoing accretion of fallback material at a sufficiently low rate via a thin disk. However, the steady spin-down over many years and the lack of evidence for any long-term X-ray variability do not favor this scenario. Localized heating due to a return current, driven by the rotation of the magnetized star, is also unlikely, as it requires rotation power to supply the energy, whereas the observed X-ray luminosity greatly exceeds the spin-down luminosity of the pulsar. \section{Conclusion} I have presented modeling of the thermal X-ray pulsations from the central compact object and 105-millisecond X-ray pulsar PSR J1852+0040 in the Galactic supernova remnant Kesteven 79. Unlike previous studies \citep{Shab12,Per13}, the relatively simple models employed herein are able to simultaneously account for both the X-ray pulse amplitude and the broad peak. The unusual morphology of the pulse profile can be reproduced with either: i) a conventional antipodal polar cap model with a ``pencil plus fan'' beam intensity pattern; or ii) emission regions on the stellar surface that are significantly elongated in longitude (i.e., in the direction of rotation). Although in the analysis presented above only approximations to emission models were considered for the sake of computational efficiency, the main findings are likely to remain valid for a more realistic treatment employing sophisticated atmosphere models of various surface magnetic field strengths and chemical compositions. Given that the observed thermal X-ray radiation from CCOs is due to passive cooling, the inferred temperature distribution suggests highly anisotropic heat conduction from the stellar interior. 
As posited by several existing studies \citep{Halpern10,Ho11,Vig12,Shab12,Ber13,Per13,Got13}, if the heated regions on the surface of PSR J1852+0040 are closely associated with the magnetic field structure, strong magnetic fields beneath the stellar surface are required to channel heat to a relatively small portion of the star. The constraints on the heat distribution of PSR J1852+0040 presented herein further support the argument that rather than being born with intrinsically weak fields, CCOs possess strong ``hidden'' magnetic fields that were buried due to rapid accretion of fallback material shortly after the supernova explosion. This burial hypothesis avoids the requirement for a strong external global dipole magnetic field, which would manifest itself in the spin-down measurement. An offset dipole can provide a plausible explanation for the two surface temperature maps deduced in \S3 for PSR J1852+0040, while still being consistent with the weak field inferred from the pulsar spin-down. In particular, for the ``pencil plus fan'' beaming model, the implied strong surface field ($\gtrsim$$10^{12}$ G) needed to produce such a highly anisotropic emission pattern can be explained by a magnetic moment that is significantly displaced, mostly along the axial direction of the dipole (Figure 7a). This configuration can also account for the two hot spots that differ greatly in size and temperature for the CCO in Puppis A, PSR J0821--4300. The linear geometry of the heated regions required for the isotropic and cosine beaming patterns can be produced if the offset of the dipole is in a direction orthogonal to the magnetic axis (Figure 7b). The large field displacements in both cases are possibly a consequence of an off-center supernova explosion. A phase-resolved spectroscopic analysis reveals no phase-dependent narrow-band features that could arise due to cyclotron absorption/scattering. 
In addition, although the same single-temperature C atmosphere model applied to the CCOs in Cas A and G353.6--0.7 produces a satisfactory fit to the spectrum of PSR J1852+0040, the implied emitting area at pulse maximum is comparable to the total neutron star surface area. This finding is difficult to reconcile with the observed large-amplitude X-ray pulsations, suggesting that similar results obtained for other CCOs, especially G353.6--0.7, may not be valid either. In future investigations, it is important to employ realistic atmospheres in modeling the X-ray emission from PSR J1852+0040. As noted previously, since the exact magnetic field and chemical composition for CCOs, in general, are quite uncertain, a wide variety of models need to be considered. Moreover, substantially deeper X-ray observations are needed to better constrain the energy dependence and reveal the small-scale details of the pulse profile, especially in the pulse peak and trough, to further constrain the details of the surface heat distribution and, by extension, the magnetic field topology. In the theoretical realm, it is crucial to investigate the surface heat signatures of non-standard magnetic field configurations (e.g.,~non-star-centered and non-axisymmetric), since they appear to be required to reproduce the phenomenology of PSR J1852+0040. \acknowledgements I thank E.~V.~Gotthelf for helpful tips regarding the data reduction, J.~P.~Halpern for insightful discussions, and the anonymous referee whose helpful comments resulted in substantial improvements in the manuscript. This project was supported by NASA Astrophysics Data Analysis Program (ADAP) grant NNX12AE24G. The work presented was based on observations obtained with \textit{XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. 
This research has made use of the NASA Astrophysics Data System (ADS) and data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center. Facilities: \textit{XMM-Newton} (EPIC)
\section{Introduction} Face recognition (FR) is now one of the most well-studied problems in the area of computer vision and pattern recognition. The rapid progress in face recognition accuracy can be attributed to developments in deep neural network models~\cite{he2016deep,huang2017densely,simonyan2014very,tan2019efficientnet}, sophisticated design of loss functions~\cite{deng2019arcface,liu2017sphereface,wang2018cosface,wen2016discriminative,wang2017normface,wang2018additive,zheng2018ring,meng2021magface,wang2020mis,liu2019adaptiveface,huang2020curricularface,sun2020circle}, and large-scale training datasets, \emph{e.g.}, MS-Celeb-1M~\cite{guo2016ms} and WebFace260M~\cite{zhu2021webface260m}. Despite this progress, state-of-the-art (SoTA) FR models do not work well on real-world surveillance imagery (unconstrained) due to the domain shift issue, that is, the large-scale training datasets (semi-constrained) obtained via web-crawled celebrity faces lack in-the-wild variations, such as inherent sensor noise, low resolution, motion blur, turbulence effect, \emph{etc}. For instance, the $1{:}1$ verification accuracy reported by one of the SoTA models~\cite{shi2019probabilistic} on the unconstrained IJB-S~\cite{kalka2018ijb} dataset is about $30\%$ lower than on the semi-constrained LFW~\cite{huang2008labeled}. A potential remedy to such a performance gap is to assemble a large-scale unconstrained face dataset. However, constructing such a training dataset with tens of thousands of subjects is prohibitively difficult due to the high manual labeling cost. An alternative solution is to develop facial image generation models that can synthesize face images with desired properties. 
Face translation or synthesis using GANs~\cite{karras2019style,karras2020analyzing,choi2018stargan,karras2017progressive,yang2021gan,wang2021towards} or $3$D face reconstruction~\cite{thies2016face2face,rossler2019faceforensics++,tran2018nonlinear,kim2018deep,zhu2016face,riggable-3d-face-reconstruction-via-in-network-optimization,face-relighting-with-geometrically-consistent-shadows} has been well studied in photo-realistic image generation. However, most of these methods mainly focus on face image restoration or editing, and hence do not lead to better face recognition accuracies. A recent line of research~\cite{kortylewski2019analyzing,shi2020towards,qiu2021synface,trigueros2021generating} adopts disentangled face synthesis~\cite{tewari2020stylerig,deng2020disentangled,tran2017disentangled}, which can provide control over explicit facial properties (pose, expression and illumination) for generating additional synthetic data for varied training data distributions. However, the hand-crafted categorization of facial properties and the lack of design for cross-domain translation limit their generalizability to challenging testing scenarios. Shi \emph{et al.}~\cite{shi2021boosting} propose to use an unlabeled dataset to boost unconstrained face recognition. However, all of the previous methods can be considered as performing \textit{blind} data augmentation, {\it i.e.}, without the feedback of the FR model, which is required to provide critical information for improving the FR performance. \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{fig/main/teaser.pdf} \vspace{-2mm} \caption{ (a) Given an input face image, our controllable face synthesis model (CFSM) enables precise control of the direction and magnitude of the targeted styles in the generated images. 
The latent style has both the direction and the magnitude, where the direction linearly combines the learned bases to control the \textit{type} of style, while the magnitude controls the \textit{degree} of style. (b) CFSM can incorporate the feedback provided by the FR model to generate synthetic training data that can benefit the FR model training and improve generalization to the unconstrained testing scenarios.} \vspace{-2mm} \label{fig:teaser} \end{figure} In Fig.~\ref{fig:teaser}, we show the difference between a blind and a feedback-based face synthesis paradigm. For blind face synthesis, the FR model does not take part in the synthesis process, so there is no guidance from the FR model to avoid trivial synthesis. With feedback from the FR model, as in Fig.~\ref{fig:teaser} (b), synthesized images can be more relevant to increasing the FR performance. Therefore, it is the goal of our paper to allow the FR model to guide the face synthesis towards creating synthetic datasets that can improve the FR performance. It is not trivial to incorporate the signal from the FR model, as the direct manipulation of an input image towards decreasing the FR loss results in adversarial images that do not resemble the real image distribution~\cite{goodfellow2014explaining}. We thus propose to learn manipulation in the subspace of the \textit{style space} of the target properties, so that the control can be accomplished 1) in low dimensions, and 2) in a semantically meaningful way across various quality factors. In light of this, this paper aims to answer these three questions: \emph{1. Can we learn a face synthesis model that can discover the styles in the target unconstrained data, which enables us to precisely control and increase the diversity of the labeled training samples?} \emph{2. Can we incorporate the feedback provided by the FR model in generating synthetic training data, towards facilitating FR model training?} \emph{3. 
Additionally, as a by-product of our proposed style-based synthesis, can we model the distribution of a target dataset, so that it allows us to quantify the distributional similarity among face datasets?} \noindent Towards this end, we propose a face synthesis model that is 1) controllable in the synthesis process and 2) guided in the sense that the sample generation is aided by the signal from the FR model. Specifically, given a labeled training sample set, our controllable face synthesis model (CFSM) is trained to discover different attributes of the unconstrained target dataset in a style latent space. To learn the explicit degree and direction that control the styles in an unsupervised manner, we embed one linear subspace model with orthogonal bases into the style latent space. Within generative adversarial training, the face synthesis model seeks to capture the principal variations of the data distribution, and the style feature magnitude controls the degree of manipulation in the synthesis process. More importantly, to incorporate the feedback of the FR model, we apply adversarial perturbations (FGSM) in the learned style latent space to guide the sample generation. This feedback is rendered meaningful and efficient because the manipulation is in the low-dimensional style space as opposed to the high-dimensional image space. With the feedback from the FR model, the synthesized images are more beneficial to the FR performance, leading to significantly improved generalization capabilities of the FR models trained with them. It is worth noting that our pre-trained synthesis models could be a plug-in to any SoTA FR model. Unlike the conventional face synthesis models that focus on high-quality realistic facial images, our face synthesis module is a conditional mapping from one image to a set of style-shifted images that match the distribution of the target unconstrained dataset towards boosting its FR performance. 
Additionally, the learned orthogonal bases characterize the target dataset distribution, which could be utilized to quantify the distribution similarity between datasets. The quantification of datasets has a broad impact on various aspects. For example, knowledge of the dataset distribution similarity could be utilized to gauge the expected performance of FR systems on new datasets. Likewise, given a choice of various datasets to train an FR model, one can find the one closest to the testing scenario of interest. Finally, when a new face dataset is captured in the future, we may also assess its similarity to existing datasets in terms of styles, in addition to typical metrics such as number of subjects, demographics, etc. In summary, the contributions of this work include: $\diamond$ We show that a controllable face synthesis model with a linear subspace style representation can generate facial images of the target dataset style, with precise control of the magnitude and type of style. $\diamond$ We show that FR model performance can be greatly increased by synthesized images when the feedback of the FR model is used to optimize the latent style coefficient during image synthesis. $\diamond$ Our learned linear subspace model can characterize the target dataset distribution for quantifying the distribution similarity between face datasets. $\diamond$ Our approach yields significant performance gains on unconstrained face recognition benchmarks, such as IJB-B, IJB-C, IJB-S and TinyFace. \section{Prior Work}\label{sec:prior} \Paragraph{Controllable Face Synthesis} With the remarkable ability of GANs~\cite{goodfellow2014generative}, face synthesis has seen rapid developments, such as StyleGAN~\cite{karras2019style} and its variations~\cite{karras2020analyzing,Karras2020ada,karras2021alias}, which can generate high-fidelity face images from random noise. 
Lately, GANs have seen widespread use in face image translation or manipulation~\cite{hu2018disentangling,deng2020disentangled,xiao2018elegant,pumarola2018ganimation,sun2019single,choi2018stargan,lin2018conditional,bulat2018learn}. These methods typically adopt an encoder-decoder/generator-discriminator paradigm where the encoder embeds images into disentangled latent representations characterizing different face properties. Another line of works incorporates a $3$D prior (\emph{i.e.}, $3$DMM~\cite{blanz1999morphable}) into GANs for $3$D-controllable face synthesis~\cite{shen2018facefeat,kim2018deep,deng2018uv,gecer2018semi,geng20193d,piao2019semi,nguyen2019hologan,most-gan-3d-morphable-stylegan-for-disentangled-face-image-manipulation}. Also, EigenGAN~\cite{he2021eigengan} introduces a linear subspace model into each generator layer, which enables the discovery of layer-wise interpretable variations. Unfortunately, these methods mainly focus on high-quality face generation or editing of pose, illumination and age, which have well-defined semantic meanings. However, style or domain differences are hard to factorize at the semantic level. Therefore, we utilize learned bases to cover unconstrained dataset attributes, such as resolution, noise, \emph{etc}. \Paragraph{Face Synthesis for Recognition} Early attempts exploit disentangled face synthesis to generate additional synthetic training data for either reducing the negative effects of dataset bias in FR~\cite{kortylewski2019analyzing,shi2020towards,qiu2021synface,trigueros2021generating,ruiz2020morphgan} or more efficient training of pose-invariant FR models~\cite{tran2017disentangled,yin2017towards,zhao20183d}, resulting in increased FR accuracy. However, these models only control limited face properties, such as pose, illumination and expression, which are not adequate for bridging the domain gap between the semi-constrained and unconstrained face data. 
The most pertinent study to our work is~\cite{shi2021boosting}, which proposes to generalize face representations with auxiliary unlabeled data. Our framework differs in two aspects: i) our synthesis model is precisely controllable in the style latent space, in both magnitude and direction, and ii) our synthesis model incorporates \emph{guidance} from the FR model, which significantly improves the generalizability to unconstrained FR. \Paragraph{Domain Generalization and Adaptation} Domain Generalization (DG) aims to make DNNs perform well on unseen domains~\cite{muandet2013domain,ghifary2015domain,ghifary2016scatter,motiian2017unified,li2018domain}. Conventionally, for DG, few labeled samples from the target domain are provided for generalization. Popular DG methods utilize auxiliary losses such as Maximum Mean Discrepancy or a domain adversarial loss to learn a shared feature space across multiple source domains~\cite{muandet2013domain,li2018domain,ghifary2016scatter}. In contrast, our method falls into the category of Unsupervised Domain Adaptation, where adaptation is achieved by adversarial loss, contrastive loss or image translation, and learning a shared feature space that works for both the original and target domains~\cite{ganin2015unsupervised,tzeng2017adversarial,saito2018maximum,kang2019contrastive,murez2018image,nam2021reducing}. Our method augments data resembling the target domain with unlabeled images. \Paragraph{Dataset Distances} It is important to characterize and contrast datasets in computer vision research. In recent years, various notions of dataset similarity have been proposed~\cite{david,Mansour,alvarez2020geometric}. Alpha-distance and discrepancy distance~\cite{david,Mansour} measure a dissimilarity that depends on a loss function and the predictor. To avoid the dependency on the label space,~\cite{alvarez2020geometric} proposes the OT distance, an optimal transport distance in the feature space. 
However, it still depends on the ability of the predictor to create a separable feature space across domains. Moreover, a feature extractor trained on one domain may not produce separable features in a new domain. In contrast, we propose to utilize the learned linear bases for the latent style codes, which are optimized for synthesizing images in target domains, to measure the dataset distance. The style-based distance has the benefit of not being dependent on the feature space or the label space. \section{Proposed Method} \begin{figure}[t] \centering \includegraphics[width=12.0cm]{fig/main/fig2.pdf} \caption{Overview of the proposed method. Top (Stage 1): Pipeline for training the controllable face synthesis module that mimics the distribution of the target domain. $\mathcal{L}_{adv}$ ensures target domain similarity, $\mathcal{L}_{id}$ enforces the magnitude of $\mathbf{o}$ to control the degree of synthesis, and $\mathcal{L}_{ort}$ factorizes the target domain style with linear bases. Bottom (Stage 2): Pipeline for using the pre-trained face synthesis module for the purpose of training an FR model. The synthesis module works as an augmentation to the training data. We adversarially update $\mathbf{o}$ to maximize $\mathcal{L}_{cla}$ of a given FR model.} \label{fig:synthesis} \end{figure} \subsection{Controllable Face Synthesis Model} Generally, for face recognition model training, we are given a labeled semi-constrained dataset that consists of $n$ face images $\mathcal{X}=\{\mathbf{X}_{i}\}_{i=1}^{n}$ and the corresponding identity labels. Meanwhile, similar to the work in~\cite{shi2021boosting}, we assume the availability of an unlabeled target face dataset with $m$ images $\mathcal{Y}=\{\mathbf{Y}_{i}\}_{i=1}^{m}$, which contains a large variety of unconstrained factors. 
Our goal is to learn a style latent space where the face synthesis model, given an input image from the semi-constrained dataset, can generate new face images of the same subject, whose style is similar to the target dataset. Due to the lack of corresponding images, we seek an unsupervised algorithm that can learn to translate between domains without paired input-output examples. In addition, we hope this face synthesis model has explicit dimensions to control the unconstrained attributes. \emph{Our face synthesis model is not designed to translate the intrinsic properties between faces, i.e., pose, identity or expression.} It is designed to focus on capturing the unconstrained imaging environment factors in unconstrained face images, such as noise, low resolution, motion blur, turbulence effects, etc. These variations are not present in large-scale labeled training data for face recognition. \Paragraph{Multimodal Image Translation Network.} We adopt a multimodal image-to-image translation network~\cite{huang2018multimodal,lee2018diverse} to discover the underlying style distribution in the target domain. Specifically, as shown in Fig.~\ref{fig:synthesis}, our face synthesis generator consists of an encoder $E$ and a decoder $G$. Given an input image $\mathbf{X}\in\mathbb{R}^{W\times H\times3}$, the encoder first extracts its content features $\mathbf{C}=E(\mathbf{X})$. Then, the decoder generates the output image $\hat{\mathbf{X}}\in\mathbb{R}^{W\times H\times3}$, conditioned on both the content features and a random style latent code $\mathbf{z}\in\mathcal{Z}^{d}$: $\hat{\mathbf{X}}=G(\mathbf{C}, \mathbf{z})$. Here, the style code $\mathbf{z}$ is utilized to control the style of the output image. 
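As a minimal illustration of this conditional, multimodal mapping, the sketch below uses toy linear stand-ins for $E$ and $G$ (the dimensions, weights, and modules are illustrative assumptions, not the actual convolutional architecture): a single content code combined with different random style codes yields distinct outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's): flattened image, content, style.
IMG_D, C_D, Z_D = 64, 16, 8

# Toy linear "encoder" E and "decoder" G standing in for the conv networks.
W_E = rng.standard_normal((C_D, IMG_D)) * 0.1   # encoder weights
W_C = rng.standard_normal((IMG_D, C_D)) * 0.1   # decoder: content path
W_Z = rng.standard_normal((IMG_D, Z_D)) * 0.1   # decoder: style path

def E(x):
    """Content features C = E(X)."""
    return W_E @ x

def G(c, z):
    """Output image conditioned on content features and a style code z."""
    return np.tanh(W_C @ c + W_Z @ z)

x = rng.standard_normal(IMG_D)   # one input "image"
c = E(x)                         # its content code

# One input, several sampled style codes -> several distinct outputs,
# i.e., the multimodal behavior described in the text.
outs = [G(c, rng.standard_normal(Z_D)) for _ in range(3)]
assert all(o.shape == (IMG_D,) for o in outs)
assert not np.allclose(outs[0], outs[1])   # different styles, different outputs
```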
Inspired by recent works that use affine transformation parameters in normalization layers to represent image styles~\cite{huang2017arbitrary,dumoulin2016learned,karras2019style,huang2018multimodal}, we equip the residual blocks in the decoder $G$ with Adaptive Instance Normalization (AdaIN) layers~\cite{huang2017arbitrary}, whose parameters are dynamically generated by a multilayer perceptron (MLP) from the style code $\mathbf{z}$. Formally, the decoder process can be presented as \begin{equation} \hat{\mathbf{X}}=G(\mathbf{C}, \textup{MLP}(\mathbf{z})). \label{eqn:dec} \end{equation} It is worth noting that such a $G$ can model continuous distributions, which enables us to generate multimodal outputs from a given input. We employ one adversarial discriminator $D$ to match the distribution of the synthesized images to the target data distribution; images generated by the model should be indistinguishable from real images in the target domain. The discriminator loss can be described as: \begin{equation} \mathcal{L}_{D} = -\mathbb{E}_{ \mathbf{Y}\sim \mathcal{Y}}[\textup{log}(D(\mathbf{Y}))] - \mathbb{E}_{\mathbf{X}\sim \mathcal{X},\mathbf{z}\sim \mathcal{Z} }[\textup{log}(1-D(\hat{\mathbf{X}}))]. \label{eqn:dis_loss} \end{equation} The adversarial loss for the generator (including $E$ and $G$) is then defined as: \begin{equation} \mathcal{L}_{adv} = -\mathbb{E}_{\mathbf{X}\sim \mathcal{X},\mathbf{z}\sim \mathcal{Z}}[\textup{log}(D(\hat{\mathbf{X}}))]. \label{eqn:gen_loss} \end{equation} \Paragraph{Domain-Aware Linear Subspace Model.} To enable precise control of the targeted face properties, achieving flexible image generation, we propose to embed a linear subspace model with orthogonal bases into the style latent space. 
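The AdaIN conditioning in Eqn.~\ref{eqn:dec} can be sketched as follows; the toy one-layer ``MLP'' and the dimensions are illustrative assumptions, not the paper's learned network:

```python
import numpy as np

def adain(feat, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization over one feature map.

    feat: (C, H, W) content features; gamma, beta: (C,) style parameters.
    Each channel is normalized to zero mean / unit variance, then
    re-scaled and shifted by the style-derived parameters.
    """
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    normed = (feat - mu) / (sigma + eps)
    return gamma[:, None, None] * normed + beta[:, None, None]

rng = np.random.default_rng(1)
C, H, W, Z_D = 4, 8, 8, 8

# Toy one-layer "MLP" mapping a style code z to per-channel (gamma, beta).
W_mlp = rng.standard_normal((2 * C, Z_D)) * 0.1
z = rng.standard_normal(Z_D)
params = W_mlp @ z
gamma, beta = 1.0 + params[:C], params[C:]

feat = rng.standard_normal((C, H, W))
out = adain(feat, gamma, beta)

# After AdaIN, each channel's statistics follow the style parameters.
assert np.allclose(out.mean(axis=(1, 2)), beta, atol=1e-3)
assert np.allclose(out.std(axis=(1, 2)), np.abs(gamma), atol=1e-2)
```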
As illustrated in Fig.~\ref{fig:synthesis}, a random style coefficient $\mathbf{o}\sim \mathcal{N}_{q}(\mathbf{0},\mathbf{I})$ can be used to linearly combine the bases and form a new style code $\mathbf{z}$, as in \begin{equation} \mathbf{z} = \mathbf{U}\mathbf{o}+ \boldsymbol\mu, \label{eqn:pca} \end{equation} where $\mathbf{U}=[\mathbf{u}_{1}, \cdots, \mathbf{u}_{q}]\in\mathbb{R}^{d\times q}$ is the orthonormal basis of the subspace and $\boldsymbol\mu\in \mathbb{R}^{d}$ denotes the mean style. This equation relates a $q$-dimensional coefficient $\mathbf{o}$ to a corresponding $d$-dimensional style vector ($q\ll d$) by an affine transformation. During training, both $\mathbf{U}$ and $\boldsymbol\mu$ are learnable parameters. The basis $\mathbf{U}$ is optimized with the orthogonality constraint~\cite{he2021eigengan}: $\mathcal{L}_{ort}=|\mathbf{U}^{\textup{T}}\mathbf{U}-\mathbf{I}|_1$, where $\mathbf{I}$ is an identity matrix. The isotropic prior distribution of $\mathbf{o}$ does not indicate which directions are useful. However, with the help of the subspace model, each basis vector in $\mathbf{U}$ identifies a latent direction that allows control over target image attributes that go beyond straightforward high-level face properties. This mechanism is algorithmically simple, yet leads to effective control without requiring ad-hoc supervision. Accordingly, Eqn.~\ref{eqn:dec} can be updated as $\hat{\mathbf{X}}=G(\mathbf{C}, \textup{MLP}(\mathbf{U}\mathbf{o}+ \boldsymbol\mu))$. \noindent \Paragraph{Magnitude of the Style Coefficient and Identity Preservation.} Although the adversarial learning (Eqns.~\ref{eqn:dis_loss} and~\ref{eqn:gen_loss}) could encourage the face synthesis module to characterize the attributes in the target data, it cannot ensure that the identity information is maintained in the output face image. 
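A minimal numerical sketch of the subspace model in Eqn.~\ref{eqn:pca} and the orthogonality penalty $\mathcal{L}_{ort}$ follows; the dimensions and the QR-based initialization are illustrative assumptions (in the paper, $\mathbf{U}$ and $\boldsymbol\mu$ are learned):

```python
import numpy as np

rng = np.random.default_rng(2)
d, q = 32, 5   # illustrative style and coefficient dimensions

# Parameters of the domain-aware linear subspace model: basis U (d x q)
# and mean style mu (d,).  U is initialized orthonormal via QR, the
# configuration the orthogonality loss encourages during training.
U, _ = np.linalg.qr(rng.standard_normal((d, q)))
mu = rng.standard_normal(d)

def style_code(o):
    """z = U o + mu, mapping a q-dim coefficient to a d-dim style code."""
    return U @ o + mu

def ortho_loss(U):
    """L_ort = | U^T U - I |_1 (elementwise L1 norm)."""
    k = U.shape[1]
    return np.abs(U.T @ U - np.eye(k)).sum()

o = rng.standard_normal(q)          # o ~ N(0, I), q-dim coefficient
z = style_code(o)
assert z.shape == (d,)
assert ortho_loss(U) < 1e-8         # orthonormal basis -> near-zero loss

# The magnitude a = ||o|| controls the degree of deviation from the mean
# style; shrinking o keeps the direction but weakens the style shift.
z_small = style_code(0.1 * o)
assert np.linalg.norm(z_small - mu) < np.linalg.norm(z - mu)
```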
Hence, the cosine similarity $S_{C}$ between the face feature vectors $f(\mathbf{X})$ and $f(\hat{\mathbf{X}})$ is used to enforce identity preservation: $\mathcal{L}_{id}{=}1{-} S_{C}(f(\mathbf{X}),f(\hat{\mathbf{X}}))$, where $f(\cdot)$ represents a pre-trained feature extractor, \emph{i.e.,} ArcFace~\cite{deng2019arcface} in our implementation. Besides identifying the meaningful latent directions, we further explore the role of the magnitude $a=||\mathbf{o}||$ of the style coefficient. We expect the magnitude to measure the degree of identity preservation in the synthesized image $\hat{\mathbf{X}}$. In other words, $S_{C}(f(\mathbf{X}),f(\hat{\mathbf{X}}))$ monotonically increases when the magnitude $a$ is decreased. To realize this goal, we reformulate the identity loss: \begin{equation} \mathcal{L}_{id} = \left \| \left( 1- S_{C}(f(\mathbf{X}),f(\hat{\mathbf{X}})) \right)-g(a) \right \|^2_{2}, \label{equ:id_loss} \end{equation} where $g(a)$ is a function with respect to $a$. We assume the magnitude $a$ is bounded in $[l_a, u_a]$. In our implementation, we define $g(a)$ as a linear function on $[l_a, u_a]$ with $g(l_a) = l_m$, $g(u_a) = u_m$: $g(a)=(a-l_a)\frac{u_m-l_m}{u_a-l_a}+l_m$. By simultaneously learning the direction and magnitude of the style latent coefficients, our model becomes \emph{precisely controllable} in capturing the variability of faces in the target domain. To our knowledge, this is the first method able to exploit both properties associated with the style, namely direction and magnitude, in unsupervised multimodal face translation. \Paragraph{Model learning.} The total loss for the generator (including encoder $E$, decoder $G$ and the domain-aware linear subspace model), with weights $\lambda_{i}$, is \begin{equation} \mathcal{L}_{\mathcal{G}} = \lambda_{adv} \mathcal{L}_{adv} + \lambda_{ort} \mathcal{L}_{ort} + \lambda_{id} \mathcal{L}_{id}. 
\label{equ:overall_loss} \end{equation} \subsection{Guided Face Synthesis for Face Recognition} \label{sec:guide} In this section, we introduce how to incorporate the \emph{pre-trained} face synthesis module into deep face representation learning, enhancing the generalizability to unconstrained FR. It effectively addresses the question: \emph{which synthetic images, when added as an augmentation to the data, will increase the performance of the learned FR model in unconstrained scenarios?} Formally, the FR model is trained to learn a mapping $\mathcal{F}$, such that $\mathcal{F}(\mathbf{X})$ is discriminative for different subjects. If $\mathcal{F}$ is only trained on the domain defined by the semi-constrained $\mathcal{X}$, it does not generalize well to unconstrained scenarios. With CFSM, an image $\mathbf{X}$ with identity label $l$ in a training batch can be augmented with a random style coefficient $\mathbf{o}$ to produce a synthesized image $\hat{\mathbf{X}}$. However, such data synthesis with random style coefficients may generate either extremely easy or hard samples, which may be redundant or detrimental to the FR training. To address this issue, we introduce an adversarial regularization strategy to \emph{guide} the data augmentation process, so that the face synthesis module is able to generate meaningful samples for the FR model. Specifically, for a given pre-trained CFSM, we apply adversarial perturbations in the learned style latent space, in the direction of maximizing the FR model loss. Mathematically, given the perturbation budget $\epsilon$, the adversary tries to find a style latent perturbation $\boldsymbol\delta\in\mathbb{R}^{q}$ to maximize the classification loss function $\mathcal{L}_{cla}$: \begin{equation} \begin{aligned} \boldsymbol\delta^{*}=\argmax_{||\boldsymbol\delta||_{\infty }<\epsilon}\mathcal{L}_{cla}\left(\mathcal{F}(\mathbf{X}^{*}),l\right), \text{where} \; \mathbf{X}^{*}\! =\!
G(E(\mathbf{X}),\textup{MLP}(\mathbf{U}(\mathbf{o}+\boldsymbol\delta)+\boldsymbol\mu)). \end{aligned} \label{eqn:inter_max} \end{equation} Here, $\mathbf{X}^{*}$ denotes the perturbed synthesized image. $\mathcal{L}_{cla}$ could be any classification-based loss, \emph{e.g.,} a popular angular margin-based loss such as ArcFace~\cite{deng2019arcface} in our implementation. In this work, for efficiency, we adopt the one-step Fast Gradient Sign Method (FGSM)~\cite{goodfellow2014explaining} to obtain $\boldsymbol\delta^{*}$ and subsequently update $\mathbf{o}$: \begin{equation} \begin{aligned} \mathbf{o}^{*} = \mathbf{o} + \boldsymbol\delta^{*}, \quad \boldsymbol\delta^{*}=\epsilon\cdot \text{sgn}\left( \nabla_{\mathbf{o}}\mathcal{L}_{cla}\left(\mathcal{F}(\mathbf{X}^{*}),l\right)\right), \end{aligned} \label{eqn:perb} \end{equation} \begin{wrapfigure}[15]{r}{0.5\textwidth} \vspace{-8mm} \centering \includegraphics[width=0.5\textwidth]{fig/main/fig3.pdf} \caption{Plot of mini-batch samples augmented with CFSM during training of the FR model. Top: Original images. Middle: Synthesized results before the feedback of the FR model. Bottom: Synthesized results after the feedback. Guidance from the FR model can vary the images' style for increased difficulty (a-d) and prevent identity loss (e-h).} \label{fig:update_faces} \end{wrapfigure} where $\nabla_{\mathbf{o}}\mathcal{L}_{cla}(\cdot,\cdot)$ denotes the gradient of $\mathcal{L}_{cla}(\cdot,\cdot)$ w.r.t.~$\mathbf{o}$, and $\text{sgn}(\cdot)$ is the sign function. Finally, based on the adversarially augmented face images, we further optimize the face embedding model $\mathcal{F}$ via the objective: \begin{equation} \min_{\theta}\mathcal{L}_{cla}([\mathbf{X}^{*}, \mathbf{X}], l), \label{eqn:outer_min} \end{equation} where $\theta$ indicates the parameters of the FR model $\mathcal{F}$ and $[\cdot]$ refers to concatenation in the batch dimension.
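The one-step FGSM update on the style coefficient can be sketched as follows. The quadratic surrogate loss and its analytic gradient below are hypothetical stand-ins for back-propagating $\mathcal{L}_{cla}$ through the fixed synthesis model; only the update rule itself mirrors the scheme above:

```python
import numpy as np

EPS = 0.314  # perturbation budget epsilon from our implementation details

def fgsm_style_update(o, grad_o, eps=EPS):
    # o* = o + eps * sgn(grad of L_cla w.r.t. o) -- one FGSM ascent step
    return o + eps * np.sign(grad_o)

rng = np.random.default_rng(1)
q = 10
o = rng.standard_normal(q)

# Hypothetical surrogate: L = 0.5 * ||o - t||^2 with analytic gradient
# (o - t), standing in for the FR classification loss gradient.
t = rng.standard_normal(q)
grad = o - t
o_star = fgsm_style_update(o, grad)

# Each coordinate moves by exactly eps in the loss-ascent direction.
assert np.allclose(np.abs(o_star - o), EPS)
```

The sign operation makes the step size uniform across coordinates, so the perturbation always saturates the $\ell_\infty$ budget, which is what makes the single-step attack cheap enough to run inside every FR training iteration.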
In other words, this encourages searching for the best perturbations in the learned style latent space in the direction of maximal difficulty for the FR model. Examples within a mini-batch are shown in Fig.~\ref{fig:update_faces}. \Paragraph{Dataset Distribution Measure} As mentioned above, as a by-product of our learned face synthesis model, we obtain a target-specific linear subspace model, which can characterize the variations in the target dataset. Such learned linear subspace models allow us to quantify the distribution similarity among different datasets. For example, given two unlabeled datasets $A$ and $B$, we can learn the corresponding linear subspace models $\{\mathbf{U}_{A}, \boldsymbol\mu_{A}\}$ and $\{\mathbf{U}_{B}, \boldsymbol\mu_{B}\}$. We define the distribution similarity between them as \begin{equation} \mathcal{S}(A,B) =\frac{1}{q}\left(\sum_{i=1}^{q}S_{C}(\mathbf{u}_{A}^{i}+\boldsymbol\mu_{A}, \mathbf{u}_{B}^{i}+\boldsymbol\mu_{B})\right), \label{eqn:dist_simil} \end{equation} where $S_{C}(\cdot,\cdot)$ denotes the cosine similarity between the corresponding basis vectors in $\mathbf{U}_{A}$ and $\mathbf{U}_{B}$ respectively, and $q$ is the number of basis vectors. Measuring the distance or similarity between datasets is a fundamental concept underlying research areas such as domain adaptation and transfer learning. However, the solution to this problem typically involves measuring feature distances with respect to a learned model, which may be susceptible to failures of that model in unseen domains. In this work, we provide an alternative solution via learned style basis vectors, which are directly optimized to capture the characteristics of the target dataset. We hope our method provides new insight into measuring dataset similarity. For visualizations of $\mathcal{S}$ among different datasets, please refer to Sec.~\ref{sec:exp_dist_simi}.
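The similarity measure amounts to averaging cosine similarities between corresponding offset basis vectors of the two subspace models. A minimal NumPy sketch, with random matrices standing in for learned subspace models:

```python
import numpy as np

def dataset_similarity(U_A, mu_A, U_B, mu_B):
    # S(A, B) = (1/q) * sum_i cos(u_A^i + mu_A, u_B^i + mu_B)
    A = U_A + mu_A[:, None]  # column i is u_A^i + mu_A
    B = U_B + mu_B[:, None]
    cos = (A * B).sum(axis=0) / (
        np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0))
    return cos.mean()

rng = np.random.default_rng(2)
d, q = 128, 10
U_A = np.linalg.qr(rng.standard_normal((d, q)))[0]  # stand-in learned basis
mu_A = rng.standard_normal(d)                        # stand-in mean style

# A dataset is maximally similar to itself ...
assert np.isclose(dataset_similarity(U_A, mu_A, U_A, mu_A), 1.0)

# ... while an unrelated random subspace scores strictly lower.
U_B = np.linalg.qr(rng.standard_normal((d, q)))[0]
mu_B = rng.standard_normal(d)
assert dataset_similarity(U_A, mu_A, U_B, mu_B) < 1.0
```

Note that the measure compares basis vectors index by index, so it implicitly assumes a consistent ordering of the learned directions across models.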
\subsection{Implementation Details} All face images are aligned and resized into $112\times 112$ pixels. The network architecture of the face synthesis model is given in the supplementary (\emph{\textbf{Supp}}). In the main experiments, we set $q{=}10$, $d{=}128$, $l_a{=}0$, $u_a{=}6$, $l_m{=}0.05$, $u_m{=}0.65$, $\lambda_{adv}{=}1$, $\lambda_{ort}{=}1$, $\lambda_{id}{=}8$, $\epsilon{=}0.314$. For more details, refer to Sec.~\ref{sec:exp} or \emph{\textbf{Supp}}. \section{Experimental Results}\label{sec:exp} \subsection{Comparison with SoTA FR methods}~\label{sec:comparison} \Paragraph{Datasets} Following the experimental setting of~\cite{shi2021boosting}, we use \textbf{MS-Celeb-1M}~\cite{guo2016ms} as our labeled training dataset. MS-Celeb-1M is a large-scale public face dataset with web-crawled celebrity photos. For a fair comparison, we use the cleaned MS$1$M-V$2$ ($3.9$M images of $85.7$K classes) from~\cite{shi2021boosting}. For our target data, \textbf{WiderFace}~\cite{yang2016wider} is used. WiderFace is a dataset collected for face detection in challenging scenarios, with a diverse set of unconstrained variations. It is a suitable target dataset for training CFSM, as we aim to bridge the gap between the semi-constrained training faces and the faces in challenging testing scenarios. We follow~\cite{shi2021boosting} and use $70$K face images from WiderFace. For evaluation, we test on four \textbf{unconstrained} face recognition benchmarks: IJB-B, IJB-C, IJB-S and TinyFace. These $4$ datasets represent real-world testing scenarios where faces are significantly different from the semi-constrained training dataset. % $\diamond$ \textbf{IJB-B}~\cite{whitelam2017iarpa} contains both high-quality celebrity photos collected in the wild and low-quality photos or video frames with large variations. It consists of $1,845$ subjects with $21.8$K still images and $55$K frames from $7,011$ videos. 
% $\diamond$ \textbf{IJB-C}~\cite{maze2018iarpa} is an extension of IJB-B, which includes about $3,500$ subjects with a total of $31,334$ images and $117,542$ unconstrained video frames. % $\diamond$ \textbf{IJB-S}~\cite{kalka2018ijb} is an extremely challenging benchmark where the images were collected in real-world surveillance environments. The dataset contains $202$ subjects with an average of $12$ videos per subject. Each subject also has $7$ high-quality enrollment photos under different poses. We test on three protocols, Surveillance-to-Still (\textbf{V2S}), Surveillance-to-Booking (\textbf{V2B}) and Surveillance-to-Surveillance (\textbf{V2V}). The first/second notation in the protocol refers to the probe/gallery image source. `Surveillance' (V) refers to the surveillance video, `still' (S) refers to the frontal high-quality enrollment image and `Booking' (B) refers to the $7$ high-quality enrollment images. % $\diamond$ \textbf{TinyFace}~\cite{cheng2018low} consists of $5,139$ labelled facial identities given by $169,403$ native low resolution face images, which is created to facilitate the investigation of unconstrained low-resolution face recognition. \begin{table}[t] \renewcommand\arraystretch{1.00} \caption{ Comparison with state-of-the-art methods on the IJB-B benchmark. 
\quad\quad\quad `*' denotes a subset of data selected by the authors.} \centering \resizebox{0.92\linewidth}{!}{ \begin{tabular}{l |c |c || c |c |c |c |c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{ Train Data, \#labeled(+\#unlabeled) } & \multirow{2}{*}{Backbone} & \multicolumn{3}{c|}{Verification} & \multicolumn{2}{c}{Identification} \\ \cline{4-8} & & & $1e-5$ & $1e-4$ & $1e-3$ & Rank$1$ & Rank$5$ \\ \hline VGGFace$2$~\cite{cao2018vggface2} & VGGFace$2$, $3.3$M & SE-ResNet-$50$ & $70.50$ & $83.10$ & $90.80$ & $90.20$ & $94.6$ \\ AFRN~\cite{kang2019attentional} & VGGFace$2$-*, $3.1$M & ResNet-$101$ & $77.10$ & $88.50$ & $94.90$ & $\textbf{97.30}$ & $\textbf{97.60}$\\ ArcFace~\cite{deng2019arcface} & MS$1$MV$2$, $5.8$M & ResNet-$50$ & $84.28$ & $91.66$ & $94.81$ & $92.95$ & $95.60$ \\ MagFace~\cite{meng2021magface} & MS$1$MV$2$, $5.8$M & ResNet-$50$ & $83.87$ & $91.47$ & $94.67$ & $-$ & $-$ \\ \hline Shi~\emph{et al.}~\cite{shi2021boosting} & cleaned MS$1$MV2, $3.9$M(+$70$K) & ResNet-$50$ & $88.19$ & $92.78$ & $95.86$ & $95.86$ & $96.72$\\ \hline \textbf{ArcFace} & cleaned MS$1$MV2, $3.9$M & ResNet-$50$ & $87.26$ & $94.01$ & $95.95$ & $94.61$ & $96.52$ \\ \textbf{ArcFace+Ours} & cleaned MS$1$MV2, $3.9$M(+$70$K) & ResNet-$50$ & $\textbf{90.95}$ & $\textbf{94.61}$ & $\textbf{96.21}$ & $94.96$ & $96.84$ \\ \hline \end{tabular} } \label{tab:result_ijbb} \end{table} \begin{table}[t] \renewcommand\arraystretch{1.00} \caption{ Comparison with state-of-the-art methods on the IJB-C benchmark.} \centering \resizebox{0.92\linewidth}{!}{ \begin{tabular}{l |c |c || c |c |c |c |c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{ Train Data, \#labeled(+\#unlabeled)} & \multirow{2}{*}{Backbone} & \multicolumn{3}{c|}{Verification} & \multicolumn{2}{c}{Identification} \\ \cline{4-8} & & & $1e-6$ & $1e-5$ & $1e-4$ & Rank$1$ & Rank$5$ \\ \hline VGGFace$2$~\cite{cao2018vggface2} & VGGFace$2$, $3.3$M & SE-ResNet-$50$ & - & $76.80$ & $86.20$ & $91.40$ & $95.10$ \\ 
AFRN~\cite{kang2019attentional} & VGGFace$2$-*, $3.1$M & ResNet-$101$ & - & $88.30$ & $93.00$ & $95.70$ & $\textbf{97.60}$ \\ PFE~\cite{shi2019probabilistic} & MS$1$M-$*$, $4.4$M & ResNet-$64$ & - & $89.64$ & $93.25$ & $95.49$ & $97.17$ \\ DUL~\cite{chang2020data} & MS$1$M-$*$, $3.6$M & ResNet-$64$ & - & $90.23$ & $94.20$ & $95.70$ & $97.60$ \\ ArcFace~\cite{deng2019arcface} & MS$1$MV$2$, $5.8$M & ResNet-$50$ & $80.52$ & $88.36$ & $92.52$ & $93.26$ & $95.33$ \\ MagFace~\cite{meng2021magface} & MS$1$MV$2$, $5.8$M & ResNet-$50$ & $81.69$ & $88.95$ & $93.34$ & $-$ & $-$ \\ \hline Shi~\emph{et al.}~\cite{shi2021boosting} & cleaned MS$1$MV2, $3.9$M(+$70$K) & ResNet-$50$ & $87.92$ & $91.86$ & $94.66$ & $95.61$ & $97.13$\\ \hline \textbf{ArcFace} & cleaned MS$1$MV2, $3.9$M & ResNet-$50$ & $87.24$ & $93.32$ & $95.61$ & $95.89$ & $97.08$ \\ \textbf{ArcFace+ours} & cleaned MS$1$MV2, $3.9$M(+$70$K) & ResNet-$50$ & $\textbf{89.34}$ & $\textbf{94.06}$ & $\textbf{95.90}$ & $\textbf{96.31}$ & $97.48$ \\ \hline \end{tabular} } \label{tab:result_ijbc} \end{table} \begin{table}[t] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \renewcommand\arraystretch{1.00} \caption{ Comparison with state-of-the-art methods on three protocols of the IJB-S and TinyFace benchmark. The performance is reported in terms of rank retrieval (closed-set) and TAR@FAR (open-set). 
It is worth noting that MARN~\cite{gong2019low} is a multi-mode aggregation method and is fine-tuned on UMDFaceVideo~\cite{bansal2017s}, a video dataset.} \centering \resizebox{1\linewidth}{!}{ \begin{tabular}{l | c | c || c c c c |c c c c |c c c c||c c} \hline \multirow{2}{*}{Method } & \multirow{2}{*}{\tabincell{c}{\textbf{Labeled}\\Train Data}} & \multirow{2}{*}{Backbone} & \multicolumn{4}{c|}{IJB-S V2S} & \multicolumn{4}{c|}{IJB-S V2B} & \multicolumn{4}{c||}{IJB-S V2V} & \multicolumn{2}{c}{TinyFace} \\ \cline{4-7} \cline{8-11} \cline{12-15} \cline{16-17} & & &Rank$1$ & Rank$5$ & $1\%$ & $10\%$ &Rank$1$ & Rank$5$ & $1\%$ & $10\%$ &Rank$1$ & Rank$5$ & $1\%$ & $10\%$ &Rank$1$ & Rank$5$\\ \hline C-FAN~\cite{gong2019video} & MS$1$M-$*$ & ResNet-$64$ & $50.82$ & $61.16$ & $16.44$ & $24.19$ & $53.04$ & $62.67$ & $27.40$ & $29.70$ & $10.05$ & $17.55$ & $0.11$ & $0.68$ & $-$ & $-$ \\ MARN~\cite{gong2019low} & MS$1$M-$*$ & ResNet-$64$ & $58.14$ & $64.11$ & $21.47$ & $-$ & $59.26$ & $65.93$ & $32.07$ & $-$ & $22.25$ & $34.16$ & $0.19$ & $-$ & $-$ & $-$ \\ PFE~\cite{shi2019probabilistic} & MS$1$M-$*$ & ResNet-$64$ & $50.16$ & $58.33$ & $31.88$ & $35.33$ & $53.60$ & $61.75$ & $35.99$ & $39.82$ & $9.20$ & $20.82$ & $0.84$ & $2.83$ & $-$ & $-$ \\ ArcFace~\cite{deng2019arcface} & MS$1$MV2 & ResNet-$50$ & $50.39$ & $60.42$ & $32.39$ & $42.99$ & $52.25$ & $61.19$ & $34.87$ & $43.50$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ \hline Shi~\emph{et al.}~\cite{shi2021boosting} & MS$1$MV2-* & ResNet-$50$ & $59.29$ & $66.91$ & $39.92$ & $50.49$ & $60.58$ & $67.70$ & $32.39$ & $44.32$ & $17.35$ & $28.34$ & $1.16$ & $5.37$ & $-$ & $-$ \\\hline \textbf{ArcFace}~\cite{deng2019arcface} & MS$1$MV2-* & ResNet-$50$ & $58.78$ & $66.40$ & $40.99$ & $50.45$ & $60.66$ & $67.43$ & $43.12$ & $51.38$ & $14.81$ & $26.72$ & $2.51$ & $5.72$ & $62.21$ & $66.85$ \\ \textbf{ArcFace+Ours*} & MS$1$MV2-* & ResNet-$50$ & $61.69$ & $68.33$ & $43.99$ & $53.34$ & $62.20$ & $69.50$ & $44.38$ & $53.49$ & $18.14$ & 
$31.34$ & $2.09$ & $4.51$ & $62.39$ & $67.36$ \\ \textbf{ArcFace+Ours} & MS$1$MV2-* & ResNet-$50$ & $\textbf{63.86}$ & $\textbf{69.95}$ & $\textbf{47.86}$ & $\textbf{56.44}$ & $\textbf{65.95}$ & $\textbf{71.16}$ & $\textbf{47.28}$ & $\textbf{57.24}$ & $\textbf{21.38}$ & $\textbf{35.11}$ & $\textbf{2.96}$ & $\textbf{7.41}$ & $\textbf{63.01}$ & $\textbf{68.21}$ \\ \hline \textbf{AdaFace}~\cite{kim2022adaface} & WebFace12M & IResNet-$100$ & $71.35$ & $76.24$ & $59.40$ & $\textbf{66.34}$ & $71.93$ & $76.56$ & $59.37$ & $\textbf{66.68}$ & $36.71$ & $50.03$ & $4.62$ & $11.84$ & $72.29$ & $74.97$ \\ \textbf{AdaFace+Ours} & WebFace12M & IResNet-$100$ & $\textbf{72.54}$ & $\textbf{77.59}$ & $\textbf{60.94}$ & $66.02$ & $\textbf{72.65}$ & $\textbf{78.18}$ & $\textbf{60.26}$ & $65.88$ & $\textbf{39.14}$ & $\textbf{50.91}$ & $\textbf{5.05}$ & $\textbf{13.17}$ & $\textbf{73.87}$ & $\textbf{76.77}$ \\ \hline \end{tabular} } \vspace{0mm} \label{tab:result_ijbs} \end{table} \Paragraph{Experiment Setting} We first train CFSM with $\sim 10\%$ of MS-Celeb-1M training data ($n=0.4$M) as the source domain, and WiderFace as the target domain ($m=70$K). The model is trained for $125,000$ steps with a batch size of $32$. Adam optimizer is used with $\beta_{1}=0.5$ and $\beta_{2}=0.99$ at a learning rate of $1e-4$. For the FR model training, we adopt ResNet-$50$ as modified in~\cite{deng2019arcface} as the backbone and use ArcFace loss function~\cite{deng2019arcface} for training. We also train a model without using CFSM (\emph{i.e.,} replication of ArcFace) for comparison, denoted as \textbf{ArcFace}. The efficacy of our method (\textbf{ArcFace+Ours}) is validated by training an FR model with the guided face synthesis as the auxiliary data augmentation during training according to Eq.~\ref{eqn:outer_min}. \Paragraph{Results.} Tables~\ref{tab:result_ijbb} and~\ref{tab:result_ijbc} respectively show the face verification and identification results on IJB-B and IJB-C datasets. 
Our approach achieves SoTA performance on most of the protocols. The performance increase from using CFSM (\textbf{ArcFace+Ours}) is $3.69\%$ at TAR@FAR=$1e-5$ on IJB-B, and $2.10\%$ at TAR@FAR=$1e-6$ on IJB-C. Since both IJB-B and IJB-C are a mixture of high quality images and low quality videos, the performance gains with the augmented data indicate that our model can generalize to both high and low quality scenarios. In Tab.~\ref{tab:result_ijbs}, we show the comparisons on IJB-S and TinyFace. With our CFSM (\textbf{ArcFace+Ours}), the ArcFace model outperforms all the baselines in both face identification and verification tasks, and achieves a new SoTA performance. \begin{figure}[t] \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/main/ijbs_vs_rank1.pdf}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/main/ijbs_vb_rank1.pdf}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/main/ijbs_vv_rank1.pdf}} \\ \vspace{-5mm}\\ \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/main/ijbs_vs_1v1.pdf}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/main/ijbs_vb_1v1.pdf}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/main/ijbs_vv_1v1.pdf}} \end{tabular} } \vspace{0mm} \caption{ Comparison on three IJB-S protocols with a varied number of training data on the $x$-axis (maximum is 3.9M). The plots show that using guided CFSM as an augmentation (\textcolor{mygreen}{\textbf{ArcFace+Ours}}) can lead to higher performance in all settings. Note that CFSM trained on $70$K unlabeled data is more useful than $3$M original data, as shown by the higher \textbf{V2V} performance of \textbf{Ours} with $0.5$M than of the Baseline with $3.9$M. } \label{fig:diff_training} \vspace{-2mm} \end{figure} \subsection{Ablation and Analysis} In this experiment, we compare the face verification and identification performance on the \emph{most challenging} IJB-S and TinyFace datasets.
\Paragraph{Large-scale Training Data vs.~Augmentation.} To further validate the applicability of our CFSM as an augmentation in different training settings, we adopt IResNet-$100$~\cite{deng2019arcface} as the FR model backbone and utilize the SoTA AdaFace loss function~\cite{kim2022adaface} and the large-scale WebFace12M~\cite{zhu2021webface260m} dataset for training. As compared in Tab.~\ref{tab:result_ijbs}, our model still improves unconstrained face recognition accuracies by a promising margin (Rank1: $+2.43\%$ on the IJB-S V2V protocol and $+1.58\%$ on TinyFace) on a large-scale training dataset (WebFace12M). \Paragraph{Effect of Guidance in CFSM.} To validate the effectiveness of the proposed \emph{controllable} and \emph{guided} face synthesis model in face recognition, we train an FR model with CFSM as augmentation but with random style coefficients (\textbf{Ours*}), and compare with guided CFSM (\textbf{Ours}). Tab.~\ref{tab:result_ijbs} shows that synthesis with random coefficients does not bring a significant benefit to performance on the unconstrained IJB-S dataset. However, when samples are generated with guided CFSM, the trained FR model performs much better. Fig.~\ref{fig:update_faces} shows the effect of guidance on the training images: for low quality images, too much degradation leads to images with altered identity, and guided CFSM avoids synthesizing such unidentifiable, bad quality images. \begin{figure}[t] \centering \includegraphics[width=10.0cm]{fig/main/dataset.png} \vspace{-2mm} \caption{ \textbf{(a)} A few examples from each dataset. Note the differences in style. For example, AgeDB contains old grayscale photos, WiderFace has mostly low resolution faces, and IJB-S includes extreme unconstrained attributes (\emph{e.g.,} noise, motion blur or turbulence effects). \textbf{(b)} shows the pairwise distribution similarity scores among datasets that are calculated using the learned subspace via Eq.~\ref{eqn:dist_simil}.
Note both IJB-B and WiderFace have high similarity scores with IJB-S. \textbf{(c)} The t-SNE plot of the learned $[\mathbf{u}_1,...,\mathbf{u}_{10}]$ and mean style $\boldsymbol\mu$. The dots represent $\mathbf{u}_{i}+\boldsymbol\mu$ and the stars denote $\boldsymbol\mu$.} \vspace{-2mm} \label{fig:dataset} \end{figure} \begin{figure}[t] \centering \includegraphics[width=11cm]{fig/main/mag_examples.png} \caption{ \textbf{Interpretable magnitude of the style coefficients}. Given an input image, we randomly sample two sets of style coefficients $\mathbf{o_{1}}$ (left) and $\mathbf{o_{2}}$ (right) for all $3$ models (respectively trained with the IJB-S, WiderFace and AgeDB datasets as the target data). We dynamically rescale each coefficient to magnitudes $0.5a$, $a$, $1.5a$, $2a$, $3a$, $4a$ along its unit direction $\mathbf{o}/||\mathbf{o}||$, where $a{=}||\mathbf{o}||$. As can be seen, our model indeed realizes the goal of changing the degree of style synthesis with the coefficient magnitude. } \label{fig:mag_examples} \end{figure} \Paragraph{Effect of the Number of Labeled Training Data.} To validate the effect of the number of labeled training data, we train a series of models by adjusting the number of labeled training samples from $0.5$M to $3.9$M and report results for both the \textbf{ArcFace} and \textbf{ArcFace+Ours} settings. Fig.~\ref{fig:diff_training} shows the performance on various IJB-S protocols. For the full data usage setting, our model trained with guided CFSM as an augmentation outperforms the baseline by a large margin. Also note that the proposed method trained with $1/8$th ($0.5$M) labeled data still achieves comparable, or even better, performance than the baseline with $3.9$M labeled data on the \textbf{V2V} protocol. This is because CFSM generates target data-specific augmentations, demonstrating the value of our controllable and guided face synthesis, which can significantly boost unconstrained FR performance.
\subsection{Analysis and Visualizations of the Face Synthesis model}\label{sec:exp_dist_simi} In this experiment, we quantitatively evaluate the distributional similarity between datasets based on the learned linear subspace model for face synthesis. To this end, we choose $6$ face datasets that are publicly available and popular for face recognition testing. These datasets are LFW~\cite{huang2008labeled}, AgeDB-$30$~\cite{moschoglou2017agedb}, CFP-FP~\cite{sengupta2016frontal}, IJB-B~\cite{whitelam2017iarpa}, WiderFace (WF)~\cite{yang2016wider} and IJB-S~\cite{kalka2018ijb}. Figure~\ref{fig:dataset}\textcolor{red}{(a)} shows examples from these $6$ datasets. Each dataset has its own style. For example, CFP-FP includes profile faces, WiderFace has mostly low resolution faces, and IJB-S contains extreme unconstrained attributes. During training, for each dataset, we randomly select $12$K images as our target data to train the synthesis model. For the source data, we use the same subset of MS-Celeb-$1$M as in Sec.~\ref{sec:comparison}. \Paragraph{Distribution Similarity} Based on the learned dataset-specific linear subspace model, we calculate the pairwise distribution similarity score via Eqn.~\ref{eqn:dist_simil}. As shown in Fig.~\ref{fig:dataset}\textcolor{red}{(b)}, the score reflects the style correlation between datasets. For instance, strong correlations among IJB-B, IJB-S and WiderFace (WF) are observed. We further visualize the learned basis vectors $[\mathbf{u}_{1},...,\mathbf{u}_{q}]$ and the mean style $\mu$ in Fig.~\ref{fig:dataset}\textcolor{red}{(c)}. The basis vectors are well clustered and the discriminative grouping indicates the correlation between dataset-specific models. \Paragraph{Visualizations of Style Latent Spaces} Fig.~\ref{fig:mag_examples} shows face images generated by the learned CFSM. It can be seen that when the magnitude increases, the corresponding synthesized faces reveal more dataset-specific style variations. 
This implies the magnitude of the style code is a good indicator of image quality. We also visualize the learned $\mathbf{U}$ of $6$ models in Fig.~\ref{fig:basis_examples}. As we move along a basis vector of the learned subspace, the synthesized images change their style in dataset-specific ways. For instance, with WiderFace or IJB-S as the target, synthesized images show various low quality styles such as blurring or turbulence effects. The CFP dataset contains cropped images, and the ``crop'' style manifests along certain directions. Also, we observe that the learned $\mathbf{U}$ differ across datasets, which further verifies that the learned linear subspace model in CFSM is able to capture the variations in the target datasets. \begin{figure}[t] \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } { LFW} & { AgeDB} & { CFP}\\ \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/main/component_LFW.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/main/component_W_agedb.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/main/component_W_cfp.png}} \\ \vspace{-2mm} \\ { IJB-B} & { WiderFace} & { IJB-S}\\ \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/main/component_IJBB.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/main/component_W_wf.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/main/component_IJBS.png}} \\ \end{tabular} } \caption{ Given a single input image, we visualize the synthesized images by traversing along the learned orthonormal basis $\mathbf{U}$ in the $6$ dataset-specific models. For each dataset, Rows $1{-}3$ illustrate the first $3$ basis vectors traversed. Columns $1{-}5$ show the directions, which are scaled to emphasize their effect, {\it i.e.}, only one element of the style coefficient $\mathbf{o}$ varies from $-3\sigma$ to $3\sigma$ while the other $q\!-\!1$ elements remain $0$.
} \label{fig:basis_examples} \end{figure} \section{Conclusions}\label{sec:con} We answer the fundamental question of ``\textit{How can image synthesis benefit the end goal of improving the recognition task?}'' Our controllable face synthesis model (CFSM) with adversarial feedback of FR model shows the merit of task-oriented image manipulation, evidenced by significant performance increases in unconstrained face datasets (IJB-B, IJB-C, IJB-S and TinyFace). In a broader context, it shows that adversarial manipulation could go beyond being an attacker, and serve to increase recognition accuracies in vision tasks. Meanwhile, we define a dataset similarity metric based on the learned style bases, which capture the style differences in a label or predictor agnostic way. We believe that our research has presented the power of a controllable and guided face synthesis model for unconstrained FR and provides an understanding of dataset differences. \Paragraph{Acknowledgments.} This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via 2022-21102100004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. \section*{Supplementary} \renewcommand{\thesection}{\Alph{section}} \setcounter{section}{0} In this supplementary material, we provide: $\diamond$ Additional implementation details including the network structure of the face synthesis model and the training process. 
$\diamond$ Additional ablation studies, including the effects of different target datasets, the dimensionality of the style coefficient, the perturbation budget, and the ratio of real and synthetic images in each mini-batch. $\diamond$ Additional visualizations of the adversarial training process. \section{Additional Implementation Details} \Paragraph{Network Structure.} The network architecture of the generator (including the encoder $E$ and decoder $G$) used in our face synthesis module is illustrated in Tab.~\ref{tab:generator}. We apply Instance Normalization~\cite{ulyanov2017improved} to the encoder and Adaptive Instance Normalization~\cite{huang2017arbitrary} to the RESBLKs (the residual basic blocks) of the decoder. The encoder takes an image $\mathbf{X}$ with a resolution of $112\times112$ as input, and outputs its content feature $\mathbf{C}\in\mathbb{R}^{256\times28\times28}$. The input and output of the decoder are $\mathbf{C}$ and the synthesized image $\hat{\mathbf{X}}$, respectively. Additionally, as shown in Fig.~\ref{fig:decoder}, the parameters of the Adaptive Instance Normalization (AdaIN) layers in the residual blocks are dynamically generated by a multilayer perceptron (MLP) from the linear subspace model. Following~\cite{wang2018high}, we employ multi-scale discriminators with $3$ scales as our discriminator $D$. \Paragraph{Training Process.} We summarize the training process in Tab.~\ref{tab:train_process}. In Stage 1, we train our controllable face synthesis module with the identity consistency loss and the adversarial objective. In Stage 2, based on the pre-trained and fixed face synthesis model, we introduce an adversarial regularization strategy to guide the data augmentation process and train the face feature extractor $\mathcal{F}$.
Specifically, in the adversarial FR model training, given $B$ face images $\{\mathbf{X}\}_{i=1}^{B}$ in a mini-batch, our synthesis model (CFSM) is utilized to produce their synthesized versions $\hat{\mathbf{X}}$ with initial random style coefficients $\{\mathbf{o}\}_{i=1}^{B}$. Based on Eqns.~7 and 8 (main paper), we obtain the updated style coefficients $\{\mathbf{o}^{*}\}_{i=1}^{B}$ with perturbations. We then generate the perturbed images $\{\mathbf{X}^{*}\}_{i=1}^{B}$ with CFSM. Finally, we randomly select half of $\{\mathbf{X}\}_{i=1}^{B}$ and half of $\{\mathbf{X}^{*}\}_{i=1}^{B}$ to form a new training batch for the FR model training. Note that in every epoch of the FR model training, we randomly initialize different style coefficients, even for the same training samples. \begin{table}[t] \vspace{0mm} \renewcommand\arraystretch{1.3} \caption{\small Network architectures of the generator of the face synthesis module. RESBLK denotes the residual basic block. [Keys: N=Neurons, K=Kernel size, S=Stride, B=Batch size]. } \centering \vspace{0mm} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{c || c ||c } \hline Layer & Encoder (E) & Decoder (G) \\ \hline 1 & CONV-(N64,K7,S1), ReLU & RESBLK-(N256,K3,S1) \\ 2 & CONV-(N128,K4,S2), ReLU & RESBLK-(N256,K3,S1) \\ 3 & CONV-(N256,K4,S2), ReLU & RESBLK-(N256,K3,S1) \\ 4 & RESBLK-(N256,K3,S1) & RESBLK-(N256,K3,S1) \\ 5 & RESBLK-(N256,K3,S1) & CONV-(N128,K5,S1), ReLU \\ 6 & RESBLK-(N256,K3,S1) & CONV-(N64,K5,S1), ReLU \\ 7 & RESBLK-(N256,K3,S1) & CONV-(N3,K7,S1), TanH \\ \hline Output & $\mathbf{C}\in\mathbb{R}^{B\times256\times28\times28}$ & $\hat{\mathbf{X}}\in\mathbb{R}^{B\times3\times W\times H}$ \\ \hline \end{tabular} } \label{tab:generator} \vspace{0mm} \end{table} \begin{figure}[t] \centering \includegraphics[width=12.0cm]{fig/supp/decoder.png} \vspace{0mm} \caption{\small Additional illustration of the decoder network structure.
The parameters of the Adaptive Instance Normalization (AdaIN) layers in the residual blocks are dynamically generated by a multilayer perceptron (MLP) from the linear subspace model.} \label{fig:decoder} \vspace{-2mm} \end{figure} \begin{table}[t] \vspace{0mm} \renewcommand\arraystretch{1.3} \caption{\small Stages of the training process.} \centering \vspace{0mm} \resizebox{0.6\linewidth}{!}{ \begin{tabular}{c | c |c } \hline & Network or parameters & Loss \\ \hline \hline Stage 1 & $E$, $G$, $D$, MLP, $\mathbf{U}$, $\mu$ & $\mathcal{L}_{ort}$, $\mathcal{L}_{adv}$, $\mathcal{L}_{D}$, $\mathcal{L}_{id}$ \\ \hline Stage 2 & $\mathcal{F}$, $\delta^{*}$ & $\mathcal{L}_{cla}$ \\ \hline \end{tabular} } \label{tab:train_process} \vspace{-2mm} \end{table} \section{Additional Ablation Studies} \Paragraph{Effect of Different Target Datasets.} To study how the choice of target dataset in face synthesis model training affects face recognition performance, we choose two other datasets, LFW~\cite{huang2008labeled} and IJB-S~\cite{kalka2018ijb}, to train the face synthesis models and apply them for the FR model training. During training, for each dataset, we randomly select \emph{unlabeled} $12$K face images as the target data to train the face synthesis model. For efficiency, we train the FR models with $0.5$M labeled training samples from the MS-Celeb-1M dataset. The diversity of the three face datasets can be ranked as IJB-S $>$ WiderFace $>$ LFW. We show the comparisons on IJB-S protocols in Fig.~\ref{fig:diff_datasets}, which shows that the more diverse the unlabeled target dataset is, the larger the performance gain. In particular, although LFW is similar to MS-Celeb-1M, it can introduce additional diversity in the dataset when augmented with our controllable and guided face synthesis model.
Using \emph{unlabeled} IJB-S images as the target data further improves the performance on the IJB-S dataset, which indicates that our model can be applied to boost face recognition even when only limited unlabeled samples are available. \Paragraph{Effect of the Dimensionality ($q$) of the Style Coefficient.} Fig.~\ref{fig:diff_styledim} shows the recognition performance on IJB-S over the dimensionality of the style coefficient; the dimensionality does not have a significant effect on the recognition performance. The model with $q=10$ performs slightly better in the face verification settings, such as V2S and V2B (TAR@FAR=1e-2). The results also indicate that learning manipulation in the low-dimensional subspace is effective and robust for face recognition. \Paragraph{Effect of the Perturbation Budget ($\epsilon$).} We conduct experiments to demonstrate the effect of the perturbation budget $\epsilon$. As shown in Fig.~\ref{fig:diff_perturbation}, a large perturbation budget ($\epsilon=0.628$) leads to better performance in the Surveillance-to-Surveillance (V2V) protocol while performing slightly worse in the Surveillance-to-Still (V2S) and Surveillance-to-Booking (V2B) protocols. These observations are not surprising: a large style coefficient perturbation generates low-quality faces, which is beneficial for improving generalization to unconstrained testing scenarios. \Paragraph{Effect of the Ratio of Real and Synthetic Images in Each Mini-batch.} As illustrated in Sec. 3.2 (main paper), we combine the original real images and their corresponding synthesized versions into a mini-batch for the FR model training. In this experiment, we further study the ratio of real (R) and synthetic (S) images in each mini-batch.
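As a concrete illustration of this mixing step, the following sketch (our own simplification; the `mix_batch` helper and the dummy array shapes are hypothetical, and a real implementation would operate on framework tensors inside the training loop) forms one mini-batch from a given real-to-synthetic ratio:

```python
import numpy as np

def mix_batch(real, synth, real_frac=0.5, rng=None):
    """Form one training mini-batch from aligned real images and their
    perturbed synthetic versions, both of shape (B, C, H, W)."""
    rng = rng or np.random.default_rng()
    B = real.shape[0]
    n_real = int(round(real_frac * B))
    idx = rng.permutation(B)
    # the first n_real shuffled indices are taken from the real images,
    # the remainder from their synthetic counterparts
    return np.concatenate([real[idx[:n_real]], synth[idx[n_real:]]], axis=0)

# e.g. an R:S = 25%:75% batch of 8 dummy images
real = np.zeros((8, 3, 112, 112))
synth = np.ones((8, 3, 112, 112))
batch = mix_batch(real, synth, real_frac=0.25, rng=np.random.default_rng(0))
```

With `real_frac=0.5` this reduces to the half-and-half selection used in the main paper.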
As shown in Fig.~\ref{fig:diff_ratio}, with more synthetic images in each mini-batch (R:S $=25\%:75\%$), the model achieves the best performance in the most challenging Surveillance-to-Surveillance (V2V) protocol (Rank1). \begin{figure}[t] \vspace{0mm} \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_dataset_v2s.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_dataset_v2b.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_dataset_v2v.png}} \\ \vspace{-4mm}\\ (a)& (b) & (c) \\ \end{tabular} } \caption{\small Evaluation results on IJB-S with different target datasets. \textbf{Baseline} refers to the performance of the FR model trained on $0.5$ million labeled samples (a subset of MS-Celeb-1M) without using the proposed face synthesis model. In this experiment, other $3$ FR models are trained on the $0.5$ million labeled samples with the proposed face synthesis models, which are trained with additional $12$K unlabeled samples (from LFW, WiderFace or IJB-S, respectively).} \label{fig:diff_datasets} \end{figure} \begin{figure}[t] \vspace{0mm} \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_styledim_v2s.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_styledim_v2b.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_styledim_v2v.png}} \\ \vspace{-4mm}\\ (a)& (b) & (c) \\ \end{tabular} } \caption{\small Evaluation results on IJB-S with the different dimensionalities of the style coefficient ($q=5$, $10$, $20$). \textbf{Baseline} refers to the performance of the FR model trained on $0.5$ million labeled samples (a subset of MS-Celeb-1M) without using the proposed face synthesis model. 
In this experiment, other $3$ models are trained on the $0.5$ million labeled samples with the proposed face synthesis model, which is trained with additional $70$K unlabeled samples from WiderFace.} \label{fig:diff_styledim} \end{figure} \begin{figure}[t] \vspace{0mm} \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_perturbation_v2s.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_perturbation_v2b.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_perturbation_v2v.png}} \\ \vspace{-4mm}\\ (a)& (b) & (c) \\ \end{tabular} } \caption{\small Evaluation results on IJB-S with different perturbation budget values ($\epsilon=0.157$, $0.314$ or $0.628$). \textbf{Baseline} refers to the performance of the FR model trained on $0.5$ million labeled samples (a subset of MS-Celeb-1M) without using the proposed face synthesis model. In this experiment, other $3$ models are trained on the $0.5$ million labeled samples with the proposed face synthesis model, which is trained with additional $70$K unlabeled samples from WiderFace.} \label{fig:diff_perturbation} \end{figure} \begin{figure}[t] \vspace{0mm} \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_ratio_v2s.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_ratio_v2b.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/diff_ratio_v2v.png}} \\ \vspace{-4mm}\\ (a)& (b) & (c) \\ \end{tabular} } \caption{\small Evaluation results on IJB-S with different ratios of real (R) and synthetic (S) images in each mini-batch (R:S $=75\%:25\%$, $50\%:50\%$ or $25\%:75\%$). \textbf{Baseline} refers to the performance of the FR model trained on $0.5$ million labeled samples (a subset of MS-Celeb-1M) without using the proposed face synthesis model. 
In this experiment, other $3$ models are trained on the $0.5$ million labeled samples with the proposed face synthesis model, which is trained with additional $70$K unlabeled samples from WiderFace.} \label{fig:diff_ratio} \end{figure} \section{Additional Visualizations} \Paragraph{The Perturbations in Direction or Magnitude.} In adversarial FR model training, our synthesis model is able to offer two meaningful possibilities to perform style coefficient perturbation: magnitude and direction. To study the perturbation properties (direction or magnitude), we collect the initial style coefficient $\mathbf{o}$ and style perturbation $\boldsymbol\delta^{*}$ of $10$K samples during the FR model training. We first measure the Cosine Similarity $S_C$ (Fig.~\ref{fig:sta_info}~\textcolor{red}{(a)}) between the initial style coefficient $\mathbf{o}$ and the updated one $\mathbf{o}^*=\mathbf{o}+\boldsymbol\delta^*$. Then we present the histogram of the differences (Fig.~\ref{fig:sta_info}~\textcolor{red}{(b)}) between the magnitudes of $\mathbf{o}$ and $\mathbf{o}^{*}$: $a-a^*$, where $a^*=||\mathbf{o}||$, $a=||\mathbf{o}^*||$. Finally, in Fig.~\ref{fig:sta_info}~\textcolor{red}{(c)}, we show $S_C$ over $(a-a^*)$. As observed in Fig.~\ref{fig:sta_info}, the style coefficient perturbation guided by FR model training indeed leads to changes in both the magnitude and direction of the initial style coefficient, which supports the motivation of our controllable face synthesis model design. More interestingly, the synthesis model attempts to achieve a balance between magnitude and direction in the adversarial-based augmentation process (see Fig.~\ref{fig:sta_info}~\textcolor{red}{(c)}). For example, when the magnitude is decreasing ($(a-a^*)<0$), the model is inclined to generate lower-quality faces with stronger target styles (lower $S_C$).
In contrast, when the magnitude is increasing ($(a-a^*)>0$), the model prefers to generate higher-quality faces with weaker target style (larger $S_C$). \Paragraph{Additional Visualizations of $\mathbf{X}$, $\hat{\mathbf{X}}$ and $\mathbf{X}^{*}$.} In Fig.~\ref{fig:adversarial_examples}, we show the original examples $\mathbf{X}$, synthesized examples with initial style coefficients $\hat{\mathbf{X}}$, and synthesized examples with style perturbations $\mathbf{X}^*$ in a mini-batch during the FR model training. Additionally, we visualize the pairwise error maps among these $3$ types of data. As shown, the guidance from the FR model encourages the face synthesis model to generate images with either increased or decreased target face style. \begin{figure}[t] \vspace{0mm} \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c c } \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/sta_direction.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/sta_magnitude.png}} & \raisebox{-.5\height}{\includegraphics[scale=0.3]{fig/supp/sta_magnitude_direction.png}} \\ \vspace{-3mm}\\ (a)& (b) & (c) \\ \end{tabular} } \caption{\small (a) Histogram of the Cosine Similarity between the initial style coefficient $\mathbf{o}$ and its updated one $\mathbf{o}^*$ with perturbation. (b) Histogram of differences between the magnitudes of $\mathbf{o}$ and $\mathbf{o}^{*}$: $a-a^*$, where $a^*=||\mathbf{o}||$, $a=||\mathbf{o}^*||$.
(c) Scatter plot showing the correlation between $S_C$ and $a-a^*$.} \label{fig:sta_info} \end{figure} \begin{figure}[t] \vspace{0mm} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \footnotesize \resizebox{1\linewidth}{!}{ \begin{tabular}{ c c } \vspace{0.5mm} {\tiny \tabincell{c}{ Original \\$\mathbf{X}$}}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/1_img.png}} \\ \vspace{0.5mm} {\tiny \tabincell{c}{ Synthesized \\$\hat{\mathbf{X}}$}}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/1_init_img.png}} \\ \vspace{0.5mm} {\tiny $|\hat{\mathbf{X}}-\mathbf{X}|$} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/1_init_error.png}} \\ \vspace{0.5mm} {\tiny \tabincell{c}{ Guided \\$\mathbf{X}^*$}} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/1_update_img.png}} \\ \vspace{0.5mm} {\tiny $|\mathbf{X}^*-\mathbf{X}|$}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/1_update_error.png}} \\\vspace{2mm} {\tiny $|\hat{\mathbf{X}}-\mathbf{X}^*|$}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/1_final_error.png}} \\ \vspace{0.5mm} {\tiny \tabincell{c}{ Original \\$\mathbf{X}$}}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/2_img.png}} \\ \vspace{0.5mm} {\tiny \tabincell{c}{ Synthesized \\$\hat{\mathbf{X}}$}}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/2_init_img.png}} \\ \vspace{0.5mm} {\tiny $|\hat{\mathbf{X}}-\mathbf{X}|$} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/2_init_error.png}} \\ \vspace{0.5mm} {\tiny \tabincell{c}{ Guided \\$\mathbf{X}^*$}} & \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/2_update_img.png}} \\ \vspace{0.5mm} {\tiny $|\mathbf{X}^*-\mathbf{X}|$}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/2_update_error.png}} \\ {\tiny $|\hat{\mathbf{X}}-\mathbf{X}^*|$}& \raisebox{-.5\height}{\includegraphics[scale=0.28]{fig/supp/2_final_error.png}} \\ \end{tabular} } \caption{ \small
Training examples in a mini-batch with our face synthesis model during the FR model training. For each image set, we show the original images $\mathbf{X}$, synthesized results with initial style coefficients $\hat{\mathbf{X}}$, and synthesized results with style perturbations $\mathbf{X}^*$. We additionally show their corresponding error maps: $|\hat{\mathbf{X}}-\mathbf{X}|$, $|\mathbf{X}^*-\mathbf{X}|$ and $|\hat{\mathbf{X}}-\mathbf{X}^*|$.} \label{fig:adversarial_examples} \vspace{0mm} \end{figure}
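The direction/magnitude statistics discussed above can be reproduced in a few lines. The sketch below is our own: it uses random stand-ins for $\mathbf{o}$ and $\boldsymbol\delta^{*}$ (in the actual analysis these are collected during FR training), so only the computation, not the resulting numbers, mirrors the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for quantities collected during FR training:
# initial style coefficients o and the learned style perturbations delta*.
N, q = 10000, 10                        # number of samples; style dimensionality
o = rng.normal(size=(N, q))             # initial style coefficients
delta = 0.3 * rng.normal(size=(N, q))   # FR-guided style perturbations
o_star = o + delta                      # updated coefficients o* = o + delta*

# Cosine similarity S_C between o and o* (histogram in panel (a))
s_c = (o * o_star).sum(axis=1) / (
    np.linalg.norm(o, axis=1) * np.linalg.norm(o_star, axis=1))

# Magnitude change ||o*|| - ||o|| (histogram in panel (b));
# negative values mean the perturbation shrank the style coefficient
mag_change = np.linalg.norm(o_star, axis=1) - np.linalg.norm(o, axis=1)

# Correlation underlying the scatter plot in panel (c)
corr = np.corrcoef(s_c, mag_change)[0, 1]
```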
\section{Introduction} Although we often work with user-labeled data and can quantify inter-annotator agreement, we often do not know what these agreement scores mean in terms of our target measures, such as mean average precision (mAP) or F1. What does an agreement of 70\% allow us to conclude about this dataset? What level of mAP is believable? What is the best a system can do here? Instead of an agreement probability or a statistical measure of agreement, what we really want is to quantify the effect our level of agreement has on our ability to evaluate systems. Therefore, what we really need is a technique for exploring what our agreement means in any arbitrary evaluation measure. While existing agreement measures give us a sense of how difficult the task was for annotators, it is hard to quantify what that means for measures, particularly those involving ranking. Maybe annotators agree on the most important instances, or maybe they disagree on the most critical instances -- the same agreement score may lead to very different reliabilities in results. Controversy detection is a problem that has attracted a lot of attention in recent years~\cite{dori2013detecting,jang2016probabilistic,jang2016improving,jang2017modeling,zielinski2018computing}. Much like relevance, sentiment, and other labels of interest, controversy is both somewhat subjective (noisy) and expensive to collect (limited). In this study, we look at the dataset of 343 pages collected by Dori-Hacohen and Allan~\cite{dori2013detecting} and used in further studies~\cite{jang2016probabilistic,jang2016improving}. We find that the language-modeling approaches introduced by Jang et al.~\cite{jang2016probabilistic} effectively ``max-out'' this dataset, as their language modeling classifier achieves statistically indistinguishable performance from our human-model simulations.
This means that, given the limited dataset size and the inherent disagreement between annotators on which documents are controversial, there are no AUC scores higher than currently published results that we should believe without collecting additional labels. In this paper, we introduce a simulation technique that allows this analysis to be performed on any dataset with a set of annotator labels, for any arbitrary measure or metric. \section{Related Work} The effect of the subjectivity and difficulty of relevance on IR evaluation has long been studied~\cite{bermingham2009study,carterette2010effect,webber2013assessor,voorhees2000variations,buckley2004retrieval,sanderson2005information}. While these studies look at the robustness of measures in the face of this subjectivity and noise, they do not quantify how well a system can do in comparison to humans -- probably because IR systems rarely retrieve otherwise perfect rankings. As agreement measures can be used to evaluate classification tasks directly, studies connecting the two often look at the suitability of an agreement score for classifier evaluation, e.g.,~\cite{ben2008relationship}. Simulating users of IR or ML systems is also not a new contribution (e.g., \cite{tague1981simulation}), and work in this direction has recently begun to accelerate~\cite{maxwell2016agents}. However, we are unaware of work that simulates users in order to understand the limitations of agreement for a dataset. \section{Truth Simulation Models} In this section, we introduce a number of models for deriving truth from a set of labels for a document. Given a document $D$ which has a set of labels $L = \{l_1, l_2, \ldots l_{|L|}\}$, with each label $l_i$ being provided by a different annotator, most studies choose a simple heuristic function $f(L)$ that generates a single label from the set.
Our models are applicable to both binary and multiclass judgments, provided that the functions $f(L)$ map from a set of labels to a valid label. Since we look at controversy, we focus on ordinal labels, and we can use fractional labels as predictions, but not as truth. \subsection{Average and Max Models} In prior work~\cite{dori2013detecting,jang2016probabilistic,jang2016improving}, the assignment given to a document is the average of its labels. In addition to this average model, we consider a maximum model, which is also appropriate for controversy: a document is controversial if any annotator considered it controversial -- a policy aimed at maximizing recall. \subsection{Agreement-Flip Model} Here we let $p$ be the probability of agreement calculated across the dataset. We could argue that with probability $1-p$, a label will be disputed and is therefore possibly incorrect with this probability. This is a fairly simple model of agreement, and the one implicitly assumed when papers report an agreement ratio. \subsection{Label Sampling Model} A better model takes document-level confusion into account: if a document garners a variety of labels, we treat these as observations of the underlying label distribution for that document. Here, our $f$ samples a label at random from a document. \subsection{Label Conflation Model} In settings with multi-valued relevance labels (e.g., Excellent, Good, Fair, Bad) and many documents labeled by only a single annotator, we may wish for a simulation that generalizes to these cases more accurately. Our label conflation model first learns the probabilities of mistaking labels for each other. We would expect disagreement between highly-relevant and relevant documents, for instance, but less disagreement between highly-relevant and non-relevant documents. However, this model is data driven, so it will reflect the actual behavior of users.
As a concrete example, the model learned for labels in the Dori-Hacohen and Allan dataset is presented in Table~\ref{tab:conflation}. Given any truth label, we then sample a new value based on how often humans disagree with that particular truth value. \begin{table}[ht] \caption{Label Conflation Model for Controversy Web Pages~\cite{dori2013detecting}. {\rm The label with most agreement is the ``Clearly Non-Controversial'' label.}} \label{tab:conflation} \centering \begin{tabular}{lr|rrrr} & & \multicolumn{4}{c}{Conflated With} \\ Text & \# & 2 & 1 & 0 & -1 \\ \hline Very Controversial & 2 & 237 & 83 & 23 & 48 \\ Controversial & 1 & 83 & 182 & 27 & 53 \\ Possibly Non-Controversial & 0 & 23 & 27 & 133 & 92 \\ Clearly Non-Controversial & -1 & 48 & 53 & 92 & 594 \\ \end{tabular} \end{table} \section{Results} Given our set of models that each reasonably approximate human labeling disagreement on this ambiguous task, we can now run a simulation to understand what the expected performance (under any measure) should be for our humans under these models. For each setting, we run $N=10000$ simulations. \begin{table}[ht] \caption{Simulated AUC scores for controversy detection. {\rm When predictions and truth are generated by the given models, we obtain the following sampling of AUC scores.}} \label{tab:simulationAUCs} \centering \begin{tabular}{lcc|rrr} & & & \multicolumn{3}{c}{Percentile AUC} \\ \# & System Model & Truth Model & 5th & 50th & 95th\\ \hline 1 & Sample & Average & 0.862 & 0.890 & 0.917 \\ 2 & Sample & Max & 0.846 & 0.874 & 0.900 \\ 3 & Sample & Sample & 0.818 & 0.852 & 0.884 \\ 4 & Conflate(Truth) & Sample & 0.794 & 0.836 & 0.875 \\ 5 & Conflate(Sample) & Conflate(Sample) & 0.674 & 0.725 & 0.774 \\ 6 & Flip(p=0.643, Truth) & Average & 0.593 & 0.639 & 0.685 \\ \end{tabular} \end{table} In prior work, the best AUC reported for this task is 0.856~\cite{jang2016probabilistic}, and the AUC reported for the original work is 0.743~\cite{dori2013detecting}.
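A minimal sketch of this simulation framework follows. It is our own reconstruction, not the authors' code: the $f(L)$ models of Section 3, a rank-based AUC, and percentile reporting; the helper names, the binarization threshold, and the toy `docs` are assumptions for illustration rather than the real 343-page data:

```python
import random
from statistics import mean, quantiles

# Conflation counts from the table above (truth label -> observed-label counts)
CONFLATION = {
     2: {2: 237, 1: 83, 0: 23, -1: 48},
     1: {2: 83, 1: 182, 0: 27, -1: 53},
     0: {2: 23, 1: 27, 0: 133, -1: 92},
    -1: {2: 48, 1: 53, 0: 92, -1: 594},
}

def conflate(label, rng):
    """Label conflation model: resample in proportion to observed disagreement."""
    row = CONFLATION[label]
    return rng.choices(list(row), weights=list(row.values()))[0]

# Simple f(L) models (L is one document's list of annotator labels)
average_model = lambda L, rng: mean(L)       # average model
max_model = lambda L, rng: max(L)            # max model
sample_model = lambda L, rng: rng.choice(L)  # label sampling model

def auc(scores, truths):
    """Rank-based (Mann-Whitney) AUC for binary truths and real-valued scores."""
    pos = [s for s, t in zip(scores, truths) if t]
    neg = [s for s, t in zip(scores, truths) if not t]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def simulate(docs, truth_model, system_model, runs=1000, seed=0):
    """5th/50th/95th percentile AUC when truth and predictions both come from
    f(L) models; truth is binarized as 'controversial' when positive."""
    rng = random.Random(seed)
    aucs = []
    for _ in range(runs):
        truths = [truth_model(L, rng) > 0 for L in docs]
        scores = [system_model(L, rng) for L in docs]
        aucs.append(auc(scores, truths))
    qs = quantiles(aucs, n=100)
    return qs[4], qs[49], qs[94]

# Toy example: one label list per document
docs = [[2, 1, 2], [2, -1, 1], [-1, -1, 0], [-1, 0, -1], [2, 2, -1], [-1, -1, 2]]
p5, p50, p95 = simulate(docs, average_model, sample_model, runs=500)
```

Pairings such as row \#3 of the table correspond to calling `simulate(docs, sample_model, sample_model)`.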
We present six pairings of our truth simulation models in Table~\ref{tab:simulationAUCs}. We have ordered our simulations from optimistic (label sampling system, average truth, \#1) to pessimistic (traditional agreement probabilities, \#6). This suggests that we can believe the improvement from 0.743 to 0.856, but that we should be skeptical of any further improvements shown on this dataset, as even our optimistic models suggest that we are doing as well as a human can do given the ambiguity of the task. \section{Conclusions} In this work, we have briefly presented a number of strategies for investigating the agreement of users on labeling tasks. Given a vector of labels assigned by different people to each document, we can model the difficulty of particular instances and particular labels. Further work is needed to understand the best simulation models for given tasks, but exploring a variety of reasonable models allows us to conclude that the discriminative power of an existing controversy detection dataset has been used up in terms of a robust classification metric: AUC. We therefore propose that future work on classifying or ranking using subjective labels consider simulation as an explainable alternative to opaque agreement scores. \section*{Acknowledgements} This work was supported in part by the Center for Intelligent Information Retrieval. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Given a set $S$ of $n$ points in the plane, we say a line $l$ is a spanning line of $S$ (or $l$ is spanned by $S$) if $l$ contains at least two distinct points of $S$. The following theorem was proposed by Erd\H{o}s and proved by Beck in \cite{Beck}: \begin{thm}\label{Erdos-Beck} [Erd\H{o}s--Beck's theorem] Any set $S$ of $n$ points in $\mathbb{R}^2$ among which at most $n-x$ points are collinear spans at least $cxn$ lines for some positive constant $c$. \end{thm} As a corollary, \begin{cor}\label{Beck2} In the plane, for each $\beta\in (0,1)$, there exists $\gamma>0$ depending on $\beta$ such that for any set $S$ of $n$ points, either a line contains $\beta n$ points of $S$, or the number of spanning lines exceeds $\gamma n^2$. \end{cor} In this paper, we present two ways to extend this result to $\mathbb{R}^d$ for any $d\geq 3$: the first resembles theorem \ref{Erdos-Beck} and the second resembles corollary \ref{Beck2}. From now on, $d$ is some fixed integer $(d\geq 2)$ and $n$ is rather large compared to $d$. The following terms appear in \cite{Eleke}: \begin{defi}\label{defi_saturated} Given a set $S$ of $n$ points and a $k$-dimensional flat $F$ in $\mathbb{R}^d$, let \textbf{$H_S(F)$} denote the number of $(k-1)$-dimensional hyperplanes in $F$ spanned by $S\cap F$. $F$ is called \textbf{$k$-rich} if $|F\cap S|\geq k$; $F$ is called \textbf{$\gamma$-saturated} if $H_S(F)\geq \gamma |F\cap S|^{d-1}$. We say $F$ is saturated if there exists some $\gamma>0$ such that $F$ is $\gamma$-saturated. We say $F$ is rich if there exists some $c>0$ such that $F$ is $c|S|$-rich. \end{defi} Our first main result is the following: \begin{thm}\label{E-B_d_dim} Assume $S$ is a set of $n$ points in $\mathbb{R}^d$, and there is some $c_1$-saturated $c_2n$-rich hyperplane $P$. Then there is some positive constant $\gamma$ depending on $c_1$ and $c_2$ such that $H_S(\mathbb{R}^d)\ge \gamma xn^{d-1}$, where $x=|S\setminus P|$.
\end{thm} When $d=2$, since any line is saturated, we get back Erd\H{o}s--Beck's theorem. To obtain a result similar to corollary \ref{Beck2}, we start with another classical result in \cite{Beck}: \begin{thm}\label{Beck} [Beck's theorem] Given an integer $d\geq 2$, there are constants $\beta_d, \gamma_d$ in $(0,1)$ such that given any set $S$ of $n$ points in $\mathbb{R}^d$, either there exists a hyperplane that contains at least $\beta_d n$ points of $S$ or the number of spanning hyperplanes is at least $\gamma_d n^d$. \end{thm} What is the maximum value of $\beta_d$ so that there is some $\gamma_d$ that satisfies the condition mentioned above? By corollary \ref{Beck2}, in two dimensions, any $\beta_2<1$ would work. However, this is no longer the case in higher dimensions. For example, in $\mathbb{R}^3$, consider two skew lines $l_1, l_2$ and the set $S$ consisting of $n/2$ points on each line (assuming $n$ is even). It is easy to see that any plane contains at most $n/2+1$ points of $S$ (a plane containing two points of one line contains the whole line, and meets the other line in at most one point), yet there are only $n$ spanning (hyper)planes. Thus $\beta_3\leq 1/2$. In this paper, we will prove that any $\beta_3<1/2$ would work, and that points clustering to two skew lines is the only new exceptional case in $\mathbb{R}^3$ (see theorem \ref{thm_3_dim}). Our second main result is the following: \begin{thm} \label{thm_d_dim} For any $0<\beta<1$ there is some constant $\gamma(\beta)$ depending on $d$ and $\beta$ such that for any set $S$ of $n$ points in $\mathbb{R}^d$, either there exists a collection of flats $\{F_1,\cdots,F_k\}$ whose union contains at least $\beta n$ points of $S$ and $\sum_{i=1}^k \dim F_i<d$, or $H_S(\mathbb{R}^d)\ge \gamma(\beta) n^d$. \end{thm} \begin{remark} After posting this paper on arXiv, the author learned that a stronger result was proved by Ben Lund in \cite{Lund} five months earlier. Indeed, theorem 2 part (1) in \cite{Lund} implies theorem \ref{thm_d_dim}. However, our method of proof is different from that in \cite{Lund}.
Lund's proof uses projection and induction on the dimension, while our proof uses saturated flats, theorem \ref{E-B_d_dim} and Beck's theorem. \end{remark} In theorem \ref{thm_d_dim}, when $d=2$, we get back corollary \ref{Beck2}, since the only possible collection of flats whose sum of dimensions is less than 2 is a single line. Roughly speaking, this theorem implies that $n$ points in $\mathbb{R}^d$ span $\Theta(n^d)$ hyperplanes (or the space $\mathbb{R}^d$ is saturated) unless most points cluster to a collection of flats whose sum of dimensions is strictly less than $d$. This description is satisfactory because if all but $o(n)$ points belong to the union of such a collection, we do not expect to get $\Theta(n^d)$ hyperplanes, as shown in the following result: \begin{prop} Let $S$ be a set of $n$ points in $\mathbb{R}^d$. Assume all but at most $x$ points belong to the union of flats $\{F_1,\dots, F_k\}$ where $\sum_{i=1}^k \dim F_i <d$; then $H_S(\mathbb{R}^d)\leq (x+d)n^{d-1}$. \end{prop} An immediate consequence of theorem \ref{thm_d_dim} is a stronger version of Beck's theorem: \begin{cor}\label{cor_beta_d} Given an integer $d\geq 2$, in $\mathbb{R}^d$, for any $\beta_d\in(0,\frac{1}{d-1})$ there is some $\gamma_d$ such that any $n$ points in $\mathbb{R}^d$ define at least $\gamma_d n^d$ hyperplanes unless some hyperplane contains at least $\beta_dn$ points. In other words, any $0<\beta_d<\frac{1}{d-1}$ works in theorem \ref{Beck}. \end{cor} Indeed, let $\beta=(d-1)\beta_d<1$ and choose $\gamma_d=\gamma(\beta)$ as in theorem \ref{thm_d_dim}. If the number of spanning hyperplanes is less than $\gamma_d n^d$, there is some collection of flats $\{F_1,\dots, F_k\}$ whose union contains at least $\beta n$ points. This implies some flat contains $\ge (\beta/k) n\geq \beta_d n$ points, since $k\leq d-1$. Any hyperplane that contains this flat must contain at least $\beta_d n$ points. Theorem \ref{thm_d_dim} also has some application in incidence geometry.
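The two-skew-lines example above, together with the proposition's upper bound, can be sanity-checked by brute force in small cases. The sketch below is our own (helper names are hypothetical): it places $n/2=4$ integer points on each of two skew lines in $\mathbb{R}^3$ and counts the distinct spanned planes, recovering exactly $n=8$:

```python
from itertools import combinations
from fractions import Fraction

def plane_through(p, q, r):
    """Normalized coefficients (a, b, c, d) of the plane a x + b y + c z + d = 0
    through three points, or None if the points are collinear."""
    u = tuple(q[i] - p[i] for i in range(3))
    v = tuple(r[i] - p[i] for i in range(3))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])       # normal vector u x v
    if n == (0, 0, 0):
        return None                       # a collinear triple spans no plane
    d = -(n[0] * p[0] + n[1] * p[1] + n[2] * p[2])
    coeffs = (*n, d)
    # divide by the first nonzero coefficient so equal planes hash equally
    lead = next(c for c in coeffs if c != 0)
    return tuple(Fraction(c, lead) for c in coeffs)

def spanned_planes(points):
    return {pl for trip in combinations(points, 3)
            if (pl := plane_through(*trip)) is not None}

# n/2 = 4 points on each of two skew lines:
# l1 = {(t, 0, 0)} (the x-axis) and l2 = {(0, s, 1)}
l1 = [(t, 0, 0) for t in range(1, 5)]
l2 = [(0, s, 1) for s in range(1, 5)]
planes = spanned_planes(l1 + l2)
print(len(planes))  # exactly n = 8 spanning planes
```

Each spanning plane here contains one full line plus one point of the other line, matching the count in the example; the total is also far below the proposition's bound $(x+d)n^{d-1} = 3n^2$ at $x=0$.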
In \cite{Eleke}, Elekes and T\'oth gave a bound for the number of $k$-rich $\gamma$-saturated hyperplanes w.r.t. $n$ points in $\mathbb{R}^d$, which in turn implies a bound for the number of $k$-rich $\alpha$-degenerate hyperplanes, where $\alpha$-degenerate flats are defined as follows: an $r$-flat $F$ in $\mathbb{R}^d$ is \textit{$\alpha$-degenerate} for some $0<\alpha\leq 1$ if $F\cap S\neq \emptyset$ and at most $\alpha |F\cap S|$ points of $F\cap S$ lie in an $(r-1)$-flat. \begin{thm}\label{Eleke-2}[Elekes-T\'oth] Given a set $S$ of $n$ points in $\mathbb{R}^d$, there is some constant $C(d,\gamma)>0$ such that for any $k$, the number of $k$-rich, $\gamma$-saturated hyperplanes w.r.t. $S$ is at most $C(d,\gamma) (n^dk^{-(d+1)}+n^{d-1}k^{-(d-1)})$. This, combined with Beck's theorem, implies there are positive constants $\beta_{d-1}$ and $C(d)$ such that for any set of $n$ points in $\mathbb{R}^d$, the number of $k$-rich $\beta_{d-1}$-degenerate hyperplanes is at most $C(d)(n^dk^{-(d+1)}+n^{d-1}k^{-(d-1)})$. \end{thm} By corollary \ref{cor_beta_d}, the second part of this theorem holds for any $\beta_{d-1}<1/(d-1)$. Moreover, if we redefine $\alpha$-degenerate as: \begin{defi} For integers $0<r\leq d$, given a point set $S$ and an $r$-flat $F$ in $\mathbb{R}^d$, we say $F$ is $\alpha$-degenerate for some $0<\alpha\leq 1$ if $F\cap S\neq \emptyset$ and at most $\alpha |F\cap S|$ points of $F\cap S$ lie in the union of some flats whose sum of dimensions is strictly less than $r$. \end{defi} then using theorem \ref{thm_d_dim}, we have \begin{cor}\label{new-eleke} For any $\beta\in (0,1)$, there is some positive constant $C(d,\beta)$ such that for any set of $n$ points in $\mathbb{R}^d$, the number of $k$-rich $\beta$-degenerate hyperplanes is at most $C(d,\beta)(n^dk^{-(d+1)}+n^{d-1}k^{-(d-1)})$.
\end{cor} The structure of the paper is as follows: proofs of theorems \ref{E-B_d_dim}, \ref{thm_3_dim} and \ref{thm_d_dim} are presented in sections 2, 3 and 4, respectively. In section 5, we summarize our results and conclude with several open questions. \section{Proof of theorem \ref{E-B_d_dim}} Observe that if we embed $\mathbb{R}^d$ into $\mathbb{R}\mathbb{P}^d$, theorems \ref{E-B_d_dim} and \ref{thm_d_dim} still hold. From now on, we will assume we are working over $\mathbb{R}\mathbb{P}^d$ even if the statement says $\mathbb{R}^d$. The advantage of working over projective spaces is that we do not need to worry about parallelism. For example, in $\mathbb{R}\mathbb{P}^3$, given any line $l$ and any plane $P$, either $l\subset P$ or $l$ meets $P$ in exactly one point. This does not hold in $\mathbb{R}^3$, as $l$ can be parallel to $P$; in this case we can say $l$ meets $P$ at a point at infinity. In general, \begin{lem}\label{projective_space_union_flats} For any flats $A,B$ in $\mathbb{R}\mathbb{P}^d$, let $\langle A,B\rangle$ denote the span of $A$ and $B$, the smallest flat that contains both $A$ and $B$. Then $$\dim \langle A,B\rangle =\begin{cases} \dim A+\dim B+1 & \mbox{if } A\cap B=\emptyset\\ \dim A+\dim B-\dim A\cap B &\mbox{otherwise} \end{cases} $$ \end{lem} In this section we prove theorem \ref{E-B_d_dim}. The key idea is to pair each point outside $P$ with a spanning hyperplane of $P$ to form a spanning hyperplane of $\mathbb{R}^d$, then use lemma \ref{lem_fix_point} and the Cauchy--Schwarz inequality to take care of the over-counting. Indeed, since $P$ is $c_1$-saturated and $c_2n$-rich w.r.t. $S$, the number of spanning hyperplanes of $P$ (which are $(d-2)$-dimensional flats of $\mathbb{R}^d$) is large: $H_S(P)\ge c_1|S\cap P|^{d-1}\ge c_1c_2^{d-1}n^{d-1}$. Let $X=S\setminus P$ be the set of points of $S$ outside $P$, so $|X|=x$. Pairing each spanning hyperplane of $P$ with a point in $X$, we get a spanning hyperplane of our space.
If all those hyperplanes were distinct, we would expect to see $\sim xn^{d-1}$ of them, exactly what we are trying to prove. However, those hyperplanes are usually not distinct. It would be bad if all points of $X$ belonged to a line $l$ and all spanning hyperplanes of $P$ passed through $l\cap P$. Fortunately this is not the case, as the following lemma shows: \begin{lem}\label{lem_fix_point} For a fixed point $q\in P$ (where $P$ is a $(d-1)$-dimensional flat), there are at most $n^{d-2}$ spanning hyperplanes of $P$ passing through $q$. In particular, for any given set $S$ of $n$ points in the plane, the number of $S$-spanned lines passing through a fixed point $q$ (not necessarily in $S$) is at most $n$. \end{lem} \begin{proof} In any spanning hyperplane $H$ of $P$ that passes through $q$, we can find $d-2$ points of $S\cap H$ so that they, together with $q$, form $d-1$ points in general position that span $H$. Two hyperplanes are distinct only if we can find distinct sets of $d-2$ points. Hence the number of hyperplanes is at most ${n\choose d-2}<n^{d-2}$. \end{proof} Now consider the set of all hyperplanes spanned by a point in $X$ and a spanning hyperplane of $P$: $\mathcal{P}= \{P_1,\dots, P_L\}$, and assume $|P_i\cap X|=a_i$. Then $$\sum_{i=1}^L a_i=\#\{(q,H): q\in X, H\in H_S(P)\}\geq c_1c_2^{d-1} xn^{d-1}$$ Here we abuse the notation $H_S(P)$ to denote the set of all $S$-spanned hyperplanes of $P$. On the other hand, consider $$J=\#\{(q_1,q_2,P_i): P_i\in \mathcal{P}; q_1,q_2\in X\cap P_i; q_1\neq q_2\}$$ For each choice of $(q_1,q_2)$, the line through them intersects $P$ at some point $q$. Each hyperplane $P_i\in\mathcal{P}$ that contains $q_1,q_2$ meets $P$ in a hyperplane of $P$ that contains $q$. By lemma \ref{lem_fix_point}, the number of choices for such hyperplanes is at most $n^{d-2}$. Hence $J\leq x^2n^{d-2}$. On the other hand, for each fixed $P_i$, there are ${a_i\choose 2}$ choices for $(q_1,q_2)$.
Using the Cauchy-Schwarz inequality: $$J=\sum_{i=1}^L{a_i\choose 2}\geq \frac{1}{3}\sum_{i=1}^L a_i^2-L\geq \frac{1}{3L}\left(\sum_{i=1}^L a_i\right)^2-L\ge 2c\frac{(xn^{d-1})^2}{L}-L$$ where $2c=\frac{1}{3}(c_1c_2^{d-1})^2$. Rewriting the inequality as $JL+L^2\geq cx^2n^{2d-2}$, we must have either $L^2\geq c x^2n^{2d-2}$ or $LJ\geq c x^2n^{2d-2}$. In the first case $L\geq \sqrt{c}\,xn^{d-1}$; in the second case, since $J\leq x^2n^{d-2}$ and $x\leq n$, we get $L\geq cx^2n^{2d-2}/J\geq cn^d\geq cxn^{d-1}$. In both cases, $L\geq \gamma xn^{d-1}$ for some constant $\gamma$ depending on $c_1$ and $c_2$. Finally, it is clear that $H_S(\mathbb{R}^d)\geq L\ge \gamma xn^{d-1}$. \section{Three dimensional case} In this section we will prove theorem \ref{thm_3_dim}, a special case of our main theorem \ref{thm_d_dim} when $d=3$. We prove it separately because its proof is similar to, yet much simpler than, the general case. We hope that by understanding the proof in this simple case, readers can convince themselves that our strategy works for the general case as well. It is of course totally fine to skip this section and go straight to the next one, where the general case's proof is presented. \begin{thm}\label{thm_3_dim} For any $\beta\in (0,1)$, there is some constant $\gamma$ depending on $\beta$ such that: for any $n$-point set $S$ in $\mathbb{R}^3$, either there exists a plane containing at least $\beta n$ points of $S$, or there are two skew lines whose union contains at least $\beta n$ points of $S$, or the number of spanning hyperplanes exceeds $\gamma n^3$. As a consequence, any $\beta_3\in (0,1/2)$ would work in theorem \ref{Beck}. \end{thm} \begin{remark} A stronger result is proved in \cite{Lund-Purdy} using a point-plane incidence bound: $H_S(\mathbb{R}^3)\gtrsim x^2 n$, where $n-x$ is the maximum number of points of $S$ that belong to a plane or two lines. \end{remark} Assume no plane and no pair of skew lines contains more than $\beta n$ points of $S$; we need to show $H_S(\mathbb{R}^3)\gtrsim_\beta n^3$.
Here the notation $\gtrsim_\beta$ means the inequality becomes correct after inserting, right after the $\ge$ sign, a constant that may depend on $\beta$. We will sometimes write $\gtrsim$ when the dependence on $\beta$ is implicit. By Beck's theorem \ref{Beck}, if no plane contains more than $\beta_3 n$ points, the space is saturated and we are done. So assume there is some plane $P_1$ that contains more than $\beta_3n$ points. If $P_1$ is $\gamma_2$-saturated, theorem \ref{E-B_d_dim} implies $H_S(\mathbb{R}^3)\gtrsim (n-|P_1\cap S|)n^2\gtrsim (1-\beta)n^3$, since no plane contains more than $\beta n$ points. If $P_1$ is not $\gamma_2$-saturated, then by theorem \ref{Beck} some line, say $l_1$, contains more than $\beta_2 |P_1\cap S|\geq \beta_2\beta_3n$ points of $S$. Excluding this line, there remain at least $(1-\beta)n$ points. We can repeat our argument for those points to find another line $l_2$ that contains at least $(1-\beta)\beta_2\beta_3n$ points. If $l_1,l_2$ belong to the same plane, then that plane is saturated and contains a portion of the points, so we can again apply theorem \ref{E-B_d_dim}. Otherwise $l_1$ and $l_2$ are skew. Because of our assumption, excluding those two lines we still have at least $(1-\beta)n$ points. Repeating the argument one more time, we can find another line $l_3$ that contains at least $(1-\beta)\beta_2\beta_3n$ points and is skew to $l_1$ and $l_2$. We finish our proof with the following lemma: \begin{lem}\label{3skewlines} Let $l_1,l_2,l_3$ be three pairwise skew lines in $\mathbb{R}^3$ with $|l_i\cap S|\geq c_in$ for $i=1,2,3$. Then $H_S(\mathbb{R}^3)\gtrsim_{c_1,c_2,c_3} n^3$. \end{lem} \begin{proof} Heuristically, if we pick a point on each line, they will form $\sim n^3$ planes, but those planes may not be distinct.
To guarantee distinctness, we need to pick our points more carefully: for any $p_1\in l_1$, choose $p_2\in l_2$ that does not belong to $\langle p_1,l_3\rangle$, then choose $p_3\in l_3$ that does not belong to $\langle p_1,l_2\rangle$ or $\langle l_1,p_2\rangle$. Now $p_1,p_2,p_3$ span some plane $H$ such that $H\cap l_i=p_i$. Hence all planes $\langle p_1,p_2,p_3\rangle$ are distinct. So the number of spanning planes is at least $c_1n(c_2n-1)(c_3n-2)\gtrsim n^3$ (since at most 1 point in $l_2$ and at most 2 points in $l_3$ are excluded from our choices). \end{proof} \section{Higher dimensions} The main purpose of this section is to prove theorem \ref{thm_d_dim}. But before we start, we will prove Proposition \ref{prop}, which illustrates that our result is tight. \textit{Proof of Proposition \ref{prop}:} For any spanning hyperplane $H$, we can pick $d$ points in $S\cap H$ in general position that generate $H$; call that set $D_H$. One hyperplane may have many generating sets, but two distinct hyperplanes must have distinct ones. Thus $H_S(\mathbb{R}^d)$ is at most the number of generating sets $\{D_H\}$. If $D_H$ contains a point outside $\cup F_i$, there are at most $x$ choices for that point, and ${n\choose d-1}<n^{d-1}$ choices for the remaining $d-1$ points. Therefore in this case the number of distinct $D_H$ is at most $xn^{d-1}$. Otherwise, assume $D_H$ contains only points in $\cup F_i$. Let $a_i$ denote the dimension of $F_i$, and $b_i=|D_H\cap F_i|$. As $\sum b_i\geq d>\sum a_i$, there must exist some $i$ such that $b_i>a_i$, which implies $F_i\subset H$. For each $i\in [k]$, two hyperplanes $H_1$ and $H_2$ that contain $F_i$ are distinguished by the sets $D_{H_1}\setminus F_i$ and $D_{H_2}\setminus F_i$. Since $|D_H\setminus F_i|=d-a_i-1$ for each such $H$, there are at most $n^{d-a_i-1}$ spanning hyperplanes that contain $F_i$. Summing over $i$, in this case the number of hyperplanes is at most $kn^{d-1}$. Therefore, the number of spanning hyperplanes does not exceed $(x+d)n^{d-1}$.
\qed The overall strategy to prove theorem \ref{thm_d_dim} is similar to that of the 3-dimensional case presented in the previous section. Assuming that no collection of flats whose sum of dimensions is less than $d$ contains more than $\beta n$ points of $S$, we will show that $H_S(\mathbb{R}^d)\gtrsim n^d$. By Beck's theorem \ref{Beck}, if no hyperplane contains more than $\beta_d n$ points, the space is saturated and we are done. So assume there is some $\beta_dn$-rich hyperplane $P_1$. If $H_S(P_1)\geq \gamma_{d-1}|P_1\cap S|^{d-1}\gtrsim n^{d-1}$, we can apply theorem \ref{E-B_d_dim}, as now we have a rich saturated hyperplane. Otherwise, by Beck's theorem, $P_1$ contains some $(\beta_d|S\cap P_1|)$-rich hyperplane $P_2$ (which is of dimension $d-2$). Again by Beck's theorem, either $P_2$ is saturated or it contains some rich hyperplane. Repeating this argument, we end up with a $c_1n$-rich $\gamma_k$-saturated $k$-flat for some constant $c_1$ and $k\leq d-1$. By our assumption, this flat contains at most $\beta n$ points. Excluding this flat, we are left with at least $(1-\beta)n$ points. Hence we can find another rich and saturated flat. Repeating this argument, we end up with a collection of rich saturated flats $\{F_1,\dots, F_k\}$ whose sum of dimensions is greater than or equal to $d$. We want to prove a result similar to lemma \ref{3skewlines}: a collection of rich saturated flats whose sum of dimensions is at least $d$ defines $\sim n^d$ hyperplanes. However, notice that if $\langle F_1,F_2\rangle$, the span of $F_1$ and $F_2$, i.e. the smallest flat that contains both $F_1$ and $F_2$, has dimension less than $\dim F_1+\dim F_2$, then by replacing $F_1,F_2$ with $\langle F_1,F_2\rangle$ we obtain another collection of flats whose sum of dimensions decreases. That observation inspires the following definition: \begin{defi} Consider a collection of flats $\{F_1,\dots, F_k\}$ in $\mathbb{R}^d$. For any $I\subset [k]:=\{1,\cdots,k\}$, let $F_I$ denote the span of $\{F_i\}_{i\in I}$.
This collection is called good if $\dim F_{[k]}=d\leq \sum_{i} \dim F_i$ while $\dim F_I \geq \sum_{i\in I} \dim F_i$ for any $I\subsetneq [k]$. \end{defi} We are now ready to state the generalization of lemma \ref{3skewlines}. \begin{lem}\label{lem_good_flats} In $\mathbb{R}^d$, if there is a good collection of flats $\{F_1,\cdots, F_k\}$, where each $F_i$ is of dimension $a_i$, $\gamma_{a_i}$-saturated, and $c_in$-rich w.r.t. $S$, then $H_S(\mathbb{R}^d)\gtrsim n^d$. \end{lem} We will show that this lemma finishes our proof of theorem \ref{thm_d_dim}. \textit{Proof of theorem \ref{thm_d_dim}:} Recall from the beginning of this section: after applying Beck's theorem many times, we have a collection of rich saturated flats whose sum of dimensions is at least $d$. Suppose this collection is not a good one, which means there is some $I\subset [k]$ so that $\dim F_I<\sum_{i\in I} \dim F_i$. By lemma \ref{lem_good_flats} applied with $d=\dim F_I$, $F_I$ is saturated; clearly $F_I$ is rich. So we can replace $\{F_i\}_{i\in I}$ by their span $F_I$ to get a new collection of flats whose sum of dimensions decreases. If the sum of dimensions is strictly less than $d$, we repeat our argument using Beck's theorem to find a new rich saturated flat. If the sum of dimensions is at least $d$ but the collection is still not good, again we can find a way to combine flats into $F_I$ as above. This guarantees we will obtain a good collection of flats at some point. By lemma \ref{lem_good_flats}, our space is saturated. \qed \\ \\ It remains to prove lemma \ref{lem_good_flats}, which is the hardest part of this paper. We encourage readers to read lemma \ref{3skewlines}, a simple version in which the good collection of flats consists of 3 pairwise skew lines, before proceeding any further. If you find some step in the following proof hard to follow, think about what it means in the case of 3 skew lines.
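To see what the definition says in the simplest case: three pairwise skew lines $l_1,l_2,l_3$ in $\mathbb{R}^3$ (viewed in $\mathbb{R}\mathbb{P}^3$) form a good collection with $k=3$ and $a_1=a_2=a_3=1$. Indeed, $\dim F_{[3]}=3=\sum_i a_i$, trivially $\dim F_{\{i\}}=a_i$, and for any pair $I=\{i,j\}$, skewness and lemma \ref{projective_space_union_flats} give $$\dim F_I = \dim\langle l_i,l_j\rangle = 1+1+1 = 3 \geq 2 = a_I.$$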
\\ \\ \textit{Proof of lemma \ref{lem_good_flats}:} There are two cases: when the sum of dimensions is $d$, and when the sum is strictly greater than $d$. Let us consider case 1 first, as it is simpler. Case 2 is similar, with a modification at the last step. \\ \\ \textbf{Case 1: $\sum_{i=1}^k a_i=d$} Heuristically, if we pick $a_i$ points in $S\cap F_i$ for each $i$ to form $\sum_i a_i=d$ points, those points are likely to generate an $S$-spanned hyperplane. There are $\sim n^{a_i}$ choices for the points in $F_i$, and thus $\sim n^{\sum a_i}=n^d$ spanning hyperplanes. There are two things that may go wrong: the $d$ chosen points may not generate a hyperplane, and the generated hyperplanes may not be distinct. In order to use the saturation of the flats $F_i$, instead of picking $a_i$ points, let us pick an $S$-spanned hyperplane $P_i$ in each $F_i$. Since $F_i$ is saturated, there are still $\sim n^{a_i}$ choices for $P_i$. One way to make sure the hyperplanes $H:=\langle P_1,\dots,P_k\rangle$ are distinct is to choose $P_i$ so that $H\cap F_i=P_i$. That motivates the following definition: \begin{defi} In $\mathbb{R}^d$, given a good collection of flats $\{F_1,\dots, F_k\}$ w.r.t. $S$, a sequence of flats $\{P_1,\dots, P_k\}$, where $P_i$ is a hyperplane of $F_i$, is a \textbf{nice sequence} if $\langle P_1,\dots, P_k\rangle=H$ is a spanning hyperplane of $\mathbb{R}^d$ and $H\cap F_i=P_i$. \end{defi} Clearly each nice sequence generates a distinct spanning hyperplane. Indeed, assume $\{P_1,\dots, P_k\}$ and $\{Q_1,\dots, Q_k\}$ are two nice sequences that generate the same hyperplane $H$. Then $P_i=F_i\cap H=Q_i$ for all $i$, so the two sequences are the same. It remains to show there are $\gtrsim n^d$ distinct nice sequences. As in lemma \ref{3skewlines}, we shall pick the $P_i$ one at a time in a careful manner. We use the following notation: $F_I:=\langle \{F_i\}_{i\in I}\rangle$; $P_I=\langle \{P_i\}_{i\in I}\rangle$; $a_I=\sum_{i\in I} a_i$; and $[n]=\{1,\dots, n\}$.
For $s=1,\dots, k$, assume we have picked $P_1,\dots, P_{s-1}$ (when $s=1$ this means we have not picked any flat yet). We now choose a spanning hyperplane $P_s$ of $F_s$ such that $\langle P_s, P_I, F_J\rangle=\langle F_s, P_I, F_J\rangle$ for any $I\subset [s-1], J\subset [k]\setminus I$ that satisfies $\langle P_I, F_J\rangle \cap F_s\neq \emptyset$. \\ \\ \textit{Claim 1:} There are at least $\mu_s n^{a_s}$ choices for such $P_s$, for some positive constant $\mu_s$. \\ \\ \textit{Proof of claim 1:} We count how many spanning hyperplanes of $F_s$ we cannot pick. Take any $I\subset [s-1], J\subsetneq [s+1,\dots, k]$ such that $\langle P_I, F_J\rangle \cap F_s= Q_{I,J}\neq \emptyset$. Any hyperplane $P_s$ of $F_s$ that does not contain $Q_{I,J}$ satisfies our condition, because $\langle Q_{I,J}, P_s\rangle$ is strictly bigger than $P_s$, hence must be $F_s$. The number of $S$-spanned hyperplanes of $F_s$ that contain a fixed point is bounded by $n^{a_s-1}$, hence the number of excluded hyperplanes is $\lesssim_d n^{a_s-1}$. Since $F_s$ is $\gamma_{a_s}$-saturated, $H_S(F_s)\geq \gamma_{a_s}n^{a_s}$, so for big enough $n$ there remain $\gtrsim n^{a_s}$ choices for $P_s$.\qed \\ \\ \textit{Claim 2:} For any $I\subset [k]$ and $J\subset [k]\setminus I$ we have: \begin{equation}\label{condition_dim} \dim\langle P_I, F_J\rangle = \begin{cases} a_{I}-1 &\mbox{if } J=\emptyset \\ \geq a_{I\cup J} & \mbox{if } J\neq \emptyset \end{cases} \end{equation} In particular, the sequence $\{P_i\}_{i\in[k]}$ is a nice one. \\ \\ \textit{Proof of claim 2:} We prove that \eqref{condition_dim} holds for any $I\subset [s]$ by induction on $s$. When $s=0$ and $I=\emptyset$, condition \eqref{condition_dim} becomes $\dim F_J\geq a_J$, which is true as the collection $\{F_i\}$ is good. Assume \eqref{condition_dim} holds for any $I\subset [s-1]$; we will show that it still holds for any $I\subset [s]$. Clearly we only need to consider the case $s\in I$.
If $\langle P_{I\setminus s},F_J\rangle \cap F_s=\emptyset$, then clearly $\langle P_{I\setminus s}, F_J\rangle\cap P_s=\emptyset$, thus $\dim \langle P_I, F_J\rangle= \dim \langle P_{I\setminus s}, F_J\rangle + \dim P_s+1\geq a_{I\cup J\setminus s}+(a_s-1)+1=a_{I\cup J}$ by lemma \ref{projective_space_union_flats}. If, on the other hand, $\langle P_{I\setminus s},F_J\rangle \cap F_s\neq \emptyset$, then by our choice of $P_s$, $\dim\langle P_I, F_J\rangle = \dim\langle P_{I\setminus s}, F_{J\cup s}\rangle \geq a_{I\cup J}$, as \eqref{condition_dim} holds up to $s-1$. By induction, $\dim P_{[s-1]} =a_{[s-1]}-1$ and $\dim \langle P_{[s-1]}, F_s\rangle\geq a_{[s]}$. This implies $\dim \langle P_{[s-1]}, F_s\rangle > \dim P_{[s-1]}+\dim F_s$. As a consequence, we must have $P_{[s-1]}\cap F_s=\emptyset$ by lemma \ref{projective_space_union_flats}. Thus $\dim P_{[s]}=\dim\langle P_{[s-1]}, P_s\rangle = \dim P_{[s-1]}+\dim P_s+1= a_{[s]}-1$, as we wished. Finally, we prove $\{P_i\}_i$ is a nice sequence. Let $H:=P_{[k]}$; then $H$ is a hyperplane, as $\dim H=a_{[k]}-1=d-1$. For any $i\in [k]$, $H\cap F_i$ has codimension at most 1 in $F_i$, hence $H\cap F_i$ is either $F_i$ or $P_i$. If there is some $i$ such that $H\cap F_i\neq P_i$, then $F_i\subset H$, and $\langle F_i, P_{[k]\setminus i}\rangle\subset H$. However, by \eqref{condition_dim}, $\dim \langle F_i, P_{[k]\setminus i}\rangle\geq a_{[k]}=d$, a contradiction. \qed \\ \\ \textbf{Case 2:} The sum of dimensions of the good flats is strictly bigger than $d$. We will start with a simple example to inspire the general solution. \textbf{Example:} $\{F_1,F_2,F_3\}$ in $\mathbb{R}^8$, each of dimension three, pairwise non-intersecting, with $\langle F_1,F_2,F_3\rangle =\mathbb{R}^8$. Heuristically, we can no longer take a spanning hyperplane in each flat: if we pick 3 generic points of $S$ in each flat to form a plane, those 9 points may span the whole space.
Instead, we should take 3 points in $S\cap F_1$, 3 points in $S\cap F_2$, and only 2 points in $S\cap F_3$. As in Case 1, we can find many $S$-spanned planes $P_1\subset F_1$ and $P_2\subset F_2$ such that $\langle P_1,P_2,F_3\rangle=\mathbb{R}^8$ and $\dim \langle P_1,P_2\rangle=5$. By lemma \ref{projective_space_union_flats}, $\langle P_1,P_2\rangle$ intersects $F_3$ at some point $Q$, not necessarily a point of $S$. Let $Q_1:=\langle P_1, F_2\rangle \cap F_3$ and $Q_2:=\langle F_1, P_2\rangle \cap F_3$; then by dimension counting $Q_1,Q_2$ are two lines in $F_3$ and $Q_1\cap Q_2=Q$. If there is a plane $P_3$ in $F_3$ that contains $Q$ and an $S$-spanned line $l$ but does not contain $Q_1,Q_2$, then we can check that $H:=\langle P_1,P_2,P_3\rangle$ is a spanning hyperplane of $\mathbb{R}^8$ and $H\cap F_i=P_i$ for $i=1,2,3$. Indeed, $\dim H= \dim \langle P_1,P_2\rangle +\dim P_3=5+2=7$; $H$ is $S$-spanned because we can find 3 points in $S\cap P_1$, 3 points in $S\cap P_2$ and 2 points in $S\cap l$ to form 8 points of $S\cap H$ in general position. To prove $H\cap F_i=P_i$, we prove $H$ does not contain $F_i$. For $i=1,2$, $H$ does not contain $F_i$ because $P_3$ does not contain $Q_i$. For $i=3$: if $F_3\subset H$, then $H=\langle P_1, P_2, F_3\rangle =\mathbb{R}^8$, a contradiction. It remains to count how many choices there are for $P_3$. In $F_3$, which we shall treat as the space $\mathbb{R}^3$, consider the projection map $\pi: F_3 \to F$ from $Q$ onto a generic plane $F$, given by $x\mapsto \langle Q, x\rangle \cap F$. Pick $F$ generic so that $\# \pi(S\cap F_3)\sim n$. Since $F_3$ is saturated, those points define $\sim n^2$ distinct lines. Excluding $\pi(Q_1)$ and $\pi(Q_2)$, these statements remain unchanged. The span of $Q$ with any of those $\pi(S\cap F_3)$-spanned lines forms a plane $P_3$ that satisfies our condition. There are $\sim n^2$ choices for $P_3$; combined with $\sim n^3$ choices for each of $P_1,P_2$, we have $\sim n^8$ spanning hyperplanes.
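For concreteness, the dimension counts in this example are direct applications of lemma \ref{projective_space_union_flats} (working in $\mathbb{R}\mathbb{P}^8$). Since $P_1$ and $P_2$ lie in the disjoint flats $F_1$ and $F_2$, $$\dim\langle P_1,P_2\rangle = 2+2+1 = 5,$$ and since $\langle P_1,P_2,F_3\rangle=\mathbb{R}^8$, $$\dim\left(\langle P_1,P_2\rangle\cap F_3\right) = 5+3-8 = 0,$$ so the intersection is the single point $Q$. Likewise $\dim\langle P_1,F_2\rangle = 2+3+1 = 6$, so $$\dim Q_1 = \dim\left(\langle P_1,F_2\rangle\cap F_3\right) = 6+3-8 = 1,$$ i.e. $Q_1$ (and symmetrically $Q_2$) is a line in $F_3$.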
\textbf{Back to our general case:} $\{F_i\}_{i=1}^k$ is a good collection of flats and $\sum_{i=1}^k a_i=d+x$ for some $x\geq 1$. Observe that $a_{[k-1]}\leq d-1$, because otherwise we should have considered the collection $\{F_1,\dots, F_{k-1}\}$ instead. As a consequence, $x\leq a_k-1$. We first pick a sequence $\{P_1,\dots, P_{k-1}\}$ of spanning flats as in case 1. Those flats satisfy, for any $I\subset[k-1]$: \begin{equation} \dim\langle P_I, F_J\rangle = \begin{cases} a_{I}-1 &\mbox{if } J=\emptyset \\ \geq a_{I\cup J} & \mbox{if } J\neq \emptyset, J\subsetneq [k]\setminus I \\ d &\mbox{if } J=[k]\setminus I\end{cases} \end{equation} Now we pick a hyperplane $P_k$ of $F_k$, not necessarily $S$-spanned, to form a nice sequence $\{P_1,\dots, P_k\}$, i.e. $H:=\langle P_1,\dots, P_k\rangle$ is an $S$-spanned hyperplane of $\mathbb{R}^d$ and $H\cap F_i=P_i$. Since $\dim P_{[k-1]}=a_{[k-1]}-1$, by lemma \ref{projective_space_union_flats}, $P_{[k-1]}$ intersects $F_k$ at some $(x-1)$-dim flat $Q$. For each $i\in[k-1]$, $\langle F_i, P_{[k-1]\setminus \{i\}}\rangle$ intersects $F_k$ at some $x$-dim flat $Q_i$ which contains $Q$. As in the example, any $P_k$ that contains $Q$ and an $S$-spanned $(a_k-x-1)$-flat in $F_k$, but does not contain any $Q_i$ for $i=1,\dots, k-1$, will satisfy our condition. The proof is quite simple and completely similar to that in the example, hence we omit it here. In $F_k$, which is equivalent to $\mathbb{R}^{a_k}$, consider the map $\pi$ which is the projection from $Q$ to some generic $(a_k-x)$-dim flat $F$, chosen so that most points of $S\cap F_k$ remain distinct under $\pi$. By dimension counting, each $Q_i$ is projected to a point $q_i$ in $F$. Excluding those $k-1$ points, there remains a positive proportion of the $n$ points in $F$. As $F_k$ is $S$-saturated, $F$ must be $\pi(S)$-saturated. In other words, there are $\gtrsim n^{a_k-x}$ flats of dimension $(a_k-x-1)$ that are spanned by $\pi(S)$.
The span of $Q$ with each of these flats generates a hyperplane $P_k$ satisfying our conditions. Hence we have $\gtrsim n^{a_1+\dots+a_{k-1}+a_k-x}=n^d$ distinct hyperplanes in $\mathbb{R}^d$. This concludes our proof of lemma \ref{lem_good_flats}.\qed \section{Extension and future work} In this paper we have generalized the Erd\H{o}s-Beck theorem to higher dimensions, as stated in theorems \ref{E-B_d_dim} and \ref{thm_d_dim}. It implies a stronger version of Beck's theorem (corollary \ref{cor_beta_d}) and has some applications in incidence geometry. Here are some final thoughts: \begin{enumerate} \item What happens over other fields? The proof of Beck's theorem uses (a weaker version of) the Szemer\'edi-Trotter theorem. Since the Szemer\'edi-Trotter theorem still holds over the complex numbers, as proved in \cite{Josh} and \cite{Toth}, we can easily extend Beck's theorem to $\mathbb{C}^d$, and all the results in this paper can be extended as well. Much less is known over finite fields. One partial result is in \cite{IRZ}. We wonder whether this partial result can be extended in any way using the techniques in this paper. \item What is the best bound for $\beta_d$ when $d\geq 4$? In corollary \ref{cor_beta_d} we show that any $\beta_d<1/(d-1)$ works in Beck's theorem. This bound is tight when $d=2$ and $3$, but it may not be tight for $d\geq 4$. In $\mathbb{R}^4$, if we choose 3 pairwise skew lines, each containing $n/3$ points, some hyperplane will contain two of the lines, and thus $2n/3$ points. We conjecture that the best bound for $\beta_4$ is $\beta_4<1/2$, obtained by choosing a line and a plane in general position, each containing $n/2$ points. In general, we suspect the best bound for $\beta_d$ can be found by carefully analysing all possibilities of flats whose sum of dimensions is less than $d$. \item Matroidal version: we can think of the plane as a simple matroid, where a line is a $2$-flat, a maximal set of rank 2.
In a simple matroid, most essential properties of points and lines still hold: two lines intersect in at most one point, and two distinct points define at most one line. However, since Beck's theorem may not hold over finite fields, we suspect it may not hold in matroids either. \item In \cite{Sharir}, Apfelbaum and Sharir used results about incidences between points and degenerate hyperplanes in \cite{Eleke} to show that if the number of incidences between $n$ points and $m$ arbitrary hyperplanes is big enough, the incidence graph must contain a large complete bipartite subgraph. We wonder if our new version of this result, corollary \ref{new-eleke}, would yield any better results. \end{enumerate} \section*{Acknowledgement} The author would like to thank Larry Guth for suggesting this problem and for his tremendous help and support throughout the project. The author also thanks Ben Yang, Josh Zahl, Hannah Alpert, Richard Stanley and Nate Harman for helpful conversations.
\section{The image of $\mathrm{tmf}_*\mathrm{tmf}$ in $\mathrm{TMF}_*\mathrm{TMF}_\mathbb{Q}$: two variable modular forms}\label{sec:2var} \subsection{Review of Laures's work on cooperations} \emph{In this brief subsection, we do \emph{not} work $2$-locally, but integrally.} For $N > 1$, the spectrum $\mathrm{TMF}_1(N)$ is even periodic, with $$ \mathrm{TMF}_1(N)_{2*} \cong M_*(\Gamma_1(N))[\Delta^{-1}]_{\mathbb{Z}[1/N]}. $$ In particular, its homotopy is torsion-free. As a result, there is an embedding \begin{align*} \mathrm{TMF}_1(N)_{2*} \mathrm{TMF}_1(N) & \hookrightarrow \mathrm{TMF}_1(N)_{2*}\mathrm{TMF}_1(N)_\mathbb{Q} \\ & \cong M_*(\Gamma_1(N))[\Delta^{-1}]_\mathbb{Q} \otimes M_*(\Gamma_1(N))[\Delta^{-1}]_\mathbb{Q}. \end{align*} Consider the multivariate $q$-expansion map $$ M_*(\Gamma_1(N))[\Delta^{-1}]_\mathbb{Q} \otimes M_*(\Gamma_1(N))[\Delta^{-1}]_\mathbb{Q} \rightarrow \mathbb{Q}((q, \bar{q})). $$ In \cite[Thm.~2.10]{Laures}, Laures uses it to determine the image of $\mathrm{TMF}_1(N)_*\mathrm{TMF}_1(N)$ under the embedding above. \begin{thm}[Laures]\label{thm:Laures} The multivariate $q$-expansion map gives a pullback $$ \xymatrix{ \mathrm{TMF}_1(N)_*\mathrm{TMF}_1(N) \ar[r] \ar[d] & \mathrm{TMF}_1(N)_*\mathrm{TMF}_1(N)_\mathbb{Q} \ar[d] \\ \mathbb{Z}[1/N]((q, \bar{q})) \ar[r] & \mathbb{Q}((q, \bar{q})). } $$ Therefore, elements of $\mathrm{TMF}_1(N)_*\mathrm{TMF}_1(N)$ are given by sums $$ \sum_i f_i \otimes g_i \in M_*(\Gamma_1(N))[\Delta^{-1}]_\mathbb{Q} \otimes M_*(\Gamma_1(N))[\Delta^{-1}]_\mathbb{Q} $$ with $$ \sum_i f_i(q) \otimes g_i(q) \in \mathbb{Z}[1/N]((q, \bar{q})).$$ \end{thm} We shall let $M^{2-var}_*(\Gamma_1(N))[\Delta^{-1}, \bar{\Delta}^{-1}]$ denote this ring of integral $2$-variable modular forms (meromorphic at the cusps). We shall denote the subring of those integral $2$-variable modular forms which have holomorphic multivariate $q$-expansions by $M^{2-var}_*(\Gamma_1(N))$. 
\begin{rmk} Laures's methods also apply to the case of $N = 1$ provided $6$ is inverted to give an isomorphism $$ \mathrm{TMF}_*\mathrm{TMF}[1/6] \cong M^{2-var}_*(\Gamma(1))[1/6, \Delta^{-1}, \bar{\Delta}^{-1}]. $$ \end{rmk} \subsection{Representing $\mathrm{TMF}_*\mathrm{TMF}/tors$ with $2$-variable modular forms} \emph{From now on, everything is again implicitly $2$-local.} We now turn to adapting Laures's perspective to identify $\mathrm{TMF}_*\mathrm{TMF}/tors$. To do this, we use the descent spectral sequence for $$ \mathrm{TMF} \rightarrow \mathrm{TMF}_1(3) . $$ Let $(B_*, \Gamma_{B_*})$ denote the Hopf algebroid encoding descent from $\Msl{3}$ to $\mathcal{M}$, with \begin{align*} B_* & = \pi_* \mathrm{TMF}_1(3) = \mathbb{Z} [a_1, a_3, \Delta^{-1}], \\ \Gamma_{B_*} & = \pi_* \mathrm{TMF}_1(3) \wedge_\mathrm{TMF} \mathrm{TMF}_1(3) = B_*[r,s,t]/(\sim), \end{align*} (see Section~\ref{sec:review}) where $\sim$ denotes the relations (\ref{eq:relations}). The Bousfield-Kan spectral sequence associated to the cosimplicial resolution $$ \mathrm{TMF} \rightarrow \mathrm{TMF}_1(3) \Rightarrow \mathrm{TMF}_1(3)^{\wedge_{\mathrm{TMF}} 2} \Rrightarrow \mathrm{TMF}_1(3)^{\wedge_{\mathrm{TMF}} 3} \cdots $$ yields the descent spectral sequence $$ \Ext^{s,t}_{\Gamma_{B_*}}(B_*) \Rightarrow \pi_{t-s} \mathrm{TMF} . $$ We can use parallel methods to construct a descent spectral sequence for the extension $$ \mathrm{TMF} \wedge \mathrm{TMF} \rightarrow \mathrm{TMF}_1(3) \wedge \mathrm{TMF}_1(3) . $$ Let $(B^{(2)}_*, \Gamma_{B^{(2)}_*})$ denote the associated Hopf algebroid encoding descent, with \begin{align*} B^{(2)}_* & = \pi_* \mathrm{TMF}_1(3) \wedge \mathrm{TMF}_1(3) , \\ \Gamma_{B^{(2)}_*} & = \pi_* (\mathrm{TMF}_1(3)^{\wedge_\mathrm{TMF} 2} \wedge \mathrm{TMF}_1(3)^{\wedge_\mathrm{TMF} 2}) . 
\end{align*} The Bousfield-Kan spectral sequence associated to the cosimplicial resolution $$ \mathrm{TMF}^{\wedge 2} \rightarrow \mathrm{TMF}_1(3)^{\wedge 2} \Rightarrow \left( \mathrm{TMF}_1(3)^{\wedge_\mathrm{TMF} 2}\right) ^{\wedge 2} \Rrightarrow \left( \mathrm{TMF}_1(3)^{\wedge_\mathrm{TMF} 3}\right)^{\wedge 2} \cdots $$ yields a descent spectral sequence $$ \Ext^{s,t}_{\Gamma_{B^{(2)}_*}}(B^{(2)}_*) \Rightarrow \mathrm{TMF}_{t-s} \mathrm{TMF} . $$ \begin{lem}\label{lem:2varlem1} The map induced from the edge homomorphism $$ \mathrm{TMF}_*\mathrm{TMF} /tors \rightarrow \Ext^{0,*}_{\Gamma^{(2)}_{B_*}}(B_*^{(2)}) $$ is an injection. \end{lem} \begin{proof} This follows from the fact that the map $$ \mathrm{TMF} \wedge \mathrm{TMF} \rightarrow \mathrm{TMF} \wedge \mathrm{TMF}_\mathbb{Q} $$ induces a map of descent spectral sequences $$ \xymatrix{ \Ext^{s,t}_{\Gamma^{(2)}_{B_*}}(B^{(2)}_*) \ar@{=>}[r] \ar[d] & \mathrm{TMF}_{t-s} \mathrm{TMF} \ar[d] \\ \Ext^{s,t}_{\Gamma^{(2)}_{B_*}}(B^{(2)}_* \otimes \mathbb{Q}) \ar@{=>}[r] & \mathrm{TMF}_{t-s} \mathrm{TMF}_{\mathbb{Q}} } $$ and the rational spectral sequence is concentrated on the $s = 0$ line. \end{proof} The significance of this homomorphism is that the target is the space of $2$-integral\footnote{i.e. integral for $\mathbb{Z}_{(2)}$} two-variable modular forms for $\Gamma(1)$. \begin{lem}\label{lem:2varlem2} The $0$-line of the descent spectral sequence for $\mathrm{TMF}_*\mathrm{TMF} $ may be identified with the space of $2$-integral two-variable modular forms of level $1$ (meromorphic at the cusp): $$ \Ext^{0, 2*}_{\Gamma^{(2)}_{B_*}}(B_*^{(2)}) = M_*^{2-var}(\Gamma(1))[\Delta^{-1}, \bar{\Delta}^{-1}] . 
$$ \end{lem} \begin{proof} This follows from the composition of pullback squares $$ \xymatrix{ \Ext^{0, *}_{\Gamma_{B^{(2)}_*}}(B_*^{(2)}) \ar@{^{(}->}[r] \ar@{^{(}->}[d] & \Ext^{0, *}_{\Gamma_{B^{(2)}_*}}(B_*^{(2)} \otimes \mathbb{Q}) \ar@{^{(}->}[d] \\ \mathrm{TMF}_1(3)_*\mathrm{TMF}_1(3) \ar@{^{(}->}[r] \ar[d] & \mathrm{TMF}_1(3)_*\mathrm{TMF}_1(3)_\mathbb{Q} \ar[d] \\ \mathbb{Z} ((q, \bar{q})) \ar[r] & \mathbb{Q}((q, \bar{q})). } $$ The bottom square is a pullback by Theorem~\ref{thm:Laures}. Note that since $\mathrm{TMF}_1(3) \wedge_\mathrm{TMF} \mathrm{TMF}_1(3)$ is Landweber exact, $\Gamma_{B_*^{(2)}}$ is torsion-free. Thus an element of $B_*^{(2)}$ is $\Gamma_{B^{(2)}_*}$-primitive if and only if its image in $B_*^{(2)} \otimes \mathbb{Q}$ is primitive. This shows that the top square is a pullback. \end{proof} \subsection{Representing $\mathrm{tmf}_*\mathrm{tmf} /tors$ with $2$-variable modular forms} Recall from Section~\ref{sec:review} that the Adams filtration of $c_4$ is 4 and the Adams filtration of $c_6$ is 5. Regarding $2$-variable modular forms as a subring $$ M^{2-var}_*(\Gamma(1)) \subset \mathbb{Q}[c_4, c_6, \bar{c}_4, \bar{c}_6], $$ we shall denote $M^{2-var}_*(\Gamma(1))^{AF \ge 0} $ the subring of 2-variable modular forms with non-negative Adams filtration. The results of the previous section now easily give the following result. \begin{prop} The composite induced by Lemmas~\ref{lem:2varlem1} and \ref{lem:2varlem2} $$ \mathrm{tmf}_{2*}\mathrm{tmf} /tors \rightarrow \mathrm{TMF}_{2*}\mathrm{TMF} /tors \hookrightarrow M_*^{2-var}(\Gamma(1))[\Delta^{-1}, \bar{\Delta}^{-1}] $$ induces an injection $$ \mathrm{tmf}_{2*}\mathrm{tmf} /tors \hookrightarrow M_*^{2-var}(\Gamma(1))^{AF \ge 0} $$ which is a rational isomorphism. 
\end{prop} \begin{proof} Consider the commutative cube $$ \xymatrix@C-3em@R-1em{ \mathrm{tmf}_{2*}\mathrm{tmf} /tors \ar[rr] \ar@{^{(}->}[dd] \ar@{.>}[dr] && \mathrm{TMF}_{2*}\mathrm{TMF} /tors \ar[rd] \ar[dd]|{\phantom{MM}} \\ & M^{2-var}_*(\Gamma(1)) \ar[rr] \ar[dd] && M_*^{2-var}(\Gamma(1))[\Delta^{-1}, \bar{\Delta}^{-1}] \ar[dd] \\ \mathrm{tmf}_{2*}\mathrm{tmf}_{\mathbb{Q}} \ar[rr]|{\phantom{MM}} \ar@{=}[dr] && \mathrm{TMF}_{2*}\mathrm{TMF}_{\mathbb{Q}} \ar@{=}[rd] \\ & M^{2-var}_*(\Gamma(1))_{\mathbb{Q}} \ar[rr] && M_*^{2-var}(\Gamma(1))[\Delta^{-1}, \bar{\Delta}^{-1}]_{\mathbb{Q}} . } $$ (The dotted arrow exists because the front face of the cube is a pullback.) The commutativity of the diagram, and the fact that rationally the top face is isomorphic to the bottom face, give an injection $$ \mathrm{tmf}_{2*}\mathrm{tmf} /tors \hookrightarrow M_*^{2-var}(\Gamma(1)) $$ that is a rational isomorphism. Since all of the elements of the source have Adams filtration $\ge 0$, this injection factors through the subring $$ \mathrm{tmf}_{2*}\mathrm{tmf} /tors \hookrightarrow M_*^{2-var}(\Gamma(1))^{AF \ge 0} . $$ \end{proof} \subsection{Detecting $2$-variable modular forms in the ASS} \begin{defn}\label{def:detect} Suppose that we are given a class $$ x \in \Ext(\mathrm{tmf} \wedge \mathrm{tmf}) $$ and a $2$-variable modular form $$ f \in M^{2-var}_*(\Gamma(1))^{AF \ge 0} . $$ We shall say that \emph{$x$ detects $f$} if the image of $x$ in $v_0^{-1}\Ext(\mathrm{tmf} \wedge \mathrm{tmf})$ detects the image of $f$ in $M^{2-var}_*(\Gamma(1)) \otimes \mathbb{Q}_2$ in the localized ASS $$ v_0^{-1}\Ext(\mathrm{tmf} \wedge \mathrm{tmf}) \Rightarrow \mathrm{tmf}_*\mathrm{tmf} \otimes \mathbb{Q}_2 \cong M^{2-var}_*(\Gamma(1)) \otimes {\mathbb{Q}_2}. 
$$ \end{defn} \begin{rmk} Suppose $x$ as above is a permanent cycle in the unlocalized ASS $$ \Ext(\mathrm{tmf} \wedge \mathrm{tmf}) \Rightarrow \mathrm{tmf}_*\mathrm{tmf}^\wedge_2, $$ and detects $\zeta \in \mathrm{tmf}_*\mathrm{tmf}^\wedge_2$. If $f$ is the image of $\zeta$ under the map $$ \mathrm{tmf}_*\mathrm{tmf}^\wedge_2 \rightarrow [M^{2-var}_*(\Gamma(1))^\wedge_2]^{AF \ge 0}, $$ then $x$ detects $f$ in the sense of Definition \ref{def:detect}. \end{rmk} Given a $2$-variable modular form $f \in M^{2-var}_*(\Gamma(1)) $, let $f(a_i, \bar{a}_i)$ denote its image in $$ M^{2-var}_*(\Gamma_1(3))\otimes \mathbb{Q}_2 \cong \mathbb{Q}_2[a_1, a_3, \bar{a}_1, \bar{a}_3] \cong \mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3) \otimes \mathbb{Q}_2, $$ and let $$ [f(a_i, \bar{a}_i)] \in v_0^{-1} \Ext(\mathrm{tmf}_1(3) \wedge \mathrm{tmf}_1(3)) \cong \mathbb{F}_2[v_0^{\pm 1}, [a_1],[a_3], [\bar{a}_1], [\bar{a}_3]] $$ denote the element which detects it in the (collapsing) $v_0$-localized ASS. Similarly, let $t_k(a_i, \bar{a}_i)$ denote the images of $t_k$ in $\mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3) \otimes \mathbb{Q}_2$ (as in Section~\ref{sec:review}), and let $[t_k(a_i, \bar{a}_i)]$ denote the elements of $\Ext$ which detect these images in the $v_0$-localized ASS for $\mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3) \otimes \mathbb{Q}_2$. The following key proposition gives a convenient criterion for determining when a particular element $x \in \Ext(\mathrm{tmf} \wedge \mathrm{tmf})$ detects a $2$-variable modular form $f$. \begin{prop}\label{prop:detection} Suppose that we are given a cocycle $$ z = \sum_j z_j {\bar{\xi}}^{2k_{1,j}}_1 {\bar{\xi}}_2^{2k_{2,j}}\cdots \in C^*_{A(2)_*}((A//A(2))_*) $$ (with $z_j \in C^*_{A(2)_*}(\mathbb{F}_2)$) representing $[z] \in \Ext(\mathrm{tmf} \wedge \mathrm{tmf})$, and a $2$-variable modular form $$ f \in M^{2-var}_*(\Gamma(1)) ^{AF \ge 0}. 
$$ The images $\bar{z}_j$ of the terms $z_j$ in the cobar complex $C^*_{E[Q_0, Q_1, Q_2]_*}(\mathbb{F}_2)$ are cycles that represent classes $$ [\bar{z}_j] \in \Ext_{E[Q_0, Q_1, Q_2]}(\mathbb{F}_2) = \mathbb{F}_2[v_0, [a_1], [a_3]]. $$ If we have $$ [f(a_i, \bar{a}_i)] = \sum_j[z_j] [t_1(a_i, \bar{a}_i)]^{k_{1,j}} [t_2(a_i, \bar{a}_i)]^{k_{2,j}} \cdots, $$ then $[z]$ detects $f$. \end{prop} \begin{proof} Let $\bar{z} \in C^*_{E[Q_0, Q_1, Q_2]_*}((A//E[Q_0, Q_1, Q_2])_*)$ denote the image of $z$. We first note that the map $$ M_*(\Gamma(1))^{2-var} \otimes \mathbb{Q}_2 = \mathrm{tmf}_*\mathrm{tmf} \otimes \mathbb{Q}_2 \rightarrow \mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3) \otimes \mathbb{Q}_2 = M_*(\Gamma_1(3))^{2-var} \otimes \mathbb{Q}_2 $$ is injective. Both $\mathrm{tmf} \wedge \mathrm{tmf}$ and $\mathrm{tmf}_1(3) \wedge \mathrm{tmf}_1(3)$ have collapsing $v_0$-localized ASS's, with a map on $E_2$-terms induced from the map $$ C^*_{A(2)_*}((A//A(2))_*) \rightarrow C^*_{E[Q_0, Q_1, Q_2]}((A//E[Q_0, Q_1, Q_2])_*) $$ so that $[z]$ detects $f$ if and only if $[\bar{z}]$ detects $f(a_i, \bar{a}_i)$. Thus it suffices to prove the latter. Note that since the elements $$ {\bar{\xi}}^{2k_{1,j}}_1 {\bar{\xi}}_2^{2k_{2,j}}\cdots \in (A//E[Q_0, Q_1, Q_2])_* $$ are $E[Q_0, Q_1, Q_2]_*$-primitive, it follows from the fact that $z$ is a cocycle that the elements $\bar{z}_j$ are cocycles. The only thing left to check is that $$ [{\bar{\xi}}^{2k_{1,j}}_1 {\bar{\xi}}_2^{2k_{2,j}} \cdots] = [t_1(a_i, \bar{a}_i)]^{k_{1,j}} [t_2(a_i, \bar{a}_i)]^{k_{2,j}} \cdots $$ in $\Ext_{E[Q_0, Q_1, Q_2]_*}((A//E[Q_0, Q_1, Q_2])_*)$. But this follows from the commutative diagram $$ \xymatrix{ \mr{BP}_*\mr{BP} \ar[rr] \ar[dr] && H_*H \\ & H_* \mathrm{tmf}_1(3) \ar@{^{(}->}[ur] } $$ together with the fact that $t_k$ is mapped to ${\bar{\xi}}_k^2$ by the top horizontal map. 
\end{proof} \subsection{Low dimensional computations of $2$-variable modular forms} Below is a table of generators of $\Ext(\mathrm{tmf} \wedge \mathrm{tmf})/tors$, as a module over $\mathbb{F}_2[h_0, [c_4]]$, through dimension 64, together with the $2$-variable modular forms they detect. The columns of this table are: \begin{description} \item[dim] the dimension of the generator, \item[$\text{\bf bo}_k$] indicates that the generator lies in the summand $\Ext_{A(2)_*}(\ul{\bo}_k)$ (see the charts in Section~\ref{sec:ass}), \item[AF] the Adams filtration of the generator, \item[cell] the name of the image of the generator in $v_0^{-1}\Ext_{A(2)_*}(\ul{\bo}_k)$, in the sense of Section~\ref{sec:rationalgens}, \item[form] a $2$-variable modular form which is detected by the generator in the $v_0$-localized ASS (where the $f_k$ are defined below). \end{description} The table below also gives a basis of $M^{2-var}_*(\Gamma(1)) $ as a $\mathbb{Z} [c_4]$-module: in dimension $2k$, a form $\alpha g$ in the last column, with $\alpha \in \mathbb{Q}$ and $g$ a monomial in $\mathbb{Z}[c_4, c_6, \Delta, f_k]$ not divisible by 2, corresponds to a generator $g$ of $M^{2-var}_k(\Gamma(1)) $.\footnote{There is one exception: there is a $2$-variable modular form $\td{c_4f_{10}}$ which agrees with $c_4f_{10}$ modulo terms of higher AF, but which is $2$-divisible.
See Example~\ref{ex:echelon2}.} \begin{table}[h] \caption{Table of generators of $\Ext(\mathrm{tmf} \wedge \mathrm{tmf})/tors$.}\label{tab:gens} \end{table} \begin{alignat*}{5} & \mr{dim} \quad && \mathrm{bo}_k \quad && \mr{AF}\quad && \mr{cell} & \quad & \mr{form} \\ & 8 && 1 && 0 && {\bar{\xi}}_1^{8 } && f_{1 } \\ & 12 && 1 && 3 && [8 ]{\bar{\xi}}_2^{4 } && 2 f_{2 } \\ & 16 && 2 && 0 && {\bar{\xi}}_1^{16 } && f_{1}^{2 } \\ & 20 && 1 && 3 && [c_6/4 ]\cdot {\bar{\xi}}_1^{8 } && 2 f_{3 } \\ & 20 && 2 && 3 && [8 ]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{4 } && 2 f_{1} f_{2 } \\ & 24 && 1 && 4 && [c_6/2 ]\cdot {\bar{\xi}}_2^{4 } && f_{4 } \\ & 24 && 2 && 0 && {\bar{\xi}}_2^{8 } && f_{5 } \\ & 24 && 3 && 0 && {\bar{\xi}}_1^{24 } && f_{1}^{3 } \\ & 28 && 2 && 3 && [8 ]{\bar{\xi}}_3^{4 } && 2 f_{6 } \\ & 28 && 3 && 3 && [8 ]{\bar{\xi}}_1^{16} {\bar{\xi}}_2^{4 } && 2 f_{1}^{2} f_{2 } \\ & 32 && 1 && 4 && [ \Delta ]{\bar{\xi}}_1^{8 } && \Delta f_{1 } \\ & 32 && 2 && 1 && [c_6/16 ]\cdot {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{4}+ [c_4/8 ]\cdot {\bar{\xi}}_2^{8 } && f_{9 } \\ & 32 && 3 && 0 && {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{8 } && f_{1} f_{5 } \\ & 32 && 4 && 0 && {\bar{\xi}}_1^{32 } && f_{1}^{4 } \\ & 36 && 1 && 7 && [8 \Delta ]{\bar{\xi}}_2^{4 } && 2 \Delta f_{2 } \\ & 36 && 2 && 3 && [c_6/4 ]\cdot {\bar{\xi}}_2^{8 } && 2 f_{7 } \\ & 36 && 3 && 3 && [8 ]{\bar{\xi}}_2^{12 } && 2 f_{2} f_{5 } \\ & 36 && 3 && 0 && {\bar{\xi}}_1^{8} {\bar{\xi}}_3^{4}+ {\bar{\xi}}_2^{12 } && f_{10 } \\ & 36 && 4 && 3 && [8 ]{\bar{\xi}}_1^{24} {\bar{\xi}}_2^{4 } && 2 f_{1}^{3} f_{2 } \\ & 40 && 2 && 4 && [c_6/2 ]\cdot {\bar{\xi}}_3^{4 } && f_{8 } \\ & 40 && 3 && 1 && [2 ]{\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4 } && f_{11 } \\ & 40 && 4 && 0 && {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{8 } && f_{1}^{2} f_{5 } \\ & 40 && 5 && 0 && {\bar{\xi}}_1^{40 } && f_{1}^{5 } \\ & 44 && 1 && 7 && [ \Delta c_6/4 ]\cdot {\bar{\xi}}_1^{8 } && 2 \Delta f_{3 } \\ & 44 && 2 && 7 && [c_6/4] ( [c_6/16 ]\cdot {\bar{\xi}}_1^{8}
{\bar{\xi}}_2^{4}+ [c_4/8]\cdot {\bar{\xi}}_2^{8 }) && c_{6} f_{9}/4 \\ & 44 && 3 && 3 && [c_6/4]\cdot {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{8 } && 2 f_{1} f_{7 } \\ & 44 && 4 && 3 && [8 ]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{12 } && 2 f_{1} f_{2} f_{5 } \\ & 44 && 4 && 0 && {\bar{\xi}}_1^{16} {\bar{\xi}}_3^{4}+ {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{12 } && 2 f_{13 } \\ & 44 && 5 && 3 && [8 ]{\bar{\xi}}_1^{32} {\bar{\xi}}_2^{4 } && 2 f_{1}^{4} f_{2 } \\ & 48 && 1 && 8 && [ \Delta c_6/2 ]\cdot {\bar{\xi}}_2^{4 } && \Delta f_{4 } \\ & 48 && 2 && 4 && [ \Delta ]{\bar{\xi}}_2^{8 } && \Delta f_{5 } \\ & 48 && 3 && 4 && [c_6/2 ]\cdot {\bar{\xi}}_2^{12 } && f_{2} f_{7 } \\ & 48 && 3 && 1 && [c_6/16 ]\cdot ( {\bar{\xi}}_1^{8} {\bar{\xi}}_3^{4}+ {\bar{\xi}}_2^{12 }) && f_{14 } \\ & 48 && 4 && 0 && {\bar{\xi}}_2^{16 } && f_{5}^{2 } \\ & 48 && 4 && 1 && [2 ]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4 } && f_{1} f_{11 } \\ & 48 && 5 && 0 && {\bar{\xi}}_1^{24} {\bar{\xi}}_2^{8 } && f_{1}^{3} f_{5 } \\ & 48 && 6 && 0 && {\bar{\xi}}_1^{48 } && f_{1}^{6 } \\ & 52 && 2 && 7 && [8 \Delta ]{\bar{\xi}}_3^{4 } && 2 \Delta f_{6 } \\ & 52 && 3 && 4 && [c_6/2]\cdot {\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4 } && 2 f_{15 } \\ & 52 && 4 && 3 && [8 ]{\bar{\xi}}_2^{8} {\bar{\xi}}_3^{4 } && 2 f_{5} f_{6 } \\ & 52 && 5 && 3 && [8 ]{\bar{\xi}}_1^{16} {\bar{\xi}}_2^{12 } && 2 f_{1}^{2} f_{2} f_{5 } \\ & 52 && 5 && 0 && {\bar{\xi}}_1^{24} {\bar{\xi}}_3^{4}+ {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{12 } && 2 f_{1} f_{13 } \\ & 52 && 6 && 3 && [8 ]{\bar{\xi}}_1^{40} {\bar{\xi}}_2^{4 } && 2 f_{1}^{5} f_{2 } \\ & 56 && 1 && 8 && [ \Delta^2 ]{\bar{\xi}}_1^{8 } && \Delta^{2} f_{1 } \\ & 56 && 2 && 8 && [ \Delta]([c_6/2 ]\cdot {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{4}+ [c_4 ]\cdot {\bar{\xi}}_2^{8 }) && 8 \Delta f_{9 } \\ & 56 && 3 && 4 && [ \Delta ]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{8 } && \Delta f_{5} f_{1 } \\ & 56 && 4 && 1 && [c_6/16 ]\cdot {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{12}+ [c_4/8 ]\cdot {\bar{\xi}}_2^{16 } && f_{5} f_{9 }
\\ & 56 && 4 && 0 && {\bar{\xi}}_3^{8 } && f_{16 } \\ & 56 && 5 && 0 && {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{16 } && f_{1} f_{5}^{2 } \\ & 56 && 5 && 1 && [2 ]{\bar{\xi}}_1^{16} {\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4 } && f_{1}^{2} f_{11 } \\ & 56 && 6 && 0 && {\bar{\xi}}_1^{32} {\bar{\xi}}_2^{8 } && f_{1}^{4} f_{5 } \\ & 60 && 1 && 11 && [8 \Delta^2 ]\cdot {\bar{\xi}}_2^{4 } && 2 \Delta^{2} f_{2 } \\ & 60 && 2 && 7 && [ \Delta c_6/4 ]\cdot {\bar{\xi}}_2^{8 } && 2 \Delta f_{7 } \\ & 60 && 3 && 7 && [8 \Delta ]{\bar{\xi}}_2^{12 } && 2 \Delta f_{5} f_{2 } \\ & 60 && 3 && 4 && [ \Delta]( {\bar{\xi}}_1^{8} {\bar{\xi}}_3^{4}+ {\bar{\xi}}_2^{12 }) && \Delta f_{10 } \\ & 60 && 4 && 4 && [c_6/2 ]\cdot {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4}+ [c_4 ]\cdot {\bar{\xi}}_2^{8} {\bar{\xi}}_3^{4 } && 2 f_{6} f_{9 } \\ & 60 && 4 && 3 && [8 ]{\bar{\xi}}_4^{4 } && 2 f_{17 } \\ & 60 && 5 && 0 && {\bar{\xi}}_2^{20}+ {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{8} {\bar{\xi}}_3^{4 } && f_{18 } \\ & 60 && 5 && 3 && [8 ]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{8} {\bar{\xi}}_3^{4 } && 2 f_{1} f_{5} f_{6 } \\ & 60 && 6 && 3 && [8 ]{\bar{\xi}}_1^{24} {\bar{\xi}}_2^{12 } && 2 f_{1}^{3} f_{2} f_{5 } \\ & 60 && 6 && 0 && {\bar{\xi}}_1^{32} {\bar{\xi}}_3^{4 } && 2 f_{1}^{2} f_{13 } \\ & 60 && 7 && 3 && [8 ]{\bar{\xi}}_1^{48} {\bar{\xi}}_2^{4 } && 2 f_{1}^{6} f_{2 } \\ & 64 && 2 && 8 && [ \Delta c_6/2 ]\cdot {\bar{\xi}}_3^{4 } && \Delta f_{8 } \\ & 64 && 3 && 5 && [2 \Delta ]{\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4 } && \Delta f_{11 } \\ & 64 && 4 && 2 && [c_6/16 ]\cdot {\bar{\xi}}_2^{8} {\bar{\xi}}_3^{4}+ [c_4/8 ]\cdot {\bar{\xi}}_3^{8 } && f_{9}^{2 }/2 \\ & 64 && 5 && 1 && [2 ]{\bar{\xi}}_2^{12} {\bar{\xi}}_3^{4 } && f_{1} f_{5} f_{9 } \\ & 64 && 5 && 0 && {\bar{\xi}}_1^{8} {\bar{\xi}}_3^{8 } && f_{1} f_{16 } \\ & 64 && 6 && 0 && {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{16 } && f_{5}^{2} f_{1}^{2 } \\ & 64 && 6 && 1 && [2 ]{\bar{\xi}}_1^{24} {\bar{\xi}}_2^{4} {\bar{\xi}}_3^{4 } && f_{11} f_{1}^{3 } \\ & 64 && 7 && 0 &&
{\bar{\xi}}_1^{40} {\bar{\xi}}_2^{8 } && f_{1}^{5} f_{5 } \\ & 64 && 8 && 0 && {\bar{\xi}}_1^{64 } && f_{1}^{8 } \end{alignat*} The $2$-variable modular forms $f_k \in M^{2-var}_*(\Gamma(1)) $ in the above table are the generators of $M^{2-var}_*(\Gamma(1)) $ as an $M_*(\Gamma(1)) $-algebra in this range, and are defined as follows. \begin{align*} f_{1 } & := ( -\bar c_{4}+ c_{4})/ 16 \\ f_{2 } & := ( -\bar c_{6}+ c_{6})/ 8 \\ f_{3 } & := ({ 5} f_{1} c_{6}+{ 21} f_{2} c_{4})/ 8 \\ f_{4 } & := ({ 5} f_{2} c_{6}+{ 21} f_{1} c_{4}^{2})/ 8 \\ f_{5 } & := ( -f_{1}^{2} c_{4}+ f_{2}^{2})/ 16 \\ f_{6 } & := ( -\bar c_{4}^{2} \bar c_{6}+ c_{4}^{2} c_{6}+{ 544} f_{2} c_{4}^{2}+{ 768} f_{3} c_{4}+{ 1792} f_{1} f_{2} c_{4})/ 2048 \\ f_{7 } & := ({ 4} f_{2} \Delta+ f_{5} c_{6}+{ 5} f_{2} c_{4}^{3}+{ 6} f_{3} c_{4}^{2}+{ 5} f_{1} f_{2} c_{4}^{2}+{ 7} f_{6} c_{4}+{ 4} f_{1}^{2} f_{2} c_{4})/ 8 \\ f_{8 } & := ({ 4} f_{1} c_{4} \Delta+ f_{6} c_{6}+{ 5} f_{1} c_{4}^{4}+{ 5} f_{1}^{2} c_{4}^{3}+{ 7} f_{5} c_{4}^{2}+{ 2} f_{4} c_{4}^{2}+{ 4} f_{1}^{3} c_{4}^{2})/ 8 \\ f_{9 } & := ({ 32} f_{1} \Delta+ f_{1} f_{2} c_{6}+{ 33} f_{1}^{2} c_{4}^{2}+{ 8} f_{5} c_{4}+{ 32} f_{4} c_{4}+{ 32} f_{1}^{3} c_{4})/ 64 \\ f_{10 } & := ({ 2} f_{2} c_{4}^{3}+ f_{1} f_{2} c_{4}^{2}+{ 2} f_{6} c_{4}+{ 3} f_{1}^{2} f_{2} c_{4}+ f_{1} f_{6}+ f_{2} f_{5})/ 4 \\ f_{11 } & := ({ 4} f_{1} c_{4} \Delta+{ 11} f_{1}^{2} c_{4}^{3}+{ 34} f_{5} c_{4}^{2}+{ 28} f_{4} c_{4}^{2}+{ 23} f_{1}^{3} c_{4}^{2}+{ 4} f_{9} c_{4}+ f_{1} f_{5} c_{4}+{ 4} f_{1}^{4} c_{4} \\ & \qquad +{ 4} f_{8}+ f_{2} f_{6})/ 8 \\ f_{12 } & := ( f_{1} f_{5} c_{6}+{ 8} f_{2} c_{4}^{4}+{ 8} f_{3} c_{4}^{3}+{ 8} f_{1} f_{2} c_{4}^{3}+{ 8} f_{6} c_{4}^{2}+{ 8} f_{1}^{2} f_{2} c_{4}^{2}+ f_{2} f_{5} c_{4})/ 8 \\ f_{13 } & := ({ 8} f_{3} \Delta+{ 80} f_{2} c_{4}^{4}+{ 56} f_{3} c_{4}^{3}+{ 80} f_{1} f_{2} c_{4}^{3}+{ 76} f_{6} c_{4}^{2}+{ 55} f_{1}^{2} f_{2} c_{4}^{2 }+{ 4} f_{10} c_{4}\\ &\qquad +{ 18} f_{2} f_{5} c_{4}+{ 11} f_{1}^{3} f_{2} c_{4}+{ 4} f_{12}+
f_{1}^{2} f_{6}+ f_{1} f_{2} f_{5}+{ 4} f_{1}^{4} f_{2})/ 8 \\ f_{14 } & := ({ 21} f_{1} c_{4}^{2} \Delta+{ 8} f_{5} \Delta+{ 16} f_{4} \Delta+{ 20} f_{1}^{3} \Delta+ f_{10} c_{6}+{ 11} f_{1} c_{4}^{5}+{ 36} f_{1}^{2} c_{4}^{4}+{ 591} f_{5} c_{4}^{3} \\ & \qquad +{ 490} f_{4} c_{4}^{3}+{ 437} f_{1}^{3} c_{4}^{3}+{ 119} f_{9} c_{4}^{2}+{ 140} f_{1} f_{5} c_{4}^{2}+{ 75} f_{1}^{4} c_{4}^{2}+{ 10} f_{11} c_{4}+{ 11} f_{8} c_{4} \\ & \qquad +{ 32} f_{1}^{5} c_{4}+{ 8} f_{1} f_{2} f_{6})/ 16 \\ f_{15 } & := ({ 4} f_{6} \Delta+ f_{1}^{2} f_{2} \Delta+{ 76} f_{2} c_{4}^{5}+{ 54} f_{3} c_{4}^{4}+{ 90} f_{1} f_{2} c_{4}^{4}+{ 73} f_{6} c_{4}^{3}+{ 50} f_{1}^{2} f_{2} c_{4}^{3}+{ 3} f_{10} c_{4}^{2} \\ & \qquad +{ 8} f_{7} c_{4}^{2}+{ 20} f_{2} f_{5} c_{4}^{2}+{ 8} f_{1}^{3} f_{2} c_{4}^{2}+{ 7} f_{12} c_{4}+{ 4} f_{1} f_{2} f_{5} c_{4})/ 8 \\ f_{16 } & := ({ 2} f_{1} \Delta^{2}+{ 24} f_{1} c_{4}^{3} \Delta+{ 9} f_{5} c_{4} \Delta+{ 18} f_{4} c_{4} \Delta+{ 4} f_{1}^{3} c_{4} \Delta+{ 2} f_{9} \Delta+ f_{1} f_{5} \Delta \\ & \qquad +{ 36} f_{1}^{2} c_{4}^{5}+{ 480} f_{5} c_{4}^{4}+{ 402} f_{4} c_{4}^{4}+{ 359} f_{1}^{3} c_{4}^{4}+{ 94} f_{9} c_{4}^{3}+{ 112} f_{1} f_{5} c_{4}^{3}+{ 55} f_{1}^{4} c_{4}^{3} \\ & \qquad +{ 12} f_{11} c_{4}^{2}+{ 14} f_{8} c_{4}^{2}+{ 20} f_{1}^{5} c_{4}^{2}+{ 2} f_{14} c_{4}+{ 5} f_{2} f_{7} c_{4}+ f_{5}^{2} c_{4}+{ 4} f_{1}^{3} f_{5} c_{4}+ f_{1} f_{14} \\ & \qquad + f_{5} f_{9}+ f_{1} f_{2} f_{7})/ 2 \\ f_{17 } & := ({ 2} f_{2} \Delta^{2}+{ 22} f_{3} c_{4}^{2} \Delta+{ 11} f_{6} c_{4} \Delta+ f_{2} f_{5} \Delta+{ 19} f_{9} c_{4}^{2} c_{6}+{ 682} f_{2} c_{4}^{6}+{ 480} f_{3} c_{4}^{5} \\ & \qquad +{ 768} f_{1} f_{2} c_{4}^{5}+{ 648} f_{6} c_{4}^{4}+{ 462} f_{1}^{2} f_{2} c_{4}^{4}+{ 30} f_{10} c_{4}^{3}+{ 63} f_{7} c_{4}^{3}+{ 185} f_{2} f_{5} c_{4}^{3} \\ & \qquad +{ 84} f_{1}^{3} f_{2} c_{4}^{3}+{ 12} f_{13} c_{4}^{2}+{ 27} f_{12} c_{4}^{2}+{ 29} f_{1} f_{2} f_{5} c_{4}^{2}+{ 16} f_{1}^{4} f_{2} c_{4}^{2}+{ 4} f_{15} c_{4}+{ 4} f_{5} f_{6} 
c_{4} \\ & \qquad +{ 2} f_{1}^{2} f_{2} f_{5} c_{4}+ f_{2} f_{14}+ f_{6} f_{9})/ 2 \\ f_{18 } & := ({4} f_{2} \Delta^{2}+{168} f_{3} c_{4}^{2} \Delta+{96} f_{6} c_{4} \Delta+{8} f_{2} f_{5} \Delta+{168} f_{9} c_{4}^{2} c_{6}+{5880} f_{2} c_{4}^{6} \\ & \qquad +{4140} f_{3} c_{4}^{5}+{6648} f_{1} f_{2} c_{4}^{5}+{5592} f_{6} c_{4}^{4}+{3980} f_{1}^{2} f_{2} c_{4}^{4}+{248} f_{10} c_{4}^{3}+{560} f_{7} c_{4}^{3} \\ & \qquad +{1586} f_{2} f_{5} c_{4}^{3}+{744} f_{1}^{3} f_{2} c_{4}^{3}+{112} f_{13} c_{4}^{2}+{220} f_{12} c_{4}^{2}+{265} f_{1} f_{2} f_{5} c_{4}^{2} \\ & \qquad +{136} f_{1}^{4} f_{2} c_{4}^{2}+{40} f_{15} c_{4}+{4} f_{1} f_{13} c_{4}+{34} f_{5} f_{6} c_{4}+{19} f_{1}^{2} f_{2} f_{5} c_{4}+{8} f_{1}^{5} f_{2} c_{4} \\ & \qquad +{4} f_{6} f_{9}+ f_{1} f_{5} f_{6}+ f_{2} f_{5}^{2})/4 \end{align*} We shall now indicate the methods used to generate Table~\ref{tab:gens}, and make some remarks about its contents. The short exact sequences of Section~\ref{sec:boSES} give an inductive scheme for computing $\Ext_{A(2)_*}(\ul{\bo}_k)$, and the charts in that section display the computation through dimension $64$. In Section~\ref{sec:rationalgens}, these short exact sequences are used to give an inductive scheme for identifying the generators of $v_0^{-1}\Ext_{A(2)_*}(\ul{\bo}_k)$, and appropriate multiples of these generators generate the image of $\Ext_{A(2)_*}(\ul{\bo}_k)/tors$ in these localized Ext groups. These generators are listed in the fourth column of Table~\ref{tab:gens}. The two variable modular forms in the last column of Table~\ref{tab:gens} are detected by the generators in the fourth column, in the sense of the previous section. In each instance, if necessary, we use Corollary~\ref{cor:integralCnu} or \ref{cor:integralCnuHZ1} to find the image of the generator in $\Ext(\mathrm{tmf}_1(3) \wedge \mathrm{tmf}_1(3))$ and then apply Proposition~\ref{prop:detection}. The 2-variable modular forms were generated by the following inductive method. 
Suppose inductively that we have generated a basis of $M^{2-var}_*(\Gamma(1)) $ in dimension $n$ and Adams filtration greater than $s$, and suppose that we wish to generate a $2$-variable modular form $f$ in dimension $n$ and Adams filtration $s$. \begin{description} \item[Step 1] Write an approximation (modulo higher Adams filtration) for $f$. This could either be generated using Proposition~\ref{prop:detection}, or it could be obtained by taking an appropriate product of 2-variable modular forms in lower degrees. Write this approximation as $g(q,\bar{q})/2^k$ where $g(q,\bar{q})$ is a 2-integral 2-variable modular form. \item[Step 2] Write $g(q, \bar{q})$, modulo $2$, as a linear combination of $2$-variable modular forms already produced: $$ g(q, \bar{q}) \equiv \sum_i h_i(q, \bar{q}) \pmod{2}. $$ \item[Step 3] Set $$ g'(q, \bar{q}) = \frac{g(q, \bar{q}) + \sum_i h_i(q, \bar{q})}{2}; $$ the form $g'(q, \bar{q})/2^{k-1}$ is a better approximation for $f$. \item[Step 4] Repeat steps 2 and 3 until the denominator is completely eliminated. \end{description} We explain all of this by working through some low degrees: \begin{description} \item[$\mbf{f_1}$] The corresponding generator of $\Ext^{0,8}_{A(2)_*}(\Sigma^8 \ul{\bo}_1)$ is ${\bar{\xi}}_1^8$. We compute $$ [t_1(a_i, \bar{a_i})^4] = \left[\frac{\bar a_1^4 + a_1^4}{2^4}\right] = [\frac{-\bar c_4 + c_4}{2^4}]. $$ We check that $$ f_1 := \frac{-\bar c_4 + c_4}{2^4} $$ has an integral $q$-expansion. \item[$\mbf{2f_2}$] The corresponding generator of $\Ext^{3,15}_{A(2)_*}(\Sigma^8 \ul{\bo}_1)$ is $[8]{\bar{\xi}}_2^4$. We compute (appealing to Corollary~\ref{cor:integralCnu}) $$ [8t_2(a_i, \bar{a_i})^2 + 2a_1^2t_1(a_i, \bar{a_i})^4] = \left[2\bar a_3^2 + 2a_3^2 \right] = [\frac{-\bar c_6 + c_6}{4}]. $$ We check that $\frac{-\bar c_6 + c_6}{4}$ has an integral $q$-expansion. In fact the $q$-expansion is zero mod $2$, so we set $$ f_2 := \frac{-\bar c_6 + c_6}{8}.
$$ \item[$\mbf{f_1^2}$] The corresponding generator of $\Ext^{0,16}_{A(2)_*}(\Sigma^{16}\ul{\bo}_2)$ is ${\bar{\xi}}_1^{16}$. Since ${\bar{\xi}}_1^8$ detects $f_1$, ${\bar{\xi}}_1^{16}$ detects $f_1^2$. \item[$\mbf{2f_1f_2}$] The corresponding generator of $\Ext^{3,23}_{A(2)_*}(\Sigma^{16}\ul{\bo}_2)$ is $[8]{\bar{\xi}}_1^{8}{\bar{\xi}}_2^4$. Since ${\bar{\xi}}_1^8$ detects $f_1$ and $[8]{\bar{\xi}}_2^4$ detects $2f_2$, $[8]{\bar{\xi}}_1^{8}{\bar{\xi}}_2^4$ detects $2f_1f_2$. \item[$\mbf{2f_3}$] The corresponding generator of $\Ext^{3,23}_{A(2)_*}(\Sigma^8 \ul{\bo}_1)$ is $[c_6/4]{\bar{\xi}}_1^8$. Since ${\bar{\xi}}_1^8$ detects $f_1$, we begin with a leading term $c_6 f_1/4$. This $2$-variable modular form is not integral, but we find that $$ c_6(q) f_1(q, \bar{q}) + f_2(q, \bar{q}) c_4(q) \equiv 0 \mod 4. $$ Therefore $[c_6/4]{\bar{\xi}}_1^8$ detects $$ \frac{c_6 f_1 + f_2 c_4}{4}. $$ In fact $$ 5c_6(q) f_1(q, \bar{q}) + 21f_2(q, \bar{q}) c_4(q) \equiv 0 \mod 8, $$ so we set $$ f_3 := \frac{5c_6 f_1 + 21 f_2 c_4}{8}.$$ \end{description} \section{The Adams spectral sequence for $\mathrm{tmf}_*\mathrm{tmf}$ and $\mathrm{bo}$-Brown-Gitler modules}\label{sec:ass} \emph{Recall that we are concerned with the prime $2$, hence everything is implicitly $2$-localized.} \subsection{Brown-Gitler modules} Mod $2$ Brown-Gitler spectra were introduced in \cite{BrownGitler} to study obstructions to immersing manifolds, but immediately found use in studying the stable homotopy groups of spheres (e.g.\ \cite{MahowaldInf}, \cite{CohenZeta}, and many other places). As discussed in Section~\ref{sec:bo}, Mahowald, Milgram, and others have used integral Brown-Gitler modules/spectra to decompose the ring of cooperations of $\mathrm{bo}$ \cite{boresolutions}, \cite{Milgram-connectivektheory}, and much of the work of Davis, Mahowald, and Rezk on $\mathrm{tmf}$-resolutions has been based on the use of $\mathrm{bo}$-Brown-Gitler spectra \cite{MRlevel3}, \cite{MahowaldConnective}, \cite{BHHM}.
In this section we recapitulate and extend this latter body of work. Generalizing the discussion of Section~\ref{sec:bo}, we consider the subalgebra of the dual Steenrod algebra \[ (A//A(i))_* =\mathbb{F}_2[\bar\xi_1^{2^{i+1}},\bar\xi_2^{2^i},\dots,\bar\xi_{i+1}^2,\bar\xi_{i+2},\dots]. \] We have the examples \begin{align*} H_* \mr{H}\FF_2 & \cong A_* = (A//A(-1))_*, \\ H_* \mr{H}\mathbb{Z} & \cong (A//A(0))_*, \\ H_* \mathrm{bo} & \cong (A//A(1))_*, \\ H_* \mathrm{tmf} & \cong (A//A(2))_*. \end{align*} The algebra $(A//A(i))_*$ admits an increasing filtration by defining $ wt(\bar\xi_i)=2^{i-1} $; every monomial has weight divisible by $ 2^{i+1} $. The Brown-Gitler subcomodule $ N_i(j) $ is defined to be the $\mathbb{F}_2$-subspace spanned by all monomials of weight less than or equal to $ 2^{i+1}j $, which is also an $ A_* $-subcomodule as the coaction cannot increase weight. The modules $ N_{-1}(j) $ through $ N_1(j) $ are known to be realizable by the mod-$ 2 $ (classical), integral, and $ \mathrm{bo} $-Brown-Gitler spectra, respectively, which we will denote by $ (H\mathbb{F}_2)_j $, $ H\mathbb{Z}_j $, and $ \mathrm{bo}_j $, since we have \begin{align*} \mr{H}\FF_2 & \simeq \varinjlim (\mr{H}\FF_2)_j, \\ \mr{H}\mathbb{Z} & \simeq \varinjlim \mr{H}\mathbb{Z}_j, \\ \mathrm{bo} & \simeq \varinjlim \mathrm{bo}_j. \end{align*} To clarify notation we shall continue the convention we adopted in Section~\ref{sec:bo} and underline a spectrum to refer to the corresponding subcomodule of the dual Steenrod algebra, so that we have \begin{align*} (\ul{\mr{H}\FF_2})_j & := H_* (\mr{H}\FF_2)_j = N_{-1}(j), \\ \ul{\mr{H}\mathbb{Z}}_j & := H_* \mr{H}\mathbb{Z}_j = N_0(j), \\ \ul{\mathrm{bo}}_j & := H_* \mathrm{bo}_j = N_1(j). \end{align*} It is not known if $\mathrm{tmf}$-Brown-Gitler spectra $\mathrm{tmf}_j$ exist in general, though we will still define $$ \ul{\mathrm{tmf}}_j := N_2(j). $$ The comodule $N_3(1)$ is not realizable, by the Hopf invariant one theorem.
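For instance, take $i = 1$ and $j = 1$. In $(A//A(1))_* = \mathbb{F}_2[\bar\xi_1^4, \bar\xi_2^2, \bar\xi_3, \bar\xi_4, \dots]$ the convention $wt(\bar\xi_i) = 2^{i-1}$ gives
$$
wt({\bar{\xi}}_1^4) = 4, \quad wt({\bar{\xi}}_2^2) = 4, \quad wt({\bar{\xi}}_3) = 4, \quad wt({\bar{\xi}}_4) = 8,
$$
so the monomials of weight at most $4$ are
$$
\ul{\bo}_1 = N_1(1) = \mathbb{F}_2\{ 1, {\bar{\xi}}_1^4, {\bar{\xi}}_2^2, {\bar{\xi}}_3 \},
$$
concentrated in degrees $0$, $4$, $6$, and $7$; this is the module depicted in Figure~\ref{fig:bo1}.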
There are algebraic splittings of $A(i)_*$-comodules \[ (A//A(i))_* \cong \bigoplus_j \Sigma^{2^{i+1}j}N_{i-1}(j). \] This splitting is given by the sum of maps: \begin{equation}\label{eq:splittingmap} \begin{split} \Sigma^{2^{i+1}j}N_{i-1}(j) & \rightarrow (A//A(i))_* \\ {\bar{\xi}}_1^{i_1} {\bar{\xi}}_2^{i_2}\cdots & \mapsto {\bar{\xi}}_1^{a} {\bar{\xi}}_2^{i_1} {\bar{\xi}}_3^{i_2} \cdots, \end{split} \end{equation} where the exponent $a$ above is chosen such that the monomial has weight $2^{i+1}j$. It follows that there are algebraic splittings \begin{align} \Ext(\mr{H}\mathbb{Z} \wedge \mr{H}\mathbb{Z}) & \cong \bigoplus_j \Ext_{A(0)_*}(\si{2j} (\ul{\mr{H}\FF_2})_j), \\ \Ext(\mathrm{bo} \wedge \mathrm{bo}) & \cong \bigoplus_j \Ext_{A(1)_*}(\si{4j} \ul{\mr{H}\mathbb{Z}}_j), \\ \Ext(\mathrm{tmf} \wedge \mathrm{tmf}) & \cong \bigoplus_j \Ext_{A(2)_*}(\si{8j} \ul{\mathrm{bo}}_j). \label{eq:tmfextsplit} \end{align} These algebraic splittings can be realized topologically for $i\leq 1$ \cite{boresolutions}: \begin{align*} \mr{H}\mathbb{Z}\wedge \mr{H}\mathbb{Z} & \simeq \bigvee_{j} \Sigma^{2j} \mr{H}\mathbb{Z} \wedge (\mr{H}\FF_2)_j ,\\ \mathrm{bo}\wedge \mathrm{bo} &\simeq \bigvee_{j} \Sigma^{4j} \mathrm{bo} \wedge \mr{H}\mathbb{Z}_j. \end{align*} However, the corresponding splitting fails for $\mathrm{tmf}$, as was shown by Davis, Mahowald, and Rezk \cite{MRlevel3}, \cite{MahowaldConnective}, so $$ \mathrm{tmf} \wedge \mathrm{tmf} \not\simeq \bigvee_j \Sigma^{8j} \mathrm{tmf} \wedge \mathrm{bo}_j. $$ Indeed, they observe that in $\mathrm{tmf} \wedge \mathrm{tmf}$ the homology summands $$ \Sigma^8 \mathrm{tmf} \wedge \mathrm{bo}_1 \quad \text{and} \quad \Sigma^{16} \mathrm{tmf} \wedge \mathrm{bo}_2 $$ are attached non-trivially. We shall see in Section~\ref{sec:cover} that our methods recover this fact.
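As an illustration, the splitting map (\ref{eq:splittingmap}) can be made explicit in the smallest nontrivial case: for $i = 2$ and $j = 1$, each basis monomial of $\Sigma^{8}\ul{\bo}_1$ is padded by the power of ${\bar{\xi}}_1$ needed to bring its weight up to $8$, giving
$$
1 \mapsto {\bar{\xi}}_1^{8}, \qquad {\bar{\xi}}_1^4 \mapsto {\bar{\xi}}_2^{4}, \qquad {\bar{\xi}}_2^2 \mapsto {\bar{\xi}}_3^{2}, \qquad {\bar{\xi}}_3 \mapsto {\bar{\xi}}_4.
$$
In particular, the summand $\Sigma^8 \ul{\bo}_1 \subset (A//A(2))_*$ contains the elements ${\bar{\xi}}_1^8$ and ${\bar{\xi}}_2^4$, which appear as $\mathrm{bo}_1$-cells in Table~\ref{tab:gens}.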
\subsection{Rational calculations}\label{sec:rational} Recall that we have $$ \mathrm{tmf}_*\mathrm{tmf}_\mathbb{Q} \cong \mathbb{Q}[c_4, c_6, \bar{c}_4, \bar{c}_6], $$ and consider the (collapsing) $v_0$-inverted ASS $$ \bigoplus_j v_0^{-1} \Ext_{A(2)_*}(\si{8j} \ul{\bo}_j) \Rightarrow \mathrm{tmf}_* \mathrm{tmf} \otimes \mathbb{Q}_2. $$ In this section we explain the decomposition induced on the $E_\infty$-term of this spectral sequence by the decomposition of the $E_2$-term. In particular, given a torsion-free element $x \in \mathrm{tmf}_*\mathrm{tmf}$, this will allow us to determine which $\mathrm{bo}$-Brown-Gitler module detects it in the $E_2$-term of the ASS for $\mathrm{tmf} \wedge \mathrm{tmf}$. Recall from Section~\ref{sec:review} that $\mathrm{tmf}_1(3) \simeq \mr{BP}\bra{2}$. In particular, we have $$ H^*(\mathrm{tmf}_1(3)) \cong A // E[Q_0, Q_1, Q_2]. $$ We begin by studying the map between $v_0$-inverted ASS's induced by the map $\mathrm{tmf} \rightarrow \mathrm{tmf}_1(3)$ $$ \xymatrix{ v_0^{-1}\Ext^{*,*}_{A(2)_*}(\mathbb{F}_2) \ar@{=>}[r] \ar[d] & \pi_* \mathrm{tmf} \otimes \mathbb{Q}_2 \ar[d] \\ v_0^{-1}\Ext^{*,*}_{E[Q_0, Q_1, Q_2]_*}(\mathbb{F}_2) \ar@{=>}[r] & \pi_* \mathrm{tmf}_1(3) \otimes \mathbb{Q}_2. } $$ We have $$ v_0^{-1} \Ext^{*,*}_{E[Q_0, Q_1, Q_2]_*}(\mathbb{F}_2) \cong \mathbb{F}_2[v_0^{\pm 1}, v_1, v_2], $$ where the $v_i$'s have $(t-s,s)$ bidegrees: \begin{align*} \abs{v_0} & = (0, 1), \\ \abs{v_1} & = (2, 1), \\ \abs{v_2} & = (6,1). \end{align*} Recall from Section~\ref{sec:review} that $\pi_* \mathrm{tmf}_1(3)_\mathbb{Q} = \mathbb{Q}[a_1, a_3]$, and that \begin{align*} v_1 & = [a_1], \\ v_2 & = [a_3].
\end{align*} Of course $\pi_*\mathrm{tmf}_{\mathbb{Q}} = \mathbb{Q}[c_4, c_6]$, with corresponding localized Adams $E_2$-term $$ v_0^{-1} \Ext^{*,*}_{A(2)_*}(\mathbb{F}_2) \cong \mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]], $$ where the $[c_i]$'s have $(t-s,s)$ bidegrees \begin{align*} \abs{[c_4]} & = (8, 4), \\ \abs{[c_6]} & = (12, 5). \end{align*} Recall also from Section~\ref{sec:review} that the formulas for $c_4$ and $c_6$ in terms of $a_1$ and $a_3$ imply that the map of $E_2$-terms of spectral sequences above is injective, and is given by \begin{equation}\label{eq:c4c6ASS} \begin{split} [c_4] & \mapsto [a_1^4], \\ [c_6] & \mapsto [8a_3^2]. \end{split} \end{equation} Corresponding to the isomorphism $$ \pi_* \mathrm{tmf}_\mathbb{Q} \cong \mr{H}\mathbb{Q}_* \mathrm{tmf} $$ there is an isomorphism of localized Adams $E_2$-terms $$ v_0^{-1} \Ext_{A(2)_*}(\mathbb{F}_2) \cong v_0^{-1} \Ext_{A(0)_*} ((A//A(2))_*). $$ Since the decomposition $$ (A//A(2))_* \cong \bigoplus_{j} \si{8j} \ul{\bo}_j $$ is a decomposition of $A(2)_*$-comodules, it is in particular a decomposition of $A(0)_*$-comodules, and therefore there is a decomposition \begin{equation}\label{eq:Qsplit} v_0^{-1} \Ext_{A(2)_*} (\mathbb{F}_2) \cong \bigoplus_j v_0^{-1} \Ext_{A(0)_*}(\si{8j} \ul{\bo}_j). \end{equation} \begin{prop} Under the decomposition (\ref{eq:Qsplit}), we have \begin{align*} v_0^{-1} \Ext_{A(0)_*}(\Sigma^{8j} \ul{\bo}_j) & = \mathbb{F}_2[v_0^{\pm 1}]\{ [c_4^{i_1}c_6^{i_2}] \: : \: i_1 + i_2 = j\} \\ & \subset v_0^{-1}\Ext_{A(2)_*}(\mathbb{F}_2).
\end{align*} \end{prop} \begin{proof} Statement~(2) of the proof of Lemma~\ref{lem:HZ_i} implies that we have $$ v_0^{-1} \Ext_{A(0)_*}(\ul{\bo}_j) \cong \mathbb{F}_2[v_0^{\pm 1}]\{ {\bar{\xi}}_1^{4i} \: : \: 0 \le i \le j \}.$$ Using the map (\ref{eq:splittingmap}), we deduce that we have \begin{align*} v_0^{-1} \Ext_{A(0)_*}(\si{8j} \ul{\bo}_j) & \cong \mathbb{F}_2[v_0^{\pm 1}]\{ {\bar{\xi}}_1^{8i_1}{\bar{\xi}}_2^{4i_2} \: : \: i_1 + i_2 = j \} \\ & \subset \Ext_{A(0)_*}((A//A(2))_*). \end{align*} Consider the diagram: \begin{equation}\label{eq:Qtmfdiag} \xymatrix{ H_* \mathrm{tmf} \ar[r] & H_* \mathrm{tmf}_1(3) & \mr{BP}_* \mr{BP} \ar[l] \ar[d] \\ \mr{H}\mathbb{Z}_*\mathrm{tmf} \ar[u] \ar[d] \ar[r] & \mr{H}\mathbb{Z}_* \mathrm{tmf}_1(3) \ar[u] \ar[d] & \mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3) \ar[l] \ar[d] \\ \rm{H}\mathbb{Q}_* \mathrm{tmf} \ar[r] & \rm{H}\mathbb{Q}_* \mathrm{tmf}_1(3) & \mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3)_\mathbb{Q}. \ar[l] } \end{equation} The map $$ \mr{BP}_*\mr{BP} \rightarrow H_* \mathrm{tmf}_1(3) \cong \mathbb{F}_2[{\bar{\xi}}_1^2, {\bar{\xi}}_2^2, {\bar{\xi}}_3^2, {\bar{\xi}}_4, \ldots ] $$ sends $t_i$ to ${\bar{\xi}}_i^2$. Thus the elements \begin{align*} {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} & \in H_* \mathrm{tmf}, \\ t_1^{4i_1}t_2^{2i_2} & \in \mr{BP}_*\mr{BP} \end{align*} have the same image in $H_* \mathrm{tmf}_1(3)$. However, using the formulas of Section~\ref{sec:review}, we deduce that the images of $t_1$ and $t_2$ in $$ \mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3)_\mathbb{Q} = \mathbb{Q}[a_1, a_3, \bar{a}_1, \bar{a}_3] $$ are given by \begin{align*} t_1 & \mapsto (\bar{a}_1 + a_1)/2, \\ t_2 & \mapsto ( 4\bar{a}_3 - a_1\bar{a}_1^2 - 4a_3 - a_1^3 ) / 8 + \text{terms of higher Adams filtration}. 
\end{align*} Since the map $$ \mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3)_\mathbb{Q} \rightarrow \mr{H}\mathbb{Q}_* \mathrm{tmf}_1(3) = \mathbb{Q}[a_1, a_3] $$ of diagram~(\ref{eq:Qtmfdiag}) sends $\bar{a}_i$ to $a_i$ and $a_i$ to zero, we deduce that the images of $t_1$ and $t_2$ in $\mr{H}\mathbb{Q}_* \mathrm{tmf}_1(3)$ are \begin{align*} t_1 & \mapsto a_1/2, \\ t_2 & \mapsto a_3/2 + \text{terms of higher Adams filtration}. \end{align*} It follows that under the map of $v_0$-localized ASS's induced by the map $\mathrm{tmf} \rightarrow \mathrm{tmf}_1(3)$ $$ v_0^{-1} \Ext_{A(2)_*}(\mathbb{F}_2) \rightarrow v_0^{-1}\Ext_{E[Q_0, Q_1, Q_2]_*} (\mathbb{F}_2) $$ we have $$ {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} \mapsto [a_1/2]^{4i_1} [a_3/2]^{2i_2}. $$ Therefore, by (\ref{eq:c4c6ASS}), we have the equality (in $v_0^{-1}\Ext_{A(0)_*}((A//A(2))_*)$) $$ {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} = [c_4/16]^{i_1}[c_6/32]^{i_2} $$ and the result follows. \end{proof} Corresponding to the K\"unneth isomorphism for $\mr{H}\mathbb{Q}$, there is an isomorphism $$ v_0^{-1}\Ext_{A(0)_*}(M \otimes N) \cong v_0^{-1} \Ext_{A(0)_*}(M) \otimes_{\mathbb{F}_2[v_0^{\pm 1}]} v_0^{-1}\Ext_{A(0)_*}(N). $$ In particular, since the maps $$ v_0^{-1}\Ext(\mathrm{tmf} \wedge \si{8j} \mathrm{bo}_j) \rightarrow v_0^{-1} \Ext(\mathrm{tmf} \wedge \mathrm{tmf}) $$ can be identified with the maps \begin{multline*} v_0^{-1}\Ext_{A(0)_*}((A//A(2))_*) \otimes_{\mathbb{F}_2[v_0^{\pm 1}]} v_0^{-1}\Ext_{A(0)_*}(\Sigma^{8j} \ul{\bo}_j) \\ \rightarrow v_0^{-1}\Ext_{A(0)_*}((A//A(2))_*) \otimes_{\mathbb{F}_2[v_0^{\pm 1}]} v_0^{-1}\Ext_{A(0)_*}((A//A(2))_*) \end{multline*} we have the following corollary.
\begin{cor} The map $$ v_0^{-1}\Ext(\mathrm{tmf} \wedge \si{8j} \mathrm{bo}_j) \rightarrow v_0^{-1} \Ext(\mathrm{tmf} \wedge \mathrm{tmf}) $$ obtained by localizing (\ref{eq:tmfextsplit}) is the canonical inclusion $$ \mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]] \{ [\bar{c}_4]^{i_1} [\bar{c}_6]^{i_2} \: : \: i_1 + i_2 = j \} \hookrightarrow \mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6], [\bar{c}_4], [\bar{c}_6]]. $$ \end{cor} \subsection{Exact sequences relating the $\mathrm{bo}$-Brown-Gitler modules}\label{sec:boSES} In order to proceed with integral calculations we use analogs of the short exact sequences of Section~\ref{sec:bo}. Lemmas 7.1 and 7.2 from \cite{BHHM} state that there are exact sequences \begin{gather} 0\to \Sigma^{8j} \ul{\mathrm{bo}}_j \to \ul{\mathrm{bo}}_{2j}\to (A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1} \to \Sigma^{8j+9} \ul{\mathrm{bo}}_{j-1} \to 0 \label{eq:boSES1}, \\ 0 \to \Sigma^{8j} \ul{\mathrm{bo}}_j \otimes \ul{\mathrm{bo}}_1 \to \ul{\mathrm{bo}}_{2j+1}\to (A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1} \to 0 \label{eq:boSES2} \end{gather} of $A(2)_*$-comodules. These exact sequences provide an inductive method of computing $\Ext_{A(2)_*}(\ul{\bo}_j)$ in terms of $\Ext_{A(1)_*}$-computations and $\Ext_{A(2)_*}(\ul{\mathrm{bo}}_1^{\otimes i})$. We briefly recall how the maps in the exact sequences (\ref{eq:boSES1}) and (\ref{eq:boSES2}) are defined.
On the level of basis elements, the maps \begin{gather*} \si{8j} \ul{\mathrm{bo}}_j \rightarrow \ul{\mathrm{bo}}_{2j}, \\ \si{8j} \ul{\mathrm{bo}}_j \otimes \ul{\mathrm{bo}}_1 \rightarrow \ul{\mathrm{bo}}_{2j+1} \end{gather*} are given respectively by \begin{align*} {\bar{\xi}}_1^{4i_1} {\bar{\xi}}_2^{2i_2} {\bar{\xi}}_3^{i_3} \cdots & \mapsto {\bar{\xi}}_1^a {\bar{\xi}}_2^{4i_1} {\bar{\xi}}_3^{2i_2} {\bar{\xi}}_4^{i_3} \cdots, \\ {\bar{\xi}}_1^{4i_1} {\bar{\xi}}_2^{2i_2} {\bar{\xi}}_3^{i_3} \cdots \otimes \{ 1, {\bar{\xi}}_1^4, {\bar{\xi}}_2^2, {\bar{\xi}}_3 \} & \mapsto ({\bar{\xi}}_1^a {\bar{\xi}}_2^{4i_1} {\bar{\xi}}_3^{2i_2} {\bar{\xi}}_4^{i_3} \cdots) \cdot \{1, {\bar{\xi}}_1^4, {\bar{\xi}}_2^2, {\bar{\xi}}_3 \}, \end{align*} where $a$ is taken to be $8j - wt({\bar{\xi}}_2^{4i_1} {\bar{\xi}}_3^{2i_2} {\bar{\xi}}_4^{i_3} \cdots)$; the notation was introduced in \eqref{eq:basismap}. The maps \begin{gather} \ul{\mathrm{bo}}_{2j} \rightarrow (A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1} , \label{eq:boSESmap1} \\ \ul{\mathrm{bo}}_{2j+1} \rightarrow (A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1} \label{eq:boSESmap2} \end{gather} are given by \begin{multline*} {\bar{\xi}}_1^{8i_1+4\epsilon_1} {\bar{\xi}}_2^{4i_2+2\epsilon_2} {\bar{\xi}}_3^{2i_3 + \epsilon_3} {\bar{\xi}}_4^{i_4}\cdots \mapsto \\ \begin{cases} {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} {\bar{\xi}}_3^{2i_3} {\bar{\xi}}_4^{i_4} \cdots \otimes {\bar{\xi}}_1^{4\epsilon_1} {\bar{\xi}}_2^{2\epsilon_2} {\bar{\xi}}_3^{\epsilon_3}, & wt({\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} {\bar{\xi}}_3^{2i_3} {\bar{\xi}}_4^{i_4} \cdots) \le 8j-8, \\ 0, & \mr{otherwise}, \end{cases} \end{multline*} where $\epsilon_s \in \{0,1\}$. The only change from the integral Brown-Gitler case is that while the map (\ref{eq:boSESmap2}) is surjective, the map (\ref{eq:boSESmap1}) is not.
The cokernel is spanned by the submodule $$ \mathbb{F}_2\{ {\bar{\xi}}_1^4 {\bar{\xi}}_2^2 {\bar{\xi}}_3 \} \otimes \Sigma^{8j-8} \ul{\bo}_{j-1} \subset (A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1}. $$ We therefore have an exact sequence $$ \ul{\mathrm{bo}}_{2j} \rightarrow (A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1} \rightarrow \Sigma^{8j+9} \ul{\bo}_{j-1} \rightarrow 0. $$ We give some low dimensional examples. We shall use the shorthand $$ M \Leftarrow \bigoplus M_i[k_i] $$ to denote the existence of a spectral sequence $$ \bigoplus \Ext_{A(2)_*}^{s-k_i,t+k_i}(M_i) \Rightarrow \Ext_{A(2)_*}^{s,t}(M). $$ In the notation above, we shall abbreviate $M_i[0]$ as $M_i$. We have \begin{equation}\label{eq:boSES_low} \begin{split} \Sigma^{16} \ul{\bo}_2 & \Leftarrow \Sigma^{16} (A(2)//A(1))_* \oplus \Sigma^{24}\ul{\bo}_1 \oplus \Sigma^{32} \mathbb{F}_2[1], \\ \Sigma^{24} \ul{\bo}_3 & \Leftarrow \Sigma^{24} (A(2)//A(1))_* \oplus \Sigma^{32} \ul{\bo}_1^{2} , \\ \si{32} \ul{\bo}_4 & \Leftarrow (A(2)//A(1))_* \otimes \left( \si{32} \ul{\tmf}_1 \oplus \si{48} \mathbb{F}_2 \right) \oplus \si{56} \ul{\bo}_1 \oplus \si{56} \ul{\bo}_1[1] \oplus \si{64}\mathbb{F}_2[1], \\ \si{40}\ul{\bo}_5 & \Leftarrow (A(2)//A(1))_* \otimes \left( \si{40} \ul{\tmf}_1 \oplus \si{56} \ul{\bo}_1 \right) \oplus \si{64}\ul{\bo}_1^{2} \oplus \si{72} \ul{\bo}_1[1], \\ \si{48} \ul{\bo}_6 & \Leftarrow (A(2)//A(1))_* \otimes \left(\si{48} \ul{\tmf}_2 \oplus \si{72} \mathbb{F}_2 \oplus \si{80} \mathbb{F}_2[1] \right) \\ & \quad \quad \oplus \si{80} \ul{\bo}_1^2 \oplus \si{88} \ul{\bo}_1[1] \oplus \si{96}\mathbb{F}_2[2], \\ \si{56} \ul{\bo}_7 & \Leftarrow (A(2)//A(1))_* \otimes \left(\si{56} \ul{\tmf}_2 \oplus \si{80} \ul{\bo}_1 \right) \oplus \si{88} \ul{\bo}_1^3, \\ \si{64} \ul{\bo}_8 & \Leftarrow (A(2)//A(1))_* \otimes \left(\si{64}\ul{\tmf}_3 \oplus \si{96}\ul{\tmf}_1 \oplus \si{112}\mathbb{F}_2 \oplus \si{104} \mathbb{F}_2[1] \right) \\ & \quad \quad \oplus \si{112}\ul{\bo}_1^2[1] \oplus 
\si{120}\ul{\bo}_1 \oplus \si{120} \ul{\bo}_1[1] \oplus \si{128}\mathbb{F}_2[1]. \end{split} \end{equation} In practice, these spectral sequences tend to collapse. In fact, in the range computed explicitly in this paper there are no differentials in these spectral sequences, and the authors have not yet encountered a differential in them elsewhere. With $v_0$ inverted, these spectral sequences ought to collapse for dimensional reasons. In principle, the exact sequences (\ref{eq:boSES1}) and (\ref{eq:boSES2}) allow one to inductively compute $\Ext_{A(2)_*}(\ul{\bo}_j)$ given $\Ext_{A(2)_*}(\ul{\bo}_1^{\otimes k})$, where $\ul{\bo}_1$ is depicted in Figure \ref{fig:bo1}. \begin{figure}[h] $$ \xymatrix@R-2em@C-2em{ {\bar{\xi}}_3 & \circ \ar@{-}[d]^{\sq^1} \\ {\bar{\xi}}_2^2 & \circ \ar@/^1pc/@{-}[dd]^{\sq^2} \\ \\ {\bar{\xi}}_1^4 & \circ \ar@{-} `r[dddd] `[dddd]^{\sq^4} [dddd] \\ \\ \\ \\ 1 & \circ } $$ \caption{$\ul{\bo}_1$}\label{fig:bo1} \end{figure} The problem is that, unlike the $A(1)$-case, we do not have a closed form computation of $\Ext_{A(2)_*}(\ul{\bo}_1^{\otimes k})$. These computations for $k \le 3$ appeared in \cite{BHHM} (the cases of $k = 0, 1$ appeared elsewhere). We include in Figures \ref{fig:A2andbo12} through \ref{fig:bo5andbo6} the charts for $\si{8j} \ul{\bo}_j$, for $0 \leq j\leq 6$, as well as $\Sigma^8\ul{\bo}_1^2$ in dimensions $\le 64$.
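As a plausibility check on the size of the inductive input, the cells of $\ul{\bo}_j$ can be counted directly from the weight filtration. The sketch below (the encoding is ours; it assumes the description of $\ul{\bo}_j$ as the span of the monomials of weight at most $4j$ in $\mathbb{F}_2[\bar\xi_1^4, \bar\xi_2^2, \bar\xi_3, \ldots]$, with $wt(\bar\xi_i) = 2^{i-1}$, recalled in Section~\ref{sec:bo}) recovers the four cells $1, \bar\xi_1^4, \bar\xi_2^2, \bar\xi_3$ of $\ul{\bo}_1$:

```python
def bo_dim(j):
    # Cell count of bo_j: monomials in the algebra generators
    # xibar_1^4, xibar_2^2, xibar_3, xibar_4, ... of total weight <= 4j.
    # With wt(xibar_i) = 2**(i - 1), the k-th generator has weight
    # max(4, 2**(k - 1)), i.e. weights 4, 4, 4, 8, 16, ...
    weights, k = [], 1
    while max(4, 2 ** (k - 1)) <= 4 * j:
        weights.append(max(4, 2 ** (k - 1)))
        k += 1

    def count(budget, idx):
        # Number of exponent tuples for generators idx, idx + 1, ...
        if idx == len(weights):
            return 1
        return sum(count(budget - n * weights[idx], idx + 1)
                   for n in range(budget // weights[idx] + 1))

    return count(4 * j, 0)

print([bo_dim(j) for j in range(7)])  # bo_0 has 1 cell, bo_1 has 4
```

Only the $j \le 1$ values are checked against the charts here; the larger values are offered purely as a counting aid.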
\begin{figure} \centering \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{A2.PNG} \caption{$\ul{\bo}_0$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo12.PNG} \caption{$\Sigma^8\ul{\bo}_1^2$} \end{subfigure} \caption{} \label{fig:A2andbo12} \end{figure} \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo1color.PNG} \caption{$\Sigma^8\ul{\bo}_1$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo2color.PNG} \caption{$\Sigma^{16}\ul{\bo}_2$} \end{subfigure} \caption{}\label{fig:bo1andbo2} \end{figure} \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo3color.PNG} \caption{$\Sigma^{24}\ul{\bo}_3$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo4color.PNG} \caption{$\Sigma^{32}\ul{\bo}_4$} \end{subfigure} \caption{}\label{fig:bo3andbo4} \end{figure} \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo5color.PNG} \caption{$\Sigma^{40}\ul{\bo}_5$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[height =\textheight]{bo6color.PNG} \caption{$\Sigma^{48}\ul{\bo}_6$} \end{subfigure} \caption{}\label{fig:bo5andbo6} \end{figure} \subsection{Rational behavior of the exact sequences}\label{sec:rationalgens} We finish this section with a discussion on how to identify the generators of $\frac{\Ext_{A(2)_*}(\si{8j} \ul{\bo}_j)}{v_0-tors}$. 
On one hand, the inclusion $$ \xymatrix@C-1em{ \frac{\Ext_{A(2)_*}(\si{8j} \ul{\bo}_j)}{v_0-tors} \ar@{^{(}->}[r] & v_0^{-1}\Ext_{A(2)_*}(\si{8j} \ul{\bo}_j) \ar@{^{(}->}[d] &{}\save[]+<2.2cm,0cm>* {=\mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]]\{ {\bar{\xi}}_1^{8i_1}{\bar{\xi}}_2^{4i_2} \: : \: i_1 + i_2 = j \}} \restore \\ & v_0^{-1}\Ext_{A(2)_*}((A//A(2))_*) } $$ discussed in Section~\ref{sec:rational} informs us that the $h_0$-towers of $\Ext_{A(2)_*}(\si{8j}\ul{\bo}_j)$ are all generated by $$ h^k_0 [c_4]^{p} [c_6]^q {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} $$ for appropriate (possibly negative) values of $k$ depending on $i_1, i_2, p,$ and $q$. The problem is that the terms \begin{align} v_0^{-1} \Ext_{A(2)_*}(\si{16j}(A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1}) & \subset v_0^{-1}\Ext_{A(2)_*}(\si{16j} \ul{\mathrm{bo}}_{2j}), \label{eq:boSES12} \\ v_0^{-1} \Ext_{A(2)_*}(\si{16j+8}(A(2)//A(1))_* \otimes \ul{\mathrm{tmf}}_{j-1}) & \subset v_0^{-1}\Ext_{A(2)_*}(\si{16j+8} \ul{\mathrm{bo}}_{2j+1}) \label{eq:boSES22} \end{align} in the exact sequences (\ref{eq:boSES1}), (\ref{eq:boSES2}) are not free over $\mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]]$ (however, they are free over $\mathbb{F}_2[v_0^{\pm 1}, [c_4]]$). We therefore instead identify the generators of $v_0^{-1}\Ext_{A(2)_*}((A//A(2))_*)$ corresponding to the generators of (\ref{eq:boSES12}) and (\ref{eq:boSES22}) as modules over $\mathbb{F}_2[v_0^{\pm 1}, [c_4]]$, as well as those generators coming (inductively) from \begin{align} v_0^{-1} \Ext_{A(2)_*}(\si{24j} \ul{\bo}_j) & \subset v^{-1}_0\Ext_{A(2)_*}(\si{16j}\ul{\bo}_{2j}), \label{eq:boSES11}\\ v_0^{-1} \Ext_{A(2)_*}(\si{24j+8} \ul{\bo}_j \otimes \ul{\bo}_1) & \subset v^{-1}_0\Ext_{A(2)_*}(\si{16j+8}\ul{\bo}_{2j+1}) \label{eq:boSES21} \end{align} in the following two lemmas, whose proofs are immediate from the definitions of the maps in (\ref{eq:boSES1}), (\ref{eq:boSES2}).
\begin{lem}\label{lem:boSES_2} The summands (\ref{eq:boSES12}) (respectively (\ref{eq:boSES22})) are generated, as modules over $\mathbb{F}_2[v_0^{\pm 1}, [c_4]]$, by the elements $$ {\bar{\xi}}_1^a {\bar{\xi}}_2^{8i_1} {\bar{\xi}}_3^{4i_2}, \: {\bar{\xi}}_1^{a-8} {\bar{\xi}}_2^{8i_1+4} {\bar{\xi}}_3^{4i_2} \in (A//A(2))_*, $$ with $i_1 + i_2 \le j-1$ and $a = 16j - 8i_1 - 8i_2$ (respectively $a = 16j+8 - 8i_1 - 8i_2$). \end{lem} \begin{lem}\label{lem:boSES_1} Suppose inductively (via the exact sequences (\ref{eq:boSES1}),(\ref{eq:boSES2})) that the summand $$ v_0^{-1}\Ext_{A(2)_*}(\si{8j}\ul{\bo}_j) \subset v_0^{-1}\Ext_{A(2)_*}((A//A(2))_*) $$ has generators of the form $$ \{ {\bar{\xi}}_1^{i_1}{\bar{\xi}}_2^{i_2}\ldots \}. $$ Then the summand (\ref{eq:boSES11}) is generated by $$ \{ {\bar{\xi}}_2^{i_1} {\bar{\xi}}_3^{i_2} \cdots \} $$ and the summand (\ref{eq:boSES21}) is generated by $$ \{ {\bar{\xi}}_2^{i_1} {\bar{\xi}}_3^{i_2} \cdots \} \cdot \{ {\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \}. $$ \end{lem} The remaining term \begin{equation}\label{eq:boSES13} v_0^{-1}\Ext_{A(2)_*}(\si{24j+8}\ul{\bo}_{j-1}[1]) \subset v_0^{-1} \Ext_{A(2)_*}(\ul{\bo}_{2j}) \end{equation} coming from (\ref{eq:boSES1}) is handled by the following lemma. \begin{lem}\label{lem:boSES13} Consider the summand $$ v_0^{-1}\Ext_{A(1)_*}(\si{24j-8}\ul{\bo}_{j-1}) \subset v_0^{-1} \Ext_{A(1)_*}(\si{16j}\ul{\mathrm{tmf}}_{j-1}) \subset v_0^{-1}\Ext_{A(2)_*}(\si{16j} \ul{\mathrm{bo}}_{2j}) $$ generated as a module over $\mathbb{F}_2[v_0^{\pm 1}, [c_4]]$ by the generators $$ {\bar{\xi}}^{16}_1 {\bar{\xi}}^{8i_1}_2 {\bar{\xi}}_3^{4i_2}, \: {\bar{\xi}}^{8}_1 {\bar{\xi}}^{8i_1+4}_2 {\bar{\xi}}_3^{4i_2} \in (A//A(2))_*, $$ with $i_1 + i_2 = j-1$.
Let $x_i$ ($0 \le i \le j-1$) be the generator of the summand (\ref{eq:boSES13}), as a module over $\mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]]$ corresponding to the generator ${\bar{\xi}}_1^{4i} \in \ul{\bo}_{j-1}.$ Then we have $$ [c_6]{\bar{\xi}}^{8}_1 {\bar{\xi}}_2^{8i_1+4} {\bar{\xi}}^{4i_2}_3 = v_0^4 x_{i_2} + \cdots $$ in $v_0^{-1}\Ext_{A(2)_*}(\si{16j} \ul{\bo}_{2j})$, where the additional terms not listed above all come from the summand $$ v_0^{-1} \Ext_{A(2)_*}(\si{24j}\ul{\bo}_{j}) \subset v_0^{-1} \Ext_{A(2)_*}(\si{16j} \ul{\bo}_{2j}). $$ \end{lem} \begin{proof} This follows from the definition of the last map in (\ref{eq:boSES1}), together with the fact that with $v_0$-inverted, the cell ${\bar{\xi}}_1^{4}{\bar{\xi}}_2^2{\bar{\xi}}_3 \in (A(2)//A(1))_*$ attaches to the cell ${\bar{\xi}}_1^4$ with attaching map $[c_6]/v_0^4$. \end{proof} Lemmas~\ref{lem:boSES_2}, \ref{lem:boSES_1}, and \ref{lem:boSES13} give an inductive method of identifying a collection of generators for $v_0^{-1}\Ext_{A(2)_*}(\ul{\bo}_j)$, which are compatible with the exact sequences (\ref{eq:boSES1}), (\ref{eq:boSES2}). We tabulate these below for the decompositions arising from the spectral sequences (\ref{eq:boSES_low}). For those summands of the form $(A(2)//A(1))_* \otimes -$ these are generators over $\mathbb{F}_2[v_0^{\pm 1}, [c_4]]$, for the other summands these are generators over $\mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]]$. 
\begin{alignat*}{3} \ul{\bo}_0 : & \quad & \mathbb{F}_2: & \quad && 1 \\ \si{8}\ul{\bo}_1: && \si{8} \ul{\bo}_1: &&& {\bar{\xi}}_1^{8}, {\bar{\xi}}_2^4 \\ \Sigma^{16} \ul{\bo}_2: && \Sigma^{16} (A(2)//A(1))_*: &&& {\bar{\xi}}_1^{16}, {\bar{\xi}}_1^8 {\bar{\xi}}_2^4 \\ && \Sigma^{24}\ul{\bo}_1: &&& {\bar{\xi}}_2^8, {\bar{\xi}}_3^4 \\ && \Sigma^{32}\mathbb{F}_2[1]: &&& v_0^{-4}[c_6] {\bar{\xi}}_1^8 {\bar{\xi}}_2^4 + \cdots \\ \Sigma^{24} \ul{\bo}_3: && \Sigma^{24} (A(2)//A(1))_*: &&& {\bar{\xi}}_1^{24}, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^4 \\ && \Sigma^{32} \ul{\bo}_1^{2}: &&& \{ {\bar{\xi}}_2^{8}, {\bar{\xi}}_3^4 \}\cdot\{{\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \} \\ \si{32} \ul{\bo}_4: && \si{32} (A(2)//A(1))_* \otimes \ul{\tmf}_1: &&& {\bar{\xi}}_1^{32}, {\bar{\xi}}_1^{24}{\bar{\xi}}_2^4, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^8, {\bar{\xi}}_1^8 {\bar{\xi}}_2^{12}, {\bar{\xi}}_1^{16} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{8} {\bar{\xi}}_2^4 {\bar{\xi}}_3^4 \\ && \si{48} (A(2)//A(1))_*: &&& {\bar{\xi}}_2^{16}, {\bar{\xi}}_2^8{\bar{\xi}}_3^4 \\ && \si{56} \ul{\bo}_1: &&& {\bar{\xi}}_3^8, {\bar{\xi}}_4^4 \\ && \si{64}\mathbb{F}_2[1]: &&& v_0^{-4}[c_6] {\bar{\xi}}_2^8{\bar{\xi}}_3^4 + \cdots \\ && \si{56} \ul{\bo}_1[1]: &&& v_0^{-4}[c_6]{\bar{\xi}}_1^8 {\bar{\xi}}_2^{12} + \cdots , v_0^{-4}[c_6]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^4 {\bar{\xi}}_3^4 + \cdots \\ \si{40}\ul{\bo}_5: && \si{40} (A(2)//A(1))_* \otimes \ul{\tmf}_1: &&& {\bar{\xi}}_1^{40}, {\bar{\xi}}_1^{32} {\bar{\xi}}_2^4, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^8, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{12}, {\bar{\xi}}_1^{24} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^4 {\bar{\xi}}_3^4 \\ && \si{56} (A(2)//A(1))_* \otimes\ul{\bo}_1: &&& \{ {\bar{\xi}}_2^{16}, {\bar{\xi}}_2^8 {\bar{\xi}}_3^4 \}\cdot \{ {\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \} \\ && \si{64}\ul{\bo}_1^{2}: &&& \{ {\bar{\xi}}_3^8, {\bar{\xi}}_4^4 \} \cdot \{ {\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \}\\ && \si{72} \ul{\bo}_1[1]: &&& \{ v_0^{-4}[c_6] {\bar{\xi}}_2^8 {\bar{\xi}}_3^4 + \cdots
\} \cdot \{ {\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \}\\ \si{48} \ul{\bo}_6: && \si{48} (A(2)//A(1))_* \otimes \ul{\tmf}_2: &&& {\bar{\xi}}_1^{48}, {\bar{\xi}}_1^{40} {\bar{\xi}}_2^4, {\bar{\xi}}_1^{32} {\bar{\xi}}_2^8, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^{12}, {\bar{\xi}}_1^{32} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^4 {\bar{\xi}}_3^4, \\ && &&& \quad {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{16}, {\bar{\xi}}_1^8 {\bar{\xi}}_2^{20}, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^8 {\bar{\xi}}_3^4, {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{12} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{16} {\bar{\xi}}_3^8, {\bar{\xi}}_1^8 {\bar{\xi}}_2^4 {\bar{\xi}}_3^8 \\ && \si{72} (A(2)//A(1))_*: &&& {\bar{\xi}}_2^{24}, {\bar{\xi}}_2^{16} {\bar{\xi}}_3^4 \\ && \si{80} \ul{\bo}_1^2: &&& \{ {\bar{\xi}}_3^{8}, {\bar{\xi}}_4^4 \}\cdot\{{\bar{\xi}}_2^8, {\bar{\xi}}_3^4 \} \\ && \si{80} \ul{\bo}_2[1] &&& v_0^{-4}[c_6]{\bar{\xi}}_1^8 {\bar{\xi}}_2^{20} + \cdots , v_0^{-4}[c_6]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{12} {\bar{\xi}}_3^4 + \cdots , \\ && &&& \quad v_0^{-4}[c_6]{\bar{\xi}}_1^8 {\bar{\xi}}_2^4 {\bar{\xi}}_3^8 + \cdots \\ \si{56} \ul{\bo}_7: && \si{56} (A(2)//A(1))_* \otimes \ul{\tmf}_2: &&& {\bar{\xi}}_1^{56}, {\bar{\xi}}_1^{48} {\bar{\xi}}_2^4, {\bar{\xi}}_1^{40} {\bar{\xi}}_2^8, {\bar{\xi}}_1^{32} {\bar{\xi}}_2^{12}, {\bar{\xi}}_1^{40} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{32} {\bar{\xi}}_2^4 {\bar{\xi}}_3^4, \\ && &&& \quad {\bar{\xi}}_1^{24} {\bar{\xi}}_2^{16}, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{20}, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^8 {\bar{\xi}}_3^4, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{12} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{24} {\bar{\xi}}_3^8, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^4 {\bar{\xi}}_3^8 \\ &&\si{80} (A(2)//A(1))_* \otimes \ul{\bo}_1: &&& \{ {\bar{\xi}}_2^{24}, {\bar{\xi}}_2^{16} {\bar{\xi}}_3^4 \} \cdot \{ {\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \} \\ && \si{88} \ul{\bo}_1^3: &&& \{ {\bar{\xi}}_3^{8}, {\bar{\xi}}_4^4 \}\cdot\{{\bar{\xi}}_2^8, {\bar{\xi}}_3^4 \} \cdot \{ {\bar{\xi}}_1^8, {\bar{\xi}}_2^4 \} \\ \si{64} 
\ul{\bo}_8: && \si{64} (A(2)//A(1))_* \otimes \ul{\tmf}_3: &&& {\bar{\xi}}_1^{64}, {\bar{\xi}}_1^{56} {\bar{\xi}}_2^4, {\bar{\xi}}_1^{48} {\bar{\xi}}_2^8, {\bar{\xi}}_1^{40} {\bar{\xi}}_2^{12}, {\bar{\xi}}_1^{48} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{40} {\bar{\xi}}_2^4 {\bar{\xi}}_3^4, \\ && &&& \quad {\bar{\xi}}_1^{32} {\bar{\xi}}_2^{16}, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^{20}, {\bar{\xi}}_1^{32} {\bar{\xi}}_2^8 {\bar{\xi}}_3^4, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^{12} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{32} {\bar{\xi}}_3^8, {\bar{\xi}}_1^{24} {\bar{\xi}}_2^4 {\bar{\xi}}_3^8, \\ && &&& \quad {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{24}, {\bar{\xi}}_1^{8} {\bar{\xi}}_2^{28}, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{16} {\bar{\xi}}_3^4, {\bar{\xi}}_1^8 {\bar{\xi}}_2^{20} {\bar{\xi}}_3^4, {\bar{\xi}}_1^{16} {\bar{\xi}}_2^{8} {\bar{\xi}}_3^8, {\bar{\xi}}_1^8 {\bar{\xi}}_2^{12} {\bar{\xi}}_3^8, \\ && &&& \quad {\bar{\xi}}_1^{16} {\bar{\xi}}_3^{12}, {\bar{\xi}}_1^8 {\bar{\xi}}_2^4 {\bar{\xi}}_3^{12} \\ && \si{96} (A(2)//A(1))_* \otimes \ul{\tmf}_1: &&& {\bar{\xi}}_2^{32}, {\bar{\xi}}_2^{24} {\bar{\xi}}_3^4, {\bar{\xi}}_2^{16} {\bar{\xi}}_3^8, {\bar{\xi}}_2^8 {\bar{\xi}}_3^{12}, {\bar{\xi}}_2^{16} {\bar{\xi}}_4^4, {\bar{\xi}}_2^{8} {\bar{\xi}}_3^4 {\bar{\xi}}_4^4 \\ && \si{112} (A(2)//A(1))_*: &&& {\bar{\xi}}_3^{16}, {\bar{\xi}}_3^8{\bar{\xi}}_4^4 \\ && \si{120}\ul{\bo}_1: &&& {\bar{\xi}}_4^8, {\bar{\xi}}_5^4 \\ && \si{128}\mathbb{F}_2[1]: &&& v_0^{-4}[c_6] {\bar{\xi}}_3^8{\bar{\xi}}_4^4 + \cdots \\ && \si{120} \ul{\bo}_1[1]: &&& v_0^{-4}[c_6]{\bar{\xi}}_2^8 {\bar{\xi}}_3^{12} + \cdots , v_0^{-4}[c_6]{\bar{\xi}}_2^{8} {\bar{\xi}}_3^4 {\bar{\xi}}_4^4 + \cdots\\ && \si{104} \ul{\bo}_3 [1]: &&& v_0^{-4}[c_6]{\bar{\xi}}_1^{8} {\bar{\xi}}_2^{28}+\cdots, v_0^{-4}[c_6]{\bar{\xi}}_1^8 {\bar{\xi}}_2^{20} {\bar{\xi}}_3^4+\cdots, \\ && &&& \quad v_0^{-4}[c_6]{\bar{\xi}}_1^8 {\bar{\xi}}_2^{12} {\bar{\xi}}_3^8 + \cdots \end{alignat*} \subsection{Identification of the integral lattice} Having constructed useful
bases of the summands $$ v_0^{-1} \Ext_{A(2)_*}(\si{8j}\ul{\mathrm{bo}}_j) \subset v_0^{-1}\Ext_{A(2)_*}((A//A(2))_*) $$ it remains to understand the lattices $$ \frac{\Ext_{A(2)_*}(\si{8j}\ul{\mathrm{bo}}_j)}{v_0-tors} \subset v_0^{-1} \Ext_{A(2)_*}(\si{8j}\ul{\mathrm{bo}}_j). $$ This can be accomplished inductively; the rational generators we identified in the last section are compatible with the exact sequences (\ref{eq:boSES1}), (\ref{eq:boSES2}), and $\frac{\Ext_{A(2)_*}}{v_0-tors}$ of the terms in these exact sequences are determined by the $\frac{\Ext_{A(1)_*}}{v_0-tors}$-computations of Section~\ref{sec:bo}, and knowledge of $$ \frac{\Ext_{A(2)_*}(\ul{\bo}_1^k)}{v_0-tors}. $$ Unfortunately the latter requires separate explicit computation for each $k$, and hence does not yield a general answer. Nevertheless, in this section we will give some lemmas which provide convenient criteria for identifying the $i$ so that given a rational generator $x \in (A//A(2))_*$ (as in the previous section) we have $$ v_0^i x \in \frac{\Ext_{A(2)_*}((A//A(2))_*)}{v_0-tors} \subset v_0^{-1} \Ext_{A(2)_*}((A//A(2))_*). $$ We first must clarify what we actually mean by ``rational generator''. The generators identified in the last section originate from the exact sequences (\ref{eq:boSES1}), (\ref{eq:boSES2}). More precisely, they come from the generators of $v_0^{-1} \Ext_{A(2)_*}(M)$ where $M$ is given by \begin{align*} \text{Case 1:} & \quad M = (A(2)//A(1))_* \otimes \ul{\tmf}_j, \\ \text{Case 2:} & \quad M = \ul{\bo}_1^k.
\end{align*} In Case 1, a generator $x$ of $v_0^{-1}\Ext_{A(2)_*}(M)$ is a generator as a module over $\mathbb{F}_2[v_0^{\pm 1}, [c_4]]$ using the isomorphisms \begin{equation}\label{eq:integralcase1} \begin{split} & v_0^{-1}\Ext_{A(2)_*}((A(2)//A(1))_* \otimes \ul{\tmf}_j) \\ & \quad \cong v_0^{-1}\Ext_{A(1)_*}(\ul{\tmf}_j) \\ & \quad \cong v_0^{-1}\Ext_{A_*}((A//A(1))_* \otimes \ul{\tmf}_j) \\ & \quad \xrightarrow[\cong]{\alpha} v_0^{-1}\Ext_{A(0)_*}((A//A(1))_* \otimes \ul{\tmf}_j) \\ & \quad \cong v_0^{-1}\Ext_{A(0)_*}((A//A(1))_*) \otimes_{\mathbb{F}_2[v_0^{\pm 1}]} v_0^{-1}\Ext_{A(0)_*}(\ul{\tmf}_j) \\ & \quad \cong \mathbb{F}_2[v_0^{\pm 1}, [c_4]]\{ 1, {\bar{\xi}}_1^4 \} \otimes_{\mathbb{F}_2} \mathbb{F}_2\{ {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2} \: : \: i_1 + i_2 \le j \}. \end{split} \end{equation} The rational generators in this case correspond to the generators $$ x = {\bar{\xi}}_1^{4\epsilon} \otimes {\bar{\xi}}_1^{8i_1} {\bar{\xi}}_2^{4i_2}. $$ In Case 2, a generator $x$ of $v_0^{-1}\Ext_{A(2)_*}(M)$ is a generator as a module over $\mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]]$, using the isomorphisms \begin{equation}\label{eq:integralcase2} \begin{split} & v_0^{-1}\Ext_{A(2)_*}(\ul{\bo}_1^k) \\ & \quad \cong v_0^{-1}\Ext_{A_*}((A//A(2))_* \otimes \ul{\bo}_1^k) \\ & \quad \xrightarrow[\cong]{\alpha} v_0^{-1}\Ext_{A(0)_*}((A//A(2))_* \otimes \ul{\bo}^k_1) \\ & \quad \cong v_0^{-1}\Ext_{A(0)_*}((A//A(2))_*) \otimes_{\mathbb{F}_2[v_0^{\pm 1}]} v_0^{-1}\Ext_{A(0)_*}(\ul{\bo}_1^k) \\ & \quad \cong \mathbb{F}_2[v_0^{\pm 1}, [c_4], [c_6]] \otimes_{\mathbb{F}_2} \mathbb{F}_2\{ 1, {\bar{\xi}}_1^4 \}^{\otimes k}. \end{split} \end{equation} The rational generators in this case correspond to the generators $$ x \in \{ 1, {\bar{\xi}}_1^4 \}^{\otimes k}. 
$$ In either case, the maps $\alpha$ in both (\ref{eq:integralcase1}) and (\ref{eq:integralcase2}) arise from surjections of cobar complexes $$ C^*_{A_*}(N) \rightarrow C^*_{A(0)_*}(N) $$ induced from the surjection $$ A_* \rightarrow A(0)_*. $$ Thus a term $v_0^i x \in C^*_{A(0)_*}(N)$ representing an element in $v_0^{-1} \Ext_{A(0)_*}(N)$ corresponds (for $i$ sufficiently large) to a term $[{\bar{\xi}}_1]^ix + \cdots \in C^*_{A_*}(N)$. Then we have determined an element of the integral lattice $$ \left\lbrack [{\bar{\xi}}_1]^ix + \cdots \right\rbrack \in \frac{\Ext_{A_*}(N)}{v_0-tors} \subset v_0^{-1}\Ext_{A_*}(N). $$ \begin{lem}\label{lem:integralCnu} Suppose that the $A(2)_*$-coaction on $x \in (A//A(2))_*$ satisfies $$ \psi(x) = {\bar{\xi}}_1^4 \otimes y + \text{terms in lower dimension} $$ with $y$ primitive, as in the following ``cell diagram'': $$ \xymatrix@C-2em@R-2em{ x & \circ \ar@{-} `r[ddd] `[ddd]^{\sq^4} [ddd] \\ \\ \\ y & \circ } $$ Then $$ v_0^3 x \in \frac{\Ext_{A(2)_*}((A//A(2))_*)}{v_0-tors} \subset v_0^{-1} \Ext_{A(2)_*}((A//A(2))_*) $$ and is represented by $$ [{\bar{\xi}}_1|{\bar{\xi}}_1 | {\bar{\xi}}_1]x + \left( [{\bar{\xi}}_1|{\bar{\xi}}_2|{\bar{\xi}}_2] + [{\bar{\xi}}_1|{\bar{\xi}}_1|{\bar{\xi}}_1^2{\bar{\xi}}_2] + [{\bar{\xi}}_1|{\bar{\xi}}_1{\bar{\xi}}_2|{\bar{\xi}}_1^2] + [{\bar{\xi}}_2|{\bar{\xi}}_1^2|{\bar{\xi}}_1^2] \right)y $$ in the cobar complex $C^*_{A(2)_*}((A//A(2))_*)$. \end{lem} \begin{proof} Since the cell complex depicted agrees with $A(2)//A(1)$ through dimension $4$, $\Ext_{A(2)_*}$ of this comodule agrees with $\Ext_{A(1)_*}(\mathbb{F}_2)$ through dimension $4$. In particular, $v_0^3 x + \cdots $ generates an $\frac{\Ext_{A(2)_*}}{v_0-tors}$-term in this dimension.
To determine the exact representing cocycle, we note that $$ [{\bar{\xi}}_1|{\bar{\xi}}_2|{\bar{\xi}}_2] + [{\bar{\xi}}_1|{\bar{\xi}}_1|{\bar{\xi}}_1^2{\bar{\xi}}_2] + [{\bar{\xi}}_1|{\bar{\xi}}_1{\bar{\xi}}_2|{\bar{\xi}}_1^2] + [{\bar{\xi}}_2|{\bar{\xi}}_1^2|{\bar{\xi}}_1^2] $$ kills $h_0^3 h_2$ in $\Ext_{A(2)_*}(\mathbb{F}_2)$. \end{proof} \begin{ex} Let $\alpha = {\bar{\xi}}^{8j_1}_{i_1} {\bar{\xi}}_{i_2}^{8j_2} \cdots $ be a monomial with exponents all divisible by $8$. A typical instance of a set of generators of $(A//A(2))_*$ satisfying the hypotheses of Lemma~\ref{lem:integralCnu} is $$ \xymatrix@C-2em@R-2em{ {\bar{\xi}}_i^4 \alpha & \circ \ar@{-} `r[ddd] `[ddd]^{\sq^4} [ddd] \\ \\ \\ {\bar{\xi}}_{i-1}^8 \alpha & \circ } $$ \end{ex} The following corollary will be essential to relating the integral generators of Lemma~\ref{lem:integralCnu} to $2$-variable modular forms in Section~\ref{sec:2var}. \begin{cor}\label{cor:integralCnu} Suppose that $x$ satisfies the hypotheses of Lemma~\ref{lem:integralCnu}. The image of the corresponding integral generator $$ v_0^3 x + \cdots \in \Ext_{A(2)_*}((A//A(2))_*) $$ in $\Ext_{E[Q_0, Q_1, Q_2]_*}((A//E[Q_0, Q_1, Q_2])_*)$ is given by $$ v_0^3 x + v_0 [a_1]^2 y. $$ \end{cor} \begin{proof} Note the equality $$ E[Q_0, Q_1, Q_2]_* = \mathbb{F}_2[{\bar{\xi}}_1, {\bar{\xi}}_2, {\bar{\xi}}_3]/({\bar{\xi}}_1^2, {\bar{\xi}}_2^2, {\bar{\xi}}_3^2). $$ Therefore the image of the integral generator of Lemma~\ref{lem:integralCnu} under the map $$ C^*_{A(2)_*}((A//A(2))_*) \rightarrow C^*_{E[Q_0, Q_1, Q_2]_*}((A//E[Q_0, Q_1, Q_2])_*) $$ is $$ [{\bar{\xi}}_1|{\bar{\xi}}_1 | {\bar{\xi}}_1]x + [{\bar{\xi}}_1|{\bar{\xi}}_2|{\bar{\xi}}_2] y $$ and this represents $v_0^3 x + v_0 [a_1]^2 y$. \end{proof} Similar arguments provide the following slight refinement.
\begin{lem}\label{lem:integralCnuHZ1} Suppose that the $A(2)_*$-coaction on $x \in (A//A(2))_*$ satisfies $$ \psi(x) = {\bar{\xi}}_1^4 \otimes y + \text{terms in lower dimension} $$ with $y$ primitive, and that there exist $w$ and $z$ satisfying $$ \psi(z) = {\bar{\xi}}_1^2 \otimes y + \text{terms in lower dimension} $$ and $$ \psi(w) = {\bar{\xi}}_1 \otimes z + {\bar{\xi}}_2 \otimes y + \text{terms in lower dimension} $$ as in the following ``cell diagram'': $$ \xymatrix@C-2em@R-1em{ x & \circ \ar@{-} `r[dddd] `[dddd]^{\sq^4} [dddd] \\ w & \circ \ar@{-}[d]_{\sq^1} \\ z & \circ \ar@/_1pc/@{-}[dd]_{\sq^2} \\ \\ y & \circ } $$ Then $$ v_0 x \in \frac{\Ext_{A(2)_*}((A//A(2))_*)}{v_0-tors} \subset v_0^{-1} \Ext_{A(2)_*}((A//A(2))_*) $$ is represented by $$ [{\bar{\xi}}_1]x + [{\bar{\xi}}_1^2]w + \left( [{\bar{\xi}}_1^3]+[{\bar{\xi}}_2]\right)z + [{\bar{\xi}}_1^2 {\bar{\xi}}_2]y $$ in the cobar complex $C^*_{A(2)_*}((A//A(2))_*)$. \end{lem} \begin{ex} Let $\alpha = {\bar{\xi}}^{8j_1}_{i_1} {\bar{\xi}}_{i_2}^{8j_2} \cdots $ be a monomial with exponents all divisible by $8$. A typical instance of a set of generators of $(A//A(2))_*$ satisfying the hypotheses of Lemma~\ref{lem:integralCnuHZ1} is $$ \xymatrix@C-2em@R-1em{ {\bar{\xi}}^{4}_{i} {\bar{\xi}}^{4}_{i'} \alpha & \circ \ar@{-} `r[dddd] `[dddd]^{\sq^4} [dddd] \\ ({\bar{\xi}}^{8}_{i-1} {\bar{\xi}}_{i'+2} + {\bar{\xi}}_{i+2} {\bar{\xi}}^8_{i'-1}) \alpha & \circ \ar@{-}[d]_{\sq^1} \\ ({\bar{\xi}}^{8}_{i-1} {\bar{\xi}}^2_{i'+1} + {\bar{\xi}}^2_{i+1} {\bar{\xi}}^8_{i'-1}) \alpha & \circ \ar@/_1pc/@{-}[dd]_{\sq^2} \\ \\ ({\bar{\xi}}^{8}_{i-1} {\bar{\xi}}^4_{i'} + {\bar{\xi}}^4_i {\bar{\xi}}^8_{i'-1}) \alpha & \circ } $$ \end{ex} \begin{cor}\label{cor:integralCnuHZ1} Suppose that $x$ satisfies the hypotheses of Lemma~\ref{lem:integralCnuHZ1}. The image of the corresponding integral generator $$ v_0 x + \cdots \in \Ext_{A(2)_*}((A//A(2))_*) $$ in $\Ext_{E[Q_0, Q_1, Q_2]_*}((A//E[Q_0, Q_1, Q_2])_*)$ is given by $$ v_0 x + [a_1] z.
$$ \end{cor} \section{Motivation: analysis of $\mathrm{bo}_*\mathrm{bo}$}\label{sec:bo} In analogy with the four perspectives described in the introduction, there are four primary perspectives on the ring of cooperations for real $K$-theory. \begin{enumerate} \item The spectrum $\mathrm{bo} \wedge \mathrm{bo}$ admits a decomposition (at the prime $2$) \[ \mathrm{bo} \wedge \mathrm{bo} \simeq \Bigvee{j \ge 0} \Sigma^{4j} \mathrm{bo} \wedge \mr{H}\mathbb{Z}_j, \] where $\mr{H}\mathbb{Z}_j$ is the $j$th integral Brown-Gitler spectrum. \item There is an isomorphism $\mr{KO}_*\mr{KO} \cong \mr{KO}_* \otimes_{\mr{KO}_0} \mr{KO}_0\mr{KO}$, and $\mr{KO}_0\mr{KO}$ is isomorphic to a subring of the ring of numerical functions. \item $K(1)$-locally, the ring spectrum $(\mr{KO} \wedge \mr{KO})_{K(1)}$ is given by the function spectrum $$ (\mr{KO} \wedge \mr{KO})_{K(1)} \simeq \Map(\mathbb{Z}_2^\times/\{\pm 1\}, \mr{KO}^\wedge_2). $$ \item By evaluation on Adams operations, $\mr{KO}_*\mr{KO}$ injects into a product of copies of $\mr{KO}$: $$ \mr{KO} \wedge \mr{KO} \hookrightarrow \prod_{i \in \mathbb{Z}} \mr{KO}. $$ \end{enumerate} \subsection{Integral Brown-Gitler spectra} The decomposition of $\mathrm{bo} \wedge \mathrm{bo}$ above is a topological realization of a homology decomposition (see \cite{boresolutions}, \cite{Milgram-connectivektheory}). Endow the monomials of the $A_*$-comodule $$ H_*\mr{H}\mathbb{Z} = \mathbb{F}_2[\bar\xi_1^2, \bar\xi_2, \bar\xi_3, \ldots ]$$ with a weight by defining $wt(\bar\xi_i) = 2^{i-1}$, $wt(1)=0$, and $wt(xy)=wt(x)+wt(y)$ for all $x,y\in H_* \mr{H}\mathbb{Z}$. The comodule $H_*\mr{H}\mathbb{Z}$ admits an increasing filtration by integral Brown-Gitler comodules $\ul{\mr{H}\mathbb{Z}}_j$, where $\ul{\mr{H}\mathbb{Z}}_j$ is spanned by elements of weight at most $2j$. These $A_*$-comodules are realized by the integral Brown-Gitler spectra $\mr{H}\mathbb{Z}_j$, so that $$ H_* \mr{H}\mathbb{Z}_j \cong \ul{\mr{H}\mathbb{Z}}_j. 
$$ There is a decomposition of $A(1)_*$-comodules: $$ H_* \mathrm{bo} = (A//A(1))_* \cong \bigoplus_{j \ge 0} \Sigma^{4j} \ul{\mr{H}\mathbb{Z}}_j. $$ This results in a decomposition on the level of Adams $E_2$-terms \begin{align*} \Ext(\mathrm{bo} \wedge \mathrm{bo}) & \cong \bigoplus_{j \ge 0} \Ext(\Sigma^{4j}\mathrm{bo} \wedge \mr{H}\mathbb{Z}_j) \\ & \cong \bigoplus_{j \ge 0} \Ext_{A(1)_*}(\Sigma^{4j}\mr{H}\mathbb{Z}_j). \end{align*} This algebraic splitting is topologically realized by a splitting $$ \mathrm{bo} \wedge \mathrm{bo} \simeq \Bigvee{j \ge 0} \Sigma^{4j} \mathrm{bo} \wedge \mr{H}\mathbb{Z}_j. $$ The goal of this section is to calculate the images of the maps \[ \mathrm{bo} \wedge \mr{H}\mathbb{Z}_j \lra{} \mathrm{bo} \wedge \mathrm{bo} \] in the decomposition above in order to illustrate the method used in our analysis of $\mathrm{tmf} \wedge \mathrm{tmf}$. Even in this case our perspective has some novel elements which provide a conceptual explanation for formulas obtained by Lellmann and Mahowald in \cite{LellmannMahowald}. \subsection{Exact sequences relating $H\mathbb{Z}_j$} Just as with $\ul{\mr{H}\mathbb{Z}}_j$ we define $\ul{\mathrm{bo}}_j$ to be the submodule of \[ (A//A(1))_* \cong \mathbb{F}_2[\bar\xi_1^4, \bar\xi_2^2, \bar\xi_3, \ldots ] \] generated by elements of weight at most $4j$. These submodules are discussed more thoroughly at the beginning of Section~\ref{sec:ass}. With these in hand we have the following exact sequences. \begin{lem}\label{lem:HZSES} There are short exact sequences of $A(1)_*$-comodules \begin{gather} 0 \rightarrow \Sigma^{4j}\ul{\mr{H}\mathbb{Z}}_j \rightarrow \ul{\mr{H}\mathbb{Z}}_{2j} \rightarrow \ul{\mathrm{bo}}_{j-1} \otimes (A(1)//A(0))_* \rightarrow 0, \label{eq:HZSES1} \\ 0 \rightarrow \Sigma^{4j}\ul{\mr{H}\mathbb{Z}}_j \otimes \ul{\mr{H}\mathbb{Z}}_1 \rightarrow \ul{\mr{H}\mathbb{Z}}_{2j+1} \rightarrow \ul{\mathrm{bo}}_{j-1} \otimes (A(1)//A(0))_* \rightarrow 0.
\label{eq:HZSES2} \end{gather} \end{lem} \begin{proof} These short exact sequences are the analogs for integral Brown-Gitler modules of a pair of short exact sequences for $\mathrm{bo}$-Brown-Gitler modules (see Propositions 7.1 and 7.2 of \cite{BHHM}). The proof is almost identical to that given in \cite{BHHM}. On the level of basis elements, the map \begin{gather*} \si{4j} \ul{\mr{H}\mathbb{Z}}_j \rightarrow \ul{\mr{H}\mathbb{Z}}_{2j} \end{gather*} is given by \begin{align*} {\bar{\xi}}_1^{2i_1} {\bar{\xi}}_2^{i_2} \cdots & \mapsto {\bar{\xi}}_1^a {\bar{\xi}}_2^{2i_1} {\bar{\xi}}_3^{i_2} \cdots, \end{align*} whereas the map \begin{gather*} \si{4j} \ul{\mr{H}\mathbb{Z}}_j \otimes \ul{\mr{H}\mathbb{Z}}_1 \rightarrow \ul{\mr{H}\mathbb{Z}}_{2j+1} \end{gather*} is determined by \begin{align*} {\bar{\xi}}_1^{2i_1} {\bar{\xi}}_2^{i_2} \cdots \otimes 1 & \mapsto ({\bar{\xi}}_1^a {\bar{\xi}}_2^{2i_1} {\bar{\xi}}_3^{i_2} \cdots) \cdot 1, \\ {\bar{\xi}}_1^{2i_1} {\bar{\xi}}_2^{i_2} \cdots \otimes {\bar{\xi}}_1^2 & \mapsto ({\bar{\xi}}_1^a {\bar{\xi}}_2^{2i_1} {\bar{\xi}}_3^{i_2} \cdots) \cdot {\bar{\xi}}_1^2, \\ {\bar{\xi}}_1^{2i_1} {\bar{\xi}}_2^{i_2} \cdots \otimes {\bar{\xi}}_2 & \mapsto ({\bar{\xi}}_1^a {\bar{\xi}}_2^{2i_1} {\bar{\xi}}_3^{i_2} \cdots) \cdot {\bar{\xi}}_2. \end{align*} We abbreviate this by writing \begin{align}\label{eq:basismap} {\bar{\xi}}_1^{2i_1} {\bar{\xi}}_2^{i_2} \cdots \otimes \{ 1, {\bar{\xi}}_1^2, {\bar{\xi}}_2 \}& \mapsto ({\bar{\xi}}_1^a {\bar{\xi}}_2^{2i_1} {\bar{\xi}}_3^{i_2} \cdots) \cdot \{1, {\bar{\xi}}_1^2, {\bar{\xi}}_2 \}. \end{align} In all of the above assignments, the integer $a$ is taken to be $4j - wt({\bar{\xi}}_2^{2i_1} {\bar{\xi}}_3^{i_2} \cdots)$. 
The maps \begin{gather*} \ul{\mr{H}\mathbb{Z}}_{2j} \rightarrow \ul{\mathrm{bo}}_{j-1} \otimes (A(1)//A(0))_* , \\ \ul{\mr{H}\mathbb{Z}}_{2j+1} \rightarrow \ul{\mathrm{bo}}_{j-1} \otimes (A(1)//A(0))_* \end{gather*} are given by $$ {\bar{\xi}}_1^{4i_1+2\epsilon_1} {\bar{\xi}}_2^{2i_2+\epsilon_2} {\bar{\xi}}_3^{i_3} \cdots \mapsto \begin{cases} {\bar{\xi}}_1^{4i_1} {\bar{\xi}}_2^{2i_2} {\bar{\xi}}_3^{i_3} \cdots \otimes {\bar{\xi}}_1^{2\epsilon_1} {\bar{\xi}}_2^{\epsilon_2}, & wt({\bar{\xi}}_1^{4i_1} {\bar{\xi}}_2^{2i_2} {\bar{\xi}}_3^{i_3} \cdots) \le 4j-4, \\ 0, & \mr{otherwise}, \end{cases} $$ where $\epsilon_s \in \{0,1\}$. The proof is now a direct computation. \end{proof} Define $$ \frac{\Ext_{A(1)_*}(X)}{\text{$v_1$-{tor}}} := \mr{Image}\left(\Ext_{A(1)_*}(X) \rightarrow v_1^{-1}\Ext_{A(1)_*}(X)\right). $$ The following lemma follows from a simple induction, using the fact that $\ul{\mr{H}\mathbb{Z}}_1$ is given by the following cell diagram. $$ \xymatrix@R-2em@C-2em{ {\bar{\xi}}_2 & \circ \ar@{-}[d]^{\sq^1} \\ {\bar{\xi}}_1^2 & \circ \ar@/^1pc/@{-}[dd]^{\sq^2} \\ \\ 1 & \circ } $$ \begin{lem}\label{lem:HZ1^i} We have $$ \frac{\Ext_{A(1)_*}(\ul{\mr{H}\mathbb{Z}}_1^{\otimes i})}{\text{$v_1$-tor}} \cong \begin{cases} \Ext(\mathrm{bo}^{\bra{i}}), & \text{$i$ even}, \\ \Ext(\mathrm{bsp}^{\bra{i-1}}), & \text{$i$ odd}. \end{cases} $$ Here, $X^{\bra{i}}$ denotes the $i$th Adams cover. \end{lem} We deduce the following well known result (cf. \cite[Thm.~2.1]{LellmannMahowald}). \begin{prop}\label{lem:HZ_i} For a non-negative integer $j$, denote by $\alpha(j)$ the number of 1's in the dyadic expansion of $j$. Then $$ \frac{\Ext_{A(1)_*}(\ul{\mr{H}\mathbb{Z}}_j)}{\text{$v_1$-tor}} \cong \begin{cases} \Ext(\mathrm{bo}^{\bra{2j-\alpha(j)}}), & \text{$j$ even}, \\ \Ext(\mathrm{bsp}^{\bra{2j-\alpha(j)-1}}), & \text{$j$ odd}. 
\end{cases} $$ \end{prop} \begin{proof} This may be established by induction on $j$ using the short exact sequences of Lemma~\ref{lem:HZSES}, by augmenting Lemma~\ref{lem:HZ1^i} with the following facts. \begin{enumerate} \item All $v_0$-towers in $\Ext_{A(1)_*}(\ul{\mr{H}\mathbb{Z}}_j)$ are $v_1$-periodic. This can be seen because $\Ext_{A(1)_*}(\ul{\mr{H}\mathbb{Z}}_j)$ is a summand of $\Ext(\mathrm{bo} \wedge \mathrm{bo})$, and after inverting $v_0$, the latter has no $v_1$-torsion. Explicitly we have $$ v_0^{-1} \Ext(\mathrm{bo} \wedge \mathrm{bo}) = \mathbb{F}_2[v^{\pm 1}_0, u^2, v^2]. $$ \item We have \begin{align*} \frac{\Ext_{A(1)_*}((A(1)//A(0))_* \otimes \ul{\mathrm{bo}}_j)}{\text{$v_0$-tors}} & \cong \frac{\Ext_{A(0)_*}(\ul{\mathrm{bo}}_j)}{\text{$v_0$-tors}} \\ & \cong \mathbb{F}_2[v_0]\{ 1, \xi_1^4, \ldots, \xi_1^{4j}\}. \end{align*} This follows from the fact that $$ \frac{\Ext_{A(0)_*}(\ul{\mr{H}\mathbb{Z}}_j)}{\text{$v_0$-tors}} \cong \mathbb{F}_2[v_0], $$ which, for instance, can be established by induction using the short exact sequences of Lemma~\ref{lem:HZSES}. \end{enumerate} \end{proof} \subsection{The cooperations of $\mr{KU}$ and $\mr{bu}$} In order to put the ring of cooperations for $\mathrm{bo}$ in the proper setting, we briefly review the story for $\mr{bu}$. We begin by recalling the Adams-Harris determination of $\mr{KU}_*\mr{KU}$ \cite[Sec.~II.13]{Adams}. We have an arithmetic square $$ \xymatrix{ \mr{KU} \wedge \mr{KU} \ar[r] \ar[d] & (\mr{KU} \wedge \mr{KU})^\wedge_2 \ar[d] \\ (\mr{KU} \wedge \mr{KU})_\mathbb{Q} \ar[r] & ((\mr{KU} \wedge \mr{KU})^\wedge_2)_\mathbb{Q}, } $$ which results in a pullback square after applying $\pi_*$ $$ \xymatrix{ \mr{KU}_* \mr{KU} \ar[r] \ar[d] & \Map^c(\mathbb{Z}_2^\times, \pi_* \mr{KU}^\wedge_2) \ar[d] \\ \mathbb{Q}[u^{\pm 1},v^{\pm 1}] \ar[r] & \Map^c(\mathbb{Z}_2^\times, \mathbb{Q}_2[u^{\pm 1}]). 
} $$ Setting $w = v/u$, the bottom map in the above square is given on homogeneous polynomials by $$ f(u,v) = u^n f(1,w) \mapsto \left(\lambda \mapsto u^n f(1,\lambda) \right). $$ We therefore deduce that $\mr{KU}_*\mr{KU} = \mr{KU}_* \otimes_{\mr{KU}_0} \mr{KU}_0\mr{KU}$, and continuity implies that $$ \mr{KU}_0\mr{KU} = \{ f(w) \in \mathbb{Q}[w^{\pm 1}] \: : \: f(k) \in \mathbb{Z}_{(2)}, \text{for all $k \in \mathbb{Z}^{\times}_{(2)}$}\}. $$ Note that we can perform a similar analysis for $\mr{KU}_*\mr{bu}$: since $\mr{bu}$ and $\mr{KU}$ are $K(1)$-locally equivalent, applying $\pi_*$ to the arithmetic square yields a pullback square with the same terms on the right hand edge $$ \xymatrix{ \mr{KU}_* \mr{bu} \ar[r] \ar[d] & \Map^c(\mathbb{Z}_2^\times, \pi_* \mr{KU}^\wedge_2) \ar[d] \\ \mathbb{Q}[u^{\pm 1},v] \ar[r] & \Map^c(\mathbb{Z}_2^\times, \mathbb{Q}_2[u^{\pm 1}]). } $$ Consequently $\mr{KU}_*\mr{bu} = \mr{KU}_* \otimes_{\mr{KU}_0} \mr{KU}_0\mr{bu}$, with $$ \mr{KU}_0\mr{bu} = \{ g(w) \in \mathbb{Q}[w] \: : \: g(k) \in \mathbb{Z}_{(2)}, \text{for all $k \in \mathbb{Z}^{\times}_{(2)}$}\}. $$ Consider the related space of \emph{$2$-local numerical polynomials}: $$ \mr{NumPoly}_{(2)} := \{ h(x) \in \mathbb{Q}[x] \: : \: h(k) \in \mathbb{Z}_{(2)}, \text{for all $k \in \mathbb{Z}_{(2)}$}\}. $$ The theory of numerical polynomials states that $\mr{NumPoly}_{(2)}$ is the free $\mathbb{Z}_{(2)}$-module generated by the basis elements $$ h_n(x) := \binom{x}{n} = \frac{x(x-1)\cdots (x-n+1)}{n !}. $$ We can relate $\mr{KU}_0\mr{bu}$ to $\mr{NumPoly}_{(2)}$ by a change of coordinates. A function on $\mathbb{Z}^\times_{(2)}$ can be regarded as a function on $\mathbb{Z}_{(2)}$ via the change of coordinates \begin{align*} \mathbb{Z}_{(2)} & \xrightarrow{\approx} \mathbb{Z}_{(2)}^\times \\ k & \mapsto 2k+1. 
\end{align*} Observe that \begin{align*} \frac{k(k-1)\cdots (k-n+1)}{n !} & = \frac{2k(2k-2)\cdots (2k-2n+2)}{2^n n !} \\ & = \frac{(2k+1)((2k+1)-3)\cdots ((2k+1)-(2n-1))}{2^n n !}. \end{align*} We deduce that a $\mathbb{Z}_{(2)}$-basis for $\mr{KU}_0\mr{bu}$ is given by \[ g_n(w) = \frac{(w-1)(w-3)\ldots(w-(2n-1))}{2^nn!}. \] (Compare with \cite[Prop.~17.6(i)]{Adams}.) From this we deduce a basis of the image of the map $$ \mr{bu}_*\mr{bu} \hookrightarrow \mr{KU}_* \mr{KU}, $$ as we now explain. In \cite[p. 358]{Adams} it is shown that this image is the ring \[ \frac{\mr{bu}_*\mr{bu}}{\text{$v_1$-tor}} = (\mr{KU}_*\mr{bu} \cap \mathbb{Q}[u,v])_{\mr{AF} \geq 0}, \] where $\mr{AF} \geq 0$ means the elements of Adams filtration $\geq 0$. Since the elements $2$, $u$, and $v$ have Adams filtration $1$, this image is equivalently described as $$ \frac{\mr{bu}_*\mr{bu}}{\text{$v_1$-tor}} = \mr{KU}_* \mr{bu} \cap \mathbb{Z}_{(2)}[u/2, v/2]. $$ To compute a basis for this image we need to calculate the Adams filtration of the elements of the basis $\{ g_n(w) \}$ for $\mr{KU}_0\mr{bu}$. Since $w$ has Adams filtration $0$ we need only compute the $2$-divisibility of the denominators of the functions $g_n(w)$. As usual in this subject, for an integer $k \in \mathbb{Z}$ let $\nu_2(k)$ be the largest power of $2$ that divides $k$ and let $\alpha(k)$ be the number of $1$'s in the binary expansion of $k$. Then \[ \nu_2(n!) = n - \alpha(n) \] and so \[ \mr{AF}(g_n) = \alpha(n) - 2n. 
\] The following is a list of the Adams filtration of the first few basis elements: \begin{center} \begin{tabular}{|l|r|r|} \hline $n$ & binary & $\mr{AF}(g_n)$ \\ \hline $0$ & $0$ & $0$ \\ $1$ & $1$ & $-1$ \\ $2$ & $10$ & $-3$ \\ $3$ & $11$ & $-4$ \\ $4$ & $100$ & $-7$ \\ $5$ & $101$ & $-8$ \\ $6$ & $110$ & $-10$ \\ $7$ & $111$ & $-11$ \\ $8$ & $1000$ & $-15$ \\ \hline \end{tabular} \end{center} It follows (compare with \cite[Prop.~17.6(ii)]{Adams}) that the image of $\mr{bu}_*\mr{bu}$ in $\mr{KU}_* \mr{KU}$ is the free module: $$ \frac{\mr{bu}_*\mr{bu}}{\text{$v_1$-tor}} = \mathbb{Z}_{(2)}\{ 2^{\max(0, 2n-m-\alpha(n))} u^m g_n(w)\: : \: n \ge 0, m \ge n \}. $$ The Adams chart in Figure \ref{fig:bubu} illustrates how the description of $\mr{bu}_*\mr{bu}$ given above along with the Mahler basis can be used to identify $\mr{bu}_*\mr{bu}$ as a $\mr{bu}_*$-module inside of $\mr{KU}_*\mr{KU}$. \begin{figure} \centering \caption{$\mr{bu}_*\mr{bu}$}\label{fig:bubu} \includegraphics[height = .5\textheight, trim = 1cm 13cm 4cm 2cm]{bubucolor.pdf} \end{figure} \subsection{The cooperations of $\mr{KO}$ and $\mathrm{bo}$} Adams and Switzer computed $\mr{KO}_*\mr{KO}$ along similar lines \cite[Sec.~II.17]{Adams}. There is an arithmetic square $$ \xymatrix{ \mr{KO} \wedge \mr{KO} \ar[r] \ar[d] & (\mr{KO} \wedge \mr{KO})^\wedge_2 \ar[d] \\ (\mr{KO} \wedge \mr{KO})_\mathbb{Q} \ar[r] & ((\mr{KO} \wedge \mr{KO})^\wedge_2)_\mathbb{Q}, } $$ which results in a pullback when applying $\pi_*$ $$ \xymatrix{ \mr{KO}_* \mr{KO} \ar[r] \ar[d] & \Map^c(\mathbb{Z}_2^\times/\{\pm 1\}, \pi_* \mr{KO}^\wedge_2) \ar[d] \\ \mathbb{Q}[u^{\pm 2},v^{\pm 2}] \ar[r] & \Map^c(\mathbb{Z}_2^\times/\{ \pm 1\}, \mathbb{Q}_2[u^{\pm 2}]). } $$ (One can use the fact that $\mr{KU}^\wedge_2$ is a $K(1)$-local $C_2$-Galois extension of $\mr{KO}^\wedge_2$ to identify the upper right hand corner of the above pullback.) 
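Before continuing, we note that the elementary 2-adic facts underlying the $\mr{bu}$ computation above ($\nu_2(n!) = n - \alpha(n)$, the $2$-local integrality of $g_n$ at odd arguments, and the table of $\mr{AF}(g_n) = \alpha(n) - 2n$) are easy to spot-check numerically. The following is a standalone illustrative sketch, not part of the original argument; the helper names `nu2`, `alpha`, and `g` are ad hoc.

```python
from fractions import Fraction
from math import factorial

def nu2(n):
    # 2-adic valuation of a nonzero integer
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def alpha(n):
    # number of 1's in the binary expansion of n
    return bin(n).count("1")

def g(n, w):
    # g_n(w) = (w-1)(w-3)...(w-(2n-1)) / (2^n n!)
    num = 1
    for i in range(n):
        num *= w - (2 * i + 1)
    return Fraction(num, 2 ** n * factorial(n))

# nu_2(n!) = n - alpha(n)
for n in range(1, 64):
    assert nu2(factorial(n)) == n - alpha(n)

# g_n is 2-locally integral on odd integers
# (indeed g_n(2k+1) = binom(k, n) is an integer)
for n in range(6):
    for k in range(-6, 7):
        assert g(n, 2 * k + 1).denominator % 2 == 1

# AF(g_n) = alpha(n) - 2n reproduces the table of values
assert [alpha(n) - 2 * n for n in range(9)] == [0, -1, -3, -4, -7, -8, -10, -11, -15]
```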
Continuing to let $w = v/u$, the bottom map in the above square is given by $$ f(u^2,v^2) = u^{2n} f(1,w^2) \mapsto \left([\lambda] \mapsto u^{2n} f(1,\lambda^2) \right). $$ We therefore deduce that $\mr{KO}_*\mr{KO} = \mr{KO}_* \otimes_{\mr{KO}_0} \mr{KO}_0\mr{KO}$, with $$ \mr{KO}_0\mr{KO} = \{ f(w^2) \in \mathbb{Q}[w^{\pm 2}] \: : \: f(\lambda^2) \in \mathbb{Z}_{2}, \text{for all $[\lambda] \in \mathbb{Z}_2^{\times}/\{\pm 1\}$}\}. $$ Again, $\mr{KO}_*\mathrm{bo}$ is similarly determined: since $\mathrm{bo}$ and $\mr{KO}$ are $K(1)$-locally equivalent, applying $\pi_*$ to the arithmetic square yields a pullback square with the same terms on the right hand edge: $$ \xymatrix{ \mr{KO}_* \mathrm{bo} \ar[r] \ar[d] & \Map^c(\mathbb{Z}_2^\times/\{ \pm 1\}, \pi_* \mr{KO}^\wedge_2) \ar[d] \\ \mathbb{Q}[u^{\pm 2},v^2] \ar[r] & \Map^c(\mathbb{Z}_2^\times/\{\pm 1\}, \mathbb{Q}_2[u^{\pm 2}]). } $$ We therefore deduce that $\mr{KO}_*\mathrm{bo} = \mr{KO}_* \otimes_{\mr{KO}_0} \mr{KO}_0\mathrm{bo}$, with $$ \mr{KO}_0\mathrm{bo} = \{ f(w^2) \in \mathbb{Q}[w^2] \: : \: f(\lambda^2) \in \mathbb{Z}_{2}, \text{for all $[\lambda] \in \mathbb{Z}^{\times}_{2}/\{\pm 1 \}$}\}. $$ To produce a basis of this space of functions we use the $q$-Mahler bases developed in \cite{Conrad}, which we promptly recall. First note that there is an exponential isomorphism \[ \mathbb{Z}_2 \lra{\cong} \mathbb{Z}_{2}^{\times}/\{\pm 1\}:k \mapsto [3^k]. \] Taking $w = 3^k$, we have $w^2 = 9^k$, or in other words, the functions $f(w^2)$ that we are concerned with can be regarded as functions on $2\mathbb{Z}_2$. They take the form \[ f(9^k):2\mathbb{Z}_2 \cong 1+8\mathbb{Z}_2 \lra{} \mathbb{Z}_2, \] where $1+8\mathbb{Z}_2 \subset \mathbb{Z}_{2}^{\times}$ is the image of $2\mathbb{Z}_2$ under the isomorphism given by $3^k$. To obtain a $q$-Mahler basis as in \cite{Conrad} with $q = 9$ it is important that $\nu_2(9-1)>0$. 
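The exponential isomorphism $\mathbb{Z}_2 \cong \mathbb{Z}_2^\times/\{\pm 1\}$, $k \mapsto [3^k]$, can be verified at each finite level $\mathbb{Z}/2^n$: the multiplicative order of $3$ modulo $2^n$ is $2^{n-2}$ (for $n \ge 3$), and the classes $\{\pm 3^k\}$ exhaust the odd residues. A standalone numerical sketch, with ad hoc helper names, not part of the original text:

```python
def order_of_3(n):
    # multiplicative order of 3 in (Z/2^n)^x
    mod, order, x = 2 ** n, 1, 3
    while x % mod != 1:
        x, order = (x * 3) % mod, order + 1
    return order

def plus_minus_powers_of_3(n):
    # the subset {±3^k} of residues mod 2^n
    mod = 2 ** n
    pows = {pow(3, k, mod) for k in range(order_of_3(n))}
    return pows | {(-p) % mod for p in pows}

for n in range(3, 12):
    # 3 topologically generates Z_2^x/{±1}: order 2^(n-2) at level 2^n ...
    assert order_of_3(n) == 2 ** (n - 2)
    # ... and {±3^k} covers every odd residue mod 2^n
    assert plus_minus_powers_of_3(n) == {u for u in range(2 ** n) if u % 2 == 1}
```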
The $q$-Mahler basis is a basis for numerical polynomials with domain restricted to $2\mathbb{Z}_2$. In the notation of \cite{Conrad} we have that \[ f(9^k) = \sum_{n\geq 0}c_n \binom{k}{n}_9, \] where $c_n \in \mathbb{Z}_{(2)}$ are coefficients and \[ \binom{k}{n}_9 = \frac{(9^k-1)(9^k-9)\cdots (9^k - 9^{n-1})}{(9^n-1)(9^n-9)\cdots (9^n - 9^{n-1})}. \] Let us set \begin{align}\label{eq:f_n(w^2)} f_n(w^2) = \frac{(w^2-1)(w^2-9)\cdots (w^2 - 9^{n-1})}{(9^n-1)(9^n-9)\cdots (9^n - 9^{n-1})}; \end{align} then any $f\in \mr{KO}_0\mathrm{bo}$ is given by \[ f(w^2) = \sum_{n}c_n f_n(w^2), \qquad c_n \in \mathbb{Z}_{(2)}, \] i.e. a basis for $\mr{KO}_0 \mathrm{bo}$ is given by the set $\{ f_n (w^2) \}_{n \ge 0}$. As in the $\mr{KU}$-case, it turns out that the image of $\mathrm{bo}_* \mathrm{bo}$ in $\mr{KO}_* \mr{KO}$ is given by $$ \frac{\mathrm{bo}_*\mathrm{bo}}{\text{$v_1$-tor}} = (\mr{KO}_*\mathrm{bo} \cap \mathbb{Q}[u^2, v^2])_{\mr{AF} \ge 0}. $$ In order to compute a basis for this we once again need to know the Adams filtration of $f_n$. One can show that \begin{eqnarray*} \nu_2((9^n-1)(9^n-9)\cdots (9^n - 9^{n-1})) & = & \nu_2(n!)+3n \\ & = & 4n - \alpha(n). \end{eqnarray*} It follows that we have \begin{align*} \frac{\mathrm{bo}_*\mathrm{bo}}{\text{$v_1$-tor}} = & \: \mathbb{Z}_{(2)}\{ 2^{\max(0, 4n-2m-\alpha(n))} u^{2m} f_n(w^2)\: : \: n \ge 0, \: m \ge n, \: m \equiv 0 \mod 2 \} \\ & \oplus \mathbb{Z}_{(2)}\{ 2^{\max(0, 4n-2m-1-\alpha(n))} 2u^{2m} f_n(w^2)\: : \: n \ge 0, \: m \ge n, \: m \equiv 0 \mod 2 \} \\ & \oplus \mathbb{Z}/2\left\{ u^{2m} f_n(w^2) \eta^c \: : \: \begin{array}{l} n \ge 0, \: m \ge n, \: m \equiv 0 \mod 2, \\ c \in \{1,2\}, \: \alpha(n)-4n+2m+c \ge 0 \end{array} \right\}. 
\end{align*} Here is a list of the Adams filtration of the first several elements in the $q$-Mahler basis: \begin{center} \begin{tabular}{|l|r|r|} \hline $n$ & $f_n$ in terms of $g_i$ & $\mr{AF}(f_n)$ \\ \hline $0$ & $g_0$ & $0$ \\ $1$ & $g_2 + g_1$ & $-3$ \\ $2$ & $\frac{1}{15}g_4 + \frac{2}{15}g_3 + \frac{1}{15} g_2$ & $-7$ \\ \hline \end{tabular} \end{center} With this information we can now give the Adams chart (Figure \ref{fig:bobo}) of $\mathrm{bo}_*\mathrm{bo}$ modulo $v_1$-torsion. \begin{figure} \centering \caption{$\mathrm{bo}_*\mathrm{bo}$}\label{fig:bobo} \includegraphics[height = .3\textheight, trim = 5cm 17.5cm 3cm 2cm]{bobocolor.pdf} \end{figure} \subsection{Calculation of the image of $\mathrm{bo}_* \mr{H}\mathbb{Z}_j$ in $\mr{KO}_* \mr{KO}$} We now compute the image (on the level of Adams $E_\infty$-terms) of the composite $$ \mathrm{bo}_*\mr{H}\mathbb{Z}_j \rightarrow \mathrm{bo}_*\mathrm{bo} \rightarrow \mr{KO}_*\mr{KO}. $$ Since $v_1^{-1}\mathrm{bo}_*\Sigma^{4j}\mr{H}\mathbb{Z}_j \cong \mr{KO}_*$, it suffices to determine the image of the generator $$ e_{4j} \in \mathrm{bo}_{4j}(\Sigma^{4j}\mr{H}\mathbb{Z}_j). $$ Because the maps $$ \mathrm{bo} \wedge \Sigma^{4j} \mr{H}\mathbb{Z}_j \rightarrow \mathrm{bo} \wedge \mathrm{bo} $$ are constructed to be $\mathrm{bo}$-module maps, everything else is determined by $2$ and $v_1$, i.e. $u$-multiplication. Consider the commutative diagram induced by the maps $\mathrm{bo} \rightarrow \mr{bu}$, $\mr{bu} \rightarrow \mr{H}\FF_2$, and $\mr{BP} \rightarrow \mr{bu}$ $$ \xymatrix{ \mathrm{bo} \wedge \Sigma^{4j} \mr{H}\mathbb{Z}_j \ar[r] \ar[d] & \mathrm{bo} \wedge \mathrm{bo} \ar[r] \ar[d] & \mr{bu} \wedge \mr{bu} \ar[d] & \mr{BP} \wedge \mr{BP} \ar[ld] \ar[l] \\ \mr{H}\FF_2 \wedge \Sigma^{4j} \mr{H}\mathbb{Z}_j \ar[r] & \mr{H}\FF_2 \wedge \mathrm{bo} \ar[r] & \mr{H}\FF_2 \wedge \mr{H}\FF_2. 
} $$ On the level of homotopy groups the bottom row of the above diagram takes the form $$ \mathbb{F}_2\{\bar{\xi}_1^{4j}, \ldots \} \hookrightarrow \mathbb{F}_2[\bar{\xi}_1^4, \bar{\xi}_2^2, \bar{\xi}_3, \ldots ] \hookrightarrow \mathbb{F}_2[\bar{\xi}_1, \bar{\xi}_2, \bar{\xi}_3, \ldots ]. $$ Since we have \begin{align*} \mathrm{bo}_{*}\Sigma^{4j}\mr{H}\mathbb{Z}_j & \rightarrow (\mr{H}\FF_2)_*\Sigma^{4j}\mr{H}\mathbb{Z}_j \\ e_{4j} & \mapsto \bar{\xi}_1^{4j}, \end{align*} it suffices to find an element $b_j \in \mathrm{bo}_{4j}\mathrm{bo}$ such that \begin{align*} \mathrm{bo}_{*}\mathrm{bo} & \rightarrow (\mr{H}\FF_2)_*\mathrm{bo} \\ b_j & \mapsto \bar{\xi}_1^{4j}. \end{align*} Clearly we can take $b_0 = 1 \in \mathrm{bo}_0\mathrm{bo}$. Note that we have \begin{align*} \mr{BP}_{*}\mr{BP} & \rightarrow (\mr{H}\FF_2)_*\mr{H}\FF_2 \\ t_1 & \mapsto \bar{\xi}_1^{2}. \end{align*} From the equation $$ \eta_R(v_1) = v_1 + 2t_1 $$ and the fact that the map $\mr{BP}_*\mr{BP} \to \mr{bu}_*\mr{bu}$ is one of Hopf algebroids, we deduce that we have \begin{align*} \mr{BP}_{*}\mr{BP} & \rightarrow \mr{bu}_*\mr{bu} \\ t_1 & \mapsto \frac{v-u}{2} = ug_1(w). \end{align*} Hence we get that \begin{align*} \mr{bu}_{*}\mr{bu} & \rightarrow (\mr{H}\FF_2)_*\mr{H}\FF_2 \\ \frac{v-u}{2} & \mapsto \bar{\xi}_1^2 \end{align*} and thus \begin{align*} \mr{bu}_{*}\mr{bu} & \rightarrow (\mr{H}\FF_2)_*\mr{H}\FF_2 \\ \left(\frac{v^2-u^2}{4}\right)^{j} & \mapsto \bar{\xi}_1^{4j}. \end{align*} Since $$ 2^{2j - \alpha(j)}u^{2j}f_j(w^2) = \left(\frac{v^2-u^2}{4}\right)^{j} \quad \text{modulo terms of higher $\mr{AF}$} $$ by \eqref{eq:f_n(w^2)} we see that we have \begin{align*} \mathrm{bo}_{*}\mathrm{bo} & \rightarrow (\mr{H}\FF_2)_*\mathrm{bo} \\ 2^{2j - \alpha(j)}u^{2j}f_j(w^2) & \mapsto \bar{\xi}_1^{4j}, \end{align*} so that we can take $$ b_j = 2^{2j - \alpha(j)}u^{2j}f_j(w^2). $$ We have therefore arrived at the following well-known theorem (see \cite[Cor.~2.5(a)]{LellmannMahowald}). 
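Two elementary arithmetic facts drive the identification of $b_j$ above: the valuation $\nu_2((9^n-1)(9^n-9)\cdots(9^n-9^{n-1})) = 4n - \alpha(n)$, and the Mahler-type triangularity of the basis $\{f_n\}$, namely $f_n(9^k) = 0$ for $0 \le k < n$ and $f_n(9^n) = 1$. Both, together with the first nontrivial row $f_1 = g_2 + g_1$ of the change-of-basis table, admit a quick numerical spot-check. The following is a standalone sketch with ad hoc helper names, not part of the original argument:

```python
from fractions import Fraction

def nu2(n):
    # 2-adic valuation of a nonzero integer
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def alpha(n):
    # number of 1's in the binary expansion of n
    return bin(n).count("1")

def f(n, w2):
    # f_n(w^2) = prod_{i<n}(w^2 - 9^i) / prod_{i<n}(9^n - 9^i)
    num, den = 1, 1
    for i in range(n):
        num *= w2 - 9 ** i
        den *= 9 ** n - 9 ** i
    return Fraction(num, den)

def g(n, w):
    # g_n(w) = (w-1)(w-3)...(w-(2n-1)) / (2^n n!)
    num, fact = 1, 1
    for i in range(n):
        num *= w - (2 * i + 1)
        fact *= i + 1
    return Fraction(num, 2 ** n * fact)

# nu_2 of the q-factorial denominator is 4n - alpha(n)
for n in range(1, 16):
    den = 1
    for i in range(n):
        den *= 9 ** n - 9 ** i
    assert nu2(den) == 4 * n - alpha(n)

# Mahler triangularity: f_n vanishes at 9^k for k < n and equals 1 at 9^n
for n in range(1, 7):
    assert all(f(n, 9 ** k) == 0 for k in range(n))
    assert f(n, 9 ** n) == 1

# first nontrivial row of the change-of-basis table: f_1 = g_2 + g_1
for w in range(-9, 10, 2):
    assert f(1, w * w) == g(2, w) + g(1, w)
```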
\begin{thm}\label{thm:HZjImage} The image of the map $$ \frac{\Ext(\mathrm{bo} \wedge \Sigma^{4j} \ul{\mr{H}\mathbb{Z}}_j)}{\text{$v_1$-tors}} \rightarrow \frac{\Ext(\mathrm{bo} \wedge \mathrm{bo})}{\text{$v_1$-tors}} $$ is the submodule \begin{align*} & \mathbb{F}_2[v_0]\{ v_0^{\max(0, 4j-2m-\alpha(j))} u^{2m} f_j(w^2)\: : \: m \ge j, \: m \equiv 0 \mod 2 \} \\ & \oplus \mathbb{F}_2[v_0] \{ v_0^{\max(0, 4j-2m-1-\alpha(j))} v_0 u^{2m} f_j(w^2)\: : \: m \ge j, \: m \equiv 0 \mod 2 \} \\ & \oplus \mathbb{F}_2\left\{ u^{2m} f_j(w^2) \eta^c \: : \: \begin{array}{l} m \ge j, \: m \equiv 0 \mod 2, \\ c \in \{1,2\}, \: \alpha(j)-4j+2m+c \ge 0 \end{array} \right\}. \end{align*} \end{thm} \begin{rmk} These are the colors in Figure \ref{fig:bobo}. \end{rmk} \subsection{The embedding into $\prod \mr{KO}$} The final step is to consider the maps of $\mr{KO}$-algebras given by the composite $$ \td{\psi}^{3^k}: \mr{KO} \wedge \mr{KO} \xrightarrow{1 \wedge \psi^{3^k}} \mr{KO} \wedge \mr{KO} \xrightarrow{\mu} \mr{KO}. $$ Together, they result in a map of $\mr{KO}$-algebras $$ \mr{KO} \wedge \mr{KO} \xrightarrow{\prod \td{\psi}^{3^k}} \prod_{k \in \mathbb{Z}} \mr{KO}. $$ \begin{rmk} The map above has a modular interpretation. Let $\mathcal{M}_{fg}$ denote the moduli stack of formal groups, and let \[ (\Spec\mathbb{Z})//C_2 \rightarrow \mathcal{M}_{fg} \] classify $\hat{\mathbb{G}}_m$ with the action of $[-1]$. This map equips $(\Spec \mathbb{Z})//C_2$ with a sheaf of $\Einf$-rings, such that the derived global sections are $\mr{KO}$; the reader is referred to the appendix of \cite{LawsonNaumann2} for details. The spectrum $\mr{KO} \wedge \mr{KO}$ is the global sections of the pullback \[ \left(\Spec \mathbb{Z} \times_{\mathcal{M}_{fg}} \Spec \mathbb{Z}\right)//(C_2 \times C_2). 
\] For $k \in \mathbb{Z}$ we may consider the map of stacks \[ (\Spec \mathbb{Z})//C_2 \rightarrow \left(\Spec\mathbb{Z} \times_{\mathcal{M}_{fg}} \Spec\mathbb{Z} \right)//(C_2 \times C_2) \] sending $\hat{\mathbb{G}}_m$ to the object $[3^k]:\hat{\mathbb{G}}_m \rightarrow \hat{\mathbb{G}}_m$. As $k$ varies this induces the map $\prod \td{\psi}^{3^k}$. \end{rmk} \begin{prop} The map $$ \mr{KO}_* \mr{KO} \xrightarrow{\prod \td{\psi}^{3^k}} \prod_{k \in \mathbb{Z}} \mr{KO}_* $$ is an injection. \end{prop} \begin{proof} Consider the diagram $$ \xymatrix{ \mr{KO}_* \mr{KO} \ar[r]^{\prod \td{\psi}^{3^k}} \ar[d] & \prod_{k \in \mathbb{Z}} \mr{KO}_* \ar[d] \\ (\mr{KO}_* \mr{KO})^{\wedge}_{2} \ar[r]^{\prod \td{\psi}^{3^k}} \ar@{=}[d] & \prod_{k \in \mathbb{Z}} (\mr{KO}_*)^{\wedge}_2 \ar@{=}[d] \\ \Map^c(\mathbb{Z}_2^\times/\{\pm 1\}, (\mr{KO}_*)^\wedge_2) \ar[r] & \Map(3^\mathbb{Z}, (\mr{KO}_*)^\wedge_2), } $$ where the bottom horizontal map is the map induced from the inclusion of groups $$ 3^\mathbb{Z} \hookrightarrow \mathbb{Z}_2^\times/\{ \pm 1 \}. $$ The vertical maps are injections, since $$ \bigcap_i 2^i \mr{KO}_*\mr{KO} = 0, \quad \rm{and} \quad \bigcap_i 2^i \mr{KO}_* = 0. $$ The bottom horizontal map is an injection since $3^\mathbb{Z}$ is dense in $\mathbb{Z}_2^\times/\{\pm 1\}$. The result follows. \end{proof} We investigated the Brown-Gitler wedge decomposition $$ \bigvee_j \mathrm{bo} \wedge \Sigma^{4j} \mr{H}\mathbb{Z}_j \xrightarrow{\simeq} \mathrm{bo} \wedge \mathrm{bo}, $$ and we now end this section by explaining how the map $$ \mr{KO} \wedge \mr{KO} \xrightarrow{\prod \td{\psi}^{3^k}} \prod_{k \in \mathbb{Z}} \mr{KO} $$ is compatible with the above decomposition. \begin{prop} The composites $$ \mathrm{bo} \wedge \mr{H}\mathbb{Z}_j \rightarrow \mathrm{bo} \wedge \mathrm{bo} \rightarrow \mr{KO} \wedge \mr{KO} \xrightarrow{\td{\psi}^{3^j}} \mr{KO} $$ are equivalences after inverting $v_1$. 
\end{prop} \begin{proof} This follows from the fact that $f_j(9^j) = 1$. \end{proof} \begin{rmk} In fact, the ``matrix'' representing the composite $$ \bigvee_j \mathrm{bo} \wedge \mr{H}\mathbb{Z}_j \rightarrow \mathrm{bo} \wedge \mathrm{bo} \rightarrow \mr{KO} \wedge \mr{KO} \xrightarrow{\prod \td{\psi}^{3^k}} \prod_{k \in \mathbb{Z}} \mr{KO} $$ is upper triangular, as we have $$ f_j(9^k) = \begin{cases} 0, & k < j, \\ 1, & k = j. \end{cases} $$ \end{rmk} \subsection{Faithfulness of $\psi$} In this section we will prove the following theorem. \begin{thm}\label{thm:faithful} The map on homotopy $$ \psi_* : \mathrm{TMF}_*\mathrm{TMF} \rightarrow \prod\limits_{i\in \mathbb{Z},j\ge 0} \pi_* \mathrm{TMF}_0(3^j) \times \pi_*\mathrm{TMF}_0(5^j) $$ induced by the map $\psi$ defined in the last section is injective. \end{thm} Theorem~\ref{thm:faithful} will be proven in two steps. Consider the following diagram \begin{equation}\label{diag:faithful} \xymatrix{ \mathrm{TMF}_*\mathrm{TMF} \ar[r]^-{\psi_*} \ar[d] & \prod\limits_{i\in \mathbb{Z},j\ge 0} \pi_* \mathrm{TMF}_0(3^j) \times \pi_*\mathrm{TMF}_0(5^j) \ar[d] \\ \pi_*(\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} \ar[r]^-{(\psi_{K(2)})_*} & \prod\limits_{i\in \mathbb{Z},j\ge 0} \pi_* \mathrm{TMF}_0(3^j)_{K(2)} \times \pi_*\mathrm{TMF}_0(5^j)_{K(2)} } \end{equation} where the vertical maps are the localization maps. We will first argue that the left vertical map in (\ref{diag:faithful}) is injective, and we will observe that the same argument shows the right hand vertical map is injective. Secondly, we will show that the bottom horizontal map of (\ref{diag:faithful}) is injective. Theorem~\ref{thm:faithful} then follows from the commutativity of (\ref{diag:faithful}) and these injectivity results. \begin{lem} The localization map $$ \mathrm{TMF}_*\mathrm{TMF} \rightarrow \mathrm{TMF}_*\mathrm{TMF}_{K(2)}$$ is injective. 
\end{lem} \begin{proof} Since $\mathrm{TMF} \wedge \mathrm{TMF} $ is $E(2)$-local, we have $$ (\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} \simeq \holim_{i,j} \mathrm{TMF} \wedge \mathrm{TMF} \wedge M(2^i, v_1^j) $$ where $(i,j)$ above run over a suitable cofinal range of $\mathbb{N}^+ \times \mathbb{N}^+$. In order to conclude that there is an isomorphism $$ \pi_*(\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} \cong \mathrm{TMF}_*\mathrm{TMF}^{\wedge}_{(2,c_4)} $$ and for the map $$ \mathrm{TMF}_*\mathrm{TMF} \rightarrow \mathrm{TMF}_*\mathrm{TMF}^{\wedge}_{(2,c_4)} $$ to be injective we must show that no element of $\mathrm{TMF}_*\mathrm{TMF} $ is infinitely divisible by elements of the ideal $(2,c_4)$. Consider the Adams-Novikov spectral sequence for $\mathrm{TMF}_* \mathrm{TMF} $. This spectral sequence converges since $\mathrm{TMF} \wedge \mathrm{TMF} $ is $E(2)$-local \cite[Thm.~5.3]{HoveySadofsky}. The $E_1$-term of this spectral sequence is easily seen to not be infinitely divisible by elements of the ideal $(2,c_4)$. Therefore, any infinite divisibility in $\mathrm{TMF}_*\mathrm{TMF} $ would have to occur through infinitely many hidden extensions. This would result in elements in negative Adams-Novikov filtration, which is impossible. \end{proof} The same argument shows that the various maps $$ \pi_*\mathrm{TMF}_0(N) \rightarrow \pi_*\mathrm{TMF}_0(N)_{K(2)} $$ are injections. The only remaining step to proving Theorem~\ref{thm:faithful} is to show the bottom arrow of Diagram~(\ref{diag:faithful}) is an injection. This is the heart of the matter. \begin{lem}\label{lem:faithful} The map $$ \pi_*(\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} \xrightarrow{(\psi_{K(2)})_*} \prod\limits_{i\in \mathbb{Z},j\ge 0} \pi_* \mathrm{TMF}_0(3^j)_{K(2)} \times \pi_*\mathrm{TMF}_0(5^j)_{K(2)} $$ is an injection. \end{lem} In order to prove this lemma, we will need the following technical observation. 
\begin{lem}\label{lem:open} Suppose that $G$ is a profinite group, $H$ is a finite subgroup of $G$, and $U$ is an open subgroup of $G$ containing $H$. Then there is a finite set of open subgroups $U_k \le U$ which contain $H$, and a corresponding finite set $\{y_k\}$ of elements in $G$ such that \begin{enumerate} \item $\{y_k U_{k} \}$ forms an open cover of $G$, and \item $H \cap y_kU_ky_{k}^{-1} = H \cap y_kHy_k^{-1}$. \end{enumerate} \end{lem} \begin{proof} We have $$ H = \bigcap_{H \le V \le_o U }V $$ (where we use $\le_o$ to denote ``open subgroup''). Therefore, for each $y \in G$, we have $$ H \cap yHy^{-1} = \bigcap_{H \le V \le_o U} H \cap yVy^{-1}. $$ Therefore, for each $z \in H$ with $z \not\in yHy^{-1}$, there must be a subgroup $H \le V_z \le_o U$ so that $z \not\in yV_zy^{-1}$. Define $$ U_y = \bigcap_z V_z. $$ (If the set of all such $z$ is empty, define $U_y = U$.) Since $H$ is finite, this is a finite intersection, hence $U_y$ is open. Note that $U_y$ has the property that $H \le U_y \le_o U$ and $$ H \cap yU_y y^{-1} = H \cap yHy^{-1}. $$ Consider the cover $\{ yU_y \}_y$ where $y$ ranges over the elements of $G$. Since $G$ is compact, there is a finite subcover $\{y_k U_{y_k}\}$. We may therefore take $U_k = U_{y_k}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:faithful}] Let $\mathbb{S}_2$ denote the second Morava stabilizer group, and let $\bar{E}_2$ denote the version of Morava $E$-theory associated to a height $2$ formal group over $\bar{\mathbb{F}}_2$. The spectrum $\bar{E}_2$ admits an action by the group $\mathbb{S}_2 \rtimes Gal$ where $Gal$ is the Galois group of $\bar{\mathbb{F}}_2$ over $\mathbb{F}_2$, and we have $$ \mathrm{TMF}_{K(2)} \simeq \left( \bar{E}_2^{hG_{24}} \right)^{hGal} $$ where $G_{24}$ is the group of automorphisms of the (unique) supersingular elliptic curve $C$ over $\bar{\mathbb{F}}_2$. 
In \cite{GHMR}, it is shown that this homotopy fixed point description of $\mathrm{TMF}_{K(2)}$ gives rise to the following description of $(\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)}$: \begin{align*} (\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} & \simeq \left( \Map^c(\mathbb{S}_2/G_{24}, \bar{E}_2)^{hG_{24}} \right)^{hGal}. \end{align*} There is a subtlety hidden in the above notation: the Galois group is acting on the continuous mapping spectrum with the conjugation action, where it acts on the source through the left action on $$ (\mathbb{S}_2 \rtimes Gal)/(G_{24} \rtimes Gal) \cong \mathbb{S}_2/G_{24}. $$ For $N$ coprime to $2$, let $\mathcal{M}^{ss}_0(N)(\bar{\mathbb{F}}_2)$ denote the groupoid whose objects are pairs $(C,H)$ where $C$ is a supersingular elliptic curve over $\bar{\mathbb{F}}_2$ and $H \le C(\bar{\mathbb{F}}_2)$ is a cyclic subgroup of order $N$, and whose morphisms are isomorphisms of elliptic curves which preserve the subgroup. Then we have $$ \mathrm{TMF}_0(N)_{K(2)} \simeq \left( \prod_{[C,H] \in \mathcal{M}^{ss}_0(N)(\bar{\mathbb{F}}_2)} \bar{E}^{h\aut(C,H)}_2 \right)^{hGal}. $$ For a prime $\ell \ne 2$, let $\mr{Isog}^{ss}_\ell(\bar{\mathbb{F}}_2)$ denote the groupoid whose objects are quasi-isogenies $$ \phi: C_1 \rightarrow C_2 $$ with $C_1, C_2$ supersingular curves over $\bar{\mathbb{F}}_2$, and whose morphisms from $\phi$ to $\phi'$ are pairs of isomorphisms $(\alpha_1, \alpha_2)$ making the following square commute $$ \xymatrix{ C_1 \ar[r]^{\phi} \ar[d]_{\alpha_1} & C_2 \ar[d]^{\alpha_2} \\ C_1' \ar[r]_{\phi'} & C_2' } $$ It is easy to see that there is an equivalence of groupoids $$ \coprod_{i \in \mathbb{Z}, j \ge 0} \mathcal{M}^{ss}_0(\ell^j)(\bar{\mathbb{F}}_2) \xrightarrow{\simeq} \mr{Isog}^{ss}_\ell(\bar{\mathbb{F}}_2) $$ given by sending a pair $(C,H)$ to the quasi-isogeny $\phi$ given by the composite $$ \phi: C \xrightarrow{[\ell^i]} C \rightarrow C/H. 
$$ However, since there is a unique supersingular elliptic curve $C$ over $\bar{\mathbb{F}}_2$, the category $\mr{Isog}^{ss}_\ell(\bar{\mathbb{F}}_2)$ admits the following alternative description (we actually only need that $C$ is unique up to $\ell$-power isogeny). Let $\Gamma_\ell$ denote the group of quasi-isogenies $\phi: C \rightarrow C$ whose order is a power of $\ell$. There is an inclusion $$ \Gamma_\ell \hookrightarrow \mathbb{S}_2 $$ given by sending a quasi-isogeny $\phi$ to the induced automorphism $\widehat{\phi}$ of the formal group $\widehat{C}$. Then there is a bijection between the isomorphism classes of objects of $\mr{Isog}_\ell^{ss}(\bar{\mathbb{F}}_2)$ and the double cosets $$ G_{24} \backslash \Gamma_\ell / G_{24}. $$ Moreover, given an element $[\phi] \in G_{24} \backslash \Gamma_\ell / G_{24}$, the automorphism group of the associated object $\phi$ in $\mr{Isog}^{ss}_{\ell}(\bar{\mathbb{F}}_2)$ is the group $$ G_{24} \cap \phi G_{24} \phi^{-1} \subset \Gamma_\ell. 
$$ Putting this all together, we have \begin{align*} \left( \Map(\Gamma_\ell/G_{24}, \bar{E}_2)^{hG_{24}} \right)^{hGal} & \simeq \left( \prod_{[\phi] \in G_{24} \backslash \Gamma_\ell / G_{24}} \bar{E}_2^{hG_{24} \cap \phi G_{24} \phi^{-1}} \right)^{hGal} \\ & \simeq \left( \prod_{[\phi] \in \mr{Isog}^{ss}_\ell(\bar{\mathbb{F}}_2)} \bar{E}_2^{h\aut(\phi)} \right)^{hGal} \\ & \simeq \left( \prod_{i \in \mathbb{Z}, j \ge 0} \prod_{[(C,H)] \in \mathcal{M}^{ss}_0(\ell^j)(\bar{\mathbb{F}}_2)} \bar{E}_2^{h\aut(C,H)} \right)^{hGal} \\ & \simeq \prod_{i \in \mathbb{Z}, j \ge 0} \mathrm{TMF}_0(\ell^j)_{K(2)} \end{align*} and under the equivalences described above, the map $$ \psi_{K(2)}: (\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} \rightarrow \prod\limits_{i\in \mathbb{Z},j\ge 0} \mathrm{TMF}_0(3^j)_{K(2)} \times \mathrm{TMF}_0(5^j)_{K(2)} $$ can be identified with the map \begin{equation}\label{eq:dense} \left( \Map^c(\mathbb{S}_2/G_{24}, \bar{E}_2)^{hG_{24}} \right)^{hGal} \rightarrow \left( \Map(\Gamma_3/G_{24} \amalg \Gamma_5/G_{24}, \bar{E}_2)^{hG_{24}} \right)^{hGal} \end{equation} induced by the map \begin{equation}\label{eq:densecoprod} \Gamma_3/G_{24} \amalg \Gamma_5/G_{24} \rightarrow \mathbb{S}_2/G_{24}. \end{equation} In \cite{BehrensLawson} it is shown that the image of the above map is dense. Intuitively, one would like to say that this density implies that a continuous function on $\mathbb{S}_2/G_{24}$ is determined by its restrictions to $\Gamma_3/G_{24}$ and $\Gamma_5/G_{24}$, and this should imply that the map (\ref{eq:dense}) is injective on homotopy. The difficulty lies in making this argument precise. Before we make the argument precise (which is rather technical) we pause to give the reader an idea of the intuition behind the argument. 
An element in $$ \pi_* \Map^c(\mathbb{S}_2/G_{24}, \bar{E}_2)^{hG_{24}} $$ is something like a section of a sheaf over $G_{24} \backslash \mathbb{S}_2/G_{24}$ whose stalk over $[x] \in G_{24}\backslash \mathbb{S}_2/ G_{24}$ is $$\pi_* \bar{E}^{hG_{24} \cap xG_{24}x^{-1}}_2. $$ One would like to say a section of this sheaf is trivial if its values on the stalks are trivial. However, the actual space of continuous maps is a ($K(2)$-local) colimit of maps \begin{align*} \Map^c(\mathbb{S}_2/G_{24}, \bar{E}_2)^{hG_{24}} & \simeq \varinjlim_{G_{24} \le U \le_o \mathbb{S}_2} \Map(\mathbb{S}_2/U, \bar{E}_2)^{hG_{24}} \\ & \simeq \varinjlim_{G_{24} \le U \le_o \mathbb{S}_2} \prod_{[x] \in G_{24} \backslash \mathbb{S}_2 /U} \bar{E}_2^{hG_{24}\cap x U x^{-1}}, \end{align*} so an element of the homotopy of the continuous mapping space is actually represented by a kind of locally constant section with constant value over $G_{24}xU$ lying in the group $$ \pi_* \bar{E}^{hG_{24} \cap xUx^{-1}}_2. $$ The difficulty is that there are only maps $$ \pi_* \bar{E}^{hG_{24} \cap xUx^{-1}}_2 \rightarrow \pi_* \bar{E}^{hG_{24} \cap xG_{24}x^{-1}}_2 $$ and these maps are not necessarily injections. The point of Lemma~\ref{lem:open} is that the open cover of $\mathbb{S}_2$ given by the double cosets $G_{24}xU$ admits a finite refinement, over which the ``constant sections'' have values in one of the stalks, and hence the vanishing of a value at a stalk implies the vanishing of the constant section. We now make this argument completely precise. 
We have \begin{align*} \pi_*\left( \Map^c(\mathbb{S}_2/G_{24}, \bar{E}_2)^{hG_{24}} \right)^{hGal} & \cong \varprojlim_{i,j} \varinjlim_{G_{24} \le U\le_o \mathbb{S}_2} \left( \frac{\pi_* \Map(\mathbb{S}_2/U, \bar{E}_2)^{hG_{24}}}{(2^i, v_1^j)} \right)^{Gal} \\ & \simeq \varprojlim_{i,j} \varinjlim_{G_{24} \le U\le_o \mathbb{S}_2} \left( \prod_{[x] \in G_{24} \backslash \mathbb{S}_2 / U} \frac{\pi_* \bar{E}^{hG_{24} \cap xUx^{-1}}_2}{(2^i, v_1^j)} \right)^{Gal} \end{align*} and \begin{align*} \pi_*\left( \Map(\Gamma_\ell/G_{24}, \bar{E}_2)^{hG_{24}} \right)^{hGal} & \cong \varprojlim_{i,j} \left( \frac{\pi_* \Map(\Gamma_\ell/G_{24}, \bar{E}_2)^{hG_{24}}}{(2^i, v_1^j)} \right)^{Gal} \\ & \simeq \varprojlim_{i,j} \left( \prod_{[x] \in G_{24} \backslash \Gamma_\ell / G_{24}} \frac{\pi_* \bar{E}^{hG_{24} \cap xG_{24}x^{-1}}_2}{(2^i, v_1^j)} \right)^{Gal} \end{align*} for suitable pairs $(i,j)$. Consider the natural maps $$ \phi_\ell : \varinjlim_{G_{24} \le U\le_o \mathbb{S}_2} \prod_{[x] \in G_{24} \backslash \mathbb{S}_2 / U} \frac{\pi_* \bar{E}^{hG_{24} \cap xUx^{-1}}_2}{(2^i, v_1^j)} \rightarrow \prod_{[x] \in G_{24} \backslash \Gamma_\ell / G_{24}} \frac{\pi_* \bar{E}^{hG_{24} \cap xG_{24}x^{-1}}_2}{(2^i, v_1^j)}. $$ Lemma~\ref{lem:faithful} will be proven if we can show that if we are given an open subgroup $G_{24} \le U \le_o \mathbb{S}_2$ and a sequence in the product $$ (z_{G_{24}xU})_{[x]} \in \prod_{[x] \in G_{24} \backslash \mathbb{S}_2 / U} \frac{\pi_* \bar{E}^{hG_{24} \cap xUx^{-1}}_2}{(2^i, v_1^j)} $$ such that $$ \phi_\ell(z_{G_{24}xU}) = 0 $$ for $\ell = 3,5$, then there is another subgroup $G_{24} \le U' \le_o U$ such that the associated sequence $$ (z_{G_{24}xU'})_{[x]} \in \prod_{[x] \in G_{24} \backslash \mathbb{S}_2 / U'} \frac{\pi_* \bar{E}^{hG_{24} \cap xU'x^{-1}}_2}{(2^i, v_1^j)} $$ is zero, where $ z_{G_{24}xU'} $ is the restriction to $U^\prime$ of $z_{G_{24}xU}$. 
Suppose that $(z_{G_{24}xU})_{[x]}$ is such a sequence in the kernel of $\phi_3$ and $\phi_5$. Take a cover $\{ y_k U_k \}$ of $\mathbb{S}_2$ as in Lemma~\ref{lem:open}, and let $U' = \cap_k U_k$. Regarding $\Gamma_3$ and $\Gamma_5$ as subgroups of $\mathbb{S}_2$, the density of the image of the map (\ref{eq:densecoprod}) implies that the map $$ \Gamma_3/U' \amalg \Gamma_5/U' \rightarrow \mathbb{S}_2/U'$$ is surjective. We therefore may assume without loss of generality that the elements $y_k$ are either in $\Gamma_3$ or $\Gamma_5$. We need to show that the associated sequence $(z_{G_{24}xU'})_{[x]}$ is zero. Take a representative $x$ of a double coset $[x] \in G_{24} \backslash \mathbb{S}_2 /U'$. Then $x \in y_kU_k$ for some $k$. Note that we therefore have $$ G_{24} \cap x U' x^{-1} \le G_{24} \cap x U_k x^{-1} = G_{24}\cap y_k U_k y_k^{-1} = G_{24} \cap y_k G_{24} y_k^{-1} \le G_{24} \cap xUx^{-1}. $$ Consider the associated composite of restriction maps $$ \frac{\pi_*\bar{E}_2^{hG_{24} \cap xUx^{-1}}}{(2^i, v_1^j)} \rightarrow \frac{\pi_*\bar{E}_2^{hG_{24} \cap y_kG_{24}y_k^{-1}}}{(2^i, v_1^j)} \rightarrow \frac{\pi_*\bar{E}_2^{hG_{24} \cap xU'x^{-1}}}{(2^i, v_1^j)}. $$ The element $z_{G_{24}xU'}$ is the image of $z_{G_{24}xU}$ under the above composite. However, since $z_{G_{24}xU}$ is in the kernel of $\phi_3$ and $\phi_5$, it follows that the image of $z_{G_{24}xU}$ is zero in $$ \frac{\pi_*\bar{E}_2^{hG_{24} \cap y_kG_{24}y_k^{-1}}}{(2^i, v_1^j)}. $$ We therefore deduce that $z_{G_{24}xU'}$ is zero, as desired. \end{proof} \subsection{Computation of $\Psi_3$ and $\Psi_5$ in low degrees}\label{sec:psicomp} Using the formulas for $f^*$ and $q^*$ for $\Gamma_0(3)$ and $\Gamma_0(5)$ in the beginning of this section, we now compute the effect of the maps $\Psi_3$ and $\Psi_5$ on a piece of $\mathrm{tmf} \wedge \mathrm{tmf}$. 
Using the notation of (\ref{eq:boSES_low}), we have decompositions: \begin{align*} \Ext^{*,*}_{A(2)_*}(\Sigma^{16}\ul{\bo}_2) & \cong \underbrace{\Ext^{*,*}_{A(1)_*}(\Sigma^{16} \mathbb{F}_2) \oplus \Ext^{*,*}_{A(2)_*}(\Sigma^{24} \ul{\bo}_1)}_{\Ext^{*,*}_{A(2)_*}(\Sigma^{16}\td{\ul{\bo}}_2)} \oplus \underbrace{\Ext^{*,*}_{A(2)_*}(\Sigma^{32}\mathbb{F}_2[1])}_{\Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2)}, \\ \Ext^{*,*}_{A(2)_*}(\Sigma^{24}\ul{\bo}_3) & \cong \Ext^{*,*}_{A(1)_*}(\Sigma^{24} \mathbb{F}_2) \oplus \Ext^{*,*}_{A(2)_*}(\Sigma^{32} \ul{\bo}_1^{2}), \\ \Ext^{*,*}_{A(2)_*}(\Sigma^{32}\ul{\bo}_4) & \cong \underbrace{\Ext^{*,*}_{A(2)_*}(\si{64}\mathbb{F}_2[1])}_{\Ext^{*,*}_{A(2)_*}(\Sigma^{32}\td{\ul{\bo}}_4)} \oplus \left( \begin{array}{c} \Ext^{*,*}_{A(1)_*}\left( \si{32} \ul{\tmf}_1 \oplus \si{48} \mathbb{F}_2 \right) \\ \oplus \Ext^{*,*}_{A(2)_*}(\si{56} \ul{\bo}_1 \oplus \si{56} \ul{\bo}_1[1]) \end{array} \right). \end{align*} As indicated by the underbraces above, we shall refer to the first piece of $\ul{\bo}_2$ as $\td{\ul{\bo}}_2$, and the second piece as $\td{\td{\ul{\bo}}}_2$, and the first piece of $\ul{\bo}_4$ as ${\td{\ul{\bo}}}_4$. We define a $\mathrm{tmf}_*$-\emph{lattice} of $\pi_*\mathrm{TMF}_0(\ell) $ to be a $\pi_*\mathrm{tmf}$-submodule $I < \pi_*\mathrm{TMF}_0(\ell)$ which is finitely generated as a $\pi_*\mathrm{tmf}$-module, and has the property that $$ \Delta^{-1}I = \pi_*\mathrm{TMF}_0(\ell) . $$ Note that the first condition forces $I$ to be concentrated in $\pi_{\ge N}\mathrm{TMF}_0(\ell) $ for some $N$. We will show that a portion $I_3$ of $\mathrm{tmf}_*\mathrm{tmf} $ detected by $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{8} \ul{\bo}_1 \oplus \Sigma^{16} \td{\ul{\bo}}_2) $$ in the ASS maps isomorphically onto a $\mathrm{tmf}_*$-lattice of $\pi_*\mathrm{TMF}_0(3) $, recovering an observation of Davis, Mahowald, and Rezk \cite{MRlevel3}, \cite{MahowaldConnective}.
Similarly, we will show that a portion $I_5$ of $\mathrm{tmf}_*\mathrm{tmf} $ detected by $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) $$ in the ASS maps isomorphically onto a $\mathrm{tmf}_*$-lattice of $\pi_*\mathrm{TMF}_0(5) $. This is a new phenomenon. Actually, Davis, Mahowald, and Rezk proved something stronger in \cite{MRlevel3}, \cite{MahowaldConnective}: they showed ($2$-locally) that there is a $\mathrm{tmf}$-module $$ \td{\mathrm{tmf}}_0(3) := \mathrm{tmf} \wedge (\Sigma^{8}\mathrm{bo}_1 \cup \Sigma^{16} \td{\mathrm{bo}}_2) \cup_\beta \Sigma^{33} \mathrm{tmf} $$ which maps to $\mathrm{TMF}_0(3)$ as a \emph{connective cover}, in the sense that on homotopy groups it gives the aforementioned $\mathrm{tmf}_*$-lattice. In the last section of this paper we will reprove and strengthen their result, and show that there is also a ($2$-local) $\mathrm{tmf}$-module $$ \td{\mathrm{tmf}}_0(5) := \Sigma^{32}\mathrm{tmf} \cup \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}'_3 \cup \Sigma^{64} \mathrm{tmf}$$ (where $\mathrm{tmf} \wedge \mathrm{bo}_3'$ is a $\mathrm{tmf}$-module whose cohomology is isomorphic to the cohomology of $\mathrm{tmf} \wedge \mathrm{bo}_3$ as an $A$-module) which maps to $\mathrm{TMF}_0(5)$ as a connective cover, topologically realizing the corresponding $\mathrm{tmf}_*$-lattice of $\pi_*\mathrm{TMF}_0(5)$. It will turn out that to verify these computational claims, it will suffice to compute the maps \begin{gather*} \Psi_3: I_3 \rightarrow \pi_*\mathrm{TMF}_0(3) \\ \Psi_5: I_5 \rightarrow \pi_*\mathrm{TMF}_0(5) \end{gather*} rationally. The behavior of the torsion classes will then be forced.
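The leading-term bookkeeping introduced just below (a modular form $f$ is recorded by its leading term $2^i a_1^j a_3^k$) is simple enough to mechanize. The following toy routine is an illustrative sketch only: it operates on the classical Weierstrass quantities for the curve $y^2 + a_1xy + a_3y = x^3$, namely $c_4 = a_1^4 - 24a_1a_3$, $c_6 = -a_1^6 + 36a_1^3a_3 - 216a_3^2$ and $\Delta = a_1^3a_3^3 - 27a_3^4$, rather than on the actual $2$-variable modular forms $f_i$ computed in this section (whose full coefficient data is not reproduced here).

```python
# Toy illustration of the "leading term" convention for Gamma_0(3)-modular
# forms.  A form is a dict {(j, k): c} representing the sum of c * a1^j * a3^k.
# The sample forms are the classical Weierstrass quantities for the curve
# y^2 + a1*x*y + a3*y = x^3; they stand in for the 2-variable modular forms
# of the text, which are not reproduced here.

def two_adic_val(n):
    """2-adic valuation of a nonzero integer (also correct for negatives)."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def leading_term(f):
    """Return (i, j, k) such that f = 2^i a1^j a3^k + ... , i.e.
    f == 0 mod 2^i and f == 2^i a1^j a3^k mod (2^(i+1), a1^(j+1))."""
    # i = minimal 2-adic valuation among the coefficients of f
    i = min(two_adic_val(c) for c in f.values() if c != 0)
    # among monomials whose coefficient has valuation exactly i, the one
    # with the smallest a1-exponent j is what survives mod a1^(j+1)
    j, k = min((jk for jk, c in f.items() if c != 0 and two_adic_val(c) == i),
               key=lambda jk: jk[0])
    return (i, j, k)

c4    = {(4, 0): 1, (1, 1): -24}
c6    = {(6, 0): -1, (3, 1): 36, (0, 2): -216}
Delta = {(3, 3): 1, (0, 4): -27}

# prints (0, 4, 0) (0, 6, 0) (0, 0, 4)
print(leading_term(c4), leading_term(c6), leading_term(Delta))
```

The output matches the leading terms $a_1^4$, $a_1^6$ and $a_3^4$ recorded below for $f^*(c_4)$, $f^*(c_6)$ and $f^*(\Delta)$ (note that $\Delta \equiv -27a_3^4 \equiv a_3^4$ modulo $(2, a_1)$).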
{\it The case of $\mathrm{TMF}_0(3)$.} Observe that we have \begin{align*} & v_0^{-1}\Ext^{*,*}_{A(2)_*}(\Sigma^{8} \ul{\bo}_1 \oplus \Sigma^{16} \td{\ul{\bo}}_2) \\ \\ &\quad = v_0^{-1}\Ext^{*,*}_{A(2)_*}(\Sigma^8\ul{\bo}_1) \\ & \quad \quad \oplus v_0^{-1} \Ext^{*,*}_{A(1)_*}(\Sigma^{16} \mathbb{F}_2) \\ &\quad \quad \oplus v_0^{-1} \Ext^{*,*}_{A(2)_*}(\Sigma^{24}\ul{\bo}_1) \\ \\ & \quad = \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [f_1], [f_2], [f_3], [f_4] \} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4]]\{ [f_1^2],[f_1f_2]\} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [f_5], [f_6], [f_7], [f_8] \} . \end{align*} Recall that $$ M_*(\Gamma_0(3)) = \mathbb{Z} [a_1^2, a_1a_3, a_3^2] $$ (regarded as a subring of $\mathbb{Z} [a_1, a_3]$). For a $\Gamma_0(3)$ modular form $f$, we will write $$ f = 2^i a_1^j a_3^k + \cdots, $$ where we have \begin{enumerate} \item $f \equiv 0 \mod (2^{i})$, and \item $ f \equiv 2^i a_1^j a_3^k \mod (2^{i+1}, a_1^{j+1}). $ \end{enumerate} We shall refer to $2^i a_1^j a_3^k$ as the \emph{leading term} of $f$. The forgetful map $$ f^*: M_*(\Gamma(1)) \rightarrow M_*(\Gamma_0(3)) $$ is computed on the level of leading terms by \begin{align*} f^*(c_4) & = a_1^4 + \cdots, \\ f^*(c_6) & = a_1^6 + \cdots, \\ f^*(\Delta) & = a_3^4 + \cdots. 
\end{align*} Using the formulas for $f^*$ and $q^*$ given in the beginning of this section, we have \begin{equation}\label{eq:Psi_3} \begin{array}{lll} \Psi_3(f_1 ) =a_1a_3 +\cdots &\quad & \Psi_3(f_2 ) =a_1^3a_3 +\cdots \\ \Psi_3(f_3 ) =a_1a_3^3 +\cdots && \Psi_3(f_4 ) =a_1^3a_3^3 +\cdots \\ \Psi_3(f_1^2 ) =a_1^2a_3^2 +\cdots && \Psi_3(f_1 f_2 ) =a_1^4a_3^2 +\cdots \\ \Psi_3(f_5 ) =a_3^4 +\cdots &&\Psi_3(f_6 ) =a_3^4a_1^2 +\cdots \\ \Psi_3(f_7 ) =a_3^6 +\cdots && \Psi_3(f_8 ) =a_3^6a_1^2 +\cdots .\\ \end{array} \end{equation} It follows that on the level of leading terms, the $(\mathrm{tmf}_*)_\mathbb{Q}$-submodule of $\mathrm{tmf}_*\mathrm{tmf}_\mathbb{Q}$ given by \begin{gather*} \mathbb{Q}[c_4, \Delta]\{f_1, f_2, f_3, f_4\} \\ \oplus \mathbb{Q}[c_4]\{f_1^2, f_1f_2\} \\ \oplus \mathbb{Q}[c_4, \Delta]\{f_5, f_6, f_7, f_8\} \end{gather*} maps under $\Psi_3$ to the $(\mathrm{tmf}_*)_\mathbb{Q}$-lattice given by the ideal $$ (I_3)_\mathbb{Q} := (a_1a_3, a_3^2) \subset M_*(\Gamma_0(3))_{\mathbb{Q}}$$ expressed as \begin{gather*} \mathbb{Q}[a_1^4, a_3^4]\{a_1a_3 , a_1^3a_3 , a_1a_3^3 , a_1^3a_3^3 \} \\ \oplus \mathbb{Q}[a_1^4]\{a_1^2a_3^2, a_1^4a_3^2\} \\ \oplus \mathbb{Q}[a_1^4, a_3^4]\{a_3^4, a_3^4a_1^2, a_3^6, a_3^6a_1^2\}. 
\end{gather*} {\it The case of $\mathrm{TMF}_0(5)$.} Observe that we have \begin{align*} & v_0^{-1}\Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \\ \\ &\quad = v_0^{-1}\Ext^{*,*}_{A(2)_*}(\Sigma^{32}\mathbb{F}_2[1]) \\ & \quad \quad \oplus v_0^{-1} \Ext^{*,*}_{A(1)_*}(\Sigma^{24} \mathbb{F}_2) \\ &\quad \quad \oplus v_0^{-1} \Ext^{*,*}_{A(2)_*}(\Sigma^{32}\ul{\bo}_1^2) \\ &\quad \quad \oplus v_0^{-1} \Ext^{*,*}_{A(2)_*}(\Sigma^{64}\mathbb{F}_2[1]) \\ \\ & \quad = \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [f_9], [c_6f_9] \} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4]]\{ [f_1^3],[f_1^2f_2]\} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [f_5f_1], [f_5f_2], [f_{10}], [f_{11}], [f_7f_1], [f_7f_2], [f_{14}], [f_{15}] \} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{[f_9^2], [c_6 f_9^2] \} . \end{align*} Recall that $$ M_*(\Gamma_0(5)) = \mathbb{Z} [b_2,b_4,\delta]/(b_4^2=b_2^2\delta-4\delta^2). $$ For a $\Gamma_0(5)$ modular form $f$, we will write $$ f = 2^i b_2^j \delta^k b_4^\epsilon + \cdots, $$ where $\epsilon \in \{0,1\}$ and \begin{enumerate} \item $f \equiv 0 \mod (2^{i})$, and \item for some integer $\alpha$, $$ \begin{cases} f \equiv 2^i b_2^j (\delta^k + \alpha \delta^{k-1}b_4) \mod (2^{i+1}, b_2^{j+1}), & \epsilon = 0, \\ f \equiv 2^i b_2^j \delta^{k}b_4 \mod (2^{i+1}, b_2^{j+1}), & \epsilon = 1. \end{cases} $$ \end{enumerate} We shall refer to $2^i b_2^j \delta^k b_4^\epsilon$ as the \emph{leading term} of $f$. The forgetful map $$ f^*: M_*(\Gamma(1)) \rightarrow M_*(\Gamma_0(5)) $$ is computed on the level of leading terms by \begin{align*} f^*(c_4) & = b_2^2 + \cdots, \\ f^*(c_6) & = b_2^3 + \cdots, \\ f^*(\Delta) & = \delta^3 + \cdots.
\end{align*} Unlike the case of $\Gamma_0(3)$, the $M_*(\Gamma(1))$-submodule of $2$-variable modular forms generated by the forms listed above in $$ v_0^{-1}\Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) $$ does \emph{not} map nicely into $M_*(\Gamma_0(5))$. Rather, we choose different generators as listed below. These generators were chosen inductively (first by increasing degree, and second, by decreasing Adams filtration) by using a row echelon algorithm based on leading terms (see Examples~\ref{ex:echelon1} and \ref{ex:echelon2}). In every case, a generator named $\td{x}$ agrees with $x$ modulo terms of higher Adams filtration: \begin{equation}\label{eq:newforms} \begin{split} \td{f_9 } & =f_9+\Delta f_1+ c_4^2 f_1^2 ,\\ \td{c_6f_9 } & = c_6 f_9+ c_4 \Delta f_2+ c_4^3 f_1 f_2 ,\\ \td{f_1^3 } & =f_1^3+f_4+ c_4 f_1^2 ,\\ \td{f_1^2f_2 } & =f_1^2 f_2 + c_4 f_3 + c_4 f_1 f_2 ,\\ \td{f_5f_1 } & =f_1 f_5+\Delta f_1 ,\\ \td{f_5f_2 } & =f_5 f_2+\Delta f_2 ,\\ \td{f_7f_1 } & =f_1 f_7+\Delta f_3+ c_4 f_7+ c_4 \Delta f_2+ c_4^2 f_6+ c_4^3 f_1 f_2+ c_4^4 f_2 ,\\ \td{f_7f_2 } & =f_2 f_7+\Delta f_4+ c_4 f_8+ c_4^2 \Delta f_1+ c_4^4 f_1^2 ,\\ \td{f_{14} } & =f_{14}+\Delta f_4+ c_4^3 f_5+ c_4^3 f_4 ,\\ \td{f_{15} } & =f_{15}+ c_4 \Delta f_3+ c_4^3 f_6+ c_4^4 f_3. \\ \end{split} \end{equation} The following forms, while not detected by $\Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4)$, will be needed: \begin{align*} \td{f_1^4 } & =f_1^4+ c_4 f_5+ c_4 f_4+ c_4^2 f_1^2, \\ \td{f_1^3f_2 } & =f_1^3 f_2+ c_4 f_6+ c_4^2 f_3+ c_4^3 f_2. 
\end{align*} We now define: \begin{align*} \td{f_{10} } & =f_{10}+f_7+ c_4 f_6+ c_4^2 f_1 f_2, \\ \td{f_{11} } & =f_{11}+f_8+ c_4 \Delta f_1+ c_4^2 f_5, \\ \td{c_4f_{10} } & = c_4\td{f_{10}}+\td{c_6f_9}+ c_4 \td{f_1^3f_2}+ c_4^2 \td{f_1^2f_2} .\\ \end{align*} Again, the following forms are not detected by $\Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4)$, but will be needed: \begin{align*} \td{f_1^4f_2 } & =f_1^4 f_2+ c_4 \Delta f_2+ c_4^2 f_6+ c_4^3 f_3+ c_4^4 f_2+ c_4 \td{f_5f_2}, \\ \td{f_{13} } & =f_{13}+\Delta f_3+ c_4 f_7+ c_4 \Delta f_2+ c_4^2 f_6+ c_4^3 f_3+ c_4^3 f_1 f_2+ c_4^4 f_2+\td{f_7f_1}+\frac{\td{c_4f_{10}}}{2} \\ & \quad \quad +\td{c_6f_9}+ c_4 \td{f_5f_2}+\td{f_1^4f_2}+ c_4^2\td{f_1^2f_2} .\\ \end{align*} We then define: \begin{align*} \td{f_9^2 } & =\td{f_9}^2 ,\\ \td{c_4f_9^2 } & = c_4 \td{f_9}^2+\Delta \td{f_7f_2}+ c_4 \Delta \td{f_{11}}+ c_4^2 \Delta \td{f_5f_1}+ c_4^3 \td{f_{14}}+ c_4^5 \td{f_9}+ c_4^5 \td{f_5f_1}+ c_4^5 \td{f_1^4} ,\\ \td{c_6f_9^2 } & = c_6 \td{f_9^2}+ c_4 \Delta \td{f_7f_1}+ c_4 \Delta \frac{\td{c_4f_{10}}}{2}+ c_4 \Delta \td{c_6f_9}+ c_4^2 \Delta \td{f_5f_2}+ c_4^4 \frac{\td{c_4f_{10}}}{2}+ c_4^4 \td{f_1^4f_2} \\ & \quad \quad + c_4^5 \td{f_1^3f_2}+ c_4^4 \td{f_{13}}. 
\end{align*} Using the formulas for $f^*$ and $q^*$ given in the beginning of this section, we have \begin{equation}\label{eq:Psi_5} \begin{array}{lll} \Psi_5( \td{f_9 }) = \delta^4 + \cdots &\quad &\Psi_5( \td{c_6f_9 }) = b_2^3\delta^4 + \cdots \\ \Psi_5( \td{f_1^3 }) = b_2^2\delta^2 + \cdots && \Psi_5( \td{f_1^2f_2 }) = b_2^3\delta^2 + \cdots \\ \Psi_5( \td{f_5f_1 }) = \delta^3b_4 + \cdots && \Psi_5( \td{f_5f_2 }) = b_2\delta^3b_4 + \cdots \\ \Psi_5( \td{f_7f_1 }) = b_2\delta^5 + \cdots && \Psi_5( \td{f_7f_2 }) = b_2^2\delta^5 + \cdots \\ \Psi_5( \td{f_{14} }) = \delta^6 + \cdots && \Psi_5( \td{f_{15} }) = b_2\delta^6 + \cdots \\ \Psi_5( \td{f_1^4 }) = b_2^2\delta^2b_4 + \cdots && \Psi_5( \td{f_1^3f_2 }) = b_2^3\delta^2b_4 + \cdots \\ \Psi_5( \td{f_{10} }) = b_2\delta^4 + \cdots && \Psi_5( \td{f_{11} }) = \delta^4b_4 + \cdots \\ \Psi_5( \td{c_4f_{10} }) = 2b_2\delta^4b_4 + \cdots && \Psi_5( \td{f_1^4f_2 }) = b_2^5\delta^3 + \cdots \\ \Psi_5( \td{f_{13} }) = b_2^9\delta + \cdots && \Psi_5( \td{f_9^2 }) = \delta^8 + \cdots \\ \Psi_5( \td{c_4f_9^2 }) = 2\delta^8b_4 + \cdots && \Psi_5( \td{c_6f_9^2 }) = b_2\delta^8b_4 + \cdots . \end{array} \end{equation} \begin{ex}\label{ex:echelon1} We explain how the above generators were produced by working through the example of $\td{f_{10}}$. \begin{description} \item[Step 1] Add terms to $f_{10}$ of higher Adams filtration to ensure that $\Psi_3(\td{f_{10}}) \equiv 0 \mod 2$. For example, we compute $$ \Psi_3(f_{10}) = a_3^6 + \cdots. $$ According to (\ref{eq:Psi_3}), we have $\Psi_3(f_7) = a_3^6 + \cdots $. Since $f_7$ has higher Adams filtration, we can add it to $f_{10}$ without changing the element detecting it in the ASS, to cancel the leading term of $a_3^6$. We compute $$ \Psi_3(f_{10} + f_7) = a_1^6 a_3^4 + \cdots. $$ Again, using (\ref{eq:Psi_3}), we see that $\Psi_3(c_4f_6)$ (of higher Adams filtration) also has this leading term, so we now compute: $$ \Psi_3(f_{10}+f_7+c_4 f_6) = a_1^{12}a_3^2+\cdots. 
$$ We see that $\Psi_3(c_4^2f_1f_2)$ also has this leading term, and $$ \Psi_3(f_{10}+f_7+c_4 f_6+c_4^2f_1f_2) \equiv 0 \mod 2. $$ \item[Step 2] Add terms to $f_{10}+f_7+c_4 f_6+c_4^2f_1f_2$ to ensure that the leading term of $\Psi_5(\td{f_{10}})$ is distinct from those generated by elements in lower degree, or higher Adams filtration. In this case, we compute $$ \Psi_5(f_{10}+f_7+c_4 f_6+c_4^2f_1f_2) = b_2\delta^4 + \cdots. $$ By induction we know the leading term of $\Psi_5$ on generators in lower degree and higher Adams filtration, and in particular (\ref{eq:Psi_5}) tells us that this leading term is distinct from leading terms generated from elements of lower degree. We therefore define $$ \td{f_{10}} = f_{10}+f_7+c_4 f_6+c_4^2f_1f_2. $$ \end{description} \end{ex} \begin{ex}\label{ex:echelon2} We now explain a subtlety which may arise by working through the example of $\td{c_4f_{10}}$. \begin{description} \item[Step 1] We would normally add terms to $c_4f_{10}$ of higher Adams filtration to ensure that $\Psi_3(\td{c_4f_{10}}) \equiv 0 \mod 2$. Of course, because we already know that $\Psi_3(\td{f_{10}}) \equiv 0 \mod 2$, we have $$ \Psi_3(c_4\td{f_{10}}) \equiv 0 \mod 2. $$ \item[Step 2] We now add terms to $c_4 \td{f_{10}}$ to ensure that the leading term of $\Psi_5(\td{c_4 f_{10}})$ is distinct from those generated by elements in lower degree. In this case, we compute $$ \Psi_5(c_4\td{f_{10}}) = b_2^3\delta^4 + \cdots. $$ By induction we know the leading term of $\Psi_5$ on generators in lower degree and higher Adams filtration, but now (\ref{eq:Psi_5}) tells us that $$ \Psi_5(\td{c_6 f_9}) = b_2^3 \delta^4 + \cdots.$$ Since $c_6 f_9$ has higher Adams filtration, we add it to $c_4\td{f_{10}}$ and compute $$ \Psi_5(c_4\td{f_{10}}+ \td{c_6f_9}) = b_2^5\delta^2b_4. $$ We inductively know that $\Psi_5(\td{f_1^3f_2}) = b_2^3 \delta^2b_4 + \cdots$, and we compute $$ \Psi_5(c_4\td{f_{10}}+ \td{c_6f_9}+c_4 \td{f_1^3f_2}) = b_2^7\delta^2. 
$$ We inductively know that $\Psi_5(\td{f_1^2f_2}) = b_2^3 \delta^2 + \cdots$, and we compute $$ \Psi_5(c_4\td{f_{10}}+ \td{c_6f_9}+c_4 \td{f_1^3f_2}+c_4^2\td{f_1^2f_2}) = 2b_2\delta^4 b_4 + \cdots. $$ This leading term is distinct from leading terms generated from elements of lower degree, and we define $$ \td{c_4f_{10}} = c_4\td{f_{10}}+ \td{c_6f_9}+c_4 \td{f_1^3f_2}+c_4^2\td{f_1^2f_2}. $$ (In fact, the 2-variable modular form $\td{c_4f_{10}}$ is $2$-divisible, and this is why some of the equations in (\ref{eq:newforms}) involve the term $\frac{\td{c_4 f_{10}}}{2}$.) \end{description} \end{ex} In light of the form the leading terms of (\ref{eq:Psi_5}) take, we rewrite \begin{align*} & v_0^{-1}\Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \\ & \quad = \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [f_9], [c_6f_9] \} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4]]\{ [f_1^3],[f_1^2f_2]\} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [f_5f_1], [f_5f_2], [f_{10}], [f_{11}], [f_7f_1], [f_7f_2], [f_{14}], [f_{15}] \} \\ & \quad \quad \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{[f_9^2], [c_6 f_9^2] \} \end{align*} in the form \begin{gather*} \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [\td{f_9}], [\td{c_6f_9}] \} \oplus \\ \mathbb{F}_2[v_0^{\pm 1}, [c_4]]\{ [\td{f_1^3}],[\td{f_1^2f_2}]\} \oplus \\ \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{ [\td{f_5f_1}], [\td{f_5f_2}], [\td{f_{11}}], [\td{c_4f_{10}}], [\td{f_7f_1}], [\td{f_7f_2}], [\td{f_{14}}], [\td{f_{15}}] \} \oplus \mathbb{F}_2[v_0^{\pm 1}, [\Delta]]\{[\td{f_{10}}]\} \\ \oplus \mathbb{F}_2[v_0^{\pm 1}, [c_4], [\Delta]]\{[\td{c_4 f_9^2}], [\td{c_6 f_9^2}] \} \oplus \mathbb{F}_2[v_0^{\pm 1}, [\Delta]]\{[\td{f_{9}^2}]\}.
\end{gather*} It follows from (\ref{eq:Psi_5}) that on the level of leading terms, the $(\mathrm{tmf}_*)_\mathbb{Q}$-submodule of $\mathrm{tmf}_*\mathrm{tmf}_\mathbb{Q}$ given by \begin{gather*} \mathbb{Q}[c_4, \Delta]\{ \td{f_9}, \td{c_6f_9} \} \\ \oplus \mathbb{Q}[c_4]\{ \td{f_1^3},\td{f_1^2f_2}\} \\ \oplus \mathbb{Q}[c_4, \Delta]\{ \td{f_5f_1}, \td{f_5f_2}, \td{f_{11}}, \frac{\td{c_4f_{10}}}{2}, \td{f_7f_1}, \td{f_7f_2}, \td{f_{14}}, \td{f_{15}} \} \oplus \mathbb{Q}[\Delta]\{\td{f_{10}}\} \\ \oplus \mathbb{Q}[c_4, \Delta]\{\frac{\td{c_4 f_9^2}}{2}, \td{c_6 f_9^2} \} \oplus \mathbb{Q}[\Delta]\{\td{f_{9}^2}\} \end{gather*} maps under $\Psi_5$ to the $(\mathrm{tmf}_*)_\mathbb{Q}$-lattice $$ (I_5)_\mathbb{Q} = \mathbb{Q}[b_2, \delta^3] \{b_2^2 \delta^2, \delta^3b_4, \delta^4, \delta^4b_4, b_2\delta^5, \delta^6, \delta^8, \delta^8b_4 \} \subset M_*(\Gamma_0(5))_{\mathbb{Q}}$$ expressed as \begin{gather*} \mathbb{Q}[b_2^2, \delta^3]\{ \delta^4, b_2^3\delta^4 \} \\ \oplus \mathbb{Q}[b_2^2]\{b_2^2\delta^2, b_2^3\delta^2 \} \\ \oplus \mathbb{Q}[b_2^2, \delta^3]\{ \delta^3b_4, b_2\delta^3b_4, \delta^4b_4, b_2\delta^4b_4, b_2\delta^5, b_2^2\delta^5, \delta^6, b_2\delta^6 \} \oplus \mathbb{Q}[\delta^3]\{b_2\delta^4\} \\ \oplus \mathbb{Q}[b_2^2, \delta^3]\{\delta^8b_4, b_2\delta^8b_4 \} \oplus \mathbb{Q}[\delta^3]\{\delta^8 \}.
\end{gather*} \subsection{Using level structures to detect differentials and hidden extensions in the ASS}\label{sec:diffext} In the previous section we observed that $\Psi_3$ maps a $\mathrm{tmf}_*$-submodule of $\mathrm{tmf}_*\mathrm{tmf} $ detected in the ASS by $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{8} \ul{\bo}_1 \oplus \Sigma^{16} \td{\ul{\bo}}_2) $$ to a $\mathrm{tmf}_*$-lattice $I_3 \subset \pi_*\mathrm{TMF}_0(3) $, and $\Psi_5$ maps a $\mathrm{tmf}_*$-submodule of $\mathrm{tmf}_*\mathrm{tmf} $ detected in the ASS $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) $$ to a $\mathrm{tmf}_*$-lattice $I_5 \subset \pi_*\mathrm{TMF}_0(5) $. We now observe that using the known structure of $\pi_*\mathrm{TMF}_0(3)$ and $\pi_*\mathrm{TMF}_0(5)$, we can deduce differentials in the portion of the ASS detected by $$ \Ext^{*,*}_{A(2)_*}(\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16}\ul{\bo}_2 \oplus \Sigma^{24}\ul{\bo}_3 \oplus \Sigma^{32} \td{\ul{\bo}}_4). $$ \begin{figure} \centering \includegraphics[height =\textheight]{tmf03.PNG} \caption{Differentials and hidden extensions in the portion of the ASS for $\mathrm{tmf}_*\mathrm{tmf}$ detected by $\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16}\td{\ul{\bo}}_2$ coming from $\mathrm{TMF}_0(3)$.}\label{fig:tmf3} \end{figure} \begin{figure} \centering \includegraphics[height =\textheight]{tmf05.PNG} \caption{Differentials and hidden extensions in the portion of the ASS for $\mathrm{tmf}_*\mathrm{tmf}$ detected by $\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24}\ul{\bo}_3 \oplus\Sigma^{32}\td{\ul{\bo}}_4$ coming from $\mathrm{TMF}_0(5)$.}\label{fig:tmf5} \end{figure} We begin with $\Sigma^{8} \ul{\bo}_1 \oplus \Sigma^{16} \td{\ul{\bo}}_2$. Figure~\ref{fig:tmf3} displays this portion of the $E_2$-term of the ASS for $\mathrm{tmf}_*\mathrm{tmf}$, with differentials and hidden extensions. 
The $v_0^{-1}\Ext_{A(2)}$-generators in the chart are also labeled with $\Gamma_0(3)$-modular forms. These are the leading terms of the $\Gamma_0(3)$-modular forms that they map to under the map $\Psi_{3}$ (see (\ref{eq:Psi_3})). The Adams differentials and hidden extensions are all deduced from the behavior of $\Psi_3$ on these torsion-free classes, as we will now explain. We will also describe how the $h_0$-torsion in this portion of the ASS detects homotopy classes which map isomorphically under $\Psi_3$ onto torsion in $\pi_*\mathrm{TMF}_0(3) $. We freely make reference to the descent spectral sequence $$ H^{s}(\Mcl{3} , \omega^{\otimes t}) \Rightarrow \pi_{2t-s}\mathrm{TMF}_0(3) , $$ as computed in \cite{MRlevel3}. \begin{description} \item[Stem 17] We have $$ \Psi_3(\eta f_4) = \eta a_1^3a_3^3 + \cdots. $$ Mahowald and Rezk \cite {MRlevel3} define a class $x$ in $\pi_{17}\mathrm{TMF}_0(3)$ such that $$ c_4 x = \eta a_1^3a_3^3 + \cdots. $$ There is a class $z_{17}$ in $\Ext^{1,18}_{A(2)_*}(\Sigma^8\ul{\bo}_1)$ such that $$ [c_4] z_{17} = h_1 [f_4]. $$ The class $z_{17}$ is a permanent cycle, and detects an element $y_{17} \in \mathrm{tmf}_{17}\mathrm{tmf} $. We deduce \begin{align*} \Psi_3(y_{17}) & = x, \\ \Psi_3(\eta y_{17}) & = \eta x, \\ \Psi_3(\nu y_{17}) & = \nu x. \\ \end{align*} \item[Stem 24] The modular form $a_3^4$ is not a permanent cycle in the descent spectral sequence for $\mathrm{TMF}_0(3)$. It follows that the corresponding element of $\Ext_{A(2)_*}(\td{\ul{\bo}}_2)$ must support an ASS differential. There is only one possible target for this differential. \item[Stem 33] There is a class $z_{33} \in \Ext^{1,34}_{A(2)_*}(\Sigma^{16} \td{\ul{\bo}}_2)$ satisfying $$ [c_4] z_{33} = h_1[f_8].$$ There are no possible non-trivial differentials supported by $h_1 z_{33}$. 
Dividing both sides of $$ \Psi_3(\eta^2 f_8) = \eta^2 a_1^2 a_3^6 + \cdots $$ by $c_4$, we deduce that there is an element $y_{34} \in \mathrm{tmf}_{34}\mathrm{tmf} $ detected by $h_1 z_{33}$ satisfying $$ \Psi_3(y_{34}) = x^2. $$ Since $x^2$ is not $\eta$-divisible, we deduce that $z_{33}$ must support an Adams differential, and there is only one possible target for such a differential. Since $$ \Psi_3(\bar\kappa y_{17}) = \bar\kappa x = \nu x^2 $$ it follows that the element $g y_{17} \in \Ext^{5,42}_{A(2)_*}(\ul{\bo}_1)$ detects $\nu y_{34}$, which maps to $\nu x^2$ under $\Psi_3$. We then deduce that $$ \Psi_3(\bra{\eta,\nu,\nu y_{34}}) = \bra{\eta,\nu,\nu x^2} = a_1a_3 x^2. $$ \item[Stem 48] Let $z_{48} \in \Ext^{4,52}_{A(2)_*}(\td{\ul{\bo}}_2)$ denote the unique non-trivial class with $h_1 z_{48} = 0$, so that $[\Delta f_5] + z_{48}$ is the unique class in that bidegree which supports non-trivial $h_1$ and $h_2$-multiplication. Note that there is only one potential target for an Adams differential supported by $[\Delta f_5]$ or $z_{48}$. Since $a_3^8$ supports non-trivial $\eta$ and $\nu$ multiplication, it follows that $[\Delta f_5] + z_{48}$ must be a permanent cycle in the ASS, detecting an element $y_{48} \in \mathrm{tmf}_*\mathrm{tmf} $ satisfying $$ \Psi_3(y_{48}) = a_3^8. $$ Since $\nu^2 a_3^8$ is not $\eta$-divisible, we conclude that $h_{2,1} z_{48}$ cannot be a permanent cycle. We deduce using $h_{2,1}$-multiplication (i.e. application of $\bra{\nu,\eta,-}$) that $$ d_3(h_{2,1}^i z_{48}) = h^{i-1}_{2,1} d_3(h_{2,1}z_{48}) $$ for $i \ge 1$, and that $$ d_3(z_{48}) = d_3([\Delta f_5]) \ne 0. $$ \end{description} We now proceed to analyze $\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4$. Figure~\ref{fig:tmf5} displays this portion of the $E_2$-term of the ASS for $\mathrm{tmf}_*\mathrm{tmf}$, with differentials and hidden extensions.
The $v_0^{-1}\Ext_{A(2)}$-generators in the chart are also labeled with $\Gamma_0(5)$-modular forms. These are the leading terms of the $\Gamma_0(5)$-modular forms that they map to under $\Psi_{5}$ (see (\ref{eq:Psi_5})). As in the case of $\Sigma^8\mathrm{bo}_1 \oplus \Sigma^{16}\td{\mathrm{bo}}_2$, the Adams differentials and hidden extensions are all deduced from the behavior of $\Psi_5$ on these torsion-free classes. We will also describe how the $h_0$-torsion in this portion of the ASS detects homotopy classes which map isomorphically under $\Psi_5$ onto torsion in $\pi_*\mathrm{TMF}_0(5) $. We freely make reference to the descent spectral sequence $$ H^{s}(\Mcl{5} , \omega^{\otimes t}) \Rightarrow \pi_{2t-s}\mathrm{TMF}_0(5) , $$ as computed in \cite{Q5}, for instance. Most of the differentials and extensions follow from the fact that the element $[f_9]$ which generates $$ \Ext_{A(2)_*}(\Sigma^{16}\td{\td{\ul{\bo}}}_{2}) \cong \Ext_{A(2)_*}(\Sigma^{32}\mathbb{F}_2[1]) $$ must be a permanent cycle in the ASS, and that the ASS for $\mathrm{tmf} \wedge \mathrm{tmf}$ is a spectral sequence of modules over the ASS for $\mathrm{tmf}$ $$ \Ext^{*,*}_{A(2)_*}(\mathbb{F}_2) \Rightarrow \pi_* \mathrm{tmf}^\wedge_2. $$ Below we give some brief explanation for the main differentials and hidden extensions which do not follow from this. \begin{description} \item[Stem 36] We have $$ \Psi_5(f_{10}) = b_2 \delta^4 + \cdots. $$ Since $b_2 \delta^4$ is not a permanent cycle in the descent spectral sequence for $\mathrm{TMF}_0(5) $, we deduce that $f_{10}$ must support a differential. There is only one possibility (taking into account the differential $d_3(h_2 z_{33})$ coming from $\mathrm{TMF}_0(3)$), $$ d_4([f_{10}]) = h_1^3 [f_9].$$ This is especially convenient, in light of the fact that $\eta^3 \delta^4 = 0$. \item[Stem 41] The hidden extension follows from dividing $$ \Psi_5(\eta [\td{f_7f_2}]) = \eta b_2^2\delta^5 + \cdots $$ by $c_4$. 
\item[Stem 54] The three hidden extensions to the element $[\kappa c_4 \td{f}_9]$ all follow from the fact that $\nu^2 (2\delta^6)$ is non-trivial, and that $$ 2 (\nu^2\delta^6) = \eta^2\bar{\kappa}\delta^2. $$ \item[Stem 56] The hidden extension follows from the Toda bracket manipulation $$ 2\bra{\nu, 2\bar\kappa, 2\td{f_9}} = \bra{2,\nu, 2\bar\kappa}2\td{f_9}. $$ \item[Stem 64] The differential on $[\td{f^2_9}/2]$ follows from the fact that $\delta^8$ is not $2$-divisible. The hidden extensions follow from the fact that $\eta \delta^8 \ne 0$ and $\nu^2 \delta^8 \ne 0$. \item[Stem 65] The hidden $\eta$-extension follows from the fact that $\delta^4 \kappa \bar\kappa$ is $\eta$-divisible, and $\nu (\delta^4 \kappa \bar\kappa) = (2\delta^6)\bar\kappa$. \end{description} \subsection{Connective covers of $\mathrm{TMF}_0(3)$ and $\mathrm{TMF}_0(5)$ in the $\mathrm{tmf}$-resolution}\label{sec:cover} In this section we will topologically realize the summands \begin{gather*} \Ext^{*,*}_{A(2)_*}(\Sigma^{8} \ul{\bo}_1 \oplus \Sigma^{16} \td{\ul{\bo}}_2), \\ \Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \end{gather*} of $\Ext(\mathrm{tmf} \wedge \mathrm{tmf})$, which we showed detect $\mathrm{tmf}_*$-submodules which map to $\mathrm{tmf}_*$-lattices of $\pi_*\mathrm{TMF}_0(3) $ and $\pi_*\mathrm{TMF}_0(5) $ under the maps $\Psi_3$ and $\Psi_5$, respectively. From now on, everything is implicitly $2$-local. For the purposes of context, we shall say that a spectrum $$ X\rightarrow \mathrm{tmf} $$ over $\mathrm{tmf}$ is a \emph{$\mathrm{tmf}$-Brown-Gitler spectrum} if the induced map $$ H_* X \rightarrow H_*\mathrm{tmf} $$ maps $H_* X$ isomorphically onto one of the $A_*$-subcomodules $\ul{\mathrm{tmf}}_i \subset H_*\mathrm{tmf}$ defined in Section~\ref{sec:ass}. 
Not much is known about the existence of $\mathrm{tmf}$-Brown-Gitler spectra, but the most optimistic hope would be that the spectrum $\mathrm{tmf}$ admits a filtration by $\mathrm{tmf}$-Brown-Gitler spectra $\mathrm{tmf}_i$. The case of $i = 0$ is trivial (define $\mathrm{tmf}_0 = S^0$) and the case of $i = 1$ is almost as easy: a spectrum $\mathrm{tmf}_1$ can be defined to be the $15$-skeleton: $$ \mathrm{tmf}_1 := \mathrm{tmf}^{[15]} \hookrightarrow \mathrm{tmf}. $$ In light of the short exact sequences $$ 0 \rightarrow \ul{\mathrm{tmf}}_{i-1} \rightarrow \ul{\mathrm{tmf}}_i \rightarrow \Sigma^{8i} \ul{\bo}_i \rightarrow 0 $$ one would anticipate that such $\mathrm{tmf}$-Brown-Gitler spectra would be built from $\mathrm{bo}$-Brown-Gitler spectra, so that $$ \mathrm{tmf}_i \simeq \mathrm{bo}_0 \cup \Sigma^8 \mathrm{bo}_1 \cup \cdots \cup \Sigma^{8i} \mathrm{bo}_i. $$ Davis, Mahowald, and Rezk \cite{MRlevel3}, \cite{MahowaldConnective} nearly construct a spectrum $\mathrm{tmf}_2$; they show that there is a subspectrum $$ \Sigma^8 \mathrm{bo}_1 \cup \Sigma^{16} \mathrm{bo}_2 \hookrightarrow \br{\mathrm{tmf}} $$ (where $\br{\mathrm{tmf}}$ is the cofiber of the unit $S^0 \rightarrow \mathrm{tmf}$) realizing the subcomodule $$ \Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \subseteq H_* \br{\mathrm{tmf}}. $$ We will not pursue the existence of $\mathrm{tmf}$-Brown-Gitler spectra here, but instead will consider the easier problem of constructing the beginning of a potential filtration of $\mathrm{tmf} \wedge \mathrm{tmf}$ by $\mathrm{tmf}$-modules, which we denote $\mathrm{tmf} \wedge \mathrm{tmf}_i$ even though we do not require the existence of the individual spectra $\mathrm{tmf}_i$. 
We would have $$\mathrm{tmf} \wedge \mathrm{tmf}_i \simeq \mathrm{tmf} \wedge \mathrm{bo}_0 \cup \Sigma^8 \mathrm{tmf} \wedge \mathrm{bo}_1 \cup \cdots \cup \Sigma^{8i} \mathrm{tmf} \wedge \mathrm{bo}_i,$$ such that the map $$ H_* \mathrm{tmf} \wedge \mathrm{tmf}_i \rightarrow H_* \mathrm{tmf} \wedge \mathrm{tmf} $$ maps $H_* \mathrm{tmf} \wedge \mathrm{tmf}_i$ onto the sub-comodule $$ (A//A(2))_* \otimes \ul{\mathrm{tmf}}_i \subset H_* \mathrm{tmf} \wedge \mathrm{tmf}. $$ Note that in the case of $i = 0$, we may take $$ \mathrm{tmf} \wedge \mathrm{tmf}_0 := \mathrm{tmf} \xrightarrow{\eta_L} \mathrm{tmf} \wedge \mathrm{tmf}. $$ Since this is the inclusion of a summand, with cofiber denoted $\br{\mathrm{tmf}}$, it suffices to instead look for a filtration $$ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_1 \hookrightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \hookrightarrow \cdots \hookrightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}} $$ of $\mathrm{tmf}$-modules. Our previous discussion indicates that the case of $i = 1$ is easy, and now the work of Davis-Mahowald-Rezk fully handles the case of $i = 2$. In this section we will address the case of $i = 3$, and a ``piece'' of the case of $i = 4$.
\begin{prop}\label{prop:tmf_3} $\quad$ \begin{enumerate} \item There is a $\mathrm{tmf}$-module $$ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \simeq \Sigma^8 \mathrm{tmf} \wedge \mathrm{bo}_1 \cup \Sigma^{16} \mathrm{tmf} \wedge \mathrm{bo}_2 \cup \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}_3' \hookrightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}} $$ which realizes the submodule $$ (A//A(2))_* \otimes (\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3) \subset H_* \mathrm{tmf} \wedge \br{\mathrm{tmf}} $$ where $\mathrm{tmf} \wedge \mathrm{bo}'_3$ is a $\mathrm{tmf}$-module with $$ H_*(\mathrm{tmf} \wedge \mathrm{bo}_3') \cong (A//A(2))_* \otimes \ul{\bo}_3 $$ (but which may not be equivalent to $\mathrm{tmf} \wedge \mathrm{bo}_3$ as a $\mathrm{tmf}$-module). \item There is a map of $\mathrm{tmf}$-modules $$ \Sigma^{63} \mathrm{tmf} \xrightarrow{\alpha} \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 $$ and an extension $$ \xymatrix{ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \ar[r] \ar[d] & \mathrm{tmf} \wedge \mathrm{tmf} . \\ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64} \mathrm{tmf} \ar@{.>}_{\iota}[ur] } $$ \item There is a modified Adams spectral sequence $$ \Ext^{*,*}_{A(2)_*}(\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \Rightarrow \pi_* \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf}, $$ and the map $\iota$ induces a map from this modified ASS to the ASS for $\mathrm{tmf} \wedge \mathrm{tmf}$ such that the induced map on $E_2$-terms is the inclusion of the summand $$ \Ext^{*,*}_{A(2)_*}(\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \hookrightarrow \Ext^{*,*}_{A(2)_*}((A//A(2))_*). 
$$ \end{enumerate} \end{prop} In \cite{MRlevel3}, \cite{MahowaldConnective}, Davis, Mahowald, and Rezk construct a map $$ \Sigma^{32} \mathrm{tmf} \xrightarrow{\beta} \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2$$ such that the cofiber $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \cup_\beta \Sigma^{33} \mathrm{tmf}$ has an ASS with $E_2$-term $$ E^{*,*}_2 \cong \Ext^{*,*}_{A(2)_*}(\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16}\td{\ul{\bo}}_2) $$ and there is an equivalence $$ v_2^{-1}( \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \cup_\beta \Sigma^{33} \mathrm{tmf}) \simeq \mathrm{TMF}_0(3). $$ What they do not address is how this connective cover is related to $\mathrm{tmf} \wedge \mathrm{tmf}$ and the map $\Psi_3$ to $\mathrm{TMF}_0(3)$. \begin{thm}\label{thm:tdtmf03}$\quad$ \begin{enumerate} \item There is a choice of attaching map $\beta$ such that the $\mathrm{tmf}$-module $$ \td{\mathrm{tmf}}_0(3) := \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \cup_\beta \Sigma^{33} \mathrm{tmf} $$ fits into a diagram \begin{equation}\label{eq:tdtmf03} \xymatrix{ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \ar@{^{(}->}[r] \ar[d] & \mathrm{tmf} \wedge \mathrm{tmf} \ar[r]^{\Psi_3} & \mathrm{TMF}_0(3). \\ \td{\mathrm{tmf}}_0(3) \ar@{.>}[urr] } \end{equation} \item The $E_2$-term of the ASS for $\td{\mathrm{tmf}}_0(3)$ is given by $$ E_2^{*,*} = \Ext^{*,*}_{A(2)_*}(\Sigma^{8}\ul{\bo}_1 \oplus \Sigma^{16}\td{\ul{\bo}}_2). $$ \item The map $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \rightarrow \td{\mathrm{tmf}}_0(3)$ of Diagram~(\ref{eq:tdtmf03}) induces the projection $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{8}\ul{\bo}_1 \oplus \Sigma^{16}\ul{\bo}_2) \rightarrow \Ext^{*,*}_{A(2)_*}(\Sigma^{8}\ul{\bo}_1 \oplus \Sigma^{16}\td{\ul{\bo}}_2)$$ on Adams $E_2$-terms. \item The map $\td{\mathrm{tmf}}_0(3) \rightarrow \mathrm{TMF}_0(3)$ of Diagram~(\ref{eq:tdtmf03}) makes $\td{\mathrm{tmf}}_0(3)$ a connective cover of $\mathrm{TMF}_0(3)$. 
\end{enumerate} \end{thm} We will also provide the following analogous connective cover of $\mathrm{TMF}_0(5)$. \begin{thm}\label{thm:tdtmf05} $\quad$ \begin{enumerate} \item There is a $\mathrm{tmf}$-module $$ \td{\mathrm{tmf}}_0(5) := \Sigma^{32} \mathrm{tmf} \cup \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}'_3 \cup \Sigma^{64}\mathrm{tmf} $$ which fits into a diagram \begin{equation}\label{eq:tdtmf05} \xymatrix{ \td{\mathrm{tmf}}_0(5) \ar[dr] \ar@{.>}[r] & \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_{\alpha} \Sigma^{64} \mathrm{tmf} \ar[d] \ar[r] & \mathrm{tmf} \wedge \mathrm{tmf} \ar[r]_{\Psi_5} & \mathrm{TMF}_0(5). \\ & \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}_3' \cup_{\td{\alpha}} \Sigma^{64}\mathrm{tmf} } \end{equation} \item There is a modified ASS $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24}\ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \Rightarrow \pi_*(\td{\mathrm{tmf}}_0(5)).$$ \item The map $\td{\mathrm{tmf}}_0(5) \rightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf}$ of Diagram~(\ref{eq:tdtmf05}) induces a map of modified ASS's, which on $E_2$-terms is given by the inclusion $$ \Ext^{*,*}_{A(2)_*}(\Sigma^{16} \td{\td{\ul{\bo}}}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \hookrightarrow \Ext^{*,*}_{A(2)_*}(\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4). $$ \item The composite $\td{\mathrm{tmf}}_0(5) \rightarrow \mathrm{TMF}_0(5)$ of Diagram~(\ref{eq:tdtmf05}) makes $\td{\mathrm{tmf}}_0(5)$ a connective cover of $\mathrm{TMF}_0(5)$. \end{enumerate} \end{thm} The remainder of this section will be devoted to proving Proposition~\ref{prop:tmf_3}, Theorem~\ref{thm:tdtmf03}, and Theorem~\ref{thm:tdtmf05}. The proofs of all of these will be accomplished by taking fibers and cofibers of a series of maps, using brute-force calculations of the ASS. 
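These calculations rest on explicit, machine-readable presentations of the relevant Steenrod-module structures. As a toy illustration of the kind of data such a presentation records (a hypothetical dictionary encoding, not the input format of any particular $\Ext$ software), the following sketch stores the action of $\mathrm{Sq}^1$ and $\mathrm{Sq}^2$ on the truncated module $H^*(\mathbb{RP}^8;\mathbb{F}_2)$ and checks the Adem relations $\mathrm{Sq}^1\mathrm{Sq}^1 = 0$ and $\mathrm{Sq}^2\mathrm{Sq}^2 = \mathrm{Sq}^1\mathrm{Sq}^2\mathrm{Sq}^1$ on every basis element.

```python
from math import comb

TOP = 8  # truncation degree: H^*(RP^8; F_2) has basis x^1, ..., x^8

def sq(n, k):
    """Sq^n on the basis element x^k of H^*(RP^8; F_2).
    By the standard formula Sq^n(x^k) = C(k, n) x^{k+n};
    returns the exponent of the image, or None if the result is zero."""
    if k + n > TOP or comb(k, n) % 2 == 0:
        return None
    return k + n

def compose(ops, k):
    """Apply a sequence of Steenrod squares (rightmost first) to x^k."""
    for n in reversed(ops):
        if k is None:
            return None
        k = sq(n, k)
    return k

# Sanity checks on individual actions.
assert sq(1, 1) == 2 and sq(2, 3) == 5

# Verify the Adem relations Sq^1 Sq^1 = 0 and
# Sq^2 Sq^2 = Sq^1 Sq^2 Sq^1 on every basis element.
for k in range(1, TOP + 1):
    assert compose([1, 1], k) is None
    assert compose([2, 2], k) == compose([1, 2, 1], k)

# A sample module-definition table, listing only the non-zero actions.
table = {(n, k): sq(n, k)
         for n in (1, 2) for k in range(1, TOP + 1)
         if sq(n, k) is not None}
print(table)
```

An actual module file for $H^*\mathrm{bo}_i$ records an analogous (but much larger) table for the $A(2)$-action, which is what the Sage-generated input mentioned below provides.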
These brute-force calculations boil down to low-degree computations of the groups $\Ext_{A(2)_*}(\ul{\bo}_i, \ul{\bo}_j)$ for various small values of $i$ and $j$. The computations were performed using R.~Bruner's $\Ext$-software \cite{Bruner}. The software requires module definition input that completely describes the $A(2)$-module structure of the modules $H^*\mathrm{bo}_i$. The first author was fortunate to have an undergraduate research assistant, Brandon Tran, generate module files using Sage. \begin{proof}[Proof of Proposition~\ref{prop:tmf_3}] Endow $\mathrm{tmf} \wedge \br{\mathrm{tmf}}$ with a minimal $\mathrm{tmf}$-cell structure corresponding to an $\mathbb{F}_2$-basis of $H_*\br{\mathrm{tmf}}$. Let $\mathrm{tmf}\wedge \br{\mathrm{tmf}}^{[46]}$ denote the $46$-skeleton of this $\mathrm{tmf}$-cell module, so we have \begin{equation}\label{eq:Hbrtmf46} H_*(\mathrm{tmf} \wedge \br{\mathrm{tmf}}^{[46]}) \cong (A//A(2))_* \otimes ( \Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32} \ul{\bo}^{[14]}_4 \oplus \Sigma^{40}\ul{\bo}^{[6]}_5). \end{equation} We first wish to form a $\mathrm{tmf}$-module $X_1$ with \begin{equation}\label{eq:HX1} H_* X_1 \cong (A//A(2))_* \otimes ( \Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32} \ul{\bo}^{[14]}_4 ) \end{equation} by taking the fiber of a suitable map of $\mathrm{tmf}$-modules $$ \gamma_5: \mathrm{tmf} \wedge \br{\mathrm{tmf}}^{[46]} \rightarrow \mathrm{tmf} \wedge \Sigma^{40}\mathrm{bo}_5^{[6]}. $$ We use the ASS $$ \Ext^{s,t}_{A(2)_*}(H_*\br{\mathrm{tmf}}^{[46]}, \Sigma^{40}\ul{\bo}_5^{[6]}) \Rightarrow [\Sigma^{t-s}\mathrm{tmf} \wedge \br{\mathrm{tmf}}^{[46]},\Sigma^{40}\mathrm{tmf} \wedge \mathrm{bo}_5^{[6]}]_\mathrm{tmf}. $$ The decomposition (\ref{eq:Hbrtmf46}) induces a corresponding decomposition of $\Ext$ groups. 
The only non-zero contributions near $t-s = 0$ come from $\Sigma^{40}\ul{\bo}_5^{[6]}$, $\Sigma^{32} \ul{\bo}_4^{[14]}$, and $\Sigma^{24} \ul{\bo}_3$; the corresponding $\Ext$ charts are depicted below. \begin{center} \includegraphics[width = \textwidth]{X1ASS.PNG} \end{center} The generator $[\gamma_5] \in \Ext^{0,0}_{A(2)_*}(\Sigma^{40}\ul{\bo}^{[6]}_5, \Sigma^{40}\ul{\bo}_5^{[6]})$ would detect the desired map $\gamma_5$. We just need to show that this generator is a permanent cycle in the ASS. As the charts indicate, the only potential target is the non-trivial class $$ x \in \Ext^{2,-1+2}_{A(2)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{40}\ul{\bo}^{[6]}_5). $$ We shall call $x$ the \emph{potential obstruction} for $\gamma_5$; if $d_2(\gamma_5) = x$ then we will say that $\gamma_5$ is \emph{obstructed} by $x$. The key observation is that in the vicinity of $t-s = 0$, the groups $\Ext_{A(1)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{40}\ul{\bo}_5^{[6]})$ are depicted below. \begin{center} \includegraphics[width=1.75in]{A1X1ASS.PNG} \end{center} Under the map of ASS's $$ \xymatrix{ \Ext^{s,t}_{A(2)_*}(H_*\br{\mathrm{tmf}}^{[46]}, \Sigma^{40}\ul{\bo}_5^{[6]}) \ar@{=>}[r] \ar[d] & [\Sigma^{t-s}\mathrm{tmf} \wedge \br{\mathrm{tmf}}^{[46]},\Sigma^{40}\mathrm{tmf} \wedge \mathrm{bo}_5^{[6]}]_\mathrm{tmf} \ar[d]^{\mathrm{bo} \wedge_{\mathrm{tmf}} -} \\ \Ext^{s,t}_{A(1)_*}(H_*\br{\mathrm{tmf}}^{[46]}, \Sigma^{40}\ul{\bo}_5^{[6]}) \ar@{=>}[r] & [\Sigma^{t-s}\mathrm{bo} \wedge \br{\mathrm{tmf}}^{[46]},\Sigma^{40}\mathrm{bo} \wedge \mathrm{bo}_5^{[6]}]_{\mathrm{bo}} } $$ the potential obstruction $x$ maps to the nonzero class $$ y \in \Ext^{2,-1+2}_{A(1)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{40}\ul{\bo}^{[6]}_5). 
$$ Therefore if $\gamma_5$ is obstructed by $x$, then $y$ is the obstruction to the existence of a corresponding map of $\mathrm{bo}$-modules $$ \mathrm{bo} \wedge_\mathrm{tmf} \gamma_5: \mathrm{bo} \wedge \br{\mathrm{tmf}}^{[46]} \rightarrow \Sigma^{40}\mathrm{bo} \wedge \mathrm{bo}_5^{[6]}. $$ However, Bailey showed in \cite{Bailey} that there is a splitting of $\mathrm{bo}$-modules $$ \mathrm{bo} \wedge \mathrm{tmf} \simeq \bigvee_i \Sigma^{8i} \mathrm{bo} \wedge \mathrm{bo}_i. $$ In particular, the map $\mathrm{bo} \wedge_\mathrm{tmf} \gamma_5$ is realized by restricting the splitting map $$ \mathrm{bo} \wedge \br{\mathrm{tmf}} \rightarrow \Sigma^{40} \mathrm{bo} \wedge \mathrm{bo}_5 $$ to $46$-skeleta (in the sense of $\mathrm{bo}$-cell spectra). Therefore, $\mathrm{bo} \wedge_\mathrm{tmf} \gamma_5$ is unobstructed, and we deduce that $\gamma_5$ cannot be obstructed. The $\mathrm{tmf}$-module $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_3$ may then be defined to be the fiber of a map $$ \gamma_4: X_1 \rightarrow \Sigma^{32} \mathrm{tmf} \wedge \mathrm{bo}_4^{[14]} $$ which on homology is the projection onto the summand $$ H_* X_1 \rightarrow \Sigma^{32} (A//A(2))_* \otimes \ul{\bo}^{[14]}_4 .$$ Again, we use the ASS $$ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32} \ul{\bo}^{[14]}_4, \Sigma^{32}\ul{\bo}_4^{[14]}) \Rightarrow [\Sigma^{t-s}X_1, \Sigma^{32}\mathrm{tmf} \wedge \mathrm{bo}_4^{[14]}]_\mathrm{tmf}. $$ The $E_2$-term is computed using the decomposition (\ref{eq:HX1}). The only non-zero contributions come from the following summands: \begin{center} \includegraphics[width=3in]{tmf_3ASS.PNG} \end{center} We discover that the only potential obstruction to the existence of $\gamma_4$ is the non-trivial class $$ z \in \Ext^{2,-1+2}_{A(2)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{32}\ul{\bo}^{[14]}_4). 
$$ Unfortunately we cannot simply imitate the argument in the previous paragraph, because $z$ is in the kernel of the homomorphism $$ \Ext^{2,-1+2}_{A(2)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{32}\ul{\bo}^{[14]}_4) \rightarrow \Ext^{2,-1+2}_{A(1)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{32}\ul{\bo}^{[14]}_4). $$ Nevertheless, a more roundabout approach will eliminate this potential obstruction. We first observe that there is a map of $\mathrm{tmf}$-modules $$ \gamma'_4: X_1 \rightarrow \Sigma^{32} \mathrm{tmf} \wedge (\mathrm{bo}_4)_{[9]}^{[14]} $$ (with $(\mathrm{bo}_4)_{[9]}^{[14]}$ denoting the quotient $\mathrm{bo}_4^{[14]}/\mathrm{bo}_4^{[8]}$), which on homology is the canonical composite $$ H_* X_1 \rightarrow \Sigma^{32} (A//A(2))_* \otimes \ul{\bo}^{[14]}_4 \rightarrow \Sigma^{32}(A//A(2))_* \otimes (\ul{\bo}_4)_{[9]}^{[14]}.$$ The existence of $\gamma_4'$ is verified by the ASS $$ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[14]}, \Sigma^{32}(\ul{\bo}_4)_{[9]}^{[14]}) \Rightarrow [\Sigma^{t-s}X_1, \Sigma^{32}(\mathrm{bo}_4)_{[9]}^{[14]}]_\mathrm{tmf}. $$ The $E_2$-term is computed using the decomposition (\ref{eq:HX1}). The only non-zero contributions in the vicinity of $t-s = 0$ come from the following summands: \begin{center} \includegraphics[width=3in]{X2ASS.PNG} \end{center} We see that there are no potential obstructions for the existence of $\gamma'_4$. Let $X_2$ denote the fiber of $\gamma'_4$, so that we have \begin{equation}\label{eq:HX2} H_* X_2 \cong (A//A(2))_* \otimes ( \Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32} \ul{\bo}^{[8]}_4 ). \end{equation} We instead contemplate the potential obstructions to the existence of a map of $\mathrm{tmf}$-modules $$ \gamma''_4: X_2 \rightarrow \Sigma^{32} \mathrm{tmf} \wedge \mathrm{bo}_4^{[8]} $$ which on homology induces the projection $$ H_*X_2 \rightarrow \Sigma^{32} (A//A(2))_* \otimes \ul{\bo}_4^{[8]}. 
$$ The $E_2$-term of the ASS $$ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[8]}, \Sigma^{32}\ul{\bo}_4^{[8]}) \Rightarrow [\Sigma^{t-s}X_2, \Sigma^{32}\mathrm{bo}_4^{[8]}]_\mathrm{tmf} $$ is computed using the decomposition (\ref{eq:HX2}), and in particular the contribution coming from the summand $\Sigma^{24} \ul{\bo}_3 \subset \br{\ul{\mathrm{tmf}}}_3$ gives the following classes in the vicinity of $t-s = 0$: \begin{center} \includegraphics[width=1.5in]{X2bo3ASS.PNG} \end{center} We see that there are many potential obstructions to the existence of $\gamma''_4$ in $$ \Ext_{A(2)_*}^{2,-1+2}(\Sigma^{24}\ul{\bo}_3, \Sigma^{32}\ul{\bo}_4^{[8]}). $$ The potential obstructions for the related map $$ \mathrm{bo} \wedge_\mathrm{tmf} \gamma''_4: \mathrm{bo} \wedge_\mathrm{tmf} X_2 \rightarrow \Sigma^{32} \mathrm{bo} \wedge \mathrm{bo}_4^{[8]} $$ of $\mathrm{bo}$-modules in the ASS $$ \Ext^{s,t}_{A(1)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[8]}, \Sigma^{32}\ul{\bo}_4^{[8]}) \Rightarrow [\Sigma^{t-s}\mathrm{bo} \wedge_\mathrm{tmf} X_2, \Sigma^{32}\mathrm{bo} \wedge \mathrm{bo}_4^{[8]}]_\mathrm{bo} $$ may be analyzed, and the contribution coming from the summand $\Sigma^{24} \ul{\bo}_3 \subset \br{\ul{\mathrm{tmf}}}_3$ gives the following classes in the vicinity of $t-s = 0$: \begin{center} \includegraphics[width=1.5in]{A1X2bo3ASS.PNG} \end{center} We see that there is one potential obstruction to the existence of $\mathrm{bo} \wedge_\mathrm{tmf} \gamma''_4$ in $$ \Ext_{A(1)_*}^{2,-1+2}(\Sigma^{24}\ul{\bo}_3, \Sigma^{32}\ul{\bo}_4^{[8]}). 
$$ We analyze these potential obstructions through the following zig-zag of ASS's: $$ \xymatrix{ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[14]}, \Sigma^{32}\ul{\bo}_4^{[14]}) \ar@{=>}[r] \ar[d]_r & [\Sigma^{t-s}X_1,\Sigma^{32}\mathrm{tmf} \wedge \mathrm{bo}_4^{[14]}]_\mathrm{tmf} \ar[d] \\ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[8]}, \Sigma^{32}\ul{\bo}_4^{[14]}) \ar@{=>}[r] & [\Sigma^{t-s}X_2,\Sigma^{32}\mathrm{tmf} \wedge \mathrm{bo}_4^{[14]}]_\mathrm{tmf} \\ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[8]}, \Sigma^{32}\ul{\bo}_4^{[8]}) \ar[u]^{i} \ar[d]_{j} \ar@{=>}[r] & [\Sigma^{t-s}X_2,\Sigma^{32}\mathrm{tmf} \wedge \mathrm{bo}_4^{[8]}]_\mathrm{tmf} \ar[u] \ar[d]^{\mathrm{bo} \wedge_{\mathrm{tmf}} -} \\ \Ext^{s,t}_{A(1)_*}(\br{\ul{\mathrm{tmf}}}_3 \oplus \Sigma^{32}\ul{\bo}_4^{[8]}, \Sigma^{32}\ul{\bo}_4^{[8]}) \ar@{=>}[r] & [\Sigma^{t-s}\mathrm{bo} \wedge_\mathrm{tmf} X_2,\Sigma^{32}\mathrm{bo} \wedge \mathrm{bo}_4^{[8]}]_\mathrm{bo} . } $$ In the above diagram the potential obstruction $z$ to the existence of $\gamma_4$ maps under $r$ to a non-trivial class, so that if $z$ obstructs $\gamma_4$, then $r(z)$ obstructs the composite $$ \gamma_4|_{X_2}: X_2 \rightarrow X_1 \xrightarrow{\gamma_4} \Sigma^{32} \mathrm{tmf} \wedge \mathrm{bo}_4^{[14]}. $$ The key fact to check using Bruner's $\Ext$-software is that in bidegree $(t-s,s) = (-1,2)$, the maps $i$ and $j$ are both surjective, with the same kernel. It follows that if $\gamma_4|_{X_2}$ is obstructed by $r(z)$, then the map $$ \mathrm{bo} \wedge_\mathrm{tmf} \gamma''_4: \mathrm{bo} \wedge_\mathrm{tmf} X_2 \rightarrow \Sigma^{32}\mathrm{bo} \wedge \mathrm{bo}_4^{[8]} $$ is obstructed. 
We may now avail ourselves of the Bailey splitting of $\mathrm{bo} \wedge \mathrm{tmf}$: the map $\mathrm{bo} \wedge_\mathrm{tmf} \gamma''_4$ is unobstructed, because it is realized by the projection $$ \mathrm{bo} \wedge_\mathrm{tmf} X_2 \simeq \mathrm{bo} \wedge (\Sigma^8 \mathrm{bo}_1 \vee \Sigma^{16} \mathrm{bo}_2 \vee \Sigma^{24} \mathrm{bo}_3 \vee \Sigma^{32} \mathrm{bo}_4^{[8]}) \rightarrow \Sigma^{32} \mathrm{bo} \wedge \mathrm{bo}_4^{[8]}. $$ We conclude that $z$ cannot obstruct the existence of $\gamma_4$. We may therefore define $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_3$ to be the fiber of the map $\gamma_4$. We now need to show that the $\mathrm{tmf}$-module $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_3$ is built as $$ \Sigma^8 \mathrm{tmf} \wedge \mathrm{bo}_1 \cup \Sigma^{16} \mathrm{tmf} \wedge \mathrm{bo}_2 \cup \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}_3'. $$ In order to establish this decomposition, our first task is to construct a map of $\mathrm{tmf}$-modules $$ \gamma_3: \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \rightarrow \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}_3 $$ by analyzing the ASS $$ \Ext^{s,t}_{A(2)_*}(\br{\ul{\mathrm{tmf}}}_3, \Sigma^{24}\ul{\bo}_3) \Rightarrow [\Sigma^{t-s}\mathrm{tmf} \wedge \br{\mathrm{tmf}}_3, \Sigma^{24}\mathrm{bo}_3]_\mathrm{tmf}. $$ The only contributions in the vicinity of $t-s = 0$ come from the summands $\Sigma^{16} \ul{\bo}_2$ and $\Sigma^{24} \ul{\bo}_3$ of $\br{\ul{\mathrm{tmf}}}_3$: \begin{center} \includegraphics[width=3in]{tmf_2ASS.PNG} \end{center} As we see from the charts above, there is a potential obstruction to the existence of $\gamma_3$ in $$ \Ext^{2,-1+2}_{A(2)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{24}\ul{\bo}_3). $$ The Bailey splitting does not eliminate this potential obstruction, as $$ \Ext^{2,-1+2}_{A(1)_*}(\Sigma^{24}\ul{\bo}_3, \Sigma^{24}\ul{\bo}_3) = 0. 
$$ However, by Toda's Realization Theorem \cite{Miller}, this potential obstruction also corresponds to the existence of a different ``form'' of the $\mathrm{tmf}$-module $\mathrm{tmf} \wedge \mathrm{bo}_3$, with the same homology. Since $\Ext^{s,-2+s}_{A(2)_*}(\ul{\bo}_3, \ul{\bo}_3) = 0$ for $s \ge 3$, both forms are realized. It follows that if $\gamma_3$ is obstructed with the standard form, then it is unobstructed for the other form. Let $\mathrm{tmf} \wedge \mathrm{bo}'_3$ be the unobstructed form, so that there exists a map $$ \gamma_3 : \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \rightarrow \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}'_3. $$ The fiber of $\gamma_3$ is $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_2$, where $$ \br{\mathrm{tmf}}_2 \simeq \Sigma^8 \mathrm{bo}_1 \cup \Sigma^{16} \mathrm{bo}_2 $$ is the spectrum constructed by Davis, Mahowald, and Rezk. We note that there is a fiber sequence $$ \Sigma^8 \mathrm{tmf} \wedge \mathrm{bo}_1 \rightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \rightarrow \Sigma^{16}\mathrm{tmf} \wedge \mathrm{bo}_2 $$ since a quick check of $\Ext^{s,-1+s}(\ul{\bo}_i, \ul{\bo}_i)$ reveals there are no exotic ``forms'' of $\mathrm{tmf} \wedge \mathrm{bo}_i$ for $i = 1,2$, and $\Sigma^8 \mathrm{bo}_1$ is the $15$-skeleton of the $\mathrm{tmf}$-cell complex $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_2$. We now must produce the map of $\mathrm{tmf}$-modules $$ \alpha : \Sigma^{63}\mathrm{tmf} \rightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3. $$ This just corresponds to an element $\alpha \in \pi_{63} \mathrm{tmf} \wedge\br{\mathrm{tmf}}_3$. In the ASS $$ \Ext^{s,t}_{A(2)_*}(\ul{\br{\mathrm{tmf}}}_3) \Rightarrow \pi_{t-s}(\mathrm{tmf} \wedge \br{\mathrm{tmf}}_3) $$ there is a class $$ x_{63} \in \Ext^{4,4+63}_{A(2)_*}(\Sigma^{24}\ul{\bo}_3) \subseteq \Ext^{4,4+63}_{A(2)_*}(\ul{\br{\mathrm{tmf}}}_3) $$ (see Figure~\ref{fig:bo3andbo4}). 
Moreover, according to Figures~\ref{fig:bo1andbo2} and \ref{fig:bo3andbo4}, there are no possible targets of an Adams differential supported by this class. Therefore, $x_{63}$ corresponds to a permanent cycle: take $\alpha$ to be the element in homotopy detected by it. The factorization $$ \xymatrix{ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \ar[r] \ar[d] & \mathrm{tmf} \wedge \mathrm{tmf} \\ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64} \mathrm{tmf} \ar@{.>}_{\iota}[ur] } $$ exists because the element $x_{63}$, when regarded as an element of the ASS $$ \Ext^{s,t}_{A(2)_*}(H_*\mathrm{tmf}) \Rightarrow \pi_{t-s} (\mathrm{tmf} \wedge \mathrm{tmf}), $$ is the target of a differential $$ d_3([\td{f^2_9}/2]) = x_{63} $$ (see Figure~\ref{fig:tmf5}). The modified ASS $$ \Ext^{*,*}_{A(2)_*}(\Sigma^8 \ul{\bo}_1 \oplus \Sigma^{16} \ul{\bo}_2 \oplus \Sigma^{24} \ul{\bo}_3 \oplus \Sigma^{32}\td{\ul{\bo}}_4) \Rightarrow \pi_* \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf} $$ is constructed by taking the modified Adams resolution $$ \xymatrix{ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf} \ar[d]_{\rho} & Y_1 \ar[l] \ar[d] & Y_2 \ar[l] \ar[d] & \cdots, \ar[l] \\ H \wedge \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 & H \wedge Y_1 & H \wedge Y_2 } $$ where the map $\rho$ is the composite $$ \rho: \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf} \rightarrow H \wedge \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf} \xrightarrow{s} H \wedge \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3, $$ $s$ is a section of the inclusion $$ H \wedge \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \hookrightarrow H \wedge \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf}, $$ $Y_1$ is the fiber of $\rho$, and $Y_i$ is the fiber of the map $$ Y_{i-1} \rightarrow H \wedge Y_{i-1}. 
$$ The map from this modified ASS to the ASS for $\mathrm{tmf} \wedge \mathrm{tmf}$ arises from the existence of a commutative diagram $$ \xymatrix{ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf} \ar[r]^-{\iota} \ar[d]_\rho & \mathrm{tmf} \wedge \mathrm{tmf} \ar[d] \\ H \wedge \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \ar@{^{(}->}[r] & H \wedge \mathrm{tmf} \wedge \mathrm{tmf}. } $$ (This diagram commutes since the class $[\td{f^2_9}/2]$ killing $x_{63}$ in the ASS for $\mathrm{tmf} \wedge \mathrm{tmf}$ has Adams filtration $1$.) \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:tdtmf03}] Define a $2$-variable modular form $$ \td{\td{f_9}} = f_9 -\frac{212}{315} c_4 f_4 - \frac{34}{441} c_4 f_5 + \frac{2501}{11025} f_1^2 c_4^2 - 851 f_1 \Delta $$ so that $[\td{\td{f_9}}] = [f_9]$ and $\Psi_3(\td{\td{f_9}}) = 0$. (This form was produced by executing an integral variant of the ``row-reduction'' method outlined in Step 1 of Example~\ref{ex:echelon1}.) Then we may take the attaching map $$ \beta : \Sigma^{32} \mathrm{tmf} \rightarrow \mathrm{tmf} \wedge \mathrm{tmf} $$ to be the map of $\mathrm{tmf}$-modules corresponding to the homotopy class $$ \td{\td{f_9}} \in \pi_{32}\mathrm{tmf} \wedge \mathrm{tmf}. $$ We define $$ \td{\mathrm{tmf}}_0(3) := \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \cup_\beta \Sigma^{33} \mathrm{tmf}. $$ Since $\Psi_3(\td{\td{f_9}}) = 0$, there is a factorization $$ \xymatrix{ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \ar@{^{(}->}[r] \ar[d] & \mathrm{tmf} \wedge \mathrm{tmf} \ar[r]^{\Psi_3} & \mathrm{TMF}_0(3). \\ \td{\mathrm{tmf}}_0(3) \ar@{.>}[urr] } $$ The rest of the theorem is fairly straightforward given this, and our analysis of $\Psi_3$ in the previous section. 
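The integral ``row-reduction'' used to produce $\td{\td{f_9}}$ is, at bottom, exact linear algebra on $q$-expansion coefficients. The following sketch (toy matrix only, not the actual basis of $2$-variable modular forms) illustrates the kind of exact-fraction elimination involved.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over Q, computed with exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row with a non-zero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]
        # clear this column in every other row
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m

# Toy example: eliminate the leading coefficients of the last row
# against the first two rows, keeping all arithmetic exact.
example = rref([[2, 1, 5], [0, 3, 1], [4, 5, 11]])
print(example)
```

In practice one additionally tracks $2$-integrality of the resulting coefficients, which is what the integral variant of the method arranges.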
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:tdtmf05}] An analysis of the Adams $E_2$-terms in low dimensions reveals that the only non-trivial attaching map of $\mathrm{tmf}$-modules $$ \upsilon: \Sigma^{23} \mathrm{tmf} \wedge \mathrm{bo}'_3 \rightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 $$ must factor as \begin{equation}\label{eq:upsilon} \upsilon: \Sigma^{23} \mathrm{tmf} \wedge \mathrm{bo}'_3 \xrightarrow{\upsilon'} \Sigma^{32} \mathrm{tmf} \xrightarrow{\beta} \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2, \end{equation} where $\upsilon'$ is the unique non-trivial class in that degree. The existence of differentials in Figure~\ref{fig:tmf5} from $\ul{\bo}_3$-classes to $\td{\td{\ul{\bo}}}_2$-classes implies that in $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_3$, $\mathrm{tmf} \wedge \mathrm{bo}_3$ must be attached non-trivially to $\mathrm{tmf} \wedge \br{\mathrm{tmf}}_2$, and we therefore have $$ \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \simeq \mathrm{tmf} \wedge \br{\mathrm{tmf}}_2 \cup_\upsilon \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}'_3. $$ When applied to the factorization (\ref{eq:upsilon}), Verdier's Axiom implies that there is a fiber sequence $$ \Sigma^{32}\mathrm{tmf} \cup_{\upsilon'} \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}'_3 \rightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \rightarrow \td{\mathrm{tmf}}_0(3). $$ Now, an easy check with the ASS reveals that the composite $$ \Sigma^{63} \mathrm{tmf} \xrightarrow{\alpha} \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \rightarrow \td{\mathrm{tmf}}_0(3) $$ is null, from which it follows that there is a lift $$ \xymatrix{ & \Sigma^{32} \mathrm{tmf} \cup_{\upsilon'} \Sigma^{24} \mathrm{tmf} \wedge \mathrm{bo}_3' \ar[d] \\ \Sigma^{63} \mathrm{tmf} \ar[r]_{\alpha} \ar@{.>}[ur]^{{\alpha'}} & \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3. } $$ Define $$ \td{\mathrm{tmf}}_0(5) := \Sigma^{32}\mathrm{tmf} \cup_{\upsilon'} \Sigma^{24}\mathrm{tmf} \wedge \mathrm{bo}'_3 \cup_{\alpha'} \Sigma^{64} \mathrm{tmf}. 
$$ Verdier's Axiom, applied to the factorization above, gives a fiber sequence $$ \td{\mathrm{tmf}}_0(5) \rightarrow \mathrm{tmf} \wedge \br{\mathrm{tmf}}_3 \cup_\alpha \Sigma^{64}\mathrm{tmf} \rightarrow \td{\mathrm{tmf}}_0(3). $$ Given our analysis of $\Psi_5$, the rest of the statements of the theorem are now fairly straightforward. \end{proof} \section{Introduction}\label{sec:intro} The Adams-Novikov spectral sequence based on a connective spectrum $ E $ ($ E $-ANSS) is perhaps the best available tool for computing stable homotopy groups. For example, $ H\mathbb{F}_p $ and $\mr{BP}$ give the classical Adams spectral sequence and the Adams-Novikov spectral sequence, respectively. To begin to compute with the $E$-ANSS, one needs to know the structure of the smash powers $ E^{\wedge k} $. When $ E $ is one of $ H\mathbb{F}_p $, $ \mathrm{MU} $, or $ \mr{BP} $, the situation is simpler than in general, since in this case $ E\wedge E $ is an infinite wedge of suspensions of $ E $ itself, which allows for an algebraic description of the $ E_2 $-term. This is not the case for $ \mr{bu}, \mathrm{bo}, $ or $ \mathrm{tmf} $, in which case the $ E_2 $ page is harder to describe, and in fact, has not yet been described in the case of $ \mathrm{tmf} $. Mahowald and his collaborators have studied the $2$-primary $ \mathrm{bo} $-ANSS to great effect: it gives the only known approach to the calculation of the telescopic $2$-primary $ v_1 $-periodic homotopy of the sphere spectrum \cite{LellmannMahowald,boresolutions}. The starting input in that calculation is a complete description of $ \mathrm{bo}\wedge \mathrm{bo} $ as an infinite wedge of spectra, each of which is a smash product of $\mathrm{bo}$ with a suitable finite complex (as in \cite{Milgram-connectivektheory} and others). The finite complexes involved are the so-called integral Brown-Gitler spectra. (See also the related work of \cite{ClarkeCrossleyWhitehouse1,ClarkeCrossleyWhitehouse2, MR2434436}.) 
Mahowald has worked on a similar description for $ \mathrm{tmf}\wedge \mathrm{tmf} $, but concluded that no analogous result could hold. In this paper we use his insights to explore four different perspectives on $2$-primary $\mathrm{tmf}$-cooperations. While we do not arrive at a complete and closed-form description of $\mathrm{tmf} \wedge \mathrm{tmf}$, we believe our results have the potential to be very useful as a computational tool. The four perspectives are the following. \begin{enumerate} \item The $E_2$-term of the $2$-primary Adams spectral sequence for $\mathrm{tmf} \wedge \mathrm{tmf}$ admits a splitting in terms of $\mathrm{bo}$-Brown-Gitler modules: $$ \Ext(\mathrm{tmf} \wedge \mathrm{tmf}) \cong \bigoplus_i \Ext(\Sigma^{8i} \mathrm{tmf} \wedge \mathrm{bo}_i). $$ \item Modulo torsion, $\mathrm{TMF}_*\mathrm{TMF}$ is isomorphic to a subring of the ring of integral two variable modular forms. \item $K(2)$-locally, the ring spectrum $(\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)}$ is given by an equivariant function spectrum: $$ (\mathrm{TMF} \wedge \mathrm{TMF})_{K(2)} \simeq \Map^c(\mathbb{G}_2/G_{48}, E_2)^{hG_{48}}. $$ \item By our Theorem \ref{thm:faithful}, $\mathrm{TMF}_*\mathrm{TMF}$ injects into a certain product of homotopy groups of topological modular forms with level structures: $$ \mathrm{TMF} \wedge \mathrm{TMF} \hookrightarrow \prod_{\substack{i \in \mathbb{Z}, \\ j \ge 0}} \mathrm{TMF}_0(3^j) \times \mathrm{TMF}_0(5^j). $$ \end{enumerate} The purpose of this paper is to describe and investigate the relationship between these different perspectives. As an application of our method, in Theorems \ref{thm:tdtmf03} and \ref{thm:tdtmf05} we construct connective covers $\td{\mathrm{tmf}}_0(3)$ and $\td{\mathrm{tmf}}_0(5)$ of the periodic spectra $\mathrm{TMF}_0(3)$ and $\mathrm{TMF}_0(5)$, respectively, recovering and extending previous results of Davis, Mahowald, and Rezk \cite{MRlevel3}, \cite{MahowaldConnective}. 
Others have also investigated the ring of cooperations for elliptic cohomology. Clarke and Johnson \cite{MR1232203} conjectured that $\mathrm{TMF}_0(2)_*\mathrm{TMF}_0(2)$ was given by the ring of $2$-variable modular forms for $\Gamma_0(2)$ over $\mathbb{Z}[1/2]$. Versions of this conjecture were subsequently verified by Baker \cite{MR1307488} (in the case of $\mathrm{TMF}[1/6]$) and Laures \cite{Laures} (for all $\mathrm{TMF}(\Gamma)[S^{-1}]$ associated to congruence subgroups, where $S$ is a large enough set of primes to make the theory Landweber exact). This previous work clearly feeds into perspective (2) (indeed Laures' work is cited as an initial step to establishing perspective (2)). In retrospect, Baker's work also contains observations related to perspective (4): in \cite{MR1307488} he observes that the ring of $2$-variable modular forms can be regarded as a certain space of functions on a space of isogenies of elliptic curves. \subsection{A tour of the paper} For the reader's convenience, we take some time here to outline the contents of the paper. \subsection*{Section 2} This section is devoted to the motivating example of $\mathrm{bo}\wedge\mathrm{bo}$. Sections 2.1-2.4 are primarily expository, based upon the foundational work of Adams, Lellmann, Mahowald, and Milgram. We make an effort to consolidate their theorems and recast them in modern notation and terminology, and hope that this will prove a useful resource to those trying to learn the classical theory of $\mathrm{bo}$-cooperations and $v_1$-periodic stable homotopy. To the best of our knowledge, Sections 2.5-2.6 provide new perspectives on this subject. In Section 2.1, we review the theory of integral Brown-Gitler spectra $\mr{H}\mathbb{Z}_i$ and the splitting \[ \mathrm{bo}\wedge\mathrm{bo}\simeq \bigvee_{i\ge 0}\mathrm{bo}\wedge\Sigma^{4i}\mr{H}\mathbb{Z}_i. 
\] Section 2.2 is devoted to the homology of the $\mr{H}\mathbb{Z}_i$ and certain $\Ext_{A(1)_*}$-computations relevant to the Adams spectral sequence computation of $\mathrm{bo}_*\mathrm{bo}$. We shift perspectives in Section 2.3 and recall Adams's description of $\mr{KU}_*\mr{KU}$ in terms of numerical polynomials. This allows us to study the image of $\mr{bu}_*\mr{bu}$ in $\mr{KU}_*\mr{KU}$ as a warm-up for our study of the image of $\mathrm{bo}_*\mathrm{bo}$ in $\mr{KO}_*\mr{KO}$. We undertake this latter study in Section 2.4, where we ultimately describe a basis of $\mr{KO}_0\mathrm{bo}$ in terms of the ``9-Mahler basis" for 2-adic numerical polynomials with domain $2\mathbb{Z}_2$. By studying the Adams filtration of this basis, we are able to use the above results to fully describe $\mathrm{bo}_*\mathrm{bo}$ mod $v_1$-torsion elements. In Section 2.5, we link the above two perspectives, studying the image of $\mathrm{bo}_*\mr{H}\mathbb{Z}_i$ in $\mr{KO}_*\mr{KO}$. Theorem \ref{thm:HZjImage} provides a complete description of this image (mod $v_1$-torsion) in terms of the 9-Mahler basis. We conclude with Section 2.6 which studies a certain map \[ \mr{KO}\wedge\mr{KO} \xrightarrow{\prod \widetilde{\psi}^{3^k}} \prod_{k\in \mathbb{Z}} \mr{KO} \] constructed from Adams operations. We show that this map is an injection after applying $\pi_*$ and exhibit how it interacts with the Brown-Gitler decomposition of $\mathrm{bo}\wedge\mathrm{bo}$. \subsection*{Section 3} In Section 3, we recall certain essential features of $\mathrm{TMF}$ and $\mathrm{tmf}$, the periodic and connective topological modular forms spectra. Section 3.1 reviews the Goerss-Hopkins-Miller sheaf of $E_\infty$-ring spectra, $\mathcal{O}^{top}$, on the moduli stack of smooth elliptic curves $\mathcal{M}$. 
One can use this sheaf to construct $\mathrm{TMF}$ (sections on $\mathcal{M}$ itself), $\mathrm{TMF}_1(n)$ (sections on the moduli stack of $\Gamma_1(n)$-structures after inverting $n$), and $\mathrm{TMF}_0(n)$ (sections on the moduli stack of $\Gamma_0(n)$-structures after inverting $n$). We consider the maps \[ f,q:\mathrm{TMF}[1/n]\to \mathrm{TMF}_0(n) \] induced by forgetting the level structure and taking the quotient by it, respectively. We use these maps to produce a $\mathrm{TMF}[1/n]$-module map \[ \Psi_n:\mathrm{TMF}[1/n]\wedge \mathrm{TMF}[1/n]\to \mathrm{TMF}_0(n) \] important in our subsequent studies. Section 3.2 reviews Lawson and Naumann's work on the construction of $\mr{BP}\langle 2\rangle$ as the $E_\infty$-ring spectrum $\mathrm{tmf}_1(3)$. We use formal group laws and some computer calculations to compute the maps \[ \mr{BP}_*\to \mathrm{tmf}_1(3)_*,\quad \mr{BP}_*\mr{BP}\to \mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3). \] We isolate the lowest Adams filtration portion of this map in Section 3.4 via our computation of $\pi_* f:\mathrm{TMF}_*\to \mathrm{TMF}_1(3)_*$ in Section 3.3. Finally, we review the $K(2)$-local version of $\mathrm{TMF}\wedge\mathrm{TMF}$ in Section 3.5. \subsection*{Section 4} With the stage set, our work begins in earnest in Section 4. Here we study the Adams spectral sequence for $\mathrm{tmf}\wedge\mathrm{tmf}$. Section 4.1 begins with a review of mod 2, integral, and $\mathrm{bo}$-Brown-Gitler spectra. Our interest stems from the fact that the $E_2$-term of the Adams spectral sequence for $\mathrm{tmf}\wedge\mathrm{tmf}$ splits as a direct sum of $\Ext$-groups for the $\mathrm{bo}$-Brown-Gitler spectra. We study the rational behavior of this sequence in Section 4.2, observing that it collapses after inverting $v_0$. This provides a precise computation of the map \[ v_0^{-1}\Ext(\mathrm{tmf}\wedge \Sigma^{8j}\mathrm{bo}_j)\to v_0^{-1}\Ext(\mathrm{tmf}\wedge\mathrm{tmf}). 
\] Section 4.3 reviews known exact sequences relating $\mathrm{bo}$-Brown-Gitler modules, which allows inductive computations of $\Ext(\mathrm{tmf}\wedge\Sigma^{8j}\mathrm{bo}_j)$ relative to $\Ext(\mathrm{tmf}\wedge\mathrm{bo}_1^{\wedge k})$. We produce detailed charts for $\Ext(\mathrm{tmf}\wedge\Sigma^{8j}\mathrm{bo}_j)$ for $j\le 6$. Sections 4.4 and 4.5 are concerned with identifying the generators of the lattice \[ \Ext(\mathrm{tmf}\wedge\Sigma^{8j}\mathrm{bo}_j)/v_0\text{-torsion} \] inside of the ``vector space'' \[ v_0^{-1}\Ext(\mathrm{tmf}\wedge\Sigma^{8j}\mathrm{bo}_j). \] In Section 4.4, we produce an inductive method compatible with the exact sequences of Section 4.3. Section 4.5 completes the task of computing said generators. \subsection*{Section 5} In Section 5, we study the role of 2-variable modular forms in $\mathrm{tmf}$-cooperations. Laures has proved that, after inverting 6, $\mathrm{TMF}$-cooperations are precisely the 2-variable $\Gamma(1)$ modular forms (meromorphic at the cusp). After reviewing his work in Section 5.1, we adapt it to the study of $\mathrm{TMF}_*\mathrm{TMF}$ modulo torsion in Section 5.2. In particular, we prove that 2-integral 2-variable $\Gamma(1)$-modular forms (again meromorphic at the cusp) are exactly the $0$-line of a descent spectral sequence for $\mathrm{TMF}_*\mathrm{TMF}$. The efficacy of this result becomes apparent in Section 5.3 where we prove that $\mathrm{tmf}_*\mathrm{tmf}$ modulo torsion injects into the ring of 2-integral 2-variable modular forms with nonnegative Adams filtration. Moreover, the injection is a rational isomorphism; once again we are primed to identify the generators of a lattice inside a vector space. Sections 5.4 and 5.5 undertake the task of detecting 2-variable modular forms in the Adams spectral sequence for $\mathrm{tmf}\wedge\mathrm{tmf}$, resulting in a table of 2-variable modular form generators of $\Ext(\mathrm{tmf}\wedge\mathrm{tmf})/\text{torsion}$ in dimensions $\le 64$. 
\subsection*{Section 6} Our final section studies the level structure approximation map \[ \Psi:\mathrm{tmf}\wedge\mathrm{tmf}\to \prod_{i\in\mathbb{Z},j\ge 0} \mathrm{TMF}_0(3^j)\times \mathrm{TMF}_0(5^j). \] The first theorem of Section 6.1 is that the analogous map \[ \psi:\mathrm{TMF}\wedge\mathrm{TMF}\to \prod_{i\in\mathbb{Z},j\ge 0} \mathrm{TMF}_0(3^j)\times \mathrm{TMF}_0(5^j) \] induces an injection on homotopy groups. The proof is quite involved. It includes a reduction to a $K(2)$-local variant of the theorem, whose proof in turn requires the key technical Lemma \ref{lem:faithful} on detecting homotopy fixed points of profinite groups using dense subgroups. In Section~\ref{sec:psicomp}, we compute the effect of the maps \begin{align*} \Psi_3: \frac{\pi_* \mathrm{tmf} \wedge \mathrm{tmf}}{\mit{torsion}} & \rightarrow \pi_* \mathrm{TMF}_0(3), \\ \Psi_5: \frac{\pi_* \mathrm{tmf} \wedge \mathrm{tmf}}{\mit{torsion}} & \rightarrow \pi_* \mathrm{TMF}_0(5) \end{align*} on certain submodules of $\pi_*\mathrm{tmf} \wedge \mathrm{tmf}$. In Section~\ref{sec:diffext}, we observe that these computations allow us to deduce differentials and hidden extensions in the corresponding portion of the ASS for $\mathrm{tmf} \wedge \mathrm{tmf}$ using the known homotopy of $\mathrm{TMF}_0(3)$ and $\mathrm{TMF}_0(5)$. Davis, Mahowald, and Rezk \cite{MRlevel3}, \cite{MahowaldConnective} observed that one can build a connective cover $$ \td{\mathrm{tmf}}_0(3) \rightarrow \mathrm{TMF}_0(3) $$ out of $\mathrm{tmf} \wedge \mathrm{bo}_1$ and a piece of $\mathrm{tmf} \wedge \mathrm{bo}_2$. In Section~\ref{sec:cover}, we reprove this result, and relate this connective cover to our map $\Psi_3$. We also show that similar methods allow us to build a connective cover $$ \td{\mathrm{tmf}}_0(5) \rightarrow \mathrm{TMF}_0(5) $$ out of the other part of $\mathrm{tmf} \wedge \mathrm{bo}_2$, $\mathrm{tmf} \wedge \mathrm{bo}_3$, and a piece of $\mathrm{tmf} \wedge \mathrm{bo}_4$.
\subsection{Notation and conventions} In this paper, unless we say explicitly otherwise, we shall always be implicitly working $2$-locally. We denote homology by $H_*$, and it will be taken with mod $2$ coefficients, unless specified otherwise. We let $A = H^*H$ denote the mod 2 Steenrod algebra, and \[ A_*=H_*H\cong \mathbb{F}_2[\xi_1,\xi_2,\ldots] \] denotes its dual. In any Hopf algebra, we let $\overline{x}$ denote the antipode of $x$. We let $A(i)$ denote the subalgebra of $A$ generated by $\mathrm{Sq}^1,\cdots,\mathrm{Sq}^{2^i}$. Let $A//A(i)$ be the Hopf algebra quotient of $A$ by $A(i)$ and let $(A//A(i))_*$ be the dual of this Hopf algebra. We will use $\Ext(X)$ to abbreviate $\Ext_{A_*}(\mathbb{F}_2, H_* X)$, the $E_2$-term of the Adams spectral sequence (ASS) for $\pi_* X$ and will let $C^*_{A_*}(H^*X)$ denote the corresponding cobar complex. Given an element $x \in \pi_* X$, we shall let $[x]$ denote the coset of the ASS $E_2$-term which detects $x$. We let $AF(x)$ denote the Adams filtration of $x$. We write $\mr{bu}$ for the connective complex $\mr{K}$-theory spectrum, $\mathrm{bo}$ for the connective real $K$-theory spectrum, and $\mathrm{bsp}$ for the connective symplectic $K$-theory spectrum, so that $\Sigma^4 \mathrm{bsp}$ is the $3$-connected cover of $\mathrm{bo}$. \section{Approximating by level structures}\label{sec:approxlevel} Recall from \S\ref{sec:review} the maps \[ \Psi_n:\mathrm{TMF}[1/n]\wedge \mathrm{TMF}[1/n]\to \mathrm{TMF}_0(n) \] and \[ \phi_{[n]}:\mathrm{TMF}\wedge \mathrm{TMF}[1/n]\to \mathrm{TMF}\wedge \mathrm{TMF}[1/n]. \] Here $\Psi_n$ is induced by the forgetful and quotient maps $f,q:\mathcal{M}_0(n)\to \mathcal{M}[1/n]$, while $\phi_{[n]} = 1\wedge [n]$ where $[n]:\mathrm{TMF}[1/n]\to \mathrm{TMF}[1/n]$ is the ``Adams operation'' associated to the multiplication by $n$ isogeny on $\mathcal{M}[1/n]$. 
For reasons which will become clear in the next section, we are interested in the composite map $\Psi$ given as \[ \xymatrix{ \mathrm{tmf}\wedge\mathrm{tmf} \ar[rr]^-\Psi\ar[d] &&\prod\limits_{i\in \mathbb{Z},j\ge 0}\mathrm{TMF}_0(3^j)\times\mathrm{TMF}_0(5^j),\\ \mathrm{TMF}\wedge \mathrm{TMF} \ar[urr]_-{\psi} } \] where \[ \psi = \prod_{i\in \mathbb{Z},j\ge 0}\Psi_{3^j}\phi_{[3^i]}\times \Psi_{5^j}\phi_{[5^i]}. \] We will abuse notation and refer to the composite \[ \mathrm{tmf}\wedge\mathrm{tmf} \to \mathrm{TMF}\wedge\mathrm{TMF} \xrightarrow{\Psi_n} \mathrm{TMF}_0(n) \] (for $(2,n)=1$) as $\Psi_n$ as well; these are the $i=0$ factors of $\Psi$. In order to study $\Psi_n$ we consider the square \[\xymatrix{ \mathrm{tmf}_*\mathrm{tmf} \ar[r]^{\pi_*\Psi_n}\ar[d] &\pi_*\mathrm{TMF}_0(n)\ar[d]\\ M_*^{2-var}(\Gamma(1)) \ar[r]_-{\psi_n} &M_*(\Gamma_0(n)). }\] Here the left-hand vertical map is the composite \[ \mathrm{tmf}_*\mathrm{tmf} \to \mathrm{tmf}_*\mathrm{tmf} /tors\hookrightarrow M_*^{2-var}(\Gamma(1)) ^{AF\ge 0}\hookrightarrow M_*^{2-var}(\Gamma(1)) , \] and $M_*(\Gamma_0(n))$ is the ring of level $\Gamma_0(n)$-modular forms. The bottom horizontal map is also induced by $f$ and $q$; if we consider a $2$-variable modular form as a polynomial $p(c_4,c_6,\bar{c}_4,\bar{c}_6)$, then $\psi_n(p) = p(f^*c_4,f^*c_6,q^*\bar{c}_4,q^*\bar{c}_6)$. We are especially interested in the cases $n=3,5$. Recall from \cite{MRlevel3} (or \cite[\S3.3]{Q5}) that $M_*(\Gamma_0(3))$ has a convenient presentation as a subalgebra of $M_*(\Gamma_1(3))$. More precisely, $M_*(\Gamma_1(3)) = \mathbb{Z} [a_1,a_3,\Delta^{-1}]$ with $\Delta = a_3^3(a_1^3-27a_3)$, and $M_*(\Gamma_0(3)) $ is the subring \[ M_*(\Gamma_0(3)) = \mathbb{Z} [a_1^2,a_1a_3,a_3^2,\Delta^{-1}]. \] Using the formulas from \emph{loc.~cit.}, we may compute \[ \begin{aligned} f^*(c_4) &= a_1^4-24a_1a_3, &q^*(c_4)&= a_1^4 + 216a_1a_3,\\ f^*(c_6) &= -a_1^6+36a_1^3a_3-216a_3^2, &q^*(c_6)&= -a_1^6 + 540a_1^3a_3 + 5832a_3^2. 
\end{aligned} \] There are similar formulas for the $n=5$ case which we recall from \cite[\S3.4]{Q5}. Here the ring of $\Gamma_0(5)$-modular forms takes the form \[ M_*(\Gamma_0(5)) = \mathbb{Z} [b_2,b_4,\delta,\Delta^{-1}]/(b_4^2=b_2^2\delta-4\delta^2), \] where $|b_2| = 2$ and $|b_4| = |\delta| = 4$. (These are the algebraic, rather than topological, degrees.) The discriminant takes the form \[ \Delta = \delta^2b_4-11\delta^3 \] and we have \[ \begin{aligned} f^*(c_4) &= b_2^2-12b_4+12\delta, &q^*(c_4) &= b_2^2+228b_4+492\delta,\\ f^*(c_6) &= -b_2^3+18b_2b_4-72b_2\delta, &q^*(c_6) &= -b_2^3+522b_2b_4+10008b_2\delta. \end{aligned} \] \section{Recollections on topological modular forms}\label{sec:review} \subsection{Generalities} \emph{In this subsection, we work integrally.} The remainder of this paper is concerned with determining as much information as we can about the cooperations in the homology theory $\mathrm{tmf}$ of connective topological modular forms, following our guiding example of $\mathrm{bo}$. Even more than in the $\mathrm{bo}$ case, an extensive cast of characters will play supporting roles. First of all, we will extensively use the periodic spectrum $\mathrm{TMF}$, which is the analogue of $\mr{KO}$. In particular, we will use the fact that this periodic form of topological modular forms arises as the global sections of the Goerss-Hopkins-Miller sheaf of ring spectra $\mathcal{O}^{top}$ on the moduli stack of smooth elliptic curves $\mathcal{M}$. As the associated homotopy sheaves are \[\pi_{k}\mathcal{O}^{top}=\begin{cases} \omega^{\otimes k/2}, & \text{if } k \text{ is even},\\ 0, & \text{if } k \text{ is odd}, \end{cases} \] there is a descent spectral sequence \[ H^s(\mathcal{M}, \omega^{\otimes t}) \Rightarrow \pi_{2t-s} \mathrm{TMF}.\] Morally, the connective $\mathrm{tmf}$ should arise as global sections of an analogous sheaf on the moduli stack of all cubic curves (i.e.
allowing nodal and cuspidal singularities); however, this has not been formally carried out. Nevertheless, $\mathrm{tmf}$ can be constructed as an $\Einf$-ring spectrum from $\mathrm{TMF}$ as a result of the gap in the homotopy of a third, non-connective and non-periodic, version of topological modular forms associated to the compactification of $\mathcal{M}$. Rationally, every smooth elliptic curve $C/S$ is locally isomorphic to a cubic of the form \[ y^{2}= x^{3}-27c_{4}x-54c_{6},\] with the discriminant $\Delta=(c_{4}^{3}-c_{6}^{2})/1728$ invertible. Here $c_i$ is a section of the line bundle $\omega^{\otimes i}$ over the \'etale map $S\to \mathcal{M}$ classifying $C$. This translates to the fact that $\mathcal{M}_{\mathbb{Q}}\cong \Proj \mathbb{Q}[c_4,c_6][\Delta^{-1}]$, which in turn implies that $(\mathrm{TMF}_{*})_{\mathbb{Q}}=\mathbb{Q}[c_{4},c_{6}][\Delta^{-1}].$ The connective version has $(\mathrm{tmf}_{*})_{\mathbb{Q}}=\mathbb{Q}[c_{4},c_{6}]$. The spectrum of topological modular forms is, of course, not complex orientable, and just like in the case of $\mathrm{bo}$, we will need the aid of a related complex orientable spectrum. The periodic spectrum $\mathrm{TMF}$ admits ring maps to several families of orientable (as well as non-orientable) spectra which come from the theory of elliptic curves. Namely, an elliptic curve $C$ is an abelian group scheme, and in particular it has a subgroup scheme $C[n]$ of points of order $n$ for any positive integer $n$. When $n$ is invertible, $C[n]$ is locally isomorphic to the constant group $(\mathbb{Z}/n)^{2}$. Based on this observation, there are various additional structures that one can assign to an elliptic curve. In this work we will be concerned with two types, the so-called $\Gamma_1(n)$ and $\Gamma_0(n)$ level structures.
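To make the two kinds of structure concrete, here is the standard count for $n=3$ (a heuristic bookkeeping over an algebraically closed field with $3$ invertible, ignoring automorphisms of the curves; it is not needed in the sequel). Since $C[3]\cong(\mathbb{Z}/3)^2$,

```latex
\#\{\text{points of exact order }3\text{ on }C\} \;=\; 3^{2}-1 \;=\; 8,
\qquad
\#\{\text{cyclic subgroups of order }3\text{ in }C\} \;=\; 8/2 \;=\; 4,
```

because each cyclic subgroup of order $3$ contains exactly two generators. Thus the map sending a point of order $3$ to the subgroup it generates has (coarse) degree $2$, while forgetting the subgroup altogether has (coarse) degree $4$.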
A $\Gamma_1(n)$ level structure on an elliptic curve $C$ is a specification of a point $P$ of (exact) order $n$ on $C$, whereas a $\Gamma_0(n)$ level structure is a specification of a cyclic subgroup $H$ of $C$ of order $n$. The corresponding moduli problems are denoted $\Msl{n}$ and $\Mcl{n}$. Assigning to the pair $(C,P)$ the pair $(C,H_{P})$, where $H_P$ is the subgroup of $C$ generated by $P$, determines an \'etale map of moduli stacks \[ g: \Msl{n}\to \Mcl{n}. \] Moreover, there are two morphisms \[f,q: \Mcl{n}\to \mathcal{M}[1/n] \] which are \'etale; $f$ forgets the level structure whereas $q$ quotients $C$ by the level structure subgroup. Composing with $g$, we obtain analogous maps from $\Msl{n}$. We can take sections of $\mathcal{O}^{top}$ over the forgetful maps and obtain ring spectra $\mathrm{TMF}_1(n)$ and $\mathrm{TMF}_0(n)$, ring maps $\mathrm{TMF}[1/n]\to \mathrm{TMF}_0(n)\to \mathrm{TMF}_1(n)$ as well as maps of descent spectral sequences \[\xymatrix{ H^*(\mathcal{M}[1/n], \omega^{\otimes *} ) \ar@{=>}[r]\ar[d] &\pi_* \mathrm{TMF}[1/n] \ar[d]\\ H^*(\mathcal{M}_?(n), \omega^* ) \ar@{=>}[r] & \pi_* \mathrm{TMF}_?(n), }\] obtained by pulling back. In particular, for any odd integer $n$ we have such a situation $2$-locally. We use the ring map $f:\mathrm{TMF}[1/n]\to \mathrm{TMF}_0(n)$ induced by the forgetful $f:\Mcl{n} \to \mathcal{M}[1/n]$ to equip $\mathrm{TMF}_0(n)$ with a $\mathrm{TMF}[1/n]$-module structure. With this convention, the map $q:\mathrm{TMF}[1/n]\to \mathrm{TMF}_0(n)$ induced by the quotient map on the moduli stacks does not respect the $\mathrm{TMF}[1/n]$-module structure. However, one can uniquely extend $q$ to \begin{align}\label{eq:mapPsi_n} \xymatrix{ \mathrm{TMF}[1/n] \ar[r]^{q}\ar[d] & \mathrm{TMF}_0(n).\\ \mathrm{TMF}[1/n] \wedge \mathrm{TMF}[1/n] \ar@{-->}[ru]_{\Psi_n} } \end{align} Another way to define $\Psi_n$ is as the composition of $f\wedge q$ with the multiplication on $\mathrm{TMF}_0(n)$. 
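Spelled out, that second description of $\Psi_n$ is the composite

```latex
\Psi_n:\ \mathrm{TMF}[1/n]\wedge\mathrm{TMF}[1/n]
\xrightarrow{\ f\wedge q\ }
\mathrm{TMF}_0(n)\wedge\mathrm{TMF}_0(n)
\xrightarrow{\ \mu\ }
\mathrm{TMF}_0(n),
```

where $\mu$ denotes the ring multiplication of $\mathrm{TMF}_0(n)$; restricting along the two smash factors recovers $f$ and $q$, respectively.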
Finally, we will be interested in the morphism \[ \phi_{[n]}:\mathcal{M}[1/n]\to \mathcal{M}[1/n], \] which is the \'{e}tale map induced by the multiplication-by-$n$ isogeny on an elliptic curve, and the induced map $\phi_{[n]}:\mathrm{TMF}[1/n]\to \mathrm{TMF}[1/n]$ is an Adams operation on $\mathrm{TMF}[1/n]$. In Section \ref{sec:approxlevel} below, we will make heavy use of the maps $\Psi_3$ and $\Psi_5$. Their usefulness is due to the relative ease with which their behavior on non-torsion homotopy groups can be computed. \subsection{Details on $\mathrm{tmf}_1(3) $ as $\mr{BP}\bra{2}$} \emph{We return to the convention that everything is $2$-local.} The significance of $\mr{bu}$ in the computation of $\mathrm{bo}_*\mathrm{bo}$ was that at the prime $2$, $\mr{bu}$ is a truncated Brown-Peterson spectrum $\mr{BP}\bra{1}$ with a ring map $\mathrm{bo} \to \mr{bu}$ which upon $K(1)$-localization becomes the inclusion of homotopy fixed points $(\mr{KU}\hat{_2})^{hC_2}\to \mr{KU}\hat{_2}$. In particular, the image of $\mr{KO}\hat{_2} \to \mr{KU}\hat{_2}$ in homotopy is describable as certain invariant elements. By work of Lawson-Naumann \cite{LawsonNaumann}, we know that there is a $2$-primary form of $\mr{BP}\bra{2}$ obtained from topological modular forms; this will be our analogue of $\mr{bu}$ in the $\mathrm{tmf}$-cooperations case. Lawson-Naumann study the ($2$-local) compactification of the moduli stack $\Msl{3}$. Given an elliptic curve $C$ (over a $2$-local base), it is locally isomorphic to a Weierstrass curve of the form \[ y^2+a_1xy+a_3 y = x^3 +a_4x+a_6. \] A point $P=(r,s)$ of order $3$ is an inflection point of such a curve; transforming the curve so that the given point $P$ is moved to have coordinates $(0,0)$ puts $C$ in the form \begin{align}\label{eq:Ca1a3} y^2+a_1 xy +a_3 y = x^3. \end{align} This is the universal equation of an elliptic curve together with a $\Gamma_1(3)$ level structure. 
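For completeness, here is the standard verification that $P=(0,0)$ is an inflection point of order $3$ on \eqref{eq:Ca1a3}. Writing $F = y^2 + a_1 xy + a_3 y - x^3$, the differential at $P$ is

```latex
dF\big|_{P} \;=\; \bigl(a_1 y - 3x^{2}\bigr)\,dx
             + \bigl(2y + a_1 x + a_3\bigr)\,dy \,\Big|_{(0,0)}
\;=\; a_3\,dy,
```

and $a_3$ is invertible (it divides the discriminant $\Delta=(a_1^3-27a_3)a_3^3$), so the tangent line at $P$ is $y=0$. Substituting $y=0$ into \eqref{eq:Ca1a3} leaves $x^3=0$: the tangent meets the curve at $P$ with multiplicity three, so $P$ is an inflection point. By the chord-tangent group law this gives $2P=-P$, i.e.\ $[3]P=O$.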
The discriminant of this curve is $\Delta = (a_1^3-27a_3)a_3^3$, and $\Msl{3} \simeq \Proj \mathbb{Z}_{(2)}[a_1,a_3][\Delta^{-1}]$. Consequently, $\pi_* \mathrm{TMF}_1(3) = \mathbb{Z}_{(2)}[a_1,a_3][\Delta^{-1}]$. Lawson-Naumann show that the compactification $\bar{\mathcal{M}}_1(3)\simeq \Proj\mathbb{Z}_{(2)}[a_1,a_3]$ also admits a sheaf of $\Einf$-ring spectra, giving rise to a non-connective and non-periodic spectrum $\mathrm{Tmf}_1(3)$ with a gap in its homotopy allowing one to take a connective cover $\mathrm{tmf}_1(3)$ which is an $\Einf$-ring spectrum with \[\pi_* \mathrm{tmf}_1(3) =\mathbb{Z}_{(2)}[a_1,a_3]. \] This spectrum is complex oriented such that the composition of graded rings \[ \mathbb{Z}_{(2)} [v_1,v_2] \subset \mr{BP}_* \to (\mr{MU}_{(2)})_* \to \mathrm{tmf}_1(3)_* \] is an isomorphism \cite[Theorem 1.1]{LawsonNaumann}, where the $v_i$ are Hazewinkel generators. Of course, the map $\mr{BP}_* \to \mathrm{tmf}_1(3)_*$ classifies the $p$-typicalization of the formal group associated to the curve \eqref{eq:Ca1a3}, which starts as \cite[IV.2]{Silverman}, \cite{sage}: \begin{align*} F(X,Y) &= X + Y -a_1X Y -2 a_3 X^3 Y -3 a_3 X^2 Y^2 -2 a_3 X Y^3 \\ &-2 a_1 a_3 X^4 Y -a_1 a_3 X^3 Y^2 -a_1 a_3 X^2 Y^3 -2 a_1 a_3 X Y^4 + O(X,Y)^6. \end{align*} We used Sage to compute the logarithm of this formal group law, from which we read off the coefficients $l_i$ \cite[A2.1.27]{Ravenel} in front of $X^{2^i}$ as \begin{align*} l_1&= \frac{a_1}{2}, \qquad l_2 = \frac{a_1^3+2a_3}{4},\\ l_3 & = \frac{ a_1^7 + 30 a_1^4 a_3 + 30 a_1 a_3^2}{8}\dots. \end{align*} Now the formula \cite[A2.1.1]{Ravenel} $\displaystyle{ pl_n =\sum_{0\leq i <n } l_i v_{n-i}^{2^i} }$ (in which $l_0$ is understood to be $1$) allows us to recursively compute the map $\mr{BP}_* \to \mathrm{tmf}_1(3)_*$. For the first few values of $n$, we have that \begin{align*} v_1 \mapsto a_1, \qquad v_2 \mapsto a_3, \qquad v_3 & \mapsto 7a_1a_3(a_1^3+a_3)\dots.
\end{align*} We can do even more with this orientation of $\mathrm{tmf}_1(3)$, as \[ \mr{BP}_* \mr{BP} \to \mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3)\] is a morphism of Hopf algebroids. Recall that $\mr{BP}_*\mr{BP}=\mathbb{Z}_{(2)}[v_1,v_2,\dots][t_1,t_2,\dots]$ with $v_i$ and $t_i$ in degree $2(2^i-1)$ and the right unit is $ \eta_R:\mr{BP}_* \to \mr{BP}_*\mr{BP} $ determined by the fact \cite[A2.1.27]{Ravenel} that \[\eta_R(l_n) = \sum_{0\leq i \leq n} l_i t_{n-i}^{2^i} \] with $l_0=t_0=1$ by convention. On the other hand, \[\mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3)_{\mathbb{Q}} = \mathbb{Q}[a_1,a_3,\bar a_1, \bar a_3]\] and the right unit $\mathrm{tmf}_1(3)_* \to \mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3) $ sends $a_i$ to $\bar a_i$. With computer aid from Sage, we can recursively compute the images of each $t_i$ in $\mathrm{tmf}_1(3)_* \mathrm{tmf}_1(3)$. As an example, we include here the first three values \begin{align*} t_1 & \mapsto \frac{1}{2}({\bar a_1-a_1}),\\ t_2 & \mapsto \frac{1}{8}( 4 \bar a_3 + 2 \bar a_1^3 - a_1\bar a_1^2 + 2 a_1^2\bar a_1 - 4a_3 - 3 a_1^3 ), \\ t_3 & \mapsto \frac{1}{128} (480 \bar a_1 \bar a_3^2 - 16 a_1\bar a_3^2 + 480 \bar a_1^4 \bar a_3 - 16 a_1 \bar a_1^3 \bar a_3 + 8 a_1^2 \bar a_1^2 \bar a_3 - 16 a_1^3\bar a_1\bar a_3 \\ &+ 32 a_1 a_3 \bar a_3 + 24 a_1^4 \bar a_3 + 16 \bar a_1^7 - 4 a_1\bar a_1^6 + 4 a_1^2 \bar a_1^5 - 4 a_3 \bar a_1^4 - 11 a_1^3 \bar a_1^4 + 32 a_1 a_3\bar a_1^3 \\ &+ 24 a_1^4 \bar a_1^3 - 32 a_1^2 a_3 \bar a_1^2 - 22 a_1^5 \bar a_1^2 + 32 a_1^3 a_3\bar a_1 + 20 a_1^6 \bar a_1 - 496 a_1 a_3^2 - 508 a_1^4 a_3 - 27 a_1^7 ) \end{align*} and rather than urging the reader to analyze the terms, we simply point out the exponential increase of their number. What will allow us to simplify and make sense of these expressions is using the Adams filtration in Section \ref{subsec:AF} below. 
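Since the logarithm coefficients $l_1,l_2,l_3$ are quoted explicitly above, the first few images of the $v_n$ can be double-checked with exact rational arithmetic in a few lines. The following stdlib-only sketch (the polynomial helpers are our own ad hoc ones; polynomials in $a_1,a_3$ are stored as dictionaries mapping exponent pairs to coefficients) reproduces $v_1\mapsto a_1$, $v_2\mapsto a_3$, and $v_3\mapsto 7a_1a_3(a_1^3+a_3)$.

```python
from fractions import Fraction as F

# Polynomials in a1, a3 as {(exponent of a1, exponent of a3): coefficient}.
def padd(*ps):
    r = {}
    for p in ps:
        for k, c in p.items():
            r[k] = r.get(k, F(0)) + c
    return {k: c for k, c in r.items() if c != 0}

def pmul(p, q):
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            k = (i1 + i2, j1 + j2)
            r[k] = r.get(k, F(0)) + c1 * c2
    return {k: c for k, c in r.items() if c != 0}

def pscale(p, s):
    return {k: c * s for k, c in p.items() if c * s != 0}

def ppow(p, n):
    r = {(0, 0): F(1)}
    for _ in range(n):
        r = pmul(r, p)
    return r

a1, a3 = {(1, 0): F(1)}, {(0, 1): F(1)}

# logarithm coefficients l_0, ..., l_3 as quoted above
l = {0: {(0, 0): F(1)},
     1: pscale(a1, F(1, 2)),
     2: pscale(padd(ppow(a1, 3), pscale(a3, 2)), F(1, 4)),
     3: pscale(padd(ppow(a1, 7), pscale(pmul(ppow(a1, 4), a3), 30),
                    pscale(pmul(a1, ppow(a3, 2)), 30)), F(1, 8))}

# solve 2*l_n = sum_{0 <= i < n} l_i * v_{n-i}^(2^i) for v_n, n = 1, 2, 3
v = {}
for n in (1, 2, 3):
    rest = padd(*(pmul(l[i], ppow(v[n - i], 2 ** i)) for i in range(1, n)))
    v[n] = padd(pscale(l[n], 2), pscale(rest, -1))

assert v[1] == a1 and v[2] == a3
assert v[3] == {(4, 1): F(7), (1, 2): F(7)}   # 7*a1*a3*(a1^3 + a3)
```

Running the same recursion for the right unit, $\eta_R(l_n)=\sum_{0\leq i\leq n} l_i t_{n-i}^{2^i}$, over polynomials in $a_1,a_3,\bar a_1,\bar a_3$ likewise reproduces the displayed images of $t_1$, $t_2$, $t_3$.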
\subsection{The relationship between $\mathrm{TMF}_1(3)$ and $\mathrm{TMF} $ and their connective versions} As we mentioned already, the forgetful map $f: \Msl{3}\to \mathcal{M}$ is \'etale; moreover, $f^* \omega = \omega$. As a consequence, we have a \v Cech descent spectral sequence \[E_1=H^p (\Msl{3}^{\times_{\mathcal{M}}(q+1)}, \omega^{\otimes *}) \Rightarrow H^{p+q} (\mathcal{M}, \omega^{\otimes *}). \] With it, the modular forms $H^0(\mathcal{M},\omega^{\otimes *}) $ can be computed as the equalizer of the diagram \begin{align}\label{eq:equalizer} \xymatrix{ H^0 (\Msl{3} ,\omega^{\otimes *}) \ar@<0.5ex>[r]^-{p_1^*} \ar@<-0.5ex>[r]_-{p_2^*} &H^0(\Msl{3} \times_{\mathcal{M}}\Msl{3}, \omega^{\otimes *} ), } \end{align} in which $p_1$ and $p_2$ are the left and right projection maps. The interpretation is that the $\mathcal{M}$-modular forms $MF_*$ are precisely the invariant $\Msl{3}$-modular forms. To be more explicit, note that $\Msl{3} \times_{\mathcal{M}}\Msl{3}$ classifies tuples $((C,P), (C^\prime, P^\prime), \varphi)$ of elliptic curves with a point of order $3$ and an isomorphism $\varphi:C \to C^\prime$ of elliptic curves which does not need to preserve the level structures. This data is locally given by \begin{equation} \begin{aligned} C: \qquad & y^2+ a_1xy+a_3y = x^3,\\ C^\prime: \qquad &y^2+ a_1^\prime xy+a_3^\prime y = x^3,\\ \varphi: \qquad & x \mapsto u^{-2}x+ r & y \mapsto u^{-3} y + u^{-2} sx +t, \end{aligned} \end{equation} such that the following relations hold \begin{equation}\label{eq:relations} \begin{aligned} &sa_1 - 3r + s^2 = 0,\\ &sa_3+(t+rs)a_1 - 3r^2+2st=0,\\ &r^3-ta_3-t^2-rta_1=0 ,\\ &a_1' = \eta_R(a_1) ,\\ &a_3' = \eta_R(a_3). \end{aligned} \end{equation} (Note: For more details on this presentation of $\Msl{3}$, see the beginning of \cite[\S 4]{grpcohcalc}; the relations follow from the general transformation formulas in \cite[III.1]{Silverman} by observing that the coefficients $a_{even}$ must remain zero.) 
Hence, the diagram \eqref{eq:equalizer} becomes \[ \mathbb{Z}_{(2)}[a_1,a_3] \rightrightarrows \mathbb{Z}_{(2)}[a_1,a_3][u^{\pm 1},r,s,t]/(\sim) \] (where $\sim$ denotes the relations \eqref{eq:relations}) with $p_1$ being the obvious inclusion and $p_2$ determined by \begin{align*} a_1 &\mapsto u(a_1+2s),\\ a_3 &\mapsto u^3(a_3+ra_1+2t), \end{align*} which is in fact a Hopf algebroid representing $\mathcal{M}_{(2)}$. Note that we do not need to localize at $2$ but only to invert $3$ to obtain this presentation. As a consequence of this discussion we can explicitly compute that the modular forms $MF_*$ are the subring of $MF_1(3)_*$ generated by \begin{align}\label{eq:ModForms} c_4 = a_1^4 - 24a_1a_3 ,\qquad \qquad c_6 = -a_1^6 + 36 a_1^3 a_3 -216a_3^2, \qquad\text{and}\qquad \Delta=(a_1^3-27a_3)a_3^3, \end{align} which in particular determines the map $\mathrm{TMF}_* \to \mathrm{TMF}_1(3)_*$ on non-torsion elements. \subsection{Adams filtrations}\label{subsec:AF} The maps $\mr{BP}_* \to \mathrm{tmf}_1(3)_*$ and $\mr{BP}_*\mr{BP} \to \mathrm{tmf}_1(3)_*\mathrm{tmf}_1(3)$ respect the Adams filtration (henceforth AF), which allows us to determine the AF on the right hand sides. Recall that \[ AF(v_i) = 1 , \qquad i\geq 0\] where as usual, $v_0 =2$. Consequently, $AF(a_1)=AF(a_3)=1$, which in turn implies via \eqref{eq:ModForms} that $AF(c_4)=4$, $AF(c_6)=5$, $AF(\Delta)=4$. More precisely, modulo higher Adams filtration (we use $\sim$ to denote equality modulo terms in higher AF) we have \[ c_4 \sim a_1^4, \qquad \qquad c_6\sim 216 a_3^2 \sim 8a_3^2, \qquad \qquad \Delta \sim a_3^4. \] Note that the Adams filtration of each $t_i$ is zero. \subsection{Supersingular elliptic curves and $\mr{K}(2)$-localizations} At the prime $2$, there is a unique isomorphism class of supersingular elliptic curves; one representative is the Weierstrass curve \begin{align*} C: \qquad y^2+y=x^3 \end{align*} over $\mathbb{F}_2$.
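Returning briefly to the expressions \eqref{eq:ModForms}: the signs there are easy to garble, so it is worth checking the classical relation $c_4^3-c_6^2=1728\,\Delta$ with exact arithmetic. The sketch below does this with ad hoc helpers of our own, using the standard Weierstrass values $c_4=a_1^4-24a_1a_3$ and $c_6=-a_1^6+36a_1^3a_3-216a_3^2$ (matching $f^*(c_4)$ and $f^*(c_6)$ in Section~\ref{sec:approxlevel}).

```python
from fractions import Fraction as F

# Polynomials in a1, a3 as {(exponent of a1, exponent of a3): coefficient}.
def poly(*terms):
    p = {}
    for c, e1, e3 in terms:
        p[(e1, e3)] = p.get((e1, e3), F(0)) + F(c)
    return {k: c for k, c in p.items() if c != 0}

def pmul(p, q):
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            k = (i1 + i2, j1 + j2)
            r[k] = r.get(k, F(0)) + c1 * c2
    return {k: c for k, c in r.items() if c != 0}

def ppow(p, n):
    r = poly((1, 0, 0))
    for _ in range(n):
        r = pmul(r, p)
    return r

def psub(p, q):
    r = dict(p)
    for k, c in q.items():
        r[k] = r.get(k, F(0)) - c
    return {k: c for k, c in r.items() if c != 0}

c4 = poly((1, 4, 0), (-24, 1, 1))                  # a1^4 - 24 a1 a3
c6 = poly((-1, 6, 0), (36, 3, 1), (-216, 0, 2))    # -a1^6 + 36 a1^3 a3 - 216 a3^2
disc = pmul(poly((1, 3, 0), (-27, 0, 1)),          # (a1^3 - 27 a3) * a3^3
            poly((1, 0, 3)))

lhs = psub(ppow(c4, 3), ppow(c6, 2))               # c4^3 - c6^2
rhs = {k: 1728 * c for k, c in disc.items()}       # 1728 * Delta
assert lhs == rhs
```

Both sides reduce to $1728\,a_1^3a_3^3 - 46656\,a_3^4$, confirming the signs above.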
Recall that a supersingular elliptic curve is one whose formal completion $\hat C$ at the identity section is a formal group of height two.\footnote{As opposed to an ordinary elliptic curve whose formal completion has height one. These two are the only options.} Under the natural map $\mathcal{M} \to \mathcal{M}_{fg}$ from the moduli stack of elliptic curves to the one of formal groups sending an elliptic curve to its formal completion at the identity section, the supersingular elliptic curves (in fixed characteristic) are sent to the (unique up to isomorphism, by Cartier's theorem \cite[Appendix B]{Ravenel}) formal group of height two in that characteristic. Let $\mathcal{M}^{ss}$ denote a formal neighborhood of the supersingular point $C$ of $\mathcal{M}$, and let $\hat{\mathcal{H}}(2)$ denote a formal neighborhood of the characteristic $2$ point of height two of $\mathcal{M}_{fg}$. Formal completion yields a map $\mathcal{M}^{ss}\to\hat{\mathcal{H}}(2)$ which is used to explicitly describe the $\mr{K}(2)$-localization of $\mathrm{TMF}$ (or equivalently, $\mathrm{tmf}$) in terms of Morava $\mr{E}$-theory. The formal stack $\hat{\mathcal{H}}(2) $ has a pro-Galois cover by $\Spf \mathbb{W}(\mathbb{F}_4)[[u_1]] $ for the extended Morava stabilizer group $\mathbb{G}_2$. The Goerss-Hopkins-Miller theorem implies in particular that this quotient description of $ \hat{\mathcal{H}}(2) $ has a derived version, namely the stack $ \Spf \mr{E}_2 //\mathbb{G}_2 $, where $ \mr{E}_2 $ is a Lubin-Tate spectrum of height two. As we are working with elliptic curves, we take the Lubin-Tate spectrum associated to the formal group $\hat C$ over $\mathbb{F}_2$, and $\mathbb{G}_2=\Aut_{\mathbb{F}_2}(\hat C)$. Let $G$ denote the automorphism group of $C$; it is a finite group of order $48$ given as an extension of the binary tetrahedral group with the Galois group of $\mathbb{F}_4/\mathbb{F}_2$.
Then $G$ embeds in $\mathbb{G}_2$ as a maximal finite subgroup and $\Spf \mr{E}_2$ is a Galois cover of $\mathcal{M}^{ss}$ for the group $G$. In particular, taking sections of the structure sheaf $\Sh{O}^{top}$ over $\mathcal{M}^{ss}$ gives the $\mr{K}(2)$-localization of $\mathrm{TMF}$ which is equivalent to $\mr{E}_2^{hG}$. Moreover, we have $\mr{K}(2)$-local equivalences \[ (\mathrm{TMF} \wedge \mathrm{TMF})_{\mr{K}(2)} \simeq \Hom^c(\mathbb{G}_2/G,\mr{E}_2)^{hG} \simeq \prod_{x\in G\backslash \mathbb{G}_2/G} \mr{E}_2^{h(G \cap xGx^{-1})}.\] The decomposition on the right hand side is interesting, though we will not pursue it further in this work. The interested reader is referred to Peter Wear's explicit calculation of the double cosets in \cite{PeterWear}.
\section{Introduction} The production of an electroweak boson at large transverse momentum is arguably the most basic hard-scattering process at hadron colliders. In fact, it was one of the first for which the next-to-leading order (NLO) perturbative corrections were computed \cite{Ellis:1981hk,Arnold:1988dp,Gonsalves:1989ar}. By now the complete $\alpha_s^2$ corrections to vector boson production are known, but since the $p_T$-spectrum starts at $O(\alpha_s)$ its theoretical accuracy has not been improved since the nineteen eighties. Given that $Z$'s and $W$'s at high transverse momentum provide an important background to new physics searches and a standard way to calibrate jet energy scales, it is important to have good theoretical control of their cross sections. In the absence of a full NNLO computation, a way to improve predictions is to compute the contributions to the cross section which arise near the partonic threshold from the emission of soft and collinear gluons. For very large $p_T$, these corrections become dominant and must be resummed to all orders in perturbation theory. However, even away from this region, the corresponding terms yield in many cases a good approximation to the full cross section. For vector boson production, this resummation has been performed to next-to-leading logarithmic (NLL) accuracy in \cite{Kidonakis:1999ur,Kidonakis:2003xm,Gonsalves:2005ng}. In the present paper, we use Soft-Collinear Effective Theory (SCET) \cite{Bauer:2000yr,Bauer:2001yt,Beneke:2002ph} to perform the resummation of the threshold terms to NNLL accuracy. A detailed derivation of the factorization theorem and phenomenological analysis of the closely related process of direct photon production were presented by the authors in~\cite{Becher:2009th}. This paper is a generalization of those results to include vector boson masses which significantly complicates the calculations. 
In the partonic threshold region, where the final state has an invariant mass much lower than the transverse momentum, the cross section for a given partonic initial state $I$ to produce an electroweak boson $V$ factorizes as \begin{multline}\label{factform} \frac{{\mathrm d}\hat{\sigma}_{I} }{{\mathrm d}\hat{s}\, {\mathrm d}\hat{t}} = \hat{\sigma}^B_{I}( \hat{s},\hat{t})\, H_{I}(\hat{s},\hat{t},M_V,\mu)\\ \times \int\! {\mathrm d} k\, J_{I}(m_X^2-2 E_J k) S_{I}(k,\mu)\, , \end{multline} where the partonic Mandelstam variables are $\hat{s} = (p_1+p_2)^2$ and $\hat{t} = (p_1-q)^2$, with $q$ the vector boson momentum satisfying $q^2=M_V^2$. We have factored out the Born level cross section $\hat{\sigma}^B_{I}( \hat{s},\hat{t})$. The parton momenta $p_1^\mu$ and $p_2^\mu$ carry momentum fractions $x_1$ and $x_2$ and are related to the hadron momenta via $p_1^\mu=x_1 P_1^\mu$ and $p_2^\mu=x_2 P_2^\mu$. The hadronic cross section is obtained after convoluting with parton distribution functions (PDFs) and summing over all partonic channels. At the partonic threshold the mass of the final-state jet $m_X^2=(p_1+p_2-q)^2$ vanishes. Since the full final state also involves the remnants of the scattered hadrons, this condition does not imply that the hadronic invariant mass $M_X^2=(P_1+P_2-q)^2$ vanishes, unless the momentum fractions $x_1$ and $x_2$ are close to unity. Because there is no phase space for additional hard emissions near the threshold, the only corrections to the cross section arise from virtual corrections, encoded in the hard function $H_{I}(\hat{s},\hat{t},M_V,\mu)$, and soft and collinear emissions given by $S_{I}(k,\mu)$ and $J_{I}(p_J^2)$. The convolution over the soft momentum in (\ref{factform}) arises because the partonic jet mass is \begin{equation} m_X^2 = (p_J + k_S)^2 \approx p_J^2 +2E_J k\,, \end{equation} where $p_J^\mu$ and $k_S^\mu$ are the collinear and soft momenta in the jet, $E_J$ is the jet energy, and $k=p_J\cdot k_S/E_J$.
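For completeness, the approximation in the last display is the standard soft expansion:

```latex
m_X^2=(p_J+k_S)^2
     =p_J^2+2\,p_J\cdot k_S+k_S^2
     =p_J^2+2E_J k+k_S^2
     \approx p_J^2+2E_J k\,,
```

where the second step uses the definition $k=p_J\cdot k_S/E_J$, and the last step drops $k_S^2$: with the usual power counting the soft momentum scales as $k_S\sim m_X^2/E_J$, so $p_J^2$ and $2E_J k$ are both of order $m_X^2$ while $k_S^2\sim m_X^4/E_J^2$ is power suppressed.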
Since the reaction in the threshold region proceeds via Born-level kinematics, only two partonic channels are relevant: the Compton channel $q g \to q V$ and the annihilation channel $q\bar q \to g V$. A detailed derivation of the factorization formula (\ref{factform}) was given in \cite{Becher:2009th}. That paper focused on photon production, but exactly the same jet and soft functions are relevant also for $W$ and $Z$ production. Explicit expressions for these functions in both the Compton and annihilation channel can be found in \cite{Becher:2009th}, together with the relevant anomalous dimensions. The non-zero boson mass only enters the hard function $H_{I}(\hat{s},\hat{t},M_V,\mu)$ and modifies the kinematics. The hard function is given by the virtual corrections to the corresponding hard scattering channel. To obtain the function at NLO, one needs the interference of the one-loop amplitude with the tree-level result, which is given in equations (A.7) to (A.9) of \cite{Arnold:1988dp}. These expressions correspond to the bare result for the hard function. After renormalization, we obtain \begin{multline} H_{q\bar q}(\hat{s},\hat{t},M_V,\mu) = 1+\frac{\alpha_s}{4\pi} \bigg[ \left(-C_A-2 C_F\right) \ln ^2\frac{\mu ^2}{\hat{s}} \\ -2 \ln\frac{\mu ^2}{\hat{s}} \left(C_A \ln \frac{\hat{s}^2}{\hat{t} \hat{u}}+3 C_F\right) -6 C_F \ln\frac{\hat{s}}{M_V^2} \\ -C_A \ln ^2\frac{\hat{t} \hat{u}}{\hat{s} M_V^2} -C_A \ln ^2\frac{\hat{s}}{M_V^2 } +C_F \left(\frac{7 \pi ^2}{3}-16\right) \\ + C_A \left(f(\hat{t})+f(\hat{u}) \right) \bigg] + \Delta {\cal L}(\hat{s},\hat{t}) /\hat{\sigma}_{q\bar q}^{B}\, \end{multline} for the hard function in the annihilation channel, with \begin{equation} f(\hat{t}) = 2 \text{Li}_2\left(\frac{M_V^2}{M_V^2-\hat{t}}\right)+\ln^2\frac{M_V^2-\hat{t}}{M_V^2}+\frac{\pi ^2}{12} \,. \end{equation} The extra piece $\Delta {\cal L}(\hat{s},\hat{t})$ is the part of the virtual corrections not proportional to the Born cross section. 
It is finite and given in the last five lines of (A.9) of~\cite{Arnold:1988dp}.\footnote{The result is also given in \cite{Gonsalves:1989ar} but the sign of the $\Delta {\cal L}$ terms in expression (A4) in this reference is incorrect. We thank R.~Gonsalves for helping us to resolve this discrepancy.} The virtual corrections for the Compton channel are related to the above by crossing, see \cite{Arnold:1988dp} for the necessary relations. \begin{figure}[t!] \begin{center} \psfrag{m}[t]{$\mu$} \psfrag{h}{$\mu_h$} \psfrag{j}{$\mu_j$} \psfrag{s}{$\mu_s$} \psfrag{f}{$\mu_f$} \psfrag{H}[]{$H_{I}(\hat{s},\hat{t})$} \psfrag{J}[]{$J_{I}(m_X^2)$} \psfrag{S}[]{$S_{I}(k)$} \psfrag{F}[l]{$f_1(x_1)f_2(x_2)$} \includegraphics[width=0.75\hsize]{running} \end{center} \vspace{-0.5cm} \caption{Resummation by RG evolution.\label{running}} \end{figure} Instead of integrating over the momentum fractions $x_1$ and $x_2$ of the two partons involved in the hard scattering, we follow \cite{Ellis:1981hk} and change variables to \begin{equation} \label{kinematics} \int _0^1 {\mathrm d} x_1 {\mathrm d} x_2 \theta(m_X^2) = \int_{x_{\rm min}}^1 \frac{{\mathrm d} x_1 }{x_1 s+u- M_V^2 }\int_0^{m_{\rm max}^2} {\mathrm d} m_X^2\,, \end{equation} with $u=(P_2-q)^2=M_V^2-\sqrt{s}\sqrt{M_V^2+p_T^2}e^y$ and \begin{align*} m_{\rm max}^2&=u + x_1 (M_X^2- u) \, , & x_{\rm min}&=\frac{-u}{M_X^2-u}\,. \end{align*} In the above variables, the expansion around partonic threshold $m_X^2=0$ is performed at fixed $x_1$, which is the same as fixed $\hat{t} =(x_1 P_1-q)^2$. However, simply using (\ref{kinematics}) would be problematic, since the choice of expansion variables is not invariant under crossing $p_1\leftrightarrow p_2$, which would induce unphysical asymmetries in the rapidity spectrum. To avoid this, we symmetrize by performing the expansion around threshold twice: once at fixed $x_1$ using (\ref{kinematics}) and once at fixed $x_2$. 
Our result for the cross section is the average of the two expansions. To perform the resummation of the soft and collinear emissions, the hard, jet and soft functions must be evaluated at their own characteristic scales and then evolved to the factorization scale using the renormalization group (RG), where they are combined with the PDFs. This is illustrated in Figure~\ref{running}. The running is straightforward using the Laplace space formalism developed in \cite{Becher:2006nr}. \begin{figure}[t!] \begin{center} \includegraphics[height=0.56\hsize]{bandsplot.eps} \\ \hspace*{0.15cm}\includegraphics[height=0.573\hsize]{ScaleChoice.eps} \end{center} \vspace{-0.5cm} \caption{Scale setting. The plots are for $W^+$ bosons, but the qualitative features are the same for all bosons. \label{scalefig}} \end{figure} The most naive approach to scale setting would be to set the jet scale $\mu_j$ equal to the partonic jet mass $m_X$. However, the partonic jet mass is not an observable; it is integrated over in the convolution with the PDFs. Thus, setting $\mu_j=m_X$, one will encounter the Landau pole in the strong coupling constant, since the partonic jet mass $m_X$ can become arbitrarily small. The problematic choice $\mu_j=m_X$ is inherent in the traditional formalism for resummation. To avoid the Landau pole, previous work did not perform the resummation to all orders and instead just computed the singular terms in the partonic cross section at the next, or the next two, orders in perturbation theory~\cite{Kidonakis:1999ur,Kidonakis:2003xm}. In our work, we resum to all orders but choose $\mu_j$ to be the average jet mass. Since the partonic cross section is convoluted with the PDFs, the average jet mass must be calculated numerically. To obtain the proper scale for each ingredient of the factorization formula (\ref{factform}), we use the numerical procedure advocated in \cite{Becher:2007ty}. 
We evaluate the factorization theorem (\ref{factform}) numerically at a fixed scale $\mu$ and study individually the impact of the NLO corrections from the hard, jet and soft functions. The scale variations are shown in the top panel of Figure~\ref{scalefig}. Note that the jet and hard function variations have natural extrema. These extrema are shown as the points in the lower panel for the jet scale. The solid curves are a reasonable approximation to these points, given by \begin{equation} \begin{aligned} \label{scalevalues} \mu_h &= \frac{13 p_T+ 2 M_V}{12} -\frac{p_T^2}{\sqrt{s}}\, ,\\ \mu_j &= \frac{7 p_T + 2 M_V }{12} \left (1-\frac{2p_T}{\sqrt{s}}\right) \, , \end{aligned} \end{equation} which we use instead of the exact extrema for simplicity. We also set $\mu_s=\mu_j^2/\mu_h$, as dictated by the factorization theorem. By choosing scales close to these extrema, we minimize the scale uncertainty. The scale setting procedure beautifully illustrates the power of the effective field theory approach. In a fixed-order computation, the hard, jet and soft corrections cannot be separated and are included at a common value of the renormalization scale. This is shown by the NLO curves in the top panel of Figure~\ref{scalefig}; in this case the scale dependence is monotonic. Because there are multiple relevant scales in the problem, there is no natural scale choice at fixed order. But there are natural choices when $\mu$ is split into hard, jet and soft scales. For illustration, we also show in the bottom panel of Figure~\ref{scalefig} the popular scale choice $\mu_j = \sqrt{p_T^2 + M_V^2}$, which is a bad fit to the jet scale. The most natural choice for the factorization scale $\mu_f$ would be at or below $\mu_s$, since this scale defines the boundary between the perturbative and non-perturbative part of the process. 
However, since all PDF fits were performed with $\mu_f$ set equal to the hardest scale in the process, we will follow this convention and use $\mu_f=\mu_h$ as our default value. Note that this implies that we use the RG to run the jet and soft functions from lower to higher values, in contrast to the situation depicted in Figure \ref{running}. In order for our results to contain the full NLO cross section, we match the resummed result to fixed order. The matching is straightforward since the resummation switches itself off when we set all scales equal, $\mu_h=\mu_j=\mu_s=\mu_f$. Doing so in the NNLL result yields all logarithmically enhanced terms at the one-loop level, which we denote by NLO$_{\rm sing}$. The matched result is obtained by adding the difference between the full NLO result and the singular terms NLO$_{\rm sing}$ to the NNLL resummed result. We denote the matched result by NNLL+NLO. To compute the NLO fixed-order result, we use the code {\sc qt}~\cite{qt}, and have verified that it agrees with {\sc mcfm}~\cite{mcfm}. \begin{figure}[t!] \begin{center} \includegraphics[height=0.6\hsize]{D0plotMuon.eps} \end{center} \vspace{-0.5cm} \caption{Comparison to DZero results \cite{Abazov:2010kn}.\label{tev}} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[height=0.6\hsize]{BothWsLHC7.eps} \end{center} \vspace{-0.5cm} \caption{Prediction for the combined $W^+$ and $W^-$ cross sections at the LHC (7 TeV).\label{lhc}} \end{figure} Most of the ingredients necessary to go to N$^3$LL accuracy are already known: all 3-loop anomalous dimensions are available. We use these, along with the two-loop jet function constants~\cite{Becher:2006qw,Becher:2010pd} and the Pad\'e approximant for the 4-loop cusp anomalous dimension, to get our most accurate prediction, which we denote by N$^3$LL$_{\rm partial}$. For fixed order, we set the renormalization and factorization scales equal to $\mu_h$. 
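The additive matching just described is simple bookkeeping and can be spelled out in a few lines. The sketch below (plain Python, with purely illustrative numbers, not our actual cross sections) implements NNLL+NLO $=$ NNLL $+$ (NLO $-$ NLO$_{\rm sing}$).

```python
def match_nnll_nlo(nnll_resummed, nlo_full, nlo_singular):
    """Additive matching: NNLL+NLO = NNLL + (NLO - NLO_sing).

    nlo_full - nlo_singular is the non-singular fixed-order remainder
    that the resummation does not capture.
    """
    return nnll_resummed + (nlo_full - nlo_singular)

# Purely illustrative numbers (picobarn), not actual cross sections.
sigma_matched = match_nnll_nlo(nnll_resummed=1.25, nlo_full=1.28, nlo_singular=1.19)
print(sigma_matched)
```

Setting all scales equal makes the NNLL result collapse onto NLO$_{\rm sing}$, so the matched result then reduces to the full NLO cross section, as it should.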
Uncertainties are estimated by varying each scale by a factor of 2 around the default values \eqref{scalevalues} and extracting the maximum and minimum values. For the final scale variation error bands on the resummed distributions, we add the jet, hard, soft and factorization scale uncertainties in quadrature. All numerical predictions are computed using MSTW2008NNLO PDFs \cite{Martin:2009iq}, with $\alpha_s(M_Z)=0.1171$. For the electroweak parameters, we use $\alpha=1/127.92$, $\sin\theta_W=0.2263$, $M_W=80.40\, {\rm GeV}$, $M_Z=91.19\, {\rm GeV}$. In Figure \ref{tev}, we show the $p_T$ spectrum of the $Z$-boson at the Tevatron in comparison to results of the D0 experiment \cite{Abazov:2010kn}. Our results agree well with the measurements but have significantly smaller uncertainties in the region of high transverse momentum. In Figure \ref{lhc}, we give results for the production of $W$ bosons at the LHC. The perturbative uncertainty on our result is much smaller than the PDF uncertainty, which implies that these results could be used to obtain more precise determinations of the PDFs, once the experimental results become available. \begin{figure}[t!] \begin{center} \includegraphics[height=0.55\hsize]{Ratios.eps} \end{center} \vspace{-0.5cm} \caption{Prediction for ratios of cross sections for different gauge bosons at the LHC (7 TeV).\label{ratio}} \end{figure} In Figure \ref{ratio}, we give the ratio of the $W^-$ to $W^+$ cross section, as well as the ratio of $Z$'s to $W$'s. Most theoretical uncertainties drop out in these ratios, making them precision observables. Roughly half as many $W^-$ as $W^+$ bosons are produced at the LHC, which is just a reflection of the quark content of the protons, while the $Z$ to $W^\pm$ ratio is essentially given by the ratio of the electroweak couplings. Note that the PDF uncertainty is larger for the ratio involving the $W^-$, since the $W^-$ is sensitive to sea-quark PDFs. 
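The scale choices \eqref{scalevalues} are elementary to evaluate. As an illustration (plain Python, not part of the actual computation), the sketch below computes the three scales for $Z$ production at the Tevatron at $p_T=200\,$GeV; the resulting hierarchy $\mu_h/\mu_s$ is of moderate size.

```python
def scales(p_T, M_V, sqrt_s):
    """Hard and jet scales of eq. (scalevalues); the soft scale is fixed
    by the factorization theorem as mu_s = mu_j^2 / mu_h."""
    mu_h = (13*p_T + 2*M_V) / 12 - p_T**2 / sqrt_s
    mu_j = (7*p_T + 2*M_V) / 12 * (1 - 2*p_T / sqrt_s)
    mu_s = mu_j**2 / mu_h
    return mu_h, mu_j, mu_s

# Z production at the Tevatron: p_T = 200 GeV, M_Z = 91.19 GeV, sqrt(s) = 1.96 TeV.
mu_h, mu_j, mu_s = scales(p_T=200.0, M_V=91.19, sqrt_s=1960.0)
print(mu_h, mu_j, mu_s, mu_h / mu_s)
```

With these inputs the scales come out near $210$, $105$ and $50\,$GeV, so the logarithms being resummed are of moderate size.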
In Table~\ref{tab}, we give numerical results for the integrated cross section with $p_T > 200\,{\rm GeV}$ at the Tevatron and LHC. In this table, we have included rows NLO$_{\rm sing.}$ and NNLO$_{\rm sing.}$, which refer to the resummed distributions expanded to fixed order. Indeed, Figure~\ref{scalefig} and the prescription (\ref{scalevalues}) make it clear that there is not a very large hierarchy between the different scales. For example, at the Tevatron, for $p_T=200\,{\rm GeV}$, the ratio between the scales is $\mu_h/\mu_s \approx 5$. Since the logarithms of the different scales are of moderate size, one can re-expand the resummed result in $\alpha_s$ and will find that most of the effect of the resummation is captured by the singular terms. NLO$_{\rm sing.}$ refers to the expansion of the NNLL result (without matching) to $\alpha_s^2$. NNLO$_{\rm sing.}$ refers to the expansion of the N$^3$LL$_{\rm partial}$ result to $\alpha_s^3$. Comparing NLO to NLO$_{\rm sing.}$, we find that for $Z$-production with $p_T>200\,{\rm GeV}$ at the Tevatron (LHC), 82\% (70\%) of the NLO correction to the cross section is due to soft and collinear gluon emission. Thus we expect that SCET reproduces most of the NNLO result at high $p_T$. Finally, there is one important effect that we have neglected, namely electroweak Sudakov logarithms. These can be large: already at the Tevatron they are non-negligible and lower the cross section at $p_T=300\,{\rm GeV}$ by about $10\%$. At the LHC at $14\,$TeV, the effect is about $-15\%$ at $p_T=500\,{\rm GeV}$, and around $-30\%$ for $p_T=2\,{\rm TeV}$~\cite{Kuhn:2004em,Hollik:2007sq}. To get a really accurate comparison to data, the electroweak corrections must be included. \begin{table}[t!] 
\begin{tabular}{lcccc} & \multicolumn{2}{c}{Tevatron} & \multicolumn{2}{c}{LHC at $7\,$TeV} \\ & $W^\pm$ & $Z$ & $W^\pm$ & $Z$ \\\hline LO & $0.91^{+0.19}_{-0.13}$ & $0.46^{+0.13}_{-0.09}$ & $34.5^{+4.6}_{-3.7}$ & $14.0^{+2.5}_{-2.0}$ \\ NLO$_{\rm sing.}$ & $1.19^{+0.07}_{-0.08}$ & $0.60^{+0.05}_{-0.06}$ & $47.1^{+2.1}_{-2.3}$ & $19.2^{+1.1}_{-1.2}$ \\ NLO & $1.28^{+0.09}_{-0.10}$ & $0.63^{+0.05}_{-0.06}$ & $53.3^{+3.8}_{-3.5}$ & $21.4^{+2.0}_{-1.9}$ \\ NNLL+NLO & $1.34^{+0.03}_{-0.03}$ & $0.66^{+0.02}_{-0.02}$ & $53.8^{+2.1}_{-2.1}$ & $22.5^{+1.0}_{-0.7}$ \\ NNLO$_{\rm sing.}$+NLO & $1.34^{+0.03}_{-0.04}$ & $0.65^{+0.02}_{-0.02}$ & $55.9^{+2.0}_{-1.4}$ & $21.7^{+1.1}_{-1.1}$ \\ N$^3$LL$_{\rm partial}$+NLO & $1.35^{+0.02}_{-0.02}$ & $0.66^{+0.01}_{-0.01}$ & $56.0^{+1.6}_{-0.7}$ & $22.6^{+0.7}_{-0.4}$ \end{tabular} \vspace{-0.0cm} \caption{Cross section $\sigma(p_T>200\,{\rm GeV})$ (in picobarn) using different approximations, see text\label{tab}. In addition to the scale uncertainties shown in the table, there is a relative PDF uncertainty of $3\%$ ($2\%$) for $W^\pm$ production at the Tevatron (LHC) and $5\%$ ($2\%$) for $Z$ production.} \end{table} It would be interesting to obtain NNLL predictions for less inclusive quantities. In particular, to compare to the preliminary CMS results for the $Z$ boson $p_T$ spectrum \cite{cmsres}, based on $36\,{\rm pb}^{-1}$, one would need results which are differential in the lepton momenta to account for the experimental cuts. To obtain these results, one has to compute the hard function for arbitrary boson polarizations, which is in progress \cite{christian}. Putting in cuts on the hadronic side is more complicated, but would be quite interesting, since it would allow one to compute the production in association with a jet. The LHC has just delivered its first inverse femtobarn of data at 7 TeV. 
Comparing our results to these data will allow for precision tests of the standard model at the highest energies ever produced in a collider. {\em Acknowledgments:\/} The work of TB and CL is supported in part by the SNSF and ``Innovations- und Kooperationsprojekt C-13'' of SUK. TB and MDS would like to thank the Aspen Center for Physics and the KITP for hospitality during the completion of the project. This research was supported in part by the National Science Foundation under Grant No. NSF PHY05-51164 and by the Department of Energy, under grant DE-SC003916.
\section{} Let $S$ be a scheme and let $f: G\to S$ be a finite flat finitely presented $S$-group scheme. The \emph{order}, or \emph{rank}, of $G$, namely, the rank of the locally free $\mathcal{O}_S$-module of finite type $f_*\mathcal{O}_G$, is a locally constant function on $S$. We say that $G$ is of \emph{prime order} (resp. \emph{square free order}) if its order at each point of $S$ is $1$ or a prime (resp. not divisible by the square of any prime). \smallskip \smallskip {\bf Theorem.} --- \emph{Let $S$ be a scheme and let $G$ be a finite flat finitely presented $S$-group scheme of square free order. Then there exists a Hochschild extension of $S$-group schemes \[1\to G'\to G\to G''\to 1,\] split by a finite \'{e}tale surjective base change, with $G''$ finite \'{e}tale over $S$, and $G'$ a direct sum, as $G''$-modules, of commutative finite flat finitely presented $S$-group schemes of prime order. If \[1\to G'_{\iota} \to G\to G''_{\iota}\to 1,\] $\iota=1, 2$, are two such extensions, there is a third \[1\to G'_3\to G\to G''_3\to 1\] with $G'_3=G'_1\cap G'_2$.} \smallskip \smallskip {\bf Lemma.} --- \emph{Let $S$ be a scheme and let $G$ be a finite flat finitely presented $S$-group scheme of square free order. Then }: \smallskip \emph{a) If $G$ is of prime order, it is commutative.} \smallskip \emph{b) If $G$ is commutative and of order $n$, it is in a unique way a direct sum, indexed by the factors $p\in \Gamma(S, \mathbf{Z})$ in the prime factorization of $n$, of finite flat finitely presented $S$-group schemes of prime order $p$. 
This decomposition commutes with base change and is invariant by all $S$-group automorphisms of $G$.} \smallskip \emph{c) Each extension of $S$-group schemes \[1\to G'\to G\to G''\to 1,\] with $G''$ $S$-finite \'{e}tale and $G'$ commutative finite flat of finite presentation over $S$, is Hochschild and split by a finite \'{e}tale surjective base change.} \smallskip \emph{d) If, for $\iota=1, 2$, either \[1\to G'_{\iota}\to G\to G''_{\iota}\to 1\] writes $G$ as an extension of a finite \'{e}tale $S$-group scheme by a finite flat finitely presented $S$-group scheme, then there is a third such \[1\to G'\to G\to G''\to 1\] with $G'=G'_1\cap G'_2$.} \smallskip \emph{e) If $S$ is of residue characteristic zero, $G$ is finite \'{e}tale over $S$.} \smallskip \emph{f) If $G$ is finite \'{e}tale over $S$, then, for each $d\in \Gamma(S, \mathbf{Z})$, the set $U$ of points $s$ of $S$ such that $G(\overline{s})$ has a unique subgroup of order $d(s)$, where $\overline{s}$ is the spectrum of an algebraic closure of $k(s)$, is open and closed in $S$. Over $U$, $G\times_SU$ has a unique normal finite \'{e}tale sub-$U$-group scheme of order $d$.} \smallskip \emph{g) If $S$ is the spectrum of an algebraically closed field of characteristic $p>0$, the identity component of $G$, if non-trivial, is $\mu_p$ or $\alpha_p$.} \smallskip \emph{h) For every point $s$ of $S$, if $i_G(s)$ denotes the order of the identity component of $G\times_Ss$, the function $s\mapsto i_G(s)$, called the infinitesimal rank of $G$, is locally constructible upper semi-continuous.} \smallskip \emph{i) The set $S_1$ of points $s$ of $S$ such that $i_G(s)$ is $1$ is a retro-compact open subset of $S$; it consists exactly of all those points of $S$ above which $G$ is $S$-\'{e}tale.} \smallskip \emph{j) Let $p$ be a prime number. The set $S_p$ of points $s$ of $S$ such that $i_G(s)$ divides $p$ is a retro-compact open subset of $S$. 
The union $V_p$ of $S_p-S_1$ and the points $s$ of $S$ such that $G(\overline{s})$ has a unique $p$-Sylow subgroup of order $p$, where $\overline{s}$ is the spectrum of an algebraic closure of $k(s)$, is open and closed in $S$. On $V_p$, $G\times_SV_p$ has a unique normal finite flat finitely presented sub-$V_p$-group scheme $G_p$ of order $p$.} \begin{proof} \emph{a}) This is \cite{oort_tate} \S 1 Theorem 1. \smallskip \emph{b}) By \cite{oort_tate} \S 1 Theorem, as an abelian sheaf on $S$ for the \emph{fppf} topology, $G$ is of $n$-torsion, namely, a module over the locally constant sheaf of rings \[\mathbf{Z}/n\mathbf{Z}=\prod_p\mathbf{Z}/p\mathbf{Z},\] and in particular $G$ is the direct sum of its $p$-torsion sub-modules $G_p$, where $p\in\Gamma(S, \mathbf{Z})$ are the factors in the prime factorization of $n$. As the kernel of $p.\mathrm{Id}_G$, $G_p$ is a finitely presented closed sub-$S$-group scheme of $G$. As also the \emph{fppf} image of $(n/p).\mathrm{Id}_G$, $G_p$ is $S$-flat (EGA IV 11.3.11). The order $n_p$ of $G_p$ is $p$, for \[n=\prod p\ |\ \prod n_p=n.\] This decomposition of $G$ is clearly invariant by all its $S$-group automorphisms and commutes with base change. The uniqueness follows as every finite flat finitely presented $S$-group scheme of prime order $p$ is of $p$-torsion. \smallskip \emph{c}) One may assume $G''$ of constant order $n''$. The extension \[1\to G'\to G\to G''\to 1\] is Hochschild, namely, the sequence \[1\to G'(T)\to G(T)\to G''(T)\to 1\] is exact for every $S$-scheme $T$. Indeed, for all $x\in G''(T)$, if \[d: G''(T)\to H^1((\mathrm{Sch}/T)_{\mathrm{fppf}}, G')\] denotes the coboundary homomorphism, one has $d(x)=0$, as \[n''.d(x)=d(x^{n''})=d(1)=0\] and $n''.\mathrm{Id}_{G'}: G'\to G'$ is an isomorphism. Next, by a finite \'{e}tale surjective base change, one may assume that the finite \'{e}tale $S$-group scheme $G''$ is locally constant and then constant of value $|G''|$. 
It suffices to show that the exact sequence \[1\to G'(S)\to G(S)\to G''(S)\to 1\] is split by $|G''|\to G''(S)$. This follows as $H^2(B|G''|, G'(S))=0$. \smallskip \emph{d}) The intersection $G'=G'_1\cap G'_2$ is a closed finitely presented normal sub-$S$-group scheme of $G$. It remains to show that $G'$ is flat over $S$ and that the quotient $G''=G/G'$, which is then representable by a finite flat finitely presented $S$-group scheme, is \'{e}tale over $S$. \smallskip The assertion being local on $S$, one may assume $S$ affine, then by a ``passage \`{a} la limite'', noetherian and local, then by completion along its closed point, complete, and finally by reduction to each of its closed sub-schemes of finite length, Artin local. Let the closed point of $S$ be $s$ and let $G_s^o$ be the identity component of $G_s=G\times_Ss$. Consider the canonical exact sequence \[1\to G_s^o\to G_s\to G_s/G_s^o\to 1.\] As the base change by $s\to S$ induces an equivalence from the category of finite \'{e}tale $S$-group schemes to the category of finite \'{e}tale $s$-group schemes (EGA IV 18.1.2), there is, up to unique isomorphism, a unique finite \'{e}tale $S$-group scheme $Q$ with $Q\times_Ss=G_s/G_s^o$. By EGA IV 18.1.3, there is a unique $S$-group homomorphism $G\to Q$ which specializes to the projection $G_s\to G_s/G_s^o$. This morphism $G\to Q$ is faithfully flat (EGA IV 11.3.11) and provides an exact sequence of $S$-group schemes \[1\to P\to G\to Q\to 1\] with $P$ finite flat over $S$, $P\times_Ss=G_s^o$. For $\iota=1, 2$, each composition \[P\to G\to G''_{\iota}\] is trivial by EGA IV 18.1.3, for it is so when specialized to $s$. So $P\subset G'_1\cap G'_2=G'$. Replacing $G$ by $G/P$, $G'_{\iota}$ by $G'_{\iota}/P$ and $G'$ by $G'/P$, one can assume $G=Q$ finite \'{e}tale over $S$. But now all of $G'_1, G'_2, G', G''$ are finite \'{e}tale over $S$. This completes the proof. 
\smallskip \emph{e}) One can (EGA IV 17.6.2) assume that $S$ is the spectrum of a field of characteristic zero and apply SGA 3 VII B 3.3.1. \smallskip \emph{f}) The question being local on $S$, one can assume $S$ affine and by a ``passage \`{a} la limite'' noetherian and connected. If $U$ is not empty, let $\overline{s}$ be a geometric point of $S$ with image $s$ in $U$ and let \[\rho: \pi_1(S, \overline{s})\to \mathrm{Aut}(G(\overline{s}))\] be the monodromy representation corresponding to the finite \'{e}tale $S$-group scheme $G$. The unique subgroup of $G(\overline{s})$ of order $d(s)$ is characteristic in $G(\overline{s})$ and in particular invariant by $\rho$. So $G$ has a unique normal finite \'{e}tale sub-$S$-group scheme of order $d$, and $U=S$. \smallskip \emph{g}) This follows by SGA 3 XVII 4.2.1. \smallskip \emph{h}) The order of the quotient of $G\times_Ss$ by its identity component, the \emph{separable rank} of $G\times_Ss$, is a locally constructible lower semi-continuous function of $s\in S$ (EGA IV 15.5.1). \smallskip \emph{i}) This follows by \emph{h}) and EGA IV 17.6.2. \smallskip \emph{j}) That $S_p$ is a retro-compact open subset of $S$ follows by \emph{g})+\emph{h}). \smallskip --- \emph{The set $U_p:=V_p\cap S_p$ is open and closed in $S_p$, and $G\times_SU_p$ has a unique normal finite flat finitely presented sub-$U_p$-group scheme $P$ of order $p$. The quotient $(G\times_SU_p)/P$ is finite \'{e}tale over $U_p$ }: \smallskip Restricting to $S_p$, suppose $S=S_p$, namely, that $i_G$ takes values in $\{1, p\}$. Let $U: (\mathrm{Sch}/S)^o\to (\mathrm{Sets})$ be the following sub-functor of the final functor : \smallskip \emph{For an $S$-scheme $S'$, $U(S')=\{\emptyset\}$, if $G\times_SS'$ is the extension of a finite \'{e}tale $S'$-group scheme by a finite flat finitely presented $S'$-group scheme of order $p$, and $U(S')=\emptyset$, otherwise.} \smallskip It suffices to show that $U\to S$ is representable by an open and closed immersion. 
\smallskip \emph{The functor $U$ is a sheaf on $(\mathrm{Sch}/S)$ for the \'{e}tale topology }: \smallskip Namely, $U(T)=U(T')$ for every \'{e}tale surjective morphism $T'\to T$ of $S$-schemes. Indeed, if $U(T')$ is not empty so that there is an extension of $T'$-group schemes \[1\to P'\to G\times_ST'\to Q'\to 1\] with $Q'$ $T'$-finite \'{e}tale and $P'$ finite flat of finite presentation over $T'$ of order $p$, it follows by \emph{d}) that $p_1^*P'=p_2^*P'$ in $G\times_ST''$, where $p_1, p_2$ are the two projections of $T''=T'\times_TT'$ onto $T'$. On $P'\hookrightarrow G\times_ST'$, there is therefore a canonical descent datum relative to $T'\to T$, and so $U(T)$ is not empty. \smallskip \emph{The functor $U$ is formally \'{e}tale }: \smallskip Namely, $U(T)=U(T')$ for every nilpotent $S$-immersion $T'\hookrightarrow T$. This is immediate from EGA IV 18.1.2+18.1.3 by an argument similar to the one in \emph{d}). \smallskip \emph{The functor $U$ verifies the valuative criterion of properness }: \smallskip As $U$ is a sheaf on $(\mathrm{Sch}/S)$ for the \'{e}tale topology, the question is local on $S$. It suffices to assume $S$ affine and by a ``passage \`{a} la limite'' noetherian. Then given an $S$-scheme $T$ which is the spectrum of a discrete valuation ring and which has generic point $t$, one has $U(T)=U(t)$. Indeed, if $U(t)$ is not empty and so $G\times_St$ is the extension of a finite \'{e}tale $t$-group scheme $Q_t$ by a finite $t$-group scheme $P_t$ of order $p$, then $Q=G_T/P$ is finite \'{e}tale over $T$, where $P$ is the closed image of $P_t$ in $G_T=G\times_ST$. For, the infinitesimal rank of $G$, hence that of $Q$ as well, takes values in $\{1, p\}$, and so $i_{Q}(T)=\{1\}$, as $Q$ is of order prime to $p$. \smallskip Now it is clear that $U$ is representable by an open and closed sub-scheme of $S$ with underlying set $U_p$. 
\smallskip --- \emph{The set $U'_p:=V_p-(S_p-S_1)$ is open and closed in $S-(S_p-S_1)$, and $G\times_SU'_p$ has a unique normal finite \'{e}tale sub-$U'_p$-group scheme of order $p$ }: \smallskip Restricting to each $S_q$, where $q$ is a prime distinct from $p$, one may assume $S=S_q$. And by \emph{f}), it suffices to assume $S=U_q$. Consider the canonical exact sequence \[1\to Q\to G\to G/Q\to 1\] where $Q$ is the unique normal finite flat finitely presented sub-$S$-group scheme of $G$ of order $q$. By \emph{f}) applied to the prime-to-$q$ finite \'{e}tale quotient $G/Q$, and replacing $S$ by an open and closed sub-scheme, one may assume $G/Q$ has a unique normal finite \'{e}tale sub-$S$-group scheme $\overline{P}$ of order $p$. Let $\overline{G}$ be the pre-image of $\overline{P}$ in $G$. Note that replacing $G$ by $\overline{G}$ one may assume $P:=G/Q$ of order $p$. According to \emph{c}), by a finite \'{e}tale surjective base change, one may assume that $G=QP$ is a semi-direct product and that $P$ is constant of generator $g$. Now, by conjugation, $P$ acts on $Q$. Let $\zeta$ be the image of $g$ in $\mathrm{Aut}_S(Q)$ and consider the following sub-functor of the final functor : \smallskip \emph{For an $S$-scheme $S'$, $V(S')=\{\emptyset\}$, if $\zeta\times_SS'=\mathrm{id}_Q\times_SS'$, and $V(S')=\emptyset$, otherwise.} \smallskip This sub-functor $V\to S$ is representable by a closed immersion (SGA 3 VIII 6.4). And $V$ has underlying set $U'_p$, and $G\times_SV$ is commutative with its $p$-torsion being the unique normal finite \'{e}tale sub-$V$-group scheme of order $p$. \smallskip On $S_1$, $V\times_SS_1\to S_1$ is an open and closed immersion. For, $Q\times_SS_1$, hence $\underline{\mathrm{Aut}}_S(Q)\times_SS_1$ as well, is finite \'{e}tale over $S_1$. \smallskip To show that $V\to S$ is an open immersion at each point $s$ of $U'_p-S_1$, it suffices by strictly localizing $S$ at $s$ to assume $S$ strictly local. 
By \cite{oort_tate} \S 2 Theorem 2, $Q$ corresponds to a triple $(L, c, d)$ which consists of an invertible $\mathcal{O}_S$-module $L$ and of $\mathcal{O}_S$-linear homomorphisms $c: L\to L^{\otimes q}$, $d: L^{\otimes q}\to L$ satisfying $d\circ c=w.\mathrm{Id}_L$ for a certain element $w\in\Gamma(S, \mathcal{O}_S)$. This correspondence identifies the $S$-group automorphisms of $Q$ with the units $u\in \Gamma(S, \mathcal{O}_S)^{\times}$ such that $uc=u^qc$, $ud=u^qd$. In particular, $\zeta$ is identified to an element of $\mu_p(S)$, and the relation ``$\zeta=\mathrm{Id}_Q$'' is an open and closed relation on $S$, as $\mu_{p S}$ is finite \'{e}tale over $S$. \end{proof} \smallskip \smallskip \emph{Proof of the theorem } --- \smallskip The last assertion follows by Lemma, \emph{d}). By Lemma, \emph{c})+\emph{b}), it only remains to write $G$ as an extension \[1\to G'\to G\to G''\to 1\] with $G''$ $S$-finite \'{e}tale and $G'$ commutative finite flat of finite presentation over $S$. \smallskip One may assume $G$ of constant order $n$. The order $i_G(s)$ of the identity component of each fiber $G\times_Ss$ divides $n$ and equals either $1$ or the characteristic of $k(s)$. If $E:=i_G(S)$ consists only of $1$, $G$ is $S$-finite \'{e}tale (Lemma, \emph{i})). Otherwise, for each prime $p$ in $E$, there is (Lemma, \emph{j})) an open and closed sub-scheme $V_p$ of $S$ such that : \smallskip --- \emph{On $S-V_p$, $i_G$ does not take value $p$.} \smallskip --- \emph{On $V_p$, $G\times_SV_p$ has a unique normal finite flat finitely presented sub-$V_p$-group scheme $G_p$ of order $p$.} \smallskip By induction on the cardinality of $E-\{1\}=\{p_1, p_2, \cdots\}$, suppose that the claim holds for the restriction of $G$ to \[S-(V_{p_1}\cap V_{p_2}\cap\cdots)=(S-V_{p_1})\coprod (V_{p_1}-V_{p_2})\coprod (V_{p_1}\cap V_{p_2}-V_{p_3})\coprod \cdots\] Then, restricting to $V=V_{p_1}\cap V_{p_2}\cap\cdots$, one may assume $V_p=S$ for every prime $p$ in $E$. 
Put $G'=G_{p_1}\times_SG_{p_2}\times_S\cdots$. Notice that the $S$-morphism \[G'\to G,\ (g_1, g_2, \cdots)\mapsto g_1.g_2\cdots\] identifies $G'$ as a normal sub-$S$-group scheme of $G$ with $G''=G/G'$ finite \'{e}tale over $S$. \smallskip \smallskip \bibliographystyle{amsplain}
\section*{Introduction} Let $p\in\C[t,x,y]$ be a homogeneous polynomial of degree $d\ge 1$. A \emph{(linear symmetric) determinantal representation} of $p$ is an expression \[ p=\det(tM_1+xM_2+yM_3) \] where $M_1,M_2,M_3$ are complex symmetric matrices of size $d\times d$. Determinantal representations of plane curves are a classical topic of algebraic geometry. Existence for smooth curves of arbitrary degree was first proved by Dixon in 1902 \cite{Di}. For an exposition in modern language, see Beauville \cite{Be}. Real determinantal representations of real curves were studied systematically only much later, in the work of Dubrovin \cite{Du} and Vinnikov \cite{Vi89}. Of particular interest here are the \emph{definite representations}, where some linear combination of the matrices $M_1,M_2,M_3$ is positive definite. By a celebrated result due to Helton and Vinnikov \cite{HV}, these correspond exactly to the \emph{hyperbolic curves}, whose real points consist of maximally nested ovals in the real projective plane. The Helton-Vinnikov theorem (previously known as the Lax Conjecture) has attracted attention in connection with semidefinite programming, since it characterizes the boundary of those convex subsets of the real plane that can be described by linear matrix inequalities. See Vinnikov \cite{Vi11} for an excellent survey. While the Helton-Vinnikov theorem ensures the existence of a definite determinantal representation for any hyperbolic curve, finding such a representation for a given polynomial $p$ remains a difficult computational problem. With a suitable choice of coordinates, we can restrict to representations of the form \[ p=\det(tI_d+xD+yR) \] where $I_d$ is the identity matrix, $D$ is a real diagonal and $R$ a real symmetric matrix. The hyperbolicity of $p$ is reflected in the fact that for any point $(u,v)\in\R^2$, all roots of the univariate polynomial $p(t,u,v)\in\R[t]$ are real. 
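Polynomials of this form are automatically hyperbolic: for real $(u,v)$ the matrix $uD+vR$ is real symmetric, so its eigenvalues are real, and the roots of $p(t,u,v)=\det(tI_d+uD+vR)$ in $t$ are minus these eigenvalues. A quick numerical illustration (Python with NumPy; the matrices are random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
D = np.diag(rng.standard_normal(d))        # real diagonal matrix
R = rng.standard_normal((d, d))
R = (R + R.T) / 2                          # real symmetric matrix

def roots_in_t(u, v):
    """Roots in t of p(t,u,v) = det(t*I + u*D + v*R).

    For real (u, v) the matrix u*D + v*R is real symmetric, so its
    eigenvalues are real; the roots are minus these eigenvalues.
    """
    return -np.linalg.eigvalsh(u*D + v*R)

# Spot check: p vanishes at each computed root, i.e. the d real numbers
# returned above really are all d roots of p(., u, v).
u, v = 0.8, -1.5
residuals = [np.linalg.det(t*np.eye(d) + u*D + v*R) for t in roots_in_t(u, v)]
print(max(abs(r) for r in residuals))      # numerically zero
```

This also shows why the inverse problem is the hard direction: evaluating the determinantal map at $(D,R)$ is immediate, whereas recovering $(D,R)$ from $p$ is not.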
Given such $p$, the computational task of finding the unknown entries of $D$ and $R$ leads, in general, to a zero-dimensional system of polynomial equations. However, as $d$ grows, this direct approach quickly becomes infeasible in practice. This, as well as symbolic methods and an alternative approach via theta functions based on the proof of the Helton-Vinnikov theorem, has been investigated in \cite{PSV}. As far as actual computations are concerned, $d=6$ was the largest degree for which computations terminated in reasonable time. We present here a more sophisticated numerical approach, implemented with \textsc{NAG4M2}: the \emph{NumericalAlgebraicGeometry} package~\cite{Leykin:NAG4M2} for \textsc{Macaulay2}~\cite{M2www}. We consider the branched cover of the space of homogeneous polynomials by pairs of matrices $(D,R)$, with $D$ diagonal and $R$ symmetric, via the determinantal map $$(D,R)\mapsto \det(tI_d+xD+yR).$$ We use known results on the number of equivalence classes of complex determinantal representations to show that the determinantal map is unramified over the set of smooth hyperbolic polynomials (Thm.~\ref{Thm:U}). We then use the fact that this set is path-connected. In fact, an explicit path connecting any hyperbolic polynomial to a certain fixed polynomial was constructed by Nuij in \cite{Nu}, which we refer to as the N-path. Our algorithm works by constructing a lifting of the N-path to the covering space. The advantage over an application of a blackbox homotopy continuation solver to the zero-dimensional system of equations is that we need to track a {\em single} path instead of as many paths as there are complex solutions. We also recover the Helton-Vinnikov theorem from these topological considerations and the count of equivalence classes of complex representations. Since the singular locus has codimension at least $2$ inside the set of strictly hyperbolic polynomials, the N-path avoids singularities for almost all starting polynomials (Prop. 
\ref{Prop: avoid L}). In the unlikely event that the N-path goes through the singular locus, it is possible to perturb the starting point and obtain an approximate determinantal representation. We also provide an algorithm that produces a complex determinantal representation via the complexification of the N-path. We modify the approach of Nuij to introduce a {\em randomized N-path}, which depends on a choice of random linear forms. For a given starting polynomial $p$, we conjecture (Conjecture~\ref{conjecture:main}) that the randomized N-path avoids the singular locus with probability 1. This also results in a better practical complexity of the computation than the original N-path (Remark~\ref{Remark:N-path-comparison}). Our proof-of-concept implementation is written in the top-level interpreted language of {\sc Macaulay2} and, by default, uses standard double floating point precision. Even with these limitations we can compute small examples in reasonable time (see the example in \S\ref{section:example} for $d=6$): we are able to finish examples with $d\leq 10$ within one day. With arbitrary precision arithmetic and speeding up the numerical evaluation procedure, we see no obstacles to computing robustly for $d$ in double digits using the present-day hardware. We note that the developed method constructs an {\em intrinsically real} homotopy to find real solutions to a (specially structured) polynomial system. The only other intrinsically real homotopy known to us is introduced for Khovanskii-Rolle continuation in~\cite{BS}. \emph{Acknowledgements.} We are grateful to Institut Mittag-Leffler, where this project has started, for hosting us in the Spring of 2011. We would like to thank Greg Blekherman and Victor Vinnikov for helpful discussions. \section{Hyperbolic and determinantal polynomials} \noindent We consider real or complex homogeneous polynomials of degree $d\ge 1$ in $n+1$ variables $(t,x)$, $x=x_1,\dots,x_n$. 
Let \begin{align*} \sF &=\bigl\{p\in\C[t,x]\:|\: p\text{ is homogeneous of total degree }d\text{ and }p(1,0,\dots,0)=1\bigr\}\\ \sF_\R &= \sF\cap\R[t,x]. \end{align*} A polynomial $p\in \sF_\R$ is called \emph{hyperbolic} if all roots of the univariate polynomial $p(t,u)\in\R[t]$ are real, for all $u\in\R^n$. It is called \emph{strictly hyperbolic} if all these roots are distinct, for all $u\in\R^n$, $u\neq 0$. Write \[ \sH =\bigl\{p\in \sF_\R\:|\: p\text{ is hyperbolic}\bigr\}. \] \begin{Prop}\label{Prop:TopologyH}\mbox{} \begin{enumerate} \item The interior $\interior(\sH)$ of $\sH$ is the set of strictly hyperbolic polynomials and $\sH$ is the closure of $\interior(\sH)$ in $\sF_\R$. \item The set $\interior(\sH)$ is contractible and path-connected (hence so is $\sH$). \item A polynomial $f\in\sH$ is strictly hyperbolic if and only if the projective variety $\sV_\C(f)$ defined by $f$ has no real singular points. \item Let $\sH^\circ$ be the set of hyperbolic polynomials $p\in\sH$ for which $\sV_\C(p)$ is smooth. Then $\interior(\sH)\setminus\sH^\circ$ has codimension at least $2$ in $\sF_\R$. \end{enumerate} \end{Prop} \begin{proof} (1) and (2) are proved by Nuij \cite{Nu} (see also Section \ref{Sec:N-path} below). (3) is proved in \cite[Lemma 2.4]{PV}. (4) follows from the fact that the elements of $\interior(\sH)$ have no real singularities, while complex singularities must come in conjugate pairs. \end{proof} For the remainder of this section, we restrict to the case $n=2$ (plane projective curves) and use $(x,y)$ instead of $(x_1,x_2)$. \begin{Remark}\label{Remark:StrictlyHyperbolicSingular} If $n=2$ and $d\le 3$, then $\sH^\circ=\interior(\sH)$, i.e.~every strictly hyperbolic curve of degree at most $3$ is smooth. This is simply because a real plane curve of degree at most $3$ cannot have any non-real singularities. When $d\ge 4$, a strictly hyperbolic curve may still have complex singularities.
For example, let $p=1/19(19t^4 -31 x^2t^2 - 86 y^2t^2 + 9x^4 + 41 x^2 y^2 +39 y^4)$. One can check that $p$ is hyperbolic and that the projective plane curve defined by $p$ has no real singularities, hence $p$ is strictly hyperbolic. However, the four points $(1:\pm 2:\pm i)$ form two pairs of complex-conjugate singular points of $\sV_\C(p)$. Thus $p\in\interior(\sH)\setminus\sH^\circ$. \end{Remark} \noindent We will use the notation \begin{align*} \sM&=\bigl\{(D,R)\in\bigl(\Sym_d(\C)\bigr)^2\:|\: D\text{ is diagonal}\bigr\},\\ \sM_\R&=\sM\cap\bigl(\Sym_d(\R)\bigr)^2. \end{align*} Note that, since $n=2$, we have $\dim_\C\sF=\dim_\R\sF_\R=\dim_\R\sM_\R=\dim_\C\sM=\frac{d(d+3)}{2}$. We study the map \[ \Phi\colon\left\{ \begin{array}{ccc} \sM & \to & \sF\\ (D,R) & \mapsto & \det(tI_d+xD+yR). \end{array}\right. \] and its restriction to $\sM_\R$. The image of $\sM_\R$ under $\Phi$ is contained in $\sH$. It is also not hard to show that it is closed (see \cite[Lemma 3.4]{PV}). Our first goal is to find a connected open subset $U$ of $\sH$ such that the restriction of $\Phi$ to $\Phi^{-1}(U)$ is smooth. For fixed $p\in\sH$, the group $\SL_d(\C)\times\{\pm 1\}$ acts on the determinantal representations $p=\det(tM_1+xM_2+yM_3)$ via symmetric equivalence. In other words, any $A\in\SL_d(\C)\times\{\pm 1\}$ gives a new representation $p=\det(tAM_1A^T+xAM_2A^T+yAM_3A^T)$. When we restrict to the normalized representations we are considering, we have an action on pairs $(D,R)\in\Phi^{-1}(p)$ by those elements $A\in\SL_d(\C)\times\{\pm 1\}$ for which $AA^T=I_d$ (i.e.~$A\in\Orth_d(\C)$) and $ADA^T$ is diagonal. \begin{Thm} For $n=2$, any $p\in\sF$ has only finitely many complex representations $p=\det(tI_d+xD+yR)$ up to symmetric equivalence. If the curve $\sV_\C(p)$ is smooth, the number of equivalence classes is precisely $2^{g-1}\cdot(2^g+1)$, where $g=\binom{d-1}{2}$ is the genus of $\sV_\C(p)$.
\end{Thm} \begin{proof} For smooth curves, the equivalence classes of symmetric determinantal representations are in canonical bijection with ineffective even theta characteristics; see \cite[Thm.~2.1]{PSV} and references given there. \end{proof} \begin{Thm}\label{Thm:U} The set $\sH^\circ$ of smooth hyperbolic polynomials in three variables is an open, dense, path-connected subset of $\sH$, and each fibre of $\Phi$ over a point of $\sH^\circ$ consists of exactly $2^{g-1}\cdot(2^g+1)\cdot 2^{d-1}\cdot d!$ distinct points. \end{Thm} \begin{proof} The statements about the topology of $\sH^\circ$ follow immediately from Prop.~\ref{Prop:TopologyH}. Let $p\in\interior(\sH)$ and let $(D,R)\in\Phi^{-1}(p)$, which means $p=\det(tI_d+xD+yR)$. The diagonal entries of $D$ are the zeros of $p(t,-1,0)$. Since $p$ is strictly hyperbolic, these zeros are real and distinct. So $D$ is a real diagonal matrix with distinct entries. It follows then that the centralizer of $D$ in $\Orth_d(\C)$ consists precisely of the $2^d$ diagonal matrices with entries $\pm 1$. Let $S$ be such a matrix with $S\neq \pm I_d$. We want to identify the set of symmetric matrices $R$ that commute with $S$. Up to permutation, we may assume that the first $k$ diagonal entries of $S$ are equal to $-1$ and the remaining $d-k$ are equal to $1$. It follows then that any $R$ with $SR=RS$ must have $r_{ij}=r_{ji}=0$ if $i>k\ge j$, so that $R$ is block-diagonal. For such $R$ to show up in a pair $(D,R)\in\Phi^{-1}(p)$, the polynomial $p$ must be reducible. In particular, if $p\in\sH^\circ$, there is no such $S$ commuting with $R$. It follows then that $\{SRS\:|\: S\text{ diagonal with }S^2=I_d\}$ has $2^{d-1}$ distinct elements. Permuting the distinct diagonal entries of $D$ gives $d!$ possible choices of $D$. This, combined with the count of equivalence classes in the preceding theorem, completes the proof. \end{proof} \begin{Cor} The restriction of $\Phi$ to $\Phi^{-1}(\sH^\circ)$ is smooth. 
\end{Cor} \begin{proof} The restriction of $\Phi$ to $\Phi^{-1}(\sH^\circ)$ is a polynomial map with finite fibres that is unramified over $\sH^\circ$, since the cardinality of the fibre does not change. Hence it is smooth (see for example Hartshorne \cite[III.10]{Ha}). \end{proof} \noindent We sketch an argument for deducing the Helton-Vinnikov theorem from Thm.~\ref{Thm:U}. \begin{Cor}[Helton-Vinnikov Theorem] Every hyperbolic polynomial $p\in\sH$ in three variables admits a determinantal representation $p=\det(tI_d+xD+yR)$ with $D$ diagonal and $R$ real symmetric. \end{Cor} \begin{proof} Since all fibres of $\Phi$ over $\sH^\circ$ have the same cardinality and $\sH^\circ$ is path-connected, the number of real points in each fibre must also be constant over $\sH^\circ$. That number cannot be zero, since there exist fibres with real points. (This amounts to showing that for each $d\ge 1$ there exists a real pair $(D,R)$ such that $p=\det(tI_d+xD+yR)$ defines a smooth curve. This can for example be deduced with the help of Bertini's theorem.) It follows that $\sH^\circ$ is contained in $\Phi(\sM_\R)$. On the other hand, $\Phi(\sM_\R)$ is closed in $\sF_\R$ by \cite[Lemma 3.4]{PV} and contained in $\sH$, hence $\Phi(\sM_\R)=\sH$. \end{proof} \begin{Remark}\label{rem: real repr} The number of equivalence classes of real definite representations of a hyperbolic curve is in fact also known, namely it is $2^g$. See \cite{PSV} and references to \cite{Vi93} given there. We conclude that $\Phi^{-1}(p)\cap\sM_\R$ consists of $2^g\cdot 2^{d-1}\cdot d!$ distinct points for every $p\in\sH^\circ$. Note also that, even if $p$ is hyperbolic, it will typically admit real determinantal representations $p=\det(tM_1+xM_2+yM_3)$ that are not definite, i.e.~are not equivalent to such a representation with $M_1=I_d$ and $M_2,M_3$ real. Such representations do not reflect the hyperbolicity of $p$. 
\end{Remark} \section{The Nuij path}\label{Sec:N-path} In order to use homotopy continuation methods for numerical computations, we need an explicit path connecting any two given points in the space $\sH$ of hyperbolic polynomials. \subsection{Original N-path} Following Nuij \cite{Nu}, we consider the following operators on polynomials $\sF_\R \subset \R[t,x]=\R[t,x_1,\ldots,x_n]$. \begin{align*} T^\ell_s &\colon p\mapsto p+s \ell\frac{\partial p}{\partial t}\quad (\ell\in\R[x] \text{ a linear form})\\ G_s &\colon p\mapsto p(t,sx)\\ F_s &= (T^{x_1}_s)^d\cdots (T^{x_n}_s)^d\\ N_s &= F_{1-s}G_s\,, \end{align*} where $s\in\R$ is a parameter. For fixed $s$, all of these are linear operators on $\R[t,x]$ taking the affine-linear subspace $\sF_\R$ to itself. Clearly, $G_s$ preserves hyperbolicity for any $s\in\R$, and $G_0(p)=t^d$ for all $p\in\sF_\R$. The operator $F_s$ is used to ``smoothen'' the polynomials along the path $s\mapsto G_s(p)$. The exact statement is the following. \begin{Prop}[Nuij \cite{Nu}] For $s\ge 0$, the operators $T^\ell_s$ preserve hyperbolicity. Moreover, for $p\in\sH$, we have $N_s(p)\in\interior(\sH)$ for all $s\in [0,1)$, with $N_0(p)\in\interior(\sH)$ not depending on $p$ and $N_1(p)=p$.\qed \end{Prop} For $p\in\sF_\R$, we call $[0,1]\ni s\mapsto N_s(p)$ the \emph{N-path} of $p$. \begin{Remark} The N-path defines the contraction that appeared in Proposition~\ref{Prop:TopologyH}(2): for all $p\in\sH$, the N-path leads to $N_0(p)=F_1G_0(p)=F_1(t^d)=N_0(t^d)$. However, it does not define a {\em strong} deformation retract as $N_s(t^d)=t^d$ holds only at the beginning and at the end of the N-path. \end{Remark} In order to ensure smoothness of the map $\Phi\colon\sM\to\sH$ along an N-path, we would like to ensure that the N-path stays inside the set $\sH^\circ$ of smooth hyperbolic polynomials and thus away from the ramification locus, by the discussion in the preceding section. 
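The operators above are straightforward to prototype. The following Python sketch (our own illustrative code, not part of the \textsc{NAG4M2} implementation; polynomials in $t,x,y$ are stored as dictionaries mapping exponent triples to coefficients, and we take $n=2$) checks the two defining properties of the N-path, namely $N_1(p)=p$ and the independence of $N_0(p)$ from $p$, and observes strict hyperbolicity at an interior parameter value.

```python
from fractions import Fraction

def trim(p):
    # drop zero coefficients; a polynomial is {(a, b, c): coeff} for t^a x^b y^c
    return {m: c for m, c in p.items() if c != 0}

def padd(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return trim(r)

def pmul(p, q):
    r = {}
    for (a, b, c), u in p.items():
        for (d, e, f), v in q.items():
            m = (a + d, b + e, c + f)
            r[m] = r.get(m, 0) + u * v
    return trim(r)

def dt(p):
    # partial derivative with respect to t
    return {(a - 1, b, c): a * v for (a, b, c), v in p.items() if a > 0}

def T(p, ell, s):
    # Nuij's operator T^ell_s : p -> p + s * ell * dp/dt
    return padd(p, pmul({m: s * c for m, c in ell.items()}, dt(p)))

def G(p, s):
    # G_s : p(t, x, y) -> p(t, s*x, s*y)
    return trim({(a, b, c): v * s ** (b + c) for (a, b, c), v in p.items()})

X, Y = {(0, 1, 0): 1}, {(0, 0, 1): 1}

def N(p, s, d):
    # N_s = F_{1-s} G_s with F_s = (T^x_s)^d (T^y_s)^d  (the case n = 2)
    q = G(p, s)
    for ell in [Y] * d + [X] * d:  # rightmost factors act first
        q = T(q, ell, 1 - s)
    return q

# two hyperbolic (but not strictly hyperbolic) quadrics, monic in t
p1 = pmul({(1, 0, 0): 1, (0, 1, 0): -1}, {(1, 0, 0): 1, (0, 0, 1): 2})  # (t-x)(t+2y)
p2 = pmul({(1, 0, 0): 1, (0, 1, 0): 1}, {(1, 0, 0): 1, (0, 0, 1): -1})  # (t+x)(t-y)

# restrict N_{1/2}(p1) to the line (x, y) = (1, 1): a quadratic in t
q = N(p1, Fraction(1, 2), 2)
coef = {}
for (a, b, c), v in q.items():
    coef[a] = coef.get(a, 0) + v
disc = coef.get(1, 0) ** 2 - 4 * coef.get(2, 0) * coef.get(0, 0)
```

The last quantity is the discriminant of $N_{1/2}(p_1)$ restricted to a sample direction; by the proposition it must be positive, since $N_s(p)$ is strictly hyperbolic for $s\in[0,1)$.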
\begin{Example} If $n=2$ and $d\le 3$, we know that $\sH^\circ=\interior(\sH)$ (Remark \ref{Remark:StrictlyHyperbolicSingular}). For $n=d=2$, we verify by explicit computation that the N-path stays inside the strictly hyperbolic (thus irreducible) conics. Let $D=\Diag(d_1,d_2)$ and $R = \begin{bmatrix} r_{11}&r_{12}\\ r_{12}&r_{22} \end{bmatrix}.$ The quadric $$N_s(\Phi(D,R)) = Ax^2+Bxy+Cy^2+Dxt+Eyt+Ft^2$$ is not contained in $\sH^\circ$ if and only if it factors, which happens if and only if \begin{eqnarray*} 4\Disc({N_s(\Phi(D,R))}) &=& 4\det \begin{bmatrix} A&B/2&D/2\\ B/2&C&E/2\\ D/2&E/2&F \end{bmatrix} \\ &=&2s^2(s-1)^2(d_1 - d_2)^2+\\ &&2s^2(s-1)^2\left(r_{11}-r_{22}\right)^2 + \\ &&s^4r_{12}^2\left({d_1} - {d_2} \right)^2+\\ &&8r_{12}^2 s^2\left(1-s\right)^2+\\ &&16(s-1)^4\\ &=& 0. \end{eqnarray*} The sum-of-squares representation was produced using the intuition obtained by a numerical sum-of-squares decomposition delivered by \textsc{Yalmip} and further exact symbolic computations in \textsc{Mathematica} (see {\tt Nuij-d2.mathematica}~\cite{Leykin-Plaumann:wwwVCN}). The components of the sum-of-squares decomposition above vanish simultaneously only when $s = 1$ (this proves the claim) and either $r_{12} = 0$ or $d_2 = d_1$. \end{Example} Unfortunately, for $d\ge 4$ it is no longer true that the N-path always stays inside $\sH^\circ$, due to the existence of strictly hyperbolic polynomials with complex singularities. \begin{Example} Consider again the polynomial \[ p=1/19(19t^4 -31 x^2t^2 - 86 y^2t^2 + 9x^4 + 41 x^2 y^2 +39 y^4), \] which is contained in $\interior(\sH)$ but not in $\sH^\circ$ (c.f.~Remark \ref{Remark:StrictlyHyperbolicSingular}).
One can verify through direct computation that the polynomial \begin{align*} r = \frac{1}{124659}&\biggl(124659 t^4-221616 t^3 (x+y)-324 t^2 \left(205 x^2-912 x y+1580 y^2\right)\\ &+1440 t \left(98 x^3+41 x^2 y+316 x y^2+373 y^3\right)\\ &+40 \left(1099 x^4-1568 x^3 y+5540 x^2 y^2-5968 x y^3+5849 y^4\right)\biggr) \end{align*} is hyperbolic and smooth, i.e.~$r\in \sH^\circ$, but $N_{1/9}(r)=p$, so that the N-path for $r$ is not fully contained in $\sH^\circ$. \end{Example} However, one can still attempt to avoid the ramification points in the following manner. \begin{Prop}\label{Prop: avoid L} The N-path $N_s(p)$, with parameter $s$ varied along a piecewise-linear path $[0,c]\cup[c,1) \subset \C$, does not meet the ramification locus of $\Phi$, for almost all $c\in \C$. (More precisely, this holds for any $c$ taken in the complement of some proper real algebraic subset of $\C \simeq \R^2$.) \end{Prop} \begin{proof} This immediately follows from the fact that the ramification locus is a proper complex subvariety of $\sF$ and therefore has real codimension at least $2$. \end{proof} \begin{Remark} Using a random path as described in the proposition may result in a non-real determinantal representation: indeed, a non-real path for $s$ is not guaranteed to result in a real point in the fiber $\Phi^{-1}(N_0(p))$ when a real point in $\Phi^{-1}(N_1(p))$ is taken. One may ask for the probability of obtaining a real determinantal representation at the end of the path described in Proposition~\ref{Prop: avoid L}. (For a more precise question, one may pick $c$ on a unit circle with a uniform distribution.) While this probability is clearly non-zero, deriving an explicit lower bound seems to be a very hard problem. A naive intuition suggests that the probability can be estimated as a ratio of the number of real representations (Remark~\ref{rem: real repr}) to the total count of complex representations (Theorem~\ref{Thm:U}).
The experiments with quartic hyperbolic curves, however, suggest that the probability is much higher. \end{Remark} On the other hand, the set of polynomials for which the N-path avoids the ramification locus altogether is dense, as the following proposition shows. \begin{Prop} The set of strictly hyperbolic polynomials $p$ such that $N_s(p)\in\sH^\circ$ for all $s\in (0,1]$ is dense in $\sH$. (More precisely, its complement is a semialgebraic subset of positive codimension.) \end{Prop} \begin{proof} Consider the map \[ N\colon\left\{ \begin{array}{ccc} \sF_\R\times\R &\to &\sF_\R\\ (p,s) &\mapsto &N_s(p). \end{array}\right. \] and put $\sR=\interior(\sH)\setminus\sH^\circ\subset\sF_\R$. We want to show that the projection of the semialgebraic set $N^{-1}(\sR)\subset\sF_\R\times\R$ onto $\sF_\R$ has codimension at least $1$. The linear operator $N_s=(T^{x_1}_{1-s})^d\cdots (T^{x_n}_{1-s})^d G_s$ is bijective for all $s\neq 0$. Indeed, $p = -sx_k (\partial p/\partial t)$ is possible only for $p=0$, since $sx_k(\partial p/\partial t)$ has strictly lower degree in $t$ than $p$; hence each $T^{x_k}_s$ has trivial kernel. Furthermore, $G_s$ is bijective for $s\neq 0$, hence so is $N_s$. This implies that all fibres $N^{-1}(p)$ of $N$ have dimension at most $1$ (except if $p=N_0(p)$ is the fixed endpoint of the N-path). In particular, all fibres of $N$ over $\sR$ are at most $1$-dimensional, and since $\sR$ has codimension $2$ in $\sF_\R$, this implies that $N^{-1}(\sR)$ also has codimension at least $2$ in $\sF_\R\times\R$, so that the projection of $N^{-1}(\sR)$ onto $\sF_\R$ has codimension at least $1$, as claimed. \end{proof} In principle, this proposition can be used as follows: Suppose $p\in\sH$ is such that the N-path intersects the ramification locus. If $p\in\interior(\sH)$, we can apply the algorithm to small random perturbations of $p$, which will avoid the ramification locus with probability $1$.
If $p$ is a boundary point of $\sH$, i.e.~if $p$ is not strictly hyperbolic, we can first replace $p$ by $N_{1-\epsilon}(p)$ for small $\epsilon>0$, which is strictly hyperbolic, and then perturb further if necessary. On the other hand, we do not know whether the fixed endpoint $N_0(p)=F_1(t^d)$ is contained in $\sH^\circ$ in all degrees $d$, although the direct computation in our experiments suggests that this is the case. (See also Thm.~\ref{Thm:SmoothRandomEndpoint} below). \subsection{Randomized N-path} The following modification of Nuij's construction has proved itself useful in computations. Let $e=\max\{d,n\}$ and choose a sequence $L=(\ell_1,\dots,\ell_e)\in\R[x]^e$ of $e$ linear forms. The {\em randomized N-path} $\randN_s^{L}$ given by $L$ is defined by \begin{align*} \randF_s^{L} &= T^{\ell_1}_s T^{\ell_2}_s\cdots T^{\ell_e}_s\,,\\ \randN_s^{L} &= \randF_{1-s}^{L}\,G_s\,, \end{align*} with $T$, $G$ as before. Thus the randomized N-path only involves $\max\{d,n\}$ differential operators rather than $dn$. In practice, replacing the N-path by the randomized N-path has worked very well (c.f.~Remark \ref{Remark:N-path-comparison}).\\ The product of operators in $\randF_s^{L}$ can be expanded explicitly, namely \[ \randF_s^{L}(p)=\sum_{k=0}^e s^k\sigma_k(\ell_1,\dots,\ell_e)\frac{\partial^k p}{\partial t^k} \] for all $p\in\sF_\R$, where $\sigma_k(y_1,\dots,y_e)$ denotes the elementary symmetric polynomial of degree $k$ in the variables $y_1,\dots,y_e$. \begin{Conjecture}\label{conjecture:main} Let $e\ge\max\{d,n\}$. \begin{enumerate} \item For a given $p\in\sH$, the set of $L$ such that $\randN_s^{L}(p) \in \sH^\circ$ for $s\in [0,1)$ is dense in $\R[x]^e$. \item For a general choice of the linear forms $L\in\R[x]^e$, the set of polynomials $p$ such that $\randN_s^{L}(p) \in \sH^\circ$ for all $s\in[0,1]$ is dense in $\sH$.
\end{enumerate} \end{Conjecture} Note that $e$ is chosen minimally in the sense that if $e<d$ or $e<n$, the fixed endpoint $\randN_0^{L}(p)=\randF_1^{L}(t^d)$ is no longer smooth or even strictly hyperbolic. We will show that if $n=2$, then at least that endpoint lies in $\sH^\circ$ for a generic choice of $L$. The proof relies on Bertini's theorem in the following form. \begin{Thm}[Bertini's theorem, extended form; {\cite[Thm.~4.1]{Kl}}] On an arbitrary ambient variety, if a linear system has no fixed components, then the general member has no singular points outside of the base locus of the system and of the singular locus of the ambient variety. \end{Thm} \noindent For the proof, see also \cite[Thm.~6.6.2]{Jo}. \begin{Thm}\label{Thm:SmoothRandomEndpoint} Let $n=2$. For all $d\ge 2$, the plane projective curve $\sV_\C(\randF_1^L(t^d))$ is smooth for a generic choice of $L\in\C[x,y]_1^d$. \end{Thm} \begin{proof} We first show the following: Let $q\in\C[t,x,y]$ be homogeneous and monic in $t$ with $\sV_\C(q)$ smooth, $q\neq t$. Let $\ell\in\C[x,y]_1$ and let $k$ be a positive integer; then $ T_1^\ell(t^kq) = t^kq + \ell(kt^{k-1}q + t^kq') = t^{k-1}\bigl((t+k\ell)q + t\ell q'\bigr)$, where $q'=\partial q/\partial t$. Put \[ r_\ell = (t+k\ell)q + t\ell q'. \] We claim that, for generic $\ell$, the variety $\sV_\C(r_\ell)$ is smooth. To see this, consider $R= (t+u)q + tvq'\in\C[t,x,y,u,v]$. We find $\partial R/\partial u=q$ and $\partial R/\partial v=tq'$, hence the singular locus of the variety $\sV_\C(R)$ in $\P^4$ is contained in $\sV_\C(q)\cap\sV_\C(tq')$. Consider the linear series on $\sV_\C(R)$ defined by $u = k\ell$, $v=\ell$, $\ell\in\C[x,y]_1$. It is basepoint-free (in particular without fixed components), since the only base point on $\P^4$ is $(1:0:0:0:0)$, and that is not a point on $\sV_\C(R)$. By Bertini's theorem as stated above, the variety $\sV_\C(r_\ell)$ has no singular points outside the singular locus of $\sV_\C(R)$ for generic $\ell\in\C[x,y]_1$.
Thus we are left with showing that, for any point $P\in\sV_\C(q)\cap\sV_\C(tq')$, we have $(\nabla r_\ell)(P)\neq 0$ for generic $\ell\in\C[x,y]_1$. Since $\sV_\C(q)$ is smooth by assumption, $q$ and $tq'$ are coprime in $\C[t,x,y]$, hence the intersection $\sV_\C(q)\cap\sV_\C(tq')$ in $\P^2$ is finite. If $P$ is any of these intersection points, suppose first that $q'(P)=0$, which implies either $(\partial q/\partial x)(P)\neq 0$ or $(\partial q/\partial y)(P)\neq 0$, since $\sV_\C(q)$ is smooth. Suppose $a=(\partial q/\partial x)(P)\neq 0$ and put $b=(\partial q'/\partial x)(P)$; then \begin{align*} (\partial r_\ell/\partial x)(P) &= \bigl(t(P)+k\ell(P)\bigr)a + t(P)\ell(P)b\\ & = (ka+bt(P))\ell(P) + at(P). \end{align*} If $ka\neq -bt(P)$, there is at most one value $\ell(P)$ for which $(\partial r_\ell/\partial x)(P)=0$. Otherwise, if $ka=-bt(P)$, then $at(P)\neq 0$, so $(\partial r_\ell/\partial x)(P)\neq 0$. The case $(\partial q/\partial x)(P)=0$ and $(\partial q/\partial y)(P)\neq 0$ is analogous. Finally, if $q'(P)\neq 0$, then we must have $t(P)=0$, hence $(\partial r_\ell/\partial t)(P) = (k+1)\ell(P)q'(P)$ is non-zero, provided that $\ell(P)\neq 0$. Thus we have shown that $\sV_\C(r_\ell)$ is smooth for generic $\ell$. To prove the original claim, let $L\in\C[x,y]_1^d$ and consider \[ \randF_1^L(t^d)=T_1^{\ell_1}\cdots T_1^{\ell_d}(t^d). \] Applying the above, with $k=d$ and $q=1$, shows that $T_1^{\ell_d}(t^d)$ is of the form $t^{d-1}q$, and $\sV_\C(q)$ is smooth for generic $\ell_d$. The claim now follows by induction. \end{proof} \section{Algorithm and implementation}\label{Sec:implementation} Given a hyperbolic polynomial $p\in \sH$, the N-path $N_s(p)$ connects $p=N_1(p)$ with $p_0=N_0(p)$, which does not depend on $p$. This suggests the following algorithm to compute a determinantal representation $(D_p,R_p)\in \sM_\R$ for $p$: \begin{enumerate} \item Pick $(D_q,R_q)\in \sM_\R$ giving a strictly hyperbolic polynomial $q=\Phi(D_q,R_q)$.
Track the homotopy path $N_s(q)$ from $s=1$ with the {\em start} solution $(D_q,R_q)$ to $s=0$ producing the {\em target} solution $(D_{p_0},R_{p_0})$. Then $p_0 = \Phi(D_{p_0},R_{p_0})$. \item Track the homotopy path $N_s(p)$ from $s=0$ with the start solution $(D_{p_0},R_{p_0})$ to $s=1$ to obtain $(D_p,R_p)$ such that $p = \Phi(D_p,R_p)$. \end{enumerate} In principle, the first step only has to be performed once in each degree $d$. In what follows we describe two ways to set up a polynomial homotopy continuation for the pullback of an N-path. \subsection{N-path in the monomial basis} One way is to take the coefficients of the polynomial $\Phi(D,R)-N_s(p)\in \C[D,R,s][t,x,y]$ with respect to the monomial basis of $\sF$. This gives a family of square (\#equations=\#unknowns) systems of polynomial equations in $\C[D,R]$ parametrized by $s$. Then this family is passed to a homotopy continuation software package (we use {\sc NAG4M2}~\cite{Leykin:NAG4M2}). As long as $s\in\C$ follows a path that ensures that $N_s(p)$ stays in $\sH^\circ\subset \sH$, there are no singularities on the homotopy path except, perhaps, at the target system (see discussion in Section~\ref{Sec:N-path}). The bottleneck of this approach is the expansion of the determinant in the expression $\Phi(D,R)$ and evaluation of its $(t,x,y)$-coefficients: it takes $\Theta(d!)$ operations and results in an expression with $\Theta(d!)$ terms. This limits us to $d\leq 5$ in the current implementation of this approach. \subsection{N-path with respect to a dual basis} While it may seem that picking a basis of $\sF$ different from the monomial one does not bring any advantage, it turns out to be crucial for practical computation in case of larger $d$. We fix a {\em dual basis} in $\sF^*$ consisting of $m=\dim\sF$ evaluations $e_i$ at general points $(t_i,x_i,y_i)\in \C^3$, for $i=1,\ldots,m$. The current implementation generates the points with coordinates on the unit circle in $\C$ at random. 
Now the family of polynomial systems to consider is $$h_i=e_i(\Phi(D,R)-N_s(p)) \in \C[D,R,s],\ i=1,\ldots,m.$$ Since $e_i(\Phi(D,R)) = \det(It_i+Dx_i+Ry_i)$, the evaluation of $h_i$ and its partial derivatives costs $O((\dim \sF)^3) = O(d^6)$. Evaluation of the (unexpanded) expression $\Phi(D,R)$ and its partial derivatives is much faster than expanding it in the monomial basis. The latter costs $\Theta(d!)$ in the worst case and, in addition, numerical tracking procedures would still need to evaluate the large expanded expression and its partial derivatives. We modified the {\sc NAG4M2} implementation of evaluation circuits, which can be written as straight-line programs, to include taking a determinant as an atomic operation. \section{Example}\label{section:example} The last improvement in the implementation allows us to compute examples for larger $d$. With an implementation of the homotopy tracking in arbitrary precision arithmetic, we see no obstacles to computing determinantal representations for $d$ in double digits. To give an example, we choose the sextic \begin{align*} p =& -36 x^6 - 157 x^4 y^2 - 20 x^3 y^3 - 109 x^2 y^4 + 246 x y^5 - 92 y^6 - 12 x^3 y^2 t + 90 x^2 y^3 t\\ &+ 10 x y^4 t + 76 y^5 t + 49 x^4 t^2 + 156 x^2 y^2 t^2 - 16 x y^3 t^2 + 132 y^4 t^2 + 12 x y^2 t^3\\ &- 14 y^3 t^3 - 14 x^2 t^4 - 27 y^2 t^4 + t^6. \end{align*} The polynomial $p$ is hyperbolic, since $p=\Phi(D,R)$ with \begin{equation} \label{eq:d6-example} D=\Diag(-3,-2,-1,1,2,3), \ \ R= \begin{bmatrix} 0 & 1 & -1 & 1 & 2 & 1 \\ 1 & 0 & -1 & -2 & 1 & -1 \\ -1 & -1 & 0 & 1 & 2 & 1 \\ 1 & -2 & 1 & 0 & -1 & 1 \\ 2 & 1 & 2 & -1 & 0 & -2 \\ 1 & -1 & 1 & 1 & -2 & 0 \end{bmatrix}. \end{equation} Assuming this pair $(D,R)$ is not known, let us describe the application of our algorithm to recover a determinantal presentation of $p$; one can reproduce the following results by running lines in {\tt showcase.m2}~\cite{Leykin-Plaumann:wwwVCN}. 
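The input data can be sanity-checked before any tracking takes place. The following Python sketch (stdlib only; our own illustrative code, independent of {\tt showcase.m2}) evaluates $\det(tI_6+xD+yR)$ exactly at integer points, in the spirit of the dual-basis evaluations $e_i$ above, and compares the values with $p$.

```python
# the pair (D, R) from (eq:d6-example); D is stored by its diagonal
D = [-3, -2, -1, 1, 2, 3]
R = [[ 0,  1, -1,  1,  2,  1],
     [ 1,  0, -1, -2,  1, -1],
     [-1, -1,  0,  1,  2,  1],
     [ 1, -2,  1,  0, -1,  1],
     [ 2,  1,  2, -1,  0, -2],
     [ 1, -1,  1,  1, -2,  0]]

def p(t, x, y):
    # the sextic from the text
    return (-36*x**6 - 157*x**4*y**2 - 20*x**3*y**3 - 109*x**2*y**4
            + 246*x*y**5 - 92*y**6 - 12*x**3*y**2*t + 90*x**2*y**3*t
            + 10*x*y**4*t + 76*y**5*t + 49*x**4*t**2 + 156*x**2*y**2*t**2
            - 16*x*y**3*t**2 + 132*y**4*t**2 + 12*x*y**2*t**3 - 14*y**3*t**3
            - 14*x**2*t**4 - 27*y**2*t**4 + t**6)

def det(M):
    # exact cofactor expansion along the first row (fine for d = 6)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def phi(t, x, y):
    # the evaluation e(Phi(D, R)) = det(t*I + x*D + y*R) at one point
    return det([[t*(i == j) + x*D[i]*(i == j) + y*R[i][j] for j in range(6)]
                for i in range(6)])

samples = [(1, 0, 0), (2, 1, 0), (1, 1, 1), (3, -2, 1), (-1, 2, -3)]
```

Since the entries are integers, each comparison is exact; no expansion of the $d!$-term determinant is ever formed, which is precisely the point of the dual-basis formulation.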
First, taking an arbitrary pair $(D_q,R_q)$ and tracking the N-path $N_s(q)$ from the strictly hyperbolic polynomial $q = \Phi(D_q,R_q) = N_1(q)$ to the fixed polynomial $p_0 = N_0(q)$ we get \begin{align*} D_{p_0} &= \Diag(.222847, 1.18893, 2.99274, 5.77514, 9.83747, 15.9829)\\ R_{p_0} & = \begin{bmatrix}6& 2.51352& 1.19571& 4.04309& 1.42786& {-1.98597}\\ 2.51352& 6& 3.08656& .468873& 2.38468& 1.05948\\ 1.19571& 3.08656& 6& .785785& 4.66027& 2.29433\\ 4.04309& .468873& .785785& 6& 1.6226& .933245\\ 1.42786& 2.38468& 4.66027& 1.6226& 6& 3.50198\\ {-1.98597}& 1.05948& 2.29433& .933245& 3.50198& 6\\ \end{bmatrix} \end{align*} \noindent Tracking the N-path $N_s(p)$ from $p_0 = N_0(p) = N_0(q)$ to $p = N_1(p)$, we obtain \begin{align*} D' &= \Diag(-3,-2,-1,1,2,3)\\ R' &= \begin{bmatrix}0& .596508& {-1.43241}& 2.00316& 1.10471& {-.725394}\\ .596508& 0& .739773& 1.79407& .0604427& {-1.60948}\\ {-1.43241}& .739773& 0& 1.56816& 1.66137& {-.165953}\\ 2.00316& 1.79407& 1.56816& 0& .839374& 2.00885\\ 1.10471& .0604427& 1.66137& .839374& 0& 1.57679\\ {-.725394}& {-1.60948}& {-.165953}& 2.00885& 1.57679& 0\\ \end{bmatrix} \end{align*} which is an alternative determinantal representation of $p$. While we returned to the same point $p$ in the base of the cover $\Phi$, the route taken has led us to a different sheet than the sheet of the fiber point $(D,R)\in \Phi^{-1}(p)$ in (\ref{eq:d6-example}) used to construct this example. With the default settings of {\sc NAG4M2} the homotopy tracking algorithm takes 28 steps on the first path and 15 steps on the second. We were not able to find a determinantal representation for this example by trying to solve the system $p=\Phi(D,R)$ directly. This is in line with what is reported in~\cite{PSV}: the largest examples that the general solvers could compute with this na\"ive strategy are in degree $d=5$.
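As a check on the printed output, the truncated entries of the recovered pair can be substituted back into the pencil. The following Python sketch (stdlib only; our own code, with a tolerance chosen to match the six printed digits) compares $\det(tI_6+xD'+yR')$ with $p$ at two sample points.

```python
# the recovered pair (D', R'), transcribed from the printed output above
D1 = [-3, -2, -1, 1, 2, 3]
R1 = [[ 0,         0.596508, -1.43241,   2.00316,  1.10471,  -0.725394],
      [ 0.596508,  0,         0.739773,  1.79407,  0.0604427, -1.60948],
      [-1.43241,   0.739773,  0,         1.56816,  1.66137,  -0.165953],
      [ 2.00316,   1.79407,   1.56816,   0,        0.839374,  2.00885],
      [ 1.10471,   0.0604427, 1.66137,   0.839374, 0,         1.57679],
      [-0.725394, -1.60948,  -0.165953,  2.00885,  1.57679,   0]]

def p(t, x, y):
    # the sextic from the text
    return (-36*x**6 - 157*x**4*y**2 - 20*x**3*y**3 - 109*x**2*y**4
            + 246*x*y**5 - 92*y**6 - 12*x**3*y**2*t + 90*x**2*y**3*t
            + 10*x*y**4*t + 76*y**5*t + 49*x**4*t**2 + 156*x**2*y**2*t**2
            - 16*x*y**3*t**2 + 132*y**4*t**2 + 12*x*y**2*t**3 - 14*y**3*t**3
            - 14*x**2*t**4 - 27*y**2*t**4 + t**6)

def det(M):
    # cofactor expansion along the first row (fine for a 6 x 6 matrix)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def phi1(t, x, y):
    return det([[t*(i == j) + x*D1[i]*(i == j) + y*R1[i][j] for j in range(6)]
                for i in range(6)])

# det(R') should reproduce the y^6 coefficient of p, and (1,1,1) a full value
err0 = abs(phi1(0, 0, 1) - p(0, 0, 1))
err1 = abs(phi1(1, 1, 1) - p(1, 1, 1))
```

The residuals stay well below the rounding level of the printed entries, confirming that $(D',R')$ is an (approximate) determinantal representation of $p$ on a different sheet of the cover.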
\begin{Remark}\label{Remark:N-path-comparison} Using the randomized N-path instead of the original N-path allows us to compute determinantal representations for larger examples. The main reason that may explain the observed lower practical complexity is that $\deg_s \randN_s^{L}(p) = d$ while $\deg_s N_s(p) = 2d$, where $d=\deg p$. \end{Remark}
\section{Introduction} In a striking paper, Groenewold \cite{gr} showed that one cannot ``consistently'' quantize all polynomials in the classical positions $q^i$ and momenta $p_i$ on $\Bbb R^{2n}.$ Subsequently Van Hove \cite{vh1,vh2} refined and extended Groenewold's result, in effect showing that there does not exist a quantization functor which is consistent with the Schr\"odinger quantization of $\Bbb R^{2n}$. (For discussions of Groenewold's and Van Hove's work and related results, see \cite{a-m,c,f,go1,g-s,j} and references contained therein.) However, these theorems rely heavily on certain properties of $\Bbb R^{2n}$, and so it is not clear whether they can be generalized. Naturally, one {\em expects} similar ``no\,-go'' theorems to hold in a wide range of situations, but we are not aware of any previous results along these lines. In this paper we prove a Groenewold-Van Hove theorem for the symplectic manifold $S^2$. Our proof is similar to Groenewold's for $\Bbb R^{2n}$, although it differs from his in several important respects and is technically more complicated. On the other hand, as $S^2$ is compact there are no problems with the completeness of the flows generated by the classical observables, and so Van Hove's modification of Groenewold's theorem is unnecessary in this instance. To set the stage, let $(M,\omega)$ be a symplectic manifold. We are interested in quantizing the Poisson algebra $C^{\infty}(M)$ of smooth real-valued functions on $M$, or at least some subalgebra $\cal C$ of it, in the following sense. 
\begin{defn} A {\em quantization} of $\cal C$ is a linear map $\cal Q$ from $\cal C$ to an algebra of self-adjoint operators\footnote{Technical difficulties with unbounded operators will be ignored, as they are not important for what follows.} on a Hilbert space such that \medskip \begin{enumerate}\begin{enumerate} \item[({\em i\/})] $\cal Q\big(\{f,g\}\big) = -\mbox{i}\big[\cal Q(f),\cal Q(g)\big],$ \end{enumerate}\end{enumerate} \noindent where $\{\,,\,\}$ denotes the Poisson bracket and $[\;,\,]$ the commutator. If $\cal C$ contains the constant function 1, then we also demand \medskip \begin{enumerate}\begin{enumerate} \item[({\em ii\/})] $\cal Q(1) = I.$ \end{enumerate}\end{enumerate} \end{defn} As is well known, it is necessary to supplement these conditions for the quantization to be physically meaningful. To this end, one often requires that a certain subalgebra $\cal B$ of observables be represented irreducibly. Exactly which observables should be taken as ``basic'' in this regard depends upon the particular example at hand; one typically uses the components of a momentum map associated to a (transitive) Lie symmetry group. For $\Bbb R^{2n}$ the relevant group is the Heisenberg group \cite{f,g-s} and $\cal B = {\text {span}}\{1,q^i,p_i\,|\, i=1,\ldots,n\}$. In the case of $S^2$ the appropriate group is SU$(2) \times \Bbb R$ whence the basic observables are span$\{1,S_1,S_2,S_3\}$, the $S_i$ being the components of the spin angular momentum. Alternatively, one could require the {\em strong von Neumann rule} \cite{vn} \[\cal Q\big(k(f)\big) = k\big(\cal Q(f)\big)\] \noindent to hold for all polynomials $k$ and all $f \in \cal C$ such that $k(f) \in \cal C$. Usually it is necessary to weaken this condition \cite{f}, insisting only that it hold for a certain subclass $\cal B$ of observables $f$ and certain polynomials $k$. 
We refer to this simply as a ``von Neumann rule.'' In the case of $\Bbb R^{2n}$, the von Neumann rule as applied to the $q^i$ and $p_i$ with $k(x) = x^2$ is actually implied by the irreducibility of the $\cal Q(q^i)$ and $\cal Q(p_i)$ \cite{c}. But the corresponding statement is not quite true for $S^2$, as we will see. We refer the reader to \cite{f} for further discussion of von Neumann\ rules. In our view, imposing an irreducibility condition on the quantization map $\cal Q$ seems more compelling physically and pleasing aesthetically than requiring $\cal Q$ to satisfy a von Neumann\ rule. With this as well as the observations above in mind, we make \begin{defn} An {\em admissible quantization} of the pair $(\cal C,\cal B)$ is a quantization of $\cal C$ which is irreducible on $\cal B$, where $\cal C \subset C^\infty(M)$ is a given Poisson algebra, and $\cal B \subset\cal C$ a given subalgebra. \end{defn} The results of Groenewold\ and Van Hove may then be interpreted as showing that there does not exist an admissible quantization of the pair $\big(C^{\infty}(\Bbb R^{2n}),\,\text{span}\{1,q^i,p_i\,| \linebreak \, i=1,\ldots,n\} \big)$ nor, for that matter, of the subalgebra of all polynomials in the $q^i$ and $p_i$. We will prove here that there likewise does not exist an admissible quantization of the pair $\big(C^{\infty}(S^2),\,\text{span}\{1,S_1,S_2,S_3\}\big)$ nor, for that matter, of the subalgebra of all polynomials in the components $S_i$ of the spin vector. \medskip \section{No\,-Go Theorems} Consider a sphere $S^2$ of radius $s > 0$. We view this sphere as the ``internal'' phase space of a massive particle with spin $s$ and realize it as the subset of $\Bbb R^3$ given by \begin{equation} S_1\,\!^2 + S_2\,\!^2 + S_3\,\!^2 = s^2, \label{cs2} \end{equation} \noindent where $\bold S = (S_1,S_2,S_3)$ is the spin angular momentum. The symplectic form is\footnote{Note that $\omega$ is $1/s$ times the area form $d{\boldsymbol \sigma}$ on $S^2$. 
It is the symplectic form on $S^2$ viewed as a coadjoint orbit of SU(2).} \[\omega = \frac{1}{2s^2}\sum_{i,j,k=1}^3\epsilon_{ijk}S_i\,dS_j \wedge dS_k\] \noindent with corresponding Poisson bracket \begin{equation} \{f,g\} = \sum_{i,j,k=1}^3\epsilon_{ijk}S_i\frac{\partial f}{\partial S_j} \frac{\partial g}{\partial S_k} \label{pb} \end{equation} \noindent for $f,g \in C^{\infty}(S^2).$ We have the relations $\{S_i,S_j\} = \sum_{k=1}^3 \epsilon_{ijk}S_k.$ The group \,SU(2)\, acts transitively on $S^2$ with momentum map $\bold S= (S_1,S_2,S_3)$, i.e., the pair $\big(S^2,{\text {SU(2)}}\big)$ is an ``elementary system'' in the sense of \cite{w}. Thus it is natural to require that quantization provide an irreducible representation of \,SU(2). In terms of observables, quantization should produce a representation which is irreducible when restricted to the subalgebra generated by $\{S_1,S_2,S_3\}$. However, this subalgebra does not include the constants. To remedy this, we consider instead the central extension ${\text {SU(2)}} \times \Bbb R$ of \,SU(2)\, by $\Bbb R$ with momentum map $(1,S_1,S_2,S_3)$, and take the subalgebra generated by these observables to be the basic set $\cal B$ in the sense of the Introduction. Let $\cal P$ denote the Poisson algebra of polynomials in the components $S_1,S_2,S_3$ of the spin vector $\bold S$ modulo the relation \eqref{cs2}. (This means we are restricting polynomials as functions on $\Bbb R^3$ to $S^2$.) We shall refer to an equivalence class $p \in \cal P$ as a ``polynomial'' and take its degree to be the minimum of the degrees of its polynomial representatives. We denote by $\cal P^k$ the subspace of polynomials of degree at most $k$. In particular, $\cal P^1$ is just the Poisson subalgebra generated by $\{1,S_1,S_2,S_3\}$. 
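The bracket \eqref{pb} is easy to experiment with in a computer algebra system. The following sketch (our illustration, in Python with SymPy; the name \texttt{pb} is ours) realizes \eqref{pb} as $\bold S \cdot (\nabla f \times \nabla g)$ and confirms the relations $\{S_i,S_j\} = \sum_k \epsilon_{ijk}S_k$, as well as the fact that $\bold S^2$ brackets to zero with everything, so that restriction to the sphere \eqref{cs2} is consistent with the Poisson structure.

```python
import sympy as sp

S1, S2, S3 = sp.symbols('S1 S2 S3', real=True)

def pb(f, g):
    """Poisson bracket {f,g} = sum_{ijk} eps_{ijk} S_i (df/dS_j)(dg/dS_k),
    i.e. S . (grad f x grad g), for smooth functions of S1, S2, S3."""
    grad = lambda h: sp.Matrix([sp.diff(h, v) for v in (S1, S2, S3)])
    return sp.expand(sp.Matrix([S1, S2, S3]).dot(grad(f).cross(grad(g))))

# the su(2) relations {S_i, S_j} = eps_{ijk} S_k
assert pb(S1, S2) == S3 and pb(S2, S3) == S1 and pb(S3, S1) == S2

# |S|^2 is a Casimir: it brackets to zero with everything, so the
# constraint S1^2 + S2^2 + S3^2 = s^2 is compatible with the bracket
casimir = S1**2 + S2**2 + S3**2
assert all(pb(casimir, v) == 0 for v in (S1, S2, S3, S1*S2*S3))
```

Any other computer algebra system would serve equally well; nothing below depends on this particular implementation.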
When equipped with the $L^2$ inner product given by integration over $S^2$, the vector space $\cal P^k$ becomes a real Hilbert space which admits the orthogonal direct sum decomposition $\cal P^k = \bigoplus_{l=0}^k \cal H_l$, where $\cal H_l$ is the vector space of spherical harmonics of degree $l$ (i.e., the restrictions to $S^2$ of homogeneous harmonic polynomials of degree $l$ on $\Bbb R^3$ \cite{a-b-r}). Note that $\cal H_1$ is the Poisson subalgebra generated by $\{S_1,S_2,S_3\}$. The collection of spherical harmonics\ $\big\{Y_l^m,\;l = 0,1,\ldots,k,\;m = -l,-l+1,\ldots,l\big\}$ forms the standard (complex) orthogonal basis for the complexification $\cal P^k_{\Bbb C}$: \begin{equation*} \int_{S^2}Y_{l_1}^{m_1*}Y_{l_2}^{m_2}\,d\sigma = s^2\delta_{l_1l_2}\delta_{m_1m_2}. \end{equation*} \noindent Thus if $p \in \cal P^k_{\Bbb C}$, we have the harmonic decomposition \begin{equation} p = p_k + p_{k-1} + \cdots + p_0, \label{hd} \end{equation} \noindent where $p_l \in (\cal H_l)_{\Bbb C}$ is given by \begin{equation} p_l = \frac{1}{s^2}\sum_{m=-l}^l\left(\int_{S^2}Y_l^{m*}p\,d\sigma\right)Y_l^m. \label{hdy} \end{equation} It is well known that O(3) acts orthogonally on $\cal P$, and that this action is irreducible on each $\cal H_l$ where it is the standard real (orbital) angular momentum $l$ representation. The corresponding infinitesimal generators on $\cal H_l$ are $L_i = \{S_i,\cdot\}$. If we identify o(3) and $\cal H_1$ as Lie algebras, it follows that the ``adjoint'' action of $\cal H_1$ on $\cal H_l$ given by $S_i \mapsto \{S_i,\cdot\}$ is irreducible as well. \medskip Now suppose $\cal Q$ is a quantization of $\cal P$, so that \begin{equation} \big[\cal Q(S_i),\cal Q(S_j)\big] = \mbox{i}\sum_{k=1}^3\epsilon_{ijk}\cal Q(S_k) \label{com} \end{equation} \noindent and \vspace{-1ex} \begin{equation} \cal Q\big(\bold S^2\big) = s^2I. 
\label{s2} \end{equation} \vspace{1ex} \noindent If in addition $\cal Q$ is admissible on $(\cal P,\,\cal P^1)$, then $\cal P^1$ must be irreducibly represented. Since as a Lie algebra $\cal P^1$ is isomorphic to \,su$(2) \times \Bbb R$, its irreducible representations are all finite-dimensional.\footnote {Every irreducible representation of su$(2) \times \Bbb R$ by (essentially) self-adjoint operators on an invariant dense domain in a Hilbert space can be integrated to a continuous irreducible representation of $\text {SU(2)} \times \Bbb R$ \cite[\S 11.10.7.3]{b-r}. But it is well known that every such representation of this group is finite-dimensional.} These are just the usual (spin angular momentum) representations labeled by $j = 0,\frac{1}{2},1,\ldots$, where\footnote{In what follows we use standard quantum mechanical notation, cf. \cite{m}.} \begin{equation} \sum_{i=1}^3\cal Q(S_i)^2 = j(j+1)I. \label{j2} \end{equation} \noindent The Hilbert space corresponding to the quantum number $j$ has dimension $2j+1$, with the standard orthonormal basis $\big\{|\,j, m\rangle,\: m = -j,-j+1,\ldots,j\big\}$ consisting of eigenvectors of $\cal Q(S_3)$. We regard the representation defined by $j=0$ as trivial, in that it corresponds to quantum spin 0. The following result shows that admissibility implies a weak type of von Neumann\ rule on $\cal P^1$. \begin{prop} If $\cal Q$ is an admissible quantization of $(\cal P,\,\cal P^1)$, then \begin{equation} \cal Q\big(S_i\,\!^2\big) = a \cal Q(S_i)^2 + cI \label{ii} \end{equation} \noindent for $i = 1,2,3$, where $a$ and $c$ are real constants with $a^2 + c^2 \neq 0$. Furthermore, for $i \neq \ell$, \begin{equation} \cal Q(S_iS_\ell) = \frac{a}{2}\big(\cal Q(S_i)\cal Q(S_\ell)+\cal Q(S_\ell)\cal Q(S_i)\big). \label{pr} \end{equation} \label{qvn} \end{prop} \vspace{-2ex} We have placed the proof, which is rather long and technical, in Appendix A so as not to interrupt the exposition. 
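As a concrete anchor for the representation theory just invoked, the spin-$j$ matrices can be generated mechanically. The sketch below (our illustration, in Python with SymPy; \texttt{spin\_matrices} is our name, and $\hbar = 1$ throughout) builds $\cal Q(S_1),\cal Q(S_2),\cal Q(S_3)$ from the standard ladder operators and verifies the commutation relations \eqref{com} and the Casimir identity \eqref{j2} for several values of $j$.

```python
import sympy as sp

def spin_matrices(j):
    """Standard spin-j matrices Q(S1), Q(S2), Q(S3) on the (2j+1)-dimensional
    space with basis |j,m>, ordered m = j, j-1, ..., -j (hbar = 1)."""
    j = sp.Rational(j)
    dim = int(2*j) + 1
    ms = [j - k for k in range(dim)]
    Sz = sp.diag(*ms)
    Sp = sp.zeros(dim, dim)
    for a in range(1, dim):
        m = ms[a]
        # raising operator: S_+ |j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
        Sp[a - 1, a] = sp.sqrt(j*(j + 1) - m*(m + 1))
    Sm = Sp.T                       # lowering operator, the adjoint of S_+
    return (Sp + Sm)/2, (Sp - Sm)/(2*sp.I), Sz

for j in (sp.Rational(1, 2), 1, sp.Rational(3, 2), 2):
    Sx, Sy, Sz = spin_matrices(j)
    dim = Sx.shape[0]
    # [Q(S_i), Q(S_j)] = i eps_{ijk} Q(S_k)
    assert (Sx*Sy - Sy*Sx - sp.I*Sz).expand() == sp.zeros(dim, dim)
    assert (Sy*Sz - Sz*Sy - sp.I*Sx).expand() == sp.zeros(dim, dim)
    assert (Sz*Sx - Sx*Sz - sp.I*Sy).expand() == sp.zeros(dim, dim)
    # sum_i Q(S_i)^2 = j(j+1) I
    assert (Sx**2 + Sy**2 + Sz**2).expand() == j*(j + 1)*sp.eye(dim)
```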
Observe that on summing \eqref{ii} over $i$, we get $s^2=aj(j+1)+3c$ which fixes the constant $c$ in terms of $s,\:j$ and $a.$ {}From these relations the main result now follows. \begin{thm}[No\,-Go Theorem] There does not exist a nontrivial admissible quantization of $(\cal P,\cal P^1)$. \label{ng2} \end{thm} \begin{pf} Suppose there did exist an admissible quantization of $(\cal P,\cal P^1)$; we shall show that for $j>0$ this leads to a contradiction. First observe that we have the classical equality \[s^2S_3 = \{S_1\,\!^2 - S_2\,\!^2,S_1S_2\} - \{S_2S_3,S_3S_1\}.\] \noindent Quantizing this, a calculation using \eqref{ii}, \eqref{pr}, \eqref{com} and \eqref{j2} gives \[s^2\cal Q(S_3) = a^2\bigg (j(j+1) - \frac {3}{4} \bigg ) \cal Q(S_3).\] \noindent Thus either $\cal Q(S_3) = 0$, whence $j=0$, or $j > 0$, in which case \begin{equation} s^2 = a^2\bigg (j(j+1) - \frac {3}{4} \bigg ). \label{qc1} \end{equation} \noindent Observe that when $j = \frac{1}{2}$, this implies that $s = 0$, which is impossible. Henceforth take $j>\frac{1}{2}.$ Next we quantize the relation \[2s^2S_2S_3 = \big\{S_2\,\!^2,\{S_1S_2,S_1S_3\}\big\} - \frac{3}{4}\big\{S_1\,\!^2,\{S_1\,\!^2,S_2S_3\}\big\}.\] \noindent Using \eqref{pr}, \eqref{ii}, \eqref{com} and \eqref{j2}, the l.h.s. becomes \[as^2\big(\cal Q(S_2)\cal Q(S_3)+\cal Q(S_3)\cal Q(S_2)\big) = as^2 \big (2\cal Q(S_2)\cal Q(S_3) - \mbox{i}\cal Q(S_1)\big )\] \noindent while the r.h.s. reduces to \[a^3\bigg (j(j+1) - \frac{9}{4}\bigg) \big(2\cal Q(S_2)\cal Q(S_3) - \mbox{i}\cal Q(S_1)\big ).\] \noindent Since for $j > \frac{1}{2}$ the matrix element \[\big{\langle}\, j, j\,|\,2\cal Q(S_2)\cal Q(S_3) - \mbox{i}\cal Q(S_1)|\,j, j\!-\!1\big{\rangle} = \text {i} \bigg (\frac{1}{2} - j\bigg )\sqrt{2j}\] \noindent is nonzero, it follows that \[as^2 = a^3\bigg (j(j+1) - \frac{9}{4} \bigg ).\] \noindent If $a=0$ \eqref{qc1} yields $s=0$, whereas if $a \neq 0$ this conflicts with \eqref{qc1}. 
Thus we have derived contradictions provided $j > 0$. Since $j=0$ is the trivial representation, the theorem is proven. \end{pf} This contradiction shows that the quantization goes awry on the level of quadratic polynomials. On the other hand, there are many admissible quantizations of the Poisson subalgebra $\cal P^1$ of all polynomials of degree at most one, viz. the irreducible representations $\cal Q$ of \,su$(2) \times \Bbb R$ with $\cal Q(1) = I$. Thus it is of interest to determine the largest subalgebra of $\cal P$ containing $\{1,S_1,S_2,S_3\}$ that can be so quantized. We will now show that this largest subalgebra is just $\cal P^1$ itself. Unfortunately, this is not entirely straightforward, since $\cal P^1$ is not a maximal Poisson subalgebra of $\cal P$; indeed, if $\cal O$ denotes the Poisson subalgebra of odd polynomials (i.e., polynomials all of whose terms are of odd degree), then $\cal P^1$ is contained in $\tilde{\cal O} = \cal O \oplus \Bbb R$. To prove the result, we proceed in two stages. First we show that $\cal P^1$ is maximal in $\tilde {\cal O}$, and then prove a no\,-go theorem for $\tilde {\cal O}$. \begin{prop} $\cal P^1$ is maximal in $\tilde{\cal O}$. \label{max} \end{prop} \begin{pf} Actually, the constants are unimportant, and it will suffice to prove that the Poisson subalgebra $\cal H_1$ generated by $\{S_1,S_2,S_3\}$ is maximal in $\cal O$. Set $\cal O^l = \cal O \cap \cal P^l$, where henceforth $l$ is odd. For $k$ odd it is clear from \eqref{pb} and \eqref{cs2} that $\{\cal H_k,\cal H_l\} \subset \cal O^{k+l-1}$. Let $\cal R$ be the Poisson algebra generated by a single polynomial $r \in \cal O^l$ of degree $l>1$ together with $\cal H_1$. Evidently $\cal R \subset \cal O$; we must show that $\cal O \subset \cal R$. We will accomplish this in a series of lemmas. \begin{lem} If in its harmonic decomposition an element of $\cal R$ has a nonzero component in $\cal H_k$, then $\cal H_k\subset\cal R$. 
\label{inhomo} \end{lem} \begin{pf} Let $\cal R' \subset \cal R$ be the span of all elements of the form \[\big\{h_n,\ldots\big\{h_2,\{h_1,r\}\big\}\ldots\big\}\] \noindent for $h_i \in \cal H_1$ and $n \in \Bbb N.$ Then $\cal R'$ is an o(3)-invariant subspace of $\cal O^l \subset \cal P$. Since the representation of o(3) on $\cal P$ is completely reducible, $\cal R'$ must be the direct sum of certain $\cal H_k$ with $k \leq l.$ Consequently, if when harmonically decomposed an element of $\cal R'$ has a nonzero component in some $\cal H_k$, then $\cal H_k \subset \cal R' \subset \cal R$. \renewcommand{\qedsymbol}{$\bigtriangledown$} \end{pf} Now by assumption $r_l \neq 0$ in $\cal H_l$ and hence $\cal H_l \subset \cal R.$ Then $\{\cal H_l,\cal H_l\} \subset \cal O^{2l-1} \cap \cal R$. We will use this fact to show that $\cal O^{2l-1} \subset \cal R$. The proof devolves upon an explicit computation of the harmonic decomposition of $\{Y_l^m,Y_l^n\}$. \begin{lem} For each $j$ in the range $0 < j \leq 2l$, we have \begin{equation*} \{Y_l^{l-j},Y_l^l\} = \sum_{k=1}^{l}y_{2k-1}(l-j,l)Y_{2k-1}^{2l-j}. \end{equation*} In particular, when $j = 1$ the top coefficients $y_{2l-1}(l-1,l)$ are nonzero. Furthermore, provided $l\geq 5$, $k \geq \frac{l-1}{2}$ and $k > l - \frac{j+1}{2}$, the coefficients $y_{2k-1}(l-j,l)$ are nonzero. \label{main} \end{lem} Since the proof requires an extended calculation, we defer it until Appendix B. \begin{lem} $\cal O^{2l-1} \subset \cal R$. \end{lem} \begin{pf} Decompose $Y_l^m = R_l^m+\text {i}I_l^m$ into real and imaginary parts, with $R_l^m,\;I_l^m\in\cal H_l$. So in $\cal P_{\Bbb C}$ we observe that both $\Re\{Y_l^m,\,Y_l^n\}=\{R_l^m,R_l^n\}- \{I_l^m,I_l^n\}$ and $\Im\{Y_l^m,Y_l^n\}=\{R_l^m,I_l^n\}+\{I_l^m,R_l^n\}$ belong to $\{\cal{H}_l,\,\cal{H}_l\}$. 
Thus if the harmonic decomposition of $\{Y_l^m,Y_l^n\}$ has a nonzero $k^{\text{th}}$ component, then either its real or imaginary part must be nonzero which allows us to conclude that $\{\cal{H}_l,\cal{H}_l\}$ contains an element with nonzero component in $\cal{H}_k$. If $l=3$, then $\cal H_3 \subset \cal R$. Now consider the bracket $\{Y_3^{2},\,Y_3^3\}$. By Lemma \ref{main} with $j=1$ it has a nonzero $5^{\text{th}}$ component, so by the preceding and Lemma \ref{inhomo} it follows that $\cal H_5 \subset \cal R$. Since by definition $\cal H_1\subset\cal R$, we then have $\cal O^5 = \cal H_1 \oplus \cal H_3 \oplus \cal H_5 \subset \cal R.$ If $l\geq 5$, we consider $\{Y_l^{-2},\,Y_l^l\}\in\cal R_{\Bbb C}$. By Lemma \ref{main} with $j=l+2$, the preceding and Lemma \ref{inhomo} we conclude that \[\cal H_{l-2}\oplus\cal H_l\oplus\cdots\oplus\cal H_{2l-1}\subset\cal R.\] \noindent Hence $\cal H_{l-2}\subset\cal R$, so by the same argument applied to $\{Y_{l-2}^{-2},Y_{l-2}^{l-2}\}$ we get that $\cal H_{l-4}\subset\cal R$. Continuing in this way we obtain $\cal H_{l-2n}\subset\cal R$ for all $n$ with $l-2n\geq 3$. In particular, taking $n = \frac{l-3}{2}$, we get $\cal H_3\subset\cal R$. But we have already remarked that $\cal H_1 \subset \cal R,$ so the lemma is proven. \renewcommand{\qedsymbol}{$\bigtriangledown$} \end{pf} Thus $\cal R$ must contain all odd polynomials of degree at most $2l-1$. To obtain higher degree polynomials, we need only bracket $\cal H_{2l-1} \subset \cal R$ with itself and apply the argument above to conclude that $\cal O^{4l-3} \subset \cal R$. Continuing in this manner, we have finally that $\cal O \subset \cal R$, and this proves the proposition. \end{pf} Our strategy in proving the no\,-go theorem for $\tilde {\cal O}$ is the same as for $\cal P$. To begin, we use admissibility to obtain a weak version of a cubic von Neumann\ rule. 
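A remark on verification: while the operator computations in the proofs of Theorems \ref{ng2} and \ref{ng3} were done by machine, the classical Poisson bracket identities from which those proofs start can be re-checked independently with a short script. The following sketch (ours, in Python with SymPy rather than the {\sl Mathematica} package used for the quantum side; the helper names are ours) verifies them modulo the relation \eqref{cs2}, using the fact that a single polynomial is a Gr\"obner basis of the ideal it generates, so division by it decides ideal membership.

```python
import sympy as sp

S1, S2, S3 = sp.symbols('S1 S2 S3', real=True)
s = sp.symbols('s', positive=True)

def pb(f, g):
    """Poisson bracket S . (grad f x grad g) for functions of S1, S2, S3."""
    grad = lambda h: sp.Matrix([sp.diff(h, v) for v in (S1, S2, S3)])
    return sp.expand(sp.Matrix([S1, S2, S3]).dot(grad(f).cross(grad(g))))

rel = S1**2 + S2**2 + S3**2 - s**2   # the defining relation of the sphere

def zero_on_sphere(expr):
    """True iff expr lies in the ideal (rel), i.e. vanishes on the sphere."""
    _, r = sp.div(sp.expand(expr), rel, S1, S2, S3)
    return sp.expand(r) == 0

# quadratic identities quantized in the proof of the first no-go theorem
assert zero_on_sphere(
    pb(S1**2 - S2**2, S1*S2) - pb(S2*S3, S3*S1) - s**2*S3)
assert zero_on_sphere(
    pb(S2**2, pb(S1*S2, S1*S3))
    - sp.Rational(3, 4)*pb(S1**2, pb(S1**2, S2*S3)) - 2*s**2*S2*S3)

# cubic identities quantized in the proof of the second no-go theorem
assert zero_on_sphere(
    4*pb(S1**3, S2*S3**2) - 4*pb(S2**3, S3**2*S1) + pb(S2**2*S1, S2**3)
    - pb(S2*S1**2, S1**3) - 6*pb(S2**3, S1**3)
    - 3*pb(S2*S3**2, S3**2*S1) - 3*s**4*S3)
assert zero_on_sphere(
    pb(S1**3, S2**2*S1) + pb(S2**3, S3**2*S2) + pb(S3**3, S1**2*S3)
    - 6*s**2*S1*S2*S3)
```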
\begin{prop} If $\cal Q$ is an admissible quantization of $(\tilde{\cal O},\cal P^1)$, then \begin{equation} \cal Q\big(S_i\,\!^3\big) = a \cal Q(S_i)^3 + c \cal Q(S_i) \label{vn3} \end{equation} \noindent for $i=1,2,3,$ where $a$ and $c$ are real constants. Furthermore, when $i \neq \ell$, \begin{equation} \cal Q(S_iS_\ell S_i) = a\cal Q(S_i)\cal Q(S_\ell)\cal Q(S_i) + \frac{1}{3}(a+c)\cal Q(S_\ell). \label{iji} \end{equation} \noindent Finally, \begin{equation} \cal Q(S_1S_2S_3) = a\cal Q(S_1)\cal Q(S_2)\cal Q(S_3) + \frac{a}{2 \mathrm{i}}\big (\cal Q(S_1)^2 - \cal Q(S_2)^2 + \cal Q(S_3)^2\big). \label{123} \end{equation} \label{cvn} \end{prop} \vspace{-1.5ex} Again the proof is placed in Appendix A. We derive some consequences of these results. Multiplying \eqref{cs2} through by $S_\ell$ and quantizing gives \[\sum_{i=1}^3 \cal Q(S_iS_\ell S_i) = s^2 \cal Q(S_\ell).\] \noindent Applying \eqref{iji} and \eqref{vn3}, this in turn becomes \begin{equation} a\sum_{i=1}^3 \cal Q(S_i)\cal Q(S_\ell)\cal Q(S_i) = \bigg(s^2 - \frac{2a}{3} - \frac{5c}{3}\bigg) \cal Q(S_\ell). \label{s3s} \end{equation} \noindent We also find, by rearranging the factors in $\sum_{i=1}^3 \cal Q(S_i)\cal Q(S_\ell)\cal Q(S_i)$, that \begin{equation} \sum_{i=1}^3 \cal Q(S_i)\cal Q(S_\ell)\cal Q(S_i) = \big (j(j+1) -1\big )\cal Q(S_\ell). \label{s3j} \end{equation} \noindent A comparison of \eqref{s3s} and \eqref{s3j} yields \begin{equation} s^2 = a\bigg(j(j+1) -\frac{1}{3}\bigg ) + \frac{5c}{3} \label{js} \end{equation} \noindent provided $j>0.$ \begin{thm} There does not exist a nontrivial admissible quantization of $(\tilde{\cal O},\cal P^1)$. \label{ng3} \end{thm} \vspace{-3ex} \begin{pf} Suppose $\cal Q$ were an admissible quantization of $(\tilde{\cal O},\cal P^1)$; we will show that then $j=0.$ Consider the Poisson bracket relation \begin{eqnarray*} 3s^4S_3 &\!\!\! =\!\!\!
& 4\{S_1\,^3,S_2S_3\,^2\} - 4\{S_2\,^3,S_3\,^2S_1\} + \{S_2\,^2S_1,S_2\,^3\} \\ & & \rule{0ex}{3ex} - \,\{S_2S_1\,^2,S_1\,^3\} - 6\{S_2\,^3,S_1\,^3\} -3\{S_2S_3\,^2,S_3\,^2S_1\}. \end{eqnarray*} \noindent Upon quantizing, an enormous calculation using \eqref{vn3}, \eqref{iji}, \eqref{com} and \eqref{s3j} gives\footnote{This calculation was done using the {\sl Mathematica} package {\sl NCAlgebra} \cite{h-m}.} \begin{eqnarray*} 3s^4 \cal Q(S_3) & = & \bigg(3a^2j^4 + 6a^2j^3 + 14acj^2 + 8a^2j^2 \nonumber \\ & & \mbox{} + 14acj + 5a^2j + \frac{29c^2}{3} - 14 \frac{ac}{3} - \frac{7a^2}{3}\bigg)\cal Q(S_3) \nonumber \\ \rule{0ex}{3ex} & & \mbox{} - (10a^2 + 4ac)\cal Q(S_3)^3 \end{eqnarray*} \noindent which in view of \eqref{js} simplifies to \begin{equation} \hspace{-.25in} \bigg [\frac{4}{3}(2a-c)(a+c) - a(7a+4c)j(j+1)\bigg ]\cal Q(S_3) + a(10a+4c)\cal Q(S_3)^3 = 0. \label{cc1} \end{equation} Now suppose $j = \frac{1}{2}$, so that $\cal Q(S_3)^2 = \frac{1}{4}I$. Then \eqref{cc1} implies that $a = -4c$ which, when substituted into \eqref{js} yields $s=0$. Similarly, when $j = 1$, $\cal Q(S_3)^3 = \cal Q(S_3)$. In this case \eqref{cc1} implies that $a = -c$, and again \eqref{js} requires $s=0.$ Thus we have derived contradictions for these two values of $j$. Henceforth take $j>1.$ Next we quantize \[ 6s^2S_1S_2S_3 = \{S_1\,\!^3,S_2\,\!^2S_1\} +\{S_2\,\!^3,S_3\,\!^2S_2\} + \{S_3\,\!^3,S_1\,\!^2S_3\}. \] \noindent Another computer calculation using \eqref{vn3}, \eqref{iji}, \eqref{123}, \eqref{com} and \eqref{s3j} yields \begin{eqnarray*} \lefteqn{6s^2\Big[a\cal Q(S_1)\cal Q(S_2)\cal Q(S_3) + \frac{a}{2\text{i}}\big(\cal Q(S_1)^2 - \cal Q(S_2)^2 + \cal Q(S_3)^2\big)\Big]} \nonumber \\ \rule{0ex}{1ex} & & \hspace{-.18in} \rule{0ex}{4ex} = -3a\text{i}\big(c-2a+aj(j+1)\big)\big[\cal Q(S_1)^2-\cal Q(S_2)^2+\cal Q(S_3)^2 + 2\text{i}\cal Q(S_1)\cal Q(S_2)\cal Q(S_3)\big].
\label{cc2a} \end{eqnarray*} \noindent Since for $j>1$ the matrix element \begin{eqnarray*} \lefteqn{\hspace{-1.5in} \Big {\langle}\,j, j\!-\!2\,\Big|\,\cal Q(S_1)\cal Q(S_2)\cal Q(S_3) + \frac{1}{2\text{i}}\big(\cal Q(S_1)^2 - \cal Q(S_2)^2 + \cal Q(S_3)^2\big)\Big|\,j, j\Big{\rangle}} \\ & & = \rule{0ex}{4ex}\frac{1}{2{\text {i}}}(1-j)\sqrt{j(2j-1)} \nonumber \end{eqnarray*} \noindent is nonzero, we conclude that either $a=0$ or \begin{equation} s^2 = c-2a + aj(j+1). \label{cc2} \end{equation} \noindent If $a=0$, \eqref{cc1} implies that $c=0$, and then \eqref{js} leads to a contradiction, so \eqref{cc2} must hold. Subtracting \eqref{cc2} from \eqref{js} gives $c = -5a/2$; substituting this into \eqref{cc1} produces \[3a^2(j^2+j-3)\cal Q(S_3) = 0.\] \noindent But then $j = (-1 \pm \surd{13})/2$, neither of which is permissible. Thus we have derived contradictions for all $j>0$ and the theorem follows. \end{pf} We remark that Theorem \ref{ng3} is actually sharper than Theorem \ref{ng2}; we have included the latter because it is simpler. As Proposition \ref{max} shows, by augmenting $\cal P^1$ with a single odd polynomial, we generate $\tilde{\cal O}$. Similarly we can generate all of $\cal P$ from $\cal P^1$ and a single polynomial not in $\tilde{\cal O}$, which implies that the only Poisson subalgebras of $\cal P$ strictly containing $\cal P^1$ are $\tilde{\cal O}$ and $\cal P$ itself. This can be proven in the same manner as Proposition \ref{max}, but in the interests of economy, we make do with: \begin{lem} Any Poisson subalgebra of $\cal P$ which strictly contains $\cal P^1$ also contains $\tilde{\cal O}$. \end{lem} \begin{pf} By Proposition \ref{max} it suffices to consider the Poisson algebra $\cal T$ generated by $\cal P^1$ and a polynomial $p$ not in $\tilde{\cal O}$. Then $p$ has a component in some $\cal H_{2k}$ for $k>0$, so by Lemma \ref{inhomo} it follows that $\cal H_{2k}\subset\cal T$. Now consider the bracket $\{Y_{2k}^{2k-1},\, Y_{2k}^{2k}\}$. 
According to Lemma \ref{main}, either its real or imaginary part has a nonzero component in $\cal H_{4k-1}$. Hence $\cal H_{4k-1}\subset\cal T$, and so by Proposition \ref{max} we have $\tilde{\cal O}\subset\cal T$. \end{pf} Now given any Poisson subalgebra of $\cal P$ strictly containing $\cal P^1$ we only have to apply Theorem \ref{ng3} to the subalgebra $\tilde{\cal O}$ inside it to obtain a contradiction, hence: \begin{thm} No nontrivial quantization of $\cal P^1$ can be admissibly extended beyond $\cal P^1$. \label{dogo} \end{thm} This result stands in marked contrast to the analogous one for $\Bbb R^{2n}.$ There one runs into difficulties with {\em cubic} polynomials in $\{1,q^i,p_i\},$ so that $\cal P^2$ (the Poisson algebra of polynomials of degree at most two) is a maximal polynomial subalgebra containing $\cal P^1$ that can be admissibly quantized \cite{g-s}. This dichotomy seems to be connected with the fact that for $\Bbb R^{2n}$, $\cal P^2$ is the Poisson normalizer of $\cal P^1,$ whereas for $S^2$ the normalizer of $\cal P^1$ is itself. On the other hand, it should be noted that there are other maximal polynomial subalgebras of $C^{\infty}(\Bbb R^{2n})$ containing $\cal P^1$ that can be admissibly quantized, for instance the Schr\"odinger subalgebra \[\left\{\sum_{i=1}^nh^i(q^1,\dots,q^n)p_i + k(q^1,\dots,q^n)\right\}\] \noindent where the $h^i$ and $k$ are polynomials. Here one encounters problems when one tries to extend to terms which are quadratic in the momenta. \medskip Finally, a word is in order regarding the case $j = 0$ -- the one instance in which we did not derive a contradiction. It happens that the spin 0 representation of $\cal P^1$ {\em can\/} be extended, in a unique way, to an admissible quantization $\cal Q$ of all of $\cal P$. Indeed, given $p \in \cal P$, let $p_0$ denote the constant term in the harmonic decomposition \eqref{hd} of $p$. 
Then $\cal Q:\cal P \rightarrow \Bbb C$ defined by $\cal Q(p) = p_0$ is, technically, an admissible quantization of $(\cal P,\cal P^1)$. To prove this, it is only necessary to show that $\cal Q$ so defined is a Lie algebra homomorphism, which in this context means $\{p,p'\}_0 = 0$. But from \eqref{hdy} and \eqref{pb}, in vector notation, \[4\pi s^2\{p,p'\}_0 = \int_{S^2}\{p,p'\}\,d\sigma = \int_{S^2}\bold S \cdot (\nabla p \times \nabla p')\,d\sigma = s\int_{S^2} (\nabla p \times \nabla p') \cdot d{\boldsymbol \sigma}, \] \noindent which vanishes by the divergence theorem.\footnote{This actually is a consequence of a general fact about momentum maps on compact symplectic manifolds, cf. \cite[p.~ 187]{g-s}.} To show that this quantization is unique, we need \begin{lem} For $l > 0$, $\cal H_l = \{\cal H_1,\cal H_l\}.$ \label{l1l} \end{lem} \begin{pf} For $l>0$ $\{\cal H_1,\cal H_l\}$ is a nontrivial invariant subspace of $\cal H_l$, and hence by the irreducibility of the $\cal H_1$-action on $\cal H_l$, $\{\cal H_1,\cal H_l\} = \cal H_l.$ \end{pf} Now suppose $\chi$ is an admissible quantization of $(\cal P,\cal P^1)$ with $j=0$ so that by the above, $\chi$ is a linear map $\cal P \rightarrow \Bbb C$ which must annihilate Poisson brackets. But then $\chi(p) = \chi(p_0)$, since by Lemma \ref{l1l} each term $p_l \in \cal H_l$ for $l > 0$ in the harmonic decomposition of $p$ must be a sum of terms of the form $\{h_l,r_l\}$ for some $h_l \in \cal H_1$ and $r_l \in \cal H_l$. It follows that $\chi$ is uniquely determined by its value on the constants, and as $\chi(1) = I = \cal Q(1)$ we must have $\chi = \cal Q.$ Although the corresponding representation of $\cal P^1$ is trivial in that $\cal Q(S_i) = 0$ for all $i$, it is worth emphasizing that $\cal Q$ is {\em not\/} zero on the remainder of $\cal P$. For example, $\cal Q(S_i\,\!^2) = \frac{s^2}{3}I$ for all $i$, consistent with Proposition \ref{qvn} and \eqref{s2}. 
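Indeed, by \eqref{hdy} with $l = 0$ the constant term $p_0$ is just the mean value $\frac{1}{4\pi s^2}\int_{S^2} p\,d\sigma$, so the computation of $\cal Q(S_i\,\!^2)$ reduces to an elementary integral. A quick symbolic check in spherical coordinates (our sketch, in Python with SymPy; the name \texttt{average} is ours):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
th, ph = sp.symbols('theta phi', real=True)

# spherical parametrization of the sphere |S| = s
S1 = s*sp.sin(th)*sp.cos(ph)
S2 = s*sp.sin(th)*sp.sin(ph)
S3 = s*sp.cos(th)

def average(p):
    """Mean value of p over the sphere of radius s: the constant term p_0
    of the harmonic decomposition, (1/(4 pi s^2)) * integral of p dsigma."""
    integral = sp.integrate(p*s**2*sp.sin(th), (th, 0, sp.pi), (ph, 0, 2*sp.pi))
    return sp.simplify(integral/(4*sp.pi*s**2))

# Q(S_i^2) = s^2/3 for every i, consistent with Q(S^2) = s^2 I ...
assert all(sp.simplify(average(Si**2) - s**2/3) == 0 for Si in (S1, S2, S3))
# ... while the basic observables themselves go to 0 in the j = 0 quantization
assert all(average(Si) == 0 for Si in (S1, S2, S3))
```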
The existence of this trivial yet ``not completely trivial'' representation of $\cal P$ -- for any nonzero value of the classical spin $s$ -- may be related to a well-known ``anomaly'' in the geometric quantization of spin, cf. \cite[\S7]{t} and \cite[\S11.2]{s}. \medskip\ms\medskip \section{Discussion} Theorems \ref{ng2} and \ref{dogo} could have been ``predicted'' on the basis of geometric quantization theory. Here one knows that one can quantize those classical observables $f \in C^{\infty}(M)$ whose flows preserve a given polarization $P$. In our case we take $P$ to be the antiholomorphic polarization on $S^2$ (thought of as $\Bbb CP^1$); then $\cal P^1$ is exactly the set of polarization-preserving observables. However, in geometric quantization theory one does not expect to be able to consistently quantize observables outside this class \cite{w}. Further corroboration for our results is provided by Rieffel \cite{r}, who showed that there are no strict SU(2)-invariant deformation quantizations of $C^{\infty}(S^2)$. In fact, it seems that only the polynomial algebra $\cal P^1 \subset C^{\infty}(S^2)$ can be rigorously deformation quantized in an SU(2)-invariant way \cite{k}. There are several points we would like to make concerning the von Neumann\ rules for $S^2$, especially in comparison with those for $\Bbb R^{2n}$. Our no\,-go theorems may be interpreted as stating that the ``Poisson bracket $\rightarrow$ commutator'' rule is {\em totally} incompatible with even the relatively weak von Neumann\ rules given in Propositions \ref{qvn} and \ref{cvn}. On $\Bbb R^{2n}$, on the other hand, the von Neumann\ rule $k(x) = x^n$ with $n=2$ does hold for $x\in\cal P^1 \subset \cal P^2$ in the metaplectic representation \cite{c,g-s}, and with any $n \geq 0$ for $x=q^i$ in the Schr\"odinger representation. 
Moreover it is curious that, according to Propositions \ref{qvn} and \ref{cvn}, requiring that the $S_i$ be irreducibly represented does not yield strict von Neumann\ rules as happens for $\{q^i,p_i\}$ on $\Bbb R^{2n},$ cf. \cite{gr,c}.\footnote{In the case of $\Bbb R^{2n}$, one usually employs the Stone-von Neumann\ theorem to show that irreducibility implies the von Neumann\ rules, but in fact one can prove this without recourse to the Stone-von Neumann\ theorem.} It is not clear why $S^2$ and $\Bbb R^{2n}$ behave differently in these regards. We remark that it is substantially easier to prove the no\,-go theorems \ref{ng2} and \ref{ng3} if one assumed from the start that the strict von Neumann\ rules $\cal Q(S_i\,\!^2) = \cal Q(S_i)^2$ and $\cal Q(S_i\,\!^3) = \cal Q(S_i)^3$ hold. We can gain some insight into this as follows. Suppose the strong von Neumann\ rule applied to one of the $S_i$, say $S_3$, so that $\cal Q(S_3\,\!^n) = \cal Q(S_3)^n$ for all positive integers $n$. Suppose furthermore that $\cal Q$ is injective when restricted to the polynomial algebra $\Bbb R[S_3]$ generated by $S_3$. Provided it is appropriately continuous, $\cal Q$ will then extend to an isomorphism from the real $C^*$--algebra $C^*(S_3)$ (consisting of the closure of $\Bbb R[S_3]$ in the supremum norm on $S^2$ with pointwise operations) to the real $C^*$--algebra generated by $\cal Q(S_3)$. But this implies that the classical spectrum of $S_3$ (i.e., the set of all values it takes) is the same as the operator spectrum of $\cal Q(S_3)$. Since the classical spectrum of $S_3$ is $[-s,s]$ whereas the quantum spectrum is discrete, it is clear -- in retrospect -- why no (strong) von Neumann\ rule can apply to $S_3$. Since $S^2$ is in a sense the opposite extreme from $\Bbb R^{2n}$ insofar as symplectic manifolds go, our result lends support to the contention that no\,-go theorems should hold in some generality. 
Nonetheless, these two examples are special in that they are symplectic homogeneous spaces \big($\Bbb R^{2n}$ for the translation group $\Bbb R^{2n}$, and $S^2$ for $\mbox{SU}(2)$\big). Thus in both cases we are quantizing finite-dimensional Lie algebras \big(the Heisenberg algebra and $\mbox{su}(2) \times \Bbb R$, respectively, which are certain central extensions by $\Bbb R$ of $\Bbb R^{2n}$ and su(2)\big). Will a similar analysis work for other symplectic homogeneous spaces, e.g., $\Bbb CP^n$ with group $\mbox{SU}(n+1)$? How does one proceed in the case of symplectic manifolds which do not have such a high degree of symmetry? What set of observables will play the role of the distinguished subalgebras generated by $\{1,q^i,p_i\}$ and $\{1,S_1,S_2,S_3\}$ (which are the components of the momentum mappings for the Hamiltonian actions of the Heisenberg group and $\mbox{SU}(2) \times \Bbb R$, resp.)? For a cotangent bundle $T^*Q$, the obvious counterpart would be the infinite-dimensional abgebra of linear momentum observables $P_X + f,$ where $P_X$ is the momentum in the direction of the vector field $X$ on $Q$ and $f$ is a function on $Q$. (These are the components of the momentum mapping for the transitive action of $\text{Diff}(Q) \ltimes C^{\infty}(Q)$ on $T^*Q$.) In this regard, it is known that geometric quantization (formally) obeys the von Neumann rules $\cal Q\big(P_X\,^2\big) = \cal Q(P_X)^2$ and $\cal Q(f^n) = \cal Q(f)^n$ \cite{go2}. We hope to explore some of these issues in future papers. \medskip\ms \section*{Acknowledgments} One of us (M.J.G.) would like to thank L. Bos, G. Emch, G. Goldin, M. Karasev and J. Tolar for enlightening conversations, and the University of New South Wales for support while this research was underway. He would also like to express his appreciation to both the organizers of the XIII Workshop on Geometric Methods in Physics and the University of Warsaw, who provided a lively and congenial atmosphere for working on this problem. 
This research was supported in part by NSF grant DMS-9222241. Another one of us (H.G.) would like to acknowledge the support of an ARC--grant. \medskip
\section{Introduction} Let $\mathbb{k}$ be an arbitrary field, ${\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$ be a multiprojective space over $\mathbb{k}$, and $X \subseteq {\normalfont\mathbb{P}}$ be a closed subscheme of ${\normalfont\mathbb{P}}$. The \emph{multidegrees} of $X$ are fundamental invariants that describe algebraic and geometric properties of $X$. For each ${\normalfont\mathbf{n}}=(n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ with $n_1+\cdots+n_p=\dim(X)$ one can define the \textit{multidegree of $X$ of type ${\normalfont\mathbf{n}}$ with respect to ${\normalfont\mathbb{P}}$}, denoted by $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)$, in different ways (see \autoref{def_multdeg}, \autoref{rem_chow_ring} and \autoref{rem_Hilb_series}). In classical geometrical terms, when $\mathbb{k}$ is algebraically closed, $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)$ equals the number of points (counting multiplicity) in the intersection of $X$ with the product $L_1 \times_\mathbb{k} \cdots \times_\mathbb{k} L_p \subset {\normalfont\mathbb{P}}$, where $L_i \subset {\normalfont\mathbb{P}}_\mathbb{k}^{m_i}$ is a general linear subspace of dimension $m_i-n_i$ for each $1 \le i \le p$. \medskip The study of multidegrees goes back to pioneering work by van~der~Waerden \cite{VAN_DER_WAERDEN}. From a more algebraic point of view, multidegrees receive the name of \emph{mixed multiplicities} (see \autoref{def_multdeg}). More recent papers where the notion of multidegree (or mixed multiplicity) is studied are, e.g., \cite{Bhattacharya,VERMA_BIGRAD,HERMANN_MULTIGRAD,TRUNG_POSITIVE,MIXED_MULT, EXPONENTIAL_VARIETIES,KNUTSON_MILLER, michalek2020maximum, CONCA_DENEGRI_GORLA}. \medskip The main goal of this paper is to answer the following fundamental question considered by Trung \cite{TRUNG_POSITIVE} and by Huh \cite{Huh12} in the case $p=2$. 
\begin{itemize} \item \emph{For ${\normalfont\mathbf{n}} \in \normalfont\mathbb{N}^p$ with $n_1+\cdots+n_p=\dim(X)$, when do we have that $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X) >0$?} \end{itemize} Our main result says that the positivity of $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)$ is determined by the dimensions of the images of the natural projections from ${\normalfont\mathbb{P}}$ restricted to the irreducible components of $X$. First, we set a basic notation: for each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq \{1,\ldots,p\}$, let $\Pi_{\mathfrak{J}}$ be the natural projection $$ \Pi_{\mathfrak{J}}: {\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p} \;\rightarrow\; {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_1}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}}. $$ The following is the main theorem of this article. Here, we give necessary and sufficient conditions for the positivity of multidegrees. \begin{headthm}[\autoref{thm_main_irreducible}, \autoref{thm_main}] \label{thmA} Let $\mathbb{k}$ be an arbitrary field, ${\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$ be a multiprojective space over $\mathbb{k}$, and $X \subseteq {\normalfont\mathbb{P}}$ be a closed subscheme of ${\normalfont\mathbb{P}}$. Let ${\normalfont\mathbf{n}}=(n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ be such that $n_1+\cdots+n_p=\dim(X)$. Then, $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X) > 0$ if and only if there is an irreducible component $Y \subseteq X$ of $X$ that satisfies the following two conditions: \begin{enumerate}[\rm (a)] \item $\dim(Y) = \dim(X)$. 
\item For each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq \{1,\ldots,p\}$ the inequality $$ n_{j_1} + \cdots + n_{j_k} \le \dim\big(\Pi_{\mathfrak{J}}(Y)\big) $$ holds. \end{enumerate} \end{headthm} When $\mathbb{k}$ is the field of complex numbers, \autoref{thmA} is essentially covered by the geometric results in \cite[Theorems 2.14, 2.19]{KAVEH_KHOVANSKII};\footnote{In \autoref{rem_Kaveh_Kho} we briefly discuss how (over the complex numbers) \autoref{thmA} can be obtained by using the results in \cite[\S 2.2]{KAVEH_KHOVANSKII}.} however, their methods do not extend to arbitrary fields. Here we follow an algebraic approach that allows us to prove the result for all fields, and hence a general version for algebras over Artinian local rings (see \autoref{corB}). The main idea in the proof of \autoref{thmA} is the study of the dimensions of the images of the natural projections after cutting by a general hyperplane (see \autoref{thm_projections}). \medskip We note that if $p=2$ and $X$ is arithmetically Cohen-Macaulay, the conclusion of \autoref{thmA} in the irreducible case also holds for $X$ (see \cite[Corollary 2.8]{TRUNG_POSITIVE}). In \autoref{ex_cm} we show that this is not necessarily true for $p>2$. \medskip If $X$ is irreducible, then the function $r:2^{\{1,\ldots,p\}}\rightarrow \normalfont\mathbb{Z}$ defined by $r({\mathfrak{J}}):=\dim\big(\Pi_{\mathfrak{J}}(X)\big)$ is a submodular function, i.e., $r({\mathfrak{J}}_1\cap {\mathfrak{J}}_2)+r({\mathfrak{J}}_1\cup {\mathfrak{J}}_2)\leq r({\mathfrak{J}}_1)+r({\mathfrak{J}}_2)$ for any two subsets ${\mathfrak{J}}_1,{\mathfrak{J}}_2\subseteq \{1,\ldots,p\}$, as proved in \autoref{prop_poly} (see also \autoref{def_algebraic_polymatroid}). 
By the Submodular Theorem (see, e.g., \cite[Theorem 3.11]{NESTED} or \cite[Appendix B]{YU}) and the inequalities of \autoref{thmA}, the points ${\normalfont\mathbf{n}}\in \normalfont\mathbb{N}^p$ for which $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X) > 0$ are the lattice points of a {\it generalized permutohedron}. Defined by A.~Postnikov in \cite{POSTNIKOV}, generalized permutohedra are polytopes obtained by deforming usual permutohedra. In recent years this family of polytopes has been studied in relation to other fields such as probability, combinatorics, and representation theory (see \cite{YU,YONG,FACES}). \medskip In a more algebraic flavor, we state the translation of \autoref{thmA} to the mixed multiplicities of a standard multigraded algebra over an Artinian local ring (see \autoref{def_mixed_mult}). \begin{headthm}[\autoref{cor_positive_mixed_mult}] \label{corB} Let $A$ be an Artinian local ring and $R$ be a finitely generated standard $\normalfont\mathbb{N}^p$-graded $A$-algebra. For each $1 \le j \le p$, let ${\normalfont\mathfrak{m}}_j \subset R$ be the ideal generated by the elements of degree ${\normalfont\mathbf{e}}_j$, where ${\normalfont\mathbf{e}}_j \in \normalfont\mathbb{N}^p$ denotes the $j$-th elementary vector. Let ${\normalfont\mathfrak{N}} = {\normalfont\mathfrak{m}}_1 \cap \cdots \cap {\normalfont\mathfrak{m}}_p \subset R$. Let ${\normalfont\mathbf{n}}=(n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ be such that $n_1+\cdots+n_p=\dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right)-p$. Then, $e({\normalfont\mathbf{n}};R) > 0$ if and only if there is a minimal prime ideal ${\mathfrak{P}} \in {\normalfont\text{Min}}\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)$ of $\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)$ that satisfies the following two conditions: \begin{enumerate}[\rm (a)] \item $\dim\left(R/{\mathfrak{P}}\right) = \dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right)$. 
\item For each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq \{1,\ldots,p\}$ the inequality $$ n_{j_1} + \cdots + n_{j_k} \le \dim\left(\frac{R}{{\mathfrak{P}}+\sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_j}\right) - k $$ holds. \end{enumerate} \end{headthm} For a given finite set of ideals in a Noetherian local ring, one of which is zero-dimensional, we can define their mixed multiplicities by considering a certain associated standard multigraded algebra (see \cite{TRUNG_VERMA_MIXED_VOL} for more information). These multiplicities have a long history of interconnecting problems from commutative algebra, algebraic geometry, and combinatorics, with applications to the topics of Milnor numbers, mixed volumes, and integral dependence (see, e.g., \cite{Huh12,huneke2006integral,TRUNG_VERMA_MIXED_VOL,teissier1973cycles}). As a direct consequence of \autoref{corB} we are able to give a characterization for the positivity of mixed multiplicities of ideals (see \autoref{cor_mixed_mult_ideals}). In another related result, we focus on homogeneous ideals generated in one degree; this case is of particular importance due to its relation with rational maps between projective varieties. In this setting, we provide more explicit conditions for positivity in terms of the analytic spread of products of these ideals (see \autoref{thm_equigen_ideals}). \medskip Going back to the setting of \autoref{thmA}, we switch our attention to the following discrete set $$ \normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X) \,=\, \big\{{\normalfont\mathbf{n}}\in \normalfont\mathbb{N}^p \;\mid\; \deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)>0\big\}, $$ which we call the {\it support of $X$ with respect to ${\normalfont\mathbb{P}}$}. When $X$ is irreducible, we show that $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ is a {\it (discrete) polymatroid} (see \autoref{sub_Poly}, \autoref{prop_poly}). 
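Once the dimensions of the projections of a suitable component are known, the inequalities of \autoref{thmA} reduce the computation of $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ to a finite check. The following Python sketch illustrates this (the function name \texttt{msupp} and the dictionary encoding of the projection dimensions are ours, purely for illustration); the rank data used below are those of the illustrative example at the end of this Introduction, with factor indices shifted to start at $0$.

```python
from itertools import combinations, product

def msupp(p, dim_X, dim_proj):
    """Enumerate all n in N^p with |n| = dim_X that satisfy the
    inequalities of Theorem A, i.e. sum_{j in J} n_j <= dim(Pi_J(Y))
    for every nonempty J, where dim_proj[J] stores that dimension."""
    points = []
    for n in product(range(dim_X + 1), repeat=p):
        if sum(n) != dim_X:
            continue
        if all(sum(n[j] for j in J) <= dim_proj[J]
               for k in range(1, p + 1)
               for J in combinations(range(p), k)):
            points.append(n)
    return points

# projection dimensions from the illustrative example at the end
# of this Introduction (indices shifted to start at 0)
dims = {(0,): 1, (1,): 2, (2,): 3,
        (0, 1): 2, (0, 2): 3, (1, 2): 3,
        (0, 1, 2): 3}
print(msupp(3, 3, dims))
# -> [(0, 0, 3), (0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 1, 1)]
```

The output agrees with the support $\{(0,0,3),(0,1,2),(0,2,1),(1,0,2),(1,1,1)\}$ computed in that example.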
The latter result was included in an earlier version of this paper when $\mathbb{k}$ is algebraically closed, and an alternative proof is given by Br{\"a}nd{\'e}n and Huh in \cite[Corollary 4.7]{LORENTZ} using the theory of Lorentzian polynomials. An advantage of our approach is that we can describe the corresponding rank submodular functions of the polymatroids, a fact that we exploit in the applications of \autoref{sec_comb}. Additionally, our results are valid when $X$ is just irreducible and not necessarily geometrically irreducible over $\mathbb{k}$ (i.e., we do not need to assume that $X \times_\mathbb{k} \overline{\mathbb{k}}$ is irreducible for an algebraic closure $\overline{\mathbb{k}}$ of $\mathbb{k}$); it should be noted that this generality is not covered by the statements in \cite{LORENTZ} and \cite{KAVEH_KHOVANSKII}. \medskip Discrete polymatroids \cite{HERZOG_HIBI_MONOMIALS} have also been studied under the name of M-convex sets \cite{MUROTA}. Polymatroids can also be described as the integer points in a generalized permutohedron \cite{POSTNIKOV}, so they are closely related to submodular functions, which are well studied in optimization; see \cite{LOVASZ} and \cite[Part IV]{SCHRIJVER} for comprehensive surveys on submodular functions, their applications, and their history. There are two distinguishable types of polymatroids, linear and algebraic polymatroids, whose main properties are inherited from their representations in terms of other algebraic structures. \autoref{thmA} allows us to define another type of polymatroid, which we call \emph{Chow polymatroids}, and which interestingly lies in between the other two. In the following theorem we summarize our main results in this direction. 
\begin{headthm}[\autoref{thm_classification}]\label{thmC} Over an arbitrary field $\mathbb{k}$, we have the following inclusions of families of polymatroids $$ \Big(\texttt{Linear polymatroids}\Big) \;\subseteq\; \Big(\texttt{Chow polymatroids}\Big) \;\subseteq\; \Big(\texttt{Algebraic polymatroids}\Big). $$ Moreover, when $\mathbb{k}$ is a field of characteristic zero, the three families coincide. \end{headthm} If $\mathbb{k}$ has positive characteristic, then these types of polymatroids do not agree. In fact, there exist examples of polymatroids which are algebraic over any field of positive characteristic but never linear (see \autoref{rem_Alg_not_line}). \smallskip \autoref{thmA} can be applied to particular examples of varieties coming from combinatorial algebraic geometry. In \autoref{subsect_Schubert} we do so for matrix Schubert varieties; in this case the multidegrees are the coefficients of Schubert polynomials, and thus our results allow us to give an alternative proof of a recent conjecture regarding the support of these polynomials (see \autoref{thm_Schubert}). In \autoref{sec_flag} and \autoref{sec_M0} we study certain embeddings of flag varieties and of the moduli space $\overline{M}_{0,p+3}$, respectively (see \autoref{prop_sottile} and \autoref{prop_parking}). In \autoref{sec_mixed} we recover a well-known characterization for the positivity of mixed volumes of convex bodies (see \autoref{thm_mixed}). \smallskip We now outline the contents of the article. In \autoref{section_notations} we set up the notation used throughout the document. We also include key preliminary definitions and results, paying special attention to the connection between mixed multiplicities of standard multigraded algebras and multidegrees of their corresponding schemes. \autoref{sec_main} is devoted to the proof of \autoref{thmA} and \autoref{corB}. Our results for mixed multiplicities of ideals are included in \autoref{sec_mix_ideals}. 
In \autoref{sec_polym} we relate our results to the theory of polymatroids. In particular, we show the proof of \autoref{thmC}. We finish the paper with \autoref{sec_comb} where the applications to combinatorial algebraic geometry are presented. \smallskip We conclude the Introduction with an illustrative example. The following example is constructed following the same ideas as in \autoref{prop_linear}. \begin{example} Consider the polynomial ring $S=\mathbb{k}[v_1,v_2,v_3][w_1,w_2,w_3]$ with the $\normalfont\mathbb{N}^3$-grading $\deg(v_{i})=(0,0,0)$, $\deg(w_{i})={\normalfont\mathbf{e}}_i$ for $1 \le i \le 3$. Let $T$ be the $\normalfont\mathbb{N}^3$-graded polynomial ring $ T = \mathbb{k}\left[x_0,\ldots,x_3\right]\left[y_0,\ldots,y_3\right]\left[z_0,\ldots,z_3\right] $ where $\deg(x_i)={\normalfont\mathbf{e}}_1$, $\deg(y_i)={\normalfont\mathbf{e}}_2$ and $\deg(z_i)={\normalfont\mathbf{e}}_3$. Consider the $\normalfont\mathbb{N}^3$-graded $\mathbb{k}$-algebra homomorphism $$ \varphi : T \rightarrow S, \qquad \begin{array}{llll} x_0 \mapsto w_1, & x_1 \mapsto v_1w_1, & x_2 \mapsto v_1w_1, & x_3 \mapsto v_1w_1, \\ y_0 \mapsto w_2, & y_1 \mapsto v_1w_2, & y_2 \mapsto v_2w_2, & y_3 \mapsto (v_1+v_2)w_2, \\ z_0 \mapsto w_3, & z_1 \mapsto v_1w_3, & z_2 \mapsto v_2w_3, & z_3 \mapsto v_3w_3. \end{array} $$ Note that ${\mathfrak{P}} = \normalfont\text{Ker}(\varphi) \subset T$ is an $\normalfont\mathbb{N}^3$-graded prime ideal. Let $Y \subset {\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^3 \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^3 \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^3$ be the closed subscheme corresponding to ${\mathfrak{P}}$. 
In this case, one can easily compute the dimension of the projections $\Pi_{{\mathfrak{J}}}(Y)$ for each ${\mathfrak{J}} \subseteq \{1,2,3\}$, and so \autoref{thmA} implies that $\normalfont\text{MSupp}_{{\normalfont\mathbb{P}}}(Y)$ is given by all ${\normalfont\mathbf{n}}=(n_1,\ldots,n_3) \in \normalfont\mathbb{N}^3$ satisfying the following conditions: \begin{align*} &n_1+n_2+n_3 = 3=\dim(Y),\\ &n_1+n_2 \leq 2=\dim\left(\Pi_{\{1,2\}}(Y)\right), \;\; n_1+n_3 \leq 3=\dim\left(\Pi_{\{1,3\}}(Y)\right), \;\; n_2+n_3 \leq 3=\dim\left(\Pi_{\{2,3\}}(Y)\right), \\ &n_1 \leq 1=\dim\left(\Pi_{\{1\}}(Y)\right), \;\; n_2 \leq 2=\dim\left(\Pi_{\{2\}}(Y)\right), \;\; n_3 \leq 3=\dim\left(\Pi_{\{3\}}(Y)\right). \end{align*} Hence $\normalfont\text{MSupp}_{{\normalfont\mathbb{P}}}(Y)=\{(0,0,3),(0,1,2),(0,2,1),(1,0,2),(1,1,1)\}\subset\normalfont\mathbb{N}^3$. This set can also be represented graphically as follows: \newcommand*\rows{3} \begin{center} \begin{tikzpicture} \draw [red!10, fill=red!10] ($(0,0)$) -- ($(2,0)$) -- ($(3/2, {1*sqrt(3)/2})$)--($(1/2, {1*sqrt(3)/2})$)-- cycle; \foreach \row in {0, 1, ...,\rows} { \draw ($\row*(0.5, {0.5*sqrt(3)})$) -- ($(\rows,0)+\row*(-0.5, {0.5*sqrt(3)})$); \draw ($\row*(1, 0)$) -- ($(\rows/2,{\rows/2*sqrt(3)})+\row*(0.5,{-0.5*sqrt(3)})$); \draw ($\row*(1, 0)$) -- ($(0,0)+\row*(0.5,{0.5*sqrt(3)})$); } \node [below left] at (0,0) {$(0,0,3)$}; \node [below right] at (3,0) {$(0,3,0)$}; \node [above] at ($(3/2, {3*sqrt(3)/2})$) {$(3,0,0)$}; \draw [red,fill] (0,0) circle [radius = 0.08]; \draw [red,fill] (1,0) circle [radius = 0.08]; \draw [red,fill] (2,0) circle [radius = 0.08]; \draw [red,fill] ($(1/2, {1*sqrt(3)/2})$) circle [radius = 0.08]; \draw [red,fill] ($(3/2, {1*sqrt(3)/2})$) circle [radius = 0.08]; \end{tikzpicture} \label{fig:example} \end{center} Additionally, by using {\tt Macaulay2} \cite{MACAULAY2} we can compute that its \emph{multidegree polynomial} (see \autoref{defMsupp}) is equal to: \[ 
\deg_{{\normalfont\mathbb{P}}}(Y;t_1,t_2,t_3)=\,{t}_1^{3}{t}_2^{3}+\,{t}_1^{3}{t}_2^{2}{t}_3+\,{t}_1^{3}{t}_2{t}_3^{2}+\,{t}_1^{2}{t}_2^{3}{t}_3+\,{t}_1^{2}{t}_2^{2}{t}_3^{2}. \] We note that here we are following the convention that $\normalfont\text{MSupp}_{{\normalfont\mathbb{P}}}(Y)$ is given by the complementary degrees of the polynomial $\deg_{{\normalfont\mathbb{P}}}(Y;t_1,t_2,t_3)$; for instance, the term ${t}_1^{3}{t}_2^{3}$ corresponds to the point $(3,3,3)-(3,3,0)=(0,0,3) \in \normalfont\text{MSupp}_{{\normalfont\mathbb{P}}}(Y)$. \end{example} \section{Notation and Preliminaries} \label{section_notations} In this section, we set up the notation that is used throughout the paper. We also present some preliminary results needed in the proofs of our main theorems. Let $p \ge 1$ be a positive integer. If ${\normalfont\mathbf{n}} = (n_1,\ldots,n_p),{\normalfont\mathbf{m}} = (m_1,\ldots,m_p) \in \normalfont\mathbb{Z}^p$ are two multi-indexes, we write ${\normalfont\mathbf{n}} \ge {\normalfont\mathbf{m}}$ whenever $n_i \ge m_i$ for all $1 \le i \le p$, and ${\normalfont\mathbf{n}} > {\normalfont\mathbf{m}}$ whenever $n_j > m_j$ for all $1 \le j \le p$. For each $1 \le i \le p$, let ${\normalfont\mathbf{e}}_i \in \normalfont\mathbb{N}^p$ be the $i$-th elementary vector ${\normalfont\mathbf{e}}_i=\left(0,\ldots,1,\ldots,0\right)$. Let $\mathbf{0} \in \normalfont\mathbb{N}^p$ and $\mathbf{1} \in \normalfont\mathbb{N}^p$ be the vectors $\mathbf{0}=(0,\ldots,0)$ and $\mathbf{1}=(1,\ldots,1)$ of $p$ copies of $0$ and $1$, respectively. For any ${\normalfont\mathbf{n}} = (n_1,\ldots,n_p) \in \normalfont\mathbb{Z}^p$, we define its weight as $\lvert {\normalfont\mathbf{n}} \rvert:= n_1+\cdots+n_p$. Let $[p]$ denote the set $[p]:=\{1,\ldots,p\}$. For clarity of exposition we first introduce the main concepts in the theory of multidegrees over an arbitrary field. 
Later, we also work over Artinian local rings; we highlight important details in this more general setting in \autoref{sub_Art}. \subsection{The case over a field} We begin by introducing a general setup for \autoref{thmA} and its preparatory results. \begin{setup} \label{setup_initial} Let $\mathbb{k}$ be an arbitrary field. Let $R$ be a finitely generated standard $\normalfont\mathbb{N}^p$-graded algebra over $\mathbb{k}$, that is, $\left[R\right]_{\mathbf{0}}=\mathbb{k}$ and $R$ is finitely generated over $\mathbb{k}$ by elements of degree ${\normalfont\mathbf{e}}_i$ with $1 \le i \le p$. For each subset $\mathfrak{J} = \{j_1,\ldots,j_k\} \subseteq [p] = \{1, \ldots, p\}$ denote by $R_{({\mathfrak{J}})}$ the standard $\normalfont\mathbb{N}^k$-graded $\mathbb{k}$-algebra given by $$ R_{({\mathfrak{J}})} := \bigoplus_{\substack{i_1\ge 0,\ldots, i_p\ge 0\\ i_{j} = 0 \text{ if } j \not\in {\mathfrak{J}}}} {\left[R\right]}_{(i_1,\ldots,i_p)}; $$ for instance, for each $1 \le j \le p$, $R_{({j})}$ denotes the standard $\normalfont\mathbb{N}$-graded $\mathbb{k}$-algebra $ R_{(j)} := \bigoplus_{k \ge 0} {\left[R\right]}_{k \cdot {\normalfont\mathbf{e}}_j}. $ For each $1 \le j \le p$, let ${\normalfont\mathfrak{m}}_j \subset R$ be the ideal ${\normalfont\mathfrak{m}}_j := \left([R]_{{\normalfont\mathbf{e}}_j }\right)$. Let ${\normalfont\mathfrak{N}} \subset R$ be the multigraded irrelevant ideal ${\normalfont\mathfrak{N}} := {\normalfont\mathfrak{m}}_1 \cap \cdots \cap {\normalfont\mathfrak{m}}_p$. For each ${\mathfrak{J}} \subseteq [p]$, let ${\normalfont\mathfrak{N}}_{\mathfrak{J}} \subset R_{({\mathfrak{J}})}$ be the corresponding multigraded irrelevant ideal ${\normalfont\mathfrak{N}}_{\mathfrak{J}} := \left(\bigcap_{j \in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j\right) \cap R_{({\mathfrak{J}})}$. 
Let $X$ be the multiprojective scheme $X := \normalfont\text{MultiProj}(R)$ (see \autoref{def_multproj} below) and $X_{\mathfrak{J}}$ be the multiprojective scheme $X_{\mathfrak{J}} := \normalfont\text{MultiProj}(R_{({\mathfrak{J}})})$ for each ${\mathfrak{J}} \subseteq [p]$. To avoid trivial situations, we always assume that $X \neq \emptyset$. \end{setup} \begin{definition} \label{def_multproj} The multiprojective scheme $\normalfont\text{MultiProj}(R)$ is given by $ \normalfont\text{MultiProj}(R) := \big\{ {\mathfrak{P}} \in \normalfont\text{Spec}(R) \mid {\mathfrak{P}} \text{ is $\normalfont\mathbb{N}^p$-graded and } {\mathfrak{P}} \not\supseteq {\normalfont\mathfrak{N}} \big\}, $ and its scheme structure is obtained by using multi-homogeneous localizations (see, e.g., \cite[\S 1]{HYRY_MULTIGRAD}). \end{definition} The inclusion $R_{({\mathfrak{J}})} \hookrightarrow R$ induces the natural projection \begin{align*} \Pi_{\mathfrak{J}}: X \rightarrow X_{\mathfrak{J}}, \quad {\mathfrak{P}} \in X \mapsto {\mathfrak{P}} \cap R_{({\mathfrak{J}})} \in X_{\mathfrak{J}}. \end{align*} We embed $X$ as a closed subscheme of a multiprojective space ${\normalfont\mathbb{P}}:={\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$. Then, for each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$, $\Pi_{\mathfrak{J}}: X \rightarrow X_{\mathfrak{J}}$ corresponds with the restriction to $X$ and to $X_{\mathfrak{J}}$ of the natural projection $$ \Pi_{\mathfrak{J}}: {\normalfont\mathbb{P}} \;\rightarrow\; {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_1}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}}, $$ and $X_{\mathfrak{J}}$ becomes a closed subscheme of ${\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_1}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}}$. 
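As a quick sanity check on multigraded dimension counts, which underlie the Hilbert polynomial discussed later in this section: for a bigraded polynomial ring $S$ with $\normalfont\text{MultiProj}(S) = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_2}$, one has $\dim_\mathbb{k} [S]_{(a,b)} = \binom{a+m_1}{m_1}\binom{b+m_2}{m_2}$. The short Python sketch below (all names are ours, purely illustrative) verifies this formula against brute-force monomial enumeration in the case $p=2$.

```python
from itertools import product
from math import comb

def dim_bigraded_piece(m1, m2, a, b):
    """dim_k [S]_(a,b) for S = k[x_0..x_{m1}] (x) k[y_0..y_{m2}]:
    count monomials of degree a in the x's times degree b in the y's."""
    return comb(a + m1, m1) * comb(b + m2, m2)

def count_monomials(nvars, d):
    """Brute force: exponent vectors in N^nvars with entries summing to d."""
    return sum(1 for e in product(range(d + 1), repeat=nvars)
               if sum(e) == d)

# the closed formula matches the brute-force count on a few samples
for m1, m2, a, b in [(1, 1, 2, 3), (2, 1, 1, 4), (3, 2, 2, 2)]:
    assert dim_bigraded_piece(m1, m2, a, b) == \
        count_monomials(m1 + 1, a) * count_monomials(m2 + 1, b)
```

For instance, $\dim_\mathbb{k} [S]_{(2,3)} = 3 \cdot 4 = 12$ when $m_1 = m_2 = 1$, in agreement with the Hilbert polynomial $P_S(t_1,t_2) = (t_1+1)(t_2+1)$ of ${\normalfont\mathbb{P}}_\mathbb{k}^1 \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^1$.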
For any multi-homogeneous element $x \in R$, the closed subscheme $\normalfont\text{MultiProj}(R/xR) \subseteq X$ is denoted by $X \cap V(x)$. \begin{notation} From now on, ${\mathfrak{J}}=\{j_1,\ldots,j_k\}$ denotes a subset of $[p]$. Set $r := \dim(X)$ and $r({\mathfrak{J}}):=\dim\left(\Pi_{\mathfrak{J}}(X)\right)$ for each ${\mathfrak{J}} \subseteq [p]$. For a singleton set $\{i\} \subseteq [p]$, $r(\{i\})$ and $\Pi_{\{i\}}$ are simply denoted by $r(i)$ and $\Pi_i$, respectively. \end{notation} Note that the image of $\Pi_{\mathfrak{J}} : X \rightarrow X_{\mathfrak{J}}$ can be described by the following isomorphism \begin{equation} \label{eq_isom_0} \Pi_{\mathfrak{J}}(X) \;\cong\; \normalfont\text{MultiProj}\left(\frac{R_{({\mathfrak{J}})}}{R_{({\mathfrak{J}})} \cap \left(0:_R{\normalfont\mathfrak{N}}^\infty\right)}\right). \end{equation} \begin{remark} \label{rem_isom_quotient_grad} Since $R = R_{({\mathfrak{J}})} \oplus \left(\sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_j\right)$, we obtain a natural isomorphism $ R_{({\mathfrak{J}})} \cong \frac{R}{\sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_j} $ of $\normalfont\mathbb{N}^{k}$-graded $\mathbb{k}$-algebras where $k = \lvert {\mathfrak{J}} \rvert$. \end{remark} We now provide some preparatory results. \begin{lemma} \label{lem_basics_projections} Under \autoref{setup_initial}, the following statements hold: \begin{enumerate}[\rm (i)] \item $r = \dim(X) = \dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right) - p$. \item There is an isomorphism \begin{equation} \label{eq_isom} \Pi_{\mathfrak{J}}(X) \;\cong\; \normalfont\text{MultiProj}\left(R/\Big(\left(0:_R{\normalfont\mathfrak{N}}^\infty\right) + \sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_j\Big)\right). \end{equation} \item If $\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)=0$, then $\Pi_{\mathfrak{J}}(X) \cong \normalfont\text{MultiProj}(R_{({\mathfrak{J}})})=X_{\mathfrak{J}}$. 
\item If $\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)=0$, then $\left(0:_{R_{({\mathfrak{J}})}} {\normalfont\mathfrak{N}}_{\mathfrak{J}}^\infty\right)=0$. \end{enumerate} \end{lemma} \begin{proof} (i) This formula follows from \cite[Lemma 1.2]{HYRY_MULTIGRAD} (also, see \cite[Corollary 3.5]{SPECIALIZATION_RAT_MAPS}). (ii) From the natural maps $R_{({\mathfrak{J}})} \hookrightarrow R \twoheadrightarrow R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)$, we obtain a natural isomorphism $$ R_{({\mathfrak{J}})}/\left(R_{({\mathfrak{J}})} \cap \left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right) \xrightarrow{\cong} \big(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\big)_{({\mathfrak{J}})}. $$ By using \autoref{rem_isom_quotient_grad} it follows that $\big(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\big)_{({\mathfrak{J}})} \cong R/\left(\left(0:_R{\normalfont\mathfrak{N}}^\infty\right) + \sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_j\right)$. Therefore, the claimed isomorphism is obtained from \autoref{eq_isom_0}. (iii) It follows directly from part (ii) and \autoref{rem_isom_quotient_grad}. (iv) This part is clear. \end{proof} \smallskip Let $P_R({\normalfont\mathbf{t}})=P_R(t_1,\ldots,t_p) \in \mathbb{Q}[{\normalfont\mathbf{t}}]=\mathbb{Q}[t_1,\ldots,t_p]$ be the \emph{Hilbert polynomial} of $R$ (see, e.g., \cite[Theorem 4.1]{HERMANN_MULTIGRAD}, \cite[Theorem 3.4]{MIXED_MULT}). Then, the degree of $P_R$ is equal to $r$ and $$ P_R(\nu) = \dim_\mathbb{k}\left([R]_\nu\right) $$ for all $\nu \in \normalfont\mathbb{N}^p$ such that $\nu \gg \mathbf{0}$. Furthermore, if we write \begin{equation} \label{eq_Hilb_poly} P_{R}({\normalfont\mathbf{t}}) = \sum_{n_1,\ldots,n_p \ge 0} e(n_1,\ldots,n_p)\binom{t_1+n_1}{n_1}\cdots \binom{t_p+n_p}{n_p}, \end{equation} then $0 \le e(n_1,\ldots,n_p) \in \normalfont\mathbb{Z}$ for all $n_1+\cdots+n_p = r$. \begin{remark}\label{modSat} The following are basic properties of Hilbert polynomials. 
\begin{enumerate}[(i)] \item Since ${\normalfont\mathfrak{N}}^k(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ for $k\gg 0$, we have $\dim_\mathbb{k}\left([R]_\nu\right) = \dim_\mathbb{k}\left([R/(0:_R{\normalfont\mathfrak{N}}^\infty)]_\nu\right) $ for $\nu\gg \mathbf{0}$. Thus, $$P_{R}({\normalfont\mathbf{t}})=P_{R/(0:_R{\normalfont\mathfrak{N}}^\infty)}({\normalfont\mathbf{t}}).$$ \item Let $\mathbb{L}$ be a field extension of $\mathbb{k}$. Then, $R \otimes_\mathbb{k} \mathbb{L}$ is a finitely generated standard $\normalfont\mathbb{N}^p$-graded $\mathbb{L}$-algebra and $ \dim_{\mathbb{L}} \left([R \otimes_\mathbb{k} \mathbb{L}]_\nu\right) = \dim_\mathbb{k}\left([R]_\nu\right) $ for all $\nu \in \normalfont\mathbb{N}^p$. Thus, $$P_{R \otimes_\mathbb{k} \mathbb{L}}({\normalfont\mathbf{t}}) =P_{R}({\normalfont\mathbf{t}}).$$ In particular, one can always assume $\mathbb{k}$ is an infinite field (for instance, we can replace $\mathbb{k}$ by a purely transcendental field extension $\mathbb{k}(\xi)$). \end{enumerate} \end{remark} Under the notation of \autoref{eq_Hilb_poly} we define the following invariants. \begin{definition} \label{def_multdeg} Let ${\normalfont\mathbf{n}} = (n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ with $\lvert{\normalfont\mathbf{n}}\rvert=r$. Then: \begin{enumerate}[(i)] \item $e({\normalfont\mathbf{n}};R) := e(n_1,\ldots,n_p)$ is the \textit{mixed multiplicity of $R$ of type $\mathbf{n}$}. \item $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X):=e(n_1,\ldots,n_p)$ is the \textit{multidegree of $X=\normalfont\text{MultiProj}(R)$ of type ${\normalfont\mathbf{n}}$ with respect to ${\normalfont\mathbb{P}}$}. 
\end{enumerate} \end{definition} As stated in the Introduction, in classical geometrical terms, when $\mathbb{k}$ is algebraically closed, $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)$ is also equal to the number of points (counting multiplicity) in the intersection of $X$ with the product $L_1 \times_\mathbb{k} \cdots \times_\mathbb{k} L_p \subset {\normalfont\mathbb{P}}$, where $L_i \subseteq {\normalfont\mathbb{P}}_\mathbb{k}^{m_i}$ is a general linear subspace of dimension $m_i-n_i$ for each $1 \le i \le p$ (see \cite{VAN_DER_WAERDEN}, \cite[Theorem 4.7]{MIXED_MULT}). The multidegrees of $X$ can be defined easily in terms of Chow rings and in terms of Hilbert series. \begin{remark} \label{rem_chow_ring} The Chow ring of ${\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$ is given by $$ A^*({\normalfont\mathbb{P}}) = \frac{\normalfont\mathbb{Z}[H_1,\ldots,H_p]}{\left(H_1^{m_1+1},\ldots,H_p^{m_p+1}\right)} $$ where $H_i$ represents the class of the inverse image of a hyperplane of ${\normalfont\mathbb{P}}_\mathbb{k}^{m_i}$ under the natural projection $\Pi_i: {\normalfont\mathbb{P}} \rightarrow {\normalfont\mathbb{P}}_\mathbb{k}^{m_i}$. Then, the class of the cycle associated to $X$ coincides with $$ \left[X\right] \;=\; \sum_{\substack{0 \le n_i \le m_i\\ \lvert{\normalfont\mathbf{n}}\rvert=r}} \deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}\left(X\right)\, H_1^{m_1-n_1}\cdots H_p^{m_p-n_p} \;\in A^*({\normalfont\mathbb{P}}). $$ \end{remark} \begin{remark} \label{rem_Hilb_series} By considering the Hilbert series $ {\normalfont\text{Hilb}}_R(t_1,\ldots,t_p) := \sum_{\nu \in \normalfont\mathbb{N}^p} \dim_\mathbb{k}\left([R]_\nu\right)t_1^{\nu_1}\cdots t_p^{\nu_p} $ of $R$, one can analogously define the notions of mixed multiplicities and multidegrees (see \cite[\S 8.5]{MILLER_STURMFELS}, \cite[Theorem A]{MIXED_MULT}). 
Here we quickly derive this analogous definition because we shall use it in \autoref{subsect_Schubert}. Let $S=\mathbb{k}[x_{1,0},x_{1,1},\ldots,x_{1,m_1}] \otimes_\mathbb{k} \cdots \otimes_\mathbb{k} \mathbb{k}[x_{p,0},x_{p,1},\ldots,x_{p,m_p}]$ be the multigraded polynomial ring corresponding to ${\normalfont\mathbb{P}}={\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$, that is, ${\normalfont\mathbb{P}} = \normalfont\text{MultiProj}(S)$. By considering an $S$-free resolution of $R$, we can write $$ {\normalfont\text{Hilb}}_R({\normalfont\mathbf{t}})=\mathcal{K}(R;{\normalfont\mathbf{t}})/\mathbf{(1-t)^{m+1}} = \mathcal{K}(R;t_1,\ldots,t_p)/\prod_{i=1}^p(1-t_i)^{m_i+1}, $$ where $\mathcal{K}(R;{\normalfont\mathbf{t}})$ is called the \emph{K-polynomial of $R$} (see \cite[Definition 8.21]{MILLER_STURMFELS}). Let $\mathcal{C}(R;{\normalfont\mathbf{t}}) \in \normalfont\mathbb{Z}[t_1,\ldots,t_p]$ be the sum of all the terms in $\mathcal{K}(R;\mathbf{1-t})$ of total degree equal to $\dim(S)-\dim(R)$ (see \cite[Definition 8.45]{MILLER_STURMFELS}). Then, if $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$, we obtain the equality $$ \mathcal{C}(R;{\normalfont\mathbf{t}}) = \sum_{\substack{0 \le n_i \le m_i\\ \lvert{\normalfont\mathbf{n}}\rvert=r}} \deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}\left(X\right)\, t_1^{m_1-n_1}\cdots t_p^{m_p-n_p}. $$ \end{remark} \begin{proof} From \cite[Theorem A(I)]{MIXED_MULT} we have ${\normalfont\text{Hilb}}_R({\normalfont\mathbf{t}}) = \sum_{\lvert\mathbf{k}\rvert=\dim(R)}Q_\mathbf{k}({\normalfont\mathbf{t}})/\mathbf{(1-t)}^\mathbf{k}$ where $Q_\mathbf{k}({\normalfont\mathbf{t}}) \in \normalfont\mathbb{Z}[{\normalfont\mathbf{t}}]$. The assumption $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ gives that $\dim(R)=r+p$ (see \autoref{lem_basics_projections}(i)). 
Hence, by using \cite[Theorem A(II,III)]{MIXED_MULT} we obtain that $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}\left(X\right)=e({\normalfont\mathbf{n}};R) = Q_{\mathbf{n+1}}(\mathbf{1})$ for all $\lvert{\normalfont\mathbf{n}}\rvert = r$. Also, the assumption $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ and \cite[Theorem 2.8(ii)]{MIXED_MULT} imply that $Q_\mathbf{k}(\mathbf{1})=0$ when $\lvert\mathbf{k}\rvert=\dim(R)$ and $\mathbf{k}_i=0$ for some $1 \le i \le p$. After writing ${\normalfont\text{Hilb}}_R({\normalfont\mathbf{t}}) = \sum_{\lvert\mathbf{k}\rvert=\dim(R)}Q_\mathbf{k}({\normalfont\mathbf{t}})\mathbf{(1-t)}^\mathbf{m+1-k}/\mathbf{(1-t)}^\mathbf{m+1}$, we obtain the equality $$\mathcal{K}(R;{\normalfont\mathbf{t}})=\sum_{\lvert\mathbf{k}\rvert=\dim(R)}Q_\mathbf{k}({\normalfont\mathbf{t}})\mathbf{(1-t)}^\mathbf{m+1-k}. $$ Making the substitution $t_i\mapsto (1-t_i)$ and choosing the terms of total degree $\dim(S)-\dim(R)=\sum_{i=1}^pm_i-r$, it follows that $\mathcal{C}(R;{\normalfont\mathbf{t}})=\sum_{\lvert\mathbf{k}\rvert=\dim(R)}Q_\mathbf{k}(\mathbf{1})\mathbf{t^{m+1-k}} = \sum_{\lvert\mathbf{n}\rvert=r}Q_{\mathbf{n+1}}(\mathbf{1})\mathbf{t^{m-n}}$. So, the result is clear. \end{proof} Although in the proofs of \autoref{thmA} and \autoref{corB} we do not exploit the fact that multidegrees can be defined as in \autoref{rem_chow_ring}, we do encode the multidegrees in a homogeneous polynomial that mimics the cycle associated to $X$ in the Chow ring $A^*({\normalfont\mathbb{P}})$. The following objects are the main focus of this paper. \begin{definition}\label{defMsupp} Let $X\subseteq {\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_p}$ be a closed subscheme with $r = \dim(X)$. 
We define the \emph{multidegree polynomial of $X$ with respect to ${\normalfont\mathbb{P}}$} to be the homogeneous polynomial $$ \deg_{\normalfont\mathbb{P}}(X;t_1,\ldots,t_p) \;:=\; \sum_{\substack{0 \le n_i \le m_i\\ \lvert{\normalfont\mathbf{n}}\rvert=r}} \deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}\left(X\right)\, t_1^{m_1-n_1}\cdots t_p^{m_p-n_p} \;\in\; \normalfont\mathbb{N}[t_1,\ldots,t_p] $$ of degree $m_1+\cdots+m_p-r$. We say that the \textit{support of $X$ with respect to ${\normalfont\mathbb{P}}$} is given by $$ \normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X) \,:=\, \big\{{\normalfont\mathbf{n}}\in \normalfont\mathbb{N}^p \;\mid\; \deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)>0\big\}. $$ \end{definition} \begin{remark} Note that under the assumption $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ we obtain the equality $\deg_{\normalfont\mathbb{P}}(X;{\normalfont\mathbf{t}}) = \mathcal{C}(R;{\normalfont\mathbf{t}})$. \end{remark} \smallskip \subsection{The case over an Artinian local ring}\label{sub_Art} In this subsection, we show how the mixed multiplicities are defined for a standard multigraded algebra over an Artinian local ring. \begin{setup} \label{setup_Artinian} Keep the notations and assumptions introduced in \autoref{setup_initial} and now replace the field $\mathbb{k}$ by an Artinian local ring $A$. \end{setup} In this setting, the notion of mixed multiplicities is defined essentially in the same way as in \autoref{def_multdeg}. \begin{definition} \label{def_mixed_mult} Let $P_R({\normalfont\mathbf{t}})=P_R(t_1,\ldots,t_p) \in \mathbb{Q}[{\normalfont\mathbf{t}}]=\mathbb{Q}[t_1,\ldots,t_p]$ be the \emph{Hilbert polynomial} of $R$ (see, e.g., \cite[Theorem 4.1]{HERMANN_MULTIGRAD}, \cite[Theorem 3.4]{MIXED_MULT}). 
Then, as before, the degree of $P_R$ is equal to $\dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right) - p$ and $$ P_R(\nu) = \text{length}_A\left([R]_\nu\right) $$ for all $\nu \in \normalfont\mathbb{N}^p$ such that $\nu \gg \mathbf{0}$. If we write $ P_{R}({\normalfont\mathbf{t}}) = \sum_{n_1,\ldots,n_p \ge 0} e(n_1,\ldots,n_p)\binom{t_1+n_1}{n_1}\cdots \binom{t_p+n_p}{n_p}, $ then $0 \le e(n_1,\ldots,n_p) \in \normalfont\mathbb{Z}$ for all $n_1+\cdots+n_p = \dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right) - p$. For each ${\normalfont\mathbf{n}} = (n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ with $\lvert {\normalfont\mathbf{n}}\rvert=\dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right) - p$, we say that $e({\normalfont\mathbf{n}};R) := e(n_1,\ldots,n_p)$ is the \textit{mixed multiplicity of $R$ of type $\mathbf{n}$}. \end{definition} \subsection{Polymatroids}\label{sub_Poly} In this subsection we include some relevant information about polymatroids. \begin{definition} Let $E$ be a finite set and $r$ a function $r:2^{E}\rightarrow \normalfont\mathbb{Z}_{\geq 0}$ satisfying the following two properties: (i) it is \emph{non-decreasing}, i.e., $r({\mathfrak{T}}_1)\leq r({\mathfrak{T}}_2)$ if ${\mathfrak{T}}_1\subseteq {\mathfrak{T}}_2\subseteq E$, and (ii) it is \emph{submodular}, i.e., $r({\mathfrak{T}}_1\cap {\mathfrak{T}}_2)+r({\mathfrak{T}}_1\cup {\mathfrak{T}}_2)\leq r({\mathfrak{T}}_1)+r({\mathfrak{T}}_2)$ if ${\mathfrak{T}}_1,{\mathfrak{T}}_2 \subseteq E$. The function $r$ is called a {\it rank function on $E$}. We usually let $E=[p]$.
A \textit{$($discrete$)$ polymatroid} $\mathcal{P}$ on $[p]$ with rank function $r$ is a collection of points in $\normalfont\mathbb{N}^p$ of the following form \[ \mathcal{P}=\left\{{\normalfont\mathbf{x}}=(x_1,\ldots, x_p)\in \normalfont\mathbb{N}^p \;\mid\; \sum_{j\in{\mathfrak{J}}} x_j\leq r({\mathfrak{J}}), \;\forall {\mathfrak{J}}\subsetneq [p], \;\sum_{i\in [p]} x_i=r([p])\right\}. \] By definition, a polymatroid consists of the integer points of a polytope (the convex hull of $\mathcal{P}$); we call this polytope a \emph{base polymatroid polytope}. We note that a polymatroid is completely determined by its rank function. \end{definition} \begin{remark}\label{rem:matroid} If the rank function of $\mathcal{P}$ satisfies $r(\{i\})\leq 1$ for every $i\in [p]$, then $\mathcal{P}$ is called a \emph{matroid}. In other words, matroids are discrete polymatroids where every integer point is an element of $\{0,1\}^p$. A general reference for matroids is \cite{OXLEY}. \end{remark} In the following definition we consider the standard notions of linear and algebraic matroids (see \cite[Chapter 6]{OXLEY}) and adapt them to the polymatroid case. \begin{definition}\label{def_algebraic_polymatroid} Let $\mathcal{P}$ be a polymatroid. \begin{itemize} \item We say $\mathcal{P}$ is \textit{linear} over a field $\mathbb{k}$ if there exists a $\mathbb{k}$-vector space $V$ and subspaces $V_i, i\in [p]$ such that for every ${\mathfrak{J}}\subseteq [p]$ we have $r({\mathfrak{J}})=\dim_\mathbb{k}\left(\sum_{j\in{\mathfrak{J}}} V_j\right)$ \cite[Proposition 1.1.1]{OXLEY}. The vector space $V$, together with the subspaces $V_i$ for $1\leq i\leq p$, forms a \emph{linear representation} of $\mathcal{P}$.
\item We say $\mathcal{P}$ is \textit{algebraic} over a field $\mathbb{k}$ if there exists a field extension $\mathbb{k}\hookrightarrow \mathbb{L}$ and intermediate field extensions $\mathbb{L}_i, i\in [p]$ such that for every ${\mathfrak{J}}\subseteq [p]$ we have $r({\mathfrak{J}})=\text{trdeg}_\mathbb{k}\left(\bigwedge_{j\in {\mathfrak{J}}} \mathbb{L}_j\right)$, where $\bigwedge_{j\in {\mathfrak{J}}} \mathbb{L}_j$ is the \emph{compositum} of the subfields, i.e., the smallest subfield of $\mathbb{L}$ containing all of them \cite[Theorem 6.7.1]{OXLEY}. The field $\mathbb{L}$, together with the subfields $\mathbb{L}_i$ for $1\leq i\leq p$, forms an \emph{algebraic representation} of $\mathcal{P}$. \end{itemize} \end{definition} \section{A characterization for the positivity of multidegrees}\label{sec_main} In this section, we focus on characterizing the positivity of multidegrees; our main goal is to prove \autoref{thmA} and \autoref{corB}. Throughout this section we continue using the same notations and assumptions of \autoref{section_notations}. We begin with the following result that relates the Hilbert polynomial $P_R({\normalfont\mathbf{t}}) \in \mathbb{Q}[{\normalfont\mathbf{t}}]$ of $R$ with the dimensions $r({\mathfrak{J}})=\dim\left(\Pi_{{\mathfrak{J}}}(X)\right)$ of the schemes $\Pi_{{\mathfrak{J}}}(X)$. It extends \cite[Theorem 1.7]{TRUNG_POSITIVE} to a multigraded setting. \begin{proposition} \label{thm_deg_Hilb_pol} Assume \autoref{setup_initial}. For each ${\mathfrak{J}} = \{j_1,\ldots, j_k\}\subseteq [p]$, let $\deg(P_R;{\mathfrak{J}})$ be the degree of the Hilbert polynomial $P_R$ in the variables $t_{j_1},\ldots, t_{j_k}$. Then, for every such ${\mathfrak{J}} = \{j_1,\ldots, j_k\}$ we have that $$ \deg(P_R; {\mathfrak{J}}) = r({\mathfrak{J}}). $$ \end{proposition} \begin{proof} We may assume that $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ and $\mathbb{k}$ is an infinite field by \autoref{modSat}.
Fix ${\mathfrak{J}}=\{j_1,\ldots, j_k\}\subseteq [p]$ and let $w\in \normalfont\mathbb{N}$ be such that $\dim_\mathbb{k}([R]_{\mathbf{n}})=P_R(\mathbf{n})$ for every $\mathbf{n}=(n_1,\ldots, n_p)\geqslant w\mathbf{1}$. Let $(d_1,\ldots, d_k)$ be such that $\delta:=\deg(P_R; {\mathfrak{J}})=d_1+\cdots+d_k$ and $t_{j_1}^{d_1}\cdots t_{j_k}^{d_k}$ divides a term of $P_R$. Let $q$ be a polynomial in the variables $\{t_i\mid i\not\in {\mathfrak{J}} \}$ such that $P_R-q\cdot t_{j_1}^{d_1}\cdots t_{j_k}^{d_k}$ has no term divisible by $t_{j_1}^{d_1}\cdots t_{j_k}^{d_k}$. Let $\mathbf{s}=(s_i\mid i\not\in {\mathfrak{J}} )\in \normalfont\mathbb{N}^{p-|{\mathfrak{J}}|}$ be a vector of integers such that $\mathbf{s}\geqslant w\mathbf{1}$ and $q(\mathbf{s})\neq 0$. Thus, if one evaluates $t_i=s_i$ in $P_R$ for every $i\not\in {\mathfrak{J}}$, one obtains a polynomial $Q$ in the variables $t_{j_1},\ldots, t_{j_k}$ of degree $\delta$. On the other hand, by \cite[Theorem 3.4]{MIXED_MULT}, for $n_{j_1},\ldots, n_{j_k}\geqslant w$ this polynomial $Q$ coincides with the Hilbert polynomial of the $R_{({\mathfrak{J}})}$-module generated by $[R]_{\mathbf{s}'}$, where $\mathbf{s}'_i=\mathbf{s}_i$ if $i\not\in{\mathfrak{J}}$ and $\mathbf{s}'_i=0$ otherwise. Call this module $M$. Since $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$, for every $1\le i\le p$ we have $\normalfont\text{grade}({\normalfont\mathfrak{m}}_i) \ge 1$, and then there exist elements $y_i \in [R]_{{\normalfont\mathbf{e}}_i}$ which are non-zero-divisors (see, e.g., \cite[Lemma 1.5.12]{BRUNS_HERZOG}). From the fact that $y_1^{\mathbf{s}_1'}\cdots y_p^{\mathbf{s}_p'} \in M$, it follows that $\normalfont\text{Ann}_{R_{({\mathfrak{J}})}}(M)=0$. Therefore, $\delta= \dim\left(\normalfont\text{Supp}(M) \cap X_{({\mathfrak{J}})}\right) = r({\mathfrak{J}})$, by \cite[Theorem 3.4]{MIXED_MULT}, finishing the proof. \end{proof} In the following remark we gather some basic relations for the radicals of certain ideals.
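Before doing so, it may help to see \autoref{thm_deg_Hilb_pol} at work in the simplest nontrivial case. The following snippet is an illustration we add here; it uses only the standard computation for the diagonal in ${\normalfont\mathbb{P}}^1 \times {\normalfont\mathbb{P}}^1$ and is not needed in the sequel.

```latex
\begin{remark}
Let $R=\mathbb{k}[x_0,x_1,y_0,y_1]/(x_0y_1-x_1y_0)$ with the standard
bigrading $\deg(x_i)=(1,0)$ and $\deg(y_i)=(0,1)$, so that
$X=\normalfont\text{MultiProj}(R)\subset
{\normalfont\mathbb{P}}_\mathbb{k}^{1}\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^{1}$
is the diagonal. Since $x_0y_1-x_1y_0$ is a non-zero-divisor of degree
$(1,1)$, for every $\nu=(\nu_1,\nu_2)\in\normalfont\mathbb{N}^2$ one has
$\dim_\mathbb{k}\left([R]_\nu\right)=(\nu_1+1)(\nu_2+1)-\nu_1\nu_2
=\nu_1+\nu_2+1$, and hence $P_R(t_1,t_2)=t_1+t_2+1$. Both projections
$\Pi_1(X)$ and $\Pi_2(X)$ equal ${\normalfont\mathbb{P}}_\mathbb{k}^{1}$, and indeed
$\deg(P_R;\{1\})=\deg(P_R;\{2\})=1=r(\{1\})=r(\{2\})$, while
$\deg(P_R;\{1,2\})=1=r(\{1,2\})=\dim(X)$.
\end{remark}
```

One checks similarly that $\deg_{\normalfont\mathbb{P}}^{(1,0)}(X)=\deg_{\normalfont\mathbb{P}}^{(0,1)}(X)=1$, so that $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)=\{(1,0),(0,1)\}$.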
\begin{remark} \label{rem_radicals} (i) Let $I, J, K \subset R$ be ideals. If $J \subset \sqrt{K}$, then $I+J \subset \sqrt{I+K}$. In particular, if $\sqrt{J} = \sqrt{K}$, then $\sqrt{I+J} = \sqrt{I+K}$. \noindent (ii) For any element $x \in {\normalfont\mathfrak{m}}_1$, since $(x:_R{\normalfont\mathfrak{m}}_1^\infty){\normalfont\mathfrak{m}}_1^k \subset (x)$ for some $k >0$, it follows that $\sqrt{(x)} = \sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty){\normalfont\mathfrak{m}}_1^k} = \sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty)\cap {\normalfont\mathfrak{m}}_1}$. \end{remark} If $\mathbb{k}$ is an infinite field, then for each $1 \le i \le p$ we say that a property $\mathbf{P}$ is satisfied by a \emph{general element} in the $\mathbb{k}$-vector space $[R]_{{\normalfont\mathbf{e}}_i}$, if there exists a dense open subset $U$ of ${\left[R\right]}_{{\normalfont\mathbf{e}}_i}$ with the Zariski topology such that every element in $U$ satisfies the property $\mathbf{P}$. The following three technical lemmas are important steps for the proof of \autoref{thm_projections}. \begin{lemma} \label{lem_radical_sats} Assume \autoref{setup_initial} with $\mathbb{k}$ being an infinite field. Suppose that $R$ is a domain. Let $x \in [R]_{{\normalfont\mathbf{e}}_1}$ be a general element. Then, we have the equality $ \sqrt{\left(x:_{R}{\normalfont\mathfrak{N}}^\infty\right)} = \sqrt{\left(x:_{R}{\normalfont\mathfrak{m}}_1^\infty\right)} $. \end{lemma} \begin{proof} Since $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$, we have that $\normalfont\text{ht}({\normalfont\mathfrak{m}}_j) \ge 1$ for every $1\le j\le p$. Consider the following finite set of prime ideals $$ \mathfrak{S} = \big\lbrace {\mathfrak{P}} \in \normalfont\text{Spec}(R) \mid {\mathfrak{P}} \in {\normalfont\text{Min}}({\normalfont\mathfrak{m}}_j) \text{ for some } 2 \le j \le p \text{ and } {\mathfrak{P}} \not\supseteq {\normalfont\mathfrak{m}}_1 \big\rbrace. 
$$ By using the Prime Avoidance Lemma and the fact that $\mathbb{k}$ is infinite, for a general element $x \in [R]_{{\normalfont\mathbf{e}}_1}$ we have that $x \not\in \bigcup_{{\mathfrak{P}} \in \mathfrak{S}} {\mathfrak{P}}$. If ${\mathfrak{P}} \in {\normalfont\text{Min}}\left(x:_R{\normalfont\mathfrak{m}}_1^\infty\right)$, then $\normalfont\text{ht}({\mathfrak{P}}) \le 1$ by Krull's Principal Ideal Theorem, and so we would have that ${\mathfrak{P}} \in \mathfrak{S}$ whenever $ {\mathfrak{P}} \not\supseteq {\normalfont\mathfrak{m}}_1$ and ${\mathfrak{P}} \supseteq {\normalfont\mathfrak{m}}_j$ for some $2 \le j \le p$. Therefore, for any ${\mathfrak{P}} \in \normalfont\text{Spec}(R)$ and a general element $x \in [R]_{{\normalfont\mathbf{e}}_1}$, if ${\mathfrak{P}} \in {\normalfont\text{Min}}\left(x:_R{\normalfont\mathfrak{m}}_1^\infty\right)$ we get ${\mathfrak{P}} \supseteq (x:_R{\normalfont\mathfrak{N}}^\infty) = (x:_R({\normalfont\mathfrak{m}}_1\cap {\normalfont\mathfrak{m}}_2 \cap \cdots \cap {\normalfont\mathfrak{m}}_p)^\infty)$; so, $\sqrt{(x:_R{\normalfont\mathfrak{N}}^\infty)} = \sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty)}$. \end{proof} The lemma below is necessary for some reduction arguments in \autoref{thm_projections}. \begin{lemma} \label{lem_cut_project} Assume \autoref{setup_initial} with $\mathbb{k}$ being an infinite field. Suppose that $R$ is a domain. Let $x \in [R]_{{\normalfont\mathbf{e}}_1}$ be a general element and set $Z = X \cap V(x) = \normalfont\text{MultiProj}(R/xR)$. Then, for each ${\mathfrak{J}} = \{1,j_2,\ldots,j_k\}\subseteq [p]$, the following statements hold: \begin{enumerate}[\rm (i)] \item $\dim\left(\Pi_{\mathfrak{J}}(Z)\right) = \dim\left(X_{\mathfrak{J}} \cap V(x)\right)$, where $X_{\mathfrak{J}} \cap V(x) = \normalfont\text{MultiProj}\left(R_{({\mathfrak{J}})}/xR_{({\mathfrak{J}})}\right)$. 
\item $\dim(\Pi_{\mathfrak{L}}(Z)) = \dim\big(\Pi_{\mathfrak{L}}'(X_{\mathfrak{J}} \cap V(x))\big)$, where $\mathfrak{L} = {\mathfrak{J}} \setminus \{1\}$ and $\Pi_{\mathfrak{L}}'$ denotes the natural projection $\Pi_{\mathfrak{L}}' : {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_2}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}} \rightarrow {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_2}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}}$. \end{enumerate} \end{lemma} \begin{proof} For notational purposes, let $\mathfrak{b}_{\mathfrak{J}} := {\normalfont\mathfrak{m}}_1 \cap R_{({\mathfrak{J}})}$. (i) From \autoref{eq_isom} we have that $ \Pi_{{\mathfrak{J}}}(Z) \cong \normalfont\text{MultiProj}\big(R/\big((x:_R{\normalfont\mathfrak{N}}^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j\big)\big). $ Since we are assuming $1 \in {\mathfrak{J}}$, from \autoref{rem_isom_quotient_grad} we obtain the natural isomorphism $$ R_{({\mathfrak{J}})}/\left(x:_{R_{({\mathfrak{J}})}} \mathfrak{b}_{\mathfrak{J}}^\infty\right) \;\xrightarrow{\cong} \; R/\Big((x:_R{\normalfont\mathfrak{m}}_1^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j\Big); $$ indeed, for $\ell \ge 0$ and $y \in R_{({\mathfrak{J}})}$, one notices that $\mathfrak{b}_{\mathfrak{J}}^\ell \cdot y \in xR_{({\mathfrak{J}})}$ if and only if ${\normalfont\mathfrak{m}}_1^\ell \cdot y \in xR$. 
By \autoref{lem_radical_sats} and \autoref{rem_radicals}(i) we have $\sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j}=\sqrt{(x:_R{\normalfont\mathfrak{N}}^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j}$, and by applying \autoref{lem_radical_sats} to the ring $R_{({\mathfrak{J}})}$ we obtain $\sqrt{(x:_{R_{({\mathfrak{J}})}}\mathfrak{b}_{\mathfrak{J}}^\infty)}=\sqrt{(x:_{R_{({\mathfrak{J}})}}{\normalfont\mathfrak{N}}_{\mathfrak{J}}^\infty)} $. It follows that \begin{equation} \label{eq_radicals_fJ} R_{({\mathfrak{J}})} \;\big/\; \sqrt{(x:_{R_{({\mathfrak{J}})}}{\normalfont\mathfrak{N}}_{\mathfrak{J}}^\infty)} \;\cong\; R \;\big/\; \sqrt{(x:_R{\normalfont\mathfrak{N}}^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j}, \end{equation} which gives the result. (ii) By using \autoref{eq_isom} we obtain that $\Pi_{{\mathfrak{L}}}(Z) \cong \normalfont\text{MultiProj}\big(R/\big((x:_R{\normalfont\mathfrak{N}}^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j + {\normalfont\mathfrak{m}}_1\big)\big)$ and that $\Pi_{{\mathfrak{L}}}'\left(X_{\mathfrak{J}} \cap V(x)\right) \cong \normalfont\text{MultiProj}\left(R_{({\mathfrak{J}})} /\left( (x:_{R_{({\mathfrak{J}})}}{\normalfont\mathfrak{N}}_{\mathfrak{J}}^\infty) + \mathfrak{b}_{\mathfrak{J}}\right)\right)$. Since the isomorphism in \autoref{eq_radicals_fJ} can be extended to \begin{equation*} \label{eq_radicals_bJ} R_{({\mathfrak{J}})} \;\Big/\; \left( \sqrt{(x:_{R_{({\mathfrak{J}})}}{\normalfont\mathfrak{N}}_{\mathfrak{J}}^\infty)} + \mathfrak{b}_{\mathfrak{J}}\right) \;\cong\; R \;\Big/\; \left(\sqrt{(x:_R{\normalfont\mathfrak{N}}^\infty)+\sum_{j\not\in {\mathfrak{J}}}{\normalfont\mathfrak{m}}_j} + {\normalfont\mathfrak{m}}_1 \right), \end{equation*} the result follows from \autoref{rem_radicals}(i). \end{proof} We continue with the next auxiliary lemma that allows us to simplify the proof of \autoref{thm_projections}. 
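Before stating it, the following toy computation (an illustration we add, relying only on standard facts about products of projective lines) previews the two dimension counts that \autoref{lem_dim_projects} formalizes.

```latex
\begin{remark}
Let $R=\mathbb{k}[x_0,x_1]\otimes_\mathbb{k}\mathbb{k}[y_0,y_1]$, so that
$X=\normalfont\text{MultiProj}(R)=
{\normalfont\mathbb{P}}_\mathbb{k}^{1}\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^{1}$,
$r=2$, $r(\{1\})=r(\{2\})=1$ and $r(\{1,2\})=2$. A general element
$x\in[R]_{{\normalfont\mathbf{e}}_1}$ is a nonzero linear form in $x_0,x_1$, so
$Z=X\cap V(x)=\{q\}\times{\normalfont\mathbb{P}}_\mathbb{k}^{1}$ for a single point
$q\in{\normalfont\mathbb{P}}_\mathbb{k}^{1}$. Accordingly, $\dim(Z)=1=r-1$ and
$\dim\left(\Pi_1(Z)\right)=0=r(\{1\})-1$, whereas
$\dim\left(\Pi_2(Z)\right)=1=r(\{2\})$, the latter because
$r(\{1,2\})=2>1=r(\{2\})$.
\end{remark}
```

These are precisely the conclusions of parts (i) and (ii) of \autoref{lem_dim_projects}, and they agree with the closed formula $\dim(\Pi_{\mathfrak{J}}(Z))=\min\{r({\mathfrak{J}}),\, r({\mathfrak{J}}\cup\{1\})-1\}$ proved in \autoref{thm_projections}.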
\begin{lemma} \label{lem_dim_projects} Assume \autoref{setup_initial} with $\mathbb{k}$ being an infinite field. Suppose that $R$ is a domain and $r(1)\ge 1$. Let $x \in [R]_{{\normalfont\mathbf{e}}_1}$ be a general element and set $Z = X \cap V(x) = \normalfont\text{MultiProj}(R/xR)$. Then, the following statements hold: \begin{enumerate}[\rm (i)] \item If $1 \in {\mathfrak{J}} \subseteq [p]$, then $\dim\left(\Pi_{\mathfrak{J}}(Z)\right) = r({\mathfrak{J}}) - 1$; in particular, $\dim(Z) = r - 1$. \item If $1 \not\in {\mathfrak{J}}$ and $r({\mathfrak{K}}) > r({\mathfrak{J}})$, where ${\mathfrak{K}} = \{1\} \,\cup\, {\mathfrak{J}} \subseteq [p]$, then $\dim\left(\Pi_{\mathfrak{J}}(Z)\right)=r({\mathfrak{J}})$. \end{enumerate} \end{lemma} \begin{proof} (i) First, from \autoref{lem_cut_project}(i) it suffices to compute $\dim\left(X_{\mathfrak{J}} \cap V(x)\right)$, where $X_{\mathfrak{J}} \cap V(x) = \normalfont\text{MultiProj}\left(R_{({\mathfrak{J}})}/xR_{({\mathfrak{J}})}\right)$. For ${\mathfrak{J}} = \{1, j_2,\ldots,j_k\} \subseteq [p]$, note that $\Pi_1(X) \cong \Pi_1'(X_{\mathfrak{J}})$, where $\Pi_1'$ denotes the natural projection $\Pi_1':{\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_2}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}} \rightarrow {\normalfont\mathbb{P}}_\mathbb{k}^{m_1}$. Therefore, neither the assumption nor the conclusion changes if we substitute $R$ by $R_{({\mathfrak{J}})}$ and $X$ by $X_{\mathfrak{J}}$, and we do so. From the short exact sequence $$ 0 \rightarrow R(-{\normalfont\mathbf{e}}_1) \xrightarrow{x} R \rightarrow R/xR \rightarrow 0, $$ we obtain $P_{R/xR}({\normalfont\mathbf{t}}) = P_R({\normalfont\mathbf{t}}) - P_R({\normalfont\mathbf{t}} - {\normalfont\mathbf{e}}_1)$. By using \autoref{thm_deg_Hilb_pol}, $\deg(P_R;t_1) = r(1) \ge 1$ and so $P_R$ is non-constant as a univariate polynomial in the variable $t_1$. 
Thus, $P_{R/xR}({\normalfont\mathbf{t}}) \neq 0$, which implies that $(x:_R{\normalfont\mathfrak{N}}^\infty)$ is a proper ideal. So, Krull's Principal Ideal Theorem yields that $\normalfont\text{ht}(x:_R{\normalfont\mathfrak{N}}^\infty)=1$ and that $$ \dim(Z) = \dim(R/(x:_R{\normalfont\mathfrak{N}}^\infty)) - p = \dim(R) - 1 - p = (r+p)-1-p=r-1. $$ (ii) By using \autoref{lem_cut_project}(ii), we can substitute $R$ by $R_{({\mathfrak{K}})}$ and $X$ by $X_{\mathfrak{K}}$, and we do so. So, we may assume that ${\mathfrak{K}} = [p]$ and ${\mathfrak{J}} = \{2,\ldots,p\}$. From \autoref{eq_isom} we get the isomorphism \begin{equation} \label{eq_second_project_Z} \Pi_{\mathfrak{J}}(Z) \cong \normalfont\text{MultiProj}\Big(R/\left((x:_R{\normalfont\mathfrak{N}}^\infty)+{\normalfont\mathfrak{m}}_1\right)\Big). \end{equation} The equality \begin{equation} \label{eq_radicals_project} \sqrt{(x:_R{\normalfont\mathfrak{N}}^\infty) + {\normalfont\mathfrak{m}}_1} = \sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty) + {\normalfont\mathfrak{m}}_1} \end{equation} follows from \autoref{lem_radical_sats} and \autoref{rem_radicals}(i). The assumption yields that $$ \normalfont\text{ht}({\normalfont\mathfrak{m}}_1)=\dim\left(R\right)-\dim\left(R_{({\mathfrak{J}})}\right)=(r+p)-(r({\mathfrak{J}})+p-1)\ge 2, $$ and then, as a consequence of Krull's Principal Ideal Theorem, it follows that $\sqrt{(x)} = \sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty)}$; therefore, \autoref{rem_radicals}(i) implies that $\sqrt{(x:_R{\normalfont\mathfrak{m}}_1^\infty) + {\normalfont\mathfrak{m}}_1}=\sqrt{{\normalfont\mathfrak{m}}_1}$. Summing up, we obtain the equalities $\dim\big(R/\left((x:_R{\normalfont\mathfrak{N}}^\infty)+{\normalfont\mathfrak{m}}_1\right)\big)=\dim(R/{\normalfont\mathfrak{m}}_1)=r({\mathfrak{J}}) +p-1$, and so the result follows.
\end{proof} The next important theorem computes the dimension of the image of the projections $\Pi_{\mathfrak{J}}$ after cutting with a general hyperplane under certain conditions. For the proof of this result, we need the following version of Grothendieck's Connectedness Theorem. For that, we recall the definitions \begin{align*} &c(R) := \min\big\{ \dim(R/\mathfrak{a}) \mid \mathfrak{a} \subset R \text{ is an ideal and } \normalfont\text{Spec}(R) \setminus V(\mathfrak{a}) \text{ is disconnected} \big\},\\ &{\normalfont\text{sdim}}(R) := \min\big\{\dim(R/{\mathfrak{P}}) \mid {\mathfrak{P}} \in {\normalfont\text{Min}}(R) \big\} \text{ and }\\ &{\normalfont\text{ara}}(\mathfrak{a}) :=\min\{n \mid \sqrt{(a_1,\ldots,a_n)} = \sqrt{\mathfrak{a}} \text{ and } a_i \in R \} \end{align*} for any ideal $\mathfrak{a} \subset R$. \begin{lemma}[{\cite[Proposition 2.1]{BRODMANN_RUNG}, \cite[Lemma 2.6]{TRUNG_POSITIVE}}] \label{lem_dimension_connect} For two proper homogeneous ideals $\mathfrak{a},\mathfrak{b} \subset R$, if $\min\{\dim(R/\mathfrak{a}), \dim(R/\mathfrak{b})\} > \dim(R/(\mathfrak{a}+\mathfrak{b}))$, then $$ \dim(R/(\mathfrak{a}+\mathfrak{b})) \ge \min\{ c(R), \;{\normalfont\text{sdim}}(R)-1 \} - {\normalfont\text{ara}}(\mathfrak{a} \cap \mathfrak{b}). $$ \end{lemma} We are now ready to present the following theorem. \begin{theorem} \label{thm_projections} Assume \autoref{setup_initial} with $\mathbb{k}$ being an infinite field. Suppose that $R$ is a domain and $r(1) \ge 1$. Let $x \in [R]_{{\normalfont\mathbf{e}}_1}$ be a general element and set $Z = X \cap V(x) = \normalfont\text{MultiProj}(R/xR)$. Then, for each ${\mathfrak{J}} \subseteq [p]$ we have that $$ \dim(\Pi_{\mathfrak{J}}(Z)) = \min\Big\{ r({\mathfrak{J}}),\; r\big({\mathfrak{J}} \cup \{1\}\big)-1\Big\}. 
$$ \end{theorem} \begin{proof} For each ${\mathfrak{J}}=\{j_1,\ldots,j_k\} \subseteq {\mathfrak{K}} = \{h_1,\ldots,h_\ell\} \subseteq [p]$ we have that $\Pi_{\mathfrak{J}}(Z) = \Pi_{\mathfrak{J}}^\prime(\Pi_{\mathfrak{K}}(Z))$ where $\Pi_{\mathfrak{J}}^\prime$ denotes the natural projection $\Pi_{\mathfrak{J}}^\prime : {\normalfont\mathbb{P}}_\mathbb{k}^{m_{h_1}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{h_\ell}} \rightarrow {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_1}} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_{j_k}}$. So, from \autoref{lem_dim_projects}(i) it follows that the inequality ``$\le$'' holds in the desired equality. Due to \autoref{lem_dim_projects}, in order to show the reversed inequality ``$\ge$'', it suffices to show that $\dim(\Pi_{\mathfrak{J}}(Z)) \ge r({\mathfrak{J}})-1$ when $1 \not\in {\mathfrak{J}}$ and $r({\mathfrak{K}}) = r({\mathfrak{J}})$, where ${\mathfrak{K}} = \{1\} \,\cup\, {\mathfrak{J}} \subseteq [p]$. By using \autoref{lem_cut_project}(ii), we may assume that ${\mathfrak{K}} = [p]$ and ${\mathfrak{J}} = \{2,\ldots,p\}$. From \autoref{eq_second_project_Z} and \autoref{eq_radicals_project}, the proof will be complete once we prove the inequality $\dim\big(R/\left((x:_R{\normalfont\mathfrak{m}}_1^\infty)+{\normalfont\mathfrak{m}}_1\right)\big) \ge (r({\mathfrak{J}})-1)+(p-1)=r+p-2$.
By using \autoref{lem_radical_sats} and \autoref{lem_dim_projects}(i) we obtain that $$ \dim\left(R/(x:_R{\normalfont\mathfrak{m}}_1^\infty)\right) = \dim\left(R/(x:_R{\normalfont\mathfrak{N}}^\infty)\right) = (r-1)+p = r+p-1, $$ and since $r({\mathfrak{J}})=r$, we have $$\dim(R/{\normalfont\mathfrak{m}}_1) =\dim(R_{({\mathfrak{J}})}) = r({\mathfrak{J}})+(p-1)= r +(p-1)=r+p-1.$$ Moreover, \autoref{eq_radicals_project} and \autoref{lem_dim_projects}(i) yield that $$ \dim\big(R/\left((x:_R{\normalfont\mathfrak{m}}_1^\infty)+{\normalfont\mathfrak{m}}_1\right)\big) = \dim\big(R/\left((x:_R{\normalfont\mathfrak{N}}^\infty)+{\normalfont\mathfrak{m}}_1\right)\big) \le (r-1)+(p-1)=r+p-2. $$ Since $x \in {\normalfont\mathfrak{m}}_1$, \autoref{rem_radicals}(ii) gives that ${\normalfont\text{ara}}\left((x:_R{\normalfont\mathfrak{m}}_1^\infty)\cap {\normalfont\mathfrak{m}}_1\right) = {\normalfont\text{ara}}\left((x)\right)=1$. As $R$ is a domain, $c(R)={\normalfont\text{sdim}}(R)=r+p$. Therefore, from \autoref{lem_dimension_connect} we obtain that $$ \dim\big(R/\left((x:_R{\normalfont\mathfrak{m}}_1^\infty)+{\normalfont\mathfrak{m}}_1\right)\big) \ge \min\{ r+p, (r+p)-1 \} - 1 = r+p-2. $$ So, the proof is complete. \end{proof} \begin{notation} \label{nota_generic} Let $\{x_0,\ldots,x_s\}$ be a basis of the $\mathbb{k}$-vector space $[R]_{{\normalfont\mathbf{e}}_1}$. Consider a purely transcendental field extension $\mathbb{L} := \mathbb{k}(z_0,\ldots,z_s)$ of $\mathbb{k}$, and set $R_\mathbb{L} := R \otimes_\mathbb{k} \mathbb{L}$ and $X_\mathbb{L} := X \otimes_\mathbb{k} \mathbb{L} = \normalfont\text{MultiProj}\left(R_\mathbb{L}\right) \subseteq {\normalfont\mathbb{P}} \otimes_\mathbb{k} \mathbb{L} = {\normalfont\mathbb{P}}_\mathbb{L}^{m_1} \times_\mathbb{L} \cdots \times_\mathbb{L} {\normalfont\mathbb{P}}_\mathbb{L}^{m_p}$. 
We say that $z := z_0x_0 + \cdots + z_sx_s \in {\left[R_\mathbb{L}\right]}_{{\normalfont\mathbf{e}}_1}$ is the \emph{generic element} of ${\left[R_\mathbb{L}\right]}_{{\normalfont\mathbf{e}}_1}$. \end{notation} In the following remark we explain that field extensions as in \autoref{nota_generic} preserve the domain assumption. \begin{remark} \label{rem_infinite_field} Suppose that $R$ is a domain and consider a purely transcendental field extension $\mathbb{k}(\xi)$. Then, $R \otimes_\mathbb{k} \mathbb{k}(\xi)$ is also a domain; indeed, one can see that $R \otimes_\mathbb{k} \mathbb{k}(\xi)$ is a subring of the field of fractions $\normalfont\text{Quot}(R[\xi])$ of the polynomial ring $R[\xi]$. So, when $R$ is a domain one can extend $\mathbb{k}$ to an infinite field without losing the assumption that $R$ is a domain. \end{remark} The lemma below shows that the Hilbert function modulo a generic element coincides with the one modulo a general element. \begin{lemma} \label{lem_generic} Assume \autoref{nota_generic} with $\mathbb{k}$ being an infinite field. Let $x \in {[R]}_{{\normalfont\mathbf{e}}_1}$ be a general element. Then $$ \dim_\mathbb{k}\big(\left[R/xR\right]_\nu\big) = \dim_\mathbb{L}\big(\left[R_\mathbb{L}/zR_\mathbb{L}\right]_\nu\big) $$ for all $\nu \in \normalfont\mathbb{N}^p$. \end{lemma} \begin{proof} Let $T$ be the polynomial ring $T = \mathbb{k}[z_0,\ldots,z_s]$ and consider the finitely generated $T$-algebra given by $S = \left(R \otimes_\mathbb{k} T\right)/w\left(R \otimes_\mathbb{k} T\right)$, where $w = z_0x_0+\cdots+z_sx_s \in R \otimes_\mathbb{k} T$. By Grothendieck's Generic Freeness Lemma (see, e.g., \cite[Theorem 24.1]{MATSUMURA}, \cite[Theorem 14.4]{EISEN_COMM}), there exists an element $0 \neq a \in T$ such that $S_a$ is a free $T_a$-module.
Hence, for any ${\normalfont\mathfrak{p}} \in \normalfont\text{Spec}(T)$ inside the dense open subset $D(a) \subset \normalfont\text{Spec}(T)$, if $k({\normalfont\mathfrak{p}})$ denotes the residue field $k({\normalfont\mathfrak{p}}) = T_{\normalfont\mathfrak{p}}/{\normalfont\mathfrak{p}} T_{\normalfont\mathfrak{p}}$ of $T_{\normalfont\mathfrak{p}}$, one has that $$ \dim_{k({\normalfont\mathfrak{p}})}\big(\left[S_a\otimes_{T_a} k({\normalfont\mathfrak{p}})\right]_\nu\big) = \dim_{\normalfont\text{Quot}(T)}\big(\left[S_a\otimes_{T_a} \normalfont\text{Quot}(T)\right]_\nu\big) = \dim_\mathbb{L}\big(\left[R_\mathbb{L}/zR_\mathbb{L}\right]_\nu\big) $$ for all $\nu \in \normalfont\mathbb{N}^p$. Note that for any $\beta = (\beta_0,\ldots,\beta_s) \in \mathbb{k}^{s+1}$ with ${\normalfont\mathfrak{p}}_\beta=(z_0-\beta_0,\ldots,z_s-\beta_s) \in D(a)$ one has the isomorphisms $$ S_a\otimes_{T_a} k({\normalfont\mathfrak{p}}_\beta) \;\cong\; \frac{R\otimes_\mathbb{k} T}{\left(z_0x_0+\cdots+z_sx_s, z_0-\beta_0,\ldots, z_s-\beta_s\right)} \;\cong\; R / \left(\beta_0x_0+\cdots+\beta_sx_s\right)R. $$ So, the result follows. \end{proof} We now obtain \autoref{thmA} when $X$ is an irreducible scheme. \begin{remark} We first make some general remarks on the proof of \autoref{thm_main_irreducible} below and explain where the irreducibility assumption comes into play. The proof is achieved by iteratively cutting with generic hyperplanes (following \autoref{nota_generic}) to arrive at a zero-dimensional situation, and the main constraint is to control the dimension of the image of all the possible projections after cutting with a general hyperplane (see \autoref{eq_induction_step_main}). Our main tool to control those dimensions is \autoref{thm_projections}, where we need to assume that $R$ is a domain.
When $X$ is irreducible, by just taking the reduced scheme structure $X_\text{red} = \normalfont\text{MultiProj}(R/\sqrt{0})$ we can easily reduce to the case where $R$ is a domain. To maintain the irreducibility assumption during the inductive process, we use a ``generic'' version of Bertini's Theorem as presented in \cite[Proposition 1.5.10]{FLENNER_O_CARROLL_VOGEL}. It should be noted that the usual versions of Bertini's Theorem for irreducibility require $X$ to be geometrically irreducible and the dimension of the image of a certain morphism to be at least two (see \cite[Theoreme 6.10, Corollaire 6.11]{JOUANOLOU_BERTINI}). Finally, \autoref{lem_generic} is used to relate the process of cutting with a \emph{generic} hyperplane to that of cutting with a \emph{general} hyperplane. \end{remark} \begin{theorem} \label{thm_main_irreducible} Assume \autoref{setup_initial}. Suppose that $X$ is irreducible. Let ${\normalfont\mathbf{n}}=(n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ be such that $\lvert {\normalfont\mathbf{n}} \rvert=r$. Then, $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X) > 0$ if and only if for each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$ the inequality $ n_{j_1} + \cdots + n_{j_k} \le r({\mathfrak{J}}) $ holds. \end{theorem} \begin{proof} From \autoref{thm_deg_Hilb_pol} it is clear that the inequalities $n_{j_1} + \cdots + n_{j_k} \le r({\mathfrak{J}})$ are a necessary condition for $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X)=e({\normalfont\mathbf{n}};R) >0$. Therefore, it suffices to show that they are also sufficient. Assume that $ n_{j_1} + \cdots + n_{j_k} \le r({\mathfrak{J}}) $ for every ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$. We may also assume that $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ by \autoref{modSat}(i). Hence, the condition of $X$ being irreducible implies that $\sqrt{0} \subset R$ is a prime ideal.
Since the associativity formula for mixed multiplicities (see, e.g., \cite[Lemma 2.7]{MIXED_MULT}) yields that $$ e({\normalfont\mathbf{n}};R)=\text{length}_{R_{\sqrt{0}}}\left(R_{\sqrt{0}}\right)\cdot e\left({\normalfont\mathbf{n}};R/{\sqrt{0}}\right), $$ we can assume that $R$ is a domain, and we do so. In addition, by \autoref{modSat}(ii), \autoref{thm_deg_Hilb_pol}, and \autoref{rem_infinite_field} we may also assume that $\mathbb{k}$ is an infinite field. We proceed by induction on $r$. If $r = 0$, then \cite[Theorem 3.10]{MIXED_MULT} implies $e(\mathbf{0};R)>0$. Suppose now that $r \ge 1$. Without any loss of generality, perhaps after changing the grading, we can assume that $n_1 \ge 1$. Let $\mathbb{L}$, $R_\mathbb{L}$, $X_\mathbb{L}$ and $z$ be defined as in \autoref{nota_generic}. Let $x \in [R]_{{\normalfont\mathbf{e}}_1}$ be a general element. Set $S = R/xR$, $Z = X \cap V(x) = \normalfont\text{MultiProj}(S)$, $T = R_\mathbb{L}/zR_\mathbb{L}$, $W = X_\mathbb{L} \cap V(z) = \normalfont\text{MultiProj}(T)$ and ${\normalfont\mathbf{n}}'={\normalfont\mathbf{n}}-{\normalfont\mathbf{e}}_1$. Then, \cite[Lemma 3.9]{MIXED_MULT} and \autoref{lem_generic} yield that $e({\normalfont\mathbf{n}};R)=e({\normalfont\mathbf{n}}';S)=e({\normalfont\mathbf{n}}';T)$. From \cite[Proposition 1.5.10]{FLENNER_O_CARROLL_VOGEL} we obtain that $W$ is also an irreducible scheme. By the assumed inequalities and because $n_1\ge 1$ we have that for each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$ the following inequality holds \begin{align} \label{eq_induction_step_main} n_{j_1}' + \cdots + n_{j_k}' \le \min\Big\{ r({\mathfrak{J}}),\; r\big({\mathfrak{J}} \cup \{1\}\big)-1\Big\}, \end{align} and the latter is equal to $\dim(\Pi_{\mathfrak{J}}(Z))$ by \autoref{thm_projections}. 
Moreover, by \autoref{lem_generic} and \autoref{thm_deg_Hilb_pol}, we also have $ \dim(\Pi_{\mathfrak{J}}(W))=\dim(\Pi_{\mathfrak{J}}(Z))$; here, by an abuse of notation, $\Pi_{\mathfrak{J}}(W)$ denotes the image of the natural projection $ \Pi_{\mathfrak{J}}: {\normalfont\mathbb{P}} \otimes_\mathbb{k} \mathbb{L} \rightarrow {\normalfont\mathbb{P}}_\mathbb{L}^{m_{j_1}} \times_\mathbb{L} \cdots \times_\mathbb{L} {\normalfont\mathbb{P}}_\mathbb{L}^{m_{j_k}} $ restricted to $W$. Finally, by using the inductive hypothesis applied to the irreducible scheme $W$, we obtain that $e({\normalfont\mathbf{n}};R)=e({\normalfont\mathbf{n}}';T) > 0$, and so the result follows. \end{proof} Now we are ready to show the general version of \autoref{thmA}. \begin{corollary} \label{thm_main} Assume \autoref{setup_initial}. Let ${\normalfont\mathbf{n}}=(n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ be such that $\lvert {\normalfont\mathbf{n}} \rvert=\dim(X)$. Then, $\deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X) > 0$ if and only if there is an irreducible component $Y \subseteq X$ of $X$ that satisfies the following two conditions: \begin{enumerate}[\rm (a)] \item $\dim(Y) = \dim(X)$. \item For each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$ the inequality $ n_{j_1} + \cdots + n_{j_k} \le \dim\big(\Pi_{\mathfrak{J}}(Y)\big) $ holds. \end{enumerate} \end{corollary} \begin{proof} We may assume that $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ by \autoref{modSat}(i). By the associativity formula for mixed multiplicities (see, e.g., \cite[Lemma 2.7]{MIXED_MULT}) we get the equation $$ \deg_{\normalfont\mathbb{P}}^{\normalfont\mathbf{n}}(X) \;=\; e({\normalfont\mathbf{n}};R) \;=\; \sum_{\substack{{\mathfrak{P}} \in {\normalfont\text{Min}}(R)\\ \dim(R/{\mathfrak{P}}) = r+p}} \text{length}_{R_{\mathfrak{P}}}(R_{\mathfrak{P}})\cdot e({\normalfont\mathbf{n}}; R/{\mathfrak{P}}).
$$ Thus, $e({\normalfont\mathbf{n}};R) > 0$ if and only if $e({\normalfont\mathbf{n}};R/{\mathfrak{P}}) > 0$ for some minimal prime ${\mathfrak{P}} \in {\normalfont\text{Min}}(R)$ of maximal dimension. So, the result is clear from \autoref{thm_main_irreducible}. \end{proof} We now give a proof of \autoref{corB}. \begin{corollary} \label{cor_positive_mixed_mult} Assume \autoref{setup_Artinian}. Let ${\normalfont\mathbf{n}}=(n_1,\ldots,n_p) \in \normalfont\mathbb{N}^p$ be such that $\dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right) - p = \lvert {\normalfont\mathbf{n}} \rvert$. Then, $e({\normalfont\mathbf{n}};R) > 0$ if and only if there is a minimal prime ideal ${\mathfrak{P}} \in {\normalfont\text{Min}}\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)$ of $\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)$ that satisfies the following two conditions: \begin{enumerate}[\rm (a)] \item $\dim\left(R/{\mathfrak{P}}\right) = \dim\left(R/\left(0:_R{\normalfont\mathfrak{N}}^\infty\right)\right)$. \item For each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$ the inequality $ n_{j_1} + \cdots + n_{j_k} \le \dim\left(\frac{R}{{\mathfrak{P}}+\sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_j}\right) - k $ holds. \end{enumerate} \end{corollary} \begin{proof} As in \autoref{thm_main}, after assuming that $(0:_R{\normalfont\mathfrak{N}}^\infty)=0$ and using the associativity formula for mixed multiplicities, we obtain that $e({\normalfont\mathbf{n}};R) > 0$ if and only if $e({\normalfont\mathbf{n}};R/{\mathfrak{P}}) > 0$ for some minimal prime ${\mathfrak{P}} \in {\normalfont\text{Min}}(R)$ of maximal dimension. Note that, for each ${\mathfrak{P}} \in {\normalfont\text{Min}}(R)$, $R/{\mathfrak{P}}$ is naturally a finitely generated standard $\normalfont\mathbb{N}^p$-graded algebra over a field. So, the result follows by using \autoref{thm_main_irreducible}.
\end{proof} Finally, for the sake of completeness, we provide a brief discussion on how \autoref{thm_main_irreducible} can be recovered (over the complex numbers) from the related results of \cite[\S 2.2]{KAVEH_KHOVANSKII}. \begin{remark} \label{rem_Kaveh_Kho} Assume $\mathbb{k} = \mathbb{C}$. For the closed subscheme $X \subset {\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$, let $L_i$ be the pullback of $\mathscr{O}_{{\normalfont\mathbb{P}}_\mathbb{k}^{m_i}}(1)$ to $X$. Take $E_i$ to be $|L_i|$. Following the notation in \cite[\S 2.2]{KAVEH_KHOVANSKII}, for each $\emptyset\neq {\mathfrak{J}}\subseteq [p]$, denote by $$ \Phi_{\mathfrak{J}} \,:\, X \rightarrow {\normalfont\mathbb{P}}\left(E_{\mathfrak{J}}^{\vee}\right) $$ the Kodaira map corresponding to the linear system $E_{\mathfrak{J}}$. Let $\tau_{\mathfrak{J}}$ be the dimension of the closure of the image of $\Phi_{\mathfrak{J}}$ (\cite[Definition 2.12]{KAVEH_KHOVANSKII}). Consequently, it is easy to check that $\dim\left(\Pi_{{\mathfrak{J}}}(X)\right)=\tau_{{\mathfrak{J}}}$. Thus, \cite[Theorems 2.14, 2.19]{KAVEH_KHOVANSKII} translate into the following statement: $\dim\left(\Pi_{{\mathfrak{J}}}(X)\right) \ge |{\mathfrak{J}}|$ if and only if for general hyperplanes $H_j\in |\Pi_j^*\mathscr{O}_{{\normalfont\mathbb{P}}_\mathbb{k}^{m_j}}(1)|$ ($j\in{\mathfrak{J}}$), $X\cap(\bigcap_{j\in{\mathfrak{J}}}H_j)\neq\emptyset$. The latter is equivalent to the condition $[X]\cdot \prod_{j\in{\mathfrak{J}}} [H_j]\neq 0$ on intersections of classes. \autoref{thm_main_irreducible} (over the complex numbers) eventually follows from applying this statement finitely many times to relevant index subsets ${\mathfrak{J}}$. \end{remark} \section{Positivity of the mixed multiplicities of ideals}\label{sec_mix_ideals} In this section, we characterize the positivity of the mixed multiplicities of ideals.
The results obtained here are a consequence of applying \autoref{corB} to a certain multigraded algebra. For the particular case of ideals generated in one degree in graded domains we obtain a neat characterization in \autoref{thm_equigen_ideals}. Throughout this section we use the following setup. \begin{setup} \label{setup_mixed_mult_ideals} Let $R$ be a Noetherian local ring with maximal ideal ${\normalfont\mathfrak{m}} \subset R$ (or a finitely generated standard graded algebra over a field $\mathbb{k}$ with graded irrelevant ideal ${\normalfont\mathfrak{m}} \subset R$). \end{setup} Let $J_0 \subset R$ be an ${\normalfont\mathfrak{m}}$-primary ideal and $J_1,\ldots,J_p \subset R$ be arbitrary ideals (homogeneous in the graded case). The \textit{multi-Rees algebra} of the ideals $J_0,J_1,\ldots,J_p$ is given by $$ \mathcal{R}(J_0,\ldots,J_p) \;:=\; R[J_0t_0,\ldots,J_pt_p] \;=\; \bigoplus_{i_0 \ge 0, \ldots,i_p \ge 0} J_0^{i_0}\cdots J_p^{i_p} t_0^{i_0}\cdots t_p^{i_p} \;\subset\; R[t_0,\ldots,t_p], $$ where $t_0,\ldots,t_p$ are new variables. Note that $\mathcal{R}(J_0,\ldots,J_p)$ is naturally a standard $\normalfont\mathbb{N}^{p+1}$-graded algebra and that, for $0 \le k \le p$, the ideal ${\normalfont\mathfrak{m}}_k$ generated by elements of degree ${\normalfont\mathbf{e}}_k$ is given by $$ {\normalfont\mathfrak{m}}_k \;:=\; J_kt_k \, \mathcal{R}(J_0,\ldots,J_p) \; \subset \; \mathcal{R}(J_0,\ldots,J_p). $$ Let ${\normalfont\mathfrak{N}}:={\normalfont\mathfrak{m}}_0 \cap \cdots \cap {\normalfont\mathfrak{m}}_p$ be the corresponding multigraded irrelevant ideal. Since $J_0$ is ${\normalfont\mathfrak{m}}$-primary, we obtain that $$ T(J_0\mid J_1,\ldots,J_p) \;:=\; \mathcal{R}(J_0,\ldots,J_p) \otimes_R R/J_0 \;=\; \bigoplus_{i_0 \ge 0, i_1 \ge 0, \ldots, i_p \ge 0} J_0^{i_0}J_1^{i_1}\cdots J_p^{i_p} \big/ J_0^{i_0+1}J_1^{i_1}\cdots J_p^{i_p} $$ is a finitely generated standard $\normalfont\mathbb{N}^{p+1}$-graded algebra over the Artinian local ring $R/J_0$.
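As a sanity check on the grading of $T(J_0\mid J_1,\ldots,J_p)$, the following toy computation can be carried out by hand (this illustration is ours; the choice of $R$, $J_0$, $J_1$ is not from the text).

```latex
% Toy example: R = k[x,y], J_0 = m = (x,y), J_1 = (x), so p = 1.
% Since R is a domain, multiplication by x^{i_1} is injective, hence
\[
  \big[T(J_0 \mid J_1)\big]_{(i_0,i_1)}
    \;=\; \mathfrak{m}^{i_0} x^{i_1} \big/ \mathfrak{m}^{i_0+1} x^{i_1}
    \;\cong\; \mathfrak{m}^{i_0} \big/ \mathfrak{m}^{i_0+1},
  \qquad
  \dim_{\Bbbk} \big[T\big]_{(i_0,i_1)} \;=\; i_0 + 1,
\]
% a polynomial in (i_0,i_1) of total degree 1 = dim(R) - 1, so the
% Hilbert polynomial of T has degree 1 in this case.
```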
For simplicity of notation, throughout this section we fix $\mathcal{R} := \mathcal{R}(J_0,\ldots,J_p)$ and $T := T(J_0\mid J_1,\ldots,J_p)$. Let $r$ be the integer $ r := \dim\left(\normalfont\text{MultiProj}\left(T\right)\right), $ which coincides with the degree of the Hilbert polynomial $P_{T}(u_1,\ldots,u_{p+1})$ of the $\normalfont\mathbb{N}^{p+1}$-graded $R/J_0$-algebra $T$. From \cite[Theorem 1.2(a)]{TRUNG_VERMA_MIXED_VOL} we have the equality $r = \dim\left(R/(0:_R(J_1\cdots J_p)^\infty)\right)-1$. \begin{definition} With the above notation, for each ${\normalfont\mathbf{n}} \in \normalfont\mathbb{N}^{p+1}$ with $\lvert {\normalfont\mathbf{n}} \rvert = r$, we say that $$ e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right) \;:= \; e\left({\normalfont\mathbf{n}}; T\right) $$ is the \emph{mixed multiplicity of $J_0,J_1,\ldots,J_p$ of type ${\normalfont\mathbf{n}}$}. \end{definition} The main focus in this section is to characterize when $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right) > 0$. As a direct consequence of \autoref{corB} we get the following general criterion for the positivity of $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right)$. \begin{corollary} \label{cor_mixed_mult_ideals} Assume \autoref{setup_mixed_mult_ideals} and the notations above. Let ${\normalfont\mathbf{n}}=(n_0,n_1,\ldots,n_p) \in \normalfont\mathbb{N}^{p+1}$ such that $\lvert {\normalfont\mathbf{n}} \rvert = r$. Then, $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right) > 0$ if and only if there is a minimal prime ideal ${\mathfrak{P}} \in {\normalfont\text{Min}}\left(0:_T{\normalfont\mathfrak{N}}^\infty\right)$ of $\left(0:_T{\normalfont\mathfrak{N}}^\infty\right)$ that satisfies the following two conditions: \begin{enumerate}[\rm (a)] \item $\dim\left(T/{\mathfrak{P}}\right) = \dim\left(T/\left(0:_T{\normalfont\mathfrak{N}}^\infty\right)\right)$.
\item For each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq \{0\} \cup [p]$ the inequality $ n_{j_1} + \cdots + n_{j_k} \le \dim\left(\frac{T}{{\mathfrak{P}}+\sum_{j \not\in {\mathfrak{J}}} {\normalfont\mathfrak{m}}_jT}\right) - k $ holds. \end{enumerate} \end{corollary} We now focus on the case where $R$ is a graded $\mathbb{k}$-domain and each ideal $J_i$ is generated in one degree. In this case, our characterization depends on the \emph{analytic spread} of certain ideals; recall that the analytic spread of an ideal $I \subset R$ is given by $\ell(I) := \dim\Big(\mathcal{R}(I)/{\normalfont\mathfrak{m}}\mathcal{R}(I)\Big)$. \begin{theorem} \label{thm_equigen_ideals} Let $R$ be a finitely generated standard graded domain over a field $\mathbb{k}$ with graded irrelevant ideal ${\normalfont\mathfrak{m}} \subset R$. Let $J_0 \subset R$ be an ${\normalfont\mathfrak{m}}$-primary ideal and $J_1,\ldots,J_p \subset R$ be arbitrary ideals. Suppose that, for each $0 \le i \le p$, $J_i$ is generated by homogeneous elements of the same degree $d_i>0$. Let ${\normalfont\mathbf{n}}=(n_0,n_1,\ldots,n_p) \in \normalfont\mathbb{N}^{p+1}$ such that $\lvert {\normalfont\mathbf{n}} \rvert = \dim(R)-1$. Then, $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right) > 0$ if and only if for each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$ the inequality $$ n_{j_1} + \cdots + n_{j_k} \;\le\; \ell\big(J_{j_1}\cdots J_{j_k}\big) - 1 $$ holds. \end{theorem} \begin{proof} First, note that $r = \dim(R)-1$. Since $J_0$ is ${\normalfont\mathfrak{m}}$-primary, the kernel of the canonical map $T \twoheadrightarrow T' := \mathcal{R} \otimes_R R/{\normalfont\mathfrak{m}}$ is nilpotent. Therefore, the conditions (a),(b) in \autoref{cor_mixed_mult_ideals} are satisfied for $T$ if and only if they are satisfied for $T'$.
Consider the $\normalfont\mathbb{N}^{p+1}$-graded domain given by $$ \normalfont\mathcal{F} \;:=\; \bigoplus_{i_0 \ge 0, \ldots,i_p \ge 0} \left[J_0^{i_0}\right]_{i_0d_0}\left[J_1^{i_1}\right]_{i_1d_1}\cdots \left[J_p^{i_p}\right]_{i_pd_p} t_0^{i_0}t_1^{i_1}\cdots t_p^{i_p} \; \subset \; \mathcal{R}. $$ Since ${[J_0^{i_0}]}_{i_0d_0}{[J_1^{i_1}]}_{i_1d_1}\cdots {[J_p^{i_p}]}_{i_pd_p} \cong J_0^{i_0}J_1^{i_1}\cdots J_p^{i_p} \otimes_R R/{\normalfont\mathfrak{m}}$, we have the isomorphism $T' \cong \normalfont\mathcal{F}$ and so $T'$ is a domain. For any ${\mathfrak{K}} = \{h_1,\ldots,h_s\} \subseteq \{0\} \cup [p]$, since we have the natural isomorphism $ \mathcal{R}\big/\sum_{h \not\in {\mathfrak{K}}}{\normalfont\mathfrak{m}}_h \;\cong \; \mathcal{R}(J_{h_1},\ldots,J_{h_s}) = R[J_{h_1}t_{h_1},\ldots,J_{h_s}t_{h_s}], $ it follows that $$ \dim\big(T'\big/\sum_{h \not\in {\mathfrak{K}}}{\normalfont\mathfrak{m}}_hT'\big) = \dim\big(\mathcal{R}(J_{h_1},\ldots,J_{h_s}) \otimes_R R/{\normalfont\mathfrak{m}}\big). $$ After using the Segre embedding we get the isomorphism $\normalfont\text{MultiProj}\big(\mathcal{R}(J_{h_1},\ldots,J_{h_s}) \otimes_R R/{\normalfont\mathfrak{m}}\big) \cong \normalfont\text{Proj}\big(\mathcal{R}(J_{h_1}\cdots J_{h_s}) \otimes_R R/{\normalfont\mathfrak{m}}\big)$ and, accordingly, from \autoref{lem_basics_projections}(i) we have $$ \dim\big(\mathcal{R}(J_{h_1},\ldots,J_{h_s}) \otimes_R R/{\normalfont\mathfrak{m}}\big) = \dim\big(\mathcal{R}(J_{h_1}\cdots J_{h_s}) \otimes_R R/{\normalfont\mathfrak{m}}\big) +s-1 = \ell(J_{h_1}\cdots J_{h_s})+s-1, $$ (also, see \cite[Corollary 3.10]{bivia2020analytic}). So, $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right) > 0$ if and only if for each ${\mathfrak{K}} = \{h_1,\ldots,h_s\} \subseteq \{0\} \cup [p]$ the inequality $ n_{h_1} + \cdots + n_{h_s} \;\le\; \dim(T'/\sum_{h \not\in {\mathfrak{K}}}{\normalfont\mathfrak{m}}_hT') - s = \ell(J_{h_1}\cdots J_{h_s})-1 $ holds.
For any ${\mathfrak{K}} = \{0,h_2,\ldots,h_s\} \subseteq \{0\} \cup [p]$, as $J_0$ is ${\normalfont\mathfrak{m}}$-primary, from \cite[Theorem 5.1.4, Proposition 5.1.6]{huneke2006integral} we obtain \begin{align*} \dim\big(T'\big/\sum_{h \not\in {\mathfrak{K}}}{\normalfont\mathfrak{m}}_hT'\big) = \dim\big(\mathcal{R}(J_0,J_{h_2},\ldots,J_{h_s}) \otimes_R R/J_0\big) &= \dim\Big({\normalfont\text{gr}}_{_{J_0\mathcal{R}(J_{h_2},\ldots,J_{h_s})}}\big(\mathcal{R}(J_{h_2},\ldots,J_{h_s})\big) \Big)\\ &=\dim(R) + s -1. \end{align*} Therefore, we only need to check the inequalities corresponding to the subsets ${\mathfrak{J}} \subseteq [p]$, and so the result follows. \end{proof} \begin{remark} Note that in \autoref{thm_equigen_ideals} the conditions for the positivity of $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right)$ do not involve the ${\normalfont\mathfrak{m}}$-primary ideal $J_0$ (see \cite[Corollary 1.8(a)]{TRUNG_VERMA_MIXED_VOL}). \end{remark} \begin{remark} We note that if in \autoref{thm_equigen_ideals} we have $\ell(J_i)=\dim(R)$ for every $1\le i\le p$, then by \cite[Lemma 4.7]{hyry2002cohen} for each ${\mathfrak{J}} = \{j_1,\ldots,j_k\} \subseteq [p]$ we also have $ \ell\big(J_{j_1}\cdots J_{j_k}\big)=\dim(R)$. Therefore, by \autoref{thm_equigen_ideals} it follows that $e_{\normalfont\mathbf{n}}\left(J_0\mid J_1,\ldots,J_p\right) > 0$ for every ${\normalfont\mathbf{n}} \in \normalfont\mathbb{N}^{p+1}$ such that $\lvert {\normalfont\mathbf{n}} \rvert = \dim(R)-1$. \end{remark} \section{Polymatroids}\label{sec_polym} We recall that \autoref{thmA} implies that $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ (see \autoref{defMsupp}) is the set of integer points in a polytope when $X$ is irreducible. In this section we explore properties of these discrete sets.
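Before turning to varieties, the defining properties of a linear polymatroid rank function, ${\mathfrak{J}} \mapsto \dim_\mathbb{k}\big(\sum_{i\in{\mathfrak{J}}}V_i\big)$, are easy to experiment with directly. The following sketch is our own illustration (not part of the paper's argument; the three subspaces of $\mathbb{Q}^4$ are an arbitrary choice); it checks monotonicity and submodularity exhaustively using exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations

def mat_rank(rows):
    """Rank of a rational matrix (list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, col, ncols = 0, 0, len(rows[0])
    while rank < len(m) and col < ncols:
        pivot = next((r for r in range(rank, len(m)) if m[r][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rank + 1, len(m)):
            f = m[r][col] / m[rank][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
        col += 1
    return rank

# Spanning vectors (rows) for three subspaces of Q^4 -- an arbitrary choice.
V = {1: [[1, 0, 0, 0], [0, 1, 0, 0]],   # a plane
     2: [[0, 1, 0, 0], [0, 0, 1, 0]],   # a plane meeting V_1 in a line
     3: [[0, 0, 0, 1]]}                 # a line

def rank_fn(J):
    """r(J) = dim of the sum of the subspaces V_i with i in J."""
    rows = [row for i in sorted(J) for row in V[i]]
    return mat_rank(rows) if rows else 0

subsets = [frozenset(c) for k in range(4) for c in combinations((1, 2, 3), k)]
for A in subsets:
    for B in subsets:
        assert rank_fn(A) + rank_fn(B) >= rank_fn(A | B) + rank_fn(A & B)  # submodular
        if A <= B:
            assert rank_fn(A) <= rank_fn(B)                               # monotone
print(rank_fn({1, 2}))  # dim(V_1 + V_2) = 3
```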
Following standard notations, we say that $X$ is a \emph{variety} over $\mathbb{k}$ if $X$ is a reduced and irreducible separated scheme of finite type over $\mathbb{k}$ (see, e.g., \cite[\href{https://stacks.math.columbia.edu/tag/020C}{Tag 020C}]{stacks-project}). In the following two results we connect the theory of polymatroids (see \autoref{sub_Poly}) with $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ when $X$ is a variety. \begin{proposition}\label{prop_poly} Let $X\subseteq {\normalfont\mathbb{P}}={\normalfont\mathbb{P}}_{\mathbb{k}}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_p}$ be a multiprojective variety over an arbitrary field $\mathbb{k}$. Then $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ is a discrete algebraic polymatroid over $\mathbb{k}$. \end{proposition} \begin{proof} We consider the associated $\normalfont\mathbb{N}^p$-graded $\mathbb{k}$-domain $R$. Let $\xi$ be the generic point of $X$ and set $\mathbb{L}:=\mathcal{O}_{X, \xi}$. For each ${\mathfrak{J}} \subseteq [p]$, let $X_{\mathfrak{J}} = \Pi_{\mathfrak{J}}(X) = \normalfont\text{MultiProj}\left(R_{({\mathfrak{J}})}\right)$ and $\xi_{\mathfrak{J}}$ be the generic point of $X_{\mathfrak{J}}$, and notice that $$ \mathcal{O}_{X_{\mathfrak{J}}, \xi_{\mathfrak{J}}} = \big\{f/g \mid f,g \in R_{({\mathfrak{J}})}, \;g \neq 0, \; \deg(f)=\deg(g) \big\} \;\subseteq\; \mathbb{L}. $$ For $1\le i \le p$, let $\mathbb{L}_i := \mathcal{O}_{X_i, \xi_i}$. Then, for each ${\mathfrak{J}} \subseteq [p]$, we have that $\mathcal{O}_{X_{\mathfrak{J}}, \xi_{\mathfrak{J}}} = \bigwedge_{j\in {\mathfrak{J}}} \mathbb{L}_j$ and that $\dim\left(X_{\mathfrak{J}}\right) = {\rm trdeg}_\mathbb{k}\left(\mathcal{O}_{X_{\mathfrak{J}}, \xi_{\mathfrak{J}}}\right)$ (see, e.g., \cite[Theorem 5.22]{GORTZ_WEDHORN}, \cite[Exercise II.3.20]{HARTSHORNE}). Finally, the result follows from \autoref{thmA}. 
\end{proof} In \cite[Corollary 2.8]{TRUNG_POSITIVE} it is shown that the conclusion of \autoref{thm_main_irreducible} holds if $p=2$ and $X$ is arithmetically Cohen-Macaulay. The following example shows that this result does not always hold for $p> 2$. \begin{example}\label{ex_cm} Let $S=\mathbb{k}[x_1,\ldots,x_{12},y_1,\ldots,y_{12}]$ be a polynomial ring with an $\normalfont\mathbb{N}^{12}$-grading induced by $\deg(x_i)=\deg(y_i)={\normalfont\mathbf{e}}_i$ for $1\le i\le 12$. Let $\Delta$ be the simplicial complex given by the boundary of the icosahedron. We note that $\Delta$ is a Cohen-Macaulay complex (because it is a triangulation of the sphere $\mathbb{S}^2$ \cite[Corollary II.4.4]{STANLEY}), but it is not a (poly)matroid (see \autoref{rem:matroid}) since not every restriction is pure \cite[Proposition III.3.1]{STANLEY}. Let $J_\Delta=\big((x_{i_1}+y_{i_1})\cdots (x_{i_k}+y_{i_k})\mid \{i_1,\ldots,i_k\}\notin\Delta\big)$ be the ideal generated by these products and let $X_\Delta=\normalfont\text{MultiProj}(S/J_{\Delta})\subset {\normalfont\mathbb{P}}={\normalfont\mathbb{P}}_\mathbb{k}^1\times\cdots\times {\normalfont\mathbb{P}}_\mathbb{k}^1$. The definition of $J_\Delta$ is a modification of the definition of $I_\Delta$, the Stanley-Reisner ideal of $\Delta$ with monomials in the variables $\{x_1,\ldots,x_{12}\}$ \cite[Chapter 1]{MILLER_STURMFELS}, \cite[Chapter II]{STANLEY}. It can be easily verified that $I_\Delta$ is the initial ideal of $J_\Delta$ with respect to any elimination order with $\{x_1,\ldots,x_{12}\} \ge \{y_1,\ldots,y_{12}\}$. Since the ideal $J_\Delta$ is obtained from $I_\Delta$ by a linear change of variables, we have a primary decomposition similar to that of \cite[Theorem 1.7]{MILLER_STURMFELS}, so no component is supported on any coordinate subspace and thus $J_\Delta$ is saturated with respect to the irrelevant ideal of $S$. By \cite[Corollary 3.3.5]{HERZOG_HIBI_MONOMIALS}, $X_\Delta$ is arithmetically Cohen-Macaulay.
Moreover, since Hilbert functions are preserved by Gr\"obner degenerations, the multidegree of $X_{\Delta}$ coincides with $\mathcal{C}(S/I_\Delta;{\normalfont\mathbf{t}})$ (see \autoref{rem_Hilb_series}). Thus, $\normalfont\text{MSupp}_\mathbb{P}(X_\Delta)$ consists of all the incidence vectors of the facets of $\Delta$ \cite[Theorem 1.7]{MILLER_STURMFELS} and hence it is not a polymatroid. \end{example} With \autoref{prop_poly} in hand, we can introduce the following class of polymatroids. \begin{definition}\label{def_chow} A polymatroid $\mathcal{P}$ is \textit{Chow} over a field $\mathbb{k}$ if there exists a variety $X\subseteq {\normalfont\mathbb{P}}={\normalfont\mathbb{P}}_{\mathbb{k}}^{m_1} \times_\mathbb{k} \cdots \times_\mathbb{k} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_p}$ such that $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)=\mathcal{P}$. \end{definition} The following statement follows as an easy corollary of the main result in \cite{BINGLIN}, when $\mathbb{k}$ is an infinite field. Here we give a simple direct argument for an arbitrary field $\mathbb{k}$. \begin{proposition}\label{prop_linear} A linear polymatroid over an arbitrary field $\mathbb{k}$ is Chow over the same field. \end{proposition} \begin{proof} Let $V$ be a $\mathbb{k}$-vector space and $V_1,\ldots,V_p$ be arbitrary subspaces. Let $S$ be the polynomial ring $S = \normalfont\text{Sym}(V) = \mathbb{k}[x_1,\ldots,x_q]$, where $q = \dim_\mathbb{k}(V)$. By using the isomorphism $[S]_1 \cong V$, we identify each $V_i$ with a $\mathbb{k}$-subspace $U_i$ of $[S]_1$. For each $1 \le i \le p$, let $\{x_{i,1},x_{i,2},\ldots,x_{i,r_i}\} \subset [S]_1$ be a basis of the $\mathbb{k}$-vector space $U_i$. Let $T$ be the $\normalfont\mathbb{N}^p$-graded polynomial ring $$ T := \mathbb{k}\left[y_{i,j} \mid 1 \le i \le p, \,0 \le j \le r_i,\, \deg(y_{i,j}) = {\normalfont\mathbf{e}}_i\right].
$$ Endow $S[t_1,\ldots,t_p]$ with the $\normalfont\mathbb{N}^p$-grading given by $\deg(t_i) = {\normalfont\mathbf{e}}_i$ and $\deg(x_j) = \mathbf{0}$. Consider the $\normalfont\mathbb{N}^p$-graded $\mathbb{k}$-algebra homomorphism $$ \varphi \,:\, T \rightarrow S[t_1,\ldots,t_p], \qquad \begin{array}{l} y_{i,0} \mapsto t_i \quad\;\, \text{ for } \; 1 \le i \le p \\ y_{i,j} \mapsto x_{i,j}t_i \; \text{ for } \; 1 \le i \le p,\, 1 \le j \le r_i. \end{array} $$ Note that ${\mathfrak{P}} := \normalfont\text{Ker}(\varphi)$ is an $\normalfont\mathbb{N}^p$-graded prime ideal. Set $R:= T/{\mathfrak{P}}$ and $X := \normalfont\text{MultiProj}(R)$. By construction, for each ${\mathfrak{J}} \subseteq [p]$, we obtain the isomorphism $$ R_{({\mathfrak{J}})} \;\cong\; \mathbb{k}\left[x_{i,j}t_i \mid i \in {\mathfrak{J}},\, 1 \le j \le r_i\right]\left[t_i\mid i \in {\mathfrak{J}}\right] \;\subset\; S[t_1,\ldots,t_p]; $$ thus, it is clear that \begin{align*} \dim\left(R_{({\mathfrak{J}})}\right) &= {\rm trdeg}_\mathbb{k}\big(\mathbb{k}\left[x_{i,j}t_i \mid i\in {\mathfrak{J}},\, 1 \le j \le r_i\right]\left[t_i\mid i \in {\mathfrak{J}}\right]\big)\\ &= {\rm trdeg}_\mathbb{k}\big(\mathbb{k}\left[x_{i,j}\mid i\in {\mathfrak{J}},\, 1 \le j \le r_i\right]\left[t_i\mid i \in {\mathfrak{J}}\right]\big)\\ &= \dim_\mathbb{k}\left(\sum_{i\in{\mathfrak{J}}} U_i\right) + \lvert {\mathfrak{J}} \rvert = \dim_\mathbb{k}\left(\sum_{i\in{\mathfrak{J}}} V_i\right) + \lvert {\mathfrak{J}} \rvert. \end{align*} Therefore, \autoref{lem_basics_projections} yields that $\dim\left(\Pi_{{\mathfrak{J}}}(X)\right)= \dim_\mathbb{k}\left(\sum_{i\in{\mathfrak{J}}} V_i\right)$, and so the result follows from \autoref{thmA}. \end{proof} The following is the main theorem of this section. Here we summarize the results presented above to show that the class of Chow polymatroids lies in between the ones introduced in \autoref{def_algebraic_polymatroid}.
\begin{theorem}\label{thm_classification} Over an arbitrary field $\mathbb{k}$, we have the following inclusions of families of polymatroids $$ \Big(\texttt{Linear polymatroids}\Big) \;\subseteq\; \Big(\texttt{Chow polymatroids}\Big) \;\subseteq\; \Big(\texttt{Algebraic polymatroids}\Big). $$ Moreover, when $\mathbb{k}$ is a field of characteristic zero, the three families coincide. \end{theorem} \begin{proof} The first inclusion follows from \autoref{prop_linear}; the second from \autoref{prop_poly}. In the characteristic zero case linear and algebraic polymatroids coincide by \cite[Corollary, Page 166]{INGLETON} (also, see \autoref{IngletonRem}). \end{proof} \begin{remark}\label{IngletonRem} The result mentioned above from \cite{INGLETON} is stated for matroids but the arguments go through unchanged for polymatroids. \end{remark} \begin{remark}\label{rem_Alg_not_line} Over finite fields there are algebraic matroids that are not linear. An example is the Non-Pappus matroid described in \cite[Page 517]{OXLEY}; it is algebraic over any field of positive characteristic but not linear over any field. \end{remark} Classifying linear polymatroid rank functions is a difficult problem. For linear matroids over a field of characteristic zero, the poetically titled \textit{``The missing axiom of matroid theory is lost forever''} \cite{AXIOM_LOST} together with a recent addition \cite{AXIOM_LOST_RECENT} show that there is no finite list of axioms that characterize which rank functions are linear. For fields of positive characteristic, Rota conjectured in 1971 that for each finite field there is a finite list of excluded minors. A proof of Rota's conjecture has been announced by Geelen, Gerards, and Whittle, but it is expected to be several hundred pages long. Little is known about the algebraic case. In \cite{NONALGEBRAIC} there is an example of a matroid that is not algebraic over any field: the Vamos matroid $V_8$ \cite[Page 511]{OXLEY}.
For these reasons we do not expect a further characterization of Chow polymatroids. We finish this section with the following question. \begin{question} Are all algebraic polymatroids Chow? \end{question} \section{Applications}\label{sec_comb} In this section we relate our results to several objects from combinatorial algebraic geometry. \subsection{Schubert polynomials} \label{subsect_Schubert} Let ${\mathfrak{S}}_p$ be the symmetric group on the set $[p]$. For every $i\in[p-1]$ we have the transposition $s_i:=(i,i+1)\in {\mathfrak{S}}_p$. Recall that the set $S=\{ s_i \mid 1\leq i< p \}$ generates ${\mathfrak{S}}_p$. The \emph{length} $l(\pi)$ of a permutation $\pi$ is the least number of elements of $S$ needed to obtain $\pi$. Alternatively, the length is equal to the number of \emph{inversions}, i.e., $l(\pi)=\#\{(i,j)\in[p]\times[p]:i<j,\; \pi(i)>\pi(j)\}$. The permutation $\pi_0=(p,p-1,\cdots,2,1)$ (in one-line notation) is the longest permutation; it has length $\frac{p(p-1)}{2}$. \begin{definition} The {\it Schubert polynomials} ${\mathfrak{S}}_\pi\in\normalfont\mathbb{Z}[t_1,\ldots,t_p]$ are defined recursively in the following way. First we define ${\mathfrak{S}}_{\pi_0}:=\prod_{i} t_i^{p-i}$, and for any permutation $\pi$ and transposition $s_i$ with $l(s_i\pi)<l(\pi)$ we let \[ {\mathfrak{S}}_{s_i\pi} = \dfrac{{\mathfrak{S}}_{\pi}-s_i{\mathfrak{S}}_{\pi}}{t_i-t_{i+1}}, \] where ${\mathfrak{S}}_{p}$ acts on $\normalfont\mathbb{Z}[t_1,\ldots,t_p]$ by permutation of variables. For more information see \cite[Chapter 10]{FULTON_TABLEAUX}. \end{definition} Next we define {\it matrix Schubert varieties} following \cite[Chapter 15]{MILLER_STURMFELS}. Let $\mathbb{k}$ be an algebraically closed field and $M_p(\mathbb{k})$ be the $\mathbb{k}$-vector space of $p\times p$ matrices with entries in $\mathbb{k}$. As an affine variety we define its coordinate ring as $R_p:=\mathbb{k}[x_{ij}:(i,j)\in[p]\times[p]]$.
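The divided-difference recursion above can be checked on small cases by machine. The sketch below is our own illustration (the dictionary encoding of polynomials is an implementation choice, not from the text): applying the operators $\partial_1$ and $\partial_2$ to ${\mathfrak{S}}_{\pi_0}=t_1^2t_2$ for $p=3$ produces $t_1t_2$ and $t_1^2$, the two Schubert polynomials one step down the recursion.

```python
from collections import defaultdict

def divided_difference(i, f):
    """d_i f = (f - s_i f)/(t_i - t_{i+1}); f is {exponent tuple: coeff}, i is 1-based."""
    out = defaultdict(int)
    for exp, c in f.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue                      # symmetric in t_i, t_{i+1}: contributes 0
        sign = 1 if a > b else -1
        lo, hi = min(a, b), max(a, b)
        # (x^a y^b - x^b y^a)/(x - y) = +/- x^lo y^lo * h_{hi-lo-1}(x, y)
        for k in range(hi - lo):
            e = list(exp)
            e[i - 1], e[i] = lo + k, hi - 1 - k
            out[tuple(e)] += sign * c
    return {e: c for e, c in out.items() if c}

S_pi0 = {(2, 1, 0): 1}                    # S_{pi_0} = t_1^2 t_2 for p = 3
d1 = divided_difference(1, S_pi0)         # (t_1^2 t_2 - t_2^2 t_1)/(t_1 - t_2) = t_1 t_2
d2 = divided_difference(2, S_pi0)         # (t_1^2 t_2 - t_1^2 t_3)/(t_2 - t_3) = t_1^2
print(d1, d2)
```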
Furthermore we consider an $\normalfont\mathbb{N}^p$-grading on $R_p$ by letting $\deg(x_{ij}) = {\normalfont\mathbf{e}}_i$. \begin{definition} Let $\pi$ be a permutation matrix. The matrix Schubert variety $\overline{X_\pi}\subset M_p(\mathbb{k})$ is the subvariety \[ \overline{X_\pi}=\{Z\in M_p(\mathbb{k})\mid \normalfont\text{rank}(Z_{m\times n})\leq \normalfont\text{rank}(\pi_{m\times n}) \; \text{ for all }\; m,n\}, \] where $Z_{m\times n}$ is the restriction to the first $m$ rows and $n$ columns. This is an irreducible variety and the prime ideal $I\left(\overline{X_\pi}\right)$ is multihomogeneous \cite[Theorem 15.31]{MILLER_STURMFELS}. By \cite[Theorem 15.40]{MILLER_STURMFELS}, the Schubert polynomial ${\mathfrak{S}}_\pi$ equals the multidegree polynomial of the variety corresponding to the ideal $I\left(\overline{X_\pi}\right)$ (see \autoref{defMsupp}). \end{definition} Following \cite{YONG} we say a polynomial $f=\sum_{{\normalfont\mathbf{n}}}c_{{\normalfont\mathbf{n}}}{\normalfont\mathbf{t}}^{\normalfont\mathbf{n}}\in \normalfont\mathbb{Z}[t_1,\cdots,t_p]$ has the Saturated Newton Polytope property (SNP for short) if $\text{supp}(f):=\{{\normalfont\mathbf{n}}\in\normalfont\mathbb{N}^p\mid c_{\normalfont\mathbf{n}}>0\}=\text{ConvexHull}\{{\normalfont\mathbf{n}}\in\normalfont\mathbb{N}^p\mid c_{\normalfont\mathbf{n}}>0\}\cap\normalfont\mathbb{N}^p$, in other words, if the support of $f$ consists of the integer points of a polytope. In \cite[Conjecture 5.5]{YONG} it was conjectured that the Schubert polynomials have the SNP property and they even conjectured a set of defining inequalities for the Newton polytope in \cite[Conjecture 5.13]{YONG}. A.~Fink, K.~M\'esz\'aros, and A.~St.~Dizier confirmed the full conjecture in \cite{FINK}.
As noted by the authors of \cite{huh2019logarithmic}, the combination of \autoref{prop_poly} (they use the equivalent \cite[Corollary 10.2]{LORENTZ}) and \cite[Theorem 15.40]{MILLER_STURMFELS} (which is also included in \cite[Theorem 6]{huh2019logarithmic}) is enough to give an alternative proof of \cite[Conjecture 5.5]{YONG}. \begin{theorem}\label{thm_Schubert} For any permutation $\pi$, the Schubert polynomial ${\mathfrak{S}}_\pi$ has SNP and its Newton polytope is a polymatroid polytope. \end{theorem} The Newton polytope of a polynomial $f$ is by definition the convex hull of the exponents in the support of $f$; however, by our convention in \autoref{defMsupp}, $\normalfont\text{MSupp}$ consists of the complementary exponents. This does not change the conclusion that the resulting polytope is a polymatroid polytope. \medskip {\bf Codimensions of projections.} We now use \autoref{thmA} to give a combinatorial interpretation for the codimensions of the natural projections of matrix Schubert varieties. First we need some terminology. A {\it diagram} $D$ is a subset of a $p\times p$ grid whose boxes are indexed by the set $[p]\times [p]$. The authors of \cite{YONG} define a function $\theta_D:2^{[p]}\to \normalfont\mathbb{Z}$ as follows: for a subset ${\mathfrak{J}}\subseteq [p]$ and $c\in [p]$, we construct a word $W_D^c({\mathfrak{J}})$ by reading the column $c$ of $[p]\times [p]$ from top to bottom and recording \begin{itemize} \item $($ if $(r,c)\notin D$ and $r\in {\mathfrak{J}}$, \item $)$ if $(r,c)\in D$ and $r\notin {\mathfrak{J}}$, \item $\star$ if $(r,c)\in D$ and $r\in {\mathfrak{J}}$; \end{itemize} let $\theta_D^c({\mathfrak{J}})=\#\textrm{ paired ``()'' in }W_D^c({\mathfrak{J}})+\#\star\textrm{ in } W_D^c({\mathfrak{J}})$, and finally $\theta_D({\mathfrak{J}})=\sum_{i=1}^p \theta_D^i({\mathfrak{J}})$.
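The quantity $\theta_D$ is straightforward to compute. The script below is our own illustration (not from the paper): it builds the Rothe diagram $D_\pi$ of $\pi=42531$ (Rothe diagrams are defined after \autoref{ex_diagram} below) and evaluates $\theta_{D_\pi}(\{2,3\})$, recovering the value $3$ of \autoref{ex_diagram}.

```python
def rothe_diagram(perm):
    """D_pi = {(i, j) : pi(i) > j and pi^{-1}(j) > i}  (1-based, one-line notation)."""
    p = len(perm)
    inv = {v: i for i, v in enumerate(perm, start=1)}   # pi^{-1}
    return {(i, j) for i in range(1, p + 1) for j in range(1, p + 1)
            if perm[i - 1] > j and inv[j] > i}

def theta(D, J, p):
    """theta_D(J) = sum over columns c of #paired '()' + #stars in the word W_D^c(J)."""
    total = 0
    for c in range(1, p + 1):
        word = []
        for r in range(1, p + 1):                       # read column c top to bottom
            if (r, c) not in D and r in J:
                word.append('(')
            elif (r, c) in D and r not in J:
                word.append(')')
            elif (r, c) in D and r in J:
                word.append('*')
        opens = pairs = 0
        for ch in word:                                 # match like parentheses
            if ch == '(':
                opens += 1
            elif ch == ')' and opens:
                opens -= 1
                pairs += 1
        total += pairs + word.count('*')
    return total

pi = (4, 2, 5, 3, 1)                                    # 42531 in one-line notation
D = rothe_diagram(pi)
print(len(D), theta(D, {2, 3}, 5))                      # 7 boxes = l(pi); theta = 3
```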
\begin{example}\label{ex_diagram} For example, let $D$ be the diagram depicted in \autoref{fig:diagram} and ${\mathfrak{J}}=\{2,3\}$; then $\theta_D({\mathfrak{J}})=3$. \begin{figure}[ht] \scalebox{.5}{ \begin{tikzpicture} \draw[dotted] (0,0)--(5,0)--(5,5)--(0,5)--(0,0); \draw[dotted] (1,0)--(1,5); \draw[dotted] (2,0)--(2,5); \draw[dotted] (3,0)--(3,5); \draw[dotted] (4,0)--(4,5); \draw[dotted] (0,1)--(5,1); \draw[dotted] (0,2)--(5,2); \draw[dotted] (0,3)--(5,3); \draw[dotted] (0,4)--(5,4); \draw[fill=red!10] (0,5)--(1,5)--(1,4)--(0,4)--cycle; \draw[fill=red!10] (1,5)--(2,5)--(2,4)--(1,4)--cycle; \draw[fill=red!10](2,5)--(3,5)--(3,4)--(2,4)--cycle; \draw[fill=red!10](0,4)--(1,4)--(1,3)--(0,3)--cycle; \draw[fill=red!10](0,3)--(1,3)--(1,2)--(0,2)--cycle; \draw[fill=red!10](0,2)--(0,1)--(1,1)--(1,2)--cycle; \draw[fill=red!10] (2,3)--(2,2)--(3,2)--(3,3)--cycle; \end{tikzpicture} } \caption{Example of a diagram in $[5]\times[5]$} \label{fig:diagram} \end{figure} \end{example} For any $\pi\in{\mathfrak{S}}_p$ we can define its {\it Rothe diagram} as $$D_\pi:=\{(i,j)\mid 1\leq i,j\leq p, \pi(i)>j\text{ and }\pi^{-1}(j)>i\}\subset[p]\times[p].$$ For example, when $\pi=42531$, $D_\pi$ is the diagram of \autoref{fig:diagram}. \begin{theorem} \label{thm_projections_schubert} Let $\pi\in{\mathfrak{S}}_p$. Then, for any ${\mathfrak{J}}\subseteq [p]$ the projection $\Pi_{\mathfrak{J}}\left(\overline{X_\pi}\right)$ onto the rows indexed by ${\mathfrak{J}}$ has codimension $\theta_{D_\pi}([p])-\theta_{D_\pi}({\mathfrak{J}}')$, where ${\mathfrak{J}}'=[p]\backslash{\mathfrak{J}}$ is the complement of ${\mathfrak{J}}$.
\end{theorem} \begin{proof} In \cite[Theorem 10]{FINK} the authors give a proof of \cite[Conjecture 5.13]{YONG}, which in our setup (recall the indexing in \autoref{defMsupp}) states that $\normalfont\text{MSupp}(\overline{X_\pi})$ is equal to \[ \left\{{\normalfont\mathbf{n}} \in \normalfont\mathbb{N}^p \;\mid\; \sum_{j\in{\mathfrak{J}}} \left((p-1)-n_j \right)\leq \theta_{D_\pi}({\mathfrak{J}}), \;\forall {\mathfrak{J}}\subsetneq [p], \;\sum_{j\in [p]}\left((p-1)-n_j \right)=\theta_{D_\pi}([p])\right\}. \] The first inequalities can be rewritten as $(p-1)|{\mathfrak{J}}|-\theta_{D_\pi}({\mathfrak{J}})\leq \sum_{j\in{\mathfrak{J}}}n_j$, and combining them with $(p-1)p-\theta_{D_\pi}([p])=\sum_{j\in[p]}n_j$ we obtain \[ \sum_{j\in {\mathfrak{J}}'} n_j \leq (p-1)|{\mathfrak{J}}'|-\bigg( \theta_{D_\pi}([p])-\theta_{D_\pi}({\mathfrak{J}})\bigg). \] By \autoref{thmA} we must have that $\normalfont\text{codim}\left(\Pi_{{\mathfrak{J}}}\left(\overline{X_\pi}\right)\right)= \theta_{D_\pi}([p])-\theta_{D_\pi}({\mathfrak{J}}')$ for every ${\mathfrak{J}}\subseteq [p]$, as we wanted to show. \end{proof} \begin{remark} Notice that $\theta_{D_\pi}([p])$ counts the total number of boxes in $D_\pi$, which is equal to the length of $\pi$ (see \cite[Definition 15.13]{MILLER_STURMFELS}). So the case ${\mathfrak{J}}=[p]$ of \autoref{thm_projections_schubert} above is equivalent to the well-known fact that the codimension of a matrix Schubert variety is equal to the length of the permutation (see \cite[Theorem 15.31]{MILLER_STURMFELS}). \end{remark} \subsection{Flag varieties} \label{sec_flag} We now focus on a multiprojective embedding of flag varieties. We first review some terminology. For more information the reader is referred to \cite{FULTON_TABLEAUX} or \cite{BRION}. In this subsection we work over an algebraically closed field $\mathbb{k}$. Consider the \emph{complete flag variety} $Fl(V)$ of a $\mathbb{k}$-vector space $V$ of dimension $p+1$.
This variety parametrizes complete flags, i.e., sequences $V_\bullet := (V_0,\ldots,V_{p+1})$ such that $\{0\}=V_0\subset V_1 \subset V_2 \subset \cdots \subset V_{p} \subset V_{p+1}=V,$ and each $V_i$ is a linear subspace of $V$ of dimension $i$. One can embed this variety in a product of Grassmannians $Fl(V) \hookrightarrow \textrm{Gr}(1,V)\times \textrm{Gr}(2,V) \times \cdots\times \textrm{Gr}(p,V)$ as the subvariety cut out by incidence relations. Furthermore, each Grassmannian can be embedded in a projective space via the Pl\"ucker embedding $\iota_i:\textrm{Gr}(i,V) \rightarrow{\normalfont\mathbb{P}}_\mathbb{k}^{m_i}$ for $1\leq i\leq p$. By considering the product of these maps, we obtain a multiprojective embedding $\iota: Fl(V)\hookrightarrow {\normalfont\mathbb{P}}_\mathbb{k}^{m_1}\times_\mathbb{k}\cdots \times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$. For convenience we also call $\iota$ the {\it Pl\"ucker embedding}. The proposition below computes the corresponding multidegree support. \begin{proposition}\label{prop_sottile} Let $V$ be a $\mathbb{k}$-vector space of dimension $p+1$ and let $X$ be the image of the Pl\"ucker embedding $\iota: Fl(V)\hookrightarrow {\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^{m_1}\times_\mathbb{k}\cdots\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^{m_{p}}$, then \begin{equation}\label{eq:flag_simple} \normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)=\left\{{\normalfont\mathbf{n}}\in\normalfont\mathbb{N}^{p} \;\mid\; 1\leq n_k\leq \sum_{j=1}^{k}(p-j)-\sum_{i=1}^{k-1} n_i, \;\; \forall k\in [p],\;\; \sum_{j=1}^p n_j = \binom{p+1}{2} \right\}; \end{equation} \end{proposition} \begin{proof} We need to compute the dimension of $\Pi_{\mathfrak{J}}(X)$ for each ${\mathfrak{J}}=\{j_1,\ldots,j_k\} \subseteq [p]$.
The key observation is that $\Pi_{\mathfrak{J}}(X)$ is isomorphic to the \emph{partial flag variety} $Fl_{{\mathfrak{J}}}(V)$: it parametrizes flags $W_\bullet := \{0\}=V_0\subset V_{1} \subset V_{2} \subset \cdots \subset V_{k} \subset V_{k+1}=V,$ where $\dim V_{i}=j_i$ for $1\leq i\leq k$. Hence \[ \dim\left(\Pi_{\mathfrak{J}}(X)\right) = \dim\left(Fl_{{\mathfrak{J}}}(V)\right) = \sum_{1\leq i<j\leq k+1} d_id_j = \mathcal{S}({\mathfrak{J}}); \] here, for each ${\mathfrak{J}} =\{j_1,\ldots,j_k\} \subseteq [p]$, we set $\mathcal{S}({\mathfrak{J}}) := \sum_{1\leq i<j\leq k+1} d_id_j$, where $d_i:=j_i-j_{i-1}$ and by convention $j_0:=0,\, j_{k+1}:=p+1$. For a proof of the second equality see \cite[\S 1.2]{BRION}. From \autoref{thmA} it follows that \begin{equation}\label{eq:flag} \normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)=\left\{{\normalfont\mathbf{n}}\in\normalfont\mathbb{N}^{p} \;\mid\; \sum_{j\in{\mathfrak{J}}}n_j\leq \mathcal{S}({\mathfrak{J}}), \;\; \forall{\mathfrak{J}}\subseteq[p],\;\; \sum_{j=1}^p n_j = \binom{p+1}{2} \right\}. \end{equation} It can be checked that the description in \autoref{eq:flag} coincides with the one in \autoref{eq:flag_simple}. \end{proof} The pullbacks of the classes $H_i$ from ${\normalfont\mathbb{P}}_\mathbb{k}^{m_1}\times_\mathbb{k}\cdots\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^{m_p}$ to $Fl(V)$ are called the Schubert divisors, so \autoref{prop_sottile} amounts to a criterion for which powers of these classes intersect. These intersections are called Grassmannian Schubert problems in \cite{PURBHOO_SOTTILE}. In \cite[Theorem 1.2]{PURBHOO_SOTTILE} K.~Purbhoo and F.~Sottile give a stronger statement by providing an explicit combinatorial formula using \emph{filtered tableaux} to compute the exact intersection numbers. \subsection{A multiprojective embedding of $\overline{M}_{0,p+3}$}\label{sec_M0} The moduli space $\overline{M}_{0,p+3}$ parametrizes rational stable curves with $p+3$ marked points.
Here we apply our methods to an embedding considered in \cite{MONKS}. The starting point is the closed embedding $\Psi_p: \overline{M}_{0,p+3}\longrightarrow \overline{M}_{0,p+2}\times_\mathbb{k} {\normalfont\mathbb{P}}_\mathbb{k}^p$ constructed by S.~Keel and J.~Tevelev in \cite[Corollary 2.7]{KEEL_TEVELEV}. By iterating this construction we obtain an embedding $\overline{M}_{0,p+3}\hookrightarrow {\normalfont\mathbb{P}}_\mathbb{k}^1\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^2\times_\mathbb{k}\cdots\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^p$ (see \cite[Corollary 3.2]{MONKS}). In \cite{MONKS}, R.~Cavalieri, M.~Gillespie, and L.~Monin computed the corresponding multidegree, which turns out to be related to parking functions. As an easy consequence of our \autoref{thmA}, we can compute its support. \begin{proposition}\label{prop_parking} Let $X$ be the image of $\overline{M}_{0,p+3}\hookrightarrow {\normalfont\mathbb{P}} = {\normalfont\mathbb{P}}_\mathbb{k}^1\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^2\times_\mathbb{k}\cdots\times_\mathbb{k}{\normalfont\mathbb{P}}_\mathbb{k}^p$, then \begin{equation}\label{eq:M0n} \normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)=\left\{{\normalfont\mathbf{n}}\in\normalfont\mathbb{N}^p\;\mid\; \sum_{i=1}^k n_i\leq k,\; \forall 1\leq k\leq p-1,\; \sum_{i=1}^p n_i=p\right\}. \end{equation} \end{proposition} \begin{proof} First, as explained in \cite[\S 3]{MONKS}, we have $\dim\left(\Pi_{[p]}(X)\right)=\dim\left( \Pi_{\{p\}}(X)\right)=p$. Also, by construction $\Pi_{[p-1]}(X)\cong\overline{M}_{0,p+2}$, and thus $\dim \left(\Pi_{[i-1]}(X)\right)=i-1$ for every $2\le i\le p$. So, by induction one gets $\dim\left(\Pi_{\{i\}}(X)\right)=i$ for all $1 \le i \le p$. To use \autoref{thmA}, we must compute $\dim \left(\Pi_{{\mathfrak{J}}}(X)\right)$ for all ${\mathfrak{J}} \subseteq [p]$.
Let $m:=\max\{j \mid j \in{\mathfrak{J}}\}$, then as explained above we have $\dim\left(\Pi_{[m]}(X)\right)=\dim\left(\Pi_{\{m\}}(X)\right)=m$ and so we must have $\dim\left(\Pi_{{\mathfrak{J}}}(X)\right)=m$. By \autoref{thmA} we obtain that $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ is equal to $$\{{\normalfont\mathbf{n}}\in\normalfont\mathbb{N}^p \,\mid\, \sum_{i\in{\mathfrak{J}}} n_i\leq \max\{j \mid j \in{\mathfrak{J}}\},\; \forall {\mathfrak{J}}\subseteq[p],\; \sum_{i=1}^p n_i=p\}, $$ but it is straightforward to check that the inequalities in \autoref{eq:M0n} are enough to describe the same set. \end{proof} The cardinality of $\normalfont\text{MSupp}_{\normalfont\mathbb{P}}(X)$ is equal to the Catalan number $C_p$ (see \cite[Exercise 86]{CATALAN}). For a comprehensive survey on Catalan numbers see \cite{CATALAN}. \subsection{Mixed Volumes}\label{sec_mixed} In this subsection we assume $\mathbb{k}$ is an algebraically closed field. We begin by reviewing the definition of mixed volumes of convex bodies; as a general reference, see \cite[Chapter IV]{EWALD}. Let $\textbf{K}=(K_1,\ldots,K_p)$ be a $p$-tuple of convex bodies in $\mathbb{R}^d$. The volume polynomial $v(\textbf{K})\in \normalfont\mathbb{Z}[w_1,\ldots,w_p]$ is defined as \[ v(K_1,\ldots,K_p) := \text{Vol}_d(w_1K_1+\cdots+w_pK_p). \] This is a homogeneous polynomial of degree $d$. If the coefficients of $v(\textbf{K})$ are written as $\binom{d}{{\normalfont\mathbf{n}}}V(\textbf{K};{\normalfont\mathbf{n}})w^{\normalfont\mathbf{n}}$, then the numbers $V(\textbf{K};{\normalfont\mathbf{n}})$ are called the {\it mixed volumes of} $\textbf{K}$. A natural question to ask is: when are mixed volumes positive? The relation between mixed volumes and toric varieties (see \autoref{eq_mixed-intersection} below) together with \autoref{thmA} allows us to give another proof of a classical theorem on the non-vanishing of mixed volumes \cite[Theorem 5.1.8]{SCHNEIDER}.
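For a tuple of $d$ segments in $\mathbb{R}^d$ the positivity question is completely explicit: if $S_i=[0,u_i]$, the zonotope $w_1S_1+\cdots+w_dS_d$ has volume $|\det(u_1,\ldots,u_d)|\,w_1\cdots w_d$, so with the normalization above $V(S_1,\ldots,S_d;(1,\ldots,1))=|\det(u_1,\ldots,u_d)|/d!$, which is positive exactly when the directions are linearly independent. The following Python sketch (the function names are ours, not from the text) compares this with the subset-dimension condition:

```python
import numpy as np
from math import factorial
from itertools import combinations

def mixed_volume_of_segments(directions):
    """V(S_1,...,S_d; (1,...,1)) for segments S_i = [0, u_i] in R^d.
    The zonotope w_1 S_1 + ... + w_d S_d has volume |det(u_1,...,u_d)| w_1...w_d,
    and binom(d; 1,...,1) = d! gives V = |det| / d!."""
    U = np.column_stack(directions)
    return abs(np.linalg.det(U)) / factorial(U.shape[0])

def dimension_criterion(directions):
    """For n = (1,...,1): check |J| <= dim(sum_{i in J} S_i) = rank{u_i : i in J}
    for every nonempty subset J of indices."""
    d = len(directions)
    for k in range(1, d + 1):
        for J in combinations(range(d), k):
            if np.linalg.matrix_rank(np.column_stack([directions[i] for i in J])) < k:
                return False
    return True

# three independent directions in R^3: positive mixed volume, criterion holds
u = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 2.0])]
print(mixed_volume_of_segments(u), dimension_criterion(u))
# two parallel segments: the mixed volume vanishes and the criterion fails
v = [np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(mixed_volume_of_segments(v), dimension_criterion(v))
```

This is only a sanity check in the simplest case; the general statement is the theorem that follows.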
\begin{theorem} \label{thm_mixed} Let $\textnormal{\textbf{K}}=(K_1,\ldots,K_p)$ be a $p$-tuple of convex bodies in $\mathbb{R}^d$. Then, $V(\textnormal{\textbf{K}};{\normalfont\mathbf{n}})>0$ if and only if $\sum_{i=1}^p n_i=d$ and $\sum_{i\in {\mathfrak{J}}} n_i\leq \dim\left(\sum_{i\in {\mathfrak{J}}} K_i\right)$ for every subset ${\mathfrak{J}} \subseteq [p]$. \end{theorem} We first indicate how to reduce to the case of polytopes. The basic idea is that convex bodies can be approximated by polytopes in the Hausdorff metric \cite[Section 1.8]{SCHNEIDER}. However, the condition for positivity as stated in \autoref{thm_mixed} is a priori not stable under limits. To fix this we invoke an equivalent condition more suitable for the limiting argument. \begin{lemma}\label{lem_reduction} It suffices to show \autoref{thm_mixed} for polytopes. \end{lemma} \begin{proof} This follows from two facts. The first is that mixed volumes $V(\textnormal{\textbf{K}};{\normalfont\mathbf{n}})$ are continuous \cite[Theorem 5.1.7]{SCHNEIDER} and monotone \cite[Equation 5.25]{SCHNEIDER} in each entry. The second fact is that for a given sequence $\textnormal{\textbf{K}}=(K_1,\ldots,K_p)\subset\left(\mathbb{R}^d\right)^p$ of convex bodies, by \cite[Lemma 5.1.9]{SCHNEIDER} the following conditions are equivalent: \begin{enumerate} \item $\sum_{i=1}^p n_i=d$ and $\sum_{i\in {\mathfrak{J}}} n_i\leq \dim\left(\sum_{i\in {\mathfrak{J}}} K_i\right)$ for every subset ${\mathfrak{J}}\subseteq[p]$. \item \label{cond_segments} There exist line segments $S_{i,1},S_{i,2},\ldots,S_{i,n_i}\subseteq K_i$, for every $i$, such that $\{S_{i,j}\}_{1\leqslant i\leqslant p,1\leqslant j\leqslant n_i}$ has segments in $d$ linearly independent directions. \end{enumerate} We now assume the statement of \autoref{thm_mixed} is true when each $K_i$ is a polytope and show that it follows in the case where each $K_i$ is an arbitrary convex body.
If $V(\textnormal{\textbf{K}};{\normalfont\mathbf{n}})>0$ then by continuity we can find polytopes $\textnormal{\textbf{P}}=(P_1,\ldots,P_p)$ with $V(\textnormal{\textbf{P}};{\normalfont\mathbf{n}})>0$ and $P_i \subseteq K_i$ for each $i\in[p]$. By assumption, the sequence $\textnormal{\textbf{P}}$ satisfies condition \hyperref[cond_segments]{(2)} above and hence so does the sequence $\textnormal{\textbf{K}}$. Conversely, suppose condition \hyperref[cond_segments]{(2)} holds for $\textnormal{\textbf{K}}$, then by continuity we can find polytopes $\textnormal{\textbf{P}}=(P_1,\ldots,P_p)$ with $P_i\subseteq K_i$ arbitrarily close so that \hyperref[cond_segments]{(2)} holds for $\textnormal{\textbf{P}}$ too. By assumption, we have $V(\textnormal{\textbf{P}};{\normalfont\mathbf{n}})>0$ and thus $V(\textnormal{\textbf{K}};{\normalfont\mathbf{n}})\geq V(\textnormal{\textbf{P}};{\normalfont\mathbf{n}})>0$ by monotonicity. \end{proof} To finish the proof of \autoref{thm_mixed} we need some preliminary results about toric varieties and lattice polytopes. As an initial step we recall some facts about basepoint free divisors; a general reference is \cite[Section 6]{COX_TORIC}. Let $\Sigma$ be a fan and let $P$ be a lattice polytope whose normal fan coarsens $\Sigma$. Then, $P$ induces a basepoint free divisor $D_P$ in the toric variety $Y_{\Sigma}$ \cite[Proposition 6.2.5]{COX_TORIC}. Here, being basepoint free means that the complete linear series $|D_P|$ induces a morphism $\phi_{P}:Y_{\Sigma}\rightarrow {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_i}$ for some $m_i\in\normalfont\mathbb{N}$ such that $\phi_{P}^*(H)=D_P\in A^*(Y_{\Sigma})$, where $H$ is the class of a hyperplane in the projective space ${\normalfont\mathbb{P}}_{\mathbb{k}}^{m_i}$. \begin{lemma}\label{lem_toric-aux-1} Let $K_1,\ldots,K_p$ be lattice polytopes and let $K:=K_1+\cdots+K_p$ be their Minkowski sum.
Let $Y$ be the toric variety associated to $\Sigma$, the normal fan of $K$, then for each ${\mathfrak{J}}\subseteq[p]$ we have a map $\phi_{{\mathfrak{J}}}: Y\rightarrow \prod_{j\in{\mathfrak{J}}} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_j}$ such that $\dim\left(\phi_{{\mathfrak{J}}}(Y)\right)=\dim\left(\sum_{i\in{\mathfrak{J}}}K_i\right)$. \end{lemma} \begin{proof} The fan $\Sigma$ is the common refinement of the normal fans of $K_1,\ldots,K_p$ \cite[Proposition 7.12]{ZIEGLER}, so each $K_i$ induces a basepoint free divisor $D_i$ on $Y=Y_\Sigma$ and thus also a map $\phi_i: Y\rightarrow {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_i}$. By the universal property of fiber products these maps induce a canonical map $\phi_{{\mathfrak{J}}}: Y\rightarrow \prod_{j\in{\mathfrak{J}}} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_j}$ for each ${\mathfrak{J}}\subseteq[p]$. It remains to compute the dimensions of the images of these maps. By composing with the Segre embedding $\varphi_{{\mathfrak{J}}}:\prod_{j\in{\mathfrak{J}}} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_j}\rightarrow {\normalfont\mathbb{P}}_{\mathbb{k}}^m$ we obtain a map $\varphi_{{\mathfrak{J}}}\circ\phi_{{\mathfrak{J}}}: Y\rightarrow {\normalfont\mathbb{P}}_{\mathbb{k}}^m$. Let $H$ be the class of a hyperplane in $A^*({\normalfont\mathbb{P}}_{\mathbb{k}}^m)$, we have that $\varphi_{{\mathfrak{J}}}^*(H)=\sum_{i\in{\mathfrak{J}}} H_i\in A^*\left(\prod_{j\in{\mathfrak{J}}} {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_j}\right)$ where each $H_i$ is the pullback of a hyperplane in the $i$-th factor (see, e.g., \cite[Exercise 5.11]{HARTSHORNE}). Then $(\varphi_{{\mathfrak{J}}}\circ\phi_{{\mathfrak{J}}})^*(H)=\sum_{i\in{\mathfrak{J}}} D_i\in A^*(Y)$.
This means that the morphism $\varphi_{{\mathfrak{J}}}\circ\phi_{{\mathfrak{J}}}$ corresponds to the complete linear series $|D_{K_{{\mathfrak{J}}}}|$ where $K_{{\mathfrak{J}}}=\sum_{j\in{\mathfrak{J}}} K_j$, hence the image has dimension $\dim(K_{{\mathfrak{J}}})=\dim(\sum_{j\in{\mathfrak{J}}} K_j)$ \cite[Theorem 6.1.22]{COX_TORIC}. \end{proof} \begin{lemma} \label{lem_toric-aux-2} In the setup of \autoref{lem_toric-aux-1}, if ${\mathfrak{J}}=[p]$ then after scaling each polytope if necessary, $\phi = \phi_{{\mathfrak{J}}}$ is an embedding. \end{lemma} \begin{proof} By construction the normal fan of $K_{[p]}=\sum_{i=1}^p K_i$ is $\Sigma$, so the corresponding divisor is ample \cite[Theorem 6.1.14]{COX_TORIC}. By replacing the list of polytopes by large enough scalings we obtain a very ample divisor, hence an embedding. \end{proof} \begin{proof}[Proof of \autoref{thm_mixed}] By using \autoref{lem_reduction}, we can assume that each $K_i$ is a polytope. Additionally, we can reduce to the case where each $K_i$ is a lattice polytope since any polytope can be approximated by lattice polytopes (see \cite[Page 120]{FULTON}). Let $K=K_1+\cdots+K_p$ and let $Y$ be the projective toric variety associated to the normal fan of $K$. Each lattice polytope $K_i$ induces a basepoint free divisor $D_i$ on $Y$. As explained in \cite[Eq. (2), Page 116]{FULTON}, the fundamental connection between mixed volumes and intersection products is given by the following equation \begin{equation}\label{eq_mixed-intersection} V(\textbf{K};{\normalfont\mathbf{n}})=\left(D_1^{n_1}\cdots D_p^{n_p}\right)/d!, \end{equation} where the numerator is the intersection product of the divisors in $Y$. Notice that positivity of mixed volumes is unchanged by scaling, so whenever needed we can scale each polytope.
By \autoref{lem_toric-aux-2} we have an embedding $\phi: Y\rightarrow \prod_{i=1}^p {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_i}$ such that the pullback of each $H_i\in A^*\left(\prod_{i=1}^p {\normalfont\mathbb{P}}_{\mathbb{k}}^{m_i}\right)$ is $D_i$. By using the projection formula \cite[Proposition 2.5(c)]{FULTON_INTERSECTION_THEORY}, we can consider the product $\left(H_1^{n_1}\cdots H_p^{n_p}\right)/d!$ instead of the one in \autoref{eq_mixed-intersection}. From the fact that $Y$ is irreducible we are now in the \autoref{setup_initial}. So, the result follows by \autoref{rem_chow_ring}, \autoref{thmA}, and \autoref{lem_toric-aux-1}. \end{proof} \section*{Acknowledgments} We thank the reviewer for his/her suggestions for the improvement of this work. We would like to thank Chris Eur, Maria Gillespie, June Huh, David Speyer, and Mauricio Velasco for useful conversations. We are also grateful to Frank Sottile for useful comments on an earlier version (in particular for the simplification of the statement of \autoref{prop_sottile}). Special thanks to Brian Osserman for many insightful conversations and encouragement. The computer algebra system \texttt{Macaulay2} \cite{MACAULAY2} was of great help to compute several examples in the preparation of this paper. \bibliographystyle{elsarticle-num}
\section{Introduction} Many combinatorial optimization problems are ${\cal NP}$-hard, and the theory of ${\cal NP}$-completeness has reduced hopes that ${\cal NP}$-{\it hard} problems can be solved within polynomially bounded computation times. Nevertheless, sub-optimal solutions are sometimes easy to find. Consequently, there is much interest in approximation and heuristic algorithms that can find near optimal solutions within reasonable running time. Heuristic algorithms are typically among the best strategies in terms of efficiency and solution quality for problems of realistic size and complexity. In contrast to individual heuristic algorithms that are designed to solve a specific problem, meta-heuristics are strategic problem solving frameworks that can be adapted to solve a wide variety of problems. Meta-heuristic algorithms are widely recognized as one of the most practical approaches for combinatorial optimization problems. The most representative meta-heuristics include genetic algorithms, simulated annealing, tabu search and ant colony. Useful references regarding meta-heuristic methods can be found in \cite{gk}. {\em The Generalized Traveling Salesman Problem (GTSP)} was introduced in \cite{ln} and \cite{nb}. The {\em GTSP} has several applications to location and telecommunication problems. More information on these problems and their applications can be found in \cite{figo,fist,ln}. Several approaches have been considered for solving the {\em GTSP}: a branch-and-cut algorithm for the {\em Symmetric GTSP} is described and analyzed in \cite{fist}, a Lagrangian-based approach for the {\em Asymmetric GTSP} is given in \cite{nb}, a random-key genetic algorithm for the {\em GTSP} is described in \cite{snda}, and an efficient composite heuristic for the {\em Symmetric GTSP} is proposed in \cite{rb}. The aim of this paper is to provide an exact algorithm for the {\em GTSP} as well as an effective meta-heuristic algorithm for the problem.
The proposed meta-heuristic is a modified version of {\em Ant Colony System (ACS)}. Introduced in \cite{cdm,do}, {\em Ant System} is a heuristic algorithm inspired by the observation of real ant colonies. {\em ACS} is used to solve hard combinatorial optimization problems including the {\em Traveling Salesman Problem (TSP)}. \section{Definition and complexity of the GTSP} Let $G=(V,E)$ be an $n$-node undirected graph whose edges are associated with non-negative costs. We will assume w.l.o.g. that $G$ is a complete graph (if there is no edge between two nodes, we can add it with an infinite cost). Let $V_1,...,V_p$ be a partition of $V$ into $p$ subsets called {\it clusters} (i.e. $V=V_1 \cup V_2 \cup ... \cup V_p$ and $V_l \cap V_k = \emptyset$ for all $l,k \in \{1,...,p\}$ with $l\neq k$). We denote the cost of an edge $e=\{i,j\}\in E$ by $c_{ij}$. The {\it generalized traveling salesman problem} ({\em GTSP}) asks for finding a minimum-cost tour $H$ spanning a subset of nodes such that $H$ contains exactly one node from each cluster $V_i$, $i\in \{1,...,p\}$. The problem involves two related decisions: choosing a node subset $S\subseteq V$, such that $|S \cap V_k | = 1$, for all $k=1,...,p$, and finding a minimum cost Hamiltonian cycle in the subgraph of $G$ induced by $S$. Such a cycle is called a {\it Hamiltonian tour}. The {\em GTSP} is called {\em symmetric} if and only if the equality $c(i,j)=c(j,i)$ holds for every $i,j \in V$, where $c$ is the cost function associated to the edges of $G$. \section{An exact algorithm for the GTSP} In this section, we present an algorithm that finds an exact solution to the {\em GTSP}. Given a sequence $(V_{k_{1}},...,V_{k_{p}})$ in which the clusters are visited, we want to find the best feasible Hamiltonian tour $H^*$ (w.r.t. cost minimization) visiting the clusters according to the given sequence. This can be done in polynomial time by solving $|V_{k_{1}}|$ shortest path problems as described below.
We construct a layered network, denoted by LN, having $p+1$ layers corresponding to the clusters $V_{k_{1}},...,V_{k_{p}}$, with the cluster $V_{k_{1}}$ duplicated. The layered network contains all the nodes of $G$ plus an extra node $v'$ for each $v\in V_{k_1}$. There is an arc $(i,j)$ for each $i\in V_{k_l}$ and $j\in V_{k_{l+1}}$ ($l=1,...,p-1$), having the cost $c_{ij}$, and an arc $(i,h)$, $i,h \in V_{k_l}$ ($l=2,...,p$), having cost $c_{ih}$. Moreover, there is an arc $(i,j')$ for each $i\in V_{k_p}$ and $j'\in V_{k_1}$ having cost $c_{ij'}$. For any given $v\in V_{k_1}$, we consider paths from $v$ to $w'$, $w'\in V_{k_1}$, that visit exactly one node from each cluster $V_{k_{2}},...,V_{k_{p}}$; each such path gives a feasible Hamiltonian tour. Conversely, every Hamiltonian tour visiting the clusters according to the sequence $(V_{k_{1}},...,V_{k_{p}})$ corresponds to a path in the layered network from a certain node $v\in V_{k_1}$ to $w'\in V_{k_1}$. Therefore the best (w.r.t. cost minimization) Hamiltonian tour $H^*$ visiting the clusters in a given sequence can be found by determining all the shortest paths from each $v\in V_{k_1}$ to each $w'\in V_{k_1}$ with the property that they visit exactly one node from each cluster. The overall time complexity is then $|V_{k_1}|O(m+n\log n)$, i.e. $O(nm+n^2\log n)$ in the worst case. We can reduce the time by choosing $V_{k_1}$ to be the cluster with minimum cardinality. It should be noted that the above procedure leads to an $O((p-1)!(nm+n^2\log n))$ time exact algorithm for the {\em GTSP}, obtained by trying all the $(p-1)!$ possible cluster sequences. Therefore we have established the following result: the above procedure provides an exact solution to the {\em GTSP} in $O((p-1)!(nm+n^2\log n))$ time, where $n$ is the number of nodes, $m$ is the number of edges and $p$ is the number of clusters in the input graph. Clearly, the algorithm presented is an exponential time algorithm unless the number of clusters $p$ is fixed.
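The procedure above can be sketched in Python. This is an illustrative implementation (all names are ours, not from the text): it fixes a minimum-cardinality cluster as $V_{k_1}$ and, for every ordering of the remaining clusters and every start node $v\in V_{k_1}$, performs the layer-by-layer sweep corresponding to the shortest path computation in the layered network, omitting the intra-cluster arcs, which are not needed when exactly one node per cluster is selected:

```python
import math
from itertools import permutations

def exact_gtsp(cost, clusters):
    """Exact GTSP via the layered-network idea: for each cluster sequence
    starting at a minimum-cardinality cluster and each start node v, a
    layer-by-layer sweep finds the cheapest tour visiting exactly one node
    per cluster in that order."""
    first = min(range(len(clusters)), key=lambda i: len(clusters[i]))
    rest = [i for i in range(len(clusters)) if i != first]
    best_cost, best_tour = float("inf"), None
    for order in permutations(rest):
        seq = [clusters[i] for i in (first,) + order]
        for v in seq[0]:
            dist = {v: 0.0}          # cheapest path cost from v to each node so far
            parent = []              # parent[t][u]: best predecessor of u
            for layer in seq[1:]:
                step = {u: min((dist[w] + cost[w][u], w) for w in dist)
                        for u in layer}
                parent.append({u: w for u, (_, w) in step.items()})
                dist = {u: d for u, (d, _) in step.items()}
            u = min(dist, key=lambda w: dist[w] + cost[w][v])  # close the tour at v
            total = dist[u] + cost[u][v]
            if total < best_cost:
                tour = [u]
                for p in reversed(parent):
                    tour.append(p[tour[-1]])
                best_cost, best_tour = total, tour[::-1]
    return best_cost, best_tour

# toy symmetric instance: 6 points in the plane, 3 clusters of 2 nodes each
clusters = [[0, 1], [2, 3], [4, 5]]
coords = [(0, 0), (0, 1), (5, 0), (5, 1), (2, 4), (3, 4)]
cost = [[math.dist(a, b) for b in coords] for a in coords]
length, tour = exact_gtsp(cost, clusters)
print(round(length, 4), tour)
```

The toy instance is of our own making; it only illustrates that the sweep returns one node per cluster and the cheapest closed tour over all cluster orders.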
\section{Ant Colony System} {\em Ant System}, proposed in \cite{cdm,do}, is a multi-agent approach used for various combinatorial optimization problems. The algorithms were inspired by the observation of real ant colonies. An ant can find shortest paths between food sources and a nest. While walking from food sources to the nest and vice versa, ants deposit on the ground a substance called pheromone, forming a pheromone trail. Ants can smell pheromone and, when choosing their way, they tend to choose paths marked by stronger pheromone concentrations. It has been shown that this pheromone trail following behavior employed by a colony of ants can lead to the emergence of shortest paths. When an obstacle breaks the path, ants try to get around it, initially choosing either way at random. If the two paths around the obstacle have different lengths, more ants pass along the shorter route in a given time interval as they move back and forth between the nest and the food source. As each ant keeps marking its way with pheromone, the shorter route accumulates a higher pheromone concentration and consequently more and more ants choose this route. This positive feedback finally leads to a stage where the entire ant colony uses the shortest path. There are many variations of ant colony optimization applied to various classical problems. {\em Ant System} makes use of simple agents, called ants, which iteratively construct candidate solutions to a combinatorial optimization problem. The ants' solution construction is guided by pheromone trails and problem-dependent heuristic information. An individual ant constructs a candidate solution by starting with an empty solution and then iteratively adding solution components until a complete candidate solution is generated. Each point at which an ant has to decide which solution component to add to its current partial solution is called a choice point.
After the solution construction is completed, the ants give feedback on the solutions they have constructed by depositing pheromone on the solution components they used. Solution components which are part of better solutions or are used by many ants will receive a higher amount of pheromone and, hence, will more likely be used by the ants in future iterations of the algorithm. To avoid the search getting stuck, all pheromone trails are typically decreased by a factor before they get reinforced. \textit{Ant Colony System} ({\em ACS}) was developed to improve \textit{Ant System}, making it more efficient and robust. \textit{Ant Colony System} works as follows: $m$ ants are initially positioned on $n$ nodes chosen according to some initialization rule, for example randomly. Each ant builds a tour by repeatedly applying a stochastic greedy rule - the state transition rule. While constructing its tour, an ant also modifies the amount of pheromone on the visited edges by applying the local updating rule. Once all ants have terminated their tours, the amount of pheromone on the edges is modified again by applying the global updating rule. As was the case in ant system, ants are guided, in building their tours, by both heuristic information and pheromone information: an edge with a high amount of pheromone is a very desirable choice. The pheromone updating rules are designed so that they tend to give more pheromone to edges which should be visited by ants. The ants' solutions are not guaranteed to be optimal with respect to local changes and hence may be further improved using local search methods. Based on this observation, the best performance is obtained using hybrid algorithms that combine probabilistic solution construction by a colony of ants with local search algorithms such as 2-opt, 3-opt, tabu search, etc.
In such hybrid algorithms, the ants can be seen as guiding the local search by constructing promising initial solutions, because ants preferably use solution components which, earlier in the search, have been contained in good locally optimal solutions. \section{Reinforcing Ant Colony System for GTSP} An {\em Ant Colony System} for the {\em GTSP} is introduced. In order to enforce the construction of valid solutions in {\em ACS}, a new algorithm called {\em Reinforcing Ant Colony System} ({\em RACS}) is elaborated, with a new pheromone rule as in \cite{cdd} and a pheromone evaporation technique as in \cite{th}. Let $V_k(y)$ denote the node $y$ from the cluster $V_k$. The {\em RACS} algorithm for the {\em GTSP} works as follows: \begin{itemize} \item Initially the ants are placed in the nodes of the graph, choosing randomly a \textit{cluster} and also a random node from the chosen cluster. \item At iteration $t+1$ every ant moves to a new node from an unvisited \textit{cluster} and the parameters controlling the algorithm are updated. \item Each edge is labeled by a trail intensity. Let $\tau_{ij}(t)$ represent the trail intensity of the edge $(i,j)$ at time $t$. An ant decides which node is the next move with a probability that is based on the distance to that node (i.e. the cost of the edge) and the amount of trail intensity on the connecting edge. The inverse of the distance from a node to the next node is known as the \textit{visibility}, $\eta_{ij}=\frac{1}{c_{ij}}$. \item At each time unit evaporation takes place. This is to stop the trail intensities from increasing without bound. The evaporation rate is denoted by $\rho$, and its value is between 0 and 1. In order to stop ants visiting the same \textit{cluster} in the same tour a tabu list is maintained. This prevents ants from visiting \textit{clusters} they have previously visited. The ant tabu list is cleared after each completed tour.
\item To favor the selection of an edge that has a high pheromone value, $\tau$, and a high visibility value, $\eta$, a probability function ${p^{k}}_{iu}$ is considered. ${J^{k}}_{i}$ are the unvisited neighbors of node $i$ by ant $k$ and $u\in {J^{k}}_{i}, u=V_k(y)$, being the node $y$ from the unvisited cluster $V_k$. This probability function is defined as follows: \begin{equation}\label{probabilitate} {p^{k}}_{iu}(t)= \frac{[\tau_{iu}(t)] [\eta_{iu}(t)]^{\beta}} {\sum_{o\in {J^{k}}_{i}}[\tau_{io}(t)] [\eta_{io}(t)]^{\beta}} , \end{equation} \indent where $\beta$ is a parameter used for tuning the relative importance of edge cost in selecting the next node. ${p^{k}}_{iu}$ is the probability of choosing $j=u$, where $u=V_k(y)$ is the next node, if $q>q_{0}$ (the current node is $i$). If $q\leq q_{0}$ the next node $j$ is chosen as follows: \begin{equation} j=\arg\max_{u\in J^{k}_{i}} \{\tau_{iu}(t) {[\eta_{iu}(t)]}^{\beta}\} , \end{equation} \noindent where $q$ is a random variable uniformly distributed over $[0,1]$ and $q_{0}$ is a parameter similar to the temperature in simulated annealing, $0\leq q_{0}\leq 1$. \item After each transition the trail intensity is updated using the correction rule from \cite{cdd}: \begin{equation} \tau_{ij}(t+1)=(1-\rho)\tau_{ij}(t)+\rho \frac{1}{n \cdot L^{+}} , \end{equation} \noindent where $L^{+}$ is the cost of the best tour. \item In {\em Ant Colony System} only the ant that generates the best tour is allowed to \textit{globally} update the pheromone. The global update rule is applied to the edges belonging to the {\it best tour}. The correction rule is \begin{equation}\label{global} \tau_{ij}(t+1)=(1-\rho) \tau_{ij}(t)+\rho \Delta \tau_{ij}(t) , \end{equation} \noindent where $\Delta\tau_{ij}(t)$ is the inverse cost of the best tour. \item In order to avoid stagnation we use the pheromone evaporation technique introduced in \cite{th}. When the pheromone trail is over an upper bound $\tau_{max}$, the pheromone trail is re-initialized.
The pheromone evaporation is applied after the global pheromone update rule. \end{itemize} Within a given time limit $time_{max}$, the {\em RACS} algorithm computes a sub-optimal solution (the optimal solution, when possible), and can be stated as follows in the pseudo-code description. \begin{figure} \centering \includegraphics[scale=0.9]{alg.eps} \caption{\small \sffamily Pseudo-code: the {\em Reinforcing Ant Colony System} ({\em RACS}).} \label{fig:figura11} \end{figure} \begin{figure} \centering \includegraphics[scale=0.55]{sq.eps} \caption{\small \sffamily A graphic representation of the {\em Generalized Traveling Salesman Problem} ({\em GTSP}) solved with an ant-based heuristic called {\em Reinforcing Ant Colony System} ({\em RACS}) is illustrated. The first picture shows an ant starting from the nest to find food, going once through each cluster and returning to the nest; all the ways are initialized with the same $\tau_{0}$ pheromone quantity; after several iterations performed by each ant from the nest, the solution is visible. The second picture shows a solution of the {\em Generalized Traveling Salesman Problem} ({\em GTSP}) represented by the largest pheromone trail (thick lines); the pheromone is evaporating on all the other trails (gray lines). } \label{fig:figura} \end{figure} \section{Representation and computational results} A graphic representation of {\em Reinforcing Ant Colony System} for solving the {\em GTSP} is shown in Fig.~\ref{fig:figura}. At the beginning, the ants are in their nest and will start to search for food in a specific area. Assuming that each cluster has a specific food and the ants are capable of recognizing this, they will choose a different cluster each time. The pheromone trails will guide the ants to the shorter path, a solution of the {\em GTSP}, as in Fig.~\ref{fig:figura}.
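The pseudo-random-proportional transition rule and the local trail update described above can be sketched as follows; this is an illustrative fragment (all names are ours), not the full {\em RACS} implementation:

```python
import random

def next_node(i, candidates, tau, eta, beta=5.0, q0=0.5, rng=random):
    """State transition rule: with probability q0 move greedily to the node
    maximizing tau_iu * eta_iu^beta; otherwise sample a node with probability
    proportional to tau_iu * eta_iu^beta (roulette-wheel selection)."""
    weights = {u: tau[i][u] * eta[i][u] ** beta for u in candidates}
    if rng.random() <= q0:
        return max(weights, key=weights.get)
    r, acc = rng.random() * sum(weights.values()), 0.0
    for u, w in weights.items():
        acc += w
        if acc >= r:
            return u
    return u  # numerical fallback

def local_update(tau, i, j, rho, n, best_len):
    """Trail correction rule: tau_ij <- (1 - rho) tau_ij + rho / (n * L+)."""
    tau[i][j] = (1.0 - rho) * tau[i][j] + rho / (n * best_len)

# tiny illustration on a 3-node graph (values of our own choosing)
tau = [[1.0, 1.0, 1.0] for _ in range(3)]
eta = [[0.0, 0.5, 1.0], [0.5, 0.0, 1.0], [1.0, 1.0, 0.0]]
print(next_node(0, [1, 2], tau, eta, q0=1.0))  # q <= q0 always: greedy choice, node 2
local_update(tau, 0, 2, rho=0.5, n=3, best_len=2.0)
print(tau[0][2])
```

With $q_0=1$ the rule is purely greedy, with $q_0=0$ it is purely probabilistic; intermediate values balance exploitation against exploration.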
To evaluate the performance of the proposed algorithm, the {\em RACS} was compared to the basic {\em ACS} algorithm for the {\em GTSP} and furthermore to other heuristics from the literature: {\em Nearest Neighbor (NN)}, the composite heuristic $GI^{3}$ and a {\em random key-Genetic Algorithm} \cite{rb,snda}. The numerical experiments that compare {\em RACS} with the other heuristics used problems from the {\em TSP} library \cite{tl}. {\em TSPLIB} provides optimal objective values for each of the problems. Several problems with Euclidean distances have been considered. The exact algorithm proposed in Section 3 is clearly outperformed by heuristics including {\em RACS}, because its running time is exponential, while heuristics including {\em RACS} are polynomial time algorithms and provide good sub-optimal solutions for reasonable sizes of the problem. To divide the set of nodes into subsets we used the procedure proposed in \cite{figo}. This procedure sets the number of clusters $m=[n/5]$, identifies the $m$ farthest nodes from each other, called centers, and assigns each remaining node to its nearest center. Obviously, some real world problems may have different cluster structures, but the solution procedure presented in this paper is able to handle any cluster structure. The initial value of all pheromone trails is $\tau_{0}=1/(n \cdot L_{nn})$, where $L_{nn}$ is the result of the \textit{Nearest Neighbor (NN)} algorithm. In the {\em NN} algorithm the rule is always to go next to the nearest as-yet-unvisited location. The corresponding tour traverses the nodes in the constructed order. For the pheromone evaporation phase, we denote the upper bound by $\tau_{max}=1/((1-\rho) L_{nn})$. The decimal values can be treated as parameters and can be changed if necessary. The parameters of the algorithm are critical, as in all other ant systems. Currently there is no mathematical analysis available that gives the optimal parameter values for each situation.
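For the {\em GTSP}, the {\em NN} rule above amounts to always moving to the cheapest node of a not-yet-visited cluster. A small sketch (with an illustrative instance of our own, not from the paper's test set) of how $L_{nn}$, $\tau_{0}$ and $\tau_{max}$ can be obtained:

```python
import math

def nearest_neighbor_gtsp(cost, clusters, start):
    """Nearest-neighbor tour for the GTSP: from the current node, always
    move to the cheapest node belonging to a not-yet-visited cluster."""
    node_cluster = {v: k for k, cl in enumerate(clusters) for v in cl}
    tour, visited = [start], {node_cluster[start]}
    while len(visited) < len(clusters):
        i = tour[-1]
        j = min((v for v in node_cluster if node_cluster[v] not in visited),
                key=lambda v: cost[i][v])
        tour.append(j)
        visited.add(node_cluster[j])
    length = sum(cost[tour[t]][tour[(t + 1) % len(tour)]] for t in range(len(tour)))
    return tour, length

# toy instance: 6 points in the plane, 3 clusters of 2 nodes each
clusters = [[0, 1], [2, 3], [4, 5]]
coords = [(0, 0), (0, 1), (5, 0), (5, 1), (2, 4), (3, 4)]
cost = [[math.dist(a, b) for b in coords] for a in coords]
tour, L_nn = nearest_neighbor_gtsp(cost, clusters, start=0)

# trail initialization and upper bound, with the parameter values used above
n, rho = len(coords), 0.5
tau0 = 1.0 / (n * L_nn)
tau_max = 1.0 / ((1.0 - rho) * L_nn)
print(tour, round(L_nn, 4))
```

The resulting $L_{nn}$ is only a greedy upper bound on the optimal tour length, which is why it is a convenient normalizer for the initial trail intensity.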
In the {\em ACS} and {\em RACS} algorithms the values of the parameters were chosen as follows: $\beta=5$, $\rho=0.5$, $q_{0}=0.5$. In the table from Figure~\ref{fig:figura12} we compare the computational results for solving the {\em GTSP} using the {\em ACS} and {\em RACS} algorithms with the computational results obtained using {\em NN}, $GI^{3}$ and the {\em random key-Genetic Algorithm} mentioned above. The columns in the table from Figure~\ref{fig:figura12} are as follows: {\bf Problem} - the name of the test problem; the first digits give the number of clusters ($nc$) and the last ones give the number of nodes ($n$); {\bf Opt.val.} - the optimal objective value for the problem \cite{snda}; {\bf ACS, RACS, NN, G$I^{3}$, GA} - the objective value returned by {\em ACS, RACS, NN, $GI^{3}$} and the {\em random-key Genetic Algorithm}. \begin{figure} \centering \includegraphics[scale=0.6]{TAB.eps} \caption{\small \sffamily Reinforcing Ant Colony System (RACS) versus ACS, NN, G$I^{3}$ and GA.} \label{fig:figura12} \end{figure} The best solutions in the table from Figure~\ref{fig:figura12} are shown in bold. All the solutions of {\em ACS} and {\em RACS} are the average of five successive runs of the algorithm for each problem. The termination criterion for {\em ACS} and {\em RACS} is given by $time_{max}$, the maximal computing time set by the user; in this case, ten minutes. The table from Figure~\ref{fig:figura12} shows that the {\em Reinforcing Ant Colony System} performed well, finding the optimal solution in many cases. The results of {\em RACS} are better than the results of {\em ACS}. The {\em RACS} algorithm for the {\em Generalized Traveling Salesman Problem} can be improved if more appropriate values for the parameters are used. Also, an efficient combination with other algorithms can potentially improve the results. \newpage \section{Conclusion} The basic idea of {\em ACS} is that of simulating the behavior of a set of agents that cooperate to solve an optimization problem by means of simple communications.
The algorithm introduced to solve the {\em Generalized Traveling Salesman Problem}, called {\em Reinforcing Ant Colony System}, is an {\em ACS}-based algorithm with new correction rules. The computational results of the proposed {\em RACS} algorithm are good and competitive, in both solution quality and computational time, with the existing heuristics from the literature \cite{rb,snda}. The {\em RACS} results can be improved by considering better values for the parameters or by combining the {\em RACS} algorithm with other optimization algorithms. Some disadvantages have also been identified; they concern the multiple parameters used by the algorithm and the high hardware resource requirements.
\section{Introduction} When searching for new physics, a discovery claim is made if the data collected by the experiment provide sufficient statistical evidence in favor of the new phenomenon. If the background and signal distributions are specified correctly, this can be done by means of statistical hypothesis tests, upper limits and confidence intervals. \textbf{\emph{The problem.}} In practice, even if a reliable description of the signal distribution is available, providing accurate background models may be challenging, as the behavior of the sources which contribute to it is often poorly understood. Some examples include searches for nuclear recoils of weakly interacting massive particles over electron-recoil backgrounds \cite{aprile18, agnese18}, searches for gravitational-wave signals over non-Gaussian backgrounds from stellar-mass binary black holes \cite{smith18}, and searches for a standard model-like Higgs boson over prompt diphoton production \cite{CMS18}. Unfortunately, model uncertainty due to background mismodelling can significantly compromise the sensitivity of the experiment under study. Specifically, overestimating the background distribution in the signal region increases the chances of missing new physics. Conversely, underestimating the background outside the signal region leads to an artificially enhanced sensitivity, which can easily result in false discovery claims. Several methods have been proposed in the literature to address this problem \cite[e.g.,][]{yellin, Priel,dauncey}. However, to the best of the author's knowledge, none of the methods available \black{provides a unified strategy to} (i) assess the validity of existing models for the background, (ii) fully characterize the background distribution, (iii) perform signal detection even if the signal distribution is not available, (iv) characterize the signal distribution, and (v) detect additional signals of new unexpected sources.
\emph{\textbf{Goal.}} \black{The aim of this work is to integrate modelling, estimation, and inference under background mismodelling and provide a general statistical methodology to perform (i)-(v).} As a brief overview, given a source-free sample and the (partial) scientific knowledge available on the background distribution, a data-updated version of it is obtained in a purely nonparametric fashion without requiring the specification of prior distributions \black{on the unknown parameters}. At this stage, a graphical tool is provided in order to assess if and where significant deviations between the true and the postulated background distributions occur. The ``updated'' background distribution is then used to assess if the distribution of the data collected by the experiment deviates significantly from the background model. Also in this case, it is possible to assess graphically how the data distribution deviates from the expected background model. If a source-free sample is available, or if control regions can be identified, the solution proposed does not require the specification of a model for the signal; however, if the signal distribution is known (up to some free parameters), the latter can be used to further improve the accuracy of the analysis and to detect the signals of unexpected new sources. \black{Finally, the method can be easily adjusted to cover situations in which a source-free sample or control regions are not available and the background is unknown or incorrectly specified, but a functional form of the signal distribution is known}. \emph{\textbf{The key of the solution.}} The statistical methodologies involved rely on the novel \emph{LP approach to statistical modelling} first introduced by Mukhopadhyay and Parzen in 2014 \cite{LPapproach}.
\black{As will become clearer later in the paper, the letter \emph{L} typically denotes robust nonparametric methods based on quantiles, whereas \emph{P} stands for polynomials \cite[Supp S1]{ksamples}.} This approach allows the unification of many of the standard results of classical statistics by expressing them in terms of quantiles and comparison distributions, and provides a simple and powerful framework for statistical learning and data analysis. The interested reader is directed to \cite{LPmode, LPBayes, LPtime, LPFdr, LPdec} and references therein for recent advancements in mode detection, nonparametric time series, goodness-of-fit on prior distributions, and large-scale inference using an LP approach. \emph{\textbf{Organization.}} Section \ref{LPmodelling} is dedicated to a review of the main constructs of LP modelling. Section \ref{cali} highlights the practical advantages offered by modelling background distributions using an LP approach. Section \ref{inference} introduces a novel LP-based framework for statistical inference. Section \ref{DS} outlines the main steps of a \black{data-scientific approach} for signal detection and characterization. \black{In Section \ref{PSDMsec}, the methods proposed are applied in the context of dark matter searches where the goal is to distinguish $\gamma$-ray emissions due to dark matter from those due to pulsars. In Section \ref{instrument}, the tools discussed are applied to a simulation of the Fermi Large Area Telescope and it is shown how upper limits and Brazil plots can be constructed by means of comparison distributions}. Section \ref{denoising} is dedicated to model-denoising. Section \ref{stacking} presents an application to data from the NVSS astronomical survey and discusses a simple approach to assess the validity of distributional assumptions on the polarized intensity in stacking experiments.
A discussion of the main results and extensions is presented in Section \ref{discussion}. \section{LP Approach to Statistical Modelling} \label{LPmodelling} The \emph{LP Approach to Statistical Modelling} \citep{LPapproach} is a novel statistical approach which provides an ideal framework to simultaneously assess the validity of the scientific knowledge available and fill the gap between the initial scientific belief and the evidence provided by the data. \black{Sections \ref{skewGsec}, \ref{LegPoly} and \ref{LPestimate} below introduce the LP modelling framework, whereas Section \ref{cali} discusses how the problem of background mismodelling can be formulated under this paradigm.} \black{ \subsection{The skew-G density model} \label{skewGsec} Let $X$ be a continuous random variable with cumulative distribution function (cdf) $F(x)$ and probability density function (pdf) $f(x)$. Since $F$ is the true distribution of the data, it is typically unknown. However, suppose a suitable cdf $G(x)$ is available, and let $g(x)$ be the respective pdf. In order to understand if $G$ is a good candidate for $F$, it is convenient to express the relationship between the two in a concise manner.} \black{ The \emph{skew-G density model} \citep{LPapproach,LPmode} is a universal representation scheme which allows any pdf $f(x)$ to be expressed as \begin{equation} \label{skewG} f(x)=g(x)d(G(x);G,F) \end{equation} where $d(u;G,F)$ is called \emph{comparison density} \cite{manny2} and it is such that \begin{equation} \label{cd1} d(u;G,F)=\frac{f(G^{-1}(u))}{g(G^{-1}(u))}\qquad\text{with $0\leq u\leq 1$}, \end{equation} with $u=G(x)$ and $G^{-1}(u)=\inf\{x: G(x)\geq u\}$ denoting the ``postulated'' quantile function of $X$. The comparison density is the pdf of the random variable $U=G(X)$; its cdf is given by \begin{equation} \label{cd2} D(u)=F\bigl(G^{-1}(u)\bigr)=\int_0^u d(v;G,F)\partial v, \end{equation} and it is called \emph{comparison distribution}.
} \black{ \noindent\emph{\underline{Practical remarks.}} Equations \eqref{cd1} and \eqref{cd2} are of fundamental importance to understand the power of a statistical modelling approach based on the comparison density. Specifically, $d(u;G,F)$ allows us to ``connect'' any given pdf $g$ to the true pdf $f$ through the quantile transformation $G^{-1}$ of $u$. Furthermore, $g\equiv f$ if and only if $d(u;G,F)=1$ for all $u \in [0,1]$, i.e., $U$ is uniformly distributed over the interval $[0,1]$. Conversely, if $g\not\equiv f$, $d(u;G,F)$ models the departure of the true density $f(x)$ from the postulated model $g(x)$. Consequently, an adequate estimate of $d(u;G,F)$ not only leads to an estimate of the true $f(x)$ based on \eqref{skewG}, but also allows the identification of the regions where $f(x)$ deviates substantially from $g(x)$. } \black{ \subsection{LP skew-G series representation} \label{LegPoly} Denote by $L_2[0,1]$ the Hilbert space of square integrable functions on the unit interval with respect to the measure $G$. A complete, orthonormal basis of functions in $L_2[0,1]$ can be constructed by considering powers of $G(x)$, i.e., $G(x),G^2(x),G^3(x),\dots$, adequately orthonormalized via the Gram-Schmidt procedure \cite{LPmode}. The resulting basis functions can equivalently be expressed as normalized shifted \textbf{L}egendre \textbf{P}olynomials,\footnote{Classical Legendre polynomials are defined over $[-1,1]$; here, their ``shifted'' counterpart over the range $[0,1]$ is considered. The first three normalized shifted Legendre polynomials are: $Leg_0(u)=1$, $Leg_1(u) =\sqrt{12}(u-0.5)$, $Leg_2(u)=\sqrt{5}(6u^2-6u+1)$, etc.} namely $Leg_j(u)$, with $u=G(x)$.
} \black{ Under the assumption that \eqref{cd1} is a square integrable function on $[0,1]$, i.e., $d\in L_2[0,1]$, we can then represent $d(u;G,F)$ via a series of $\{Leg_j(u)\}_{j\geq0}$ polynomials, i.e., \begin{equation} \label{cd} d(u;G,F)=1+\sum_{j>0}LP_jLeg_j(u) \end{equation} with coefficients $LP_j=\int_0^1Leg_j(u)d(u;G,F)\partial u$. The representation in \eqref{cd} is called the \emph{LP skew-G series representation} \citep{LPmode}. } \black{ \subsection{LP density estimate} \label{LPestimate} Let $x_1,\dots,x_n$ be a sample of independent and identically distributed (i.i.d.) observations from $X$. Observations from $U$ are given by $u_1=G(x_1),\dots,u_n=G(x_n)$. The $LP_j$ coefficients in \eqref{cd} can then be estimated via \begin{equation} \label{LPest} \widehat{LP}_j=\frac{1}{n}\sum_{i=1}^n Leg_j(u_i). \end{equation} Alternatively, in virtue of \eqref{cd2}, the estimates $\widehat{LP}_j$ can also be specified as \begin{equation} \label{ecdf} \widehat{LP}_j=\int_0^1 Leg_j(u) \partial \tilde{D}(u)=\int_0^1 Leg_j(u) \partial \tilde{F}(G^{-1}(u)) \end{equation} where $\tilde{F}$ and $\tilde{D}$ denote the empirical distributions of the samples $x_1,\dots,x_n$ and $u_1,\dots,u_n$, respectively. } \black{ The moments of the $\widehat{LP}_j$ are \begin{equation} \label{moments} E[\widehat{LP}_j]=LP_j, \quad V(\widehat{LP}_j)=\frac{\sigma^2_j}{n}\quad\text{and}\quad Cov(\widehat{LP}_j, \widehat{LP}_k)=\frac{\sigma_{jk}}{n} \end{equation} where $\sigma^2_j=\int_0^1(Leg_j(u)-LP_j)^2d(u;G,F)\partial u$ and $\sigma_{jk}=\int_0^1(Leg_j(u)-LP_j)(Leg_k(u)-LP_k)d(u;G,F)\partial u$. When $f\equiv g$, the equalities in \eqref{moments} reduce to \begin{equation} \label{momentsH0} E[\widehat{LP}_j]=0,\quad V(\widehat{LP}_j)=\frac{1}{n}\quad\text{and}\quad Cov(\widehat{LP}_j, \widehat{LP}_k)=0 \end{equation} for all $j\neq k$. Derivations of \eqref{moments} and \eqref{momentsH0} are discussed in Appendix \ref{appA}.
} If \eqref{cd} is approximated by the first $M+1$ terms,\footnote{Recall that the first normalized shifted Legendre polynomial is $Leg_0(u)=1$.} an estimate of the comparison density is given by \begin{equation} \label{dhat} \widehat{d}(u;G,F)=1+\sum_{j=1}^{M} \widehat{LP}_{j} Leg_{j}(u), \end{equation} \black{with variance \begin{equation} \label{variancedhat} V\bigl[\widehat{d}(u;G,F)\bigr]=\sum_{j=1}^{M}\frac{\sigma^2_j}{n}Leg^2_j(u)+2\sum_{j<k}\frac{\sigma_{jk}}{n}Leg_j(u)Leg_k(u). \end{equation} See Appendix \ref{appB} for more details on the derivation of \eqref{variancedhat}. Finally, the standard error of $\widehat{d}(u;G,F)$ corresponds to the square root of \eqref{variancedhat}, with $\sigma^2_{j}$ and $\sigma_{jk}$ estimated by their sample counterparts, i.e., \begin{equation*} \begin{split} \widehat{\sigma}^2_{j}&=\frac{1}{n}\sum_{i=1}^n (Leg_j(u_i)-\widehat{LP}_j)^2\\ \widehat{\sigma}_{jk}&=\frac{1}{n}\sum_{i=1}^nLeg_j(u_i)Leg_k(u_i)-\widehat{LP}_j\widehat{LP}_k. \end{split} \end{equation*}} \begin{figure}[htb] \centering \begin{adjustbox}{center} \includegraphics[width=1\columnwidth]{Fig1} \end{adjustbox} \caption[Figure 1]{Upper panel: histogram of a source-free sample simulated from the tail of a Gaussian with mean $55$ and width $15$ (green solid line). The candidate background distribution is given by the best fit of a second-degree polynomial (red dashed line), and it is updated using the source-free data by means of \eqref{fbhat} (purple dot-dashed line). \black{The Kernel density estimator of $f_b$ is also displayed for comparison (orange dot-dashed line).} Bottom panel: comparison density estimate (blue solid line) plotted on the $x$-scale and respective standard errors (light blue area). } \label{Fig1} \end{figure} Finally, in virtue of the skew-G density model in \eqref{skewG}, we can estimate $f(x)$ as \begin{equation} \label{fhat} \widehat{f}(x)=g(x)\widehat{d}(G(x);G,F).
\end{equation} \black{ Since each $Leg_j(u)$ is a polynomial function of the random variable $U$, each $\widehat{LP}_j$ estimate can be expressed as a linear combination of the first $j$ sample moments of $U$, e.g., \[\widehat{LP}_2=\frac{1}{n}\sum_{i=1}^n Leg_2(u_i)=\sqrt{5}\Bigl(6\widehat{\mu}_2-6\widehat{\mu}_1+1\Bigr)\] where $\widehat{\mu}_2=\frac{1}{n}\sum_{i=1}^nu_i^2$ and $\widehat{\mu}_1=\frac{1}{n}\sum_{i=1}^nu_i$. Therefore, the truncation point $M$ can be interpreted as the order of the highest moment considered to characterize the distribution of $U$. (The reader is directed to Section \ref{chooseMsec} for a discussion on the choice of $M$.)} \black{ \subsection{The bias-variance trade-off} \label{biasvariance} In order to understand how good \eqref{dhat} is in estimating $d(u;G,F)$, we consider the Mean Integrated Squared Error (MISE) of $\widehat{d}(u;G,F)$, i.e., \begin{align} \label{MISE} MISE&=E\biggl[\int_0^1 \bigl(\widehat{d}(u;G,F)-d(u;G,F)\bigr)^2\partial u\biggr]\\ \label{MISE2} &=\sum_{j=1}^M\frac{\sigma^2_j}{n}+\sum_{j>M}LP^2_j \end{align} where the first term in \eqref{MISE2} corresponds to the integral of \eqref{variancedhat} over $[0,1]$, whereas the second term corresponds to the Integrated Squared Bias (ISB), i.e., \begin{equation} \label{intBias} ISB=\bigintsss_0^1 \biggl(E\bigl[\widehat{d}(u;G,F)\bigr]-d(u;G,F)\biggr)^2 \partial u. \end{equation} Interestingly, the latter can also be specified as \begin{equation} \label{intBias2} ISB=\bigintsss_0^1\biggl(\frac{f(x)-g(x)}{g(x)}\biggr)^2\partial u-\sum_{j=1}^MLP^2_j \end{equation} (see derivations in Appendix \ref{appB}). The first term on the right-hand side of \eqref{intBias2} is particularly important in understanding the role played by $g$ in obtaining a reliable estimate of $f$. Specifically, the closer $g$ is to $f$, the lower the bias of $\widehat{d}(G(x);G,F)$ and $\widehat{f}(x)$ in \eqref{fhat}.
} \black{ \noindent\emph{\underline{Practical remarks.}} Equation \eqref{MISE2} implies that larger values of $M$ do not necessarily lead to better estimates of $d(u;G,F)$. Specifically, when $n\rightarrow\infty$, the first term in \eqref{MISE2} tends to zero. However, for large values of $M$, more and more terms contribute to it, and thus increasing $M$ may lead to a substantial inflation of the variance in \eqref{variancedhat}. Conversely, the bias is not affected by the sample size and can be controlled either by choosing $g$ sufficiently close to $f$ (see \eqref{intBias2}) or by increasing $M$ while preserving a good bias-variance trade-off. } \black{ \noindent\emph{\underline{Further remarks.}} Equation \eqref{ecdf} implies that the estimator in \eqref{dhat} relies on the empirical distribution of the observed sample by means of the $\widehat{LP}_j$ estimates. Therefore, an estimator of the comparison density based entirely on the empirical cdf can be expressed by setting $M=n-1$ in \eqref{dhat}. However, as discussed in this section, while this would reduce the bias, it would also increase the variance drastically. Therefore, for $M<n-1$, the estimator in \eqref{dhat} not only leads to a reduction of the variance but, in virtue of \eqref{intBias2}, its bias is mitigated when the postulated model $g$ is sufficiently close to the true pdf of the data $f$. } \section{Data-driven corrections for misspecified background models} \label{cali} \black{ Let $\bm{x}_{\text{B}}=(x_{1},\dots,x_{N})$ be a sample of observations from control regions or the result of Monte Carlo simulations, where we expect no signal to be present. Hereafter, we refer to $\bm{x}_{\text{B}}$ as the \emph{source-free sample}. Hence, $\bm{x}_{\text{B}}$ can be used to ``learn'' the unknown pdf of the background, namely $f_b(x)$, and obtain an estimate for it via \eqref{fhat}. } Although the true background model is unknown, suppose that a candidate pdf, namely $g_b(x)$, is available.
The candidate model $g_b(x)$ can be specified from previous experiments or theoretical results or can be obtained by fitting specific functions (e.g., polynomial, exponential, etc.) to $\bm{x}_{\text{B}}$. If $g_b(x)$ does not provide an accurate description of $f_b(x)$, the sensitivity of the experiment can be strongly affected. Consider, for instance, a source-free sample of $N=5000$ observations whose true (unknown) distribution corresponds to the tail of a Gaussian with mean $55$ and width $15$ over the range $[0,50]$, i.e., \begin{equation} \label{fb} f_b(x)= \frac{e^{-\frac{1}{2}\bigl(\frac{x-55}{15}\bigr)^2}}{k_{fb}} \end{equation} with $k_{fb}=\int_{0}^{50}e^{-\frac{1}{2}\bigl(\frac{x-55}{15}\bigr)^2}\partial x$. Suppose that a candidate model for the background is obtained by fitting a second-degree polynomial on the source-free sample and adequately normalizing it in order to obtain a proper pdf, i.e., \begin{equation} \label{gb} g_b(x)=\frac{9.52 -2.22x+ 0.15x^2}{k_{gb}} \end{equation} with $k_{gb}=\int_{0}^{50}[9.52 -2.22x+ 0.15x^2]\partial x$. For illustrative purposes, assume that the distribution of the signal is a Gaussian centered at $25$, with width $4.5$ and pdf \begin{equation} \label{fs} f_s(x)= \frac{e^{-\frac{1}{2}\bigl(\frac{x-25}{4.5}\bigr)^2}}{k_{fs}} \end{equation} with $k_{fs}=\int_{0}^{50}e^{-\frac{1}{2}\bigl(\frac{x-25}{4.5}\bigr)^2}\partial x$. The histogram of the source-free sample along with \eqref{fb}-\eqref{fs} is shown in Fig. \ref{Fig1}. At the higher end of the spectrum, the postulated background (red dashed line) underestimates the true background distribution (green solid line). As a result, using \eqref{gb} as the background model increases the chance of false discoveries in this region. Conversely, at the lower end of the spectrum, $g_b(x)$ overestimates $f_b(x)$, reducing the sensitivity of the analysis.
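The toy example above can be reproduced numerically. The sketch below normalises the three densities in \eqref{fb}-\eqref{fs} by simple midpoint quadrature; the helper names and the quadrature grid are illustrative choices, not part of the original analysis.

```python
import math

def midpoint_integral(f, lo, hi, n=20000):
    """Midpoint-rule quadrature, used here to normalise the densities."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

def make_density(raw, lo=0.0, hi=50.0):
    """Turn a non-negative function on [lo, hi] into a proper pdf."""
    k = midpoint_integral(raw, lo, hi)
    return lambda x: raw(x) / k if lo <= x <= hi else 0.0

# true background: tail of a Gaussian with mean 55 and width 15 on [0, 50]
f_b = make_density(lambda x: math.exp(-0.5 * ((x - 55.0) / 15.0) ** 2))
# postulated background: normalised second-degree polynomial
g_b = make_density(lambda x: 9.52 - 2.22 * x + 0.15 * x ** 2)
# signal: Gaussian centred at 25 with width 4.5, restricted to [0, 50]
f_s = make_density(lambda x: math.exp(-0.5 * ((x - 25.0) / 4.5) ** 2))
```

Plotting `f_b` against `g_b` on $[0,50]$ reproduces the mismatch between the red dashed and green solid curves discussed above.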
\black{For the sake of comparison, a Kernel density estimate (orange dot-dashed line) has been computed by selecting the bandwidth parameter as recommended in \cite{sheater}. The latter exhibits substantial bias at the boundary and appears to overfit the data sample.} It is important to point out that the discrepancy of $f_b(x)$ from $g_b(x)$ is typically due to the fact that the specific functional form imposed (in our example, a second-degree polynomial) is not adequate for the data. Thus, changing the values of the fitted parameters (or assigning priors to them) is unlikely to solve the problem. However, it is possible to ``repair'' $g_b(x)$ and obtain a suitable estimate of $f_b(x)$ by means of \eqref{fhat}. Specifically, $f_b(x)$ can be estimated via \begin{equation} \label{fbhat} \widehat{f_b}(x)=g_b(x)\widehat{d}(G_b(x);G_b,F_b) \end{equation} where $\widehat{d}(G_b(x);G_b,F_b)$ is the comparison density estimated via \eqref{dhat} on the sample $G_b(x_{1}),\dots,G_b(x_{N})$, whereas $F_b$ and $G_b$ are the true and the postulated background distributions, with pdfs as in \eqref{fb} and \eqref{gb}, respectively. In our example, choosing $M=2$ (see Section \ref{chooseMsec}), we obtain {\fontsize{3.5mm}{3.5mm}\selectfont{ \begin{equation} \label{dbhat} \begin{split} \widehat{d}(G_b(x);G_b,F_b)=&1+0.063Leg_1[G_b(x)]-\\ &0.082Leg_2[G_b(x)],\\ \end{split} \end{equation}}} where $Leg_1[G_b(x)]$ and $Leg_2[G_b(x)]$ are the first and second normalized shifted Legendre polynomials evaluated at $G_b(x)$. Notice that, by combining \eqref{fbhat} and \eqref{dbhat}, we can easily write the background model using a series of shifted Legendre polynomials. This may be especially useful when dealing with complicated likelihoods for which a functional form is difficult to specify. The upper panel of Fig.
\ref{Fig1} shows the ``calibrated'' background model in \eqref{fbhat} as a purple dot-dashed line; it matches almost exactly the true background density in \eqref{fb} (green solid line). The plot of $\widehat{d}(G_b(x);G_b,F_b)$ in the bottom panel of Fig. \ref{Fig1} provides important insights on the deficiencies of \eqref{gb} as a candidate background model. Specifically, the magnitude and the direction of the departure of $\widehat{d}(G_b(x);G_b,F_b)$ from one correspond to the estimated departure of $f_b(x)$ from $g_b(x)$ for each value of $x$. Therefore, if $\widehat{d}(G_b(x);G_b,F_b)$ is below one in the region where we expect the signal to occur, using $\widehat{f_b}(x)$ in place of $g_b(x)$ increases the sensitivity of the analysis. Conversely, if $\widehat{d}(G_b(x);G_b,F_b)$ is above one outside the signal region, the use of $\widehat{f_b}(x)$ instead of $g_b(x)$ prevents false discoveries. Notice that in this article we only consider continuous data. In this respect, the goal is to learn the model of the background considered as a continuum and no binning is applied. Therefore, the histograms presented here are only a graphical tool used to display the data distribution and are not intended to represent an actual binning of the data. \section{LP-based inference} \label{inference} When discussing the skew-G density model in \eqref{skewG}, we have seen that $f\equiv g$ if $d(u;G,F)=1$ for all $u \in [0,1]$. Additionally, the graph of $\widehat{d}(u;G,F)$ provides an exploratory tool to understand the nature of the deviation of $f(x)$ from $g(x)$. This section introduces a novel inferential framework to test the significance of the departure of $f(x)$ from $g(x)$.
Specifically, our goal is to test the hypotheses \begin{equation} \label{hp1} \begin{split} H_0:d(u;G,F)=1 &\text{ for all $u \in [0,1]$}\\ &vs\\ H_1:d(u;G,F)\neq1 &\text{ for some $u \in [0,1]$.}\\ \end{split} \end{equation} First, an overall test, namely the \emph{deviance test}, is presented. The deviance test assesses \underline{if} $f(x)$ deviates significantly from $g(x)$ anywhere over the range of $x$ considered. Second, adequate confidence bands are constructed in order to assess \underline{where} significant departures occur. \subsection{The deviance test} \label{dev}\black{ Recall that the $LP_j$ coefficients in \eqref{cd} specify as $LP_j=\int^1_0Leg_j(u)d(u;G,F)\partial u$. Consequently, by orthogonality of the $\{Leg_j(u)\}_{j>0}$ polynomials and $Leg_0(u)=1$, when $H_0$ in \eqref{hp1} is true all the $LP_j$ coefficients are equal to zero, including the first $M$ of them. We can then quantify the departure of $\widehat{d}(u;G,F)$ from one by means of the \emph{deviance} statistic \cite{LPFdr}, which specifies as $\sum_{j=1}^{M}\widehat{LP}^2_{j}$.} If the deviance is equal to zero, we may expect that $g$ is approximately equivalent to $f$; hence, we test \begin{equation} \label{Dtest} H_0:\sum_{j=1}^{M}LP^2_{j}=0 \qquad \text{vs\qquad}H_1:\sum_{j=1}^{M}LP^2_{j}>0 \end{equation} by means of the test statistic \begin{equation} \label{D} D_M=n \sum_{j=1}^{M}\widehat{LP}^2_{j}. \end{equation} \black{It can be shown \cite{LPmode} that, as $n\rightarrow\infty$, \begin{equation} \label{lpH0} \sqrt{n}\widehat{LP}_j\xrightarrow{d} N(0, 1), \end{equation} where $\xrightarrow{d}$ denotes convergence in distribution, and thus, } under $H_0$, $D_M$ is asymptotically $\chi^2_M$-distributed. Hence, an asymptotic p-value for \eqref{Dtest} is given by \begin{equation} \label{D_distr} P(D_M>d_M)\xrightarrow[{n\rightarrow\infty}]{} P(\chi^2_M> d_M), \end{equation} where $d_M$ is the value of $D_M$ observed on the data.
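A minimal sketch of the deviance test may help fix ideas: the $\widehat{LP}_j$ are computed as sample means of the normalized shifted Legendre polynomials, $D_M$ as the scaled sum of their squares, and the $\chi^2_M$ p-value from the regularized incomplete gamma series. The function names are hypothetical and only standard-library tools are used.

```python
import math

def normalized_shifted_legendre(j, u):
    """Normalised shifted Legendre polynomial Leg_j on [0, 1]
    (Leg_0 = 1, Leg_1 = sqrt(12)(u - 1/2), ...), via the standard
    Legendre three-term recurrence on t = 2u - 1."""
    if j == 0:
        return 1.0
    t = 2.0 * u - 1.0
    p_prev, p = 1.0, t
    for n in range(1, j):
        p_prev, p = p, ((2 * n + 1) * t * p - n * p_prev) / (n + 1)
    return math.sqrt(2 * j + 1) * p

def chi2_sf(x, k):
    """Survival function of a chi-square with k dof, i.e. Q(k/2, x/2),
    from the lower incomplete gamma series."""
    if x <= 0:
        return 1.0
    s, z = k / 2.0, x / 2.0
    term, total, n = 1.0 / s, 1.0 / s, 1
    while abs(term) > 1e-15 * abs(total):
        term *= z / (s + n)
        total += term
        n += 1
    return 1.0 - math.exp(-z + s * math.log(z)) * total / math.gamma(s)

def deviance_test(u_sample, M):
    """Deviance test of H0: d(u;G,F) = 1, using the first M coefficients."""
    n = len(u_sample)
    lp = [sum(normalized_shifted_legendre(j, u) for u in u_sample) / n
          for j in range(1, M + 1)]
    d_M = n * sum(c * c for c in lp)
    return d_M, chi2_sf(d_M, M)
```

For a sample that is (approximately) uniform on $[0,1]$, i.e., $G$ close to $F$, the statistic is near zero and the p-value near one.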
\\ \noindent\emph{\underline{Practical remarks.}} Notice that $H_1$ in \eqref{Dtest} implies $H_1$ in \eqref{hp1}. Similarly, $H_0$ in \eqref{hp1} implies $H_0$ in \eqref{Dtest}; however, the opposite is not true in general, since there may be some non-zero $LP_j$ coefficients for $j>M$. Therefore, even a small choice of $M$ leads to conservative, yet valid, inference. \subsection{Confidence bands} \label{bands} \black{The estimator in \eqref{dhat} only accounts for the first $M+1$ terms of the polynomial series in \eqref{cd}. Therefore, $\widehat{d}(u;G,F)$ is a biased estimator of $d(u;G,F)$. Specifically, as discussed in Section \ref{biasvariance}, the integrated bias is given by $\sum_{j>M}LP^2_j$, whereas, as shown in Appendix \ref{appB}, the bias at a given point $u$ is given by $\sum_{j>M}LP_jLeg_j(u)$. } It follows that, when the bias is large, confidence bands based on $\widehat{d}(u;G,F)$ are shifted away from the true density $d(u;G,F)$. Although the bias cannot be easily quantified in the general setting, it follows from \eqref{momentsH0} that, \black{when $H_0$ in \eqref{hp1} (and consequently $H_0$ in \eqref{Dtest})} is true, both the bias at a point $u$ and the integrated bias are equal to zero. Thus, we can exploit this property to construct reliable confidence bands under the null. Specifically, the goal is to identify $c_{\alpha}$ such that \begin{equation} \label{significance} \begin{split} 1-\alpha&=P(-c_{\alpha}\leq \widehat{d}(u;G,F)-1\leq c_{\alpha},\text{ for all $u\in[0,1]$}|H_0)\\ &=P(\max_{u} |\widehat{d}(u;G,F)-1|\leq c_{\alpha}|H_0)\\ \end{split} \end{equation} where $\alpha$ is the desired significance level.\footnote{In astrophysics, the statistical significance $\alpha$ is often expressed in terms of the number of $\sigma$-deviations from the mean of a standard normal, namely $\sigma$. For instance, a 2$\sigma$ significance corresponds to $\alpha=1-\Phi(2)=0.0227$, where $\Phi(\cdot)$ denotes the cdf of a standard normal.
} If the bias determines where the confidence bands are centered, the distribution and the variance of $\widehat{d}(u;G,F)$ determine their width. \black{ As discussed in Section \ref{LPestimate} (see \eqref{momentsH0}), under $H_0$ in \eqref{hp1}, the $\widehat{LP}_j$ estimates have mean zero, variance $\frac{1}{n}$ and are uncorrelated with one another. Therefore, when $f\equiv g$, the standard error of $\widehat{d}(u;G,F)$ corresponds to the square root of \eqref{variancedhat} with $\sigma^2_j=1$ and $\sigma_{jk}=0$, i.e., \begin{equation} \label{SEdhat0} SE\Bigl[\widehat{d}(u;G,F)|H_0\Bigr]=\sqrt{\sum_{j=1}^{M} \frac{1}{n} Leg^2_{j}(u)}. \end{equation}} Additionally, \eqref{lpH0} implies that $\widehat{d}(u;G,F)$ is asymptotically normally distributed; hence, under $H_0$, \begin{equation} \label{pivot1} \frac{\widehat{d}(u;G,F)-1}{\sqrt{\sum_{j=1}^M\frac{1}{n}Leg_j^2(u)}}\xrightarrow{d}N(0,1) \end{equation} as $n\rightarrow\infty$, for all $u\in[0,1]$. We can then construct approximate confidence bands under $H_0$ which satisfy \eqref{significance} by means of tube formulae (see \cite[Ch.5]{larry} and \cite{PL05}), i.e., \begin{equation} \label{CIband} \Biggl[1-c_\alpha\sqrt{\sum_{j=1}^M\frac{1}{n}Leg_j^2(u)},1+c_\alpha\sqrt{\sum_{j=1}^M\frac{1}{n}Leg_j^2(u)}\Biggr], \end{equation} where $c_\alpha$ is the solution of \begin{equation} \label{CIband2} 2(1-\Phi(c_\alpha))+\frac{k_0}{\pi}e^{-0.5c^2_{\alpha}}=\alpha, \end{equation} with $k_0=\sqrt{\sum^M_{j=1}[\frac{\partial}{\partial u}Leg_j(u)]^2}$. If $\widehat{d}(u;G,F)$ is within the bands in \eqref{CIband} over the entire range $[0,1]$, we conclude that there is no evidence that $f$ deviates significantly from $g$ anywhere over the range considered \black{and at confidence level $1-\alpha$}.
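The calibration constant $c_\alpha$ in \eqref{CIband2} can be obtained numerically, e.g., by bisection, since the left-hand side is decreasing in $c_\alpha$. In the sketch below the derivatives entering $k_0$ are evaluated numerically at a fixed $u$; this evaluation point is an assumption, as it is not specified in the formula above, and the helper names are hypothetical.

```python
import math

def leg(j, u):
    """Normalised shifted Legendre polynomial on [0, 1]."""
    if j == 0:
        return 1.0
    t = 2.0 * u - 1.0
    p_prev, p = 1.0, t
    for n in range(1, j):
        p_prev, p = p, ((2 * n + 1) * t * p - n * p_prev) / (n + 1)
    return math.sqrt(2 * j + 1) * p

def k0_constant(M, u=0.5, h=1e-5):
    """k0 = sqrt(sum_j [d Leg_j / du]^2); central-difference derivatives,
    evaluated at a fixed u (an illustrative assumption)."""
    deriv2 = sum(((leg(j, u + h) - leg(j, u - h)) / (2 * h)) ** 2
                 for j in range(1, M + 1))
    return math.sqrt(deriv2)

def tube_c_alpha(alpha, M, lo=0.0, hi=10.0):
    """Solve 2(1 - Phi(c)) + (k0/pi) exp(-c^2/2) = alpha by bisection."""
    k0 = k0_constant(M)
    phi_sf = lambda c: 0.5 * math.erfc(c / math.sqrt(2))  # 1 - Phi(c)
    f = lambda c: 2 * phi_sf(c) + (k0 / math.pi) * math.exp(-0.5 * c * c) - alpha
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:      # f is decreasing in c: root lies above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As expected, smaller $\alpha$ (higher significance) yields wider bands, i.e., a larger $c_\alpha$.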
Conversely, we expect significant departures to occur in regions where $\widehat{d}(u;G,F)$ lies outside the confidence bands.\\ \noindent\emph{\underline{Practical remarks.}} Notice that, under $H_0$ in \eqref{Dtest}, $\widehat{d}(u;G,F)$ is an unbiased estimator of ${d}(u;G,F)$, regardless of the choice of $M$. This implies that the confidence bands in \eqref{CIband} are only affected by the variance and asymptotic distribution of $\widehat{d}(u;G,F)$ under $H_0$. \subsection{Choice of $M$} \label{chooseMsec} The number of $\widehat{LP}_j$ estimates considered determines the \black{level of ``smoothness''} \footnote{ \black{As an anonymous referee correctly pointed out, $\widehat{d}(u;G,F)$ is always smooth as it is constructed as a series of infinitely differentiable functions. In statistics, however, the word ``smoothness'' is often used to indicate the flexibility of the estimator considered or, in other words, its degrees of freedom. Often, this is quantified in terms of the magnitude of the second derivative of the function considered. Despite the abuse of terminology, throughout the manuscript we will refer to the latter definition of smoothness.}} of $\widehat{d}(u;G,F)$, with smaller values of $M$ leading to smoother estimates. The deviance test can be used to select the value of $M$ which maximizes the sensitivity of the analysis according to the following scheme: \begin{enumerate} \item[i.] Choose a sufficiently large value $M_{\max}$. \item[ii.] Obtain the estimates $\widehat{LP}_1,\dots, \widehat{LP}_{M_{\max}}$ as in \eqref{LPest}. \item[iii.] For $m=1,\dots,M_{\max}$: \begin{enumerate} \item[ ] calculate the deviance test p-value as in \eqref{D_distr}, i.e., \begin{equation} \label{pvalm} p(m)=P\Bigl(\chi^2_m> d_m\Bigr)\end{equation} with $d_m=n\sum_{j=1}^{m}\widehat{LP}^2_{j}$. \end{enumerate} \item[iv.] Choose $M$ such that \begin{equation} \label{chooseM} M=\underset{m}{\mathrm{argmin}}\{ p(m)\}.
\end{equation} \end{enumerate} \subsubsection{Adjusting for post-selection} As with any data-driven selection process, the scheme presented above affects the distribution of \eqref{dhat} and can yield overly optimistic inference \cite{xiaotong,potscher}. Although this aspect is often ignored in practical applications, correct coverage can only be guaranteed if adequate corrections are implemented. \black{The issues arising in the context of post-selection inference can be interpreted in terms of the look-elsewhere effect \cite{gv10,meJINST} where one has to adjust the inference for the fact that, in practice, many different models have been considered and, consequently, many different tests have been conducted for the sake of assessing the goodness of fit.} In our setting, the number of models under comparison is typically small ($M_{\max}\leq20$); therefore, post-selection inference can be easily adjusted by means of Bonferroni's correction \cite{bonferroni35}. Specifically, the adjusted deviance p-value is given by \begin{equation} \label{bonf} M_{\max}\cdot P(\chi^2_M> d_M), \end{equation} \begin{algorithm}[!h] \label{algo} \caption{A data-scientific signal search} \vspace{0.1cm} \textbf{INPUTS:} \black{source-free sample ${\bm{x}}_{\text{B}}$;}\\ \hspace{1.5cm}postulated background distribution $g_b(x)$;\\ \hspace{1.5cm}physics sample $\bm{x}$.\\ \hspace{1.5cm}\emph{If available}: signal distribution, $f_s(x,\bm{\theta}_s)$.\\ \vspace{0.3cm} \textbf{PHASE A: background calibration} \begin{enumerate} \item[\emph{Step 1:}]\black{ Estimate $\widehat{d}(u;G_b,F_b)$ on ${\bm{u}}_{\text{B}}=G_b({\bm{x}}_{\text{B}})$ and test \eqref{hp1} via deviance test and CD plot.} \item[\emph{Step 2:}] \textbf{if} $F_b\not \equiv G_b$, set $\widehat{f}_b(x)=g_b(x)\widehat{d}(u;G_b,F_b)$; \\ \textbf{else} set $\widehat{f}_b(x)=g_b(x)$.
\end{enumerate} \vspace{0.3cm} \textbf{PHASE B: signal search}\\ \vspace{0.15cm} \textbf{Stage 1: nonparametric signal detection} \begin{itemize} \item[\emph{Step 3:}] set $g(x)=\widehat{f}_b(x)$. \item[\emph{Step 4:}] estimate $\widehat{d}(u;G,F)$ on $\bm{u}=G(\bm{x})$ and test \eqref{hp1} via deviance test and CD plot. \item[\emph{Step 5:}] \textbf{if} $G\not \equiv F$, claim \underline{evidence in favor of the signal} and go to Step 6; \\ \textbf{else} set $\widehat{f}(x)=g(x)$, claim that \underline{no signal is} \underline{present} and stop. \end{itemize} \vspace{0.15cm} \textbf{Stage 2: semiparametric signal characterization} \begin{enumerate} \item[\emph{Step 6:}] \textbf{if} $f_s(x,\bm{\theta}_s)$ given, fit $g_{bs}(x)$ in \eqref{gbs};\\ \textbf{else} use the CD plot of $\widehat{d}(u;G,F)$ and the theory available to specify/fit a suitable model for $f_s(x,\bm{\theta}_s)$ and fit $g_{bs}(x)$ in \eqref{gbs}. \item[\emph{Step 7:}] estimate $\widehat{d}(u;G_{bs},F)$ on $\bm{u}=G_{bs}(\bm{x})$ and test \eqref{hp1} via deviance test and CD plot. \item[\emph{Step 8:}] \textbf{if} $G_{bs}\not \equiv F$, claim \underline{evidence of unexpected signal} and use the CD plot of $\widehat{d}(u;G_{bs},F)$ and the theory available to further investigate the nature of the deviation from $G_{bs}$;\\ \textbf{else} go to Step 9. \item[\emph{Step 9:}] compute $\widehat{\widehat{d}(}u;G,F)$ as in \eqref{semipard} and use it to refine $\widehat{f}_b(x)$ or $f_s(x,\widehat{\bm{\theta}_s})$ as in \eqref{refine}. Go back to Step 3. \end{enumerate} \end{algorithm} where $M$ is the value selected via \eqref{chooseM}, whereas confidence bands can be adjusted by substituting $c_\alpha$ in \eqref{CIband}, with $c_{\alpha,M_{\max}}$ satisfying \begin{equation} \label{CIband3} 2(1-\Phi(c_{\alpha,M_{\max}}))+\frac{k_0}{\pi}e^{-0.5c^2_{\alpha,M_{\max}}}=\frac{\alpha}{M_{\max}}.
\end{equation} \noindent\emph{\underline{Practical remarks.}} As noted in Section \ref{LPmodelling}, the estimate \eqref{dhat} involves the first $M$ sample moments of $U$; therefore, $M_{\max}$ can be interpreted as the order of the highest moment which we expect to contribute in discriminating the distribution of $U$ from uniformity. \black{Notice that, in addition to the inflation of the variance of \eqref{dhat}, when $M$ is large, the computation of normalized shifted Legendre polynomials of higher order may face numerical instability (see Section \ref{AICtest}). } Therefore, as a rule of thumb, $M_{\max}$ is typically chosen $\leq20$. Finally, Steps i-iv aim to select the approximant based on the first most significant $M$ moments, while excluding powers of higher order. A further note on model-denoising is given in Section \ref{denoising}. \section{A data-scientific approach to signal searches} \label{DS} The tools presented in Sections \ref{LPmodelling} and \ref{inference} provide a natural framework to simultaneously \begin{itemize} \item[(a)] assess the validity of the postulated background model and, if necessary, update it using the data (Section \ref{cali}); \item[(b)] perform signal detection on the physics sample; \item[(c)] characterize the signal when a model for it is not available. \end{itemize} Furthermore, if the model for the signal is known (up to some free parameters), it is possible to \begin{itemize} \item[(d)] further refine the background or signal distribution; \item[(e)] detect hidden signals from new unexpected sources. \end{itemize} Notice that, since Bonferroni's correction leads to an upper bound for the overall significance, the resulting coverage will be higher than the nominal one. Alternatively, approximate post-selection confidence bands and inference can be constructed using Monte Carlo and/or resampling methods and repeating the selection process at each replicate. Tasks (a)-(e) can be tackled in two main phases.
In the first phase, the postulated background model is ``calibrated'' on a source-free sample in order to improve the sensitivity of the analysis and reduce the risk of false discoveries. The second phase focuses on searching for the signal of interest and involves both a nonparametric signal detection stage and a semiparametric stage for signal characterization. Both phases and respective steps are described in detail below and summarized in Algorithm 1. \begin{figure}[htb] \centering \begin{adjustbox}{center} \includegraphics[width=90mm]{Fig2} \end{adjustbox} \caption[Figure 2]{Deviance test and CD plot for the source-free sample. The comparison density is estimated via \eqref{dbhat} (solid blue line), \black{ whereas its standard error (light blue area) is computed as the square root of the estimate of the variance in \eqref{variancedhat}.} Finally, confidence bands have been constructed around one (grey areas) via \eqref{CIband} with $c_\alpha$ replaced by $c_{\alpha,M_{\max}}$ in \eqref{CIband3}. The notation $\geq 2\sigma$ is used to highlight that Bonferroni's correction has been applied to adjust for post-selection inference, leading to an increase of the nominal coverage. } \label{Fig2} \end{figure} \subsection{Background calibration} \label{bkgcali} As discussed in Section \ref{cali}, deviations of $\widehat{d}(G_b(x);G_b,F_b)$ from one suggest that a further refinement of the candidate background model $g_b$ is needed. However, as $M$ increases, the deviations of $\widehat{d}(G_b(x);G_b,F_b)$ from one may become more and more prominent while the variance inflates. Thus, it is important to assess if such deviations are indeed significant. In order to address this task, the analysis of Section \ref{cali} can be further refined in light of the inferential tools introduced in Section \ref{inference}.
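Before turning to the toy example, the deviance test in \eqref{pvalm}, the selection rule \eqref{chooseM} and the Bonferroni adjustment \eqref{bonf} can be condensed into a short sketch (Python with NumPy/SciPy; here \texttt{u} denotes the transformed sample $G_b(\bm{x}_{\text{B}})$ and the helper names are ours):

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.stats import chi2

def lp_estimates(u, m_max):
    # LP_j-hat = sample mean of Leg_j(u_i) for j = 1, ..., m_max, where Leg_j
    # are the normalized shifted Legendre polynomials on [0, 1].
    u = np.asarray(u, dtype=float)
    lp = np.empty(m_max)
    for j in range(1, m_max + 1):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0
        lp[j - 1] = np.mean(np.sqrt(2 * j + 1) * legendre.legval(2.0 * u - 1.0, coeffs))
    return lp

def select_m(u, m_max=20):
    # Deviance statistics d_m = n * sum_{j<=m} LP_j^2 and p(m) = P(chi^2_m > d_m);
    # pick M = argmin_m p(m) and report the Bonferroni-adjusted p-value.
    n = len(u)
    lp = lp_estimates(u, m_max)
    d = n * np.cumsum(lp ** 2)
    pvals = chi2.sf(d, df=np.arange(1, m_max + 1))
    m_star = int(np.argmin(pvals)) + 1
    return m_star, min(1.0, m_max * pvals[m_star - 1])
```

An adjusted p-value close to one (capped at one, as in the text) indicates no significant deviation at any of the orders considered.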
For the toy example discussed in Section \ref{cali}, we have seen that $g_b$ overestimates $f_b$ in the signal region and underestimates it at the higher end of the range considered (Fig. \ref{Fig1}). We can now assess if any of these deviations are significant by implementing the deviance test in \eqref{D}-\eqref{D_distr}, whereas, to identify where the most significant departures occur, we construct confidence bands under the null model as in \eqref{CIband}, i.e., assuming that no ``update'' of $g_b$ is necessary. The results are collected in the \emph{comparison density plot} or \emph{CD plot} presented in Fig. \ref{Fig2}. First, a value $M=2$ has been selected as in \eqref{chooseM}, and the respective deviance test (adequately adjusted via Bonferroni) indicates that the deviation of $f_b$ from $g_b$ is significant at a $6.430\sigma$ significance level (adjusted p-value of $6.397\cdot10^{-11}$). Additionally, the estimated comparison density in \eqref{dbhat} lies outside the $2\sigma$ confidence bands in the region $[0,50]$ where the signal specified in \eqref{fs} is expected to occur. Hence, using \eqref{fbhat} instead of \eqref{gb} is recommended in order to improve the sensitivity of the analysis in the signal region.\\ \noindent\underline{\emph{Important remarks on the CD plot.}} When comparing different models for the background or when assessing if the data distribution deviates from the model expected when no signal is present, it is common practice to visualize the results of the analysis by superimposing the models under comparison on the histogram of the data observed on the original scale (e.g., upper panel of Fig. \ref{Fig1}). This corresponds to a data visualization in the density domain. Conversely, the CD plot (e.g., Fig. \ref{Fig2}) provides a representation of the data in the quantile domain, which offers the advantage of connecting the true density of the data with the quantiles of the postulated model (see \eqref{cd1}-\eqref{cd2}).
Consequently, the most substantial departures of the data distribution from the expected model are magnified, and those due to random fluctuations are smoothed out (see, also, Section \ref{upperlimits}). Furthermore, the deviance tests and the CD plot together provide a powerful goodness-of-fit and exploratory tool which, in contrast to classical methods such as Anderson-Darling \cite{anderson} and Kolmogorov-Smirnov \cite{darling}, not only allows one to test \underline{if} the distributions under comparison differ, but also to assess \underline{how} and \underline{where} they differ. As a result, the CD plot can be used to characterize the unknown signal distribution (see Section \ref{signalcar}) and to identify exclusion regions (e.g., Case I in Section \ref{nonpar}). \black{As an additional advantage, the deviance test appears to enjoy higher detection power than classical approaches. This aspect is highlighted in Table \ref{GOF} where several methods for goodness of fit or two-sample comparisons are implemented, along with the deviance test, for all the cases discussed in Section \ref{DS}. }\\ \noindent\underline{\emph{Reliability of the calibrated background model.}} The size $N$ of the source-free sample plays a fundamental role in the validity of $\widehat{f}_b(x)$ as a reliable background model. Specifically, the randomness involved in \eqref{fbhat} only depends on the $\widehat{LP}_j$ estimates. If $N$ is sufficiently large, by the strong law of large numbers, \[P\bigl(\underset{{N\rightarrow\infty}}{\lim}\widehat{LP}_j=LP_j\bigr)=1.\] Therefore, although the variance of $\widehat{f}_b(x)$ becomes negligible as $N\rightarrow\infty$, one has to account for the fact that $\widehat{f}_b(x)$ leads to a biased estimate of $f_b(x)$ when $f_b\not\equiv g_b$ (see Section \ref{biasvariance}).
For sufficiently smooth densities, a visual inspection is often sufficient to assess if $\widehat{d}(u;G_b,F_b)$ (and, consequently, $\widehat{f}_b(x)$) provides a satisfactory fit for the data, whereas, for more complex distributions, the effect of the bias can be mitigated by considering larger values of $M$ and model-denoising (see Section \ref{denoiseAIC}). {\fontsize{3mm}{3mm}\selectfont{ \begin{table*} \begin{tabular}{|c|ccc|cc|} \hline & & & & &\\[-1.5ex] & \multicolumn{3}{c|}{Goodness-of-fit test p-values} & \multicolumn{2}{c|}{Two-sample test p-values } \\[-1.5ex] & & & & &\\[-1.5ex] & & & & &\\[-1.5ex] \multirow{1}{*}{\bf Sample}& \textbf{Anderson-Darling}& \textbf{Cramer-von Mises} & \textbf{Deviance (adjusted)} & \textbf{Kolmogorov-Smirnov} & \textbf{Wilcoxon Rank Sum }\\[-1ex] & & & & &\\[-1.5ex] \hline & & & & &\\[-1.5ex] \multirow{1}{*}{Calibration} & $1.2\cdot 10^{-7}$ & $4.2\cdot10^{-7}$ & $3.2\cdot 10^{-12}$ ($6.4\cdot 10^{-11}$) &- & - \\[-1ex] & & & & &\\[-1.5ex] \hline & & & & &\\[-1.5ex] \multirow{1}{*}{Case I} & $0.7776$ & $0.7711$ & $0.2657$ ($>1$) &0.9248 &0.5487 \\[-1ex] & & & & &\\[-1.5ex] \hline & & & & &\\[-1.5ex] Case II& $4.6\cdot10^{-7}$ & $8.2\cdot10^{-11}$ & $9.0\cdot10^{-33}$ ($1.8\cdot10^{-31}$) & $1.9\cdot10^{-13}$ & $4.5\cdot10^{-12}$\\[-1ex] & & & & &\\[-1.5ex] \hline & & & & &\\[-1.5ex] Case III& $4.6\cdot10^{-7}$ & $2.6\cdot10^{-10}$ & $2.6\cdot10^{-28}$ ($5.2\cdot10^{-27}$) & $ 2.1\cdot10^{-15}$ & $2.2\cdot10^{-16}$\\[-2ex] & & & & &\\ \hline \end{tabular} \caption[Table 1]{\black{Comparison of deviance test and classical inferential tools. The first two columns report the p-values of Anderson-Darling \cite{anderson} and Cramer-von Mises \cite{darling} goodness-of-fit tests obtained assuming as theoretical distribution the same $G$ indicated in Sections \ref{bkgcali} and \ref{signalsearch} for the calibration phase, case I, II and III, respectively.
The raw deviance p-values and their post-selection adjusted counterparts are reported in the third column. Finally, the fourth and fifth columns report, respectively, the Kolmogorov-Smirnov \cite{darling} and Wilcoxon rank sum \cite{wilcoxon} tests used to compare directly the physics samples in Case I, II and III with the source-free sample used in Section \ref{bkgcali}.} } \label{GOF} \end{table*} }} \begin{figure*}[htb] \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{Fig3a} & \includegraphics[width=90mm]{Fig3b}\\ \end{tabular*} \caption[Figure 3]{Deviance test and CD plots for Case~I where no signal is present (left panel) and Case~II where the signal is present (right panel). In both cases, the postulated distribution $G$ corresponds to the cdf of the calibrated background model in \eqref{gest}. For the sake of comparison, $d(u;G,F)$ has been estimated via \eqref{dhat} with $M=4$ for both samples. } \label{Fig3} \end{figure*} \begin{figure}[htb] \centering \begin{adjustbox}{center} \includegraphics[width=0.9\columnwidth]{Fig4} \end{adjustbox} \caption[Figure 4]{Histogram of a physics sample of $n=1300$ observations from both background and signal and with pdf as in \eqref{fbs} (grey solid line). The true density has been estimated semiparametrically, as in \eqref{gbs} (pink dashed line), whereas the nonparametric estimates of $f(x)$ have been computed as in \eqref{fhat}, by plugging in the $\widehat{d}(G(x);G,F)$ estimates obtained with $M=4$ (blue dot-dashed line) and $M=9$ (black dotted line). } \label{Fig4} \end{figure} \subsection{Signal search} \label{signalsearch} \subsubsection{Nonparametric signal detection} \label{nonpar} The background calibration phase allows the specification of a well-tailored model for the background, namely $\widehat{f}_b(x)$, which simultaneously integrates the initial guess, $g_b$, and the information carried by the source-free data sample.
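In terms of code, the calibrated background of \eqref{gest} can be evaluated as follows (Python/NumPy sketch; we assume the truncated series form $\widehat{d}(u)=1+\sum_{j=1}^{M}\widehat{LP}_j\,Leg_j(u)$ of Section \ref{LPestimate}, with $Leg_j$ the normalized shifted Legendre polynomials, and the function names are ours):

```python
import numpy as np
from numpy.polynomial import legendre

def d_hat(u, lp):
    # Truncated comparison-density estimate d-hat(u) = 1 + sum_j LP_j-hat * Leg_j(u),
    # where lp holds the estimated coefficients LP_1-hat, ..., LP_M-hat.
    u = np.asarray(u, dtype=float)
    out = np.ones_like(u)
    for j, lp_j in enumerate(lp, start=1):
        coeffs = np.zeros(j + 1)
        coeffs[j] = 1.0
        out += lp_j * np.sqrt(2 * j + 1) * legendre.legval(2.0 * u - 1.0, coeffs)
    return out

def calibrated_background(x, g_b_pdf, g_b_cdf, lp):
    # f_b-hat(x) = g_b(x) * d-hat(G_b(x)): the postulated background density,
    # tilted by the estimated comparison density.
    x = np.asarray(x, dtype=float)
    return g_b_pdf(x) * d_hat(g_b_cdf(x), lp)
```

With all $\widehat{LP}_j=0$ (no evidence against $g_b$) the second function returns $g_b$ itself, consistently with Step 2 of Algorithm 1.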
Hereafter, we disregard the source-free data sample and focus on analyzing the physics sample. Under the assumption that the source-free sample has no significant source contamination, we expect that, if the signal is absent, both the source-free and the physics sample follow the same distribution. Therefore, the calibrated background model, $\widehat{f}_b(x)$, plays the role of the postulated distribution for the physics sample, i.e., the model that we expect the data to follow when no signal is present; hence, we set $g(x)=\widehat{f}_b(x)$. Let $f(x)$ be the (unknown) true pdf of the physics sample which may or may not carry evidence in favor of the source of interest. When no model for the signal is specified, it is reasonable to consider any significant deviation of $f$ from $g$ as an indication that a signal of unknown nature may be present. In this setting, similarly to the background calibration phase, we can construct deviance tests and CD plots to assess if and where significant departures of $f$ from $g$ occur. Two possible scenarios are considered -- a physics sample which collects only background data (Case I) and a physics sample of observations from both background and signal (Case II). \emph{\textbf{Case I: background-only.}} Let $\bm{x}$ be a physics sample of $n=1300$ observations whose true (unknown) pdf $f(x)$ is equivalent to $f_b(x)$ in \eqref{fb}. We set \begin{equation} \label{gest} g(x)=\widehat{f}_b(x)=g_b(x)\widehat{d}(G_b(x);G_b,F_b) \end{equation} where $g_b(x)$ and $\widehat{d}(G_b(x);G_b,F_b)$ are defined as in \eqref{gb} and \eqref{dbhat}, respectively. The resulting CD plot and deviance test are reported in the left panel of Fig. \ref{Fig3}. When applying the scheme in Section \ref{chooseMsec} with $M_{\max}=20$, none of the values of $M$ considered leads to significant results; therefore, for the sake of comparison with Case II below, we choose $M=4$. 
Not surprisingly, the estimated comparison density approaches one over the entire range and lies entirely within the confidence bands. This suggests that the true distribution of the data does not differ significantly from the model which accounts only for the background. Similarly, the deviance test leads to very low significance (adjusted p-value $>1$); hence, we conclude that our physics sample does not provide evidence in favor of the new source. \emph{\textbf{Case II: background + signal.}} Let $\bm{x}$ be a physics sample of $n=1300$ observations whose true (unknown) pdf $f(x)$ is equal to $f_{bs}(x)$ in \eqref{fbs} \begin{equation} \label{fbs} f_{bs}(x)=(1-\eta)f_b(x)+\eta f_s(x) \end{equation} with $f_b(x)$ and $f_s(x)$ defined as in \eqref{fb} and \eqref{fs} respectively, and $\eta=0.15$. The histogram of the data and the graph of $f_{bs}(x)$ are plotted in Fig. \ref{Fig4}. As in Case I, we set $g(x)$ as in \eqref{gest}. \begin{figure*}[htb] \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{Fig5a} & \includegraphics[width=90mm]{Fig5b}\\ \includegraphics[width=90mm]{Fig6a} & \includegraphics[width=90mm]{Fig6b}\\ \end{tabular*} \caption[Figure 5]{Upper panels: deviance test and CD plots for Case~IIa where the signal is present and the postulated distribution $G_{bs}$ corresponds to the cdf of the estimated background+signal model in \eqref{gbs} with $\widehat{\eta}=0.146$. The comparison density estimate has been obtained considering $M=3$. Bottom panels: Deviance test and CD plots for Case~III where, in addition to the signal of interest, an additional resonance is present. The data are first analyzed considering the background-only pdf in \eqref{gb} as the postulated model (left panel). The analysis is then repeated by assuming the fitted background + signal model in \eqref{gbs} as the postulated distribution (right panel).
Both estimates of the comparison density in the left and right panels have been computed as in \eqref{dhat} with $M=9$. } \label{Fig5} \end{figure*} The CD plot and deviance test in the right panel of Fig. \ref{Fig3} show a significant departure of the data distribution from the background-only model in \eqref{gest}. The maximum significance of the deviance is achieved at $M=4$, leading to a rejection of the null hypothesis at an $11.611\sigma$ significance level (adjusted p-value$=1.799\cdot10^{-31}$). The CD plot shows a prominent peak at the lower end of the spectrum; hence, we conclude that there is evidence in favor of the signal, and we proceed to characterize its distribution as described in Section \ref{signalcar}. \subsubsection{Semiparametric signal characterization} \label{signalcar} The signal detection strategy proposed in Section \ref{nonpar} does not require the specification of a distribution for the signal. However, if a model for the signal is known (up to some free parameters), the analysis can be further refined by providing a parametric estimate of the comparison density and assessing if additional signals from new unexpected sources are present. \textbf{\emph{Case IIa: background + (known) signal.} } Assume that a model for the signal, $f_s(x,\bm{\theta}_s)$, is given, with $\bm{\theta}_s$ being a vector of unknown parameters. Since the CD plot in the right panel of Fig. \ref{Fig3} provides evidence in favor of the signal, we expect the data to be distributed according to the pdf \begin{equation} \label{fhatbs} \widehat{f}_{bs}(x)=(1-\eta)\widehat{f}_b(x)+\eta f_s(x,\bm{\theta}_s), \qquad 0\leq\eta\leq 1, \end{equation} where $\widehat{f}_b(x)$ is the calibrated background distribution in \eqref{gest} and $\eta$ and $\bm{\theta}_s$ can be estimated via Maximum Likelihood (ML).
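A minimal sketch of this fitting step is given below (Python/SciPy; for simplicity $f_s$ is taken as fully specified, so that only the mixture weight $\eta$ is profiled — in general $\bm{\theta}_s$ would be fit jointly, and the function name is ours):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_eta(x, f_b, f_s):
    # ML estimate of the signal fraction eta in the mixture
    # f(x) = (1 - eta) * f_b(x) + eta * f_s(x), with f_b and f_s fully specified.
    fb = np.asarray(f_b(x), dtype=float)
    fs = np.asarray(f_s(x), dtype=float)

    def negative_log_likelihood(eta):
        return -np.sum(np.log((1.0 - eta) * fb + eta * fs))

    result = minimize_scalar(negative_log_likelihood, bounds=(0.0, 1.0), method="bounded")
    return result.x
```

The one-dimensional bounded search suffices here because, for fixed $f_b$ and $f_s$, the mixture log-likelihood is concave in $\eta$.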
Letting $\widehat{\eta}$ and $\widehat{\bm{\theta}}_s$ be the ML estimates of $\eta$ and $\bm{\theta}_s$ respectively, we specify \begin{equation} \label{gbs} g_{bs}(x)=(1-\widehat{\eta})\widehat{f}_b(x)+\widehat{\eta} f_s(x,\widehat{\bm{\theta}}_s) \end{equation} as the postulated model. For simplicity, let $f_s$ be fully specified as in \eqref{fs}; we construct the deviance test and the CD plot to assess if \eqref{gbs} deviates significantly from the true distribution of the data. The scheme in Section \ref{chooseMsec} has been implemented with $M_{\max}=20$, and none of the values of $M$ considered led to significant results. The CD plot and deviance test for $M=4$ are reported in the upper left panel of Fig. \ref{Fig5}. Both the large p-value of the deviance test (adjusted p-value$> 1$) and the CD plot suggest that no significant deviations occur; thus, \eqref{gbs} is a reliable model for the physics sample. Moreover, we can use \eqref{gbs} to further refine our $\widehat{f}_b(x)$ or $f_s(x,\widehat{\bm{\theta}}_s)$ distributions. Specifically, we first construct a semiparametric estimate of $d(G(x);G,F)$, i.e., \begin{equation} \label{semipard} \widehat{\widehat{d}(}G(x);G,F)=(1-\widehat{\eta})+\widehat{\eta}\frac{f_s(x,\widehat{\bm{\theta}}_s)}{\widehat{f}_b(x)}, \end{equation} and rewrite \begin{equation} \label{refine} \begin{split} \widehat{\widehat{f}_b}(x)&=\frac{\widehat{f}_b(x)\widehat{\widehat{d}(}G(x);G,F)-\widehat{\eta}f_s(x,\widehat{\bm{\theta}}_s)}{(1-\widehat{\eta})}\\ \widehat{\widehat{f}_s}(x)&=\frac{\widehat{f}_b(x)\widehat{\widehat{d}(}G(x);G,F)-(1-\widehat{\eta})\widehat{f}_b(x)}{\widehat{\eta}}.\\ \end{split} \end{equation} In the upper right panel of Fig. \ref{Fig5}, the true comparison density (grey dashed line) of our physics sample is compared with its semiparametric estimate computed as in \eqref{semipard} (pink dashed line) with $f_s(x,\widehat{\bm{\theta}}_s)=f_s(x)$ in \eqref{fs}.
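The semiparametric estimate and the refinement step amount to a few lines of code (Python/NumPy sketch with our own function names; the comparison density with respect to $g(x)=\widehat{f}_b(x)$ is simply the ratio of the fitted mixture to the calibrated background):

```python
import numpy as np

def semiparametric_d(x, f_b_hat, f_s_hat, eta_hat):
    # Comparison density of the fitted mixture w.r.t. the calibrated background:
    # g_bs(x) / f_b-hat(x) = (1 - eta) + eta * f_s(x) / f_b-hat(x).
    x = np.asarray(x, dtype=float)
    return (1.0 - eta_hat) + eta_hat * f_s_hat(x) / f_b_hat(x)

def refined_signal(x, f_b_hat, d_semipar, eta_hat):
    # Refined signal density: f_s = [f_b * d - (1 - eta) * f_b] / eta.
    fb = np.asarray(f_b_hat(np.asarray(x, dtype=float)), dtype=float)
    return (fb * d_semipar - (1.0 - eta_hat) * fb) / eta_hat
```

Plugging the first function into the second recovers $f_s$ exactly, mirroring the algebraic identities in \eqref{refine}.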
The graphs of two nonparametric estimates of $d(u;G,F)$ computed via \eqref{dhat} with $M=4$ and $M=9$ (blue dot-dashed line and black dotted line), respectively, are added to the same plot. Not surprisingly, incorporating the information available on the signal distribution drastically improves the accuracy of the analysis. The semiparametric estimate matches $d(u;G,F)$ almost exactly, whereas both nonparametric estimates show some discrepancies from the true comparison density. All the estimates suggest that there is only one prominent peak in correspondence with the signal region. When moving from the comparison density domain to the density domain in Fig. \ref{Fig4}, the discrepancies between the nonparametric estimates and the true density $f(x)$ are substantially magnified. Specifically, when computing \eqref{dhat} and \eqref{fhat} with $M=4$ (blue dot-dashed line), the height of the signal peak is underestimated whereas, when choosing $M=9$, $\widehat{f}(x)$ exhibits high bias at the boundaries\footnote{Boundary bias is a common problem among nonparametric density estimation procedures \cite[e.g.,][Ch.5, Ch.8]{larry}. When aiming for a non-parametric estimate of the data density $f(x)$, solutions exist to mitigate this problem \cite[e.g.,][]{efromovich}. } (dotted black line). \textbf{\emph{Case IIb: background + (unknown) signal.} } When the signal distribution is unknown, the CD plot of $\widehat{d}(u;G,F)$ can be used to guide the scientist in navigating across the different theories on the astrophysical phenomenon under study and in specifying a suitable model for the signal, i.e., $f_{s}$. The model proposed can then be validated, as in Case IIa, by fitting \eqref{gbs} and constructing deviance tests and CD plots. At this stage, the scientist has the possibility to iteratively query the data and explore the distribution of the signal by assuming different models.
A viable signal characterization is achieved when no significant deviations of $\widehat{d}(u,G_{bs},F)$ from one are observed (e.g., see upper left panel of Fig. \ref{Fig5}). Notice that a similar approach can be followed also in the background calibration stage (Section \ref{bkgcali}) to provide a parametric characterization of the background distribution. \textbf{\emph{Case III: background + (known) signal + unexpected source.} } The tools proposed so far can also be used to detect signals from unexpected sources whose pdfs are, by design, unknown. \begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{HistoDM} & \includegraphics[width=90mm]{HistoPS}\\ \end{tabular*} \caption[Figure 6]{\black{Dark matter and pulsar samples. The left panel corresponds to the histogram of a sample of 2000 observations simulated from the model in \eqref{DM} with $M_\chi=2.5$. The right panel corresponds to the histogram of a sample of 2000 observations simulated from the model in \eqref{PS} with $\tau=2$. The best fits of \eqref{DM} and \eqref{PS} are also reported as a blue solid line and a black dashed line, respectively, on top of each histogram.}} \label{hists} \end{figure*} Suppose that the physics sample $\bm{x}$ contains $n=1300$ observations whose true (unknown) pdf $f(x)$ is equal to $f_{bsh}(x)$ \begin{equation} \label{fbsh} f_{bsh}(x)=(1-\eta_1-\eta_2)f_b(x)+\eta_1 f_s(x)+\eta_2 f_h(x) \end{equation} where $f_h(x)$ is the pdf of the unexpected signal, assumed to be normal with center at 37 and width 1.8. Let $f_b(x)$ and $f_s(x)$ be defined as in \eqref{fb} and \eqref{fs}, respectively, and let $\eta_1=0.15$ and $\eta_2=0.1$. We can start with a nonparametric signal detection stage by setting $g(x)=\widehat{f}_b(x)$ as in \eqref{gest}. The respective CD plot and deviance tests are reported in the bottom left panel of Fig.
\ref{Fig5}. Choosing $M=9$, as in \eqref{chooseM}, both the CD plot and deviance test indicate a significant departure from the expected background-only model and a prominent peak is observed in correspondence with the signal of interest centered around 25. A second but weaker peak appears to be right on the edge of our confidence bands, suggesting the possibility of an additional source. At this stage, if $f_s$ were unknown, we could proceed with a semiparametric signal characterization as in Case IIb. Conversely, assuming that the distribution of the signal of interest is known and given by \eqref{fs}, we fit \eqref{gbs}, aiming to capture a significant deviation in correspondence with the second bump. This is precisely what we observe in the bottom right panel of Fig. \ref{Fig5}. Here the estimated comparison density deviates from one around 35, providing evidence in favor of an additional signal in this region. We can then proceed as in Case IIb by exploring the theories available and/or collecting more data to further investigate the nature and the cause of the unanticipated bump. \black{ \section{Signal detection without calibration sample and model selection} \label{PSDMsec} There are situations where a source-free sample is simply not available and thus the calibration phase in Section \ref{bkgcali} cannot be implemented. The tools described in Sections \ref{LPmodelling} and \ref{inference} can, however, still be applied in order to perform signal detection and goodness-of-fit when a model for the signal is known, up to some free parameters. In this framework, we expect the data to either come only from the signal (with at most some negligible background contamination) or only from the background.} \black{ In order to illustrate how to proceed in this setting, we consider a dark matter search where the postulated model for dark matter $\gamma$-ray emissions is the one of \cite[][Eq.
29]{bergstrom}, i.e., \begin{equation} \label{DM} g_{DM}(y)=\frac{0.73 M_{\chi}^{1.5}}{yk_{M_{\chi}}}\exp\biggl\{-7.8\frac{y}{M_{\chi}}\biggr\} \end{equation} with $y\in [0.5, 5]$ teraelectronvolts (TeV), $M_{\chi}\in [0.5, 5]$ TeV and $k_{M_{\chi}}$ is a normalizing constant. The goal is to show that, when considering a background-only sample, the method proposed correctly rejects \eqref{DM} as a suitable model for the data; whereas, when considering a dark matter sample, the dark matter model in \eqref{DM} is ``accepted''.} \black{ To further increase the complexity of the problem, we consider a situation where the background sample corresponds to $\gamma$-ray emissions due to a pulsar, with distribution \begin{equation} \label{PS} g_{PS}(y)= \frac{1}{y k_\tau}\exp\biggl\{-\biggl(\frac{y}{y_0}\biggr)^{\tau}\biggr\}, \end{equation} with $y_0=0.5$, $y\in [0.5, 5]$ TeV, $\tau>0$, and $k_{\tau}$ a normalizing constant. Notice that, as discussed in \cite{baltz}, distinguishing $\gamma$-ray emissions due to pulsars from those due to dark matter is a particularly challenging task. The histograms of the two datasets considered are shown in Figure \ref{hists}; the overlapping curves correspond to the best fit of the models in \eqref{DM} and \eqref{PS} on each sample. Interestingly, for both samples, \eqref{DM} and \eqref{PS} provide a very similar fit to the data; hence the importance of correctly selecting the most adequate model or excluding the dark matter hypothesis when observing emissions due to pulsars.} \black{ The upper panels of Figure \ref{cdPSDM} display the CD plots obtained by setting $g=g_{DM}$ in \eqref{DM} as the postulated model and comparing it with the distribution of the dark matter sample (upper left panel) and of the background pulsar sample (upper right panel).
Remarkably, the CD plots and the adjusted deviance tests correctly lead to the conclusion that the distribution of the dark matter sample does not deviate significantly from \eqref{DM}, whereas the distribution of the pulsar sample does deviate substantially from \eqref{DM} and the deviance test (adequately adjusted for post-selection inference) rejects the dark matter model with $3.897\sigma$ significance (adjusted p-value of $4.870\cdot 10^{-5}$). Notice that, in both cases, we are ignoring the information regarding the pulsar distribution and the only inputs considered are the data and the signal model in \eqref{DM}.} \black{ Finally, when incorporating the knowledge of the pulsar distribution in \eqref{PS} into the analysis, one can select between the models in \eqref{DM} and \eqref{PS} by constructing additional CD plots and deviance tests for both samples and setting $g=g_{PS}$ in \eqref{PS}. The results are shown in the lower panels of Figure \ref{cdPSDM}. As expected, the dark matter model is rejected (lower left panel) with $2.297\sigma$ significance (adjusted p-value of $0.0108$) whereas the pulsar model is ``accepted'' (lower right panel).} \begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{DMsample_DM_CDplotN2000} & \includegraphics[width=90mm]{PSsample_DM_CDplotN2000}\\ \includegraphics[width=90mm]{DMsample_PS_CDplotN2000} & \includegraphics[width=90mm]{PSsample_PS_CDplotN2000}\\ \end{tabular*} \caption[Figure 7]{ \black{ CD plots and deviance test for dark matter and pulsar samples. The upper left panel displays CD plot and deviance test for the dark matter sample with $g=g_{DM}$ in \eqref{DM}. The upper right panel compares the distribution of the pulsar sample with the model in \eqref{DM}. The lower left panel displays CD plot and deviance test for the dark matter sample with $g=g_{PS}$ in \eqref{PS}.
The lower right panel displays CD plot and deviance test for the pulsar sample with $g=g_{PS}$ in \eqref{PS}. For the plots on the left, the size of the basis selected is $M=6$, whereas, for the plots on the right, $M=5$.}} \label{cdPSDM} \end{figure*} \black{ \section{Background mismodelling due to instrumental noise and upper limit constructions} \label{instrument}} When conducting real data analyses one has to take into account that the data generating process is affected by both statistical and non-random uncertainty due to the instrumental noise. As a result, even when a model for the background is known, the data distribution may substantially deviate from it due to the smearing introduced by the detector \cite[e.g.,][]{lyonsPHY}. In order to account for the instrumental error affecting the data, it is common practice to consider folded distributions where the errors due to the detector are often modelled assuming a normal distribution or estimated via non-parametric methods \cite[e.g.,][]{PHY,PHY2}. \black{In Section \ref{modinstr}, it is shown how the same approach described in Sections \ref{bkgcali} and \ref{nonpar} can be used to assess if the instrumental error is negligible and, when not, how to update the postulated background model in order to incorporate the instrumental noise. Section \ref{upperlimits} discusses upper limit constructions by means of comparison distributions}. \black{ \subsection{Modelling the instrumental error} \label{modinstr}} The data considered come from a simulated observation by the Fermi Large Area Telescope \cite{atwood} with realistic representations of the effects of the detector and present backgrounds \cite{meJINST,meMNRAS}. The Fermi-LAT is a pair-conversion $\gamma$-ray telescope on board the earth-orbiting Fermi satellite. It measures energies and images $\gamma$-rays between about 100 MeV and several TeV.
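The models \eqref{DM} and \eqref{PS} above are defined only up to the normalizing constants $k_{M_\chi}$ and $k_\tau$, which are fixed by requiring each density to integrate to one over $[0.5, 5]$ TeV. A sketch of how they can be obtained numerically (composite Simpson's rule; the parameter values used below are illustrative, not fitted values from the analysis):

```python
from math import exp

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule on [a, b]; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def k_dm(m_chi, a=0.5, b=5.0):
    # Normalizing constant of the dark matter model, Eq. (DM).
    return simpson(lambda y: 0.73 * m_chi ** 1.5 * exp(-7.8 * y / m_chi) / y, a, b)

def k_ps(tau, y0=0.5, a=0.5, b=5.0):
    # Normalizing constant of the pulsar model, Eq. (PS).
    return simpson(lambda y: exp(-((y / y0) ** tau)) / y, a, b)
```

With these constants, $g_{DM}$ and $g_{PS}$ integrate to one over the search region by construction.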
The goal of the analysis is to assess if the data could result from the self-annihilation of a dark matter particle. \begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{FermiCAL} & \includegraphics[width=90mm]{Fig8}\\ \end{tabular*} \caption[Figure 8]{Histograms of simulated Fermi-LAT samples. \black{The left panel displays the histogram of a source-free simulated Fermi-LAT sample of $N=35,157$ observations, whereas the black dashed line corresponds to the best fit of the power-law model in \eqref{gb2}. The right panel shows} the histogram of two simulated Fermi-LAT physics samples of $n=200$ observations. The grey histogram corresponds to the background-only sample, whereas the blue histogram corresponds to the dark matter signal sample.} \label{Fig8} \end{figure*} \begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{Fig6} & \includegraphics[width=90mm]{Fig88}\\ \end{tabular*} \caption[Figure 6]{\black{Deviance test and CD plot for the source-free simulated Fermi-LAT sample including the instrumental error (left panel) and simulated sample with no instrumental error (right panel). In both cases $M=4$.} } \label{Fig6} \end{figure*} Let the distribution of the astrophysical background be \black{a power-law, i.e.,} \begin{equation} \label{gb2} g_b(x)=\frac{1}{k_{\phi}x^{\phi+1}} \end{equation} where $k_{\phi}$ is a normalizing constant and $x\in [1,35]$ gigaelectronvolts (GeV). \black{Equation \eqref{gb2} corresponds to the distribution we would expect the background to follow if there were no smearing of the detector. The left panel of Figure \ref{Fig8} shows the histogram of a source-free sample of 35,157 i.i.d.
observations from a power-law distributed background source with index 2.4 (i.e., $\phi=1.4$ in \eqref{gb2}) and contaminated by instrumental errors of unknown distribution.} \black{In order to assess if \eqref{gb2} is a suitable distribution for these data, we proceed by fitting \eqref{gb2} via maximum likelihood and setting it as the postulated background distribution. The best fit of \eqref{gb2} is displayed on the left panel of Figure \ref{Fig8} as a black dashed line.} \black{We proceed by estimating $d(G_b(x);G_b,F_b)$ and $f_b$ as in \eqref{dhat} and \eqref{fbhat} respectively, with $M=4$ (chosen as in Section \ref{chooseMsec}). The deviance test and CD plot are reported in the left panel of Figure \ref{Fig6} and suggest that significant departures from the fitted power-law model occur. This implies that the instrumental error is not negligible and thus, in order to account for it, we consider the ``calibrated'' background density } \begin{equation} \label{fbhat2} \begin{split} \widehat{f}_b(x)&=\frac{1}{k_{\hat{\phi}}x^{\hat{\phi}+1}}\Bigl(1+0.027Leg_1[G_b(x)]-0.067Leg_2[G_b(x)]\\ & + 0.026Leg_3[G_b(x)]-0.045Leg_4[G_b(x)]\Bigr),\\ \end{split} \end{equation} where $G_b(x)$ is the cdf of \eqref{gb2} and $\hat{\phi}=1.359$ is the ML estimate of $\phi$ in \eqref{gb2}. \black{For the sake of comparison, the same analysis has been repeated considering $35,157$ i.i.d. observations from a power-law background source with index 2.4, without instrumental error. The respective CD plot and deviance test are shown on the right panel of Figure \ref{Fig6} and indicate that the power-law model in \eqref{gb2}, with $\phi$ replaced by its MLE (i.e., $\widehat{\phi}=1.391$), provides a good fit for the data, i.e., the instrumental error is, in this case, absent or negligible.
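The calibrated density \eqref{fbhat2} is straightforward to evaluate once $G_b$ is written in closed form; for the power law in \eqref{gb2} truncated on $[1,35]$, $G_b(x)=(1-x^{-\phi})/(1-35^{-\phi})$. The sketch below assumes the $Leg_j$ are the orthonormal shifted Legendre polynomials on $[0,1]$, $Leg_j(u)=\sqrt{2j+1}\,P_j(2u-1)$, as customary in LP modelling; since each $Leg_j$ with $j\ge 1$ integrates to zero, \eqref{fbhat2} remains a proper density:

```python
from math import sqrt

PHI_HAT = 1.359                      # ML estimate of phi in Eq. (gb2)
LP = [0.027, -0.067, 0.026, -0.045]  # LP coefficients of Eq. (fbhat2)
A, B = 1.0, 35.0                     # search region in GeV

def G_b(x, phi=PHI_HAT):
    # cdf of the power law g_b truncated on [A, B].
    return (1.0 - x ** (-phi)) / (1.0 - B ** (-phi))

def g_b(x, phi=PHI_HAT):
    # pdf of the truncated power law, Eq. (gb2).
    k_phi = (1.0 - B ** (-phi)) / phi
    return 1.0 / (k_phi * x ** (phi + 1.0))

def leg(j, u):
    # Orthonormal shifted Legendre polynomial on [0, 1] (Bonnet recurrence).
    if j == 0:
        return 1.0
    t, p_prev, p = 2.0 * u - 1.0, 1.0, 2.0 * u - 1.0
    for k in range(1, j):
        p_prev, p = p, ((2 * k + 1) * t * p - k * p_prev) / (k + 1)
    return sqrt(2.0 * j + 1.0) * p

def f_b_hat(x):
    # Calibrated background density, Eq. (fbhat2).
    u = G_b(x)
    return g_b(x) * (1.0 + sum(c * leg(j + 1, u) for j, c in enumerate(LP)))
```

A quick numerical check confirms that $\int_1^{35}\widehat{f}_b(x)\,dx\approx 1$.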
} \black{ \subsection{Signal detection and upper limit construction} \label{upperlimits}} Once a calibrated background distribution has been obtained, we proceed with the signal detection phase by setting $g(x)=\widehat{f}_b(x)$ in \eqref{fbhat2}. Similarly to Section \ref{nonpar}, two physics samples are given; one containing 200 observations from the background source distributed as in \eqref{gb2}, and the other containing 200 observations from a dark matter emission. The signal distribution from which the data have been simulated is the pdf of $\gamma$-ray dark matter energies in \cite[][Eq. 28]{bergstrom} \black{with $M_{\chi}=3.5$}. Both physics samples include the contamination due to the instrumental noise \black{with unknown distribution}. The respective histograms are shown in the right panel of Fig. \ref{Fig8}. The selection scheme in Section \ref{chooseMsec} suggests that no significant departure from \eqref{fbhat2} occurs on the background-only physics sample, whereas, for the signal sample, the strongest significance is observed at $M=3$; therefore, for the sake of comparison, we choose $M=3$ in both cases. The respective deviance tests and CD plots are reported in Fig. \ref{Fig7}. As expected, the upper left panel of Fig. \ref{Fig7} shows a flat estimate of the comparison density on the background-only sample. Conversely, the upper right panel of Fig. \ref{Fig7} suggests that an extra bump is present over the $[2,3.5]$ region with $3.318\sigma$ significance (adjusted p-value = $4.552\cdot 10^{-4}$). As in \eqref{semipard}, it is possible to proceed with the signal characterization stage (see Section \ref{signalcar}); however, in this setting, one has to account for the fact that the signal distribution must also include the smearing effect of the detector.
\begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{Fig7c} & \includegraphics[width=90mm]{Fig7d}\\ \includegraphics[width=90mm]{FermiBKG} & \includegraphics[width=90mm]{FermiBS}\\ \end{tabular*} \caption[Figure 7]{\black{Deviance test, CD plots and Brazil plots for the simulated Fermi-LAT background-only sample of size 200 (left panels) and the simulated Fermi-LAT dark matter signal sample of size 200 (right panels). In both cases, the postulated distribution $G$ corresponds to the cdf of the calibrated background model in \eqref{fbhat2}. For the sake of comparison, $d(u;G,F)$ has been estimated via \eqref{dhat}, with $M=3$ in both cases. }} \label{Fig7} \end{figure*} \black{ As an anonymous referee pointed out, it is important to discuss how upper limits and Brazil plots can be constructed via LP modelling and how they relate to the constructs discussed so far in this manuscript. Indeed, the confidence bands reported in the CD plots are themselves upper limits. Specifically, in the signal detection framework of Section \ref{nonpar}, the confidence bands in \eqref{CIband} are constructed assuming that there is no signal in the data. That is, they correspond to the regions where the comparison density estimator is expected to lie, at $1-\alpha$ confidence level, if the data include background-only events. Conversely, any deviation from the confidence bands characterizes the quantiles of the distribution where the data distribution does not conform with the one postulated under the assumption that no signal is present. } \black{When the interest is in identifying areas of the search region where deviations from the background model occur, one can exploit the fact that $u=G(x)$, and thus upper limits and classical ``Brazil plots'' based on the comparison density can be obtained by plotting \eqref{dbhat} and the respective confidence bands in \eqref{CIband} as a function of $x$.
This is shown, for our Fermi-LAT example, in the bottom panels of Figure \ref{Fig7}. Indeed, the upper and bottom panels in Figure \ref{Fig7} carry essentially the same information in two different domains. Specifically, the CD plots display the departure of $f$ from $g$ in the quantile domain whereas the Brazil plots show the same differences in the frequency domain. For signal detection purposes, the bottom panels may be preferred to identify the locations where substantial deviations between the background and signal models occur, whereas the CD plots are more suitable for goodness-of-fit purposes as they provide a simultaneous visualization of the differences occurring at each quantile of the distribution. } \section{Model denoising} \label{denoising} \black{As discussed in Section \ref{biasvariance}, the choice of $M$ affects the resulting estimator of $d(u;G,F)$ in terms of both bias and variance. When dealing with complex background distributions, a large value of $M$ may be necessary to reduce the bias of the estimated comparison density. At the same time, however, a large value of $M$ leads to an inflation of the variance. In other words, considering a basis of $M$ shifted Legendre polynomials may lead to overfitting.} \black{Practically speaking, overfitting leads to wiggly (i.e., non-smooth) estimates and thus one may overcome this limitation by attempting to denoise the estimator in \eqref{dhat}}. Section \ref{denoiseAIC} reviews the model-denoising approach proposed by \cite{LPapproach,LPmode}, whereas Section \ref{AICtest} briefly discusses inference and model selection in this setting. Finally, Section \ref{comparison} compares the results obtained with a full and a denoised solution on the examples of Section \ref{DS}. \subsection{AIC denoising} \label{denoiseAIC} Let $\widehat{LP}_1,\dots, \widehat{LP}_M$ be the estimates of the first $M$ coefficients of the expansion in \eqref{cd}.
The most ``significant'' ${LP}_j$ coefficients are selected by sorting the respective $\widehat{LP}_j$ estimates so that \[\widehat{LP}^2_{(1)}\geq \widehat{LP}^2_{(2)}\geq \dots \geq \widehat{LP}^2_{(M)}\] and choosing the value $k=1,\dots,M$ for which $AIC(k)$ in \eqref{AIC} is maximum \begin{equation} \label{AIC} AIC(k)=\sum_{j=1}^{k} \widehat{LP}^2_{(j)}-\frac{2k}{n}. \end{equation} The AIC-denoised estimator of $d(u;G,F)$ is given by \begin{equation} \label{dhat2} \widehat{d}^*(u;G,F)=1+\sum_{j=1}^{k^*_M} \widehat{LP}_{(j)} Leg_{(j)}(u) \end{equation} where $\widehat{LP}_{(j)}$ is the estimate whose square is the $j^{\text{th}}$ largest among $\widehat{LP}^2_{1},\dots,\widehat{LP}^2_{M}$, $Leg_{(j)}(u)$ is the respective shifted Legendre polynomial and \begin{equation} \label{mstar} k^*_M=\underset{k}{\mathrm{argmax}} \{AIC(1),\dots, AIC(M)\}. \end{equation} \noindent\emph{\underline{Practical remarks.}} Recall that the first $M$ coefficients $LP_j$ can be expressed as a linear combination of the first $M$ moments of $U$. Thus, the AIC-denoising approach selects the $LP_j$ coefficients which carry all the ``sufficient'' information on the first $M$ moments of the distribution. \subsection{Inference after denoising} \label{AICtest} The deviance test can be used, as in Section \ref{chooseMsec}, to choose the size of the initial basis of $M$ polynomials among $M_{\max}$ possible models. Finally, the $k^\star_M$ largest coefficients are chosen by maximizing \eqref{AIC}. This two-step procedure selects $\widehat{d}^*(u;G,F)$ in \eqref{dhat2} from a pool of $M_{tot}=M_{\max}+\frac{M(M-1)}{2}$ possible estimators. Therefore, the Bonferroni-adjusted p-value of the deviance test is given by \begin{equation} \label{adjk} M_{tot}\cdot P(\chi^2_{k_M^*}>d_{k_M^*}) \end{equation} with $d_{k_M^*}=\sum_{j=1}^{k_M^*} \widehat{LP}^2_{(j)}$.
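The selection rule in \eqref{AIC}--\eqref{mstar} amounts to a few lines of code. A sketch with our own naming (the convention $AIC(0)=0$, which allows $k^*_M=0$ when no coefficient is worth keeping, is an assumption we add to match the behaviour described for Case I in Section \ref{comparison}):

```python
def aic_denoise(lp_hat, n):
    """Return the 1-based indices j of the coefficients retained by
    maximizing AIC(k) = sum_{j<=k} LP^2_(j) - 2k/n."""
    # Sort coefficient indices by decreasing squared magnitude.
    order = sorted(range(len(lp_hat)), key=lambda j: -lp_hat[j] ** 2)
    best_k, best_aic, cum = 0, 0.0, 0.0  # AIC(0) = 0: keep nothing if no gain
    for k, j in enumerate(order, start=1):
        cum += lp_hat[j] ** 2
        aic = cum - 2.0 * k / n
        if aic > best_aic:
            best_aic, best_k = aic, k
    return sorted(j + 1 for j in order[:best_k])
```

For instance, with $n=200$ and estimated coefficients $(0.30, 0.01, -0.25, 0.02)$ only $Leg_1$ and $Leg_3$ survive, whereas with all coefficients near zero nothing is retained.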
Similarly, confidence bands can be constructed as \begin{equation} \label{CIband4} \Biggl[1-c_{\alpha,M_{tot}}\sqrt{\sum_{j=1}^{k_M^*}\frac{1}{n}Leg_{(j)}^2(u)},1+c_{\alpha,M_{tot}}\sqrt{\sum_{j=1}^{k_M^*}\frac{1}{n}Leg_{(j)}^2(u)}\Biggr] \end{equation} where $c_{\alpha,M_{tot}}$ is the solution of \begin{equation} \label{CIband3b} 2(1-\Phi(c_{\alpha,M_{tot}}))+\frac{k_0}{\pi}e^{-0.5c^2_{\alpha,M_{tot}}}=\frac{\alpha}{M_{tot}}. \end{equation} \black{\emph{\underline{Practical remarks.}} Given the possibility of denoising our solution, one may legitimately wonder why not consider a large value of $M_{\max}$, e.g., $M_{\max}=100$, and then select $k^\star_{M_{\max}}$ directly. In other words, why should we first implement the procedure in Section \ref{chooseMsec} and, only after, refine our estimator as in Section \ref{denoiseAIC} and not vice-versa? There are two main reasons why such an approach is discouraged. } \black{ First of all, one has to take into account that, when ignoring the selection stage proposed in Section \ref{chooseMsec}, there is no guarantee that the resulting $k^\star_{M_{\max}}$ would include all the $\widehat{LP}_j$ terms that provide the strongest evidence in favor of $H_1$ in \eqref{Dtest}. Therefore, the resulting significance can in principle be lower than the one achieved via \eqref{adjk}. Indeed, the AIC criterion in \eqref{AIC} aims to improve the fit of the estimator to the data, whereas the deviance selection criteria in \eqref{chooseM} aim to maximize the power of the inferential procedure.} \black{ Second, choosing $M_{\max}=100$ is computationally infeasible with most of the standard programming languages such as \texttt{R} and \texttt{Python}, and the numerical computation of \eqref{dhat} may easily lead to divergent or inaccurate results.
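Numerically, $M_{tot}$ and the critical value $c_{\alpha,M_{tot}}$ of \eqref{CIband3b} are easy to obtain. A sketch using bisection; here $k_0$ denotes the constant appearing in \eqref{CIband3b} (defined earlier in the paper) and is simply passed in as an argument:

```python
from math import erf, exp, pi, sqrt

def m_tot(m_max, m):
    # Total number of candidate estimators: M_max + M(M-1)/2.
    return m_max + m * (m - 1) // 2

def band_constant(alpha, mtot, k0, hi=10.0, tol=1e-10):
    # Solve 2(1 - Phi(c)) + (k0/pi) exp(-c^2/2) = alpha/mtot for c >= 0.
    # The left-hand side is decreasing in c (for k0 >= 0), so bisection applies.
    target = alpha / mtot
    def lhs(c):
        return (2.0 * (1.0 - 0.5 * (1.0 + erf(c / sqrt(2.0))))
                + (k0 / pi) * exp(-0.5 * c * c))
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lhs(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, $M_{\max}=20$ and $M=2$ give $M_{tot}=21$, the correction used for the calibration example of Table \ref{AICdev}; with $k_0=0$, \texttt{band\_constant} reduces to the usual two-sided normal quantile (e.g. $\approx 1.96$ when $\alpha/M_{tot}=0.05$).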
} \begin{figure*}[htb] \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}c@{}c@{}} \includegraphics[width=45mm]{denoise1} &\includegraphics[width=45mm]{denoise2}&\includegraphics[width=45mm]{denoise3}& \includegraphics[width=45mm]{denoise4}\\[-3ex] \multicolumn{4}{c}{\includegraphics[width=180mm]{legend}} \\[-12ex] \end{tabular*} \caption[Figure 8]{\black{Comparison of the estimators $\widehat{d}(u;G,F)$ (blue solid lines) and $\widehat{d}^*(u;G,F)$ (pink dashed lines) for the toy examples in Section \ref{DS}. The true comparison densities $d(u;G,F)$ are displayed as dotted black curves. The light gray areas correspond to the confidence bands of the full solution whereas dark gray areas refer to the confidence bands of the denoised solution. In all the examples proposed, the latter is almost entirely overlapping with the former. }} \label{denoisefig} \end{figure*} {\fontsize{3mm}{3mm}\selectfont{ \begin{table*} \begin{tabular}{|l|c|cc|c|} \hline & &&&\\ &\textbf{$M,k_M^*$}& \textbf{Method} &\textbf{Deviance } & \textbf{Adjusted }\\ &\textbf{selected }& &\textbf{ p-values} & \textbf{ p-values}\\ & & &&\\[-1.5ex] \hline & & && \\[-1.5ex] \textbf{Toy example}& $M=2$&Full& $p(2)=3.199\cdot 10^{-12}$ & $20\cdot p(2)=6.397\cdot 10^{-11}$\\ \textbf{Calibration} & $k^*_2=2$ &Denoised& $p(2)=3.199\cdot 10^{-12}$ & $21\cdot p(2)=6.717\cdot 10^{-11}$\\[-1ex] & & && \\[-1.5ex] \hline & & && \\[-1.5ex] \textbf{Toy example}&$M=18$ &Full & $p(18)=0.2657$&$20\cdot p(18)>1$\\ \textbf{Case I} &$k^*_{18}=2$ &Denoised& $p(2)=5.096\cdot 10^{-4}$& $173\cdot p(2)=0.0882$ \\[-1ex] & & && \\[-1.5ex] \hline & & && \\[-1.5ex] \textbf{Toy example} &$M=4$&Full& $p(4)=8.994\cdot 10^{-33}$ & $20\cdot p(4)=1.799\cdot 10^{-31}$\\ \textbf{Case II} &$k^*_4=4$&Denoised& $p(4)=8.994\cdot 10^{-33}$& $26\cdot p(4)=2.338\cdot 10^{-31}$ \\[-1ex] & & && \\[-1.5ex] \hline & & && \\[-1.5ex] \textbf{Toy example} & $M=9$&Full&$p(9)=2.590\cdot 10^{-28}$& $20\cdot p(9)=5.181\cdot 10^{-27}$\\
\textbf{Case III} & $k^*_9=6$&Denoised& $p(6)=4.457\cdot 10^{-30}$ & $56\cdot p(6)=2.496\cdot 10^{-28}$\\ [-1ex] & & && \\ \hline \end{tabular} \caption[Table 2]{Model selection and inference for the toy example in Section \ref{DS}. The second column reports the $M$ and $k^*_M$ values selected as in \eqref{chooseM} and \eqref{mstar}, respectively. The third column collects the unadjusted deviance p-values for the full and denoised solutions. The Bonferroni-adjusted p-values, computed as in \eqref{bonf} and \eqref{adjk}, are reported in the fourth column. The correction terms applied correspond to $M_{\max}=20$ for the full solution and $M_{tot}=M_{\max}+\frac{M(M-1)}{2}$ for the denoised solution. } \label{AICdev} \end{table*} }} \subsection{Comparing full and denoised solution} \label{comparison} Fig. \ref{denoisefig} compares the fit of the estimators $\widehat{d}(u;G,F)$ and $\widehat{d}^*(u;G,F)$ for the examples in Section \ref{DS}. For all the cases considered, $M$ and $k^*_M$ have been selected as in \eqref{chooseM} and \eqref{mstar} (see second column of Table \ref{AICdev}). When no significance was achieved for any of the values of $M$ considered, a small basis of $M=3$ or $M=4$ polynomials was chosen for the full estimator $\widehat{d}(u;G,F)$, which was then further denoised in order to obtain $\widehat{d}^*(u;G,F)$. Table \ref{AICdev} shows the results of the deviance tests of the full and the denoised solution for the examples in Section \ref{DS}. The unadjusted p-values and the Bonferroni-adjusted p-values are reported in the third and fourth columns, respectively. In \black{half of the} cases, $k^*_M=M$ and the estimators $\widehat{d}(u;G,F)$ and $\widehat{d}^*(u;G,F)$ overlap over the entire range $[0,1]$. The inferential results were also approximately equivalent in the majority of the situations considered. The main differences are observed in the analysis of the background-only physics sample (Case I).
In this case, the deviance-selection procedure leads to non-significant results for all the values of $M$ considered; the minimum p-value is observed at $M=18$ (unadjusted p-value = $0.2657$). In this setting, the denoising process leads to $k^*_{18}=2$ and the respective unadjusted p-value is $5.096\cdot10^{-4}$. This further emphasizes the importance of adjusting for model selection in order to avoid false discoveries. For modelling purposes and for the sake of comparison with the case where a signal is present, a basis of $M=4$ was selected. Since the true distribution of the data is the same as the postulated one, the denoising process sets all the coefficients equal to zero ($k^*_M=0$). For Case III, only $k^*_M=6$ out of $M=9$ coefficients are selected when denoising (see Table \ref{AICdev}). Although the right panel of Fig. \ref{denoisefig} shows that the full and the denoised solutions are almost overlapping, the latter leads to an increased sensitivity (adjusted p-value=$2.496\cdot10^{-28}$) compared to the full solution (adjusted p-value=$5.181\cdot10^{-27}$). \begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=1\columnwidth]{HistNVSSOriginal}&\includegraphics[width=1\columnwidth]{HistNVSS}\\ \end{tabular*} \caption[Figure 9]{Histograms of the NVSS samples. \black{ The left panels show the source-free sample with and without outliers (upper left and lower left panels, respectively). In both cases, the best fit of the Rayleigh model in \eqref{gb3} is displayed as a black curve. The right panel compares the source-free sample without outliers (grey histogram) with the source sample (blue histogram) truncated over the search area considered.
}} \label{NVSSfig} \end{figure*} \begin{figure}[htb] \centering \begin{adjustbox}{center} \includegraphics[width=1\columnwidth]{control} \end{adjustbox} \caption[Figure 10]{ Deviance test and CD plot for the NVSS source-free sample of size $28,739$ compared to the Rayleigh distribution in \eqref{gb3}. } \label{Fig10} \end{figure} These results suggest that the denoising approach can easily adapt to situations where a sparse solution is preferable (i.e., when only a few of the $M$ coefficients $LP_j$ are non-zero) without enforcing sparsity when many of the $M$ coefficients considered are needed to adequately fit the data (e.g., bottom right panel of Fig. \ref{denoisefig}). From an inferential perspective, denoising can improve the sensitivity of the analysis; however, in order to avoid false discoveries, extra care needs to be taken when the deviance selection procedure leads to large p-values for all the $M_{\max}$ models considered. \section{An application to stacking experiments} \label{stacking} In radio astronomical surveys, stacking techniques are often used to combine noisy images or ``stacks'' in order to increase the signal-to-noise ratio and improve the sensitivity of the analysis in detecting faint sources \cite[e.g.,][]{lawrence,white,jeroen}. In polarized signal searches, for instance, a faint population of sources is considered to be detected when the median polarized intensity observed over control regions differs significantly from the median of the region where the sources are expected to be present. In this context, \black{under simplifying assumptions}, the distribution of the intensity of the source polarization is often assumed \black{to follow a Rice distribution, i.e., \begin{equation} \label{rice} f(x)=\frac{xe^{-\frac{x^2+\nu^2}{2\sigma^2}}}{k_{\nu\sigma^2}}\text{Bessel}\bigl(\frac{x\nu}{\sigma^2}\bigr) \end{equation} where $\text{Bessel}(\cdot)$ denotes the Bessel function of the first kind of order zero and $k_{\nu\sigma^2}$ is a normalizing constant.
Furthermore, \eqref{rice} reduces to a Rayleigh pdf when no signal is present \cite{simmons}, i.e., when $\nu=0$}. Below, it is shown how the methods described in Sections \ref{bkgcali} and \ref{nonpar} can be used to assess whether the Rayleigh distribution is a reliable model for the background and, \black{when too simplistic}, investigate the impact of incorrectly assuming a Rayleigh distribution on the \black{reliability} of the analysis. \begin{figure*} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}@{}c@{}c@{}} \includegraphics[width=90mm]{source1} & \includegraphics[width=90mm]{source2}\\ \end{tabular*} \caption[Figure 11]{Deviance tests and CD plots for the NVSS source sample assuming $g(x)$ to be the calibrated background model in \eqref{fbhat3} (left panel) and when letting $g(x)$ be the pdf of the truncated Rayleigh distribution in \eqref{gb3} (right panel). In both cases the estimator of the comparison density has been denoised as described in Section \ref{denoiseAIC}. The values of $M$ and $k^*$ considered are $M=k^*=6$ and $M=9$, $k^*=8$ for the estimators on the left and right panels, respectively.} \label{compare} \end{figure*} The data considered come from the NRAO VLA Sky Survey (NVSS) \cite{NVSS}. The NVSS is an astronomical survey of the Northern hemisphere carried out by the Very Large Array of the National Radio Astronomy Observatory. The NVSS has detected 1.8 million sources in total intensity, but only $14\%$ of these have reported a \black{polarized signal peak} greater than $3\sigma$ \cite{jeroen}. \black{The original source-free sample contained $29,915$ observations collected from four different control regions for each source with a brightness in total intensity between 0 and 0.0093 Jy/beam (see upper left panel of Figure \ref{NVSSfig}). However, this sample appears to contain several outliers which affect the data distribution, making it far from Rayleigh.
A better Rayleigh fit is obtained when removing the outliers\footnote{In statistics, an observation $x_i$ is considered an outlier if $x_i<Q_{0.25}-1.5[Q_{0.75}-Q_{0.25}]$ or $x_i>Q_{0.75}+1.5[Q_{0.75}-Q_{0.25}]$, where $Q_{0.25}$ and $Q_{0.75}$ are the first and the third sample quartiles.} (see bottom left panel of Figure \ref{NVSSfig}). Since understanding the cause of these anomalous observations is beyond the scope of this manuscript, we proceed by excluding them from the analysis and we focus on assessing the validity of the Rayleigh assumption on the remaining $28,739$ observations over the region $[0, 0.0009]$ Jy/beam. It has to be noted that the nominal noise in NVSS polarization is 0.00029 Jy/beam, and a reasonable threshold for the detection of an individual source may be expected to be three times the noise. Hence, a source sample of $6,220$ observations has been selected from positions where compact radio sources with a brightness in total intensity between 0 and 0.0009 Jy/beam are known to be present. Both source-free and source samples are assumed to be i.i.d. The histograms of the source-free and signal samples considered are shown in the right panel of Fig. \ref{NVSSfig}.} As a first step, we fit a Rayleigh distribution (adequately truncated over the range $[0,0.0009]$) on the source-free sample, i.e., \begin{equation} \label{gb3} g_b(x)=\frac{x e^{-\frac{x^2}{2\widehat{\sigma}^2}}}{k_{\widehat{\sigma}^2}} \end{equation} where $k_{\widehat{\sigma}^2}$ is a normalizing constant, $\widehat{\sigma}=0.0003$ is the ML estimate of the unknown parameter $\sigma$, and $x\in[0,0.0009]$ Jy/beam. In order to assess if \eqref{gb3} provides a good fit for the data, we estimate the comparison density $d(G_b(x);G_b,F_b)$ by, first, selecting $M$ as in \eqref{chooseM} and then applying the AIC-based denoising approach described in Section \ref{denoiseAIC}. In this case, the denoised solution selects $k^*=9$ out of $M=10$ polynomial terms.
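The outlier rule in the footnote (the classical $1.5\times$IQR fences) can be sketched as follows; quartile conventions differ slightly across software, so observations very close to the fences may be classified differently:

```python
def iqr_filter(xs):
    """Keep only the observations inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    s = sorted(xs)
    n = len(s)
    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (n - 1)
        i = int(pos)
        frac = pos - i
        return s[i] if i + 1 >= n else s[i] * (1.0 - frac) + s[i + 1] * frac
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if lo <= x <= hi]
```

This is the rule used above to trim the source-free sample from $29,915$ to the $28,739$ retained observations.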
The deviance tests and the CD plot in Fig. \ref{Fig10} suggest that, despite the fact that the median of the data coincides with the one of the Rayleigh model, overall, the latter does not provide a good fit for the distribution of the source-free sample. Specifically, the data distribution shows a heavier right tail than the one expected under the Rayleigh assumption, whereas the first quantiles are overestimated by the Rayleigh model. Therefore, the researcher can either decide to use a more refined parametric model for the background or consider the calibrated background distribution of the form in \eqref{fbhat}, which in our setting specifies as \begin{equation} \label{fbhat3} \begin{split} &\widehat{f}_b(x)=\frac{x e^{-\frac{x^2}{2\widehat{\sigma}^2}}}{k_{\widehat{\sigma}^2}}\Bigl(1-0.018Leg_1[G_b(x)]+0.012Leg_2[G_b(x)]\\ &+0.052Leg_3[G_b(x)]-0.014Leg_4[G_b(x)]+0.047Leg_5[G_b(x)]\\ &-0.018Leg_6[G_b(x)]+0.031Leg_7[G_b(x)]+0.016Leg_9[G_b(x)]\\ &-0.015Leg_{10}[G_b(x)]\Bigr),\\ \end{split} \end{equation} where $G_b(x)$ is the cdf of \eqref{gb3}. The strategy described in Section \ref{nonpar} allows us to identify where significant differences between the control and source sample occur. In order to assess the effect of incorrectly assuming a Rayleigh background, we compare the distribution of the physics sample with both the Rayleigh and the calibrated background distribution in \eqref{fbhat3}. Figure \ref{compare} reports deviance tests and CD plots obtained on the physics sample when setting $g(x)=\widehat{f}_b(x)$ as in \eqref{fbhat3} (left panel) and $g(x)=g_b(x)$ as in \eqref{gb3} (right panel). Both analyses provide strong evidence that the distribution of the physics sample differs significantly from the postulated models $\widehat{f}_b(x)$ and $g_b(x)$, and the most substantial discrepancies occur on the right tail of the distribution. However, since the Rayleigh model underestimates the right tail of the background distribution (see Fig.
\ref{Fig10}), it leads to an artificially enhanced sensitivity in this region. The differences between the two CD plots are less prominent around the median expected under $\widehat{f}_b(x)$ and $g_b(x)$ (i.e., at $u=0.5$ in both plots). \black{Fig. \ref{compare}} suggests that, for these data, assuming a background Rayleigh distribution would not substantially affect the results of a comparison \black{between the source-free and signal sample} based on the median. However, focusing solely on the median can strongly \black{limit} the overall sensitivity of the analysis since the major differences occur at the higher quantiles of the distribution. \black{On the other hand, assuming a Rayleigh distribution for the background would artificially inflate the evidence in favor of the source. Specifically, the sigma significance of the deviance test obtained under the Rayleigh background assumption is $23.655\sigma$ (adjusted p-value = $5.178\cdot 10^{-124}$), whereas the one obtained using \eqref{fbhat3} is $20.225\sigma$ (adjusted p-value = $2.948\cdot 10^{-91}$). } Conversely, the calibrated background model in \eqref{fbhat3} allows us to safely compare the entire distribution of the polarized intensity in the source and control regions via CD plots and deviance tests without \black{affecting} the sensitivity of the analysis. \section{Discussion} \label{discussion} This article proposes a unified framework for signal detection and characterization under background mismodelling. From a methodological perspective, the methods presented here extend LP modelling to the inferential setting. The solution discussed is articulated in two main phases: a calibration phase where the background model is ``trained'' on a source-free sample and a signal search phase conducted on the physics sample collected by the experiment.
If a model for the signal is given, the method proposed allows the identification of hidden signals from new unexpected sources and/or the refining of the postulated background or signal distributions. Furthermore, the tools presented in this manuscript can be easily extended to situations where a source-free sample is not available and the background is unknown (up to some free parameters). \black{As discussed in Section \ref{PSDMsec}, however, in this setting the signal distribution is required to be known, and the physics sample is expected to contain only signal-like events, i.e., the background is almost completely reduced}. \black{The theory of Section \ref{biasvariance} and} the analyses in Section \ref{DS} have highlighted that, although a fully non-parametric approach provides reliable inference, it may lead to unsatisfactory estimates when the postulated pdf $g$ is substantially different from the true density $f$. In this setting, a semiparametric stage can be performed in order to provide a reliable model for the data. \black{Each individual step in both the nonparametric and the semiparametric stage of Sections \ref{signalcar} and \ref{bkgcali} provides useful scientific insights on the signal and background distribution. Hence, an automated implementation of the steps of Algorithm 1 based solely on the p-values of the deviance tests is discouraged as it would lead to a substantial loss of scientific knowledge on the phenomena under study.} Finally, it is important to point out that, despite this article's focus on one-dimensional searches on continuous data, all the constructs presented in Section \ref{LPmodelling} and \black{the deviance test in Section \ref{dev} also apply to the discrete case when considering i.i.d. events. More work is needed to extend these results and those of Section \ref{bands} to searches in multiple dimensions and when considering Poisson events with functional mean.
In the first case, the difficulty mainly lies in generalizing the constructs of Section \ref{inference} to account for the dependence structure occurring across multiple dimensions. In the second case, the main challenge lies in identifying the equivalent of \eqref{skewG} to model the mean of the distribution while incorporating the Poisson error.} \section*{Code availability} \black{The \texttt{LPBkg} Python package \cite{python} and the \texttt{LPBkg} R package \cite{rr} allow the implementation of the methods proposed in this manuscript. Detailed tutorials on how to use the functions provided are also available at \url{http://salgeri.umn.edu/my-research}. } \section*{Acknowledgments} The author thanks Jeroen Stil, who provided the NVSS datasets used in Section \ref{stacking}, and Lawrence Rudnick, who first recognized the usefulness of the method proposed in the context of stacking experiments. Conversations with Subhadeep Mukhopadhyay were of great help when this work was first conceptualized. Discussions and e-mail exchanges with Charles Doss and Chad Shafer are gratefully acknowledged. \black{Finally, the author thanks an anonymous referee whose feedback has been substantial in improving the overall quality of the paper. }
\section{Introduction} The formation and evolution of the Milky Way disc is one of the outstanding questions facing Galactic archaeology today \citep{2002ARA&A..40..487F}. However, new observational data are providing avenues to resolve this longstanding question. With the advent of the \textit{Gaia} satellite \citep{2016A&A...595A...1G,2018A&A...616A...1G} and large-scale spectroscopic surveys such as LAMOST \citep{2012RAA....12..723Z}, RAVE \citep{2020arXiv200204377S}, Gaia-ESO \citep{2012Msngr.147...25G}, APOGEE \citep{2017AJ....154...94M}, and GALAH \citep{2015MNRAS.449.2604D}, our understanding of the Galaxy is in the midst of a revolution. Astrometric parameters from \textit{Gaia} \citep{2018A&A...616A...1G,2018A&A...616A...2L} allow improved phase-space estimates for more than a billion stars, while large-scale spectroscopic surveys allow reliable chemical abundance determinations and ages, giving us an unprecedented picture of the Galaxy and the ability to trace its structure and evolution through time. Ever since the discovery of the thick disc by \citet{1983MNRAS.202.1025G}, its origin and its link to the thin disc, which makes up most of the stars in the Milky Way, have remained unclear. Further studies of the thick disc have revealed that its stars differ from thin disc stars in multiple ways, which led to the notion that it might be distinct from the thin disc and might have originated from a separate evolutionary pathway. The thick disc was initially identified in observations of star counts in the solar neighbourhood away from the mid-plane of the disc. A single exponential could not fit the observed stellar distributions; two components were required: a thin disc component with a small scale height ($\approx$ 300 pc), and a thick disc component with a three-fold increase in scale height \citep{1983MNRAS.202.1025G}.
Spectroscopic observations of stars high above the plane belonging to the thick disc reveal that these stars are older and have higher $[\alpha/{\rm Fe}]${} relative to stars in the plane (e.g., \citealt{1998A&A...338..161F,2007ApJ...663L..13B,2013A&A...560A.109H}). Hence, the thick disc has increasingly been identified via stellar chemistry, rather than a star's distance from the plane. Large spectroscopic surveys have enabled a detailed exploration of the distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane and at different $R$ and $|z|$ locations across the Galaxy \citep{2015ApJ...808..132H}. In the solar annulus, the distribution shows two major sequences: a high-$[\alpha/{\rm Fe}]${} sequence (associated with the thick disc) and a low-$[\alpha/{\rm Fe}]${} sequence (associated with the thin disc). The sequences are almost parallel at the low-${\rm [Fe/H]}${} end, but as ${\rm [Fe/H]}${} increases the two sequences progressively come closer and merge at slightly super-solar metallicities. It is not clear where the high-$[\alpha/{\rm Fe}]${} track ends. The higher-${\rm [Fe/H]}${} stars of the high-$[\alpha/{\rm Fe}]${} track have kinematics similar to those of the thin disc \citep{2007ApJ...663L..13B,2011A&A...535L..11A,2017A&A...608L...1H,2020arXiv200303316C}. The distribution of stars in the low-$[\alpha/{\rm Fe}]${} sequence changes systematically with location $R$ and $|z|$ across the Galaxy. However, the locus of the high-$[\alpha/{\rm Fe}]${} sequence appears to be the same at all locations to within observational errors \citep{2014ApJ...796...38N,2015ApJ...808..132H}. Several observational studies have found that the high-$[\alpha/{\rm Fe}]${} stars identified with the thick disc have a short scale length, and only extend out to roughly the Sun's position.
At larger radii, these high-$[\alpha/{\rm Fe}]${} populations are absent and stars above the plane are instead made up of flaring solar-$[\alpha/{\rm Fe}]${} populations (e.g., \citealt{2011ApJ...735L..46B,2012ApJ...753..148B,2015ApJ...808..132H,2019ApJ...874..102W}). For a detailed discussion of flaring see \citet{2015ApJ...804L...9M}, and for a schematic illustration of the thin and thick discs see Fig. 1 of \citet{2019MNRAS.486.1167B}. In the inner disc, we find only a single sequence: the low-$[\alpha/{\rm Fe}]${} sequence has shifted towards higher ${\rm [Fe/H]}${} and has merged with the high-$[\alpha/{\rm Fe}]${} sequence, implying that the thin and thick discs are chemically connected \citep{2014ApJ...781L..31S,2015ApJ...808..132H,2016A&A...589A..66H}. The distinct gap between the chemical thin and thick discs found locally does not exist in the inner Galaxy, leaving its origin in the solar neighbourhood an open question. The thick disc has also been associated with distinct kinematic features. The velocity dispersion as a function of age for stars in the solar neighbourhood shows a break from a power law, with an increase being found for older stars \citep{1991dodg.conf...15F, 1993A&A...275..101E, 2001ASPC..230...87Q,2020arXiv200406556S}. Despite the thick disc appearing to be a distinct population, a number of studies have argued against this view. \citet{1987ApJ...314L..39N} suggested that the double-exponential vertical distribution of stars can be explained by a disc whose vertical velocity dispersion (or equivalently scale height) varies continuously with metallicity. \citet{2012ApJ...751..131B} binned the SEGUE \citep{2009AJ....137.4377Y} G dwarfs in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane to create mono-abundance populations (MAPs) and studied the spatial distribution of stars for each of the MAPs. Using these MAPs, they showed that the mass-weighted distribution of scale heights in the solar annulus is continuous.
A similar analysis was repeated using giants from APOGEE by \citet{2016ApJ...823...30B} and \citet{2017MNRAS.471.3057M}, leading to the same conclusion. \citet{2020arXiv200406556S} argue that the velocity dispersions of thick disc stars follow the same relations, in their dependence on age, angular momentum and metallicity, as those of stars belonging to the thin disc. The apparent uniqueness of the thick disc kinematics arises because the velocity dispersion also depends on angular momentum, which was not taken into account in previous studies. \citet{2009MNRAS.396..203S} showed, using a chemical evolution model, that the double sequence (bimodality) in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane can be reproduced by a model having a continuous star formation history. They introduced a chemical evolution model that included radial flow of gas and radial migration of stars in addition to other relevant physical processes, e.g., star formation history, stellar yields, gas accretion and outflow. In addition to chemical abundances, their model also included the phase-space distribution of stars. The free parameters of the model were tuned to reproduce observations of the solar neighbourhood from the Geneva-Copenhagen Survey \citep{2004A&A...418..989N}. Their model was able to reproduce many observed properties of the disc: the anti-correlation of angular momentum with metallicity for the low-$[\alpha/{\rm Fe}]${} stars \citep{2009MNRAS.399.1145S}, the scatter in the age-metallicity relation locally, as well as the shift in the peak of the MDF (radial gradient) and the change in its shape with radius. The origin of the double sequence was attributed to the time delay of SNIa, which makes $[\alpha/{\rm Fe}]${} transition from high values at earlier times to low values at later times. Radial migration was identified as the key mechanism that brought kinematically hot stars from the inner disc to the solar neighbourhood to create the thick disc.
The large spread in metallicity of the low-$[\alpha/{\rm Fe}]${} sequence (thin disc) was also due to radial migration, coupled with the existence of a strong metallicity gradient in the ISM. Later findings \citep{2014ApJ...794..173V,2018MNRAS.476.1561D} that radial migration is efficient only for kinematically cold stars raised questions as to how the thick disc could be created by migration of stars from the inner disc and still be kinematically hot. \citet{2017MNRAS.470.3685A} show, using idealized N-body simulations, that if the Galaxy has inside-out growth, it is possible to have outward migrators with high velocity dispersion. In spite of the significant successes of the \citet{2009MNRAS.399.1145S} model, their study lacked a detailed comparison with observations. They only made predictions for the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution in the solar neighbourhood and compared it with the limited observational data set that was available at that time. Although two sequences could be clearly seen, their model also predicted a significant number of stars in between the two sequences. Unfortunately, a large enough kinematically unbiased data set was not available to thoroughly test the predicted distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane. With data from the APOGEE survey becoming available, it soon became possible to study the distribution of $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} at different $R$ and $|z|$ locations in the Galaxy \citep{2014ApJ...796...38N, 2015ApJ...808..132H}. Additionally, the bimodality was clearly visible in this kinematically unbiased data set. The $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distributions showed a number of interesting trends with Galactic location which were never compared with model predictions; this comparison is one of the main aims of this paper.
Since \citet{2009MNRAS.396..203S}, a number of other chemical evolution models have been proposed, but no attempt has been made to reproduce the observed $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution at different Galactic locations. Models by \citet{2013A&A...558A...9M,2014A&A...572A..92M} use cosmological zoom-in simulations as input to generate realistic kinematic distributions, and then add a detailed chemical evolution prescription on top of the dynamics from the simulation. However, the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution does not show bimodality, and the ${\rm [Fe/H]}${} distribution peaks at the same value independent of the radius $R$. Such simulations are computationally expensive to run and often difficult to compare directly to the Milky Way, as the simulated galaxy might not be a perfect match for the evolutionary history of the Milky Way. A potential way around this is to characterize the dynamical processes found in N-body or cosmological simulations with analytic functions, allowing for models that have good approximations for the important physical processes in the dynamical evolution of a disc, while also being inexpensive and having the flexibility to tune parameters to better match Milky Way observables, e.g., as in \citet{2015A&A...580A.126K,2015A&A...580A.127K}. In this model, the old stars form the high-$[\alpha/{\rm Fe}]${} sequence and the young stars form the low-$[\alpha/{\rm Fe}]${} sequence, similar to the observed sequences. However, a proper distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane with two distinct sequences was not shown. There have also been alternative chemical evolution models that suggest a different formation scenario for the thick disc. A typical feature of these is strong star formation at early times, which forms the thick disc, followed by a period where star formation is quenched or stops; star formation then resumes and continues at a slower rate at later times.
Some models have closed-box chemical evolution \citep{2015A&A...578A..87S, 2016A&A...589A..66H,2019A&A...625A.105H}, while others have open-box chemical evolution, where accretion of fresh gas happens over an extended time scale \citep{1997ApJ...477..765C,2001ApJ...554.1044C, 2019A&A...623A..60S}. A common problem with these models is that they either ignore radial migration or consider it to be insignificant \citep{2019A&A...625A.105H}. Typically, the chemical evolution tracks for a given birth radius show discontinuities or abrupt changes, both in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane and in their evolution with time. \citet{2019A&A...623A..60S}, which has a longer delay between the first and the second gas infall phase, leads to a loop in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane, which does not match observations. While the evolutionary tracks have been shown to qualitatively coincide with the loci of the high- and low-$[\alpha/{\rm Fe}]${} sequences, a detailed prediction of the distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane and its variation with $R$ and $|z|$ has not been made. Additionally, the anti-correlation of angular momentum with metallicity has also not been shown. There are also some general-purpose chemical evolution models which track the evolution in individual radial zones and have up-to-date stellar yields and realistic time delays for SNIa \citep{2014ApJ...796...38N, 2017ApJ...835..224A}, but they lack phase-space information. They are potentially useful diagnostic tools in determining the impact of star formation efficiencies and gas infall and outflow rates on the chemical history of the Galaxy. Full-blown chemical evolution models, especially those that include dynamical processes like the \citet{2009MNRAS.396..203S} model, are not easy to fit to data. In this regard, models based on an analytical distribution function offer a distinct advantage.
Phase-space distribution functions based on actions have been developed \citep{2012MNRAS.426.1328B}. \citet{2015MNRAS.449.3479S} extended the analytical action-based distribution function to also track the evolution of metallicity; additionally, they introduced a prescription for radial migration. However, they did not track the evolution of any element other than iron. None of the previous models have been shown to reproduce the observed trends Galaxy-wide in their entirety. There are various reasons why this is the case, ranging from outdated observational constraints on the chemical distributions of the Galaxy to overly simplistic approximations for the velocity dispersion of the disc. However, significant improvements have been made in the characterization of the velocity dispersion and its dependence on age, angular momentum, and ${\rm [Fe/H]}${} in \citet{2020arXiv200406556S}. This, along with the improved observational constraints provided by \textit{Gaia} and large-scale spectroscopic surveys, allows us to generate a new chemodynamical model for the Galaxy and make detailed comparisons of it to observations. In this paper, we describe the framework of our chemodynamical model, which takes into account a number of relevant physical processes and reproduces chemodynamical observations throughout the disc. This paper is organized as follows: in Section 2, we describe the observational data sets used to constrain our model. In Section 3, we describe the parameters and functional form of the model. In Section 4, we describe our results and directly compare them with observational data sets. In Section 5, we discuss the impact of our work and its ability to reproduce observational trends throughout the Galaxy, as well as compare it to existing works on the chemodynamical evolution of the disc. Section 6 summarizes our main findings and highlights where future improvements can be made.
\section{Data} For studying the distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane at different $R$ and $|z|$ locations, we use data from the APOGEE survey. We use the ASPCAP catalog of stellar parameters and abundances from APOGEE-DR14. We select stars according to the following criteria: \begin{eqnarray} (1.0<\log g<3.5)\&(3500 <T_{\rm eff}/{\rm K}<5300)\& \nonumber\\ (7<H<11). \label{equ:rgselect1} \end{eqnarray} The $\log g$ and $T_{\rm eff}$ selection function is designed to select giants. Although the APOGEE survey extends in the $H$ band up to 13.8 mag, we restrict the sample to 11 mag because beyond this the selection function is not homogeneous and is difficult to reproduce. Additionally, we restrict the sample to stars that have ${\rm S/N} > 80$, are main survey targets (EXTRAARG flag = 0), are not flagged bad (ASPCAPFLAG $\neq 23$) and have valid distance, [Fe/H] and $[\alpha/{\rm Fe}]${}. This resulted in a sample of 94,488 stars. For distances, we use the BPG distances by \citet{2016A&A...585A..42S} from the APOGEE-DR14 value-added catalog. We calibrate our chemical enrichment model using data from LAMOST and GALAH. Since the abundance estimates from different spectroscopic surveys in general do not agree with each other, we need to calibrate them. We have crudely recalibrated the APOGEE ${\rm [Fe/H]}${} values to those of GALAH and LAMOST by decreasing them by 0.15 dex. For studying the dependence of angular momentum on metallicity, we use red giant stars from both the APOGEE and LAMOST surveys. We used the LAMOST-DR4 value-added catalog from \citet{2017MNRAS.467.1890X} for radial velocity, $T_{\rm eff}$, $\log g$, [Fe/H], [$\alpha$/Fe], and distance. For the RG stars, we adopt the following selection criteria: \begin{eqnarray} (1<\log g < 3.5) \& (3500\; <\; T_{\rm eff}/{\rm K}\;<5500)\&\nonumber\\ (7<H<13.8). \label{equ:rgselect2} \end{eqnarray} These criteria are less strict than \autoref{equ:rgselect1}, so as to increase the sample size.
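The cuts in \autoref{equ:rgselect1} reduce to simple boolean masks on the catalog columns; a minimal numpy sketch (array names are illustrative, not the actual ASPCAP field names):

```python
# Boolean-mask version of the giant-star cuts:
# 1.0 < log g < 3.5, 3500 K < Teff < 5300 K, 7 < H < 11.
import numpy as np


def select_giants(logg, teff, hmag):
    """Return a boolean mask implementing the giant selection cuts."""
    return ((logg > 1.0) & (logg < 3.5)
            & (teff > 3500.0) & (teff < 5300.0)
            & (hmag > 7.0) & (hmag < 11.0))


# illustrative values, not real catalog entries
logg = np.array([2.5, 4.2, 3.0])
teff = np.array([4800.0, 5800.0, 5200.0])
hmag = np.array([9.0, 10.0, 12.0])
mask = select_giants(logg, teff, hmag)  # keeps only the first star
```

The additional quality cuts (S/N, EXTRAARG, ASPCAPFLAG) would be combined with this mask in the same way, with `&` on further boolean arrays.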
To model the chemical enrichment, we make use of data from the LAMOST and GALAH surveys. We used the LAMOST-DR4 value-added catalog from \citet{2017MNRAS.467.1890X}. For LAMOST, we used two types of stars: the MSTO stars and the red giant (RG) stars. The ages for the LAMOST-MSTO sample were taken from \citet{2017ApJS..232....2X} and those for the LAMOST-RG-CN sample were taken from \citet{2019MNRAS.484.5315W}. The LAMOST-RG-CN sample consists of red giant branch stars with ages derived from spectroscopic C and N features. For the GALAH survey, we used MSTO stars. More precisely, we make use of the extended GALAH catalog (GALAH+), which also includes data from the TESS-HERMES \citep{2018MNRAS.473.2004S} and K2-HERMES \citep{2019MNRAS.490.5335S} surveys that use the same spectrograph and observational setup as the GALAH survey. In this paper, we exploit parameters from GALAH-iDR3, an internal data release where every star has been analysed using SME and which incorporates Gaia-DR2 distance information \citep{2018A&A...616A...1G,2018A&A...616A...2L}. A full discussion will be presented in a forthcoming paper and the results will be available as part of GALAH-DR3. The ages and distances for the GALAH-MSTO stars are computed with the BSTEP code \citep{2018MNRAS.473.2004S}. BSTEP provides a Bayesian estimate of intrinsic stellar parameters from observed parameters by making use of stellar isochrones. For the results presented in this paper, we use the PARSEC-COLIBRI stellar isochrones \citep{2017ApJ...835...77M}. To select stars with reliable ages, we adopt the following selection function for MSTO stars: \begin{eqnarray} (3.2 < \log g < 4.1) \& (5000 <\; T_{\rm eff}/{\rm K}\;<7000). \label{equ:msto_select} \end{eqnarray} \section{Chemical evolution model with radial mixing} \label{sec:model} One of the main purposes of a Galactic model is to predict the joint distribution of all possible stellar observables for stars in the Milky Way.
Thanks to large spectroscopic surveys, the following set of observables, position ${\bf x}$, velocity ${\bf v}$, age $\tau$, iron abundance $[{\rm Fe/H}]'$ and $\alpha$ abundance $[\alpha/{\rm Fe}]'$, is readily available for a large number of stars. Hence, the distribution function we seek is $p({\bf x},{\bf v},\tau,{\rm [Fe/H]},[\alpha/{\rm Fe}]|\theta)$, where $\theta$ denotes the free parameters of the model (which we sometimes omit to shorten the equations). The full list of parameters and their adopted values is given in \autoref{tab:coeff}. Our model is inspired by and based on the extended distribution function model proposed by \citet{2015MNRAS.449.3479S}, but improves upon it by adding significant new features, e.g., the distribution of $\alpha$ elemental abundances and a new prescription for the velocity dispersion of stars. We also simplify certain aspects of the \citet{2015MNRAS.449.3479S} model, e.g., the phase-space distribution is described by the Shu distribution function \citep{1969ApJ...158..505S} instead of an action-based quasi-isothermal distribution function. For simplicity, we assume the Galaxy to be axisymmetric, i.e., none of the Galactic properties depend on the azimuthal coordinate. Strictly speaking this is not true; evidence for this is the presence of non-axisymmetric structures like bars and spiral arms. However, the azimuthal crossing time scale is quite small compared to the age of the Galaxy; hence, except for very young stars, axisymmetry should still be a good approximation for the majority of stars in the Milky Way. Due to axisymmetry, we work in cylindrical coordinates and express the phase space in the coordinates $R$, $\phi$, $z$, $v_R$, $v_{\phi}$ and $v_z$. At any given time, stars are born out of the ISM, which is made up of cold gas on circular orbits around the Galactic center. Stars inherit the elemental composition and kinematic properties of the gas from which they are born.
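The conversion from Galactocentric Cartesian to the cylindrical phase-space coordinates used here is standard; a minimal sketch (sign conventions for $v_\phi$ vary between codes, so the one below is only one common choice):

```python
# Galactocentric Cartesian (x, y, z, vx, vy, vz) ->
# cylindrical phase space (R, phi, z, v_R, v_phi, v_z).
import math


def cartesian_to_cylindrical(x, y, z, vx, vy, vz):
    """Cylindrical phase-space coordinates for an axisymmetric model."""
    R = math.hypot(x, y)
    phi = math.atan2(y, x)
    v_R = (x * vx + y * vy) / R          # radial velocity component
    v_phi = (x * vy - y * vx) / R        # azimuthal (rotation) component
    return R, phi, z, v_R, v_phi, vz
```

For a star on a circular orbit at $R$ with speed $v_c(R)$, this returns $v_R = 0$ and $v_\phi = v_c(R)$, as expected for the newly born populations described above.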
Hence, the fundamental building block of our model is a group of stars born at a lookback time $\tau$ with birth radius $R_b$. The chemical evolution of the Galaxy dictates how the abundances vary as a function of $\tau$ and $R_b$. We denote the chemical evolution of the iron abundance by ${\rm [Fe/H]}(\tau,R_b)$ and that of the $\alpha$ elemental abundance by $[\alpha/{\rm Fe}](\tau,R_b)$. If ${\rm [Fe/H]}'$ and $[\alpha/{\rm Fe}]'$ are observed with uncertainties $\sigma_{\rm [Fe/H]}$ and $\sigma_{[\alpha/{\rm Fe}]}$, the joint distribution of observables can be modelled as \begin{eqnarray} p({\bf x},{\bf v},\tau,{\rm [Fe/H]'},[\alpha/{\rm Fe}]')= p({\bf x},{\bf v}|\tau,R_b) p(\tau,R_b) \times \nonumber \\ \frac{\partial{{\rm [Fe/H]}(\tau,R_b)}}{\partial{R_b}} \times \nonumber\\ \mathcal{N}\left([{\rm Fe/H}]'|[{\rm Fe/H}](\tau,R_b),\sigma_{\rm [Fe/H]}\right) \times \nonumber\\ \mathcal{N}\left([\alpha/{\rm Fe}]'|[\alpha/{\rm Fe}](\tau,R_b),\sigma_{[\alpha/{\rm Fe}]}\right) \end{eqnarray} To also include the stellar mass $m$ in the joint distribution, the right-hand side (RHS) of the above equation should be multiplied by the initial mass function of stars $\xi(m)$. This is important when we want to take the selection function of the survey into account, and we postpone this discussion to \autoref{sec:selfunc}. Having specified the full joint distribution, it is easy to explore any other projection of this distribution; e.g., by integrating over azimuth $\phi$ and velocities $v_{\phi}$, $v_R$ and $v_z$ we get $p(R,z,R_g,\tau,{\rm [Fe/H]},[\alpha/{\rm Fe}])$.
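The two measurement-error kernels in the joint distribution are ordinary normal densities evaluated at the intrinsic abundances predicted for a $(\tau, R_b)$ building block; a minimal sketch (function names are illustrative):

```python
# Gaussian measurement-error kernels:
# N([Fe/H]' | [Fe/H](tau, R_b), sigma_FeH) * N([a/Fe]' | [a/Fe](tau, R_b), sigma_aFe)
import math


def gauss(x, mu, sigma):
    """Normal density N(x | mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))


def abundance_kernel(feh_obs, afe_obs, feh_model, afe_model, sigma_feh, sigma_afe):
    """Product of the two Gaussian factors for one (tau, R_b) building block."""
    return gauss(feh_obs, feh_model, sigma_feh) * gauss(afe_obs, afe_model, sigma_afe)
```

As expected, the kernel is maximized when the observed abundances coincide with the model prediction for that building block.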
In this paper we are interested in the distribution of $({\rm [Fe/H]},[\alpha/{\rm Fe}])$ at a given $R$ and $z$, and this is given by \begin{eqnarray} p({\rm [Fe/H]'},[\alpha/{\rm Fe}]'|R,z)=\frac{p(R,z,{\rm [Fe/H]'},[\alpha/{\rm Fe}]')}{p(R,z)} \end{eqnarray} \begin{table*} \caption{Parameters of the chemical evolution model} \begin{tabular}{lllll} \hline Description & Symbol & Value & Unit \\ \hline Solar Radius & $R_{\odot}$ & 8.0 & kpc &\\ Circular velocity at Solar Radius & $\Theta_{\odot}$ & 232.0 & km/s &\\ Gravitational Potential & $\Phi(R,z)$& MWPotential2014-galpy & & \\ \hline Age of disc & $\tau_{\rm max}$ & 13.0 & Gyr & \\ Late time star formation rate decay constant & $\tau_{\rm fall}$ & 10.0 & Gyr & \\ Early time star formation rate rise constant & $\tau_{\rm rise}$ & 0.63 & Gyr & \\ \hline Current metallicity gradient & $F_R$ & -0.08 & dex/kpc & \\ Metallicity at birth & $F_{\rm min}$ & -0.85 & dex & \\ Radius of current ISM solar metallicity & $r_F$ & 6.5 & kpc &\\ ISM Metallicity enrichment time scale & $\tau_F$ & 3.2 & Gyr & \\ \hline $[\alpha/{\rm Fe}]$ transition time & $\tau_{\alpha}$ & 10.5 & Gyr &\\ Maximum $[\alpha/{\rm Fe}]$ & $\alpha_{\rm max}$ & 0.225 & dex & \\ Current $[\alpha/{\rm Fe}]$ of outermost disc & $\alpha_{\rm outer}$ & 0.1 & dex &\\ Transition metallicity & $F_{\alpha}$ & -0.5 & dex &\\ Transition metallicity scale & $\Delta F_{\alpha}$ & 0.5 & dex &\\ Time scale for transition of $[\alpha/{\rm Fe}]$ & $\Delta \tau_{\alpha}$ & 1.5 & Gyr & \\ \hline Maximum radial scale length & $R_{d}^{\rm max}$ & 3.45 & kpc & \\ Minimum radial scale length & $R_{d}^{\rm min}$ & 2.31 & kpc & \\ Time of transition of radial scale length & $\tau_{R_d}$ & 9.0 & Gyr & \\ Time scale for transition of radial scale length & $\Delta \tau_{R_d}$ & 1.0 & Gyr & \\ \hline Churning efficiency & $\sigma_{L0}$ & 1150 & kpc km/s & \\ \hline Vertical Velocity dispersion normalization & $\sigma_{0,vz}$ & 25.0 & km/s & \\ Radial Velocity dispersion
normalization & $\sigma_{0,vR}$ & 39.6 & km/s & \\ Vertical heating growth parameter & $\beta_{z}$ & 0.441 & \\ Radial heating growth parameter & $\beta_{R}$ & 0.251 & \\ Vertical heating angular momentum scale & $\lambda_{L,vz}$ & 1130 & kpc km/s & \\ Radial heating angular momentum scale & $\lambda_{L,R}$ & 2300 & kpc km/s & \\ Vertical heating angular momentum coefficient & $\alpha_{L,z}$ & 0.58 & \\ Radial heating angular momentum coefficient & $\alpha_{L,R}$ & 0.09 & \\ Vertical dispersion gradient with metallicity & $\gamma_{\rm [Fe/H],z}$ & [-0.52,-0.8] & km/s/dex & \\ Radial dispersion gradient with metallicity & $\gamma_{\rm [Fe/H],R}$ & [-0.19,-0.5] & km/s/dex & \\ \hline \end{tabular} \label{tab:coeff} \end{table*} \subsection{Phase space distribution and radial mixing} \label{sec:phasespace} Newly formed stars are on circular orbits, with a star born at radius $R_b$ having an angular momentum $v_{\rm c}(R_b)R_b$, where $v_{\rm c}(R_b)$ is the circular velocity. The distribution of newly formed stars $p(\tau,R_b)$ is fully determined by specifying the star formation history $p(\tau)$ and the distribution of birth radii $p(R_b|\tau)$ for a given $\tau$: \begin{eqnarray} p(\tau,R_b) &=& p(\tau) p(R_b|\tau). \end{eqnarray} Following \citet{2015MNRAS.449.3479S}, we express the star formation history as \begin{eqnarray} p(\tau) &\propto& \exp\left(\frac{\tau}{\tau_{\rm fall}}-\frac{\tau_{\rm rise}}{\tau_{\rm max}-\tau}\right), \end{eqnarray} which is marked by a peak at $\tau=\tau_{\rm max}-\sqrt{\tau_{\rm fall} \tau_{\rm rise}}$. For our choice of parameters the peak is at 10.5 Gyr; see \autoref{fig:amr_ism}c. The star formation rate rises at early times at a rate controlled by $\tau_{\rm rise}$, reaches a maximum, and then falls off exponentially until the present time with time scale $\tau_{\rm fall}$.
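With the adopted values $\tau_{\rm max}=13$ Gyr, $\tau_{\rm fall}=10$ Gyr and $\tau_{\rm rise}=0.63$ Gyr, setting the derivative of the exponent to zero gives the quoted peak, $\tau_{\rm max}-\sqrt{\tau_{\rm fall}\tau_{\rm rise}}\approx 10.49$ Gyr; a quick numerical check:

```python
# Star formation history p(tau) ~ exp(tau/tau_fall - tau_rise/(tau_max - tau));
# its maximum sits at tau_max - sqrt(tau_fall * tau_rise).
import math

TAU_MAX, TAU_FALL, TAU_RISE = 13.0, 10.0, 0.63  # Gyr, from the parameter table


def sfr(tau):
    """Unnormalized star formation rate at lookback time tau (Gyr)."""
    return math.exp(tau / TAU_FALL - TAU_RISE / (TAU_MAX - tau))


tau_peak = TAU_MAX - math.sqrt(TAU_FALL * TAU_RISE)  # analytic maximum, ~10.49 Gyr

# brute-force check on a grid (avoiding the singular endpoint tau = tau_max)
grid = [i * 0.001 for i in range(1, 12999)]
tau_grid_peak = max(grid, key=sfr)
```

The grid maximum lands on the analytic value to within the grid spacing, confirming the stated peak location.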
The radial distribution of stars at birth is given by \begin{eqnarray} p(R_b|\tau) &=& \frac{R_b}{R_d^2}\exp\left(-R_b/R_d\right). \label{equ:rb_dist} \end{eqnarray} Unlike \citet{2015MNRAS.449.3479S}, who consider distinct thin ($\tau < 10$ Gyr) and thick ($\tau > 10$ Gyr) discs with different scale lengths, we allow for a smooth inside-out formation of the disc by specifying the scale length $R_d$ to evolve with time according to \begin{eqnarray} R_d &=& R_d^{\rm max}-\frac{R_d^{\rm max}-R_d^{\rm min}}{2}\left({\rm tanh}\left(\frac{\tau-\tau_{R_{d}}}{\Delta \tau_{R_d}}\right)+1\right); \end{eqnarray} the corresponding profile is shown in \autoref{fig:amr_ism}c. Over time, due to various dynamical processes, like scattering from spiral arms, giant molecular clouds and a bar, stars move away from their place of birth and acquire random motion. Following \citet{2009MNRAS.396..203S}, we describe the dynamical processes using the churning and blurring mechanisms. Churning refers to scattering in angular momentum space, while blurring refers to the increase of random motion, characterized by the radial velocity dispersion $\sigma_R$ and the vertical velocity dispersion $\sigma_z$. We assume $\sigma_R$ and $\sigma_z$ to be functions of $\tau$, $R_b$, and $R_g$ (the guiding radius, defined as the radius of a circular orbit with a given angular momentum). Specifically, due to churning, stars born at a lookback time $\tau$ and at radius $R_b$ will have a distribution of angular momentum $L$, or equivalently guiding radius $R_g(L)$, given by $p(R_g|\tau,R_b)$. Following \citet{2015MNRAS.449.3479S}, we model churning as a Gaussian diffusion in the space of angular momentum $L$, which leads to \begin{eqnarray} p(R_g|R_b,\tau) &=& \frac{1}{K} \mathcal{N}\left(L|R_b \Theta_{\odot}-\frac{\sigma^2_{L}}{2 \Theta_{\odot} R_d},\sigma_{L}^2\right)\frac{{\rm d}L}{{\rm d}R_g}.
\end{eqnarray} Here $\sigma_{L}$ characterizes the dispersion of angular momentum, which increases with time according to \begin{eqnarray} \sigma_L(\tau)& =& \sigma_{L0} \left(\frac{\tau}{\tau_{\rm max}}\right)^{1/2}. \end{eqnarray} The distribution is only valid for positive values of $L$; the factor \begin{eqnarray} K=\frac{1}{2}\left[1+{\rm erf}\left(\frac{R_b\Theta_{\odot}-\frac{\sigma^2_{L}}{2 \Theta_{\odot} R_d}}{\sqrt{2}\sigma_L}\right)\right] \end{eqnarray} is a normalization constant that ensures that the integral over the positive $L$ axis is unity. To model the present-day phase-space distribution of stars born at a lookback time of $\tau$ and at radius $R_b$, $p({\bf x},{\bf v}|\tau,R_b)$, we use a distribution function of the following form \citep[see Equation 4.147 from][]{2008gady.book.....B}: \begin{eqnarray} f(E_R,L,E_z) \propto \frac{F(L)}{\sigma_R^2}\exp\left(-\frac{E_R}{2\sigma_R^2}\right)\exp\left(-\frac{E_z}{2\sigma_z^2}\right) \end{eqnarray} Here, the potential $\Phi(R,z)$ is assumed to be linearly separable in $R$ and $z$, allowing the vertical and planar motions to be studied separately. The planar distribution is modeled using the Shu distribution function, while the vertical distribution is modelled as an isothermal population. \begin{eqnarray} E_z=\frac{1}{2}v_z^2+(\Phi(R,z)-\Phi(R,0)) \end{eqnarray} is the energy associated with the vertical motion. $\Phi(R,z)$ is the Galactic gravitational potential, for which we adopt MWPotential2014 from galpy \citep{2015ApJS..216...29B}. $E_R$ is the random energy over and above $E_c(L)$ (the energy required for a star with a given $L$ to be on a circular orbit with radius $R_g(L)$) and is given by \begin{eqnarray} E_R&=&E-E_c(L)=\frac{V_R^2}{2}+\Phi_{\rm eff}(R,R_g)-\Phi_{\rm eff}(R_g,R_g) \nonumber \\ &=&\frac{V_R^2}{2}+\Delta \Phi (R,R_g).
\end{eqnarray} $\Phi_{\rm eff}(R,R_g)$ is the effective potential for a planar orbit and is given by \begin{eqnarray} \Phi_{\rm eff}(R,R_g) &=& \Phi(R)+\frac{1}{2}v_{c}(R_g)^2(R_g/R)^2. \end{eqnarray} Given that we assume $\sigma_R$ to be a function of $\tau$, $R_b$, and $R_g$, the phase-space distribution can now be written as \begin{eqnarray} p({\bf x},{\bf v}|\tau,R_b) &=& p(R_g|\tau,R_b)p(R|R_g,\sigma_R) \times \nonumber \\ && \frac{1}{\sqrt{2\pi}\sigma_R} \exp\left(-\frac{v_{R}^{2}}{2\sigma_R^2}\right) \times \nonumber\\ && p(z,v_z|R,\sigma_z) \frac{1}{2\pi}. \label{equ:churblur} \end{eqnarray} It follows from \citet{2013ApJ...773..183S} \citep[see also][]{2012MNRAS.419.1546S} that, for a Shu distribution function, \begin{eqnarray} p(R|R_g,\sigma_R)&=&\frac{p(R,R_g|\sigma_R)}{\int p(R,R_g|\sigma_R) {\rm d}R} \nonumber \\ &=&\frac{1}{g_K(a,R_g)R_g} \times \\ &&\exp\left( -\frac{\Phi_{\rm eff}(R,R_g)-\Phi_{\rm eff}(R_g,R_g)}{\sigma_R^2}\right), \end{eqnarray} where $a=\sigma_R/\Theta(R_g)$, and \begin{eqnarray} g_K(a,R_g)=\int \exp\left( -\frac{\Phi_{\rm eff}(R,R_g)-\Phi_{\rm eff}(R_g,R_g)}{a^2v_{\rm c}^2(R_g)}\right) \frac{{\rm d}R}{R_g}. \end{eqnarray} The vertical phase-space distribution of stars at a given $R$ for an isothermal population characterized by vertical velocity dispersion $\sigma_z$ is given by \begin{eqnarray} p(z,v_z|R,\sigma_{z}) &=& \frac{1}{2 z_0} \exp\left(-\frac{\Phi(R,z)-\Phi(R,0)}{\sigma^2_{z}}\right) \times \\ &&\frac{1}{\sqrt{2\pi}\sigma_z} \exp\left(-\frac{v_{z}^{2}}{2\sigma_z^2}\right), \end{eqnarray} where $z_0$ is the vertical scale height \citep[see Equation 4.153 from][]{2008gady.book.....B} and is given by \begin{eqnarray} z_0(R,\sigma_{z})=\int_{0}^{\infty} \exp\left(-\frac{\Phi(R,z)-\Phi(R,0)}{\sigma^2_{z}}\right) {\rm d}z. \end{eqnarray} \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{amr_ism.pdf} \caption{Properties of our Galactic model.
(a) [Fe/H] as a function of age for different birth radii. (b) [$\alpha$/Fe] as a function of age for different birth radii. (c) The star formation rate and radial scale length as a function of age. The dotted lines simply show for reference the traditional definition of the thick and thin disc based on age. \label{fig:amr_ism}} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{alpha_feh_profile_1.pdf} \caption{Metallicity and age dependence of $\alpha$ elemental abundance. Solid lines show data from four different sources. The dashed lines are predictions of a model with $\alpha_{\rm max}=0.225$, $\alpha_{\rm outer}=0.18$, $F_{\alpha}=-0.5$, $\Delta F_{\alpha}=0.4$, $\tau_{\alpha}=10.5$ Gyr and $\Delta \tau_{\alpha}=1.5$ Gyr, which fits the GALAH data. $F$ stands for metallicity [Fe/H], and $\alpha_{\rm min}(F)$ is an analytical function of metallicity as shown by the dashed line in panel (a). \label{fig:alpha_feh_profile}} \end{figure} \subsection{Chemical evolution} The abundance of elements as a function of time and birth radius is dictated by the chemical evolution of the Galaxy. Chemical evolution from first principles involves tracking the birth and death of stars, the synthesis of elements in stars using nucleosynthetic yields, the return of synthesized elements to the ISM, the dilution of the ISM by the infall of fresh gas, and so on. Rather than such an ab initio approach, we adopt an empirical approach: we specify simple but physically motivated functional forms for the evolution of abundances as a function of $\tau$ and $R_b$ and fine-tune some of the free parameters using observational data. \subsubsection{Iron abundance} For the iron abundance [Fe/H], it is reasonable to assume that it decreases monotonically with birth radius at all times. This is motivated by the fact that the star formation efficiency is highest in the center of the Galaxy and falls off with radius.
A metallicity gradient of about $-0.09$ dex/kpc has been observed in the Milky Way \citep{2014AJ....147..116H}. Most chemical evolution models also predict metallicity to fall off with birth radius \citep{2009MNRAS.396..203S}. As for the dependence of metallicity on time, models like that of \citet{2009MNRAS.396..203S} predict a sharp increase in metallicity at earlier times, but at later times the rate of increase progressively slows down and the metallicity approaches an asymptotic value $F_{\rm max}(R_b)$, which depends on $R_b$. These features are captured by the following adopted functional forms, where, for clarity and brevity, $F$ is used to denote [Fe/H]: \begin{eqnarray} && F(R_b,\tau)=F_{\rm min}+(F_{\rm max}(R_b)-F_{\rm min}){\rm tanh}\left(\frac{\tau_{\rm max}-\tau}{\tau_F}\right) \label{equ:fe_tau}\\ &&F_{\rm max}(R_b)=F_{\rm min}{\rm tanh}\left( \frac{F_R(R_b-R_F)}{F_{\rm min}}\right) \label{equ:fe_rb} \end{eqnarray} Here $F_{\rm min}$ denotes the minimum ISM metallicity, $\tau_F$ the metallicity enrichment time scale, $F_R$ the current metallicity gradient in the solar neighborhood, and $R_F$ the radius at which the ISM has solar metallicity. \autoref{equ:fe_tau} is depicted graphically in \autoref{fig:amr_ism}a. The model is similar to that of \citet{2015MNRAS.449.3479S}, except for the form of $F_{\rm max}(R_b)$: our variation of the radial gradient $dF_{\rm max}/dR_b$ with $R_b$ is weaker than that of \citet{2015MNRAS.449.3479S}.
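As a concrete illustration, the two relations above can be evaluated numerically. The parameter values below are placeholders chosen only to produce sensible behaviour (the fitted values are listed in \autoref{tab:coeff}); treat them as assumptions, not results of this paper.

```python
import numpy as np

# Illustrative parameter values (assumptions; the fitted values live in
# the paper's coefficient table and are not reproduced here).
F_MIN = -1.0    # minimum ISM metallicity [dex]
TAU_F = 3.0     # metallicity enrichment time scale [Gyr]
TAU_MAX = 13.0  # lookback time of the onset of star formation [Gyr]
F_R = -0.09     # present-day radial metallicity gradient [dex/kpc]
R_F = 8.0       # radius at which the present-day ISM is solar [kpc]

def f_max(r_b):
    """Asymptotic (present-day) ISM metallicity at birth radius r_b [kpc]."""
    return F_MIN * np.tanh(F_R * (r_b - R_F) / F_MIN)

def feh_ism(r_b, tau):
    """ISM [Fe/H] at birth radius r_b [kpc] and lookback time tau [Gyr]."""
    return F_MIN + (f_max(r_b) - F_MIN) * np.tanh((TAU_MAX - tau) / TAU_F)
```

By construction, $F \to F_{\rm min}$ as $\tau \to \tau_{\rm max}$, and at any fixed age the metallicity decreases monotonically with birth radius, as required.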
This is possible because we assume that [Fe/H] decreases monotonically with birth radius at any given age. From previous works \citep{2017A&A...608L...1H,2017ApJS..232....2X}, $[\alpha/{\rm Fe}]${} has been found to be approximately constant with age until about 8 Gyr, followed by a rapid rise thereafter. We postulate a $\tanh$ function that transitions from a low value $\alpha_{\rm min}$ to a high value $\alpha_{\rm max}$ at an age $\tau_{\alpha}$, with the sharpness of the transition being controlled by $\Delta \tau_{\alpha}$: \begin{eqnarray} [\alpha/{\rm Fe}](F,\tau)&=&\alpha_{\rm min}(F)+ \nonumber \\ && \frac{\alpha_{\rm max}-\alpha_{\rm min}(F)}{2}\left[{\rm tanh}\left(\frac{\tau-\tau_{\alpha}}{\Delta \tau_{\alpha}}\right)+1\right]. \end{eqnarray} The relationship is shown as a dashed line in \autoref{fig:alpha_feh_profile}b. It is motivated by the physics of chemical enrichment (Fe and $\alpha$ elements) in the Galaxy, which is mainly regulated by supernovae. The initial value $\alpha_{\rm max}$ of $[\alpha/{\rm Fe}]${} is set by the yields of SNII, which occur almost immediately ($\sim$10 Myr) after the initiation of star formation at age $\tau_{\rm max}$. We expect $\alpha_{\rm max}$ to be independent of metallicity $F$. SNIa mostly produce Fe and almost no $\alpha$ elements, which leads to a drop in $[\alpha/{\rm Fe}]${}. SNIa require a binary companion and can only occur after a significant time delay; their rates typically peak about 1 Gyr after star formation. This sets the time scale $\Delta \tau_{\alpha}$ of the transition from high to low $[\alpha/{\rm Fe}]${}. We expect $\tau_{\alpha}$ to be given by $\tau_{\rm max}-k\Delta \tau_{\alpha}$, with $k$ somewhere between 1 and 2; the exact value needs to be determined by fitting to observational data. As the evolution proceeds, the ISM will at some stage reach an equilibrium state due to the infall of fresh metal-poor gas, and this will set the floor $\alpha_{\rm min}$.
Since the star formation rate and the infall rate are not the same at all birth radii, $\alpha_{\rm min}$ will depend on birth radius. Given that $F$ is a function of $R_b$ and $\tau$, we expect $\alpha_{\rm min}$ to be a function of $F$. Given that $[\alpha/{\rm Fe}]${} is approximately constant for young stars, we can easily deduce the dependence of $[\alpha/{\rm Fe}]${} on [Fe/H] for them, and this is shown in \autoref{fig:alpha_feh_profile}a using different spectroscopic data sets. For young stars $[\alpha/{\rm Fe}]${} is strongly anti-correlated with metallicity for $-0.8<{\rm [Fe/H]}<0$, but outside this range the slope approaches zero. We use the $\tanh$ function \begin{eqnarray} \alpha_{\rm min}(F)=\frac{\alpha_{\rm outer}}{2}\left[{\rm tanh}\left(-\frac{(F-F_{\alpha})}{\Delta F_{\alpha}} \right)+1\right] \end{eqnarray} to describe this relationship. $\alpha_{\rm outer}$ indicates the $[\alpha/{\rm Fe}]${} for young stars in the outer disc, which have the lowest [Fe/H]. \autoref{fig:alpha_feh_profile}a shows that different observational data sets are all consistent with the adopted relationship. The data sets used are main-sequence turnoff (MSTO) stars from the LAMOST survey, red-giant-branch (RGB) stars from the LAMOST survey, and MSTO stars from the GALAH survey. For older stars, the variation of the age-$[\alpha/{\rm Fe}]${} relation with metallicity is difficult to study, because old stars are mostly metal poor and it is therefore difficult to get a sample with a wide range of metallicities. At earlier times, we expect $[\alpha/{\rm Fe}]${} to be the same throughout the disc, as stars then formed out of the same primordial gas. Hence, we postulate $\alpha_{\rm max}$ to be independent of $F$. Additionally, we also postulate $\tau_{\alpha}$ and $\Delta \tau_{\alpha}$ to be independent of $F$.
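The two $\tanh$ relations above combine into a single function of metallicity and age. The parameter values below are those quoted in the caption of \autoref{fig:alpha_feh_profile} for the GALAH fit; the sketch itself is only illustrative.

```python
import numpy as np

# Parameter values quoted in the figure caption for the GALAH fit.
ALPHA_MAX = 0.225   # [alpha/Fe] plateau of the oldest stars
ALPHA_OUTER = 0.18  # [alpha/Fe] floor of young stars in the outer disc
F_ALPHA = -0.5      # [Fe/H] midpoint of the alpha_min transition [dex]
DF_ALPHA = 0.4      # width of the alpha_min transition [dex]
TAU_ALPHA = 10.5    # transition age [Gyr]
DTAU_ALPHA = 1.5    # width of the age transition [Gyr]

def alpha_min(feh):
    """[alpha/Fe] floor of young stars as a function of [Fe/H]."""
    return 0.5 * ALPHA_OUTER * (np.tanh(-(feh - F_ALPHA) / DF_ALPHA) + 1.0)

def alpha_fe(feh, tau):
    """[alpha/Fe] as a function of [Fe/H] and age tau [Gyr]."""
    amin = alpha_min(feh)
    return amin + 0.5 * (ALPHA_MAX - amin) * (
        np.tanh((tau - TAU_ALPHA) / DTAU_ALPHA) + 1.0)
```

For young ages the function reduces to $\alpha_{\rm min}(F)$, while for the oldest stars it approaches $\alpha_{\rm max}$ nearly independently of metallicity, consistent with $\alpha_{\rm max}$ being set by SNII yields alone.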
\autoref{fig:alpha_feh_profile}b shows the observed dependence of $([\alpha/{\rm Fe}]-\alpha_{\rm min}(F))/(\alpha_{\rm max}-\alpha_{\rm min}(F))$ on age for stars belonging to different data sets. The GALAH-MSTO data set is consistent with the adopted functional form and shows the sharpest transition compared to the other data sets, most likely due to better age precision. For GALAH-MSTO, the relationship is very flat for stars younger than 8 Gyr, but for the other data sets a small increase with age can be seen. Given that $R_b$ can be estimated from [Fe/H] and $\tau$, we can now express $[\alpha/{\rm Fe}]$ in terms of $\tau$ and $R_b$, and this is shown in \autoref{fig:amr_ism}b. A detailed study of $[\alpha/{\rm Fe}]${} as a function of age and metallicity, based on the GALAH survey, will be presented in a forthcoming paper; here we adopt some of its relevant findings. Due to systematic differences between spectroscopic surveys we need to adjust the relations depending upon the survey we want to use them for. The actual values that we use for building our model for APOGEE data are given in \autoref{tab:coeff}; they differ slightly in the values of $\alpha_{\rm outer}$ and $\Delta F_{\alpha}$ from those given in \autoref{fig:alpha_feh_profile}. \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{alpha_fe_Rz_apogee_h11_data.pdf} \caption{Distribution of APOGEE stars in the $({\rm [Fe/H]},[\alpha/{\rm Fe}])$ plane at different locations in the Galaxy. The stars follow the selection function given in \autoref{equ:rgselect1}. In each panel the density is normalized such that the maximum density is unity. The locations are specified in terms of cylindrical coordinates $R$ and $z$ and the quoted values are in units of kpc. Each panel corresponds to a bin in $(R,|z|)$ space, with $R$ increasing from left to right and $|z|$ increasing from bottom to top.
In each panel the solid lines show the evolution of abundances at a given birth radius taken from our model. The blue line is for a birth radius of 4 kpc, while the orange line is for the birth radius corresponding to the central value of $R$ in each bin. The black dots mark the evolution at ages of 4, 8, 10, 11, 12, and 13 Gyr, with metallicity decreasing with age. In panel (f), the model profiles corresponding to birth radii of 1, 2, 4, 6, 8, 10, 12 and 14 kpc are shown, with [$\alpha$/Fe] increasing with birth radius. \label{fig:apogee_data}} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{alpha_fe_Rz_apogee_h11_model.pdf} \caption{Model predictions for the distribution of stars in the $({\rm [Fe/H]},[\alpha/{\rm Fe}])$ plane at different locations in the Galaxy, satisfying the selection function of APOGEE stars (\autoref{equ:rgselect1}). Solid lines are evolutionary tracks for a given birth radius; for further description see \autoref{fig:apogee_data}. \label{fig:apogee_model}} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{alpha_fe_Rz_apogee_h11_model_no_churn.pdf} \caption{Distribution of mock APOGEE stars in the $({\rm [Fe/H]},[\alpha/{\rm Fe}])$ plane at different locations in the Galaxy as predicted by a model with negligible churning ($\sigma_{L0}=150$ kpc km/s). The solid lines mark model evolutionary tracks for a given birth radius as described in \autoref{fig:apogee_data}. Abundances follow the track corresponding to the local radius. \label{fig:apogee_model_no_churn}} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{p_fe_Rz_apogee_h11_model_0.pdf} \caption{Distribution of [Fe/H] for low-$[\alpha/{\rm Fe}]${} stars from APOGEE along with predictions from our model.
\label{fig:p_fe_Rz_0}} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{p_fe_Rz_apogee_h11_model_1.pdf} \caption{Same as \autoref{fig:p_fe_Rz_0} but for high-$[\alpha/{\rm Fe}]${} stars. \label{fig:p_fe_Rz_1}} \end{figure*} \begin{figure*}[tb] \centering \includegraphics[width=0.98\textwidth]{lz_fe_Rz_lamapo_model_low_alpha.pdf} \caption{Specific angular momentum as a function of [Fe/H], for low-$[\alpha/{\rm Fe}]${} stars. Solid lines show observational data from the LAMOST and APOGEE surveys. The dashed lines are model predictions. Different panels correspond to different locations in the Galaxy, specified by Galactocentric coordinates $R$ and $z$. The black dot marks the mean metallicity and the angular momentum of a circular orbit corresponding to the mean radius of the stars in each $(R,z)$ bin. \label{fig:Lz_lamapo_model}} \end{figure*} \begin{figure}[tb] \centering \includegraphics[width=0.49\textwidth]{pz_R8.pdf} \caption{Vertical distribution of star-forming mass at $R=8.0$ kpc according to our model. The distribution is well fit by a function that is a sum of two exponentials, with scale heights $h_{\rm thin}$ and $h_{\rm thick}$, and a fractional contribution $f_{\rm thick}=0.17$ of the thicker component. \label{fig:pz_R8}} \end{figure} \subsection{Velocity dispersion relations} In \autoref{sec:phasespace} the velocity dispersions $\sigma_R$ and $\sigma_z$ were assumed to be functions of $\tau$, $R_b$ and $R_g$ (or equivalently $L$). Given that $R_b$ can be expressed in terms of $\tau$ and [Fe/H], we seek functions of the form $\sigma_v(\tau,L,{\rm [Fe/H]})$. Functions of this form were explored by \citet{2020arXiv200406556S} using data from the LAMOST, GALAH and APOGEE spectroscopic surveys. They showed that different stellar samples, even though they target different tracer populations and employ a variety of age estimation techniques, follow the same set of fundamental relations.
In addition to the well known power-law dependence on age, the velocity dispersion is a parabola-shaped function of $L$ with a minimum at around the solar angular momentum, and it is anti-correlated with metallicity. In \citet{2020arXiv200406556S}, the dispersion $\sigma_v$ of velocity $v$ (for either $v_R$ or $v_z$) is assumed to depend on the stellar age $\tau$, angular momentum $L$, metallicity ${\rm [Fe/H]}${}, and vertical height from the disc midplane $z$, via the following multiplicatively separable functional form \begin{equation} \sigma_v(\tau,L,{\rm [Fe/H]},z,\theta_v)=\sigma_{0,v} f_{\tau}f_{L}f_{{\rm [Fe/H]}} f_{z}, \label{equ:vdisp_model} \end{equation} with \begin{equation} f_{\tau}=\left(\frac{\tau/{\rm Gyr}+0.1}{10+0.1}\right)^{\beta_v}, \label{equ:f_tau} \end{equation} \begin{equation} f_{L}=\frac{\alpha_{L,v} (L/L_{\odot})^2+\exp[-(L-L_{\odot})/\lambda_{L,v}]}{1+\alpha_{L,v}}, \label{equ:f_lz} \end{equation} \begin{equation} f_{\rm [Fe/H]}=1+\gamma_{{\rm [Fe/H]},v} {\rm [Fe/H]}, \label{equ:f_feh} \end{equation} \begin{equation} f_{z}=1+\gamma_{z,v} |z|. \label{equ:f_z} \end{equation} Here $\theta_v=\{\sigma_{0,v},\beta_{v},\lambda_{L,v},\alpha_{L,v},\gamma_{{\rm [Fe/H]},v},\gamma_{z,v}\}$ is a set of free parameters, and we adopt the values from \citet{2020arXiv200406556S} (also listed in \autoref{tab:coeff}). $\sigma_{0,v}$ is a constant that denotes the velocity dispersion for stars lying in the midplane with solar metallicity, solar angular momentum ($L_{\odot}=\Omega_{\odot}R_{\odot}^2$) and an age of 10 Gyr. Since in our models velocity dispersions have no dependence on $z$, we set $\gamma_{z,v}$ to zero, and to compensate we increase $\sigma_{0,z}$ by a few km/s. The origin of the $z$ dependence is not fully understood, and more work is required in the future before we can successfully incorporate it in theoretical models.
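A minimal numerical sketch of this dispersion relation is given below. All parameter values here (including the solar angular momentum) are placeholder assumptions chosen for illustration; the fitted values are those of \citet{2020arXiv200406556S} and \autoref{tab:coeff}.

```python
import numpy as np

# Placeholder parameters for the vertical dispersion sigma_z (assumptions).
SIGMA_0 = 25.0       # dispersion at 10 Gyr, solar L, [Fe/H]=0, z=0 [km/s]
BETA = 0.44          # age power-law index beta_v
ALPHA_L = 0.5        # strength of the quadratic L term alpha_{L,v}
LAMBDA_L = 1000.0    # exponential L scale lambda_{L,v} [kpc km/s]
GAMMA_FEH = -0.2     # metallicity slope gamma_{[Fe/H],v}
GAMMA_Z = 0.0        # z slope, set to zero as in the text
L_SUN = 8.0 * 232.0  # L_sun = Omega_sun * R_sun^2 [kpc km/s] (assumed)

def sigma_v(tau, L, feh, z):
    """Multiplicatively separable dispersion model of Eq. (vdisp_model)."""
    f_tau = ((tau + 0.1) / (10.0 + 0.1)) ** BETA
    f_L = (ALPHA_L * (L / L_SUN) ** 2
           + np.exp(-(L - L_SUN) / LAMBDA_L)) / (1.0 + ALPHA_L)
    f_feh = 1.0 + GAMMA_FEH * feh
    f_z = 1.0 + GAMMA_Z * abs(z)
    return SIGMA_0 * f_tau * f_L * f_feh * f_z
```

By construction $f_\tau=f_L=f_{\rm [Fe/H]}=f_z=1$ at the reference point, so $\sigma_v$ there equals $\sigma_{0,v}$; the exponential term raises the dispersion towards the inner disc and the quadratic term towards the outer disc, reproducing the parabola-shaped $L$ dependence.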
Based on the results of \citet{2020arXiv200406556S}, where $\gamma_{{\rm [Fe/H]},v}$ was found to decrease with age, we allow it to vary linearly with age; the adopted maximum and minimum values are given in \autoref{tab:coeff}. \subsection{Selection Function} \label{sec:selfunc} To compare the predictions of a model with observations, we need to take the selection function of the observational data into account. Let $S$ denote the event that a star is in a survey based on criteria defined over some set of observables ${\bf y}$, e.g. $\log g$, $T_{\rm eff}$, apparent magnitude $H$. The selection function of a survey, $p(S|{\bf y})$, is then the probability of the event $S$ given ${\bf y}$ \citep{2012MNRAS.427.2119S}. This is typically an indicator function that is 1 if the star satisfies the selection criteria and 0 if it does not. Given the intrinsic variables age $\tau$, metallicity ${\rm [Fe/H]}${}, distance $s$ and mass $m$, one can predict any observable ${\bf y}$ using theoretical stellar isochrones. Hence the selection function can also be computed over the intrinsic variables. For a given initial mass function $\xi(m)$ (IMF), normalized such that $\int \xi(m) {\rm d}m=1$, we have \begin{eqnarray} p(S|\tau,{\rm [Fe/H]},s)=\int p(S|\tau,{\rm [Fe/H]},s,m) \xi(m) {\rm d}m. \end{eqnarray} We use PARSEC-COLIBRI stellar isochrones \citep{2017ApJ...835...77M} to compute this. In this paper, we are mainly interested in the distribution of ${\rm [Fe/H]}${} and $[\alpha/{\rm Fe}]${} for stars in a bin $k_{Rz}$ in $(R,|z|)$ space, chosen with some given selection function $S$. The required distribution is given by \begin{eqnarray} p({\rm [Fe/H]}, [\alpha/{\rm Fe}] | S, k_{Rz}) &=& \int p({\rm [Fe/H]}, [\alpha/{\rm Fe}], \tau | S, k_{Rz}) {\rm d}\tau.
\end{eqnarray} Assuming that the bin $k_{Rz}$ is small enough that $p({\rm [Fe/H]}, [\alpha/{\rm Fe}], \tau | R, z)$ is constant over the bin, we have \begin{eqnarray} p({\rm [Fe/H]}, [\alpha/{\rm Fe}], \tau | S, k_{Rz}) &=& p({\rm [Fe/H]}, [\alpha/{\rm Fe}], \tau | R, z) \times \nonumber \\ && p(S | \tau, {\rm [Fe/H]}, k_{Rz}), \end{eqnarray} with \begin{eqnarray} p(S|\tau,{\rm [Fe/H]},k_{Rz})= \int p(S|\tau,{\rm [Fe/H]},s) p(s|k_{Rz}){\rm d}s, \end{eqnarray} and $p(s|k_{Rz})$ being the distribution of distances of observed stars in bin $k_{Rz}$. \section{Results} We explore the joint distribution of ${\rm [Fe/H]}${} and $[\alpha/{\rm Fe}]${} at different $R$ and $|z|$ locations in the Galaxy. First, we present observational results. Next, we compare the observational results with the predictions of our theoretical model from \autoref{sec:model}. \autoref{fig:apogee_data} shows the observational results from the APOGEE-DR14 survey. This was first presented by \citet{2015ApJ...808..132H}; our figure is a reproduction of theirs but with a few changes. Our $(R, |z|)$ grid is slightly different, with an extra bin in $|z|$. Also, our target selection criteria are more conservative, so that the selection function can be easily reproduced when we forward model the observed data. In \autoref{fig:apogee_data}, to aid comparison with theoretical predictions, the chemical evolutionary tracks corresponding to different birth radii are indicated by solid lines. Black dots mark the progression of time, with [Fe/H] increasing (decreasing) with time (age). The blue line is for $R_b=4$ kpc, while the orange line is for $R_b$ corresponding to the central value of $R$ in each panel. In \autoref{fig:apogee_data}, a double sequence (high-$[\alpha/{\rm Fe}]${} and low-$[\alpha/{\rm Fe}]${}) is visible in most panels, e.g., panels (d), (i), (m), (n) and (o).
The two sequences are well separated at the low-[Fe/H] end, but with increasing [Fe/H] they progressively approach each other and eventually merge at ${\rm [Fe/H]}${} of about 0. The relative number of stars belonging to each sequence depends sensitively upon $R$ and $|z|$. The fraction of stars belonging to the high-$[\alpha/{\rm Fe}]${} sequence increases with increasing height $|z|$ and decreasing radius $R$; in other words, the fraction is strongest away from the disc plane and towards the inner disc (top-left panel). The opposite is true for the low-$[\alpha/{\rm Fe}]${} sequence, which is strongest close to the disc plane and towards the outer disc (bottom-right panel). The high-$[\alpha/{\rm Fe}]${} sequence appears to follow the track with $R_b=4$ kpc in all panels, and the distribution of stars along this track is also very similar in all panels, a property that we refer to as the uniformity of the high-$[\alpha/{\rm Fe}]${} sequence. In contrast, the distribution of stars along the low-$[\alpha/{\rm Fe}]${} sequence is highly variable. With an increase of either $R$ (going from the inner disc to the outer disc) or $|z|$ (going from the midplane upwards), the density peak shifts to the left, i.e., towards lower values of [Fe/H]. \autoref{fig:apogee_model} shows the distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane predicted by our model for the same $(R,|z|)$ grid as in \autoref{fig:apogee_data} and using the same target selection criteria (\autoref{equ:rgselect1}). The predicted distributions are strikingly similar to the observed distributions and are even found to reproduce some of the finer details of the observed distributions. Some examples of similarities are as follows. The double sequence is prominent in panels (d), (i), (m), (n) and (o). The relative fraction of stars in the two sequences varies with $R$ and $|z|$ in the same way as in \autoref{fig:apogee_data}.
The high-$[\alpha/{\rm Fe}]${} sequence is strongest at high $|z|$ and small $R$, and gradually diminishes in strength with increasing $R$ and decreasing $|z|$. The high-$[\alpha/{\rm Fe}]${} sequence seems to follow the $R_b=4$ kpc evolutionary track in all panels. For the low-$[\alpha/{\rm Fe}]${} sequence, the [Fe/H] and $[\alpha/{\rm Fe}]${} coordinates of the density peak change with $R$ and $|z|$ in exactly the same way as in \autoref{fig:apogee_data}. To summarize, \autoref{fig:apogee_model} demonstrates that our chemodynamical model can successfully reproduce the observed distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane across different locations in the Galaxy. \autoref{fig:apogee_model_no_churn} shows the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution predicted by our model when churning is set to be negligible, and it looks very different from \autoref{fig:apogee_model}. Unlike the double sequence seen in \autoref{fig:apogee_model}, only one sequence can be seen in \autoref{fig:apogee_model_no_churn}. In each panel, the sequence mainly follows the orange line, which is the evolutionary track with $R_b$ equal to the mean radius $R$ of each panel. Close to the plane, the sequence is more like a blob centered around the black dot corresponding to an age of 4 Gyr. However, with increasing $|z|$ the sequence becomes elongated and moves upward towards older stars. This is because the scale height increases with age (due to the increase of $\sigma_z$ with age), which makes it more likely to find old stars at higher $|z|$. \autoref{fig:apogee_model_no_churn} makes it clear that radial migration, or more precisely the process of churning, is essential to get the double $\alpha$-sequence. Note that blurring was kept unchanged and its effect is included in \autoref{fig:apogee_model_no_churn}. So blurring by itself is not enough to bring stars to a given $R$ from a birth radius that is too far from $R$.
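The strength of churning invoked here can be quantified via the angular-momentum spread $\sigma_L(\tau)=\sigma_{L0}(\tau/\tau_{\rm max})^{1/2}$ introduced earlier. The sketch below converts that spread into an implied spread in guiding radius, $\Delta R_g \approx \sigma_L/\Theta_{\odot}$, assuming a flat rotation curve; the circular speed, $\tau_{\rm max}$, and the strong-churning value of $\sigma_{L0}$ are assumptions, while $\sigma_{L0}=150$ kpc km/s is the paper's negligible-churning value.

```python
import math

THETA_SUN = 232.0  # circular speed [km/s] (assumed flat rotation curve)
TAU_MAX = 13.0     # maximum lookback time [Gyr] (assumed)

def sigma_L(tau, sigma_L0):
    """Gaussian angular-momentum spread accumulated by age tau [Gyr]."""
    return sigma_L0 * math.sqrt(tau / TAU_MAX)

def delta_Rg(tau, sigma_L0):
    """Implied 1-sigma spread in guiding radius, sigma_L / Theta_sun [kpc]."""
    return sigma_L(tau, sigma_L0) / THETA_SUN

# With sigma_L0 = 150 kpc km/s (the 'no churning' model), even the oldest
# stars spread by well under 1 kpc in guiding radius, whereas an assumed
# strong-churning value of ~1000 kpc km/s moves them by several kpc.
```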
An important observation made by \citet{2015ApJ...808..132H} was that the shape of the ${\rm [Fe/H]}${} distribution (MDF) changes systematically with $R$: for stars close to the plane, the skewness changes from being negative in the inner disc ($R<7$ kpc) to being positive in the outer disc ($R>11$ kpc). To investigate this, we split the APOGEE data set used in \autoref{fig:apogee_data} into low- and high-$\alpha$ samples, and show, with orange lines, the observed MDFs for the low-$[\alpha/{\rm Fe}]${} and the high-$[\alpha/{\rm Fe}]${} stars in \autoref{fig:p_fe_Rz_0} and \autoref{fig:p_fe_Rz_1} respectively. The MDFs are shown at different $R$ and $|z|$ locations. The predictions of the model are shown alongside as blue lines. For low-$[\alpha/{\rm Fe}]${} stars, overall the model predictions are in very good agreement with the observations. Some panels show slight differences, with the observations showing a sharper peak, e.g., panels (d), (e), (k) and (s). For high-$[\alpha/{\rm Fe}]${} stars, the model predictions are also in good agreement with the observations; however, in some panels the low-metallicity tail is more extended in the observations. An important prediction of the radial migration model is that along the low-$[\alpha/{\rm Fe}]${} sequence the mean specific angular momentum of stars should decrease systematically from the low-[Fe/H] end to the high-[Fe/H] end. This is because, at any given $R$, stars that have migrated from the inner disc carry less angular momentum than stars that have migrated from the outer disc. The model predictions are shown in \autoref{fig:Lz_lamapo_model}, where the angular momentum is plotted as a function of ${\rm [Fe/H]}${} (dashed light-blue line) for stars belonging to the low-$[\alpha/{\rm Fe}]${} sequence at different $R$ and $|z|$ locations. For panels with $R>5$ kpc, a strong anti-correlation can be seen. The relationship between $L$ and [Fe/H] is not universal; it varies with $R$ and $|z|$.
As expected, the mean angular momentum increases with $R$ in proportion to $v_{\rm circ}(R)R$, which is indicated by the black dot. The steepness of the profiles increases with $R$ and $|z|$. Observations are also shown alongside (solid orange lines) and match the predictions reasonably well. The vertical distribution of stars in the Milky Way is well fit by a sum of two exponential functions \citep{1983MNRAS.202.1025G}, leading to the suggestion that the Milky Way is made up of two distinct components: the thin disc (with smaller scale height) and the thick disc (with larger scale height). The model prediction for the vertical distribution of stellar mass is shown in \autoref{fig:pz_R8}. It can be seen that it is well fit by a sum of two exponential functions, although the model does not have a distinct thick disc component. In our tests, a model with constant star formation rate and scale length was also well fit by a sum of two exponential functions, but with slightly different fit parameters. To conclude, a continuous star formation history can give rise to a vertical density distribution that is well fit by a sum of two exponential functions. A similar argument against the existence of a distinct thick disc was also presented by \citet{1987ApJ...314L..39N}. \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{alpha_fe_Rz_apogee_h11_one_model.pdf} \caption{Model prediction for the distribution of stars in the $({\rm [Fe/H]},[\alpha/{\rm Fe}])$ plane for $(7<R/{\rm kpc}<9)\ \& \ (1<|z|/{\rm kpc}<1.5)$. The solid lines show the evolution of abundances for different birth radii $R_b$. The black dots mark the evolution at ages of 4, 8, 10, 11, 12 and 13 Gyr.
\label{fig:alpha_fe_Rz_one}} \end{figure*} \begin{figure}[tb] \centering \includegraphics[width=0.49\textwidth]{alpha_fe_Rz_apogee_h11_one_model_csfr_cRd.pdf} \caption{Model prediction for the distribution of stars in the $({\rm [Fe/H]},[\alpha/{\rm Fe}])$ plane for $(7<R/{\rm kpc}<9)\ \& \ (1<|z|/{\rm kpc}<1.5)$. The model has a constant star formation rate and scale length (no inside-out formation). The black dots mark the evolution at ages of 4, 8, 10, 11, 12 and 13 Gyr as in \autoref{fig:alpha_fe_Rz_one}. \label{fig:alpha_fe_csfr}} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=0.99\textwidth]{p_Rb_Rz_apogee_h11_model.pdf} \caption{Model predictions for the distribution of birth radius $R_b$. Shown are cases for young and old stars, which correspond to the low-$[\alpha/{\rm Fe}]${} and high-$[\alpha/{\rm Fe}]${} sequences respectively. The distributions correspond to star-forming mass, as no selection function was applied. The two numbers in each panel denote the positions of the peaks in the distribution, the left number for old stars and the right number for young stars. \label{fig:p_Rb_Rz}} \end{figure*} \section{Discussions} \subsection{What is the reason for the existence of the high- and low- $[\alpha/{\rm Fe}]${} sequences?} Comparing the map in \autoref{fig:alpha_fe_Rz_one} with the overlaid evolutionary tracks provides insight into the origin of the high- and low- $[\alpha/{\rm Fe}]${} sequences. The high-$[\alpha/{\rm Fe}]${} sequence coincides with the $R_b=4$ kpc evolutionary track, suggesting that it is primarily a sequence of age. This is most obvious at the high-[Fe/H] end. However, at the low-[Fe/H] end, the high-$[\alpha/{\rm Fe}]${} sequence is also partly a sequence of birth radius. In contrast, the low-$[\alpha/{\rm Fe}]${} sequence follows an 8 Gyr isochrone, suggesting that it is primarily a sequence of birth radius. The densest portions of both sequences are parallel to the isochrones.
The dense low-$[\alpha/{\rm Fe}]${} portion is made up of stars younger than 10 Gyr, while the dense high-$[\alpha/{\rm Fe}]${} portion is made up of stars older than 11 Gyr. The gap between the two sequences is due to the sharp transition of $[\alpha/{\rm Fe}]${} from a high value to a low value within a span of a few Gyr, centered around 10.5 Gyr. This sharp transition, which is due to the time delay in the onset of SNIa explosions, creates a valley in the number density of stars corresponding to the region occupied by the 10.5 Gyr isochrone, and is the reason behind the existence of the double sequence. In our preferred model the star formation history peaks at 10.5 Gyr and the radial scale length of the disc decreases with increasing age (\autoref{fig:amr_ism}). In \autoref{fig:alpha_fe_csfr} we plot the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution for a model with constant star formation history and radial scale length, which looks very similar to \autoref{fig:alpha_fe_Rz_one}. This demonstrates that the double sequence is not due to any features in the profile of star formation rate or scale length with age. \subsection{How should we interpret the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane?} \label{sec:interpretation} The best way to interpret the distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane for a given Galactic location is in terms of a sequence of evolutionary tracks corresponding to different birth radii (see \autoref{fig:alpha_fe_Rz_one}). For each point in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane there is a corresponding point in the $(R_b,\tau)$ plane. Each track is labelled by its value of $R_b$, and $\tau$ determines the location of a point along the track.
For a given Galactic location $(R,|z|)$, the number density at an age $\tau$ on a track is given by \begin{eqnarray} p(R_b,\tau|R,z) &\propto& p(\tau)p(R_b|\tau)p(z|\sigma_z) \times \nonumber \\ && \left[\int p(R_g|R_b,\tau)p(R|R_g,R_b,\tau) {\rm d}R_g\right]. \label{equ:ptrack} \end{eqnarray} The term in square brackets on the RHS is a function that typically peaks at around $R_b=R$ but has tails extending to lower and higher $R_b$. The extent of the tails is governed by the strength of churning ($\sigma_{L0}$) and blurring ($\sigma_R$). In the absence of churning and blurring, stars would be distributed only along the track $R_b=R$ (similar to \autoref{fig:apogee_model_no_churn}). The first term on the RHS of \autoref{equ:ptrack} is the star formation rate, the second is the distribution of birth radii at the time of formation. The third term is the vertical density, which is roughly proportional to ${\rm sech}^2(z/(2h_z))/h_z$, with scale height $h_z \propto \sigma_z^2$. In general $h_z$ increases with age (via \autoref{equ:f_tau}, as $\tau^{0.88}$). At small $|z|$, stellar populations with small $h_z$ will dominate, while at high $|z|$, stellar populations with large $h_z$ will dominate. Going up the evolutionary track, we thus expect to see kinematically hotter populations (populations with large $\sigma_z$). The rate of change of $[\alpha/{\rm Fe}]${} with age is highest at around 10.5 Gyr, which leads to a local minimum in the number density of stars at that age. Hence, even if $p(\tau)$ is smooth and continuous, we will still see a minimum in the density distribution along a track and consequently a bimodality in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane. However, since along a single evolutionary track $h_z \propto \tau^{0.88}$, the distribution of $h_z$ is expected to be continuous as long as the star formation rate is continuous.
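The age-dependent vertical weighting described above can be sketched numerically. Only the scaling $h_z \propto \tau^{0.88}$ is taken from the text; the normalization of the scale height is an assumption. The sketch shows how the old-to-young density ratio grows steeply with $|z|$.

```python
import math

H_Z_10 = 0.3  # assumed scale height of a 10 Gyr population [kpc]

def h_z(tau):
    """Scale height along a track; h_z ∝ sigma_z^2 ∝ tau^0.88."""
    return H_Z_10 * (tau / 10.0) ** 0.88

def rho_z(z, tau):
    """Vertical density, proportional to sech^2(z/(2 h_z))/h_z."""
    hz = h_z(tau)
    return (1.0 / math.cosh(z / (2.0 * hz))) ** 2 / hz

# In the midplane the young (thin) population dominates per unit mass,
# but away from the plane the old (thick) population takes over.
```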
It is now easy to see why the mass-weighted distribution of scale height $h_z$ at the solar annulus can be a continuous function, as found by \citet{2012ApJ...751..131B}, in spite of the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution being bimodal. \subsection{Why does the locus of the high-$[\alpha/{\rm Fe}]${} sequence appear to be the same across all locations of the Galaxy?} The high-$[\alpha/{\rm Fe}]${} sequence seems to approximately follow the $R_b=4$ kpc evolutionary track at all Galactic locations. It can be seen from \autoref{fig:alpha_fe_Rz_one} that the shape of this track has a knee at ${\rm [Fe/H]} \sim -0.2$. To the left of the knee the sequence is almost parallel to the ${\rm [Fe/H]}${} axis and to the right it slopes downwards. The left part is mainly made up of stars with age greater than 11 Gyr. The lower envelope is made up of 11 Gyr isochrones with $R_b>4$ kpc, which run parallel to the ${\rm [Fe/H]}${} axis. This is separated from the low-$[\alpha/{\rm Fe}]${} sequence due to the sharp transition of $[\alpha/{\rm Fe}]${} when age changes from 11 to 10 Gyr. At different Galactic locations the distribution of $R_b$ can change, but the high-$[\alpha/{\rm Fe}]${} sequence will still remain flat and the lower envelope will remain the same. The upper/outer envelope is made up of stars with $R_b<4$ kpc. In this regime the distribution of $R_b$ has a very characteristic shape: it is a rising function of $R_b$ at all locations. Hence, as long as there are enough stars coming from $R_b<4$ kpc, we will always see the same shape of the outer envelope. The reason we always have enough stars from the inner Galaxy is churning. \autoref{fig:p_Rb_Rz} shows that the distribution of $R_b$ is a skewed distribution with a well defined peak. Such a distribution is predicted by \autoref{equ:rb_dist}, which peaks at $R_b=R_d$. In reality, the peak shifts to larger $R_b$ with increasing $R$.
This is because the actual distribution of $R_b$ at a given $R$ (\autoref{equ:ptrack}) depends upon additional factors involving churning and blurring. For old stars ($\tau>10$ Gyr) the churning is very efficient, such that the term within the square brackets in \autoref{equ:ptrack} is very broad; hence the $p(R_b|\tau)$ term dominates. \citet{2014ApJ...796...38N} argue that the thinness of the high-$[\alpha/{\rm Fe}]${} sequence, combined with the fact that the same sequence exists at a wide range of $R$ and $|z|$, is probably indicative of similar conditions having existed throughout the disc. This argument was further supported by \citet{2016ApJ...823...30B} based on the similarity of the radial profiles of high-$[\alpha/{\rm Fe}]${} MAPs. However, the existence of the high-$[\alpha/{\rm Fe}]${} sequence at all $R$ and $|z|$ says very little about where the stars were born. Birth radius is uniquely specified by location in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane. The fact that the high-$[\alpha/{\rm Fe}]${} sequence can still be seen at large $R$ is due to churning and blurring. For ages greater than 11 Gyr, different evolutionary tracks have very similar $[\alpha/{\rm Fe}]${} at birth. This is the reason for the thinness along the high-$[\alpha/{\rm Fe}]${} track: a spread in birth radius produces a spread in ${\rm [Fe/H]}${} but not in $[\alpha/{\rm Fe}]${}. The high-${\rm [Fe/H]}${} end of the sequence is made up of stars with $R_b<4$ kpc; here the evolutionary tracks are close together in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane and appear quite similar. \subsection{Chemical enrichment} We have proposed an empirical model for chemical enrichment and constrained it using observational data containing ages and abundances of stars. The fact that it reproduces the distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane at different Galactic locations further supports the adopted enrichment model.
However, our model is also physically motivated and its parameters can be used to shed light on the physics of enrichment. $\Delta t_{\alpha}$ indicates the time delay between the onset of star formation and the peak in the rate of SNIa, which mainly depends on the lifetime of the binary companion of a white dwarf. Our adopted value of 1.5 Gyr is in good agreement with typical expectations from theoretical models \citep{2003MNRAS.340..908K}. Our results suggest that the initial composition at all birth radii was very similar, probably due to the short timescale of SNII enrichment that sets the initial value of $[\alpha/{\rm Fe}]${}. At later times, the ${\rm [Fe/H]}${} and $[\alpha/{\rm Fe}]${} values are found to reach an equilibrium value which depends on birth radius; this is probably regulated by gas dynamical processes like inflow of fresh gas, outflows and radial flows, but needs further investigation. In the future, we should relax some of the assumptions that were made and let the data inform us whether they are true. \subsection{The role played by velocity dispersion relations} The overall pattern of the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distributions is sensitive to the velocity dispersion relations, especially the relative fraction of stars in the high- and the low-$[\alpha/{\rm Fe}]${} sequences. This is because $\sigma_z$ determines the scale height of a population and the scale height determines which population is going to dominate at what $|z|$ (\autoref{sec:interpretation}). The high-$[\alpha/{\rm Fe}]${} sequence is primarily made up of old stars while the low-$[\alpha/{\rm Fe}]${} sequence is made up of comparatively younger stars. So their relative fraction is particularly sensitive to the dependence of $\sigma_z$ on age. For a given $R$, with increase of $|z|$, the peak for the low-$[\alpha/{\rm Fe}]${} sequence moves to lower ${\rm [Fe/H]}${} and higher $[\alpha/{\rm Fe}]${}, both for the observations and the model.
This is most evident in the panels corresponding to $7<R/{\rm kpc}<9$ (\autoref{fig:apogee_data} and \autoref{fig:apogee_model}). The slight increase in $[\alpha/{\rm Fe}]${} of the peak is due to the increase of $\sigma_z$ with age, which makes it more likely for older stars to occupy regions with higher $|z|$. But the shift in ${\rm [Fe/H]}${} is more than that predicted by simply travelling up along the orange evolutionary track. The extra shift of the peak to lower ${\rm [Fe/H]}${} is specifically due to the anti-correlation of $\sigma_z$ with ${\rm [Fe/H]}${}, which makes it more likely for low-${\rm [Fe/H]}${} stars to occupy regions with higher $|z|$. If the parameter $\gamma_{{\rm [Fe/H]},v}$ that controls the dependence of $\sigma_z$ on ${\rm [Fe/H]}${} is set to zero, no shift of the peak is seen in the distributions predicted by the model. This provides an independent confirmation of the \citet{2020arXiv200406556S} scaling of velocity dispersion with ${\rm [Fe/H]}${} (or equivalently birth radius) for a given age. In the topmost $|z|$ slice (\autoref{fig:apogee_data} and \autoref{fig:apogee_model}), we have a dominant low-$[\alpha/{\rm Fe}]${} sequence at large $R$. Naively, we expect the topmost slice to be dominated by old stars as they have high $\sigma_z$ and hence large scale height. We see the domination of old, high-$[\alpha/{\rm Fe}]${} stars for $R<9$ kpc but not for larger values of $R$. Inside-out formation of the disc, i.e., the scale length of the disc being smaller at earlier times, is one possible explanation for this result. In our model, $R_d^{\rm min}<R_d^{\rm max}$ indicates inside-out formation of the disc. However, setting $R_d^{\rm min}=R_d^{\rm max}$ was found to have very little effect on the distributions. The high-$[\alpha/{\rm Fe}]${} sequence was found to shift slightly towards low ${\rm [Fe/H]}${}, which is due to more contribution from stars with larger $R_b$, but for $R>9$ kpc the low-$[\alpha/{\rm Fe}]${} sequence was still dominant.
Next, we set the parameter $\alpha_{L,v}$, which regulates the increase of $\sigma_z$ with $L$, to zero. With this change, the high-$[\alpha/{\rm Fe}]${} sequence was found to dominate the panels (d), (e), and (f) corresponding to large $R$. $\alpha_{L,v}$ is responsible for flaring of young low-$[\alpha/{\rm Fe}]${} stars in the outer disc, and this makes the low-$[\alpha/{\rm Fe}]${} sequence dominate at large $R$. Low-$[\alpha/{\rm Fe}]${} stars comprise all stars with age less than 10 Gyr; hence, they significantly outnumber the high-$[\alpha/{\rm Fe}]${} stars. So, even a small amount of flaring is enough to make them dominate over the high-$[\alpha/{\rm Fe}]${} stars. Flaring has been reported in the outer disc of the Milky Way and has been shown to occur in numerical simulations \citep{2015ApJ...804L...9M}. Both \citet{2016ApJ...823...30B} and \citet{2017MNRAS.471.3057M}, using APOGEE data, showed that the youngest stars flare the most. Simulations of discs with GMCs, spiral arms and a bar by \citet{2017MNRAS.470.2113A} show that flaring is due to constant birth velocity dispersion. \citet{2020arXiv200406556S} provide a kinematic indication of flaring. For constant scale height, $\sigma_z$ is expected to decrease exponentially with $L$. \citet{2020arXiv200406556S} show that $\sigma_z$ decreases with $L$ up to the solar angular momentum, but increases thereafter, which is indicative of flaring in the outer disc. They argue that constant birth dispersion will lead to more flaring in young stars, and that flaring will start at much lower values of $R$. The reason is that flaring starts when an exponentially declining $\sigma_z$, as a function of $L$, hits the floor of constant birth dispersion; for younger stars the overall dispersion is small and hence the floor is hit at a smaller $L$.
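The floor argument above can be made concrete with a small sketch; the dispersion normalisations, the scale $L_s$ and the 8 km/s birth-dispersion floor below are assumed, illustrative numbers:

```python
import numpy as np

# Illustrative sketch (not fitted values): sigma_z falls exponentially with
# angular momentum L until it hits a constant birth-dispersion floor; the L at
# which the floor is reached marks the onset of flaring, and it is smaller for
# younger (kinematically cooler) populations.
def sigma_z(L, sigma0, L_s=1500.0, sigma_birth=8.0):
    return np.maximum(sigma0 * np.exp(-L / L_s), sigma_birth)

def flare_onset(sigma0, L_s=1500.0, sigma_birth=8.0):
    """L (kpc km/s) at which the declining profile meets the floor."""
    return L_s * np.log(sigma0 / sigma_birth)

L_old = flare_onset(sigma0=50.0)     # hot, old population
L_young = flare_onset(sigma0=15.0)   # cool, young population
print(sigma_z(500.0, 50.0) > sigma_z(3000.0, 50.0))  # profile declines, then floors
print(L_young < L_old)               # younger stars hit the floor at smaller L
```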
\subsection{Relation to other studies} \citet{2009MNRAS.396..203S} introduced a detailed model for chemical evolution with radial migration and gas flows that was capable of simulating the joint distribution of abundances and phase space coordinates. They made predictions for the distribution of solar neighborhood main-sequence stars in the ${\rm([Fe/H], [O/Fe])}$ plane, following the selection function of the GCS survey \citep{2004A&A...418..989N}. Note, for the purpose of the discussion here, ${\rm [O/Fe]}$ can be considered a proxy for $[\alpha/{\rm Fe}]${}. Their models were able to reproduce the high- and low-$[\alpha/{\rm Fe}]${} sequences. They showed that the double sequence has nothing to do with breaks in star formation history but was a consequence of the sharp enrichment of $[\alpha/{\rm Fe}]${} due to SNIa. The low-$[\alpha/{\rm Fe}]${} sequence was a sequence of stars born at different radii but present in the solar neighborhood due to radial migration. They also predicted an anti-correlation of angular momentum with ${\rm [Fe/H]}${} which was in qualitative agreement with data from GCS. We arrive at the same conclusions, but using an empirical model for chemical enrichment that is calibrated to observational data and using an improved model for velocity dispersion. Our empirical relations for the evolution of abundances are in good agreement with theirs, which provides strong support to their ab initio chemical evolution model. They compared their results with a small observational data set; moreover, the observational data set had kinematic biases. Hence it was not possible to do a proper comparison of the density distribution in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane with the models. Specifically, the models predicted a significant number of stars in between the two sequences but these seemed to be missing in the presented observational data.
In contrast, we make a detailed comparison with observations using a significantly larger data set and test the predictions over different locations across the Galaxy. \citet{2013A&A...549A.147B,2017A&A...605A..89B} studied the abundance distribution of bulge stars using microlensed dwarfs and subgiants within 1 kpc of the Galactic center. They found that the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution of stars in the Galactic bulge is very similar to the sequence found in the inner disc \citep[see also][]{2019A&A...626A..16R}. This is correctly predicted by our model. We expect the bulge to look like panels (m) and (s) of \autoref{fig:apogee_model}. The bulge should follow the $R_b<1$ kpc evolutionary track, and this is similar to the $R_b=4$ kpc track which represents the high-$[\alpha/{\rm Fe}]${} sequence seen throughout the Galaxy. \citet{2016ApJ...823...30B} studied the $R$ and $|z|$ distributions of mono-abundance populations using APOGEE red-clump stars. They found that the radial distribution of high-$[\alpha/{\rm Fe}]${} MAPs is well described by a single exponential (for $R>4$ kpc) but that of low-$[\alpha/{\rm Fe}]${} MAPs is more complex. The low-$[\alpha/{\rm Fe}]${} MAP stars are distributed in a ring-like structure characterized by a peak with exponential fall-off away from the peak, both for smaller and larger radii. \citet{2017MNRAS.471.3057M}, using mono-age and mono-metallicity APOGEE red-giant-branch stars, also reported similar findings. The above findings are easy to understand using \autoref{fig:alpha_fe_Rz_one}. A low-$[\alpha/{\rm Fe}]${} MAP represents young stars born at a particular birth radius. Our churning mechanism predicts that at any given time after birth, the distribution of guiding radius $p(R_g|R_b,\tau)$ will be ring-like, centered around $R_b$. Blurring further distributes stars with a given $R_g$ over $R$ in a ring around $R_g$.
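As a quick numerical check of the churning-plus-blurring picture just described, one can convolve a ring-like guiding-radius distribution with a ring-like blurring kernel; the result is again ring-like, peaked near $R_b$. The Gaussian ring shapes and widths below are assumptions made purely for illustration:

```python
import numpy as np

# Illustrative sketch: p(R | R_b, tau) obtained by convolving a ring-like
# churning distribution p(R_g | R_b, tau) with a ring-like blurring kernel
# p(R | R_g). Gaussian rings and their widths are assumed for illustration.
Rb = 6.0                                  # birth radius (kpc)
R = np.linspace(0.0, 15.0, 1501)
dR = R[1] - R[0]

ring = lambda r, mu, s: np.exp(-0.5 * ((r - mu) / s) ** 2)
p_Rg = ring(R, Rb, 1.5)                   # churning: peaked around R_b
p_Rg /= p_Rg.sum() * dR

# p(R | R_b, tau) = int p(R_g | R_b, tau) p(R | R_g) dR_g  (numerical quadrature)
p_R = np.array([np.sum(p_Rg * ring(r, R, 0.8)) * dR for r in R])

peak = R[np.argmax(p_R)]
print(abs(peak - Rb) < 0.1)               # the convolution peaks near R_b: ring-like
```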
Hence, the distribution of $R$, \begin{eqnarray} p(R|R_b,\tau)= \int p(R_g|R_b,\tau)\, p(R|R_g,\tau,R_b)\, {\rm d}R_g, \end{eqnarray} will also be ring-like, as it is given by a convolution of one ring-like distribution with another ring-like distribution. A high-$[\alpha/{\rm Fe}]${} MAP typically represents stars with $R_b$ of about 4 kpc, which will also be ring-like but with a peak close to $R=4$ kpc. Since there was no observed data inward of 4 kpc, the radial distribution was expected to be well fit by a single exponential. Along the high-$[\alpha/{\rm Fe}]${} sequence, the evolutionary tracks are closely spaced. Hence, a MAP can in general also contain stars from multiple birth radii, which can shift the peak further inwards. \citet{2015MNRAS.449.3479S} proposed an action-based analytical distribution function with a prescription for radial migration. Our model is similar to theirs, but unlike them we also make predictions for $[\alpha/{\rm Fe}]${}. We adopt their prescription for radial migration. The strength of migration is characterized by the parameter $\sigma_{L0}$, defined as the dispersion of angular momentum for a 12 Gyr old population. By fitting to GCS stars, \citet{2015MNRAS.449.3479S} estimated $\sigma_{L0}=1150$ kpc km/s. We adopt a very similar value and find that it reproduces the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution of APOGEE stars quite well. Recently, \citet{2020arXiv200204622F} have also estimated $\sigma_{L0}$, making use of APOGEE red-clump stars and building upon the model of \citet{2015MNRAS.449.3479S}. They estimate the dispersion for a 12 Gyr old population to be 875 kpc km/s, which is slightly smaller. \section{Conclusions} We have presented an analytical chemodynamical model of the Milky Way that can make predictions for the joint distribution of position, velocity, age and abundance of stars in the Milky Way. Parametric models of this sort have important uses.
Even before the Gaia DR2 data release, observational multi-dimensional data sets were becoming vast and unwieldy. The same holds true for cosmological simulations of Milky Way analogues \citep[e.g. ][]{2018MNRAS.480..652E}. Our model provides a framework for fitting both observational and simulated data, and tying both together through a basis set of key parameters. The key aspect in which the model improves upon previous works is its inclusion of a new prescription for the evolution of $[\alpha/{\rm Fe}]${} with age and ${\rm [Fe/H]}${}, and a new set of relations describing the velocity dispersion of stars. For the first time, we have been able to show that a model with a smooth and continuous star formation history and velocity dispersion relations can reproduce the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution of observed stars at different $R$ and $|z|$ locations across the Galaxy. The model also satisfies a number of other observational constraints. It has a vertical distribution of stars that is well fit by a sum of two exponential functions. For the low-$[\alpha/{\rm Fe}]${} stars, the model is also able to reproduce the trend of mean angular momentum as a function of metallicity at different $R$ and $|z|$ locations. A number of finer details of the observed $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution are also correctly reproduced. These include (i) the observed double sequence (bimodality) in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane, (ii) the relative fraction of stars in the high- and the low-$[\alpha/{\rm Fe}]${} sequences and its variation with $R$ and $|z|$, (iii) the change in position of the low-$[\alpha/{\rm Fe}]${} peak with $R$ and $|z|$, and (iv) the skewness of the MDFs as a function of $R$ and $|z|$.
Our work confirms and significantly extends the earlier findings of \citet{2009MNRAS.396..203S} relating to the origin of the double sequence in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane and the thick disc; their study was limited to the solar neighborhood and lacked a detailed comparison with the greatly improved data available since the Gaia DR2 data release. Our work is also in agreement with \citet{2012ApJ...751..131B}, who had shown that the scale-height distribution of mono-abundance populations is continuous, which supports the argument that the star formation history is also continuous. The ring-like radial distribution of stars for low-$[\alpha/{\rm Fe}]${} mono-abundance populations, as shown by \citet{2016ApJ...823...30B} and \citet{2017MNRAS.471.3057M}, is also in agreement with the predictions of our model. In \citet{2020arXiv200406556S}, it was shown that for older stars the apparent break and rise of the velocity dispersion profile, with respect to that of a power law, is due to the systematic decrease of angular momentum with radius. When this is taken into account, the velocity dispersion of old and high-$[\alpha/{\rm Fe}]${} stars, which are traditionally associated with the thick disc, also follows the same set of relations for its dependence on age, angular momentum and metallicity as that of the other stars that make up the thin disc. Hence, the break in the age velocity-dispersion relation, the bimodality in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} distribution, the uniformity of the locus of the high-$[\alpha/{\rm Fe}]${} sequence, and the double exponential nature of the vertical density distribution do not require an abrupt change in either the star formation history or the kinematic evolutionary history of the Milky Way. In other words, these are no longer sufficient arguments for the existence of a distinct thick disc stellar population. The word `distinct' is used in the sense that the evolution is not smooth or continuous.
A brief period of quenching, as proposed by others \citep{2019A&A...625A.105H,2001ApJ...554.1044C}, could potentially still be present, but it is not required to explain the above mentioned properties of the Milky Way. The high-$[\alpha/{\rm Fe}]${} sequence at the low-${\rm [Fe/H]}${} end is a sequence of both age and birth radius, while at the high-${\rm [Fe/H]}${} end it is a sequence of age. In contrast, the low-$[\alpha/{\rm Fe}]${} sequence is primarily a trend of different birth radii. The origin of the double sequence is due to two key processes: the sharp transition of $[\alpha/{\rm Fe}]${} at around 10.5 Gyr ago, and the radial migration of stars. The transition is most likely due to the delay between the onset of star formation and the occurrence of SNIa in the early Universe. This sharp transition creates a valley in the density distribution of stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])${} plane, approximately parallel to the ${\rm [Fe/H]}${} axis. Radial migration, more precisely the process of churning, is responsible for the large spread of the low-$[\alpha/{\rm Fe}]${} sequence along the ${\rm [Fe/H]}${} axis. We show that if churning is not included, the process of blurring alone is not sufficient to reproduce the double sequence. At any given radius, the high-$[\alpha/{\rm Fe}]${} sequence is dominated by stars that have migrated outwards from the inner Galaxy; however, the contribution of locally born stars and inward migrators is not negligible. The apparent uniformity in the locus of the high-$[\alpha/{\rm Fe}]${} sequence is due to churning being very efficient. Efficient churning firstly makes it possible for enough stars from the inner radius to reach large $R$, and secondly makes the distribution of birth radius $R_b$ almost independent of $R$. The velocity dispersion relations are responsible for some of the systematic trends of the $([{\rm Fe/H}], [\alpha/{\rm Fe}])$\ distribution with $R$ and $|z|$.
The MDF of the low-$[\alpha/{\rm Fe}]${} sequence is found to shift towards lower ${\rm [Fe/H]}${} with increase of $|z|$. This is due to the dependence of the vertical velocity dispersion on ${\rm [Fe/H]}${} (or equivalently birth radius). At high $|z|$, the lack of high-$[\alpha/{\rm Fe}]${} stars for $R>9$ kpc is due to flaring, and the flaring is due to the parabolic-shaped dependence of velocity dispersion on angular momentum, characterized by a minimum at around the solar angular momentum and a rise thereafter. There are various aspects of the model that can be improved in the future. We have only explored the evolution of iron and $\alpha$ elements; it should be straightforward to extend the model to also include $r$- and $s$-process elements. Production sites and nucleosynthetic yields for these elements are not well understood. High resolution spectroscopic surveys like GALAH and APOGEE are now providing reliable estimates of abundances for these elements for a large number of stars in the Milky Way. The phase-space evolution of the model is now reasonably well constrained; in the future, we can focus on the physics exclusive to these elements. Our model has a number of free parameters and there are likely to be degeneracies between some of them, which we have not explored. The parameters were tuned manually. In the future, a proper MCMC based exploration of the parameter space should be useful \citep[e.g. ][]{2017ARA&A..55..213S}. One of the greatest strengths of the model is that, being purely analytical, it can be easily fit to observational or cosmologically simulated data. Since the model is physically motivated, it means that we can gain understanding about the various physical processes that have shaped our Galaxy. One of the most poorly understood physical processes is radial migration. For our parametric model, we have adopted only a simple prescription.
In reality, a more complex process is likely to exist, as non-axisymmetric perturbations (spiral arms, bars, interlopers) come and go over the aeons. Our results show that the distribution of low-$[\alpha/{\rm Fe}]${} stars in the $([{\rm Fe/H}], [\alpha/{\rm Fe}])$\ plane and their variation with $R$ and $|z|$ is very sensitive to radial migration; this is very promising for constraining radial migration. The gap between the low-$[\alpha/{\rm Fe}]${} and the high-$[\alpha/{\rm Fe}]${} sequences is very sensitive to the parameters $t_{\rm \alpha}$ and $\Delta t_{\alpha}$ that control the enrichment of $\alpha$ elements in the Galaxy, another process that is not fully understood. Our chemical evolution model, although physically motivated, is still empirical in nature. The star formation rate is decoupled from the chemical evolution, which is clearly incorrect. In the future, it will be useful to investigate {\it ab initio} chemical evolution models that take into account star formation, gas infall, outflows and nucleosynthesis, and fine-tune them to reproduce the chemical evolution tracks that we have derived here. Finally, the model being purely analytical, it should be easy to insert into forward-modelling tools like {\sl Galaxia} \citep{2011ApJ...730....3S} that generate synthetic catalogs of stars and are useful for interpreting stellar surveys. \acknowledgments SS is funded by a Senior Fellowship (University of Sydney), an ASTRO-3D Research Fellowship and JBH's Laureate Fellowship from the Australian Research Council (ARC). JBH's research team is supported by an ARC Laureate Fellowship (FL140100278) and funds from ASTRO-3D. MJH is supported by an ASTRO-3D 4-year Research Fellowship. The GALAH Survey is supported by the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project CE170100013. This work has made use of data acquired through the Australian Astronomical Observatory, under programs: GALAH, TESS-HERMES and K2-HERMES.
We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This work has made use of data from SDSS-III. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. This work has made use of Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) which is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. \bibliographystyle{yahapj}
\section{Introduction}\label{sec:intro} Sorting $n$ real values stored in an array $\mathbf{x}=(x_1,\dots,x_n)\in\mathbb{R}^n$ requires finding a permutation $\sigma$ in the symmetric group $\mathcal{S}_n$ such that $\mathbf{x}_\sigma:=(x_{\sigma_1},\dots, x_{\sigma_n})$ is increasing. A call to a sorting procedure returns either the vector of sorted values $S(\mathbf{x}):=\mathbf{x}_\sigma$, or the vector $R(\mathbf{x})$ of the ranks of these values, namely the inverse of the sorting permutation, $R(\mathbf{x}):=\sigma^{-1}$. For instance, if the input vector $\mathbf{x}=(0.38,4,-2,6,-9)$, one has $\sigma=(5,3,1,2,4)$, and the sorted vector $S(\mathbf{x})$ is $\mathbf{x}_\sigma=(-9,-2,0.38,4,6)$, while $R(\mathbf{x})=\sigma^{-1}=(3,4,2,5,1)$ lists the rank of each entry in $\mathbf{x}$. \textbf{On (not) learning with sorting and ranking.} Operators $R$ and $S$ play an important role across statistics and machine learning. For instance, $R$ is the main workhorse behind order statistics~\cite{david2004order}, but also appears prominently in $k$-NN rules, in which $R$ is applied on a vector of distances to select the closest neighbors to a query point. Ranking is also used to assess the performance of an algorithm: either at test time, such as $0/1$ and top-$k$ classification accuracies and NDCG metrics when learning-to-rank~\citep{jarvelin2002cumulated}, or at train time, by selecting pairs~\cite{burges2007learning,burges2011learning} and triplets~\cite{weinberger2009distance} of points of interest. The sorting operator $S$ is of no less importance, and can be used to handle outliers in robust statistics, as in trimmed~\cite{huber2011robust} and least-quantile regression~\cite{rousseeuw1984least} or median-of-means estimators~\cite{lugosi2019regularization,lecue2017robust}. 
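The operators just defined can be sketched in a few lines of NumPy; the rank vector $R(\mathbf{x})$ is the inverse of the sorting permutation, which is obtained by applying argsort twice:

```python
import numpy as np

# The S and R operators from the text, sketched with NumPy. The rank vector
# is the inverse of the sorting permutation (a double argsort).
def sort_op(x):
    """S(x): the vector of sorted values."""
    return np.sort(x)

def rank_op(x):
    """R(x): rank of each entry (1-indexed), i.e. the inverse permutation."""
    return np.argsort(np.argsort(x)) + 1

x = np.array([0.38, 4.0, -2.0, 6.0, -9.0])
sigma = np.argsort(x) + 1       # sorting permutation (1-indexed)
print(sigma.tolist())           # [5, 3, 1, 2, 4]
print(sort_op(x).tolist())      # [-9.0, -2.0, 0.38, 4.0, 6.0]
print(rank_op(x).tolist())      # [3, 4, 2, 5, 1]
```

This reproduces the worked example in the text: $\sigma=(5,3,1,2,4)$ and $R(\mathbf{x})=\sigma^{-1}=(3,4,2,5,1)$.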
Yet, and although examples of using $R$ and $S$ abound in ML, neither $R$ nor $S$ is actively used in end-to-end learning approaches: while $S$ is not differentiable everywhere, $R$ is outright pathological, since it is piecewise constant and therefore has a Jacobian $\partial R/\partial \mathbf{x}$ that is almost everywhere zero. \textbf{Everywhere differentiable proxies to ranking and sorting.} Replacing the usual ranking and sorting operators by differentiable approximations holds an interesting promise, as it would immediately enable an end-to-end training of any algorithm or metric that uses sorting. For instance, all of the test metrics enumerated above could be upgraded to training losses, if one were able to replace their inner calls to $R$ and $S$ by differentiable proxies. More generally, one can envision applications in which these proxies can be used to impose rank/sorting based constraints, such as fairness considerations that rely on the quantiles of (logistic) regression outputs~\cite{feldman2015certifying,jiang2019wasserstein}. In the literature, such smoothed rank operators appeared first in~\citep{taylor2008softrank}, where a softranks operator is defined as the expectation of the rank operator under a random perturbation, $\mathbb{E}_{\mathbf{z}}[R(\mathbf{x}+\mathbf{z})]$, where $\mathbf{z}$ is a standard Gaussian random vector. That expectation (and its gradient w.r.t.\ $\mathbf{x}$) was approximated in~\citep{taylor2008softrank} using a $O(n^3)$ algorithm. Shortly after, \citep{qin2010general} used the fact that the rank of each value $x_i$ in $\mathbf{x}$ can be written as $\sum_j \mathbf{1}_{x_i> x_j}$, and smoothed these indicator functions with logistic maps $g_\tau(u):=(1+\exp(-u/\tau))^{-1}$. The soft-rank operator they propose is $A\mathbf{1}_n$, where $A=g_\tau(D)$ with $g_\tau$ applied elementwise to the pairwise matrix of differences $D=[x_i-x_j]_{ij}$, for a total of $O(n^2)$ operations.
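A minimal sketch of this $O(n^2)$ soft-rank; the choice $\tau=0.1$ below is an illustrative assumption:

```python
import numpy as np

# The O(n^2) soft-rank described above: smooth each indicator 1_{x_i > x_j}
# with a logistic map applied to the matrix of pairwise differences. As
# tau -> 0 the output approaches the hard (0-indexed) ranks, up to the
# constant 1/2 contributed by the diagonal (g_tau(0) = 1/2).
def soft_rank(x, tau=0.1):
    D = x[:, None] - x[None, :]             # D_ij = x_i - x_j
    A = 1.0 / (1.0 + np.exp(-D / tau))      # g_tau applied elementwise
    return A @ np.ones(len(x))              # A 1_n

x = np.array([0.38, 4.0, -2.0, 6.0, -9.0])
r = soft_rank(x)
print(np.round(r - 0.5).astype(int).tolist())   # [2, 3, 1, 4, 0] = R(x) - 1
```

For this well-separated input the soft ranks already coincide, after removing the diagonal offset, with the hard ranks $R(\mathbf{x})-\mathbf{1}_n$.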
A similar yet more refined approach was recently proposed by~\cite{grover2019}, building on the same pairwise difference matrix $D$ to output a unimodal row-stochastic matrix. This yields, as in~\citep{taylor2008softrank}, a probabilistic rank for each input. \textbf{Our contribution: smoothed $R$ and $S$ operators using optimal transport (OT).} We show first that the sorting permutation $\sigma$ for $\mathbf{x}$ can be recovered by solving an optimal assignment (OA) problem, from an input measure supported on all values in $\mathbf{x}$ to a second \textit{auxiliary} target measure supported on \emph{any} increasing family $\mathbf{y}=(y_1<\dots<y_n)$. Indeed, a key result from OT theory states that, pending a simple condition on the matching cost, the OA is achieved by matching the smallest element in $\mathbf{x}$ to $y_1$, the second smallest to $y_2$, and so forth, therefore ``revealing'' the sorting permutation of $\mathbf{x}$. We leverage the flexibility of OT to introduce generalized ``split'' ranking and sorting operators that use target measures with only $m\ne n$ weighted target values, and use the resulting optimal transport plans to compute convex combinations of ranks and sorted values. These operators are however far too costly to be of practical interest and, much like sorting algorithms, remain non-differentiable. To recover tractable and differentiable operators, we regularize the OT problem and solve it using the Sinkhorn algorithm~\cite{CuturiSinkhorn}, at a cost of $O(nm\ell)$ operations, where $\ell$ is the number of Sinkhorn iterations needed for the algorithm to converge. We show that the size $m$ of the target measure can be set as small as $3$ in some applications, while $\ell$ rarely exceeds $100$ with the settings we consider. \textbf{Outline.} We recall first the link between the $R$ and $S$ operators and OT between 1D measures, to define then generalized Kantorovich rank and sort operators in \S\ref{sec:OTsortingcdfquantiles}.
We turn them into differentiable operators using entropic regularization, and discuss in \S\ref{sec:sinkhornops} the several parameters that can shape this smoothness. Using these smooth operators, we propose in \S\ref{sec:quantileactivation} alternatives to cross-entropy and least-quantile losses to learn classifiers and regression functions. \textbf{Notations.} We write $\mathbb{O}_n\subset\mathbb{R}^n$ for the set of increasing vectors of size $n$, and $\Sigma_n\subset\mathbb{R}^n_+$ for the probability simplex. $\mathbf{1}_n$ is the $n$-vector of ones. Given $\mathbf{c}=(c_1,\dots,c_n)\in\mathbb{R}^n$, we write $\overline{\mathbf{c}}$ for the cumulative sum of $\mathbf{c}$, namely the vector $(c_1+\dots+c_i)_i$. Given two permutations $\sigma\in \mathcal{S}_n, \tau \in \mathcal{S}_m$ and a matrix $A\in\mathbb{R}^{n\times m}$, we write $A_{\sigma\tau}$ for the $n\times m$ matrix $[A_{\sigma_i\tau_j}]_{ij}$ obtained by permuting the rows and columns of $A$ using $\sigma,\tau$. For any $x\in\mathbb{R}$, $\delta_x$ is the Dirac measure on $x$. For a probability measure $\xi\in\mathcal{P}(\mathbb{R})$, we write $F_\xi$ for its cumulative distribution function (CDF), and $Q_\xi$ for its quantile function (generalized if $\xi$ is discrete). Functions are applied element-wise on vectors or matrices; the $\circ$ operator stands for the element-wise product of vectors. \section{The Sinkhorn Ranking and Sorting Operators}\label{sec:sinkhornops} Both the K-rank $\widetilde{R}$ and K-sort $\widetilde{S}$ operators are expressed using the optimal solution $P_\star$ to the linear program in \eqref{eq:discreteOT}. However, $P_\star$ is not differentiable w.r.t.\ the inputs $\mathbf{a},\mathbf{x}$ nor the parameters $\mathbf{b},\mathbf{y}$~\cite[\S5]{bertsimas1997introduction}.
We propose instead to rely on a differentiable variant~\citep{CuturiSinkhorn,CuturiBarycenter} of the OT problem that uses entropic regularization~\cite{wilson1969use,Galichon-Entropic,kosowsky1994invisible}, as detailed in~\citep[\S4]{MAL-073}. This differentiability is reflected in the fact that the optimal regularized transport plan is a dense matrix (yielding more arrows in Fig.~\ref{fig:myOTsorting}\emph{(c)}), which ensures differentiability everywhere w.r.t. both $\mathbf{a}$ and $\mathbf{x}$. Consider first a regularization strength $\varepsilon>0$ to define the solution to the regularized OT problem: $$ P_\star^\varepsilon := \uargmin{P\in U(\mathbf{a},\mathbf{b})} \dotp{P}{C_{\mathbf{x}\mathbf{y}}}-\varepsilon H(P)\quad, \text{where} \quad H(P) = -\sum_{i,j} P_{ij} \left( \log P_{ij} - 1\right) \,. $$ One can easily show~\cite{CuturiSinkhorn} that $P_\star^\varepsilon$ has the factorized form $\mathbf{D}(\mathbf{u})K\mathbf{D}(\mathbf{v})$, where $K=\exp(-C_{\mathbf{x}\mathbf{y}}/\varepsilon)$ and $\mathbf{u}\in\mathbb{R}^n$ and $\mathbf{v}\in\mathbb{R}^m$ are fixed points of the Sinkhorn iteration outlined in Alg.~\ref{alg-sink}. To differentiate $P_\star^\varepsilon$ w.r.t. $\mathbf{a}$ or $\mathbf{x}$ one can use the implicit function theorem, but this would require solving a linear system using $K$. We consider here a more direct approach, using algorithmic differentiation of the Sinkhorn iterations, after a number $\ell$ of iterations needed for Alg.~\ref{alg-sink} to converge~\citep{hashimoto2016learning,2016-bonneel-barycoord,flamary2018wasserstein}.
That number $\ell$ depends on the choice of $\varepsilon$~\citep{franklin1989scaling}: typically, the smaller $\varepsilon$, the more iterations $\ell$ are needed to ensure that each successive update in $\mathbf{v},\mathbf{u}$ brings the column-sum of the iterate $\mathbf{D}(\mathbf{u})K\mathbf{D}(\mathbf{v})$ closer to $\mathbf{b}$, namely that the difference between $\mathbf{v}\circ K^T \mathbf{u}$ and $\mathbf{b}$ (as measured by a discrepancy function $\Delta$ as used in Alg.~\ref{alg-sink}) falls below a tolerance parameter $\eta$. Assuming $P^\varepsilon_\star$ has been computed, we introduce Sinkhorn ranking and sorting operators by simply appending an $\varepsilon$ subscript to the quantities presented in Def.~\ref{def:splitquant}, and replacing $P_\star$ in these definitions by the regularized OT solution $P_\star^\varepsilon=\mathbf{D}(\mathbf{u})K\mathbf{D}(\mathbf{v})$. \begin{wrapfigure}{r}{0.37\textwidth} \begin{minipage}{0.37\textwidth} \begin{algorithm}[H]\label{alg-sink} \SetAlgoLined \textbf{Inputs:} $\mathbf{a},\mathbf{b},\mathbf{x},\mathbf{y},\varepsilon,h,\eta$ $C_{\mathbf{x}\mathbf{y}} \gets [h(y_j-x_i)]_{ij}$\; $K \gets e^{-C_{\mathbf{x}\mathbf{y}}/\varepsilon}, \mathbf{u}=\mathbf{1}_n$\; \Repeat{$\Delta(\mathbf{v}\circ K^T \mathbf{u},\mathbf{b})<\eta$}{ $\mathbf{v}\gets \mathbf{b}/K^T\mathbf{u},\;\mathbf{u}\gets \mathbf{a}/K\mathbf{v}$ } \KwResult{$\mathbf{u},\mathbf{v},K$} \caption{Sinkhorn} \end{algorithm} \end{minipage}\vskip-.7cm \end{wrapfigure} \begin{defn}[Sinkhorn Rank \& Sort]\label{def:softquant} Given a regularization strength $\varepsilon>0$, run Alg.\ref{alg-sink} to define $$ \begin{aligned} \opSR{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}} &:= n \mathbf{a}^{-1}\circ\mathbf{u}\circ K (\mathbf{v}\circ \overline{\mathbf{b}})\in[0,n]^n,\\ \opSquantile{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}} & := \, \mathbf{b}^{-1}\circ \mathbf{v} \circ K^T (\mathbf{u}\circ \mathbf{x})\in\mathbb{R}^{m}. 
\end{aligned}$$ \end{defn} \textbf{Sensitivity to $\varepsilon$.} Parameter $\varepsilon$ plays the same role as other temperature parameters in previously proposed smoothed sorting operators~\cite{qin2010general,taylor2008softrank,grover2019}: the smaller $\varepsilon$ is, the closer the Sinkhorn operator's output is to the original vectors of ranks and sorted values; the bigger $\varepsilon$, the closer $P_\star^\varepsilon$ is to the matrix $\mathbf{a}\mathbf{b}^T$, and therefore all entries of $\widetilde{R}_\varepsilon$ collapse to the average of $n\bar{\mathbf{b}}$, while all entries of $\widetilde{S}_\varepsilon$ collapse to the weighted average (using $\mathbf{a}$) of $\mathbf{x}$, as illustrated in Fig.~\ref{fig:explainSop}. Although choosing a small value for $\varepsilon$ might seem natural, in the sense that $\widetilde{R}_\varepsilon,\widetilde{S}_\varepsilon$ more faithfully approximate $R, S$, one should not forget that this would result in recovering the deficiencies of $R,S$ in terms of differentiability. When \emph{learning} with such operators, it may therefore be desirable to use a value for $\varepsilon$ that is large enough to ensure $\partial P_\star^\varepsilon/\partial \mathbf{x}$ has non-null entries. We usually set $\varepsilon=10^{-2}$ or $10^{-3}$ when $\mathbf{x}, \mathbf{y}$ lie in $[0,1]$ as in Fig.~\ref{fig:explainSop}. We have kept $\varepsilon$ fixed throughout Alg.~\ref{alg-sink}, but we do notice some speedups using scheduling as advocated by~\cite{schmitzer2016stabilized}. \begin{figure} \centering \includegraphics[width=\textwidth]{fig/cdfs.pdf} \caption{Behaviour of the S-ranks $\opSR{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}}{}$ and S-sort operators $\opSquantile{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}}{}$ as a function of $\varepsilon$. Here $n=m=10$, $\mathbf{b}$ is uniform and $\mathbf{y}=(0,\dots,m-1)/(m-1)$ is the regular grid in $[0,1]$. \emph{(left)} input data $\mathbf{x}$ presented as a bar plot.
\emph{(center)} Vector output of $\opSR{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}}{}$ (continuous ranks) as a function of $\varepsilon$. When $\varepsilon$ is small, one recovers an integer-valued vector of ranks. As $\varepsilon$ increases, regularization kicks in and produces mixtures of rank values that are continuous. These mixed ranks are closer for values that are close in absolute terms, as is the case with the 0-th and 9-th index of the input vector whose continuous ranks are almost equal when $\varepsilon\approx 10^{-2}$. \emph{(right)} vector of ``soft'' sorted values. These converge to the average of values in $\mathbf{x}$ as $\varepsilon$ is increased.} \label{fig:explainSop} \end{figure} \textbf{Parallelization.} The Sinkhorn computations laid out in Algorithm~\ref{alg-sink} imply the application of kernels $K$ or $K^T$ to vectors $\mathbf{v}$ and $\mathbf{u}$ of size $m$ and $n$ respectively. These computations can be carried out in parallel to compare $S$ vectors $\mathbf{x}_1,\dots,\mathbf{x}_S\in\mathbb{R}^n$ of real numbers, with respective probability weights $\mathbf{a}_1,\dots,\mathbf{a}_S$, to a single vector $\mathbf{y}$ with weights $\mathbf{b}$. To do so, one can store all kernels $K_s:= e^{-C_s/\varepsilon}$ in a tensor of size $S\times n\times m$, where $C_s=C_{\mathbf{x}_s\mathbf{y}}$.
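As a concrete sketch of this batched scheme (our own illustration, not the authors' implementation; the function name and its defaults are ours), the following NumPy code runs the Sinkhorn loop of Alg.~\ref{alg-sink} in parallel over $S$ problems with $h(u)=u^2$, then evaluates the S-rank and S-sort formulas of Def.~\ref{def:softquant}:

```python
import numpy as np

def sinkhorn_rank_sort_batch(A, X, b, y, eps=5e-3, n_iter=300):
    """Run Sinkhorn in parallel over S problems.

    A, X: (S, n) arrays of weights / values; b, y: (m,) target weights / values.
    Returns S-ranks (S, n) and S-sorts (S, m), with cost h(u) = u^2."""
    C = (y[None, None, :] - X[:, :, None]) ** 2      # (S, n, m) cost tensors
    K = np.exp(-C / eps)                             # one Gibbs kernel per problem
    U = np.ones_like(A)
    for _ in range(n_iter):                          # Sinkhorn scaling updates
        V = b[None, :] / np.einsum('snm,sn->sm', K, U)
        U = A / np.einsum('snm,sm->sn', K, V)
    n = X.shape[1]
    bbar = np.cumsum(b)                              # cumulative target weights
    R = n * U * np.einsum('snm,sm->sn', K, V * bbar[None, :]) / A   # S-ranks
    S = V * np.einsum('snm,sn->sm', K, U * X) / b[None, :]          # S-sorts
    return R, S
```

With a small $\varepsilon$ and well-separated inputs on $[0,1]$, the outputs are close to the plain ranks and sorted values, as discussed above.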
\textbf{Numerical Stability.} When using small regularization strengths, we recommend casting Sinkhorn iterations in the log-domain by considering, for each pair of vectors $\mathbf{x}_{s},\mathbf{y}$, the following stabilized updates (with $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ initialized to $\mathbf{0}_n$ and $\mathbf{0}_m$), \begin{equation}\begin{aligned}\label{eq:stabilized} \boldsymbol{\alpha}&\gets\varepsilon\log\mathbf{a}+\text{min}_\varepsilon\left(C_{\mathbf{x}_s\mathbf{y}}-\boldsymbol{\alpha}\mathbf{1}_m^T-\mathbf{1}_n\boldsymbol{\beta}^T\right)+\boldsymbol{\alpha},\\ \boldsymbol{\beta}&\gets\varepsilon\log\mathbf{b}+\text{min}_\varepsilon\left(C_{\mathbf{x}_s\mathbf{y}}^T-\mathbf{1}_m\boldsymbol{\alpha}^T-\boldsymbol{\beta}\mathbf{1}_n^T\right)+\boldsymbol{\beta}, \end{aligned}\end{equation} where $\min_\varepsilon$ is the soft-minimum operator applied linewise to a matrix to output a vector, namely for $M\in\mathbb{R}^{n\times m}$, $\min_\varepsilon(M)\in\mathbb{R}^n$ and is such that $[\min_\varepsilon(M)]_i=-\varepsilon(\log\sum_{j}e^{-M_{ij}/\varepsilon})$. The rationale behind the subtractions/additions of $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ above is that once a Sinkhorn iteration is carried out, the terms inside the parentheses above are normalized, in the sense that once divided by $\varepsilon$, their exponentials sum to one (they can be used to recover a coupling). Therefore, they must be negative, which improves the stability of summing exponentials~\citep[\S4.4]{MAL-073}.
\begin{wrapfigure}{r}{0.685\textwidth} \begin{minipage}{0.685\textwidth} \begin{algorithm}[H]\label{alg-sink2} \SetAlgoLined \textbf{Inputs:} $(\mathbf{a}_s,\mathbf{x}_s)_s\in(\Sigma_n\times\mathbb{R}^n)^S,(\mathbf{b},\mathbf{y})\in\Sigma_m\times\mathbb{O}_m,h,\varepsilon,\eta,\widetilde{g}.$ $\forall s, \widetilde{\mathbf{x}}_s= \widetilde{g}(\mathbf{x}_s),\, C_{s}= [h(y_j-(\widetilde{\mathbf{x}}_s)_i)]_{ij},\, \boldsymbol{\alpha}_s=\mathbf{0}_{n},\boldsymbol{\beta}_s=\mathbf{0}_m.$ \Repeat{$\max_s \Delta\left( \exp\left(\left(\mathbf{1}_m\boldsymbol{\alpha}_s^T+\boldsymbol{\beta}_s\mathbf{1}_n^T-C_s^T\right)/\varepsilon\right)\mathbf{1}_{n},\mathbf{b}\right)<\eta$}{ $\forall s, \boldsymbol{\beta}_s \gets \varepsilon \log\mathbf{b}+\text{min}_\varepsilon\left(C_s^T-\mathbf{1}_m\boldsymbol{\alpha}_s^T-\boldsymbol{\beta}_s\mathbf{1}_n^T\right)+\boldsymbol{\beta}_s$ $\forall s, \boldsymbol{\alpha}_s \gets \varepsilon\log\mathbf{a}_s+\text{min}_\varepsilon\left(C_s-\boldsymbol{\alpha}_s\mathbf{1}_m^T-\mathbf{1}_n\boldsymbol{\beta}_s^T\right)+\boldsymbol{\alpha}_s$ } $\forall s, \widetilde{R}_\varepsilon(\mathbf{x}_s)\leftarrow n\mathbf{a}_s^{-1}\circ \exp\left(\left(\boldsymbol{\alpha}_s\mathbf{1}_m^T+\mathbf{1}_n\boldsymbol{\beta}_s^T-C_s\right)/\varepsilon\right) \overline{\mathbf{b}},$ $\forall s, \widetilde{S}_\varepsilon(\mathbf{x}_s)\leftarrow\mathbf{b}^{-1}\circ \exp\left(\left(\mathbf{1}_m\boldsymbol{\alpha}_s^T+\boldsymbol{\beta}_s\mathbf{1}_n^T-C_s^T\right)/\varepsilon\right) \mathbf{x}_s.$ \KwResult{$\left(\widetilde{R}_\varepsilon(\mathbf{x}_s), \widetilde{S}_\varepsilon(\mathbf{x}_s)\right)_s$.} \caption{Sinkhorn Ranks/Sorts} \end{algorithm} \end{minipage} \end{wrapfigure} \textbf{Cost function.} Any nonnegative convex function $h$ can be used to define the ground cost, notably $h(u)=|u|^p$, with $p$ set to either $1$ or $2$.
Another important result that we inherit from OT is that, assuming $\varepsilon$ is close enough to $0$, the transport matrices $P_\star^\varepsilon$ we obtain should \emph{not} vary under the application of any increasing map to each entry in $\mathbf{x}$ or $\mathbf{y}$. We take advantage of this important result to further stabilize Sinkhorn's algorithm, and at the same time resolve the thorny issue of being able to settle for a value for $\varepsilon$ that can be used consistently, regardless of the range of values in $\mathbf{x}$. We propose to set $\mathbf{y}$ to be the regular grid on $[0,1]$ with $m$ points, and rescale the input entries of $\mathbf{x}$ so that they cover $[0,1]$ to define the cost matrix $C_{\mathbf{x}\mathbf{y}}$. We rescale the entries of $\mathbf{x}$ using an increasing squashing function, such as $\arctan$ or a logistic map. We also notice in our experiments that it is important to standardize input vectors $\mathbf{x}$ before squashing them into $[0,1]^n$, namely to apply, given a squashing function $g$, the map $\tilde{g}$ on $\mathbf{x}$ before computing the cost matrix $C_{\mathbf{x}\mathbf{y}}$: \begin{equation}\tilde{g}:\mathbf{x} \mapsto g\left(\frac{\mathbf{x} - \tfrac{1}{n}(\mathbf{x}^T\mathbf{1}_n)\mathbf{1}_n}{\tfrac{1}{\sqrt{n}}\|\mathbf{x} - \tfrac{1}{n}(\mathbf{x}^T\mathbf{1}_n)\mathbf{1}_n\|_2}\right).\label{eq:squashing}\end{equation} The choices that we have made are summarized in Alg.~\ref{alg-sink2}, but we believe there are opportunities to perfect them depending on the task.
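The normalization of Eq.~\eqref{eq:squashing} can be sketched in a few lines (a minimal illustration, assuming a logistic squashing map $g$; the function name is ours). Standardizing first makes the output invariant to positive affine rescalings of the input, which is what lets a single $\varepsilon$ be reused across datasets:

```python
import numpy as np

def squash(x, g=lambda u: 1.0 / (1.0 + np.exp(-u))):
    """Standardize x to zero mean / unit standard deviation, then squash into (0, 1)."""
    z = x - x.mean()
    std = np.linalg.norm(z) / np.sqrt(len(x))        # (1/sqrt(n)) * ||x - mean||_2
    return g(z / std)
```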
\begin{wrapfigure}{r}{0.4\textwidth} \vspace{-.6cm} \includegraphics[width=0.4\textwidth]{./fig/quantilefig}\vspace{-.35cm} \caption{Computing the $30\%$ quantile of 20 values as the weighted average of values that are selected by the Sinkhorn algorithm to send their mass onto filler weight $t$ located halfway in $[0,1]$, and ``sandwiched'' by two masses approximately equal to $\tau,1-\tau$.}\label{fig:quantile} \end{wrapfigure} \vspace{-.15cm} \textbf{Soft $\tau$ quantiles.} To illustrate the flexibility offered by the freedom to choose a non-uniform target measure $\mathbf{b},\mathbf{y}$, we consider the problem of computing a smooth approximation of the $\tau$ quantile of a discrete distribution $\xi$, where $\tau\in [0,1]$. This smooth approximation can be obtained by transporting $\xi$ towards a tilted distribution, with weights split roughly as $\tau$ on the left and $(1-\tau)$ on the right, with the addition of a small ``filler'' weight in the middle. This filler weight is set to a small value $t$, and is designed to ``capture'' whatever values may lie close to that quantile. This choice results in $m=3$, with weights $\mathbf{b}=[\tau-t/2,t,1-\tau-t/2]$ and target values $\mathbf{y}=[0,1/2,1]$ as in Figure~\ref{fig:quantile}, in which $t=0.1$. With such weights/locations, a differentiable approximation to the $\tau$-quantile of the inputs can be recovered as the second entry of vector $\tilde{S}_\varepsilon$: \begin{equation}\label{eq:tausquantile}\tilde{q}_\varepsilon(\mathbf{x};\tau,t) = \left[\opSquantile{\frac{\mathbf{1}_n}{n},\mathbf{x}}{\begin{bmatrix}\tau-t/2\\t\\1-\tau-t/2\end{bmatrix},\begin{bmatrix}0\\\tfrac{1}{2}\\1\end{bmatrix},h}\right]_2\,.\end{equation} \section{Ranking and Sorting as an Optimal Transport Problem}\label{sec:OTsortingcdfquantiles} The fact that solving the OT problem between two discrete univariate measures boils down to sorting is well known~\citep[\S2]{SantambrogioBook}.
The usual narrative states that the Wasserstein distance between two univariate measures reduces to comparing their quantile functions, which can be obtained by inverting CDFs, which are themselves computed by considering the sorted values of the supports of these measures. This downstream connection from OT to quantiles, CDFs and finally sorting has been exploited in several works, notably because the $O(n \log n)$ price for sorting is far cheaper than the order $n^3\log n$~\citep{Tarjan1997} one has to pay to solve generic OT problems. This is evidenced by the recent surge in interest for sliced Wasserstein distances~\citep{rabin-ssvm-11,2013-Bonneel-barycenter,kolouri2016sliced}. We propose in this section to go instead \textit{upstream}, that is to redefine ranking and sorting functions as byproducts of the resolution of an optimal assignment problem between measures supported on the reals. We then propose in Def.~\ref{def:splitquant} generalized rank and sort operators using the Kantorovich formulation of OT. \textbf{Solving the OT problem between 1D measures using sorting.} Let $\xi,\upsilon$ be two discrete probability measures on $\mathbb{R}$, defined respectively by their supports $\mathbf{x},\mathbf{y}$ and probability weight vectors $\mathbf{a},\mathbf{b}$ as $\xi=\sum_{i=1}^n a_i \delta_{x_i}$ and $\upsilon=\sum_{j=1}^m b_j \delta_{y_j}$. We consider in what follows a \textit{translation invariant} and \textit{non-negative} ground metric defined as $(x,y)\in\mathbb{R}^2\mapsto h(y-x)$, where $h:\mathbb{R}\rightarrow \mathbb{R}_+$. With that ground cost, the OT problem between $\xi$ and $\upsilon$ boils down to the following LP, writing $C_{\mathbf{x}\mathbf{y}}:=[h(y_j-x_i)]_{ij}$, \begin{equation}\label{eq:discreteOT} \ot_h(\xi,\upsilon) := \min_{P\in U(\mathbf{a},\mathbf{b})} \dotp{P}{C_{\mathbf{x}\mathbf{y}}}, \text{ where } U(\mathbf{a},\mathbf{b}):= \{ P\in \mathbb{R}^{n\times m}_+ | P\mathbf{1}_m=\mathbf{a}, P^T\mathbf{1}_n=\mathbf{b}\}\,.
\end{equation} We make in what follows the additional assumption that $h$ is \textit{convex}. A fundamental result~\citep[Theorem 2.9]{SantambrogioBook} states that in that case (see also~\cite{delon-concave} for the more involved case where $h$ is concave) $\ot_h(\xi,\upsilon)$ can be computed in closed form using the quantile functions $Q_\xi,Q_\upsilon$ of $\xi,\upsilon$: \begin{equation}\label{eq:1dOT} \ot_h(\xi,\upsilon)=\int_{[0,1]} h\left(Q_\upsilon(u)-Q_\xi(u)\right)du. \end{equation} Therefore, to compute OT between $\xi$ and $\upsilon$, one only needs to integrate the difference in their quantile functions, which can be done by inverting the empirical distribution functions for $\xi,\upsilon$, which itself only requires sorting the entries in $\mathbf{x}$ and $\mathbf{y}$ to obtain their sorting permutations $\sigma$ and $\tau$. Additionally, Eq.~\eqref{eq:1dOT} allows us not only to recover the value of $\ot_h$ as defined in Eq.~\eqref{eq:discreteOT}, but it can also be used to recover the corresponding optimal solution $P_\star$ in $n+m$ operations, using the permutations $\sigma$ and $\tau$ to build a so-called north-west corner solution~\cite[\S3.4.2]{MAL-073}: \begin{prop}\label{prop:nwc} Let $\sigma$ and $\tau$ be sorting permutations for $\mathbf{x}$ and $\mathbf{y}$. Define $N$ to be the north-west corner solution using permuted weights $\mathbf{a}_{\sigma},\mathbf{b}_{\tau}$. Then $N_{\sigma^{-1},\tau^{-1}}$ is optimal for~\eqref{eq:discreteOT}. \end{prop} Such a permuted north-western corner solution is illustrated in Figure~\ref{fig:myOTsorting}\emph{(b)}. It is indeed easy to check that in that case $(P_\star)_{\sigma,\tau}$ runs from the top-left (north-west) to the bottom right corner. 
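Prop.~\ref{prop:nwc} can be checked with a short sketch (ours; the helper names are not from the paper): build the north-west corner solution on the permuted weights, undo the permutations, and compare the resulting cost with a brute-force search over assignments.

```python
import numpy as np
from itertools import permutations

def north_west(a, b):
    """North-west corner rule: fill the matrix greedily from the top-left."""
    P = np.zeros((len(a), len(b)))
    i = j = 0
    ra, rb = a[0], b[0]                              # remaining row / column mass
    while i < len(a) and j < len(b):
        t = min(ra, rb)
        P[i, j] = t
        ra, rb = ra - t, rb - t
        if ra <= 1e-12:
            i += 1
            ra = a[i] if i < len(a) else 0.0
        if rb <= 1e-12:
            j += 1
            rb = b[j] if j < len(b) else 0.0
    return P

def nw_plan(a, x, b, y):
    """Permuted north-west corner solution N_{sigma^-1, tau^-1} of Prop. 1."""
    sigma, tau = np.argsort(x), np.argsort(y)
    N = north_west(a[sigma], b[tau])
    inv_s = np.argsort(sigma)                        # sigma^{-1}
    inv_t = np.argsort(tau)                          # tau^{-1}
    return N[np.ix_(inv_s, inv_t)]
```

For $n=m$ with uniform weights, the plan returned is the monotone assignment divided by $n$, so its cost matches the best permutation found by exhaustive search.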
In the simple case where $n=m$ and $\mathbf{a}=\mathbf{b}=\mathbf{1}_n/n$, the solution $N_{\sigma^{-1},\tau^{-1}}$ is a permutation matrix divided by $n$, namely a matrix equal to $0$ everywhere except for its entries indexed by $(i,\tau\circ\sigma^{-1})_i$ which are all equal to $1/n$. That solution is a vertex of the Birkhoff \cite{birkhoff} polytope, namely, an optimal assignment which to the $i$-th value in $\mathbf{x}$ associates the $(\tau\circ\sigma^{-1})_i$-th value in $\mathbf{y}$; informally, this solution assigns the $i$-th smallest entry in $\mathbf{x}$ to the $i$-th smallest entry in $\mathbf{y}$. \textbf{Generalizing sorting, CDFs and quantiles using optimal transport.} From now on in this paper, we make the crucial assumption that $\mathbf{y}$ is already sorted, that is, $y_1< \dots < y_m$. $\tau$ is therefore the identity permutation. When in addition $n=m$, the $i$-th value in $\mathbf{x}$ is simply assigned to the $\sigma_i^{-1}$-th value in $\mathbf{y}$. Conversely, and as illustrated in Figure~\ref{fig:myOTsorting}\emph{(a)}, the rank $i$ value in $\mathbf{x}$ is assigned to the $i$-th value $y_i$. Because of this, $R$ and $S$ can be rewritten using the optimal assignment matrix $P_\star$: \begin{prop}\label{prop:simpleidentities} Let $n=m$ and $\mathbf{a}=\mathbf{b}=\mathbf{1}_n/n$. Then for all strictly convex functions $h$ and $\mathbf{y}\in\mathbb{O}_n$, if $P_\star$ is an optimal solution to~\eqref{eq:discreteOT}, then $$R(\mathbf{x}) = n^2 P_\star \overline{\mathbf{b}}= n P_\star \begin{bmatrix}1\\ \vdots\\n \end{bmatrix} = n F_\xi(\mathbf{x}), \quad S(\mathbf{x}) = n P_\star^T \mathbf{x} =Q_\xi(\overline{\mathbf{b}})\in\mathbb{O}_n. $$ \end{prop} These identities stem from the fact that $nP_\star$ is a permutation matrix, which can be applied to the vector $n\overline{\mathbf{b}}=(1,\dots,n)$ to recover the rank of each entry in $\mathbf{x}$, or transposed and applied to $\mathbf{x}$ to recover the sorted values of $\mathbf{x}$. 
The former expression can be equivalently interpreted as $n$ times the CDF of $\xi$ evaluated elementwise at $\mathbf{x}$, the latter as the quantiles of $\xi$ at levels $\overline{\mathbf{b}}$. The identities in Prop.~\ref{prop:simpleidentities} are valid when the input measures $\xi,\upsilon$ are uniform and of the same size. The first contribution of this paper is to consider more general scenarios, in which $m$, the size of $\mathbf{y}$, can be smaller than $n$, and where weights $\mathbf{a},\mathbf{b}$ need not be uniform. This is a major departure from previous references~\cite{grover2019,taylor2008softrank,qin2010general}, which all require pairwise comparisons between the entries in $\mathbf{x}$. We show in our applications that $m$ can be as small as 3 when trying to recover a quantile, as in Figs.~\ref{fig:myOTsorting},~\ref{fig:quantile}. \textbf{Kantorovich ranks and sorts.} The so-called Kantorovich formulation of OT~\citep[\S1.5]{SantambrogioBook} can be used to compare discrete measures of varying sizes and weights. Solving that problem usually requires \textit{splitting} the mass $a_i$ of a point $x_i$ so that it is assigned across many points $y_j$ (or vice-versa). As a result, the $i$-th row (or $j$-th column) of a solution $P_\star\in\mathbb{R}^{n\times m}_+$ usually has more than one positive entry. Directly extending the formulas presented in Prop.~\ref{prop:simpleidentities}, we recover extended operators that we call Kantorovich ranking and sorting operators. These operators are new to the best of our knowledge. \begin{wrapfigure} {r}{0.51 \textwidth} \includegraphics[width=0.53\textwidth]{./fig/new_sorted.pdf}\vspace{-.2cm} \caption{\textit{(a)} sorting seen as transporting optimally $\mathbf{x}$ to milestones in $\mathbf{y}$. \textit{(b)} Kantorovich sorting generalizes the latter by considering target measures $\mathbf{y}$ with $m=3$ non-uniformly weighted points (here $\mathbf{b}=[.48, .16, .36]$).
K-ranks and K-sorted vectors $\widetilde{R},\widetilde{S}$ are generalizations of $R$ and $S$ that operate by mixing ranks in $\mathbf{b}$ or mixing original values in $\mathbf{x}$ to form continuous ranks for the elements in $\mathbf{x}$ and $m$ ``synthetic'' quantiles at levels $\overline{\mathbf{b}}$. \textit{(c)} Entropy regularized OT further generalizes K-operations by solving OT with the Sinkhorn algorithm, which results in dense transport plans differentiable in all inputs.\vspace{-1.5cm}}\label{fig:myOTsorting} \end{wrapfigure} The K-ranking operator computes convex combinations of rank values (as described in the entries $n\overline{\mathbf{b}}$) while the K-sorting operator computes convex combinations of values contained in $\mathbf{x}$ directly. Note that we consider here convex combinations (weighted averages) of these ranks/values, according to the Euclidean geometry. Extending more generally these combinations to Fréchet means using alternative geometries (KL, hyperbolic, etc.) on these ranks/values is left for future work. Because these quantities are only defined pointwise (we output vectors and not functions) and depend on the ordering of $\mathbf{a},\mathbf{x},\mathbf{b},\mathbf{y}$, we drop our reference to measure $\xi$ in notations. \begin{defn}\label{def:splitquant} For any $(\mathbf{x},\mathbf{a},\mathbf{y},\mathbf{b})\in\mathbb{R}^n\times \Sigma_n\times \mathbb{O}_m\times\Sigma_m$, let $P_\star\in U(\mathbf{a},\mathbf{b})$ be an optimal solution for~\eqref{eq:discreteOT} with a given convex function $h$. The K-ranks and K-sorts of $\mathbf{x}$ w.r.t.\ $\mathbf{a}$ evaluated using $(\mathbf{b},\mathbf{y})$ are respectively: $$\begin{aligned} \opKR{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}} &:=\, n\mathbf{a}^{-1} \circ (P_\star \overline{\mathbf{b}})\in[0,n]^n,\\ \opKquantile{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}} &:=\, \,\mathbf{b}^{-1}\circ (P_\star^T \mathbf{x})\in\mathbb{O}_{m}.
\end{aligned} $$ \end{defn} The K-rank vector map $\widetilde{R}$ outputs a vector of size $n$ containing a continuous rank for each entry of $\mathbf{x}$ (these entries can be alternatively interpreted as $n$ times a ``synthetic'' CDF value in $[0,1]$, itself a convex mixture of the CDF values $\overline{\mathbf{b}}_j$ of the $y_j$ onto which each $x_i$ is transported). $\widetilde{S}$ is a split-quantile operator outputting $m$ increasing values which are each, respectively, barycenters of some of the entries in $\mathbf{x}$. The fact that these values are increasing can be obtained by a simple argument in which $\xi$ and $\upsilon$ are cast again as uniform measures of the same size using duplicated supports $x_i$ and $y_j$, and then use the monotonicity given by the third identity of Prop.~\ref{prop:simpleidentities}. \textbf{Computations and Non-differentiability.} The generalized ranking and sorting operators presented in Def.~\ref{def:splitquant} are interesting in their own right, but have very little practical appeal. For one, their computation relies on solving an OT problem at a cost of $O(nm(n+m)\log(nm))$~\cite{Tarjan1997} and therefore remains far more costly than regular sorting, even when $m$ is very small. Furthermore, these operators remain fundamentally \emph{not} differentiable. This is hinted at by the simple fact that it is difficult to guarantee in general that a solution $P_\star$ to~\eqref{eq:discreteOT} is unique. Most importantly, the Jacobian $\partial P_\star/\partial \mathbf{x}$ is, very much like $R$, null almost everywhere. This can be visualized by looking at Figure~\ref{fig:myOTsorting}\textit{(b)} to notice that an infinitesimal change in $\mathbf{x}$ would not change $P_\star$ (notice however that an infinitesimal change in weights $\mathbf{a}$ would; that Jacobian would involve North-west corner type mass transfers).
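To make Def.~\ref{def:splitquant} concrete, the following sketch (ours) computes K-ranks and K-sorts from a north-west corner optimal plan, which is valid for convex $h$ since the optimal plan is then the monotone one; with $n=m$ and uniform weights it reduces to the plain $R$ and $S$, as stated in Prop.~\ref{prop:simpleidentities}:

```python
import numpy as np

def k_rank_sort(a, x, b):
    """K-ranks / K-sorts of Def. 3, for sorted targets y_1 < ... < y_m.

    For convex h the optimal plan is the monotone (north-west corner) one,
    so the target values y themselves are not needed, only the weights b."""
    sigma = np.argsort(x)
    P = np.zeros((len(a), len(b)))
    i, j, ra, rb = 0, 0, a[sigma[0]], b[0]
    while i < len(a) and j < len(b):                 # north-west corner fill
        t = min(ra, rb)
        P[sigma[i], j] = t                           # undo sigma on the fly
        ra, rb = ra - t, rb - t
        if ra <= 1e-12:
            i += 1
            ra = a[sigma[i]] if i < len(a) else 0.0
        if rb <= 1e-12:
            j += 1
            rb = b[j] if j < len(b) else 0.0
    n, bbar = len(a), np.cumsum(b)
    return n * (P @ bbar) / a, (P.T @ x) / b         # K-ranks, K-sorts
```

With $m=2$ and $\mathbf{b}=[\tfrac12,\tfrac12]$, for instance, the K-sorts are the averages of the lower and upper halves of the inputs, illustrating the ``split'' behaviour described above.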
All of these pathologies --- computational cost, non-uniqueness of optimal solution and non-differentiability --- can be avoided by using regularized OT~\cite{CuturiSinkhorn}. \section{Learning with Smoothed Ranks and Sorts}\label{sec:quantileactivation} \textbf{Differentiable approximation of the top-$k$ Loss.} Given a set of labels $\{1,\dots,L\}$ and a space $\Omega$ of input points, a parameterized multiclass classifier on $\Omega$ is a function $f_\theta:\Omega\rightarrow\mathbb{R}^L$. The function decides the class attributed to $\omega$ by selecting a label with largest activation, $l^\star\in\argmax_l [f_\theta(\omega)]_l$. To train the classifier using a training set $\{(\omega_i,l_i)\}\in(\Omega\times\mathcal{L})^N$, one typically resorts to minimizing the cross-entropy loss, which results in solving $\min_\theta \sum_i \log\left(\mathbf{1}_L^T e^{f_\theta(\omega_i)}\right) - [f_\theta(\omega_i)]_{l_i}$. We propose a differentiable variant of the 0/1 and, more generally, top-$k$ losses that neither relies on combinatorial considerations~\cite{nguyen2013algorithms,NIPS2013_5214} nor builds upon non-differentiable surrogates~\cite{boyd2012accuracy}. Ignoring the degenerate case in which $l^\star$ is not unique, given a query $\omega$, stating that the label $l^\star$ has been selected is equivalent to stating that the entry indexed at $l^\star$ of the vector of ranks $R(f_\theta(\omega))$ is $L$. Given a labelled pair $(\omega,l)$, the 0/1 loss of the classifier for that pair is therefore, \begin{equation}\label{eq:topk} \mathcal{L}_{0/1}(f_\theta(\omega),l) = H\left(L-\left[R(f_\theta(\omega))\right]_{l}\right), \end{equation} \begin{figure} \includegraphics[width=0.95\textwidth]{./fig/cifar10} \caption{\label{fig:cifar10}Error bars (averages over 12 runs) for test accuracy curves on CIFAR-10 using the same network structures, a vanilla CNN with 4 convolution layers on the left and a resnet18 on the right.
We use the ADAM optimizer with a constant stepsize set to $10^{-4}$.}\vspace{-.05cm} \end{figure} \begin{figure} \includegraphics[width=0.95\textwidth]{./fig/cifar100}\caption{Identical setup to Fig.~\ref{fig:cifar10}, with the CIFAR-100 database.}\label{fig:cifar100}\vspace{-.05cm} \end{figure} where $H$ is the Heaviside function: $H(u) = 1$ if $u>0$ and $H(u)=0$ for $u\leq 0$. More generally, if for some labelled input $\omega$, the entry $[R(f_\theta(\omega))]_{l}$ is bigger than $L-k$, then that labelled example has a top-$k$ error of $0$. Conversely, if $[R(f_\theta(\omega))]_{l}$ is smaller than $L-k+1$, then the top-$k$ error is $1$. The top-$k$ error can therefore be formulated as in \eqref{eq:topk}, where the argument $L-\left[R(f_\theta(\omega))\right]_{l}$ within the Heaviside function is replaced by $L-\left[R(f_\theta(\omega))\right]_{l}-k+1$. The 0/1 and top-$k$ losses are unstable on two different counts: $H$ is discontinuous, and so is $R$ with respect to the entries $f_\theta(\omega)$. The differentiable loss that we propose, as a replacement for cross-entropy (or more general top-$k$ cross-entropy losses~\cite{berrada2018smooth}), therefore leverages both the Sinkhorn rank operator and a smoothed Heaviside-like function. Because Sinkhorn ranks are always within the boundaries of $[0,L]$, we propose to modify this loss by considering a continuous increasing function $J_k$ from $[0,L]$ to $\mathbb{R}$: $$ \widetilde{\mathcal{L}}_{k,\varepsilon}(f_\theta(\omega),l) = J_k\left(L-\left[\widetilde{R}_\varepsilon\left(\frac{\mathbf{1}_L}{L},f_\theta(\omega);\frac{\mathbf{1}_L}{L},\frac{\overline{\mathbf{1}}_L}{L},h\right)\right]_{l}\right). $$ We propose the simple family of ReLU losses $J_k(u)=\max(0,u - k+1)$, and have focused our experiments on the case $k=1$. We train a vanilla CNN (4 Conv2D with 2 max-pooling layers, ReLU activation, 2 fully connected layers, batchnorm on each) and a Resnet18 on CIFAR-10 and CIFAR-100.
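For concreteness, here is a self-contained sketch of the smoothed loss for a single example (our own simplification: plain Sinkhorn iterations, logistic squashing, uniform weights; this is not the training code used in the experiments):

```python
import numpy as np

def soft_topk_loss(f, label, k=1, eps=1e-3, n_iter=500):
    """J_k(L - [soft rank]_label), with J_k(u) = max(0, u - k + 1)."""
    L = len(f)
    z = f - f.mean()
    xs = 1.0 / (1.0 + np.exp(-z / (np.linalg.norm(z) / np.sqrt(L))))  # squash to (0,1)
    y = np.linspace(0.0, 1.0, L)                     # sorted targets on [0,1]
    K = np.exp(-(y[None, :] - xs[:, None]) ** 2 / eps)
    a = np.full(L, 1.0 / L)
    b = np.full(L, 1.0 / L)
    u = np.ones(L)
    for _ in range(n_iter):                          # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    ranks = L * u * (K @ (v * np.cumsum(b))) / a     # Sinkhorn ranks, in [0, L]
    return max(0.0, L - ranks[label] - k + 1)
```

When the true label carries the largest activation its soft rank is close to $L$ and the loss vanishes; a poorly ranked label yields a loss close to $L-k$.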
Fig.~\ref{fig:cifar10} and~\ref{fig:cifar100} report test-set classification accuracies across epochs. We used $\varepsilon=10^{-3}$, $\eta=10^{-3}$, a squared distance cost $h(u)=u^2$ and a stepsize of $10^{-4}$ with the ADAM optimizer. \begin{wrapfigure}{r}{0.5\textwidth} \includegraphics[width=0.5\textwidth]{./fig/neural_sort}\vspace{-.35cm} \caption{\label{fig:grover}Test accuracy on the simultaneous MNIST CNN / sorting task proposed in \citep{grover2019} (average of 12 runs).} \end{wrapfigure} \textbf{Learning CNNs by sorting handwritten numbers.} We use the MNIST experiment setup in~\citep{grover2019}, in which a CNN is given $n$ numbers between 0 and 9999, each given as 4 concatenated MNIST images. The labels are the ranks (within $n$ pairs) of each of these $n$ numbers. We use the code kindly made available by the authors. We use $100$ epochs, and confirm experimentally that S-sort performs on par with their neural-sort function. We set $\varepsilon=0.005$. \begin{table} \centering \small \begin{tabular}{|c|c|c|c|c|c|} \hline algorithm & n=3 & n=5 & n=7 & n=9 & n=15 \\ \hline Stochastic NeuralSort & 0.920 (0.946) & 0.790 (0.907) & 0.636 (0.873) & 0.452 (0.829) & 0.122 (0.734) \\ Deterministic NeuralSort & 0.919 (0.945) & 0.777 (0.901) & 0.610 (0.862) & 0.434 (0.824) & 0.097 (0.716) \\ \hline Ours & \textbf{0.928 (0.950)} & \textbf{0.811 (0.917)} & \textbf{0.656 (0.882)} & \textbf{0.497 (0.847)} & \textbf{0.126 (0.742)} \\ \hline \end{tabular} \caption{Sorting exact and partial precision on the neural sort task averaged over 10 runs. Our method performs better than the method presented in~\cite{grover2019} for all the sorting tasks, with the exact same network architecture.
\vspace{-.12cm}} \label{table:neuralsort} \end{table} \textbf{Least quantile regression.} The goal of least quantile regression~\cite{rousseeuw1984least} is to minimize, given a vector of response variables $z_1,\dots,z_N\in\mathbb{R}$ and regressor variables $\mathbf{W}=[\mathbf{w}_1,\dots,\mathbf{w}_N]\in\mathbb{R}^{d\times N}$, the $\tau$ quantile of the loss between response and predicted values, namely writing $\mathbf{x} =(|z_i - f_\theta(\mathbf{w}_i)|)_i$ and setting $\mathbf{a}=\mathbf{1}_N/N$ and $\xi$ the measure with weights $\mathbf{a}$ and support $\mathbf{x}$, to minimize w.r.t. $\theta$ the quantile $\tau$ of $\xi$. We proceed by drawing mini-batches of size 512. Our baseline method (labelled $\varepsilon=0$) consists in identifying which point, among those 512, has an error that is equal to the desired quantile, and then taking gradient steps according to that point. Our proposal is to consider the soft $\tau$ quantile $\tilde{q}_\varepsilon(\mathbf{x};\tau,t)$ operator defined in~\eqref{eq:tausquantile}, using for the filler weight $t=1/512$. This is labelled as $\varepsilon=10^{-2}$. We use the datasets considered in~\cite{romano2019conformalized} and consider the same regressor architecture, namely a 2-hidden-layer NN with hidden layer size 64, ADAM optimizer and step length $10^{-4}$. Results are summarized in Table~\ref{table:quantilereg}. We consider two quantiles, $\tau=50\%$ and $90\%$. For each quantile/dataset pair, we report the original (non-regularized) $\tau$ quantile of the errors evaluated on the entire training set, on an entire held-out test set, and the MSE on the test set of the function that is recovered. We notice that our algorithm reaches overall better quantile errors on the training set---this is our main goal---but comparable test/MSE errors. \begin{table} \centering \input{res/quantilereg} \caption{Least quantile losses (averaged on 12 runs) obtained on datasets compiled by~\cite{romano2019conformalized}.
We consider two quantiles, at 50\% and 90\%. The baseline method ($\varepsilon=0$) consists in estimating the quantile empirically and taking a gradient step with respect to that point. Our method ($\varepsilon=10^{-2}$) uses the soft quantile operator $\tilde{q}_\varepsilon(\mathbf{x};\tau,t)$ defined in~\eqref{eq:tausquantile}, using for the filler weight $t=1/512$. We observe better performance at train time (which may be due to a ``smoothed'' optimization landscape with fewer local minima) but different behaviors on test sets, either using the quantile loss or the MSE. Note that we report here for both methods and for both train and test sets the ``true'' quantile error metric.}\label{table:quantilereg} \end{table} \section{Appendix} \textbf{Jacobian of $J_{\mathbf{x}}\opScdf{\mathbf{a},\mathbf{x}}{\mathbf{b},\mathbf{y}}$ as $\ell\rightarrow \infty$.} We write everything as a function of $\mathbf{x}$, with $K(\mathbf{x})=\left[e^{-c(x_i,y_j)/\varepsilon}\right]_{ij}$. Upon termination of the Sinkhorn iterations one has $$u(\mathbf{x}) \circ K(\mathbf{x}) v(\mathbf{x}) = \mathbf{a}, \quad v(\mathbf{x}) \circ K(\mathbf{x})^T u(\mathbf{x}) = \mathbf{b}.$$ We simplify this expression by writing $$ w(\mathbf{x}):= \begin{bmatrix}u(\mathbf{x})\\v(\mathbf{x})\end{bmatrix}, \; \mathbf{c}:= \begin{bmatrix}\mathbf{a}\\ \mathbf{b}\end{bmatrix},\quad \Lambda(\mathbf{x}):= \begin{bmatrix} \mathbf{0}_{n\times n} & K(\mathbf{x})\\ K(\mathbf{x})^T & \mathbf{0}_{m\times m} \end{bmatrix}. $$ Writing $$f(\mathbf{x},\mathbf{z})=\mathbf{z}\circ \Lambda(\mathbf{x}) \mathbf{z} - \mathbf{c},$$ one has upon convergence that $f(\mathbf{x},w(\mathbf{x}))=0$.
The implicit function theorem then states that $$J_{\mathbf{x}} w(\mathbf{x}) = - \left[J_{\mathbf{z}} f(\mathbf{x},w(\mathbf{x}))\right]^{-1} J_{\mathbf{x}}f(\mathbf{x},w(\mathbf{x}))\in\mathbb{R}^{(n+m)\times n}.$$ To compute the action of the first operator, one can use simple differential calculus to get that the application of the first Jacobian to a vector $\mathbf{h}$ is equal to $$\left(J_{\mathbf{z}} f(\mathbf{x},\mathbf{z}) \right) \mathbf{h} = \mathbf{h}\circ\Lambda(\mathbf{x})\mathbf{z} + \mathbf{z}\circ \Lambda(\mathbf{x})\mathbf{h},$$ obtained by expanding $f(\mathbf{x},\mathbf{z}+\mathbf{h})-f(\mathbf{x},\mathbf{z})$ to first order in $\mathbf{h}$. Therefore, to apply the inverse of that operator, note that if $J_{\mathbf{z}} [f(\mathbf{x},\mathbf{z})] \mathbf{h} = \mathbf{g}$, then $$\mathbf{g} = \mathbf{h}\circ\Lambda(\mathbf{x})\mathbf{z} + \mathbf{z}\circ \Lambda(\mathbf{x})\mathbf{h},$$ which yields (with divisions between vectors taken elementwise, and $\mathbf{D}(\mathbf{z})$ denoting the diagonal matrix with diagonal $\mathbf{z}$) $$\frac{\mathbf{g}}{\mathbf{z}}= \mathbf{h}\circ\frac{\Lambda(\mathbf{x})\mathbf{z}}{\mathbf{z}} + \Lambda(\mathbf{x})\mathbf{h}$$ and therefore $$\frac{\mathbf{g}}{\mathbf{z}} = \left(\mathbf{D}\left(\frac{\Lambda(\mathbf{x})\mathbf{z}}{\mathbf{z}}\right) + \Lambda(\mathbf{x})\right)\mathbf{h},$$ yielding $$\left(J_{\mathbf{z}} [f(\mathbf{x},\mathbf{z})]\right)^{-1}= \left(\mathbf{D}\left(\frac{\Lambda(\mathbf{x})\mathbf{z}}{\mathbf{z}}\right) + \Lambda(\mathbf{x})\right)^{-1}\mathbf{D}(\mathbf{z}^{-1}),$$ which, reverting to our notations, can be seen as $$\left(\begin{bmatrix}\mathbf{D}\left(\frac{K(\mathbf{x}) v(\mathbf{x})}{u(\mathbf{x})}\right)&K(\mathbf{x})\\K^T(\mathbf{x})& \mathbf{D}\left(\frac{K^T(\mathbf{x}) u(\mathbf{x})}{v(\mathbf{x})}\right)\end{bmatrix}\right)^{-1}\mathbf{D}\left(\begin{bmatrix}u(\mathbf{x})^{-1}\\v(\mathbf{x})^{-1}\end{bmatrix}\right) = \left(\begin{bmatrix}\mathbf{D}\left(K(\mathbf{x}) v(\mathbf{x})\right)&\mathbf{D}(u(\mathbf{x}))K(\mathbf{x})\\\mathbf{D}(v(\mathbf{x}))K^T(\mathbf{x})&
\mathbf{D}\left(K^T(\mathbf{x}) u(\mathbf{x})\right)\end{bmatrix}\right)^{-1}. $$ On the other hand, $J_{\mathbf{x}}f(\mathbf{x},w(\mathbf{x}))=\begin{bmatrix}\mathbf{D}\left(u(\mathbf{x})\circ \Delta v(\mathbf{x})\right)\\\mathbf{D}(v(\mathbf{x}))\Delta^T \mathbf{D}(u(\mathbf{x}))\end{bmatrix}\in\mathbb{R}^{(n+m)\times n}$ where $\Delta=\left[-\frac{c'_x(x_i,y_j)}{\varepsilon}e^{-\frac{c(x_i,y_j)}{\varepsilon}}\right]_{ij}.$ Hence, combining the two Jacobians above, we obtain $$J_{\mathbf{x}} w(\mathbf{x}) = - \left[J_{\mathbf{z}} f(\mathbf{x},w(\mathbf{x}))\right]^{-1} J_{\mathbf{x}}f(\mathbf{x},w(\mathbf{x}))\in\mathbb{R}^{(n+m)\times n}.$$
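The fixed-point conditions used at the start of this derivation are easy to check numerically. The sketch below (a minimal pure-Python illustration; the function name, toy marginals and supports are ours, not from the paper) runs the Sinkhorn updates $u \leftarrow \mathbf{a} / (Kv)$, $v \leftarrow \mathbf{b} / (K^T u)$ and verifies that, upon convergence, $u \circ K v = \mathbf{a}$ and $v \circ K^T u = \mathbf{b}$.

```python
import math

def sinkhorn(a, b, x, y, eps=0.1, iters=2000):
    """Sinkhorn iterations for entropic OT with cost c(x_i, y_j) = (x_i - y_j)^2.
    Returns the scalings u, v and the kernel K."""
    K = [[math.exp(-(xi - yj) ** 2 / eps) for yj in y] for xi in x]
    n, m = len(x), len(y)
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        # u <- a / (K v), then v <- b / (K^T u)
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return u, v, K

a = [0.2, 0.5, 0.3]                  # toy row marginal (weights of x)
b = [0.4, 0.4, 0.2]                  # toy column marginal (weights of y)
x, y = [0.1, 0.5, 0.9], [0.2, 0.6, 0.8]
u, v, K = sinkhorn(a, b, x, y)

# fixed-point conditions: u o (K v) = a and v o (K^T u) = b
row = [u[i] * sum(K[i][j] * v[j] for j in range(3)) for i in range(3)]
col = [v[j] * sum(K[i][j] * u[i] for i in range(3)) for j in range(3)]
print(max(abs(r - ai) for r, ai in zip(row, a)))   # tiny after convergence
print(max(abs(c - bj) for c, bj in zip(col, b)))   # ~0: v was updated last
```

Note that the column condition holds essentially exactly because $v$ is the last scaling updated, while the row condition holds up to the (geometrically decaying) convergence error.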
\section{\textbf{Introduction and Preliminaries}} \label{sec:intro}

Fixed-point theory has been extensively studied from various aspects. One of these is the discontinuity problem at fixed points (see \cite{Bisht-2017-1, Bisht-2017-2, Bisht-3, Bisht and Rakocevic, Bisht-Tbilisi, Bisht 2019, PantA, Pant, Pant-Ozgur-Tas, Belgium} for some examples). Discontinuous functions have appeared widely in many areas of science, such as neural networks (see, for example, \cite{Forti-2003, Liu-2011, Lu-2005, Lu-2006}). In this paper we give a new solution to Rhoades' open problem (see \cite{Rhoades} for more details) on discontinuity at the fixed point, in the setting of an $S$-metric space, a recently introduced generalization of a metric space. $S$-metric spaces were introduced in \cite{Sedgi-Shobe-Aliouche} by Sedgi et al., as follows:

\begin{definition} \cite{Sedgi-Shobe-Aliouche} \label{def5} Let $X$ be a nonempty set and $\mathcal{S}:X\times X\times X\rightarrow \lbrack 0,\infty )$ be a function satisfying the following conditions for all $x,y,z,a\in X:$

$S1)$ $\mathcal{S}(x,y,z)=0$ if and only if $x=y=z$,

$S2)$ $\mathcal{S}(x,y,z)\leq \mathcal{S}(x,x,a)+\mathcal{S}(y,y,a)+\mathcal{S}(z,z,a)$.

Then $\mathcal{S}$ is called an $S$-metric on $X$ and the pair $(X,\mathcal{S})$ is called an $S$-metric space. \end{definition}

Relationships between a metric and an $S$-metric were given as follows:

\begin{lemma} \label{lem4} \cite{Hieu} Let $(X,d)$ be a metric space. Then the following properties are satisfied$:$
\begin{enumerate}
\item $\mathcal{S}_{d}(x,y,z)=d(x,z)+d(y,z)$ for all $x,y,z\in X$ is an $S$-metric on $X$.
\item $x_{n}\rightarrow x$ in $(X,d)$ if and only if $x_{n}\rightarrow x$ in $(X,\mathcal{S}_{d})$.
\item $\{x_{n}\}$ is Cauchy in $(X,d)$ if and only if $\{x_{n}\}$ is Cauchy in $(X,\mathcal{S}_{d})$.
\item $(X,d)$ is complete if and only if $(X,\mathcal{S}_{d})$ is complete.
\end{enumerate}
\end{lemma}

The $S$-metric $\mathcal{S}_{d}$ is called the $S$-metric generated by $d$ \cite{Ozgur-mathsci}. Some examples of an $S$-metric which is not generated by any metric are known (see \cite{Hieu, Ozgur-mathsci} for more details). Furthermore, Gupta claimed that every $S$-metric on $X$ defines a metric $d_{S}$ on $X$ as follows$:$
\begin{equation}
d_{S}(x,y)=\mathcal{S}(x,x,y)+\mathcal{S}(y,y,x), \label{ds}
\end{equation}
for all $x,y\in X$ \cite{Gupta}. However, since the triangle inequality is not satisfied by all elements of $X$ in general, the function $d_{S}(x,y)$ defined in (\ref{ds}) does not always define a metric (see \cite{Ozgur-mathsci}). In the following, we see an example of an $S$-metric which is not generated by any metric.

\begin{example} \cite{Ozgur-mathsci} \label{exm:S-metric} Let $X=\mathbb{R}$ and the function $\mathcal{S}:X\times X\times X\rightarrow \lbrack 0,\infty )$ be defined as
\begin{equation*}
\mathcal{S}(x,y,z)=\left\vert x-z\right\vert +\left\vert x+z-2y\right\vert \text{,}
\end{equation*}
for all $x,y,z\in \mathbb{R}$. Then $\mathcal{S}$ is an $S$-metric which is not generated by any metric and the pair $(X,\mathcal{S})$ is an $S$-metric space. \end{example}

The following lemma will be used in the next sections.

\begin{lemma} \label{lem5} \cite{Sedgi-Shobe-Aliouche} Let $(X,\mathcal{S})$ be an $S$-metric space. Then we have
\begin{equation*}
\mathcal{S}(x,x,y)=\mathcal{S}(y,y,x)\text{.}
\end{equation*}
\end{lemma}

In this paper, our aim is to obtain a new solution to Rhoades' open problem on the existence of a contractive condition which is strong enough to generate a fixed point but which does not force the map to be continuous at the fixed point. To do this, we are inspired by a result of Zamfirescu given in \cite{Zamfirescu}. On the other hand, a recent aspect of fixed-point theory is to consider geometric properties of the set $Fix(T)$, the fixed point set of the self-mapping $T$.
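Parenthetically, the $S$-metric of Example \ref{exm:S-metric} and the symmetry property of Lemma \ref{lem5} lend themselves to a quick numerical sanity check. The Python sketch below (our illustration, not part of the original sources) verifies $(S1)$, $(S2)$ and $\mathcal{S}(x,x,y)=\mathcal{S}(y,y,x)=2|x-y|$ on a small grid of points.

```python
from itertools import product

def S(x, y, z):
    # the S-metric of the example: S(x,y,z) = |x - z| + |x + z - 2y|
    return abs(x - z) + abs(x + z - 2 * y)

pts = [i / 2 for i in range(-6, 7)]          # a small grid in R

for x, y, z in product(pts, repeat=3):
    # (S1): S(x,y,z) = 0 iff x = y = z
    assert (S(x, y, z) == 0) == (x == y == z)
    # Lemma: S(x,x,y) = S(y,y,x), both equal to 2|x - y|
    assert S(x, x, y) == S(y, y, x) == 2 * abs(x - y)
    for a in pts:
        # (S2): S(x,y,z) <= S(x,x,a) + S(y,y,a) + S(z,z,a)
        assert S(x, y, z) <= S(x, x, a) + S(y, y, a) + S(z, z, a)

print("axioms (S1), (S2) and the symmetry lemma hold on the grid")
```

Of course this checks only finitely many points; the general proofs follow from the triangle inequality for $|\cdot|$.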
The fixed-circle problem (resp. fixed-disc problem) has been studied in this context (see \cite{Bisht 2019, Ozgur-chapter, Ozgur-Tas-malaysian, Ozgur-Tas-circle-thesis, Ozgur-Tas-Celik, ozgur-aip, Ozgur-simulation, Pant-Ozgur-Tas, Belgium, Tas math, Tas}). As an application, we present a new solution to these problems. We give necessary examples to support our theoretical results.

\section{\textbf{Main Results}} \label{sec:1}

From now on, we assume that $(X,\mathcal{S})$ is an $S$-metric space and $T:X\rightarrow X$ is a self-mapping. In this section we use the numbers defined as
\begin{equation*}
M_{z}\left( x,y\right) =\max \left\{ ad\left( x,y\right) ,\frac{b}{2}\left[ d\left( x,Tx\right) +d\left( y,Ty\right) \right] ,\frac{c}{2}\left[ d\left( x,Ty\right) +d\left( y,Tx\right) \right] \right\}
\end{equation*}
and
\begin{equation*}
M_{z}^{S}\left( x,y\right) =\max \left\{
\begin{array}{c}
a\mathcal{S}\left( x,x,y\right) ,\frac{b}{2}\left[ \mathcal{S}\left( x,x,Tx\right) +\mathcal{S}\left( y,y,Ty\right) \right] , \\
\frac{c}{2}\left[ \mathcal{S}\left( x,x,Ty\right) +\mathcal{S}\left( y,y,Tx\right) \right]
\end{array}
\right\} ,
\end{equation*}
where $a,b\in \left[ 0,1\right) $ and $c\in \left[ 0,\frac{1}{2}\right] $. We give the following theorem as a new solution to Rhoades' open problem.

\begin{theorem} \label{thm1} Let $(X,\mathcal{S})$ be a complete $S$-metric space and $T$ a self-mapping on $X$ satisfying the conditions$:$

$i)$ There exists a function $\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ such that $\phi (t)<t$ for each $t>0$ and
\begin{equation*}
\mathcal{S}\left( Tx,Tx,Ty\right) \leq \phi \left( M_{z}^{S}\left( x,y\right) \right) \text{,}
\end{equation*}
for all $x,y\in X$,

$ii)$ There exists a $\delta =\delta \left( \varepsilon \right) >0$ such that $\varepsilon <M_{z}^{S}\left( x,y\right) <\varepsilon +\delta $ implies $\mathcal{S}\left( Tx,Tx,Ty\right) \leq \varepsilon $ for a given $\varepsilon >0$.

Then $T$ has a unique fixed point $u\in X$.
Also, $T$ is discontinuous at $u$ if and only if $\underset{x\rightarrow u}{\lim }M_{z}^{S}\left( x,u\right) \neq 0$. \end{theorem}

\begin{proof} At first, we define the number
\begin{equation*}
\xi =\max \left\{ a,\frac{b}{2-b},\frac{c}{2-2c}\right\} .
\end{equation*}
Clearly, we have $\xi <1$. By the condition $(i)$, there exists a function $\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ such that $\phi (t)<t$ for each $t>0$ and
\begin{equation*}
\mathcal{S}\left( Tx,Tx,Ty\right) \leq \phi \left( M_{z}^{S}\left( x,y\right) \right) \text{,}
\end{equation*}
for all $x,y\in X$. Using the properties of $\phi $, we obtain
\begin{equation}
\mathcal{S}\left( Tx,Tx,Ty\right) <M_{z}^{S}\left( x,y\right) \text{,} \label{eqno1}
\end{equation}
whenever $M_{z}^{S}\left( x,y\right) >0$. Let us consider any $x_{0}\in X$ with $x_{0}\neq Tx_{0}$ and define a sequence $\left\{ x_{n}\right\} $ by $x_{n+1}=Tx_{n}=T^{n+1}x_{0}$ for all $n=0,1,2,3,\ldots$. Using the condition $(i)$ and the inequality (\ref{eqno1}), we get
\begin{eqnarray}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) &=&\mathcal{S}\left( Tx_{n-1},Tx_{n-1},Tx_{n}\right) \leq \phi \left( M_{z}^{S}\left( x_{n-1},x_{n}\right) \right) \label{eqno2} \\
&<&M_{z}^{S}\left( x_{n-1},x_{n}\right) \notag \\
&=&\max \left\{
\begin{array}{c}
a\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) , \\
\frac{b}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},Tx_{n-1}\right) +\mathcal{S}\left( x_{n},x_{n},Tx_{n}\right) \right] , \\
\frac{c}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},Tx_{n}\right) +\mathcal{S}\left( x_{n},x_{n},Tx_{n-1}\right) \right]
\end{array}
\right\} \notag \\
&=&\max \left\{
\begin{array}{c}
a\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) , \\
\frac{b}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) \right] , \\
\frac{c}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},x_{n+1}\right) +\mathcal{S}\left( x_{n},x_{n},x_{n}\right) \right]
\end{array}
\right\} \notag \\
&=&\max \left\{
\begin{array}{c}
a\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) , \\
\frac{b}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) \right] , \\
\frac{c}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n+1}\right)
\end{array}
\right\} . \notag
\end{eqnarray}
Assume that $M_{z}^{S}\left( x_{n-1},x_{n}\right) =a\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) $. Then using the inequality (\ref{eqno2}), we have
\begin{equation*}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <a\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) \leq \xi \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) <\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right)
\end{equation*}
and so
\begin{equation}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) . \label{eqno3}
\end{equation}
Let $M_{z}^{S}\left( x_{n-1},x_{n}\right) =\frac{b}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) \right] .$ Again using the inequality (\ref{eqno2}), we get
\begin{equation*}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\frac{b}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) \right] \text{,}
\end{equation*}
which implies
\begin{equation*}
\left( 1-\frac{b}{2}\right) \mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\frac{b}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right)
\end{equation*}
and hence
\begin{equation*}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\frac{b}{2-b}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) \leq \xi \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) .
\end{equation*}
This yields
\begin{equation}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) .
\label{eqno4}
\end{equation}
Suppose that $M_{z}^{S}\left( x_{n-1},x_{n}\right) =\frac{c}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n+1}\right) .$ Then using the inequality (\ref{eqno2}), Lemma \ref{lem5} and the condition $(S2)$, we obtain
\begin{eqnarray*}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) &<&\frac{c}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n+1}\right) =\frac{c}{2}\mathcal{S}\left( x_{n+1},x_{n+1},x_{n-1}\right) \\
&\leq &\frac{c}{2}\left[ \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +2\mathcal{S}\left( x_{n+1},x_{n+1},x_{n}\right) \right] \\
&=&\frac{c}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +c\mathcal{S}\left( x_{n+1},x_{n+1},x_{n}\right) \\
&=&\frac{c}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) +c\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) ,
\end{eqnarray*}
which implies
\begin{equation*}
\left( 1-c\right) \mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\frac{c}{2}\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) .
\end{equation*}
Considering this, we find
\begin{equation*}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\frac{c}{2\left( 1-c\right) }\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) \leq \xi \mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right)
\end{equation*}
and so
\begin{equation}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) <\mathcal{S}\left( x_{n-1},x_{n-1},x_{n}\right) . \label{eqno5}
\end{equation}
If we set $\alpha _{n}=\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) $, then by the inequalities (\ref{eqno3}), (\ref{eqno4}) and (\ref{eqno5}), we find
\begin{equation}
\alpha _{n}<\alpha _{n-1}, \label{eqno6}
\end{equation}
that is, $\{\alpha _{n}\}$ is a strictly decreasing sequence of positive real numbers, whence the sequence $\{\alpha _{n}\}$ tends to a limit $\alpha \geq 0$. Assume that $\alpha >0$. There exists a positive integer $k\in \mathbb{N}$ such that $n\geq k$ implies
\begin{equation}
\alpha <\alpha _{n}<\alpha +\delta (\alpha ).
\label{eqno7}
\end{equation}
Using the condition $(ii)$ and the inequality (\ref{eqno6}), we get
\begin{equation}
\mathcal{S}\left( Tx_{n-1},Tx_{n-1},Tx_{n}\right) =\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) =\alpha _{n}<\alpha , \label{eqno8}
\end{equation}
for $n\geq k$. Then the inequality (\ref{eqno8}) contradicts the inequality (\ref{eqno7}). Therefore, we must have $\alpha =0$.

Now we prove that $\left\{ x_{n}\right\} $ is a Cauchy sequence. Let us fix an $\varepsilon >0$. Without loss of generality, we suppose that $\delta \left( \varepsilon \right) <\varepsilon $. There exists $k\in \mathbb{N}$ such that
\begin{equation*}
\mathcal{S}\left( x_{n},x_{n},x_{n+1}\right) =\alpha _{n}<\frac{\delta }{4}\text{,}
\end{equation*}
for $n\geq k$ since $\alpha _{n}\rightarrow 0$. Using mathematical induction and Jachymski's technique (see \cite{Jach-1, Jach-2} for more details) we show that
\begin{equation}
\mathcal{S}\left( x_{k},x_{k},x_{k+n}\right) <\varepsilon +\frac{\delta }{2}\text{,} \label{eqno9}
\end{equation}
for any $n\in \mathbb{N}$. At first, the inequality (\ref{eqno9}) holds for $n=1$ since
\begin{equation*}
\mathcal{S}\left( x_{k},x_{k},x_{k+1}\right) =\alpha _{k}<\frac{\delta }{4}<\varepsilon +\frac{\delta }{2}.
\end{equation*}
Assume that the inequality (\ref{eqno9}) holds for some $n$. We show that the inequality (\ref{eqno9}) holds for $n+1$.
By the condition $(S2)$, we get
\begin{equation*}
\mathcal{S}\left( x_{k},x_{k},x_{k+n+1}\right) \leq 2\mathcal{S}\left( x_{k},x_{k},x_{k+1}\right) +\mathcal{S}\left( x_{k+n+1},x_{k+n+1},x_{k+1}\right) \text{.}
\end{equation*}
From Lemma \ref{lem5}, we have
\begin{equation*}
\mathcal{S}\left( x_{k+n+1},x_{k+n+1},x_{k+1}\right) =\mathcal{S}\left( x_{k+1},x_{k+1},x_{k+n+1}\right)
\end{equation*}
and so it suffices to prove that
\begin{equation*}
\mathcal{S}\left( x_{k+1},x_{k+1},x_{k+n+1}\right) \leq \varepsilon \text{.}
\end{equation*}
To do this, we show that
\begin{equation*}
M_{z}^{S}(x_{k},x_{k+n})\leq \varepsilon +\delta \text{.}
\end{equation*}
Indeed, we find
\begin{equation*}
a\mathcal{S}(x_{k},x_{k},x_{k+n})<\mathcal{S}(x_{k},x_{k},x_{k+n})<\varepsilon +\frac{\delta }{2}\text{,}
\end{equation*}
\begin{eqnarray*}
\frac{b}{2}\left[ \mathcal{S}(x_{k},x_{k},x_{k+1})+\mathcal{S}(x_{k+n},x_{k+n},x_{k+n+1})\right] &<&\mathcal{S}(x_{k},x_{k},x_{k+1})+\mathcal{S}(x_{k+n},x_{k+n},x_{k+n+1}) \\
&<&\frac{\delta }{4}+\frac{\delta }{4}=\frac{\delta }{2}
\end{eqnarray*}
and
\begin{eqnarray}
&&\frac{c}{2}\left[ \mathcal{S}(x_{k},x_{k},x_{k+n+1})+\mathcal{S}(x_{k+n},x_{k+n},x_{k+1})\right] \notag \\
&\leq &\frac{c}{2}\left[ 4\mathcal{S}(x_{k},x_{k},x_{k+1})+\mathcal{S}(x_{k+1},x_{k+1},x_{k+1+n})+\mathcal{S}(x_{k},x_{k},x_{k+n})\right] \notag \\
&=&c\left[ 2\mathcal{S}(x_{k},x_{k},x_{k+1})+\frac{\mathcal{S}(x_{k+1},x_{k+1},x_{k+1+n})}{2}+\frac{\mathcal{S}(x_{k},x_{k},x_{k+n})}{2}\right] \notag \\
&<&c\left[ \frac{\delta }{2}+\varepsilon +\frac{\delta }{2}\right] <\varepsilon +\delta \text{.} \label{eqno10}
\end{eqnarray}
Using the definition of $M_{z}^{S}(x_{k},x_{k+n})$, the condition $(ii)$ and the inequalities (\ref{eqno10}), we obtain
\begin{equation*}
M_{z}^{S}(x_{k},x_{k+n})\leq \varepsilon +\delta
\end{equation*}
and so
\begin{equation*}
\mathcal{S}\left( x_{k+1},x_{k+1},x_{k+n+1}\right) \leq \varepsilon \text{.}
\end{equation*}
Hence we get
\begin{equation*}
\mathcal{S}(x_{k},x_{k},x_{k+n+1})<\varepsilon +\frac{\delta }{2}\text{,}
\end{equation*}
whence $\{x_{n}\}$ is Cauchy. From the completeness hypothesis, there exists a point $u\in X$ such that $x_{n}\rightarrow u$ as $n\rightarrow \infty $. Also we get
\begin{equation*}
\underset{n\rightarrow \infty }{\lim }Tx_{n}=\underset{n\rightarrow \infty }{\lim }x_{n+1}=u\text{.}
\end{equation*}
Now we prove that $u$ is a fixed point of $T$. On the contrary, assume that $u$ is not a fixed point of $T$. Then using the condition $(i)$ and the property of $\phi $, we obtain
\begin{eqnarray*}
\mathcal{S}(Tu,Tu,Tx_{n}) &\leq &\phi (M_{z}^{S}(u,x_{n}))<M_{z}^{S}(u,x_{n}) \\
&=&\max \left\{
\begin{array}{c}
a\mathcal{S}(u,u,x_{n}),\frac{b}{2}\left[ \mathcal{S}(u,u,Tu)+\mathcal{S}(x_{n},x_{n},Tx_{n})\right] , \\
\frac{c}{2}\left[ \mathcal{S}(u,u,Tx_{n})+\mathcal{S}(x_{n},x_{n},Tu)\right]
\end{array}
\right\} .
\end{eqnarray*}
Taking the limit as $n\rightarrow \infty $ and using Lemma \ref{lem5}, we find
\begin{equation*}
\mathcal{S}(Tu,Tu,u)<\max \left\{ \frac{b}{2}\mathcal{S}(u,u,Tu),\frac{c}{2}\mathcal{S}(u,u,Tu)\right\} <\mathcal{S}(Tu,Tu,u)\text{,}
\end{equation*}
a contradiction. Hence $Tu=u$. We show that $u$ is the unique fixed point of $T$. Let $v$ be another fixed point of $T$ such that $u\neq v$. From the condition $(i)$ and Lemma \ref{lem5}, we have
\begin{eqnarray*}
\mathcal{S}(Tu,Tu,Tv) &=&\mathcal{S}(u,u,v)\leq \phi (M_{z}^{S}(u,v))<M_{z}^{S}(u,v) \\
&=&\max \left\{
\begin{array}{c}
a\mathcal{S}(u,u,v),\frac{b}{2}\left[ \mathcal{S}(u,u,Tu)+\mathcal{S}(v,v,Tv)\right] , \\
\frac{c}{2}\left[ \mathcal{S}(u,u,Tv)+\mathcal{S}(v,v,Tu)\right]
\end{array}
\right\} \\
&=&\max \left\{ a\mathcal{S}(u,u,v),c\mathcal{S}(u,u,v)\right\} <\mathcal{S}(u,u,v)\text{,}
\end{eqnarray*}
a contradiction. So it must be $u=v$. Therefore, $T$ has a unique fixed point $u\in X$.

Finally, we prove that $T$ is discontinuous at $u$ if and only if $\underset{x\rightarrow u}{\lim }M_{z}^{S}(x,u)\neq 0$.
To do this, it is enough to show that $T$ is continuous at $u$ if and only if $\underset{x\rightarrow u}{\lim }M_{z}^{S}(x,u)=0$. Suppose that $T$ is continuous at the fixed point $u$ and $x_{n}\rightarrow u$. Hence we get $Tx_{n}\rightarrow Tu=u$ and, using the condition $(S2)$, we find
\begin{equation*}
\mathcal{S}(x_{n},x_{n},Tx_{n})\leq 2\mathcal{S}(x_{n},x_{n},u)+\mathcal{S}(Tx_{n},Tx_{n},u)\rightarrow 0\text{,}
\end{equation*}
as $x_{n}\rightarrow u$. So we get $\underset{x_{n}\rightarrow u}{\lim }M_{z}^{S}(x_{n},u)=0$. On the other hand, assume that $\underset{x_{n}\rightarrow u}{\lim }M_{z}^{S}(x_{n},u)=0$. Then we obtain $\mathcal{S}(x_{n},x_{n},Tx_{n})\rightarrow 0$ as $x_{n}\rightarrow u$, which implies $Tx_{n}\rightarrow Tu=u$. Consequently, $T$ is continuous at $u$. \end{proof}

We give an example.

\begin{example} \label{exm1} Let $X=\left\{ 0,2,4,8\right\} $ and $(X,\mathcal{S})$ be the $S$-metric space defined as in Example \ref{exm:S-metric}. Let us define the self-mapping $T:X\rightarrow X$ as
\begin{equation*}
Tx=\left\{
\begin{array}{ccc}
4 & ; & x\leq 4 \\
2 & ; & x>4
\end{array}
\right. \text{,}
\end{equation*}
for all $x\in \left\{ 0,2,4,8\right\} $. Then $T$ satisfies the conditions of Theorem \ref{thm1} with $a=\frac{3}{4},b=c=0$ and has a unique fixed point $x=4$. Indeed, we get the following table$:$
\begin{equation*}
\begin{array}{ccc}
\mathcal{S}\left( Tx,Tx,Ty\right) =0 & \text{and} & 3\leq M_{z}^{S}\left( x,y\right) \leq 6\text{ when }x,y\leq 4\text{, }x\neq y \\
\mathcal{S}\left( Tx,Tx,Ty\right) =4 & \text{and} & 6\leq M_{z}^{S}\left( x,y\right) \leq 12\text{ when }x\leq 4,y>4 \\
\mathcal{S}\left( Tx,Tx,Ty\right) =4 & \text{and} & 6\leq M_{z}^{S}\left( x,y\right) \leq 12\text{ when }x>4,y\leq 4
\end{array}
.
\end{equation*}
Hence $T$ satisfies the conditions of Theorem \ref{thm1} with
\begin{equation*}
\phi (t)=\left\{
\begin{array}{ccc}
5 & ; & t\geq 6 \\
\frac{t}{2} & ; & t<6
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\delta \left( \varepsilon \right) =\left\{
\begin{array}{ccc}
6 & ; & \varepsilon \geq 3 \\
6-\varepsilon & ; & \varepsilon <3
\end{array}
\right. .
\end{equation*}
\end{example}

Now we give the following results as consequences of Theorem \ref{thm1}.

\begin{corollary} \label{cor2} Let $(X,\mathcal{S})$ be a complete $S$-metric space and $T$ a self-mapping on $X$ satisfying the conditions$:$

$i)$ $\mathcal{S}\left( Tx,Tx,Ty\right) <M_{z}^{S}\left( x,y\right) $ for any $x,y\in X$ with $M_{z}^{S}\left( x,y\right) >0$,

$ii)$ There exists a $\delta =\delta \left( \varepsilon \right) >0$ such that $\varepsilon <M_{z}^{S}\left( x,y\right) <\varepsilon +\delta $ implies $\mathcal{S}\left( Tx,Tx,Ty\right) \leq \varepsilon $ for a given $\varepsilon >0$.

Then $T$ has a unique fixed point $u\in X$. Also, $T$ is discontinuous at $u$ if and only if $\underset{x\rightarrow u}{\lim }M_{z}^{S}\left( x,u\right) \neq 0$. \end{corollary}

\begin{corollary} \label{cor3} Let $(X,\mathcal{S})$ be a complete $S$-metric space and $T$ a self-mapping on $X$ satisfying the conditions$:$

$i)$ There exists a function $\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ such that $\phi (\mathcal{S}(x,x,y))<\mathcal{S}(x,x,y)$ and $\mathcal{S}(Tx,Tx,Ty)\leq \phi (\mathcal{S}(x,x,y))$,

$ii)$ There exists a $\delta =\delta \left( \varepsilon \right) >0$ such that $\varepsilon <t<\varepsilon +\delta $ implies $\phi (t)\leq \varepsilon $ for any $t>0$ and a given $\varepsilon >0$.

Then $T$ has a unique fixed point $u\in X$. \end{corollary}

The following theorem shows that a power contraction of the type $M_{z}^{S}\left( x,y\right) $ also allows the possibility of discontinuity at the fixed point.
\begin{theorem} \label{thm3} Let $(X,\mathcal{S})$ be a complete $S$-metric space and $T$ a self-mapping on $X$ satisfying the conditions$:$

$i)$ There exists a function $\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ such that $\phi (t)<t$ for each $t>0$ and
\begin{equation*}
\mathcal{S}\left( T^{m}x,T^{m}x,T^{m}y\right) \leq \phi \left( M_{z}^{S^{\ast }}\left( x,y\right) \right) \text{,}
\end{equation*}
where
\begin{equation*}
M_{z}^{S^{\ast }}\left( x,y\right) =\max \left\{
\begin{array}{c}
a\mathcal{S}\left( x,x,y\right) ,\frac{b}{2}\left[ \mathcal{S}\left( x,x,T^{m}x\right) +\mathcal{S}\left( y,y,T^{m}y\right) \right] , \\
\frac{c}{2}\left[ \mathcal{S}\left( x,x,T^{m}y\right) +\mathcal{S}\left( y,y,T^{m}x\right) \right]
\end{array}
\right\}
\end{equation*}
for all $x,y\in X$,

$ii)$ There exists a $\delta =\delta \left( \varepsilon \right) >0$ such that $\varepsilon <M_{z}^{S^{\ast }}\left( x,y\right) <\varepsilon +\delta $ implies $\mathcal{S}\left( T^{m}x,T^{m}x,T^{m}y\right) \leq \varepsilon $ for a given $\varepsilon >0$.

Then $T$ has a unique fixed point $u\in X$. Also, $T$ is discontinuous at $u$ if and only if $\underset{x\rightarrow u}{\lim }M_{z}^{S^{\ast }}\left( x,u\right) \neq 0$. \end{theorem}

\begin{proof} By Theorem \ref{thm1}, the function $T^{m}$ has a unique fixed point $u$. Hence we have
\begin{equation*}
Tu=TT^{m}u=T^{m}Tu
\end{equation*}
and so $Tu$ is another fixed point of $T^{m}$. From the uniqueness of the fixed point, we obtain $Tu=u$, that is, $T$ has a unique fixed point $u$.
\end{proof}

We note that if the $S$-metric $\mathcal{S}$ generates a metric $d$, then we can consider Theorem \ref{thm1} on the corresponding metric space as follows:

\begin{theorem} \label{thm4} Let $(X,d)$ be a complete metric space and $T$ a self-mapping on $X$ satisfying the conditions$:$

$i)$ There exists a function $\phi :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$ such that $\phi (t)<t$ for each $t>0$ and
\begin{equation*}
d(Tx,Ty)\leq \phi \left( M_{z}\left( x,y\right) \right) \text{,}
\end{equation*}
for all $x,y\in X$,

$ii)$ There exists a $\delta =\delta \left( \varepsilon \right) >0$ such that $\varepsilon <M_{z}\left( x,y\right) <\varepsilon +\delta $ implies $d(Tx,Ty)\leq \varepsilon $ for a given $\varepsilon >0$.

Then $T$ has a unique fixed point $u\in X$. Also, $T$ is discontinuous at $u$ if and only if $\underset{x\rightarrow u}{\lim }M_{z}\left( x,u\right) \neq 0 $. \end{theorem}

\begin{proof} The proof follows by arguments similar to those used in the proof of Theorem \ref{thm1}. \end{proof}

\section{\textbf{An Application to the Fixed-Circle Problem}} \label{sec:2}

In this section, we investigate new solutions to the fixed-circle problem raised by \"{O}zg\"{u}r and Ta\c{s} in \cite{Ozgur-Tas-malaysian}, related to the geometric properties of the set $Fix(T)$ for a self-mapping $T$ on an $S$-metric space $(X,\mathcal{S})$. Some fixed-circle and fixed-disc results, as direct solutions of this problem, have been obtained using various methods on metric spaces and on some generalized metric spaces (see \cite{Mlaiki, Mlaiki-Axioms, Ozgur-Tas-circle-thesis, Ozgur-Tas-Celik, ozgur-aip, Ozgur-simulation, Pant-Ozgur-Tas, Belgium, Tas math, Tas, Tas-fbed}).
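Before turning to circles, we note in passing that, since the space of Example \ref{exm1} is finite, the hypotheses of Theorem \ref{thm1} there can be machine-checked exhaustively. The Python sketch below (our illustration, not part of the original paper) verifies the contractive condition $(i)$ with $a=\frac{3}{4}$, $b=c=0$ and confirms that $x=4$ is the unique fixed point.

```python
def S(x, y, z):
    # S-metric of Example (exm:S-metric): S(x,y,z) = |x - z| + |x + z - 2y|
    return abs(x - z) + abs(x + z - 2 * y)

def T(x):                      # the self-mapping of Example (exm1)
    return 4 if x <= 4 else 2

def phi(t):                    # the comparison function phi of the example
    return 5 if t >= 6 else t / 2

X = [0, 2, 4, 8]
a = 3 / 4                      # b = c = 0, so M_z^S(x,y) = a * S(x,x,y)

for x in X:
    for y in X:
        M = a * S(x, x, y)
        lhs = S(T(x), T(x), T(y))
        if M > 0:
            # condition (i): S(Tx,Tx,Ty) <= phi(M_z^S(x,y))
            assert lhs <= phi(M), (x, y)
        else:
            assert lhs == 0    # x = y forces Tx = Ty

fixed = [x for x in X if T(x) == x]
assert fixed == [4]            # unique fixed point, as Theorem thm1 predicts
print("Example verified: unique fixed point", fixed[0])
```

The worst case is $x\leq 4$, $y=8$, where $\mathcal{S}(Tx,Tx,Ty)=\mathcal{S}(4,4,2)=4$ while $\phi(M_{z}^{S}(x,y))=5$.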
Now we recall the notions of a circle and a disc on an $S$-metric space, defined as follows$:$
\begin{equation*}
C_{x_{0},r}^{S}=\left\{ x\in X:\mathcal{S}(x,x,x_{0})=r\right\}
\end{equation*}
and
\begin{equation*}
D_{x_{0},r}^{S}=\left\{ x\in X:\mathcal{S}(x,x,x_{0})\leq r\right\} ,
\end{equation*}
where $r\in \lbrack 0,\infty )$ \cite{Ozgur-Tas-circle-thesis}, \cite{Sedgi-Shobe-Aliouche}. If $Tx=x$ for all $x\in C_{x_{0},r}^{S}$ (resp. $x\in D_{x_{0},r}^{S}$), then the circle $C_{x_{0},r}^{S}$ (resp. the disc $D_{x_{0},r}^{S}$) is called the fixed circle (resp. fixed disc) of $T$.

We begin with the following definition.

\begin{definition} \label{def2} A self-mapping $T$ is called an $\mathcal{S}$-Zamfirescu type $x_{0}$-mapping if there exist $x_{0}\in X$ and $a,b\in \left[ 0,1\right) $ such that
\begin{equation*}
\mathcal{S}(Tx,Tx,x)>0\Longrightarrow \mathcal{S}(Tx,Tx,x)\leq \max \left\{
\begin{array}{c}
a\mathcal{S}(x,x,x_{0}), \\
\frac{b}{2}\left[ \mathcal{S}(Tx_{0},Tx_{0},x)+\mathcal{S}(Tx,Tx,x_{0})\right]
\end{array}
\right\} \text{,}
\end{equation*}
for all $x\in X$. \end{definition}

We define the following number$:$
\begin{equation}
\rho :=\inf \left\{ \mathcal{S}(Tx,Tx,x):Tx\neq x,x\in X\right\} . \label{rho}
\end{equation}
Now we prove that the set $Fix(T)$ contains a circle (resp. a disc) by means of the number $\rho $.

\begin{theorem} \label{thm2} If $T$ is an $\mathcal{S}$-Zamfirescu type $x_{0}$-mapping with $x_{0}\in X$ and the condition
\begin{equation*}
\mathcal{S}(Tx,Tx,x_{0})\leq \rho
\end{equation*}
holds for each $x\in C_{x_{0},\rho }^{S}$, then $C_{x_{0},\rho }^{S}$ is a fixed circle of $T$, that is, $C_{x_{0},\rho }^{S}\subset Fix(T)$. \end{theorem}

\begin{proof} At first, we show that $x_{0}$ is a fixed point of $T$. On the contrary, let $Tx_{0}\neq x_{0}$. Then we have $\mathcal{S}(Tx_{0},Tx_{0},x_{0})>0$.
By the definition of an $\mathcal{S}$-Zamfirescu type $x_{0}$-mapping and the condition $(S1)$, we obtain
\begin{eqnarray*}
\mathcal{S}(Tx_{0},Tx_{0},x_{0}) &\leq &\max \left\{ a\mathcal{S}(x_{0},x_{0},x_{0}),\frac{b}{2}\left[ \mathcal{S}(Tx_{0},Tx_{0},x_{0})+\mathcal{S}(Tx_{0},Tx_{0},x_{0})\right] \right\} \\
&=&b\mathcal{S}(Tx_{0},Tx_{0},x_{0}),
\end{eqnarray*}
a contradiction since $b\in \left[ 0,1\right) $. This shows that $Tx_{0}=x_{0}$. We have two cases:

\textbf{Case 1:} If $\rho =0$, then we get $C_{x_{0},\rho }^{S}=\{x_{0}\}$ and clearly this is a fixed circle of $T$.

\textbf{Case 2:} Let $\rho >0$ and $x\in C_{x_{0},\rho }^{S}$ be any point such that $Tx\neq x$. Then we have
\begin{equation*}
\mathcal{S}(Tx,Tx,x)>0
\end{equation*}
and using the hypothesis we obtain
\begin{eqnarray*}
\mathcal{S}(Tx,Tx,x) &\leq &\max \left\{ a\mathcal{S}(x,x,x_{0}),\frac{b}{2}\left[ \mathcal{S}(Tx_{0},Tx_{0},x)+\mathcal{S}(Tx,Tx,x_{0})\right] \right\} \\
&\leq &\max \left\{ a\rho ,b\rho \right\} <\rho ,
\end{eqnarray*}
which contradicts the definition of $\rho $. Hence it must be $Tx=x$, whence $C_{x_{0},\rho }^{S}$ is a fixed circle of $T$. \end{proof}

\begin{corollary} \label{cor1} If $T$ is an $\mathcal{S}$-Zamfirescu type $x_{0}$-mapping with $x_{0}\in X$ and the condition
\begin{equation*}
\mathcal{S}(Tx,Tx,x_{0})\leq \rho
\end{equation*}
holds for each $x\in D_{x_{0},\rho }^{S}$, then $D_{x_{0},\rho }^{S}$ is a fixed disc of $T$, that is, $D_{x_{0},\rho }^{S}\subset Fix(T)$. \end{corollary}

Now we give an illustrative example to show the effectiveness of our results.

\begin{example} \label{exm4} Let $X=\mathbb{R}$ and $(X,\mathcal{S})$ be the $S$-metric space defined as in Example \ref{exm:S-metric}. Let us define the self-mapping $T:X\rightarrow X$ as
\begin{equation*}
Tx=\left\{
\begin{array}{ccc}
x & ; & x\in \left[ -3,3\right] \\
x+1 & ; & x\notin \left[ -3,3\right]
\end{array}
\right.
\text{,}
\end{equation*}
for all $x\in \mathbb{R}$. Then $T$ is an $\mathcal{S}$-Zamfirescu type $x_{0}$-mapping with $x_{0}=0$, $a=\frac{1}{2}$ and $b=0$. Indeed, we get
\begin{equation*}
\mathcal{S}(Tx,Tx,x)=2\left\vert Tx-x\right\vert =2>0\text{,}
\end{equation*}
for all $x\in \left( -\infty ,-3\right) \cup \left( 3,\infty \right) $. So we obtain
\begin{eqnarray*}
\mathcal{S}(Tx,Tx,x) &=&2\leq \max \left\{ a\mathcal{S}\left( x,x,0\right) ,\frac{b}{2}\left[ \mathcal{S}(0,0,x)+\mathcal{S}(x+1,x+1,0)\right] \right\} \\
&=&\frac{1}{2}\cdot 2\left\vert x\right\vert =\left\vert x\right\vert .
\end{eqnarray*}
Also we have
\begin{equation*}
\rho =\inf \left\{ \mathcal{S}(Tx,Tx,x):Tx\neq x,x\in X\right\} =2
\end{equation*}
and
\begin{equation*}
\mathcal{S}(Tx,Tx,0)=\mathcal{S}(x,x,0)\leq 2\text{,}
\end{equation*}
for all $x\in C_{0,2}^{S}=\left\{ x:\mathcal{S}(x,x,0)=2\right\} =\left\{ x:2\left\vert x\right\vert =2\right\} =\left\{ x:\left\vert x\right\vert =1\right\} $. Consequently, $T$ fixes the circle $C_{0,2}^{S}$ and the disc $D_{0,2}^{S}$. \end{example}

\textbf{Acknowledgement.} This work is financially supported by Balikesir University under the Grants BAP 2018/019 and BAP 2018/021.
\section{The elliptic function $\dd$} \medbreak Fix $\kappa \in (0, 1)$ as modulus, with corresponding (acute) modular angle $\alpha \in (0, \tfrac{1}{2} \pi)$ defined by $\sin \alpha = \kappa$ and with complementary modulus $\lambda \in (0, 1)$ defined by $\lambda = (1 - \kappa^2)^{1/2}$. The rule $$f(T) = \int_0^T F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \kappa^2 \sin^2 t) \, {\rm d}t$$ defines a strictly increasing bijection $f : \R \to \R$. We write $\phi : \R \to \R$ for its inverse: thus, if $u \in \R$ then $$u = \int_0^{\phi (u)} F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \kappa^2 \sin^2 t) \, {\rm d}t.$$ A subsidiary angular function with range $[- \alpha, \alpha]$ is then defined as the composite $$\psi = \arcsin (\kappa \sin \phi).$$ \medbreak Now, the function $$d_{\kappa} = \cos \psi : \R \to \R$$ has range $[\cos \alpha, 1] = [\lambda, 1]$ and satisfies the following initial value problem. \medbreak \begin{theorem} \label{d} The function $d_{\kappa}$ has initial value $d_{\kappa} (0) = 1$ and satisfies the differential equation $$(d_{\kappa}')^2 = 2 (1 - d_{\kappa}) (d_{\kappa}^2 - \lambda^2).$$ \end{theorem} \begin{proof} The initial value is clear: $\phi(0) = 0$ so that $\psi(0) = 0$ and therefore $d(0) = 1$; here and below, we drop the subscript $\kappa$ when convenient. 
From $d = \cos \psi$ follows $d ' = - (\sin \psi) \psi'$; from $\sin \psi = \kappa \sin \phi$ follows $(\cos \psi) \psi ' = \kappa (\cos \phi) \phi '$; and from $f \circ \phi = {\rm id}$ follows $$\phi ' = \frac{1}{f ' \circ \phi} = \frac{1}{F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \kappa^2 \sin^2 \phi)} = \frac{1}{F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \sin^2 \psi)} = \frac{\cos \psi}{\cos \frac{1}{2} \psi}$$ on account of the standard hypergeometric identity $$F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \sin^2 \psi) = \frac{\cos \frac{1}{2} \psi}{\cos \psi}$$ for which we refer to item (11) on page 101 in Volume 1 of the compendious Bateman Manuscript Project [1953]. Thus $$d ' = - \sin \psi \, \Big(\kappa \frac{\cos \phi}{\cos \psi}\Big) \, \frac{\cos \psi}{\cos \frac{1}{2} \psi} = - 2 \sin \tfrac{1}{2} \psi \, (\kappa \, \cos \phi)$$ and so $$(d ')^2 = 4 \sin^2 \tfrac{1}{2} \psi \, (\kappa^2 - \sin^2 \psi) = 2 (1 - \cos \psi) \, (\kappa^2 - 1 + \cos^2 \psi).$$ \end{proof} \medbreak The solution to this initial value problem is readily identifiable in Weierstrassian terms. \medbreak \begin{theorem} \label{p} The function $d_{\kappa} : \R \to \R$ satisfies $$(1 - d_{\kappa}) (\tfrac{1}{3} + \pk) = \tfrac{1}{2} \kappa^2$$ where $\pk = \wp( \bullet ; g_2, g_3)$ is the Weierstrass function with invariants $$g_2 = \lambda^2 + \tfrac{1}{3} \; \; {\rm and} \; \; g_3 = \tfrac{1}{3} \lambda^2 - \tfrac{1}{27}.$$ \end{theorem} \begin{proof} Either verify that the function $$p = - \tfrac{1}{3} + \frac{\frac{1}{2} \kappa^2}{1 - d}$$ has a pole at $0$ and satisfies the differential equation $$(p ')^2 = 4 p^3 - (\lambda^2 + \tfrac{1}{3}) p - (\tfrac{1}{3} \lambda^2 - \tfrac{1}{27})$$ or apply the argument that is to be found on page 453 in the classic treatise [1927] of Whittaker and Watson.
\end{proof} \medbreak Thus, $d_{\kappa}$ is the restriction to $\R$ of an elliptic function: this is the elliptic function $\dd$ of Shen, given by $$\dd = 1 - \frac{\frac{1}{2} \kappa^2}{\tfrac{1}{3} + \pk} \, .$$ \medbreak The elliptic function $\dd$ and the Weierstrass function $\pk$ are evidently coperiodic. We shall write $(2 \ok, 2 \ok ')$ for their shared fundamental pair of periods such that $\ok > 0$ and $- \ii \, \ok ' > 0$. In the next two sections, we shall develop hypergeometric expressions for these periods. To close the present section, it is convenient to record the midpoint values of the Weierstrass function $\pk$: in decreasing order, these zeros of the cubic $$4 e^3 - (\lambda^2 + \tfrac{1}{3}) e - (\tfrac{1}{3} \lambda^2 - \tfrac{1}{27})$$ are readily checked to be $$e_1 = \pk(\ok) = \tfrac{1}{6} + \tfrac{1}{2} \lambda$$ $$e_2 = \pk(\ok + \ok ') = \tfrac{1}{6} - \tfrac{1}{2} \lambda$$ $$e_3 = \pk(\ok ') = - \tfrac{1}{3}.$$ \medbreak \section{Fundamental periods in terms of $F_4$} \medbreak The very definition of $\dd$ as an extension of $d_{\kappa}$ provides immediate access to the real half-period $\ok$ of $\dd$ and $\pk$. \medbreak \begin{theorem} \label{okF4} $\ok = \tfrac{1}{2} \pi \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; \kappa^2).$ \end{theorem} \begin{proof} With $$I = \int_0^{\frac{1}{2} \pi} F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \kappa^2 \sin^2 t) \, {\rm d} t$$ it may be verified by integration that $$\phi (u + 2 I ) = \phi (u) + \pi $$ so that $$\psi(u + 2 I) = - \psi(u)$$ and $$d (u + 2 I) = \cos \psi(u + 2 I) = \cos \psi (u) = d(u).$$ This shows that $\dd$ has $2 I$ as a period, which is easily seen to be the least positive period.
Finally, expansion of the hypergeometric integrand and termwise integration show that $$\int_0^{\frac{1}{2} \pi} F(\tfrac{1}{4}, \tfrac{3}{4}; \tfrac{1}{2} ; \kappa^2 \sin^2 t) \, {\rm d} t= \tfrac{1}{2} \pi \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; \kappa^2).$$ \end{proof} \medbreak Access to the imaginary half-period $\ok '$ of $\dd$ and $\pk$ is facilitated by investigating the relationship between the primary Weierstrass function $$\pk = \wp (\bullet ; \ok, \ok ') = \wp (\bullet ; g_2, g_3)$$ and the auxiliary Weierstrass function $$q_{\kappa} = \wp (\bullet ; \ok, \tfrac{1}{2} \ok ') = \wp (\bullet ; h_2, h_3)$$ that results when its imaginary period is halved. Here, the invariants $h_2$ and $h_3$ of $q_{\kappa}$ are related to the invariants $g_2$ and $g_3$ of $\pk$ by $$h_2 = - 4 \, g_2 + 60 \, \pk (\ok ')^2$$ $$h_3 = 8 \, g_3 + 56 \, \pk (\ok ')^3.$$ This is a quite general consequence of the halving of a Weierstrassian period, for the proof of which we refer to Section 9.8 of [1989]. 
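Before turning to the imaginary half-period, the evaluations so far lend themselves to a quick numerical spot-check. The sketch below (an illustration only, assuming SciPy is available) compares the integral $I$ against the closed form of Theorem \ref{okF4}, and also tests the Bateman identity used in the proof of Theorem \ref{d}.

```python
# Spot-check of Theorem okF4 and of the hypergeometric identity
# F(1/4, 3/4; 1/2; sin^2 psi) = cos(psi/2) / cos(psi).
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

for kappa in (0.1, 0.5, 0.9):
    # I = integral_0^{pi/2} F(1/4, 3/4; 1/2; kappa^2 sin^2 t) dt
    I, _ = quad(lambda t: hyp2f1(0.25, 0.75, 0.5, kappa**2 * math.sin(t)**2),
                0.0, math.pi / 2)
    # Theorem okF4: the real half-period is (pi/2) F(1/4, 3/4; 1; kappa^2)
    omega = (math.pi / 2) * hyp2f1(0.25, 0.75, 1.0, kappa**2)
    assert abs(I - omega) < 1e-8

# The Bateman identity, item (11) on page 101 of [1953]:
psi = 0.9
assert abs(hyp2f1(0.25, 0.75, 0.5, math.sin(psi)**2)
           - math.cos(psi / 2) / math.cos(psi)) < 1e-12
```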
\medbreak \begin{theorem} \label{ok'F4} $\ok ' = \ii \, \sqrt2 \, \tfrac{1}{2} \pi \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; 1 - \kappa^2).$ \end{theorem} \begin{proof} When the invariants of $\pk$ as displayed in Theorem \ref{p} and the subsequent evaluation $\pk(\ok ') = - 1/3$ are taken into account, we find that $q_{\kappa}$ has invariants $$h_2 = \tfrac{4}{3} + 4 \kappa^2 = (\ii \, \sqrt2)^4 (\kappa^2 + \tfrac{1}{3})$$ $$h_3 = \tfrac{8}{27} - \tfrac{8}{3} \kappa^2 = (\ii \, \sqrt2)^6 (\tfrac{1}{3} \kappa^2 - \tfrac{1}{27}).$$ By a further consultation of Theorem \ref{p} (but for the complementary modulus) in conjunction with the homogeneity relation for $\wp$ functions, we deduce that $q_{\kappa}$ is related to the Weierstrass function $p_{\lambda}$ of complementary modulus according to the rule $$q_{\kappa} (z) = - 2 \, p_{\lambda} (\ii \, \sqrt2 \, z).$$ Now on the one hand $q_{\kappa}$ has fundamental half-periods $\ok$ and $\tfrac{1}{2} \ok '$, while on the other hand $p_{\lambda}$ has fundamental half-periods $\ol$ and $\ol '$. In light of the above rule by which $q_{\kappa}$ and $\pl$ are related, we see that $$\ok ' = \ii \, \sqrt2 \, \ol.$$ It only remains to invoke Theorem \ref{okF4} (for the complementary modulus) and recall that $\lambda^2 = 1 - \kappa^2$. \end{proof} \medbreak \section{Fundamental periods in terms of $F_2$} \medbreak In order to obtain equivalent expressions for $\ok$ and $\ok '$ in terms of the hypergeometric function $F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \bullet)$ we shall reformulate the Weierstrass function $\pk$ in terms of classical Jacobian elliptic functions. 
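The arithmetic behind the invariants of $q_{\kappa}$ in the proof of Theorem \ref{ok'F4} can be verified mechanically; the following sketch (illustrative only) checks that the halving formulas, with $\pk(\ok ') = -\tfrac{1}{3}$, reproduce the stated values of $h_2$ and $h_3$ and their $(\ii\sqrt2)$-homogeneity factorisations.

```python
# Check the invariants of q_kappa: h2 = -4 g2 + 60 pk(w')^2 and
# h3 = 8 g3 + 56 pk(w')^3, with pk(w') = -1/3 and g2, g3 from Theorem p.

for kappa in (0.3, 0.6, 0.9):
    lam2 = 1 - kappa**2                   # lambda^2 (complementary modulus squared)
    g2, g3 = lam2 + 1/3, lam2/3 - 1/27    # invariants of p_kappa
    e3 = -1/3                             # pk(w'), the least midpoint value
    h2 = -4*g2 + 60*e3**2
    h3 = 8*g3 + 56*e3**3
    # the values displayed in the proof of Theorem ok'F4:
    assert abs(h2 - (4/3 + 4*kappa**2)) < 1e-12
    assert abs(h3 - (8/27 - (8/3)*kappa**2)) < 1e-12
    # the (i sqrt2)-homogeneity factors: (i sqrt2)^4 = 4, (i sqrt2)^6 = -8
    assert abs(h2 - 4*(kappa**2 + 1/3)) < 1e-12
    assert abs(h3 + 8*(kappa**2/3 - 1/27)) < 1e-12
```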
\medbreak Recall from page 505 of [1927] that if the Weierstrass function $p$ has real midpoint values $e_1 > e_2 > e_3$ then $$p(z) = e_3 + \frac{e_1 - e_3}{\sn^2 [z (e_1 - e_3)^{1/2}]}$$ where $\sn = \sn (\bullet, k)$ is the Jacobian sine function with modulus $k \in (0, 1)$ given by $$k^2 = \frac{e_2 - e_3}{e_1 - e_3}$$ and its square $\sn^2$ has fundamental periods $(2 K, 2 \ii K')$ given by $$K = \tfrac{1}{2}\pi \, F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; k^2) \; \; {\rm and} \; \; K' = \tfrac{1}{2}\pi \, F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; 1 - k^2).$$ Accordingly, $p$ itself has fundamental half-periods $$ \frac{K}{ (e_1 - e_3)^{1/2}} \; \; {\rm and} \; \; \ii \, \frac{K'}{ (e_1 - e_3)^{1/2}}.$$ \medbreak \begin{theorem} \label{okF2} The half-periods $\ok$ and $\ok '$ of $\dd$ and $\pk$ are given by $$\sqrt{\tfrac{1 + \lambda}{2}} \; \ok = \tfrac{1}{2} \pi \, F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \tfrac{1 - \lambda}{1 + \lambda})$$ and $$\sqrt{\tfrac{1 + \lambda}{2}} \; \ok ' = \ii \tfrac{1}{2} \pi \, F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \tfrac{2 \lambda}{1 + \lambda}).$$ \end{theorem} \begin{proof} Apply to $p = \pk$ the foregoing recollections. As noted after Theorem \ref{p}, $\pk$ has midpoint values $$e_1 = \tfrac{1}{6} + \tfrac{1}{2} \lambda, \, \, e_2 = \tfrac{1}{6} - \tfrac{1}{2} \lambda, \, \, e_3 = - \tfrac{1}{3}$$ so that $$k^2 = \frac{e_2 - e_3}{e_1 - e_3} = \frac{1 - \lambda}{1 + \lambda}$$ and $$1 - k^2 = \frac{2 \lambda}{1 + \lambda}$$ while $$(e_1 - e_3)^{1/2} = \sqrt{\tfrac{1 + \lambda}{2}}.$$ \end{proof} \medbreak Of course, we may also use the assembled information to express the elliptic function $\dd$ in terms of the classical Jacobian elliptic functions to modulus $k$. 
Thus, the relation $$\pk (z) = - \tfrac{1}{3} + \frac{\tfrac{1}{2} (1 + \lambda)}{\sn^2 \Big[z \, (\tfrac{1}{2} (1 + \lambda))^{1/2}\Big]}$$ may be recast as $$\dd (z) = 1 - (1 - \lambda) \, \sn^2 \Big[z \, (\tfrac{1}{2} (1 + \lambda))^{1/2} \Big];$$ equivalently, it may be recast either in terms of the Jacobian cosine function ${\rm cn}$ as $$\dd (z) = \lambda + (1 - \lambda) \, {\rm cn}^2 \Big[z \, (\tfrac{1}{2} (1 + \lambda))^{1/2} \Big]$$ or in terms of the Jacobian `delta amplitude' ${\rm dn}$ as $$\dd (z) = - \lambda + (1 + \lambda) \, {\rm dn}^2 \Big[z \, (\tfrac{1}{2} (1 + \lambda))^{1/2} \Big].$$ Incidentally, it may be checked that the Jacobian modulus $k$ equals $\tan \tfrac{1}{2} \alpha$. \medbreak \section{The transfer principle} \medbreak All the pieces are in place: we are now in a position to deduce the hypergeometric identities that opened our paper. \medbreak \begin{theorem} \label{hyp} If $0 < \lambda < 1$ then $$\sqrt{1 + \lambda} \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; 1 - \lambda^2) = \sqrt2 \, F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{1 - \lambda}{1 + \lambda})$$ and $$\sqrt{1 + \lambda} \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; \lambda^2) = F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{2 \lambda}{1 + \lambda})\, .$$ \end{theorem} \begin{proof} Direct comparison of Theorem \ref{okF4} with the first formula of Theorem \ref{okF2} yields $$\sqrt{1 + \lambda} \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; \kappa^2) = \sqrt2 \, F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{1 - \lambda}{1 + \lambda})$$ while direct comparison of Theorem \ref{ok'F4} with the second formula of Theorem \ref{okF2} yields $$\sqrt{1 + \lambda} \, F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; 1 - \kappa^2) = F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{2 \lambda}{1 + \lambda})\, .$$ \end{proof} \medbreak As in [1995] these hypergeometric identities entail a connexion between the base $q_4$ that is appropriate to the signature four elliptic theory and the base $q$ that is appropriate to the classical elliptic theory.
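Both identities of Theorem \ref{hyp}, and the remark that $k = \tan\tfrac{1}{2}\alpha$, admit a direct numerical spot-check; the sketch below (illustrative only, assuming SciPy is available) tests them at a few moduli.

```python
# Spot-check of the two hypergeometric identities of Theorem hyp,
# together with the remark that the Jacobian modulus k = tan(alpha/2).
import math
from scipy.special import hyp2f1

for lam in (0.2, 0.5, 0.8):
    # sqrt(1+l) F(1/4,3/4;1;1-l^2) = sqrt(2) F(1/2,1/2;1;(1-l)/(1+l))
    lhs = math.sqrt(1 + lam) * hyp2f1(0.25, 0.75, 1.0, 1 - lam**2)
    rhs = math.sqrt(2) * hyp2f1(0.5, 0.5, 1.0, (1 - lam) / (1 + lam))
    assert abs(lhs - rhs) < 1e-10

    # sqrt(1+l) F(1/4,3/4;1;l^2) = F(1/2,1/2;1;2l/(1+l))
    lhs = math.sqrt(1 + lam) * hyp2f1(0.25, 0.75, 1.0, lam**2)
    rhs = hyp2f1(0.5, 0.5, 1.0, 2 * lam / (1 + lam))
    assert abs(lhs - rhs) < 1e-10

    # k = tan(alpha/2), where lam = cos(alpha) is the complementary modulus
    alpha = math.acos(lam)
    k = math.sqrt((1 - lam) / (1 + lam))
    assert abs(k - math.tan(alpha / 2)) < 1e-12
```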
To be explicit, Theorem \ref{hyp} implies that $$\frac{F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; 1 - \lambda^2)}{F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; \lambda^2)} = \sqrt2 \, \frac{F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{1 - \lambda}{1 + \lambda})}{F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{2 \lambda}{1 + \lambda})}$$ whence $$q_4 \, (\lambda^2) : = \exp \Big\{ - \pi \sqrt2 \, \frac{F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; 1 - \lambda^2)}{F(\tfrac{1}{4}, \tfrac{3}{4}; 1 ; \lambda^2)} \Big\}$$ and $$q \, \big(\tfrac{2 \lambda}{1 + \lambda}\big) : = \exp \Big\{ - \pi \, \frac{F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; 1 - \frac{2 \lambda}{1 + \lambda})}{F(\tfrac{1}{2}, \tfrac{1}{2}; 1 ; \frac{2 \lambda}{1 + \lambda})}\Big\}$$ satisfy the relation $$q_4 \, (\lambda^2) = q \, \big(\tfrac{2 \lambda}{1 + \lambda}\big)^2 .$$ \medbreak The signature four transfer principle now follows exactly as in [1995]. \medbreak \begin{center} {\small R}{\footnotesize EFERENCES} \end{center} \medbreak [1927] E.T. Whittaker and G.N. Watson, {\it A Course of Modern Analysis}, Fourth Edition, Cambridge University Press. \medbreak [1953] A. Erdelyi (director), {\it Higher Transcendental Functions}, Volume 1, McGraw-Hill. \medbreak [1989] D.F. Lawden, {\it Elliptic Functions and Applications}, Applied Mathematical Sciences {\bf 80}, Springer-Verlag. \medbreak [1995] B.C. Berndt, S. Bhargava, and F.G. Garvan, {\it Ramanujan's theories of elliptic functions to alternative bases}, Transactions of the American Mathematical Society {\bf 347}, 4163-4244. \medbreak [2014] Li-Chien Shen, {\it On a theory of elliptic functions based on the incomplete integral of the hypergeometric function $_2 F_1 (\frac{1}{4}, \frac{3}{4} ; \frac{1}{2} ; z)$}, Ramanujan Journal {\bf 34}, 209-225. \medbreak \end{document}